I recently came across a tweet with a screenshot from a calculus book that defines the derivative with no mention of a limit. Not even Leibniz’s differentials have a home here. And although the definition is quite verbose, I found that it makes the derivative feel natural — *tangible* even.

If you’ve taken a calculus course in the last 120 years or so, then you’ve undoubtedly encountered the concept of a limit. In fact, a quick search for the question “Why are limits important?” returns hundreds, if not thousands, of results stating that limits are an essential part of calculus. How could you possibly understand the subject without them?

It turns out that you can develop a great deal of calculus — at least the entire first-year curriculum — without limits. And doing so, in my opinion, provides a deeper understanding of the subject, its history, and a better appreciation for limits and why they are useful.

While statements professing that you can’t do calculus without limits surely have Leibniz turning in his grave, it’s not surprising that students come away with this mindset. Limits are everywhere in the calculus curriculum.

Limits are abstract, though, and students who struggle to understand them will likely struggle with all of calculus. Some students will be doomed to failure, which is a tragedy considering Newton had little more than an intuition about limits. It took nearly a century after Newton for Cauchy and Weierstrass to formalize limits into the ϵ-δ definition used today.

**Definition.** Let *f(x)* be a function defined on an open interval around *a* (note that *f(a)* need not be defined). Then we say that *the limit of f(x) as x approaches a is L* and write

$$ \lim_{x \to a} f(x) = L $$

if for every number ϵ > 0 there exists some number δ > 0 such that

$$ |f(x) - L| < \epsilon \hspace{6pt} \textrm{whenever} \hspace{6pt} 0 < |x - a| < \delta $$

What makes this definition difficult to understand? In my opinion, the reasons are twofold:

**Quantifier complexity:** Most first-year calculus students have no exposure to propositional logic, much less statements with multiple quantifiers whose order is important. Unpacking this logic — some of which is literally written in Greek — is a tall order.

**“Circular” reasoning:** There’s nothing logically inconsistent with the ϵ-δ definition of a limit, but using the definition can feel circular. To show that the limit of *x²* as *x →* 2 is 4, you must *guess* that the limit is 4. Then you have to find a suitable δ in terms of ϵ, but proofs that omit how δ is determined feel magical. And not the fun kind of magic, mind you.
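To make the second complaint concrete, here is the standard argument that the limit of *x²* as *x →* 2 is 4, including the scratch work for finding δ that finished proofs usually omit. Factoring gives

$$ |x^2 - 4| = |x - 2|\,|x + 2|. $$

If we first insist that δ ≤ 1, then |*x* − 2| < δ forces |*x* + 2| < 5, so |*x*² − 4| < 5|*x* − 2|. Choosing

$$ \delta = \min\left(1, \frac{\epsilon}{5}\right) $$

guarantees that 0 < |*x* − 2| < δ implies |*x*² − 4| < 5 · (ϵ/5) = ϵ. The choice of δ only looks magical because this preliminary factoring is done on scratch paper and then discarded.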

Math educators, aware of students’ difficulties with limits, have endured their own struggle to invent better ways of teaching the topic. In 1981, Jerrold Marsden and Alan Weinstein, professors at the University of California, Berkeley, proposed a new method — don’t teach limits.

Calculus without limits isn’t new. Gottfried Leibniz devised calculus using *differentials*: infinitesimal quantities that are positive yet smaller than any positive real number. The *method of exhaustion*, developed independently by the ancient Greeks and the Chinese, can be used to find areas and volumes of round shapes, like circles and cones.

Marsden and Weinstein took the latter approach and adapted the method of exhaustion to differentiation. In the preface to their book *Calculus Unlimited*, on the page screenshotted in the tweet that caught my attention, the authors present instructors with the idea of *overtaking* [1].

**Definition.** Let *f* and *g* be real-valued functions with domains contained in ℝ, and let *a* be a real number. We say that *f overtakes g at a* if there is an open interval *I* containing *a* such that

- *x ∈ I* and *x ≠ a* implies *x* is in the domain of *f* and *g*.
- *x ∈ I* and *x < a* implies *f(x) < g(x)*.
- *x ∈ I* and *x > a* implies *f(x) > g(x)*.

There’s a lot of notation in that definition, but the idea lends itself to a natural geometric interpretation.

For example, draw the graphs of two functions *f* and *g* that intersect at some point whose *x*-coordinate is *a*. If, around some interval *I* on the *x*-axis, the graph of *f* is below the graph of *g* to the left of *a* and above the graph of *g* to the right of *a*, then *f* overtakes *g* at *a*.

To define the derivative, Marsden and Weinstein look at the slopes of lines through *a.*

**Definition:** Let *f* be a function defined on an open interval containing *a*. Consider the family of lines *lₘ(x) = f(a) + m(x − a)*. Suppose there is a number *b* such that

- *m < b* implies *f* overtakes *lₘ* at *a*.
- *m > b* implies *lₘ* overtakes *f* at *a*.

Then we say that *f is differentiable at a*, and that *b* is *the derivative of f* at *a.*
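As a concrete check (my own worked example, not one from the book), take *f(x) = x²*. For any slope *m*,

$$ f(x) - l_m(x) = x^2 - a^2 - m(x - a) = (x - a)(x + a - m). $$

Near *x = a*, the factor *x + a − m* has the sign of *2a − m*. So when *m < 2a*, the difference is negative to the left of *a* and positive to the right, meaning *f* overtakes *lₘ*; when *m > 2a*, the signs flip and *lₘ* overtakes *f*. The number *b = 2a* satisfies both conditions of the definition, recovering the familiar derivative of *x²* using nothing but factoring and sign analysis.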

Like the definition of overtaking, Marsden and Weinstein’s definition of the derivative paints a convenient geometric picture. It even provides a tangible method to find the derivative of a function at some point using graphing software.

For instance, consider the graph of *f(x) = x²* and some point, say *a = 1*. Graph any line that passes through the point *(a, f(a)) = (1, 1)*. Now vary the slope of the line until you find a line that neither overtakes *f* nor is overtaken by *f*. If you can find such a line, its slope is the derivative of *f(x)* at *a*.
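This slope-hunting procedure can even be automated. The sketch below is my own heuristic, not the book's method — it samples a few points near *a* rather than verifying a whole interval — and it bisects on the slope *m*:

```python
def overtakes(f, g, a, h=1e-6):
    """Rough numerical test of the overtaking condition:
    f below g just left of a, and f above g just right of a."""
    left = all(f(a - t) < g(a - t) for t in (h, h / 2, h / 4))
    right = all(f(a + t) > g(a + t) for t in (h, h / 2, h / 4))
    return left and right

def derivative_by_overtaking(f, a, lo=-100.0, hi=100.0, steps=60):
    """Bisect on the slope m of the line l_m(x) = f(a) + m(x - a):
    if f overtakes l_m, then m < b; if l_m overtakes f, then m > b."""
    for _ in range(steps):
        m = (lo + hi) / 2
        line = lambda x, m=m: f(a) + m * (x - a)
        if overtakes(f, line, a):
            lo = m    # f overtakes l_m, so the slope is too shallow
        elif overtakes(line, f, a):
            hi = m    # l_m overtakes f, so the slope is too steep
        else:
            return m  # neither overtakes: m is (numerically) the derivative
    return (lo + hi) / 2

print(derivative_by_overtaking(lambda x: x**2, 1.0))  # ≈ 2.0
```

Like the graphing-software experiment, this only works to the precision of the sampling width *h*; shrinking *h* mirrors zooming in on the point.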

If you play around with this method in graphing software long enough, you’ll notice that you need to zoom in very close to the point *(a, f(a))* to get any semblance of precision. I found this to be a beautiful and natural manifestation of what the ϵ-δ definition of a limit captures — namely, the behavior of *f* at points arbitrarily close to *a* — something the definition manages to obscure from students through its cryptic formulation.

This visual method for finding the derivative at a point isn’t rigorous. It also lacks precision for all but the nicest of functions at the nicest of points. It does emphasize something, however, that was lost on me as a student for a long time: the tangent line is *special*.

In chapter 2 of Marsden and Weinstein’s textbook, the authors introduce the notion of a *transition point* as a point at which something suddenly changes. There are countless examples of transition points in nature: sunrise is the transition point from night to day, and 100 degrees Celsius is the point at which liquid water transitions to water vapor (at sea level, anyway).

The tangent line can be thought of as the transition point between the set of all lines through a point *a* that are overtaken by *f *and the set of all lines through *a* that overtake *f. *Something special happens with the tangent line that splits the set of all lines through the point *a* into two disjoint sets.

Transition points are perhaps the foundational concept in *Calculus Unlimited*. Marsden and Weinstein frame integrals through the lens of transition points, as well, and the concept plays an important role in their proof of The Fundamental Theorem of Calculus.

Marsden and Weinstein mention in their preface that “as far as [they] know, [their] definition [of the derivative] has not appeared elsewhere” [1]. That may be true of their specific definition. Still, a method established by Apollonius of Perga and later re-discovered by John Wallis calculates the derivative in a manner that feels very much like Marsden and Weinstein’s approach [2].

Math historian Jeff Suzuki, in his essay *The Lost Calculus*, explains how the definition of the tangent line used by Apollonius and Wallis can be described in modern terms:

Suppose we wish to find the tangent to a curve y = f(x) at the point (a, f(a)). The tangent line y = T(x) may be defined as the line resting on one side of the curve; hence we have either f(x) > T(x) for all x ≠ a, or f(x) < T(x) for all x ≠ a. (This is generally true for curves that do not change concavity; if the curve does change concavity, we must restrict our attention to intervals where the concavity is either positive or negative.)

Suzuki explores the work on derivatives by René Descartes, Jan Hudde, and Isaac Barrow, all of whom approached the problem without the need for infinitesimals and limits. Barrow even proved a version of The Fundamental Theorem of Calculus [2]!

There are many things that I like about Marsden and Weinstein’s textbook. But I’m not a calculus student anymore. I’ve been out of the classroom for nearly a decade, so it’s hard for me to say how students would respond to these ideas.

Marsden and Weinstein’s definition of the derivative is verbose, and it doesn’t address the quantifier complexity introduced by the formal definition of a limit. However, it provides a geometric algorithm for finding the derivative and avoids introducing logic that feels circular.

Although their approach has its appeal, a glance through the book is enough to see how arduous the calculations are using the book’s definitions. Whatever is gained in intuition is offset by painfully boring algebraic manipulations. Not to mention that removing limits from calculus is a disservice to students who aspire to become mathematicians.

There is a lesson here, though, that shouldn’t be overlooked.

Newton’s and Leibniz’s ideas are indisputably important, but framing calculus as the study of limits gives students an inaccurate picture of what calculus really is. So much emphasis is placed on the limit that, at least in many students’ minds, limits become synonymous with calculus itself. That’s like saying that painting is the art of using a paintbrush!

Even worse, focusing on limits hides centuries of effort that provides historical context and validates students who struggle with the modern definitions of the limit and derivative.

Exploring derivatives without limits provides the opportunity to better understand the *content* of calculus, not just the tools, as well as the long history behind the problems calculus solves. In doing so, you might just come away with a better appreciation of why limits are useful. I certainly did.

*If you find an error in this article, you can report it here.*

[1] Marsden, Jerrold and Weinstein, Alan J. (1981). *Calculus Unlimited.* Benjamin/Cummings Publishing Company, Inc., Menlo Park, CA. ISBN 0-8053-6932-5. https://resolver.caltech.edu/CaltechBOOK:1981.001

[2] Suzuki, Jeff. (2005). *The Lost Calculus.* *Mathematics Magazine*, 78, 339–353. https://www.maa.org/programs/maa-awards/writing-awards/the-lost-calculus-1637-1670-tangency-and-optimization-without-limits

*The diagrams and figures in this article were created with Canva and Geogebra.*

Some numbers are hard to compute.

In a 1990 article for *Scientific American*, mathematicians Ronald Graham and Joel Spencer quoted Paul Erdös as saying:

Suppose aliens invade the earth and threaten to obliterate it in a year's time unless human beings can find the Ramsey number for red five and blue five. We could marshal the world's best minds and fastest computers, and within a year we could probably calculate the value. If the aliens demanded the Ramsey number for red six and blue six, however, we would have no choice but to launch a preemptive attack.

It's an intriguing quote. But what the heck is a Ramsey number, and why is it so hard to calculate?

Let's start by playing a game.

The game is called Sim — a two-player game that you can play with pencil and paper. Every game of Sim is played on a "board" made from the six vertices of a hexagon:

Each player takes turns drawing an edge between a pair of vertices, but they do so with two different colors. For example, player one might use a blue pencil, and player two might use a red pencil.

The first player to join any two of their existing edges into a triangle immediately loses:

But will one player *always* lose? Or is it possible for the game to end in a draw? Play a few rounds of the game with a friend — or against a computer — and see if you can force a tie!

To determine whether or not Sim can end in a draw, let's think about a different question — one that, at first glance, might not seem related at all.

Your friend messages you and says, "I want to host a party for a group of randomly selected people, but I need to make sure that either: 1) there are three people who are all mutually strangers, or 2) there are three people who are all friends with each other. I'm serving dinner, and I'm on a tight budget. What's the smallest number of people I need to invite?"

Let's see if we can help your friend out.

Since we need to guarantee at least three mutual strangers *or* three people who are all friends, it seems reasonable that we need to invite at least three people to the party.

But three people isn't enough to guarantee your friend's conditions are met. For example, your friend might invite two people that know each other and one that is a stranger to the other two:

Four people won't work, either. Your friend could invite two people who are friends and two people who are strangers to each other but friends with one other person at the party:

What about five people? That feels more promising because even if your friend invites two people who are friends, the remaining three people could all be mutual strangers:

But the situation isn't so simple. There are lots of ways that five people could be in stranger/friend relationships with one another. For instance, you might find a group of three people out of the five where Person 1 knows Person 2 and Person 2 knows Person 3, but Person 1 *doesn't *know Person 3:

You can extend this situation to five people so that everyone at the party is friends with *exactly* two other people and a stranger to everyone else.

How? Well, let's represent each of the five people at the party as the vertex of a pentagon and use blue edges to indicate two people are friends and red edges to indicate two people are strangers.

Join each of the vertices around the face of the pentagon using blue edges. Then connect each pair of vertices that aren't joined by a blue edge with a red edge:

The diagram must have three vertices joined into a triangle with all blue edges to have three mutual friends. The same goes for three mutual strangers, except the edges of the triangle need to be red.

But there aren't *any* triangles in the diagram with three edges of the same color! So inviting five people to the party won't guarantee that your friend's conditions are met.

At this point, you might be wondering if there is *any* number of people your friend can invite to the party so that at least three people are mutual friends or three people are mutual strangers.

Does a similar diagram rule out six people? For example, what happens if you draw the vertices of a hexagon, join all of the vertices around the face with blue edges, and then connect everything else with red edges?

Well, look at that! Our first triangle! The strategy that worked for five people fails for six, but that alone doesn't mean that *every* group of six will necessarily contain three mutual strangers.

Do you notice any similarities between your friend's dinner party problem and the Game of Sim?

In both cases, we've drawn the vertices of a polygon and added blue and red edges. The difference is that in Sim, you try to *avoid* triangles with the same color edges, and in the dinner party problem, you *want* to form triangles whose edges are all the same color.

Although explaining the diagrams in terms of polygons helps us visualize the problem, it's not strictly necessary. What we've really been doing is drawing **graphs** — not the graphs of functions you might be familiar with from algebra, but a different kind of graph, comprised of vertices and edges. These graphs are studied in an area of mathematics called **graph theory**.

In particular, we've been drawing **complete graphs**, which are graphs with every possible edge between every pair of distinct vertices.

The Game of Sim and the dinner party problem are concerned with the inevitability of certain kinds of structures appearing whenever you color all the edges in a complete graph with two colors. These structures are called **subgraphs.**

A subgraph is a portion of a graph comprised of one or more of the vertices in the original graph and a subset of the edges in the original graph that connect the vertices in the subgraph.

If a subgraph is complete, it is called an **\(n\)-clique**, where \(n\) is the number of vertices in the subgraph.

Using the language of graph theory, you can reframe the dinner party problem as a question about graphs.

**The Dinner Party Problem (Graph-Theoretical Version):** Does a complete graph exist for which any red-blue coloring of its edges results in either a red 3-clique or a blue 3-clique? And if one *does* exist, what is the smallest one?

You can also reframe the question about whether or not the Game of Sim can end in a draw in graph-theoretical terms.

**The Game of Sim (Graph-Theoretical Version):** Does every red-blue coloring of the complete graph on six vertices contain either a red 3-clique or a blue 3-clique?

In 1928, British philosopher and mathematician Frank Ramsey presented a paper called *On a Problem of Formal Logic*. The subject of Ramsey's paper bears little resemblance to the dinner party problem or the Game of Sim.

But one of the results in the paper — one that was not the focus of the discussion but needed in a more important argument — showed that for a sufficiently large system, no matter how chaotic, there exists a substructure with some kind of order. This minor result became one of Ramsey's most iconic legacies, and it can be stated in graph-theoretical terms.

**Ramsey's Theorem (Graph-Theoretical Version):** Given any two positive integers \(r\) and \(b\), there exists a minimum number \(R(r, b)\) such that any red-blue coloring of the complete graph on \(R(r, b)\) vertices contains either a red \(r\)-clique or a blue \(b\)-clique.

The number \(R(r, b)\) is called a **Ramsey number**, and the notation reminds us that Ramsey numbers depend on the choice of \(r\) and \(b\).

If you set both \(r\) and \(b\) to 3, then Ramsey's Theorem looks a *lot* like the dinner party problem. In fact, the theorem tells you that a solution to the problem *must* exist! And the smallest number of vertices you need — that is, the smallest number of people your friend needs to invite to the party — is \(R(3, 3)\).

Telling your friend to invite \(R(3, 3)\) people to their party isn't particularly helpful, though. It's good to know that they can solve their problem, but they need to know the *value* of \(R(3, 3)\).

The analysis we did for the dinner party problem tells us that \(R(3, 3)\) is *at least* six. But just because we found *one* red-blue coloring of the edges for the complete graph on six vertices with a red 3-clique doesn't mean that *all* red-blue edge colorings will have that property.

Let's look at the situation in a bit more depth. First, choose any vertex in the complete graph on six vertices and call it \(v\).

The vertex \(v\) has five edges connected to it. Now, pretend that you've colored all of the edges in the graph blue or red. What can you say about the number of red and blue edges connected to \(v\)?

There are six possibilities:

- Five red edges and no blue edges
- Four red edges and one blue edge
- Three red edges and two blue edges
- Two red edges and three blue edges
- One red edge and four blue edges
- No red edges and five blue edges

In all six cases, \(v\) is connected to at least three edges of the same color. For the sake of argument, assume that \(v\) is connected to at least three red edges. (If it isn't, swap red and blue in the following argument.) Let \(x\), \(y\), and \(z\) be the vertices at the other end of those edges.

In the diagram, there are three specific edges colored red, but we could have chosen *any* three of the five edges connected to \(v\).

If any of the edges \(xy\), \(xz\), or \(yz\) is red, then the coloring contains a red 3-clique.

On the other hand, if *none* of the edges \(xy\), \(xz\), or \(yz\) is red — in other words, they are all blue — then the coloring contains a blue 3-clique.

So, any red-blue edge coloring of the complete graph on six vertices inevitably contains either a red 3-clique or a blue 3-clique. This means that \(R(3, 3)\) is *at most* six, but since we already knew that \(R(3, 3)\) is *at least* six, we can conclude that \(R(3, 3)\) is *exactly* six!
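In fact, the claim is small enough to verify exhaustively by computer. Here's a quick brute-force sketch (my own encoding: each edge coloring is a dictionary mapping vertex pairs to 0 for red or 1 for blue):

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """True if some triangle's three edges all share one color."""
    return any(
        coloring[(i, j)] == coloring[(i, k)] == coloring[(j, k)]
        for i, j, k in combinations(range(n), 3)
    )

# Every one of the 2**15 red-blue colorings of K6 has a monochromatic triangle.
edges6 = list(combinations(range(6), 2))
assert all(
    has_mono_triangle(6, dict(zip(edges6, colors)))
    for colors in product((0, 1), repeat=len(edges6))
)

# The pentagon coloring (blue around the face, red diagonals) has none,
# so five vertices are not enough.
pentagon = {(i, j): 1 if j - i in (1, 4) else 0
            for i, j in combinations(range(5), 2)}
assert not has_mono_triangle(5, pentagon)
```

The exhaustive check over all \(2^{15} = 32{,}768\) colorings finishes in well under a second — exactly the luxury that evaporates for larger Ramsey numbers.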

This simultaneously solves the dinner party problem and whether the Game of Sim can end in a draw.

You can tell your friend that they only need to invite six people to the party to guarantee that at least three people are mutual friends or are mutual strangers.

And since the Game of Sim iteratively colors the edges of a complete graph on six vertices red or blue, at some point, one of the players will be forced to complete a triangle whose edges are the same color.

Interest in Ramsey's Theorem seems to have begun when the 1953 Putnam Competition included a graph-theoretic version of the dinner party problem. The solution to the problem established that \(R(3, 3)\) is at most six, and in 1955, mathematicians Robert Greenwood and Andrew Gleason proved that \(R(3, 3)\) is equal to six.

In the same paper, Greenwood and Gleason showed that \(R(4, 4)\) is eighteen using techniques considerably more advanced than what is needed for \(R(3, 3)\).

The search for \(R(5, 5)\), and for Ramsey numbers with other values of \(r\) and \(b\), was on. In 1965, H. L. Abbott published the first lower bound for \(R(5, 5)\) in his doctoral thesis, showing that the value is at least thirty-eight. By 1989, the lower bound had been raised to forty-three.

The current best upper bound for \(R(5, 5)\) is forty-eight, and it took over a decade to lower that from the previous upper bound of forty-nine. So, after more than half a century, the best estimates for \(R(5, 5)\) are:

$$ 43 \leq R(5, 5) \leq 48. $$

The situation is even worse for \(R(6, 6)\). The current best estimates are:

$$ 102 \leq R(6, 6) \leq 165. $$

Why is calculating Ramsey numbers so hard, even for small values of \(r\) and \(b\)? It's all thanks to an explosion in the number of possible edge colorings as the number of vertices grows in the complete graphs that need to be checked.

Every vertex in a complete graph on \(n\) vertices is connected to \(n-1\) edges, so adding up the edges connected to each vertex gives \(n(n-1)\) edges. But each edge is connected to two vertices, so adding up the edges this way counts each edge twice. This means that the complete graph on \(n\) vertices has \(\frac{n(n-1)}{2}\) edges.

When you color each edge in a complete graph on \(n\) vertices red or blue, you have two choices of color for each edge. So, the total number of ways that you can color the edges of the graph is:

$$ \underbrace{2 \times 2 \times \cdots \times 2}_{\frac{n(n-1)}{2} \text{ times}} = 2^{\frac{n(n-1)}{2}}. $$

That's two to the *power* of \(\frac{n(n-1)}{2}\). In the best case — that is, *if* \(R(5, 5)\) is forty-three — you need to check \(2^{903}\) edge colorings to verify that every single one has at least one red 5-clique or one blue 5-clique.
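Both counts are one-liners to reproduce — here's a small sketch of my own using Python's standard library:

```python
from math import comb

def num_edges(n):
    # a complete graph has one edge for each pair of distinct vertices
    return comb(n, 2)          # equals n * (n - 1) // 2

def num_colorings(n):
    # two color choices (red or blue) for each edge
    return 2 ** num_edges(n)

print(num_edges(43))       # 903
print(num_edges(102))      # 5151
print(num_colorings(6))    # 32768
```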

\(2^{903}\) is roughly \(10^{271}\). For comparison, astrophysicists estimate that the universe contains about \(10^{80}\) atoms!

The number of edge colorings you'd need to check to compute \(R(6, 6)\) is even more staggering. At best, you'd have to check \(2^{5151}\) — roughly \(10^{1550}\) — edge colorings!

For the last half-century, mathematicians have spent significant resources checking edge colorings of the complete graph on forty-three vertices. Each edge coloring checked has contained either a red 5-clique or a blue 5-clique. There is a consensus that \(R(5, 5)\) is likely equal to forty-three.

But why haven't we found the value yet? Erdös posed his hypothetical scenario involving alien invaders in 1990. He thought that \(R(5, 5)\) could be found within a year, assuming the fastest computers and brightest minds were working on the problem.

Since 1990, computer processors have sped up exponentially, thanks in part to Moore's Law. This "law" states that the number of transistors on a microchip doubles roughly every two years.

If we assume that this means that processor speeds also double every two years, then modern processors in 2021 are approximately \(2^{15}\) times faster than processors in 1990. Under these assumptions, a task that took a year to compute in 1990 should take about sixteen minutes to compute today.

Of course, these are just rough estimates. They don't consider other limiting factors, such as interest in the problem and access to sufficient computing resources. And — perhaps most importantly — we haven't had an alien force threatening to obliterate us if we don't compute \(R(5, 5)\) quickly enough.

Given the exponential increase in processing power, though, and the prior work on \(R(5, 5)\), it seems feasible that we could meet the aliens' demands, perhaps in much less time than a year.

What about \(R(6, 6)\), though?

To give you some idea of how large the number of edge colorings that need to be checked is, consider that the universe has been around for about 13.77 billion years. There are 86,400 seconds in a day and, using the Gregorian year of 365.2425 days, a total of 31,556,952 seconds every year.

This means that approximately \(4.34 \times 10^{17}\) seconds have elapsed since the big bang. That's 434 *quadrillion* seconds!

Using the best lower bound for \(R(6, 6)\), the best-case scenario means checking approximately \(2^{5151}\) edge colorings. That's nearly \(9.3 \times 10^{1532}\) times *more* edge colorings than seconds since the beginning of the universe!
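Working in base-10 logarithms keeps these astronomically large numbers manageable; a quick sketch of my own reproduces the comparison:

```python
from math import comb, log10

SECONDS_PER_YEAR = 31_556_952                     # Gregorian year
AGE_OF_UNIVERSE_SECONDS = 13.77e9 * SECONDS_PER_YEAR

edges = comb(102, 2)                              # 5151 edges in K_102
log_colorings = edges * log10(2)                  # log10 of 2**5151

# colorings to check: about 10**1550.6
print(f"{log_colorings:.1f}")
# ratio of colorings to seconds since the big bang: about 10**1533
print(f"{log_colorings - log10(AGE_OF_UNIVERSE_SECONDS):.1f}")
```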

Even if you could check one edge coloring every *nanosecond*, it would take you more than \(10^{1534}\) *years* to check them all! And, from the looks of it, you'd need a computer with more processors than there are atoms in the universe to leverage parallel computing to knock the time needed down to a year.

Not even quantum computing offers much hope. The best algorithm for the kind of search needed to sift through edge colorings is called Grover's Algorithm, and it can search through a space of \(N\) items in about \(\sqrt{N}\) steps.

In other words, searching through the \(2^{5151}\) possible edge colorings of the complete graph on 102 vertices would take about \(2^{2576}\) steps. Assuming each of those steps took about a nanosecond, you would still need about \(9 \times 10^{758}\) years to search through all of the edge colorings.

The numbers are mind-numbingly huge. And while it's true that not *all* edge colorings need to be checked — after all, mathematicians have found ways to rule out certain configurations — the scale of the numbers is so massive that, even using our best computers, we'd need an earth-shattering breakthrough in mathematics if we are to find \(R(6, 6)\) in less than a year.

Let's hope Erdös's alien invaders never pay us a visit.

*Was this article helpful or enjoyable to you? If so, consider supporting me. Every little bit helps motivate me to keep producing more content.*

*If you'd like to get notified whenever I publish new content, you can subscribe via RSS or join my email newsletter.*

*Many thanks to Tariq Rashid for reading an early draft of this article and providing valuable feedback.*


Angeltveit, V., & McKay, B. (2018). R(5, 5) ≤ 48. *Journal of Graph Theory*, 89, 5–13. https://doi.org/10.1002/jgt.22235

Graham, R., & Spencer, J. (1990). Ramsey Theory. *Scientific American,* 112–117.

Greenwood, R., & Gleason, A. (1955). Combinatorial Relations and Chromatic Graphs. *Canadian Journal of Mathematics*, 7, 1–7. doi:10.4153/CJM-1955-001-4

McKay, B., & Radziszowski, S. (1997). Subgraph Counting Identities and Ramsey Numbers. *Journal of Combinatorial Theory, Series B*, 69(2), 193–209. https://doi.org/10.1006/jctb.1996.1741

Radziszowski, S. (2011). Small Ramsey Numbers. Dynamic Surveys. *Electronic Journal of Combinatorics*.

*The diagrams and figures in this article were created with Canva and Excalidraw.*

Imagine meeting an extraterrestrial civilization and, for the sake of argument, let’s assume that you can somehow communicate with these beings. This civilization is a peculiar one in that it has no concept of what we call mathematics.

During some friendly conversation, you mention something about math.

Your curious alien companion stops you and asks, “What is this ‘mathematics’ that you speak of?”

“Well,” you begin, “mathematics is the study of…”

You pause.

What exactly *is* mathematics? How do you describe it?

This question has recently plagued me, and I fear I have no good answer. Fortunately, I’m not alone. Even Wikipedia recognizes that mathematics “has no generally accepted definition.”

As any good digital citizen would, I turned to Twitter to crowdsource an answer:

I didn’t put too much thought into my poll choices.

I chose *Numbers* because whenever I told people that I was studying mathematics, one of the most common responses was, “Oh? You must be really good with numbers!” (I’m not.)

*Logic* was meant to be a bit of a red herring. Mathematicians certainly use logic as a tool. But so do theoretical physicists and philosophers. Logicians study logic, and while there have been many logicians that study mathematics, the two pursuits have always been separate in my mind.

Finally, *Quantitative ideas* was a choice I threw in because it sounded reasonable. It’s general enough to capture a wide range of topics but still keeps the focus on numbers. This isn’t entirely accurate, though. Ask a topologist if their work is quantitative, and they’ll probably say, “Not really.”

To my surprise, *Logic* and *Quantitative ideas* received the same portion of votes. Those who answered *Other* commented with notions such as space, structure, and patterns.

What I hope that poll participants realized is that, to some degree, mathematics studies all of these things. Yet every answer to the question “What is mathematics the study of?” feels inadequate.

While pondering the definition of mathematics, one thing that struck me is that it seems possible to complete this exercise for other scientific disciplines. For example, you could say that physics is the study of how the universe works. Chemistry is the study of what the universe is made of. Biology is the study of living organisms.

I’d be a fool not to point out that these “definitions” are simplistic. Every human endeavor is fraught with nuance. But, to a certain extent, you can somehow distill these disciplines into a single sentence. This doesn’t appear to be the case with mathematics.

Throughout history, mathematics has been placed on a pedestal and given an almost god-like significance. One example of this is the oft-quoted article titled *The Unreasonable Effectiveness of Mathematics in the Natural Sciences*, penned in 1960 by physicist Eugene Wigner.

In his article, Wigner expresses his awe of mathematics’ applicability to physics:

“The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve.”

Despite my love for mathematics, I find Wigner’s attitude a bit strange and borderline fanatical. In my mind, it’s not surprising that one can describe the observable universe in the language of mathematics. After all, mathematics was born in human minds attempting to understand the things they observed.

I’m no historian, but from what I’ve gathered, the advent of mathematics stems primarily from the need to measure quantity. As civilization formed and trade and economies developed, we humans needed to keep track of everything from land area to amounts of goods to currency.

Humans are an inquisitive bunch, though, and our capacity for abstraction is not a modern development. You can find examples in Indian texts from as early as 800 BCE of what we now call the Pythagorean Theorem.

The concept of number itself is a powerful abstraction. We don’t know who the first human was to recognize that three grains of sand and a group of three friends gathered together share a common three-ness. But we know that at some point, it *did* occur — likely a *very* long time ago and, I would guess, to multiple people independently.

What is clear to me, though, is that mathematics began with observations of our world. Necessity drove the invention of numerical systems and measurement. Curiosity carried the invention to new heights.

Greek mathematics is widely considered the birthplace of mathematical proof. Most of the accounts of the history of mathematics I've read put the Greeks at the center of modern mathematical thought. Whether or not this narrative is accurate is up for debate.

The story goes that Thales of Miletus learned mathematics from the Babylonians and Egyptians. Later, Thales brilliantly applied logic and reasoning to explain mathematics, earning him the title of the first person to apply deductive reasoning to geometry.

Some modern scholars are skeptical of the Babylonian influence on Thales’ mathematics. But the Babylonian and Egyptian influence on Greek society is hardly controversial. And, as pointed out by Karine Chemla in her prologue to the book *The History of Mathematical Proof in Ancient Traditions*, “several reasons suggest that we should be wary regarding the standard narrative.”

For one, it appears that sources have gone missing from the historical record. Chemla offers as one piece of evidence how remarks made by Henry Thomas Colebrooke, the first person to translate ancient Sanskrit texts into European languages, seem to have been ignored by later historians:

“Colebrooke read in the Sanskrit texts a rather elaborate system of proof in which the algebraic rules used in the application of algebra were themselves proved. Moreover, he pointed resolutely to the use in these writings of ‘algebraic proofs’. It is striking that these remarks were not taken up in later historiography. Why did this evidence disappear from subsequent accounts?”

So, Indian mathematicians — and quite possibly Babylonian and Egyptian mathematicians — were doing mathematical proofs before the Greeks. They just didn’t *look* like Greek proofs, and historical bias has all but erased their contribution to this part of the development of mathematics.

What we can gather, though, is that at some time, mathematics moved from accounting and engineering to something more like philosophy. The statement of a mathematical principle must be accompanied by a proof explaining *why* that principle is true.

So what, then, *is* mathematics? Mathematics has exploded into something far beyond the geometry and algebra handed down to us from the ancients. Some of the objects studied by mathematicians are so abstract that they have no apparent example in nature.

This is where I have to come clean. I really don’t know what mathematics is. I know a few things that it is not, and a few things that it is in part.

The most significant difference I see between mathematics and sciences like physics and chemistry is that mathematics, although rooted in observations of our universe, is not concerned with material objects.

So perhaps mathematics is best described as the science of the immaterial universe — the universe of human thought. Just as physical science is divided into various disciplines — like astronomy or chemistry — so too can mathematics be broken down into things like geometry, algebra, and analysis.

In other words, I wonder if the word *mathematics* is more akin to the word *science* than it is *physics*. It is not a discipline in itself but encompasses many disciplines that share a common toolkit. Science has the scientific method, and mathematics has deductive logic.

Doesn’t this place mathematics into the realm of philosophy? Perhaps. But even philosophers struggle to explain mathematics. Stewart Shapiro, in the Oxford Online Handbook’s introduction to the philosophy of mathematics, has this to say about the trouble mathematics presents to philosophers:

“The conflict between rationalism and empiricism reflects some tension in the traditional views concerning mathematics, if not logic. Mathematics seems necessary and a priori, and yet it has something to do with the physical world. How is this possible? How can we learn something important about the physical world by a priori reflection in our comfortable armchairs? As noted above, mathematics is essential to any scientific understanding of the world, and science is empirical if anything is—rationalism notwithstanding.”

Is it this duality of rationalism and empiricism that plagues anyone who dares try to define mathematics? Maybe. I don’t know. I’m not sure I’ll ever know.

There is one thing I *can* say now that I’ve spent a couple of weeks thinking about the definition of mathematics. The ineffectiveness of any definition now seems to me to be perfectly reasonable.

For the past few years, I've referred to myself as a *recovering mathematician*. Let me explain.

I fell in love with mathematics about a decade ago while taking a calculus class at my local community college. Yeah, I know. Calculus. Yuck. It wasn't so much the course content that I fell in love with, though, but the way the professor explained it to us.

Nothing was presented to us as fact. Everything was questionable. Everything was *a question*. The math, it seemed, wasn't that the derivative of $x^n$ is $nx^{n-1}$ but *how* you get there using the limit definition of a derivative.
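
For the record, that derivation takes only a few lines starting from the limit definition. Expanding $(x+h)^n$ with the binomial theorem gives

$$ \frac{d}{dx}x^n = \lim_{h \to 0} \frac{(x+h)^n - x^n}{h} = \lim_{h \to 0} \frac{nx^{n-1}h + \binom{n}{2}x^{n-2}h^2 + \cdots + h^n}{h} $$

The $x^n$ terms cancel, and after dividing by $h$, every remaining term except $nx^{n-1}$ still carries a factor of $h$ and vanishes in the limit — leaving exactly $nx^{n-1}$.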

I began to pay close attention when the professor wrote the word "proof" on the whiteboard. A story was about to be told.

College and I hadn't always gotten along. After high school, I spent about two months goofing off at The University of Texas at Austin "discovering myself."

I was supposed to study music, you see. I was going to be a jazz pianist. You know, like Thelonious Monk or something. But I liked minor seconds and tritones, and the piano professor at UT-Austin liked major thirds and perfect fourths.

Don't get me wrong; those are some beautiful intervals. But life isn't major and perfect. It's dirty and dissonant. And good jazz is rooted in Real Life™. And also Egyptian sun gods and trips to Saturn.

Well, the good professor disagreed. So I quit and switched my major to "general studies," where you learn nothing about everything. I spent my time skipping class to play music.

For the next couple of years, music was my mistress (may you rest in peace, Duke Ellington). I left UT and, at my mother's behest, took some classes in audio engineering at the Arlyn recording studio.

I graduated as a certified audio engineer of the studio arts and promptly started working as a live sound engineer in various nightclubs back in my hometown of Houston, where pretty much everything I had learned was useless.

I played the Wurlitzer electric piano in a duo with a drummer. He played in weird time signatures, and I forced the electrons from my Wurlitzer's pickups through fuzz and wah-wah pedals. I picked up a regular Thursday night gig with a cover band. We played things like Cannonball Adderley, Pink Floyd, and the *Ghostbusters* theme song, all in the same set.

It was wild. It was *fun*.

Then one day, I got a call from this guy I'd gone to engineering school with. He'd landed an unpaid internship at the hottest studio in Hollywood: Oceanway. Radiohead had recorded *Hail to the Thief* there a couple of years earlier. Beyoncé and Beck were laying down tracks. And he was in the middle of it, putting all of his audio engineering knowledge to work getting coffee for producers or whatever.

He called me because he had met a band that needed a keyboardist for their latest album. I phoned the drummer, and he played some tracks for me, holding his phone up to the speaker because this was like 2005. It was British pop rock.

They agreed to pay for a plane ticket and put me up in their apartment for a week. I just had to pay for food. I'd never been to Los Angeles and didn't want to pass up the chance to step foot in a studio where Miles Davis and Frank Sinatra had recorded. So I agreed.

Six months later, I joined the band and moved to LA to be a rockstar. We called ourselves the Small Hours. And while we came close to getting close to making it — we played one show at the Viper Room — I had more or less lost interest after a year of grinding the LA scene. And I'd fallen head over heels for a girl I met at my day job.

I worked shipping and receiving for a company that distributed the finest smoking accessories and hippie decor in Van Nuys. Like cheap acrylic bongs and poorly painted ceramic figurines of mushrooms and dragons.

A few months into the job, I met one of the girls who worked the drill press in the shop. She had gorgeous jet-black hair and a smile that made my knees weak. But she only spoke Spanish. And three years of high school Spanish class had somehow only prepared me to say "Hola, me llamo David" and then stare blankly and awkwardly.

I started studying Spanish. I bought a bilingual dictionary and began memorizing phrases to say each day. I enlisted the help of another woman who worked in the shipping area of the warehouse — and who knew a little bit of English and wanted to learn more — to help me practice Spanish.

Eventually, I worked up the courage to ask my crush on a date. I spent hours the night before memorizing Spanish phrases. The next day after work, I approached her and began to recite: "Hola. Yo quiero saber si usted quiere ir conmigo a ver una... una... peli... Ugh! I can't remember how to say movie!"

Fortunately, I had written everything down and taken it to work in my back pocket. I pulled out the paper and finished reading from my script. To my absolute shock, she agreed.

Three — yes, *three* — weeks later, we finally went on that date. We saw *Ice Age 2*.

Nine months later, and after nearly destroying my bilingual dictionary from so much daily use, I asked Raquel to marry me. And for reasons I'll never understand, she said yes.

I had a hobby that bewildered Raquel. I read science books. Like, textbooks. I bought logic puzzle books to work on in my spare time.

One day she had an idea. "Why don't you go back to school and study science?"

"Yeah, I guess so," I thought. "I like physics. Maybe I'll do that."

So, after moving back to Houston with Raquel and getting married in my parents' backyard, I started casually taking classes at the local community college. I planned to knock out some core classes and then transfer to the University of Houston and major in physics. Or maybe chemistry. I didn't know. They're both awesome.

I knew I'd have to get caught up in mathematics, no matter what science I decided to study. So I took a placement test and got placed in a college algebra class. It was fine. I passed without too much trouble. Then I took trigonometry and pre-calculus. I was doing well, but the math just felt like a means to an end.

Then I signed up for calculus, and my life changed. The professor was known around campus for being the most difficult math teacher. So I wasn't sure what to expect. I'd never taken calculus before, and I'd heard it was hard.

It turns out this professor taught calculus with proofs. I'd only seen proofs in my high school geometry class, and I remember them being an exercise in pedantry. But these proofs were different. They felt like stories.

The characters were the definitions of things like limits and derivatives. The stories became theorems, and we referred to them when telling other stories about continuous functions and convergent sequences. It was like we were building up an entire mythology of mathematical ideas from scratch.

I started to write miniature essays on my math tests. I spent as much time practicing taking derivatives as I did crafting written explanations for solutions.

I stopped reading science books and started reading math books. I found a used copy of a book called *Mathematics for the Nonmathematician* by Morris Kline. It told the story of mathematics in a way that was both engaging and thought-provoking. I'd never thought about math having a rich heritage, and I was intrigued with how integral mathematics was to the human experience.

Pretty soon, I had forgotten about physics and chemistry and knew exactly what I wanted to study: Math.

Around this time, my first daughter was born. Raquel and I started to get serious about establishing her permanently in the United States. See, Raquel didn't have any papers.

Thinking it would be easy for a US citizen to get permanent residency for his wife, we started meeting with immigration attorneys. We were wrong. Raquel had orders for deportation for failure to report to court after entering the US.

Not only would she have to leave the United States voluntarily, but she would also have to remain outside of the US for *ten years* before she'd be eligible to apply for a spouse visa. Because she was married to a US citizen, she could qualify for a pardon after five years, at which point she could then apply for permanent residency. All of this, of course, as long as the immigration laws didn't change.

We were devastated. But we had to do it.

Raquel moved back to her home country of El Salvador in 2008 with our one-year-old daughter. I took a semester off of school to move with them but eventually returned to the US to continue studying.

For the next five years, I spent about nine months living and studying in the US each year and three months visiting my wife and daughter in El Salvador. Three months a year with my daughter for five years. That's all I got.

After a few years of community college, I transferred to the University of Houston – Downtown (UHD) and entered the mathematics program. I decided to minor in computer science since I had some programming experience from earlier life interests.

My time at UHD was one of relentless focus. The only upside to being involuntarily separated from your family is that you end up with a lot of free time. I dedicated myself to studying mathematics, and I devoured the curriculum with the appetite of an American pygmy shrew.

I took several mentored studies, which put me one-on-one with professors exploring topics in more depth than a traditional course allows. My favorite was a study in linear algebra, and I credit that course for introducing me to "higher-level mathematics."

I'd always viewed linear algebra as the study of matrices. It's an easy mistake to make, given the typical linear algebra curriculum. But a matrix is just a concrete representation of an abstract mathematical idea. The mechanics of matrices, while important for computation, are just a means to an end. And in all reality, they distract from the beauty of linear algebra.
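
As a tiny illustration of what I mean (my own example with made-up numbers, not anything from the mentored study): differentiation on polynomials of degree at most two is a linear map, and once you fix the basis $\{1, x, x^2\}$, that abstract map becomes an ordinary matrix.

```python
import numpy as np

# Differentiation is a linear map on polynomials of degree <= 2.
# In the basis {1, x, x^2}, the polynomial a + b*x + c*x^2 becomes
# the coordinate vector [a, b, c]. Each column of D is the image of
# a basis vector: d/dx(1) = 0, d/dx(x) = 1, d/dx(x^2) = 2x.
D = np.array([
    [0, 1, 0],
    [0, 0, 2],
    [0, 0, 0],
])

p = np.array([5, 3, 4])  # represents 5 + 3x + 4x^2
dp = D @ p               # apply the map: a matrix-vector product
print(dp)                # [3 8 0], i.e. 3 + 8x
```

The matrix is just the map's shadow in one particular basis; choose a different basis and the numbers change, but the underlying map — differentiation — does not.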

The mentored study used Sheldon Axler's classic textbook *Linear Algebra Done Right* — which remains one of my favorite mathematics books to this day — and his manifesto *Down With Determinants!*, as well as Paul Halmos's *Finite Dimensional Vector Spaces*.

Reading Axler and Halmos, I felt, for the first time, a textbook speaking to me as a future mathematician. Those books are deep. They're technical, for sure, but they also spoke to me on a human level. Axler and Halmos are expert teachers. Their expository style is refreshing, and it planted in me a seed that would grow into a love of writing mathematics.

The UHD mathematics department is small, but it had a massive impact on me. I had the opportunity to take a graph theory course taught using the Moore method. Each week we were given a packet with some definitions of key terms and concepts and a list of twenty or so mathematical statements about graphs. Our task was to determine the truth of as many of the statements as we could. The following week, we took turns presenting our ideas to the class.

Finally, I got to experience what math research felt like first-hand. And I was addicted.

After my graph theory class, I'd pretty much made up my mind that I wanted to be a graph theorist. I approached Dr. Ermalinda DeLaViña, one of the graph theorists at UHD, about working with her for my senior thesis. I also started regularly hanging out in Dr. Ryan Pepper's office, the professor who taught my graph theory course.

My last year at UHD was one full of research. I spent countless hours with Dr. DeLaViña working on conjectures generated by her computer program graffiti.pc. Under her tutelage, I learned the value of using computers to aid mathematical research. I also learned how controversial that topic could be. I wrote and published my first paper with Dr. Pepper.

In the Spring of 2013, I graduated from UHD with a Bachelor of Science in Applied Mathematics and set out to complete a Ph.D. at Texas A&M University.

While I was moving to Bryan, Texas, and preparing to start graduate school, the immigration situation with my wife started moving, too. The five years were up, and Raquel was now eligible to apply for a pardon for the remaining five years of her exile from the US.

At the same time that I was taking graduate-level courses in things like abstract algebra, combinatorics, and differential geometry, I was putting together a statement. I had to prove, with evidence, that I would suffer tangible harm should the government decide to deny Raquel her pardon.

I wrote a comprehensive statement in an effort to give the government zero reasons to deny our family the chance to live together in the US. I think it was around forty pages, including all of the appendices with supporting evidence.

I bound everything together and mailed it to some address in Virginia. Or was it Washington? It doesn't matter. All that matters is that I poured my heart and soul into that statement. And nearly six months later, I received a reply stating that the government had only received a single bank statement from me showing that I sent Raquel some money back in 2008. I had a few weeks to submit the rest of my statement, or we would lose our place in line and potentially the chance for a pardon.

I scrambled to re-assemble and re-submit my statement, absolutely bewildered — although, in hindsight, it's not at all surprising — that the government had managed to lose a forty-page bound document.

Then, the unthinkable happened.

One evening, while visiting my parents in Houston, I got a call from Raquel's cousin. Raquel was in the hospital, and it was serious.

While withdrawing money from an ATM, she was assaulted by someone at knifepoint. They took the $20 or whatever she had and then slit her throat. She was found sometime later by the police.

My entire world caved in.

Suddenly, all that mattered was getting to my wife and getting her and my daughter out of El Salvador.

I filed for a leave of absence at Texas A&M. They agreed to hold my spot in the Ph.D. program for one year. Then I moved to El Salvador and started working on an exit plan.

We couldn't come to the US. Even if we did apply for asylum, the process was so backlogged that it wouldn't provide any immediate relief for our family. And we needed to act fast, especially since my wife had recently given birth to our second daughter.

As citizens of El Salvador, Raquel and our daughter could move freely between the five Central American nations. We decided to move the family to Guatemala.

I needed a job, so I reached out to a friend from my days as a live sound engineer. He had opened his own event production company, and I was desperate. Working for an event production company in Houston while living in Guatemala would be tough, though. But he knew someone who needed a programmer.

I traveled to Houston and met with the owner of an audio/visual installation company. The industry was shifting. The systems they installed were controlled by central servers that processed all the audio and video throughout the facility. He needed someone who could program the servers and build user interfaces for customers to interact with the system from a phone or tablet.

I'd done lots of scientific computing in college, both for mathematics research and while working in a computational chemistry lab (read: a closet with a computer) for a research grant I'd been awarded as an undergraduate.

I had no experience with the kind of programming he needed. But I was desperate, and so was he. Plus, I could work remotely from Guatemala, email files to the company, and support the installation crew via video chat.

We lived in Guatemala for nine months, and our family situation stabilized. I loved getting to spend every day with my wife and daughters. And finally, after a total of seven and a half years of dealing with immigration hurdles, Raquel was issued a pardon.

She applied for permanent residency, and a few months later, we were on our way back to the US.

For almost an entire year, I had done practically no mathematics. There were too many urgent priorities that demanded my attention. But I was anxious to get back to Texas A&M.

There was a problem, though. We were broke. We'd spent all of our money getting the family to safety and paying immigration and legal fees. My parents had helped us cover expenses beyond what we could afford, but even then, we had very little left when we came to the US.

Every Ph.D. student at Texas A&M is supported. Your tuition is paid for, and you're given a monthly stipend for living expenses. And that stipend is enough for a single person living in a tiny apartment or splitting rent with roommates. But my wife and I had to support a family of four.

I had to make a choice.

And I decided that, for the benefit of my children, I needed to leave graduate school and focus on a tech career.

I stand by that decision. But I'd be lying if I told you it didn't leave me feeling hollow.

I've had the privilege of finding moderate success in tech. I never worked at a big tech company, and honestly, I never wanted to. But I found a niche that has afforded me a comfortable lifestyle.

After working for about five years as a programmer, I landed a job at Real Python writing about the Python programming language. The job put me back into the education space, and I've loved it. I get to write and create video courses, and I help other content creators as a technical reviewer.

I even got to write a book, something I'd always wanted to do and something I hope to do again. But the hole that formed in my soul when I left Texas A&M has never been filled.

I don't know if I'll ever go back to graduate school. My oldest daughter is a teenager, and the youngest is halfway between a toddler and a pre-teen. Both are busy with activities: swim team, soccer, gymnastics, violin lessons. Our life is busy, and it's hard enough to find time to write a blog post, let alone finish an advanced degree.

So, I'm still recovering. I might always be recovering.

I might not get the chance to finish my Ph.D., but I've realized that I don't need a doctorate to make an impact in mathematics. All I need is my passion, a drive to continue learning, and a desire to share what I learn with others.

Let's learn some mathematics together. Whaddya say?
