Logarithms at this time represented both what was old and what was new in mathematics. The relation looked back to long-standing concerns of computation, but it also looked forward to nascent notions about mathematical functions. Although logarithms were primarily a tool for facilitating computation, they were also one of the crucial insights that directed the attention of mathematical scholars towards more abstract organizing notions. One thing is very clear: the concept of the logarithm as we understand it today, as a function, is quite different in many respects from how it was originally conceived. Eventually, through the work of many mathematicians, the logarithm became far more than a useful way to compute with large, unwieldy numbers: it became a mathematical relation and function in its own right.
In time, the logarithm evolved from a labor-saving device into one of the core functions in mathematics. Today it has been extended to negative and complex numbers, and it is vital in many modern branches of mathematics. It plays an important role in group theory and is key to calculus, with its straightforward derivatives and its appearance in the solutions to various integrals. Logarithms form the basis of the Richter scale and the measure of pH, and they characterize the musical intervals in the octave, to name but a few applications. Ironically, the logarithm still serves as a labor-saving device of sorts, but not for the benefit of human effort! It is often used by computers to approximate certain operations that would be too costly, in terms of computing power, to evaluate directly, particularly those of the form x^n.
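For positive x, this rests on the identity x^n = e^(n·ln x), so a power can be traded for one logarithm, one multiplication, and one exponential. The sketch below illustrates the idea in Python; pow_via_logs is a hypothetical helper name, not how any particular math library actually implements pow:

```python
import math

def pow_via_logs(x, n):
    """Hypothetical illustration: compute x**n as exp(n * ln(x)) for x > 0.

    Real math libraries use more careful algorithms, but the logarithm is
    still doing the heavy lifting behind the scenes.
    """
    return math.exp(n * math.log(x))

print(pow_via_logs(2.0, 10))   # ~1024.0
print(2.0 ** 10)               # 1024.0
```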
Possibly the first approach to the subject of logarithms, including logarithms of trigonometric quantities, was described by the Scottish mathematician John Napier (1550–1617) in his 1614 work Mirifici logarithmorum canonis descriptio. However, the value e, now known as Euler's number, was a later contribution by Jacob Bernoulli (1655–1705). Within a short period of time, these contributions were widely adopted as a means to facilitate numerical calculations, especially of products, with the help of logarithmic tables. Interestingly, the mechanical device developed by Napier and known as Napier's bones constitutes a resource for calculating products and quotients that is not based on the concept of logarithms. After preliminary related developments by the English mathematician Roger Cotes (1682–1716), the important result now widely known as Euler's formula was described by Leonhard Euler (1707–1783) in 1748 in his two-volume work Introductio in analysin infinitorum. The concepts of the logarithm and exponential functions, in particular, contributed substantially to establishing relationships with the concept and calculation of powers and roots, including for complex values, especially thanks to developments by Augustin-Louis Cauchy (1789–1857) in his Cours d'analyse (1821). The Fourier series was developed mainly by Jean-Baptiste Joseph Fourier (1768–1830) as a means to solve the heat (diffusion) equation on a metal plate, which he described in his reference work Mémoire sur la propagation de la chaleur dans les corps solides (1807). The development of matrix algebra was to a great extent pioneered by the British mathematician Arthur Cayley (1821–1895), who also employed matrices as a resource for addressing linear systems of equations. Cayley's focus on pure mathematics also included important contributions to analytic geometry, group theory, and graph theory. One of the first systematic approaches to the application of matrices to dynamics and differential equations was developed in the book Elementary Matrices and Some Applications to Dynamics and Differential Equations, whose first 155 pages present a treatise on matrices, including infinite series of matrices and differential operators. The remainder of the book describes the solution of differential equations using matrices, as well as applications to the dynamics of airplanes.
https://hal.science/hal-03845390v2/document
Overview of the exponential function
The exponential function is one of the most important functions in mathematics (though it would have to admit that the linear function ranks even higher in importance). To form an exponential function, we let the independent variable be the exponent. A simple example is the function f(x) = 2^x.
As illustrated by its graph, the exponential function increases rapidly. Exponential functions are solutions to the simplest types of dynamical systems. For example, an exponential function arises in simple models of bacterial growth.
An exponential function can describe growth or decay. The function g(x) = (1/2)^x is an example of exponential decay: it gets rapidly smaller as x increases, as illustrated by its graph.
In the exponential growth of f(x), the function doubles every time you add one to its input x. In the exponential decay of g(x), the function shrinks in half every time you add one to its input x. The presence of this doubling time or half-life is characteristic of exponential functions, indicating how fast they grow or decay.
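A quick numerical check of this doubling time and half-life, as a rough Python sketch (the sample points and tolerance are arbitrary):

```python
def f(x):
    return 2 ** x        # exponential growth, base 2

def g(x):
    return 0.5 ** x      # exponential decay, base 1/2

for x in [0, 1, 2.5, 7]:
    assert abs(f(x + 1) - 2 * f(x)) < 1e-9      # adding 1 to x doubles f
    assert abs(g(x + 1) - 0.5 * g(x)) < 1e-9    # adding 1 to x halves g
print("doubling and halving confirmed")
```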
Parameters of the exponential function
As with any function, the action of an exponential function f(x) can be captured by the function machine metaphor that takes inputs x and transforms them into the outputs f(x).
The function machine metaphor is useful for introducing parameters into a function. The above exponential functions f(x) and g(x) are two different functions, but they differ only by a change in the base of the exponentiation from 2 to 1/2. We could capture both functions using a single function machine, but with dials to represent parameters influencing how the machine works.
We could represent the base of the exponentiation by a parameter b. Then, we could write f as a function with a single parameter (a function machine with a single dial): f(x) = b^x.
When b = 2, we have our original exponential growth function f(x), and when b = 1/2, this same f turns into our original exponential decay function g(x). We could think of a function with a parameter as representing a whole family of functions, with one function for each value of the parameter.
We can also change the exponential function by including a constant in the exponent. For example, the function h(x) = 2^(3x) is also an exponential function. It just grows faster than f(x) = 2^x, since h(x) doubles every time you add only 1/3 to its input x. We can introduce another parameter into the definition of the exponential function, giving us two dials to play with. If we call this parameter k, we can write our exponential function f as f(x) = b^(kx).
It turns out that adding both parameters b and k to our definition of f is really unnecessary. We can still get the full range of functions if we eliminate either b or k. […]. For example, you can see that the function f(x) = 3^(2x) (b = 3, k = 2) is exactly the same as the function f(x) = 9^x (b = 9, k = 1). In fact, for any change you make to k, you can make a compensating change in b to keep the function the same. […].
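This is just the identity b^(kx) = (b^k)^x. A small sanity check in Python, at a few arbitrary sample points:

```python
# b^(kx) equals (b^k)^x, so (b=3, k=2) and (b=9, k=1) define the same function.
for x in [-2.0, 0.0, 0.5, 1.0, 3.0]:
    lhs = 3 ** (2 * x)   # b = 3, k = 2
    rhs = 9 ** x         # b = 9, k = 1
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(rhs))
print("3^(2x) and 9^x agree at the sampled points")
```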
Since it is silly to have both parameters b and k, we will typically eliminate one of them. The easiest thing to do is eliminate k and go back to the function f(x) = b^x.
We will use this function a bit at first, changing the base b to make the function grow or decay faster or slower.
However, once you start learning some calculus, you’ll see that it is more natural to get rid of the base parameter b and instead use the constant k to make the function grow or decay faster or slower. Except, we can’t exactly get rid of the base b. If we set b=1, we’d have the boring function f(x)=1, or, if we set b=0, we’d have the even more boring function f(x)=0. We need to choose some other value of b.
If we didn't have calculus, we'd probably choose b = 2, writing our exponential function as f(x) = 2^(kx). Or, since we like the decimal system so well, maybe we'd choose b = 10 and write our exponential function as f(x) = 10^(kx). According to the above discussion, it shouldn't matter whether we use b = 2 or b = 10, as we can get the same functions either way (just with different values of k).
But, it turns out that calculus tells us there is a natural choice for the base b. Once you learn some calculus, you’ll see why the most common base b throughout the sciences is the irrational number
e=2.718281828459045….
Fixing b = e, we can write the exponential function as f(x) = e^(kx).
Using e for the base is so common that e^x ("e to the x") is often referred to simply as the exponential function.
To increase the possibilities for the exponential function, we can add one more parameter c that scales the function: f(x) = c·b^(kx).
Since f(0) = c·b^(k·0) = c, we can see that the parameter c does something completely different from the parameters b and k. We'll often use two parameters for the exponential function: c and one of b or k. For example, we might set k = 1 and use f(x) = c·b^x, or set b = e and use f(x) = c·e^(kx).
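As a rough sketch of this three-parameter family in Python (exp_family is a hypothetical helper name introduced here, not something from the text):

```python
import math

def exp_family(x, c=1.0, b=math.e, k=1.0):
    """Hypothetical helper: the general exponential family f(x) = c * b**(k*x)."""
    return c * b ** (k * x)

# f(0) = c regardless of b and k, so c scales the whole curve.
print(exp_family(0.0, c=5.0, b=2.0, k=3.0))   # 5.0
# The two common two-parameter forms from the text:
print(exp_family(1.0, c=2.0, b=3.0, k=1.0))   # c * b**x with c=2, b=3      ->  6.0
print(exp_family(1.0, c=2.0, k=0.5))          # c * e**(k*x) with c=2, k=0.5 -> ~3.297
```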
https://mathinsight.org/exponential_function
The number e is a mathematical constant, approximately equal to 2.71828, that is the base of the natural logarithm and exponential function. It is sometimes called Euler’s number, after the Swiss mathematician Leonhard Euler, though this can invite confusion with Euler numbers, or with Euler’s constant, a different constant typically denoted γ. Alternatively, e can be called Napier’s constant after John Napier. The Swiss mathematician Jacob Bernoulli discovered the constant while studying compound interest.
The first references to this constant were published in 1618 in the table of an appendix of a work on logarithms by John Napier. However, this did not contain the constant itself, but simply a list of logarithms to the base e. It is assumed that the table was written by William Oughtred. In 1661, Christiaan Huygens studied how to compute logarithms by geometrical methods and calculated a quantity that, in retrospect, is the base-10 logarithm of e, but he did not recognize e itself as a quantity of interest.
The constant itself was introduced by Jacob Bernoulli in 1683, for solving the problem of continuous compounding of interest. In his solution, the constant e occurs as the limit

e = lim_{n→∞} (1 + 1/n)^n,

where n represents the number of intervals in a year on which the compound interest is evaluated (for example, n = 12 for monthly compounding).
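The convergence is easy to see numerically; a quick Python check of (1 + 1/n)^n for growing n:

```python
import math

# The compound-interest limit: (1 + 1/n)^n approaches e as n grows.
for n in [1, 12, 365, 10_000, 1_000_000]:
    print(n, (1 + 1 / n) ** n)
print("e =", math.e)
```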
The first symbol used for this constant was the letter b, which Gottfried Leibniz employed in letters to Christiaan Huygens in 1690 and 1691.
Leonhard Euler started to use the letter e for the constant in 1727 or 1728, in an unpublished paper on explosive forces in cannons, and in a letter to Christian Goldbach on 25 November 1731. The first appearance of e in a printed publication was in Euler’s Mechanica (1736). It is unknown why Euler chose the letter e. Although some researchers used the letter c in the subsequent years, the letter e was more common and eventually became standard.
Euler proved that e is the sum of the infinite series

e = 1/0! + 1/1! + 1/2! + 1/3! + … = Σ_{n=0}^{∞} 1/n!,

where n! is the factorial of n. The equivalence of the two characterizations using the limit and the infinite series can be proved via the binomial theorem.
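The partial sums of this series converge very quickly; a short Python check against math.e:

```python
import math

# Partial sums of e = 1/0! + 1/1! + 1/2! + ... converge very quickly.
total = 0.0
for n in range(15):
    total += 1 / math.factorial(n)
    if n in (2, 5, 10, 14):
        print(f"first {n + 1} terms: {total:.15f}")
print(f"math.e:         {math.e:.15f}")
```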
https://en.m.wikipedia.org/wiki/E_(mathematical_constant)
The number e first comes into mathematics in a very minor way. This was in 1618 when, in an appendix to Napier’s work on logarithms, a table appeared giving the natural logarithms of various numbers. However, that these were logarithms to base e was not recognised since the base to which logarithms are computed did not arise in the way that logarithms were thought about at this time. Although we now think of logarithms as the exponents to which one must raise the base to get the required number, this is a modern way of thinking. We will come back to this point later in this essay. This table in the appendix, although carrying no author’s name, was almost certainly written by Oughtred. A few years later, in 1624, again e almost made it into the mathematical literature, but not quite. In that year Briggs gave a numerical approximation to the base 10 logarithm of e but did not mention e itself in his work.
The next possible occurrence of e is again dubious. In 1647 Saint-Vincent computed the area under a rectangular hyperbola. Whether he recognised the connection with logarithms is open to debate, and even if he did there was little reason for him to come across the number e explicitly. Certainly by 1661 Huygens understood the relation between the rectangular hyperbola and the logarithm. He examined explicitly the relation between the area under the rectangular hyperbola yx = 1 and the logarithm. Of course, the number e is such that the area under the rectangular hyperbola from 1 to e is equal to 1. This is the property that makes e the base of natural logarithms, but this was not understood by mathematicians at this time, although they were slowly approaching such an understanding.
Huygens made another advance in 1661. He defined a curve which he called "logarithmic", but in our terminology we would refer to it as an exponential curve, having the form y = ka^x. Again out of this comes the logarithm to base 10 of e, which Huygens calculated to 17 decimal places. However, it appears as the calculation of a constant in his work and is not recognised as the logarithm of a number (so again it is a close call, but e remains unrecognised).
Further work on logarithms followed which still does not see the number e appear as such, but the work does contribute to the development of logarithms. In 1668 Nicolaus Mercator published Logarithmotechnia which contains the series expansion of log(1+x). In this work Mercator uses the term "natural logarithm" for the first time for logarithms to base e. The number e itself again fails to appear as such and again remains elusively just round the corner.
Perhaps surprisingly, since this work on logarithms had come so close to recognising the number e, when e is first "discovered" it is not through the notion of logarithm at all but rather through a study of compound interest. In 1683 Jacob Bernoulli looked at the problem of compound interest and, in examining continuous compound interest, he tried to find the limit of (1 + 1/n)^n as n tends to infinity. He used the binomial theorem to show that the limit had to lie between 2 and 3, so we could consider this to be the first approximation found to e. Also, if we accept this as a definition of e, it is the first time that a number was defined by a limiting process. He certainly did not recognise any connection between his work and that on logarithms.
We mentioned above that logarithms were not thought of in the early years of their development as having any connection with exponents. Of course from the equation x = a^t, we deduce that t = log x where the log is to base a, but this involves a much later way of thinking. Here we are really thinking of log as a function, while early workers in logarithms thought purely of the log as a number which aided calculation. It may have been Jacob Bernoulli who first understood the way that the log function is the inverse of the exponential function. On the other hand the first person to make the connection between logarithms and exponents may well have been James Gregory. In 1684 he certainly recognised the connection between logarithms and exponents, but he may not have been the first.
So much of our mathematical notation is due to Euler that it will come as no surprise to find that the notation e for this number is due to him. The claim which has sometimes been made, however, that Euler used the letter e because it was the first letter of his name is ridiculous. It is probably not even the case that the e comes from "exponential"; it may just have been the next vowel after "a", and Euler was already using the notation "a" in his work. Whatever the reason, the notation e made its first appearance in a letter Euler wrote to Goldbach in 1731.
Most people accept Euler as the first to prove that e is irrational. Certainly it was Hermite who proved that e is not an algebraic number in 1873.
https://mathshistory.st-andrews.ac.uk/HistTopics/e/
All exponential functions are proportional to their own derivative, but e is the one special base for which the proportionality constant is 1, meaning that e^t actually equals its own derivative.
If you look at the graph of e^t, it has the peculiar property that the slope of the tangent line at any point on the graph equals the height of that point above the horizontal axis.
[Figure: examples of the slope of the tangent line for the exponential function.]
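A quick numerical illustration of this property, estimating the slope of e^t with a central difference in Python (the step size h is an arbitrary choice):

```python
import math

# Central-difference estimate of the derivative of e^t at a few points:
# the slope matches the height of the graph, i.e. d/dt e^t = e^t.
h = 1e-6
for t in [-1.0, 0.0, 1.0, 2.0]:
    slope = (math.exp(t + h) - math.exp(t - h)) / (2 * h)
    print(f"t = {t:4}: slope ≈ {slope:.6f}, e^t = {math.exp(t):.6f}")
```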
So how does the exponential function help us find the derivatives of other exponential functions? Well, maybe you noticed that different exponentials look like horizontally scaled versions of each other. This is true for all exponential functions, but it is easiest to see with exponential functions whose bases are related.
This means that you can re-write one exponential in terms of another’s base. For example, if we have an exponential function of base 2 and want to re-write the function in terms of base 4, it can be written like this.
2^x = 4^((1/2)·x)
One way to see how to convert between two bases is to zoom in on the graph between 0 and 1 to see how fast the first base grows to the value of the second base. In this case, base 4 grows twice as fast as base 2 and reaches the output of 2 in half the time. So to convert base 4 to base 2, we can multiply the input x of the base-4 function by the constant 1/2, which is the same as scaling 4^x by a factor of 2 in the horizontal direction.
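At a handful of arbitrary sample points, a quick Python check confirms that 2^x and 4^((1/2)·x) agree:

```python
# Base 4 reaches any given output in half the input that base 2 needs,
# so scaling the input by 1/2 converts 4^x into 2^x.
for x in [0.25, 1.0, 3.0, 10.0]:
    assert abs(2 ** x - 4 ** (0.5 * x)) < 1e-9 * 2 ** x
print("2^x and 4^(x/2) agree at the sampled points")
```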
So we’ve found a function, the exponential function of base e, with a really nice derivative property. Can we take any old exponential function and re-write it in terms of the exponential function? Or in other words, what constant do we multiply the input variable by to make the exponential function have the same output as another exponential function?
For example, let's try to re-write 2^t in terms of the exponential function.
e^(ct) = 2^t
As before, we can zoom in on a plot of the two functions, and compare their behavior. Specifically, how long does it take the exponential function to grow to 2?
Well, looking at the graph, it takes about t = 0.693… units, which is exactly the proportionality constant we found before! If we multiply the input variable t in the exponential function by this constant, the exponential function has the same output as 2^t.
e^(0.69314718056…·t) = 2^t
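That constant is ln(2) ≈ 0.69314718056; a quick Python check that e^(ln(2)·t) reproduces 2^t:

```python
import math

c = math.log(2)                         # 0.6931471805599453...
for t in [0.5, 1.0, 4.0]:
    print(t, math.exp(c * t), 2 ** t)   # the two columns agree
```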
This type of question we are asking leads us directly towards another function, the inverse of the exponential function, the natural logarithm function.
The existence of a function like this answers the question of the mystery constants, because it gives a different way to think about functions that are proportional to their own derivative. There's nothing fancy here; this is simply the definition of the natural log, which asks the question "e to the what equals 2?"
e^? = 2
And indeed, plug the natural log of 2 into a calculator, and you'll find that it's 0.6931…, the mystery constant we ran into earlier. The same goes for all the other bases: the mystery proportionality constant that pops up when taking derivatives and when re-writing exponential functions using e is the natural log of the base, the answer to the question "e to the what equals that base?"
Importantly, the natural logarithm function gives us the missing tool we need to find the derivative of any exponential function. The key is to re-write the function and then use the chain rule. For example, what is the derivative of the function 3t? Well, let’s re-write this function in terms of the exponential function using the natural logarithm to calculate the horizontally-scaling proportionality constant.
3^t = e^(ln(3)·t)
Then, we can calculate the derivative of e^(ln(3)·t) using the chain rule. First, take the derivative of the outermost function, which, due to the special nature of the exponential function, is the function itself. Second, multiply this by the derivative of the inner function ln(3)·t, which is the constant ln(3). The result is d/dt 3^t = ln(3)·e^(ln(3)·t) = ln(3)·3^t.
This is the same derivative we found using algebra above, since ln(3) = 1.09861228867….
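A numerical sanity check of this chain-rule result, comparing a central-difference estimate of the derivative of 3^t with ln(3)·3^t (the step size is an arbitrary choice):

```python
import math

# Numerical check that d/dt 3^t = ln(3) * 3^t.
h = 1e-6
for t in [0.0, 1.0, 2.0]:
    numeric = (3 ** (t + h) - 3 ** (t - h)) / (2 * h)
    exact = math.log(3) * 3 ** t
    print(f"t = {t}: numeric ≈ {numeric:.6f}, ln(3)*3^t = {exact:.6f}")
```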
The same technique can be used to find the derivative of any exponential function.
In fact, throughout applications of calculus, you rarely see exponentials written as some base to a power t. Instead you almost always write exponentials as e raised to some constant multiplied by t. It's all equivalent; any function like 2^t or 3^t can be written as e^(c·t). The difference is that framing things in terms of the exponential function plays much more smoothly with the process of taking derivatives.
Why we care
I know this is all pretty symbol-heavy, but the reason we care is that all sorts of natural phenomena involve a certain rate of change being proportional to the thing changing.
For example, the rate of growth of a population actually does tend to be proportional to the size of the population itself, assuming there isn’t some limited resource slowing that growth down. If you put a cup of hot water in a cool room, the rate at which the water cools is proportional to the difference in temperature between the room and the water. Or said differently, the rate at which that difference changes is proportional to itself. If you invest your money, the rate at which it grows is proportional to the amount of money there at any time.
In all these cases, where some variable's rate of change is proportional to itself, the function describing that variable over time will be some exponential. And even though there are lots of ways to write any exponential function, it's very natural to choose to express these functions as e^(ct), since that constant c in the exponent carries a very natural meaning: it's the same as the proportionality constant between the size of the changing variable and the rate of change.
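As a rough Python sketch of that idea, a crude forward-Euler simulation of dy/dt = c·y lands very close to y0·e^(ct); the rate constant c, the initial value, and the step size here are arbitrary illustrative choices:

```python
import math

# A variable whose rate of change is proportional to itself (dy/dt = c*y)
# follows y(t) = y0 * e^(c*t). A crude forward-Euler simulation shows this.
c, y0, dt, steps = 0.3, 1.0, 0.001, 10_000   # illustrative values
y, t = y0, 0.0
for _ in range(steps):
    y += c * y * dt          # rate of change proportional to y itself
    t += dt
print("simulated y(10):", y)
print("y0 * e^(c*10):  ", y0 * math.exp(c * t))
```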