## Real Analysis – Differential Calculus III

Posted in 01 Basic Mathematics, 01 Real Analysis on April 26, 2015 by ateixeira
 Theorem 65 (Cauchy’s theorem) Let ${[a,b]\subset\mathbb{R}}$ and let ${f,g:[a,b]\rightarrow \mathbb{R}}$ be continuous. If ${f}$ and ${g}$ are differentiable in ${]a,b[}$ and ${g'}$ doesn’t vanish in ${]a,b[}$, there exists ${c \in ]a,b[}$ such that $\displaystyle \frac{f(b)-f(a)}{g(b)-g(a)}=\frac{f'(c)}{g'(c)} \ \ \ \ \ (32)$ Proof: It is ${g(a)\neq g(b)}$, since if it were ${g(a)=g(b)}$ then ${g'}$ would vanish somewhere in ${]a,b[}$ by Theorem 63. Let $\displaystyle \lambda=\frac{f(b)-f(a)}{g(b)-g(a)}$ and define ${\varphi:[a,b]\rightarrow\mathbb{R}}$ (differentiable in ${]a,b[}$ and continuous in ${[a,b]}$) by ${\varphi(x)=f(x)-\lambda g(x)}$ ${\forall x \in [a,b]}$. It is $\displaystyle \varphi(a)=f(a)-\lambda g(a)=\ldots=\varphi(b)$ Thus, applying Theorem 63 in ${[a,b]}$, there exists ${c\in ]a,b[}$ such that ${\varphi'(c)=0}$. That is $\displaystyle f'(c)-\lambda g'(c)=0 \Leftrightarrow \lambda=\frac{f'(c)}{g'(c)}$ $\Box$

The previous theorem is perhaps more of a lemma than a theorem per se, since its main use is to let us prove more important results. It can also be seen as providing a method of finding (very) local approximations to functions at a given point, and as such it plays the same role as a first-order Taylor expansion (we’ll see what this means in future posts).

 Theorem 66 (Cauchy first limit rule) Let ${I \subset \mathbb{R}}$, ${c\in I'}$ and ${f,g:I\setminus \{c\}\rightarrow \mathbb{R}}$ differentiable. Moreover ${g'}$ doesn’t vanish in ${I\setminus \{c\}}$ and ${\displaystyle \lim _{x\rightarrow c}f(x)=\displaystyle \lim _{x\rightarrow c}g(x)=0}$. If ${\displaystyle \lim _{x\rightarrow c}\frac{f'(x)}{g'(x)}}$ exists, then $\displaystyle \lim _{x\rightarrow c}\frac{f(x)}{g(x)}=\lim _{x\rightarrow c}\frac{f'(x)}{g'(x)} \ \ \ \ \ (33)$ Proof: Let ${c\in\mathbb{R}}$. Since ${f,g}$ are continuous in ${I\setminus \{ c \}}$ and ${\displaystyle \lim _{x\rightarrow c}f(x)=\displaystyle \lim _{x\rightarrow c}g(x)=0}$, we can set ${f(c)=g(c)=0}$. Let ${x_n: \mathbb{N}\rightarrow I\setminus \{c\}}$ be such that ${x_n\rightarrow c^+}$. Applying Cauchy’s Theorem 65 to each interval ${[c,x_n]}$ it is $\displaystyle \frac{f(x_n)}{g(x_n)}=\frac{f(x_n)-f(c)}{g(x_n)-g(c)}=\frac{f'(u_n)}{g'(u_n)}$ with ${c<u_n<x_n}$. Then ${u_n\rightarrow c}$ by the Squeezed Sequence Theorem 17, and $\displaystyle \lim _{n\rightarrow \infty}\frac{f'(u_n)}{g'(u_n)}=\lim _{x\rightarrow c}\frac{f'(x)}{g'(x)}$ by the definition of limit. Thus $\displaystyle \lim _{n\rightarrow \infty}\frac{f(x_n)}{g(x_n)}=\lim _{x\rightarrow c}\frac{f'(x)}{g'(x)}$ Hence, by the definition of limit, it is $\displaystyle \lim _{x\rightarrow c^+}\frac{f(x)}{g(x)}=\lim _{x\rightarrow c}\frac{f'(x)}{g'(x)} \ \ \ \ \ (34)$ Analogously, if ${x_n\rightarrow c^-}$, applying Cauchy’s Theorem 65 to each interval ${[x_n,c]}$ it is $\displaystyle \frac{f(x_n)}{g(x_n)}=\frac{f(x_n)-f(c)}{g(x_n)-g(c)}=\frac{f(c)-f(x_n)}{g(c)-g(x_n)}=\frac{f'(u_n)}{g'(u_n)}$ with ${x_n<u_n<c}$. Just like in the previous steps it is $\displaystyle \lim _{x\rightarrow c^-}\frac{f(x)}{g(x)}=\lim _{x\rightarrow c}\frac{f'(x)}{g'(x)} \ \ \ \ \ (35)$ From equation 34 and equation 35 it is $\displaystyle \lim _{x\rightarrow c}\frac{f(x)}{g(x)}=\lim _{x\rightarrow c}\frac{f'(x)}{g'(x)}$ Finally let ${c=+\infty}$. 
Let ${x=1/t}$; it is ${x\rightarrow +\infty \Leftrightarrow t\rightarrow 0^+}$. From what we proved thus far it is {\begin{aligned} \displaystyle \lim _{x\rightarrow +\infty}\frac{f(x)}{g(x)} &= \displaystyle \lim_{t \rightarrow 0^+}\frac{f(1/t)}{g(1/t)}\\ &= \displaystyle\lim_{t \rightarrow 0^+}\frac{(f(1/t))'}{(g(1/t))'}\\ &=\displaystyle \lim_{t \rightarrow 0^+}\frac{-1/t^2f'(1/t)}{-1/t^2g'(1/t)}\\ &=\displaystyle \lim_{t \rightarrow 0^+}\frac{f'(1/t)}{g'(1/t)}\\ &=\displaystyle \lim_{x \rightarrow +\infty}\frac{f'(x)}{g'(x)}\\ \end{aligned}} Hence, for this case it also is ${\displaystyle\lim _{x\rightarrow c}\frac{f(x)}{g(x)}=\lim _{x\rightarrow c}\frac{f'(x)}{g'(x)}}$. The case ${c=-\infty}$ can be proven in a similar way with the change of variable ${x=-1/t}$. $\Box$
 Theorem 67 (Cauchy second limit rule) Let ${I \subset \mathbb{R}}$, ${c\in I'}$ and ${f,g:I\setminus \{c\}\rightarrow \mathbb{R}}$ differentiable. Moreover ${g'}$ doesn’t vanish in ${I\setminus \{c\}}$ and ${\displaystyle \lim _{x\rightarrow c}f(x)=\displaystyle \lim _{x\rightarrow c}g(x)=+\infty}$. If ${\displaystyle \lim _{x\rightarrow c}\frac{f'(x)}{g'(x)}}$ exists, then $\displaystyle \lim _{x\rightarrow c}\frac{f(x)}{g(x)}=\lim _{x\rightarrow c}\frac{f'(x)}{g'(x)} \ \ \ \ \ (36)$ Proof: Left as an exercise for the reader. $\Box$

The two previous theorems are known by a variety of names in the mathematical literature and are among the most used results in the practice of calculating limits.

A few examples will now showcase their power.

 Example 1 The functions ${e^x}$ and ${x}$ both tend to infinity as ${x}$ goes to infinity. We already saw that the exponential function goes to infinity faster than any polynomial in ${x}$, but Cauchy’s second rule lets us prove that result much faster. A method that tames thorny results more efficiently surely is a powerful method. $\displaystyle \lim_{x\rightarrow +\infty}\frac{e^x}{x} \ \ \ \ \ (37)$ {\begin{aligned} \displaystyle \lim _{x\rightarrow +\infty}\frac{e^x}{x} &= \displaystyle \lim_{x \rightarrow +\infty}\frac{(e^x)'}{x'}\\ &= \displaystyle \lim _{x\rightarrow +\infty}\frac{e^x}{1}\\ &= +\infty \end{aligned}}
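A quick numeric illustration of this limit (a sketch, not part of the proof; the sample points are arbitrary): tabulating ${e^x/x}$ at growing values of ${x}$ shows the ratio blowing up.

```python
import math

# Numeric sanity check (an illustration, not a proof): the ratio e^x / x
# keeps growing as x increases, matching the limit computed above.
ratios = [math.exp(x) / x for x in (5.0, 10.0, 20.0, 40.0)]
```

The list is strictly increasing and its last entry is already astronomically large.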
 Example 2 The functions ${\cos x-1}$ and ${x^2}$ tend to ${0}$ as ${x}$ goes to ${0}$. But which one of them tends to ${0}$ more strongly? {\begin{aligned} \displaystyle \lim_{x\rightarrow 0}\frac{\cos x-1}{x^2} &= \displaystyle \lim_{x\rightarrow 0}\frac{(\cos x-1)'}{(x^2)'}\\ &= \displaystyle \lim_{x\rightarrow 0}\frac{-\sin x}{2x}\\ &= \ldots \end{aligned}}

At the end of the last example we arrived once again at the type of limit ${\displaystyle \lim_{x\rightarrow 0}\frac{f(x)}{g(x)}}$ where ${\displaystyle \lim_{x\rightarrow 0}f(x)=\displaystyle \lim_{x\rightarrow 0}g(x)=0}$.

But Cauchy’s first rule (and in fact the second rule too) can be applied more than once. Hence we’ll just apply it again (starting from the beginning so we don’t lose our train of thought):

{\begin{aligned} \displaystyle \lim_{x\rightarrow 0}\frac{\cos x-1}{x^2} &= \displaystyle \lim_{x\rightarrow 0}\frac{(\cos x-1)'}{(x^2)'}\\ &= \displaystyle \lim_{x\rightarrow 0}\frac{-\sin x}{2x}\\ &= \displaystyle \lim_{x\rightarrow 0}\frac{-\cos x}{2}\\ &= -\dfrac{1}{2} \end{aligned}}
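One can sanity-check the value ${-1/2}$ numerically (an illustration, not a proof; the helper name `ratio` is ours): the quotient approaches ${-1/2}$ as ${x}$ shrinks.

```python
import math

def ratio(x):
    # the quotient (cos x - 1) / x^2 from Example 2
    return (math.cos(x) - 1.0) / x**2

# evaluate at shrinking values of x; the values approach -1/2
values = [ratio(10.0**-k) for k in range(1, 6)]
```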

As an exercise calculate

$\displaystyle \lim_{x \rightarrow 0} \frac{e^x-1}{x}$

Another mathematical theorem from real analysis which is very important to Physics, at a conceptual level, is what we’ll call Lagrange’s theorem. Even though it is a theorem in Real Analysis it has a very nice interpretation both in geometrical and in kinematic terms.

 Theorem 68 (Lagrange’s theorem) Let ${[a,b]\subset\mathbb{R}}$ and ${f:[a,b]\rightarrow\mathbb{R}}$ continuous. If ${f}$ is differentiable in ${]a,b[}$ there exists ${c\in ]a,b[}$ such that $\displaystyle \frac{f(b)-f(a)}{b-a}=f'(c) \ \ \ \ \ (38)$ Proof: In theorem 65 let ${g(x)=x}$ and the result follows trivially. $\Box$

As I was saying before the statement and proof of this theorem, it can be interpreted both geometrically and kinematically. Geometrically, the secant to the graph of ${f}$ over the interval ${[a,b]}$ has a given slope, and we can always find a point in ${]a,b[}$ where the tangent to ${f}$ has the same slope as that secant. Hence the straight lines defined by this secant and this tangent are parallel.

In a kinematic sense, if ${x}$ represents time and ${f(x)}$ represents the distance travelled, this result implies that if we traverse the distance ${f(b)-f(a)}$ in the time interval ${b-a}$ then we have an average speed which is

$\displaystyle \frac{f(b)-f(a)}{b-a}$

Since in this context ${f'(x)}$ can be interpreted as being the instantaneous speed (or just speed for short), the previous result states that there exists a time instant ${c}$ at which your instantaneous speed equals your average speed for the whole time interval.

 Example 3 Show that ${e^x-1>x\quad \forall x \neq 0}$. Proof: Let ${f(t)=e^t}$. Assume that ${x>0}$ and apply Theorem 68 to the interval ${[0,x]}$. $\displaystyle \frac{e^x-e^0}{x-0}=\left( e^t \right)'_{t=c}$ with ${0<c<x}$. Then $\displaystyle \frac{e^x-1}{x}=e^c>1$ and, since ${x>0}$, it follows that ${e^x-1>x}$. Assume now that ${x<0}$ and apply Theorem 68 once again, this time to the interval ${[x,0]}$. $\displaystyle \frac{e^0-e^x}{0-x}=\left( e^t \right)'_{t=c}$ with ${x<c<0}$. Then $\displaystyle \frac{1-e^x}{-x}=e^c<1$ and hence ${1-e^x<-x}$, that is ${e^x-1>x}$. Notice that we didn’t invert the sign of the inequality while multiplying by ${-x}$, because ${x<0}$ and consequently ${-x>0}$. $\Box$
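The inequality of Example 3 can be spot-checked numerically for a few points of both signs (this illustrates the result but of course proves nothing; the sample points are arbitrary):

```python
import math

# spot-check e^x - 1 > x at a few nonzero x of both signs
samples = [-10.0, -1.0, -0.01, 0.01, 1.0, 10.0]
checks = [math.exp(x) - 1.0 > x for x in samples]
```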

The last theorem has two important corollaries that we’ll state below.

 Corollary 69 Let ${I}$ be an interval in ${\mathbb{R}}$ and ${f:I\rightarrow\mathbb{R}}$ continuous. If ${f'}$ exists and vanishes in the interior of ${I}$, then ${f}$ is constant. Proof: By reductio ad absurdum let us assume that ${f}$ is not constant. Then there exist ${a,b \in I}$ such that ${a<b}$ and ${f(a)\neq f(b)}$. Since ${f}$ is continuous in ${[a,b]}$ and differentiable in ${]a,b[}$, by theorem 68 it is $\displaystyle \frac{f(b)-f(a)}{b-a}=f'(c)$ with ${c\in ]a,b[}$. Since ${f'}$ vanishes in the interior of ${I}$ it is ${f'(c)=0}$, hence ${\frac{f(b)-f(a)}{b-a}=0}$, which implies ${f(b)=f(a)}$, contrary to our initial hypothesis. $\Box$
 Corollary 70 Let ${I}$ be an interval in ${\mathbb{R}}$ and ${f:I\rightarrow\mathbb{R}}$ continuous. If ${f'}$ exists and is positive (negative) in the interior of ${I}$, then ${f}$ is strictly increasing (decreasing). Proof: Let us take the case ${f'>0}$. Take ${a,b \in I}$ such that ${a<b}$. From theorem 68 it is $\displaystyle \frac{f(b)-f(a)}{b-a}=f'(c)>0$ with ${c \in ]a,b[}$. Since ${b-a>0}$ it is ${f(b)>f(a)}$ and ${f}$ is strictly increasing. $\Box$

And with these results we finish the Differential Calculus part of our course in Real Analysis. The next theoretical posts of Real Analysis will dwell on the theory of numerical series.

## Real Analysis – Differential Calculus II

Posted in 01 Basic Mathematics, 01 Real Analysis on April 29, 2014 by ateixeira
 Theorem 60 (Differentiability of the composite function) Let ${D, E \subset \mathbb{R}}$, ${g:D\rightarrow E}$, ${f:E\rightarrow\mathbb{R}}$ and ${c\in D\cap D'}$. If ${g}$ is differentiable in ${c}$ and ${f}$ is differentiable in ${g(c)}$, then ${f\circ g}$ is differentiable in ${c}$ and it is $\displaystyle (f\circ g)'(c)=f'(g(c))g'(c) \quad\mathrm{if}\quad \varphi=f(t) \quad\mathrm{with}\quad t=g(x) \ \ \ \ \ (24)$ $\displaystyle (f\circ g)'(x)=f'(g(x))g'(x) \quad\mathrm{if}\quad \varphi=f(g(x)) \ \ \ \ \ (25)$ Using Leibniz’s notation we can also write the previous theorem as $\displaystyle \frac{dy}{dx}=\frac{dy}{dt}\cdot\frac{dt}{dx} \ \ \ \ \ (26)$ A notation that formally suggests that we can cancel out the ${dt}$. Proof: Let ${a=g(c)}$. Since ${f}$ is differentiable in ${a}$, by Theorem 57 it is $\displaystyle f(t)=f(a)+(f'(a)+\varphi (t))(t-a)\quad \forall t \in E$ with ${\varphi}$ continuous in ${a}$. Taking ${g(x)=t}$ and ${g(c)=a}$ it is $\displaystyle f(g(x))=f(g(c))+(f'(g(c))+\varphi(g(x)))(g(x)-g(c))\quad\forall x \in D$ Hence $\displaystyle \frac{f(g(x))-f(g(c))}{x-c}=(f'(g(c))+\varphi(g(x)))\frac{g(x)-g(c)}{x-c} \ \ \ \ \ (27)$ Since ${g}$ is differentiable in ${c}$ it also is continuous in ${c}$ by Corollary 59. Then ${\varphi (g(x))}$ also is continuous in ${c}$ (by Theorem 43). Hence $\displaystyle \lim_{x\rightarrow c}\varphi(g(x))=\varphi (g(c))=\varphi(a)=0$ Taking the limit ${x\rightarrow c}$ in 27 it is $\displaystyle \lim_{x\rightarrow c}\frac{f(g(x))-f(g(c))}{x-c}=f'(g(c))g'(c)$ Which is to say $\displaystyle (f \circ g)'(c)=f'(g(c))g'(c)$ $\Box$

As an application of Theorem 60 let us look into some simple examples.

1. ${\left( e^{g(x)} \right)'=?}$

Now ${e^{g(x)}=f(g(x))}$ and let ${t=g(x)}$. Hence

{\begin{aligned} \left( e^{g(x)} \right)' &= \left(e^t\right)'g'(x)\\ &= e^t g'(x)\\ &= e^{g(x)}g'(x) \end{aligned}} Hence

$\displaystyle \left( e^{g(x)} \right)'=g'(x) e^{g(x)}$

2. Let ${\alpha\in\mathbb{R}}$ and ${x>0}$ and calculate ${\left( x^\alpha \right)'}$.

{\begin{aligned} \left( x^\alpha\right)'&=\left( e^{\alpha\log x}\right)'\\ &=(\alpha\log x)'e^{\alpha\log x}\\ &=\dfrac{\alpha}{x}e^{\alpha\log x}\\ &=\dfrac{\alpha}{x}x^\alpha\\ &=\alpha x^{\alpha -1} \end{aligned}}

which generalizes the known rule for integer exponents.

Hence

$\displaystyle \left( x^\alpha\right)'= \alpha x^{\alpha -1}\quad \forall\alpha\in\mathbb{R},\forall x>0$
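This rule is easy to test numerically with a symmetric difference quotient (a sketch; the helper name `central_diff` and the choices ${\alpha=2.5}$, ${x_0=3}$ are ours):

```python
import math

def central_diff(f, x, h=1e-6):
    # symmetric difference quotient, O(h^2) accurate
    return (f(x + h) - f(x - h)) / (2.0 * h)

# compare the numeric derivative of x**alpha with alpha * x**(alpha - 1)
alpha, x0 = 2.5, 3.0
numeric = central_diff(lambda x: x**alpha, x0)
exact = alpha * x0**(alpha - 1.0)
```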

3. ${(\log g(x))'=?}$

Like in the first example the construction of interest is ${\log g(x)=f(g(x))}$ where ${f(t)=\log t}$ and ${t=g(x)}$.

Hence

{\begin{aligned} (\log g(x))'&=(\log t)'g'(x)\\ &= \dfrac{1}{t}g'(x)\\ &=\dfrac{g'(x)}{g(x)} \end{aligned}}

Hence for ${g(x)>0}$

$\displaystyle (\log g(x))'=\frac{g'(x)}{g(x)}$

In particular one can calculate ${(\log |x|)'}$

$\displaystyle (\log |x|)'=\frac{|x|'}{|x|}=\begin{cases} \dfrac{1}{|x|}\quad x>0\\-\dfrac{1}{|x|}\quad x<0 \end{cases}$

Since ${-|x|=x}$ for ${x<0}$ it always is

$\displaystyle (\log |x|)'=\frac{1}{x}\quad\forall x\neq 0$
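Again a numeric sketch (helper name ours): the difference quotient of ${\log |x|}$ matches ${1/x}$ on both sides of the origin.

```python
import math

def central_diff(f, x, h=1e-6):
    # symmetric difference quotient, O(h^2) accurate
    return (f(x + h) - f(x - h)) / (2.0 * h)

# the numeric derivative of log|x| matches 1/x for positive and negative x
points = [-2.0, -0.5, 0.5, 2.0]
errors = [abs(central_diff(lambda t: math.log(abs(t)), x) - 1.0 / x)
          for x in points]
```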

 Theorem 61 (Differentiability of the inverse function) Let ${D\subset\mathbb{R}}$, ${f:D\rightarrow\mathbb{R}}$ an injective function and ${c\in D\cap D'}$. If ${f}$ is differentiable in ${c}$, ${f'(c)\neq 0}$ and ${f^{-1}}$ is continuous, then ${f^{-1}}$ is differentiable and it is $\displaystyle \left( f^{-1} \right)'(f(c))=\frac{1}{f'(c)} \ \ \ \ \ (28)$ In Leibniz’s notation one introduces ${y=f(x)}$, then ${x=f^{-1}(y)}$ and the differentiability of the inverse function equation is $\displaystyle \frac{dx}{dy}=\frac{1}{\frac{dy}{dx}} \ \ \ \ \ (29)$ Proof: Omitted. $\Box$

Just like in Theorem 60 we will state an application of the previous theorem.

Let ${y=\sin x}$ and ${x\in [-\pi /2,\pi /2]}$, then ${x=\arcsin y}$.

Now

• ${f(x)}$ is differentiable in all points of the interval
• ${f'(x)=\cos x\neq 0}$ for all points contained in the interval ${]-\pi /2,\pi /2[}$
• ${\arcsin y}$ is continuous ${[-1,1]}$

Then

{\begin{aligned} (\arcsin y)' &= \left( f^{-1}(y) \right)'\\ &=\dfrac{1}{f'(x)}\\ &=\dfrac{1}{\cos x}\\ &=\dfrac{1}{\sqrt{1-\sin^2x}}\\ &=\dfrac{1}{\sqrt{1-y^2}} \end{aligned}}

Finally

$\displaystyle (\arcsin y)'=\frac{1}{\sqrt{1-y^2}} \quad y \in ]-1,1[$

Or, writing in a notation that is more usual

$\displaystyle (\arcsin x)'=\frac{1}{\sqrt{1-x^2}} \quad x \in ]-1,1[$
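A numeric check away from the endpoints ${\pm 1}$, where the derivative is unbounded (a sketch; the helper `central_diff` and the sample points are ours):

```python
import math

def central_diff(f, x, h=1e-6):
    # symmetric difference quotient, O(h^2) accurate
    return (f(x + h) - f(x - h)) / (2.0 * h)

# the numeric derivative of arcsin agrees with 1/sqrt(1 - x^2)
pts = [-0.9, -0.5, 0.0, 0.5, 0.9]
errs = [abs(central_diff(math.asin, x) - 1.0 / math.sqrt(1.0 - x * x))
        for x in pts]
```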

In general one can define higher-order derivatives by recursion.

Let us denote the ${n}$th derivative of ${f}$ by ${f^{(n)}}$ (since for the 50th derivative it isn’t very practical to use ${'}$ fifty times). One first defines ${f^{(0)}=f}$. Then ${f^{(n+1)}}$ is given by

$\displaystyle f^{(n+1)}=\left( f^{(n)} \right)'$

That is to say that

1. ${f'=\dfrac{df}{dx}}$
2. ${f''=\dfrac{d}{dx}\dfrac{df}{dx}=\left( \dfrac{d}{dx} \right)^2 f=\dfrac{d^2}{dx^2}f}$
3. ${f'''=\dfrac{d}{dx}\dfrac{d^2}{dx^2}f=\dfrac{d^3}{dx^3}f}$
4. ${f^{(n)}=\left( \dfrac{d}{dx} \right)^n f=\dfrac{d^n f}{dx^n}}$

Given the previous discussion it makes sense to introduce the following definition

 Definition 39 A function ${f}$ is said to be ${n}$ times differentiable in ${c}$ if ${f^{(k)}(c)}$ exists and is finite for all ${k\leq n}$.

We already know that a differentiable function is continuous (via Corollary 59 in Real Analysis – Differential Calculus I), but is the derivative of a differentiable function also a continuous function?

As an (counter-)example let us look into the following function:

$\displaystyle f(x)=\begin{cases} x^2\sin (1/x) \quad &x\neq 0\\ 0 & x=0 \end{cases}$

It is easy to see that ${f}$ is differentiable in ${\mathbb{R}}$

$\displaystyle f'(x)=\begin{cases} 2x\sin (1/x)-\cos (1/x) & x\neq 0\\ 0 & x=0 \end{cases}$

but ${f'}$ isn’t continuous at ${x=0}$. The reader is invited to investigate ${\displaystyle\lim_{x\rightarrow 0^+}f'(x)}$ and ${\displaystyle\lim_{x\rightarrow 0^-}f'(x)}$.
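To see numerically why the limit fails, one can evaluate ${f'}$ along two sequences tending to ${0}$: along ${x_n=1/(2\pi n)}$ the ${\cos(1/x)}$ term equals ${1}$, so ${f'(x_n)\rightarrow -1}$, while along ${x_n=1/((2n+1)\pi)}$ it equals ${-1}$, so ${f'(x_n)\rightarrow 1}$. A sketch (the choice ${n=10^6}$ is arbitrary):

```python
import math

def fprime(x):
    # derivative of x**2 * sin(1/x) for x != 0
    return 2.0 * x * math.sin(1.0 / x) - math.cos(1.0 / x)

# two sequences tending to 0 along which f' approaches different values,
# so lim_{x -> 0} f'(x) cannot exist
n = 10**6
near_minus_one = fprime(1.0 / (2.0 * math.pi * n))
near_plus_one = fprime(1.0 / ((2.0 * n + 1.0) * math.pi))
```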

Apparently the derivative of a function is either continuous or strongly discontinuous. That being said, it makes sense to introduce differentiability classes, which classify a function according to the properties of its derivatives.

 Definition 40 A function ${f}$ is said to be of class ${C^n}$ if it is ${n}$ times differentiable and ${f^{(n)}}$ is continuous.

It is easy to see that a function that is of class ${C^{n+1}}$ also is of class ${C^n}$.

A function is said to be of class ${C^\infty}$ if it has finite derivatives of all orders (which are necessarily continuous).

If ${f,g}$ are ${n}$ times differentiable in ${c}$ then ${f+g}$, ${fg}$ and ${f/g}$ (for ${g(c)\neq 0}$) are also ${n}$ times differentiable in ${c}$.

 Definition 41 Let ${D\subset\mathbb{R}}$, ${f:D\rightarrow\mathbb{R}}$ and ${c\in D}$. ${c}$ is said to be a relative maximum of ${f}$ if $\displaystyle \exists r>0:x\in V (c,r)\cap D \Rightarrow f(x)\leq f(c) \ \ \ \ \ (30)$

 Definition 42 Let ${D\subset\mathbb{R}}$, ${f:D\rightarrow\mathbb{R}}$ and ${c\in D}$. ${c}$ is said to be a relative minimum of ${f}$ if $\displaystyle \exists r>0:x\in V (c,r)\cap D \Rightarrow f(x)\geq f(c) \ \ \ \ \ (31)$

 Theorem 62 (Interior Extremum Theorem) Let ${I\subset\mathbb{R}}$ and ${c}$ an interior point of ${I}$. If ${f}$ has a relative extremum in ${c}$ and ${f'(c)}$ exists then ${f'(c)=0}$. Proof: Let us suppose without loss of generality that ${f}$ has a relative maximum in ${c}$. Since ${c}$ is an interior point of ${I}$ and ${f'(c)}$ exists, ${f_+'(c)}$ and ${f_-'(c)}$ exist and are equal. It is ${f_+'(c)=\displaystyle\lim_{x\rightarrow c^+}\dfrac{f(x)-f(c)}{x-c}}$. From our hypothesis $\displaystyle \exists r>0:x\in V (c,r)\cap I \Rightarrow f(x)\leq f(c)$ Hence $\displaystyle x\in V(c,r)\cap I\quad\mathrm{and}\quad x>c \Rightarrow \frac{f(x)-f(c)}{x-c}\leq 0$ Then by corollary 31 (Real Analysis – Limits and Continuity II) it is $\displaystyle f_+'(c)=\lim_{x\rightarrow c^+}\dfrac{f(x)-f(c)}{x-c}\leq 0$ Likewise $\displaystyle x\in V(c,r)\cap I\quad\mathrm{and}\quad x<c \Rightarrow \frac{f(x)-f(c)}{x-c}\geq 0$ Hence $\displaystyle f_-'(c)=\lim_{x\rightarrow c^-}\dfrac{f(x)-f(c)}{x-c}\geq 0$ Since ${f_+'(c)=f_-'(c)=f'(c)}$ it has to be ${f_+'(c)=f_-'(c)=0}$ and consequently ${f'(c)=0}$. $\Box$

One can visualize the previous theorem in the following way. Imagine that a continuous function has a relative maximum at ${c}$ in a given interval. In some vicinity of ${c}$ the function values are inferior to ${f(c)}$: since ${c}$ is a maximum, the values to its left increase as we approach ${c}$ and the values to its right decrease as we move further away from ${c}$.

Hence on the left side the derivative of ${f}$ has positive values, while on the right side it has negative values. Since we also assume that the derivative exists at ${c}$, we can reason by continuity that its value is ${0}$.

 Theorem 63 (Rolle’s Theorem) Let ${[a,b]\subset\mathbb{R}}$ and ${f:[a,b]\rightarrow \mathbb{R}}$ continuous. If ${f}$ is differentiable in ${]a,b[}$ and ${f(a)=f(b)}$, there exists a point ${c\in ]a,b[}$ such that ${f'(c)=0}$. Proof: Since ${f}$ is continuous in the compact interval ${[a,b]}$ it has a maximum and a minimum in ${[a,b]}$ (see the Extreme Value Theorem, which is theorem 55 in Real Analysis – Limits and Continuity VII). If ${f(c)}$ is either a maximum or a minimum for some ${c\in ]a,b[}$, then by Theorem 62 ${f'(c)=0}$. It remains to analyze the case where both extreme values occur at the extremities of the interval. Let ${m}$ denote the minimum and ${M}$ the maximum. Since by hypothesis ${f(a)=f(b)}$, it is ${m=M}$. In this case ${f}$ is constant and it trivially is ${f'(c)=0\quad\forall c\in ]a,b[}$ $\Box$

 Corollary 64 Let ${I\subset\mathbb{R}}$ and ${f:I\rightarrow\mathbb{R}}$ continuous. If ${f}$ is differentiable in the interior of ${I}$ and ${f'}$ doesn’t vanish in the interior of ${I}$, then ${f}$ doesn’t have more than one zero in ${I}$. Proof: By reductio ad absurdum, assume that ${f}$ vanishes at two points ${a}$ and ${b}$ of ${I}$ with ${a<b}$. Applying Theorem 63 in ${[a,b]}$ (since ${f(a)=f(b)=0}$) there exists ${c}$ in ${]a,b[}$ such that ${f'(c)=0}$. Hence ${f'}$ vanishes in the interior of ${I}$, which contradicts our hypothesis. $\Box$

## Real Analysis – Differential Calculus I

Posted in 01 Basic Mathematics, 01 Real Analysis on April 28, 2014 by ateixeira

— 7. Differential Calculus —

 Definition 36 Let ${D\subset\mathbb{R}}$, ${f:D\rightarrow\mathbb{R}}$ and ${c\in D\cap D'}$. ${f}$ is differentiable in point ${c}$ if the following limit exists $\displaystyle \lim_{x\rightarrow c}\frac{f(x)-f(c)}{x-c} \ \ \ \ \ (22)$ This limit is represented by ${f'(c)}$ and is said to be the derivative of ${f}$ in ${c}$.

The geometric interpretation of the derivative is that it is the slope of the tangent to the curve at the point ${(c,f(c))}$.

On the other hand if we represent the time evolution of the position of a particle by the function ${x=f(t)}$ the definition of its average velocity, on the time interval ${[t_0,t]}$, is

$\displaystyle v_a(t_0,t)=\frac{f(t)-f(t_0)}{t-t_0}$

If one is interested in knowing the velocity of a particle at a given instant, instead of knowing its average velocity in a given time interval, one has to take the previous definition and make the time interval as small as possible. If ${f}$ is a smooth function then the limit exists and it is the velocity of the particle:

$\displaystyle v(t_0)=\lim_{t\rightarrow t_0}v_a(t_0,t)=\lim_{t\rightarrow t_0}\frac{f(t)-f(t_0)}{t-t_0}=f'(t_0)$

Hence the concept of derivative unifies two apparently different concepts:

1. The concept of the tangent to a curve. Which is a geometric concept.
2. The concept of the instantaneous velocity of a particle. Which is a kinematic concept.

The fact that two concepts which apparently have nothing in common are unified by a single mathematical concept is an indication of the importance of the derivative.

Let ${f:D\rightarrow\mathbb{R}}$. If ${c\in D\cap D_{c^+}'}$, then one can define the right derivative in ${c}$ by

$\displaystyle f_+'(c)=\lim_{x\rightarrow c^+}\frac{f(x)-f(c)}{x-c}$

Let ${f:D\rightarrow\mathbb{R}}$. If ${c\in D\cap D_{c^-}'}$, then one can define the left derivative in ${c}$ by

$\displaystyle f_-'(c)=\lim_{x\rightarrow c^-}\frac{f(x)-f(c)}{x-c}$

If ${c\in D_{c^+}'\cap D_{c^-}'}$, ${f'(c)}$ exists iff ${f_+'(c)}$ and ${f_-'(c)}$ exist and are equal.

 Definition 37 A function ${f}$ is said to be differentiable in ${c}$ if ${f'(c)}$ exists and is finite.
 Definition 38 Let ${f:D\rightarrow\mathbb{R}}$ differentiable in ${D}$. The map ${x \in D \rightarrow f'(x)\in\mathbb{R}}$ is called the derivative of ${f}$ and is represented by ${f'}$.

With the change of variable ${h=x-c}$ in definition 36 one can also define the derivative by the following expression:

$\displaystyle f'(x)=\lim_{h\rightarrow 0}\frac{f(x+h)-f(x)}{h}$
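This expression translates directly into a numeric approximation: fixing ${x}$ and shrinking ${h}$ drives the difference quotient toward ${f'(x)}$. A sketch (the function ${x^3}$ at ${x=2}$ is an arbitrary choice; its derivative there is ${12}$):

```python
def diff_quotient(f, x, h):
    # the quotient from the definition: (f(x+h) - f(x)) / h
    return (f(x + h) - f(x) ) / h

# shrinking h makes the quotient approach the true derivative 3*x^2 = 12
approx = [diff_quotient(lambda t: t**3, 2.0, 10.0**-k) for k in range(1, 7)]
```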

Finally, we can also denote the derivative of ${f}$ using Leibniz’s notation.

• ${\Delta x}$ is the increment along the ${x}$ axis
• ${\Delta f = f(x+\Delta x)-f(x)}$ is the increment along the ${y}$ axis

If one makes the increments infinitely small, that is to say if the increments are infinitesimals, then we denote them by:

• ${dx}$ is the infinitely small increment along the ${x}$ axis
• ${df}$ is the infinitely small increment along the ${y}$ axis

we can write the derivative as

$\displaystyle f'(x)=\frac{df}{dx}$

As an example let us look into the function ${f(x)=e^x}$.

{\begin{aligned} f'(x)&=\lim_{h\rightarrow 0}\dfrac{e^{x+h}-e^x}{h}\\ &=e^x\lim_{h\rightarrow 0}\dfrac{e^h-1}{h}\\ &=e^x \end{aligned}}

for all ${x\in\mathbb{R}}$.

As another example we’ll now look into ${f(x)=\log x}$

{\begin{aligned} f'(x)&=\lim_{h\rightarrow 0}\dfrac{\log (x+h)-\log x}{h}\\ &=\lim_{h\rightarrow 0}\dfrac{\log \left(x(1+h/x)\right)-\log x}{h}\\ &=\lim_{h\rightarrow 0}\dfrac{\log (1+h/x)}{h}\\ &=\lim_{h\rightarrow 0}\dfrac{h/x}{h}\\ &=1/x \end{aligned}}

for all ${x>0}$.

The following relationships are left as an exercise for the reader.

• ${(\sin x)'=\cos x}$
• ${(\cos x)'=-\sin x}$
 Theorem 57 Let ${D\subset\mathbb{R}}$, ${f:D\rightarrow\mathbb{R}}$ and ${c\in D\cap D'}$. If ${f}$ is differentiable in ${c}$, there exists a function ${\varphi:D\rightarrow\mathbb{R}}$, continuous and vanishing in ${c}$, such that: $\displaystyle f(x)=f(c)+\left( f'(c)+\varphi(x) \right) (x-c)\quad x\in D \ \ \ \ \ (23)$   Proof: Define ${\varphi (x)}$ in the following way $\displaystyle \varphi(x) = \begin{cases} \dfrac{f(x)-f(c)}{x-c}-f'(c) \quad \mathrm{if}\quad x \in D\setminus \{c\}\\ 0 \quad \mathrm{if}\quad x =c \end{cases}$ Since ${\displaystyle \lim_{x\rightarrow c}\varphi (x)=\lim_{x\rightarrow c} \left(\dfrac{f(x)-f(c)}{x-c}-f'(c)\right)=f'(c)-f'(c)=0 }$, ${\varphi}$ is continuous and vanishing in ${c}$. To complete the proof one has to check that with the previous construction of ${\varphi}$ the identity of the theorem holds. $\Box$
 Corollary 58 Let ${f:D\rightarrow\mathbb{R}}$ be differentiable in ${c}$. Then it is ${f(x)=f(c)+f'(c)(x-c)+o(x-c)}$ when ${x\rightarrow c}$. Proof: Let ${r(x)=\varphi (x)(x-c)}$. Using Theorem 57 it is $\displaystyle f(x)=f(c)+f'(c)(x-c)+r(x)$ Since ${\lim_{x\rightarrow c}\varphi (x)=\varphi (c)=0}$ it is ${r(x)=o(x-c)}$ when ${x\rightarrow c}$. $\Box$
 Corollary 59 Let ${f}$ be differentiable in ${c}$. Then ${f}$ is continuous in ${c}$. Proof: From Theorem 57 it is {\begin{aligned} \lim_{x\rightarrow c} f(x)&=\lim_{x\rightarrow c}(f(c)+(f'(c)+\varphi (x))(x-c))\\ &=f(c) \end{aligned}} $\Box$

From corollary 59 it follows that all differentiable functions are continuous too. But is the converse also true? Is it true that all continuous functions are also differentiable?

The answer to the previous question is no. A simple example is the absolute value function, which is continuous everywhere but isn’t differentiable at ${x=0}$ (the side derivatives exist there but differ):

An even more extreme example is the Weierstrass function:

$\displaystyle \sum_{n=0}^\infty a^n\cos\left( b^n\pi x \right)$

with ${0<a<1}$, ${b}$ a positive odd integer and ${ab>1+\frac{3}{2}\pi}$. This function is continuous everywhere but differentiable nowhere.
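Partial sums of the series are easy to evaluate numerically (a sketch; the parameters ${a=1/2}$, ${b=13}$ and the number of terms are our choice, satisfying the stated conditions since ${ab=6.5>1+\frac{3}{2}\pi\approx 5.71}$):

```python
import math

def weierstrass_partial(x, a=0.5, b=13, terms=30):
    # partial sum of sum_n a**n * cos(b**n * pi * x); a=0.5, b=13
    # satisfy 0 < a < 1, b a positive odd integer, a*b > 1 + 3*pi/2
    return sum(a**n * math.cos(b**n * math.pi * x) for n in range(terms))
```

At ${x=0}$ every cosine equals ${1}$, so the partial sum is close to the geometric sum ${\sum_n a^n = 2}$; at any ${x}$ the partial sum is bounded by that same geometric sum.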

## Real Analysis Limits and Continuity VII

Posted in 01 Basic Mathematics, 01 Real Analysis on March 8, 2014 by ateixeira

— 6.10. Global properties of continuous functions —

 Theorem 51 (Intermediate Value Theorem) Let ${I=[a,b] \subset \mathbb{R}}$ and ${f: I \rightarrow \mathbb{R}}$ a continuous function. Let ${u \in \mathbb{R}}$ be such that ${\inf f(I)<u<\sup f(I)}$. Then there exists ${c \in I}$ such that ${f(c)=u}$. Proof: Omitted. $\Box$

Intuitively speaking, the previous theorem states that the graph of a continuous function doesn’t have holes in it if the domain of the function doesn’t have any holes in it either.

 Corollary 52 Let ${[a,b]}$ be an interval in ${\mathbb{R}}$ and ${f:[a,b]\rightarrow\mathbb{R}}$ a continuous function. Suppose that ${f(a)f(b)<0}$. Then ${\exists c \in ]a,b[}$ such that ${f(c)=0}$. Proof: In the codomain of ${f}$ there exist values bigger than ${0}$ and values smaller than ${0}$. Hence ${\sup f([a,b])>0}$ and ${\inf f([a,b])<0}$. Therefore ${0}$ is strictly between the infimum and the supremum of the codomain of ${f}$. By hypothesis the function doesn’t vanish at the extremities of the interval, hence by Theorem 51 the value ${0}$ has to be attained in the interior of the interval. $\Box$
 Corollary 53 Let ${I\subset\mathbb{R}}$ be an interval and ${f:I\rightarrow\mathbb{R}}$ a continuous function. Then ${f(I)}$ is also an interval. Proof: Let ${\alpha=\inf f(I)}$ and ${\beta=\sup f(I)}$. By definition of infimum and supremum it is ${f(I)\subset [\alpha , \beta]}$. Using Theorem 51 it is ${]\alpha , \beta[\subset f(I)}$. Thus we have the following four possibilities for ${f(I)}$: ${f(I)=\begin{cases}[\alpha , \beta] \\ ]\alpha , \beta] \\ [\alpha , \beta[ \\ ]\alpha , \beta[ \end{cases}}$ $\Box$

As an application let us look into ${P(x)=a_nx^n+\cdots +a_1x+a_0}$ with ${n}$ odd and ${a_n >0}$. It is ${P(x)\sim a_nx^n}$ for large (positively or negatively) values of ${x}$. Hence ${\displaystyle \lim_{x\rightarrow +\infty} P(x)=+\infty}$ and ${\displaystyle \lim_{x\rightarrow -\infty} P(x)=-\infty}$.

Now

• ${P(x)}$ is a continuous function.
• The domain, ${D}$ of ${P(x)}$ is ${\mathbb{R}}$ which is an interval.
• ${\sup(D)=+\infty}$ and ${\inf(D)=-\infty}$, implying ${P[\mathbb{R}]=]-\infty, +\infty[}$

By corollary 53 it is ${0\in P[\mathbb{R}]}$. Which means that every polynomial of odd degree has at least one zero.
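Combined with corollary 52, this also yields an algorithm: bracket a sign change and bisect. A sketch on an arbitrary odd-degree polynomial of our choosing:

```python
def p(x):
    # an odd-degree polynomial; any such polynomial has at least one real root
    return x**5 - 3.0 * x + 1.0

# bisection on an interval where p changes sign, as corollary 52 guarantees
# a zero strictly inside the bracket
lo, hi = -2.0, 2.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if p(lo) * p(mid) <= 0:
        hi = mid
    else:
        lo = mid
root = 0.5 * (lo + hi)
```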

 Theorem 54 (Continuity of the inverse function) Let ${I}$ be an interval in ${\mathbb{R}}$ and ${f:I\rightarrow\mathbb{R}}$ a continuous function and strictly monotonous. Then ${f^{-1}}$ is continuous and strictly monotonous.Proof: Omitted. $\Box$

This theorem has important applications since it allows us to define the inverse functions of the trigonometric functions.

— 6.10.1. Arcsine function —

In ${[-\pi/2,\pi/2]}$ the function ${\sin x}$ is injective:

Sine function

Hence one can define the inverse of the sine function in this suitably restricted domain.

$\displaystyle y=\sin x\quad\mathrm{with}\quad x\in [-\pi/2,\pi/2]\Leftrightarrow x=\arcsin y$

Where ${\arcsin}$ denotes the inverse of ${\sin}$.

Since ${\sin x:[-\pi/2,\pi/2]\rightarrow[-1,1]}$ it is ${\arcsin x:[-1,1]\rightarrow [-\pi/2,\pi/2]}$. Also by theorem 54 ${\arcsin}$ is continuous.

The graphical representation of ${\arcsin x}$ is

Arcsine function

and it is evident by its representation that ${\arcsin x}$ is an odd function.

— 6.10.2. Arctangent function —

In ${]-\pi/2,\pi/2[}$ the function ${\tan x}$ is injective:

Tangent function

Hence one can define the inverse of the tangent function in this suitably restricted domain.

$\displaystyle y=\tan x\quad\mathrm{with}\quad x\in ]-\pi/2,\pi/2[\Leftrightarrow x=\arctan y$

Where ${\arctan}$ denotes the inverse of ${\tan}$.

Since ${\tan x:]-\pi/2,\pi/2[\rightarrow]-\infty,+\infty[}$ it is ${\arctan x:]-\infty,+\infty[\rightarrow ]-\pi/2,\pi/2[}$. Also by theorem 54 ${\arctan}$ is continuous.

The graphical representation of ${\arctan x}$ is

Arctangent function

and it is evident by its representation that ${\arctan x}$ is an odd function.

— 6.10.3. Arccosine function —

In ${[0,\pi]}$ the function ${\cos x}$ is injective:

Cosine function

Hence one can define the inverse of the cosine function in this suitably restricted domain.

$\displaystyle y=\cos x\quad\mathrm{with}\quad x\in [0,\pi]\Leftrightarrow x=\arccos y$

Where ${\arccos}$ denotes the inverse of ${\cos}$.

Since ${\cos x:[0,\pi]\rightarrow[-1,1]}$ it is ${\arccos x:[-1,1]\rightarrow [0,\pi]}$. Also by theorem 54 ${\arccos}$ is continuous.

The graphical representation of ${\arccos x}$ is

Arccosine function

Another way to define the arccosine function is to first use the relationship

$\displaystyle \cos x=\sin(\pi/2-x)$

to write

$\displaystyle \arccos y=\frac{\pi}{2}-\arcsin y$

— 6.10.4. Continuous functions and intervals —

 Theorem 55 (Extreme value theorem) Let ${[a,b]\subset \mathbb{R}}$ and ${f:[a,b]\rightarrow\mathbb{R}}$ continuous. Then ${f}$ has a maximum and a minimum. Proof: Let ${E}$ be the codomain of ${f}$ and ${s=\sup E}$. By Theorem 17 in post Real Analysis – Sequences II there exists a sequence ${y_n}$ of points in ${E}$ such that ${\lim y_n=s}$. Since the terms of ${y_n}$ are points of ${E}$, for each ${n}$ there exists ${x_n\in [a,b]}$ such that ${y_n=f(x_n)}$. Since ${x_n}$ is a sequence of points in the compact interval (see definition 22 in post Real Analysis – Sequences IV) ${[a,b]}$, by Corollary 27 (also in post Real Analysis – Sequences IV) there exists a subsequence ${x_{\alpha n}}$ of ${x_n}$ that converges to a point in ${[a,b]}$. Let ${c\in [a,b]}$ be such that ${x_{\alpha n}\rightarrow c}$. Since ${f}$ is continuous in ${c}$ it is, by definition of continuity (see definition 34), ${\lim f(x_{\alpha n})=f(c)}$. But ${f(x_{\alpha n})=y_{\alpha n}}$, which is a subsequence of ${y_n}$. Since ${y_n\rightarrow s}$ it also is ${y_{\alpha n}\rightarrow s}$. But ${y_{\alpha n}=f(x_{\alpha n})\rightarrow f(c)}$. In conclusion it is ${s=f(c)}$, hence ${s\in E}$. That is ${s=\max E}$. For the minimum one can construct a similar proof, which is left as an exercise for the reader. $\Box$

One easy way to remember the previous theorem is:

Continuous functions have a maximum and a minimum in compact intervals.

 Theorem 56 Let ${I}$ be a compact interval of ${\mathbb{R}}$ and ${f:I\rightarrow\mathbb{R}}$ continuous. Then ${f(I)}$ is a compact interval.Proof: By corollary 53 ${f(I)}$ is an interval. By theorem 55 ${f(I)}$ has a maximum and a minimum. Hence ${f(I)}$ is of the form ${[\alpha , \beta]}$. Thus ${f(I)}$ is a bounded and closed interval, which is the definition of a compact interval. $\Box$

One easy way to remember the previous theorem is:

Compactness is preserved under a continuous map.

## Real Analysis – Limits and Continuity VI

Posted in 01 Basic Mathematics, 01 Real Analysis on February 15, 2014 by ateixeira

— More properties of continuous functions —

 Definition 35 Let ${D \subset \mathbb{R}}$; ${f: D\rightarrow \mathbb{R}}$ and ${c \in D'\setminus D}$. If ${\displaystyle \lim_{x\rightarrow c}f(x)=a\in \mathbb{R}}$, we can define ${\tilde{f}}$ as: $\displaystyle \tilde{f}(x)=\begin{cases} f(x) \quad x \in D \\ a \quad x=c \end{cases} \ \ \ \ \ (16)$

As an application of the previous definition let us look into ${f(x)= \sin x/x}$. It is ${D= \mathbb{R}\setminus \{0\}}$.

Since ${\displaystyle\lim_{x \rightarrow 0} \sin x/x=1}$ we can define ${\tilde{f}}$ as

$\displaystyle \tilde{f}(x)=\begin{cases} \sin x/x \quad x \neq 0 \\ 1 \quad x=0 \end{cases}$

As another example let us look into ${f(x)=1/x}$. Since ${\displaystyle\lim_{x\rightarrow 0^+}f(x)=+\infty}$ and ${\displaystyle\lim_{x\rightarrow 0^-}f(x)=-\infty}$ we can’t define ${\tilde{f}}$ for ${1/x}$.

Finally if we let ${f(x)=1/x^2}$ we have ${\displaystyle\lim_{x\rightarrow 0^+}f(x)=\displaystyle\lim_{x\rightarrow 0^-}f(x)=+\infty}$. Since the limits are divergent we still can’t define ${\tilde{f}}$.

In general one can say that given ${f: D\rightarrow \mathbb{R}}$ and ${c \in D'\setminus D}$ ${\tilde{f}}$ exists if and only if ${\displaystyle\lim_{x \rightarrow c}f(x)}$ exists and is finite.
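The canonical example ${\sin x/x}$ above can be written down directly. A minimal Python sketch of Definition 35 (my own illustration, not part of the original post):

```python
import math

def f_tilde(x: float) -> float:
    """Continuous extension of sin(x)/x to x = 0 (Definition 35)."""
    if x == 0.0:
        return 1.0          # the limit value a = lim_{x->0} sin(x)/x
    return math.sin(x) / x

# f_tilde agrees with f away from 0 and is continuous at 0:
assert f_tilde(0.5) == math.sin(0.5) / 0.5
assert abs(f_tilde(1e-8) - f_tilde(0.0)) < 1e-15
```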

 Theorem 42 Let ${D \subset \mathbb{R}}$; ${f,g: D\rightarrow \mathbb{R}}$ and ${c \in D}$. If ${f}$ and ${g}$ are continuous functions then ${f+g}$, ${fg}$ and (if ${g(c)\neq 0}$) ${f/g}$ are also continuous functions. Proof: We’ll prove that ${fg}$ is continuous and leave the other cases for the reader. Let ${x_n}$ be a sequence of points in ${D}$ such that ${x_n \rightarrow c}$. Then ${f(x_n) \rightarrow f(c)}$ and ${g(x_n) \rightarrow g(c)}$ (since ${f}$ and ${g}$ are continuous functions). Hence it follows from property ${6}$ of Theorem 19 that ${f(x_n)g(x_n) \rightarrow f(c)g(c)}$, which is the definition of a continuous function. $\Box$

Let ${f(x)=5x^2-2x+4}$. First we note that the constant functions ${f_1(x)=5}$, ${f_2(x)=-2}$ and ${f_3(x)=4}$ are continuous, as is the identity function ${f_4(x)=x}$. Then ${f_5(x)=x^2}$ is continuous since it is the product of ${2}$ continuous functions, and ${f_6(x)=-2x}$ is continuous for the same reason. Finally ${f(x)=5x^2-2x+4}$ is continuous since it is the sum of continuous functions.

 Theorem 43 Let ${D, E \subset \mathbb{R}}$, ${g: D\rightarrow E}$, ${f: E \rightarrow \mathbb{R}}$ and ${c \in D}$. If ${g}$ is continuous in ${c}$ and ${f}$ is continuous in ${g(c)}$, then the composite function ${f \circ g (x)=f(g(x)) }$ is continuous in point ${c}$. Proof: Let ${x_n}$ be a sequence of points in ${D}$ with ${x_n \rightarrow c}$. Hence ${\lim g(x_n)=g(c)}$. If ${f}$ is continuous in ${g(c)}$ it also is ${\lim f(g(x_n))=f(g(c))}$. This is ${\lim (f \circ g)(x_n)= (f \circ g)(c)}$. Thus ${f \circ g}$ is continuous in ${c}$. $\Box$

As an application of the previous theorem let ${f(x)=a^x}$. Since ${a^x=e^{\log a^x}=e^{x \log a}}$ we can write ${a^x}$ as the composition of ${f(t)=e^t}$ with ${t=g(x)=x\log a}$. Since ${f(t)=e^t}$ is a continuous function and ${g(x)=x \log a}$ is also a continuous function it follows that ${a^x}$ is a continuous function (it is the composition of two continuous functions).

By the same argument we can also show that with ${\alpha \in \mathbb{R}}$, ${x^\alpha}$ (for ${x \in \mathbb{R}^+}$) is also a continuous function in ${\mathbb{R}^+}$.

 Theorem 44 Let ${D, E \subset \mathbb{R}}$, ${g: D\rightarrow E}$, ${f: E \rightarrow \mathbb{R}}$ and ${c \in D'}$. Suppose that ${\displaystyle \lim_{x \rightarrow c}g(x)=a}$ and that ${\displaystyle \lim_{t \rightarrow a}f(t)}$ exists. If ${f}$ is continuous it follows ${\displaystyle \lim_{x \rightarrow c}f(g(x))=\lim_{t \rightarrow a}f(t)}$. Proof: Omitted. $\Box$

Find ${\displaystyle \lim_{x \rightarrow +\infty} \sin (1/x)}$.

We can write ${\sin (1/x)= \sin t \circ (t=1/x)}$. Since ${\displaystyle \lim_{x \rightarrow + \infty}(1/x)=0}$ it is, from Theorem 44, ${\displaystyle \lim_{x \rightarrow +\infty} \sin (1/x)=\displaystyle\lim_{t \rightarrow 0}\sin t =0}$.
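Numerically the limit is easy to see; a quick Python check (my own illustration, not part of the original post):

```python
import math

# Theorem 44 in action: as x -> +infinity, 1/x -> 0, so sin(1/x) -> sin 0 = 0
values = [math.sin(1.0 / x) for x in (1e2, 1e4, 1e6, 1e8)]
assert all(abs(v) > abs(w) for v, w in zip(values, values[1:]))  # shrinking
assert abs(values[-1]) < 1e-7
```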

In general if ${\displaystyle \lim_{x \rightarrow c} g(x)= a \in \mathbb{R}}$ it is ${\displaystyle \lim_{x \rightarrow c} \sin (g(x))=\displaystyle\lim_{t \rightarrow a} \sin t = \sin a}$. In conclusion

$\displaystyle \lim_{x \rightarrow c}\sin (g(x))=\sin (\lim_{x \rightarrow c}g(x))$

Suppose that ${\displaystyle \lim_{x \rightarrow c}g(x)=0}$ and let ${\tilde{f}}$ be the function that makes ${\sin x/x}$ be continuous in ${x=0}$.

It is ${\sin x = \tilde{f}(x)x}$, hence it is ${\sin g(x) = \tilde{f}(g(x))g(x)}$.

By definition ${\tilde{f}}$ is continuous so by Theorem 44 ${\displaystyle \lim_{x \rightarrow c}\tilde{f}(g(x))=\displaystyle\lim_{t \rightarrow 0}\tilde{f}(t)=1}$.

Thus we can conclude that when ${\displaystyle \lim_{x \rightarrow c}g(x)=0}$ it is

$\displaystyle \sin (g(x))\sim g(x)\quad (x \rightarrow c)$

For example ${\sin (x^2-1) \sim (x^2-1)\quad (x \rightarrow 1)}$.
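A quick numerical sanity check of this particular equivalence (my own illustration, not part of the original post): the ratio should approach ${1}$ as ${x \rightarrow 1}$.

```python
import math

# The ratio sin(x^2 - 1) / (x^2 - 1) should approach 1 as x -> 1
for h in (1e-1, 1e-3, 1e-5):
    x = 1.0 + h
    ratio = math.sin(x**2 - 1.0) / (x**2 - 1.0)
    assert abs(ratio - 1.0) < h  # closer and closer to 1
```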

Let ${\displaystyle \lim_{x \rightarrow c}g(x)=a \in \mathbb{R}}$. By Theorem 44 it is ${\displaystyle \lim_{x \rightarrow c} e^{g(x)}=\lim_{t \rightarrow a}e^t=e^a}$ (with the conventions ${e^{+\infty}=+\infty}$ and ${e^{-\infty}=0}$). Thus ${\displaystyle \lim_{x \rightarrow c}e^{g(x)}=e^{\displaystyle\lim_{x \rightarrow c}g(x)}}$.

Analogously one can show that ${\displaystyle \lim_{x \rightarrow c} \log g(x)= \log (\lim_{x \rightarrow c}g(x))}$ (with the conventions ${\log (+\infty)=+\infty}$ and ${\log 0^+=-\infty}$).

Let ${a>1}$. It is ${\displaystyle \lim_{x \rightarrow +\infty}a^x =\displaystyle\lim_{x \rightarrow +\infty}e^{x\log a}=e^{\displaystyle\lim_{x \rightarrow +\infty} x\log a}=+\infty }$ (since ${\log a>0}$).

On the other hand for ${\alpha > 0}$ it also is ${\displaystyle \lim_{x \rightarrow +\infty}x^\alpha =\displaystyle\lim_{x \rightarrow +\infty}e^{\alpha \log x}= e^{\displaystyle \lim_{x \rightarrow +\infty}\alpha \log x}=+\infty}$.

The question we want to answer is ${\displaystyle \lim_{x \rightarrow +\infty}\dfrac{a^x}{x^\alpha} }$ since the answer to this question tells us which of the functions tends more rapidly to ${+\infty}$.

 Theorem 45 Let ${a>1}$ and ${\alpha > 0}$. Then $\displaystyle \lim_{x \rightarrow +\infty}\frac{a^x}{x^\alpha}=+\infty \ \ \ \ \ (17)$ Proof: Let ${b=a^{1/(2\alpha)}}$ (${b>1}$). It is ${a=b^{2\alpha}}$. Hence ${a^x=b^{2\alpha x}}$. Moreover ${\dfrac{a^x}{x^\alpha}=\dfrac{b^{2\alpha x}}{x^\alpha}=\dfrac{b^{2\alpha x}}{\sqrt{x}^{2\alpha}}}$, which is $\displaystyle \frac{a^x}{x^\alpha}=\left( \frac{b^x}{\sqrt{x}} \right)^{2\alpha} \ \ \ \ \ (18)$ Let ${[x]}$ denote the integer part (floor) function. Using Bernoulli’s Inequality (${b^m\geq 1+ m(b-1)}$) it is ${b^x\geq b^{[x]}\geq 1+[x](b-1)>[x](b-1)>(x-1)(b-1)}$. Hence ${\dfrac{b^x}{\sqrt{x}}>\dfrac{x-1}{\sqrt{x}}(b-1)=\left( \sqrt{x}-1/\sqrt{x}\right)(b-1)}$. Since ${\displaystyle \lim_{x \rightarrow +\infty}\left( \sqrt{x}-1/\sqrt{x}\right)(b-1)=+\infty}$ it follows from Theorem 32 that ${\displaystyle\lim_{x \rightarrow +\infty} \frac{b^x}{\sqrt{x}}=+\infty}$. Using equation 18 and setting ${t=b^x/\sqrt{x}}$ it is ${\displaystyle\lim_{x \rightarrow +\infty}\frac{a^x}{x^\alpha}=\displaystyle\lim_{t \rightarrow +\infty}t^{2\alpha}=+\infty}$ $\Box$
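It is instructive to watch the theorem happen numerically. A small Python sketch (the values of ${a}$ and ${\alpha}$ below are arbitrary choices of mine, not from the post):

```python
# Numerical illustration of Theorem 45: a^x eventually dwarfs x^alpha
a, alpha = 1.5, 3.0

def ratio(x: float) -> float:
    return a**x / x**alpha

# The ratio a^x / x^alpha keeps increasing and blows up
xs = [50.0, 100.0, 200.0, 400.0]
rs = [ratio(x) for x in xs]
assert rs[0] < rs[1] < rs[2] < rs[3]
assert rs[-1] > 1e50
```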

 Corollary 46 Let ${\alpha > 0}$, then $\displaystyle \lim_{x \rightarrow +\infty}\frac{x^\alpha}{\log x}=+\infty$ Proof: Left as an exercise for the reader (remember to make the convenient change of variable). $\Box$

 Theorem 47 Let ${a>1}$, then ${\displaystyle \lim \frac{a^n}{n!}=0}$. Proof: First remember that ${\log n!=n\log n -n + O(\log n)}$ which is Stirling’s Approximation. Since ${\dfrac{\log n}{n} \rightarrow 0}$ it also is ${\dfrac{O(\log n)}{n} \rightarrow 0}$. And $\displaystyle \frac{a^n}{n!}=e^{\log (a^n/n!)}=e^{n\log a - \log n!}$ Thus $\displaystyle \lim \frac{a^n}{n!}=e^{\lim(n\log a - \log n!)}$ For the argument of the exponential function it is {\begin{aligned} \lim(n\log a - \log n!) &= \lim \left( n\log a-n\log n+n-O(\log n)\right) \\ &=\lim \left(n\left(\log a -\log n+1 -\dfrac{O(\log n)}{n}\right)\right) \\ &=+\infty\times (-\infty)=-\infty \end{aligned}} Hence ${\displaystyle \lim \frac{a^n}{n!}=e^{-\infty}=0}$. $\Box$
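Again a numerical sketch makes the result vivid (my own illustration, with an arbitrary ${a}$):

```python
import math

# Theorem 47: a^n / n! -> 0 for any a > 1; the factorial wins
a = 5.0
terms = [a**n / math.factorial(n) for n in (10, 20, 40, 80)]
assert terms[0] > terms[1] > terms[2] > terms[3]  # strictly decreasing
assert terms[-1] < 1e-60                          # essentially zero already
```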

 Lemma 48 $\displaystyle \lim_{x \rightarrow +\infty}\left( 1+\frac{1}{x}\right)^x=e \ \ \ \ \ (19)$ Proof: Omitted. $\Box$

 Theorem 49 $\displaystyle \lim_{x \rightarrow 0}\frac{\log (1+x)}{x}=1 \ \ \ \ \ (20)$ Proof: Will be proven as an exercise. $\Box$

 Corollary 50 $\displaystyle \lim_{x \rightarrow 0}\frac{e^x-1}{x}=1 \ \ \ \ \ (21)$ Proof: Left as an exercise for the reader. Make the change of variables ${e^x=t+1}$ and use the previous theorem. $\Box$

Generalizing the previous results one can write with full generality:

• ${\sin g(x) \sim g(x) \quad (x \rightarrow c)}$ if ${\displaystyle \lim_{x \rightarrow c} g(x)=0}$
• ${\log (1+g(x)) \sim g(x) \quad (x \rightarrow c)}$ if ${\displaystyle \lim_{x \rightarrow c} g(x)=0}$
• ${e^{g(x)}-1 \sim g(x) \quad (x \rightarrow c)}$ if ${\displaystyle \lim_{x \rightarrow c} g(x)=0}$

## 5 years ago

Posted in 00 Announcements on October 26, 2013 by ateixeira

Today I logged in again to my wordpress.com account, as I do from time to time, and I received a notification telling me that I had joined wordpress.com 5 years ago. Wow!

5 years ago (and a couple of days) I started this blog with a grand plan. I wanted to review my Physics education in order to get my knowledge of Physics solidified and also to provide a good online resource to Physics students all over the world.

The thing is that running a blog like this takes time and will power and I lacked on both of them. Actually running a blog like this only takes will power. You see I’ve recently learned that having no time is a big fat lie!!! What one has are priorities. Simply put, running this blog wasn’t one of my priorities until now.

The thing is that running this blog and once again being part of the Physics gang is now one of my priorities.

What I’m trying to say is: stay tuned in this space because this time it is for real.

## Newtonian Physics – Introduction

Posted in 00 Announcements, 01 Classical Mechanics, 01 Newtonian Formalism, 02 Physics on July 6, 2011 by ateixeira

The first thing I want to say about this post is that its title is actually a misnomer. Much of what I’ll say here is valid for pretty much the rest of the blog, while some things are only pertinent to Newtonian Physics.

The approach taken in this blog for developing the physical theories will be the axiomatic one. I’ll do this because of brevity, internal elegance and consistency. Of course, I’m well aware of the fact that this is only possible with hindsight but I think that one has a lot to gain when physics is presented this way. Maybe the one who has more to gain is the presenter than the presentee, but since this is my blog I’m calling all the shots.

Maybe a word is in order about what the word axiom means, and a little bit of history will be needed (gasp!!! the first self-contradiction!!!). In ancient Greece, the place where one normally thinks real science started to take shape (actually it wasn’t, but this is a whole other can of worms), people who concerned themselves with such matters used two words to signify two things that nowadays are taken as synonyms. Those two words were: axiom and postulate.

Back in the day axiom was taken to be a self-evident truth while a postulate was taken to be something that one would have to take for certain for the sake of constructing an argument. So, axiom was a deep truth of nature while a postulate was something that humans had to resort to in order to reach new knowledge.

As an example of an axiom we have Euclid’s fifth (which revealed itself to be quite the deep mathematical truth!) and as an example of a postulate one has the assumption that Hipparchus made that the sun rays travelled in straight lines from the Sun to the Earth and Moon while he calculated the distances and sizes of those three bodies.

People have become a lot more cynical and in modern day usage those two terms are used as synonyms (and the meaning that prevails is the postulate one).

Axioms arise in Mathematics when one is willing to construct a theory that will unify a body of (not so) disjoint facts into a coherent whole. One should take proper care that the propositions one uses as the building blocks are enough for completeness and internal coherence and can derive the maximum amount of new facts with the minimum amount of assumed propositions.

In Physics things seem to be different at first sight but let me show you that things aren’t that different after all. For starters one knows ever since Galileo that the verbal method of Aristotle – (meta)physics – isn’t the way to go for one to know, predict and even interfere in natural phenomena. For all of this to happen mathematical tools are needed. One gets deeper into the truth of things, and one is also able to get technological progress that, besides messing with the natural environment, also makes people’s lives easier. It isn’t enough to say that bodies fall under gravity; one also has to specify where, with what energy, and in what time interval such a fall happens.

For instance Newton’s theory as it was done by Newton was axiomatic. His three laws are just another name for axioms. They are the propositions that contain the undefined terms whose validity one has to accept in order to achieve new results.

One fundamental difference now arises. While in Mathematics things are normally evaluated in terms of self-consistency and internal elegance (this is a HUGE oversimplification) in physics things are also judged by how well the new results compare to actual measurements in the real world. In Physics physical theories have to be consistent with what we see around us (another HUGE oversimplification). Hence if Newton’s Principia predicted squared orbits for the planets, Newton’s Principia would have to be scrapped.

Another difference is in the way we physicists arrive at the axioms: normally one has some experimental facts and starts thinking about them and how they are linked with each other. Hopefully one will then be able to put the most fundamental properties as building blocks of our theories and call them axioms (in Physics it is more usual to call them laws).

After digressing a little, thanks for reading by the way, let me proceed with the defense of the axiomatic way in Physics. Another point is that I think that knowledge is a lot more sound when one knows where one stands and why one is standing there and not some other place. So, if I tell you what our basics are (it doesn’t matter how we get to them) and derive all that can be derived from them I believe that sounder knowledge is achieved.

The historical/phenomenological method has as its big advantage (according to me at least) that it shows the inner struggles each concept has to endure before being accepted and becoming part of the reigning paradigm. It also makes things more approachable at a first attempt, but I think that the merits of this approach stop at this initial pedagogy.

The downsides of the axiomatic way are that, at first sight, it seems highly artificial, and may not be what most people are used to and want to see when wanting to learn physics.

Moving on from this rather big lecture let me explain what I’ll do in the Newtonian Physics part of this blog:

1. I’ll start off by introducing units of measurement, dimensional analysis and explain why they are important in Physics.
2. A little bit on error propagation and why it matters in physics. Yes, this is mostly a theoretical blog but I consider this to be part of the physicist knowing where he/she stands paradigm.
3. Assume that the reader knows differential and integral calculus (even though I’ll continue my posts on Basic Mathematics).
4. Introduce the Newtonian axioms and what most people think Newton meant to say while introducing them.
5. Do a lot of calculations.
6. Have a lot of fun!

## Real Analysis – Limits and Continuity V

Posted in Uncategorized on April 23, 2011 by ateixeira

The ${\epsilon}$ ${\delta}$ condition is somewhat hard to get into our heads as neophytes. On top of that the similarity of the ${\epsilon}$ ${\delta}$ definitions for limits and continuity can increase the confusion. To try to counter those frequent turns of events, the first part of this post will try to clarify the ${\epsilon}$ ${\delta}$ condition by means of examples.

— ${\epsilon}$ ${\delta}$ for Continuity —

First we’ll start things off with something really simple.

Let ${f(x)=\alpha}$ which is obviously continuous.

The gist of the ${\epsilon}$ ${\delta}$ reasoning is that we want to show that no matter the ${\delta}$ that is chosen at first it is always possible to find an ${\epsilon}$ that satisfies Heine’s criterion for continuity.

Getting back to our function ${f(x)=\alpha}$ we want ${|f(x)-f(c)| < \delta}$. Here ${f(x)=f(c)=\alpha}$ so

{\begin{aligned} |f(x)-f(c)| &< \delta \\ |\alpha-\alpha| &< \delta \\ |0| &< \delta \\ 0 &< \delta \end{aligned}}

Which is trivially true since ${\delta > 0}$ by assumption. Hence any value of ${\epsilon}$ will satisfy Heine’s criterion for continuity and ${f(x)=\alpha}$ is continuous at ${c}$.

Since we never made any assumption about ${c}$ other than ${c \in {\mathbb R}}$ we conclude that ${f(x)=\alpha}$ is continuous in all points of its domain.

Let us now look at ${f(x)=x}$. Again we’ll look at continuity for point ${c}$ (${f(c)=c}$):

{\begin{aligned} |f(x)-f(c)| &< \delta \\ |x-c| &< \delta \end{aligned}}

The last expression is just what we want at this stage since we want to have something of the form ${x-c}$ (the first part of the ${\epsilon}$ ${\delta}$ criterion).

If we let ${\epsilon=\delta}$ it is ${|x-c| < \epsilon}$ and this completes our proof that ${f(x)=x}$ is continuous at point ${c}$.

And again since we never made any assumption about ${c}$ other than ${c \in {\mathbb R}}$ we conclude that ${f(x)=x}$ is continuous in all points of its domain.

Now we let ${f(x)=\alpha x + \beta}$ and will see if ${f(x)}$ is continuous at ${c}$.

{\begin{aligned} |f(x)-f(c)| &< \delta \\ |\alpha x + \beta-(\alpha c + \beta)| &< \delta \\ |\alpha x -\alpha c| &< \delta \\ |\alpha||x-c| &< \delta \\ |x-c| &< \dfrac{\delta}{|\alpha|} \end{aligned}}

Hence if we let ${\epsilon=\delta/ |\alpha|}$ it is ${|x-c|< \epsilon}$ and ${f(x)=\alpha x + \beta}$ is continuous at ${c}$.
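This bookkeeping is easy to check numerically. Below is a Python sketch (my own illustration, with arbitrary ${\alpha}$, ${\beta}$, ${c}$; the sampling stays slightly inside the ${\epsilon}$-window to dodge floating-point edge effects) verifying that the choice ${\epsilon=\delta/|\alpha|}$ really does force ${|f(x)-f(c)|<\delta}$:

```python
import random

# For f(x) = alpha*x + beta at the point c: with eps = delta/|alpha|,
# every x with |x - c| < eps satisfies |f(x) - f(c)| < delta.
alpha, beta, c = -3.0, 2.0, 1.5
f = lambda x: alpha * x + beta

random.seed(0)
for delta in (1.0, 0.1, 0.001):
    eps = delta / abs(alpha)
    for _ in range(1000):
        x = c + random.uniform(-0.99 * eps, 0.99 * eps)  # |x - c| < eps
        assert abs(f(x) - f(c)) < delta
```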

As a final example of Heine’s criterion of continuity we’ll look into ${f(x)=\sin x}$.

{\begin{aligned} |f(x)-f(c)| &< \delta \\ |\sin x-\sin c| &< \delta \end{aligned}}

Since we want something like ${|x-c| < g(\delta)}$ the last expression isn’t very useful to us.

In this case we’ll take an alternative approach which nevertheless works and has exactly the same spirit as what we’ve been using so far.

Please look at every step I make with a critical eye and see if you can really understand what’s going on with this deduction.

{\begin{aligned} |\sin x-\sin c| &= 2\left| \cos\left( \dfrac{x+c}{2}\right)\right| \left| \sin\left( \dfrac{x-c}{2}\right)\right|\\ &\leq 2\left| \sin\left( \dfrac{x-c}{2}\right)\right| \end{aligned}}

Since ${|\sin t|\leq |t|}$ for every real ${t}$, it is

{\begin{aligned} 2\left| \sin\left( \dfrac{x-c}{2}\right)\right| &\leq 2\left|\dfrac{x-c}{2}\right| \\ &= |x-c|\\ &< \epsilon \end{aligned}}

Where the last inequality follows by hypothesis.

That is to say that if we let ${\epsilon=\delta}$ it is ${|x-c|<\epsilon \Rightarrow | \sin x - \sin c | < \delta}$ which is the epsilon delta definition of continuity.

— ${\epsilon}$ ${\delta}$ for Limits —

After looking into some simple ${\epsilon}$ ${\delta}$ proofs for continuity we’ll take a look at ${\epsilon}$ ${\delta}$ for limits.

The procedure is the same, but we’ll state it explicitly so that people can see it in action.

Let ${f(x)=2}$. We want to show that it is ${\displaystyle \lim_{x \rightarrow 1}f(x)=2}$.

{\begin{aligned} |f(x)-2| &< \delta \\ |2-2| &< \delta \\ 0 &< \delta \end{aligned}}

Which is trivially true for any value of ${\delta}$, hence ${\epsilon}$ can be any positive real number.

Let ${f(x)=2x+3}$. We want to show that it is ${\displaystyle \lim_{x \rightarrow 1}f(x)=5}$.

{\begin{aligned} |f(x)-5| &< \delta \\ |2x+3-5| &< \delta \\ |2x-2| &< \delta \\ 2|x-1| &< \delta \\ |x-1| &< \dfrac{\delta}{2} \end{aligned}}

With ${\epsilon=\delta/2}$ we satisfy the ${\epsilon}$ ${\delta}$ for limit.
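One can also let a computer do the sampling. The sketch below (my own illustration, and the helper `works` is a hypothetical name) checks, in the post’s convention, that ${\epsilon=\delta/2}$ works for this limit while an oversized ${\epsilon}$ does not:

```python
# Epsilon-delta for lim_{x->1} (2x+3) = 5: given delta, an eps works when
# 0 < |x - 1| < eps implies |f(x) - 5| < delta.
f = lambda x: 2 * x + 3

def works(eps: float, delta: float, n: int = 10_000) -> bool:
    # Sample points x = 1 + k*eps/n around (but not at) the point 1
    return all(abs(f(1 + k * eps / n) - 5) < delta for k in range(-n, n) if k)

for delta in (1.0, 0.25, 1e-3):
    assert works(0.999 * delta / 2, delta)  # eps = delta/2, shrunk a hair
                                            # to dodge float edge effects
    assert not works(4 * delta, delta)      # too large an eps fails
```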

As a final example let us look at the modified Dirichlet function that was introduced at this post.

$\displaystyle f(x) = \begin{cases} 0 \quad x \in \mathbb{Q}\\ x \quad x \in \mathbb{R}\setminus \mathbb{Q} \end{cases}$

At that post it was proved that for ${a \neq 0}$ ${\displaystyle\lim_{x \rightarrow a}f(x)}$ didn’t exist and it was promised that at a later date I’d show that ${\displaystyle\lim_{x \rightarrow 0}f(x)=0}$ using the epsilon delta condition.

Since we now know what the epsilon delta condition is and already have some experience with it, we will tackle this somewhat more abstruse problem.

{\begin{aligned} |f(x)-f(0)| &< \delta \\ |f(x)-0| &< \delta \end{aligned}}

Since ${f(x)=0}$ or ${f(x)=x}$ we have two cases to look at.

In the first case it is ${|0-0| < \delta}$, which is trivially valid, hence ${\epsilon}$ can be any positive real number.

In the second case it is ${|x-0| < \delta}$. Hence letting ${\epsilon=\delta}$ gets the job done.

Since we proved that ${\displaystyle\lim_{x \rightarrow 0}f(x)=0=f(0)}$ the conclusion is that the modified Dirichlet function that was presented is only continuous at ${x=0}$.

As was said previously, they don’t make local concepts more local than that.