Archive for the 01 Real Analysis Category

Real Analysis – Differential Calculus III

Posted in 01 Basic Mathematics, 01 Real Analysis on April 26, 2015 by ateixeira
Theorem 65 (Cauchy’s theorem) Let {[a,b]\subset\mathbb{R}} and {f,g:[a,b]\rightarrow \mathbb{R}} continuous. If {f} and {g} are differentiable in {]a,b[} and {g'} doesn’t vanish in {]a,b[}, there exists {c \in ]a,b[} such that

\displaystyle \frac{f(b)-f(a)}{g(b)-g(a)}=\frac{f'(c)}{g'(c)} \ \ \ \ \ (32)

Proof: It is {g(a)\neq g(b)}, since if it were {g(a)=g(b)} then {g'} would vanish somewhere in {]a,b[} by Theorem 63.

Let

\displaystyle \lambda=\frac{f(b)-f(a)}{g(b)-g(a)}

and define {\varphi:[a,b]\rightarrow\mathbb{R}} (differentiable in {]a,b[} and continuous in {[a,b]}) by {\varphi(x)=f(x)-\lambda g(x)} {\forall x \in [a,b]}. Then it is

\displaystyle \varphi(a)=f(a)-\lambda g(a)=\ldots=\varphi(b)

Thus by applying Theorem 63 in {[a,b]} there exists {c\in ]a,b[} such that {\varphi'(c)=0}. That is

\displaystyle f'(c)-\lambda g'(c)=0 \Leftrightarrow \lambda=\frac{f'(c)}{g'(c)}

\Box

The previous theorem is perhaps more of a lemma than a theorem per se, because it will allow us to prove more important results. This result can also be seen as providing a method of finding (very) local approximations to functions at a given point, and as such it is the same as a first order Taylor expansion (we’ll see what this means in future posts).

Theorem 66 (Cauchy first limit rule) Let {I \subset \mathbb{R}}, {c\in I'} and {f,g:I\setminus \{c\}\rightarrow \mathbb{R}} differentiable. Moreover, suppose that {g'} doesn’t vanish in {I\setminus \{c\}} and that {\displaystyle \lim _{x\rightarrow c}f(x)=\displaystyle \lim _{x\rightarrow c}g(x)=0}.

If {\displaystyle \lim _{x\rightarrow c}\frac{f'(x)}{g'(x)}} exists, then it is

\displaystyle \lim _{x\rightarrow c}\frac{f(x)}{g(x)}=\lim _{x\rightarrow c}\frac{f'(x)}{g'(x)} \ \ \ \ \ (33)

Proof: First let {c\in\mathbb{R}}. Since {f,g} are continuous in {I\setminus \{ c \}} and {\displaystyle \lim _{x\rightarrow c}f(x)=\displaystyle \lim _{x\rightarrow c}g(x)=0} we can extend both functions by setting {f(c)=g(c)=0}. Let {x_n: \mathbb{N}\rightarrow I\setminus \{c\}} be such that {x_n\rightarrow c^+}.

Applying Cauchy’s Theorem 65 to each interval {[c,x_n]} it is

\displaystyle \frac{f(x_n)}{g(x_n)}=\frac{f(x_n)-f(c)}{g(x_n)-g(c)}=\frac{f'(u_n)}{g'(u_n)}

with {c<u_n<x_n}.

Then {u_n\rightarrow c} by the Squeezed Sequence Theorem 17, and

\displaystyle \lim \frac{f'(u_n)}{g'(u_n)}=\lim _{x\rightarrow c^+}\frac{f'(x)}{g'(x)}

by the definition of limit.

Thus

\displaystyle \lim \frac{f(x_n)}{g(x_n)}=\lim _{x\rightarrow c^+}\frac{f'(x)}{g'(x)}

Hence, by the definition of limit it is

\displaystyle \lim _{x\rightarrow c^+}\frac{f(x)}{g(x)}=\lim _{x\rightarrow c^+}\frac{f'(x)}{g'(x)} \ \ \ \ \ (34)

Analogously if {x_n} is

\displaystyle x_n\rightarrow c^-

applying Cauchy’s Theorem 65 to each interval {[x_n,c]} it is

\displaystyle \frac{f(x_n)}{g(x_n)}=\frac{f(x_n)-f(c)}{g(x_n)-g(c)}=\frac{f(c)-f(x_n)}{g(c)-g(x_n)}=\frac{f'(u_n)}{g'(u_n)}

with {x_n<u_n<c}.

Just like in the previous steps it is

\displaystyle \lim _{x\rightarrow c^-}\frac{f(x)}{g(x)}=\lim _{x\rightarrow c^-}\frac{f'(x)}{g'(x)} \ \ \ \ \ (35)

From equation 34 and equation 35 it is

\displaystyle \lim _{x\rightarrow c}\frac{f(x)}{g(x)}=\lim _{x\rightarrow c}\frac{f'(x)}{g'(x)}

Finally let {c=+\infty}. With the change of variable {x=1/t} it is {x\rightarrow +\infty \Leftrightarrow t\rightarrow 0^+}. From what we proved thus far it is

{\begin{aligned} \displaystyle \lim _{x\rightarrow +\infty}\frac{f(x)}{g(x)} &= \displaystyle \lim_{t \rightarrow 0^+}\frac{f(1/t)}{g(1/t)}\\ &= \displaystyle\lim_{t \rightarrow 0^+}\frac{(f(1/t))'}{(g(1/t))'}\\ &=\displaystyle \lim_{t \rightarrow 0^+}\frac{-1/t^2f'(1/t)}{-1/t^2g'(1/t)}\\ &=\displaystyle \lim_{t \rightarrow 0^+}\frac{f'(1/t)}{g'(1/t)}\\ &=\displaystyle \lim_{x \rightarrow +\infty}\frac{f'(x)}{g'(x)}\\ \end{aligned}}

Hence, for this case it also is {\displaystyle\lim _{x\rightarrow c}\frac{f(x)}{g(x)}=\lim _{x\rightarrow c}\frac{f'(x)}{g'(x)}}.

The case {c=-\infty} can be proven in a similar way with the change of variable {x=-1/t}. \Box

Theorem 67 (Cauchy second limit rule) Let {I \subset \mathbb{R}}, {c\in I'} and {f,g:I\setminus \{c\}\rightarrow \mathbb{R}} differentiable. Moreover, suppose that {g'} doesn’t vanish in {I\setminus \{c\}} and that {\displaystyle \lim _{x\rightarrow c}f(x)=\displaystyle \lim _{x\rightarrow c}g(x)=+\infty}. If {\displaystyle \lim _{x\rightarrow c}\frac{f'(x)}{g'(x)}} exists, then it is

\displaystyle \lim _{x\rightarrow c}\frac{f(x)}{g(x)}=\lim _{x\rightarrow c}\frac{f'(x)}{g'(x)} \ \ \ \ \ (36)

Proof: Left as an exercise for the reader. \Box

The two previous theorems are known by a variety of names in the mathematical literature and are among the most used theorems in the practice of calculating limits.

A few examples will now be used to showcase their power.

Example 1 The functions {e^x} and {x} tend to infinity as {x} goes to infinity. We already saw that the exponential function goes to infinity faster than any polynomial in {x}, but Cauchy’s second rule allows us to prove that result much faster. As always, a method that manages to tame thorny results in a more efficient way surely is a powerful method.

\displaystyle \lim_{x\rightarrow \infty}\frac{e^x}{x} \ \ \ \ \ (37)

{\begin{aligned} \displaystyle \lim _{x\rightarrow +\infty}\frac{e^x}{x} &= \displaystyle \lim_{x \rightarrow +\infty}\frac{(e^x)'}{x'}\\ &= \displaystyle \lim _{x\rightarrow +\infty}\frac{e^x}{1}\\ &= \infty \end{aligned}}
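For readers who want to double-check this kind of computation, here is a minimal sketch in Python using the sympy library (my own illustrative aside, not part of the course material) that evaluates the limit symbolically:

import sympy as sp

x = sp.symbols('x')
# Symbolic check of Example 1; sp.limit evaluates the limit directly.
print(sp.limit(sp.exp(x) / x, x, sp.oo))  # prints oo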

Example 2 The functions {\cos x-1} and {x^2} tend to {0} as {x} goes to {0}. But which one of them tends to {0} more strongly?

{\begin{aligned} \displaystyle \lim_{x\rightarrow 0}\frac{\cos x-1}{x^2} &= \displaystyle \lim_{x\rightarrow 0}\frac{(\cos x-1)'}{(x^2)'}\\ &= \displaystyle \lim_{x\rightarrow 0}\frac{-\sin x}{2x}\\ &= \ldots \end{aligned}}

At the end of the last example we arrived once again at the type of limit {\displaystyle \lim_{x\rightarrow 0}\frac{f(x)}{g(x)}} where {\displaystyle \lim_{x\rightarrow 0}f(x)=\displaystyle \lim_{x\rightarrow 0}g(x)=0}.

But the thing is that Cauchy’s first rule (and in fact the second rule too) can be used more than once. Hence we’ll just apply it again (starting from the beginning) just so we don’t lose our train of thought:

{\begin{aligned} \displaystyle \lim_{x\rightarrow 0}\frac{\cos x-1}{x^2} &= \displaystyle \lim_{x\rightarrow 0}\frac{(\cos x-1)'}{(x^2)'}\\ &= \displaystyle \lim_{x\rightarrow 0}\frac{-\sin x}{2x}\\ &= \displaystyle \lim_{x\rightarrow 0}\frac{-\cos x}{2}\\ &= -\dfrac{1}{2} \end{aligned}}

As an exercise calculate

\displaystyle \lim_{x \rightarrow 0} \frac{e^x-1}{x}

Another mathematical theorem from real analysis which is very important to Physics, on a conceptual level, is what we’ll call Lagrange’s theorem. Even though it is a theorem in Real Analysis it has a very nice interpretation in geometrical and in kinematical terms.

Theorem 68 (Lagrange’s theorem) Let {[a,b]\subset\mathbb{R}} and {f:[a,b]\rightarrow\mathbb{R}} continuous. If {f} is differentiable in {]a,b[} there exists {c\in ]a,b[} such that

\displaystyle \frac{f(b)-f(a)}{b-a}=f'(c) \ \ \ \ \ (38)

Proof: In theorem 65 let {g(x)=x} and the result follows trivially. \Box

As I was saying before the statement and proof of this theorem, it can be interpreted both geometrically and kinematically. The geometric interpretation states that the secant to the graph of {f(x)} in the interval {[a,b]} has a given slope, and that we can always find a tangent to {f} at some point of {]a,b[} whose slope is the same as the secant’s. Hence the straight lines defined by this secant and this tangent are parallel.

In a kinematic sense, if {x} represents time and {f(x)} represents the distance travelled, this result implies that if we traverse the distance {f(b)-f(a)} in the time interval {b-a} then we have an average speed which is

\displaystyle \frac{f(b)-f(a)}{b-a}

Since in this context {f'(x)} can be interpreted as being the instantaneous speed (or just speed for short), the previous result states that there exists a time instant {c} at which your instantaneous speed is the same as your average speed for the whole time interval.
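As a small numerical illustration of Lagrange’s theorem (the function {f(x)=x^3}, the interval {[0,2]} and the bisection search below are all illustrative choices of mine, not part of the theorem):

def f(x): return x**3            # illustrative function
def fprime(x): return 3 * x**2   # its derivative

a, b = 0.0, 2.0
slope = (f(b) - f(a)) / (b - a)  # slope of the secant, here 4

# f' is increasing on [0, 2], so bisection finds c with f'(c) = slope
lo, hi = a, b
for _ in range(60):
    mid = (lo + hi) / 2
    if fprime(mid) < slope:
        lo = mid
    else:
        hi = mid
print(mid)  # ~1.1547, i.e. 2/sqrt(3)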

Example 3 Show that {e^x-1>x\quad \forall x \neq 0}.

Proof: Let {f(t)=e^t}. Assume that {x>0} and apply Theorem 68 to the interval {[0,x]}.

\displaystyle \frac{e^x-e^0}{x-0}=\left( e^t \right)'_{t=c}

with {0<c<x}.

Then

\displaystyle \frac{e^x-1}{x}=e^c>1

Assume now that {x<0} and apply Theorem 68 once again, but this time to the interval {[x,0]}.

\displaystyle \frac{e^0-e^x}{0-x}=\left( e^t \right)'_{t=c}

with {x<c<0}.

Then

\displaystyle \frac{1-e^x}{-x}=e^c<e^0=1\Leftrightarrow 1-e^x<-x\Leftrightarrow e^x-1>x

Notice that in the last step we didn’t invert the sign of the inequality while multiplying by {-x} because {x<0} and consequently {-x>0}. \Box

The last theorem has two important corollaries that we’ll state below.

Corollary 69 Let {I} be an interval in {\mathbb{R}} and {f:I\rightarrow\mathbb{R}} continuous. If {f'} exists and vanishes in the interior of {I}, then {f} is constant.

Proof: By reductio ad absurdum let us assume that {f} is not constant. Then there exist {a,b \in I} such that {a<b} and {f(a)\neq f(b)}. Since {f} is continuous in {[a,b]} and differentiable in {]a,b[}, by theorem 68 it is

\displaystyle \frac{f(b)-f(a)}{b-a}=f'(c)

with {c\in ]a,b[}.

Hence {\frac{f(b)-f(a)}{b-a}=0}, which is absurd since it would imply that {f(b)=f(a)}, contrary to our initial hypothesis. \Box

Corollary 70 Let {I} be an interval in {\mathbb{R}} and {f:I\rightarrow\mathbb{R}} continuous. If {f'} exists and is positive (negative) in the interior of {I}, then {f} is strictly increasing (decreasing).

Proof: Let us take the case {f'>0}. Take {a,b \in I} such that {a<b}. From theorem 68 it is

\displaystyle \frac{f(b)-f(a)}{b-a}=f'(c)>0

with {c \in ]a,b[}.

Since {b-a>0} it is {f(b)>f(a)} and {f} is strictly increasing. \Box

And with these results we finish the Differential Calculus part of our course in Real Analysis. The next theoretical posts of Real Analysis will dwell on the theory of numerical series.

Real Analysis – Differential Calculus II

Posted in 01 Basic Mathematics, 01 Real Analysis on April 29, 2014 by ateixeira
Theorem 60 (Differentiability of the composite function) Let {D, E \subset \mathbb{R}}, {g:D\rightarrow E}, {f:E\rightarrow\mathbb{R}} and {c\in D\cap D'}. If {g} is differentiable in {c} and {f} is differentiable in {g(c)}, then {f\circ g} is differentiable in {c} and it is

\displaystyle  (f\circ g)'(c)=f'(g(c))g'(c) \quad\mathrm{if}\quad \varphi=f(t) \quad\mathrm{with}\quad t=g(x) \ \ \ \ \ (24)

\displaystyle  (f\circ g)'(x)=f'(g(x))g'(x) \quad\mathrm{if}\quad \varphi=f(g(x)) \ \ \ \ \ (25)

Using Leibniz’s notation we can also write the previous theorem as

\displaystyle  \frac{dy}{dx}=\frac{dy}{dt}\cdot\frac{dt}{dx} \ \ \ \ \ (26)

A notation that formally suggests that we can cancel out the {dt}.

Proof: Let {a=g(c)}. Since {f} is differentiable in {a} by Theorem 57 it is

\displaystyle f(t)=f(a)+(f'(a)+\varphi (t))(t-a)\quad \forall t \in E

with {\varphi} continuous and vanishing in {a}.

Taking {g(x)=t} and {g(c)=a} it is

\displaystyle  f(g(x))=f(g(c))+(f'(g(c))+\varphi(g(x)))(g(x)-g(c))\quad\forall x \in D

Hence

\displaystyle  \frac{f(g(x))-f(g(c))}{x-c}=(f'(g(c))+\varphi(g(x)))\frac{g(x)-g(c)}{x-c} \ \ \ \ \ (27)

Since {g} is differentiable in {c} it also is continuous in {c} by Corollary 59. Then {\varphi (g(x))} also is continuous in {c} (by Theorem 43).

Hence

\displaystyle  \lim_{x\rightarrow c}\varphi(g(x))=\varphi (g(c))=\varphi(a)=0

Taking the limit {x\rightarrow c} in 27 it is

\displaystyle  \lim_{x\rightarrow c}\frac{f(g(x))-f(g(c))}{x-c}=f'(g(c))

Which is to say

\displaystyle  (f \circ g)'(c)=f'(g(c))g'(c)

\Box

As an application of Theorem 60 let us look into some simple examples.

  1. {\left( e^{g(x)} \right)'=?}

    Now {e^{g(x)}=f(g(x))} and let {t=g(x)}. Hence

    {\begin{aligned} \left( e^{g(x)} \right)' &= \left(e^t\right)'g'(x)\\ &= e^t g'(x)\\ &= e^{g(x)}g'(x) \end{aligned}} Hence

    \displaystyle  \left( e^{g(x)} \right)'=g'(x) e^{g(x)}

  2. Let {\alpha\in\mathbb{R}} and {x>0} and calculate {\left( x^\alpha \right)'}.

    {\begin{aligned} \left( x^\alpha\right)'&=\left( e^{\alpha\log x}\right)'\\ &=(\alpha\log x)'e^{\alpha\log x}\\ &=\dfrac{\alpha}{x}e^{\alpha\log x}\\ &=\dfrac{\alpha}{x}x^\alpha\\ &=\alpha x^{\alpha -1} \end{aligned}}

    which generalizes the known rule for integer exponents.

    Hence

    \displaystyle \left( x^\alpha\right)'= \alpha x^{\alpha -1}\quad \forall\alpha\in\mathbb{R},\forall x>0

  3. {(\log g(x))'=?}

    Like in the first example the construction of interest is {\log g(x)=f(g(x))} where {f(t)=\log t} and {t=g(x)}.

    Hence

    {\begin{aligned} (\log g(x))'&=(\log t)'g'(x)\\ &= \dfrac{1}{t}g'(x)\\ &=\dfrac{g'(x)}{g(x)} \end{aligned}}

    Hence for {g(x)>0}

    \displaystyle  (\log g(x))'=\frac{g'(x)}{g(x)}

In particular one can calculate {(\log |x|)'}

\displaystyle (\log |x|)'=\frac{|x|'}{|x|}=\begin{cases} \dfrac{1}{|x|}\quad x>0\\-\dfrac{1}{|x|}\quad x<0 \end{cases}

Since {-|x|=x} for {x<0} it always is

\displaystyle  (\log |x|)'=\frac{1}{x}\quad\forall x\neq 0

Theorem 61 (Differentiability of the inverse function) Let {D\subset\mathbb{R}}, {f:D\rightarrow\mathbb{R}} an injective function and {c\in D\cap D'}. If

  • {f} is differentiable in {c}
  • {f'(c)\neq 0}
  • {f^{-1}} is continuous

then {f^{-1}} is differentiable in {f(c)} and it is

\displaystyle  \left( f^{-1} \right)'(f(c))=\frac{1}{f'(c)} \ \ \ \ \ (28)

In Leibniz’s notation one introduces {y=f(x)}, then {x=f^{-1}(y)} and the differentiability of the inverse function equation is

\displaystyle  \frac{dx}{dy}=\frac{1}{\frac{dy}{dx}} \ \ \ \ \ (29)

Proof: Omitted. \Box

Just like in Theorem 60 we will state an application of the previous theorem.

Let {y=\sin x} and {x\in [-\pi /2,\pi /2]}, then {x=\arcsin y}.

Now

  • {f(x)} is differentiable in all points of the interval
  • {f'(x)=\cos x} doesn’t vanish in the open interval {]-\pi /2,\pi /2[}
  • {\arcsin y} is continuous in {[-1,1]}

Then

{\begin{aligned} (\arcsin y)' &= \left( f^{-1}(y) \right)'\\ &=\dfrac{1}{f'(x)}\\ &=\dfrac{1}{\cos x}\\ &=\dfrac{1}{\sqrt{1-\sin^2x}}\\ &=\dfrac{1}{\sqrt{1-y^2}} \end{aligned}}

Finally

\displaystyle  (\arcsin y)'=\frac{1}{\sqrt{1-y^2}} \quad y \in ]-1,1[

Or, writing in a notation that is more usual

\displaystyle  (\arcsin x)'=\frac{1}{\sqrt{1-x^2}} \quad x \in ]-1,1[
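As a quick symbolic check of this computation, a minimal sketch using Python’s sympy library (assumed available; not part of the original derivation):

import sympy as sp

x = sp.symbols('x')
# Derivative of arcsin; sympy returns the same expression we derived.
print(sp.diff(sp.asin(x), x))  # 1/sqrt(1 - x**2)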

In general one can define higher order derivatives by using recursion.

Let us denote the nth derivative of {f} by {f^{(n)}} (since for the 50th derivative it isn’t very practical to use {'} fifty times). One first defines {f^{(0)}=f}. Now for {f^{(n+1)}} it is

\displaystyle  f^{(n+1)}=\left( f^{(n)} \right)'

That is to say that

  1. {f'=\dfrac{df}{dx}}
  2. {f''=\dfrac{d}{dx}\dfrac{df}{dx}=\left( \dfrac{d}{dx} \right)^2 f=\dfrac{d^2}{dx^2}f}
  3. {f'''=\dfrac{d}{dx}\dfrac{d^2}{dx^2}f=\dfrac{d^3}{dx^3}f}
  4. {f^{(n)}=\left( \dfrac{d}{dx} \right)^n f=\dfrac{d^n f}{dx^n}}
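In a computer algebra system this recursion is available directly; a small sketch with Python’s sympy (assumed available, purely illustrative) for the {f^{(n)}} notation above:

import sympy as sp

x = sp.symbols('x')
# sp.diff(f, x, n) computes the n-th derivative f^{(n)}.
print(sp.diff(sp.sin(x), x, 4))  # sin(x): the derivatives of sine cycle with period 4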

Given the previous discussion it makes sense to introduce the following definition

Definition 39 A function {f} is said to be {n} times differentiable in {c} if {f^{(k)}(c)} exists and is finite for all {k\leq n}.

We already know that a differentiable function is continuous (via Corollary 59 in Real Analysis – Differential Calculus I), but does the derivative of a differentiable function also have to be a continuous function?

As a (counter-)example let us look into the following function:

\displaystyle f(x)=\begin{cases} x^2\sin (1/x) \quad &x\neq 0\\ 0 & x=0 \end{cases}

It is easy to see that {f} is differentiable in {\mathbb{R}}

\displaystyle f'(x)=\begin{cases} 2x\sin (1/x)-\cos (1/x) & x\neq 0\\ 0 & x=0 \end{cases}

but {f'} isn’t continuous for {x=0}. The reader is invited to calculate {\displaystyle\lim_{x\rightarrow 0^+}f'(x)} and {\displaystyle\lim_{x\rightarrow 0^-}f'(x)}.
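A small numerical sketch (the sample points are arbitrary choices of mine) makes the oscillation visible:

import math

# f(x) = x^2 sin(1/x) for x != 0 has f'(x) = 2x sin(1/x) - cos(1/x);
# sampling near 0 shows the -cos(1/x) term keeps oscillating between -1 and 1.
def fprime(x):
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)

for k in range(1, 6):
    x = 10.0 ** (-k)
    print(f"x = {x:.0e}, f'(x) = {fprime(x):+.4f}")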

Apparently the derivative of a function either is continuous or is strongly discontinuous. That being said, it makes sense to introduce differentiability classes, which classify a function according to the properties of its derivatives.

Definition 40 A function {f} is said to be of class {C^n} if it is {n} times differentiable and {f^{(n)}} is continuous.

It is easy to see that a function that is of class {C^{n+1}} also is of class {C^n}.

A function is said to be of class {C^\infty} if it has finite derivatives of all orders (which are necessarily continuous).

If {f,g} are {n} times differentiable in {c} then {f+g}, {fg}, and {f/g} (provided {g(c)\neq 0}) are also {n} times differentiable in {c}.

Definition 41 Let {D\subset\mathbb{R}}, {f:D\rightarrow\mathbb{R}} and {c\in D}. {c} is said to be a relative maximum of {f} if

\displaystyle  \exists r>0:x\in V (c,r)\cap D \Rightarrow f(x)\leq f(c) \ \ \ \ \ (30)

Definition 42 Let {D\subset\mathbb{R}}, {f:D\rightarrow\mathbb{R}} and {c\in D}. {c} is said to be a relative minimum of {f} if

\displaystyle  \exists r>0:x\in V (c,r)\cap D \Rightarrow f(x)\geq f(c) \ \ \ \ \ (31)

Theorem 62 (Interior Extremum Theorem) Let {I\subset\mathbb{R}} and let {c} be an interior point of {I}. If {f} has a relative extremum in {c} and {f'(c)} exists then {f'(c)=0}

Proof: Let us suppose without loss of generality that {f} has a relative maximum in {c}. Since {c} is an interior point of {I} and {f'(c)} exists, {f_+'(c)} and {f_-'(c)} exist and are equal.

It is {f_+'(c)=\displaystyle\lim_{x\rightarrow c^+}\dfrac{f(x)-f(c)}{x-c}}

From our hypothesis

\displaystyle  \exists r>0:x\in V (c,r)\cap I \Rightarrow f(x)\leq f(c)

Hence

\displaystyle x\in V(c,r)\cap I\quad\mathrm{and}\quad x>c \Rightarrow \frac{f(x)-f(c)}{x-c}\leq 0

Then by corollary 31 (Real Analysis – Limits and Continuity II) it is

\displaystyle f_+'(c)=\lim_{x\rightarrow c^+}\dfrac{f(x)-f(c)}{x-c}\leq 0

Likewise

\displaystyle  x\in V(c,r)\cap I\quad\mathrm{and}\quad x<c \Rightarrow \frac{f(x)-f(c)}{x-c}\geq 0

Hence

\displaystyle f_-'(c)=\lim_{x\rightarrow c^-}\dfrac{f(x)-f(c)}{x-c}\geq 0

Since {f_+'(c)=f_-'(c)=f'(c)} it has to be {f_+'(c)=f_-'(c)=0} and consequently {f'(c)=0}. \Box

One can visualize the previous theorem in the following way. Imagine that a continuous function has a relative maximum at a point {c} of a given interval. In some vicinity of that point the function values cannot exceed {f(c)}. Since we are assuming that {f(c)} is a maximum, values to its left are increasing as we approach {c} and values to its right are decreasing as we move further away from {c}.

Hence to the left of {c} the derivative of {f} has nonnegative values, while to its right the derivative of {f} has nonpositive values; since we also assume that the derivative exists in {c} we can reason by continuity that its value is {0}.

Theorem 63 (Rolle’s Theorem) Let {[a,b]\subset\mathbb{R}} and {f:[a,b]\rightarrow \mathbb{R}} continuous. If {f} is differentiable in {]a,b[} and {f(a)=f(b)} there exists a point {c\in ]a,b[} such that {f'(c)=0}.

Proof: Since {f} is continuous in the compact interval {[a,b]} it has a maximum and a minimum in {[a,b]} (see Extreme Value Theorem which is theorem 55 in Real Analysis – Limits and Continuity VII).

If for {c\in ]a,b[} {f(c)} is either a maximum or a minimum then by Theorem 62 {f'(c)=0}.

Let {m} denote the minimum and {M} denote the maximum, and let us analyze the case where the extreme values occur at the extremities of the interval. Since by hypothesis {f(a)=f(b)} it is {m=M}. In this case {f} is constant and it trivially is {f'(c)=0\quad\forall c\in ]a,b[} \Box

Corollary 64 Let {I\subset\mathbb{R}} and {f:I\rightarrow\mathbb{R}} continuous. If {f} is differentiable in the interior of {I} and {f'} doesn’t vanish in the interior of {I}, then {f} doesn’t have more than one zero in {I}.

Proof: By reductio ad absurdum suppose that {f} vanishes at two points {a} and {b} of {I} with {a<b}. Applying Theorem 63 in {[a,b]} (since {f(a)=f(b)=0}) there exists {c} in {]a,b[} such that {f'(c)=0}. Hence {f'} vanishes in the interior of {I}, which contradicts our hypothesis. \Box

Real Analysis – Differential Calculus I

Posted in 01 Basic Mathematics, 01 Real Analysis on April 28, 2014 by ateixeira

— 7. Differential Calculus —

Definition 36 Let {D\subset\mathbb{R}}, {f:D\rightarrow\mathbb{R}} and {c\in D\cap D'}. {f} is differentiable in point {c} if the following limit exists

\displaystyle \lim_{x\rightarrow c}\frac{f(x)-f(c)}{x-c} \ \ \ \ \ (22)

This limit is represented by {f'(c)} and is said to be the derivative of {f} in {c}.

The geometric interpretation of the value of the derivative is that it is the slope of the tangent to the graph of {f} at the point of abscissa {c}.

[Figure: the tangent line to a curve at a point]

On the other hand if we represent the time evolution of the position of a particle by the function {x=f(t)} the definition of its average velocity, on the time interval {[t_0,t]}, is

\displaystyle v_a(t_0,t)=\frac{f(t)-f(t_0)}{t-t_0}

If one is interested in knowing the velocity of a particle at a given instant, instead of its average velocity in a given time interval, one has to take the previous definition and make the time interval as small as possible. If {f} is a smooth function then the limit exists and it is the velocity of the particle:

\displaystyle v(t_0)=\lim_{t\rightarrow t_0}v_a(t_0,t)=\lim_{t\rightarrow t_0}\frac{f(t)-f(t_0)}{t-t_0}=f'(t_0)

Hence the concept of derivative unifies two apparently different concepts:

  1. The concept of the tangent to a curve. Which is a geometric concept.
  2. The concept of the instantaneous velocity of a particle. Which is a kinematic concept.

The fact that two concepts that apparently have nothing in common are unified by a single mathematical concept is an indication of the importance of the derivative.

Let {f:D\rightarrow\mathbb{R}}. If {c\in D\cap D_{c^+}'}, then one can define the right derivative in {c} by

\displaystyle f_+'(c)=\lim_{x\rightarrow c^+}\frac{f(x)-f(c)}{x-c}

Let {f:D\rightarrow\mathbb{R}}. If {c\in D\cap D_{c^-}'}, then one can define the left derivative in {c} by

\displaystyle f_-'(c)=\lim_{x\rightarrow c^-}\frac{f(x)-f(c)}{x-c}

If {c\in D_{c^+}'\cap D_{c^-}'}, {f'(c)} exists iff {f_+'(c)} and {f_-'(c)} exist and are equal.

Definition 37 A function {f} is said to be differentiable in {c} if {f'(c)} exists and is finite.
Definition 38 Let {f:D\rightarrow\mathbb{R}} differentiable in {D}. The map {x \in D \rightarrow f'(x)\in\mathbb{R}} is called the derivative of {f} and is represented by {f'}.

With the change of variable {h=x-c} in definition 36 one can also define the derivative by the following expression:

\displaystyle f'(x)=\lim_{h\rightarrow 0}\frac{f(x+h)-f(x)}{h}
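This form of the definition lends itself to a quick numerical sketch (the test function, point and step size below are arbitrary illustrative choices):

# A forward-difference quotient straight from the h -> 0 definition.
def derivative(f, x, h=1e-6):
    return (f(x + h) - f(x)) / h

def f(x):
    return x * x

print(derivative(f, 3.0))  # ~6.0, matching f'(3) = 2*3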

Finally we can also denote the derivative of {f} using Leibniz’s notation.

  • {\Delta x} is the increment along the {x} axis
  • {\Delta f = f(x+\Delta x)-f(x)} is the increment along the {y} axis

If one makes the increments infinitely small, that is to say if the increments are infinitesimals, then we denote them by:

  • {dx} is the infinitely small increment along the {x} axis
  • {df} is the infinitely small increment along the {y} axis

we can write the derivative as

\displaystyle f'(x)=\frac{df}{dx}

As an example let us look into the function {f(x)=e^x}.

{\begin{aligned} f'(x)&=\lim_{h\rightarrow 0}\dfrac{e^{x+h}-e^x}{h}\\ &=e^x\lim_{h\rightarrow 0}\dfrac{e^h-1}{h}\\ &=e^x \end{aligned}}

for all {x\in\mathbb{R}}.

As another example we’ll now look into {f(x)=\log x}

{\begin{aligned} f'(x)&=\lim_{h\rightarrow 0}\dfrac{\log (x+h)-\log x}{h}\\ &=\lim_{h\rightarrow 0}\dfrac{\log \left(x(1+h/x)\right)-\log x}{h}\\ &=\lim_{h\rightarrow 0}\dfrac{\log (1+h/x)}{h}\\ &=\lim_{h\rightarrow 0}\dfrac{h/x}{h}\\ &=1/x \end{aligned}}

for all {x\in\mathbb{R}^+}.

The following relationships are left as an exercise for the reader.

  • {(\sin x)'=\cos x}
  • {(\cos x)'=-\sin x}
Theorem 57 Let {D\subset\mathbb{R}}, {f:D\rightarrow\mathbb{R}} and {c\in D\cap D'}. If {f} is differentiable in {c}, there exists a function {\varphi:D\rightarrow\mathbb{R}}, continuous and vanishing in {c}, such that:

\displaystyle f(x)=f(c)+\left( f'(c)+\varphi(x) \right) (x-c)\quad x\in D \ \ \ \ \ (23)

 

Proof: Define {\varphi (x)} in the following way:

\displaystyle \varphi(x) = \begin{cases} \dfrac{f(x)-f(c)}{x-c}-f'(c) \quad \mathrm{if}\quad x \in D\setminus \{c\}\\ 0 \quad \mathrm{if}\quad x =c \end{cases}

Since {\displaystyle \lim_{x\rightarrow c}\varphi (x)=\lim_{x\rightarrow c} \left(\dfrac{f(x)-f(c)}{x-c}-f'(c)\right)=f'(c)-f'(c)=0 }, {\varphi} is continuous and vanishing in {c}.

To complete the proof one has to check that with the previous construction of {\varphi} the identity of the theorem holds. \Box

Corollary 58 Let {f:D\rightarrow\mathbb{R}} be differentiable in {c}. Then it is {f(x)=f(c)+f'(c)(x-c)+o(x-c)} when {x\rightarrow c}. Proof: Let {r(x)=\varphi (x)(x-c)}. Using Theorem 57 it is

\displaystyle f(x)=f(c)+f'(c)(x-c)+r(x)

Since {\displaystyle\lim_{x\rightarrow c}\varphi (x)=\varphi (c)=0} it is {r(x)=o(x-c)} when {x\rightarrow c}. \Box

Corollary 59 Let {f} be differentiable in {c}. Then {f} is continuous in {c}. Proof: From Theorem 57 it is

{\begin{aligned} \lim_{x\rightarrow c} f(x)&=\lim_{x\rightarrow c}(f(c)+(f'(c)+\varphi (x))(x-c))\\ &=f(c) \end{aligned}} \Box

From corollary 59 it follows that all differentiable functions are continuous too. But is the converse also true? Is it true that all continuous functions are also differentiable?

The answer to the previous question is no. A simple example is the absolute value function:

[Figure: graph of the absolute value function, continuous everywhere but not differentiable at {0}]

An even more extreme example is the Weierstrass function:

\displaystyle \sum_{n=0}^\infty a^n\cos\left( b^n\pi x \right)

with {0<a<1}, {b} a positive odd integer and {ab>1+\frac{3}{2}\pi}.

[Figure: graph of the Weierstrass function]

 

Real Analysis – Limits and Continuity VII

Posted in 01 Basic Mathematics, 01 Real Analysis on March 8, 2014 by ateixeira

— 6.10. Global properties of continuous functions —

Theorem 51 (Intermediate Value Theorem) Let {I=[a,b] \subset \mathbb{R}} and {f: I \rightarrow \mathbb{R}} a continuous function. Let {u \in \mathbb{R}} be such that {\inf f(I)<u<\sup f(I)}. Then there exists {c \in I} such that {f(c)=u}. Proof: Omitted. \Box

Intuitively speaking the previous theorem states that the graph of a continuous function doesn’t have holes in it if the domain of the function doesn’t have any holes in it too.

Corollary 52 Let {[a,b]} be an interval in {\mathbb{R}}, {f:[a,b]\rightarrow\mathbb{R}} a continuous function. Suppose that {f(a)f(b)<0}. Then {\exists c \in ]a,b[} such that {f(c)=0}. Proof: In the range of {f} there exist values bigger than {0} and values smaller than {0}. Hence {\sup f(I)>0} and {\inf f(I)<0}. Therefore {0} is strictly between the infimum and supremum of the range of {f}. By hypothesis the function doesn’t vanish at the extremities of the interval, hence the {0} value has to be attained in the interior of the interval \Box
Corollary 53 Let {I\subset\mathbb{R}} be an interval, {f:I\rightarrow\mathbb{R}} a continuous function. Then {f(I)} is also an interval. Proof: Let {\alpha=\inf f(I)} and {\beta=\sup f(I)}. By definition of infimum and supremum it is {f(I)\subset [\alpha , \beta]}. Using Theorem 51 it is {]\alpha ,\beta[\subset f(I)}. Thus we have the following four possibilities for {f(I)}:

{f(I)=\begin{cases}[\alpha , \beta] \\ ]\alpha , \beta] \\ [\alpha , \beta[ \\ ]\alpha , \beta[ \end{cases}}

\Box

As an application let us look into {P(x)=a_nx^n+\cdots +a_1x+a_0} with {n} odd and {a_n >0}. It is {P(x)\sim a_nx^n} for large (positively or negatively) values of {x}. It is {\displaystyle \lim_{x\rightarrow +\infty} P(x)=+\infty} and {\displaystyle \lim_{x\rightarrow -\infty} P(x)=-\infty}.

Now

  • {P(x)} is a continuous function.
  • The domain, {D} of {P(x)} is {\mathbb{R}} which is an interval.
  • Since {\displaystyle \lim_{x\rightarrow +\infty} P(x)=+\infty} and {\displaystyle \lim_{x\rightarrow -\infty} P(x)=-\infty} it is {\sup P[\mathbb{R}]=+\infty} and {\inf P[\mathbb{R}]=-\infty}, implying {P[\mathbb{R}]=]-\infty, +\infty[}

By corollary 53 it is {0\in P[\mathbb{R}]}. Which means that every polynomial function of odd degree has at least one zero.
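Corollary 52 is also the idea behind the bisection method for locating such a zero numerically; a minimal sketch, with an illustrative cubic of my own choosing:

def P(x):
    return x**3 - x - 2   # illustrative odd-degree polynomial

a, b = 1.0, 2.0           # P(1) = -2 < 0 and P(2) = 4 > 0, so a zero lies in between
for _ in range(60):
    m = (a + b) / 2
    if P(a) * P(m) <= 0:  # keep the half-interval where the sign change survives
        b = m
    else:
        a = m
print(m)  # ~1.52138, a zero of P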

Theorem 54 (Continuity of the inverse function) Let {I} be an interval in {\mathbb{R}} and {f:I\rightarrow\mathbb{R}} a continuous and strictly monotone function. Then {f^{-1}} is continuous and strictly monotone. Proof: Omitted. \Box

This theorem has important applications since it allows us to define the inverse functions of the trigonometric functions.

— 6.10.1. Arcsine function —

In {[-\pi/2,\pi/2]} the function {\sin x} is injective:

[Figure: the sine function restricted to {[-\pi/2,\pi/2]}]

Hence one can define the inverse of the sine function in this suitably restricted domain.

\displaystyle y=\sin x\quad\mathrm{with}\quad x\in [-\pi/2,\pi/2]\Leftrightarrow x=\arcsin y

Where {\arcsin} denotes the inverse of {\sin}.

Since {\sin x:[-\pi/2,\pi/2]\rightarrow[-1,1]} it is {\arcsin x:[-1,1]\rightarrow [-\pi/2,\pi/2]}. Also by theorem 54 {\arcsin} is continuous.

The graphical representation of {\arcsin x} is

[Figure: the arcsine function]

and it is evident by its representation that {\arcsin x} is an odd function.

— 6.10.2. Arctangent function —

In {]-\pi/2,\pi/2[} the function {\tan x} is injective:

[Figure: the tangent function restricted to {]-\pi/2,\pi/2[}]

Hence one can define the inverse of the tangent function in this suitably restricted domain.

\displaystyle y=\tan x\quad\mathrm{with}\quad x\in ]-\pi/2,\pi/2[\Leftrightarrow x=\arctan y

Where {\arctan} denotes the inverse of {\tan}.

Since {\tan x:]-\pi/2,\pi/2[\rightarrow]-\infty,+\infty[} it is {\arctan x:]-\infty,+\infty[\rightarrow ]-\pi/2,\pi/2[}. Also by theorem 54 {\arctan} is continuous.

The graphical representation of {\arctan x} is

[Figure: the arctangent function]

and it is evident by its representation that {\arctan x} is an odd function.

— 6.10.3. Arccosine function —

In {[0,\pi]} the function {\cos x} is injective:

[Figure: the cosine function restricted to {[0,\pi]}]

Hence one can define the inverse of the cosine function in this suitably restricted domain.

\displaystyle y=\cos x\quad\mathrm{with}\quad x\in [0,\pi]\Leftrightarrow x=\arccos y

Where {\arccos} denotes the inverse of {\cos}.

Since {\cos x:[0,\pi]\rightarrow[-1,1]} it is {\arccos x:[-1,1]\rightarrow [0,\pi]}. Also by theorem 54 {\arccos} is continuous.

The graphical representation of {\arccos x} is

[Figure: the arccosine function]

Another way to define the arccosine function is to first use the relationship

\displaystyle \cos x=\sin(\pi/2-x)

to write

\displaystyle \arccos y=\frac{\pi}{2}-\arcsin y

— 6.10.4. Continuous functions and intervals —

Theorem 55 (Extreme value theorem) Let {[a,b]\subset \mathbb{R}} and {f:[a,b]\rightarrow\mathbb{R}} continuous. Then {f} has a maximum and a minimum. Proof: Let {E} be the range of {f} and {s=\sup E}. By Theorem 17 in post Real Analysis – Sequences II there exists a sequence {y_n} of points in {E} such that {\lim y_n=s}.

Since the terms of {y_n} are points of {E}, for each {n} there exists {x_n\in [a,b]} such that {y_n=f(x_n)}.

Since {x_n} is a sequence of points in the compact interval (see definition 22 in post Real Analysis – Sequences IV) {[a,b]}, by Corollary 27 (also in post Real Analysis – Sequences IV) there exists a subsequence {x_{\alpha n}} of {x_n} that converges to a point in {[a,b]}.

Let {c\in [a,b]} be such that {x_{\alpha n}\rightarrow c}.

Since {f} is continuous in {c} it is, by definition of continuity, (see definition 34) {\lim f(x_{\alpha n})=f(c)}. But {f(x_{\alpha n})=y_{\alpha n}}, which is a subsequence of {y_n}. Since {y_n\rightarrow s} it also is {y_{\alpha n}\rightarrow s}.

But {y_{\alpha n}=f(x_{\alpha n})\rightarrow f(c)}.

In conclusion it is {s=f(c)}, hence {s\in E}. That is {s=\max E}.

For the minimum one can construct a similar proof. This proof is left as an exercise for the reader. \Box

One easy way to remember the previous theorem is:

Continuous functions have a maximum and a minimum in compact intervals.

Theorem 56 Let {I} be a compact interval of {\mathbb{R}} and {f:I\rightarrow\mathbb{R}} continuous. Then {f(I)} is a compact interval. Proof: By corollary 53 {f(I)} is an interval. By theorem 55 {f(I)} has a maximum and a minimum.

Hence {f(I)} is of the form {[\alpha , \beta]}.

Thus {f(I)} is a bounded and closed interval, which is the definition of a compact interval. \Box

One easy way to remember the previous theorem is:

Compactness is preserved under a continuous map.

Real Analysis – Limits and Continuity VI

Posted in 01 Basic Mathematics, 01 Real Analysis on February 15, 2014 by ateixeira

— More properties of continuous functions —

Definition 35

Let {D \subset \mathbb{R}}; {f: D\rightarrow \mathbb{R}} and {c \in D'\setminus D}. If {\displaystyle \lim_{x\rightarrow c}f(x)=a\in \mathbb{R}}, we can define {\tilde{f}} as:

\displaystyle   \tilde{f}(x)=\begin{cases} f(x) \quad x \in D \\ a \quad x=c \end{cases} \ \ \ \ \ (16)

As an application of the previous definition let us look into {f(x)= \sin x/x}. It is {D= \mathbb{R}\setminus \{0\}}.

Since {\displaystyle\lim_{x \rightarrow 0} \sin x/x=1} we can define {\tilde{f}} as

\displaystyle  \tilde{f}(x)=\begin{cases} \sin x/x \quad x \neq 0 \\ 1 \quad x=0 \end{cases}

As another example let us look into {f(x)=1/x} Since {\displaystyle\lim_{x\rightarrow 0^+}f(x)=+\infty} and {\displaystyle\lim_{x\rightarrow 0^-}f(x)=-\infty} we can’t define {\tilde{f}} for {1/x}.

Finally if we let {f(x)=1/x^2} we have {\displaystyle\lim_{x\rightarrow 0^+}f(x)=\displaystyle\lim_{x\rightarrow 0^-}f(x)=+\infty}. Since the limit isn’t finite we still can’t define {\tilde{f}}.

In general one can say that given {f: D\rightarrow \mathbb{R}} and {c \in D'\setminus D} {\tilde{f}} exists if and only if {\displaystyle\lim_{x \rightarrow c}f(x)} exists and is finite.

Theorem 42 Let {D \subset \mathbb{R}}; {f,g: D\rightarrow \mathbb{R}} and {c \in D}. If {f} and {g} are continuous functions then {f+g}, {fg} and, if {g(c)\neq 0}, {f/g} are also continuous functions.

Proof: We’ll prove that {fg} is continuous and leave the other cases to the reader.

Let {x_n} be a sequence of points in {D} such that {x_n \rightarrow c}. Then {f(x_n) \rightarrow f(c)} and {g(x_n) \rightarrow g(c)} (since {f} and {g} are continuous functions).

Hence it follows from property {6} of Theorem 19 that {f(x_n)g(x_n) \rightarrow f(c)g(c)}. Which is the definition of a continuous function. \Box

Let {f(x)=5x^2-2x+4}. First we note that {f_1(x)=5}, {f_2(x)=-2} and {f_3(x)=4} are continuous functions. Now {f_4(x)=x} is also a continuous function. {f_5(x)=x^2} is continuous since it is the product of {2} continuous functions. {f_6(x)=-2x} is continuous since it is the product of {2} continuous functions. Finally {f(x)=5x^2-2x+4} is continuous since it is the sum of continuous functions.

Theorem 43 Let {D, E \subset \mathbb{R}}, {g: D\rightarrow E}, {f: E \rightarrow \mathbb{R}} and {c \in D}. If {g} is continuous in {c} and {f} is continuous in {g(c)}, then the composite function {f \circ g (x)=f(g(x)) } is continuous in point {c}.

Proof: Let {x_n} be a sequence of points in {D} with {x_n \rightarrow c}. Hence {\lim g(x_n)=g(c)}. If {f} is continuous in {g(c)} it also is {\lim f(g(x_n))=f(g(c))}. This is {\lim (f \circ g)(x_n)= (f \circ g)(c)}. Thus {f \circ g} is continuous in {c}. \Box

As an application of the previous theorem let {f(x)=a^x}. Since {a^x=e^{\log a^x}=e^{x \log a}} we can write {a^x} as the composition of {f(t)=e^t} with {t=g(x)=x\log a}. Since {f(t)=e^t} is a continuous function and {g(x)=x \log a} is also a continuous function, it follows that {a^x} is a continuous function (it is the composition of two continuous functions).

By the same argument we can also show that with {\alpha \in \mathbb{R}}, {x^\alpha} (for {x \in \mathbb{R}^+}) is also a continuous function in {\mathbb{R}^+}.

Theorem 44 Let {D, E \subset \mathbb{R}}, {g: D\rightarrow E}, {f: E \rightarrow \mathbb{R}} and {c \in D'}. Suppose that {\displaystyle \lim_{x \rightarrow c}g(x)=a} and that {\displaystyle \lim_{t \rightarrow a}f(t)} exists. If {f} is continuous it follows {\displaystyle \lim_{x \rightarrow c}f(g(x))=\lim_{t \rightarrow a}f(t)}.

Proof: Omitted. \Box

Find {\displaystyle \lim_{x \rightarrow +\infty} \sin (1/x)}.

We can write {\sin (1/x)= \sin t \circ (t=1/x)}. Since {\displaystyle \lim_{x \rightarrow + \infty}(1/x)=0} it is, from Theorem 44, {\displaystyle \lim_{x \rightarrow +\infty} \sin (1/x)=\displaystyle\lim_{t \rightarrow 0}\sin t =0}.

In general if {\displaystyle \lim_{x \rightarrow c} g(x)= a \in \mathbb{R}} it is {\displaystyle \lim_{x \rightarrow c} \sin (g(x))=\displaystyle\lim_{t \rightarrow a} \sin t = \sin a}. In conclusion

\displaystyle  \lim_{x \rightarrow c}\sin (g(x))=\sin (\lim_{x \rightarrow c}g(x))

Suppose that {\displaystyle \lim_{x \rightarrow c}g(x)=0} and let {\tilde{f}} be the function that makes {\sin x/x} be continuous in {x=0}.

It is {\sin x = \tilde{f}(x)x}, hence it is {\sin g(x) = \tilde{f}(g(x))g(x)}.

By definition {\tilde{f}} is continuous so by Theorem 44 {\displaystyle \lim_{x \rightarrow c}\tilde{f}(g(x))=\displaystyle\lim_{t \rightarrow 0}\tilde{f}(t)=1}.

Thus we can conclude that when {\displaystyle \lim_{x \rightarrow c}g(x)=0} it is

\displaystyle  \sin (g(x))\sim g(x)\quad (x \rightarrow c)

For example {\sin (x^2-1) \sim (x^2-1)\quad (x \rightarrow 1)}.

Let {\displaystyle \lim_{x \rightarrow c}g(x)=a \in \mathbb{R}}. By Theorem 44 it is {\displaystyle \lim_{x \rightarrow c} e^{g(x)}=\lim_{t \rightarrow a}e^t=e^a} (with the conventions {e^{+\infty}=+\infty} and {e^{-\infty}=0}). Thus {\displaystyle \lim_{x \rightarrow c}e^{g(x)}=e^{\displaystyle\lim_{x \rightarrow c}g(x)}}.

Analogously one can show that {\displaystyle \lim_{x \rightarrow c} \log g(x)= \log (\lim_{x \rightarrow c}g(x))} (with the conventions {\log (+\infty)=+\infty} and {\log 0=-\infty}).

Let {a>1}. It is {\displaystyle \lim_{x \rightarrow +\infty}a^x =\displaystyle\lim_{x \rightarrow +\infty}e^{x\log a}=e^{\displaystyle\lim_{x \rightarrow +\infty} x\log a}=+\infty } (since {\log a>0}).

On the other hand for {\alpha > 0} it also is {\displaystyle \lim_{x \rightarrow +\infty}x^\alpha =\displaystyle\lim_{x \rightarrow +\infty}e^{\alpha \log x}= e^{\displaystyle \lim_{x \rightarrow +\infty}\alpha \log x}=+\infty}.

The question we want to answer is the value of {\displaystyle \lim_{x \rightarrow +\infty}\dfrac{a^x}{x^\alpha} }, since the answer to this question tells us which of the functions tends more rapidly to {+\infty}.

Theorem 45 Let {a>1} and {\alpha > 0}. Then

\displaystyle   \lim_{x \rightarrow +\infty}\frac{a^x}{x^\alpha}=+\infty \ \ \ \ \ (17)

Proof: Let {b=a^{1/(2\alpha)}} ({b>1}). It is {a=b^{2\alpha}}. Hence {a^x=b^{2\alpha x}}. Moreover {\dfrac{a^x}{x^\alpha}=\dfrac{b^{2\alpha x}}{x^\alpha}=\dfrac{b^{2\alpha x}}{\sqrt{x}^{2\alpha}}}.

which is

\displaystyle   \frac{a^x}{x^\alpha}=\left( \frac{b^x}{\sqrt{x}} \right)^{2\alpha} \ \ \ \ \ (18)

Let {[x]} denote the integer part (floor) of {x}. Using Bernoulli’s Inequality ({b^m\geq 1+ m(b-1)}) it is {b^x\geq b^{[x]}\geq 1+[x](b-1)>[x](b-1)>(x-1)(b-1)}.

Hence {\dfrac{b^x}{\sqrt{x}}>\dfrac{x-1}{\sqrt{x}}(b-1)=\left( \sqrt{x}-1/\sqrt{x}\right)(b-1)}.

Since {\displaystyle \lim_{x \rightarrow +\infty}\left( \sqrt{x}-1/\sqrt{x}\right)(b-1)=+\infty} it follows from Theorem 32 that {\displaystyle\lim_{x \rightarrow \infty} \frac{b^x}{\sqrt{x}}=+\infty}.

Using 18 and setting {t=b^x/\sqrt{x}} it is {\displaystyle\lim_{x \rightarrow \infty}\frac{a^x}{x^\alpha}=\displaystyle\lim_{t \rightarrow +\infty}t^{2\alpha}=+\infty} \Box

Corollary 46 Let {\alpha > 0}, then

\displaystyle \lim_{x \rightarrow +\infty}\frac{x^\alpha}{\log x}=+\infty

Proof: Left as an exercise for the reader (remember to make the convenient change of variable). \Box

Theorem 47 Let {a>1}, then {\displaystyle \lim \frac{a^n}{n!}=0}. Proof: First remember that {\log n!=n\log n -n + O(\log n)}, which is Stirling’s Approximation.

Since {\dfrac{\log n}{n} \rightarrow 0} it also is {\dfrac{O(\log n)}{n} \rightarrow 0}.

And

\displaystyle \frac{a^n}{n!}=e^{\log (a^n/n!)}=e^{n\log a - \log n!}

Thus

\displaystyle \lim \frac{a^n}{n!}=e^{\lim(n\log a - \log n!)}

For the argument of the exponential function it is

{\begin{aligned} \lim(n\log a - \log n!) &= \lim n\log a-n\log n+n-O(\log n) \\ &=\lim \left(n\left(\log a -\log n+1 -\dfrac{O(\log n)}{n}\right)\right) \\ &=+\infty\times -\infty=-\infty \end{aligned}}

Hence {\displaystyle \lim \frac{a^n}{n!}=e^{-\infty}=0}. \Box
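A quick numerical check of this collapse (the base {a=5} is an arbitrary illustrative choice):

from math import factorial

# a^n / n! eventually goes to 0: once n > a, each new factor a/n is < 1.
a = 5
for n in [1, 10, 20, 40]:
    print(n, a**n / factorial(n))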

Lemma 48

\displaystyle   \lim_{x \rightarrow +\infty}\left( 1+\frac{1}{x}\right)^x=e \ \ \ \ \ (19)

Proof: Omitted. \Box

Theorem 49

\displaystyle   \lim_{x \rightarrow 0}\frac{\log (1+x)}{x}=1 \ \ \ \ \ (20)

Proof: Will be proven as an exercise. \Box

Corollary 50

\displaystyle   \lim_{x \rightarrow 0}\frac{e^x-1}{x}=1 \ \ \ \ \ (21)

Proof: Left as an exercise for the reader. Make the change of variables {e^x=t+1} and use the previous theorem. \Box

Generalizing the previous results one can write with full generality:

  • {\sin g(x) \sim g(x) \quad (x \rightarrow c)} if {\displaystyle \lim_{x \rightarrow c} g(x)=0}
  • {\log (1+g(x)) \sim g(x) \quad (x \rightarrow c)} if {\displaystyle \lim_{x \rightarrow c} g(x)=0}
  • {e^{g(x)}-1 \sim g(x) \quad (x \rightarrow c)} if {\displaystyle \lim_{x \rightarrow c} g(x)=0}

Real Analysis Exercises III

Posted in 01 Basic Mathematics, 01 Real Analysis on July 29, 2009 by ateixeira

1.

a) Calculate { \displaystyle \sum_{k=p}^{m}(u_{k+1}-u_k)} and {\displaystyle\sum_{k=p}^{m}(u_k - u_{k+1})}

{\displaystyle \sum_{k=p}^{m}(u_{k+1}-u_k)=u_{p+1}-u_{p}+u_{p+2}-u_{p+1}+\ldots +u_{m+1}-u_{m}}

As we can see the first term cancels out with the fourth, the third with the sixth, and so on; all we are left with is the second term and the second-to-last term:

{\displaystyle \sum_{k=p}^{m}(u_{k+1}-u_k) = u_{m+1}-u_p}

{\begin{aligned} \displaystyle \sum_{k=p}^{m}(u_k - u_{k+1})&= - \sum_{k=p}^{m}(u_{k+1}-u_k)\\ &= - (u_{m+1}-u_p)\\ &= u_p-u_{m+1} \end{aligned}}

b) Calculate {\displaystyle \lim \sum_{k=1}^n\dfrac{1}{k(k+1)}} using the previous result.

{\displaystyle \lim \sum_{k=1}^n\dfrac{1}{k(k+1)}= \lim \sum_{k=1}^n \left( \frac{1}{k}-\frac{1}{k+1} \right) }

Defining {u_k=1/k} the previous sum can be written as

{\begin{aligned} \displaystyle \lim \sum_{k=1}^n \left( u_k-u_{k+1} \right)&=\lim (u_1 - u_{n+1})\\ &= \lim \left(1-\frac{1}{n+1}\right)\\ &=1 \end{aligned}}
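A short exact check of these partial sums, using Python’s fractions module (an illustrative aside of mine, not part of the exercise):

from fractions import Fraction

# The partial sum telescopes to 1 - 1/(n+1); verify exactly for n = 10.
n = 10
s = sum(Fraction(1, k * (k + 1)) for k in range(1, n + 1))
print(s, 1 - Fraction(1, n + 1))  # both print 10/11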

This last result apparently has a funny story. Mengoli was the first one to calculate {\displaystyle \lim \sum_{k=1}^n\dfrac{1}{k(k+1)}=1}.

At the time this happened people did research in mathematics (I’m using this term rather abusively) in a somewhat different vein. They didn’t rush to print what they found like they do today.

Many times people held out their results for years while tormenting their rivals about what they found.

This is exactly what Mengoli did. In the times he was around the theory of series wasn’t much developed, thus this result, that we can calculate without being particularly brilliant in Mathematics, was something to take note of.

So, he wrote some letters to people saying that {\displaystyle \lim \sum_{k=1}^n\dfrac{1}{k(k+1)}=1}, but not how he concluded that.

The other mathematicians he sent the result to didn’t know about his methods and all they could do was to add numbers up explicitly; the only thing they could see was that even though they summed more and more terms the result was always less than {1} and got nearer and nearer to {1}.

Of course this didn’t prove anything, since summing up a billion terms isn’t the same as summing an infinite number of terms, and everyone but Mengoli was dumbfounded by that surprising result.

c) Calculate {\displaystyle \sum_{k=0}^{n-1}(2k+1) }

In this exercise what we are calculating is the sum of the first {n} consecutive odd numbers. This result was already known to the ancient Greeks and it was nothing short of astounding to them.

But enough with the talk already:

{\begin{aligned} \displaystyle \sum_{k=0}^{n-1}(2k+1)&=\sum_{k=0}^{n-1}\left[ (k+1)^2-k^2\right]\\ &= \sum_{k=0}^{n-1}(u_{k+1}-u_k) \end{aligned}}

With {u_k=k^2}

Using the now familiar formula

{\begin{aligned} \displaystyle \sum_{k=0}^{n-1}(2k+1) &= (n-1+1)^2-0^2\\ &= n^2 \end{aligned}}

An astounding result indeed!

Just look at {\displaystyle \sum_{k=0}^{n-1}(2k+1)=n^2}, interpret the result and try not to be as surprised as the ancient Greeks were.

2.

a) Using 1.a) and {a^k=a^k\dfrac{a-1}{a-1}\quad (a \neq 1)} calculate {\displaystyle \sum_{k=0}^{n-1} a^k }

{\begin{aligned} \displaystyle \sum_{k=0}^{n-1} a^k &= \displaystyle\sum_{k=0}^{n-1} \left[ a^k\frac{a-1}{a-1}\right]\\ &= \displaystyle \frac{1}{a-1}\sum_{k=0}^{n-1}\left( a^{k+1}-a^k\right)\\ &= \displaystyle\frac{1}{a-1}(a^n-1)\\ &= \displaystyle\frac{a^n-1}{a-1} \end{aligned}}

b) Using a) establish the Bernoulli inequality {a^n-1 \geq n(a-1)} if {a > 0} and {n \in \mathbb{Z}^+}

If {a=1} it is {1-1=n(1-1) \Rightarrow 0=0} which is trivially true.

If {n=1} it is {a-1=a-1} which is trivially true.

For {n \geq 2 } and {a>1} it is:

{\begin{aligned} \displaystyle \sum_{k=0}^{n-1}a^k&= 1+a+a^2+\ldots+a^{n-1}\\ &> 1+1+\ldots+1\\ &= n \end{aligned}}

Thus

{\begin{aligned} \dfrac{a^n-1}{a-1} &> n \\ a^n-1 &> n(a-1) \end{aligned}}

Since {a > 1}

Finally if {0 < a <1 } it is

{\begin{aligned} \displaystyle \sum_{k=0}^{n-1}a^k&= 1+a+a^2+\ldots+a^{n-1}\\ &< 1+1+\ldots+1\\ &= n \end{aligned}}

Thus

{\begin{aligned} \dfrac{a^n-1}{a-1} & < n \\ a^n - 1 & > n(a-1) \end{aligned}}

Since {a < 1}

c) Use b) to calculate {\lim a^n} if {a > 1} and then conclude that {\lim a^n=0} if {|a| < 1}.

By b) it is

{\begin{aligned} a^n &> n(a-1)+1 \\ \lim a^n &\geq \lim \left( n(a-1)+1 \right)= +\infty \end{aligned}}

Hence {\lim a^n = +\infty \quad (a>1)}

For the second part of the exercise we will calculate instead {\lim |a^n|} since we know that { u_n \rightarrow 0 \Leftrightarrow |u_n| \rightarrow 0}

Let us make the change of variable {t=1/a}. Thus {|a|=|1/t|} with {|t|>1} and

{\begin{aligned} \lim |a^n| &= \lim |1/t|^n\\ &= \dfrac{1}{\lim |t|^n}\\ &= \dfrac{1}{+\infty}\\ &=0 \end{aligned}}

3. Consider the sequences {u_n=\left( 1+\dfrac{1}{n} \right)^n } and {v_n=\left( 1+\dfrac{1}{n} \right)^{n+1}}

a) Calculate {\dfrac{v_n}{v_{n+1}}} and {\dfrac{u_{n+1}}{u_n}}. Then use Bernoulli’s inequality to show that {v_n} is strictly decreasing and that {u_n} is strictly increasing.

{\begin{aligned} \dfrac{v_n}{v_{n+1}} &= \dfrac{\left( 1+1/n \right)^{n+1}}{\left(1+1/(n+1)\right)^{n+2}}\\ &=\dfrac{\left(\dfrac{n+1}{n}\right)^{n+1}}{\left( \dfrac{n+2}{n+1} \right)^{n+2}}\\ &= \dfrac{n}{n+1}\dfrac{\left(\dfrac{n+1}{n}\right)^{n+2}}{\left( \dfrac{n+2}{n+1} \right)^{n+2}}\\ &=\dfrac{n}{n+1}\left( \dfrac{(n+1)^2}{n(n+2)} \right)^{n+2}\\ &= \dfrac{n}{n+1}\left( \dfrac{n^2+2n+1}{n(n+2)} \right)^{n+2}\\ &=\dfrac{n}{n+1}\left( \dfrac{n(n+2)+1}{n(n+2)} \right)^{n+2}\\ &= \dfrac{n}{n+1}\left( 1+\dfrac{1}{n(n+2)} \right)^{n+2} \end{aligned}}

After having calculated {v_n/v_{n+1}} we can use Bernoulli’s inequality, with {a=1+\dfrac{1}{n(n+2)}} , to conclude that {v_n} is strictly decreasing.

{\begin{aligned} \dfrac{n}{n+1}\left( 1+\dfrac{1}{n(n+2)} \right)^{n+2} &> \dfrac{n}{n+1}\left(1 + \dfrac{n+2}{n(n+2)} \right)\\ &= \dfrac{n}{n+1}(1+1/n)\\ &= \dfrac{n}{n+1}\dfrac{n+1}{n}\\ &= 1 \end{aligned}}

Thus {v_n} is strictly decreasing.

With a similar technique we can prove that

{ \displaystyle u_{n+1}/u_n=\dfrac{n+1}{n}\left( 1- \dfrac{1}{(n+1)^2}\right)^{n+1}}

After that by using Bernoulli’s inequality like in the previous example one can show that {u_{n+1}/u_n>1} and thus {u_n} is strictly increasing.

c) Using a) and b) and {\lim u_n = e} prove the following inequalities: {(1+1/n)^n < e <(1+1/n)^{n+1}}.

{\begin{aligned} \lim v_n&= \lim(1+1/n)^n(1+1/n)\\ &= e\times 1\\ &= e \end{aligned}}

We already know that {v_n} is strictly decreasing with {\lim v_n = e}, so it is {e<v_n=(1+1/n)^{n+1}}

On the other hand {u_n} is increasing and {\lim u_n=e} so {(1+1/n)^n<e}.

Hence {(1+1/n)^n<e<(1+1/n)^{n+1}}

d) Use c) to prove that { \displaystyle \frac{1}{n+1}<\log (n+1)-\log n <\frac{1}{n}}

{ \begin{aligned} (1+1/n)^n &< e \\ n \log \left( \dfrac{n+1}{n} \right) &< 1 \\ \log(n+1) - \log n &< \dfrac{1}{n} \end{aligned} }

And now for the second part of the inequality:

{ \begin{aligned} e &< \left(1+\dfrac{1}{n}\right)^{n+1} \\ 1 &< (n+1)\log \left(\dfrac{n+1}{n}\right) \\ \dfrac{1}{n+1} &< \log (n+1) -\log n \end{aligned}}

In conclusion it is { \dfrac{1}{n+1}<\log (n+1)- \log n < \dfrac{1}{n} }

4.

a) Using 3d) show that

{ \displaystyle 1+\log k < (k+1)\log (k+1)-k\log k < 1+ \log(k+1) }

From

{ \begin{aligned} \dfrac{1}{k+1} &< \log (k+1) - \log k \\ 1 &< (k+1)\log(k+1) - (k+1)\log k \\ 1+ \log k &< (k+1)\log(k+1)-k \log k \end{aligned}}

With a similar reasoning we can also prove that {(k+1)\log(k+1)-k\log k < 1+ \log(k+1)}.

Thus it is {1+\log k < (k+1)\log(k+1)-k\log k < 1+ \log(k+1)}

b) Sum the previous inequalities between {1 \leq k \leq n-1}.

{\begin{aligned} \displaystyle \sum_{k=1}^{n-1}(1+ \log k) &< \sum_{k=1}^{n-1} ((k+1)\log(k+1)-k \log k)\\ &< \displaystyle \sum_{k=1}^{n-1}(1+\log(k+1)) \end{aligned}}

Now

{\begin{aligned} \displaystyle \sum_{k=1}^{n-1} (1+ \log k) &= \sum_{k=1}^{n-1}1+\sum_{k=1}^{n-1}\log k\\ &= n-1 +\sum_{k=1}^{n-1}\log k \end{aligned}}

And

{\begin{aligned} \displaystyle \sum_{k=1}^{n-1}\log k &= \log 1 + \log2 +\ldots+\log(n-1)\\ &=\log((n-1)!) \end{aligned}}

It also is

{\begin{aligned} \displaystyle \sum_{k=1}^{n-1}((k+1)\log(k+1) - k\log k)&= n\log n -\log 1\\ &=n\log n \end{aligned}}

And {\displaystyle \sum_{k=1}^{n-1}(1+\log(k+1))=n-1+\log n!}

Thus it is {n-1+\log(n-1)! < n\log n < n-1+\log n!}

c) Conclude the following inequalities { n \log n -n +1 < \log n! < n \log n -n+1+\log n} and establish Stirling's approximation { \displaystyle \log n! = n\log n -n +r_n} with {1 < r_n < 1+\log n} (equivalently, {n! = C_n (n/e)^n} with {C_n=e^{r_n}} satisfying {e < C_n < en})

{ \begin{aligned} n-1 + \log (n-1)! &< n\log n \\ \log (n-1)! &< n\log n -n+1 \\ \log n! &< n\log n -n +1+\log n \end{aligned}}

On the other hand

{\begin{aligned} n\log n &< n-1 + \log n! \\ n\log n -n +1 &< \log n! \end{aligned} }

Thus

{\begin{aligned} n\log n -n +1 &< \log n! \\ &< n\log n -n +1 +\log n \end{aligned}}

And from this follows {1 < \log n! -n\log n+n < 1+\log n}

Defining {r_n=\log n! -n\log n+n} it is {\log n! = n\log n-n+r_n} with {1 < r_n < 1+\log n}
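These bounds are easy to sanity-check numerically; a small sketch using math.lgamma, which computes {\log n!} without overflow (an illustrative aside):

import math

# Check 1 < r_n < 1 + log n, where r_n = log n! - n log n + n.
for n in [5, 10, 100, 1000]:
    r = math.lgamma(n + 1) - n * math.log(n) + n
    print(n, round(r, 4), 1 < r < 1 + math.log(n))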

5.

Show that {\log \left(1+\dfrac{1}{n}\right)\sim \dfrac{1}{n}} and that {\log \left(1+\dfrac{1}{n^2}\right)\sim \dfrac{1}{n^2}}

We know that

{ \begin{aligned} \dfrac{1}{n+1} &< \log(n+1)-\log n < \dfrac{1}{n} \\ \dfrac{1}{n+1} &< \log\left( \dfrac{n+1}{n}\right) < \dfrac{1}{n} \\ \dfrac{1}{n+1} &< \log\left( 1+\dfrac{1}{n}\right) <\dfrac{1}{n} \\ \dfrac{1/(n+1)}{1/n} &< \dfrac{\log (1+1/n)}{1/n}<1 \\ \lim \dfrac{n}{n+1} &\leq \lim \dfrac{\log (1+1/n)}{1/n} \leq \lim 1 \\ 1 &\leq \lim \dfrac{\log (1+1/n)}{1/n} \leq 1 \end{aligned}}

Thus {\lim \dfrac{\log (1+1/n)}{1/n}=1} and this is equivalent to saying that {\log \left(1+\dfrac{1}{n}\right)\sim \dfrac{1}{n}}

Let {u_n = \dfrac{\log (1+1/n)}{1/n}}. In this case it is {\dfrac{\log (1+1/n^2)}{1/n^2}=u_{n^2}}. Since {u_{n^2}} is a subsequence of {u_n} we know that {\lim u_{n^2}= \lim u_n} and so it also is {\log \left(1+\dfrac{1}{n^2}\right)\sim \dfrac{1}{n^2}}.

6. Show that {u_n \sim v_n} and {v_n \sim w_n \Rightarrow u_n \sim w_n }

By hypothesis it is {u_n=h_n v_n}, {v_n=t_n w_n} with {h_n,t_n \rightarrow 1}.

Substituting the second equality in the first we obtain {u_n = h_n t_n w_n}.

Let {s_n = h_n t_n} and we write {u_n =s_n w_n } with {\lim s_n = \lim h_n \lim t_n =1\times 1=1}.

Thus {u_n \sim w_n}

7. Let {u_n = O\left(1/n\right)} and {v_n = O (1/ \sqrt{n})}. Show that {u_n v_n = o ( 1/n^{4/3})}.

{u_n = h_n 1/n} and {v_n = t_n 1/ \sqrt{n}} with {h_n} and {t_n} bounded sequences. Now

{\begin{aligned} u_n v_n &= \dfrac{h_n}{n} \dfrac{t_n}{\sqrt{n}}\\ &= \dfrac{h_n t_n}{n^{3/2}}\\ &=\dfrac{h_n t_n}{n^{1/6}}\dfrac{1}{n^{4/3}} \end{aligned}}

Let {s_n = \dfrac{h_n t_n}{n^{1/6}}}; it is {\lim s_n = \lim \dfrac{h_n t_n}{n^{1/6}} = 0} since {h_n t_n} is bounded.

Thus {u_n v_n = o (1/n^{4/3})}

8. Using Stirling’s approximation show that {\log n! = n\log n -n + O(\log n)}

We know that it is {\log n! = n\log n -n +r_n} with { 1< r_n < 1+\log n}. Thus

{\begin{aligned} 0 &<\dfrac{1}{\log n}\\ &< \dfrac{r_n}{\log n}\\ &< \dfrac{1}{\log n} +1\\ &\leq \dfrac{1}{\log 2}+1 \end{aligned}}

Where we used the fact that { \dfrac{1}{\log n}+1} is a decreasing function.

Thus {\dfrac{r_n}{\log n}} is bounded and so {r_n=O(\log n)} as desired.

Real Analysis – Limits and Continuity IV

Posted in 01 Basic Mathematics, 01 Real Analysis on July 1, 2009 by ateixeira

As an application of theorem 35 let us look into the functions {f(x)=e^x} and {g(x)=\log x}.

Now {f:\mathbb{R} \rightarrow \mathbb{R^+}} and is a strictly increasing function, and {g:\mathbb{R^+} \rightarrow \mathbb{R}} also is a strictly increasing function.

By theorem 35 it is {\displaystyle \lim_{x \rightarrow +\infty}\exp x = \mathrm{sup} [\mathbb{R^+}] = +\infty} and {\displaystyle \lim_{x \rightarrow -\infty} \exp x= \mathrm{inf} [\mathbb{R^+}] = 0}.

As for {g(x)} it is {\displaystyle \lim_{x \rightarrow +\infty} \log x=\sup [\mathbb{R}]=+\infty} and {\displaystyle \lim_{x \rightarrow 0} \log x = \inf [\mathbb{R}]=-\infty}.

Definition 33 Let {D \subset \mathbb{R}}; {f,g: D \rightarrow \mathbb{R}}, and {c \in D^\prime}. Let us suppose that there exists {h: D \rightarrow \mathbb{R}} such that {f(x) = h(x)g(x) }.

  1. If {\displaystyle \lim_{x \rightarrow c} h(x)=1 } we say that {f(x)} is asymptotically equal to {g(x)} when {x \rightarrow c} and write {f(x) \sim g(x)\,\, (x \rightarrow c)}.
  2. If {\displaystyle \lim_{x \rightarrow c} h(x) = 0} we say that {f(x)} is little-o of {g(x)} when {x \rightarrow c} and write { f(x) = o (g(x)) \,\, (x \rightarrow c)}.
  3. If {h(x)} is bounded in some neighborhood of {c} we say that {f(x)} is big-o of {g(x)} when {x \rightarrow c} and write {f(x)=O(g(x)) \;(x \rightarrow c)}.

If in the previous definition {g(x)} doesn’t equal zero:

  1. { f(x) \sim g(x) \Leftrightarrow \displaystyle \lim_{x \rightarrow c} \frac{f(x)}{g(x)} = 1}.
  2. { f(x) = o (g(x)) \,\, (x \rightarrow c) \Leftrightarrow \displaystyle \lim_{x \rightarrow c} \frac{f(x)}{g(x)} = 0}.
  3. { f(x) = O(g(x)) \,\, (x \rightarrow c) \Leftrightarrow \dfrac{f(x)}{g(x)} } is bounded in some neighborhood of {c}.

These notions work exactly as they worked for sequences and they give the same type of information about the behavior of the functions in question.

Theorem 36 Let {D \subset \mathbb{R}}; {f,g,f_0,g_0: D \rightarrow \mathbb{R}}, and {c \in D^\prime}. Then:

  1. If {f(x) \sim g(x) \,\, (x \rightarrow c)} and {\displaystyle \lim_{x \rightarrow c}g(x) = a}, then {\displaystyle \lim_{x \rightarrow c} f(x) = a}
  2. If {f(x) \sim f_0(x) \,\, (x \rightarrow c)} and {g(x) \sim g_0(x) \,\, (x \rightarrow c)}, then {f(x)g(x) \sim f_0(x)g_0(x) \,\, (x \rightarrow c)} and {f(x)/g(x) \sim f_0(x)/g_0(x) \,\, (x \rightarrow c)}.

Proof: Left as an exercise. \Box

As an example of the previous definitions we can say, with full generality, that for any polynomial function we can keep track of the term with the leading degree if we are interested in how it behaves for larger and larger values.

But on the other hand if we are interested on how the polynomial function behaves near the origin we have to keep track of the term with the smaller degree. To see that this is indeed so let us introduce the following example:

\displaystyle  f(x) = x^2+x

Now {x^2+x=(x+1)x}. If we take {h(x)=x+1} it is {\displaystyle \lim_{x \rightarrow 0} h(x)=1} and so it is {x^2+x \sim x \,\, (x \rightarrow 0)}.

Another example that has a lot of interest to us is:

\displaystyle  \sin x \sim x \,\, (x \rightarrow 0)

We can see that it is so because of {\displaystyle \lim_{x \rightarrow 0} \frac{\sin x}{x} = 1}

— 6.6. Epsilon-delta condition —

And it is time for us to introduce the concept of limit using the { \epsilon - \delta } condition.

Once again we are walking into regions of greater and greater rigor at the expense of having to use more abstract concepts while we are doing it. Things are going to get a little harder for people that aren’t used to these types of reasoning, but please bear with me and you’ll find it rewarding when you get used to it.

The point of the { \epsilon - \delta } condition is to avoid using fuzzy concepts like near, input signals, and output signals, or the somewhat weak definition of limit we have been using so far.

Theorem 37 (Heine’s Theorem)

Let {D \subset \mathbb{R}}, {f: D \rightarrow \mathbb{R}}, {c \in D^\prime} and {a \in \overline{\mathbb{R}}}. {\displaystyle \lim_{x \rightarrow c} f(x) = a} if and only if

\displaystyle  \forall \delta > 0 \, \exists \epsilon >0 : \; x \in V(c,\epsilon) \cap (D \setminus \left\lbrace c \right\rbrace ) \Rightarrow f(x) \in V(a, \delta)

Proof: Omitted. \Box

In case you are wondering what that means, the straightforward answer is that it means exactly what your idea of a function having a limit at a given point is (I’m assuming you have the right idea). It tells us that if a function indeed has limit {a} at point {c} then, if we restrict ourselves to points near {c}, the images of those points are all near {a}.

Once again I tell the reader to look at this as if it were a game played between two (slightly odd) people. One of them is choosing the {\delta} and the other is choosing the {\varepsilon}. But this game isn’t just about choosing. The first player gets to choose any {\delta} he wants, but the second has to choose the right {\varepsilon} that makes the condition hold.

If he can prove that he has an {\varepsilon} for every {\delta} that the other player chooses, then he succeeds in the game and the function does have limit {a} at point {c}.

Theorem 38

Let {D \subset \mathbb{R}}, {f: D \rightarrow \mathbb{R}}, and {c \in D^\prime}. If {\displaystyle \lim_{x \rightarrow c} f(x)} exists and is finite, then there exists a neighborhood of {c} where {f(x)} is bounded.

Proof:

Let {\displaystyle \lim_{x \rightarrow c} f(x) = a \in \mathbb{R}}. By theorem 37 with {\delta=1} there exists {\varepsilon > 0} such as

{\begin{aligned} x \in V(c,\varepsilon)\cap(D\setminus\left\lbrace c \right\rbrace ) &\Rightarrow f(x) \in V(a,1) \\ &\Rightarrow f(x) \in \left] a-1, a+1\right[ \end{aligned}}

Thus {x\in V(c,\varepsilon)\cap(D\setminus\left\lbrace c \right\rbrace)\Rightarrow a-1 < f(x) < a+1}.

So {x \in V(c,\varepsilon) \cap D \Rightarrow f (x) \begin{cases} \leq \mathrm{max} \left\lbrace a+1,f(c)\right\rbrace \\ \geq \mathrm{min}\left\lbrace a-1,f(c)\right\rbrace \end{cases} }

and {f(x)} is bounded in {V(c,\varepsilon)} \Box

If {\displaystyle \lim_{x \rightarrow c} f(x)/g(x)} exists and is finite, then {f(x)= O(g(x))\,\, (x \rightarrow c)}, since in this case it is {h(x)=f(x)/g(x)} and there exists some neighborhood of {c} where {h(x)} is bounded.

After this one may be interested in knowing how we can translate {\displaystyle \lim_{x \rightarrow c^+} f(x) = a} to a {\varepsilon - \delta} condition.

In this case we are considering {f(x)} only in the set {D_{c^+}} and so what we get is:

\displaystyle  \forall \delta > 0 \exists \varepsilon > 0: \, x \in V(c,\varepsilon)\cap D_{c^+} \Rightarrow f(x) \in V(a,\delta)

Theorem 39 Let {D \subset \mathbb{R}}, {f:D \rightarrow \mathbb{R}}, and {c \in D^\prime}. If {\displaystyle \lim_{x \rightarrow c^-}f(x)=\lim_{x \rightarrow c^+}f(x)=a}, then {\displaystyle \lim_{x \rightarrow c}f(x)=a}.

Proof: Let {\delta > 0}. By the {\varepsilon-\delta} condition it is:

\displaystyle  \exists \varepsilon_1>0:x \in V(c,\varepsilon_1)\cap D_{c^+} \Rightarrow f(x) \in V(a,\delta)

\displaystyle  \exists \varepsilon_2>0:x \in V(c,\varepsilon_2)\cap D_{c^-} \Rightarrow f(x) \in V(a,\delta)

Thus by taking {\varepsilon =\mathrm{min} \left\lbrace \varepsilon_1, \varepsilon_2 \right\rbrace } it follows that {x \in V(c,\varepsilon) \cap (D \setminus \left\lbrace c \right\rbrace )} implies {x \in V(c,\varepsilon) \cap D_{c^+}} or {x \in V(c,\varepsilon) \cap D_{c^-}}, and in either case {f(x) \in V(a,\delta)}

In conclusion:

{ \forall \delta > 0 \exists \varepsilon > 0: x \in V(c,\varepsilon)\cap (D\setminus \left\lbrace c \right\rbrace ) \Rightarrow f(x) \in V(a,\delta) } which is equivalent to saying that {\displaystyle \lim_{x \rightarrow c} f(x)=a}. \Box

Definition 34 Let {D \subset \mathbb{R}}; {f: D \rightarrow \mathbb{R}} and {c \in D}. We say that {f(x)} is continuous in point {c} if for all sequences {x_n} of points in {D} such that {\lim x_n = c} it is {\lim f(x_n)=f(c)}.

A function is said to be continuous if it is continuous in all points in {D}.

A few examples to clarify definition 34:

  1. \displaystyle  f(x)=|x| \quad \forall x \in \mathbb{R}

    Let {c \in \mathbb{R}} and {x_n} a sequence such that {x_n \rightarrow c}. Then {f(x_n)=|x_n|} and {\lim f(x_n) = \lim |x_n| = |c|}. In conclusion {f(x_n) \rightarrow f(c)}, which is equivalent to saying that {f} is continuous in {c}. Since {c} can be any given point, {f(x)=|x|} is continuous in {\mathbb{R}}.

  2. Let {f(x)= \sin x} and {x_n} a sequence such that {x_n \rightarrow \theta}. It is {\lim \sin x_n= \sin \theta} and by the same reasoning {\sin x} is also continuous.
  3. In general if {x_n \rightarrow c} it is {\lim f(x_n)=f(c)=f(\lim x_n)}. So for {\exp (x)} it is {\lim \exp (x_n)=\exp (\lim x_n)}.

    If {x_n \rightarrow +\infty } it follows that {\lim \exp(x_n)=+\infty } and for {x_n \rightarrow -\infty} it follows that {\lim \exp(x_n)=0}.

    Thus if we define {\exp (+\infty)=+\infty} and {\exp (-\infty)=0} it follows that it always is {\lim \exp (x_n)=\exp (\lim x_n)}.

  4. Analogously we can define {\log (+\infty)= +\infty} and {\log 0 = -\infty} and it always is {\lim \log x_n = \log (\lim x_n)}.
Theorem 40 (Heine’s theorem for continuity)

Let {D \subset \mathbb{R}}, {f:D \rightarrow \mathbb{R}} and {c \in D}. {f} is continuous in {c} if and only if

\displaystyle  \forall \delta>0 \,\,\exists \, \varepsilon > 0: \, x \in D \wedge |x-c| < \varepsilon \Rightarrow |f(x)-f(c)| < \delta

Or written in terms of neighborhoods

\displaystyle  \forall \delta>0 \,\,\exists \, \varepsilon > 0: \, x \in V(c,\varepsilon) \cap D \Rightarrow f(x) \in V(f(c),\delta)

Proof: Omitted. \Box

As can be seen the {\varepsilon - \delta} condition for continuity in point {c} is very similar to the one for limit {a} in point {c}.

To finish this post I’ll just state a theorem that sheds some light on the connections of these two concepts:

Theorem 41 Let {D \subset \mathbb{R}}, {f:D \rightarrow \mathbb{R}} and {c \in D \cap D^\prime}. Then {f} is continuous in point {c} if and only if {\displaystyle \lim_{x \rightarrow c} f(x) = f(c)}.

Proof: Omitted. \Box

So as this theorem shows, the connection between continuity and limit is indeed a deep one, but we can look at the concept of limit as an auxiliary tool to determine whether a function is continuous, and we should not confuse the two.

In the next post I intend to write a little bit more about continuity but in the mean time a very good text about it can be found here
