Real Analysis – Limits and Continuity VII

Posted in 01 Basic Mathematics, 01 Real Analysis on March 8, 2014 by ateixeira

— 6.10. Global properties of continuous functions —

Theorem 51 (Intermediate Value Theorem) Let {I=[a,b] \subset \mathbb{R}} and let {f: I \rightarrow \mathbb{R}} be a continuous function. Let {u \in \mathbb{R}} be such that {\inf f(I)<u<\sup f(I)}. Then there exists {c \in I} such that {f(c)=u}. Proof: Omitted. \Box

Intuitively speaking, the previous theorem states that the graph of a continuous function has no holes in it if the domain of the function has no holes in it either.

Corollary 52 Let {[a,b]} be an interval in {\mathbb{R}} and {f:[a,b]\rightarrow\mathbb{R}} a continuous function. Suppose that {f(a)f(b)<0}. Then {\exists c \in ]a,b[} such that {f(c)=0}. Proof: In the range of {f} there exist values bigger than {0} and values smaller than {0}. Hence {\sup f([a,b])>0} and {\inf f([a,b])<0}. Therefore {0} is strictly between the infimum and the supremum of the range of {f}, so by Theorem 51 the value {0} is attained. By hypothesis the function doesn't vanish at the endpoints of the interval, hence the value {0} has to be attained in the interior of the interval. \Box
Corollary 53 Let {I\subset\mathbb{R}} be an interval and {f:I\rightarrow\mathbb{R}} a continuous function. Then {f(I)} is also an interval. Proof: Let {\alpha=\inf f(I)} and {\beta=\sup f(I)}. By definition of infimum and supremum it is {f(I)\subset [\alpha , \beta]}. Using Theorem 51 it is {]\alpha , \beta[\subset f(I)}. Thus we have the following four possibilities for {f(I)}:

{f(I)=\begin{cases}[\alpha , \beta] \\ ]\alpha , \beta] \\ [\alpha , \beta[ \\ ]\alpha , \beta[ \end{cases}}

\Box

As an application let us look into {P(x)=a_nx^n+\cdots +a_1x+a_0} with {n} odd and {a_n >0}. It is {P(x)\sim a_nx^n} for large (positively or negatively) values of {x}. Hence {\displaystyle \lim_{x\rightarrow +\infty} P(x)=+\infty} and {\displaystyle \lim_{x\rightarrow -\infty} P(x)=-\infty}.

Now

  • {P(x)} is a continuous function.
  • The domain, {D} of {P(x)} is {\mathbb{R}} which is an interval.
  • {\sup P[\mathbb{R}]=+\infty} and {\inf P[\mathbb{R}]=-\infty} (from the limits computed above), implying, together with corollary 53, {P[\mathbb{R}]=]-\infty, +\infty[}

Hence it is {0\in P[\mathbb{R}]}, which means that every polynomial function of odd degree has at least one real zero.
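
To make this concrete, here is a minimal Python sketch (mine, not part of the course text) that uses the intermediate value property to locate a zero of the odd degree polynomial {P(x)=x^3-2x-5} by bisection. The bracket {[2,3]} is chosen by hand so that {P} changes sign on it.

```python
def bisect_root(P, a, b, tol=1e-10):
    """Shrink a sign-changing bracket [a, b] until it is shorter than tol.

    By corollary 52 a continuous P with P(a)*P(b) < 0 vanishes in ]a, b[.
    """
    assert P(a) * P(b) < 0, "need a sign change on [a, b]"
    while b - a > tol:
        m = (a + b) / 2
        if P(a) * P(m) <= 0:  # the sign change is in the left half
            b = m
        else:                 # otherwise it is in the right half
            a = m
    return (a + b) / 2

# P has odd degree, so it takes both signs and must vanish somewhere.
P = lambda x: x**3 - 2*x - 5
print(bisect_root(P, 2.0, 3.0))  # ~2.09455148...
```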

Theorem 54 (Continuity of the inverse function) Let {I} be an interval in {\mathbb{R}} and {f:I\rightarrow\mathbb{R}} a continuous and strictly monotone function. Then {f^{-1}} is continuous and strictly monotone. Proof: Omitted. \Box

This theorem has important applications since it allows us to define the inverse functions of the trigonometric functions.

— 6.10.1. Arcsine function —

In {[-\pi/2,\pi/2]} the function {\sin x} is injective:

Sine function

Hence one can define the inverse of the sine function in this suitably restricted domain.

\displaystyle y=\sin x\quad\mathrm{with}\quad x\in [-\pi/2,\pi/2]\Leftrightarrow x=\arcsin y

Where {\arcsin} denotes the inverse of {\sin}.

Since {\sin x:[-\pi/2,\pi/2]\rightarrow[-1,1]} it is {\arcsin x:[-1,1]\rightarrow [-\pi/2,\pi/2]}. Also by theorem 54 {\arcsin} is continuous.

The graphical representation of {\arcsin x} is

Arcsine function

and it is evident by its representation that {\arcsin x} is an odd function.

— 6.10.2. Arctangent function —

In {]-\pi/2,\pi/2[} the function {\tan x} is injective:

Tangent function

Hence one can define the inverse of the tangent function in this suitably restricted domain.

\displaystyle y=\tan x\quad\mathrm{with}\quad x\in ]-\pi/2,\pi/2[\Leftrightarrow x=\arctan y

Where {\arctan} denotes the inverse of {\tan}.

Since {\tan x:]-\pi/2,\pi/2[\rightarrow]-\infty,+\infty[} it is {\arctan x:]-\infty,+\infty[\rightarrow ]-\pi/2,\pi/2[}. Also by theorem 54 {\arctan} is continuous.

The graphical representation of {\arctan x} is

Arctangent function

and it is evident by its representation that {\arctan x} is an odd function.

— 6.10.3. Arccosine function —

In {[0,\pi]} the function {\cos x} is injective:

Cosine function

Hence one can define the inverse of the cosine function in this suitably restricted domain.

\displaystyle y=\cos x\quad\mathrm{with}\quad x\in [0,\pi]\Leftrightarrow x=\arccos y

Where {\arccos} denotes the inverse of {\cos}.

Since {\cos x:[0,\pi]\rightarrow[-1,1]} it is {\arccos x:[-1,1]\rightarrow [0,\pi]}. Also by theorem 54 {\arccos} is continuous.

The graphical representation of {\arccos x} is

Arccosine function

Another way to define the arccosine function is to first use the relationship

\displaystyle \cos x=\sin(\pi/2-x)

to write

\displaystyle \arccos y=\frac{\pi}{2}-\arcsin y
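
As a quick sanity check of this identity, here is a small Python snippet (an illustration of mine, using only the standard math module) that compares both sides at a few sample points of {[-1,1]}:

```python
import math

# Spot-check arccos(y) = pi/2 - arcsin(y) at a few points of [-1, 1].
for y in [-1.0, -0.5, 0.0, 0.3, 1.0]:
    print(y, math.acos(y), math.pi / 2 - math.asin(y))
```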

— 6.10.4. Continuous functions and intervals —

Theorem 55 (Extreme value theorem) Let {[a,b]\subset \mathbb{R}} and {f:[a,b]\rightarrow\mathbb{R}} a continuous function. Then {f} has a maximum and a minimum. Proof: Let {E} be the range of {f} and {s=\sup E}. By Theorem 17 in post Real Analysis – Sequences II there exists a sequence {y_n} of points in {E} such that {\lim y_n=s}.

Since the terms of {y_n} are points of {E}, for each {n} there exists {x_n\in [a,b]} such that {y_n=f(x_n)}.

Since {x_n} is a sequence of points in the compact interval (see definition 22 in post Real Analysis – Sequences IV) {[a,b]}, by Corollary 27 (also in post Real Analysis – Sequences IV) there exists a subsequence {x_{\alpha n}} of {x_n} that converges to a point in {[a,b]}.

Let {c\in [a,b]} be such that {x_{\alpha n}\rightarrow c}.

Since {f} is continuous at {c} it is, by definition of continuity (see definition 34), {\lim f(x_{\alpha n})=f(c)}. But {f(x_{\alpha n})=y_{\alpha n}}, which is a subsequence of {y_n}. Since {y_n\rightarrow s} it also is {y_{\alpha n}\rightarrow s}.

But {y_{\alpha n}=f(x_{\alpha n})\rightarrow f(c)}.

In conclusion it is {s=f(c)}, hence {s\in E}. That is {s=\max E}.

For the minimum one can construct a similar proof. This proof is left as an exercise for the reader. \Box

One easy way to remember the previous theorem is:

Continuous functions have a maximum and a minimum in compact intervals.
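
To make the mnemonic concrete, here is a rough numerical sketch of mine (with an arbitrarily chosen sample function): it approximates the maximum and minimum that theorem 55 guarantees by sampling a continuous function on a fine grid over a compact interval. Of course the grid only approximates the extrema; the theorem is what guarantees they exist.

```python
import math

def extrema_on_grid(f, a, b, n=100001):
    """Sample f on a fine grid over the compact interval [a, b] and
    return the smallest and largest sampled values (approximate extrema)."""
    xs = [a + (b - a) * k / (n - 1) for k in range(n)]
    ys = [f(x) for x in xs]
    return min(ys), max(ys)

# x*sin(x) is continuous on the compact interval [0, 2*pi].
print(extrema_on_grid(lambda x: x * math.sin(x), 0.0, 2 * math.pi))
```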

Theorem 56 Let {I} be a compact interval of {\mathbb{R}} and {f:I\rightarrow\mathbb{R}} continuous. Then {f(I)} is a compact interval. Proof: By corollary 53 {f(I)} is an interval. By theorem 55 {f(I)} has a maximum and a minimum.

Hence {f(I)} is of the form {[\alpha , \beta]}.

Thus {f(I)} is a bounded and closed interval, which is the definition of a compact interval. \Box

One easy way to remember the previous theorem is:

Compactness is preserved under a continuous map.

Real Analysis – Limits and Continuity VI

Posted in 01 Basic Mathematics, 01 Real Analysis on February 15, 2014 by ateixeira

— More properties of continuous functions —

Definition 35

Let {D \subset \mathbb{R}}; {f: D\rightarrow \mathbb{R}} and {c \in D'\setminus D}. If {\displaystyle \lim_{x\rightarrow c}f(x)=a\in \mathbb{R}}, we can define {\tilde{f}} as:

\displaystyle   \tilde{f}(x)=\begin{cases} f(x) \quad x \in D \\ a \quad x=c \end{cases} \ \ \ \ \ (16)

As an application of the previous definition let us look into {f(x)= \sin x/x}. It is {D= \mathbb{R}\setminus \{0\}}.

Since {\displaystyle\lim_{x \rightarrow 0} \sin x/x=1} we can define {\tilde{f}} as

\displaystyle  \tilde{f}(x)=\begin{cases} \sin x/x \quad x \neq 0 \\ 1 \quad x=0 \end{cases}

As another example let us look into {f(x)=1/x}. Since {\displaystyle\lim_{x\rightarrow 0^+}f(x)=+\infty} and {\displaystyle\lim_{x\rightarrow 0^-}f(x)=-\infty} we can't define {\tilde{f}} for {1/x}.

Finally if we let {f(x)=1/x^2} we have {\displaystyle\lim_{x\rightarrow 0^+}f(x)=\displaystyle\lim_{x\rightarrow 0^-}f(x)=+\infty}. Since the limit is {+\infty} (not finite) we still can't define {\tilde{f}}.

In general one can say that given {f: D\rightarrow \mathbb{R}} and {c \in D'\setminus D} {\tilde{f}} exists if and only if {\displaystyle\lim_{x \rightarrow c}f(x)} exists and is finite.
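
In computational terms the extension {\tilde{f}} is just a piecewise definition that plugs the limit value in at the removed point. A minimal Python sketch of mine for {\sin x/x}:

```python
import math

def f_tilde(x):
    """Continuous extension of sin(x)/x: use the limit value 1 at x = 0."""
    return math.sin(x) / x if x != 0 else 1.0

for x in [0.1, 1e-3, 1e-6, 0.0]:
    print(x, f_tilde(x))  # values approach f_tilde(0) = 1
```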

Theorem 42 Let {D \subset \mathbb{R}}; {f,g: D\rightarrow \mathbb{R}} and {c \in D}. If {f} and {g} are continuous functions then {f+g}, {fg} and (if {g(c)\neq 0}){f/g} are also continuous functions.

Proof: We’ll prove that {fg} is continuous and let the other cases for the reader.

Let {x_n} be a sequence of points in {D} such that {x_n \rightarrow c}. Then {f(x_n) \rightarrow f(c)} and {g(x_n) \rightarrow g(c)} (since {f} and {g} are continuous functions).

Hence it follows from property {6} of Theorem 19 that {f(x_n)g(x_n) \rightarrow f(c)g(c)}. Which is the definition of a continuous function. \Box

Let {f(x)=5x^2-2x+4}. First we note that the constant functions {f_1(x)=5}, {f_2(x)=-2} and {f_3(x)=4} are continuous. Now {f_4(x)=x} is also a continuous function. {f_5(x)=x^2} is continuous since it is the product of {2} continuous functions. {f_6(x)=-2x} is continuous since it is the product of {2} continuous functions. Finally {f(x)=5x^2-2x+4} is continuous since it is the sum of continuous functions.

Theorem 43 Let {D, E \subset \mathbb{R}}, {g: D\rightarrow E}, {f: E \rightarrow \mathbb{R}} and {c \in D}. If {g} is continuous in {c} and {f} is continuous in {g(c)}, then the composite function {f \circ g (x)=f(g(x)) } is continuous in point {c}.

Proof: Let {x_n} be a sequence of points in {D} with {x_n \rightarrow c}. Hence {\lim g(x_n)=g(c)}. If {f} is continuous in {g(c)} it also is {\lim f(g(x_n))=f(g(c))}. This is {\lim (f \circ g)(x_n)= (f \circ g)(c)}. Thus {f \circ g} is continuous in {c}. \Box

As an application of the previous theorem let {f(x)=a^x}. Since {a^x=e^{\log a^x}=e^{x \log a}} we can write {a^x} as the composition of {f(t)=e^t} with {g(x)=x\log a}. Since {f(t)=e^t} is a continuous function and {g(x)=x \log a} is also a continuous function it follows that {a^x} is a continuous function (it is the composition of two continuous functions).

By the same argument we can also show that with {\alpha \in \mathbb{R}}, {x^\alpha} (for {x \in \mathbb{R}^+}) is also a continuous function in {\mathbb{R}^+}.

Theorem 44 Let {D, E \subset \mathbb{R}}, {g: D\rightarrow E}, {f: E \rightarrow \mathbb{R}} and {c \in D'}. Suppose that {\displaystyle \lim_{x \rightarrow c}g(x)=a} and that {\displaystyle \lim_{t \rightarrow a}f(t)} exists. If {f} is continuous it follows {\displaystyle \lim_{x \rightarrow c}f(g(x))=\lim_{t \rightarrow a}f(t)}.

Proof: Omitted. \Box

Find {\displaystyle \lim_{x \rightarrow +\infty} \sin (1/x)}.

We can write {\sin (1/x)= \sin t \circ (t=1/x)}. Since {\displaystyle \lim_{x \rightarrow + \infty}(1/x)=0} it is, from Theorem 44, {\displaystyle \lim_{x \rightarrow +\infty} \sin (1/x)=\displaystyle\lim_{t \rightarrow 0}\sin t =0}.

In general if {\displaystyle \lim_{x \rightarrow c} g(x)= a \in \mathbb{R}} it is {\displaystyle \lim_{x \rightarrow c} \sin (g(x))=\displaystyle\lim_{t \rightarrow a} \sin t = \sin a}. In conclusion

\displaystyle  \lim_{x \rightarrow c}\sin (g(x))=\sin (\lim_{x \rightarrow c}g(x))

Suppose that {\displaystyle \lim_{x \rightarrow c}g(x)=0} and let {\tilde{f}} be the continuous extension of {\sin x/x} to {x=0}.

It is {\sin x = \tilde{f}(x)x}, hence it is {\sin g(x) = \tilde{f}(g(x))g(x)}.

By definition {\tilde{f}} is continuous, so by Theorem 44 {\displaystyle \lim_{x \rightarrow c}\tilde{f}(g(x))=\displaystyle\lim_{t \rightarrow 0}\tilde{f}(t)=1}.

Thus we can conclude that when {\displaystyle \lim_{x \rightarrow c}g(x)=0} it is

\displaystyle  \sin (g(x))\sim g(x)\quad (x \rightarrow c)

For example {\sin (x^2-1) \sim (x^2-1)\quad (x \rightarrow 1)}.

Let {\displaystyle \lim_{x \rightarrow c}g(x)=a \in \mathbb{R}}. By Theorem 44 it is {\displaystyle \lim_{x \rightarrow c} e^{g(x)}=\lim_{t \rightarrow a}e^t=e^a} (with the conventions {e^{+\infty}=+\infty} and {e^{-\infty}=0}). Thus {\displaystyle \lim_{x \rightarrow c}e^{g(x)}=e^{\displaystyle\lim_{x \rightarrow c}g(x)}}.

Analogously one can show that {\displaystyle \lim_{x \rightarrow c} \log g(x)= \log (\lim_{x \rightarrow c}g(x))} (with the conventions {\log (+\infty)=+\infty} and {\log 0=-\infty}).

Let {a>1}. It is {\displaystyle \lim_{x \rightarrow +\infty}a^x =\displaystyle\lim_{x \rightarrow +\infty}e^{x\log a}=e^{\displaystyle\lim_{x \rightarrow +\infty} x\log a}=+\infty } (since {\log a>0}).

On the other hand for {\alpha > 0} it also is {\displaystyle \lim_{x \rightarrow +\infty}x^\alpha =\displaystyle\lim_{x \rightarrow +\infty}e^{\alpha \log x}= e^{\displaystyle \lim_{x \rightarrow +\infty}\alpha \log x}=+\infty}.

The question we want to answer is the value of {\displaystyle \lim_{x \rightarrow +\infty}\dfrac{a^x}{x^\alpha}}, since the answer tells us which of the two functions tends more rapidly to {+\infty}.

Theorem 45 Let {a>1} and {\alpha > 0}. Then

\displaystyle   \lim_{x \rightarrow +\infty}\frac{a^x}{x^\alpha}=+\infty \ \ \ \ \ (17)

Proof: Let {b=a^{1/(2\alpha)}} ({b>1}). It is {a=b^{2\alpha}}. Hence {a^x=b^{2\alpha x}}. Moreover {\dfrac{a^x}{x^\alpha}=\dfrac{b^{2\alpha x}}{x^\alpha}=\dfrac{b^{2\alpha x}}{\sqrt{x}^{2\alpha}}}.

which is

\displaystyle   \frac{a^x}{x^\alpha}=\left( \frac{b^x}{\sqrt{x}} \right)^{2\alpha} \ \ \ \ \ (18)

Let {[x]} denote the floor function (the greatest integer not exceeding {x}). Using Bernoulli's Inequality ({b^m\geq 1+ m(b-1)}) it is {b^x\geq b^{[x]}\geq 1+[x](b-1)>[x](b-1)>(x-1)(b-1)}.

Hence {\dfrac{b^x}{\sqrt{x}}>\dfrac{x-1}{\sqrt{x}}(b-1)=\left( \sqrt{x}-1/\sqrt{x}\right)(b-1)}.

Since {\displaystyle \lim_{x \rightarrow +\infty}\left( \sqrt{x}-1/\sqrt{x}\right)(b-1)=+\infty} it follows from Theorem 32 that {\displaystyle\lim_{x \rightarrow \infty} \frac{b^x}{\sqrt{x}}=+\infty}.

Using (18) and setting {t=b^x/\sqrt{x}} it is {\displaystyle\lim_{x \rightarrow +\infty}\frac{a^x}{x^\alpha}=\displaystyle\lim_{t \rightarrow +\infty}t^{2\alpha}=+\infty}. \Box

Corollary 46 Let {\alpha > 0}, then

\displaystyle \lim_{x \rightarrow +\infty}\frac{x^\alpha}{\log x}=+\infty

Proof: Left as an exercise for the reader (remember to make the convenient change of variable). \Box
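
For a rough numerical feel of these growth rates, here is a tiny Python sketch of mine (the sample values {a=1.5} and {\alpha=3} are arbitrary): both ratios below blow up as {x} grows, in agreement with theorem 45 and corollary 46.

```python
import math

a, alpha = 1.5, 3.0  # arbitrary sample values with a > 1 and alpha > 0
for x in [10.0, 100.0, 1000.0]:
    print(x, a**x / x**alpha, x**alpha / math.log(x))
```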

Theorem 47 Let {a>1}, then {\displaystyle \lim \frac{a^n}{n!}=0}. Proof: First remember that {\log n!=n\log n -n + O(\log n)}, which is Stirling's Approximation.

Since {\dfrac{\log n}{n} \rightarrow 0} it also is {\dfrac{O(\log n)}{n} \rightarrow 0}.

And

\displaystyle \frac{a^n}{n!}=e^{\log (a^n/n!)}=e^{n\log a - \log n!}

Thus

\displaystyle \lim \frac{a^n}{n!}=e^{\lim(n\log a - \log n!)}

For the argument of the exponential function it is

{\begin{aligned} \lim(n\log a - \log n!) &= \lim n\log a-n\log n+n-O(\log n) \\ &=\lim \left(n\left(\log a -\log n+1 -\dfrac{O(\log n)}{n}\right)\right) \\ &=+\infty\times -\infty=-\infty \end{aligned}}

Hence {\displaystyle \lim \frac{a^n}{n!}=e^{-\infty}=0}. \Box
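
A quick numerical illustration of theorem 47 (my own sketch, with the arbitrary sample value {a=5}): the ratio collapses towards {0} once {n} overtakes {a}.

```python
import math

a = 5.0  # arbitrary sample value with a > 1
for n in [1, 10, 25, 50, 100]:
    print(n, a**n / math.factorial(n))
```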

Lemma 48

\displaystyle   \lim_{x \rightarrow +\infty}\left( 1+\frac{1}{x}\right)^x=e \ \ \ \ \ (19)

Proof: Omitted. \Box

Theorem 49

\displaystyle   \lim_{x \rightarrow 0}\frac{\log (1+x)}{x}=1 \ \ \ \ \ (20)

Proof: Will be proven as an exercise. \Box

Corollary 50

\displaystyle   \lim_{x \rightarrow 0}\frac{e^x-1}{x}=1 \ \ \ \ \ (21)

Proof: Left as an exercise for the reader. Make the change of variables {e^x=t+1} and use the previous theorem. \Box

Generalizing the previous results one can write with full generality (a numerical check follows the list):

  • {\sin g(x) \sim g(x) \quad (x \rightarrow c)} if {\displaystyle \lim_{x \rightarrow c} g(x)=0}
  • {\log (1+g(x)) \sim g(x) \quad (x \rightarrow c)} if {\displaystyle \lim_{x \rightarrow c} g(x)=0}
  • {e^{g(x)}-1 \sim g(x) \quad (x \rightarrow c)} if {\displaystyle \lim_{x \rightarrow c} g(x)=0}
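
Here is a small Python check of mine of the three equivalences: each ratio below should approach {1} as the argument tends to {0}.

```python
import math

# Each ratio tends to 1 as g -> 0: sin g ~ g, log(1+g) ~ g, e^g - 1 ~ g.
for g in [0.1, 1e-3, 1e-6]:
    print(g, math.sin(g) / g, math.log(1 + g) / g, (math.exp(g) - 1) / g)
```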

5 years ago

Posted in 00 Announcements on October 26, 2013 by ateixeira

Today I logged in again to my wordpress.com account, as I do from time to time, and I received a notification telling me that I had joined wordpress.com 5 years ago. Wow!

5 years ago (and a couple of days) I started this blog with a grand plan. I wanted to review my Physics education in order to get my knowledge of Physics solidified and also to provide a good online resource to Physics students all over the world.

The thing is that running a blog like this takes time and will power, and I lacked both. Actually, running a blog like this only takes will power. You see, I've recently learned that having no time is a big fat lie!!! What one has are priorities. Simply put, running this blog wasn't one of my priorities until now.

The thing is that running this blog and once again being part of the Physics gang is now one of my priorities.

What I’m trying to say is: stay tuned in this space because this time it is for real.

Newtonian Physics – Introduction

Posted in 00 Announcements, 01 Classical Mechanics, 01 Newtonian Formalism, 02 Physics on July 6, 2011 by ateixeira

The first thing I want to say about this post is that its title is actually a misnomer. Much of what I’ll say here is valid for pretty much the rest of the blog, while some things are only pertinent to Newtonian Physics.

The approach taken in this blog for developing the physical theories will be the axiomatic one. I’ll do this because of brevity, internal elegance and consistency. Of course, I’m well aware of the fact that this is only possible with hindsight but I think that one has a lot to gain when physics is presented this way. Maybe the one who has more to gain is the presenter than the presentee, but since this is my blog I’m calling all the shots.

Maybe a word is in order for what the word axiom means and a little bit of history will be needed (gasp!!! the first self-contradiction!!!). In ancient Greece, the place where normally one thinks real science started to take shape (actually it wasn’t but this is a whole other can of worms), people who concerned themselves with such matters used two words to signify two things that nowadays are taken as synonyms. Those two words were: axiom and postulate.

Back in the day axiom was taken to be a self-evident truth while a postulate was taken to be something that one would have to take for certain for the sake of constructing an argument. So, axiom was a deep truth of nature while a postulate was something that humans had to resort to in order to reach new knowledge.

As an example of an axiom we have Euclid’s fifth (which revealed itself to be quite the deep mathematical truth!) and as an example of a postulate one has the assumption that Hipparchus made that the sun rays travelled in straight lines from the Sun to the Earth and Moon while he calculated the distances and sizes of those three bodies.

People have become a lot more cynical and in modern day usage those two terms are used as synonyms (and the meaning that prevails is the postulate one).

Axioms arise in Mathematics when one is willing to construct a theory that will unify a body of (not so) disjoint facts into a coherent whole. One should take proper care that the propositions one uses as the building blocks are enough for completeness and internal coherence and can derive the maximum amount of new facts with the minimum amount of assumed propositions.

In Physics things seem to be different at first sight but let me show you that things aren't that different after all. For starters one knows ever since Galileo that the verbal method of Aristotle – (meta)physics – isn't the way to go for one to know, predict and even interfere in natural phenomena. For all of this to happen mathematical tools are needed. One gets deeper into the truth of things, and one is also able to get technological progress that, besides messing up the natural environment, also makes people's lives easier. It isn't enough to say that bodies fall under gravity; one also has to specify where, with what energy, and in what time interval such a fall happens.

For instance Newton’s theory as it was done by Newton was axiomatic. His three laws are just another name for axioms. They are the propositions that contain the undefined terms whose validity one has to accept in order to achieve new results.

One fundamental difference now arises. While in Mathematics things are normally evaluated in terms of self-consistency and internal elegance (this is a HUGE oversimplification), in Physics things are also judged by how well the new results compare to actual measurements in the real world. Physical theories have to be consistent with what we see around us (another HUGE oversimplification). Hence if Newton's Principia predicted square orbits for the planets, Newton's Principia would have to be scrapped.

Another difference is the way we physicists arrive at the axioms: normally one has some experimental facts and starts thinking about them and how they are linked with each other. Hopefully one will then be able to use the most fundamental properties as the building blocks of our theories and call them axioms (in Physics it is more usual to call them laws).

After digressing a little (thanks for reading, by the way) let me proceed with the defense of the axiomatic way in Physics. One other thing is that I think that knowledge is a lot more sound when one knows where one stands and why one is standing there and not some other place. So, if I tell you what our basics are (it doesn't matter how we got to them) and derive all that can be derived from them, I believe that sounder knowledge is achieved.

The historical/phenomenological method has as its big advantage (according to me at least) of showing the inner struggles each concept has to endure before being accepted and being part of the reigning paradigm. It also makes things more approachable at a first attempt, but I think that the merits of this approach stop at this initial pedagogy.

The downsides of the axiomatic way are that, at first sight, it seems highly artificial, and may not be what most people are used to and want to see when wanting to learn physics.

Moving on from this rather big lecture let me explain what I’ll do in the Newtonian Physics part of this blog:

  1. I’ll start off by introducing units of measurement, dimensional analysis and explain why they are important in Physics.
  2. A little bit on error propagation and why it matters in physics. Yes, this is mostly a theoretical blog but I consider this to be part of the physicist knowing where he/she stands paradigm.
  3. Assume that the reader knows differential and integral calculus (even though I’ll continue my posts on Basic Mathematics).
  4. Introduce the Newtonian axioms and what most people think Newton meant to say while introducing them.
  5. Do a lot of calculations.
  6. Have a lot of fun!

Real Analysis – Limits and Continuity V

Posted in Uncategorized on April 23, 2011 by ateixeira

The {\epsilon} {\delta} condition is somewhat hard to get into our heads as neophytes. On top of that the similarity between the {\epsilon} {\delta} definitions for limit and continuity can increase the confusion, and to counter that frequent turn of events the first part of this post will try to clarify the {\epsilon} {\delta} condition by means of examples.

— {\epsilon} {\delta} for Continuity —

First we’ll start things off with something really simple.

Let {f(x)=\alpha} which is obviously continuous.

The gist of the {\epsilon} {\delta} reasoning is that we want to show that no matter the {\delta} that is chosen at first, it is always possible to find an {\epsilon} that satisfies Heine's criterion for continuity.

Getting back to our function {f(x)=\alpha}, we want to show {|f(x)-f(c)| < \delta}. Here {f(x)=f(c)=\alpha} so

{\begin{aligned} |f(x)-f(c)| &< \delta \\ |\alpha-\alpha| &< \delta \\ |0| &< \delta \\ 0 &< \delta \end{aligned}}

Which is trivially true since {\delta > 0} by assumption. Hence any value of {\epsilon} will satisfy Heine’s criterion for continuity and {f(x)=\alpha} is continuous at {c}.

Since we never made any assumption about {c} other than {c \in {\mathbb R}} we conclude that {f(x)=\alpha} is continuous in all points of its domain.

Let us now look at {f(x)=x}. Again we’ll look at continuity for point {c} ({f(c)=c}):

{\begin{aligned} |f(x)-f(c)| &< \delta \\ |x-c| &< \delta \end{aligned}}

The last expression is just what we want at this stage, since we want to have something of the form {x-c} (the first part of the {\epsilon} {\delta} criterion).

If we let {\epsilon=\delta} it is {|x-c| < \epsilon} and this completes our proof that {f(x)=x} is continuous at point {c}.

And again since we never made any assumption about {c} other than {c \in {\mathbb R}} we conclude that {f(x)=x} is continuous in all points of its domain.

Now we let {f(x)=\alpha x + \beta} (with {\alpha \neq 0}; the constant case was treated above) and we'll see if {f(x)} is continuous at {c}.

{\begin{aligned} |f(x)-f(c)| &< \delta \\ |\alpha x + \beta-(\alpha c + \beta)| &< \delta \\ |\alpha x -\alpha c| &< \delta \\ |\alpha||x-c| &< \delta \\ |x-c| &< \dfrac{\delta}{|\alpha|} \end{aligned}}

Hence if we let {\epsilon=\delta / |\alpha|} it is {|x-c|< \epsilon} and {f(x)=\alpha x + \beta} is continuous at {c}.

As a final example of Heine’s criterion of continuity we’ll look into {f(x)=\sin x}.

{\begin{aligned} |f(x)-f(c)| &< \delta \\ |\sin x-\sin c| &< \delta \end{aligned}}

Since we want something like {|x-c| < g(\delta)} the last expression isn’t very useful to us.

In this case we’ll take an alternative approach which nevertheless works and has exactly the same spirit of what we’ve using so far.

Please look at every step I make with a critical eye and see if you can really understand what’s going on with this deduction.

{\begin{aligned} |\sin x-\sin c| &= 2\left| \cos\left( \dfrac{x+c}{2}\right)\right| \left| \sin\left( \dfrac{x-c}{2}\right)\right|\\ &\leq 2\left| \sin\left( \dfrac{x-c}{2}\right)\right| \end{aligned}}

Since {|\sin u| \leq |u|} for every real {u}, it follows that

{\begin{aligned} 2\left| \sin\left( \dfrac{x-c}{2}\right)\right| &\leq 2\left|\dfrac{x-c}{2}\right| \\ &= |x-c|\\ &< \epsilon \end{aligned}}

Where the last inequality follows by hypothesis.

That is to say that if we let {\epsilon=\delta} it is {|x-c|<\epsilon \Rightarrow | \sin x - \sin c | < \delta}, which is the epsilon delta definition of continuity.

— {\epsilon} {\delta} for Limits —

After looking into some simple {\epsilon} {\delta} proofs for continuity we’ll take a look at {\epsilon} {\delta} for limits.

The procedure is the same, but we’ll state it explicitly so that people can see it in action.

Let {f(x)=2}. We want to show that it is {\displaystyle \lim_{x \rightarrow 1}f(x)=2}.

{\begin{aligned} |f(x)-2| &< \delta \\ |2-2| &< \delta \\ 0 &< \delta \end{aligned}}

Which is trivially true for any value of {\delta}, hence {\epsilon} can be any positive real number.

Let {f(x)=2x+3}. We want to show that it is {\displaystyle \lim_{x \rightarrow 1}f(x)=5}.

{\begin{aligned} |f(x)-5| &< \delta \\ |2x+3-5| &< \delta \\ |2x-2| &< \delta \\ 2|x-1| &< \delta \\ |x-1| &< \dfrac{\delta}{2} \end{aligned}}

With {\epsilon=\delta/2} we satisfy the {\epsilon} {\delta} for limit.
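
The game described above can even be played by a computer. Here is a tiny Python sketch of mine for {f(x)=2x+3} at {c=1}: whatever {\delta} the first player picks, the second answers with {\epsilon=\delta/2} and the condition holds at the sampled points.

```python
f = lambda x: 2 * x + 3  # we claim lim_{x -> 1} f(x) = 5

def epsilon_for(delta):
    """Second player's answer: epsilon = delta/2 works for this f at c = 1."""
    return delta / 2

for delta in [1.0, 0.1, 0.001]:
    eps = epsilon_for(delta)
    xs = [1 - 0.99 * eps, 1 - eps / 3, 1 + eps / 3, 1 + 0.99 * eps]
    print(delta, all(abs(f(x) - 5) < delta for x in xs))  # True every time
```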

As a final example let us look at the modified Dirichlet function that was introduced in a previous post.

\displaystyle f(x) = \begin{cases} 0 \quad x \in \mathbb{Q}\\ x \quad x \in \mathbb{R}\setminus \mathbb{Q} \end{cases}

At that post it was proved that for {a \neq 0} {\displaystyle\lim_{x \rightarrow a}f(x)} doesn't exist, and it was promised that at a later date I'd show that {\displaystyle\lim_{x \rightarrow 0}f(x)=0} using the epsilon delta condition.

Since we now know what the epsilon delta condition is and already have some experience with it, we will tackle this somewhat more abstruse problem.

{\begin{aligned} |f(x)-f(0)| &< \delta \\ |f(x)-0| &< \delta \end{aligned}}

Since {f(x)=0} or {f(x)=x} we have two cases to look at.

In the first case it is {|0-0| < \delta} which is trivially valid, hence {\epsilon} can be any real positive number.

In the second case it is {|x-0| < \delta}. Hence letting {\epsilon=\delta} gets the job done.

Since we proved that {\displaystyle\lim_{x \rightarrow 0}f(x)=0=f(0)} the conclusion is that the modified Dirichlet function that was presented is only continuous at {x=0}.

As was said previously, they don’t make local concepts more local than that.

Real Analysis Exercises III

Posted in 01 Basic Mathematics, 01 Real Analysis on July 29, 2009 by ateixeira

1.

a) Calculate { \displaystyle \sum_{k=p}^{m}(u_{k+1}-u_k)} and {\displaystyle\sum_{k=p}^{m}(u_k - u_{k+1})}

{\displaystyle \sum_{k=p}^{m}(u_{k+1}-u_k)=u_{p+1}-u_{p}+u_{p+2}-u_{p+1}+\ldots +u_{m+1}-u_{m}}

As we can see the first term cancels out with the fourth, the third with the sixth, and so on, and all we are left with is the second term and the second to last term:

{\displaystyle \sum_{k=p}^{m}(u_{k+1}-u_k) = u_{m+1}-u_p}

{\begin{aligned} \displaystyle \sum_{k=p}^{m}(u_k - u_{k+1})&= - \sum_{k=p}^{m}(u_{k+1}-u_k)\\ &= - (u_{m+1}-u_p)\\ &= u_p-u_{m+1} \end{aligned}}

b) Calculate {\displaystyle \lim \sum_{k=1}^n\dfrac{1}{k(k+1)}} using the previous result.

{\displaystyle \lim \sum_{k=1}^n\dfrac{1}{k(k+1)}= \lim \sum_{k=1}^n \left( \frac{1}{k}-\frac{1}{k+1} \right) }

Defining {u_k=1/k} the previous sum can be written as

{\begin{aligned} \displaystyle \lim \sum_{k=1}^n \left( u_k-u_{k+1} \right)&=\lim (u_1 - u_{n+1})\\ &= \lim \left(1-\frac{1}{n+1}\right)\\ &=1 \end{aligned}}
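
For the skeptics, here is a two-line Python check of mine that the partial sums indeed creep up towards {1}, matching the closed form {1-1/(n+1)}:

```python
# The partial sums telescope to 1 - 1/(n+1), which tends to 1.
for n in [10, 1000, 10**6]:
    s = sum(1 / (k * (k + 1)) for k in range(1, n + 1))
    print(n, s, 1 - 1 / (n + 1))
```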

This last result apparently has a funny story. Mengoli was the first one to calculate {\displaystyle \lim \sum_{k=1}^n\dfrac{1}{k(k+1)}=1}.

At the time this happened people did research in mathematics (I’m using this term rather abusively) in a somewhat different vein. They didn’t rush to print what they found like today.

Many times people held out their results for years while tormenting their rivals about what they found.

This is exactly what Mengoli did. In the times he was around the theory of series wasn’t much developed, thus this result, that we can calculate without being particularly brilliant in Mathematics, was something to take note of.

So, he wrote some letters to people saying that {\displaystyle \lim \sum_{k=1}^n\dfrac{1}{k(k+1)}=1}, but not how he concluded that.

The other mathematicians he sent the result to didn't know about his methods and all they could do was add numbers up explicitly. The only thing they could see was that even though they summed more and more terms the result was always less than {1} and got nearer and nearer to {1}.

Of course this didn’t prove nothing since summing up a billion terms isn’t the same as summing an infinite number of terms and everyone but Mengoli was dumbfounded with that surprising result.

c) Calculate {\displaystyle \sum_{k=0}^{n-1}(2k+1) }

In this exercise what we are calculating is the sum of the first {n} odd numbers. This result was already known to the ancient Greeks and it was nothing short of astounding to them.

But enough with the talk already:

{\begin{aligned} \displaystyle \sum_{k=0}^{n-1}(2k+1)&=\sum_{k=0}^{n-1}\left[ (k+1)^2-k^2\right]\\ &= \sum_{k=0}^{n-1}(u_{k+1}-u_k) \end{aligned}}

With {u_k=k^2}

Using the now familiar formula

{\begin{aligned} \displaystyle \sum_{k=0}^{n-1}(2k+1) &= (n-1+1)^2-0^2\\ &= n^2 \end{aligned}}

An astounding result indeed!

Just look at {\displaystyle \sum_{k=0}^{n-1}(2k+1)=n^2}, interpret the result and try not to be as surprised as the ancient Greeks were.

2.

a) Using 1.a) and {a^k=a^k\dfrac{a-1}{a-1}\quad (a \neq 1)} calculate {\displaystyle \sum_{k=0}^{n-1} a^k }

{\begin{aligned} \displaystyle \sum_{k=0}^{n-1} a^k &= \displaystyle\sum_{k=0}^{n-1} \left[ a^k\frac{a-1}{a-1}\right]\\ &= \displaystyle \frac{1}{a-1}\sum_{k=0}^{n-1}\left( a^{k+1}-a^k\right)\\ &= \displaystyle\frac{1}{a-1}(a^n-1)\\ &= \displaystyle\frac{a^n-1}{a-1} \end{aligned}}
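
A quick numerical check of mine of this closed form (the sample values {a=3} and {n=10} are arbitrary):

```python
a, n = 3.0, 10  # arbitrary sample values with a != 1
direct = sum(a**k for k in range(n))
print(direct, (a**n - 1) / (a - 1))  # both give 29524.0
```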

b) Using a) establish the Bernoulli inequality {a^n-1 \geq n(a-1)} if {a > 0} and {n \in \mathbb{Z}^+}

If {a=1} it is {1-1=n(1-1) \Rightarrow 0=0} which is trivially true.

If {n=1} it is {a-1=a-1} which is trivially true.

For {n \geq 2 } and {a>1} it is:

{\begin{aligned} \displaystyle \sum_{k=0}^{n-1}a^k&= 1+a+a^2+\ldots+a^{n-1}\\ &> 1+1+\ldots+1\\ &= n \end{aligned}}

Thus

{\begin{aligned} \dfrac{a^n-1}{a-1} &> n \\ a^n-1 &> n(a-1) \end{aligned}}

Since {a > 1}

Finally if {0 < a <1 } it is

{\begin{aligned} \displaystyle \sum_{k=0}^{n-1}a^k&= 1+a+a^2+\ldots+a^{n-1}\\ &< 1+1+\ldots+1\\ &= n \end{aligned}}

Thus

{\begin{aligned} \dfrac{a^n-1}{a-1} & < n \\ a^n - 1 & > n(a-1) \end{aligned}}

Since {a < 1}

c) Use b) to calculate {\lim a^n} if {a > 1} and then conclude that {\lim a^n=0} if {|a| < 1}.

By b) it is

{\begin{aligned} a^n &> n(a-1)+1 \\ \lim a^n &\geq \lim \left( n(a-1)+1 \right)= +\infty \end{aligned}}

Hence {\lim a^n = +\infty \quad (a>1)}

For the second part of the exercise we will calculate {\lim |a^n|} instead, since we know that { u_n \rightarrow 0 \Leftrightarrow |u_n| \rightarrow 0}

Let us make a change of variable {t=1/a}. Thus {|a|=|1/t|} and

{\begin{aligned} \lim |a^n| &= \lim |1/t|^n\\ &= \dfrac{1}{\lim |t|^n}\\ &= \dfrac{1}{+\infty}\\ &=0 \end{aligned}}

3. Consider the sequences {u_n=\left( 1+\dfrac{1}{n} \right)^n } and {v_n=\left( 1+\dfrac{1}{n} \right)^{n+1}}

a) Calculate {\dfrac{v_n}{v_{n+1}}} and {\dfrac{u_{n+1}}{u_n}}. Then use Bernoulli’s inequality to show that {v_n} is strictly decreasing and that {u_n} is strictly increasing.

{\begin{aligned} \dfrac{v_n}{v_{n+1}} &= \dfrac{\left( 1+1/n \right)^{n+1}}{\left(1+1/(n+1)\right)^{n+2}}\\ &=\dfrac{\left(\dfrac{n+1}{n}\right)^{n+1}}{\left( \dfrac{n+2}{n+1} \right)^{n+2}}\\ &= \dfrac{n}{n+1}\dfrac{\left(\dfrac{n+1}{n}\right)^{n+2}}{\left( \dfrac{n+2}{n+1} \right)^{n+2}}\\ &=\dfrac{n}{n+1}\left( \dfrac{(n+1)^2}{n(n+2)} \right)^{n+2}\\ &= \dfrac{n}{n+1}\left( \dfrac{n^2+2n+1}{n(n+2)} \right)^{n+2}\\ &=\dfrac{n}{n+1}\left( \dfrac{n(n+2)+1}{n(n+2)} \right)^{n+2}\\ &= \dfrac{n}{n+1}\left( 1+\dfrac{1}{n(n+2)} \right)^{n+2} \end{aligned}}

After having calculated {v_n/v_{n+1}} we can use Bernoulli’s inequality, with {a=1+\dfrac{1}{n(n+2)}} , to conclude that {v_n} is strictly decreasing.

{\begin{aligned} \dfrac{n}{n+1}\left( 1+\dfrac{1}{n(n+2)} \right)^{n+2} &> \dfrac{n}{n+1}\left(1 + \dfrac{n+2}{n(n+2)} \right)\\ &= \dfrac{n}{n+1}(1+1/n)\\ &= \dfrac{n}{n+1}\dfrac{n+1}{n}\\ &= 1 \end{aligned}}

Thus {v_n} is strictly decreasing.

With a similar technique we can prove that

{ \displaystyle u_{n+1}/u_n=\dfrac{n+1}{n}\left( 1- \dfrac{1}{(n+1)^2}\right)^{n+1}}

After that by using Bernoulli’s inequality like in the previous example one can show that {u_{n+1}/u_n>1} and thus {u_n} is strictly increasing.

c) Using a) and b) and {\lim u_n = e} prove the following inequalities: {(1+1/n)^n < e <(1+1/n)^{n+1}}.

{\begin{aligned} \lim v_n&= \lim(1+1/n)^n(1+1/n)\\ &= e\times 1\\ &= e \end{aligned}}

Since {v_n} is strictly decreasing and {\lim v_n=e}, it is {e<v_n=(1+1/n)^{n+1}}

On the other hand {u_n} is increasing and {\lim u_n=e} so {(1+1/n)^n<e}.

Hence {(1+1/n)^n<e<(1+1/n)^{n+1}}

d) Use c) to prove that { \displaystyle \frac{1}{n+1}<\log (n+1)-\log n <\frac{1}{n}}

{ \begin{aligned} (1+1/n)^n &< e \\ n \log \left( \dfrac{n+1}{n} \right) &< 1 \\ \log(n+1) - \log n &< \dfrac{1}{n} \end{aligned} }

And now for the second part of the inequality:

{ \begin{aligned} e &< \left(1+\dfrac{1}{n}\right)^{n+1} \\ 1 &< (n+1)\log \left(\dfrac{n+1}{n}\right) \\ \dfrac{1}{n+1} &< \log (n+1) -\log n \end{aligned}}

In conclusion it is { \dfrac{1}{n+1}<\log (n+1)- \log n < \dfrac{1}{n} }
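
These bounds are easy to test numerically; a short Python sketch of mine:

```python
import math

# 1/(n+1) < log(n+1) - log(n) < 1/n for every positive integer n.
for n in [1, 10, 100, 1000]:
    gap = math.log(n + 1) - math.log(n)
    print(n, 1 / (n + 1) < gap < 1 / n)  # True for each sample
```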

4.

a) Using 3d) show that

{ \displaystyle 1+\log k < (k+1)\log (k+1)-k\log k < 1+ \log(k+1) }

From

{ \begin{aligned} \dfrac{1}{k+1} &< \log (k+1) - \log k \\ 1 &< (k+1)\log(k+1) - (k+1)\log k \\ 1+ \log k &< (k+1)\log(k+1)-k \log k \end{aligned}}

With a similar reasoning we can also prove that {(k+1)\log(k+1)-k\log k < 1+ \log(k+1)}.

Thus it is {1+\log k < (k+1)\log(k+1)-k\log k < 1+ \log(k+1)}

b) Sum the previous inequalities between {1 \leq k \leq n-1}.

{\begin{aligned} \displaystyle \sum_{k=1}^{n-1}(1+ \log k) &< \sum_{k=1}^{n-1} ((k+1)\log(k+1)-k \log k)\\ &< \displaystyle \sum_{k=1}^{n-1}(1+\log(k+1)) \end{aligned}}

Now

{\begin{aligned} \displaystyle \sum_{k=1}^{n-1} (1+ \log k) &= \sum_{k=1}^{n-1}1+\sum_{k=1}^{n-1}\log k\\ &= n-1 +\sum_{k=1}^{n-1}\log k \end{aligned}}

And

{\begin{aligned} \displaystyle \sum_{k=1}^{n-1}\log k &= \log 1 + \log2 +\ldots+\log(n-1)\\ &=\log((n-1)!) \end{aligned}}

It also is

{\begin{aligned} \displaystyle \sum_{k=1}^{n-1}((k+1)\log(k+1) - k\log k)&= n\log n -\log 1\\ &=n\log n \end{aligned}}

And {\displaystyle \sum_{k=1}^{n-1}(1+\log(k+1))=n-1+\log n!}

Thus it is {n-1+\log(n-1)! < n\log n < n-1+\log n!}

c) Conclude the following inequalities { n \log n -n +1 < \log n! < n \log n -n+1+\log n} and establish Stirling's approximation { \displaystyle \log n! = n\log n -n +r_n} with {1 < r_n < 1+\log n} (equivalently, {n!=C_n (n/e)^n} with {e < C_n < en}, where {C_n=e^{r_n}})

{ \begin{aligned} n-1 + \log (n-1)! &< n\log n \\ \log (n-1)! &< n\log n -n+1 \\ \log n! &< n\log n -n +1+\log n \end{aligned}}

On the other hand

{\begin{aligned} n\log n &< n-1 + \log n! \\ n\log n -n +1 &< \log n! \end{aligned} }

Thus

{\begin{aligned} n\log n -n +1 &< \log n! \\ &< n\log n -n +1 +\log n \end{aligned}}

And from this follows {1 < \log n! -n\log n+n < 1+\log n}

Defining {r_n=\log n! -n\log n+n} it is {\log n! = n\log n-n+r_n} with {1 < r_n < 1+\log n}
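
And a small numerical check of mine that {r_n=\log n!-n\log n+n} really does sit between {1} and {1+\log n}:

```python
import math

# r_n = log(n!) - n*log(n) + n should satisfy 1 < r_n < 1 + log(n).
for n in [2, 10, 100]:
    r = math.log(math.factorial(n)) - n * math.log(n) + n
    print(n, r, 1 < r < 1 + math.log(n))
```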

5.

Show that {\log \left(1+\dfrac{1}{n}\right)\sim \dfrac{1}{n}} and that {\log \left(1+\dfrac{1}{n^2}\right)\sim \dfrac{1}{n^2}}

We know that

{ \begin{aligned} \dfrac{1}{n+1} &< \log(n+1)-\log n < \dfrac{1}{n} \\ \dfrac{1}{n+1} &< \log\left( \dfrac{n+1}{n}\right) < \dfrac{1}{n} \\ \dfrac{1}{n+1} &< \log\left( 1+\dfrac{1}{n}\right) <\dfrac{1}{n} \\ \dfrac{1/(n+1)}{1/n} &< \dfrac{\log (1+1/n)}{1/n}<1 \\ \lim \dfrac{n}{n+1} &\leq \lim \dfrac{\log (1+1/n)}{1/n} \leq \lim 1 \\ 1 &\leq \lim \dfrac{\log (1+1/n)}{1/n} \leq 1 \end{aligned}}

Thus {\lim \dfrac{\log (1+1/n)}{1/n}=1} and this is equivalent to saying that {\log \left(1+\dfrac{1}{n}\right)\sim \dfrac{1}{n}}

Let {u_n = \dfrac{\log (1+1/n)}{1/n}}. In this case it is {\dfrac{\log (1+1/n^2)}{1/n^2}=u_{n^2}}. Since {u_{n^2}} is a subsequence of {u_n} we know that {\lim u_{n^2}= \lim u_n} and so it also is {\log \left(1+\dfrac{1}{n^2}\right)\sim \dfrac{1}{n^2}}.

6. Show that {u_n \sim v_n} and {v_n \sim w_n \Rightarrow u_n \sim w_n }

By hypothesis it is {u_n=h_n v_n}, {v_n=t_n w_n} with {h_n,t_n \rightarrow 1}.

Substituting the second equality in the first we obtain {u_n = h_n t_n w_n}.

Let {s_n = h_n t_n} and we write {u_n =s_n w_n } with {\lim s_n = \lim h_n \lim t_n =1\times 1=1}.

Thus {u_n \sim w_n}

7. Let {u_n = O\left(1/n\right)} and {v_n = O (1/ \sqrt{n})}. Show that {u_n v_n = o ( 1/n^{4/3})}.

{u_n = h_n 1/n} and {v_n = t_n 1/ \sqrt{n}} with {h_n} and {t_n} bounded sequences. Now

{\begin{aligned} u_n v_n &= \dfrac{h_n}{n} \dfrac{t_n}{\sqrt{n}}\\ &= \dfrac{h_n t_n}{n^{3/2}}\\ &=\dfrac{h_n t_n}{n^{1/6}}\dfrac{1}{n^{4/3}} \end{aligned}}

Let {s_n = \dfrac{h_n t_n}{n^{1/6}}}; it is {\lim s_n = \lim \dfrac{h_n t_n}{n^{1/6}} = 0} since {h_n t_n} is bounded.

Thus {u_n v_n = o (1/n^{4/3})}

8. Using Stirling’s approximation show that {\log n! = n\log n -n + O(\log n)}

We know that it is {\log n! = n\log n -n +r_n} with { 1< r_n < 1+\log n}. Thus

{\begin{aligned} 0 &<\dfrac{1}{\log n}\\ &< \dfrac{r_n}{\log n}\\ &< \dfrac{1}{\log n} +1\\ &\leq \dfrac{1}{\log 2}+1 \end{aligned}}

Where we used the fact that { \dfrac{1}{\log n}+1} is a decreasing function.

Thus {\dfrac{r_n}{\log n}} is bounded and so {r_n=O(\log n)} as desired.

Real Analysis – Limits and Continuity IV

Posted in 01 Basic Mathematics, 01 Real Analysis on July 1, 2009 by ateixeira

As an application of theorem 35 let us look into the functions {f(x)=e^x} and {g(x)=\log x}.

Now {f:\mathbb{R} \rightarrow \mathbb{R^+}} and is a strictly increasing function, and {g:\mathbb{R^+} \rightarrow \mathbb{R}} also is a strictly increasing function.

By theorem 35 it is {\displaystyle \lim_{x \rightarrow +\infty}\exp x = \mathrm{sup} [\mathbb{R^+}] = +\infty} and {\displaystyle \lim_{x \rightarrow -\infty} \exp x= \mathrm{inf} [\mathbb{R^+}] = 0}.

As for {g(x)} it is {\displaystyle \lim_{x \rightarrow +\infty} \log x=\sup [\mathbb{R}]=+\infty} and {\displaystyle \lim_{x \rightarrow 0} \log x = \inf [\mathbb{R}]=-\infty}.

Definition 33 Let {D \subset \mathbb{R}}; {f,g: D \rightarrow \mathbb{R}}, and {c \in D^\prime}. Let us suppose that there exists {h: D \rightarrow \mathbb{R}} such that {f(x) = h(x)g(x)}.

  1. If {\displaystyle \lim_{x \rightarrow c} h(x)=1 } we say that {f(x)} is asymptotically equal to {g(x)} when {x \rightarrow c} and write {f(x) \sim g(x)\,\, (x \rightarrow c)}.

  2. If {\displaystyle \lim_{x \rightarrow c} h(x) = 0} we say that {f(x)} is little-o of {g(x)} when {x \rightarrow c} and write { f(x) = o (g(x)) \,\, (x \rightarrow c)}.
  3. If {h(x)} is bounded in some neighborhood of {c} we say that {f(x)} is big-o of {g(x)} when {x \rightarrow c} and write {f(x)=O(g(x)) \;(x \rightarrow c)}.

If in the previous definition {g(x)} doesn’t equal zero:

  1. { f(x) \sim g(x) \Leftrightarrow \displaystyle \lim_{x \rightarrow c} \frac{f(x)}{g(x)} = 1}.
  2. { f(x) = o (g(x)) \,\, (x \rightarrow c) \Leftrightarrow \displaystyle \lim_{x \rightarrow c} \frac{f(x)}{g(x)} = 0}.
  3. { f(x) = O(g(x)) \,\, (x \rightarrow c) \Leftrightarrow \dfrac{f(x)}{g(x)} } is bounded in some neighborhood of {c}.

These notions work exactly as they worked for sequences and they give the same type of information about the behavior of the functions in question.

Theorem 36 Let {D \subset \mathbb{R}}; {f,g,f_0,g_0: D \rightarrow \mathbb{R}}, and {c \in D^\prime}. Then:

  1. If {f(x) \sim g(x) \,\, (x \rightarrow c)} and {\displaystyle \lim_{x \rightarrow c}g(x) = a}, then {\displaystyle \lim_{x \rightarrow c} f(x) = a}

  2. If {f(x) \sim f_0(x) \,\, (x \rightarrow c)} and {g(x) \sim g_0(x) \,\, (x \rightarrow c)}, then {f(x)g(x) \sim f_0(x)g_0(x) \,\, (x \rightarrow c)} and {f(x)/g(x) \sim f_0(x)/g_0(x) \,\, (x \rightarrow c)}.

Proof: Left as an exercise. \Box

As an example of the previous definitions we can say, with full generality, that for any polynomial function we can keep track of the term with the leading degree if we are interested in how it behaves for larger and larger values.

But on the other hand if we are interested on how the polynomial function behaves near the origin we have to keep track of the term with the smaller degree. To see that this is indeed so let us introduce the following example:

\displaystyle  f(x) = x^2+x

Now {x^2+x=(x+1)x}. If we take {h(x)=x+1} it is {\displaystyle \lim_{x \rightarrow 0} h(x)=1} and so it is {x^2+x \sim x \,\, (x \rightarrow 0)}.

Another example that has a lot of interest to us is:

\displaystyle  \sin x \sim x \,\, (x \rightarrow 0)

We can see that it is so because of {\displaystyle \lim_{x \rightarrow 0} \frac{\sin x}{x} = 1}

— 6.6. Epsilon-delta condition —

And it is time for us to introduce the concept of limit using the { \epsilon - \delta } condition.

Once again we are walking into regions of greater and greater rigor at the expense of having to use more abstract concepts. Things are going to get a little harder for people that aren't used to these types of reasoning, but please bear with me and you'll find it rewarding when you get used to it.

The point of the { \epsilon - \delta } condition is to avoid using fuzzy concepts like near, input signals and output signals, or the somewhat weak definition of limit we have been using so far.

Theorem 37 (Heine’s Theorem)

Let {D \subset \mathbb{R}}, {f: D \rightarrow \mathbb{R}}, {c \in D^\prime} and {a \in \overline{\mathbb{R}}}. {\displaystyle \lim_{x \rightarrow c} f(x) = a} if and only if

\displaystyle  \forall \delta > 0 \, \exists \epsilon >0 : \; x \in V(c,\epsilon) \cap (D \setminus \left\lbrace c \right\rbrace ) \Rightarrow f(x) \in V(a, \delta)

Proof: Omitted. \Box

In case you are wondering what that means, the straightforward answer is that it means exactly what your idea of a function having a limit at a given point is (I'm assuming you have the right idea). It tells us that if a function indeed has limit {a} at point {c} then, if we restrict ourselves to points near {c}, the images of those points are all near {a}.

Once again I tell the reader to look at this as if it were a game played between two (slightly odd) people. One of them is choosing the {\delta} and the other is choosing the {\varepsilon}. But this game isn't just about choosing. The first player gets to choose any {\delta} he wants, but the second has to choose the right {\varepsilon} that makes the condition hold.

If he can prove that he has an {\varepsilon} for every {\delta} that the other player chooses, then he succeeds in the game and the function does have limit {a} at point {c}.

Theorem 38

Let {D \subset \mathbb{R}}, {f: D \rightarrow \mathbb{R}}, and {c \in D^\prime}. If {\displaystyle \lim_{x \rightarrow c} f(x)} exists and is finite, then there exists a neighborhood of {c} where {f(x)} is bounded.

Proof:

Let {\displaystyle \lim_{x \rightarrow c} f(x) = a \in \mathbb{R}}. By theorem 37 with {\delta=1} there exists {\varepsilon > 0} such that

{\begin{aligned} x \in V(c,\varepsilon)\cap(D\setminus\left\lbrace c \right\rbrace ) &\Rightarrow f(x) \in V(a,1) \\ &\Rightarrow f(x) \in \left] a-1, a+1\right[ \end{aligned}}

Thus {x\in V(c,\varepsilon)\cap(D\setminus\left\lbrace c \right\rbrace)\Rightarrow a-1 < f(x) < a+1}.

So {x \in V(c,\varepsilon) \cap D \Rightarrow f (x) \begin{cases} \leq \mathrm{max} \left\lbrace a+1,f(c)\right\rbrace \\ \geq \mathrm{min}\left\lbrace a-1,f(c)\right\rbrace \end{cases} }

and {f(x)} is bounded in {V(c,\varepsilon)}. \Box

If {\displaystyle \lim_{x \rightarrow c} f(x)/g(x)} exists, then {f(x)= O(g(x))\,\, (x \rightarrow c)} since in this case it is {h(x)=f(x)/g(x)} and there exists some neighborhood of {c} where {h(x)} is bounded.

After this one may be interested in knowing how we can translate {\displaystyle \lim_{x \rightarrow c^+} f(x) = a} to an {\varepsilon - \delta} condition.

In this case we are considering {f(x)} only in the set {D_{c^+}} and so what we get is:

\displaystyle  \forall \delta > 0 \exists \varepsilon > 0: \, x \in V(c,\varepsilon)\cap D_{c^+} \Rightarrow f(x) \in V(a,\delta)

Theorem 39 Let {D \subset \mathbb{R}}, {f:D \rightarrow \mathbb{R}}, and {c \in D^\prime}. If {\displaystyle \lim_{x \rightarrow c^-}f(x)=\lim_{x \rightarrow c^+}f(x)=a}, then {\displaystyle \lim_{x \rightarrow c}f(x)=a}.

Proof: Let {\delta > 0}. By the {\varepsilon-\delta} condition it is:

\displaystyle  \exists \varepsilon_1>0:x \in V(c,\varepsilon_1)\cap D_{c^+} \Rightarrow f(x) \in V(a,\delta)

\displaystyle  \exists \varepsilon_2>0:x \in V(c,\varepsilon_2)\cap D_{c^-} \Rightarrow f(x) \in V(a,\delta)

Thus by taking {\varepsilon =\mathrm{min} \left\lbrace \varepsilon_1, \varepsilon_2 \right\rbrace } it follows that {x \in V(c,\varepsilon) \cap (D \setminus \left\lbrace c \right\rbrace )} implies {x \in V(c,\varepsilon) \cap D_{c^+}} or {x \in V(c,\varepsilon) \cap D_{c^-}}, and in either case {f(x) \in V(a,\delta)}

In conclusion:

{ \forall \delta > 0 \exists \varepsilon > 0: x \in V(c,\varepsilon)\cap (D\setminus \left\lbrace c \right\rbrace ) \Rightarrow f(x) \in V(a,\delta) } which is equivalent to saying that {\displaystyle \lim_{x \rightarrow c} f(x)=a}. \Box

Definition 34 Let {D \subset \mathbb{R}}; {f: D \rightarrow \mathbb{R}} and {c \in D}. We say that {f(x)} is continuous at point {c} if for all sequences {x_n} of points in {D} such that {\lim x_n = c} it is {\lim f(x_n)=f(c)}.

A function is said to be continuous if it is continuous in all points in {D}.

A few examples to clarify definition 34

  1. \displaystyle  f(x)=|x| \quad \forall x \in \mathbb{R}

    Let {c \in \mathbb{R}} and {x_n} a sequence such that {x_n \rightarrow c}. Then {f(x_n)=|x_n|} and {\lim f(x_n) = \lim |x_n| = |c|}. In conclusion {f(x_n) \rightarrow f(c)}, which is equivalent to saying that {f} is continuous at {c}. Since {c} can be any given point, {f(x)=|x|} is continuous in {\mathbb{R}}.

  2. Let {f(x)= \sin x} and {x_n} a sequence such that {x_n \rightarrow \theta}. It is {\lim \sin x_n= \sin \theta} and by the same reasoning {\sin x} is also continuous.
  3. In general if {x_n \rightarrow c} it is {\lim f(x_n)=f(c)=f(\lim x_n)}. So for {\exp (x)} it is {\lim \exp (x_n)=\exp (\lim x_n)}.

    If {x_n \rightarrow +\infty } it follows that {\lim \exp(x_n)=+\infty } and for {x_n \rightarrow -\infty} it follows that {\lim \exp(x_n)=0}.

    Thus if we define {\exp (+\infty)=+\infty} and {\exp (-\infty)=0} it follows that it always is {\lim \exp (x_n)=\exp (\lim x_n)}.

  4. Analogously we can define {\log (+\infty)= +\infty} and {\log 0 = -\infty} and it always is {\lim \log x_n = \log (\lim x_n)}.

Theorem 40 (Heine’s theorem for continuity)

Let {D \subset \mathbb{R}}, {f:D \rightarrow \mathbb{R}} and {c \in D}. {f} is continuous at {c} if and only if

\displaystyle  \forall \delta>0 \,\,\exists \, \varepsilon > 0: \, x \in D \wedge |x-c| < \varepsilon \Rightarrow |f(x)-f(c)| < \delta

Or written in terms of neighborhoods

\displaystyle  \forall \delta>0 \,\,\exists \, \varepsilon > 0: \, x \in V(c,\varepsilon) \cap D \Rightarrow f(x) \in V(f(c),\delta)

Proof: Omitted. \Box

As can be seen the {\varepsilon - \delta} condition for continuity in point {c} is very similar to the one for limit {a} in point {c}.

To finish this post I’ll just state a theorem that sheds some light on the connections of these two concepts:

Theorem 41 Let {D \subset \mathbb{R}}, {f:D \rightarrow \mathbb{R}} and {c \in D \cap D^\prime}. Then {f} is continuous at point {c} if and only if {\displaystyle \lim_{x \rightarrow c} f(x) = f(c)}.

Proof: Omitted. \Box

So as this theorem shows the connection between continuity and limit is indeed a deep one, but we can look at the concept of limit as being an auxiliary tool to determine if a function is continuous or not and we should not confuse them.

In the next post I intend to write a little bit more about continuity, but in the meantime a very good text about it can be found here.
