## In case you’re wondering…

Posted in Announcements on January 13, 2012 by ateixeira

…who writes these texts, just take a look at the upper right corner.
I decided to follow the steps in this simple tutorial: Google+ publisher code with badge and +1 button, and now I have a direct connection between my blogs and my Google+ profile.
This will serve as a reminder for me to really blog this year and try to learn a lot of cool things.

Take care and watch this space.

## Newtonian Physics – Introduction

Posted in Announcements, Classical Mechanics, Newtonian Formalism, Physics on July 6, 2011 by ateixeira

The first thing I want to say about this post is that its title is actually a misnomer. Much of what I’ll say here is valid for pretty much the rest of the blog, while some things are only pertinent to Newtonian Physics.

The approach taken in this blog for developing the physical theories will be the axiomatic one. I'll do this for the sake of brevity, internal elegance and consistency. Of course, I'm well aware of the fact that this is only possible with hindsight, but I think that one has a lot to gain when physics is presented this way. Maybe the one who has the most to gain is the presenter rather than the presentee, but since this is my blog I'm calling all the shots.

Maybe a word is in order about what the word axiom means, and a little bit of history will be needed (gasp!!! the first self-contradiction!!!). In ancient Greece, the place where one normally thinks real science started to take shape (actually it wasn't, but that is a whole other can of worms), people who concerned themselves with such matters used two words to signify two things that nowadays are taken as synonyms. Those two words were: axiom and postulate.

Back in the day an axiom was taken to be a self-evident truth, while a postulate was taken to be something that one had to take for certain for the sake of constructing an argument. So, an axiom was a deep truth of nature, while a postulate was something that humans had to resort to in order to reach new knowledge.

As an example of an axiom we have Euclid’s fifth (which revealed itself to be quite the deep mathematical truth!) and as an example of a postulate one has the assumption that Hipparchus made that the sun rays travelled in straight lines from the Sun to the Earth and Moon while he calculated the distances and sizes of those three bodies.

People have become a lot more cynical and in modern day usage those two terms are used as synonyms (and the meaning that prevails is the postulate one).

Axioms arise in Mathematics when one is willing to construct a theory that unifies a body of (not so) disjoint facts into a coherent whole. One should take proper care that the propositions one uses as building blocks are enough for completeness and internal coherence, and that the maximum number of new facts can be derived from the minimum number of assumed propositions.

In Physics things seem to be different at first sight, but let me show you that things aren't that different after all. For starters, one knows ever since Galileo that the verbal method of Aristotle – (meta)physics – isn't the way to go for one to know, predict and even interfere in natural phenomena. For all of this to happen mathematical tools are needed. One gets deeper into the truth of things, and one is also able to get technological progress that, besides messing up the natural environment, also makes people's lives easier. It isn't enough to say that bodies fall under gravity; one also has to specify where, with what energy and in what time interval such a fall happens.

For instance Newton’s theory as it was done by Newton was axiomatic. His three laws are just another name for axioms. They are the propositions that contain the undefined terms whose validity one has to accept in order to achieve new results.

One fundamental difference now arises. While in Mathematics things are normally evaluated in terms of self-consistency and internal elegance (this is a HUGE oversimplification), in physics things are also judged by how well the new results compare with actual measurements in the real world. In Physics, physical theories have to be consistent with what we see around us (another HUGE oversimplification). Hence if Newton's Principia predicted square orbits for the planets, Newton's Principia would have to be scrapped.

Another difference is in the way we physicists arrive at the axioms: normally one has some experimental facts and starts thinking about them and how they are linked with each other. Hopefully one will then be able to use the most fundamental properties as building blocks of our theories and call them axioms (in Physics it is more usual to call them laws).

After digressing a little (thanks for reading, by the way), let me proceed with the defense of the axiomatic way in Physics. I think that knowledge is a lot sounder when one knows where one stands and why one is standing there and not some other place. So, if I tell you what our basics are (it doesn't matter how we get to them) and derive all that can be derived from them, I believe that sounder knowledge is achieved.

The historical/phenomenological method has as its big advantage (according to me at least) that it shows the inner struggles each concept has to endure before being accepted and becoming part of the reigning paradigm. It also makes things more approachable at a first attempt, but I think that the merits of this approach stop at this initial pedagogy.

The downsides of the axiomatic way are that, at first sight, it seems highly artificial, and it may not be what most people are used to and want to see when they set out to learn physics.

Moving on from this rather big lecture let me explain what I’ll do in the Newtonian Physics part of this blog:

1. I’ll start off by introducing units of measurement, dimensional analysis and explain why they are important in Physics.
2. A little bit on error propagation and why it matters in physics. Yes, this is mostly a theoretical blog but I consider this to be part of the physicist knowing where he/she stands paradigm.
3. Assume that the reader knows differential and integral calculus (even though I’ll continue my posts on Basic Mathematics).
4. Introduce the Newtonian axioms and what most people think Newton meant to say while introducing them.
5. Do a lot of calculations.
6. Have a lot of fun!

Posted in Announcements on June 12, 2011 by ateixeira

It’s been a while since my last update on this blog, but I can assure you that things will go on.

I still intend to follow the plan I laid out in the beginning, and I understand if some people are getting turned off by the long bouts of inactivity this blog has (the next post is almost finished though).

For those of you that just can't wait to get your Physics right away, you can go to this other blog of mine: ateixeira. I'm starting with the Physics right away (and I think that in the future some of what I'll write there will end up here), even though it'll be at a somewhat advanced level.

So, if you just can’t wait to get to Physics just click here and get done with it!

## A new blog

Posted in Announcements on April 29, 2011 by ateixeira

A group of friends decided to fill a void in the Portuguese-language blogosphere: Mar de Dirac.

The goal of the blog is to produce new Physics through the interaction of its members, and also to be a platform for teaching and discussing Physics (and directly related areas) in Portuguese without being afraid to resort to equations.

Visit, comment and spread the word.

## Real Analysis – Limits and Continuity V

Posted in Uncategorized on April 23, 2011 by ateixeira

The ${\epsilon}$ ${\delta}$ condition is somewhat hard to get into our heads as neophytes. On top of that, the similarity of the ${\epsilon}$ ${\delta}$ definitions for limit and continuity can increase the confusion, and to try to counter those frequent turns of events the first part of this post will try to clarify the ${\epsilon}$ ${\delta}$ condition by means of examples.

${\epsilon}$ ${\delta}$ for Continuity —

First we’ll start things off with something really simple.

Let ${f(x)=\alpha}$ which is obviously continuous.

The gist of the ${\epsilon}$ ${\delta}$ reasoning is that we want to show that no matter the ${\delta}$ that is chosen at first, it is always possible to find an ${\epsilon}$ that satisfies Heine's criterion for continuity.

Getting back to our function ${f(x)=\alpha}$, we want ${|f(x)-f(c)| < \delta}$. Here ${f(x)=f(c)=\alpha}$, so

{\begin{aligned} |f(x)-f(c)| &< \delta \\ |\alpha-\alpha| &< \delta \\ |0| &< \delta \\ 0 &< \delta \end{aligned}}

Which is trivially true since ${\delta > 0}$ by assumption. Hence any value of ${\epsilon}$ will satisfy Heine’s criterion for continuity and ${f(x)=\alpha}$ is continuous at ${c}$.

Since we never made any assumption about ${c}$ other than ${c \in {\mathbb R}}$ we conclude that ${f(x)=\alpha}$ is continuous in all points of its domain.

Let us now look at ${f(x)=x}$. Again we’ll look at continuity for point ${c}$ (${f(c)=c}$):

{\begin{aligned} |f(x)-f(c)| &< \delta \\ |x-c| &< \delta \end{aligned}}

The last expression is just what we want at this stage, since we want to have something of the form ${x-c}$ (the first part of the ${\epsilon}$ ${\delta}$ criterion).

If we let ${\epsilon=\delta}$ it is ${|x-c| < \epsilon}$ and this completes our proof that ${f(x)=x}$ is continuous at point ${c}$.

And again, since we never made any assumption about ${c}$ other than ${c \in {\mathbb R}}$, we conclude that ${f(x)=x}$ is continuous in all points of its domain.

Now we let ${f(x)=\alpha x + \beta}$ and will see if ${f(x)}$ is continuous at ${c}$.

{\begin{aligned} |f(x)-f(c)| &< \delta \\ |\alpha x + \beta-(\alpha c + \beta)| &< \delta \\ |\alpha x -\alpha c| &< \delta \\ |\alpha||x-c| &< \delta \\ |x-c| &< \dfrac{\delta}{|\alpha|} \end{aligned}}

Hence if we let ${\epsilon=\delta / |\alpha|}$ (assuming ${\alpha \neq 0}$; the case ${\alpha = 0}$ is the constant function we already handled) it is ${|x-c|< \epsilon}$ and ${f(x)=\alpha x + \beta}$ is continuous at ${c}$.
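As a quick numerical sanity check (not a substitute for the proof), we can sample points within ${\epsilon}$ of ${c}$ and confirm that the images stay within ${\delta}$ of ${f(c)}$. The helper below is purely illustrative; all names are mine.

```python
# Illustrative sanity check (not a proof) of the epsilon-delta game for
# f(x) = alpha*x + beta: given an output tolerance delta, the choice
# epsilon = delta/|alpha| keeps |f(x) - f(c)| below delta.
def epsilon_works(alpha, beta, c, delta, samples=1000):
    f = lambda x: alpha * x + beta
    epsilon = delta / abs(alpha)
    # sample points strictly inside (c - epsilon, c + epsilon)
    xs = [c - epsilon + 2 * epsilon * i / samples for i in range(1, samples)]
    return all(abs(f(x) - f(c)) < delta for x in xs)

checks = [epsilon_works(3.0, -2.0, 1.5, d) for d in (1.0, 0.1, 1e-6)]
```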

As a final example of Heine’s criterion of continuity we’ll look into ${f(x)=\sin x}$.

{\begin{aligned} |f(x)-f(c)| &< \delta \\ |\sin x-\sin c| &< \delta \end{aligned}}

Since we want something like ${|x-c| < g(\delta)}$ the last expression isn’t very useful to us.

In this case we'll take an alternative approach, which nevertheless works and has exactly the same spirit as what we've been using so far.

Please look at every step I make with a critical eye and see if you can really understand what’s going on with this deduction.

{\begin{aligned} |\sin x-\sin c| &= 2\left| \cos\left( \dfrac{x+c}{2}\right)\right| \left| \sin\left( \dfrac{x-c}{2}\right)\right|\\ &< 2\left| \sin\left( \dfrac{x-c}{2}\right)\right| \end{aligned}}

Since ${x \rightarrow c}$ we know that at some point ${\left|\dfrac{x-c}{2}\right|}$ will be in the first quadrant, where ${|\sin t| < |t|}$ holds for ${t \neq 0}$. Thus

{\begin{aligned} 2\left| \sin\left( \dfrac{x-c}{2}\right)\right| &< 2\left|\dfrac{x-c}{2}\right| \\ &= |x-c|\\ &< \epsilon \end{aligned}}

Where the last inequality follows by hypothesis.

That is to say that if we let ${\epsilon=\delta}$ it is ${|x-c|<\epsilon \Rightarrow | \sin x - \sin c | < \delta}$, which is the epsilon delta definition of continuity.
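The key estimate in the deduction above is ${|\sin x - \sin c| \leq |x-c|}$; a numerical spot check (illustrative only) is easy:

```python
import math

# Spot-check the estimate |sin x - sin c| <= |x - c| used in the proof.
pairs = [(0.3, 0.1), (2.0, 1.5), (-1.0, 0.5), (3.0, -3.0)]
estimate_holds = all(
    abs(math.sin(x) - math.sin(c)) <= abs(x - c) for x, c in pairs
)
```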

${\epsilon}$ ${\delta}$ for Limits —

After looking into some simple ${\epsilon}$ ${\delta}$ proofs for continuity we’ll take a look at ${\epsilon}$ ${\delta}$ for limits.

The procedure is the same, but we’ll state it explicitly so that people can see it in action.

Let ${f(x)=2}$. We want to show that it is ${\displaystyle \lim_{x \rightarrow 1}f(x)=2}$.

{\begin{aligned} |f(x)-2| &< \delta \\ |2-2| &< \delta \\ 0 &< \delta \end{aligned}}

Which is trivially true for any value of ${\delta}$, hence ${\epsilon}$ can be any positive real number.

Let ${f(x)=2x+3}$. We want to show that it is ${\displaystyle \lim_{x \rightarrow 1}f(x)=5}$.

{\begin{aligned} |f(x)-5| &< \delta \\ |2x+3-5| &< \delta \\ |2x-2| &< \delta \\ 2|x-1| &< \delta \\ |x-1| &< \dfrac{\delta}{2} \end{aligned}}

With ${\epsilon=\delta/2}$ we satisfy the ${\epsilon}$ ${\delta}$ for limit.

As a final example let us look at the modified Dirichlet function that was introduced at this post.

$\displaystyle f(x) = \begin{cases} 0 \quad x \in \mathbb{Q}\\ x \quad x \in \mathbb{R}\setminus \mathbb{Q} \end{cases}$

At that post it was proved that for ${a \neq 0}$ ${\displaystyle\lim_{x \rightarrow a}f(x)}$ didn’t exist and it was promised that in a later date I’d show that ${\displaystyle\lim_{x \rightarrow 0}f(x)=0}$ using the epsilon delta condition.

Since we now know what the epsilon delta condition is and already have some experience with it, we will tackle this somewhat more abstruse problem.

{\begin{aligned} |f(x)-f(0)| &< \delta \\ |f(x)-0| &< \delta \end{aligned}}

Since ${f(x)=0}$ or ${f(x)=x}$ we have two cases to look at.

In the first case it is ${|0-0| < \delta}$ which is trivially valid, hence ${\epsilon}$ can be any real positive number.

In the second case it is ${|x-0| < \delta}$. Hence letting ${\epsilon=\delta}$ gets the job done.

Since we proved that ${\displaystyle\lim_{x \rightarrow 0}f(x)=0=f(0)}$ the conclusion is that the modified Dirichlet function that was presented is only continuous at ${x=0}$.

As was said previously, they don’t make local concepts more local than that.

## Real Analysis Exercises III

Posted in Basic Mathematics, Real Analysis on July 29, 2009 by ateixeira

1.

a) Calculate ${ \displaystyle \sum_{k=p}^{m}(u_{k+1}-u_k)}$ and ${\displaystyle\sum_{k=p}^{m}(u_k - u_{k+1})}$

${\displaystyle \sum_{k=p}^{m}(u_{k+1}-u_k)=u_{p+1}-u_{p}+u_{p+2}-u_{p+1}+\ldots +u_{m+1}-u_{m}}$

As we can see the first term cancels out with the fourth, the third with the sixth, and so on, and all we are left with is the second term and the second-to-last term:

${\displaystyle \sum_{k=p}^{m}(u_{k+1}-u_k) = u_{m+1}-u_p}$

{\begin{aligned} \displaystyle \sum_{k=p}^{m}(u_k - u_{k+1})&= - \sum_{k=p}^{m}(u_{k+1}-u_k)\\ &= - (u_{m+1}-u_p)\\ &= u_p-u_{m+1} \end{aligned}}

b) Calculate ${\displaystyle \lim \sum_{k=1}^n\dfrac{1}{k(k+1)}}$ using the previous result.

${\displaystyle \lim \sum_{k=1}^n\dfrac{1}{k(k+1)}= \lim \sum_{k=1}^n \left( \frac{1}{k}-\frac{1}{k+1} \right) }$

Defining ${u_k=1/k}$ the previous sum can be written as

{\begin{aligned} \displaystyle \lim \sum_{k=1}^n \left( u_k-u_{k+1} \right)&=\lim (u_1 - u_{n+1})\\ &= \lim \left(1-\frac{1}{n+1}\right)\\ &=1 \end{aligned}}
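We can corroborate this limit numerically. Using exact fractions, the telescoping argument says the partial sums should equal exactly ${1-\frac{1}{n+1}}$ (a sketch, not part of the proof):

```python
from fractions import Fraction

# Partial sums of sum 1/(k(k+1)); by the telescoping argument they
# should equal exactly 1 - 1/(n+1), hence tend to 1.
def partial_sum(n):
    return sum(Fraction(1, k * (k + 1)) for k in range(1, n + 1))

telescopes = [partial_sum(n) == 1 - Fraction(1, n + 1) for n in (1, 5, 100)]
gap = float(1 - partial_sum(10_000))  # about 1/(n+1), so roughly 1e-4
```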

This last result apparently has a funny story. Mengoli was the first one to calculate ${\displaystyle \lim \sum_{k=1}^n\dfrac{1}{k(k+1)}=1}$.

At the time this happened people did research in mathematics (I’m using this term rather abusively) in a somewhat different vein. They didn’t rush to print what they found like today.

Many times people held out their results for years while tormenting their rivals about what they found.

This is exactly what Mengoli did. In the times he was around the theory of series wasn’t much developed, thus this result, that we can calculate without being particularly brilliant in Mathematics, was something to take note of.

So, he wrote some letters to people saying that ${\displaystyle \lim \sum_{k=1}^n\dfrac{1}{k(k+1)}=1}$, but not how he concluded that.

The other mathematicians he sent the result to didn't know about his methods, and all they could do was add the numbers up explicitly. The only thing they could see was that even though they summed more and more terms the result was always less than ${1}$, getting nearer and nearer to ${1}$.

Of course this didn't prove anything, since summing up a billion terms isn't the same as summing an infinite number of terms, and everyone but Mengoli was dumbfounded by that surprising result.

c) Calculate ${\displaystyle \sum_{k=0}^{n-1}(2k+1) }$

In this exercise what we are calculating is the sum of the first ${n}$ odd numbers. This result was already known to the ancient Greeks and it was nothing short of astounding to them.

But enough with the talk already:

{\begin{aligned} \displaystyle \sum_{k=0}^{n-1}(2k+1)&=\sum_{k=0}^{n-1}\left[ (k+1)^2-k^2\right]\\ &= \sum_{k=0}^{n-1}(u_{k+1}-u_k) \end{aligned}}

With ${u_k=k^2}$

Using the now familiar formula

{\begin{aligned} \displaystyle \sum_{k=0}^{n-1}(2k+1) &= (n-1+1)^2-0^2\\ &= n^2 \end{aligned}}

An astounding result indeed!

Just look at ${\displaystyle \sum_{k=0}^{n-1}(2k+1)=n^2}$, interpret the result and try not to be as surprised as the ancient Greeks were.
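For the skeptical, a one-liner confirms the pattern for the first few values of ${n}$:

```python
# The sum of the first n odd numbers 1 + 3 + ... + (2n-1) equals n^2.
odd_sums_are_squares = all(
    sum(2 * k + 1 for k in range(n)) == n * n for n in range(1, 51)
)
```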

2.

a) Using 1.a) and ${a^k=a^k\dfrac{a-1}{a-1}\quad (a \neq 1)}$ calculate ${\displaystyle \sum_{k=0}^{n-1} a^k }$

{\begin{aligned} \displaystyle \sum_{k=0}^{n-1} a^k &= \displaystyle\sum_{k=0}^{n-1} \left[ a^k\frac{a-1}{a-1}\right]\\ &= \displaystyle \frac{1}{a-1}\sum_{k=0}^{n-1}\left( a^{k+1}-a^k\right)\\ &= \displaystyle\frac{1}{a-1}(a^n-1)\\ &= \displaystyle\frac{a^n-1}{a-1} \end{aligned}}
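Using exact rational arithmetic we can check the closed form against a brute-force sum (an illustrative sketch, with names of my choosing):

```python
from fractions import Fraction

# Geometric sum: sum_{k=0}^{n-1} a^k should equal (a^n - 1)/(a - 1) for a != 1.
def brute_force(a, n):
    return sum(a**k for k in range(n))

a, n = Fraction(3, 2), 12
closed_form = (a**n - 1) / (a - 1)
geometric_matches = brute_force(a, n) == closed_form
```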

b) Using a) establish the Bernoulli inequality ${a^n-1 \geq n(a-1)}$ if ${a > 0}$ and ${n \in \mathbb{Z}^+}$

If ${a=1}$ it is ${1-1=n(1-1) \Rightarrow 0=0}$ which is trivially true.

If ${n=1}$ it is ${a-1=a-1}$ which is trivially true.

For ${n \geq 2 }$ and ${a>1}$ it is:

{\begin{aligned} \displaystyle \sum_{k=0}^{n-1}a^k&= 1+a+a^2+\ldots+a^{n-1}\\ &> 1+1+\ldots+1\\ &= n \end{aligned}}

Thus

{\begin{aligned} \dfrac{a^n-1}{a-1} &> n \\ a^n-1 &> n(a-1) \end{aligned}}

Since ${a > 1}$

Finally if ${0 < a <1 }$ it is

{\begin{aligned} \displaystyle \sum_{k=0}^{n-1}a^k&= 1+a+a^2+\ldots+a^{n-1}\\ &< 1+1+\ldots+1\\ &= n \end{aligned}}

Thus

{\begin{aligned} \dfrac{a^n-1}{a-1} & < n \\ a^n - 1 & > n(a-1) \end{aligned}}

Since ${a < 1}$
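Now that the inequality is established for all ${a>0}$, a quick numerical spot check (illustrative only) over a grid of values:

```python
import itertools

# Bernoulli's inequality: a^n - 1 >= n*(a - 1) for a > 0 and positive integer n.
bernoulli_holds = all(
    a**n - 1 >= n * (a - 1)
    for a, n in itertools.product((0.1, 0.5, 1.0, 1.5, 3.0), range(1, 20))
)
```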

c) Use b) to calculate ${\lim a^n}$ if ${a > 1}$ and then conclude that ${\lim a^n=0}$ if ${|a| < 1}$.

By b) it is

{\begin{aligned} a^n &> n(a-1)+1 \\ \lim a^n &\geq \lim \left( n(a-1)+1 \right)= +\infty \end{aligned}}

Hence ${\lim a^n = +\infty \quad (a>1)}$

For the second part of the exercise we will instead calculate ${\lim |a^n|}$, since we know that ${ u_n \rightarrow 0 \Leftrightarrow |u_n| \rightarrow 0}$

Let us make a change of variable ${t=1/a}$. Thus ${|a|=|1/t|}$ and

{\begin{aligned} \lim |a^n| &= \lim |1/t|^n\\ &= \dfrac{1}{\lim |t|^n}\\ &= \dfrac{1}{+\infty}\\ &=0 \end{aligned}}
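Numerically the two behaviours are easy to see (a sketch only, no proof content):

```python
# a^n blows up for a > 1 and |a|^n shrinks toward 0 for |a| < 1.
grows = 1.1**200    # roughly e^(200*log 1.1), a huge number
shrinks = 0.9**200  # essentially zero
```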

3. Consider the sequences ${u_n=\left( 1+\dfrac{1}{n} \right)^n }$ and ${v_n=\left( 1+\dfrac{1}{n} \right)^{n+1}}$

a) Calculate ${\dfrac{v_n}{v_{n+1}}}$ and ${\dfrac{u_{n+1}}{u_n}}$. Then use Bernoulli’s inequality to show that ${v_n}$ is strictly decreasing and that ${u_n}$ is strictly increasing.

{\begin{aligned} \dfrac{v_n}{v_{n+1}} &= \dfrac{\left( 1+1/n \right)^{n+1}}{\left(1+1/(n+1)\right)^{n+2}}\\ &=\dfrac{\left(\dfrac{n+1}{n}\right)^{n+1}}{\left( \dfrac{n+2}{n+1} \right)^{n+2}}\\ &= \dfrac{n}{n+1}\dfrac{\left(\dfrac{n+1}{n}\right)^{n+2}}{\left( \dfrac{n+2}{n+1} \right)^{n+2}}\\ &=\dfrac{n}{n+1}\left( \dfrac{(n+1)^2}{n(n+2)} \right)^{n+2}\\ &= \dfrac{n}{n+1}\left( \dfrac{n^2+2n+1}{n(n+2)} \right)^{n+2}\\ &=\dfrac{n}{n+1}\left( \dfrac{n(n+2)+1}{n(n+2)} \right)^{n+2}\\ &= \dfrac{n}{n+1}\left( 1+\dfrac{1}{n(n+2)} \right)^{n+2} \end{aligned}}

After having calculated ${v_n/v_{n+1}}$ we can use Bernoulli’s inequality, with ${a=1+\dfrac{1}{n(n+2)}}$ , to conclude that ${v_n}$ is strictly decreasing.

{\begin{aligned} \dfrac{n}{n+1}\left( 1+\dfrac{1}{n(n+2)} \right)^{n+2} &> \dfrac{n}{n+1}\left(1 + \dfrac{n+2}{n(n+2)} \right)\\ &= \dfrac{n}{n+1}(1+1/n)\\ &= \dfrac{n}{n+1}\dfrac{n+1}{n}\\ &= 1 \end{aligned}}

Thus ${v_n}$ is strictly decreasing.

With a similar technique we can prove that

${ \displaystyle u_{n+1}/u_n=\dfrac{n+1}{n}\left( 1- \dfrac{1}{(n+1)^2}\right)^{n+1}}$

After that by using Bernoulli’s inequality like in the previous example one can show that ${u_{n+1}/u_n>1}$ and thus ${u_n}$ is strictly increasing.

c) Using a) and b) and ${\lim u_n = e}$ prove the following inequalities: ${(1+1/n)^n < e <(1+1/n)^{n+1}}$.

{\begin{aligned} \lim v_n&= \lim(1+1/n)^n(1+1/n)\\ &= e\times 1\\ &= e \end{aligned}}

We already know that ${v_n}$ is strictly decreasing with ${\lim v_n = e}$, so it is ${e < (1+1/n)^{n+1}}$.

On the other hand ${u_n}$ is strictly increasing and ${\lim u_n=e}$, so ${(1+1/n)^n < e}$.

Hence ${(1+1/n)^n < e < (1+1/n)^{n+1}}$.
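The squeeze ${(1+1/n)^n < e < (1+1/n)^{n+1}}$ is easy to check numerically for a few values of ${n}$:

```python
import math

# Check (1 + 1/n)^n < e < (1 + 1/n)^(n+1) for several n.
squeeze_holds = all(
    (1 + 1 / n)**n < math.e < (1 + 1 / n)**(n + 1)
    for n in (1, 2, 10, 100, 1000)
)
```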

d) Use c) to prove that ${ \displaystyle \frac{1}{n+1}<\log (n+1)-\log n <\frac{1}{n}}$

{ \begin{aligned} (1+1/n)^n &< e \\ n \log \left( \dfrac{n+1}{n} \right) &< 1 \\ \log(n+1) - \log n &< \dfrac{1}{n} \end{aligned} }

And now for the second part of the inequality:

{ \begin{aligned} e &< \left(1+\dfrac{1}{n}\right)^{n+1} \\ 1 &< (n+1)\log \left(\dfrac{n+1}{n}\right) \\ \dfrac{1}{n+1} &< \log (n+1) -\log n \end{aligned}}

In conclusion it is ${ \dfrac{1}{n+1}<\log (n+1)- \log n < \dfrac{1}{n} }$
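Again a numerical spot check of the double inequality (illustrative, not a proof):

```python
import math

# Check 1/(n+1) < log(n+1) - log(n) < 1/n for a range of n.
log_gap_ok = all(
    1 / (n + 1) < math.log(n + 1) - math.log(n) < 1 / n
    for n in range(1, 1000)
)
```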

4.

a) Using 3d) show that

${ \displaystyle 1+\log k < (k+1)\log (k+1)-k\log k < 1+ \log(k+1) }$

From

{ \begin{aligned} \dfrac{1}{k+1} &< \log (k+1) - \log k \\ 1 &< (k+1)\log(k+1) - (k+1)\log k \\ 1+ \log k &< (k+1)\log(k+1)-k \log k \end{aligned}}

With a similar reasoning we can also prove that ${(k+1)\log(k+1)-k\log k < 1+ \log(k+1)}$.

Thus it is ${1+\log k < (k+1)\log(k+1)-k\log k < 1+ \log(k+1)}$

b) Sum the previous inequalities between ${1 \leq k \leq n-1}$.

{\begin{aligned} \displaystyle \sum_{k=1}^{n-1}(1+ \log k) &< \sum_{k=1}^{n-1} ((k+1)\log(k+1)-k \log k)\\ &< \displaystyle \sum_{k=1}^{n-1}(1+\log(k+1)) \end{aligned}}

Now

{\begin{aligned} \displaystyle \sum_{k=1}^{n-1} (1+ \log k) &= \sum_{k=1}^{n-1}1+\sum_{k=1}^{n-1}\log k\\ &= n-1 +\sum_{k=1}^{n-1}\log k \end{aligned}}

And

{\begin{aligned} \displaystyle \sum_{k=1}^{n-1}\log k &= \log 1 + \log2 +\ldots+\log(n-1)\\ &=\log((n-1)!) \end{aligned}}

It also is

{\begin{aligned} \displaystyle \sum_{k=1}^{n-1}((k+1)\log(k+1) - k\log k)&= n\log n -\log 1\\ &=n\log n \end{aligned}}

And ${\displaystyle \sum_{k=1}^{n-1}(1+\log(k+1))=n-1+\log n!}$

Thus it is ${n-1+\log(n-1)! < n\log n < n-1+\log n!}$

c) Conclude the following inequalities ${ n \log n -n +1 < \log n! < n \log n -n+1+\log n}$ and establish Stirling's approximation ${ \displaystyle \log n! = n\log n -n +r_n}$ with ${1 < r_n < 1+\log n}$ (equivalently ${n! = C_n (n/e)^n}$ with ${e < C_n < e\,n}$, where ${C_n = e^{r_n}}$)

{ \begin{aligned} n-1 + \log (n-1)! &< n\log n \\ \log (n-1)! &< n\log n -n+1 \\ \log n! &< n\log n -n +1+\log n \end{aligned}}

On the other hand

{\begin{aligned} n\log n &< n-1 + \log n! \\ n\log n -n +1 &< \log n! \end{aligned} }

Thus

{\begin{aligned} n\log n -n +1 &< \log n! \\ &< n\log n -n +1 +\log n \end{aligned}}

And from this follows ${1 < \log n! -n\log n+n < 1+\log n}$

Defining ${r_n=\log n! -n\log n+n}$ it is ${\log n! = n\log n-n+r_n}$ with ${1 < r_n < 1+\log n}$
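We can watch the bounds ${1 < r_n < 1+\log n}$ hold numerically. This sketch uses `math.lgamma`, which computes ${\log \Gamma}$, and ${\Gamma(n+1)=n!}$:

```python
import math

# r_n = log(n!) - n*log(n) + n; the bounds above say 1 < r_n < 1 + log n.
def r(n):
    # math.lgamma(n + 1) equals log(n!) for integer n
    return math.lgamma(n + 1) - n * math.log(n) + n

stirling_bounds_hold = all(1 < r(n) < 1 + math.log(n) for n in range(2, 2000))
```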

5.

Show that ${\log \left(1+\dfrac{1}{n}\right)\sim \dfrac{1}{n}}$ and that ${\log \left(1+\dfrac{1}{n^2}\right)\sim \dfrac{1}{n^2}}$

We know that

{ \begin{aligned} \dfrac{1}{n+1} &< \log(n+1)-\log n < \dfrac{1}{n} \\ \dfrac{1}{n+1} &< \log\left( \dfrac{n+1}{n}\right) < \dfrac{1}{n} \\ \dfrac{1}{n+1} &< \log\left( 1+\dfrac{1}{n}\right) <\dfrac{1}{n} \\ \dfrac{1/(n+1)}{1/n} &< \dfrac{\log (1+1/n)}{1/n}<1 \\ \lim \dfrac{n}{n+1} &\leq \lim \dfrac{\log (1+1/n)}{1/n} \leq \lim 1 \\ 1 &\leq \lim \dfrac{\log (1+1/n)}{1/n} \leq 1 \end{aligned}}

Thus ${\lim \dfrac{\log (1+1/n)}{1/n}=1}$ and this equivalent to saying that ${\log \left(1+\dfrac{1}{n}\right)\sim \dfrac{1}{n}}$

Let ${u_n = \dfrac{\log (1+1/n)}{1/n}}$. In this case it is ${\dfrac{\log (1+1/n^2)}{1/n^2}=u_{n^2}}$. Since ${u_{n^2}}$ is a subsequence of ${u_n}$ we know that ${\lim u_{n^2}= \lim u_n}$ and so it also is ${\log \left(1+\dfrac{1}{n^2}\right)\sim \dfrac{1}{n^2}}$.
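The asymptotic equivalence shows up numerically as the ratio tending to ${1}$ (illustrative sketch):

```python
import math

# log(1 + 1/n)/(1/n) = n*log(1 + 1/n) should approach 1 as n grows.
def ratio(n):
    return n * math.log(1 + 1 / n)

ratios = [ratio(10), ratio(1000), ratio(10**6)]
```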

6. Show that ${u_n \sim v_n}$ and ${v_n \sim w_n \Rightarrow u_n \sim w_n }$

By hypothesis it is ${u_n=h_n v_n}$, ${v_n=t_n w_n}$ with ${h_n,t_n \rightarrow 1}$.

Substituting the second equality in the first we obtain ${u_n = h_n t_n w_n}$.

Let ${s_n = h_n t_n}$ and we write ${u_n =s_n w_n }$ with ${\lim s_n = \lim h_n \lim t_n =1\times 1=1}$.

Thus ${u_n \sim w_n}$

7. Let ${u_n = O\left(1/n\right)}$ and ${v_n = O (1/ \sqrt{n})}$. Show that ${u_n v_n = o ( 1/n^{4/3})}$.

${u_n = h_n 1/n}$ and ${v_n = t_n 1/ \sqrt{n}}$ with ${h_n}$ and ${t_n}$ bounded sequences. Now

{\begin{aligned} u_n v_n &= \dfrac{h_n}{n} \dfrac{t_n}{\sqrt{n}}\\ &= \dfrac{h_n t_n}{n^{3/2}}\\ &=\dfrac{h_n t_n}{n^{1/6}}\dfrac{1}{n^{4/3}} \end{aligned}}

Letting ${s_n = \dfrac{h_n t_n}{n^{1/6}}}$, it is ${\lim s_n = \lim \dfrac{h_n t_n}{n^{1/6}} = 0}$ since ${h_n t_n}$ is bounded.

Thus ${u_n v_n = o (1/n^{4/3})}$

8. Using Stirling’s approximation show that ${\log n! = n\log n -n + O(\log n)}$

We know that it is ${\log n! = n\log n -n + r_n}$ with ${ 1< r_n < 1+\log n}$. Thus

{\begin{aligned} 0 &<\dfrac{1}{\log n}\\ &< \dfrac{r_n}{\log n}\\ &< \dfrac{1}{\log n} +1\\ &\leq \dfrac{1}{\log 2}+1 \end{aligned}}

Where we used the fact that ${ \dfrac{1}{\log n}+1}$ is a decreasing function of ${n}$, so its maximum for ${n \geq 2}$ is attained at ${n=2}$.

Thus ${\dfrac{r_n}{\log n}}$ is bounded and so ${r_n=O(\log n)}$ as desired.

## Real Analysis – Limits and Continuity IV

Posted in Basic Mathematics, Real Analysis on July 1, 2009 by ateixeira

As an application of theorem 35 let us look into the functions ${f(x)=e^x}$ and ${g(x)=\log x}$.

Now ${f:\mathbb{R} \rightarrow \mathbb{R^+}}$ and is a strictly increasing function, and ${g:\mathbb{R^+} \rightarrow \mathbb{R}}$ also is a strictly increasing function.

By theorem 35 it is ${\displaystyle \lim_{x \rightarrow +\infty}\exp x = \mathrm{sup} [\mathbb{R^+}] = +\infty}$ and ${\displaystyle \lim_{x \rightarrow -\infty} \exp x= \mathrm{inf} [\mathbb{R^+}] = 0}$.

As for ${g(x)}$ it is ${\displaystyle \lim_{x \rightarrow +\infty} \log x=\sup [\mathbb{R}]=+\infty}$ and ${\displaystyle \lim_{x \rightarrow 0} \log x = \inf [\mathbb{R}]=-\infty}$.

 Definition 33 Let ${D \subset \mathbb{R}}$; ${f,g: D \rightarrow \mathbb{R}}$, and ${c \in D^\prime}$. Let us suppose that there exists ${h: D \rightarrow \mathbb{R}}$ such that ${f(x) = h(x)g(x) }$. If ${\displaystyle \lim_{x \rightarrow c} h(x)=1 }$ we say that ${f(x)}$ is asymptotically equal to ${g(x)}$ when ${x \rightarrow c}$ and write ${f(x) \sim g(x)\,\, (x \rightarrow c)}$. If ${\displaystyle \lim_{x \rightarrow c} h(x) = 0}$ we say that ${f(x)}$ is little-o of ${g(x)}$ when ${x \rightarrow c}$ and write ${ f(x) = o (g(x)) \,\, (x \rightarrow c)}$. If ${h(x)}$ is bounded in some neighborhood of ${c}$ we say that ${f(x)}$ is big-o of ${g(x)}$ when ${x \rightarrow c}$ and write ${f(x)=O(g(x)) \;(x \rightarrow c)}$.

If in the previous definition ${g(x)}$ doesn’t equal zero:

1. ${ f(x) \sim g(x) \Leftrightarrow \displaystyle \lim_{x \rightarrow c} \frac{f(x)}{g(x)} = 1}$.
2. ${ f(x) = o (g(x)) \,\, (x \rightarrow c) \Leftrightarrow \displaystyle \lim_{x \rightarrow c} \frac{f(x)}{g(x)} = 0}$.
3. ${ f(x) = O(g(x)) \,\, (x \rightarrow c) \Leftrightarrow \dfrac{f(x)}{g(x)} }$ is bounded in some neighborhood of ${c}$.

These notions work exactly as they worked for sequences and they give the same type of information about the behavior of the functions in question.

 Theorem 36 Let ${D \subset \mathbb{R}}$; ${f,g,f_0,g_0: D \rightarrow \mathbb{R}}$, and ${c \in D^\prime}$. Then: If ${f(x) \sim g(x) \,\, (x \rightarrow c)}$ and ${\displaystyle \lim_{x \rightarrow c}g(x) = a}$, then ${\displaystyle \lim_{x \rightarrow c} f(x) = a}$ If ${f(x) \sim f_0(x) \,\, (x \rightarrow c)}$ and ${g(x) \sim g_0(x) \,\, (x \rightarrow c)}$, then ${f(x)g(x) \sim f_0(x)g_0(x) \,\, (x \rightarrow c)}$ and ${f(x)/g(x) \sim f_0(x)/g_0(x) \,\, (x \rightarrow c)}$. Proof: Left as an exercise. $\Box$

As an example of the previous definitions we can say, with full generality, that for any polynomial function we can keep track of the term with the leading degree if we are interested in how it behaves for larger and larger values.

But on the other hand if we are interested on how the polynomial function behaves near the origin we have to keep track of the term with the smaller degree. To see that this is indeed so let us introduce the following example:

$\displaystyle f(x) = x^2+x$

Now ${x^2+x=(x+1)x}$. If we take ${h(x)=x+1}$ it is ${\displaystyle \lim_{x \rightarrow 0} h(x)=1}$ and so it is ${x^2+x \sim x \,\, (x \rightarrow 0)}$.

Another example that has a lot of interest to us is:

$\displaystyle \sin x \sim x \,\, (x \rightarrow 0)$

We can see that it is so because of ${\displaystyle \lim_{x \rightarrow 0} \frac{\sin x}{x} = 1}$
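Numerically, the ratio ${\sin x / x}$ indeed creeps toward ${1}$ as ${x \rightarrow 0}$ (a sketch, not a proof):

```python
import math

# sin(x)/x approaches 1 as x -> 0.
sin_ratios = [math.sin(x) / x for x in (0.5, 0.1, 0.001)]
```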

— 6.6. Epsilon-delta condition —

And it is time for us to introduce the concept of limit using the ${ \epsilon - \delta }$ condition.

Once again we are walking into regions of greater and greater rigor at the expense of having to use more abstract concepts while we are doing it. Things are going to get a little harder for people that aren’t used to this types of reasoning but please bear with me and you’ll find it rewarding when you get used to it.

The point of the ${ \epsilon - \delta }$ condition is to avoid using fuzzy concepts like near, input signals and output signals, or the somewhat weak definition of limit we have been using so far.

 Theorem 37 (Heine’s Theorem) Let ${D \subset \mathbb{R}}$, ${f: D \rightarrow \mathbb{R}}$, ${c \in D^\prime}$ and ${a \in \overline{\mathbb{R}}}$. ${\displaystyle \lim_{x \rightarrow c} f(x) = a}$ if and only if $\displaystyle \forall \delta > 0 \, \exists \epsilon >0 : \; x \in V(c,\epsilon) \cap (D \setminus \left\lbrace c \right\rbrace ) \Rightarrow f(x) \in V(a, \delta)$ Proof: Omitted. $\Box$

In case you are wondering what that means, the straightforward answer is that it means exactly what your idea of a function having a limit at a given point is (I'm assuming you have the right idea). It tells us that if a function indeed has limit ${a}$ at point ${c}$ then, if we restrict ourselves to points near ${c}$, the images of those points are all near ${a}$.

Once again I tell the reader to look at this as if it were a game played between two (slightly odd) people. One of them is choosing the ${\delta}$ and the other is choosing the ${\varepsilon}$. But this game isn't just about choosing. The first player gets to choose any ${\delta}$ he wants, but the second has to choose the right ${\varepsilon}$ that makes the condition hold.

If he can prove that he has an ${\varepsilon}$ for every ${\delta}$ that the other player chooses, then he succeeds in the game and the function does have limit ${a}$ at point ${c}$.

 Theorem 38 Let ${D \subset \mathbb{R}}$, ${f: D \rightarrow \mathbb{R}}$, and ${c \in D^\prime}$. If ${\displaystyle \lim_{x \rightarrow c} f(x)}$ exists and is finite, then there exists a neighborhood of ${c }$ where ${f(x)}$ is bounded. Proof: Let ${\displaystyle \lim_{x \rightarrow c} f(x) = a \in \mathbb{R}}$. By theorem 37 with ${\delta=1}$ there exists ${\varepsilon > 0}$ such that {\begin{aligned} x \in V(c,\varepsilon)\cap(D\setminus\left\lbrace c \right\rbrace ) &\Rightarrow f(x) \in V(a,1) \\ &\Rightarrow f(x) \in \left] a-1, a+1\right[ \end{aligned}} Thus ${x\in V(c,\varepsilon)\cap(D\setminus\left\lbrace c \right\rbrace)\Rightarrow a-1 < f(x) < a+1}$. So ${x \in V(c,\varepsilon) \cap D \Rightarrow f (x) \begin{cases} \leq \mathrm{max} \left\lbrace a+1,f(c)\right\rbrace \\ \geq \mathrm{min}\left\lbrace a-1,f(c)\right\rbrace \end{cases} }$ and ${f(x)}$ is bounded in ${V(c,\varepsilon)}$ $\Box$

If ${\displaystyle \lim_{x \rightarrow c} f(x)/g(x)}$ exists, then ${f(x)= O(g(x))\,\, (x \rightarrow c)}$ since in this case it is ${h(x)=f(x)/g(x)}$ and there exists some neighborhood of ${c}$ where ${h(x)}$ is bounded.

After this one may be interested in knowing how we can translate ${\displaystyle \lim_{x \rightarrow c^+} f(x) = a}$ into an ${\varepsilon - \delta}$ condition.

In this case we are considering ${f(x)}$ only in the set ${D_{c^+}}$ and so what we get is:

$\displaystyle \forall \delta > 0 \exists \varepsilon > 0: \, x \in V(c,\varepsilon)\cap D_{c^+} \Rightarrow f(x) \in V(a,\delta)$

 Theorem 39 Let ${D \subset \mathbb{R}}$, ${f:D \rightarrow \mathbb{R}}$, and ${c \in D^\prime}$. If ${\displaystyle \lim_{x \rightarrow c^-}f(x)=\lim_{x \rightarrow c^+}f(x)=a}$, then ${\displaystyle \lim_{x \rightarrow c}f(x)=a}$. Proof: Let ${\delta > 0}$. By the ${\varepsilon-\delta}$ condition it is: $\displaystyle \exists \varepsilon_1>0:x \in V(c,\varepsilon_1)\cap D_{c^+} \Rightarrow f(x) \in V(a,\delta)$ $\displaystyle \exists \varepsilon_2>0:x \in V(c,\varepsilon_2)\cap D_{c^-} \Rightarrow f(x) \in V(a,\delta)$ Thus by taking ${\varepsilon =\mathrm{min} \left\lbrace \varepsilon_1, \varepsilon_2 \right\rbrace }$ it follows ${x \in V(c,\varepsilon) \cap (D \setminus \left\lbrace c \right\rbrace ) \Rightarrow x \in V(c,\varepsilon) \cap D_{c^+}}$ or ${x \in V(c,\varepsilon) \cap D_{c^- }\Rightarrow f(x) \in V(a,\delta)}$ In conclusion: ${ \forall \delta > 0 \exists \varepsilon > 0: x \in V(c,\varepsilon)\cap (D\setminus \left\lbrace c \right\rbrace ) \Rightarrow f(x) \in V(a,\delta) }$ which is equivalent to saying that ${\displaystyle \lim_{x \rightarrow c} f(x)=a}$. $\Box$

 Definition 34 Let ${D \subset \mathbb{R}}$; ${f: D \rightarrow \mathbb{R}}$ and ${c \in D}$. We say that ${f(x)}$ is continuous in point ${c}$ if for all sequences ${x_n}$ of points in ${D}$ such that ${\lim x_n = c}$ it is ${\lim f(x_n)=f(c)}$. A function is said to be continuous if it is continuous in all points in ${D}$.

A few examples to clarify definition 34

1. $\displaystyle f(x)=|x| \quad \forall x \in \mathbb{R}$

Let ${c \in \mathbb{R}}$ and ${x_n}$ a sequence such that ${x_n \rightarrow c}$. Then ${f(x_n)=|x_n|}$ and ${\lim f(x_n) = \lim |x_n| = |c|}$. In conclusion ${f(x_n) \rightarrow f(c)}$, which is equivalent to saying that ${f}$ is continuous at ${c}$. Since ${c}$ can be any given point, ${f(x)=|x|}$ is continuous in ${\mathbb{R}}$.

2. Let ${f(x)= \sin x}$ and ${x_n}$ a sequence such that ${x_n \rightarrow \theta}$. It is ${\lim \sin x_n= \sin \theta}$ and by the same reasoning ${\sin x}$ is also continuous.
3. In general, for a continuous ${f}$, if ${x_n \rightarrow c}$ it is ${\lim f(x_n)=f(c)=f(\lim x_n)}$. So for ${\exp (x)}$ it is ${\lim \exp (x_n)=\exp (\lim x_n)}$.

If ${x_n \rightarrow +\infty }$ it follows that ${\lim \exp(x_n)=+\infty }$ and for ${x_n \rightarrow -\infty}$ it follows that ${\lim \exp(x_n)=0}$.

Thus if we define ${\exp (+\infty)=+\infty}$ and ${\exp (-\infty)=0}$ it follows that it always is ${\lim \exp (x_n)=\exp (\lim x_n)}$.

4. Analogously we can define ${\log (+\infty)= +\infty}$ and ${\log 0 = -\infty}$ and it always is ${\lim \log x_n = \log (\lim x_n)}$.

 Theorem 40 (Heine’s theorem for continuity) Let ${D \subset \mathbb{R}}$, ${f:D \rightarrow \mathbb{R}}$ and ${c \in D}$. ${f}$ is continuous at ${c}$ if and only if $\displaystyle \forall \delta>0 \,\,\exists \, \varepsilon > 0: \, x \in D \wedge |x-c| < \varepsilon \Rightarrow |f(x)-f(c)| < \delta$ Or written in terms of neighborhoods $\displaystyle \forall \delta>0 \,\,\exists \, \varepsilon > 0: \, x \in V(c,\varepsilon) \cap D \Rightarrow f(x) \in V(f(c),\delta)$ Proof: Omitted. $\Box$

As can be seen the ${\varepsilon - \delta}$ condition for continuity in point ${c}$ is very similar to the one for limit ${a}$ in point ${c}$.

To finish this post I’ll just state a theorem that sheds some light on the connections of these two concepts:

 Theorem 41 Let ${D \subset \mathbb{R}}$, ${f:D \rightarrow \mathbb{R}}$ and ${c \in D \cap D^\prime}$. Then ${f}$ is continuous in point ${c}$ if and only if ${\displaystyle \lim_{x \rightarrow c} f(x) = f(c)}$. Proof: Omitted. $\Box$

So as this theorem shows the connection between continuity and limit is indeed a deep one, but we can look at the concept of limit as being an auxiliary tool to determine if a function is continuous or not and we should not confuse them.

In the next post I intend to write a little bit more about continuity but in the mean time a very good text about it can be found here