Note that we discuss finding limits using **L’Hopital’s Rule** here. Also note that there’s a very good **limit calculator** **here** on this online calculator site.

## Introduction to Limits

We need to understand how **limits** work, since the first part of **Differential Calculus** uses them extensively. (Differential calculus has to do with **rates** at which quantities change.) I like to think of a **limit** as what the $ y$-part of a graph or function approaches as $ x$ gets closer and closer to a number, either from the left-hand side (which means the $ x$-part is increasing), or from the right-hand side (which means the $ x$-part is decreasing). **Again, remember that limits are always the $ y$-value (dependent variable), not the $ x$-value (independent variable).**

We can write a limit where $ x$ gets closer and closer to **0** as $ \lim\limits_{x \to 0}f\left( x \right)=L$. To describe this, we say the “limit of $ f\left( x \right)$ as $ \boldsymbol{x}$ **approaches 0** is $ \boldsymbol{L}$”. Now the beauty of limits is that $ x$ can get closer and closer to a number, but not actually ever get there (think **Asymptote** from the **Rational Functions** section). Even if a function is “normal”, like a linear function, we still consider the $ y$-value a limit as $ x$ approaches its $ x$-value, as shown below.

The reason we have limits in **Differential Calculus** is because sometimes we need to know what happens to a function when $ x$ gets closer and closer to a number (but doesn’t actually get there); we will use this concept in getting the approximation of a slope (“rate”) of a curve at that point. Sometimes, the $ x$-value does get there (like when we’re taking the slope of a straight line), but sometimes it doesn’t (like when we’re taking the slope of a curved function). As an example, when you first learn how to handle limits, it might be the case that the $ x$-value is getting closer and closer to a number that makes the **denominator** of the $ y$-value **0**; this would typically be undefined.

Here’s a graphical example of a **removable discontinuity**, or hole, that represents a **limit** at the $ x$-value where the discontinuity exists. Note that $ y$ is **undefined** at $ x=1$, but the limit at $ x=1$ **is** defined (it is **3**, what would have been the $ y$-value without the hole). Note that we had to factor a **difference of cubes** from the **Advanced Factoring** section in this example.
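Since the graph isn’t reproduced here, a quick numeric check in Python illustrates the idea. The function $ f(x)=\frac{x^3-1}{x-1}$ below is my assumption, chosen to match the description (undefined at $ x=1$; the difference of cubes factors to $ x^2+x+1$, so the limit there should be **3**):

```python
# Numeric check of a removable discontinuity (hole), using the assumed
# example f(x) = (x^3 - 1)/(x - 1): both the numerator and denominator
# are 0 at x = 1, so f(1) itself is undefined, yet the limit exists.

def f(x):
    return (x**3 - 1) / (x - 1)

# Approach x = 1 from the left and the right with ever-closer values.
for h in (0.1, 0.01, 0.001):
    print(f(1 - h), f(1 + h))   # both columns close in on 3
```

Plugging numbers very close to the hole into a calculator is the same trick suggested elsewhere on this page for checking limits.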

## Finding Limits Algebraically

We will learn different techniques for finding simple limits; here are some problems. For limits with trigonometry, see **below**.

Again, we discuss finding limits using **L’Hopital’s Rule** here.

## Continuity and One-Sided Limits

Sometimes, the **limit** of a function at a particular point and the **actual value** of the function at that point can be two **different** things. Notice that in cases like these, we can easily define a **Piecewise Function** to model the situation.

The **limit from the right**, or $ \lim\limits_{x \to {{{c}^{+}}}}f\left( x \right)=L$, means that $ x$ approaches $ c$ from the **right** side, or with values **greater than** $ c$, and the **limit from the left**, or $ \lim\limits_{x \to {{{c}^{-}}}}f\left( x \right)=L$, means that $ x$ approaches $ c$ from the **left** side, or with values **less than** $ c$. Do you see how if the limit from the right and the limit from the left are **the same**, then we get a “regular” limit (meaning both sides converge to the same $ y$-value)?

### Existence of a Limit and Definition of Continuity

If the limit from the right equals the limit from the left, and this equals the actual point for $ f\left( x \right)$ (the $ y$ for that $ x$), then we have a **continuous function** (one that we can draw without picking up our pencil). This concept leads to the definition of the **existence of a limit** and the formal **definition of continuity**:

Here are some examples; remember that the actual **limits** are the $ y$-values, not the $ x$-values. The first example shows that some limits **do not exist** (**DNE**), based on the definition above. The second example actually gives you the equation for the **Piecewise Function** that illustrates limits. Notice that both functions are **discontinuous.**
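To make the limit-versus-value distinction concrete, here is a hypothetical piecewise function (my own example, not one from this page) where the limit at a point exists but the function is still discontinuous there:

```python
# A hypothetical piecewise function where the two one-sided limits at
# x = 2 agree (both are 3) but differ from the actual value f(2) = 5,
# so the limit exists, yet f is discontinuous at x = 2.

def f(x):
    if x == 2:
        return 5      # the actual point sits off the line
    return x + 1      # the rest of the graph is the line y = x + 1

print(f(2 - 1e-9))    # limit from the left:  approximately 3
print(f(2 + 1e-9))    # limit from the right: approximately 3
print(f(2))           # actual value: 5
```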

## Continuity of Functions

We learned in the **Graphing Rational Functions, including Asymptotes** section how to find **removable discontinuities** (**holes**) and **asymptotes** of functions (functions with **0 in the denominator**); these functions are **discontinuous** at these points. Let’s review how we get **vertical asymptotes** for a **rational function**:

And we’ll also have to remember the **Trigonometric Function asymptotes** (remember, the ones that start with “**C**” are the easiest (“ea**C**est”)):

Trig Function | Asymptote |
--- | --- |
$ \sin x,\,\cos x$ | None |
$ \tan x,\,\sec x$ | $ \displaystyle \frac{\pi }{2}+\pi k$ |
$ \cot x,\,\csc x$ | $ \pi k$ |
In Calculus, you may be asked to find the $ x$-values at which a function might be **discontinuous**, and also determine whether or not a discontinuity is **removable** (has a limit) or **non-removable** (doesn’t have a limit):
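A quick numeric way to tell the two kinds of discontinuity apart (using my own assumed example functions, both discontinuous at $ x=2$): approach the $ x$-value from both sides and see whether the values settle on one finite number.

```python
# Removable vs. non-removable discontinuity at x = 2, checked numerically.

def removable(x):      # (x^2 - 4)/(x - 2): hole at x = 2, limit is 4
    return (x**2 - 4) / (x - 2)

def non_removable(x):  # 1/(x - 2): vertical asymptote at x = 2, no limit
    return 1 / (x - 2)

h = 1e-6
print(removable(2 - h), removable(2 + h))          # both near 4: removable
print(non_removable(2 - h), non_removable(2 + h))  # huge negative vs. huge positive: not removable
```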

You might also see a problem like this:

## Properties of Limits

Limits have properties that are pretty straightforward; basically, you can add, subtract, multiply, and divide limits (and multiply them by a number, or scalar) with the limit on the “inside” or “outside”. (Think about “picking apart” limits into smaller pieces.) Remember again that limits refer to the “$ y$”-value, or the dependent variable.

The properties are:

- **Scalar Multiple**: $ \displaystyle \underset{{x\to c}}{\mathop{{\lim }}}\,\left[ {b\cdot f\left( x \right)} \right]=b\cdot \underset{{x\to c}}{\mathop{{\lim }}}f\left( x \right)$
- **Sum or Difference**: $ \displaystyle \underset{{x\to c}}{\mathop{{\lim }}}\left[ {f\left( x \right)\pm g\left( x \right)} \right]=\underset{{x\to c}}{\mathop{{\lim }}}f\left( x \right)\pm \underset{{x\to c}}{\mathop{{\lim }}}\,g\left( x \right)$
- **Product**: $ \displaystyle \underset{{x\to c}}{\mathop{{\lim }}}\left[ {f\left( x \right)\cdot g\left( x \right)} \right]=\underset{{x\to c}}{\mathop{{\lim }}}f\left( x \right)\cdot \underset{{x\to c}}{\mathop{{\lim }}}\,g\left( x \right)$
- **Quotient**: $ \displaystyle \underset{{x\to c}}{\mathop{{\lim }}}\frac{{f\left( x \right)}}{{g\left( x \right)}}=\frac{\lim\limits_{x \to c}f(x)}{{\lim\limits_{x \to c}g(x)}}$, provided $ \lim\limits_{x \to c}g(x)\ne 0$
- **Power**: $ \displaystyle \underset{{x\to c}}{\mathop{{\lim }}}\,\left[ {f{{{\left( x \right)}}^{n}}} \right]={{\left[ {\underset{{x\to c}}{\mathop{{\lim }}}f\left( x \right)} \right]}^{n}}$
- **Composite Function**: $ \displaystyle \underset{{x\to c}}{\mathop{{\lim }}}\,f\left( {g\left( x \right)} \right)=f\left( {\underset{{x\to c}}{\mathop{{\lim }}}g\left( x \right)} \right)$, provided $ f$ is continuous at $ \underset{{x\to c}}{\mathop{{\lim }}}g\left( x \right)$

Here is an example of how the **sum property of limits** works:

$ \begin{align}\underset{{x\to 1}}{\mathop{{\lim }}}\,\,\,\left( {5{{x}^{2}}+2x-1} \right)&=\underset{{x\to 1}}{\mathop{{\lim }}}\,\,5{{x}^{2}}+\underset{{x\to 1}}{\mathop{{\lim }}}\,\,2x-\underset{{x\to 1}}{\mathop{{\lim }}}\,\,1\\&=5+2-1=6\end{align}$
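The sum property above can also be checked numerically: estimating the limit of the whole polynomial should give the same answer as adding up the three separate limits. (The crude two-sided estimator below is my own illustrative helper, not a standard routine.)

```python
# Verify the sum property for lim_{x -> 1} (5x^2 + 2x - 1) numerically:
# the limit of the sum equals the sum of the three smaller limits.

def approx_limit(g, c, h=1e-7):
    # Average of values just left and just right of c (crude estimate).
    return (g(c - h) + g(c + h)) / 2

whole  = approx_limit(lambda x: 5*x**2 + 2*x - 1, 1)
pieces = (approx_limit(lambda x: 5*x**2, 1)
          + approx_limit(lambda x: 2*x, 1)
          - approx_limit(lambda x: 1, 1))

print(round(whole, 6), round(pieces, 6))   # both come out to 6
```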

## Squeeze Theorem

The **Squeeze Theorem** (or Sandwich Theorem) is used to find the limit of a function, say $ f\left( x \right)$, at a particular point by showing that if two functions “on the outside of” that function have a limit at a certain point, $ f\left( x \right)$ has that same limit. In other words, $ f\left( x \right)$ is “trapped” between two functions with a certain limit, so it must have that same limit. The concept is based on this: $ \text{if }a\le b\le c\text{,}\,\,\text{and}\,\,a=c,\,\,\text{then}\,a=b=c\,$. Thus:

If $ f\left( x \right)\le g\left( x \right)\le h\left( x \right)$ and $ \displaystyle \underset{{x\to c}}{\mathop{{\lim }}}\,f\left( x \right)=\underset{{x\to c}}{\mathop{{\lim }}}\,h\left( x \right)=L$, then $ \displaystyle \underset{{x\to c}}{\mathop{{\lim }}}\,g\left( x \right)=L$

Here is a visual representation of what the function $ g\left( x \right)=\left| x \right|\cos x$ might look like between the functions $ f\left( x \right)=-\left| x \right|$ and $ h\left( x \right)=\left| x \right|$, all three of which have a limit of $ 0$ as $ x\to 0$. It is solved algebraically below.
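A short numeric sketch of the same "sandwich": at every sampled $ x$, the middle function stays between the two bounds, and all three columns shrink toward $ 0$ together.

```python
import math

# Squeeze Theorem check: |x|*cos(x) is trapped between -|x| and |x|,
# and both bounds go to 0 as x -> 0, so the middle function must too.

def lower(x): return -abs(x)
def g(x):     return abs(x) * math.cos(x)
def upper(x): return abs(x)

for x in (0.5, 0.1, 0.01, 0.001):
    assert lower(x) <= g(x) <= upper(x)   # the sandwich holds
    print(lower(x), g(x), upper(x))       # all three shrink toward 0
```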

Here are some types of problems you may see:

The squeeze theorem is helpful with functions that oscillate, and is used to prove the following limits with sine and cosine functions.

## Limits with Sine and Cosine

There are a couple of **special trigonometric limits** that you’ll need to know, and to use these, you may have to do some algebraic tricks. These are the two limits to learn:

$ \displaystyle \underset{{x\to 0}}{\mathop{{\lim }}}\frac{{\sin x}}{x}=1\,\,\,\,\,\,\,\,\,\,(\underset{{x\to 0}}{\mathop{{\lim }}}\frac{x}{{\sin x}}=1)$

$ \displaystyle \underset{{x\to 0}}{\mathop{{\lim }}}\frac{{1-\cos x}}{x}=\underset{{x\to 0}}{\mathop{{\lim }}}\frac{{\cos x-1}}{x}=0$

Note that for the first limit (with sin), the **reciprocal** is also true, since $ \displaystyle \frac{1}{1}=1$.
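Both special limits are easy to confirm with the plug-in-nearby-numbers trick this page recommends:

```python
import math

# Numerically confirm the two special trig limits by plugging in
# x-values closer and closer to 0.

for x in (0.1, 0.01, 0.001):
    print(math.sin(x) / x)           # sin(x)/x heads toward 1
    print((1 - math.cos(x)) / x)     # (1 - cos x)/x heads toward 0
```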

Here are the types of problems you might see. Note again that you can check these in your calculator by putting in numbers really close to the $ x$-values (for example, $ x=.00001$ for $ x$ approaching **0**).

## Intermediate Value Theorem (IVT)

The **intermediate value theorem** (**IVT**) seems very complicated and is a bit theoretical, but if we think about what it really says, it’s not that difficult and pretty obvious.

What the intermediate value theorem says is that if you are at a certain $ x$-value (where you have a $ y$-value) and you go to another $ x$-value to the right (where you have another $ y$-value), and the path you take is along a **continuous function**, then you have to have hit (crossed over) all the $ y$-values **in between**.

Here is a real-world example. Let’s say a baby boy weighs **7** pounds at birth, and then **20** pounds when he is **1** year old (**12** months). At some point, he must have weighed **15** pounds, for example, or any number of pounds between **7** and **20** pounds, since human weight is continuous. In this case, the baby’s age is the $ x$-value, and the baby’s weight is the $ y$-value, with the interval being between **0** and **12** months, inclusive. In other words, for a weight to get from some number to another, it must have hit all the numbers in between.

The other way to think of **IVT** is that if we have **2** points on a **continuous** curve and there is a **horizontal line** between these two points, then the curve must **cross this horizontal line** to get from one point to the other.
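The IVT is also the idea behind the classic **bisection method** for finding roots: if a continuous function changes sign on an interval, it must cross $ y=0$ somewhere inside it. Here is a minimal sketch (the function $ x^3-x-2$ and interval $ [1,2]$ are illustrative assumptions):

```python
# Bisection: the IVT guarantees a continuous function with a sign change
# on [a, b] crosses zero in between, so we can keep halving the interval.

def bisect(f, a, b, tol=1e-8):
    assert f(a) * f(b) < 0, "need a sign change so the IVT applies"
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:   # sign change stays in the left half
            b = m
        else:                  # otherwise the root is in the right half
            a = m
    return (a + b) / 2

root = bisect(lambda x: x**3 - x - 2, 1, 2)
print(round(root, 6))   # about 1.52138
```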

Here is the formal definition (and picture) of the **Intermediate Value Theorem**:

Here are some types of problems that you might see with the **Intermediate Value Theorem**:

## Infinite Limits

An **infinite limit** is just a limit in which the $ y$-value either increases or decreases without bound (goes up forever or down forever) as $ x$ gets closer and closer to a value. We typically think of these types of limits when we deal with **vertical asymptotes** (**VA**’s), so we can use what we know about **VA**’s to work with them. As an example, these limits show up with **Rational Functions** at $ x$-values that make the denominator **0**.

When a function gets closer and closer to a **VA** from one side, or both sides (if the limit exists), the limit will either be $ -\infty $ or $ \infty $. (Theoretically, the limit doesn’t exist since these aren’t real numbers, but we still say these limits are $ -\infty $ or $ \infty $). To determine which one it is, we can put in numbers (for $ x$) really close to the **VA** on either side (to see what direction the graph is going), or use a graphing calculator. Sometimes, we can use algebra to simplify a rational function to get the limit.
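Here is the put-in-nearby-numbers approach for an assumed example, $ f(x)=\frac{1}{x-2}$, which has a **VA** at $ x=2$:

```python
# Probing a vertical asymptote numerically: values just left of x = 2
# plunge toward -infinity, values just right blow up toward +infinity,
# so the two one-sided limits disagree.

def f(x):
    return 1 / (x - 2)

for h in (0.1, 0.01, 0.001):
    print(f(2 - h), f(2 + h))
```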

Let’s do some problems where we need to find the **one-sided limit** (if it exists). Some of these may involve remembering rational **Parent Functions**. You can also try these on your **graphing calculator** to get the answers.

## Limits at Infinity

**Limits at Infinity** exist when the $ x$-values (not the $ y$-values) go to $ \infty $ or $ -\infty $. This can happen with **Rational Functions** and with **Horizontal Asymptotes** (**HA**s) (which are **End Behavior Asymptotes**, or **EBA**s). The $ y$-values can get closer and closer to a number, but never actually reach that number in the case of an **EBA**. Let’s review how to get horizontal or end behavior asymptotes:

The easiest way to get **limits at infinity** with rational functions is to find the **horizontal asymptotes** in that direction. We can also use a trick where we divide every term in the numerator and denominator by the variable with the **highest degree** (highest exponent value). This works because of the following **Limits at Infinity Theorem**:

$ \displaystyle \underset{{x\to \infty }}{\mathop{{\lim }}}\,\frac{a}{{{{x}^{r}}}}=0$, where $ a$ is a real number and $ r$ is a positive rational number (the same holds as $ x\to -\infty $, as long as $ {{x}^{r}}$ is defined for negative $ x$)

Basically, all this says is that if the bottom (denominator) of a fraction gets bigger and bigger (towards $ \infty $ or $ -\infty $), the whole fraction will get smaller and smaller and eventually go to **0**.

When limit problems have $ x$’s on the top and bottom, when trying to plug in $ \infty $ or $ -\infty $, we’ll typically get what we call **indeterminate form** – something like $ \displaystyle \frac{\infty }{\infty }$. In these cases, use the tricks of either finding the horizontal asymptote, or dividing all the terms by the variable with the highest degree. Sometimes if we have roots in the function, we can multiply by the **conjugate** of the numerator or denominator and try to go from there.
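A quick check of the divide-by-highest-degree trick, on an assumed example: for $ \frac{3x^2+x}{2x^2-5}$, dividing top and bottom by $ x^2$ leaves $ \frac{3+1/x}{2-5/x^2}$, and the $ 1/x$-type terms die off to **0**, so the limit (and the **HA**) is $ y=\frac{3}{2}$.

```python
# Plugging in larger and larger x-values shows the indeterminate-looking
# ratio (3x^2 + x)/(2x^2 - 5) settling at 3/2, the horizontal asymptote.

def f(x):
    return (3*x**2 + x) / (2*x**2 - 5)

for x in (10, 1_000, 1_000_000):
    print(f(x))   # values settle toward 1.5
```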

Note that you can check these by trying a large positive number (or a large negative number for $ -\infty $) for $ x$ in your graphing calculator. Also note that with Limits at Infinity, if there is **no horizontal asymptote** (for a rational function, when the degree on the bottom is less than the degree on the top), the **limit doesn’t exist**.

## Limits of Sequences

We looked at **Sequence and Series** here; a sequence is basically a function using the natural numbers as $ x$-values. Thus, we can think of the limit of a sequence (if that limit exists) as the $ y$-value the sequence approaches as the number of terms goes to $ \infty $ (similar to **Limits at Infinity** above). This is the number to which the sequence **converges**.

All arithmetic sequences (with a nonzero common difference) diverge, and geometric sequences converge if $ \left| r \right|<1$, where $ r$ is the common ratio; in fact, these sequences converge to **0**. For other sequences, use the same rules as **Limits at Infinity** above, since those limits were also going to $ \infty $: look for the **end behavior** of the sequence – what the $ y$-value is converging to as the $ x$-value gets bigger and bigger.

For example, the limit of the sequence $ \displaystyle {{a}_{n}}=\frac{1}{{{{n}^{2}}}}\,\,\,(1,\frac{1}{4},\frac{1}{9},\frac{1}{{16}},…)$ ($ n$ is a positive integer) is **0**, since $ \displaystyle \underset{{n\to \infty }}{\mathop{{\lim }}}\,\frac{1}{{{{n}^{2}}}}=0$. Therefore, this sequence converges to **0**.
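Checking that same sequence numerically, the terms are just $ \frac{1}{x^2}$ sampled at the natural numbers, and they shrink toward **0**:

```python
# Terms of a_n = 1/n^2: the first few match 1, 1/4, 1/9, 1/16, and
# far-out terms get as close to 0 as we like.

def a(n):
    return 1 / n**2

print([a(n) for n in (1, 2, 3, 4)])   # 1, 1/4, 1/9, 1/16
for n in (10, 100, 10_000):
    print(a(n))                       # shrinking toward 0
```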

Here are more examples of **limits of sequences**:

**Learn these rules, and practice, practice, practice**!

Use the **Mathway** keyboard to enter a **Limit** problem, and then click on Submit (the arrow to the right of the problem) to solve the problem. You can also click on the 3 dots and then **Examples** in the upper right hand corner to drill down for example problems under “**Evaluating Limits**”.

If you click on “Tap to view steps”, you will go to the **Mathway** site, where you can register for the **full version** (steps included) of the software. You can even get math worksheets.

You can also go to the **Mathway** site here, where you can register, or just use the software for free without the detailed solutions. There is even a Mathway App for your mobile device. Enjoy!

On to **Definition of the Derivative** – you are ready!