## Introduction to the Matrix

A **matrix** (plural **matrices**) is sort of like a “table” of information, where entries are organized into horizontal **rows** (going across) and vertical **columns** (going up and down). Usually, a matrix contains **numbers** or **algebraic expressions**. You may have heard matrices called **arrays**, especially in computer science.

As an example, if you had three sisters, and you wanted an easy way to store their age and number of pairs of shoes, you could store this information in a matrix. The actual matrix is everything inside, and including, the brackets:

$ \displaystyle \begin{array}{l}\qquad \qquad \qquad \qquad \qquad \qquad \quad \begin{array}{*{20}{c}} {\text{Ashley}} & {\text{Emma}} & {\text{Chloe}} \end{array}\\\begin{array}{*{20}{c}} {\text{Age}} \\ {\text{Number of Pairs of Shoes}} \end{array}\text{ }\left[ {\begin{array}{*{20}{c}} {23} & {18} & {15} \\ 5 & {23} & {12} \end{array}} \right]\end{array}$

Matrices are called **multi-dimensional** since data is stored in different directions in a grid. The **dimensions** of this matrix are “**2 x 3**” or “**2** by **3**”, since we have **2** rows and **3** columns. (You always go **down first**, and **then over** to get the dimensions of the matrix.) Each number or variable inside the matrix is called an entry or **element**, and can be identified by **subscripts**. For example, for the matrix above, “Ashley’s number of pairs of shoes (**5**)” would be identified as $ {{a}_{{2,1}}}$, since it’s in the **2**nd row and the **1**st column.
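If you like to experiment in code, here’s how this matrix looks in Python with NumPy (my own illustration; the page itself doesn’t use code). Note that NumPy counts rows and columns starting from **0**, so the subscript $ {{a}_{{2,1}}}$ becomes `A[1, 0]`:

```python
import numpy as np

# Rows: age, pairs of shoes; columns: Ashley, Emma, Chloe
A = np.array([[23, 18, 15],
              [ 5, 23, 12]])

print(A.shape)   # dimensions as (rows, columns): (2, 3)
print(A[1, 0])   # 2nd row, 1st column: Ashley's pairs of shoes, 5
```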

Again, matrices are great for storing numbers and variables – and also great for solving systems of equations, which we’ll see later. Note that we’ll address **Solving Systems using** **Reduced Row Echelon Form** here.

## Adding and Subtracting Matrices

Let’s look at a matrix that contains numbers and see how we can **add** and **subtract** matrices.

Let’s say you’re an avid reader, and in June, July, and August you read fiction and non-fiction books, and magazines, both in paper copies and online. You want to keep track of how many different types of books and magazines you read, and store that information in matrices. Here is that information in matrix form, with rows for fiction books, non-fiction books, and magazines, and columns for paper and online copies:

$ \displaystyle \underbrace{{\left[ {\begin{array}{*{20}{c}} 2 & 4 \\ 3 & 1 \\ 4 & 5 \end{array}} \right]}}_{{\text{June}}}\,\,\,\,\,\,\underbrace{{\left[ {\begin{array}{*{20}{c}} 3 & 2 \\ 1 & 1 \\ 5 & 3 \end{array}} \right]}}_{{\text{July}}}\,\,\,\,\,\,\underbrace{{\left[ {\begin{array}{*{20}{c}} 1 & 3 \\ 2 & 3 \\ 4 & 6 \end{array}} \right]}}_{{\text{August}}}$

We can **add matrices** if the dimensions are the same; since the three matrices are all “**3** by **2**”, we can add them. For example, if we wanted to know the total number of each type of book/magazine we read, we could add each of the elements to get the sum:

$ \displaystyle \color{#800000}{{\left[ {\begin{array}{*{20}{c}} 2 & 4 \\ 3 & 1 \\ 4 & 5 \end{array}} \right]\,\,+\,\,\left[ {\begin{array}{*{20}{c}} 3 & 2 \\ 1 & 1 \\ 5 & 3 \end{array}} \right]\,\,+\,\,\left[ {\begin{array}{*{20}{c}} 1 & 3 \\ 2 & 3 \\ 4 & 6 \end{array}} \right]}}\,\,=\,\,\left[ {\begin{array}{*{20}{c}} {2+3+1} & {4+2+3} \\ {3+1+2} & {1+1+3} \\ {4+5+4} & {5+3+6} \end{array}} \right]\,\,=\,\,\left[ {\begin{array}{*{20}{c}} 6 & 9 \\ 6 & 5 \\ {13} & {14} \end{array}} \right]$

Thus we could see that we read **6** paper fiction, **9** online fiction, **6** paper non-fiction, **5** online non-fiction books, and **13** paper and **14** online magazines. We could also **subtract** matrices this same way.
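As a quick check of the arithmetic, here’s the same addition in Python with NumPy (a sketch of mine, not part of the original lesson); `+` adds element by element when the dimensions match:

```python
import numpy as np

june   = np.array([[2, 4], [3, 1], [4, 5]])
july   = np.array([[3, 2], [1, 1], [5, 3]])
august = np.array([[1, 3], [2, 3], [4, 6]])

total = june + july + august   # element-by-element, since all are 3 by 2
print(total)                   # [[6 9] [6 5] [13 14]]
```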

If we wanted to see how many books and magazines we would have read in August if we had **doubled** what we actually read, we could multiply the August matrix by the number **2**. This is called **matrix scalar multiplication**; a **scalar** is just a single number that we multiply with every entry. Note that this is **not** the same as multiplying two matrices together (which we’ll get to next):

$ \displaystyle \color{#800000}{{2\left[ {\begin{array}{*{20}{c}} 1 & 3 \\ 2 & 3 \\ 4 & 6 \end{array}} \right]}}\,=\,\left[ {\begin{array}{*{20}{c}} {1\times 2} & {3\times 2} \\ {2\times 2} & {3\times 2} \\ {4\times 2} & {6\times 2} \end{array}} \right]\,=\,\left[ {\begin{array}{*{20}{c}} 2 & 6 \\ 4 & 6 \\ 8 & {12} \end{array}} \right]$
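Scalar multiplication works the same way in code (again, a NumPy sketch of mine); the scalar multiplies every entry:

```python
import numpy as np

august = np.array([[1, 3], [2, 3], [4, 6]])
doubled = 2 * august   # the scalar 2 multiplies every single entry
print(doubled)         # [[2 6] [4 6] [8 12]]
```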

## Multiplying Matrices

Multiplying matrices is a little trickier. First of all, you can only multiply matrices if the dimensions “match”: the **second dimension (columns) of the first matrix has to match the first dimension (rows) of the second matrix**, or you can’t multiply them. Think of it like this: the **inner dimensions have to match**, and the resulting dimensions of the new matrix are the **outer dimensions**.

Here’s an example of matrices with dimensions that would work. **Again, always remember with matrices that you go down first and then over, to get their dimensions.**

Notice how the “middle” or “inner” dimensions of the two matrices have to be the same (in this case, “**2**”), and the new matrix has the “outside” or “outer” dimensions of the original two matrices (“**3** by **5**”).
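Here’s a quick sketch of that dimension rule (the matrices of ones are just placeholders I made up); NumPy refuses the multiplication when the inner dimensions don’t match:

```python
import numpy as np

A = np.ones((3, 2))   # 3 by 2
B = np.ones((2, 5))   # 2 by 5

# Inner dimensions match (2 and 2), so the product exists,
# and its shape is the outer dimensions: 3 by 5.
C = A @ B
print(C.shape)        # (3, 5)

# Reversing the order fails: (2, 5) times (3, 2) has inner dims 5 and 3.
try:
    B @ A
except ValueError as e:
    print("can't multiply:", e)
```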

Now, let’s do a **real-life example** to see how the multiplication works. Let’s say we want to find the final grades for **3** girls, and we know what their averages are for tests, projects, homework, and quizzes. We also know that tests are **40%** of the grade, projects **15%**, homework **25%**, and quizzes **20%**. Here’s the data we have:

| Student | Tests | Projects | Homework | Quizzes |
|---|---|---|---|---|
| Alexandra | 92 | 100 | 89 | 80 |
| Megan | 72 | 85 | 80 | 75 |
| Brittney | 88 | 78 | 85 | 92 |

| Type | Weight |
|---|---|
| Tests | 40% (.4) |
| Projects | 15% (.15) |
| Homework | 25% (.25) |
| Quizzes | 20% (.2) |

Let’s organize this data into two matrices and perform matrix multiplication to find the final grades for Alexandra, Megan, and Brittney; this is a **weighted average**! To do this, you have to multiply in the following way:
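In code, that multiplication looks like this (a NumPy sketch of mine; the numbers come straight from the two tables above). A **3** by **4** grade matrix times a **4** by **1** weight column gives a **3** by **1** column of final grades:

```python
import numpy as np

# Rows: Alexandra, Megan, Brittney; columns: tests, projects, homework, quizzes
grades = np.array([[92, 100, 89, 80],
                   [72,  85, 80, 75],
                   [88,  78, 85, 92]])

# Weights in the same order: tests, projects, homework, quizzes
weights = np.array([0.40, 0.15, 0.25, 0.20])

finals = grades @ weights   # (3 by 4) times (4 by 1) gives (3 by 1)
print(finals)               # approximately [90.05, 76.55, 86.55]
```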

Just remember, when you multiply matrices, **the columns of the first matrix (counted going across) have to correspond to the rows of the second matrix (counted going down)**. You end up with entries that correspond to the rows of the first matrix and the columns of the second. For example, in the problem above, the columns of the first matrix each had something to do with Tests, Projects, Homework, and Quizzes (grades), and the rows of the second matrix each had something to do with the same four items (the weights of the grades). We ended up with information on the three girls (the rows of the first matrix), and the middle “descriptions” (tests, projects, and so on) went away.

**Alexandra has a 90, Megan has a 77, and Brittney has an 87**. See how cool this is? Matrices are really useful for a lot of applications in “real life”!

Now let’s do another example; let’s multiply the following matrices: (And don’t worry too much; most of the time you’ll be doing matrix multiplication in the calculator!)

Oh, one more thing! Remember that **multiplying matrices is not commutative** (order makes a difference), but it **is associative** (you can change the grouping of matrices when you multiply them). **Multiplying matrices is also distributive** (you can distribute a matrix across a sum in parentheses), as long as the matrices have the correct dimensions to be multiplied.
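All three properties are easy to check numerically; the small matrices below are arbitrary examples of mine, not from the lesson:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
C = np.array([[2, 0], [0, 2]])

print(np.array_equal(A @ B, B @ A))                 # False: not commutative
print(np.array_equal((A @ B) @ C, A @ (B @ C)))     # True:  associative
print(np.array_equal(A @ (B + C), A @ B + A @ C))   # True:  distributive
```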

## Matrices in the Graphing Calculator

The **TI graphing calculator** is great for matrix operations! Here are some basic steps for storing, multiplying, adding, and subtracting matrices. (Note that you can also enter matrices using **ALPHA ZOOM** and the arrow keys in the newer graphing calculators, but I still like “naming” the matrices.)

We’ll learn other ways to use the calculator with matrices a little later.

## Determinants, the Matrix Inverse, and the Identity Matrix

Soon we will be solving **Systems of Equations** using matrices, but we need to learn a few mechanics first!

Most **square** matrices (same dimension down and across) have what we call a **determinant**, designated by $ \det \left[ \text{A} \right]$ or $ \left| \text{A} \right|$, for example, which we’ll need to get the **multiplicative inverse** of that matrix. (The **inverse** of a matrix, designated by $ {{\text{A}}^{{-1}}}$, for example, is what we multiply that square matrix by to get the **identity** matrix, a matrix with **1**‘s along the diagonal starting with the upper left element, and **0**‘s everywhere else.)

**We’ll use the inverses of matrices to solve** **Systems of Equations**; the inverses will allow us to get variables by themselves on one side (like “regular” algebra). You’ll solve these mainly by using your **calculator**, but you’ll also have to learn how to get them “by hand”.

I know all this sounds confusing, but let’s do some examples step-by-step. Let’s first get the **determinant of matrices** (which we can get easily on a calculator!). The determinant is always just a **scalar** (number), and you’ll see it with two lines around the matrix:

Now let’s use the determinant to get the **inverse of a matrix**. We’ll only work with **2** by **2** matrices, since you’ll probably be able to use the calculator for larger matrices. Note again that **only square matrices have inverses**, but there are square matrices that don’t have one (when the determinant is **0**):

Note that a matrix, multiplied by its inverse, if it’s defined, will always result in what we call an **Identity Matrix**: $ \displaystyle \left[ {\begin{array}{*{20}{c}} 3 & 1 \\ 4 & 8 \end{array}} \right]\,\times \,\left[ {\begin{array}{*{20}{c}} {\displaystyle \frac{2}{5}} & {\displaystyle -\frac{1}{{20}}} \\ {\displaystyle -\frac{1}{5}} & {\displaystyle \frac{3}{{20}}} \end{array}} \right]=\left[ {\begin{array}{*{20}{c}} 1 & 0 \\ 0 & 1 \end{array}} \right]$. An identity matrix has **1**’s along the diagonal starting with the upper left element, and **0**’s everywhere else.

When you multiply a square matrix with an identity matrix, you just get that matrix back: $ \displaystyle \left[ {\begin{array}{*{20}{c}} 3 & 1 \\ 4 & 8 \end{array}} \right]\,\times \,\left[ {\begin{array}{*{20}{c}} 1 & 0 \\ 0 & 1 \end{array}} \right]=\left[ {\begin{array}{*{20}{c}} 3 & 1 \\ 4 & 8 \end{array}} \right]$. Think of an **identity matrix** like “**1**” in regular multiplication (the multiplicative identity), and the **inverse matrix **like a** reciprocal** (the multiplicative inverse).
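All of these facts (the determinant, the inverse, and how the identity behaves) can be checked numerically; this sketch of mine uses the same **2** by **2** matrix as above:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [4.0, 8.0]])

d = np.linalg.det(A)       # for a 2 by 2, det = ad - bc = 3(8) - 1(4) = 20
A_inv = np.linalg.inv(A)   # [[0.4, -0.05], [-0.2, 0.15]], matching the fractions above

print(np.allclose(A @ A_inv, np.eye(2)))   # True: a matrix times its inverse is the identity
print(np.allclose(A @ np.eye(2), A))       # True: the identity acts like "1"
```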

## Solving Systems with Matrices

Why are we doing all this crazy math? Because we can **solve systems** with the **inverse of a matrix**, since using the inverse is sort of like dividing to get the variables all by themselves on one side.

To solve systems with matrices, we use $ \displaystyle X={{A}^{{-1}}}B$. Here is why, if you’re interested in the “theory” (the column on the right provides an example with “regular” multiplication; $ I$ is the identity matrix):

$ \displaystyle \begin{array}{ll} \,\,\,\,\,\,\,\,\,\,\,\,\,\,AX=B & \,\,\,\,\,\,\,\,\,\,\,\,5x=10 \\ {{A}^{{-1}}}AX={{A}^{{-1}}}B & \displaystyle \frac{1}{5}\cdot 5x=\frac{1}{5}\cdot 10 \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,IX={{A}^{{-1}}}B & \,\,\,\,\,\,\,\,\,\,\,\,1x=2 \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,X={{A}^{{-1}}}B & \,\,\,\,\,\,\,\,\,\,\,\,\,\,x=2 \end{array}$
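Here’s a minimal sketch of $ X={{A}^{{-1}}}B$ in NumPy, using a small hypothetical system of my own (the page’s worked example isn’t reproduced here):

```python
import numpy as np

# A hypothetical system:
#   2x + 3y =  8
#    x -  y = -1
A = np.array([[2.0,  3.0],
              [1.0, -1.0]])
B = np.array([8.0, -1.0])

X = np.linalg.inv(A) @ B   # X = A^-1 B
print(X)                   # [1. 2.], i.e. x = 1, y = 2

# In practice, np.linalg.solve(A, B) is preferred (it avoids forming the inverse).
```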

Here’s an example:

(It is important to note that if we are trying to solve a system of equations and the determinant turns out to be **0**, that system either has an **infinite number of solutions**, or **no solution**. See below.)

## Solving Systems with Reduced Row Echelon Form

There’s another way to solve systems: convert the system’s matrix into **reduced row echelon form**, where we can put everything in one matrix (called an **augmented matrix**). I show how to use this method by hand in the **Solving Systems using Reduced Row Echelon Form** section, but here I’ll just show you how easy it is to solve using **RREF** in a **graphing calculator**:
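If you don’t have a graphing calculator handy, SymPy (a Python library) computes RREF directly; this sketch of mine borrows the system from the Cramer’s Rule example later on this page:

```python
from sympy import Matrix

# Augmented matrix for the system:
#   2x + 3y -  z = 15
#   4x - 3y -  z = 19
#    x - 3y + 3z = -4
M = Matrix([[2,  3, -1, 15],
            [4, -3, -1, 19],
            [1, -3,  3, -4]])

rref_matrix, pivots = M.rref()
print(rref_matrix)
# Matrix([[1, 0, 0, 5], [0, 1, 0, 1], [0, 0, 1, -2]])  ->  x = 5, y = 1, z = -2
```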

## Solving Matrix Equations

Here are a couple more types of matrix problems you might see:

## Cramer’s Rule

Sometimes you’ll have to learn **Cramer’s Rule**, which is another way to solve systems with matrices. Cramer’s Rule was named after the Swiss mathematician Gabriel Cramer, who also did a lot of other neat stuff with math. Cramer’s rule is all about getting determinants of the square matrices that are used to solve systems. It’s really not too difficult; it can just be a lot of work, so again, I’ll take the liberty of using the calculator to do most of the work :). Let’s just show an example:

Solve the following system using Cramer’s rule: $ \displaystyle \begin{array}{l}\,2x+3y-\,\,z\,=\,15\\4x-3y-\,\,z\,=\,19\\\,\,x\,-\,3y+\,3z\,=\,-4\end{array}$.

To solve for $ x, y$, and $ z$, we need to get the determinants of **four** matrices, the first one being the **3** by **3** matrix that holds the coefficients of $ x,y$, and $ z$. Let’s call this first determinant $ D$; $ \displaystyle D=\det \left[ {\begin{array}{*{20}{c}} 2 & 3 & {-1} \\ 4 & {-3} & {-1} \\ 1 & {-3} & 3 \end{array}} \right]=-54$. Now get a matrix called $ {{D}_{x}}$, which is obtained by “throwing away” the first ($ x$) column, and replacing the numbers with the “answer” or constant column. Thus, $ \displaystyle {{D}_{x}}=\det \left[ {\begin{array}{*{20}{c}} {\boldsymbol{{15}}} & 3 & {-1} \\ {\boldsymbol{{19}}} & {-3} & {-1} \\ {\boldsymbol{{-4}}} & {-3} & 3 \end{array}} \right]=-270$. You can probably guess what the next determinant we need is: $ {{D}_{y}}$, which we get by “throwing away” the second column ($ y$) of the original matrix and replacing the numbers with the constant column like we did earlier for the $ x$. Thus, $ \displaystyle {{D}_{y}}=\det \left[ {\begin{array}{*{20}{c}} 2 & {\boldsymbol{{15}}} & {-1} \\ 4 & {\boldsymbol{{19}}} & {-1} \\ 1 & {\boldsymbol{{-4}}} & 3 \end{array}} \right]=-54$. Similarly, $ \displaystyle {{D}_{z}}=\det \left[ {\begin{array}{*{20}{c}} 2 & 3 & {\boldsymbol{{15}}} \\ 4 & {-3} & {\boldsymbol{{19}}} \\ 1 & {-3} & {\boldsymbol{{-4}}} \end{array}} \right]=108$.

OK, now for the fun and easy part! To get the $ x, y$, and $ z$ answers to the system, you simply divide the determinants $ {{D}_{x}}$, $ {{D}_{y}}$, and $ {{D}_{z}}$, by the determinant $ D$, respectively. Thus, $ \displaystyle \left[ {\begin{array}{*{20}{c}} x \\ \begin{array}{l}y\\z\end{array} \end{array}} \right]=\left[ {\begin{array}{*{20}{c}} {\displaystyle \frac{{{{D}_{x}}}}{D}} \\ {\displaystyle \frac{{{{D}_{y}}}}{D}} \\ {\displaystyle \frac{{{{D}_{z}}}}{D}} \end{array}} \right]=\left[ {\begin{array}{*{20}{c}} {\displaystyle \frac{{-270}}{{-54}}} \\ {\displaystyle \frac{{-54}}{{-54}}} \\ {\displaystyle \frac{{108}}{{-54}}} \end{array}} \right]=\left[ {\begin{array}{*{20}{c}} 5 \\ 1 \\ {-2} \end{array}} \right]$. Now we know that $ x=5$, $ y=1$, and $ z=-2$. Note that, like the other systems, we can do this for any system where we have **the same numbers of equations as unknowns**.
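Cramer’s Rule is easy to script; this sketch of mine reproduces the determinants above with NumPy by swapping the constant column into each position in turn:

```python
import numpy as np

# Coefficient matrix and constant column from the system above
A = np.array([[2.0,  3.0, -1.0],
              [4.0, -3.0, -1.0],
              [1.0, -3.0,  3.0]])
b = np.array([15.0, 19.0, -4.0])

D = np.linalg.det(A)          # -54

solution = []
for col in range(3):          # build D_x, D_y, D_z by swapping in the constants
    Ai = A.copy()
    Ai[:, col] = b
    solution.append(np.linalg.det(Ai) / D)

print(solution)               # approximately [5.0, 1.0, -2.0], i.e. x = 5, y = 1, z = -2
```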

## Number of Solutions when Solving Systems with Matrices

Most systems problems that you’ll deal with will have just **one solution** (these equations are called **independent** or **consistent**). But, like we learned in the **Systems of Linear Equations and Word Problems** section, sometimes we have systems where we either have no solutions or an infinite number of solutions.

Without going too much into Geometry, let’s look at what it looks like when the three equations in a system (each one graphs as a “plane”, like a flat piece of paper) have an **infinite number of solutions**, **no solutions**, and **one solution**, respectively:

Systems that have an **infinite number of solutions** (called **dependent** or **coincident**) will have two equations that are basically the same. One row of the coefficient matrix (and the corresponding constant matrix) is a multiple of another row, so it’s like you’re trying to solve a system with only two equations, but three unknowns. A system that has an infinite number of solutions may look like this: $ \displaystyle \begin{align}2x+2y-z&=16\\4x+4y-2z&=32\\\,\,x-3y+3z&=-4\end{align}$.

Systems with **no solutions** (called **inconsistent**) will have one row of the coefficient matrix a multiple of another, but the rows in the constant matrix will not be in this same proportion. Thus, a system that has no solutions may look like this: $ \displaystyle \begin{align}2x+2y-z\,&=16\\4x+4y-2z&=10\\\,\,x\,-3y+3z&=-4\end{align}$.

When you try to solve either of these types of systems in your **calculator** (using matrices), you’ll get an error, since the determinant of the coefficient matrix will be **0**. This is called a **singular matrix**, and the calculator will tell you so.

If you put these systems in a **3** by **4** augmented matrix and use **RREF**, you’ll be able to see what is happening. For the system with **infinite solutions**, you won’t get an identity matrix, and the last row shows that **0** always equals **0**. You can actually define the set of solutions by letting $ z$ be anything, and then, from the other rows, solving for $ x$ and $ y$ in terms of $ z$. The RREF would look like $ \displaystyle \begin{align}1x+0y+.375z&=5\\0x+1y-.875z&=3\\\,\,\,\,\,\,\,\text{For all }z,\,\,\,\,\,\,0&=0\end{align}$, so the solution set for $ \left( {x,y,z} \right)$ is $ \displaystyle \left\{ {\left( {5-.375z,\,3+.875z,\,z} \right)} \right\}$ (let $ z$ be anything, and solve for $ x$ and $ y$ in terms of $ z$). (This may be a little advanced for high school 🙂 ) For the system with **no solutions**, you still don’t get an identity matrix, and the last row shows that **0** would have to equal **1**, which is impossible.
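In code, you can spot both situations: the determinant is **0** either way, and the RREF of the augmented matrix (computed here with SymPy; a sketch of mine using the two systems above) tells the two cases apart:

```python
import numpy as np
from sympy import Matrix

# Coefficient matrix shared by both systems above (row 2 is a multiple of row 1)
A = np.array([[2.0,  2.0, -1.0],
              [4.0,  4.0, -2.0],
              [1.0, -3.0,  3.0]])
print(np.linalg.det(A))   # essentially 0: singular, so the inverse method fails

# RREF of each augmented matrix distinguishes the two cases
inf    = Matrix([[2, 2, -1, 16], [4, 4, -2, 32], [1, -3, 3, -4]]).rref()[0]
no_sol = Matrix([[2, 2, -1, 16], [4, 4, -2, 10], [1, -3, 3, -4]]).rref()[0]

print(inf.row(2))      # [0, 0, 0, 0]: 0 = 0, infinitely many solutions
print(no_sol.row(2))   # [0, 0, 0, 1]: 0 = 1, no solution
```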

## Applications of Matrices

Matrices can be used for many applications, including combining data, finding areas, and solving systems. Here are some examples of those applications; the most common is solving **systems of equations**, as in the following problem:

**Using the Determinant to get the Area of a Triangle**

In your Geometry class, you may learn a neat trick where we can get the **area of a triangle** using the determinant of a matrix; let’s try an example. Let’s say we have the three coordinate points of that triangle, $ \left( {{{a}_{1}},{{b}_{1}}} \right),\left( {{{a}_{2}},{{b}_{2}}} \right)$, and $ \left( {{{a}_{3}},{{b}_{3}}} \right)$. The formula for the area of the triangle bounded by those points is:

$ \displaystyle \text{Area of Triangle with points }\left( {{{a}_{1}},{{b}_{1}}} \right),\,\left( {{{a}_{2}},{{b}_{2}}} \right)\,\,\text{and}\,\left( {{{a}_{3}},{{b}_{3}}} \right)=\pm \frac{1}{2}\left| {\begin{array}{*{20}{c}} {{{a}_{1}}} & {{{b}_{1}}} & 1 \\ {{{a}_{2}}} & {{{b}_{2}}} & 1 \\ {{{a}_{3}}} & {{{b}_{3}}} & 1 \end{array}} \right|$ (Try both the plus and the minus, but take only the positive answer.)
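Here’s a quick sketch of the formula in code, checked against a made-up 3-4-5 right triangle of mine (taking the absolute value handles the “plus or minus”):

```python
import numpy as np

def triangle_area(p1, p2, p3):
    # Determinant formula from above; abs() takes care of the plus-or-minus
    m = np.array([[p1[0], p1[1], 1],
                  [p2[0], p2[1], 1],
                  [p3[0], p3[1], 1]], dtype=float)
    return abs(np.linalg.det(m)) / 2

print(triangle_area((0, 0), (4, 0), (0, 3)))   # 6.0: a 3-4-5 right triangle
```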

**Matrix Multiplication Word Problem when Tables are Given**

Here are some matrix multiplication applications:

Here are a few more matrix application problems that can get a little tricky:

**A Florist Must Make 5 Identical Bridesmaid Bouquets Systems Problem**

Here’s a problem from the **Systems of Linear Equations and Word Problems **section; we can see how much easier it is to solve with a matrix.

**An Input Output Problem**

**Input-output problems **are seen in** Economics**, where we might have industries that produce for consumers, but also consume for themselves. An application of matrices is used in this input-output analysis, which was first proposed by Wassily Leontief; in fact, he won the Nobel Prize in economics in 1973 for this work.

We can express the amounts (proportions) the industries consume in matrices, such as in the following problem. Don’t worry if you don’t get this now; I’m sure there are whole courses dedicated to this topic!

**Matrix Encoding/Decoding (Matrix Encryption) Problem**

**Matrix Encryption** is a fun and interesting application; it involves using matrices in cryptology, where a (“secret”) encoded message can be sent to someone else (along with a random matrix) and decoded on the other end. Here’s an example:
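Here’s a toy sketch of the idea; the key matrix and the A=1 … Z=26 letter scheme are assumptions of mine for illustration, not the page’s actual example. The sender multiplies by the key; the receiver multiplies by the key’s inverse to recover the message:

```python
import numpy as np

# Toy scheme: A=1 ... Z=26, letters encoded two at a time
key = np.array([[3.0, 1.0],
                [4.0, 8.0]])          # any matrix with a nonzero determinant works

message = np.array([8.0, 9.0])        # "H", "I"
encoded = key @ message               # [33., 104.] -- what actually gets sent
decoded = np.linalg.inv(key) @ encoded

print(np.round(decoded).astype(int))  # [8 9]  ->  "HI"
```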

**Learn these rules, and practice, practice, practice!**

**For Practice**: Use the **Mathway** widget below to try a **Matrices** problem. Click on **Submit** (the blue arrow to the right of the problem) to see the answer.

You can also type in your own problem, or click on the three dots in the upper right hand corner and click on “Examples” to drill down by topic.

If you click on **Tap to view steps**, or **Click Here**, you can register at **Mathway** for a **free trial**, and then upgrade to a paid subscription at any time (to get any type of math problem solved!).

On to **Solving Systems using Reduced Row Echelon Form** – you are ready!