• There are coyotes in my backyard.
  • There are rabbits in my backyard.
  • Coyotes eat rabbits.
  • Rabbits do not eat coyotes.
  • When the rabbit population is large, more coyotes move into the neighborhood.
  • When the rabbit population is small, coyotes move somewhere else.
  • When the coyote population is large, it's bad for rabbits.
  • When the coyote population is small, rabbits thrive.
  • Let $c(t)$ be the size of the population of coyotes as a function of time $t$.
  • Let $r(t)$ be the size of the population of rabbits as a function of time $t$.

Let's build a rudimentary model that describes the two populations over time.

In the absence of coyotes, we'll assume that the rabbit population grows at a rate proportional to the population size (more rabbits, faster growth):

$$\frac{dr}{dt}=a\cdot r(t)$$

However, the presence of coyotes has a negative effect on the population of rabbits:

$$\frac{dr}{dt}=a\cdot r(t)-b\cdot c(t)$$

(notice we assume implicitly that $a$ and $b$ are positive constants)

The equation for the coyote population is similar. The presence of coyotes contributes to the growth of their own population: they are pack animals, and they like to be together.

$$\frac{dc}{dt}=e\cdot c(t)$$

Certainly, the value of $e$ will be smaller than $a$, and perhaps the reality is more complicated; we'll keep it simple for now and modify the model later. The presence of rabbits contributes positively to the population size of coyotes:

$$\frac{dc}{dt}=e\cdot c(t)+f \cdot r(t)$$

(we've argued, for now, that $e$ and $f$ are both positive constants)

We have the following linear, homogeneous system of differential equations:

\begin{align*} \frac{dr}{dt}&=ar-bc\\ \frac{dc}{dt}&=fr+ec\\ \end{align*}
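Even before solving the system analytically, we can simulate it. A minimal forward-Euler sketch (the parameter values and initial populations below are made up for illustration, not fitted to any data):

```python
# Forward-Euler simulation of the rabbit/coyote system:
#   dr/dt = a*r - b*c,   dc/dt = f*r + e*c
# All numbers below are illustrative placeholders.
a, b = 0.5, 0.1   # rabbit growth rate, predation coefficient
e, f = 0.1, 0.2   # coyote growth rate, food-supply coefficient

r, c = 40.0, 10.0   # initial populations
dt = 0.01
for _ in range(1000):            # simulate 10 time units
    dr = (a * r - b * c) * dt    # change in rabbits over one step
    dc = (f * r + e * c) * dt    # change in coyotes over one step
    r, c = r + dr, c + dc

print(r, c)   # both populations after 10 time units
```

With these (made-up) positive parameters, both populations grow over the simulated window, which matches the qualitative story above.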

We'll consider the general situation:

\begin{align*} \frac{dx}{dt}&=ax+by\\ \frac{dy}{dt}&=cx+dy\\ \end{align*}

where $a$, $b$, $c$, and $d$ are constants (positive, negative, or zero) and $x(t)$ and $y(t)$ are the two functions of time we're interested in.

Our goal is to solve the system and find $x(t)$ and $y(t)$. (These would be the rabbit and coyote populations, if we had data from which to estimate the parameters $a$, $b$, $c$, and $d$.)

It's convenient to use matrix notation. We'll write

\begin{align*} \frac{dx}{dt}&=ax+by\\ \frac{dy}{dt}&=cx+dy\\ \end{align*}

as

$$\left(\begin{array}{c} \frac{dx}{dt}\\ \frac{dy}{dt} \end{array}\right) =\left(\begin{array}{cc}a & b \\c & d\end{array}\right)\left(\begin{array}{c} x\\ y\end{array}\right)$$
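The matrix form translates directly into numpy: the right-hand side of the system is just a matrix-vector product. A quick sketch with illustrative entries (the numbers below are placeholders):

```python
import numpy as np

# The right-hand side of the system as a matrix-vector product.
# Entries of A and the state vector are illustrative placeholders.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # [[a, b], [c, d]]
v = np.array([5.0, 6.0])     # current (x, y)

rhs = A @ v                  # (a*x + b*y, c*x + d*y)
# Same as computing each component by hand:
assert np.allclose(rhs, [1.0 * 5 + 2.0 * 6, 3.0 * 5 + 4.0 * 6])
print(rhs)
```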

As mathematicians, we're going to try to use a technique we've seen before to find a solution. We'll guess $$x(t)=c_1e^{\lambda t}$$ and $$y(t)=c_2e^{\lambda t}.$$

The derivatives are: $\frac{dx}{dt}=c_1\lambda e^{\lambda t}$ and $\frac{dy}{dt}=c_2\lambda e^{\lambda t}$. Plug into the matrix equation:

$$\left(\begin{array}{c} c_1\lambda e^{\lambda t}\\ c_2\lambda e^{\lambda t} \end{array}\right) =\left(\begin{array}{cc}a & b \\c & d\end{array}\right)\left(\begin{array}{c} c_1e^{\lambda t}\\ c_2e^{\lambda t}\end{array}\right).$$

For the most part, matrix arithmetic behaves the way you think it should:

$$\lambda e^{\lambda t}\left(\begin{array}{c} c_1\\ c_2 \end{array}\right) =e^{\lambda t}\left(\begin{array}{cc}a & b \\c & d\end{array}\right)\left(\begin{array}{c} c_1\\ c_2\end{array}\right)$$

Dividing both sides by $e^{\lambda t}$ (which is never zero):

$$\lambda \left(\begin{array}{c} c_1\\ c_2 \end{array}\right) =\left(\begin{array}{cc}a & b \\c & d\end{array}\right)\left(\begin{array}{c} c_1\\ c_2\end{array}\right)$$

Remember $a$, $b$, $c$, and $d$ are known parameters. We're trying to find $\lambda$ and a vector $ \left(\begin{array}{c} c_1\\ c_2 \end{array}\right)$ that do something rather special:

$$\lambda \left(\begin{array}{c} c_1\\ c_2 \end{array}\right) =\left(\begin{array}{cc}a & b \\c & d\end{array}\right)\left(\begin{array}{c} c_1\\ c_2\end{array}\right)$$

Multiplying the vector by the matrix is the same as just multiplying it by a constant!

  • $\lambda$ is called an eigenvalue

  • $ \left(\begin{array}{c} c_1\\ c_2 \end{array}\right)$ is called an eigenvector

Notice that $$\left(\begin{array}{c} c_1\\ c_2 \end{array}\right)= \left(\begin{array}{c} 0\\ 0 \end{array}\right)$$

always satisfies the equation. This solution is called the trivial solution and mostly isn't helpful. So, we're looking for a non-trivial solution.
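We're about to find eigenvalues and eigenvectors by hand, but numpy can compute them directly, which is handy for checking our work. A sketch with an illustrative matrix (the entries below are made up):

```python
import numpy as np

# numpy computes eigen-pairs of a matrix directly.
# The entries of A are illustrative placeholders.
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)   # columns of the second array
lam = eigenvalues[0]                           # one eigenvalue...
v = eigenvectors[:, 0]                         # ...and its eigenvector

# The defining property: multiplying v by the matrix is the same as
# multiplying it by the constant lam (and v is non-trivial).
assert np.allclose(A @ v, lam * v)
print(eigenvalues)   # for this matrix: 3 and -1, in some order
```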

We'll do a little matrix arithmetic and interpretation to find, first, the eigenvalue(s) and, second, the associated eigenvectors.

\begin{align*} \lambda \left(\begin{array}{c} c_1\\ c_2 \end{array}\right) &=\left(\begin{array}{cc}a & b \\c & d\end{array}\right)\left(\begin{array}{c} c_1\\ c_2\end{array}\right) \\ \\ \left(\begin{array}{cc}a & b \\c & d\end{array}\right)\left(\begin{array}{c} c_1\\ c_2\end{array}\right)-\lambda \left(\begin{array}{c} c_1\\ c_2 \end{array}\right)&=0\\ \\ \left(\begin{array}{cc}a & b \\c & d\end{array}\right)\left(\begin{array}{c} c_1\\ c_2\end{array}\right)-\lambda \left(\begin{array}{cc}1 & 0 \\0 & 1\end{array}\right) \left(\begin{array}{c} c_1\\ c_2 \end{array}\right)&=0\\\\ \left(\begin{array}{cc}a & b \\c & d\end{array}\right)\left(\begin{array}{c} c_1\\ c_2\end{array}\right)- \left(\begin{array}{cc}\lambda & 0 \\0 & \lambda\end{array}\right) \left(\begin{array}{c} c_1\\ c_2 \end{array}\right)&=0\\ \\ \left(\begin{array}{cc}a-\lambda & b \\c & d-\lambda\end{array}\right)\left(\begin{array}{c} c_1\\ c_2\end{array}\right)&=0\\ \\ \end{align*}

So, we have the system: \begin{align*} (a-\lambda)c_1+bc_2&=0\\ \\ cc_1+(d-\lambda)c_2&=0 \end{align*}

Multiply the top equation by $-c$ and the bottom by $(a-\lambda)$, then add:

\begin{align*} -cbc_2+(a-\lambda)(d-\lambda)c_2&=0\\ c_2(-cb+(a-\lambda)(d-\lambda))&=0\\ \end{align*}

We're looking for a solution where $c_2 \neq 0$ so we need $-cb+(a-\lambda)(d-\lambda)=0$.

So, the matrix equation we are trying to solve has a non-trivial solution only if $(a-\lambda)(d-\lambda)-bc=0$. This equation is quadratic in $\lambda$, so we can solve for $\lambda$. You might notice that, just as in the case of second-order linear differential equations, we'll have several cases: two real solutions, a repeated real solution, and complex conjugate solutions.
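Expanding, $(a-\lambda)(d-\lambda)-bc=0$ becomes $\lambda^2-(a+d)\lambda+(ad-bc)=0$, so the quadratic formula produces the eigenvalues. A sketch with illustrative entries, cross-checked against numpy:

```python
import numpy as np

# Characteristic equation (a - lam)(d - lam) - b*c = 0, expanded:
#   lam**2 - (a + d)*lam + (a*d - b*c) = 0
# The entries below are illustrative placeholders.
a, b, c, d = 1.0, 2.0, 2.0, 1.0

trace = a + d                      # coefficient of -lam
det = a * d - b * c                # constant term
disc = trace**2 - 4 * det          # discriminant decides the three cases
lam1 = (trace + disc**0.5) / 2     # Python gives a complex result if disc < 0
lam2 = (trace - disc**0.5) / 2

# Cross-check against numpy's eigenvalue routine.
assert np.allclose(sorted([lam1, lam2]),
                   sorted(np.linalg.eigvals(np.array([[a, b], [c, d]]))))
print(lam1, lam2)
```

The sign of `disc` sorts out the three cases mentioned above: positive gives two real eigenvalues, zero a repeated one, and negative a complex conjugate pair.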

Once we have $\lambda$ values (eigenvalues) we can go back and find the associated eigenvectors. Together, the eigenvalues and their eigenvectors will give us two, linearly independent, solutions to the system and their linear combination will form the general solution. This will become apparent with examples.
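The claim that a linear combination of the eigen-pair solutions solves the system can be checked numerically. A sketch, with an illustrative matrix and arbitrarily chosen constants, comparing a finite-difference derivative of the candidate solution against the right-hand side of the system:

```python
import numpy as np

# Build the general solution k1*v1*exp(lam1*t) + k2*v2*exp(lam2*t)
# from eigen-pairs and check it satisfies d/dt (x, y) = A (x, y).
# The matrix and constants are illustrative placeholders.
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
lams, V = np.linalg.eig(A)         # eigenvalues; eigenvectors as columns
k = np.array([1.5, -0.5])          # arbitrary constants k1, k2

def solution(t):
    # Columns of V weighted by k_i * exp(lam_i * t), summed.
    return V @ (k * np.exp(lams * t))

t, h = 0.7, 1e-6
deriv = (solution(t + h) - solution(t - h)) / (2 * h)  # central difference
assert np.allclose(deriv, A @ solution(t), atol=1e-4)
```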

We'll show one of each case in class. A few pages from the Herman text, showing similar examples, are posted in Canvas.

Summary:

  • Eigenvalues come from solving $(a-\lambda)(d-\lambda)-bc=0$. In matrix notation this equation is $$\det\left(\begin{array}{cc}a-\lambda & b \\c & d-\lambda\end{array}\right) =0$$

  • Eigenvectors are associated to particular eigenvalues: examples will show how they are found.

  • The general solution is the linear combination of two linearly independent solutions.
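The determinant condition in the first bullet can also be checked numerically: $\det(A-\lambda I)$ vanishes exactly at the eigenvalues. A sketch with an illustrative matrix:

```python
import numpy as np

# det(A - lam*I) is (numerically) zero exactly when lam is an eigenvalue.
# The entries of A are illustrative placeholders.
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

for lam in np.linalg.eigvals(A):
    # Each eigenvalue makes the shifted matrix singular.
    assert abs(np.linalg.det(A - lam * np.eye(2))) < 1e-9
```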

Homework Part I:

Solve the following systems of differential equations: (These problems are adapted from the Trench textbook.)

  1. \begin{align*} \frac{dx}{dt}&=x+2y\\ \frac{dy}{dt}&=2x+y\\ \end{align*}
  2. \begin{align*} \frac{dx}{dt}&=-x-4y\\ \frac{dy}{dt}&=-x-y\\ \end{align*}
  3. \begin{align*} \frac{dx}{dt}&=-x+2y\\ \frac{dy}{dt}&=-5x+5y\\ \end{align*}
  4. \begin{align*} \frac{dx}{dt}&=-11x+4y\\ \frac{dy}{dt}&=-26x+9y\\ \end{align*}