# 46 Force-balance equations

Up to now, we have been studying dynamics in the format of one or more first-order differential equations. For instance,

\[\partial_t x = f(x, y)\\ \partial_t y = g(x, y)\] where \(f(x,y)\) and \(g(x,y)\) are the dynamical functions. This is not the style in which differential equations were introduced in the late 1600s. Instead, Isaac Newton (1642-1727) wrote his differential equations in the format of his Second Law of Motion, which reads (in a 1792 translation):

> LAW II: The alteration of motion is ever proportional to the motive force impressed; and is made in the direction of the right line in which that force is impressed.

In contemporary language, we would say things differently. Instead of “alteration of motion” we would write \(\partial_t v\), where \(v\) is the velocity of the moving object. We call \(\partial_t v\) the acceleration. Instead of “motive force” we would say simply “force,” and instead of “made in the direction of the right line in which that force is impressed”, we would say that velocity, acceleration, and force are vector quantities. Newton’s “ever proportional to” amounts to saying that

\[\partial_t \vec{v} = b \vec{F}\,\]

that is, change in motion is proportional to force. Newton stipulated that the constant of proportionality, \(b\), is the reciprocal of mass, that is \(1/m\). Writing acceleration \(\vec{a} = \partial_t \vec{v}\), the previous equation amounts to \[m \vec{a} = \vec{F}\ ,\]

the form in which beginning physics students first hear it.

Newton, of course, was very interested in **gravity**. From previous experiments dropping weights and rolling balls down ramps, as done by Galileo Galilei (1564-1642), Newton knew that the force of gravity on an object (near the surface of Earth) is proportional to the object’s mass, that is \[\vec{F} = -m\, g\ ,\] where the direction of \(\vec{F}\) is straight downwards toward the center of the Earth. The negative sign in front of \(g\) reflects this downward direction. We are assuming that position and velocity are both defined in a positive, upward direction.

The simple model of an object moving under the force of gravity is \(\partial_t v = -g\). Notice that this is not a linear differential equation: \(-g\) is a constant, not a linear function of \(v\), and there is no fixed point. So the solution is not an exponential. But we can find the solution easily enough by integrating both sides of the equation with respect to \(t\).

\[\int \partial_t v\, dt = \int -g\, dt \ \ \implies v(t) = -g\, t + C\]

where \(C\) captures the constants of integration from both integrals into one number.

It is worth noticing how much mathematics needs to be understood before this method of solution makes sense. The **fundamental theorem of calculus** is what tells us that \(\int \partial_t v\, dt = v(t) + B\), and you have to know how to anti-differentiate a constant function to make sense of \(\int -g\, dt = -g\,t + D\). You also need to know why constants of integration, such as \(B\) and \(D\), get included when writing the function that results from an anti-differentiation. (You might need to revisit Block 3 to refresh your memory about such things.)

There is also physical context to be considered. By setting \(t=0\) in \(v(t) = -g\,t + C\), for instance, we can identify \(C\) as the velocity at time zero, which we might write \(v(0)\) or \(v_0\) for short. And what about the position of the object? The solution \(v(t) = -g\,t + v_0\) has nothing to say directly about the position \(x(t)\) of the object as a function of time. We can work position \(x(t)\) into things by recognizing that \(v(t) = \partial_t x(t)\), which is the definition of velocity.

Anti-differentiating both sides of \(v(t) = -g\, t + v_0\) gives us a more complete story that includes both the initial velocity \(v_0\) and the initial position \(x_0\):

\[\int v(t)\, dt = \int \left(-g\, t + v_0\right) dt \ \ \implies\ \ x(t) = -\frac{1}{2}\, g\,t^2 + v_0\,t + x_0\ ,\]

where \(x_0\) is the constant of integration from this second stage of anti-differentiation. (Plug in \(t=0\) to see why we are justified in taking \(x_0\) as the initial position.)
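As a quick sanity check, we can verify numerically that this formula satisfies the initial conditions and the original differential equation. (This snippet uses base R with finite-difference derivatives rather than the textbook's helper functions; the values of \(g\), \(v_0\), and \(x_0\) are arbitrary illustrative choices.)

```
# Closed-form solution x(t) = -(1/2) g t^2 + v0 t + x0, with illustrative constants
g <- 9.8; v0 <- 3; x0 <- 100
x <- function(t) -g * t^2 / 2 + v0 * t + x0

h <- 1e-4   # step size for finite-difference derivatives
v_num <- function(t) (x(t + h) - x(t - h)) / (2 * h)         # numerical first derivative
a_num <- function(t) (x(t + h) - 2 * x(t) + x(t - h)) / h^2  # numerical second derivative

stopifnot(abs(x(0) - x0) < 1e-8)                      # x(0) is the initial position
stopifnot(abs(v_num(0) - v0) < 1e-6)                  # velocity at t = 0 is v0
stopifnot(all(abs(a_num(c(0.5, 1, 2)) + g) < 1e-3))   # acceleration is -g throughout
```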

There is still one more way to write the dynamics of falling under the influence of gravity. Recognizing that \(v(t) = \partial_t x(t)\), we can see that \(\partial_t v(t) = \partial_{tt} x(t)\). So the original differential equation could be written:

\[\partial_{tt} x = -g\]

This is an example of a **second-order differential equation**, so called because of the appearance of a second derivative, \(\partial_{tt}x\).

In this chapter, we will study second-order differential equations in a variety of contexts. But, as for Newton, movement under the influence of gravity will be a focus. Since a second-order equation of this sort can be interpreted as representing the balance between force and acceleration, we will call these **force-balance equations**.

In general, a force-balance equation has the form \[\partial_{tt} x = f(\partial_t x, x)\ ;\] that is, the acceleration is a function of both position and velocity. In the above example, the dynamical function has a particularly simple form: \(f(\partial_t x, x) \equiv -g\).

Second-order differential equations can always be written as a pair of first-order differential equations. To see this, let one of the first-order equations be \[\partial_t x = v\ .\] The other equation, \(\partial_{tt} x = f(\partial_t x, x)\) can be re-written in terms of \(v\):

\[\partial_t v = f(v, x)\ .\]

Since we know how to solve sets of first-order differential equations by Euler’s method, we can always find the solution \(x(t)\) to any second-order differential equation.
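To make this concrete, here is a minimal Euler's-method loop in base R (not the textbook's `integrateODE()`) for the falling-object system \(\partial_t x = v\), \(\partial_t v = -g\), with the endpoint compared against the exact solution. The step size, time span, and initial conditions are illustrative choices.

```
g <- 9.8
dt <- 0.001                      # Euler step size
t_end <- 2
x <- 100; v <- 0                 # initial position (m) and velocity (m/s)
for (i in 1:round(t_end / dt)) {
  x_new <- x + v * dt            # dx/dt = v
  v_new <- v - g * dt            # dv/dt = -g
  x <- x_new; v <- v_new
}
x_exact <- 100 - g * t_end^2 / 2 # exact solution at t_end: 80.4
abs(x - x_exact)                 # discretization error, about 0.01
```

Shrinking `dt` shrinks the error in proportion, which is the hallmark of Euler's method.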

## 46.1 Ballistics

A lot of the theory of second-order differential equations was developed in the setting of a ball being set off with an initial velocity from an initial position. Such a focus on the flight of balls might seem trivial. Fortunately, language allows us to construct a scientific-sounding word by adding the suffix “istic” to the root “ball.” This suffixing produces the word **ballistics**.

The importance of ballistics to Newton can be seen by a famous diagram he drew, shown in Figure 46.1. In the diagram, Newton traces the path of a ball shot horizontally from a cannon placed at the top of a mountain.

Since the motion in Newton’s diagram has both vertical and horizontal components, we will need *two* second-order differential equations:

\[\text{Horizontal}: \ \ \partial_{tt} x = 0\\ \ \ \ \text{Vertical}: \ \ \ \ \ \ \partial_{tt} y = -g\] The zero on the right-hand side of the equation of horizontal movement reflects that gravity does not act horizontally.

We found a solution for the vertical equation in the previous section, \[y(t) = -\frac{1}{2} g\,t^2 + 0\,t + y_0\ .\] The \(0\, t\) component of the solution reflects that the vertical component of the ball’s velocity, coming out of the cannon, is zero.

The solution for the horizontal component of motion can be found by anti-differentiating both sides of the equation of horizontal motion: \[\int \partial_{tt} x(t)\, dt = \partial_t x(t) = \int 0\, dt = v_0\] where \(v_0\) is the initial horizontal velocity. A second stage of anti-differentiation gives \(x(t)\) itself:

\[\int \partial_t x(t)\, dt = \int v_0\, dt = v_0\, t + x_0\]

Regrettably, symbolic anti-differentiation works only in simple cases. To support more realistic models of ballistics, let’s see how to translate the two second-order differential equations into sets of first-order equations. The state variables will be \(x(t)\) and \(y(t)\), but we also have to add another pair, \(u(t)\) and \(v(t)\) standing for the horizontal and vertical velocities respectively. The first-order equations will be:

\[\partial_t x = u\\ \partial_t y = v\\ \partial_t u = 0\\ \partial_t v = -g \]

To illustrate, we will solve this set of four first-order equations numerically. We need to specify the initial values for \(x_0\), \(y_0\), \(u_0\) and \(v_0\). We will let the cannon be located at horizontal position \(x_0 = 0\) and vertical position \(y_0 = 100\) meters. The vertical velocity is, initially, zero, so \(v_0 = 0\). And suppose the cannon produces an initial horizontal velocity of \(u_0 = 250\) meters/sec. The constant \(g\) is known to be 9.8 meters/sec^{2}.

Here’s the trajectory:

```
traj <- integrateODE(
  dx ~ u, dy ~ v, du ~ 0, dv ~ -9.8,  # dynamics
  x = 0, y = 100, u = 250, v = 0,     # initial conditions
  bounds(t = 0:5)
)
traj_plot(y(t) ~ x(t), traj)
traj_plot(v(t) ~ u(t), traj)
```

The left panel in Figure 46.2 shows that the trajectory is a parabola. At about \(t=4.5\) secs the \(y\) position is zero. Since zero is the location of the ground, the part of the trajectory for \(4.5 < t\) is invalid, since the ball has already hit the ground. The ball travels a little more than 1100 meters horizontally before hitting the ground.
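These values can be cross-checked against the closed-form solution: the ball lands when \(y(t) = 100 - \frac{1}{2} g t^2 = 0\), and the range is \(u_0\) times the landing time. A quick base-R calculation:

```
g <- 9.8; y0 <- 100; u0 <- 250
t_hit <- sqrt(2 * y0 / g)    # landing time: about 4.52 seconds
range <- u0 * t_hit          # horizontal distance at impact: about 1129 meters
c(t_hit, range)
```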

The right panel might seem somewhat strange. You can see that the vertical component of velocity, \(v(t)\), starts out at zero and decreases linearly with time, becoming more and more negative as gravity continues to accelerate the ball downward. The horizontal velocity, \(u(t)\), stays constant at \(u(t) = 250\) meters per second. This is because there is no horizontal force on the ball.

## 46.2 The harmonic oscillator

Consider the motion of a weight attached to a spring, as shown in the margin figure. We will denote the vertical position of the mass by \(y(t)\). Such a spring-mass system has a fixed point where the spring is stretched just enough to cancel out gravity and the velocity is zero. We will measure \(y\) relative to this fixed point.

*A spring-mass system in motion. (Source: Svjo, CC BY-SA, via Wikimedia Commons)*

According to Hooke’s Law, a stretched or compressed spring exerts a force that is proportional to the amount of extension or compression. With our measuring \(y\) relative to the fixed point, the Hooke’s Law force will be

\[m\, \partial_{tt} y = - s\, y\ ,\] where \(m\) is the amount of mass and \(s\) is the stiffness of the spring. This force-balance equation corresponds to the second-order differential equation \[\partial_{tt} y = - \frac{s}{m} y\ .\]

You can see that the motion is oscillatory, which suggests that the solution to the differential equation will be of the form \(y(t) = A \sin(\omega t)\). Taking this as an **ansatz** leads to finding a value of \(\omega\), which is called the **angular frequency** of the oscillation. (In terms of the period of oscillation \(P\), the angular frequency is \(\omega = 2 \pi/P\).)

To find \(\omega\), plug the ansatz into the differential equation:

\[\partial_{tt} A \sin(\omega t) = - \frac{s}{m}\, A \sin(\omega t)\]

Differentiating \(\sin(\omega t)\) once lets us re-write the left-hand side of the equation in terms of a *first derivative*:

\[\partial_{t} A \omega\, \cos(\omega t) = - \frac{s}{m}\, A \sin(\omega t)\]

Differentiating again gives

\[- \omega^2 A\sin(\omega\, t) = - \frac{s}{m}\, A\sin(\omega t)\ .\]

Simplifying this by cancelling out the \(A \sin(\omega t)\) term gives \(\omega^2 = \frac{s}{m}\), where \(\omega\) is the angular frequency of the oscillation.

Instead of using \(A \sin(\omega t)\) as the *ansatz* we could have used \(A \sin(\omega t) + B \cos(\omega t)\). Working through this ansatz would produce the same result, that \(\omega^2 = \frac{s}{m}\). So the solution to the spring-mass system will be, in general, a linear combination of the sine and the cosine functions with angular frequency \(\omega\).
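We can confirm numerically that such a linear combination with \(\omega = \sqrt{s/m}\) satisfies \(\partial_{tt} y = -\frac{s}{m}\, y\). (Base R, using a finite-difference second derivative; the values of \(s\), \(m\), \(A\), and \(B\) are arbitrary illustrative choices.)

```
s <- 4; m <- 1                   # illustrative stiffness and mass
omega <- sqrt(s / m)             # angular frequency from the ansatz
A <- 2; B <- -1                  # arbitrary amplitudes
y <- function(t) A * sin(omega * t) + B * cos(omega * t)

h <- 1e-4                        # finite-difference second derivative
ytt <- function(t) (y(t + h) - 2 * y(t) + y(t - h)) / h^2

t_check <- c(0.3, 1.1, 2.7)
max(abs(ytt(t_check) + (s / m) * y(t_check)))  # near zero: the ODE is satisfied
```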

## 46.3 Exponential or sinusoid?

In Chapter 45, we established that solutions to second-order linear differential equations have the form \(m_1 e^{\lambda_1 t} + m_2 e^{\lambda_2 t}\). Yet in the previous section, we saw one linear second-order differential equation, \(\partial_{tt} y = - \omega^2 y\), where the solution is a linear combination of a sine and a cosine function: \(y(t) = A \sin(\omega t) + B \cos(\omega t)\) with \(\omega = \sqrt{\frac{s}{m}}\).

How is it possible for the solution to be both in the form of a linear combination of exponentials and a linear combination of sine and cosine? Sinusoids oscillate up and down and up and down, whereas exponentials are monotonic.

To find out what might be the relationship between an exponential and a sinusoid, let’s plug an exponential *ansatz* \(y(t) = A e^{\lambda t}\) into the spring-mass system \(\partial_{tt} y = -\omega^2 y\).

\[\partial_{tt} A e^{\lambda t} = \lambda^2 A e^{\lambda t} = -\omega^2 A e^{\lambda t}\ .\]

As before, we will cancel out the common term \(A e^{\lambda t}\) to get a simple relationship:

\[\lambda^2 = -\omega^2\ \ \ \implies\ \ \ \lambda = \pm \sqrt{\strut-1}\ \omega \ .\]

Generally, the symbol \(i\) is used to stand for \(\sqrt{\strut -1}\), so our eigenvalues can be written \(\lambda = \pm i \omega\). The solution to the spring-mass system, according to this analysis, is:

\[y(t) = m_1 e^{i\omega t} + m_2 e^{-i \omega t}\]

In other words, \(e^{i \omega t}\)—notice the \(i\) in the argument to the exponential—is a sinusoid with angular frequency \(\omega\).
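R's built-in complex arithmetic lets us verify this relation, \(e^{i\omega t} = \cos(\omega t) + i\sin(\omega t)\), directly (here with the arbitrary choice \(\omega = 2\)):

```
omega <- 2
t <- seq(0, 5, by = 0.25)
lhs <- exp(1i * omega * t)                   # the complex exponential
rhs <- cos(omega * t) + 1i * sin(omega * t)  # the sinusoid form
max(Mod(lhs - rhs))                          # zero, up to round-off
```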

## 46.4 Exponentials with “imaginary” inputs

The “imaginary” in the section title is used in its mathematical sense. In interpreting the word “imaginary,” you should keep in mind mathematics’ long history of assigning insulting names to objects that were regarded with suspicion at the time they were first introduced. That is why some numbers are vilified as “negative,” and some as “irrational.” The insult is even more dire for numbers like \(i\), which are called the “imaginary” numbers. Regrettably, the word “imaginary” leads many people to shy away from them, just as many people avoid genres such as fantasy fiction. Because imaginary numbers are introduced as somehow freakish (is there really a numerical value for \(\sqrt{\strut -1}\)?) and are rarely touched until advanced calculus, students are unused to them.

You will only get comfortable with “imaginary” numbers when you start to work with them extensively, as happens in physics and engineering courses. Our goal here is merely to increase your awareness of imaginary numbers and some of the ways they are used in the sciences. To that end, we offer three different approaches to understanding the function \(e^{i\omega t}\).

- Basic, pragmatic understanding. This is the level of understanding that you **must have** to make sense of the rest of this chapter and the chapter on forcing. Here it is:

\[e^{i\omega t}\ \text{is simply a shorthand for}\ \cos(\omega t).\]

So whenever you see \(e^{i \omega t}\), think of \(\cos(\omega t)\).

- Algebraic understanding via Taylor Polynomials. (optional) This level of understanding can give you confidence that the basic, pragmatic understanding in (1) has honest roots. It also shows the way that (1) is not 100% on target (although good enough for a large fraction of mathematical work). But for many people, algebra is a rocky road to understanding.

The starting point for the algebraic understanding is the Taylor polynomial approximation for \(e^{\omega t}\). Recall from Chapter 26 that

\[e^{\omega t} = 1 + \omega t + \frac{1}{2!}\omega^2 t^2 + \frac{1}{3!}\omega^3 t^3 + \frac{1}{4!} \omega^4 t^4 + \frac{1}{5!} \omega^5 t^5 + \frac{1}{6!} \omega^6 t^6 + \cdots\]

You may also recall the Taylor polynomial expansion of sine and cosine:

\[ \cos(\omega t) = 1 - \frac{1}{2!} \omega^2 t^2 + \frac{1}{4!}\omega^4 t^4 - \frac{1}{6!} \omega^6 t^6 + \cdots\]

\[\color{magenta}{\sin(\omega t) = \omega t - \frac{1}{3!}\omega^3 t^3 + \frac{1}{5!} \omega^5 t^5 + \cdots}\]

You can see some association between \(e^{\omega t}\), \(\cos(\omega t)\), and \(\sin(\omega t)\) by looking at

\[\cos(\omega t) + \color{magenta}{i \sin(\omega t)} = 1 + \color{magenta}{i \omega t} -\frac{1}{2!} \omega^2 t^2 - \color{magenta}{i \frac{1}{3!} \omega^3 t^3} + \frac{1}{4!}\omega^4 t^4 + \color{magenta}{i \frac{1}{5!} \omega^5 t^5} - \frac{1}{6!}\omega^6 t^6 + \cdots\]

Now consider the Taylor polynomial for \(e^{i\omega t}\). This will be the same as the Taylor polynomial for \(e^{\omega t}\) but everywhere substituting \(i \omega\) in place of the plain \(\omega\). That is:

\[e^{i \omega t} = 1 + \color{magenta}{i\omega t} + \frac{1}{2!}i^2\omega^2 t^2 + \color{magenta}{\frac{1}{3!}i^3\omega^3 t^3} + \frac{1}{4!} i^4\omega^4 t^4 + \color{magenta}{\frac{1}{5!} i^5\omega^5 t^5} + \frac{1}{6!} i^6\omega^6 t^6 + \cdots\]

Since \(i\equiv \sqrt{\strut -1}\), we have the following facts for the powers \(i^n\):

\[i^2 = -1\ \ \ \ \ \color{magenta}{i^3 = -i}\ \ \ \ \ i^4 = 1\ \ \ \ \ \color{magenta}{i^5 = i}\ \ \ \ \ i^6 = -1\ \ \text{and so on}.\]

Substitute these facts about \(i^n\) into the Taylor polynomial for \(e^{i\omega t}\):

\[e^{i \omega t} = 1 + \color{magenta}{i\omega t} - \frac{1}{2!}\omega^2 t^2 - \color{magenta}{i \frac{1}{3!}\omega^3 t^3} + \frac{1}{4!} \omega^4 t^4 + \color{magenta}{i \frac{1}{5!} \omega^5 t^5} - \frac{1}{6!} \omega^6 t^6 + \cdots\]

which exactly matches the Taylor polynomial for \(\cos(\omega t) + \color{magenta}{i \sin(\omega t)}\).
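You can watch the partial sums of the series do their work in R. (The test input and the 20-term cutoff are arbitrary illustrative choices.)

```
theta <- 1.3                                       # arbitrary test input
k <- 0:20
partial_sum <- sum((1i * theta)^k / factorial(k))  # Taylor series for exp(i*theta)
exact <- cos(theta) + 1i * sin(theta)
Mod(partial_sum - exact)                           # tiny truncation error
```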

- The arithmetic of complex numbers. (optional) A **complex number** is a number like \(2 - 3i\), which consists of two parts: the real part \(2\) and the imaginary part \(-3\). When you multiply one complex number by another, you get a complex number (although either the real or imaginary part might happen to be zero). For example:

\[(2 + 3i)^2 = (2+3i)(2+3i) = \underbrace{4}_{2\times 2} + \underbrace{ \ 6 i\ }_{2 (3i)} + \underbrace{\ 6 i\ }_{(3i)2}\ \ \underbrace{- 9}_{(3i)(3i)}\ = -5 +12 i.\]

R knows the rules for arithmetic on complex numbers. Here’s a demonstration of the oscillations that result from raising a complex number to successive powers.

```
lambda <- 0.65 + 0.76i
lambda^2
## [1] -0.1551+0.988i
lambda^3
## [1] -0.851695+0.524324i
lambda^4
## [1] -0.952088-0.3064776i
lambda^5
## [1] -0.3859342-0.9227973i
lambda^6
## [1] 0.4504687-0.8931283i
lambda^7
## [1] 0.9715821-0.2381771i
lambda^8
## [1] 0.812543+0.5835873i
lambda^9
## [1] 0.0846266+0.9968644i
lambda^10
## [1] -0.7026097+0.7122781i
```

Notice that the real part of the result oscillates between negative and positive. The imaginary part also oscillates, but delayed a bit from the real part. Just like sine and cosine.

We can get a clearer picture by plotting \(e^{i\omega t}\) over the domain \(0 < t < 10\); as an example, we will set \(\omega = 2\). We need to be a little careful, since our plotting functions are not arranged to display complex numbers. But there is an easy workaround: plot the “real” and “imaginary” parts separately. The R operators `Re()` and `Im()` do this work.

```
f <- makeFun(exp(1i * omega * t) ~ t, omega = 2)
slice_plot(Re(f(t)) ~ t,
           bounds(t = 0:10), color = "magenta") %>%
  slice_plot(Im(f(t)) ~ t, color = "brown")
```

## 46.5 Damping

It is common for there to be friction, called **damping**, in a spring-mass system. To keep things very simple, we will take the friction force to be proportional to the velocity and directed opposite to the velocity. That is: \[\partial_{tt} y = -r\, \partial_t y -b y\ ,\] where \(b\) is the positive number \(\frac{s}{m}\) and \(r\) is another positive number reflecting the magnitude of friction. (Think of \(r\) as standing for “resistance.”)

As always, this second-order differential equation can be written as a pair of first-order differential equations. One of the first-order differential equations will be

\[\partial_t y = v\ ,\] which is just the definition of the velocity \(v\). The other first-order equation will be \[\partial_t v = -r v - b y\ .\] Both equations are linear.

In the previous chapter, we wrote such a pair of linear first-order differential equations in terms of a vector

\[\vec{w(t)} = \left[\begin{array}{c}v(t)\\y(t)\end{array}\right]\ .\]

In terms of the vector \(\vec{w(t)}\) the dynamics can be written in vector/matrix form:

\[\partial_t \vec{w} = \left[\begin{array}{cc}-r & -b\\1 & 0\end{array}\right]\, \vec{w}\ .\]

This form suggests, at least to the avid reader of the previous chapter, that we look for a solution \(y(t) = m_1\, e^{\lambda_1\, t} + m_2\, e^{\lambda_2\, t}\) in terms of the eigenvectors and eigenvalues of the matrix \(\left[\begin{array}{cc}-r & -b\\1 & 0\end{array}\right]\).

We used the R function `eigen()` to compute the eigenvalues and eigenvectors of the matrix, given numerical values for \(r\) and \(b\). Let’s now try to find an algebraic formula for the eigenvalues. After all, it is the eigenvalues that determine the stability of the fixed point.

As an *ansatz* for the original second-order differential equation \[\partial_{tt} y = -r\, \partial_t y - b\, y\ ,\] let’s use \(y(t) = A e^{\lambda t}\), a simple exponential function. Plugging the ansatz into the differential equation gives:

\[A \lambda^2 e^{\lambda t} = - r A \lambda e^{\lambda t} - b A e^{\lambda t}\ .\]

We can cancel out the common term \(A e^{\lambda t}\) from all the terms in the equation, and bring all the terms to the left-hand side of the equation, leaving us with

\[\lambda^2 + r \lambda + b = 0\ .\]

This is a quadratic polynomial in \(\lambda\), so we can use the “quadratic formula” to find values for \(\lambda\) that are consistent with the parameters \(r\) and \(b\). In applying the quadratic formula you have to remember that the standard statement is for the roots of \(a x^2 + b x + c = 0\) and make adjustment for the fact that our polynomial uses the parameter names differently: \(\lambda^2 + r \lambda + b = 0\).

\[\lambda = \frac{- r \pm \sqrt{\strut r^2 - 4 b}}{2}\ .\]

Recall that the parameter \(r\) describes the amount of friction or resistance in the system; it is a positive number. Similarly, the nature of springs is that \(b\) is a positive number. The relative values of \(r\) and \(b\) determine the motion of the system.
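The quadratic-formula eigenvalues can be checked against `eigen()` applied to the matrix of the first-order system. (Base R; the values \(r = 1\), \(b = 6\) are illustrative choices, and `as.complex()` handles the case \(r^2 - 4b < 0\).)

```
r <- 1; b <- 6
# Eigenvalues from the quadratic formula
lambda <- (-r + c(1, -1) * sqrt(as.complex(r^2 - 4 * b))) / 2
# Matrix form of the first-order system: d/dt [v, y] = [[-r, -b], [1, 0]] [v, y]
M <- cbind(rbind(-r, 1), rbind(-b, 0))
Re(lambda)              # both real parts equal -r/2
eigen(M)$values         # same pair of complex eigenvalues
```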

Suppose the stiffness of the spring is much larger than the friction, so that \(r^2 < 4b\). This being the case, \(\sqrt{\strut r^2 - 4 b}/2\) will be an imaginary number, which we will write \(i\omega\) with \(\omega = \sqrt{\strut 4b - r^2}/2\). Altogether, the eigenvalues will be \(\lambda = -\frac{r}{2} \pm {i \omega}\). The solution will be \[y(t) = m_1 e^{\lambda_1 t} + m_2 e^{\lambda_2 t} \\ = m_1 e^{-\frac{r}{2}t + i \omega t} + m_2 e^{-\frac{r}{2}t - i\omega t} \\ = m_1 e^{-r t/2} e^{i\omega t} + m_2 e^{-r t/2} e^{-i \omega t} \\ = e^{-r t/2}\underbrace{\left[m_1 e^{i \omega t} + m_2 e^{-i\omega t}\right]}_{\text{sinusoid}(\omega t)}\] Result: an exponentially decaying sinusoid.

To graph this function, we need to choose appropriate numerical values for \(r\) and \(b\). Let’s set \(r=1\). For \(r^2 < 4b\), we must then have \(b > \frac{1}{4}\); we will choose \(b = 6\), which meets this criterion. Figure 46.4 shows the solution to the differential equation:

```
traj <- integrateODE(dv ~ -r*v - b*y, dy ~ v,
                     v = 10, y = 0, r = 1, b = 6,
                     bounds(t = 0:20))
## Solution containing functions v(t), y(t).
traj_plot(y(t) ~ t, traj)
```

This is the situation with a swinging door. You shove it to swing open, after which it oscillates with a decreasing amplitude.

In contrast, suppose the spring is weak compared to the damping, such that \(4b < r^2\). Now \(\sqrt{\strut r^2 - 4b}\) is a positive number, not imaginary. What’s more, since \(b\) is positive, \(\sqrt{\strut r^2 - 4 b} < r\). This means that both eigenvalues are negative. We will illustrate the situation with \(r=1, b=0.1\):

```
traj2 <- integrateODE(dv ~ -r*v - b*y, dy ~ v,
                      v = 10, y = 0, r = 1, b = 0.1,
                      bounds(t = 0:20))
## Solution containing functions v(t), y(t).
traj_plot(y(t) ~ t, traj2) %>%
  gf_lims(y = c(0, NA))
```

The situation in Figure 46.5 is the sort of behavior one expects when giving a shove to an exit door in theater or stadium. The shove causes the door to swing open, after which it slowly returns to the closed position. That gives plenty of time for the people following you to get to the door before it closes.

Finally, consider the case where \(r^2 - 4 b = 0\), a balance between resistance and springiness. In this case, both eigenvalues are \(\lambda = -r/2\).
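A quick check with `eigen()` confirms the repeated eigenvalue for these values of \(r\) and \(b\):

```
r <- 1; b <- 0.25                       # critical damping: r^2 - 4b = 0
M <- cbind(rbind(-r, 1), rbind(-b, 0))  # matrix of the first-order system
eigen(M)$values                         # both eigenvalues are -r/2 = -0.5
```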

```
traj3 <- integrateODE(dv ~ -r*v - b*y, dy ~ v,
                      v = 10, y = 0, r = 1, b = 0.25,
                      bounds(t = 0:20))
## Solution containing functions v(t), y(t).
traj_plot(y(t) ~ t, traj3) %>%
  gf_lims(y = c(0, NA))
```

This is a situation called **critically damped**. The door swings open, then closes as fast as it can without any oscillation.

Consider the second-order linear differential equation \[\partial_{tt} y + 2\, \partial_t y - 3\, y = 0\ .\] Is this system stable?

For this system, \(r=2\) and \(b = - 3\), so the eigenvalues are

\[\lambda = \left(-2 \pm \sqrt{\strut 4 + 12}\right)/2 = -1 \pm \sqrt{\strut 16}/2 = -1 \pm 2\]

In other words, \(\lambda_1 = -3\) and \(\lambda_2 = +1\). This indicates that the system is a saddle: unstable in one direction and stable in the other.

To confirm our work, let’s use `eigen()` to find the eigenvalues of the matrix \(\left[\begin{array}{cc}-2 & 3\\1 & 0\end{array}\right]\):

```
M <- cbind(rbind(-2, 1), rbind(3, 0))
eigen(M)
## eigen() decomposition
## $values
## [1] -3  1
##
## $vectors
##            [,1]       [,2]
## [1,] -0.9486833 -0.7071068
## [2,]  0.3162278 -0.7071068
```

Although R is doing all the calculations for us, it is possible to write the directions of the eigenvectors in terms of the eigenvalues alone:

\[\vec{\Lambda_1} = \left[\begin{array}{c}\lambda_1\\1\end{array}\right]\ \ \text{and}\ \ \vec{\Lambda_2} = \left[\begin{array}{c}\lambda_2\\1\end{array}\right]\]

For the system with \(\lambda_1 = -3\) and \(\lambda_2 = 1\), you can confirm that the eigenvectors calculated with this formula point in the same directions as the eigenvectors reported by `eigen()`.
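Here is one way to do that confirmation in base R, checking the defining property \(M\vec{\Lambda} = \lambda \vec{\Lambda}\) for each eigenvalue of the matrix from the example:

```
M <- cbind(rbind(-2, 1), rbind(3, 0))  # the matrix from the example
ev <- eigen(M)
for (lam in ev$values) {
  Lam <- c(lam, 1)                     # eigenvector from the formula [lambda, 1]
  # If Lam really is an eigenvector, M %*% Lam equals lam * Lam
  stopifnot(max(abs(M %*% Lam - lam * Lam)) < 1e-8)
}
```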

Let’s return to the car-following control system introduced in Chapter 45. Recall that \(x\) was defined to be the distance between two cars and \(x_0\) the distance to be maintained. In terms of \(y = x - x_0\), the system was \[\partial_{tt} y = - b y\ .\] You can see that this system *has no damping*; \(y(t)\) will be a sinusoidal oscillation. The ride will be more pleasant, however, if the oscillations can be damped out. To accomplish this, we should add a new term to the second-order differential equation, a damping term, to give

\[\partial_{tt} y = -a\, \partial_t y- b\, y\ .\]

We should set the parameters \(a\) and \(b\) to make the real part of the eigenvalues negative. Only then will we have designed a workable control system.

## 46.6 Exercises

#### Exercise 46.01

**Friction** is an inevitable feature of real-world spring-mass systems. Without friction the spring-mass force-balance differential equation is \(m\partial_{tt} x = -k\, x\). How does friction fit in?

For a mass moving at velocity \(\partial_t{x}\), the friction force will be in the direction *opposed* to the velocity and, when velocity is zero, friction goes away. Following our general calculus idea of approximating with a simple straight-line function, we get a force \(\text{friction} = -r \partial_t{x}\). Adding in the friction force gives a new force-balance equation which has a famous name in physics: the “damped harmonic oscillator.”

\[m \partial_{tt}{x} = -r \partial_t{x} - k x\]

Note that all of the coefficients \(m, r\), and \(k\) are positive.

Since we’ve gotten in the habit of using \(a\) and \(b\) on the right side of the equation, we will re-write the force-balance equation as \[\partial_{tt}{x} = a \partial_t{x} + b x\] where \(a = -r/m\) and \(b = -k/m\).

As the name “damped harmonic oscillator” suggests, we expect that the solution to the force-balance equation will be a “damped” oscillation, that is an oscillation that decreases in amplitude over time as friction draws energy out of the system (and dissipates it as heat). But how fast and in what form will the amplitude decrease?

**Part A** Suppose that friction is strong, that is, \(|a|\) is big. More specifically, suppose \(a^2 > 4 |b|\). What will be true about \(\sqrt{\strut a^2 + 4b}\)?

- It will be purely “imaginary”.
- It will be purely “real”.
- It will be *complex*, that is, with a non-zero real part and a non-zero imaginary part.
- There is no way to tell for sure.

**Part B** When \(a^2 > 4|b|\), can either of the eigenvalues be positive?

- No
- Yes, one eigenvalue can be positive.
- Both eigenvalues *must* be positive.
- Depends on the specific values of \(a\) and \(b\).

When *friction* dominates (that is, large \(|a|\)) the eigenvalues are both real and negative. This means there is no oscillation and the motion decays to \(x = 0\).

**Part C** Suppose that friction is weak, that is \(a^2 < 4|b|\). What will be true about \(\sqrt{\strut a^2 + 4b}\)?

- It will be purely “imaginary”.
- It will be purely “real”.
- It will be *complex*, that is, with a non-zero real part and a non-zero imaginary part.
- There is no way to tell for sure.

Suppose that we define \(\omega \equiv \sqrt{\strut 4|b| - a^2}/2\) and \(k = a/2\). (Recall that for \(a\) to describe friction, we must have \(a < 0\).) The eigenvalues will be of the form \({k + i\omega}\) and a solution to the differential equation will be \(e^{(k + i\omega)t} = e^{kt} e^{ i \omega t}\).

**Part D** What will \(e^{kt} e^{i \omega t}\) be, given that \(k < 0\)?

- An exponentially decaying sinusoid
- An exponentially growing sinusoid
- An ordinary sinusoid.

#### Exercise 46.02

Section 46.1 gives the equations for the motion of a cannonball and shows how to integrate them numerically with `integrateODE()`.

Your task is to set the initial conditions for velocity so that the ball is fired at 250 meters per second at an angle \(\theta\) from the horizontal. These initial conditions will be \(v_0 = 250 \sin\left(\strut \pi \theta/180\right)\) and \(u_0 = 250 \cos\left(\strut \pi \theta/180\right)\), where \(\theta\) has units of degrees.

Once you have the numerical integration working, find the angle \(\theta\) that maximizes the horizontal distance travelled by the cannonball.

#### Exercise 46.03

Most any mathematics textbook devotes a considerable amount of space to **deriving** formulas. This is well and good. But in practical work, there is considerable room for error even if you already know the formula.

It is a good professional practice to have at least two ways to perform a calculation so that you can confirm you are doing it properly. In this exercise, we give you formulas for the eigenvalues and eigenvectors of an abcd matrix. And, of course, there is the `eigen()` function that should give the same results as the formulas.

Your task is to use both `eigen()` and the formulas directly to confirm that the two calculations give the same results. The formulas have symbols; replace these with numbers of your choice to do the calculations.

You might start with numbers that are simple integers, then switch to numbers that are more or less random.

The formula for the eigenvalues of an abcd matrix \[\left[\begin{array}{cc}a & b\\c & d\end{array}\right]\] is

\[\lambda_{1,2} = \frac{1}{2}\left[{\large\strut}(a+d) \pm \sqrt{\strut(a-d)^2 + 4bc}\right]\ .\]

Once you know the eigenvalues, the eigenvectors can be calculated this way:

\[\vec{\Lambda_1} = \left[\begin{array}{c}\lambda_1 - d\\c\end{array}\right]\ \ \text{and}\ \ \vec{\Lambda_2} = \left[\begin{array}{c}\lambda_2 - d\\c\end{array}\right]\]

- Calculate the eigenvalues and eigenvectors of

\[\left[\begin{array}{rr}1 & 0 \\-2 & 6\end{array}\right]\]

- Calculate the eigenvalues and eigenvectors of

\[\left[\begin{array}{rr}8 & 6 \\4 & -1\end{array}\right]\]

- Calculate the eigenvalues and eigenvectors of

\[\left[\begin{array}{rr}-9 & 2\\8 & -6\end{array}\right]\]

#### Exercise 46.04

An eigenvector \(\vec{\Lambda}\) of a matrix \(\mathit{M}\) always has this property:

\[\mathit{M} \vec{\Lambda} = \lambda \vec{\Lambda}\]

This merely says that if you multiply a matrix by one of its eigenvectors, the result is a vector that points in the same direction as the eigenvector but may have a different length or reversed orientation.

Your task:

1. Construct a numerical abcd matrix with whatever values you like and, using `eigen()`, calculate its eigenvalues and eigenvectors.
2. Multiply the matrix by one of the eigenvectors to create a new vector.
3. Confirm that the vector in (2) is proportional to the eigenvector. When two vectors are proportional, dividing one by the other component-wise will create a vector with every element the same. (Note: if both vectors have a zero in the same component, the result of the division will be `NaN` for that component. In contrast, if one of the vectors has a zero component but the other does not, you will get a result of 0 or `Inf` for that component.)
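Here is a minimal sketch of the three steps, using one of the matrices from the previous exercise as a stand-in for a matrix of your own choosing:

```r
# Sketch: verify that M %*% Lambda is proportional to Lambda.
M <- matrix(c(8, 4, 6, -1), nrow = 2)  # the matrix [[8, 6], [4, -1]]
e <- eigen(M)                          # step (1): eigenvalues and eigenvectors
Lambda1 <- e$vectors[, 1]              # first eigenvector
new_vec <- M %*% Lambda1               # step (2): matrix times eigenvector
new_vec / Lambda1                      # step (3): every component equals lambda_1
```

The component-wise ratio reproduces the first eigenvalue in each slot, confirming proportionality.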

#### Exercise 46.05

You’ve seen that sometimes the two eigenvalues of an abcd matrix are **complex numbers**, that is, have a nonzero imaginary part. You may also have noticed that such eigenvalues are closely related to one another: they have the same real part, and the imaginary part of one is the negative of the imaginary part of the other. Such pairs of numbers are called **complex conjugates**.

Whenever a matrix has a complex eigenvalue, it will also have the complex conjugate of that eigenvalue; complex eigenvalues *always* come in pairs.

For a 2x2 matrix, there will be two eigenvalues. If one is complex, the other will be the complex conjugate.

For a 3x3 matrix, there will be three eigenvalues. Since complex eigenvalues come in conjugate pairs, either one or all three of the eigenvalues will be real.

Here’s a way to generate a random \(n\times n\) matrix:

```
n <- 3
M <- matrix(rnorm(n^2), nrow=n)
eigen(M)$values
## [1] -1.2701434  1.2291019 -0.6767853
```


Your task: for \(n=3\), run the previous commands over and over, 20 times altogether. Report what fraction of the time the random matrix generated has 3 real eigenvalues. (Note: Something like `1.548950+0.00000i` is a real eigenvalue, because the imaginary part is zero.)
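If you would rather not count by hand, the repetitions can be automated; this is a sketch, and the `set.seed()` value is arbitrary (chosen only so a re-run gives the same tally):

```r
# Sketch: run the 20 repetitions and tally the all-real outcomes.
set.seed(101)
n <- 3
all_real <- replicate(20, {
  M <- matrix(rnorm(n^2), nrow = n)
  vals <- eigen(M)$values
  all(abs(Im(vals)) < 1e-10)           # TRUE when all 3 eigenvalues are real
})
mean(all_real)                         # fraction of trials with 3 real eigenvalues
```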

#### Exercise 46.06

Let’s plot \(e^{i\omega t}\) over the domain \(0 < t < 10\) using \(\omega = 2\). We need to be a little careful, since our plotting functions are not arranged to display complex numbers. But there is an easy workaround: plot the “real” and “imaginary” parts separately.

```
f <- makeFun(exp(1i * omega * t) ~ t, omega = 2)
slice_plot(Re(f(t)) ~ t,
           bounds(t=c(0, 10)), color = "magenta") %>%
  slice_plot(Im(f(t)) ~ t, color="brown")
```

**Part A** Which part of \(e^{i \omega t}\) is the cosine function?

- The “real” part
- The “imaginary” part
- The negative of the “imaginary” part
- The negative of the “real” part

Now let’s consider \(e^{(k + i\omega) t}\), where the input to the exponential function has a real part \(k\) and an imaginary part \(\omega\). As before, the output of the exponential will be a complex number, so we will plot the real and imaginary parts separately.

```
g <- makeFun(exp((k + 1i * omega) * t) ~ t, omega = 2, k=-1)
slice_plot(Re(g(t)) ~ t,
           bounds(t=c(0, 10)), color = "orange3", npts=500) %>%
  slice_plot(Im(g(t)) ~ t, color="dodgerblue", npts=500)
```


**Part B** At what time \(t\) does the sine-like function complete one full oscillation?

- At about \(t=1.6\)
- At about \(t=3.14\)
- At about \(t=4.7\)
- At about \(t=6.28\)

**Part C** Find a value for \(\omega\) that will produce one full oscillation every two time units. Graph it out to make sure that you have five full oscillations over the \(0 < t < 10\) domain. What is that \(\omega\)?

- \(\omega = 1\)
- \(\omega = \pi/2\)
- \(\omega = \pi\)
- \(\omega = 2 \pi\)

**Part D** Keeping \(\omega\) at the value that produces five complete cycles over \(0 < t < 10\), find a value for \(k\) such that the amplitude of the oscillation at \(t=10\) will be half that of the amplitude at \(t=0\). What is \(k\)?

- \(k \approx -0.70\)
- \(k \approx -0.07\)
- \(k \approx 0.07\)
- \(k \approx 0.70\)

**Part E** Set \(\omega\) at the value that produces 20 complete cycles over \(0 < t < 10\) and \(k\) at the value such that the amplitude of the oscillation at \(t=10\) will be *twice* that of the amplitude at \(t=0\). What are \(\omega\) and \(k\)?

- \(\omega = \pi,\ k \approx 0.35\)
- \(\omega = 2 \pi,\ k \approx 0.35\)
- \(\omega = 4 \pi,\ k \approx 0.070\)
- \(\omega = 6 \pi,\ k \approx 0.7\)
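After you have made your choice, the arithmetic behind Part E can be checked numerically; this is a sketch of that check:

```r
# Sketch: check the frequency and amplitude arithmetic for Part E.
omega <- 2 * pi * 20 / 10          # 20 full cycles over 10 time units: 4*pi
k <- log(2) / 10                   # e^(10 k) = 2  =>  k = ln(2)/10 ~ 0.069
Mod(exp((k + 1i * omega) * 10))    # amplitude at t = 10: twice that at t = 0
```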

#### Exercise 46.07

At the very beginning of this 1987 video, the climber, Catherine Destivelle, is dangling from a rope. Make an estimate of how long the rope is (specifically the length of the rope from the climber’s harness to the bolts at the top of the route), based on the eigenvalues of the linear dynamics of swinging.

The standard differential-equation model for the changing angle \(\theta\) of a pendulum is \[L \partial_{tt} \theta = - g \theta\] where \(L\) is the length of the pendulum (in meters) and \(g\) is the acceleration due to gravity (9.8 m/s^2 on the Earth’s surface).

**Part A** What are the eigenvalues of the standard model for swinging?

- \(\lambda_1 = L\) and \(\lambda_2 = g\)
- \(\lambda_{1,2} = \pm \frac{g}{L}\)
- \(\lambda_{1,2} = \pm \sqrt{g/L}\)
- \(\lambda_{1,2} = \pm ⅈ \sqrt{g/L}\)

**Part B** Units for \(L\) and \(g\) were given in the paragraph above. Using these units, what are the corresponding units of the eigenvalues?

- Since ⅈ is “imaginary”, it makes no sense to talk about units.
- seconds/meter
- 1/seconds
- meters per second

**Part C** From the video, estimate the period of the oscillation. Which of these is closest to the duration of a full back-and-forth swing?

- 2 seconds
- 3 seconds
- 4 seconds
- 6 seconds

**Part D** It is conventional to give separate names to the components of \(\lambda\). The “real” part is often denoted \(k\), the ⅈ part is called \(\omega\). The period (in seconds, say) is the quantity \(P\) such that \(\omega P = 2 \pi\). What are the units of \(\omega\) for \(P\) in seconds?

- seconds
- 1/seconds
- meters per second
- It has no units.

Put together these three facts to find a formula for \(L\). Remember: \(\omega = \sqrt{g/L}\), \(\omega \times 6\,\text{sec} = 2 \pi\), and \(g = 9.8\) m/s².

**Part E** What’s a good estimate of \(L\)?

- 5 meters
- 10 meters
- 15 meters
- 20 meters
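For comparison with your own estimate, here is a sketch of the arithmetic, assuming a 6-second period:

```r
# Sketch: rope-length estimate from the observed period.
g <- 9.8                 # m/s^2
P <- 6                   # period in seconds, estimated from the video
omega <- 2 * pi / P      # rad/s, from omega * P = 2*pi
L <- g / omega^2         # from omega = sqrt(g/L)
L                        # roughly 9 meters
```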

PS. Glad to report that in 2022, Catherine turned 62 years old.

#### Exercise 46.08

Consider the second-order linear differential equation

\[\partial_{tt}x = a\ \partial_t x + b\ x\] where \(a\) and \(b\) are scalars.

The only state variable listed explicitly in the second-order equation is \(x\).

1. Define a new state variable \(v \equiv \partial_t x\) and use it to re-write the second-order equation as **two first-order** differential equations: \[\partial_t v = ????\\\partial_t x = ????\]
2. Confirm that the matrix equivalent to the pair of first-order equations is \[\left[\begin{array}{cc}a & b\\1 & 0\end{array}\right]\ .\]

With only 2 parameters, \(a\) and \(b\), can the matrix in (2) create the full range of behaviors seen in the 4-parameter abcd matrix

\[\left[\begin{array}{cc}a & b\\c & d\end{array}\right]\ ?\]

To answer this question, we will return to the app that displays flows:

You can see in the ab-selector graph on the left annotations for several types of generic behavior:

- Saddle (unstable along one eigenvector, stable along the other)
- Sources (unstable along both eigenvectors)
- Sinks (stable along both eigenvectors)
- Stable oscillation (spirals toward the fixed point)
- Unstable oscillation (spirals away from the fixed point)

In the ab-selector graph, these are encoded by color and shading. The blue parabola marks the ab-region of oscillation; stability is indicated using dark shading. Saddles are at the top of the ab-graph. Sources and sinks are the gray arches rising up from the edges of the graph.

The shading is set by calculating the eigenvalues for each \((a, b)\) pair.

**What sorts of trajectories, if any, does the [ab10] matrix fail to produce compared to the possibilities provided by the abcd matrix?**

- Underneath the graph are two numeric-input widgets, one to set the \(c\) parameter, the other to set \(d\). By default, these are set to display an [ab10] matrix, but you can change them.

Play with the \(c\)- and \(d\)-inputs to see how that changes the flow field (right graph) and the shaded regions. Make small changes to \(c\) and \(d\) at first to get a feeling for what the controls do to the display. Then you can explore large changes.

**Does any new type of behavior appear when \(c\) and \(d\) are moved from their [ab10] settings?**

#### Exercise 46.09

**Passive electrical circuits**

Second-order differential equations are used to model simple electrical circuits. In a step outside of calculus (meaning: you won’t be examined on it) it is worth pointing out the correspondence between concepts of motion (acceleration, velocity, position) and electrical circuits (voltage, current, charge). There are three classical idealized passive components of circuits:

- capacitor
- resistor
- inductor

In every case, we will be interested in the *voltage* across the two ends of the component. And we will think about the dynamics of the circuit in terms of electrical *charge* which we will denote \(x\).

- For a capacitor, the voltage is proportional to the *charge*: \(x/C\), where \(C\) is the “size” of the capacitor.
- For a resistor, the voltage is proportional to the flow of charge, that is, the *current*: \(R\, \dot{x}\), where \(R\) is the amount of resistance, basically the “size” of the resistor.
- For an inductor, the voltage is proportional to the change in the flow of charge: \(L\, \ddot{x}\), where \(L\) is the inductance.

Only a capacitor is capable of holding a voltage on its own. The other circuit elements can carry a voltage only when they are part of a *circuit*. We will explore a simple circuit.

To prime the circuit, we will connect the two dots at the bottom of the circuit with a battery. This will charge up the capacitor in much the same way as we “charge up” a spring by pulling on it. Next remove the battery and get ready to observe the motion. Complete the circuit by closing the switch between the two dots. Doing so establishes the circuit, analogous to setting up the dynamics of the system. The initial condition is the amount of charge \(x\) on the capacitor and, at the instant the switch is closed, no flow of current, giving \(\dot{x} = 0\).

The “force-balance” is the requirement that the sum of the voltages across the circuit elements be zero. This amounts to

\[L \partial_{tt} {x} = -R\, \partial_t{x} - \frac{1}{C} x \]

- Consider a circuit with inductance \(L=1\), resistance \(R=3\), and capacitance \(C = 1\). What will be the eigenvalues of the dynamics?
- **Will the fixed point at \(x=0\) be stable or not?**
- **What will be the frequency of the solution \(x(t)\)?**
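One way to check your answers is to put the force-balance equation into first-order form with state \((\dot{x}, x)\) and let `eigen()` do the work; this is a sketch, not the assigned solution:

```r
# Sketch: eigenvalues of the circuit dynamics with state (xdot, x),
# from xdotdot = -(R/L) xdot - (1/(L*C)) x with L = 1, R = 3, C = 1.
L <- 1; R <- 3; C <- 1
M <- matrix(c(-R/L, 1, -1/(L*C), 0), nrow = 2)  # rows: (-R/L, -1/(LC)) and (1, 0)
eigen(M)$values    # both real and negative: stable fixed point, no oscillation
```

Since both eigenvalues come out real, the solution decays without oscillating, which answers the frequency question as well.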

It is remarkable that the same \(m\) appears both in Newton’s Second Law and in the description of the force of gravity. There was no mathematical theory for this until Albert Einstein (1879-1955).↩︎

Another bit of physics that is still not included in the differential equation: it holds only until the object hits the ground, at which point the force of gravity is counteracted by the force of the ground on the object.↩︎