By Michal Jagodzinski - April 3rd, 2023
Following graduation, I have been interested in learning a variety of different topics, one being optimal control. It's a pretty complex topic that I've been struggling to understand for a while, especially when it comes to applying the theoretical concepts I've been reading about. At this point, however, I feel ready to cover some simple ideas in optimal control, the first being the direct shooting method.
The direct shooting method is a way to solve a boundary value problem by converting it into an initial value problem. The solution to the problem is equal to the root(s) of the following equation:

$$F(a) = y(t_1; a) - y_1 = 0$$
Where $a$ is the initial condition and $y_1$ is the target value. If the notation is unclear, you can think of $y(t_1; a)$ as the value of $y$ at time $t_1$ given initial condition $a$. See the Wikipedia article on the shooting method if you'd like a more rigorous explanation.
There are various numerical methods to find roots; in this case I went with Newton's method. This is an iterative method, starting with an initial guess of the root $x_0$. The method converges upon a root with the following iterative step:

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$
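As a minimal sketch of this update in Julia (the function, derivative, and starting guess here are arbitrary choices for illustration, not part of the projectile example):

```julia
# Scalar Newton's method: xₙ₊₁ = xₙ - f(xₙ)/f′(xₙ)
f(x) = x^2 - 2    # an arbitrary function with root √2
f′(x) = 2x        # its derivative

x = 1.0           # initial guess x₀
for _ in 1:6
    global x = x - f(x)/f′(x)
end
# x ≈ 1.41421356…, i.e. √2
```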
This works in the case of a single variable, but often we want to find the roots of functions of multiple variables. For this, we can use the following form of Newton's method:

$$\mathbf{x}_{n+1} = \mathbf{x}_n - J_F(\mathbf{x}_n)^{-1} F(\mathbf{x}_n)$$
Where $J_F$ is the Jacobian matrix of $F$, which is defined as:

$$J_F = \begin{bmatrix} \dfrac{\partial F_1}{\partial x_1} & \cdots & \dfrac{\partial F_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial F_m}{\partial x_1} & \cdots & \dfrac{\partial F_m}{\partial x_n} \end{bmatrix}$$
Now, with each iteration, $\mathbf{x}_n$ comes closer to the vector of initial values that solves the boundary value problem.
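Sketching the multivariable update in Julia, with a hand-written Jacobian to keep the snippet dependency-free (the full example below computes it automatically with Zygote); the system and its root at $(1, 2)$ are arbitrary choices for illustration:

```julia
# An arbitrary two-variable system with a root at (1, 2)
F(u) = [u[1]^2 + u[2] - 3,
        u[1] + u[2]^2 - 5]
# Hand-coded Jacobian matrix J_F(u)
J(u) = [2u[1]  1.0;
        1.0    2u[2]]

u = [2.0, 3.0]                  # initial guess x₀
for _ in 1:10
    global u = u - J(u) \ F(u)  # Newton step: xₙ₊₁ = xₙ - J⁻¹F(xₙ)
end
# u ≈ [1.0, 2.0]
```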
To demonstrate the direct shooting method, I will show how to use it to calculate the required initial velocity of a projectile to hit an arbitrary target. Assuming no drag, with a starting position of $(0, 0)$, the kinematics for the projectile are:

$$x(t) = \dot{x}_0 t$$
$$y(t) = \dot{y}_0 t - \frac{1}{2} g t^2$$
Let's say we have a target at $(200, 0)$. Now we need to find the initial velocities $\dot{x}_0$ and $\dot{y}_0$ that are required to hit the target.
First we need to calculate the time of flight of the projectile. $\dot{x}_0$ does not affect the time of flight, but $\dot{y}_0$ does. Setting $y(t) = 0$ and taking the nonzero root gives:

$$t_{impact} = \frac{2 \dot{y}_0}{g}$$
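We can verify this expression numerically (the constants are recomputed here so the snippet stands alone):

```julia
g = 9.80665          # gravitational acceleration, same value as used below
ẏ₀ = 15.0            # initial vertical velocity guess
t_impact = 2*ẏ₀/g    # time of flight

# plugging back into y(t) = ẏ₀t - ½gt² returns the projectile to the ground
y_impact = ẏ₀*t_impact - 0.5*g*t_impact^2
# y_impact ≈ 0
```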
Next we can define our $F$ vector based on the time of flight. This is a multivariable case, which is why I mentioned the multivariable form of Newton's method earlier:

$$F(u) = \begin{bmatrix} x(t_{impact}) - 200 \\ y(t_{impact}) - 0 \end{bmatrix} = \begin{bmatrix} \dot{x}_0 \dfrac{2 \dot{y}_0}{g} - 200 \\ 0 \end{bmatrix}$$
For this example, $F_2$ simplifies to zero: no matter where the projectile hits the ground, its height off the ground is zero, and the target's $y$ position is also zero. Now we can implement this in code.
Importing required libraries:
using Plots, Zygote, LinearAlgebra
Defining constants and initial conditions:
const g = 9.80665
const ẋ₀ = 15.0
const ẏ₀ = 15.0
const target = [200, 0]
Defining the $F$ vector:
F(u) = [
    (u[1] * 2*u[2]/g) - target[1],  # x position at impact time, minus target x
    0                               # y at impact is always zero, as is the target's y
]
Defining the equations of motion:
xt(t, ẋ) = ẋ * t               # horizontal position
yt(t, ẏ) = ẏ * t - 0.5*g*t^2   # vertical position
Plotting initial trajectory and iteration loop:
p1 = scatter(
    [target[1]], [target[2]],
    label="Target",
    dpi=300,
    xlabel="x (m)",
    ylabel="y (m)",
    aspect_ratio=:equal,
    margin=5Plots.PlotMeasures.mm
)
impact_time = 2*ẏ₀/g
times = 0.0:0.01:impact_time
plot!(p1, xt.(times, ẋ₀), yt.(times, ẏ₀), label="Trajectory 1")
un = [ẋ₀, ẏ₀]
errors = [abs(target[1] - xt(impact_time, ẋ₀))]
for i in 1:6
    F_mat = F(un)
    DF = jacobian(F, un)[1]          # Jacobian of F via Zygote's automatic differentiation
    global un = un - pinv(DF)*F_mat  # Newton step; pinv because DF's second row is all zeros
    global impact_time = 2*un[2]/g
    global times = 0.0:0.01:impact_time
    plot!(xt.(times, un[1]), yt.(times, un[2]), label="Trajectory $(i+1)")
    push!(errors, abs(target[1] - xt(impact_time, un[1])))
end
p1
scatter(
    errors,
    yaxis=:log, minorgrid=true,
    label="", xlabel="Iteration", ylabel="Error",
    marker=:xcross
)
As can be seen, the direct shooting method quickly converges upon the initial velocity required to hit the target. For this example, the velocity is:
round.(un, digits=4)
2-element Vector{Float64}:
 31.3156
 31.3156
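As a closed-form cross-check: the iteration starts from a symmetric guess and takes symmetric steps, so $\dot{x}_0 = \dot{y}_0$ throughout, and requiring $\dot{x}_0 \cdot 2\dot{y}_0/g = 200$ then gives $\dot{y}_0 = \sqrt{100g}$. Plugging that back into the impact position:

```julia
g = 9.80665
impact_x(ẋ₀, ẏ₀) = ẋ₀ * 2*ẏ₀/g   # x position at the time of flight

v = sqrt(100*g)      # ≈ 31.3156, the symmetric solution ẋ₀ = ẏ₀
impact_x(v, v)       # ≈ 200.0, the target's x position
```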
In conclusion, using the direct shooting method and Newton's method allows us to numerically calculate the initial conditions required to solve some boundary value problems. The example I covered was pretty simple, and I plan to do some more complex problems in the future. Specifically, I want to cover the projectile example but with added drag, and solving the problem solely using the dynamics of the system, instead of relying on the kinematic equations.
Hope you enjoyed reading and learned something. I'll definitely be writing more about optimal control the more I learn about it, as I find it a very fascinating albeit hard field. Until next time.