
I am trying to find the differential equation which a smooth path $q:[a,b] \rightarrow \mathbb{R}^n$ subject to $|\dot{q}(t)| = 1$ (i.e. $q$ has unit speed) must satisfy in order to be a stationary point of $$ \int_a^b L(t,q(t), \dot{q}(t)) \ dt $$ for a given $L(t,q(t), \dot{q}(t))$. This question and answer here appear to be exactly what I need, but when I tried to apply their formula to an easy example, it didn't work. Did I make a mistake, or is their formula wrong? The example I tried to apply it to is below.

Following the notation of their question, suppose $F(x,y) = x^2+y^2$ and $g(x',y') = {x'}^2 + {y'}^2 -1 = 0$ and $x(0) = y(0) = 0$. So you are trying to maximize $$ \int_0^1 x(t)^2 + y(t)^2 \ dt \tag 1 $$ subject to $(x(t),y(t))$ having unit speed. Because unit speed from the origin gives $x(t)^2 + y(t)^2 \leq t^2$, $(1)$ is at most $\int_0^1 t^2 \ dt = 1/3$. On the other hand, this maximal value is obtainable by having $(x(t),y(t))$ move in a straight line at unit speed. However their equation $$ \frac{\partial F}{\partial x}-\frac{d}{dt}\frac{\partial F}{\partial x'}=\lambda(t)\left(\frac{\partial g}{\partial x}-\frac{d}{dt}\frac{\partial g}{\partial x'}\right) \tag 2 $$ yields $$2x = -\lambda(t) 2x''. $$ Their other equation for $y$ yields
$$ 2y = -\lambda(t) 2y''. $$ When $(x(t),y(t))$ moves in a straight line at unit speed, $x'' = y'' = 0$. Their equations imply $(x,y) = 0$. So $(x,y)$ maximizing $(1)$ by moving in a line is not detected by their equations.
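As a numerical sanity check on this example, the straight-line path at unit speed does attain the value $1/3$ (a small Python sketch; the `action` helper name is mine):

```python
import math

def action(path, n=200000):
    """Midpoint-rule approximation of the integral of x(t)^2 + y(t)^2
    over [0, 1] along the given path t -> (x(t), y(t))."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        x, y = path(t)
        total += (x * x + y * y) * h
    return total

theta = 0.7  # arbitrary fixed direction
line = lambda t: (t * math.cos(theta), t * math.sin(theta))
print(action(line))  # close to the upper bound 1/3
```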


I tried to rederive their formula by considering $$ \Phi(q) = \int_a^b L(t,q(t),\dot{q}(t)) + \lambda(t) g(q(t),\dot{q}(t)) \ dt $$ but got a different formula. For $\eta:[a,b] \rightarrow \mathbb{R}^n$ smooth with $\eta(a) = \eta(b) = 0$, one calculates $$ \begin{align} \frac{d}{d\epsilon} \Phi(q+\epsilon \eta) = \int_a^b &\eta(t) L_q(t,q(t)+\epsilon \eta(t), \dot{q}(t) + \epsilon \dot{\eta}(t)) \\ &+\dot{\eta}(t) L_{\dot{q}}(t,q(t)+\epsilon \eta(t), \dot{q}(t) + \epsilon \dot{\eta}(t)) \\ &+\lambda(t) \eta(t) g_q(q(t) + \epsilon \eta(t), \dot{q}(t) + \epsilon \dot{\eta}(t)) \\ &+\lambda(t) \dot{\eta}(t) g_{\dot{q}}(q(t)+\epsilon \eta(t),\dot{q}(t)+\epsilon \dot{\eta}(t) ) \ dt \end{align} $$ I don't fully understand why, but presumably if $q$ is a stationary point satisfying $g(q,\dot{q}) = 0$, setting $\epsilon = 0$ makes the previous expression equal to $0$: $$ \begin{align} 0 = \int_a^b &\eta(t) L_q(t,q(t), \dot{q}(t)) +\dot{\eta}(t) L_{\dot{q}}(t,q(t), \dot{q}(t)) \\ &+\lambda(t) \eta(t) g_q(q(t), \dot{q}(t)) +\lambda(t) \dot{\eta}(t) g_{\dot{q}}(q(t),\dot{q}(t)) \ dt \end{align} $$ Integrating the terms multiplied by $\dot{\eta}(t)$ by parts and using $\eta(a) = \eta(b) = 0$ yields $$ 0 = \int_a^b \eta(t)\Big[L_q(t,q(t),\dot{q}(t)) - \frac{d}{dt} L_{\dot{q}}(t,q(t), \dot{q}(t)) + \lambda(t) g_q(q(t),\dot{q}(t)) - \frac{d}{dt} \big(\lambda(t) g_{\dot{q}}(q(t),\dot{q}(t)) \big)\Big] dt $$ Since $\eta(t)$ was arbitrary, the term in brackets is $0$ by the fundamental lemma of the calculus of variations. So $$ L_q(t,q(t),\dot{q}(t)) - \frac{d}{dt} L_{\dot{q}}(t,q(t), \dot{q}(t)) = - \lambda(t) g_q(q(t),\dot{q}(t)) + \frac{d}{dt} \big(\lambda(t) g_{\dot{q}}(q(t),\dot{q}(t)) \big) $$ which is different from $(2)$ due to $\lambda(t)$ being differentiated.
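For what it's worth, this corrected equation does detect the straight line, with a $t$-dependent multiplier. A quick sympy sketch (the candidate $\lambda(t)=\lambda_0+t^2/2$ is found by inspection; the $y$-component works the same way with $\sin\theta$):

```python
import sympy as sp

t, lam0, theta = sp.symbols('t lambda_0 theta')
x = t * sp.cos(theta)      # straight line at unit speed
lam = lam0 + t**2 / 2      # candidate multiplier, found by inspection

# Corrected equation, x-component, with L = x^2 + y^2 and g = x'^2 + y'^2 - 1:
#   L_x - d/dt L_{x'} = -lambda * g_x + d/dt(lambda * g_{x'})
# Here L_{x'} = 0 and g_x = 0, so it reads  2 x = d/dt(lambda * 2 x').
lhs = 2 * x
rhs = sp.diff(lam * 2 * sp.diff(x, t), t)
print(sp.simplify(lhs - rhs))  # 0, so the line satisfies the corrected equation
```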

3 Answers


The problem is simpler in polar coordinates. The augmented Lagrangian, then, is given by $$ L=r^2+\lambda(\dot{r}^2+r^2\dot{\theta}^2-1). \tag{1} $$ The correct Euler-Lagrange equations are$^{(*)}$ \begin{align} \frac{\partial L}{\partial r}-\frac{d}{dt}\left(\frac{\partial L}{\partial\dot{r}}\right)=0 &\implies \left(1+\lambda\dot{\theta}^2\right)r-\dot{\lambda}\dot{r}-\lambda\ddot{r}=0, \tag{2} \\ \frac{\partial L}{\partial \theta}-\frac{d}{dt}\left(\frac{\partial L}{\partial\dot{\theta}}\right)=0 &\implies \frac{d}{dt}(\lambda r^2\dot{\theta})=0 \implies \lambda r^2\dot{\theta}=C, \tag{3} \\ \frac{\partial L}{\partial \lambda}=0 &\implies \dot{r}^2+r^2\dot{\theta}^2-1=0. \tag{4} \end{align} The initial condition $x(0)=y(0)=0$, or $r(0)=0$, implies $C=0$ in Eq. $(3)$. There are, then, three possibilities:

  1. $r(t)=0$: this is not consistent with Eq. $(4)$, which becomes $\dot{r}^2=1$;

  2. $\lambda(t)=0$: Eq. $(2)$ then implies $r(t)=0$, which we have seen is not consistent with Eq. $(4)$;

  3. $\dot{\theta}=0$: Eq. $(4)$ again becomes $\dot{r}^2-1=0$. Plugging the solution $r(t)=t$ (and $\dot{\theta}=0$) into Eq. $(2)$, we obtain an equation for $\lambda(t)$: $$ t-\dot{\lambda}=0 \implies \lambda(t)=\lambda_0+\frac{t^2}{2}. \tag{5} $$

In conclusion, the solution to the maximization problem is the one expected, $r(t)=t$ --- or, in Cartesian coordinates, $(x(t),y(t))=(t\cos\theta,t\sin\theta)$.


$^{(*)}$ Compare with Eq. $(2)$ in the question, which missed the time derivative of $\lambda(t)$.
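One can check Eqs. $(2)$--$(4)$ directly for this solution; a small sympy sketch (variable names mine):

```python
import sympy as sp

t, lam0, theta0 = sp.symbols('t lambda_0 theta_0')
r = t                      # candidate solution r(t) = t
theta = theta0             # constant angle, so theta' = 0
lam = lam0 + t**2 / 2      # multiplier from Eq. (5)

rd, thd = sp.diff(r, t), sp.diff(theta, t)
# Eq. (2): (1 + lambda theta'^2) r - lambda' r' - lambda r'' = 0
eq2 = (1 + lam * thd**2) * r - sp.diff(lam, t) * rd - lam * sp.diff(r, t, 2)
# Eq. (3): d/dt(lambda r^2 theta') = 0
eq3 = sp.diff(lam * r**2 * thd, t)
# Eq. (4): r'^2 + r^2 theta'^2 - 1 = 0
eq4 = rd**2 + r**2 * thd**2 - 1

print(sp.simplify(eq2), sp.simplify(eq3), sp.simplify(eq4))  # 0 0 0
```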

Gonçalo
  • 9,312

I think the problem is that you’re not actually solving a variational problem. You don’t mention the boundary conditions (and neither does the post you link to), and usually you don’t need to worry about boundary conditions in such problems, but in this case the boundary conditions of the global optimum that you’re considering (namely, that $(x(1),y(1))$ is a given unit vector) are such that the constraint $x'^2+y'^2=1$ only allows a single solution (namely $(x(t),y(t))=t(x(1),y(1))$). Thus, there are no other functions to compare against, so we shouldn’t expect a variational approach to work.

For other boundary conditions, with $|(x(1),y(1))|\lt1$, the optimal solution isn’t differentiable (it consists of a line segment that goes beyond $(x(1),y(1))$ and then a line segment that returns to $(x(1),y(1))$), so you can’t find it with a variational approach, either.
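To illustrate this numerically, a Python sketch of my own construction compares the kinked overshoot-and-return path with one smooth admissible competitor, the unit-speed circular arc of length $1$ joining the same endpoints; the kinked path gives a larger value (this illustrates, but of course does not prove, the claim):

```python
import math

def overshoot_value(b):
    """Integral of x^2 + y^2 along the kinked unit-speed path on [0, 1]
    that runs out along the x-axis to s = (1 + b)/2, turns around,
    and ends at (b, 0)."""
    s = (1 + b) / 2
    # integral of t^2 on [0, s] plus integral of (2s - t)^2 on [s, 1]
    return s**3 / 3 + (s**3 - b**3) / 3

def arc_value(b, n=100000):
    """Same integral along the unit-speed circular arc of length 1 from
    (0, 0) to (b, 0).  Its radius R solves 2 R sin(1/(2R)) = b, found
    here by bisection (bracket chosen for b near 1/2)."""
    lo, hi = 0.2, 0.35
    for _ in range(100):
        R = (lo + hi) / 2
        if 2 * R * math.sin(1 / (2 * R)) < b:
            lo = R
        else:
            hi = R
    R = (lo + hi) / 2
    a = 1 / (2 * R)  # initial heading; the arc turns at constant rate 1/R
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        x = R * (math.sin(a) - math.sin(a - t / R))
        y = R * (math.cos(a - t / R) - math.cos(a))
        total += (x * x + y * y) * h
    return total

print(overshoot_value(0.5), arc_value(0.5))  # the first is larger
```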

joriki
  • 238,052
  • If calculus of variations requires both endpoints of the path to be specified, that is not good, because the endpoint is not known in the problem I am trying to apply calculus of variations to, only the start. In physics, the principle of least action does not specify the endpoint, right? But doesn't that use the same calculus of variations formula? – Stephen Harrison Mar 05 '24 at 08:38
  • @StephenHarrison: The derivation of the Euler–Lagrange equation uses integration by parts to move the derivative from the variation to the function; the boundary term vanishes due to the fixed boundary conditions. As I said, usually you don’t need to worry about that – you just assume arbitrary boundary conditions, and you get a differential equation; every solution of the differential equation is a solution of some instance of the variational problem with the particular boundary conditions that it satisfies. – joriki Mar 05 '24 at 08:47
  • @StephenHarrison: You might find this answer and this physics.SE post it links to helpful. – joriki Mar 05 '24 at 08:52
  • @StephenHarrison: As regards “that is not good, because the endpoint is not known”: If the endpoint isn’t known, you can optimize the functional for a variable endpoint and then optimize the resulting function of the endpoint. But that’s not what one typically does in physics. For instance, any straight or refracted light path is the solution of a variational problem; it’s not a problem that you don’t know where the light is going; you just get all the possible light paths from the variational approach. – joriki Mar 05 '24 at 08:56
  • As regards “the principle of least action does not specify the endpoint”: It does. See e.g. Wikipedia: “The path taken by the system between times $t_1$ and $t_2$ and configurations $q_1$ and $q_2$ is the one for which the action is stationary to first order.” It’s a local stationarity principle, not a global minimization principle. You don’t find the trajectory by assuming variable endpoints and minimizing over them; the action is stationary (usually minimal) for given endpoints, and the endpoints are determined in some other way. – joriki Mar 05 '24 at 09:05
  • In the answer you linked, it says boundary conditions are required if $L$ depends on $\dot{q}$. For me this is not the case. $L$ is of the form $L(q) \geq 0$ and I am trying to maximize/minimize $\int_0^a L(q) dt$ subject to $q(0) = 0$ and $|\dot{q}| = 1$. – Stephen Harrison Mar 05 '24 at 09:05
  • @StephenHarrison: Well, they don’t have a constraint and a Lagrange multiplier. If you look at the derivation of the method of Lagrange multipliers, the constraint plays a similar role to the functional; you need integration by parts to move the derivative from the variation to the function as long as either the functional or the constraint contains $\dot q$. – joriki Mar 05 '24 at 09:09
  • @StephenHarrison: In physics you often have initial conditions rather than boundary conditions. But for a problem involving $\dot q$, the initial conditions need to be specified for both $q$ and $\dot q$. Once you obtain a differential equation from a variational principle, you’re free to use it with either initial conditions or boundary conditions; you either specify two boundary conditions for $q$ or two initial conditions for $q$ and $\dot q$. That doesn’t imply that the variational principle works with initial conditions. – joriki Mar 05 '24 at 09:16
  • Do you see what is wrong with my derivation at the end, and why I got a different formula? – Stephen Harrison Mar 05 '24 at 09:43
  • @StephenHarrison: You solved a different problem there. This is the solution of the problem where the integral of $g$ is constrained, not $g$ itself. In that case, $\lambda$ doesn’t actually depend on $t$ – a single constraint requires a single Lagrange multiplier, a continuous constraint requires a continuous Lagrange multiplier. – joriki Mar 05 '24 at 11:05
  • General rule of thumb: The Lagrange multiplier should be the same type of object as the constraint. $|\dot{q}(t)|-1$ depends on $t$, so $\lambda$ should as well. – whpowell96 Mar 05 '24 at 15:55

Too long for a comment.

The problem in question is non-holonomic, so we should expect difficulties on the way to its solution. Considering

$$ L = x^2+y^2+\lambda(x'^2+y'^2-1) $$

with $x,y,\lambda$ functions of $t$, the Euler–Lagrange equations give us

$$ \cases{ \lambda' x'+\lambda x''-x=0\\ \lambda' y'+\lambda y''-y=0 } $$

now differentiating the restriction and including it as a third equation we have

$$ \cases{ \lambda' x'+\lambda x''-x=0\\ \lambda' y'+\lambda y''-y=0\\ x'x''+y'y'' = 0 } $$

so we can solve for $x'',y'',\lambda '$ and, writing $v_x = x'$, $v_y = y'$, arrive at the equivalent first-order system

$$ \cases{ x' = v_x\\ y' = v_y\\ v_x' = \frac{v_y(v_y x- v_x y)}{\lambda(v_x^2+v_y^2)}\\ v_y' = -\frac{v_x(v_y x- v_x y)}{\lambda(v_x^2+v_y^2)}\\ \lambda' = \frac{v_x x+v_y y}{v_x^2+v_y^2} } $$

but $v_x^2+v_y^2=1$ so the movement equations are

$$ \cases{ x' = v_x\\ y' = v_y\\ v_x' = \frac{v_y(v_y x- v_x y)}{\lambda}\\ v_y' = \frac{v_x(v_x y-v_y x)}{\lambda}\\ \lambda' = v_x x+v_y y } $$

so the system needs five conditions. If we choose initial conditions, the problem has solutions $x_{\lambda}(t),y_{\lambda}(t)$, but if we instead choose boundary conditions, the solutions will in general not satisfy the restriction $x'^2+y'^2=1$. A Mathematica script follows in which we can choose initial or boundary conditions to verify this.

L = x[t]^2 + y[t]^2 + lambda[t] (x'[t]^2 + y'[t]^2 - 1);
equ1 = D[Grad[L, {x'[t], y'[t]}], t] - Grad[L, {x[t], y[t]}];
equ2 = D[x'[t]^2 + y'[t]^2 - 1, t];
equs = Join[equ1, {equ2}];
sol = Solve[equs == 0, {x''[t], y''[t], lambda'[t]}][[1]] /. {x'[t] -> vx[t], y'[t] -> vy[t]};

tmax = 2; e = 10^-7;
(* BOUNDARY CONDITIONS *)
condb = {x[0] == 0, y[0] == 1, x[tmax] == 1, y[tmax] == 1, lambda[0] == 1};
(* INITIAL CONDITIONS *)
condi = {x[0] == 0, y[0] == 1, x'[0] == 1/Sqrt[2], y'[0] == 1/Sqrt[2], lambda[0] == 1};
cinits = condi;

odes = Thread[{vx'[t], vy'[t], lambda'[t]} == ({x''[t], y''[t], lambda'[t]} /. sol) (vx[t]^2 + vy[t]^2)] // Simplify;
odes0 = Join[odes, {x'[t] == vx[t], y'[t] == vy[t]}, cinits];
solode = NDSolve[odes0, {x, y, vx, vy, lambda}, {t, 0, tmax}][[1]];
ParametricPlot[Evaluate[{x[t], y[t]} /. solode], {t, 0, tmax}]
Plot[{1 - e, 1 + e, Evaluate[{vx[t]^2 + vy[t]^2} /. solode]}, {t, 0, tmax}, PlotStyle -> {Red, Red, Blue}]

By choosing different values of lambda[0] we obtain different solutions to the initial-condition as well as the boundary-condition problem. The calculated value of $v_x^2+v_y^2$ is shown in blue in the second plot, compared with $1\pm e$ in red.

[Image: parametric plot of the computed path $(x(t),y(t))$]

[Image: $v_x^2+v_y^2$ (blue) compared with the band $1\pm e$ (red)]
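As an independent cross-check of the same first-order system, a Python/SciPy sketch (variable names mine; initial conditions as in condi above) integrates the equations and confirms that the speed constraint is preserved along the flow:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s):
    # The first-order system above, with v_x^2 + v_y^2 = 1 used to simplify.
    x, y, vx, vy, lam = s
    cross = vy * x - vx * y
    return [vx, vy, vy * cross / lam, -vx * cross / lam, vx * x + vy * y]

# Initial conditions matching condi: x = 0, y = 1, v = (1, 1)/sqrt(2), lambda = 1.
s0 = [0.0, 1.0, 1 / np.sqrt(2), 1 / np.sqrt(2), 1.0]
sol = solve_ivp(rhs, (0, 2), s0, rtol=1e-10, atol=1e-12)

# v_x^2 + v_y^2 is an exact invariant of the system, so the numerical
# drift from 1 should stay tiny over the whole interval.
drift = np.max(np.abs(sol.y[2] ** 2 + sol.y[3] ** 2 - 1))
print(drift)
```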

Cesareo
  • 33,252