
I have just learned about the RK4 method for approximating differential equations. Is it possible for this to be applied to the three-body problem?

2 Answers


Yes, it is possible. The three-body problem is a system of ODEs, so any numerical method for ODEs can be applied to it.

What you should be asking about is the quality of the numerical approximation. For one thing, over long time spans the method's per-step errors accumulate and distort the physical picture.

In the short-term picture, many instances of the 3-body problem have phases in which two bodies come very close together. On those time segments the system is very stiff, requiring a very small step size. With a fixed-step method, a very small step size needed on some short segment forces that same small step size everywhere. This implies a rapid build-up of floating-point errors, limiting the length of a useful integration interval.
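Since the question asks specifically about RK4: a fixed-step classical RK4 integrator is only a few lines. The sketch below is illustrative (the function name `rk4_step` and the scalar test equation are my own choices, not part of the problem); a three-body right-hand side would slot into `f` in exactly the same way, which is where the fixed-step limitation above bites.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical RK4 step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Demo on a scalar test problem y' = -y, y(0) = 1 (exact solution e^{-t});
# every step uses the same fixed h, no matter how the solution behaves.
f = lambda t, y: -y
y = np.array([1.0])
h = 0.01
for n in range(100):  # integrate from t = 0 to t = 1
    y = rk4_step(f, n * h, y, h)
```

Because `h` is fixed, a near-collision anywhere in the trajectory would force this `h` to be tiny for the whole integration.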

For the preservation of physical properties you could use symplectic methods, which preserve first integrals such as energy, momentum and angular momentum to a higher order than the order of the method itself.
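To illustrate the symplectic idea, here is a sketch under simplifying assumptions of my own (a single test body around a fixed unit-mass center with G = 1, not the full three-body system): the order-2 velocity-Verlet/leapfrog scheme keeps the energy error bounded over many orbits instead of drifting.

```python
import numpy as np

def accel(r):
    # Acceleration toward a fixed unit-mass center (G = 1); a stand-in
    # for the pairwise gravity terms of the full problem.
    return -r / np.linalg.norm(r)**3

def leapfrog(r, v, h, steps):
    """Velocity-Verlet (symplectic, order 2): kick-drift-kick."""
    a = accel(r)
    for _ in range(steps):
        v = v + 0.5 * h * a
        r = r + h * v
        a = accel(r)
        v = v + 0.5 * h * a
    return r, v

def energy(r, v):
    return 0.5 * v @ v - 1.0 / np.linalg.norm(r)

r0, v0 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # circular orbit
r, v = leapfrog(r0, v0, 0.01, 10000)                 # ~16 orbits
```

After thousands of steps the energy oscillates around its initial value of -0.5 with an amplitude of order h², whereas a non-symplectic method of the same order would show a secular drift.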

Alternatively, implicit methods are more robust in the stiff phases, requiring much less step-size reduction. Variable-step methods make the step-size selection adaptive to the current state of the system, so small steps are used only where needed.
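The adaptive step-size selection is easy to observe by inspecting the steps `solve_ivp` actually takes. In this toy setup of my own (a highly eccentric two-body orbit, G·M = 1, starting near aphelion), the accepted steps shrink sharply near the close approach and relax elsewhere:

```python
from scipy.integrate import solve_ivp
import numpy as np

def kepler(t, x):
    # Two-body problem, G*M = 1; state x = [position (2), velocity (2)].
    r, v = x[:2], x[2:]
    return np.concatenate([v, -r / np.linalg.norm(r)**3])

x0 = np.array([1.0, 0.0, 0.0, 0.3])   # eccentric orbit, e ~ 0.9
sol = solve_ivp(kepler, (0, 5), x0, rtol=1e-9, atol=1e-9)

steps = np.diff(sol.t)                # the step sizes actually accepted
```

The ratio `steps.max() / steps.min()` is large: near perihelion the solver is forced into tiny steps, while on the slow outer part of the orbit it takes much longer ones — exactly the behavior a fixed-step RK4 cannot provide.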

Going a step further, operator-splitting methods, such as exponential integrators, also allow slightly longer step sizes.

Lutz Lehmann

Yes, you can use RK4 on the three-body problem. Other methods work too.

Following is Python demo code using different ODE solvers available in the SciPy library. The RK45, RK23, DOP853, Radau, BDF and LSODA algorithms give similar results after setting the absolute and relative tolerances to 1e-8.

Note that past a certain point in time, no matter the error tolerance settings and the solver used, the solutions will diverge due to the chaotic nature of the system.
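This sensitivity is easy to demonstrate directly. The sketch below (a planar, equal-mass, G = 1 configuration with made-up initial conditions, simpler than the demo that follows) integrates two copies of the same system whose initial states differ by 1e-6 in a single coordinate; even with tight tolerances, the gap between the final states grows well beyond the initial perturbation.

```python
from scipy.integrate import solve_ivp
import numpy as np

def rhs(t, x):
    # Planar three equal-mass bodies, G = 1; state = [r1, r2, r3, v1, v2, v3].
    r = x[:6].reshape(3, 2)
    a = np.zeros_like(r)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = r[i] - r[j]
                a[i] -= d / np.linalg.norm(d)**3
    return np.concatenate([x[6:], a.ravel()])

x0 = np.array([-1.0, 0.1, 1.0, -0.1, 0.0, 0.5,   # positions
                0.0, -0.3, 0.0, 0.3, 0.0, 0.0])  # velocities
x0b = x0.copy()
x0b[0] += 1e-6                       # perturb one coordinate slightly

sa = solve_ivp(rhs, (0, 5), x0, rtol=1e-9, atol=1e-9)
sb = solve_ivp(rhs, (0, 5), x0b, rtol=1e-9, atol=1e-9)
gap = np.linalg.norm(sa.y[:, -1] - sb.y[:, -1])
```

No tolerance setting cures this: the amplification of the initial difference is a property of the system, not of the solver.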

The initial values are taken from this web site that treats this problem in detail but uses an iterative method rather than sophisticated solvers.

from scipy.integrate import solve_ivp
import numpy as np
from numpy.linalg import norm
import matplotlib.pyplot as plt

plt.style.use('dark_background')

def dxdt(t, x, G, m1, m2, m3):
    r1 = x[0:3]
    r2 = x[3:6]
    r3 = x[6:9]    
    r1dotdot = -G * m2 * (r1 - r2) / norm(r1 - r2)**3 - G * m3 * (r1 - r3) / norm(r1 - r3)**3
    r2dotdot = -G * m3 * (r2 - r3) / norm(r2 - r3)**3 - G * m1 * (r2 - r1) / norm(r2 - r1)**3 
    r3dotdot = -G * m1 * (r3 - r1) / norm(r3 - r1)**3 - G * m2 * (r3 - r2) / norm(r3 - r2)**3
    f = np.full(18, np.nan)
    f[0:9] = x[9:18] # r1dot, r2dot, r3dot
    f[9:12] = r1dotdot
    f[12:15] = r2dotdot
    f[15:18] = r3dotdot
    return f

r1 = np.empty(3)
v1 = np.empty(3)
r1[0] = -10
r1[1] = 10
r1[2] = -11
v1[0] = -3
v1[1] = 0
v1[2] = 0

r2 = np.empty(3)
v2 = np.empty(3)
r2[0] = 0 
r2[1] = 0
r2[2] = 0 
v2[0] = 0
v2[1] = 0
v2[2] = 0

r3 = np.empty(3)
v3 = np.empty(3)
r3[0] = 10
r3[1] = 10
r3[2] = 12
v3[0] = 3
v3[1] = 0
v3[2] = 0

x0 = np.empty(18)
x0[0:3] = r1
x0[3:6] = r2
x0[6:9] = r3
x0[9:12] = v1
x0[12:15] = v2
x0[15:18] = v3

t0 = 0 
tf = 200
delta_t = 0.001
t_eval = np.linspace(t0, tf, int((tf - t0) / delta_t))
m1 = 10
m2 = 20
m3 = 30
G = 9.8

methods = ['RK45', 'RK23', 'DOP853', 'Radau', 'BDF', 'LSODA']

plots = {}

for method in methods:
    print(f'Computing using {method}...')
    result = solve_ivp(dxdt, (t0, tf), x0, args=(G, m1, m2, m3), t_eval=t_eval, method=method, atol=1E-8, rtol=1E-8)

    print(f'Preparing plot for {method}...')   
    fig = plt.figure(figsize=(8, 8))
    ax = fig.add_subplot(projection='3d')
    ax.patch.set_facecolor('black')

    y = result.y
    ax.plot(y[0], y[1], y[2], '^', color='red', lw=0.05, markersize=0.01, alpha=0.5)
    ax.plot(y[3], y[4], y[5], '^', color='white', lw=0.05, markersize=0.01, alpha=0.5)
    ax.plot(y[6], y[7], y[8], '^', color='blue', lw=0.05, markersize=0.01, alpha=0.5)
    ax.xaxis.set_pane_color((0.0, 0.0, 0.0, 1.0))
    ax.yaxis.set_pane_color((0.0, 0.0, 0.0, 1.0))
    ax.set_title(f'Three Body Problem\nSolver = {method}')

    plots[method] = fig

while True:
    print('Select Method to be displayed:')
    for i, method in enumerate(methods):
        print(f'{i+1} {method}')
    print('Any other input to exit.')

    try:
        selected = int(input('Method number: '))
    except ValueError:  # non-numeric input exits the loop
        break

    if 1 <= selected <= len(methods):
        selected -= 1
        plots[methods[selected]].show()
    else:
        break
Tarik
  • For practical value you need to explicitly set the error tolerances, as the defaults need not be the same and could be inappropriate for the task. Between time 100 and 120 I guess that a near-collision happened, the outcome of which is very sensitive to the error accumulated so far. After the colliding bodies leave at widely different angles between the integrators, nothing of the paths after that will even be close. In real astronomy you get such uncertainty with long-period comets like Halley's. The period is predictable, but not the angle of the long axis over multiple periods. – Lutz Lehmann Nov 29 '23 at 13:23
  • @LutzLehmann Indeed, after setting the absolute and relative tolerances to 1E-8, the results are now similar. – Tarik Nov 30 '23 at 05:26
  • If you set t0,t1,t2 = 0, 140, 200 and integrate over tspan=[t0,t2], but construct t_eval=arange(t1,t2,dt), then you get less of the identical parts and can clearly see the differences in the end phase that are still there, especially in the second, white curve. – Lutz Lehmann Nov 30 '23 at 08:26
  • @LutzLehmann Sure. Since it's a chaotic system, no matter what, the solutions will diverge at some point. I will update my answer to stress this point. – Tarik Nov 30 '23 at 11:35