
I have a requirement to calculate the ground footprint of an aerial camera. The photos are TerraPhotos; the TerraPhoto user guide provides the camera position and plane orientation in the .IML file. Additionally, I have the camera calibration file.

In the TerraPhoto guide, the yaw, pitch, and roll of the aircraft are defined as follows:

  • yaw (heading): measured clockwise from North
  • roll: positive if the left wing is up
  • pitch: positive if the nose of the aircraft is up
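As a quick sanity check on the heading convention (my own illustration, not from the guide): heading measured clockwise from North means the ground direction the nose points at, in (east, north) map components, is (sin h, cos h):

import math

# Heading is measured clockwise from North, so the nose direction in
# (east, north) map components is (sin(h), cos(h)).
def heading_to_direction(heading_deg):
    h = math.radians(heading_deg)
    return (math.sin(h), math.cos(h))

print(heading_to_direction(0))   # (0.0, 1.0) -> due North
print(heading_to_direction(90))  # (1.0, 0.0) -> due East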

The camera calibration details are as follows:

[TerraPhoto calibration]
Version=20050513
Description= Nikon D800E BW 50mm
TimeOffset= 0.0000
Exposure= 0.00000
LeverArm= 0.0000 0.0000 0.0000
AntennaToCameraOffset= 0.0000 0.0000 0.0000
AttitudeCorrections(HRP)= -0.4546 0.7553 -34.7538
PlateSize= 7630.00000000 4912.00000000
ImageSize= 7630 4912
Margin= 0
FiducialRadius= 40
FiducialMarks= 0
Orientation= BOTTOM
PrincipalPoint(XoYoZo)= -77.40000000 112.80000000 -10476.54389508
LensModel=Balanced
LensK0=0.000000E+000
LensK1=0.000000E+000
LensK2=0.000000E+000
LensP1=0.000000E+000
LensP2=0.000000E+000
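From these values one can already estimate the fields of view. This is my own sketch, under the assumption that PrincipalPoint Zo (-10476.54...) is the principal distance (focal length) expressed in pixels and that ImageSize is in pixels:

import math

f_px = 10476.54389508    # assumed: |Zo| = focal length in pixels
width_px, height_px = 7630, 4912

# full-angle FoV from a = 2*atan(d / (2*f))
FOVh = 2 * math.atan(width_px / (2 * f_px))
FOVv = 2 * math.atan(height_px / (2 * f_px))

print(math.degrees(FOVh))  # ~40.0 deg
print(math.degrees(FOVv))  # ~26.4 deg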

Here, I see that the AttitudeCorrections for the camera are given. Hence, I believe this is the orientation of the aerial camera with respect to the local frame (i.e. the aircraft).

With respect to a given aerial photo, I have the following details, which I obtained from the .IML file (please check page 344 for more info).

Image=SLR2_443_20150326_144759_C_B_3489_DUBLIN_AREA_2KM2_FL_300_2232888
Time=402972.957799
Xyz=316440.819 234424.606 312.938
Hrp=-113.33234 2.03435 -1.87426
  • Image is the name of the image
  • Xyz is the camera position (easting, northing, elevation)
  • Hrp is the aircraft heading (yaw), roll, and pitch
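A minimal sketch for reading such a record; the field names Image/Xyz/Hrp are taken from the snippet above, but the surrounding .IML layout is my assumption:

def parse_iml_record(lines):
    # parse the 'Key=value' lines of one image record
    record = {}
    for line in lines:
        key, _, value = line.partition('=')
        if key == 'Image':
            record['image'] = value.strip()
        elif key in ('Xyz', 'Hrp'):
            record[key.lower()] = tuple(float(v) for v in value.split())
    return record

record = parse_iml_record([
    'Image=SLR2_443_20150326_144759_C_B_3489_DUBLIN_AREA_2KM2_FL_300_2232888',
    'Xyz=316440.819 234424.606 312.938',
    'Hrp=-113.33234 2.03435 -1.87426',
])
print(record['xyz'], record['hrp'])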

With this specific information at hand, I am attempting to calculate the ground coordinates of the image footprint. I intend to use the horizontal and vertical FoV.

I've been attempting this for some time but am still unable to estimate the geocoordinates properly. I also tried the pinhole model: I obtain results around the area of interest, but they do not match the actual geolocations.

I intend to use either the pinhole model or the horizontal and vertical fields of view (FoV) to calculate my geocoordinates.

A pointer in the right direction would be appreciated.

My code for the FoV calculation is provided below.

import math
import numpy as np

def createRollMatrix(yaw, pitch, roll):
    '''
    Builds the rotation matrix in the Eigen format,
    ported directly from the Eigen base code to Python.
    '''
    # convert degrees to radians
    yaw = np.radians(yaw)
    pitch = np.radians(pitch)
    roll = np.radians(roll)

    su = np.sin(roll)
    cu = np.cos(roll)
    sv = np.sin(pitch)
    cv = np.cos(pitch)
    sw = np.sin(yaw)
    cw = np.cos(yaw)

    rotation_matrix = np.zeros((3, 3))

    rotation_matrix[0][0] = cv*cw
    rotation_matrix[0][1] = su*sv*cw - cu*sw
    rotation_matrix[0][2] = su*sw + cu*sv*cw

    rotation_matrix[1][0] = cv*sw
    rotation_matrix[1][1] = cu*cw + su*sv*sw
    rotation_matrix[1][2] = cu*sv*sw - su*cw

    rotation_matrix[2][0] = -sv
    rotation_matrix[2][1] = su*cv
    rotation_matrix[2][2] = cu*cv

    return rotation_matrix

# camera misalignment angles (AttitudeCorrections(HRP) from the calibration file)
yaw = -0.4546
roll = 0.7553
pitch = -34.7538

# aircraft's yaw, pitch, roll (Hrp from the .IML record)
yaw1 = -113.33234
pitch1 = -1.87426
roll1 = 2.03435

R = createRollMatrix(yaw, pitch, roll)
R2 = createRollMatrix(yaw1, pitch1, roll1)

Corrected_R = R2.dot(R)

# extract the combined Euler angles (atan2 keeps the correct quadrant,
# which plain atan loses for headings outside +/-90 degrees)
yaw = math.atan2(Corrected_R[1][0], Corrected_R[0][0])
roll = math.atan2(Corrected_R[2][1], Corrected_R[2][2])
pitch = math.atan2(-Corrected_R[2][0],
                   math.sqrt(Corrected_R[2][1]**2 + Corrected_R[2][2]**2))
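One quick way to check the composed matrix (my own sanity check, not from TerraPhoto) is to verify that it is orthonormal and that rebuilding it from the extracted angles reproduces it:

# a valid rotation matrix is orthonormal with determinant +1
assert np.allclose(Corrected_R.dot(Corrected_R.T), np.eye(3), atol=1e-9)
assert np.isclose(np.linalg.det(Corrected_R), 1.0)

# round-trip: the extracted angles should rebuild the same matrix
R_check = createRollMatrix(np.degrees(yaw), np.degrees(pitch), np.degrees(roll))
print(np.allclose(R_check, Corrected_R, atol=1e-6))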

Subsequently, I use the following code to calculate the geocoordinates.

import math
import numpy as np

# pip install vector3d
from vector3d.vector import Vector

class CameraCalculator:
    """Porting of CameraCalculator.java

    This code is a 1-to-1 Python port of the Java code
    https://github.com/zelenmi6/thesis/blob/master/src/geometry/CameraCalculator.java
    referred to in
    https://stackoverflow.com/questions/38099915/calculating-coordinates-of-an-oblique-aerial-image
    The only parts not ported are those explicitly abandoned, or not used
    at all by the main call to the getBoundingPolygon method.

    by: milan zelenka
    https://github.com/zelenmi6
    https://stackoverflow.com/users/6528363/milan-zelenka

    example:
        c = CameraCalculator()
        bbox = c.getBoundingPolygon(
            math.radians(62),
            math.radians(84),
            117.1,
            math.radians(0),
            math.radians(33.6),
            math.radians(39.1))
        for i, p in enumerate(bbox):
            print("point:", i, '-', p.x, p.y, p.z)
    """

def __init__(self):
    pass

def __del__(self):
    pass

@staticmethod
def getBoundingPolygon(FOVh, FOVv, altitude, roll, pitch, heading):
    '''Get corners of the polygon captured by the camera on the ground. 
    The calculations are performed in the axes origin (0, 0, altitude)
    and the points are not yet translated to camera's X-Y coordinates.
    Parameters:
        FOVh (float): Horizontal field of view in radians
        FOVv (float): Vertical field of view in radians
        altitude (float): Altitude of the camera in meters
        heading (float): Heading of the camera (z axis) in radians
        roll (float): Roll of the camera (x axis) in radians
        pitch (float): Pitch of the camera (y axis) in radians
    Returns:
        vector3d.vector.Vector: Array with 4 points defining a polygon
    '''
    # import ipdb; ipdb.set_trace()
    ray11 = CameraCalculator.ray1(FOVh, FOVv)
    ray22 = CameraCalculator.ray2(FOVh, FOVv)
    ray33 = CameraCalculator.ray3(FOVh, FOVv)
    ray44 = CameraCalculator.ray4(FOVh, FOVv)

    rotatedVectors = CameraCalculator.rotateRays(
            ray11, ray22, ray33, ray44, roll, pitch, heading)

    # origin = Vector(0, 0, altitude)

    # FW ---- SLR1
    # origin = Vector(316645.779, 234643.179, altitude)

    # BW ===== SLR2
    origin = Vector(316440.819, 234424.606, altitude)
    # origin = Vector(316316, 234314, altitude)
    intersections = CameraCalculator.getRayGroundIntersections(rotatedVectors, origin)

    return intersections


# Ray-vectors defining the camera's field of view. FOVh and FOVv are interchangeable
# depending on the camera's orientation
@staticmethod
def ray1(FOVh, FOVv):
    '''
    Parameters:
        FOVh (float): Horizontal field of view in radians
        FOVv (float): Vertical field of view in radians
    Returns:
        vector3d.vector.Vector: normalised vector
    '''
    ray = Vector(math.tan(FOVv / 2), math.tan(FOVh / 2), -1)
    return ray.normalize()

@staticmethod
def ray2(FOVh, FOVv):
    '''
    Parameters:
        FOVh (float): Horizontal field of view in radians
        FOVv (float): Vertical field of view in radians
    Returns:
        vector3d.vector.Vector: normalised vector
    '''
    ray = Vector(math.tan(FOVv/2), -math.tan(FOVh/2), -1)
    return ray.normalize()

@staticmethod
def ray3(FOVh, FOVv):
    '''
    Parameters:
        FOVh (float): Horizontal field of view in radians
        FOVv (float): Vertical field of view in radians
    Returns:
        vector3d.vector.Vector: normalised vector
    '''
    ray = Vector(-math.tan(FOVv/2), -math.tan(FOVh/2), -1)
    return ray.normalize()

@staticmethod
def ray4(FOVh, FOVv):
    '''
    Parameters:
        FOVh (float): Horizontal field of view in radians
        FOVv (float): Vertical field of view in radians
    Returns:
        vector3d.vector.Vector: normalised vector
    '''
    ray = Vector(-math.tan(FOVv/2), math.tan(FOVh/2), -1)
    return ray.normalize()

@staticmethod
def rotateRays(ray1, ray2, ray3, ray4, roll, pitch, yaw):
    """Rotates the four ray-vectors around all 3 axes
    Parameters:
        ray1 (vector3d.vector.Vector): First ray-vector
        ray2 (vector3d.vector.Vector): Second ray-vector
        ray3 (vector3d.vector.Vector): Third ray-vector
        ray4 (vector3d.vector.Vector): Fourth ray-vector
        roll float: Roll rotation
        pitch float: Pitch rotation
        yaw float: Yaw rotation
    Returns:
        Returns new rotated ray-vectors
    """
    sinAlpha = math.sin(yaw) #sw OK
    sinBeta = math.sin(pitch) #sv OK
    sinGamma = math.sin(roll) #su OK
    cosAlpha = math.cos(yaw) #cw OK
    cosBeta = math.cos(pitch) #cv OK
    cosGamma = math.cos(roll) #cu OK
    # ZYX (yaw-pitch-roll) rotation matrix
    m00 = cosBeta * cosAlpha
    m01 = sinGamma * sinBeta * cosAlpha - cosGamma * sinAlpha
    m02 = sinGamma * sinAlpha + cosGamma * cosAlpha * sinBeta
    m10 = sinAlpha * cosBeta
    m11 = sinAlpha * sinBeta * sinGamma + cosAlpha * cosGamma
    m12 = sinAlpha * sinBeta * cosGamma - cosAlpha * sinGamma
    m20 = -sinBeta
    m21 = cosBeta * sinGamma
    m22 = cosBeta * cosGamma

    # Matrix rotationMatrix = new Matrix(new double[][]{{m00, m01, m02}, {m10, m11, m12}, {m20, m21, m22}})
    rotationMatrix = np.array([[m00, m01, m02], [m10, m11, m12], [m20, m21, m22]])

    # Matrix ray1Matrix = new Matrix(new double[][]{{ray1.x}, {ray1.y}, {ray1.z}})
    # Matrix ray2Matrix = new Matrix(new double[][]{{ray2.x}, {ray2.y}, {ray2.z}})
    # Matrix ray3Matrix = new Matrix(new double[][]{{ray3.x}, {ray3.y}, {ray3.z}})
    # Matrix ray4Matrix = new Matrix(new double[][]{{ray4.x}, {ray4.y}, {ray4.z}})
    ray1Matrix = np.array([[ray1.x], [ray1.y], [ray1.z]])
    ray2Matrix = np.array([[ray2.x], [ray2.y], [ray2.z]])
    ray3Matrix = np.array([[ray3.x], [ray3.y], [ray3.z]])
    ray4Matrix = np.array([[ray4.x], [ray4.y], [ray4.z]])

    res1 = rotationMatrix.dot(ray1Matrix)
    res2 = rotationMatrix.dot(ray2Matrix)
    res3 = rotationMatrix.dot(ray3Matrix)
    res4 = rotationMatrix.dot(ray4Matrix)

    rotatedRay1 = Vector(res1[0, 0], res1[1, 0], res1[2, 0])
    rotatedRay2 = Vector(res2[0, 0], res2[1, 0], res2[2, 0])
    rotatedRay3 = Vector(res3[0, 0], res3[1, 0], res3[2, 0])
    rotatedRay4 = Vector(res4[0, 0], res4[1, 0], res4[2, 0])
    rayArray = [rotatedRay1, rotatedRay2, rotatedRay3, rotatedRay4]

    return rayArray

@staticmethod
def getRayGroundIntersections(rays, origin):
    """
    Finds the intersections of the camera's ray-vectors 
    and the ground approximated by a horizontal plane
    Parameters:
        rays (vector3d.vector.Vector[]): Array of 4 ray-vectors
        origin (vector3d.vector.Vector): Position of the camera. The computations were developed
                                        assuming the camera was at the axes origin (0, 0, altitude), and the
                                        results are translated by the camera's real position afterwards.
    Returns:
        vector3d.vector.Vector
    """
    # Vector3d [] intersections = new Vector3d[rays.length];
    # for (int i = 0; i < rays.length; i ++) {
    #     intersections[i] = CameraCalculator.findRayGroundIntersection(rays[i], origin);
    # }
    # return intersections

    # 1to1 translation without python syntax optimisation
    intersections = []
    for i in range(len(rays)):
        intersections.append( CameraCalculator.findRayGroundIntersection(rays[i], origin) )
    return intersections

@staticmethod
def findRayGroundIntersection(ray, origin):
    """
    Finds a ray-vector's intersection with the ground approximated by a plane
    Parameters:
        ray (vector3d.vector.Vector): Ray-vector
        origin (vector3d.vector.Vector): Camera's position
    Returns:
        vector3d.vector.Vector
    """
    # Parametric form of an equation
    # P = origin + vector * t
    x = Vector(origin.x,ray.x)
    y = Vector(origin.y,ray.y)
    z = Vector(origin.z,ray.z)

    # Equation of the horizontal plane (ground)
    # -z = 0

    # Calculate t by substituting z
    t = - (z.x / z.y)

    # Substitute t in the original parametric equations to get points of intersection
    return Vector(x.x + x.y * t, y.x + y.y * t, z.x + z.y * t)
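To tie this together, here is a small driver sketch (my own, not part of the original port). The FoV values assume PrincipalPoint Zo is the focal length in pixels, and the ground elevation is a placeholder that should be replaced with the local terrain height:

f_px = 10476.54389508
FOVh = 2 * math.atan(7630 / (2 * f_px))   # ~40.0 deg
FOVv = 2 * math.atan(4912 / (2 * f_px))   # ~26.4 deg

ground_elevation = 0.0                    # placeholder assumption
altitude = 312.938 - ground_elevation     # camera height above the ground plane

c = CameraCalculator()
bbox = c.getBoundingPolygon(FOVh, FOVv, altitude,
                            math.radians(2.03435),     # roll
                            math.radians(-1.87426),    # pitch
                            math.radians(-113.33234))  # heading
for i, p in enumerate(bbox):
    print("corner", i, '-', p.x, p.y, p.z)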

CoolCK
2 Answers


Your assumption is correct: the formula

a = 2 arctan(d/2f)

assumes the principal point in the image center. With r=d/2 being the "radius" from the principal point to the sensor edge, you can read it as

a = arctan(r/f) + arctan(r/f)

still assuming that the radii to e.g. the left and right edges are equal. If they aren't, simply use:

a = arctan(r1/f) + arctan(r2/f)
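Applied to the calibration above (my own numbers, assuming the PrincipalPoint Xo offset of -77.4 is in pixels from the image centre and Zo is the focal length in pixels):

import math

f = 10476.54389508
r1 = 7630 / 2 - 77.4   # distance from principal point to one edge
r2 = 7630 / 2 + 77.4   # distance to the opposite edge

FOVh = math.atan(r1 / f) + math.atan(r2 / f)
print(math.degrees(FOVh))  # ~40.0 deg, barely different from 2*atan(d/(2*f))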

Ralf Kleberhoff

I am working on a similar project, and I wonder whether your rotation matrix is computed correctly. The Terrasolid/TerraPhoto documentation, page 347, gives the rotation matrices for specific yaw, pitch, roll values.

https://www.terrasolid.com/download/tphoto.pdf

It might help you. However, I have not yet found how they compute their rotation matrices. Do you have any updates?

CamilleM