2

I came across this post: How is this changing vertical perspective effect achieved?

Let's say those pictures have been taken by a drone, how would you calculate the positions (latitude, longitude), height and camera angle for where each photo will be shot?

gasparuff
  • Seems to me that it has nothing to do with photography. What you are looking for is probably in the field of image processing and is called image registration: https://en.wikipedia.org/wiki/Image_registration – Olivier Dec 29 '16 at 10:06
  • 2
    I think it's a fair photography question about shooting technique. Have added that tag. – youcantryreachingme Dec 30 '16 at 00:15

2 Answers

1

While calculation should be possible, I would recommend trial and error to begin with. Start with your drone filming as it ascends, then take screenshots of still frames from the video. Do your stitching and see whether the result looks good to you. If so, make a note of where in the video clip you took each screenshot. If your drone ascends slowly at a constant rate, you can then calculate its height at each frame: say it rose for 60 seconds and your first shot was taken at the 5-second mark, the next at 10 seconds, the next at 20 seconds, and the fourth at 60 seconds. Then your shots were taken at 5/60ths, 10/60ths, 20/60ths and 60/60ths of the way up.
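The timing arithmetic above can be sketched in a few lines, assuming a constant ascent rate and a known (here hypothetical) total height gained:

```python
# Heights at which each still frame was taken, assuming a constant
# ascent rate over the whole climb. The total height is hypothetical.
total_ascent_s = 60.0      # the drone rose for 60 seconds
total_height_m = 120.0     # assumed total height gained (not from the post)

frame_times_s = [5, 10, 20, 60]   # video marks where stills were grabbed

# Constant rate => height is proportional to elapsed time.
heights_m = [t / total_ascent_s * total_height_m for t in frame_times_s]
print(heights_m)  # [10.0, 20.0, 40.0, 120.0]
```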

  • Very good idea, but I would really like to find a calculation-based solution. It's not just about when the pictures should be taken, but also about what the drone's flight path should be in the first place. I'd say the flight path has to be a logarithmic curve. – gasparuff Dec 30 '16 at 17:09
  • @gasparuff you are introducing very many variables if you suggest that the drone was not moving vertically. – Euri Pinhollow Jan 29 '17 at 03:19
-1

An exact calculation is impossible because the scene is volumetric. If the scene were flat, you could take a map, match the map against the photograph, and derive the mapping function.

However, you can still get the map and try to match it manually, assuming the drone moved only vertically. Your process would be something like:

  • find the photographer's position on the map
  • pick two points on every horizontal line and find the matching pairs of points on the map (yes, this is still manual work)
  • knowing the real-world scale of an object gives you its distance
  • the drone's height, the drone's line of sight to an object (i.e. the distance to it) and the horizontal projection of that line of sight form a right triangle. You know the projection of the line of sight (from the map) and you know the distance from the drone to the object (from the ratio between the matched-point distances). Pythagoras' theorem then gives you the height
  • given the height you can calculate the camera angle as a trigonometric function: angle = arctan(height / distance), where distance is the horizontal projection; it increases from 0° toward 90° as the copter rises. Calculating the height for each horizontal line gives you a set of ⟨angle, height⟩ pairs which you can interpolate

This is only a hint at how you would do it; it is not very detailed.
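The height-and-angle steps above reduce to a right triangle. A minimal sketch, assuming you already have the map (horizontal) distance to an object and the straight-line drone-to-object distance recovered from the matched-point ratio (the numbers are hypothetical):

```python
import math

def height_and_angle(ground_distance_m, sight_distance_m):
    """Right triangle: the drone's height, the horizontal (map) projection
    of its line of sight, and the line of sight itself. Pythagoras gives
    the height; arctan gives the camera angle below the horizontal."""
    height = math.sqrt(sight_distance_m**2 - ground_distance_m**2)
    angle_deg = math.degrees(math.atan2(height, ground_distance_m))
    return height, angle_deg

# Hypothetical numbers: 400 m on the map, 500 m line-of-sight distance.
h, a = height_and_angle(400.0, 500.0)
print(round(h), round(a))  # 300 37
```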

Use of a fisheye lens makes this much harder to formalize. It is still workable if you keep your matching pairs close to the vertical centre line of the photograph.

Euri Pinhollow
  • While the exact calculation might be impossible, a very good approximation can be done. Unmanned vehicles have long been able to use computer vision to orient themselves and compute displacement; the rovers on Mars use computer vision. Bundle adjustment and image registration do exactly that: take a bunch of images and approximate the 3D scene and the parameters of each image (position, orientation, camera model, ...) – Olivier Apr 29 '17 at 07:00
  • @Olivier it's funny that you downvoted because my entire post is about calculating the said approximation. – Euri Pinhollow Apr 29 '17 at 08:58
  • I downvoted because your first sentence is wrong. The approximation I mentioned has not much in common with yours. The subsequent mathematics might be correct, but it's based on a hypothesis which rarely occurs in real life. The link I provided in my first comment (image registration) already provides information about finding view parameters from images. Please search for bundle adjustment – Olivier Apr 29 '17 at 14:14
  • @Olivier you have admitted that my first sentence is correct in the first line of your comment. Machine vision has one more degree of information than the problem discussed here: it is OK to assume that the copter did not move to the left or right, but it is not correct to assume that it did not move forward. With machine vision, the coordinates of the viewpoints are known (so, kinda one dimension only - from 1 to N) and it is needed to build an approximate 3D model. cont..... – Euri Pinhollow Apr 29 '17 at 15:33
  • @Olivier This case is the incomplete reverse: only the flat model is known (volumetric maps are not very common or exact) and it is needed to deduce 2 coordinates from a series of flat models. – Euri Pinhollow Apr 29 '17 at 15:35
  • @Olivier if you assume the copter moves only vertically, or get a 3D map, the problem becomes solvable. – Euri Pinhollow Apr 29 '17 at 15:38
  • My first line was humorous, as pretty much nothing is exact in image processing, especially computer vision. Again, I invite you to look for image registration techniques and bundle adjustment in particular: https://en.m.wikipedia.org/wiki/Bundle_adjustment – Olivier Apr 29 '17 at 16:09
  • Extract from the Wikipedia article: "Given a set of images depicting a number of 3D points from different viewpoints, bundle adjustment can be defined as the problem of simultaneously refining the 3D coordinates describing the scene geometry, the parameters of the relative motion, and the optical characteristics of the camera(s) employed to acquire the images, according to an optimality criterion involving the corresponding image projections of all points." – Olivier Apr 29 '17 at 16:10
  • If you or gasparuff (the OP) are interested, I will write a step-by-step guide to finding the parameters of different images. It will yield a very basic 3D model of the scene and the position and orientation of the camera for each image. Again, in my opinion this isn't really about photography... but it was one of my first projects and the reason why I bought a DSLR in the first place. – Olivier Apr 29 '17 at 16:18
  • @Olivier "The subsequent mathematics might be correct, but it's based on a hypothesis which rarely occurs in real life." Elaborate on that. – Euri Pinhollow Apr 29 '17 at 17:06
  • All the images taken by a flying UAV have about zero chance of being aligned. The projections you propose will probably have errors, and using them to align the images will probably fail. That's why bundle adjustment is a global optimization problem and takes computation time. In general, more than 20 points are needed between two consecutive images to align them correctly and solve for the camera parameters. I guarantee that reasoning on just one line and a scale measurement will produce messy results. – Olivier Apr 29 '17 at 17:18
  • @Olivier what the actual flop? Did you ever read the OP question? – Euri Pinhollow Apr 29 '17 at 17:56
  • It's going nowhere, maybe because of my limited English. I will write a detailed answer later. Just one last question: are you also assuming that the same focal length is used in all the shots? – Olivier Apr 29 '17 at 18:36
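The bundle-adjustment idea from the comments boils down to minimizing reprojection error. A toy sketch, not Olivier's promised guide: a synthetic flat scene is projected through a pinhole camera with a known pose, and the pose (pitch plus two translation components, matching the "copter rising" setup) is then recovered from the projections by nonlinear least squares. All numbers here are made up for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def rot_x(a):
    """Rotation about the x axis (enough for a camera pitching as it rises)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def project(points, pitch, ty, tz, f=1000.0):
    """Pinhole projection of world points into a camera pitched by `pitch`
    radians and translated by (0, ty, tz)."""
    cam = points @ rot_x(pitch).T + np.array([0.0, ty, tz])
    return f * cam[:, :2] / cam[:, 2:3]

# Synthetic flat scene (the "map"): points on the ground plane y = 0.
rng = np.random.default_rng(0)
world = np.column_stack([rng.uniform(-5, 5, 30),
                         np.zeros(30),
                         rng.uniform(20, 40, 30)])

true_params = np.array([0.4, 8.0, 2.0])      # pitch (rad), ty, tz
observed = project(world, *true_params)       # the "photograph"

def residuals(p):
    # Reprojection error: predicted minus observed image coordinates.
    return (project(world, *p) - observed).ravel()

# Start from a deliberately wrong guess and refine by minimizing
# reprojection error -- the core idea behind bundle adjustment.
fit = least_squares(residuals, x0=[0.0, 5.0, 0.0])
print(np.round(fit.x, 3))  # pitch, ty, tz recovered, close to [0.4, 8.0, 2.0]
```

A real bundle adjustment would additionally refine the 3D point coordinates and the camera intrinsics across many images, as the quoted Wikipedia extract describes; this sketch only shows the pose-refinement core.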