I am working on a 3D model reconstruction application with the Kinect sensor. I use the Microsoft SDK to get depth data, and I want to calculate the real-world location of each point. I have read several articles on the subject and implemented several depth-calibration methods, but none of them work in my application. The closest result came from the calibration at http://openkinect.org/wiki/Imaging_Information, but the point cloud I get in MeshLab is still not acceptable. I calculate the depth value with this method:
private double GetDistance(byte firstByte, byte secondByte)
{
    // Combine the two bytes of a depth pixel into a single value,
    // shifting out the low 3 bits of the first byte (the player index
    // in the SDK's depth-and-player-index format).
    double distance = (double)(firstByte >> 3 | secondByte << 5);
    return distance;
}
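For context, this is roughly how I feed the frame bytes into that method (a simplified sketch of my per-pixel loop; FrameToDepths and the hard-coded 640x480 size are illustrative, not my exact capture code):

// Illustrative only: convert a raw 640x480 depth frame, two bytes per
// pixel, into one depth value per pixel using GetDistance above.
private double[] FrameToDepths(byte[] depthFrame)
{
    const int width = 640, height = 480;
    double[] depths = new double[width * height];
    for (int i = 0; i < depths.Length; i++)
    {
        depths[i] = GetDistance(depthFrame[i * 2], depthFrame[i * 2 + 1]);
    }
    return depths;
}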
I then used the methods below to convert those depth values into real-world coordinates:
public static float RawDepthToMeters(int depthValue)
{
    // Inverse-disparity approximation from the openkinect.org page;
    // a raw value of 2047 marks an invalid measurement.
    if (depthValue < 2047)
    {
        return (float)(0.1 / ((double)depthValue * -0.0030711016 + 3.3309495161));
    }
    return 0.0f;
}
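To see what this conversion produces, I print it for a few sample raw values (a throwaway check; the sample values are arbitrary):

// Quick sanity check of the raw-to-meters curve, including the
// invalid-measurement code 2047.
foreach (int raw in new[] { 400, 600, 800, 1000, 2047 })
{
    Console.WriteLine("raw {0} -> {1} m", raw, RawDepthToMeters(raw));
}

The back-projection into world coordinates then uses the standard pinhole model with the intrinsics from the calibration page: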
public static Point3D DepthToWorld(int x, int y, int depthValue)
{
    // Depth-camera intrinsics from the calibration page: focal lengths
    // (fx, fy) and principal point (cx, cy), all in pixels.
    const double fx_d = 5.9421434211923247e+02;
    const double fy_d = 5.9104053696870778e+02;
    const double cx_d = 3.3930780975300314e+02;
    const double cy_d = 2.4273913761751615e+02;

    // Pinhole back-projection: shift by the principal point, then
    // scale by depth over focal length.
    double depth = RawDepthToMeters(depthValue);
    Point3D result = new Point3D((float)((x - cx_d) * depth / fx_d),
                                 (float)((y - cy_d) * depth / fy_d),
                                 (float)depth);
    return result;
}
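To inspect the result in MeshLab I dump the cloud to an ASCII PLY file, roughly like this (a minimal sketch; WritePointCloud is an illustrative name, it needs using System.Collections.Generic, System.Globalization, and System.IO, and skipping zero-depth points is just my choice):

// Illustrative: back-project every pixel and write the cloud as ASCII PLY.
private static void WritePointCloud(double[] depths, string path)
{
    const int width = 640, height = 480;
    List<Point3D> points = new List<Point3D>();
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            Point3D p = DepthToWorld(x, y, (int)depths[y * width + x]);
            if (p.Z > 0) points.Add(p); // drop invalid (zero-depth) pixels
        }
    }

    using (StreamWriter writer = new StreamWriter(path))
    {
        writer.WriteLine("ply");
        writer.WriteLine("format ascii 1.0");
        writer.WriteLine("element vertex " + points.Count);
        writer.WriteLine("property float x");
        writer.WriteLine("property float y");
        writer.WriteLine("property float z");
        writer.WriteLine("end_header");
        foreach (Point3D p in points)
        {
            // Invariant culture so decimals use '.' regardless of locale.
            writer.WriteLine(string.Format(CultureInfo.InvariantCulture,
                "{0} {1} {2}", p.X, p.Y, p.Z));
        }
    }
}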
These methods did not work well and the generated scene was not correct. I then tried the method below; the result is better than with the previous approach, but it is still not acceptable.
public static Point3D DepthToWorld(int x, int y, int depthValue)
{
    // Approximate mapping with a single scale factor instead of per-axis
    // intrinsics; note that z is left in raw depth units here.
    const int w = 640;
    const int h = 480;
    int minDistance = -10;
    double scaleFactor = 0.0021;
    // (double)w / h keeps the 4:3 aspect ratio; the integer division
    // w / h would truncate to 1.
    Point3D result = new Point3D(
        (x - w / 2) * (depthValue + minDistance) * scaleFactor * ((double)w / h),
        (y - h / 2) * (depthValue + minDistance) * scaleFactor,
        depthValue);
    return result;
}
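To get a feel for the coordinates this mapping produces, I print a few sample pixels (a throwaway snippet; the pixel positions and the depth value 800 are arbitrary):

// Evaluate the approximate mapping at the image centre and two corners.
int[,] samples = { { 320, 240 }, { 0, 0 }, { 639, 479 } };
for (int i = 0; i < samples.GetLength(0); i++)
{
    Point3D p = DepthToWorld(samples[i, 0], samples[i, 1], 800);
    Console.WriteLine("({0},{1}) -> {2} {3} {4}",
        samples[i, 0], samples[i, 1], p.X, p.Y, p.Z);
}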
I was wondering if you could let me know how to calculate the real-world position based on the depth values produced by my method.