I have been trying to understand how to project a 3D point to a 2D image and vice versa. I had images of a building taken by a drone, and I reconstructed the building using COLMAP (an SfM tool). For those who are not aware, COLMAP reconstructs the entire scene in 3D, producing a point cloud and the camera pose for each image in the dataset. So, after running it through COLMAP, I have the camera intrinsics and the camera pose (rotation and translation) for each image available to me.

Just to get more intuition about the process, I am trying to project a pixel back to the world **(at a scale)** using the camera pose and camera intrinsics. I just need a unit vector in the right direction toward the 3D point, not the absolute 3D point coordinate.

Here is my current understanding of the approach: invert the rotation with `R_inv = R.T`, build the homogeneous pixel `P_camera_homogeneous = [p_camera[0], p_camera[1], 1]` (and also augment `M_inv` with a horizontal vector so its size is (4, 4)). Here `P = (X, Y, Z, W)`, so the point in 3D is `(X/W, Y/W, Z/W)`.

But when I plot the points in a 3D graph, I don't get the correct structure. I understand I am missing something fundamental in my approach, as I do not have a scale parameter in my equations either. Please let me know what I am doing wrong.
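For concreteness, here is a minimal sketch of the back-projection I am describing, assuming the usual world-to-camera convention `x_cam = R @ x_world + t`. The intrinsics and pose values are made up for illustration, and `pixel_to_world_ray` is my own helper name, not something COLMAP provides:

```python
import numpy as np

# Hypothetical values; substitute the intrinsics and pose COLMAP gives you.
K = np.array([[1000.0,    0.0, 320.0],
              [   0.0, 1000.0, 240.0],
              [   0.0,    0.0,   1.0]])  # camera intrinsics
R = np.eye(3)                            # world-to-camera rotation
t = np.zeros(3)                          # world-to-camera translation

def pixel_to_world_ray(u, v, K, R, t):
    """Back-project pixel (u, v) to a unit-length ray in world coordinates.

    Returns the camera center and the ray direction; the true 3D point lies
    somewhere along this ray (depth is unknown, hence "at a scale").
    """
    p = np.array([u, v, 1.0])            # homogeneous pixel coordinate
    d_cam = np.linalg.inv(K) @ p         # ray direction in the camera frame
    d_world = R.T @ d_cam                # rotate into the world frame (R_inv = R.T)
    d_world /= np.linalg.norm(d_world)   # normalize: only the direction is known
    center = -R.T @ t                    # camera center in world coordinates
    return center, d_world

c, d = pixel_to_world_ray(320.0, 240.0, K, R, t)
print(c, d)  # center at the origin, ray along +Z for this identity pose
```

The key point is that without a depth (or an intersection with the point cloud), the best you can recover from a single pixel is this ray, which is why the scale parameter never appears in the equations.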