Camera pose estimation using OpenCV C++ solvePnP function
I am trying to measure the pose of my camera and have done the following:
- Marked the world 3-D points (assuming z = 0, since the surface is flat) at the corners of a square on a flat surface and took this as the world coordinate system (in cm). I have taken the top-left corner of the square as the origin and given the world points in the following order (x, y) or (col, row): (0,0), (12.8,0), (12.8,12.8), (0,12.8) - in cm.
- Detected the corresponding points in the image (in pixels). The image points and world points are in the same order.
- I have calibrated my camera for the intrinsic matrix and the distortion coefficients.
- I use the solvePnP function to get rvec and tvec.
- I use the Rodrigues function to get the rotation matrix.
- To check if rvec and tvec are correct, I project the 3-D points (z = 0) onto the image plane using projectPoints, and the points fall correctly on the image with an error of about 3 pixels on the x-axis.
- Now I go ahead and calculate the camera position in the world frame using the formula: cam_world_pos = -inverse(R) * tvec. (I have verified this formula in many blogs and it makes sense; a small sanity-check sketch of it is included after this list.)
- But the x, y and z of cam_world_pos (in cm) do not seem to be correct.
My doubt is: if I am able to project the 3-D world points onto the image plane using rvec and tvec (3 pixel error on the x-axis and no error on the y-axis, which I hope is not too bad), why am I not getting the camera position in the world frame right?
Also, I have a doubt about the solvePnP rvec and tvec: it might be one of multiple solutions, and not the one I want. How do I get the right rvec and tvec from solvePnP? Any other suggestions about rvec and tvec would be helpful.
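To make the formula above concrete, here is a minimal sanity-check sketch (illustrative only; the rvec and tvec values are the ones returned by my solvePnP call, quoted in the code further below). Since cam_world_pos = -inverse(R) * tvec, plugging the result back through the forward transform should give R * cam_world_pos + tvec close to zero:

// Sanity-check sketch (illustrative only): verify that the recovered camera
// position maps back to the camera origin, i.e. R * C + tvec ~ (0, 0, 0).
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    // rvec/tvec as returned by solvePnP (values copied from my output below).
    cv::Mat rvec = (cv::Mat_<double>(3, 1) << -1.0053, 0.6691, 0.3752);
    cv::Mat tvec = (cv::Mat_<double>(3, 1) << -26.951, 0.6041, 134.72);

    cv::Mat R;
    cv::Rodrigues(rvec, R);                        // 3x3 rotation matrix

    cv::Mat camPosWorld = -R.t() * tvec;           // camera centre in the world frame
    cv::Mat backToOrigin = R * camPosWorld + tvec; // should be ~ (0, 0, 0)

    std::cout << "camera position (world, cm):\n" << camPosWorld << std::endl;
    std::cout << "R*C + tvec (should be ~0):\n" << backToOrigin << std::endl;
    return 0;
}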
Edits:
Image size: 720 (rows) x 1280 (cols)
Camera parameters:
cameraMatrix_front = [908.65    0      642.88;
                        0      909.28  364.95;
                        0        0       1   ]
distCoeffs_front = [-0.4589, 0.09462, -1.46*10^-3, 1.23*10^-3]
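For reference, this is roughly how the intrinsics above can be put into cv::Mat form for the code below (a minimal sketch; the variable names match my code):

#include <opencv2/core.hpp>

// Intrinsics from calibration (values as listed above).
cv::Mat cameraMatrix_front = (cv::Mat_<double>(3, 3) <<
    908.65,   0.0,  642.88,
      0.0,  909.28, 364.95,
      0.0,    0.0,    1.0);

// Distortion coefficients (k1, k2, p1, p2).
cv::Mat distCoeffs_front = (cv::Mat_<double>(4, 1) <<
    -0.4589, 0.09462, -1.46e-3, 1.23e-3);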
OpenCV C++ code:
vector<Point3f> front_object_pts;
vector<Point2f> front_image_pts;
Mat rvec_front;
Mat tvec_front;
Mat rotation_front;
Mat world_position_front_cam;

// Fill the front object points (x-y-z order, in cm).
// It is a square of side 12.8 cm on the z = 0 plane.
front_object_pts.push_back(Point3f(0, 0, 0));
front_object_pts.push_back(Point3f(12.8, 0, 0));
front_object_pts.push_back(Point3f(12.8, 12.8, 0));
front_object_pts.push_back(Point3f(0, 12.8, 0));

// Corresponding image points, detected in the same order as the object points.
front_image_pts.push_back(points_front[0]);
front_image_pts.push_back(points_front[1]);
front_image_pts.push_back(points_front[2]);
front_image_pts.push_back(points_front[3]);
// Detected points in the image matching the 3-D points, in the same order:
// (467,368)
// (512,369)
// (456,417)
// (391,416)

// Get rvec and tvec using solvePnP.
// Note: the distortion coefficients are passed as a zero vector here, since the
// image points were detected after the radial distortion had been removed.
solvePnP(front_object_pts, front_image_pts, cameraMatrix_front,
         Mat(4, 1, CV_64FC1, Scalar(0)), rvec_front, tvec_front,
         false, CV_ITERATIVE);

// Output of solvePnP:
// tvec = [-26.951, 0.6041, 134.72]   (3 x 1 matrix)
// rvec = [-1.0053, 0.6691, 0.3752]   (3 x 1 matrix)

// Check whether rvec and tvec are correct by projecting the 3-D object points
// back onto the image.
vector<Point2f> check_front_image_pts;
projectPoints(front_object_pts, rvec_front, tvec_front, cameraMatrix_front,
              distCoeffs_front, check_front_image_pts);

// Get the rotation matrix.
Rodrigues(rvec_front, rotation_front);

// Get the rotation matrix inverse (transpose, since R is orthonormal).
Mat rotation_inverse;
transpose(rotation_front, rotation_inverse);

// Get the camera position in world coordinates.
world_position_front_cam = -rotation_inverse * tvec_front;
// Actual location of the camera (measured manually):
// x = 47 cm
// y = 18 cm
// z = 25 cm

// Obtained location:
// x = 110 cm
// y = 71 cm
// z = -40 cm
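Regarding the multiple-solutions doubt: as a sketch only (it assumes a newer OpenCV, 4.1 or later, which provides cv::solvePnPGeneric and the SOLVEPNP_IPPE flag for coplanar points; my code above does not use these), all candidate poses for the planar square could be enumerated and compared by reprojection error and recovered camera position:

// Sketch (assumes OpenCV >= 4.1): list every candidate pose for a coplanar
// target and compare them. listPlanarPnPSolutions is an illustrative helper.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <iostream>
#include <vector>

void listPlanarPnPSolutions(const std::vector<cv::Point3f>& objectPts,
                            const std::vector<cv::Point2f>& imagePts,
                            const cv::Mat& cameraMatrix,
                            const cv::Mat& distCoeffs)
{
    std::vector<cv::Mat> rvecs, tvecs;

    // SOLVEPNP_IPPE is dedicated to coplanar points and returns up to two solutions.
    cv::solvePnPGeneric(objectPts, imagePts, cameraMatrix, distCoeffs,
                        rvecs, tvecs, false, cv::SOLVEPNP_IPPE);

    for (size_t i = 0; i < rvecs.size(); ++i)
    {
        // Reprojection error of this candidate pose.
        std::vector<cv::Point2f> reproj;
        cv::projectPoints(objectPts, rvecs[i], tvecs[i], cameraMatrix, distCoeffs, reproj);
        double err = cv::norm(reproj, imagePts, cv::NORM_L2);

        // Camera position in world coordinates for this candidate pose.
        cv::Mat R;
        cv::Rodrigues(rvecs[i], R);
        cv::Mat camPosWorld = -R.t() * tvecs[i];

        std::cout << "solution " << i
                  << "  reprojection error: " << err
                  << "  camera position (world, cm):\n" << camPosWorld << std::endl;
    }
}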