Trying to calculate field of view of smartphone camera using rotation sensors

Hey people,

I am trying to calculate the field of view of my smartphone camera without taking a calibration photo, but instead by

  • rotating the phone, taking images on the fly
  • saving the phone orientation for each image

The idea is to stitch the images using cv.ImgAlign. Using the instance’s match_getExtData function, I can get the transformation matrix of the process. The tx and ty elements of the matrix, together with the difference in azimuth and altitude, should give me the FoV:

fov_horizontal = horizontalResolution * azimuthDiff / tx
fov_vertical = verticalResolution * altitudeDiff / ty
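In code, the calculation looks roughly like this (just a sketch; the function and variable names are made up, and it assumes a purely linear relation between pixel shift and rotation angle):

// tx, ty: translation from the alignment, in pixels
// azimuthDiff, altitudeDiff: sensor angle differences between the two shots, in degrees
function estimateFov(width, height, azimuthDiff, altitudeDiff, tx, ty) {
  const degPerPixelX = azimuthDiff / tx   // degrees of rotation per pixel of shift
  const degPerPixelY = altitudeDiff / ty
  return {
    horizontal: width * degPerPixelX,
    vertical: height * degPerPixelY,
  }
}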

But when I do that for different images, I get wildly different values, although the matching looks good to me. Am I missing something? Or is there an easier method to calculate the FoV under those constraints? Here are the values I get:


horizontal: 83.81 deg
horizontal: 86.57 deg
horizontal: 83.83 deg
horizontal: -8.31 deg
horizontal: 94.60 deg
horizontal: 105.95 deg

vertical: 23.20 deg
vertical: -2.47 deg
vertical: 3.30 deg
vertical: -45.72 deg
vertical: -4.61 deg
vertical: -3.68 deg

Hi there,

That’s a great idea! I’m seeing two possible issues here.

First, the idea can only work on undistorted images. I suggest studying the effect of lens distortion on your calculations.

Second, rotation sensors don’t measure angles, but angular velocity. Angles are calculated by integrating the instantaneous angular velocities over a period of time, so errors accumulate. And finally, browsers fuse in other sensors, like the magnetometer, to give a result that “feels” best.
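To illustrate the integration step (a made-up sketch, not a real sensor API; each sample’s measurement error is summed in along with the signal, which is why the result drifts):

// samples: [{ rateDegPerSec, dtSec }, ...] from a rate gyro (hypothetical shape)
function integrateGyro(samples) {
  let angleDeg = 0
  for (const { rateDegPerSec, dtSec } of samples) {
    angleDeg += rateDegPerSec * dtSec // the error in every sample accumulates
  }
  return angleDeg
}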

in principle, if you stitch two pictures with enough overlap, you can extract this information already.

in practice, the compass sensor may be accurate but the phone’s “compensation” fucks it all up again. I get that on various android devices. they are trying to be smart and in the process they sometimes completely lose touch with reality (the measurements, but not just those).

yes, lens distortion can get you… but it’s assumed to be near zero for pixels close to the center of the picture.

try doing an entire panorama (cylinder is ok) and then

  • draw the center point of each picture into the panorama
  • draw the phone orientation into the panorama

those should line up, give or take minor differences in how the camera and IMU/compass sensors were soldered on
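the mapping could look roughly like this (just a sketch, assuming an equirectangular panorama spanning 360°×180°, with angles in degrees):

function orientationToPanoPixel(azimuthDeg, altitudeDeg, panoWidth, panoHeight) {
  const x = (azimuthDeg / 360) * panoWidth            // heading -> column
  const y = ((90 - altitudeDeg) / 180) * panoHeight   // altitude -> row, 0 = zenith
  return { x, y }
}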


compass sensor gives position (heading/angle), absolute (no drift) but slow-changing and slightly noisy.

gyro gives velocity. that is integrated to get position (relative heading/angle). that will drift, if left alone. it’s fast-changing. I shouldn’t comment on the noise though.

both are fused for a result that is fast-changing but also doesn’t drift.
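the simplest form of that fusion is a complementary filter, roughly like this (alpha is a made-up tuning constant, not from any real API):

function fuseHeading(prevDeg, gyroRateDegPerSec, dtSec, compassDeg, alpha = 0.98) {
  const gyroEstimate = prevDeg + gyroRateDegPerSec * dtSec // fast, but drifts
  return alpha * gyroEstimate + (1 - alpha) * compassDeg   // compass slowly pulls the drift back
}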


Thanks for your quick replies! 🙂

I think the rotation angles should be relatively reliable, as the camera view in my app turns nicely on orientation change. The vertical orientation (DeviceOrientationEvent.beta) should be very precise through the gyro anyway, and I’d assume the heading (DeviceOrientationEvent.alpha) to be more or less stable as well, as @crackwitz mentioned.
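For reference, this is how I read the angles (the standard DeviceOrientationEvent API):

window.addEventListener('deviceorientation', (event) => {
  const heading = event.alpha // rotation around the z axis, degrees
  const pitch = event.beta    // rotation around the x axis, degrees
})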

My plan was to take many images and average the errors out.
If I look at the calculated horizontal FoVs, they seem to be more or less stable around 80°. That feels too small to me, but at least it’s more or less working.
The vertical FoV is very messy and always much too small.
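Since outliers like the -8.31° run above would wreck a plain mean, I’d probably take a median instead (minimal sketch):

function median(values) {
  const s = [...values].sort((a, b) => a - b)
  const m = Math.floor(s.length / 2)
  return s.length % 2 ? s[m] : (s[m - 1] + s[m]) / 2
}
median([83.81, 86.57, 83.83, -8.31, 94.60, 105.95]) // ≈ 85.2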

I’m wondering whether I’m misinterpreting the transformation matrix OpenCV gives me. The problem is that I can’t find any documentation for the ImgAlign class.
Here is how I use it:
const success = mImgAlign.match_getExtData(... mTransMat ...) // fills mTransMat with the transformation matrix
const transformationMatrix = mTransMat.data64F // flat, row-major Float64Array of the matrix values
const deltaX = transformationMatrix[2] // tx: translation in x, pixels
const deltaY = transformationMatrix[5] // ty: translation in y, pixels

By the way, here is a deployed version of the PoC which you can play around with on a smartphone. The code can be found here.
If I try it on my Galaxy S20, I never get close to the correct values, which are 104.1°×87.8°.

I’d like to try out your suggestion @crackwitz, but I’m not sure how I would map the phone orientation into the panorama. (I actually want to calculate the FoV in order to map the panorama to spherical coordinates.)
Can you guide me on that or point me to some good resources? I did not find good documentation for opencv.js, unfortunately.

an hfov of 80 degrees seems reasonable to me. depends on the device. what fovs do you expect from the specs, from “conventional” calibration, or from just calculating the focal length from an object’s pixel length, given known size and known distance from the camera?
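the last option is just the pinhole model, roughly like this (all numbers are placeholders):

const objectPixels = 500    // object's length in the image, pixels
const objectMeters = 1.0    // known physical size
const distanceMeters = 3.0  // known distance from the camera
const imageWidth = 4000     // image width, pixels

const focalPx = objectPixels * distanceMeters / objectMeters        // pinhole projection
const hfovDeg = 2 * Math.atan(imageWidth / (2 * focalPx)) * 180 / Math.PI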

I’ve never encountered such a thing in actual OpenCV.

perhaps you should try prototyping with Python on a desktop instead. in the javascript “version” of OpenCV, Mats/arrays are very weird.