# How does 3D panorama imaging work?

1. Apr 17, 2015

The fundamental question: what is the reference geometry in which images can be fused to form a 3D panorama? In other words, how is each image assigned a 3D position? Does the camera need a built-in gyroscope to record its position for each image of interest? Or does it simply fuse images that share common pixel values, such as color?

2. Apr 17, 2015

### robphy

3. Apr 17, 2015

This looks even more advanced than the simple 3D panorama feature on mobile phones such as HTC's. There, multiple images of the surroundings are taken by rotating the phone around a central point where the camera is located; the 3D view is then produced by fusing them. My question: does the first step of Photosynth apply the same idea, matching common features across images, or does it exploit the camera's gyro function to identify the position?

4. Apr 17, 2015

### robphy

5. Apr 17, 2015

### Borg

6. Apr 17, 2015

No, I am asking about 3D panorama imaging. I found that image stitching is the technique adopted by most companies. It works by fusing multiple images taken at different angles with some overlapping fields of view. It seems this is done by finding common patterns or colors to match between the different images. Notably, all of this happens without reference to any external frame, which means it does not require a gyroscope.
In fact, this could act as a gyroscope itself. For example, the first step is to obtain a 3D panorama. The second step is to shoot at any angle, with the camera kept at the same position from which the 3D panorama was originally taken. The new image can then be related to the 3D virtual world of the panorama, and its angle defined relative to that virtual frame of reference. In other words, it would then act as a gyroscope, or better, a virtual gyroscope.
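The "virtual gyroscope" idea can be made concrete with a little pinhole-camera geometry. A minimal sketch in Python (the function name and the numbers are mine, purely illustrative, not from any real panorama API): for a camera rotating about a vertical axis through its optical center, a matched feature near the image center that shifts dx pixels horizontally corresponds to a yaw of roughly atan(dx / f), where f is the focal length expressed in pixels.

```python
import math

def yaw_from_pixel_shift(dx_pixels: float, focal_length_pixels: float) -> float:
    """Approximate camera yaw (in degrees) from the horizontal shift of a
    matched feature, assuming an ideal pinhole camera rotating about a
    vertical axis through its optical center."""
    return math.degrees(math.atan2(dx_pixels, focal_length_pixels))

# Illustrative numbers: a phone camera with an effective focal length of
# about 2800 px sees a matched feature move 500 px between two frames.
angle = yaw_from_pixel_shift(500, 2800)
print(f"estimated yaw: {angle:.2f} degrees")  # roughly 10.12 degrees
```

So once images are registered against the panorama, each new frame's orientation falls out of the feature matches themselves, which is exactly the "virtual gyroscope" behavior described above.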

Last edited: Apr 17, 2015
7. Apr 17, 2015

### Fooality

When you are trying to find the best match between areas of two images, that is called the correspondence problem, and it has been an important problem for a long time. One way to solve it in reasonable time is through convolution of sections of a downscaled image, then tuning in to local areas for even better matches. Convolution can be done quickly by multiplying the Fourier transforms of the areas to be matched. I believe this is used in some consumer drones to hold a steady position above the ground: they photograph the ground and track their movement that way. I'm sure there are other, better ways of matching image sections for stitching that I don't know about. The challenge with what you're talking about, though, is deriving the six degrees of freedom, x, y, z (spatial location of the camera) and theta, rho, psi (rotation of the camera), from a simple 2D image. If all you are dealing with is rotation about one axis, you could do pretty well, but deducing all six is really hard. Has the red ball moved 10 pixels left because the camera moved right, or because it rotated right? On the other hand, I have seen pretty good panorama stitches from just holding the camera without a tripod, so you may have a good idea there....
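The Fourier-transform trick mentioned above is usually called phase correlation. A minimal NumPy sketch (the function name is mine; this recovers only a pure translation between two overlapping images, the simplest case of the correspondence problem, and ignores rotation and scale):

```python
import numpy as np

def phase_correlation(a: np.ndarray, b: np.ndarray):
    """Estimate the integer (row, col) shift d such that b ~ np.roll(a, d),
    by multiplying Fourier transforms and locating the correlation peak."""
    # Cross-power spectrum of the two images.
    r = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    r /= np.abs(r) + 1e-12           # keep only the phase; sharpens the peak
    corr = np.fft.ifft2(r).real      # inverse FFT: the peak sits at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts past the midpoint wrap around to negative values.
    return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, corr.shape))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
moved = np.roll(img, (5, -3), axis=(0, 1))  # simulate a camera translation
print(phase_correlation(img, moved))        # -> (5, -3)
```

Because the whole match is two FFTs and one inverse FFT, it runs in O(N log N) rather than the O(N^2) of a direct sliding-window search, which is why it is attractive for real-time uses like the drone ground-tracking mentioned above.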

edit: fixed error

Last edited: Apr 17, 2015
8. Apr 30, 2015

### Gracie thomas

There’s something about the shape or aspect ratio of panoramic photographs that always makes you slow down for a closer look. Most of the pictures we view regularly, in print or online, are horizontal rectangles in 3:2 or 4:3 aspect ratios, which can be visually digested with a quick pass or two of our eyes. The visual data in a wider-aspect panorama, however, is distributed across a field of view (FOV) too wide to absorb in a single glance, or even two or three, depending on the subject matter.

In addition to their attention-grabbing aspect ratios, panoramic photographs hold our interest because they resemble the way we see the world through our horizontally aligned eyes.