Typical use case for epipolar geometry
Two cameras take a picture of the same scene from different points of view. The epipolar geometry then describes the relation between the two resulting views.
Epipolar geometry is the geometry of stereo vision. When two cameras view a 3D scene from two distinct positions, there are a number of geometric relations between the 3D points and their projections onto the 2D images that lead to constraints between the image points. These relations are derived based on the assumption that the cameras can be approximated by the pinhole camera model.
Contents

1 Epipolar geometry
1.1 Epipole or epipolar point
1.2 Epipolar line
1.3 Epipolar plane
1.4 Epipolar constraint and triangulation
1.5 Simplified cases
1.6 Epipolar geometry of pushbroom sensor
2 See also
3 References
4 Further reading
Epipolar geometry
The figure below depicts two pinhole cameras looking at point X. In real cameras, the image plane is actually behind the center of projection, and produces an image that is rotated 180 degrees. Here, however, the projection problem is simplified by placing a virtual image plane in front of the center of projection of each camera to produce an unrotated image. O_{L} and O_{R} represent the centers of projection of the two cameras. X represents the point of interest in both cameras. Points x_{L} and x_{R} are the projections of point X onto the image planes.
Each camera captures a 2D image of the 3D world. This conversion from 3D to 2D is referred to as a perspective projection and is described by the pinhole camera model. It is common to model this projection operation by rays that emanate from the camera, passing through its center of projection. Note that each emanating ray corresponds to a single point in the image.
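As a concrete illustration of this projection, the following sketch maps a 3D point to pixel coordinates through a camera matrix P = K[R | t]. The intrinsics K, pose (R, t), and point X are made-up example values, not taken from any particular camera.

```python
import numpy as np

# Illustrative pinhole camera: K, R, t are assumed example values.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])          # intrinsics: focal length, principal point
R = np.eye(3)                            # camera rotation (identity: looking down +Z)
t = np.zeros(3)                          # camera translation
P = K @ np.hstack([R, t.reshape(3, 1)])  # 3x4 projection matrix P = K [R | t]

X = np.array([0.5, -0.25, 4.0, 1.0])     # homogeneous 3D point
x = P @ X                                # project along the ray through the center of projection
x = x[:2] / x[2]                         # dehomogenize to pixel coordinates
print(x)                                 # -> [420. 190.]
```

Every 3D point on the same ray through the center of projection maps to this same image point, which is why depth is lost in a single view.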
Epipole or epipolar point
Since the centers of projection of the cameras are distinct, each center of projection projects onto a distinct point in the other camera's image plane. These two image points, denoted e_{L} and e_{R}, are called epipoles or epipolar points. Both epipoles e_{L} and e_{R} in their respective image planes and both centers of projection O_{L} and O_{R} lie on a single 3D line.
Epipolar line
The line O_{L}–X is seen by the left camera as a point because it is directly in line with that camera's center of projection. However, the right camera sees this line as a line in its image plane. That line (e_{R}–x_{R}) in the right camera is called an epipolar line. Symmetrically, the line O_{R}–X seen by the right camera as a point is seen as epipolar line e_{L}–x_{L} by the left camera.
An epipolar line is a function of the 3D point X, i.e. there is a set of epipolar lines in both images if we allow X to vary over all 3D points. Since the 3D line O_{L}–X passes through the center of projection O_{L}, the corresponding epipolar line in the right image must pass through the epipole e_{R} (and correspondingly for epipolar lines in the left image). This means that all epipolar lines in one image must pass through the epipole of that image. In fact, any line that passes through the epipole is an epipolar line, since it can be derived from some 3D point X.
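This property can be checked numerically. In terms of a fundamental matrix F, every epipolar line l = F·x_L in the right image passes through the right epipole e_R, which is the null vector of F^T. The F below is an illustrative skew-symmetric example (the form F takes for a pure camera translation), not one estimated from real images.

```python
import numpy as np

# Assumed example fundamental matrix: skew-symmetric, hence rank 2.
F = np.array([[0.0, -0.1, 0.2],
              [0.1, 0.0, -0.3],
              [-0.2, 0.3, 0.0]])

# Right epipole e_R: the null vector of F^T (i.e. F^T e_R = 0),
# found as the last right-singular vector of F^T.
_, _, Vt = np.linalg.svd(F.T)
e_R = Vt[-1]
e_R = e_R / e_R[2]                 # normalize homogeneous coordinates

# Any epipolar line l = F x_L contains e_R: their dot product vanishes.
x_L = np.array([100.0, 50.0, 1.0])  # arbitrary left-image point (homogeneous)
l = F @ x_L                         # corresponding epipolar line in the right image
print(abs(l @ e_R))                 # ~0 up to floating point
```

Since the result holds for every choice of x_L, all epipolar lines in the right image meet at e_R, as the text states.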
Epipolar plane
As an alternative visualization, consider the points X, O_{L} & O_{R} that form a plane called the epipolar plane. The epipolar plane intersects each camera's image plane where it forms lines—the epipolar lines. All epipolar planes and epipolar lines intersect the epipole regardless of where X is located.
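A small numerical sketch of this view, with illustrative coordinates: the epipolar plane through O_{L} is spanned by the baseline O_{L}–O_{R} and the ray O_{L}–X, so its normal is their cross product, and the projection of X onto a canonical (z = 1) left image plane lies on the line where that plane cuts the image plane.

```python
import numpy as np

# Assumed example geometry: left camera at the origin, right camera offset
# along x, and a 3D point X in front of both.
O_L = np.array([0.0, 0.0, 0.0])
O_R = np.array([1.0, 0.0, 0.0])
X = np.array([0.3, 0.4, 3.0])

# Normal of the epipolar plane spanned by the baseline and the ray O_L–X.
n = np.cross(O_R - O_L, X - O_L)

# On the canonical image plane z = 1, points p = (u, v, 1) in the epipolar
# plane satisfy n . p = 0 -- the epipolar line. X's projection lies on it.
x_L = X / X[2]
print(abs(n @ x_L))                 # ~0: the projection is on the epipolar line
```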
Epipolar constraint and triangulation
If the relative translation and rotation of the two cameras are known, the corresponding epipolar geometry leads to two important observations:

If the projection point x_{L} is known, then the epipolar line e_{R}–x_{R} is known, and the point X projects into the right image onto a point x_{R} which must lie on this particular epipolar line. This means that for each point observed in one image, the same point must be observed in the other image on a known epipolar line. This provides an epipolar constraint which corresponding image points must satisfy, and it makes it possible to test whether two points really correspond to the same 3D point. Epipolar constraints can also be described by the essential matrix or the fundamental matrix between the two cameras.

If the points x_{L} and x_{R} are known, their projection lines are also known. If the two image points correspond to the same 3D point X the projection lines must intersect precisely at X. This means that X can be calculated from the coordinates of the two image points, a process called triangulation.
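Both observations can be sketched numerically. The stereo rig below uses assumed example values for the rotation R and translation t: the essential matrix E = [t]_x R verifies the epipolar constraint x_R^T E x_L = 0 for a corresponding pair, and a linear (DLT) triangulation then recovers X from the two projections.

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]_x, so that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Assumed example rig: right camera shares the left camera's orientation
# and is offset by t along the x-axis.
R = np.eye(3)
t = np.array([1.0, 0.0, 0.0])
E = skew(t) @ R                       # essential matrix E = [t]_x R

# A 3D point and its projections in normalized image coordinates.
X = np.array([0.2, -0.1, 5.0])        # point in the left camera frame
X_R = R @ X + t                       # same point in the right camera frame
x_L = X / X[2]
x_R = X_R / X_R[2]

# First observation: corresponding points satisfy the epipolar constraint.
residual = x_R @ E @ x_L
print(abs(residual))                  # ~0

# Second observation: X is recovered from x_L and x_R by triangulation.
# Each image point contributes two rows of A (from x cross (P X) = 0);
# X is the null vector of the stacked system.
P_L = np.hstack([np.eye(3), np.zeros((3, 1))])   # left camera [I | 0]
P_R = np.hstack([R, t.reshape(3, 1)])            # right camera [R | t]
A = np.array([x_L[0] * P_L[2] - P_L[0],
              x_L[1] * P_L[2] - P_L[1],
              x_R[0] * P_R[2] - P_R[0],
              x_R[1] * P_R[2] - P_R[1]])
_, _, Vt = np.linalg.svd(A)
X_hat = Vt[-1]
X_hat = X_hat[:3] / X_hat[3]          # dehomogenize
print(X_hat)                          # recovers [0.2, -0.1, 5.0]
```

With noisy detections the two projection lines no longer intersect exactly, which is why practical triangulation minimizes an algebraic or reprojection error rather than intersecting rays directly.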
Simplified cases
Example of epipolar geometry. Two cameras, with their respective centers of projection O_{L} and O_{R}, observe a point P. The projection of P onto each of the image planes is denoted p_{L} and p_{R}. Points E_{L} and E_{R} are the epipoles.
The epipolar geometry is simplified if the two camera image planes coincide. In this case, the epipolar lines also coincide (E_{L}–P_{L} = E_{R}–P_{R}). Furthermore, the epipolar lines are parallel to the line O_{L}–O_{R} between the centers of projection, and can in practice be aligned with the horizontal axes of the two images. This means that for each point in one image, its corresponding point in the other image can be found by looking only along a horizontal line. If the cameras cannot be positioned in this way, the image coordinates from the cameras may be transformed to emulate having a common image plane. This process is called image rectification.
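The payoff of the rectified case can be shown with a toy example: corresponding points are found by searching only along the same image row. The data below are synthetic arrays with a known 3-pixel horizontal shift standing in for a rectified stereo pair; the matcher is a minimal sum-of-squared-differences (SSD) search, not a production algorithm.

```python
import numpy as np

# Synthetic "rectified" pair: the right view is the left view shifted
# 3 pixels horizontally (an assumed, known disparity).
rng = np.random.default_rng(0)
left = rng.random((10, 40))
shift = 3
right = np.roll(left, -shift, axis=1)

row, col = 5, 20
patch = left[row - 1:row + 2, col - 1:col + 2]   # 3x3 patch around the pixel

# Search along the same row only: candidate disparities d.
best_d, best_cost = None, np.inf
for d in range(0, 8):
    cand = right[row - 1:row + 2, col - d - 1:col - d + 2]
    cost = np.sum((patch - cand) ** 2)           # SSD matching cost
    if cost < best_cost:
        best_d, best_cost = d, cost
print(best_d)                                    # -> 3, the true shift
```

Reducing the 2D correspondence search to a 1D scan along each row is the main practical motivation for image rectification.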
Epipolar geometry of pushbroom sensor
In contrast to the conventional frame camera, which uses a two-dimensional CCD, a pushbroom camera adopts an array of one-dimensional CCDs to produce a long continuous image strip called an "image carpet". The epipolar geometry of this sensor is quite different from that of frame cameras. First, the epipolar line of a pushbroom sensor is not straight, but a hyperbola-like curve. Second, epipolar 'curve' pairs do not exist.^{[1]}
See also
References

^ Jaehong Oh. "Novel Approach to Epipolar Resampling of HRSI and Satellite Stereo Imagery-based Georeferencing of Aerial Images", 2011, accessed 2011-08-05.
Further reading

Richard Hartley and Andrew Zisserman (2003). Multiple View Geometry in Computer Vision. Cambridge University Press.

Quang-Tuan Luong. "Learning Epipolar Geometry".

Robyn Owens. "Epipolar geometry". Retrieved 2007-03-04.

Linda G. Shapiro and George C. Stockman (2001). Computer Vision. Prentice Hall. pp. 395–403.

Vishvjit S. Nalwa (1993). A Guided Tour of Computer Vision. Addison-Wesley. pp. 216–240.

Roberto Cipolla and Peter Giblin (2000). Visual Motion of Curves and Surfaces. Cambridge University Press.