Multiple View Geometry in Computer Vision

Parameter estimation
class 6
Multiple View Geometry
Comp 290-089
Marc Pollefeys
Content
• Background: Projective geometry (2D, 3D),
Parameter estimation, Algorithm evaluation.
• Single View: Camera model, Calibration, Single
View Geometry.
• Two Views: Epipolar Geometry, 3D
reconstruction, Computing F, Computing
structure, Plane and homographies.
• Three Views: Trifocal Tensor, Computing T.
• More Views: N-Linearities, Multiple view
reconstruction, Bundle adjustment, Auto-calibration, Dynamic SfM, Cheirality, Duality
Multiple View Geometry course schedule
(subject to change)
Jan. 7, 9     Intro & motivation           Projective 2D Geometry
Jan. 14, 16   (no class)                   Projective 2D Geometry
Jan. 21, 23   Projective 3D Geometry       (no class)
Jan. 28, 30   Parameter Estimation         Parameter Estimation
Feb. 4, 6     Algorithm Evaluation         Camera Models
Feb. 11, 13   Camera Calibration           Single View Geometry
Feb. 18, 20   Epipolar Geometry            3D reconstruction
Feb. 25, 27   Fund. Matrix Comp.           Structure Comp.
Mar. 4, 6     Planes & Homographies        Trifocal Tensor
Mar. 18, 20   Three View Reconstruction    Multiple View Geometry
Mar. 25, 27   Multiple View Reconstruction Bundle adjustment
Apr. 1, 3     Auto-Calibration             Papers
Apr. 8, 10    Dynamic SfM                  Papers
Apr. 15, 17   Cheirality                   Papers
Apr. 22, 24   Duality                      Project Demos
Parameter estimation
• 2D homography
Given a set of (x_i, x'_i), compute H (x'_i = Hx_i)
• 3D to 2D camera projection
Given a set of (X_i, x_i), compute P (x_i = PX_i)
• Fundamental matrix
Given a set of (x_i, x'_i), compute F (x'_i^T F x_i = 0)
• Trifocal tensor
Given a set of (x_i, x'_i, x''_i), compute T
DLT algorithm
Objective
Given n≥4 2D to 2D point correspondences {xi↔xi’},
determine the 2D homography matrix H such that xi’=Hxi
Algorithm
(i) For each correspondence x_i ↔ x'_i compute A_i; usually
only the first two rows are needed.
(ii) Assemble the n 2×9 matrices A_i into a single 2n×9 matrix A
(iii) Obtain the SVD of A; the solution for h is the last column of V
(iv) Determine H from h
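As a concrete sketch of steps (i)-(iv), here is a minimal Python/NumPy implementation. It is illustrative only (the function name and array layout are my own, not from the slides), using the two-row A_i construction from x'_i × Hx_i = 0:

```python
import numpy as np

def dlt_homography(x, xp):
    """Basic DLT: estimate H (x'_i = H x_i, up to scale) from n >= 4
    point correspondences given as (n, 2) arrays x and xp."""
    n = len(x)
    A = np.zeros((2 * n, 9))
    for i in range(n):
        X = np.array([x[i, 0], x[i, 1], 1.0])        # homogeneous x_i
        u, v = xp[i]
        # first two rows of A_i (the third row is linearly dependent)
        A[2 * i] = np.concatenate([np.zeros(3), -X, v * X])
        A[2 * i + 1] = np.concatenate([X, np.zeros(3), -u * X])
    # h = right singular vector of the smallest singular value (last row of Vt)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)
```

With exact correspondences H is recovered up to scale; with noisy points this minimizes the algebraic error ||Ah|| subject to ||h|| = 1.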
Geometric distance
x        measured coordinates
\hat{x}  estimated coordinates
\bar{x}  true coordinates
d(.,.)   Euclidean distance (in the image)

Error in one image (e.g. a calibration pattern, where \bar{x}_i is known exactly):
\hat{H} = \arg\min_H \sum_i d(x'_i, H\bar{x}_i)^2
Symmetric transfer error
Л† пЂЅ argmin
H
H
 
d x i , H x п‚ўi
-1
пЂ©
2
пЂ« d пЂЁ x п‚ўi , Hx
2
пЂ©
i
i
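The symmetric transfer error is straightforward to evaluate; a small NumPy sketch (function and helper names are illustrative, not from the slides):

```python
import numpy as np

def symmetric_transfer_error(H, x, xp):
    """Sum over i of d(x_i, H^-1 x'_i)^2 + d(x'_i, H x_i)^2,
    with x, xp given as (n, 2) arrays of inhomogeneous points."""
    def transfer(M, p):
        q = M @ np.array([p[0], p[1], 1.0])          # map homogeneous point
        return q[:2] / q[2]                          # back to inhomogeneous
    Hinv = np.linalg.inv(H)
    err = 0.0
    for a, b in zip(np.asarray(x, float), np.asarray(xp, float)):
        err += np.sum((a - transfer(Hinv, b)) ** 2)  # backward transfer
        err += np.sum((b - transfer(H, a)) ** 2)     # forward transfer
    return err
```

Unlike the one-image error, this penalizes the transfer in both directions, so it does not prefer homographies that shrink one of the images.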
Reprojection error
(\hat{H}, \hat{x}_i, \hat{x}'_i) = \arg\min_{H, \hat{x}_i, \hat{x}'_i} \sum_i d(x_i, \hat{x}_i)^2 + d(x'_i, \hat{x}'_i)^2
subject to \hat{x}'_i = \hat{H}\hat{x}_i
Geometric interpretation of reprojection error
Estimating the homography ~ fitting a surface \nu_H
to the points X_i = (x_i, y_i, x'_i, y'_i)^T in \mathbb{R}^4:
d(x_i, \hat{x}_i)^2 + d(x'_i, \hat{x}'_i)^2 = d_\perp(X_i, \nu_H)^2
Analog to conic fitting
Algebraic distance:   d_{alg}(x, C)^2 = (x^T C x)^2
Geometric distance:   d_\perp(x, C)^2
Sampson error:        d_{Sampson}(x, C)^2 = e^T (JJ^T)^{-1} e
(a first-order approximation of the geometric distance)
Statistical cost function and
Maximum Likelihood Estimation
• Optimal cost function related to noise model
• Assume zero-mean isotropic Gaussian noise
(assume outliers removed)
\Pr(x) = \frac{1}{2\pi\sigma^2} e^{-d(x,\bar{x})^2 / (2\sigma^2)}
Error in one image
\Pr(\{x'_i\} \mid H) = \prod_i \frac{1}{2\pi\sigma^2} e^{-d(x'_i, H\bar{x}_i)^2 / (2\sigma^2)}
\log \Pr(\{x'_i\} \mid H) = -\frac{1}{2\sigma^2} \sum_i d(x'_i, H\bar{x}_i)^2 + \text{constant}
Maximum Likelihood Estimate: minimize \sum_i d(x'_i, H\bar{x}_i)^2
Error in both images
\Pr(\{x_i, x'_i\} \mid H) = \prod_i \frac{1}{2\pi\sigma^2} e^{-\left( d(x_i, \bar{x}_i)^2 + d(x'_i, H\bar{x}_i)^2 \right) / (2\sigma^2)}
Maximum Likelihood Estimate: minimize \sum_i d(x_i, \hat{x}_i)^2 + d(x'_i, \hat{x}'_i)^2
Mahalanobis distance
• General Gaussian case
Measurement X with covariance matrix \Sigma:
\|X - \bar{X}\|_\Sigma^2 = (X - \bar{X})^T \Sigma^{-1} (X - \bar{X})
Error in two images (independent):
\|X - \bar{X}\|_\Sigma^2 + \|X' - \bar{X}'\|_{\Sigma'}^2
Varying covariances:
\sum_i \|X_i - \bar{X}_i\|_{\Sigma_i}^2 + \|X'_i - \bar{X}'_i\|_{\Sigma'_i}^2
Invariance to transforms?
x' = Hx
\tilde{x} = Tx, \quad \tilde{x}' = T'x'
\tilde{H} \stackrel{?}{=} T'HT^{-1}
\tilde{x}' = \tilde{H}\tilde{x} \;\Rightarrow\; T'x' = \tilde{H}Tx \;\Rightarrow\; x' = T'^{-1}\tilde{H}Tx
Will the result change? For which algorithms? For which transformations?
Non-invariance of DLT
Given x_i ↔ x'_i and H computed by DLT,
and \tilde{x}_i = Tx_i, \tilde{x}'_i = T'x'_i:
does the DLT algorithm applied to \tilde{x}_i ↔ \tilde{x}'_i
yield \tilde{H} = T'HT^{-1}?
Effect of change of coordinates on algebraic error
\tilde{e}_i = \tilde{x}'_i \times \tilde{H}\tilde{x}_i = T'x'_i \times (T'HT^{-1})Tx_i = T'^* (x'_i \times Hx_i) = T'^* e_i
For similarities
T' = \begin{bmatrix} sR & t \\ 0^T & 1 \end{bmatrix}, \quad T'^* = s \begin{bmatrix} R & 0 \\ -t^T R & s \end{bmatrix}
so \tilde{A}_i \tilde{h} = (\tilde{e}_{i1}, \tilde{e}_{i2})^T = sR (e_{i1}, e_{i2})^T, and since R preserves norms,
d_{alg}(\tilde{x}'_i, \tilde{H}\tilde{x}_i) = s\, d_{alg}(x'_i, Hx_i)
Non-invariance of DLT
Given x_i ↔ x'_i and H computed by DLT,
and \tilde{x}_i = Tx_i, \tilde{x}'_i = T'x'_i:
does the DLT algorithm applied to \tilde{x}_i ↔ \tilde{x}'_i
yield \tilde{H} = T'HT^{-1}?

minimize \sum_i d_{alg}(x'_i, Hx_i)^2 subject to \|H\| = 1
\Leftrightarrow minimize \sum_i d_{alg}(\tilde{x}'_i, \tilde{H}\tilde{x}_i)^2 subject to \|H\| = 1
\nLeftrightarrow minimize \sum_i d_{alg}(\tilde{x}'_i, \tilde{H}\tilde{x}_i)^2 subject to \|\tilde{H}\| = 1
Invariance of geometric error
Given x_i ↔ x'_i and H,
and \tilde{x}_i = Tx_i, \tilde{x}'_i = T'x'_i, \tilde{H} = T'HT^{-1}.
Assume T' is a similarity transformation (with scale s):
d(\tilde{x}'_i, \tilde{H}\tilde{x}_i) = d(T'x'_i, T'HT^{-1}Tx_i) = d(T'x'_i, T'Hx_i) = s\, d(x'_i, Hx_i)
Normalizing transformations
• Since DLT is not invariant,
what is a good choice of coordinates?
e.g.
• Translate the centroid to the origin
• Scale so that the average distance to the origin is \sqrt{2}
• Independently in both images
Or
T_{norm} = \begin{bmatrix} w+h & 0 & w/2 \\ 0 & w+h & h/2 \\ 0 & 0 & 1 \end{bmatrix}^{-1}
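The centroid/\sqrt{2} normalization can be sketched as follows (a hypothetical helper, not from the slides):

```python
import numpy as np

def normalizing_transform(pts):
    """Similarity T that translates the centroid of pts to the origin and
    scales so that the average distance to the origin becomes sqrt(2)."""
    pts = np.asarray(pts, float)
    c = pts.mean(axis=0)                              # centroid
    s = np.sqrt(2) / np.linalg.norm(pts - c, axis=1).mean()
    return np.array([[s, 0, -s * c[0]],
                     [0, s, -s * c[1]],
                     [0, 0, 1]])
```

The transform is computed independently for the points of each image before running the DLT.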
Importance of normalization
\begin{bmatrix} 0 & 0 & 0 & -x_i & -y_i & -1 & y'_i x_i & y'_i y_i & y'_i \\ x_i & y_i & 1 & 0 & 0 & 0 & -x'_i x_i & -x'_i y_i & -x'_i \end{bmatrix} \begin{pmatrix} h^1 \\ h^2 \\ h^3 \end{pmatrix} = 0
For unnormalized image coordinates the entries are of order
~10^2, ~10^2, 1, ~10^2, ~10^2, 1, ~10^4, ~10^4, ~10^2:
orders of magnitude difference!
Normalized DLT algorithm
Objective
Given n≥4 2D to 2D point correspondences {xi↔xi’},
determine the 2D homography matrix H such that xi’=Hxi
Algorithm
(i) Normalize points: \tilde{x}_i = T_{norm} x_i, \quad \tilde{x}'_i = T'_{norm} x'_i
(ii) Apply the DLT algorithm to \tilde{x}_i ↔ \tilde{x}'_i
(iii) Denormalize the solution: H = T'^{-1}_{norm} \tilde{H} T_{norm}
Iterative minimization methods
Required to minimize geometric error
(i) Often slower than DLT
(ii) Require initialization
(iii) No guaranteed convergence, local minima
(iv) Stopping criterion required
Therefore, careful implementation required:
(i) Cost function
(ii) Parameterization (minimal or not)
(iii) Cost function in terms of the parameters
(iv) Initialization
(v) Iterations
Parameterization
Parameters should cover complete space
and allow efficient estimation of cost
• Minimal or over-parameterized? e.g. 8 or 9
(minimal often more complex, also cost surface)
(good algorithms can deal with over-parameterization)
(sometimes also local parameterization)
• Parametrization can also be used to restrict
transformation to particular class, e.g. affine
Function specifications
(i) Measurement vector X ∈ ℝ^N with covariance matrix Σ
(ii) Set of parameters represented by a vector P ∈ ℝ^M
(iii) Mapping f : ℝ^M → ℝ^N. The range of the mapping is a
surface S representing the allowable measurements
(iv) Cost function: the squared Mahalanobis distance
\|X - f(P)\|_\Sigma^2 = (X - f(P))^T \Sigma^{-1} (X - f(P))
The goal is to achieve f(P) = X, or to get as close as
possible in terms of Mahalanobis distance
Error in one image
\sum_i d(x'_i, H\bar{x}_i)^2
f : h \mapsto (Hx_1, Hx_2, \ldots, Hx_n), \quad minimize \|X - f(h)\|^2
Symmetric transfer error
\sum_i d(x_i, H^{-1}x'_i)^2 + d(x'_i, Hx_i)^2
f : h \mapsto (H^{-1}x'_1, \ldots, H^{-1}x'_n, Hx_1, \ldots, Hx_n), \quad minimize \|X - f(h)\|^2
Reprojection error
\sum_i d(x_i, \hat{x}_i)^2 + d(x'_i, \hat{x}'_i)^2
f : (h, \hat{x}_1, \ldots, \hat{x}_n) \mapsto (\hat{x}_1, \hat{x}'_1, \ldots, \hat{x}_n, \hat{x}'_n) with \hat{x}'_i = H\hat{x}_i, \quad minimize \|X - f(h)\|^2
Initialization
• Typically, use linear solution
• If outliers, use robust algorithm
• Alternatively, sample the parameter space
Iteration methods
Many algorithms exist
• Newton’s method
• Levenberg-Marquardt
• Powell’s method
• Simplex method
Gold Standard algorithm
Objective
Given n≥4 2D to 2D point correspondences {xi↔xi’},
determine the Maximum Likelihood Estimate of H
(this also implies computing the optimal \hat{x}'_i = H\hat{x}_i)
Algorithm
(i) Initialization: compute an initial estimate using
normalized DLT or RANSAC
(ii) Geometric minimization of either the Sampson error:
● Minimize the Sampson error
● Minimize using Levenberg-Marquardt over the 9 entries of h
or the Gold Standard error:
● compute an initial estimate for the optimal {\hat{x}_i}
● minimize the cost \sum_i d(x_i, \hat{x}_i)^2 + d(x'_i, \hat{x}'_i)^2 over {H, \hat{x}_1, \ldots, \hat{x}_n}
● if there are many points, use a sparse method
Robust estimation
• What if set of matches contains gross outliers?
RANSAC
Objective
Robust fit of model to data set S which contains outliers
Algorithm
(i) Randomly select a sample of s data points from S and
instantiate the model from this subset.
(ii) Determine the set of data points Si which are within a
distance threshold t of the model. The set Si is the
consensus set of samples and defines the inliers of S.
(iii) If the size of Si is greater than some threshold T, re-estimate
the model using all the points in Si and terminate
(iv) If the size of Si is less than T, select a new subset and
repeat the above.
(v) After N trials the largest consensus set Si is selected, and
the model is re-estimated using all the points in the
subset Si
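The steps above can be sketched in Python. The model here is a 2D line y = a*x + b (minimal sample size s = 2) purely for illustration; the slides state the algorithm for an arbitrary model, and the early-termination threshold T of steps (iii)-(iv) is omitted for brevity:

```python
import random

def ransac_line(points, t, n_trials=100, seed=0):
    """RANSAC sketch: robustly fit y = a*x + b to 2D points, then
    re-estimate (a, b) by least squares on the largest consensus set."""
    rng = random.Random(seed)
    best = []
    for _ in range(n_trials):
        (x1, y1), (x2, y2) = rng.sample(points, 2)   # (i) random minimal sample
        if x1 == x2:
            continue                                 # degenerate sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # (ii) consensus set: points within distance threshold t of the line
        inliers = [(x, y) for x, y in points
                   if abs(y - a * x - b) / (1 + a * a) ** 0.5 <= t]
        if len(inliers) > len(best):                 # (v) keep the largest set
            best = inliers
    # re-estimate the model from all points in the consensus set
    n = len(best)
    sx = sum(x for x, _ in best); sy = sum(y for _, y in best)
    sxx = sum(x * x for x, _ in best); sxy = sum(x * y for x, y in best)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n, best
```

Gross outliers never enter the final least-squares fit, which is the whole point of the consensus step.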
Distance threshold
Choose t so that the probability of accepting an inlier is α (e.g. 0.95)
• Often chosen empirically
• For zero-mean Gaussian noise with standard deviation σ, d_\perp^2 follows
a \chi_m^2 distribution with m = codimension of the model
(dimension + codimension = dimension of the space)

Codimension   Model   t^2
1             l, F    3.84σ^2
2             H, P    5.99σ^2
3             T       7.81σ^2
How many samples?
Choose N so that, with probability p, at least one
random sample is free from outliers. e.g. p=0.99
(1 - (1 - e)^s)^N = 1 - p
N = \log(1 - p) / \log(1 - (1 - e)^s)

Required N as a function of sample size s and proportion of outliers e:
s    5%   10%   20%   25%   30%   40%   50%
2    2    3     5     6     7     11    17
3    3    4     7     9     11    19    35
4    3    5     9     13    17    34    72
5    4    6     12    17    26    57    146
6    4    7     16    24    37    97    293
7    4    8     20    33    54    163   588
8    5    9     26    44    78    272   1177
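The formula for N can be checked directly against the table entries above (a one-liner sketch; the function name is my own):

```python
from math import ceil, log

def ransac_trials(s, e, p=0.99):
    """N = log(1 - p) / log(1 - (1 - e)^s): number of samples so that,
    with probability p, at least one sample of size s is outlier-free,
    given an outlier proportion e."""
    return ceil(log(1 - p) / log(1 - (1 - e) ** s))
```

For example, ransac_trials(4, 0.5) reproduces the N = 72 entry for s = 4 at 50% outliers.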
Acceptable consensus set?
• Typically, terminate when the size of the consensus set reaches the
expected number of inliers: T = (1 - e)n
Adaptively determining
the number of samples
e is often unknown a priori, so pick worst case,
e.g. 50%, and adapt if more inliers are found,
e.g. finding 80% inliers would yield e = 0.2
• N = ∞, sample_count = 0
• While N > sample_count repeat:
  • Choose a sample and count the number of inliers
  • Set e = 1 - (number of inliers)/(total number of points)
  • Recompute N from e: N = \log(1 - p) / \log(1 - (1 - e)^s)
  • Increment sample_count by 1
• Terminate
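The adaptive loop can be sketched as follows; count_inliers is a caller-supplied stand-in (hypothetical, not from the slides) for "choose a sample, fit the model, and count the inliers":

```python
from math import ceil, log

def adaptive_sample_count(count_inliers, n_points, s, p=0.99):
    """Adaptive RANSAC termination: start from the worst case N = infinity
    and lower N whenever a sample reveals a smaller outlier ratio e."""
    N = float("inf")
    sample_count = 0
    while N > sample_count:
        inliers = count_inliers()                 # choose sample, count inliers
        e = 1 - inliers / n_points                # current outlier estimate
        if e == 0:
            N = 0                                 # every point is an inlier
        else:
            N = min(N, ceil(log(1 - p) / log(1 - (1 - e) ** s)))
        sample_count += 1
    return sample_count
```

Because N only ever shrinks, the loop terminates as soon as the number of samples drawn matches the current (most optimistic) estimate of N.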
Robust Maximum Likelihood Estimation
The previous MLE algorithm considers a fixed set of inliers.
Better: a robust cost function that reclassifies inliers and outliers:
D_R = \sum_i \rho(d_{\perp i}) \quad \text{with} \quad \rho(e) = \begin{cases} e^2 & e^2 < t^2 \text{ (inlier)} \\ t^2 & e^2 \ge t^2 \text{ (outlier)} \end{cases}
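The robust cost \rho in Python (a one-liner sketch; the arguments are the squared distance and squared threshold):

```python
def rho(e_sq, t_sq):
    """Robust cost: quadratic below the threshold (inlier),
    constant penalty above it (outlier)."""
    return e_sq if e_sq < t_sq else t_sq
```

Capping the penalty at t^2 means a gross outlier cannot dominate the total cost, so points can move in and out of the inlier set during minimization.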
Other robust algorithms
• RANSAC maximizes number of inliers
• LMedS minimizes median error
• Not recommended: case deletion,
iterative least-squares, etc.
Automatic computation of H
Objective
Compute homography between two images
Algorithm
(i) Interest points: Compute interest points in each image
(ii) Putative correspondences: Compute a set of interest
point matches based on some similarity measure
(iii) RANSAC robust estimation: Repeat for N samples
(a) Select 4 correspondences and compute H
(b) Calculate the distance d_\perp for each putative match
(c) Compute the number of inliers consistent with H (d_\perp < t)
Choose H with most inliers
(iv) Optimal estimation: re-estimate H from all inliers by
minimizing ML cost function with Levenberg-Marquardt
(v) Guided matching: Determine more matches using
prediction by computed H
Optionally iterate last two steps until convergence
Determine putative
correspondences
• Compare interest points
Similarity measure:
• SAD, SSD, ZNCC on small neighborhood
• If motion is limited, only consider interest points
with similar coordinates
• More advanced approaches exist, based on
invariance…
Example: robust computation
Interest points
(500/image)
Putative
correspondences (268)
Outliers (117)
Inliers (151)
Final inliers (262)
Assignment
• Take two or more photographs from a single viewpoint
• Compute a panorama
• Use different measures: DLT, MLE
• Use Matlab
• Due Feb. 13
Next class: Algorithm evaluation
and error analysis
• Bounds on performance
• Covariance propagation
• Monte Carlo covariance estimation