UCSD Computer Vision Laboratory
We present a novel framework for motion segmentation that
combines the concepts of layer-based methods and
feature-based motion estimation. We estimate the initial correspondences
by comparing vectors of filter outputs at interest
points, from which we compute candidate scene relations
via random sampling of minimal subsets of correspondences.
We achieve a dense, piecewise smooth assignment
of pixels to motion layers using a fast approximate graph-cut
algorithm based on a Markov random field formulation.
We demonstrate our approach on image pairs containing
large inter-frame motion and partial occlusion. The approach
is efficient and successfully segments scenes with
inter-frame disparities previously beyond the scope of layer-based
motion segmentation methods.
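
As a rough illustration of the random-sampling step described above, the sketch below estimates one candidate scene relation from sparse correspondences. It assumes the relation is modeled as a planar homography fit to minimal subsets of four correspondences; the function names, the inlier threshold, and the choice of homography model are illustrative assumptions, not details taken from the paper.

import numpy as np

def homography_from_minimal_set(src, dst):
    # Direct linear transform on a minimal subset of 4 correspondences.
    # src, dst: arrays of shape (4, 2) with matching interest points.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]   # normalize (degenerate samples are ignored in a sketch)

def ransac_candidate_relation(src, dst, n_iters=500, inlier_thresh=3.0):
    # Randomly sample minimal subsets of correspondences and keep the
    # candidate relation supported by the most inliers (one motion layer).
    best_H = None
    best_inliers = np.zeros(len(src), dtype=bool)
    src_h = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coords
    for _ in range(n_iters):
        idx = np.random.choice(len(src), 4, replace=False)
        H = homography_from_minimal_set(src[idx], dst[idx])
        proj = src_h @ H.T
        proj = proj[:, :2] / proj[:, 2:3]               # back to pixel coords
        err = np.linalg.norm(proj - dst, axis=1)        # reprojection error
        inliers = err < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_H, best_inliers = H, inliers
    return best_H, best_inliers

In the full method, several such candidate relations would be extracted (for example by repeating the procedure on the remaining outliers) and then competed against one another in the dense graph-cut assignment of pixels to layers; that later step is not sketched here.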
Last updated Mar. 3, 2004.