Conventional approaches to 3D scene reconstruction often treat matting and reconstruction as two separate problems, with matting serving as a prerequisite to reconstruction. The drawback of such an approach is that it requires making irreversible decisions at the first stage, which may translate into reconstruction errors at the second stage. In this paper, we propose an approach that solves both problems jointly, thereby avoiding this limitation. A general Bayesian formulation for estimating opacity and depth with respect to a reference camera is developed. In addition, it is demonstrated that in the special case of binary opacity values (background/foreground) and discrete depth values, a global solution can be obtained via a single graph-cut computation. We demonstrate the application of the method to novel view synthesis for a large-scale outdoor scene. An experimental comparison with a two-stage approach based on chroma keying and shape-from-silhouette illustrates the advantages of the new method.
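To illustrate why the binary-opacity special case admits a globally optimal solution, the sketch below solves a generic binary (background/foreground) labeling with unary costs plus a Potts smoothness term via a single min-cut. This is a minimal self-contained illustration of the graph-cut principle, not the paper's actual energy or graph construction; the function name, the toy costs, and the use of Edmonds-Karp max-flow are all assumptions made for the example.

```python
# Minimal sketch: globally optimal binary labeling via one min-cut.
# Energy: sum_i D_i(x_i) + lam * sum_(i,j) [x_i != x_j]  (Potts model),
# which is submodular for binary labels, so the min-cut is a global optimum.
# NOT the paper's construction -- an illustrative toy formulation only.
from collections import deque

def min_cut_labeling(unary, edges, lam):
    """unary: list of (cost_if_label0, cost_if_label1) per pixel;
    edges: neighbor pairs (i, j); lam: Potts smoothness weight.
    Returns the globally optimal 0/1 labeling."""
    n = len(unary)
    S, T = n, n + 1              # extra source/sink terminal nodes
    N = n + 2
    cap = [[0.0] * N for _ in range(N)]
    for i, (c0, c1) in enumerate(unary):
        cap[S][i] += c1          # s->i is cut exactly when x_i = 1
        cap[i][T] += c0          # i->t is cut exactly when x_i = 0
    for i, j in edges:           # pairwise edge cut iff labels differ
        cap[i][j] += lam
        cap[j][i] += lam
    # Edmonds-Karp max-flow on the residual capacities
    while True:
        parent = [-1] * N
        parent[S] = S
        q = deque([S])
        while q:
            u = q.popleft()
            for v in range(N):
                if parent[v] == -1 and cap[u][v] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if parent[T] == -1:      # no augmenting path left: flow is maximal
            break
        f, v = float("inf"), T   # bottleneck along the augmenting path
        while v != S:
            f = min(f, cap[parent[v]][v]); v = parent[v]
        v = T
        while v != S:
            cap[parent[v]][v] -= f; cap[v][parent[v]] += f; v = parent[v]
    # Nodes still reachable from S lie on the source side (label 0).
    seen = [False] * N
    seen[S] = True
    q = deque([S])
    while q:
        u = q.popleft()
        for v in range(N):
            if not seen[v] and cap[u][v] > 1e-12:
                seen[v] = True; q.append(v)
    return [0 if seen[i] else 1 for i in range(n)]

# Toy example: four pixels in a chain; the first two prefer background (0),
# the last two prefer foreground (1); smoothness lam = 1.0.
labels = min_cut_labeling(
    [(0.0, 5.0), (0.0, 4.0), (3.0, 1.0), (4.0, 0.0)],
    [(0, 1), (1, 2), (2, 3)], 1.0)
# -> [0, 0, 1, 1]: one boundary, the globally minimal energy for this toy
```

In the paper's setting the labels additionally range over discrete depth values, which requires a richer graph construction; the key point the toy preserves is that the whole labeling is recovered by one max-flow computation rather than by independent per-pixel decisions.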

This document was originally published in Proc. of the 6th International Conference on 3D Digital Imaging and Modeling (3DIM'07), August 21-23, 2007, Montreal, Quebec, Canada.