Reconstructing the World’s Museums

Every day, people use Google Maps, which integrates satellite, aerial, and street-level imagery, as a tool to find directions and to explore the world. However, because it is impossible to take pictures of indoor environments from aerial viewpoints, indoor visualization is limited to ground-level data, hampering effective and intuitive remote navigation of large-scale indoor spaces.

In the paper Reconstructing the World’s Museums, one of the Google Excellent Papers for 2012, Princeton Assistant Professor and former Google intern +Jianxiong Xiao presents a 3D reconstruction and visualization system for large indoor environments. Along with +Yasutaka Furukawa, Assistant Professor at Washington University in St. Louis and former Google Software Engineer, Xiao describes how their system uses image and linear laser range sensor data from the +Google Art Project, and a new algorithm called Inverse CSG (Constructive Solid Geometry), to create photorealistic aerial renderings of indoor scenes.

Developed when Xiao interned at Google under the supervision of Furukawa, the algorithm first divides a three-dimensional space into 2D “slices”, using rectangles as “geometric primitives” to solve a 2D CSG problem for each slice. The algorithm then stacks the “slices”, generating 3D geometric primitives from the 2D reconstructions, and finally solves for a 3D CSG model that effectively reconstructs the free-space volume of a room. By combining image data with the 3D reconstruction, the authors demonstrate a method of producing texture-mapped 3D models that can be rendered from aerial viewpoints, allowing remote explorers to intuitively navigate a reconstructed floorplan and transition to Street View images to see fine detail.
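To give a rough feel for the per-slice step, here is a toy sketch (not the authors’ implementation) of expressing a 2D free-space region as a combination of rectangle primitives. For simplicity it works on a boolean occupancy grid and uses union only, greedily picking the rectangle that covers the most still-uncovered free space; the actual Inverse CSG formulation in the paper is more general.

```python
import numpy as np

# Toy 2D "slice": free space is an L-shaped region on an 8x8 grid.
# Hypothetical illustration of the idea: approximate the free-space
# mask as a union of axis-aligned rectangle primitives.
target = np.zeros((8, 8), dtype=bool)
target[1:7, 1:4] = True   # vertical bar of the "L"
target[4:7, 1:7] = True   # horizontal bar of the "L"

def best_rectangle(remaining):
    """Return the rectangle (r0, r1, c0, c1) lying entirely inside free
    space that covers the most still-uncovered target cells."""
    best, best_gain = None, 0
    rows, cols = remaining.shape
    for r0 in range(rows):
        for r1 in range(r0 + 1, rows + 1):
            for c0 in range(cols):
                for c1 in range(c0 + 1, cols + 1):
                    if not target[r0:r1, c0:c1].all():
                        continue  # rectangle leaks into occupied space
                    gain = remaining[r0:r1, c0:c1].sum()
                    if gain > best_gain:
                        best, best_gain = (r0, r1, c0, c1), gain
    return best

primitives = []
covered = np.zeros_like(target)
while (target & ~covered).any():
    r0, r1, c0, c1 = best_rectangle(target & ~covered)
    primitives.append((r0, r1, c0, c1))
    covered[r0:r1, c0:c1] = True

# The union of the chosen rectangles reproduces the free-space mask.
assert (covered == target).all()
print(len(primitives), "rectangle primitives")
```

Stacking such per-slice solutions and merging the resulting boxes is, loosely, how the 3D free-space volume is assembled; the paper solves this jointly and robustly against sensor noise.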

To learn more, read the full paper (linked above), and watch Xiao’s presentation of the paper at the 12th European Conference on Computer Vision, linked below.