Automated registration of point clouds with high resolution photographs and rendering under novel illumination conditions
With the increased computing power of modern hardware, it has become feasible to digitally capture real-world scenes and objects, preserving them indefinitely. Digital capture also provides the flexibility to re-visualize a scene under novel illumination conditions that may never occur at its real location. These two tools, scene capture and redisplay, are the focal point of this proposal.

Scene capture requires recording the spatial and intensity data of a real-world scene. This is accomplished using LIDAR (a laser-based positioning method) and photographic cameras, respectively. Once acquired, the data sets need to be registered together: the computation of a mathematical transform that maps the photographic images onto the spatial data. Typically, this has been done with a significant amount of user intervention or has required the placement of distinguishing markers in the real scene. To remove these requirements and handle large data sets, the performed research submits methods to automatically compute the mathematical transforms between data sets, with far less manual intervention than is typically required in the current state of the art. This is accomplished by posing registration as an optimization problem with an objective function based upon a novel error metric.

The redisplay portion of the research submits a novel rendering equation that is able to take cues from a photograph and realistically insert a synthetic object into the environment depicted in that photograph. This rendering equation allows the object to react realistically to the illumination conditions in the environment, which may be substantially different from the conditions under which the object or scene was captured.
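To make the optimization formulation concrete, the following is a minimal sketch of photograph-to-point-cloud registration: synthetic LIDAR points are projected through a pinhole camera model, and a generic sum-of-squared reprojection errors is minimized over the camera pose parameters. The camera model, parameterization, and error metric here are illustrative stand-ins, not the novel metric proposed in the research.

```python
import numpy as np
from scipy.optimize import minimize

def project(points, params, focal=800.0):
    """Project 3D points with a simple pinhole camera.
    params = [rx, ry, rz, tx, ty, tz]: a small-angle rotation plus translation.
    The small-angle matrix is only approximately a rotation, but it is used
    consistently for both data generation and fitting in this sketch."""
    rx, ry, rz, tx, ty, tz = params
    R = np.array([[1.0, -rz,  ry],
                  [ rz, 1.0, -rx],
                  [-ry,  rx, 1.0]])
    cam = points @ R.T + np.array([tx, ty, tz])
    return focal * cam[:, :2] / cam[:, 2:3]   # perspective divide

def objective(params, points, observed):
    """Sum of squared reprojection errors (a stand-in error metric)."""
    return np.sum((project(points, params) - observed) ** 2)

# Mock data: LIDAR points in front of the camera, and image features
# generated from a hidden "true" camera pose.
rng = np.random.default_rng(0)
points = rng.uniform([-1, -1, 4], [1, 1, 8], size=(100, 3))
true_params = np.array([0.02, -0.01, 0.03, 0.1, -0.05, 0.2])
observed = project(points, true_params)

# Recover the pose by minimizing the objective from a neutral initial guess.
result = minimize(objective, x0=np.zeros(6), args=(points, observed))
print(result.fun)  # residual error near zero once registration succeeds
```

In the actual research problem, the observed 2D features come from the photographs rather than from a known ground-truth pose, and the objective is built on the proposed error metric rather than plain reprojection error; the overall structure of the optimization, however, is the same.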