A relevancy, hierarchical and contextual maximum entropy framework for a data-driven 3D scene generation

Abstract

We introduce a novel Maximum Entropy (MaxEnt) framework that can generate 3D scenes by incorporating relevancy, hierarchical, and contextual constraints on objects in a unified model. This model is formulated as a Gibbs distribution, under the MaxEnt framework, that can be sampled to generate plausible scenes. Unlike existing approaches, which represent a given scene by a single And-Or graph, the relevancy constraint (defined as the frequency with which a given object exists in the training data) requires our approach to sample from multiple And-Or graphs, allowing variability in terms of objects' existence across synthesized scenes. Once an And-Or graph is sampled from the ensemble, the hierarchical constraints are employed to sample the Or-nodes (style variations), and the contextual constraints are subsequently used to enforce the corresponding relations that must be satisfied by the And-nodes. To illustrate the proposed methodology, we use desk scenes composed of objects whose existence, styles, and arrangements (position and orientation) can vary from one scene to the next. The relevancy, hierarchical, and contextual constraints are extracted from a set of training scenes and utilized to generate plausible synthetic scenes that in turn satisfy these constraints. After applying the proposed framework, scenes that are plausible representations of the training examples are automatically generated. © 2014 by the authors; licensee MDPI, Basel, Switzerland.
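The three-stage sampling pipeline described in the abstract can be illustrated with a minimal sketch. All object names, relevancy frequencies, style lists, and the single contextual rule below are hypothetical stand-ins, not values from the paper, and sequential ancestral sampling is used here as a simplified substitute for sampling the full Gibbs distribution:

```python
import random

# Hypothetical relevancy statistics: the frequency with which each object
# appears across the training desk scenes (assumed values, not from the paper).
RELEVANCY = {"monitor": 0.95, "keyboard": 0.90, "mug": 0.55, "lamp": 0.40}

# Hypothetical style alternatives, standing in for the Or-node branches.
STYLES = {
    "monitor": ["flat_panel", "crt"],
    "keyboard": ["full_size", "compact"],
    "mug": ["plain", "handled"],
    "lamp": ["desk", "clip_on"],
}

def sample_scene(rng):
    """Sample one scene: existence (relevancy), style (hierarchy), pose (context)."""
    scene = {}
    for obj, freq in RELEVANCY.items():
        # Relevancy constraint: Bernoulli trial on the object's training frequency
        # selects which And-Or graph (i.e., which set of objects) this scene uses.
        if rng.random() < freq:
            # Hierarchical constraint: choose one Or-node branch (a style).
            style = rng.choice(STYLES[obj])
            # Initial pose: uniform position on the unit desk surface.
            pos = (rng.uniform(0.0, 1.0), rng.uniform(0.0, 1.0))
            scene[obj] = {"style": style, "pos": pos}
    # Contextual constraint (toy example): the keyboard sits in front of the
    # monitor, so shift it toward the desk edge relative to the monitor.
    if "monitor" in scene and "keyboard" in scene:
        mx, my = scene["monitor"]["pos"]
        scene["keyboard"]["pos"] = (mx, max(0.0, my - 0.3))
    return scene

rng = random.Random(0)
scenes = [sample_scene(rng) for _ in range(100)]
```

Because existence is resampled per scene, high-relevancy objects (the monitor) appear in nearly every synthesized scene while low-relevancy ones (the lamp) appear sporadically, which is the variability across And-Or graphs the abstract describes.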

Description

cc-by

Keywords

3D scene generation, And-Or graphs, Maximum entropy

Citation

Dema, M., & Sari-Sarraf, H. (2014). A relevancy, hierarchical and contextual maximum entropy framework for a data-driven 3D scene generation. Entropy, 16(5). https://doi.org/10.3390/e16052568

Collections