Special Section on SIBGRAPI 2020
An unstructured lumigraph based approach to the SVBRDF estimation problem
Introduction
Considering that a surface is composed of one or more materials, its appearance is defined by the way these materials interact with light. In computer graphics, this phenomenon can be simulated using reflectance functions, as the appearance of each material is the result of convolving its local reflectance function with the environment map around its location. Such a function estimates how much of the incoming light is reflected and thus defines the color of a material under a given viewpoint and environment.
Reflectance functions have formulations that vary according to the properties of each material. These functions generally cover four dimensions, representing the polar coordinates of the incoming and outgoing light rays at a point. The number of dimensions can increase or decrease in order to represent specific properties of a surface. A review of models with different numbers of dimensions can be found in Weyrich et al. [1].
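As a rough illustration of such a four-dimensional function (not the model used in this work), the sketch below evaluates a toy Lambertian-plus-Phong reflectance from the polar coordinates of the incoming and outgoing rays, and sums it against discrete environment-map samples. All names, constants, and the crude solid-angle weighting are hypothetical choices for the sketch:

```python
import numpy as np

def phong_brdf(theta_i, phi_i, theta_o, phi_o, kd=0.8, ks=0.2, n=32):
    """Toy 4D reflectance function in polar coordinates:
    a Lambertian term plus a Phong-style specular lobe."""
    # Incoming and outgoing directions reconstructed from polar angles.
    wi = np.array([np.sin(theta_i) * np.cos(phi_i),
                   np.sin(theta_i) * np.sin(phi_i),
                   np.cos(theta_i)])
    wo = np.array([np.sin(theta_o) * np.cos(phi_o),
                   np.sin(theta_o) * np.sin(phi_o),
                   np.cos(theta_o)])
    # Mirror reflection of wi about the surface normal (0, 0, 1).
    r = np.array([-wi[0], -wi[1], wi[2]])
    spec = max(np.dot(r, wo), 0.0) ** n
    return kd / np.pi + ks * spec

def shade_point(env_samples, theta_o, phi_o):
    """Discrete version of the reflectance integral: the outgoing
    radiance is a cosine-weighted sum of BRDF times incoming radiance
    over environment samples (theta_i, phi_i, radiance)."""
    total = 0.0
    d_omega = 2.0 * np.pi / len(env_samples)  # crude equal-weight solid angle
    for theta_i, phi_i, radiance in env_samples:
        total += (phong_brdf(theta_i, phi_i, theta_o, phi_o)
                  * radiance * np.cos(theta_i) * d_omega)
    return total
```

The discrete sum stands in for the convolution with the environment map mentioned above; a real renderer would use a proper solid-angle measure per environment-map texel.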
In many computer graphics areas, such as digital preservation of heritage and realistic rendering, it is important to generate reflectance functions that simulate materials observed in the real world. This way, it is possible to render the appearance of real surfaces in virtual scenes. However, sampling enough data to generate these functions is a cumbersome task. Each material must be observed from many different positions and light settings to provide enough data to properly sample all the dimensions defined by a reflectance model.
Image-based methods have been proposed to deal with this amount of information while retrieving realistic models from real environments. In this work, we investigate previous research regarding image-based appearance preservation approaches and categorize them based on the way incoming light is considered (Section 2). We observed a lack of preservation approaches that assume that the light that comes from the whole environment influences the appearance of a real surface.
In this context, we propose an image-based process that aims to preserve the appearance of surfaces whose reflectance properties are spatially variant. This process considers the whole environment as a source of light over the area to be preserved, and we extend existing work by reconstructing the light from the whole environment to each point in this area. Such an approach should reproduce this phenomenon with more fidelity than existing alternatives, since in the real world the incoming light may change from point to point on a surface, sometimes drastically (e.g., on the two sides of a shadow edge).
To this end, we capture High Dynamic Range (HDR) images of a scene and combine structure from motion and multi-view stereo methods to retrieve the scene geometry and relative camera positions. To preserve the appearance of a chosen surface area inside this scene, we estimate color information about incoming and outgoing light along this area. This information is obtained from a set of unstructured lumigraphs, traced inside the reconstructed scene. Finally, using the scene geometry and the color information as input, we estimate a linear combination of basis BRDFs (Bidirectional Reflectance Distribution Functions) for a 2D grid of points projected over the surface area, thus defining its SVBRDF (Spatially Variant BRDF).
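A per-point linear combination of basis BRDFs of this kind is commonly fitted as a non-negative least-squares problem. The sketch below, which is an illustration under that assumption rather than the paper's actual solver, stacks the response of each basis BRDF under the sampled light/view directions into one column and solves for the mixing weights:

```python
import numpy as np
from scipy.optimize import nnls

def fit_basis_weights(basis_values, observed):
    """For one surface point: basis_values has one row per sampled
    incoming/outgoing direction pair and one column per basis BRDF;
    observed holds the radiance measured for those samples. Returns
    non-negative mixing weights and the fit residual."""
    weights, residual = nnls(basis_values, observed)
    return weights, residual

# Hypothetical usage with synthetic data: 50 direction samples, 3 bases.
rng = np.random.default_rng(0)
B = rng.random((50, 3))
w_true = np.array([0.5, 0.2, 0.3])
w, res = fit_basis_weights(B, B @ w_true)
```

Solving one such small system per grid point yields the spatially varying weight maps that define the SVBRDF.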
Assuming that lighting does not change significantly over time, our approach generates spatially variant reflectance models that can simulate the original surface appearance. As a consequence, it is possible to achieve realistic results using a more flexible acquisition setup.
Related work
We categorize image-based appearance preservation approaches into three groups, based on the way each work deals with the light that comes from the environment during acquisition:
1. Minimizes the influence of incoming light over the surface;
2. Considers only the light that has traveled directly from a light source to the surface (direct lighting);
3. Assumes that the light that comes from the whole environment affects the surface appearance.
Approaches in the first group aim to reduce the influence of
Overview
The basic idea of the proposed method is to trace a set of rays of interest inside a reconstructed 3D scene. For a point p on a surface inside this scene, a set of such rays represents samples of incoming and outgoing light directions. Our method is inspired by the plenoptic function [24], which models the complete flow of light in a scene. We follow the lumigraph approach [25], which reduces the domain of the plenoptic function to four dimensions by considering only the subset of light
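The classic way lumigraph-style methods reduce a ray to four dimensions is the two-plane parameterization: a ray is identified by its intersections with two parallel planes. A minimal sketch of that idea (plane depths and function name are illustrative choices, not taken from the paper):

```python
import numpy as np

def ray_to_lumigraph(origin, direction, z_uv=0.0, z_st=1.0):
    """Two-plane parameterization: reduce a 3D ray to 4D coordinates
    (u, v, s, t), its intersections with parallel planes at depths
    z_uv and z_st."""
    origin = np.asarray(origin, float)
    direction = np.asarray(direction, float)
    if abs(direction[2]) < 1e-9:
        raise ValueError("ray parallel to the parameterization planes")
    t_uv = (z_uv - origin[2]) / direction[2]
    t_st = (z_st - origin[2]) / direction[2]
    u, v = (origin + t_uv * direction)[:2]
    s, t = (origin + t_st * direction)[:2]
    return u, v, s, t
```

With this mapping, storing radiance as a function of (u, v, s, t) captures the subset of rays crossing the slab between the two planes.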
Data acquisition
During data acquisition, omnidirectional images of the scene were taken using a Ladybug 3 camera. This device is composed of six CCD sensors that together cover more than 80% of a full sphere [27]. For each Ladybug camera position, several images were taken at different exposure settings for each sensor to obtain HDR images. The images were rectified to correct distortion caused by the lenses. Several sets of six HDR images were taken, with the Ladybug positioned in different places inside the scene to
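Merging an exposure bracket into an HDR image is typically done by averaging per-exposure irradiance estimates with a weight that trusts well-exposed pixels. The sketch below illustrates that standard scheme under the simplifying assumption of a linear sensor response; it is not the paper's exact pipeline:

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge an exposure bracket (LDR values in [0, 1]) into one HDR
    estimate per pixel. Each image contributes value / exposure_time,
    weighted by a hat function that down-weights under- and
    over-exposed pixels. Assumes a linear sensor response."""
    num = np.zeros_like(np.asarray(images[0], float))
    den = np.zeros_like(num)
    for img, t in zip(images, exposure_times):
        img = np.asarray(img, float)
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight, peak at 0.5
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-9)
```

For a real camera, the nonlinear response curve would first have to be estimated and inverted before this weighted average is meaningful.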
Results and evaluation
Surface patches were sampled and their SVBRDFs estimated using the method described in this work. To evaluate the quality of the results, two approaches were used. In the first one, reference views of a patch are generated from virtual camera perspectives. The proposed SVBRDF estimation method is applied on this patch and linear combinations of BRDF models are found for each point in the patch. Images of the patch are then rendered from the same perspectives as the virtual cameras using the
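Comparisons between a reference view and a re-rendered view of this kind usually reduce to a per-pixel error metric. As a simple stand-in for whatever metric the paper reports, a root-mean-square error over the two images can be sketched as:

```python
import numpy as np

def rmse(reference, rendered):
    """Root-mean-square error between a reference view and the view
    re-rendered from the estimated SVBRDF (same shape, same units)."""
    diff = np.asarray(reference, float) - np.asarray(rendered, float)
    return float(np.sqrt(np.mean(diff ** 2)))
```

For HDR renderings, the error is often computed in a tone-mapped or log domain so that bright pixels do not dominate the average.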
Final remarks
The proposed SVBRDF estimation method was designed to use only HDR images of the scene as input and to consider that the whole environment is a source of light. Furthermore, it assumes that incoming light can change at each surface point and that the surface does not need to be planar. These properties have been proposed based on the literature review and describe key aspects that simplify the acquisition setup, reduce the storage requirements, and model the appearance with realism.
To this end,
CRediT authorship contribution statement
Beatriz Trinchão Andrade: Conceptualization, Methodology, Software, Validation, Investigation, Visualization. Benjamin Resch: Software, Investigation. Hendrik P.A. Lensch: Conceptualization, Methodology, Software, Resources, Supervision. Olga Regina Pereira Bellon: Resources, Supervision, Funding acquisition. Luciano Silva: Resources, Supervision, Funding acquisition.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgments
The authors would like to thank Tobias Häußler, Manuel Finckh and André Carvalho for helpful discussions and code. We also thank the anonymous reviewers for their valuable feedback. This work was supported by the CAPES PDSE program (BEX 0157/12-0) and CNPq.
References (43)
- et al., Rapid material capture through sparse and multiplexed measurements, Comput Graph (2018)
- et al., Sparse-as-possible SVBRDF acquisition, ACM Trans Graph (2016)
- Mobile reflectance estimation (2012)
- et al., Principles of appearance acquisition and representation, Found Trends Comput Graph Vis (2009)
- et al., Digital preservation of Brazilian indigenous artworks: generating high quality textures for 3D models, J Cult Herit (2012)
- et al., Scanning and processing 3D objects for web display, Int Conf 3D Digit Imaging Model (2003)
- et al., Computing consistent normals and colors from photometric data
- et al., Photometric stereo with non-parametric and spatially-varying reflectance, Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR) (2008)
- et al., Image-based reconstruction of spatial appearance and geometric detail, ACM Trans Graph (TOG) (2003)
- et al., A data-driven reflectance model, ACM Trans Graph (TOG) (2003)