Computers & Graphics

Volume 93, December 2020, Pages 95-107

Special Section on SIBGRAPI 2020

An unstructured lumigraph based approach to the SVBRDF estimation problem

https://doi.org/10.1016/j.cag.2020.09.013

Highlights

  • Spatially variant surfaces can be digitally preserved using image-based processes.

  • The SVBRDF estimation method uses only HDR images of a scene as input.

  • Unstructured lumigraphs sample the scene’s plenoptic function during SVBRDF estimation.

  • Considering that incoming light can change at each surface point improves precision.

  • Considering the whole environment as a source of light enables a flexible acquisition setup.

Abstract

Appearance preservation aims to estimate reflectance functions that model the way real materials interact with light. These functions are especially useful in digital preservation of heritage and in realistic rendering, as they reproduce the appearance of real materials in virtual scenes. This work proposes an image-based process that aims to preserve the appearance of surfaces whose reflectance properties are spatially variant. During image acquisition, this process considers the whole environment as a source of light over the area to be preserved and, assuming the environment is static, does not require a controlled capture setup. To achieve this goal, the scene geometry and relative camera positions are approximated from a set of HDR images taken inside the real scene, using a combination of structure from motion and multi-view stereo methods. Based on this data, a set of unstructured lumigraphs is traced on demand inside the reconstructed scene. The color information retrieved from these lumigraphs is then used to estimate a linear combination of basis BRDFs for a grid of points in the surface area, thus defining its SVBRDF. This paper details the proposed method and presents results obtained in real and synthetic settings. It shows that considering the whole environment as a source of light is a viable approach to obtain reliable results and to enable more flexible acquisition setups.

Introduction

Considering that a surface is composed of one or more materials, its appearance is defined by the way these materials interact with light. In computer graphics, this phenomenon can be simulated using reflectance functions, as the appearance of each material is the result of convolving its local reflectance function with the environment map around its location. Such a function estimates how much of the incoming light is reflected and defines the color of a material under a given perspective and environment.
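
As a concrete reference, the interaction described above is usually written as the reflection integral over the hemisphere Omega around the surface normal (standard notation from the literature, not quoted from this excerpt):

    L_o(\mathbf{x}, \omega_o) = \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\mathbf{n} \cdot \omega_i)\, \mathrm{d}\omega_i

Here f_r is the reflectance function at point \mathbf{x}, L_i the incoming radiance, and L_o the outgoing radiance that determines the observed color.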

Reflectance functions have formulations that vary according to the properties of each material. These functions generally cover four dimensions, representing the polar coordinates of the incoming and outgoing light rays at a point. The number of dimensions can increase or decrease to represent specific properties of a surface. A review of models with different numbers of dimensions can be found in Weyrich et al. [1].
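
For instance, a classic BRDF is parameterized by the polar coordinates of the incoming and outgoing directions, and adding the position on the surface yields the spatially variant form used later in this paper (standard notation, shown here for reference):

    f_r(\theta_i, \phi_i, \theta_o, \phi_o) \quad \text{(BRDF, 4D)} \qquad f_r(x, y, \theta_i, \phi_i, \theta_o, \phi_o) \quad \text{(SVBRDF, 6D)}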

In many computer graphics areas, such as digital preservation of heritage and realistic rendering, it is important to generate reflectance functions that simulate materials observed in the real world. This way, it is possible to render the appearance of real surfaces in virtual scenes. However, sampling enough data to generate these functions is a cumbersome task: each material must be observed from many different positions and under many light settings to properly sample all the dimensions defined by a reflectance model.

Image-based methods have been proposed to handle this amount of information while retrieving realistic models from real environments. In this work, we review previous research on image-based appearance preservation and categorize the approaches based on the way incoming light is considered (Section 2). We observed a lack of preservation approaches that assume that the light coming from the whole environment influences the appearance of a real surface.

In this context, we propose an image-based process that aims to preserve the appearance of surfaces whose reflectance properties are spatially variant. This process considers the whole environment as a source of light over the area to be preserved, and we extend existing work by reconstructing the light arriving from the whole environment at each point in this area. Such an approach should reproduce this phenomenon with more fidelity than previous ones, as in the real world the incoming light may change from point to point on a surface, sometimes drastically (e.g., consider the two sides of an edge defined by a shadow).

To this end, we capture High Dynamic Range (HDR) images of a scene and combine structure from motion and multi-view stereo methods to retrieve the scene geometry and relative camera positions. To preserve the appearance of a chosen surface area inside this scene, we estimate color information about incoming and outgoing light along this area. This information is obtained from a set of unstructured lumigraphs traced inside the reconstructed scene. Finally, using the scene geometry and the color information as input, we estimate a linear combination of basis BRDFs (Bidirectional Reflectance Distribution Functions) for a 2D grid of points projected over the surface area, thus defining its SVBRDF (Spatially Variant BRDF).
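
This excerpt does not state how the linear combination is solved; one plausible formulation, sketched below with hypothetical helper names, poses the per-point fit as a small non-negative least-squares problem using SciPy:

    import numpy as np
    from scipy.optimize import nnls  # non-negative least-squares solver

    def fit_brdf_weights(basis_matrix: np.ndarray, observed: np.ndarray):
        """Fit weights w >= 0 such that basis_matrix @ w approximates observed.

        basis_matrix : (m, k) array; column k holds basis BRDF k evaluated at
                       the m sampled (incoming, outgoing) direction pairs of
                       one grid point, already weighted by the incident
                       radiance and the cosine foreshortening term.
        observed     : (m,) outgoing radiance retrieved from the lumigraphs
                       (one color channel).
        """
        weights, residual = nnls(basis_matrix, observed)
        return weights, residual

Solving each grid point independently keeps the problem small, and the non-negativity constraint reflects that mixture weights of basis BRDFs are physically meaningful as non-negative quantities.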

Assuming that lighting does not change significantly over time, our approach generates spatially variant reflectance models that can simulate the original surface appearance. As a consequence, it is possible to achieve realistic results using a more flexible acquisition setup.

Section snippets

Related work

We categorize image-based appearance preservation approaches into three groups, based on the way each work deals with the light that comes from the environment during acquisition:

  1. Minimizes the influence of incoming light over the surface;

  2. Considers only the light that has traveled directly from a light source to the surface (direct lighting);

  3. Assumes that the light that comes from the whole environment affects the surface appearance.

Approaches in the first group aim to reduce the influence of …

Overview

The basic idea of the proposed method is to trace a set of rays of interest inside a reconstructed 3D scene. For a point p on a surface inside this scene, a set of such rays represents samples of incoming and outcoming light directions. Our method is inspired by the plenoptic function [24], which models the complete flow of light in a scene. We follow the lumigraph approach [25], which reduces the domain of the plenoptic function to four dimensions by considering only the subset of light …
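
As a minimal illustration of the data each traced ray carries, a lumigraph sample can be represented as a ray plus the radiance along it (our sketch under assumed names, not the paper's implementation):

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class LumigraphSample:
        """One 4D light-field sample: a ray through the scene and its radiance."""
        origin: np.ndarray     # surface point p the ray passes through, shape (3,)
        direction: np.ndarray  # unit direction toward the observing camera, shape (3,)
        radiance: np.ndarray   # HDR RGB value looked up in the captured images, shape (3,)

    def outgoing_direction(p: np.ndarray, camera_center: np.ndarray) -> np.ndarray:
        """Unit direction of the ray from surface point p to a camera center."""
        d = camera_center - p
        return d / np.linalg.norm(d)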

Data acquisition

During data acquisition, omnidirectional images of the scene were taken using a Ladybug 3 camera. This device is composed of six CCD sensors that cover more than 80% of a full sphere [27]. For each Ladybug camera position, several images were taken at different exposure settings for each sensor to obtain HDR images. The images were rectified to correct the distortion caused by the lenses. Several sets of six HDR images were taken, with the Ladybug positioned in different places inside the scene to …
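
The HDR assembly itself is not detailed in this excerpt; a common scheme for linearized exposures, shown here as an assumption, converts each exposure to a radiance estimate and averages them with a hat weighting that down-weights under- and over-exposed pixels:

    import numpy as np

    def merge_exposures(images, exposure_times):
        """Merge linearized exposures (float arrays in [0, 1]) into an HDR map.

        images         : list of arrays with identical shapes, one per exposure.
        exposure_times : matching list of exposure times in seconds.
        """
        num = np.zeros_like(images[0], dtype=np.float64)
        den = np.zeros_like(images[0], dtype=np.float64)
        for img, t in zip(images, exposure_times):
            w = 1.0 - np.abs(2.0 * img - 1.0)  # trust mid-range pixels most
            num += w * (img / t)               # per-exposure radiance estimate
            den += w
        return num / np.maximum(den, 1e-8)     # avoid division by zero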

Results and evaluation

Surface patches were sampled and their SVBRDFs estimated using the method described in this work. To evaluate the quality of the results, two approaches were used. In the first one, reference views of a patch are generated from virtual camera perspectives. The proposed SVBRDF estimation method is applied to this patch, and linear combinations of BRDF models are found for each point in the patch. Images of the patch are then rendered from the same perspective as the virtual cameras using the …
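
The comparison metric is not named in this excerpt; as one common choice, a per-pixel root-mean-square error between the reference view and the re-rendered view could be computed as follows:

    import numpy as np

    def rmse(reference: np.ndarray, rendered: np.ndarray) -> float:
        """Root-mean-square error between two images of identical shape."""
        diff = reference.astype(np.float64) - rendered.astype(np.float64)
        return float(np.sqrt(np.mean(diff ** 2)))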

Final remarks

The proposed SVBRDF estimation method was designed to use only HDR images of the scene as input and to consider the whole environment as a source of light. Furthermore, it assumes that incoming light can change at each surface point and that the surface does not need to be planar. These properties were chosen based on the literature review and describe key aspects that simplify the acquisition setup, reduce the storage requirements, and model the appearance realistically.

To this end, …

CRediT authorship contribution statement

Beatriz Trinchão Andrade: Conceptualization, Methodology, Software, Validation, Investigation, Visualization. Benjamin Resch: Software, Investigation. Hendrik P.A. Lensch: Conceptualization, Methodology, Software, Resources, Supervision. Olga Regina Pereira Bellon: Resources, Supervision, Funding acquisition. Luciano Silva: Resources, Supervision, Funding acquisition.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

The authors would like to thank Tobias Häußler, Manuel Finckh and André Carvalho for helpful discussions and code. We also thank the anonymous reviewers for their valuable feedback. This work was supported by the CAPES PDSE program (BEX 0157/12-0) and CNPq.

References (43)

  • D. den Brok et al.

    Rapid material capture through sparse and multiplexed measurements

    Comput Graph

    (2018)
  • Z. Zhou et al.

    Sparse-as-possible SVBRDF acquisition

ACM Trans Graph

    (2016)
  • T. Häußler

    Mobile reflectance estimation

    (2012)
  • T. Weyrich et al.

    Principles of appearance acquisition and representation

    Found Trends Comput Graph Vis

    (2009)
  • B.T. Andrade et al.

    Digital preservation of Brazilian indigenous artworks: generating high quality textures for 3D models

    J Cult Herit

    (2012)
  • M. Farouk et al.

    Scanning and processing 3D objects for web display

    Int Conf 3D Digit Imaging Model

    (2003)
  • H. Rushmeier et al.

    Computing consistent normals and colors from photometric data

  • N. Alldrin et al.

    Photometric stereo with non-parametric and spatially-varying reflectance

    Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR)

    (2008)
  • H.P.A. Lensch et al.

    Image-based reconstruction of spatial appearance and geometric detail

ACM Trans Graph

    (2003)
  • W. Matusik et al.

    A data-driven reflectance model

ACM Trans Graph

    (2003)
  • M. Levoy et al.

The digital Michelangelo project: 3D scanning of large statues

    Proceedings of the 27th annual conference on computer graphics and interactive techniques, SIGGRAPH ’00

    (2000)
  • S.R. Marschner et al.

    Image-based BRDF measurement including human skin

    Proceedings of the Eurographics workshop on rendering techniques

    (1999)
  • Y. Sato et al.

    Object shape and reflectance modeling from observation

    Proceedings of the 24th annual conference on computer graphics and interactive techniques, SIGGRAPH ’97

    (1997)
  • C. Schwartz et al.

Design and implementation of practical bidirectional texture function measurement devices focusing on the developments at the University of Bonn

    Sensors

    (2014)
  • K. Kang et al.

    Learning efficient illumination multiplexing for joint capture of reflectance and shape

ACM Trans Graph

    (2019)
  • M. Aittala et al.

    Two-shot SVBRDF capture for stationary materials

ACM Trans Graph

    (2015)
  • R.A. Albert et al.

    Approximate SvBRDF estimation from mobile phone video

    (2018)
  • D. Gao et al.

    Deep inverse rendering for high-resolution SVBRDF estimation from an arbitrary number of images

ACM Trans Graph

    (2019)
  • G. Nam et al.

    Practical SVBRDF acquisition of 3D objects with unstructured flash photography

ACM Trans Graph

    (2018)
  • V. Deschaintre et al.

    Flexible SVBRDF capture with a multi-image deep network

    Comput Graph Forum

    (2019)
  • T. Haber et al.

    Relighting objects from image collections

    Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR)

    (2009)