Computational Visual Media  2019, Vol. 05 Issue (04): 325-336    doi: 10.1007/s41095-019-0150-3
Research Article     
Practical BRDF reconstruction using reliable geometric regions from multi-view stereo
Taishi Ono1, Hiroyuki Kubo1,(✉), Kenichiro Tanaka1, Takuya Funatomi1, Yasuhiro Mukaigawa1
1Nara Institute of Science and Technology, Ikoma, Nara 630-0192, Japan.

Abstract  

In this paper, we present a practical method for reconstructing the bidirectional reflectance distribution function (BRDF) from multiple images of a real object composed of a homogeneous material. The key idea is that the BRDF can be sampled after geometry estimation using multi-view stereo (MVS) techniques. Our contribution is the selection of reliable samples of lighting, surface normal, and viewing directions, which makes the method robust against estimation errors of MVS. Our method is quantitatively evaluated using synthesized images, and its effectiveness is shown via real-world experiments.



Keywords: BRDF reconstruction; multi-view stereo (MVS); photogrammetry; rendering
Received: 06 May 2018      Published: 13 March 2020
Corresponding Author: Hiroyuki Kubo
Cite this article:

Taishi Ono, Hiroyuki Kubo, Kenichiro Tanaka, Takuya Funatomi, Yasuhiro Mukaigawa. Practical BRDF reconstruction using reliable geometric regions from multi-view stereo. Computational Visual Media, 2019, 05(04): 325-336.

URL:

http://cvm.tsinghuajournals.com/10.1007/s41095-019-0150-3     OR     http://cvm.tsinghuajournals.com/Y2019/V05/I04/325

Fig. 1:  Geometry reconstruction result using the commercial software PhotoScan. Each blue square represents the estimated viewpoint of one input image.
Fig. 2:  (a) Example of the multi-view images with ambient lighting for acquiring geometry. (b) Example of a few additional images with one distant light for acquiring reflectance. Note that (a) and (b) were captured from the same viewpoint.
Fig. 3:  Overview of our method. First, we obtain multi-view images under ambient lighting to acquire the geometry. We then obtain a few additional images with only one distant light to acquire the reflectance. Note that each distant light image shares a viewpoint with one of the multi-view images. Next, we reconstruct the geometry using PhotoScan. We then extract reliable regions from the reconstructed geometry following the assumptions introduced in this paper. Optimal lighting and viewing directions in the sense of Nielsen et al. [3] are selected from the acquired lighting and viewing directions, and the reflectance is sampled at the corresponding pixels of the distant light images. Finally, we reconstruct the BRDF from these samples.
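Each reflectance sample in this pipeline pairs a pixel intensity with its lighting, surface normal, and viewing directions. For intuition, the following is a minimal NumPy sketch of how one such sample can be indexed in Rusinkiewicz's half/difference-angle parameterization [21], the coordinate system used by the MERL data that Nielsen et al. [3] operate on; the function names and the omission of the azimuthal difference angle are our simplifications, not the authors' implementation.

```python
import numpy as np

def normalize(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def rusinkiewicz_angles(light, view, normal):
    """Map a (light, view, normal) sample to (theta_h, theta_d).

    Simplified isotropic version: the azimuthal difference angle
    phi_d is omitted. All inputs are 3-vectors in world coordinates.
    """
    l, v, n = normalize(light), normalize(view), normalize(normal)
    h = normalize(l + v)                                    # half vector
    theta_h = np.arccos(np.clip(np.dot(n, h), -1.0, 1.0))  # half angle
    theta_d = np.arccos(np.clip(np.dot(h, l), -1.0, 1.0))  # difference angle
    return theta_h, theta_d

# Example: light and view 90 degrees apart around the normal
# gives theta_h = 0 and theta_d = pi/4.
print(rusinkiewicz_angles([1, 0, 1], [-1, 0, 1], [0, 0, 1]))
```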
Fig. 4:  (a) Example image captured with ambient lighting, for acquiring geometry. (b) Example image captured with a distant light, for acquiring reflectance. (c) Reconstructed normal map using PhotoScan. (d) Error map (in radians) of the angle between the reconstructed surface normal and the ground truth. (e) Comparison to the ground truth (dashed line) of reconstructed BRDFs without considering surface normal error (solid line).
Fig. 5:  Ground truth (left) and reconstructed (right) surface normal. (a) Blurry example. (b) Bumpy example.
Fig. 6:  (a) Reconstructed surface normal map. (b) Acquired curvature. (c) Mask obtained by thresholding the curvature.
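The curvature mask of Fig. 6(c) can be approximated as follows. This is a minimal sketch, assuming curvature is proxied by the finite-difference gradient magnitude of the normal map; the function name and the threshold value are illustrative, not the paper's exact computation.

```python
import numpy as np

def curvature_mask(normal_map, threshold=0.1):
    """Mask out high-curvature pixels of an H x W x 3 unit-normal map.

    Curvature is proxied by the magnitude of the finite-difference
    gradient of the normal field; the threshold is illustrative.
    """
    # Finite differences of each normal component along y and x.
    dy = np.diff(normal_map, axis=0, append=normal_map[-1:, :, :])
    dx = np.diff(normal_map, axis=1, append=normal_map[:, -1:, :])
    # Per-pixel curvature proxy: total variation of the normal.
    curvature = np.sqrt((dx ** 2).sum(-1) + (dy ** 2).sum(-1))
    return curvature < threshold  # True where the surface is reliable

# Example on a synthetic normal map: a flat patch is fully reliable.
flat = np.tile(np.array([0.0, 0.0, 1.0]), (64, 64, 1))
assert curvature_mask(flat).all()
```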
Fig. 7:  (a) Example distant light image. (b) Example rendered image using the reconstructed BRDF and reconstructed geometry. Comparing the intensity distributions in (a) and (b) allows us to determine the best reconstructed BRDF.
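Figure 11 quantifies this comparison via the cosine similarity between the captured and rendered images. Below is a minimal sketch under that reading; the optional mask argument (e.g., to exclude shadowed pixels) is our assumption.

```python
import numpy as np

def image_cosine_similarity(captured, rendered, mask=None):
    """Cosine similarity between two intensity images.

    Treats each image as a flat vector; `mask` (our assumption)
    restricts the score to valid, e.g. non-shadowed, pixels.
    """
    if mask is not None:
        captured, rendered = captured[mask], rendered[mask]
    a, b = captured.ravel(), rendered.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```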
Fig. 8:  Simulated example with blue-acrylic BRDF and Dragon geometry. (a) Example multi-view image with ambient lighting, for acquiring geometry. (b) Example single distant light image, for acquiring reflectance distribution. (c) Surface normal map acquired from multi-view images.
Fig. 9:  Reconstructed BRDFs for alum-bronze, fruitwood-241, pink-fabric, blue-acrylic, pink-fabric2, and green-latex from the MERL BRDF database. Each graph compares the reconstructed BRDF (solid line) to the ground truth (dashed line). Each line represents the transition in the BRDF value when the incident light is at 135° and the viewpoint is located on the same plane as the incident light. The rendered sphere images have a front light at [1,1,1] and a back light at [-1,-1,3]. Above: rendered using ground truth. Below: rendered using reconstructed BRDFs.
Fig. 10: Mean and standard deviation of BRDF reconstruction error. Above: calculated reconstruction error for all lighting and viewing directions. Left to right: reconstruction result without assumptions, using only the curvature assumption, using only the monotonicity assumption, using only the highest intensity assumption, and using all assumptions. Below: reconstruction error around the specular direction.
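The exact error metric behind Fig. 10 is not stated in the caption; the following is a plausible sketch, assuming a relative error averaged over all sampled lighting and viewing directions, with the epsilon guard being our addition.

```python
import numpy as np

def brdf_error_stats(reconstructed, ground_truth, eps=1e-6):
    """Mean and standard deviation of relative BRDF error.

    `reconstructed` and `ground_truth` are arrays of BRDF values
    sampled over the same lighting/viewing directions; the relative
    form and eps guard are our assumptions, not the paper's metric.
    """
    rel = np.abs(reconstructed - ground_truth) / (np.abs(ground_truth) + eps)
    return float(rel.mean()), float(rel.std())
```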
Fig. 11:  In a simulation, we reconstructed the BRDF 10 times and computed the correlation between the cosine similarity and the BRDF reconstruction error. The graphs represent cases using Dragon with alum-bronze (upper left), Stanford Bunny with blue-acrylic (upper right), Dragon with green-latex (middle left), Stanford Bunny with fruitwood-241 (middle right), Dragon with pink-fabric2 (lower left), and Dragon with pink-fabric (lower right).
Fig. 12:  Left: experimental setup. Upper right: example multi-view image with room lighting for acquiring geometry. Middle right: picture with a glossy black sphere for obtaining the light direction. Lower right: example distant light image for acquiring the reflectance distribution with a single distant light.
Fig. 13:  (a) Example distant light image. (b) Example mask image used to acquire the reflectance individually from the hammer’s head and body. These mask images were constructed for each distant light image.
Fig. 14:  Rendered images using the reconstructed BRDF with two lighting environments (Grace Cathedral (left) and Eucalyptus Grove (right), © Paul Debevec 1998, 1999).
Fig. 15:  (a) One of the multi-view images for acquiring geometry. 67 images were used. (b) One of the distant light images for acquiring the reflectance distribution. Eight images were used. (c) One of the shadow mask images. Eight images were used. (d) One of the masks for material separation. Eight images were used.
Fig. 16:  Images rendered using the reconstructed BRDF with two lighting environments (© Paul Debevec 1998, 1999).
Fig. 17:  Rendered sphere with BRDFs reconstructed from the gold part (upper left), black horn part (upper right), black line part (middle left), purple body part (lower left), and red chest part (lower right).
Fig. 18:  (a) Example of multi-view images. (b) Example of distant light images. (c) Example of mask images.
Fig. 19:  Rendered images using the reconstructed BRDF with two lighting environments (© Paul Debevec 1998, 1999).
Fig. 20: (a) Example input multi-view image. (b, c) Images rendered using the reconstructed BRDFs.
[1]   Ullman, S. The interpretation of structure from motion. Proceedings of the Royal Society of London. Series B, Biological Sciences Vol. 203, No. 1153, 405-426, 1979.
[2]   Lu, F.; Matsushita, Y.; Sato, I.; Okabe, T.; Sato, Y. From intensity profile to surface normal: Photometric stereo for unknown light sources and isotropic reflectances. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 37, No. 10, 1999-2012, 2015.
[3]   Nielsen, J. B.; Jensen, H. W.; Ramamoorthi, R. On optimal, minimal BRDF sampling for reflectance acquisition. ACM Transactions on Graphics Vol. 34, No. 6, Article No. 186, 2015.
[4]   Seitz, S. M.; Dyer, C. R. Photorealistic scene reconstruction by voxel coloring. International Journal of Computer Vision Vol. 35, No. 2, 151-173, 1999.
[5]   Kutulakos, K. N.; Seitz, S. M. A theory of shape by space carving. In: Proceedings of the 7th IEEE International Conference on Computer Vision, Vol. 1, 307-314, 1999.
[6]   Vogiatzis, G.; Torr, P. H. S.; Cipolla, R. Multi-view stereo via volumetric graph-cuts. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 2, 391-398, 2005.
[7]   Koenderink, J. J.; van Doorn, A. J. Photometric invariants related to solid shape. In: Shape from Shading. MIT Press, 301-321, 1989.
[8]   Zisserman, A.; Giblin, P.; Blake, A. The information available to a moving observer from specularities. Image and Vision Computing Vol. 7, No. 1, 38-42, 1989.
[9]   Furukawa, Y.; Ponce, J. Accurate, dense, and robust multiview stereopsis. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 32, No. 8, 1362-1376, 2010.
[10]   Treuille, A.; Hertzmann, A.; Seitz, S. M. Example-based stereo with general BRDFs. In: Computer Vision - ECCV 2004. Lecture Notes in Computer Science, Vol. 3022. Pajdla, T.; Matas, J. Eds. Springer Berlin Heidelberg, 457-469, 2004.
[11]   Chandraker, M.; Reddy, D.; Wang, Y. Z.; Ramamoorthi, R. What object motion reveals about shape with unknown BRDF and lighting. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2523-2530, 2013.
[12]   Chandraker, M. What camera motion reveals about shape with unknown BRDF. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2179-2186, 2014.
[13]   Li, Z. Q.; Xu, Z. X.; Ramamoorthi, R.; Chandraker, M. Robust energy minimization for BRDF-invariant shape from light fields. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 578-586, 2017.
[14]   Schwartz, C.; Klein, R. Acquisition and presentation of virtual surrogates for cultural heritage artefacts. In: Proceedings of the EVA, 50-57, 2012.
[15]   Xia, R.; Dong, Y.; Peers, P.; Tong, X. Recovering shape and spatially-varying surface reflectance under unknown illumination. ACM Transactions on Graphics Vol. 35, No. 6, Article No. 187, 2016.
[16]   Oxholm, G.; Nishino, K. Shape and reflectance estimation in the wild. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 38, No. 2, 376-389, 2016.
[17]   Erb, W. Computer-controlled gonioreflectometer for the measurement of spectral reflection characteristics. Applied Optics Vol. 19, No. 22, 3789-3794, 1980.
[18]   Miyashita, L.; Watanabe, Y.; Ishikawa, M. Rapid SVBRDF measurement by algebraic solution based on adaptive illumination. In: Proceedings of the 2nd International Conference on 3D Vision, 232-239, 2014.
[19]   Marschner, S. R.; Westin, S. H.; Lafortune, E. P. F.; Torrance, K. E.; Greenberg, D. P. Image-based BRDF measurement including human skin. In: Proceedings of the Eurographics Workshop on Rendering, 131-144, 1999.
[20]   Matusik, W.; Pfister, H.; Brand, M.; McMillan, L. Efficient isotropic BRDF measurement. In: Proceedings of the 14th Eurographics Workshop on Rendering, 241-247, 2003.
[21]   Rusinkiewicz, S. M. A new change of variables for efficient BRDF representation. In: Proceedings of the Eurographics Workshop on Rendering, 11-22, 1998.
[22]   Higo, T.; Matsushita, Y.; Ikeuchi, K. Consensus photometric stereo. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1157-1164, 2010.
[23]   Vergne, R.; Pacanowski, R.; Barla, P.; Granier, X.; Schlick, C. Light warping for enhanced surface depiction. ACM Transactions on Graphics Vol. 28, No. 3, Article No. 25, 2009.
[24]   Pharr, M.; Humphreys, G. Physically Based Rendering: From Theory to Implementation, 2nd edn. Morgan Kaufmann Publishers Inc., 2010.