Researcher
Microsoft Research Asia, Beijing

I currently work as a researcher at Microsoft Research Asia. My main research interests include artificial neural networks, realistic rendering, appearance acquisition, video processing, and GPU acceleration. I received my B.E. in Electronic Engineering and my Ph.D. in Computer Science, supervised by Professor Baining Guo, both from Tsinghua University. I love sports, especially badminton, tennis, table tennis, foosball, and soccer.

Address: No.5, Dan Ling Street, Haidian District, Beijing, 100080, PRC
Email: renpeiran@gmail.com


Publications

Image Based Relighting Using Neural Networks

Peiran Ren, Yue Dong, Stephen Lin, Xin Tong, Baining Guo

We present a neural network regression method for relighting real-world scenes from a small number of images. The relighting in this work is formulated as the product of the scene's light transport matrix and new lighting vectors, with the light transport matrix reconstructed from the input images. Based on the observation that the light transport matrix exhibits non-linear local coherence, our method approximates matrix segments using neural networks that model light transport as a non-linear function of light source position and pixel coordinates. Central to this approach is a proposed neural network design which incorporates various elements that facilitate modeling of light transport from a small image set. In contrast to most image-based relighting techniques, this regression-based approach allows input images to be captured under arbitrary illumination conditions, including light sources moved freely by hand. We validate our method with light transport data of real scenes containing complex lighting effects, and demonstrate that fewer input images are required in comparison to related techniques.

ACM SIGGRAPH 2015
[ Project Page ] [ Paper 71MB ] [ Video 178MB ] [ bibtex ]
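
To make the regression concrete, here is a minimal sketch, not the authors' implementation: a tiny feed-forward network is fit to (light position, pixel coordinate) → intensity samples standing in for one segment of the light transport matrix, and relighting then evaluates it at an unseen light position. All sizes, names, and the synthetic transport function are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): fit a tiny MLP modeling light
# transport T(light_xy, pixel_xy) for one segment of the transport matrix,
# then relight by evaluating it under a new light. Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(n_in, n_hidden, n_out):
    return [rng.normal(0, 0.1, (n_in, n_hidden)), np.zeros(n_hidden),
            rng.normal(0, 0.1, (n_hidden, n_out)), np.zeros(n_out)]

def forward(params, x):
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)          # hidden activations
    return h @ W2 + b2, h             # predicted transport value

def sgd_step(params, x, y, lr=1e-2):
    W1, b1, W2, b2 = params
    pred, h = forward(params, x)
    err = pred - y                    # gradient of MSE w.r.t. prediction
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)    # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# Toy training data: inputs are (light_x, light_y, pixel_x, pixel_y);
# targets are observed pixel intensities under that light (synthetic here).
X = rng.uniform(-1, 1, (4096, 4))
y = np.sin(3 * X[:, :1] + X[:, 2:3]) * 0.5 + 0.5   # stand-in transport

params = init_mlp(4, 32, 1)
for epoch in range(200):
    idx = rng.permutation(len(X))
    for i in range(0, len(X), 256):
        sgd_step(params, X[idx[i:i + 256]], y[idx[i:i + 256]])

# Relight one pixel under a new, unseen light position.
new_light = np.array([[0.3, -0.7, 0.1, 0.4]])
print(forward(params, new_light)[0])
```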

Global Illumination with Radiance Regression Functions

Peiran Ren, Jiaping Wang, Minmin Gong, Stephen Lin, Xin Tong, Baining Guo

We present radiance regression functions for fast rendering of global illumination in scenes with dynamic local light sources. A radiance regression function (RRF) represents a non-linear mapping from local and contextual attributes of surface points, such as position, viewing direction, and lighting condition, to their indirect illumination values. The RRF is obtained from precomputed shading samples through regression analysis, which determines a function that best fits the shading data. For a given scene, the shading samples are precomputed by an offline renderer.

The key idea behind our approach is to exploit the nonlinear coherence of the indirect illumination data to make the RRF both compact and fast to evaluate. We model the RRF as a multilayer acyclic feed-forward neural network, which provides a close functional approximation of the indirect illumination and can be efficiently evaluated at run time. To effectively model scenes with spatially variant material properties, we utilize an augmented set of attributes as input to the neural network RRF to reduce the amount of inference that the network needs to perform. To handle scenes with greater geometric complexity, we partition the input space of the RRF model and represent the subspaces with separate, smaller RRFs that can be evaluated more rapidly. As a result, the RRF model scales well to increasingly complex scene geometry and material variation. Because of its compactness and ease of evaluation, the RRF model enables real-time rendering with full global illumination effects, including changing caustics and multiple-bounce high-frequency glossy interreflections.

ACM SIGGRAPH 2013
[ Project Page ] [ Paper 36MB ] [ Video 145MB ] [ bibtex ]
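
As a rough illustration of the run-time side, the sketch below evaluates a partitioned RRF: each spatial cell owns a small feed-forward network mapping a point's attributes to indirect RGB. The weights, the hash-based partition, and all sizes are stand-in assumptions, not the paper's actual data or code.

```python
# Minimal run-time sketch (assumed interface, not the paper's code): evaluate
# a partitioned radiance regression function. Each spatial cell owns a small
# feed-forward network mapping shading attributes -> indirect RGB.
import numpy as np

rng = np.random.default_rng(1)
N_CELLS, N_IN, N_HID = 8, 9, 24   # illustrative sizes

# Pretend these weights came from offline regression against a renderer.
cells = [(rng.normal(0, 0.1, (N_IN, N_HID)), np.zeros(N_HID),
          rng.normal(0, 0.1, (N_HID, 3)), np.zeros(3))
         for _ in range(N_CELLS)]

def cell_index(position):
    # Hypothetical partition: hash the point into one of N_CELLS subspaces.
    return int(abs(hash(tuple(np.floor(position * 4)))) % N_CELLS)

def eval_rrf(position, view_dir, light_pos):
    """Indirect illumination at one surface point.

    The attribute vector concatenates position, viewing direction, and
    light position; the paper augments this with material attributes.
    """
    x = np.concatenate([position, view_dir, light_pos])
    W1, b1, W2, b2 = cells[cell_index(position)]
    h = np.tanh(x @ W1 + b1)
    return np.maximum(h @ W2 + b2, 0.0)   # clamp to valid radiance

print(eval_rrf(np.array([0.2, 0.5, 0.1]),
               np.array([0.0, 0.0, 1.0]),
               np.array([1.0, 2.0, 0.5])))
```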


Pocket Reflectometry

Peiran Ren, Jiaping Wang, John Snyder, Xin Tong, Baining Guo

We present a simple, fast solution for reflectance acquisition using tools that fit into a pocket. Our method captures video of a flat target surface from a fixed video camera lit by a hand-held, moving, linear light source. After processing, we obtain a spatially-varying BRDF (SVBRDF). We introduce a BRDF chart, analogous to a color "checker" chart, which arranges a set of known-BRDF reference tiles over a small card. A sequence of light responses from the chart tiles as well as from points on the target is captured and matched to reconstruct the target's appearance.

We develop a new algorithm for BRDF reconstruction which works directly on these LDR responses, without knowing the light or camera position, or acquiring HDR lighting. It compensates for spatial variation caused by the local (finite distance) camera and light position by warping responses over time to align them to a specular reference. After alignment, we find an optimal linear combination of the Lambertian and purely specular reference responses to match each target point's response. The same weights are then applied to the corresponding (known) reference BRDFs to reconstruct the target point's BRDF. We extend the basic algorithm to also recover varying surface normals by adding two spherical caps for diffuse and specular references to the BRDF chart.
We demonstrate convincing results obtained after less than 30 seconds of data capture, using commercial mobile phone cameras in a casual environment.

ACM SIGGRAPH 2011
[ Project Page ] [ Paper 5MB ] [ Video 55MB ] [ bibtex ]
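
The per-point reconstruction step can be viewed as a small constrained linear fit. The sketch below, a simplification of the method rather than the authors' code, matches a target response with a non-negative combination of reference-tile responses via SciPy's nnls and reuses the weights on the tiles' known BRDFs; all data here is synthetic.

```python
# Minimal per-point sketch (illustrative, not the authors' code): match a
# target point's aligned light response with a non-negative combination of
# the BRDF chart tiles' responses, then reuse the weights on the tiles'
# known BRDFs to reconstruct the target BRDF.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_frames, n_tiles = 120, 6

# Columns of R: each reference tile's response over the video frames,
# already warped/aligned to the specular reference as in the paper.
R = np.abs(rng.normal(size=(n_frames, n_tiles)))
target = R @ np.array([0.7, 0.0, 0.3, 0.0, 0.0, 0.0])  # synthetic target

weights, residual = nnls(R, target)   # non-negative least squares
print("tile weights:", np.round(weights, 3), "residual:", residual)

# The reconstructed BRDF is the same combination of the tiles' known
# BRDFs, stood in here by per-tile parameter vectors.
tile_brdf_params = rng.uniform(size=(n_tiles, 4))
target_brdf = weights @ tile_brdf_params
```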


All-Frequency Rendering of Dynamic, Spatially-Varying Reflectance

Jiaping Wang, Peiran Ren, Minmin Gong, John Snyder, Baining Guo

We describe a technique for real-time rendering of dynamic, spatially-varying BRDFs in static scenes with all-frequency shadows from environmental and point lights. The 6D SVBRDF is represented with a general microfacet model and spherical lobes fit to its 4D spatially-varying normal distribution function (SVNDF). A sum of spherical Gaussians (SGs) provides an accurate approximation with a small number of lobes. Parametric BRDFs are fit on-the-fly using simple analytic expressions; measured BRDFs are fit as a preprocess using nonlinear optimization. Our BRDF representation is compact, allows detailed textures, is closed under products and rotations, and supports reflectance of arbitrarily high specularity.

At run-time, SGs representing the NDF are warped to align the half-angle vector to the lighting direction and multiplied by the microfacet shadowing and Fresnel factors. This yields the relevant 2D view slice on-the-fly at each pixel, still represented in the SG basis. We account for macro-scale shadowing using a new, nonlinear visibility representation based on spherical signed distance functions (SSDFs). SSDFs allow per-pixel interpolation of high-frequency visibility without ghosting and can be multiplied by the BRDF and lighting efficiently on the GPU.

ACM SIGGRAPH Asia 2009
[ Project Page ] [ Paper 20MB ] [ Video 55MB ] [ bibtex ]
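
The run-time algebra rests on standard spherical Gaussian identities, sketched below: an SG G(v; p, λ, μ) = μ exp(λ(v·p − 1)) can be evaluated directly, and the product of two SGs is again an SG in closed form, which is what keeps BRDF, lighting, and visibility products in the SG basis. The code is a generic illustration, not taken from the paper.

```python
# Minimal sketch of the spherical Gaussian (SG) algebra the method relies on:
# G(v; p, lam, mu) = mu * exp(lam * (dot(v, p) - 1)), with p a unit axis.
# The closed-form product below is standard SG math, not code from the paper.
import numpy as np

def sg_eval(v, p, lam, mu):
    return mu * np.exp(lam * (np.dot(v, p) - 1.0))

def sg_product(p1, lam1, mu1, p2, lam2, mu2):
    """Product of two SGs is again an SG (the closure used at run time)."""
    u = lam1 * p1 + lam2 * p2
    lam = np.linalg.norm(u)
    p = u / lam
    mu = mu1 * mu2 * np.exp(lam - lam1 - lam2)
    return p, lam, mu

p1 = np.array([0.0, 0.0, 1.0])
p2 = np.array([0.0, 1.0, 0.0])
p, lam, mu = sg_product(p1, 20.0, 1.0, p2, 5.0, 0.8)

# Sanity check: the product SG matches the pointwise product of the inputs.
v = np.array([0.1, 0.3, 0.9]); v = v / np.linalg.norm(v)
lhs = sg_eval(v, p1, 20.0, 1.0) * sg_eval(v, p2, 5.0, 0.8)
rhs = sg_eval(v, p, lam, mu)
print(lhs, rhs)   # should agree up to floating-point error
```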