

Researcher, Microsoft Research Asia, Beijing

I currently work as a researcher at Microsoft Research Asia. My main research interests include artificial neural networks, realistic rendering, appearance acquisition, video processing, and GPU acceleration. I received my B.E. in Electronic Engineering and my Ph.D. in Computer Science, supervised by Professor Baining Guo, both from Tsinghua University. I love sports, especially badminton, tennis, table tennis, foosball, and soccer.

Image Based Relighting Using Neural Networks
Peiran Ren, Yue Dong, Stephen Lin, Xin Tong, Baining Guo
We present a neural network regression method for relighting real-world scenes from a small number of images. The relighting in this work is formulated as the product of the scene's light transport matrix and new lighting vectors, with the light transport matrix reconstructed from the input images. Based on the observation that nonlinear local coherence exists in the light transport matrix, our method approximates matrix segments using neural networks that model light transport as a nonlinear function of light source position and pixel coordinates. Central to this approach is a proposed neural network design that incorporates various elements to facilitate modeling of light transport from a small image set. In contrast to most image-based relighting techniques, this regression-based approach allows input images to be captured under arbitrary illumination conditions, including light sources moved freely by hand. We validate our method on light transport data of real scenes containing complex lighting effects, and demonstrate that fewer input images are required than in related techniques.
ACM SIGGRAPH 2015
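
The relighting formulation above is linear: stacking each pixel's response to each light source into a transport matrix T, a relit image is just T times a lighting vector. A minimal NumPy sketch (all sizes and values here are hypothetical placeholders, not data from the paper):

```python
import numpy as np

# Hypothetical sizes: a 3-pixel "image" under 4 light sources.
n_pixels, n_lights = 3, 4

# Light transport matrix T: column j is the image seen under light j alone.
rng = np.random.default_rng(0)
T = rng.random((n_pixels, n_lights))

# A new lighting condition: intensities of the 4 sources.
light = np.array([1.0, 0.0, 0.5, 0.0])

# Relighting is the matrix-vector product of transport and lighting.
relit = T @ light

# Superposition: relighting under a sum of lights equals the sum of
# the individually relit images, which is what makes the formulation linear.
l1 = np.array([1.0, 0.0, 0.0, 0.0])
l2 = np.array([0.0, 0.0, 0.5, 0.0])
assert np.allclose(T @ (l1 + l2), T @ l1 + T @ l2)
```

The paper's contribution is reconstructing T from few images by regressing its segments with neural networks; the sketch only shows the product step that turns a reconstructed T into relit images.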

Global Illumination with Radiance Regression Functions
Peiran Ren, Jiaping Wang, Minmin Gong, Stephen Lin, Xin Tong, Baining Guo
We present radiance regression functions for fast rendering of global illumination in scenes with dynamic local light sources. A radiance regression function (RRF) represents a nonlinear mapping from local and contextual attributes of surface points, such as position, viewing direction, and lighting condition, to their indirect illumination values. The RRF is obtained from precomputed shading samples through regression analysis, which determines a function that best fits the shading data. For a given scene, the shading samples are precomputed by an offline renderer. The key idea behind our approach is to exploit the nonlinear coherence of the indirect illumination data to make the RRF both compact and fast to evaluate. We model the RRF as a multilayer acyclic feed-forward neural network, which provides a close functional approximation of the indirect illumination and can be efficiently evaluated at run time. To effectively model scenes with spatially variant material properties, we utilize an augmented set of attributes as input to the neural network RRF to reduce the amount of inference that the network needs to perform. To handle scenes with greater geometric complexity, we partition the input space of the RRF model and represent the subspaces with separate, smaller RRFs that can be evaluated more rapidly. As a result, the RRF model scales well to increasingly complex scene geometry and material variation. Because of its compactness and ease of evaluation, the RRF model enables real-time rendering with full global illumination effects, including changing caustics and multiple-bounce high-frequency glossy interreflections.
ACM SIGGRAPH 2013
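
At run time the RRF is just a small feed-forward network evaluated per shading point. A sketch of that evaluation (layer sizes, the 9-attribute input layout, and the random weights are illustrative assumptions; a real RRF would use weights fit to precomputed shading samples):

```python
import numpy as np

def rrf_forward(x, weights, biases):
    """Evaluate a feed-forward network: tanh hidden layers, linear output."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(h @ W + b)
    return h @ weights[-1] + biases[-1]

# Hypothetical attribute vector: surface position (3), viewing direction (3),
# light position (3) -> 9 inputs; output is RGB indirect illumination.
rng = np.random.default_rng(1)
sizes = [9, 20, 20, 3]
weights = [rng.standard_normal((a, b)) * 0.1 for a, b in zip(sizes, sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

attrs = rng.random(9)
rgb = rrf_forward(attrs, weights, biases)  # one RGB value per evaluation
```

Because evaluation is a handful of small matrix products, it maps naturally onto a per-pixel GPU shader, which is what makes the real-time claim plausible.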

Pocket Reflectometry
Peiran Ren, Jiaping Wang, John Snyder, Xin Tong, Baining Guo
We present a simple, fast solution for reflectance acquisition using tools that fit into a pocket. Our method captures video of a flat target surface from a fixed video camera lit by a handheld, moving, linear light source. After processing, we obtain an SVBRDF. We introduce a BRDF chart, analogous to a color "checker" chart, which arranges a set of known-BRDF reference tiles over a small card. A sequence of light responses from the chart tiles as well as from points on the target is captured and matched to reconstruct the target's appearance.
We develop a new algorithm for BRDF reconstruction which works directly on these LDR responses, without knowing the light or camera position, or acquiring HDR lighting. It compensates for spatial variation caused by the local (finite-distance) camera and light position by warping responses over time to align them to a specular reference. After alignment, we find an optimal linear combination of the Lambertian and purely specular reference responses to match each target point's response. The same weights are then applied to the corresponding (known) reference BRDFs to reconstruct the target point's BRDF. We extend the basic algorithm to also recover varying surface normals by adding two spherical caps for diffuse and specular references to the BRDF chart.
ACM SIGGRAPH 2011
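
The core matching step above is a small linear least-squares problem per target point: express the target's aligned response as a weighted combination of the reference tiles' responses, then reuse those weights on the known reference BRDFs. A toy sketch with two references (the synthetic response vectors and the 0.7/0.3 mix are made up for illustration):

```python
import numpy as np

# Hypothetical data: per-frame responses over 100 frames of the moving light.
rng = np.random.default_rng(2)
n_frames = 100
diffuse_ref = rng.random(n_frames)    # Lambertian reference tile response
specular_ref = rng.random(n_frames)   # specular reference tile response

# Pretend the target point's response is an (unknown) mix of the references.
true_w = np.array([0.7, 0.3])
target = true_w[0] * diffuse_ref + true_w[1] * specular_ref

# Solve for the optimal linear combination in the least-squares sense.
A = np.column_stack([diffuse_ref, specular_ref])
w, *_ = np.linalg.lstsq(A, target, rcond=None)

# The same weights applied to the known reference BRDFs would then
# reconstruct the target point's BRDF.
```

The real method matches many reference tiles under warped, aligned responses; this sketch only shows the weight-fitting idea in its simplest form.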

All-Frequency Rendering of Dynamic, Spatially-Varying Reflectance
Jiaping Wang, Peiran Ren, Minmin Gong, John Snyder, Baining Guo
We describe a technique for real-time rendering of dynamic, spatially-varying BRDFs in static scenes with all-frequency shadows from environmental and point lights. The 6D SVBRDF is represented with a general microfacet model and spherical lobes fit to its 4D spatially-varying normal distribution function (SVNDF). A sum of spherical Gaussians (SGs) provides an accurate approximation with a small number of lobes. Parametric BRDFs are fit on the fly using simple analytic expressions; measured BRDFs are fit as a preprocess using nonlinear optimization. Our BRDF representation is compact, allows detailed textures, is closed under products and rotations, and supports reflectance of arbitrarily high specularity. At runtime, SGs representing the NDF are warped to align the half-angle vector to the lighting direction and multiplied by the microfacet shadowing and Fresnel factors. This yields the relevant 2D view slice on the fly at each pixel, still represented in the SG basis. We account for macro-scale shadowing using a new, nonlinear visibility representation based on spherical signed distance functions (SSDFs). SSDFs allow per-pixel interpolation of high-frequency visibility without ghosting and can be multiplied by the BRDF and lighting efficiently on the GPU.
ACM SIGGRAPH Asia 2009
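
A spherical Gaussian lobe has the standard form G(v) = mu * exp(lambda * (v . p - 1)) for axis p, sharpness lambda, and amplitude mu, and the NDF is approximated as a sum of a few such lobes. A minimal sketch of evaluating that sum (the two-lobe fit and its parameters are invented for illustration, not fitted values from the paper):

```python
import numpy as np

def sg(v, axis, sharpness, amplitude):
    """One spherical Gaussian lobe: amplitude * exp(sharpness * (v.axis - 1))."""
    return amplitude * np.exp(sharpness * (np.dot(v, axis) - 1.0))

def ndf_approx(v, lobes):
    """Approximate an NDF as a sum of spherical Gaussian lobes."""
    return sum(sg(v, *lobe) for lobe in lobes)

# Hypothetical two-lobe fit: one broad lobe plus one sharp specular lobe,
# both centered on the surface normal.
n = np.array([0.0, 0.0, 1.0])
lobes = [(n, 5.0, 0.4), (n, 200.0, 1.0)]

# At a lobe's own axis, v.axis = 1, so each lobe contributes its amplitude.
peak = ndf_approx(n, lobes)  # 0.4 + 1.0
```

The SG basis is convenient precisely because operations the renderer needs, such as products of lobes and rotations (the half-angle warp mentioned above), stay within the basis.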
