Sunday, March 17, 2013

Light Tracing

The typical approach to global illumination rendering using ray tracing is to trace rays from the camera and gather radiance from the light sources. This approach is used to solve the light transport equation (a.k.a. the rendering equation). This is usually what the term "path tracing" refers to. Here's the formulation of the light transport equation that Veach uses in his thesis (see chapter 3, specifically page 90):

\[ L_{\text{o}}(\mathbf{x},\, \omega_{\text{o}}) \,=\, L_e(\mathbf{x},\, \omega_{\text{o}}) \ +\, \int_{\mathcal{S}^2} L_{\text{o}}(\mathbf{x}_\mathcal{M}(\mathbf{x},\, \omega_{\text{i}}),\, -\omega_{\text{i}}) \, f_s(\mathbf{x},\, \omega_{\text{i}} \rightarrow \omega_{\text{o}}) \, d \sigma_{\mathbf{x}}^{\perp} (\omega_{\text{i}}) \]

It's also possible to trace rays from the light sources and gather importance from the camera. This approach, sometimes called "light tracing", is used to solve the importance transport equation. Here's the formulation of the importance transport equation that Veach uses in his thesis (see chapter 3, specifically page 91):

\[ W_{\text{o}}(\mathbf{x},\, \omega_{\text{o}}) \,=\, W_e(\mathbf{x},\, \omega_{\text{o}}) \ +\, \int_{\mathcal{S}^2} W_{\text{o}}(\mathbf{x}_\mathcal{M}(\mathbf{x},\, \omega_{\text{i}}),\, -\omega_{\text{i}}) \, f_s(\mathbf{x},\, \omega_{\text{o}} \rightarrow \omega_{\text{i}}) \, d \sigma_{\mathbf{x}}^{\perp} (\omega_{\text{i}}) \]

Solving the importance transport equation relies on the ability to map a point and direction on the camera's lens/aperture/sensor to a point on the image plane, and the ability to evaluate the emitted importance for a given point and direction. Evaluating the emitted importance is a little tricky because it is not constant across the image plane—it is necessary (at least in my formulation) to take into account factors like the area of the aperture, the angle at which the light passes through the aperture, the distance from the aperture to the image plane, the angle at which the light hits the image plane, and the area of the image plane. 
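
To make this more concrete, here is one common way the emitted importance is written (this roughly follows the derivation in the book Physically Based Rendering; it involves the same factors I listed above, though my own formulation may differ in the details). For an idealized camera whose aperture has area A_lens and whose image plane has area A_film and sits a distance d behind the aperture, the importance emitted along a direction that makes an angle θ with the optical axis (and whose image-plane intersection lands inside the image) works out to:

\[ W_e(\mathbf{x},\, \omega) \,=\, \frac{d^{\,2}}{A_{\text{lens}} \, A_{\text{film}} \, \cos^{4} \theta} \]

The cos⁴θ in the denominator cancels the cosine factor at the aperture in the measurement equation together with the cos³θ / d² that converts differential area on the image plane into differential solid angle at the aperture, so the overall normalization comes out right when the emitted importance is integrated against the incident radiance over the aperture.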

For many typical scenes, light tracing is very slow compared to path tracing from the camera; however, it is much faster for rendering certain types of transport paths (e.g., caustics). Bidirectional path tracing is a generalization of both techniques: it forms paths from all of the possible combinations of two subpaths, one traced from the light and one traced from the sensor/eye.

Bidirectional path tracing uses many parts of light tracing (e.g., originating rays from light sources and evaluating the importance emitted from sensors), so I decided to implement light tracing before full bidirectional path tracing. I rendered the first of the images below using light tracing and the second using path tracing. The light tracing version was rendered in my new renderer (a revamped version of Photorealizer), and the path tracing version in the original Photorealizer. Both images contain only paths of length 2, and, apart from the noise, they are identical, as they should be.
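
To give a rough idea of the structure of the light tracer, here is a simplified, hypothetical sketch of the step that gathers importance: connecting a vertex of a light subpath to a sampled point on the camera aperture, and splatting the resulting contribution onto the image. All of the types and function names below are illustrative stand-ins, not my actual code.

```cpp
#include <cmath>

// Minimal stand-in types; a real renderer would have much richer versions of these.
struct Vec3 { double x = 0, y = 0, z = 0; };
Vec3 operator-(const Vec3 &a, const Vec3 &b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(const Vec3 &a, double s)      { return {a.x * s, a.y * s, a.z * s}; }
Vec3 operator*(const Vec3 &a, const Vec3 &b) { return {a.x * b.x, a.y * b.y, a.z * b.z}; }
double dot(const Vec3 &a, const Vec3 &b)     { return a.x * b.x + a.y * b.y + a.z * b.z; }
double length(const Vec3 &v)                 { return std::sqrt(dot(v, v)); }

// One vertex of a light subpath: position, shading normal, the direction the path
// arrived from, and the accumulated throughput (emitted radiance times BSDF values
// and cosines, divided by the sampling PDFs along the way).
struct LightPathVertex { Vec3 position, normal, incomingDir, throughput; };

// Placeholder camera, scene, BSDF, and image interfaces (illustrative only).
struct CameraSample { Vec3 position, normal; double areaPdf = 1.0; };
struct Camera {
    CameraSample sampleAperture() const { return {}; }
    // Emitted importance W_e for a ray leaving the aperture point in direction w
    // (zero when the ray doesn't map to a point inside the image).
    Vec3 emittedImportance(const Vec3 &aperturePoint, const Vec3 &w) const { return {1, 1, 1}; }
    // Map an aperture point and outgoing direction to image-plane coordinates.
    bool mapToImage(const Vec3 &aperturePoint, const Vec3 &w, double &px, double &py) const {
        px = py = 0; return true;
    }
};
struct Scene { bool visible(const Vec3 &a, const Vec3 &b) const { return true; } };
struct BSDF  { Vec3 evaluate(const Vec3 &in, const Vec3 &out) const { return {1, 1, 1}; } };
struct Image { void splat(double px, double py, const Vec3 &value) {} };

// Connect one light-subpath vertex to a sampled point on the camera aperture and
// splat the resulting contribution onto the image.
void connectToCamera(const LightPathVertex &v, const BSDF &bsdf, const Camera &camera,
                     const Scene &scene, Image &image)
{
    CameraSample cam = camera.sampleAperture();
    Vec3 toCam = cam.position - v.position;
    double dist = length(toCam);
    Vec3 w = toCam * (1.0 / dist);  // unit direction from the surface toward the aperture

    double cosSurface  = dot(v.normal, w);            // cosine at the surface vertex
    double cosAperture = dot(cam.normal, w * -1.0);   // cosine at the aperture
    if (cosSurface <= 0.0 || cosAperture <= 0.0) return;   // simplification: reflective connection only
    if (!scene.visible(v.position, cam.position)) return;  // occluded

    double px, py;
    if (!camera.mapToImage(cam.position, w * -1.0, px, py)) return;  // off the image plane

    // Geometry term between the surface vertex and the aperture point.
    double G = cosSurface * cosAperture / (dist * dist);

    // Contribution = throughput * f_s(incoming -> toward camera) * W_e * G / p_A(aperture point).
    Vec3 f  = bsdf.evaluate(v.incomingDir, w);
    Vec3 We = camera.emittedImportance(cam.position, w * -1.0);
    Vec3 contribution = (v.throughput * f * We) * (G / cam.areaPdf);

    image.splat(px, py, contribution);
}
```

A full light tracer would call something like this at every vertex of every light path, after starting each path by sampling a point and direction on a light source and initializing the throughput with the emitted radiance divided by the sampling PDFs.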

Light tracing (starting on the light and gathering importance)

Path tracing (starting on the sensor and gathering radiance)

Sunday, March 10, 2013

First Steps Towards Bidirectional Path Tracing

I began my implementation of bidirectional path tracing by making a streamlined version of my existing renderer, Photorealizer. I removed little-used features, removed the core recursive path tracing code, and cleaned up and optimized parts of the remaining code. Then I added the following features:

• Light and sensor geometry in the scene. Lights and sensors (i.e., the camera lens) are now associated with objects in the scene, just like other geometry. They can be stored in the same spatial data structures and intersected using the same methods as regular geometry. This is important for implementing bidirectional path tracing, and it is conceptually more similar to Veach's formal, mathematically rigorous definition of a scene.
• Unbiased image sampling and filtering. Now I can sample the image plane in any way I want (e.g., stratified sampling) regardless of the arrangement of pixels, and reconstruction filters can be any size and shape. Previously, I was using the common technique of reconstructing samples into pixels using a weighted average based on the filter weights at the samples (as described on page 308 in Veach's thesis, and in the book Physically Based Rendering).
• More robust BSDF system. Each BSDF now follows the conventions of BSDFs described by Veach (and others) exactly. Reflection and transmission are handled in a unified way, so any BSDF can include both reflection and transmission. Each BSDF also returns the value of the PDF when sampled, and I no longer have to flip the surface normal to be on the same side as the incident direction. (A sketch of this kind of interface appears after this list.)
• Improved scene traversal and intersection system. I made a more robust system for preventing self-intersections, along with some performance improvements and code structure changes.
• Different materials for different instances. Each instance of a model in the scene can have its own material. More generally, I gave containers in my scene hierarchy the ability to override the materials of the objects within them.
• Flexible texture mapping system. Any material property can now be texture-mapped easily. Each property can be set to a 2D varying texture, a 3D varying texture, or a uniform color, all of which are descendants of the same abstract texture class. (Also sketched after this list.)
• Custom linear algebra library. I incorporated a new templatized linear algebra library that I wrote from scratch. The code is much cleaner than what I was using before, because there is only one vector class template and one matrix class template, and most of the operations are written as loops over the components (which I could potentially parallelize in the future).
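
Here is a sketch of the shape of the BSDF interface described in the list above (hypothetical names and signatures, not my actual classes): sampling returns the sampled direction, the BSDF value, and the PDF together, and evaluation treats reflection and transmission uniformly, so the caller never needs to flip the normal.

```cpp
#include <random>

// Hypothetical sketch of a BSDF interface (not the renderer's actual classes).
struct Vector3 { double x = 0, y = 0, z = 0; };
struct Color   { double r = 0, g = 0, b = 0; };

// Everything a sampling call needs to hand back in one place.
struct BSDFSample {
    Vector3 direction;     // sampled scattered direction
    Color value;           // BSDF value for the resulting pair of directions
    double pdf = 0.0;      // probability density of that direction (solid angle measure)
    bool isSpecular = false;
};

class BSDF {
public:
    virtual ~BSDF() = default;

    // Evaluate f_s for a pair of directions. Reflection and transmission are both
    // handled here, so the caller never needs to flip the surface normal.
    virtual Color evaluate(const Vector3 &in, const Vector3 &out,
                           const Vector3 &normal) const = 0;

    // Sample a scattered direction given the other direction; the returned sample
    // carries the direction, the BSDF value, and the PDF together.
    virtual BSDFSample sample(const Vector3 &in, const Vector3 &normal,
                              std::mt19937 &rng) const = 0;

    // PDF of sampling 'out' given 'in'; useful for multiple importance sampling.
    virtual double pdf(const Vector3 &in, const Vector3 &out,
                       const Vector3 &normal) const = 0;
};
```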
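
Along the same lines, here is a rough sketch of the abstract texture idea (again with made-up names): material properties point to a common texture base class, whose descendants include a uniform color and 2D or 3D varying textures.

```cpp
#include <cmath>
#include <memory>

// Hypothetical sketch of the abstract texture hierarchy (illustrative names only).
struct Color  { double r = 0, g = 0, b = 0; };
struct Point2 { double u = 0, v = 0; };
struct Point3 { double x = 0, y = 0, z = 0; };

class Texture {
public:
    virtual ~Texture() = default;
    // Evaluate the property at a surface point, given its 2D UVs and 3D position.
    virtual Color evaluate(const Point2 &uv, const Point3 &p) const = 0;
};

// A uniform color is just a texture that ignores the lookup point.
class UniformColor : public Texture {
public:
    explicit UniformColor(const Color &c) : color(c) {}
    Color evaluate(const Point2 &, const Point3 &) const override { return color; }
private:
    Color color;
};

// A simple 3D varying texture, standing in for image maps, noise, etc.
class Checker3D : public Texture {
public:
    Color evaluate(const Point2 &, const Point3 &p) const override {
        int parity = (int(std::floor(p.x)) + int(std::floor(p.y)) + int(std::floor(p.z))) & 1;
        return parity ? Color{1, 1, 1} : Color{0, 0, 0};
    }
};

// A material property can then simply hold a pointer to a Texture:
// std::shared_ptr<Texture> diffuseColor = std::make_shared<UniformColor>(Color{0.8, 0.2, 0.2});
```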
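
And as an illustration of the vector class template idea from the last item, here is a simplified sketch of the general shape (not the actual library code):

```cpp
#include <cstddef>

// Simplified sketch of a fixed-size vector class template in which every
// operation is a loop over the components (not the actual library code).
template <typename T, std::size_t N>
struct Vector {
    T components[N] = {};

    T &operator[](std::size_t i)             { return components[i]; }
    const T &operator[](std::size_t i) const { return components[i]; }

    Vector operator+(const Vector &other) const {
        Vector result;
        for (std::size_t i = 0; i < N; ++i) result[i] = components[i] + other[i];
        return result;
    }

    Vector operator*(T scalar) const {
        Vector result;
        for (std::size_t i = 0; i < N; ++i) result[i] = components[i] * scalar;
        return result;
    }

    T dot(const Vector &other) const {
        T sum = T(0);
        for (std::size_t i = 0; i < N; ++i) sum += components[i] * other[i];
        return sum;
    }
};

// Aliases for the common cases.
using Vector3f = Vector<float, 3>;
using Vector3d = Vector<double, 3>;
```

Because the component count is a compile-time constant, the compiler can unroll these loops, and they are a natural place to add vectorization or parallelism later.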

An early test render showing the aperture geometry (the black hexagon).

Friday, March 1, 2013

New Project

I am implementing bidirectional path tracing (BPT) and Metropolis Light Transport (MLT) as an independent study project at Penn.

I finished thoroughly reading and understanding Eric Veach's PhD thesis (and some other resources, which I'll talk about later), and now I'm moving on to the implementation. After working through Veach's thesis, I feel like I have a much deeper understanding of the rigorous mathematical theory behind rendering, as well as a strong understanding of the algorithms that I will be implementing. And I think that the math I've learned (in particular, some relatively advanced probability and calculus) will be useful outside the context of rendering, in a wide variety of subject areas.

I'm planning on making some progress on the implementation over spring break. I think the implementation will go pretty smoothly, since I've already put a lot of effort into learning and planning.

Adding bidirectional path tracing and MLT directly to my existing renderer Photorealizer would be very difficult, because it would require significant refactoring of the path tracing core and significant effort to integrate with the many existing features. Instead, I will start by creating a new, streamlined version of Photorealizer. Bidirectional path tracing and MLT will be relatively difficult to implement correctly, so I will first focus on implementing them well and making the code clean, fast, and extensible; I can add extra features later.

I will start by implementing bidirectional path tracing: making both sensors and lights part of the scene, writing utilities for creating and manipulating paths from both sensors and lights, computing the correct contributions of paths, weighting the paths in an optimal way with multiple importance sampling, and more. Then I'd like to add support for participating media (which isn't covered in Veach's thesis, but has been derived by others). Once bidirectional path tracing is complete, I will be able to implement MLT. Like bidirectional path tracing, MLT uses the path integral formulation of light transport. Furthermore, MLT uses bidirectional path tracing directly to create seed paths, and it uses bidirectional path tracing concepts to mutate paths.
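
To give an idea of what those weights look like, here is Veach's power heuristic with exponent 2 (the exponent he found to work well in practice): a path sampled by strategy i, out of n strategies with path densities p_1 through p_n, receives the weight

\[ w_i(\bar{x}) \,=\, \frac{p_i(\bar{x})^{2}}{\sum_{k=1}^{n} p_k(\bar{x})^{2}} \]

The balance heuristic is the same expression with the exponents removed, and in bidirectional path tracing the different strategies correspond to the different ways of splitting a path between the light subpath and the eye subpath.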

I plan to post more specific implementation details, progress updates, and pictures on this new blog.