Saturday, April 13, 2013

Working Bidirectional Path Tracing with MIS

I've successfully implemented bidirectional path tracing. There are still a number of optimizations and other features that I would like to implement, but most of the difficult parts are finished and the core algorithm is working correctly.

I've almost finished implementing multiple importance sampling (MIS). There's just one thing still missing from my MIS implementation: a correct PDF for sensor emission directions. This causes, for example, the contribution of light tracing to be lower than it should be. I will fix this soon. For now, renders still converge to the correct result, just a little more slowly than they should.

There were lots of tricky things to work through and bugs to fix along the way to this point. So far, I've skipped over a lot of that in this blog, but I might go back and add more details later. For now, feel free to let me know if you have any questions about implementing bidirectional path tracing.

Bidirectional path tracing without MIS.

Bidirectional path tracing with MIS. Same number of samples as the image above.

Photorealizer reference (path tracing with direct illumination).

5 comments:

  1. Hi Peter:
    I am working on implementing BDPT these days. It is an algorithm that has confused me for quite a long time, and I finally have some time to work on it.
    I'm able to achieve the same result as my path tracing implementation, noise aside. I haven't implemented MIS yet, and I'd like to add light tracing and zero-length light paths (sampling paths whose light is found during eye path generation).
    I have a question that I can't find an answer to:
    For example, say I have an eye path with 5 vertices and a light path with 3 vertices (the eye point and light point are included). My factor before each path evaluation is 1/(5+3-1), because I don't evaluate light tracing or zero-length light paths. Light tracing is simple, but I'm really confused about evaluating zero-length light paths, in other words, when the light is found during eye path generation. If all lights are non-delta lights, it isn't a problem, but what if there is a scene with two lights, one area light and one point light? What is the factor before path evaluation?
    Let's say your scene is lit only by a point light; then the factor for the above case should also be 1/(5+3-1) (ignoring light tracing), because you won't be able to hit a point light source at all. If your scene is lit by an area light and you are evaluating zero-length light paths, the denominator of the factor should be 5+3 instead of 5+3-1, because you have another way to evaluate a path of length 8.
    What if you have both lights in the scene? How should I decide the factor? How do you solve this problem in your renderer?
    Thanks in advance, and looking forward to your answer. :)

    Replies
    1. My factor before each path evaluation is 1/(5+3-1), because I don't evaluate light tracing or zero-length light paths.

      Sorry, not for each path; it is for the path with a length of 8.

    2. Hi agraphicsguy. I don't support point lights in my renderer, but I do have a few thoughts about how this should work.

      First, is it necessary that your factor be the same for all paths? If you have an area light and a point light in a scene, could you use a factor of 1/(5+3) for paths containing the area light and 1/(5+3-1) for paths containing the point light? Each contributing path either contains the area light or the point light, so there's seemingly never any ambiguity about which version of the factor to use.

      If it were necessary to use the same factor for every path, you could choose 1/(5+3) and just accept that certain light transport cannot be sampled using one of the sampling techniques. Your image would then be slightly too dark, but at least it wouldn't be as dark as it would be if you tried to render the point light with pure unidirectional path tracing from the camera.

      Most importantly, this problem should go away completely after you implement MIS. With MIS in place, the contribution of each path will be weighted proportionally to its likelihood of being constructed using the current sampling technique relative to the other possible sampling techniques. MIS should automatically weight s=0 paths down to zero for the point light, and it should weight paths constructed using the other sampling techniques up to compensate.
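
      As an illustration of that last point, here is a minimal sketch (not Photorealizer's actual code) of a balance heuristic weight for one path. The pdfs array is assumed to hold, for each bidirectional sampling technique, the probability density of constructing this same path with that technique; a point light makes the pdf of the s = 0 technique zero, so that technique's weight automatically drops to zero and the other techniques pick up the contribution.

      // Balance heuristic MIS weight for the technique that actually produced
      // the path. pdfs[i] is the density with which technique i (a path built
      // with i light subpath vertices) would have generated this same path.
      // A delta (point) light gives pdfs[0] == 0, so s = 0 paths get weight 0.
      #include <vector>

      double balanceHeuristicWeight(const std::vector<double>& pdfs, int usedTechnique)
      {
          double sum = 0.0;
          for (double p : pdfs) sum += p;
          if (sum <= 0.0) return 0.0; // no technique can produce this path
          return pdfs[usedTechnique] / sum;
      }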

    3. First of all, I much appreciate your reply.
      Call me Jerry, which is the name I go by at work. :)

      And actually I managed to implement it correctly. It is just like you said: 1/8 for an area/sky light path and 1/7 for a delta light path. My method is to sample a light first and then use the appropriate factor. If light A is sampled, the whole path evaluation won't take other lights into account at all. That gives me an easy way of doing this (see the sketch at the end of this reply).

      And if I may, I'd like to ask another two questions that have confused me over the past couple of days.
      Question 1:
      By expanding the rendering equation, you get a nice symmetric equation describing the light transport; please refer to page 760 of the pbrt book. (Apologies that I can't write it in this reply; I would be really grateful if you could take a look at it.) What I'm wondering is: where is the cosine factor at the camera side of this equation? I don't recall any renderer taking it into consideration, at least not real-time rendering engines. Take a real-world example: you are watching a movie that is displaying a uniform white image, and let's say the radiance of each ray is exactly 1. Obviously the rays that hit the center of the screen will reflect more light toward the viewer, while the ones that hit near the edge of the screen will be a little bit darker, depending on the FOV of the projector; no matter how small the effect is, it should be there. So what is the real-world solution for this issue? Scale the radiance by a factor of 1/cos(theta)? Or ignore it entirely, since theta should be very small?

      Question 2:
      Why do we ignore the PDF of the primary ray in path tracing? I checked pbrt's path tracing implementation and the path tracer in smallvcm (http://www.smallvcm.com/). Neither takes it into account in the path tracer; the PDF of the primary ray is simply one.
      However, you cannot ignore it in light tracing, even if the last light path vertex and the eye point position are already known.
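
      For reference, here is a minimal sketch of the per-light factor described above (a hypothetical helper, not necessarily how either renderer is structured). A path with k total vertices can be built by k different eye/light subpath splits when the light can be hit directly (an area or sky light), but only by k - 1 splits when the light is a delta light, since a random eye ray can never hit a point light and the s = 0 technique is excluded.

      // Uniform (non-MIS) weight for a path with numVertices total vertices,
      // given the kind of light that was sampled for it. For the 8-vertex
      // example above this gives 1/8 for an area/sky light and 1/7 for a
      // point (delta) light.
      double uniformPathWeight(int numVertices, bool lightIsDelta)
      {
          int numTechniques = lightIsDelta ? numVertices - 1 : numVertices;
          return 1.0 / static_cast<double>(numTechniques);
      }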

    4. The equation in the first question is here:
      https://agraphicsguy.files.wordpress.com/2015/12/equation.png
