Due: Monday, March 6th, 11:59pm
You will implement the core routines of a physically-based renderer using a pathtracing algorithm. This assignment reinforces many of the big ideas covered in class recently, including ray-scene intersection, acceleration structures, and physically based lighting and materials. By the time you are done, you'll be able to generate some stunning pictures (given enough patience and CPU time). You will also have the chance to extend the assignment in a plethora of technically challenging and intellectually stimulating directions.
This time, we've split off each part into its own article:
- Part 1: Ray Generation and Scene Intersection
- Part 2: Bounding Volume Hierarchy
- Part 3: Direct Illumination
- Part 4: Indirect Illumination
- Part 5: Adaptive Sampling
All parts are equally weighted at 20 points each, for a total of 100 points.
You'll also want to read these articles:
Using the program
git clone https://gitlab.com/cs184/proj3_1_pathtracer.git
As before, use cmake and make inside a build/ directory to create the executable (how to build and submit).
Command line options
Use these flags between the executable name and the dae file when you invoke the program. For example, to simply run the regular GUI with the CBspheres.dae file and 8 threads, you could type:
./pathtracer -t 8 ../dae/sky/CBspheres_lambertian.dae
If you wanted to save to the spheres_16_4_6.png file on the instructional machines with 16 samples per pixel, 4 samples per light, 6 bounce ray depth, and 480x360 resolution, you might rather use something like this:
./pathtracer -t 8 -s 16 -l 4 -m 6 -r 480 360 -f spheres_16_4_6.png ../dae/sky/CBspheres_lambertian.dae
For this assignment, we've provided a windowless run mode, which is triggered by providing a filename with the -f flag. The program will run in this mode when you are ssh-ed into the instructional machines.
This means that when trying to generate high quality results for your final writeup, you can use the windowless mode to farm out long render jobs to the s349 machines! You'll probably want to use screen to keep your jobs running after you log out of ssh. After the jobs complete, you can view them using the display command, assuming you've ssh-ed in with graphics forwarding enabled (e.g. by passing ssh's -X or -Y option).
Also, please take note of the -t flag! We recommend running with 4-8 threads almost always -- the exception is that you should use -t 1 when debugging with print statements, since cout is not thread safe.
|Flag and parameters|Description|
|---|---|
|-s &lt;INT&gt;|Number of camera rays per pixel (default=1, should be a power of 2)|
|-l &lt;INT&gt;|Number of samples per area light (default=1)|
|-t &lt;INT&gt;|Number of render threads (default=1)|
|-m &lt;INT&gt;|Maximum ray depth (default=1)|
|-f &lt;FILENAME&gt;|Image (.png) file to save output to in windowless mode|
|-r &lt;INT&gt; &lt;INT&gt;|Width and height in pixels of output image (if windowless) or of GUI window|
|-c &lt;FILENAME&gt;|Load camera settings file (mainly to set camera position when windowless)|
|-h|Print command line help message|
Moving the camera (in edit and BVH mode)
|Action|Control|
|---|---|
|Rotate|Left-click and drag|
|Translate|Right-click and drag|
|Zoom in and out|Scroll|

|Command|Key|
|---|---|
|Mesh-edit mode (default)| |
|BVH visualizer mode| |
|Descend to left/right child (BVH viz)| |
|Move up to parent node (BVH viz)| |
|Save a screenshot| |
|Decrease/increase area light samples| |
|Decrease/increase camera rays per pixel| |
|Decrease/increase maximum ray depth| |
|Toggle cell render mode| |
|Dump camera settings to file (including position)| |
Cell render mode lets you use your mouse to highlight a region of interest so that you can see quick results in that area when fiddling with per pixel ray count, per light ray count, or ray depth.
Basic code pipeline
What happens when you invoke pathtracer in the starter code? Logistical details of setup and parallelization:
- The main() function inside main.cpp parses the scene file using a Collada parser (.dae files are COLLADA documents).
- A new Viewer manages the low-level OpenGL details of opening the window, and it passes most user input into the Application.
- The Application owns and sets up its own pathtracer with a camera and scene.
- An infinite loop is started with viewer.start(). The GUI waits for various inputs, the most important of which launch calls to set_up_pathtracer() and start_raytracing().
- set_up_pathtracer() sets up the camera and the scene, notably resulting in a call to PathTracer::build_accel() to set up the BVH.
- In start_raytracing() (implemented in pathtracer.cpp), some machinery runs to divide up the image into "tiles," which are put into a work queue that is processed by the render threads.
- Until the queue is empty, each thread pulls tiles off the queue and runs raytrace_tile() to render them.
- raytrace_tile() calls raytrace_pixel() for each pixel inside its extent. The results are dumped into the pathtracer's sampleBuffer, an instance of an HDRImageBuffer (defined in image.h).
Most of the core rendering loop is left for you to implement.
- In raytrace_pixel(), you will write a loop that calls camera->generate_ray(...) to get camera rays and trace_ray(...) to get the radiance along those rays.
- In trace_ray(...), you will check for a scene intersection using bvh->intersect(...). If there is an intersection, you will accumulate the return value by
  - adding the BSDF's emission,
  - adding direct lighting with estimate_direct_lighting(...), and
  - adding indirect lighting with estimate_indirect_lighting(...), which will recurse to call trace_ray(...).
You will also be implementing the functions to intersect with triangles, spheres, and bounding boxes, the functions to construct and traverse the BVH, and the functions to sample from various BSDFs.
Approximately in order, you will edit (at least) the files
- pathtracer.cpp (part 1)
- camera.cpp (part 1)
- static_scene/triangle.cpp (part 1)
- static_scene/sphere.cpp (part 1)
- bvh.cpp (part 2)
- bbox.cpp (part 2)
- bsdf.cpp (part 3)
- pathtracer.cpp (parts 3-5)
You will also want to skim over several of the other source and header files, since you will be using the classes and functions defined therein.
Rendering Competition Part 1
If you participate in this assignment's rendering competition, we will require you to submit both a competition.png image and a short 5-10 sentence description of what you did to create your entry. You can choose to either emphasize the technical or artistic/modeling merits of your image. If you go the artistic route, you should generate the model yourself using Blender or procedural code that you write. If you go the technical route, you may use models downloaded from the internet, but you should emphasize the extra algorithms you implemented in your short writeup.
Possible ideas include, but are not limited to: building complicated scenes, creating interesting shadows (by customizing your light sources and manipulating objects), making short videos (by writing a script that generates a .dae file per frame), and so on. Just be creative!