A from-scratch software ray tracer built by following Peter Shirley's Ray Tracing in One Weekend (v1.54), implementing chapters 1–11 in C++.
All images are rendered entirely in software — no GPU, no graphics API, just math and rays.
This project is my C++ implementation of a path tracer built from the ground up following Peter Shirley's free book Ray Tracing in One Weekend. A path tracer simulates the physical behavior of light by casting rays from a virtual camera into a scene, bouncing them off surfaces, and accumulating color — producing realistic lighting, shadows, reflections, and refractions.
The result is a program that outputs a .ppm image file with no external dependencies or rendering libraries.
Set up the foundation: writing pixel data directly to a .ppm file (a plain-text image format). This is the renderer's "hello world" — a gradient image that proves the output pipeline works.
Implemented a reusable 3D vector class (vec3) to handle positions, directions, and colors uniformly. Includes operator overloading for dot product, cross product, addition, scalar multiplication, and normalization — the backbone of every calculation in the tracer.
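A minimal sketch of that vector class (illustrative only, not the project's exact implementation) might look like this, with free functions for the products:

```cpp
#include <cmath>

// One small type used uniformly for positions, directions, and colors.
struct vec3 {
    double x, y, z;
    vec3 operator+(const vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    vec3 operator-(const vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    vec3 operator*(double t) const { return {x * t, y * t, z * t}; }
};

double dot(const vec3& a, const vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

vec3 cross(const vec3& a, const vec3& b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}

double length(const vec3& v) { return std::sqrt(dot(v, v)); }

vec3 unit_vector(const vec3& v) { return v * (1.0 / length(v)); }
```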
Introduced the core abstraction: a ray defined as p(t) = origin + t * direction. Built a simple pinhole camera and fired rays through each pixel, blending sky blue and white based on ray direction to produce a gradient background.
Added the first geometry: a sphere. Derived the ray–sphere intersection from the quadratic equation dot((p - C), (p - C)) = R², substituting the ray equation and solving for t. Colored hit pixels red to verify correctness.
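The intersection test above can be sketched as follows (vec3 is a minimal stand-in; `hit_sphere` returns the nearest t, or -1.0 on a miss):

```cpp
#include <cmath>

struct vec3 {
    double x, y, z;
    vec3 operator-(const vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
};
double dot(const vec3& a, const vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Substituting p(t) = origin + t*dir into dot(p - C, p - C) = R^2 gives a
// quadratic a*t^2 + b*t + c = 0; the discriminant's sign maps to
// hit (> 0), tangent (= 0), or miss (< 0).
double hit_sphere(const vec3& center, double radius,
                  const vec3& origin, const vec3& dir) {
    vec3 oc = origin - center;
    double a = dot(dir, dir);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double discriminant = b * b - 4 * a * c;
    if (discriminant < 0) return -1.0;                   // miss
    return (-b - std::sqrt(discriminant)) / (2.0 * a);   // nearest root
}
```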
Computed outward surface normals at hit points and visualized them as colors (mapping XYZ → RGB). Introduced an abstract hittable interface and a hittable_list to support multiple objects in the scene cleanly via polymorphism.
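The shape of that interface can be sketched like this; `hit_record` is simplified and `fixed_hit` is a toy object (not from the book) used only to demonstrate the closest-hit logic:

```cpp
#include <memory>
#include <vector>

struct hit_record { double t; };

// Geometry exposes only hit(); the renderer never sees concrete shapes.
struct hittable {
    virtual ~hittable() = default;
    virtual bool hit(double t_min, double t_max, hit_record& rec) const = 0;
};

// Toy "object" reporting a fixed hit distance, just to show the pattern.
struct fixed_hit : hittable {
    double t_hit;
    explicit fixed_hit(double t) : t_hit(t) {}
    bool hit(double t_min, double t_max, hit_record& rec) const override {
        if (t_hit < t_min || t_hit > t_max) return false;
        rec.t = t_hit;
        return true;
    }
};

// hittable_list keeps the closest hit by shrinking t_max as it scans.
struct hittable_list : hittable {
    std::vector<std::shared_ptr<hittable>> objects;
    bool hit(double t_min, double t_max, hit_record& rec) const override {
        bool hit_anything = false;
        double closest = t_max;
        for (const auto& obj : objects) {
            hit_record tmp;
            if (obj->hit(t_min, closest, tmp)) {
                hit_anything = true;
                closest = tmp.t;
                rec = tmp;
            }
        }
        return hit_anything;
    }
};

// Usage example: the nearer of two objects wins, and hits below t_min
// (here 0.001, the shadow-acne epsilon) are skipped.
double closest_of_two(double a, double b) {
    hittable_list world;
    world.objects.push_back(std::make_shared<fixed_hit>(a));
    world.objects.push_back(std::make_shared<fixed_hit>(b));
    hit_record rec{0.0};
    world.hit(0.001, 1e9, rec);
    return rec.t;
}
```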
Eliminated jagged edges by firing multiple rays per pixel with randomized sub-pixel offsets and averaging the resulting colors. Encapsulated camera logic into a camera class to prepare for future extensions.
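The sampling loop can be sketched as below; `trace` here is a hypothetical placeholder for casting a camera ray and returning a color, reduced to one intensity to keep the example self-contained:

```cpp
#include <random>

double trace(double u, double v) {
    return u * v;  // placeholder "scene"; real code traces a camera ray
}

// Average `samples` jittered sub-pixel samples for pixel (i, j).
double pixel_value(int i, int j, int width, int height,
                   int samples, std::mt19937& rng) {
    std::uniform_real_distribution<double> offset(0.0, 1.0);
    double sum = 0.0;
    for (int s = 0; s < samples; ++s) {
        double u = (i + offset(rng)) / width;   // randomized sub-pixel offset
        double v = (j + offset(rng)) / height;
        sum += trace(u, v);
    }
    return sum / samples;  // averaging smooths edges and reduces noise
}

// Usage example with a seeded generator for reproducibility.
double pixel_at(unsigned seed) {
    std::mt19937 rng(seed);
    return pixel_value(10, 10, 100, 100, 16, rng);
}
```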
Implemented physically-based diffuse shading using a rejection-sampling method to pick random scatter directions within a unit sphere tangent to the surface at the hit point. Applied gamma correction (gamma 2, i.e. sqrt) to match display expectations. Fixed "shadow acne" by ignoring intersections at t values very close to zero.
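The rejection-sampling step can be sketched as follows (vec3 is a minimal stand-in): draw points uniformly in the cube [-1,1]^3 and keep the first one that lands inside the unit sphere.

```cpp
#include <random>

struct vec3 { double x, y, z; };

vec3 random_in_unit_sphere(std::mt19937& rng) {
    std::uniform_real_distribution<double> dist(-1.0, 1.0);
    while (true) {
        vec3 p{dist(rng), dist(rng), dist(rng)};
        // Reject points outside the unit sphere; accepted points are
        // uniformly distributed within it.
        if (p.x*p.x + p.y*p.y + p.z*p.z < 1.0) return p;
    }
}

// Usage example: any accepted sample has squared length below 1.
double sample_length_sq(unsigned seed) {
    std::mt19937 rng(seed);
    vec3 p = random_in_unit_sphere(rng);
    return p.x*p.x + p.y*p.y + p.z*p.z;
}
```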
Added a metal material using the specular reflection formula v - 2 * dot(v, N) * N. Introduced a fuzz parameter to perturb the reflected ray with a random offset, producing polished or brushed-metal appearances.
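The reflection formula translates almost directly into code (n is assumed to be unit length; vec3 is a minimal stand-in):

```cpp
struct vec3 {
    double x, y, z;
    vec3 operator-(const vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    vec3 operator*(double t) const { return {x * t, y * t, z * t}; }
};
double dot(const vec3& a, const vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Mirror v about the surface normal n: v - 2*dot(v, n)*n.
vec3 reflect(const vec3& v, const vec3& n) {
    return v - n * (2.0 * dot(v, n));
}
```

For the fuzz parameter, the reflected direction is then offset by `fuzz * random_in_unit_sphere()` before tracing.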
Implemented transparent refractive materials using Snell's Law (n sin θ = n' sin θ'). Handled total internal reflection (no real solution to Snell's law when exiting a denser medium at a shallow angle). Applied the Schlick approximation for angle-dependent reflectivity. Demonstrated the hollow-sphere trick using a negative-radius inner sphere.
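The refraction and reflectance logic can be sketched as below (vec3 is a minimal stand-in; `etai_over_etat` is n/n', and v and n are assumed unit length). When the discriminant check fails, `refract` returns false and the caller falls back to reflection:

```cpp
#include <cmath>

struct vec3 {
    double x, y, z;
    vec3 operator+(const vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    vec3 operator-(const vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    vec3 operator*(double t) const { return {x * t, y * t, z * t}; }
};
double dot(const vec3& a, const vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

bool refract(const vec3& v, const vec3& n, double etai_over_etat, vec3& refracted) {
    double cos_theta = -dot(v, n);
    double discriminant =
        1.0 - etai_over_etat * etai_over_etat * (1.0 - cos_theta * cos_theta);
    if (discriminant <= 0) return false;  // total internal reflection
    refracted = (v + n * cos_theta) * etai_over_etat - n * std::sqrt(discriminant);
    return true;
}

// Schlick's polynomial approximation for angle-dependent reflectance.
double schlick(double cosine, double ref_idx) {
    double r0 = (1 - ref_idx) / (1 + ref_idx);
    r0 = r0 * r0;
    return r0 + (1 - r0) * std::pow(1 - cosine, 5);
}

// Usage examples: a 45-degree ray leaving glass (n/n' = 1.5) must totally
// internally reflect; entering glass (n/n' = 1/1.5) it refracts fine.
bool can_refract_45deg(double eta) {
    vec3 out{0, 0, 0};
    double s = std::sqrt(0.5);
    return refract(vec3{s, 0, -s}, vec3{0, 0, 1}, eta, out);
}
```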
Extended the camera to support arbitrary position (lookfrom), target (lookat), and orientation (vup), computing a full orthonormal basis (u, v, w). Added a configurable vertical field of view (in degrees), allowing zoom in/out effects.
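Building the basis is three cross/normalize steps; a sketch (vec3 is a minimal stand-in):

```cpp
#include <cmath>

struct vec3 {
    double x, y, z;
    vec3 operator-(const vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    vec3 operator*(double t) const { return {x * t, y * t, z * t}; }
};
double dot(const vec3& a, const vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
vec3 cross(const vec3& a, const vec3& b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
vec3 unit_vector(const vec3& v) { return v * (1.0 / std::sqrt(dot(v, v))); }

struct basis { vec3 u, v, w; };

basis camera_basis(const vec3& lookfrom, const vec3& lookat, const vec3& vup) {
    vec3 w = unit_vector(lookfrom - lookat);  // opposite the view direction
    vec3 u = unit_vector(cross(vup, w));      // camera "right"
    vec3 v = cross(w, u);                     // camera "up" (already unit length)
    return {u, v, w};
}
```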
Simulated real-lens depth-of-field by jittering ray origins across a disk centered at the camera lens. Rays still converge at the focus plane, but objects outside it blur — producing a cinematic shallow depth-of-field effect controlled by aperture size and focus distance.
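The lens-jitter sampling can be sketched like this: rejection-sample a point in the unit disk, then scale it by the lens radius (aperture / 2) to offset the ray origin.

```cpp
#include <cmath>
#include <random>

struct point2 { double x, y; };

point2 random_in_unit_disk(std::mt19937& rng) {
    std::uniform_real_distribution<double> dist(-1.0, 1.0);
    while (true) {
        point2 p{dist(rng), dist(rng)};
        if (p.x * p.x + p.y * p.y < 1.0) return p;  // keep points inside the disk
    }
}

// Usage example: radius of one sampled lens offset for a given aperture.
double sample_radius(double aperture, unsigned seed) {
    std::mt19937 rng(seed);
    point2 p = random_in_unit_disk(rng);
    double lens_radius = aperture / 2.0;
    return lens_radius * std::sqrt(p.x * p.x + p.y * p.y);
}
```

With aperture 0 every ray leaves from the lens center, so the blur vanishes.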
Ray–Object Intersection Math
Every rendered pixel starts as a ray equation; finding intersections means solving quadratics. Understanding how the discriminant maps to "hit / miss / tangent" made the geometry intuitive.
Monte Carlo Path Tracing
Diffuse shading isn't a formula — it's a statistical process. Firing many rays per pixel and averaging their results approximates the rendering integral. More samples → less noise → more accurate light.
Physically Based Materials
Each material type models a real physical phenomenon:
- Lambertian → random scattering (matte surfaces)
- Metal → specular reflection (mirrors, brushed metals)
- Dielectric → refraction + Schlick reflectance (glass, water)
Gamma Correction
Raw linear light values look too dark on a monitor. Applying sqrt() (gamma 2) transforms linear energy into perceptual brightness — a small fix with a big visual impact.
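In code, the correction is one line per color component before writing the PPM bytes (the 255.99 scale factor follows the book's convention):

```cpp
#include <cmath>

// Convert a linear color component in [0, 1] to a gamma-2-corrected byte.
int to_byte_gamma2(double linear) {
    double gamma = std::sqrt(linear);          // gamma 2: brightness^(1/2)
    return static_cast<int>(255.99 * gamma);   // scale into [0, 255]
}
```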
The Hittable Abstraction
Using a pure virtual hittable interface decouples geometry from the rest of the renderer. Adding new shapes later requires only implementing hit() — the color and scattering logic stays untouched.
Camera as a First-Class System
A camera isn't just a position — it's a coordinate frame. Understanding how lookfrom, lookat, vup, and the orthonormal basis (u, v, w) interact gave me a solid mental model for 3D view transforms.
Problem: Diffuse surfaces produced random black specks even on evenly-lit areas.
Cause: Reflected rays were re-intersecting the sphere they originated from due to floating-point rounding errors near t = 0.
Fix: Clamped the valid intersection range to t_min = 0.001 instead of 0, skipping self-intersections.
Problem: Rendered images looked correct in values but appeared far too dark on screen.
Cause: Display devices expect gamma-corrected (non-linear) values, but the renderer output raw linear light.
Fix: Applied a gamma-2 correction by taking sqrt() of each color component before writing to the PPM file.
Problem: The dielectric material produced black artifacts when rays tried to refract at steep angles inside the glass.
Cause: Snell's law has no real solution when n/n' * sin(θ) > 1 — physically, all light must reflect internally.
Fix: Added a discriminant check before computing the refracted direction; fell back to specular reflection when no refraction was possible.
Problem: Early glass implementation never reflected — only refracted — making it look flat and unrealistic.
Cause: The first draft always chose refraction when mathematically possible, ignoring angle-dependent reflectivity.
Fix: Added Christophe Schlick's polynomial approximation to blend probabilistically between reflection and refraction based on the incident angle, producing the characteristic glassy sheen at steep angles.
Requirements: Any C++17-capable compiler (g++, clang++)
# Clone the repo
git clone git@github.com:rosuae/RayTracing-Engine.git
cd RayTracing-Engine
# Compile
g++ -O2 -std=c++17 main.cpp src/*.cpp -o raytracer
# Render (outputs a PPM file)
./raytracer > render.ppm

To view .ppm files: Preview on macOS, GIMP or feh on Linux, GIMP on Windows.
- Ray Tracing in One Weekend — Peter Shirley (free online)
- Original author's GitHub
- Scratchapixel — Ray-Sphere Intersection
- Physically Based Rendering (PBRT) — for going deeper
- Emissive materials and explicit light sources
- Triangle mesh support for loading 3D models
- UV texture mapping (image and procedural textures)
- Bounding Volume Hierarchy (BVH) for performance
- Multi-threaded rendering
This implementation is for educational purposes, following the structure of Peter Shirley's book. The book itself is copyright © 2018 Peter Shirley, All Rights Reserved.
