
Ray Tracing in Lua

This is a work-in-progress. I'll keep posting as I work through the book!!

Last update: <2019-11-30 Sat>

I've never done much graphics programming, but I've always wanted to explore it. I think the most common approach is to go straight to OpenGL, but I tend to approach a domain from first principles, especially when I'm coding for fun, so I turned to Amazon and found Ray Tracing in One Weekend for $3. It tackles everything in C++, and I thought it would be a fun exercise to follow the algorithms laid out in the book while porting all the code to Lua, letting me focus on learning how ray tracing works.

Lua might seem like exactly the wrong choice for ray tracing, which requires high performance and lends itself to a language that can parallelize the computation, but I enjoy working in small languages that use few data structures to accomplish many things, so I'm curious to see how far I can get in Lua. I'm going to stick with luajit, however, as it is quite a bit faster than lua5.3. As such, the code will be lua5.1 compatible, since that's the version of the language luajit targets.

Chapter 1 - Graphical Hello World

The first step is to produce a PPM file that demonstrates we can create images at all. All it does is alter the red and green channels while iterating over the pixels to create a color gradient. Our image size is defined at the top. In the C++ code, it is locked to a 200px x 100px image, but I found that scaling it by 3x to 600x300 is more visually engaging, so the Lua code reflects that. PPM files are easy to create, but very inefficient. In the spirit of "make it work, make it right, make it fast", I'll worry about optimizing the file format later, and probably only if needed.
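For reference, here's what a tiny PPM file looks like (a hand-written 2x2 example, not output from the code below): the P3 header declares plain-text RGB, followed by the width, the height, the maximum channel value, and then one RGB triplet per pixel.

P3
2 2
255
255 0 0    0 255 0
0 0 255    255 255 255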

#! /usr/bin/env luajit

nx = 600
ny = 300
print(string.format("P3\n%s %s\n255\n", nx, ny))
for j = ny-1, 0, -1 do
   for i = 0, nx-1 do
      r = i/nx
      g = j/ny
      b = 0.2
      ir = math.floor(256*r)
      ig = math.floor(256*g)
      ib = math.floor(256*b)
      print(string.format("%s %s %s", ir, ig, ib))
   end
end

I keep a running Emacs shell buffer alongside my code, and I can run this command to display the image after I change the code:

./ray.lua | convert - chapter1.png && emacsclient -n chapter1.png

The result looks like this:

chapter1.png

Chapter 2 - The Vector3 Class

This chapter defines the Vector3 class, which I understand will be used for all sorts of vectors in later chapters. I thought a bit about whether to change to a more functional approach since I'm in Lua, and decided that I'd learn a bit more and have a bit of fun by using Lua's metatables to retain the object-oriented approach in the original C++. So I did a bit of research about how objects are approached in Lua and put together Vector3, a Lua interpretation of the C++ version's vec3.

Vectors are simply Lua tables, and since they are generic, they have accessors supporting both Cartesian and RGB interpretations. This is unconventional, but follows the pattern in the book.

Vector3 = {}

function Vector3:new(o)
   o = o or {}
   self.__index = self
   setmetatable(o, self)
   return o
end

-- Accessors for cartesian coords
function Vector3:x() return self[1] end
function Vector3:y() return self[2] end
function Vector3:z() return self[3] end

-- Accessors for RGB values
function Vector3:r() return self[1] end
function Vector3:g() return self[2] end
function Vector3:b() return self[3] end

Since I'm trying out all my code using the Lua REPL, I also want a nice string representation for vectors. It's easy to attach the tostring method to the metamethod __tostring, which is used by the REPL when printing out results.

-- REPL usability
function Vector3:tostring()
   return string.format("Vector3{%s, %s, %s}", self[1], self[2], self[3])
end
Vector3.__tostring = Vector3.tostring
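A quick REPL check (assuming the definitions above are loaded; the values are just for illustration):

v = Vector3:new{1, 2, 3}
print(v)            -- Vector3{1, 2, 3}
print(v:x(), v:r()) -- 1   1  (same slot, two interpretations)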

Next, I implement a straight port of the C++ mathematical functions for vectors, attaching them to Lua's metamethods where available.

-- Mathematical operations
function Vector3:add(vec)
   return Vector3:new{self[1] + vec[1], self[2] + vec[2], self[3] + vec[3]}
end
Vector3.__add = Vector3.add

function Vector3:sub(vec)
   return Vector3:new{self[1] - vec[1], self[2] - vec[2], self[3] - vec[3]}
end
Vector3.__sub = Vector3.sub

function Vector3:mul(val)
   local t = type(val)
   if t == "number" then return self:nmul(val)
   elseif t == "table" then return self:vmul(val)
   end
end
Vector3.__mul = Vector3.mul

function Vector3:div(val)
   local t = type(val)
   if t == "number" then return self:ndiv(val)
   elseif t == "table" then return self:vdiv(val)
   end
end
Vector3.__div = Vector3.div

function Vector3:negate()
   return Vector3:new{-self[1], -self[2], -self[3]}
end
Vector3.__unm = Vector3.negate

function Vector3:length()
   return math.sqrt(self:squared_length())
end

function Vector3:squared_length()
   return self[1]*self[1] +
      self[2]*self[2] +
      self[3]*self[3]
end

function Vector3:dot(vec)
   return self[1] * vec[1] + self[2] * vec[2] + self[3] * vec[3]
end

function Vector3:cross(vec)
   return Vector3:new{
      self[2] * vec[3] - self[3] * vec[2],
      self[3] * vec[1] - self[1] * vec[3],
      self[1] * vec[2] - self[2] * vec[1]
   }
end

function Vector3:unit_vector()
   local l = self:length()
   return Vector3:new{self[1]/l, self[2]/l, self[3]/l}
end

-- Destructive functions

function Vector3:make_unit_vector()
   local k = 1 / self:length()
   self[1] = self[1]*k
   self[2] = self[2]*k
   self[3] = self[3]*k
end

Many of the functions above are defined in the C++ code as inline and const. I'm not well-versed in C++, but after reading up on these keywords, I think they are both compiler hints that improve performance: inline inlines the function so we avoid dispatch overhead, and const allows the compiler to assume the value won't change, enabling further optimizations. Neither is present in Lua, so I've ignored them above. The C++ implementation also leverages static dispatch for performance, but Lua isn't statically typed, so my implementation will be dynamically dispatched. In the case of mul and div, which can accept either a number (scalar context) or a table (vector context), I use the type function to determine the context and then dispatch to the appropriate implementation. The actual implementations are below.

-- Internal implementations

function Vector3:nmul(num)
   return Vector3:new{self[1]*num, self[2]*num, self[3]*num}
end

function Vector3:vmul(vec)
   return Vector3:new{self[1]*vec[1], self[2]*vec[2], self[3]*vec[3]}
end

function Vector3:ndiv(num)
   return Vector3:new{self[1]/num, self[2]/num, self[3]/num}
end

function Vector3:vdiv(vec)
   return Vector3:new{self[1]/vec[1], self[2]/vec[2], self[3]/vec[3]}
end
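To see the dispatch in action, here's a hypothetical REPL session exercising a few of the operations above (values chosen for illustration):

a = Vector3:new{1, 2, 3}
b = Vector3:new{4, 5, 6}
print(a + b)      -- Vector3{5, 7, 9}
print(a * 2)      -- scalar context via nmul: Vector3{2, 4, 6}
print(a * b)      -- vector context via vmul: Vector3{4, 10, 18}
print(a:dot(b))   -- 32
print(a:cross(b)) -- Vector3{-3, 6, -3}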

This completes our Lua implementation of Vector3. A notable omission from the C++ version is the set of +=, -=, *=, and /= operators: Lua has no compound assignment operators, and therefore no metamethods to hook them. As a result, I'm leaving them out for now, but I might add them later if the ray tracing code becomes too unwieldy without them.

The code at the end of the chapter uses the new vec3 abstraction. We can use our Vector3 class instead.

nx = 600
ny = 300
print(string.format("P3\n%s %s\n255\n", nx, ny))
for j = ny-1, 0, -1 do
   for i = 0, nx-1 do
      v = Vector3:new{i/nx, j/ny, 0.2}
      ir = math.floor(256*v[1])
      ig = math.floor(256*v[2])
      ib = math.floor(256*v[3])
      print(string.format("%s %s %s", ir, ig, ib))
   end
end

With this new library to handle vectors, we can take a look at modeling how light moves!

Chapter 3 - Rays, Camera, Background

We start by defining a Ray class:

-- Ray Class
Ray = {}

-- Parameters:
--   origin: Vector3
--   direction: Vector3
-- Example:
--   Ray:new{Vector3:new{0, 0, 0}, Vector3:new{0, 0, -1}}
function Ray:new(o)
   o = o or {}
   self.__index = self
   setmetatable(o, self)
   return o
end

function Ray:origin()
   return self[1]
end

function Ray:direction()
   return self[2]
end

function Ray:point_at_parameter(t)
   return self[1] + self[2] * t
end

While we're here, we can add our usual handy tostring method and bind it.

function Ray:tostring()
   return string.format("Ray{origin: %s, direction: %s}", self[1], self[2])
end
Ray.__tostring = Ray.tostring
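A quick REPL check of the new class (illustrative values):

r = Ray:new{Vector3:new{0, 0, 0}, Vector3:new{0, 0, -1}}
print(r:point_at_parameter(2))  -- Vector3{0, 0, -2}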

The Color Method

Now we can add the color method:

function Ray:color()
   unit_direction = self:direction():unit_vector()
   t = 0.5 * (unit_direction:y() + 1)
   return Vector3:new{1, 1, 1} * (1 - t) + Vector3:new{0.5, 0.7, 1} * t
end

The color method is worth an explanation. It first normalizes the direction to a unit vector, scaling its length to 1. It then computes t, which takes the unit vector's vertical component (which can vary from -1 to 1, since that's the y-value of the bottom and top of our virtual screen, as you'll see below), adds one to get a value between 0 and 2, and then compresses the range to 0 to 1 by multiplying by 0.5. This is mathematically attractive because t can now be used as a scaling factor for our color vectors. The final part of color is to calculate the color of the pixel.

The general strategy is to blend pure white (an RGB vector of 1, 1, 1) and light blue (an RGB vector of 0.5, 0.7, 1) proportionally to the value of t, which again is really an expression of the vertical offset of the pixel. This means the pixels at the top of the image will be closest to the light blue and the pixels at the bottom closest to white, with the pixels in between blending the two.

You'll notice that it's not a pure vertical gradient, though. That's because the vertical component shrinks when the ray's direction is normalized to a unit vector for pixels with a large horizontal offset, so pixels near the corners avoid the extremes of white and blue, instead remaining a blend. This produces a very natural effect that looks like the sky.
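To make that concrete with the screen geometry used in the next code block: a ray toward the top-center of the screen has direction {0, 1, -1}, whose unit vector has a y component of about 0.71, giving t ≈ 0.85 (mostly blue); a ray toward the top-left corner has direction {-2, 1, -1}, whose unit vector's y component is only about 0.41, giving t ≈ 0.70 (a lighter blend).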

Adding Color to Each Pixel

I then put together the main function, and simply inlined it at the end of the file.

nx = 200
ny = 100
print(string.format("P3\n%s %s\n255\n", nx, ny))
lower_left_corner = Vector3:new{-2, -1, -1}
horizontal = Vector3:new{4, 0, 0}
vertical = Vector3:new{0, 2, 0}
origin = Vector3:new{0, 0, 0}
for j = ny-1, 0, -1 do
   for i = 0, nx-1 do
      u = i / nx
      v = j / ny
      r = Ray:new{origin, lower_left_corner + (horizontal * u) + (vertical * v)}
      col = r:color()
      ir = math.floor(256 * col[1])
      ig = math.floor(256 * col[2])
      ib = math.floor(256 * col[3])
      print(string.format("%s %s %s", ir, ig, ib))
   end
end

This new approach introduces the notion of an origin, which is where the camera is placed. It also introduces the notion of a screen, and places it one unit away, with a Z value of -1. The screen is defined in terms of its lower_left_corner and its horizontal and vertical dimensions. The screen is twice as wide as it is high, with a width of 4 units and a height of 2 units.

In every graphics application, there needs to be some way to map the scene-space the world is calculated in to the pixels in the image being generated. Above, this is handled at the moment we shoot a ray out from the camera. Each Ray requires two vectors: an origin and a direction. The origin of that ray is the camera, but the direction requires some computation.

We're trying to determine the color of a pixel, so our computation is based on the coordinates of that pixel in image-space, but the final value needs to be in scene-space to match the origin vector. We compute the direction by starting from the lower left corner of our image and calculating the offset of the current pixel in the x and y directions. Since we're iterating over each pixel in our image (defined by nx and ny), this offset is calculated as the ratio of how far along we are on each axis to the total image size in that dimension. We then convert to scene-space by multiplying by the coordinate dimensions of the screen (horizontal and vertical) to get the final direction vector. Our ray is now fully defined, and we can ask it what its color is.
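As a quick sanity check of that mapping with the values from the code above: the center pixel i = 100, j = 50 gives u = 0.5 and v = 0.5, so the direction is lower_left_corner + horizontal*0.5 + vertical*0.5 = {-2, -1, -1} + {2, 0, 0} + {0, 1, 0} = {0, 0, -1}, i.e. the ray through the middle of the image points at the middle of the screen, one unit straight ahead of the camera.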

In the original C++, most multiplications of a vector by a scalar, like in the color function for a Ray, are written with the number on the left and the vector on the right. Mathematically it doesn't make a difference, but the order matters during dispatch: with a number on the left, Lua still falls back to Vector3's __mul metamethod, but with the number bound to self, which breaks the method. To remedy this, I changed the order of the operands in the Lua implementation. After converting to png, the output of the program matches the C++ version.
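A quick illustration of why the order matters (hypothetical REPL session):

v = Vector3:new{1, 2, 3}
print(v * 2)  -- Vector3's __mul is called with v as self: Vector3{2, 4, 6}
-- print(2 * v) would also reach Vector3.__mul (Lua falls back to the right
-- operand's metamethod), but with the number 2 bound to self, so the
-- self:vmul(val) call inside fails.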

chapter3.png

Chapter 4 - Adding a Sphere

I'd previously made the decision to move the color function into the Ray class simply because a ray's color can be expected to be constant for a given scene, and I envision all this code running in the context of a constant scene. Chapter 4 modifies Ray's color function to call an additional check called hit_sphere, so I've baked that function into the Ray class as well. This clearly isn't sustainable, since binding scene logic to the Ray class is madness. But for now, it's great, and I'll change it when the need arises.

The modification to color is straightforward: if the ray would hit a sphere centered at {0, 0, -1} with a radius of 0.5, return red. Otherwise, compute the value for the blue/white gradient we defined in Chapter 3.

function Ray:color()
   if (self:hit_sphere(Vector3:new{0, 0, -1}, 0.5)) then
      return Vector3:new{1, 0, 0}
   end
   unit_direction = self:direction():unit_vector()
   t = 0.5 * (unit_direction:y() + 1)
   return Vector3:new{1, 1, 1} * (1 - t) + Vector3:new{0.5, 0.7, 1} * t
end

How do we define hit_sphere? The original text has a very good explanation; the short version is that a point p lies on the sphere when dot(p - center, p - center) = radius*radius, and substituting the ray p(t) = origin + t*direction turns that into a quadratic in t, whose discriminant tells us whether any real intersection exists.

-- Parameters:
--   center: Vector3
--   radius: number
-- Returns: boolean
--
-- Returns true iff this Ray intersects the sphere specified
-- by the provided center and radius, otherwise false.
function Ray:hit_sphere(center, radius)
   oc = self:origin() - center
   a = self:direction():dot(self:direction())
   b = oc:dot(self:direction()) * 2.0
   c = oc:dot(oc) - radius * radius
   discriminant = b * b - a * c * 4
   return discriminant > 0
end

Running this, we obtain the expected result: a red ball in the middle of the screen!

chapter4.png

Chapter 5 - Surface Normals and Multiple Objects

Surface normals are vectors that point directly away from the surface of an object. It's really just the point where the ray hits the object minus the center of the object. In more geometric terms, it's an arrow pointing from the center of the sphere outward towards where the ray hit the sphere. That's it.

Visualizing Normals

To visualize that we're computing this correctly, we can, as an intermediate step, simply visualize the normals by converting them to colors. To do this, we first modify hit_sphere so that, instead of returning a boolean to indicate whether the sphere was hit, it returns the ray parameter t at which the hit occurs; the color method can then recover the hit point (and from it the normal) with point_at_parameter. If the ray does not intersect the sphere, we use a sentinel value of -1 to indicate that.

-- Parameters:
--   center: Vector3
--   radius: number
-- Returns: [number] the parameter t at which this Ray hits the sphere
--
-- Returns the t of the nearer intersection between this Ray and the sphere
-- specified by the provided center and radius, or -1 if the ray misses.
function Ray:hit_sphere(center, radius)
   oc = self:origin() - center
   a = self:direction():dot(self:direction())
   b = oc:dot(self:direction()) * 2.0
   c = oc:dot(oc) - radius * radius
   discriminant = b * b - a * c * 4
   if discriminant < 0 then
      return -1
   else
      return (-b - math.sqrt(discriminant)) / (2 * a)
   end
end

Previously, however, the color method relied on hit_sphere returning a boolean value, and it also assumed the color of the sphere was always red. Let's update color to make use of the parameter t at which the ray hits the sphere and create a color based on its normal.

function Ray:color()
   t = self:hit_sphere(Vector3:new{0, 0, -1}, 0.5)
   if t > 0 then
      normal = (self:point_at_parameter(t) - Vector3:new{0, 0, -1}):unit_vector()
      return Vector3:new{normal:x()+1, normal:y()+1, normal:z()+1} * 0.5
   end

   unit_direction = self:direction():unit_vector()
   t = 0.5 * (unit_direction:y() + 1)
   return Vector3:new{1, 1, 1} * (1 - t) + Vector3:new{0.5, 0.7, 1} * t
end

This yields a sort of rainbow effect, since it visualizes the vector space of all the normals as colors:

chapter5-1.png

Creating Objects

The current code bakes the sphere we've been rendering into the render path itself, but we'll want the rendering code to be independent of what objects it is rendering, so we need an abstraction. One useful way to approach this is based on the insight that we're dealing with objects that a Ray could hit. Naming objects in object-oriented languages is challenging, so Ray Tracing in One Weekend opted to call them hitable. Since we're in a dynamic language, I can elide the contract boilerplate in the original C++, but I need to ensure each object we want to render has a hit method attached. The first step is to pull out the logic for our sphere into a dedicated class, which we can then attach a hit method to.

Sphere = {}

-- Parameters:
--   center: Vector3
--   radius: number
-- Example:
--   Sphere:new{Vector3:new{0, 0, 0}, 0.5}
function Sphere:new(o)
   o = o or {}
   self.__index = self
   setmetatable(o, self)
   return o
end

function Sphere:center()
   return self[1]
end

function Sphere:radius()
   return self[2]
end

As we did with Ray, it's nice to have a good string representation available on the REPL.

function Sphere:tostring()
   return string.format("Sphere{center: %s, radius: %s}", self[1], self[2])
end
Sphere.__tostring = Sphere.tostring

Making Objects Hitable

The hit method itself has an interesting contract that depends upon a hit_record. The Lua version of hit_record looks something like this:

HitRecord = {}

-- Parameters:
--   t: [number] time at which impact occurred
--   p: [Vector3] point of impact
--   normal: [Vector3] normal from point of impact
function HitRecord:new(o)
   o = o or {}
   self.__index = self
   setmetatable(o, self)
   return o
end

function HitRecord:t() return self[1] end
function HitRecord:p() return self[2] end
function HitRecord:normal() return self[3] end

This implementation covers the three values included in the C++ version, but that version also returns a boolean indicating whether the object was hit. If a hit does occur, the C++ code destructively modifies the hit_record parameter passed in. This is typical in C and C++, but not idiomatic in Lua. For the Lua version, we can add a field to hit_record that records whether a hit occurred, and then simply return the hit_record either way:

function HitRecord:hit() return self[1] end
function HitRecord:t() return self[2] end
function HitRecord:p() return self[3] end
function HitRecord:normal() return self[4] end
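Here's a quick sketch of what the two cases look like at the REPL (hypothetical values, assuming the accessor layout above):

miss = HitRecord:new{false}
hit = HitRecord:new{true, 1.5, Vector3:new{0, 0, -0.5}, Vector3:new{0, 0, 1}}
print(miss:hit())         -- false
print(hit:hit(), hit:t()) -- true   1.5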

This alters the contract to make it less surprising in Lua, but doesn't yet make a Sphere hitable. If we follow the original C++, this requires implementing a hit method for Sphere. This is a moment to pause and consider our architecture, since I'd previously decided to attach the hit logic (hit_sphere) to the Ray class. Ultimately, we'll have a table that contains a whole bunch of objects (we'll assume that's called the world) that support the hit method, and as we render the scene, we'll create rays one-by-one and ask them what color they are. Should every Ray contain a reference to the world? Will we want to ask a Ray for its color in the context of more than one world? It doesn't seem likely, since we often think of a ray-traced image as a visual representation of a particular world. The overhead of passing the world pointer to each ray as it is constructed is likely negligible, so we'll assume for now that the hit method will move from the Ray class to the object classes (making them hitable!) and that later on, we'll pass the whole scene/world to each Ray when we construct it. Let's do it!

function Sphere:hit(ray, t_min, t_max)
   oc = ray:origin() - self:center()
   a = ray:direction():dot(ray:direction())
   b = oc:dot(ray:direction())
   c = oc:dot(oc) - self:radius() * self:radius()
   discriminant = b * b - a * c
   if discriminant > 0 then
      time = (-b - math.sqrt(discriminant)) / a
      if time < t_max and time > t_min then
         return self:hit_record(ray, time)
      end
      time = (-b + math.sqrt(discriminant)) / a
      if time < t_max and time > t_min then
         return self:hit_record(ray, time)
      end
   end
   return HitRecord:new{false}
end

You'll notice that this method delegates to a method I created for this implementation called hit_record. I did this to minimize the duplication of code within Sphere. Here it is:

function Sphere:hit_record(ray, time)
   point_of_impact = ray:point_at_parameter(time)
   return HitRecord:new{
      true,
      time,
      point_of_impact,
      (point_of_impact - self:center()) / self:radius()
   }
end

Creating the World

Now that we have the notion of a hitable object, we can make it easy to test whether a Ray hits any of them by creating a World object. In the original code, this is called hitable_list, and it has a hit method just as hitable objects do. We'll apply the same parameter transformation that we did for Sphere and elide the hit record from the parameter list.

World = {}

function World:new(o)
   o = o or {}
   self.__index = self
   setmetatable(o, self)
   return o
end

function World:hit(ray, t_min, t_max)
   closest_hit_record = HitRecord:new{false, t_max}
   for idx, obj in pairs(self) do
      hit_record = obj:hit(ray, t_min, closest_hit_record:t())
      if hit_record:hit() then
         closest_hit_record = hit_record
      end
   end
   return closest_hit_record
end

A couple of notes here. First, we have to iterate through every object no matter what, because we need to return the point that the ray hits that is closest to the camera, and we can't be sure we have it unless we check every object. This also means that the order of iteration through the objects in the world doesn't matter, so I use pairs instead of ipairs. I'm not sure whether ipairs is actually slower in luajit, but given that it adds an additional constraint to the iteration, I suspect it may be. Secondly, using HitRecord as the sole return value from this method significantly cleans up the code when compared to the corresponding C++ code, which uses destructive modification. Since it allocates a new table on each call, however, I suspect there's a significant performance penalty in terms of at least GC, if not allocation. We'll revisit this decision if it becomes a problem, but cleaner code takes precedence, all other things being equal.
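As a quick REPL sanity check of the closest-hit behavior (with values I picked for illustration, not from the book):

sphere_a = Sphere:new{Vector3:new{0, 0, -1}, 0.5}
sphere_b = Sphere:new{Vector3:new{0, 0, -3}, 0.5}
w = World:new{sphere_a, sphere_b}
r = Ray:new{Vector3:new{0, 0, 0}, Vector3:new{0, 0, -1}}
rec = w:hit(r, 0, math.huge)
print(rec:hit(), rec:t())  -- true   0.5  (the nearer sphere wins)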

We now need to make sure that Ray gets a reference to world when it is created. Since the way we're crafting constructors is very liberal, only the documentation needs to be updated. But we still need to define an accessor.

function Ray:world()
   return self[3]
end

But we still haven't updated Ray:color to remove the hardcoded reference to a sphere. Let's do that now.

function Ray:color()
   hit_record = self:world():hit(self, 0, math.huge)
   if hit_record:hit() then
      normal = hit_record:normal()
      return Vector3:new{normal:x()+1, normal:y()+1, normal:z()+1} * 0.5
   end
   unit_direction = self:direction():unit_vector()
   t = 0.5 * (unit_direction:y() + 1)
   return Vector3:new{1, 1, 1} * (1 - t) + Vector3:new{0.5, 0.7, 1} * t
end

This method largely mirrors the version we used in Chapter 4, returning the normal-vector-as-a-color if a hit is detected, and otherwise returning the blue background gradient from Chapter 3.

Finally, we can update our entry point to make use of all our new machinery!

nx = 600
ny = 300
print(string.format("P3\n%s %s\n255\n", nx, ny))
lower_left_corner = Vector3:new{-2, -1, -1}
horizontal = Vector3:new{4, 0, 0}
vertical = Vector3:new{0, 2, 0}
origin = Vector3:new{0, 0, 0}
small_sphere = Sphere:new{Vector3:new{0, 0, -1}, 0.5}
big_sphere = Sphere:new{Vector3:new{0, -100.5, -1}, 100}
world = World:new{small_sphere, big_sphere}
for j = ny-1, 0, -1 do
   for i = 0, nx-1 do
      u = i / nx
      v = j / ny
      ray = Ray:new{origin, lower_left_corner + (horizontal * u) + (vertical * v), world}
      col = ray:color()
      ir = math.floor(256 * col[1])
      ig = math.floor(256 * col[2])
      ib = math.floor(256 * col[3])
      print(string.format("%s %s %s", ir, ig, ib))
   end
end

Aside from the "world creation" code just before the iteration, remarkably little is different from the earlier iterations. Here's the result:

chapter5-2.png

Chapter 6 - Antialiasing

If you look at the images above, particularly if you zoom in, you'll notice "jaggies": the image is jagged at the edges of the spheres. That's because in our current implementation, each pixel either hits the object (and takes on that object's color at that point), or it simply takes on the color of the background at that location: there is no in-between. Antialiasing is a technique that randomly samples multiple points within a pixel, and then blends the colors at those points to compute the final color of the pixel. Antialiasing is measured in terms of the number of samples taken per pixel, commonly from 2x up to 16x. I gather that there's actually a lot more to antialiasing than this, but this is my understanding so far from reading this book and my previous knowledge from tweaking video game settings to trade off between performance and quality.

Introducing the Camera

If we're going to shoot multiple rays per pixel, where should the logic live? The Ray class seems inappropriate, as does World and Vector3. In the real world, analog cameras (using film!) do this naturally. Perhaps a solution is to introduce a class that represents the camera. Instead of the main method, the camera can encapsulate the screen dimensions and ray generation.

Camera = {}

function Camera:new(o)
   o = o or {}
   -- Store the screen geometry under underscore-prefixed keys so the
   -- instance fields don't shadow the accessor methods defined below.
   o._lower_left_corner = Vector3:new{-2, -1, -1}
   o._horizontal = Vector3:new{4, 0, 0}
   o._vertical = Vector3:new{0, 2, 0}
   o._origin = Vector3:new{0, 0, 0}
   self.__index = self
   setmetatable(o, self)
   return o
end

-- Accessors to maintain symmetric access via function calls
function Camera:lower_left_corner() return self._lower_left_corner end
function Camera:horizontal() return self._horizontal end
function Camera:vertical() return self._vertical end
function Camera:origin() return self._origin end

In addition to capturing the basic screen dimensions we're working with, we also want the camera to generate rays for us. To do this, though, it needs a reference to the world. So, we define an accessor, and then proceed with get_ray:

function Camera:world()
   return self[1]
end

function Camera:get_ray(u, v)
   return Ray:new{
      self:origin(),
      self:lower_left_corner() + (self:horizontal() * u) + (self:vertical() * v) - self:origin(),
      self:world()
   }
end

This is essentially the same logic that previously lived in the main method, now encapsulated (aside from subtracting the origin from the direction, which is a no-op while the origin sits at {0, 0, 0}). That means we need to update main to ask the camera to generate multiple rays per pixel.

function main()
   nx = 600
   ny = 300
   ns = 100

   print(string.format("P3\n%s %s\n255\n", nx, ny))

   small_sphere = Sphere:new{Vector3:new{0, 0, -1}, 0.5}
   big_sphere = Sphere:new{Vector3:new{0, -100.5, -1}, 100}
   world = World:new{small_sphere, big_sphere}

   camera = Camera:new{world}

   for j = ny-1, 0, -1 do
      for i = 0, nx-1 do
         col = Vector3:new{0, 0, 0}
         for s = 1, ns do
            u = (i + math.random()) / nx
            v = (j + math.random()) / ny
            ray = camera:get_ray(u, v)
            col = col + ray:color()
         end
         col = col / ns

         ir = math.floor(256 * col[1])
         ig = math.floor(256 * col[2])
         ib = math.floor(256 * col[3])
         print(string.format("%s %s %s", ir, ig, ib))
      end
   end
end

-- Kick off the render
main()

The notable addition here is that we now sample each pixel 100 times and average the color that is returned. Running this takes about 24 seconds on my laptop, and produces this result:

chapter6.png
