How Ray Tracing Works in Computer Graphics, Part 1
In game engines, ray tracing is a means of rendering graphics. At its core, a ray tracing algorithm simply draws a straight line between two objects in a scene and then applies an interaction function. A ray tracing engine starts from some vantage point inside the scene, usually the player or a virtual camera, and sends a set of rays from this point into the scene. Each ray intersects an object inside the scene and is subsequently reflected back. In games, the rays almost always represent visible light, traced from the player into the scene and returned as an image. Hence the name.
The general operation of a ray tracer is as follows (a code sketch comes after the list):
1. Anywhere from thousands to millions of rays are drawn, or "cast," into the scene from the chosen vantage point.
2. The ray tracer finds the intersections between the rays and the objects inside the scene.
3. The physics engine modifies the properties of each ray according to the properties of the intersected object, e.g., an object looks darker in dim light.
4. The modified rays are reflected back to the source.
5. The ray tracing engine interprets the modified, reflected rays into an output format, e.g., some sort of visual representation of the scene.
6. All of the reflected rays are aggregated to form a final output.
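To make those steps concrete, here is a minimal sketch in Python. It is not code from the repository linked below: the "scene" is a single hard-coded sphere, the interaction is just a distance-based shade, and the output is ASCII art, but the six steps above map directly onto the comments.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the nearest hit, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # direction is unit length, so the quadratic's a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def render(width=40, height=20):
    center, radius = (0.0, 0.0, 5.0), 1.5  # one hard-coded sphere as the scene
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            # Step 1: cast a ray from the vantage point (the origin) through pixel (i, j).
            x, y = (i - width / 2) / width, (j - height / 2) / height
            norm = math.sqrt(x * x + y * y + 1.0)
            direction = (x / norm, y / norm, 1.0 / norm)
            # Step 2: find the intersection between the ray and the scene.
            t = intersect_sphere((0.0, 0.0, 0.0), direction, center, radius)
            if t is None:
                row += " "
            else:
                # Steps 3-5: the "interaction" sets the reflected ray's intensity
                # (nearer hits reflect more brightly) and maps it to an output glyph.
                shade = "@%#*+=-:."
                idx = min(int((t - 3.5) / 1.3 * len(shade)), len(shade) - 1)
                row += shade[max(idx, 0)]
        rows.append(row)
    # Step 6: aggregate every reflected ray into the final output.
    print("\n".join(rows))

render()
```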
However, by applying custom physics and scene properties, we can use ray tracing to simulate interactions between any two objects. The ray tracer essentially links two objects together with a ray, and the effects they have on each other are captured by the physics engine. For example, a Wi-Fi network can be modeled using the Godot ray tracer, with "rays" serving as Wi-Fi signals. In this case, the rays still represent a wave-based signal, just not in the visible-light range.
This is the heart of the ray tracer: the code that determines the state of the reflected rays. By adding custom functions inside the physics engine, we can use the ray tracer to model any interaction over distance between two objects in a virtual scene. To see this in action, I've started a repository that shows how ray tracers generalize into versatile physics engines: https://github.com/pjdurst/ray-tracing-basics
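As a sketch of that idea (illustrative only, not the repo's actual API), imagine a tracer that accepts the interaction function as a parameter. Here the interaction applies the standard free-space path loss formula to a 2.4 GHz Wi-Fi signal; the `trace`, `router`, and `laptop` names are made up for this example.

```python
import math

def free_space_path_loss_db(distance_m, freq_mhz=2400.0):
    """Standard free-space path loss: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20.0 * math.log10(distance_m / 1000.0) + 20.0 * math.log10(freq_mhz) + 32.44

def trace(source, target, interact):
    """Link two objects with a ray; the interaction function decides the result."""
    distance = math.dist(source, target)
    return interact(distance)

# A router mounted at 2.5 m and a laptop one room away (coordinates in meters).
router, laptop = (0.0, 0.0, 2.5), (8.0, 9.0, 1.0)
tx_power_dbm = 20.0  # typical consumer router transmit power
rx_power_dbm = tx_power_dbm - trace(router, laptop, free_space_path_loss_db)
print(f"received power: {rx_power_dbm:.1f} dBm")
```

Swapping in a different interaction function, e.g., one that attenuates light instead of a radio signal, is all it takes to turn the same tracer back into a renderer.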
This repo breaks down the ray tracing process and highlights its utility in non-gaming, non-visualization applications, starting from the most bare-bones ray tracer possible. In principle, a ray tracer is used to "see" a virtual object. The virtual object is defined by a mesh file containing the properties that will determine the reflected ray (more on meshes later). The simplest case of all is plotting a cube.
First, we start with the simplest mesh possible: a .txt file made up of geometric elements called 'vertices' and 'edges.' The vertices are the (x,y,z) coordinates that define the bounds of the object. The edges are the lines connecting pairs of vertices. So, in essence, a mesh is a collection of points and the lines connecting them. This mesh represents a simple, texture-less cube, shown below (more to come on textures).
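The repo's exact file layout may differ, but one plausible plain-text layout for a unit cube lists 8 vertices followed by 12 edges given as pairs of vertex indices:

```
# cube.txt: 8 vertices (x y z) followed by 12 edges (vertex-index pairs)
v 0 0 0
v 1 0 0
v 1 1 0
v 0 1 0
v 0 0 1
v 1 0 1
v 1 1 1
v 0 1 1
e 0 1
e 1 2
e 2 3
e 3 0
e 4 5
e 5 6
e 6 7
e 7 4
e 0 4
e 1 5
e 2 6
e 3 7
```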