Video rendering is one of the most computationally expensive tasks a computer performs. A video is a sequence of frames, each frame is an array of pixels, and each pixel carries at least three color components, so the amount of data to compute is enormous. Moreover, the rendered video must look as real as possible, which means modeling light sources, the objects the light falls on, shadows, and so on. Mathematical formulas built on matrix operations make this realism possible. Computers a few decades ago were computationally too weak for such image operations; with the advent of graphics cards, rendered video became far more realistic. Graphics cards remain very efficient at generating video images. Ray tracing, however, was long out of reach because it needs dedicated hardware acceleration (such as NVIDIA's RT and tensor cores); in other words, it is far more computationally expensive.
Instead, 3D rendering has long used a process called rasterization, a technique that took hold in the 1990s. Rasterization is a method of producing a pixel image from a scene described in vector (geometric) form. It is extremely fast at mapping scene geometry to pixels, but it cannot by itself produce accurate color shading or light refraction on object surfaces. Ray tracing can generate all of these image features and therefore produces extremely realistic images, but it demands hardware resources that were simply not available in those days.
How does ray tracing work?
As the name suggests, ray tracing traces rays of light. In other words, it simulates the lighting of a scene and its objects by computing near-physical reflections, refractions, shadows, color shades/hues, and indirect lighting. Ray tracing generates an image by following a ray of light from the image sensor (or eye) through the 2D image plane (the pixel grid) into the 3D world where the objects exist, and from the objects back to the light source.
As the ray tracing algorithm traverses the scene, light from sources bounces off objects (creating reflections), is blocked by objects (creating shadows), or passes through translucent objects (causing refractions). These are the interactions that create lifelike images. The algorithm builds the final image by accounting for all of these interactions of light, objects, shadows, and the distances between objects. Tracing in reverse, from the eye/camera to the objects and then to the light source, yields the same image as tracing forward from the light, but wastes no work on rays that never reach the eye.
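The eye-to-object-to-light chain described above can be sketched in a few lines. This is a minimal, hypothetical example, not a production renderer: it assumes a single sphere as the scene, a normalized ray direction, and simple Lambertian (diffuse) shading toward a point light.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return distance t along the ray to the nearest hit, or None on a miss.

    Solves the quadratic for ray/sphere intersection; `direction` is
    assumed to be normalized, so the quadratic's leading coefficient is 1.
    """
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None          # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def shade(point, normal, light_pos):
    """Lambertian shading: brightness falls off with the angle to the light."""
    to_light = [l - p for l, p in zip(light_pos, point)]
    length = math.sqrt(sum(x * x for x in to_light))
    to_light = [x / length for x in to_light]
    # Dot product of surface normal and light direction, clamped at zero.
    return max(0.0, sum(n * l for n, l in zip(normal, to_light)))
```

A ray fired from the eye at the origin straight down the -z axis toward a sphere at (0, 0, -5) with radius 1 hits at t = 4; shading that hit point toward a light at the eye gives full brightness, while a ray fired upward misses entirely.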
Important terms in Ray tracing:
Ray casting: The ray tracing algorithm shoots one or more rays from the position of the eye through the image plane and checks whether each ray hits any scene primitive. If a ray passing through a pixel of the image plane hits a primitive, the distance between the light source and the primitive is determined, and the final color of that pixel is computed from the hit. The ray may also produce reflections or refractions when it encounters another light source or a translucent object.
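A sketch of this per-pixel casting loop, under the same illustrative assumptions as before (spheres as primitives, the eye at the origin, an image plane at z = -1; all function names are hypothetical):

```python
import math

def cast_ray(origin, direction, spheres):
    """Return (distance, color) of the nearest sphere hit, or (inf, black)."""
    nearest_t, color = math.inf, (0, 0, 0)   # background is black
    for center, radius, sphere_color in spheres:
        oc = [o - c for o, c in zip(origin, center)]
        b = 2.0 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0:
            continue                          # this primitive is missed
        t = (-b - math.sqrt(disc)) / 2.0
        if 0 < t < nearest_t:                 # keep only the closest hit
            nearest_t, color = t, sphere_color
    return nearest_t, color

def render(width, height, spheres):
    """Shoot one ray per pixel from the eye; return a row-major color grid."""
    eye = (0.0, 0.0, 0.0)
    image = []
    for j in range(height):
        row = []
        for i in range(width):
            # Map the pixel center onto an image plane spanning [-1, 1].
            x = (i + 0.5) / width * 2 - 1
            y = 1 - (j + 0.5) / height * 2
            length = math.sqrt(x * x + y * y + 1)
            direction = (x / length, y / length, -1 / length)
            row.append(cast_ray(eye, direction, spheres)[1])
        image.append(row)
    return image
```

Rendering a 3x3 image of one red sphere straight ahead colors the center pixel red while the corner rays miss and stay black.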
Path tracing: A more exhaustive form of ray tracing. It traces hundreds of rays per pixel from the eye to the light source through the image plane, so it provides much more accurate color information about each primitive.
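The core idea, many random rays per pixel averaged together, can be sketched as a Monte Carlo estimate. The `radiance` function below is a deliberately fake stand-in for a full light-transport simulation (here it just returns a noisy value around 0.5); the point is only that averaging many samples converges toward the true pixel value.

```python
import random

def radiance(sample_index):
    """Hypothetical contribution of one random light path.

    In a real path tracer this would trace a randomly bouncing ray
    through the scene; here it is simulated as 0.5 plus noise.
    """
    return 0.5 + random.uniform(-0.2, 0.2)

def path_trace_pixel(num_samples, rng_seed=0):
    """Average many per-path samples into one pixel value."""
    random.seed(rng_seed)                 # seeded for reproducibility
    total = 0.0
    for s in range(num_samples):
        total += radiance(s)
    return total / num_samples            # the average converges as N grows
```

With one sample the result is noisy (anywhere in [0.3, 0.7]); with ten thousand samples it lands very close to the true value of 0.5, which is exactly why path tracing needs so many rays per pixel.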
Rasterization: A technique for converting 3D objects into a 2D image on the screen. Objects are built from a mesh of virtual triangles and polygons of different shapes and sizes. The corners (vertices) of these triangles and polygons carry a lot of information: the position of the object with respect to the image plane, color, light intensity, and so on. The rasterization algorithm converts the triangles of the 3D models into pixels, or dots, on the 2D screen. Initially, each pixel is assigned a color value based on the vertex values; after further processing, the pixel colors are adjusted according to how light rays hit the objects in 3D space, which is known as 'color shading'.
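The triangle-to-pixel conversion step can be sketched with the classic edge-function test: a pixel center is covered by the triangle when it lies on the same side of all three edges. This is an illustrative minimal version (counter-clockwise winding assumed, no color interpolation or depth handling):

```python
def edge(a, b, p):
    """Signed area test: positive when p is to the left of edge a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize_triangle(v0, v1, v2, width, height):
    """Return the set of (x, y) pixels whose centers the triangle covers."""
    covered = set()
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)        # sample at the pixel center
            w0 = edge(v1, v2, p)
            w1 = edge(v2, v0, p)
            w2 = edge(v0, v1, p)
            # Inside when on the same (non-negative) side of all edges.
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                covered.add((x, y))
    return covered
```

A real rasterizer also interpolates the per-vertex attributes (color, depth, texture coordinates) across the covered pixels using these same edge-function weights, which is where the initial per-pixel color values come from.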
Hybrid rasterization and ray tracing: A process in which ray tracing and rasterization are used concurrently to generate an image. Rasterization quickly produces accurate 2D images from the 3D scene, while ray tracing adds light reflections, shadows, and refractions. Used together, the two methods produce high-quality images at decent frame rates.
Why does ray tracing need a GPU?
Traditionally, the multicore CPU rendered all the lighting effects, so producing visual effects took a long time. A GPU, by contrast, contains thousands of CUDA cores that can render visual effects in a fraction of that time. With the addition of dedicated RT and tensor cores, there is a significant further acceleration of ray tracing features such as reflections, refractions, and shadows.