
What Is 3D Rendering And How Does It Work?

3D rendering is the process of creating two-dimensional images (for example, for a computer screen) from a 3D model. These images are generated from data sets that specify the color, texture, and material of each object in the scene.

How Does It Work?

In principle, 3D rendering is similar to photography: a 3D rendering program aims a virtual camera at an object to compose the shot. Digital lighting is therefore essential to create a detailed and realistic render.

Over time, various rendering techniques have been developed. However, the goal of any 3D rendering is to capture an image based on how light affects objects, just as in real life.

One of the first rendering methods was rasterization, which treats models as polygon meshes. These polygons consist of vertices, which carry information such as position, texture, and color. These vertices are then projected onto a plane perpendicular to the perspective (that is, the camera).

The vertices act as boundaries, so the remaining pixels between them are filled with the appropriate colors. Imagine painting a picture by first laying out an outline for each of the colors you are going to use: that is what rendering by rasterization consists of.
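
To make this concrete, here is a minimal sketch in Python (illustrative only, not the code of any particular renderer): it projects a triangle's vertices with an assumed pinhole camera, then fills every pixel whose center falls inside the projected outline. The image size and focal length are arbitrary assumptions.

```python
# Minimal rasterization sketch: project 3D triangle vertices onto the
# image plane, then fill the pixels inside the projected outline.
# Image size and focal length are arbitrary assumptions.

def project(vertex, focal=1.0, width=64, height=64):
    """Perspective-project a 3D camera-space point to pixel coordinates."""
    x, y, z = vertex
    # Perspective divide: farther points land closer to the center.
    sx = (focal * x / z + 1) * 0.5 * width
    sy = (1 - (focal * y / z + 1) * 0.5) * height
    return sx, sy

def edge(a, b, p):
    """Signed-area test: which side of edge a->b does point p lie on?"""
    return (p[0] - a[0]) * (b[1] - a[1]) - (p[1] - a[1]) * (b[0] - a[0])

def rasterize_triangle(v0, v1, v2, width=64, height=64):
    """Return the pixels covered by a triangle given in 3D camera space."""
    p0, p1, p2 = project(v0), project(v1), project(v2)
    covered = []
    for py in range(height):
        for px in range(width):
            p = (px + 0.5, py + 0.5)  # sample at the pixel center
            # Inside if the point lies on the same side of all three edges.
            w0, w1, w2 = edge(p1, p2, p), edge(p2, p0, p), edge(p0, p1, p)
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                covered.append((px, py))
    return covered

# A triangle two units in front of the camera:
pixels = rasterize_triangle((-0.5, -0.5, 2.0), (0.5, -0.5, 2.0), (0.0, 0.5, 2.0))
print(len(pixels), "pixels covered")
```

Notice that each pixel's inside/outside test is independent of all the others, which is one reason rasterization maps so well onto parallel hardware.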

Rasterization is a fast form of rendering. Today, this technique is still widely used, especially for real-time 3D rendering (for example, in computer games, simulations, and interactive graphical interfaces). More recently, the process has been improved by increasing the resolution and by using anti-aliasing, which softens the edges of objects and blends them with the surrounding pixels.
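
One common way to implement anti-aliasing is supersampling, sketched below with an assumed 2x2 sample grid per pixel: several sub-pixel positions are tested instead of one and the results averaged, so a pixel half-covered by an edge receives a half-strength blend rather than a hard step.

```python
# Supersampling anti-aliasing sketch: take several samples inside each
# pixel and average them. `coverage` is a stand-in for any point-inside-
# geometry test, such as the edge-function test shown earlier.

def pixel_coverage(coverage, px, py):
    """Fraction of sub-pixel samples covered, from 0.0 to 1.0."""
    # An assumed 2x2 grid of sample positions inside the pixel.
    offsets = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
    hits = sum(1 for ox, oy in offsets if coverage(px + ox, py + oy))
    return hits / len(offsets)
```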

Although the rasterization technique is effective, it runs into problems when objects overlap: if surfaces overlap, the last one drawn ends up shown in the 3D rendering, which can leave the wrong object visible. To solve this, the “Z-buffer” concept was developed for rasterization: a depth buffer that records which surfaces lie above or below others from a given point of view, as sketched below.
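
A minimal sketch of the Z-buffer idea, with assumed buffer sizes and fragment format: each pixel stores the depth of the nearest surface drawn so far, and a new fragment is written only if it is closer.

```python
# Z-buffer sketch: a per-pixel depth value decides visibility, so the
# order in which overlapping surfaces are drawn no longer matters.

import math

WIDTH, HEIGHT = 64, 64
depth_buffer = [[math.inf] * WIDTH for _ in range(HEIGHT)]
color_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

def write_fragment(px, py, z, color):
    """Depth test: keep the fragment only if it is nearer than what is stored."""
    if z < depth_buffer[py][px]:
        depth_buffer[py][px] = z
        color_buffer[py][px] = color

# Two overlapping fragments at the same pixel; the nearer one (z=1.0)
# wins even though it is drawn first:
write_fragment(10, 10, 1.0, (0, 255, 0))   # near, green
write_fragment(10, 10, 2.0, (255, 0, 0))   # far, red
print(color_buffer[10][10])                # -> (0, 255, 0)
```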

This workaround was no longer necessary, however, once ray casting was developed. In ray casting, rays extend from the camera through each pixel in the image plane. The first surface a ray strikes is the one shown in the rendering; any intersection beyond that first surface is not rendered.

In essence, primary rays from the camera's point of view are projected onto the models and generate secondary rays. Once they reach the model, they produce shadow, reflection, and refraction rays, depending on the properties of the surface. A shadow is cast on a surface if the shadow ray's path toward the light source is blocked by another surface.
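
The sketch below shows both steps under assumed scene data (spheres and a point light): a primary ray keeps only its first intersection, and a shadow ray checks whether anything blocks the path to the light.

```python
# Ray-casting sketch: find the first surface a ray hits, then cast a
# shadow ray toward the light. The scene is an assumption for illustration.

import math

def intersect_sphere(origin, direction, center, radius):
    """Distance t to the nearest hit, or None (direction assumed unit-length)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 1e-6 else None

def first_hit(origin, direction, spheres):
    """Only the nearest intersection is rendered; any later one is ignored."""
    best = None
    for center, radius in spheres:
        t = intersect_sphere(origin, direction, center, radius)
        if t is not None and (best is None or t < best[0]):
            best = (t, center)
    return best

def in_shadow(point, light, spheres):
    """Shadow ray: the point is shadowed if a surface blocks the light."""
    to_light = [l - p for l, p in zip(light, point)]
    dist = math.sqrt(sum(v * v for v in to_light))
    direction = [v / dist for v in to_light]
    hit = first_hit(point, direction, spheres)
    return hit is not None and hit[0] < dist

# Camera at the origin looking down -z; the nearer sphere is the one hit:
spheres = [((0.0, 0.0, -5.0), 1.0), ((0.0, 0.0, -8.0), 2.0)]
print(first_hit((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), spheres))  # t=4.0, first sphere
```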

If the surface is reflective, it projects the resulting reflection ray at the corresponding angle, illuminating any other surface it hits, which in turn generates a new set of rays…
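
The direction of that reflection ray follows the mirror formula r = d - 2(d · n)n, where d is the incoming direction and n the unit surface normal. A minimal sketch:

```python
# Reflection sketch: mirror the incoming direction d about the unit
# surface normal n, using r = d - 2 (d . n) n.

def reflect(d, n):
    dot = sum(di * ni for di, ni in zip(d, n))
    return [di - 2 * dot * ni for di, ni in zip(d, n)]

# A ray traveling down-right that hits an upward-facing floor bounces up-right:
print(reflect([1.0, -1.0, 0.0], [0.0, 1.0, 0.0]))  # -> [1.0, 1.0, 0.0]
```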

GPUs as Past, Present, and Future of Computing

We all use computers, but we are often unaware of the technology inside them: incredible machines that let us enjoy video games with graphics very close to reality, and physics engines that simulate collapsing skyscrapers or collisions between two vehicles fighting in frantic races. For these tasks, the work of the graphics processor, or GPU, is essential.

Today we enter the world of graphics cards and GPU architectures, and their differences from the other processor, the central one or CPU, which is much better known. Graphics cards are essential in today's computing and are a basic part of what many see as important for the future: GPGPU, or general-purpose data processing on the GPU. Let us dive fully into this topic with our special on graphics processors.

The Pre-GPU Era

Things have changed a lot since home computers began to appear in our homes back in the 80s. The hardware is still based on the same foundations, the von Neumann architecture, although it has evolved remarkably, and current systems are now much more complex.

The architecture proposed by John von Neumann comprised three components: the ALU, memory, and input/output, referring to mechanisms that process, store, and receive/send information, respectively. Interpreting this architecture on a current computer would be equivalent to having only a processor, a disk, a keyboard, and a screen. Obviously, a modern system is made up of many more elements, and among them the graphics card has become one of the fundamental components.

The Origins

In the first computers, the central processor – CPU, central processing unit – was responsible for managing and processing all kinds of information.

Although those first systems used text-based interfaces, with the arrival of the first graphical interfaces the level of demand grew, not only from the operating system itself but also from many of the applications that began to emerge at the time. CAD programs and video games, for example, required many more resources to function properly.

At this point, system designers took a component that already existed and evolved it. The math coprocessor, or FPU (floating-point unit), was used in many systems to speed up data processing. It can be understood as a second processor, although some of the differences with respect to the CPU are very clear: it cannot access data directly (the CPU must manage that), and it executes a much simpler set of instructions specialized for floating-point data.

We have already touched on the definition of a general-purpose processor. These processors are the most common, and the CPU is the clearest example: they use generic registers and instruction sets that can perform the most diverse operations. An important fact for the topic at hand is that CPUs did not operate directly on floating-point data; they performed a prior conversion that cost resources and, therefore, time. That is why math coprocessors were important: they could process this type of data natively.
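
As a rough illustration of that cost, here is one way software handled fractional math without an FPU: fixed-point arithmetic built on integer operations. The 16.16 format below is an assumption for the example; the point is that each "floating-point" multiply costs several integer instructions instead of one hardware operation.

```python
# Fixed-point sketch: fractional math emulated with integers, the kind
# of workaround a hardware FPU made unnecessary. 16.16 format assumed.

SHIFT = 16                        # 16 fractional bits

def to_fixed(x):
    return int(x * (1 << SHIFT))  # scale the real number up to an integer

def fixed_mul(a, b):
    return (a * b) >> SHIFT       # integer multiply, then rescale

a, b = to_fixed(3.25), to_fixed(2.5)
print(fixed_mul(a, b) / (1 << SHIFT))   # -> 8.125
```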

The demands continued to grow, and the systems of the time had a CPU and an optional FPU that ended up becoming fundamental: math coprocessors evolved toward GPUs, becoming the most efficient component for processing and determining the graphical side of every type of software.

The First Graphics Cards

Math coprocessors continued to evolve and improve, and began to be mounted on separate cards. In this format they had more room for larger chips, with more transistors, more circuitry, and better power connections, which offered greater processing capacity.

It was not until 1999 that NVidia coined the term GPU, Graphics Processing Unit, to replace the earlier video cards. After a successful RIVA TNT2, they presented the NVidia GeForce 256, and to promote it they placed great emphasis on the graphics possibilities it brought to our machines. Video games, gaining more and more followers, were one of the keys that pushed GPU designers to increase performance year after year.