GPUs as Past, Present, and Future of Computing

We all use computers, but we are often unaware of the technology inside them. These incredible machines let us enjoy video games with graphics very close to reality and physics engines that simulate collapsing skyscrapers or collisions between vehicles battling in frantic races. For these tasks, the work of the graphics processor, or GPU, is essential.

Today we will enter the world of graphics cards and GPU architectures, and see how they differ from the much better-known central processor, or CPU. Graphics cards are essential in today's computing and are a basic part of what many consider important for the future: GPGPU, or general-purpose data processing on GPUs. Let's dive fully into this topic with a special look at graphics processors.

The Pre-GPU Era

Things have changed a lot since home computers first arrived in our homes back in the 80s. The hardware is still based on the same foundations of the von Neumann architecture, although it has evolved in a very remarkable way, and current systems are now much more complex.

The architecture proposed by John von Neumann rests on a trio of components: the ALU, memory, and input/output, which process, store, and receive/send information, respectively. Interpreting this architecture on a current computer would be roughly equivalent to having a single processor, one disk, one keyboard, and one screen. Obviously, a modern system is made up of many more elements, and among them the graphics card has become one of the fundamental components.
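To make the trio concrete, here is a minimal toy sketch in C of a von Neumann machine: a single memory array holds both program and data, a simple ALU (the accumulator arithmetic) does the processing, and printf stands in for output. The instruction encoding is entirely made up for illustration; real hardware is far more complex.

```c
#include <stdio.h>

/* Toy von Neumann machine: one memory holds both program and data.
 * Hypothetical instruction encoding, invented for this sketch:
 * 0 = HALT, 1 = ADD addr (ALU), 2 = PRINT (I/O). */
int main(void) {
    int memory[16] = {
        1, 8,      /* ADD memory[8] to the accumulator  */
        1, 9,      /* ADD memory[9] to the accumulator  */
        2,         /* PRINT the accumulator (I/O)       */
        0, 0, 0,   /* HALT + padding                    */
        40, 2      /* data stored alongside the program */
    };
    int pc = 0, acc = 0;              /* program counter, accumulator */
    for (;;) {
        int op = memory[pc++];        /* fetch from the shared memory */
        if (op == 0) break;                            /* HALT */
        else if (op == 1) acc += memory[memory[pc++]]; /* ALU  */
        else if (op == 2) printf("%d\n", acc);         /* I/O  */
    }
    return 0;   /* prints 42 */
}
```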

The Origins

In the first computers, the central processor – CPU, central processing unit – was responsible for managing and processing all kinds of information.

Those first systems used text-based interfaces, but with the arrival of the first graphical interfaces, demands grew, not only from the operating system itself but also from many of the applications that began to emerge at the time. CAD programs and video games, for example, required far more resources to function properly.

At this point, system designers turned to a component that already existed and evolved it. The math coprocessor, or FPU (floating-point unit), was used in many systems to speed up data processing. It can be understood as a second processor, although some of the differences with respect to the CPU are very clear: it cannot access data directly (the CPU must manage that part), and it executes a much simpler instruction set, specialized for processing floating-point data.

We have already touched on the definition of the general-purpose processor. These processors are the most common, and the CPU is the prime example. They use general-purpose registers and instruction sets that can perform the most diverse operations. An important fact for the topic we are going to deal with is that early CPUs did not operate directly on floating-point data: they had to emulate it through conversions and integer arithmetic, which cost resources and, therefore, time. That is why math coprocessors were so important: they could process this type of data natively.
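As a rough illustration (a simplification, not the actual emulation routines of the era), the sketch below contrasts the integer-only fixed-point arithmetic a CPU without an FPU might fall back on with the native floating-point multiply an FPU performs in hardware. Several integer operations are needed to stand in for a single floating-point instruction.

```c
#include <stdio.h>
#include <stdint.h>

/* 16.16 fixed-point: a real number x is stored as the integer
 * round(x * 65536). This is the kind of integer-only trick used
 * when no FPU was present (simplified here for illustration). */
typedef int32_t fix16;

#define TO_FIX(x)    ((fix16)((x) * 65536.0))
#define TO_DOUBLE(f) ((double)(f) / 65536.0)

/* Fixed-point multiply: widen to 64 bits, multiply, shift back.
 * Multiple integer steps replace one hardware FP multiply. */
static fix16 fix_mul(fix16 a, fix16 b) {
    return (fix16)(((int64_t)a * b) >> 16);
}

int main(void) {
    fix16 a = TO_FIX(3.25), b = TO_FIX(1.5);
    printf("fixed-point emulation: %f\n", TO_DOUBLE(fix_mul(a, b)));

    /* With an FPU, the same product is a single hardware
     * instruction on native floating-point registers. */
    double x = 3.25, y = 1.5;
    printf("hardware floating point: %f\n", x * y);
    return 0;   /* both lines print 4.875 */
}
```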

Demands continued to grow, and the systems of the time had a CPU plus an optional FPU that ended up becoming fundamental: math coprocessors evolved toward GPUs, the most efficient components for processing and rendering the graphical side of every type of software.

The First Graphics Cards

Math coprocessors continued to evolve and improve, and began to be mounted on dedicated cards. This format gave designers more space to create larger chips, with more transistors and circuitry and better power delivery, which could offer greater processing capacity.

It was not until 1999 that NVidia coined the term GPU, Graphics Processing Unit, to replace the earlier notion of the video card. After the successful RIVA TNT2, they presented the NVidia GeForce 256, and to promote it they placed great emphasis on the graphical possibilities it brought to our machines. Video games, gaining more and more followers, were one of the keys that drove GPU designers to increase performance year after year.
