We all use computers, but we are often unaware of the technology inside them: incredible machines that let us enjoy video games with graphics remarkably close to reality, and physics engines that simulate collapsing skyscrapers or collisions between vehicles battling through frantic races. For these tasks, the work of the graphics processor, or GPU, is essential.
Today we will enter the world of graphics cards and GPU architectures, and look at how they differ from the other, much better known processor: the central one, or CPU. Graphics cards are essential in today's computing and are a basic part of what many consider important for the future: GPGPU, or general-purpose data processing on the GPU. Let's dive fully into this topic of graphics processors.
The pre-GPU era
Things have changed a lot since home computers began to appear in our homes back in the 80s. The hardware is still based on the same foundations of the von Neumann architecture, although it has evolved remarkably, and current systems are far more complex.
John von Neumann's architecture proposed three components: the ALU, memory, and input/output, referring to the mechanisms that process, store, and receive/send information, respectively. Interpreting that architecture on a current computer would be equivalent to having only one processor, one disk, one keyboard, and one screen. Obviously, a modern system is made up of many more elements, and among them the graphics card has become one of the fundamental components.
In the first computers, the central processor – CPU, central processing unit – was responsible for managing and processing all kinds of information.
Although those first systems used text-based interfaces, with the arrival of the first graphic interfaces, the level of demand grew not only in the operating system itself but also in many of the applications that began to emerge at the time. CAD programs or video games, for example, required many more resources to function properly.
At this point, system designers relied on a component that already existed and evolved it. The math coprocessor, or FPU (floating-point unit), was used in many systems to speed up data processing. It can be understood as a second processor, although some of its differences with respect to the CPU are very clear: it cannot access data directly (the CPU must manage that part), and it executes a much simpler set of instructions, dedicated to processing floating-point data.
General-purpose processors are the most common type, and the CPU is the clearest example. They use generic registers and instruction sets capable of the most diverse operations. An important fact for the topic at hand is that early CPUs did not operate directly on floating-point data; they performed a prior conversion that cost resources and, therefore, time. That is why mathematical coprocessors were important: they could process this type of data natively.
The demands continued to grow, and the optional FPU that accompanied the CPU ended up becoming fundamental: mathematical coprocessors evolved towards GPUs, the most efficient component for processing and determining the graphical side of every type of software.
The First Graphics Cards
Mathematical coprocessors continued to evolve and improve, and began to be mounted on dedicated cards. In this format they had more room for larger chips, with more transistors and circuitry and better power delivery, which allowed them to offer greater processing capacity.
It was not until 1999 that NVidia coined the term GPU (Graphics Processing Unit) to replace the earlier label of video card. After the successful RIVA TNT2, they presented the NVidia GeForce 256 and, to promote it, placed great emphasis on the graphical possibilities it brought to our machines. Video games, gaining more and more followers, were one of the keys that pushed GPU designers to increase performance year after year.
Compressed ZIP or RAR files get corrupted very easily: it is enough for a single bit or a small piece of data to be damaged for the system to be unable to open, read, or decompress the file. If we are working on Windows, the usual error is something like "The file is damaged or invalid."
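Whether a ZIP is still readable can also be checked programmatically. As a rough sketch, Python's standard `zipfile` module exposes a `testzip()` method that CRC-checks every entry (the function and path names here are just illustrative):

```python
import zipfile

def check_zip(path):
    """Return None if the archive looks intact, otherwise the name of
    the first unreadable part (a bad entry, or the archive itself)."""
    try:
        with zipfile.ZipFile(path) as zf:
            return zf.testzip()  # None means every entry passed its CRC check
    except zipfile.BadZipFile:
        return path  # the central directory itself is unreadable
```

A return value of `None` means every entry decompressed and matched its stored checksum; anything else points at the first damaged piece.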
Although Windows has been able to create and extract ZIP files natively for several years, the truth is that if we want to repair a corrupt ZIP, we have no choice but to turn to specialized programs. Most of these programs are paid, but there are also fairly effective free alternatives, and that is precisely what we will look at in today's post.
Although we talk about free software, since these are professional tools, some work under a "shareware" model: they are free and 100% functional, but with some limitation (usually on the maximum size of the file to be repaired).
DiskInternals ZIP repair
DiskInternals is a company specialized in data recovery, and it offers a freeware utility called "ZIP repair" with which we can recover damaged ZIPs. The application has a wizard that guides us through the whole process: we just select the corrupt file and a destination folder, and the program tells us which part of the content can be recovered.
Zip2Fix
Zip2Fix is a tool that recovers damaged ZIPs by extracting the files that are still "healthy" (leaving the corrupt ones aside) and compressing them back into a new ZIP. To start, click the "Open" button and select the damaged ZIP/SFX file; the program will automatically analyze it in search of everything that can be saved.
During installation, we must be careful to uncheck the corresponding boxes so that we do not install the typical unwanted programs usually bundled with this type of free utility (they have to make a living somehow).
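Zip2Fix's basic strategy, keeping the healthy entries and repacking them into a fresh archive, can be approximated in a few lines with Python's standard `zipfile` module. This is a minimal sketch, not the tool's actual algorithm, and it only works while the archive's central directory is still readable:

```python
import zipfile

def salvage(damaged_path, fixed_path):
    """Copy every entry that can still be read and CRC-verified from a
    damaged ZIP into a new archive; collect the names of the rest."""
    saved, skipped = [], []
    with zipfile.ZipFile(damaged_path) as src, \
         zipfile.ZipFile(fixed_path, "w", zipfile.ZIP_DEFLATED) as dst:
        for info in src.infolist():
            try:
                data = src.read(info.filename)  # raises on bad CRC or truncation
                dst.writestr(info, data)
                saved.append(info.filename)
            except Exception:
                skipped.append(info.filename)
    return saved, skipped
```

Entries whose compressed data is corrupt fail the CRC check on read and are simply left out of the rebuilt archive, which mirrors what these recovery tools report as "unrecoverable" files.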
Object Fix Zip
A free tool dedicated to repairing ZIP files. It has a wizard that guides us through the process: we select the damaged file and the path where we want to save the recovered file, the tool performs an analysis, and then it tries to fix the damaged parts. This utility stopped being updated around 2008, which means it will probably struggle with the most modern failures or errors. In any case, it remains a valid option if we do not get positive results with any other application.
The applications we use to compress files also usually include functions for repairing damaged archives, and although many of them are not very effective, others offer quite efficient restoration processes.
The famous WinRAR also has a repair tool for ZIP and RAR files. First we open WinRAR, load the file with the error, and go to the menu "Tools -> Repair file." The corrected file will appear in the folder we choose, with the same name as the original but with the prefix "rebuild."
If you are interested in taking your business into the digital era, cloud computing can be a great option for you. However, it is common for many to get carried away by false myths that have become popular over time. Too bad, because many companies are depriving themselves of the opportunity to take their operations to the next level.
So that this does not happen to you, in this article we will share the myths and truths of cloud computing, so you can make a good decision!
The Myths Of Cloud Computing
First of all, let’s start with myths. If you have researched the subject, it is very likely that you have already heard several of them before. Here we will tell you why they are not true. Let us begin!
It Is Not Secure
The cloud is, in fact, much safer than a local computer. Once the data reaches the servers, it is protected by a large number of electronic barriers that are practically impossible to get past.
In other words, if you are concerned about the security of your data, it is much better for you to have it in a cloud computing system. Exactly the opposite of what the myth says!
It Is Too Expensive
Strongly false! Precisely one of the benefits of using the cloud is that expenses are controllable, since you only pay for what you use. This is much cheaper than the alternative: buying a local storage system.
It Is Only For Large Companies
Small businesses can also have access to this technology and, best of all, benefit as much as the world’s largest technology companies. It is an advantage that any SME should consider.
Data Loss Is Common
In fact, data loss is a much more common accident on local systems, since they can be damaged or stolen. Once the data enters the cloud, however, the system automatically produces backups that ensure the information is never lost.
Not to mention the great benefit of accessing information whenever you want and from any device.
The Truth About Cloud Computing
Now that we cover the most important myths, it’s time to talk about the truths!
When most of the data is in the cloud, computers can run with much more processing power. Imagine what you could achieve with that and with dedicated software!
It Is Necessary To Train The IT Team
Once the cloud computing system is working at 100%, members of the IT team will be able to set aside hardware maintenance and devote themselves to developing software. However, at the beginning they will need training to understand the new service they will be using.
There Are Several Types Of Clouds
The truth is that there is a type of cloud for every type of company: this technology can grow with your business and adapt to your needs.
There are different types such as public, private, and hybrid, and each of them can respond to problems as specific as you can imagine.…
Cloud computing is the technology that allows you to use applications or services that are independent of your computer or device. Both the programs and their data live not on your machine but on the Internet, in what we call the Cloud. And no, it really isn't something new: surely for years you have connected to your bank online or bought your trips on a website.
The bank has been providing you with a service from the cloud for years. So what has changed for us to now talk about cloud computing?
The difference lies in the power available and in the things that can be done in the cloud today, both in programs and in services. Some examples:
Before, to use a text editor, you had to install a program. Now you can edit text or use spreadsheets directly in the cloud, and the files also stay in the cloud so you can edit them from any place or device.
Before, to store files, you had to use your computer's hard drive. Now you can use a storage service, and your files will be available from anywhere and from any device you use: your computer, your mobile, or your tablet.
But why on earth is this a revolution?
There are many reasons. To begin, we have to understand that the old application consumption model had many limitations, especially for companies and professionals.
The old software model had important limitations, such as:
1. Limited capacity, both in storage space and in execution speed. In the end, everything was small.
2. Obsolescence. Applications were installed one day and over time became outdated, so a new version had to be installed.
3. Rigidity of the model. There was no pay-per-use as there is with cloud computing; you had to pay for complete programs and perpetual licenses.
4. Vulnerability to local problems. A breakdown of the computer holding the program and its data could mean total loss, as could viruses and operating-system problems.
5. Absence of ubiquity. Programs installed on a computer were only available on that device, but people want their programs and data accessible from anywhere.
Ok, in the end, why is it considered a revolution?
The first reason is that it overcomes all the limitations described above.
The second is that it puts services that until recently were unthinkable within anyone's reach.
Cloud computing allows you to have the power and capacity that were previously only available to large companies or institutions.
World map showing access to the cloud from any connected place.
Let’s turn to an example again.
Let's imagine that we need a translator for texts in several languages. Before, we had to buy a program and install it on our computer, and of course this limited us to what we had bought.
Now imagine that in Japan they have a computer the size of a bus that turns out to be the best translator in the world with Artificial Intelligence.
Of course, to use the best translator you will not have to buy a computer like the one in Japan, which costs several million and would not fit in your home.
There will simply be an application in the cloud that connects to that computer, and you can do translations by paying only for the amount of translated text. That is Cloud Computing!
Where is that cloud?
Well, that cloud we talk about so much is made up of computers that reside in places called data centers.
These are entire buildings full of computers housed in special cabinets called racks. Each rack can hold more than a dozen servers, and a server is a high-capacity computer able to provide services over the network.
Data centers like this are not something new either. Large companies and governments have had them for decades. What cloud computing does is bring that power to the general public.
3D rendering basically consists of the process of creating two-dimensional images (for example, for a computer screen) from a 3D model. These images are generated based on data sets that dictate what color, texture, and material a given object has in the image.
How Does It Work?
In principle, 3D rendering is similar to photography: a 3D rendering program directs a virtual camera towards an object to compose the image. Because of this, digital lighting is important for creating a detailed and realistic render.
Over time, various rendering techniques have been developed. However, the goal of any 3D rendering is to capture an image based on how light affects objects, just as it does in real life.
One of the first rendering methods was rasterization, which treats models as polygon meshes. These polygons consist of vertices, which carry information such as position, texture, and color. The vertices are then projected onto the image plane defined by the perspective (that is, the camera).
The vertices act as boundaries, and the pixels between them are filled with the appropriate colors. Imagine painting an image by first laying out an outline for each of the colors you are going to paint: that is what rendering by rasterization consists of.
Rasterization is a fast form of rendering. Today the technique is still widely used, especially for real-time 3D rendering (for example, in computer games, simulations, and interactive graphical interfaces). More recently, the process has been improved by increasing the resolution and by applying anti-aliasing, which softens the edges of objects and blends them with the surrounding pixels.
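The "fill inside the outline" idea can be sketched with edge functions: a pixel center is inside a triangle when it lies on the same side of all three edges. A toy rasterizer over a small pixel grid (the grid size and vertex coordinates are arbitrary, chosen only for illustration):

```python
def edge(a, b, p):
    """Signed area test: positive if p lies to the left of edge a -> b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(v0, v1, v2, width, height):
    """Return the set of pixel coordinates whose centres are covered
    by the triangle v0-v1-v2."""
    covered = set()
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)  # sample at the pixel centre
            w0 = edge(v1, v2, p)
            w1 = edge(v2, v0, p)
            w2 = edge(v0, v1, p)
            # inside if all three edge functions agree in sign
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                covered.add((x, y))
    return covered
```

Real GPUs do essentially this test massively in parallel, with extra rules for shared edges and with the per-vertex attributes (color, texture coordinates) interpolated across the covered pixels.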
Although the rasterization technique is effective, it runs into problems with overlapping objects: if surfaces overlap, the last one drawn ends up in the final image, which can cause the wrong object to be shown. To solve this, the "Z-buffer" was added to rasterization: a depth buffer that records, for each pixel, which surface lies closest to the point of view. This workaround became unnecessary, however, once ray casting was developed.
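At its core, the Z-buffer is just a per-pixel depth comparison: before writing a color, compare the fragment's depth against the value already stored for that pixel and keep the nearer one. A minimal sketch, assuming the convention that depth grows away from the camera:

```python
def make_buffers(width, height, background=None):
    """One depth value and one color per pixel; depth starts at infinity."""
    depth = [[float("inf")] * width for _ in range(height)]
    color = [[background] * width for _ in range(height)]
    return depth, color

def write_fragment(depth, color, x, y, z, c):
    """Keep the fragment only if it is closer than what is already stored."""
    if z < depth[y][x]:
        depth[y][x] = z
        color[y][x] = c
```

Because the test runs per pixel, draw order stops mattering: a far surface written after a near one simply fails the comparison and is discarded.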
In ray casting, rays extend from the point of view through each pixel in the image plane. The surface they strike first is the one shown in the rendering; any intersection beyond that first surface is not rendered.
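The "first surface hit wins" rule can be sketched with the classic ray-sphere intersection: solve a quadratic for the distance t along the ray and keep the smallest positive root over all objects. The scene contents below are invented purely for illustration, and the sketch assumes the camera sits outside every sphere:

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Nearest positive t where origin + t*direction meets the sphere,
    or None if the ray misses (assumes origin outside the sphere)."""
    oc = [origin[i] - center[i] for i in range(3)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4 * a * c  # discriminant: negative means no intersection
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearer of the two roots
    return t if t > 0 else None

def cast_ray(origin, direction, spheres):
    """Return the nearest sphere the ray strikes; farther hits are ignored."""
    best_t, best = math.inf, None
    for sphere in spheres:
        t = hit_sphere(origin, direction, *sphere)
        if t is not None and t < best_t:
            best_t, best = t, sphere
    return best
```

Each sphere is a `(center, radius)` pair; whatever object yields the smallest t is the one whose color would end up in the pixel, exactly as described above.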
In essence, primary rays are projected from the camera's point of view onto the models and spawn secondary rays. Once they reach a model, they produce shadow, reflection, and refraction rays, depending on the properties of the surface. A shadow appears on another surface when the shadow ray's path towards the light source is blocked.
If the surface is reflective, it will project the resulting reflection beam at a certain angle and illuminate any other surface with which it hits, which will generate a new set of rays.…