Acceleration Of Calculations And Saving On Resources: When GPUs Are Needed
When graphics processing units (GPUs) first appeared, no one expected them to become so widely used.
Initially, GPUs were used only to draw pixels in graphics, and their main advantage was energy efficiency. No one tried to use them for general-purpose computing: they could not provide the same accuracy as central processing units (CPUs).
But then it turned out that the precision of GPU calculations is quite acceptable for machine learning, and at the same time GPUs can process large amounts of data very quickly. Today they are used in many fields; in this article, we look at the most interesting ones.
What Is A Graphics Processing Unit (GPU)
A graphics processing unit (GPU) is a type of microprocessor. Unlike the central processing unit (CPU), it has not tens but thousands of cores. Because of this architecture, GPUs have several features:
- They can simultaneously perform the same operations on an entire array of data, for example, subtracting or adding many numbers at once.
- GPUs offer lower computational precision than CPUs, but it is sufficient for tasks such as machine learning, where speed matters more.
- GPUs are energy-efficient. For example, one server equipped with Nvidia Tesla V100 GPUs consumes 13 kW yet delivers the same performance as 30 CPU-only servers. In other words, using GPUs can significantly cut electricity costs.
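The first point in the list, performing the same operation on an entire array of data at once, can be illustrated even on a CPU. The sketch below uses NumPy purely as an analogy: its vectorized operations mimic the data-parallel style in which a GPU works, although the actual execution here happens on the CPU.

```python
import numpy as np

# A GPU applies one operation to a whole array of data in parallel.
# NumPy's vectorized arithmetic mimics that data-parallel style:
a = np.arange(1_000_000, dtype=np.float32)
b = np.full(1_000_000, 2.0, dtype=np.float32)

# One "instruction" adds a million pairs of numbers at once,
# instead of a Python loop adding them one by one.
c = a + b
```

On real GPU hardware the same pattern appears as thousands of cores each handling a slice of the array, which is why frameworks built on GPUs favor whole-array operations over element-by-element loops.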
GPU And Machine Learning
GPUs are used at every stage of machine learning: data preparation, model training, and production inference.
The latest generations of NVIDIA GPUs contain Tensor Cores, a new type of computing core. They perform mixed-precision matrix operations far faster than classic GPU cores while remaining energy-efficient. This is important for large companies running their own data centers.
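The precision trade-off behind Tensor Cores can be sketched on a CPU. The example below is illustrative only: it computes the same matrix product in half precision (FP16, the reduced format Tensor Cores typically use as input) and in double precision, and compares the results. Real Tensor Core execution happens on the GPU through libraries such as cuBLAS, not through NumPy.

```python
import numpy as np

# Compare a matrix product computed in FP16 (half precision)
# against an FP64 reference to see the precision trade-off
# that reduced-precision hardware like Tensor Cores accepts.
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64))
y = rng.standard_normal((64, 64))

exact = x @ y  # FP64 reference result
approx = (x.astype(np.float16) @ y.astype(np.float16)).astype(np.float64)

# Relative error of the FP16 result; for ML workloads an error
# of this magnitude is usually acceptable.
rel_err = np.abs(exact - approx).max() / np.abs(exact).max()
```

The small relative error is exactly why lower-precision arithmetic is "sufficient for machine learning": model quality is tolerant of this level of rounding, while the speed and energy savings are substantial.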
Today, machine learning is used in many industries, including medicine. AI-based solutions analyze CT and MRI scans and find pathological changes in them. As a result, doctors spend less time reviewing images, and the risk of human error is reduced.
Machine learning also underlies computer vision: neural networks that can recognize people and objects in photos and videos.