How the Turing GPU will revolutionize immersive AI

AI, machine learning, and image manipulation have a lot in common, so it’s no surprise that GPUs excel at all three, a convergence that promises amazingly immersive apps


Graphics processing units (GPUs) are far more than graphics chips. They have been the heart of the artificial intelligence revolution for many years. This is due in great part to the fact that the computational substrate for high-fidelity 3D image processing lends itself beautifully to the mathematics that underpin the neural networks powering today’s most sophisticated AI applications.

GPUs may seem to have been engineered for AI from the start, but that notion misrepresents the technology’s history. Nvidia, AMD, and other chipmakers have for many years made a lot of money providing GPUs for PC graphics, interactive gaming, image postprocessing, and virtual desktop infrastructure.

Nevertheless, the affinity between graphical processing and AI is undeniable. Convolutional neural networks (CNNs), for example, are at the forefront of AI and are used principally for image analysis, classification, rendering, and manipulation. It almost goes without saying that GPUs are one of the primary hardware workhorses for CNN processing in many applications.

What AI and image processing have in common

From a technical standpoint, what image processing and AI have in common is a reliance on highly parallel matrix and vector operations, which is where GPUs shine. Essentially, a matrix (a two-dimensional tensor, in AI terminology) is equivalent to the grid of pixels, the rows and columns of dots, in a computer-generated image frame. A GPU’s embedded memory structures process an entire graphic image as a matrix, perhaps enriched through the adaptive intelligence that comes from concurrent execution of deep learning and other AI matrix workloads. This architecture enables GPU-powered systems to use inline AI to dynamically and selectively accelerate the processing of image updates and modifications.
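To make the overlap concrete, here is a minimal sketch (in plain NumPy, with a hand-written convolution rather than any GPU library) showing that the same matrix operation underlies both worlds: applied with a fixed edge-detection kernel it is an image filter, while a CNN layer performs exactly this computation with kernels learned during training. The `conv2d` helper and the toy 5×5 image are illustrative inventions, not drawn from any particular framework.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2D valid-mode convolution: slide the kernel over the image
    and take an elementwise product-sum at each position. This is the
    core matrix operation of both image filtering and CNN layers."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A Sobel-style kernel: here a hand-chosen image filter that responds
# to vertical edges; a CNN would learn kernels of exactly this shape.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Toy image: dark on the left, bright on the right (a vertical edge).
image = np.zeros((5, 5))
image[:, 2:] = 1.0

edges = conv2d(image, sobel_x)
# Responses are strongest where the kernel window straddles the edge.
print(edges)
```

Because every output element is independent, all of these product-sums can run concurrently, which is precisely the parallelism a GPU exploits at scale.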

The symbiotic relationship between these workloads is also evident at the application level, which explains why GPUs are often the preferred hardware accelerator for intelligent, graphics-rich applications. Increasingly, we’re seeing AI embedded in mass-market image-processing products, such as smart cameras that automatically stabilize images, adjust color and exposure, select focal points, and otherwise tailor the image in real time to the scene being captured, thereby reducing the likelihood that any of us might take a technically inept photograph.
