Over the last few years, graphics cards (GPUs) have been taking over the primary computational role from general-purpose processors in a number of applications. Chemical & Engineering News magazine is profiling how these highly optimized devices, which can crunch data much faster than do-it-all processors, are helping chemists study complicated reactions and develop new pharmaceuticals that would otherwise demand a lot of time and labor in the laboratory.
A snippet:
Unlike the central processing unit (CPU) of a computer, which might have a couple of electronic components that carry out mathematics (known as cores), a GPU has hundreds of arithmetic elements that can perform massively parallel calculations.
Computational chemists took note of these graphics cards nearly a decade ago because of their ability to carry out billions of math operations per second, but harnessing their power was tedious and difficult. “In those early days, you really had to represent your calculation as if it were some graphics operation,” says Vijay S. Pande, a chemistry professor at Stanford University. In other words, theorists had to jump through a lot of hoops to get the GPUs, which were set up to output shaded polygons, to recognize their algorithms.
It wasn’t until 2007, when graphics hardware firm Nvidia introduced Compute Unified Device Architecture (CUDA), a new GPU chip structure and programming tool kit, that things changed. CUDA enabled scientists to access GPUs with high-level programming languages such as C and Fortran, so “it feels like you’re writing a more normal computer program,” Pande says. Since then, other firms, such as AMD, have followed suit with more-user-friendly GPUs.
These easier-to-use GPUs have transformed the computational field in the past three years. Supercomputers are still at the forefront of computational research (C&EN, Oct. 18, page 5), but when GPUs are incorporated into the clustered machines, more complex calculations become possible. Chemists are now using graphics cards to carry out classical molecular dynamics simulations on desktops, and clusters are beginning to output results on large biomolecular systems that couldn’t be easily explored previously. And theorists who do quantum chemical calculations are joining the GPU bandwagon, adapting their more complex algorithms to run on the graphics hardware.
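To give a flavor of what Pande means by "a more normal computer program," here is a minimal sketch of a CUDA kernel in C. None of this code comes from the article; the kernel name, sizes, and launch configuration are illustrative only. Each GPU thread handles one array element, which is exactly the "hundreds of arithmetic elements working in parallel" picture described in the snippet.

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

// Each GPU thread computes one element of y = a*x + y ("SAXPY").
// The thread index replaces the loop counter a CPU version would use.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host-side arrays, initialized on the CPU.
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device-side arrays; copy inputs from CPU to GPU memory.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    // Copy the result back and check one element (2*1 + 2 = 4).
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

Before CUDA, the same computation would have had to be disguised as a texture or shading operation; here the data-parallel loop is written directly, in ordinary C, which is the shift the article credits with opening GPUs to computational chemists.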
Vijay S. Pande’s group at Stanford recently simulated the folding of a 39-residue fragment of the protein NTL9. In this video, the unfolded fragment passes back and forth between nonnative states and a partial native configuration before completely folding.
Read on at C&EN: The GPU Revolution
Image credit: William Hook…