I bought my first personal computer in 1983. It was a Leading Edge with an 8086/2 processor. The /2 meant that the CPU would double its clock rate when possible, typically during pure computation, when it was not interfacing with other components of the system. In 1990 I worked on the HOOPS 3D graphics system for Ithaca Software. HOOPS ran on 386 personal computers provided the machine also had an accompanying math coprocessor; the Weitek coprocessor was a popular add-on at the time. So personal computers have a long history of adding hardware to improve general purpose computation as well as the performance of computer graphics.
When we were at Ithaca Software, Carl Bass was our chief architect. He noted that the computer graphics industry was like a pendulum. Special purpose graphics processors would be prevalent. Then general purpose CPUs would increase in speed so dramatically that the extra expense of a special purpose graphics processor would no longer be cost effective. They would fall out of vogue, and computer graphics would instead be handled by the main CPU. Eventually the demands placed on the general purpose CPU would outstrip its capacity, and special purpose processors would rise again. This was 1995.
General purpose CPUs have made great advances in the last decade; however, exponential performance increases have plateaued as of late. Thus the pendulum has swung back towards special purpose processors to improve graphics performance. Hence many graphics cards, such as those from NVIDIA, contain a GPU (graphics processing unit). This does wonders for graphics. By now we've all seen the NVIDIA demo that illustrates GPU-style parallelism by painting the Mona Lisa with an array of paintball guns firing at once.
It's a great demo. But the question arises - can the GPU also be used for general purpose computing? Is it possible to offload some computations from the main CPU to the GPU to improve overall system performance? Just as with the 8086/2, there are limitations as to when this is possible. And just as HOOPS had to be programmed to take advantage of the Weitek coprocessor, applications have to be programmed to take advantage of the GPU.
Recently a way for application developers to leverage the GPU has been standardized.
This is great news for developers. It might allow them to code their applications once and take advantage of any GPU, instead of having to tailor their applications to specific devices. Research in this area shows promise, but there will be limitations.
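The post doesn't name the standard, but it is presumably OpenCL. As a minimal sketch, assuming an OpenCL 1.x runtime and a system with at least one GPU, here is what offloading a simple vector addition might look like; the kernel name vec_add is illustrative and error handling is omitted for brevity.

```c
/* Minimal sketch: offload a vector addition to the GPU via OpenCL.
   Assumes OpenCL 1.x headers and a runtime are installed.
   Compile with something like: gcc vecadd.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

/* The kernel runs once per element, in parallel, on the device. */
static const char *src =
    "__kernel void vec_add(__global const float *a,\n"
    "                      __global const float *b,\n"
    "                      __global float *c) {\n"
    "    size_t i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main(void)
{
    enum { N = 1024 };
    float a[N], b[N], c[N];
    for (int i = 0; i < N; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* Pick the first platform and the first GPU it exposes. */
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

    /* Build the kernel from source at run time, so the same host code
       can target any vendor's GPU. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "vec_add", NULL);

    /* Copy inputs to device memory, launch the kernel, read back results. */
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof a, a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof b, b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

    clSetKernelArg(k, 0, sizeof da, &da);
    clSetKernelArg(k, 1, sizeof db, &db);
    clSetKernelArg(k, 2, sizeof dc, &dc);

    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

    printf("c[10] = %f\n", c[10]);  /* expect 30.0 */

    /* Release OpenCL objects. */
    clReleaseMemObject(da); clReleaseMemObject(db); clReleaseMemObject(dc);
    clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return 0;
}
```

The point of the sketch is the portability: because the kernel is compiled at run time for whatever device the runtime finds, the same host code can, in principle, run on GPUs from different vendors without being rewritten.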
Staying abreast of industry computing trends is alive and well in the lab.