Explained: The future of PC graphics



What's next for graphics? Graphics Core Next, of course. Thank you, AMD, for that nicely palindromic way to start a feature.

And thank you, too, for the chance to discuss the successor to the current generation of Radeon graphics cards, which is expected next year.

The presentation of GCN took place in June at AMD's Fusion Developer Summit. It's the first comprehensive overhaul of the company's GPU architecture since the launch of Windows Vista.

Incidentally, that also makes it the first all-new graphics design from AMD that isn't based on work begun by ATI before the buyout.

Vista and DirectX 10 specifically called for a graphics card with a fully programmable shader pipeline.

That meant abandoning the traditional approach of fixed circuitry dedicated to particular parts of the pipeline - pixel shaders and vertex shaders - and replacing it with something more flexible that could do everything: the unified shader (see "Why unified shaders?" on the next page).

Schism

At the birth of DX10-class graphics, there was something of a schism between Nvidia and AMD.

To simplify: the former opted for an interpretation of the unified shader in its GeForce G80 chip that was extremely flexible. Put a few hundred very simple processors on a die, and send them graphics or compute work (or, in certain circumstances, both), a piece at a time until the job is done.

It's a method that makes designing the scheduling engine a nightmare, but it's very flexible, and well-written code that exploits how closely the processors sit together on the die is dynamite.
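To picture that dispatch model, here's a toy sketch in Python. The processor count, job mix and round-robin policy are all invented for illustration - this is the general idea of a unified pool, not Nvidia's actual scheduler:

```python
# Toy model of a unified shader pool: identical processors take whatever
# work is queued - vertex or pixel jobs alike - until the queue is empty.
# All names and numbers here are illustrative, not Nvidia's real design.
from collections import deque

def run_unified_pool(jobs, num_processors=8):
    """Hand mixed vertex/pixel jobs to identical processors, round-robin."""
    queue = deque(jobs)
    completed = {p: [] for p in range(num_processors)}
    p = 0
    while queue:
        completed[p].append(queue.popleft())  # any unit can run any job type
        p = (p + 1) % num_processors
    return completed

# 10 vertex jobs and 22 pixel jobs share the same pool of units
jobs = [("vertex", i) for i in range(10)] + [("pixel", i) for i in range(22)]
done = run_unified_pool(jobs)
```

The point of the sketch is that no unit is reserved for one job type: the same hardware chews through whatever mix the frame happens to need.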

In designing the G80 and its successors, Nvidia had an eye on more than just graphics applications. Developers can create GPGPU applications for GeForce cards written in C and, more recently, C++.

AMD/ATI, meanwhile, focused on the needs of a traditional graphics card. Its unified shaders worked by combining operations into "very long instruction words" (VLIW) and sending them off for batch processing.

The basic unit in Nvidia's first DX10 processors was a simple "scalar" unit, grouped into clusters of 16 for parallel processing.

Inside an AMD chip, it was a four-function "vector" processor plus a special function unit. Hence the Radeon architecture's name: VLIW5. While that sounds horrible, the set-up was designed to be more efficient.

The important point is that a pixel's colour is determined by mixing red, green, blue and alpha (transparency) channels. So the R600 processor - which was the basis for the HD2xxx and HD3xxx series - was designed to be incredibly efficient at churning out those four values, over and over again.
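As a rough illustration of why that suits pixel work, here's a Python sketch in which one "very long instruction" carries one operation per R, G, B and alpha channel (the fifth, special-function slot is left out for simplicity; the helper names are invented):

```python
# Illustrative sketch of VLIW-style issue for pixel work: a single
# instruction word carries four operations, one per colour channel.
def vliw4_issue(op, src_a, src_b):
    """Execute one operation across all four channels in a single issue."""
    return tuple(op(a, b) for a, b in zip(src_a, src_b))  # 4 lanes at once

def alpha_blend(dst, src):
    """Classic alpha blend: out = src*alpha + dst*(1 - alpha), per channel."""
    alpha = src[3]
    scaled_src = vliw4_issue(lambda s, a: s * a, src, (alpha,) * 4)
    scaled_dst = vliw4_issue(lambda d, a: d * (1 - a), dst, (alpha,) * 4)
    return vliw4_issue(lambda x, y: x + y, scaled_src, scaled_dst)

# Blend a half-transparent orange pixel over an opaque black one
pixel = alpha_blend(dst=(0.0, 0.0, 0.0, 1.0), src=(1.0, 0.5, 0.0, 0.5))
```

Every step above is the same operation repeated four times across the channels - exactly the shape of work a four-wide vector unit executes in one go.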

Unfortunately, those early R600 cards weren't great, but over time AMD refined the design and made it work, and work well.

The HD4xxx, HD5xxx and HD6xxx cards were superb, putting out better performance while demanding less power than their Nvidia peers. Often for less money, too. But despite the improvements of the last four years, the current-generation GeForce and Radeon chips are still recognisably part of the same families as the original G80 and R600.

There have been changes to the memory interface (witness the power-hungry ring bus of early Radeons) and a large increase in the number of execution cores (1,536 in a single Radeon HD6970 against the HD2900XT's 320), but the most significant change over time was the separation of the special function unit from the processor cores.

Graphics Core Next, however, is a completely new design. According to AMD, the existing architecture is no longer the most efficient for the tasks graphics cards are being asked to do.

A new approach

Future graphics: VLIW5 has four vector processing units, one each for the R, G, B and alpha channels

Proportionally, the number of physics and geometry routines running on the graphics card in a typical piece of game code has increased dramatically, calling for a more flexible processor design than one aimed primarily at filling in pixels.

Consequently, the VLIW design is out, replaced by one that can be programmed with C and C++.

The basic unit in GCN is an array of 16 processing elements arranged for SIMD (Single Instruction, Multiple Data) operations. If that sounds familiar from the G80, that's because it is.

Cynics might see that as a tacit admission that Nvidia was right all along, and there's no doubt AMD has GPGPU applications in mind for its next generation of chips. But it's more complicated than that.

In GCN, these SIMD processors are grouped in fours to create a "compute unit", or CU. These are functionally the units of execution, capable of four-way issue (perfect for RGBA instructions), but each also has a scalar processor coupled to it for calculations that can't be carried out efficiently on the SIMD units.

Each CU has all the circuitry it needs to be almost autonomous, with an L1 cache, an instruction fetch arbitration controller, a branch and message unit and so on.
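The split between scalar and SIMD work can be sketched in Python like this. The 16-lane width matches the description above; everything else (the address arithmetic, the function names) is invented to show the division of labour, and is nothing like AMD's actual hardware:

```python
# Sketch of a compute unit's division of labour: vector instructions run
# in lockstep across 16 SIMD lanes, while a value that is identical for
# every lane (here, a row base address) goes to the scalar unit once
# instead of being computed 16 times over. Simplified, not AMD's design.
SIMD_WIDTH = 16

def scalar_unit(base, stride, row):
    """One result shared by the whole group of threads - computed once."""
    return base + stride * row

def simd_unit(instruction, lanes):
    """Apply a single instruction to all 16 lanes in lockstep."""
    assert len(lanes) == SIMD_WIDTH
    return [instruction(lane) for lane in lanes]

row_addr = scalar_unit(base=0x1000, stride=64, row=3)   # scalar path, once
addrs = simd_unit(lambda lane: row_addr + 4 * lane,     # vector path, x16
                  list(range(SIMD_WIDTH)))
```

Routing the shared computation to a single scalar unit is what stops the 16 SIMD lanes all redundantly calculating the same value.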

There's more to GCN than CUs, though. The new architecture also supports the x86 virtual memory space, so large data sets - such as the megatextures id Software's Rage employs - can be addressed even when they sit partially outside local memory.
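The benefit is easiest to see in a toy model. The page size and numbers below are invented, and real GPU virtual memory is far more involved, but the core idea - a huge address space of which only the touched pages are actually backed by memory - looks like this:

```python
# Toy sketch of partially resident data: code addresses a huge virtual
# texture, but only the pages it actually touches get backed by memory.
# Page size and sizes are invented for illustration.
PAGE = 256  # bytes per page in this toy model

class VirtualTexture:
    def __init__(self, size):
        self.size = size
        self.pages = {}                      # page number -> backing storage

    def write(self, addr, value):
        page, offset = divmod(addr, PAGE)
        if page not in self.pages:           # back the page on first touch
            self.pages[page] = bytearray(PAGE)
        self.pages[page][offset] = value

tex = VirtualTexture(size=1 << 30)           # a nominally 1GB texture...
tex.write(0, 7)
tex.write((1 << 20) + 5, 9)
resident = len(tex.pages) * PAGE             # ...with only 2 pages resident
```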

And while it isn't - as some other commentators have suggested - an out-of-order processor, it's able to use its transistors very efficiently by working on multiple threads simultaneously and switching between them whenever one is paused, waiting on a set of values to be returned. In other words, this is a very versatile chip.
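That latency-hiding trick can be shown with a simplified simulation. The cycle counts are made up, and real GPUs juggle far more threads, but the principle - switch away from a stalled thread rather than leave the core idle - is the same:

```python
# Sketch of latency hiding: an in-order core keeps busy by switching to
# another ready thread whenever the current one stalls on memory.
# All cycle counts are invented for illustration.
from collections import deque

def simulate(threads):
    """threads: lists of steps, each ('alu', cycles) or ('mem', cycles).
    ALU steps occupy the core; a mem step stalls only its own thread,
    freeing the core to run another. Returns total cycles elapsed."""
    queues = [deque(t) for t in threads]
    ready_at = [0] * len(threads)   # cycle when each thread may run again
    clock = 0
    while any(queues):
        runnable = [i for i, q in enumerate(queues) if q and ready_at[i] <= clock]
        if not runnable:            # every remaining thread is waiting on memory
            clock = min(ready_at[i] for i, q in enumerate(queues) if q)
            continue
        i = runnable[0]
        kind, n = queues[i].popleft()
        if kind == "alu":
            clock += n              # core busy for n cycles
        else:
            ready_at[i] = clock + n # stall this thread; switch away
    return clock

work = [("alu", 2), ("mem", 10), ("alu", 2)]
one_thread = simulate([list(work)])               # stalls leave the core idle
two_threads = simulate([list(work), list(work)])  # stalls overlap with work
```

Running twice the work takes nowhere near twice the time, because one thread's memory wait is hidden behind the other's compute.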

After the design was previewed, some pointed out similarities to concepts from Intel's dead Larrabee project, and to Atom and ARM A8 chips - but far more heavily oriented towards parallel processing.

Inventive name: GCN still works with RGBA data, but with more flexibility

"Graphics is still the primary goal," said AMD's Eric Demers during his GCN keynote presentation, "but we are making significant optimisations for compute... it's compute and graphics blending together."

The big question now is whether AMD can pull off this ambitious chip. Its first VLIW5 chips were a disappointment, running hotter and slower than expected - as did Nvidia's first generation of Fermi-based GPUs.

Will AMD nail GCN first time? We have a while to wait to find out. The first GCN-based chips are codenamed Southern Islands, and will probably be officially branded as the Radeon HD7xxx series. They were originally due this year, but aren't now expected until 2012.

