
Useful Information on AMD/Nvidia Tech/Purchase Choices

smogsy
What the Thread is About

This thread is here to inform people about what AMD & Nvidia have to offer, hopefully helping you with your purchase decisions.

Maybe you just want to learn about AMD's and Nvidia's technologies.

 

I'll try to keep this up to date. This is a direct copy of my thread from another big forum :)

 

 

AMD


 

The Overclockers AMD range can be found here

 

 

Freesync


What do we know?

We believe FreeSync is AMD's answer to Nvidia's G-Sync technology, the trick AMD has up its sleeve.

 

General Information/News

 

According to the company’s senior engineers, they can replicate many of the advantages of Nvidia’s G-Sync tech through the use of what are called dynamic refresh rates. Multiple generations of AMD video cards have the ability to alter refresh rates on the fly, with the goal of saving power on mobile displays. Some panel makers offer support for this option, though the implementation isn’t standardized. AMD engineers demoed their own implementation, dubbed “FreeSync,” on a laptop at the show.

 

Dynamic refresh rates would theoretically work like G-Sync by specifying how long the display remained blank on a frame-by-frame basis, providing for smoother total movement. AMD has stated that the reason the feature didn’t catch on was a lack of demand — but if gamers want to see G-Sync-like technology, AMD believes it can offer an equivalent. AMD also told Tech Report that it believes triple buffering can offer a solution to many of the same problems G-Sync addresses. AMD’s theory as to why Nvidia built an expensive hardware solution for this problem is that Nvidia wasn’t capable of supporting G-Sync in any other fashion.
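To make the dynamic-refresh idea concrete, here is a minimal sketch (my own illustration, not AMD's or Nvidia's code) of why matching the refresh to each frame's render time is smoother than a fixed 60 Hz refresh with VSync; the render times are hypothetical:

```python
import math

# Hypothetical per-frame GPU render times in milliseconds
render_times_ms = [14.0, 19.0, 16.0, 22.0, 15.0]
REFRESH_MS = 1000 / 60  # one fixed 60 Hz refresh interval (~16.7 ms)

# Fixed refresh + VSync: a finished frame waits for the next refresh boundary
shown_at, elapsed = [], 0.0
for t in render_times_ms:
    elapsed += t
    shown_at.append(math.ceil(elapsed / REFRESH_MS) * REFRESH_MS)

deltas = [round(b - a, 1) for a, b in zip([0.0] + shown_at, shown_at)]
print("fixed 60 Hz:", deltas)               # [16.7, 16.7, 16.7, 33.3, 16.7]
print("dynamic refresh:", render_times_ms)  # the display just matches the GPU
```

The one slow 22 ms frame turns into a visible 33.3 ms hitch under fixed-rate VSync, while a per-frame refresh simply shows frames as they finish.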

 

 

No videos can be found at this stage that do not ruin the effect of FreeSync.

 

Credit to AnandTech & ExtremeTech


True Audio


 

How AMD TrueAudio removes constraints on sound developers

AMD TrueAudio technology is all about giving sound engineers the freedom to follow their imaginations and the power to make their games sound as convincing as they look. Here are some of the features that make that possible:

A dedicated digital signal processor (DSP) is built into the AMD GPU core. That’s a hefty dose of processing power committed just to generating immersive soundscapes. That doesn’t just enable new and exciting audio features in games; it also saves CPU cycles that can be used for other tasks.

Programmable sound effects bring to audio the same kind of flexibility that programmable shaders brought to graphics. Game developers aren’t stuck with inflexible canned effects anymore. They now have the flexibility to create complex effects and acoustic environments.

More voice channels and audio objects mean game developers no longer have the unenviable task of determining which sounds are expendable. AMD TrueAudio technology multiplies the number of sounds a game can generate at once, giving developers the capacity to create much more lifelike soundscapes.

True-to-life echoes and convolution reverb are finally available on all platforms, meaning programmers can now build these effects into their games rather than relying on basic reverbs. The types of real-world acoustic phenomena that can be faithfully reproduced are massively greater. (A one-line convolution sketch follows this list.)

Multi-channel spatialization brings in-headset surround sound with accurate positional audio algorithms to all gamers, not just the ones with the priciest headgear.
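Since convolution reverb is called out above, here is a one-line illustration of what that effect actually computes (a toy sketch: the impulse response here is a synthetic decay, where a real game would convolve against a measured room response):

```python
import numpy as np

dry = np.random.randn(48_000)                              # 1 s of audio at 48 kHz (stand-in)
impulse_response = np.exp(-np.linspace(0.0, 8.0, 24_000))  # toy decaying "room"
wet = np.convolve(dry, impulse_response)                   # the reverberant result
print(wet.shape)  # (71999,) - the tail rings on after the dry signal ends
```

Convolving every voice against long impulse responses like this is exactly the kind of highly parallel DSP work the dedicated TrueAudio block is meant to take off the CPU.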

 

What it means for gamers

By leveraging AMD TrueAudio technology, game developers now have the ability to recreate acoustic environments with incredible fidelity, and to bring them to every gamer. With AMD technology inside all the most popular “next-generation” gaming platforms, including the PC, gamers across the entire industry will finally enjoy the best audio today’s technology can offer.

 

From AMD Engineers


 

Demonstration:


 


 

Mantle


 

Taken from Wikipedia

Mantle is a graphics API specification developed by AMD as an alternative to Direct3D and OpenGL, primarily for use on the PC platform. Currently the only implementation is for graphics processing units with AMD's Graphics Core Next architecture.[4] The design goals of Mantle are to allow games and applications to utilize the CPU and GPU more efficiently, eliminate CPU bottlenecks by reducing API validation overhead and allowing more effective scaling on multiple cores, provide faster draw routines, and allow greater control over the graphics pipeline by eliminating certain aspects of hardware abstraction inherent to the current prevailing graphics APIs.

 

AMD has stated that Mantle will be an open API. It is unknown when the Mantle specification and development materials will be released to the public, although AMD's director of software alliances and developer relations stated in an interview that Mantle may be made public in early 2014, or the year after. As of March 2014, the Mantle specification and development materials remain unavailable to the general public.

 

Claimed Advantages

Reduced runtime shader compilation overhead

AMD claims that Mantle can generate up to 9 times more draw calls per second than comparable APIs by reducing CPU overhead.

Performance increase over higher-level APIs such as Direct3D and OpenGL.

Explicit command buffer control

Up to 45% faster than Direct3D in Battlefield 4 and up to 319% faster in the Star Swarm demo in a single-GPU configuration in extremely CPU-limited situations.

Better control over the hardware.

"All hardware capabilities are exposed through the API."

Low-overhead validation and processing of API commands

Reduction of command buffer submissions

Data format optimizations via flexible buffer/image access

Explicit control of resource compression, expansion, and synchronization

Asynchronous DMA queue for data uploads independent from the graphics engine

Asynchronous compute queue for overlapping of compute and graphics workloads

Advanced Anti-Aliasing features for MSAA/EQAA optimizations

New rendering techniques.

Close to linear performance scaling from recording command buffers onto multiple CPU cores (see the sketch after this list)

Multithreaded parallel CPU rendering support for at least 8 cores.

No game developer reliance on existing AMD driver support release schedules (potentially no or fewer bugs on release, much faster patching for GFX related errors).

Due to bypassing of error-prone and inefficient abstraction, common technical difficulties like FPS drops, micro stuttering and texture corruption can be significantly less frequent or nonexistent, though Mantle currently exhibits these more commonly than DirectX and OpenGL.
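The Mantle API itself has never been published, so the following is only a generic, hedged sketch of the multi-core command-buffer recording pattern referenced in the list above; every name in it is invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def record_command_buffer(object_batch):
    """Record draw commands for one batch; each CPU core works independently."""
    commands = []
    for obj in object_batch:
        commands.append(("bind_pipeline", obj))
        commands.append(("draw", obj))
    return commands

scene = list(range(10_000))                    # 10,000 hypothetical objects
batches = [scene[i::4] for i in range(4)]      # split the scene across 4 cores

with ThreadPoolExecutor(max_workers=4) as pool:
    buffers = list(pool.map(record_command_buffer, batches))

# One cheap submission instead of 20,000 individually validated API calls
queue = [cmd for buf in buffers for cmd in buf]
print(f"submitting {len(queue)} pre-recorded commands in one go")
```

The claimed draw-call gains come from exactly this shape of work: recording scales across cores, and validation overhead is paid once per buffer rather than once per call.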

 

Cards Supported

GCN-based cards (AMD 7000 series & above). There is a chance Nvidia could use this technology too, but this is yet to be confirmed.

 

Information:


 


 

Benchmarks:

 


 


 

Crossfire



AMD's explanation:

 

AMD CrossFire™ harnesses the power of two or more discrete graphics cards working in parallel to dramatically improve gaming performance. With AMD CrossFire™-certified AMD Radeon™ HD graphics cards ready for practically every budget and the flexibility to combine two, three or four GPUs, AMD CrossFire™ is the perfect multi-GPU solution for those who demand the best.

 

Performance

By combining the intelligent software algorithms of the AMD Catalyst™ suite with dedicated scaling logic in each AMD graphics processor, a gaming rig equipped with AMD CrossFire™ technology can deliver up to four times the performance of a system with a single graphics card. That’s performance you can count on to build your empire, take the pole position or keep your platoon alive.

 

Detail

Let other people worry about optimal settings and resolutions. Multiple AMD Radeon™ HD graphics processors give you and your system the power to see every game just as the developer intended: in up to extreme high definition, with razor sharp clarity and uncompromising detail.

 

Flexibility

Whether you’re a grizzled veteran or fresh from basic, there’s an AMD CrossFire™ solution to meet your needs and budget. With a complete range of AMD Radeon™ HD graphics cards; flexible configurations; and an array of partnerships for AMD CrossFire™-certified motherboards, cases and power supplies, it’s easy to dial in the performance that you deserve. Still need more? Add another AMD Radeon™ HD graphics card at a later date and take your rig to the next level all over again.

AMD CrossFire™ technology is also the perfect way to help extend the life of your system. If you’re tired of turning back the settings or lowering the resolution, another AMD Radeon™ HD graphics card could be the shove your system needs to get back in the game.

 

Compatible chipsets

Slightly old, but a good reference point:


 

Example Performance improvement

See below: a 290X in single-card and CrossFire configurations, where CrossFire gains an extra ~40 FPS.

[Benchmark image courtesy of TweakTown]

 

 

XDMA: an enhancement to Crossfire

Credit to the AMD Blog, posted by rhallock in AMD Gaming on Jan 3, 2014: http://community.amd.com/community/amd-blogs/amd-gaming/blog/2014/01/03/modernizing-multi-gpu-gaming-with-xdma

 

A lesser-known feature of the AMD Radeon™ R9 290 and R9 290X is a new technology called AMD CrossFire™ Direct Memory Access, or “XDMA” for short. XDMA is a modernization of multi-GPU configurations that totally overhauls how multiple GPUs communicate with one another and stay synchronized during intense gaming. Today we will explore how the feature functions, what problems it solves, and what scenarios it’s designed to accommodate.

 

Before we explore the drastic improvements presented with XDMA, however, we should first start by exploring the old way of performing multi-GPU communication.

 

THE OLD WAY OF DOING MULTI-GPU

Prior to the advent of XDMA, a “bridge” or “connector” of some fashion was required. This bridge was installed on the exterior of a graphics card, fitting onto small golden fingers protruding from the circuit board of the graphics card.

[Image: the bridge connector installed on two AMD Radeon™ HD 7970 GHz Edition GPUs]

 

An external bridge was considered a modern solution that gave two (or more) GPUs the ability to communicate on a very important task: copying data between the GPUs to show you a rendered frame of your favorite game.

 

While the external bridge has been an effective multi-GPU solution for many years in the graphics industry, we are coming upon an era when that is no longer the case. To wit, the bandwidths provided by today’s bridge solution are insufficient to fully accommodate the new generation of high-resolution 4K displays. As the AMD Radeon R9 290 and R9 290X are designed with this resolution in mind, it was time to bring in a fresh approach to multi-GPU systems.

 

MODERN MULTI-GPU WITH XDMA

At a principal level, XDMA dispenses with the external bridge by opening a direct channel of communication between the multiple GPUs in a system. This channel operates over the very same PCI Express® bus in which your AMD Radeon graphics cards are currently installed. The exclusive function of that bus is to shuttle graphics data between GPUs and your processor, so it’s already well suited to the new task of collecting and showing the data each GPU is working on when playing games.

 

It just so happens that the PCI Express bus also provides a tremendous amount of bandwidth—far more than can be allocated to today’s external bridges! As noted by Anandtech in their comprehensive analysis of XDMA, the bandwidth of an external bridge is just 900MB/s, whereas PCI Express® can provide up to 32GB/s with a PCIe 3.0 x16 slot (about 35x more bandwidth).
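The arithmetic behind those numbers is worth spelling out (using the figures quoted above, plus an assumed 32-bit frame buffer):

```python
bridge_gbps = 0.9            # external CrossFire bridge, ~900 MB/s
pcie3_x16_gbps = 32.0        # PCIe 3.0 x16, as quoted above
print(pcie3_x16_gbps / bridge_gbps)   # ~35.6x more bandwidth

# One 3840x2160 frame at 4 bytes per pixel, moved 60 times per second:
frame_gb = 3840 * 2160 * 4 / 1e9
print(frame_gb * 60)         # ~2.0 GB/s just to pass finished frames around
```

At roughly 2 GB/s of finished frames alone, a 4K CrossFire setup already needs more than double what the old bridge could carry, which is why the PCIe path matters.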

 

In dynamically taking a portion of that superhighway to negotiate rendering with multiple GPUs, AMD CrossFire can efficiently negotiate UltraHD scenarios. This is one of the many reasons why we say that the AMD Radeon R9 290 and R9 290X are uniquely suited, at a hardware level, for gaming at 3840x2160.

 

Diving more deeply into the technology, XDMA specifically and directly connects the “display controllers” on the respective GPUs in an AMD CrossFire configuration. These display controllers are responsible for taking a rendered scene in a game from the GPU pipeline and formatting it to send over the display cable to a monitor. XDMA provides an easier and more extensible method of transferring the frame from the GPU it was rendered on, to the GPU driving the display cable, using the high bandwidth of PCIe, while avoiding extra connectors and cables.

 

FACTS ABOUT XDMA

Rather than dig through more technical jargon, we wanted to jump to some essential facts that we wanted you to know about this great technology:

XDMA is a unique solution in the graphics industry; no similar technologies presently exist for consumer GPUs.

In case you didn’t catch it, XDMA eliminates the need to install any bridge. Install matching GPUs and you’re set!

XDMA is designed for optimal performance with systems running PCI Express 2.0 x16 (16GB/s), PCI Express 3.0 x8 (16GB/s), or PCI Express 3.0 x16 (32GB/s).

Bandwidth of the data channel opened by XDMA is fully dynamic, intelligently scaling with the demands of the game being played, as well as adapting to advanced user settings such as vertical synchronization (vsync).

Designed for UltraHD via DisplayPort™, which permits 2160p60 gaming on the AMD Radeon R9 290 Series.

XDMA fully supports the “frame pacing” algorithms implemented into the AMD Catalyst™ driver suite.

Products without XDMA are scheduled to receive a new AMD Catalyst driver in January that will resolve uneven frame pacing as a symptom of the more limited bandwidth provided by an external bridge.


 


 

AMD 3D



AMD HD3D Technology is supported by an advanced and open ecosystem that, in conjunction with specific AMD hardware and software technologies, enables 3D display capabilities for many PC applications and experiences.

Your PC has evolved, offering an unprecedented amount of games, photos and movies for you to play, watch, design, create, share and download in 3D. With the arrival of the latest in 3D technology you can now enjoy an enhanced visual experience on Stereo 3D-capable desktops and notebooks powered by AMD HD3D Technology.

Systems and hardware supported by AMD HD3D Technology

 

Recommended 3D Displays

See here:


 

What are the system requirements for HD3D?

AMD Radeon™ HD5000 series or above graphics card

Latest graphics card drivers from the AMD website: www.amd.com/drivers

Supported 3D Display with HDMI™ 1.4a, DisplayPort 1.2, or DVI input.

3D glasses are supplied by the display manufacturer: Currently supported displays

3D middleware software, such as TriDef, to convert games that don’t have native 3D support

Additional requirements for 3D Blu-Ray movie playback:

Blu-ray optical disc drive

AMD Radeon HD6000 or above graphics card for Blu-ray 3D playback

Blu-ray playback software such as Cyberlink PowerDVD 10, ArcSoft or similar

 

Why do I need to use 3D middleware?

3D middleware, such as TriDef, is used to convert games and applications that are not natively 3D ready.

The latest list of stereo 3D games ​shows games that are either natively supported or supported by 3D middleware.


 

Videos:


 


 

References:

 

3D Vision vs AMD 3D


Stereoscopic games:


 


 

 

Eyefinity


 

What is it?

AMD Eyefinity technology maximises your field of view across up to six displays, fully engaging your peripheral vision. For gamers this puts you right in the game, and for other applications it helps to increase productivity by maximising your visual workspace so that you can see more windows simultaneously.

 

Benefits:

Gaming - Immerse yourself in game play

 

Get a commanding view of the action, and enjoy more control in real-time strategy games.

Detect enemies sooner, react faster, and survive longer in first-person-shooter games.

See enemy aircraft with peripheral vision, and fly with greater spatial awareness in flight combat simulators.

Eliminate blind spots and feel a heightened sense of speed in racing games.

Productivity - Helps you get more done:

 

Optimize productivity by increasing PC desktop workspace with multiple high-resolution monitors.

Manage multitasking more efficiently, and view more data, applications, and images at once.

Avoid time-wasting application-switching, window-sorting, mouse-clicking, and scrolling.

Entertainment - Maximize your leisure time:

 

Group multiple monitors into a large integrated display surface for the ultimate wide-screen home theater display.

View TV sports, movies, or video entertainment on one monitor while viewing online stats, Internet pages, or games on other displays.

 

How many Displays?

Every family of GPUs supports a different maximum number of displays. This support is inherent to the AMD graphics chip at the heart of your graphics card.

Before looking through the table, though, keep in mind that the maximum number of supported displays can differ from the number of display outputs on the card. Certain adapters, hubs, or a non-reference graphics card may be required to take full advantage of the capabilities we build into our chips.

AMD Radeon™ graphics solutions

 

Up to 6 displays    

AMD Radeon™ HD 7900 Series

AMD Radeon™ HD 7800 Series

AMD Radeon™ HD 7700 Series

AMD Radeon™ HD 6900 Series

AMD Radeon™ HD 6900M Series

AMD Radeon™ HD 6800 Series

AMD Radeon™ HD 6800M Series

AMD Radeon™ HD 6700M Series

AMD Radeon™ HD 6600M Series

AMD Radeon™ HD 6500M Series

ATI Radeon™ HD 5800 Series

Up to 5 displays    

AMD Radeon™ HD 6700 Series

ATI Radeon™ HD 5700 Series

Up to 4 displays    

AMD Radeon™ HD 6600 Series

AMD Radeon™ HD 6500 Series

AMD Radeon™ HD 6400M Series

AMD Radeon™ HD 6300M Series

Up to 3 displays    

AMD Radeon™ HD 6400 Series

ATI Radeon™ HD 5600 Series

ATI Radeon™ HD 5500 Series

ATI Radeon™ HD 5400 Series

AMD FirePro™ Professional Graphics

Up to 6 displays    

ATI FirePro™ V9800

Up to 4 displays    

ATI FirePro™ V8800

AMD FirePro™ V7900

ATI FirePro™ 2460 Multi-View

Up to 3 displays    

ATI FirePro™ V7800

AMD FirePro™ V5900

ATI FirePro™ V5800

ATI FirePro™ V4800

 

Crossfire & Eyefinity possible?

Any AMD Eyefinity technology configuration that works with a single graphics card will work with AMD CrossFire™ technology, however all monitors must connect to the primary graphics card. In most systems, this will be the GPU installed closest to the CPU. This is true for both AMD Radeon™ graphics and AMD FirePro™ professional graphics products.​


 

Photos:


 

Videos


 


 


 

Gaming Evolved


 

What is it?

AMD’s Gaming Evolved program represents a deep commitment to PC gamers, PC game developers, and the PC gaming industry to deliver innovative technologies, nurture open industry standards, and help the gaming industry create the best possible gaming experience on the world’s best gaming platform: the PC.

 

AMD's commitment, a promise, and a pledge

AMD commits to driving PC gaming innovation and delivering the advanced next-generation technology that is the heart of PC gaming. They promise to nurture the expertise, creativity, and open standards needed for inventing thrilling and immersive PC gaming experiences. AMD pledges unconditional support for the entire PC gaming industry to help keep it thriving and producing the great products that define our entertainment culture.

 

Gaming evolved App

"The simplest way to get the best gaming experience, every time you play" - From AMD

What does the App offer?

Keep drivers up to date

Get the optimal quality and performance settings based on your hardware configuration

Earn real rewards just for using the app: free games, hardware, discounts, and more!

Broadcast live video via Twitch, watch streams, take screenshots, and share them with your friends without ever leaving your game!

 

Extra Benefits

New graphics cards from AMD are normally bundled with free games, provided the shop supports this. Overclockers normally go above & beyond in this area :)

 

Supported games?

See here:


 

See AMD's playlist on Gaming Evolved


 

 

AMD Gaming Evolved Game DVR

Adds continuous recording with virtually no performance cost

 

The Gaming Evolved client has always been designed to enhance as many aspects of your PC gaming as possible, and with the introduction today of new, automated DVR and streaming features, we’re taking that concept to another level.

 

 

Just like the popular feature many people use with their TVs, the new DVR feature is always recording your gameplay, making up to the last 10 minutes available for replay, saving and sharing. The next time you bust an astounding move, you’ll be able to share it with all your friends. The feature is intended to be left always on, so you never have to worry about forgetting it, and the overhead required to keep it running in the background is minimal. It’s a terrific way of saving and sharing your best moments, including the ones you didn’t plan for.
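Conceptually, the always-on DVR is a rolling buffer: only the most recent minutes of encoded video are kept, and "saving" a highlight just flushes that buffer to disk. A minimal sketch of the idea (illustrative only, not AMD's implementation; the chunk size is an assumption):

```python
from collections import deque

KEEP_SECONDS = 10 * 60            # the client keeps up to the last 10 minutes
CHUNK_SECONDS = 2                 # hypothetical encoded-chunk granularity

buffer = deque(maxlen=KEEP_SECONDS // CHUNK_SECONDS)  # old chunks fall off

def on_encoded_chunk(chunk: bytes):
    """Fed continuously by the hardware encoder while you play."""
    buffer.append(chunk)

def save_highlight(path: str):
    """Hotkey handler: write whatever is currently buffered out as a clip."""
    with open(path, "wb") as f:
        for chunk in buffer:
            f.write(chunk)
```

Because old chunks are discarded automatically, the memory and disk overhead stays constant no matter how long you play, which is why the feature can be left on all the time.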

As great as DVR is, sometimes a recording is not what you’re after and only live action will do. For that, we’re happy to announce that Twitch is now integrated into the AMD Gaming Evolved Client. Twitch is the leading platform for live game streaming. Now you can use it to share live gameplay, broadcast tournaments and show your skills to the world, all from within the AMD Gaming Evolved Client.

To top it all off, this newest version of the AMD Gaming Evolved Client (version 3.9) implements some exciting improvements to the user interface, making it easier to use than ever before, and seamlessly integrating the newest features.

Remember, the DVR and Twitch streaming features are in beta, so there are still some kinks to work out. But we invite you to try the new features and let us know what you think. We hope you find the new additions as exciting as we do.

 

Jay Lebo is a Product Marketing Manager at AMD. His postings are his own opinions and may not represent AMD’s positions, strategies or opinions. Links to third party sites are provided for convenience and unless explicitly stated, AMD is not responsible for the contents of such linked sites and no endorsement is implied.

 

For a full rundown, see:


 

Videos

 


 



 

TressFX


 

What is TressFX?

TressFX Hair revolutionizes in-game hair by using the DirectCompute programming language to unlock the massively-parallel processing capabilities of the Graphics Core Next architecture, enabling image quality previously reserved for pre-rendered images. Building on AMD's previous work on Order Independent Transparency (OIT), this method makes use of Per-Pixel Linked-List (PPLL) data structures to manage rendering complexity and memory usage.
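To unpack the Per-Pixel Linked List idea: every transparent hair fragment is appended to a list headed at its pixel, and each list is later sorted by depth and blended back-to-front. Below is a tiny CPU-side analogue of that data structure (the GPU version uses an atomic counter and a node buffer, but the structure is the same; the depth convention here assumes larger = farther):

```python
heads = {}   # pixel (x, y) -> index of the newest node (the "head" texture)
nodes = []   # flat node pool: (depth, rgba, next_index)

def append_fragment(x, y, depth, rgba):
    nodes.append((depth, rgba, heads.get((x, y), -1)))  # link to previous head
    heads[(x, y)] = len(nodes) - 1                      # atomic exchange on GPU

def resolve(x, y, background):
    """Walk the pixel's list, sort far-to-near, alpha-blend onto background."""
    frags, i = [], heads.get((x, y), -1)
    while i != -1:
        depth, rgba, i = nodes[i]
        frags.append((depth, rgba))
    color = background
    for _, (r, g, b, a) in sorted(frags, reverse=True):  # farthest first
        color = tuple(a * s + (1 - a) * d for s, d in zip((r, g, b), color))
    return color
```

The point of the linked list is that fragments can arrive in any order during rendering; correct transparency is recovered per pixel in the resolve pass.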

 

What Can TressFX do?

Hair styles are simulated by gradually pulling the strands back towards their original shape after they have moved in response to an external force. Graphics cards featuring the Graphics Core Next architecture, like select AMD Radeon™ HD 7000 Series GPUs, are particularly well-equipped to handle these types of tasks, with their combination of fast on-chip shared memory and massive processing throughput on the order of trillions of operations per second. (Source: http://community.amd.com/community/amd-blogs/amd-gaming/blog/2013/03/05/tomb-raider-launches-with-revolutionary-tressfx-hair)

 

What Games Support TressFX

Tomb Raider [2013 edition]

Thief

 

TressFX 2.0

In addition, AMD revealed that it will be presenting TressFX 2.0, a newer version of TressFX with better performance and easier integration into games. Additionally, research is now being done to use TressFX for other things besides hair, like grass and fur.

 

What cards Support TressFX

Any recent card can support TressFX; however, AMD cards have better performance at this moment in time, most likely due to TressFX being developed in close collaboration between AMD & Crystal Dynamics.

 

Comparisons


 


 

TressFX Vs TressFX 2.0


 


 

HSA

 


What is Heterogeneous Computing?

Many in the industry have adopted the term “heterogeneous computing” to describe an approach that tightly integrates CPUs, GPUs, DSPs and other programmable accelerators into a single System on Chip (SoC).   

Processor architects apply three broad templates to the chips they design, depending on the specific tasks the chip must handle. 

Their most general design, known as a Central Processing Unit (CPU), addresses the wide range of tasks required to run an operating system and support application programs such as web browsers, word processors and video games. 

CPUs struggle with the massive calculations needed to manipulate 3-D images on a computer display in real time, so chip architects devised specialized Graphics Processing Units (GPUs) to handle these types of parallel operations. 

Contemporary GPUs not only accelerate a broad range of parallel computations; they also help reduce the energy needed to perform those operations, a desirable feature in power-constrained environments. 

Conventional CPU designs also struggle when called upon to support high-speed streaming data in digital communications systems and audio applications, along with a variety of other low latency, compute-intensive tasks, so specialized units, known as Digital Signal Processors or DSPs, emerged to handle these workloads.

 DSPs embedded in your cell phone convert the analog signals in your speech into a stream of bits, move those bits to the nearest cell phone tower using a variety of radio protocols, and then convert any received digitized voice signals back into a form you can hear using the phone’s speaker. 

 That same DSP also handles the data you send and receive via WiFi and any Bluetooth communications between your tablet and its keyboard.  

 

Programmable devices optimized for these diverse workloads evolved independently for several decades, and designers often included all three flavors of chips in the systems they assembled. Increased transistor budgets now allow designers to place CPU, GPU and DSP elements onto a single System on Chip (SoC).

This physical integration enables smaller devices, reduces cost and saves power, since on-chip communication uses far less energy than connecting chips via a motherboard or substrate. 

The co-location of these logic blocks on a single slab of silicon is just the beginning. To gain the full benefits of integration, separate functions need shared access to the data they process. 

Getting these diverse processor personalities to work as a team is no easy task; as with any team-building exercise, a manager needs to help the team members work together. The creation of a model that presents these features in a manner comprehensible to mainstream software developers, and supported by their development environments, is a more challenging task. 

We’ll examine moves the industry is taking to address these issues a little later in this document.   

 

Why does Heterogeneous Computing matter?

Today’s computer users want to handle a wide variety of tasks and interact naturally with their systems. Instead of typing on a keyboard and pointing with a mouse, they want to aim their phones and tablets at a scene, pinch and zoom the image they see on the display, and capture a great photo. 

They expect their systems to recognize faces, track eye movements and respond to complex inputs (touch, voice and gestures) in real time. When playing games, they want the objects they blow up to explode realistically, with full visual and physical fidelity. They want batteries that last a long time on a single charge. 

These new user experiences place far greater computational demands on system resources than the usage models of just a few years ago. Conventional CPU designs cannot meet the response time expectations these new workloads demand, especially on limited power budgets. Fortunately, much of this work fits nicely with the talents GPUs and DSPs bring to the table.

 GPUs can easily extract meaning from the pixels captured by a camera, and transform those pixels into objects a CPU can recognize. DSPs have long played a key role in voice and image processing systems. These different processing elements have long been available, albeit packaged discretely. Clever programmers have been able to cobble together proofs of concept that demonstrate what this technology can accomplish. 

The challenge the industry now faces is to integrate these capabilities into standard environments that simplify the programming task for the broader community of software developers who haven’t had an opportunity to earn a PhD in computer science from Stanford or MIT.  

 

Will HSA make the smartphone, tablet or laptop I just bought run better?

Odds are that the system you bought in 2013 includes an SoC that integrates CPU and GPU technology and perhaps a DSP as well. The concept of co-locating all these processors on a single piece of silicon has been around for a few years, and most hardware vendors have adopted it in one form or another. As we’ve noted elsewhere, HSA takes this integration to the next level, by allowing these processors to work harmoniously in a shared memory environment. This aspect of HSA is so new it won’t be available in any systems planned for shipment in 2013. The systems you can buy today deliver better battery life and performance in thinner and lighter form factors than the ones you could buy a few years ago, but as the song goes, you ain’t seen nothing yet. Between now and then, enjoy your new device, and be sure to download the latest apps that take advantage of the integrated features in today’s hardware.

What workloads benefit the most from HSA?

HSA makes it easier to deploy CPU and GPU resources harmoniously to execute application programs, so it makes sense that the applications that benefit the most include those that spend a lot of time performing calculations that can be done in parallel on a GPU. (The CPU side of a current quad-core SoC can deliver around 100 GFLOPS, while the GPU side offers peak rates approaching 700 GFLOPS.)

Although CPU resources are more than adequate for most traditional scalar workloads, they struggle when called on to analyze images, sort through vast databases, or interact with users via “natural user interfaces.” Fortunately, these latter tasks are just the ones that benefit the most from the GPU on the SoC; it’s just a matter of adapting applications to become “HSA-aware.”
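Those peak figures translate into a simple, if idealized, speedup estimate; the 80% parallel fraction below is purely an assumption for illustration:

```python
cpu_gflops, gpu_gflops = 100, 700
ratio = gpu_gflops / cpu_gflops              # 7x peak on purely parallel work

parallel_fraction = 0.8                      # hypothetical workload split
overall = 1 / ((1 - parallel_fraction) + parallel_fraction / ratio)
print(f"{ratio:.0f}x peak, {overall:.1f}x overall")   # 7x peak, ~3.2x overall
```

The serial portion dominates quickly, which is why HSA's focus is on making it cheap and easy to hand just the parallel parts of a program to the GPU.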

 

More References:

 


 


 


 

 


 

AMD Workstation range


What is it?

AMD FirePro™ W-Series graphics cards are designed for CAD/CAM/CAE, Media & Entertainment and Medical Imaging professionals who value innovation and demand the highest quality, reliability and application performance from their workstation.

 

What are the cards?

AMD FirePro™ Graphics for Desktop Workstations

We push the limits of graphics technology to help transform your desktop into a workstation that lets you see more and do more.

Optimized + Certified for Professional Applications

We stocked our line-up of AMD FirePro™ desktop workstation graphics cards with powerful next-gen technologies to efficiently balance 3D workloads without compromising the outstanding visual quality, app responsiveness and compute performance that CAD/CAE and Media & Entertainment professionals need.

AMD FirePro W9000

AMD FirePro W8000

AMD FirePro V7900

AMD FirePro W7000

ATI FirePro V5800

ATI FirePro V5800 DVI

AMD FirePro W5000

AMD FirePro W5000 DVI

AMD FirePro V4900

Multidisplay Solutions for Professional Multitaskers

When your productivity relies on multitasking across several monitors, optimize your workstation with a low-profile, energy-efficient AMD FirePro multidisplay graphics card. Full compatibility with VGA, DVI and DisplayPort™ technologies makes for easy desktop configuration.

AMD FirePro W600

ATI FirePro 2460

AMD FirePro 2270

 

 

 

What card is best for me?

 

Computer Aided Design & Engineering

AMD FirePro™ Workstation Graphics Solutions

Optimized & Certified for Design, Engineering and Manufacturing Workflows

AMD FirePro professional graphics cards are optimized and certified for all major CAD applications, including AutoCAD®, SolidWorks®, Creo™, Catia, TopSolid and many others.

A rigorous and exacting certification process, conducted by software vendors, puts AMD FirePro workstation graphics up against a series of simulations and real-world scenarios to ensure the compatibility and stability required by engineers and other industry professionals.

AMD FirePro™ V4900

Solid Performance & Outstanding Reliability for Entry-Level Professional

GOOD FOR:

Small-Medium Assemblies

2D/3D Design & Modeling

Manufacturing

 

 

 

AMD FirePro™ W5000

Our Most Powerful Mid-Range Workstation Graphics Card Available

GREAT FOR:

Complex 3D Design

Animation & Sub-assemblies

Simulation & Rendering

 

 

 

AMD FirePro™ W8000

Category-Leading Compute Performance & Amazing Responsiveness

BEST FOR:

Advanced 3D Modeling & Animation

Massive Assemblies

Complex Simulation & Rendering

 

AMD FirePro™ Workstation Graphics Solutions

Optimized & Certified for Media & Entertainment Workflows

AMD FirePro workstation graphics cards are optimized and certified for many major Media & Entertainment (M&E) applications, including Autodesk® Maya®, Motionbuilder®, Softimage® and 3ds Max® among others.

AMD FirePro technology delivers industry-leading graphics quality and exceptional app responsiveness to M&E professionals using locally installed software for their creative designs, animations and video editing projects.

 

AMD FirePro™ W7000

High Performance & Visual Quality for the Budget-Conscious Pro

GOOD FOR:

2D/3D design & asset creation

Video editing & color correction

Animation, modeling & rendering

 

 

AMD FirePro™ W8000

Category-Leading Performance Without Compromise

GREAT FOR:

Massive 3D asset creation

Digital workflows & color correction

Complex visual effects & compositing

 

AMD FirePro™ W9100

Category-Leading Memory and Compute Performance

BEST FOR:

4K multi-monitor workflows

Real-time video editing, effects and color correction

Manipulating massive data sets and assemblies in real-time

 

Display Walls, Medical & Financial Workstations

AMD FirePro™ Workstation Graphics Solutions

For Display Walls, Medical Workstations & Financial Services

 

 

AMD FirePro professional graphics cards are an ideal choice for powering large, high-res display walls with excellent visual quality across multiple monitors. They’re also perfect for smaller financial and medical workstations that need to save space and power without compromising high-end graphics quality.

AMD FirePro™ 2270

Energy-efficiency & ultra-low profile to save space & power

FOR FINANCIAL WORKFLOWS:

DisplayPort™, VGA & DVI support

Low profile form & passive cooling

15W max power consumption

Learn more about AMD FirePro™ 2270

AMD FirePro™ W5000 DVI

The most powerful midrange card for medical imaging

FOR MEDICAL WORKSTATIONS:

Support for two 10MP displays

3D high-res medical imagery

Up to 1,024 unique shades of grey

 

AMD FirePro™ W600

Professional graphics built for multimedia-rich display walls

FOR DISPLAY WALLS:

Dedicated display wall features

Multi-stream audio capabilities

Learn more about AMD FirePro™ W600

 

More references:


 


 


 


 

 

 

NVIDIA


 

The Overclockers NVIDIA range can be found here

 

 

 

DSR or Dynamic Super Resolution



 

Dynamic Super Resolution: 4K-Quality Graphics On Any HD Monitor

 

Our new Maxwell architecture introduces a raft of innovative, exciting technologies that make your games better in dramatic ways. Of these new features, Dynamic Super Resolution (DSR) is most immediately impactful, enhancing any game that supports resolutions above 1920x1080. What does DSR do? Simply put, it renders a game at a higher, more detailed resolution and intelligently shrinks the result back down to the resolution of your monitor, giving you 4K-quality graphics on any screen.

 


 

Enthusiasts with compatible monitors and technical knowledge refer to this process as Downsampling, and for some time they've been applying a basic version to improve fidelity in games. DSR improves upon this process by applying a high-quality Downsampling filter that significantly improves image quality, by making Downsampling compatible with all monitors, by removing the need for technical know-how, and by integrating DSR into GeForce Experience, enabling gamers to apply DSR Optimal Playable Settings with a single click.
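Here is a rough numpy sketch of the core operation (a plain box filter is shown for simplicity; DSR actually applies a higher-quality Gaussian filter, as described above):

```python
import numpy as np

hi_res = np.random.rand(2160, 3840, 3)   # stand-in for a frame rendered at 4K

# Collapse each 2x2 block of rendered pixels into one 1080p output pixel
lo_res = hi_res.reshape(1080, 2, 1920, 2, 3).mean(axis=(1, 3))
print(lo_res.shape)                       # (1080, 1920, 3)
```

Every output pixel is informed by four (or more) rendered samples, which is where the extra detail and reduced aliasing come from; the cost is that the GPU really is rendering four times the pixels.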

 


 

To make DSR easy to use, we've incorporated it into our GeForce Experience Optimal Playable Settings: if your GPU has the performance to play a game at 4K, DSR will be recommended. One-click optimize applies the setting and you're good to go. If the game has a user interface that does not scale correctly with the resolution, we won't recommend DSR.

 

If you still want to enable DSR manually for a game, you can enable it by selecting the DSR resolution in the Optimal Playable Settings customizer dropdown.


 

You can also use the NVIDIA Control Panel to fine tune DSR and filtering smoothness.

 

As of today, DSR is only supported on new GeForce GTX 900 Series GPUs, but in the future we'll be rolling out this incredible new feature to other, similarly powerful NVIDIA GeForce GTX graphics cards. For more information on the inner workings of DSR, please check out our dedicated Dynamic Super Resolution article.

 

4K ShadowPlay Recording

 

GeForce GTX 900 Series GPUs, like the newly-released GeForce GTX 980 and GeForce GTX 970, include an enhanced H.264 hardware encoder, further reducing the already-minimal performance impact of GeForce Experience's ShadowPlay gameplay recorder. This improved hardware also enables the recording of footage at Ultra HD resolutions, beginning with 3840x2160, commonly referred to as "4K".

 

Recording at up to 60 FPS, at bitrates of up to 130 Mbps, ShadowPlay 4K gameplay captures are the best way to get your favorite moments onto YouTube in Ultra HD resolutions, with the absolute best quality possible. And as with 1920x1080 and 2560x1440 captures, all ShadowPlay recording modes, features, and options are available, letting you tailor your recordings to your personal preferences.

 

 

G-SYNC


 

What is G-SYNC

NVIDIA G-SYNC is a groundbreaking new display technology that delivers the smoothest and fastest gaming experience ever.

G-SYNC’s revolutionary performance is achieved by synchronizing display refresh rates to the GPU in your GeForce GTX-powered PC, eliminating screen tearing and minimizing display stutter and input lag.

 The result: scenes appear instantly, objects look sharper, and game play is super smooth, giving you a stunning visual experience and a serious competitive edge.

 

The Problem: Old Tech

When TVs were first developed they relied on CRTs, which work by scanning a beam of electrons across the surface of a phosphor tube. This beam causes a pixel on the tube to glow, and when enough pixels are activated quickly enough the CRT can give the impression of full motion video. Believe it or not, these early TVs had 60Hz refresh rates primarily because the United States power grid is based on 60Hz AC power. Matching TV refresh rates to that of the power grid made early electronics easier to build, and reduced power interference on the screen.

By the time PCs came to market in the early 1980s, CRT TV technology was well established and was the easiest and most cost-effective technology to utilize for the creation of dedicated computer monitors. 60Hz and fixed refresh rates became standard, and system builders learned how to make the most of a less than perfect situation. Over the past three decades, even as display technology has evolved from CRTs to LCDs and LEDs, no major company has challenged this thinking, and so syncing GPUs to monitor refresh rates remains the standard practice across the industry to this day.

Problematically, graphics cards don’t render at fixed speeds. In fact, their frame rates will vary dramatically even within a single scene of a single game, based on the instantaneous load that the GPU sees. So with a fixed refresh rate, how do you get the GPU images to the screen?

The first way is to simply ignore the refresh rate of the monitor altogether, and update the image being scanned to the display in mid cycle. This we call ‘VSync Off Mode’ and it is the default way most gamers play. The downside is that when a single refresh cycle shows 2 images, a very obvious “tear line” is evident at the break, commonly referred to as screen tearing.

The established solution to screen tearing is to turn VSync on, to force the GPU to delay screen updates until the monitor cycles to the start of a new refresh cycle. This causes stutter whenever the GPU frame rate is below the display refresh rate. And it also increases latency, which introduces input lag, the visible delay between a button being pressed and the result occurring on-screen.

Worse still, many players suffer eyestrain when exposed to persistent VSync stuttering, and others develop headaches and migraines, which drove us to develop Adaptive VSync, an effective, critically-acclaimed solution. Despite this development, VSync’s input lag issues persist to this day, something that’s unacceptable for many enthusiasts, and an absolute no-go for eSports pro-gamers who custom-pick their GPUs, monitors, keyboards, and mice to minimize the life-and-death delay between action and reaction.

 

Enter NVIDIA G-SYNC, which eliminates screen tearing, VSync input lag, and stutter. To achieve this revolutionary feat, we build a G-SYNC module into monitors, allowing G-SYNC to synchronize the monitor to the output of the GPU, instead of the GPU to the monitor, resulting in a tear-free, faster, smoother experience that redefines gaming.

Industry luminaries John Carmack, Tim Sweeney, Johan Andersson and Mark Rein have been bowled over by NVIDIA G-SYNC’s game-enhancing technology. Pro eSports players and pro-gaming leagues are lining up to use NVIDIA G-SYNC, which will expose a player’s true skill, demanding even greater reflexes thanks to the unnoticeable delay between on-screen actions and keyboard commands. And in-house, our diehard gamers have been dominating lunchtime LAN matches, surreptitiously using G-SYNC monitors to gain the upper hand.

Online, if you have a NVIDIA G-SYNC monitor you’ll have a clear advantage over others, assuming you also have a low ping.


How To Upgrade To G-SYNC

If you’re as excited by NVIDIA G-SYNC as we are, and want to get your own G-SYNC monitor, you can buy a modded monitor now. 

Select NVIDIA System Builders will be offering ASUS VG248QE monitors that have been specially upgraded with an NVIDIA G-Sync module. These are now available to buy here.

 

Videos


 


 


 



 

PhysX



What is NVIDIA PhysX Technology?

NVIDIA® PhysX® is a powerful physics engine enabling real-time physics in leading edge PC games. 

PhysX software is widely adopted by over 150 games and is used by more than 10,000 developers. PhysX is optimized for hardware acceleration by massively parallel processors. 

GeForce GPUs with PhysX provide an exponential increase in physics processing power, taking gaming physics to the next level.

 

What is physics for gaming and why is it important?

Physics is the next big thing in gaming. It's all about how objects in your game move, interact, and react to the environment around them. Without physics in many of today's games, objects just don't seem to act the way you'd want or expect them to in real life. Currently, most of the action is limited to pre-scripted or ‘canned' animations triggered by in-game events like a gunshot striking a wall. Even the most powerful weapons can leave little more than a smudge on the thinnest of walls; and every opponent you take out, falls in the same pre-determined fashion.

Players are left with a game that looks fine, but is missing the sense of realism necessary to make the experience truly immersive. With NVIDIA PhysX technology, game worlds literally come to life: walls can be torn down, glass can be shattered, trees bend in the wind, and water flows with body and force. 

NVIDIA GeForce GPUs with PhysX deliver the computing horsepower necessary to enable true, advanced physics in the next generation of game titles making canned animation effects a thing of the past.

 

 

Which NVIDIA GeForce GPUs support PhysX?

The minimum requirement to support GPU-accelerated PhysX is a GeForce 8-series or later GPU with a minimum of 32 cores and a minimum of 256MB dedicated graphics memory. However, each PhysX application has its own GPU and memory recommendations. In general, 512MB of graphics memory is recommended unless you have a GPU that is dedicated to PhysX.

 

How does PhysX work with SLI and multi-GPU configurations?

When two, three, or four matched GPUs are working in SLI, PhysX runs on one GPU, while graphics rendering runs on all GPUs. The NVIDIA drivers optimize the available resources across all GPUs to balance PhysX computation and graphics rendering. Therefore users can expect much higher frame rates and a better overall experience with SLI.

A new configuration that’s now possible with PhysX is 2 non-matched (heterogeneous) GPUs. In this configuration, one GPU renders graphics (typically the more powerful GPU) while the second GPU is completely dedicated to PhysX. By offloading PhysX to a dedicated GPU, users will experience smoother gaming.

 

Why is a GPU good for physics processing?

The multithreaded PhysX engine was designed specifically for hardware acceleration in massively parallel environments.

GPUs are the natural place to compute physics calculations because, like graphics, physics processing is driven by thousands of parallel computations.

Today, NVIDIA's GPUs have as many as 480 cores, so they are well-suited to take advantage of PhysX software.

NVIDIA is committed to making the gaming experience exciting, dynamic, and vivid. The combination of graphics and physics impacts the way a virtual world looks and behaves.

 

Can I use an NVIDIA GPU as a PhysX processor and a non-NVIDIA GPU for regular display graphics?

No. There are multiple technical connections between PhysX processing and graphics that require tight collaboration between the two technologies.

To deliver a good experience for users, NVIDIA PhysX technology has been fully verified and enabled using only NVIDIA GPUs for graphics.

 

Can I use an AMD card?

PhysX supports both CPU and GPU simulation. GPU simulation is only available on Nvidia graphics cards, irrespective of CPU.

Well-written applications usually check for GPU simulation support and allow for a CPU fallback if it's not present.

However, CPU PhysX can seriously affect framerate.

 

Games that support PhysX

See link below:


 

Videos:


 


 

 

 


 

 

SLI


 

Quick Understanding

SLI is Nvidia's multi-GPU technology, competing directly with AMD's CrossFire technology.

 

WHAT IS SLI

NVIDIA SLI intelligently scales graphics performance by combining multiple GeForce GPUs on an SLI certified motherboard. With over 1,000 supported applications and used by over 94% of multi-GPU PCs on Steam, SLI is the technology of choice for gamers who demand the very best.

SLI features an intelligent communication protocol embedded in the GPU, a high-speed digital interface to facilitate data flow between the two graphics cards, and a complete software suite providing dynamic load balancing, advanced rendering, and compositing to ensure maximum compatibility and performance in today’s latest games.

 

Features

Beyond just better performance, SLI offers a host of advanced features. For PhysX games, SLI can assign the second GPU for physics computation, enabling stunning effects such as life-like fluids, particles, and destruction. For CUDA applications, a second GPU can be used for compute purposes such as Folding@home or video transcoding. Finally, for the ultimate in image quality, SLI antialiasing offers up to 64xAA with two GPUs, 96xAA with three GPUs, or 128xAA with four GPUs.

 

Scaling

Thanks to Fermi’s architectural innovations, SLI scaling is higher than ever. Across many popular titles, over 80%, and at times 100% performance improvement, can be obtained by adding a second GPU.
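In practical terms, here is what that scaling range means for a game running at a hypothetical 60 FPS on one card:

```python
single_gpu_fps = 60
for scaling in (0.8, 1.0):                        # the 80-100% range quoted above
    print(single_gpu_fps * (1 + scaling), "FPS")  # 108.0 FPS / 120.0 FPS
```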


 

Not just Multiple Video Cards

When SLI was first introduced the technology was used only to connect multiple video cards. In 2005, however, Gigabyte introduced a video card that used SLI technology to connect two different Nvidia GPUs located on the same video card.

This arrangement has become more common over time. Both Nvidia and AMD have released reference design cards featuring two GPUs in the same video card connected via SLI or CrossFire. 

This has confused things a bit because two video cards with two GPUs each would technically be a quad-SLI arrangement even though only two video cards are involved. With that said, these cards are expensive and thus rare, so you can generally assume that if someone is talking about SLI they are talking about the use of two or more video cards.

SLI usually describes a desktop solution, but it is available in gaming laptops. AMD sometimes pairs its APUs with a discrete Radeon GPU, which means you’ll sometimes run across CrossFire laptops that only cost $600 to $800.

Nvidia has also paired a discrete GPU with an integrated GPU in the past. This was branded with the term Hybrid SLI. Nvidia was forced out of the chipset business soon after, however, which meant the company no longer offered integrated graphics. Hybrid SLI is effectively dead as a result.

 

Photos


 

Videos

Everything you need to know about SLI


Benchmarks & Guide


Titan Quad SLI Demo


 


 

 

NVIDIA GameWorks™


NVIDIA GameWorks™ pushes the limits of gaming by providing a more interactive and cinematic game experience, enabling next-gen gaming for current games.

Nvidia provides technologies, e.g. PhysX and VisualFX, which are easy to integrate into games, as well as tutorials and tools to quickly generate game content.

In addition, they also provide tools to debug, profile and optimize your code.

 

Upcoming Technology

NVIDIA FlameWorks enables cinematic smoke, fire and explosions. It combines a state-of-the-art grid based fluid simulator with an efficient volume rendering engine. 

The system is highly customizable, and supports user-defined emitters, force fields, and collision objects

 


 

PhysX FleX is a particle based simulation technique for real-time visual effects. It will be introduced as a new feature in the upcoming PhysX SDK v3.4. 

The FleX pipeline encapsulates a highly parallel constraint solver that exploits the GPU’s compute capabilities effectively. 

 


 

NVIDIA GameWorks technology in released games

Call of Duty: Ghosts is using NVIDIA HairWorks to provide a more realistic Riley and wolves. Each hair asset has about 400-500K hair strands.

Most of these hair assets are created on the fly inside the GPU from roughly 10K guide hairs. 

Additional technologies used in the game are NVIDIA Turbulence for the smoke bombs, as well as TXAA.

 


 

Batman Arkham Origins is loaded with NVIDIA GameWorks technologies: NVIDIA Turbulence for the snow, steam/fog and shock gloves, as well as PhysX Cloth for ambient cloth.

In addition, it uses NVIDIA ShadowWorks for HBAO+ and advanced soft shadows, and NVIDIA CameraWorks for TXAA and DoF.

 


 


 


 

 

ShadowPlay


ShadowPlay From Nvidia

Don't just brag about your gaming wins. Show the world with GeForce ShadowPlay.

Available only with NVIDIA® GeForce Experience™, ShadowPlay records game action as you play—automatically—with minimal impact on performance. It's fast, simple, and free!

Shadow every game.

Share every victory.

ShadowPlay records up to the last 20 minutes of your gameplay.

Just pulled off an amazing stunt? Hit a hotkey and the game video will be saved to disk. Or, use the manual mode to capture video for as long as you like.

You now have a video record of your gaming awesomeness to share or post anywhere.

ShadowPlay even lets you instantly broadcast your gameplay in an HD-quality stream through Twitch.tv.

 

How It Works

 

ShadowPlay has two user-configurable modes. The first, shadow mode, continuously records your gameplay, saving up to 20 minutes of high-quality 1920×1080 footage to a temporary file.

So, if you pull off a particularly impressive move in-game, just hit the user-defined hotkey and the footage will be saved to your chosen directory.

The file can then be edited with the free Windows Movie Maker application, or any other .mp4-compatible video editor, and uploaded to YouTube to share with friends or gamers galore.

Alternatively, in manual mode, which acts like traditional gameplay recorders, you can save your entire session to disk.

The beauty of ShadowPlay is that, because it takes advantage of the hardware built into every GTX GPU, you don’t have to worry about any major impact on frame rates compared to other, existing applications.

 

 

Features

 

GPU accelerated H.264 video encoder

Records up to the last 20 minutes of gameplay in Shadow Mode

Records unlimited length video in Manual Mode

Broadcasts to Twitch

Outputs 1080p at up to 50 Mbps (see the quick arithmetic below)

Minimal performance impact

Full desktop capture (desktop GPUs only)
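A quick bit of arithmetic on the figures above shows the worst-case disk cost of the shadow buffer:

```python
bitrate_mbps = 50                 # maximum ShadowPlay output bitrate
minutes = 20                      # maximum shadow-mode buffer length
size_gb = bitrate_mbps * 60 * minutes / 8 / 1000   # megabits -> gigabytes
print(size_gb, "GB")              # 7.5 GB for a maxed-out shadow recording
```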

 

Requirements:

General System Requirements

 

Operating System:

Windows 8, 8.1

Windows 7

Windows Vista (DirectX 11 Runtime required)

Windows XP SP3 (driver updates only)

RAM:

2GB system memory

Supported Hardware:

 

CPU:

Intel Pentium G Series, Core 2 Duo, Quad Core i3, i5, i7, or higher

AMD Phenom II, Athlon II, Phenom X4, FX or higher

 

 

Disk Space Required:

20MB minimum

Internet Connectivity:

Required

Display Resolution:

Any display with 1024x768 to 3840×2160 resolution

GPU:

 

GPU Series | Driver Updates | Game Optimization | ShadowPlay and SHIELD PC Streaming
GeForce TITAN, GTX 700, GTX 600 | Yes | Yes | Yes
GeForce GTX 800M, GTX 700M, select GTX 600M | Yes | Yes | Yes
GeForce 800, 700, 600, 500, 400 | Yes | Yes | No
GeForce 600M, 500M, 400M | Yes | Yes | No
GeForce 300, 200, 100, 9, 8 | Yes | No | No
GeForce 300M, 200M, 100M, 9M, 8M | Yes | No | No

 

3D Vision

 


What is 3D Vision?


3D Vision (previously GeForce 3D Vision) is a stereoscopic gaming kit from Nvidia which consists of LC shutter glasses and driver software which enables stereoscopic vision for any Direct3D game, 

with various degrees of compatibility. There have been many examples of shutter glasses over the past decade, but the NVIDIA 3D Vision gaming kit introduced in 2008 made this technology available for mainstream consumers and PC gamers.

The kit is specially designed for 120 Hz LCD monitors but is compatible with CRT monitors (some of which may work at 1024×768×120 Hz and even higher refresh rates), DLP-projectors, and others. 

It requires a compatible graphics card from Nvidia (GeForce 200 series or later).

 

Shutter Glasses

The glasses use a wireless IR protocol and can be charged over a USB cable, allowing around 60 hours of continuous use.

The wireless emitter connects to the USB port and interfaces with the underlying driver software. It also contains a VESA Stereo port for connecting supported DLP TV sets, although standalone operation without a PC running the Nvidia 3D Vision driver is not possible.

NVIDIA includes one pair of shutter glasses in their 3D Vision kit, SKU 942-10701-0003. Each lens shutters at 60 Hz, and the two alternate to create a 120 Hz three-dimensional experience.

This version of 3D Vision supports select 120 Hz monitors, 720p DLP projectors, and passive-polarized displays from Zalman.

 

Stereo Driver

The stereo driver software can perform automatic stereoscopic conversion by using the 3D models submitted by the application and rendering two stereoscopic views instead of the standard mono view. The automatic driver works in two modes: a fully "automatic" mode, where the 3D Vision driver controls screen depth (convergence) and stereo separation, and an "explicit" mode, where control over screen depth, separation, and textures is performed by the game developer using the proprietary NVAPI.

The quad-buffered mode allows developers to control the rendering, avoiding the automatic mode of the driver and just presenting the rendered stereo picture to left and right frame buffers with associated back buffers.
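For developers, quad-buffered stereo looks roughly like the following OpenGL/GLUT sketch in C++ (a minimal illustration, assuming a stereo-capable GPU, driver, and pixel format; the fixed eye offset is an arbitrary placeholder for a proper per-eye stereo projection):

```
#include <GL/glut.h>

// Draws a single triangle, shifted horizontally to fake eye separation.
// A real renderer would use per-eye projection matrices instead.
static void drawScene(float eyeOffset) {
    glLoadIdentity();
    glTranslatef(eyeOffset, 0.0f, 0.0f);
    glBegin(GL_TRIANGLES);
    glVertex3f(-0.5f, -0.5f, 0.0f);
    glVertex3f( 0.5f, -0.5f, 0.0f);
    glVertex3f( 0.0f,  0.5f, 0.0f);
    glEnd();
}

static void display(void) {
    glDrawBuffer(GL_BACK_LEFT);                  // left-eye back buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene(-0.03f);

    glDrawBuffer(GL_BACK_RIGHT);                 // right-eye back buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawScene(+0.03f);

    glutSwapBuffers();  // the driver presents the pair in sync with the glasses
}

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    // GLUT_STEREO requests a quad-buffered (left/right, front/back) format.
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STEREO);
    glutCreateWindow("Quad-buffered stereo sketch");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```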

 

3D Vision Requirements

see here:


 

What is supported? (Games/Web/Photos/Blu-Rays)

see here for the full list:


 

Videos:

Setup/Install


 

Performance Review


 


 

 

NVIDIA SURROUND


NVIDIA Surround Technology

NVIDIA Surround supports three displays at up to 2560x1600 resolution each, the same as AMD's Eyefinity, although Eyefinity can currently support up to six displays at 2560x1600 each.

 

How many displays do I need to run 3D Vision Surround?

3D Vision Surround requires three displays or projectors. You can use a variety of displays, including LCD monitors, TVs, or projectors. Please view the 3D Vision Surround system requirements for a full list of supported products.

Please note: NVIDIA 3D Vision Surround and NVIDIA Surround do not support a two-display surround configuration. Both NVIDIA 3D Vision Surround and NVIDIA Surround require three supported displays as defined in the system requirements above.

 

Does 3D Vision Surround support 2D displays?

Yes, you can run Surround in 2D mode. We call this mode Surround (2D). Make sure that 3D Vision is disabled from the Stereoscopic 3D control panel to use Surround (2D) mode.

 

What GPUs does 3D Vision Surround support?

Please view the 3D Vision Surround system requirements for a full list of supported products.

Please note that in the first Beta v258.69 driver for 3D Vision Surround, we do not recommend running GeForce GTX 295 Quad SLI because there are bugs in the driver which may result in system instability. We will be providing a future driver update to support this configuration.

 

Can I use HDMI connectors to run 3D Vision Surround?

3D Vision-Ready LCDs require dual-link DVI connectors and will not work with HDMI connectors. However, you can use HDMI connectors with 3D Vision Projectors that support HDMI input.

Note: 3D DLP HDTVs do use HDMI connectors, but 3D Vision Surround does not support using 3D DLP TVs.

 

What resolutions does Surround support?

Surround requires three displays to operate. A Surround resolution is calculated by arranging your monitors in portrait or landscape mode and multiplying the horizontal resolution of a single display in that orientation by three. For example, three 1920x1080 displays in landscape give 5760x1080, while the same displays in portrait give 3240x1920.

The following table shows some common single-display resolutions and the resulting Surround resolutions:

Single Display    Landscape Surround    Portrait Surround
1280x1024         3840x1024             3072x1280
1920x1080         5760x1080             3240x1920
1920x1200         5760x1200             3600x1920

 

Note: 3D Vision-Ready LCDs do not support portrait mode. However, you can run 3D Vision Surround with 3D projectors in portrait mode.

 

Can I use 3D Vision Surround with three displays and set up an accessory display that is not in 3D?

At this time, accessory displays are only supported in Surround (2D) configuration.

 


 


 


 

CUDA


CUDA™ is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU).

With millions of CUDA-enabled GPUs sold to date, software developers, scientists and researchers are finding broad-ranging uses for GPU computing with CUDA. Here are a few examples:

Identify hidden plaque in arteries: Heart attacks are the leading cause of death worldwide. Harvard Engineering, Harvard Medical School and Brigham & Women's Hospital have teamed up to use GPUs to simulate blood flow and identify hidden arterial plaque without invasive imaging techniques or exploratory surgery.

Analyze air traffic flow: The National Airspace System manages the nationwide coordination of air traffic flow. Computer models help identify new ways to alleviate congestion and keep airplane traffic moving efficiently. 

Using the computational power of GPUs, a team at NASA obtained a large performance gain, reducing analysis time from ten minutes to three seconds.

Visualize molecules: A molecular simulation called NAMD (nanoscale molecular dynamics) gets a large performance boost with GPUs. The speed-up is a result of the parallel architecture of GPUs, which enables NAMD developers to port compute-intensive portions of the application to the GPU using the CUDA Toolkit.

 

Widely Used By Researchers

Since its introduction in 2006, CUDA has been widely deployed through thousands of applications and published research papers, and supported by an installed base of over 500 million CUDA-enabled GPUs in notebooks, workstations, compute clusters and supercomputers. 

 

How to get started

Software developers, scientists and researchers can add support for GPU acceleration in their own applications using one of three simple approaches (a minimal CUDA example of the third approach follows the list):

Drop in a GPU-accelerated library to replace or augment CPU-only libraries such as MKL BLAS, IPP, FFTW and other widely-used libraries

Automatically parallelize loops in Fortran or C code using OpenACC directives for accelerators

Develop custom parallel algorithms and libraries using a familiar programming language such as C, C++, C#, Fortran, Java, Python, etc.
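As a flavour of the third approach, here is a minimal, self-contained CUDA C++ example: a vector add where each GPU thread handles one element (the array size and values are arbitrary, and managed memory is used only to keep the sketch short; it needs CUDA 6 or later):

```
#include <cstdio>
#include <cuda_runtime.h>

// Each thread adds one pair of elements.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;            // 1M elements (arbitrary)
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    // Unified (managed) memory is accessible from both CPU and GPU.
    cudaMallocManaged((void**)&a, bytes);
    cudaMallocManaged((void**)&b, bytes);
    cudaMallocManaged((void**)&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f (expected 3.0)\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```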

 

What is NVIDIA Tesla™?

 

With the world’s first teraflop many-core processor, NVIDIA® Tesla™ computing solutions enable the necessary transition to energy-efficient parallel computing power. With thousands of CUDA cores per processor, Tesla scales to solve the world’s most important computing challenges—quickly and accurately.

 

What is OpenACC?

OpenACC is an open industry standard for compiler directives (hints) that can be inserted into C or Fortran code, enabling the compiler to generate code that runs in parallel on multi-CPU and GPU-accelerated systems. OpenACC directives are an easy and powerful way to leverage the power of GPU computing while keeping your code compatible with non-accelerated, CPU-only systems. Learn more at https://developer.nvidia.com/openacc.
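For illustration, here is a minimal OpenACC-annotated loop in C (a sketch that needs an OpenACC-capable compiler, such as PGI's; with an ordinary compiler the pragma is simply ignored and the loop runs serially on the CPU):

```
#include <stdio.h>

#define N 1000000

int main(void) {
    static float x[N], y[N];
    for (int i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Ask the compiler to offload this loop to the accelerator.
    #pragma acc parallel loop
    for (int i = 0; i < N; ++i) {
        y[i] = 2.0f * x[i] + y[i];   // SAXPY with a = 2
    }

    printf("y[0] = %f (expected 4.0)\n", y[0]);
    return 0;
}
```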

 

What kind of performance increase can I expect using GPU Computing over CPU-only code?

This depends on how well the problem maps onto the architecture. For data-parallel applications, accelerations of more than two orders of magnitude have been seen. You can browse research, developers, applications, and partners on our CUDA In Action page.

 

 

What operating systems does CUDA support?

 

CUDA supports Windows, Linux, and Mac OS. For the full list, see the latest CUDA Toolkit Release Notes; the latest version is available at http://docs.nvidia.com

 

Which GPUs support running CUDA-accelerated applications?

CUDA is a standard feature in all NVIDIA GeForce, Quadro, and Tesla GPUs, as well as NVIDIA GRID solutions. A full list can be found on the CUDA GPUs page.

 

What is the "compute capability"?

The compute capability of a GPU determines its general specifications and available features. For details, see the Compute Capabilities section in the CUDA C Programming Guide.
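A short sketch of how an application can read each GPU's compute capability at runtime through the CUDA runtime API:

```
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        // prop.major and prop.minor together form the compute capability,
        // e.g. 3.5 for a GK110 (Kepler) part.
        printf("GPU %d: %s, compute capability %d.%d\n",
               d, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```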

 

Where can I find a good introduction to parallel programming?

There are several university courses online, technical webinars, article series and also several excellent books on parallel computing. These can be found on our CUDA Education Page.

 

Hardware and Architecture

Will I have to re-write my CUDA Kernels when the next new GPU architecture is released?

No. CUDA C/C++ provides an abstraction; it's a means for you to express how you want your program to execute. The compiler generates PTX code, which is also not hardware-specific. At runtime the PTX is compiled for the specific target GPU; this is the responsibility of the driver, which is updated every time a new GPU is released. Changes in the number of registers or the size of shared memory may open up opportunities for further optimization, but that's optional. So write your code now, and enjoy it running on future GPUs.

 

Does CUDA support multiple graphics cards in one system?

Yes. Applications can distribute work across multiple GPUs. This is not done automatically, however, so the application has complete control. See the "multiGPU" example in the GPU Computing SDK for an example of programming multiple GPUs.
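A minimal sketch of the pattern with the CUDA runtime API (the kernel and the fixed per-GPU chunk size are placeholders; a real application would partition its actual workload):

```
#include <cstdio>
#include <cuda_runtime.h>

// Placeholder kernel: fills this GPU's chunk with a constant.
__global__ void fillChunk(float* data, int n, float value) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = value;
}

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("Found %d CUDA device(s)\n", count);

    const int perGpu = 1 << 20;  // arbitrary number of elements per GPU
    for (int d = 0; d < count; ++d) {
        cudaSetDevice(d);        // subsequent CUDA calls target device d
        float* chunk = nullptr;
        cudaMalloc((void**)&chunk, perGpu * sizeof(float));
        fillChunk<<<(perGpu + 255) / 256, 256>>>(chunk, perGpu, (float)d);
        cudaDeviceSynchronize(); // wait for this device's work to finish
        cudaFree(chunk);
    }
    return 0;
}
```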

 

 

 


 


 


 

 


 

Nvidia Game Stream


zJaARao.png

PC GAMING MADE PORTABLE WITH NVIDIA GAMESTREAM™ TECHNOLOGY.

NVIDIA GameStream harnesses the power of the most advanced GeForce® GTX™ graphics cards, or the GRID cloud (Beta), and exclusive game-speed streaming technologies to bring a low-latency PC gaming experience to your SHIELD portable.

 

With streaming support for over 100 of the hottest PC titles at up to 1080p and 60 frames per second, SHIELD offers a truly unique handheld gaming experience.

 

NEW: Remotely access your PC to play your games away from your home.*

Play your favorite PC games on your HDTV at up to 1080p at 60 FPS using a wireless Bluetooth controller with SHIELD Console Mode.

NEW: Stream games like World of Warcraft, League of Legends, and DOTA 2 to your HDTV using Wi-Fi and play with full Bluetooth keyboard and mouse support.

Download GeForce Experience to see if your PC is ready for GameStream.

 

 

How GAMESTREAM Works

NVIDIA uses the H.264 encoder built into GeForce GTX 650 or higher desktop GPUs and GeForce GTX Kepler and Maxwell notebooks, along with an efficient wireless streaming protocol integrated into GeForce Experience, to stream games from the PC to SHIELD over the user's home Wi-Fi network with ultra-low latency. Gamers then use SHIELD as the controller and display for their favorite PC games, as well as for Steam Big Picture.
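Conceptually, the per-frame loop looks something like this C++ pseudocode (purely illustrative; captureFrame, encodeH264, and sendPacket are hypothetical stand-ins for the real capture, NVENC hardware-encode, and network stages, not NVIDIA's APIs):

```
#include <cstdint>
#include <vector>

// Hypothetical stubs standing in for the real pipeline stages.
static std::vector<uint8_t> captureFrame() {            // grab the rendered frame
    return std::vector<uint8_t>(1920 * 1080 * 4);
}
static std::vector<uint8_t> encodeH264(const std::vector<uint8_t>& f) {
    return f;                                           // real path: NVENC hardware encoder
}
static void sendPacket(const std::vector<uint8_t>&) {}  // real path: Wi-Fi to SHIELD

int main() {
    // Capture, hardware-encode, and ship each frame immediately;
    // doing this ~60 times a second keeps end-to-end latency low.
    for (int frame = 0; frame < 600; ++frame) {         // ~10 seconds at 60 fps
        sendPacket(encodeH264(captureFrame()));
        // Controller input from SHIELD flows back the other way and is
        // injected into the game between frames.
    }
    return 0;
}
```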

In addition to streaming the game, NVIDIA also configures PC games for streaming using GeForce Experience to deliver a seamless out of the box experience. There are three key activities involved here:

 

Generating optimal game settings for streaming games from the PC to SHIELD

We use our GeForce Experience servers to determine the best quality settings based on the user's CPU and GPU, and target higher frame rates than 'normal' optimal settings to ensure the lowest latency gaming experience. These settings are automatically applied when the game is launched so gamers don't have to worry about configuring these settings themselves.

 

Enabling controller support in games

This avoids gamers having to manually configure controller support in the game – it 'just works' out of the box.

 

Optimizing the game launch process

This is important so that gamers can get into the game as quickly and seamlessly as possible, without hitting launchers or other desktop ("2-foot") UI interactions that can be difficult to drive with a controller.

 

Steam Big Picture

Big Picture shows up as a selection in the list of SHIELD-optimized games. Launching it gives you access to your Steam games – both supported and unsupported – plus the Steam Store and all the Steam community features. Supported games are recommended for the best, optimized streaming experience – the latest list of supported games can be found on SHIELD.nvidia.com starting at launch.

Unsupported games may work if they have native controller support, but will not be optimized for streaming and may have other streaming compatibility issues.

 

 

System Requirements for PC Game Streaming:

> GPU:

- Desktop: GeForce GTX 650 or higher GPU, or

- Notebook (Beta): GeForce GTX 800M, GTX 700M and select Kepler-based GTX 600M GPUs

> CPU: Intel Core i3-2100 3.1GHz or AMD Athlon II X4 630 2.8 GHz or higher

> System Memory: 4 GB or higher

> Software: GeForce Experience™ application and latest GeForce drivers

> OS: Windows 8 or Windows 7

> Routers: 802.11a/g router (minimum). 802.11n dual band router (recommended). See list of GameStream ready routers.

 

GameStream Ready Games


 

Setup Guide


 


 


 


 

 

 


 

 

Nvidia Workstation cards


QUADRO ADVANTAGE

The NVIDIA® Quadro® family of products is designed and built specifically for professional workstations, powering more than 200 professional applications across a broad range of industries. From Manufacturing, Sciences and Medical Imaging, and Energy, to Media and Entertainment, Quadro solutions deliver unbeatable performance and reliability that make them the graphics processors of choice for professionals around the globe.

 

DISCOVER THE POWER OF NVIDIA KEPLER™ ARCHITECTURE

Get the power to realise your vision with the new family of NVIDIA Quadro professional graphics. They’re fueled by Kepler, NVIDIA’s most powerful GPU architecture ever, bringing a whole new level of performance and innovative capabilities to modern workstations.

Whether you’re creating revolutionary products, designing groundbreaking architecture, navigating massive geological datasets, or telling spectacularly vivid visual stories, Quadro graphics solutions give you the power to do it better and faster.

 


Guaranteed compatibility through support for the latest OpenGL, DirectX, and NVIDIA CUDA® standards, deep professional software developer engagements, and certification with over 200 applications by software companies.

Maximised performance through the unique capabilities of the latest Kepler GPU—including the SMX next-generation multiprocessor engine, advanced temporal anti-aliasing (TXAA) and fast approximate anti-aliasing (FXAA) modes, and innovative bindless textures technology—as well as larger on-board GPU memory and optimised software drivers.

Transformed and accelerated workflows that drive faster time-to-results in product design or digital content creation. This is made possible through simultaneous design and rendering or simulation using Quadro and NVIDIA Tesla® cards in the same system—called NVIDIA Maximus™.

 

 

 

REIMAGINE THE VISUAL WORKSPACE

 


Quadro solutions combine the most advanced display technologies and ecosystem interfaces to provide the ultimate visual workspace for maximum productivity.

Stunning image quality with movie-quality antialiasing techniques and enhanced color depth, higher refresh rates, and ultra-high screen resolution offered by the DisplayPort standard.

Simplified display scaling through increased display outputs per board, choice of display connections, and multi-display blending and synchronization made possible by NVIDIA Mosaic technology.

Enhanced desktop workspace across multiple displays using intuitive placement of windows, multiple virtual desktops, and user profiles offered by the NVIDIA nView® visual workspace manager.

 

 

GET PERFORMANCE YOU CAN TRUST. EVERY TIME

Reliability is core to all Quadro solutions, and one of the keys to the decade-long industry leadership of Quadro-powered professional desktops. Every product is designed to deliver the peace of mind you need to focus on what you do best—changing the way we all look at the world.

Highest-quality products through power-efficient hardware designs and component selection for optimum operational performance, durability, and longevity.

Simplified software driver deployment for the IT team through a regular cadence of long-life, stable driver releases and quality-assurance processes.

Maximum uptime through exhaustive testing in partnership with leading OEMs and system integrators that simulates the most demanding real-world conditions.

 

CREATE WITHOUT THE WAIT USING NVIDIA MAXIMUS

The NVIDIA Tesla-accelerated computing co-processor pairs with Quadro products to fundamentally transform traditional workflows. Take advantage of the NVIDIA Maximus solution to simulate and visualize more design options, explore more innovative entertainment ideas, and accelerate time to market for a competitive advantage.

 

Quadro graphics card for desktop workstations

Designed and built specifically for professional workstations, NVIDIA® Quadro® GPUs power more than 200 professional applications across a broad range of industries, including Manufacturing, Media and Entertainment, Sciences, and Energy. Professionals like you trust Quadro solutions to deliver the best possible experience with applications such as Adobe CS6, Avid Media Composer, Autodesk Inventor, Dassault Systemes CATIA and SolidWorks, Siemens NX, PTC Creo, and many more. For maximum application performance, add an NVIDIA Tesla® GPU to your workstation and experience the power of NVIDIA Maximus™ technology.

 

The cards

Quadro K6000

The most powerful pro graphics on the planet, built for tackling the largest visual projects, with 12 GB of on-board memory, advanced display capabilities for large-scale visualization, and support for high-performance video I/O.

 

 

Quadro K5000

Extreme performance to handle demanding workloads, 4 GB of on-board memory, advanced quad-display capabilities for large-scale visualization, and support for high-performance video I/O.

 

 

 

 

 

Quadro K5000 for Mac

Extreme performance on the Apple Mac Pro platform for accelerating professional design, animation, and video applications, 4 GB of on-board memory, and advanced quad-display capabilities for large-scale visualization.

 

 

Quadro K4000

Supercharged performance for graphics-intensive applications, large 3 GB on-board memory, multi-monitor support, and stereo capability in a single-slot configuration.

 

 

 

Quadro K2000

Outstanding performance with a range of professional applications, substantial 2 GB on-board memory to hold large models, and multi-monitor support for enhanced desktop productivity.

 

 

Quadro K2000D

Outstanding performance with a range of professional applications, dual-link DVI capability, 2 GB of GDDR5 memory, and multi-monitor support for enhanced desktop productivity.

 

 

Quadro K600

Great performance for leading professional applications, 1 GB of on-board memory, and a low-profile form factor for maximum usage flexibility.

 

 

Quadro 410

Entry-level professional graphics with ISV certifications.

 

references:


 


 


 


 

 


 

 

Reference Sites


LinusTechTips


 

As Fast As Possible




If anyone can tell me how to embed YouTube videos on this forum, I'll fix the videos too :)


█████ > █████ Just kidding... each has its ups and downs.

Nice compilation of useful information.

"If anyone can tell me how to embed YouTube videos on this forum, I'll fix the videos too :)"

Just paste the link and it will embed.

Main: i7 2600 | ASUS P8Z68-V | 2x4GB Vengeance 1600 | GTX 580 | WD Blue 1TB | Antec TP-650C | NH U12S | W7 x64

Backup: X6 1090T | MSI K9A2 Platinum | 4x2GB XMS2 800 | GTX 550Ti | WD Blue 1TB |  Antec VP-450 | CM TX3 | W7 x64


An everyday consumer won't understand or care about any of those things u.u 99% are interested in price-to-performance, where AMD wins so far.

The blood in your heart starts pumping faster when you notice me.

But it's ok.

Judge me for my nickname, my avatar, and the low number of posts I have. I will keep your heart rate raised.


It's for people who want to learn, that's the point... or for threads where people ask why AMD over Nvidia, or vice versa.


I'm interested in the technology, so much so that I've spent the last several days reading up on all the GPUs since the Geforce 256.

 

Though I don't really understand all the architecture pictures, I wanted to look up the advances between each gen. I'm a total geek :(

Linus is my fetish.


"An everyday consumer won't understand or care about any of those things u.u 99% are interested in price-to-performance, where AMD wins so far."

But then again, for someone who wants to sink a large amount of money into a PC component but isn't a 100% enthusiast, this topic is a great help, as they can find every technology out there and compare them.

 

And to be honest, we can pair up cards easily (even though one company has brought out its new line of products while the other's is still a WIP), and they sit at very similar price points.

