
The full Ampere - RTX A6000 now available

williamcll

4-NVIDIA-RTX-Ampere-A6000-top-3.png

With the full 10752 CUDA cores enabled, this deluxe workstation card is surprisingly priced at half of an RTX 8000, which has less than half as many cores.

Quote

NVIDIA has discontinued its Quadro and Tesla series in favor of a single line of products known as NVIDIA RTX Axx or NVIDIA Axx. It matters whether the card carries the RTX branding: the RTX cards will replace the Quadro series, while the non-RTX A40 is basically a Tesla successor that will accompany the already launched GA100-based A100 accelerator. The RTX A6000 is the only graphics card based on the GA102 GPU to feature all CUDA cores enabled. With 10752 CUDA cores, the GPU has a single-precision compute performance of up to 38.7 TFLOPs, which is 3.1 TFLOPs more than the GeForce RTX 3090.


Unlike the gaming card, however, the RTX A6000 features twice the memory capacity at 48 GB. Modules of that density are currently only available with GDDR6 memory technology, hence the memory bandwidth is actually lower on the professional model. The NVIDIA RTX A6000 features four DisplayPort 1.4 connectors and no HDMI 2.1 output. It uses a blower-type cooler, so it is not a datacenter card like the A40, which features only a passively cooled heatsink. Only two cards can be connected through a new low-profile NVLink bridge. The A6000 is a workstation card that also supports NVIDIA vGPU virtualization technologies.

Source: https://videocardz.com/newz/nvidia-announces-rtx-a6000-workstation-graphics-card-with-full-ga102-gpu-is-now-available
Thoughts: Shiny indeed. I wonder what in the production process made Nvidia decide to lower the price? I'm also guessing GDDR6X has no ECC support yet; perhaps there will be a refresh later next year?
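For reference, the 38.7 TFLOPs figure checks out against the core count and the listed boost clock. A quick back-of-the-envelope check (the clock values below are just the public spec-sheet numbers, so treat the result as approximate):

```python
# Rough sanity check of the quoted FP32 figures (boost clocks below are the
# publicly listed spec-sheet values, so treat the result as approximate).
def fp32_tflops(cuda_cores, boost_ghz):
    # Each Ampere CUDA core does one FMA per clock, which counts as 2 FLOPs.
    return cuda_cores * 2 * boost_ghz / 1000

a6000 = fp32_tflops(10752, 1.800)    # ~38.7 TFLOPs
rtx3090 = fp32_tflops(10496, 1.695)  # ~35.6 TFLOPs
print(round(a6000, 1), round(rtx3090, 1), round(a6000 - rtx3090, 1))
```

The difference works out to the ~3.1 TFLOPs gap the article quotes.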

Specs: Motherboard: Asus X470-PLUS TUF Gaming (yes, I know it's poor, but I wasn't informed) | RAM: Corsair VENGEANCE® LPX DDR4 3200MHz CL16-18-18-36 2x8GB

CPU: Ryzen 9 5900X | Case: Antec P8 | PSU: Corsair RM850x | Cooler: Antec K240 with two Noctua Industrial PPC 3000 PWM

Drives: Samsung 970 EVO Plus 250GB, Micron 1100 2TB, Seagate ST4000DM000/1F2168 | GPU: EVGA RTX 2080 Ti Black Edition


What a good-looking card. That NVLink bridge looks great as well.

Our Grace. The Feathered One. He shows us the way. His bob is majestic and shows us the path. Follow unto his guidance and His example. He knows the one true path. Our Saviour. Our Grace. Our Father Birb has taught us with His humble heart and gentle wing the way of the bob. Let us show Him our reverence and follow in His example. The True Path of the Feathered One. ~ Dimboble-dubabob III


6 minutes ago, williamcll said:

4-NVIDIA-RTX-Ampere-A6000-top-3.png

How does the top card get any airflow?

Main System (Byarlant): Ryzen 7 5800X | Asus B550-Creator ProArt | EK 240mm Basic AIO | 16GB G.Skill DDR4 3200MT/s CAS-14 | XFX Speedster SWFT 210 RX 6600 | Samsung 990 PRO 2TB / Samsung 960 PRO 512GB / 4× Crucial MX500 2TB (RAID-0) | Corsair RM750X | Mellanox ConnectX-3 10G NIC | Inateck USB 3.0 Card | Hyte Y60 Case | Dell U3415W Monitor | Keychron K4 Brown (white backlight)

 

Laptop (Narrative): Lenovo Flex 5 81X20005US | Ryzen 5 4500U | 16GB RAM (soldered) | Vega 6 Graphics | SKHynix P31 1TB NVMe SSD | Intel AX200 Wifi (all-around awesome machine)

 

Proxmox Server (Veda): Ryzen 7 3800XT | AsRock Rack X470D4U | Corsair H80i v2 | 64GB Micron DDR4 ECC 3200MT/s | 4x 10TB WD Whites / 4x 14TB Seagate Exos / 2× Samsung PM963a 960GB SSD | Seasonic Prime Fanless 500W | Intel X540-T2 10G NIC | LSI 9207-8i HBA | Fractal Design Node 804 Case (side panels swapped to show off drives) | VMs: TrueNAS Scale; Ubuntu Server (PiHole/PiVPN/NGINX?); Windows 10 Pro; Ubuntu Server (Apache/MySQL)


Media Center/Video Capture (Jesta Cannon): Ryzen 5 1600X | ASRock B450M Pro4 R2.0 | Noctua NH-L12S | 16GB Crucial DDR4 3200MT/s CAS-22 | EVGA GTX750Ti SC | UMIS NVMe SSD 256GB / Seagate 1.5TB HDD | Corsair CX450M | Viewcast Osprey 260e Video Capture | Mellanox ConnectX-2 10G NIC | LG UH12NS30 BD-ROM | Silverstone Sugo SG-11 Case | Sony XR65A80K

 

Camera: Sony ɑ7II w/ Meike Grip | Sony SEL24240 | Samyang 35mm ƒ/2.8 | Sony SEL50F18F | Sony SEL2870 (kit lens) | PNY Elite Performance 512GB SDXC card

 

Network:

Spoiler
                           ┌─────────────── Office/Rack ────────────────────────────────────────────────────────────────────────────┐
Google Fiber Webpass ────── UniFi Security Gateway ─── UniFi Switch 8-60W ─┬─ UniFi Switch Flex XG ═╦═ Veda (Proxmox Virtual Switch)
(500Mbps↑/500Mbps↓)                             UniFi CloudKey Gen2 (PoE) ─┴─ Veda (IPMI)           ╠═ Veda-NAS (HW Passthrough NIC)
╔═══════════════════════════════════════════════════════════════════════════════════════════════════╩═ Narrative (Asus USB 2.5G NIC)
║ ┌────── Closet ──────┐   ┌─────────────── Bedroom ──────────────────────────────────────────────────────┐
╚═ UniFi Switch Flex XG ═╤═ UniFi Switch Flex XG ═╦═ Byarlant
   (PoE)                 │                        ╠═ Narrative (Cable Matters USB-PD 2.5G Ethernet Dongle)
                         │                        ╚═ Jesta Cannon*
                         │ ┌─────────────── Media Center ──────────────────────────────────┐
Notes:                   └─ UniFi Switch 8 ─────────┬─ UniFi Access Point nanoHD (PoE)
═══ is Multi-Gigabit                                ├─ Sony Playstation 4 
─── is Gigabit                                      ├─ Pioneer VSX-S520
* = cable passed to Bedroom from Media Center       ├─ Sony XR65A80K (Google TV)
** = cable passed from Media Center to Bedroom      └─ Work Laptop** (Startech USB-PD Dock)

 

Retired/Other:

Spoiler

Laptop (Rozen-Zulu): Sony VAIO VPCF13WFX | Core i7-740QM | 8GB Patriot DDR3 | GT 425M | Samsung 850EVO 250GB SSD | Blu-ray Drive | Intel 7260 Wifi (lived a good life, retired with honor)

Testbed/Old Desktop (Kshatriya): Xeon X5470 @ 4.0GHz | ZALMAN CNPS9500 | Gigabyte EP45-UD3L | 8GB Nanya DDR2 400MHz | XFX HD6870 DD | OCZ Vertex 3 Max-IOPS 120GB | Corsair CX430M | HooToo USB 3.0 PCIe Card | Osprey 230 Video Capture | NZXT H230 Case

TrueNAS Server (La Vie en Rose): Xeon E3-1241v3 | Supermicro X10SLL-F | Corsair H60 | 32GB Micron DDR3L ECC 1600MHz | 1x Kingston 16GB SSD / Crucial MX500 500GB


6 minutes ago, williamcll said:

I'm also guessing GDDR6X has no ECC support yet; perhaps there will be a refresh later next year?

GDDR6X has a kind of error detection built in. The problem is that 16Gbit modules are not available yet, so a GDDR6X-based GPU with 24 modules caps out at 24GiB. A new model once such modules become available would be possible, I guess.
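Back-of-the-envelope, assuming the 8Gb-per-chip density GDDR6X currently ships in (the A6000 gets to 48GB with 16Gb GDDR6 chips instead):

```python
# Capacity math for GA102's 384-bit bus running 24 memory chips in clamshell mode.
chips = 24
print(chips * 8 // 8, "GB")   # with the 8Gb GDDR6X chips shipping today -> 24 GB (RTX 3090)
print(chips * 16 // 8, "GB")  # with 16Gb GDDR6 chips                    -> 48 GB (RTX A6000)
```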

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


1 minute ago, AbydosOne said:

How does the top card get any airflow?

It's a blower card: it takes in air at the front and expels it out the back. That's why blowers are better than open-air coolers for multi-GPU configs.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


This is completely irrelevant for GPUs that will most likely end up in server racks, but hot damn, that industrial design is slick

I like cute animal pics.

Mac Studio | Ryzen 7 5800X3D + RTX 3090


Just now, n0stalghia said:

This is completely irrelevant for GPUs that will most likely end up in server racks, but hot damn, that industrial design is slick

The A6000 is a workstation card. The one that'll end up in server racks is the A40 (with no video outputs).

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


22 minutes ago, williamcll said:

With the full 10752 CUDA cores enabled, this deluxe workstation card is surprisingly priced at half of an RTX 8000, which has less than half as many cores.

Cores are not equal across generations and models. As we saw with the consumer models, the core count seems to be tied to a doubling of the FP32 execution units, but not everything else was doubled in that process. So some things will scale as expected with the core count, but not everything will.

 

22 minutes ago, williamcll said:

Thoughts: Shiny indeed. I wonder what in the production process made Nvidia decide to lower the price?

If nothing else it tracks the trend started by the consumer models.

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


So who's going to buy this to play video games on it?  I know you filthy rich people are out there.  Get buying!


5 hours ago, gabrielcarvfer said:

Radeon Instinct MI100.

Nope. Radeon Instincts are compute cards competing with the likes of the A100 - they have no video outputs and are passively cooled.

Spoiler

AMD_instinct_view_1150-678x381.png

The A6000 is a Quadro - AMD's competitor for this card is somewhere in their Radeon Pro lineup (WX9100? I can't say I know much about AMD's workstation cards)

CPU: i7 4790k, RAM: 16GB DDR3, GPU: GTX 1060 6GB


2 minutes ago, AldiPrayogi said:

I was confused by the naming at first, 6000-series, like when did AMD steal RTX naming haha

Personally I keep getting confused with Sony's camera lineup - the A6000 is one of their budget mirrorless cameras!

CPU: i7 4790k, RAM: 16GB DDR3, GPU: GTX 1060 6GB


1 hour ago, tim0901 said:

Nope. Radeon Instincts are compute cards competing with the likes of the A100 - they have no video outputs and are passively cooled.

Although I get what you meant (and the current MI100 competes with the soon-to-be-released A40), the prices of Teslas and Quadros are usually close enough for such a comparison.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


8 hours ago, igormp said:

GDDR6X has a kind of error detection built in.

Assuming that PAM4 uses FEC (Forward Error Correction) to ensure the integrity of the signaling, that's still only within the memory module itself. Error correction doesn't occur between the GPU and RAM the way a traditional ECC scheme would call for.

 

Perhaps full ECC isn't needed if Nvidia was confident that all the errors occur within the RAM ICs. It would definitely reduce the workload on the GPU.


so this is a dual slot card with 48 GB? 

 

because the picture in the OP is kinda confusing, as those are apparently *2* cards glued together...

 

 

And it's 15% faster than a 3090, hmmpf, hard pass. :)

The direction tells you... the direction

-Scott Manley, 2021

 

Softwares used:

Corsair Link (Anime Edition) 

MSI Afterburner 

OpenRGB

Lively Wallpaper 

OBS Studio

Shutter Encoder

Avidemux

FSResizer

Audacity 

VLC

WMP

GIMP

HWiNFO64

Paint

3D Paint

GitHub Desktop 

Superposition 

Prime95

Aida64

GPUZ

CPUZ

Generic Logviewer

 

 

 


12 minutes ago, Mark Kaine said:

because the picture in the OP is kinda confusing, as those are apparently *2* cards glued together...

They are two cards; the picture is showing NVLink connectivity. You don't have to use two, but it's good to know you can, as that is still relevant in workstation and compute applications.


9 hours ago, StDragon said:

Perhaps full ECC isn't needed if Nvidia was confident that all the errors occur within the RAM ICs. It would definitely reduce the workload on the GPU.

Lots of people also disable ECC due to the perf hit it causes on regular workstation cards, so maybe that's another factor.

 

7 hours ago, gabrielcarvfer said:

Not really. They don't have video outputs, but they're meant for virtualized workstations. Cheaper than Nvidia solutions, no stupid licensing shenanigans, etc.

They are also way more limited in usefulness when compared to a Quadro/Tesla, since most stuff relies on CUDA. Instinct cards are mostly used for heavy FP64 simulations/analysis, and now those game streaming platforms.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


25 minutes ago, leadeater said:

They are two cards; the picture is showing NVLink connectivity. You don't have to use two, but it's good to know you can, as that is still relevant in workstation and compute applications.

yeah, I figured after clicking the link. 

 

This is just all weird to me; they should have done it like the Ares III and just put both in an enclosure with a water cooling block... much nicer to look at, and these blower cards must be running hot as hell?

 

Like, I get it, it will work, but it's a confusing product nonetheless. Are those 48 GB really important? Because otherwise a 3090, with or without SLI/NVLink, makes a lot more sense imo

The direction tells you... the direction

-Scott Manley, 2021

 

Softwares used:

Corsair Link (Anime Edition) 

MSI Afterburner 

OpenRGB

Lively Wallpaper 

OBS Studio

Shutter Encoder

Avidemux

FSResizer

Audacity 

VLC

WMP

GIMP

HWiNFO64

Paint

3D Paint

GitHub Desktop 

Superposition 

Prime95

Aida64

GPUZ

CPUZ

Generic Logviewer

 

 

 


3 minutes ago, Mark Kaine said:

This is just all weird to me; they should have done it like the Ares III and just put both in an enclosure with a water cooling block

That means an extra point of failure and a shorter lifespan; nobody wants that.

4 minutes ago, Mark Kaine said:

much nicer to look at, and these blower cards must be running hot as hell?

Those cards go into closed boxes with no windows; they're not meant as your regular gaming card. Yes, blowers usually do run hot, but when you have multiple of them they end up cooler than multiple open-air models. All workstation models are blowers for a reason.

 

5 minutes ago, Mark Kaine said:

Like, I get it, it will work, but it's a confusing product nonetheless. Are those 48 GB really important? Because otherwise a 3090, with or without SLI, makes a lot more sense imo

It's not confusing; you're just thinking about the wrong use for those. They are meant to render stuff, do heavy calculations, or train ML models, not game. 48GB is actually not enough for most tasks; that's why you get 2~4 of them.

SLI is used only for games; it has no other application. For compute scenarios you don't even need an NVLink bridge, since the software is meant to distribute the work over PCIe without problems; the benefit of NVLink comes when you start to saturate the PCIe link and need more bandwidth for inter-GPU memory transfers (usually when you go past 4 GPUs).
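As a rough illustration (not how any particular shop does it), here is a minimal PyTorch sketch assuming two or more CUDA GPUs are visible: the batch gets split across the cards and the copies run over plain PCIe, no bridge needed.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4096, 4096), nn.ReLU(), nn.Linear(4096, 10))
if torch.cuda.device_count() > 1:
    # Scatters each input batch across all visible GPUs and gathers the
    # outputs back; the copies run over PCIe (or NVLink, if a bridge exists).
    model = nn.DataParallel(model)
model = model.cuda()

x = torch.randn(256, 4096, device="cuda")
out = model(x)      # each GPU processes a slice of the 256-sample batch
print(out.shape)    # torch.Size([256, 10])
```

For real training you'd typically use DistributedDataParallel with the NCCL backend, which likewise uses whatever interconnect happens to be available.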

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


2 hours ago, igormp said:

you're just thinking about the wrong use for those

No, I know what the use cases are; I just didn't know how it works exactly, or how that much RAM is useful.

2 hours ago, igormp said:

since the software is meant to distribute the work over PCIe without problems; the benefit of NVLink comes when you start to saturate the PCIe link and need more bandwidth for inter-GPU memory transfers

I see, I didn't know that. 

 

2 hours ago, igormp said:

Those cards go into closed boxes with no windows; they're not meant as your regular gaming card. Yes, blowers usually do run hot, but when you have multiple of them they end up cooler than multiple open-air models. All workstation models are blowers for a reason.

Ah, ok didn't know that either. 🤔

 

 

2 hours ago, igormp said:

That means an extra point of failure and a shorter lifespan; nobody wants that.

*except certain gamer folks apparently ;)

The direction tells you... the direction

-Scott Manley, 2021

 

Softwares used:

Corsair Link (Anime Edition) 

MSI Afterburner 

OpenRGB

Lively Wallpaper 

OBS Studio

Shutter Encoder

Avidemux

FSResizer

Audacity 

VLC

WMP

GIMP

HWiNFO64

Paint

3D Paint

GitHub Desktop 

Superposition 

Prime95

Aida64

GPUZ

CPUZ

Generic Logviewer

 

 

 


20 hours ago, n0stalghia said:

This is completely irrelevant for GPUs that will most likely end up in server racks, but hot damn, that industrial design is slick

Man, if companies designed consumer desktop products like this, I would be in love. Instead everything is g4m3r with RGB-everything 🙁

3 hours ago, igormp said:

SLI is used only for games; it has no other application. For compute scenarios you don't even need an NVLink bridge, since the software is meant to distribute the work over PCIe without problems; the benefit of NVLink comes when you start to saturate the PCIe link and need more bandwidth for inter-GPU memory transfers (usually when you go past 4 GPUs).

NVLink is significantly faster than PCIe 4.0 though, so I would expect any well-written library to use it where possible for inter-card data broadcasts. Not that I know whether it does or not; I don't write GPU-accelerated workloads, I just expect it would.
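For what it's worth, you can at least check whether the driver allows direct card-to-card copies and roughly what they achieve. A quick-and-dirty PyTorch sketch (assumes two CUDA GPUs, not a proper benchmark):

```python
import time
import torch

assert torch.cuda.device_count() >= 2, "needs at least two GPUs"
# True if the driver permits direct peer-to-peer copies between GPU 0 and 1
# (routed over NVLink when a bridge is installed, otherwise over PCIe).
print("P2P access:", torch.cuda.can_device_access_peer(0, 1))

src = torch.randn(256 * 1024 * 1024 // 4, device="cuda:0")  # ~256 MiB of fp32
dst = torch.empty_like(src, device="cuda:1")
torch.cuda.synchronize("cuda:0")
torch.cuda.synchronize("cuda:1")
t0 = time.time()
dst.copy_(src)                      # device-to-device copy
torch.cuda.synchronize("cuda:1")
print(f"~{src.numel() * 4 / 1024**3 / (time.time() - t0):.1f} GiB/s")
```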

15" MBP TB

AMD 5800X | Gigabyte Aorus Master | EVGA 2060 KO Ultra | Define 7 || Blade Server: Intel 3570k | GD65 | Corsair C70 | 13TB


20 minutes ago, Blade of Grass said:

NVLink is significantly faster than PCIe 4.0 though

For rough indication only (values are for a single direction, although not 100% sure for the SLI number):

High Bandwidth SLI connector was 2 GB/s

PCIe 4.0 lanes are 2GB/s each, so 32GB/s if you use 16. Halve that for 3.0.

Consumer NVLink as introduced with the 20-series cards is 50GB/s and limited to two devices. I believe the full-fat version can go to more devices, but I don't know if its bandwidth is the same or higher. A quick search suggests the 3090 and A6000 both do 56.25GB/s on their links.

 

For comparison, the theoretical bandwidth of dual-channel RAM running at a marketing speed of 3200 is about 50GB/s, but that can only read or write at a given time, not both at once.
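The arithmetic behind those ballpark figures, for anyone curious (single-direction, theoretical peaks only):

```python
# Single-direction, theoretical-peak numbers only.
pcie4_lane = 16 * 128 / 130 / 8            # 16 GT/s with 128b/130b encoding ≈ 1.97 GB/s
print(f"PCIe 4.0 x16:            {16 * pcie4_lane:.1f} GB/s")   # ~31.5 GB/s (halve for 3.0)
print( "NVLink (3090/A6000):     56.25 GB/s per direction")
ddr4 = 3200e6 * 8 * 2 / 1e9                # 3200 MT/s x 8 bytes/channel x 2 channels
print(f"DDR4-3200 dual channel:  {ddr4:.1f} GB/s")              # ~51.2 GB/s
```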

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


37 minutes ago, Blade of Grass said:

NVLink is significantly faster than PCIe 4.0 though, so I would expect any well-written library to use it where possible for inter-card data broadcasts. Not that I know whether it does or not; I don't write GPU-accelerated workloads, I just expect it would.

I can't really say for all scenarios; maybe rendering does make use of it. But for ML stuff (something I do work with), the benefit of NVLink is less than 10% (in a best-case scenario) for up to 10 GPUs. Past 8~10 GPUs is where things get interesting.

 

5 minutes ago, porina said:

I believe the full-fat version can go to more devices, but I don't know if its bandwidth is the same or higher. A quick search suggests the 3090 and A6000 both do 56.25GB/s on their links.

The full-fat NVSwitch can do a total of 928 GB/s bidirectional (up to 18 GPUs, each with ~50GB/s). Link multiple of those together over InfiniBand and you've made your own DGX cluster.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


20 minutes ago, igormp said:

I can't really say for all scenarios; maybe rendering does make use of it. But for ML stuff (something I do work with), the benefit of NVLink is less than 10% (in a best-case scenario) for up to 10 GPUs. Past 8~10 GPUs is where things get interesting.

Interesting. It definitely depends on how broadcast-dependent the workload is, versus whether it's easily solvable as discrete subproblems with a final combination step.

I would expect stuff like FFT, LU factorization, PageRank, and some other workloads to benefit from the bandwidth.

 

EDIT: heyo, 5k counted posts :)

15" MBP TB

AMD 5800X | Gigabyte Aorus Master | EVGA 2060 KO Ultra | Define 7 || Blade Server: Intel 3570k | GD65 | Corsair C70 | 13TB

