AMD Moves Vega 56 Embargo Forward, Asks Reviewers to Prioritize Over 64

4 hours ago, LaboonTheWhale said:

Depends on the game and the optimization. In the games I've mainly been playing, like Rising Storm 2: Vietnam, Squad, and VR titles, it struggles. I'm just being nitpicky, since it can reach 60 in most of them, but after investing in a 144Hz monitor I want it to get there. The only game I've hit 144Hz in so far without LOW graphics is Minecraft :( Paired with a Ryzen 1700 @ 3.8GHz.

At 1440p with a Fury Nitro, I got 144fps in CS:GO, basically all old Source games, all BioShocks except the last one (where it sometimes dips into the 120fps range), all platformers, and Alien: Isolation most of the time. DOOM runs fairly well too, as did Overwatch when I tried it (basically all esports titles).

However, even if it's not optimal, the 75-100fps range with FreeSync is noticeably better than a 60fps lock, even if it underutilizes the monitor a bit. You'd need a bit more than 1080 Ti performance, with a more gaming-oriented CPU, to hit 144fps in all games, so above 75fps in AAA games with FreeSync is livable until at least Vega 20/Volta.

 

On topic, this may show that AMD has finally understood that we just want the bloody cards already, and that they should stop sitting on them and release them as soon as possible.


I don't think I'll be able to buy one anyway, since the 580 is already either twice the price or unavailable.


6 hours ago, dizmo said:

It's a little better though, no?

If it's 20% better, I'm still all over it if it's what the Nano is based on.

If it's par, then the 1070 wins because of its much lower TDP.

 

I like the Fury X, just can't use it because of the cooler. I considered the R9 Nano, but want more VRAM.

Get a Fury X and a broken RX or R9 GPU for its cooler? 

 

3 hours ago, lots of unexplainable lag said:

Honestly, a Vega 64 at that price is just absurd: 1080 performance at 115W more power draw. It's a repeat of the Fury launch, where the Fury was a far more sensible buy than the Fury X.

 

The Vega 56 is a far better buy in every way, shape, and form, IF the leaked performance we've seen is actually accurate and it crushes a 1070. Otherwise it also shouldn't exist at $399.

The Fury can be flashed to unlock more CUs and overclocks better, making it a better card than the Fury X, though because of binning the Fury X reaches slightly higher maximum core clocks.

 

3 hours ago, Humbug said:

Yep, it seems like Vega 56 is closer to the optimal point for the Vega architecture.

Compared to the Vega 64, it draws far less power but only gives up a little performance.

They also probably cut the HBM voltage, seeing as that clock drops quite a bit.


Price/perf will be much better on the 56 anyway. With an OC it's probably 96% of the perf of the 64.


12 minutes ago, Bananasplit_00 said:

Get a Fury X and a broken RX or R9 GPU for its cooler?

I'd basically have to find a Nano for its cooler... it's the only card that's the same size, and IIRC they're based on the same thing.


Vega 56 launches August 28th?

 

Holy crap yes. I'm gonna be on a trip for 2 weeks soon and this means I'll be able to order it first thing without miners getting in the way.

 

Bad for you guys but good for me :P 


I have to say, I'm both impressed and disappointed by Vega at the same time. Impressed by the cost of the actual product and the technology inside, but disappointed by the huge 484mm² die that can barely compete with a 314mm² die on an inferior process, while using 2x the power.
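To put rough numbers on that complaint (a sketch only; the die areas are the ones quoted, and "roughly equal performance" and "2x power" are the poster's claims, not measurements):

```python
# Perf-per-area and perf-per-watt implied by the comparison above.
vega_area, gp104_area = 484, 314   # mm2, as quoted
perf_ratio = 1.0                   # "can barely compete" -> roughly equal perf
power_ratio = 2.0                  # poster's 2x power claim

print(f"perf/area: {perf_ratio / (vega_area / gp104_area):.2f}x of GP104")  # ~0.65x
print(f"perf/watt: {perf_ratio / power_ratio:.2f}x of GP104")               # ~0.50x
```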


15 hours ago, rrubberr said:

Well, here in the OpenCL compute space, they might as well charge 1080 Ti prices for it.

Depends on what Volta can do. Nvidia's "tensor cores" are just glorified vector processors for mixed precision (mainly fp16, I think), marketed for AI and deep learning. If used for general compute, they could drastically improve performance.
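As a rough illustration of the mixed-precision pattern being described (a sketch only; this runs on the CPU with NumPy, nothing tensor-core specific):

```python
import numpy as np

# Mixed-precision multiply-accumulate: fp16 operands, fp32 accumulation.
# This is the numeric pattern tensor cores are built around, shown here
# in software - not the actual hardware path.
a = np.random.rand(16, 16).astype(np.float16)
b = np.random.rand(16, 16).astype(np.float16)

# Accumulating in fp32 avoids the precision loss a pure-fp16 dot product
# would suffer over long sums.
c = a.astype(np.float32) @ b.astype(np.float32)
print(c.dtype)  # float32
```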


11 hours ago, Morgan MLGman said:

What worries me the most is that this might make Nvidia act like Intel and stop innovating. At least so far Nvidia has kept releasing new generations that are an actual improvement over previous ones; let's hope that doesn't change.

They cut out double precision to keep power consumption down so they could clock higher once,

then they changed the process node and clocked the core higher again,

and then they said a single SLI connector wasn't strong enough, so they raised the clock speed of the connector, then said it still wasn't strong enough, used two of them, and said - oh, and by the way, no quad-SLI.

What innovation are you talking about? So far the only thing they have innovated on is the "tensor" cores on V100, at which point they might as well be producing ASICs...

 

XDMA, bringing HBM to consumers, implementing HBCC, not to mention SSG on the professional front - that's innovation.


The article is offline now. Either AMD wasn't happy and forced them to pull it, or they found the info was wrong and pulled the article from the site themselves.


1 hour ago, DXMember said:

They cut out double precision to keep power consumption down so they could clock higher once,

then they changed the process node and clocked the core higher again,

and then they said a single SLI connector wasn't strong enough, so they raised the clock speed of the connector, then said it still wasn't strong enough, used two of them, and said - oh, and by the way, no quad-SLI.

What innovation are you talking about? So far the only thing they have innovated on is the "tensor" cores on V100, at which point they might as well be producing ASICs...

 

XDMA, bringing HBM to consumers, implementing HBCC, not to mention SSG on the professional front - that's innovation.

Hmmmm, maybe I used the wrong word here -> they at least deliver significant performance gains generation over generation (pricing is another matter, it's absurd) instead of doing what happened with Haswell->Broadwell and then Skylake->Kaby Lake :|


Reviewers have just received the proper drivers with additional features like TBR, power savings, etc. enabled. Below is Vega 64 (not 56).

 

Performance now is looking comparable to the aftermarket GTX 1080 designs.

 

[benchmark chart: Vega 64 with the new drivers]

 

So Vega FE owners can expect a handy little boost once Vega RX launches and they receive these drivers too. Currently, Vega FE gaming performance is lower than the GTX 1080 Founders Edition.


7 hours ago, Morgan MLGman said:

Hmmmm, maybe I used the wrong word here -> they at least deliver significant performance gains generation over generation (pricing is another matter, it's absurd) instead of doing what happened with Haswell->Broadwell and then Skylake->Kaby Lake :|

Well, Intel does pretty much the same: they bump clock speeds - they just don't have the headroom to do it by 50% every time. And all things considered, Nvidia is running out of that headroom for SIMD too, or at least they should be pretty soon, in theory at least...

IPC gains have been pretty much negligible past Sandy Bridge, with the exception of some instruction sets in certain situations where Haswell makes a notable difference.

The only major thing Intel did was move from the ring bus to a mesh on Skylake-X and redesign the cache architecture, but that's hardly innovation...


1 hour ago, Humbug said:

Reviewers have just received the proper drivers with additional features like TBR, power savings, etc. enabled. Below is Vega 64 (not 56).

 

Performance now is looking comparable to the aftermarket GTX 1080 designs.

 

So Vega FE owners can expect a handy little boost once Vega RX launches and they receive these drivers too. Currently, Vega FE gaming performance is lower than the GTX 1080 Founders Edition.

Damn, if we can unlock all or part of the shaders on the 56, and if it clocks to that sick 1750MHz with something like a Sapphire Tri-X cooler, that'd be dope!

 

Though the results seem interesting... at least judging by the Fury X results posted:

Fire Strike shows only a 25.7% gain for Vega 64 from a 55.2% clock advantage.

Something is really wrong in some of that - how do you get such bad scaling?

Could it be the lower memory throughput, or higher memory latency?
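A quick back-of-the-envelope check of that scaling (a sketch; the ~1050MHz Fury X and ~1630MHz Vega 64 clocks are my assumptions, chosen to match the quoted 55.2% gap):

```python
# How much of the clock advantage actually shows up as Fire Strike score?
fury_x_mhz = 1050          # assumed Fury X core clock
vega64_mhz = 1630          # assumed Vega 64 boost clock (~55.2% higher)

clock_gain = vega64_mhz / fury_x_mhz - 1   # ~0.552
perf_gain = 0.257                          # Fire Strike gain quoted above

efficiency = perf_gain / clock_gain        # ~0.47
print(f"only {efficiency:.0%} of the clock advantage is realized")
```

Perfect clock scaling would put that at 100%, so roughly half the advantage is being lost somewhere - which is what makes the memory question above plausible.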


12 hours ago, Coaxialgamer said:

I have to say, I'm both impressed and disappointed by Vega at the same time. Impressed by the cost of the actual product and the technology inside, but disappointed by the huge 484mm² die that can barely compete with a 314mm² die on an inferior process, while using 2x the power.

TSMC's 16nm is NOT inferior to GloFo's 14nm; it's the opposite. GloFo has a slightly denser process, but TSMC's 16nm FinFET is better for high-power applications.


On 8/11/2017 at 0:17 AM, dizmo said:

It's a little better though, no?

If it's 20% better, I'm still all over it if it's what the Nano is based on.

If it's par, then the 1070 wins because of its much lower TDP.

 

I like the Fury X, just can't use it because of the cooler. I considered the R9 Nano, but want more VRAM.

If it's 20% better, then it's a 210W 1080 for $400. That seems pretty unrealistic considering Vega 64 is rumored to merely trade blows with the 1080 - how would the cut-down 210W card basically equal a 1080? TechPowerUp has the 1080 as 21% more powerful than the 1070 at 2560x1440 across their big test suite of games.


Odd, but I'm more interested in seeing the 64.


16 hours ago, Agost said:

TSMC's 16nm is NOT inferior to GloFo's 14nm; it's the opposite. GloFo has a slightly denser process, but TSMC's 16nm FinFET is better for high-power applications.

TSMC's process is less dense, which is (arguably) the most important thing for GPUs. Also, the whole "high power" thing is BS. Both 16FF and 14nm LPP were developed as mobile-first process technologies, and GF's process has appeared in very high-performance products such as Ryzen (which is capable of 4GHz).

TSMC's tech hasn't proven itself as a high-performance process, because nobody has made >3GHz parts on it.

And given that neither company publishes transistor performance data, I have a hard time imagining how you'd know which process performs better.


5 hours ago, Coaxialgamer said:

TSMC's process is less dense, which is (arguably) the most important thing for GPUs. Also, the whole "high power" thing is BS. Both 16FF and 14nm LPP were developed as mobile-first process technologies, and GF's process has appeared in very high-performance products such as Ryzen (which is capable of 4GHz).

TSMC's tech hasn't proven itself as a high-performance process, because nobody has made >3GHz parts on it.

And given that neither company publishes transistor performance data, I have a hard time imagining how you'd know which process performs better.

A couple of things, though:

  • High performance doesn't mean high clock rate; math/compute-wise, look no further than CPU vs GPU.
  • Ryzen, or more correctly Zen, has a very obvious fabrication process limitation in how clock rate scales with power draw and voltage requirements, unlike Intel's process.
  • Actual GPU design has more to do with it than what the fabrication process was initially designed for, though the process is a factor.

 

It's almost impossible to say whose process is better without manufacturing the same GPU design on both, with whatever modifications each fabrication process requires.

 

High clock rates can come from having more stages in the instruction pipeline, with each stage being simpler circuit-wise, which lets the CPU run at a higher clock rate. CPUs have more stages than GPUs do (I think), along with various instruction-set extensions to accelerate certain code paths, which is one of the reasons CPUs clock higher. Having more stages has downsides, though: it doesn't reduce instruction latency, and may in fact increase it due to the overhead of the extra stages and pipeline stalls.
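A toy model of that trade-off (a sketch with made-up delay numbers, not real silicon data): split a fixed amount of logic across N stages with a fixed per-stage register overhead, and the clock ceiling rises while total latency grows.

```python
# Illustrative only: 10 ns of total logic delay (assumed) split across N
# pipeline stages, plus 0.2 ns of latch/skew overhead per stage (assumed).
logic_ns = 10.0
overhead_ns = 0.2

for stages in (5, 14, 20, 31):  # P5, Skylake, NetBurst, Prescott depths
    cycle_ns = logic_ns / stages + overhead_ns    # critical path per stage
    fmax_ghz = 1.0 / cycle_ns                     # clock ceiling rises with depth
    latency_ns = cycle_ns * stages                # total latency grows with depth
    print(f"{stages:2d} stages: fmax ~ {fmax_ghz:.2f} GHz, latency ~ {latency_ns:.1f} ns")
```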

 

We do, however, need to be very careful when comparing CPUs to GPUs, since they are very different in design and architecture.

 

Here is a nice chart of Intel's pipeline stage counts across architecture generations:

 

Quote

Year  Microarchitecture                    Pipeline stages                           Max. clock
1989  486 (80486)                          3                                         100 MHz
1993  P5 (Pentium)                         5                                         300 MHz
1995  P6 (Pentium Pro; later Pentium II)   14 (17 with load & store/retire)          450 MHz
1999  P6 (Pentium III)                     12 (15 with load & store/retire)          450-1400 MHz
2000  NetBurst (Pentium 4)                 20 unified with branch prediction         800-3466 MHz
2003  Pentium M                            10 (12 with fetch/retire)                 400-2133 MHz
2004  Prescott                             31 unified with branch prediction         4000 MHz
2006  Intel Core                           12 (14 with fetch/retire)                 3333 MHz
2008  Nehalem                              20 unified (14 without miss prediction)   3600 MHz
2008  Bonnell                              16 (20 with prediction miss)              2100 MHz
2011  Sandy Bridge                         14 (16 with fetch/retire)                 4000 MHz
2013  Silvermont                           14-17 (16-19 with fetch/retire)           2670 MHz
2013  Haswell                              14 (16 with fetch/retire)                 4400 MHz
2015  Skylake                              14 (16 with fetch/retire)                 4200 MHz
2016  Goldmont                             20 unified with branch prediction         3500 MHz
2016  Kaby Lake                            14 (16 with fetch/retire)                 4500 MHz
2017  Coffee Lake                          14                                        5000 MHz
2017  Cannonlake                           14                                        5000 MHz

 https://en.wikipedia.org/wiki/List_of_Intel_CPU_microarchitectures


On 8/12/2017 at 7:26 AM, DXMember said:

Damn, if we can unlock all or part of the shaders on the 56, and if it clocks to that sick 1750MHz with something like a Sapphire Tri-X cooler, that'd be dope!

 

Though the results seem interesting... at least judging by the Fury X results posted:

Fire Strike shows only a 25.7% gain for Vega 64 from a 55.2% clock advantage.

Something is really wrong in some of that - how do you get such bad scaling?

Could it be the lower memory throughput, or higher memory latency?

Ever since the Vega specs leaked I thought it would be a bit memory limited, and according to Buildzoid it is (he said a 3% memory OC got you 1% more performance, though he didn't say whether that was at stock or with the core overclocked). Ever since the 290X, AMD hasn't increased the ROP count, which is a bit weird, since AMD has gone from 2816 shaders to 4096 in that time. That means the Fury X's, and even more so Vega's, performance is very dependent on compute effects. That's why in most games the Fury vs Fury X is almost the same performance (around a 5% delta), except in DOOM, where the delta increases to 8.5% at 1080p and 9.3% at 4K, which seems to support my thesis (in Rise of the Tomb Raider the difference drops to 1-3%).
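To put numbers on that Fury vs Fury X comparison (a sketch; 3584 and 4096 are the cards' public shader counts, and the game deltas are the ones quoted above):

```python
# Fury (3584 shaders) vs Fury X (4096 shaders): +14.3% shaders.
# The closer a game's delta gets to that figure, the more shader-bound
# (compute-bound) it is - DOOM tracks much closer than the average game.
fury_sp, fury_x_sp = 3584, 4096
shader_gain = fury_x_sp / fury_sp - 1      # ~0.143

for game, delta in (("typical game", 0.05),
                    ("DOOM 1080p", 0.085),
                    ("DOOM 4K", 0.093)):
    print(f"{game}: {delta / shader_gain:.0%} of the shader advantage realized")
```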


9 hours ago, leadeater said:

A couple of things, though:

  • High performance doesn't mean high clock rate; math/compute-wise, look no further than CPU vs GPU.
  • Ryzen, or more correctly Zen, has a very obvious fabrication process limitation in how clock rate scales with power draw and voltage requirements, unlike Intel's process.
  • Actual GPU design has more to do with it than what the fabrication process was initially designed for, though the process is a factor.

 

It's almost impossible to say whose process is better without manufacturing the same GPU design on both, with whatever modifications each fabrication process requires.

 

High clock rates can come from having more stages in the instruction pipeline, with each stage being simpler circuit-wise, which lets the CPU run at a higher clock rate. CPUs have more stages than GPUs do (I think), along with various instruction-set extensions to accelerate certain code paths, which is one of the reasons CPUs clock higher. Having more stages has downsides, though: it doesn't reduce instruction latency, and may in fact increase it due to the overhead of the extra stages and pipeline stalls.

 

We do, however, need to be very careful when comparing CPUs to GPUs, since they are very different in design and architecture.

 

Here is a nice chart of Intel's pipeline stage counts across architecture generations:

 

 https://en.wikipedia.org/wiki/List_of_Intel_CPU_microarchitectures

I know this, but generally "performance" when referring to process technology means transistor switching speed, which isn't known for either tech.

And because we don't know the pipeline depth of Pascal (Maxwell has 6 stages, though I don't know if that's everything or just execution) or Polaris, along with the number of transistor states per cycle, we can't make assumptions about which tech is faster.

 

I wouldn't even be certain that Zen's clock speed wall is a result of process limitations. It could be a result of the design's high transistor density (nearly 5 billion transistors in a sub-200mm² die).

 

However, we do have the GTX 1050 series. Unlike the rest of Pascal, GP107 was made on Samsung's 14nm LPP tech, the same as Polaris and Zen. And we find that it reaches pretty similar clocks, considering that it likely has a different voltage curve (to stay within TDP, which matters more here than on the 1060 and up) and has a very small die.


On 8/11/2017 at 0:00 AM, mach said:

Just put them on the shelves already.....

It's not like they're releasing anything groundbreaking that deserves all the teasing :dry:

 

I had some hopes for Vega, but even what AMD is doing doesn't inspire confidence in Vega being anything all that amazing. I mean, the paper specs looked promising, but that's only part of the story, of course.

 

I don't think it's going to be a shitty card, but I think it will be hard for people to justify getting either version if the price isn't right for the performance. If they aren't confident in their more expensive card, which is supposed to slot in between the 1080 and 1080 Ti, being a good middle ground, then that says something about how they felt about this release - possibly that it was rushed. Here's hoping Navi will have the spirit of Ryzen and put AMD ahead of Nvidia. They did do what they said they wanted to do with Ryzen: they made Intel re-evaluate things. And honestly, the i5-X, i7-X, and i9s Intel launched were a knee-jerk reaction.


The fact that they believe the Vega 56 is the only card that's going to win over the biggest audience is so disappointing.

