Search the Community
Showing results for tags 'hbm'.
-
I was wondering if it would be possible to build a REAL fast 64GB USB 3.1 flash drive with 4 of these 16GB HBM2 modules, which run at 4GB/s each for a total of 16GB/s, which would be totally unnecessary. I couldn't find any documentation for any sort of HBM2-to-USB controller, and no way to buy single HBM2 modules from Samsung or anywhere else. If anyone finds out about either, please hit me up.
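For a quick back-of-the-envelope check on this idea (taking the 4GB/s-per-module figure from the post at face value, not from a datasheet), the USB link, not the memory, would be the bottleneck: USB 3.1 Gen 2 signals at 10 Gbit/s, about 1.25 GB/s before protocol overhead.

```python
# Rough bandwidth sanity check for the HBM2-over-USB idea above.
# The 4 GB/s per-module figure is the poster's, not from a datasheet.

HBM2_MODULE_GBS = 4.0        # GB/s per module (poster's figure)
MODULES = 4
USB31_GEN2_GBS = 10 / 8      # 10 Gbit/s signaling -> 1.25 GB/s, before overhead

aggregate = HBM2_MODULE_GBS * MODULES
print(f"HBM2 aggregate:        {aggregate:.1f} GB/s")       # 16.0 GB/s
print(f"USB 3.1 Gen 2 ceiling: {USB31_GEN2_GBS:.2f} GB/s")  # 1.25 GB/s
print(f"Memory outruns the link by ~{aggregate / USB31_GEN2_GBS:.1f}x")
```

So even a single module would saturate the port more than three times over; the HBM2 would sit idle behind the USB interface.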
-
I am looking to get an SSD for my system. The main use case will be next-gen gaming. I am basically on the fence between these two: 1. Samsung 980 500GB (no DRAM, HMB supported, Samsung Pablo controller, read speed (max.): 3100MB/s, write speed (max.): 2600MB/s, random read: 400,000 IOPS, random write: 470,000 IOPS) 2. Transcend 220S 512GB M.2 2280 PCIe SSD (it has DRAM, SMI SM2262EN controller, read speed (max.): 3500MB/s, write speed (max.): 2800MB/s, random read: up to 210,000 IOPS, random write: up to 310,000 IOPS). Now, I know DRAM is a huge deal, but the Samsung is getting much better IOPS than the Transcend without the DRAM, at almost the same speed and a better price. So if anyone is using either of them, please let me know which will be the better option. Thanks. PC specs: R5 3600X, 16GB 2400MHz RAM, RTX 3050. (And no, my motherboard does not support PCIe Gen 4 or higher-speed RAM.)
-
Was reading the coverage on the new Vega card AMD announced today over on Anandtech. http://www.anandtech.com/show/11403/amd-unveils-the-radeon-vega-frontier-edition Interestingly the article points out that the way in which AMD has implemented 16GB of HBM2 confuses them. Which is saying something since it's coming from the fountain of knowledge Ryan Smith. Any guesses on what 'creative measures' AMD could have implemented? Or do people think they just have supplier roadmaps that aren't public knowledge.
-
Which graphics cards have HBM?
-
With Raven Ridge projected to arrive in 2H 2017 or possibly later, I am curious about everyone's thoughts and predictions for this product. For one, I wonder whether any "consumer" models will receive HBM, and whether the GPU performance will end up lower than an RX 460/560 or whether it has the potential to exceed that and maybe approach RX 470/570 performance territory.
-
I've written an article on Medium to show some of my tests. This time I tested the R9 Fury X in Prey and noticed some weird performance on AMD's GPUs. The article shows how driver overhead and the VRAM bottleneck can be a risky limitation for RX Vega. https://goo.gl/t8XcAF I'm truly worried about this. Aren't you? Marcus
-
Source: http://www.anandtech.com/show/11002/the-amd-vega-gpu-architecture-teaser So AMD ended up giving us quite a bit of info about Vega today. Not only did we learn about the new NCUs, but AMD has also made a lot of changes in Vega, including a change in the ROPs that might indicate a move to tile-based rasterization, similar to what Nvidia has been doing since Maxwell. I was personally expecting Vega to be mostly just a larger Polaris with HBM2, but it seems that Vega might instead turn out to be a very large revision of GCN. This seems pretty exciting, and I'm curious to see just how efficient and powerful Vega really is. Also a note: AMD showed a video of Vega running Doom at 4K on Ultra at around 70 fps on average, judging by the framerate counter in the top right. You can see the video here: I'm assuming that they're running Doom with Vulkan, not OpenGL. The Fury X gets around 55 fps average in Doom at 4K with Ultra settings on Vulkan, so this means Vega is ~25% better than the Fury X, which is a pretty good increase. What are your thoughts?
-
As reported previously, AMD will be launching the Radeon VII, which comes with 16GB of HBM2 and still uses the Vega architecture, but at 7nm - reported by Fudzilla - The "#80" is obviously an error which should have been '$80'. If you think this $80 is an overestimate, GamersNexus did make a report about it last year, using the RX Vega cards for context - for reference, the $175 he's talking about there is for 2x4GB stacks of HBM2. Also, the RX Vega cards had the same issue, in which the HBM2 on the card cost almost half the GPU's price. So with this, we could say the HBM2 implementation on the Radeon VII could be costing them around $320-$325 ($150x2 + $25), though depending on underlying agreements between manufacturers this could be a bit lower. This is still expensive considering the whole card itself will cost around $699, not taking into account demand, as the Radeon VII was speculated to be launching in small numbers (5,000 units). Not new really, as RX Vega had a similar issue. In relation to this, 3dcenter recently published an article on the cost of GDDR6. Please note that they use a ',' instead of a '.' to indicate decimals. Using this data, we could say that 16GB of GDDR6 could cost around $187.04, without taking into account implementation costs (which could be a lot more, since you would need to account for the space it takes up on the PCB). IMHO - I don't know about you, but this seems a bit cheaper than HBM2. Of course others will argue that the Radeon VII is targeted at compute etc., but it was pitted against the RTX 2080 in AMD's CES presentation, so... For me, they're still gonna market this as a high-end gaming card, and of course it offers better compute than Nvidia's GPUs, but come on. How much of a performance hit it would take to replace HBM2 with GDDR6, we do not know for sure.
With this said, I am kind of thinking there could have been some sort of anti-competitive practice going on here; for example, GDDR6 was available last year, but only Nvidia managed to get GPUs with it. Same with AMD's RX Vega being the only mainstream GPUs (I don't consider Nvidia's Titan V a mainstream GPU) to get HBM2. EDIT: I meant that these memory chips aren't exclusive, but production-wise there could have been reservations; for example, SK Hynix could only produce 10,000 units but AMD issues buy orders for 12,000 units. Something like that - I mean not really anti-competitive, but sort of. But what do you guys think? Tell us in the comments below. (lol) UPDATE EDIT: New thought - WHAT IF AMD LAUNCHED A RADEON VII WITH ONLY 8GB OF HBM2!? With what I said above, HBM2 costs $80 per 4GB stack. By launching an 8GB model, shouldn't this make the card $100-180 cheaper!? A $519-$599 model could sell like hotcakes. Who needs 16GB of HBM2 to play Fortnite anyway? (Obviously a joke, but you get the idea)
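Putting the post's own numbers side by side (the $150-per-8GB-stack, $25 assembly, and per-GB GDDR6 figures are the estimates quoted above, not confirmed prices):

```python
# Comparing the post's HBM2 vs GDDR6 cost estimates for 16GB of VRAM.
# All figures are the estimates quoted in the post, not official pricing.

hbm2_stack_8gb = 150      # estimated cost per 8GB HBM2 stack
hbm2_assembly = 25        # estimated interposer/assembly cost
gddr6_per_gb = 187.04 / 16  # implied by 3dcenter's ~$187.04 for 16GB

hbm2_16gb = 2 * hbm2_stack_8gb + hbm2_assembly
gddr6_16gb = 16 * gddr6_per_gb

print(f"16GB HBM2:  ~${hbm2_16gb}")                          # ~$325
print(f"16GB GDDR6: ~${gddr6_16gb:.2f}")                     # ~$187.04
print(f"Estimated difference: ~${hbm2_16gb - gddr6_16gb:.2f}")
```

By the same numbers, a hypothetical 8GB model (one stack plus assembly) would land around $175 in memory cost, which is where the "$150 cheaper" part of the joke comes from.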
-
It seems Samsung isn't finished with its HBM2 improvements: they have now announced "Flashbolt", with 33% higher bandwidth than the already fast Aquabolt, which ran at 2.4Gbps. This brings bandwidth per stack from the standard spec of 256GB/s to 410GB/s. Each stack is made of 16Gb dies, and 8-Hi stacks will make 16GB modules; this brings the maximum bandwidth of HBM using 4 stacks to 1640GB/s, and maximum capacity to 64GB with 8-Hi stacks (and possibly 96GB with a 12-Hi). Source: https://www.overclock3d.net/news/memory/samsung_introduces_hbm2e_memory_packing_a_33_bandwidth_boost/1 Opinion: HBM2 at 400GB/s per stack is quite close to/at where it needs to be for mass adoption in the laptop and mid-range desktop markets. It will all depend on price, but both the capacity and the speed are more than enough to power good gaming experiences while not needing a large interposer. All that's missing from the puzzle is AMD's version of EMIB (embedded interposer bridge).
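The 410GB/s figure follows from HBM2's 1024-bit-per-stack interface; a quick sketch of the arithmetic, assuming Flashbolt's per-pin rate is 3.2Gbps (33% over Aquabolt's 2.4Gbps):

```python
# HBM2 per-stack bandwidth = per-pin data rate x 1024-bit bus width / 8.
# Flashbolt's 3.2 Gbps per-pin rate is inferred from the "33% over Aquabolt" claim.

BUS_WIDTH_BITS = 1024  # bits per HBM2 stack

def stack_bandwidth_gbs(pin_rate_gbps: float) -> float:
    """Per-stack bandwidth in GB/s from a per-pin rate in Gbit/s."""
    return pin_rate_gbps * BUS_WIDTH_BITS / 8

aquabolt = stack_bandwidth_gbs(2.4)   # 307.2 GB/s
flashbolt = stack_bandwidth_gbs(3.2)  # 409.6 GB/s, the "410 GB/s" in the article
print(aquabolt, flashbolt, 4 * flashbolt)  # 4 stacks -> ~1640 GB/s
```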
-
Hey there, I am wondering about the performance differences between the different VRAM types. Now of course, depending on what you do, "performance" is defined pretty differently, so for the purpose of this post I will use performance to mean "performance in the standard Fire Strike benchmark of 3DMark". The reason for my question is mostly that I'd like to get a sense of how much of a step GDDR5 to GDDR5X is (in the upcoming GTX 1070 and 1080) and also how much of a boost HBM2 will be in the follow-up generation. Mostly, "better" VRAM also comes with an increase in other specs, so the GTX 1070 and 1080 will have more differences besides the VRAM. But can anyone estimate what difference it would make if you just upgraded the VRAM (and its connections, so that it could actually be used)? And what about HBM? Well, HBM2 is not used yet, from what I know. But what is the estimated boost of HBM2 over HBM, if you leave out that you could fit more RAM in the same space? In this instance, would you compare the R9 380X (which has 4GB of RAM) to the R9 Fury, for example, and say that the difference is basically the boost due to HBM memory? Of course there is hardly an exact answer, but I'd appreciate your best guesses and extrapolations from the known examples we have with current cards and data. Will a cheap HBM2 card blow a top GDDR5X card away just because of the VRAM? Or is the boost from this step rather limited, playing only a small role in the overall performance gain?
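One rough way to frame the GDDR5-to-GDDR5X step: raw bandwidth is just bus width times per-pin data rate, so on the same 256-bit bus the move from 8Gbps GDDR5 to GDDR5X's 10Gbps entry speed is about a 25% bandwidth gain. (A sketch only - the final 1070/1080 memory configurations weren't confirmed at the time of this post, and benchmark gains will be smaller than raw bandwidth gains.)

```python
# VRAM bandwidth (GB/s) = bus width (bits) x per-pin data rate (Gbit/s) / 8.
# 8 Gbps is typical top-end GDDR5; 10 Gbps is GDDR5X's entry speed.

def bandwidth_gbs(bus_bits: int, rate_gbps: float) -> float:
    return bus_bits * rate_gbps / 8

gddr5 = bandwidth_gbs(256, 8)    # 256.0 GB/s
gddr5x = bandwidth_gbs(256, 10)  # 320.0 GB/s
print(f"GDDR5X gain on the same 256-bit bus: {gddr5x / gddr5 - 1:.0%}")
```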
-
So who will have the lion's share in the upcoming HBM battle between the two giants? Will the green flag continue to rise, or will it be the perfect opportunity for the reds to bounce back? Also, how will the current generation of GDDR5 muscle from both companies stack up against it? (Edit: sorry for the poll option - it should be AMD Greenland. Thanks to the people who pointed this out.)
-
http://wccftech.com/nvidia-pascal-volta-gpus-sc15/ As we all know, Pascal will be released sometime in 2016. I'm personally betting on late Q2 into Q3 based on Samsung's HBM2 ramp-up. Wccftech summarized Nvidia's SC'15 presentation and pulled out the pertinent details. http://images.nvidia.com/events/sc15/SC5125-energy-efficient-architectures-exascale-systems.html Pascal is to top out at a double-precision performance of 4 TFlops - a solid improvement, but not exactly ground-breaking, compared to AMD's S9170 FirePro rated at 2.62 TFlops DP. It's not yet clear whether Nvidia went with 1/2-rate DP or kept in step with Kepler's model of 1/3-rate. I'm betting on the latter due to the second announcement: Volta will top out at 7 TFlops DP, on the same 16nm FF+ node. If Pascal is 1/3-rate, that would put SP at 12 TFlops. If Volta then moved to 1/2-rate, the corresponding SP performance would be 14 TFlops - a much more believable improvement between generations on the same node than Pascal (at 1/2-rate DP) going from 8 TFlops SP to Volta's 14, and it's certainly more believable than Volta being 1/3-rate and having a whopping 21 TFlops SP in a single-die solution. 14/12 = 1.1666... (16.7% improvement); 14/8 = 1.75 (75% improvement); 21/12 = 1.75 (75% improvement); 21/8 = 2.625 (162.5% improvement - Pascal at 1/2, Volta at 1/3, and obviously insane). Second, Nvidia's presenter outlined that the memory power/thermal problem is far from resolved. In fact, it gets worse when you crank up HBM2. While below 1GHz it's a very efficient memory architecture, above that point HBM2 very quickly becomes more power-hungry than GDDR5X for each additional cycle per second and every additional GB/s. At 1.2TB/s (the bandwidth of Pascal and Volta for now), the memory package takes 60W of the thermal envelope on its own. Whether this is with 4-Hi or 8-Hi stacks was not stated, but it doesn't bode well either way.
For reference, a 1TB/s arrangement of GDDR5X is 16 64-bit-wide chips at 2GHz, with a thermal requirement of 70W (Micron). Per clock, HBM2 is actually worse if Nvidia's figure is for 4-Hi stacks (4x4 = 16 chips at half the clock speed). All in all, the plot thickens in the GPU wars. Now to eagerly await the Arctic Islands announcements and unveiling. It looks like we'll have to find HBM2's replacement much sooner than we'd hoped. *laughs to self, since HMC has lower latency and comparable bandwidth to HBM1, and the 8 chips on KNL only need 18W of thermal dissipation power*
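The ratio arithmetic above can be reproduced directly (the 4 and 7 TFlops DP figures are from the SC'15 slides as reported; the SP numbers are this post's speculation about DP:SP rates):

```python
# Reproducing the SP-throughput ratios speculated above.
# DP figures (4 and 7 TFlops) are the reported SC'15 numbers; SP depends on
# whether each chip uses a 1/2 or 1/3 DP:SP rate.

pascal_dp, volta_dp = 4, 7

scenarios = {
    "Pascal 1/3 -> Volta 1/2": (pascal_dp * 3, volta_dp * 2),  # 12 -> 14 TFlops SP
    "Pascal 1/2 -> Volta 1/2": (pascal_dp * 2, volta_dp * 2),  # 8  -> 14
    "Pascal 1/3 -> Volta 1/3": (pascal_dp * 3, volta_dp * 3),  # 12 -> 21
    "Pascal 1/2 -> Volta 1/3": (pascal_dp * 2, volta_dp * 3),  # 8  -> 21
}

for name, (p_sp, v_sp) in scenarios.items():
    gain = v_sp / p_sp - 1
    print(f"{name}: {v_sp}/{p_sp} = {v_sp / p_sp:.4f} ({gain:.1%} improvement)")
```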
-
I know Nvidia is gonna use HBM in Pascal, but how are they able to? AMD holds the patent for HBM, so did they, like, give Nvidia a license to use it? I'm assuming this would mean Nvidia would have to pay royalties for every GPU they sell with HBM. Are they doing this, or what? I'm just confused about it all. This would be a big advantage for AMD if it's true, because they could charge a lot for the HBM and then Nvidia would have to drive up their high-end GPU prices.
-
So I want to get a new card ONLY FOR 1080p 60 FPS... but I don't know what is better for my money. My limit is $550. So my choices are the 390 8GB for about $350 or the Nano for about $489. All I want is a solid 60 fps at 1080p. I do not care about anything above 60fps, or about 4K. I use my 60-inch Vizio TV as a monitor, so I can't go higher than that anyway. So what do you think is best for that? The card I have at the moment is an AMD 7850 2GB.
-
source: https://www.jedec.org/news/pressreleases/jedec-updates-groundbreaking-high-bandwidth-memory-hbm-standard --- Who is JEDEC? The JEDEC Solid State Technology Association, formerly known as the Joint Electron Device Engineering Council (JEDEC), is an independent semiconductor engineering trade organization and standardization body. JEDEC was founded in 1958 as a joint activity between the EIA (Electronic Industries Alliance) and the National Electrical Manufacturers Association (NEMA) to develop standards for semiconductor devices; NEMA discontinued its involvement in 1979. --- What is HBM? High Bandwidth Memory is a type of DRAM jointly developed by AMD and Hynix as a competing product to Intel's and Micron's Hybrid Memory Cube. HBM was adopted as a standard by JEDEC in 2013 under the JESD235 nomenclature. The first products to utilize HBM were AMD's Fury lineup, using the Fiji GPUs.
-
SOURCES: =================================================================================== Yeah, WCCF "strikes again", this time bringing a pretty interesting thing, I have to say. I say interesting because this kind of busts all our previous speculation about this card. In a nutshell, the glorious Fury card might come in 3 variants: -Fury Nano >>> Supposedly the water-cooled one which appeared in various renders and pictures. -Fury XT >>> Might come with a triple-fan air cooler (I smell some Sapphire stock cooling, but that's me). -Fury PRO >>> Also air-cooled, and possibly the candidate for further aftermarket cooling (Fury Lightning, anyone?). Observe the words: "might", "speculation". It's a rumour, triggered by the appearance of "Nano" in the name. The pictures are the same as before. This being said, though, we're very close to the official launch at E3, so any further speculation kind of seems pointless. As always, take this info with a grain of salt, and leave your thoughts below.
-
The people over at uk.hardware.info have managed to overclock the HBM memory of the R9 Fury X by 20% through a Catalyst Control Center (CCC) glitch. Apparently, after every few system reboots, a memory clock slider in CCC Overdrive became available. Given the extremely high memory bandwidth of HBM already, it is hard to say whether a 20% increase would make a big difference in normal gaming - let alone be worth the risk. What does this mean? It could mean that the memory clocks simply aren't adjustable yet, until Afterburner or TriXX receives an update. Given time, I'm sure someone will figure it out. The downside of this news is that we still don't know whether the core voltage can be unlocked, as CCC never allows voltage adjustment on any AMD card - 3rd-party software like AB or TriXX is necessary for that. Given AMD's deliberate remarks that the Fury X is "an overclocker's dream," one would hope voltage isn't completely locked down, but we will have to wait and see.
-
NCIX: http://bit.ly/1BA5449 Amazon: http://geni.us/3nl4 The Fury X has finally arrived! Has AMD left its competition in the dust with their new flagship card? This video would have been extremely delayed without the help of The Tech Report. Check out their Fury X article at http://techreport.com/review/28513/amd-radeon-r9-fury-x-graphics-card-reviewed
-
With the announcement of AMD's new line-up of cards, people are pretty excited for the Fury X - so much so that whenever I search 'r9 fury' on Google, every result is about the R9 Fury X. I'm personally more interested in the Fury (non-X), since it is just about the same card, minus the overclocking potential, at a lower price. Are the benchmarks not out yet for the air-cooled version, or are they just hidden under the endless stream of Fury X articles?
-
Should I get a GTX 970 now and wait for Nvidia's Pascal GPUs in 2016, or get a 980 Ti? http://wccftech.com/nvidia-pascal-gpu-gtc-2015/ They claim 10x the performance compared to Maxwell.
-
source: http://www.computerbase.de/2015-08/idf-2015-samsung-fertigt-high-bandwidth-memory-ab-2016/ - German. Soon, SK Hynix won't be the only manufacturer of HBM chips. In a private showing at IDF, Samsung revealed its plans to mass-produce HBM memory chips for a variety of applications. Mass production will start in early 2016 with chips aimed at graphics cards (Nvidia's Pascal is a very good guess) and supercomputers (the HPC market; IBM might have a hand in this). In 2017 and 2018, Samsung plans to expand use into networking products and "others". --- There were some unfounded rumors that AMD somehow controls where HBM production goes - to be more specific, that SK Hynix would prioritize its production for AMD. While that could all very well be true, neither SK Hynix nor AMD controls what happens to HBM, since it's a JEDEC standard; anyone (on the member list) can pursue it - case in point, Samsung. So, Nvidia doesn't have a problem acquiring HBM chips for its upcoming Pascal architecture.
-
Hi, not sure if this has been covered yet, but I am confused by all the talk about HBM coming out on the AMD cards. I don't know if you need that much bandwidth. I always overclock my memory last when I OC my graphics cards, as that is not as big a priority as the clock speed of the GPU, right? So I know it can help a little with fps, but how much is enough? Is this just hype? Does actual VRAM capacity take a higher priority than bandwidth? I just don't know. I game at 2560x1440 and I think that 8GB of VRAM is the sweet spot, but at what bandwidth? I would just like some clarification, so if you are in the know, please let me know so I can wrap my head around it. Thanks.
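To put rough numbers on "how much bandwidth": it's just bus width times effective per-pin data rate. A sketch using the stock specs of two cards from the same era (R9 290X with GDDR5 vs R9 Fury X with HBM):

```python
# VRAM bandwidth (GB/s) = bus width (bits) x effective data rate (Gbit/s per pin) / 8.
# Stock specs: R9 290X (GDDR5, 512-bit @ 5 Gbps), R9 Fury X (HBM, 4096-bit @ 1 Gbps).

def bandwidth_gbs(bus_bits: int, rate_gbps: float) -> float:
    return bus_bits * rate_gbps / 8

print(f"R9 290X (GDDR5): {bandwidth_gbs(512, 5.0):.0f} GB/s")   # 320 GB/s
print(f"Fury X  (HBM):   {bandwidth_gbs(4096, 1.0):.0f} GB/s")  # 512 GB/s
```

So HBM's win comes from the enormously wide bus, not the clock speed - which is also why overclocking HBM's clock matters less than it does on GDDR5.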
-
According to dsogaming.com, AMD's newest Fiji architecture is rumoured to come in 4 different variants: Fiji Pro 4GB, $599 (Fury); Fiji XT 4GB, $749 (Fury X); Fiji XTX 8GB, $849; Fiji VR 2x8GB, $1399-$1499. At this point the rumours are flying in circles, and it's hard to know what's potentially real and what's pure fantasy. If this is true, the pricing seems very competitive for the Fiji Pro, which is probably going to be a direct competitor to the 980 Ti. http://www.dsogaming.com/news/rumour-amd-fiji-models-revealed-will-come-with-both-4gb-8gb-cheapest-model-at-599/ This rumour is backed up by the claim that the 8GB variant will appear sometime in August: http://www.pcgameshardware.de/AMD-Radeon-Grafikkarte-255597/News/Fiji-8-GB-Fury-X-Titan-X-980-Ti-1160568/
-
Guys, actually, why is the Titan X cheaper than the Titan Z? Is the 390X better than the Titan X by the specs, with the new HBM thing? What would you recommend to me by specs?