Why Is AMD Selling Broken Playstations?

AlexTheGreatish
2 minutes ago, Nystemy said:

But a supply that is designed correctly won't have that issue.

 

Sure, but....

 

Gigabyte/Newegg decided to dump those onto the public even after they knew there was an issue, despite having a name to lose.

And you still want to trust a cheap no-name PSU not to have cut some corners, for the case where someone operates it far out of spec?


8 minutes ago, Kronoton said:

 

Sure, but....

 

Gigabyte/Newegg decided to dump those onto the public even after they knew there was an issue, despite having a name to lose.

And you still want to trust a cheap no-name PSU not to have cut some corners, for the case where someone operates it far out of spec?

If a large company decides to shit on their customers, then it is their loss. If anything it gives consumers and competitors legal ground to take them to court.

 

I am not defending Gigabyte's flawed PSU; that is a fire hazard.

Nor am I defending Newegg for flogging a known-bad PSU to its customers.

But a PSU that has passed regulatory approval and has shown no signs of misbehaving shouldn't be tossed into the "this might start a fire" bucket for being operated below its rated voltage, unless it has actually shown itself capable of failing in such a manner.

A 230-volt-only PSU is most often simply not going to work at 110 volts; it won't catch fire. It might work fine if it is a flyback converter, but under a sufficiently high load it will either trip on OPP or blow its fuse, most likely at only some 55-70% of what is written on the box. And that isn't a catastrophic failure.
 

Though people are generally uninformed about the inner workings of electronic products; they are mainly treated as black magic, for better or worse. (As someone who works in electronics design and manufacturing, that is honestly mostly annoying...)


Thanks for fixing the video title



19 hours ago, SteelSkin667 said:

They didn't do any math, Linus is just reading benchmark results.

 

The way you would calculate memory bandwidth is by multiplying its data rate (usually expressed in MT/s or in Gbps per pin) by its bus width (64 bits per channel in the case of DDR type memory).

What about if it is GDDR memory, and does the calculation change if it is GDDR5/X to GDDR6/X?

Fuck you scalpers, fuck you scammers, fuck all of you jerks that charge way too much to tech-illiterate people. 

Unless I say I am speaking from experience or can confirm my expertise, assume it is an educated guess.

Current setup: Ryzen 5 3600, MSI MPG B550, 2x8GB DDR4-3200, RX 5600 XT (+120 core, +320 Mem), 1TB WD SN550, 1TB Team MP33, 2TB Seagate Barracuda Compute, 500GB Samsung 860 Evo, Corsair 4000D Airflow, 650W 80+ Gold. Razer peripherals. 

Also have an Alienware Alpha R1: i3-4170T, GTX 860M (≈ a 750 Ti), 2x4GB DDR3L-1600, Crucial MX500

My past and current projects: VR Flight Sim: https://pcpartpicker.com/user/nathanpete/saved/#view=dG38Jx (Done!)

A do it all server for educational use: https://pcpartpicker.com/user/nathanpete/saved/#view=vmmNcf (Cancelled)

Replacement of my friend's PC nicknamed Donkey, going from 2nd gen i5 to Zen+ R5: https://pcpartpicker.com/user/nathanpete/saved/#view=WmsW4D (Done!)


GDDR6X uses different encoding (PAM4) compared to GDDR6 or GDDR5 (classic NRZ), allowing for more bandwidth...

GDDR6X, GDDR6, and GDDR5 send 4 bits per clock; DDR sends 2 bits per clock (it's in the name: double data rate),

 

so you multiply the number of megatransfers by the bus width to get the maximum bandwidth...

 

ex for DDR4-3200 dual channel (2 sticks): 3200 MT/s x 128 bits (2 x 64-bit wide sticks) = 409600 megabits/s = 51200 megabytes/s, or about 51.2 GB/s

edit to correct: the megatransfer figure already includes the 2 bits per clock; the RAM runs at half the advertised value (e.g. "3200 MHz" sticks actually run at 1600 MHz but transfer 2 bits per cycle, hence the 3200 MT/s figure)
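The corrected arithmetic above can be sketched in a few lines of Python (the function name and the dual-channel/64-bit defaults are just illustrative):

```python
# Peak DDR bandwidth: data rate (MT/s) x total bus width (bits), then bits -> bytes.
# The MT/s figure already includes the "2 bits per clock" of double data rate,
# so there is no extra factor of 2 here.
def ddr_peak_bandwidth_gbs(mt_per_s, channels=2, channel_width_bits=64):
    total_megabits_per_s = mt_per_s * channels * channel_width_bits
    return total_megabits_per_s / 8 / 1000  # Mb/s -> MB/s -> GB/s

# Dual-channel DDR4-3200: 3200 x 128 = 409600 Mb/s = 51200 MB/s
print(ddr_peak_bandwidth_gbs(3200))  # 51.2
```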

 

For video memory I'm just going to paste from AnandTech. It's from an article about Micron making GDDR6X, so the maximums may just be what Micron can do; others may do more.

The maximum bandwidth per pin depends on the frequency (MT/s).

For example, using the chart below: for a video card with a 128-bit bus width and GDDR6, you have 14 Gbps x 128 bits = 1792 Gbps, or /8 = 224 gigaBYTES per second maximum theoretical.
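That per-pin calculation can be sketched like this (hypothetical helper name; the Gbps-per-pin figures are the ones quoted in this thread):

```python
# GDDR peak bandwidth: per-pin data rate (Gbps) x bus width (bits), then /8 for GB/s.
def gddr_peak_bandwidth_gbs(gbps_per_pin, bus_width_bits):
    return gbps_per_pin * bus_width_bits / 8

print(gddr_peak_bandwidth_gbs(14, 128))  # 224.0 (the 14 Gbps GDDR6 example above)
print(gddr_peak_bandwidth_gbs(14, 256))  # 448.0 (same memory on a 256-bit bus)
```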

 

Keep in mind that you have timings and latencies... you don't get this speed instantly; latency and other things screw things up. For example, the processor requests to read something from RAM, and then it takes a few nanoseconds for the memory chips to be "ready" to give that data to the processor. Once it's ready, the memory can push the data to the CPU through its pins at that maximum bandwidth per pin.

If you're dealing with reading and writing small pieces of data, the CPU may request something from RAM, wait 10 nanoseconds, and then the transfer takes 2 nanoseconds... so the RAM's ability to transfer quickly is meaningless if it takes several times longer for the RAM to be ready to transfer or receive data.

For video cards the higher latency is not a big problem, because they mostly deal with big chunks of data (textures and such) and have reasonably big internal caches to keep small items in cache instead of reading them from the video card's RAM.

CPUs counteract that by using multiple ranks and multiple channels: the CPU can queue reads and writes on separate ranks or channels, so it can, for example, send a command to one rank or channel to read something (knowing it will take a few nanoseconds to be ready) and in the meantime use another rank or channel to write something. That way the CPU doesn't sit idle waiting for the memory to be ready to give or receive data.

 

(AnandTech chart of per-pin data rates for the various GDDR memory types)

 

 


3 hours ago, Nathanpete said:

What about if it is GDDR memory, and does the calculation change if it is GDDR5/X to GDDR6/X?

Usually GDDR specs use the Gb/s per pin notation (that is gigabits per second, not gigabytes), which already takes into account the differences between the different types of memory. All you have to do is to multiply by the bus width.

For example, the PS5 uses 14 Gbps GDDR6 memory on a 256-bit bus, so the calculation simply goes 14 * 256 = 3584 Gbit/s or 448 GB/s (which is the figure stated in the specs).

The relation between the bandwidth per pin and the actual underlying clock speed of the memory does change depending on memory type, as newer types of GDDR will transmit more bits per clock cycle, but usually the only time you'll see it is if you are overclocking your VRAM.


20 minutes ago, mariushm said:

ex for ddr4 dual channel (2 sticks)        3200 mega x 2 bits per clock x 128 (2 x 64 bit wide per stick) = 819200 mega bits = 102400 mega bytes

The specs for DDR already take into account the fact that it is double data rate, hence why it is expressed in MT/s rather than MHz. So the bandwidth of dual-channel DDR4-3200 would be 3200 * 64 * 2 = 409600 Mb/s, or about 51.2 GB/s.

However, its actual clock speed is 1600 MHz.


5 minutes ago, SteelSkin667 said:

The specs for DDR already take into account the fact that it is double data rate, hence why it is expressed in MT/s rather than MHz. So the bandwidth of dual-channel DDR4-3200 would be 3200 * 64 * 2 = 409600 Mb/s, or about 51.2 GB/s.

However, its actual clock speed is 1600 MHz.

Yeah, thanks for correcting me, you're right. I just woke up and hadn't even had my coffee yet, so small mistakes like that can happen. 


On 10/23/2021 at 1:52 PM, BuckGup said:

This is LTT where spreading misinformation to their community is dismissed because they want more money. Sorta dumb if your channel is based upon educating your community if you ask me 

Reasonably sure no one has ever made that statement.


On 10/25/2021 at 12:47 AM, SteelSkin667 said:

Usually GDDR specs use the Gb/s per pin notation (that is gigabits per second, not gigabytes), which already takes into account the differences between the different types of memory. All you have to do is to multiply by the bus width.

For example, the PS5 uses 14 Gbps GDDR6 memory on a 256-bit bus, so the calculation simply goes 14 * 256 = 3584 Gbit/s or 448 GB/s (which is the figure stated in the specs).

The relation between the bandwidth per pin and the actual underlying clock speed of the memory does change depending on memory type, as newer types of GDDR will transmit more bits per clock cycle, but usually the only time you'll see it is if you are overclocking your VRAM.

Yeah, I get the overclocking part. My 5600 XT has a base memory speed of 1500 MHz and advertises that as 12 Gbps, and my specific card actually advertises 1750 MHz, which they claim is 14 Gbps, but I can overclock that further to 1830 MHz, which I guess is closer to 14.64 Gbps by my math. For GDDR6 across a 192-bit bus that is about 2810 Gbit/s, or 351.4 GB/s, using your math.

 

So now I suppose I understand why people are always talking about memory bus width in GPU leaks, because it really does have a very significant impact on memory performance.

 

Now what do you call it when you say 14 Gbps versus 3584 Gbit/s? Because they are hugely different numbers, but when said aloud aren't both units just gigabits per second?
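The clock-to-bandwidth math in this post can be sketched as follows (the x8 GDDR6 multiplier is inferred from the 1750 MHz -> 14 Gbps figures above, not taken from a datasheet):

```python
# GDDR6: the effective per-pin data rate is roughly 8x the reported memory clock
# (e.g. 1750 MHz -> 14 Gbps). This multiplier is inferred from the advertised
# figures quoted in the thread.
def gddr6_gbps_per_pin(mem_clock_mhz):
    return mem_clock_mhz * 8 / 1000  # Mbps per pin -> Gbps per pin

def bandwidth_gbs(gbps_per_pin, bus_width_bits):
    return gbps_per_pin * bus_width_bits / 8

rate = gddr6_gbps_per_pin(1830)  # overclocked 5600 XT memory
print(rate)                      # 14.64
print(bandwidth_gbs(rate, 192))  # ~351.4 GB/s on a 192-bit bus
```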



On 10/27/2021 at 3:37 PM, Nathanpete said:

Yeah, I get the overclocking part. My 5600 XT has a base memory speed of 1500 MHz and advertises that as 12 Gbps, and my specific card actually advertises 1750 MHz, which they claim is 14 Gbps, but I can overclock that further to 1830 MHz, which I guess is closer to 14.64 Gbps by my math. For GDDR6 across a 192-bit bus that is about 2810 Gbit/s, or 351.4 GB/s, using your math.

 

So now I suppose I understand why people are always talking about memory bus width in GPU leaks, because it really does have a very significant impact on memory performance.

 

Now what do you call it when you say 14 Gbps versus 3584 Gbit/s? Because they are hugely different numbers, but when said aloud aren't both units just gigabits per second?

The memory bus width does have a very significant impact on memory performance, as a wider bus is often the main way to scale memory bandwidth up when GPUs in a given generation only get to use one or two types of memory.

 

Those figures are both gigabits per second, but the 14 Gb/s one is "per pin" (on a hypothetical 1-bit bus), or how many transfers the memory can do per second. If they used the same notation as DDR, it would be 14000 MT/s.

On a 256-bit data bus, 256 bits of data will be transferred at a time, which is why you then multiply that per-pin figure by the bus width to get the actual memory bandwidth of the graphics card.


  • 2 months later...
On 10/23/2021 at 10:34 PM, jSONBB said:

The BIOS for this board CAN be edited with AMIBCP 😉

Maybe someone at LTT can do some... poking around?

Version 5.02.0031 opens it.

 

P.S. there are "Memory Clock" controls among various other options to unlock;

Set "Access/Use" to "USER"

(screenshot of the unlocked BIOS options in AMIBCP)

default RAM speed is 800MHz

Did you try to flash this modified BIOS? Did it work?
Do you know if Ryzen Master / DRAM Calculator for Ryzen / ClockTuner for Ryzen work with this kit?

Does CPU-z recognize this CPU?

 

Cheers.


  • 3 months later...

Sorry for digging this up, but I am curious: couldn't one just clone the PS5 memory (firmware/OS) onto a hard disk and have it boot from that? I mean, it is essentially the same hardware; it should boot, shouldn't it?

