PianoPlayer88Key

Member
  • Posts

    1,627
  • Joined

  • Last visited

Reputation Activity

  1. Informative
    PianoPlayer88Key reacted to Kilrah in Seagate has created a HDD with transfer speeds to rival SATA-based solid state drives   
    Dust that is microscopic is enough to cause damage...
     
    Talk about over-engineering...
     
    Even if it worked, your single "built-in RAID" drive would cost 5 times what 2 usual drives do and still not offer the same level of data safety, so nobody would care about them 🙂
  2. Agree
    PianoPlayer88Key reacted to leadeater in Seagate has created a HDD with transfer speeds to rival SATA-based solid state drives   
    Yep, but it makes about as much sense as calling SSDs persistent diamonds, because that's exactly how much the multiple petabytes of disk storage for backups would cost if I were to put them on SSDs.
     
    HDDs aren't dying nor are they useless and this has probably just doubled or more their useful life span.
     
    Sure, you can all keep your nice PCIe 4.0 NVMe SSDs with terrible write endurance; I'll be just as happy with these dual-actuator drives in object storage servers totaling hundreds of drives, with more IOPS and throughput than your PCIe 4.0 NVMe drive. But OK, fine, that's going to cost a lot more than a single NVMe drive... then show me a single NVMe drive with 4PB+ capacity 🙃
  3. Funny
    PianoPlayer88Key reacted to Boosted_i6 in Welcome to the Linus Tech Tips forum!   
    Looks great!
    Keep firing Slick until he gets the bugs worked out
  4. Like
    PianoPlayer88Key reacted to LinusTech in Welcome to the Linus Tech Tips forum!   
    Thanks so much for joining. My vision is a community based on positive member interaction and helping each other. I believe that if we all work together toward this goal, we will have success in the long run.
    We don't have official forum rules or anything like that yet, but as long as everyone is respectful of each other, and patient with the growing pains that we are going to experience (vBulletin 5 is VERY beta right now) we can make this into an AWESOME community forum.
    Linus
  5. Agree
    PianoPlayer88Key reacted to AndreiArgeanu in Performance of Rocket Lake locked CPUs can vary up to 50% on some B560 motherboards.   
    Probably when AMD is in the same position Intel was in a few years ago, when they basically released the same CPUs year after year with small clock improvements and maybe a die shrink if you were lucky. It meant that if you got a 2nd/3rd-gen i5 or i7 back in 2011-2012, you didn't have much of a reason to upgrade to a 6th- or 7th-gen part 5 years down the line.
  6. Agree
    PianoPlayer88Key reacted to darknessblade in 32x SATA port on a single motherboard   
    That motherboard could be a whole lot smaller if they opted for SFF-8644 connectors, with the HDDs sitting in a large rack.
     
    If we add a Threadripper, which has 128 PCIe lanes:
    A PCIe x4 SFF-8644 card has 4 connectors, for a total of 16 hard disks per card.
    That works out to a total of 512 hard disks; with 16TB drives, that is 8,192 terabytes, or about 8 petabytes of data storage.
     
    That way you can get more connectors and hard disks hooked up to a board of micro-ATX size.
     
    Since the video card in the board doesn't need much performance, and only has to display the screen/status, an extremely cheap PCIe x1 video card would be more than enough, which would only decrease the total capacity by 64TB.
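     
    A quick back-of-the-envelope check of that math (a sketch in Python; the lane, connector, and drive counts are the ones from the post above):
    ```python
    # Figures from the post: 128 PCIe lanes, x4 cards with four SFF-8644
    # connectors each, four drives per connector, 16TB per drive.
    total_lanes = 128
    lanes_per_card = 4
    drives_per_card = 4 * 4          # 4 connectors x 4 drives = 16
    tb_per_drive = 16

    cards = total_lanes // lanes_per_card     # 32 cards
    drives = cards * drives_per_card          # 512 drives
    capacity_tb = drives * tb_per_drive       # 8,192 TB, i.e. ~8 PB

    # Each lane carries 4 drives' worth of storage, so giving one lane
    # to a cheap x1 video card costs 4 * 16 = 64TB of capacity.
    tb_per_lane = (drives_per_card // lanes_per_card) * tb_per_drive

    print(cards, drives, capacity_tb, tb_per_lane)   # 32 512 8192 64
    ```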
  7. Like
    PianoPlayer88Key reacted to Morris_lee_9116 in 32x SATA port on a single motherboard   
    ah yes the best NAS motherboard
  8. Informative
    PianoPlayer88Key reacted to igormp in Mainstream DDR5 Memory Modules Pictured, Rolling Out For Mass Production & Coming To Intel & AMD’s Next-Gen Platforms Soon!   
    Well, yeah, the CPU would be a bottleneck in the end if you did hit all of the drives at once.
  9. Informative
    PianoPlayer88Key reacted to igormp in Mainstream DDR5 Memory Modules Pictured, Rolling Out For Mass Production & Coming To Intel & AMD’s Next-Gen Platforms Soon!   
    That's usually what they validated with the DIMM capacities available at the time. One nice example of this happening can be seen here.
    Some CPUs have flaws that limit the maximum RAM you can have, such as what happened on Ivy Bridge/Haswell. One would hope that newer CPUs are well designed and can handle more than what was validated.
    Yeah, it should be doable. As an example, AMD's 1st- and 2nd-gen TRs only announced support for 128GB, since there were no 32GB sticks available at the time. Once those came out, people found that they worked without any problems.
    On the other hand, 3rd-gen TR (non-PRO) supports 512GB of RAM, but there's no way to get that, since those don't support registered DIMMs; you're stuck with 8x32GB until 64GB UDIMMs become a thing (if ever).
  10. Informative
    PianoPlayer88Key reacted to igormp in Mainstream DDR5 Memory Modules Pictured, Rolling Out For Mass Production & Coming To Intel & AMD’s Next-Gen Platforms Soon!   
    Around 8GB on my swap partition. I'm using Linux. I avoid having stuff in swap like the plague, since everything gets unbearably slow to use.
  11. Informative
    PianoPlayer88Key reacted to igormp in Mainstream DDR5 Memory Modules Pictured, Rolling Out For Mass Production & Coming To Intel & AMD’s Next-Gen Platforms Soon!   
    I have mine on an NVMe drive, but it's still slow as hell compared to having things in your regular RAM. Having to swap pages between the disk (no matter how fast) and RAM is painful.
    Aren't 32GB SO-DIMM sticks supported on your laptop? It'd be worth a try IMO, even if Clevo hasn't validated them.
  12. Informative
    PianoPlayer88Key reacted to igormp in Mainstream DDR5 Memory Modules Pictured, Rolling Out For Mass Production & Coming To Intel & AMD’s Next-Gen Platforms Soon!   
    While speed would surely be better (if companies keep targeting higher-speed NVMe drives instead of delivering the same speeds over fewer lanes), latency would still be an issue.
    Since you live in the US, buying a kit from Amazon and seeing if it works shouldn't be a problem, right? If it doesn't, you could just return it.
  13. Informative
    PianoPlayer88Key reacted to igormp in Mainstream DDR5 Memory Modules Pictured, Rolling Out For Mass Production & Coming To Intel & AMD’s Next-Gen Platforms Soon!   
    That's not how it works. If you're downgrading 256 PCIe 4.0 lanes to 2.0 speeds, you'd have 256 PCIe 2.0 lanes. That's also why most PCIe 5.0 SSDs have the same speed and capacity as the current 4.0 ones but use half the lanes.
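     
    The lane-vs-generation trade-off is easy to sanity-check. A minimal sketch (the per-lane figures are the commonly cited approximate usable rates, not exact spec numbers):
    ```python
    # Approximate usable bandwidth per PCIe lane, GB/s, one direction.
    GBPS_PER_LANE = {2: 0.5, 3: 1.0, 4: 2.0, 5: 4.0}

    def link_bandwidth(gen: int, lanes: int) -> float:
        """Total one-way bandwidth of a PCIe link."""
        return GBPS_PER_LANE[gen] * lanes

    # 256 lanes run at 2.0 speeds are just 256 PCIe 2.0 lanes:
    print(link_bandwidth(2, 256))                      # 128.0 GB/s
    # Doubling the generation while halving the lane count is a wash,
    # which is why a 5.0 x2 SSD can match a 4.0 x4 one:
    print(link_bandwidth(4, 4), link_bandwidth(5, 2))  # 8.0 8.0
    ```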
  14. Informative
    PianoPlayer88Key reacted to igormp in Mainstream DDR5 Memory Modules Pictured, Rolling Out For Mass Production & Coming To Intel & AMD’s Next-Gen Platforms Soon!   
    No, you can't do that; a lane is a physical trace, and you can't just double it.
    Yes. The only way to do so would be with a PLX chip, but then you're getting into high-end, server-grade hardware.
  15. Informative
    PianoPlayer88Key got a reaction from TheCoder2019 in Mainstream DDR5 Memory Modules Pictured, Rolling Out For Mass Production & Coming To Intel & AMD’s Next-Gen Platforms Soon!   
    Yeah, not having to use swap at all would be nice. 🙂  But I definitely notice a difference between swapping to NVMe vs swapping to platter HDD.
    I was just thinking ... with PCIe 5.0, or possibly PCIe 6.0 if it gets fast-tracked (and I wait for that) ... I'm guessing NVMe SSDs (especially x8 or x16 ones, not just the pleb x4 ones) would be faster, at least for sequential transfers, than my DDR3 RAM, and maybe even faster than my DDR4.
     
    I'll put a couple screenshots in a spoiler.
    Both have 2 runs each (4 total) of CrystalDiskMark (peak & real-world) on an NVMe SSD, and a RAM Disk.
    First is my desktop (i7-4790K, ASRock Z97 Extreme6, 32GB (8GBx4) G.Skill Ares DDR3-1600 CL9 DIMM, 1TB Samsung 970 Evo).
    Second is my laptop (i7-6700K, Clevo P750DM-G, 64GB (16GBx4) G.Skill Ripjaws DDR4-2133 CL15 SO-DIMM, 2TB Silicon Power P34A80).
    (I'll have to edit this post from my laptop after I submit on my desktop to put the 2nd screenshot in.)
    Wow, the laptop is only 4x the pixels (via an Asus VG289Q; the desktop is using a Dell U2414H), but the screenshot is like 16.5x the size (8.5MB vs 500KB).
     
    I doubt it.  Mine's a Skylake laptop (which I believe capped out at 16GB SO-DIMMs), and I have a hunch there isn't even an official Kaby Lake BIOS update for it.
    There might be something unofficial, like PremaMod if he's still doing them, or something that might also add support for Coffee Lake, as I've seen things about modding this laptop to support CFL.
    But ... while I'd like the extra cores, I really would like them in a lower thermal/power envelope.  Having it be on the same 14nm process (and not a die shrink) gives me cold feet about doing another in-socket upgrade.
    I already upgraded from the i3-6100 I originally had to the i7-6700K.  Got the 6700K for $259 in November 2016, and I think I paid less for both CPUs combined than I would have for just the 6700K if I'd gotten it with the laptop in December 2015.  (I think it was going for about $420 around then; I paid $130 for the i3.)  I was on a limited budget, and at the time storage capacity was the priority, along with future upgradeability.  (Started with 8GB RAM, the i3, and a 2TB Seagate/Samsung M9T hard drive.)
  16. Informative
    PianoPlayer88Key got a reaction from TheCoder2019 in Mainstream DDR5 Memory Modules Pictured, Rolling Out For Mass Production & Coming To Intel & AMD’s Next-Gen Platforms Soon!   
    Ahh, I wondered if it was Linux, wasn't sure what task manager interface that was.
    My swapfile usage was a bit over 303 GB, between swapfiles on 2 SSDs.  There was a bit over 116GB on a 1TB 2.5" SATA (WD Blue 3D) SSD, and a bit under 187GB on a 2TB M.2 NVMe (Silicon Power P34A80) SSD.
    Having swap on SSD does help with performance from what I've observed.
    Yeah, I've heard people say you shouldn't put swap on SSDs because it uses up write cycles, but I think a lot of recent TLC SSDs have good enough endurance that it shouldn't be much of an issue.  My 1TB Blue 3D specifies 400TB of endurance, while my 2TB P34A80 specifies about 3.1PB.
    Datacenter SSDs, with their (IIRC) tens of petabytes of endurance, would probably be even better, but last I checked, even before the storage-mining rumors started coming out, those weren't cheap.  OTOH, I don't like the low endurance of a lot of QLC SSDs... basically, when possible, I like SSDs with at least 1PB of endurance per TB of capacity, or >1 DWPD over 5+ years.  (Hmm, I wonder how much you can write to a HDD before it dies, vs. an SSD....)
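     
    Both endurance yardsticks are easy to compute from a drive's TBW rating. A quick sketch using the two drives quoted above:
    ```python
    # Convert a TBW rating into drive-writes-per-day over a given period.
    def dwpd(tbw: float, capacity_tb: float, years: float = 5) -> float:
        return tbw / (capacity_tb * years * 365)

    # 1TB WD Blue 3D, 400 TBW: 400 TBW per TB, ~0.22 DWPD over 5 years.
    print(round(dwpd(400, 1), 2))
    # 2TB P34A80, ~3,100 TBW: 1,550 TBW per TB, ~0.85 DWPD over 5 years.
    print(round(dwpd(3100, 2), 2))
    ```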
     
    BTW, I can't upgrade my RAM: 64GB is the max my laptop supports (DDR4, 16GB per stick, 4 SO-DIMM slots; the Clevo P750DM-G came out before 32GB SO-DIMMs existed), and 32GB is the max on my desktop (DDR3, 8GB per stick, 4 DIMM slots, ASRock Z97 Extreme6).
    My plan for a while now has been to upgrade the desktop to DDR5, and wait for DDR6 or DDR7 to upgrade the laptop.  Then I might see if I can skip to DDR8/9/10/whatever on the desktop; I don't like having to tear apart an entire system just to upgrade one component like a motherboard or case any more often than I have to.  I'd rather replace a dead-of-old-age Seasonic Prime PSU a few times before I replace either of those other 2 parts.  Beyond that, I might leapfrog the two: for example, laptop on DDR6, desktop on DDR8, laptop on DDR10, desktop on DDR12, and so on.
  17. Informative
    PianoPlayer88Key reacted to RejZoR in Mainstream DDR5 Memory Modules Pictured, Rolling Out For Mass Production & Coming To Intel & AMD’s Next-Gen Platforms Soon!   
    You'll be waiting for a while. There are no DDR5 platforms yet, and expect massive DDR5 shortages as well as very high prices. DDR4 was also very expensive in the beginning. And we didn't have a stupid pandemic back then...
     
    DDR4 is quite mature now, and modules are relatively cheap and still obtainable. I grabbed 32GB as a 2x16GB kit last December without any issues when I was building the system in my signature.
  18. Agree
    PianoPlayer88Key reacted to cookiePerimetre in Mainstream DDR5 Memory Modules Pictured, Rolling Out For Mass Production & Coming To Intel & AMD’s Next-Gen Platforms Soon!   
    I was considering upgrading my build right now, but I think I'll hold off until next gen and DDR5.
  19. Informative
    PianoPlayer88Key got a reaction from TheCoder2019 in Mainstream DDR5 Memory Modules Pictured, Rolling Out For Mass Production & Coming To Intel & AMD’s Next-Gen Platforms Soon!   
    My big concern with DDR5 isn't so much getting much faster memory (although that would be nice too); what I really need is much higher capacity.
     
    An approximate reverse timeline of the RAM in the main daily-driver systems I've used at home (year-month marks the start):
    2019-05 = 64GB = DDR4-2133 in my Clevo P750DM-G laptop
    2016-10 = 40GB = DDR4-2133 in my Clevo P750DM-G laptop (got it with 8GB in 2015-12)
    2015-02 = 32GB = DDR3-1600 in my 4790K desktop (i7-4790K, Z97 Extreme6, etc.)
    2015-01 = 16GB = DDR3-1600 in my 4790K desktop
    ~2012-03 = 2GB = DDR2-667 in dad's Dell D830 laptop (Core 2 Duo T7250) (my 2008 desktop's motherboard had died, and I didn't have the $ to build another PC)
    2009-04 = 3GB = DDR2-800 in my A64X2-4K desktop (Athlon 64 X2 4000+, GA-MA69G-S3H, etc.) (actually had 4GB, but 32-bit Windows limited it to 3GB)
    2008-02 = 2GB = DDR2-800 in my A64X2-4K desktop
    2002-02 = 256MB = DDR PC2100 in dad's Athlon 1.4GHz / K7T266 Pro2 desktop
    1999-03 = 64MB = in dad's Pentium 166 desktop (older brother had bought it in 1998-02, then sold it to dad; bro's invoice mentions "Simm 4Mx32-60 72-pin, 2 @ $32.5", $65 total)
    ~1995 = ~8MB? (or 4 or 16, IDK) = in dad's 486DX4-120 desktop (one invoice from 1995-08-19 mentions "Qty 1: 1x36-70 (12) $141.00")
    1989-01 = 640KB = in dad's 286-10 desktop
    Basically the main jumps were:
    1995 (vs 1989): 12.8x (8MB / 640KB; I'm assuming 1995 was 8MB)
    1999 (vs 1995): 8x (64MB / 8MB)
    2002 (vs 1999): 4x (256MB / 64MB)
    2008 (vs 2002): 8x (2GB / 256MB)
    2015 (vs 2012): 16x (32GB / 2GB); alternately, 2016/2012: 20x (40GB/2GB), or 2019/2012: 32x (64GB/2GB)
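     
    Those jump factors are easy to recompute (a quick sketch; it treats 640KB as 0.625MB and reuses the 2008 value of 2GB for the 2012 system):
    ```python
    # RAM capacity in MB by year, from the timeline above.
    ram_mb = {1989: 640 / 1024, 1995: 8, 1999: 64, 2002: 256,
              2008: 2048, 2015: 32768, 2019: 65536}

    years = sorted(ram_mb)
    for prev, cur in zip(years, years[1:]):
        print(f"{cur} vs {prev}: {ram_mb[cur] / ram_mb[prev]:g}x")
    # 12.8x, 8x, 4x, 8x, 16x, 2x
    ```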
    My current daily driver is the Clevo laptop with 64GB, and I still have and use the 4790K desktop across the room with 32GB.
     
    I want my next system to have at least as big a jump over my current one as some of the bigger jumps I've had before -- and that's BEFORE considering registered/buffered memory or more than 4 DIMMs.
     
    For me, 512GB of RAM would be a bare minimum, and I'd really like 1 or 2TB (more if I get registered ECC, which I want support for, or 8 or more DIMM slots).
     
    To try to make my post appear a little shorter, I've put the rest, including comments / quotes, in a spoiler.  (There's quite a bit in there...)
  20. Like
    PianoPlayer88Key reacted to leadeater in Mainstream DDR5 Memory Modules Pictured, Rolling Out For Mass Production & Coming To Intel & AMD’s Next-Gen Platforms Soon!   
    Catch? You can't buy it, like everything else right now, and even if you could, you wouldn't be able to buy a CPU that could use it anyway.
     
    Other than that, as with every DDR generation switchover: never buy into the first product-cycle iteration if you're looking for the best performance, as it's often only as good as, or slightly worse than, the best of the previous generation. Just wait a bit and check reviews until you know it's better than what came before.
  21. Informative
    PianoPlayer88Key reacted to leadeater in Mainstream DDR5 Memory Modules Pictured, Rolling Out For Mass Production & Coming To Intel & AMD’s Next-Gen Platforms Soon!   
    Yep, that's why I said you need both. Neither can replace the other and making one more like the other makes it worse for what it's actually good for in the first place.
     
    There are already CPUs with on-package HBM; the Fujitsu A64FX has it, and Intel Sapphire Rapids supports it. I believe the A64FX is HBM-only, though, and the entire SoC is designed around high vector-math throughput anyway, so it's a little too purpose-specific for this discussion.
     
    But you really need a unified memory hierarchy to truly make use of HBM for CPUs without tradeoffs, and that's too expensive outside of servers.
     
    Then there is sort of the reverse situation happening too: Samsung putting processing cores into HBM chips, though I guess that is more akin to GPU/ASIC/FPGA usage.
     
    I don't remember it being half, but then I don't remember a lot about it anymore lol. I know the IF link between the CCD and IOD is 100GB/s each on the consumer platform.
     
    My problem is I care more about EPYC than Ryzen, and there is a big-ass difference between the EPYC and Ryzen IODs.
     
    The IOD in EPYC actually has sub-NUMA domains and memory channel/controller locality, so I fear I could be talking out my ass if I were to talk about Ryzen.
     
    But I assume you mean cases like this?

     
    I understand that is actually just a fault in the test, or in the way it's being interpreted. Per core, bandwidth is limited to 25GB/s of writes, as each core can issue 2 loads but only 1 store per cycle. So if you utilize more than a single core, the total write bandwidth will increase; each core is still limited to 25GB/s of writes and 50GB/s of reads. It has no effect on computation throughput, as the cores themselves are the limiter and cannot process any more bandwidth.
     
    It's an architecture thing of the CCD/CCX/cores themselves, not the Infinity Fabric.
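     
    A toy model of that behaviour (a sketch only; the 25GB/s-per-core write cap comes from the post above, and the ~51.2GB/s ceiling assumes dual-channel DDR4-3200):
    ```python
    # Measured write bandwidth scales with the number of active cores
    # until the shared memory/fabric ceiling is hit; each core alone
    # tops out at ~25GB/s of writes (2 loads but only 1 store per cycle).
    def write_bandwidth(cores: int, per_core: float = 25.0,
                        ceiling: float = 51.2) -> float:
        return min(cores * per_core, ceiling)

    for n in (1, 2, 4):
        print(n, "core(s):", write_bandwidth(n), "GB/s")
    # 1 core(s): 25.0 GB/s, 2 core(s): 50.0 GB/s, 4 core(s): 51.2 GB/s
    ```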
     
  22. Informative
    PianoPlayer88Key reacted to porina in Mainstream DDR5 Memory Modules Pictured, Rolling Out For Mass Production & Coming To Intel & AMD’s Next-Gen Platforms Soon!   
    I look forward to the confusion that will bring when people talk about modules and channels... As always, whether a particular workload is more affected by bandwidth or by latency will vary, but bandwidth is the more easily addressable bottleneck; latency has remained roughly constant through the DDR generations. I think splitting into twice as many, half-as-wide channels might only benefit heavily random-access workloads, and it'll be interesting to see how that works out. I'm sure the usual tech sites will try to do some kind of DDR4 vs DDR5 test, depending on what platform options are available at the time. Skylake supported both DDR3 and DDR4, for example, so there was some of that testing back then.
     
    I'd really hate to think of the impact using HBM would have on general CPU use cases. Latency is horrible by design (low clock, very wide bus); it's geared towards bandwidth. Too far in one direction for a general use case. Maybe it'll make some kind of sense if everyone runs CPUs with so many cores that they start to look more like the GPUs we know today.
     
    Early DDR5 modules will likely run JEDEC standard timings. The vast majority of enthusiast-marketed DDR4 modules do not run at standard timings but are set much tighter. So, again, compare like with like. I'm sure enthusiast modules with more aggressive timings will follow.
     
    I was of the understanding that when running 1:1, unidirectional IF bandwidth equalled dual-channel RAM bandwidth. So I guess your statement would be correct if you're looking at a single DDR4 channel. Please correct me if I'm reading it differently than intended.
     
    IF was a pain point of Zen 2, especially when AMD clarified that (at least for consumer models) the CCD-to-IOD write bandwidth over IF was only half that of the read. For single-CCD CPUs, write bandwidth to memory was awful. You could only attain the RAM's full potential with two-CCD models; with one CCD they were already short on bandwidth.
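     
    For reference on the comparison being made here: theoretical DDR throughput is just the transfer rate times the bus width. A minimal sketch:
    ```python
    # Peak DDR bandwidth: MT/s x 8 bytes per 64-bit channel x channels.
    def ddr_bandwidth_gbs(mt_per_s: int, channels: int = 2) -> float:
        return mt_per_s * 8 * channels / 1000

    print(ddr_bandwidth_gbs(3200))     # 51.2 GB/s, dual-channel DDR4-3200
    print(ddr_bandwidth_gbs(3200, 1))  # 25.6 GB/s, a single channel
    ```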
  23. Like
    PianoPlayer88Key reacted to leadeater in Mainstream DDR5 Memory Modules Pictured, Rolling Out For Mass Production & Coming To Intel & AMD’s Next-Gen Platforms Soon!   
    Shouldn't actually be a problem: the IFOP bandwidth is double that of DDR4 by design, and as we know it's clocked to the memory's memclk. So they can either go with that approach again for the DDR5 generation or decouple it, though decoupling might actually be worse even if it allows an increase in bandwidth, since you'd have to factor in syncing the buses.
  24. Informative
    PianoPlayer88Key reacted to DuckDodgers in Mainstream DDR5 Memory Modules Pictured, Rolling Out For Mass Production & Coming To Intel & AMD’s Next-Gen Platforms Soon!   
    The 64-bit DIMM format has been with us since the mid-90s, when SDRAM was released, to match the interface width of the Pentium's FSB and allow more memory capacity on a single module instead of using matched pairs of the older 32-bit SIMMs.
    Since then, the default memory channel width has remained 64 bits. Going wider would not be beneficial, particularly as clock rates go up and signal tolerances get tighter. Intel actually already tried to shift to a narrow memory bus by using RDRAM with the first P4 systems, and it failed mostly due to bad business practices. Now, with DDR5, the whole industry is finally ready to go narrow and fast. Let's hope AMD can keep Infinity Fabric scaling up with the future DDR5-based Zen CPUs.
  25. Like
    PianoPlayer88Key reacted to PeachGr in Mainstream DDR5 Memory Modules Pictured, Rolling Out For Mass Production & Coming To Intel & AMD’s Next-Gen Platforms Soon!   
    Latency is going up: in that advertisement it was CL40.
    And it's not easy to run DDR4 above 3600MT/s even on the latest generations of CPUs.
    I hope the new generation of CPUs will run these easily above 5000 without crashes; that's what the ECC memory is for.