
About Bit_Guardian

  • Title
  • Birthday 1989-03-03

Profile Information

  • Gender
  • Location
  • Interests
    tech (all kinds), math, science, games
  • Biography
    I contract for various tech projects around the world. In Japan right now.
  • Occupation


  • CPU
  • Motherboard
    Macbook Pro
  • RAM
    16GB DDR3L 2400
  • GPU
    GT 750m
  • Storage
    1TB SSD


  1. And if you look at system requirements scaling, system cost is still DOWN vs. 2012 after accounting for inflation. However, RAM prices are going up now thanks to the shortage.
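A quick sketch of the inflation-adjusted comparison being made in that post; all prices and the inflation figure below are made-up illustrative numbers, not actual system prices:

```python
# Compare a current nominal price against a 2012 price by deflating
# it back into 2012 dollars. All numbers here are hypothetical.
price_2012 = 900.00             # hypothetical 2012 system price
price_now = 1000.00             # hypothetical current nominal price
cumulative_inflation = 0.18     # assumed cumulative inflation since 2012

price_now_in_2012_dollars = price_now / (1 + cumulative_inflation)
print(round(price_now_in_2012_dollars, 2))  # 847.46 -> cheaper in real terms
```

With these made-up figures, a nominally higher sticker price still comes out below the 2012 price once deflated, which is the shape of the argument being made.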
  2. Strawman. Scaling, please. When you take into account the memory requirements of operating systems and apps going back that far, the actual system cost is pretty much level for comparable requirements. And no, HPE prices differently from country to country, much the way Apple does, even after accounting for taxes and shipping costs. Utter hogwash. The cost of your food alone is double what it is in Australia. The only thing you have going for you is property prices in urban areas.
  3. You acquire it with whatever HPE's markup is. I'm not arrogant at all. I've shaved 30% off my company's AWS costs in the last year by overhauling their supply-chain economics and scaling. It doesn't take a genius to know that this is an economy of scale, cradle to grave. You don't have the scale. You're based in New Zealand, for crying out loud. And the chip sizes themselves have remained mostly static. No, defect rates once mass production starts are pretty much in line with the last generation, not miles ahead. No one goes to full production scale without a die-killing defect rate lower than 10%. Yes it does: more steps = more points of failure via more sources of error, which compound multiplicatively, not linearly. Nope, my dad built a couple of family systems for $1000-1500 in my early years. Components were certainly not wildly expensive even after you account for inflation. Of course, the economies in the Oceania region being the unstable laughing stocks they are may have made your mileage vary. Hah! It is true. Attempt to prove otherwise. I've plenty of material from Samsung proving you're full of it. Then again, let's just look at HBM vs. GDDR5 economics in terms of both density and performance. You're out of your mind.
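The "compound multiplicatively, not linearly" point can be sketched numerically; the per-step yields below are illustrative assumptions, not real fab data:

```python
from functools import reduce

def overall_yield(step_yields):
    """End-to-end yield is the product of per-step yields,
    so every added step compounds the loss multiplicatively."""
    return reduce(lambda acc, y: acc * y, step_yields, 1.0)

# Hypothetical per-step yields: fab, stack, interposer, PCB attach.
print(round(overall_yield([0.95]), 3))                    # 0.95
print(round(overall_yield([0.95, 0.97, 0.98, 0.99]), 3))  # 0.894
```

Even when every extra step individually yields 97-99%, the end-to-end yield falls noticeably below the single-step case, which is the compounding being argued about.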
  4. Not fabbed in a single step! It's just like HBM/HBM2/HBM3 production. 4 chips get fabbed on at least 1 wafer, then a separate process stacks them together, a separate process adheres them to an interposer, and a separate step adheres that to the DIMM PCB. Let alone testing between each step. No, it was way easier back then. Manufacturing difficulty (not complexity, which always goes up) has gone up, not stayed level. No, you don't have the scale to get that kind of discount. Cray, HPE, and Dell maybe, but not you.
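The multi-step flow described in that post (fab, stack, interposer attach, PCB attach, with testing between steps) can be sketched like this; the step names and test gates are a simplification of the post's description, not an actual production spec:

```python
# Each step is a separate process, and a failed test at any gate
# scraps the part -- more steps means more places to lose it.
STEPS = [
    "fab DRAM dies on wafer",
    "stack 4 dies with TSVs",
    "attach stack to interposer",
    "attach interposer to DIMM PCB",
]

def assemble(passes_test=lambda step: True):
    for step in STEPS:
        if not passes_test(step):
            return f"scrapped at: {step}"
    return "good part"

print(assemble())  # good part
```

A single-step flow has one gate to fail at; this flow has four, which is the "manufacturing difficulty has gone up" argument in miniature.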
  5. No, 148 in total. There are now 4 chips per visible package on that 128GB DIMM. The packages are each 4 chips stacked now. Yes, it's wildly different thanks to TSV usage as well as the new load-reducing tech being used vs. the very simple register-and-buffer tech of a decade ago. They're 4-6 grand now.
  6. Eh, you can make a solid argument for their push to shove Boeing out of European airline markets via absolutely ridiculous lawsuits and fines as a money grab, but most of their stuff has been pretty reasonable. I still find the fine over the Intel compiler in Cinebench to be ridiculous, since no one ever provided a shred of evidence that Intel at all incentivised Maxon to use ICC as the core compiler of their software, but that's a separate rant. By having legacy support on SoCs from 4-5 years ago. Legacy gets abandoned over time; it's just how software and hardware work. You don't expect new Java apps to be developed against the Java 1.6 standard, do you? No, legacy is Java 7, mainstream production is 8, and really new apps should be developed on Java 9. We shouldn't be catering to ultra-low-frequency bands outside of very remote areas with specialised devices. For instance, we still have satellite phones as the recommendation for the Australian outback and certain regions of New Zealand. There are parts of the U.S. (Death Valley, for example) so remote that even the 400MHz range isn't tenable, so why should it be forced into new products when its usefulness is practically gone? Essentially, governments are awful about stifling technological progress in the name of legacy support.
  7. Doing the same thing ONCE, from actions in the mid-2000s under leadership long gone. And Qualcomm's got 4 more lawsuits to contend with: 3 regulatory actions in other major markets, and a class action out of the E.U. by Intel, Apple, and Samsung, I believe, asking for close to 6 billion in damages.
  8. Thank God. One lawsuit down, 4 to go, between Chinese, South Korean, and U.S. regulators alongside the combined Intel, Apple, and Samsung lawsuit against Qualcomm in the E.U. over anti-competitive practices. People keep saying Intel's still bad (based on what happened in the mid-2000s under management long gone), but everyone seems to forget Qualcomm's been led by the same corrupt chief executives for nearly fifteen years.
  9. Not really. It's at the heart of z13 and z14 mainframes, as well as Oracle/Fujitsu mainframes. I think there's close to a terabyte in each flagship model right now. I didn't say 3DXP is DRAM. It does have a DRAM cache, though, for the dedicated SSDs. And Optane is definitely not niche: AWS EC2 Gen 5 instances optimised for storage and databases are backed by it at the caching layer.
  10. In the case of NAND, the controllers have always come from Micron, since the flash and controllers come part and parcel. Intel still makes all of its own for the Stratix products, Curie, Edison, etc. The MCDRAM for Xeon Phi is in fact an Intel-custom HMC design built in-house. Their 3DXP is also in-house, along with the DRAM for its controller, which is why Micron hasn't yet launched its own. The two have also broken off their partnership going forward.
  11. Not remotely representative anymore, especially with new TSV DIMMs in production with 2-4x the number of chips stacked up along the walls in the same places. http://www.samsung.com/semiconductor/dram/module/M386AAK40B40-CWD/ And no, depending on your config, you can fit 2 EPYC or even 4 Xeon Scalable CPUs in a 1U with 32-48 DIMMs alongside 2 Tesla V100s or 16 NICs. The demand in volume of chips has easily quadrupled thanks to Samsung's and Micron's innovations in this space (both offering 128GB 4-Hi stacked DDR4 DIMMs). I'm sorry, but you're grasping at straws here. Sure, overall demand in DIMMs may have only grown 14% in the last 5 years, but demand in chips has quadrupled. The shortage is far from artificial if we take that into account. It's not like Samsung, Hynix, and Micron have quadrupled their wafer processing in that same time span. Far less expensive than today's 128GB DIMMs (referenced in the link above).
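The DIMMs-vs-chips demand arithmetic in that post works out as follows; the 14% DIMM growth and the 4x chips-per-DIMM factor are the post's own figures, and the result is simply their product:

```python
# If DIMM volume grows 14% while chips per DIMM quadruple
# (4-Hi stacking), chip demand grows by the product of the two.
dimm_demand_growth = 1.14     # +14% DIMM volume over 5 years (from the post)
chips_per_dimm_growth = 4     # single-die -> 4-Hi stacked packages

chip_demand_growth = dimm_demand_growth * chips_per_dimm_growth
print(round(chip_demand_growth, 2))  # 4.56 -> ~4.6x the chips, same wafers
```

So modest growth in shipped modules can still mean several times the die demand, which is the post's case against the shortage being artificial.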
  12. A potentially illegal leak, btw. Careful, you're in a tech forum filled with SJWs and trainees. You could trigger the Occupy Wall Street crowd.
  13. I think they're a bit delusional on volume production by Q1 2020, but best of luck to them.
  14. Marketshare is only measured on the open market. I don't think you remember how many of its own products Intel sells using its own memory, a figure no one bothers to check against the market.