Everything posted by indrora

  1. I have a few thoughts on this, tempered by my years of tutoring college students in basic IT. Not all of this will apply to an 8th grader; use your best judgement. IMNSHO, the essentials are media literacy ("is this real? How do I trust it?") and core skills like file management and organizational skills. At one point, I developed a checklist of "essential shit you will need to know in order to be competent in the world of computers":
     * Identify the following ports: USB (A, B, C, Micro/Mini B), Ethernet, HDMI, VGA, SD/"removable media".
     * Discuss at some level the difference between a desktop application (e.g. Word/Excel/VLC) and a web application (Google Docs, Office Online, Gmail, Twitter, etc.).
     * Identify and discuss the differences between RAM and storage, "fixed" disks vs. portable storage, etc.
     * Troubleshoot "is it me": is this website slow, are all websites slow, is my computer slow? I constantly hear people complain that their computer is slow because it takes a very long time to load one specific webapp, such as a university's system being inundated by every student trying to register for the same 10 classes at once.
     * Understand the importance of updated software (security updates, at least) and of occasionally rebooting your machine.
     * In some form or fashion, be able to store, long-term, information you don't want to remember right now ("exobrain" type work) -- I don't care how. It can be Google Keep, OneNote, Notion, a text file kept on the desktop. Whatever. Some form of digital notebook. I will occasionally accept bookmarks.
     * Print/Save to PDF.
     * Use the Internet Archive at some level. I specifically start with "What if I wanted to see the front page of CNN on September 11, 2001?"
     * Keep two-place backups. Easier if they're a Mac user -- I just tell them to get a 2TB external SSD and the Apple USB-HDMI dongle and keep it plugged in when they go to bed. Windows users I generally point to "make regular backups onto a flash drive." I'm not going to go into 3-2-1-rule type stuff.
     * Understand password managers and how they work.
     * Touch type. This is, no shit, one of the most powerful things I have gotten people to learn. Being able to transcribe something you see into words is supremely useful.
     * Manage files: copy/paste/move/rename/etc. Save your work.
     * Cite sources digitally (even if it's just a URL).
     That's most of it; the ends are the most important, IMO. People would come to me and go "My phone is so slow," and I'd look at it and find that the reason wasn't that they had too many pictures, as they would claim, but that they had a ton of backgrounded apps. I'd ask "When's the last time you rebooted your phone?" and they'd look at me like I had grown a third eyeball. There's some good reading to be had on this topic, too, if that's your thing:
     * https://www.nngroup.com/articles/computer-skill-levels/ -- an older, but still somewhat relevant, study on how well people handle various tiers of tasks with computers.
     * https://www.sciencedirect.com/science/article/pii/S2405844023020856 -- a review of mechanisms for teaching and quantifying digital literacy (open access, even!)
     If I were to design a "thing" that covered a lot of these, it would be built around bulk packs of small flash drives. I don't know what your budget looks like (knowing America: bad), but you can get massive quantities of flash drives cheap now that are good enough for keeping high-school work on.
  2. Apple beat you to the punch: there's an Apple-made version of Wine/Proton in the Game Porting Kit. One of the devs behind [a switch emulator] has released Whisky, a wrapper around the Wine environment that Apple built: https://getwhisky.app/ It's pretty good. I stopped using the "real" macOS install of Steam and just run it on top of the GPK.
  3. If you don't have one, it should return an empty string. This could be as simple as the software licensing service being stuck for some reason, or as hard to diagnose as disk corruption. It could be something weird with a Microsoft account misapplied during setup, or it could be something like TronScript, Shutup10, or any of those "anti-bloatware" tools breaking something as collateral damage. By far the simplest option, without a lot of possibly-relevant information, is to back up everything you care about, then pave the install clean with media you get from Microsoft directly.
  4. If you do need to retrieve the Windows key burned into your motherboard by a major OEM like HP/Dell/etc., you can do so from an admin PowerShell:

     (Get-WmiObject -Query 'select * from SoftwareLicensingService').OA3xOriginalProductKey

     If there is one baked in, you'll see it here; if there isn't, you'll get nothing. Seconding, however, the comment to get clean, fresh Windows media. If you're uncertain and need some guidance, https://rufus.ie/en/ -- Rufus can guide you through it and download known-clean installation media (the download option is enabled if you allow it to check for updates).
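     If you're on PowerShell 7, where Get-WmiObject has been removed, the CIM cmdlets should do the same job -- a minimal sketch, querying the same class and property as above:

         # Same query via CIM; works in Windows PowerShell 5.1 and PowerShell 7+
         (Get-CimInstance -ClassName SoftwareLicensingService).OA3xOriginalProductKey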
  5. Knowing which module caused the failure is helpful, but I'd start by trying another drive if I could. It doesn't need to be fancy, just known-working: I've had drives that write fine but then, kaboom, fail on read, and that can cause all sorts of Fun Problems. What version of your system firmware are you on? Asus says the most recent release is 1801 (released 2023/12/22), and support for the 14th-gen i9 was added in 1303 (released 2023/08/25). Depending on your motherboard's manufacture date, that might have something to do with it: 1402 specifically has an update for 14th-gen processors, and a previous release mentions specific support for some DDR5-7200 RAM kits. It's been a Hot Minute since I did memory tuning, but I do know that the more sticks you cram in, the lower the clock speed needs to be. If my memory is correct, clocking it down to 4200 MHz might be the best option, at least for testing. This *is* a validated kit, however, so that shouldn't have much to do with it. I would suggest testing a pair of sticks just to try.
  6. A friend of mine recently asked for recommendations on a new PC build. I mentioned that I was just going through the LTT Secret Shopper series and I'd get them my thoughts. I was doubly impressed at the positive review from GamersNexus, whose one major complaint was about some thermal pads. That's high praise.

     I have a few give-or-take things for smaller shops. HP and the other major OEMs are monoliths with big corporate faces. Smaller boutiques, though? I look a little into them, and into their leadership. Starforge (whose dumb logo is forgivable) is very big on how they're sponsor-paired with OTK, and OTK raised my eyebrows when I saw a clip from one of Asmongold's streams that made me say "Man, I don't want to be associated with him." In the clip, Asmongold starts badmouthing some VTubers. Then I noticed the name. That's one of my friends he's talking about. That's a human being he's badmouthing: a streamer who has worked her ass off to build a positive community of people, responding to another streamer who has worked hard to have a supportive community. Even if that weren't someone I know personally, I'd absolutely consider this a black mark. On top of that, it's a shitty comment that completely ignores the point people are trying to make (it's not actually about Palworld, but about its leadership, who are hard into the NFT/crypto/AI-art world, which a lot of people aren't okay with supporting) just to hate on furries. (It's worth saying: furries make the internet go.)

     Okay, so... where there's smoke, there's fire. There are some really bad takes I don't agree with ("artists' opinions don't matter" when it comes to things like copyright infringement, for one), but okay, let's do some background research. If I'm going to form an opinion on a group of people, I should probably consider their actions as a whole. What are Asmongold and the rest of OTK like? Asmongold has some amazingly bad takes about artists and creatives... Okay, so Asmongold is a jerk; what about the others? It gets complicated. There are plenty of snapshots of various OTK members being vaguely sexist, laughing about poor people, being jerks on Twitter, having skeletons in their closets, getting banned from Twitch for a few interesting reasons, and saying racist and homophobic things. I'm only somewhat going to bring up the straight-up accusations of terrible behavior, like manipulation and gaslighting, as I don't know how much context you need, but it sure adds fuel to the flame. Then there's the... tasteless naming of the supplement flavors that many of the OTK crew bring up, one of which is named by an OTK member.

     All of this sums up to "the people who hold some level of cultural power here aren't the kind of people I want to support." I guess the final straw was reading that OTK as a group are worried that being called out for shitty behavior is somehow going to ruin streaming. Hasan had it right. I was disappointed when LTT got their time in the spotlight of unpleasant drama, but LTT took ownership and righted the ship. I dunno what OTK will do, but Starforge has them as their front-line big names. Puts a real bad taste in my mouth.
  7. Linus et al. at the end: But in all seriousness, it's not incorrect to say that MIL-STD testing is a little... "lacking". It's meant for the 95% use case, which generally isn't going to be "hucked off a Humvee onto the highway at 80mph"; it's going to be "dropped out of a second-story window" or "knocked off a road case and smushed into the corner of the carrier". As others have noted, the plywood is there to make the impact last longer: it's weirdly easier to survive hitting a surface with a little give than a perfectly rigid one, because the resistance that decelerates you is spread *over a longer period of time*, lowering the peak force. There are also other places where these are super useful:
     * Airport refueling stations (jet fuel! bad weather! carts that whip around!)
     * Dockside environments (corrosive, salt-laden fog!)
     * Industrial environments (like a place known as The Worst Place On Earth, a steel mill and radioactive, toxic tailings pond)
     Basically, anywhere you would go "yeah, that's probably dangerous for normal tech". A friend of mine bought a Toughbook competitor (a Getac) a while back and would regularly sit in the tub with it watching YouTube videos before he sold it. One important feature of a lot of these is the IP6X rating: full dust sealing. One you might not think about is salt fog protection, as seen in the Getac X600 Pro specs: in ocean environments, such as docksides and areas where saltwater fog comes in, electronics will slowly corrode themselves to death if they're not built to that level of protection. Also noted here is MIL-STD-461G -- a series of tests more akin to "a more robust version of FCC Part 15 certification", making sure that in heavy RF-noise environments there are no mission-stopping failures. Normal laptops do weird things when you drag them into adverse RF environments, like the top of a radio tower. And it's important to consider: mission-stopping failure is the key to all the MIL-STD spec stuff. If I can pull the hard drive and RAM out, slap them into a new machine of the same type, kiss any booboos that might have occurred on someone's head/foot/etc. from the fall, slap them on the ass, and tell them to get back on the field, that's super duper. The critical part of the mission is the data on that disk. The fact that I don't have to do that *every* time someone leaves their laptop on top of the jeep, only once out of every 10-ish times? That's savings in the bank right there.
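     To put rough numbers on the "longer period of time" bit (back-of-napkin figures of my own, not from any spec): the average force of an impact is F_avg = m * Δv / Δt. A 2 kg laptop hitting at 3 m/s and stopping in 5 ms against something dead-rigid sees roughly 1,200 N on average; if flexing plywood stretches the stop to 20 ms, that drops to about 300 N. Four times the stopping time, a quarter of the force.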
  8. As a developer, this sort of processor is really appealing for a few reasons once you get into more complex tasks. Anyone who works on OS-level code, browser development, or even just runs a typical Gentoo install will appreciate the high core count and high memory throughput. Why? Because it will rip through code. Much of the time, code compilation is I/O-bound, but with the sort of memory bandwidth and capacity this can handle, holding your entire codebase in memory is now possible, with enough left over to do whatever compilation you need. One of the hard parts of compiling software is linking -- taking all the intermediate compilation bits and binding them together. The issue is that most linkers are single-threaded, but newer linkers such as mold take advantage of every CPU core available (e.g., by wrapping a build in something like "mold -run make"). Like Linus said: this is targeted right at the "I need a desktop system" user for whom desktop-class parts aren't performant enough, but for whom sticking an Epyc Genoa or Xeon Gold system under the desk isn't practical. For workloads that are CPU-bound but multithreaded enough to use as much compute as you can throw at them, these are nice big hammers for a very stubborn nail. They also make sense outside the HEDT space: certain virtualization workloads will benefit from these sorts of chips, especially ones that need a few cores but access to a lot of RAM, alongside some specialty hardware like a GPU or dedicated accelerator card.
  9. Gotta give it to Jordan here:
     * Competent choice in processor, storage, GPU, RAM. I wouldn't have chosen that display, but what the heck, I'll go for it.
     * Clearly hit 100fps in Crysis.
     * Way cuter than Linus. (+1000 points)
  10. You may have actually hit the limit of what your hub/root controller can handle from a single stream. Unplug everything except the audio devices and turn their recording/playback quality down to something like "CD Quality, 44.1kHz" in Windows. What isn't mentioned is that a four-port hub has to divide its upstream bandwidth among its ports. That means that a hungry enough device (like an audio DAC or two) can soak up an entire hub's bandwidth.
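     For a rough sense of scale -- back-of-envelope numbers of my own, counting raw PCM only and ignoring USB framing overhead -- a stream's bandwidth is just sample rate x bit depth x channels. A sketch in PowerShell (the function name is my own invention):

         # Rough isochronous bandwidth of a stereo PCM stream (no USB protocol overhead)
         function Get-StreamMbps {
             param([int]$SampleRate, [int]$BitDepth, [int]$Channels = 2)
             [math]::Round(($SampleRate * $BitDepth * $Channels) / 1e6, 2)
         }
         Get-StreamMbps -SampleRate 44100 -BitDepth 16   # ~1.41 Mbps ("CD quality")
         Get-StreamMbps -SampleRate 96000 -BitDepth 24   # ~4.61 Mbps (hi-res audio)

     A couple of DACs at 96kHz/24-bit, plus their matching record paths, adds up fast against a full-speed (12 Mbps) link or an already busy high-speed hub.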
  11. You might find one listed as "junk" on Yahoo! Auctions Japan. You'll definitely pay a premium getting it over to the States, though. SSDs still cost "relatively" the same on the Japanese market, but yen-for-yen you're getting WAY better bang for your buck going spinning disk. At the scale that Epson and others are selling at, spinning rust is far cheaper than SSDs, especially when you have... complicated relationships with China: Japan would much rather have something produced in Japan (as that chassis likely was) than something produced in China, unless it's going to be seen as inferior quality anyway. I'm spitballing here, but I'd bet that the included drive was also a local brand: Toshiba or HGST. A Toshiba drive would absolutely make sense given the market and the consumer cost: most of my Japanese friends and coworkers have mentioned that they would rather go for something lower-cost and slightly older than something higher-cost and top-of-the-line. Many of them have picked up the Xbox One X as it's become quite cheap and can still play many of the games the Xbox Series S can. One even congratulated me on building a sleeper PC with slightly older parts instead of very high-end parts (I had a third-gen Ryzen chip floating around, plus an RTX 2070) and went as far as to say "you should have really gone for a 1650 if you were buying parts."
  12. It's worth calling out that outlets in China are... a bit of a mixed bag at best. There are no less than three different kinds of outlet you can encounter on a daily basis, depending on where the thing was made, who it was made for, what kind of thing it is, etc. You can encounter:
     * Type A plugs (two-prong ungrounded, similar to the US non-polarized ones here)
     * Type C plugs (two round pins, aka the Europlug)
     * Type I plugs (two angled pins plus a ground pin, also used in Australia and similar countries)
     Type A is also used in Japan, and Type I looks eerily similar to the US NEMA 5-15 when plugged into the wall. It's a mad rush, to the point that a power strip might well look like this:
  13. It's been years since I actively modified my UI outside of the occasional font tweak. I stopped somewhere around the Win8 era, when I got serious about college and decided "getting things done" was more important than tweaking all the knobs. Stability over cool factor made a huge difference, especially when it comes to usability. The oldest screenshot I still have is from the laptop I started college with, a hand-me-down Dell that I ran Debian on. Later, in 2012, I picked up another hand-me-down that I could do more work on -- it had, I think, four total gigs of RAM. And here's a ca. 2012 shot showing Rainlendar on a different laptop I had at the time. Nowadays, my customizations are minimal at best: Oh My Zsh's nanotech theme on Unix boxes and the occasional UI tweak here and there. Customization for me now mostly involves stickers, which are a constant collector's item, and things like keyboards. I have something in the ballpark of 250 stickers -- around 80-100 unique ones, if I had to wager -- in the collection I apply to things. As for keyboards, here's one half of my daily-driver Ergodox:
  14. A bit of golfing: barring DOA drives, what's the fastest you've had a hard drive die? Add a point for every month the drive survived. Subtract 2 points each for:
     * High capacity (8TB+)
     * High speed (10kRPM)
     * Enterprise (Seagate Exos & Enterprise Capacity, WD Gold, HGST Enterprise)
     * Drive eaters (controllers that you discover cause drives to fail)
     * Cascade failure (one drive failing causes another similar-age drive to fail due to increased usage)
     * "It's new, I swear" ("new" drive discovered to have had >1k hours on-time before you got it)
     * SMART full test said nothing was wrong, but the read error rate kept going up
     Subtract 4 points each for:
     * Firmware fuck-up caused the drive death (for instance, HP enterprise drives)
     * Explosive and otherwise unusual failure modes
     * Data recovery service discovered mysteries
     * Monkey's Paw/Butterfly Effect tier shit (environmental change causes a cascading failure)
     * Vendor says "No Problem Found"
     A hole-in-zero is under 1 month before failure, but not a DOA drive. Negative scores win by default. Standard par: 36 points (3 years of service). Business par: 48 points (4 years of service). This came up recently because I had a Seagate IronWolf Pro go from 0 bad sector reads to every sector reading badly in a span of 6-8 months; I just sent it in today to be RMA'd. A 10TB Seagate IronWolf that survived about a year and a half nets me a wonderful 16 points.
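     If you want the computer to keep score, here's a quick sketch in PowerShell (the function name and penalty labels are my own invention, not any standard):

         # Score one drive: +1 point per month survived, -2 or -4 per penalty listed above
         function Get-DriveGolfScore {
             param(
                 [int]$MonthsSurvived,
                 [string[]]$MinusTwo  = @(),   # e.g. 'HighCapacity', 'Enterprise'
                 [string[]]$MinusFour = @()    # e.g. 'FirmwareFuckup', 'VendorSaysNPF'
             )
             $MonthsSurvived - 2 * $MinusTwo.Count - 4 * $MinusFour.Count
         }

         # The 10TB IronWolf above: ~18 months, minus 2 for high capacity = 16 points
         Get-DriveGolfScore -MonthsSurvived 18 -MinusTwo @('HighCapacity')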
  15. Once upon a time I heard the following phrase: (This also implanted in me the concept of "Open Sores", which is what you encounter when dealing with Open Source software.) While you can go and do this with TrueNAS Core and the like, unless you're good at it, you're going to spend a lot of time figuring out what you want and need. It is 100% true that I can build a storage server with more density for the same price... if my time is worth nothing. At my salary, that thing would cost over $30K more, simply from the time I would have to put into it. The fact of the matter is that Jellyfish devices come with three things that are worth the cost:
     * An out-of-box experience that starts with "plug it in" and ends, under five steps later, with "dump content onto it"
     * Support, with warranty service, for when things inevitably go south
     * Out-of-the-gate integration with the common tools, without hassle
     And that's where you're paying for the decade-plus of experience of the team that puts together the Jellyfish hardware, software, and integrations. In the wise words of a consultant friend of mine:
  16. After working in The Biz for a while, I have some knowledge of why. Any organization inside a software development house will function like a bureaucracy; every organization eventually falls into this, despite hopes of pseudoanarchy. In the case of QA, testers are often left to their own devices. This "just... tell us what's wrong" approach means there's a disconnect between developers and testers. From my understanding, Microsoft started encouraging its developers to test releases through some form of internal "insiders program". We know this sort of thing exists because of folks like Jen Gentleman talking about it, but also from things like BuildFeed (RIP), where we saw a lot of internal branches; it's now well documented over on Wikipedia. This also means that broader bugs in Windows are caught by the developers before more users see them.

     What this means is that Microsoft has shifted how it tests Windows. Why? Strangely, because having independent QA teams leads to a horrible pattern called Tester-Driven Development. Having a QA team that exists only for QA means that development focus shifts constantly to its beck and call. It also means that advancement within the QA team is almost entirely driven by your ability to spit out bugs for developers to fix... even if those bugs are meaningless or really caused by a deeper problem. This causes a lot of surface-level problems to get patched over (we've seen some of those throughout Windows' development history) and doesn't give developers enough time to go back and find root causes.

     Now, that's not to say that bugs don't happen, but here's what it takes for a bug to make it out:
     * It has to make it past the development team actively working on it (ballpark 20-30 people)
     * It has to make it past the "Canary" channel (nightly builds, probably 200-300 people)
     * It has to make it past the "Selfhost" channel (builds that have been approved by those 200-300 people)
     * It has to make it past the "Dev" channel, the first time it's seen outside of Microsoft (likely 3-4k people)
     * It has to make it past the "Microsoft" channel, where it's rolled out across workstations inside Microsoft (50-100k at minimum)
     * It has to make it past the "Beta" channel, the first time most consumer Insiders get to see it; this is the most unstable channel most users will touch (ballpark 5 million people)
     * It has to make it past the "Release Preview" channel, the last channel before it goes live to the rest of the world (ballpark 3-4 million people)
     If you're curious how many people it takes inside Microsoft to get a thing out the door, read the blog posts "How Many Microsoft Employees Does It Take to Change a Lightbulb?" by Eric Lippert, a longtime Microsoft developer, and "Thinking through a feature" by Raymond Chen, one of the oldest developers on Windows. Both are probably some of the best looks into what it takes to get a change into Windows; change "feature" to "bugfix" and the logic still holds.

     And yet bugs still exist in Windows. So, how bad does a bug need to be to get squashed? Let's assume a developer is worth, ballpark, $100/hr, and it'll take 5-6 developers to get a fix written, tested, and reviewed before it goes to Canary. That's between $500-600 per hour of development time, give or take, with a typical bug fix taking between 100-200 hours to properly diagnose and triage. That's $50,000-$120,000 in developer time for a single bug -- not including the time of the PM, and God help you if there are localization problems.

     Now, how many users does that bug affect? Let's assume there are 800 million Windows users at any given time (since that's the last number we've gotten out of Microsoft). A bug that hits 100,000 people at a time touches only about 0.0125% of users. That's such a small fraction of people that it isn't worth the time (shy of someone getting a real bug up their ass) to go fix it: it's spending six figures on a thing that affects 0.0125% of the population when that same money could be spent dealing with a bug that affects 1,000,000 users (0.125% -- ten times as many).

     The other issue is that to fix a bug, they have to figure out which of any number of categories it falls into. These are all real situations that I know have caused intermittent bugs:
     * A bad version of the nVidia graphics driver that only gets shipped to users running older Quadro cards
     * A CPU instruction that isn't interpreted correctly, so a result gets shoved into an off-by-one error
     * A race condition caused by a high-speed network being just right, where a coil of 100ft of Cat5 sitting under someone's desk causes two threads to lose the race at the wrong time
     * A buggy USB3 controller that was only used by one vendor for one generation of machine before being replaced due to a bug in the actual silicon, so the issue only occurs on that one specific generation of HP EliteBooks
     * Lower-quality HDMI cables picking up interference during display EDID reads, making the display do something weird because Windows trusted the EDID values a little too much
     * A configuration that was otherwise uncommon but promoted on some blog as a "really cool trick", which now causes people to lose their data because someone tried to be clever

     Lastly, the developers often go in blind trying to figure out what's going on. They get a vague "when I click the Start button 1,000 times, the 1,001st time it doesn't render right." There could be any NUMBER of things that happen between 1,000 clicks of the Start menu: drivers can be updated, services stopped and restarted; is this one single sitting, did the monitor turn off, is there actually a monitor or is this over RDP, etc., etc. Winding back: QA inside Microsoft was, likely, not actually all that useful, and was getting in the way of shipping actual fixes.
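     A quick sanity check on those figures in PowerShell (my ballpark numbers from above, nothing official):

         6 * 100 * 200               # worst-case fix: 6 devs x $100/hr x 200 hrs = 120000 (~$120K)
         100000 / 800000000 * 100    # share of 800M users a 100K-user bug touches = 0.0125 (%)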
  17. Mitsubishi Rayon 2mm-thick acrylic. Having worked with that exact acrylic, it's a fair shade above the typical stuff. It's used in large-scale aquarium tanks and museum displays.
  18. Saw this as I was reading Twitter: http://www.shop-siomi.com/shopdetail/000000000043/001/X/page1/recommend/ -- an ATX case, made in Japan, that costs over US$450 alone, plus the cost of getting it out of Japan. Tired of those weakling screw-in standoffs? How about standoffs that can probably hold your motherboard so rigid a 3lb air cooler can't weigh it down. GPU flex? Not on my watch! This absolute UNIT of a case can sport a combination of disk layouts with up to 43 small-size SSDs shoved into it, can hold up to nine 5.25in bays (hot-swaps for DAYS), and has mounting points for radiators like you wouldn't believe. Interestingly, they decided that a reset button wasn't useful, given the stability of modern computers, so that's gone. But the adjustable LED brightness is something that would be seriously cool to see elsewhere in case designs. Oh, and it can come with casters... for less than Apple's casters cost!
  19. I'm going to have to disagree with Linus here. The Athlon 3000G wasn't built for Gamers™; it was built for web browsing and light day-to-day tasks, and it aims at the market that Pentiums and i3s aim for. For reference, a good friend of mine built a machine around the 3000G. The requirements?
     * Runs a browser, at least somewhat performantly
     * Handles Word/Excel/PowerPoint
     * Maybe runs Minecraft?
     * Fits in a mini-ITX case -- this case, to be exact: https://www.amazon.com/WIN-150W-Mini-ITX-Black-BQ656T-AD150TB3/dp/B01LVV6WVU
     That system is happily cranking along as the daily driver of a PhD student who has, on average, 20-30 PDFs open at a time. They got the case for free, scrounged up $150 in parts, and even got a used M.2 SSD from a friend, adding it on top of the 2TB HDD they had crammed in. What Linus and friends here did was:
     * Intentionally miss the point of the 3000G
     * Kneecap the system with a card that was slow when it came out and is hella slow compared to the graphics baked into the chip
     * Compare it, even on the integrated graphics, apples-to-apples with the mid-tier RX 570 (they could have gotten supremely better performance out of a pair of 8GB sticks)
     * Go "LOL AMD WHY???? "
     I'd like to see a more reasonable "building a budget system for the non-gamer" -- your mom doesn't need a 2080 Ti, or even an RX 570.
  20. Huh. Here's hoping Asus continues this trend. Their ROG Strix XG49VQ ( https://www.asus.com/us/Monitors/ROG-Strix-XG49VQ/ ) has 12bpc 4:2:2 YCbCr support and is sub-$1000, though it only does 400 nits (which I actually prefer!). Good job, Asus -- you're helping push down the cost of good HDR content.
  21. I'm going to guess that the cables Linus is using are Corning Thunderbolt optical cables. Specs from Corning: https://www.corning.com/optical-cables-by-corning/worldwide/en/products/thunderbolt-optical-cables.html Get them from Newegg: https://www.newegg.com/black-corning-32-81-ft-thunderbolt-cable/p/N82E16812795001 TL;DR: they're balls expensive but come in 100ft lengths if you really need them. Pair those with something like a Thunderbolt adapter from StarTech: https://www.amazon.com/StarTech-com-Thunderbolt-Adapter-DisplayPort-TBT3TBTADAP/dp/B019FPJDQ2/ Super cool that it all Just Works™ out of the box.
  22. Oh man. Brings back some memories. Definitely gotta give a few different classic games from the mid-aughts LAN days a shot:
     * Serious Sam
     * ARMA
     * Battlefield 1942
     * Portal
     * Far Cry
     * GTA
     Just to name a few. Portal was way on the high side of that, but oh boy, if you had a couple copies of Serious Sam kicking around, you were the go-to friend at LAN parties.
  23. You're very much not its target audience. This is for the person with more dollars than sense. This is the "I paid $fuckyou for my computer because I can" crowd, a few steps above the person who buys a Porsche 911 Sharkwerks to daily to the grocery store because fuck you, then takes it to Motorama and puts a little "RARE, ONE OF A KIND" plaque on it to show that they pay for Twinkies with a platinum gold card. Where the typical system-integrator machine is a Mazda Miata, this is a custom-baked supercar that costs more than the GDP of a small country, because my pants don't bulge enough and I need to compensate at the next Fortnite invitational.
  24. I'm not talking about the camera trick that produces funky-looking images; I'm talking about the HDR we get in games. (It also doesn't inherently need multiple exposure samples -- you can do a certain amount of HDR with RAW DNG and compensation techniques, a cheater HDR some photographers use when handling moving subjects and wanting to avoid ghosts.) No, I'm talking about games supporting HDR. Some of the first were in Unreal 3 and an early demo (HL2: Lost Coast), and a lot of the work was done by keeping two textures around and, in other areas, applying a bloom shader to the bright bits of the scene. It wasn't real HDR, but it was close enough. For an interesting look at how Valve cheated, they talked about it at SIGGRAPH 06, but the basic gist is that they weren't doing the raytraced HDR you would want to do (and which RTX/DXR enables today); instead, they pre-baked all their HDR stuff before rendering it fully. The long and short of it is thus:
     * Microsoft released DXR into the world
     * nVidia then released their RTX cards, which have hardware support for DXR
     * A few games kinda-sorta rushed DXR support in
     * nVidia announced they're kinda supporting DXR on GTX 10-series cards through CUDA cores, but with a significant performance hit
     * Unreal Engine is slowly adding baked-in support for DXR
     * People got angry that there's a performance hit on GTX cards
     * Everyone completely lost their minds, and the rational folk in the room applied Hanlon's Razor
     Look where we are now. PhysX needed special physics cores that nVidia later integrated into GPUs and then replaced with CUDA GPGPU kernels -- in fact, GPU support wasn't added until much later. Only two games supported the accelerator card anywhere near its launch, and when GPU support was added (which required quite high-spec cards), only five games supported PhysX on the GPU. Was PhysX a scam too? You could later run the physics simulations on the CPU, but your framerate would tank. If we really apply the logic that's been going around, the Voodoo2 cards were a scam! There were a handful of games that depended on the Voodoo2 in order to provide 3D-accelerated graphics and only later got software rendering support. Hell, we can even go back as far as the Intel 8087 math coprocessor, since there were games that demanded it but worked fine after patching that out... with tanked framerates, because of the integer approximations. Hmmm... it's like we've been down this path before.
  25. You're, uh, new to this whole "cost of new tech" thing, aren't you? The first generation of something that hasn't really been done before (remember that this is not GPGPU -- this is not CUDA -- this is adding special silicon for things like tensor cores and ray tracing) will be more expensive. This isn't re-purposing texture shaders for something else; this is literally dedicated, first-generation silicon. There's R&D cost that has to be recouped somewhere, and that cost goes to you, the consumer. When CUDA came out, the cards cost more. Hell, dedicated compute cards like the nVidia Titan still cost a pretty penny. I remember when you had to make sure the game you were buying supported your sound card. Nothing like getting home from Fry's only to realize that your fancy new game (now out of the shrink wrap) doesn't support AdLib except for a few crappy MIDI renditions of the soundtrack, because it was built for the SoundBlaster's PCM channels -- so you go and buy a SoundBlaster, only to realize it's the same MIDI renditions with slightly better-sounding instruments. Today, there's little difference: software support is still getting there, and the first few generations of software and hardware are going to be... spotty, if not a little lacklustre. Remember when HDR was new, and we all oohed and aahed when it was all over our faces, only to realize it was just bloom and some subjective lighting? How about when everybody realized Doom wasn't really 3D? Or when you really, honestly needed an SLI setup to make Unreal work right?