
tim0901

Member
  • Posts: 765
  • Joined
  • Last visited

Reputation Activity

  1. Agree
    tim0901 got a reaction from leadeater in Yet another German government vows to abandon Windows.   
    Orgs are really the reason why Win 10 still has such a big market share - most simply haven't moved across yet. Even if you ignore the fact that Win 10 only has 18 months of support left, it's only within the last 6-12 months that many admins would have even started to consider moving to 11 at all, simply because of its age. New versions of Windows have a reputation for being buggy, with all sorts of weird software quirks that you just don't want to deal with at scale (yes, Win 11 mostly avoided this, but many admins have been burned too many times), and you can mostly avoid that by letting the OS mature for a year or so. And that's despite the fact that Win 11 has some really cool features for sysadmins that very much incentivise them to upgrade.
     
    There's also a number of people hoping that they can jump straight to Windows 12 - thanks to all the rumours that have been floating around - just like how many orgs went straight from Win 7 to Win 10, but how many orgs that actually represents, I've no idea.
  2. Agree
    tim0901 reacted to StDragon in Yet another German government vows to abandon Windows.   
    When dealing with thousands of machines at an org, you need a platform that can be centrally managed by an IT department for policy enforcement (GPOs), patching, provisioning, and software deployment. Windows is "easier" in that there's a whole lot of official documentation with Microsoft backing it, along with a 3rd-party ecosystem.

    With Linux, you're relying on a vendor's implementation of it for support - say, for example, a virtual appliance or a packaged solution with their lifecycle support behind it. Or you just run far fewer Linux servers, with dedicated admins maintaining them.

    I wouldn't say you can't run an entire org on FOSS; rather, it's the devil you know vs the one you don't. Often it's also the lack of Linux admins in the marketplace (Windows admins are cheaper).

    The irony is that if the world changes to Linux from Windows, it's because they will run a Microsoft flavor of Linux. 😂
  3. Agree
    tim0901 reacted to starsmine in Are you getting Deja vu? 14900KS releases and breaks the clock speed record at 9.1GHz   
    I can never hate on the KS versions of chips. I think they're just dumb fun. Negative opinions on them always seem to miss the point hard: they're not there to try to beat Ryzen, they're just there to see how hard you can push the silicon. Competition literally does not matter here. I don't even understand the "don't recommend" statements - like that needs to be explicitly said. No one asking what PC to buy is even looking at that budget, the same way you don't recommend a Porsche 918.

    Like a Dodge Demon Hellcat: stupid fun, completely unusable.
  4. Agree
    tim0901 reacted to Fasterthannothing in Glassdoor adding real names without consent   
    Umm, that literally makes Glassdoor a pointless product. I wonder if some large company was getting bad reviews and wanted to pressure Glassdoor. Maybe everyone should drop a review of Glassdoor on its own site.
  5. Like
    tim0901 reacted to Itrio in We Downgraded all our PCs to Prove You Don’t Need a New One   
    I wanted to point out that at no point during the video was it addressed that moving a Windows install between CPUs, GPUs and other hardware can cause a LOT of the complaints I heard from the subjects in the video. Depending on what the source drive was originally installed on, I've had some weird issues that stemmed entirely from just "slapping my old drive in the new box", whereas a fresh Windows installation experienced none of them.
     
    You mentioned NVMe support and your ability to just swap drives at 2:54 in the video, but make no further mention of it - and then have the subjects complaining about issues they experienced, all of which could be attributed to, or at least impacted by, an unclean Windows installation undergoing a significant change in hardware configuration. If the drive came from another similarly spec'd HP or Dell prebuilt, sure, that's acceptable in most cases from an IT perspective - just not in this use case, I fear. For context, I recently upgraded from a 5800X to a 7900X, moving motherboards and RAM architectures in the process (AM4 to AM5, and DDR4 to DDR5). As an avid VRChat player, the same world would net me 23-26 FPS on the same 3070 with my reused Windows install, and almost 40-50 FPS on a fresh install. A clean install makes a world of difference after significant hardware changes.
     
    My question: was any consideration given to the significant change in hardware for the pre-existing Windows installations used in this experiment? (Understandably, this wasn't a Labs thing and more a proof of concept, but I'm curious.)
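    One way to see the leftover state such a swap leaves behind is to list device entries for hardware that is no longer present. A minimal sketch, assuming a Windows 10/11 build recent enough that pnputil supports the /enum-devices and /disconnected switches (that availability is an assumption, not something from the video):
```python
# Sketch: print Plug-and-Play device entries whose hardware is no longer connected.
# These tend to pile up on a Windows install that has been carried across a
# motherboard/CPU swap. Assumes pnputil supports "/enum-devices /disconnected".
import subprocess

result = subprocess.run(
    ["pnputil", "/enum-devices", "/disconnected"],
    capture_output=True, text=True, check=False,
)
print(result.stdout)  # leftover entries from the old platform show up here
```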
  6. Agree
    tim0901 reacted to GamingAndRCs in We Downgraded all our PCs to Prove You Don’t Need a New One   
    I'm glad to see someone looking into this! People still think you need a gaming PC for basic computing and it drives me CRAZY.
  7. Like
    tim0901 got a reaction from WhitetailAni in Windows 11 24H2 goes from “unsupported” to “unbootable” on some older CPUs   
    The reason Debian is still supported on such old systems is because that's pretty much its purpose. Its tagline is that it is the Universal Operating System. If you go and look at Ubuntu, you'll find no support for such ancient architectures - just x86_64, ARM and PowerPC. Saying "Linux doesn't drop hardware support" is wrong. Debian doesn't drop hardware support. Other distros are more than happy to.
     
    It's also a lot easier to justify supporting older architectures for longer when more of your user base uses it. Debian is used a lot in embedded systems that would cost huge amounts of money to replace or upgrade. Windows... isn't. You don't find CNC machines or MRI scanners running Windows - you just attach them to a Windows machine via USB/ethernet so regular users can use them in a familiar environment.
     
    Chances are the actual hard work of supporting these old architectures is done by the people who need it, which also makes it a lot easier to justify as an organisation.
  8. Funny
    tim0901 got a reaction from Kilrah in Windows 11 24H2 goes from “unsupported” to “unbootable” on some older CPUs   
  9. Agree
    tim0901 reacted to porina in Intel CEO Pat Gelsinger: I hope to build chips for Lisa Su and AMD   
    That applies to all fabs. The numbers have not represented a physical feature size for a long time.
  10. Agree
    tim0901 reacted to porina in Intel CEO Pat Gelsinger: I hope to build chips for Lisa Su and AMD   
    It is very much the node that is the limiting factor. They've been making great designs on a not so leading node. That doesn't make it a bad node, but when push comes to shove, that difference shows.
     
    Unfortunately I own neither a recent Intel CPU nor Zen 4, so I can't do my own testing like I did in the past. Following is Zen 2 vs Skylake: Intel 14nm vs TSMC N7. Zen 2 takes a slight lead in IPC from being a newer design, but they're not so different. Note I consider Zen 2 the point where AMD really overtook Intel in the Ryzen era, as earlier Zen was more lacking and only compensated by throwing cores at the problem.
    I can't find it right now, but I separately tested Coffee Lake vs Zen 2 at various power limits. Zen 2 was clearly superior there at normal operating points for both. IPC could be considered indicative of the microarchitecture design, and power efficiency indicative of the node (see the sketch at the end of this post). Of course, the two are related, so they factor into each other and are not totally isolated.
     
    I did that because I didn't want to end up looking up prices only for someone else to say it's different in their region. My point remains: I don't feel that buying a 5800X3D + mobo + DDR4 RAM new today makes any sense when you can buy a 7600X + mobo + DDR5 RAM for what is likely not that different a total, assuming the mobos chosen are comparable in quality. The newer platform performs similarly or better in gaming while leaving the door open to future upgrades. I did acknowledge that someone upgrading on an existing AM4 platform might see more value in the 5800X3D.
     
    The 5800X3D is still a great CPU, for gaming or otherwise. But out of today's offerings it is unremarkable and nothing special.
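    For reference, the kind of back-of-envelope IPC comparison described above works out roughly like this. A sketch with hypothetical placeholder numbers (not porina's actual results), assuming a fixed-work, single-threaded benchmark:
```python
# Relative IPC from a fixed-work, single-threaded benchmark:
# IPC is proportional to score / clock, so the ratio of (score / clock)
# between two chips gives their relative IPC. All numbers are placeholders.
def relative_ipc(score_a: float, clock_a_ghz: float,
                 score_b: float, clock_b_ghz: float) -> float:
    return (score_a / clock_a_ghz) / (score_b / clock_b_ghz)

# Hypothetical example: chip A scores 520 at 4.2 GHz, chip B scores 500 at 4.6 GHz.
print(f"A vs B relative IPC: {relative_ipc(520, 4.2, 500, 4.6):.2f}x")  # ~1.14x
```
    Power efficiency would then be compared separately, e.g. as score per watt at a fixed power limit, which is where the node tends to show.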
  11. Agree
    tim0901 reacted to Brooksie359 in Intel CEO Pat Gelsinger: I hope to build chips for Lisa Su and AMD   
    Having more options for fabs is great, especially because right now companies are so reliant on TSMC that it would be a very scary situation if something happened to TSMC or its supply chain.
  12. Like
    tim0901 reacted to Lunar River in Intel CEO Pat Gelsinger: I hope to build chips for Lisa Su and AMD   
    I hope Intel don't become a fab for AMD - not because they are rivals, but because I know for a fact that any time AMD releases a product that might not be as good as people hoped, the argument would be "well, clearly Intel sabotaged it at the fab level".
     
    But yes, we do need more foundries across the world; the reliance on TSMC is quite scary.
  13. Agree
    tim0901 got a reaction from OxideXen in OpenAI unveils "Sora." A prompt-based short video generator with amazing results   
    This is fucking terrifying.
  14. Like
    tim0901 got a reaction from Uttamattamakin in A 1 Petabit DVD-like disc has been created   
    A "single run" at CERN is about 4 years long - we are currently in the middle of Run 3 of the LHC. So... yeah they would probably store that much in that amount of time.
     
    Strictly speaking, femtobarns are a measure of area - not data. Inverse femtobarns are (roughly speaking) a measure of the number of particle collision events. They don't use bytes because events are what matter to them.
     
     
    The triggers they use are ML models running on ASICs using the data flowing directly out of the detectors, which identify whether or not a collision is 'interesting' and determine whether or not that data should be stored before it even reaches a CPU core. This is because the vast majority of collisions are completely useless. If you picture two bullets being shot at each other, we only care about the absolutely perfect, head-on collisions. The off-centre hits or glancing blows aren't interesting and so can be ignored.
     
    There's then a massive server farm that performs more in-depth analysis of the information, discarding yet more interactions. This entire process is completed in ~0.2s, at which point the interaction is sent for long-term storage.
     
    If everything was stored, the raw data throughput of the ATLAS detector alone would be approaching 100TB (yes, terabytes) per second. After all the layers of processing, less than 1 percent of that data is stored. But that's only one of the dozens of experiments being performed at CERN.
     
    Data that is kept is stored on magnetic tape, but this is mostly for archival purposes to my knowledge. I believe fresh data is stored on hard disks so that it can be easily transmitted. They don't send anything out via tape anymore as far as I'm aware - certainly my lecturers never received any. They do sell old tapes in the LHC gift shop for 10 CHF though!
     
    Big science experiments generally don't send data via the public internet. They use academic networks, or NRENs, which are usually completely physically separate from public internet infrastructure.
     
    In Europe these have all merged to form one massive network called GÉANT, although national ones like the UK's JANET still operate as subsets of it.
     
    GÉANT has a throughput of about 7PB/day and offers big experiments like CERN speeds far beyond anything available commercially. Basically every university in Europe has at least one connection to the network - they claim over 10,000 institutions connected. There are also multiple 100Gbit connections going outside of Europe, including across the Atlantic.
     
    So no, while data is stored on magnetic tape for long-term offline storage, it is sent around the world predominantly via GÉANT. CERN has access to multiple (I believe 4?) 100Gb connections onto GÉANT, as well as connections to the public internet. 10TB of data would take under 15 minutes to send over a single one of those links, and would most likely be throttled by the connection at the other end anyway. Maybe it's different for institutions on the other side of the world, but at least within Europe, the days of sending CERN data by FedEx are long gone.
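    A quick back-of-envelope check of those figures (a sketch; the 10TB, 100Gb and 7PB/day numbers are the ones quoted above, and protocol overhead is ignored):
```python
# Back-of-envelope transfer-time check using decimal units and no protocol overhead.
TB = 1e12  # bytes
Gb = 1e9   # bits

data_bits = 10 * TB * 8          # the 10TB figure from the post, in bits
link_bps = 100 * Gb              # a single 100Gbit/s link onto GÉANT

seconds = data_bits / link_bps
print(f"10TB over one 100Gb link: {seconds / 60:.1f} minutes")  # ~13.3 minutes

# GÉANT's quoted ~7PB/day corresponds to this average rate:
avg_gbps = (7e15 * 8) / 86400 / Gb
print(f"7PB/day average rate: {avg_gbps:.0f} Gbit/s")           # ~648 Gbit/s
```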
  15. Agree
    tim0901 got a reaction from wanderingfool2 in A 1 Petabit DVD-like disc has been created   
    A "single run" at CERN is about 4 years long - we are currently in the middle of Run 3 of the LHC. So... yeah they would probably store that much in that amount of time.
     
    Strictly speaking, Femtobarns is a measure of area - not data. Inverse femtobarns is (roughly speaking) a measure of the number of particle collision events. They don't use bytes because events is what matters to them.
     
     
    The triggers they use are ML models running on ASICs using the data flowing directly out of the detectors, which identify whether or not a collision is 'interesting' and determine whether or not that data should be stored before it even reaches a CPU core. This is because the vast majority of collisions are completely useless. If you picture two bullets being shot at each other, we only care about the absolutely perfect, head-on collisions. The off-centre hits or glancing blows aren't interesting and so can be ignored.
     
    There's then a massive server farm that performs more in-depth analysis of the information, discarding yet more interactions. This entire process is completed in ~0.2s, at which point the interaction is sent for long-term storage.
     
    If everything was stored, the raw data thoughput of the ATLAS detector alone would be approaching 100TB (yes terabytes) per second. After all the layers of processing, less than 1 percent of that data is stored. But that's only one of the dozens of experiments being performed at CERN.
     
    Data that is stored is stored on magnetic tape, but this is mostly for archival purposes to my knowledge. I believe fresh data is stored on hard disks, so that it can be easily transmitted. They don't send anything out via tape anymore as far as I'm aware - certainly my lecturers never got any. They sell old tapes in the LHC giftshop for 10CHF though!
    Big science experiments generally don't send things via the public internet. They use academic networks or NRENs, which are usually completely physically separate to public internet infrastructure.
     
    In Europe these have all merged to form one massive network called GÉANT, although national ones like the UK's JANET still operate as a subset of that.
     

     
    GÉANT has a throughput of about 7PB/day and offers speeds far beyond those available commercially to big experiments like CERN. But basically every university in Europe has at least one connection onto the network - they claim to have over 10,000 institutions connected. There are also multiple 100Gbit connections going outside of Europe, including across the Atlantic.
     
    So no, while data is stored on magnetic tapes for long-term offline storage, it is sent around the world at least predominantly via GÉANT. CERN has access to multiple (I believe 4?) 100Gb connections onto GÉANT as well as connections to the public internet. 10TB of data would only take about 10 minutes to send at that speed and will most definitely be throttled by the connection at the other end. Maybe it's different to institutions on the other side of the world, but at least within Europe, the days of sending CERN data by FedEx are long gone.
  16. Like
    tim0901 got a reaction from leadeater in A 1 Petabit DVD-like disc has been created   
    A "single run" at CERN is about 4 years long - we are currently in the middle of Run 3 of the LHC. So... yeah they would probably store that much in that amount of time.
     
    Strictly speaking, Femtobarns is a measure of area - not data. Inverse femtobarns is (roughly speaking) a measure of the number of particle collision events. They don't use bytes because events is what matters to them.
     
     
    The triggers they use are ML models running on ASICs using the data flowing directly out of the detectors, which identify whether or not a collision is 'interesting' and determine whether or not that data should be stored before it even reaches a CPU core. This is because the vast majority of collisions are completely useless. If you picture two bullets being shot at each other, we only care about the absolutely perfect, head-on collisions. The off-centre hits or glancing blows aren't interesting and so can be ignored.
     
    There's then a massive server farm that performs more in-depth analysis of the information, discarding yet more interactions. This entire process is completed in ~0.2s, at which point the interaction is sent for long-term storage.
     
    If everything was stored, the raw data thoughput of the ATLAS detector alone would be approaching 100TB (yes terabytes) per second. After all the layers of processing, less than 1 percent of that data is stored. But that's only one of the dozens of experiments being performed at CERN.
     
    Data that is stored is stored on magnetic tape, but this is mostly for archival purposes to my knowledge. I believe fresh data is stored on hard disks, so that it can be easily transmitted. They don't send anything out via tape anymore as far as I'm aware - certainly my lecturers never got any. They sell old tapes in the LHC giftshop for 10CHF though!
    Big science experiments generally don't send things via the public internet. They use academic networks or NRENs, which are usually completely physically separate to public internet infrastructure.
     
    In Europe these have all merged to form one massive network called GÉANT, although national ones like the UK's JANET still operate as a subset of that.
     

     
    GÉANT has a throughput of about 7PB/day and offers speeds far beyond those available commercially to big experiments like CERN. But basically every university in Europe has at least one connection onto the network - they claim to have over 10,000 institutions connected. There are also multiple 100Gbit connections going outside of Europe, including across the Atlantic.
     
    So no, while data is stored on magnetic tapes for long-term offline storage, it is sent around the world at least predominantly via GÉANT. CERN has access to multiple (I believe 4?) 100Gb connections onto GÉANT as well as connections to the public internet. 10TB of data would only take about 10 minutes to send at that speed and will most definitely be throttled by the connection at the other end. Maybe it's different to institutions on the other side of the world, but at least within Europe, the days of sending CERN data by FedEx are long gone.
  17. Informative
    tim0901 got a reaction from Uttamattamakin in A 1 Petabit DVD-like disc has been created   
    This seems really cool, but I think it's important to look at another project going on at the moment: Microsoft's Project Silica, which already has working prototypes of non-degrading (on the scale of millennia) cloud storage systems for archival data - using similarly sized glass slates with capacities upwards of 7TB - and which is already seeing throughput similar to that of conventional tape-based archival systems. Microsoft won't give any timescales, but given the technology has gone from a 300KB proof of concept in a university lab back in 2013 to prototype storage libraries of essentially unlimited capacity a decade later, it probably won't be long before it begins to appear commercially. I really wouldn't be surprised if Microsoft started offering glass backups to Azure customers within the next 5-10 years.
     
    If they can massively improve their write speeds, sure, I can see this technology having potential in datacentres - massive improvements in density could result in huge power savings. But for archival storage? I just don't see the benefit over a completely immutable (and dirt cheap) medium like glass that, if I had to guess, will probably reach the market first.
  18. Agree
    tim0901 got a reaction from Needfuldoer in OpenAI unveils "Sora." A prompt-based short video generator with amazing results   
    This is fucking terrifying.
  19. Agree
    tim0901 reacted to StDragon in Windows 11 24H2 goes from “unsupported” to “unbootable” on some older CPUs   
    People are still going to bitch when the Windows 11 24H2 feature update installs and then fails to boot on the subsequent reboot.

    "Just because you can, doesn't mean you should"

    People need to stop setting themselves up for failure. Windows 11 was never meant to be installed on those CPUs. The fact that it could be done was a bonus.
  20. Agree
    tim0901 reacted to GuiltySpark_ in Windows 11 24H2 goes from “unsupported” to “unbootable” on some older CPUs   
    Big nothingburger. 
     
    SSE4.2 support goes all the way back to chips you really have no business putting 11 on anyway. 
     
     
  21. Agree
    tim0901 reacted to da na in How to set up your new PC in 30 minutes   
    Glad to see you back, Emily.
  22. Agree
    tim0901 got a reaction from LAwLz in Intel seeks to get inside Microsoft's next-gen Xbox console, potentially snatching away lucrative share from AMD   
    The other thing that matters though is money.
     
    If Intel are willing to continue to undercut AMD, that might make them a very appealing option to Microsoft, especially for a Series S replacement where you're not looking for bleeding-edge performance anyway.
  23. Agree
    tim0901 reacted to leadeater in Intel seeks to get inside Microsoft's next-gen Xbox console, potentially snatching away lucrative share from AMD   
    Intel also has competitive iGPUs and dGPUs now, which makes them an option for an integrated custom SoC or a traditional hardware design. It would be silly not to engage and explore options with them, even without any intention of committing to them. It's always good to evaluate options.
  24. Agree
    tim0901 got a reaction from leadeater in Intel seeks to get inside Microsoft's next-gen Xbox console, potentially snatching away lucrative share from AMD   
  25. Informative
    tim0901 got a reaction from Levent in Nvidia RTX 40 Super Series Review   
    The 7900XTX does feel like a rather big omission, but at the same time it feels like a pretty cut-and-dried conclusion.
     
    The 7900XTX performs practically identically to the 4080 in rasterization, therefore it performs practically identically to the 4080 Super. It's the same price as the 4080 Super as well, at least here in the UK, but you get worse RT performance and no DLSS.
     
    As such, I see no reason to buy the 7900XTX unless you can get one substantially cheaper than the 4080 Super. Maybe that's possible where you are, but here that's not really the case.