Everything posted by tim0901

  1. Orgs are really the reason why Win 10 still has such a big market share - most simply haven't moved across yet. Even if you ignore the fact that Win 10 only has 18 months left, it's only within the last 6-12 months that many admins would have even started to consider moving to 11 at all, simply because of its age - new versions of Windows have a reputation for being buggy, with all sorts of weird software quirks that you just don't want to deal with at scale (yes, Win 11 mostly avoided this, but many admins have been burned too many times) and which you can mostly avoid by letting the OS mature for a year or so. And that's despite the fact that Win 11 has some really cool features for sysadmins that very much incentivise them to upgrade. There's also a number of people hoping that they can jump straight to Windows 12 - thanks to all the rumours that have been floating around - just like how many orgs went straight from Win 7 to Win 10, but how many this actually represents I've no idea.
  2. Yes and no. In many businesses you're absolutely right that they'll be replaced every N years, generally aligning with when the manufacturer warranty runs out. Other places though will hold onto PCs for far longer than they have any right to - this is especially common in schools, due to their general lack of funding. We still have classrooms full of 2nd Gen i5s for example, which were bought 2nd hand 3 years ago to replace a fleet of Core 2s. It comes down to how the management see computing within their organisation. If they are interested in it and want to get the most out of IT, it's likely to fall into the first category. If that's not the case and they simply care that the PC works, they're probably going to fall into the latter and as such will never invest in IT infrastructure unless it's absolutely necessary, much to the disdain of their sysadmins. Win 11 is a handy stick they can wave at management though, as if the hardware won't support it, it can't be kept up to date. And as an organisation, if you haven't been keeping your systems up to date, then under laws like GDPR you can be found liable for huge fines if you're the subject of a data breach, as you'll be found negligent. And threats like that are the only way you're going to be able to get management to invest in IT if you're in that latter category of organisation - we've just been told "they can wait another year" for those 2nd Gen i5s...
  3. The reason Debian is still supported on such old systems is because that's pretty much its purpose. Its tagline is that it is the Universal Operating System. If you go and look at Ubuntu, you'll find no support for such ancient architectures - just x86_64, ARM and PowerPC. Saying "Linux doesn't drop hardware support" is wrong. Debian doesn't drop hardware support. Other distros are more than happy to. It's also a lot easier to justify supporting older architectures for longer when more of your user base uses them. Debian is used a lot in embedded systems that would cost huge amounts of money to replace or upgrade. Windows... isn't. You don't find CNC machines or MRI scanners running Windows - you just attach them to a Windows machine via USB/ethernet so regular users can use them in a familiar environment. Chances are the actual hard work of supporting these old architectures is done by the people who need it, which also makes it a lot easier to justify as an organisation.
  4. Most of the data is junk because of a poor collision, as I described before with my bullet analogy. When you collide particles in an accelerator, the aim is to produce a plasma of pure energy. This plasma then generates particles via pair production - essentially the conversion of pure energy into particle/antiparticle pairs. The more energy you give to the particles you're accelerating, the higher energy plasma you can create, so the more interesting particles can be created - and the more likely it is that the higher energy particles you're looking for are created. But that's only the case if the interaction is head-on. If it's not, not all of that energy is contained in the collision, meaning the interesting stuff we're looking for can't be produced. It also adds a level of uncertainty to the data - we want the collision energy to be as fixed as possible for scientific accuracy, so it's best to remove these "bad" interactions as early as possible (there's a rough sketch of the maths at the end of this post).

Categorising interactions based on what theories we're investigating is done later, because what's interesting to one group of researchers will be different to what's interesting to another. There are general directions as to what the field of physics is looking at on the whole - at the moment Higgs and supersymmetry would probably be the big headlining topics - but each group will be looking for different decay mechanisms or signatures within that dataset, so we don't want to delete any of that data at the detector stage. This also allows them to look for interactions across the entire run's dataset, rather than only having access to whatever is generated after they pick something to look at.

Also, as a further note on CERN's compute capability since people seemed interested: everything I've discussed so far is what's based in Geneva, but there's a lot more than that. The Worldwide LHC Computing Grid is the largest distributed computing grid in the world, comprising 14 tier 1 sites (datacentres) and 160 tier 2 sites (universities and institutions), all of which contribute to the compute, storage and networking requirements of the experiments. These (and a few others) are all connected via the LHC Optical Private Network. There's also LHCONE, which connects tier 1 sites to tier 2s. Data from CERN isn't just sent directly from Geneva to the universities. Geneva (tier 0) keeps the "raw" (mostly junk-free) data - they now have more than 1 exabyte of storage - as well as first passes for event reconstruction, but this is also shared with the tier 1s, kinda like a CDN. The tier 1s then distribute data to tier 2s and the countless tier 3/local users, as well as providing storage for processed data from tier 2 institutions.

CERN being very open with this stuff, you can watch the network usage live here: https://monit-grafana-open.cern.ch/d/HreVOyc7z/all-lhcopn-traffic?orgId=16&refresh=5m But it's not very interesting at the moment - only in the 100-200 Gb/s range - because the LHC is switched off for winter maintenance. They did run an experiment on the network over the last 2 weeks though: I wonder when the test started?? That load is meant to be representative of ~25% of what will be generated when the upgraded HL-LHC starts operations in 2028 - and note that's only in one direction! I also found some graphics and slides giving an overview of CERN's computing status at the moment:
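To put a rough number on the head-on point: the energy available to create new particles is the invariant mass of the colliding pair, which is maximised when the momenta cancel. Here's a minimal sketch - my own illustration, not CERN code, and the "glancing" momentum fractions are invented:

```python
import math

def sqrt_s(e1, p1, e2, p2):
    """Invariant mass of a two-particle system in natural units (c = 1);
    energies in TeV, momenta as 3-vectors in TeV."""
    e_tot = e1 + e2
    p_tot = [a + b for a, b in zip(p1, p2)]
    return math.sqrt(e_tot**2 - sum(c**2 for c in p_tot))

E = 6.8  # TeV per beam in LHC Run 3; proton mass is negligible, so |p| ~ E

# Head-on: the momenta cancel, so the full 13.6 TeV can go into new particles.
print(sqrt_s(E, [0, 0, E], E, [0, 0, -E]))  # ~13.6

# Off-centre/glancing: only a fraction of each particle's momentum meets
# head-on (made-up fractions as a crude stand-in), so far less energy is
# available - and it varies event to event, hence the added uncertainty.
x1, x2 = 0.10, 0.05
print(sqrt_s(x1 * E, [0, 0, x1 * E], x2 * E, [0, 0, -x2 * E]))  # ~0.96
```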
  5. A "single run" at CERN is about 4 years long - we are currently in the middle of Run 3 of the LHC. So... yeah they would probably store that much in that amount of time. Strictly speaking, Femtobarns is a measure of area - not data. Inverse femtobarns is (roughly speaking) a measure of the number of particle collision events. They don't use bytes because events is what matters to them. The triggers they use are ML models running on ASICs using the data flowing directly out of the detectors, which identify whether or not a collision is 'interesting' and determine whether or not that data should be stored before it even reaches a CPU core. This is because the vast majority of collisions are completely useless. If you picture two bullets being shot at each other, we only care about the absolutely perfect, head-on collisions. The off-centre hits or glancing blows aren't interesting and so can be ignored. There's then a massive server farm that performs more in-depth analysis of the information, discarding yet more interactions. This entire process is completed in ~0.2s, at which point the interaction is sent for long-term storage. If everything was stored, the raw data thoughput of the ATLAS detector alone would be approaching 100TB (yes terabytes) per second. After all the layers of processing, less than 1 percent of that data is stored. But that's only one of the dozens of experiments being performed at CERN. Data that is stored is stored on magnetic tape, but this is mostly for archival purposes to my knowledge. I believe fresh data is stored on hard disks, so that it can be easily transmitted. They don't send anything out via tape anymore as far as I'm aware - certainly my lecturers never got any. They sell old tapes in the LHC giftshop for 10CHF though! Big science experiments generally don't send things via the public internet. They use academic networks or NRENs, which are usually completely physically separate to public internet infrastructure. In Europe these have all merged to form one massive network called GÉANT, although national ones like the UK's JANET still operate as a subset of that. GÉANT has a throughput of about 7PB/day and offers speeds far beyond those available commercially to big experiments like CERN. But basically every university in Europe has at least one connection onto the network - they claim to have over 10,000 institutions connected. There are also multiple 100Gbit connections going outside of Europe, including across the Atlantic. So no, while data is stored on magnetic tapes for long-term offline storage, it is sent around the world at least predominantly via GÉANT. CERN has access to multiple (I believe 4?) 100Gb connections onto GÉANT as well as connections to the public internet. 10TB of data would only take about 10 minutes to send at that speed and will most definitely be throttled by the connection at the other end. Maybe it's different to institutions on the other side of the world, but at least within Europe, the days of sending CERN data by FedEx are long gone.
  6. This seems really cool, but I think it's important to look at another project going on at the moment: Microsoft's Project Silica. Which already has working prototypes of non-degrading (on the scale of millennia) cloud storage systems for archival data - using similar-sized glass slates with a capacity upwards of 7TB - which are already seeing throughput similar to that of conventional tape-based archival systems. Microsoft won't give any timescales, but given the technology has gone from a 300KB proof of concept in a university lab back in 2013 to prototype storage libraries of basically infinite capacity a decade later, it's probably not going to be that long before they begin to appear commercially. I really wouldn't be surprised if Microsoft started offering glass backups to Azure customers within the next 5-10 years. If they can massively improve their write speeds, sure, I can see this technology having potential in datacentres - massive improvements in density could result in huge power savings. But for archival storage? I just don't really see the benefit over a completely immutable (and dirt cheap) medium like glass that, if I were to guess, is probably going to reach the market first.
  7. Yeah I do believe it. Because I live in a country with these things called laws. And you know what's illegal under those laws? False advertising. Aka claiming you're going to do something - say, provide updates for X period of time - and then not following through with it. So yeah, if they say they're going to support a device for 7 years, I believe them. Because I know that if they don't, it would be a textbook case of false advertising. Meaning they would then get stung with some big fines and I'd get a check in the mail for compensation. And even outside of the courts, it would be complete suicide for Google if they tried to do it. Their competitors would immediately jump on it as a perfect anti-Google marketing point and it would likely cause them to lose many of their third-party partners due to the reputation cost. All for the sake of saving the cost of the handful of engineers who keep it maintained? It just wouldn't make any sense to do from a risk/reward perspective. So seriously, just take a step away from the "Google kills everything they make in seconds" internet meme hate train for a second, come back to the real world, and realise how little fucking sense Google breaking that promise would make.
  8. They also didn't discuss this for the Pixel. Google has committed to 7 years of updates with the Pixel 8, and genuine replacement parts for it are also available - for a fair price - via iFixit for that same duration. Heck, iFixit currently stocks OEM components stretching back to the Pixel 2. Yeah, the Pixel is definitely harder to take apart, but that's something that I would hope I only have to do once or twice during the phone's lifetime anyway. That's a sacrifice I'm willing to make in exchange for modern hardware features like wireless charging, always-on display and IP68 water and dust resistance (the Fairphone is only IP55 rated), on top of the far more premium build.

Meanwhile, on the update front, Fairphone doesn't exactly have a great record with update timeliness. According to Fairphone's OS release notes, the Fairphone 4 is still sitting on 2022's Android 13, which it only got back in October, meaning they're running a year behind in terms of major version releases. And yeah, they get the monthly security patches, but they get them 4-8 weeks after their initial release. Fairphone also only guarantees 5 major Android revisions after Android 13, meaning it will cap out at Android 18. So compare all that to the Pixel - where not only will you get your updates on day 1, but you also get at least 2 more major versions of Android (Google guarantees Android version updates for the full 7 years with the Pixel 8) - and you realise the Fairphone's software support offering is just objectively worse in every way except for that single additional year of security updates.

And frankly, I don't care about one additional year, and I doubt most people will either. 7 vs 8 years: both are exceptionally long amounts of time for something like a phone and both are well beyond the point where many people will stop caring about the availability of updates. Many people - likely the vast majority - will simply go "I'd like something new" long before updates or parts availability actually run out, as the gradual advancements in technology are significant enough to actually make a difference. 7 years definitely counts as "good enough". You see this with iPhones - everyone clamours about how amazing their support is, but barely any of their users actually utilise its full extent. About 2/3 upgrade within 3 years, likely in line with their 24 or 36 month cellular contract, and the vast majority of all units are replaced within 5 years.

And that's the biggest nail in the coffin for this thing for me. Other Android manufacturers have been upping their game in this area and are beginning to provide long-term support for their own phones, to the point that one of the biggest selling points for the Fairphone doesn't really apply anymore. And it's not just Google - Samsung has made similar commitments to updates for their latest flagship phones (7 years of security and OS version updates - which are also more timely than Fairphone's) and also sells replacement parts for the last 4 generations of flagships via their Self-Repair store. It's kinda like what Linus has said about his investment in Framework: he's quite happy for it to fail if it does so because the 'repairable laptop' niche has disappeared, due to other manufacturers offering it themselves. I feel that's exactly what's happening to Fairphone here.
  9. So... a Steam Machine? I can't see Valve being remotely interested in pursuing such a device given how badly the last attempt went. And if even Microsoft are reportedly considering pulling out of the home console market due to it not being worth it, I can't see Valve being that enthused to dive in. With basically all hardware being sold at a loss for several years after launch, it's hardly what I'd call lucrative.
  10. Yeah, Apple could implement a system that allows 3rd party browsers to be used for PWAs (as that's fundamentally the problem - they currently always use Safari, with no way to direct them to another browser). They just don't want to put the effort into developing such a solution.
  11. The other thing that matters, though, is money. If Intel are willing to continue to undercut AMD, that might make them a very appealing option to Microsoft, especially for a Series S replacement where you're not looking for bleeding-edge performance anyway.
  12. The 7900XTX does feel like a rather big omission, but at the same time it feels like a pretty cut-and-dried conclusion. The 7900XTX performs practically identically to the 4080 in rasterization, therefore it performs practically identically to the 4080 Super. It's the same price as the 4080 Super as well, at least here in the UK, but you get worse RT performance and no DLSS. As such, I see no reason to buy the 7900XTX unless you can get one substantially cheaper than the 4080 Super. Maybe that's possible where you are, but here that's not really the case.
  13. You'll have to pry it from my cold, dead hands. Seriously I use this button all the freakin time. Win + E to open explorer, Win + V for the clipboard app (!!) or just tap it and start typing to search for something. It's so much quicker than scrolling through the start menu trying to find what stupid folder something has been sorted into. It also allows you to fully navigate Windows without a mouse, which can be very useful for troubleshooting and/or macros.
  14. Explain this for me then: I, an uneducated consumer, purchased a laptop with an R3 5400U CPU in it nearly 3 years ago. It's done me well, but it's not the quickest anymore and I could do with an upgrade. I head over to my local Best Buy, where I am greeted by the latest laptop from AMD, with an R5 7520U chip inside. Sure, it has the same number of cores as my current laptop, but it's an R5, rather than an R3, which means it's more powerful, right? It has more GHz too, and it's 2 generations newer! So surely this should be a suitable upgrade for me! And at such a great price too! Except it isn't. Because it's a 7020 series CPU, it's Zen 2 based - while their old 5400U is Zen 3 based (the sketch below shows how the naming scheme hides this). So the laptop our uneducated consumer has just purchased is a direct downgrade over their previous system, despite the name suggesting otherwise. So please, good sir, explain how that is not anti-consumer and misleading. Everyone knows that uneducated consumers work on the theory of bigger number = better, but you can no longer rely on that theory under AMD's new naming scheme. I don't care if "consumers only care if it's new" - if you are selling a consumer a worse product, but the name suggests that it should be better unless you have significant knowledge about the product, then you are misleading them.
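For the curious, the trap is easy to show in a few lines using AMD's published decoder for its 2023+ mobile model numbers (the helper function is my own illustration):

```python
# AMD's 2023+ mobile naming scheme: for a model like '7520U', the FIRST
# digit is the portfolio year (7 = 2023 lineup), while only the THIRD
# digit tells you the actual core architecture.
ZEN_GENERATION = {"1": "Zen/Zen+", "2": "Zen 2", "3": "Zen 3/3+", "4": "Zen 4"}

def core_architecture(model: str) -> str:
    """Read the architecture digit out of a post-2023 AMD mobile model number."""
    return ZEN_GENERATION.get(model[2], "unknown")

print(core_architecture("7520U"))  # -> Zen 2: the "newer" laptop
# The 5400U predates this scheme, but AMD's spec sheet lists it as Zen 3 -
# so the shiny new R5 is an architectural step backwards from the old R3.
```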
  15. Honestly, this feels just as bad as Intel's BS to me, if not worse. Because while Intel's marketing is completely ridiculous, their underlying point is correct: AMD's naming scheme is indeed misleading bullshit, even more so than Intel's own. This however just feels... petty? It's not trying to call out Intel for anything, it's just reactionary "E-cores bad" whining. For example, notice how they say "All cores have the same IPC", not "All cores have the same PERFORMANCE". Because Zen4c cores, with their half-sized L3 cache, are expected to perform worse than Zen4 cores. IPC means fuck all in the real world - it's just another number. Real-world performance is what matters. And "Doesn't require hardware OS SCHEDULER"... is that meant to be a good thing? May I remind everyone of the years of repeated problems trying to get the Windows scheduler to play nicely with Ryzen? Even now it still isn't great, especially when you're using mixed-CCD CPUs like the 7950X3D. I would be very surprised if the rumoured mixed Zen5/Zen5c chips don't see very similar scheduling issues if they eventually arrive. But then again, why would I expect anything good to come out of AMD's marketing department? They prove themselves completely incompetent basically every time AMD try to release a new product...
  16. It does. Read the official specs - it explicitly lists Power Delivery support, and the negotiation chip for it is clearly shown off in Jeff Geerling's video. In fact, it has to support it, since non-PD caps out at 5V 3A over USB-C. What it doesn't support is any voltage level other than 5V. But that isn't a required part of USB PD. Most manufacturers use PD for the variable voltage part of the spec, but the PD spec also enables the device to pull current up to "the maximum current supported by its receptacle" - aka the maximum supported by USB-C, which is 5A. This is why USB PD was limited to 100W for so long - that's 20V @ 5A, taking full advantage of the spec (quick numbers below). Many manufacturers don't bother with extending the current at lower voltages, but it is a supported use of the spec. But as RPi have stated, most of that additional power is for peripherals. The board itself only uses 12W under peak load. So for most people I really don't think it's much of an issue.
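The wattage maths, for anyone skimming - these figures follow from the USB limits quoted above, not from Raspberry Pi's docs:

```python
# Power ceilings implied by the USB figures discussed above.
def watts(volts: float, amps: float) -> float:
    return volts * amps

print(watts(5, 3))   # 15 W  - non-PD USB-C limit (5V @ 3A)
print(watts(5, 5))   # 25 W  - 5V at the 5A receptacle maximum, via PD
print(watts(20, 5))  # 100 W - the long-standing PD cap (20V @ 5A)
```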
  17. Stuff like Plex works great on a Pi right up until you want to try transcoding, where it absolutely shits the bed, so the extra power could make a big difference there. But the biggest reason to buy a Pi isn't the hardware. As others have mentioned, cheap 2nd hand USFF machines will far outperform it for a similar cost without consuming much more power, and there are indeed countless fruit Pis out there with more X or Y. No, you go with a Pi for the software stack - PiOS in particular - and the community. The Pi Foundation is still maintaining PiOS for the original Pi, which is now over 11 years old, while competing boards like the Orange Pi are typically (although I'll admit I haven't looked at this one specifically) abandoned after 1-2 years, and on top of that their communities are far smaller, so they're much less beginner friendly.
  18. A significant number of these use Unreal Engine, which is a good sign for widespread future support if nothing else. Frostpunk 2, Crimson Desert and Black Myth: Wukong aren't expected to be released until sometime next year though.
  19. No, it doesn't. It restricts its use, but doesn't remotely forbid it. There are over 80 exemptions in RoHS for various materials for various reasons - lead's use in car batteries, medical devices, HDD solder and solar panels being examples. It wouldn't be difficult or unreasonable for another exemption to be made for this if it were accurate. No, they really aren't - that's not what a phase of matter is. Phases of matter are solid, liquid, gas etc., and graphite/diamond are clearly both solids. What they are is different allotropes - different arrangements of the atoms while in the same phase of matter. But that's really irrelevant to the point of the comparison. My point was that just because two materials contain the same element, that doesn't mean you can assume they will have the same mechanical properties. You cannot assume that everything with carbon in it will be as hard as diamond, and in the same vein you cannot assume that everything with lead in it will be soluble in water. That's just... not how that works.
  20. Yes, absolutely. You could not only shrink the size of the magnets (superconductors can take far higher currents, inducing a stronger magnetic field - see the back-of-envelope sketch at the end of this post) but also remove the need for the liquid helium cooling system, massively reducing costs and space requirements. CERN is another institution that will be jumping on this. Their next particle accelerator (FCC) is in the early planning stages right now and again, between being able to ditch the liquid helium cooling system and the energy savings from zero-resistance coils, the cost benefits of such a material could be enormous.

It's also really funny seeing all the comments about lead poisoning in this thread - you guys do know that we still regularly use this stuff in modern life, right? For example, the walls of every x-ray room are lined with a good amount of it to ensure nobody in the surrounding rooms is unnecessarily exposed to excess radiation. So that's not only hospitals, but also private clinics, dentists, vets - it's really not that uncommon. Lead-lined clothing is also worn in these environments, and lead safes are used for storing any radioactive materials. And may I remind you that basically every ICE car has a lead-acid battery under the bonnet? There's also then stained glass, bullets, lead paint (which is still used sometimes) etc., which are perhaps less common depending where you are in the world. Sure, we got rid of most lead paint and lead pipes, but we didn't stop using the stuff completely like we did with e.g. CFCs - it's still really easy to find, and not only are we aware of the risks it poses, but we're much better at handling and disposing of it safely.

It also depends on how the structure itself breaks down. While this superconductor contains lead in its structure, it is not pure lead metal - it appears to be based upon lead apatite, which is a phosphate compound. Being in a compound like this can completely change its structure and properties. Think how graphite and diamond are, fundamentally, made up of the same carbon atoms, and yet their physical properties are completely different. This is also pretty much the theory behind what we do with radioactive waste to make it safe - the process of vitrification essentially traps the radioactive atoms in an insoluble, glass-like structure that then prevents the dangerous atoms from leaching into the environment. And so just because this compound contains lead doesn't mean it will leach that lead into its surroundings. In fact, the lead apatite that this superconductor appears to be based upon is already known to be almost completely insoluble in water - for this exact reason we deliberately add phosphate ions to the water supply in areas that use lead pipes, to prevent them from leaching. It could well be (and is probably quite likely) the case that this modified version retains many of those same properties. So maybe we shouldn't just immediately condemn the technology and start fearmongering because of a "hurr lead bad" mentality, and should instead wait for the experts to test the material and find out its properties before casting judgement.

BUT we also can't get too excited. As OP mentioned, this is a pre-release (aka not peer-reviewed) paper, and this wouldn't be the first time such a claim has been made in this field recently. And of course there are many other potential pitfalls that might prevent it from being useful - for example, many superconductors are rather brittle, which makes them rather impractical to use in the real world. The same could yet be found here.
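On the "higher current, stronger field, smaller magnet" point, the scaling is easy to see from the textbook long-solenoid formula (the winding density and currents here are illustrative numbers of my own choosing):

```python
# Field at the centre of a long solenoid: B = mu0 * n * I, so field
# strength scales linearly with coil current.
import math

mu0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
n = 1000                   # turns per metre (made-up winding density)

for amps in (100, 1_000, 10_000):   # hypothetical coil currents
    print(f"{amps:>6} A -> {mu0 * n * amps:5.2f} T")
# Resistive copper windings overheat long before the largest currents here;
# zero-resistance coils don't, which is how superconducting magnets reach
# fields of several tesla.
```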
  21. Technically Winget isn't a part of the base OS, but is instead distributed as part of the App Installer package via the Microsoft Store. It just happens to also be a default package that is shipped as part of the Windows ISO (just like most of the bloatware that your debloater script is removing), and the version of the package that's distributed with modern Windows ISOs is now new enough to already contain Winget. But this does mean that if you have an older Windows installation and have turned auto updates off in the Microsoft Store, or you've uninstalled the package thinking it was bloatware, then you won't have it. Update/reinstall App Installer and it should appear after a reboot.

But this kind of thing is the reason why I (and many others) advise against the use of such debloater scripts: generally speaking, they either go way overboard and remove things that can harm the functionality of the OS, or they do so little that you may as well have just scrolled down your start menu and clicked "uninstall" on anything you don't want. You just have to look at the issues list for that debloater to see all the problems people are having after using it. Also, just generally, anything that says "hey, run my script off the internet as an administrator to solve all your problems" can fuck right off. Unless you are able to understand exactly what that script is doing, believing such claims is a terrible idea. Sure, it claims to be debloating your OS, but are you knowledgeable enough to sift through that code and check that it's not doing anything else at the same time? You've given it full access to your entire system; it could be doing literally anything. Remember: open source =/= safe; it's trivial for GitHub repo owners to delete issues to hide criticism/accusations.

As a side note, I'd also highly recommend the new Windows Terminal to any command line users out there. It comes with Win 11, but you can install it on Win 10 as well through the Microsoft Store. Supports tabs, Unicode, GPU acceleration - it's fantastic.
  22. Personally I'd recommend using Winget - the official Windows package manager - over Chocolatey because: 1. It comes preinstalled on modern versions of Windows (Win 10 22H2 and newer, I believe). 2. It's compatible with the Windows Store. It doesn't have as good a selection of apps available, but it's still pretty good.