
tim0901

Member
  • Posts

    765
  • Joined

  • Last visited

Awards

This user doesn't have any awards

1 Follower

About tim0901

  • Birthday January 9

Profile Information

  • Gender
    Male
  • Location
    England
  • Occupation
    Student

System

  • CPU
    Intel i7 4790k
  • Motherboard
    Gigabyte G1 Sniper Z97
  • RAM
16GB Corsair Vengeance Pro Red
  • GPU
    Zotac GTX 780
  • Case
    Corsair Carbide 300R
  • Storage
    Samsung 840 250GB, 3TB Seagate Barracuda
  • PSU
    Corsair RM750
  • Display(s)
    LG 24MT48D
  • Cooling
    Be Quiet Pure Rock
  • Keyboard
    Tiny KB-9805
  • Operating System
    Windows 10 Pro

Recent Profile Visitors

1,951 profile views
  1. Orgs are really the reason Win 10 still has such a big market share - most simply haven't moved across yet. Even if you ignore the fact that Win 10 only has 18 months left, it's only within the last 6-12 months that many admins would have even started considering a move to 11 at all, simply because of its age. New versions of Windows have a reputation for being buggy, with all sorts of weird software quirks that you just don't want to deal with at scale (yes, Win 11 mostly avoided this, but many admins have been burned too many times), and you can mostly avoid them by letting the OS mature for a year or so. And that's despite Win 11 having some really cool features for sysadmins that very much incentivise an upgrade. There are also a number of people hoping they can jump straight to Windows 12 - thanks to all the rumours that have been floating around - just like how many orgs went straight from Win 7 to Win 10, but how many that actually represents I've no idea.
  2. Yes and no. In many businesses you're absolutely right that they'll be replaced every N years, generally aligning with when the manufacturer warranty runs out. Other places, though, will hold onto PCs for far longer than they have any right to - this is especially common in schools, due to their general lack of funding. We still have classrooms full of 2nd-gen i5s, for example, which were bought second-hand 3 years ago to replace a fleet of Core 2s.

It comes down to how management see computing within their organisation. If they're interested in it and want to get the most out of IT, it's likely to fall into the first category. If they simply care that the PC works, it's probably going to fall into the latter, and the organisation will never invest in IT infrastructure unless it's absolutely necessary, much to the disdain of their sysadmins. Win 11 is a handy stick they can wave at management though: if the hardware won't support it, it can't be kept up to date. And as an organisation, if you haven't been keeping your systems up to date, then under laws like GDPR you can be liable for huge fines if you're the subject of a data breach, as you'll be found to have been negligent. Threats like that are the only way you're going to get management to invest in IT if you're in that latter category of organisation - we've just been told "they can wait another year" for those 2nd-gen i5s...
  3. The reason Debian is still supported on such old systems is that this is pretty much its purpose. Its tagline is that it's the Universal Operating System. Look at Ubuntu and you'll find no support for such ancient architectures - just x86_64, ARM and PowerPC. Saying "Linux doesn't drop hardware support" is wrong. Debian doesn't drop hardware support. Other distros are more than happy to. It's also a lot easier to justify supporting older architectures for longer when more of your user base actually uses them. Debian is used a lot in embedded systems that would cost huge amounts of money to replace or upgrade. Windows... isn't. You don't find CNC machines or MRI scanners running Windows - you just attach them to a Windows machine via USB/ethernet so regular users can work with them in a familiar environment. Chances are the actual hard work of supporting these old architectures is done by the people who need it, which also makes it a lot easier to justify as an organisation.
  4. Most of the data is junk because of a poor collision, as I described before with my bullet analogy. When you collide particles in an accelerator, the aim is to produce a plasma of pure energy. This plasma then generates particles via pair production - essentially the conversion of pure energy into particle/antiparticle pairs. The more energy you give to the particles you're accelerating, the higher-energy plasma you can create, so the more interesting particles can be created - and the more likely it is that the higher-energy particles you're looking for are created. But that's only the case if the interaction is head-on. If it's not, not all of that energy is contained in the collision, meaning the interesting stuff we're looking for can't be produced. It also adds a level of uncertainty to the data - we want the collision energy to be as fixed as possible for scientific accuracy, so it's best to remove these "bad" interactions as early as possible. (There's a rough sketch of the energy argument at the end of this post.)

Categorising interactions based on what theories we're investigating is done later, because what's interesting to one group of researchers will be different to what's interesting to another. There are general directions as to what the field of physics is looking at on the whole - at the moment Higgs and supersymmetry would probably be the big headlining topics - but each group will be looking for different decay mechanisms or signatures within that dataset, so we don't want to delete any of that data at the detector stage. This also allows them to look for interactions across the entire run's dataset, rather than only having access to whatever is generated after they pick something to look at.

Also, as a further note on CERN's compute capability, since people seemed interested: everything I've discussed so far is what's based in Geneva, but there's a lot more than that. The Worldwide LHC Computing Grid is the largest distributed computing grid in the world, comprising 14 tier 1 sites (datacentres) and 160 tier 2 sites (universities and institutions), all of which contribute to the compute, storage and networking requirements of the experiments. These (and a few others) are all connected via the LHC Optical Private Network. There's also LHCONE, which connects tier 1 sites to tier 2s.

Data from CERN isn't just sent directly from Geneva to the universities. Geneva (tier 0) keeps the "raw" (mostly junk-free) data - they now have more than 1 exabyte of storage - as well as first passes for event reconstruction, but this is also shared with the tier 1s, kinda like a CDN. The tier 1s then distribute data to tier 2s and the countless tier 3/local users, as well as providing storage for processed data from tier 2 institutions.

CERN being very open with this stuff, you can watch the network usage live here: https://monit-grafana-open.cern.ch/d/HreVOyc7z/all-lhcopn-traffic?orgId=16&refresh=5m But it's not very interesting at the moment - only in the 100-200 Gb/s range - because the LHC is switched off for winter maintenance. They did run an experiment on the network over the last 2 weeks though - I wonder when the test started?? That load is meant to be representative of ~25% of what will be generated when the upgraded HL-LHC starts operations in 2028 - and note that's only in one direction! I also found some graphics and slides giving an overview of CERN's computing status at the moment.
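To make that energy argument a bit more concrete - this is just the standard textbook relation, not something from the post above, and E_1, E_2, p_1, p_2 (the beam particles' energies and momenta) are my notation - the quantity that matters is the centre-of-mass energy:

\[
\sqrt{s} \;=\; \sqrt{(E_1 + E_2)^2 \;-\; \left| \vec{p}_1 c + \vec{p}_2 c \right|^2}
\]

For two beams of equal energy E meeting exactly head-on, the momenta cancel (\(\vec{p}_2 = -\vec{p}_1\)) and \(\sqrt{s} = 2E\), so all of the beam energy is available to make new particles. Producing a particle/antiparticle pair of mass m needs \(\sqrt{s} \ge 2mc^2\); anything less than a head-on hit leaves \(\sqrt{s}\) below that maximum, which is why glancing interactions rarely contain the heavy particles we're after.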
  5. A "single run" at CERN is about 4 years long - we are currently in the middle of Run 3 of the LHC. So... yeah, they would probably store that much in that amount of time. Strictly speaking, femtobarns are a measure of area, not data. Inverse femtobarns are (roughly speaking) a measure of the number of particle collision events. They don't use bytes because events are what matters to them.

The triggers they use are ML models running on ASICs, working on the data flowing directly out of the detectors; they identify whether or not a collision is 'interesting' and decide whether that data should be stored before it even reaches a CPU core. This is because the vast majority of collisions are completely useless. If you picture two bullets being shot at each other, we only care about the absolutely perfect, head-on collisions. The off-centre hits and glancing blows aren't interesting and so can be ignored. There's then a massive server farm that performs more in-depth analysis of the information, discarding yet more interactions. This entire process is completed in ~0.2s, at which point the interaction is sent for long-term storage. If everything was stored, the raw data throughput of the ATLAS detector alone would be approaching 100TB (yes, terabytes) per second. After all the layers of processing, less than 1 percent of that data is stored. And that's only one of the dozens of experiments being performed at CERN.

Data that is stored goes onto magnetic tape, but this is mostly for archival purposes to my knowledge. I believe fresh data is stored on hard disks, so that it can be easily transmitted. They don't send anything out via tape anymore as far as I'm aware - certainly my lecturers never got any. They do sell old tapes in the LHC gift shop for 10 CHF though!

Big science experiments generally don't send things via the public internet. They use academic networks, or NRENs, which are usually completely physically separate from public internet infrastructure. In Europe these have all merged to form one massive network called GÉANT, although national ones like the UK's JANET still operate as a subset of that. GÉANT has a throughput of about 7PB/day and offers big experiments like CERN speeds far beyond those available commercially, but basically every university in Europe has at least one connection onto the network - they claim to have over 10,000 institutions connected. There are also multiple 100Gbit connections going outside of Europe, including across the Atlantic.

So no - while data is stored on magnetic tape for long-term offline storage, it's sent around the world predominantly via GÉANT. CERN has access to multiple (I believe 4?) 100Gb connections onto GÉANT as well as connections to the public internet. 10TB of data would only take around 10 minutes to send at that speed (rough arithmetic below) and would most likely be bottlenecked by the connection at the other end. Maybe it's different for institutions on the other side of the world, but at least within Europe, the days of sending CERN data by FedEx are long gone.
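As a quick sanity check on that transfer-time figure, using only the numbers quoted above (a 100 Gb/s link, a 10 TB dataset) plus my own assumptions of decimal units and a fully saturated link with no protocol overhead:

# Rough transfer-time estimate for the figures quoted above.
# Assumptions (mine): 1 TB = 1000 GB, 1 GB = 8 Gb, link fully saturated.
link_gbps = 100          # a single 100 Gb/s connection onto GEANT
dataset_tb = 10          # dataset size in terabytes

dataset_gbits = dataset_tb * 1000 * 8        # TB -> gigabits
seconds = dataset_gbits / link_gbps          # time to push it through one link

print(f"One link:   ~{seconds / 60:.1f} minutes")         # ~13.3 minutes
print(f"Four links: ~{seconds / (4 * 60):.1f} minutes")   # ~3.3 minutes

So "around 10 minutes" is in the right ballpark for a single link, and comfortably under that if several of CERN's links can be used in parallel - either way, the bottleneck is almost certainly the receiving institution.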
  6. This seems really cool, but I think it's important to look at another project going on at the moment: Microsoft's Project Silica, which already has working prototypes of non-degrading (on the scale of millennia) cloud storage systems for archival data - using similarly sized glass slates with a capacity upwards of 7TB - and which is already seeing throughput similar to that of conventional tape-based archival systems. Microsoft won't give any timescales, but given the technology has gone from a 300KB proof of concept in a university lab back in 2013 to prototype storage libraries of basically unlimited capacity a decade later, it's probably not going to be that long before it begins to appear commercially. I really wouldn't be surprised if Microsoft started offering glass backups to Azure customers within the next 5-10 years. If they can massively improve their write speeds, sure, I can see this technology having potential in datacentres - massive improvements in density could result in huge power savings. But for archival storage? I just don't really see the benefit over a completely immutable (and dirt cheap) medium like glass that, if I were to guess, is probably going to reach the market first.
  7. Yeah I do believe it. Because I live in a country with these things called laws. And you know what's illegal under those laws? False advertising. Aka claiming you're going to do something - say, provide updates for X period of time - and then not following through with it. So yeah, if they say they're going to support a device for 7 years, I believe them. Because I know that if they don't, it would be a textbook case of false advertising. Meaning they would then get stung with some big fines and I'd get a check in the mail for compensation. And even outside of the courts, it would be complete suicide for Google if they tried to do it. Their competitors would immediately jump on it as a perfect anti-Google marketing point and it would likely cause them to lose many of their third-party partners due to the reputation cost. All for the sake of saving the cost of the handful of engineers who keep it maintained? It just wouldn't make any sense to do from a risk/reward perspective. So seriously, just take a step away from the "Google kills everything they make in seconds" internet meme hate train for a second, come back to the real world, and realise how little fucking sense Google breaking that promise would make.
  8. They also didn't discuss this for the Pixel. Google has committed to 7 years of updates for the Pixel 8, and genuine replacement parts are also available - for a fair price - via iFixit for that same duration. Heck, iFixit currently stocks OEM components stretching back to the Pixel 2. Yes, the Pixel is definitely harder to take apart, but that's something I would hope to only have to do once or twice during the phone's lifetime anyway. That's a sacrifice I'm willing to make in exchange for modern hardware features like wireless charging, an always-on display and IP68 water and dust resistance (the Fairphone is only IP55 rated), on top of the far more premium build.

Meanwhile, on the update front, Fairphone doesn't exactly have a great record with timeliness. According to Fairphone's OS release notes, the Fairphone 4 is still sitting on 2022's Android 13, which it only got back in October, meaning they're running a year behind in terms of major version releases. And yes, they get the monthly security patches, but they get them 4-8 weeks after their initial release. Fairphone also only guarantees 5 major Android revisions after Android 13, meaning it will cap out at Android 18. Compare all that to the Pixel, where not only do you get your updates on day 1, but you also get at least 2 more major versions of Android (Google guarantees Android version updates for the full 7 years with the Pixel 8), and you realise the Fairphone's software support offering is just objectively worse in every way except for that single additional year of security updates.

And frankly, I don't care about one additional year, and I doubt most people will either. 7 vs 8 years: both are exceptionally long amounts of time for something like a phone, and both are well beyond the point where many people stop caring about the availability of updates. Many - likely the vast majority - of people will simply go "I'd like something new" long before updates or parts availability actually run out, as the gradual advancements in technology are significant enough to actually make a difference. 7 years definitely counts as "good enough". You see this with iPhones - everyone clamours about how amazing their support is, but barely any of their users actually utilise its full extent. About 2/3 upgrade within 3 years, likely in line with their 24 or 36 month cellular contract, and the vast majority of all units are replaced within 5 years.

And that's the biggest nail in the coffin for this thing for me. Other Android manufacturers have been upping their game in this area and are beginning to provide long-term support for their own phones, to the point that one of the biggest selling points for the Fairphone doesn't really apply anymore. And it's not just Google - Samsung has made similar commitments for its latest flagships (7 years of security and OS version updates, which are also more timely than Fairphone's) and also sells replacement parts for the last 4 generations of flagships via its Self-Repair store. It's kinda like what Linus has said about his investment in Framework: he's quite happy for it to fail if it does so because the 'repairable laptop' niche has disappeared, due to other manufacturers offering it themselves. I feel that's exactly what's happening to Fairphone here.
  9. So... a Steam Machine? I can't see Valve being remotely interested in pursuing such a device given how badly the last attempt went. And if even Microsoft are reportedly considering pulling out of the home console market because it's not worth it, I can't see Valve being that enthused to dive in. With console hardware basically always being sold at a loss for several years after launch, it's hardly what I'd call a lucrative market.
  10. Yeah, Apple could implement a system that allows third-party browsers to be used for PWAs (as that's fundamentally the problem - PWAs currently always use Safari, with no way to direct them to another browser). They just don't want to put the effort into developing such a solution.
  11. The other thing that matters, though, is money. If Intel are willing to continue undercutting AMD, that might make them a very appealing option for Microsoft, especially for a Series S replacement where you're not looking for bleeding-edge performance anyway.
  12. The 7900XTX does feel like a rather big omission, but at the same time it feels like a pretty cut-and-dried conclusion. The 7900XTX performs practically identically to the 4080 in rasterization, therefore it performs practically identically to the 4080 Super. It's also the same price as the 4080 Super, at least here in the UK, but you get worse RT performance and no DLSS. As such, I see no reason to buy the 7900XTX unless you can get one substantially cheaper than the 4080 Super. Maybe that's possible where you are, but here that's not really the case.
  13. You'll have to pry it from my cold, dead hands. Seriously, I use this button all the freakin' time. Win + E to open Explorer, Win + V for the clipboard app (!!), or just tap it and start typing to search for something. It's so much quicker than scrolling through the Start menu trying to find what stupid folder something has been sorted into. It also allows you to fully navigate Windows without a mouse, which can be very useful for troubleshooting and/or macros (see the sketch below for the macro side of things).
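On the macro point, here's a minimal sketch of what that can look like. I'm using Python with the pyautogui package purely as an example - any macro tool that can send the Win key does the same job:

# Minimal macro sketch: drive Windows with the Win key, no mouse needed.
# Assumes `pip install pyautogui`; key names below are pyautogui's.
import time
import pyautogui

pyautogui.hotkey('win', 'e')      # Win + E: open File Explorer
time.sleep(1)

pyautogui.hotkey('win', 'v')      # Win + V: open the clipboard history panel
time.sleep(1)

# Tap Win on its own, then type to search - same as doing it by hand.
pyautogui.press('win')
time.sleep(0.5)
pyautogui.write('notepad', interval=0.05)
pyautogui.press('enter')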