- Posts: 24,095
- Joined
- Last visited
About leadeater
- Birthday Sep 23, 1987
Profile Information
- Gender: Male
- Location: New Zealand
- Occupation: Systems Engineer | IT
System
- CPU: Intel i7 4930K
- Motherboard: Asus Rampage IV Black Edition
- RAM: 16GB G.Skill TridentX F3-2400C10-4GTX
- GPU: Dual Asus R9-290X
- Case: LD PC-V8
- Storage: 4x 512GB Samsung 850 Pro, 2x 512GB Samsung 840 Pro, 1x 256GB Samsung 840 Pro
- PSU: EVGA Supernova NEX 1500 Classified
- Display(s): Dell U3014 30"
- Cooling: Custom EKWB, 3x 480 rad, everything cooled incl. RAM (why not?)
- Keyboard: Razer BlackWidow Ultimate BF4
- Mouse: Mad Catz R.A.T. 5
- Sound: Custom-built speakers, home theater sound
- Operating System: Windows 10
Recent Profile Visitors
32,411 profile views
-
Oh they can and do sandbag; it's about how. It's called product segmentation. You have Intel spending a lot of engineering resources on making their uarch slightly better and slightly faster, which is not sandbagging. However, they left the actual consumer product portfolio in a complete state of underdevelopment due to lack of competition, while on their enterprise/datacenter product portfolio they actually brought significant product developments to market, not just based on uarch improvements but actual full SoC/CPU development and progression.

Intel was never forced to do this; there was no logical business reason to over-invest and over-spend in consumer desktop products with no reason to do so, so they didn't. There is very little demand for performance scaling in the consumer market; small gains are actually "enough". Enterprise/datacenter, on the other hand, actually does demand and require significant generation-over-generation improvements to sustain and maintain that market's growth and its requirements, otherwise datacenter operators would actually run out of rack space, floor space, building space, land area, power delivery, etc. Not only do they truly require it, they will also pay for it, meaning Intel delivered on product developments to suit those customers' demands and requirements.

There is a huge difference in product development between Intel's consumer and enterprise/datacenter portfolios up until consumer market competition reappeared. And if you think otherwise, then Intel actually halving the price of entire product stacks in workstation and server, on existing products, literally as soon as AMD showed real market competition is very damning evidence of this. You can't halve the price if it's not still actually profitable, which means the margin on it before was significantly higher specifically due to lack of competition, not engineering costs etc.
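The you-can't-halve-the-price-unless-margins-were-huge argument can be sketched with simple arithmetic. All of the figures below are hypothetical illustrations, not numbers from any actual Intel product:

```python
# Hypothetical illustration: if a product's price can be halved and the
# product remains profitable, the original gross margin exceeded 50%.
def gross_margin(price, unit_cost):
    """Gross margin as a fraction of the selling price."""
    return (price - unit_cost) / price

original_price = 1000   # hypothetical list price
unit_cost = 400         # hypothetical cost to produce and support

before = gross_margin(original_price, unit_cost)      # 0.6 -> 60% margin
after = gross_margin(original_price / 2, unit_cost)   # 0.2 -> still profitable

print(before, after)  # 0.6 0.2
```

The point being: a post-cut price that still clears cost is only possible because the pre-cut margin was well above 50%, which competition, not engineering cost, had been sustaining.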
-
So if you have a new app, or are a company new to iOS but still globally large, you can't choose this method without first getting 1 million installs? Seems like unnecessary hoop jumping. Different requirements or more flexibility would make much more practical sense, but that wasn't going to happen.
-
Don't have non-legal terms in the ToS. Or, more in this case, terms that aren't quite illegal but also shouldn't be allowed to the extent Apple took them. Either way Apple had little choice in the matter: refuse to give a company like Epic a developer account, irrespective of all of this, and it would have gone to court. Give them an account and they'll conduct themselves how they see fit per law and/or ToS, with law coming first.

Apple shouldn't need to be concerned about any of it; nothing they do should be a problem, and if "you" are worried that your restrictions could be found faulty then maybe "you" are in the wrong. Like cheating on an exam: if you are worried you'll be found out, you probably shouldn't have cheated.

Businesses don't have "morals", generalizing heavily. The point being, a business can act in harmful, unethical or otherwise less-than-scrupulous ways, knowingly and intentionally, so long as it thinks it has the legal right to do so. Similarly, any person or business that objects to such a thing can knowingly choose to violate something like a ToS without breaking any laws, and if both parties want to make an issue of it they can take the matter to civil court, which may or may not have unintended flow-on effects, or entirely intentional ones.

There are a lot of things Apple does very well; preventing in-app purchase processing, or even external payment information, absolutely is not one of them.
-
That is actually a legitimate legal strategy: without grounds to start litigation you cannot litigate, so if you need to create the situation in order to try and get a legal ruling for or against, then you do it. You may not like it, but needs must sometimes. The bottom line is that if you feel something in the ToS, or all of it, is not legal, then you don't have to follow it. ToS is not the law. If the ToS is not legal then you are not breaking it; it was never valid to begin with. Apple's ToS cannot prevent me from scratching my ass; that is not allowed. I can do it regardless of it being in their ToS.
-
Probably along the same lines as why a lot of things are or are not done: if it isn't broke, don't fix it. "Minor" differences or changes go down a very different analytical path when you are considering affecting many tens of millions of users, in case it's not so "minor" and it does cause problems. But also, unless there actually is a benefit at all, why do it at all?

Which leads on to the actual situation: Microsoft's latest Insider Preview is using the new Rust kernel, which is the root cause of this. Not because Rust doesn't support older CPUs like those, but because if you are making such a huge change you want to do a number of things, and one of those is to reduce the required testing and the potential for issues, which means cutting out support for unreasonable hardware. If it were just a compile-setting change and some code modifications, then the reasoning would have to be a lot more specific; but here you have to argue why hardware should be supported rather than why support was removed. Windows 11 24H2 is the introduction of the Rust kernel, so you must justify what hardware is to be supported, and if you can't then it's not.
-
Because they still set the allowed hardware target to be as low as Core 2 Duo: you take one path if you have the instruction and another if you don't. That's how all software works; not everyone has AVX-512 or even AVX2, but software can use those when they're there and not when they aren't.
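The take-one-path-if-available pattern can be sketched in any language; here is a minimal Python analogy using bit counting, where the "fast instruction" is the native `int.bit_count()` (Python 3.10+) and the fallback is a portable loop. The structure mirrors how native code dispatches on a CPUID feature check:

```python
def popcount_portable(x: int) -> int:
    """Count set bits with no fast primitive (Kernighan's loop)."""
    n = 0
    while x:
        x &= x - 1  # clear the lowest set bit
        n += 1
    return n

# Dispatch once, at startup: take the fast native path when the runtime
# provides it, otherwise fall back to the portable path. Native code does
# the same thing with a CPUID feature check instead of hasattr().
if hasattr(int, "bit_count"):
    popcount = lambda x: x.bit_count()
else:
    popcount = popcount_portable

print(popcount(0b10110100))  # 4, via whichever path was selected
```

Either path gives the same answer; the only difference is speed and which failure modes you have to test, which is exactly the support-burden trade-off being discussed.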
-
I'm not saying don't have another unit, but having another isn't really as much extra protection as you might be thinking. Unless you have a good system for both keeping the data in sync in real time and an automated or quick way to switch across, one that is practiced and understood by multiple people, you're throwing money at a solution that doesn't actually offer the benefits sought. So long as you have your RAID group properly laid out and are using RAID-DP or RAID-TEC, disk failure is a minimal if almost nonexistent concern, up until passing the 8-year mark with the original disks still in use. It is not worth the cost of another system, and you are not relying on a single layer other than the physical disks, which again you would have to be neglecting your maintenance and replacement life cycle for it to ever really be a factor.

Two Synologys is not better than one NetApp. Two Synologys also cost more than one NetApp. But also, like I mentioned, QNAP's dual-controller ZFS is a cheaper option.
-
What is the point of running an operating system that cannot run most modern software, if your argument is that it still works today? You have a good point about supporting legacy systems and legacy software, and that is a step better than just running old unsupported Windows, but we all know 1000% that is what actually happens, a lot, in manufacturing. It's like arguing that parallel ports on computers can still be used and are functional; yes they are, but good luck getting a new printer to work properly with a parallel port. Some things shouldn't be done.
-
They won't be using it "more"; they will be removing support for the alternative to not supporting it. It is very unlikely Microsoft was setting any compiler flags to avoid using this before, and it's even less likely a compiler would not use this instruction (see my prior post). The issue is both performance related and bug/support related. If entirely different code paths are being taken then you have completely different paths and failure modes for issues, and contrary to popular belief Microsoft do actually do in-house kernel stability checking across a wide range of hardware, and lots of it, and they'll certainly have statistical data showing that hardware without this instruction has a higher rate of errors. Combine that with no actual need to support such old hardware for Windows 11 and you have a perfectly reasonable outcome: ~0.0001% of Windows 11, or potential Windows 11, users affected.
-
I think you may have to add more disks (3 in your case) for the option to come up. I've never done a migration with Synology, or really used them, so I don't know the practical details of doing it with them. The problem is spending money to add disks only to find out it still won't let you do it. Edit: Actually never mind, I think you are using the RAID Group feature, which means
-
Sounds like you and @Fatty 227 both need to start looking at different storage solutions with more built-in redundancy, like a NetApp or, as a cheaper option, a QNAP dual-controller ZFS model. Going with NetApp, and probably QNAP too, you'd be gaining access redundancy through HA controllers and dual paths to all disks through dual redundant SAS controllers in the disk shelves, so there is no single point of failure. Totally up front though: NetApp is decently more expensive than Synology/QNAP. Obviously NetApp and QNAP are not the only options in this regard, just what I have used most commonly. HPE also have many good options, along with Dell/EMC etc.

P.S. You could be cheeky and ask to rent an AWS Snowball Edge NVMe, which has 210TB of capacity, copy your data to that and back, and then tell AWS "nah, never mind, we changed our mind, here is your equipment back". Just don't tell me you did this.
-
I have some bad news: that is essentially unavoidable when seeking any semblance of reasonable cost. Absolute best case, if you can sustain 400MB/s, it's 5 days for each copy, so 10 days. Why not just convert the RAID level from 5 to 6? https://kb.synology.com/en-br/DSM/help/DSM/StorageManager/storage_pool_change_raid_type?version=7
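The 5-days-per-copy figure is straightforward arithmetic; a quick sketch (decimal units assumed, 1 TB = 1,000,000 MB; the dataset size is back-calculated from the post's rate and duration, not stated anywhere):

```python
def copy_days(size_tb: float, rate_mb_s: float) -> float:
    """Days to copy size_tb terabytes at a sustained rate_mb_s MB/s."""
    seconds = size_tb * 1_000_000 / rate_mb_s
    return seconds / 86_400  # seconds per day

# 400 MB/s sustained moves about 34.56 TB per day...
per_day_tb = 400 * 86_400 / 1_000_000
print(round(per_day_tb, 2))             # 34.56

# ...so a 5-day copy at that rate corresponds to roughly 170 TB of data,
# and a copy out plus a copy back doubles it to 10 days.
print(round(copy_days(172.8, 400), 1))  # 5.0
```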
-
SLURM and Servers Configuration
leadeater replied to nbstackpie's topic in Servers, NAS, and Home Lab
Have a look at Nvidia Base Command Manager, it'll do most of the hard work for you. https://www.nvidia.com/en-us/data-center/base-command/manager/ But as @igormp has pointed out, it's mostly about your processes, your workflows and how you do things, more than just finding a job management system like SLURM or setting up a cluster. -
That is true, I do search for oddball apps like that. I guess my point was that people don't do it for services like Spotify, YouTube, Netflix etc, the huge incumbent services that are well known and heavily used. What I haven't done is go searching app stores for things like that, because I'd directly search the app name rather than looking for "a music service"; and even if I were looking for one, Google would be my first place to search, while for an app like a compass or a medical AED map I would search the App Store first. It was probably wrong to say nobody ever does it, since I do, but there are definitely categories of apps where the method of discovery is different. I.e. I absolutely don't believe App Store search has any significance for Spotify, other than the necessity of being on there to avoid people choosing not to use them because they aren't. Honestly, I rank word of mouth higher than App Store search for how people hear of and decide to use Spotify.