Windows 11 - Here is everything you need to know - OUT NOW!!!

GoodBytes
6 hours ago, atxcyclist said:

Reading about how badly Win 11 destroys Ryzen performance, I have gone into the UEFI in my 2700X system and disabled TPM. I don't want to be automatically upgraded to it. If I can find a way to disable it on my Ryzen 4800 laptop, I will do so as well.

If not, try disabling Secure Boot.
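Before flipping anything in the UEFI, it's worth checking what state TPM and Secure Boot are actually in — Windows can report both. A quick sketch from an elevated PowerShell prompt (output fields vary by board and firmware):

```shell
# Run from an elevated PowerShell prompt.
# TPM presence/state (works for both discrete TPM and firmware fTPM):
Get-Tpm                      # look at TpmPresent / TpmReady / TpmEnabled

# Secure Boot state (throws an error if the system booted via legacy/CSM):
Confirm-SecureBootUEFI       # True = enabled, False = disabled

# Device-level TPM details, similar to what tpm.msc shows:
tpmtool getdeviceinformation
```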


It's ironic that Windows 11 ruins Ryzen performance. Remember how shitty Windows 10 performance was for Ryzen until both AMD and Microsoft addressed it? And then we had properly functioning AMD CPUs. Microsoft releases Windows 11 and suddenly they undo EVERYTHING they did to address it in the predecessor OS. How can I believe this was an honest mistake and not a deliberate nerf? Unless Microsoft is so utterly incompetent they forgot what they were doing just a year or two ago. Which is a very strong possibility as well...


3 minutes ago, RejZoR said:

Which is a very strong possibility as well...

Since they were also working closely with Intel, it is also possible that someone got a fat check...


On 10/14/2021 at 5:09 PM, HelpfulTechWizard said:

Moore's law has nothing to do with performance.

Moore's law has everything to do with transistor count. Transistor count and performance are not directly related.

More transistors means more performance on the same architecture and core count, but that's not the same thing as Moore's law.

High-end CPU performance will probably grow by 10% a year.

They do, you just have to use the setting to make the website think you're using a phone.

More transistors packed per die used to mean increases of an order you don't get anywhere near now. It's not the only metric, but it was the one that mattered most.

 

https://www.researchgate.net/figure/Growth-in-processor-performance-over-40-years-HP11_fig3_340682076

 

Depending on how you measure performance, the growth is well under 10%, even under 5%. Even if you get a boost like the i9-11900K, it's not the trend, and it's still far from the 50%+ yearly increases you used to have in the past.
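To put rough numbers on that gap: the Hennessy & Patterson data behind that linked chart is usually cited as roughly 52%/year single-thread growth in the old era versus around 3.5%/year recently. Compounded over a decade the difference is dramatic — illustrative arithmetic only (the exact rates are approximations from that literature, not from this thread):

```shell
# Compound annual growth over 10 years at the two commonly cited rates:
# ~52%/yr (pre-2003 era) vs. ~3.5%/yr (recent era).
awk 'BEGIN {
    printf "52%%/yr for 10 years:  %.0fx total speedup\n", 1.52 ^ 10
    printf "3.5%%/yr for 10 years: %.1fx total speedup\n", 1.035 ^ 10
}'
# Roughly 66x vs. 1.4x over a decade - which is why decade-old CPUs
# still feel fine for everyday tasks.
```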

 

Either way, and again, even if I upgrade my gaming rig, the rest of my machines haven't required a performance increase in any way so far. Everyday tasks are perfectly doable on decade-old processors. Hell, I even play AC Valhalla maxed out on my 12-year-old i7 with a simple overclock, on a 1080p monitor.

 

And the overwhelming majority of Windows users don't even play games. They use their computers to browse the web, work on office documents, and little more. The only upgrade my dad needed on his work laptop over the past decade was an SSD and a battery replacement.


Windows 11 is here, along with every reason not to use this early-access software that they think can just be patched 100 times... never mind the 1000 future patches needed to get it to where they want it to be... which is in Windows 14.

 

macOS and Linux just look a bit better from update to update.


50 minutes ago, GoodBytes said:

 

It's a good explanation of what VBS, TPM, and Secure Boot do, but not of a scenario involving RDP. Fact is, if a hacker gained RDP access with local administrative rights, they can nuke the data. And if the server has AV, it would probably prohibit rewriting the MBR anyway.

Speaking of MBR, that last bit explaining how to enable Secure Boot was half-assed. Yes, you can disable CSM / Legacy for pure UEFI to enable Secure Boot, but the OS volume needs to be GPT. If not, it won't boot. 
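For anyone in that boat, Microsoft has shipped a tool for exactly this conversion since Windows 10 1703: mbr2gpt can convert the OS disk in place, after which you can flip the firmware to pure UEFI and enable Secure Boot. A hedged sketch — disk 0 is assumed to be the OS disk here, and you should back up before touching partition tables:

```shell
# Run from an elevated command prompt (Windows 10 1703 or later).
# 1) Dry run: verify the OS disk can be converted (disk 0 assumed).
mbr2gpt /validate /disk:0 /allowFullOS

# 2) Convert MBR -> GPT in place (only if validation passed):
mbr2gpt /convert /disk:0 /allowFullOS

# 3) Reboot into firmware setup, disable CSM/Legacy, enable Secure Boot.
```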


10 hours ago, GoodBytes said:

 

This video is pure marketing material aimed at uneducated people.

Issues I have with the video.

1) They say they will address the system requirements for Windows 11, but they never do. Nowhere in the video do they state why you need an 8th gen Intel or 2000 series AMD processor to run Windows 11. They repeatedly say they will address it, but they never do. Not once in the entire video.

2) The first attack they show, where they get local admin access through RDP, works in Windows 11 as well. Windows 11 offers ZERO additional protection compared to Windows 10 against this attack. Also, this is really low-hanging fruit. They might as well have shown "look under the keyboard and someone might have left their password written on a post-it".

3) All the things they showed in this video are part of Windows 10, and will be enabled by default on newer Windows 10 machines. Nothing in the video is exclusive to Windows 11, although they try really hard to sell you the idea that it is.

 

 


58 minutes ago, LAwLz said:

All the things they showed in this video are part of Windows 10, and will be enabled by default on newer Windows 10 machines. Nothing in the video is exclusive to Windows 11, although they try really hard to sell you the idea that it is.

VBS won't automatically be enabled in Windows 10 even with all the requirements installed and enabled, and just to note it cannot be used at all on Windows 10 Home Edition. Thing is VBS can be a performance killer, which is why it's not on by default in Windows 10.
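An easy way to check whether VBS is actually running on a given machine, rather than guessing, is the Device Guard WMI class — a sketch, assuming an elevated PowerShell prompt:

```shell
# Query the Device Guard status class (the same data msinfo32 shows
# under "Virtualization-based security").
Get-CimInstance -ClassName Win32_DeviceGuard `
    -Namespace root\Microsoft\Windows\DeviceGuard |
    Select-Object VirtualizationBasedSecurityStatus, SecurityServicesRunning

# VirtualizationBasedSecurityStatus: 0 = off, 1 = enabled but not running,
#                                    2 = running
# SecurityServicesRunning: 1 = Credential Guard, 2 = HVCI (memory integrity)
```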

 

VBS really is a defense against the more sophisticated attacks though, which is why it was historically targeted at enterprises and high-security environments. I think it's great tech; however, that performance loss, which in some games is up to 15%, is quite a bit of a problem.


11 minutes ago, leadeater said:

Thing is VBS can be a performance killer, which is why it's not on by default in Windows 10.

My problem with it is that it requires their stupid hypervisor, which breaks a ton of performance evaluation/development tools, so I'll find a way to disable it anyway. But I do think enabling it by default isn't that bad.
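For what it's worth, the usual way people toggle this (to get the bare hardware back for profilers) is the hypervisor launch setting in the boot configuration — a sketch, run from an elevated prompt, reboot required:

```shell
# Stop the Hyper-V hypervisor from loading at boot.
# Note: VBS, WSL2, and Hyper-V VMs stop working while it's off.
bcdedit /set hypervisorlaunchtype off

# Re-enable it later:
bcdedit /set hypervisorlaunchtype auto

# Reboot for either change to take effect.
```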

 

We really need to move stuff to safe by default. Stupid RDP being exposed online is their own fault, due to insecure default settings.

 

We're in 2021 and damned routers still come with IP spoofing protection disabled and UPnP enabled.


2 hours ago, Forbidden Wafer said:

We really need to move stuff to safe by default. Stupid RDP being exposed online is their own fault, due to insecure default settings.

Well, that is 100% not a Microsoft thing and is completely down to the configuration of the system. First, RDP is disabled by default; second, you have to set up a port-forward rule, so without both of those the RDP attack as shown in the video is 100% impossible. VERY few people put RDP directly on the internet, and if you do, with some weak-ass password, then you get what's coming; nobody can save you from yourself.
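For anyone who wants to verify that on their own box, the RDP listener state is a single registry value — a quick sketch from an elevated PowerShell prompt:

```shell
# fDenyTSConnections: 1 = RDP disabled (the Windows default), 0 = enabled.
Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' |
    Select-Object fDenyTSConnections

# And check whether anything is actually listening on the RDP port:
netstat -ano | findstr ":3389"
```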

 

UPnP won't create an RDP rule btw, at least not automatically, as Windows never sends out a UPnP request to create one. UPnP obviously has flaws, but you need more than just that to get into systems.

 

And it's rather stupid Microsoft is showing that sort of attack while also having a proper RD Gateway HTTPS secure tunnel implementation with RADIUS/NPS policy based access rules that can control who and where they can access. On top of that RDS supports MFA so all that Microsoft is doing here is making themselves look bad and as if they don't have proper protections against password brute force attacks.

 

If you are a home user and cannot use the above, because they require AD/Azure AD and Windows Server for the RD Gateway role, then simply do not use RDP. There are other options that are better for home usage. RDP usage over the internet was never part of its design; it was designed for local access and trusted networks. A lot of effort has gone into making it at least mostly safe to use over the internet, but not pure, standalone RDP.

 

2 hours ago, Forbidden Wafer said:

My problem with it is that it requires their stupid hypervisor, which breaks a ton of performance evaluation/development tools, so I'll find a way to disable it anyway.

Hyper-V is an exceedingly good hypervisor. It's in no way stupid when it's just as good as any other option while also being more user-friendly than many and having multiple features most do not, for free (no additional cost). And of course it requires a hypervisor, it's literally in the name. VBS without the V would just be BS 😉

 

Also, what tools does it break? There should be very few that Hyper-V itself would break just by being enabled. I could understand VBS doing that, but then I'd also question what those tools are doing, because it's highly likely it shouldn't be done that way anymore and should be done another way. Of course, throwing away working tools and workflows is costly and time-consuming.


6 minutes ago, leadeater said:

Hyper-V is an exceedingly good hypervisor, it's in no way stupid when it's just as good as any other option while also being more user friendly than many others and has multiple features most do not, for free (no additional cost).

VirtualBox and VMware are way better. And VBox is free. They support way more stuff and some have "seamless" integration (they kind of hide the desktop of the VM and only show floating windows). Hyper-V is terrible.
 

 

8 minutes ago, leadeater said:

Also, what tools does it break? There should be very few that Hyper-V itself would break just by being enabled.

AMD uProf and Intel VTune. Not sure if perf works correctly. WPA should work, but it is pretty bad in comparison.


2 minutes ago, Forbidden Wafer said:

Virtualbox and VMware are way better.

But are they? I hate VirtualBox, so no, I disagree. You might like it but I do not; who is correct? Feature-, performance- and capability-wise, Hyper-V leaves VirtualBox for dead. Same with VMware Workstation.

 

The Hyper-V you get is the same Hyper-V that competes at feature and performance parity with VMware ESXi.

 

Sure, putting cost aside, I would use vSphere/ESXi over SCVMM/Hyper-V; however, many paid features, including ones in the more expensive VMware licenses, come at no additional cost with Hyper-V.

 

7 minutes ago, Forbidden Wafer said:

AMD uProf and Intel VTune. Not sure if perf works correctly. WPA should work, but it is pretty bad in comparison.

But what does it actually break? How? Simply enabling the Hyper-V role does nothing to how the host system functions or to its performance. Until you create a virtual switch and Hyper-V takes over your NIC, it hasn't done anything to the OS at all.


2 minutes ago, leadeater said:

But what does it actually break? How? Simply enabling the Hyper-V role does nothing to how the host system functions or to its performance. Until you create a virtual switch and Hyper-V takes over your NIC, it hasn't done anything to the OS at all.

It breaks kernel-level profiling (e.g. time spent in syscalls/drivers/etc.). Hyper-V does break host OS profiling. It obviously also impacts performance; saying it doesn't is totally misleading. You can configure a Hyper-V guest with hardware profiling (which does let you profile it with VTune and uProf), but that is only for the guest.


18 minutes ago, Forbidden Wafer said:

It obviously also impacts the performance, saying it doesn't is totally misleading.

Does it? Got any actual data showing that enabling Hyper-V, while doing nothing with it, reduces host OS performance? I've literally never seen any performance impact with it on, nor anyone show that it does. Sure, when running multiple VMs it will, because there is resource sharing going on; shut all the VMs down and the host will be just as fast as with Hyper-V disabled.

 

If you're going to say something has a performance impact, you need data to show it does; gut feeling and "I think"s don't mean anything.

 

18 minutes ago, Forbidden Wafer said:

It breaks kernel-level profiling (e.g. time spent in syscalls/drivers/etc.).

So 3.4 only fixed the within-VM stuff then; well, that sucks.


12 minutes ago, leadeater said:

Does it? Got any actual data showing that enabling Hyper-V, while doing nothing with it, reduces host OS performance? I've literally never seen any performance impact with it on, nor anyone show that it does. Sure, when running multiple VMs it will, because there is resource sharing going on; shut all the VMs down and the host will be just as fast as with Hyper-V disabled.

It does, even without any guests running, just like any other hypervisor. It isn't magical or special in that regard. From my tests when I was using WSLv2, it was about 3-5% without any guests running. With WSLv2 accessing stuff from Windows via Plan 9 it was more like 15%. I still prefer no Hyper-V and WSLv1 to this day.


 

5 hours ago, Forbidden Wafer said:

My problem with it is that it requires their stupid hypervisor, which breaks a ton of performance evaluation/development tools, so I'll find a way to disable it anyway. But I do think enabling it by default isn't that bad.

As a software developer, I've personally never heard of anyone having issues with WSL2 being enabled (it uses Hyper-V). VirtualBox (why would anyone use this besides toying with an OS, nothing serious, but anyway...) runs fine with Hyper-V; just set "Paravirtualization Interface" to "Hyper-V". Or, you know, use Hyper-V or vSphere for VM purposes. You can't beat a Tier-1 hypervisor.

 

As you said, with WSL2, even if it is not running, the fact that it is enabled costs performance. I noted a minor performance drop on my Core i7 930 2.8GHz system with 6GB of DDR3 RAM, powered by a GeForce GTX 680 and a SATA SSD from a company that went defunct ages ago (OCZ Vertex 4). I see it as a non-issue unless you are benchmarking at the top end, to be honest, especially with modern CPUs. While I only briefly looked at benchmark figures, the slowest AMD Ryzen CPU of today (I think even the Athlon series) is faster than, or at least comparable to, my current CPU. I don't see the performance drop as being a problem. The effect should be even more negligible on a decent modern CPU from recent years by either AMD or Intel.


6 hours ago, leadeater said:

VBS won't automatically be enabled in Windows 10 even with all the requirements installed and enabled

It is, if you meet certain requirements, which is basically what all Windows 11 PCs will do.

Source: Virtualization-Based Security: Enabled by Default

That post was made over 2 years ago.

 

 

6 hours ago, leadeater said:

and just to note it cannot be used at all on Windows 10 Home Edition.

Yeah but that's just Microsoft blocking it from working because they want people to pay for the non-gimped version of Windows. 

 

 

6 hours ago, leadeater said:

Thing is VBS can be a performance killer, which is why it's not on by default in Windows 10.

It is on by default, if your hardware meets certain requirements. Just like PCs sold with Windows 11 will. It is the same as Windows 11.

 

 

3 hours ago, leadeater said:

second you have to setup a Port Forward rule

That part is not entirely true.

I am not sure how it is in other countries, but here in Sweden it's not uncommon for people to just have an RJ45 connector in their hall or something, and that's where they get Internet from. When I worked at an ISP here in Sweden, it wasn't too uncommon that people just plugged their computers straight into that connection, or plugged switches into it and then their PCs. I don't think I need to tell you this, but for others reading this, this is a very bad idea because it means your computer is fully exposed to the Internet. If you have a port open, anyone from the Internet can send data to that port.

 

I agree that users would have to, for example, turn on RDP themselves though. Really, I think the scenario Microsoft highlighted is a bit silly. It's extremely low-hanging fruit, and what they showed is not even the worst thing someone can do if you fuck up so badly that they get local admin privileges through an RDP session.

 

Their demo is, in my opinion, as silly as going:

Quote

Okay, so let's say the lock on your door is broken and you never lock it. This leads to a thief entering your home when you are away. Now, pretend like that thief finds your iPhone, and in this particular case you have disabled the PIN on the lock screen so they can browse whatever they want. This is really bad because now they are able to buy vBucks for their Fortnite account since you don't have authorization for purchases through iTunes enabled! This is why it is very important that you enable password requirements for purchases through iTunes. That would surely have protected you from the bad things the thief could do to you!

 

Yeah, thanks Microsoft. I am sure the attacker with local admin privileges on my computer will surely be stopped because I have Secure Boot enabled. That will surely foil all their plans and totally protect me from any harm...

 

 

 

3 hours ago, leadeater said:

But what does it actually break? How? Simply enabling the Hyper-V role does nothing to how the host system functions or to its performance. Until you create a virtual switch and Hyper-V takes over your NIC, it hasn't done anything to the OS at all.

It actually does. Enabling Hyper-V puts your Windows in a VM that runs on top of the hypervisor. The penalty should be pretty small, but even without any additional configuration like a vSwitch, you still get the overhead of running the hypervisor.

I wouldn't be surprised if some hardware detects a Hyper-V-enabled Windows install as a VM and won't run because of it. You also lose access to (or at least used to) Intel VT-x instructions, since those are being used by the hypervisor instead. If you use a program that uses VT-x instructions, that will break too if you enable Hyper-V.


5 hours ago, leadeater said:

Feature, performance and capability wise Hyper-V leaves VirtualBox for dead. Same with VMware Workstation.

You forgot to mention why. It's because Hyper-V is a Type-1 hypervisor, whereas anything else that runs on Windows would be a Type-2. If anyone wants a Type-1 that isn't Hyper-V, they will have to run it on bare metal.


Intel's latest drivers for its Xe graphics (mobile devices) feature support for Dynamic Refresh Rate, a new Windows 11 feature which allows switching between different refresh rates (typically 60-120Hz). Once installed, go to Settings > Display > Advanced display > Choose a refresh rate, and you should see 'Dynamic (60 Hz or 120 Hz)'. Of course, you need a display that supports more than 60Hz.

 

https://www.intel.com/content/www/us/en/download/19344/intel-graphics-windows-dch-drivers.html

 


6 hours ago, LAwLz said:

It is, if you meet certain requirements, which is basically what all Windows 11 PCs will do.

Source: Virtualization-Based Security: Enabled by Default

That post was made over 2 years ago.

Oh interesting, guess I've not seen it come on by default in new installs because I don't normally enable the Hyper-V feature.

 

6 hours ago, LAwLz said:

That part is not entirely true.

I am not sure how it is in other countries, but here in Sweden it's not uncommon for people to just have an RJ45 connector in their hall or something, and that's where they get Internet from. When I worked at an ISP here in Sweden, it wasn't too uncommon that people just plugged their computers straight into that connection, or plugged switches into it and then their PCs. I don't think I need to tell you this, but for others reading this, this is a very bad idea because it means your computer is fully exposed to the Internet. If you have a port open, anyone from the Internet can send data to that port.

I could understand one computer working like that, but not more than one. The ISP might be providing media and protocol translation to native Ethernet out of the port, but it should only allow a single public IP to be assigned, and depending on the ISP, either PPPoE auth and/or a VLAN tag (not as common). With my GPON connection my ONT provides those Ethernet ports, and I could plug directly into it, but at least with my ISP's configuration I have to use PPPoE auth and VLAN 10.

 

So plugging a switch in and having multiple computers get internet access shouldn't work for more than a single computer; otherwise there is a router involved providing NAT and DHCP. ISPs don't hand out public IPs like free candy. But maybe CGNAT is in play with what you describe.

 

6 hours ago, LAwLz said:

It actually does.

No it doesn't. I've seen a few people say it does; however, it's typically not Hyper-V at fault. This has been argued before, and some have tested it showing around a ~1% difference, but only with 1 or 2 test runs, so we are in margin-of-error territory. Others have been able to show more, then did a Windows reinstall and the performance difference went away.

 

There needs to be a lot better testing to show it has a performance impact.

 

6 hours ago, LAwLz said:

Enabling Hyper-V puts your Windows in a VM that runs on top of the hypervisor

Correct; however, it is a special Root domain VM with direct hardware access, and things like DMA calls and GPU acceleration all work as normal. An actual Hyper-V VM experiences more of a performance difference than the Root domain host OS does.

 

6 hours ago, LAwLz said:

You also lose access to (or at least used to) Intel VT-x instructions, since those are being used by the hypervisor instead. If you use a program that uses VT-x instructions, that will break too if you enable Hyper-V.

VT-x can still be used within VMs, but you have to change the VM settings to expose it to the running guest. You need to do that if you want to run a nested hypervisor for labs, as an example, where you don't actually have multiple VM hosts.
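Exposing the virtualization extensions to a guest is a single cmdlet — a sketch assuming a VM named "LabVM" (a placeholder name), run while the VM is powered off:

```shell
# Expose VT-x/AMD-V to the guest so it can run its own (nested) hypervisor.
# The VM must be off; "LabVM" is a hypothetical VM name.
Set-VMProcessor -VMName "LabVM" -ExposeVirtualizationExtensions $true

# Verify the setting took:
Get-VMProcessor -VMName "LabVM" |
    Select-Object ExposeVirtualizationExtensions
```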


4 hours ago, StDragon said:

Your forgot to mention why. It's because Hyper-V is a type-1 hypervisor whereas anything else that runs Windows would be a Type-2. If anyone wants a Type-1 that isn't Hyper-V, they will have to run it on bare metal.

I sort of half said it by noting that it's designed to compete with vSphere and ESXi, but yeah, it's a different class of hypervisor. One of the main things I liked about it when I had to use it (some clients had it running when we took them over) was that Live Migration (the vMotion equivalent) was a free feature. However, back then that was Server 2008 R2 and Server 2012 R2, and Hyper-V was rather, ahhh, crap compared to today.

 

GPU acceleration and GPU sharing are another great free feature. If you need just some basic acceleration on some VMs, this is really nice compared to having to pay both VMware and Nvidia license fees for the same privilege. It's certainly not a replacement for a proper solution if you need higher-performance acceleration in VMs, but it's very much a nice-to-have feature.


9 hours ago, leadeater said:

No it doesn't. I've seen a few people say it does; however, it's typically not Hyper-V at fault. This has been argued before, and some have tested it showing around a ~1% difference, but only with 1 or 2 test runs, so we are in margin-of-error territory. Others have been able to show more, then did a Windows reinstall and the performance difference went away.

 

There needs to be a lot better testing to show it has a performance impact.

 

Correct; however, it is a special Root domain VM with direct hardware access, and things like DMA calls and GPU acceleration all work as normal. An actual Hyper-V VM experiences more of a performance difference than the Root domain host OS does.

 

VT-x can still be used within VMs, but you have to change the VM settings to expose it to the running guest. You need to do that if you want to run a nested hypervisor for labs, as an example, where you don't actually have multiple VM hosts.

You still are putting the OS in a VM and you still need to run the hypervisor. 

Running something will always have a performance impact compared to not running it. 

How much of a performance impact it has, and whether it's worth it, is up for debate, but I don't agree that we can say it doesn't have a performance impact. It absolutely does.

