gabrielcarvfer

Foreshadow: the real flaw behind Spectre/Meltdown, and it affects most CPUs

Recommended Posts

Posted · Original Poster

It turns out that Spectre and Meltdown were side-effects of the real flaw, which also affects CPUs from other manufacturers. Previous mitigations do not prevent the side-channel leakage.

 

Attacks such as ZombieLoad (data sniffing via the browser) are not limited to Intel anymore.

[attached image]

Quotes

Quote

It turns out that the root cause behind several previously disclosed speculative execution attacks against modern processors, such as Meltdown and Foreshadow, was misattributed to 'prefetching effect,' resulting in hardware vendors releasing incomplete mitigations and countermeasures.

 

[...]

The new research explains microarchitectural attacks were actually caused by speculative dereferencing of user-space registers in the kernel, which not just impacts the most recent Intel CPUs with the latest hardware mitigations, but also several modern processors from ARM, IBM, and AMD — previously believed to be unaffected.
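For readers wondering how a value that was only touched during speculative execution can leak at all: the usual channel is the CPU cache. Below is a toy Python model of the Flush+Reload idea (the "cache" is just a Python set and the "timing" is a membership test, purely for illustration; real attacks time actual memory loads):

```python
# Toy model of a cache side channel. The "victim" touches one cache line
# indexed by a secret byte; the "attacker" then probes every possible
# line and picks the one that loads fast. Here, cache state is a set and
# the timing check is a membership test -- real attacks measure load
# latency in cycles (e.g. with rdtsc) on real cache lines.

CACHE_LINE = 64   # bytes per cache line (typical)
NUM_VALUES = 256  # one probe line per possible byte value

def victim_touch(secret_byte, cache):
    """Victim code (possibly running only speculatively) that accesses
    probe_array[secret_byte * CACHE_LINE], caching that line."""
    cache.add(secret_byte * CACHE_LINE)

def attacker_recover(cache):
    """Flush+Reload: the one line the victim touched 'loads fast'."""
    for value in range(NUM_VALUES):
        if value * CACHE_LINE in cache:  # stands in for a timing check
            return value
    return None

cache = set()                 # all probe lines start flushed
victim_touch(0x42, cache)     # secret access leaves a cache footprint
print(hex(attacker_recover(cache)))  # prints 0x42: the secret leaked
```

Even when the architectural results of speculated instructions are discarded, the cache footprint survives, and that footprint is what all of these attacks ultimately measure.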

 

My thoughts

Manufacturing issues, an unending stream of security flaws, performance-hampering mitigations... If some serious breakthrough doesn't happen, we will be lucky if future processors (>5 years out) are any faster than their predecessors.

 

 

Sources

https://thehackernews.com/2020/08/foreshadow-processor-vulnerability.html

 

8 minutes ago, gabrielcarvfer said:

Womp womp, I'm late.

 

Hopefully it's merged with the other one as your post explains it a bit better. The graphic is a bit more informative and more helpful for others as well.


*wonders if graphene will make clock speeds in the tens to hundreds of GHz feasible*


The pursuit of knowledge for the sake of knowledge.

Forever in search of my reason to exist.


"Motivations" means "mitigations" in this case, and it's an attack of autocorrect, I'm assuming? *ihateautocorrect*


Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.

Posted · Original Poster
5 minutes ago, Bombastinator said:

"Motivations" means "mitigations" in this case, and it's an attack of autocorrect, I'm assuming? *ihateautocorrect*

Yup *facepalm*

2 minutes ago, gabrielcarvfer said:

Yup *facepalm*

Lol, I had to edit my post 3 times because of autocorrect garbage. You can type the right word and it will follow up with the wrong one; you can fix it and it will do it again.



So it needs to be mitigated everywhere, soonest. Argh. And the new "hardware-mitigated" chips still have issues then, I suppose. Is a mitigation even possible? The original intrusions were at least hard to implement; is this equally hard? I'm concerned that someone prepping a "virus" for a Spectre/Meltdown injection could merely modify their code to use this instead. How hard would that be?



The hardware is going to have flaws and holes; it's more a matter of how serious this is for most users. It really sucks if this hits AMD CPUs as hard as Intel.

 


I’m worried about every piece of hardware I own right now.  Do I need to shut down my internet connection?


10 minutes ago, Bombastinator said:

So it needs to be mitigated everywhere, soonest. Argh. And the new "hardware-mitigated" chips still have issues then, I suppose. Is a mitigation even possible? The original intrusions were at least hard to implement; is this equally hard? I'm concerned that someone prepping a "virus" for a Spectre/Meltdown injection could merely modify their code to use this instead. How hard would that be?

Modern CPUs have some (varying) degree of leeway to reorder code to maximize throughput, in addition to executing code before knowing whether it needed to be executed at all. By design, CPUs use whatever shortcuts they can get ahold of to produce faster results while still producing correct ones.
 

In production, the big three priorities are Safety (including housekeeping), Quality (correct results), and Throughput, in this order. You can have a manufacturing line that produces consistently perfect results at mind boggling speeds, but an oversight in even something as simple as housekeeping can lead to an accident, and render moot all your gains. 
 

From a high level, a CPU resembles a manufacturing line. Accuracy and throughput are obvious targets of high-performance CPU design; a CPU is useless if it produces inconsistent results, so accuracy takes priority over throughput. But are there housekeeping and safety measures that can, or should, take priority over everything else in CPU design? I'm not certain, though reading deeper into Spectre/Meltdown suggests something similar: not taking the time to run security checks, or to clean up leftover data that could leak later, may be at the heart of the problem. I'm no CPU engineer, so this is a guess, but if correct it paints a relatively grim picture: we'd probably have to just put up with security flaws, as a proper fix would be expensive in both absolute performance and efficiency.


4 minutes ago, Zodiark1593 said:

Modern CPUs have some (varying) degree of leeway to reorder code to maximize throughput, in addition to executing code before knowing whether it needed to be executed at all. By design, CPUs use whatever shortcuts they can get ahold of to produce faster results while still producing correct ones.
 

In production, the big three priorities are Safety (including housekeeping), Quality (correct results), and Throughput, in this order. You can have a manufacturing line that produces consistently perfect results at mind boggling speeds, but an oversight in even something as simple as housekeeping can lead to an accident, and render moot all your gains. 
 

From a high level, a CPU resembles a manufacturing line. Accuracy and throughput are obvious targets of high-performance CPU design; a CPU is useless if it produces inconsistent results, so accuracy takes priority over throughput. But are there housekeeping and safety measures that can, or should, take priority over everything else in CPU design? I'm not certain, though reading deeper into Spectre/Meltdown suggests something similar: not taking the time to run security checks, or to clean up leftover data that could leak later, may be at the heart of the problem. I'm no CPU engineer, so this is a guess, but if correct it paints a relatively grim picture: we'd probably have to just put up with security flaws, as a proper fix would be expensive in both absolute performance and efficiency.

I’m more comfortable with slow computers than insecure ones.


Posted · Original Poster
25 minutes ago, Bombastinator said:

I’m more comfortable with slow computers than insecure ones.

Oh, come on, you have been using insecure stuff your entire life and weren't even aware of it... In terms of Intel processors, you would need to go back to before the Pentium Pro/P6 era to get rid of speculative execution.

19 minutes ago, Bombastinator said:

I’m more comfortable with slow computers than insecure ones.

I’d have to wager that it would depend. Omitting the mechanisms involved would severely set back performance. Consider that speculation and out-of-order execution have enabled the use of wider superscalar architectures. (Yes, an in-order design can technically go wide, but it would need a precise mix of code to make it worthwhile.) Losing these performance-enhancing features entirely would set us back a very long way, unless graphene or some other transistor tech can ramp up clocks to offset the deficit.
 

Even with security risks, I don’t much desire going back to 90s performance on all my PCs and devices. 


Posted · Original PosterOP
30 minutes ago, Zodiark1593 said:

Not taking the time to run security checks, or to clean up leftover data that could leak later, may be at the heart of the problem. I'm no CPU engineer, so this is a guess, but if correct it paints a relatively grim picture: we'd probably have to just put up with security flaws, as a proper fix would be expensive in both absolute performance and efficiency.

Checking is hard, as the CPU is not aware of what the code is doing in the first place. Cleaning up leftover data is an option, but operating systems will absolutely refuse to do it. Microsoft even refuses to reconfigure the floating-point unit when a WSLv1 picoprocess is executing, resulting in rounding errors that they ignore in favor of performance for the remaining processes (a lame excuse, given that WSLv2 runs on top of a hypervisor, another OS, etc.).

27 minutes ago, gabrielcarvfer said:

Oh, come on, you have been using insecure stuff your entire life and weren't even aware of it... In terms of Intel processors, you would need to go back to before the Pentium Pro/P6 era to get rid of speculative execution.

Heh. I got one downstairs. I think, anyway. Q66-something. I started on a 4.0 MHz Sanyo.


2 hours ago, Blademaster91 said:

The hardware is going to have flaws and holes; it's more a matter of how serious this is for most users. It really sucks if this hits AMD CPUs as hard as Intel.

From reading the paper explaining the flaws, all CPUs seem to be badly affected. In particular I'm looking at Table 2 (page 8), which provides an F1 score (a measure of how accurate a test is) for using different syscalls to exploit the flaw. To quote the paper:

Quote

The results show that the same effects occur on both AMD and ARM CPUs, with similar F1-scores.

The exact method for efficient exploitation seems to differ by platform, but all of them have high scores for one syscall or another. The ARM Cortex-A57 they use seems to leak a lot with the 'read' syscall, whilst the Threadripper 1700 seemed more vulnerable via 'sched_yield'. The i7-8700K seemed to depend far more on which syscall had been executed previously. The 'pipe' syscall had almost 100% accuracy on every CPU tested, but, as mentioned in the paper, it takes 3-5 times longer than the others and so is a less efficient method.
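Since the post leans on F1 scores, here's what that metric actually computes: the harmonic mean of precision and recall. A quick standalone illustration in Python (the numbers below are made up for the example, not taken from the paper):

```python
def f1_score(true_pos, false_pos, false_neg):
    """F1 = harmonic mean of precision (how many flagged leaks were real)
    and recall (how many real leaks were caught)."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return 2 * precision * recall / (precision + recall)

# A hypothetical leakage test that caught 95 real leaks, raised 5 false
# alarms, and missed 5 leaks scores:
print(round(f1_score(95, 5, 5), 2))  # 0.95
```

An F1 near 1.0, as the paper reports for several syscalls, means the leakage signal is both reliable and rarely spurious.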

They also built a proof of concept exploit in section 7, which was found to perform equally on all tested CPUs. 


My PCs:

Quote

Timothy: 

i7 4790k

Gigabyte G1 Sniper Z97

16gb Corsair Vengeance Pro Red

Zotac GTX 780

Corsair Carbide 300R

Samsung 840 250GB, 3TB Seagate Barracuda

 

Timothy II - In Progress:

Xeon E5 1650 V1

16GB ECC Memory

Quadro 2000

2TB Seagate Barracuda

 


So functionally, the most useful protection for home users seems to be the same thing that was used for Spectre/Meltdown: turn off JavaScript, correct?


1 hour ago, Bombastinator said:

So functionally, the most useful protection for home users seems to be the same thing that was used for Spectre/Meltdown: turn off JavaScript, correct?

I'm hoping that people who thought WASM was a good idea get smacked with a newspaper.

https://webassembly.org/docs/security/

Quote

Nevertheless, other classes of bugs are not obviated by the semantics of WebAssembly. Although attackers cannot perform direct code injection attacks, it is possible to hijack the control flow of a module using code reuse attacks against indirect calls. However, conventional return-oriented programming (ROP) attacks using short sequences of instructions (“gadgets”) are not possible in WebAssembly, because control-flow integrity ensures that call targets are valid functions declared at load time. Likewise, race conditions, such as time of check to time of use (TOCTOU) vulnerabilities, are possible in WebAssembly, since no execution or scheduling guarantees are provided beyond in-order execution and post-MVP atomic memory primitives. Similarly, side channel attacks can occur, such as timing attacks against modules. In the future, additional protections may be provided by runtimes or the toolchain, such as code diversification or memory randomization (similar to address space layout randomization (ASLR)), or bounded pointers (“fat” pointers).

 

The main target, though, would be data leaking out of VMs. So if you want to ensure that can't happen, you have to cripple features like hyperthreading that share the cache, or other CPU features that operate on the same data in parallel.

https://www.intel.com/content/www/us/en/architecture-and-technology/mds.html
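As an aside, on Linux you don't need third-party tools to see which of these issues the kernel thinks your CPU has, and which mitigations are active: it exposes them in sysfs. A small Python sketch for reading them (Linux-only; on other systems the directory doesn't exist and the function simply returns an empty dict):

```python
import os

# Standard sysfs path on Linux kernels that report hardware vulnerabilities
VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

def cpu_vulnerabilities():
    """Map vulnerability name -> the kernel's mitigation status string."""
    status = {}
    if not os.path.isdir(VULN_DIR):  # non-Linux system or very old kernel
        return status
    for name in sorted(os.listdir(VULN_DIR)):
        with open(os.path.join(VULN_DIR, name)) as f:
            status[name] = f.read().strip()
    return status

for vuln, state in cpu_vulnerabilities().items():
    print(f"{vuln}: {state}")
```

On an affected Intel system with SMT enabled, the `mds` entry typically reads something like "Mitigation: Clear CPU buffers; SMT vulnerable", which is exactly the hyperthreading caveat described above.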

 

Like, the average person simply won't understand how it works; heck, the average tech news site doesn't have a meaningful way of explaining it. So when wider non-tech news sites pick it up, it gets sensationalized as though all computers can be taken over by malicious JavaScript, when the reality is more "don't let your browser sit on pages you don't trust": basically, close the browser tab when you're done with it. Browsers could also mitigate the damage by indicating when JavaScript is still running in a tab, and by pausing all scripts when the tab is switched away unless the user has explicitly given it permission to run in the background (e.g. Slack or Discord). Remember the STOP button we used to have in browsers? Bring that back, and have it also stop all scripts on the page.

 

But really, the problem here shouldn't even lie in the web browser. Lots of applications use CEF (Chromium Embedded Framework) as part of their insanely bloated HTML5 UIs (yes, Epic and Steam, that's you, along with every MMO game launcher), and all it takes is sneaking an ad into one of these to directly target the user's credentials in the launcher, because you know they're there.

 

Given how often account takeovers happen in MMO games (literally idiots giving their account credentials away to fake cheat sites), you'd think the problem is that most software interacting with the web doesn't implement even the bare minimum of security. Like, why isn't 2FA with TOTP the minimum required to access anything that processes a credit card?
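On the TOTP point: the algorithm (RFC 6238) is small enough to fit in a dozen lines, so there's little excuse for services not to offer it. A minimal standard-library Python sketch (the key below is the RFC's published test key, shown only so the code can be checked against the spec's test vectors, never something to use in production):

```python
import hmac, hashlib, struct

def totp(secret: bytes, for_time: int, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the count of 30-second steps since epoch."""
    counter = struct.pack(">Q", for_time // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII key "12345678901234567890",
# time T=59 seconds, 8 digits:
print(totp(b"12345678901234567890", 59, digits=8))  # prints 94287082
```

In real use the code is computed over `time.time()` and the shared secret is provisioned to the user's authenticator app via a QR code.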

32 minutes ago, Kisai said:

I'm hoping that people who thought WASM was a good idea get smacked with a newspaper.

https://webassembly.org/docs/security/

 

The main target, though, would be data leaking out of VMs. So if you want to ensure that can't happen, you have to cripple features like hyperthreading that share the cache, or other CPU features that operate on the same data in parallel.

https://www.intel.com/content/www/us/en/architecture-and-technology/mds.html

 

Like, the average person simply won't understand how it works; heck, the average tech news site doesn't have a meaningful way of explaining it. So when wider non-tech news sites pick it up, it gets sensationalized as though all computers can be taken over by malicious JavaScript, when the reality is more "don't let your browser sit on pages you don't trust": basically, close the browser tab when you're done with it. Browsers could also mitigate the damage by indicating when JavaScript is still running in a tab, and by pausing all scripts when the tab is switched away unless the user has explicitly given it permission to run in the background (e.g. Slack or Discord). Remember the STOP button we used to have in browsers? Bring that back, and have it also stop all scripts on the page.

 

But really, the problem here shouldn't even lie in the web browser. Lots of applications use CEF (Chromium Embedded Framework) as part of their insanely bloated HTML5 UIs (yes, Epic and Steam, that's you, along with every MMO game launcher), and all it takes is sneaking an ad into one of these to directly target the user's credentials in the launcher, because you know they're there.

 

Given how often account takeovers happen in MMO games (literally idiots giving their account credentials away to fake cheat sites), you'd think the problem is that most software interacting with the web doesn't implement even the bare minimum of security. Like, why isn't 2FA with TOTP the minimum required to access anything that processes a credit card?

So no then?


2 minutes ago, Bombastinator said:

So no then?

The entire HTML5/CEF-ization of apps will render turning off JavaScript impossible. Ever tried to use Gmail or Twitter with scripts off? It just doesn't work, unless you also make the site think you're using a 10-year-old mobile phone.

 

6 minutes ago, Kisai said:

The entire HTML5/CEF-ization of apps will render turning off JavaScript impossible. Ever tried to use Gmail or Twitter with scripts off? It just doesn't work, unless you also make the site think you're using a 10-year-old mobile phone.

 

I did find that it was impossible to turn off JavaScript in Edge. I managed it in Firefox on PC and Safari on iPhone, but I can't turn it off in Firefox on iPhone or Edge on PC. I don't much care who the hell gets smacked with a newspaper; it's sounding like I'm screwed. The Covid thing means staying off the internet is dangerous, and this Foreshadow thing is sounding like being on the internet is dangerous.

Gmail is going to hurt a lot, because it's one of my main communication mediums. Twitter doesn't affect me.

 

This is sounding a lot like Covid for the internet.   


32 minutes ago, Bombastinator said:

I did find that it was impossible to turn off JavaScript in Edge. I managed it in Firefox on PC and Safari on iPhone, but I can't turn it off in Firefox on iPhone or Edge on PC. I don't much care who the hell gets smacked with a newspaper; it's sounding like I'm screwed. The Covid thing means staying off the internet is dangerous, and this Foreshadow thing is sounding like being on the internet is dangerous.

Gmail is going to hurt a lot, because it's one of my main communication mediums. Twitter doesn't affect me.

 

This is sounding a lot like Covid for the internet.   

I can understand being scared, but personally I don’t think you need to be worried about this exploit at all. 
 

This is a dangerous-sounding release, but it is extremely slow to utilise. In the paper, their proof-of-concept exploit took 15 minutes to extract a single 32-bit secret from a CPU register. It's just not the kind of thing that hacking groups wanting to steal passwords are going to try and use: it's finicky to utilise properly and time-consuming to perform, and there's lower-hanging fruit for them than this.
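To put that extraction speed in perspective, some back-of-the-envelope arithmetic (assuming, optimistically for the attacker, that the 32-bits-in-15-minutes rate scales linearly):

```python
bits = 32
seconds = 15 * 60                 # 15 minutes of undisturbed measurement
rate = bits / seconds             # leak rate in bits per second
print(round(rate, 3))             # ~0.036 bit/s

key_hours = 256 / rate / 3600     # time to leak a full 256-bit key
print(round(key_hours, 1))        # ~2.0 hours
```

Two hours of uninterrupted, noise-free probing for one key is a tall order on a consumer machine, which supports the point that this is mainly a cloud/datacenter concern.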
 

The people who need to worry about this are datacenter customers and government agencies, but that was always the case with these speculative-execution exploits. They don't actually matter at the consumer level.


