Microarchitectural attacks have been all the rage. For the past two years, we’ve seen attacks like Meltdown, Spectre, Foreshadow/L1TF, and ZombieLoad, along with their many variants, all of which find different ways to probe or leak data from a victim to an attacker. A new attack, published on March 10th by the same research teams that found the previous exploits, turns this principle on its head, and allows an attacker to inject their own values into the victim’s code. The injected data can be either instructions or memory addresses, allowing the attacker to obtain data from the victim. This data injection bypasses even stringent security enclave environments, such as Intel’s Software Guard Extensions (SGX), and the researchers claim that successful mitigation may result in a slowdown of 2x to 19x for any SGX code.

The High-Level Overview

The attack is formally known as LVI, short for ‘Load Value Injection’, and has the MITRE reference CVE-2020-0551. The official website for the attack is https://lviattack.eu/. The attack was discovered on April 4th, 2019 and reported to Intel, then disclosed publicly on March 10th, 2020. A second group independently discovered and produced a proof-of-concept for one LVI attack variant in February 2020.

Currently Intel has plans to provide mitigations for SGX-class systems; however, non-SGX environments (such as VMs or containers that aren’t programmed with SGX) will remain vulnerable. The researchers state that ‘in principle any processor that is vulnerable to Meltdown-type data leakage would also be vulnerable to LVI-style data injection’. The researchers’ focus was primarily on breaking Intel’s SGX protections, and proof-of-concept code is available. Additional funding for the project was provided by ‘generous gifts from Intel, as well as gifts from ARM and AMD’ – one of the researchers involved has stated on social media that some of his research students are at least part-funded by Intel.

Intel was involved in the disclosure, and has a security advisory available, listing the issue as a 5.6 MEDIUM on the CVSS severity scale. Intel also lists all the processors affected, including Atom, Core, and Xeon parts going as far back as Silvermont and Sandy Bridge, and even including the newest processors, such as Ice Lake (10th Gen)* and the Tremont Atom core, which isn’t on the market yet.

*The LVI website says that Ice Lake isn’t vulnerable; however, Intel’s guidelines say it is.

*Update: Intel has now updated its documents to say both Ice Lake and Tremont are not affected.

All told, LVI's moderate CVSS score is the same as the scores assigned to Meltdown and Spectre back in 2018. This reflects the fact that LVI has a similar risk scope to those earlier exploits, which is to say data disclosure, though in practice LVI is perhaps even more niche. Whereas Meltdown and Spectre were moderately complex attacks that could be used against virtually any "secure" program, Intel and the researchers behind LVI are largely painting it as a theoretical attack, primarily useful against SGX in particular.

The practical security aspects are a mixed bag, then. For consumer systems, at least, SGX is rarely used outside of DRM purposes (e.g. 4K Netflix), so an exploit against it isn't likely to upend too much. Nonetheless, the researchers behind LVI have told ZDNet that the attack could theoretically be delivered via JavaScript, so it could potentially be delivered in a drive-by fashion, as opposed to requiring some kind of local code execution. The upshot, at least, is that LVI is already thought to be very hard to pull off, and JavaScript certainly wouldn't make that any easier.

As for enterprise and business users, the potential risk is greater due to both the more widespread use of SGX there, and the use of shared systems (virtualization). Ultimately such concerns are going to be on a per-application/per-environment basis, but in the case of shared systems in particular, the biggest risk is leaking information from another VM, or from a higher privileged user. Enterprises, in turn, are perhaps the best equipped to deal with the threat of LVI, but it comes after Meltdown and Spectre already upended things and hurt system performance.

The Attack

Load Value Injection is a four-stage process:

  1. The attacker fills a microarchitectural buffer with a chosen value
  2. The attacker induces a fault or ‘assist’ on a memory load within the victim’s software, causing the processor to transiently forward the buffered value to that load (redirecting the victim’s dataflow)
  3. The attacker’s injected value steers a ‘code gadget’ in the victim, transiently running computation of the attacker’s choosing
  4. The traces of the attack are hidden, as the processor squashes the faulting load and rolls back its architectural effects
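
To make the pattern more concrete, below is a minimal, compilable C sketch of what such a ‘code gadget’ can look like. All of the names (attacker_fill, victim_gadget, probe_array, trusted_ptr_slot) are hypothetical illustrations rather than code from the actual proof-of-concept, and the snippet does nothing malicious if simply run – the transient behavior only manifests when the victim’s load is genuinely made to fault or assist, as described above.

```c
#include <stdint.h>

/* All names here are hypothetical, for illustration only. */
uint8_t   probe_array[256 * 4096]; /* covert-channel buffer: one cache line per byte value */
uint8_t **trusted_ptr_slot;        /* memory location the victim loads a pointer from      */

/* Step 1: the attacker plants a chosen value in a microarchitectural
 * buffer, e.g. by performing a store that lingers in the store buffer. */
void attacker_fill(uint8_t **slot, uint8_t *chosen_address)
{
    *slot = chosen_address;        /* value now sits in the store buffer */
}

/* Steps 2 and 3: the victim's first load below is made to fault or
 * assist (e.g. via manipulated page tables). While the fault resolves,
 * the CPU may transiently forward the attacker's buffered value instead
 * of the real pointer. The dependent loads then act as the 'code
 * gadget': they dereference an attacker-chosen address and encode the
 * resulting byte into the cache before everything rolls back (step 4). */
uint8_t victim_gadget(void)
{
    uint8_t *ptr    = *trusted_ptr_slot;  /* the faulting/assisted load   */
    uint8_t  secret = *ptr;               /* dereference injected pointer */
    return probe_array[secret * 4096];    /* transient cache encoding     */
}
```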

The other recent microarchitectural exploits, such as Spectre, Meltdown, L1TF, and ZombieLoad, are all about data leaks: they rely on data being leaked or extracted from various buffers that are effectively ‘all-access’ from the microarchitectural standpoint. LVI is different, in that it mounts a more direct ‘attack’ on the system in order to extract that data. While this means the attacker has to clean up after themselves, what the attack can accomplish also makes it potentially more dangerous than the previous exploits. The difference in approach means that current mitigations don’t work here, and according to the researchers, this exploit essentially means that Intel’s secure enclave architecture requires significant changes in order to be trustworthy again.

The focus of the attack has been on Intel’s secure enclave technology, known as SGX, due to the nature of that technology. As reported by The Register, it is in fact the nature of SGX that assists the attack: because SGX assumes the OS may be malicious, an attacker with OS privileges can alter the page tables backing an enclave and force the enclave’s memory loads to fault (point 2 above).
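
For a sense of the mechanics behind that fault induction, here is a small, self-contained user-space analogue (my own sketch, not the researchers’ code): it uses mprotect() to revoke a page’s permissions so that any subsequent load from it faults. In a real LVI attack on SGX, a malicious OS or VMM does the equivalent to the enclave’s page tables, which the enclave itself cannot prevent.

```c
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

/* User-space analogue of fault induction: revoke a page's permissions
 * so the next load from it page-faults. A malicious OS can do the same
 * to an SGX enclave's pages via the page tables it controls.           */
int main(void)
{
    const size_t page_size = 4096;  /* assume 4 KiB pages for simplicity */
    uint8_t *page = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) { perror("mmap"); return 1; }

    page[0] = 42;  /* a normal, successful access */

    /* Revoke all access: any later load from 'page' now faults. */
    if (mprotect(page, page_size, PROT_NONE) != 0) { perror("mprotect"); return 1; }

    printf("permissions revoked; a load from %p would now fault\n", (void *)page);
    return 0;
}
```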

Intel’s Own Analysis

Intel’s own deep dive into the problem explains that:

‘If an adversary can cause a specified victim load to fault, assist, or abort, the adversary may be able to select the data to have forwarded to dependent operations by the faulting/assisting/aborting load.

For certain code sequences, those dependent operations may create a covert channel with data of interest to the adversary. The adversary may then be able to infer the data's value through analyzing the covert channel.’
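
The ‘covert channel’ Intel describes is typically the CPU cache. Below is a hedged sketch of what the receiving end of such a channel can look like: it flushes a probe array, lets the (hypothetical) victim gadget from the earlier sketch run, then times an access to each cache line – the one line that comes back fast reveals the byte value the transient gadget touched. The 80-cycle threshold is an assumption and is machine-dependent.

```c
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>           /* _mm_clflush, __rdtscp */

#define VALUES  256
#define STRIDE  4096             /* one page per value to sidestep the prefetcher */

uint8_t probe_array[VALUES * STRIDE];

/* Time a single memory access with the timestamp counter. */
static uint64_t time_access(volatile uint8_t *addr)
{
    unsigned aux;
    uint64_t start = __rdtscp(&aux);
    (void)*addr;                              /* the access being timed */
    return __rdtscp(&aux) - start;
}

int main(void)
{
    /* 1. Flush every probe line out of the cache. */
    for (int i = 0; i < VALUES; i++)
        _mm_clflush(&probe_array[i * STRIDE]);

    /* 2. ...the victim gadget would transiently touch one line here... */

    /* 3. Reload each line; a fast access means it was cached, which
     *    reveals the byte value the gadget encoded.                   */
    for (int i = 0; i < VALUES; i++) {
        uint64_t cycles = time_access(&probe_array[i * STRIDE]);
        if (cycles < 80)                      /* machine-dependent threshold */
            printf("candidate byte value: %d (%llu cycles)\n",
                   i, (unsigned long long)cycles);
    }
    return 0;
}
```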

Intel goes on to say that in a fully trusted environment, this shouldn’t be an issue:

‘Due to the numerous, complex requirements that must be satisfied to implement the LVI method successfully, LVI is not a practical exploit in real-world environments where the OS and VMM are trusted.’

But Intel then states that, because its own SGX solution is the vector for the attack, these requirements aren’t as strict:

‘Because of Intel SGX's strong adversary model, attacks on Intel SGX enclaves loosen some of these requirements. Notably, the strong adversary model of Intel SGX assumes that the OS or VMM may be malicious, and therefore the adversary may manipulate the victim enclave's page tables to cause arbitrary enclave loads to fault or assist.’

Then, to state the obvious, Intel has a line for the ‘if you’re not doing anything wrong, it’s not a problem’ defense:

‘Where the OS and VMM are not malicious, LVI attacks are significantly more difficult to perform, even against Intel SGX enclaves.’

As a final note, Intel’s official line is that this issue is not much of a concern for non-SGX environments where the OS and VMM are trusted. The researchers agree: while LVI is particularly severe for SGX, they believe it is more difficult to mount the attack in a non-SGX setting. That means processors from other companies are less exposed to this style of attack; however, those that are susceptible to Meltdown might still be compromised the same way.

The Fix, and the Cost

Both Intel and the researchers have proposed the same solution to the LVI class of attacks. The fix isn’t being planned at the microcode level, but at the code level, with compiler and SDK updates. The way to get around this issue is essentially to serialize instruction execution through the processor around vulnerable loads, enforcing a very specific order of operations.

Now remember that a lot of modern processor performance relies on several things, such as the ability to rearrange micro-ops inside a core (out-of-order execution), and to run multiple micro-ops in a single cycle (superscalar execution, raising instructions per cycle). What these fixes do is essentially eliminate both of these whenever potentially attackable instructions are in flight.

For those that aren’t programmers, there exists a term in programming called a ‘fence’. A broad definition of a fence is to essentially make sure a program (typically a program running across several cores) stop at a particular point, and check to make sure everything is ok.

So, for example, imagine you have one core doing an addition, and another core doing a division at the same time. Addition is a lot quicker than division, so if there are a lot of parallel calculations to do, you might be able to fire off 4-10 additions in the time it takes to do a single division. However, if there is the potential for the additions and divisions to touch the same place in memory, you might need a fence after each addition+division pair, to make sure that there’s no conflict.

In a personal capacity, when I wrote compute programs for GPUs, I had to use fences when I moved from a parallel portion of my code to a serial portion of my code, and the fence made sure that everything I needed for the serial portion of my code, computed from the parallel portion, had been completed before moving on.
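
As a generic illustration of the concept (not the LVI mitigation itself), the C11 sketch below uses atomic_thread_fence() to make sure that results written by one core are fully visible before another core consumes them; the producer/consumer split and the variable names are a made-up example.

```c
#include <stdatomic.h>
#include <stdint.h>

int64_t    sum;       /* result from the 'addition' core            */
int64_t    quotient;  /* result from the 'division' core            */
atomic_int ready;     /* flag raised once both results are in place */

/* Producer side: write the results, fence, then raise the flag.
 * The fence guarantees no core can observe the flag without also
 * observing the completed results.                                */
void publish(int64_t s, int64_t q)
{
    sum = s;
    quotient = q;
    atomic_thread_fence(memory_order_release);   /* the fence */
    atomic_store_explicit(&ready, 1, memory_order_relaxed);
}

/* Consumer side: wait at the fence until the flag is up, after
 * which it is safe to combine both results.                       */
int64_t combine(void)
{
    while (atomic_load_explicit(&ready, memory_order_relaxed) == 0)
        ;  /* spin until the other core is done */
    atomic_thread_fence(memory_order_acquire);
    return sum + quotient;
}
```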

So the solution to LVI is to add these fences into the code – specifically, after every vulnerable memory load. This means that the system/program has to wait until each memory load is complete before continuing, potentially stalling the core for 100 nanoseconds or more at a time. There is a knock-on effect as well: when a function returns a value, there are various ways for the ‘return’ to be made, and some of those (such as x86’s ret instruction, which itself loads the return address from the stack) are no longer viable with the new LVI attacks.
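
In x86 terms, the fence in question is the lfence instruction. A minimal sketch of what a hardened load amounts to is below – the compiler and SDK updates insert these automatically, and hardened_load is a made-up name for illustration:

```c
#include <stdint.h>
#include <emmintrin.h>  /* _mm_lfence */

/* A load hardened in the style of the LVI mitigation: the lfence
 * prevents any later instruction from executing speculatively until
 * the load has actually completed, so a faulting load can no longer
 * forward an attacker-injected value to dependent code.             */
uint8_t hardened_load(const uint8_t *ptr)
{
    uint8_t value = *ptr;  /* the potentially faulting/assisted load  */
    _mm_lfence();          /* serialize: wait for the load to finish  */
    return value;          /* dependent code only ever sees real data */
}
```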

The researchers are quite clear about how much this fix is expected to hurt performance – depending on the application and the optimizations applied, we’re likely to see slowdowns of 2x to 19x. The researchers examined compiler variants on an i7-6700K for OpenSSL and an i9-9900K for SPEC2017.

Intel has not commented on potential performance reductions.

For those that could be affected, Intel gives the following advice for SGX system users:

  • Ensure the latest Intel SGX PSW 2.7.100.2 or above for Windows and 2.9.100.2 or above for Linux is installed

And for SGX Application Providers:

  • Review the technical details.
  • Intel is releasing an SGX SDK update to assist the SGX application provider in updating their enclave code. To apply the mitigation, SDK version 2.7.100.2 or above for Windows and 2.9.100.2 or above for Linux should be used.
  • Increase the Security Version Number (ISVSVN) of the enclave application to reflect that these modifications are in place.
  • For solutions that utilize Remote Attestation, refer to the Intel SGX Attestation Technical Details to determine if you need to implement changes to your SGX application for the purpose of SGX attestation.

Final Words

The researchers, for their part, told The Register that:

"We believe that none of the ingredients for LVI are exclusive to Intel processors. However, LVI turns out to be most practically exploitable on Intel processors … certain design decisions that are specific to the Intel SGX architecture (i.e. untrusted page tables). We consider non-SGX LVI attacks [such as those on AMD, Arm and others] of mainly academic interest and we agree with Intel's current assessment to not deploy extra mitigations for non-SGX environments, but we encourage future research to further investigate LVI in non-SGX environments," 

Along the same lines, all of the major chip architecture companies appear to have been informed of the findings in advance, as has Microsoft, should parts of the Windows kernel need adjustment.

Technically, there are several variants of LVI, depending on the type of data injected and the kind of fault or assist abused; the full taxonomy can be found on the LVI website.

Overall, Intel has had a rough ride with its SGX platform. It had a complicated launch with Skylake – the feature wasn’t enabled on the first batches of processors, only on later ones – and since then SGX has been the focus of a number of these recent attacks on processors. The need for a modern core, especially one involved in everything from IoT all the way up to the cloud and enterprise, to have an equivalent of a safe enclave architecture is paramount. Yet up until this point, secure enclaves have been added onto certain processors, rather than the processors being built from the ground up with them in mind – initially, even the new Ice Lake and Tremont cores were listed as affected, though Intel has since revised that. The attack surface of Intel’s SGX solution, compared to equivalents from AMD or Apple, has grown in recent months due to this string of microarchitectural attacks, and the only way around them is to invoke performance-limiting restrictions on code development. Some paradigm has to change.

Comments


  • Teckk - Wednesday, March 11, 2020 - link

    While issues exist in all systems, whether hardware or software, it has become a little too frequent in Intel's case. How is this affecting their data center sales? Shouldn't those who buy/use Xeons be concerned? Should we be concerned if, say, a banking system runs on such a platform?
  • Sharpman - Wednesday, March 11, 2020 - link

    Personally I think that these issues will start arising on AMD CPUs as soon as they gain market share and become an object of attacks. Intel has been in the leading position for years, hence all these cases. Same with all operating systems and browsers: the more popular something is, the more people will try to find holes in it.
  • darkswordsman17 - Wednesday, March 11, 2020 - link

    Certainly there's some truth to that, and absolutely vulnerabilities happen to pretty much everyone. That's not the real issue here though, which is Intel's response. It seems to be better for this one (by better, I mean they're acknowledging it and providing some of their own technical analysis), but they ignored some of the other vulnerabilities (one group, I think, said they alerted Intel about a vulnerability over a year before releasing their findings, and Intel essentially blew them off until it blew up into big news once it was publicly known). It's nice to see AnandTech reporting on this again. At one point AT's official response (by that I mean, one of the editors replying to a comment on an article) was "we're waiting for Intel's response", which simply never came. And then AT stopped reporting on it altogether. Really tarnished my opinion of AnandTech (especially after the CTS Labs fiasco). Then again, perhaps it's simply because there's an official response from Intel with regards to it that we're getting this article?
  • Spunjji - Thursday, March 12, 2020 - link

    Pretty sure that first thing you're referring to is Meltdown. We found out about it publicly a long, long time after it was discovered and Intel still didn't have a patch.
  • rocky12345 - Wednesday, March 11, 2020 - link

    Yes, it is true that as a company gains market share it becomes a target for attacks. With that said, 99% if not all of these so-called exploits – found since 2017 and made public since 2018 – are nothing more than researchers in labs trying to find ways to break the CPU so it can be attacked.

    Since 2018 it has become the rage to report these things and make them public. Why? That's easy: job security for those involved in finding these exploits. If the ones involved were not getting paid a wage to work at these labs, we would never hear of these exploits, because no one would take the time to find them without getting paid an income.

    As stated over and over again, 99% of these so-called exploits need access to the system itself – as in, being in front of it – to make things happen. So for most of these exploits, if you don't let strange people into your home to access your computer, everything should be fine. Yes, some of them can be done through the web browser, but even then the risks are small.

    The other day when someone reported an exploit on AMD CPUs, the post stated the researchers had to reverse engineer stuff to make the exploit work. My reply to that was that nearly 100% of malware writers would not have a clue about the inner workings of a CPU, and for them to reverse engineer something like that is nearly impossible and not worth their time to figure out.

    Of course this is just my opinion, and my take is that if these researchers were to just report what they find to the companies in question and not make them public so every Tom, Dick & Harry knows about them, maybe things would be safer for everyone.
  • rahvin - Wednesday, March 11, 2020 - link

    With state-sponsored cracking and exploitation of digital assets, your assertion that these would never be discovered is foolish. They called it Spectre because they knew it would haunt us for a decade.

    Need I remind you that timing-based attacks have been known about since the early 00's. The first practical demonstration of these was the Spectre attacks, but that's all it took. Once people knew how the first attack worked, it was only a matter of time until additional exploits were discovered using the first as a template.

    Spectre variants will continue as researchers explore these new attacks. How bad the exploits get depends entirely on the vendors' desire for security over speed.
  • Spunjji - Thursday, March 12, 2020 - link

    You seem to be missing the point of all this - it's datacentres, large businesses, financial houses, legal operations, governments and the like that are worried about these vulnerabilities - not ordinary home users. That you need physical access is a barrier, but not enough of one. That's why these security researchers are paid to do it; it's not for "job security" (you don't get security in a job nobody wants you to do), it's because it's better for white-hat teams to find the flaws than it is to find out after the black-hats are already exploiting them.

    Suggesting that malware writers have no incentive to reverse-engineer things is contrary to the facts. Some of these people work for state governments, and they often have more resources and more of an incentive to do this than any private company, hence the outsourcing to specialist security researchers.

    Your final paragraph makes no sense at all. If the white hats don't perform a controlled release of the info, nobody knows to patch their systems. They literally *have* to go public, and the methodology of most is to do so *after* the manufacturer of the affected product(s) has developed a patch. There have been some exceptions - either where the affected company took too long to respond (Meltdown) or where the researchers were doing an obvious hit job (Ryzenfall etc).
  • rocky12345 - Thursday, March 12, 2020 - link

    OK, then the short answer for making it all public would be to do so without providing all of the needed details – pretty much instructions on how they did it. They could release a less detailed version to the public, letting us know they found something that they were able to reverse engineer to break the CPU.

    We, the public, don't need to know the full details. What I am saying is: release the summary info to the public, and give the more detailed version to the company whose product was exploited. This way we the public know about it, but do not have enough information to do anything other than wait for a patch, while the company whose product was exploited has all of the information it needs to patch or fix the problem, because it got the full breakdown of what was found.

    This way both we and the company know about it. Because we know about it, we can update our systems as needed, and because it was safely made public, the company in question has to do something about it or face backlash from the public.
  • rocky12345 - Thursday, March 12, 2020 - link

    No edit, so I have to add another comment. These so-called flaws everyone is going on about – maybe they were not flaws in the hardware at all. If someone has to reverse engineer something and then write code that attacks the hardware, that tells me they had to go out of their way to break the hardware. With that said, pretty much everything ever created by man is flawed and can be broken, because as humans we are not perfect, and anything we make will not be perfect and can be broken or exploited by someone willing to take the time to break the product.

    Anything that is made by us works as intended most of the time, but if you introduce someone into the picture who is trying to break the product, yes, it will break every time, because it was not created perfect. I am willing to bet that over the course of time there will be several more exploits found in CPUs and pretty much everything else in our computers, and everyone will claim they are flaws. They are not flaws until someone goes in and tries to break it by reverse engineering and creating targeted code to make an exploit.
  • Carmen00 - Friday, March 13, 2020 - link

    I've worked in information security and I can tell you that a lot of your assumptions are wrong.

    1. Malware authors are often highly-motivated and highly-competent, perhaps more so than academic researchers. Yes, they know precisely how a CPU works, and reverse-engineering something is not difficult for them. We wouldn't have ROP attacks (for example) if they were stupid.

    2. Malware authors SELL their products to criminals. Those criminals are unlikely to have the skills to do the job themselves, and those criminals are often the ones who are caught. The original authors of much of the malware around the world today are simply unknown.

    3. Security research leads to more security. If your system relies on security by obscurity, then the only open question is how many attacks you're not detecting per unit time -- because you don't know enough to even recognize them as attacks. When everything's open, at least you can detect the attacks. When everything's private, attackers have an immense advantage.

    4. Partial disclosure just doesn't work. It's been tried and it's failed. Firstly, you need to understand an attack thoroughly to understand how to defend against it. All you're doing with partial disclosure is putting up a sign that says "There's a vulnerability here! Anyone want to have a go at finding it?", which is pretty good motivation for a "black hat" hacker. Secondly, it gives a huge incentive for a company to PARTIALLY fix a vulnerability, or just claim they've fixed it, since they know that you can't release attack code to call them out on their BS.

    5. Secure software is possible. Secure hardware is possible. There is plenty of research on provably secure systems and what security actually means. It's not easy reading, but maybe you should have a look at the decades of research before you pronounce your opinion.
