Microsoft hit by Storm season – a tale of two semi-zero days

At the tail-end of last week, Microsoft published a report entitled Analysis of Storm-0558 techniques for unauthorized email access.

In this rather dramatic document, the company’s security team revealed the background to a previously unexplained hack in which data including email text, attachments and more were accessed:

from approximately 25 organizations, including government agencies and related consumer accounts in the public cloud.

The bad news, even though only 25 organisations were apparently attacked, is that this cybercrime may nevertheless have affected a large number of individuals, given that some US government bodies employ anywhere from tens to hundreds of thousands of people.

The good news, at least for the vast majority of us who weren’t exposed, is that the tricks and bypasses used in the attack were specific enough that Microsoft threat hunters were able to track them down reliably, so the final total of 25 organisations does indeed seem to be a complete hit-list.

Simply put, if you haven’t yet heard directly from Microsoft about being a part of this hack (the company has obviously not published a list of victims), then you may as well assume you’re in the clear.

Better yet, if better is the right word here, the attack relied on two security failings in Microsoft’s back-end operations, meaning that both vulnerabilities could be fixed “in house”, without pushing out any client-side software or configuration updates.

That means there aren’t any critical patches that you need to rush out and install yourself.

The zero-days that weren’t

Zero-days, as you know, are security holes that the Bad Guys found first and figured out how to exploit, thus leaving no days available during which even the keenest and best-informed security teams could have patched in advance of the attacks.

Technically, therefore, these two Storm-0558 holes can be considered zero-days, because the crooks busily exploited the bugs before Microsoft was able to deal with the vulnerabilities involved.

However, given that Microsoft carefully avoided the word “zero-day” in its own coverage, and given that fixing the holes didn’t require all of us to download patches, you’ll see that we referred to them in the headline above as semi-zero days, and we’ll leave the description at that.

Nevertheless, the nature of the two interconnected security problems in this case is a vital reminder of three things, namely that:

  • Applied cryptography is hard.
  • Security segmentation is hard.
  • Threat hunting is hard.

The first signs of evildoing showed crooks sneaking into victims’ Exchange data via Outlook Web Access (OWA), using illicitly acquired authentication tokens.

Typically, an authentication token is a temporary web cookie, specific to each online service you use, that the service sends to your browser once you’ve proved your identity to a satisfactory standard.

To establish your identity strongly at the start of a session, you might need to enter a password and a one-time 2FA code, to present a cryptographic “passkey” device such as a Yubikey, or to unlock and insert a smart card into a reader.

Thereafter, the authentication cookie issued to your browser acts as a short-term pass so that you don’t need to enter your password, or to present your security device, over and over again for every single interaction you have with the site.

You can think of the initial login process like presenting your passport at an airline check-in desk, and the authentication token as the boarding card that lets you into the airport and onto the plane for one specific flight.

Sometimes you might be required to reaffirm your identity by showing your passport again, such as just before you get on the plane, but often showing the boarding card alone will be enough for you to affirm your “right to be there” as you make your way around the airside parts of the airport.
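
If you’re a programmer, you can picture the “log in once, reuse a token” flow in a few lines of Python. This is just a sketch using the popular requests module; the URLs, form fields and cookie name are invented for illustration, not taken from any real service:

    import requests

    session = requests.Session()

    # Step 1: prove your identity the "hard" way (password plus 2FA code).
    session.post(
        "https://mail.example.com/login",
        data={"user": "alice", "password": "correct horse battery staple", "otp": "123456"},
    )

    # The server's reply sets an authentication cookie, which the Session object
    # stores and automatically attaches to every subsequent request...
    print(session.cookies.get_dict())   # e.g. {'auth_token': 'eyJhbGciOi...'}

    # Step 2: ...so later requests are authorised by the token alone, with no
    # password or 2FA challenge needed for each individual interaction.
    inbox = session.get("https://mail.example.com/owa/inbox")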

Likely explanations aren’t always right

When crooks start showing up with someone else’s authentication token in the HTTP headers of their web requests, one of the most likely explanations is that the criminals have already implanted malware on the victim’s computer.

If that malware is designed to spy on the victim’s network traffic, it typically gets to see the underlying data after it’s been prepared for use, but before it’s been encrypted and sent out.

That means the crooks can snoop on and steal vital private browsing data, including authentication tokens.

Generally speaking, attackers can’t sniff out authentication tokens as they travel across the internet any more, as they commonly could until about 2010. That’s because every reputable online service these days requires that traffic to and from logged-on users must travel via HTTPS, and only via HTTPS, short for secure HTTP.
HTTPS uses TLS, short for transport layer security, which does what its name suggests. All data is strongly encrypted as it leaves your browser but before it gets onto the network, and isn’t decrypted until it reaches the intended server at the other end. The same end-to-end data scrambling process happens in reverse for the data that the server sends back in its replies, even if you try to retrieve data that doesn’t exist and all the server needs to tell you is a perfunctory 404 Page not found.
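
If you want to see the TLS layer for yourself, Python’s standard library can set up an encrypted connection in just a few lines. The hostname below is simply a placeholder; the point is that everything written to the wrapped socket, authentication cookies included, is encrypted before it ever reaches the network:

    import socket, ssl

    context = ssl.create_default_context()   # verifies the server's certificate for us

    with socket.create_connection(("example.com", 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
            print(tls_sock.version())         # e.g. 'TLSv1.3'
            # Even a request for a page that doesn't exist travels encrypted...
            tls_sock.sendall(b"GET /nonexistent HTTP/1.1\r\nHost: example.com\r\n\r\n")
            # ...and so does the reply (most likely a 404), which only gets
            # decrypted once it's safely back inside our own process.
            print(tls_sock.recv(1024)[:20])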

Fortunately, Microsoft threat hunters soon realised that the fraudulent email interactions weren’t down to a problem triggered at the client side of the network connection, an assumption that would have sent the victim organisations off on 25 separate wild goose chases looking for malware that wasn’t there.

The next-most-likely explanation is one that in theory is easier to fix (because it can be fixed for everyone in one go), but in practice is more alarming for customers, namely that the crooks have somehow compromised the process of creating authentication tokens in the first place.

One way to do this would be to hack into the servers that generate them and to implant a backdoor to produce a valid token without checking the user’s identity first.

Another way, which is apparently what Microsoft originally investigated, is that the attackers were able to steal enough data from the authentication servers to generate fraudulent but valid-looking authentication tokens for themselves.

This implied that the attackers had managed to steal one of the cryptographic signing keys that the authentication server uses to stamp a “seal of validity” into the tokens it issues, to make it as good-as-impossible for anyone to create a fake token that would pass muster.

By using a secure private key to add a digital signature to every access token issued, an authentication server makes it easy for any other server in the ecosystem to check the validity of the tokens that they receive. That way, the authentication server can even work reliably across different networks and services without ever needing to share (and regularly to update) a leakable list of actual, known-good tokens.
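
To make that concrete, here’s a minimal Python sketch of the sign-once, verify-anywhere pattern, using the PyJWT and cryptography modules. The user, claims and key ID are invented for illustration, and this is emphatically not Microsoft’s own code, though real Azure AD tokens are signed JSON Web Tokens of broadly this sort:

    import datetime
    import jwt  # pip install pyjwt cryptography
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # The authentication server keeps the private half of the signing key secret...
    signing_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    private_pem = signing_key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption(),
    )
    # ...and publishes only the public half for other services to verify with.
    public_pem = signing_key.public_key().public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )

    # Issue a signed token once the user has proved their identity.
    token = jwt.encode(
        {
            "sub": "alice@example.com",                   # who the token vouches for
            "aud": "corporate-mail",                      # which service it's meant for
            "exp": datetime.datetime.now(datetime.timezone.utc)
                   + datetime.timedelta(hours=1),         # a short-term pass, not forever
        },
        private_pem,
        algorithm="RS256",
        headers={"kid": "aad-key-1"},                     # which key did the signing
    )

    # Any other server can check the token with nothing but the public key,
    # and without ever consulting a shared list of known-good tokens.
    claims = jwt.decode(token, public_pem, algorithms=["RS256"], audience="corporate-mail")
    print(claims["sub"])   # 'alice@example.com'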

A hack that wasn’t supposed to work

Microsoft ultimately determined that the rogue access tokens in the Storm-0558 attack were legitimately signed, which seemed to suggest that someone had indeed pinched a company signing key…

…but they weren’t actually the right sort of tokens at all.

Corporate accounts are supposed to be authenticated in the cloud using Azure Active Directory (AD) tokens, but these fake attack tokens were signed with what’s known as an MSA key, where MSA is short for Microsoft account, the consumer side of Microsoft’s cloud ecosystem.

Loosely speaking, the crooks were minting fake authentication tokens that passed Microsoft’s security checks, yet those tokens were signed as if for a user logging into a personal Outlook.com account instead of for a corporate user logging into a corporate account.

In one word, “What?!!?!”

Apparently, the crooks weren’t able to steal a corporate-level signing key, only a consumer-level one (that’s not a disparagement of consumer-level users, merely a wise cryptographic precaution to divide-and-separate the two parts of the ecosystem).

But having pulled off this first semi-zero day, namely acquiring a Microsoft cryptographic secret without being noticed, the crooks apparently found a second semi-zero day: a way to pass off an access token signed with a consumer-account key, which should have signalled “this key does not belong here”, as if it were an Azure AD-signed token instead.

In other words, even though the crooks were stuck with the wrong sort of signing key for the attack they had planned, they nevertheless found a way to bypass the divide-and-separate security measures that were supposed to stop their stolen key from working.
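
As a purely hypothetical illustration (again, this is not Microsoft’s actual implementation), here’s the sort of verification blunder that can undo a divide-and-separate precaution: a verifier that checks a token’s signature against every key it knows about, rather than only against the keys that are allowed to vouch for corporate logins. The key names and services are made up for this sketch:

    import jwt  # pip install pyjwt cryptography
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    def make_keypair():
        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        private_pem = key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.PKCS8,
            serialization.NoEncryption(),
        )
        public_pem = key.public_key().public_bytes(
            serialization.Encoding.PEM,
            serialization.PublicFormat.SubjectPublicKeyInfo,
        )
        return private_pem, public_pem

    msa_priv, msa_pub = make_keypair()   # consumer-side (MSA) signing key
    aad_priv, aad_pub = make_keypair()   # corporate-side (Azure AD) signing key

    CONSUMER_KEYS = {"msa-key-1": msa_pub}
    ENTERPRISE_KEYS = {"aad-key-1": aad_pub}

    def verify_corporate_token_buggy(token):
        # BUG: merges both key sets, so any known key can vouch for a corporate login.
        keys = {**CONSUMER_KEYS, **ENTERPRISE_KEYS}
        kid = jwt.get_unverified_header(token)["kid"]
        return jwt.decode(token, keys[kid], algorithms=["RS256"], audience="corporate-mail")

    def verify_corporate_token_fixed(token):
        # FIX: only keys from the enterprise set may vouch for corporate tokens.
        kid = jwt.get_unverified_header(token)["kid"]
        if kid not in ENTERPRISE_KEYS:
            raise PermissionError("token signed by a key from outside the enterprise key set")
        return jwt.decode(token, ENTERPRISE_KEYS[kid], algorithms=["RS256"], audience="corporate-mail")

    # An attacker holding a stolen *consumer* key mints a corporate-looking token...
    forged = jwt.encode(
        {"sub": "attacker", "aud": "corporate-mail"},
        msa_priv,
        algorithm="RS256",
        headers={"kid": "msa-key-1"},
    )

    print(verify_corporate_token_buggy(forged))   # accepted: the segmentation is bypassed
    # verify_corporate_token_fixed(forged)        # raises PermissionError: wrong sort of key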

More bad-and-good news

The bad news for Microsoft is that this isn’t the only time the company has been found wanting in respect of signing key security in the past year.

The latest Patch Tuesday, indeed, saw Microsoft belatedly offering up blocklist protection against a bunch of rogue, malware-infected Windows kernel drivers that Redmond itself had signed under the aegis of its Windows Hardware Developer Program.

The good news is that, because the crooks were using corporate-style access tokens signed with a consumer-style cryptographic key, their rogue authentication credentials could reliably be threat-hunted once Microsoft’s security team knew what to look for.

In jargon-rich language, Microsoft notes that:

The use of an incorrect key to sign the requests allowed our investigation teams to see all actor access requests which followed this pattern across both our enterprise and consumer systems.

Use of the incorrect key to sign this scope of assertions was an obvious indicator of the actor activity as no Microsoft system signs tokens in this way.

In plainer English, the downside that no one at Microsoft knew about this hole in advance (and therefore couldn’t patch it proactively) came with an ironic upside: no one at Microsoft had ever written code to work that way.

And that, in turn, meant that the rogue behaviour in this attack could be used as a reliable, unique IoC, or indicator of compromise.
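
For illustration only, hunting for that IoC might boil down to a filter as simple as the one below. The log field names and key IDs are assumptions we’ve made up for this sketch, not Microsoft’s real telemetry schema:

    # Which signing keys belong to the consumer (MSA) side of the house?
    CONSUMER_KEY_IDS = {"msa-key-1", "msa-key-2"}

    def is_suspicious(record: dict) -> bool:
        # The IoC: a request to an *enterprise* service authorised by a token
        # signed with a *consumer* key, something no legitimate system produces.
        return (record.get("service_class") == "enterprise"
                and record.get("token_kid") in CONSUMER_KEY_IDS)

    access_logs = [
        {"service_class": "enterprise", "token_kid": "aad-key-1", "tenant": "agency.example"},
        {"service_class": "enterprise", "token_kid": "msa-key-1", "tenant": "agency.example"},
        {"service_class": "consumer",   "token_kid": "msa-key-1", "tenant": "outlook.com"},
    ]

    hits = [r for r in access_logs if is_suspicious(r)]
    print(hits)   # only the enterprise-request-with-consumer-key record shows up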

That, we assume, is why Microsoft now feels confident to state that it has tracked down every instance where these double-semi-zero day holes were exploited, and thus that its 25-strong list of affected customers is an exhaustive one.

What to do?

If you haven’t been contacted by Microsoft about this, then we think you can be confident you weren’t affected.

And because the security remedies have been applied inside Microsoft’s own cloud service (namely, disowning any stolen MSA signing keys and closing the loophole allowing “the wrong sort of key” to be used for corporate authentication), you don’t need to scramble to install any patches yourself.

However, if you are a programmer, a quality assurance practitioner, a red teamer/blue teamer, or otherwise involved in IT, please remind yourself of the three points we made at the top of this article:

  • Applied cryptography is hard. You don’t just need to choose the right algorithms, and to implement them securely. You also need to use them correctly, and to manage any cryptographic keys that the system relies upon with suitable long-term care.
  • Security segmentation is hard. Even when you think you’ve split a complex part of your ecosystem into two or more parts, as Microsoft did here, you need to make sure that the separation really does work as you expect. Probe and test the security of the separation yourself, because if you don’t test it, the crooks certainly will.
  • Threat hunting is hard. The first and most obvious explanation isn’t always the right one, or might not be the only one. Don’t stop hunting when you have your first plausible explanation. Keep going until you have not only identified the actual exploits used in the current attack, but also discovered as many other potentially related causes as you can, so you can patch them proactively.

To quote a well-known phrase (and the fact that it’s true means we aren’t worried about it being a cliché): Cybersecurity is a journey, not a destination.


Short of time or expertise to take care of cybersecurity threat hunting? Worried that cybersecurity will end up distracting you from all the other things you need to do?

Learn more about Sophos Managed Detection and Response:
24/7 threat hunting, detection, and response  ▶

