All This Twitter OAuth Security Nonsense

In a wordy article that could have been much shorter and a lot less sensational, Ryan Paul of ArsTechnica throws mud mostly at Twitter, but saves plenty for OAuth. Unfortunately, Ryan Paul (who is clearly a smart guy) is heavy on accusations but light on arguments. Typically, I would go over the article one item at a time, but I’m right in the middle of draft 11 of OAuth 2.0, which is a much better use of my time. If you want a proper rebuttal, Ben Adida’s response is (as always) a great read.

The OAuth standard has many significant weaknesses and limitations. A number of major Web companies are collaborating through the IETF to devise an update that will fix some of the problems, but it’s still largely a work in progress. The current version of the standard—OAuth 1.0a—is an inelegant hack that lacks maturity and fails to provide clear guidance on many critical issues that are essential to building a robust authentication system.

First, the current version of the standard is RFC 5849. The RFC not only changed much of the terminology, highlighted the known security limitations, and rewrote the prose entirely; it also explicitly clarified at least one of the author’s issues regarding the handling of timestamps (which most companies don’t even bother to check anyway).
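For illustration only: RFC 5849 deliberately leaves timestamp and nonce validation policy to each server. A common scheme rejects requests whose timestamp falls outside a tolerance window and replayed timestamp/nonce pairs within it. A minimal sketch in Python (the helper name and the 5-minute window are my own choices, not anything the RFC mandates):

```python
import time

def check_timestamp_and_nonce(seen_nonces, timestamp, nonce, window=300, now=None):
    """Reject stale timestamps and replayed (timestamp, nonce) pairs.

    seen_nonces: a set the server keeps of recently accepted pairs.
    window: tolerance in seconds; a server-chosen policy, not from the RFC.
    """
    now = time.time() if now is None else now
    # Reject requests too far from the server's clock.
    if abs(now - int(timestamp)) > window:
        return False
    # Reject a replay of an already-seen (timestamp, nonce) pair.
    if (timestamp, nonce) in seen_nonces:
        return False
    seen_nonces.add((timestamp, nonce))
    return True
```

A real deployment would also expire old entries from `seen_nonces` so the set doesn’t grow without bound.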

My favorite silliness from the article comes when it discusses the lack of specification details about using client secrets in installed applications:

Part of the problem is that the specification doesn’t provide much guidance about what implementers should do instead, which has forced them to improvise. Facebook and Google Buzz have both come up with reasonable solutions and offer desktop-appropriate OAuth authentication flows that do not require a secret key or require the end user to go through a complicated copy/paste process.

The specification is very clear (as the article itself quotes): don’t use client secrets in installed applications! The specification doesn’t say much more because there is no solution. None exists for a distributed application unless you issue a different secret to each installation. To say that Facebook and Google came up with reasonable solutions is pure sensationalism. What is their reasonable solution? They don’t use client secrets. In other words, they do exactly what the specification says.
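Mechanically, nothing in OAuth 1.0a requires a non-empty client secret. Per RFC 5849 (Section 3.4.2), the HMAC-SHA1 signing key is just the percent-encoded client secret and token secret joined by “&”, and either part may be the empty string. A minimal sketch in Python (the function name is mine):

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def hmac_sha1_signature(base_string, client_secret, token_secret):
    # RFC 5849, Section 3.4.2: the key is the percent-encoded client
    # secret and token secret joined by "&"; either part may be empty,
    # which is exactly the installed-application case.
    key = quote(client_secret, safe="") + "&" + quote(token_secret, safe="")
    digest = hmac.new(key.encode("ascii"), base_string.encode("ascii"),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode("ascii")
```

With an empty client secret the request is still signed with the token secret, so the token holder is still authenticated; only the application’s identity claim is lost.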

The article does bring up important points implementers should pay attention to when using OAuth, such as the secrecy of their client credentials, the exact details of their user experience, how they authenticate the user (cookies, etc.), and an overall awareness of phishing. But OAuth, like other security protocols, is designed to be implemented by security experts. In addition, there are simply no widely available solutions to many of these problems.

If Twitter uses the client secret of installed applications for anything other than gathering statistics, well, they should reconsider. But it’s not like they have any other alternative. That’s the only valid “news” the article has to offer.

It’s easy to throw mud without getting dirty making a fully baked technical argument, and it makes for a fun read too. But when it comes to a widely deployed security protocol, scoring page views by scaring readers about security is not fair game. It is always valuable to highlight OAuth’s weaknesses, but in context, with the right security risk analysis, and with a clear comparison to the alternatives.

I expect more from ArsTechnica.

8 thoughts on “All This Twitter OAuth Security Nonsense”

  1. So, you (and the spec) say, “don’t use client secrets in installed applications!” And Twitter says all client apps must use OAuth.

    So, what is the mechanism for doing OAuth without storing the secret in a desktop or mobile client?

    It’s a rhetorical question, sure, but in all seriousness this question gets asked and goes unanswered on the Twitter developers forums about once a week. There is no obvious acceptable solution for this challenge.

    You can argue your point as much as you like. You can say it is Twitter’s fault, or the developer’s fault, or even Ars’ fault, but just take a quick look around. As of yesterday every functional desktop and mobile Twitter client is using OAuth. The vast majority of them **ARE** storing the secret. Yes, it’s wrong. Yes, it goes against the spec, but most still do it.

    Why do you think that is? What is the solution to getting them to stop?


    • There is no simple solution for authenticating installed applications. This is not a new problem, or an OAuth problem. It is a basic cryptographic problem: if you have access to the code, you have access to everything it needs to identify itself. And once you have that, you can imitate it.

      Neither Twitter nor anyone else should rely on the client identifier when making any meaningful decisions about an application. It is really just for statistics and diagnostics, nothing else. I can’t help Twitter figure out how to manage their OAuth deployment or developer program. I don’t work for them.

      Last year I wrote about alternative flows. Ironically, that was in response to Twitter developers’ unhappiness with the OAuth experience in an installed application (shortly after Twitter announced they would shut down Basic auth).

      OAuth works just fine without a client secret. What you lose is the ability to safely inform the user who is asking for access (because the identity can be faked). This is not a big problem, because most services don’t do much to verify the identity of the application developer anyway, so an attacker can simply register a fake application (and avoid all this secret-stealing mess).

      As long as users pay attention to what they install, and they download it from a trusted source, they are fine. Yes, Twitter will have a hard time giving certain applications special treatment or shutting down spammers, but that’s their problem, not a user security or privacy issue.

      In other words, OAuth does not provide a solution to Twitter’s requirements for managing installed applications. My issue is with making this sound like the protocol is broken or unsafe. This is like saying HTTP is broken because it doesn’t do bi-directional streaming well – it wasn’t designed to do that.

      There are creative ways to distribute installed applications with unique keys. Some media companies do this with content, as do some anti-virus vendors (the serial is baked into the download). This is a whole different ballgame, and it still doesn’t solve the issue of identifying installed applications (a user can still dig out and copy the secret, but since each one is unique, revoking it causes no real damage).
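      As a sketch of that per-installation approach (all names here are hypothetical, and a real deployment would also handle persistence, download delivery, and revocation):

      ```python
      import secrets

      def issue_install_credentials(registry, app_name):
          # Hypothetical sketch: mint a unique client id/secret per download,
          # record it server-side, and bake it into that copy of the installer.
          # Leaking one secret then only compromises (and revokes) that copy.
          client_id = f"{app_name}-{secrets.token_hex(8)}"
          client_secret = secrets.token_urlsafe(32)
          registry[client_id] = client_secret
          return client_id, client_secret
      ```

      Revoking a leaked `client_id` then affects a single installation instead of locking out every user of the application.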

  2. Thank you for your clarification (and brevity, relative to Ryan’s.)

    I don’t think (the scope of the RFC aside, which is intentionally vague) that he wrote anything technically false. He does use a surprisingly threatening tone and photo, and too many words, to say: OAuth can’t verify apps (yet). I’m not sure if he has unrealistic expectations of those who use source code in FOSS; I’m assuming that if they can follow a FOSS how-to thread and compile, they know what a token is and how to copy-paste, so that grudge is unnecessary.

    Mud-slinging at Twitter aside, he has nothing but praise for the tech and the people behind it, understands its scope, and took the time to explore it. He was the first I read outside of the OAuth lists to publicly say that OAuth wasn’t yet suitable for installed applications, and he took needed steps to make Twitter face the problem. Including your correction, the sum of it all is positive.

    All that to point out that OAuth isn’t doing enough to explain in layman’s terms what is happening. When you change decisive, user-facing tech, some Common Craft-type explainer video becomes increasingly necessary.

    • I disagree. The article makes it sound like there is a solution but neither Twitter nor OAuth has figured out how to implement it, leaving it for others to solve. There is no other solution for installed applications. This is like writing an article complaining about a database that cannot be non-blocking, distributed, and 100% reliable, all at the same time.

  3. The article certainly should have been shorter and probably could have done with a less dire tone, but I would say that the primary point he is raising is not criticism of OAuth, but of Twitter’s implementation and the policies surrounding it. Here is the point he brings up a few times:

    Twitter intends to systematically invalidate compromised keys. This means that when somebody extracts the key from a popular desktop Twitter client and publishes it on the Internet, Twitter will revoke access to the service for that client application. All of the users who rely on the compromised program will be locked out and will have to use other client software or the Twitter website in order to access the service.

    And the absurd ramifications of this policy for FOSS clients:

    In response to the concerns raised by the FOSS community, Twitter committed to implementing an alternate OAuth authentication mechanism specifically for FOSS applications. The alternate authentication flow would allow users to register a sub-key that they could paste into the application.

    You pulled out his quote of praise for Google and Facebook. But the way I read it, he isn’t claiming that they have solved the problem — he is pointing out that Google and Facebook aren’t pretending to have a solution, but Twitter is.

    That’s what he dedicates the first 2 pages to. On the 3rd page, even though he says he is going to examine “broader OAuth issues that also affect many other implementations,” it actually still largely focuses on implementation decisions Twitter has made that make developing a client difficult.

    Anyway, that’s what I got out of it.

    p.s. Thank you Eran for all your work on OAuth!!!

  4. Thanks for putting this rebuttal together. I had just read Ryan’s article and was pretty terrified. This rebuttal (and especially the comments) make it clear that it’s a problem with desktop / downloadable apps in general, not an OAuth problem.

    In my case, I guess it’s OK if someone hijacks a consumer key/secret and pretends to be another app. I’d rather deal with that than lock out a popular app just because someone hijacked the secret.

    Thanks for bringing the discussion back down to earth.

Comments are closed.