In an effort to resume work on the OAuth 2.0 protocol at the IETF OAuth Working Group, I posed three questions about the authentication half of the protocol. From my perspective as the specification editor, these questions are the main open issues currently standing between us and a draft covering the authentication process in OAuth 2.0 (of course, being an IETF effort, I might be completely off base).
Below are the questions with their explanations, as well as my personal views after each question.
1. Should OAuth 2.0 Require HTTPS for Any Unsigned Request?
WRAP got some negative attention because it sends requests without signatures and, in some cases, without a secure channel. WRAP uses HTTPS only for obtaining tokens but does not mandate (or even suggest) using HTTPS for making protected resource requests. Instead, WRAP recommends short-lived tokens that must be refreshed (over HTTPS). In other words, WRAP uses secrets that are easy to steal, but which are only good for a short period of time, limiting the damage.
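The WRAP token lifecycle described above can be sketched as follows. This is a minimal illustration, not the WRAP wire format; the class name, the 300-second lifetime, and the token values are all invented for the example.

```python
import time

class WrapTokenClient:
    """Sketch of WRAP's short-lived token trade-off (names are hypothetical)."""

    def __init__(self, lifetime=300):
        self.lifetime = lifetime      # short-lived: limits damage if stolen
        self.token = None
        self.expires_at = 0.0

    def _refresh(self):
        # In WRAP, this exchange happens over HTTPS, e.g. a POST to the
        # provider's token endpoint with a refresh token.
        self.token = "opaque-token-%d" % int(time.time())
        self.expires_at = time.time() + self.lifetime

    def authorization_header(self):
        # Protected-resource requests may go over plain HTTP in WRAP;
        # the token is sent unsigned, so only its short life protects it.
        if self.token is None or time.time() >= self.expires_at:
            self._refresh()
        return {"Authorization": 'WRAP access_token="%s"' % self.token}
```

The point of the design is visible in `authorization_header()`: nothing protects the token in transit, so the refresh step is the only thing bounding an attacker's window.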
In a recent thread on the OAuth WG list, we reached a (low-participation) consensus that the OAuth 1.0 protocol should be changed in the RFC draft to mandate HTTPS (or other technologies offering the same or greater protection) for the PLAINTEXT method and when requesting token credentials. The original community edition only recommends using HTTPS, but every implementation (known to me) requires it.
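To see why PLAINTEXT without HTTPS is indefensible, here is a sketch of how the PLAINTEXT "signature" is formed: it is just the two secrets, percent-encoded and joined by an ampersand, so anyone who can read the request can read the secrets. The secret values below are made up for illustration.

```python
from urllib.parse import quote

def plaintext_signature(consumer_secret, token_secret=""):
    # PLAINTEXT performs no cryptography: the secrets themselves travel
    # in the oauth_signature parameter, relying entirely on the channel.
    return quote(consumer_secret, safe="") + "&" + quote(token_secret, safe="")

sig = plaintext_signature("kd94hf93k423kf44", "pfkkdhi9sl3r4s00")
# The secrets appear verbatim in the request.
```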
Are there any valid reasons (ones that will pass IETF security review scrutiny) for allowing unsigned requests to be sent in the clear over an insecure channel? Are there real use cases for this kind of insecure flavor (not just speculation but actual services we can point to)?
Yes, OAuth 2.0 must require the use of a secure channel when making unsigned requests. I understand that some providers will not want to bother with the extra security for cost or performance reasons. I am willing to assume they are doing this fully aware of the repercussions of their actions. What I don't understand is why the protocol, which is aimed at interoperability, should bother with it?
The working group includes people representing many companies, large and small. Can one of them please raise their hand and ask for this feature? And if they do, can they explain how they justify providing poor security to their developers?
I am no longer interested in the argument that somewhere out there are valid use cases. Writing a protocol for scenarios that are not anchored in reality is bad practice. OAuth 1.0 does not require using a secure channel for sending token secrets because people claimed it would be a problem for some providers. So far, no such providers have shown up.
If someone wants to argue the need for a no-cryptography / no-secure-channel option, while showing how that need justifies subjecting the web to more bad protocols and poor foresight, I am eager to listen.
If a provider doesn’t care about security, it is free to implement the protocol poorly. There is no OAuth Police to force providers to check signatures and reject requests with bad ones. By forcing such providers to break the protocol, we are forcing them to make an explicit decision and we get their developers to notice.
There is also a case to be made about pushing the envelope when it comes to security. The more services use TLS, the cheaper and easier it will get. That’s economics 101. And unlike writing new code for new OAuth signatures, requiring TLS will simply mean linking another library and making a function call.
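The "function call" claim above can be made concrete. In most HTTP client stacks, moving an existing request to TLS is a scheme change, not new cryptographic code; the library performs the handshake and certificate checks itself. The URL here is illustrative only.

```python
import urllib.request

insecure = "http://api.example.com/resource"

# Switching to TLS is a one-character-class change in the URL; urllib
# (like most client libraries) negotiates TLS and validates certificates
# without any additional application code:
secure = insecure.replace("http://", "https://", 1)

# urllib.request.urlopen(secure)  # same call as for plain HTTP
```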
My vote is to start with an HTTPS requirement for any unsigned request, and let those who have real reasons to object show up. None of the current users of OAuth 1.0 will be able to claim such a reason, since they all use HTTPS for all such requests.
2. What to Sign?
The community edition of the OAuth Core 1.0 protocol was designed to sign API requests that use common form-encoded parameters (in the URI or body). The main component of the 1.0 signature base string is the parameters. The host and HTTP method are important but were never the focus of the signed content.
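A simplified construction of the 1.0-style signature base string shows how the parameters dominate the signed content, with the method and URI as a thin prefix. This is a sketch: a real implementation must also merge query, body, and oauth_* parameters and handle duplicate keys; the values below are illustrative.

```python
from urllib.parse import quote

def base_string(method, base_uri, params):
    """Simplified OAuth 1.0-style base string: METHOD & URI & sorted params."""
    enc = lambda s: quote(s, safe="")
    # Normalize parameters: percent-encode, sort, and join as k=v pairs.
    normalized = "&".join(
        enc(k) + "=" + enc(v) for k, v in sorted(params.items())
    )
    return "&".join([method.upper(), enc(base_uri), enc(normalized)])

bs = base_string("GET", "http://photos.example.net/photos",
                 {"file": "vacation.jpg", "size": "original"})
# → 'GET&http%3A%2F%2Fphotos.example.net%2Fphotos&file%3Dvacation.jpg%26size%3Doriginal'
```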
The new OAuth 1.0 RFC draft does not change the process but does describe it very differently, changing the focus from signing API requests and parameters to (partially) signing HTTP requests. The new Token Authentication draft (which is proposed as the basis for half of OAuth 2.0) takes this approach a step further and focuses on signing the raw HTTP request components, completely ignoring their meaning as API calls.
The end result is very similar but the differences are important.
Last month Brian Eaton, a WRAP co-author and long-time OAuth contributor working for Google, proposed an alternative approach in which a message is signed instead of the API call or HTTP request. In his proposal, the HTTP request (or API call, whatever your perspective) is transformed into a message (Eaton's proposal uses a JSON-based format) which is then signed. This additional layer of abstraction allows using the method with other transports, or in use cases in which parameters are not part of the request URI or body.
Ignoring the details and proposed format of the message, which style should OAuth 2.0 use?
- Process the HTTP request into a base string for signing (OAuth RFC draft style),
- Treat the request as an API call with form-encoded parameters (OAuth 1.0 community edition style), or
- Convert the request into a normalized message and sign it (Eaton style).
Given the interest in using OAuth with XMPP, SIP, and other protocols, I think there is great value in introducing a simple process for converting the request into a message, and then signing that message. The part of the proposal I am not yet sold on is the added complexity of using JSON and the multi-tier structure.
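The "normalize into a message, then sign" style can be sketched as below. The message layout here is invented for illustration; Eaton's proposal defines its own multi-tier JSON structure, and this sketch deliberately uses a flat one.

```python
import hmac, hashlib, json

def normalize(method, host, path, params):
    """Turn a request (from any transport) into a canonical message."""
    message = {"method": method.upper(), "host": host.lower(),
               "path": path, "params": sorted(params.items())}
    # Canonical serialization: stable key order, no whitespace variance,
    # so client and server produce byte-identical messages.
    return json.dumps(message, sort_keys=True, separators=(",", ":"))

def sign(message, secret):
    return hmac.new(secret.encode(), message.encode(),
                    hashlib.sha256).hexdigest()

msg = normalize("GET", "Photos.Example.net", "/photos", {"size": "original"})
sig = sign(msg, "shared-secret")
```

Because nothing in `normalize()` depends on HTTP, the same pair of functions could sit on top of XMPP or SIP, which is exactly the appeal of this style.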
3. Should the Normalized Request String Get Sent with the Request?
In OAuth 1.0 the request is normalized into the signature base string by the client and the server independently. The base string itself is never sent with the request. In his outline, Eaton proposed including the signed string (message) with the request, removing the need for the server to regenerate the normalized string. It also allows the client to use the included string to send additional (signed) information that is not part of the HTTP request.
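A sketch of what "send the normalized string with the request" might look like on the wire. The header names, message format, and `extra` parameter are all hypothetical, chosen only to illustrate the two properties above: the server verifies against the included copy, and the client can sign data that is not part of the HTTP request itself.

```python
import hmac, hashlib

def build_request(method, path, params, secret, extra=None):
    """Attach a signed, normalized copy of the request (names hypothetical)."""
    # The normalized message covers the request parameters plus any
    # additional signed information the client wants to include.
    pairs = sorted(params.items()) + sorted((extra or {}).items())
    message = "&".join("%s=%s" % (k, v) for k, v in pairs)
    signature = hmac.new(secret.encode(), message.encode(),
                         hashlib.sha256).hexdigest()
    return {
        "request_line": "%s %s" % (method, path),   # the raw request
        "headers": {
            "X-Signed-Message": message,            # normalized copy, sent too
            "X-Signature": signature,
        },
        "params": dict(params),
    }
```

The server can check `X-Signature` directly against `X-Signed-Message` without re-deriving anything, which is the claimed advantage; the cost is that the request now travels twice.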
This is a significant departure from OAuth 1.0, but one that deserves an in-depth discussion.
Some advantages to this approach are:
- The server can easily verify what is being signed
- The client can include additional parameters in the signed message
- The request remains valid even if changed by proxies or other intermediaries
Some disadvantages are:
- The request is sent twice, once raw and once normalized
- It adds another place to make mistakes and create security holes if the server uses the raw data without fully comparing it to the normalized (signed) data
- Since any server enforcing security will only use the data contained in the normalized portion, it will create a de facto standard for API requests (not nearly as heavy as SOAP or WS-*) in which the request itself is normalized before sending.
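The second disadvantage is worth making concrete: a server that receives both the raw parameters and the signed normalized copy must verify that they agree before trusting either. A minimal sketch of that check, with illustrative names:

```python
def verify_consistency(raw_params, normalized_pairs):
    """Return True only if the raw request matches the signed copy.

    raw_params: dict of parameters parsed from the actual HTTP request.
    normalized_pairs: list of (key, value) pairs from the signed message.
    """
    # Skipping this comparison and acting on the raw data reopens the
    # exact hole the signature was meant to close: an attacker can alter
    # the raw request while the signed copy still verifies.
    return sorted(raw_params.items()) == sorted(normalized_pairs)
```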
What are some other advantages and disadvantages of this approach? Should the normalized string be included with the request, or even replace it?
Unless someone raises some additional advantages, I can’t see how this makes sense. The request itself is what matters.
Got answers? Please join the conversation.