Yesterday Twitter released ‘Sign-in with Twitter’, the ability to use Twitter as a delegated sign-in provider for third-party websites. The cool thing about this new feature, which is part of their OAuth API beta, is that it is completely standard OAuth. No extensions, no secret sauce, and no new proprietary provider (yes, I’m looking at you Facebook).
It is Open done right.
With this small enhancement of the Twitter OAuth API, Twitter created a product that directly competes with Facebook Connect. The implementation details are significantly different (and there are some technical shortcomings on both sides), but there is little you can do with one and not the other. There is no reason why ‘Sign-in with Twitter’ cannot be used anywhere Facebook Connect is offered, including blog posts and activity streaming.
It has been a year since I decided to put my startup Nouncer on hold and join Yahoo!. It has been fascinating to witness Twitter’s renewed media attention and recent growth, and it has inspired me to go back to my old posts about trying to run such a business.
The following are three posts on the subject:
(Or, Delegating Delegation)
There is nothing like a popular API to drive OAuth forward. As more developers transition to use Twitter’s new OAuth API, new requirements emerge. Existing sites based on Twitter use Twitter usernames and passwords for more than just calling the Twitter API. They use it as a sign-in solution for their own service, as well as to integrate with other Twitter-based applications.
There are many cases in which one third-party application uses functionality from another third-party application. For example, my iPhone Twitter client, Twittelator, integrates with TwitPic to allow me to post photos directly from my phone. The way it works is that Twittelator has my Twitter username and password, and TwitPic uses the same credentials to offer its service.
When I gave Twittelator my Twitter credentials, I implicitly allowed it to act on my behalf without limitations. Everything my Twitter credentials can do is technically fair game for this third-party application. Of course, applications should never abuse this power, but as long as they deliver expected results and don’t scare their users, people will be happy with the additional functionality.
Once Twittelator or TwitPic switches over to the OAuth API, this functionality breaks.
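Why it breaks can be shown in a short sketch. With Basic Auth, any application holding the raw username and password can construct a valid Authorization header, so credentials collected by one application can simply be handed to another. An OAuth 1.0 HMAC-SHA1 signature, by contrast, is keyed on the consumer secret of the specific application the token was issued to. The credentials, base string, and application names below are hypothetical; the signing-key construction follows the OAuth Core 1.0 spec:

```python
import base64
import hashlib
import hmac
import urllib.parse

# Basic Auth: any application holding the raw password can build this
# header, which is how TwitPic could reuse credentials collected by
# Twittelator.
user, password = "alice", "s3cret"
basic_header = "Basic " + base64.b64encode(f"{user}:{password}".encode()).decode()

# OAuth 1.0 (HMAC-SHA1): the signing key concatenates the *consumer*
# secret with the token secret, so a token issued to one application
# cannot be exercised by another application lacking that consumer secret.
def sign(base_string, consumer_secret, token_secret):
    key = (urllib.parse.quote(consumer_secret, safe="")
           + "&"
           + urllib.parse.quote(token_secret, safe=""))
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

base = "GET&https%3A%2F%2Fapi.example.com%2F&oauth_token%3Dabc"  # simplified
sig_app_a = sign(base, "twittelator-consumer-secret", "alice-token-secret")
sig_app_b = sign(base, "twitpic-consumer-secret", "alice-token-secret")
# Same request, same token secret, different consumer secrets: the two
# signatures differ, so the provider rejects the second application.
```

This is exactly the property that makes OAuth safer than password sharing, and also exactly what breaks the old pass-the-credentials integration pattern.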
Back when I was “running” a startup and had a tiny bit of cash to spend on “marketing”, I commissioned a couple of cartoons about OAuth and Twitter. I recently had some new ideas so more might be coming, but for now, here is a recap of the Hueniverse cartoons (the OAuth cartoons have been updated in this post to reflect the final spec Token names):
Where are Your Endpoints?
(Or, How Do I Make My Desktop Applications Usable with OAuth)
There are plenty of reasons why the OAuth web redirection flow sucks. That is, the flow described in section 6 of the OAuth Core 1.0 specification. And it has all been said before: it smells like phishing, it can be slow, it is hard to relay errors to the user, it has the potential for a high drop-off rate, it is an unfamiliar pattern for end users, it is difficult to balance security warnings with practical usability, it requires a browser, and on and on.
But none of these are reasons not to use OAuth. They are simply challenges to overcome and a call for action to find new and better ways to authenticate users and authorize access on the web. Yes, this is a huge undertaking, but there are plenty of ways site owners can improve it today and still support OAuth. In the previous post I talked about some of the limitations of OAuth with a desktop client and the rules of rolling out an OAuth API. Now it's time for some ideas moving forward.
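For reference, the section 6 flow boils down to three provider endpoints and one browser redirect. The endpoint names below follow the spec; the URLs and token value are hypothetical placeholders:

```python
from urllib.parse import urlencode

# OAuth Core 1.0, section 6: the three-legged web redirection flow.
REQUEST_TOKEN_URL = "https://provider.example.com/oauth/request_token"  # 6.1
AUTHORIZE_URL = "https://provider.example.com/oauth/authorize"          # 6.2
ACCESS_TOKEN_URL = "https://provider.example.com/oauth/access_token"    # 6.3

def build_authorize_redirect(request_token, callback_url):
    """Step 6.2: send the user's browser to the provider to grant access.

    This redirect is the part criticized above: the user leaves the
    application, authenticates with the provider, and must find their
    way back via the callback URL.
    """
    query = urlencode({
        "oauth_token": request_token,
        "oauth_callback": callback_url,
    })
    return f"{AUTHORIZE_URL}?{query}"

redirect = build_authorize_redirect(
    "request-token-123", "https://client.example.com/ready")
```

After the user approves access, the provider redirects back to the callback and the client exchanges its request token for an access token at the third endpoint.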
(Or, The Challenges of Using OAuth in Desktop or iPhone Applications)
Services adopting or considering OAuth, like Twitter, face the question of how to get developers to move from their HTTP Basic Auth API to their OAuth API. If you keep both, why would anyone bother to learn a much more complex authentication method and subject their users to a workflow where at least some will drop off? And let’s not forget that developers can always ignore the API and use screen scraping like in the “good old days” if you make it too hard.
Dumping the problem on your API developers (or worse, your users) isn’t going to help anyone.
When discussing microblogging scalability, the conversation includes scaling each individual service, but also scaling the network and relationships between services. Part I discussed the challenges of scaling a single microblogging site with a focus on dealing with a large and constantly changing content database. In that post I mentioned that the proposal by some critics to build a distributed or federated microblogging service as a scaling solution will actually make things worse. This second part will elaborate on that claim.
When discussing a distributed microblogging service, the conversation touches the long debate on the future of social networks and linking communities across individual walled gardens. After all, microblogging is one aspect of the social web, and status updates live side by side with sharing photos, videos, and other personal information and experiences. Being able to choose a social network and make friends from another without having to sign up for multiple accounts is one of the visions being offered. Another is the approach being advocated by the Data Portability group, which focuses on being able to move an entire experience off to another network instead of creating multiple identities.