There used to be a big difference between API access and regular human-oriented HTML access: the speed at which requests are made. When a request is made via a browser, there is an inherent delay from human interaction, browser response, page rendering, and image fetching. Most of that delay disappears once a machine makes the call. However, with recent improvements in browser technology and the wide use of AJAX techniques on the client side, even human-readable pages can make API calls to render themselves.
Scalability plays a central role when designing the ways in which data can be requested from a service, be it via an API call or an HTML page request. Both types fetch raw data, process it, and format it into a presentation format such as HTML, XML, or JSON. In the case of a server-rendered HTML page, all the individual requests are made internally, hidden from the user, and a single page is returned. If the page uses AJAX scripts, the browser makes multiple API calls to fetch individual data sets, but the server still has to fetch the raw data, process it, and format it. It is the size of the batch that makes the difference.
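To illustrate the point above, here is a minimal sketch of the shared fetch-process-format pipeline, where a page request batches several internal fetches into one response while API-style requests return one small data set each. All names and data here are hypothetical, not part of any real service:

```python
# Sketch: both delivery styles run the same fetch -> format pipeline;
# only the batch size per request differs. Everything here is made up.
import json

# Hypothetical raw data store, standing in for real backend queries.
RAW_DATA = {
    "profile": {"user": "alice", "joined": "2007-01-15"},
    "updates": [{"id": 1, "text": "hello"}, {"id": 2, "text": "world"}],
}

def fetch(dataset):
    """Fetch one raw data set (would be a database or service call)."""
    return RAW_DATA[dataset]

def render_api(dataset):
    """API-style request: one small data set, formatted as JSON."""
    return json.dumps({dataset: fetch(dataset)})

def render_page(datasets):
    """Server-rendered page: several internal fetches batched into one HTML response."""
    parts = [f"<div id='{d}'>{json.dumps(fetch(d))}</div>" for d in datasets]
    return "<html><body>" + "".join(parts) + "</body></html>"

# One page request does the work of several API calls:
page = render_page(["profile", "updates"])
calls = [render_api(d) for d in ["profile", "updates"]]
```

Either way the server pays the same fetch and formatting cost; the AJAX style simply splits that cost across more, smaller requests.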
Nouncer is getting ready for its alpha release this month. I have written before about what Nouncer was supposed to be, and how I started working on it. But like most early-stage products, Nouncer has evolved and changed in order to offer a unique service and remain competitive. In the spirit of anti-stealth, this post aims to explain, as much as can be shared at this point, what Nouncer is and what it is about.
Nouncer bridges the gap between real-time delivery and information overload. While most services focus on building a messaging system, Nouncer offers a content delivery platform. Content: Real-time, quality, and as requested.
Nouncer is taking its first steps into the world this week, with the release of the first set of APIs to the Nouncer alpha environment. These will allow developers to learn about the session management and user registration features. I expect to release enough API calls by the end of the year to allow developers to build working applications on the platform. However, Nouncer will not be fully operational until February or March 2008.
With the completion of OAuth Core 1.0, it was time to go back to what I was doing before: getting the Nouncer API ready. Like others, my interest in OAuth started with the plan to use OpenID as the user credential platform for the API. Now that OAuth is ready, I am returning to my initial objective of integrating the two (something I plan to write about in an upcoming post). However, given that Nouncer is taking shape as a corporate solution rather than a consumer service, I have started questioning the need for OpenID. After all, it is not something you would think about when discussing closed internal corporate identity systems.
Being critical of Twitter is really a compliment; it comes with the territory of being the market leader in a new space. My recent negative rants about Twitter are really more about how it is being used than about its qualities. Like many others, I have a vested interest in seeing Twitter succeed. While Nouncer does not compete with Twitter, it builds upon the usefulness and experience of microblog users. Most of my points about Twitter apply equally to other microblogging services such as Jaiku and Pownce, and there are many others.
The best feature Twitter has to offer is its powerful platform and open API. It is also the reason Twitter is more successful than its competitors, and why others are coming out with their own APIs almost as fast as their websites. I am excited about the soon-to-be-released Pownce API and have been playing around with the Jaiku API. These three sites, and the many others trying to improve the space (using their lower load as an advantage to build new functionality), all serve the important purpose of getting microblogging into the mainstream. We are still in the imagination phase, trying to figure out what to do with this powerful tool we stumbled upon.
Public comments about OAuth are a great opportunity to explain the thinking and goals behind the protocol. Rob Sayre asks about the protocol’s use of redirection to get the user to grant access:
“Maybe I’m missing something, but doesn’t this train users to enter their credentials into web pages they’ve been redirected to?”
First, you are correct. Redirection carries some risk of training users to arrive at a login screen without explicitly entering a URL in the browser address box. The basic idea behind phishing is getting users to a page they think is one thing but is really something else. A link in an email message made to look like it came from your bank actually leads to a fake page asking you to enter your username and password. When you fall for it, the fake page usually redirects you back to the real bank’s site to enter your credentials again (making you think you simply mistyped them).
With the OAuth 1.0 Draft due out next week, I wanted to introduce the protocol and try to help people understand what it is and what it is trying to solve. OAuth (pronounced “Oh Auth”) is mentioned in many blog posts, usually in the context of OpenID and open social networks. While OAuth can play an important role in helping open up closed communities, it is not specific to social networks. The short(est) explanation of OAuth is ‘an API access delegation protocol’. Now for the longer one.
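As a rough illustration of what the protocol involves under the hood, here is a minimal sketch of the HMAC-SHA1 signing step defined in OAuth Core 1.0: the request parameters are normalized into a signature base string, which is then signed with the consumer and token secrets. All the key and token values below are made-up examples, not real credentials:

```python
# Sketch of OAuth 1.0 HMAC-SHA1 request signing. Example values only.
import base64
import hashlib
import hmac
import urllib.parse

def percent_encode(s):
    # OAuth uses RFC 3986 percent-encoding; only unreserved characters stay bare.
    return urllib.parse.quote(str(s), safe="~")

def sign_request(method, url, params, consumer_secret, token_secret=""):
    """Build the signature base string and return its HMAC-SHA1 signature."""
    # Encode, then sort, the request parameters as the spec requires.
    pairs = sorted((percent_encode(k), percent_encode(v)) for k, v in params.items())
    param_string = "&".join(f"{k}={v}" for k, v in pairs)
    # Base string: METHOD & encoded-URL & encoded-parameter-string.
    base_string = "&".join(
        [method.upper(), percent_encode(url), percent_encode(param_string)]
    )
    # Signing key: consumer secret and token secret joined by '&'.
    key = f"{percent_encode(consumer_secret)}&{percent_encode(token_secret)}"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Hypothetical request a consumer might sign on a user's behalf:
params = {
    "oauth_consumer_key": "dpf43f3p2l4k3l03",
    "oauth_token": "nnch734d00sl2jdk",
    "oauth_signature_method": "HMAC-SHA1",
    "oauth_timestamp": "1191242096",
    "oauth_nonce": "kllo9940pd9333jh",
    "oauth_version": "1.0",
}
signature = sign_request(
    "GET", "http://photos.example.net/photos", params,
    "kd94hf93k423kf44", "pfkkdhi9sl3r4s00",
)
```

The signature travels with the request (along with the token) so the service can verify who is calling and on whose behalf, without the consumer ever seeing the user’s password, which is the whole point of delegation.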