The Open Web, Fuck Yeah!

Some douchebag wrote a thought-leadering rant about the pain of using the open web vs native apps – this is my rebuttal.

The problem with that rant isn’t that it’s false.

The pain is real and the challenges of building amazing web experiences that can compete with native apps are something every company should take into account. That post never claimed it wasn’t possible to build great web experiences, just that the cost was higher and required more developers of higher skill, which, of course, translates to higher cost and higher risk. All the great innovations in the framework space like React Native solve the overall front end architecture problem, but not the bits and pieces of the actual experience where shit meets fan.

The problem with that rant is that it’s incomplete, and largely misses the point.

The Fucking Open Web

Nine years ago I was part of an idealistic group of web advocates looking to free the web from the tyranny of the big silos. We saw web identity as a core part of the web’s future and fought to build it with web standards. That’s how I got sucked into the delusional world of open standards and web specifications. We appealed to other developers working at the big co.’s to adopt our work through any means necessary, from high ideals to public shaming. And we got some traction. Aren’t you glad everyone is using fucking OAuth now? (That group, btw, has mostly sold out, taking high-paying jobs at Facebook and Google, and has not been heard from since.)

A decade later, I’m the founder of a scrappy startup trying to reinvent web conversations. We have limited resources and a staff of almost 3, struggling to tame this fucking web. It is amazing how hard it still is to build innovative, quality web experiences. It is very much possible – there are plenty of amazing web developers building mind-blowing experiences. The problem is, I can’t afford to hire them, especially since a big chunk of them work for Google and Facebook.

It’s hard running a small business. No matter where you stand on the political spectrum, the amount of regulation an American business has to deal with is fucking insane. From incorporation to liability, accounting, human resources, and taxes, the system is rigged for those who have already succeeded. We have 2 full-time employees and we spend over $6000 a year to make sure we comply with all the payroll and labor laws across multiple states and the federal government. But at least government regulation is a known, predictable cost.

The web on the other hand. Fuck.

How to Use Open Source and Shut the Fuck Up At the Same Time

First, I do not speak on behalf of the hapi.js community, only for myself. hapi.js is a friendly, diverse, tolerant, and welcoming community.

We have a serious problem of user entitlement in open source. We have now seen many lead maintainers quit their own projects over constant abuse.

This affects everyone. Over the last year I have stopped supporting hapi users. The tone and attitude of open source users have become so negative, so demanding, and so persistent that I’ve decided I’m just not having any of it. I still write plenty of code, and most of it is published as open source. I just don’t talk about it anymore. The only people who get my attention at this point are other hapi.js lead maintainers and a select group of people who get my “platinum support service” via private channels. This premium support is free and is offered to friends and people I highly respect in the community.

As for everyone else –

If you don’t pay me for my services or contribute meaningful value to me personally, I don’t owe you shit. You are not my customer and you are rarely right. Open source is not an invitation for harassment and making demands. If you choose to interact with me about my open source work, remember it was your decision and you can stop interacting with me at any time. No one is forcing you to use my code. I am giving you a lot of free modules to do with as you wish. That license does not extend to my time or my attention.

I am a big supporter of Codes of Conduct. Every project where people interact with one another should have one. What I am not supportive of is harassing maintainers to adopt one. The only right approach here is to nicely ask the maintainer if they would consider adding a CoC using a well-established template (ideally from a closely related project), and if they say no, to move on. Enough with the fucking indignation, the public shaming, the boycotts.

Same goes for documentation. I see more bitching and entitlement around quality documentation than any other issue. You are not entitled to good (or any) documentation. If you don’t have the time to read the code, the tests, and the examples, shut up or fuck off. I don’t have the time to explain shit to you. I document my work for my own needs. I publish a lot of code without any documentation. Don’t ask me to add some. The only right thing to do is to ask if I am willing to take a pull request adding or improving the documentation. That’s it.

You know those people who leave product reviews on Amazon on items they will never buy? Don’t be those fucking people. If an open source project is not working out for you, just don’t use it. Don’t go posting on Twitter how shitty it is and how dickish the maintainer is. Just move on. Publishing code on GitHub is not an open invitation for abuse and humiliation. Every time I publish a new breaking release of hapi, people who have never (and will never) use hapi post shit about it on Twitter. What the fuck is wrong with them?

Why am I posting this? Why am I acting so angry and negative?

Because I can afford to behave like this while other maintainers cannot. I am not going to lose work, reputation, or sleep over expressing these feelings publicly and forcefully. However, this is not true for many new, young, inexperienced, or just sensitive maintainers who find themselves suddenly maintaining a popular open source project. They cannot afford to tell people to shut the fuck up or go fuck themselves without doing significant damage to their careers and reputations, or risking a dive into deep depression.

I am posting this because someone needs to tell you to shut the fuck up, you entitled asshole.

Auth to See the Wizard

(or, I wrote an OAuth Replacement)



It’s me again.

The fuck OAuth guy.

Before that I was the guy who wrote this and then this (and then I took my name off it).

I wrote a replacement protocol and thought you might want to check it out.

Well, sort of. I didn’t write a protocol. I wrote a JavaScript module providing a full authentication and authorization solution for building web applications. I am done with protocols and specifications. At the end of the day, I needed a working solution I could deploy and trust. The problem with security protocols is that they are useless without an equally solid implementation. The only point in a protocol is interoperability and I don’t care about interoperability. I just want to build great products.

I actually wrote three modules.

Iron is a simple way to take a JavaScript object and turn it into a verifiable encoded blob. Hawk is a client-server authentication protocol providing a rich set of features for a wide range of security needs. Oz combines Iron and Hawk into an authorization solution. Together these three modules provide a comprehensive and powerful solution.
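
To make those building blocks a bit more concrete, here is a minimal sketch of Iron in action (callback-style API matching the iron module of this era; the password value is a placeholder and must be a long random secret):

var Iron = require('iron');

// Server-side secret; Iron requires a sufficiently long password
var password = 'example_password_that_is_at_least_32_characters';

var session = { user: 123, scope: ['a', 'b'] };

// Seal: turn the object into an encrypted, integrity-protected string
Iron.seal(session, password, Iron.defaults, function (err, sealed) {

    // sealed is safe to hand to the client (e.g. as a cookie value)

    // Unseal: decrypt and verify, getting the original object back
    Iron.unseal(sealed, password, Iron.defaults, function (err, unsealed) {

        // unsealed: { user: 123, scope: ['a', 'b'] }
    });
});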

I’ll take some questions now.

How is Oz different from OAuth?

OAuth, especially 1.0, is based on solid, well established security best practices. There is no reason to invent something new. OAuth 2.0 added the foundation for building highly scalable solutions. Any new protocol should be based directly on this existing body of work and Oz does just that. It throws out all the silly wire protocol parts because they add no value. Oz makes a lot of highly opinionated decisions about how to implement the things that actually matter. If you understand OAuth well, you should be able to pick up Oz and Hawk pretty quickly.

What’s so cool about it?

Oz provides a complete solution with full support for access scopes, delegation, credential refresh, stateless server scalability, self-expiring credentials, secret rotation, and a really solid authentication foundation. Some would say Oz goes a bit overboard in layering security but I don’t think there is ever enough of that. The implementation is broken up into small utilities which can be composed together to build other solutions with different properties. And by breaking it into three modules, you get to use just the bits you want.

Does it require client-side cryptography?

Yes. Building a solution without security layers is irresponsible and stupid. Don’t do that. Bearer tokens are a bad idea. That said, Hawk, the layer providing the authentication component is trivial to implement. It’s a simple HMAC over a few strings. No sorting and encoding and all that nonsense.
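
For a sense of how simple the client side is, here is a minimal sketch using the hawk module’s client API (the credential values below are made up):

var Hawk = require('hawk');

// Credentials previously issued by the server
var credentials = {
    id: 'dh37fgj492je',
    key: 'werxhqb98rpaxn39848xrunpaw3489ruxnpa98w4rxn',
    algorithm: 'sha256'
};

// Compute the Authorization header for one specific request:
// an HMAC over the method, resource, host, port, and timestamp
var header = Hawk.client.header('http://example.com:8000/resource/1?b=1&a=2', 'GET', {
    credentials: credentials,
    ext: 'some-app-data'
});

// header.field is the value to send in the HTTP Authorization header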

Who should use it?

Me, mostly. I wrote this for myself because OAuth 1.0 is based on obsolete requirements, and I’d rather stick pencils in my eyes than use OAuth 2.0. If you are a happy OAuth user (regardless of the version), I say stick with it. But if you don’t like it or are looking for an alternative (and are using JavaScript), to the best of my knowledge, Oz is the only other option. It is a particularly smooth experience when also using hapi.

Is it done?

Yes and no. The core protocol is done and is in great shape. It has been stable for over two years. You can expect the same quality engineering I’ve put into hapi. The code is lean, clean, and it goes out of its way to protect against developer mistakes. What’s not done are the workflows such as the OAuth 2.0 implicit grant. Right now Oz provides an OAuth 1.0-like workflow, but more workflows (especially for mobile) will be added soon. Oz is in active development and will be the core security component of my new project. Expect it to get better as I continue to use it myself.

Is there going to be a specification?

Not if I have to write it. Honestly, I think a specification is a waste of time. I don’t care about Oz on platforms other than JavaScript. While Hawk and Iron have already been ported to other platforms, I am not aware of Oz ports yet.

What’s the background behind Oz?

Oz was initially an OAuth 2.0 higher-level protocol developed for the Yahoo Sled project (now open sourced as Postmile). In fact, Postmile turned out to be the beginning of a lot of cool stuff including the entire hapi ecosystem. However, it turned out the OAuth bits were adding no value, and compliance just made development slower and more complicated. My initial focus was on the authentication bits which resulted in Hawk. Hawk is actually widely used already and was the foundation of the Mozilla identity API. Iron followed, providing the token format needed to send self-encoded information securely (and it is heavily used by hapi users). I then got stuck on Oz for about three years because I didn’t have a use case for it. I left it alone for a while until it was time to put the final touches on it.

Got more questions?

Just open an issue and I’ll do my best to answer.

The Myth of Descriptive Module Names

I get constant grief for the way I name my modules. While some people enjoy the whimsical, often childish names, many others complain that the names are counterproductive. I strongly disagree. Descriptive module names are an anti-pattern.

Descriptive names are the exception

Modules are products. They are something we create and present to the world in hope of finding an audience. You don’t buy “car”, you buy a BMW, or a Toyota, or a Cooper. Not a single module on the npm most downloaded list has a descriptive name.

Descriptive names are anti-democratic

What do you think is the chance of anyone producing a successful competing WebSocket plugin for hapi if I named my module hapi-websocket? A descriptive name from someone with authority means no one else gets to play and offer their own vision. I would like to think I get a lot of things right but I will never get everything right. A healthy environment means keeping a level playing field.

Descriptive names are anti-competitive

The problem with descriptive names, and the reason people like them, is that they make life easier. It’s the lazy way out. You search for “websocket”, you find the websocket module. Done. Of course, the fact someone claimed the name has absolutely nothing to do with that module being the best one. The exact same outcome can be accomplished with keywords and a smarter search. Being the first person to grab a descriptive name should not give you an unfair advantage. Also, since good descriptive names are a finite commodity, you end up with mouthful names full of hyphens, which are a turn-off for many people.

Descriptive names are boring

npm install poop.


Made you smile.

Converting Full Time Pay to Hourly Contract Rate

Multiple people asked me this week how much they should charge for an hourly development engagement, trying to figure out what’s fair. They all knew their full time salary range but didn’t know how to translate that into a self-employed hourly rate. This post is a quick explanation of the formulas I use. This is obviously not an industry standard or what your next employer (or employee) is going to use, but you might find it a useful reference point. This is limited to US based employment. Needless to say, these formulas ignore the many employer tax differences between employing W-2 and 1099 workers. They also ignore the most important factor, which is the human element. We are talking about people hiring other people, where unique skill sets, personalities, and economic realities can often mean more.

There are two types of hourly employees: those you can easily replace with full time people, and those you can’t afford (or can’t convince) to work for you as W-2 employees. For the first group, the hourly rate is based on the employer cost of a full time person. Basically, the employer will want to keep its cost about the same regardless of the employee status. Assuming:

  • 20% employer overhead cost over the employee cash compensation for benefits, taxes, and other expenses
  • 250 work days a year
  • 15 days paid vacation
  • 8 hours work day

The formula is:
Hourly rate = Total annual cash compensation / 1567

For the second group – people the employer can’t afford to pay full time or the kind of top talent they can’t convince to join full time – the formula is slightly different. There is an additional consideration: people doing short term contract work typically lose about 20% annually due to time between jobs and the cost of finding work on a regular basis (a contractor factor of 0.80, or a 20% productivity loss). Because they are top talent, the employer will have to pay for that loss. This gives us:

Hourly rate = Total annual cash compensation / 1253

The full formula is:

Hourly rate = (A * E) / ((W - V) * H * C)

  • A – total annual cash compensation
  • E – employer overhead multiplier (1.2 for 20% overhead)
  • W – work days a year
  • V – paid vacation days
  • H – hours a day
  • C – contractor factor (fraction of time employed; 1.0 for the first group, 0.8 for the second)

This means:

  • An average developer making $120K a year would be able to get about $80/hour
  • A senior expert making $200K a year would be able to get about $160/hour
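
If it helps, here is the same formula as a quick JavaScript sketch (the function name and defaults are mine), reproducing the two examples above:

// Hourly rate = (A * E) / ((W - V) * H * C)
function hourlyRate(annual, contractorFactor) {

    var E = 1.2;                        // employer overhead multiplier (20% overhead)
    var W = 250;                        // work days a year
    var V = 15;                         // paid vacation days
    var H = 8;                          // hours a work day
    var C = contractorFactor || 1.0;    // percent time employed

    return (annual * E) / ((W - V) * H * C);
}

console.log(hourlyRate(120000));        // ~77, or about $80/hour
console.log(hourlyRate(200000, 0.8));   // ~160/hour for contract-only top talent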

If the full time job comes with equity (assuming a 25% annual vesting schedule of a publicly traded stock), you can add 25% of the equity value to the annual salary.

On Securing Web Session Ids

Someone asked: why does the Express session middleware add a hash suffix to the session id cookie? A great question. But first the obligatory disclaimer: like any security advice from someone who doesn’t know the specifics of your own system, this is for educational purposes only. Security is a complex and very specific area, and if you are concerned about the security of your system you should hire an expert who can review it, perform a threat analysis, and provide the appropriate advice.

Brute Force

Brute force attacks are those in which the attacker tries to gain access to the system by making repeated requests using different credentials (until one works). The most common example is an attacker trying to guess a user’s password. This is why passwords should be long and avoid dictionary words, to make them harder to guess. Properly designed systems keep track of failed authentication requests and escalate the issue when it appears an attack is in progress.

Passwords are not the only credential used in web authentication. The most common implementation includes a login page which, upon successful authentication, sets a session cookie on the client. The session cookie acts as a bearer token – whoever shows up with the token is considered to be the authenticated user. Setting a session cookie removes the need to enter your username and password on every page. However, this session cookie now acts as the sole authentication key and anyone who gains access to it gains access to the system. Cookies are, after all, just a simple string of characters.

A session id guessing attack is a type of brute force attack. Instead of trying to guess the password, the attacker tries to guess the session id and forge the authentication cookie. The attacker generates session ids and makes requests using those ids, in the hope that they will match actual active sessions. For example, if a web application’s session ids are generated in sequence, an attacker can look up their own session id and forge requests using nearby session id values. To protect against this attack we need to make guessing session ids impractical. Note I’m saying “impractical,” not “impossible”.


The first step is to make sure session ids are sufficiently long and non-sequential. Just like passwords, the longer the session id, the harder it is to find a valid one by guessing. It is also critical that session ids are not generated using a predictable algorithm such as a counter, because if such logic exists the attacker is no longer guessing but generating session ids. Using a cryptographically secure random number generator to produce sufficiently long session ids is the best common practice. What’s “sufficiently long”? Well, that depends on the nature of your system. The size has to translate into an impractical effort to guess a valid session id.

Another way to prevent an attacker from guessing session ids is to build integrity into the token by adding a hash or signature to the session cookie. The Express session middleware does this by calculating a hash over the combination of the session id and a secret. Since calculating the hash requires possession of the secret, an attacker will not be able to generate valid session ids without guessing the secret (or just trying to guess the hash). Just like strong random session ids, the hash size must match the security requirements of the specific application it is meant to protect, because in the end the session cookie is still just a string and open to guessing attacks. Session ids must be sufficiently long and impractical to guess. There are a few ways to accomplish this; the randomness and hashing techniques above are the two most common, but not the only ones.
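
To make this concrete, here is a minimal sketch of the id-plus-hash technique (the helper names are mine; the real Express session middleware delegates the signing to a dedicated module, but the shape is the same):

var Crypto = require('crypto');

// Issue: a strong random id plus an HMAC over the id and a server-side secret
function createSessionId(secret) {

    var id = Crypto.randomBytes(32).toString('base64');     // CSPRNG, never Math.random()
    var hash = Crypto.createHmac('sha256', secret).update(id).digest('base64');
    return id + '.' + hash;
}

// Validate: verify integrity before any session store lookup
function validateSessionId(cookie, secret) {

    var parts = cookie.split('.');
    if (parts.length !== 2) {
        return false;
    }

    var expected = Crypto.createHmac('sha256', secret).update(parts[0]).digest('base64');
    var a = Buffer.from(parts[1]);
    var b = Buffer.from(expected);
    return (a.length === b.length && Crypto.timingSafeEqual(a, b));     // constant-time compare
}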


If we generate strong random session ids, do we still need the hash? Absolutely! The core security principle is layering, also known as not putting all your eggs in one basket. If you rely on a single source of security, you end up with no security at all when that single source fails. For example, what if someone finds a bug in your random number generator? What if they find a way to hack that part of your system and replace it? There are countless known attacks exploiting exactly this – the generation of random numbers that turn out not to be so random after all.

Combining a strong random session id with a hash for integrity protects against flaws in the random number generator. It also protects against developer errors such as using the wrong random number generator function (e.g. the not-so-random method every system offers alongside the strong method). We all write bad code, no matter how great our process is or how experienced we are. It is part of software engineering. This is why it is so important to layer your security. A moat is not enough; you also want a wall behind it, and probably some guards behind the wall. If you think using the wrong random function or a deep bug in OpenSSL are the only two issues here, consider the common practice of monkey patching code in JavaScript and other dynamic languages. If someone anywhere in an entire application deployment messes with the global random facilities (for testing, logging, etc.) and breaks them (or it is part of a malicious code injection), session ids relying solely on randomness are no longer secure.


An important difference between guessing passwords and guessing session ids is the fact that passwords are associated with an account (e.g. a username). The account-password pair makes it easier to keep track of brute force attacks because it provides a relatively straightforward way to count failed attempts. However, when it comes to session ids, it is not as simple because sessions expire and do not include an account context. This means an invalid session id could come from an expired session or from an attacker, but without additional data (e.g. an IP address) it would be hard to tell the difference in a large scale system. By including an integrity component in the session id (via a hash or signature), the server can immediately tell the difference between an expired session, an unallocated session id, and an invalid session. Even if you just log invalid authentication attempts (and you should), you would want to log an expired session differently than an invalid one. Besides the security value of knowing the difference, it will also provide useful insight into how your users behave.


Credentials should expire, and therefore session ids should have a finite lifespan (where the duration is very much a system-specific value). While cookies come with an expiration policy, there is no way to ensure it is actually obeyed. An attacker can set the cookie expiration to any value without the server being able to detect it. A common best practice is to include a timestamp in every credential issued, which can be as simple as adding a timestamp suffix to the randomly generated session id. However, in order to rely on this timestamp, we must be able to verify it was not tampered with, and the way to accomplish that is with a hash or signature.

Adding a timestamp to the session id allows the server to quickly handle expired sessions without having to make an expensive database lookup. While this might sound unrelated to security, it is actually core to maintaining a secure application. A denial of service attack (or DoS) is an attack in which the attacker makes repeated requests with the sole purpose of consuming too many resources on the server, either shutting it down or making it inaccessible to others. If every request authentication requires a full database lookup at the application tier, an attacker can use forged session ids to stage a DoS attack with ease. By including an integrity component in the cookie, the server can immediately identify forged or expired credentials without any backend lookup cost.
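
Building on the sketch above (again, the names are mine), baking the expiration timestamp into the hashed payload lets the server tell an expired session from a forged one without a database lookup:

// Issue: include the expiration time in the hashed payload so it cannot be tampered with
function createExpiringSessionId(secret, ttl) {

    var payload = Crypto.randomBytes(32).toString('base64') + ':' + (Date.now() + ttl);
    var hash = Crypto.createHmac('sha256', secret).update(payload).digest('base64');
    return payload + '.' + hash;
}

// Returns 'invalid' (forged), 'expired', or 'valid' without touching the database
function checkSessionId(cookie, secret) {

    if (!validateSessionId(cookie, secret)) {       // HMAC check from the sketch above
        return 'invalid';
    }

    var expires = parseInt(cookie.split('.')[0].split(':')[1], 10);
    return (Date.now() > expires ? 'expired' : 'valid');
}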

Kill Switch

Sometimes things go wrong. And when they go very wrong, you need to have a way to immediately invalidate entire classes of sessions. Because generating a hash or signature requires a server-side secret or key, replacing the secret will immediately cause all session ids to fail validation. By using different secrets for different types of session ids, entire classes of sessions can be segregated and managed. Without such a mechanism, the application itself has to make a computed decision about the state of each session or perform mass database updates. In addition, in large distributed systems with database replication over different geographic locations, invalidating a session record in one location can take seconds and even minutes to replicate. This means the session stays active until the full system is back in sync. Compared to a self-describing and self-validating session id, the benefits are obvious.
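
As an illustrative sketch of the kill switch idea (the names are mine), keeping a separate secret per class of sessions means killing a class is just deleting or rotating one key:

// One secret per class of sessions; rotating or deleting a secret instantly
// invalidates every session of that class, with no database updates and no
// replication delay
var secrets = {
    web: 'long-random-secret-for-browser-sessions',
    mobile: 'long-random-secret-for-mobile-sessions',
    admin: 'long-random-secret-for-admin-sessions'
};

function checkSessionByClass(cookie, type) {

    var secret = secrets[type];
    if (!secret) {
        return 'invalid';                           // the entire class has been killed
    }

    return checkSessionId(cookie, secret);          // from the sketch above
}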

General Purpose

An important feature of the Express session middleware is its support for user-generated session ids. This allows developers to deploy the middleware in an existing environment where session ids are generated by an existing entity, which might reside on a completely different platform. Without adding a hash to the user-provided session ids, the burden of building a secure system moves from the expert (the module author) to the user (who is likely to be a security novice). Applying a hash is a much better approach than forcing an internal session id generator.


Adding a hash to a strong random session id is not all you should do. Whether your moat can benefit from crocodiles is, again, a castle-specific decision. Without going too far from the topic, there are plenty of other layers you can add to your session management tier. For example, you can use two session credentials: one long lived (lasting as long as the session) and another short lived (good for minutes or hours). You use the long lived credential to refresh the short lived one, and by doing so reduce the exposure of the long lived credential on the network (especially when not using TLS). Another common practice is to keep one cookie with general information about the user (e.g. first name, recently viewed items, etc.) alongside the session, and to include something from that cookie in the hash to create a binding between the user’s active state and the authentication. It’s a way of bringing back the “username” into the workflow. To go even further, hashing can be replaced with signatures, and cookie content can be encrypted (and then hashed or signed on top). The security verbosity must match the threat.

Last Words

If you take one thing away from this post, I hope it is the concept of layering security. While math plays a significant role in security, it is far from the only tool. Measuring the odds of guessing a session id as the sole source of security fails to recognize that security comes from a combination of defenses. I would also strongly advise against having the kind of academic debate that focuses on a single aspect of a secure system in public (at least without the proper disclaimers). It is extremely misleading to narrow down the question to the point where it causes confusion and misinformation. Asking “is there a statistical benefit to hashing a strong random session id?” is harmful because it creates the false impression that this is the only consideration. It moves the discussion from the real world to that of an incomplete abstraction. As I hope I demonstrated above, there are plenty of reasons to include a hash beyond just making guessing impractical.

Introducing chairo, a hapi.js Microservices Plugin


Over the past four years hapi grew to be the framework of choice for many projects, big or small. What makes hapi unique is its ability to scale to large deployments and large teams. As a project grows, so does its complexity – engineering complexity and process complexity. hapi’s architecture and philosophy handle the increased complexity without the need to constantly refactor the code or build meta-frameworks on top of it, while keeping the simple cases simple.

hapi, being a web application framework, is not concerned with how the actual business logic is implemented. It provides the developer with a few hooks (in the form of handlers and extensions) to implement their logic, and largely stays out of what goes into those hooks. As the project complexity grows, so does the need to decouple functionality and distribute internal load. The hapi server becomes the outwards facing interface (either via an API or UI) while behind it an array of other technologies is used to break the monolithic business logic into smaller pieces (some of which can themselves use hapi).

As node extends deeper into the full system stack and is used to implement more and more core services all the way down to the database or file system, we need better tools to connect all these components together. While we can certainly use many standalone hapi servers for a distributed RESTful SOA, this might add complexity and overhead that is better addressed with other tools.


The basic premise of microservices is to isolate business logic to its smallest components, each implemented separately and with a clear and simple interface. Complex solutions are then broken down into a set of small services which are composed together to provide the combined, orchestrated functionality.

The important part about microservices isn’t the deployment strategy which should be based on load and scale requirements (as well as policies and politics). The focus is on writing the code in a way to allow these services to be deployed as both a monolithic single executable and as many distributed processes based on the evolving needs of the environment in which they run. Such decisions represent a trade-off between software complexity and operational complexity.

A good microservices framework provides the tools to define these components and connect them together through a message bus which supports this range of deployment strategies. As the project grows, services can be moved, changed, or replaced with minimal impact on the rest of the architecture because they can live side-by-side with older versions.


Seneca is a microservices framework from nearForm, a leading node consultancy based in Ireland. The nearForm team has been an early adopter of node and is an active member of the community (they organize the European NodeConf franchise, among other activities).

The core feature of Seneca is the registration and invocation of actions through simple and powerful pattern matching. Each of these actions (which can be as simple as a single function) represents a microservice, which in turn can invoke other actions. To reach another service, you just need to know its matching pattern, regardless of where it is deployed.

var Seneca = require('seneca');

// Create instance
var seneca = Seneca();

// A microservice for loading a user record from a database
// (db stands in for whatever database client you use)
seneca.add({ record: 'user' }, function (message, callback) {

    db.load('user', message.id, callback);
});

// A microservice for information about today
seneca.add({ service: 'today' }, function (message, callback) {

    return callback(null, { date: (new Date()).toString(), weather: 'Sunny' });
});

// Invoking the two services
seneca.act({ record: 'user', id: '123' }, function (err, user) {

    seneca.act({ service: 'today' }, function (err, today) {

        console.log('Hi ' + user.name + '! It is a ' + today.weather + ' day today');
    });
});

And to make things easier, Seneca accepts string patterns as well using a loose JSON format:

seneca.act('record:user,id:123', function (err, user) {
    seneca.act('service:today', function (err, today) {
        console.log('Hi ' + user.name + '! It is a ' + today.weather + ' day today');
    });
});

The combination of the two services can be published as another service:

seneca.add('service:welcome', function (message, callback) {
    seneca.act({ record: 'user', id: message.id }, function (err, user) {
        seneca.act('service:today', function (err, today) {
            return callback(null, {
                message: 'Hi ' + user.name + '! It is a ' + today.weather + ' day today'
            });
        });
    });
});

seneca.act('service:welcome,id:123', function (err, result) {
    // result.message contains the composed welcome message
});

The chairo plugin

Seneca is ideal for building microservices implementing the bits and pieces of the application business logic. However, its pattern matching routing interface is optimized for internal consumption and less for public exposure of these services. It would be unusual to expose Seneca actions directly as a public API. In addition, Seneca focuses on the backend architecture, not on interfacing with a front end experience (single page application or server-rendered views).

The new chairo (which means “happy” in ancient Greek) plugin brings the power of Seneca to hapi by bridging between these two frameworks and allowing developers to use the richness of serving web and API content via hapi while building their business logic using the Seneca microservices architecture.

chairo is registered with a hapi server like any other plugin using the hapi server.register() method. Once registered it decorates the server and request objects with a reference to the initialized seneca instance:

var Chairo = require('chairo');
var Hapi = require('hapi');

var server = new Hapi.Server();
server.connection();

// Pass options to the Seneca constructor
var senecaOptions = { log: 'silent' };

// Register plugin
server.register({ register: Chairo, options: senecaOptions }, function (err) {

    // Add a Seneca action
    var id = 0;
    server.seneca.add({ generate: 'id' }, function (message, next) {

        return next(null, { id: ++id });
    });

    // Invoke a Seneca action
    server.seneca.act({ generate: 'id' }, function (err, result) {

        // result: { id: 1 }
    });

    // Add an endpoint
    server.route({
        method: 'POST',
        path: '/id',
        handler: function (request, reply) {

            // Invoke a Seneca action using the request decoration
            request.seneca.act({ generate: 'id' }, function (err, result) {

                if (err) {
                    return reply(err);
                }

                return reply(result);
            });
        }
    });
});
hapi already provides its own version of actions using server methods. While server methods can be cached and used as handlers and prerequisites, they cannot be decoupled from the server implementation and must reside within the same process. The new server.action() method provided by chairo maps a Seneca action pattern to a hapi server method. This allows using Seneca actions anywhere server methods can be used with the Seneca flexibility of maintaining the actual business logic elsewhere.

var Chairo = require('chairo');
var Hapi = require('hapi');

var server = new Hapi.Server();
server.connection();

server.register(Chairo, function (err) {

    // Set up a Seneca action
    var id = 0;
    server.seneca.add({ generate: 'id' }, function (message, next) {

        return next(null, { id: ++id });
    });

    // Map action to a hapi server method
    server.action('generate', 'generate:id', { cache: { expiresIn: 1000 } });

    server.start(function () {

        // Invoke server method
        server.methods.generate(function (err, result1) {

            // Invoke the same server method
            server.methods.generate(function (err, result2) {

                // result1 === result2 (cached)
            });
        });
    });
});
In simple cases, all you want to do is map a Seneca action to a hapi endpoint and proxy the action result back. chairo adds a new reply() interface decorator, reply.act(), which sends back a handler response using the result of the Seneca action matching the specified pattern.

server.route({
    method: 'POST',
    path: '/id',
    handler: function (request, reply) {

        // Reply using a Seneca action
        return reply.act({ generate: 'id' });
    }
});

In addition, the act handler shortcut is also provided:

server.route({
    method: 'POST',
    path: '/id',
    handler: { act: 'generate:id' }
});

For more complex cases where a hapi endpoint requires combining data from multiple sources, some of which are based on Seneca actions, chairo provides the reply.compose() decorator, which renders a template view using the provided template and context object. The context object combines regular object keys with top-level keys with a $ suffix, which are resolved into the corresponding Seneca actions matching each key’s value pattern.

// Set up a hapi view engine
server.views({
    engines: { html: require('handlebars') },
    path: '../templates'
});

// Add route
server.route({
    method: 'GET',
    path: '/welcome',
    handler: function (request, reply) {

        // Set up context with both Seneca actions and simple keys
        var context = {
            today$: 'service:today',
            user$: { record: 'user', id: 123 },
            general: {
                message: 'Welcome'
            }
        };

        // Reply with rendered view
        return reply.compose('example', context);
    }
});

Using the template ./templates/example.html:

    <h1>{{general.message}} {{user$.name}}!</h1>
    <h2>Today is {{today$.date}} and it's going to be a {{today$.weather}} day.</h2>

In addition, the compose handler shortcut is also provided:

server.route({
    method: 'POST',
    path: '/id',
    handler: {
        compose: {
            template: 'example',
            context: {
                today$: 'service:today',
                user$: { record: 'user', id: 123 },
                general: {
                    message: 'Welcome'
                }
            }
        }
    }
});
What’s Next?

The initial version of chairo is a very basic implementation of the Seneca features within the context of the hapi ecosystem. It maps the basic actions functionality and allows simple and elegant composition of API endpoints and web pages in hapi powered by existing or new Seneca deployments. When used with more advanced Seneca configuration, the actions can be moved to other processes, benefiting from the full power of a distributed microservices architecture.

Future versions of this plugin will look to incorporate more Seneca functionality such as data entities, make routing configuration simpler for a large distributed system, and combine the logging functionality of the two frameworks into a unified operations view.

Please give it a try and post questions, feedback, or issues.

The Best Kept Secret in the Node Community

It’s not that anyone is trying to hide this from you. It’s that those who have gone through the experience and know how incredible it is just assume it to be so obvious that it is not worth mentioning. If you have not been to a NodeConf event at Walker Creek Ranch you are passing up a rare opportunity to truly elevate your node game and connections. This is not hyperbole.

My first NodeConf (at the very first NodeConf in Portland) was a typical ineffective event. I didn’t know anyone. No one really cared who I was and what I was working on. Yes, I could always name drop OAuth and other bullshit I worked on in the past but what I actually cared about – a node-based project called Sled – was of little interest to anyone. I also wasn’t part of the small group of people who ran the node project. I didn’t know any of them.

Conferences can be hard. They are not a natural place to meet people at a sufficiently deep level to build meaningful lasting connections. That’s not true for all conferences but I am sure your first few events felt pretty lonely (unless of course you went with people you knew in which case you stuck with them and missed the point as well). NodeConf at Walker Creek Ranch is different. Completely fucking different.

First, it is attended by pretty much all the internet famous node celebrities (and me). Second, we are all Mikeal’s hostages there. There is nowhere to go. There is nothing to do (other than hang out with people). There is barely any wifi so working on your laptop is kinda useless. This might sound terrible but here is the thing – everyone is equally stuck.

Hey look over there, it’s Isaac – yeah, the guy who ran node for a while and created npm. Want to chat with him? Go ahead – get him! How fast can he possibly run away from you in those ridiculous shoes? Nope, you are not wrong, that’s Substack over there hacking away on some new crazy fucking tiny module shit. Ummmmmm – you guessed it, dshaw chilling out under the big oak tree (the same fucking tree the fucking domains API was conceived under – maybe this year we’ll bury it there). Did a drone killing robot just run past you? I guess Raquel is up to no good again. And where’s that awful sound coming from?! Must be Nexxy throwing another rave party at the boogie barn.

Being successful in node – like in most other emerging platforms – requires a solid network. With io.js moving faster than most people can keep track of and the module landscape changing daily, it is absolutely essential to be both connected to others in the community and to have access to the people who can help you out. Getting a question answered about node or an npm module is dramatically easier when you’ve had a drink with the author and can ping them directly on IRC or email. Making a personal connection matters a lot.

I can tell you without a doubt that my personal success with node and my community connections were both a direct result of attending the very first NodeConf SummerCamp at Walker Creek Ranch. This is where I met all the people I later worked closely with to make node a huge success at Walmart. That event was instrumental to my personal success and the success of my employer. Need proof I’m super successful? Well, I’m the only person whose picture has ever been posted on the official website – after all, isn’t that how we measure success!

NodeConf this year is likely to be a smaller event. This means more quality time with people, more meaningful connections. If you are able to travel to CA for the event, you have to be an idiot not to. It’s really that simple. Here is a chance for you to spend time with the people who wrote a lot of the stuff you’re using, and the people who are likely to write it next. This is the place so many of the second and third rounds of node leaders came from, and you could (and should) be part of it.

If this sounds like a bit of hero worship, it’s not. I am dropping names because these are all amazingly generous people who not only helped push node forward, but are also known for their kindness and welcoming attitude. They are all very busy and in other places can be hard to get hold of for a meaningful conversation. But at Walker Creek Ranch, we’re all just hanging around chillin’. The setting is so beautiful and relaxing that no one is treated differently. There are no private rooms, secret parties, or dinners where all the cool kids are hangin’ (except you).

This should be the easiest conference expense to justify to your boss. If you are doing node and have never been to a NodeConf event at the ranch, you are throwing away an opportunity to improve your skills, your network, and your influence, and to make a difference at the company you work for. It is a no-brainer.

There are still tickets available and if you use my discount code you will get $50 off any ticket type. I expect to see you there next month!

On Leaving Walmart

It has been an exciting three and a half years, but it’s time to move on.

Looking back, there is much to be proud of. We have produced massive amounts of open source code that has been successfully adopted by dozens of companies. We created an open flow of information about our production experience (e.g. #nodebf) that played a major role in increasing node adoption by the enterprise. It’s amazing how our small team of 18 people had an amplified influence over the future of node. I am extremely grateful for the trust I was given building and leading this exceptional team. This was very much a team effort all around.

Our biggest and most visible accomplishment has been the creation of the hapi framework and its community. It is very hard to predict how an open source project will work out, especially one created by a corporation (and even more so when that corporation is Walmart). hapi’s success clearly demonstrates that by embracing the community and openness from the start, companies can reap valuable rewards.

We never said “we want to work with the community” because we considered ourselves part of the community. Over the last year, the vast majority of work on hapi modules has been done by people outside of Walmart. In fact, the shift has been so dramatic that we changed the entire governance model last year to encourage and empower this transformation. At this point, Walmart is responsible for a very small share of the resources maintaining the codebase and the community.

This transformation has been so successful that it is no longer a Walmart project – and that’s a huge win for Walmart. Walmart gets to continue to benefit from their initial investment by having access to a best-in-class framework, custom-made to suit their needs, with little to no ongoing cost. They planted and nurtured a seedling and now get to share in the benefits of the tree, cost free.

There were two less visible (but equally impressive) accomplishments worth mentioning. The first is the amazing remote team we built, which can serve as a model for other companies to follow. When I joined Walmart, attracting talent was a major challenge. People interviewed with us for the sole purpose of getting a competing offer to leverage against the company they actually wanted to join. But by reaching beyond the local Bay Area boundaries and showcasing our work and community participation, we quickly became one of the most sought-after teams to join.

The second is the culture of quality we created. Learning from open source and community management best practices, along with extensive investment in testing tools, we developed an engineering workflow that has produced unmatched quality results. What is impressive about it is how quickly it was adopted by others outside of Walmart and the hapi community.

It has been gratifying to see our accomplishments celebrated by the community and to be able to share our success with others through open source and public sharing of information. Walmart has been more consistently supportive of these dramatic cultural shifts in attitude towards the outside world than any company I have previously worked for. I hope others will use this example to push for change in their own organizations.

As for hapi moving forward, nothing changes. I will continue to maintain hapi and participate in leading the community around it. As I mentioned above, hapi has been successfully transitioned out of Walmart over the last year and is fully owned by the community that supports it. No one owns any trademarks or has special rights to the code, names, logos, etc. It’s all under the same open license.

There is one person I have to thank by name and that’s Dion Almaer – you won’t find a more supportive, generous, and inspiring person to work for. It has been an amazing experience and I am grateful to everyone who took part – we share these accomplishments.

As for what’s next, I guess it’s time to find another adventure (yep, I’m looking).