Running Webpack on AWS Lambda

AWS Lambda, and more broadly the serverless ecosystem, is changing the way we create and think about our software in the cloud.
With serverless we can really focus on coding our service rather than worrying about the infrastructure. Serverless is not useful in every situation, but there is a wide range of cases where it becomes handy and very powerful.

For instance, let's think for a moment about our automation build pipeline: what would you say if I told you that you don't need to tie it to a specific tool like Jenkins or Bamboo, but could instead use Jenkins or a similar tool just as a launcher, running multiple Lambda functions in parallel or in sequence and leveraging the power of the cloud?

I can give you a concrete example: last night I was doing a spike to generate a Javascript bundle in the cloud with Webpack.
So I invested some time creating an AWS Lambda function that executes Webpack to bundle a simple example containing lodash and some ES6 code like this one:

import _ from 'lodash';

function component() {
    var element = document.createElement('div');
    element.innerHTML = _.join(['Hello', 'webpack'], ' ');
    return element;
}

document.body.appendChild(component());

This is an example you can find on the official Webpack website; I used it just for the sake of the demo.
What we want to do now is bundle this ES6 code and its library into a single Javascript file that could be used inside our hypothetical application or website, mimicking what a step of a build pipeline would do.
Obviously you could run any other executable inside AWS Lambda; I chose Webpack because it is the one used at my workplace.

AWS Lambda to the rescue

If you are creating an automation pipeline in the cloud and maybe you don't have many devops people in your team or company, you should spend some time learning AWS Lambda: it can help out with these kinds of activities.

What is AWS Lambda? Long story short: it's a stateless container managed by AWS where you can focus on writing the business logic of your activity rather than thinking about the infrastructure.
Too good to be true? Yeah, you are right, Lambda has some limitations:

Limits tables: information retrieved from the Amazon documentation in March 2017

More information regarding the limits is available on the Lambda documentation website.

But still, the amount of things you can do with it is pretty impressive!

So, going back to our main goal: we want to bundle our Javascript project with Webpack inside Lambda. How can we achieve that?

First things first: I created a git repository where you can find a Javascript project ready to use inside an AWS Lambda function. This way you won't need to create a custom project and you can focus more on the AWS side.
There are a few points I'd like to highlight in this simple project, because they are usually the ones where you can waste your time:

  1. Lambda functions can save temporary files inside the /tmp/ folder (bear in mind that you are running your code inside a container!).
    If you try to save anywhere else you will receive an error when executing the function.
  2. With Lambda you can run executables or Node CLI tools like NPM or Webpack simply by uploading them inside your Lambda environment and referring to them with a relative path.
  3. AWS Lambda runs for no more than 300 seconds, therefore if you have a complex operation you could split it up into different chained Lambda functions that are triggered in sequence.
    This should help you debug your operations too.

In the project I set up the webpack config file like this:

var path = require('path');
module.exports = {
   entry: './app/index.js',
   output: {
      filename: 'bundle.js',
      path: path.resolve('/tmp/')
   }
};

As you can see I'm saving my bundle in the tmp folder because it is the only one with write permissions (remember the 512MB capacity limit for that folder).

Then I created an index.js file where I define my Lambda function:

var spawn = require('child_process').spawn;
var fs = require('fs');

exports.handler = (event, context, callback) => {
   // run the Webpack binary bundled with the project, using the config shown above
   var wp = spawn('./node_modules/.bin/webpack', ['--config', 'webpack.config.js']);

   wp.stdout.on('data', function(data){
     console.log('stdout: ' + data);
   });

   // anything written to stderr is treated as a failure here
   wp.stderr.on('data', function(err){
     context.fail("webpack failed: " + err);
   });

   // when Webpack exits, read the bundle from /tmp and return its content
   wp.on('close', (code) => {
     fs.readFile('/tmp/bundle.js', 'utf8', function (err, data) {
         if (err) context.fail("read file failed: " + err);
         context.succeed(data);
     });
   });
};

Very simple code here. I'm using Node, as you can see, but you could also use Python or Java (these 3 languages are supported by AWS Lambda at the moment); it's up to you to pick your favourite.
I import the spawn method in order to run Webpack, and once it has finished I read the content of the Javascript bundle created by Webpack in the tmp folder and return it via the context.succeed method.
Context is an object, always available inside a Lambda function, that allows you to interact with Lambda to retrieve some useful information or, like in this case, to signal whether the function succeeded or failed.

Now we are ready to upload the application to an AWS Lambda function.
In order to do that you will need to zip your project files (not the parent folder, just the files) and upload them to AWS.
If you didn't install the project dependencies after cloning the repository, you should do it before uploading the project to AWS.

Select and compress only the files inside your project, not the parent folder

Inside your AWS console, after selecting the Lambda service, you should be able to create a new function (as far as I know not all regions support AWS Lambda yet).
Choose your favourite language, in my case Node 4.3, and define the basic information like "function name", "description" and so on.
Then, instead of writing the Lambda function code inside the editor provided by AWS, open the dropdown and select Upload a ZIP file.

Select upload a ZIP file

Then set up the handler, role and advanced settings this way

it's important to set at least 30 seconds as the timeout period

The important part is configuring the container where the Lambda is going to be executed with enough memory and a decent timeout: we are running an executable, so we don't want the execution to be cut short by a Lambda timeout.
If for any reason you need to go beyond the 300-second soft limit set by default, you will need to contact Amazon and ask to increase it.
Another important thing to remember is that when a Lambda is not invoked for a certain amount of time (estimated at around 5 minutes), the container used by your code is reused for other Lambda functions.
Therefore when you trigger your function again it will be recreated (cold Lambda); instead, if the Lambda function runs several times within a few minutes (warm Lambda) we will have better performance, because the container is already available and alive, ready to execute a new activity.

Now if you want to test your Lambda function, you just need to click the Test button and you should get an output similar to this one:

you can easily see that the output produced by this Lambda function is the content of the Javascript bundle created by Webpack

If you want to test the Lambda I created live, you can trigger it from this link.

Where to go from here

Obviously the example described is very basic and it works mainly with the project I created, but it is useful to know how you could expand it:

  1. AWS Lambda functions accept arguments passed when we trigger them (see the sketch after this list), therefore you could potentially upload your project files to S3 and trigger the Lambda function right after the upload.
    In fact Lambda can be triggered by several AWS services like DynamoDB, SNS and so on; S3 is on the list.
  2. In order to expose the Lambda externally you will need to connect it via API Gateway, another tool provided by AWS.
    In the example I shared above, I configured API Gateway to trigger my Lambda function when someone calls a specific URL.
  3. The fastest way to work with AWS Lambda, and my personal recommendation, is via a CLI tool like the Serverless Framework: you won't need to configure API Gateway and your Lambda environment manually, because the Serverless Framework provides a boilerplate to work with.
    On top of that it allows you to test your Lambda functions locally without uploading them to AWS every time.
    There are many other CLI tools available, but at the moment Serverless is the most complete and well documented, with a large community behind it.
  4. Recently Amazon added the possibility to set environment variables for Lambda functions, so if your project requires them you can configure them easily via the AWS console or inside the Serverless Framework configuration file.
  5. If you don't want to upload a CLI tool with the node_modules folder, you can create an executable with all the dependencies statically included and upload just that file inside the ZIP.
    For Node I found a tool called EncloseJS that works pretty well with Webpack and NPM.
  6. Remember not to abuse the power of the serverless ecosystem; understand the pros and cons before starting to use it, because in some cases it's definitely not the right choice.
  7. An important consideration about the API Gateway + Lambda combination is that it can work with the HTTP2 protocol out of the box, and you can use CloudFront to cache your responses (even if they are dynamic) with just a few settings in the API Gateway console.
  8. With Lambda you pay only for what you use, therefore if you use it for cron services, automation pipelines, triggering database backup operations or similar, you could end up saving quite a few dollars compared to an EC2 instance.
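To give an idea of point 1, here is a minimal, hypothetical sketch (not part of the repository above) of how a handler can read the arguments it receives, in this case the records of an S3 upload notification; bucket and key names are made up:

exports.handler = (event, context, callback) => {
    // when the function is triggered by S3, the event contains a Records array
    // describing the uploaded object(s)
    var records = event.Records || [];

    records.forEach(function (record) {
        var bucket = record.s3.bucket.name; // e.g. "my-projects-bucket" (hypothetical)
        var key = record.s3.object.key;     // path of the uploaded file

        console.log('New file uploaded:', bucket, key);
        // here you could download the project from S3 into /tmp and run Webpack on it
    });

    callback(null, 'done');
};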

Visual Studio Code extensions demystified

When I tried Visual Studio Code for the first time on my Mac I was quite impressed by its performance.
The investment Microsoft has made in this editor over the last few years is really remarkable, considering also that it's an open source project and not a commercial one.
As you may know, with Visual Studio Code you can create your own extensions and then share them with the community on the marketplace.
For me this was just an interesting and quick pet project before going back to my reactive studies, but it is worth sharing.

I created a simple extension that retrieves all the annotations in my Javascript projects, grouping them per category inside the output panel or in a markdown file.

vscode-annotations-panel

You can download the extension, called vscode-annotations, directly from the marketplace or from the extensions panel inside the Visual Studio Code editor.
If instead you want to take a look at the source, feel free to clone the project from Github.

 

extensionpanel

First steps

If you want to quickly start working on an extension, there is a Yeoman generator provided by the Visual Studio Code team that will create the folder structure and the files necessary for publishing your extension later on.
In order to use it, just run these commands in your terminal window:

npm install -g yo generator-code
yo code

During the generation, the interactive generator will ask whether you prefer working with Typescript or plain Javascript; in my case I picked the latter.
After that your project is ready and you can start to have fun with Visual Studio Code!
If you prefer to start with the classic Hello World project, feel free to check the Microsoft tutorial.

In my annotations extension, what I've done is simply provide 3 commands available in the command palette (CMD+SHIFT+P or View > Command Palette):

  • output all the annotations in the file opened inside the editor
  • output all the annotations in a specific project
  • export all the annotations in a specific project to a Markdown file

The first two create an output panel inside the editor showing the annotations present in a specific file or an entire workspace; the third one creates a markdown file with all the annotations for a specific project.

When you want to add a command to the command palette, you need to set it up in a few files; the first one is the package.json:

"activationEvents": [
     "onCommand:extension.getAnnotations",
     "onCommand:extension.getAllAnnotations",
     "onCommand:extension.createAnnotationsOutput"
 ]

and then in the commands array:

"contributes": {
    "commands": [
    {
        "command": "extension.getAnnotations",
        "title": "ANNOTATIONS: check current file"
    },
    {
        "command": "extension.createAnnotationsOutput",
        "title": "ANNOTATIONS: export markdown file"
    },
    {
        "command": "extension.getAllAnnotations",
        "title": "ANNOTATIONS: check current project"
    }]
 }

So in the commands array we are just defining the label that will appear in the command palette and the action that should be triggered when the user selects a specific command.
Then we need to register each of them in the extension.js file (created by the scaffolder), inside the activate method that is triggered once the editor has loaded your extension:

vscode.commands.registerCommand('extension.getAnnotations', function () {
    // extension code here
 });
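For completeness, this is roughly how that registration sits inside the activate function; the command body here is just a placeholder:

var vscode = require('vscode');

function activate(context) {
    // register the command and keep the disposable so the editor can clean it up
    var disposable = vscode.commands.registerCommand('extension.getAnnotations', function () {
        // extension code here, e.g. scan the active document for annotations
        vscode.window.showInformationMessage('Scanning annotations...');
    });

    context.subscriptions.push(disposable);
}

exports.activate = activate;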

With just these few lines of code you get the expected result of having your commands available in the palette.

vscode-annotations-palette

Microsoft provides a well documented API for interacting with the editor; also, because it's based on Electron, bear in mind that you can use the Node.js APIs to extend the functionality of your extension, for instance to create a file or interact with the operating system.

Working with the workspace

When you want to interact with the editor, manipulating files or printing to the embedded console, you need to deal with the workspace APIs.
In order to do that you need to become confident with a couple of objects of the vscode library:

  • window
  • workspace

With window and workspace you can handle the editor UI and the selected project end to end.
The window object is mainly used to understand what is happening inside a file while a user is editing it.
You can also use the window object to show notifications or error messages, or to change the status bar.

The workspace object instead lets you manage all the interactions that happen through the menu or the editor interface.
The workspace object is useful when you want to iterate through the project files, or when you need to understand which files are currently open in the editor and when they will be closed, for instance.

In my extension I used these 2 objects for showing a notification to the user:

vscode.window.showErrorMessage('There aren\'t javascript files in the open project!');

for interacting with the output panel:

vscode.window.createOutputChannel(outputWin_NAME);

[....]

outputWin.appendLine(`FILE -> file://${doc.fileName}`);
outputWin.appendLine("-----------------------------------------");
outputWin.appendLine(getBody(data, OUTPUT_PANEL_CONFIG))
outputWin.appendLine(OUTPUT_PANEL_CONFIG.newline);

outputWin.show(true);

and for iterating over and opening the Javascript files present inside a project:

vscode.workspace.openTextDocument(file.path)

[...]

vscode.workspace.findFiles('**/*.js', '**/node_modules/**', 1000).then(onFilesRetrieved)
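Putting those two calls together, a simplified version of the project-wide scan might look like this (the annotation parsing itself is omitted):

// find up to 1000 Javascript files in the workspace, ignoring node_modules
vscode.workspace.findFiles('**/*.js', '**/node_modules/**', 1000)
    .then(function (files) {
        files.forEach(function (file) {
            vscode.workspace.openTextDocument(file.path).then(function (doc) {
                var content = doc.getText();
                // here the extension would look for annotations (TODO, FIXME, ...)
                // and append the results to the output channel
            });
        });
    });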

Debugging an extension

Considering you are developing an extension for an editor, you can easily debug what you are doing by simply running the extension in debug mode (F5, or fn+F5 on a MacBook).

screen-shot-2017-01-31-at-04-06-39

A few suggestions regarding the debug mode:

  • console.dir doesn't work; console.log will do what console.dir does if you are inspecting an object, but not an array!
  • when an error occurs the message is not very self-explanatory (kudos to Facebook for React Native's error handling, best implementation ever!), so you will need to follow the stack trace as usual

Publishing an extension

The last part of this brief post is about submitting your extension to the Visual Studio Code marketplace.
Also in this case Microsoft did a good job creating an extensive guide on how to do that; a few suggestions here as well:

  • in order to submit an extension to the marketplace you will need to create a Microsoft account and a Visual Studio Team Services account
  • when you create the Personal Access Token for publishing your extension, bear in mind to set access to all accounts and all scopes, otherwise you could end up with a 401 or 404 error when you try to publish the extension
    screen-shot-2017-01-31-at-04-26-47
  • the vsce command line tool is pretty good for creating a publisher identity and super fast for publishing an extension to the marketplace.
    Considering that it is a CLI tool, you can also automate a few parts of the publishing process (increasing the release number, for instance) by adding scripts to your package.json (see the sketch after this list)
  • to make your extension more discoverable in the marketplace, remember to add the keywords array inside the package.json with meaningful words and the appropriate category; at the moment the following categories are available:
    Debuggers
    Extension Packs
    Formatters
    Keymaps
    Languages
    Linters
    Snippets
    Themes
    Other

    screen-shot-2017-01-31-at-22-14-31
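As a rough, hypothetical example of what that package.json metadata and automation could look like (publisher id, version and script name here are invented):

{
    "name": "vscode-annotations",
    "publisher": "my-publisher-id",
    "version": "0.0.3",
    "categories": ["Other"],
    "keywords": ["annotations", "todo", "fixme", "javascript"],
    "scripts": {
        "publish-extension": "vsce publish patch"
    }
}

Running npm run publish-extension should then bump the patch version and push the new release to the marketplace via vsce.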

Wrap up

There are tons of other things to do and discover when developing a Visual Studio Code extension, but I think this is a good recap of the lessons learnt creating one, to be used along with the Microsoft guide.

The art of coding

Today I spent a few hours in my favourite London museum: the Design Museum.
I went to the main exhibition, called Designer, Maker, User, where they created a journey between these 3 actors, explaining how they are linked together and all the innovations they brought about in the past 100 years in different industries like transportation, fashion, design and so on.

During the time spent in the museum I drew a parallel between the exhibition topic and the software world.
I truly believe that we are creating some sort of art when we write code, in a hidden way if you like, because our users can only appreciate the "visual" result rather than the journey behind it; but in a certain way that is also what we are evaluating when we look at a statue or a painting, right?

Communication

At the beginning of this journey there was a sign explaining the topic:

Design is a process carried out by people, for people.
At its heart is a dialogue between three key people: the designer, the maker and the user.
[…] The exhibition shows how designers respond to the needs of makers and users, how users consume and influence design and how revolutions in technology and manufacturing transform our world

Does it sound familiar? Have you ever thought about how many times we execute this process on a daily basis in our job?
Think of all the innovations we went through in the past few years, moving from procedural programming to functional programming, from MVC to reactive architectures, from monolith to microservices architectures.

Have you ever realised how important the feedback loop (the dialogue between designer, maker and user) is in our job too?
A feedback loop that is partially hidden behind techniques like unit testing, TDD, continuous integration or continuous delivery.
I have seen several times in my career people adopting or using these techniques without understanding why they should do it.
The feedback loop is a well known technique in many Agile and Lean frameworks and you can learn a lot about it from different talks and books.
If you are curious how to improve it, I'd suggest starting with a book on Kaizen; you will discover the feedback loop as a cornerstone of your continuous improvement journey.

Simplicity

A few steps later, I found a couple of subway maps: the first London Tube map and the New York subway map.

the first Tube Map

Both were created around the 70s with the main intent of simplifying travel across the city.
Despite the fact that these maps distort reality and don't provide a faithful representation of the city, they became the standard way of travelling for millions of users from all over the world.

The takeaway here is simplicity: simplicity in our code, simplicity in our tests, simplicity in our architectures, simplicity in our daily tasks.

A technique I learnt during my career is that no matter how complex a specific task or implementation is, it can always be divided into smaller chunks of work, and when you reach the smallest one, you can start working on it.
This approach will give you 2 main benefits:

  1. you are defining a list of steps to follow in order to build a more complex algorithm, therefore you are already thinking about how to achieve the final solution in a systematic way.
    I'd suggest writing down the list of things to do so you can visually see what you need to achieve (what is not visualised doesn't exist)
  2. if you have a list of small tasks that you can finish in minutes instead of hours, your self-awareness will grow minute by minute, and at the end of the day you will see many things achieved instead of "just a few" big tasks.

Modularisation

Another interesting video I watched in the museum was about the design of the new Tube trains that will be available in the future.
In this case the main goal for the designers was to increase the frequency and the capacity of each line while maintaining the same infrastructure.

new London Tube train

Apart from the current infrastructure (tunnel size, for instance), the designers need to take into consideration how often these trains will be replaced.
In fact each train will remain in use for 30-40 years, and you can imagine how many innovations and improvements humans can make in that amount of time.
So they approached the problem taking into consideration the 2 big constraints (time and infrastructure) and designed a larger train, optimising every single centimetre inside each coach and creating bigger entrances to speed up access to the coach; but, more importantly, in order to accommodate the future innovations that could improve their trains, they created modular parts inside each coach that can be removed and exchanged with something better in the future.

That approach completely blew my mind; the modularisation of applications is a problem solved in many ways inside the software world.
Encouraging loosely coupled relations over tightly coupled ones is a cornerstone of all the main frameworks.
Think for a moment about the frontend framework evolution where, after the React release, everyone moved to a more component-oriented architecture.

Then think of all the design patterns that have encouraged decoupling objects since the beginning of software history... it's incredible how easily we can find these important software engineering concepts in the real world.

If we want to find a similar approach in an industry closer to software, let's talk about the Fairphone 2, released in 2016.

Fairphone 2

Fairphone 2 is a modular phone that guarantees the longevity of your device by letting you change the different parts on demand instead of replacing the entire phone.
The company made modularisation a feature of the new device.
A totally different approach from the one taken by Apple with the iPhone, for instance, where replacing a part of the phone is (almost) impossible.

Every time you are thinking about a new or existing project, think about how to modularise it, how to break dependencies and improve your code.
In particular, think about how the different modules are going to "stick" together.
Often we don't spend much time on the "contracts" between systems, but it's very important to invest the right amount of time in creating the "perfect glue" for your application.
Think how much we have done in the past moving from a monolithic architecture to microservices, where communication, monitoring and logging are far more important than the code running inside a single microservice, for instance.

Wrapping up

We are often very busy working with new libraries, frameworks and languages; anyway, sometimes we need to stop for a few hours, or even a day, look around us, and realise that the next innovation in the software industry could probably be "borrowed" from tools already available in the real world.

HTTP2: the good, the bad and the ugly

I spent the last few weeks investigating HTTP2, the successor of HTTP1.1, and I'd like to share my findings and thoughts in this post.

Let's start by saying that if the question you have in mind at this point is: "Can I really use it today, not only for experiments but also in production?"
My answer would be: "YES, you can!"

First of all, I'd like to share with you the browser implementation status for this protocol:

screen-shot-2016-09-06-at-23-00-23

As you can see from the screenshot taken from caniuse.com, it's definitely well supported in the latest versions of the major browsers, with some caveats obviously.

If you are not convinced yet, please check this website with one of the browsers that currently supports HTTP2 and look how fast it loads!
I'd suggest installing the HTTP2 indicator Chrome extension to discover how many web apps or online services are already using this protocol:

screen-shot-2016-09-07-at-21-41-09

Not convinced yet?! OK, let's move to a deeper analysis then!

HTTP2 is a binary protocol with request multiplexing, which means all the browser requests will be handled asynchronously.

This massive change will drastically increase the performance of your application.
Considering that at the moment a browser can download a maximum of around 5 resources simultaneously per domain (let's avoid talking about "domain sharding" for now), with HTTP2 we will be able to request all the resources and render them as soon as the browser finishes downloading them. Check this demo made with Go for a proper comparison between the 2 protocols, and check also the Network panel in the Chrome or Firefox dev tools to understand how the 2 protocols differ.

The Good

HTTP2 has really few rules to follow in order to be adopted:

  • it works ONLY over https (therefore you need a valid SSL certificate)
  • it's backward compatible, so if a browser or a device where your application is running doesn't support HTTP2, it will fall back to HTTP1.1
  • it comes with great performance improvements out of the box
  • for a basic implementation it doesn't require you to do anything on the client side, only on the server side
  • a few new interesting features will allow you to speed up the load of your web project in a way that is not even imaginable with an HTTP1.1 implementation

Despite the short list, HTTP2 brings a substantial change to the internet ecosystem.
One of my favourite features is server PUSH, where a server can pass a Link header specifying what the browser should download in advance, before it starts parsing the HTML document.
In this case, we can instruct the browser to download several resources like images, CSS or even Javascript files before the engine encounters them inside the DOM, providing a better user experience in our web apps and/or games.
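As an illustrative sketch (assuming a front-end server or CDN that translates Link preload headers into HTTP2 push, as nginx and some CDNs can do), the header could be set like this in Node:

var http = require('http');

http.createServer(function (req, res) {
    // hint the resources the client will need soon; a push-capable proxy in front
    // of this server can turn these preload hints into HTTP2 PUSH frames
    res.setHeader('Link', [
        '</css/app.css>; rel=preload; as=style',
        '</js/app.js>; rel=preload; as=script'
    ]);
    res.end('<html>...</html>');
}).listen(8080);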

The Bad

There is still plenty of work to do in order for this protocol to reach great penetration: a few specs are still in progress (read the next paragraph: the ugly) and it will probably take quite a few months before we see a lot of services moving to this new protocol.

Apart from this high level overview of the downsides, let's look at what will change on the technical side.

Considering that HTTP2 is not restrictive on the number of requests a browser makes to download resources, a few techniques we use for optimising our websites will need to be reviewed or even removed from our pipelines.
Delivering the whole application inside a single Javascript file won't bring any benefit with HTTP2, so we need to change our logic to download only what we need, when we need it.
Knowing that making many small requests won't be a problem, we could use several small images instead of sprites to handle the icons of our website.
Probably the different tools like Grunt, Gulp or Webpack will need to review their strategies or update their plugins in order to provide real value in this new project pipeline.

The Ugly

Google Chrome's protocol implementation!
Chrome is my favourite browser and I use it extensively, in particular when I need to debug a specific script or gather metrics about a specific behaviour of a web app.
At the moment it is the only browser that requires HTTP2 server negotiation via ALPN (Application-Layer Protocol Negotiation), an extension that allows the application layer to negotiate which protocol will be used within the TLS connection.

Considering that OpenSSL has only integrated ALPN since version 1.0.2, we won't be able to enable HTTP2 support for Chrome (from build 51 and above) if we don't configure our server correctly.
For instance, on Linux, only Ubuntu from version 16.04 ships that OpenSSL version by default; for all the other major Linux distributions you will either have to install the newer version manually or wait for the next major OS release.

I'd suggest reading carefully the article that describes this "issue" on the nginx blog before you start configuring your server for Chrome.

Wrap up

HTTP2 is not perfect and is probably not as well supported as it should be, but it can definitely improve (drastically, in certain cases) your web project's performance.
A lot of "big players" are already using the HTTP2 protocol in production (Instagram, Twitter and Facebook, for instance) and the results are remarkable.

Why not start catching up with the future today?

Benchmarking Falcor.js

In the past few weeks I have been figuring out how to solve a problem of chatty communication between client and server, considering that we are close to the release and I already have a few ideas in mind to improve the product I'm working on right now.

Thinking about possible solutions, I searched online for a few different approaches that could help me solve the problem. At the beginning I was thinking of refactoring the REST APIs to make them closer to what the UI needs (the backend-for-frontend pattern), but then I remembered that I had bookmarked a few projects that could help me implement this pattern without reinventing the wheel.
So on Friday night I started to investigate Falcor.js, a library made by Netflix that tries to solve exactly my problem, and they honestly solve the issue in a really smart way.

Let's imagine you have a client that needs to call several REST endpoints in order to aggregate the data for a specific view. Independently of how large the amount of data to retrieve is, you have to bear in mind a few other drawbacks like:

  • latency
  • no caching of specific data because they are real-time or tied to a specific user
  • the amount of data to display in a view (maybe without a paging API available to split it across several views)
  • pre-flight calls for CORS endpoints
  • internet connection speed (on the move vs home vs office)
  • content negotiation

All of these, and probably many others, can cause a bad user experience, and quite often we postpone addressing these problems until after the release of our online products.
So if we can somehow minimise their impact, we can provide a better user experience; our products will be faster and more engaging, and they will succeed with our users even when they have a poor connection on their favourite device.

Here is where Falcor.js comes to the rescue: with Falcor we can minimise the number of calls to specific endpoints, because the library leverages the idea of a single data model that can be queried by our clients asynchronously via the Falcor APIs, optimising the number of queries to it under the hood.
The query system allows you not only to fetch data from the Falcor model, but also to get only the data you need for a specific view.

services-diagram
From Falcor.js website

Looking at the image above you can immediately spot the problem that Falcor solves brilliantly with a unique model.
In fact the Router aggregates data from different endpoints, and therefore the client can request exactly what it needs in a nice and simple way.

Let’s try to explain how the system works first:

falcor-end-to-end
From Falcor.js website

The client creates a connection to a JSON model using the HttpDataSource provided by the Falcor.js client library, which starts the connection between the client and the backend data model.
On the backend a Falcor Router is created, and inside this instance there is the description of the different queries available and the data to return for each route.
This way each client downloads only the data it effectively needs to render the page and not an element more (sometimes reducing drastically the amount of data to load).
Also, each client won't need to interrogate ad hoc services that retrieve ad hoc data created for it, but will just query the same data model created for the entire application, simply asking for different paths.
As you can see from the diagram above, this part sits between the client and your backend system, creating a middleware that can be used only for specific endpoints or for the entire application.
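To give a flavour of what this looks like in code, here is a minimal sketch, not taken from my project, of a Falcor Router route and the corresponding client query (route names and fields are invented):

// server side (Node): a route describing how to resolve product names by id
var Router = require('falcor-router');

var router = new Router([
    {
        route: 'productsById[{integers:ids}].name',
        get: function (pathSet) {
            // pathSet.ids contains the requested ids, e.g. [0, 1, 2]
            return pathSet.ids.map(function (id) {
                return {
                    path: ['productsById', id, 'name'],
                    value: 'Product ' + id // in a real app this would come from your services
                };
            });
        }
    }
]);

// client side: query only the fields needed by the view
var falcor = require('falcor');
var HttpDataSource = require('falcor-http-datasource');

var model = new falcor.Model({ source: new HttpDataSource('/model.json') }).batch();

model.get('productsById[0..2].name').then(function (response) {
    console.log(response.json.productsById);
});

The batch() call on the model is what enables the request batching described below.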

Another interesting characteristic of Falcor is how it can optimise the queries to the data model: by activating batch mode, Falcor gathers all the queries to a specific route within a tick of your application, performing all of them together and possibly collapsing the requests into a single roundtrip instead of multiple roundtrips!

Last but not least, Falcor allows you to query the APIs with a paging mechanism when you are iterating over the elements of a list. For instance, if you have an array of elements to display in your view but the APIs provided by the backend team don't include any paging parameter, Falcor helps you via the query system, retrieving only a certain number of elements at a time through its paging mechanism.

So after watching the few videos available on Falcor and reading all the documentation on the website, I started to experiment directly on the chatty issue I have in my project.
I can't really share the code I used for my spike, mainly because I'm using the product endpoints, but I can share some benchmarks and thoughts on it for now.

Currently the catalogue I'm working with is composed of 5 calls to 5 different endpoints in order to display the catalogue inside the view.
There is also a timer: every few minutes these 5 calls are performed again in order to retrieve new data that could refresh the products available to the user on the page.
Obviously behind the scenes the application is making several other calls in order to synchronise its status and perform some checks, but for now let's focus only on the catalogue part.
I extracted the 5 calls into a simple HTML page replicating the current situation in the product and gathered some metrics to understand what the starting point was and how much Falcor would be able to improve the situation.
These are the benchmarks I retrieved for 5 XHR calls to different endpoints, when the data is cached and when it isn't:

NO CACHED XHR (after 5 calls):
average time on 10 tests ~678ms
average kb on 10 tests ~111kb

CACHED XHR (after 5 calls):
average time on 10 tests ~126ms
average kb on 10 tests ~3.1 kb

Then I implemented a Node.js gateway where, using the Falcor Router and the Falcor model, I was able to query only the data I needed (and not download the entire JSON with information not needed for that specific view). I optimised the queries to the Falcor model via request batching, and these are the results with Falcor in place in front of the product APIs:

NO CACHED WITH FALCOR (Falcor optimises the 5 calls to only 2 calls):
average time on 10 tests ~32ms
average kb on 10 tests ~5 kb

CACHED WITH FALCOR (Falcor optimises the 5 calls to only 2 calls):
average time on 10 tests ~10ms
average kb on 10 tests ~406 bytes

This is a really good performance boost, considering the amount of data we downloaded by default that is now optimised via Falcor queries; several roundtrips are saved thanks to the request batching optimisation natively available in the Falcor APIs and the nice asynchronous way of fetching data from the single model.

I have to admit that I was really surprised, and I believe sooner or later I'll introduce Falcor.js in production, because it can really simplify the pipeline of work and provide great benefits to your applications, in particular when you are targeting different low-end devices like in my project.
Another thing that would be good to invest time in (maybe another weekend :P) is GraphQL, the main "competitor"; maybe I'll be able to do a 1-to-1 comparison based on the same problem and see which is the best library for a specific case.

Meanwhile, if you want to start playing with it I recommend the Falcor website, and I encourage you to keep an eye on this blog because I'd like to share in another post more technical information about Falcor that will allow you to understand how easy it is to work with.

 

Cycle.js a reactive framework

I had the chance to spend quite a few evenings with Cycle.js preparing my presentation for JSDay 2016, and I have to admit that I had quite a lot of fun, so I decided to share my experience in this post.

I'd suggest first taking a look at my previous introduction to reactive programming, just to cover the basics if you are not familiar with this paradigm.

Cycle.js is an incredibly light framework that allows you to work in a reactive fashion. As mentioned in all my talks on this topic, the initial entry level is a little bit higher than for other frameworks if you are not familiar with reactive concepts like hot and cold observables, for instance, and functional concepts like pure functions.
Cycle puts functions as first class citizens for the entire framework; having previous knowledge of the reactive and functional paradigms definitely helps to speed up working with it.
Let's start by seeing how it works from a bird's eye perspective:

Screen Shot 2016-05-22 at 11.37.17

The image above represents the main concept behind Cycle, where we have a loop between a main function, which receives read effects as input (sources), and the "external world", which receives some write effects as output (sinks).
The side effects generated by the main function are handled inside drivers that communicate with the external world.
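In code, that loop boils down to something along these lines; a minimal sketch using the Rx-based @cycle/core and companion drivers of that period, not the exact Main.js from my repository:

import Cycle from '@cycle/core';
import {makeDOMDriver} from '@cycle/dom';
import {makeHTTPDriver} from '@cycle/http';
import app from './App';

// "app" is the main function: it receives the read effects (sources)
// and returns the write effects (sinks) handled by the drivers below
Cycle.run(app, {
    DOM: makeDOMDriver('#app'),
    HTTP: makeHTTPDriver()
});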

Cycle in practice

 

Cycle.js allows you to work with an interesting architecture, created ad hoc for reactive programming, called MVI (Model View Intent). I wrote an extensive article on this topic and I suggest reading it, because the architecture is really interesting, leveraging new concepts like communication between objects via observables.

mvi

There is an interesting read on the official Cycle website that explains how MVI works.

For JSDay I prepared a simple example in MVI with Cycle, just to show how simple it is to work with this architecture and with the Cycle framework.
The example is a real-time monitor for London Underground trains, showing the position of each train and the expected arrival time at its destination.

Screen Shot 2016-05-11 at 11.52.07

You can find the project on my github account

The application is composed of 4 main areas:

  • Main.js is the entry point of our application, where I instantiate the Cycle run loop and add the hot reloading module for Cycle.js
  • App.js contains the business logic of the application written in MVI (mvi-refactor branch)
  • Networking.js is where I handle the HTTP request and response made via the HTTPDriver
  • Template.js represents the renderer object made with hyperscript (the default virtual-dom library present in Cycle)

App.js is where the business logic of the application sits, with the division of responsibilities between Model, View and Intent.
I warmly suggest reading my article on MVI, as mentioned before, in order to understand how MVI works.

export default function app(_drivers){
   const response$ = networking.processResponse(_drivers.HTTP);
   const actions = intent(_drivers.DOM);
   const state$ = model(response$, actions);
   const vtree$ = view(state$);
   const trainsRequest$ = networking.getRequestURL(actions.line$, DEFAULT_LINE);

   return {
      DOM: vtree$,
      HTTP: trainsRequest$
   };
}

The interesting part here is how I've divided the interaction between the different actors present in the app inside the main method.
All the constants that end with the $ symbol are observables (this is a simple convention to immediately recognise which variable is an observable and which is not).
The app method has a unique argument (_drivers) and it returns an object containing 2 observables that will be handled by the outside world via drivers (check Main.js to see where I instantiate the DOM and HTTP drivers).

Then I created the networking object that handles the interaction between an endpoint and the application.
response$ is the observable where I store the service response, and trainsRequest$ is where I define the request using the tube line chosen by the user via the query string (check Networking.js for that).

Last but not least, I created the MVI structure where the only point of communication between these pure functions is an observable or a collection of observables.
This helps me test the application easily, encapsulate the behaviour of each part properly, and keep a unidirectional flow for the information moving through the application or component.

I'd like to bring your attention now to how the view is handled in Cycle.js: the view generates a virtual tree of elements that will be rendered by a third party library like hyperscript or React.
It's no accident that Cycle uses a render library that implements the virtual-dom concept: as we described at the beginning, Cycle works with loops, so having a diff algorithm that updates the DOM only when needed is a great performance boost for the entire application, and it removes some complexity from the developer's side.

export default function getBody(results){

   let selectedLine = results[0].length > 0 ? 'Selected line: ' + results[0] : "";
 
   return div(".container", [
      h1("#title", ["Reactive Live London Tube trains status"]),
      select("#lines", [
         option({ value: 'piccadilly' }, ["Piccadilly line"]),
         option({ value: 'northern' }, ["Northern line"]),
         option({ value: 'bakerloo' }, ["Bakerloo line"]),
         option({ value: 'central' }, ["Central line"]),
         option({ value: 'district' }, ["District line"]),
         option({ value: 'circle' }, ["Circle line"]),
      option({ value: 'victoria' }, ["Victoria line"]),
     ]),
     h3("#selectedLine", [selectedLine]), 
     renderTrainsData(results[1])
   ]);
}

Just to give you an idea, I included the entry function that generates the application view: after defining a few static elements like the dropdown and a few title elements, I call another function that returns the results to display in the main view.
In the renderTrainsData function I return the elements, splitting the different domain objects across 3 other functions:

function getTrainData(data){
   return li(".train", [
      div(".train-data", [
        p(".stationName .col", [span("station: "), data.stationName]),
        p(".platform", [span("platform: "), data.platformName]),
        p(".current-location", [span("current location: "), data.currentLocation]),
        p(".arrival-time: ", [span("expected arrival time: "), moment(new Date(data.expectedArrival)).format("HH:MM - Do MMM YYYY")])
      ]
   )])
}

function getDestinationStation(data){
   return div(`.destination-station .${data.lineId}`, [
       h2(data.destination),
       ul(".destination-trains-available", data.trains.map(item => getTrainData(item)))
   ]);
}

function renderTrainsData(data){
   return div(".all-stations", data.map(item => getDestinationStation(item)));
}

As you can see, it is very easy to understand and the code becomes very readable for anyone, in particular when we write meaningful function names.
I also created some simple CSS IDs and classes, defined as the first parameter of each HTML element, just out of curiosity to see a basic integration between hyperscript and CSS.

Wrap Up

Cycle.js is showing a new approach to the Javascript community, with a lot of good ideas and a growing community that is following and improving the framework.
I can't say it will be the most used framework next year, considering how much React+Redux and Angular 2 are succeeding on the front end scene, but I believe that with Cycle you can still create great applications, often with less and better structured code compared to other frameworks.
I'll definitely continue to go ahead with it and possibly contribute as much as I can to this project, and to Rx.js as well, because I strongly believe in the future of the reactive programming paradigm and the people behind it.

 

How to Dockerize your Node application

Today I'd love to share how to wrap a Node microservice or monolithic application inside a Docker container.
I assume that you've already installed Docker, boot2docker and Node on your machine.
If you haven't, please check the official Docker and Node pages and, after picking your operating system, follow the instructions.

I've created a simple Node application with Hapi.js that, once called, returns the classic "hello world" message as a response. Obviously you could have a much more complex application as well; my main purpose here is to talk about how to set up Docker with your Node.js application.
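For reference, the application is essentially the Hapi hello world; a minimal version (written against the Hapi 13-era API used at the time) looks roughly like this:

const Hapi = require('hapi');

const server = new Hapi.Server();
server.connection({ port: 8080 });

// single route returning the classic message
server.route({
    method: 'GET',
    path: '/',
    handler: function (request, reply) {
        reply('hello world');
    }
});

server.start((err) => {
    if (err) throw err;
    console.log('Server running at:', server.info.uri);
});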

In order to wrap your application inside a Docker container you need to create a Dockerfile; this file basically defines the setup of the environment where your application will run.
Checking the containers currently available on Docker Hub, you can find the official Node image page, which will allow you to pick the right Node version for your application.

Screen Shot 2016-04-03 at 17.01.57

As you can see there are many different Node images available. You could also use other solutions like ubuntu, fedora or centOS as the base OS where your Node application is going to run, but I preferred to use the official one because my server side application doesn't require any particular configuration.
For this example we are going to use 5-onbuild, which basically does the "dirty job" for us; obviously this is a sample application, but you can customise your container as you prefer.
What the onbuild version does is basically:

  • create a directory for the application inside the container
  • copy package.json into that folder
  • run the "npm install" command
  • run the "npm start" command

It's fundamental to define inside the package.json all the dependencies and the start script, otherwise the application is not going to work inside the container.
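For example, a hypothetical package.json for this demo could look like this (name and versions are illustrative):

{
    "name": "docker-hapi",
    "version": "1.0.0",
    "dependencies": {
        "hapi": "^13.0.0"
    },
    "scripts": {
        "start": "node index.js"
    }
}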

Inside our Dockerfile we are going to write:

FROM node:5-onbuild
EXPOSE 8080

So basically we are inheriting all the steps described above from the onbuild Dockerfile, plus we are exposing port 8080, which is the one used by our Node application:

const server = new Hapi.Server();
server.connection({ port: 8080 });

It's a best practice for any Dockerfile to always start with a FROM command and reuse the images already created by the community on Docker Hub, so when you want to try something different remember to always check what's available on Docker Hub.

Good, so now let's package our container and check if it works correctly.
In order to build the container we'll need to run the following command:

docker build -t <username>/<applicationName> .

Basically here we are telling Docker to dockerize the entire application folder (the dot at the end of the command), and the image will be called <username>/<applicationName>.
This is another Docker best practice: you could potentially call your image whatever you prefer, but a common pattern is your Docker username, then a slash ("/"), then your application name:

docker build -t lucamezzalira/docker-hapi .

Now let's check if it works:

docker run -p 49160:8080 -d lucamezzalira/docker-hapi

With this command we are running our container, and therefore our Node application, mapping port 8080 of the container to port 49160 of the host.
You can easily check if the application is working correctly by typing docker ps in your console; you should see something like this:

Screen Shot 2016-04-03 at 17.29.48

So now, because I'm working on a Mac, I need to retrieve the boot2docker IP and check if on port 49160 I'm able to see my hello world application up and running.
In order to do that I'll run the command:

boot2docker ip

That should return the external IP where my application is running. You can also see the container's IP using the command:
docker inspect CONTAINER ID (for example: docker inspect afb5810152f6)

The container ID is easily retrievable via docker ps (first column in the picture above).
This command returns a JSON document with a lot of information related to the container.

So now that I have the IP where my application is running and the related port, I can type the address into my browser and see the result!

Screen Shot 2016-04-03 at 17.36.00

Obviously you can also map that IP in the /etc/hosts file, adding something more meaningful like boot2docker or whatever name you think is more appropriate!

If you want to download from Docker Hub the container I've prepared for this post, just type:

docker pull lucamezzalira/docker-hapi

This is a very basic introduction to the Docker world and Node. I'm working on a sample microservices application that will involve a few interesting concepts and patterns, so keep an eye on this blog 😉

Hot and Cold observables

One important thing to understand when you are working with Reactive Programming is the difference between hot and cold observables.
Let’s start with the definition of both types:

Cold Observables

Cold observables start running upon subscription: the observable sequence only starts pushing values to the observers when subscribe is called.

Values are also not shared among subscribers.

Hot Observables

When an observer subscribes to a hot observable sequence, it will get all values in the stream that are emitted after it subscribes.

The hot observable sequence is shared among all subscribers, and each subscriber is pushed the next value in the sequence.

Let’s see now how these 2 different observables are used across a sample application.

Using the same application I started to describe in my previous blog post, we can spot in the main class some code that is commented out.
The first part represents how a cold observable works, so if you put the following code inside App.js you can immediately understand what happens when an observer subscribes to a cold observable:

let engine = new BallsCallSystem(); 
let t1 = new Ticket("t1", engine.ballStream); 

setTimeout(()=>{
    let t2 = new Ticket("t2", engine.ballStream);
}, 5000);

engine.startGame();

Our game engine instead has the following code in order to share the balls called across the tickets:

let stream = Rx.Observable
    .interval(INTERVAL)
    .map((value) => {return numbersToCall[value]})
    .take(TOTAL_CALLS);
 
this.published = stream.publish();
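For context, publish() returns a connectable observable that only starts emitting when connect() is called; presumably that is what startGame() does, along these lines (a sketch, not the exact code from the repo):

// inside BallsCallSystem (hypothetical sketch)
startGame() {
    // start pushing values to whoever has subscribed so far
    this.subscription = this.published.connect();
}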

As you can see, when we start the game there is only 1 ticket bought by the user; then after 5 seconds the user buys a new ticket, and in theory we should check whether all the numbers previously called are present in that ticket or not.

coldObservables
But with cold observables, when an observer subscribes it won't receive the previous data, just the data emitted since it subscribed, which completely breaks the main purpose of our bingo game... we definitely need something different in order to make it work, ideally something that won't completely change our game logic but will fix the issue...

That's where hot observables come in!
The cool part of hot observables is that you can define how much data you want to buffer in memory: sometimes you need to store all of it (like in the bingo game), sometimes only the last 4-5 occurrences; but as you will see, changing from a cold to a hot observable is a matter of 1 line of code!

Inside the same git repo you can find a class called BallsCallSystemReplay; this class implements a hot observable. Let's see how much the code changed inside our game engine:

this.stream = Rx.Observable
   .interval(INTERVAL)
   .map((value) => {return numbersToCall[value]})
   .take(TOTAL_CALLS)
   .shareReplay();

Basically we removed the publish method after creating the stream (which was useful when we wanted to control when the observable starts to stream data) and we added shareReplay, a method that transforms a cold observable into a hot one, replaying all the data passed through the stream every time a new observer subscribes to it.

So now if we change the code in App.js in this way:

let engine = new BallsCallSystemReplay();
let t1 = new Ticket("t1", engine.ballStream);
 
setTimeout(function() {
    let t2 = new Ticket("t2", engine.ballStream);
}, 5000);

Now you can see that when the second ticket is bought after the game has started, we still check all the values called by the engine, without doing a massive refactor of our game logic! Below you can see that when the second ticket subscribes to the hot observable, the first thing it receives is all the previous values called by the engine, and then it receives new data at the usual interval, following the logic of the game.

hotObservables

Reactive Programming with RxJS

In the past 6 months I spent quite a bit of time trying to understand Reactive Programming and how it could help me in my daily job.
So I'd like to share in this post a quick example made with RxJS, just to show you how Reactive Programming can help when you are handling asynchronous data streams.

If you are not familiar at all with these concepts, I'd suggest first watching my presentation on Communicating Sequential Processes and Reactive Programming with RxJS (free registration) or checking the slides below.

For this example I thought I'd create a basic bingo system, which I think is a good example of an asynchronous application that fits perfectly with the Reactive Programming culture.
I won't introduce in this blog post concepts like hot and cold observables, the iterator pattern or the observer pattern, mainly because all this theoretical information is present in the webinar and the slides previously mentioned.

You can clone the project repository directly from my git account.

Let's start by talking a little bit about the engine. Basically a bingo system is composed of an engine where the numbers are called every few seconds and shared with the users, in order to validate whether the number called is present in any of the tickets bought by the user.
For this purpose, working with observables facilitates the communication and the information flow between the engine and the ticket objects.
In the BallsCallSystem class, after setting up the object and creating a few constants that we're going to use inside the core engine, we implement the core functionality of the engine:

let stream = Rx.Observable
              .interval(INTERVAL)
              .map((value) => {return numbersToCall[value]})
              .take(TOTAL_CALLS);

These few lines of code express the following intents:

  1. we create an observable (Rx.Observable)
  2. that emits every few milliseconds (interval method)
  3. iterates through the interval values (an incremental value from 0 to N) and returns a value retrieved from the numbersToCall array (the function inside the map method)
  4. and after a certain number of iterations closes the observable, because the game has ended (take method), so all the observers will stop executing their code

If we compare this with an imperative implementation made with CSP (communicating sequential processes), I end up with something similar to this:

this.int = setInterval(this.sendData.bind(this), 3000);
[....]
sendData(){
   // pick the next number and push it on the CSP channel
   var val = this.numbersToCall[this.counter];
   console.log("ball called", val);

   csp.putAsync(this.channel, val);
   this.counter++;

   // once every number has been called, stop the interval and close the channel
   if(this.counter >= this.numbersToCall.length){
      clearInterval(this.int);
      this.channel.close();
      console.log("GAME OVER");
   }
}

As you can see, I needed to express every single action I wanted to perform in order to obtain the core functionality of my bingo system.
These two implementations solve exactly the same problem, but the reactive one is far less verbose and easier to read than the imperative one, where I have control over everything happening inside the algorithm without really having a specific reason for it.

Moving ahead with the reactive example: when we create an observable that streams data, we always need an observer to retrieve that data.
So now let's jump to the Ticket class and see how we can validate the numbers called by the engine against a ticket.

First of all we pass the observable via injection to a Ticket object:

let t2 = new Ticket("t2", engine.ballStream);

Then, inside the Ticket class, we subscribe to the observable and we handle the different cases of the stream (when we receive data, when an error occurs and when the stream terminates):

obs.subscribe(this.onData.bind(this), this.onError.bind(this), this.onComplete.bind(this));

onData(value){
   console.log("number called", value, this.tid);
   if(this.nums.indexOf(value) >= 0){
      this.totalNumsCalled.push(value);
      console.log(value + " is present in ticket " + this.tid);
   }
}

onError(err){
   console.log("stream error:", err);
}

onComplete(){
   console.log("total numbers called in " + this.tid + ": " + this.totalNumsCalled.length);
   console.log(this.totalNumsCalled);
}

Here too you can notice the simplicity of the implementation: for instance, if we were working with React it would be very easy to handle the state of a hypothetical Ticket component and create a resilient and well-structured view where each stream state is handled correctly.
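
To give an idea, here is a minimal, hypothetical sketch (not part of the sample project) of how such a React Ticket component could mirror the stream into its own state; the ballStream prop is assumed to be the same observable exposed by the engine:

class TicketView extends React.Component {
   constructor(props) {
      super(props);
      this.state = { lastNumber: null, finished: false };
   }

   componentDidMount() {
      // mirror every stream event into the component state
      this.subscription = this.props.ballStream.subscribe(
         (value) => this.setState({ lastNumber: value }),
         (err) => console.log("stream error:", err),
         () => this.setState({ finished: true })
      );
   }

   componentWillUnmount() {
      // dispose() in RxJS 4; with RxJS 5+ this would be unsubscribe()
      this.subscription.dispose();
   }

   render() {
      return <div>{this.state.finished ? "Game over" : "Last number called: " + this.state.lastNumber}</div>;
   }
}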

An interesting benefit provided by reactive programming is how simple and, at the same time, modular our implementations become.
I would really recommend spending some time watching the webinar to get a first approach to Reactive Programming and to better understand the purpose of the example briefly described above.

Hapi.js and MongoDB

During the Fullstack conference I saw a small project made with Hapi.js during a talk, so I decided to invest some time working with Hapi.js in order to investigate how easy it is to create a Node.js application with this framework.

I have to admit, this is a really well-made framework, with a plugin system that gives you a lot of flexibility when you are creating your server-side applications and with a decent community that provides a lot of useful information and plugins to speed up project development.

When I started to read the only book available on this framework, I was impressed by its simplicity and by the thinking behind it, but more importantly I was impressed by where Hapi.js was used for the first time.
The first enterprise app made with this framework was released during Black Friday on the Walmart e-commerce site, and the results were amazing!
In fact, one of the main contributors to this open source framework is Walmart Labs, which means a big organisation with real problems to solve; definitely a good starting point!

Express vs Hapi.js

If you are asking why not Express, I can reply with a few arguments:

  • Express is a super light and general-purpose framework that works perfectly for small to medium-sized applications.
  • Hapi.js was built on top of Express at the beginning, but then the team moved away from it in order to create something more solid and with more built-in functionality; a framework should speed up your productivity, not force a structure on you.
  • Express is code-based whereas Hapi.js is configuration-based (with a lot of flexibility, of course).
  • Express uses middleware, Hapi.js uses plugins.
  • Hapi.js is built with testing and security in mind!

Hapi.js

Let's start by saying that working with this framework is incredibly easy once you understand the few concepts you need to know in order to create a Node project.

I created a sample project where I've integrated a Mongo database, exposing a few endpoints to add a new document to a Mongo collection, update a specific document, retrieve all documents available in the database and retrieve the details of a selected document.

Inside the git repo you can also find the front-end code (books.html in the project root) in vanilla Javascript, mainly so that, whether you are passionate about React, Angular or any other front-end library, you'll be able to understand the integration without any particular framework knowledge.

What I’m going to describe now will be how I’ve structured the server side code with Hapi.js.

In order to create a server in Hapi.js you just need few lines of code:

const Hapi = require('hapi');

// create the server, add a connection and start listening
let server = new Hapi.Server();
server.connection();
server.start((err) => console.log('Server started at:', server.info.uri));

As you can see in the example (src/index.js), I've created the server in the first few lines after the require statements and I start the server (server.start) after the registration of the MongoDB plugin, but one step at a time.

After creating the server object, I've defined my routes with the server.route method.
The route method allows you to set a single route passing an object, or several routes passing an array of objects.
Each route should contain the method parameter, where you define the HTTP method used to reach the path; you can also set a wildcard (*) so that any method will be accepted for that path.
Obviously you then have to set the route path; bear in mind it always has to start with a slash (/) in order to define the path correctly.
The path also accepts variables inside curly brackets, as you can see in the last route of my example: path: '/bookdetails/{id}' (we'll see how to read that variable right after the basic route example below).

Last but not least, you need to define what happens when a client requests that particular path, by specifying the handler property.
The handler expects a function with 2 parameters: request and reply.

This is a basic route implementation:

{
   method: 'GET',
   path: '/allbooks',
   handler: (request, reply) => { ... }
}
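
For the parametrised path mentioned above, the handler can read the value via request.params. A minimal sketch, assuming the /bookdetails/{id} route from my example (the reply payload here is just illustrative):

{
   method: 'GET',
   path: '/bookdetails/{id}',
   handler: (request, reply) => {
      // the {id} segment of the URL is exposed on request.params
      let bookId = request.params.id;
      return reply({ requestedId: bookId });
   }
}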

When you structure a real application, and not an example like this one, you can wrap the handler property inside the config property.
Config accepts an object that will become your controller for that route.
So, as you can see, it's really up to you to pick the right design solution for your project: it could be inline because it's a small project or a PoC, or an external module because you have a large project where you want to properly structure your code in an MVC fashion (we'll see that in the next blog post ;-)).
In my example I created the config property also because you can then use an awesome library called JOI to validate the data received from the client application.
Validating data with JOI is really simple:

validate: {
   payload: {
      title: Joi.string().required(),
      author: Joi.string().required(),
      pages: Joi.number().required(),
      category: Joi.string().required()
   }
}

In my example, for instance, I check that I receive all the required fields (required()) and in the right format (string() or number()).
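
Putting the pieces together, this is roughly how such a route could look with both the handler and the validation wrapped inside config; a sketch only, assuming a POST /addbook route like the one mentioned later and a Joi module required at the top of the file (the handler body is a placeholder):

// assumes: const Joi = require('joi');
{
   method: 'POST',
   path: '/addbook',
   config: {
      handler: (request, reply) => {
         // request.payload has already been validated at this point
         return reply({ added: request.payload.title });
      },
      validate: {
         payload: {
            title: Joi.string().required(),
            author: Joi.string().required(),
            pages: Joi.number().required(),
            category: Joi.string().required()
         }
      }
   }
}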

MongoDB plugin

Now that we have understood how to create a simple server with Hapi.js, let's dig deeper into the Hapi.js plugin system, the most important part of this framework.
You can find several plugins created by the community, and on the official website there is also a tutorial that explains step by step how to create a custom plugin for Hapi.js.

In my example I used the hapi-mongodb plugin that allows me to connect a mongo database with my node.js application.
If you are more familiar with mongoose you can always use the mongoose plugin for Hapi.js.
One important thing to bear in mind about a Hapi.js plugin is that, once it's registered, it will be accessible from any handler method via request.server.plugins; it's injected automatically by the framework in order to facilitate the development flow.
So the first thing to do in order to use the mongodb plugin in our application is to register it:

server.register({
   register: MongoDB,      // the required hapi-mongodb plugin
   options: DBConfig.opts  // connection url and driver settings
}, (err) => {
   if (err) {
      console.error(err);
      throw err;
   }

   // start the server only once the plugin has been registered successfully
   server.start((err) => console.log('Server started at:', server.info.uri));
});

As you can see, I just need to specify which plugin I want to use in the register method, along with its configuration.
This is an example of the configuration you need in order to connect your MongoDB instance to the application:

module.exports = {
   opts: {
      "url": "mongodb://username:password@id.mongolab.com:port/collection-name",
      "settings": {
         "db": {
            "native_parser": false
         }
      }
   }
};

In my case the configuration is an external object where I specify the Mongo database URL and the settings.
If you want a quick and free solution to use MongoDB in the cloud, I can suggest mongolab: when you register you get 500MB of data for free per account, so for testing purposes it's really the perfect cloud service!
Last but not least, once the plugin registration has happened I can start my server.

When I need to use the plugin inside any handler function, I can retrieve it in this way:

var db = request.server.plugins['hapi-mongodb'].db;

In my sample application I created a few cases: adding a new document (addbook route), retrieving all the books (allbooks route) and retrieving the details of a specific book (bookdetails route).
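
As a minimal sketch of how this looks in practice, the handler of the allbooks route could simply query the collection through the plugin's db object; this assumes the books collection used later in the upsert example and Boom required as in that snippet:

handler: (request, reply) => {
   // retrieve the native MongoDB driver instance injected by hapi-mongodb
   var db = request.server.plugins['hapi-mongodb'].db;

   // fetch every document in the "books" collection and send it back to the client
   db.collection('books').find({}).toArray((err, docs) => {
      if (err) return reply(Boom.internal('Internal MongoDB error', err));
      return reply(docs);
   });
}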


If you want to update a record in Mongo, remember to use the update method rather than the insert method because, if handled correctly, the update method will check inside your database whether there are any other occurrences: if there is one it will update that occurrence, otherwise it will create a new document inside the Mongo collection.
Below is an extract of this technique, where you specify as the first object the key used for searching the item, then the object to replace it with, and as the last object an options object with upsert set to true (by default it's false), which allows you to create a new document if it doesn't exist in your collection:

// dbDoc is the replacement document described above
db.collection('books').updateOne({"title": request.payload.title}, dbDoc, {upsert: true}, (err, result) => {
    if(err) return reply(Boom.internal('Internal MongoDB error', err));
    return reply(result);
});

SAMPLE PROJECT GITHUB REPOSITORY

Resources

If you are interested in going deeper into Hapi.js, I'd suggest taking a look at the official website or at the book currently available.
An interesting piece of news is that a few more books about Hapi.js will be published soon, which usually means Hapi.js is getting adopted by several companies and developers, and that's definitely a good sign for the health of the framework.

Wrap up

In this post I shared with you a quick introduction to the Hapi.js framework and its peculiarities.
If you've enjoyed it, please let me know what you would be interested in, so I'll be able to prepare other posts on the topics you prefer.
Probably the next one will be on the different template systems (Handlebars, React…), or about universal applications (or isomorphic applications, as you may prefer to call them), or a test drive of a few plugins to use in Hapi.js web applications.

Anyway I’ll wait for your input as well 😀