HTTP2: the good, the bad and the ugly

I spent the last few weeks investigating HTTP2, the successor of HTTP1.1, and I’d like to share my findings and thoughts in this post.

Let’s start by saying that if the question you have in mind at this point is: “Can I really use it today, not only for experiments but also in production?”, my answer would be: “YES, you can!”

First of all, I’d like to share with you the browser implementation status for this protocol:

[Screenshot: HTTP2 browser support table from caniuse.com]

As you can see from the screenshot taken from caniuse.com, it’s definitely well supported in the latest versions of the major browsers, with some caveats obviously.

If you are not convinced yet, please check this website with one of the browsers that currently support HTTP2 and look how fast it loads!
I’d suggest installing the HTTP2 indicator Chrome extension to discover how many web apps and online services are already using this protocol:

[Screenshot: HTTP2 indicator extension showing services served over HTTP2]

Not convinced yet?! OK, let’s move to a deeper analysis then!

HTTP2 is a binary protocol with request multiplexing built in, which means all the browser’s requests are handled asynchronously over a single connection.

This massive change will drastically increase the performance of your application.
Considering that at the moment a browser can download only a handful of resources simultaneously per domain (let’s avoid talking about “domain sharding” for now), with HTTP2 we will be able to request all the resources at once and render them as soon as the browser finishes downloading them. Check this demo made with Go for a proper comparison between the 2 protocols, and check also the Network panel in the Chrome or Firefox dev tools in order to understand how the 2 protocols differ.

The Good

HTTP2 has really few rules to follow in order to be implemented:

  • in browsers it works ONLY over the https protocol (therefore you need a valid SSL certificate)
  • it’s backward compatible: if a browser or device running your application doesn’t support HTTP2, the connection falls back to HTTP1.1
  • it comes with great performance improvements out-of-the-box
  • a basic implementation doesn’t require any change on the client side, only on the server side
  • a few interesting new features will let you speed up the load of your web project in ways that aren’t even imaginable with an HTTP1.1 implementation

Despite the short list, HTTP2 brings a substantial change to the internet ecosystem.
One of my favourite features is server PUSH, where a server can pass a link header specifying what the browser should download in advance, before it even starts parsing the HTML document.
In this case, we can educate the browser to download several resources like images, css or even javascript files before the engine recognises them inside the DOM, providing a better user experience in our web apps and/or games.
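As an illustration, this is roughly what such a hint looks like as a response header (the paths are placeholders); an HTTP2 server that supports push can use it to start sending those files before the browser even asks for them:

Link: </css/styles.css>; rel=preload; as=style, </js/app.js>; rel=preload; as=script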

The Bad

There is still plenty of work to do in order to achieve a great penetration of this protocol: a few specs are still in progress (read the next paragraph: the ugly) and it will probably take quite a few months before we see a lot of services moving to this new protocol.

Apart from this high-level overview of the downsides, let’s look at what will change on the technical side.

Considering that HTTP2 doesn’t restrict the number of requests a browser makes to download resources, a few techniques for optimising our websites will need to be reviewed or even removed from our pipeline.
Delivering the whole application inside a single javascript file won’t bring any benefit with HTTP2, so we can change our logic to download only what we need, when we need it.
Knowing that many small requests are no longer a problem, we can serve several small images instead of a sprite to handle the icons of our website.
Tools like Grunt, Gulp or Webpack will probably need to review their strategies, or update their plugins, in order to provide real value in this new project pipeline.

The Ugly

Google Chrome’s protocol implementation!
Chrome is my favorite browser and I use it extensively, in particular when I need to debug a specific script or gather metrics on a specific behavior of a web app.
At the moment it’s the only browser that requires HTTP2 server negotiation via ALPN (Application-Layer Protocol Negotiation), an extension that allows the application layer to negotiate which protocol will be used within the TLS connection.

Considering that OpenSSL integrates ALPN only from version 1.0.2, we won’t be able to enable HTTP2 support for Chrome (from build 51 and above) unless we configure our server correctly.
For instance, on Linux, only Ubuntu from version 16.04 ships that OpenSSL version by default; on all the other major distributions you’ll either have to install the newer version manually or wait for the next major OS release.

I’d suggest reading carefully the article that describes this “issue” on the nginx blog before you start to configure your server for Chrome.
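As a quick sanity check, a minimal nginx server block with HTTP2 enabled looks something like this (a sketch assuming nginx 1.9.5+ built against OpenSSL 1.0.2+; the certificate paths are placeholders):

server {
    listen 443 ssl http2;
    server_name example.com;

    # a valid certificate is mandatory, since browsers support HTTP2 only over TLS
    ssl_certificate /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;
}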

Wrap up

HTTP2 is not perfect and probably not yet as well supported as it should be but it can definitely improve (drastically, in certain cases) your web project’s performance.
A lot of “big players” are already using the HTTP2 protocol in production (Instagram, Twitter and Facebook, for instance) and the results are remarkable.

Why not start catching up with the future today?

Hot and Cold observables

One important thing to understand when you are working with Reactive Programming is the difference between hot and cold observables.
Let’s start with the definition of both types:

Cold Observables

Cold observables start running upon subscription: the observable sequence only starts pushing values to the observers when subscribe is called.

Values are also not shared among subscribers.

Hot Observables

When an observer subscribes to a hot observable sequence, it will get all values in the stream that are emitted after it subscribes.

The hot observable sequence is shared among all subscribers, and each subscriber is pushed the next value in the sequence.

Let’s see now how these 2 different observables are used across a sample application.

Using the same application I started to describe in my previous blog post, we can spot some commented-out code in the main class.
The first part represents how a cold observable works, so if you put the following code inside App.js you can immediately understand what’s going on when an observer subscribes to a cold observable:

let engine = new BallsCallSystem(); 
let t1 = new Ticket("t1", engine.ballStream); 

setTimeout(()=>{
    let t2 = new Ticket("t2", engine.ballStream);
}, 5000);

engine.startGame();

Our game engine, instead, has the following code in order to share the balls called across the tickets:

let stream = Rx.Observable
    .interval(INTERVAL)
    .map((value) => {return numbersToCall[value]})
    .take(TOTAL_CALLS);

// publish() returns a "connectable" observable: the stream starts emitting
// only when connect() is called on it (in this sample, when the game starts)
this.published = stream.publish();

As you can see, when we start the game there is only 1 ticket bought by the user; then after 5 seconds the user buys a new ticket, and in theory we should check whether all the numbers previously called are present in that ticket too.

[Screenshot: console output of the cold observable example]
But with cold observables, when an observer subscribes it won’t receive the data emitted before the subscription, just the data emitted from that moment on, which completely breaks the main purpose of our bingo game… we definitely need something different, ideally something that won’t completely change our game logic but that could fix the issue…

That’s where hot observables come in!
The cool part of hot observables is that you can define how much data you want to buffer in memory; sometimes you need to store all of it (like in the bingo game), sometimes only the last 4-5 values. But, as you will see, changing from a cold to a hot observable is a matter of 1 line of code!

Inside the same git repo, you can find a class called BallsCallSystemReplay; this class implements a hot observable. Let’s see how much the code changed inside our game engine:

this.stream = Rx.Observable
   .interval(INTERVAL)
   .map((value) => {return numbersToCall[value]})
   .take(TOTAL_CALLS)
   .shareReplay();

Basically, we removed the publish method after creating the stream (which was useful if you want to control when the observable starts streaming data) and we added shareReplay, a method that turns a cold observable into a hot one, replaying all the data passed through the stream every time a new observer subscribes to it.
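To isolate the behaviour, here is a minimal standalone sketch (using the same RxJS 4 API as the rest of the post); the late subscriber immediately receives the buffered values before the live ones:

let source = Rx.Observable
    .interval(1000)
    .take(5)
    .shareReplay();

source.subscribe((v) => console.log("first subscriber:", v));

setTimeout(() => {
    // receives 0, 1, 2 immediately, then the remaining live values
    source.subscribe((v) => console.log("late subscriber:", v));
}, 3500);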

So now if we change the code in App.js in this way:

let engine = new BallsCallSystemReplay();
let t1 = new Ticket("t1", engine.ballStream);
 
setTimeout(function() {
    let t2 = new Ticket("t2", engine.ballStream);
}, 5000);

Now you can see that when the second ticket is bought after the game has started, we are still able to check all the values already called by the engine, without doing a massive refactor of our game logic! Below you can see that when the second ticket subscribes to the hot observable, the first thing it receives are all the previous values called by the engine; after that, it receives the data at the usual interval, following the logic of the game.

[Screenshot: console output of the hot observable example]

Reactive Programming with RxJS

In the past 6 months I’ve spent quite a bit of time trying to understand Reactive Programming and how it could help me in my daily job.
So in this post I’d like to share a quick example made with RxJS, just to show you how Reactive Programming can help when you are handling asynchronous data streams.

If you are not familiar with these concepts at all, I’d suggest first watching my presentation on Communicating Sequential Processes and Reactive Programming with RxJS (free registration) or checking the slides below.

For this example I decided to create a basic bingo system, which I think is a good example of an asynchronous application and fits perfectly with the Reactive Programming culture.
I won’t introduce concepts like hot and cold observables, the iterator pattern or the observer pattern in this blog post, mainly because all this theoretical information is covered in the webinar and the slides mentioned above.

You can clone the project repository directly from my git account.

Let’s start by talking a little bit about the engine. Basically, a bingo system is composed of an engine where numbers are called every few seconds and shared with the users, in order to validate whether the called number is present in any of the tickets bought by a user.
For this purpose, working with observables will facilitate the communication and information flow between the engine and the ticket objects.
In the BallsCallSystem class, after setting up the object and creating a few constants that we’re going to use inside the core engine, we implement the core functionality:

let stream = Rx.Observable
              .interval(INTERVAL)
              .map((value) => {return numbersToCall[value]})
              .take(TOTAL_CALLS);

These few lines of code express the following intents:

  1. we create an observable (Rx.Observable)
  2. that emits a value every few milliseconds (interval method)
  3. iterates through the interval values (an incremental value from 0 to N) and returns a value retrieved from the array numbersToCall (the function described inside the map method)
  4. and after a certain amount of iterations closes the observable, because the game has ended (take method), so all the observers stop executing their code

If I compare this with an imperative implementation made with CSP (communicating sequential processes), I end up with something similar to this:

this.int = setInterval(this.sendData.bind(this), 3000);
[....]
sendData(){
   var val = this.numbersToCall[this.counter];
   console.log("ball called", val);
 
   csp.putAsync(this.channel, val);
   this.counter++;
 
   // >= (not >) so we stop right after the last number has been called
   if(this.counter >= this.numbersToCall.length){
      clearInterval(this.int);
      this.channel.close();
      console.log("GAME OVER");
   }
 }

As you can see, I needed to express every single action required to obtain the core functionality of my bingo system.
These 2 implementations both solve exactly the same problem, but the reactive implementation is far less verbose and easier to read than the imperative one, where I have control over everything happening inside the algorithm but, at the same time, no specific reason to need it.

Moving ahead with the reactive example: when we create an observable that streams data, we always need an observer to retrieve that data.
So now let’s jump to the Ticket class and see how we can validate the numbers called by the engine against a ticket.

First of all we pass the observable via injection to a Ticket object:

let t2 = new Ticket("t2", engine.ballStream);

Then, inside the Ticket class, we subscribe to the observable and handle the different cases of the stream (when we receive data, when an error occurs and when the stream is terminated):

obs.subscribe(this.onData.bind(this), this.onError.bind(this), this.onComplete.bind(this));

onData(value){
   console.log("number called", value, this.tid);
   if(this.nums.indexOf(value) >= 0){
      this.totalNumsCalled.push(value);
      console.log(value + " is present in ticket " + this.tid);
   }
}

onError(err){
   console.log("stream error:", err);
}

onComplete(){
   console.log("total numbers called in " + this.tid + ": " + this.totalNumsCalled.length);
   console.log(this.totalNumsCalled);
}

Here too you can notice the simplicity of the implementation; for instance, if we were working with React it would be very easy to handle the state of a hypothetical Ticket component and create a resilient, well-structured view where each stream state is handled correctly.

An interesting benefit of reactive programming is certainly the simplicity, and at the same time the modularity, of the resulting implementations.
I would really recommend spending some time watching the webinar to get a first approach to Reactive Programming and to better understand the purpose of the example briefly described above.

Hapi.js and MongoDB

During the Fullstack conference I saw a small project made with Hapi.js presented during a talk, so I decided to invest some time working with Hapi.js in order to investigate how easy it was to create a Node.js application with this framework.

I have to admit, this is a really well made framework, with a plugin system that gives you a lot of flexibility when you are creating your server-side applications and with a decent community that provides a lot of useful information and plugins in order to speed up project development.

When I started to read the only book available on this framework, I was impressed by its simplicity and by the thought behind the framework, but more importantly by where Hapi.js was used for the first time.
The first enterprise app made with this framework was released during Black Friday on the Walmart ecommerce site. The results were amazing!
In fact, one of the main contributors to this open source framework is Walmart Labs, which means a big organisation with real problems to solve; definitely a good starting point!

Express vs Hapi.js

If you are asking “why not express?”, I can reply with a few arguments:

  • express is a super light, general purpose framework that works perfectly for small to medium size applications.
  • hapi.js was built on top of express at the beginning, but then the team moved away from it in order to create something more solid and with more built-in functionality; a framework should speed up your productivity, not just hand you a structure to follow.
  • express is code based, whereas hapi.js is configuration based (with a lot of flexibility, of course)
  • express uses middleware, hapi.js uses plugins
  • hapi.js is built with testing and security in mind!

Hapi.js

Let’s start by saying that working with this framework is incredibly easy once you understand the few concepts you need to know in order to create a Node project.

I created a sample project where I integrated a mongo database, exposing a few endpoints in order to add a new document to a mongo collection, update a specific document, retrieve all the documents available in the database and retrieve all the details of a selected document.

Inside the git repo you can also find the front end code (books.html in the project root) in Vanilla Javascript; this way, whether you are passionate about React, Angular or any other front end library, you’ll be able to understand the integration without any particular framework knowledge.

What I’m going to describe now is how I structured the server side code with Hapi.js.

In order to create a server in Hapi.js you just need a few lines of code:

const Hapi = require('hapi');

let server = new Hapi.Server();
server.connection();
server.start((err) => console.log('Server started at:', server.info.uri));

As you can see in the example (src/index.js), I created the server in the first few lines after the require statements, and I start the server (server.start) after the registration of the mongoDB plugin; but one step at a time.

After creating the server object, I defined my routes with the server.route method.
The route method allows you to set just 1 route, passing an object, or several routes, passing an array of objects.
Each route should contain the method parameter, where you define the HTTP method used to reach the path; you can also set a wildcard (*) so that any method will be accepted for that path.
Obviously you then have to set the route path; bear in mind it always has to start with a slash (/) in order to define the path correctly.
The path also accepts variables inside curly brackets, as you can see in the last route of my example: path: ‘/bookdetails/{id}’.

Last but not least, you need to define what’s going to happen when a client requests that particular path, by specifying the handler property.
Handler expects a function with 2 parameters: request and reply.

This is a basic route implementation:

{
   method: 'GET',
   path: '/allbooks',
   handler: (request, reply) => { ... }
}
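And here is a sketch of a route using a path variable and the wildcard method (the handler body is hypothetical):

{
   method: '*',
   path: '/bookdetails/{id}',
   handler: (request, reply) => {
      // the value captured by {id} is available as request.params.id
      reply('book id: ' + request.params.id);
   }
}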

When you structure a real application, and not an example like this one, you can wrap the handler property inside the config property.
Config accepts an object that will become your controller for that route.
So, as you can see, it’s really up to you to pick the right design solution for your project: inline handlers because it’s a small project or a PoC, or external modules because you have a large project where you want to structure your code properly, in an MVC fashion (we’ll see that in the next blog post ;-)).
In my example I created the config property also because it lets you use an awesome library called JOI to validate the data received from the client application.
Validating data with JOI is really simple:

validate: {
   payload: {
      title: Joi.string().required(),
      author: Joi.string().required(),
      pages: Joi.number().required(),
      category: Joi.string().required()
   }
}

In my example, for instance, I check that I receive the correct arguments (required()) in the right format (string() or number()).
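For context, here is a sketch of how that validate block could sit inside a route definition (the route path and handler body are hypothetical):

{
   method: 'POST',
   path: '/addbook',
   config: {
      validate: {
         payload: {
            title: Joi.string().required(),
            author: Joi.string().required(),
            pages: Joi.number().required(),
            category: Joi.string().required()
         }
      },
      handler: (request, reply) => {
         // runs only if the payload passed validation
         reply('book added: ' + request.payload.title);
      }
   }
}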

MongoDB plugin

Now that we have seen how to create a simple server with Hapi.js, let’s go deeper into the Hapi.js plugin system, the most important part of this framework.
You can find several plugins created by the community, and on the official website you can also find a tutorial that explains step by step how to create a custom plugin for Hapi.js.

In my example I used the hapi-mongodb plugin, which allows me to connect a mongo database with my node.js application.
If you are more familiar with mongoose, you can always use the mongoose plugin for Hapi.js.
One important thing to bear in mind about a Hapi.js plugin is that once registered it is accessible from any handler method via request.server.plugins; it’s injected automatically by the framework in order to facilitate the development flow.
So the first thing to do in order to use the mongodb plugin in our application is to register it:

const MongoDB = require('hapi-mongodb');
const DBConfig = require('./config'); // path of the configuration module shown below (assumed)

server.register({
   register: MongoDB,
   options: DBConfig.opts
}, (err) => {
   if (err) {
      console.error(err);
      throw err;
   }

   server.start((err) => console.log('Server started at:', server.info.uri));
});

As you can see, I just need to specify which plugin I want to use in the register method, along with its configuration.
This is an example of the configuration you need to specify in order to connect your MongoDB instance with the application:

module.exports = {
   opts: {
      "url": "mongodb://username:password@id.mongolab.com:port/collection-name",       
      "settings": {          
         "db": {             
            "native_parser": false         
         }
      }    
   }
}

In my case the configuration is an external object where I specified the mongo database URL and the settings.
If you want a quick and free solution to use mongoDB in the cloud, I can suggest mongolab: when you register you get 500MB of data for free per account, so for testing purposes it’s really the perfect cloud service!
Last but not least, once the plugin registration has happened, I can start my server.

Whenever I need to use the plugin inside a handler function, I can retrieve it this way:

var db = request.server.plugins['hapi-mongodb'].db;

In my sample application I created a few routes: adding a new document (addbook route), retrieving all the books (allbooks route) and the details of a specific book (bookdetails route).
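As a sketch of how these pieces fit together, the allbooks handler could look something like this (the exact handler body in the repo may differ):

handler: (request, reply) => {
   const db = request.server.plugins['hapi-mongodb'].db;

   // fetch every document from the books collection and return it to the client
   db.collection('books').find().toArray((err, books) => {
      if (err) return reply(Boom.internal('Internal MongoDB error', err));
      reply(books);
   });
}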


If you want to update a record in mongo, remember to use the update method rather than the insert method: if correctly handled, update will check inside your database for other occurrences of the document and, if one exists, it will update that occurrence; otherwise it will create a new document inside the mongo collection.
Below is an extract of this technique: in the first object you specify the key for searching the item, then the object to replace it with; the last object you need to add is one with upsert set to true (by default it’s false), which allows you to create the new document if it doesn’t exist in your collection:

db.collection('books').updateOne({"title": request.payload.title}, dbDoc, {upsert: true}, (err, result) => {
    if(err) return reply(Boom.internal('Internal MongoDB error', err));
    return reply(result);
});

SAMPLE PROJECT GITHUB REPOSITORY

Resources

If you are interested in going deeper into Hapi.js, I’d suggest taking a look at the official website or at the book currently available.
An interesting piece of news is that a few more books about Hapi.js will be published soon:

which usually means Hapi.js is getting adopted by several companies and developers, and that’s definitely a good sign for the health of the framework.

Wrap up

In this post I shared with you a quick introduction to the Hapi.js framework and its peculiarities.
If you’ve enjoyed it, please let me know what you would be interested in, so I’ll be able to prepare other posts on the topics you prefer.
Probably the next one will be on the different template systems (handlebars, react…), or about universal applications (or isomorphic applications, as you may prefer to call them), or a test drive of a few plugins to use in Hapi.js web applications.

Anyway, I’ll wait for your input as well 😀

2015: the birth of the London JS community

We are very close to the end of 2015, and it’s time to take a look back at the first 11 months of this year, understand what I’ve accomplished and set some resolutions for the new year as well!

2015, for my professional life, was a year of changes: I moved to a new job, I learnt new programming paradigms (reactive and functional programming), I met a lot of incredible and talented people, and many many other things.

Probably the biggest changes (or old loves?) are the creation of the London JavaScript community and my return as a speaker at technical talks after a couple of years of inactivity.

Last May I started a new adventure, creating a new JavaScript community in London; I spent a month figuring out how and why I could do it properly.
In the past I had the chance to be a staff member of the largest Italian Actionscript community, and it was a great experience.
I was young, with a lot of passion and not much experience, and the community was the perfect place to grow properly, meet new friends, learn from different people, try to solve technical puzzles every day and, why not, have fun!

Starting this experience again in a new country was a real challenge for me, but this time I came with far more experience than the first time, clear ideas and a touch of madness.

In 2013 I had the opportunity to spend a few months in Silicon Valley, and I went to several meetup events and conferences from San Francisco to San Jose.
What really impressed me about that environment was the vibe you could breathe at any of these events.
People from all over the world helping each other, facilitating connections between individuals, creating opportunities and sharing knowledge.
I was astonished by this way of doing community, and when I moved from Italy to London I spent the first 18 months going to different events trying to find the same experience and vibe.

When I started the London Javascript community, my main goal was definitely to recreate that vibe in this great city, where the best developers from all over the world are working on interesting projects with lots of challenges to solve.
That’s why I decided to do it: to fill a gap in the London communities, where there were many strong vertical framework communities (like the React and Angular ones) but no strong, general JavaScript community.

After 8 months, I have to admit I’m very happy with the results this community has been able to generate:

  • ~1400 members
  • 7 live events
  • 1 half-day code lab
  • 2 webinars
  • an average of 60 people per event
  • listed as an O’Reilly Community partner, Google Community and Skillsmatter Community
  • community partner of great events like the FullStack conference and the dotJS conference
  • more than 1900 tweets & retweets, with more than 500 followers

I also had the opportunity to meet great speakers, amazing developers and passionate people.

What this community gives me back is truly invaluable and really hard to find in similar activities; I’m really grateful for every single moment spent organising the events.

What about the resolutions for 2016?

Recently I started to gather interest from several companies asking me to organise meetup events in their offices, which means all the hard work done is paying the community back!

I’ve contacted a few speakers from California, and next year I’m organising a couple of webinars with speakers from Silicon Valley and directly from the Google office in Mountain View.

I’m already in contact with a few great speakers ready to share tips and tricks with the community on different topics like webpack, jspm, angular, webrtc, react and ES6! Keep an eye on the community page to discover more about these events.

Last but not least, I’m currently working on the official community website: it will be a SPA with social integrations and the possibility to subscribe to a technical newsletter where I’m going to share the best articles, tutorials and events on the web.

Obviously these things are already on the roadmap, but I’m really open to hearing what the JavaScript community is looking for! So don’t waste this opportunity: share your ideas on how to grow and make OUR community more special!

ES2015 Destructuring assignment: by value or reference?

This week I organised a meetup on ES2015 in my community, where the speaker presented his favourite features of the language.

Right after the talk I had a chance to chat with my best friend, who was asking whether destructuring assigns values by copying them to the new variable or by reference when you work with an Object or Array.
Because I hadn’t had a chance to work with this new ES2015 feature before, I put together a quick example just to get an answer to this question.

It turns out that, for Objects and Arrays, destructuring works by reference and doesn’t copy the value.
That means that any time you change a value inside a variable that contains an Object or Array assigned via destructuring, the original Object or Array will be affected as well, as you can see in this simple example: destructuring example ES2015
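Here is a minimal sketch of the behaviour (the variable names are just illustrative):

let source = { person: { name: "Luca" }, scores: [1, 2, 3] };
let { person, scores } = source;

person.name = "Anna"; // mutates the same object referenced by source.person
scores.push(4);       // mutates the same array referenced by source.scores

console.log(source.person.name); // "Anna"
console.log(source.scores);      // [1, 2, 3, 4]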


So when you work with destructuring, bear in mind to pay a lot of attention when you change a value inside your destructured Objects and Arrays!

Automate, automate and automate!

Recently I’ve spent several days finding the best way to set up an automation process for Javascript developers, and I investigated several tools strictly related to the Javascript world.
These tools allow you to save a lot of time when you perform repetitive, and sometimes boring, tasks in order to test your HTML5 game, website or web app.
In this post I’d like to share the tools I’ve found and used to create a full-stack Javascript automation pipeline for any front end developer or team.

Let’s see what a Javascript developer or team could automate on their machines to achieve better code quality and save a lot of time while working on their own libraries or projects.
For this purpose, in my pipeline, I’ve used different CLI tools like mocha, grunt, yeoman, blanket or plato.
Each of these tools performs a specific task, but combined together they will provide your projects with:

  • tdd, bdd and unit tests
  • code coverage
  • dependency management
  • (custom) project templates
  • static analysis
  • task automation (live reloading, deploying to a localhost folder, file concatenation…)

These are only a few of the many options you get by “playing” with these tools, but let’s go a little deeper and see which tool can effectively help accomplish each item in the list above.

TDD, BDD and UNIT TEST

Searching the web on this topic you’ll find a lot of good Javascript libraries; the one I decided to use is Mocha, because it’s very well integrated with Blanket (code coverage) and Karma (test runner), and because it’s based on node.js, so you can create your libraries and test them in pure javascript without any need to go through HTML pages. And if you need to test javascript code that will run only inside the browser, you can fake the window object with libraries like jsdom integrated into your test cases.
Mocha allows you to work in BDD, TDD and Unit Testing style; you can easily mix in several assertion libraries, and writing async tests also becomes really, really easy.
Other libraries that could be useful are Jasmine or QUnit.
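For a flavour of the syntax, here is a minimal Mocha test in BDD style using Node’s built-in assert module (the sum function is just a stand-in for your own code):

var assert = require('assert');

function sum(a, b) { return a + b; }

describe('sum', function () {
    it('adds two numbers', function () {
        assert.equal(sum(2, 3), 5);
    });

    it('supports async tests via the done callback', function (done) {
        setTimeout(function () {
            assert.equal(sum(1, 1), 2);
            done();
        }, 10);
    });
});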

CODE COVERAGE

As I wrote before, I found an interesting library that works perfectly with Mocha: Blanket.js.
Blanket is a very simple and easy-to-use library, in particular when all your tests are written as modules (node.js style) instead of a mix of html and js files.
Blanket works not only with Mocha but also with Jasmine and QUnit, so basically with the most famous testing libraries!
One thing I really appreciate about blanket is the final report, which can be exported as an interactive HTML page where you can immediately recognise what’s not tested yet and jump from one file to another following the menu on the right side of the template.
Another one that seems quite interesting is Istanbul.js; I haven’t tried it yet, but it’ll be the next one for sure, as I’ve heard about really good experiences with this library from other developers!
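For reference, here is a sketch of a typical node-style Blanket setup, as far as I remember from the docs (the pattern value tells blanket which paths to instrument; “src” is a placeholder). You add a config block to package.json and then run mocha with coverage enabled, for example: mocha --require blanket -R html-cov > coverage.html

"config": {
   "blanket": {
      "pattern": "src",
      "data-cover-never": "node_modules"
   }
}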

STATIC ANALYSIS

When you want to use a static analysis tool in your pipeline, one of the most popular is….
But I suggest giving Plato a try, in particular if you work alone or in a small team and you want to do a sanity check of your project.

[Screenshot: Plato report sample]
Plato, in fact, stores all the information about your code project locally in some JSON files, and you can navigate through the report directly from an HTML page created by the tool (above, a screenshot sample).
These stats are very interesting for checking the areas of improvement of your project; in particular, with these tools you get immediate feedback on where your efforts should be focused in order to deliver a better product and be sure that maintenance won’t cost too much later on.
Obviously you can also use more sophisticated solutions like SonarQube, installing it on your server with the Javascript plugin and running a static analysis every time a developer pushes code to git or mercurial.
It always depends on the size of the project and the team; my suggestion is to start with Plato on a small project and then, when you see the real value, move to SonarQube, even if you are a small organisation.

PROJECT TEMPLATE

When you talk about templates for Javascript it’s impossible not to mention Yeoman.
Yeoman is a scaffolding tool that allows you to create project skeletons with your favorite JS library, ready to use.
I really suggest using these kinds of tools because they facilitate the beginning of new projects and, at the same time, give you some standards within your company and across your projects.
There are several generators ready to use and searchable from the official website; if you can’t find what you’re looking for, it’s very easy to use Node.JS and the APIs already built into Yeoman to create your own generator with the functionalities your company or projects need.

TASKS AUTOMATION and DEPENDENCIES MANAGEMENT

This is my favourite part: I found in Grunt a really good tool for automating more or less everything that is not strictly related to writing the project code inside my IDE!
Grunt is the glue that assembles all the tools explained above into a pipeline, launched easily with one line in your CLI: “grunt”.
The community is really huge and you can find more or less everything for Node.JS or plain Javascript: from minifying, uglifying and concatenating your JS files, to compiling your LESS or SASS files, converting your ES6 code to ES5, running static analysis or pushing your code directly to git, simply with a grunt task.
One thing I really like about Grunt is that you can easily scale the way you work with it, using a yaml file and different js files (one per task) and assembling them at runtime.
This allows you to create common tasks for the whole company and, at the same time, have the freedom to add custom automation for each project and/or department of your company.
I really suggest taking a look at the official website, where you can also find a lot of technical information, and then starting to automate your daily Javascript workflow.
Obviously, if you’re not working with JS you can still use Grunt in combination with your favourite programming language or technology like Haxe, Dart, Typescript, Coffeescript or Adobe AIR; the flexibility of this tool is really impressive!
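To make this concrete, here is a minimal Gruntfile sketch (assuming grunt-contrib-uglify and grunt-contrib-watch are installed as devDependencies; the file paths are placeholders):

module.exports = function (grunt) {
    grunt.initConfig({
        // minify the application bundle
        uglify: {
            build: {
                src: 'src/app.js',
                dest: 'dist/app.min.js'
            }
        },
        // re-run the minification every time a source file changes
        watch: {
            scripts: {
                files: ['src/**/*.js'],
                tasks: ['uglify']
            }
        }
    });

    grunt.loadNpmTasks('grunt-contrib-uglify');
    grunt.loadNpmTasks('grunt-contrib-watch');

    grunt.registerTask('default', ['uglify']);
};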

An alternative to Grunt could be Gulp, where the main difference is that Grunt favours configuration over code and Gulp does exactly the opposite.
The Gulp community is growing day by day, and it’s interesting to see the different approaches of these two great task runners; probably in the long term the Gulp approach will be more successful, but for now Grunt is exactly what I was looking for.

Conclusion

As you’ve read, the JS world really has a lot of useful tools that will save you a lot of time in your daily job as a developer or company.
Mixed together, these tools allow you to create a pipeline in pure Javascript; they can really improve your code quality and your workflow, giving you standards within your projects and a solid flow able to scale your company or projects in an easy and professional way.
Obviously these aren’t the only tools and libraries I’ve tried; there are many others out there that I’d like to mention, like PhantomJS or Buster or Lineman and so on. But for the next five minutes, before going back to what you were doing before reading this post, try to think about how to improve your flow; trust me, you’ll be surprised by how much more productive you become after introducing these tools into your routine.