Visual Studio Code extensions demystified

When I tried Visual Studio Code on my Mac for the first time, I was quite impressed by its performance.
The investment Microsoft has made in this editor over the last few years is remarkable, especially considering it’s open source software rather than a commercial product.
As you may know, with Visual Studio Code you can create your own extensions and share them with the community on the marketplace.
For me this was just a quick and interesting pet project before going back to my reactive studies, but it’s worth sharing.

I created a simple extension that retrieves all the annotations in my JavaScript projects and groups them by category inside the output panel or in a markdown file.

[screenshot: the vscode-annotations output panel]

You can download the extension, called vscode-annotations, directly from the marketplace or via the extensions panel in the Visual Studio Code editor.
If you’d rather take a look at the source, feel free to clone the project from GitHub.

 

[screenshot: the extension in the Visual Studio Code extensions panel]

First steps

If you want to start working on an extension quickly, there is a Yeoman generator provided by the Visual Studio Code team that creates the folder structure and the files necessary for publishing your extension later on.
In order to use it, just run these commands in your terminal window:

npm install -g yo generator-code
yo code

During the generation, the interactive generator will ask whether you prefer working with TypeScript or pure JavaScript; in my case I picked the latter.
After that your project is ready and you can start to have fun with Visual Studio Code!
If you prefer starting with the classic Hello World project, feel free to check the Microsoft tutorial.

In my annotations extension I simply provide 3 commands, available in the command palette (CMD+SHIFT+P or View > Command Palette):

  • output all the annotations in the file currently open in the editor
  • output all the annotations in a specific project
  • export all the annotations from a specific project to a Markdown file

The first two create an output panel inside the editor showing the annotations present in a specific file or in the entire workspace; the third one creates a markdown file with all the annotations for a specific project.

When you want to add a command to the command palette, you need to set it up in a few places. The first one is the package.json, in the activationEvents array:

"activationEvents": [
     "onCommand:extension.getAnnotations",
     "onCommand:extension.getAllAnnotations",
     "onCommand:extension.createAnnotationsOutput"
 ]

and then in the commands array:

"contributes": {
    "commands": [
    {
        "command": "extension.getAnnotations",
        "title": "ANNOTATIONS: check current file"
    },
    {
        "command": "extension.createAnnotationsOutput",
        "title": "ANNOTATIONS: export markdown file"
    },
    {
        "command": "extension.getAllAnnotations",
        "title": "ANNOTATIONS: check current project"
    }]
 }

In the commands array we define the label that will appear inside the command palette and the action that should be triggered when the user selects a specific command.
Then we need to register each of them in the extension.js file (created by the scaffolder), inside the activate method that is triggered once the editor has loaded your extension:

vscode.commands.registerCommand('extension.getAnnotations', function () {
    // extension code here
 });
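
To give an idea of the file’s shape, a minimal extension.js could look like the sketch below, where getAnnotations is a hypothetical helper representing the extension logic:

const vscode = require('vscode');

function activate(context) {
    const disposable = vscode.commands.registerCommand('extension.getAnnotations', function () {
        // scan the document open in the active editor for annotations
        getAnnotations(vscode.window.activeTextEditor.document);
    });

    // release the command when the extension is deactivated
    context.subscriptions.push(disposable);
}

exports.activate = activate;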

With just these few lines of code, your commands show up in the palette as expected:

[screenshot: the extension commands in the command palette]

Microsoft provides well documented APIs for interacting with the editor. Also, because Visual Studio Code is based on Electron, bear in mind that you can use the Node.js APIs to extend your extension’s functionality, for instance to create a file or to interact with the operating system.

Working with the workspace

When you want to interact with the editor, manipulate files or print inside the embedded console, you need to deal with the workspace APIs.
In order to do that you need to become confident with a couple of objects of the vscode library:

  • window
  • workspace

With window and workspace you can handle the editor UI and the selected project end to end.
The window object is mainly used to understand what’s happening inside a file while a user is editing it.
You can also use the window object to show notifications or error messages, or to change the status bar.

With the workspace object, instead, you can manage all the interactions happening inside the menu or the editor interface.
The workspace object is useful when you want to iterate through the project files, or when you need to understand which files are currently open in the editor and when they get closed, for instance.

In my extension I used these 2 objects for showing a notification to the user:

vscode.window.showErrorMessage('There aren\'t javascript files in the open project!');

for interacting with the output panel:

// create a dedicated output channel for the annotations
const outputWin = vscode.window.createOutputChannel(outputWin_NAME);

[....]

// print the file name followed by the annotations found inside it
outputWin.appendLine(`FILE -> file://${doc.fileName}`);
outputWin.appendLine("-----------------------------------------");
outputWin.appendLine(getBody(data, OUTPUT_PANEL_CONFIG));
outputWin.appendLine(OUTPUT_PANEL_CONFIG.newline);

// reveal the output panel without stealing the focus from the editor
outputWin.show(true);

and for finding and opening the JavaScript files present inside a project:

vscode.workspace.openTextDocument(file.path)

[...]

vscode.workspace.findFiles('**/*.js', '**/node_modules/**', 1000).then(onFilesRetrieved)
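
Putting the two calls together, a minimal sketch of the project scan could look like this (the annotation parsing itself is omitted):

// find up to 1000 javascript files in the workspace, skipping node_modules
vscode.workspace.findFiles('**/*.js', '**/node_modules/**', 1000).then((files) => {
    files.forEach((file) => {
        vscode.workspace.openTextDocument(file.path).then((doc) => {
            // doc.getText() returns the file content to scan for annotations
            console.log(doc.fileName, doc.getText().length);
        });
    });
});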

Debugging an extension

Considering you are developing an extension for an editor, you can easily debug what you are doing by simply running the extension in debug mode (F5, or fn+F5 on a MacBook).

[screenshot: the extension running in debug mode]

A few suggestions regarding debug mode:

  • console.dir doesn’t work; console.log covers what console.dir does when you inspect an object, but not an array!
  • when an error occurs it’s not very self-explanatory (kudos to Facebook for React Native’s error handling, best implementation ever!), so you will need to follow the stack trace as usual

Publishing an extension

The last part of this brief post is about submitting your extension to the Visual Studio Code marketplace.
In this case too Microsoft did a good job, creating an extensive guide on how to do that; still, a few suggestions:

  • in order to submit an extension to the marketplace you will need to create a Microsoft account and a Visual Studio Team Services account
  • when you create the Personal Access Token for publishing your extension, remember to set access to all accounts and all scopes, otherwise you could end up with a 401 or 404 error when you try to publish the extension
    [screenshot: Personal Access Token settings]
  • the vsce command line tool is pretty good for creating a publisher identity and super fast for publishing an extension to the marketplace.
    Since it’s a CLI tool, you can also automate parts of the publishing process (increasing the release number, for instance) by adding scripts to your package.json, as shown in the sketch after this list
  • to make your extension more discoverable in the marketplace, remember to add the keywords array to the package.json with meaningful words, and to pick the appropriate category; at the moment the following categories are available:
    Debuggers
    Extension Packs
    Formatters
    Keymaps
    Languages
    Linters
    Snippets
    Themes
    Other

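As an example of that automation, vsce can bump the version number for you while publishing, so a couple of npm scripts in the package.json are enough for the release part (a sketch, check the vsce documentation for all the options):

"scripts": {
    "publish:patch": "vsce publish patch",
    "publish:minor": "vsce publish minor"
}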

Wrap up

There are tons of other things to do and discover when developing a Visual Studio Code extension, but I think this is a good recap of the lessons I learnt creating one, and you can use it alongside the Microsoft guide.

HTTP2: the good, the bad and the ugly

I spent the last few weeks investigating HTTP2, the successor of HTTP1.1, and I’d like to share my findings and thoughts in this post.

Let’s start by saying that if the question in your mind at this point is “Can I really use it today, not only for experiments but also in production?”,
my answer would be: “YES, you can!”

First of all, I’d like to share with you the browser implementation status for this protocol:

[screenshot: caniuse.com HTTP2 browser support table]

As you can see from the screenshot taken from caniuse.com, it’s definitely well supported in the latest versions of the major browsers, with some caveats obviously.

If you are not convinced yet, please check this website with one of the browsers that currently support HTTP2 and see how fast it loads!
I’d also suggest installing the HTTP2 indicator Chrome extension to discover how many web apps and online services are already using this protocol:

[screenshot: the HTTP2 indicator extension on an HTTP2-enabled site]

Not convinced yet?! OK, let’s move on to a deeper analysis then!

HTTP2 is a binary protocol that implements request multiplexing, which means all the browser’s requests are handled asynchronously over a single connection.

This massive change drastically increases the performance of your application.
Considering that at the moment a browser can simultaneously download only a handful of resources per domain (let’s avoid talking about “domain sharding” for now), with HTTP2 we can request all the resources at once and render them as soon as the browser finishes downloading them. Check this demo made with Go for a proper comparison between the 2 protocols, and also check the Network panel in the Chrome or Firefox dev tools to understand how the 2 protocols differ.

The Good

HTTP2 has very few rules you need to follow in order to adopt it:

  • it works ONLY over the HTTPS protocol in browsers (therefore you need a valid SSL certificate)
  • it’s backward compatible, so if a browser or a device where your application is running doesn’t support HTTP2, it will fall back to HTTP1.1
  • it comes with great performance improvements out-of-the-box
  • for a basic implementation it doesn’t require any work on the client side, only on the server side
  • a few interesting new features will allow you to speed up the load of your web project in ways that aren’t even imaginable with an HTTP1.1 implementation

Despite the short list, HTTP2 brings a substantial change to the internet ecosystem.
One of my favourite features is server PUSH, where a server can pass a Link header specifying what the browser should download in advance, before it even starts parsing the HTML document.
In this way, we can educate the browser to download several resources like images, CSS or even JavaScript files before the engine recognises them inside the DOM, providing a better user experience for our web apps and/or games.
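
To make the idea concrete, here is a minimal sketch of a push-enabled endpoint using Node’s http2 module (available from Node 8.4; certificate paths and file contents are just placeholders):

const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
    // browsers speak HTTP2 only over TLS, so a certificate is mandatory
    key: fs.readFileSync('server.key'),
    cert: fs.readFileSync('server.crt')
});

server.on('stream', (stream, headers) => {
    if (headers[':path'] === '/') {
        // push the stylesheet before the browser discovers it in the HTML
        stream.pushStream({ ':path': '/style.css' }, (err, pushStream) => {
            if (err) throw err;
            pushStream.respond({ ':status': 200, 'content-type': 'text/css' });
            pushStream.end('body { font-family: sans-serif; }');
        });

        stream.respond({ ':status': 200, 'content-type': 'text/html' });
        stream.end('<link rel="stylesheet" href="/style.css"><h1>hello HTTP2</h1>');
    }
});

server.listen(8443);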

The Bad

There is still plenty of work to do before this protocol reaches great penetration: a few specs are still ongoing (read the next paragraph: the ugly) and it will probably take quite a few months before we see a lot of services moving to this new protocol.

Apart from this high level overview of the downsides, let’s look at what will change on the technical side.

Considering that HTTP2 doesn’t restrict the number of requests a browser makes to download resources, a few of the techniques we use for optimising our websites will need to be reviewed or even removed from our pipeline.
Delivering the whole application inside a single JavaScript file won’t bring any benefit with HTTP2, so we should change our logic to download only what we need, when we need it.
Likewise, since many small requests are no longer a problem, we could serve several small images instead of a sprite to handle the icons of our website.
Tools like Grunt, Gulp or Webpack will probably need to review their strategies, or update their plugins, in order to provide real value in this new project pipeline.

The Ugly

Google Chrome’s protocol implementation!
Chrome is my favourite browser and I use it extensively, in particular when I need to debug a specific script or gather metrics on a specific behaviour of a web app.
At the moment it’s the only browser that requires HTTP2 to be negotiated by the server via ALPN (Application-Layer Protocol Negotiation), a TLS extension that allows the application layer to negotiate which protocol will be used within the TLS connection.

Considering that OpenSSL supports ALPN only from version 1.0.2, we won’t be able to enable HTTP2 for Chrome (build 51 and above) if we don’t configure our server correctly.
For instance, on Linux, only Ubuntu from version 16.04 ships that OpenSSL version by default; on all the other major Linux distributions you will either have to install the newer version manually or wait for the next major OS release.
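
So before touching your configuration it’s worth checking which OpenSSL version your server is built against, for example with:

openssl version

If it prints anything older than 1.0.2, Chrome will silently fall back to HTTP1.1 even with HTTP2 enabled on the server.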

I’d suggest carefully reading the article that describes this “issue” on the nginx blog before you start configuring your server for Chrome.

Wrap up

HTTP2 is not perfect, and probably not yet supported as widely as it should be, but it can definitely improve (drastically, in certain cases) your web project’s performance.
A lot of “big players” are already using HTTP2 in production (Instagram, Twitter and Facebook, for instance) and the results are remarkable.

Why not start catching up with the future today?

How to Dockerize your Node application

Today I’d love to share how to wrap a Node microservice or monolithic application inside a Docker container.
I assume you’ve already installed Docker, boot2docker and Node on your machine.
If you haven’t, please check the official Docker and Node pages and, after picking your operating system, follow the instructions.

I’ve created a simple Node application with Hapi.js that returns the classic “hello world” message when called. Obviously you can have a far more complex application as well; my main purpose here is to talk about how to set up Docker with your Node.js application.
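
For reference, the entire application is roughly the sketch below (using the Hapi API of that version):

const Hapi = require('hapi');

const server = new Hapi.Server();
server.connection({ port: 8080 });

// a single route replying with the classic hello world message
server.route({
    method: 'GET',
    path: '/',
    handler: (request, reply) => reply('hello world')
});

server.start((err) => {
    if (err) throw err;
    console.log('Server running at:', server.info.uri);
});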

In order to wrap your application inside a Docker container you need to create a Dockerfile; this file basically defines the setup of the environment where your application will run.
Checking the Docker images currently available on Docker Hub, you can find the official Node page, which allows you to pick the right Node version for your application.

[screenshot: official Node images on Docker Hub]

As you can see there are many different Node images available. You could also use other solutions, like Ubuntu, Fedora or CentOS as the base OS where your Node application is going to run, but I preferred the official image because my server side application doesn’t require any particular configuration.
For this example we are going to use 5-onbuild, which basically does the “dirty job” for us; obviously this is a sample application, but you can customise your container as you prefer.
What the onbuild version basically does is:

  • creating the application directory inside the container
  • copying package.json into that folder
  • running the “npm install” command
  • running the “npm start” command

It’s fundamental to define all the dependencies and the start script inside the package.json, otherwise the application is not going to work inside the container.

Inside our Dockerfile we are going to write:

FROM node:5-onbuild
EXPOSE 8080

So basically we are inheriting all the steps described above from the onbuild Dockerfile, plus we are exposing port 8080, which is the one used by our Node application:

const server = new Hapi.Server();
server.connection({ port: 8080 });

It’s a best practice for any Dockerfile to always start with a FROM command and to reuse the images already created by the community on Docker Hub, so when you want to try something different, remember to always check what’s available on Docker Hub first.

Good, so now let’s package our container and check that it works correctly.
In order to build the image we’ll need to run the following command:

docker build -t <username>/<applicationName> .

Basically here we are telling Docker to dockerize the entire application folder (the dot at the end of the command), and the image will be called <username>/<applicationName>.
This is another Docker best practice: you can potentially call your image whatever you prefer, but a common pattern is your Docker username, then a slash (“/”), then your application name:

docker build -t lucamezzalira/docker-hapi .

Now let’s check that it works:

docker run -p 49160:8080 -d lucamezzalira/docker-hapi

With this command we are running our container, and therefore our Node application, mapping port 8080 of the container to port 49160 of the host.
You can easily check that the application is working correctly by typing docker ps in your console; you should see something like this:

[screenshot: docker ps output]

Now, because I’m working on a Mac, I need to retrieve the boot2docker IP and check whether on port 49160 I’m able to see my hello world application up and running.
In order to do that I’ll run the command:

boot2docker ip

That should return the external IP where my application is running. You can also see the container’s IP using the command:

docker inspect CONTAINER ID (for example: docker inspect afb5810152f6)

The container ID is easily retrievable via docker ps (first column in the picture above).
This command will return a JSON file with a lot of information related to the container.

So now that I have the IP where my application is running and the related port, I can type the address into my browser and see the result!

[screenshot: the hello world response in the browser]

Obviously you can also map that IP in the /etc/hosts file, adding something more meaningful like boot2docker or whatever name you think is more appropriate!
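
For example, assuming boot2docker ip returned 192.168.59.103 (the usual boot2docker default, but check yours), you could add this line to /etc/hosts:

192.168.59.103    boot2docker

and then reach the application at http://boot2docker:49160.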

If you want to download the image I’ve prepared for this post from Docker Hub, just type:

docker pull lucamezzalira/docker-hapi

This is a very basic introduction to the Docker world and Node. I’m working on a sample microservices application that will involve a few interesting concepts and patterns, so keep an eye on this blog 😉

Hapi.js and MongoDB

During the Fullstack conference I saw a small project made with Hapi.js in one of the talks, so I decided to invest some time working with Hapi.js to investigate how easy it is to create a Node.js application with this framework.

I have to admit, this framework is really well done, with a plugin system that gives you a lot of flexibility when you are creating your server side applications, and a decent community that provides a lot of useful information and plugins to speed up project development.

When I started reading the only book available on this framework, I was impressed by its simplicity and the thinking behind it, but even more impressed by where Hapi.js was used for the first time.
The first enterprise app made with this framework was released during Black Friday on the Walmart ecommerce site. The results were amazing!
In fact one of the main contributors to this open source framework is Walmart Labs, which means a big organisation with real problems to solve; definitely a good starting point!

Express vs Hapi.js

If you are asking “why not Express?”, I can reply with a few arguments:

  • express is a super light and general purpose framework that works perfectly for small to medium size applications.
  • hapi.js was built on top of express at the beginning, but then the team moved away from it in order to create something more solid and with more built-in functionality; a framework should speed up your productivity, not just hand you a structure to follow.
  • express is code based, hapi.js is configuration based (with a lot of flexibility, of course)
  • express uses middleware, hapi.js uses plugins
  • hapi.js is built with testing and security in mind!

Hapi.js

Let’s start by saying that working with this framework is incredibly easy once you understand the few concepts you need to know to create a Node project.

I created a sample project where I’ve integrated a Mongo database, exposing a few endpoints to add a new document to a Mongo collection, update a specific document, retrieve all the documents available inside the database, and retrieve the details of a selected document.

Inside the git repo you can also find the front end code (books.html in the project root), written in vanilla JavaScript; this way, whether you are passionate about React, Angular or any other front end library, you’ll be able to understand the integration without any particular framework knowledge.

What I’m going to describe now is how I’ve structured the server side code with Hapi.js.

In order to create a server in Hapi.js you just need a few lines of code:

let server = new Hapi.Server();
server.connection();
server.start((err) => console.log('Server started at:', server.info.uri));

As you can see in the example (src/index.js), I create the server in the first few lines after the require statements, and I start it (server.start) only after the registration of the MongoDB plugin; but one step at a time.

After creating the server object, I define my routes with the server.route method.
The route method allows you to set just one route with an object, or several routes with an array of objects.
Each route should contain the method parameter, where you define the HTTP method used to reach the path; you can also set a wildcard (*) so that any method is accepted for that path.
Then, obviously, you have to set the route path; bear in mind it always has to start with a slash (/) to be defined correctly.
The path also accepts variables inside curly brackets, as you can see in the last route of my example: path: ‘/bookdetails/{id}’.

Last but not least, you need to define what happens when a client requests that particular path, specifying the handler property.
The handler expects a function with 2 parameters: request and reply.

This is a basic route implementation:

{
   method: 'GET',
   path: '/allbooks',
   handler: (request, reply) => { ... }
}

When you structure a real application, rather than an example like this one, you can wrap the handler property inside the config property.
Config accepts an object that becomes your controller for that route.
So, as you can see, it’s really up to you to pick the right design for your project: inline handlers because it’s a small project or a PoC, or an external module because you have a large project where you want to structure your code properly in an MVC fashion (we’ll see that in the next blog post ;-)).
In my example I created the config property also because it lets you use an awesome library called Joi to validate the data received from the client application.
Validating data with Joi is really simple:

validate: {
   payload: {
      title: Joi.string().required(),
      author: Joi.string().required(),
      pages: Joi.number().required(),
      category: Joi.string().required()
   }
}

In my example, for instance, I check that I receive all the required arguments (required()) and that they are in the right format (string() or number()).
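
Putting the pieces together, a route from my example looks roughly like this (the handler body is simplified here):

{
    method: 'POST',
    path: '/addbook',
    config: {
        handler: (request, reply) => {
            // at this point the payload has already been validated by Joi
            reply('book added');
        },
        validate: {
            payload: {
                title: Joi.string().required(),
                author: Joi.string().required(),
                pages: Joi.number().required(),
                category: Joi.string().required()
            }
        }
    }
}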

MongoDB plugin

Now that we understand how to create a simple server with Hapi.js, let’s dive into the Hapi.js plugin system, the most important part of this framework.
You can find several plugins created by the community, and on the official website there is also a tutorial that explains, step by step, how to create a custom plugin for Hapi.js.

In my example I used the hapi-mongodb plugin, which allows me to connect a Mongo database to my Node.js application.
If you are more familiar with Mongoose you can always use the Mongoose plugin for Hapi.js.
One important thing to bear in mind about a Hapi.js plugin is that once it’s registered it is accessible from any handler method via request.server.plugins; it’s injected automatically by the framework to facilitate the development flow.
So the first thing to do in order to use the mongodb plugin in our application is to register it:

server.register({
   register: MongoDB,
   options: DBConfig.opts
}, (err) => {
   if (err) {
      console.error(err);
      throw err;
   }

   server.start((err) => console.log('Server started at:', server.info.uri));
});

As you can see, I just need to specify which plugin I want to use in the register method, along with its configuration.
This is an example of the configuration you need in order to connect your MongoDB instance to the application:

module.exports = {
   opts: {
      "url": "mongodb://username:password@id.mongolab.com:port/collection-name",
      "settings": {
         "db": {
            "native_parser": false
         }
      }
   }
}

In my case the configuration is an external object where I specify the Mongo database URL and the settings.
If you want a quick and free way to use MongoDB in the cloud, I can suggest mongolab: when you register you get 500mb of data for free per account, so for testing purposes it’s really the perfect cloud service!
Last but not least, once the plugin registration has happened, I can start my server.

When I need to use the plugin inside any handler function, I can retrieve it this way:

var db = request.server.plugins['hapi-mongodb'].db;

In my sample application I cover a few cases: adding a new document (the addbook route), retrieving all the books (the allbooks route) and retrieving the details of a specific book (the bookdetails route).
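
For instance, the handler of the allbooks route could look roughly like this sketch (Boom is the error library used in the extract further below):

handler: (request, reply) => {
    // the plugin is injected by the framework into request.server.plugins
    const db = request.server.plugins['hapi-mongodb'].db;

    db.collection('books').find().toArray((err, result) => {
        if (err) return reply(Boom.internal('Internal MongoDB error', err));
        return reply(result);
    });
}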


If you want to update a record in Mongo, remember to prefer the update method over the insert method because, when correctly configured, update checks whether the document already exists in your database: if it does, it updates that occurrence, otherwise it creates a new document in the collection.
Below is an extract of this technique: the first object is the key used to search for the item, then comes the object to replace it with, and the last object you need to add has upsert set to true (it’s false by default), which allows the new document to be created if it doesn’t exist in your collection:

db.collection('books').updateOne({"title": request.payload.title}, dbDoc, {upsert: true}, (err, result) => {
    if(err) return reply(Boom.internal('Internal MongoDB error', err));
    return reply(result);
});

SAMPLE PROJECT GITHUB REPOSITORY

Resources

If you are interested in going deeper into Hapi.js, I’d suggest taking a look at the official website or at the book currently available.
An interesting piece of news is that a few more books about Hapi.js will be published soon; that usually means Hapi.js is being adopted by several companies and developers, and it’s definitely a good sign for the health of the framework.

Wrap up

In this post I shared with you a quick introduction to the Hapi.js framework and its peculiarities.
If you’ve enjoyed it, please let me know what you would be interested in, so I’ll be able to prepare other posts on the topics you prefer.
The next one will probably be about the different template systems (Handlebars, React…), about universal applications (or isomorphic applications, as you may prefer to call them), or a test drive of a few plugins to use in Hapi.js web applications.

Anyway, I’ll wait for your input as well 😀