Benchmarking Falcor.js

In the past few weeks I've been figuring out how to solve a problem of chatty communication between client and server; we are close to the release and I already have a few ideas in mind to improve the product I'm working on right now.

Thinking about possible solutions, I searched online for a few different approaches that could help me solve the problem. At the beginning I was thinking of refactoring the REST APIs in order to bring them closer to what the UI needs (the backend-for-frontend pattern), but then I remembered I had bookmarked a few projects that could help me implement this pattern without reinventing the wheel.
So on Friday night I started to investigate Falcor.js, a library made by Netflix that tries to solve exactly my problem, and honestly they solve it in a really smart way.

Let's imagine you have a client that needs to call several REST end points in order to aggregate the data for a specific view. Independently of how large the amount of data to retrieve is, you have to bear in mind a few other drawbacks like:

  • latency
  • no caching of specific data, because they are real-time or tied to a specific user
  • the amount of data to display in a view (maybe without a paging API available to split it across several views)
  • pre-flight calls for CORS end points
  • internet connection speed (mobile vs home vs office)
  • content negotiation

All of these, and probably many others, can cause a bad user experience, and quite often we postpone addressing these problems until after the release of our online products.
So if we can somehow minimise their impact, we can provide a better user experience; our products become faster, more engaging and more successful with our users, even when they have a poor connection on their favourite device.

Here is where Falcor.js comes to the rescue: with Falcor we can minimise the amount of calls to specific end points, because the library leverages the idea of a unique data model that our clients can query asynchronously via the Falcor APIs, which optimise the amount of queries to it under the hood.
The query system not only lets you fetch data from the Falcor model, it also lets you fetch only the data you need for a specific view.

[Image: services diagram, from the Falcor.js website]

Looking at the image above you can immediately spot the problem that Falcor solves brilliantly with a unique model.
The Router aggregates data from different end points, so the client can request exactly what it needs in a nice and simple way.

Let’s try to explain how the system works first:

[Image: Falcor end-to-end architecture, from the Falcor.js website]

The client creates a connection to a JSON model using the HttpDataSource provided by the Falcor.js client library, which opens the communication between the client and the backend data model.
On the backend a Falcor Router is created, and inside this instance you describe the different queries available and the data each route returns.
This way each client downloads only the data it effectively needs to render the page and not an element more (sometimes drastically reducing the amount of data to load).
Also, each client doesn't need to interrogate ad hoc services for ad hoc data created just for it; every client queries the same data model created for the entire application, simply hitting different routes.
As you can see from the diagram above, this part sits between the client and your backend system, creating a middleware that can be used only for specific end points or for the entire application.
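To make this concrete, here is a minimal sketch of both sides, based on the examples in the Falcor documentation (route and field names are hypothetical). The client builds a Model on top of an HttpDataSource, while the server exposes a Router behind a single /model.json end point via falcor-express:

// client.js — a minimal sketch, route and field names are hypothetical
var model = new falcor.Model({
   source: new falcor.HttpDataSource('/model.json')
});

// ask only for the fields this view needs
model.get('catalogue[0..4].title').then(function (response) {
   console.log(response.json.catalogue);
});

// server.js — assumes express, falcor-express and falcor-router are installed
var express = require('express');
var falcorExpress = require('falcor-express');
var Router = require('falcor-router');

var app = express();

app.use('/model.json', falcorExpress.dataSourceRoute(function (req, res) {
   return new Router([{
      // matches paths like catalogue[0..4].title
      route: 'catalogue[{integers:indices}].title',
      get: function (pathSet) {
         // here you would aggregate the data from your existing REST end points
         return pathSet.indices.map(function (index) {
            return {
               path: ['catalogue', index, 'title'],
               value: 'Product ' + index // hypothetical data
            };
         });
      }
   }]);
}));

app.listen(3000);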

Another interesting characteristic of Falcor is how it optimises the queries to the data model: activating the batch mode, Falcor gathers all the queries made in a tick of your application and performs them together, possibly collapsing multiple roundtrips into a unique roundtrip!
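As a sketch of how that looks (the route is the hypothetical one from the example above), batching is enabled with a single call on the model:

var model = new falcor.Model({
   source: new falcor.HttpDataSource('/model.json')
}).batch();

// both requests happen in the same tick, so Falcor can collapse
// them into a single roundtrip to /model.json
model.getValue('catalogue[0].title').then(render);
model.getValue('catalogue[1].title').then(render);

(render here is just a placeholder for your view update function.)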

Last but not least, Falcor allows you to implement a paging mechanism when you iterate over elements inside lists. For instance, if you have an array of elements to display in your view but the APIs provided by the backend team don't include any paging parameter, Falcor helps you via the query system, retrieving only a certain range of elements at a time.
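For example, with the hypothetical catalogue model above, a range in the query retrieves only the slice of the list you need:

// fetch only the first 10 items (and only the fields listed),
// even if the backend API has no paging parameters at all
model.get('catalogue[0..9]["title", "price"]').then(function (response) {
   console.log(response.json);
});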

So after watching the few videos available on Falcor and reading all the documentation on the website, I started to experiment directly on the chatty issue I've got in my project.
I can't really share the code I've used for my spike, mainly because I'm using the product end points, but I can share some benchmarks and thoughts for now.

Currently the catalogue I'm working with is composed of 5 calls to 5 different end points in order to display the catalogue inside the view.
There is also a timer: every few minutes these 5 calls are performed again in order to retrieve new data that could refresh the products available to the user inside the page.
Obviously behind the scenes the application is doing several other calls in order to synchronise its status and perform some checks, but for now let's focus only on the catalogue part.
I extracted the 5 calls into a simple HTML page replicating the current situation in the product, and gathered some metrics to understand what the starting point was and how much Falcor would be able to improve the situation.
These are the benchmarks I've gathered for 5 XHR calls to different end points, when the data are cached and when they aren't:

NO CACHED XHR (after 5 calls):
average time on 10 tests ~678ms
average kb on 10 tests ~111kb

CACHED XHR (after 5 calls):
average time on 10 tests ~126ms
average kb on 10 tests ~3.1 kb

Then I implemented a Node.js gateway where, using the Falcor Router and the Falcor model, I was able to query only the data I needed (instead of downloading entire JSON payloads full of information not needed for that specific view). I also optimised the queries to the Falcor model by batching requests. These are the results with Falcor in place in front of the product APIs:

NO CACHED WITH FALCOR (Falcor optimises the 5 calls down to 2):
average time on 10 tests ~32ms
average kb on 10 tests ~5 kb

CACHED WITH FALCOR (Falcor optimises the 5 calls down to 2):
average time on 10 tests ~10ms
average kb on 10 tests ~406 bytes

This is a really good performance boost, considering that the amount of data we downloaded by default is now optimised via Falcor queries: several roundtrips are saved thanks to the request batching natively available in the Falcor APIs, and the asynchronous implementation for fetching data from the unique model is very pleasant to work with.

I have to admit I was really surprised, and I believe that sooner rather than later I'll introduce Falcor.js in production, because it can really simplify the pipeline of work and provide great benefits to your applications, in particular when you are targeting low end devices like in my project.
Another thing that would be good to invest time on (maybe another weekend :P) is GraphQL, the main "competitor"; maybe I'll be able to do a 1 to 1 comparison based on the same problem and see which library fits a specific problem best.

Meanwhile, if you want to start playing with it I recommend the Falcor website, and I encourage you to keep an eye on this blog because I'd like to share more technical information about Falcor in another post, which will show you how easy it is to work with.


How to Dockerize your Node application

Today I'd love to share how to wrap a Node microservice or monolithic application inside a Docker container.
I assume that you've already installed Docker, boot2docker and Node on your machine.
If you haven't, please check the official Docker page and Node page and, after picking your operating system, follow the instructions.

I've created a simple Node application with Hapi.js that, once called, returns the classic "hello world" message as response. Obviously you can have a way more complex application as well; my main purpose here is talking about how to set up Docker with your Node.js application.
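For reference, a minimal sketch of such an app (using the pre-v17 Hapi API that was current at the time of writing) could look like this:

// index.js — a minimal "hello world" sketch with Hapi.js
const Hapi = require('hapi');

const server = new Hapi.Server();
server.connection({ port: 8080 });

server.route({
   method: 'GET',
   path: '/',
   handler: (request, reply) => reply('hello world')
});

server.start((err) => {
   if (err) throw err;
   console.log('Server running at:', server.info.uri);
});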

In order to wrap your application inside a Docker container you need to create a Dockerfile; this file basically defines the setup of the environment where your application will run.
Checking the containers currently available on Docker Hub, you can find the official Node container page, which allows you to pick the right Node version for your application.

[Screenshot: official Node repository on Docker Hub]

As you can see there are many different Node containers available. You could also use other solutions, like Ubuntu, Fedora or CentOS as the base OS where your Node application is going to run, but I preferred to use the official one because my server side application doesn't require any particular configuration.
For this example we are going to use 5-onbuild, which basically does the "dirty job" for us; obviously this is a sample application, but you can customise your container as you prefer.
What the onbuild version does is basically the following (sketched right after this list):

  • creating a directory for the application inside the container
  • copying package.json into that folder
  • running the "npm install" command
  • running the "npm start" command
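For reference, this is roughly what the node:5-onbuild image declares (a simplified sketch; check the official Dockerfile on Docker Hub for the exact content):

FROM node:5
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ONBUILD COPY package.json /usr/src/app/
ONBUILD RUN npm install
ONBUILD COPY . /usr/src/app
CMD [ "npm", "start" ]

The ONBUILD instructions are triggers: they run when you build your own image on top of this one, which is exactly the "dirty job" listed above.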

It's fundamental to define all the dependencies and the start script inside package.json, otherwise the application is not going to work inside the container.
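As a sketch, a package.json like this (name and versions are hypothetical) is enough for the onbuild container to work:

{
   "name": "docker-hapi",
   "version": "1.0.0",
   "dependencies": {
      "hapi": "^13.0.0"
   },
   "scripts": {
      "start": "node index.js"
   }
}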

Inside our Dockerfile we are going to write:

FROM node:5-onbuild
EXPOSE 8080

So basically we are inheriting all the steps described above from the onbuild Dockerfile, plus we are exposing port 8080, which is the one used by our Node application:

const server = new Hapi.Server();
server.connection({ port: 8080 });

It's a best practice for any Dockerfile to always start with a FROM command, reusing images already created by the community on Docker Hub; so remember, when you want to try something different, always check what's available on Docker Hub first.

Good, so now let's package our container and check that it works correctly.
In order to build the container we'll need to run the following command:

docker build -t <username>/<applicationName> .

Basically here we are telling Docker to dockerize the entire application folder (the dot at the end of the command), and the container will be called <userName>/<applicationName>.
This naming is another Docker best practice: you can call your container whatever you prefer, but a common pattern is your Docker username, then a slash ("/"), then your application name:

docker build -t lucamezzalira/docker-hapi .

Now let's check that it works:

docker run -p 49160:8080 -d lucamezzalira/docker-hapi

With this command we are running our container, and therefore our Node application, mapping port 8080 of the container to port 49160 of the host.
You can easily check whether the application is working correctly by typing docker ps in your console; you should see something like this:

[Screenshot: docker ps output listing the running container]

Now, because I'm working on a Mac, I need to retrieve the boot2docker IP and check whether on port 49160 I'm able to see my hello world application up and running.
In order to do that I'll run the command:

boot2docker ip

That should return the external IP where my application is running. You can also see the container details (including its internal IP) using the command:

docker inspect <containerID> (for example: docker inspect afb5810152f6)

The container ID is easily retrievable via docker ps (first column in the picture above).
This command will return a JSON document with a lot of information related to the container.

So now that I have the IP where my application is running and the related port, I can type the address in my browser and see the result!

[Screenshot: the hello world response in the browser]

Obviously you can also map that IP in your /etc/hosts file, adding something more meaningful like boot2docker or whatever name you think is more appropriate!
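For example, assuming boot2docker ip returned 192.168.59.103 (a common default, use whatever the command returns for you), the /etc/hosts entry could look like this:

192.168.59.103   boot2docker

After that you can reach the app at http://boot2docker:49160 instead of remembering the IP.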

If you want to download the container I've prepared for this post from Docker Hub, just type:

docker pull lucamezzalira/docker-hapi

This is a very basic introduction to the Docker world and Node. I'm working on a sample microservices application that will involve a few interesting concepts and patterns, so keep an eye on this blog 😉

Hapi.js and MongoDB

During the Fullstack conference I saw a small project made with Hapi.js in one of the talks, so I decided to invest some time working with Hapi.js in order to investigate how easy it is to create a Node.js application with this framework.

I have to admit, this is a framework that is really well done, with a plugin system that gives you a lot of flexibility when you are creating your server side applications, and with a decent community that provides a lot of useful information and plugins in order to speed up project development.

When I started to read the only book available on this framework I was impressed by its simplicity and the thought behind the framework, but more importantly I was impressed by where Hapi.js was used for the first time.
The first enterprise app made with this framework was released during Black Friday on the Walmart ecommerce platform. The results were amazing!
In fact one of the main contributors to this open source framework is Walmart Labs, which means a big organisation with real problems to solve; definitely a good starting point!

Express vs Hapi.js

If you are asking why not Express, I can reply with a few arguments:

  • Express is a super light and general purpose framework that works perfectly for small to medium size applications.
  • Hapi.js was built on top of Express at the beginning, but then they moved away in order to create something more solid with more built-in functionality; a framework should speed up your productivity, not just give you a structure to follow.
  • Express is code based, Hapi.js is configuration based (with a lot of flexibility of course)
  • Express uses middleware, Hapi.js uses plugins
  • Hapi.js is built with testing and security in mind!

Hapi.js

Let's start by saying that working with this framework is incredibly easy once you understand the few concepts you need to know in order to create a Node project.

I created a sample project where I've integrated a Mongo database, exposing a few end points in order to add a new document inside a Mongo collection, update a specific document, retrieve all documents available inside the database and retrieve all the details of a selected document.

Inside the git repo you can also find the front end code (books.html in the project root) in vanilla JavaScript, mainly so that, whether you are passionate about React or Angular or any other front end library, you'll be able to understand the integration without any particular framework knowledge.

What I'm going to describe now is how I've structured the server side code with Hapi.js.

In order to create a server in Hapi.js you just need a few lines of code:

let server = new Hapi.Server();
server.connection();
server.start((err) => console.log('Server started at:', server.info.uri));

As you can see in the example (src/index.js), I've created the server in the first few lines after the require statements, and I started the server (server.start) after the registration of the MongoDB plugin; but one step at a time.

After creating the server object, I've defined my routes with the server.route method.
The route method allows you to set just one route with an object, or several routes passing an array of objects.
Each route should contain the method parameter, where you define the HTTP method used to reach the path; you can also set a wildcard (*) so any method will be accepted for that path.
Obviously you then have to set the route path; bear in mind it always has to start with a slash (/) in order to define the path correctly.
The path also accepts variables inside curly brackets, as you can see in the last route of my example: path: '/bookdetails/{id}'.

Last but not least, you need to define what's going to happen when a client requests that particular path, specifying the handler property.
Handler expects a function with 2 parameters: request and reply.

This is a basic route implementation:

{
   method: 'GET',
   path: '/allbooks',
   handler: (request, reply) => { ... }
}

When you structure a real application, and not an example like this one, you can wrap the handler property inside the config property.
Config accepts an object that will become your controller for that route.
So as you can see it's really up to you to pick the right design solution for your project: it could be inline because it's a small project or a PoC, or an external module because you have a large project where you want to properly structure your code in an MVC fashion (we'll see that in the next blog post ;-)).
In my example I created the config property also because you can then use an awesome library called Joi in order to validate the data received from the client application.
Validating data with Joi is really simple:

validate: {
   payload: {
      title: Joi.string().required(),
      author: Joi.string().required(),
      pages: Joi.number().required(),
      category: Joi.string().required()
   }
}

In my example, for instance, I check that I receive the required fields (required()) and in the right format (string() or number()).
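Putting the pieces together, here is a sketch of how the addbook route could look with the validation wired into the config property (the handler body is omitted):

server.route({
   method: 'POST',
   path: '/addbook',
   config: {
      handler: (request, reply) => { /* save the book here */ },
      validate: {
         payload: {
            title: Joi.string().required(),
            author: Joi.string().required(),
            pages: Joi.number().required(),
            category: Joi.string().required()
         }
      }
   }
});

If the payload doesn't match the schema, Hapi replies with a 400 error before your handler is even called.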

MongoDB plugin

Now that we have understood how to create a simple server with Hapi.js, let's dive into the Hapi.js plugin system, the most important part of this framework.
You can find several plugins created by the community, and on the official website you can also find a tutorial that explains step by step how to create a custom plugin for Hapi.js.

In my example I used the hapi-mongodb plugin, which allows me to connect a Mongo database with my Node.js application.
If you are more familiar with Mongoose you can always use the Mongoose plugin for Hapi.js.
One important thing to bear in mind about a Hapi.js plugin is that once it's registered it will be accessible from any handler method via request.server.plugins; it's injected automatically by the framework in order to facilitate the development flow.
So the first thing to do in order to use the mongodb plugin in our application is to register it:

server.register({
   register: MongoDB,
   options: DBConfig.opts
}, (err) => {
   if (err) {
      console.error(err);
      throw err;
   }

   server.start((err) => console.log('Server started at:', server.info.uri));
});

As you can see, I just need to specify which plugin I want to use in the register method, along with its configuration.
This is an example of the configuration you need to specify in order to connect your MongoDB instance to the application:

module.exports = {
   opts: {
      "url": "mongodb://username:password@id.mongolab.com:port/collection-name",
      "settings": {
         "db": {
            "native_parser": false
         }
      }
   }
};

In my case the configuration is an external object where I specify the Mongo database URL and the settings.
If you want a quick and free way to use MongoDB in the cloud, I can suggest mongolab: when you register you get 500mb of data for free per account, so for testing purposes it's really the perfect cloud service!
Last but not least, once the plugin registration has happened I can start my server.

Then, when I need to use the plugin inside any handler function, I can retrieve it this way:

var db = request.server.plugins['hapi-mongodb'].db;

In my sample application I covered a few cases: adding a new document (addbook route), retrieving all the books (allbooks route) and retrieving the details of a specific book (bookdetails route).
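For instance, here is a sketch of how the allbooks handler could look, with the db injected by the plugin (Boom is the error library used in the update example below):

{
   method: 'GET',
   path: '/allbooks',
   handler: (request, reply) => {
      const db = request.server.plugins['hapi-mongodb'].db;

      db.collection('books').find().toArray((err, books) => {
         if (err) return reply(Boom.internal('Internal MongoDB error', err));
         return reply(books);
      });
   }
}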

[Screenshot: the sample books application in the browser]

If you want to update a record in Mongo, remember to prefer the update method over the insert method because, if correctly handled, update will check inside your database whether there are any other occurrences: if there is one it will update that occurrence, otherwise it will create a new document inside the Mongo collection.
Below is an extract of this technique: in the first object you specify the key to search an item by, then the object to replace it with, and the last object you need to add is an options object with upsert set to true (by default it's false), which allows Mongo to create the new document if it doesn't exist in your collection:

db.collection('books').updateOne({"title": request.payload.title}, dbDoc, {upsert: true}, (err, result) => {
    if(err) return reply(Boom.internal('Internal MongoDB error', err));
    return reply(result);
});

SAMPLE PROJECT GITHUB REPOSITORY

Resources

If you are interested in going deeper into Hapi.js, I'd suggest taking a look at the official website or at the book currently available.
An interesting piece of news is that a few more books about Hapi.js will be published soon, which usually means that Hapi.js is getting adopted by several companies and developers; definitely a good sign for the health of the framework.

Wrap up

In this post I shared with you a quick introduction to the Hapi.js framework and its peculiarities.
If you've enjoyed it, please let me know what you would be interested in, so I'll be able to prepare other posts on the topics you prefer.
Probably the next one will be on the different template systems (Handlebars, React…), or about universal applications (or isomorphic applications, as you may prefer to call them), or a test drive of a few plugins to use in Hapi.js web applications.

Anyway I’ll wait for your input as well 😀

2015 the birth of London JS community

We are very close to the end of 2015, and it's time to take a look back at the first 11 months of this year, understanding what I've accomplished and trying to set some resolutions for the new year as well!

2015, for my professional life, was a year of changes: I moved to a new job, I learnt new programming paradigms (reactive and functional programming), I met a lot of incredible and talented people and many, many other things.

Probably the biggest changes (or old loves?) are the creation of the London JavaScript community and my return as a speaker at technical talks after a couple of years of inactivity.

Last May I started a new adventure, creating a new JavaScript community in London; I spent a month understanding how and why I could do it properly.
In the past I had the opportunity to be a staff member of the largest ActionScript Italian community and it was really a great experience.
I was young, with a lot of passion and not much experience, and the community was the perfect place for growing properly, meeting new friends, learning from different people, trying to solve technical puzzles every day and, why not, having fun!

Starting this experience again in a new country was a real challenge for me, but this time I had way more experience than the first time, clear ideas and a touch of madness.

In 2013 I had the opportunity to spend a few months in Silicon Valley, and I went to several meetup events and conferences from San Francisco to San Jose.
What really impressed me about that environment was the vibe I was able to breathe at any of these events.
People from all over the world helping each other, facilitating connections between individuals, creating opportunities and sharing knowledge.
I was astonished by this way of doing community, and when I moved from Italy to London I spent the first 18 months going to different events trying to find the same experience and vibe.

When I started the London JavaScript community, my main goal was definitely to recreate that vibe in this great city, where some of the best developers in the world are working on interesting projects with lots of challenges to solve.
That's why I decided to do it: to fill a gap in the London communities, where there were many strong vertical framework communities (like the React and Angular ones), but not a strong general JavaScript community.

After 8 months I have to admit I'm very happy about the results this community has been able to generate:

  • ~1400 members
  • 7 live events
  • 1 half-day code lab
  • 2 webinars
  • an average of 60 people per event
  • Listed as O’Reilly Community partner, Google Community and Skillsmatter Community
  • Community Partner of great events like: FullStack conference and Dot.JS conference
  • more than 1900 tweets & retweets with more than 500 followers

I had also the opportunity to meet great speakers, amazing developers and passionate people.

What this community is giving me back is really invaluable and really hard to find in similar activities; I'm really grateful for every single moment spent organising these events.

What about the resolutions for 2016?

Recently several companies started asking me to organise meetup events in their offices, which means all the hard work done is paying the community back!

I contacted a few speakers from California, and next year I'm organising a couple of webinars with speakers from Silicon Valley and directly from the Google office in Mountain View.

I'm already in contact with a few great speakers ready to share tips and tricks with the community on different topics like webpack, jspm, Angular, WebRTC, React and ES6! Keep an eye on the community page in order to discover more about these events.

Last but not least, I'm working right now on the official community website: it will be a SPA with social integrations and the possibility to subscribe to a technical newsletter where I'm going to share the best articles, tutorials and events on the web.

Obviously these are already on the roadmap, but I'm really open to hearing what the JavaScript community is looking for! So don't waste this opportunity: share your ideas on how to grow OUR community and make it more special!

ES2015 Destructuring assignment: by value or reference?

This week I organised a meetup on ES2015 in my community, where the speaker presented his favourite features of the language.

Right after the talk I had a chance to talk with my best friend, who was asking whether destructuring assigns values by copying the value to the new variable or by reference when you work with an Object or Array.
Because I hadn't had a chance to work with this new ES2015 feature before, I put together a quick example just to get an answer to this question.

It turns out that, for Objects and Arrays, destructuring copies the reference rather than cloning the value (primitives, as in any JavaScript assignment, are copied by value).
That means any time you mutate a value inside a variable that contains an Object or Array assigned via destructuring, the original Object or Array will be affected too, as you can see in this simple example: destructuring example ES2015

[Image: destructuring ES2015 example]
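Here is a minimal sketch of that behaviour:

const book = { info: { title: 'ES2015' }, pages: 100 };
const { info, pages } = book;

info.title = 'ES2016';          // mutating through the shared reference...
console.log(book.info.title);   // 'ES2016' — the original object is affected

let { pages: count } = book;    // primitives are copied by value instead
count = 200;
console.log(book.pages);        // still 100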

So when you work with destructuring, bear in mind to pay a lot of attention when you mutate a value inside your destructured Objects and Arrays!

Automate, automate and automate!

Recently I've spent several days finding the best way to set up an automation process for JavaScript developers, and I investigated several tools strictly related to the JavaScript world.
These tools allow you to save a lot of time when you perform repetitive, and sometimes boring, tasks in order to test your HTML5 game, website or web app.
In this post I'd like to share with you the tools I've found and used to create a full-stack JavaScript automation pipeline for any front end developer or team.

Let's see what a JavaScript developer or team could automate on their machine to have better code quality and to save a lot of time when working on their own library or projects.
For this purpose, in my pipeline, I've used different CLI tools like Mocha, Grunt, Yeoman, Blanket and Plato.
Each of these tools performs a specific task, but combined together they will provide your projects with:

  • TDD, BDD and unit testing
  • code coverage
  • dependencies management
  • (custom) project templates
  • static analysis
  • task automation (live reloading, deploying to a localhost folder, file concatenation…)

These are only a few of the multiple options you have "playing" with these tools, but let's go a little more in depth and see which tool can effectively help to accomplish each item in the list above.

TDD, BDD and UNIT TEST

Searching the web on this topic you'll find really a lot of good JavaScript libraries; the one I decided to use is Mocha, because it is very well integrated with Blanket (code coverage) and Karma (test runner), and because it runs on Node.js, so you can create your libraries and test them in pure JavaScript without any need to go through HTML pages. If you need to test JavaScript code that will run only inside the browser, you can fake the window object with libraries like jsdom integrated in your test cases.
Mocha allows you to work in BDD, TDD and unit testing styles, you can easily mix in several assertion libraries, and writing async tests becomes really, really easy.
Other libraries that could be useful are Jasmine or QUnit.
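Just to give you a taste, a minimal BDD-style Mocha test looks like this (using Node's built-in assert module):

var assert = require('assert');

describe('Array#indexOf', function () {
   it('returns -1 when the value is not present', function () {
      assert.equal([1, 2, 3].indexOf(4), -1);
   });
});

Running mocha from your CLI will pick it up from the test folder and print the report.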

CODE COVERAGE

As I wrote before, I found an interesting library that works perfectly with Mocha: Blanket.js.
Blanket is a very simple and easy to use library, in particular when you have all your tests written as modules (Node.js style) instead of a mix of HTML and JS files.
Blanket works not only with Mocha but also with Jasmine and QUnit, so basically with the most famous testing libraries!
One thing I really appreciate about Blanket is the final output, which can be exported to an interactive HTML report where you can immediately recognise what isn't tested yet and jump from one file to another following the menu on the right side of the template.
Another one that seems quite interesting is Istanbul.js; I haven't tried it yet but it'll be the next one for sure, I've heard really good things about this library from other developers!

STATIC ANALYSIS

When you want to use a static analysis tool in your pipeline, one of the most popular is….
But I suggest giving Plato a try, in particular if you work alone or in a small team and you want to do a sanity check of your project.

[Screenshot: Plato report sample]
Plato, in fact, stores all the information about your code project locally in some JSON files, and you can navigate through the report directly from an HTML page created by the tool (above, a sample screenshot).
These stats are very interesting for checking the areas of improvement of your project, and in particular with these tools you can get immediate feedback on where your efforts should be focused in order to deliver a better product and be sure that maintenance won't cost too much later on.
Obviously you can also use more sophisticated solutions like SonarQube, installing it on your server with the JavaScript plugin and running your static analysis every time a developer pushes their code to git or mercurial.
It always depends on the dimension of the project and the team; my suggestion is to start with Plato on a small project and then, when you see the real value, move to SonarQube, even if you are a small organisation.

PROJECT TEMPLATE

When you talk about templates for JavaScript it's impossible to forget Yeoman.
Yeoman is a scaffolding tool that allows you to create skeleton projects with your favourite JS library, ready to use.
I really suggest using these kinds of tools, because they facilitate the start of new projects and at the same time give you some standards inside your company and across your projects.
There are several generators ready to use and searchable from the official website; if you can't find what you're looking for, it's very easy to use Node.js and the APIs already built into Yeoman to create your own generator with the functionality that your company or projects need.

TASKS AUTOMATION and DEPENDENCIES MANAGEMENT

This is my favourite part: I found in Grunt a really good tool to automate more or less everything that is not strictly related to writing the project code inside my IDE!
Grunt is the glue that assembles all the tools explained above into a pipeline, launched easily with one line in your CLI: "grunt".
The community is really huge and you can find more or less everything for Node.js or plain JavaScript: from minifying, uglifying and concatenating your JS files, to compiling your LESS or SASS files, converting your ES6 code to ES5, running static analysis, or pushing your code directly to git, simply with a grunt task.
One thing I really like about Grunt is that you can easily scale the way you work with it, using a yaml file and different JS files (one per task) and assembling them at runtime.
This allows you to create common tasks for the whole company and at the same time have the freedom to add custom automation for each project and/or department of your company.
I really suggest taking a look at the official website, where you can find a lot of technical information, and then starting to automate your daily JavaScript workflow.
Obviously if you're not working with JS you can still use Grunt in combination with your favourite programming language or technology like Haxe, Dart, TypeScript, CoffeeScript or Adobe AIR; the flexibility of this tool is really impressive!
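To give you an idea, here is a minimal Gruntfile sketch that concatenates and then minifies the JS files of a project (it assumes grunt-contrib-concat and grunt-contrib-uglify are installed via npm; the paths are hypothetical):

// Gruntfile.js
module.exports = function (grunt) {
   grunt.initConfig({
      concat: {
         dist: {
            src: ['src/**/*.js'],
            dest: 'build/app.js'
         }
      },
      uglify: {
         dist: {
            files: { 'build/app.min.js': ['build/app.js'] }
         }
      }
   });

   grunt.loadNpmTasks('grunt-contrib-concat');
   grunt.loadNpmTasks('grunt-contrib-uglify');

   // typing "grunt" in your CLI runs both tasks in order
   grunt.registerTask('default', ['concat', 'uglify']);
};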

An alternative to Grunt could be Gulp, where the main difference is that Grunt favours configuration over code and Gulp exactly the opposite.
The Gulp community is growing day by day and it's interesting to see the different approaches of these two great task runners; probably in the long term the Gulp approach will be more successful, but for now Grunt is exactly what I was looking for.

Conclusion

As you've read, the JS world has really a lot of useful tools that will save a lot of time during your daily job as a developer or company.
The mix of these tools allows you to create a pipeline in pure JavaScript, and they can really improve your code quality and your flow, giving you standards inside your projects and a solid workflow that will be able to scale your company or projects in an easy and professional way.
Obviously there aren't only the few tools and libraries that I've tried; there are many others out there that I'd like to mention, like PhantomJS or Buster or Lineman and so on. But for the next five minutes, before going back to what you were doing before reading this post, try to think about how to improve your flow; trust me, you will be surprised by how much more productive you'll become by introducing them into your routine.

Open letter to @Adobe and @Adobe Air: the hidden part…

I don't know if you have already read the open letter that Gary wrote recently discussing the arguable marketing choices made by Adobe on Adobe AIR, and previously about the Flash Platform in general, but I suggest you start from there before reading this post.

If you know me personally, or you follow me on social networks or read this blog, you should know that I'm a big fan of the Flash Platform, in particular of Adobe AIR.
I'm very committed to delivering amazing and cutting edge projects made with this fantastic technology, and I'm involved in the community to spread the word about AIR.
From a developer perspective I'm 110% with Gary and the community: Adobe AIR is an amazing technology with a lot of success behind it, in terms of developers and companies that adopted it and in terms of the number of apps released across different platforms during the past few years.
AIR, in my opinion, deserves a better commitment from the company that (partially) created it.
Obviously I also agree that there isn't any real competition between ActionScript 3 and HTML5 (read: JavaScript); what you can do with HTML5 today is what a Flash developer could do 5 years ago, more or less.
But you can't approach a discussion like this talking only from a developer perspective; you and we should look at it from different angles too.
What I'm asking you is to follow me until the end of this post; then you can send me an email and ask me if I've gone totally crazy, or insult me with a comment, no worries 😀

I have been going to Adobe MAX since 2006, the first MAX organised by Adobe, and I remember quite clearly that a few MAXes ago, during both keynotes, nobody said anything related to new Flash Platform improvements or plans for the future of the platform.
At the beginning I was so angry, and I literally spent hours on the phone talking with Adobe people, because 100% of my business was based on this technology and they couldn't really think HTML5 was better than the Flash Platform, in particular in 2011/2012 when the coolest websites, RIAs and desktop applications were realised with Flash.
But have you ever tried to look at this situation from a different point of view? Let's assume for a few minutes that we are inside a meeting room at Adobe.
You have an amazing technology that millions of people are using to innovate and create the best software in the world, but you are not earning what you expect from it. Don't you believe me? Take a look at this chart (ADBE):

ADBE stock 2011/2012


This is the graph of the Adobe stock (ADBE) from January 2011 to December 2012: the value of the stock was for a few months (close to October, when Adobe MAX usually takes place) lower than $28 and never greater than $35.
Great. Obviously our white collar friends aren't so interested in which is the best technology on the market or how many people are using it; they care about numbers, how to increase the profit of the company and how to make the analysts happy, in order to have a better position in the stock market and with the shareholders.
These results weren't so good for Adobe. In fact, if you remember well, a lot of people started to leave the company, a lot of teams were closed in the USA and Europe to move development to India or to places where developers are cheaper, and Adobe started its commitment to the big new trend of web technology products, like the Edge family for instance.
Everybody now knows the rest of the story about the new products and how they are trying to improve the way we create websites and apps, and I guess the majority of us are not so happy about that.
But let's take a look at some numbers again: the next graph shows the Adobe stock value from 2012 to March 2014, basically the period in which Adobe stopped pushing the Flash Platform and started to increase its investment in designer products:

ADBE stock 2012/2014

I think it's quite clear that the decision to start selling their products in the cloud (a first big mover in the IT panorama), the decision to try to improve a new technology like HTML5 with new tools, and so on, created around Adobe exactly what the management was looking for!
I agree that there are many ways to make money, but from a metrics perspective they are going in the right direction and they have made the right choices for now.
I'm not an economist and I can't say whether this strategy will pay off in the long term, but for sure in the short term they have arrived where they wanted to be.

With this post I'm not trying to defend Adobe, but after many years of the Flash Platform being in this status I started to leave my angry mood behind and to ask myself why they took this decision. Honestly I can't say whether this is the only reason that drove Adobe to these big changes, but excluding the technical side it's the only way I can see this thing, and now most of their decisions finally make sense.
All the comments I've read in the past, and also in these days after Gary's letter to Adobe, are completely true; but often, as developers, we forget that it's not just a design pattern or a performance optimisation that can save the world: marketing and the market are the real drivers, and in front of them even a big corporation like Adobe can be defeated.

ADBE stock 2009/2014


UPDATE FROM ADOBE

Chris, the new product manager of Adobe AIR, replied to Gary's open letter; you can read the answer on the official blog.