Benchmarking Falcor.js

Over the past few weeks I've been figuring out how to solve a problem of chatty communication between client and server in the product I'm working on right now; we are close to release, and I already have a few ideas in mind on how to improve it.

Thinking about possible solutions, I searched online for different approaches that could help me solve the problem. At first I considered refactoring the REST APIs in order to bring them closer to what the UI needs (the backend-for-frontend pattern), but then I remembered I had bookmarked a few projects that could help me implement this pattern without reinventing the wheel.
So on Friday night I started to investigate Falcor.js, a library made by Netflix that tries to solve exactly this problem, and honestly they solve it in a really smart way.

Let's imagine you have a client that needs to call several REST end points in order to aggregate the data for a specific view; independently from how large the amount of data to retrieve is, you have to bear in mind a few other drawbacks like:

  • latency
  • no caching of specific data because they are real time data or tied to a specific user
  • amount of data to display in a view (maybe without a paging API available to split them in several views)
  • pre-flight calls for CORS end points
  • internet connection speed (mobility vs home vs office)
  • content negotiation

All of these, and probably many others, can cause a bad user experience, and quite often we postpone addressing these problems until after the release of our online products.
So if we can somehow minimise their impact, we can provide a better user experience; our products will be faster, more engaging and more successful with our users, even when they have a poor connection signal on their favourite device.

Here is where Falcor.js comes to the rescue: with Falcor we can minimise the number of calls to specific end points, because the library leverages the idea of a single data model that our clients can query asynchronously via the Falcor APIs, optimising the number of queries to it under the hood.
The query system allows you not only to fetch data from the Falcor model but also to retrieve only the data you need for a specific view.

services-diagram
From Falcor.js website

Looking at the image above you can immediately spot the problem that Falcor solves brilliantly with a single model.
The Router aggregates data from different end points, so the client can request exactly what it needs in a nice and simple way.

Let's first explain how the system works:

falcor-end-to-end
From Falcor.js website

The client creates a connection to the JSON model using the HttpDataSource provided by the Falcor.js client library, which starts the communication between the client and the backend data model.
On the backend a Falcor Router is created, and inside this instance we describe the different queries available and the data each route returns.
This way each client downloads only the data it effectively needs to render the page and not an element more (sometimes drastically reducing the amount of data to load).
Also, each client won't need to interrogate ad hoc services that return ad hoc data created just for it; it will query the same data model created for the entire application, simply hitting different routes.
As you can see from the diagram above, this part sits between the client and your backend system, creating a middleware that can be used for specific end points only or for the entire application.
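
To make this more concrete, here is a minimal sketch of both sides, assuming an Express server with the falcor-express and falcor-router packages on the backend and the falcor-http-datasource package on the client; the products route and the fetchProducts helper are hypothetical:

// server.js: a Falcor Router exposed as a JSON model on /model.json
const express = require('express');
const falcorExpress = require('falcor-express');
const Router = require('falcor-router');

const app = express();

app.use('/model.json', falcorExpress.dataSourceRoute(() =>
   new Router([{
      // matches e.g. products[0..4]["title","price"]
      route: 'products[{integers:indices}]["title","price"]',
      get(pathSet) {
         const fields = pathSet[2]; // the keys the client actually asked for
         return fetchProducts(pathSet.indices) // hypothetical call to the real end points
            .then((items) => pathSet.indices.reduce((values, idx, n) => {
               fields.forEach((field) => values.push({
                  path: ['products', idx, field],
                  value: items[n][field]
               }));
               return values;
            }, []));
      }
   }])
));

app.listen(3000);

// client.js: querying the model through the HTTP data source
const falcor = require('falcor');
const HttpDataSource = require('falcor-http-datasource');

const model = new falcor.Model({ source: new HttpDataSource('/model.json') });

// only the fields needed by the view cross the wire
model.get('products[0..4]["title","price"]')
   .then((response) => console.log(response.json.products));

The interesting part is that the router only receives the paths the client requested, so only those fields are fetched from the underlying services.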

Another interesting characteristic of Falcor is how it optimises the queries to the data model: activating batch mode, Falcor gathers all the queries to a specific route within a tick of your application, performs them together and, where possible, collapses the requests into a single roundtrip instead of multiple roundtrips!
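
In code, activating batch mode is a one-liner on the model (a sketch, reusing the hypothetical model from the example above):

// a batched model collapses the gets issued in the same tick
// into a single request to /model.json
const batched = model.batch();

batched.get('products[0].title');  // }
batched.get('products[1].title');  // }  merged into one roundtrip
batched.get('products[2].title');  // }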

Last but not least, Falcor allows you to query the APIs with a paging mechanism when you are iterating over elements inside lists. For instance, if you have an array of elements to display in your view but the APIs provided by the backend team don't include any paging parameter, Falcor helps you via the query system, retrieving only a certain amount of elements at a time.
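
For example, with the hypothetical products route above, the ranges in the path act as a simple paging mechanism even when the backend has none:

// "page" 1: only items 0..4 are materialised
model.get('products[0..4].title')
   // "page" 2, requested on demand (e.g. when the user scrolls)
   .then(() => model.get('products[5..9].title'));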

So after watching the few videos available on Falcor and reading all the documentation on the website, I started to experiment directly on the chatty issue I have in my project.
I can't really share the code I used for my spike, mainly because I'm using the product end points, but for now I can share with you some benchmarks and thoughts.

Currently the catalogue I'm working with is composed of 5 calls to 5 different end points in order to display the catalogue inside the view.
There is also a timer: every few minutes these 5 calls are performed again in order to retrieve new data that could refresh the products available to the user inside the page.
Obviously behind the scenes the application is doing several other calls in order to synchronise its status and perform some checks, but for now let's focus only on the catalogue part.
I extracted the 5 calls into a simple HTML page replicating the current situation in the product, and gathered some metrics to understand what the starting point was and how much Falcor would be able to improve the situation.
These are the benchmarks I retrieved for 5 XHR calls to different end points, when the data are cached and when they aren't:

NO CACHE XHR (after 5 calls):
average time over 10 tests: ~678 ms
average size over 10 tests: ~111 KB

CACHED XHR (after 5 calls):
average time over 10 tests: ~126 ms
average size over 10 tests: ~3.1 KB
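
For reference, this is a rough sketch of how such a measurement can be gathered (the end point URLs here are hypothetical, and I'm using fetch for brevity):

// fire the 5 calls in parallel and record the total elapsed time
const endPoints = ['/catalogue', '/prices', '/stock', '/offers', '/recommendations'];

const start = performance.now();
Promise.all(endPoints.map((url) => fetch(url).then((res) => res.json())))
   .then(() => console.log('elapsed:', Math.round(performance.now() - start), 'ms'));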

Then I implemented a Node.js gateway where, using the Falcor Router and the Falcor model, I was able to query only the data I needed (instead of downloading entire JSON payloads full of information not needed for that specific view); I optimised the queries to the Falcor model by batching the requests, and these are the results with Falcor in place in front of the product APIs:

NO CACHE WITH FALCOR (Falcor optimises the 5 calls down to 2):
average time over 10 tests: ~32 ms
average size over 10 tests: ~5 KB

CACHED WITH FALCOR (Falcor optimises the 5 calls down to 2):
average time over 10 tests: ~10 ms
average size over 10 tests: ~406 bytes

This is a really good performance boost, considering that the amount of data we downloaded by default is now trimmed by the Falcor queries; several roundtrips are saved thanks to the request batching optimisation natively available in the Falcor APIs and the nice asynchronous implementation for fetching data from the single model.

I have to admit that I was really surprised, and I believe that sooner rather than later I'll introduce Falcor.js in production, because it can really simplify the pipeline of work and provide great benefits to your applications, in particular when you are targeting different low end devices as in my project.
Another thing that would be good to invest time on (maybe another weekend :P) is GraphQL, the main "competitor"; maybe I'll be able to do a 1 to 1 comparison based on the same problem and see which library fits a specific problem best.

Meanwhile, if you want to start playing with it, I recommend the Falcor website, and I encourage you to keep an eye on this blog because in another post I'd like to share more technical information about Falcor, which will show you how easy it is to work with.

 

How to Dockerize your Node application

Today I'd love to share how to wrap a Node microservice or monolithic application inside a Docker container.
I assume that you've already installed Docker, boot2docker and Node on your machine.
If you haven't, please check the official Docker and Node pages and, after picking your operating system, follow the instructions.

I've created a simple Node application with Hapi.js that, once called, returns the classic "hello world" message as a response. Obviously you can have a much more complex application as well; my main purpose here is to talk about how to set up Docker with your Node.js application.
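
For reference, a minimal sketch of such an app (based on the hapi API of the time, with server.connection and the reply interface):

const Hapi = require('hapi');

const server = new Hapi.Server();
server.connection({ port: 8080 });

server.route({
   method: 'GET',
   path: '/',
   handler: (request, reply) => reply('hello world')
});

server.start((err) => {
   if (err) throw err;
   console.log('Server running at:', server.info.uri);
});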

In order to wrap your application inside a Docker container you need to create a Dockerfile; this file basically defines the setup of the environment where your application will run.
Checking the Docker containers currently available on Docker hub, you can see the official Node container page, which allows you to pick the right Node version for your application.

Screen Shot 2016-04-03 at 17.01.57

As you can see there are many different Node containers available; you could also use other solutions like ubuntu, fedora or centOS as the base OS where your node application is going to run, but I preferred to use the official one because my server side application doesn't require any particular configuration.
For this example we are going to use 5-onbuild, which basically does the "dirty job" for us; obviously this is a sample application, and you can customise your container as you prefer.
What the onbuild version does is basically:

  • creating a directory of the application inside the container
  • copying package.json into that folder
  • running “npm install” command
  • running “npm start” command

It's fundamental to define inside package.json all the dependencies and the start script, otherwise the application is not going to work inside the container.
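
For this sample something along these lines is enough (the hapi version and the entry file name are assumptions):

{
   "name": "docker-hapi",
   "version": "1.0.0",
   "dependencies": {
      "hapi": "13.x"
   },
   "scripts": {
      "start": "node index.js"
   }
}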

Inside our Dockerfile we are going to write:

FROM node:5-onbuild
EXPOSE 8080

So basically we are inheriting all the steps described above from the onbuild Dockerfile, plus we are exposing port 8080, which is the one used by our Node application:

const server = new Hapi.Server();
server.connection({ port: 8080 });

It's a best practice for any Dockerfile to always start with a FROM command and to reuse the images already created by the community on Docker hub, so whenever you want to try something different, always check first what's available on Docker hub.

Good, so now let's package our container and check if it works correctly.
In order to build the container we need to run the following command:

docker build -t <username>/<applicationName> .

Basically here we are telling docker to dockerize the entire application folder (the dot at the end of the command), and the container will be called <userName>/<applicationName>.
This is another Docker best practice: potentially you can call your container whatever you prefer, but a common pattern is your Docker username, then a slash ("/"), then your application name:

docker build -t lucamezzalira/docker-hapi .

Now let's check if it works:

docker run -p 49160:8080 -d lucamezzalira/docker-hapi

With this command we are running our container, and therefore our Node application, mapping port 8080 of the container to port 49160 of the host.
You can easily check if the application is working correctly by typing docker ps in your console; you should see something like this:

Screen Shot 2016-04-03 at 17.29.48

So now, because I'm working on a Mac, I need to retrieve the boot2docker IP and check if on port 49160 I'm able to see my hello world application up and running.
In order to do that I'll run the command:

boot2docker ip

That should return the external IP where my application is running; you can also see the container's own IP using the command:

docker inspect <CONTAINER ID> (for example: docker inspect afb5810152f6)

The container ID is easily retrievable via docker ps (first column in the picture above).
This command returns a JSON document with a lot of information related to the container.

So now that I have the IP where my application is running and the related port, I can type the address into my browser and see the result!

Screen Shot 2016-04-03 at 17.36.00

Obviously you can also map that IP in your /etc/hosts file, adding something more meaningful like boot2docker or whatever name you think is more appropriate!

If you want to download from Docker hub the container I’ve prepared for this post just type:

docker pull lucamezzalira/docker-hapi

This is a very basic introduction to the Docker world and Node; I'm working on a sample microservices application that will involve a few interesting concepts and patterns, so keep an eye on this blog 😉

Hapi.js and MongoDB

During the Fullstack conference I saw a small project made with Hapi.js in a talk, so I decided to invest some time working with Hapi.js in order to find out how easy it is to create a Node.js application with this framework.

I have to admit, this is a really well done framework, with a plugin system that gives you a lot of flexibility when you are creating your server side applications and with a decent community that provides a lot of useful information and plugins in order to speed up project development.

When I started to read the only book available on this framework, I was impressed by its simplicity and by the thinking behind the framework, but more importantly by where Hapi.js was used for the first time.
The first enterprise app made with this framework was released during Black Friday on the Walmart ecommerce site. The results were amazing!
In fact one of the main contributors to this open source framework is Walmart Labs, which means a big organisation with real problems to solve; definitely a good starting point!

Express vs Hapi.js

If you are asking why not Express, I can reply with a few arguments:

  • Express is a super light and general purpose framework that works perfectly for small to medium size applications.
  • Hapi.js was built on top of Express at the beginning, but then the team moved away from it in order to create something more solid and with more built in functionality; a framework should speed up your productivity, not just give you a structure to follow.
  • Express is code based whereas Hapi.js is configuration based (with a lot of flexibility, of course)
  • Express uses middleware, Hapi.js uses plugins
  • Hapi.js is built with testing and security in mind!

Hapi.js

Let's start by saying that working with this framework is incredibly easy once you understand the few concepts you need to know in order to create a Node project.

I created a sample project where I integrated a mongo database, exposing a few end points in order to add a new document inside a mongo collection, update a specific document, retrieve all the documents available inside the database and retrieve all the details of a selected document.

Inside the git repo you can also find the front end code (books.html in the project root) in Vanilla Javascript, mainly because, whether you are passionate about React or Angular or any other front end library, you'll be able to understand the integration without any particular framework knowledge.

What I'm going to describe now is how I structured the server side code with Hapi.js.

In order to create a server in Hapi.js you just need a few lines of code:

const Hapi = require('hapi');

let server = new Hapi.Server();
server.connection();
server.start((err) => console.log('Server started at:', server.info.uri));

As you can see in the example (src/index.js), I created the server in the first few lines after the require statements, and I started it (server.start) after the registration of the mongoDB plugin; but one step at a time.

After creating the server object, I defined my routes with the server.route method.
The route method allows you to set just one route with an object, or several routes passing an array of objects.
Each route should contain the method parameter, where you define the HTTP method used to reach the path; you can also set a wildcard (*) so that any method will be accepted for that path.
Obviously you then have to set the route path; bear in mind it always has to start with a slash (/) in order to define the path correctly.
The path also accepts variables inside curly brackets, as you can see in the last route of my example: path: '/bookdetails/{id}'.

Last but not least, you need to define what's going to happen when a client requests that particular path, specifying the handler property.
The handler expects a function with 2 parameters: request and reply.

This is a basic route implementation:

{
   method: 'GET',
   path: '/allbooks',
   handler: (request, reply) => { ... }
}

When you structure a real application, and not an example like this one, you can wrap the handler property inside the config property.
Config accepts an object that will become your controller for that route.
So as you can see, it's really up to you to pick the right design solution for your project: the handler could be inline because it's a small project or a PoC, or an external module because you have a large project where you want to structure your code properly in an MVC fashion (we'll see that in the next blog post ;-)).
In my example I created the config property also because you can then use an awesome library called Joi in order to validate the data received from the client application.
Validating data with Joi is really simple:

validate: {
   payload: {
      title: Joi.string().required(),
      author: Joi.string().required(),
      pages: Joi.number().required(),
      category: Joi.string().required()
   }
}

In my example, for instance, I check that I receive the correct arguments (required()) and in the right format (string() or number()).
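
Putting the pieces together, here is a sketch of what such a route could look like, with the handler and the validation both wrapped in config (the addbook path and the reply message are illustrative; Joi comes from require('joi')):

server.route({
   method: 'POST',
   path: '/addbook',
   config: {
      handler: (request, reply) => {
         // request.payload has already passed validation at this point
         reply('book added');
      },
      validate: {
         payload: {
            title: Joi.string().required(),
            author: Joi.string().required(),
            pages: Joi.number().required(),
            category: Joi.string().required()
         }
      }
   }
});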

MongoDB plugin

Now that we have understood how to create a simple server with Hapi.js, let's dig into the Hapi.js plugin system, the most important part of this framework.
You can find several plugins created by the community, and on the official website there is also a tutorial that explains step by step how to create a custom plugin for Hapi.js.

In my example I used the hapi-mongodb plugin, which allows me to connect a mongo database with my node.js application.
If you are more familiar with mongoose you can always use the mongoose plugin for Hapi.js.
One important thing to bear in mind about a Hapi.js plugin is that once it's registered it is accessible from any handler method via request.server.plugins; it's injected automatically by the framework in order to facilitate the development flow.
So the first thing to do in order to use the mongodb plugin in our application is to register it:

// MongoDB is the hapi-mongodb plugin; DBConfig is the configuration module shown below
const MongoDB = require('hapi-mongodb');

server.register({
   register: MongoDB,
   options: DBConfig.opts
}, (err) => {
   if (err) {
      console.error(err);
      throw err;
   }

   server.start((err) => console.log('Server started at:', server.info.uri));
});

As you can see, I just need to specify which plugin I want to use in the register method, together with its configuration.
This is an example of the configuration you need to specify in order to connect your MongoDB instance to the application:

module.exports = {
   opts: {
      "url": "mongodb://username:password@id.mongolab.com:port/collection-name",
      "settings": {
         "db": {
            "native_parser": false
         }
      }
   }
};

In my case the configuration is an external object where I specify the mongo database URL and the settings.
If you want a quick and free way to use mongoDB in the cloud, I can suggest mongolab: when you register you get 500mb of data for free per account, so for testing purposes it's really the perfect cloud service!
Last but not least, once the plugin registration has happened, I can start my server.

Whenever I need to use the plugin inside a handler function, I'm able to retrieve it this way:

var db = request.server.plugins['hapi-mongodb'].db;

In my sample application I was able to create a few cases: adding a new document (addbook route), retrieving all the books (allbooks route) and the details of a specific book (bookdetails route).
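
As a sketch, the allbooks route could look like this (assuming Boom is required from the boom module, as in the update extract below):

server.route({
   method: 'GET',
   path: '/allbooks',
   handler: (request, reply) => {
      const db = request.server.plugins['hapi-mongodb'].db;

      db.collection('books').find().toArray((err, books) => {
         if (err) return reply(Boom.internal('Internal MongoDB error', err));
         return reply(books);
      });
   }
});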

Screen Shot 2015-12-04 at 23.44.38

If you want to update a record in mongo, remember to prefer the update method over the insert method because, if correctly handled, the update method checks inside your database whether there are any other occurrences: if there is one it updates that occurrence, otherwise it creates a new document inside the mongo collection.
Below is an extract of this technique: in the first object you specify the key for searching the item, then the object to replace it with, and the last object you need to add is one with upsert set to true (by default it's false), which allows you to create the new document if it doesn't exist in your collection:

db.collection('books').updateOne({ "title": request.payload.title }, dbDoc, { upsert: true }, (err, result) => {
    if (err) return reply(Boom.internal('Internal MongoDB error', err));
    return reply(result);
});

SAMPLE PROJECT GITHUB REPOSITORY

Resources

If you are interested in going deeper into Hapi.js, I'd suggest taking a look at the official website or at the book currently available.
An interesting piece of news is that a few more books on Hapi.js will be published soon; that usually means Hapi.js is getting adopted by several companies and developers, and it's definitely a good sign for the health of the framework.

Wrap up

In this post I shared with you a quick introduction to the Hapi.js framework and its peculiarities.
If you've enjoyed it, please let me know what you would be interested in, so I'll be able to prepare other posts on the topics you prefer.
Probably the next one will be on the different template systems (handlebars, react…), on universal applications (or isomorphic applications, as you prefer to call them), or a test drive of a few plugins to use in Hapi.js web applications.

Anyway, I'll wait for your input as well 😀

ES2015 Destructuring assignment: by value or reference?

This week I organised a meetup on ES2015 in my community, where the speaker presented his favourite features of the language.

Right after the talk I had the chance to chat with my best friend, who was asking whether destructuring assigns values by copying them to the new variable or by reference when you work with an Object or Array.
Because I hadn't had a chance to work with this new ES2015 feature before, I put together a quick example just to get an answer to this question.

It turns out that destructuring works by reference and doesn't copy the value.
That means any time you change a value inside a variable that contains an Object or Array assigned via destructuring, the original Object or Array is affected as well, as you can see in this simple example: destructuring example ES2015

destructuring ES2015
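
A minimal sketch of the same behaviour:

const source = { list: [1, 2, 3], meta: { total: 3 } };

// the destructured bindings hold references to the same underlying objects
const { list, meta } = source;

list.push(4);
meta.total = 4;

console.log(source.list);       // [1, 2, 3, 4] - mutated through 'list'
console.log(source.meta.total); // 4 - same object as 'meta'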

So when you work with destructuring, bear in mind to pay a lot of attention when you change a value inside your destructured Objects and Arrays!

Haxe-watchify: automatic build tool for Haxe and OpenFL projects

I've recently started working at a new company that is very focused on cross platform projects with Haxe.
In my commuting time I worked on an automatic build tool for Haxe and OpenFL projects.
The tool is called haxe-watchify and, with a simple JSON file or directly through the command line, you'll be able to set up how to continuously build your project in the background during your development flow.
Haxe-watchify has some interesting features, in particular for the Haxe target, like the possibility of using the completion server instead of the traditional compiler to speed up the build of your projects.
In fact the completion server implements a cache system to build your projects faster; in this case haxe-watchify takes care of starting the server and communicating with it for you.

Currently I've published the tool on the npm registry, so in order to install it just type in your CLI:

npm install haxe-watchify -g

I wrote extensive documentation on how to use the tool in the readme file of the project repository; otherwise you can check the --help command directly in your terminal window.
For now I've tried it only on Mac OS X, so if you find any bugs on other platforms please let me know.

I've already thought of a few possible features to add in the next releases, like pre and post build steps in order to launch your tests, run a static analysis tool or optimise assets before moving on to the build.
Anyway, I'm very keen to learn more about your current project workflows and how haxe-watchify could help you improve them.

If you want to share any comment, please feel free to add a comment to this post or get in touch via email.

Automate, automate and automate!

Recently I've spent several days finding the best way to set up an automation process for Javascript developers, and I investigated several tools strictly related to the Javascript world.
These tools allow you to save a lot of time when you perform repetitive, and sometimes boring, tasks in order to test your HTML5 game, website or web app.
In this post I'd like to share with you the tools I've found and used to create a full-stack Javascript automation pipeline for any front end developer or team.

Let's see what a Javascript developer or a team could automate on their machines to have better code quality and to save a lot of time when they are working on their own libraries or projects.
For this purpose, in my pipeline, I've used different CLI tools like mocha, grunt, yeoman, blanket and plato.
Each of these tools performs a specific task, but combined together they will provide your projects with:

  • TDD, BDD and unit tests
  • code coverage
  • dependencies management
  • (custom) project template
  • static analysis
  • tasks automation (live reloading, deploy in localhost folder, files concatenation…)

These are only a few of the multiple options that you have when "playing" with these tools, but let's go a little deeper to see which tool can effectively help to accomplish each of the items in the list above.

TDD, BDD and UNIT TEST

Searching the web on this topic you'll find a lot of good Javascript libraries; the one I decided to use is Mocha, because it's very well integrated with Blanket (code coverage) and Karma (test runner), and because it's based on node.js, so you can create your libraries and then test them in pure javascript without any need to go through HTML pages; and if you need to test javascript code that will run only inside the browser, you can fake the window object with libraries like jsdom integrated in your test cases.
Mocha allows you to work in BDD, TDD and unit testing styles, you can easily mix in several assertion libraries, and writing async tests becomes really, really easy.
Other libraries that could be useful are Jasmine or QUnit.
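
To give an idea, here is a minimal Mocha spec in BDD style, using Node's built-in assert module as the assertion library:

const assert = require('assert');

describe('Array#indexOf', () => {
   it('returns -1 when the value is not present', () => {
      assert.equal([1, 2, 3].indexOf(4), -1);
   });

   it('supports async tests through the done callback', (done) => {
      setTimeout(() => {
         assert.ok(true);
         done();
      }, 10);
   });
});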

CODE COVERAGE

As I wrote before, I found an interesting library that works perfectly with Mocha: Blanket.js.
Blanket is a very simple and easy to use library, in particular when you have all your tests written as modules (node.js style) instead of a mix between html and js files.
Blanket works not only with Mocha but also with Jasmine and QUnit, so basically with the most famous testing libraries!
One thing that I really appreciate about blanket is the final output, which can be exported to an interactive HTML report where you can immediately recognise what's not tested yet and jump from one file to another following the menu on the right side of the template.
Another one that seems quite interesting is Istanbul.js; I haven't tried it yet but it will be the next one for sure, as I've heard about really good experiences with this library from other developers!

STATIC ANALYSIS

When you want to use a static analysis tool in your pipeline, one of the most popular is….
But I suggest giving Plato a try, in particular if you work alone or in a small team and you want to do a sanity check of your project.

Plato report sample

Plato, in fact, stores all the information about your code project locally in some JSON files, and you can navigate through the report directly from an HTML page created by the tool (above, a screenshot sample).
These stats are very interesting for checking the areas of improvement of your project, and in particular with these tools you can get immediate feedback on where your efforts should be focused in order to deliver a better product and be sure that maintenance won't cost too much later on.
Obviously you can also use more sophisticated solutions like SonarQube, installing it on your server with the Javascript plugin and running your static analysis every time a developer pushes code to git or mercurial.
It always depends on the dimension of the project and the team; my suggestion is to start with Plato on a small project and then, when you see the real value, move to SonarQube, even if you are a small organisation.

PROJECT TEMPLATE

When you talk about templates for Javascript, it's impossible to forget Yeoman.
Yeoman is a scaffolding tool that allows you to create project skeletons with your favorite JS library ready to use.
I really suggest using these kinds of tools because they facilitate the beginning of new projects and at the same time give you some standards inside your company and between your projects.
There are several generators ready to use and searchable from the official website; if you can't find what you're looking for, it's very easy to use Node.JS and the APIs already built into Yeoman to create your own generator with the functionality that your company or projects need.

TASKS AUTOMATION and DEPENDENCIES MANAGEMENT

This is my favourite part: I found in Grunt a really good tool to automate more or less everything that is not strictly related to writing the project code inside my IDE!
Grunt is the glue that assembles all the tools explained above into a pipeline, with one easy line inside your CLI: "grunt".
The community is really huge and you can find more or less everything for Node.JS or plain Javascript: from minifying, uglifying and concatenating your JS files, to compiling your LESS or SASS files, converting your ES6 code to ES5, running static analysis or pushing your code directly to git, simply with a grunt task.
One thing that I really like about Grunt is that you can easily scale the way you work with it, using a yaml file and different js files (one per task) and assembling them at runtime.
This allows you to create common tasks for the whole company and at the same time have the freedom to add custom automation for each project and/or department of your company.
I really suggest taking a look at the official website, where you can also find a lot of technical information, and then starting to automate your daily Javascript workflow.
Obviously if you're not working with JS you can still use Grunt in combination with your favourite programming language or technology like Haxe, Dart, Typescript, Coffeescript or Adobe AIR; the flexibility of this tool is really impressive!
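
As a taste, here is a minimal Gruntfile sketch that concatenates the JS sources and then runs the Mocha suite; it assumes the grunt-contrib-concat and grunt-mocha-test plugins are listed in package.json:

module.exports = function (grunt) {
   grunt.initConfig({
      concat: {
         dist: {
            src: ['src/**/*.js'],
            dest: 'dist/app.js'
         }
      },
      mochaTest: {
         test: {
            src: ['test/**/*.js']
         }
      }
   });

   grunt.loadNpmTasks('grunt-contrib-concat');
   grunt.loadNpmTasks('grunt-mocha-test');

   // typing "grunt" on the CLI now runs the whole pipeline
   grunt.registerTask('default', ['concat', 'mochaTest']);
};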

An alternative to Grunt could be Gulp, where the main difference is that Grunt favours configuration over code and Gulp exactly the opposite.
The Gulp community is growing day by day and it's interesting to see the different approaches of these two great task runners; probably in the long term the Gulp approach will be more successful, but for now Grunt is exactly what I was looking for.

Conclusion

As you've read, the JS world has really got a lot of useful tools that will save you a lot of time during your daily job as a developer or company.
The mix of these tools allows you to create a pipeline in pure Javascript, and they can really improve your code quality and your flow, giving you standards inside your projects and a solid workflow able to scale your company or projects in an easy and professional way.
Obviously these aren't the only tools and libraries I've tried; there are many others out there that I'd like to mention, like PhantomJS or Buster or Lineman and so on. But in the next five minutes, before going back to what you were doing before reading this post, try to think about how to improve your flow; trust me, you'll be surprised by how much more productive you become after introducing these tools into your routine.

HaXe, my new toy!

After Adobe MAX 2011 nothing could be the same for me, and maybe for a lot of flash platform developers around the world: Adobe announced some "directions" that didn't find my consent, mainly because of the way the news was communicated and the impact it had on the market; but we know that the Flash Platform is not dead and it will go on for many years.
Obviously nothing was the same after that; in fact many developers started to look around for new technologies and frameworks like Backbone.js, Sencha Touch, Ext JS and so on.
Personally, in the last few months I started to check out many Javascript frameworks, because my aim was to find something that could replace the Flash Platform in the future, something to spend time on in the next few years to consolidate, while going ahead with the Flash Platform too.
Last week a good friend of mine gave me this link: http://www.haxenme.org/ and when I started to read what you can do and how you can do it, I immediately started to dig into HaXe in my spare time, and trust me, I had a lot of fun!

First of all, what is HaXe?
HaXe is an open source multiplatform programming language: it allows you to write once and deploy everywhere (in the right meaning of the term "everywhere").
In fact with HaXe we can write in a programming language similar to Actionscript 3 (strictly typed, OOP, …) but more powerful (it has enums, generics, a dynamic type, …), and we can target our projects to Flash, C++, Neko, HTML5, Node.JS, PHP, iOS, Android… if we work with multiplatform APIs we can write once and deploy our project to multiple targets.
So for many developers coming from Javascript, Actionscript, Java and so on, it will be easy to start dealing with HaXe.
Another interesting thing about HaXe is that we can work with the library present in SWF files and integrate movieclips in our projects; we can also create SWF files without Flash Professional with SWFMill, which is used for the generation of asset libraries containing images (PNG and JPEG), fonts (TTF) or other SWF movies.
That's interesting because it means that designers who usually prepare assets for developers don't need to change their daily workflow!
If you need to extend your target platform you can add new features with external libraries; this is important because it means we can really cover everything, and we can find a lot of ready to use libraries directly on the HaXe lib website.
With HaXe you can communicate between different languages like JS and Flash in both directions, and you can easily find many framework and library ports for HaXe, for example javascript ones like JQuery, Sencha Touch, Node.JS and so on.

What about the IDEs for working with HaXe (so important for a developer!)?
On Mac you can use TextMate or FDT; on Windows, FDT or FlashDevelop; this last one seems the best, but I didn't try it. For more specs I suggest taking a look at the HaXe site section; maybe you'll find your favorite IDE in the list.

Finally I made an easy sample to better understand the power of HaXe NME: this sample loads an external XML file and an external SWF library with a movieclip inside, exported for Actionscript, and I added a drag & drop feature to the list. Then I compiled it for iOS, Mac OS X Lion, C++ and SWF with the same codebase, and everything worked well and smoothly!


You can download the source files here; to compile them take a look at the HaXeNME section, where you can find everything you need to try this sample and start playing with HaXeNME!

If you want to get into HaXe, I suggest two books; the first one is really a good starting point for working with this fantastic language:
HaXe 2 beginner’s guide
Professional HaXe and Neko

Last but not least, next April in Paris there will be the World Wide HaXe conference; I'll be there to learn more about the future of this amazing platform, and if you are planning to be there it will be a pleasure for me to catch up over a beer!

I hope to publish more experiments and information about HaXe soon, because it is a thrilling programming language!!!
So stay tuned!