Micro-frontends decisions framework


In the past year (2019) I spent a lot of time talking, writing, podcasting and just chatting with people about micro-frontends.
I’m truly passionate about the topic, mainly because it’s still an unexplored land where challenges arise every single day.

During all my talks, workshops and webinars I have been building, step by step, a mental model that allows me to simplify my thoughts and make them approachable to people listening to me or reading my content.
I'm sure there is still a lot of work to do, but judging from the feedback I have received so far, I have been able to explain reasonably well what I have in mind.

Recently I had the pleasure of running a 2-hour workshop in Bergen (Norway) on this topic, and during my presentation (literally on stage) I realized I was very close to simplifying even further the decision process for embracing a micro-frontends architecture.
In the JavaScript world we are used to choosing a framework and sticking with it for the entire lifecycle of a project; sometimes we swap it for another one, but many times we stay with it for months or years, until the end of life of a specific project.
Every time we use a JavaScript framework, someone else has made architectural decisions that we live with (or try to), and we focus on what matters most from a business point of view: the features of an application.
This doesn't mean the challenge in front of us is easy at all; we still have an essential part of this "game" to play in taking the right design decisions for delivering the project. But first we need to focus on having a solid base to build on top of, aka the architecture.

Martin Fowler defines software architecture as:

“Software architecture is those decisions which are both important and hard to change”

Based on this definition, and focusing on the most important decisions for defining a micro-frontends architecture, I came up with 4 pillars we have to decide on upfront when we start architecting a micro-frontends project.
Those pillars can be summarized as:

1. Definition

2. Composition

3. Route

4. Communication

Each of them plays a fundamental role in the good outcome of a micro-frontends project.
After taking those 4 decisions there will be many others to take along the journey, but with those cornerstones in place we should be able to cascade the vast majority of the other decisions without needing to re-evaluate our architecture.
Bear in mind that those decisions could change over the lifecycle of a project, but changing them will require a considerable development effort to keep the project's architecture coherent.

Defining Micro-Frontends

The first decision to take is how we identify a micro-frontend: is it a part of a view (horizontal split) or an entire view (vertical split)?
Splitting horizontally means identifying multiple micro-frontends inside the same view, so multiple teams take care of the page composition, coordinating among themselves on the final result presented to the user.

Splitting vertically, instead, assigns a specific view, or group of views, to a team, allowing that team to master a specific area of the application.

In general, approaching a micro-frontends project with a vertical split will simplify many decisions further down the line, because it is closer to the way a frontend developer is used to working, so many existing practices can easily be applied to this approach.
Also, coordination between teams should be easier, considering each team owns a vertical slice of the application rather than multiple parts spread across it.

There are additional challenges when we decide to go with the horizontal split; I described how to identify micro-frontends more in depth in another post available on this website.

Micro-Frontends Composition

The second decision is where we want to compose micro-frontends; here we have 3 options to choose from:

  • client-side composition
  • edge-side composition
  • server-side composition

Client-side composition means implementing techniques like an app shell loading single-page applications (check out the Single-SPA project, for instance), using iframes like the Luigi framework does, or using a library implementing a client-side transclusion technique.
Composing on the edge means using CDN capabilities like Edge Side Includes (not available in all CDN providers, or not fully implemented) or computation happening on or near the edge (bear in mind there are restrictions on how much logic we can run with a solution like Lambda@Edge, for instance).
Finally, on the server side there are plenty of frameworks, like Ara Framework, OpenComponents, Piral or Tailor.js, and many others.
When we decide to go with a vertical split we should aim for a client-side composition, using Single-SPA or similar mechanisms like we do at DAZN.
Instead, when we decide to identify our micro-frontends as parts of a single view, we have all the options available (client-side, edge-side and server-side).
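
To give a concrete flavour of the client-side option, here is a deliberately naive app-shell sketch; the bundle URLs and the mount() contract are hypothetical, and real solutions like Single-SPA handle much more (lifecycles, shared dependencies, unmounting):

// minimal client-side composition: the app shell loads a micro-frontend bundle on demand
const microFrontends = {
  catalogue: 'https://assets.example.com/catalogue/bundle.js', // hypothetical URLs
  account: 'https://assets.example.com/account/bundle.js'
};

function loadMicroFrontend(name) {
  return new Promise((resolve, reject) => {
    const script = document.createElement('script');
    script.src = microFrontends[name];
    // each bundle is assumed to expose a global object with a mount() function
    script.onload = () => resolve(window[name].mount(document.getElementById('app')));
    script.onerror = reject;
    document.head.appendChild(script);
  });
}

loadMicroFrontend('catalogue');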

Micro-Frontends Routing

After defining our micro-frontends composition strategy we need to decide how we want to route our views.
This time the decision is not mutually exclusive: we can decide to route on the client side and add logic on the edge, or keep the routing logic on the client or the server only.
In this case my suggestion is to be coherent with the composition part: if you have decided to go with a client-side composition, use the app shell for mapping the routing logic; if you have chosen the server-side composition, use the webserver for managing the routing logic.
If we decided to go with the edge-side composition, we will rely on the URL associated with each page.
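
Sticking with the client-side example sketched above, the app shell's routing can be as simple as a map from URL paths to micro-frontends (again a hypothetical sketch, reusing the loadMicroFrontend helper):

// map each path to the micro-frontend that owns it
const routes = {
  '/catalogue': 'catalogue',
  '/account': 'account'
};

function route() {
  const name = routes[window.location.pathname] || 'catalogue';
  loadMicroFrontend(name); // mount the micro-frontend owning the current URL
}

window.addEventListener('popstate', route);
route();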

In the following diagram, you can easily discover the different routing possibilities associated with a composition technique.

I wrote an article about this a few months ago that I recommend reading if you want to go deeper.

Communicating between Micro-Frontends

The final decision is finding a good way for micro-frontends to communicate, bearing in mind that they are independent units that should be completely decoupled from each other.
When we decide to have multiple micro-frontends on the same page, we need to decide how (and if) a micro-frontend should notify the others when an event or a user interaction happens, in order to coordinate changes in the other micro-frontends.
In this case we could use custom events or an event-emitter library (there are many available), so our micro-frontends just emit/dispatch an event and whoever is listening reacts to that specific event.
However, whether we identify multiple micro-frontends per page or consider a micro-frontend an entire view, we need to decide how a view exchanges data with another view.
We can decide to use a web-storage solution (local and/or session storage), where every time a new view is loaded it looks inside the web storage for the information it needs, or to pass data via the URL and have the application request the user state from the backend, as displayed in the following diagram.
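
A minimal sketch of the custom-events option (event name and payload are invented for illustration):

// micro-frontend A dispatches a custom event on a shared bus (here, window)
window.dispatchEvent(new CustomEvent('basket:item-added', {
  detail: { sku: 'ABC-123', quantity: 1 }
}));

// micro-frontend B listens and reacts without knowing anything about A
window.addEventListener('basket:item-added', (event) => {
  updateBasketCounter(event.detail.quantity); // hypothetical function in micro-frontend B
});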

As we have seen, there are a few decisions to make when we want to approach a micro-frontends architecture, and they will shape the other challenges we will face during a micro-frontends project, like how to create a seamless user experience, how to define our CI/CD pipelines, how to optimize our micro-frontends by reducing our JavaScript bundles, and so on.
However, those 4 decisions made upfront will provide a strong guideline for facing all the other challenges and restrict the number of options to choose from.

Last but not least…

During my last keynote at JS-Poland I introduced this decisions framework for the first time; you can find the slides here (hopefully the recording soon!) with some additional details.

I’d really like to hear your feedback about this decisions framework.
If you feel it may help in your current or future projects, if you think something is missing, or if you think you have a better way to summarize a complex topic like this one, please leave a comment; I'm eager to read your point of view.


The art of coding

Today I spent a few hours in my favourite London museum: the Design Museum.
I went to the main exhibition, called Designer, Maker, User, where they created a journey between these 3 actors, explaining how they are linked together and all the innovations they brought in the past 100 years to different industries like transportation, fashion, design and so on.

During the time spent in the museum I was able to draw a parallel between the exhibition topic and the software world.
I truly believe we are creating some sort of art when we write code, in a hidden way if you like, because our users can appreciate only the "visual" result rather than the journey behind it; but in a certain way, isn't that also what we evaluate when we look at a statue or a painting?

Communication

At the beginning of this journey there was a sign with the topic explanation:

Design is a process carried out by people, for people.
At its heart is a dialogue between three key people: the designer, the maker and the user.
[…] The exhibition shows how designers respond to the needs of makers and users, how users consume and influence design and how revolutions in technology and manufacturing transform our world

Does it sound familiar? Have you ever thought about how many times we execute this process on a daily basis in our job?
Think of all the innovations we went through in the past few years: moving from procedural programming to functional programming, from MVC to reactive architectures, from monoliths to microservices.

Have you ever realised how important the feedback loop (the dialogue between designer, maker and user) is in our job too?
A feedback loop that is partially hidden behind techniques like unit testing, TDD, continuous integration or continuous delivery.
Several times in my career I have seen people adopting or using these techniques without understanding why they should do it.
The feedback loop is a well-known technique in many Agile and Lean frameworks and you can learn a lot about it from different talks and books.
If you are curious how to improve it, I'd suggest starting with a book on Kaizen; you will discover the feedback loop as the cornerstone of your continuous improvement journey.

Simplicity

A few steps later, I found a couple of subway maps: the first London Tube map and the New York subway map.

the first Tube Map

Both were created around the 70s with the main intent of simplifying how to travel across the city.
Despite the fact that these maps distort reality and don't provide a perfect representation of the city, they became the standard for travelling for millions of users from all over the world.

The takeaway here is the simplicity: simplicity in our code, simplicity in our tests, simplicity in our architectures, simplicity in our daily tasks.

A technique that I learnt during my career is that no matter how complex a specific task or implementation is, it can always be divided into smaller chunks of work, and when you reach the smallest one you can start working on it.
This approach will give you 2 main benefits:

  1. you are defining a list of steps to follow in order to build a more complex algorithm, so you are already thinking about how to achieve the final solution in a systematic way.
    I'd suggest writing down the list of things to do so you can visually see what you need to achieve (what is not visualised doesn't exist)
  2. if you have a list of small tasks that you can achieve in minutes instead of hours, your self-awareness will grow minute by minute, and at the end of the day you will see many things achieved instead of "just a few" big tasks.

Modularisation

Another interesting video I watched in the museum was about the design of the new Tube trains that will be available in the future.
In this case the main goal for the designers was to increase the frequency and the capacity of each line while maintaining the same infrastructure.

new London Tube train

Apart from the current infrastructure (tunnel size, for instance), the designers need to take into consideration how often these trains will be replaced.
In fact, each train will remain in use for 30-40 years, and you can imagine how many innovations and improvements humans can make in such a long time.
So they approached the problem taking into consideration the 2 big constraints (time and infrastructure) and designed a larger train, optimising every single centimetre inside each coach and creating bigger entrances to speed up access to the coach; but, more importantly, in order to accommodate the future innovations that could improve their trains, they created modular parts inside each coach that can be removed and exchanged with something better in the future.

That approach completely blew my mind; the modularisation of applications is a problem solved in many ways inside the software world.
Encouraging loosely coupled relations over tightly coupled ones is a cornerstone of all the main frameworks.
Think for a moment about the frontend framework evolution where, after React's release, everyone moved to a more component-oriented architecture.

Think then of all the design patterns that have encouraged decoupling objects since the beginning of software history... it's incredible how easily we can find these important software engineering concepts in the real world.

If we want to find a similar approach in an industry closer to software, let's talk about the Fairphone 2, released in 2016.

Fairphone 2

Fairphone 2 is a modular phone that guarantees the longevity of your device by letting you replace individual parts on demand instead of the entire phone.
The company made modularity a feature of the new device.
It's a totally different approach from the one taken by Apple with the iPhone, for instance, where replacing a part of the phone is (almost) impossible.

Every time you are thinking about a new or existing project, think about how to modularise it, how to break dependencies and improve your code.
In particular, think about how the different modules are going to "stick" together.
Often we don't spend much time on the "contracts" between systems, but it's very important to invest the right amount of time in creating the "perfect glue" for your application.
Think how much we have done in the past moving from a monolithic architecture to microservices, where communication, monitoring and logging are far more important than the code running inside a single microservice.

Wrapping up

We are often very busy working with new libraries, frameworks and languages; anyway, sometimes we need to stop for a few hours, or even a day, look around us, and realise that the next innovation in the software industry could probably be "borrowed" from tools already available in the real world.

HTTP2: the good, the bad and the ugly

I spent the last few weeks investigating HTTP2, the successor of HTTP1.1, and I'd like to share my findings and thoughts in this post.

Let's start by saying that if the question you have in mind at this point is: "Can I really use it today, not only for experiments but also in production?",
my answer is: "YES, you can!"

First of all, I'd like to share with you the browser implementation status for this protocol:

(screenshot: HTTP2 browser support table from caniuse.com)

As you can see from the screenshot taken from caniuse.com, it's definitely well supported in the latest versions of the major browsers, with some caveats obviously.

If you are not convinced yet, please check this website with one of the browsers that currently support HTTP2 and see how fast it loads!
I'd suggest installing the HTTP2 indicator Chrome extension to discover how many web apps and online services are already using this protocol:

(screenshot: the HTTP2 indicator Chrome extension)

Not yet convinced?! OK, let's move on to a deeper analysis then!

HTTP2 is a binary protocol that implements request multiplexing, which means all the browser's requests can be handled concurrently over a single connection.

This massive change will drastically increase the performance of your application.
Considering that at the moment a browser can download simultaneously a maximum of around 6 resources per domain (let's avoid talking about "domain sharding" for now), with HTTP2 we will be able to request all the resources and render them as soon as the browser has downloaded them; check this demo made with Go for a proper comparison between the 2 protocols, and check also the Network panel in the Chrome or Firefox dev tools in order to understand how the 2 protocols differ.

The Good

HTTP2 has only a few requirements in order to be adopted:

  • it works ONLY over HTTPS (therefore you need a valid SSL certificate)
  • it's backward compatible: if a browser or a device where your application is running doesn't support HTTP2, it will fall back to HTTP1.1
  • it comes with great performance improvements out of the box
  • it doesn't require you to do anything on the client side; only the server side needs changes for a basic implementation
  • a few new interesting features will allow you to speed up the load of your web project in a way that is not even imaginable with an HTTP1.1 implementation

Despite the short list, HTTP2 is bringing a substantial change to the internet ecosystem.
One of my favourite features is server PUSH, where a server can send a link header specifying what the browser should download in advance, before it even starts parsing the HTML document.
In this case, we can educate the browser to download several resources like images, CSS or even JavaScript files before the engine encounters them inside the DOM, providing a better user experience for our web apps and/or games.
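
To give an idea of the mechanism, the origin can advertise the resources to preload/push through a Link header; the paths below are made up, and the actual push is performed by an HTTP2-capable server or CDN sitting in front of the application that honours these hints:

const http = require('http');

http.createServer((req, res) => {
  // hint the resources the browser will need before it has parsed the HTML
  res.setHeader('Link', '</css/app.css>; rel=preload; as=style, </js/app.js>; rel=preload; as=script');
  res.setHeader('Content-Type', 'text/html');
  res.end('<html><head><link rel="stylesheet" href="/css/app.css"></head><body>...</body></html>');
}).listen(8080);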

The Bad

There is still plenty of work to do before this protocol reaches great penetration: a few specs are still in progress (read the next paragraph: the ugly) and it will probably take quite a few months before we see a lot of services moving to this new protocol.

Apart from this high-level overview of the downsides, let's look at what will change on the technical side.

Considering that HTTP2 doesn't restrict the number of requests a browser makes to download resources, a few techniques for optimising our websites will need to be reviewed or even removed from our pipeline.
Delivering the whole application inside a unique JavaScript file won't bring any benefit with HTTP2, so we need to change our logic, downloading only what we need when we need it.
Knowing that downloading many small files won't be a problem anymore, we could serve individual small images for the icons of our website instead of bundling them into sprites.
Tools like Grunt, Gulp or webpack will probably need to review their strategies or update their plugins in order to provide real value in this new pipeline.
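
As a rough sketch of the "download only what we need when we need it" idea, a feature's script can be injected on demand instead of being bundled upfront (paths and function names are hypothetical):

function loadFeature(src) {
  return new Promise((resolve, reject) => {
    const script = document.createElement('script');
    script.src = src;
    script.onload = resolve;
    script.onerror = reject;
    document.head.appendChild(script);
  });
}

document.getElementById('checkout-button').addEventListener('click', () => {
  // the checkout code is fetched only when the user actually needs it
  loadFeature('/js/checkout.js').then(() => startCheckout()); // startCheckout is assumed to be defined by the loaded file
});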

The Ugly

Google Chrome protocol implementation!
Chrome is my favourite browser and I use it extensively, in particular when I need to debug a specific script or gather metrics on a specific behaviour of a web app.
At the moment it's the only browser that requires HTTP2 server negotiation via ALPN (Application-Layer Protocol Negotiation), an extension that allows the application layer to negotiate which protocol will be used within the TLS connection.

Considering that OpenSSL integrates ALPN only from version 1.0.2, we won't be able to enable HTTP2 support for Chrome (from build 51 onwards) if we don't configure our server correctly.
For instance, on Linux, only Ubuntu from version 16.04 has that OpenSSL version installed by default; on all the other major Linux distributions you will either have to install the newer version manually or wait for the next major OS release.

I'd suggest reading carefully the article that describes this "issue" on the nginx blog before you start configuring your server for Chrome.

Wrap up

HTTP2 is not perfect and probably not as widely supported as it should be, but it can definitely improve (drastically, in certain cases) your web project's performance.
A lot of "big players" are already using the HTTP2 protocol in production (Instagram, Twitter or Facebook, for instance) and the results are remarkable.

Why not start catching up with the future today?

Hapi.js and MongoDB

During the Fullstack conference I saw a small project made with Hapi.js presented during a talk, so I decided to invest some time working with Hapi.js in order to see how easy it was to create a Node.js application with this framework.

I have to admit, this framework is really well done, with a plugin system that gives you a lot of flexibility when you are creating your server-side applications, and with a decent community that provides a lot of useful information and plugins in order to speed up project development.

When I started to read the only book available on this framework I was impressed by its simplicity and by the thinking behind the framework, but more importantly I was impressed by where Hapi.js was used for the first time.
The first enterprise app made with this framework was released during Black Friday on the Walmart ecommerce site. The results were amazing!
In fact, one of the main contributors to this open source framework is Walmart Labs, which means a big organisation with real problems to solve; definitely a good starting point!

Express vs Hapi.js

If you are asking why not Express, I can reply with a few arguments:

  • Express is a super light and general purpose framework that works perfectly for small to medium size applications.
  • Hapi.js was built on top of Express at the beginning, but then they moved away from it in order to create something more solid and with more built-in functionality; a framework should speed up your productivity, not just give you a structure to follow.
  • Express is code based, whereas Hapi.js is configuration based (with a lot of flexibility, of course)
  • Express uses middleware, Hapi.js uses plugins
  • Hapi.js is built with testing and security in mind!

Hapi.js

Let's start by saying that working with this framework is incredibly easy once you understand the few concepts you need to know in order to create a Node project.

I created a sample project where I've integrated a Mongo database, exposing a few endpoints in order to add a new document to a Mongo collection, update a specific document, retrieve all documents available inside the database and retrieve all the details of a selected document.

Inside the git repo you can also find the front-end code (books.html in the project root) in vanilla JavaScript, mainly because, whether you are passionate about React or Angular or any other front-end library, you'll be able to understand the integration without any particular framework knowledge.

What I'm going to describe now is how I've structured the server-side code with Hapi.js.

In order to create a server in Hapi.js you just need a few lines of code:

const Hapi = require('hapi'); // hapi v16 or earlier, where the callback-style API below applies

let server = new Hapi.Server();
server.connection(); // default host and port; pass an options object to customise them
server.start((err) => console.log('Server started at:', server.info.uri));

As you can see in the example (src/index.js), I've created the server in the first few lines after the require statements, and I started the server (server.start) after the registration of the MongoDB plugin; but one step at a time.

After creating the server object, I've defined my routes with the server.route method.
The route method allows you to set just one route with an object, or several routes by passing an array of objects.
Each route should contain the method parameter, where you define the HTTP method used to reach the path; you can also set a wildcard (*) so any method will be accepted for that path.
Obviously you then have to set the route path; bear in mind it always has to start with a slash (/) in order to define the path correctly.
The path also accepts variables inside curly brackets, as you can see in the last route of my example: path: '/bookdetails/{id}'.

Last but not least, you need to define what is going to happen when a client requests that particular path, by specifying the handler property.
The handler expects a function with 2 parameters: request and reply.

This is a basic route implementation:

{
   method: 'GET',
   path: '/allbooks',
   handler: (request, reply) => { ... }
}

When you structure a real application, and not an example like this one, you can wrap the handler property inside the config property.
Config accepts an object that will become your controller for that route.
So, as you can see, it's really up to you to pick the right design solution for your project: it could be inline because it's a small project or a PoC, or an external module because you have a large project where you want to structure your code properly in an MVC fashion (we'll see that in the next blog post ;-)).
In my example I created the config property also because you can then use an awesome library called Joi in order to validate the data received from the client application.
Validating data with Joi is really simple:

validate: {
   payload: {
      title: Joi.string().required(),
      author: Joi.string().required(),
      pages: Joi.number().required(),
      category: Joi.string().required()
   }
}

In my example, for instance, I checked that I receive the correct arguments (required()) and in the right format (string() or number()).
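
Putting the pieces together, a complete route looks roughly like this (a simplified sketch along the lines of the routes in the sample project, with the handler body reduced to the bare minimum; Joi is assumed to be required at the top of the file):

{
   method: 'POST',
   path: '/addbook',
   config: {
      handler: (request, reply) => {
         // request.payload has already been validated by Joi when we get here
         reply({ saved: true, title: request.payload.title });
      },
      validate: {
         payload: {
            title: Joi.string().required(),
            author: Joi.string().required(),
            pages: Joi.number().required(),
            category: Joi.string().required()
         }
      }
   }
}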

MongoDB plugin

Now that we have understood how to create a simple server with Hapi.js, let's go deeper into the Hapi.js plugin system, the most important part of this framework.
You can find several plugins created by the community, and on the official website there is also a tutorial that explains step by step how to create a custom plugin for Hapi.js.

In my example I used the hapi-mongodb plugin, which allows me to connect a Mongo database to my Node.js application.
If you are more familiar with Mongoose you can always use the Mongoose plugin for Hapi.js.
One important thing to bear in mind about a Hapi.js plugin is that once it's registered it is accessible from any handler method via request.server.plugins; it's injected automatically by the framework in order to facilitate the development flow.
So the first thing to do in order to use the mongodb plugin in our application is to register it:

server.register({
   register: MongoDB,
   options: DBConfig.opts
}, (err) => {
   if (err) {
      console.error(err);
      throw err;
   }

   server.start((err) => console.log('Server started at:', server.info.uri));
});

As you can see, I just need to specify which plugin I want to use in the register method, plus its configuration.
This is an example of the configuration you need to specify in order to connect your MongoDB instance to the application:

module.exports = {
   opts: {
      "url": "mongodb://username:password@id.mongolab.com:port/collection-name",
      "settings": {
         "db": {
            "native_parser": false
         }
      }
   }
};

In my case the configuration is an external object where I specified the Mongo database URL and the driver settings.
If you want a quick and free solution to use MongoDB in the cloud, I can suggest mongolab: when you register you get 500MB of data for free per account, so for testing purposes it's really the perfect cloud service!
Last but not least, once the plugin registration has happened I can start my server.

When I need to use the plugin inside any handler function, I can retrieve it in this way:

var db = request.server.plugins['hapi-mongodb'].db;

In my sample application I covered a few cases: adding a new document (addbook route), retrieving all the books (allbooks route) and the details of a specific book (bookdetails route).
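
For instance, the allbooks route boils down to something like this (a simplified sketch with minimal error handling; Boom and the plugin registration are assumed to be in place as shown above):

{
   method: 'GET',
   path: '/allbooks',
   handler: (request, reply) => {
      const db = request.server.plugins['hapi-mongodb'].db;
      // fetch every document from the books collection and return it as JSON
      db.collection('books').find({}).toArray((err, docs) => {
         if (err) return reply(Boom.internal('Internal MongoDB error', err));
         return reply(docs);
      });
   }
}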


If you want to update a record in Mongo, remember to use the update method rather than the insert method because, if correctly handled, the update method will check inside your database whether there are any other occurrences: if there is one it will update that occurrence, otherwise it will create a new document inside the Mongo collection.
Below is an extract of this technique: in the first object you specify the key used to search for an item, then the object to replace it with, and the last object you need to add is an object with upsert set to true (by default it is false), which will create the new document if it doesn't exist in your collection:

db.collection('books').updateOne({"title": request.payload.title}, dbDoc, {upsert: true}, (err, result) => {
    if(err) return reply(Boom.internal('Internal MongoDB error', err));
    return reply(result);
});

SAMPLE PROJECT GITHUB REPOSITORY

Resources

If you are interested in going deeper into Hapi.js, I'd suggest taking a look at the official website or at the book currently available.
An interesting piece of news is that a few more books about Hapi.js are going to be published soon, which usually means Hapi.js is being adopted by several companies and developers; definitely a good sign for the health of the framework.

Wrap up

In this post I shared with you a quick introduction to the Hapi.js framework and its peculiarities.
If you've enjoyed it, please let me know what you would be interested in, so I can prepare other posts on the topics you prefer.
Probably the next one will be on the different template systems (Handlebars, React...), on universal applications (or isomorphic applications, as you may prefer to call them), or a test drive of a few plugins to use in Hapi.js web applications.

Anyway I’ll wait for your input as well 😀

Haxe-watchify: automatic build tool for Haxe and OpenFL projects

I've recently started working at a new company, very focused on cross-platform projects with Haxe.
In my commuting time I worked on an automatic build tool for Haxe and OpenFL projects.
The tool is called haxe-watchify and, with a simple JSON file or directly through the command line, you'll be able to set up how to continuously build your project in the background during your development flow.
Haxe-watchify has got interesting features, in particular for the Haxe target, like the possibility to set up the completion server instead of the traditional compiler to speed up the build of your projects.
In fact, the completion server implements a cache system to build your projects faster; haxe-watchify takes care of starting the server and communicating with it for you.

Currently I've published the tool on the npm registry, so in order to install it just type in your CLI:

npm install haxe-watchify -g

I wrote extensive documentation on how to use the tool in the readme file of the project repository; otherwise you can check the --help command directly in your terminal window.
For now I have tried it only on Mac OS X, so if you find any bugs on other platforms please let me know.

I've already thought of a few possible features to add in the next releases, like pre- and post-build hooks to launch your tests, run a static analysis tool or optimise assets before or after the build.
Anyway, I'm very keen to learn more about your current project workflow and how haxe-watchify could help you improve it.

If you want to share any comment, please feel free to add one to this post or get in touch via email.

Git Flow vs Github Flow

Recently I've spent some time studying a good way to manage software projects with Git.
I read a lot of blog posts to check different points of view and to find out which technique works best in different situations.

The principal ways to manage software in Git are Git Flow and the GitHub Flow.
These 2 methods can really help you manage your project and optimise your workflow within the team.
Let's see the differences between them.

Git Flow

git flow

Git Flow works with different branches in order to manage each phase of the software development easily; it is suggested when your software has the concept of a "release" because, as you can see in the scheme above, it's not the best choice when you work in a Continuous Delivery or Continuous Deployment environment, where this concept is missing.
Another good point of this flow is that it fits perfectly when you work in a team and one or more developers have to collaborate on the same feature.
But let's take a closer look at this model.

The main branches in this flow are:

  • master
  • develop
  • features
  • hotfix
  • release

When you clone a Git repository into your local folder, the first thing to do is to create a branch from master called develop; this branch will be the main branch for development, where all the developers in a team will work to implement new features or bug fixes before the release.
Every time a developer needs to add a new feature, he will create a new branch from develop, which allows him to work on that feature without compromising the code for the other people working on the develop branch.
When the feature is ready and tested, it can be rebased into the develop branch; our goal is to always have a stable version of the develop branch, because we merge the code only when the new feature is completed and working.
When all the features related to a new release are implemented in the develop branch, it's time to branch the code into the release branch, where you'll start testing properly before the final deployment.
Once you have branched your code from develop to release you should avoid adding new features; you should only fix bugs inside the release branch code until you have a stable release.
At the end, when you are ready to deploy your project live, you will tag the version inside the master branch, so there you keep all the different versions that you release week by week.
It may seem like a lot of steps, but it's quite safe and helps you avoid mistakes and problems when you release; obviously, to accomplish all these tasks you can find online a lot of scripts that help you work with Git Flow from the command line (see the sketch below), or, if you prefer, you can use visual tools like SourceTree by Atlassian that do the dirty work for you, so you only have to follow the instructions inside the software to manage all the branches; for this reason I've also prepared a short video showing how to use this flow with SourceTree.
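
Under the hood, the flow boils down to a handful of plain git commands; here is a rough sketch (branch names are illustrative):

git checkout -b develop master            # create the long-lived develop branch
git checkout -b feature/login develop     # start a new feature from develop
# ...commit your work on the feature branch...
git checkout develop
git merge --no-ff feature/login           # bring the finished feature back into develop
git checkout -b release/1.0 develop       # open a release branch: bug fixes only from now on
git checkout master
git merge --no-ff release/1.0             # ship it
git tag -a 1.0 -m "Release 1.0"           # tag the version on master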

You can go more in depth on this flow by reading these 2 interesting articles: the nvie blog or the DataSift documentation.

Github Flow

(diagram: GitHub Flow)

So now, do you think that GitHub works with Git Flow? Of course not! (Honestly, I was really surprised when I read that!)
In fact, they work in a continuous deployment environment where there isn't the concept of a "release", because every time they finish preparing a new feature they push it live immediately (after the whole automation chain created in the environment has run).
The main concepts behind the Github flow are:

  • Anything in the master branch is deployable
  • To work on something new, create a descriptively named branch off of master (ie:new-oauth2-scopes)
  • Commit to that branch locally and regularly push your work to the same named branch on the server
  • When you need feedback or help, or you think the branch is ready for merging, open a pull request
  • After someone else has reviewed and signed off on the feature, you can merge it into master
  • Once it is merged and pushed to ‘master’, you can and should deploy immediately

I found an amazing interactive page where you can deepen your knowledge of this method; I see it's very common when you work in QA teams or small teams, or when you are working as a freelancer, because it's true that it is a lightweight flow to manage a project, but it's also quite clear and safe when you want to merge your code into the master branch.
Another good resource about GitHub Flow is the blog post by the GitHub evangelist Scott Chacon.
I also recorded a video on how to use GitHub Flow with SourceTree:

If you have any other method to manage your projects in Git, feel free to share it: I'm quite interested to see how you usually work with Git and whether there are better ways to do it; and if you have any other feedback or questions, I'm here for you!

Product Owner, Scrum Master and Team: this is Scrum!

As you read in my previous post "Approaching Scrum", in the short description of this Agile technique I talked about the 3 main roles that compose a Scrum team.

The main 3 roles are:

  • Product Owner
  • Scrum Master
  • Development Team

There are also other people who participate in a Scrum team, and I'll talk about them in a bit.
I'd like to share a brief introduction to each role before talking, in another post, about the Scrum process in detail.

Where is the project manager ?

That's a good question; as you can see in the list of the main roles above, the project manager is not mentioned.
Basically, the functions a project manager covers in traditional software development, like managing the communication between clients and the company, managing the team's tasks and so on, are divided in Scrum among the members of the Scrum team (product owner, scrum master and development team), giving more responsibility to them.
This has a positive impact on the whole team, which is more involved in each part of the project (planning, decisions and development) and can manage risk using Scrum to achieve the best business value, so the best result for their customers.
Another important aspect to take into consideration when you are working with Scrum is that we don't have the concept of a "project" anymore; we work on features and improvements with continuous delivery instead of a "big bang delivery" at the end of the project.
This ensures we focus more on the business value instead of wasting time on something that is not important for the product and that wouldn't have the right value for our stakeholders.
From my perspective, a natural evolution for a project manager could be the product owner role or, if he has a technical background, the Scrum Master role; for sure the most important thing is that he has to change the way he approaches development and think in a collaborative instead of a hierarchical way.

Product Owner

This role is very important inside a Scrum team; in fact, the product owner is the "voice" of stakeholders, customers and users.
He is also the person who holds the product vision, who is in charge of taking the right business decisions during the development of a new or existing product, and who has to help the team, every day if possible, answering questions related to the product.
He is in charge of creating and maintaining the Product Backlog and of sharing the product vision with the whole Scrum team.
The most important thing is that the Product Owner is always available for the team, because he is part of the team and his role is essential to accomplish all the business aims of a project.
In fact, he is in charge of ordering the activities in the Product Backlog and of updating them (this process is called grooming), following the customer's perspective.
Usually the Product Owner is someone who is comfortable talking with customers and internal management, and technical enough to talk with the Development Team.

Scrum Master

The perfect definition of a Scrum Master is: "Mr. Wolf"!
The Scrum Master is the person who facilitates the use of Scrum inside the team: he has to remove impediments for the Scrum team, facilitate the introduction of Scrum in a new team, shield the Development Team from any external distraction, and so on.
The key concept for any person approaching this role is that you are not anybody's boss: you are coaching Scrum to people, not imposing something on them!
In fact, another good definition of the Scrum Master is the servant leader.
Usually the Scrum Master is a lead developer or a QA person; in any case, it should be someone with enough technical knowledge to understand and solve the problems of the team and to discuss with the Product Owner, guiding him to create good artifacts for the Scrum team.
This activity requires roughly 50% of a person's time, so the Scrum Master is often a member of the Development Team as well.

Development Team

When Scrum talks about the Development Team it doesn't mean only developers, but a group of people who work actively on the product.
So inside the development team we can find analysts, designers, testers, the UX team, developers and so on.
In Scrum the team has cross-functional skills that allow it to deliver features every Sprint.
Another important thing is that the team is self-organised, so nobody (neither the Scrum Master nor the Product Owner) can add activities to their daily job or impose a timeframe to finish a task.
At the beginning this may feel strange, but this way people in a team take on more responsibility and the project will be done in the best possible way.
In the old way of managing a team, where the project manager imposed his role to get the features he wanted when he wanted them, the team was just executing, without any relationship with the project and often without any reward if the project went well.
In Scrum the team is in charge of estimating the work, of choosing the tasks to accomplish during a Sprint together with the Product Owner and, last but not least, of developing as many features as possible with a high quality of code (tested, documented and so on).
We'll talk more in depth about estimates and about the "definition of done" in future posts; for now, what you need to know is that the team is not passive anymore but plays an active and important role in achieving the goals of each product.

Other actors

In Scrum the main roles are the ones you have read about above, but there are many other people who participate in the good delivery of a product, in particular when you start to scale Scrum to many teams.
In fact, Scrum is a framework for managing 1 or 2 teams, not more; there are other Agile frameworks that help you with that, like SAFe (Scaled Agile Framework).
Other people who participate in the delivery of a product are, for sure, the stakeholders.
Stakeholders can be internal or external: when we refer to internal stakeholders we mean our management, who share with the team their needs and their point of view on how to improve or change the product developed by the team.
The external stakeholders are our customers or users; yes, you read that right, our customers or users are very important in the Scrum process because they are the ones who decide whether the product is going in the right direction or failing.
Personally I think the biggest challenge is involving our customers in the Scrum process, but for sure it will be very useful for the team and the product, because they can constantly give us feedback on the right direction to take with the product.

Approaching Scrum

Have you ever felt disappointed because you didn't finish a project on time?
When you try to estimate a project, do you feel like a poker player?
Have you ever written bad code because you were over time and needed to deliver as fast as possible?

If you think there is no way to escape from this nightmare, you are completely wrong, and I came to understand that in the last few months as well!
I don't want to say that with Agile you can solve everything, but what I can say for sure is that it can help you achieve your goals and deliver on time with the best quality and value for your customers or users.

Let’s start from the beginning…

What is Scrum?

Scrum is an iterative and incremental Agile software development framework for managing software projects and product or application development. Its focus is on “a flexible, holistic product development strategy where a development team works as a unit to reach a common goal” as opposed to a “traditional, sequential approach”. Scrum enables the creation of self-organizing teams by encouraging co-location of all team members, and verbal communication among all team members and disciplines in the project.

A key principle of Scrum is its recognition that during a project the customers can change their minds about what they want and need (often called requirements churn), and that unpredicted challenges cannot be easily addressed in a traditional predictive or planned manner. As such, Scrum adopts an empirical approach—accepting that the problem cannot be fully understood or defined, focusing instead on maximizing the team’s ability to deliver quickly and respond to emerging requirements.

From Wikipedia.org

When you start to read about Scrum, a lot of things seem impossible to apply in your daily job, but I think that, like any transition period, you can get to working completely in Scrum in less than 6 months if you really want to change your organisation and improve it.
From my experience Scrum is a good framework that should be used extensively if you are working on products that will be delivered to the final user; otherwise you have to train your customer to be Agile, and in the real world that can be more difficult.
Mainly because you need to have built good trust before starting to involve them in an Agile process, and sometimes there isn't enough time to do so.

Why have I to use Scrum?

That's a good question; it's the same one I asked myself a few months ago, and finally I have an answer. If you are a developer, think back to when you started to write your first lines of code: obviously, day by day, you increased your knowledge and your code became better and better, until procedural code wasn't enough anymore and you started to look for something more.
Then you discovered OOP concepts and maybe design patterns; after a while you started to work with MVC, MVP, MVVM or your favourite architecture, and probably, after many years, if you look back you won't write procedural code anymore.
Does it sound familiar?
In the same way that design patterns and a micro-architecture drive you to create solid and maintainable projects, Scrum can help you organise your projects creating great business value, knowing at any time the next steps and the actual status of the project, estimating better the goals and the time to achieve them and, last but not least, managing risk in a better way than "traditional" methodologies (the waterfall model, for instance).

How does Scrum works?

Scrum diagram

As you can see from the diagram above, Scrum is an iterative workflow that happens in a small amount of time (usually 2 weeks, or 4 weeks at most) where, with few documents and a lot of communication, you can achieve the best trade-off between the business value for your final customer and the quality of your software.
Scrum is composed of some actors (Product Owner, Scrum Master, Development Team and Stakeholders), some meetings (Release Planning, Backlog Refinement, Daily Scrum, Sprint Planning, Sprint Review, Sprint Retrospective) and a few artifacts (Product Backlog, Sprint Backlog, Burndown Chart, Product Increment).
The most important concept to keep in mind is that Scrum is easy to use and to understand, but if you want to get its benefits you have to follow its rules.
To enter this interesting world you have to keep in mind the 3 main principles of an empirical process like Scrum:

  • Transparency
  • Inspection
  • Adaptation

Without these 3 fundamental principles, Scrum is not useful at all!
Transparency means that you and your team don't have to hide anything from anybody; if there are any impediments, problems or bottlenecks, by following Scrum you can find and resolve them.
Inspection means that you and your team have to analyse what you have done after a small amount of time (the Sprint Retrospective) and find out what positive or negative things happened during that period.
Adaptation means that not everything is binary (0 or 1, true or false): you have to adapt the way you work day by day, improving yourself by inspecting what you have done and being agile!

If you want, Scrum is not only a good approach to work; it could be a good approach to life as well!
(Check also this useful article on Scrum Alliance website that explains Scrum in 30 secs)

Ok, now I’m really interested in Scrum, where to begin?

There are many books that can help you enter this amazing world; the first one I can suggest is Essential Scrum: A Practical Guide to the Most Popular Agile Process.

Essential Scrum

In this book you can really understand how the Scrum framework works and how to use it in your daily job.
I also suggest it if you are planning to join a Scrum training course; it can really help you prepare well for the course and for the following certification exam.

Next steps

What I'd like to share with you are my notes about Scrum, studied in books, read on blogs and social networks and discussed with my colleagues; my idea is to fix a few concepts on this blog that could also be helpful for people who are approaching Scrum right now or who would like to know more about it.
There are really tons of things to know and you never finish learning (as usual), so I think a blog could be a good place to share the basics of Scrum and, in the future, to go more in depth with real case studies related to my daily job.
I hope you will enjoy this information, which is not what you usually find on my blog but maybe could be interesting as well; as usual, any suggestion will be very appreciated, so don't be shy and share your thoughts!

Photoshop, Edge Reflow and Edge Inspect: the new responsive workflow

Today I'd like to talk about something that is not strictly related to the development process but that is very useful when you are running your own business as a freelancer or entrepreneur, or if you are the leader of a team.
One of the most important things for me, when approaching a new technology, is not only understanding whether it could fit all my needs but also understanding, when I introduce it in my team or company, how to get the best result as soon as possible.
That's why I always pay a lot of attention to creating a flexible and elastic workflow that allows my team to create or modify client-side solutions without wasting our time.
In recent years we have rapidly seen the growth of a hot topic, strictly related to HTML5 and JavaScript: Responsive Design, that is, the capability to create an interface that is viewable and usable on different devices (from smartphones to web browsers, for instance).
Personally, if I don't find anything that helps my team to be immediately very productive, I usually avoid introducing new software into the current workflow; but this time we are in the middle of a big revolution where HTML5 and JavaScript are the main protagonists.
During the last Adobe MAX I saw a couple of interesting demos of the Edge "family", and I was impressed by the capability of Edge Reflow and its interaction with Photoshop CC to create user interfaces for different devices in really little time; that's why I was really looking forward to testing this feature, and I'd like to share with you my first experiment.

rwdTools

Photoshop CC and Edge Reflow

I think a lot of designers create the UI for a project with Photoshop. Last Monday (9th September) Adobe released an update of Photoshop CC and Edge Reflow; let's start with Photoshop, because the news is really cool.
One of the most boring activities for a designer (or for me when I did it as a freelancer :D) is cutting all the images and preparing the assets in different folders for the developers.
Photoshop CC helps us by introducing a new feature called Adobe Generator, a new way to automate this long and tedious phase, where the designer only has to follow some simple rules on how to name the Photoshop layers and the software automatically exports all the assets for us, ready to be delivered to the development team!
For instance, if you want to export a particular layer as a PNG, you only need to name the layer with a PNG extension (for example: "background.png") and run the new Photoshop command Generate > Image Assets to have all your files ready to be added to the real project.

Photoshop Generate command

To know more about Adobe Generator, and in particular how to set the name of each layer, I warmly suggest taking a look at Photoshop.com, where you can find all the information you need.
Another option we have (as you can see in the image above) is the capability to export the UI structure and the assets to Edge Reflow.
If you don't know what Edge Reflow is, let me explain it in a few words.
Edge Reflow is a tool for creating responsive design layouts and, since yesterday, it is completely integrated with Photoshop CC.
In fact, now you can import your layout into Edge Reflow and start to customise it visually for any screen resolution your project will target.

Edge Reflow

The most interesting thing is that you can export an Edge Reflow project from Photoshop, or you can synchronise changes in real time when the 2 programs are open.
Then you can create your layout for different resolutions, simply copying and pasting the code generated by Edge Reflow into your favourite code IDE; I say copy and paste for now, rather than import, because you'll probably (99% of the time) have to improve or change it a little bit after pasting, but it's really a good step forward for software in preview like Edge Reflow.
With Edge Reflow you can create <div> elements by adding box elements to your layout, and you can show or hide elements at different screen resolutions simply with the options on the left side of the software interface.
Another very cool thing is the capability to work with your Typekit account (included in your Creative Cloud subscription) to download the fonts needed in the layout made with Photoshop.

Edge Reflow and Edge Inspect

Last but not least, Edge Reflow is integrated with another cool product of the Edge family, called Edge Inspect.
Edge Inspect is a simple application that you can add as a plug-in to Chrome, or download onto your iOS or Android device from the relative store; it allows you to test in real time all the changes you are making to a website, or more generally to HTML, JS or CSS files, checking the final result on one or more devices simultaneously.
This is a capability that was partially missing in the Flash development workflow, where mobile testing was a real pain (in particular with the first releases of Adobe AIR on mobile); in this case, with all these new technologies, Adobe has decided to evolve and improve this experience, providing good tools for development.

From a developer perspective

Personally I think the integration of a technology like Node.js in the latest Adobe software (Brackets, Adobe Generator, Edge Reflow and so on) is giving them a real boost, and it is opening new horizons in the desktop application field; in particular I suggest taking a look at Node-Webkit, an open source project that allows you to work with HTML5, JavaScript (Node.js obviously) and WebGL to create desktop applications for different platforms.
There are many other tools that could help achieve the same goal, like TideSDK for example, but I think Node-Webkit could become very interesting if the project is embraced by the community.

Conclusions

Demo Photoshop and Edge Reflow

Finally the big players on the market are delivering tools that allow us to create engaging and amazing experiences with HTML5 and JavaScript, like other technologies did in the past (the Flash Platform in primis).
The combination of Photoshop CC, Adobe Generator, Edge Reflow and Edge Inspect gives us a really flexible and integrated workflow where, in a few steps, we can save a lot of hours spent on the code, with great results.
Obviously these tools are new and in "preview", so they are not perfect, but they are stable and useful enough at this point to be integrated into the daily workflow, giving important results immediately.
I really hope this is the first step towards giving us the freedom to create, instead of going crazy getting layouts to work in different browsers and devices.

Comparison of colours in Actionscript 3

In this post I'd like to highlight a topic that is not the classic tip you use every day, but when you need it this "recipe" could help.

In the last few weeks I had the opportunity to study how to compare 2 or more images and find similar colours between them.
Honestly, when I started I really didn't know how to do that, so I searched the web for the best way to accomplish this task.
There are many different ways to compare colours; the easiest one I found works in the RGB colour space, but a lot of experts advise against this technique and suggest staying away from the RGB colour space and working with different colour spaces, in particular the ones that work in a 3D space.
The most complete is for sure CIELAB, so if you need an algorithm that compares colours in the best possible way I really suggest working with that colour space and reading this post on the topic; in my case I needed something in the middle, a colour space that could guarantee a good trade-off between complexity and accuracy.
In fact, in this example I worked with a 3D space like HSV (Hue, Saturation, Value); before going ahead with the post I warmly suggest reading more about HSV directly on Wikipedia.

Hue, Saturation, Value

And now let’s go ahead!!!

I prepared a class that you don't need to change if the idea behind it fits your needs; this is the code to compare 2 images with my ColorComparision object:

var util:ColorComparision = new ColorComparision();
var finalDiffArr:Array = util.getSimilars(bmp1.bitmapData, bmp2.bitmapData);

As you can see, in the second line you receive an Array of colours in RGB that you can use in the UI, or just as data if you want to add more complexity to this algorithm.

This comparison is made in this way:

1. I retrieve 64 average colours from each image (inside the ColorComparision class I set a constant you can change if you need a different value)
2. I convert each colour from RGB to HSV (see the sketch after this list)
3. I calculate the distance between each colour of the first image and each colour of the second image
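
For step 2, the RGB to HSV conversion is standard math; here is a rough sketch in JavaScript (the same formula translates directly to the ActionScript 3 inside the class):

function rgbToHsv(r, g, b) {
  // normalise the channels to the 0..1 range
  r /= 255; g /= 255; b /= 255;
  var max = Math.max(r, g, b);
  var min = Math.min(r, g, b);
  var delta = max - min;
  var h = 0;
  if (delta !== 0) {
    if (max === r) h = 60 * (((g - b) / delta) % 6);
    else if (max === g) h = 60 * (((b - r) / delta) + 2);
    else h = 60 * (((r - g) / delta) + 4);
  }
  if (h < 0) h += 360;
  var s = (max === 0) ? 0 : delta / max;
  var v = max;
  return { h: h, s: s, v: v }; // hue in degrees, saturation and value between 0 and 1
}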

To be as complete as possible, I also expose the capability to set the tolerance of the comparison in this way:

util.tollerance = Math.round(slider.value);

Finally, I made a quick example where you can play with your own images and see what happens; basically, in this simple sample, when you click the "compare" button you will see the colours that are similar between the 2 images you are comparing.

As3 Color Comparison

Last but not least, I leave you also this quick method to translate an RGB colour to HEX and use it in your Flash application:

function createRect(_obj:Object):Sprite{
 // the obj has 3 properties: r -> red, g -> green, b -> blue
 var intVal:int = _obj.r << 16 | _obj.g << 8 | _obj.b;
 var hex:String = "0x" + intVal.toString(16);

 var s:Sprite = new Sprite();
 s.graphics.beginFill(uint(hex));
 s.graphics.drawRect(0,0, 20, 20);
 s.graphics.endFill();

 return s;

}

On my GitHub space you can find the source code for this algorithm, enjoy!