Identifying micro-frontends in our applications

I’ve always spent a lot of time reading, attending conferences and researching different topics, and those learnings have really helped me shape my career; one of them is definitely Domain Driven Design (DDD).

Let’s take a step back first: why am I talking about DDD?

One thing that has always puzzled me in this industry is how rarely we learn from other technology communities; for instance, there is plenty of food for thought when we examine the principles behind microservices rather than focusing on the bare implementation.
On the backend side there are practices, methodologies and, more generally, ideas that are perfectly applicable to the frontend too, but we often don’t think about how to apply them.
Often, just taking a step back and understanding why someone implemented one pattern over another opens up a world of opportunities that we would never consider because “it’s not the standard way to do things”.
Cross-pollination from different industries or technologies allows us to see the world from a different perspective, creating new possibilities that haven’t been explored enough (or at all, sometimes) and giving us the chance to apply new concepts and mental models to our day-to-day work.

DDD key concepts

OK, now I can explain why DDD is mentioned in a post where I talk about micro-frontends.
DDD brings to the table some of the key concepts for defining micro-frontends because it helps our organisation align the business with the tech side, de facto unifying two main areas of our companies: product and tech.
DDD starts with the idea of identifying the parts of our application that represent subdomains of the final application.
Usually an application is focused on a core domain; for instance, Netflix’s core domain is streaming movies anywhere, at any time. Considering that a domain is usually a complex proposition, DDD suggests splitting it into multiple subdomains, allowing a company to understand how to structure the organisation as well as the project.
Some examples of subdomains could be authentication, customer support, inventory management and so on.

Subdomains are divided into 3 categories:

Core Subdomains: these are the main reason an application exists. Core subdomains should be treated as first-class citizens in our organisations because they are the ones that deliver value above anything else.
Supporting Subdomains: subdomains related to the core ones but not key differentiators; they support the core subdomains but at the same time are not essential for delivering the real value to our users.
Generic Subdomains: subdomains needed to complete the platform; companies often decide to go with off-the-shelf software for these because they are not strictly related to their domain, for instance authentication or payments management and, more generally, anything that is not related to our core business.

Inside each subdomain, tech and product teams should identify a ubiquitous language, or rather a shared vocabulary where business meets tech, using the same terms to identify functionalities, objects and, more generally, the domain model.

Think about it: how often do we speak with a product owner who describes part of an application in a completely different way from the techies!

The ubiquitous language is not static; it should evolve with the business and the applications running alongside it.
In this way we can define a domain model that mirrors what we discuss day to day with the domain experts and stays constantly up to date.
Let’s take Netflix, for instance; I think we are all familiar with this famous streaming platform. A subdomain of Netflix might be the catalogue: inside it we can identify multiple areas with specific functionalities, and those could be directly connected to backend APIs related to a subset of the entire application.
In Netflix’s case they use the Backend For Frontend pattern (BFF); nevertheless, the principles remain the same.

Netflix web platform

In this screenshot we can identify some components; those might be linked to microservices like the personalisation service, the catalogue per country, the most popular content and so on.
Regardless of the technical integration, which could be via backend for frontend, GraphQL, server-side rendering and so on, the most important thing to understand is that those areas are all linked to the same subdomain.

Therefore those microservices, as well as the frontend, should be encapsulated in a unique subdomain with its own ubiquitous language.

Below is an example to help understand what a subdomain should contain:

An example of subdomain based on the Netflix platform

This is a fundamental step in identifying how to “slice” our application; understanding that the frontend is part of the subdomain allows us to think holistically about our web application.
If we then extend the concept to the infrastructure too, we finally have all the components needed to develop a subdomain in the hands of a single team that can own Frontend, Backend and Infrastructure end to end, without too many external dependencies.

Up to now DDD has been applied to the backend layer but not very often to the frontend; extending these concepts to the frontend allows us to easily identify our micro-frontends.
I’d like to highlight another important concept: a subdomain cannot (and shouldn’t) be reduced to a single component in a page. It’s true that in each UI we can find links or graphical elements related to different subdomains, but we need to understand that those elements don’t identify a subdomain on their own, and that a team should own a subdomain end to end, as we are going to see in the next few paragraphs.

Identifying a micro-frontend’s bounded context

Following the DDD principles, identifying a micro-frontend becomes quite trivial.
Usually there are two main scenarios to deal with on a day-to-day basis: greenfield projects, usually very exciting for any developer but also more complicated because we don’t have real information about our user base and how they will consume our content; and legacy projects, where we have tons of information (if we have diligently tracked our users’ behaviours using Google Analytics or similar tools) and therefore it’s easier to rationalise a logical identification of bounded contexts across the entire platform following our users’ behaviours.

Having data to consult is one of the best situations we can hope for; understanding the users’ behaviours allows us to easily identify the subdomains of our application.
Let’s assume we see a huge amount of traffic on the landing page, then 70% of those users move to the authentication journey (sign in, sign up, payment…), and from there only 40% of the traffic subscribes to the service or uses their credentials to access it.

Users’ behaviours example in an application

Those are good indications of our users’ behaviours on our platform. DDD would suggest starting from the domain model of our application, identifying the subdomains and their related bounded contexts, and having behavioural data supports us in deciding how to “slice” the frontend application.

Users’ behaviours are invaluable for identifying our micro-frontends.

In the example discussed before, if we think about the technical implementation, having 100% of our users download only the code related to the landing page gives them a faster experience, because they won’t download the entire application immediately, and the 30% of users who don’t move forward to the authentication area will have downloaded just enough code to understand our service.
Obviously, mobile devices with slow connections can only benefit from this approach, for multiple reasons: fewer kilobytes to download, less memory used, less JavaScript to parse and execute, and a faster first interaction with the page.
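
To make the idea concrete, here is a minimal sketch of how a client-side orchestrator could load only the code for the subdomain the user is visiting. It is not the actual DAZN implementation; the module paths and the mount() convention are hypothetical, and it assumes each micro-frontend is bundled separately:

// hypothetical mapping between routes and lazily loaded micro-frontends
const routes = {
   '/': () => import('./landing-page/index.js'),
   '/auth': () => import('./authentication/index.js'),
   '/catalogue': () => import('./catalogue/index.js')
};

async function loadMicroFrontend(path) {
   const loader = routes[path] || routes['/'];
   // each bundle is assumed to expose a mount() function
   const { mount } = await loader();
   mount(document.getElementById('app'));
}

loadMicroFrontend(window.location.pathname);

With this approach a user landing on the homepage downloads only the landing-page bundle; the authentication code is fetched only if and when they start that journey.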

Greenfield projects are a bit more complicated to manage; identifying micro-frontends upfront, without knowing how our users interact with the platform, could result in bad experiences, but nevertheless we have to find a way to structure our micro-frontends architecture.
In this case, working closely with the product team or the subject experts could make a huge difference.
In my experience, any startup or medium-large organisation always has a team or a person with a clear idea of how the platform should behave, someone who knows the core domain of the organisation inside out.
This person or team is key to understanding how the user should behave and therefore to identifying the domain model of our application as well as our micro-frontends.

It’s essential to understand that subdomains evolve with the business; never assume they are immutable!

At DAZN we decided to split our Single Page Application into multiple subdomains based on the data gathered over the past years, ending up with 5 different micro-frontends, plus a few components developed by external teams and embedded as dependencies in one micro-frontend.
We identified the following micro-frontends:

• Landing page
• Authentication
• Catalogue
• Playback
• Sports data
• User account
• Help
• Chat

For instance, Playback and Sports Data are components living inside the Catalogue micro-frontend; the complexity of those two subdomains led us to assign a dedicated team to each of them.
Those components are published in a private NPM registry and are treated as external dependencies of the Catalogue micro-frontend.
All the others are SPAs or single pages loaded by our client-side orchestrator.
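
As an illustration only (the package name and API below are hypothetical, not the real DAZN ones), consuming such a component from the Catalogue micro-frontend looks like pulling in any other dependency from the private registry:

// the Playback team publishes its component to the private NPM registry;
// the Catalogue team only depends on its public interface
import { PlaybackPlayer } from '@acme/playback';

export function openPlayer(container, asset) {
   const player = new PlaybackPlayer({ assetId: asset.id });
   player.mount(container);
}

The Catalogue team bumps the dependency version when the Playback team releases a new one, keeping the two subdomains decoupled.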

The power of local decisions

Working with subdomains allows us to assign a team to a specific area of our application; now stop for a moment and think how powerful this concept could be…
One of the key things that I’ve always envied in startups is how fast they are capable of moving and how quickly they take decisions on architecture, design or even UX.
When they need to take a decision, it’s a matter of minutes or hours, not weeks like in large organisations where we need a quorum of people agreeing on the solution.
If we think even further, a startup can react very quickly because it can take local decisions; in its case, a local decision is a company decision. If we extend this concept to medium-large organisations, dividing an application by subdomains allows us to have “multiple startups” inside an organisation; therefore, empowering a team to take local decisions speeds up delivery, reduces frustration and brings to the table interesting possibilities like independent builds and deployments, fewer external dependencies and more innovation.
The outcome of using DDD to identify a ubiquitous language and subdomains is a cross-functional team composed of frontend developers, backend developers, manual QAs and devs-in-test, working closely with their product team/subject expert, able to take a wide range of local decisions, from product decisions to infrastructure decisions, and responsible for the subdomain end to end.

Obviously, this team cannot (and shouldn’t!) become a remote island where every decision is taken locally; these teams have to collaborate with the rest of the organisation, using services like architects, cloud experts and other functions inside the organisation, within the boundaries created by the heads of the technical department.

Organization example where each team represents a subdomain

In the past years I have read a lot about DDD, and I found an interesting box inside the Domain Driven Design Distilled book that caught my attention; I think it’s worth sharing in this post to reinforce the concepts explained in this paragraph:

Bounded Contexts, Teams, and Source Code Repositories

There should be one team assigned to work on one Bounded Context. There should also be a separate source code repository for each Bounded Context. It is possible that one team could work on multiple Bounded Contexts, but multiple teams should not work on a single Bounded Context. […]

It is especially important to be clear that one team works on a single Bounded Context. This completely eliminates the chances of any unwelcome surprises that arise when another team makes a change to your source code. Your team owns the source code and the database and defines the official interfaces through which your Bounded Context must be used. It’s a benefit of using DDD.

from Domain Driven Design Distilled — chapter 2

5 suggestions for dividing your frontend monolith

Last but not least, I think it would be helpful to close this post with some takeaways based on my experience and what I have seen so far:

  1. Gather data: if you have a legacy project, you can use Google Analytics or similar services to understand how your users are interacting with your application; you will get a clear idea of how your user base interacts with it.
    For greenfield projects, engage with your product team or customer, add GA or similar tools to your web application, and validate the initial assumptions with data. Remember, bounded contexts and subdomains evolve with your business; they are not defined once and set in stone!
  2. Talk with the domain experts: invest time with your product team or the domain experts in your company, understand their point of view, their roadmap and how they intend to evolve the project; this is vital information for identifying the micro-frontends.
  3. Review the team organisation: don’t fall into the trap of defining the teams once and never changing them; teams, like your business, should be fluid. If you see that, following DDD, some teams cross multiple subdomains, make the bold decision of reviewing the internal organisation; the entire business will benefit from it!
  4. A micro-frontend could be a single page, a SPA or SSR: as long as you follow DDD to identify your subdomains, a micro-frontend may end up being represented by a single page, as in the case of a landing page, or by a more complex solution based on a Single Page Application or Server Side Rendering architecture.
    Components risk not being representative of a subdomain because they are tightly coupled to the container where they are nested; the overlap of multiple contexts could therefore cause more issues than benefits.
  5. Invest the right amount of time at the beginning of your project: designing an entire architecture upfront is not the best way to start a project; an architecture should usually evolve iteratively, so we should design “just enough” and slowly but steadily enhance it based on the additional information we gather by engaging with the product team, developers and users.
    When you are identifying the different subdomains of your application, invest enough time, because this decision could impact how the tech teams are structured as well as how much communication overhead your company will pay due to dependencies between teams.

Micro-frontends, the future of Frontend architectures

Micro-frontends architecture

In the past 30 months, I had the opportunity to work on one of the most challenging architectures I’ve ever designed in my career.
The main requirements were speed of delivery, scalability and code quality.
Frontend applications are becoming more challenging every day, and achieving those requirements in a company with massive growth like DAZN was far from an easy task.

The first step for me was identifying how to achieve those requirements in a meaningful manner; therefore I started thinking about how I could reach those goals in an ideal world, and then worked backwards through the constraints we had inside our company.

The speed of delivery could be achieved by parallelising tasks across multiple teams; the real challenge, though, is keeping the teams independent enough not to be blocked by external dependencies, in particular when the teams are distributed and not co-located.

Scalability in the frontend ecosystem is not only about technical challenges but mainly about autonomous teams. Too often I have experienced the frustration of frontend developers caused by external dependencies, or because they have to maintain and improve a codebase that started with one purpose and evolved into a monster, becoming unmanageable after some months or a few years of work. Ideally we should be able to scale our teams organically, adapting them to the business needs without too much friction, rather than being trapped inside codebases that don’t really follow the “business rhythm”.

Code quality is a non-functional requirement that every team and company out there aims for, but often, despite the goodwill of each team member, pressure from the business forces hard decisions and corners get cut; the tech debt increases and, if not addressed properly, has a knock-on effect on the entire organisation and on team morale.

On top of those key goals, a personal one I thought was key for the project I was about to redesign was innovation. In the JavaScript community there are plenty of talented teams and individuals contributing to open source projects with great libraries, frameworks and, more generally, solutions that could make our life easier or even accelerate the time to market of specific features. Ignoring this fantastic ecosystem would have been technical suicide, considering I was working on an architecture for the future that should remain in the company for the foreseeable future.

To achieve all of these goals I had to think outside the box, leveraging past experiences and the learnings from the successes as well as the failures of my career.
It was then that I thought about micro-frontends: following the microservices principles, I was able to extract a manifesto based on what I needed to achieve:

DAZN micro-frontends manifesto

Usually, when we design a new architecture, we need to bear in mind that architectural and technical decisions affect not merely the code and our technical teams but the entire organisation we work for; therefore it is essential to understand the impact of those choices across our company.

If you want to learn more, I summarised this incredible journey in a talk with my colleague Max Gallo at the last edition of Frontend Developer Love Conference. The feedback at the conference was really positive, but I decided to use this platform to understand what other people think and to create a genuine discussion around a topic that is going to change the future of our frontend applications: micro-frontends.

Enjoy the talk and feel free to comment or ask any questions; I’d really like to gather the experiences and common questions/doubts of the community around micro-frontends, doing my best to answer them all.

Last but not least, if you want to learn more about micro-frontends, I warmly recommend joining me on the 26th of April for the 3-hour online workshop organised in collaboration with O’Reilly Media.

HTTP2: the good, the bad and the ugly

I spent the last few weeks investigating HTTP2, the successor of HTTP1.1, and I’d like to share my findings and thoughts in this post.

Let’s start by saying that if the question you have in mind at this point is: “Can I really use it today, not only for experiments but also in production?”
My answer would be: “YES, you can!”

First of all, I’d like to share with you the browser implementation status for this protocol:

Browser support for HTTP2 (screenshot from caniuse.com)

As you can see from the screenshot taken from caniuse.com, it’s definitely well supported in the latest versions of the major browsers, with some caveats obviously.

If you are not convinced yet, please check this website with one of the browsers that currently support HTTP2 and look how fast it loads!
I’d suggest installing the HTTP2 indicator Chrome extension to discover how many web apps or online services are already using this protocol:

The HTTP2 indicator extension highlighting HTTP2-enabled services

Not convinced yet?! OK, let’s move to a deeper analysis then!

HTTP2 is a binary protocol that implements request multiplexing, which means the browser can send multiple requests over a single connection and receive the responses asynchronously.

This massive change will drastically increase the performance of your application.
Considering that at the moment a browser can download only a handful of resources per domain simultaneously (usually six; let’s avoid talking about “domain sharding” for now), with HTTP2 we can request all the resources in parallel and render them as soon as they are downloaded. Check this demo, made with Go, for a proper comparison between the two protocols, and check the Network panel in the Chrome or Firefox dev tools to understand how the two protocols differ.

The Good

HTTP2 has only a few rules you need to respect in order to implement it:

  • it works ONLY over HTTPS (browsers support HTTP2 only on top of TLS, therefore you need a valid SSL certificate)
  • it’s backward compatible, so if a browser or a device where your application is running doesn’t support HTTP2, it will fall back to HTTP1.1
  • it comes with great performance improvements out of the box
  • it doesn’t require you to do anything on the client side; for a basic implementation only the server side needs changes
  • a few new interesting features will allow you to speed up the load of your web project in a way that is not even imaginable with an HTTP1.1 implementation

Despite the short list, HTTP2 brings a substantial change to the internet ecosystem.
One of my favourite features is server PUSH, where a server can pass a link header specifying what the browser should download in advance, before it even starts parsing the HTML document.
In this case, we can educate the browser to download several resources like images, CSS or even JavaScript files before the engine encounters them inside the DOM, providing a better user experience for our web apps and/or games.
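
To see what this could look like in code, here is a minimal sketch using Node’s core http2 module (which shipped after this post was originally written); the certificate and file names are placeholders, not part of any real project:

const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
   key: fs.readFileSync('key.pem'),
   cert: fs.readFileSync('cert.pem')
});

server.on('stream', (stream, headers) => {
   if (headers[':path'] === '/') {
      // push the stylesheet before the browser discovers it in the HTML
      stream.pushStream({ ':path': '/style.css' }, (err, pushStream) => {
         if (err) return;
         pushStream.respondWithFile('style.css', { 'content-type': 'text/css' });
      });
      stream.respondWithFile('index.html', { 'content-type': 'text/html' });
   }
});

server.listen(8443);

The browser receives style.css together with the HTML document, so by the time the parser finds the stylesheet reference the resource has already been delivered.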

The Bad

There is still plenty of work to do in order to get great penetration for this protocol; a few specs are still in progress (read the next paragraph: the ugly) and it will probably take quite a few months before we see a lot of services moving to this new protocol.

Apart from this high-level overview of the downsides, let’s look at what will change on the technical side.

Considering that HTTP2 doesn’t restrict the number of requests a browser makes to download resources, a few of the techniques we use to optimise our websites will need to be reviewed or even removed from our pipeline.
Delivering the whole application inside a unique JavaScript file won’t bring any benefit with HTTP2, so we need to change our logic, downloading only what we need when we need it.
Knowing that many small requests are no longer a problem, we could serve individual small images instead of a sprite to handle the icons of our website.
Probably the different tools like Grunt, Gulp or webpack will need to review their strategies or update their plugins in order to provide real value in this new pipeline.

The Ugly

Google Chrome’s protocol implementation!
Chrome is my favourite browser and I use it extensively, in particular when I need to debug a specific script or gather metrics about a specific behaviour of a web app.
At the moment it’s the only browser that requires HTTP2 server negotiation via ALPN (Application-Layer Protocol Negotiation), an extension that allows the application layer to negotiate which protocol will be used within the TLS connection.

Considering that OpenSSL integrates ALPN only from version 1.0.2, we won’t be able to enable HTTP2 support for Chrome (from build 51 onwards) if we don’t configure our server correctly.
For instance, on Linux, only Ubuntu from version 16.04 ships that OpenSSL version by default; on all the other major Linux distributions you will either have to install the newer version manually or wait for the next major OS release.

I’d suggest reading carefully the article describing this “issue” on the nginx blog before you start configuring your server for Chrome.

Wrap up

HTTP2 is not perfect and probably not as well supported as it should be, but it can definitely improve (drastically, in certain cases) your web project’s performance.
A lot of “big players” are already using the HTTP2 protocol in production (Instagram, Twitter and Facebook, for instance) and the results are remarkable.

Why not start catching up with the future today?

Hapi.js and MongoDB

During the Fullstack conference I saw a small project made with Hapi.js presented in a talk, so I decided to invest some time working with Hapi.js in order to investigate how easy it is to create a Node.js application with this framework.

I have to admit, this framework is really well done, with a plugin system that gives you a lot of flexibility when you are creating your server-side applications and with a decent community that provides a lot of useful information and plugins to speed up project development.

When I started to read the only book available on this framework, I was impressed by its simplicity and by the consideration behind the framework, but more importantly by where Hapi.js was used for the first time.
The first enterprise app made with this framework was released during Black Friday on the Walmart e-commerce site. The results were amazing!
In fact, one of the main contributors to this open source framework is Walmart Labs, which means a big organisation with real problems to solve; definitely a good starting point!

Express vs Hapi.js

If you are asking why not Express, I can reply with a few arguments:

  • express is a super light and general-purpose framework that works perfectly for small to medium size applications.
  • hapi.js was built on top of express at the beginning, but then they moved away in order to create something more solid and with more built-in functionality; a framework should speed up your productivity, not merely give you a structure to follow.
  • express is code based whereas hapi.js is configuration based (with a lot of flexibility, of course); see the small comparison after this list
  • express uses middleware, hapi.js uses plugins
  • hapi.js is built with testing and security in mind!
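
To make the “code versus configuration” point concrete, here is a purely illustrative sketch (not taken from the sample project) of the same endpoint written with both frameworks; the hapi.js version assumes a server instance created as shown later in this post:

// Express: the endpoint is defined imperatively with a middleware-style callback
const express = require('express');
const app = express();
app.get('/allbooks', (req, res) => {
   res.json([{ title: 'The Hobbit' }]);
});

// hapi.js: the route is described by a configuration object passed to server.route
server.route({
   method: 'GET',
   path: '/allbooks',
   handler: (request, reply) => reply([{ title: 'The Hobbit' }])
});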

Hapi.js

Let’s start by saying that working with this framework is incredibly easy once you understand the few concepts you need to know in order to create a Node project.

I created a sample project where I integrated a Mongo database, exposing a few endpoints in order to add a new document inside a Mongo collection, update a specific document, retrieve all documents available inside the database and retrieve the details of a selected document.

Inside the git repo you can also find the frontend code (books.html in the project root) in vanilla JavaScript, mainly because if you are passionate about React, Angular or any other frontend library, you’ll be able to understand the integration without any particular framework knowledge.

What I’m going to describe now is how I structured the server-side code with Hapi.js.

In order to create a server in Hapi.js you just need a few lines of code:

const Hapi = require('hapi');

let server = new Hapi.Server();
server.connection();
server.start((err) => console.log('Server started at:', server.info.uri));

As you can see in the example (src/index.js), I created the server in the first few lines after the require statements, and I started the server (server.start) after the registration of the mongoDB plugin. But one step at a time.

After creating the server object, I defined my routes with the server.route method.
The route method allows you to set just one route with an object, or several routes by passing an array of objects.
Each route should contain the method parameter, where you define the HTTP method used to reach the path; you can also set a wildcard (*) so any method will be accepted for that path.
Obviously you then have to set the route path; bear in mind that it always has to start with a slash (/) in order to define the path correctly.
The path also accepts variables inside curly brackets, as you can see in the last route of my example: path: ‘/bookdetails/{id}’.

Last but not least, you need to define what’s going to happen when a client requests that particular path, by specifying the handler property.
The handler expects a function with 2 parameters: request and reply.

This is a basic route implementation:

{
   method: 'GET',
   path: '/allbooks',
   handler: (request, reply) => { ... }
}

When you structure a real application, and not an example like this one, you can wrap the handler property inside the config property.
Config accepts an object that will become your controller for that route.
So, as you can see, it’s really up to you to pick the right design solution for your project: it could be inline because it’s a small project or a PoC, or an external module because you have a large project where you want to structure your code properly in an MVC fashion (we’ll see that in the next blog post ;-)).
In my example I created the config property also because you can then use an awesome library called JOI in order to validate the data received from the client application.
Validating data with JOI is really simple:

validate: {
   payload: {
      title: Joi.string().required(),
      author: Joi.string().required(),
      pages: Joi.number().required(),
      category: Joi.string().required()
   }
}

In my example, for instance, I check that I receive the correct number of arguments (required()) and in the right format (string() or number()).
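
Putting the pieces together, a complete route with validation could look roughly like this; it is a simplified sketch of the addbook route, so the exact shape in the repo may differ:

{
   method: 'POST',
   path: '/addbook',
   config: {
      validate: {
         payload: {
            title: Joi.string().required(),
            author: Joi.string().required(),
            pages: Joi.number().required(),
            category: Joi.string().required()
         }
      },
      handler: (request, reply) => {
         // request.payload has already been validated by JOI at this point
         reply({ message: 'book received', book: request.payload });
      }
   }
}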

MongoDB plugin

Now that we understand how to create a simple server with Hapi.js, let’s go deeper into the Hapi.js plugin system, the most important part of this framework.
You can find several plugins created by the community, and on the official website you can also find a tutorial that explains step by step how to create a custom plugin for hapi.js.

In my example I used the hapi-mongodb plugin, which allows me to connect a Mongo database to my Node.js application.
If you are more familiar with mongoose you can always use the mongoose plugin for Hapi.js.
One important thing to bear in mind about a Hapi.js plugin is that, once registered, it will be accessible from any handler method via request.server.plugins; it’s injected automatically by the framework in order to facilitate the development flow.
So the first thing to do in order to use the mongodb plugin in our application is to register it:

server.register({
   register: MongoDB,
   options: DBConfig.opts
}, (err) => {
   if (err) {
      console.error(err);
      throw err;
   }

   server.start((err) => console.log('Server started at:', server.info.uri));
});

As you can see, I just need to specify which plugin I want to use in the register method, along with its configuration.
This is an example of the configuration you need to specify in order to connect your MongoDB instance to the application:

module.exports = {
   opts: {
      "url": "mongodb://username:password@id.mongolab.com:port/collection-name",
      "settings": {
         "db": {
            "native_parser": false
         }
      }
   }
}

In my case the configuration is an external object where I specified the Mongo database URL and the settings.
If you want a quick and free way to use MongoDB in the cloud, I can suggest mongolab: when you register you’ll have 500MB of data for free per account, so for testing purposes it’s really the perfect cloud service!
Last but not least, once the plugin registration has happened, I can start my server.

When I need to use the plugin inside any handler function, I can retrieve it in this way:

var db = request.server.plugins['hapi-mongodb'].db;

In my sample application I created a few routes: add a new document (addbook route), retrieve all the books (allbooks route) and the details of a specific book (bookdetails route).
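
For instance, a simplified sketch of what the allbooks handler could look like (the actual code in the repo may differ slightly):

handler: (request, reply) => {
   // the plugin exposes the native MongoDB connection on request.server.plugins
   const db = request.server.plugins['hapi-mongodb'].db;

   db.collection('books').find({}).toArray((err, books) => {
      // Boom is the error library already used elsewhere in the project
      if (err) return reply(Boom.internal('Internal MongoDB error', err));
      return reply(books);
   });
}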


If you want to update a record in Mongo, remember to use the update method over the insert method because, if correctly handled, the update method will check inside your database whether there are any other occurrences: if there is one, it will update that occurrence, otherwise it will create a new document inside the Mongo collection.
Below is an extract of this technique: in the first object you specify the key for searching an item, then the object to replace it with, and the last object you need to add is one with upsert set to true (by default it is false), which will allow you to create the new document if it doesn’t exist in your collection:

db.collection('books').updateOne({"title": request.payload.title}, dbDoc, {upsert: true}, (err, result) => {
    if(err) return reply(Boom.internal('Internal MongoDB error', err));
    return reply(result);
});

SAMPLE PROJECT GITHUB REPOSITORY

Resources

If you are interested in going deeper into Hapi.js, I’d suggest taking a look at the official website or at the book currently available.
An interesting piece of news is that there are a few other books about Hapi.js that will be published soon:

That usually means Hapi.js is getting adopted by several companies and developers, and it’s definitely a good sign for the health of the framework.

Wrap up

In this post I shared with you a quick introduction to the Hapi.js framework and its peculiarities.
If you’ve enjoyed it, please let me know what you would be interested in, so I’ll be able to prepare other posts on the topics you prefer.
Probably the next one will be about the different template systems (handlebars, react…), about universal applications (or isomorphic applications, as you prefer to call them), or a test drive of a few plugins to use in Hapi.js web applications.

Anyway I’ll wait for your input as well 😀