ES2015 Destructuring assignment: by value or reference?

This week I’ve organised a meetup on ES2015 in my community, where the speaker presented his favourite features of the language.

Right after the talk I had the chance to chat with my best friend, who asked whether destructuring assigns values by copying them to the new variable or by reference when you work with an Object or an Array.
Because I hadn’t had a chance to work with this new ES2015 feature before, I put together a quick example just to get an answer to this question.

It turns out that destructuring works by reference and doesn’t copy the value.
That means that any time you change a value inside a variable that contains an Object or an Array assigned via destructuring, the original Object or Array is affected as well, as you can see in the simple example below.
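This is roughly the test I put together to check it; the object and the values are just for illustration:

  let person = { name: 'John', languages: ['haxe', 'js'] };
  let { languages } = person;     // destructuring assignment

  languages.push('python');       // change a value inside the destructured variable

  console.log(person.languages);  // ['haxe', 'js', 'python'] → the original object is affected too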

So when you work with destructuring, bear in mind that you need to pay close attention whenever you change a value inside your destructured Objects and Arrays!

Leading Lean Software Development: book review

It’s been a long time since I last shared my thoughts about a book I’ve read, but this time it’s totally worth the effort to write this blog post.
I usually read between 15 and 20 books per year during my commute, and I really enjoy the different points of view of the authors; it’s fascinating how the same topic can be treated in so many ways.

Leading Lean Software Development

The book that caught my attention is called Leading Lean Software Development, published by Addison-Wesley.
I’ve read many books on Lean and Agile topics, but this one, I have to say, is one of the best for people like me who work in software development with these methodologies.
I strongly believe there isn’t one framework to rule them all, so I love learning and trying different approaches like Scrum, Kanban, Kaizen, Grows and many others out there, catching their essence so I can use the best technique when I need it.

This book does exactly that: it not only emphasises the importance of Lean methodologies but also mixes them with software engineering practices, giving many examples of how these techniques led to success not only in the software industry but also in the “real world”.

The most stunning example is the story of how the Empire State Building was built in less than a year and how it’s possible to achieve the same efficiency in software development!

There are so many takeaways inside Leading Lean Software Development that it’s hard to share all of them, but I think the book is a great collection of my beliefs about the Lean and software development world.
For the first time I can say that I feel represented by a book!
It’s a MUST-read book in my opinion, and the most interesting part is that there is another, far more practical book written by the same authors that I can’t wait to read: Implementing Lean Software Development.

I hope you’ll find the time to read them, and if you want to share other books of great value, just leave a comment.

Haxe-watchify: automatic build tool for Haxe and OpenFL projects

I’ve recently started working at a new company that is very focused on cross-platform projects built with Haxe.
During my commute I worked on an automatic build tool for Haxe and OpenFL projects.
The tool is called haxe-watchify, and with a JSON configuration file or directly through the command line you’ll be able to set up how to continuously build your project in the background during your development flow.
Haxe-watchify has some interesting features, in particular for the Haxe target, like the possibility to use the completion server instead of the traditional compiler to speed up the build of your projects.
In fact the completion server implements a cache system to build your projects faster; haxe-watchify takes care of starting the server and communicating with it for you.
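To give an idea of what haxe-watchify automates, this is roughly how you would use the Haxe completion server by hand (the port number and the hxml file are just examples):

  haxe --wait 6000                  # start the completion/compilation server once
  haxe --connect 6000 build.hxml    # every following build goes through the server and reuses its cache

haxe-watchify starts the server and runs the builds through it for you, so you don’t have to keep these commands in mind.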

Currently I’ve published the tool on the npm registry, so in order to install it just type this in your CLI:

npm install haxe-watchify -g

I wrote extensive documentation on how to use the tool in the readme file of the project repository; otherwise you can check the --help command directly in your terminal window.
For now I’ve tried it only on Mac OS X, so if you find any bugs on other platforms please let me know.

I’ve already thought of a few possible features to add in the next releases, like pre- and post-build hooks to launch your tests, run a static analysis tool or optimise your assets, and then move on to the build.
Anyway, I’m very keen to learn more about your current project workflow and how haxe-watchify could help you improve it.

If you want to share any feedback, please feel free to do so by adding a comment to this post or via email.

The power of feedback loops

Working in software development requires that we find the right techniques to achieve quality and customer satisfaction. Learning about development and flow techniques from experienced peers can be helpful. But if the team has a lack of senior peers or insufficient commitment, we may have to find alternative routes.

While studying how to improve my methods, I watched an interesting video wherein Noel Ford explained how useful feedback loops are in software development. Working from that premise, I started collecting and applying the best practices that could most help developers deliver software that meets customer expectations while also improving code quality.

First of all, let’s define a feedback loop. According to Wikipedia, “Feedback occurs when outputs of a system are ‘fed back’ as inputs as part of a chain of cause and effect that forms a circuit or loop.”

Any Agile or Lean framework incorporates this concept, but I think the closest implementation of a feedback loop is defined inside kaizen with the PDCA cycle.

PDCA cycle

PDCA stands for “plan-do-check-act” and in kaizen it is a technique used for continuous improvement of a company’s standards. The team first analyzes a problem and gathers metrics about the current situation, then defines a strategy for improvement toward the intended result, and finally gathers metrics on the new process to assess whether improvement really occurred, so the team can adapt the strategy to obtain better results and create a new standard.

This is a continuous process that should be carried out several times to aim for the best possible results.

Source: http://upload.wikimedia.org/wikipedia/commons/a/a8/PDCA_Process.png

Feedback loops in software development

As software creation follows an empirical process, we need to create several interim checkpoints to determine whether what we’re producing is really what our customers are looking for. Often during development we don’t realize how much time (and money) we waste simply because we don’t gather good specifications or show the progress of our work to the customer. Just because the waste is to a certain degree hidden doesn’t mean it is without cost; we often invest more time and expense than we realize, yet with a few shrewd tactics could actually prevent such loss and achieve what our customers really expect.

We can add several types of feedback loops during development to help accomplish that. The key rule of any feedback loop technique is “shorten, and often”!

Let’s go through the different types of feedback loops we can apply:

TDD/BDD/Unit Test

Test-driven development, behavior-driven development, or simple unit testing is the first feedback loop that should be established on any project.

It’s helpful for the healthiness of your project not only because it helps you to better define the requirements, design better APIs, and obviously check the sanity of what you’ve written or refactored, but also because testing gives you feedback on your work in seconds, and you can have this feedback every time you save a code file or push code to your version control system.
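Just to make the “feedback in seconds” point concrete, here is a minimal sketch of such a test using Node’s built-in assert module; the function under test is made up for the example:

  var assert = require('assert');

  // hypothetical function under test
  function add(a, b) {
    return a + b;
  }

  // the check fails within seconds if a refactoring breaks the behaviour
  assert.strictEqual(add(2, 3), 5);
  console.log('add() still works as expected');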

Static analysis

Often we avoid proper static analysis in the belief that we lack the time to define what we need to analyze, but it doesn’t really take a huge effort to establish a good strategy, and the potential benefits are important.

Static analysis can help you immediately recognize areas in your software that could be simplified or improved without deep-diving inside the code. It supplies a powerful feedback loop that in seconds or minutes can provide metrics to analyze during the whole development life cycle.

One of my favorite metrics techniques is cyclomatic complexity. Applying this metric during the whole life of your software will help you understand the healthiness of your project and what areas need attention.
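As a quick illustration of how the metric is usually counted (one linear path plus one for each decision point), take this hypothetical function:

  // cyclomatic complexity = 4: the straight path plus three decision points (if, else if, &&)
  function shippingCost(order) {
    if (order.total > 100) {
      return 0;
    } else if (order.express && order.weight < 2) {
      return 5;
    }
    return 10;
  }

The higher that number grows, the more independent paths you have to test and the harder the function becomes to maintain.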

Code coverage is another useful metric, as it can suggest the extent to which your system is covered by tests. Unfortunately, though, it doesn’t give you feedback on the test quality.

Pair programming

Introduced by XP, pair programming is a powerful loop technique that provides peer feedback in seconds while letting you consolidate your knowledge and explain problems or solutions to someone else. Developers often skip this option when the business department deems it too expensive to have two people working on the same piece of code, but research demonstrates it’s not as costly as one might think. In fact, several studies show that while pair programming lowers developer productivity by a mere 10 to 15 percent, it increases the robustness of your software, consolidates and improves the skills of developers, and, most important, helps detect bugs and architecture mistakes more than any other technique. If executed in the right way, it can be a great investment for both your company and your software.

Code review

Code review definitely improves project quality and normally should be performed every time a developer in the team makes a pull request directly inside the version control system. Unfortunately, reviews are often skipped or, worse, ignored because we trust our peers too much or don’t have time. But if applied properly this technique can improve the code quality and architecture of your software to an amazing degree.

Usually it takes days, not seconds or minutes, so bear in mind that with this technique your developers don’t have the kind of frequent feedback afforded by the previously described techniques.

Agile Modeling

Several books describe and discuss this topic; I suggest scheduling one or more sessions at the beginning of any project to architect enough of your software that you can start development and review the design and implementations iteration by iteration (possibly with every sprint). You’ll see a constant improvement in your code without a huge up-front effort, meaning less need to accommodate changes for future business requests and more likelihood that your architecture and design decisions will faithfully reflect the project specifications. Usually these feedback loops are the only ones that maintain overall code structure and architecture sanity as viewed with an “eagle eye.”

Daily stand-up

This ceremony, used not only by Scrum teams but in many Kanban implementations, primarily provides an opportunity to synchronize team member efforts as well as a starting point for discussing potential improvements. As the name suggests, it’s a daily activity whose value hinges not only on the technical aspects but also on team communication and the big picture behind the project. These aspects are fundamental, because often developers are so focused on details they miss the perspective that could keep them from spending unnecessary work or time on secondary features (remember the Pareto principle).

Retrospective

An Agile retrospective, if well organized, is the most effective ceremony inside the Scrum framework to produce feedback that really helps improve projects and promote healthy teams. I’ve facilitated several retrospectives and I’ve always gained a lot from them; if you can create a good atmosphere, defining properly what you want to achieve, you may reap surprisingly good results. For this purpose the retrospective is organized at the end of each iteration (usually every two weeks), but I really suggest holding it more often if the team has recently formed or has more than one new member, to help newcomers feel part of the team and acquire its mind-set and to permit review of the resulting team performance as a whole.

Conclusion

Feedback loops help on a daily basis with project improvement and delivery success; more feedback loops mean greater control of software quality and better customer satisfaction, exactly the goals you are trying to achieve every day. Don’t be afraid to try new techniques to improve your team and your company. Fail fast, retrieve metrics, and learn from them; the worst that can happen is that you really improve!

Automate, automate and automate!

Recently I’ve spent several days finding the best way to set up an automation process for Javascript developers, and I investigated several tools strictly related to the Javascript world.
These tools allow you to save a lot of time when you perform repetitive and sometimes boring tasks in order to test your HTML5 game, website or web app.
In this post I’d like to share with you what tools I’ve found and used to create a full-stack Javascript automation pipeline for any front end developer or team.

Let’s see what a Javascript developer or a team could automate on their machines to get better code quality and to save a lot of time when working on their own libraries or projects.
For this purpose, in my pipeline, I’ve used different CLI tools like mocha, grunt, yeoman, blanket or plato.
Each of these tools allows you to perform a specific task, but combined together they will provide your projects with:

  • tdd, bdd and unit test
  • code coverage
  • dependencies management
  • (custom) project template
  • static analysis
  • tasks automation (live reloading, deploy in localhost folder, files concatenation…)

These are only a few of the many options you have when “playing” with these tools, but let’s go a little more in depth to see which tool can effectively help accomplish each of the items in the list above.

TDD, BDD and UNIT TEST

Searching the web on this topic you’ll find a lot of good Javascript libraries. The one I decided to use is Mocha, because it’s very well integrated with Blanket (code coverage) and Karma (test runner) and because it’s based on Node.js, so you can create your libraries and then test them in pure Javascript without any need to go through HTML pages. If you need to test Javascript code that will run only inside the browser, you can fake the window object with libraries like jsdom integrated in your test cases.
Mocha allows you to work in BDD, TDD and unit testing style, you can easily mix in several assertion libraries, and writing async tests becomes really easy.
Other libraries that could be useful are Jasmine or QUnit.
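To give an idea of the workflow, here is a minimal sketch of a Mocha suite with an async test; the function under test and its behaviour are made up for the example:

  var assert = require('assert');

  // hypothetical async function under test
  function fetchUser(id, callback) {
    setTimeout(function () {
      callback(null, { id: id, name: 'test user' });
    }, 10);
  }

  describe('fetchUser', function () {
    it('returns the requested user', function (done) {
      fetchUser(42, function (err, user) {
        assert.ifError(err);
        assert.equal(user.id, 42);
        done();                 // tells Mocha the async test is complete
      });
    });
  });

Running mocha from the CLI picks up the suite (describe and it are provided as globals by the test runner) and reports the result in a couple of seconds.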

CODE COVERAGE

As I wrote before, I found an interesting library that works perfectly with Mocha: Blanket.js.
Blanket is a very simple and easy-to-use library, in particular when you have all your tests written as modules (Node.js style) instead of a mix of HTML and JS files.
Blanket works not only with Mocha but also with Jasmine and QUnit, so basically with the most famous testing libraries!
One thing that I really appreciate about Blanket is the final output, which can be exported to an interactive HTML report where you can immediately recognise what isn’t tested yet and jump from one file to another following the menu on the right side of the template.
Another one that seems quite interesting is Istanbul.js; I haven’t tried it yet but it’ll be the next one for sure, as I’ve heard about really good experiences with this library from other developers!

STATIC ANALYSIS

When you want to use a static analysis tool in your pipeline, one of the most popular is…
But I suggest giving Plato a try, in particular if you work alone or in a small team and you want to do a sanity check of your project.

Plato, in fact, stores all the information about your code project locally in some JSON files, and you can navigate through the report directly from an HTML page created by the tool.
These stats are very interesting for checking the areas of improvement of your project, and in particular with these tools you can have immediate feedback on where your efforts should be focused in order to deliver a better product and make sure that maintenance won’t cost too much later on.
Obviously you can also use more sophisticated solutions like SonarQube, installing it on your server with the Javascript plugin and running your static analysis every time a developer pushes code to Git or Mercurial.
It always depends on the size of the project and the team; my suggestion is to start with Plato on a small project and then, when you see the real value, move to SonarQube even if you are a small organisation.

PROJECT TEMPLATE

When you talk about templates for Javascript it’s impossible to forget Yeoman.
Yeoman is a scaffolding tool that allows you to create a project skeleton with your favourite JS library ready to use.
I really suggest using this kind of tool because it makes it easier to start new projects and at the same time gives you some standards inside your company and across your projects.
There are several generators ready to use and searchable from the official website; if you can’t find what you’re looking for, it’s very easy to use Node.js and the APIs already built into Yeoman to create your own generator with the functionality that your company or projects need.

TASKS AUTOMATION and DEPENDENCIES MANAGEMENT

This is my favourite part: I found in Grunt a really good tool to automate more or less everything that is not strictly related to writing the project code inside my IDE!
Grunt is the glue that assembles all the tools explained above into a pipeline, easily run with one line in your CLI: “grunt”.
The community is really huge and you can find more or less everything for Node.js or plain Javascript: from minifying, uglifying and concatenating your JS files, to compiling your LESS or SASS files, converting your ES6 code to ES5, running static analysis or pushing your code directly to Git, simply with a Grunt task.
One thing that I really like about Grunt is that you can easily scale the way you work with it, using a YAML file and different JS files (one per task) and assembling them at runtime.
This allows you to create some common tasks for the whole company and at the same time have the freedom to add custom automation for each project and/or department of your company.
I really suggest taking a look at the official website, where you can also find a lot of technical information, and then starting to automate your daily Javascript workflow.
Obviously if you’re not working with JS you can still use Grunt in combination with your favourite programming language or technology like Haxe, Dart, Typescript, Coffeescript or Adobe AIR; the flexibility of this tool is really impressive!
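Just to give an idea of what a Grunt setup looks like, here is a minimal sketch of a Gruntfile; the plugins and paths are only an example:

  module.exports = function (grunt) {

    grunt.initConfig({
      // lint every JS file in src/
      jshint: {
        all: ['src/**/*.js']
      },
      // concatenate and minify the sources into a single distributable file
      uglify: {
        build: {
          src: ['src/**/*.js'],
          dest: 'dist/app.min.js'
        }
      },
      // re-run the pipeline every time a source file changes
      watch: {
        scripts: {
          files: ['src/**/*.js'],
          tasks: ['jshint', 'uglify']
        }
      }
    });

    grunt.loadNpmTasks('grunt-contrib-jshint');
    grunt.loadNpmTasks('grunt-contrib-uglify');
    grunt.loadNpmTasks('grunt-contrib-watch');

    // typing "grunt" in the CLI runs lint + minify
    grunt.registerTask('default', ['jshint', 'uglify']);
  };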

An alternative to Grunt could be Gulp, where the main difference is that Grunt favours configuration over code and Gulp does exactly the opposite.
The Gulp community is growing day by day and it’s interesting to see the different approaches of these two great task runners; probably in the long term the Gulp approach will be more successful, but for now Grunt is exactly what I was looking for.
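For comparison, the same minify step in Gulp is expressed as code rather than configuration; a rough sketch, again with example paths:

  var gulp = require('gulp');
  var uglify = require('gulp-uglify');

  // read the sources, minify them and write the result to dist/
  gulp.task('scripts', function () {
    return gulp.src('src/**/*.js')
      .pipe(uglify())
      .pipe(gulp.dest('dist'));
  });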

Conclusion

As you’ve read, the JS world has a lot of useful tools that will save you a lot of time during your daily job, whether you are a developer or a company.
The mix of these tools allows you to create a pipeline in pure Javascript, and they can really improve your code quality and your workflow, giving you standards inside your projects and a solid flow that will be able to scale your company or projects in an easy and professional way.
Obviously these aren’t the only tools and libraries I’ve tried; there are many others out there that I’d like to mention, like PhantomJS, Buster or Lineman and so on. But for the next five minutes, before going back to what you were doing before reading this post, try to think about how to improve your flow; trust me, you’ll be surprised at how much more productive you become after introducing these tools into your routine.

Open letter to @Adobe and @Adobe Air: the hidden part…

I don’t know if you’ve already read the open letter that Gary wrote recently discussing the questionable marketing choices made by Adobe regarding Adobe AIR, and previously the Flash Platform in general, but I suggest you start from there before reading this post.

If you know me personally, follow me on social networks or read this blog, you should know that I’m a big fan of the Flash Platform, and in particular of Adobe AIR.
I’m very committed to delivering amazing and cutting-edge projects made with this fantastic technology, and I’m involved in the community to spread the word about AIR.
From a developer’s perspective I’m 110% with Gary and the community: Adobe AIR is an amazing technology with a lot of success behind it, in terms of the developers and companies that adopted it and in terms of the number of apps released on different platforms during the past few years.
AIR, in my opinion, deserves a stronger commitment from the company that (partially) created it.
Obviously I also agree that there isn’t any real competition between Actionscript 3 and HTML5 (read: Javascript); what you can do with HTML5 today is more or less what a Flash developer could do 5 years ago.
But you can’t approach a discussion like this only from a developer’s perspective; we should also look at it from different angles.
What I’m asking is that you follow me until the end of this post; then you can send me an email asking if I’ve gone totally crazy, or insult me in a comment, no worries 😀

I’ve been going to Adobe MAX since 2006, the first MAX organised by Adobe, and I remember quite clearly that a few MAXes ago, during both keynotes, nobody said anything about new Flash Platform improvements or plans for the future of the platform.
At the beginning I was so angry that I literally spent hours on the phone talking with Adobe people, because 100% of my business was based on this technology and they couldn’t really think that HTML5 could be better than the Flash Platform, in particular in 2011/2012 when the coolest websites, RIAs and desktop applications were built with Flash.
But have you ever tried to look at this situation from a different point of view? Let’s assume for a few minutes that we are inside an Adobe meeting room.
You have an amazing technology that millions of people are using to innovate and create the best software in the world, but you are not earning what you expected from it. Don’t you believe me? Take a look at this chart (ADBE):

ADBE stock 2011/2012

 

This is the graph of the Adobe stock (ADBE) from January 2011 to December 2012: for a few months (close to October, when Adobe MAX usually takes place) the value of the stock was lower than $28, and it was never greater than $35.
Great. Obviously our white-collar friends aren’t so interested in which technology is the best on the market or how many people are using it; they care about numbers, about how to increase the company’s profit and how to keep the analysts happy in order to have a better position in the stock market and with the shareholders.
These results weren’t so good for Adobe; in fact, if you remember, a lot of people started to leave the company, many teams in the USA and Europe were closed in order to move development to India or other places where developers are cheaper, and Adobe started its commitment to the big new trend of web technology products, like the Edge family for instance.
Everybody now knows the rest of the story very well: the new products and how they are trying to improve the way we create websites and apps, and I guess the majority of us aren’t so happy about that.
But let’s take another look at some numbers: the next graph shows the Adobe stock value from 2012 to March 2014, basically the period in which Adobe stopped pushing the Flash Platform and started to increase its investment in designer products:

ADBE stock 2012/2014

I think it’s quite clear that the policy of starting to sell their products in the cloud (as a first big mover in the IT landscape), the decision to try to improve a new technology like HTML5 with new tools, and so on, created around Adobe exactly what the management was looking for!
I agree with you that there are many ways to make money, but from a metrics perspective they are going in the right direction and they have made the right choices for now.
I’m not an economist and I can’t say whether this strategy will pay off in the long term, but for sure in the short term they got where they wanted to be.

With this post I’m not trying to defend Adobe, but after many years of the Flash Platform being in this state I started to drop my angry mood and ask myself why they took this decision. Honestly I can’t say whether this is the only reason driving Adobe through these big changes, but excluding the technical side it’s the only way I can look at it, and now most of their decisions finally make sense.
All the comments I’ve read in the past, and also in these days after Gary’s letter to Adobe, are completely true, but often, as developers, we forget that it’s not just a design pattern or a performance optimisation that could save the world; marketing and the market are the real drivers, and in front of them even a big corporation like Adobe can be defeated.

ADBE stock 2009/2014

 

UPDATE FROM ADOBE

Chris, the new product manager of Adobe AIR, replied to Gary’s open letter; you can read the answer on the official blog.

Git Flow vs Github Flow

Recently I’ve spent time studying a good way to manage software projects with Git.
I read a lot of blog posts to check different points of view and to find out which technique is the best to use in different situations.

The principal ways to manage software in Git are Git Flow and Github Flow.
These two methods can really help you manage your project and optimise your team’s workflow.
Let’s see the differences between them.

Git Flow

git flow

Git Flow works with different branches to easily manage each phase of the software development; it’s suggested when your software has the concept of a “release” because, as you can see in the scheme above, it’s not the best choice when you work in a Continuous Delivery or Continuous Deployment environment where this concept is missing.
Another good point of this flow is that it fits perfectly when you work in a team and one or more developers have to collaborate on the same feature.
But let’s take a closer look at this model.

The main branches in this flow are:

  • master
  • develop
  • features
  • hotfix
  • release

When you clone a Git repository into your local folder, you immediately create a branch from master called develop; this branch will be the main branch for development, where all the developers in a team will work to implement new features or fix bugs before the release.
Every time a developer needs to add a new feature, he creates a new branch from develop, which allows him to work properly on that feature without compromising the code in the develop branch for the other people in the team.
When the feature is ready and tested, it can be rebased into the develop branch; our goal is to always have a stable version of the develop branch, because we merge the code only when the new feature is completed and working.
When all the features related to a new release are implemented in the develop branch, it’s time to branch the code into the release branch, where you’ll start testing properly before the final deployment.
When you branch your code from develop to release you should avoid adding new features; you should only fix bugs inside the release branch code until you have a stable release branch.
At the end, when you are ready to deploy your project live, you merge the release branch into master and tag the version there, so in the master branch you have all the different versions that you release week by week.
It might seem like too many steps, but it’s definitely quite safe and helps you avoid mistakes or problems when you release. Obviously, to accomplish all these tasks you can find a lot of scripts online that help you work with Git Flow on the command line, or if you prefer you can use visual tools like SourceTree by Atlassian that do the dirty work for you, so you only have to follow the instructions inside the software to manage all the branches. For this reason I’ve also prepared a short video showing how to use this flow with SourceTree.
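To make the steps more concrete, this is roughly how the flow looks with plain Git commands; the branch names and the version number are just examples:

  git checkout -b develop master          # create the main development branch
  git checkout -b feature/login develop   # start a new feature from develop

  # ...work, commit and test the feature...
  git checkout develop
  git merge --no-ff feature/login         # bring the finished feature back into develop (merge or rebase, as you prefer)

  git checkout -b release/1.0 develop     # prepare the release: only bug fixes from now on
  git checkout master
  git merge --no-ff release/1.0           # the release is stable: merge it into master...
  git tag -a 1.0 -m "release 1.0"         # ...and tag the version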

You can go deeper into this flow by reading these two interesting articles: the nvie blog or the datasift documentation.

Github Flow


So now, do you think that Github works with Git Flow? Of course not! (Honestly, I was really surprised when I read that!)
In fact they work in a continuous deployment environment where there isn’t the concept of a “release”, because every time they finish preparing a new feature they push it live immediately (after the whole automation chain set up in the environment).
The main concepts behind the Github flow are the following (a quick command-line sketch follows the list):

  • Anything in the master branch is deployable
  • To work on something new, create a descriptively named branch off of master (ie:new-oauth2-scopes)
  • Commit to that branch locally and regularly push your work to the same named branch on the server
  • When you need feedback or help, or you think the branch is ready for merging, open a pull request
  • After someone else has reviewed and signed off on the feature, you can merge it into master
  • Once it is merged and pushed to ‘master’, you can and should deploy immediately
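In practice the day-to-day commands look roughly like this; the branch name comes from the example above and the remote is assumed to be called origin:

  git checkout -b new-oauth2-scopes master   # descriptively named branch off master
  git commit -am "add new oauth2 scopes"     # commit locally...
  git push -u origin new-oauth2-scopes       # ...and push regularly to the same branch on the server

  # open a pull request for feedback; once it has been reviewed and signed off:
  git checkout master
  git merge new-oauth2-scopes
  git push origin master                     # master is always deployable: deploy immediately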

I found an amazing interactive page where you can deepen your knowledge of this method. I see it’s very common when you work in QA teams or small teams, or when you work as a freelancer, because it’s a lightweight flow to manage a project but it’s also quite clear and safe when you want to merge your code into the master branch.
Another good resource about Github Flow is the blog post written by the Github evangelist Scott Chacon.
I also recorded a video on how to use Github Flow with SourceTree:

If you use any other method to manage your projects in Git, feel free to share it: I’m quite interested to see how you usually work with Git and whether there are better ways to do it. If you have any other feedback or questions, I’m here for you!