Algorithms: Merge Sort

Merge sort is one of the most commonly used sorting algorithms in computer science. It’s based on the idea that it’s easier to merge two sorted arrays than to sort one unsorted array. Once you have two sorted arrays you can merge them into a single array, comparing their items one by one and adding the smaller item first.

At the end of the merge, any leftover items are concatenated onto the end of the result list. Since both input arrays are already sorted, this does not cause any reordering.
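As a quick sketch of just the merge step described above (mergeTwoSorted is a hypothetical helper name, not part of the implementation shown later):

```javascript
// Merge two already-sorted arrays into one sorted array.
const mergeTwoSorted = (a, b) => {
  const result = []
  let i = 0
  let j = 0
  // compare the front items one by one, taking the smaller first
  while (i < a.length && j < b.length) {
    if (a[i] < b[j]) {
      result.push(a[i++])
    } else {
      result.push(b[j++])
    }
  }
  // one side is exhausted; concatenate whatever is left over
  return result.concat(a.slice(i)).concat(b.slice(j))
}

console.log(mergeTwoSorted([1, 3, 8, 9], [2, 4, 5, 6, 7]))
// → [ 1, 2, 3, 4, 5, 6, 7, 8, 9 ]
```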

The concept is fairly easy to understand, but how do we reach a state in which we have two sorted arrays? Arrays of multiple items can’t be compared until they are sorted! We achieve this by splitting the array into halves recursively until we end up comparing pairs of single-item arrays, which are sorted by definition.


[1,9,3,8] [6,5,7,4,2]

[1,9] [3,8] [6,5] [7,4,2]

[1] [9] [3] [8] [6] [5] [7] [4,2]

[1] [9] [3] [8] [6] [5] [7] [4] [2]

This is not the most comprehensive schema ever, but you can see how the array is recursively split into smaller and smaller arrays until we reach the state in which every array contains a single item. At that point we start comparing them pair by pair and, using the comparison-and-concatenation method mentioned above, sort our array.

[1]   [9]   [3]   [8]   [6]   [5]   [7]   [4]   [2]

[1] [9] [3] [8] [6] [5] [7] [2,4]

[1,9] [3,8] [5,6] [2,4,7]

[1,3,8,9] [2,4,5,6,7]

[1,2,3,4,5,6,7,8,9]


In other words, we split our array into its smallest pieces and then reassemble them in the correct order. Here’s an implementation in JavaScript which uses the built-in slice method to get the parts of the array that we need.

const mergeSort = (arr) => {
  // base case: arrays of zero or one item are already sorted
  if (arr.length <= 1) {
    return arr
  }
  const middle = Math.floor(arr.length / 2)
  return mergeArray(
    mergeSort(arr.slice(0, middle)),
    mergeSort(arr.slice(middle))
  )
}

const mergeArray = (left, right) => {
  let result = []
  let indexLeft = 0
  let indexRight = 0

  while (indexLeft < left.length && indexRight < right.length) {
    if (left[indexLeft] < right[indexRight]) {
      result.push(left[indexLeft])
      indexLeft++
    } else {
      result.push(right[indexRight])
      indexRight++
    }
  }

  // one side is exhausted; the remaining items are already sorted
  return result.concat(left.slice(indexLeft)).concat(right.slice(indexRight))
}
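Putting the two functions together, here is a quick sanity check with the example array from the diagrams (the definitions are repeated so the snippet runs on its own):

```javascript
// Self-contained copy of the merge sort for a quick test run
const mergeSort = (arr) => {
  // base case: arrays of zero or one item are already sorted
  if (arr.length <= 1) {
    return arr
  }
  const middle = Math.floor(arr.length / 2)
  return mergeArray(
    mergeSort(arr.slice(0, middle)),
    mergeSort(arr.slice(middle))
  )
}

const mergeArray = (left, right) => {
  let result = []
  let indexLeft = 0
  let indexRight = 0
  while (indexLeft < left.length && indexRight < right.length) {
    if (left[indexLeft] < right[indexRight]) {
      result.push(left[indexLeft])
      indexLeft++
    } else {
      result.push(right[indexRight])
      indexRight++
    }
  }
  return result.concat(left.slice(indexLeft)).concat(right.slice(indexRight))
}

console.log(mergeSort([1, 9, 3, 8, 6, 5, 7, 4, 2]))
// → [ 1, 2, 3, 4, 5, 6, 7, 8, 9 ]
```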


Understanding JavaScript Memoization

I’ll go straight to the point here. Memoization is the programmatic practice of making long recursive or iterative functions run much faster.

But how? Well, the speed is achieved by caching a function’s results after execution, so they can be returned immediately the next time the function is called with the same arguments.

Here’s a basic example.

const add = (a, b) => a + b
console.log(add(2,3)) // => returns 5

Every time we call this function, a calculation is going to be performed and a value returned.

Now let’s create a simple memoize function that takes another function and returns a memoized version of it.

const memoize = (fn) => {
  let cache = {}
  return (...args) => {
    let n = JSON.stringify(args);
    if (n in cache) {
      console.log('Fetching from cache', n);
      return cache[n];
    } else {
      console.log('Calculating result', n);
      let result = fn(...args);
      cache[n] = result;
      return result;
    }
  }
}

Now we can easily memoize any pure function, like our add function. Since memoize is a higher-order function, we have two ways of doing this. One is creating a new memoized function, like below.

const add = (a, b) => a + b
const memoAdd = memoize(add)
console.log(memoAdd(2,3)) // => Calculating result 5
console.log(memoAdd(2,3)) // => Fetching from cache 5

Or we can redefine add itself as a memoized function:

const add = memoize((a, b) => a + b)
console.log(add(2,3)) // => Calculating result 5
console.log(add(2,3)) // => Fetching from cache 5
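One detail worth noting: because the cache key is built with JSON.stringify on the argument list, calls with the same arguments in a different order are cached separately, even for a commutative function like add. A small sketch (memoize is recreated here without the logging, and the call counter is just for illustration):

```javascript
// memoize as defined earlier, minus the console.log calls
const memoize = (fn) => {
  let cache = {}
  return (...args) => {
    let n = JSON.stringify(args)
    if (!(n in cache)) {
      cache[n] = fn(...args)
    }
    return cache[n]
  }
}

// count how many times the underlying function actually runs
let calls = 0
const add = memoize((a, b) => {
  calls++
  return a + b
})

add(2, 3) // computed, cached under the key '[2,3]'
add(3, 2) // computed again: '[3,2]' is a different key
add(2, 3) // served from cache
console.log(calls) // → 2
```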

You want to see the performance boost in action? Take a look at this example below.

// recursive fibonacci function
const fibonacci = (n) => {
  if (n === 0 || n === 1) {
    return n
  }
  return fibonacci(n - 1) + fibonacci(n - 2)
}

This fibonacci function takes a number n, the position in the Fibonacci sequence, and calculates the corresponding value. Since it’s recursive, it gets dramatically slower as the position grows. Let’s go nuts here and test it with the 50th element of the sequence, shall we?

console.log(fibonacci(50)) // Call the function and go for a walk!

It took nearly 4 minutes to run the code. The recursive function is called over 40 billion times (40,730,022,147 times, to be specific)! It’s pretty insane.

Now let’s memoize it. Since fibonacci is a recursive function, we cannot achieve this by creating a new memoized function, as in the code below.

const memoFibonacci = memoize(fibonacci)
console.log(memoFibonacci(50)) // insane amount of time

This happens because we are only memoizing the first call. All the subsequent recursive calls to fibonacci bypass the cache. The way to memoize a recursive function is to wrap the function definition itself in memoize.

// recursive and memoized fibonacci function
const fibonacci = memoize((n) => {
  if (n === 0 || n === 1) {
    return n
  }
  return fibonacci(n - 1) + fibonacci(n - 2)
})

console.log(fibonacci(50)) // runs fast!

Now it runs in less than a second and the results are exactly the same. Amazing! As you can see in the console, a lot of the calls are cached, preventing billions of recursive calls.
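To make the difference concrete, we can count calls for a smaller position (a sketch; memoize is recreated here without the logging, and the counters are just for illustration):

```javascript
// memoize as defined earlier, minus the console.log calls
const memoize = (fn) => {
  let cache = {}
  return (...args) => {
    let n = JSON.stringify(args)
    if (!(n in cache)) {
      cache[n] = fn(...args)
    }
    return cache[n]
  }
}

// plain recursive version, instrumented with a call counter
let naiveCalls = 0
const fibNaive = (n) => {
  naiveCalls++
  if (n === 0 || n === 1) return n
  return fibNaive(n - 1) + fibNaive(n - 2)
}

// memoized version: the wrapped function only runs on cache misses
let memoCalls = 0
const fibMemo = memoize((n) => {
  memoCalls++
  if (n === 0 || n === 1) return n
  return fibMemo(n - 1) + fibMemo(n - 2)
})

console.log(fibNaive(20), naiveCalls) // → 6765 21891
console.log(fibMemo(20), memoCalls)   // → 6765 21 (once per distinct n)
```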

Is memoization the same as caching?

Yes, kind of. Memoization is actually a specific type of caching. While caching can refer in general to any technique of storing data for future use, memoization specifically means caching the return values of a function.


Keep in mind that memoization does not make sense for all function calls. It does not work well for functions that are not pure, that is, functions that have side effects. Memoizing only caches a result, so any other side effects are lost on subsequent calls. That said, you can work around this constraint by returning a function that includes the side effects you need to execute after getting the result.
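For example, memoizing an impure function silently skips its side effect on cache hits (a small sketch; memoize is recreated here without the logging, and the visit function is hypothetical):

```javascript
// memoize as defined earlier, minus the console.log calls
const memoize = (fn) => {
  let cache = {}
  return (...args) => {
    let n = JSON.stringify(args)
    if (!(n in cache)) {
      cache[n] = fn(...args)
    }
    return cache[n]
  }
}

// an impure function: it increments a counter as a side effect
let hits = 0
const visit = memoize((page) => {
  hits++ // side effect: only runs on a cache miss
  return 'visited ' + page
})

visit('/home')
visit('/home') // served from cache: the side effect is skipped
console.log(hits) // → 1, not 2
```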

Also, there is a memory overhead, since we must store the cached results so we can recall them later, as well as the added complexity of using memoization. So it really only makes sense for functions that are computationally expensive.

Immutability in JavaScript

Immutability is a super hot subject in modern JavaScript, and the reason for that is, of course, the functional programming paradigm.

Immutable data is tightly connected with the functional approach, where any mutation is considered an unwanted side effect. But first, let’s dive into the details of mutability and immutability.

Continue reading “Immutability in JavaScript”

What the hell is Serverless architecture?

The Information Technology (IT) market constantly presents new trends; some of them catch on and others do not even make it past the concept phase. The latest trend is the serverless wave, a cloud computing solution that has caught the eye of developers and industry professionals since last year.

Also presented to the market as FaaS (Function as a Service) or serverless Platform as a Service, serverless is already being offered by a group of cloud computing providers.

Continue reading “What the hell is Serverless architecture?”

Creating your own Node module

In this article we are going to talk a little bit about creating npm modules and sharing them on the npm registry, but first let’s define what npm is.

npm is a package manager for Node.js packages, or modules, and the npm website hosts thousands of free packages to download and use. The npm program is installed on your computer when you install Node.js.

Continue reading “Creating your own Node module”

What is Node.js?

Node.js (or just Node) is a lean, fast, cross-platform JavaScript runtime environment that is useful for both servers and desktop applications. Node is an asynchronous, event-driven JavaScript runtime built upon Chrome’s V8 JavaScript engine. It’s designed to build scalable network applications.

That being the raw definition, let me clarify. Node.js enables you to write server-side JavaScript. You may now be wondering: how? As you know, JavaScript is a language that runs in the browser. The browser’s engine takes JavaScript code and compiles it into commands. The creators of Node.js took Chrome’s engine and built a runtime for it to work on a server. Don’t get confused by the word runtime: it’s an environment where the language can get interpreted. So what do we have now? A way to write JavaScript on the back end.

Continue reading “What is Node.js?”

Reading and Writing files with Node.js

In this post I’m going to show you how we can read and write files on our local filesystem using Node.js.

Being able to read files from your local filesystem can be very useful, and there’s a bunch of stuff you can do with it, for example reading local JSON files and loading spreadsheets into your program.

To start, just create a .js file and name it whatever you want. Then open this file in your editor of choice.

Reading from files is done via the core fs module. The fs module provides an API for interacting with the file system in a manner closely modeled around standard POSIX functions.

Continue reading “Reading and Writing files with Node.js”

Node.js & Express.js fundamentals

What is Node.js

Node.js is a JavaScript runtime built on Chrome’s V8 JavaScript engine. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient.

As an asynchronous event driven JavaScript runtime, Node is designed to build scalable network applications. This is in contrast to today’s more common concurrency model where OS threads are employed. Thread-based networking is relatively inefficient and very difficult to use. Furthermore, users of Node are free from worries of dead-locking the process, since there are no locks. Almost no function in Node directly performs I/O, so the process never blocks. Because nothing blocks, scalable systems are very reasonable to develop in Node.
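The non-blocking model can be felt even without any real I/O: an asynchronous callback is scheduled rather than waited on, and the rest of the script keeps executing (a minimal sketch):

```javascript
const order = []

// schedule an asynchronous callback: it does not run immediately
setTimeout(() => {
  order.push('timer fired')
}, 0)

// meanwhile the rest of the script keeps executing without blocking
order.push('after scheduling')
console.log(order) // → [ 'after scheduling' ], the callback runs later
```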

Continue reading “Node.js & Express.js fundamentals”

How To Build a Local Development Environment Using Docker

So let’s talk a little bit about development environments, shall we?

We developers are always trying to find the best solution when it comes to creating the best development environment, but that’s not always easy. Things are changing very fast nowadays and if you work on more than one project, things can get a little confusing.

About three years ago I thought I had the best solution for this problem, using VirtualBox and Vagrant. I could start up a Linux box in minutes, and changes made in one project’s box would not affect another project’s environment. That was a nearly perfect solution, when it worked. Most of the time things got out of hand when the projects got too big and complex, and sometimes it was nearly impossible to recreate exactly the production environment using only one VM. That’s where this solution failed.

Continue reading “How To Build a Local Development Environment Using Docker”