Scaling a Node.js server

Uday Reddy
3 min read · Aug 21, 2018

In recent years many applications have been built on the Node.js stack, and it's no wonder given its scalability. So let's look at a few real-world problems and the tools to solve them.

It's so easy to set up Node.js, isn't it? Write an index.js with some basic JavaScript in it and run node index.js.

But doing so is not enough when it comes to production environments. Node can make use of all the cores on your system, but without extra configuration your application will only use a single core for processing. It's common sense to use as many cores as possible to improve server availability.

Node.js has a built-in module called cluster, which creates multiple instances (worker processes) of the same application. Even if one worker is busy with a heavy, CPU-intensive calculation, subsequent requests will be routed to workers that are free.
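
Here is a minimal sketch of how cluster is typically used; the port (3000) and the trivial request handler are placeholders for illustration:

```js
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // Fork one worker per CPU core.
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
  // Replace any worker that dies so the pool stays at full size.
  cluster.on('exit', (worker) => {
    console.log(`worker ${worker.process.pid} died, forking a replacement`);
    cluster.fork();
  });
} else {
  // Each worker runs its own HTTP server on the same port.
  http.createServer((req, res) => {
    res.end(`handled by worker ${process.pid}\n`);
  }).listen(3000);
}
```

By default the primary process accepts incoming connections and distributes them to the workers round-robin (except on Windows), so a worker stuck on a CPU-heavy task doesn't receive every request.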

If you don't want to get your hands dirty with cluster directly, you can use pm2 (a sample configuration is sketched below the list), which gives you the following features:
- Load balancing across the number of cores available.
- Log management: rotate logs and gzip them.
- Internal monitoring: pm2 provides some serious insights into your production environment.
- Forever-alive server: since Node runs on a single thread, an error in the application can bring the process down; pm2 automatically restarts the server.
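
A minimal sketch of a pm2 ecosystem file; the app name and entry file are assumptions:

```js
// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'my-app',
      script: 'index.js',
      exec_mode: 'cluster',  // run workers via the cluster module
      instances: 'max',      // one worker per available core
      autorestart: true,     // bring the process back up if it crashes
    },
  ],
};
```

Start it with pm2 start ecosystem.config.js; pm2 logs and pm2 monit cover the log-management and monitoring points above.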

It's always important to identify bottlenecks and fix them, and bottlenecks can occur at multiple levels:

  • Network
  • Disk
  • Memory
  • CPU

1. Network: Even though cloud networks have excellent internet connections, a request may have to hop through multiple servers even after entering your intranet, and any of those hops can introduce delay.

If you use server-side rendering, you need to be careful about how you serve the HTML. The HTML payload can grow considerably by the time all the data has been rendered on the server, and serving this huge response can become a bottleneck. An easy way to solve the problem is to compress the HTML, but because gzip compression in Node is CPU-intensive, it's better to offload this task to nginx, which can compress the HTML without eating into your Node process's CPU time.
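
The post leaves the nginx side implicit; here is a minimal sketch of a reverse-proxy configuration that gzips responses on Node's behalf. The upstream port (3000) and the compression level are assumptions:

```nginx
server {
  listen 80;

  gzip on;                 # text/html is compressed by default once gzip is on
  gzip_comp_level 5;
  gzip_types application/json text/css application/javascript;

  location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_set_header Host $host;
  }
}
```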

2. Network profiling: use these commands (example invocations follow the list):

  1. netstat: monitor TCP connections on the machine.
  2. lsof: lists open files on your system (and, since sockets are files, open network connections too).
  3. watch: re-run a command at a fixed interval so you can observe how its output changes.
  4. ulimit: everything is a file in Linux, so if your Node server stops accepting new requests, you may have hit the upper limit on the number of file descriptors it can open. Set a sensible limit for open files.
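
A few example invocations; the port (3000) and the process name are assumptions, and the lsof line assumes a single matching node process:

```sh
watch -n 1 'netstat -tan | grep :3000 | wc -l'   # count TCP connections to the app, refreshed every second
lsof -p "$(pgrep -f 'node index.js')" | wc -l    # open file descriptors held by the node process
ulimit -n                                        # current per-process open-file limit
ulimit -n 65535                                  # raise it for the current shell session
```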

3. Memory: Be sure to select a proper logger library; a logger that writes synchronously can block the main thread and subsequently reduce CPU throughput.
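
A minimal sketch, assuming pino as the logger (the post doesn't name a specific library):

```js
const pino = require('pino'); // npm install pino

// Configure the destination for asynchronous writes so that logging a request
// does not block the event loop with synchronous I/O.
const logger = pino(pino.destination({ sync: false }));

logger.info({ route: '/users', ms: 12 }, 'request handled');

// Anti-pattern: a synchronous file write on every request stalls the main thread.
// require('fs').appendFileSync('/var/log/app.log', 'request handled\n');
```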

An important thing to remember in Node.js is that garbage collection is costly: once objects are garbage-collected, memory usage might decrease, but CPU usage increases while the collector runs.

Always remember to clear the setTimeouts you set in your code; a pending timer keeps its callback, and anything the callback references, alive in memory.
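
A minimal sketch of the pattern; the session store and the 30-minute expiry are hypothetical:

```js
const sessions = new Map();

function createSession(id, data) {
  const session = { id, data };
  // Expire the session automatically after 30 minutes.
  session.expiryTimer = setTimeout(() => sessions.delete(session.id), 30 * 60 * 1000);
  sessions.set(id, session);
  return session;
}

function destroySession(id) {
  const session = sessions.get(id);
  if (!session) return;
  // Without this, the pending timer's callback keeps `session` (and its data)
  // reachable, so it cannot be garbage-collected until the timer fires.
  clearTimeout(session.expiryTimer);
  sessions.delete(id);
}
```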

4. CPU: We need to optimize our Node code, since the OS is already optimized.

We can use node's --prof option to do some profiling and check which function calls are taking the most time to execute.
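
A sketch of the V8 profiling workflow; index.js is an assumption:

```sh
node --prof index.js                                 # run the app under load; V8 writes an isolate-*-v8.log file
node --prof-process isolate-*-v8.log > profile.txt   # turn the tick log into a readable summary of hot functions
```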

We can use the 0x npm package, which gives you a flamegraph: a visualization of how much time is spent executing each function. Tall towers (deep call stacks) are fine, but towers with wide, flat tops are bad; a flat top indicates that a particular function is itself spending a lot of time on the CPU.

If your flamegraph shows flat tops, a particular function has executed for a very long time and kept the CPU busy, and this is not good.
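
A sketch of generating a flamegraph with 0x; index.js is an assumption:

```sh
npm install -g 0x
0x index.js        # start the app under 0x, apply load, then stop it with Ctrl+C
# 0x then writes an interactive HTML flamegraph you can open in a browser
```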

Finally, it's always important to load test -> profile -> fix -> repeat.
