Side note: Get a Node.js logs dashboard
Save hours of sifting through Node.js logs. Centralize with Better Stack and start visualizing your log data in minutes.
See the Node.js demo dashboard live.
Node.js has built-in web server capabilities that are perfectly adequate for production use. However, the conventional advice that has persisted since its inception is that you should always place a production-ready Node.js application behind a reverse proxy server.
In this tutorial, you will learn why the recommended practice of placing a reverse proxy in front of a Node.js server is a good one to follow and how you can set one up quickly with only a few lines of code. We'll start by discussing what a reverse proxy is and the benefits it provides before you get some hands-on practice by setting up a reverse proxy for a Node.js application through NGINX, one of the most popular options for this purpose.
Before you proceed with the remainder of this tutorial, ensure that you have met the following requirements:
- A Linux server (this tutorial uses Ubuntu 20.04) with a non-root user that has sudo privileges.
- A recent version of Node.js and npm installed on the server.
- Optionally, a domain name pointed at your server's public IP address.
A reverse proxy is a special kind of web server that accepts requests from various clients, forwards each request to the appropriate server that can handle it, and returns the server's response to the originating client. It is usually positioned at the edge of the network to intercept client requests before they reach the origin server. It is often configured to modify the request in some manner before routing it appropriately.
Once a response is sent back by the origin server, it also goes through the reverse proxy where further processing may occur. For example, the response body may be subjected to gzip compression or encryption for security purposes. Another common use case for a reverse proxy is to enable SSL or TLS in situations where the underlying server does not support it.
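As a quick illustration of the compression use case described above, the NGINX snippet below is a minimal sketch that gzips text-based responses on their way back through the proxy (the upstream address is just a placeholder for an application server):
server {
    listen 80;

    # Compress responses before they are returned to clients
    gzip on;
    gzip_types application/json text/plain text/css application/javascript;

    location / {
        # Forward incoming requests to the origin server (placeholder address)
        proxy_pass http://localhost:3000;
    }
}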
The use of a reverse proxy provides several benefits for web applications, including:
- Load balancing: incoming traffic can be distributed across several instances of the application.
- SSL/TLS termination: HTTPS can be handled at the proxy even when the origin server only speaks plain HTTP.
- Caching and compression: responses can be cached or compressed before being returned to clients.
- Improved security: origin servers are hidden behind the proxy, reducing their exposure to direct attacks.
- Efficient static file serving: static assets can be served directly by the proxy without hitting the application.
There are many options to select from when it comes to reverse proxy servers—Apache, HAProxy, NGINX, Caddy, and Traefik, to name a few. NGINX is chosen here because of its track record as one of the most popular and performant options in its category, with plenty of features that should satisfy most use cases.
NGINX can be used as a reverse proxy, load balancer, mail proxy, and HTTP cache. It is also often used to serve static files from the filesystem, an area in which it particularly excels compared to Node.js (over twice as fast as Express' static middleware).
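For example, if your application shipped a directory of static assets, you could let NGINX serve them from disk directly instead of routing those requests to Node.js. The location block below is a minimal sketch that assumes a hypothetical /var/www/crypto-stats/public directory:
server {
    server_name <your_domain>;

    location /static/ {
        # "root" appends the request URI, so /static/app.js is served from
        # /var/www/crypto-stats/public/static/app.js (hypothetical path)
        root /var/www/crypto-stats/public;
        expires 7d;  # allow browsers to cache static assets for a week
    }
}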
Before we install and set up NGINX on our Linux server, let's create a Node.js application in the next step.
In this step, you will set up a basic Node.js application that will be used to demonstrate the concepts discussed in this article. This application will provide a single endpoint for retrieving price change statistics for various cryptocurrencies in the last 24 hours. It utilizes a free API from Binance as the data source.
Create a directory on your filesystem for this demo Node.js project and change into it:
mkdir crypto-stats && cd crypto-stats
Initialize your project with a package.json file:
npm init -y
Afterwards, install the necessary dependencies: fastify as the web server framework, got for making HTTP requests, and node-cache for in-memory caching.
npm install fastify got node-cache
Once the installation completes, create a new server.js file in the root of your project directory and open it in a text editor:
nano server.js
Go ahead and populate the file with the following code, which sets up a /crypto endpoint that retrieves the price change statistics and caches them for five minutes:
const fastify = require("fastify")({
  logger: true,
});
const got = require("got");
const NodeCache = require("node-cache");

const appCache = new NodeCache();

fastify.get("/crypto", async function (_req, res) {
  try {
    // Serve the cached statistics if they haven't expired yet
    let tickerPrice = appCache.get("24hrTickerPrice");

    if (tickerPrice == null) {
      const response = await got("https://api2.binance.com/api/v3/ticker/24hr");
      tickerPrice = response.body;
      // Cache the response body for five minutes (300 seconds)
      appCache.set("24hrTickerPrice", tickerPrice, 300);
    }

    res
      .header("Content-Type", "application/json; charset=utf-8")
      .send(tickerPrice);
  } catch (err) {
    fastify.log.error(err);
    const statusCode = err.response ? err.response.statusCode : 500;
    const body = err.response ? err.response.body : "Internal Server Error";
    res.code(statusCode).send(body);
  }
});

fastify.listen(3000, (err, address) => {
  if (err) {
    fastify.log.error(err);
    process.exit(1);
  }
});
Save and close the file, then return to your terminal and run the following command to start the server on port 3000:
node server.js
You should see the following output, indicating that the server started successfully:
{"level":30,"time":1638163169765,"pid":3474,"hostname":"Kreig","msg":"Server listening at <http://127.0.0.1:3000>"}
Now that a running Node.js application is in place, let's go ahead and install the NGINX server in the next section.
In this step, you will install NGINX on your server through its package manager.
Since NGINX is already available in the default Ubuntu repositories, you should first update the local package index and then install the nginx package.
Run the following commands in a separate terminal instance:
sudo apt update
sudo apt install nginx
After the installation is complete, run the following command to confirm that it was installed successfully and to see the installed version:
nginx -v
You should observe the following output:
nginx version: nginx/1.18.0 (Ubuntu)
If you cannot install NGINX successfully using the method described above, try the alternative procedures listed on the NGINX installation guide and ensure that you're able to install NGINX before proceeding.
After installing NGINX, Ubuntu should enable and start it automatically. You can confirm that the nginx service is up and running through the command below:
sudo systemctl status nginx
The following output indicates that the service started successfully:
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2021-11-28 12:17:36 UTC; 6s ago
Docs: man:nginx(8)
Process: 532819 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/S>
Process: 532829 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Main PID: 532831 (nginx)
Tasks: 2 (limit: 1136)
Memory: 5.7M
CGroup: /system.slice/nginx.service
├─532831 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
└─532832 nginx: worker process
Nov 28 12:17:36 ubuntu-20-04 systemd[1]: Starting A high performance web server and a reverse proxy server...
If you're running a system firewall, don't forget to allow access to NGINX before proceeding:
sudo ufw allow 'Nginx Full'
You can now open your server's IP address in the browser to verify that everything is working. You should see the default NGINX landing page.
If you're not sure about your server's public IP address, run the command below to print it to the standard output:
curl -4 icanhazip.com
Now that you've successfully installed and enabled NGINX, you can proceed to the next step where it will be configured as a reverse proxy for your Node.js application.
In this step, you will create a server block configuration file for your application in the NGINX sites-available directory and set up NGINX to proxy requests to your application.
First, change into the /etc/nginx/sites-available/ directory:
cd /etc/nginx/sites-available/
Create a new file in this directory named after the domain on which you wish to expose your application, and open it in your text editor. This tutorial will use your_domain, but be sure to replace it with your actual domain (if available):
nano your_domain
Once open, populate the file with the following NGINX server block:
server {
    server_name <your_domain>;

    location / {
        proxy_pass http://localhost:3000;
    }
}
If you don't have a domain name for your application, you can use your server's public IP address instead:
server {
    server_name <your_server_ip>;

    location / {
        proxy_pass http://localhost:3000;
    }
}
The server block above defines a virtual server used to handle requests of a defined type. The server_name directive specifies the domain name (or IP address) that the virtual server responds to, while the location block defines how NGINX should handle requests for the specified URI. Finally, the proxy_pass directive directs all requests for the root location to the specified address.
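In practice, you may also want to forward some information about the original client to your Node.js application, since the proxied request otherwise appears to come from localhost. The variation below is a sketch using standard proxy_set_header directives; the header names shown are conventional, but adjust them to whatever your application actually reads:
server {
    server_name <your_domain>;

    location / {
        # Preserve the original Host header and client details for the Node.js app
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_pass http://localhost:3000;
    }
}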
Once you've saved the file, head back to your terminal and create a symbolic link (symlink) to this your_domain file in the /etc/nginx/sites-enabled directory:
sudo ln -s /etc/nginx/sites-available/your_domain /etc/nginx/sites-enabled/your_domain
The difference between the sites-available and sites-enabled directories is that the former stores all of your virtual host (website) configurations, whether or not they're currently enabled, while the latter contains symlinks to files in the sites-available folder so that you can selectively disable a virtual host by removing its symlink.
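For example, if you later want to take a site offline without deleting its configuration, removing its symlink and reloading NGINX is enough (the filename below assumes the your_domain name used in this tutorial):
sudo rm /etc/nginx/sites-enabled/your_domain
sudo nginx -s reload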
Before your changes can take effect, you need to reload the NGINX configuration as shown below:
sudo nginx -s reload
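If the reload fails, or you want to catch syntax errors before applying future changes, you can validate the configuration first:
sudo nginx -t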
In the next step, we'll test the NGINX reverse proxy by making requests to the running app through the server's public IP address or connected domain to confirm that it works as expected.
At this point, you should be able to access your Node.js application via the domain or public IP address of the Ubuntu server. Run the command below to access the /crypto endpoint with curl:
curl <your_domain>/crypto
or
curl <your_server_ip>/crypto
You should see the following output (truncated):
[
{
"symbol":"ETHBTC",
"priceChange":"0.00114000",
"priceChangePercent":"1.531",
"weightedAvgPrice":"0.07509130",
"prevClosePrice":"0.07445100",
"lastPrice":"0.07558400",
"lastQty":"0.06960000",
"bidPrice":"0.07559700",
"bidQty":"1.34580000",
"askPrice":"0.07559800",
"askQty":"4.62410000",
"openPrice":"0.07444400",
"highPrice":"0.07580100",
"lowPrice":"0.07432200",
"volume":"61307.31800000",
"quoteVolume":"4603.64643133",
"openTime":1638075364169,
"closeTime":1638161764169,
"firstId":311613024,
"lastId":311773622,
"count":160599
},
{
"symbol":"LTCBTC",
"priceChange":"-0.00001900",
"priceChangePercent":"-0.544",
"weightedAvgPrice":"0.00348225",
"prevClosePrice":"0.00348900",
"lastPrice":"0.00347100",
"lastQty":"3.11600000",
"bidPrice":"0.00347100",
"bidQty":"3.85200000",
"askPrice":"0.00347200",
"askQty":"20.40000000",
"openPrice":"0.00349000",
"highPrice":"0.00353000",
"lowPrice":"0.00341900",
"volume":"90987.24300000",
"quoteVolume":"316.84041690",
"openTime":1638075364074,
"closeTime":1638161764074,
"firstId":74054439,
"lastId":74085858,
"count":31420
}
]
Once you can access your Node.js application in the manner described above, you've successfully set up NGINX as a reverse proxy for your application.
Load balancing refers to the process of distributing incoming traffic across multiple servers so that the workload is spread evenly between them. The main benefit of load balancing is that it improves the responsiveness and availability of the application.
In this step, you'll use the pm2 process manager to create multiple independent instances of your Node.js application and configure NGINX to distribute incoming requests evenly between them.
Return to your Node.js project directory in the terminal, and run the following command to install the pm2 package:
npm install pm2@latest
Afterward, open the server.js file in your text editor:
nano server.js
Then change the fastify.listen() call at the bottom of the file as shown below:
. . .

// pm2 sets NODE_APP_INSTANCE to 0, 1, 2, ... for each instance; default to 0 when run directly
fastify.listen(3000 + Number(process.env.NODE_APP_INSTANCE || 0), (err, address) => {
  if (err) {
    fastify.log.error(err);
    process.exit(1);
  }
});
The NODE_APP_INSTANCE environment variable is set by pm2 to a number (0, 1, 2, and so on) that differentiates between the running instances of the application. Since no two instances spawned by pm2 can have the same number, each one will listen on a different port on the server.
Save and close the file, then kill the previous server instance with Ctrl-C before running the command below to start the application in cluster mode using the total number of available CPU cores on your server:
npx pm2 start server.js -i max --name "cryptoStats"
You should observe a similar output to the one below:
[PM2] Starting /home/ayo/crypto-stats/server.js in cluster_mode (0 instance)
[PM2] Done.
┌────┬────────────────────┬──────────┬──────┬───────────┬──────────┬──────────┐
│ id │ name │ mode │ ↺ │ status │ cpu │ memory │
├────┼────────────────────┼──────────┼──────┼───────────┼──────────┼──────────┤
│ 0 │ cryptoStats │ cluster │ 0 │ online │ 0% │ 48.1mb │
│ 1 │ cryptoStats │ cluster │ 0 │ online │ 0% │ 44.6mb │
│ 2 │ cryptoStats │ cluster │ 0 │ online │ 0% │ 43.5mb │
│ 3 │ cryptoStats │ cluster │ 0 │ online │ 0% │ 33.1mb │
└────┴────────────────────┴──────────┴──────┴───────────┴──────────┴──────────┘
Afterward, check the logs to see the ports where the Node.js application instances are running:
npx pm2 logs
A subset of the output for the above command is shown below:
. . .
0|cryptoSt | {"level":30,"time":1638172796810,"pid":29333,"hostname":"Kreig","msg":"Server listening at <http://127.0.0.1:3000>"}
0|cryptoSt | {"level":30,"time":1638172796859,"pid":29340,"hostname":"Kreig","msg":"Server listening at <http://127.0.0.1:3001>"}
0|cryptoSt | {"level":30,"time":1638172796917,"pid":29349,"hostname":"Kreig","msg":"Server listening at <http://127.0.0.1:3002>"}
0|cryptoSt | {"level":30,"time":1638172797000,"pid":29362,"hostname":"Kreig","msg":"Server listening at <http://127.0.0.1:3003>"}
In this case, the application has four instances listening on ports 3000, 3001, 3002, and 3003. Armed with this information, we can now configure NGINX as a load balancer. Return to the /etc/nginx/sites-available directory:
cd /etc/nginx/sites-available
Open the your_domain file in your text editor:
nano your_domain
Update the file as shown below:
upstream cryptoStats {
    server localhost:3000;
    server localhost:3001;
    server localhost:3002;
    server localhost:3003;
}

server {
    server_name <your_domain_or_server_ip>;

    location / {
        proxy_pass http://cryptoStats;
    }
}
In the example above, there are four instances of the Node.js application running on ports 3000 to 3003. All requests are proxied to the cryptoStats server group, and NGINX applies load balancing to distribute the requests. Note that when the load balancing method is not specified, it defaults to round-robin.
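If round-robin isn't a good fit for your workload, NGINX supports other methods that you can enable with a single directive inside the upstream block. For example, least_conn sends each request to the instance with the fewest active connections, and ip_hash pins each client to the same instance based on its IP address. The sketch below shows least_conn; swap in whichever method suits your application:
upstream cryptoStats {
    least_conn;  # route each request to the instance with the fewest active connections
    server localhost:3000;
    server localhost:3001;
    server localhost:3002;
    server localhost:3003;
}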
Be sure to reload the NGINX configuration once again to apply your changes:
sudo nginx -s reload
At this point, incoming requests to the domain or IP address will now be evenly distributed across all specified servers in a round-robin fashion.
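If you'd like to see the distribution in action, one informal check (assuming the setup above) is to send a handful of requests to the endpoint and then watch the pm2 logs, where the incoming request entries should be spread across the different instance IDs:
for i in $(seq 1 10); do curl -s <your_domain>/crypto > /dev/null; done
npx pm2 logs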
Head over to Better Uptime and start monitoring your endpoints in 2 minutes.
In this tutorial, you learned how to set up NGINX as a reverse proxy for a Node.js application. You also utilized its load balancing feature to distribute traffic across multiple application instances, another recommended practice for production-ready applications. Of course, NGINX can do a lot more than what we covered in this article, so be sure to read through its documentation to learn more about how you can use its extensive features to achieve various results.
Thanks for reading, and happy coding!