This article demonstrates how to build cloud-native Node.js applications using Kubernetes (K8s) and distributed SQL.
Developing scalable and reliable applications is a labor of love. A cloud-native system might include unit tests, integration tests, build tests, and a complete pipeline for building and deploying applications at the click of a button.
Some intermediate steps may be required to deliver a reliable product. As distributed and containerized applications flood the market, so do container orchestration tools like Kubernetes. Kubernetes allows us to build distributed applications across clusters of nodes, with fault tolerance, self-healing, and load balancing, among many other features.
Let’s explore some of these tools by building a distributed to-do list application in Node.js powered by the YugabyteDB distributed SQL database.
Getting Started
Production deployment may involve setting up a complete CI/CD pipeline to push containerized builds to Google Container Registry to run on Google Kubernetes Engine or similar cloud services.
For demonstration purposes, let's focus on running a similar stack locally. We will develop a simple Node.js server, build it as a Docker image, and run it on Kubernetes on our machine. We will use this Node.js server to connect to a YugabyteDB distributed SQL cluster and return records from a REST endpoint.
Install dependencies
We first install some dependencies to build and run our application:
- Docker: used to build container images, which we will host locally.
- Minikube: used to create a local Kubernetes cluster to run our distributed application.
- kubectl: the Kubernetes command-line tool, used to deploy to and inspect the cluster.
YugabyteDB Managed
Next, we create a YugabyteDB Managed account and launch a cluster in the cloud. YugabyteDB is compatible with PostgreSQL, so you can run a PostgreSQL database elsewhere, or run YugabyteDB locally if desired.
For high availability, I created a 3-node database cluster running on AWS, but for demonstration purposes, a free single-node cluster works fine.
Seed our database
Once our database is up and running in the cloud, it's time to create some tables and records. YugabyteDB Managed has a cloud shell that can be used to connect via a web browser, but I chose to use the YugabyteDB client shell on my local machine.
Before connecting, we need to download the root certificate from the cloud console.
I created a SQL script, db.sql, that creates a todos table and some records.
CREATE TYPE todo_status AS ENUM ('complete', 'in-progress', 'incomplete');

CREATE TABLE todos (
  id serial PRIMARY KEY,
  description varchar(255),
  status todo_status
);

INSERT INTO todos (description, status)
VALUES
  ('Learn how to connect services with Kubernetes', 'incomplete'),
  ('Build container images with Docker', 'incomplete'),
  ('Provision multi-region distributed SQL database', 'incomplete');
We can use this script to seed our database.
> ./ysqlsh "user=admin \
host=<DATABASE_HOST> \
sslmode=verify-full \
sslrootcert=$PWD/root.crt" -f db.sql
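Before moving on, we can sanity-check the seed from the same shell. ysqlsh accepts an inline command with -c, just like psql (a sketch, reusing the connection details above):

```shell
./ysqlsh "user=admin \
  host=<DATABASE_HOST> \
  sslmode=verify-full \
  sslrootcert=$PWD/root.crt" -c "SELECT count(*) FROM todos;"
```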
After seeding our database, we can connect to it via Node.js.
Building a Node.js server
Connecting to our database is simple using the node-postgres driver. YugabyteDB builds on top of this library with the YugabyteDB Node.js smart driver, which has additional features that unlock distributed SQL capabilities, including load balancing and topology awareness.
> npm install express
> npm install @yugabytedb/pg
const express = require("express");
const fs = require("fs");
const { Pool } = require("@yugabytedb/pg");

const app = express();

// Connection settings for the YugabyteDB cluster (YSQL listens on port 5433).
const config = {
  user: "admin",
  host: "<DATABASE_HOST>",
  password: "<DATABASE_PASSWORD>",
  port: 5433,
  database: "yugabyte",
  min: 5,
  max: 10,
  idleTimeoutMillis: 5000,
  connectionTimeoutMillis: 5000,
  ssl: {
    rejectUnauthorized: true,
    ca: fs.readFileSync("./root.crt").toString(),
    servername: "<DATABASE_HOST>",
  },
};

const pool = new Pool(config);

// Return all to-do records from the database.
app.get("/todos", async (req, res) => {
  try {
    const data = await pool.query("select * from todos");
    res.json({ status: "OK", data: data?.rows });
  } catch (e) {
    console.log("error in selecting todos from db", e);
    res.status(400).json({ error: e });
  }
});

app.listen(8000, () => {
  console.log("App listening on port 8000");
});
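The load balancing and topology awareness mentioned earlier are opt-in. A minimal sketch of a connection config that enables them, using the loadBalance and topologyKeys options of the YugabyteDB smart driver (the AWS region value here is an assumption):

```javascript
// Sketch: extra smart-driver connection options (region value is an assumption).
const lbConfig = {
  user: "admin",
  host: "<DATABASE_HOST>",
  port: 5433,
  database: "yugabyte",
  loadBalance: true,               // spread new connections across all cluster nodes
  topologyKeys: "aws.us-east-1.*", // prefer nodes in this region/zone
};
```

Passing a config like this to new Pool(...) distributes connections across the cluster instead of pinning them all to the host named in the connection string.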
Containerize our Node.js application
To run our Node.js application in Kubernetes, we first need to build a container image. Create a Dockerfile in the same directory.
FROM node:latest
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 8000
ENTRYPOINT [ "npm", "start" ]
All of our server dependencies will be built into the container image. To run our application with npm start, update your package.json file with the startup script.
…
"scripts": {
  "start": "node index.js"
}
…
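For reference, the whole package.json might look like this (a sketch; the name and dependency version ranges are assumptions):

```
{
  "name": "todo-list-app",
  "version": "1.0.0",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "express": "latest",
    "@yugabytedb/pg": "latest"
  }
}
```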
Now, we are ready to build our image using Docker.
> docker build -t todo-list-app .
Sending build context to Docker daemon 458.4MB
Step 1/6 : FROM node:latest
---> 344462c86129
Step 2/6 : WORKDIR /app
---> Using cache
---> 49f210e25bbb
Step 3/6 : COPY . .
---> Using cache
---> 1af02b568d4f
Step 4/6 : RUN npm install
---> Using cache
---> d14416ffcdd4
Step 5/6 : EXPOSE 8000
---> Using cache
---> e0524327827e
Step 6/6 : ENTRYPOINT [ "npm", "start" ]
---> Using cache
---> 09e7c61855b2
Successfully built 09e7c61855b2
Successfully tagged todo-list-app:latest
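Note the 458.4 MB build context above: Docker copies everything in the directory, including node_modules, to the daemon before building. A .dockerignore file next to the Dockerfile keeps the context small and lets the npm install inside the image be authoritative (a minimal sketch):

```
node_modules
npm-debug.log
.git
```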
Our application is now packaged and ready to run in Kubernetes.
Run Kubernetes locally using Minikube
To run a Kubernetes environment locally, we will run Minikube, which will create a Kubernetes cluster inside a Docker container running on our machine.
> minikube start
That's easy! Now we can deploy our application from a Kubernetes configuration file using the kubectl command-line tool.
Deploy to Kubernetes
First, we create a configuration file called kubeConfig.yaml, which will define the components of the cluster. A Kubernetes Deployment is used to keep pods running and up to date. Here, we define a deployment that runs replicas of the todo-app container image we built with Docker.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: todo-app-deployment
  labels:
    app: todo-app
spec:
  selector:
    matchLabels:
      app: todo-app
  replicas: 3
  template:
    metadata:
      labels:
        app: todo-app
    spec:
      containers:
        - name: todo-server
          image: todo-list-app
          ports:
            - containerPort: 8000
          imagePullPolicy: Never
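One note on imagePullPolicy: Never: Kubernetes will not pull the image from a registry, so the image must already exist inside Minikube's container runtime. Two standard Minikube options for getting it there (a sketch, assuming the todo-list-app tag built earlier):

```shell
# Option 1: copy the locally built image into the Minikube cluster
minikube image load todo-list-app

# Option 2: point the shell at Minikube's Docker daemon, then rebuild there
eval $(minikube docker-env)
docker build -t todo-list-app .
```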
In the same file, we will create a Kubernetes service that sets network rules for our application and exposes it to clients.
---
apiVersion: v1
kind: Service
metadata:
  name: todo-app-service
spec:
  type: NodePort
  selector:
    app: todo-app
  ports:
    - name: todo-app-service-port
      protocol: TCP
      port: 8000
      targetPort: 8000
      nodePort: 30100
Let's use our configuration file to create our todo-app-deployment and todo-app-service. This creates a networked cluster that is resilient to failures and orchestrated by Kubernetes!
> kubectl create -f kubeConfig.yaml
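We can confirm that the deployment's three replicas and the service came up before tunneling in (standard kubectl commands):

```shell
kubectl get deployments todo-app-deployment
kubectl get pods -l app=todo-app
kubectl get service todo-app-service
```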
Access our application in Minikube
We can use Minikube to open a tunnel from our host machine to the service.
> minikube service todo-app-service --url
Starting tunnel for service todo-app-service.
Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
We can find the tunnel port by executing the following command.
> ps -ef | grep docker@127.0.0.1
503 2363 2349 0 9:34PM ttys003 0:00.01 ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -N docker@127.0.0.1 -p 53664 -i /Users/bhoyer/.minikube/machines/minikube/id_rsa -L 63650:10.107.158.206:8000
The output shows that our tunnel is running on port 63650. We can access our /todos endpoint at this URL in the browser or with an HTTP client.
> curl -X GET http://127.0.0.1:63650/todos -H 'Content-Type: application/json'
{"status":"OK","data":[{"id":1,"description":"Learn how to connect services with Kubernetes","status":"incomplete"},{"id":2,"description":"Build container images with Docker","status":"incomplete"},{"id":3,"description":"Provision multi-region distributed SQL database","status":"incomplete"}]}