Eden Federman for Odigos
πŸŽ‰Monitor your JavaScript application like a proπŸ§™β€β™‚οΈπŸ’«

TL;DR

In this tutorial, you will learn how to monitor your JavaScript application with modern tools and best practices.

Explore the power of distributed tracing, and discover how to seamlessly integrate and utilize tools like Odigos and Jaeger to enhance your monitoring capabilities.

What you will learn: ✨

  • How to build microservices 🐜 in JavaScript.
  • Setting up Docker containers πŸ“¦ for microservices.
  • Configuring Kubernetes ☸️ for managing microservices.
  • Integrating a tracing backend for visualizing the traces πŸ”.

Are you ready to become a pro at monitoring your JS application? 😍 Say Yes, sir!

I can't hear you. Say it louder. πŸ™‰

Say it louder GIF


Let's set it up πŸ¦„

🚨 In this section of the blog, we'll be building a dummy JavaScript microservices application and deploying it on local Kubernetes. If you already have one and are following along, feel free to skip this part.

Create the initial folder structure for your application as shown below. πŸ‘‡πŸ»

mkdir microservices-demo
cd microservices-demo
mkdir src
cd src

Setting up the Server πŸ–₯️

πŸ‘€ For demonstration purposes, I will create two microservices that will communicate with each other, and eventually, we can use that to visualize distributed tracing.

  • Build and Dockerize Microservice 1

Inside the /src folder, create a new folder /microservice-1. Inside it, initialize a Node.js project and install the required dependencies.

mkdir microservice-1
cd microservice-1
npm init -y
# node-fetch v3 is ESM-only; pin v2 so it can be loaded with require()
npm install --save express node-fetch@2

Create a new file index.js and add the following code:

// πŸ‘‡πŸ»/src/microservice-1/index.js
const express = require("express");
const fetch = require("node-fetch");

const app = express();
const PORT = 3001;

app.use(express.json());

app.get("/", async (req, res) => {
  try {
    const response = await fetch("http://microservice2:8081/api/data");
    const data = await response.json();
    res.json({
      data: "Microservice 2 data received in Microservice 1",
      microservice2Data: data,
    });
  } catch (error) {
    console.error(error.message);
    res.status(500).json({ error: "Internal Server Error" });
  }
});

app.listen(PORT, () => {
  console.log(`Microservice 1 listening on port ${PORT}`);
});


πŸ’‘ If you've noticed, we're requesting data from http://microservice2:8081/api/data. You might be wondering, what is this microservice2? In Kubernetes, Service names double as DNS hostnames inside the cluster. πŸ˜‰ We will build this service later.

The server listens on port 3001, and on a GET request to /, it requests data from microservice2 and returns the combined response as a JSON object. πŸ“¦
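To make the shape of that combined payload concrete, here's a minimal sketch you can run with plain Node, using a hypothetical stub in place of the real HTTP call to microservice2 (no cluster needed):

```javascript
// Hypothetical stub standing in for the real network call to microservice2.
const fetchMicroservice2Data = async () => [{ id: 1, name: "Leanne Graham" }];

// Mirrors the shape microservice1's "/" handler sends back.
const buildResponse = async () => ({
  data: "Microservice 2 data received in Microservice 1",
  microservice2Data: await fetchMicroservice2Data(),
});

buildResponse().then((payload) => console.log(JSON.stringify(payload, null, 2)));
```

The real handler does exactly this, except the inner data comes over the network, which is the hop distributed tracing will make visible later.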

Now, it's time to dockerize this microservice. Create a new Dockerfile inside the /microservice-1 folder and add the following code:

// πŸ‘‡πŸ»/src/microservice-1/Dockerfile
FROM node:18

# Use /usr/src/app as the working directory
WORKDIR /usr/src/app

# Copy package files and install production dependencies
COPY --chown=node:node package*.json /usr/src/app
RUN npm install --production

# Copy the rest of the files
COPY --chown=node:node . /usr/src/app/

# Switch to the user node with limited permissions
USER node

# Expose the application port
EXPOSE 3001

# Set the default command to run the application
CMD ["node", "index.js"]


It is always a good idea to keep files we don't want copied into the image out of the build context. Create a .dockerignore file listing them.

// πŸ‘‡πŸ»/src/microservice-1/.dockerignore
node_modules
Dockerfile

Finally, build πŸ—οΈ the Docker image by running the following command:

docker build -t microservice1-image:latest .

Now, that is the entire setup for our first microservice. ✨

  • Build and Dockerize Microservice 2

We will have a setup similar to microservice1, with just a few changes here and there.

Inside the /src folder, create a new folder /microservice-2. Inside the folder, initialize a Node.js project and install the required dependencies.

mkdir microservice-2
cd microservice-2
npm init -y
# node-fetch v3 is ESM-only; pin v2 so it can be loaded with require()
npm install --save express node-fetch@2

Create a new file index.js and add the following code:

// πŸ‘‡πŸ»/src/microservice-2/index.js
const express = require("express");
const fetch = require("node-fetch");

const app = express();
const PORT = 3002;

app.use(express.json());

app.get("/api/data", async (req, res) => {
  const url = "https://jsonplaceholder.typicode.com/users";

  try {
    const response = await fetch(url);
    const data = await response.json();
    res.json(data);
  } catch (error) {
    console.error(error.message);
    res.status(500).json({ error: "Internal Server Error" });
  }
});

app.listen(PORT, () => {
  console.log(`Microservice 2 listening on port ${PORT}`);
});


The server is listening on port 3002, and upon a GET request to /api/data, we fetch data from jsonplaceholder and return the response as a JSON object. πŸ“¦

Now, it's time to dockerize this microservice as well. Copy the entire Dockerfile content from microservice1 and change only the exposed port from 3001 to 3002.
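For reference, the result should look like this (identical to the first Dockerfile except for the port):

```dockerfile
# πŸ‘‡πŸ»/src/microservice-2/Dockerfile
FROM node:18

# Use /usr/src/app as the working directory
WORKDIR /usr/src/app

# Copy package files and install production dependencies
COPY --chown=node:node package*.json /usr/src/app
RUN npm install --production

# Copy the rest of the files
COPY --chown=node:node . /usr/src/app/

# Switch to the user node with limited permissions
USER node

# Expose the application port
EXPOSE 3002

# Set the default command to run the application
CMD ["node", "index.js"]
```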

Also, add a .dockerignore file and include the same files that we added when creating microservice1.

Finally, build πŸ—οΈ the Docker image by running the following command:

docker build -t microservice2-image:latest .

Now, that is the entire setup for our second microservice as well. ✨

  • Setting up Kubernetes

Make sure Minikube is installed, or follow this link for installation instructions. πŸ‘€

Create a new local Kubernetes cluster by running the following command. We will need it when setting up Odigos and Jaeger.

Start Minikube: πŸš€

minikube start

Now that we have both of our microservices ready and dockerized, it's time to set up Kubernetes for managing these services.

At the root of the project, create a new folder /k8s/manifests. Inside this folder, we will add deployment and service configurations for both of our microservices.

  • Deployment Configuration πŸ“œ: For actually deploying the containers on the Kubernetes cluster.
  • Service Configuration πŸ“„: To expose the pods both within and outside the cluster.

First, let's create the manifest for microservice1. Create a new file microservice1-deployment-service.yaml and add the following content:

// πŸ‘‡πŸ»/k8s/manifests/microservice1-deployment-service.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice1
spec:
  selector:
    matchLabels:
      app: microservice1
  template:
    metadata:
      labels:
        app: microservice1
    spec:
      containers:
        - name: microservice1
          image: microservice1-image
          # Make sure to set it to Never, or else it will pull from the docker hub and fail.
          imagePullPolicy: Never
          resources:
            limits:
              memory: "200Mi"
              cpu: "500m"
          ports:
            - containerPort: 3001
---
apiVersion: v1
kind: Service
metadata:
  name: microservice1
  labels:
    app: microservice1
spec:
  type: NodePort
  selector:
    app: microservice1
  ports:
    - port: 8080
      targetPort: 3001
      nodePort: 30001

This configuration deploys a microservice named microservice1 with resource limits of 200MiB memory πŸ—ƒοΈ and 0.5 CPU cores. The container listens on port 3001; the Service exposes it inside the cluster on port 8080 and outside the cluster on NodePort 30001.

πŸ€” Remember the Dockerfile we built with the name microservice1-image? We are using the same image to create the container.

We assume microservice1-image is available locally, which is why imagePullPolicy is set to Never. Without that, Kubernetes would try to pull the image from Docker Hub πŸ‹ and fail.

Now, let's create the manifest for microservice2. Create a new file named microservice2-deployment-service.yaml and add the following content:

// πŸ‘‡πŸ»/k8s/manifests/microservice1-deployment-service.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice2
spec:
  selector:
    matchLabels:
      app: microservice2
  template:
    metadata:
      labels:
        app: microservice2
    spec:
      containers:
        - name: microservice2
          image: microservice2-image
          # Make sure to set it to Never, or else it will pull from the docker hub and fail.
          imagePullPolicy: Never
          resources:
            limits:
              memory: "200Mi"
              cpu: "500m"
          ports:
            - containerPort: 3002
---
apiVersion: v1
kind: Service
metadata:
  name: microservice2
  labels:
    app: microservice2
spec:
  type: NodePort
  selector:
    app: microservice2
  ports:
    - port: 8081
      targetPort: 3002
      nodePort: 30002

It is similar to the manifest for microservice1, with just a few changes. πŸ‘€

This configuration deploys a microservice named microservice2. The container listens on port 3002; the Service exposes it inside the cluster on port 8081 and outside the cluster on NodePort 30002.

Again, we assume microservice2-image is available locally, hence imagePullPolicy: Never.
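One caveat before applying the manifests: with imagePullPolicy: Never, the images must exist inside Minikube's container runtime, not just in your host's Docker daemon. If you built them on the host (as above), one way to make them visible to the cluster is:

```shell
# Load the locally built images into the Minikube node's image cache.
minikube image load microservice1-image:latest
minikube image load microservice2-image:latest

# Alternatively, point your shell at Minikube's Docker daemon *before* building:
# eval $(minikube docker-env)
```

Whether this step is needed depends on your Minikube driver and how you built the images; if the pods below come up ErrImageNeverPull, this is the likely cause.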

Once this is all done, apply these configurations to start both services on the Kubernetes cluster. Change the directory to /k8s/manifests and execute the following commands: πŸ‘‡πŸ»

kubectl apply -f microservice1-deployment-service.yaml
kubectl apply -f microservice2-deployment-service.yaml

Check that both pods are Running by executing the following command: πŸ‘‡πŸ»

kubectl get pods

Kubernetes Pods

Finally, our application is ready and deployed on Kubernetes with the necessary deployment configurations. πŸŽ‰
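As a quick sanity check, you can hit the NodePorts directly (assuming Minikube's node IP is reachable from your host; on some setups, such as the Docker driver on macOS, you may need `minikube service` instead):

```shell
# Ask Minikube for the cluster node's IP.
NODE_IP=$(minikube ip)

# microservice2 alone: should return the jsonplaceholder users list.
curl -s "http://${NODE_IP}:30002/api/data"

# microservice1: should return its own message plus microservice2's data.
curl -s "http://${NODE_IP}:30001/"

# If the node IP is not directly routable, let Minikube open a URL for you:
# minikube service microservice1 --url
```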


Installing Odigos 😍

πŸ’‘ Odigos is an open-source observability control plane that enables organizations to create and maintain their observability pipeline.

Odigos - Monitoring Tool

ℹ️ If you are running on a Mac, run the following command to install Odigos locally.

brew install keyval-dev/homebrew-odigos-cli/odigos

ℹ️ If you are on a Linux machine, consider installing it from GitHub releases by executing the following commands. Make sure to change the file according to your Linux distribution.

ℹ️ If the Odigos binary is not executable, run this command chmod +x odigos to make it executable before running the install command.

curl -LJO https://github.com/keyval-dev/odigos/releases/download/v1.0.9/cli_1.0.9_linux_amd64.tar.gz
tar -xvzf cli_1.0.9_linux_amd64.tar.gz
./odigos install

Odigos Installation

If you need more detailed instructions on its installation, follow this link.

Now, Odigos is ready to run πŸŽ‰. We can execute its UI, configure the tracing backend, and send traces accordingly.


Connect Odigos with a Tracing Backend πŸ’«

πŸ’‘ Jaeger is an open source, end-to-end distributed tracing system.

Odigos - Distributed Tracing Platform

Setting up Jaeger! ✨

For this tutorial, we will use Jaeger πŸ•΅οΈβ€β™‚οΈ, a popular open-source platform for viewing distributed traces in a microservices application. We will use it to view the traces generated by Odigos.

For Jaeger installation instructions, follow this link. πŸ‘€

To deploy Jaeger on a Kubernetes cluster, run the following commands: πŸ‘‡πŸ»

kubectl create ns tracing
kubectl apply -f https://raw.githubusercontent.com/keyval-dev/opentelemetry-go-instrumentation/master/docs/getting-started/jaeger.yaml -n tracing

Here, we are creating a tracing namespace and applying the deployment configuration πŸ“ƒ for Jaeger in that namespace.

This command sets up the self-hosted Jaeger instance and its service. πŸ‘€

Run the below command to get the status of the running pods: πŸ‘‡πŸ»

kubectl get pods -A -w

Wait for all three pods to be Running before proceeding further.

Kubernetes Pods

Now, to view the Jaeger Interface πŸ’» locally, we need to port-forward: forward traffic from port 16686 on the local machine to port 16686 of the Jaeger service within the Kubernetes cluster.

kubectl port-forward -n tracing svc/jaeger 16686:16686

This command creates a tunnel between the local machine and the Jaeger pod, exposing the Jaeger UI so you can interact with it.

Finally, open up http://localhost:16686 on your browser and see the Jaeger Instance running.

Jaeger UI

Setting up Odigos to work with Jaeger! 🌟

ℹ️ For Linux users, go to the folder where you downloaded the Odigos binaries from GitHub releases and run the following command to launch the Odigos UI.

./odigos ui

ℹ️ For Mac users, just run:

odigos ui

Visit http://localhost:3000 and you will be presented with the Odigos interface where you will see both your deployments in the default namespace.

Odigos Landing Page

Select both of these and click Next. On the next page, choose Jaeger as the backend, and add the following details when prompted:

  • Destination Name πŸ›£οΈ: Give it any name you like, say express-tracing.
  • Endpoint 🎯: Add jaeger.tracing:4317 for the endpoint.

And that's it β€” Odigos is all set to send traces to our Jaeger backend. It's that simple. 🀯

Odigos UI with two microservices


View the Distributed Tracing 🧐

After setting up Odigos, on the Jaeger homepage at http://localhost:16686, you will already see both of our microservices listed.

Jaeger UI listing two microservices

Odigos has already begun sending traces of our application to Jaeger. πŸ˜‰

Remember, this is our microservices application. Make a few more requests to microservice1; since it serves as the entry point, it will in turn request data from microservice2 and return it. Eventually, Jaeger will populate with the traces.
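To generate a handful of traces quickly, you can loop a few requests against microservice1 (assuming the NodePort setup from earlier and a reachable `minikube ip`):

```shell
NODE_IP=$(minikube ip)

# Fire 10 requests; each one fans out to microservice2 and produces a trace.
for i in $(seq 1 10); do
  curl -s "http://${NODE_IP}:30001/" > /dev/null
done
```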

Jaeger Distributed Tracing

Click on any one of the requests, and you should be able to observe how the request flows through your application and the time taken to complete each request.

This was all done without changing a single line of code. 🀯 All thanks to Odigos! 🀩

Mind Blown GIF

This was a small dummy application, but for a bigger application with tons of microservices running πŸƒπŸ»β€β™‚οΈ and interacting with each other, distributed tracing becomes extremely powerful! πŸ’ͺ

With distributed tracing, you can easily identify bottlenecks in your application and determine which service is causing problems or taking too long. πŸ•’


Let's Wrap Up! πŸ₯±

So far, you've learned how to closely monitor πŸ‘€ your JavaScript application with distributed tracing, using Odigos as the middleware between your application and the tracing backend, Jaeger. πŸ‘

If you have made it this far, give yourself a pat on the back. πŸ₯³ You deserve it! πŸ˜‰

If you found the article and the tools helpful, be sure to give Odigos and Jaeger a star 🌟 on their GitHub repositories.

The source code for this tutorial is available here:

https://github.com/keyval-dev/blog/tree/main/odigos-monitor-JS-like-a-pro

If you have any questions or suggestions about this article, please share them in the comments section below. πŸ‘‡πŸ»

So, that is it for this article. Thank you for reading! πŸŽ‰πŸ«‘
