6.2: Running the Demo Application


Transcript:

The first thing I'll do is navigate to our module six directory. We're going to start at the bottom of the stack and work our way up, so the first step is to deploy our database. I've got a number of tasks defined here in my Taskfile. First, I'll execute this run-postgres task. You can see it issues a docker run command: it passes the POSTGRES_PASSWORD environment variable (set to foobarbaz), creates a volume to store the underlying data, and maps that volume to the path within the container where Postgres expects its data to be stored. Finally, it connects port 5432 on my localhost to port 5432 in the container. The image we're running is Postgres version 16.3 on an Alpine base. Okay, I'm going to leave that terminal up and running and open a new one.

Now, as I mentioned, I'll be storing information about each request in a table in the database, so I need to create that table and its schema. I have this migration file here, with create users table up and create users table down scripts. The up migration creates the table, and if something went wrong, I would run the down migration to undo it. In here, you can see it creates a table named request in the public schema with two columns. The first is created_at, of type timestamp, which records when each row is created. The second is api_name, which will be either node or go depending on which API is calling.

To run this, I can run this migration task. As you can see, it first gets the container ID with a docker ps command, filtering down to the particular image I'm running. It then copies the migrations file from my local host system into that container.
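Taken together, these database steps (launching the container and applying the migration) can be sketched roughly as below. The volume name, migration filename, and the exact SQL (column types and defaults) are assumptions based on the description above, not copied from the repo, and the migration is inlined as a heredoc where the actual task copies a file in and runs it with -f.

```shell
# Terminal 1: run Postgres 16.3 on Alpine, password via env var, a volume
# for the data directory, and port 5432 mapped to localhost.
docker run \
  -e POSTGRES_PASSWORD=foobarbaz \
  -v pgdata:/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres:16.3-alpine

# Terminal 2: apply the up migration. Grab the container ID by filtering
# docker ps to the image, then execute the schema change with psql as the
# postgres user.
CONTAINER_ID=$(docker ps --filter "ancestor=postgres:16.3-alpine" -q)
docker exec -i "$CONTAINER_ID" psql -U postgres <<'SQL'
CREATE TABLE public.request (
  created_at TIMESTAMP NOT NULL DEFAULT NOW(),  -- set when a row is inserted
  api_name   TEXT      NOT NULL                 -- 'go' or 'node'
);
SQL
```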
And finally, it issues a docker exec command using the container ID grabbed in the first step, running the psql command line as the postgres user to execute the SQL script. Based on the logs, it successfully issued the CREATE TABLE command, so that table should now exist in my Postgres database.

Now, with my database running and the schema created, I can spin up my two back-end applications, moving one layer up the stack to our back-end APIs. The Go API lives in the api-go subdirectory. Its dependencies are defined in the go.mod file; I'm using two top-level dependencies, the Gin web framework and the pgx Postgres client. To install these dependencies locally, I can run the install task, which under the hood runs go mod tidy. In this case, I already had all the dependencies installed, so nothing happened; if you ran this for the first time, it would install them onto your system.

I can then run the application with the run task. Behind the scenes, it passes in a DATABASE_URL containing the credentials for my database as well as where on the network you can find it. Because I'm running Postgres in a Docker container mapped to a port on my localhost, I can connect to it at this address. Finally, it calls go run main.go, which compiles and runs the application. It's now listening on port 8000, so let me go ahead and access that. You can see it gives me the current time from the database and the number of requests I've made; each time I refresh, we get an updated time as well as an incremented request count. Great. In the console, we can see it logging 200 responses for the successful API requests, plus a 404 each time the browser requests the favicon, because I don't have a favicon defined.

Looking at the source code briefly, my main function is very simple. We load in that database URL, either from an environment variable or from a file.
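As a concrete sketch of those install and run tasks and the URL they inject (the connection-string details here are assumptions, matching the password used earlier):

```shell
cd api-go
go mod tidy   # what the install task runs; a no-op if deps are already present

# The run task injects the database URL and starts the server on port 8000.
DATABASE_URL="postgres://postgres:foobarbaz@localhost:5432/postgres" \
  go run main.go

# From another terminal, exercise the root endpoint:
curl http://localhost:8000/
```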
We then initialize a client to connect to the database, and finally set up two API endpoints. The first is the root endpoint: when the API is requested with no additional path, we do two things. One, we insert a row into the table saying, hey, a request was made. Two, we load the current time and select the request count from the database. Jumping into those two functions: in the first, we insert into the request table with an api_name of go, and the timestamp is picked up automatically; in the second, we select the current time and then count the rows where api_name matches the current API's name. So that's all there is to it with this API, aside from a health check endpoint that checks whether it can make a database connection and, if so, returns a 200.

We can now jump to the Node.js application. I'll leave this running and create a new terminal, navigate into the api-node subdirectory, start my devbox shell, and install our dependencies; this calls npm install. With the dependencies installed, we'll call npm run dev, passing in, again, our DATABASE_URL containing the connectivity information. This is now listening on port 3000. Let's just validate that it's working: we get back the current time and the number of requests, and as we request more and more, the request count climbs. As you can see, the request count is independent between the two APIs, which is what we want. We're seeing the same responses as with the Go application: 200s on the API requests and 404s (not found) on the favicon, because there is no favicon. Looking in my package.json, you can see that I'm using Express as my API server, morgan as a logging utility, and pg as a Postgres client.

So we now have our Postgres instance and our two back-end APIs running. Let's set up the React client, which lives in the client-react subdirectory.
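The client setup follows the same pattern as the APIs. A sketch of the commands, with no credentials required; the dev-server script name is an assumption:

```shell
cd client-react
devbox shell   # enter the devbox environment for this service
npm install    # what the install task runs
npm run dev    # serve the front end locally; it calls the two back-end APIs
```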
I'll start my devbox shell and install our dependencies, which runs an npm install, and then start the dev server with an npm run command. Here we don't have any credentials required; the client is calling our back-end APIs, which don't have any authentication. We can navigate to our browser, and here's the application: it's making a request to both of these APIs and returning the results to the front end. As I refresh the page, each count increments a single time, because each refresh makes a single call to each API.

Looking at the source for this, it is a very simple React app. In our top-level app.jsx, there's a currentTime function component that's used for both of those calls. We have two instances of that currentTime component pointed at the two separate APIs: the first calls our Go-based API, the second calls our Node API. The data they return gets populated into the React component. There we go.

Now, the final service is written in Python, and it just makes repeated requests to one of our back-end APIs. Let me create one more terminal and start my devbox shell. To manage my Python environment, I'm using a package manager called Poetry; to install the dependencies, I'm calling poetry install --no-root. In this case, all of my dependencies had already been installed, so that completes immediately. Finally, I'll issue my run task, which behind the scenes calls poetry run python on my main.py, and it's specifically calling localhost:8000. If you remember, 8000 was the port my Golang application is serving on, and in this terminal over here, we can see the Golang application now getting hit repeatedly by that load generator. If I load the page on the front end, the Golang API request count climbs rapidly, whereas the Node API count is stable until I make another request.
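Those Poetry steps, sketched below. The service's directory name and how the target URL is passed (argument vs. environment variable like API_URL) aren't stated in the lesson, so both are assumptions:

```shell
cd load-generator            # directory name assumed
devbox shell
poetry install --no-root     # completes immediately when deps are present
# The run task wraps something like this, targeting the Go API's port:
API_URL=http://localhost:8000 poetry run python main.py
```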
We've made 300 requests to the Golang API; it's now up to 340. If we take a look at the source code of the load generator, it's very simple. Within our main.py, we import some dependencies, and then there are two pieces. The first is our load generator function, which loops until the terminate value becomes true. On each loop, we try to make a request to the API that's provided and log some info to the console; if for some reason it fails, we catch that exception and keep trying. Then we sleep for, in this case, a configurable number of milliseconds. The only other piece is handling termination signals: if we kill this application with a Ctrl+C, it catches that signal, logs the fact that it has done so, and toggles the terminate value to true, which ends the loop and lets the process exit. Here at the bottom, I'm just setting up some basic logging, loading which API I want to call as well as how long to sleep between iterations, setting up my signal handling, and then calling my run load generator function.

And that's really all there is to it. While each of these individual services is quite minimal, they cover a variety of languages and many of the different types of configuration that you might want to handle within a microservice-based application that you're deploying onto Kubernetes.
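The control flow just described can be sketched in shell. This is an illustration of the same loop/trap structure, not the actual Python source; the variable names are made up, and the sketch bounds itself at a few requests so it exits on its own, whereas the real generator runs until it's signaled:

```shell
# Shell sketch of the load generator's control flow. Variable names are
# illustrative; the real implementation is the Python main.py described above.
TARGET_URL="${TARGET_URL:-http://localhost:8000}"
SLEEP_SECS="${SLEEP_SECS:-1}"
MAX_REQUESTS="${MAX_REQUESTS:-5}"   # bounded here; the real loop has no cap

terminate=0
# Mirror the SIGINT handler: log, flip the flag, let the loop exit cleanly.
trap 'echo "caught SIGINT, terminating"; terminate=1' INT

made=0
while [ "$terminate" -eq 0 ]; do
  # Make one request; on failure, log it and keep trying, as the Python does.
  if curl -fsS "$TARGET_URL" > /dev/null 2>&1; then
    echo "request ok: $TARGET_URL"
  else
    echo "request failed: $TARGET_URL (will retry)"
  fi
  made=$((made + 1))
  if [ "$made" -ge "$MAX_REQUESTS" ]; then
    break
  fi
  sleep "$SLEEP_SECS"
done
echo "made $made requests"
```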