Deploy Node.js application with MySQL to AWS EC2 using Docker Compose

Ljubiša Moćić
5 min read · Sep 28, 2020


Let’s combine everything together: Node.js, Express, MySQL, dotenv, Docker Compose, GitHub, and AWS EC2.

Why?

A few days ago I helped a friend deploy a Node.js application to AWS EC2. Now I’ll show you how I did it.

Create new Node.js app

Let’s init a new Node.js application and install Express as a dependency.

# Install GitHub CLI
brew install gh
# Create private repo
gh repo create sample-app --private -y
cd sample-app
npm init -y
npm i express
touch index.js
echo "node_modules" >> .gitignore

Let’s add “Hello, World” response and start the server in index.js.

const express = require("express");
const app = express();
const port = 3000;

app.get("/", (req, res) => {
  res.status(200).send("Hello, World!");
});

app.listen(port, () => {
  console.log(`[INFO] Listening on http://localhost:${port}`);
});

Now we can run the app with node index.js

Add MySQL connection

First, let’s install the MySQL driver: npm i mysql . Also, we don’t want to store credentials in git, so we are going to use npm i dotenv too, and keep the secrets file out of the repo:

echo ".env" >> .gitignore

Now we can add the credentials to a .env file (make sure to replace these with real values):

DB_NAME=test
DB_PORT=3306
DB_USER=test
DB_PASS=test
DB_ROOT_PASS=test
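Under the hood, dotenv simply reads these KEY=VALUE lines into process.env at startup. Here is a minimal sketch of the idea (just an illustration, not the real library; parseEnv is a made-up name):

```javascript
// Minimal illustration of what dotenv does: parse KEY=VALUE lines
// into an object (the real library assigns them to process.env).
function parseEnv(text) {
  const result = {};
  for (const line of text.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue; // skip blanks and comments
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue; // skip malformed lines
    result[trimmed.slice(0, eq)] = trimmed.slice(eq + 1);
  }
  return result;
}

const env = parseEnv("DB_NAME=test\nDB_PORT=3306\n# comment\nDB_USER=test");
console.log(env.DB_NAME, env.DB_PORT); // prints "test 3306"
```

In the real app we just call require("dotenv").config() at the top of index.js and the values become available on process.env.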

Now, let’s make connection to the database in index.js

require("dotenv").config(); // Load credentials from .env
const mysql = require("mysql");

const connection = mysql.createConnection({
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASS,
  database: process.env.DB_NAME,
  port: process.env.DB_PORT,
});

connection.connect(function (err) {
  if (err) return console.log(err.message);
  console.log("Successfully connected");
});

Let’s try running it again node index.js

It fails to connect, which is expected: we haven’t started our database yet, but we will do it soon.

Dockerize the application

Now we want to build an image of our backend service.

First, let’s create a .dockerignore file, since we don’t need to copy everything into the image; for now we will ignore node_modules .

Also, let’s create Dockerfile

echo "node_modules" >> .dockerignore
touch Dockerfile

Next, we can add the actual content of the Dockerfile :

FROM node:lts-alpine
WORKDIR /app
# Copy package manifests first so the dependency layer is cached
COPY package*.json ./
RUN npm install
# Then copy the rest of the source
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]

Next, let’s create docker-compose.yml

touch docker-compose.yml

We will have two services, backend and database, in the same bridged network, with port 3000 exposed.

version: "3.8"
services:
  backend:
    build: . # Build image from local Dockerfile
    environment:
      DB_HOST: db
      DB_PORT: "${DB_PORT}"
      DB_USER: "${DB_USER}"
      DB_PASS: "${DB_PASS}"
      DB_NAME: "${DB_NAME}"
    ports:
      - "3000:3000" # Expose port 3000 on host
    depends_on:
      - db # Start the database container first
    networks:
      - app-network
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: "${DB_ROOT_PASS}"
      MYSQL_DATABASE: "${DB_NAME}"
      MYSQL_USER: "${DB_USER}"
      MYSQL_PASSWORD: "${DB_PASS}"
    networks:
      - app-network
networks: # Bridged network shared by both services
  app-network:
    driver: bridge
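One caveat: depends_on only waits for the db container to start, not for MySQL inside it to accept connections, so the backend’s first connection attempts may fail. A sketch of a generic retry helper that could wrap the connect call (the names retry and flaky are mine, not from the original code):

```javascript
// Retry an async operation a few times with a fixed delay between
// attempts — useful while MySQL inside the db container boots up.
function retry(fn, attempts, delayMs) {
  return fn().catch((err) => {
    if (attempts <= 1) throw err; // out of attempts, give up
    return new Promise((resolve) =>
      setTimeout(() => resolve(retry(fn, attempts - 1, delayMs)), delayMs)
    );
  });
}

// Example: an operation that fails twice, then succeeds.
let calls = 0;
const flaky = () =>
  ++calls < 3 ? Promise.reject(new Error("not ready")) : Promise.resolve("connected");

retry(flaky, 5, 10).then((msg) => console.log(msg)); // prints "connected"
```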

Now let’s run it:

# Start services
docker-compose up -d
# See how it responds (should print "Hello, World!")
curl localhost:3000

Since it works well, let’s push it to repo, so we can later pull it from EC2 instance.

# Push everything to the repo
git add .
git commit -m 'Initial upload'
git push origin master

Launch EC2 instance

Get an AWS account if you don’t have one already.

1. Search for the EC2 service

2. Click Launch Instances

3. Select an AMI

4. Pick one with a free tier, since it will be more than enough for this demo

5. Click Review & Launch, then Launch

6. Create a new key pair, which is going to be used to securely connect to the instance

7. Name the key pair and don’t forget to download it

8. Then, just click Launch Instances

And after a few moments, we should be able to see the instance running.
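For reference, the same launch can also be scripted with the AWS CLI instead of the console (a sketch under assumptions: the AMI ID and key-pair name below are placeholders, not values from this walkthrough, and configured AWS credentials are required):

```shell
# Launch a single free-tier instance; replace the placeholder AMI ID
# and key-pair name with your own values.
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t2.micro \
  --key-name test \
  --count 1
```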

Connect via SSH

Now it’s time to connect to the instance from our terminal.

First, chmod the previously downloaded private key:

chmod 400 test.pem 

Connect via ssh:

ssh -i "test.pem" ec2-user@your-ec2-url.com

And now you should be able to access your EC2 instance via terminal

Prepare EC2 instance for running our services

  1. Install and start Docker:
sudo amazon-linux-extras install docker
sudo service docker start

2. Add the user to the docker group so we can run Docker commands without sudo (log out and back in for the change to apply)

sudo usermod -a -G docker ec2-user

3. Let’s install docker-compose

# Download and install
sudo curl -L https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
# Fix permissions
sudo chmod +x /usr/local/bin/docker-compose

4. We will also need git

sudo yum install -y git

5. Clone repo

git clone <link-to-your-repo>

6. Recreate the .env file with your credentials (if you kept it out of git), then start the service

cd sample-app
docker-compose up -d

Even though our service runs fine, we still won’t be able to reach it from our own machine, only from within the instance. We need to configure inbound rules.

Let’s configure inbound rules

1. Go to Security Groups

2. Edit the inbound rules to allow inbound TCP traffic on port 3000

3. Verify that it’s working:

curl http://<machine_ip>:3000

Finally, we can see the response from our backend! :D

Ping me @ljmocic if you have any questions!


If you got this far, you probably found something useful. Please consider supporting me :D
