# Deploying Docker Containers to AWS ECR/ECS (Beginner's Guide)

In today's fast-paced world of software development, the ability to quickly
package, deploy, and scale applications on public cloud infrastructure has
become more essential than ever. Containerization has revolutionized these
processes, with Docker standing out as the driving force behind the movement.
Docker containers provide consistent runtime environments for your
applications, allowing developers to build, deploy, and iterate with
unprecedented efficiency.

Containers make a lot of sense in the public cloud. The cloud offers a powerful
and versatile platform for provisioning infrastructure, so enterprises no longer
have to plan their infrastructure requirements well ahead of their current
needs. Instead, they can quickly scale their resources up and down as necessary.
Moreover, big cloud providers offer various services that can easily integrate
with containerized applications, making it even easier for organizations to
reduce their operational burdens.

There are many cloud providers out there, but Amazon Web Services (AWS)
undoubtedly stands out as the current market leader. Services such as Amazon
Elastic Compute Cloud (EC2), Amazon Relational Database Service (RDS), and
Amazon Elastic Container Service (ECS) can drastically simplify the
orchestration, scaling, and management of Docker containers, making it easier
for users to deploy their containerized applications on AWS reliably and
securely.

This article will guide you through several possible methods of deploying your
containerized applications on AWS. You'll start by preparing the Docker images
for your application containers and setting up the necessary infrastructure
(provisioning a relational database and configuring networking settings). You'll
then explore a valid but laborious method for deployment based on EC2 instances
and auto-scaling groups. This will give you the fundamentals needed for
understanding the more advanced deployment method presented in this tutorial,
which shows you how to deploy your containers on a serverless platform using AWS
ECS.

By the end of this tutorial, you'll have a much deeper understanding of how
serverless container platforms like AWS ECS on Fargate operate and will be
well-equipped to deploy your containerized applications on cloud infrastructure.

Without further ado, let's get this journey started!


## Prerequisites

- Good understanding of Docker images and containers for local development.
- Prior experience using Linux for basic system administration tasks.
- Access to an AWS account to provision the required services and
  infrastructure.

Please note that setting up a domain name and configuring HTTPS for your
applications are not going to be covered in this tutorial. They are, however, an
essential part of deploying applications in a production environment, so
remember to research them separately to solidify your knowledge.

## Preparing your Docker images

In this tutorial, you'll work with one of the demo applications created for an
earlier tutorial
([Building Production-Ready Docker Images for PHP Apps](https://betterstack.com/community/guides/scaling-php/php-docker-images/)).
You don't need to be familiar with that tutorial to complete this section, as
all the necessary steps for building the relevant Docker images will be outlined
here too, but feel free to review it if you'd like some additional background.

The demo application is called the
[Product API](https://github.com/betterstack-community/product-api). The Product
API exposes a REST interface that allows users to perform simple CRUD
operations (creating, retrieving, updating, and deleting) against a fictional
product database. It requires a web server to accept incoming HTTP requests
and forward them to the PHP runtime for execution, as well as a database for
storing the product information.

In the PHP world, the web server and the PHP runtime usually live in two
separate containers, so this section will show you how to set up the images for
them both.

### Preparing a PHP image

Let's begin by preparing the PHP image. You'll create two distinct flavors of
this image: a production version that you can use for deployment and a
development version that you can use for generating some test data in your
database.

Clone the `product-api` repository locally to obtain the application source
code, and `cd` into its folder:

```command
git clone https://github.com/betterstack-community/product-api.git
```

```command
cd product-api
```

Create a new `Dockerfile` and populate it with the following contents:

```text
[label Dockerfile]
FROM composer:2.7.6 AS composer
FROM php:8.3.7-fpm-alpine3.19

# Install required PHP extensions.
RUN docker-php-ext-install pdo_mysql

# Copy application source code to image.
COPY --chown=www-data:www-data . /var/www/html

# Install Composer packages.
COPY --from=composer /usr/bin/composer /usr/bin/composer
USER www-data
ARG COMPOSER_NO_DEV=1
ENV COMPOSER_NO_DEV=$COMPOSER_NO_DEV
RUN composer install

# Reset main user.
USER root
```

To build the production image, run:

```command
docker build -t product-api:1.0.0 .
```

To build the development image, run:

```command
docker build --build-arg COMPOSER_NO_DEV=0 -t product-api:1.0.0-dev .
```

The only difference between the production and the development image is that the
latter contains some additional development tools and packages necessary for
seeding the database with dummy data. If you have followed the instructions
correctly, both images should now be available locally:

```command
docker image ls product-api
```

```text
[output]
REPOSITORY    TAG         IMAGE ID       CREATED         SIZE
product-api   1.0.0-dev   bbfb1f00ef55   3 minutes ago   171MB
product-api   1.0.0       22a92001add3   4 minutes ago   121MB
```

With that, you can proceed with preparing the web server image.

### Preparing a web server image

As already mentioned, the `product-api` container expects a `web-server`
container to run in front of it. The `web-server` container accepts all incoming
HTTP requests, translates them to FastCGI, and proxies them to the `product-api`
container for execution.

Preparing the `web-server` image is quite straightforward. You can use
[NGINX](https://hub.docker.com/_/nginx) as a base image and add a custom
configuration file on top of it to specify the correct settings for properly
translating and redirecting requests to the `product-api` container.

Create a new folder named `web-server` and `cd` into it:

```command
mkdir web-server
```

```command
cd web-server
```

Create a new file named `nginx.conf` and populate it with the following
contents:

```text
[label nginx.conf]
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /usr/share/nginx/html;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-Content-Type-Options "nosniff";

    index index.php;

    charset utf-8;

    location / {
        try_files $uri /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~ \.php$ {
        root /var/www/html/public;
        fastcgi_pass localhost:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}
```

This file specifies a default virtual server configuration block that captures
all incoming HTTP requests in the `web-server` container. It also provides the
necessary instructions for NGINX to recognize and forward PHP requests to the
PHP-FPM FastCGI server running inside the `product-api` container (this assumes
that the `web-server` container can reach the `product-api` container at
`localhost:9000`).

Create a new `Dockerfile` and populate it with the following contents:

```text
[label Dockerfile]
FROM nginx:1.26.0-alpine-slim

COPY ./nginx.conf /etc/nginx/conf.d/default.conf
```

To build this image, run:

```command
docker build -t web-server:1.0.0 .
```

If you have followed the instructions correctly, the `web-server` image should
now be available locally:

```command
docker image ls web-server
```

```text
[output]
REPOSITORY   TAG       IMAGE ID       CREATED         SIZE
web-server   1.0.0     97764447cae7   9 seconds ago   17.1MB
```
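
As a quick sanity check, you can also ask NGINX to validate the configuration
baked into the image without serving any traffic:

```bash
# Run NGINX's built-in config test in a throwaway container; it parses
# the bundled configuration (including default.conf) and exits.
docker run --rm web-server:1.0.0 nginx -t
```

If the configuration contains a syntax error, the command exits with a
non-zero status and points at the offending line.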

With that, you have all the Docker images necessary for deploying the Product
API application on AWS.

## Creating a database

A quick note before we get into creating your first managed database on AWS:
throughout the examples that follow, I'll often refer to your AWS account ID
(`<AWS_ACCOUNT_ID>`) and region (`<AWS_REGION>`). Both will show up in many of
the included screenshots and code snippets as:

- `123456789012` for the account ID.
- `eu-north-1` (Stockholm) for the region.
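
If you plan to follow along in a terminal, you can export both values as shell
variables once and interpolate them into later commands instead of editing each
one by hand. A quick sketch using the placeholder values above:

```bash
# Placeholder values used throughout this tutorial; replace them with
# your own account ID and region.
export AWS_ACCOUNT_ID=123456789012
export AWS_REGION=eu-north-1

# Any later command can then interpolate the variables, for example
# when constructing a registry hostname:
echo "${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com"
# → 123456789012.dkr.ecr.eu-north-1.amazonaws.com
```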

Make sure to change these accordingly to reflect your account ID and region
settings. With that out of the way, let's go ahead and create a database for the
Product API.

The database is where the Product API stores all of its product information.
While you can run the database as a single Docker container for local
development, a production environment typically needs a much more robust
solution.

One possible solution could be to provision a few AWS EC2 Linux servers and
install the database software yourself, but think about the operational overhead
of having to configure backups, set up replication, manage load balancing, and
perform regular version upgrades while continuously monitoring and ensuring that
your entire setup works normally. This could quickly go out of hand, and that's
exactly the problem that a managed database service, such as AWS RDS, solves.

With RDS, AWS manages everything for you. The database clusters it provisions
have automated backups and failover configured right out of the box. The
operating systems and database distributions powering the underlying database
servers receive regular security patches and updates, and you can easily scale
resources such as CPU, RAM, and storage from a convenient web interface rather
than having to manually provision and set up Linux machines on your own.

In RDS, you usually interact with your database through a unique service
endpoint (i.e., a special hostname that points to some form of load balancer or
proxy that sits in front of your database servers), which can intelligently
route traffic to your primary server or one of its read replicas (if such
exists). You don't have to worry about setting up any load balancing software or
DNS records yourself, as AWS already does that for you.

Of course, this is best illustrated with an example, so let's go ahead and
provision a new database cluster in RDS.

### Provisioning a database

In the AWS Console, find the Relational Database Service (RDS):

![find RDS service](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/529001a4-c634-4218-44c3-f6dff334ff00/lg1x =960x276)

Click **Create database**:

![create database](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/f407469e-f622-42d4-db20-28f6a3466700/public =960x322)

Select a database engine. The Product API requires a MySQL-compatible database,
so you can set the **Engine type** to **Aurora (MySQL Compatible)**:

![select database engine](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/ee395e25-fe38-437b-9922-861f3473f700/lg1x =976x1303)

Aurora is a MySQL-compatible database engine developed by AWS that offers
certain performance and scalability advantages over traditional MySQL
deployments, mainly due to the mechanism it uses to store and replicate data.
It is typically faster at failover, storage expansion, and crash recovery, and
for many workloads it can also cost less to operate.

Next, scroll down to the **Templates** section and choose a template
corresponding to your requirements. The **Production** defaults are generally a
good starting point, but if you just want to click around and explore without
incurring a huge charge, you might want to select the **Dev/Test** template
instead, which will only add a single instance to your database cluster:

![select database template](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/f6e09211-9c21-4c14-ceaf-4b59906f3900/md2x =960x303)

In a production setting, of course, always make sure that your cluster contains
at least two database instances located in two different availability zones.
That way, if one instance goes down, the other one can take over.

Scroll down to the **Settings** section and specify a database cluster
identifier (e.g., `tutorial`). Then, under **Credential Settings** choose the
**Self managed** option and check the **Auto generate password** option:

![modify database settings](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/fb6eae11-ecee-49b2-b665-622e8ffac500/md2x =961x780)

The database cluster identifier will give your cluster a unique name that can be
used to distinguish it from other RDS clusters running in this particular region
of your AWS account. As for the database credentials, the AWS Secrets Manager
option can provide an added level of security for production databases (bear in
mind that it incurs some additional charges as well, though), but the
self-managed option makes it easier to explore RDS without having to involve
another AWS service.

For some additional cost savings while testing, you may scroll down to the
**Instance configuration** section and opt for one of the burstable database
instance classes, such as `db.t3.medium`:

![choose database instance class](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/292da584-2794-468d-5e17-904e8cc0e300/lg1x =960x485)

Bear in mind, though, that for real production workloads, you'll be much better
off with one of the memory-optimized classes: they come with larger amounts of
memory, better networking, and more consistent CPU performance, so your
databases will run considerably faster on them.

Next, scroll down to the **Connectivity** section, find the **Public access**
option and choose **Yes** to allow public access to your cluster:

![enable public access](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/d89167ba-2431-47ae-28b6-bc9f10951b00/lg1x =961x515)

Once you get acquainted with AWS, you'll find that leaving **Public access**
disabled is a better choice for increasing the security of your production
databases and protecting them from unauthorized access. However, the tradeoff
here has been made consciously to let you access the database cluster directly
from your local machine, rather than having to set up
[AWS Site-to-Site VPN](https://aws.amazon.com/vpn/site-to-site-vpn/) or
[AWS Client VPN](https://aws.amazon.com/vpn/client-vpn/), or use an EC2
instance as an SSH jump host, all of which add significant complexity.

Take note of the VPC security group assigned to your cluster (named `default`).
This group determines the firewall settings for your database. Right after
launching the cluster, you'll have to tweak it a little bit so traffic from your
local machine can flow through to the database:

![default VPC security group](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/410526a7-ab77-40a6-788c-ccfa048ff800/lg2x =959x522)

With all of this done, leave everything else at its default settings, scroll
down to the bottom of the page, and click **Create database**:

![create database](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/86526870-f6df-4942-1221-7188ecf2ab00/orig =963x518)

Provisioning the cluster may take a while, but at some point you'll see a flash
message indicating that the cluster was created successfully. When this happens,
go ahead and press the **View connection details** button:

![view connection details](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/bbce77ec-3d6e-4c51-d25b-fc43b1190d00/lg2x =960x349)

Write down the connection details and store them somewhere safe, as you'll need
them later to connect to your database. When you're ready, click **Close**:

![connection details](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/9313559a-7122-4250-ef44-5804b4d3e500/lg2x =618x488)

At this point, ensure that both the database cluster (`tutorial`) and the
database instance (`tutorial-instance-1`) appear as **Available**. If the
instance is not **Available**, you won't be able to connect to the cluster and
interact with it:

![database instance status](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/120817e2-b5fd-48f5-fc75-0924d0188900/lg1x =960x439)

The only thing left is to add your public IP address to the `default` security
group that regulates network access to your database cluster.

In the AWS console, find the **Security groups** feature:

![security groups](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/3c418ffb-9d27-4b92-f437-df4d982a9e00/md1x =959x276)

Select the `default` security group, and click **Edit inbound rules**:

![default security group](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/4bd03d64-7c32-4b80-8888-49a5fb594500/lg2x =962x770)

Click **Add rule**:

![add rule](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/cac6a118-ea90-411c-ae3b-c2adcf72ab00/lg2x =1472x590)

Specify **MYSQL/Aurora** as the **Type** and **My IP** as the **Source**:

![save rules](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/ce92af37-3ff4-4980-d659-d23af86b7000/public =1450x696)

Now is also a good time to add a rule that allows any resources deployed in your
private network on AWS (the
[AWS VPC](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html))
to access this database as well. Click the **Add rule** button once again, then
specify **MYSQL/Aurora** as the **Type** and **Custom** as the **Source**,
inputting the CIDR range `172.31.0.0/16`. This range covers all private IP
addresses allocated within your default AWS VPC; if you're working in a custom
VPC, substitute its CIDR block instead.

When you're ready, click **Save rules**:

![save rules](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/c1f5f690-185f-4924-e354-fdf48bbac000/lg2x =1451x785)

With these rules added, database traffic coming from your local machine will be
allowed to pass through the firewall, and other services deployed in your AWS
account will also be able to reach the database.
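
Before reaching for a database client, you can verify that the inbound rule
actually works with a plain TCP check from your terminal (assuming `nc` is
installed locally; replace the hostname with your own cluster endpoint from the
connection details):

```bash
# Attempt a TCP connection to the MySQL port on the cluster endpoint.
# Success only confirms that the security group lets your traffic
# through; it does not validate any credentials.
nc -zv tutorial.cluster-cb08aaskslz3.eu-north-1.rds.amazonaws.com 3306
```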

You can now connect to the database using your preferred database client. You'll
do this in the next section.

### Connecting to the database

It's time to test the database connection and create both a user for the
application and a database schema that it can work with. You may use any client
you like, but I prefer [DBeaver](https://dbeaver.io/).

Open up DBeaver and click the **New Database Connection** button:

![new database connection](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/3dc25830-028a-43b9-20be-695ab9034000/orig =957x200)

Select the **MySQL** database driver and click **Next**:

![select database driver](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/bbb722ce-ffd0-491c-2099-e7bce9cae700/md1x =771x755)

Enter the **Server Host**, **Username**, and **Password** that correspond to the
connection details that you obtained earlier by clicking the **View connection
details** button:

![DBeaver connection settings](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/eb43ae33-0f3f-4426-fbdf-031b1a437f00/public =771x754)

Navigate to the **Driver properties** tab and set the `allowPublicKeyRetrieval`
setting to `true`:

![DBeaver driver properties](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/b598b5e7-5cc0-4671-593c-98a68009a800/orig =771x757)

Otherwise, you might get the following error when trying to connect:

```text
Public Key Retrieval is not allowed
```

Finally, click **Test Connection**:

![test connection](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/531d9673-d6ad-4da5-fcb3-1b4e46df5f00/md1x =782x756)

You should see the following response:

![connection successful](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/c7b219af-2edc-491f-4f4c-bd1e9a118e00/lg1x =430x252)

The connection seems to be working, so you can click **Finish**:

![finish setup](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/a13f2dc6-4b2c-4f0b-706a-a65828564000/md1x =784x756)

It's time to create a user and a schema for the application. Double-click on the
name of the connection to connect to the database:

![double-click connection name](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/43db40ad-b428-4ca4-71ff-2a4567223700/md2x =959x230)

After connecting, expand the **Databases** section and then the **Users**
section. You'll notice two things:

1. There are no other database schemas besides `sys`.
2. There are no other users besides your initial `admin` user and some system
   users created by AWS for internal usage.

![databases and users sections](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/f4a1e4e7-4a12-4790-0c44-7bd083ef9000/md1x =511x557)

The Product API application needs both a dedicated database user and a clean
database schema that it can work with.

Click the **Open SQL Script** button to execute a new set of SQL statements
against your database:

![open sql script](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/d8836f23-2325-4c1f-1b38-7c239474ce00/lg1x =954x288)

Paste the following SQL statements into the command window:

```sql
CREATE DATABASE product_api;
CREATE USER 'product_api'@'%' IDENTIFIED BY 'test123';
GRANT ALL ON product_api.* TO 'product_api'@'%';
```

These statements create a new schema called `product_api` and a user named
`product_api` (with the password `test123`) that has full access to that
schema. Needless to say, pick a much stronger password for any real deployment.

Click the **Execute SQL Script** button to execute everything:

![execute sql script](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/d5248ae4-85f6-4280-4e88-e95af3903a00/md2x =960x322)

You should see a similar result:

![sql script result](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/9ab27f87-1dc2-49cf-7158-a240db76cd00/md1x =672x249)

With that, the user and the schema are now created, and you can proceed with
configuring the Product API to communicate with the database using the specified
credentials.

### Populating the database

The schema that you created is currently empty, but the Product API requires
certain database tables to function properly. You can address this by running
database migrations. Here, for the sake of simplicity, you'll run the migrations
locally from your machine, which is only possible because you enabled public
access to your database earlier on.

For small, non-critical applications that you're solely responsible for, this
can be considered an acceptable approach. For larger applications, of course,
it's much better to automate the migration process through a CI/CD pipeline,
thus reducing the chance of human error and tightening the security of your
system.

As you already have the `product-api` images prepared locally, running the
migrations is a matter of executing a specific `php` command inside a
short-lived container launched from the `product-api:1.0.0` image (the precise
command is `php artisan migrate`). This container must also be made aware of the
relevant database connection details. You can pass them to the container in the
form of environment variables.

Create a new file named `db.env` and populate it with the following contents:

```text
[label db.env]
DB_HOST=tutorial.cluster-cb08aaskslz3.eu-north-1.rds.amazonaws.com
DB_USERNAME=product_api
DB_PASSWORD=test123
DB_DATABASE=product_api
```

Then run the following command:

```command
docker run -it --rm --env-file db.env product-api:1.0.0 php artisan migrate
```

An interactive prompt appears, asking you to confirm your request:

![confirm migration](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/f8b66ca5-266b-4866-15d0-72a1e5e2f200/orig =962x215)

Select **Yes** and hit **Enter**. Soon after, the migration process begins,
creating the necessary tables in the specified database schema:

![run migration](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/172c3c15-2089-431b-f5f6-d9304fb37e00/lg1x =958x401)
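
As a side note, if you ever want to preview what a migration would do before
committing to it, Laravel can print the pending SQL statements without
executing them via the `--pretend` flag, using the same container setup as
above:

```bash
# Dump the SQL that the pending migrations would run, without touching
# the database.
docker run -it --rm --env-file db.env product-api:1.0.0 php artisan migrate --pretend
```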

The application now has everything necessary to boot up. However, to make this a
little more interesting, you may want to add some dummy data to the newly
created tables. This isn't something you'd be doing in a real production
environment, but it is helpful to have some data to work with in this tutorial.

You can use the `product-api:1.0.0-dev` image for that purpose, as it includes
all the development dependencies required for generating test data. This time,
run the `db:seed` artisan command with the `ProductSeeder` class, as follows:

```command
docker run -it --rm --env-file db.env product-api:1.0.0-dev php artisan db:seed ProductSeeder
```

Once again, a prompt appears asking you to confirm your request:

![confirm seeding](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/42392545-7a5e-4574-2fc6-543deb672000/md2x =1040x213)

After you do, the database is seeded with test data:

![seed database](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/92bcdd92-ea78-46ef-b81a-ea69bf6c6600/md2x =1039x290)

With that, the database is fully prepared for integration with your application.
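
Before pushing anything to AWS, you can optionally smoke-test the whole stack
locally. The sketch below runs both containers in a shared network namespace
(the sidecar pattern you'll reproduce in the cloud later), which satisfies the
`localhost:9000` assumption baked into the NGINX configuration. The
`/api/products` route is an assumption here, so adjust it to whatever routes
the application actually exposes:

```bash
# Start the PHP-FPM container with the database credentials. Port 8080
# is published on this container because the web server will join its
# network namespace, making NGINX's port 80 reachable through it.
docker run -d --name product-api -p 8080:80 --env-file db.env product-api:1.0.0

# Attach the web server to the PHP container's network namespace so
# that fastcgi_pass localhost:9000 reaches the PHP-FPM process.
docker run -d --name web-server --network container:product-api web-server:1.0.0

# Exercise the API (hypothetical route; adjust as needed).
curl http://localhost:8080/api/products

# Clean up when you're done.
docker rm -f web-server product-api
```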

## Pushing to a container registry

The `product-api` and `web-server` images are currently stored on your local
machine, but to launch containers from them in the cloud, AWS must be able to
pull them from a more accessible location, such as a remote container registry.

You could certainly use something familiar, such as
[Docker Hub](https://hub.docker.com/), but you'll find that
[AWS ECR](https://aws.amazon.com/ecr/) (Elastic Container Registry) integrates a
lot better with other AWS services (such as EC2 and ECS), offering faster
download speeds, easier authentication, and lower infrastructure costs.

The next steps will walk you through setting up the necessary private image
repositories for uploading your custom images to ECR.

### Setting up private repositories

Find the Elastic Container Registry (ECR) service in the AWS console:

![find ECR service](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/9c66dcf0-ec7e-432a-3ab1-3a6b843e6b00/lg1x =961x273)

Under **Private registry**, choose **Repositories** from the menu on the left:

![choose Repositories](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/94f233c5-3143-411f-ed3d-fe6d448a9f00/orig =960x400)

Click **Create repository**:

![start creating repository](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/80cbc13a-d691-4832-a6c5-fb5ffe5c7500/lg2x =958x496)

You have to set up two repositories here: one for the `product-api` image and
one for the `web-server` image.

Start with the `product-api`. Specify a **Repository name** in the corresponding
input field:

![specify repository name](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/b73b7373-f312-45c4-5436-df89bca34a00/public =960x550)

Take note of the URL prefix. ECR repositories in AWS have the following URL
format:

```text
<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/<REPOSITORY_NAME>
```

For my account ID (`123456789012`) and region (`eu-north-1`), the `product-api`
repository becomes available at:

```text
123456789012.dkr.ecr.eu-north-1.amazonaws.com/product-api
```

Leave all other options at their default settings, scroll down to the bottom of
the screen, and click **Create repository**:

![finish creating repository](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/317918cf-4c78-40a1-19c7-f61b71a52800/public =965x410)

Repeat the same procedure for the `web-server` repository.

In the end, your private registry should have the following repositories:

![private registry repositories](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/966b5aee-23a8-439c-60dc-edcd700ebd00/lg1x =1165x552)

This allows you to push the `product-api` and `web-server` images to ECR, but
you must configure your local Docker client to authenticate with AWS first. This
requires issuing an authorization token. There are several possible ways to
obtain one, but using the [AWS CLI](https://aws.amazon.com/cli/) with an
[IAM](https://aws.amazon.com/iam/) user is probably the easiest.

### Creating an IAM user

Find the **Identity and Access Management (IAM)** service in the AWS console:

![find IAM service](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/c83bb834-231d-4165-7465-c0d144c80200/md1x =960x274)

Choose **Users** from the menu on the left:

![select users](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/99678984-7445-4a73-d91e-fda794b70800/public =960x559)

Click **Create user**:

![click create user](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/8579b420-42e5-459b-9f7a-baedb594b800/public =960x419)

Specify a username (e.g., `docker`) and click **Next**:

![specify username](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/fd5d89cb-e8ae-45a6-24a8-1e9d9610db00/orig =959x685)

The `docker` user will be used solely to allow your local machine to access
your AWS account so you can push images to ECR.

Once again, if your application is relatively small and you're the main person
responsible for its deployment, this may be considered a suitable choice. If,
however, you're working on a larger project, it would make far more sense to
automate the entire image building and publishing process, integrating it into
your CI/CD pipelines. It's not only going to be much more efficient, but it will
also increase the security of your system and reduce the chances of human error.

On the next page, select the **Attach policies directly** option:

![attach policies directly](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/3f11903f-42fe-4ada-a8f0-50c4fc56ee00/lg1x =960x645)

Scroll down to the **Permissions policies** section, and select the
`AmazonEC2ContainerRegistryPowerUser` policy from the list of available
policies. Then click **Next**:

![attach AmazonEC2ContainerRegistryPowerUser policy](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/cc5bdf5b-9575-487b-e956-421e181e0200/lg2x =963x623)

`AmazonEC2ContainerRegistryPowerUser` is one of the AWS-managed policies.
AWS-managed policies are predefined permission sets designed by Amazon
to support the most typical use cases and scenarios in their cloud. The
`AmazonEC2ContainerRegistryPowerUser` policy, in particular, allows IAM users to
read and write to private repositories and issue the corresponding authorization
tokens.

After clicking **Next**, you're prompted to confirm the creation of your new
user account. Click **Create user** to do that:

![create user](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/ee8c705f-bd88-4667-49b2-2b58b06bbf00/md2x =961x956)

A message appears confirming the operation. Click the **View user** button to
continue:

![flash message](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/e70fd7d7-1813-41c2-bf58-a1148bca1a00/md2x =961x187)

This leads you to the user management page, where you have to click **Create
access key**:

![create access key](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/cf06c509-f525-4ea8-728a-fd9f25fa9e00/lg2x =960x446)

The access key allows you to authenticate with the AWS CLI in order to obtain
ECR authorization tokens from your local machine.

Several options appear on the page that follows, all steering you toward more
secure alternatives to a static access key for granting access to your AWS
account. The suggested alternatives could indeed be better suited for a
production setting, but for the sake of simplicity, choose **Other** and click
**Next**:

![click other and next](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/cf8cfa84-3678-4133-fa77-ea07f78fd300/orig =961x1166)

On the next screen, specify a description to remind you what this user account
will be used for, then click **Create access key**:

![specify a description](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/bfc0f52a-a57c-4e64-b5a3-3af394d16000/md1x =960x502)

Copy the generated access key ID and secret access key and store them somewhere
safe. You'll need them in a bit to configure your local Docker client to
authenticate with AWS. Bear in mind that after navigating away from this page,
you'll no longer be able to retrieve the secret access key, so if you fail to
copy it now, you'll have to generate a new key pair!

Once you're ready, click **Done**:

![copy secret and access key](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/de959a15-5600-4006-a14d-d3d821b7f000/public =975x819)

Using the generated key pair, you can now configure your local Docker client to
authenticate with AWS to push your custom images to ECR.

### Configuring the AWS CLI

The `AmazonEC2ContainerRegistryPowerUser` policy that you attached to your IAM
user grants you permission to request (and receive) valid ECR authorization
tokens on behalf of your user through the AWS API. Your local Docker client can
then use these tokens to authenticate with AWS and push your custom images to
ECR.

The easiest way to interact with the AWS API from your local machine is through
the AWS CLI. As I generally like to keep my local Linux installation clean and
organized, I prefer to run the AWS CLI through a Docker container rather than
installing it directly on my machine. This also helps me avoid potential
conflicts with other dependencies on my system.

Setting up the AWS CLI locally is quite straightforward with the following
command:

```command
docker run --rm -it -v awscli:/root/.aws public.ecr.aws/aws-cli/aws-cli:2.15.42 configure
```

There are a few things worth noting here:

- The `--rm` option ensures that the container is removed after it finishes
  running the specified command.
- The `-it` option enables interactive input.
- The `-v awscli:/root/.aws` option creates a new local volume named `awscli`
  (mounted as `/root/.aws` inside the container) where your AWS CLI credentials
  and configuration files persist for subsequent command invocations.
- The `public.ecr.aws/aws-cli/aws-cli:2.15.42` reference points to the
  [official AWS CLI v2 image](https://gallery.ecr.aws/aws-cli/aws-cli) supplied
  by Amazon, which contains the AWS CLI installation with all of its mandatory
  dependencies.
- The `configure` command starts the initial AWS CLI configuration process,
  where you are prompted to enter your access key, secret access key, default
  region, and output format preferences, and the respective credential and
  configuration files are created as a result.

Running this command takes you through a flow similar to this one:

![configure AWS CLI](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/e058f92b-015c-4657-c365-65654dfd1800/md2x =960x165)

Here, you should supply the keys you created earlier in IAM for the `docker`
user. Optionally, you may also specify your default region.

Bear in mind: even though the secret access key is displayed in plain text on
the screenshot above, this is only done to make the example clearer. You should
never share your secret access keys in plain text with anyone. They are a
sensitive piece of information that should be kept secure at all times!
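
Since every AWS CLI invocation in this setup goes through `docker run`, you may
want to wrap the boilerplate in a small shell function so that subsequent
commands stay short. Here's a minimal sketch (the `awsd` name is my own
invention):

```shell
# awsd: run the dockerized AWS CLI v2, reusing the credentials stored in
# the awscli volume. Pin the same image version you used for configuration.
# When piping the output elsewhere, replace -it with -i to avoid TTY issues.
awsd() {
  docker run --rm -it -v awscli:/root/.aws \
    public.ecr.aws/aws-cli/aws-cli:2.15.42 "$@"
}
```

With this in place, `awsd ecr get-login-password --region <AWS_REGION>` behaves
the same as the longer `docker run` invocation shown above.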

With that out of the way, you're ready to request an ECR token for your Docker
client.

### Obtaining an ECR token

The AWS CLI command for obtaining an ECR token is `aws ecr get-login-password`.
That command returns a base64-encoded string that you can pass to the
`docker login` command in order to authenticate your local client with ECR. Note
that this token is only valid for the next 12 hours, after which you'll have to
issue a new one.

The complete authentication command goes like this:

```command
docker run --rm -it -v awscli:/root/.aws public.ecr.aws/aws-cli/aws-cli:2.15.42 ecr get-login-password --region <AWS_REGION> | docker login --username AWS --password-stdin <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com
```

Upon successful execution, you'll get a similar result:

```text
[output]
WARNING! Your password will be stored unencrypted in /home/marin/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
```

Do note that passing the authorization token directly to `docker login` is a
quick and easy way to get started. Using a credential helper is a lot more
secure, but it requires installing additional software packages, and that's
beyond the scope of this tutorial.

You're now ready to use your local Docker client to push your custom images to
AWS.

### Pushing images to ECR

At this point, you should have the following images on your local machine:

```command
docker image ls product-api
```

```text
[output]
REPOSITORY    TAG         IMAGE ID       CREATED       SIZE
product-api   1.0.0-dev   bbfb1f00ef55   6 hours ago   171MB
product-api   1.0.0       22a92001add3   6 hours ago   121MB
```

```command
docker image ls web-server
```

```text
[output]
REPOSITORY   TAG       IMAGE ID       CREATED       SIZE
web-server   1.0.0     97764447cae7   5 hours ago   17.1MB
```

Pushing these images to ECR requires tagging them with the appropriate
repository URL. As you remember, the format is:

```text
<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/<REPOSITORY_NAME>
```
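
If you find this format hard to remember, a tiny helper function can assemble
the reference for you. A sketch (the `ecr_image` name is my own invention):

```shell
# Build a fully qualified ECR image reference from its parts.
ecr_image() {
  local account="$1" region="$2" repo="$3" tag="$4"
  printf '%s.dkr.ecr.%s.amazonaws.com/%s:%s\n' "$account" "$region" "$repo" "$tag"
}
```

For example, `ecr_image 123456789012 eu-north-1 product-api 1.0.0` prints
`123456789012.dkr.ecr.eu-north-1.amazonaws.com/product-api:1.0.0`.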

To tag them, execute the following commands, replacing `<AWS_ACCOUNT_ID>` and
`<AWS_REGION>` with the actual values corresponding to your AWS account:

```command
docker tag product-api:1.0.0 <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/product-api:1.0.0
```

```command
docker tag web-server:1.0.0 <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/web-server:1.0.0
```

You can now push these images to ECR by executing the following commands:

```command
docker push <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/product-api:1.0.0
```

```command
docker push <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/web-server:1.0.0
```
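
If you routinely push more than a couple of images, you can script the
tag-and-push sequence. The sketch below (names are illustrative) only prints
the commands, so you can review them before piping the output to `sh`:

```shell
# Print the docker tag/push command pair for each image:tag argument.
emit_push_cmds() {
  local registry="$1" img
  shift
  for img in "$@"; do
    printf 'docker tag %s %s/%s\n' "$img" "$registry" "$img"
    printf 'docker push %s/%s\n' "$registry" "$img"
  done
}
```

Running `emit_push_cmds <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com
product-api:1.0.0 web-server:1.0.0 | sh` would then tag and push both images in
one go.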

Assuming you have followed the instructions correctly, both images should now be
available in your private registry:

![product-api image on ECR](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/5abec9f2-89a9-496c-7b0e-498af8335e00/md1x =1049x446)

![web-server image on ECR](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/ab9b077b-8ecf-4def-be93-29d924447800/lg1x =1049x445)

### Revoking credentials

After successfully uploading your images to ECR, you may want to revoke the
credentials that you used in order to keep your account secure.

You can begin by issuing:

```command
docker logout <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com
```

```text
[output]
Removing login credentials for 123456789012.dkr.ecr.eu-north-1.amazonaws.com
```

This will remove any traces of the ECR authorization token from your local
`/home/<username>/.docker/config.json` file.

Next, you can delete the `awscli` Docker volume:

```command
docker volume rm awscli
```

This will erase the AWS access key and secret access key stored on your local
machine.

Finally, you can remove the entire user account that you created earlier in the
AWS console.

![remove user account](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/2ea9f5a5-f54f-4b2a-a7a9-67b81277de00/orig =960x399)

Alternatively, you may keep the account but revoke its access key. However, this
account will not be used anymore in the tutorial, so it's better to stick with
the former option. These steps will ensure that the risk of unauthorized access
to your AWS resources is reduced to an absolute minimum.

You're now ready to deploy the Product API service on AWS infrastructure.

## Deploying to AWS EC2

Running containers directly on a Linux VM is a common method for deploying
Docker images in production. On AWS, these VMs are known as EC2 instances. An
EC2 instance is essentially a virtual server that you can access over SSH to
install packages and execute commands.

While this is a perfectly valid deployment method, you'll find that it involves
a fair amount of manual work, and there are easier alternatives available in
the form of serverless platform-as-a-service (PaaS) solutions, such as AWS
ECS. Nevertheless, exploring it is worthwhile, as it builds a solid
understanding of how ECS simplifies the deployment process.

The next few sections will show you how to launch a new EC2 instance and
configure it for deploying the containers you made earlier.

### Launching an EC2 instance

Find the EC2 service in the AWS console:

![find EC2 service](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/53f1c928-7836-4b29-6c27-db74fb56a200/lg2x =960x271)

From the menu on the left, navigate to **Instances**:

![navigate to Instances](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/0fe29869-b35f-441f-511a-3677249d3a00/lg2x =960x496)

Click **Launch instances**:

![launch instances](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/a3237d52-5293-4fa1-cc41-6f2ade854200/lg1x =958x339)

Set the instance name to `product-api`:

![set instance name](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/df0f41ab-8b28-4c70-0fda-a1981bfea000/lg2x =960x428)

Scroll down to the **Application and OS Images** section and pick **Debian**:

![choose AMI](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/02984721-f02a-4f4a-5d84-daa490d20400/md2x =960x793)

**Debian** will provide a familiar environment that's easy to install Docker in.

Scroll down, locate the **Key pair (login)** section, and click **Create new key
pair**:

![create new key pair](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/11bb706e-fb55-4228-6b7c-ea9606bc5a00/lg2x =960x325)

Specify `product-api` as the **Key pair name** and click **Create key pair**:

![create new key pair](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/6193fa8c-14ae-4997-a236-ae315e2dd900/lg1x =624x653)

When prompted to do so, save the generated private key file to your local
filesystem (mine goes at `/home/marin/Downloads/product-api.pem`):

![save key pair](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/0d65a0de-8132-4ccd-b2a3-66f7d019fd00/orig =873x390)

Make sure to restrict the permissions of the downloaded file on your local
machine. Otherwise, `ssh` will complain that the file permissions are too open
and refuse to initiate connections to your server.

```command
chmod 0600 ~/Downloads/product-api.pem
```

Scroll down to the **Network settings** section, and allow SSH traffic for your
IP, as well as HTTP traffic from the Internet:

![network settings](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/a0bef152-db84-4841-3b03-7902a3ccae00/md1x =963x785)

Leave everything else at its default settings, scroll down to the bottom of the
page, and click **Launch instance**:

![launch instance](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/62c47f8a-6f76-4ef8-1283-386342407600/lg1x =960x644)

Soon after, a message appears, confirming the successful launch of your
instance. Click on the instance identifier:

![instance identifier](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/47ce9350-c608-4c2e-d31c-9289215fa700/md2x =963x236)

This takes you to a listing, where you'll be able to find the IP address of your
newly created instance:

![instance public IP](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/b1409e11-a889-45ef-1837-c7ab0552e400/md1x =960x301)

You can use that IP address and the private key that you downloaded earlier to
connect to your instance by executing the following command:

```command
ssh -i ~/Downloads/product-api.pem admin@51.20.124.12
```

Once you're logged in, you should see a familiar Bash prompt:

```text
[output]
Linux ip-172-31-21-122 6.1.0-10-cloud-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.37-1 (2023-07-03) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
admin@ip-172-31-21-122:~$
```
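
If you expect to SSH into the instance repeatedly, you can save yourself some
typing by adding a host entry to your SSH configuration. A sketch (the
`product-api` alias is my own choice; substitute your instance's public IP):

```text
[label ~/.ssh/config]
Host product-api
    HostName 51.20.124.12
    User admin
    IdentityFile ~/Downloads/product-api.pem
```

With this entry in place, `ssh product-api` is equivalent to the full command
shown above.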

### Installing Docker

An EC2 instance comes with a clean Linux install, and to be able to launch any
Docker containers on it, you'll have to set up the Docker engine yourself. You
can follow the
[official installation instructions](https://docs.docker.com/engine/install/debian/),
which I've included below for convenience:

```command
sudo apt update
sudo apt install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```

You can run the `hello-world` image, as suggested in the official documentation,
to verify that the installation is successful:

```command
sudo docker run --rm hello-world
```

```text
[output]
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
c1ec31eb5944: Pull complete
Digest: sha256:a26bff933ddc26d5cdf7faa98b4ae1e3ec20c4985e6f87ac0973052224d24302
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.
. . .
```

The Docker engine works correctly, so you can proceed with pulling the Product
API images to the EC2 instance.

### Creating an IAM role

Earlier, you used a specially created IAM user (named `docker`) for pushing your
custom images to ECR. Later, you deleted that user to maintain the security of
your AWS account. You may now be wondering how the EC2 instance will be able to
pull any images when that user is no longer available.

AWS provides a convenient mechanism for letting your EC2 instances
automatically authenticate with other services running on your AWS
infrastructure. This is done through IAM roles, and the following steps will
show you how to set one up.

Find the Identity and Access Management (IAM) service in the AWS console:

![find IAM service](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/55123f1d-33b5-41c2-2a67-11fef691a900/lg1x =960x274)

Click **Roles** from the menu on the left:

![click roles](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/038b1313-a790-4202-c90e-d74c6b388600/md1x =960x458)

Click **Create role**:

![click create role](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/c8e34ffb-b487-4e6f-66fb-631604ef8800/orig =960x314)

Specify **AWS service** as the **Trusted entity type**:

![specify trusted entity type](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/8fa8a8ff-97fc-40d1-dc7e-30e9a902b500/md2x =960x369)

Scroll down to the **Use case** section, pick **EC2**, then click **Next**:

![specify use case](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/e096fde0-cb66-440b-db3f-7bfc8374b900/md2x =960x803)

A trusted entity determines which resource types are allowed to assume the
role that you are creating. In this case, you're specifying EC2 as the trusted
entity, which means that any EC2 instance you attach this role to will be able
to use it for accessing other resources in your AWS account.

On the next screen, find the `AmazonEC2ContainerRegistryReadOnly` managed policy
and attach it to your role, then click **Next**:

![attach AmazonEC2ContainerRegistryReadOnly policy](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/2d663533-d076-49f9-86f6-a79e9d4e1700/md2x =1276x601)

Unlike the `AmazonEC2ContainerRegistryPowerUser` policy, the
`AmazonEC2ContainerRegistryReadOnly` policy only allows pulling images from your
private registry. That way, you can ensure that your EC2 instances can't do any
harm by making modifications to your images.

Next, specify a name for your role (e.g., `product-api`) and add a description
that indicates its purpose:

![specify name and description](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/f280cabc-d9e9-4343-bbb3-28bf4e248500/public =960x571)

Finally, scroll down to the bottom of the page and click **Create role**:

![create role](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/ee50fddc-df01-471a-099d-bbcb35766300/lg2x =960x429)

After a moment, a message confirms the successful creation of your IAM role:

![create role flash message](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/12926b22-fa62-4314-9e02-2e98f0a94700/orig =961x143)

You can now attach the role to your EC2 instance.

### Attaching an IAM role

Navigate back to the EC2 instances dashboard and select the `product-api`
instance:

![select product-api instance](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/ffc391f3-7487-45a2-1068-fcf8d084c800/md2x =960x295)

From the **Actions** menu, navigate to **Security** and click **Modify IAM
role**:

![modify IAM role](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/93b20d1f-f9b3-4c90-2cf9-2ce2818ec900/public =958x377)

Specify the `product-api` IAM role that you created earlier as the preferred
**IAM role**, then click **Update IAM role**:

![update IAM role](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/9b5e1e65-6476-4017-3a6a-1b5127af3500/md2x =959x479)

A message confirms the operation:

![role attached flash message](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/33b7788f-53d6-4e7a-5401-6651b461ce00/md2x =961x290)

Keep in mind that it may take a few seconds before the role attachment
propagates to your EC2 instance. The attachment is then reflected in the
instance metadata: a rich set of information, unique to every individual EC2
instance, that describes its properties as they are stored in the internal
inventory systems of AWS.

The instance metadata can be accessed from within the EC2 instance itself
through a special HTTP endpoint (`http://169.254.169.254/latest/meta-data`).

To explore the metadata, SSH into your instance:

```command
ssh -i ~/Downloads/product-api.pem admin@51.20.124.12
```

Then request an authorization token from the metadata service and assign it to a
shell variable (`TOKEN`) for use in subsequent commands:

```command
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 1200")
```

Finally, send a request to the metadata service using the generated token to
list the available metadata categories:

```command
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data
```

You'll see a ton of interesting metadata endpoints that you can use to obtain
various pieces of information about your EC2 instance:

```text
[output]
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
events/
hostname
iam/
identity-credentials/
instance-action
instance-id
instance-life-cycle
instance-type
local-hostname
local-ipv4
mac
metrics/
network/
placement/
profile
public-hostname
public-ipv4
public-keys/
reservation-id
security-groups
services/
system
```

The AWS CLI uses these endpoints to automatically obtain temporary security
credentials (an access key ID, a secret access key, and a session token). This
enables the CLI to authenticate with your AWS account and perform the
operations permitted by the attached IAM role.
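
You can observe this mechanism yourself by querying the
`iam/security-credentials/` category of the metadata service. The sketch below
only assembles the endpoint URLs; the actual `curl` calls (shown as comments)
work solely from within an EC2 instance:

```shell
# Base URL of the instance metadata service, queried via IMDSv2.
IMDS="http://169.254.169.254/latest"
CREDS="$IMDS/meta-data/iam/security-credentials"

# From within the EC2 instance, you would run:
#   TOKEN=$(curl -s -X PUT "$IMDS/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 1200")
#   ROLE=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" "$CREDS/")   # prints the role name
#   curl -s -H "X-aws-ec2-metadata-token: $TOKEN" "$CREDS/$ROLE"      # prints JSON credentials
```

The JSON response includes an `AccessKeyId`, `SecretAccessKey`, `Token`, and an
`Expiration` timestamp; the AWS CLI refreshes these automatically before they
expire.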

Since you created your EC2 instance from the official Debian AMI (Amazon Machine
Image), the AWS CLI is already available on your Linux VM, which you can verify
by executing:

```command
aws --version
```

```text
[output]
aws-cli/2.9.19 Python/3.11.2 Linux/6.1.0-10-cloud-amd64 source/x86_64.debian.12 prompt/off
```

To retrieve an ECR authorization token for the Docker client installed on your
EC2 instance, you can then run:

```command
aws ecr get-login-password | docker login --username AWS --password-stdin <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com
```

This will lead to a similar output:

```text
[output]
WARNING! Your password will be stored unencrypted in /home/admin/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
```

With that, you can finally pull the images from ECR to your EC2 instance.

### Launching Docker containers

Go ahead and pull the images to your EC2 instance by running the following
commands:

```command
sudo docker pull <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/product-api:1.0.0
```

```command
sudo docker pull <AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/web-server:1.0.0
```

Instead of launching them manually, you can create a simple `compose.yaml` file,
populated with the following contents:

```yaml
[label compose.yaml]
x-common-settings: &common-settings
  restart: always
  network_mode: host

services:
  web-server:
    <<: *common-settings
    image: 381491990672.dkr.ecr.eu-north-1.amazonaws.com/web-server:1.0.0
  product-api:
    <<: *common-settings
    image: 381491990672.dkr.ecr.eu-north-1.amazonaws.com/product-api:1.0.0
    environment:
      - APP_KEY=base64:a2nQ3bQFHbjU50y1oeeaNxfFpDCsF5t4egS/zEiY5lQ=
      - DB_HOST=tutorial.cluster-cb08aaskslz3.eu-north-1.rds.amazonaws.com
      - DB_USERNAME=product_api
      - DB_PASSWORD=test123
      - DB_DATABASE=product_api
```

This `compose.yaml` file defines two services named `web-server` and
`product-api`, and an interesting `x-common-settings` fragment.

The `x-common-settings` fragment defines the following settings that are
inherited by both services:

- `restart: always` ensures that the `web-server` and `product-api` containers
  are automatically restarted in case they terminate unexpectedly or in case the
  Docker daemon restarts on the VM (e.g., after a system reboot).
- `network_mode: host` places the network stack of both containers directly on
  the host machine. In other words, their network is not isolated from the host,
  and port forwarding is not needed. This generally results in better network
  performance, but the real reason for doing it here is that it allows the
  `web-server` container to reach `product-api` on `localhost:9000`, which is
  identical to how both containers see each other when deployed with AWS ECS.

The `environment` settings for the `product-api` have been taken directly from
the `db.env` file you created earlier. The only exception is `APP_KEY`, where
you're hard-coding a dummy encryption key that the PHP application requires for
booting up.
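
If you'd prefer not to inline these variables, Compose can also load them from
a file via the `env_file` option. A sketch of just the `product-api` service,
assuming `db.env` is copied next to `compose.yaml` on the instance:

```yaml
[label compose.yaml]
services:
  product-api:
    <<: *common-settings
    image: 381491990672.dkr.ecr.eu-north-1.amazonaws.com/product-api:1.0.0
    env_file:
      - db.env
    environment:
      - APP_KEY=base64:a2nQ3bQFHbjU50y1oeeaNxfFpDCsF5t4egS/zEiY5lQ=
```

Note that values defined under `environment` take precedence over those loaded
from `env_file`.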

At this point, you can finally launch the containers:

```command
sudo docker compose up -d
```

To verify that the application works, open up a browser and input the public IP
address of your EC2 instance in the address bar. You should see a page listing
the five fictional products that you generated earlier with the
`artisan db:seed` command:

![api products](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/d2bbcf53-d8f1-4620-2b93-7d23751cf300/public =960x717)

### Scaling your deployment

The application is now running and can be publicly accessed over HTTP. Depending
on its size and the amount of traffic it receives, a single EC2 instance may
suffice for a while, but what if you need to handle more traffic or ensure high
availability?

In that case, you may want to consider implementing
[auto-scaling groups](https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-groups.html)
and using an
[application load balancer](https://aws.amazon.com/elasticloadbalancing/application-load-balancer/)
to distribute traffic evenly across multiple EC2 instances for increased
performance and reliability.

Let's briefly go over this topic.

Auto-scaling groups are created from another resource called
[launch templates](https://docs.aws.amazon.com/autoscaling/ec2/userguide/launch-templates.html).
You may think of launch templates as blueprints for spinning up new EC2
instances. To create one, you first need an AMI (Amazon Machine Image) captured
from your customized EC2 instance. You can then use that AMI to define a launch
template.

Navigate back to your EC2 instance dashboard, select the `product-api` instance,
and then from **Actions**, navigate to **Image and templates** and click
**Create image**:

![create image](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/0155051e-5cba-4703-615c-bc11d0a35e00/md2x =959x406)

Specify a name for your image (e.g., `product-api`):

![specify image name](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/784ea163-170b-4c61-8008-c67483876700/md2x =961x394)

Leave everything else at its default settings, scroll down to the bottom of the
page, and click **Create image**:

![create image](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/0f85ed2a-c14c-4885-72d5-d9b146655000/md1x =1525x589)

A message appears indicating that the operation has been accepted for execution:

![AMI flash message](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/0d985d1a-113e-4025-02d4-a0a79a933100/public =960x312)

Creating the image takes some time, and your EC2 instance is restarted before
the process completes. This allows AWS to capture an accurate snapshot of the
storage volume attached to the instance.

You can find out whether the AMI has finished creating by navigating to the
**AMIs** page:

![navigate to AMI](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/00539312-04db-433d-e413-7065ee16fb00/orig =960x335)

There, select the `product-api` AMI and observe its **Status**. When the AMI
is ready, its status shows as **Available**:

![AMI status](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/2b3ce9fd-27cd-4528-7f77-41a3be25bf00/lg2x =959x647)

As soon as the AMI status becomes **Available**, you can navigate to the
**Launch Templates** page:

![navigate to launch templates](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/6a4da579-56ed-4fd0-8eb5-0c942de88300/lg1x =960x384)

There, click the **Create launch template** button:

![create launch template](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/5db624bf-9995-442f-d1d3-1348f6164900/public =959x229)

On the next page, find the **Application and OS Images (Amazon Machine Image)**
section, select the **My AMIs** tab, restrict the options to **Owned by me**,
and select the `product-api` AMI you created earlier:

![application and os images](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/28f2a03c-30b7-483d-d3f6-a9202e307b00/lg1x =960x834)

Scroll down to the **Instance type** section and set the instance type to
`t3.micro`:

![set instance type](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/7737853b-b839-4aa2-3a07-0c01de477700/md1x =961x370)

In general, you should specify an instance type capable of meeting the
resource requirements of your application. Since the Product API has very
modest requirements, the `t3.micro` will suffice for this tutorial (though it
would rarely be enough for real-world applications).

Scroll down a bit further and find the **Key pair (login)** section. There,
choose the `product-api` pair that you created earlier:

![specify key-pair](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/466b384c-a3d5-4969-bdc4-042744c40600/md2x =957x325)

Each EC2 instance in the auto-scaling group will use that key pair, which
means that you'll be able to SSH into any instance in the group if you need to.

Scroll down a bit more and find the **Network settings** section. There, choose
**Select existing security group** and specify the `launch-wizard-1` as the
security group to be attached to your EC2 instance:

![specify security group](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/8603e967-0af5-47e0-6544-be2766c60800/lg2x =960x550)

The `launch-wizard-1` group was created during the initial launch of the
original `product-api` EC2 instance that you made an AMI from. It allows SSH
traffic from your public IP address, and HTTP traffic from the Internet.

Leave everything else at its default settings, scroll down to the bottom of the
page, and click **Create launch template**:

![create launch template](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/0ce94a8f-8836-4ea2-3072-9e23f0f37a00/lg2x =960x548)

A message confirms the creation:

![flash message](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/e6b207cd-e78d-4285-3c1f-37bb8174bf00/md2x =961x238)

Navigate to **Auto Scaling Groups** from the menu on the left:

![auto-scaling groups](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/acf80494-3c8d-4bef-d0d3-e60027058300/orig =966x229)

Click **Create Auto Scaling group**:

![create auto-scaling group](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/22398cec-800a-4808-d6aa-0b04f3842c00/public =960x600)

Specify a name for your auto-scaling group and select `product-api` as the
**Launch template**:

![choose launch template](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/d3512a42-9344-4c10-f983-842ae8082400/lg1x =960x843)

Scroll down to the bottom of the page and click **Next**:

![click next](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/45fd430a-2531-4404-79bf-c1fdbc584400/lg1x =955x312)

At the **Choose instance launch options** step, enable all **Availability
Zones** and click **Next**:

![choose instance launch options](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/ed95a763-810e-4024-3bd4-195d8b7c5d00/orig =978x1335)

At the **Configure advanced options** step, find the **Load balancing** section
and select **Attach to a new load balancer**:

![attach to a new load balancer option](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/4038ffa0-19dc-46fe-0b5a-31a31220b600/md1x =960x694)

A new section named **Attach to a new load balancer** appears. There, set the
**Load balancer type** to **Application Load Balancer**, the **Load balancer
name** to `product-api`, and the **Load balancer scheme** to
**Internet-facing**.

![attach to a new load balancer section](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/f1cc95a7-4450-42f5-6c70-2dbf41101c00/lg1x =961x671)

A little further down, find the **Listeners and routing** setting, select
**Create a target group**, and set the **New target group name** to
`product-api`:

![create a target group](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/17de842b-840e-4318-74e7-7f036a3af800/md1x =962x714)

Scroll down to the very bottom of the page and click **Next**:

![click next](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/edf472ef-bc4d-4238-85dc-2efb5ba64200/md1x =960x311)

These configurations will automatically create two new resources in your AWS
account: an application load balancer and a target group. The application load
balancer opens up a new public endpoint for receiving HTTP requests directed
towards your application, while the target group tracks the EC2 instances that
the load balancer may forward traffic to.

The next step is titled **Configure group size and scaling**. The **Group size**
section allows you to specify the exact number of EC2 instances that will be
launched initially in your auto-scaling group through the **Desired capacity**
setting:

![desired capacity](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/cbbaec4b-7b82-4de1-06da-1b8a44e6e800/lg2x =961x632)

Further down, the **Scaling** section allows you to specify the auto-scaling
criteria:

![scaling section](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/ef5ba840-ed8a-469a-6597-f43be74f5900/public =960x887)

The **Scaling limits** (**Min desired capacity** and **Max desired capacity**)
determine the minimum and maximum number of EC2 instances that this
auto-scaling group may launch in your AWS account. The **Automatic
scaling - optional** section allows you to enable a **Target tracking scaling
policy**, through which you can specify a **Metric type** (such as **Average
CPU utilization**) that triggers auto-scaling events, launching or terminating
EC2 instances to match demand.

Scroll to the very bottom of the page and click **Skip to review** to fast-track
to the final step of the process (as none of the remaining steps contain
anything essential):

![skip to review](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/0e9192b0-bbee-4d0a-2e1f-7663dc611a00/lg1x =960x344)

The final **Review** step allows you to confirm your configuration settings:

![review step](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/4066beb3-ae9b-4442-9ff6-2c4954b9e200/lg2x =960x684)

Just scroll down to the bottom of the page and click **Create Auto Scaling
group**:

![create auto scaling group](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/f7e278eb-fe73-4e03-4af7-b0e4eb3cd500/public =961x332)

Provisioning the auto-scaling group, the initial EC2 instance, the application
load balancer, and the respective target group will take a bit of time, so
please wait for everything to complete.

When everything is ready, you'll see 1 instance reported as running in the
`product-api` auto-scaling group:

![product-api auto-scaling group](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/2343a142-ba8c-49e8-9ac8-589e91d57200/md1x =960x433)

Expand the menu on the left and find the **Load Balancers** link to navigate to
the load balancers overview page:

![load balancers link](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/1b42731b-d46f-44f0-c5dc-4a61f1dcfe00/orig =960x480)

From the **Load balancers** page, navigate to the `product-api` load balancer:

![product-api load balancer](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/cc7656e6-aa5b-423d-ad58-fde8bc021600/md1x =959x379)

On its page, you'll find the **DNS name** pointing to your load balancer:

![load balancer DNS name](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/9a38ee84-e7ce-43a0-abd0-3fe440969f00/lg2x =1183x664)

Paste that address in your browser, and you should see a listing of the test
products you created earlier with the `artisan db:seed` command:

![product listing](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/e0f078dc-cd88-4c55-18b7-19690e0b9b00/orig =955x714)

Your auto-scaling group works, but as you can see, setting everything up was a
long and tedious process. Let's clean up the resources you created so far and
then explore a much more convenient method for deploying Docker containers on
AWS.

### Cleaning up

The next section will present a much easier method for deploying your
application on AWS. But before moving on, please go ahead and remove all the
resources listed below, as they won't be needed anymore, and some of them will
also conflict with resources that you're about to create in the next section:

1. `product-api` auto-scaling group.
2. `product-api` application load balancer.
3. `product-api` target group.
4. `product-api` AMI.
5. `product-api` EBS snapshot.
6. `product-api` launch template.

Follow this order, as deleting a resource may require an earlier one to be
removed first (e.g., you can't delete the target group before removing the load
balancer).

When you're ready, you can move on to the next section of the tutorial.

## Deploying to AWS ECS

As you can see, deploying your images directly to an EC2 instance, while
sparing you a lot of low-level infrastructure details, still involves plenty of
work. Wouldn't it be great to just upload your images to AWS and let the
platform handle everything else for you? That's where a serverless container
platform such as AWS ECS can help.

ECS allows you to specify the URLs of your Docker images and lets the platform
do all the heavy lifting for you. You don't need to provision any Linux VMs or
load balancers yourself. There are no IAM roles to set up, no launch templates
to create, no auto-scaling groups to configure, and no Docker Engine to
install. AWS handles everything for you automatically!

Let's see how this works.

### Creating a task definition

Find the Elastic Container Service (ECS) in the AWS console:

![find ECS service](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/f47530e1-e0db-4c60-edb8-ab82a1927900/orig =962x274)

Select **Task definitions** from the menu on the left:

![select task definitions](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/be87961d-4eb1-49a4-db05-f000cfa32800/md1x =961x443)

In AWS, a task definition supplies ECS with detailed information about the
workload you intend to deploy.

Click **Create new task definition**:

![create new task definition](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/cda36963-e84e-4119-3efa-901a97945100/public =961x304)

Specify `product-api` as the **Task definition family**:

![specify task definition family](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/c2c7571b-b0e8-41ee-aa5d-dee958ef0a00/lg1x =960x404)

The task definition family provides a unique name to distinguish the present and
future versions of your task definition (modifying an existing task definition
results in a new version being created within the same task definition family).

Don't modify anything related to the **Infrastructure requirements**; just make
sure that **AWS Fargate** is checked:

![AWS Fargate](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/24d1784a-5be6-47f1-327a-50e92fb0ca00/md2x =970x335)

[AWS Fargate](https://aws.amazon.com/fargate/) is the serverless compute
platform that allows you to run containers without managing any underlying
infrastructure and spinning up your own EC2 instances.

Scroll down to the **Container - 1** section and start filling in the required
details. Specify `web-server` as the **Name** and
`<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/web-server:1.0.0` as the
**Image URI**:

![configure container 1](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/6db4a7d6-beed-44dc-1e67-2d329d985d00/md1x =960x390)

Don't be confused by the **Private registry authentication** toggle switch:

![private registry authentication](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/b2cd3255-b9b1-4f8b-691e-2367671e9900/lg2x =961x389)

Even though you are using Docker images hosted in your private registry on ECR,
ECS is already capable of authenticating with it automatically. There are no IAM
users to create or IAM roles to set up. Enabling this option is only necessary
if you are pulling private images from an external registry (such as Docker
Hub). This is an excellent example of why choosing ECR over Docker Hub was a
good idea; it integrates much better with other AWS services, saving you
additional time and effort.

You may leave everything else for **Container - 1** at its default settings.
Scroll further down and find the **Add container** button:

![add container](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/cae10d3d-8447-46a9-ec02-f1b7b8186b00/public =960x175)

Click on it and a new section named **Container - 2** will appear. Fill in the
**Name** and **Image URI** like you did for **Container - 1**, but this time use
`product-api` as the name and
`<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/product-api:1.0.0` as the
image:

![configure container 2](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/9cb31804-9b7f-4b0d-27ee-c1b73c883400/md1x =960x388)

Unlike **Container - 1**, however, **Container - 2** requires one additional
setting. You have to expose its FastCGI port so that the `web-server` container
can reach the `product-api` container on `localhost:9000`. Containers in the
same Fargate task share a network namespace, which is why they can talk to each
other over `localhost`.

For that purpose, click the **Add port mapping** button:

![add port mapping](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/8d7d07da-9695-432d-c1d7-2d9f3a557e00/md2x =961x382)

Then input `9000` in the **Container port** field:

![map port 9000](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/7d6c0f65-7ecc-41f4-5ce9-6b69f0295000/lg1x =960x293)

Leave everything else for **Container - 2** at its default settings, scroll down
to the very bottom of the page, and click the **Create** button:

![create task definition](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/0f17618b-990f-4d92-cfa6-0c4aaec3db00/md2x =965x343)

With that, your task definition is almost ready for deployment:

![task definition created](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/26ea40b2-4175-4400-0afa-f0bcb192fb00/lg1x =960x608)

The only thing left is to create a Fargate cluster that you can deploy it on.
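For reference, the task definition you just built in the console could be
sketched as the following payload for `aws ecs register-task-definition
--cli-input-json`. The task-level CPU/memory sizes and the `web-server` port
mapping are assumptions mirroring common console defaults; the image URIs use
the same placeholders as above:

```python
# Sketch of the task definition as a register-task-definition payload.
# CPU/memory values and web-server's port 80 mapping are illustrative
# assumptions; image URI placeholders come from the tutorial.
import json

task_definition = {
    "family": "product-api",
    "requiresCompatibilities": ["FARGATE"],
    # Fargate tasks always use awsvpc networking, so containers in the
    # same task can reach each other over localhost.
    "networkMode": "awsvpc",
    "cpu": "256",
    "memory": "512",
    "containerDefinitions": [
        {
            "name": "web-server",
            "image": "<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/web-server:1.0.0",
            # Assumed default HTTP port mapping for the web server.
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        },
        {
            "name": "product-api",
            "image": "<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/product-api:1.0.0",
            # The extra mapping added in the console: expose FastCGI so
            # web-server can reach product-api on localhost:9000.
            "portMappings": [{"containerPort": 9000, "protocol": "tcp"}],
        },
    ],
}

print(json.dumps(task_definition, indent=2))
```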

### Creating a cluster

Deploying a task definition requires a cluster. A cluster abstracts away the
infrastructure required for running your Docker containers.

To create one, navigate to **Clusters** from the menu on the left:

![navigate to clusters](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/7f13a043-2ef8-40af-dfa2-5defac4c2b00/lg1x =960x385)

Click **Create cluster**:

![create cluster](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/0a7a3d86-5a9b-4933-8d02-191d659d3000/public =959x404)

Specify `tutorial` as the **Cluster name**:

![specify cluster name](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/8726e8f6-30c3-408c-bfaf-b4a253585500/orig =961x534)

Make sure that the cluster uses AWS Fargate as its **Infrastructure** component:

![specify cluster infrastructure](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/e156016d-d8bf-4c31-07fc-700113bcf000/md2x =961x414)

Leave everything else at its default settings, scroll down to the very bottom of
the page, and click **Create**:

![create cluster](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/b0f34f08-b75e-4687-ac0b-20ea4dbb2500/lg1x =965x271)

Creating the cluster takes some time, so be patient and wait for the process to
complete:

![cluster created](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/db08f311-c3b4-4b8d-64e9-82c098ad5000/md2x =958x403)

Once the cluster is created, you can proceed with deploying your task definition
onto it.

### Creating a service

With your cluster created, click on **View cluster**:

![view cluster](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/bb9f8e1b-b3a9-4a6a-6a18-6977601be800/orig =961x402)

This will take you to an overview page with a lot of details about the cluster.
There, find the **Services** section and click the **Create** button to initiate
the deployment of a new service:

![create service](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/ede2c5b6-cb31-4ae7-9369-ce00c5a77800/md1x =960x757)

You can provide configuration for your service using the form that appears next:

![create service form](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/6805179c-2da7-4e49-4c6d-928cee12d400/lg2x =960x552)

Scroll down, find the **Deployment configuration** section, and set the task
definition **Family** to `product-api`:

![specify task definition family](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/7bdabefa-4478-43c4-1a27-2656bd103d00/md1x =960x625)

Scroll down a little further and find the **Networking** section. There, add
`launch-wizard-1` to the list of selected security groups. As you may remember,
the `launch-wizard-1` security group allows HTTP traffic from the internet to
reach the application. If you don't select it here, your deployment won't be
able to receive incoming HTTP requests from external sources:

![specify security group](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/08cf4991-1a32-49f5-a3d6-1c076c844900/md1x =958x810)
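The same networking choice could be expressed as the `--network-configuration`
argument of `aws ecs create-service`. This is only a sketch: the subnet and
security group IDs are hypothetical placeholders, not values from the tutorial:

```python
# Sketch of an ECS service network configuration. The subnet and
# security group IDs are hypothetical placeholders.
import json

network_configuration = {
    "awsvpcConfiguration": {
        "subnets": ["<SUBNET_ID>"],
        # Must include a group that allows inbound HTTP (like
        # launch-wizard-1), or the tasks can't receive external requests.
        "securityGroups": ["<LAUNCH_WIZARD_1_SG_ID>"],
        # Gives each Fargate task a public IP so it can pull images
        # from ECR without a NAT gateway.
        "assignPublicIp": "ENABLED",
    }
}

print(json.dumps(network_configuration, indent=2))
```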

Scroll down a little further and find the **Load balancing** section. There, set
the **Load balancer type** to **Application Load Balancer** and the **Load
balancer name** to `product-api`:

![specify load balancer name and type](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/f8de1451-8a7d-421d-66b2-79671d323d00/md2x =960x681)

This will automatically create an application load balancer to route traffic
across your deployed `product-api` instances.

A little further down, change the **Target group name** to `product-api` and the
**Health check path** to `/api/products`:

![specify target group name and health check path](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/84ec3b56-a451-413c-5bea-cf321f496500/public =960x820)

The health check path specifies a URI in your application that the application
load balancer will periodically request to validate whether the service is
healthy (any response code other than `200` is considered unhealthy). As the
Product API doesn't have a dedicated health check endpoint, the existing
`/api/products` URI is a suitable option.
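The target group created here roughly corresponds to the following
`CreateTargetGroup` parameters (as used by `aws elbv2 create-target-group`).
This is a sketch; the protocol/port values mirror common defaults and are
assumptions:

```python
# Sketch of the target group parameters behind the console form.
# Protocol/port values are illustrative assumptions.
import json

target_group = {
    "Name": "product-api",
    "Protocol": "HTTP",
    "Port": 80,
    # Fargate tasks are registered by IP address, not instance ID.
    "TargetType": "ip",
    "HealthCheckPath": "/api/products",
    # Only HTTP 200 responses count as healthy.
    "Matcher": {"HttpCode": "200"},
}

print(json.dumps(target_group, indent=2))
```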

Leave everything else at its default settings, scroll down to the very bottom of
the page, and click **Create**:

![create service](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/66503c9b-3e21-4eb9-c11a-7cb8131d9800/md2x =962x254)

As with the creation of the Fargate cluster, the initial provisioning of
everything required to start your service (load balancer, target group, Docker
containers) will take a while, so be patient and wait for everything to come
up. After it does, the `product-api` service will appear in the list of services
running on the `tutorial` cluster (with **Status** reported as **Active**).

Go ahead and click on the **Service name** to continue further:

![service list](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/f0e33d00-544a-4e29-bdc8-bf6c56503c00/md1x =960x445)

Click **View load balancer**:

![view load balancer](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/00b865ec-7790-4755-bdc5-17f4076ce900/lg1x =1061x676)

This takes you to the application load balancer page, where you can find the
**DNS name** pointing to the application:

![view load balancer](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/6434fa66-7ee7-4b74-e787-429910253400/lg2x =1219x642)

Paste the DNS name in the address bar of your browser, and you should see the
application returning a response:

![view app](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/fe71436e-d12b-4dce-1671-215da3b75000/public =959x717)

Good job! The Product API is running in the cloud, and you didn't have to
provision any EC2 instances or additional infrastructure yourself. The
serverless ECS approach greatly reduces maintenance overhead and complexity,
and as you can see, the deployment process is much simpler and easier to manage
than the EC2 auto-scaling group approach.

This marks the end of the tutorial.

## Final thoughts

Congratulations on finishing this tutorial! You went through a lot of steps to
get here, but you learned a lot in the process and should now feel much more
confident using AWS (and ECS in particular) in your future projects. If this
was your first time exploring AWS, you've picked up many new terms and concepts
that will help you navigate its complexities and explore its services with
greater confidence and ease.

I encourage you to continue exploring AWS further and experimenting with the
different tools and solutions it can offer to keep improving your deployment and
development processes. Some things worth trying are:

- Setting up [AWS Client VPN](https://docs.aws.amazon.com/vpn/) to access your
  cloud resources securely without exposing them to the public.
- Using [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/) to
  store your database credentials and pass them to your applications.
- Looking into [AWS CodeBuild](https://docs.aws.amazon.com/codebuild/) for
  automatically building and publishing your Docker images to ECR without using
  your local machine.
- Disabling public access to your RDS instance and running your database
  migrations from an EC2 instance instead of your local machine.
- Getting familiar with [Route 53](https://docs.aws.amazon.com/route53/) and
  [AWS ACM](https://docs.aws.amazon.com/acm/) for setting up HTTPS for your
  application.
- Learning more about [VPC](https://docs.aws.amazon.com/vpc/) and
  [security groups](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html)
  to make sure your cloud environment is properly secured and isolated.

Thanks for reading, and until next time!