Run a PHP application on AWS Fargate

Following the serverless trend, all that hype (or not?), I was looking through the AWS services on offer and stumbled upon AWS Fargate, a service that lets you run containerized applications on either Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS).

For the tooling (development and deployment) of our PHP application I'd like to stick to widely adopted tools:

  • Docker (with Docker Compose for local testing)
  • GitHub Actions for building and deploying
  • Amazon ECR as the Docker registry
  • Amazon ECS on AWS Fargate for running the containers
  • AWS Systems Manager Parameter Store for environment variables and secrets

Of course, you're not bound to this tooling. If you want to use an external database hosted somewhere other than on Amazon, feel free to do so. The same applies to logging: if you've got a working Graylog up and running, you're free to use that. These things are not mandatory for AWS Fargate; I just chose them for convenience.

If you want to know about the pricing of this setup right now, let's leave it at what advocates usually say: it depends! 😉 To get a general feeling, have a look at the costs section at the end of this article; a lot of the costs depend on usage and region. For this project/experiment I use Laravel 7. Of course, you can use any framework you'd like to run this application. Just make sure you adjust certain paths when building the Docker images.

Grab the code

All files can be found in this GitHub repository. Feel free to open issues and PRs if you spot anything wrong.

The final directory layout:

β”œβ”€β”€ .dockerignore
β”œβ”€β”€ .editorconfig
β”œβ”€β”€ .env
β”œβ”€β”€ .env.example
β”œβ”€β”€ .git
β”œβ”€β”€ .gitattributes
β”œβ”€β”€ .github
β”œβ”€β”€ .gitignore
β”œβ”€β”€ .idea
β”œβ”€β”€ .styleci.yml
β”œβ”€β”€ Dockerfile
β”œβ”€β”€ app
β”œβ”€β”€ artisan
β”œβ”€β”€ bootstrap
β”œβ”€β”€ composer.json
β”œβ”€β”€ composer.lock
β”œβ”€β”€ config
β”œβ”€β”€ database
β”œβ”€β”€ docker
β”œβ”€β”€ docker-compose.yml
β”œβ”€β”€ node_modules
β”œβ”€β”€ package.json
β”œβ”€β”€ phpunit.xml
β”œβ”€β”€ public
β”œβ”€β”€ resources
β”œβ”€β”€ routes
β”œβ”€β”€ server.php
β”œβ”€β”€ storage
β”œβ”€β”€ task-definition.json
β”œβ”€β”€ tests
β”œβ”€β”€ vendor
β”œβ”€β”€ webpack.mix.js
└── yarn.lock

So then, let's get started!

Preparing the Docker images

To provide the complete application to AWS Fargate, we'll split it into three containers:

  • Nginx
  • PHP-FPM
  • NodeJS

For the database we don't create a separate container, as we need a stateful solution. For our Docker images we'll rely on a multi-stage build and Alpine-based images only, to keep the image sizes extremely small.

Alpine images

Fast and small: smaller images mean lower storage costs on ECR and less attack surface.

You can find some comparisons and further reading on Alpine images in this blog post or on the Alpine Docker page.
Here is an example just to demonstrate the basic usage in our multi-stage build. You can find the full Dockerfile in the repository.

### PHP

# Use the php:7.4-fpm-alpine image and alias it as 'laravelapp_php'
FROM php:7.4-fpm-alpine AS laravelapp_php

# Install the PHP extensions you need. The build tools are only added temporarily
# and removed again in the same layer, to keep the image small
# (illustrative selection of extensions; see the repository for the full Dockerfile)
RUN set -eux; \
    apk add --no-cache --virtual .build-deps $PHPIZE_DEPS; \
    docker-php-ext-install pdo_mysql bcmath; \
    apk del .build-deps

# Then copy the items needed from another image into this one. In our case we copy the 'composer' executable
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer

# Work from the directory the nginx stage below copies 'public/' from
WORKDIR /srv/laravelapp

# Copy everything, excluding the entries from the .dockerignore file
COPY . ./

RUN set -eux; \
    mkdir -p storage/logs storage/framework bootstrap/cache; \
    composer install --prefer-dist --no-progress --no-suggest --optimize-autoloader; \
    composer clear-cache

# ... and so on ...

### NGINX

FROM nginx:1.17-alpine AS laravelapp_nginx

# Copy our files from the laravelapp_php image into this one
# Nginx only needs the files in 'public/'. The other PHP files only need to exist in the php image
COPY --from=laravelapp_php /srv/laravelapp/public public/

Because we only keep the files needed for each container, the image sizes stay relatively small. For the php image, the size mainly depends on the extensions needed, as we have to include the build and development files for compiling them; this can increase the image size (hence the temporary .build-deps layer shown above).

For my setup I ended with these sizes:

REPOSITORY                TAG            IMAGE ID            CREATED           SIZE
laravelapp-nginx       latest         252bcd38e8aa        42 hours ago      19.8MB
laravelapp-php-fpm     latest         9c2afe650880        43 hours ago      220MB
laravelapp-nodejs      latest         9a0b33434754        46 hours ago      381MB

Container orchestration

Although included in the project and supported by AWS Fargate, Docker Compose is not going to be used in our AWS deployment. For orchestrating the containers we are going to use Amazon's ECS task definitions, in which you can link containers. I will come to that a bit later when talking about the task definition. You can use the docker-compose file for local testing of the containers. In it you can see how we address the different stages of the Docker image using target, even though we have only one Dockerfile.

version: "3.4"

services:
  php-fpm:
    build:
      context: .
      target: laravelapp_php

Through the depends_on item the containers are linked.
Do not use links. Using links in your docker-compose.yml is considered deprecated and will be removed sooner or later. Either use depends_on or create a user-defined network. For local testing you just need to run docker-compose up; afterwards you'll find the three containers linked, up, and running.
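To illustrate, here is a minimal sketch of how the nginx service could be wired up, assuming the stage aliases from the Dockerfile above (the nodejs service follows the same pattern; check the repository for the actual file):

  nginx:
    build:
      context: .
      # Build only the nginx stage of our multi-stage Dockerfile
      target: laravelapp_nginx
    depends_on:
      # Start php-fpm first; locally, nginx reaches it by its service name
      - php-fpm
    ports:
      - "80:80"

The push to the repository is part of our deployment pipeline, which is our next topic.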

Deployment with GitHub Actions

For deployment we are going to use GitHub Actions to build the Docker images, push them to a Docker registry (ECR in our case), and create a deployment task for ECS that picks up the images and spins up a new service running our three containers.

The workflow

Most of you are already familiar with GitHub Actions, so we are going to step right into our workflow file at .github/workflows/deploy.yml. To make the workflow work you need to register secrets in GitHub for your AWS key, AWS secret key and AWS region. Furthermore, you need to adjust the ECR_REPOSITORY names (3 in total) according to the repositories you have created in AWS. For that, we quickly switch to the ECR console and create three repositories.

Tag immutability

This feature has been supported since mid-2019 and prevents tags from being overwritten. As we're tagging the images with latest and want this tag to always reflect the latest changes, we leave it disabled.

Scan on push

Image scanning helps identify software vulnerabilities in your container images. We are going to use it here, but you can certainly disable it if you don't like it for whatever reason.

After you have created the secrets in your GitHub repository, set up the repositories on Amazon ECR, and adjusted the repository names in your workflow, let's quickly have a look at each of the workflow steps (a condensed sketch of the key steps follows the list):

  1. Checkout: Checks out your repository from GitHub
  2. Configure AWS credentials: Configures AWS credential environment variables for use in the following GitHub Actions
  3. Login to Amazon ECR: Logs into Amazon ECR with the local Docker client
  4. Build, tag, push image: nginx: Builds the Docker image for nginx and pushes it to Amazon ECR
  5. Build, tag, push image: php-fpm: Builds the Docker image for php-fpm and pushes it to Amazon ECR
  6. Build, tag, push image: nodejs: Builds the Docker image for nodejs and pushes it to Amazon ECR
  7. Render task definition: nginx: Renders the final repository URL, including the repository name, into the task-definition.json file. We'll come to that in the next topic.
  8. Render task definition: php-fpm: Same as step 7, for the php-fpm image.
  9. Render task definition: nodejs: Same as step 7, for the nodejs image.
  10. Deploy Amazon ECS task definition: A task definition is required to run Docker containers in Amazon ECS. You define your containers, their hardware resources, inter-container as well as host connections, where to send logs to, and more. See the next section about this topic. But first let's check two important settings, service and cluster:
    cluster is the name you are going to choose in Amazon ECS.
    service is the name of the service that picks up the task and deploys it into the cluster.
  11. Logout of Amazon ECR: Logs out of Amazon ECR and erases any credentials connected with it
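To make these steps more tangible, here is a condensed sketch of steps 2 to 4 as they could appear in deploy.yml. The secret names and the echo-based step output are assumptions of mine (the latter was the common mechanism at the time); check the repository for the real workflow:

- name: "Configure AWS credentials"
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: ${{ secrets.AWS_REGION }}

- name: "Login to Amazon ECR"
  id: login-ecr
  uses: aws-actions/amazon-ecr-login@v1

- name: "Build, tag, push image: nginx"
  id: build-image-nginx
  env:
    ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
    ECR_REPOSITORY: laravelapp-nginx
  run: |
    # Build only the nginx stage and push it tagged as 'latest'
    docker build --target laravelapp_nginx -t $ECR_REGISTRY/$ECR_REPOSITORY:latest .
    docker push $ECR_REGISTRY/$ECR_REPOSITORY:latest
    # Expose the full image URL to the render steps below
    echo "::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:latest"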

Task definition for ECS

In ECS, the basic unit of deployment is a task, a logical construct that models one or more containers. This means that the ECS APIs operate on tasks rather than on individual containers: you can't run a container directly; rather, you run a task, which in turn runs your container(s).

In our workflow, steps 7, 8 and 9 are responsible for adjusting the task-definition.json file. This file can be compared to the docker-compose.yml or any other orchestration file you use to connect your Docker containers. One special thing here is how the three render steps chain into each other:

- id: task-def-nginx
  uses: aws-actions/amazon-ecs-render-task-definition@v1
  with:
    task-definition: task-definition.json
    container-name: nginx
    image: ${{ steps.build-image-nginx.outputs.image }}

- id: task-def-php-fpm
  uses: aws-actions/amazon-ecs-render-task-definition@v1
  with:
    task-definition: ${{ steps.task-def-nginx.outputs.task-definition }}
    container-name: php-fpm
    image: ${{ steps.build-image-php-fpm.outputs.image }}

- id: task-def-nodejs
  uses: aws-actions/amazon-ecs-render-task-definition@v1
  with:
    task-definition: ${{ steps.task-def-php-fpm.outputs.task-definition }}
    container-name: nodejs
    image: ${{ steps.build-image-nodejs.outputs.image }}

Step 7 uses the task-definition.json file as it is. Steps 8 and 9 each take the task definition rendered by the previous step and exchange only the image field of their own container. This iteratively inserts the image URLs from our Docker registry, so that step 10 can finally deploy the complete task definition.
Moving on to the task-definition.json file itself. There is a whole documentation section on this file on the AWS docs page; I'll continue with the parts that are relevant for us here.

{
  "family": "laravel-backend-app",
  "containerDefinitions": [
    {
      // container 1, nginx
    },
    {
      // container 2, php-fpm
    },
    {
      // container 3, nodejs
    }
  ],
  "executionRoleArn": "ecsTaskExecutionRole",
  "cpu": "2048",
  "memory": "4096",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"]
}

Let's go through the file, but starting at the end:

  • requiresCompatibilities: This needs to be set to FARGATE, otherwise ECS won't recognize the task definition properly.
  • networkMode: This is set to awsvpc, so every task launched from this task definition gets its own elastic network interface (ENI) and a primary private IP address. That makes it possible for services and applications to call each other as if they were running on one system (not in distributed containers).
    Example for nginx calling php-fpm: fastcgi_pass 127.0.0.1:9000;. If we were orchestrating with Docker Compose, we would normally address a container by its service name, so the statement above would probably read fastcgi_pass php-fpm:9000;.
  • cpu: The cpu value can be expressed in CPU units or vCPUs in a task definition but is converted to an integer indicating CPU units when the task definition is registered.
  • memory: The memory value can be expressed in MiB or GB in a task definition but is converted to an integer indicating MiB when the task definition is registered.
    Both values, cpu and memory, can be defined for each container separately or for the complete task. In this sample application I defined the values for the complete task, which runs just fine. And remember, you can change (scale) this as you need; this is just a point to start from.
  • executionRoleArn: This is connected to permissions on AWS. We leave this set to ecsTaskExecutionRole; I'll come back to it in the section about configuring AWS Fargate and its roles.
  • family: This is the name of our task and can be freely chosen. So you can deploy multiple laravel-backend-app tasks if you like and do balancing and such. It is just a common name for a set of containers, in our case three.
  • containerDefinitions: Here comes the fun part... the containers. Let me summarize the important things you'll encounter inside the task-definition.json (a condensed example follows below):
    nginx: opens port 80 to the host to be accessible from the outside
    php-fpm: opens port 9000 for inter-container communication only, so it is reachable from nginx; secrets and environment variables (keys, debug settings, the environment to use) are properly set here
    nodejs: nothing special
All containers have essential set to true. This means that if one container fails or stops for any reason, all other containers that are part of the task are stopped as well.
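To give you an idea, here is a condensed sketch of what the nginx entry could look like; the image URL, log group and region are placeholders of mine, and the full definitions live in the repository's task-definition.json:

{
  "name": "nginx",
  "image": "<account-id>.dkr.ecr.<region>.amazonaws.com/laravelapp-nginx:latest",
  "essential": true,
  "portMappings": [
    { "containerPort": 80, "protocol": "tcp" }
  ],
  "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
      "awslogs-group": "/ecs/laravel-backend-app",
      "awslogs-region": "<region>",
      "awslogs-stream-prefix": "nginx"
    }
  }
}

Next is all the configuration of AWS Fargate and the connected services we are going to use.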

AWS Fargate

AWS Fargate cannot be configured directly, as it is more of an underlying technology for running containers serverless on AWS. In the next sections we are going to step through each part that needs configuration to get the whole Laravel application running.

Security: IAMs, roles and permissions

Talking about security on AWS could fill an entire series of posts. I'll keep this to the minimum that I'd personally consider a reasonable way to operate an application.

Separate user and group for your application: In Identity and Access Management (IAM), create a new user called laravelapp that is allowed to run all tasks around your application.

Assign and manage permissions via a group, e.g. LaravelAppManageServices. Make the user laravelapp a member of this group and then assign permissions to this group instead of directly to the user.

Roles for ECR and ECS: By default, the newly created user laravelapp has no rights to execute any operation on ECR or ECS. In our case, this user needs to be able to push images to ECR and to run services on ECS that execute our task and spin up the containers. For that we need to attach two policies to our group:

  • AmazonEC2ContainerRegistryFullAccess
  • AmazonECS_FullAccess

Surely we would need to tighten the permissions a bit later on. I'll get to that in a separate post.

Execution role

The AWS docs say:

The Amazon ECS container agent, and the Fargate agent for your Fargate tasks, make calls to the Amazon ECS API on your behalf. The agent requires an IAM role for the service to know that the agent belongs to you. This IAM role is referred to as a task execution IAM role.

So then, create a new role called ecsTaskExecutionRole and attach the following policies (a sketch of the role's trust policy follows the list):

  • AmazonECSTaskExecutionRolePolicy
  • AmazonSSMReadOnlyAccess: we are going to need that to read our environment variables from AWS Systems Manager Parameter Store
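When creating the role in IAM, pick Elastic Container Service > Elastic Container Service Task as the trusted entity, so that ECS tasks are allowed to assume the role. The resulting trust policy should look roughly like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}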

The name of this role, ecsTaskExecutionRole, needs to match the value in our task definition:

"executionRoleArn": "ecsTaskExecutionRole",
"cpu": "2048",
"memory": "4096",
"networkMode": "awsvpc",

Create the ECS cluster

Head over to your AWS Management Console, open Services, type ECS and click on Elastic Container Service.

On the left side menu click on Amazon ECS > Clusters and hit the Create Cluster button.

Create service: Step 1

Launch type: Fargate
Task definition:
This is prepopulated with the family name in our task-definition.json.

{
    "family": "laravel-backend-app",
    "containerDefinitions": [
        {
            // container 1, nginx
        },
        ...
    ]
}

Service name:
This should match the service name from our workflow file.

- name: "Deploy Amazon ECS task definition"
  uses: aws-actions/amazon-ecs-deploy-task-definition@v1
  with:
    task-definition: ${{ steps.task-def-nodejs.outputs.task-definition }}
    service: laravelapp-backend
    cluster: ${{ secrets.ECS_CLUSTER_NAME }}

Number of tasks: Leave that at 1 for now. We will not run more than one instance of this service.

Create service: Step 2

Cluster VPC: Make sure you select the correct Virtual Private Cloud (VPC) that was created together with your cluster. It should be selected automatically.

Subnets: Select the subnets that are not yet selected, normally two.

Load balancers: We will go with None for the moment and come back later to add a load balancer to our service.

Service discovery: Disable service discovery as we won't use Amazon Route 53 for this project. If needed you can add this later on, of course.

Create service: Step 3

Auto-scaling: Skip that; we won't use it here.

Create service: Step 4

Review all your settings and hit Create service.

Environment variables and secrets

Managing secrets and/or environment variables on AWS can be done with either AWS Secrets Manager or AWS Systems Manager Parameter Store. I decided to go with the Parameter Store for one main reason: it is (almost) free of charge.

  • Parameter store: Free of charge, limit of 10,000 parameters per account
  • Secrets manager: $0.40 per secret per month plus $0.05 per 10,000 API calls

For those who want to read more about the differences and the pros and cons of both solutions, have a look at this blog post for a comparison. Whether it is a configuration value like APP_DEBUG or an actual secret like the APP_KEY, both are stored in the parameter store and injected into the containers via our task-definition.json.

 "secrets": [
    {
        "name": "APP_ENV",
        "valueFrom": "laravelapp_app_env"
    },
    {
        "name": "APP_DEBUG",
        "valueFrom": "laravelapp_app_debug"
    },
    {
        "name": "APP_KEY",
        "valueFrom": "laravelapp_app_key"
    }
],

After you have entered your settings, the parameter store should look like this.

Store your APP_KEY as SecureString instead of type String. A SecureString parameter is any sensitive data that needs to be stored and referenced in a secure manner. If you have data that you don't want users to alter or reference in plain text, such as passwords or license keys, create those parameters using the SecureString datatype.

Although I used the plain String type here, you can do better for the reasons mentioned above. (With the default AWS-managed key, the execution role's SSM read access should be sufficient; a customer-managed KMS key would additionally require kms:Decrypt permissions.)

Quick check

To quickly check if you can reach your site, navigate to your cluster and check the task that is currently running. There you will find the public IP address that points directly to port 80 of your app. When you open that address you should be welcomed by the Laravel landing page of our fresh install.

Add an Elastic Load Balancer (ELB)

Note: We will handle neither HTTPS nor Amazon Route 53 here.

After adding the load balancer you can point your domain via a CNAME record to the DNS name of the ELB. The ELB is going to direct all traffic to port 80 of our application, where our nginx is listening. A load balancer is created in the EC2 service: on the left select Load Balancers and hit the Create Load Balancer button, then select Application Load Balancer.

Create an ELB: Step 1

In the first step make sure you

  • have a listener for HTTP on port 80
  • have the correct Virtual Private Cloud (VPC) selected.

Additionally, add the availability zones for your different subnets. Next would be step 2, which we are going to skip as it is only about HTTPS.

Create an ELB: Step 3 (Security group)

We create a new security group laravelapp-sg with only one rule that allows traffic coming from Anywhere of type HTTP to reach our instance via TCP on port 80.

Type      Protocol      Port range      Source
HTTP      TCP           80              Anywhere

Create an ELB: Step 4 (Target group)

Here we create a new target group laravelapp-tg with a target type IP. Our load balancer routes requests to the targets in this target group using the protocol and port that you specify.

Create an ELB: Step 5+6

The Register Targets step can be skipped; in step 6, please check all the settings again. If everything looks good, you can save the configuration of the ELB.

Conclusions

Now, when you push changes to your GitHub repository, the deploy workflow starts, builds the images, pushes them to the ECR Docker registry, and registers the task definition that is picked up by the service to deploy your application.

Costs

So, what's the price you have to pay for this setup? As I mentioned earlier, this is hard to say, as it depends on usage, dimensioning and the region. Here is a nice overview of how costs vary between the different AWS regions.

To make it short, the top 5:

  • N. Virginia
  • Ohio
  • Oregon
  • Mumbai
  • Stockholm

Use the AWS pricing calculator to get a proper estimate. If you're in the first phase of your project you are probably eligible for the AWS Free Usage Tier, which will surely give you a lot of room to play around and test. Here is a list of the usage for the services I created and played around with for this post.

As you can see, the most important factor for now is the storage for our Docker registry on ECR. So keeping our images small basically saves us money.

Additional ideas

Just a quick rundown of improvements and further ideas:

  • Provide HTTPS access
  • Refine the groups and policies in IAM to tighten access and strengthen security
  • Look at the AWS SDK for Laravel to make handling AWS from Laravel easier

... and probably many many more things :-)