Introduction
In today's fast-paced development environment, delivering applications quickly and reliably is crucial. As developers, we strive for efficiency in both the deployment process and continuous updates. This is where containerization and Continuous Integration/Continuous Deployment (CI/CD) pipelines play a vital role. In this blog post, we'll walk through the steps to dockerize a Laravel application and set up a robust CI/CD pipeline using Jenkins.
By the end of this guide, you'll have a fully automated deployment process for your Laravel project that allows you to push changes seamlessly from your Git repository to production. We'll also cover essential topics such as containerization with Docker, pipeline architecture, and domain management with SSL using Cloudflare. Whether you're a beginner or an experienced developer, this tutorial will help you streamline your development workflow and reduce deployment complexities.
Let’s dive in!
"In this blog, we're not diving deep into the details of Docker, Jenkins, NGINX, or Laravel. Instead, we'll assume you already have a basic understanding of these technologies. Our focus is to show a practical, real-world example of how we leverage these tools to implement a solid deployment strategy and CI/CD pipeline for a Laravel application."
A.El Kaimouni
Quick Overview of the Laravel Application
The application we’ll be deploying is a platform that allows users to browse and watch TV series with a monthly subscription model. It provides a seamless streaming experience, enhanced by a personalized recommendation engine that suggests shows based on user preferences and viewing history. Here's a breakdown of its main features:
- Administrator Panel: The application includes an admin dashboard where administrators can manage the content library, including adding new series, uploading episodes, and organizing genres. Admins also have control over user subscriptions, financial reports, and content moderation.
- User Space: Subscribers can create and manage their profiles in a personalized space. Users can update their account details, view their subscription status, and access a recommendation engine that suggests new series based on their previous viewing habits.
- Monthly Subscription: The platform operates on a subscription model, where users pay a monthly fee to access the library of TV series. The subscription system is integrated into the application, ensuring a smooth payment process and managing access permissions based on the user's subscription status.
- Streaming & Recommendations: Users can browse various TV series, with recommendations tailored to their tastes, thanks to the integrated recommendation engine. The app makes content discovery easier, offering personalized suggestions based on what they’ve previously watched or interacted with.
This combination of an administrator panel for content management, user subscription handling, and personalized content recommendations makes the platform a comprehensive solution for delivering entertainment to subscribers. In this guide, we’ll explore how to take this Laravel application, dockerize it, and set up a CI/CD pipeline for automatic deployment.
Analyzing the Application's Dockerization Needs
To effectively deploy and scale our Laravel application using Docker, we need to assess its core requirements:
- PHP 8 and Laravel 9: The application is built using Laravel 9, which requires PHP 8 for compatibility. Docker will ensure that the correct PHP environment is consistently deployed across different systems.
- MySQL Database: The application uses MySQL to manage user data, subscriptions, and series content. A MySQL container will handle this requirement, allowing isolated and reliable database management.
- Nginx Web Server: For serving the application, an Nginx web server is necessary to handle HTTP requests and serve the Laravel application, providing fast and secure routing of traffic.
- Storage for Uploads: The application needs persistent storage for uploaded images and videos, ensuring that files are retained across container updates and restarts.
- SMTP Server for Emails: Sending transactional emails such as password resets or subscription updates requires an SMTP server, which will be configured for the application to send notifications.
These are the fundamental components that need to be dockerized for the application, ensuring smooth, consistent deployment and operation. Next, we'll propose the architecture to handle these requirements.
Proposing an Architecture for Dockerization
To efficiently dockerize the Laravel application, we will create an architecture that leverages Docker containers to ensure modularity and scalability. The proposed setup consists of three core containers and the use of Docker volumes and networking:
Containers
- PHP Laravel Application Container: This container will run the Laravel application using PHP 8. It will handle the business logic, routing, and application functionality.
- MySQL Database Container: A separate container for MySQL will manage the database operations, storing all the application’s data related to users, subscriptions, and content.
- Nginx Web Server Container: Nginx will serve as the web server, responsible for handling incoming HTTP requests and directing them to the Laravel application. It will also serve static files such as images and videos.
Virtual Network
All three containers will be connected via a Docker virtual network. This ensures that the PHP, Nginx, and MySQL containers can communicate securely without exposing internal services directly to the public.
Volumes
We will create two Docker volumes to ensure data persistence and easy access:
- Database Volume: A dedicated volume will be used to store MySQL data files, ensuring that database information is preserved even if the container is restarted or rebuilt.
- Media Volume: This volume will store uploaded media files (images and videos). It will be shared between the Nginx container (to serve static media) and the Laravel application container (to handle media uploads and storage).
This architecture provides a scalable and maintainable solution, where each container has a single responsibility, and the application can easily grow or scale as needed. In the next section, we will begin implementing this architecture.
4. Implementing the Dockerization
In this section, we’ll walk through the Dockerization of the Laravel application based on the architecture we proposed. Here's a breakdown of the docker-compose.yml file, which defines the setup for running your application in three containers: one for the Nginx web server, one for the MySQL database, and one for the Laravel backend (PHP).
1. Networks
We’ve created a network called app-network to connect the containers. This ensures that each service (webserver, database, and backend) can communicate securely and privately within the network.
networks:
  app-network:
2. Volumes
Two volumes are created:
- tv-db: Stores MySQL database files to ensure data persistence.
- tv-storage: Shared between the webserver and backend to store uploaded media files like images and videos.
volumes:
  tv-db:
  tv-storage:
3. Nginx Webserver Service
This service uses the official Nginx Alpine image. It exposes port 80, which will be mapped to a custom environment variable (${NGINX_PORT}) for flexibility. The container will serve static files from the tv-storage volume and use the mounted Nginx configuration from your local setup.
webserver:
  image: nginx:1.21.6-alpine
  container_name: tv-webserver
  restart: unless-stopped
  tty: true
  ports:
    - "${NGINX_PORT}:80"
  env_file:
    - prod.env
  volumes:
    - tv-storage:/var/www/public/avatars
    - tv-storage:/var/www/public/ethumbnails
    - tv-storage:/var/www/public/thumbnails
    - tv-storage:/var/www/storage/app/videos
    - tv-storage:/var/www/public/posters
    - ./public:/var/www/public
    - .docker/nginx:/etc/nginx/conf.d
  networks:
    app-network:
  depends_on:
    - backend
- Volumes: Static files like avatars, posters, and videos are stored in the shared tv-storage volume.
- Port Mapping: The Nginx container serves HTTP on port 80, mapped to ${NGINX_PORT}.
- Depends On: Ensures that the backend (PHP container) starts before Nginx.
NGINX Configuration for Laravel in Docker
To properly serve your Laravel application inside the Docker environment, we need to configure NGINX. Below is the NGINX configuration that will handle PHP requests and serve static files:
server {
    client_max_body_size 20M; # Limit for file uploads, set to 20MB
    listen 80; # NGINX will listen on port 80 for incoming HTTP requests
    index index.php index.html; # Default files to serve when accessing the root
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/public; # Point to Laravel's public directory, which serves as the document root

    location ~ \.php$ {
        try_files $uri =404; # Return 404 if the requested PHP file doesn't exist
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass backend:9000; # Forward PHP requests to PHP-FPM running in the backend container
        fastcgi_index index.php;
        include fastcgi_params; # Include default FastCGI parameters
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; # Map the requested script to the correct path
        fastcgi_param PATH_INFO $fastcgi_path_info; # Capture and forward any path info
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string; # Handle routing in Laravel, falling back to index.php
        gzip_static on; # Enable static Gzip compression for better performance
    }
}
Key Aspects of the Configuration:
- PHP Processing: The block starting with location ~ \.php$ is critical for processing PHP files. It forwards all PHP requests to the PHP-FPM service running in the backend container. The fastcgi_pass backend:9000; directive tells NGINX to connect to the backend container on port 9000, where PHP-FPM is listening.
- Root Directory: The root /var/www/public; directive ensures NGINX serves files from Laravel's public folder, which is where the app’s front-end files (e.g., CSS, JS, images) are stored.
By integrating this NGINX configuration, your Laravel app will efficiently serve dynamic PHP requests and static assets within Docker. The backend container will handle PHP processing, while the webserver container will serve as the front-facing entry point.
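Once the stack is up and running (we'll deploy it in a later section), you can sanity-check this configuration from the host. A minimal sketch, assuming the container name tv-webserver from our docker-compose.yml:

# Validate the NGINX configuration inside the webserver container
docker exec tv-webserver nginx -t
# Inspect recent errors if a request misbehaves
docker exec tv-webserver tail -n 50 /var/log/nginx/error.log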
4. MySQL Database Service
This container runs MySQL 8. It pulls the database name and root password from the DB_DATABASE and DB_PASSWORD variables in the prod.env file. It also publishes port 3306 so the database can be reached externally if needed.
db:
  container_name: tv-db
  image: mysql:8.0
  restart: always
  environment:
    MYSQL_DATABASE: '${DB_DATABASE}'
    MYSQL_ROOT_PASSWORD: '${DB_PASSWORD}'
    MYSQL_ROOT_HOST: '%'
  ports:
    - "3306:3306"
  networks:
    app-network:
  volumes:
    - tv-db:/var/lib/mysql
- Ports: Publishes MySQL's port 3306 on the host for external access.
- Volumes: Data is stored in the tv-db volume to ensure persistence.
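Once the containers are running, a quick way to confirm the database is reachable is to open a MySQL shell inside the container; a small sketch, assuming the container name tv-db and the root password from prod.env:

# You'll be prompted for the root password defined in prod.env
docker exec -it tv-db mysql -uroot -p -e "SHOW DATABASES;"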
5. PHP Laravel Backend Service
This container is built from a custom Dockerfile located in .docker/dockerfile. It runs the Laravel application, handles API requests, and processes uploaded media files. It shares the tv-storage volume with the Nginx container, ensuring access to media files.
backend:
  restart: always
  build:
    context: ./
    dockerfile: .docker/dockerfile
  container_name: tv-backend
  env_file:
    - prod.env
  volumes:
    - tv-storage:/var/www/public/avatars
    - tv-storage:/var/www/public/ethumbnails
    - tv-storage:/var/www/public/thumbnails
    - tv-storage:/var/www/public/posters
    - tv-storage:/var/www/storage/app/videos
    - .docker/php/php.ini:/usr/local/etc/php/conf.d/local.ini
  networks:
    app-network:
  depends_on:
    - db
- Volumes: Shared media storage is available in both the backend and webserver containers through the tv-storage volume.
- Depends On: Ensures that the database starts before the backend service.
Dockerfile for Laravel Application
Below is the Dockerfile used to build the Laravel application container. This file defines how the PHP application will be set up and run inside the Docker environment.
FROM php:8.1-fpm
# Install system dependencies
RUN apt-get update && apt-get install -y \
libpng-dev \
libonig-dev \
libxml2-dev \
zip \
unzip \
nodejs \
npm
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install PHP extensions
RUN docker-php-ext-install pdo pdo_mysql mbstring exif pcntl bcmath gd
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer
# Setup working directory
WORKDIR /var/www/
# Copy Project Files
COPY . .
# Grant Permissions
RUN chown -R www-data:www-data /var/www/
RUN chmod -R 775 /var/www/
# Switch User
USER www-data
# Install project dependencies
RUN composer install --no-dev --optimize-autoloader
RUN npm install
# build resources using nodejs
RUN npm run prod
EXPOSE 9000
CMD ["php-fpm"]
- Base Image: PHP 8.1 with FPM for handling PHP requests with NGINX.
- System Dependencies: Required packages for Laravel, asset building, and database interaction are installed.
- PHP Extensions: Installs extensions like pdo_mysql for MySQL support and gd for image processing.
- Composer & Node.js: Composer installs PHP dependencies, while npm installs and compiles front-end assets.
- Permissions: Ensures the correct permissions for the application to run under the www-data user.
- Final Setup: Exposes port 9000 for communication with NGINX and starts PHP-FPM.
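If you want to verify the Dockerfile on its own before bringing up the whole stack, you can build the image directly with Docker; a minimal sketch (the tag name is only an example):

# Build the Laravel backend image from the custom Dockerfile in the project root
docker build -f .docker/dockerfile -t tv-backend:local .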
Deploying the Application Using Docker and Git
In this section, we will provision a VPS, install Docker from the official Docker website, and deploy the Laravel application. The VPS will be an AWS Lightsail instance with enough RAM to run the application smoothly and leave room for the Jenkins installation we'll add later (Jenkins needs a healthy share of memory on its own).
Step 1: Provisioning a VPS instance
In this blog, we'll use AWS Lightsail to provision the VPS.
1. Launch a Lightsail VPS instance from the AWS console with the following specifications:
- Instance type: 4 GB Memory, 2 vCPUs, 80 GB SSD Storage
- OS: Ubuntu 24.04 LTS.
- Storage: Add at least 30GB to accommodate system and media files.
2. Connect to the instance using SSH.
Step 2: Install Docker and Git
We will follow Docker's official installation guide for Ubuntu to ensure you are using the latest version of Docker.
1. Install Docker:
Follow the steps in the official Docker installation guide for Ubuntu (a quick-start sketch is shown after this list).
2. Install Git:
Once Docker is installed, we’ll also install Git to clone your project repository:
sudo apt install git -y
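For reference, here is a quick-start sketch based on Docker's convenience script; prefer the step-by-step instructions in the official guide for production hosts:

# Download and run Docker's convenience install script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Verify the installation
sudo docker --version
sudo docker compose version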
Step 3: Clone the Application Repository
Now that Docker and Git are installed, clone your Laravel application repository:
1. Clone the repository:
git clone https://github.com/AElKaimouni/pfa-laravel-app
cd pfa-laravel-app
2. Set the correct permissions for the project files:
sudo chown -R ubuntu:ubuntu .
Step 4: Create the Environment File
Before running the containers, create the prod.env file, which will hold the necessary environment variables for your Laravel app.
1. Create the prod.env file:
sudo nano prod.env
2. Fill the file with the following variables:
APP_NAME=Laravel
APP_ENV=local
APP_KEY={YOUR_APP_KEY}
APP_DEBUG=true
APP_URL=http://{YOUR_VPS_PUBLIC_IP}
APP_HOST=http://{YOUR_VPS_PUBLIC_IP}
NGINX_PORT=80
LOG_CHANNEL=stack
LOG_DEPRECATIONS_CHANNEL=null
LOG_LEVEL=debug
DB_CONNECTION=mysql
DB_HOST=tv-db
DB_PORT=3306
DB_DATABASE=laravel
DB_USERNAME=root
DB_PASSWORD={DB_PASSWORD}
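If you don't have a value for APP_KEY yet, one way to generate a valid key on the host without installing PHP is shown below; Laravel expects the base64: prefix followed by 32 random bytes encoded in base64:

# Generate a random application key in the format Laravel expects
echo "base64:$(openssl rand -base64 32)"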
Step 5: Build and Run the Containers
With everything set up, use Docker Compose to build and start the application containers:
1. Run Docker Compose to build and start the containers:
sudo docker compose --env-file prod.env up -d
This command will build and launch the three containers: the Nginx web server, the Laravel backend, and the MySQL database.
2. Migrate the database to create the tables required to run the application. You can use the following command to run the migrations inside the backend container:
sudo docker exec -it tv-backend php artisan migrate
3. Verify the containers are running by checking their status:
sudo docker ps
If all containers are running successfully, you should be able to access your Laravel application by visiting the public IP address of your VPS instance.
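A quick way to confirm this from your local machine is a simple HTTP check (replace the placeholder with your VPS public IP):

# A 200 or 302 response means NGINX is serving the Laravel application
curl -I http://{YOUR_VPS_PUBLIC_IP}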
7. Analyzing CI/CD Requirements
In this section, we will outline the essential CI/CD requirements to automate the deployment process for your Laravel application. A robust CI/CD pipeline will facilitate efficient development, testing, and deployment, ensuring that updates can be made quickly and reliably.
1. Continuous Integration (CI) Needs
- Version Control: Utilize a version control system (e.g., Git) to manage code changes. Each commit to the repository should trigger the CI process.
- Automated Testing: Integrate automated testing to ensure that new changes do not introduce bugs.
- Build Automation: Automatically build the application upon successful test completion, guaranteeing that the code is always in a deployable state.
2. Continuous Deployment (CD) Needs
- Deployment Automation: Use Docker Compose to automate the deployment of containers, allowing for quick updates with minimal downtime.
- Environment Configuration: Manage different environment configurations (e.g., development, staging, production) through environment files or configuration management tools.
- Rollback Mechanism: Implement a strategy for rolling back to the previous stable version in case of deployment failures, such as keeping previous Docker images or using version tags.
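To illustrate the rollback idea, here is a hedged sketch based on image tags. The image name prod-backend is an assumption derived from the Compose project name we use later; adapt it to whatever docker images reports on your server:

# Keep a copy of the currently deployed backend image before building a new one
docker tag prod-backend:latest prod-backend:previous
# If the new build misbehaves, restore the saved tag and restart without rebuilding
docker tag prod-backend:previous prod-backend:latest
docker compose -p prod --env-file prod.env up -d --no-build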
8. Implementing CI/CD Pipeline with Jenkins
In this section, we will walk through the process of setting up a CI/CD pipeline using Jenkins, which will automate the build and deployment of our Laravel application whenever code is pushed to the prod branch on GitHub.
Install Jenkins
The first step is to install Jenkins on the server. You can follow the official Jenkins installation guide for Linux, available here: Jenkins Installation Documentation. This guide covers everything from installing the required packages to setting up the Jenkins service.
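For convenience, here is roughly what the installation looks like on Debian/Ubuntu. The repository URL, key file, and Java version below reflect the Jenkins documentation at the time of writing, so double-check them against the official guide before running:

# Add the Jenkins apt repository key and source (LTS release line)
sudo wget -O /usr/share/keyrings/jenkins-keyring.asc https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
# Install Java and Jenkins, then make sure the service is running
sudo apt-get update
sudo apt-get install -y fontconfig openjdk-17-jre jenkins
sudo systemctl enable --now jenkins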
Access Jenkins on Port 8080
Once Jenkins is installed and running, access the Jenkins UI by entering the following URL in your browser:
http://<your-server-ip>:8080
You’ll be prompted to set up Jenkins and create an admin user. Follow the setup instructions provided in the Jenkins UI.
Connect Jenkins to GitHub
To automate deployments, we’ll configure Jenkins to listen for changes on a prod branch in your GitHub repository.
1. Create the prod branch in your GitHub repository.
2. Install the GitHub Integration Plugin in Jenkins:
- Navigate to Manage Jenkins > Manage Plugins > Available tab.
- Search for "GitHub" and install the relevant plugin.
3. Set up a new pipeline job:
- Go to New Item in Jenkins and choose "Pipeline".
- Connect your GitHub repository by providing the repository URL and credentials if needed.
4. Configure Jenkins to trigger builds on push to prod:
- Under Build Triggers, select "GitHub hook trigger for GITScm polling".
- In GitHub, go to your repository's settings, under Webhooks, and add your Jenkins webhook URL to trigger Jenkins on push events (see the quick check below).
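With the GitHub plugin, the payload URL typically points at Jenkins' /github-webhook/ endpoint; verify the exact path against your plugin's documentation. A quick reachability check from your machine:

# The endpoint should respond (even with an error status) if Jenkins is reachable
curl -I http://<your-server-ip>:8080/github-webhook/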
Set Up the Jenkinsfile
The pipeline will be defined in a Jenkinsfile located in the root of your repository. This file outlines the steps Jenkins will follow to build and deploy your Laravel application using Docker.
Here's the Jenkinsfile:
pipeline {
    agent any

    stages {
        stage('Build Docker Images') {
            steps {
                script {
                    // Build Docker images
                    sh 'docker compose -p prod --env-file prod.env up -d --force-recreate --build'
                }
            }
        }
    }

    post {
        success {
            echo 'Build, tests, and deployment were successful.'
        }
        failure {
            echo 'Build or tests failed.'
        }
    }
}
This pipeline performs the following tasks:
- Build Docker Images: Jenkins uses docker compose to build and start the Docker containers for the Laravel application.
- Post-Build Actions: Provides feedback on whether the build and deployment were successful or if they failed. (Later, when we add testing, this post section will also stop any running testing containers after the build.)
Add the prod.env File to Jenkins
Before running the pipeline, ensure that Jenkins has access to the necessary environment variables by copying the prod.env file into Jenkins' workspace for the project. You can do this by adding the file directly to the repository or copying it manually to Jenkins if sensitive data is involved.
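One simple way to do the manual copy, assuming the default Jenkins home directory and a job named laravel-prod (adjust both to match your setup):

# Copy the environment file into the job's workspace and hand ownership to the jenkins user
sudo mkdir -p /var/lib/jenkins/workspace/laravel-prod
sudo cp ~/pfa-laravel-app/prod.env /var/lib/jenkins/workspace/laravel-prod/prod.env
sudo chown jenkins:jenkins /var/lib/jenkins/workspace/laravel-prod/prod.env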
Give Jenkins Permission to Run Docker
To achieve that, you can run the following commands, then restart the Jenkins service:
sudo usermod -aG docker jenkins
sudo systemctl restart jenkins
Test the Pipeline
To test your Jenkins CI/CD pipeline, push any changes to the prod branch in GitHub. Jenkins will automatically trigger the build process and deploy the application using Docker.
- Check Jenkins Console Output to monitor the pipeline execution and ensure that the containers are built and deployed correctly.
- After a successful build, access your Laravel application through the instance's public IP address to verify that everything works as expected.
Run Migrations
Once you're connected to the server, you can run the Laravel migrations inside the tv-backend container by executing the following Docker command:
docker exec tv-backend php artisan migrate
This command will apply any pending migrations, ensuring that your MySQL database is properly configured to match the Laravel application’s current schema.
Now, your application should be ready to use with an up-to-date database structure. After the migrations are successfully run, proceed to the next step to test the application.
Step 9: Integrate Testing into the CI/CD Pipeline
We'll now integrate testing into the CI/CD process to ensure code is thoroughly tested before deployment. Here's how we will do it, broken down step-by-step:
1. Create docker-compose-test.yml
The first step is to create a separate Docker Compose file, docker-compose-test.yml, which sets up an isolated environment for testing. This file will define a MySQL database and a Laravel backend container, but we won't include any persistent storage or web server, as these aren't necessary for testing purposes.
Here’s the content of the docker-compose-test.yml file:
version: '3.8'

networks:
  test-app-network:

services:
  db:
    container_name: test-tv-db
    image: mysql:8.0
    restart: always
    environment:
      MYSQL_DATABASE: '${DB_DATABASE}'
      MYSQL_ROOT_PASSWORD: '${DB_PASSWORD}'
      MYSQL_ROOT_HOST: '%'
    networks:
      test-app-network:

  backend:
    restart: always
    build:
      context: ./
      dockerfile: .docker/dockerfile.test
    container_name: test-tv-backend
    environment:
      DB_HOST: 'test-tv-db'
    env_file:
      - prod.env
    networks:
      test-app-network:
    depends_on:
      - db
Explanation:
- This file sets up two services: db (a MySQL database for testing) and backend (the Laravel backend).
- The backend container will depend on the db service.
- We connect the containers using a dedicated network, test-app-network, without needing persistent volumes or web server services, since we only care about testing the backend logic.
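Once this file and the Dockerfile.test from the next step are in place, you can exercise the test environment manually on the server with the same commands the pipeline will run:

# Build and start the isolated test stack
docker compose -p test -f docker-compose-test.yml --env-file prod.env up -d --build
# Run migrations and the test suite inside the test backend container
docker exec test-tv-backend php artisan migrate --force
docker exec test-tv-backend php artisan test
# Tear the test stack down again
docker compose -p test -f docker-compose-test.yml down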
2. Create Dockerfile.test
Next, we create a specialized Dockerfile, Dockerfile.test, to ensure the testing environment has everything it needs, including development dependencies and testing libraries.
The new Dockerfile will be the same as the previous one, except for one line: the composer install command. In this case, we remove the --no-dev flag so that all libraries, including the ones needed for testing, are installed.
# RUN composer install --no-dev --optimize-autoloader
RUN composer install --optimize-autoloader
Explanation:
- This Dockerfile installs additional development dependencies and testing libraries.
- The composer install command installs all necessary dependencies, including dev packages needed for running tests.
3. Update the Jenkinsfile
Now, we will modify the Jenkinsfile to include the new testing stage. This stage will build the Docker environment for testing, run migrations, and execute tests. Afterward, the testing environment will be torn down.
Here’s how we’ll modify the Jenkinsfile:
stage('Build & Test') {
    steps {
        script {
            // Build and start the isolated testing environment
            sh 'docker compose -p test -f docker-compose-test.yml --env-file prod.env up -d --build'

            def maxRetries = 10
            def retryCount = 0
            def commandSucceeded = false
            while (retryCount < maxRetries && !commandSucceeded) {
                try {
                    // Run migrations once the database is ready
                    sh 'docker exec test-tv-backend php artisan migrate --force'
                    commandSucceeded = true
                } catch (Exception e) {
                    retryCount++
                    echo "Command failed. Retry count: ${retryCount}"
                    if (retryCount >= maxRetries) {
                        error "Command failed after ${maxRetries} attempts"
                    }
                    sleep 10 // Optional: wait before retrying
                }
            }

            // Run tests in the test-tv-backend container
            sh 'docker exec test-tv-backend php artisan test'
        }
    }
}
Explanation:
- Build & Test Stage: This stage builds the containers defined in docker-compose-test.yml, creates the testing environment, and runs migrations and tests.
- The script retries the migration up to 10 times in case the database service is not ready when the migration command is run.
- Once the migrations succeed, the tests are executed using php artisan test.
- Stop Test Containers: After the tests are completed, the test environment is torn down using docker compose down.
always {
    // stop testing containers
    sh 'docker compose -p test -f docker-compose-test.yml down'
}
This method ensures that all code changes pushed to the prod branch are tested in a clean environment before they are deployed to production. Automated testing catches issues early, improving code quality and reliability.