Part 5: Making Jenkins Automation Scalable
Scaling to back up multiple databases

Jenkins CI/CD Series
Goal
The goal of this project is to transform the existing Jenkins automation into a scalable and reusable solution by externalizing scripts and ensuring persistence across container restarts.
In this exercise, we will move the backup script outside the container and mount it using Docker volumes, allowing it to persist even if the container is recreated. Additionally, we will extend the setup to support multiple databases and multiple S3 buckets, making the automation flexible for real-world use cases.
By the end of this project, you will have a Jenkins-driven automation setup that is resilient, reusable, and scalable across multiple environments.
Purpose
The purpose of this exercise is to demonstrate how to design Jenkins automation in a way that is production-ready and maintainable.
In earlier parts, the automation worked, but it was tightly coupled to the container lifecycle. In real-world DevOps environments, automation must be:
Persistent – survives container restarts or failures
Reusable – can be applied across multiple use cases
Scalable – supports multiple applications, databases, or environments
By externalizing the script and using Docker volume mapping, we ensure that critical automation logic is not lost when containers are recreated.
Extending the solution to handle multiple databases and S3 buckets introduces flexibility, which is essential for managing enterprise-scale infrastructure and data workflows.
This approach reflects real DevOps practices where pipelines are designed to be modular, reusable, and environment-agnostic.
Prerequisites
- A ready-to-use host and the directory structure to run the Dockerfiles and docker-compose.yml (refer to Part 1)
Step-by-step implementation
How do we make this script persist on the container?
Go to the Jenkins home folder, create aws-s3.sh, and copy in the contents of script.sh:
/home/jenkins/jenkins_home$ vi aws-s3.sh
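The exact contents come from script.sh in Part 4. The sketch below assumes the same mysqldump-to-S3 flow, but parameterizes the database and bucket so one script can serve multiple targets later in this part. The backup_and_upload name, the argument order, the root password, and the DRY_RUN flag are illustrative assumptions, not the original script verbatim:

```shell
#!/bin/bash
# Sketch of aws-s3.sh, based on the backup logic from Part 4.
# Host name (db_host), root password (1234), and argument order are
# assumptions -- adjust them to match your own script.sh.
backup_and_upload() {
  local db_host="${1:-db_host}"
  local db_password="${2:-1234}"
  local db_name="${3:-testdb}"
  local bucket="${4:-jenkins-mysql-backup-mylab}"
  local backup="db-backup-$(date +%H-%M-%S).sql"

  if [ "${DRY_RUN:-0}" = "1" ]; then
    # Print the commands instead of running them -- lets you check the
    # wiring without a live database or AWS credentials.
    echo "mysqldump -u root -h ${db_host} -p**** ${db_name} > /tmp/${backup}"
    echo "aws s3 cp /tmp/${backup} s3://${bucket}/${backup}"
    return 0
  fi

  # Dump the database and upload the dump to the target bucket.
  mysqldump -u root -h "${db_host}" -p"${db_password}" "${db_name}" > "/tmp/${backup}" \
    && aws s3 cp "/tmp/${backup}" "s3://${bucket}/${backup}"
}

# Dry-run demo against the second database/bucket used later in this part:
OUT="$(DRY_RUN=1 backup_and_upload db_host 1234 testdb2 jenkins-mysql-backup-mylab-2)"
echo "$OUT"
```

Taking the database and bucket as arguments is what makes the single mounted script reusable across every database we add later.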
- Go to docker-compose.yml and update the volumes block for remote_host, so the script is mounted back into the remote host even if the container crashes and is recreated
services:
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"
    volumes:
      - $PWD/jenkins_home:/var/jenkins_home
    networks:
      - net
  remote_host:
    container_name: remote-host
    image: remote-host
    build:
      context: centos7
    volumes:
      - "$PWD/aws-s3.sh:/tmp/script.sh"
    networks:
      - net
  db_host:
    container_name: db
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: 1234
    volumes:
      - $PWD/db_data:/var/lib/mysql
    networks:
      - net
networks:
  net:
- Test it by deleting the remote-host container and recreating it; the script should still be present at /tmp/script.sh
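The persistence check can be scripted as below. These commands assume the compose file above; the APPLY flag is an illustrative guard so the Docker commands only run when you opt in on a real Docker host:

```shell
#!/bin/bash
# Recreate remote_host and confirm the script came back via the volume mount.
# Set APPLY=1 to actually run the Docker commands.
SCRIPT_PATH="/tmp/script.sh"
if [ "${APPLY:-0}" = "1" ]; then
  docker compose rm -sf remote_host              # stop and delete the container
  docker compose up -d remote_host               # recreate it from the image
  docker exec remote-host ls -l "$SCRIPT_PATH"   # the script is present again
fi
```

Because the script now lives on the host and is bind-mounted in, deleting the container no longer deletes the automation logic.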
What if we have multiple databases and the backups have to be uploaded to different S3 buckets?
- Create another S3 bucket: jenkins-mysql-backup-mylab-2
- Create another database: testdb2
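As a sketch, the new bucket and database can be created from the command line. The bucket and database names are the ones above; the root password and db_host host name are taken from the compose file, and the APPLY guard is illustrative so nothing runs against a real account by accident:

```shell
#!/bin/bash
# Create the second bucket and database. Set APPLY=1 to run for real.
BUCKET="jenkins-mysql-backup-mylab-2"
NEW_DB="testdb2"
if [ "${APPLY:-0}" = "1" ]; then
  aws s3 mb "s3://${BUCKET}"   # make the second backup bucket
  mysql -u root -p1234 -h db_host -e "CREATE DATABASE IF NOT EXISTS ${NEW_DB};"
fi
```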
- Now, let's take a backup of testdb2
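In the Jenkins job this boils down to invoking the mounted script on remote-host with the second database and bucket. The argument order here is an assumption that mirrors the script sketch earlier in this part and may differ in your script.sh; set APPLY=1 to run it on a real Docker host:

```shell
#!/bin/bash
# Run the mounted backup script against the second database/bucket.
DB_NAME="testdb2"
BUCKET="jenkins-mysql-backup-mylab-2"
if [ "${APPLY:-0}" = "1" ]; then
  # /tmp/script.sh is where the compose volume mounts aws-s3.sh.
  docker exec remote-host /tmp/script.sh db_host 1234 "$DB_NAME" "$BUCKET"
fi
```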
- Check logs
- Verify upload
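Verification is a single listing of the new bucket (again gated behind the illustrative APPLY flag, since it needs real AWS credentials):

```shell
#!/bin/bash
# List the second bucket; the fresh db-backup-*.sql object should appear.
BUCKET="jenkins-mysql-backup-mylab-2"
if [ "${APPLY:-0}" = "1" ]; then
  aws s3 ls "s3://${BUCKET}/"
fi
```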
Done!!!
Conclusion
In this part of the series, we enhanced our Jenkins automation to make it scalable, reusable, and production-ready.
We achieved this by:
Externalizing the backup script to ensure persistence
Using Docker volumes to maintain scripts across container restarts
Testing resilience by recreating containers
Extending the solution to support multiple databases and S3 buckets
This transformation moves our setup from a simple working automation to a robust DevOps solution capable of handling real-world scenarios.
Designing scalable automation is a key skill for DevOps engineers, as it ensures systems can grow and adapt without requiring constant reconfiguration.
🔗 Continue the Series
⬅️ Previous Article: Part 4 Automating MySQL Backup to AWS S3
➡️ Next Article: Part 6 Running Ansible from Jenkins
⭐ If you found this article useful, follow https://ask-abhi.com for more DevOps tutorials.






