<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[ask-abhi]]></title><description><![CDATA[Enterprise-focused DevOps tutorials covering CI/CD pipelines, Terraform, automation, cloud security, and production best practices.]]></description><link>https://ask-abhi.com</link><image><url>https://cdn.hashnode.com/uploads/logos/6997142f587b14d25b5231b7/5656ba78-7378-4356-828a-24db09a27d75.png</url><title>ask-abhi</title><link>https://ask-abhi.com</link></image><generator>RSS for Node</generator><lastBuildDate>Wed, 15 Apr 2026 10:26:23 GMT</lastBuildDate><atom:link href="https://ask-abhi.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Part 10: Environment Variables in Pipelines]]></title><description><![CDATA[Jenkins CI/CD Series



Part
Article



1
Install Jenkins with Docker


2
Creating Your First Jenkins Pipeline


3
Jenkins SSH Remote Execution


4
Automating MySQL Backup to AWS S3


5
Making Jenkins]]></description><link>https://ask-abhi.com/part-10-environment-variables-in-pipelines</link><guid isPermaLink="true">https://ask-abhi.com/part-10-environment-variables-in-pipelines</guid><category><![CDATA[Devops]]></category><category><![CDATA[automation]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Jenkins]]></category><category><![CDATA[Docker]]></category><category><![CDATA[ansible]]></category><dc:creator><![CDATA[Abhinandan Chougule]]></dc:creator><pubDate>Wed, 18 Mar 2026 08:08:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/d802858a-3110-4897-8e8c-9f2602eb28cd.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<h3>Jenkins CI/CD Series</h3>
<table>
<thead>
<tr>
<th>Part</th>
<th>Article</th>
</tr>
</thead>
<tbody><tr>
<td>1</td>
<td><a href="https://ask-abhi.com/part-1-install-jenkins-with-docker">Install Jenkins with Docker</a></td>
</tr>
<tr>
<td>2</td>
<td><a href="https://ask-abhi.com/part-2-creating-your-first-jenkins-pipeline">Creating Your First Jenkins Pipeline</a></td>
</tr>
<tr>
<td>3</td>
<td><a href="https://ask-abhi.com/part-3-jenkins-ssh-remote-execution">Jenkins SSH Remote Execution</a></td>
</tr>
<tr>
<td>4</td>
<td><a href="https://ask-abhi.com/part-4-automating-mysql-backup-to-aws-s3">Automating MySQL Backup to AWS S3</a></td>
</tr>
<tr>
<td>5</td>
<td><a href="https://ask-abhi.com/part-5-making-jenkins-automation-scalable">Making Jenkins Automation Scalable</a></td>
</tr>
<tr>
<td>6</td>
<td><a href="https://ask-abhi.com/part-6-running-ansible-from-jenkins">Running Ansible from Jenkins</a></td>
</tr>
<tr>
<td>7</td>
<td><a href="https://ask-abhi.com/part-7-control-jenkins-pipeline-through-github">Jenkins Pipeline from GitHub</a></td>
</tr>
<tr>
<td>8</td>
<td><a href="https://ask-abhi.com/part-8-multibranch-pipelines">Multibranch Pipelines</a></td>
</tr>
<tr>
<td>9</td>
<td><a href="https://ask-abhi.com/part-9-parameterized-pipelines">Parameterized Pipelines</a></td>
</tr>
<tr>
<td>10</td>
<td><a href="https://ask-abhi.com/part-10-environment-variables-in-pipelines">Environment Variables in Pipelines</a></td>
</tr>
</tbody></table>
<hr />
<h3>Goal</h3>
<p>Understand how to use <strong>environment variables in Jenkins pipelines</strong></p>
<ul>
<li><p>Define variables at:</p>
<ul>
<li><p>Global level</p>
</li>
<li><p>Pipeline level</p>
</li>
<li><p>Stage level</p>
</li>
</ul>
</li>
</ul>
<p>Use environment variables to <strong>control pipeline behavior dynamically</strong></p>
<p>Learn how to manage <strong>sensitive data securely</strong></p>
<hr />
<h3>Purpose</h3>
<p>The purpose of this part is to make your pipelines <strong>configurable, reusable, and secure</strong>.</p>
<p>So far, you have built:</p>
<ul>
<li><p>Static pipelines</p>
</li>
<li><p>Parameterized inputs</p>
</li>
</ul>
<p>But real-world pipelines require:</p>
<ul>
<li><p>Managing configuration like <strong>URLs, ports, and credentials</strong></p>
</li>
<li><p>Handling <strong>different environments (Dev, UAT, Prod)</strong></p>
</li>
<li><p>Avoiding hardcoding sensitive data in scripts</p>
</li>
</ul>
<p>Environment variables help you:</p>
<ul>
<li><p>Centralize configuration</p>
</li>
<li><p>Reuse pipelines across environments</p>
</li>
<li><p>Improve maintainability and readability</p>
</li>
</ul>
<p>This is a <strong>core DevOps practice</strong> used in:</p>
<ul>
<li><p>CI/CD pipelines</p>
</li>
<li><p>Kubernetes deployments</p>
</li>
<li><p>Cloud-native applications</p>
</li>
</ul>
<hr />
<h3>Prerequisite</h3>
<p>Before starting this part, ensure:</p>
<ul>
<li><strong>A ready-to-use</strong> <code>host</code> <strong>with the</strong> <code>directory</code> <strong>structure to run</strong> <code>Dockerfiles</code> <strong>and</strong> <code>docker-compose.yml</code> <strong>(refer to</strong> <a href="https://ask-abhi.com/part-1-install-jenkins-with-docker"><strong>Part 1</strong></a><strong>)</strong></li>
</ul>
<hr />
<h3>Step-by-step implementation</h3>
<ul>
<li><strong>Prepare the code</strong></li>
</ul>
<pre><code class="language-groovy">pipeline {
    agent any

    // The environment directive takes NAME = value pairs; `def` is not
    // allowed here, and every value is stored as a string.
    environment {
        myString = "Hello World"
        myNumber = "10"
        myBool = "true"
    }

    stages {
        stage("Demo") {
            steps {
                echo "myString: ${myString}"
                echo "myNumber: ${myNumber}"
                echo "myBool: ${myBool}"
            }
        }
    }
}
</code></pre>
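<p>The goal above also mentions stage-level variables and secure secret handling, which the example does not cover. As a sketch (the credential ID <code>my-api-token</code> is a placeholder — create your own under <em>Manage Jenkins → Credentials</em> first), a stage-level <code>environment</code> block could look like this:</p>
<pre><code class="language-groovy">pipeline {
    agent any

    environment {
        // Pipeline-level: visible in every stage
        APP_NAME = "demo-app"
    }

    stages {
        stage("Deploy") {
            environment {
                // Stage-level: visible only inside this stage
                TARGET_ENV = "dev"
                // credentials() injects the secret and masks it as **** in the console log
                API_TOKEN = credentials("my-api-token")
            }
            steps {
                echo "Deploying ${APP_NAME} to ${TARGET_ENV}"
            }
        }
    }
}
</code></pre>
<p>Stage-level values shadow pipeline-level ones of the same name, which is a simple way to vary configuration per stage without touching the rest of the pipeline.</p>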
<ul>
<li><strong>Create pipeline</strong> <code>variable-jenkins</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/cb521024-5c27-44b5-b243-c5c36cc4833a.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Paste the code into the</strong> <code>pipeline script</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/4659097f-a1f2-4f34-840e-d1e2db00bbb6.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><p><strong>Build Now</strong></p>
</li>
<li><p><strong>Now we can see the pipeline using the variables given in the environment block</strong></p>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/2bbc1e60-fdd7-46fb-ba42-7e7596f28b6c.png" alt="" style="display:block;margin:0 auto" />

<h3>Done!!!</h3>
<hr />
<h3>Conclusion</h3>
<p>In this part, you upgraded your pipeline to a <strong>fully configurable and production-ready CI/CD system</strong></p>
<p>You learned how to:</p>
<ul>
<li><p>Use environment variables effectively in pipelines</p>
</li>
<li><p>Avoid hardcoding values</p>
</li>
<li><p>Manage configurations across environments</p>
</li>
<li><p>Improve pipeline reusability and security</p>
</li>
</ul>
<p>This is how modern DevOps pipelines are designed:</p>
<ul>
<li><p>Flexible across environments</p>
</li>
<li><p>Secure with proper secret handling</p>
</li>
<li><p>Easy to maintain and scale</p>
</li>
</ul>
<hr />
<h2>🔚 Final Note (Series Completion)</h2>
<p>🎉 Congratulations! We have completed the <strong>Jenkins CI/CD Mastery Series</strong>.</p>
<p>We have built:</p>
<ul>
<li><p>End-to-end CI/CD pipelines</p>
</li>
<li><p>Scalable automation workflows</p>
</li>
<li><p>Production-ready DevOps practices</p>
</li>
</ul>
<hr />
<h3>🔗 Continue the Series</h3>
<p>⬅️ <strong>Previous Article:</strong> <a href="https://ask-abhi.com/part-9-parameterized-pipelines">Part 9 <strong>Parameterized Pipelines</strong></a><br />➡️ <strong>Next Article:</strong> <a href="https://ask-abhi.com">Home</a></p>
<hr />
<p>⭐ If you found this article useful, follow <strong><a href="https://ask-abhi.com">https://ask-abhi.com</a></strong> for more DevOps tutorials.</p>
<hr />
]]></content:encoded></item><item><title><![CDATA[Part 9: Parameterized Pipelines]]></title><description><![CDATA[Jenkins CI/CD Series



Part
Article



1
Install Jenkins with Docker


2
Creating Your First Jenkins Pipeline


3
Jenkins SSH Remote Execution


4
Automating MySQL Backup to AWS S3


5
Making Jenkins]]></description><link>https://ask-abhi.com/part-9-parameterized-pipelines</link><guid isPermaLink="true">https://ask-abhi.com/part-9-parameterized-pipelines</guid><category><![CDATA[Devops]]></category><category><![CDATA[automation]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Jenkins]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[Pipeline]]></category><category><![CDATA[Docker]]></category><dc:creator><![CDATA[Abhinandan Chougule]]></dc:creator><pubDate>Wed, 18 Mar 2026 07:30:17 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/a8a83c8c-8e75-41e5-b6dd-8fbd2cba307d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<h3>Jenkins CI/CD Series</h3>
<table>
<thead>
<tr>
<th>Part</th>
<th>Article</th>
</tr>
</thead>
<tbody><tr>
<td>1</td>
<td><a href="https://ask-abhi.com/part-1-install-jenkins-with-docker">Install Jenkins with Docker</a></td>
</tr>
<tr>
<td>2</td>
<td><a href="https://ask-abhi.com/part-2-creating-your-first-jenkins-pipeline">Creating Your First Jenkins Pipeline</a></td>
</tr>
<tr>
<td>3</td>
<td><a href="https://ask-abhi.com/part-3-jenkins-ssh-remote-execution">Jenkins SSH Remote Execution</a></td>
</tr>
<tr>
<td>4</td>
<td><a href="https://ask-abhi.com/part-4-automating-mysql-backup-to-aws-s3">Automating MySQL Backup to AWS S3</a></td>
</tr>
<tr>
<td>5</td>
<td><a href="https://ask-abhi.com/part-5-making-jenkins-automation-scalable">Making Jenkins Automation Scalable</a></td>
</tr>
<tr>
<td>6</td>
<td><a href="https://ask-abhi.com/part-6-running-ansible-from-jenkins">Running Ansible from Jenkins</a></td>
</tr>
<tr>
<td>7</td>
<td><a href="https://ask-abhi.com/part-7-control-jenkins-pipeline-through-github">Jenkins Pipeline from GitHub</a></td>
</tr>
<tr>
<td>8</td>
<td><a href="https://ask-abhi.com/part-8-multibranch-pipelines">Multibranch Pipelines</a></td>
</tr>
<tr>
<td>9</td>
<td><a href="https://ask-abhi.com/part-9-parameterized-pipelines">Parameterized Pipelines</a></td>
</tr>
<tr>
<td>10</td>
<td><a href="https://ask-abhi.com/part-10-environment-variables-in-pipelines">Environment Variables in Pipelines</a></td>
</tr>
</tbody></table>
<hr />
<h3>Goal</h3>
<p>Create a Jenkins pipeline named <code>parameter-pipeline</code> that accepts user inputs through parameters to control a deployment configuration. The pipeline should allow the user to specify a custom deployment name, choose the target AWS Availability Zone, and confirm the deployment before execution.</p>
<hr />
<h3>Purpose</h3>
<p>The purpose of this part is to make your Jenkins pipelines <strong>interactive, flexible, and production-ready</strong>.</p>
<p><em><strong>So far, your pipelines were:</strong></em></p>
<ul>
<li><p>Static</p>
</li>
<li><p>Hardcoded</p>
</li>
<li><p>Same behavior for every run</p>
</li>
</ul>
<p><em><strong>In real-world DevOps:</strong></em></p>
<ul>
<li><p>Deployments vary by <strong>environment (Dev, UAT, Prod)</strong></p>
</li>
<li><p>Infrastructure changes by <strong>region/AZ</strong></p>
</li>
<li><p>Releases need <strong>manual confirmation or approvals</strong></p>
</li>
</ul>
<p><em><strong>Parameterized pipelines solve this by:</strong></em></p>
<ul>
<li><p>Allowing <strong>user input during execution</strong></p>
</li>
<li><p>Enabling <strong>controlled deployments</strong></p>
</li>
<li><p>Reducing the need to modify pipeline code</p>
</li>
</ul>
<p><strong>This is a critical DevOps practice used in:</strong></p>
<ul>
<li><p>Multi-environment deployments</p>
</li>
<li><p>Release approvals</p>
</li>
<li><p>Dynamic infrastructure provisioning</p>
</li>
</ul>
<hr />
<h3>Prerequisite</h3>
<p>Before starting this part, ensure:</p>
<ul>
<li><strong>A ready-to-use</strong> <code>host</code> <strong>with the</strong> <code>directory</code> <strong>structure to run</strong> <code>Dockerfiles</code> <strong>and</strong> <code>docker-compose.yml</code> <strong>(refer to</strong> <a href="https://ask-abhi.com/part-1-install-jenkins-with-docker"><strong>Part 1</strong></a><strong>)</strong></li>
</ul>
<hr />
<h3>Step-by-step implementation</h3>
<p>Here is the code that will be used</p>
<pre><code class="language-groovy">pipeline {
    agent any

    parameters {
        string(
            name: "deploymentName",
            defaultValue: "",
            description: "Deployment Name?"
        )

        choice(
            name: "azDeploy",
            choices: ["ap-southeast-1a", "ap-southeast-1b", "ap-southeast-1c"],
            description: "What AZ?"
        )

        booleanParam(
            name: "confirmDeploy",
            defaultValue: false,
            description: "CONFIRM DEPLOYMENT?"
        )
    }

    stages {
        stage("Deploy") {
            steps {
                echo "Deployment Name: ${params.deploymentName}"
                echo "AZ Selected: ${params.azDeploy}"
                echo "Deployment Confirmation: ${params.confirmDeploy}"
            }
        }
    }
}
</code></pre>
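<p>The purpose section mentions controlled deployments; as a sketch, the <code>confirmDeploy</code> flag can actually gate the stage with a <code>when</code> condition, so an unconfirmed build skips deployment instead of merely printing <code>false</code>:</p>
<pre><code class="language-groovy">pipeline {
    agent any

    parameters {
        booleanParam(
            name: "confirmDeploy",
            defaultValue: false,
            description: "CONFIRM DEPLOYMENT?"
        )
    }

    stages {
        stage("Deploy") {
            // Run this stage only when the user ticked CONFIRM DEPLOYMENT
            when {
                expression { params.confirmDeploy }
            }
            steps {
                echo "Deployment confirmed, proceeding..."
            }
        }
    }
}
</code></pre>
<p>If <code>confirmDeploy</code> is left unchecked, Jenkins marks the stage as skipped and the build still succeeds.</p>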
<ul>
<li><strong>Create pipeline</strong> <code>parameter-pipeline</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/19ab4202-af9a-4259-bb26-87f8ef7d7c93.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Add the script to the</strong> <code>pipeline script</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/d6a281e0-1ff9-4e2a-b01b-154533527444.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Build Now</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/483c7115-129f-4f90-8bd0-c7d22ad617cb.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Click</strong> <code>Build with Parameters</code> <strong>and provide the inputs</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/da21cb7c-31a1-4380-8731-08c16a274827.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Expected</strong> <code>console output</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/c4a41512-e3d4-496e-9b66-06b2485334a9.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Pipeline status</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/63257ee8-ea95-4ee2-900b-4ceecdb4c5f8.png" alt="" style="display:block;margin:0 auto" />

<p><strong>Done!!!</strong></p>
<hr />
<h3>Conclusion</h3>
<p>In this part, you transformed your pipeline into a <strong>dynamic, user-driven CI/CD system</strong></p>
<p>We learned how to:</p>
<ul>
<li><p>Accept user inputs directly from Jenkins UI</p>
</li>
<li><p>Control pipeline execution using parameters</p>
</li>
<li><p>Make deployments flexible and environment-aware</p>
</li>
<li><p>Reduce hardcoding in pipeline scripts</p>
</li>
</ul>
<p>This is how real-world pipelines operate in production:</p>
<ul>
<li><p>Same pipeline, <strong>multiple use cases</strong></p>
</li>
<li><p>Controlled deployments with <strong>user validation</strong></p>
</li>
<li><p>Better flexibility without changing code</p>
</li>
</ul>
<hr />
<h3>🔗 Continue the Series</h3>
<p>⬅️ <strong>Previous Article:</strong> <a href="https://ask-abhi.com/part-8-multibranch-pipelines">Part 8 <strong>Multibranch Pipelines</strong></a><br />➡️ <strong>Next Article:</strong> <a href="https://ask-abhi.com/part-10-environment-variables-in-pipelines">Part 10 Environment Variables in Pipelines</a></p>
<hr />
<p>⭐ If you found this article useful, follow <strong><a href="https://ask-abhi.com">https://ask-abhi.com</a></strong> for more DevOps tutorials.</p>
<hr />
]]></content:encoded></item><item><title><![CDATA[Part 8: Multibranch Pipelines]]></title><description><![CDATA[Jenkins CI/CD Series



Part
Article



1
Install Jenkins with Docker


2
Creating Your First Jenkins Pipeline


3
Jenkins SSH Remote Execution


4
Automating MySQL Backup to AWS S3


5
Making Jenkins]]></description><link>https://ask-abhi.com/part-8-multibranch-pipelines</link><guid isPermaLink="true">https://ask-abhi.com/part-8-multibranch-pipelines</guid><category><![CDATA[Devops]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[Jenkins]]></category><category><![CDATA[Pipeline]]></category><category><![CDATA[automation]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Docker]]></category><dc:creator><![CDATA[Abhinandan Chougule]]></dc:creator><pubDate>Wed, 11 Mar 2026 02:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/7c4399f9-bf34-44b4-8b6f-54388fef3ee7.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<h3>Jenkins CI/CD Series</h3>
<table>
<thead>
<tr>
<th>Part</th>
<th>Article</th>
</tr>
</thead>
<tbody><tr>
<td>1</td>
<td><a href="https://ask-abhi.com/part-1-install-jenkins-with-docker">Install Jenkins with Docker</a></td>
</tr>
<tr>
<td>2</td>
<td><a href="https://ask-abhi.com/part-2-creating-your-first-jenkins-pipeline">Creating Your First Jenkins Pipeline</a></td>
</tr>
<tr>
<td>3</td>
<td><a href="https://ask-abhi.com/part-3-jenkins-ssh-remote-execution">Jenkins SSH Remote Execution</a></td>
</tr>
<tr>
<td>4</td>
<td><a href="https://ask-abhi.com/part-4-automating-mysql-backup-to-aws-s3">Automating MySQL Backup to AWS S3</a></td>
</tr>
<tr>
<td>5</td>
<td><a href="https://ask-abhi.com/part-5-making-jenkins-automation-scalable">Making Jenkins Automation Scalable</a></td>
</tr>
<tr>
<td>6</td>
<td><a href="https://ask-abhi.com/part-6-running-ansible-from-jenkins">Running Ansible from Jenkins</a></td>
</tr>
<tr>
<td>7</td>
<td><a href="https://ask-abhi.com/part-7-control-jenkins-pipeline-through-github">Jenkins Pipeline from GitHub</a></td>
</tr>
<tr>
<td>8</td>
<td><a href="https://ask-abhi.com/part-8-multibranch-pipelines">Multibranch Pipelines</a></td>
</tr>
<tr>
<td>9</td>
<td><a href="https://ask-abhi.com/part-9-parameterized-pipelines">Parameterized Pipelines</a></td>
</tr>
<tr>
<td>10</td>
<td><a href="https://ask-abhi.com/part-10-environment-variables-in-pipelines">Environment Variables in Pipelines</a></td>
</tr>
</tbody></table>
<hr />
<h3>Goal</h3>
<ul>
<li><p>Implement a <strong>Multibranch Pipeline in Jenkins</strong></p>
</li>
<li><p>Automatically detect and build <strong>multiple Git branches</strong></p>
</li>
<li><p>Enable Jenkins to run pipelines <strong>per branch dynamically</strong></p>
</li>
<li><p>Understand how code changes in different branches trigger separate builds</p>
</li>
</ul>
<hr />
<h3>Purpose</h3>
<ul>
<li><p>The purpose of this part is to move from a <strong>single pipeline setup</strong> to a <strong>scalable, branch-aware CI/CD system</strong>.</p>
<p><em><strong>In real-world DevOps environments:</strong></em></p>
<ul>
<li><p>Teams work on <strong>multiple branches</strong> (feature, dev, release, main)</p>
</li>
<li><p>Each branch may have <strong>different code and pipeline behavior</strong></p>
</li>
<li><p>Manual pipeline creation per branch is <strong>not scalable</strong></p>
</li>
</ul>
<p><em><strong>Multibranch Pipelines solve this by:</strong></em></p>
<ul>
<li><p>Automatically discovering branches from GitHub</p>
</li>
<li><p>Running pipelines based on <strong>Jenkinsfile inside each branch</strong></p>
</li>
<li><p>Allowing parallel and independent builds per branch</p>
</li>
</ul>
<p>This is a <strong>key step toward production-grade CI/CD pipelines</strong>.</p>
</li>
</ul>
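<p>Since each branch is built from its own <code>Jenkinsfile</code>, a minimal branch-aware sketch (stage names and the <code>main</code>-branch filter here are illustrative, not the exact file from the demo repo) could look like this:</p>
<pre><code class="language-groovy">pipeline {
    agent any

    stages {
        stage("Build") {
            steps {
                // BRANCH_NAME is injected automatically by the multibranch plugin
                echo "Building branch: ${env.BRANCH_NAME}"
            }
        }
        stage("Deploy") {
            // Only the main branch deploys; feature branches stop after Build
            when { branch "main" }
            steps {
                echo "Deploying from main"
            }
        }
    }
}
</code></pre>
<p>This is how one pipeline definition yields different behavior per branch without any per-branch job configuration in Jenkins.</p>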
<hr />
<h3>Prerequisites</h3>
<p>Before starting this part, ensure:</p>
<p><em><strong>Infrastructure Setup</strong></em></p>
<ul>
<li><p>A ready-to-use <code>host</code> with the <code>directory</code> structure to run <code>Dockerfiles</code> and <code>docker-compose.yml</code> (refer to <a href="https://ask-abhi.com/part-1-install-jenkins-with-docker">Part 1</a>)</p>
</li>
<li><p>GitHub repository <a href="https://github.com/abhinandan-chougule/demo-app-java-maven.git">demo-app</a></p>
</li>
<li><p>Jenkins has:</p>
<ul>
<li><p>Git plugin installed</p>
</li>
<li><p>Access to GitHub repository (public or credentials configured)</p>
</li>
</ul>
</li>
</ul>
<hr />
<h3>Step-by-step implementation</h3>
<ul>
<li><strong>Go to the</strong> <a href="https://github.com/abhinandan-chougule/demo-app-java-maven.git"><strong>repo</strong></a> <strong>and add another branch.</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/21f54923-0ffc-401c-b876-5a35808c7308.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Run "Scan Multibranch Pipeline Now", and it will populate the branches</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/b102edd3-ead9-48eb-bb81-d8da61f27b9c.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Let's make some changes to the repo and commit to the new branch</strong> <code>devbranch</code></li>
</ul>
<p>Here, I removed a small block from the <code>Jenkinsfile</code> and merged it to <code>devbranch</code></p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/cc4013fb-fa85-4f3a-af25-b6fde1c7277e.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/9463fa15-2b8c-4908-82bd-5cbc311b8bba.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/13d36bd9-7447-4f96-bcb4-3812e59acc91.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Let's check the last Build from</strong> <code>devbranch</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/a30bf874-f3e9-4a2c-94ca-16a1be1c6e56.png" alt="" style="display:block;margin:0 auto" />

<h3><strong>Done!!!</strong></h3>
<hr />
<h3>Conclusion</h3>
<p>In this part, you successfully evolved your CI/CD pipeline to a <strong>branch-aware automation system</strong></p>
<p>You learned how to:</p>
<ul>
<li><p>Automatically detect new branches in GitHub</p>
</li>
<li><p>Run pipelines dynamically per branch</p>
</li>
<li><p>Manage different code versions using Jenkins pipelines</p>
</li>
<li><p>Enable scalable CI/CD workflows for team-based development</p>
</li>
</ul>
<p>This is exactly how modern DevOps teams handle:</p>
<ul>
<li><p>Feature development</p>
</li>
<li><p>Parallel testing</p>
</li>
</ul>
<hr />
<h3>🔗 Continue the Series</h3>
<p>⬅️ <strong>Previous Article:</strong> <a href="https://ask-abhi.com/part-7-control-jenkins-pipeline-through-github">Part 7 <strong>Control Jenkins Pipeline through GitHub</strong></a><br />➡️ <strong>Next Article:</strong> <a href="https://ask-abhi.com/part-9-parameterized-pipelines">Part 9 Parameterized Pipelines</a></p>
<hr />
<p>⭐ If you found this article useful, follow <strong><a href="https://ask-abhi.com">https://ask-abhi.com</a></strong> for more DevOps tutorials.</p>
<hr />
]]></content:encoded></item><item><title><![CDATA[Building a Kubernetes Cluster on a laptop  using kubeadm, VirtualBox, and Vagrant]]></title><description><![CDATA[Running Kubernetes locally is one of the best ways to understand how clusters work internally. In this article, I’ll walk through how I built a multi-node Kubernetes cluster on my Mac using Vagrant an]]></description><link>https://ask-abhi.com/building-a-kubernetes-cluster-on-a-laptop-using-kubeadm-virtualbox-and-vagrant</link><guid isPermaLink="true">https://ask-abhi.com/building-a-kubernetes-cluster-on-a-laptop-using-kubeadm-virtualbox-and-vagrant</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[kubeadm]]></category><category><![CDATA[Platform Engineering ]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[SRE]]></category><category><![CDATA[Devops]]></category><category><![CDATA[AWS]]></category><dc:creator><![CDATA[Abhinandan Chougule]]></dc:creator><pubDate>Tue, 10 Mar 2026 09:41:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/76905494-10c0-4b04-a63a-d8c256bd607f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Running Kubernetes locally is one of the best ways to understand how clusters work internally. In this article, I’ll walk through how I built a <strong>multi-node Kubernetes cluster on my Mac using Vagrant and VirtualBox</strong>, bootstrapped it with <strong>kubeadm</strong>, installed networking with <strong>Calico</strong>, and finally deployed an <strong>Nginx application</strong> to verify everything works.</p>
<p>This setup is perfect for <strong>DevOps engineers, SREs, and Kubernetes learners</strong> who want a production-like environment on their laptop.</p>
<blockquote>
<p>Tips -</p>
<p>Read this article first to get the base idea, then deploy as per your requirements using the <a href="https://github.com/abhinandan-chougule/cka-practice-code.git">cka-practice-code</a> repo for the specific commands</p>
</blockquote>
<h3>Lab Architecture</h3>
<pre><code class="language-plaintext">Mac (Host Machine)
        │
        ▼
VirtualBox Hypervisor
        │
        ▼
Vagrant Managed VMs
        │
        ├── controlplane
        ├── node01
        └── node02
</code></pre>
<h3>Environment Setup</h3>
<p>Tools used in the lab:</p>
<ul>
<li><p><strong>Oracle VM VirtualBox</strong> – VM hypervisor</p>
</li>
<li><p><strong>Vagrant</strong> – Infrastructure automation</p>
</li>
<li><p><strong>Kubernetes</strong> – Cluster platform</p>
</li>
<li><p><strong>Project Calico</strong> – CNI networking plugin</p>
</li>
</ul>
<h3>VM Resources</h3>
<table>
<thead>
<tr>
<th>Node</th>
<th>CPU</th>
<th>RAM</th>
</tr>
</thead>
<tbody><tr>
<td>controlplane</td>
<td>2</td>
<td>2GB</td>
</tr>
<tr>
<td>node01</td>
<td>2</td>
<td>2GB</td>
</tr>
<tr>
<td>node02</td>
<td>2</td>
<td>2GB</td>
</tr>
</tbody></table>
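<p>Under the hood, the repo's <code>Vagrantfile</code> is what drives these three VMs; a simplified sketch matching the resource table (the box name and structure here are illustrative — the actual file in <a href="https://github.com/abhinandan-chougule/cka-practice-code.git">cka-practice-code</a> is authoritative) looks like this:</p>
<pre><code class="language-ruby"># Simplified sketch of a three-node Vagrantfile matching the table above
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"   # illustrative box name

  ["controlplane", "node01", "node02"].each do |name|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.provider "virtualbox" do |vb|
        vb.cpus   = 2
        vb.memory = 2048   # 2 GB, per the resource table
      end
    end
  end
end
</code></pre>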
<h3>Configure laptop and build VMs</h3>
<p>Install the following tools on Windows or macOS x86.</p>
<p>Download and install <strong>Oracle VM VirtualBox</strong></p>
<p><a href="https://www.virtualbox.org/wiki/Downloads">https://www.virtualbox.org/wiki/Downloads</a></p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/a47c745e-1ca3-4a99-b36b-7f388d9b249f.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/95c54b51-7ac3-4820-aea6-cc56d07a6364.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/a0d24b71-ca1d-422a-860e-fd66ff54425a.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><h3>Go to <code>Applications</code> and click <code>VirtualBox</code> to complete the installation</h3>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/731a8296-ecf3-4d41-a8c2-c9e98808303d.png" alt="" style="display:block;margin:0 auto" />

<h3>Go to Terminal and Install Vagrant</h3>
<pre><code class="language-plaintext">brew tap hashicorp/tap
brew install hashicorp/tap/hashicorp-vagrant
</code></pre>
<ul>
<li><strong>Verify Installation with</strong> <code>vagrant --version</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/9299c1b3-2b98-469d-a1e2-e8b7396c81bd.png" alt="" style="display:block;margin:0 auto" />

<h3>Clone the <a href="https://github.com/abhinandan-chougule/cka-practice-code.git">cka-practice-code</a> locally</h3>
<ul>
<li><h3>Navigate to the cloned directory</h3>
</li>
</ul>
<p><code>cka-practice-code/kubeadm-clusters/virtualbox</code> <strong>and run</strong> <code>vagrant up</code> <strong>to create the virtual machines</strong> <code>(controlplane, node01 &amp; node02)</code></p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/8df524f8-6226-4a75-948c-4c4520be07ea.png" alt="" style="display:block;margin:0 auto" />

<blockquote>
<p>Note the IPs after completion; they are required for SSH</p>
</blockquote>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/b205896e-34a6-419d-9df2-59e30b8f35ab.png" alt="" style="display:block;margin:0 auto" />

<p>We can check the status in the <strong>Oracle VM VirtualBox Manager</strong> as well</p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/0dd70f6a-fb68-42c8-870e-74b928514b78.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Access VMs by SSH to install required tools/components</strong></li>
</ul>
<p>Use the commands below to play with the environment</p>
<p>Check status:</p>
<pre><code class="language-plaintext">vagrant status
</code></pre>
<p>Stop all VMs:</p>
<pre><code class="language-plaintext">vagrant halt
</code></pre>
<p>Destroy environment:</p>
<pre><code class="language-plaintext">vagrant destroy
</code></pre>
<p>Show VM names:</p>
<pre><code class="language-plaintext">VBoxManage list vms | awk -F\" '{print $2}'
</code></pre>
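<p>To see what that <code>awk</code> filter does, here is the same filter run over a hypothetical sample of <code>VBoxManage list vms</code> output (the UUIDs are made up): since each line has the VM name in double quotes, splitting on <code>"</code> puts the name in field 2.</p>
<pre><code class="language-plaintext">printf '"controlplane" {9f2c}\n"node01" {1a7b}\n"node02" {44d0}\n' | awk -F'"' '{print $2}'
# prints:
# controlplane
# node01
# node02
</code></pre>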
<hr />
<h3>Install Multipass on Mac M series (Apple Silicon)</h3>
<p><a href="https://canonical.com/multipass/install">https://canonical.com/multipass/install</a></p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/50253748-beb6-44f6-b829-2c72d6b2a2aa.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Navigate to the Apple-Silicon directory and run the script</strong> <code>./deploy-virtual-machines.sh</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/dca23c17-2777-4f2a-84ae-90151389e886.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Destroy the VMs with the script</strong> <code>./destroy-virtual-machines.sh</code></li>
</ul>
<hr />
<h3>Primary setup: Run these commands on all three nodes <code>controlplane, node01 &amp; node02</code></h3>
<blockquote>
<p>Tips - <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/">Kubernetes document</a> for reference</p>
</blockquote>
<p>Go to the repo <a href="https://github.com/abhinandan-chougule/cka-practice-code/tree/main/kubeadm-clusters/generic">04-node-setup.md</a> and start running commands one by one</p>
<p>Connect all three by SSH with the below credentials.</p>
<blockquote>
<p>username: vagrant<br />password: vagrant</p>
</blockquote>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/c77554d5-687d-4d40-ac95-a43dfd210b1d.png" alt="" style="display:block;margin:0 auto" />

<blockquote>
<p>Note- Run all these commands on all three nodes (controlplane, node01, node02)</p>
</blockquote>
<hr />
<h3><strong>Boot the controlplane</strong></h3>
<p>Configure Controlplane by following <a href="https://github.com/abhinandan-chougule/cka-practice-code/blob/main/kubeadm-clusters/generic/05-controlplane.md">05-Controlplane.md</a> commands</p>
<blockquote>
<p>Note- Run this only on Controlplane</p>
<p>Refer to the <a href="https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/onpremises">CNI documentation</a>, but the required commands are provided in the repo itself</p>
</blockquote>
<p>Remember to copy the <code>kubeadm join</code> command, then follow the remaining steps until the control plane is ready</p>
<ul>
<li><strong>Verify Controlplane is Ready</strong></li>
</ul>
<pre><code class="language-plaintext">kubectl get pods -n kube-system
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/164e5dc6-7a04-4ee6-95e0-d4a22fc6ab44.png" alt="" style="display:block;margin:0 auto" />

<hr />
<h3>Join the worker nodes to the controlplane using step <a href="https://github.com/abhinandan-chougule/certified-kubernetes-administrator-course/blob/master/kubeadm-clusters/generic/06-workers.md">06-workers.md</a></h3>
<p>You should already have the ready-to-use join command copied from the Controlplane; if not, regenerate it with <code>kubeadm token create --print-join-command</code></p>
<pre><code class="language-plaintext">kubeadm join 192.168.0.185:6443 --token 2lccuf.ha2rxb9dclxzkvrp \
        --discovery-token-ca-cert-hash sha256:26d681f9cd77e66a71bbe5e4cd903fae34458660ce2304ff7b73f168fcf9c4c2
</code></pre>
<ul>
<li><h3>Verify on the Controlplane that the nodes have joined as workers</h3>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/94698065-9865-4398-a76f-509ea0bf7c90.png" alt="" style="display:block;margin:0 auto" />

<hr />
<h3><strong>Test the cluster, follow</strong> <a href="https://github.com/abhinandan-chougule/cka-practice-code/blob/main/kubeadm-clusters/generic/07-test.md"><strong>07-test.md</strong></a></h3>
<ul>
<li>Test worker node01</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/24e1beac-39c4-4ccf-a631-13c3014537ad.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li>Test worker node02</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/e84588f8-3e53-40e6-8194-6ab7a0d4a0b5.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><h3>Since we installed the cluster with bridge networking (the default), NodePort services can be viewed in a browser.</h3>
</li>
</ul>
<p>Run the following command on <code>controlplane</code> to get the browser address, then copy the output to your browser:</p>
<pre><code class="language-plaintext">echo "http://$(dig +short node01):$PORT_NUMBER"
</code></pre>
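<p>That command relies on two shell expansions: <code>$(dig +short node01)</code> resolves the worker's IP address, and <code>$PORT_NUMBER</code> is the NodePort assigned to the service. A self-contained illustration of how the URL is assembled (the IP and port below are placeholders, not values from this cluster):</p>
<pre><code class="language-shell"># Stand-ins for $(dig +short node01) and the service's NodePort
NODE_IP=192.168.0.186
PORT_NUMBER=30080
echo "http://${NODE_IP}:${PORT_NUMBER}"
</code></pre>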
<h3>Verify from the browser</h3>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/759da800-8b89-44af-a5f8-346d93eb5da7.png" alt="" style="display:block;margin:0 auto" />

<hr />
<h3>Key Learnings</h3>
<p>Building Kubernetes manually with kubeadm provides a deeper understanding of:</p>
<ul>
<li><p>Control plane components</p>
</li>
<li><p>Node registration</p>
</li>
<li><p>Cluster networking</p>
</li>
<li><p>Kubernetes troubleshooting</p>
</li>
<li><p>Real production cluster architecture</p>
</li>
</ul>
<h3>Final Result</h3>
<p>✔ Multi-node Kubernetes cluster running locally<br />✔ Networking configured with Calico<br />✔ Worker nodes successfully joined<br />✔ Nginx application deployed and accessible from the browser</p>
<p>This environment closely simulates a <strong>real production Kubernetes setup</strong> and is extremely useful for learning Kubernetes architecture and troubleshooting.</p>
<blockquote>
<p><code>Thanks for taking the time and effort to get your hands dirty.</code></p>
</blockquote>
<h3>Happy Learning!</h3>
<blockquote>
<p>Abhinandan Chougule</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Part 7: Control Jenkins Pipeline through GitHub]]></title><description><![CDATA[Jenkins CI/CD Series



Part
Article



1
Install Jenkins with Docker


2
Creating Your First Jenkins Pipeline


3
Jenkins SSH Remote Execution


4
Automating MySQL Backup to AWS S3


5
Making Jenkins]]></description><link>https://ask-abhi.com/part-7-control-jenkins-pipeline-through-github</link><guid isPermaLink="true">https://ask-abhi.com/part-7-control-jenkins-pipeline-through-github</guid><category><![CDATA[Devops]]></category><category><![CDATA[automation]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Jenkins]]></category><dc:creator><![CDATA[Abhinandan Chougule]]></dc:creator><pubDate>Tue, 10 Mar 2026 02:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/10e717dc-f7e5-42ec-9d52-04ab8849c1c6.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Jenkins CI/CD Series</h3>
<table>
<thead>
<tr>
<th>Part</th>
<th>Article</th>
</tr>
</thead>
<tbody><tr>
<td>1</td>
<td><a href="https://ask-abhi.com/part-1-install-jenkins-with-docker">Install Jenkins with Docker</a></td>
</tr>
<tr>
<td>2</td>
<td><a href="https://ask-abhi.com/part-2-creating-your-first-jenkins-pipeline">Creating Your First Jenkins Pipeline</a></td>
</tr>
<tr>
<td>3</td>
<td><a href="https://ask-abhi.com/part-3-jenkins-ssh-remote-execution">Jenkins SSH Remote Execution</a></td>
</tr>
<tr>
<td>4</td>
<td><a href="https://ask-abhi.com/part-4-automating-mysql-backup-to-aws-s3">Automating MySQL Backup to AWS S3</a></td>
</tr>
<tr>
<td>5</td>
<td><a href="https://ask-abhi.com/part-5-making-jenkins-automation-scalable">Making Jenkins Automation Scalable</a></td>
</tr>
<tr>
<td>6</td>
<td><a href="https://ask-abhi.com/part-6-running-ansible-from-jenkins">Running Ansible from Jenkins</a></td>
</tr>
<tr>
<td>7</td>
<td><a href="https://ask-abhi.com/part-7-control-jenkins-pipeline-through-github">Jenkins Pipeline from GitHub</a></td>
</tr>
<tr>
<td>8</td>
<td><a href="https://ask-abhi.com/part-8-multibranch-pipelines">Multibranch Pipelines</a></td>
</tr>
<tr>
<td>9</td>
<td><a href="https://ask-abhi.com/part-9-parameterized-pipelines">Parameterized Pipelines</a></td>
</tr>
<tr>
<td>10</td>
<td><a href="https://ask-abhi.com/part-10-environment-variables-in-pipelines">Environment Variables in Pipelines</a></td>
</tr>
</tbody></table>
<hr />
<h3>Goal</h3>
<p>Configure Jenkins to use a <strong>Jenkinsfile stored in a GitHub repository</strong>, enabling pipelines to be version-controlled, automatically updated, and executed directly from source code.</p>
<hr />
<h3>Purpose</h3>
<p>The purpose of this part is to introduce <strong>Pipeline as Code using GitHub</strong>, which is a core DevOps practice.</p>
<p>By the end of this setup, you will:</p>
<ul>
<li><p>Store Jenkins pipeline logic inside GitHub</p>
</li>
<li><p>Configure Jenkins to pull pipeline code from SCM</p>
</li>
<li><p>Automatically detect changes in the repository</p>
</li>
<li><p>Build pipelines that are <strong>version-controlled and reusable</strong></p>
</li>
</ul>
<p>This approach ensures that your CI/CD pipelines are:</p>
<ul>
<li><p><strong>Traceable</strong> (tracked via Git commits)</p>
</li>
<li><p><strong>Collaborative</strong> (teams can contribute)</p>
</li>
<li><p><strong>Consistent</strong> (same pipeline across environments)</p>
</li>
</ul>
<hr />
<h3>Prerequisites</h3>
<p>Before starting this part, ensure you have:</p>
<ul>
<li><p>A ready-to-use <code>Host</code> with the <code>directory</code> structure for the <code>Dockerfiles</code> and <code>docker-compose.yml</code> (refer to <a href="https://ask-abhi.com/part-1-install-jenkins-with-docker">Part 1</a>)</p>
</li>
<li><p>A GitHub account</p>
</li>
<li><p>Access to a <a href="https://github.com/abhinandan-chougule/demo-app-java-maven.git">demo-app</a> repository (fork or create)</p>
</li>
</ul>
<hr />
<h3>Step-by-step implementation</h3>
<ul>
<li><strong>The Jenkinsfile is placed at the location shown below in the repo and will be used by the job</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/3b626a02-4ebd-4066-b5d2-a23966698191.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Create a new Job</strong> <code>maven-github-pipeline</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/1d4981fa-062c-4281-9def-2ea1c3f5b6fe.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Select options as below</strong></li>
</ul>
<p><strong>Definition</strong>: <code>Pipeline script from SCM</code><br /><strong>SCM</strong>: <code>Git</code><br /><strong>Repository URL</strong>: <code>Forked repo HTTP URL</code><br /><strong>Branch Specifier (blank for 'any')</strong>: <code>*/main</code><br /><strong>Script Path</strong>: <code>jenkins/Jenkinsfile</code></p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/10b6f381-0920-45b2-aaca-d7be1b29edb3.png" alt="" style="display:block;margin:0 auto" />
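<p>For reference, a minimal declarative Jenkinsfile of the kind stored at <code>jenkins/Jenkinsfile</code> could look like the sketch below. This is only an illustrative assumption: the stage names and Maven goals here are not taken from the demo repository, whose actual file may differ.</p>
<pre><code class="language-groovy">pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Hypothetical build step for a Maven project
                sh 'mvn -B clean package'
            }
        }
        stage('Test') {
            steps {
                // Hypothetical test step
                sh 'mvn test'
            }
        }
    }
}
</code></pre>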

<p><strong>Build now</strong></p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/73863c38-5ca1-4cca-a4ab-f8a6402d0575.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Go through the console output logs</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/652c7c4b-bfb9-49dc-94cf-537bb9304cb7.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Build once more, then open the job's Console Output and compare</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/3604f007-0b12-41cd-9ece-d64a2643a7fb.png" alt="" style="display:block;margin:0 auto" />

<p><mark class="bg-yellow-200 dark:bg-yellow-500/30">This is not cloning the repo again; instead, it's checking whether there are any new commits or changes</mark></p>
<h3><strong>Done!!!</strong></h3>
<hr />
<h3>Conclusion</h3>
<p>In this part, you successfully moved your Jenkins pipeline to <strong>GitHub</strong>, adopting the powerful concept of <strong>Pipeline as Code</strong>.</p>
<p>You have:</p>
<ul>
<li><p><em><strong>Connected Jenkins to a GitHub repository</strong></em></p>
</li>
<li><p><em><strong>Configured pipeline jobs to fetch the Jenkinsfile from SCM</strong></em></p>
</li>
<li><p><em><strong>Executed builds directly from the repository code</strong></em></p>
</li>
<li><p><em><strong>Observed how Jenkins detects changes without re-cloning</strong></em></p>
</li>
<li><p><em><strong>Introduced version-controlled CI/CD workflows</strong></em></p>
</li>
</ul>
<p>This is a critical step toward <strong>modern DevOps practices</strong>, where pipelines are no longer managed manually in Jenkins but are fully integrated with source control systems.</p>
<p>With this setup, your pipeline becomes:</p>
<ul>
<li><p><strong>Scalable</strong></p>
</li>
<li><p><strong>Maintainable</strong></p>
</li>
<li><p><strong>Production-ready</strong></p>
</li>
</ul>
<hr />
<h3>🔗 Continue the Series</h3>
<p>⬅️ <strong>Previous Article:</strong> <a href="https://ask-abhi.com/part-6-running-ansible-from-jenkins">Part 6 Running Ansible from Jenkins</a><br />➡️ <strong>Next Article:</strong> <a href="https://ask-abhi.com/part-8-multibranch-pipelines">Part 8 Multibranch Pipelines</a></p>
<hr />
<p>⭐ If you found this article useful, follow <strong><a href="https://ask-abhi.com">https://ask-abhi.com</a></strong> for more DevOps tutorials.</p>
<hr />
]]></content:encoded></item><item><title><![CDATA[Part 6: Running Ansible from Jenkins]]></title><description><![CDATA[Jenkins CI/CD Series



Part
Article



1
Install Jenkins with Docker


2
Creating Your First Jenkins Pipeline


3
Jenkins SSH Remote Execution


4
Automating MySQL Backup to AWS S3


5
Making Jenkins]]></description><link>https://ask-abhi.com/part-6-running-ansible-from-jenkins</link><guid isPermaLink="true">https://ask-abhi.com/part-6-running-ansible-from-jenkins</guid><category><![CDATA[Devops]]></category><category><![CDATA[automation]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[ansible]]></category><dc:creator><![CDATA[Abhinandan Chougule]]></dc:creator><pubDate>Mon, 09 Mar 2026 02:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/45f0a8d1-a73b-460d-8887-17bc0ca91e0f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Jenkins CI/CD Series</h3>
<table>
<thead>
<tr>
<th>Part</th>
<th>Article</th>
</tr>
</thead>
<tbody><tr>
<td>1</td>
<td><a href="https://ask-abhi.com/part-1-install-jenkins-with-docker">Install Jenkins with Docker</a></td>
</tr>
<tr>
<td>2</td>
<td><a href="https://ask-abhi.com/part-2-creating-your-first-jenkins-pipeline">Creating Your First Jenkins Pipeline</a></td>
</tr>
<tr>
<td>3</td>
<td><a href="https://ask-abhi.com/part-3-jenkins-ssh-remote-execution">Jenkins SSH Remote Execution</a></td>
</tr>
<tr>
<td>4</td>
<td><a href="https://ask-abhi.com/part-4-automating-mysql-backup-to-aws-s3">Automating MySQL Backup to AWS S3</a></td>
</tr>
<tr>
<td>5</td>
<td><a href="https://ask-abhi.com/part-5-making-jenkins-automation-scalable">Making Jenkins Automation Scalable</a></td>
</tr>
<tr>
<td>6</td>
<td><a href="https://ask-abhi.com/part-6-running-ansible-from-jenkins">Running Ansible from Jenkins</a></td>
</tr>
<tr>
<td>7</td>
<td><a href="https://ask-abhi.com/part-7-control-jenkins-pipeline-through-github">Jenkins Pipeline from GitHub</a></td>
</tr>
<tr>
<td>8</td>
<td><a href="https://ask-abhi.com/part-8-multibranch-pipelines">Multibranch Pipelines</a></td>
</tr>
<tr>
<td>9</td>
<td><a href="https://ask-abhi.com/part-9-parameterized-pipelines">Parameterized Pipelines</a></td>
</tr>
<tr>
<td>10</td>
<td><a href="https://ask-abhi.com/part-10-environment-variables-in-pipelines">Environment Variables in Pipelines</a></td>
</tr>
</tbody></table>
<hr />
<h3>Goal</h3>
<p>Enable Jenkins to execute <strong>Ansible automation tasks</strong> by integrating Ansible into the Jenkins container, allowing centralized orchestration of remote infrastructure using playbooks.</p>
<hr />
<h3>Purpose</h3>
<p>The purpose of this part is to extend your CI/CD pipeline capabilities by introducing <strong>configuration management with Ansible</strong>.</p>
<p>By the end of this setup, you will:</p>
<ul>
<li><p>Run Ansible commands directly from Jenkins</p>
</li>
<li><p>Manage remote systems using <strong>Ansible inventory + playbooks</strong></p>
</li>
<li><p>Automate infrastructure tasks via Jenkins jobs</p>
</li>
<li><p>Establish a foundation for <strong>Infrastructure as Code (IaC)</strong></p>
</li>
</ul>
<p>This bridges the gap between <strong>CI/CD pipelines and infrastructure automation</strong>, which is a core DevOps skill.</p>
<hr />
<h3>Prerequisites</h3>
<p>Before starting this part, ensure you have:</p>
<ul>
<li>A ready-to-use <code>Host</code> with the <code>directory</code> structure for the <code>Dockerfiles</code> and <code>docker-compose.yml</code> (refer to <a href="https://ask-abhi.com/part-1-install-jenkins-with-docker">Part 1</a>)</li>
</ul>
<hr />
<h3>Step-by-step implementation</h3>
<h3>Install Ansible on Docker (on an existing Jenkins Container)</h3>
<p>Create another folder, <code>/home/jenkins/jenkins_home/jenkins-ansible</code></p>
<ul>
<li><strong>Create a customized Dockerfile so the Jenkins container is built with Ansible support</strong></li>
</ul>
<pre><code class="language-dockerfile">FROM jenkins/jenkins
USER root

RUN apt-get update &amp;&amp; apt-get install python3-pip -y

# New lines to set up a virtual environment
####
ENV ANSIBLE_VENV=/ansible_venv
RUN mkdir $ANSIBLE_VENV &amp;&amp; \
    chown jenkins:jenkins $ANSIBLE_VENV &amp;&amp; \
    apt-get install python3-venv -y
####

USER jenkins

# Create the virtual environment
RUN python3 -m venv $ANSIBLE_VENV

# Use the venv to install Ansible
RUN $ANSIBLE_VENV/bin/pip3 install ansible

# Ensure the Ansible binary is accessible
ENV PATH=$PATH:$ANSIBLE_VENV/bin
</code></pre>
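<p>The final <code>ENV</code> line appends the virtual environment's <code>bin</code> directory to <code>PATH</code>, which is what makes <code>ansible</code> callable without a full path. A quick self-contained illustration of the same expansion (runnable in any shell; the paths mirror the Dockerfile):</p>
<pre><code class="language-shell"># Same expansion as ENV PATH=$PATH:$ANSIBLE_VENV/bin
ANSIBLE_VENV=/ansible_venv
PATH=$PATH:$ANSIBLE_VENV/bin
# The venv's bin directory is now the last PATH entry
echo "$PATH" | tr ':' '\n' | tail -n 1
</code></pre>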
<ul>
<li><strong>Go back up one directory, where the Docker Compose file lives, and update it to build from the Dockerfile created in the previous step.</strong></li>
</ul>
<p><code>/home/jenkins/jenkins_home/docker-compose.yml</code></p>
<pre><code class="language-yaml">services:
  jenkins:
    container_name: jenkins
    image: jenkins-ansible
    build:
      context: jenkins-ansible
    ports:
      - "8080:8080"
    volumes:
      - $PWD/jenkins_home:/var/jenkins_home
    networks:
      - net

  remote_host:
    container_name: remote-host
    image: remote-host
    build:
      context: centos7
    volumes:
      - "$PWD/aws-s3.sh:/tmp/script.sh"
    networks:
      - net

  db_host:
    container_name: db
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: 1234
    volumes:
      - $PWD/db_data:/var/lib/mysql
    networks:
      - net

networks:
  net:
</code></pre>
<blockquote>
<p>Tip: in the <code>jenkins</code> block, update <code>image: jenkins-ansible</code> and <code>build: context: jenkins-ansible</code></p>
</blockquote>
<pre><code class="language-plaintext">docker compose build
</code></pre>
<blockquote>
<p>Note: always run <code>docker compose</code> from the directory containing <code>docker-compose.yml</code></p>
</blockquote>
<ul>
<li><strong>The new Jenkins Image has been built</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/2335e87b-906f-436e-9444-ce2c2ef6e91e.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Create a Jenkins container by using this Image</strong></li>
</ul>
<pre><code class="language-shell"># The commands below verify that the Jenkins container has Ansible installed

docker compose up -d
docker ps
docker exec -it jenkins bash
ansible
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/923b84a5-26ce-4c6c-b794-f3191a7ebd6d.png" alt="" style="display:block;margin:0 auto" />

<blockquote>
<p>Note: typing <code>ansible</code> prints its available options, which confirms it is installed</p>
</blockquote>
<p>Create an <code>ansible</code> folder to act as the Ansible home directory</p>
<p><code>/home/jenkins/jenkins_home/ansible</code></p>
<blockquote>
<p>Note: <code>/jenkins_home</code> acts as a volume for our Jenkins container, since we mapped it in <code>docker-compose.yml</code></p>
</blockquote>
<ul>
<li><strong>Copy the remote key to the Ansible folder directly</strong></li>
</ul>
<pre><code class="language-plaintext">/home/jenkins/jenkins_home$ sudo cp centos7/remote-key jenkins_home/ansible/
</code></pre>
<p>Since we're moving around the <strong>remote-key</strong> file, it's always a good idea to ensure that <strong>Jenkins has the correct permissions</strong> for it.</p>
<ul>
<li><strong>Run the following command to set the right permissions:</strong></li>
</ul>
<pre><code class="language-shell"># docker exec   - run a command in a running container
# -u root       - as the root user
# jenkins       - in the container named jenkins
# bash -c "..." - using bash
docker exec -u root jenkins bash -c "chmod 400 /var/jenkins_home/ansible/remote-key &amp;&amp; chown 1000:1000 /var/jenkins_home/ansible -R"
</code></pre>
<ul>
<li><strong>What does this do?</strong></li>
</ul>
<p><code>chmod 400 /var/jenkins_home/ansible/remote-key</code> → Sets read-only permissions for the key file.</p>
<p><code>chown 1000:1000 /var/jenkins_home/ansible -R</code> → Ensures Jenkins (user ID 1000) owns the key and its directory.</p>
<p>This step prevents permission issues when using the key inside the container.</p>
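<p>To see what <code>chmod 400</code> does in isolation, here is a small experiment on a throwaway file (a scratch path, not the real key; <code>stat -c</code> assumes GNU stat on Linux):</p>
<pre><code class="language-shell"># Create a scratch file and restrict it the same way as the remote key
touch /tmp/demo-key
chmod 400 /tmp/demo-key
# Owner read-only: the octal mode prints as 400
stat -c '%a' /tmp/demo-key
</code></pre>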
<hr />
<ul>
<li><strong>Create Inventory in Ansible</strong></li>
</ul>
<p>Create a file named <code>hosts</code> in <code>/home/jenkins/jenkins_home/jenkins-ansible</code> and add the content below</p>
<pre><code class="language-plaintext">[all:vars]
ansible_connection=ssh

[test]
test1 ansible_host=remote_host ansible_user=remote_user ansible_private_key_file=/var/jenkins_home/ansible/remote-key
</code></pre>
<ul>
<li><strong>Copy the hosts file to the</strong> <code>Jenkins</code> <strong>container in</strong> <code>jenkins_home/ansible</code> <strong>folder</strong></li>
</ul>
<pre><code class="language-plaintext">cp hosts ../jenkins_home/ansible
</code></pre>
<ul>
<li><strong>Log in to the Jenkins container and run the command</strong></li>
</ul>
<pre><code class="language-plaintext">docker exec -it jenkins bash
</code></pre>
<ul>
<li><strong>Verify files</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/4a5e9199-3a5e-4104-9a00-3d1138e880b6.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Run the inventory</strong></li>
</ul>
<pre><code class="language-plaintext">ansible -i hosts -m ping test1
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/59477006-3016-4182-9eb0-46fbaeb68b03.png" alt="" style="display:block;margin:0 auto" />

<blockquote>
<p>Note: run the <code>ansible</code> command from the directory where the inventory file is located</p>
<p>If the connection fails, it is most likely an SSH key permission issue; test connectivity from Jenkins to the remote host with:</p>
<p><code>ssh -i /var/jenkins_home/ansible/remote-key remote_user@remote_host</code></p>
</blockquote>
<hr />
<h3>Create a Playbook <code>play.yml</code> at the host directory <code>/home/jenkins/jenkins_home/jenkins-ansible</code></h3>
<pre><code class="language-yaml">- name: Test playbook 
  hosts: test1 
  tasks:
    - name: Create file using shell 
      shell: echo Hello World &gt; /tmp/ansible-file
</code></pre>
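<p>As a side note, the <code>shell</code> task above rewrites the file on every run. If idempotency matters, an equivalent sketch using Ansible's <code>copy</code> module (same target file; the content string is assumed) would be:</p>
<pre><code class="language-yaml">- name: Test playbook (idempotent variant)
  hosts: test1
  tasks:
    - name: Create file using the copy module
      copy:
        content: "Hello World\n"
        dest: /tmp/ansible-file
</code></pre>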
<ul>
<li><strong>Copy play.yml to the Jenkins container</strong></li>
</ul>
<pre><code class="language-plaintext">cp play.yml ../jenkins_home/ansible/
</code></pre>
<ul>
<li><p><strong>Log in to the Jenkins container</strong></p>
</li>
<li><p><strong>Check if the file is present</strong></p>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/ecb590e4-3bd8-4a62-abb9-8128620944e0.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Run the</strong> <code>play.yml</code></li>
</ul>
<pre><code class="language-plaintext">ansible-playbook -i hosts play.yml
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/79de2169-1ad6-4238-99c2-3ee1b7061352.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Check where the file was generated</strong></li>
</ul>
<p><strong>Here it is on the remote-host</strong></p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/4a3134f3-0527-4d42-91e7-af38ea7159dc.png" alt="" style="display:block;margin:0 auto" />

<hr />
<h3>Jenkins and Ansible Integration</h3>
<ul>
<li><strong>Log in to the Jenkins console and install the Ansible plugin.</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/9daa9b88-f309-4110-ab03-dc374aa7dc74.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Create Freestyle pipeline ansible-test</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/80eacd0c-6fba-49ac-be8a-e80edfdba04a.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/4fb7c298-395b-400a-8346-6a0ab3e2a86d.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/167fcc1e-cf67-46bd-bfe0-851f32af19b6.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><p><strong>Click on</strong> <code>Build Now</code> <strong>and</strong> <code>Console Output</code></p>
</li>
<li><p><strong>Verify output</strong></p>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/68e1f28b-2e95-4e56-be0a-e6cb9ee0136b.png" alt="" style="display:block;margin:0 auto" />

<h3>Done!!!</h3>
<hr />
<h3>Conclusion</h3>
<p>In this part, you successfully integrated <strong>Ansible with Jenkins</strong>, transforming your CI/CD pipeline into a powerful automation engine capable of managing infrastructure.</p>
<p>You have:</p>
<ul>
<li><p>Installed Ansible inside a custom Jenkins container</p>
</li>
<li><p>Configured secure SSH access using private keys</p>
</li>
<li><p>Created and tested Ansible inventory and playbooks</p>
</li>
<li><p>Executed automation tasks on remote systems</p>
</li>
<li><p>Integrated Ansible execution into Jenkins jobs</p>
</li>
</ul>
<p>This is a <strong>major milestone</strong> in your DevOps journey — you are no longer just building pipelines, but <strong>automating infrastructure at scale</strong>.</p>
<p>This setup reflects real-world practices where Jenkins acts as an <strong>orchestrator</strong>, and Ansible handles <strong>configuration management and deployment</strong>.</p>
<hr />
<h3>🔗 Continue the Series</h3>
<p>⬅️ <strong>Previous Article:</strong> <a href="https://ask-abhi.com/part-5-making-jenkins-automation-scalable">Part 5 Making Jenkins Automation Scalable</a><br />➡️ <strong>Next Article:</strong> <a href="https://ask-abhi.com/part-7-control-jenkins-pipeline-through-github">Part 7 Jenkins Pipeline from GitHub</a></p>
<hr />
<p>⭐ If you found this article useful, follow <strong><a href="https://ask-abhi.com">https://ask-abhi.com</a></strong> for more DevOps tutorials.</p>
<hr />
]]></content:encoded></item><item><title><![CDATA[Part 5: Making Jenkins Automation Scalable]]></title><description><![CDATA[Jenkins CI/CD Series



Part
Article



1
Install Jenkins with Docker


2
Creating Your First Jenkins Pipeline


3
Jenkins SSH Remote Execution


4
Automating MySQL Backup to AWS S3


5
Making Jenkins]]></description><link>https://ask-abhi.com/part-5-making-jenkins-automation-scalable</link><guid isPermaLink="true">https://ask-abhi.com/part-5-making-jenkins-automation-scalable</guid><category><![CDATA[Devops]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[AWS]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Abhinandan Chougule]]></dc:creator><pubDate>Sun, 08 Mar 2026 02:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/f2f943e0-8454-434e-9ba1-575bdbc026f7.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Jenkins CI/CD Series</h3>
<table>
<thead>
<tr>
<th>Part</th>
<th>Article</th>
</tr>
</thead>
<tbody><tr>
<td>1</td>
<td><a href="https://ask-abhi.com/part-1-install-jenkins-with-docker">Install Jenkins with Docker</a></td>
</tr>
<tr>
<td>2</td>
<td><a href="https://ask-abhi.com/part-2-creating-your-first-jenkins-pipeline">Creating Your First Jenkins Pipeline</a></td>
</tr>
<tr>
<td>3</td>
<td><a href="https://ask-abhi.com/part-3-jenkins-ssh-remote-execution">Jenkins SSH Remote Execution</a></td>
</tr>
<tr>
<td>4</td>
<td><a href="https://ask-abhi.com/part-4-automating-mysql-backup-to-aws-s3">Automating MySQL Backup to AWS S3</a></td>
</tr>
<tr>
<td>5</td>
<td><a href="https://ask-abhi.com/part-5-making-jenkins-automation-scalable">Making Jenkins Automation Scalable</a></td>
</tr>
<tr>
<td>6</td>
<td><a href="https://ask-abhi.com/part-6-running-ansible-from-jenkins">Running Ansible from Jenkins</a></td>
</tr>
<tr>
<td>7</td>
<td><a href="https://ask-abhi.com/part-7-control-jenkins-pipeline-through-github">Jenkins Pipeline from GitHub</a></td>
</tr>
<tr>
<td>8</td>
<td><a href="https://ask-abhi.com/part-8-multibranch-pipelines">Multibranch Pipelines</a></td>
</tr>
<tr>
<td>9</td>
<td><a href="https://ask-abhi.com/part-9-parameterized-pipelines">Parameterized Pipelines</a></td>
</tr>
<tr>
<td>10</td>
<td><a href="https://ask-abhi.com/part-10-environment-variables-in-pipelines">Environment Variables in Pipelines</a></td>
</tr>
</tbody></table>
<hr />
<h3>Goal</h3>
<p>The goal of this project is to transform the existing Jenkins automation into a <strong>scalable and reusable solution</strong> by externalizing scripts and ensuring persistence across container restarts.</p>
<p>In this exercise, we will move the backup script outside the container and mount it using Docker volumes, allowing it to persist even if the container is recreated. Additionally, we will extend the setup to support <strong>multiple databases and multiple S3 buckets</strong>, making the automation flexible for real-world use cases.</p>
<p>By the end of this project, you will have a Jenkins-driven automation setup that is <strong>resilient, reusable, and scalable across multiple environments</strong>.</p>
<hr />
<h3>Purpose</h3>
<p>The purpose of this exercise is to demonstrate how to design Jenkins automation in a way that is <strong>production-ready and maintainable</strong>.</p>
<p>In earlier parts, the automation worked, but it was tightly coupled to the container lifecycle. In real-world DevOps environments, automation must be:</p>
<ul>
<li><p><strong>Persistent</strong> – survives container restarts or failures</p>
</li>
<li><p><strong>Reusable</strong> – can be applied across multiple use cases</p>
</li>
<li><p><strong>Scalable</strong> – supports multiple applications, databases, or environments</p>
</li>
</ul>
<p>By externalizing the script and using Docker volume mapping, we ensure that critical automation logic is not lost when containers are recreated.</p>
<p>Extending the solution to handle multiple databases and S3 buckets introduces flexibility, which is essential for managing <strong>enterprise-scale infrastructure and data workflows</strong>.</p>
<p>This approach reflects real DevOps practices where pipelines are designed to be <strong>modular, reusable, and environment-agnostic</strong>.</p>
<hr />
<h3>Prerequisites</h3>
<ul>
<li>A ready-to-use <code>Host</code> with the <code>directory</code> structure for the <code>Dockerfiles</code> and <code>docker-compose.yml</code> (refer to <a href="https://ask-abhi.com/part-1-install-jenkins-with-docker">Part 1</a>)</li>
</ul>
<hr />
<h3>Step-by-step implementation</h3>
<ul>
<li><p><strong>How do we make this script persist on the container?</strong></p>
</li>
<li><p><strong>Go to the home folder, create <code>aws-s3.sh</code>, and copy in the content of <code>script.sh</code></strong></p>
</li>
</ul>
<pre><code class="language-plaintext">/home/jenkins/jenkins_home$ vi aws-s3.sh
</code></pre>
<ul>
<li><strong>Go to docker-compose.yml and update the</strong> <code>volumes</code> <strong>block for</strong> <code>remote_host</code><strong>, which mounts the script into the container so it survives a crash or recreation</strong></li>
</ul>
<pre><code class="language-yaml">services:
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"
    volumes:
      - $PWD/jenkins_home:/var/jenkins_home
    networks:
      - net

  remote_host:
    container_name: remote-host
    image: remote-host
    build:
      context: centos7
    volumes:
      - "$PWD/aws-s3.sh:/tmp/script.sh"
    networks:
      - net

  db_host:
    container_name: db
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: 1234
    volumes:
      - $PWD/db_data:/var/lib/mysql
    networks:
      - net

networks:
  net:
</code></pre>
<ul>
<li><strong>Test it by deleting the remote-host</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/c5730e33-93dd-410f-93c7-2d2adf0b5166.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><p><strong>What if we have multiple databases and the backup has to be uploaded to different S3 buckets?</strong></p>
</li>
<li><p><strong>Create another S3 bucket,</strong> <code>jenkins-mysql-backup-mylab-2</code></p>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/238389d3-d0e9-4484-9ec3-89512a0f8999.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Create another database</strong> <code>testdb2</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/8ad40b7a-3630-4f5a-b1d3-b16020bca5d9.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Now, let's take a backup of</strong> <code>testdb2</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/65f30c71-3934-4cfe-b872-35060c000c99.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Check logs</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/c29592cf-3217-4721-a87e-ad4abb5970f0.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Verify upload</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/445f83d5-97e4-4cbc-b17b-f464cfe44a00.png" alt="" style="display:block;margin:0 auto" />

<h3><strong>Done!!!</strong></h3>
<hr />
<h3>Conclusion</h3>
<p>In this part of the series, we enhanced our Jenkins automation to make it <strong>scalable, reusable, and production-ready</strong>.</p>
<p>We achieved this by:</p>
<ul>
<li><p><em><strong>Externalizing the backup script to ensure persistence</strong></em></p>
</li>
<li><p><em><strong>Using Docker volumes to maintain scripts across container restarts</strong></em></p>
</li>
<li><p><em><strong>Testing resilience by recreating containers</strong></em></p>
</li>
<li><p><em><strong>Extending the solution to support multiple databases and S3 buckets</strong></em></p>
</li>
</ul>
<p>This transformation moves our setup from a simple working automation to a <strong>robust DevOps solution capable of handling real-world scenarios</strong>.</p>
<p>Designing scalable automation is a key skill for DevOps engineers, as it ensures systems can grow and adapt without requiring constant reconfiguration.</p>
<hr />
<h3>🔗 Continue the Series</h3>
<p>⬅️ <strong>Previous Article:</strong> <a href="https://ask-abhi.com/part-4-automating-mysql-backup-to-aws-s3">Part 4: Automating MySQL Backup to AWS S3</a><br />➡️ <strong>Next Article:</strong> <a href="https://ask-abhi.com/part-6-running-ansible-from-jenkins">Part 6: Running Ansible from Jenkins</a></p>
<hr />
<p>⭐ If you found this article useful, follow <strong><a href="https://ask-abhi.com">https://ask-abhi.com</a></strong> for more DevOps tutorials.</p>
<hr />
]]></content:encoded></item><item><title><![CDATA[Part 4: Automating MySQL Backup to AWS S3]]></title><description><![CDATA[Jenkins CI/CD Series



Part
Article



1
Install Jenkins with Docker


2
Creating Your First Jenkins Pipeline


3
Jenkins SSH Remote Execution


4
Automating MySQL Backup to AWS S3


5
Making Jenkins]]></description><link>https://ask-abhi.com/part-4-automating-mysql-backup-to-aws-s3</link><guid isPermaLink="true">https://ask-abhi.com/part-4-automating-mysql-backup-to-aws-s3</guid><category><![CDATA[Devops]]></category><category><![CDATA[automation]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Jenkins]]></category><category><![CDATA[AWS]]></category><category><![CDATA[MySQL]]></category><category><![CDATA[Backup]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Abhinandan Chougule]]></dc:creator><pubDate>Sat, 07 Mar 2026 02:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/1d2fb488-a310-4d4d-ba17-8d5abdb94415.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Jenkins CI/CD Series</h3>
<table>
<thead>
<tr>
<th>Part</th>
<th>Article</th>
</tr>
</thead>
<tbody><tr>
<td>1</td>
<td><a href="https://ask-abhi.com/part-1-install-jenkins-with-docker">Install Jenkins with Docker</a></td>
</tr>
<tr>
<td>2</td>
<td><a href="https://ask-abhi.com/part-2-creating-your-first-jenkins-pipeline">Creating Your First Jenkins Pipeline</a></td>
</tr>
<tr>
<td>3</td>
<td><a href="https://ask-abhi.com/part-3-jenkins-ssh-remote-execution">Jenkins SSH Remote Execution</a></td>
</tr>
<tr>
<td>4</td>
<td><a href="https://ask-abhi.com/part-4-automating-mysql-backup-to-aws-s3">Automating MySQL Backup to AWS S3</a></td>
</tr>
<tr>
<td>5</td>
<td><a href="https://ask-abhi.com/part-5-making-jenkins-automation-scalable">Making Jenkins Automation Scalable</a></td>
</tr>
<tr>
<td>6</td>
<td><a href="https://ask-abhi.com/part-6-running-ansible-from-jenkins">Running Ansible from Jenkins</a></td>
</tr>
<tr>
<td>7</td>
<td><a href="https://ask-abhi.com/part-7-control-jenkins-pipeline-through-github">Jenkins Pipeline from GitHub</a></td>
</tr>
<tr>
<td>8</td>
<td><a href="https://ask-abhi.com/part-8-multibranch-pipelines">Multibranch Pipelines</a></td>
</tr>
<tr>
<td>9</td>
<td><a href="https://ask-abhi.com/part-9-parameterized-pipelines">Parameterized Pipelines</a></td>
</tr>
<tr>
<td>10</td>
<td><a href="https://ask-abhi.com/part-10-environment-variables-in-pipelines">Environment Variables in Pipelines</a></td>
</tr>
</tbody></table>
<hr />
<h3>Goal</h3>
<p>The goal of this project is to build a <strong>real-world DevOps automation workflow</strong> where Jenkins automatically creates a <strong>MySQL database backup</strong> and uploads it to <strong>Amazon S3</strong>.</p>
<p>In this exercise, Jenkins will connect to the remote server created in Part 2, execute a script that performs a <code>mysqldump</code>, and securely upload the backup file to an S3 bucket using AWS CLI.</p>
<p>By the end of this project, you will have a <strong>fully automated database backup pipeline controlled by Jenkins</strong>.</p>
<p>We can make use of this server to take a MySQL DB backup and upload it to a remote location (S3)</p>
<hr />
<h3>Purpose</h3>
<p>The purpose of this exercise is to demonstrate how Jenkins can be used to automate <strong>infrastructure maintenance tasks</strong>, such as database backups and secure cloud storage operations.</p>
<p>In production environments, database backups are critical for:</p>
<ul>
<li><p><em><strong>Disaster recovery</strong></em></p>
</li>
<li><p><em><strong>Data protection</strong></em></p>
</li>
<li><p><em><strong>Compliance requirements</strong></em></p>
</li>
<li><p><em><strong>Operational resilience</strong></em></p>
</li>
</ul>
<p>Instead of manually running backup scripts, DevOps teams typically automate this process using CI/CD tools like Jenkins.</p>
<p>Through this project, readers will learn how to:</p>
<ul>
<li><p>Use Jenkins to trigger automation scripts</p>
</li>
<li><p>Execute scripts on remote infrastructure</p>
</li>
<li><p>Securely manage credentials</p>
</li>
<li><p>Integrate Jenkins with AWS services</p>
</li>
</ul>
<p>This project simulates a <strong>real-world backup automation pipeline commonly used in enterprise infrastructure</strong>.</p>
<hr />
<h3>Prerequisites</h3>
<p>Before starting this project, ensure the following components are already set up.</p>
<ul>
<li><p>Ready to use <code>Host</code> and the <code>directory</code> structure to run <code>Dockerfiles</code> and <code>docker-compose.yml</code> (Refer to <a href="https://ask-abhi.com/part-1-install-jenkins-with-docker">Part 1</a>)</p>
</li>
<li><p>An AWS account</p>
</li>
</ul>
<hr />
<h3>Step-by-step implementation</h3>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/8daffdf1-e7ed-4cc8-9625-89ed7a112fc9.png" alt="" style="display:block;margin:0 auto" />

<p>Before we create the "db" container, we need to install some useful tools on the remote-host container, mainly the <strong>MySQL client</strong> and the <strong>AWS CLI</strong>.</p>
<blockquote>
<p>Pro-Tip: Read Dockerfile carefully</p>
</blockquote>
<ul>
<li><strong>Go to</strong> <code>/home/jenkins/jenkins_home/centos7$</code> <strong>and Modify</strong> <code>Dockerfile</code></li>
</ul>
<pre><code class="language-dockerfile">FROM rockylinux:9

# Install required packages
RUN dnf install -y \
    openssh-server \
    passwd \
    sudo \
    mysql \
    unzip \
    less \
    groff &amp;&amp; \
    dnf clean all

# Install AWS CLI v2
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" &amp;&amp; \
    unzip awscliv2.zip &amp;&amp; \
    ./aws/install --update &amp;&amp; \
    rm -rf aws awscliv2.zip

# Create user
RUN useradd -m remote_user &amp;&amp; \
    echo "remote_user:1234" | chpasswd

# Configure SSH directory
RUN mkdir -p /home/remote_user/.ssh

# Copy public SSH key
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys

# Fix SSH permissions
RUN chown -R remote_user:remote_user /home/remote_user/.ssh &amp;&amp; \
    chmod 700 /home/remote_user/.ssh &amp;&amp; \
    chmod 600 /home/remote_user/.ssh/authorized_keys

# Enable password authentication for SSH
RUN sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/' /etc/ssh/sshd_config

# Generate SSH host keys
RUN ssh-keygen -A

# Expose SSH port
EXPOSE 22

# Start SSH daemon
CMD ["/usr/sbin/sshd","-D"]
</code></pre>
<ul>
<li><strong>Update</strong> <code>docker-compose.yml</code> <strong>to create a db container (the file lives in</strong> <code>\jenkins_home\</code><strong>)</strong><br />This creates a container named "db" from the <code>mysql:8.0</code> image, sets its root password to "1234", persists its data under the given volume path, and attaches it to the existing network so it can communicate with the other containers</li>
</ul>
<pre><code class="language-yaml">services:
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"
    volumes:
      - $PWD/jenkins_home:/var/jenkins_home
    networks:
      - net

  remote_host:
    container_name: remote-host
    image: remote-host
    build:
      context: centos7
    networks:
      - net

  db_host:
    container_name: db
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: 1234
    volumes:
      - $PWD/db_data:/var/lib/mysql
    networks:
      - net

networks:
  net:
</code></pre>
<ul>
<li><strong>Build a new image for the container remote-host (SSH server)</strong></li>
</ul>
<pre><code class="language-plaintext">docker compose build 
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/344e50a9-333b-4c1d-8d8a-776c0ce91fcc.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Create db container</strong></li>
</ul>
<pre><code class="language-plaintext">docker compose up -d
</code></pre>
<ul>
<li><strong>Verify the logs, if MySQL is ready</strong></li>
</ul>
<pre><code class="language-plaintext">docker logs -f db
</code></pre>
<ul>
<li><strong>Look at the message shown below</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/da815ff0-fab0-4c8a-ad06-9f17c0d364f6.png" alt="" style="display:block;margin:0 auto" />

<pre><code class="language-plaintext">docker ps
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/b287d5b8-178e-4d51-ad20-4afebf8ccf86.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Log in to the db container and verify</strong></li>
</ul>
<pre><code class="language-shell"># login to container
docker exec -it db bash

# login to mysql
mysql -u root -p

# password is given in docker compose
1234

# check databases
show databases;
</code></pre>
<ul>
<li><strong>These results should look like this:</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/90fe23bf-4779-4bca-98ef-c478ec911d4b.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Log in to the</strong> <code>remote-host</code> <strong>and check if the basic MySQL and AWS CLI tools are working</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/077fb2e5-4c25-43d7-99fb-fd897f686d37.png" alt="" style="display:block;margin:0 auto" />

<blockquote>
<p>Note - Errors like these are expected; they show the tools are installed.</p>
</blockquote>
<ul>
<li><p><strong>Go back to db container and log in</strong> <code>docker exec -ti db bash</code></p>
</li>
<li><p><strong>Create a Database by using the below SQL queries</strong></p>
</li>
</ul>
<pre><code class="language-plaintext"># Create DB
mysql&gt; create database testdb;

# Switch to the new DB
mysql&gt; use testdb;

# Create a table
mysql&gt; create table info (name varchar(20), lastname varchar(20), age int);

# Add a sample row
mysql&gt; insert into info values ('abhi', 'chougule', 30);

# Verify
mysql&gt; select * from info;
</code></pre>
<ul>
<li><strong>Look Like below</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/06d5f10e-571d-4bbf-bdd7-fb2651adb3cd.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Insert some details</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/b69960f9-d5cf-4d83-af38-ffc389f69e90.png" alt="" style="display:block;margin:0 auto" />

<h3>Next AWS S3</h3>
<p><strong>Create an S3 bucket by logging into AWS</strong></p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/51867d66-11b8-43c0-b621-467a38b7a2d0.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Create an IAM user with S3 access</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/7e991bf1-adac-45c2-adbb-d1f69cce611b.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Create a secret access key with the</strong> <code>Command Line Interface</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/c7570a7a-356f-4fc7-8207-a2a710dfcc3b.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Manual upload: Go back to the <mark class="bg-yellow-200 dark:bg-yellow-500/30">remote-host</mark> container and take a backup of MySQL</strong></li>
</ul>
<blockquote>
<p>[IMP] Note - We are taking backup from remote-host bash terminal</p>
</blockquote>
<pre><code class="language-plaintext">mysqldump -u root -h remote-host -p testdb &gt; /tmp/db.sql
</code></pre>
<ul>
<li><strong>Verify backup</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/30b5179c-89ef-4ef2-9ff1-011502cd1d79.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Configure the AWS CLI on the remote-host container and upload the backup to S3</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/3a053a05-50e0-415d-8f42-f73018df86a9.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Verify on S3</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/99598f3d-222b-4c09-bcec-d4d5d93facb9.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Let's automate this process of creating a dump and uploading it using Jenkins</strong></li>
</ul>
<p>Create a <code>script.sh</code> on remote-host at <code>/tmp/script.sh</code></p>
<pre><code class="language-shell">#!/bin/bash

DATE=$(date +%H-%M-%S)
BACKUP=db-$DATE.sql

DB_HOST=$1
DB_PASSWORD=$2
DB_NAME=$3
AWS_SECRET=$4
BUCKET_NAME=$5

mysqldump -u root -h "$DB_HOST" -p"$DB_PASSWORD" "$DB_NAME" &gt; /tmp/$BACKUP &amp;&amp; \
export AWS_ACCESS_KEY_ID=AKIAQE537I5GMUA722C6 &amp;&amp; \
export AWS_SECRET_ACCESS_KEY=$AWS_SECRET &amp;&amp; \
echo "uploading your $BACKUP backup" &amp;&amp; \
aws s3 cp /tmp/db-$DATE.sql s3://$BUCKET_NAME/$BACKUP
</code></pre>
<p><strong>Description</strong></p>
<p><strong>DATE</strong>: Timestamp captured when the script runs<br /><strong>BACKUP</strong>: Backup filename derived from that timestamp<br /><strong>$1–$5</strong>: Positional arguments; their order decides how the script must be called<br /><strong>mysqldump</strong>: Creates the dump on the remote host<br /><strong>export</strong>: The access key ID is provided in the script, while the secret access key will be retrieved from Jenkins credentials<br /><strong>aws s3 cp</strong>: Uploads the dump to the target bucket</p>
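<p>To check how the five positional arguments line up before wiring them into Jenkins, you can dry-run the assignment logic with placeholder values. This sketch only mirrors the variable handling of <code>script.sh</code>; no dump or upload happens here:</p>

```shell
#!/bin/bash
# Dry run of script.sh's argument handling; the values are placeholders.
# Simulates: /tmp/script.sh db_host 1234 testdb SECRET jenkins-mysql-backup-mylab
set -- db_host 1234 testdb SECRET jenkins-mysql-backup-mylab

DB_HOST=$1
DB_PASSWORD=$2
DB_NAME=$3
AWS_SECRET=$4
BUCKET_NAME=$5

DATE=$(date +%H-%M-%S)
BACKUP=db-$DATE.sql

echo "would dump $DB_NAME from $DB_HOST into /tmp/$BACKUP"
echo "would upload /tmp/$BACKUP to s3://$BUCKET_NAME/$BACKUP"
```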
<ul>
<li><p><strong>Go to the Jenkins console and create a job to trigger this process</strong></p>
</li>
<li><p><strong>Create secret credentials</strong></p>
</li>
<li><p><strong>Go to Manage Jenkins --&gt; Credentials --&gt; Add credentials --&gt; Secret text</strong><br />Create two secrets: <code>AWS_SECRET_ACCESS_KEY</code> and <code>MYSQL_PASSWORD</code></p>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/f71de4fc-48a4-4ecf-8fe5-d65752d5d6f3.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><p><strong>Create</strong> <code>backup-mysql</code> <code>freestyle</code> <strong>job and add these details</strong></p>
</li>
<li><p><strong>Select:</strong> <code>This Project is parameterized</code><br /><strong>String Parameter</strong>: <code>MYSQL_HOST</code><br /><strong>Default Value</strong>: <code>db_host</code></p>
<p><strong>String Parameter</strong>: <code>DATABASE_NAME</code><br /><strong>Default Value</strong>: <code>testdb</code></p>
<p><strong>String Parameter</strong>: <code>AWS_BUCKET_NAME</code><br /><strong>Default Value</strong>: <code>jenkins-mysql-backup-mylab</code></p>
</li>
<li><p><strong>Environments:</strong><br /><strong>secret text</strong>: <code>MYSQL_PASSWORD</code><br /><strong>Credentials</strong>: <code>&lt;MYSQL_PASSWORD&gt;</code></p>
<p><strong>secret text</strong>: <code>AWS_SECRET</code><br /><strong>Credentials</strong>: <code>&lt;AWS_SECRET_ACCESS_KEY&gt;</code></p>
</li>
<li><p><strong>Build Steps:</strong><br /><strong>Select</strong>: <code>Execute a shell script on a remote host using SSH</code><br /><strong>SSH site</strong>: <code>remote_user@remote_host:22</code><br /><strong>Command</strong>: <code>/tmp/script.sh $MYSQL_HOST $MYSQL_PASSWORD $DATABASE_NAME $AWS_SECRET $AWS_BUCKET_NAME</code></p>
</li>
</ul>
<blockquote>
<p>Pro-Tip: The <strong>Command</strong>'s argument order must match the positional parameters ($1-$5) defined in script.sh</p>
<p>Note 1 - The SSH site was configured earlier in this series<br />Go to <code>Manage Jenkins</code> --&gt; <code>SSH Server</code> and add the details of our SSH server</p>
<p>Note 2 - Pass every parameter created above, prefixed with "$", in the same order script.sh expects them</p>
</blockquote>
<p><strong>Here is what I set up</strong></p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/ab3e717b-443c-4544-8d21-79b623c9dc4a.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Run the</strong> <code>Build with Parameters</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/0f8efaa7-c086-4aa4-849e-e13bb169ea13.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Logs</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/cb3d7251-40db-40ab-89d2-81d8e98312ed.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Verify upload</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/602ed0d8-647f-4fc3-85c1-8db622cb7dbc.png" alt="" style="display:block;margin:0 auto" />

<h3><strong>Done!!!</strong></h3>
<hr />
<h3>Conclusion:</h3>
<p>In this project, we built a <strong>fully automated database backup pipeline using Jenkins</strong>.</p>
<p>The workflow included:</p>
<ul>
<li><p>Jenkins triggering automation tasks</p>
</li>
<li><p>Remote server executing the backup script</p>
</li>
<li><p>MySQL database dump creation</p>
</li>
<li><p>Secure upload to AWS S3</p>
</li>
</ul>
<p>This setup demonstrates a <strong>practical DevOps automation pattern used in production environments</strong>, where CI/CD tools manage infrastructure operations such as backups, deployments, and system maintenance.</p>
<p>By automating database backups through Jenkins, organizations can ensure <strong>reliable, repeatable, and secure data protection workflows</strong>.</p>
<hr />
<h3>🔗 Continue the Series</h3>
<p>⬅️ <strong>Previous Article:</strong> <a href="https://ask-abhi.com/part-3-jenkins-ssh-remote-execution">Part 3: Jenkins SSH Remote Execution</a><br />➡️ <strong>Next Article:</strong> <a href="https://ask-abhi.com/part-5-making-jenkins-automation-scalable">Part 5: Making Jenkins Automation Scalable</a></p>
<hr />
<p>⭐ If you found this article useful, follow <strong><a href="https://ask-abhi.com">https://ask-abhi.com</a></strong> for more DevOps tutorials.</p>
<hr />
]]></content:encoded></item><item><title><![CDATA[Part 2: Creating Your First Jenkins Pipeline]]></title><description><![CDATA[Jenkins CI/CD Series



Part
Article



1
Install Jenkins with Docker


2
Creating Your First Jenkins Pipeline


3
Jenkins SSH Remote Execution


4
Automating MySQL Backup to AWS S3


5
Making Jenkins]]></description><link>https://ask-abhi.com/part-2-creating-your-first-jenkins-pipeline</link><guid isPermaLink="true">https://ask-abhi.com/part-2-creating-your-first-jenkins-pipeline</guid><category><![CDATA[Devops]]></category><category><![CDATA[automation]]></category><category><![CDATA[Jenkins]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[GitHub]]></category><dc:creator><![CDATA[Abhinandan Chougule]]></dc:creator><pubDate>Sat, 07 Mar 2026 02:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/c20babb0-6167-49ab-9a54-ff9f6fe571c5.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Jenkins CI/CD Series</h3>
<table>
<thead>
<tr>
<th>Part</th>
<th>Article</th>
</tr>
</thead>
<tbody><tr>
<td>1</td>
<td><a href="https://ask-abhi.com/part-1-install-jenkins-with-docker">Install Jenkins with Docker</a></td>
</tr>
<tr>
<td>2</td>
<td><a href="https://ask-abhi.com/part-2-creating-your-first-jenkins-pipeline">Creating Your First Jenkins Pipeline</a></td>
</tr>
<tr>
<td>3</td>
<td><a href="https://ask-abhi.com/part-3-jenkins-ssh-remote-execution">Jenkins SSH Remote Execution</a></td>
</tr>
<tr>
<td>4</td>
<td><a href="https://ask-abhi.com/part-4-automating-mysql-backup-to-aws-s3">Automating MySQL Backup to AWS S3</a></td>
</tr>
<tr>
<td>5</td>
<td><a href="https://ask-abhi.com/part-5-making-jenkins-automation-scalable">Making Jenkins Automation Scalable</a></td>
</tr>
<tr>
<td>6</td>
<td><a href="https://ask-abhi.com/part-6-running-ansible-from-jenkins">Running Ansible from Jenkins</a></td>
</tr>
<tr>
<td>7</td>
<td><a href="https://ask-abhi.com/part-7-control-jenkins-pipeline-through-github">Jenkins Pipeline from GitHub</a></td>
</tr>
<tr>
<td>8</td>
<td><a href="https://ask-abhi.com/part-8-multibranch-pipelines">Multibranch Pipelines</a></td>
</tr>
<tr>
<td>9</td>
<td><a href="https://ask-abhi.com/part-9-parameterized-pipelines">Parameterized Pipelines</a></td>
</tr>
<tr>
<td>10</td>
<td><a href="https://ask-abhi.com/part-10-environment-variables-in-pipelines">Environment Variables in Pipelines</a></td>
</tr>
</tbody></table>
<hr />
<h3>Goal</h3>
<p>The goal of this project is to create and run the <strong>first Jenkins Pipeline</strong> that automates the process of building and testing an application.</p>
<p>In this exercise, we will configure Jenkins to pull application code from a Git repository, execute build commands using Maven, and run automated tests. This introduces the fundamental structure of a <strong>Jenkins Declarative Pipeline</strong>, including stages such as workspace cleanup, source code checkout, build execution, and testing.</p>
<p>By the end of this project, you will understand how Jenkins pipelines automate repetitive development tasks and form the foundation of modern CI/CD workflows.</p>
<hr />
<h3>Purpose</h3>
<p>The purpose of this exercise is to demonstrate how Jenkins pipelines automate the software development lifecycle by defining a structured workflow for building and testing applications.</p>
<p>In traditional development environments, developers manually run build and test commands. Jenkins pipelines eliminate these manual steps by automatically executing them whenever a job is triggered.</p>
<p>Through this project, you will learn how to:</p>
<ul>
<li><p>Define a <strong>Declarative Jenkins Pipeline</strong></p>
</li>
<li><p>Organize automation tasks using <strong>pipeline stages</strong></p>
</li>
<li><p>Integrate Jenkins with <strong>Git repositories</strong></p>
</li>
<li><p>Execute <strong>build and test commands</strong> automatically</p>
</li>
<li><p>Understand how CI pipelines validate code changes</p>
</li>
</ul>
<p>This hands-on exercise introduces the <strong>core concept of Continuous Integration</strong>, where code is automatically built and verified before moving further in the deployment pipeline.</p>
<hr />
<h3>Prerequisites</h3>
<p>Before starting this project, ensure the following requirements are completed:</p>
<ul>
<li><p>Ready to use <code>Host</code> and the <code>directory</code> structure to run <code>Dockerfiles</code> and <code>docker-compose.yml</code> (Refer to <a href="https://ask-abhi.com/part-1-install-jenkins-with-docker">Part 1</a>)</p>
</li>
<li><p><strong>Demo-app repo:</strong> Fork the <a href="https://github.com/abhinandan-chougule/demo-app-java-maven">application</a> repository used for the pipeline demonstration:</p>
</li>
</ul>
<hr />
<h3>Step-by-step Implementation</h3>
<p>Update <code>/home/jenkins/jenkins_home/docker-compose.yml</code>:</p>
<pre><code class="language-yaml">services:
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins
    ports:
      - "8080:8080"
    volumes:
      - $PWD/jenkins_home:/var/jenkins_home
    networks:
      - net
networks:
  net:
</code></pre>
<blockquote>
<p>Note - No need for a customized Dockerfile here; the stock Jenkins image from Docker Hub already has the packages required for this project</p>
</blockquote>
<pre><code class="language-plaintext">docker compose up -d
</code></pre>
<ul>
<li><strong>Install Maven</strong></li>
</ul>
<pre><code class="language-shell"># Log in to the container as root
docker exec -it -u root jenkins bash

# Install Maven
apt-get update &amp;&amp; apt-get install -y maven
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/95550608-4790-4a70-9b26-f63cefffb086.png" alt="" style="display:block;margin:0 auto" />

<blockquote>
<p>Note - Since Maven is already installed on my instance, apt reports that it exists</p>
</blockquote>
<ul>
<li><strong>Create a Jenkins job; I named it</strong> <code>maven-pipeline</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/02a3dea0-7c5e-4b75-aa74-df5c39cde701.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><p><strong>Create the</strong> <code>Jenkinsfile</code> <strong>separately and paste it into the Pipeline script area</strong></p>
</li>
<li><p><strong>Demo application</strong> <a href="https://github.com/abhinandan-chougule/demo-app-java-maven"><strong>repo</strong></a> <strong>can be</strong> <code>forked</code></p>
</li>
</ul>
<pre><code class="language-groovy">pipeline {
    agent any

    stages {

        stage('Clean Workspace') {
            steps {
                deleteDir()
            }
        }

        stage('Checkout Code') {
            steps {
                git url: 'https://github.com/abhinandan-chougule/demo-app-java-maven.git', branch: 'main'
            }
        }

        stage('Build with Maven') {
            steps {
                sh 'mvn clean install'
            }
        }

        stage('Run Tests') {
            steps {
                sh 'mvn test'
            }
        }
    }
}
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/02d63777-7639-47a0-9255-12bd64ade2ae.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Click on</strong> <code>Build Now</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/1e0d97b5-8244-423d-9c6f-285bfd6da001.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Console output Logs</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/e703944f-8b61-4ceb-aa39-58277f062bea.png" alt="" style="display:block;margin:0 auto" />

<h3>Done!!!</h3>
<hr />
<h3>Project 4: Jenkinsfile controlled through version control (GitHub)</h3>
<ul>
<li><strong>Goal</strong>:<br />Create a Pipeline job<br />Point it at the Jenkinsfile in the repository<br />Run the build</li>
</ul>
<blockquote>
<p>Note - We will reuse the Jenkins instance from the project above</p>
</blockquote>
<ul>
<li><p><strong>Fork</strong> <a href="https://github.com/abhinandan-chougule/demo-app-java-maven.git"><strong>repo</strong></a></p>
</li>
<li><p><strong>The</strong> <code>Jenkinsfile</code> <strong>placed at the location below in the repo will be used</strong></p>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/3b626a02-4ebd-4066-b5d2-a23966698191.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Create new Job</strong> <code>maven-github-pipeline</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/1d4981fa-062c-4281-9def-2ea1c3f5b6fe.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Select options as below</strong></li>
</ul>
<p><strong>Definition</strong>: <code>Pipeline script from SCM</code><br /><strong>SCM</strong>: <code>Git</code><br /><strong>Repository URL :</strong> <code>Forked repo HTTP URL</code><br /><strong>Branch Specifier (blank for 'any')</strong>: <code>*/main</code><br /><strong>Script</strong> <strong>Path</strong>: <code>jenkins/Jenkinsfile</code></p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/10b6f381-0920-45b2-aaca-d7be1b29edb3.png" alt="" style="display:block;margin:0 auto" />

<p><strong>Build now</strong></p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/73863c38-5ca1-4cca-a4ab-f8a6402d0575.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Go through Console output logs</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/652c7c4b-bfb9-49dc-94cf-537bb9304cb7.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Build one more time, open the job's Console Output, and compare</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/3604f007-0b12-41cd-9ece-d64a2643a7fb.png" alt="" style="display:block;margin:0 auto" />

<p><mark class="bg-yellow-200 dark:bg-yellow-500/30">This does not clone the repo again; instead, it checks whether there are any new commits and changes</mark></p>
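<p>You can reproduce this "fetch, don't re-clone" behaviour locally with two throwaway repositories. This is an illustration of Git's behaviour, not a Jenkins step; it assumes only that <code>git</code> is installed:</p>

```shell
# Illustrative: a clone followed by a fetch picks up only the new commits.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/origin"
git -C "$tmp/origin" -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "first"
git clone -q "$tmp/origin" "$tmp/work"     # first build: full clone (1 commit)
git -C "$tmp/origin" -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "second"
git -C "$tmp/work" fetch -q origin         # later builds: fetch only what is new
count=$(git -C "$tmp/work" rev-list --count FETCH_HEAD)
echo "commits visible after fetch: $count"
```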
<h3>Done!!!</h3>
<hr />
<h3>Project 5: Multibranch Pipeline</h3>
<ul>
<li><strong>Go to the repo and add another branch.</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/21f54923-0ffc-401c-b876-5a35808c7308.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Create a Multibranch Pipeline job pointing at the forked repo, then scan it; the scan will discover and populate the branches</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/b102edd3-ead9-48eb-bb81-d8da61f27b9c.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Let's make some changes to the repo and commit them to the new branch</strong> <code>devbranch</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/cc4013fb-fa85-4f3a-af25-b6fde1c7277e.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>I am deleting the last stage</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/9463fa15-2b8c-4908-82bd-5cbc311b8bba.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/13d36bd9-7447-4f96-bcb4-3812e59acc91.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/a30bf874-f3e9-4a2c-94ca-16a1be1c6e56.png" alt="" style="display:block;margin:0 auto" />

<p><strong>Done!!!</strong></p>
<hr />
<h3>Project 6: Parameterize to give the option in the Jenkins Job to run on a specific AZ</h3>
<h3>Goal</h3>
<p>Create a Jenkins pipeline named <code>parameter-pipeline</code> that accepts user inputs through parameters to control a deployment configuration. The pipeline should allow the user to specify a custom deployment name, choose the target AWS Availability Zone, and confirm the deployment before execution.</p>
<h3>Purpose</h3>
<p>The purpose of this configuration is to demonstrate how <strong>parameterized Jenkins pipelines</strong> work. By defining parameters, the pipeline becomes more flexible and interactive, allowing users to provide deployment-specific inputs at runtime instead of hardcoding values in the pipeline.</p>
<p>This exercise introduces three commonly used Jenkins parameter types:</p>
<ul>
<li><p><strong>String Parameter</strong> – to provide a custom deployment name.</p>
</li>
<li><p><strong>Choice Parameter</strong> – to select the AWS Availability Zone (ap-southeast-1a, ap-southeast-1b, ap-southeast-1c).</p>
</li>
<li><p><strong>Boolean Parameter</strong> – to confirm whether the deployment should proceed.</p>
</li>
</ul>
<p>Using parameterized pipelines is a common practice in <strong>DevOps CI/CD workflows</strong>, enabling controlled deployments across different environments or regions while minimizing manual changes to pipeline code.</p>
<p>Here is the code that will be used</p>
<pre><code class="language-groovy">pipeline {
    agent any

    parameters {
        string(
            name: "deploymentName",
            defaultValue: "",
            description: "Deployment Name?"
        )

        choice(
            name: "azDeploy",
            choices: ["ap-southeast-1a", "ap-southeast-1b", "ap-southeast-1c"],
            description: "What AZ?"
        )

        booleanParam(
            name: "confirmDeploy",
            defaultValue: false,
            description: "CONFIRM DEPLOYMENT?"
        )
    }

    stages {
        stage("Deploy") {
            steps {
                echo "Deployment Name: ${params.deploymentName}"
                echo "AZ Selected: ${params.azDeploy}"
                echo "Deployment Confirmation: ${params.confirmDeploy}"
            }
        }
    }
}
</code></pre>
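<p>As written, the pipeline only echoes the parameter values; the boolean does not actually gate anything. A minimal sketch of enforcing the confirmation — using the same parameter names as above, added as an extra stage — would be:</p>
<pre><code class="language-groovy">stage("Deploy Only If Confirmed") {
    // Declarative 'when' skips the stage (without failing the build)
    // unless the user ticked CONFIRM DEPLOYMENT
    when {
        expression { params.confirmDeploy }
    }
    steps {
        echo "Deploying ${params.deploymentName} to ${params.azDeploy}"
    }
}
</code></pre>
<p>Skipping rather than failing keeps the build green when someone runs the job just to preview its parameters.</p>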
<ul>
<li><strong>Create pipeline</strong> <code>parameter-pipeline</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/19ab4202-af9a-4259-bb26-87f8ef7d7c93.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Add the script to the</strong> <code>pipeline script</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/d6a281e0-1ff9-4e2a-b01b-154533527444.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><code>Build Now</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/483c7115-129f-4f90-8bd0-c7d22ad617cb.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Provide inputs and click</strong> <code>Build with Parameters</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/da21cb7c-31a1-4380-8731-08c16a274827.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Expected</strong> <code>console output</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/c4a41512-e3d4-496e-9b66-06b2485334a9.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Pipeline status</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/63257ee8-ea95-4ee2-900b-4ceecdb4c5f8.png" alt="" style="display:block;margin:0 auto" />
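<p>Because the job is parameterized, it can also be triggered remotely through Jenkins' REST endpoint <code>buildWithParameters</code>. A sketch — the host, username, and API token are placeholders, while the job and parameter names match the pipeline above; depending on your security settings a CSRF crumb may also be required:</p>
<pre><code class="language-plaintext">curl -X POST "http://&lt;jenkins-host&gt;:8080/job/parameter-pipeline/buildWithParameters" \
  --user "&lt;username&gt;:&lt;api-token&gt;" \
  --data "deploymentName=demo-app" \
  --data "azDeploy=ap-southeast-1a" \
  --data "confirmDeploy=true"
</code></pre>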

<p><strong>Done!!!</strong></p>
<hr />
<h3>Project: Using Variables in Jenkins Jobs to Make Them Reusable</h3>
<h3>Goal</h3>
<p>Create a Jenkins pipeline that demonstrates how to define and use <strong>environment variables</strong> within a Jenkins Declarative Pipeline. The pipeline initializes different types of variables (string, number, and boolean) and verifies that they can be accessed and printed during pipeline execution.</p>
<h3>Purpose</h3>
<p>The purpose of this task is to understand how <strong>environment variables work inside a Jenkins pipeline</strong> and how they can be referenced within stages and steps. By defining variables such as a string, a number, and a boolean inside the <code>environment</code> block, the pipeline provides a simple demonstration of how configuration values can be stored and reused across pipeline stages. Note that Jenkins stores every environment value as a string, regardless of how it is written.</p>
<p>This approach helps standardize pipeline configurations and allows DevOps engineers to manage commonly used values in a centralized location within the pipeline. Environment variables are widely used in CI/CD workflows for tasks such as configuring application settings, defining build parameters, and passing values between different stages of a pipeline.</p>
<p>Through this exercise, users learn how Jenkins pipelines can manage and reference variables efficiently, which is a foundational concept when building more advanced automation and deployment pipelines.</p>
<ul>
<li><strong>Prepare the code</strong></li>
</ul>
<pre><code class="language-groovy">pipeline {
    agent any

    environment {
        myString = "Hello World"
        myNumber = "10"   // environment entries are plain name = value pairs; 'def' is not allowed here
        myBool = "true"
    }

    stages{
        stage("Demo") {
            steps {
                echo "myString: ${myString}"
                echo "myNumber: ${myNumber}"
                echo "myBool: ${myBool}"
            }
        }
    }
}
</code></pre>
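<p>Environment variables from the top-level <code>environment</code> block are visible in every stage, and a stage can shadow them with its own block. A small illustrative sketch — the stage name is made up, and <code>BUILD_NUMBER</code> is a standard Jenkins built-in variable:</p>
<pre><code class="language-groovy">stage("Override Demo") {
    environment {
        myString = "Hello Stage" // shadows the global value inside this stage only
    }
    steps {
        echo "myString: ${myString}"
        echo "Build number: ${env.BUILD_NUMBER}"
    }
}
</code></pre>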
<ul>
<li><strong>Create pipeline</strong> <code>variable-jenkins</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/cb521024-5c27-44b5-b243-c5c36cc4833a.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Paste the code into the</strong> <code>pipeline script</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/4659097f-a1f2-4f34-840e-d1e2db00bbb6.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><p><strong>Build Now</strong></p>
</li>
<li><p><strong>Now we can see the pipeline using the variables given in the environment block</strong></p>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/2bbc1e60-fdd7-46fb-ba42-7e7596f28b6c.png" alt="" style="display:block;margin:0 auto" />

<h3>Done!!!</h3>
<hr />
<h3>Conclusion</h3>
<p>In this part of the Jenkins CI/CD series, we wrapped up the multibranch exercise and then worked with <strong>parameterized pipelines and environment variables</strong>. We built a pipeline that accepts string, choice, and boolean parameters at runtime, and another that defines values in the <code>environment</code> block and references them across stages.</p>
<p>These projects introduced how runtime inputs and centrally defined variables replace hardcoded values in a pipeline script.</p>
<p>Understanding parameters and environment variables is a critical step toward building scalable CI/CD workflows. They let teams standardize pipeline configuration, reduce manual edits, and reuse the same pipeline across environments and regions.</p>
<p>With this, the core projects of the series are complete; future articles will build on these foundations with more advanced automation workflows.</p>
<hr />
<h2>🔗 Continue the Series</h2>
<p>⬅️ <strong>Previous Article:</strong> <a href="#">Part 9 Parameterized Pipelines</a></p>
<hr />
<p>⭐ If you found this article useful, follow <strong><a href="https://ask-abhi.com">https://ask-abhi.com</a></strong> for more DevOps tutorials.</p>
<hr />
]]></content:encoded></item><item><title><![CDATA[Part 3: Jenkins + SSH Remote Execution]]></title><description><![CDATA[Jenkins CI/CD Series



Part
Article



1
Install Jenkins with Docker


2
Creating Your First Jenkins Pipeline


3
Jenkins SSH Remote Execution


4
Automating MySQL Backup to AWS S3


5
Making Jenkins]]></description><link>https://ask-abhi.com/part-3-jenkins-ssh-remote-execution</link><guid isPermaLink="true">https://ask-abhi.com/part-3-jenkins-ssh-remote-execution</guid><category><![CDATA[Devops]]></category><category><![CDATA[automation]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Site Reliability Engineering]]></category><dc:creator><![CDATA[Abhinandan Chougule]]></dc:creator><pubDate>Fri, 06 Mar 2026 02:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/d7f0c5c5-1055-4c0c-a8aa-61dd155bb629.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Jenkins CI/CD Series</h3>
<table>
<thead>
<tr>
<th>Part</th>
<th>Article</th>
</tr>
</thead>
<tbody><tr>
<td>1</td>
<td><a href="https://ask-abhi.com/part-1-install-jenkins-with-docker">Install Jenkins with Docker</a></td>
</tr>
<tr>
<td>2</td>
<td><a href="https://ask-abhi.com/part-2-creating-your-first-jenkins-pipeline">Creating Your First Jenkins Pipeline</a></td>
</tr>
<tr>
<td>3</td>
<td><a href="https://ask-abhi.com/part-3-jenkins-ssh-remote-execution">Jenkins SSH Remote Execution</a></td>
</tr>
<tr>
<td>4</td>
<td><a href="https://ask-abhi.com/part-4-automating-mysql-backup-to-aws-s3">Automating MySQL Backup to AWS S3</a></td>
</tr>
<tr>
<td>5</td>
<td><a href="https://ask-abhi.com/part-5-making-jenkins-automation-scalable">Making Jenkins Automation Scalable</a></td>
</tr>
<tr>
<td>6</td>
<td><a href="https://ask-abhi.com/part-6-running-ansible-from-jenkins">Running Ansible from Jenkins</a></td>
</tr>
<tr>
<td>7</td>
<td><a href="https://ask-abhi.com/part-7-control-jenkins-pipeline-through-github">Jenkins Pipeline from GitHub</a></td>
</tr>
<tr>
<td>8</td>
<td><a href="https://ask-abhi.com/part-8-multibranch-pipelines">Multibranch Pipelines</a></td>
</tr>
<tr>
<td>9</td>
<td><a href="https://ask-abhi.com/part-9-parameterized-pipelines">Parameterized Pipelines</a></td>
</tr>
<tr>
<td>10</td>
<td><a href="https://ask-abhi.com/part-10-environment-variables-in-pipelines">Environment Variables in Pipelines</a></td>
</tr>
</tbody></table>
<hr />
<h3>Goal</h3>
<p>The goal of this project is to extend the Jenkins environment created in <strong>Part 1</strong> by integrating it with a remote server through <strong>SSH (Secure Shell)</strong>. In this exercise, we will create a container that acts as a remote host with an SSH server and configure Jenkins to securely connect to it.</p>
<p>By the end of this project, Jenkins will be able to execute commands and scripts on a remote machine, enabling automation tasks such as running scripts, managing servers, and performing deployment-related operations.</p>
<hr />
<h3>Purpose</h3>
<p>The purpose of this exercise is to demonstrate how Jenkins can interact with external systems through SSH. In real-world DevOps environments, Jenkins often needs to run commands on remote machines to perform operations such as application deployment, server configuration, and maintenance tasks.</p>
<p>By creating a dedicated SSH-enabled container and integrating it with Jenkins, we simulate a typical automation scenario where Jenkins acts as the orchestration tool while executing commands on remote infrastructure.</p>
<p>This setup helps us to understand how Jenkins communicates with remote systems securely using SSH keys and credentials, which is a fundamental concept in building automated CI/CD pipelines.</p>
<hr />
<h3>Prerequisites</h3>
<p>Before starting this project, ensure the following requirements are completed:</p>
<ul>
<li><p>A ready-to-use <code>host</code> and the <code>directory</code> structure for running the <code>Dockerfiles</code> and <code>docker-compose.yml</code> (refer to <a href="https://ask-abhi.com/part-1-install-jenkins-with-docker">Part 1</a>)</p>
</li>
<li><p>Access to the Jenkins dashboard with administrative privileges.</p>
</li>
</ul>
<hr />
<h3>Step-by-Step Implementation</h3>
<p>The purpose of the SSH server is to run shell commands or scripts on other servers.</p>
<p>The directory should look like this:</p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/5e5482c6-7f97-42f8-81ec-09d3d04f5bf9.png" alt="" style="display:block;margin:0 auto" />

<blockquote>
<p>Note- Since we are using Docker, we will use Dockerfiles or images available on Docker Hub.</p>
</blockquote>
<p><strong>Create Containers</strong></p>
<p><strong>Jenkins</strong> - will download the latest image from Docker Hub<br /><strong>SSH server</strong> - Customized Dockerfile (name: remote-host)</p>
<ul>
<li>Create a Dockerfile in the directory <code>jenkins/jenkins_home/centos7</code></li>
</ul>
<blockquote>
<p>Note: centos7 is just a random directory name I used.</p>
</blockquote>
<p><code>Dockerfile</code></p>
<pre><code class="language-dockerfile">FROM rockylinux:9

RUN dnf install -y openssh-server passwd sudo &amp;&amp; dnf clean all

RUN useradd -m remote_user &amp;&amp; \
    echo "remote_user:1234" | chpasswd

RUN mkdir -p /home/remote_user/.ssh

COPY remote-key.pub /home/remote_user/.ssh/authorized_keys

RUN chown -R remote_user:remote_user /home/remote_user/.ssh &amp;&amp; \
    chmod 700 /home/remote_user/.ssh &amp;&amp; \
    chmod 600 /home/remote_user/.ssh/authorized_keys

RUN sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/' /etc/ssh/sshd_config

RUN ssh-keygen -A

EXPOSE 22

CMD ["/usr/sbin/sshd","-D"]
</code></pre>
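<p>The <code>remote-key.pub</code> file referenced by the <code>COPY</code> instruction must exist in the build context before the image is built. A key pair with matching names can be generated inside <code>jenkins/jenkins_home/centos7</code> like this (an empty passphrase keeps it usable by Jenkins non-interactively):</p>
<pre><code class="language-plaintext">ssh-keygen -t rsa -b 4096 -f remote-key -N ""
# remote-key     : private key - pasted into the Jenkins SSH credential later
# remote-key.pub : public key  - baked into the image as authorized_keys
</code></pre>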
<ul>
<li><strong>The YAML will create two containers, and I have placed it at</strong> <code>jenkins/jenkins_home/</code></li>
</ul>
<p><code>docker-compose.yml</code></p>
<pre><code class="language-yaml">services:
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins
    ports:
      - "8080:8080"
    volumes:
      - $PWD/jenkins_home:/var/jenkins_home
    networks:
      - net
  remote_host:
    container_name: remote-host
    image: remote-host
    build:
      context: centos7
    networks:
      - net
networks:
  net:
</code></pre>
<ul>
<li><strong>Build the remote-host image for the SSH server</strong></li>
</ul>
<pre><code class="language-plaintext">docker compose build
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/4926b00e-bfe2-4ce6-9ea6-86b316191e5f.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Verify Image</strong></li>
</ul>
<pre><code class="language-plaintext">docker images
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/2c4ecd99-9bbb-4f30-94a5-24741d1a8b25.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Create containers by using the docker-compose.yml file from</strong> <code>jenkins/jenkins_home/</code></li>
</ul>
<pre><code class="language-plaintext">docker compose up -d
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/b7c9d998-29e6-4e32-9ee0-ef3ab823e403.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Verify containers with</strong> <code>docker ps</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/77680176-f4df-4d99-917e-b0a68df99bb4.png" alt="" style="display:block;margin:0 auto" />
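<p>Both services attach to the same user-defined <code>net</code> network, so Docker's embedded DNS lets the Jenkins container reach the SSH server by name. Assuming an SSH client is available inside the Jenkins image, the connection can be smoke-tested with (password <code>1234</code>, as set in the Dockerfile):</p>
<pre><code class="language-plaintext">docker exec -it jenkins ssh remote_user@remote-host
</code></pre>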

<ul>
<li><strong>Log in to Jenkins</strong></li>
</ul>
<blockquote>
<p>Note: browse to <code>host_ip:8080</code> and log in with the credentials you created (refer to Part 1 of this series if needed).</p>
</blockquote>
<ul>
<li><strong>Go to Manage Jenkins --&gt; Plugins and install the SSH plugin</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/53865929-7be2-4049-b878-443c05692d1c.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Go to Manage Jenkins --&gt; Credentials --&gt; SSH Username with Private key</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/efddc612-4cf4-4e50-840a-a6f61af8e4dd.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Add the username</strong> <code>remote_user</code> <strong>and paste the private key below</strong></li>
</ul>
<blockquote>
<p>Note: this key pair was created as part of the Docker setup; the public key lives in the build context directory (<code>centos7</code>) and is baked into the remote-host image as <code>authorized_keys</code>, while the private key is what Jenkins stores in this credential.</p>
</blockquote>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/bb623bd2-935a-4a16-9d7e-50674aa247e4.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/3fb8c164-fcd9-4294-9505-ea1dc0ce3163.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Go to</strong> <code>Manage Jenkins</code> <strong>--&gt;</strong> <code>SSH Servers</code> <strong>and add the details of our SSH server, then</strong> <code>Check connection</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/a0df1038-02a9-4eaa-93b3-ab37774656a9.png" alt="" style="display:block;margin:0 auto" />

<blockquote>
<p>At this point, the SSH integration is complete.</p>
</blockquote>
<ul>
<li><strong>Let's test by creating a Freestyle project and testing</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/7bf982db-f798-4a28-979f-661dad2fe7d5.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Go to Build Steps and add Execute shell script on remote host using SSH</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/8f2b6fbb-2576-4773-81e0-ab0956354160.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Select our user details from the drop-down</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/aca8be4a-628f-4d62-938a-85704d9dd1a1.png" alt="" style="display:block;margin:0 auto" />
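<p>The command in the screenshot is not reproduced here; any shell snippet will do. An illustrative one that matches the verification step that follows — creating a marker file on the remote host (the path is made up) — would be:</p>
<pre><code class="language-plaintext">echo "Executed from Jenkins at $(date)" &gt; /tmp/jenkins-remote-test.txt
</code></pre>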

<ul>
<li><strong>Verify that this job runs the command on the remote-host and creates a file</strong></li>
</ul>
<p><strong>Click</strong> <code>Build now</code> <strong>and check</strong> <code>console output</code> <strong>from Job number</strong></p>
<ul>
<li><strong>Logs</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/aa6bc4ad-4385-47c7-8833-4332e8215a73.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Verify if the file has been created or not on the</strong> <code>remote-host</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/78ec8a4e-a50e-46ea-8b44-a9b7fd92b143.png" alt="" style="display:block;margin:0 auto" />

<h3>Done!!!</h3>
<hr />
<h3>Conclusion</h3>
<p>In this part of the series, we successfully created a remote server container running an SSH service and integrated it with Jenkins. We configured Jenkins to securely connect to the remote host using SSH credentials and executed commands on the remote system through a Jenkins job.</p>
<p>This setup demonstrates how Jenkins can control external infrastructure and automate tasks beyond its own environment. Remote command execution is widely used in DevOps pipelines for tasks such as application deployment, server maintenance, configuration management, and system monitoring.</p>
<p>With this integration in place, Jenkins is now capable of orchestrating tasks across multiple systems, which is a critical capability for modern CI/CD automation.</p>
<p>In the next part of the series, we will build on this foundation by creating a <strong>real-world automation workflow where Jenkins performs a MySQL database backup and uploads it to AWS S3 automatically</strong>.</p>
<hr />
<h2>🔗 Continue the Series</h2>
<p>⬅️ <strong>Previous Article:</strong> <a href="https://ask-abhi.com/part-2-creating-your-first-jenkins-pipeline">Part 2 Creating Your First Jenkins Pipeline</a><br />➡️ <strong>Next Article:</strong> <a href="https://ask-abhi.com/part-4-automating-mysql-backup-to-aws-s3">Part 4 Automating MySQL Backup to AWS S3</a></p>
<hr />
<p>⭐ If you found this article useful, follow <strong><a href="https://ask-abhi.com">https://ask-abhi.com</a></strong> for more DevOps tutorials.</p>
<hr />
]]></content:encoded></item><item><title><![CDATA[Gateway API: Modern Kubernetes Traffic Routing replacing Ingress]]></title><description><![CDATA[Kubernetes networking has evolved significantly over the years. Traditional Ingress controllers solved many problems, but they were often limited in flexibility and extensibility.
The Gateway API intr]]></description><link>https://ask-abhi.com/gateway-api-modern-kubernetes-traffic-routing-replacing-ingress</link><guid isPermaLink="true">https://ask-abhi.com/gateway-api-modern-kubernetes-traffic-routing-replacing-ingress</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Devops]]></category><category><![CDATA[EKS]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Gateway API]]></category><dc:creator><![CDATA[Abhinandan Chougule]]></dc:creator><pubDate>Thu, 05 Mar 2026 18:52:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/70c9669f-5ce6-44a7-a785-5854644a5d4e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Kubernetes networking has evolved significantly over the years. Traditional Ingress controllers solved many problems, but they were often limited in flexibility and extensibility.</p>
<p>The <strong>Gateway API</strong> introduces a more powerful and expressive model for managing traffic in Kubernetes clusters.</p>
<p>In this guide, we will build a <strong>modern traffic routing architecture using KGateway and the Gateway API on Kubernetes</strong>.</p>
<hr />
<p><strong>Advantages</strong>:</p>
<ul>
<li><p>Better than traditional Ingress</p>
</li>
<li><p>Standard Kubernetes networking API</p>
</li>
<li><p>Flexible routing rules</p>
</li>
<li><p>Multi-cluster support</p>
</li>
<li><p>Advanced traffic management</p>
</li>
</ul>
<hr />
<p><strong>We will cover:</strong></p>
<ul>
<li><p>Installing Gateway API CRDs</p>
</li>
<li><p>Deploying the KGateway controller</p>
</li>
<li><p>Creating a Gateway</p>
</li>
<li><p>Deploying an application</p>
</li>
<li><p>Routing traffic using HTTPRoute</p>
</li>
</ul>
<table>
<thead>
<tr>
<th>Component</th>
<th>Purpose</th>
</tr>
</thead>
<tbody><tr>
<td>Gateway API CRDs</td>
<td>Extend Kubernetes networking</td>
</tr>
<tr>
<td>KGateway Controller</td>
<td>Implements the Gateway API</td>
</tr>
<tr>
<td>Gateway</td>
<td>Entry point for external traffic</td>
</tr>
<tr>
<td>HTTPRoute</td>
<td>Defines traffic routing rules</td>
</tr>
<tr>
<td>Service</td>
<td>Internal load balancing</td>
</tr>
<tr>
<td>Pods</td>
<td>Running application</td>
</tr>
</tbody></table>
<h3>Environment Setup</h3>
<p>Create an EC2 instance with a <code>t2.medium</code> instance type and a <code>30 GB gp3</code> disk; it will be used as a jump host to manage the cluster.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/cf97144e-db1c-4124-9d89-8216e0fb1bb6.png" alt="" style="display:block;margin:0 auto" />

<pre><code class="language-plaintext">sudo apt update
</code></pre>
<ul>
<li><h3>Connect to EC2 by SSH</h3>
</li>
<li><h3>Clone the <a href="https://github.com/abhinandan-chougule/eks-terraform-code.git">repo</a></h3>
</li>
</ul>
<p>This will create all basic configurations for EKS</p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/df4aef7b-32df-4985-b9fd-33f9df77b517.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Install AWS CLI</strong></li>
</ul>
<pre><code class="language-plaintext">#!/bin/bash

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip
unzip awscliv2.zip
sudo ./aws/install
aws configure
</code></pre>
<ul>
<li><h3>Create an access key and secret key if this is your first time using the AWS CLI, and enter them when <code>aws configure</code> prompts for them</h3>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/8e82995f-c96a-4c09-b866-c664a4c6c5da.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/eeec8484-6ccf-47c5-9765-e2210315bc76.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Install Terraform</strong></li>
</ul>
<pre><code class="language-plaintext">#!/bin/bash

#Update system package
sudo apt-get update

#Install GNU software properties and curl packages
sudo apt-get install -y gnupg software-properties-common -y

#Install the HashiCorp GPG key
wget -O- https://apt.releases.hashicorp.com/gpg | \
gpg --dearmor | \
sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg &gt; /dev/null

#Verify the key's fingerprint
gpg --no-default-keyring \
--keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg \
--fingerprint

echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
sudo tee /etc/apt/sources.list.d/hashicorp.list

sudo apt update
sudo apt-get install terraform -y

terraform --version
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/ef576e23-df15-4fd2-a29f-c7b0444bfb42.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Install Kubectl</strong></li>
</ul>
<pre><code class="language-plaintext">#!/bin/bash

curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin
kubectl version --short --client
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/aa89a919-dae4-4d94-8ddb-41168c1f8138.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><code>aws configure</code> <strong>to connect with the AWS account</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/f2349d33-223e-439e-9607-748dfd4de94b.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><code>aws sts get-caller-identity</code> to Verify Account</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/5ccda608-fde0-49e1-8c38-ae48bd14bad2.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><code>cd</code> <strong>to the directory and run</strong> <code>terraform init</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/017932b7-7b9d-40c5-8b94-8b7a82426be8.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><code>terraform plan</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/b4465a02-9732-4b56-adb7-7bcaa55a3e3c.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Apply the changes to deploy EKS</strong></li>
</ul>
<p><code>terraform apply --auto-approve</code></p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/43814715-e9b8-491a-9077-4ff5f4dda6fe.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Install helm</strong></li>
</ul>
<pre><code class="language-plaintext">sudo apt update &amp;&amp; sudo apt upgrade -y
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/dc7c67fb-6a54-4429-95cb-4548543ffde3.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Verify if the cluster is accessible</strong></li>
</ul>
<pre><code class="language-shell">aws eks --region ap-southeast-1 update-kubeconfig --name askabhi-cluster

kubectl get nodes
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/6766fcb2-bf94-4f84-a48a-6a43325453be.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Install CRD for kgateway by helm upgrade</strong></li>
</ul>
<blockquote>
<p>Note: <strong>Prerequisites</strong></p>
<p>Before you begin, ensure you have:</p>
<ul>
<li><p>A Kubernetes cluster.</p>
</li>
<li><p><code>kubectl</code> installed and configured to communicate with your cluster.</p>
</li>
<li><p><code>helm</code> installed, the package manager for Kubernetes.</p>
</li>
</ul>
</blockquote>
<p><strong>Install the Kubernetes kgateway CRD and Gateway API CRDs (Custom Resource Definitions)</strong><br />Kgateway is an implementation of the Kubernetes Gateway API, which requires specific CRDs to be present in your cluster. The official documentation recommends installing the CRDs separately from the main <code>kgateway</code> chart.</p>
<pre><code class="language-plaintext">helm upgrade -i --create-namespace --namespace kgateway-system kgateway-crds oci://cr.kgateway.dev/kgateway-dev/charts/kgateway-crds --version v2.3.0-main
</code></pre>
<p><em>(Note: The</em> <code>--version</code> <em>tag may need to be updated to the</em> <a href="https://github.com/kgateway-dev/kgateway/releases"><em><strong>latest release</strong></em></a><em>.)</em></p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/a9189b5b-c7ad-4c78-aeb1-807bc3535b19.png" alt="" style="display:block;margin:0 auto" />

<p><strong>Install the Kgateway Controller</strong><br />Once the CRDs are in place, install the main <code>kgateway</code> controller.</p>
<pre><code class="language-plaintext">helm upgrade -i --namespace kgateway-system kgateway oci://cr.kgateway.dev/kgateway-dev/charts/kgateway --version v2.3.0-main
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/418571f4-906d-456d-85f4-38a15a8f3c06.png" alt="" style="display:block;margin:0 auto" />

<p><strong>Verify the Installation</strong><br />After the installation is complete, verify that the <code>kgateway</code> pods are running in the <code>kgateway-system</code> namespace.</p>
<pre><code class="language-plaintext">kubectl get pods -n kgateway-system
</code></pre>
<p>You should see pods with a <code>Running</code> status. You can also verify that the <code>GatewayClass</code> resource named <code>kgateway</code> has been created.</p>
<p>We need to decide which controller to use; the screenshot below lists the available GatewayClasses.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/2ca96657-0d01-4a47-9a96-37114c4ff091.png" alt="" style="display:block;margin:0 auto" />

<pre><code class="language-plaintext">kubectl get gatewayclass kgateway
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/47f88b85-c0b1-4631-9e1d-efbf82375efa.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Install the Gateway API CRDs</strong></li>
</ul>
<p>First, apply the standard CRDs from the official Kubernetes sigs <a href="https://github.com/kubernetes-sigs/gateway-api"><strong>GitHub repository</strong></a> to your cluster. The standard channel is recommended for most users as it includes stable (GA or beta) resources:</p>
<pre><code class="language-plaintext">kubectl apply --server-side -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.5.0/standard-install.yaml
</code></pre>
<p><strong>To verify the installation, you can run:</strong></p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/d1662e17-b27f-41b9-87b0-87383025820d.png" alt="" style="display:block;margin:0 auto" />

<p><strong>Verify</strong></p>
<pre><code class="language-plaintext">kubectl get crds | grep gateway
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/db142be7-db8a-4853-ab2d-d1b060871a79.png" alt="" style="display:block;margin:0 auto" />

<p><strong>Install the httpbin Application</strong></p>
<p>The <code>httpbin</code> sample application for <a href="https://kgateway.dev/docs/envoy/2.0.x/install/sample-app/"><strong>kgateway</strong></a> (formerly part of Gloo) can be deployed in Kubernetes using the following command to test routing rules:</p>
<pre><code class="language-plaintext">kubectl apply -f https://raw.githubusercontent.com/kgateway-dev/kgateway/refs/heads/main/examples/httpbin.yaml
</code></pre>
<p><strong>Purpose</strong>: This YAML deploys a <code>httpbin</code> service in a namespace (typically <code>httpbin</code>), which is used to verify that the Gateway proxy is correctly routing external traffic to backend services.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/0a75e158-3590-4016-8363-3c3946691d30.png" alt="" style="display:block;margin:0 auto" />

<p><strong>Verify the namespaces that now exist</strong></p>
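<p>The listing can be produced with the standard query:</p>
<pre><code class="language-plaintext">kubectl get namespaces
</code></pre>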
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/9021be64-d995-4e05-b128-fc6761a7bc06.png" alt="" style="display:block;margin:0 auto" />

<p><strong>Check whether a Gateway already exists</strong></p>
<pre><code class="language-plaintext">kubectl get gateway
</code></pre>
<p><strong>Create the Gateway that will receive external traffic</strong></p>
<pre><code class="language-plaintext">kubectl apply -f- &lt;&lt;EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: kgateway-system
spec:
  gatewayClassName: kgateway
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    # Optional: restricts which namespaces can attach routes to this listener
    allowedRoutes:
      namespaces:
        from: All # Other options: Same, Selector
EOF
</code></pre>
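<p>The <code>allowedRoutes</code> comment above mentions <code>Same</code> and <code>Selector</code> as alternatives to <code>All</code>. For reference, a <code>Selector</code>-based listener only lets routes attach from namespaces carrying a chosen label; a sketch (the label key and value below are hypothetical examples, not part of this setup):</p>
<pre><code class="language-yaml"># Listener fragment: only namespaces carrying this label may attach routes.
# The label key/value is illustrative; pick your own convention.
allowedRoutes:
  namespaces:
    from: Selector
    selector:
      matchLabels:
        gateway-access: allowed
</code></pre>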
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/4814f531-4c9d-4aba-b9f3-13f4c56447da.png" alt="" style="display:block;margin:0 auto" />

<p><strong>Verify the Gateway was created</strong></p>
<pre><code class="language-plaintext">kubectl get gateway
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/7be32513-dda2-4b7c-b796-38da755a55e1.png" alt="" style="display:block;margin:0 auto" />

<p>The address shown in the snapshot above is the Cloud Load Balancer DNS name: <code>a243521ba1b904868a7d7b8707db3b7c-2045018130.ap-southeast-1.elb.amazonaws.com</code></p>
<ul>
<li><strong>Copy this and go to GoDaddy and add a CNAME</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/5d9a3fe5-848d-47aa-a45b-7b937dedc235.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li>Go back to the server and do <code>nslookup</code> for the Cloud Load Balancer and get the IP.</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/102f0d99-208c-4291-aa85-8304cfbe5080.png" alt="" style="display:block;margin:0 auto" />

<blockquote>
<p>Note: you should see IPs like the ones below; if not, wait for the load balancer to finish provisioning and try again.</p>
</blockquote>
<ul>
<li><strong>Add an</strong> <code>A</code> <strong>record with the IP you see</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/a920703b-e952-441c-a5df-9c44d5ea16fd.png" alt="" style="display:block;margin:0 auto" />

<p><strong>Route the traffic to the Domain</strong></p>
<pre><code class="language-yaml">kubectl apply -f- &lt;&lt;EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: httpbin
  namespace: httpbin
spec:
  parentRefs:
  - name: kgateway-system
    namespace: default
  hostnames:
  - "mytestproject.shop"
  rules:
  - backendRefs:
    - name: httpbin
      port: 8000
EOF
</code></pre>
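<p>The route above forwards all traffic for the hostname to a single backend. If you later need path-based routing on the same domain, the <code>rules</code> section can be extended with a match; a sketch (the <code>/api</code> prefix is a hypothetical example):</p>
<pre><code class="language-yaml"># Extended rules fragment for the HTTPRoute above: requests whose path starts
# with /api still go to httpbin here, but could point at a different backendRef.
rules:
- matches:
  - path:
      type: PathPrefix
      value: /api
  backendRefs:
  - name: httpbin
    port: 8000
</code></pre>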
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/f7c7b8ed-fe90-4274-b850-f57324729e3c.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>What did we do? We routed traffic for the domain to</strong> <code>service/httpbin</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/f670d96f-ca76-41ba-8e73-258dd1082608.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li>Now you can access the domain in a browser</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/6711f183-824d-4009-a9d8-2b60a21d2086.png" alt="" style="display:block;margin:0 auto" />

<hr />
<p><strong>Conclusion</strong>:</p>
<p>Kubernetes networking continues to evolve, and the Gateway API represents the next step forward in traffic management.</p>
<p>By combining <strong>Gateway API and KGateway</strong>, we can build scalable and flexible networking architectures that are easier to manage than traditional ingress-based setups.</p>
<p>This architecture enables modern DevOps teams to implement advanced routing patterns while keeping Kubernetes networking standardized and maintainable.</p>
]]></content:encoded></item><item><title><![CDATA[Part 1 – Install Jenkins with Docker]]></title><description><![CDATA[Jenkins CI/CD Series



Part
Article



1
Install Jenkins with Docker


2
Creating Your First Jenkins Pipeline


3
Jenkins SSH Remote Execution


4
Automating MySQL Backup to AWS S3


5
Making Jenkins]]></description><link>https://ask-abhi.com/part-1-install-jenkins-with-docker</link><guid isPermaLink="true">https://ask-abhi.com/part-1-install-jenkins-with-docker</guid><category><![CDATA[Devops]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Jenkins]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Abhinandan Chougule]]></dc:creator><pubDate>Thu, 05 Mar 2026 02:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/36399262-a435-4965-99e7-54fb2be6ca61.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3>Jenkins CI/CD Series</h3>
<table>
<thead>
<tr>
<th>Part</th>
<th>Article</th>
</tr>
</thead>
<tbody><tr>
<td>1</td>
<td><a href="https://ask-abhi.com/part-1-install-jenkins-with-docker">Install Jenkins with Docker</a></td>
</tr>
<tr>
<td>2</td>
<td><a href="https://ask-abhi.com/part-2-creating-your-first-jenkins-pipeline">Creating Your First Jenkins Pipeline</a></td>
</tr>
<tr>
<td>3</td>
<td><a href="https://ask-abhi.com/part-3-jenkins-ssh-remote-execution">Jenkins SSH Remote Execution</a></td>
</tr>
<tr>
<td>4</td>
<td><a href="https://ask-abhi.com/part-4-automating-mysql-backup-to-aws-s3">Automating MySQL Backup to AWS S3</a></td>
</tr>
<tr>
<td>5</td>
<td><a href="https://ask-abhi.com/part-5-making-jenkins-automation-scalable">Making Jenkins Automation Scalable</a></td>
</tr>
<tr>
<td>6</td>
<td><a href="https://ask-abhi.com/part-6-running-ansible-from-jenkins">Running Ansible from Jenkins</a></td>
</tr>
<tr>
<td>7</td>
<td><a href="https://ask-abhi.com/part-7-control-jenkins-pipeline-through-github">Jenkins Pipeline from GitHub</a></td>
</tr>
<tr>
<td>8</td>
<td><a href="https://ask-abhi.com/part-8-multibranch-pipelines">Multibranch Pipelines</a></td>
</tr>
<tr>
<td>9</td>
<td><a href="https://ask-abhi.com/part-9-parameterized-pipelines">Parameterized Pipelines</a></td>
</tr>
<tr>
<td>10</td>
<td><a href="https://ask-abhi.com/part-10-environment-variables-in-pipelines">Environment Variables in Pipelines</a></td>
</tr>
</tbody></table>
<blockquote>
<p>Part 1 is the base requirement for the rest of the series.</p>
</blockquote>
<h3>Goal</h3>
<p>The goal of this project is to install <strong>Jenkins using Docker</strong> and create a fully functional Jenkins environment that can be used for building CI/CD pipelines. By the end of this exercise, Jenkins will be running inside a Docker container and accessible through a web browser.</p>
<p>This setup provides a clean and reproducible environment for Jenkins, making it easy to deploy, manage, and upgrade without directly installing Jenkins on the host operating system.</p>
<h3>Purpose</h3>
<p>The purpose of this exercise is to introduce the <strong>foundation of Jenkins-based CI/CD environments</strong> by containerizing Jenkins using Docker. Running Jenkins inside a container ensures portability, easier management, and better isolation from the host system.</p>
<p>This approach is widely used in modern DevOps environments because it allows teams to quickly spin up Jenkins instances, maintain persistent data using volumes, and scale automation infrastructure efficiently.</p>
<p>Through this project, we will:</p>
<ul>
<li><p>Install Docker and configure the Docker repository</p>
</li>
<li><p>Pull the official Jenkins image from Docker Hub</p>
</li>
<li><p>Deploy Jenkins using <strong>Docker Compose</strong></p>
</li>
<li><p>Configure persistent storage for Jenkins data</p>
</li>
<li><p>Access the Jenkins web interface and complete the initial setup</p>
</li>
</ul>
<p>This forms the <strong>starting point for the Jenkins CI/CD Mastery series</strong>, where the Jenkins instance created in this step will be used for building pipelines, automating tasks, and integrating with other tools in later projects.</p>
<h3>Prerequisites</h3>
<p>Prepare Host Linux machine locally on Windows/Mac (<a href="https://ask-abhi.com/building-a-kubernetes-cluster-on-a-laptop-using-kubeadm-virtualbox-and-vagrant">Refer</a>), or on any Cloud</p>
<blockquote>
<p>Pro-Tip: AWS EC2 is recommended. Stop the EC2 instance whenever you finish a task to avoid unnecessary charges.</p>
</blockquote>
<h3>Step-by-Step Implementation</h3>
<p>If you are reusing an existing Linux system, run the following command to uninstall all conflicting packages:</p>
<pre><code class="language-plaintext">sudo apt remove $(dpkg --get-selections docker.io docker-compose docker-compose-v2 docker-doc podman-docker containerd runc | cut -f1)
</code></pre>
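<p>To see what that one-liner does: <code>dpkg --get-selections</code> prints one <code>package&lt;TAB&gt;status</code> line per installed package, and <code>cut -f1</code> keeps only the first tab-separated field, i.e. the package names handed to <code>apt remove</code>. The extraction step in isolation:</p>
<pre><code class="language-shell"># Simulate dpkg --get-selections output (TAB-separated name/status pairs)
# and keep only the package-name column, as the uninstall one-liner does.
printf 'docker.io\tinstall\ncontainerd\tinstall\n' | cut -f1
# prints:
# docker.io
# containerd
</code></pre>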
<h3><strong>Install using the</strong> <code>apt</code> <strong>repository</strong></h3>
<p>Before you install Docker Engine for the first time on a new host machine, you need to set up the Docker <code>apt</code> repository. Afterward, you can install and update Docker from the repository.</p>
<ul>
<li><strong>Set up Docker's</strong> <code>apt</code> <strong>repository.</strong></li>
</ul>
<p>Add Docker's official GPG key:</p>
<pre><code class="language-plaintext">sudo apt update
sudo apt install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
</code></pre>
<p>Add the repository to Apt sources:</p>
<pre><code class="language-plaintext">sudo tee /etc/apt/sources.list.d/docker.sources &lt;&lt;EOF
Types: deb
URIs: https://download.docker.com/linux/ubuntu
Suites: $(. /etc/os-release &amp;&amp; echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}")
Components: stable
Signed-By: /etc/apt/keyrings/docker.asc
EOF
</code></pre>
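<p>The <code>Suites:</code> line resolves to your Ubuntu release codename at run time. On Ubuntu or Debian you can preview the value it will expand to:</p>
<pre><code class="language-shell"># Source the OS metadata and print the codename Docker's repo config will use
# (UBUNTU_CODENAME on Ubuntu, falling back to VERSION_CODENAME elsewhere).
. /etc/os-release
echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}"
</code></pre>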
<pre><code class="language-plaintext">sudo apt update
</code></pre>
<hr />
<h3>Install the Docker packages.</h3>
<p>To install the latest version, run:</p>
<pre><code class="language-console">sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
</code></pre>
<blockquote>
<p><strong>Note</strong></p>
<p>The Docker service starts automatically after installation. To verify that Docker is running, use:</p>
</blockquote>
<pre><code class="language-plaintext">sudo systemctl status docker
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/e7f454d6-5677-4976-9665-66e18a90f41b.png" alt="" style="display:block;margin:0 auto" />

<blockquote>
<p>Some systems may have this behavior disabled and will require a manual start:</p>
</blockquote>
<pre><code class="language-plaintext"> sudo systemctl start docker
</code></pre>
<hr />
<h3>Download Jenkins Image</h3>
<p>The official image ships with all the required configuration.</p>
<pre><code class="language-plaintext">sudo docker pull jenkins/jenkins
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/8f485107-a89a-4203-8531-f21f28940b5b.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Create and move into the folder</strong> <code>/home/jenkins</code> (the <code>jenkins_home</code> subfolder is created in a later step)</li>
</ul>
<p><strong>Create</strong> <code>docker-compose.yml</code></p>
<pre><code class="language-yaml">services:
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins
    ports:
      - "8080:8080"
    volumes:
      - $PWD/jenkins_home:/var/jenkins_home
    networks:
      - net
networks:
  net:
</code></pre>
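<p>A slightly hardened variant of the same file, in case you want Jenkins to survive host reboots and plan to attach build agents later (the extra lines are optional additions, not requirements for this part):</p>
<pre><code class="language-yaml">services:
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins:lts   # pin the LTS line instead of the default tag
    restart: unless-stopped      # restart automatically after host reboots
    ports:
      - "8080:8080"              # web UI
      - "50000:50000"            # inbound (JNLP) agent port, used in later parts
    volumes:
      - $PWD/jenkins_home:/var/jenkins_home
    networks:
      - net
networks:
  net:
</code></pre>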
<ul>
<li><strong>Create the data folder referenced in the YAML above:</strong> <code>/home/jenkins/jenkins_home</code></li>
</ul>
<p><strong>Give the permissions</strong></p>
<pre><code class="language-plaintext">sudo chown 1000:1000 jenkins_home -R
docker compose up -d
docker ps
</code></pre>
<blockquote>
<p>Note: always run <code>docker compose</code> from the directory where <code>docker-compose.yml</code> resides.</p>
</blockquote>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/8e42043f-51a8-4645-9172-7b37b6c503b7.png" alt="" style="display:block;margin:0 auto" />

<blockquote>
<p>Pro Tips - Check the logs. <code>docker logs -f jenkins</code></p>
</blockquote>
<ul>
<li><strong>Access Jenkins at</strong> <code>&lt;host-IP&gt;:8080</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/effbd585-e2eb-4fe8-95fb-74a3e5b19d48.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>The password will be stored at</strong> <code>/var/jenkins_home/secrets/initialAdminPassword</code></li>
</ul>
<hr />
<h3><strong>Install Suggested Plugins</strong></h3>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/80b41529-3836-4734-9a97-e59932ab48e5.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Create an admin user and go to the Home page</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/7dbd94ff-4095-487a-9729-dffa8095e813.png" alt="" style="display:block;margin:0 auto" />

<h3><strong>Done!!!</strong></h3>
<hr />
<h3>Conclusion:</h3>
<p>In this first part of the <strong>Jenkins CI/CD Mastery Series</strong>, we successfully set up Jenkins using Docker, providing a clean and portable environment for running Jenkins without installing it directly on the host operating system. By containerizing Jenkins, we ensured that the setup is easy to manage, reproducible, and scalable for future CI/CD workloads.</p>
<p>We installed Docker, configured the Docker repository, pulled the official Jenkins image, and deployed Jenkins using <strong>Docker Compose</strong> with persistent storage. Finally, we accessed the Jenkins web interface and completed the initial setup by installing the suggested plugins and creating the administrator account.</p>
<p>This Jenkins instance will serve as the <strong>foundation for the rest of the series</strong>, where we will build real-world automation and CI/CD workflows.</p>
<p>In the next part, we will extend this environment by creating an <strong>SSH-enabled remote server container</strong> and integrating it with Jenkins. This will allow Jenkins to execute commands on remote systems — a key capability used in many DevOps automation pipelines.</p>
<hr />
<h3>🔗 Continue the Series</h3>
<p>⬅️ <strong>Previous Article:</strong> <a href="https://ask-abhi.com">Home</a><br />➡️ <strong>Next Article:</strong> <a href="https://ask-abhi.com/part-2-creating-your-first-jenkins-pipeline">Part 2 <strong>Creating Your First Jenkins Pipeline</strong></a></p>
<hr />
<p>⭐ If you found this article useful, follow <strong><a href="https://ask-abhi.com">https://ask-abhi.com</a></strong> for more DevOps tutorials.</p>
<hr />
]]></content:encoded></item><item><title><![CDATA[Part 6: Observability stack setup to monitor]]></title><description><![CDATA[How to monitor EKS cluster, Kubernetes components, and workloads using Prometheus and Grafana via HELM (on the master machine)

Install Helm Chart


curl -fsSL -o get_helm.sh https://raw.githubusercon]]></description><link>https://ask-abhi.com/part-6-observability-stack-setup-to-monitor</link><guid isPermaLink="true">https://ask-abhi.com/part-6-observability-stack-setup-to-monitor</guid><dc:creator><![CDATA[Abhinandan Chougule]]></dc:creator><pubDate>Wed, 04 Mar 2026 10:59:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/8a23c6a6-6031-49d2-bb6e-9a2e07ee5ef7.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<ul>
<li><h2><strong>How to monitor EKS cluster, Kubernetes components, and workloads using Prometheus and Grafana via HELM (on the master machine)</strong></h2>
</li>
<li><h3>Install Helm Chart</h3>
</li>
</ul>
<pre><code class="language-bash">curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
</code></pre>
<ul>
<li><strong>Add Helm Stable Charts for Your Local Client</strong></li>
</ul>
<pre><code class="language-bash">helm repo add stable https://charts.helm.sh/stable
</code></pre>
<ul>
<li><strong>Add Prometheus Helm Repository</strong></li>
</ul>
<pre><code class="language-bash">helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
</code></pre>
<ul>
<li><h3>Create Prometheus Namespace</h3>
</li>
</ul>
<pre><code class="language-bash">kubectl create namespace prometheus
kubectl get ns
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/6dc42bbc-4407-4412-b34e-70aacb862b8a.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li>Install Prometheus using Helm</li>
</ul>
<pre><code class="language-bash">helm install stable prometheus-community/kube-prometheus-stack -n prometheus
</code></pre>
<ul>
<li>Verify Prometheus installation</li>
</ul>
<pre><code class="language-bash">kubectl get pods -n prometheus
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/04ddb757-4143-45b6-973c-567ac92313fc.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Check the services file (svc) of the Prometheus</strong></li>
</ul>
<pre><code class="language-plaintext">kubectl get svc -n prometheus
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/f7c3d438-ff0b-4f0f-ac2a-63314316d8b7.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Expose Prometheus and Grafana to the external world through Node Port</strong></li>
</ul>
<blockquote>
<p><em><strong>[Note] Change the service type from ClusterIP to NodePort. After making the change, save the file and open the NodePort assigned to the service.</strong></em></p>
</blockquote>
<pre><code class="language-bash">kubectl edit svc/stable-kube-prometheus-sta-prometheus -n prometheus
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/83a91c9b-8d44-4b8f-bae3-34b8dd33a019.png" alt="" style="display:block;margin:0 auto" />

<p><strong>Verify service</strong></p>
<pre><code class="language-bash">kubectl get svc -n prometheus
</code></pre>
<ul>
<li><strong>Now, let’s change the SVC file of Grafana and expose it to the outer world</strong></li>
</ul>
<pre><code class="language-bash">kubectl edit svc stable-grafana -n prometheus
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/683c26f6-ed6a-441e-9626-1b5849d570ad.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Check the Grafana service</strong></li>
</ul>
<pre><code class="language-bash">kubectl get svc -n prometheus
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/b7995261-2aa0-4009-bc45-2356bdf0bd9b.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Access the Grafana with</strong> <code>&lt;public-ip-of-master&gt;:32108</code> <strong>and Prometheus with</strong> <code>&lt;public-ip-of-master&gt;:31521</code></li>
</ul>
<blockquote>
<p>Note: first, port-forward the services with the commands below so the ports are reachable on the master's public IP:</p>
</blockquote>
<pre><code class="language-plaintext">kubectl port-forward svc/stable-grafana 32108:80 -n prometheus --address 0.0.0.0 &amp;

kubectl port-forward svc/stable-kube-prometheus-sta-prometheus 31521:9090 -n prometheus --address 0.0.0.0 &amp;
</code></pre>
<ul>
<li><strong>Prometheus:</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/35e4dbdb-ad12-4dc1-a902-b8acbe31a629.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Get a password for Grafana</strong></li>
</ul>
<blockquote>
<p>[Note] Username: admin</p>
</blockquote>
<pre><code class="language-plaintext">kubectl get secret --namespace prometheus stable-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
</code></pre>
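<p>What this command does: the admin password sits base64-encoded in the <code>stable-grafana</code> Secret, <code>jsonpath</code> extracts the field, and <code>base64 --decode</code> recovers the plain text. The decoding step in isolation, using the chart's well-known default password as an illustrative value:</p>
<pre><code class="language-shell"># Decode a base64-encoded Secret field the way the kubectl pipeline does.
# 'cHJvbS1vcGVyYXRvcg==' encodes the kube-prometheus-stack default password.
encoded='cHJvbS1vcGVyYXRvcg=='
echo "$encoded" | base64 --decode
# prints: prom-operator
</code></pre>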
<ul>
<li><strong>Now, view the Dashboard in Grafana</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/44030b15-1525-4ede-b727-1468ec5b3e7a.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Click on Dashboards</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/29142bfc-5ea9-4ac7-a591-8543bbf958f7.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Set the Data Source to Prometheus and the Namespace to wanderlust, then view the last 5 minutes</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/da472d40-df57-442c-af6c-e1d69228eb82.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Find a custom dashboard by searching on</strong> <a href="http://google.com"><strong>google.com</strong></a></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/04e86853-1eb6-4a76-976f-6670c49976cb.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Select any dashboard you like and copy its ID</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/8f412c00-216b-4f22-b759-8ffd8545c2ac.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li>Import</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/2952c239-86c4-4ce7-8a0a-6f51bfddcc53.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/b9657187-12f1-4074-946e-8070798a1050.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/650ab50d-9cbf-4082-ba17-58238b11006a.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/004f5072-ef01-4a0e-ba45-12030394968e.png" alt="" style="display:block;margin:0 auto" />

<p>Finish!!!</p>
<p>If you followed all the parts and implemented every step successfully, it's time to celebrate: these are among the most in-demand skills in the market.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/5c5940b0-bf9a-4229-98ee-5ca2a2ec9f3c.png" alt="" style="display:block;margin:0 auto" />

<hr />
]]></content:encoded></item><item><title><![CDATA[Part 5: Set up ArgoCD ]]></title><description><![CDATA[Go to the Jenkins-master node and add our own EKS cluster to ArgoCD for application deployment using the CLI

Login to argoCD from CLI


argocd login 65.2.186.181:31797 --username admin


[Note] 65.2.]]></description><link>https://ask-abhi.com/part-5-set-up-argocd</link><guid isPermaLink="true">https://ask-abhi.com/part-5-set-up-argocd</guid><dc:creator><![CDATA[Abhinandan Chougule]]></dc:creator><pubDate>Wed, 04 Mar 2026 09:44:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/fb0841f5-925a-4004-932c-71caba5649eb.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<ul>
<li><h3>Go to the Jenkins-master node and add our own EKS cluster to ArgoCD for application deployment using the CLI</h3>
</li>
<li><p>Login to <code>argoCD</code> from CLI</p>
</li>
</ul>
<pre><code class="language-plaintext">argocd login 65.2.186.181:31797 --username admin
</code></pre>
<blockquote>
<p>[Note] 65.2.186.181:31797 --&gt; This should be your argocd url and after login enter your password</p>
</blockquote>
<ul>
<li><strong>Check how many clusters are available in</strong> <code>argocd</code></li>
</ul>
<pre><code class="language-plaintext">argocd cluster list
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/c73a5596-1a65-46d6-bdaf-ceddee66497b.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Get your cluster name</strong></li>
</ul>
<pre><code class="language-plaintext">kubectl config get-contexts
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/710d5f7a-606f-4491-b930-1481d1f2332f.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Prepare the command to add your cluster to</strong> <code>argocd</code> <strong>and Enter</strong></li>
</ul>
<pre><code class="language-plaintext">argocd cluster add iam-root-account@wanderlust.ap-southeast-1.eksctl.io --name wanderlust-eks-cluster
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/725d879e-81cb-4e74-b87e-4663b16bf7da.png" alt="" style="display:block;margin:0 auto" />

<blockquote>
<p>Note: an Unknown status can be caused by sync issues or misconfiguration, or it may simply take some time to resolve. Troubleshoot accordingly, continue for now, and try re-adding the cluster after connecting the repository in the next step.</p>
</blockquote>
<p><strong>Go to Settings --&gt; Repositories and click on Connect your forked</strong> <a href="https://github.com/abhinandan-chougule/fullstack-gitops-project"><strong>repo</strong></a></p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/7d8e5dc1-f5fa-4cc0-b552-265a70b969f2.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/a163f7c8-9354-42e7-9350-dbf358f2ba74.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/e89afcab-7e71-47ec-8f53-a2dfe1d6b2cd.png" alt="" style="display:block;margin:0 auto" />

<blockquote>
<p><em><strong>[Note] Connection should be <mark class="bg-yellow-200 dark:bg-yellow-500/30">successful</mark></strong></em></p>
</blockquote>
<ul>
<li><strong>Now, go to Applications and click on New App</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/62c55bdb-b269-4ea1-9ff4-e8ee93e5d690.png" alt="" style="display:block;margin:0 auto" />

<blockquote>
<p><em><strong>[Important] Make sure to click on the Auto-Create Namespace option while creating argocd application</strong></em></p>
</blockquote>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/1e3fc38c-3df4-4a2f-9b6a-af7988cca829.png" alt="" style="display:block;margin:0 auto" />
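<p>For reference, the UI steps above roughly correspond to this declarative Argo CD <code>Application</code> manifest (the name, branch, repo path, and namespace below are illustrative assumptions; adjust them to your fork and setup):</p>
<pre><code class="language-yaml">apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: wanderlust              # illustrative application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/abhinandan-chougule/fullstack-gitops-project
    targetRevision: main        # assumed branch
    path: kubernetes            # assumed path to the Kubernetes manifests
  destination:
    server: https://kubernetes.default.svc   # or your added EKS cluster URL
    namespace: wanderlust
  syncPolicy:
    syncOptions:
    - CreateNamespace=true      # equivalent of the Auto-Create Namespace checkbox
</code></pre>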

<p><mark class="bg-yellow-200 dark:bg-yellow-500/30">Congratulations, your application is deployed on the AWS EKS Cluster</mark></p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/9fc70c84-dd14-484e-a427-7a310f578f39.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/126e1a89-199e-497c-a9f0-657a7c01d17f.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/e81e4a81-2386-4d08-ade5-869b25338270.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Open ports 31000 and 31100 on the nodes of the EKS Cluster, and access it on a browser</strong></li>
</ul>
<pre><code class="language-plaintext">&lt;worker-public-ip&gt;:31000
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/3f3ebb9e-1176-4011-a5dd-798a6b18f5cf.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/16c9c7bc-f2a9-4cc0-8409-a36e99ad9990.png" alt="" style="display:block;margin:0 auto" />

<p><strong>Email Notification</strong></p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/5598964f-bb9a-4f1b-bd90-fd61f2e88c95.png" alt="" style="display:block;margin:0 auto" />

<p>Next...</p>
<hr />
<div>
<div>💡</div>
<div><a target="_blank" rel="noopener noreferrer nofollow" class="text-primary underline underline-offset-2 hover:text-primary/80 cursor-pointer" href="https://hashnode.com/edit/cmmbxcd9q009s2eoa8z5vbnsj" style="pointer-events:none">Part 6: Observability stack setup to monitor</a></div>
</div>]]></content:encoded></item><item><title><![CDATA[Part 4: Create CI CD Jobs in Jenkins]]></title><description><![CDATA[Create a fullstack CI pipeline



The Jenkins Pipeline script you will get in my GitHub —> Jenkinsfile, before putting the pipeline script in Jenkins, do the below steps:


Fork the Repository

Go to ]]></description><link>https://ask-abhi.com/part-4-ci-job-for-security-and-quality-gate</link><guid isPermaLink="true">https://ask-abhi.com/part-4-ci-job-for-security-and-quality-gate</guid><dc:creator><![CDATA[Abhinandan Chougule]]></dc:creator><pubDate>Wed, 04 Mar 2026 08:22:57 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/56a1da62-0e3a-4baf-b039-640a62306906.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3><strong>Create a <mark class="bg-yellow-200 dark:bg-yellow-500/30">fullstack CI</mark> pipeline</strong></h3>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/842054ff-b285-4154-a8d6-c37d9e2dadc7.png" alt="" style="display:block;margin:0 auto" />

<blockquote>
<p><em><strong>The Jenkins Pipeline script you will get in my</strong></em> <a href="https://github.com/abhinandan-chougule/fullstack-gitops-project.git"><em><strong>GitHub</strong></em></a> <em><strong>—&gt;</strong></em> <a href="https://github.com/abhinandan-chougule/fullstack-gitops-project/blob/main/Jenkinsfile">Jenkinsfile</a><em><strong>, before putting the pipeline script in Jenkins, do the below steps:</strong></em></p>
</blockquote>
<ul>
<li><p><strong>Fork the</strong> <a href="https://github.com/abhinandan-chougule/fullstack-gitops-project.git"><strong>Repository</strong></a></p>
</li>
<li><p><strong>Go to</strong> <code>Jenkinsfile</code></p>
</li>
<li><p><strong>Make Sure you have forked and configured the shared library repository for</strong> <code>jenkinsfile</code> <strong>which uses</strong> <code>groovy</code> <strong>syntax.</strong></p>
</li>
<li><p><strong>Name the Jenkins slave node by</strong> <code>Node</code> <strong>only.</strong></p>
</li>
<li><p><strong>The SonarQube tool, which Jenkins will install, should be named as</strong> <code>Sonar</code> <strong>as per the pipeline script.</strong></p>
</li>
<li><p><strong>Make Sure you change the repository URL and branch as per your GitHub URL.</strong></p>
</li>
<li><p><strong>Change the image name and username.</strong></p>
</li>
</ul>
<blockquote>
<p>Note: for a successful pipeline run, set every customized parameter to match your own environment.</p>
</blockquote>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/b0dfb8c9-a023-4506-983e-97493071ebbb.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/8c81e593-2dd1-4278-a71b-11c14ffc7171.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li>In place of <code>fullstack-backend-beta</code>, put the name of the Docker image you want, and in place of <code>abhic25</code>, put your Docker Hub username so the image gets tagged with it.</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/c0a046b3-3021-46b6-baa9-22fff7644434.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/4a02f6f2-4e37-47cf-adc9-4a6323cb972e.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/ec53f169-e2ae-4d09-aa46-e12b5e2ac4fd.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/289b7036-6f5a-4cd4-84a8-01247b0ff80d.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/284beaf0-bf40-4031-94a6-2eb3d7379b3d.png" alt="" style="display:block;margin:0 auto" />

<hr />
<h3>Create one more pipeline <mark class="bg-yellow-200 dark:bg-yellow-500/30">fullstack-cd</mark></h3>
<ul>
<li>The Jenkins Pipeline script you will get in my <a href="https://github.com/abhinandan-chougule/fullstack-gitops-project/tree/main/GitOps">GitHub</a> —&gt; GitOps → <a href="https://github.com/abhinandan-chougule/fullstack-gitops-project/blob/main/GitOps/Jenkinsfile">Jenkinsfile</a>, before putting the pipeline script in Jenkins, change all the customized parameters with respect to environments</li>
</ul>
<p>Ex. What we did for the CI Job earlier</p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/7d3170e2-f698-4757-8509-61af6b32e823.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Go to</strong> <code>Kubernetes</code> <strong>→ change the username in</strong> <code>backend.yaml</code> <strong>and</strong> <code>frontend.yaml</code> <strong>to your Docker Hub username.</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/a53302f6-0861-4c35-aa4b-19a9dab2436e.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/7ffa00d5-f9a5-45ae-9cdb-b776a7bd0069.png" alt="" style="display:block;margin:0 auto" />
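<p>If you prefer to script that edit, a minimal sketch follows. The sample manifest written here is a stand-in so the snippet runs on its own; in your fork you would run the <code>sed</code> line against the real <code>Kubernetes/backend.yaml</code> and <code>Kubernetes/frontend.yaml</code>, and <code>your-dockerhub-user</code> is a placeholder.</p>

```shell
# Sketch: swap the Docker Hub username in the Kubernetes manifests.
# The manifest created below is a stand-in for the repo's real files.
mkdir -p Kubernetes
printf 'image: abhic25/fullstack-backend-beta:v1\n' > Kubernetes/backend.yaml

OLD_USER="abhic25"                 # username currently in the repo
NEW_USER="your-dockerhub-user"     # placeholder: your Docker Hub username
for f in Kubernetes/*.yaml; do
  sed -i "s|image: ${OLD_USER}/|image: ${NEW_USER}/|" "$f"
done
grep 'image:' Kubernetes/backend.yaml   # verify the change
```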

<p>After all of this, our jobs are now ready. <strong>Create one more pipeline, <mark class="bg-yellow-200 dark:bg-yellow-500/30">fullstack-cd</mark></strong></p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/486f1aa0-d3a8-4d57-8d9c-1dfd43fc5de0.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><p>Provide permission to docker socket so that Docker build and push commands do not fail (On the Jenkins Slave)</p>
</li>
<li><pre><code class="language-shell">sudo chmod 777 /var/run/docker.sock
</code></pre>
</li>
<li><h3>Now click on <strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">fullstack</mark></strong>-ci and build it; when it completes, <strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">fullstack</mark></strong>-cd will automatically trigger and get built.</h3>
</li>
<li><blockquote>
<p>Note: we provide a new tag in the CI job each time, so the CD job detects the change in the image and completes successfully.</p>
</blockquote>
</li>
<li><img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/2fb98bf4-1b3e-4e80-bbbf-635b586e219f.png" alt="" style="display:block;margin:0 auto" />
</li>
<li><p><strong>CI Job</strong></p>
</li>
</ul>
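<p>Since the CD job only redeploys when the image reference changes, a common way to guarantee a fresh tag per run is to derive it from Jenkins' build number. A minimal sketch, assuming the image name is a placeholder and <code>BUILD_NUMBER</code> is injected by Jenkins (the fallback is only so the snippet runs standalone):</p>

```shell
# Derive a unique image tag per CI run so the CD job always sees a change.
BUILD_NUMBER="${BUILD_NUMBER:-42}"                    # set by Jenkins in a real run
IMAGE="your-dockerhub-user/fullstack-backend-beta"    # placeholder image name
TAG="v${BUILD_NUMBER}"
echo "${IMAGE}:${TAG}"
# In the pipeline, this is the tag that gets passed to:
#   docker build -t "${IMAGE}:${TAG}" . && docker push "${IMAGE}:${TAG}"
```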
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/bed94f22-20b2-4e8d-8692-b92123031b28.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>CD Job</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/6478ac67-175d-4305-9eea-37e484567619.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/2a7c7e65-0c74-4366-804c-922411e40ea1.png" alt="" style="display:block;margin:0 auto" />

<p>Next...</p>
<hr />
<div>
<div>💡</div>
<div><a target="_blank" rel="noopener noreferrer nofollow" class="text-primary underline underline-offset-2 hover:text-primary/80 cursor-pointer" href="https://hashnode.com/edit/cmmbuom77004x2eoacxwh32cn" style="pointer-events:none">Part 5: Set up ArgoCD</a></div>
</div>]]></content:encoded></item><item><title><![CDATA[Part 3: Tools Integration with Jenkins]]></title><description><![CDATA[Go to Jenkins Master and click on Manage Jenkins --> Plugins --> Available plugins. Install the following plugins:Install (Some of them get installed during the suggested plugins installation stage at]]></description><link>https://ask-abhi.com/part-3-tools-integration-with-jenkins</link><guid isPermaLink="true">https://ask-abhi.com/part-3-tools-integration-with-jenkins</guid><dc:creator><![CDATA[Abhinandan Chougule]]></dc:creator><pubDate>Wed, 04 Mar 2026 07:12:38 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/9390fe57-ddff-4d72-8d77-79dfde44c7bf.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Go to Jenkins Master and click on Manage Jenkins --&gt; Plugins --&gt; Available plugins. Install the following plugins:<br /><strong>Install</strong> (Some of them get installed during the suggested plugins installation stage at start)</p>
<p>• OWASP (Dependency Check)<br />• SonarQube Scanner (Quality)<br />• Git (Code)<br />• Docker (Containerization)<br />• Docker Pipeline<br />• Pipeline: Stage View<br />• Pipeline: Job<br />• Blue Ocean</p>
<p>Fork the GitHub repos: <a href="https://github.com/abhinandan-chougule/fullstack-gitops-project.git">fullstack</a> and <a href="https://github.com/abhinandan-chougule/shared-library-jenkins.git">shared-library</a></p>
<ul>
<li><p><strong>Configure OWASP, move to <mark class="bg-yellow-200 dark:bg-yellow-500/30">Manage Jenkins --&gt; Plugins --&gt; Available plugins</mark></strong></p>
</li>
<li><p><strong>After the OWASP plugin is installed, move to <mark class="bg-yellow-200 dark:bg-yellow-500/30">Manage Jenkins --&gt; Tools</mark></strong></p>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/1cd6069c-0526-43bd-b2b4-8dea273d38ce.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Log in to the SonarQube server and create the credentials for Jenkins to integrate with SonarQube<br />Navigate to Administration --&gt; Security --&gt; Users --&gt; Token</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/66ef4d3c-8abe-4ca6-b7b8-abfe69d9f8dd.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/57af62e3-8474-42c7-a11c-a4d69862659a.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Now, go to Manage Jenkins --&gt; Credentials</strong><br /><strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">SonarQube: Integration</mark></strong><mark class="bg-yellow-200 dark:bg-yellow-500/30"><br /></mark><strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">GitHub: Pull shared library and application source code</mark></strong><mark class="bg-yellow-200 dark:bg-yellow-500/30"><br /></mark><strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">Docker Hub: to push the image</mark></strong><br /><strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">Create API credentials for OWASP</mark></strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/08d5b806-8d2f-4f94-a689-6eae94af1acb.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>GitHub user name and PAT (as a password)</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/632633aa-6828-4875-a1bb-5c537c46ea80.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><p><strong>Create and use an NVD API key so OWASP Dependency-Check can download vulnerability data through the API</strong><br />Generate an API key at: <a href="https://nvd.nist.gov/developers/request-an-api-key">https://nvd.nist.gov/developers/request-an-api-key</a><br />It takes ~10 seconds; the key arrives by email.</p>
</li>
<li><p><strong>Add the key to Jenkins credentials. In Jenkins: Manage Jenkins → Credentials → Global → Add Credentials</strong><br />Type: Secret text<br />ID: NVD_API_KEY<br />Secret: your API key (e.g., 397ff9xxxxxxxxxxxxxx7b7d1c37)</p>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/7a6155db-8ad6-414e-843c-046bf4231d17.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Go to Manage Jenkins --&gt; Tools and search for SonarQube Scanner installations:</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/295a2a9e-e365-4201-ab03-e8edff8ba366.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Now again, go to Manage Jenkins --&gt; System and search for Global Trusted Pipeline Libraries:</strong></li>
</ul>
<blockquote>
<p><em><strong>Note: before configuring this shared library, fork the shared-library GitHub</strong></em> <a href="https://github.com/abhinandan-chougule/shared-library-jenkins.git"><em><strong>repository</strong></em></a><em><strong> and use your fork's repository URL.</strong></em></p>
</blockquote>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/45f1d0aa-f9f2-4a98-bb1b-cf785d8b0905.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Log in to the SonarQube server, go to <mark class="bg-yellow-200 dark:bg-yellow-500/30">Administration --&gt; Webhook,</mark> and click on create</strong></li>
</ul>
<blockquote>
<p>Use the public IP of the Jenkins CI (master) node and port 8080, because the webhook calls back into Jenkins running there (SonarQube itself listens on port 9000 on the same host).</p>
<p><a href="http://public-ip-of-master:8080/sonarqube-webhook">http://public-ip-of-master:8080/sonarqube-webhook</a></p>
</blockquote>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/3f5125c9-fe0d-4587-9208-f1867957e8f0.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Now, go to the fullstack GitHub</strong> <a href="https://github.com/abhinandan-chougule/fullstack-gitops-project.git"><strong>repository</strong></a> <strong>(hope you forked it already) and, under the <mark class="bg-yellow-200 dark:bg-yellow-500/30">Automations</mark> directory, update the <mark class="bg-yellow-200 dark:bg-yellow-500/30">instance-id</mark> field in both</strong> <a href="http://updatefrontendnew.sh"><strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">updatefrontendnew.sh</mark></strong></a> <strong>and</strong> <a href="http://updatebackendnew.sh"><strong><mark class="bg-yellow-200 dark:bg-yellow-500/30">updatebackendnew.sh</mark></strong></a> <strong>with the EKS worker node's instance ID</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/b71ae7dc-8157-415d-8dd2-5a09891d1cf3.png" alt="" style="display:block;margin:0 auto" />

<p>Next...</p>
<hr />
<div>
<div>💡</div>
<div><a target="_blank" rel="noopener noreferrer nofollow" class="text-primary underline underline-offset-2 hover:text-primary/80 cursor-pointer" href="https://hashnode.com/edit/cmmbrre88000f2dn08be909i2" style="pointer-events:none">Part 4: Create CI CD Jobs in Jenkins</a></div>
</div>]]></content:encoded></item><item><title><![CDATA[Part 2: Install Jenkins Master, Slave & Tools]]></title><description><![CDATA[What will we set up in this part?

Jenkins Master

Jenkins Slave

EKS

SonarQube

ArgoCD

Trivy




[Note] This project will be implemented in the Singapore region (ap-southeast-1).

Setting up the Je]]></description><link>https://ask-abhi.com/part-2-create-jenkins-master-and-slave</link><guid isPermaLink="true">https://ask-abhi.com/part-2-create-jenkins-master-and-slave</guid><dc:creator><![CDATA[Abhinandan Chougule]]></dc:creator><pubDate>Tue, 03 Mar 2026 03:58:33 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/8160543b-90ab-419d-97bb-565d86b9c6f2.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>What will we set up in this part?</strong></p>
<ul>
<li><p><strong>Jenkins Master</strong></p>
</li>
<li><p><strong>Jenkins Slave</strong></p>
</li>
<li><p><strong>EKS</strong></p>
</li>
<li><p><strong>SonarQube</strong></p>
</li>
<li><p><strong>ArgoCD</strong></p>
</li>
<li><p><strong>Trivy</strong></p>
</li>
</ul>
<hr />
<blockquote>
<p>[Note] This project will be implemented in the <strong>Singapore</strong> region (ap-southeast-1).</p>
</blockquote>
<h3><strong>Setting up the Jenkins Master</strong></h3>
<ul>
<li><strong>Spin up an AWS EC2 instance with 2 vCPUs, 8 GB of RAM (t2.large), and 30 GB of storage, and install Docker on it.</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/d7282dcd-22d2-4705-9f7a-a423a94a612a.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Open the ports below in the security group of the master machine, and also attach the same security group to the Jenkins worker node (We will create the worker node shortly)</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/097d603f-6f84-4ffd-80c3-d317ee84d953.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Install Java</strong></li>
</ul>
<pre><code class="language-shell">sudo apt update
sudo apt install openjdk-21-jre-headless
java -version
</code></pre>
<pre><code class="language-shell">openjdk 21.0.8 2025-07-15
OpenJDK Runtime Environment (build 21.0.8+9-Debian-1)
OpenJDK 64-Bit Server VM (build 21.0.8+9-Debian-1, mixed mode, sharing)
</code></pre>
<ul>
<li><strong>Install and configure</strong> <a href="https://github.com/abhinandan-chougule/devops-tools-install/blob/main/Jenkins/jenkins.sh"><strong>Jenkins</strong></a> <strong>(Master machine)</strong></li>
</ul>
<pre><code class="language-shell"># Add the Jenkins apt repository key and source (Debian/Ubuntu)
sudo wget -O /etc/apt/keyrings/jenkins-keyring.asc \
    https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
echo "deb [signed-by=/etc/apt/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" | \
    sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null

# Install Jenkins
sudo apt-get update
sudo apt-get install jenkins
</code></pre>
<ul>
<li><strong>Start Jenkins and enable the service to start at boot</strong></li>
</ul>
<pre><code class="language-shell">#start at boot with 
sudo systemctl enable jenkins 

#start the Jenkins service 
sudo systemctl start jenkins 

#check the status of the Jenkins service
sudo systemctl status jenkins
</code></pre>
<p><strong>Get your password from</strong></p>
<pre><code class="language-shell">sudo cat /var/lib/jenkins/secrets/initialAdminPassword
</code></pre>
<ul>
<li><p><strong>Now, access the Jenkins Master in your browser using the EC2 public IP on port 8080 and configure it.</strong></p>
</li>
<li><p><strong>Install suggested plugins</strong></p>
</li>
</ul>
<blockquote>
<p>Note: installing all the suggested plugins may slow down the Jenkins UI.</p>
</blockquote>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/7b894adf-b87b-4d01-ae51-1b9378f1b990.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/dde83db6-7ad4-44a8-b491-21571234841f.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Create an admin user to manage Jenkins</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/b5f340b7-2027-4e79-893a-638d13c95db7.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Create AWS IAM--&gt;user--&gt;Create user and then create Access Key</strong></li>
</ul>
<p>Ex.<br />Access key ID: AKIAxxxxxxxxxxxxxxxx<br />Secret access key: uPBtxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx</p>
<ul>
<li><strong>Install AWS CLI and configure it on the Jenkins Master (Setup</strong> <a href="https://github.com/abhinandan-chougule/devops-tools-install/blob/main/AWSCLI/AWSCLI.sh"><strong>AWSCLI</strong></a><strong>)</strong></li>
</ul>
<pre><code class="language-shell">curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip
unzip awscliv2.zip
sudo ./aws/install
aws configure
</code></pre>
<ul>
<li><strong>Check AWS connectivity</strong></li>
</ul>
<pre><code class="language-shell">
aws sts get-caller-identity
</code></pre>
<ul>
<li><strong>Install kubectl (Master machine) (Setup kubectl )</strong></li>
</ul>
<pre><code class="language-shell">curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin
kubectl version --short --client
</code></pre>
<ul>
<li><strong>Install eksctl (Master machine) (Setup eksctl)</strong></li>
</ul>
<pre><code class="language-shell">curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version
</code></pre>
<ul>
<li><p><strong>Create EKS Cluster (Master machine)</strong></p>
</li>
<li><blockquote>
<p>Note: it may take several minutes to create a new cluster.</p>
</blockquote>
</li>
</ul>
<pre><code class="language-shell">eksctl create cluster --name=wandercluster \
  --region=ap-southeast-1 \
  --version=1.30 \
  --without-nodegroup
</code></pre>
<ul>
<li><strong>Associate IAM OIDC Provider (Master machine)</strong></li>
</ul>
<pre><code class="language-shell">eksctl utils associate-iam-oidc-provider \
  --region ap-southeast-1 \
  --cluster wandercluster \
  --approve
</code></pre>
<ul>
<li><strong>Create Nodegroup (On Jenkins Master)</strong></li>
</ul>
<pre><code class="language-shell">eksctl create nodegroup --cluster=wandercluster \
  --region=ap-southeast-1 \
  --name=wandercluster \
  --node-type=t2.large \
  --nodes=2 \
  --nodes-min=2 \
  --nodes-max=2 \
  --node-volume-size=29 \
  --ssh-access \
  --ssh-public-key=my-devops-key
</code></pre>
<h3><strong>Install and configure SonarQube (On the Master node)</strong></h3>
<pre><code class="language-shell">docker run -itd --name SonarQube-Server -p 9000:9000 sonarqube:lts-community
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/2a19c2bb-5542-46be-90a2-a1146c0ca2c7.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Access the Sonarqube server on:</strong> <code>public-ip-of-jenkins-master:9000</code></li>
</ul>
<blockquote>
<p>Note- Initial username= <strong>admin</strong> and password= <strong>admin</strong></p>
</blockquote>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/19191e6b-0550-4775-9c61-85b382e7752f.png" alt="" style="display:block;margin:0 auto" />

<h3>Install and Configure ArgoCD (On the Jenkins Master)</h3>
<ul>
<li><strong>Create argocd namespace</strong></li>
</ul>
<pre><code class="language-plaintext">kubectl create namespace argocd
</code></pre>
<ul>
<li><strong>Apply the argocd manifest</strong></li>
</ul>
<pre><code class="language-plaintext">kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</code></pre>
<ul>
<li><strong>Make sure all pods are running in the argocd namespace</strong></li>
</ul>
<pre><code class="language-plaintext">watch kubectl get pods -n argocd
</code></pre>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/66ddc2fa-13d2-4c4e-a523-cd7e72762c0d.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Install argocd CLI</strong></li>
</ul>
<pre><code class="language-shell">sudo curl --silent --location -o /usr/local/bin/argocd https://github.com/argoproj/argo-cd/releases/download/v2.4.7/argocd-linux-amd64
</code></pre>
<ul>
<li><strong>Provide executable permission</strong></li>
</ul>
<pre><code class="language-plaintext">sudo chmod +x /usr/local/bin/argocd
</code></pre>
<ul>
<li><strong>Verify argocd</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/43ba9325-46d0-48b9-8fda-2697c686bbc1.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Check argocd services</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/3daa25e0-8893-4d41-8f62-9215bfc1e8a9.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Change the argocd server's service from <mark class="bg-yellow-200 dark:bg-yellow-500/30">ClusterIP</mark> to <mark class="bg-yellow-200 dark:bg-yellow-500/30">NodePort</mark></strong></li>
</ul>
<pre><code class="language-plaintext">kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
</code></pre>
<p><strong>Or you can manually edit the manifest file also by going inside it, using this command:</strong></p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/2f24980f-eccb-47df-ba6e-81198627f5fa.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Confirm service is patched or not</strong></li>
</ul>
<pre><code class="language-shell">kubectl get svc -n argocd
</code></pre>
<ul>
<li><strong>Check the port where the ArgoCD server is running and expose it on the security groups of a worker node</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/9f70945d-323f-4d29-a054-5ca5438d0ab2.png" alt="" style="display:block;margin:0 auto" />

<p><strong>port: 31797</strong></p>
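<p>If you want the NodePort without reading the service table by eye, it can be extracted with a one-liner. This sketch parses a captured <code>kubectl get svc</code> line (the cluster IP and HTTPS port below are illustrative, not from your cluster); against a live cluster you would use the jsonpath form in the comment.</p>

```shell
# Extract the NodePort mapped to the argocd-server HTTP port (80).
SVC_LINE="argocd-server   NodePort   10.100.23.5   <none>   80:31797/TCP,443:30492/TCP   5m"
NODE_PORT=$(echo "$SVC_LINE" | sed -n 's/.* 80:\([0-9]*\)\/TCP.*/\1/p')
echo "$NODE_PORT"
# Live-cluster equivalent:
#   kubectl get svc argocd-server -n argocd \
#     -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}'
```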
<ul>
<li>Access it in the browser on <code>public-ip-of-master-node:31797</code>, click on Advanced, and proceed.</li>
</ul>
<blockquote>
<p>Note: If you find an error like this in the above image, then run the command below:</p>
</blockquote>
<pre><code class="language-shell">kubectl port-forward svc/argocd-server -n argocd 31797:80 --address 0.0.0.0 &amp;
</code></pre>
<ul>
<li>Page</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/3d95014a-7604-4932-a7da-7eab82d879ad.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Fetch the initial password of the argocd server from Jenkins-ci-master</strong></li>
</ul>
<pre><code class="language-plaintext"> kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
</code></pre>
<p><strong>Username</strong>: <code>admin</code><br /><strong>Password</strong>: the initial password fetched above (e.g., <code>wnx0ZrUozW5oZeDq</code>)<br /><strong>Now, go to User Info and update your Argo CD password.</strong></p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/2780d7c0-0bfd-4338-83ff-3834e2782c55.png" alt="" style="display:block;margin:0 auto" />

<hr />
<h3><strong>Setting up Jenkins Slave node</strong></h3>
<ul>
<li>Create a new EC2 instance (Jenkins Slave) with 2 vCPUs, 8 GB of RAM (t2.large), and 30 GB of storage, and install Java on it</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/9e99a4de-c794-4f83-8849-03f5d981ca69.png" alt="" style="display:block;margin:0 auto" />

<pre><code class="language-shell">sudo apt update
sudo apt install openjdk-21-jre-headless
java -version
</code></pre>
<ul>
<li><strong>Attach the IAM role with administrator access</strong></li>
</ul>
<p>Create an IAM role with administrator access, and attach it to the Jenkins Slave node. Select Jenkins slave EC2 instance --&gt; Actions --&gt; Security --&gt; Modify IAM role</p>
<ul>
<li><strong>Configure AWSCLI (Setup</strong> <a href="https://github.com/abhinandan-chougule/devops-tools-install/blob/main/AWSCLI/AWSCLI.sh"><strong>AWSCLI</strong></a><strong>)</strong></li>
</ul>
<pre><code class="language-shell">curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip
unzip awscliv2.zip
sudo ./aws/install
aws configure
</code></pre>
<ul>
<li><strong>Generate ssh keys (<mark class="bg-yellow-200 dark:bg-yellow-500/30">on the master</mark>) to set up the Jenkins master-slave relation:</strong><br /><code>ssh-keygen</code></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/2c113e12-f29b-4502-aea2-219ba3704c7c.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>On the <mark class="bg-yellow-200 dark:bg-yellow-500/30">Jenkins Slave node,</mark> move to the directory (<code>.ssh</code>) where the ssh keys are generated, copy the content of the public key, and paste it into the <code>authorized_keys</code> file.</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/7df10a62-42c7-4117-9d44-42d12887885e.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/0f558dc7-4681-47e3-a374-b098600bcb4e.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Now, go to the Jenkins master and navigate to Manage Jenkins --&gt; Nodes, and click on New node</strong></li>
</ul>
<p>Name: <code>Node</code><br />Type: <code>Permanent agent</code><br />Number of executors: <code>2</code><br />Remote root directory: <code>/home/ubuntu</code><br />Labels: <code>Node</code><br />Usage: <code>Only build jobs with label expressions matching this node</code><br />Launch method: <code>Via SSH</code><br />Host: <code>public IP of the Jenkins slave</code></p>
<p><strong>Manage Jenkins---&gt; Credentials</strong>: <code>Add --&gt; Kind: ssh username with private key --&gt; ID: Worker --&gt; Description: Worker --&gt; Username: ubuntu --&gt; Private key: Enter directly --&gt; Add Private key</code></p>
<p>Host Key Verification Strategy: <code>Non verifying Verification Strategy</code><br />Availability: <code>Keep this agent online as much as possible</code></p>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/eca108ee-a30f-412b-b7d1-d93c7d17e2b0.png" alt="" style="display:block;margin:0 auto" />

<blockquote>
<p>Note: the private key entered in Jenkins should be the one generated on the master instance, which is stored in <code>id_ed25519</code>.</p>
</blockquote>
<ul>
<li><strong>And your Jenkins slave node is added</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/b12d2a05-7b73-4442-9ffd-ec01126f16b2.png" alt="" style="display:block;margin:0 auto" />

<blockquote>
<p>If any error occurs, open the settings icon of the Node and make changes, then launch the agent again.</p>
</blockquote>
<ul>
<li><strong>Install Docker (Jenkins slave)</strong></li>
</ul>
<pre><code class="language-shell">sudo apt install docker.io -y
sudo usermod -aG docker ubuntu &amp;&amp; newgrp docker
</code></pre>
<h3>Install Trivy (On the Jenkins slave)</h3>
<pre><code class="language-shell">sudo apt-get install wget apt-transport-https gnupg lsb-release -y
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update -y
sudo apt-get install trivy -y
</code></pre>
<hr />
<h3>Steps to add email notification</h3>
<ul>
<li><p>If not yet, then -&gt; Go to your Jenkins Master EC2 instance and allow for SMTPS</p>
</li>
<li><p>Now, we need to generate an application password from our Gmail account to authenticate with Jenkins</p>
</li>
<li><p>Open Gmail and go to Manage your Google Account --&gt; Security</p>
</li>
</ul>
<blockquote>
<p>Note- [Important] Make sure 2-step verification is on</p>
</blockquote>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/c81da363-643c-49d7-8828-a8a8144b0eb5.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><h3>Search for the App password and create an app password for Jenkins</h3>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/b11a8dca-093d-4ce6-9824-6cb62bf18549.png" alt="" style="display:block;margin:0 auto" />

<blockquote>
<p>Note: the app password is shown in the form "atqw gvjh ifrr wdkf"; when you enter it in Jenkins, make sure there is no <mark class="bg-yellow-200 dark:bg-yellow-500/30">space</mark> between the groups, i.e., "atqwgvjhifrrwdkf".</p>
</blockquote>
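<p>The spaces can also be stripped in the shell rather than by hand (the password shown is the dummy from the note above, not a real credential):</p>

```shell
# Remove the spaces Google displays inside the app password
# before pasting it into Jenkins.
APP_PASSWORD=$(echo "atqw gvjh ifrr wdkf" | tr -d ' ')
echo "$APP_PASSWORD"
```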
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/f93d8597-ba0d-4787-b4e1-322f765618e8.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Once the app password is created, go back to Jenkins, Manage Jenkins --&gt; Credentials to add username and password for email notification</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/1a903088-4b6f-43ba-9ad3-10ddd384d264.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Go back to Manage Jenkins --&gt; System and search for Extended E-mail Notification and set up email notification</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/0c730a40-df99-4dcd-993e-39c8b718ab9c.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>[Important] Enter the Gmail app password we just created in the password field under E-mail Notification --&gt; Advanced</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/0124253e-16ab-49b5-ac0e-28f3c81c7f39.png" alt="" style="display:block;margin:0 auto" />

<ul>
<li><strong>Verify if the test email was received (Jenkins to Gmail setup successful)</strong></li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/0ee301bf-c35a-4b13-8629-7a8ada497c28.png" alt="" style="display:block;margin:0 auto" />

<blockquote>
<p>The next configuration is in Part 3</p>
</blockquote>
<hr />
<div>
<div>💡</div>
<div><a target="_blank" rel="noopener noreferrer nofollow" class="text-primary underline underline-offset-2 hover:text-primary/80 cursor-pointer" href="https://hashnode.com/edit/cmmbp8z5z001q2ep59qx48tfe" style="pointer-events:none">Part 3: Tools Integration with Jenkins</a></div>
</div>]]></content:encoded></item><item><title><![CDATA[In short, about myself]]></title><description><![CDATA[Hello, I’m Abhinandan — a Senior Cloud & Security Platform Engineer with over 16 years of experience in building scalable infrastructure, automating workflows, and securing enterprise systems.
This bl]]></description><link>https://ask-abhi.com/in-short-about-myself</link><guid isPermaLink="true">https://ask-abhi.com/in-short-about-myself</guid><dc:creator><![CDATA[Abhinandan Chougule]]></dc:creator><pubDate>Mon, 02 Mar 2026 10:33:38 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/3b58d662-d90e-47e4-bf46-f2f16bb79a71.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hello, I’m <strong>Abhinandan</strong> — a Senior Cloud &amp; Security Platform Engineer with over 16 years of experience in building scalable infrastructure, automating workflows, and securing enterprise systems.</p>
<p>This blog is my space to share <strong>practical DevOps, Cloud, and Security insights</strong> — not just theory, but step-by-step guides, reusable templates, and lessons learned from real-world projects.</p>
<h2>🌐 What You’ll Find Here</h2>
<ul>
<li><p><strong>Cloud Migration Playbooks</strong> — structured comparisons, decision matrices, and automation scripts.</p>
</li>
<li><p><strong>CI/CD Pipeline Optimization</strong> — best practices for traceable Docker tagging, robust error handling, and secure automation.</p>
</li>
<li><p><strong>Security &amp; Compliance</strong> — IAM, PAM, vulnerability management, and embedding security into DevOps workflows.</p>
</li>
<li><p><strong>Portfolio Projects</strong> — curated case studies and technical walkthroughs that recruiters and peers can explore</p>
</li>
</ul>
<h2>🎯 My Goal</h2>
<p>I want this blog to be:</p>
<ul>
<li><p>A <strong>knowledge hub</strong> for engineers tackling complex migrations and automation challenges.</p>
</li>
<li><p>A <strong>portfolio showcase</strong> for my projects, highlighting both technical depth and professional polish.</p>
</li>
<li><p>A <strong>community resource</strong> where ideas are accessible, actionable, and future-proof.</p>
</li>
</ul>
<h2>🚀 What’s Next</h2>
<p>If you’re a recruiter, engineer, or tech enthusiast — welcome! This is where I share my journey and the systems I build.</p>
]]></content:encoded></item><item><title><![CDATA[Part 1: What is expected from this series?]]></title><description><![CDATA[🔹 Introduction

Why CI/CD pipelines are critical for modern cloud deployments.

The importance of integrating security checks, automation, and monitoring.

A quick overview of the tools used: Jenkins]]></description><link>https://ask-abhi.com/part-1-introduction-to-secure-ci-cd-pipelines</link><guid isPermaLink="true">https://ask-abhi.com/part-1-introduction-to-secure-ci-cd-pipelines</guid><category><![CDATA[Devops]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[Jenkins]]></category><category><![CDATA[DevSecOps]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Abhinandan Chougule]]></dc:creator><pubDate>Sun, 01 Mar 2026 01:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/93777d55-cba8-4d1f-9fae-54b069cc990e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>🔹 Introduction</h2>
<ul>
<li><p>Why CI/CD pipelines are critical for modern cloud deployments.</p>
</li>
<li><p>The importance of integrating <strong>security checks, automation, and monitoring</strong>.</p>
</li>
<li><p>A quick overview of the tools used: Jenkins, GitHub, OWASP, SonarQube, Trivy, Docker, Argo CD, Prometheus, and Grafana.</p>
</li>
</ul>
<h2>🔹 Architecture Overview</h2>
<ul>
<li><p><strong>Developer → GitHub → Jenkins CI/CD → AWS EKS</strong>.</p>
</li>
<li><p>CI pipeline handles <strong>code quality, security scans, and image builds</strong>.</p>
</li>
<li><p>CD pipeline handles <strong>deployment automation and monitoring</strong>.</p>
</li>
<li><p>Monitoring stack (Prometheus + Grafana) ensures visibility.</p>
</li>
<li><p>Notifications via email keep stakeholders updated.</p>
</li>
</ul>
<h2>🔹 CI Pipeline Stages</h2>
<ol>
<li><p><strong>Code Checkout</strong> → Jenkins pulls code from GitHub.</p>
</li>
<li><p><strong>Dependency Scanning (OWASP)</strong> → Detects vulnerable libraries.</p>
</li>
<li><p><strong>Code Quality (SonarQube)</strong> → Enforces coding standards and quality gates.</p>
</li>
<li><p><strong>Security Scan (Trivy)</strong> → Scans filesystem and Docker images for vulnerabilities.</p>
</li>
<li><p><strong>Docker Build &amp; Push</strong> → Builds container images and pushes to Docker Hub.</p>
</li>
<li><p><strong>Version Update</strong> → Updates Kubernetes manifests with new image tags.</p>
</li>
<li><p><strong>Commit Back to GitHub</strong> → Ensures GitOps workflow consistency.</p>
</li>
</ol>
<h2>🔹 CD Pipeline Stages</h2>
<ol>
<li><p><strong>Trigger from CI</strong> → Jenkins CD job starts.</p>
</li>
<li><p><strong>Argo CD Deployment</strong> → Syncs manifests from GitHub to AWS EKS.</p>
</li>
<li><p><strong>Kubernetes Execution</strong> → Deploys updated pods/services.</p>
</li>
<li><p><strong>Monitoring</strong>:</p>
<ul>
<li><p><strong>Prometheus</strong> → Collects metrics.</p>
</li>
<li><p><strong>Grafana</strong> → Visualizes dashboards.</p>
</li>
</ul>
</li>
<li><p><strong>Notifications</strong> → Email alerts on pipeline completion.</p>
</li>
</ol>
<h2>🔹 Tools &amp; Integrations</h2>
<ul>
<li><p><strong>Jenkins</strong> → CI/CD automation.</p>
</li>
<li><p><strong>GitHub</strong> → Source control + GitOps repo.</p>
</li>
<li><p><strong>OWASP Dependency Check</strong> → Security scanning.</p>
</li>
<li><p><strong>SonarQube</strong> → Code quality analysis.</p>
</li>
<li><p><strong>Trivy</strong> → Vulnerability scanning.</p>
</li>
<li><p><strong>Docker Hub</strong> → Image registry.</p>
</li>
<li><p><strong>Argo CD</strong> → GitOps deployment.</p>
</li>
<li><p><strong>AWS EKS</strong> → Managed Kubernetes cluster.</p>
</li>
<li><p><strong>Prometheus &amp; Grafana</strong> → Monitoring and visualization.</p>
</li>
<li><p><strong>Email (SMTP)</strong> → Notifications.</p>
</li>
</ul>
<h2>🔹 What’s Covered in This Series?</h2>
<ul>
<li><p><strong>Part 1:</strong> <a href="https://hashnode.com/edit/cmm9bq92e001s2fo87gj7981q"><strong>What is expected from this series?</strong></a><br />Why CI/CD matters, an overview of tools, and an architecture diagram.</p>
</li>
<li><p><strong>Part 2:</strong> <a href="https://hashnode.com/edit/cmma2viof00g82fo820uacljh"><strong>Install Jenkins Master, Slave &amp; Tools</strong></a><br />Jenkins jobs, GitHub webhooks, credentials.</p>
</li>
<li><p><strong>Part 3:</strong> <a href="https://hashnode.com/edit/cmmbp8z5z001q2ep59qx48tfe"><strong>Tools Integration with Jenkins</strong></a><br />Install plugins, set up credentials, and configure tool integrations.</p>
</li>
<li><p><strong>Part 4:</strong> <a href="https://hashnode.com/edit/cmmbrre88000f2dn08be909i2"><strong>CI Job for Security and Quality Gate</strong></a><br />CI Pipeline setup</p>
</li>
<li><p><strong>Part 5:</strong> <a href="https://hashnode.com/edit/cmmbuom77004x2eoacxwh32cn"><strong>Set up ArgoCD</strong></a><br />Connect Argo CD to EKS and the application repo</p>
</li>
<li><p><strong>Part 6:</strong> <a href="https://hashnode.com/edit/cmmbxcd9q009s2eoa8z5vbnsj"><strong>Observability Stack Setup</strong></a><br />Setting up dashboards and alerts</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[All-in-One Ansible Concepts: A Complete Guide for DevOps Engineers]]></title><description><![CDATA[In modern cloud and DevOps environments, automation is essential for managing infrastructure efficiently. Manual server configuration is time-consuming, error-prone, and difficult to scale.
Ansible is]]></description><link>https://ask-abhi.com/all-in-one-ansible-concepts</link><guid isPermaLink="true">https://ask-abhi.com/all-in-one-ansible-concepts</guid><category><![CDATA[ansible]]></category><category><![CDATA[Devops]]></category><category><![CDATA[automation]]></category><category><![CDATA[Linux]]></category><category><![CDATA[configuration management]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[ci-cd]]></category><dc:creator><![CDATA[Abhinandan Chougule]]></dc:creator><pubDate>Thu, 01 Jan 2026 01:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/6997142f587b14d25b5231b7/28a4f2fd-8cf0-484f-92ba-1fe784877e11.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In modern cloud and DevOps environments, automation is essential for managing infrastructure efficiently. Manual server configuration is time-consuming, error-prone, and difficult to scale.</p>
<p><strong>Ansible</strong> is one of the most powerful automation tools used by DevOps engineers to manage infrastructure, configure systems, and deploy applications across hundreds or thousands of servers.</p>
<p>With its <strong>agentless architecture</strong> and simple <strong>YAML-based playbooks</strong>, Ansible makes infrastructure automation easy and scalable.</p>
<p>In this guide, we will cover the <strong>core concepts of Ansible</strong>, including:</p>
<ul>
<li><p>Ansible architecture</p>
</li>
<li><p>Inventory</p>
</li>
<li><p>Modules</p>
</li>
<li><p>Playbooks</p>
</li>
<li><p>Roles</p>
</li>
<li><p>Variables</p>
</li>
<li><p>Handlers</p>
</li>
<li><p>Vault</p>
</li>
<li><p>Real-world DevOps use cases</p>
</li>
</ul>
<hr />
<h3>Table of Contents</h3>
<ol>
<li><p>What is Ansible</p>
</li>
<li><p>Why DevOps Engineers Use Ansible</p>
</li>
<li><p>Ansible Architecture</p>
</li>
<li><p>Core Components of Ansible</p>
</li>
<li><p>Ansible Inventory</p>
</li>
<li><p>Ansible Modules</p>
</li>
<li><p>Ansible Playbooks</p>
</li>
<li><p>Ansible Roles</p>
</li>
<li><p>Ansible Variables</p>
</li>
<li><p>Ansible Handlers</p>
</li>
<li><p>Ansible Vault</p>
</li>
<li><p>Real-World DevOps Use Case</p>
</li>
<li><p>Conclusion</p>
</li>
</ol>
<hr />
<h3>What is Ansible?</h3>
<p>Ansible is an <strong>open-source automation tool</strong> used for:</p>
<ul>
<li><p>Configuration management</p>
</li>
<li><p>Infrastructure provisioning</p>
</li>
<li><p>Application deployment</p>
</li>
<li><p>Workflow orchestration</p>
</li>
</ul>
<p>Unlike traditional configuration management tools, Ansible <strong>does not require agents on target machines</strong>. Instead, it communicates using <strong>SSH for Linux systems and WinRM for Windows systems</strong>.</p>
<p>This makes Ansible lightweight, secure, and easy to maintain.</p>
<hr />
<h3>Why DevOps Engineers Use Ansible</h3>
<p>Ansible has become one of the most popular automation tools in DevOps due to its simplicity and flexibility.</p>
<h3>Key Benefits</h3>
<p>✔ Agentless architecture<br />✔ Easy YAML syntax<br />✔ Powerful automation modules<br />✔ Supports multi-cloud environments<br />✔ Scalable infrastructure management</p>
<p>Organizations use Ansible to automate:</p>
<ul>
<li><p>Server configuration</p>
</li>
<li><p>Software installation</p>
</li>
<li><p>Security hardening</p>
</li>
<li><p>Application deployments</p>
</li>
<li><p>Cloud infrastructure provisioning</p>
</li>
</ul>
<hr />
<h3>Ansible Architecture</h3>
<p>Ansible follows a <strong>simple control architecture</strong>.</p>
<h3>Control Node</h3>
<p>The <strong>control node</strong> is the machine where Ansible is installed. It executes playbooks and manages automation tasks.</p>
<h3>Managed Nodes</h3>
<p>Managed nodes are the servers that Ansible configures and manages.</p>
<p>Ansible connects to these nodes over <strong>SSH</strong> (or WinRM for Windows hosts).</p>
<h3>Architecture Overview</h3>
<pre><code class="language-plaintext">DevOps Engineer
      │
      ▼
Control Node (Ansible)
      │
 ┌────┴──────┐
 ▼           ▼
Web Server   Database Server
</code></pre>
<p>The control node sends instructions to managed nodes through <strong>Ansible playbooks and modules</strong>.</p>
<hr />
<h3>Core Components of Ansible</h3>
<p>Understanding Ansible requires knowing its core components.</p>
<h3>Key Components</h3>
<ul>
<li><p>Inventory</p>
</li>
<li><p>Modules</p>
</li>
<li><p>Playbooks</p>
</li>
<li><p>Roles</p>
</li>
<li><p>Tasks</p>
</li>
<li><p>Variables</p>
</li>
<li><p>Handlers</p>
</li>
<li><p>Templates</p>
</li>
</ul>
<p>Each component plays an important role in automation workflows.</p>
<hr />
<h3>Ansible Inventory</h3>
<p>The <strong>inventory</strong> defines the list of servers that Ansible manages.</p>
<p>It can be a simple text file that organizes hosts into groups.</p>
<h3>Example Inventory</h3>
<pre><code class="language-plaintext">[webservers]
web1.example.com
web2.example.com

[dbservers]
db1.example.com
</code></pre>
<p>Inventories can be:</p>
<ul>
<li><p><strong>Static inventory</strong> (manual server lists)</p>
</li>
<li><p><strong>Dynamic inventory</strong> (cloud-based infrastructure)</p>
</li>
</ul>
<p>Dynamic inventories are commonly used with cloud providers such as AWS and Azure.</p>
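<h3>Dynamic Inventory Example</h3>
<p>As a sketch, the AWS EC2 inventory plugin can discover hosts at runtime instead of listing them by hand. The region, tag filter, and file name below are illustrative, and the example assumes the <code>amazon.aws</code> collection and AWS credentials are already configured:</p>
<pre><code class="language-yaml"># inventory.aws_ec2.yml — the file name must end in .aws_ec2.yml
plugin: amazon.aws.aws_ec2
regions:
  - us-east-1
filters:
  tag:Role: webserver      # only hosts tagged Role=webserver (illustrative tag)
keyed_groups:
  - key: tags.Role         # build inventory groups from the Role tag
    prefix: tag
</code></pre>
<p>Running <code>ansible-inventory -i inventory.aws_ec2.yml --graph</code> shows which hosts and groups the plugin discovered.</p>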
<hr />
<h3>Ansible Modules</h3>
<p>Modules are the <strong>building blocks of Ansible automation</strong>.</p>
<p>They perform specific tasks such as installing packages, managing services, or copying files.</p>
<h3>Common Modules</h3>
<ul>
<li><p>package</p>
</li>
<li><p>apt</p>
</li>
<li><p>yum</p>
</li>
<li><p>service</p>
</li>
<li><p>copy</p>
</li>
<li><p>file</p>
</li>
<li><p>command</p>
</li>
<li><p>shell</p>
</li>
</ul>
<h3>Example Module Usage</h3>
<pre><code class="language-yaml">- name: Install nginx
  apt:
    name: nginx
    state: present
</code></pre>
<p>Ansible includes <strong>thousands of modules</strong> for managing different systems and services.</p>
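<p>Modules can also be run directly from the command line as <strong>ad-hoc tasks</strong>, which is handy for quick checks before writing a full playbook. The inventory file name here is illustrative:</p>
<pre><code class="language-plaintext"># Confirm connectivity to every host in the inventory
ansible all -i inventory.ini -m ping

# Install nginx on the webservers group (needs sudo on the targets)
ansible webservers -i inventory.ini -m apt -a "name=nginx state=present" --become
</code></pre>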
<hr />
<h3>Ansible Playbooks</h3>
<p>Playbooks define the automation tasks executed by Ansible.</p>
<p>They are written in <strong>YAML format</strong> and describe the desired system state.</p>
<h3>Example Playbook</h3>
<pre><code class="language-yaml">- hosts: webservers
  become: yes

  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
</code></pre>
<p>This playbook installs <strong>Nginx on all web servers</strong>.</p>
<h3>Running a Playbook</h3>
<pre><code class="language-plaintext">ansible-playbook install-nginx.yml
</code></pre>

<p>Playbooks are used for:</p>
<ul>
<li><p>Software deployment</p>
</li>
<li><p>Server configuration</p>
</li>
<li><p>Infrastructure setup</p>
</li>
</ul>
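<p>Before applying a playbook to production hosts, it can be validated and dry-run first:</p>
<pre><code class="language-plaintext"># Check the playbook for syntax errors without contacting any host
ansible-playbook install-nginx.yml --syntax-check

# Report what would change, without actually changing anything
ansible-playbook install-nginx.yml --check --diff
</code></pre>
<p>In check mode Ansible only reports the tasks that <em>would</em> make a change, which makes it a safe first step against live servers.</p>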
<hr />
<h3>Ansible Roles</h3>
<p>Roles allow you to organize automation code into <strong>reusable components</strong>.</p>
<p>Instead of writing large playbooks, you can break automation tasks into structured roles.</p>
<h3>Role Directory Structure</h3>
<pre><code class="language-plaintext">roles/
└── nginx/
    ├── tasks/
    ├── handlers/
    ├── templates/
    ├── vars/
    └── defaults/
</code></pre>
<p>Roles improve:</p>
<ul>
<li><p>Code organization</p>
</li>
<li><p>Reusability</p>
</li>
<li><p>Scalability</p>
</li>
</ul>
<p>Many roles are available through <strong>Ansible Galaxy</strong>.</p>
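<p>A role is applied from a playbook with the <code>roles</code> keyword. A minimal sketch (the playbook file name is illustrative):</p>
<pre><code class="language-yaml"># site.yml — apply the nginx role to all web servers
- hosts: webservers
  become: yes
  roles:
    - nginx
</code></pre>
<p>The role skeleton itself can be scaffolded with <code>ansible-galaxy init roles/nginx</code>.</p>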
<hr />
<h3>Ansible Variables</h3>
<p>Variables make playbooks <strong>dynamic and flexible</strong>.</p>
<p>They allow values to change based on the environment.</p>
<h3>Example Variable</h3>
<pre><code class="language-yaml">vars:
  package_name: nginx
</code></pre>
<p>Variables can be defined in:</p>
<ul>
<li><p>playbooks</p>
</li>
<li><p>inventory</p>
</li>
<li><p>group variables</p>
</li>
<li><p>host variables</p>
</li>
</ul>
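<p>Inside a task, a variable is referenced with Jinja2 syntax, so the same playbook can install different packages per environment:</p>
<pre><code class="language-yaml">- hosts: webservers
  become: yes
  vars:
    package_name: nginx
  tasks:
    - name: Install the configured package
      apt:
        name: "{{ package_name }}"
        state: present
</code></pre>
<p>The value can be overridden at runtime, for example with <code>-e package_name=apache2</code> (the package name here is illustrative).</p>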
<hr />
<h3>Ansible Handlers</h3>
<p>Handlers are special tasks that run <strong>only when triggered by other tasks</strong>.</p>
<p>They are commonly used to restart services.</p>
<h3>Example Handler</h3>
<pre><code class="language-yaml">handlers:
  - name: restart nginx
    service:
      name: nginx
      state: restarted
</code></pre>
<p>Handlers run only when notified, which helps avoid unnecessary actions.</p>
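<p>A task triggers a handler with the <code>notify</code> keyword, and the handler fires only if that task actually changed something. A minimal sketch tying the two together (file paths are illustrative):</p>
<pre><code class="language-yaml">- hosts: webservers
  become: yes
  tasks:
    - name: Deploy nginx configuration
      copy:
        src: nginx.conf
        dest: /etc/nginx/nginx.conf
      notify: restart nginx    # runs the handler only if this file changed

  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted
</code></pre>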
<hr />
<h3>Ansible Vault</h3>
<p>Ansible Vault is used to <strong>secure sensitive information</strong> such as passwords and API keys.</p>
<h3>Encrypt a File</h3>
<pre><code class="language-plaintext">ansible-vault encrypt secrets.yml
</code></pre>
<h3>Edit Encrypted File</h3>
<pre><code class="language-plaintext">ansible-vault edit secrets.yml
</code></pre>
<p>This ensures sensitive data is protected inside your automation workflows.</p>
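<p>An encrypted file can then be loaded like any other variables file; Ansible decrypts it at runtime when a vault password is supplied. The playbook and variable names below are illustrative:</p>
<pre><code class="language-yaml"># site.yml — secrets.yml is the vault-encrypted file from above
- hosts: webservers
  vars_files:
    - secrets.yml
  tasks:
    - name: Use a secret without leaking it to logs
      debug:
        msg: "Connecting as {{ db_user }}"
      no_log: true             # keep the rendered value out of task output
</code></pre>
<p>Run it with <code>ansible-playbook site.yml --ask-vault-pass</code>, or point <code>--vault-password-file</code> at a protected password file for unattended runs.</p>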
<hr />
<h3>Real-World DevOps Use Case</h3>
<p>Ansible is commonly used inside <strong>CI/CD pipelines</strong> to automate infrastructure configuration.</p>
<h3>Example Workflow</h3>
<pre><code class="language-plaintext">Git Commit
     │
     ▼
CI/CD Pipeline
     │
     ▼
Ansible Playbook Execution
     │
     ▼
Application deployed to servers
</code></pre>
<p>For example:</p>
<ul>
<li><p>Terraform creates infrastructure</p>
</li>
<li><p>Ansible configures servers</p>
</li>
<li><p>Jenkins deploys applications</p>
</li>
</ul>
<p>This approach is widely used in <strong>cloud-native DevOps environments</strong>.</p>
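<p>In a pipeline, the provision-then-configure steps often reduce to a short script stage. This is only a sketch; the tool invocations, inventory, and playbook names are assumptions:</p>
<pre><code class="language-plaintext"># Illustrative deploy stage — paths and inventory are assumptions
terraform apply -auto-approve            # 1. provision infrastructure
ansible-playbook -i inventory site.yml   # 2. configure servers and deploy
</code></pre>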
<hr />
<h3>Conclusion</h3>
<p>Ansible is a powerful automation tool that simplifies infrastructure management and application deployment.</p>
<p>With its <strong>agentless architecture</strong>, <strong>YAML-based playbooks</strong>, and extensive module ecosystem, Ansible enables DevOps teams to automate complex operations efficiently.</p>
<p>By understanding key concepts like <strong>inventory, modules, playbooks, roles, and vault</strong>, engineers can build scalable and reliable automation pipelines.</p>
<p>Ansible remains one of the <strong>most important tools in the modern DevOps toolkit</strong>.</p>
<hr />
]]></content:encoded></item></channel></rss>