10 Beginner DevOps Projects You Can Build in 2025 (GitHub Actions, Docker, Terraform & More)

You have watched the tutorials. You have read the "What is DevOps?" articles. You might even have a certificate or two. But when you sit down to open your terminal, you freeze. "What do I actually build?"

This is the most common hurdle for aspiring DevOps engineers. The theory is abundant, but practical, scoped project ideas are rare. You don't learn DevOps by reading about it; you learn it by breaking things, fixing pipelines, and seeing that green checkmark appear after hours of debugging.

If you have already followed our From Zero to CI/CD Tutorial, you have a solid foundation. Now, it is time to build a portfolio that proves you can do the work.

In this guide, I will give you 10 concrete, hands-on DevOps project ideas for beginners. These aren't abstract concepts; they are specific, resume-ready projects complete with tool stacks and checklists. They range from simple CI/CD setups to cloud infrastructure and monitoring.

How to Use These DevOps Project Ideas

Don't try to build all 10 at once. Pick one that interests you and commit to it for the weekend. Here is the golden rule for a DevOps portfolio:

  • Keep the scope small: A finished small project is infinitely better than an unfinished "Netflix clone".
  • Document everything: A GitHub repo with no `README.md` is invisible to recruiters. Explain what you built and how you built it; an architecture diagram earns bonus points.
  • Make it public: Put it on GitHub. Pin it to your profile. Share it on LinkedIn.

Ready? Let's start building.

1. Set Up CI Tests with GitHub Actions for a Simple Node/Python App

Tags: CI/CD, GitHub Actions

The Goal: Create a pipeline that automatically runs unit tests whenever you push code.

This is the "Hello World" of DevOps. Before you try to orchestrate Kubernetes clusters, you simply need to ensure your code doesn't break when you change it. If you followed our previous tutorial, you are already halfway there.

What you will learn:

  • Basic YAML syntax for GitHub Actions.
  • How to trigger workflows on `push` events.
  • How to run automated tests (Jest/Pytest) in a cloud runner.

Tools & Stack:

One simple app (Node.js or Python) + GitHub Actions.

Core Steps:

  1. Create a simple "Calculator" app that adds two numbers.
  2. Write a unit test that asserts `1 + 1 = 2`.
  3. Create a `.github/workflows/test.yml` file.
  4. Configure the workflow to run `npm test` or `pytest` on every push to the `main` branch.
  5. Showcase it: Push a broken code change (where `1 + 1 = 3`) and take a screenshot of the failed GitHub Action run. Then fix it and show the green checkmark.
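The steps above boil down to a surprisingly small workflow file. Here is a minimal sketch for a Node.js app, assuming your `package.json` defines a `test` script (for Python, swap in `actions/setup-python` and `pytest`):

```yaml
# .github/workflows/test.yml -- run unit tests on every push to main.
name: CI

on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci        # install exact dependencies from the lockfile
      - run: npm test      # assumes a "test" script in package.json
```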

2. Dockerize a Simple Web App and Run It Locally

Tags: Docker, Containers

The Goal: Package an application so it runs exactly the same on your machine, your friend's machine, and the server.

"It works on my machine" is the most expensive sentence in software engineering. Docker solves this. By containerizing your app, you bundle a base OS layer, libraries, and dependencies into a single artifact.

What you will learn:

  • How to write a `Dockerfile`.
  • Understanding Docker images vs. containers.
  • Port mapping (e.g., exposing port 3000 inside the container as port 8080 on your host).

Tools & Stack:

Docker Desktop + Any Web App (e.g., Express.js or Flask).

Core Steps:

  1. Write a `Dockerfile` for your app (start from `node:alpine` or `python:slim`).
  2. Run `docker build -t my-demo-app .` to create the image.
  3. Run `docker run -p 3000:3000 my-demo-app` to start the container.
  4. Access `localhost:3000` in your browser.
  5. Showcase it: Include the `Dockerfile` in your repo and a screenshot of the app running from a container.
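A minimal `Dockerfile` for this project might look like the sketch below. It assumes an Express.js app whose entry point is `server.js` and that listens on port 3000; adjust both for your app:

```dockerfile
# Small Node.js base image.
FROM node:alpine

WORKDIR /app

# Copy manifests first so the dependency layer is cached between builds.
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the rest of the source.
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]
```

Copying `package*.json` before the rest of the source means Docker only reinstalls dependencies when they actually change, which keeps rebuilds fast.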

3. Build a PR Workflow: Linting & Testing on Pull Requests

Tags: Automation, Code Quality

The Goal: Prevent bad code from ever being merged into the `main` branch.

In a real team, you don't push directly to `main`. You open a Pull Request (PR). This project simulates that workflow. You will set up an "automated gatekeeper" that checks code quality (Linting) and correctness (Testing) before allowing a merge.

What you will learn:

  • Branch protection rules.
  • Linting tools (ESLint for JS, Flake8/Black for Python).
  • GitHub Actions `pull_request` triggers.

Tools & Stack:

GitHub Actions + ESLint/Flake8.

Core Steps:

  1. Create a new branch `feature/bad-code`.
  2. Add some messy code (unused variables, bad formatting).
  3. Create a workflow that runs a linter (e.g., `npm run lint`).
  4. Open a PR. The action should fail.
  5. Fix the code, push again, and watch it turn green.
  6. Showcase it: A screenshot of a PR with the "Checks Failed" status is a powerful addition to a portfolio.
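The gatekeeper workflow is nearly identical to the CI one, except it fires on pull requests. This sketch assumes a Node.js project with `lint` and `test` scripts in `package.json`:

```yaml
# .github/workflows/pr-checks.yml -- quality gate for every pull request.
name: PR checks

on:
  pull_request:
    branches: [main]

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint   # assumes a "lint" script, e.g. "eslint ."
      - run: npm test
```

Pair this with a branch protection rule on `main` that requires the check to pass, and merging messy code becomes impossible.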

4. Create a Basic Terraform Script to Provision a VM

Tags: IaC, Terraform

The Goal: Create infrastructure using code (IaC) instead of clicking buttons in a web console.

Clicking around the AWS/Azure console is fine for learning, but unmanageable for production. Terraform allows you to define "I want one server" in a text file, run one command, and have it appear magically.

What you will learn:

  • Basics of Infrastructure as Code (IaC).
  • Terraform providers (AWS/Azure/GCP).
  • `terraform init`, `plan`, and `apply`.

Tools & Stack:

Terraform + AWS Free Tier (EC2) or DigitalOcean.

Core Steps:

  1. Install Terraform.
  2. Configure your cloud credentials (use environment variables!).
  3. Write a `main.tf` file defining a single EC2 instance (`t2.micro` is free-tier eligible).
  4. Run `terraform plan` to see what will happen.
  5. Run `terraform apply` to create the server.
  6. Showcase it: The `main.tf` file properly formatted, and a screenshot of the CLI output showing "Apply complete!".
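A `main.tf` for a single instance can be this short. The AMI ID below is a placeholder; look up a current Amazon Linux AMI for your region before applying:

```hcl
# main.tf -- one free-tier EC2 instance, managed as code.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"   # credentials come from environment variables
}

resource "aws_instance" "demo" {
  ami           = "ami-0123456789abcdef0"   # placeholder -- use a real AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "terraform-demo"
  }
}
```

When you are done, `terraform destroy` tears everything down again so you don't get billed for a forgotten server.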

5. Deploy a Containerized App to the Cloud (Render/Heroku/AWS)

Tags: Deployment, Cloud

The Goal: Automate the deployment of your Docker container to a public URL.

You have a Docker image. Now let's show it to the world. We will create a pipeline that builds the image and pushes it to a cloud provider whenever you merge to `main`.

What you will learn:

  • CD (Continuous Deployment) principles.
  • Managing Cloud Secrets in GitHub.
  • Registry concepts (Docker Hub or GitHub Container Registry).

Tools & Stack:

GitHub Actions + Render (easiest for beginners) or AWS App Runner.

Core Steps:

  1. Create a simple Dockerfile (or reuse the one from Project #2).
  2. Set up a service on Render/Heroku pointing to your repo.
  3. Or better yet: Use a GitHub Action that builds the image, pushes to Docker Hub, and triggers a webhook to deploy.
  4. Showcase it: The live URL of your running app (e.g., `my-cool-app.onrender.com`).
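The build-and-push half of the pipeline can be sketched like this. `DOCKERHUB_USERNAME` and `DOCKERHUB_TOKEN` are repository secrets you create yourself, and `my-demo-app` is a placeholder image name:

```yaml
# .github/workflows/deploy.yml -- build and push the image on merge to main.
name: Deploy

on:
  push:
    branches: [main]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: ${{ secrets.DOCKERHUB_USERNAME }}/my-demo-app:latest
```

On Render, you would then point the service at this image (or add a deploy-hook step) so a fresh push goes live automatically.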

6. Set Up a Simple Monitoring Stack (Prometheus + Grafana)

Tags: Monitoring, Observability

The Goal: Visualize what is happening inside your application.

Deploying code is only half the battle. Keeping it running is the other half. In this project, you will spin up a monitoring stack that scrapes metrics from your app and displays them on a beautiful dashboard.

What you will learn:

  • Docker Compose (orchestrating multiple containers).
  • The concept of "scrapers" and "exporters".
  • Creating graphs in Grafana.

Tools & Stack:

Docker Compose + Prometheus + Grafana + Node.js (with `prom-client`).

Core Steps:

  1. Create a `docker-compose.yml` file defining 3 services: `app`, `prometheus`, and `grafana`.
  2. Add an endpoint `/metrics` to your Node.js app that exposes CPU usage or request counts.
  3. Configure Prometheus (`prometheus.yml`) to scrape `http://app:3000/metrics` every 5 seconds.
  4. Start everything: `docker-compose up`.
  5. Open Grafana in your browser (it defaults to port 3000, so map it to a different host port such as 3001 if your app already uses 3000), add Prometheus as a data source, and make a graph.
  6. Showcase it: A screenshot of your Grafana dashboard showing live data from your app.
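A sketch of the three-service `docker-compose.yml`, assuming your app's Dockerfile sits in the same directory and a `prometheus.yml` next to it holds the scrape config (Grafana is published on host port 3001 because the app takes 3000):

```yaml
# docker-compose.yml -- app + Prometheus + Grafana on one Docker network.
services:
  app:
    build: .
    ports:
      - "3000:3000"

  prometheus:
    image: prom/prometheus
    volumes:
      # Scrape config lives alongside this file; it targets http://app:3000/metrics.
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"

  grafana:
    image: grafana/grafana
    ports:
      - "3001:3000"   # Grafana listens on 3000 inside the container
```

Because Compose puts all three services on one network, Prometheus can reach the app by its service name (`app`) rather than an IP address.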

7. Automate Daily Backups with a Script and Cron/GitHub Actions

Tags: Scripting, Automation

The Goal: Protect your data by automatically saving it every day at midnight.

DevOps engineers write a lot of "glue" scripts. This project teaches you how to write a robust shell or Python script and schedule it. You can mock a database by simply backing up a folder of text files.

What you will learn:

  • Bash or Python scripting.
  • Cron jobs (scheduler).
  • Using AWS CLI or similar to upload files to cloud storage (S3).

Tools & Stack:

Bash/Python + GitHub Actions (Schedule trigger) + AWS S3 (Free Tier).

Core Steps:

  1. Write a script `backup.sh` that compresses a directory into a `.tar.gz` file with today's date in the filename.
  2. Add functionality to upload this file to an S3 bucket (or Google Drive API if you prefer).
  3. Create a GitHub Action with a `schedule` trigger (`cron: '0 0 * * *'` runs at midnight UTC).
  4. Showcase it: The script code and a screenshot of your S3 bucket filled with dated backup files.
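Here is a minimal sketch of the backup script. The demo directory it creates stands in for your "database" of text files, and the S3 bucket name in the comment is a placeholder:

```shell
#!/usr/bin/env bash
set -euo pipefail

# backup: compress a directory into a dated tarball under the given backup dir,
# and print the archive path.
backup() {
  local source_dir="$1" backup_dir="$2"
  local stamp archive
  stamp="$(date +%F)"                                   # e.g. 2025-06-01
  archive="${backup_dir}/backup-${stamp}.tar.gz"
  mkdir -p "$backup_dir"
  # -C keeps paths inside the archive relative to the source's parent dir.
  tar -czf "$archive" -C "$(dirname "$source_dir")" "$(basename "$source_dir")"
  echo "$archive"
}

# Demo: back up a throwaway directory of text files (mocking a database dump).
demo="$(mktemp -d)"
mkdir -p "$demo/data"
echo "row 1" > "$demo/data/records.txt"

archive="$(backup "$demo/data" "$demo/backups")"
echo "Wrote $archive"

# Uploading is one more line once AWS credentials are configured, e.g.:
#   aws s3 cp "$archive" "s3://my-backup-bucket/"   # bucket name is a placeholder
```

`set -euo pipefail` makes the script abort on the first failure instead of silently uploading an empty archive, which is exactly the kind of robustness interviewers ask about.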

8. Implement a Blue-Green Deployment Simulation

Tags: Deployment Strategy, Advanced

The Goal: Update an application with zero downtime.

In a traditional deployment, you stop the server, upgrade, and restart, which causes a window of downtime. In Blue-Green, you run two identical environments. You deploy to the idle one, test it, and then switch traffic instantly.

What you will learn:

  • Load Balancing concepts (Nginx).
  • Zero-downtime deployment strategies.
  • Docker Compose scaling.

Tools & Stack:

Docker Compose + Nginx (as load balancer) + Two instances of your App.

Core Steps:

  1. Create two containers: `app-blue` (v1) and `app-green` (v2).
  2. Set up Nginx to route traffic to `app-blue`.
  3. "Deploy" v2 by starting `app-green`.
  4. Modify Nginx config to point to `app-green` and reload it (`nginx -s reload`).
  5. Verify traffic is now hitting v2 without dropping connections.
  6. Showcase it: A simple diagram in your README explaining how you switched traffic.
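The traffic switch lives in one place: the Nginx upstream. A minimal sketch, assuming both app containers listen on port 3000 and share a Docker network with Nginx (so the service names resolve):

```nginx
# nginx.conf -- all traffic goes to "blue"; edit one line and reload to cut over.
events {}

http {
  upstream current {
    server app-blue:3000;   # change to app-green:3000, then `nginx -s reload`
  }

  server {
    listen 80;

    location / {
      proxy_pass http://current;
    }
  }
}
```

Because `nginx -s reload` finishes in-flight requests on old workers before retiring them, the cutover drops no connections.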

9. Set Up a Basic Security Scan in CI (DevSecOps)

Tags: DevSecOps, Security

The Goal: Automatically find vulnerabilities in your dependencies or code.

Security shouldn't be an afterthought. By adding a security scanner to your pipeline, you are practicing "Shift Left" security—catching bugs early in the process.

What you will learn:

  • Vulnerability scanning (CVEs).
  • Secret scanning (finding accidental API keys).
  • Tools like Trivy or Snyk.

Tools & Stack:

GitHub Actions + Trivy (Open Source scanner).

Core Steps:

  1. Take an existing repo (like your Docker project).
  2. Add a GitHub Action step that runs `aquasecurity/trivy-action`.
  3. Configure it to scan your Docker image for "High" and "Critical" vulnerabilities.
  4. Showcase it: A screenshot of the security report in your GitHub Actions logs showing (hopefully) 0 vulnerabilities.
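A sketch of the scan workflow, with `my-demo-app` as a placeholder image name; `exit-code: "1"` is what turns findings into a failed (red) pipeline:

```yaml
# .github/workflows/scan.yml -- build the image, then scan it with Trivy.
name: Security scan

on:
  push:
    branches: [main]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t my-demo-app:latest .
      - uses: aquasecurity/trivy-action@master
        with:
          image-ref: my-demo-app:latest
          severity: HIGH,CRITICAL
          exit-code: "1"   # fail the job if any High/Critical CVE is found
```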

10. Build Your "DevOps Portfolio" Repo

Tags: Career, Portfolio

The Goal: A central hub that links to all the cool stuff you just built.

You have built 9 projects. Don't let them get lost. Create a "master" repository that serves as your professional portfolio.

Core Steps:

  1. Create a repo named `devops-portfolio` (or use your GitHub Profile README).
  2. Write a beautiful `README.md` with a table of contents.
  3. For each project, write a 2-sentence summary and link to the repo.
  4. Include architecture diagrams (use draw.io) for the complex ones.
  5. Showcase it: Pin this repo to your GitHub profile so it's the first thing recruiters see.
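The structure can be as simple as a table; this sketch uses placeholder project names and `#` stand-in links for your real repo URLs:

```markdown
# DevOps Portfolio

Hands-on projects covering CI/CD, containers, IaC, and monitoring.

| # | Project                        | Tools                      | Repo      |
|---|--------------------------------|----------------------------|-----------|
| 1 | CI tests with GitHub Actions   | GitHub Actions, Jest       | [repo](#) |
| 2 | Dockerized web app             | Docker                     | [repo](#) |
| 4 | VM provisioning as code        | Terraform, AWS             | [repo](#) |
| 6 | Monitoring stack               | Prometheus, Grafana        | [repo](#) |
```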

Conclusion: Start Building Today

There you have it: 10 projects that will take you from "I've watched a tutorial" to "I can build real pipelines and infrastructure."

You don't need to do all of them. Even completing Project #1 (CI Pipeline) and Project #2 (Docker) puts you ahead of the many applicants who only have "aspiring DevOps engineer" in their bio. The key is to start small, document your wins, and keep shipping.

If you found this list helpful, start with Project #1 this weekend, and don't forget to check out our Full CI/CD Tutorial if you need a step-by-step guide to get started.