AWS Spot Instances: Run Your Staging Server for $2/Month
Stop paying $50+/month for staging servers. Here's how to set up a complete staging environment with AWS Spot Instances, auto-deploy via GitHub Actions, and free SSL.
Table of Contents
- What You'll Build
- Why Spot Instances for Staging?
- Step 1: IAM Setup
- Step 2: Launch the Spot Instance
- Step 3: Connect and Deploy
- Step 4: Nginx Reverse Proxy
- Step 5: Free SSL with Certbot
- Step 6: Cloudflare DNS
- Auto-Update IP on Reboot
- Step 7: Auto-Deploy with GitHub Actions
- Staging
- Production
- Branch Flow
- Common Gotchas
- Cost Breakdown
- Bottom Line
Your staging server doesn't need to cost $50/month.
AWS Spot Instances give you the same compute power at up to 90% off. A t3.micro in Ohio costs ~$2.20/month on spot pricing. That's enough to run Docker, PostgreSQL, Nginx, and your app.
Here's the complete setup.
What You'll Build
Developer → Push to develop branch
→ GitHub Actions → AWS SSM → Spot Instance (pull & rebuild)
User → https://api-staging.yourdomain.com
→ Cloudflare DNS → Certbot SSL → Nginx → Docker → Your App
Total cost: ~$2.70/month.
Why Spot Instances for Staging?
| | On-Demand | Spot |
|---|---|---|
| t3.micro (Ohio) | ~$7.50/mo | ~$2.20/mo |
| Interruption risk | None | Rare (~5%) |
| Good for staging? | Overkill | Perfect |
Spot instances can be reclaimed by AWS, but for staging that's fine. Your data lives in PostgreSQL and S3 — the instance is disposable.
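The arithmetic behind those numbers, using illustrative hourly rates (spot prices vary by region and hour; check the EC2 console or `aws ec2 describe-spot-price-history` for live figures):

```shell
# ~730 hours in a month; the rates below are illustrative, not quotes
awk 'BEGIN { printf "spot:      $%.2f/mo\n", 0.0031 * 730 }'
awk 'BEGIN { printf "on-demand: $%.2f/mo\n", 0.0104 * 730 }'
```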
Step 1: IAM Setup
You need two IAM resources:
IAM Role (for the EC2 instance to talk to SSM):
- IAM → Roles → Create role
- Trusted entity → AWS service → EC2 Role for AWS Systems Manager
- Name it your-app-ec2-ssm
IAM User (for GitHub Actions to trigger deployments):
- IAM → Users → Create user → your-app-github-actions
- Don't check console access
- Attach: AmazonSSMFullAccess + AmazonS3FullAccess
- Create access key → Third-party service → Save both keys
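The full-access managed policies get you running fast but are broader than a deploy pipeline needs. If you'd rather scope the GitHub Actions user down, a minimal inline policy along these lines should cover SSM deploys (a sketch, not an audited policy; add S3 permissions separately if your app needs them):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:SendCommand",
        "ssm:GetCommandInvocation",
        "ssm:ListCommandInvocations"
      ],
      "Resource": "*"
    }
  ]
}
```

For the EC2 role side, the "EC2 Role for AWS Systems Manager" use case attaches the standard AmazonSSMManagedInstanceCore managed policy, which is already least-privilege for SSM.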
Step 2: Launch the Spot Instance
EC2 → Launch instances:
| Setting | Value |
|---|---|
| AMI | Ubuntu 24.04 LTS |
| Type | t3.micro |
| Storage | 20GB gp3 |
| Key pair | Create new, download .pem |
Security group inbound rules:
| Port | Source | Purpose |
|---|---|---|
| 22 | My IP | SSH |
| 80 | 0.0.0.0/0 | HTTP |
| 443 | 0.0.0.0/0 | HTTPS |
Under Advanced details:
- Purchasing option → Spot instances
- IAM instance profile → Select your SSM role
- User data → Paste this:
#!/bin/bash
set -e
# 2GB swap prevents OOM on 1GB instances
fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab
# Install everything
apt-get update
apt-get install -y docker.io git postgresql \
postgresql-contrib nginx certbot python3-certbot-nginx
# Docker Compose v2 (v1 from apt causes ContainerConfig errors)
curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
# Docker
systemctl enable docker && systemctl start docker
usermod -aG docker ubuntu
# PostgreSQL
systemctl enable postgresql && systemctl start postgresql
sudo -u postgres psql -c "ALTER USER postgres PASSWORD 'password';"
sudo -u postgres psql -c "CREATE DATABASE your_db;"
PG_HBA=$(find /etc/postgresql -name pg_hba.conf)
PG_CONF=$(find /etc/postgresql -name postgresql.conf)
echo "host all all 172.16.0.0/12 md5" >> "$PG_HBA"
sed -i "s/#listen_addresses = 'localhost'/listen_addresses = '*'/" "$PG_CONF"
systemctl restart postgresql
# SSH key for GitHub
sudo -u ubuntu ssh-keygen -t ed25519 -C "staging" \
-f /home/ubuntu/.ssh/id_ed25519 -N ""
sudo -u ubuntu ssh-keyscan github.com >> /home/ubuntu/.ssh/known_hosts
chown ubuntu:ubuntu /home/ubuntu/.ssh/known_hosts  # the >> redirect runs as root
# App directory
mkdir -p /app && chown ubuntu:ubuntu /app
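This script runs once via cloud-init on first boot. Once you're SSH'd in (next step), it's worth confirming it actually finished before debugging anything else:

```shell
cloud-init status            # "status: done" once user data has completed
swapon --show                # should list /swapfile
systemctl is-active docker postgresql
sudo tail -50 /var/log/cloud-init-output.log   # full script output if something failed
```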
Step 3: Connect and Deploy
ssh -i your-key.pem ubuntu@<PUBLIC_IP>
# Get the SSH key, add to GitHub → Settings → SSH Keys
cat ~/.ssh/id_ed25519.pub
# Test connection
ssh -T git@github.com
# Clone and run
cd /app
git clone git@github.com:YourOrg/YourRepo.git your-app
cd your-app
git checkout develop
nano .env # add your env vars
docker-compose up --build -d
Important: Inside Docker, localhost means the container, not your host. Use host.docker.internal for database connections:
DB_HOST=host.docker.internal
DATABASE_URL=postgresql://postgres:password@host.docker.internal:5432/your_db
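On Linux, host.docker.internal doesn't resolve inside containers by default (see Common Gotchas). A minimal docker-compose fragment that maps it to the host gateway (the service name app is illustrative):

```yaml
services:
  app:
    extra_hosts:
      - "host.docker.internal:host-gateway"
```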
Step 4: Nginx Reverse Proxy
Don't expose your app's port (7000 in this example) directly. Put Nginx in front:
sudo nano /etc/nginx/sites-available/your-app
server {
listen 80;
listen [::]:80;
server_name api-staging.yourdomain.com;
location / {
proxy_pass http://127.0.0.1:7000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
sudo ln -s /etc/nginx/sites-available/your-app /etc/nginx/sites-enabled/
sudo rm /etc/nginx/sites-enabled/default
sudo nginx -t && sudo systemctl restart nginx
Step 5: Free SSL with Certbot
One command:
sudo certbot --nginx -d api-staging.yourdomain.com \
--non-interactive --agree-tos -m your-email@example.com
Certificates are valid for 90 days; Certbot installs a systemd timer that renews them automatically before expiry. Done.
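If you want to confirm renewal will actually work, a dry run exercises the whole flow without touching your live certificate:

```shell
sudo certbot renew --dry-run
systemctl list-timers certbot.timer   # the timer that triggers renewal
```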
Step 6: Cloudflare DNS
Add an A record pointing to your instance IP. Use DNS only (grey cloud) since Certbot handles SSL.
| Type | Name | Content | Proxy |
|---|---|---|---|
| A | api-staging | your-instance-ip | DNS only |
Auto-Update IP on Reboot
Spot instances get a new public IP whenever they stop and start (e.g. after an interruption). Fix that with a cron job:
sudo nano /app/update-dns.sh
#!/bin/bash
IP=$(curl -s http://checkip.amazonaws.com)
curl -s -X PUT \
"https://api.cloudflare.com/client/v4/zones/ZONE_ID/dns_records/RECORD_ID" \
-H "Authorization: Bearer CF_TOKEN" \
-H "Content-Type: application/json" \
--data "{\"type\":\"A\",\"name\":\"api-staging\",\"content\":\"$IP\",\"ttl\":60,\"proxied\":false}"
sudo chmod +x /app/update-dns.sh
sudo crontab -e
# Add: @reboot sleep 15 && /app/update-dns.sh
Get your Cloudflare Zone ID from the dashboard sidebar. Create an API token with Edit zone DNS permissions.
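One fragility in the script above: if checkip.amazonaws.com times out, $IP is empty and the curl would push a blank A record. A small guard (hypothetical hardening, not part of the original script) avoids that:

```shell
#!/bin/bash
# Validate the lookup result before pushing it to Cloudflare, so a
# failed curl never blanks the DNS record.
is_ipv4() {
  [[ "$1" =~ ^[0-9]{1,3}(\.[0-9]{1,3}){3}$ ]]
}

# In update-dns.sh you'd guard the lookup like this:
#   IP=$(curl -s --max-time 10 http://checkip.amazonaws.com)
#   is_ipv4 "$IP" || { echo "bad IP: '$IP', skipping update" >&2; exit 1; }

is_ipv4 "203.0.113.10" && echo "valid"
is_ipv4 "checkip timed out" || echo "rejected"
```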
Step 7: Auto-Deploy with GitHub Actions
Staging (SSM — no IP needed)
SSM sends commands to your instance by ID, not IP. Perfect for spot instances.
name: Deploy to Staging
on:
push:
branches: [develop]
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.AWS_REGION }}
- name: Deploy via SSM
run: |
aws ssm send-command \
--instance-ids "${{ secrets.STAGING_INSTANCE_ID }}" \
--document-name "AWS-RunShellScript" \
--parameters 'commands=["cd /app/your-app && git pull origin develop && docker-compose up --build -d"]'
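Before pushing to develop, it's worth confirming SSM can see the instance at all (this assumes the AWS CLI is configured locally with the same IAM user):

```shell
aws ssm describe-instance-information \
  --query 'InstanceInformationList[].[InstanceId,PingStatus]' \
  --output text
# Your instance ID alongside "Online" means deploys will work; if the
# instance is missing, see the IAM role gotcha under Common Gotchas.
```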
Production (SSH)
For production, use a regular instance with a static IP and SSH:
name: Deploy to Production
on:
push:
branches: [main]
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: appleboy/ssh-action@v1
with:
host: ${{ secrets.PROD_HOST }}
username: ${{ secrets.PROD_USER }}
key: ${{ secrets.PROD_SSH_KEY }}
script: |
cd /app/your-app
git pull origin main
docker-compose up --build -d
Branch Flow
staging → PR to develop → CI checks → merge → deploy to spot (SSM)
develop → PR to main → CI checks → merge → deploy to production (SSH)
Common Gotchas
IAM role MUST be attached at launch. This is the #1 missed step. If you forget to select the IAM instance profile when launching, SSM won't work. The agent runs but fails silently with EC2RoleProvider ERROR in its logs. GitHub Actions will fail with InvalidInstanceId. Fix: EC2 → Instance → Actions → Security → Modify IAM role → select your SSM role → restart the agent with sudo systemctl restart snap.amazon-ssm-agent.amazon-ssm-agent.
Old docker-compose v1 breaks with newer Docker. Ubuntu's apt install docker-compose gives you v1.29 which throws ContainerConfig KeyError. Install v2 instead:
sudo apt remove docker-compose -y
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
hash -r
Pydantic rejects extra env vars. If your .env has variables not defined in your Settings class, Pydantic throws Extra inputs are not permitted. Add extra = "ignore" to your Settings Config class.
Debian Buster is dead. Use python:3.10-slim-bookworm in your Dockerfile, not buster. The repos are gone.
Cloudflare free SSL only covers one subdomain level. api-staging.domain.com works. api.staging.domain.com doesn't. Use hyphens, not dots.
Docker can't reach localhost. Inside a container, localhost is the container itself. Use host.docker.internal with extra_hosts: "host.docker.internal:host-gateway" in docker-compose.
Nginx needs listen 80; AND listen [::]:80;. Include both IPv4 and IPv6 listeners. If you install Certbot and later remove the cert, it may strip the listen directive. Always check.
Stop before rebuilding. If a deploy triggers while the previous build is still running, you get conflicts. Run docker-compose down before docker-compose up --build -d. For code-only changes, docker-compose restart is faster, though it still briefly restarts the containers.
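A deploy helper can encode that last rule: full down/rebuild when Docker or dependency files changed, plain restart otherwise. The file patterns here are assumptions; adjust them for your stack:

```shell
#!/bin/bash
# Sketch: pick a fast restart unless the pull touched files that
# require an image rebuild. File patterns are illustrative.
needs_rebuild() {
  echo "$1" | grep -qE '(^|/)(Dockerfile|docker-compose\.yml|requirements\.txt)$'
}

# In a deploy script you'd feed it the pulled diff:
#   git pull origin develop
#   CHANGED=$(git diff --name-only 'HEAD@{1}' HEAD)
#   if needs_rebuild "$CHANGED"; then
#     docker-compose down && docker-compose up --build -d
#   else
#     docker-compose restart
#   fi

needs_rebuild "app/main.py" || echo "restart is enough"
needs_rebuild "Dockerfile" && echo "full rebuild"
```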
Cost Breakdown
| Resource | Monthly |
|---|---|
| Spot t3.micro (Ohio) | $2.20 |
| S3 (small usage) | $0.50 |
| Cloudflare DNS | Free |
| Certbot SSL | Free |
| GitHub Actions | Free |
| Total | ~$2.70 |
Compare that to Amplify, Heroku, or Railway for a staging environment. Not even close.
Bottom Line
Spot instances are perfect for staging. Cheap, disposable, and with SSM you don't even need a static IP for deployments.
Setup takes about 30 minutes. Saves you $50+/month.
Need help setting up your staging environment?
We build and deploy applications for teams across Nigeria and beyond.
📞 WhatsApp: +234 708 711 0468 📧 info@www.raspibtech.com 📍 Lagos Island