How to install Mastodon 4.4.5 on Kubernetes
Complete step-by-step guide to deploy Mastodon 4.4.5 on Kubernetes with PostgreSQL 17, Redis, Sidekiq autoscaling, automated backups & cleanup. Includes working YAML configs, secure secrets with kubeseal, and GitOps-ready setup. Perfect for production deployment.

Intro
If you've landed on this page, it means you, like me, have faced the task of installing Mastodon on your Kubernetes cluster. It took me quite a while to get this working, as I only recently started learning Kubernetes and came across many ways to do it - some didn't fit my specific case, others were already outdated. So I decided to write this guide for people like me, who want to get everything running first, see that it works, and only then move on to fine-tuning.
Disclaimer: I'm not an expert and obviously this isn't the best or most correct way to install Mastodon on Kubernetes. I'm deliberately not using Helm to have as much control as possible over everything and to learn more about working with Kubernetes.
I use a GitOps approach with ArgoCD, so I'll stick to the same workflow here. But you can apply the YAML files without pushing them to Git - it doesn't matter much. What matters most is that I'm sharing complete working configs that are running successfully and reliably right now on my personal Mastodon instance dol.social. If you follow this guide exactly, everything will work for you too. If you run into difficulties or errors, I'd be happy to hear your comments.
Also, if you have any suggestions - I'd be happy to hear them too. This is important to me.
Well then, let's begin.
My config
- Kubernetes version 1.33
- 1 load balancer
- 3 nodes with 4GB RAM, 2 cores and 20GB SSD each
- cert-manager for automatic certificate generation
- nginx ingress (you can use Traefik or another one, it's not essential)
- kubeseal for secure secret storage
What we will be installing
- Mastodon 4.4.5
- PostgreSQL 17.6 (the latest at the time of writing, for future-proofing)
- Redis 7
- Mastodon Streaming
- Mastodon Sidekiq
- S3-Compatible Object Storage with CDN
- Cron job for automatic daily cleanup of outdated media files, so storage doesn't grow infinitely
- Cron job for automatic daily full backup of database and files via rclone with subsequent upload to AWS S3
- Horizontal autoscaler for sidekiq, so that additional pods are launched during peak loads instead of one giant pod
I decided not to install Elasticsearch, as it consumes a lot of RAM and doesn't make much sense for my personal instance. If it becomes necessary, let me know and I'll write instructions for that too.
Getting Started
First, make sure you have all the necessary programs and tools installed.
I recommend using kubeseal for secure secret storage. I use macOS, so I can only guarantee the functionality and accuracy of the commands for this platform.
But first, we need to install the Sealed Secrets controller in the cluster, if it's not already installed:
kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.32.2/controller.yaml
After this, you can proceed to install Kubeseal on your local machine.
To install kubeseal on macOS, you can use brew:
brew install kubeseal
For Linux, you can use the following commands:
# Latest version as of 2025-09-28
export KUBESEAL_VERSION="0.32.2"
curl -OL "https://github.com/bitnami-labs/sealed-secrets/releases/download/v${KUBESEAL_VERSION}/kubeseal-${KUBESEAL_VERSION}-linux-amd64.tar.gz"
tar -xvzf kubeseal-${KUBESEAL_VERSION}-linux-amd64.tar.gz kubeseal
sudo install -m 755 kubeseal /usr/local/bin/kubeseal
rm kubeseal-${KUBESEAL_VERSION}-linux-amd64.tar.gz
If you have Go installed, it can be done with one command directly from source:
go install github.com/bitnami-labs/sealed-secrets/cmd/kubeseal@main
Now check the installation and ensure everything is working correctly:
kubeseal --version
It should output:
kubeseal version: v0.32.2
If everything is okay, we can move on to the next step.
Creating a Namespace
I chose the namespace 'mastodon' but you can choose another.
apiVersion: v1
kind: Namespace
metadata:
  name: mastodon
namespace.yaml
kubectl apply -f namespace.yaml
Apply it
Ingress Configuration
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mastodon-ingress
  namespace: mastodon
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - your-domain.com
      secretName: your-domain.com-tls
  rules:
    - host: your-domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: mastodon-web
                port:
                  number: 80
          - path: /api/v1/streaming
            pathType: Prefix
            backend:
              service:
                name: mastodon-streaming
                port:
                  number: 4000
ingress.yaml
kubectl apply -f ingress.yaml
Apply it
Creating PVCs
Let's create PVCs for Redis and Postgres. You could also use StatefulSets, but in my case I went with plain Deployments and PVCs, as I don't plan on scaling Postgres for now. I may update or supplement this guide later. For Ghost and Umami Analytics I used StatefulSets, and instructions for those will come later.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: mastodon
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
pvc-postgres.yaml
kubectl apply -f pvc-postgres.yaml
Apply it
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pvc
  namespace: mastodon
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
pvc-redis.yaml
kubectl apply -f pvc-redis.yaml
Apply it
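One more PVC is needed further down the road: the mastodon-web deployment mounts a claim named mastodon-system-pvc for local system files, and it isn't defined anywhere else in this guide. Here's a minimal claim for it (the 5Gi size is my assumption - adjust it to your needs):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mastodon-system-pvc
  namespace: mastodon
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
pvc-mastodon-system.yaml
kubectl apply -f pvc-mastodon-system.yaml
Apply it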
Creating Services
Postgres Service
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: mastodon
spec:
  ports:
    - port: 5432
  selector:
    app: postgres
  clusterIP: None
svc-postgres.yaml
kubectl apply -f svc-postgres.yaml
Apply it
Mastodon Web Service
apiVersion: v1
kind: Service
metadata:
  name: mastodon-web
  namespace: mastodon
spec:
  selector:
    app: mastodon-web
  ports:
    - port: 80
      targetPort: 3000
svc-mastodon-web.yaml
kubectl apply -f svc-mastodon-web.yaml
Apply it
Redis Service
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: mastodon
spec:
  ports:
    - port: 6379
  selector:
    app: redis
  clusterIP: None
svc-redis.yaml
kubectl apply -f svc-redis.yaml
Apply it
Mastodon Streaming Service
apiVersion: v1
kind: Service
metadata:
  name: mastodon-streaming
  namespace: mastodon
spec:
  selector:
    app: mastodon-streaming
  ports:
    - port: 4000
      targetPort: 4000
svc-mastodon-streaming.yaml
kubectl apply -f svc-mastodon-streaming.yaml
Apply it
Creating Secrets
To create the necessary secrets for Mastodon, we first need to generate a set of constants that must never be changed once set in our environment.
Here's the list:
ACTIVE_RECORD_ENCRYPTION_DETERMINISTIC_KEY=<32 bytes min>
ACTIVE_RECORD_ENCRYPTION_KEY_DERIVATION_SALT=<20 bytes min>
ACTIVE_RECORD_ENCRYPTION_PRIMARY_KEY=<12 bytes min>
SECRET_KEY_BASE=<64 bytes>
OTP_SECRET=<64 bytes>
VAPID_PRIVATE_KEY=<32 bytes>
VAPID_PUBLIC_KEY=<65 bytes>
They're required to run your Mastodon instance.
The easiest way to generate Active Record Encryption Keys is using Docker with a single command:
docker run --rm -it --entrypoint /bin/bash lscr.io/linuxserver/mastodon:latest generate-active-record
After executing the command, you will see output similar to this:
# Do NOT change these variables once they are set
ACTIVE_RECORD_ENCRYPTION_DETERMINISTIC_KEY=YOUR_DETERMINISTIC_KEY_HERE
ACTIVE_RECORD_ENCRYPTION_KEY_DERIVATION_SALT=YOUR_DERIVATION_SALT_KEY
ACTIVE_RECORD_ENCRYPTION_PRIMARY_KEY=YOUR_PRIMARY_KEY
Save them and proceed to the next step.
For SECRET_KEY_BASE and OTP_SECRET, run the following command twice - once for each secret.
docker run --rm -it tootsuite/mastodon:latest sh -c "bundle exec rake secret"
For the VAPID_PRIVATE_KEY and VAPID_PUBLIC_KEY, use the following command:
docker run --rm -it tootsuite/mastodon:latest sh -c "bundle exec rake mastodon:webpush:generate_vapid_key"
Important Notes
- SECRET_KEY_BASE is used for encrypting browser sessions—changing it will break all active sessions.
- OTP_SECRET is used for two-factor authentication—changing it will break 2FA for all users.
- VAPID keys are used for push notifications—changing them will break notifications.
- All these keys must be generated only once and never changed.
Now we need to add all these keys to the mastodon-env secret (or whatever you choose to name it).
Create a file mastodon-env.yaml with the following content (you can modify it according to your requirements).
apiVersion: v1
kind: Secret
metadata:
  name: mastodon-env
  namespace: mastodon
type: Opaque
stringData:
  SECRET_KEY_BASE: "SECRET_YOU_GENERATED_ABOVE"
  OTP_SECRET: "SECRET_YOU_GENERATED_ABOVE"
  VAPID_PRIVATE_KEY: "SECRET_YOU_GENERATED_ABOVE"
  VAPID_PUBLIC_KEY: "SECRET_YOU_GENERATED_ABOVE"
  ACTIVE_RECORD_ENCRYPTION_DETERMINISTIC_KEY: "SECRET_YOU_GENERATED_ABOVE"
  ACTIVE_RECORD_ENCRYPTION_KEY_DERIVATION_SALT: "SECRET_YOU_GENERATED_ABOVE"
  ACTIVE_RECORD_ENCRYPTION_PRIMARY_KEY: "SECRET_YOU_GENERATED_ABOVE"
  SINGLE_USER_MODE: "true"
  LOCAL_DOMAIN: "your-domain.com"
  WEB_DOMAIN: "your-domain.com"
  DB_HOST: "postgres" # or postgres.mastodon.svc.cluster.local
  DB_PASS: "your_postgres_password" # define your secure password here
  REDIS_HOST: "redis" # or redis.mastodon.svc.cluster.local
  REDIS_PORT: "6379"
  SMTP_SERVER: "smtp.eu.mailgun.org" # or your own SMTP server
  SMTP_PORT: "587" # if you use secure SMTP
  SMTP_LOGIN: "mail@your-domain.com"
  SMTP_PASSWORD: "your_smtp_password"
  SMTP_FROM_ADDRESS: "mail@your-domain.com"
  CACHE_REDIS_URL: "redis://redis:6379/1" # or redis://redis.mastodon.svc.cluster.local:6379/1
  RAILS_CACHE_STORE: "redis_cache_store"
  S3_ENABLED: "true" # we'll configure S3 later
  S3_BUCKET: "bucket_name" # change it to your desired bucket name
  S3_REGION: "us-east-1" # your S3 region
  AWS_ACCESS_KEY_ID: "" # your S3 access_key_id here
  AWS_SECRET_ACCESS_KEY: "" # your S3 secret_access_key here
  S3_ENDPOINT: "" # your S3 endpoint here
  S3_FORCE_PATH_STYLE: "true" # recommended if you use an S3-compatible provider
  S3_ALIAS_HOST: "cdn.your-domain.com" # if you use a CDN, put its hostname here
mastodon-env.yaml
Creating Sealed Secrets
To prevent our sensitive data from being stored in plain text, we will use kubeseal to encrypt it with our controller and store only the encrypted version in Git. Never store API keys, passwords, or other sensitive data unencrypted in a Git repo!
Execute the following command:
kubeseal --controller-namespace kube-system --controller-name sealed-secrets-controller --format yaml < mastodon-env.yaml > sealed-mastodon-env.yaml
Then, it's best to delete mastodon-env.yaml and create our sealed secret in the cluster:
kubectl apply -f sealed-mastodon-env.yaml
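To double-check that the controller has unsealed it into a regular Secret, you can run:
kubectl -n mastodon get secret mastodon-env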
Now we have everything to begin creating our deployments.
Creating Deployments
Let's start with Postgres. I chose version 17.6, the latest at the time of writing, as it's fully compatible with Mastodon 4.4.5 and won't need updating even in the event of a major Mastodon upgrade. You may change the version as you wish.
PostgreSQL
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: mastodon
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:17.6
          env:
            - name: POSTGRES_DB
              value: mastodon_production
            - name: POSTGRES_USER
              value: mastodon
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mastodon-env
                  key: DB_PASS
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-data
              mountPath: /var/lib/postgresql/data
            - name: dshm
              mountPath: /dev/shm
      volumes:
        - name: postgres-data
          persistentVolumeClaim:
            claimName: postgres-pvc
        - name: dshm
          emptyDir:
            medium: Memory
            sizeLimit: 256Mi
deployment-postgres.yaml
kubectl apply -f deployment-postgres.yaml
Apply it
Redis
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: mastodon
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7-alpine
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: redis-data
              mountPath: /data
          command:
            - redis-server
            - "--appendonly"
            - "yes"
      volumes:
        - name: redis-data
          persistentVolumeClaim:
            claimName: redis-pvc
deployment-redis.yaml
kubectl apply -f deployment-redis.yaml
Apply it
Streaming
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mastodon-streaming
  namespace: mastodon
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mastodon-streaming
  template:
    metadata:
      labels:
        app: mastodon-streaming
    spec:
      containers:
        - name: streaming
          image: ghcr.io/mastodon/mastodon-streaming:v4.4.5
          envFrom:
            - secretRef:
                name: mastodon-env
          ports:
            - containerPort: 4000
          command:
            - node
            - ./streaming/index.js
deployment-streaming.yaml
kubectl apply -f deployment-streaming.yaml
Apply it
Mastodon Web (Puma)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mastodon-web
  namespace: mastodon
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mastodon-web
  template:
    metadata:
      labels:
        app: mastodon-web
    spec:
      securityContext:
        fsGroup: 1000
      initContainers:
        - name: volume-permissions
          image: busybox
          command: [ "sh", "-c", "chmod -R u+rwX /mastodon/public/system" ]
          securityContext:
            runAsUser: 0
          volumeMounts:
            - mountPath: /mastodon/public/system
              name: system-files
      containers:
        - name: web
          image: tootsuite/mastodon:v4.4.5
          volumeMounts:
            - name: system-files
              mountPath: /mastodon/public/system
          envFrom:
            - secretRef:
                name: mastodon-env
          env:
            - name: WEB_CONCURRENCY
              value: "1"
            - name: MAX_THREADS
              value: "3"
            - name: DB_POOL
              value: "5"
            - name: MALLOC_ARENA_MAX
              value: "2"
            - name: RAILS_MAX_THREADS
              value: "3"
          ports:
            - containerPort: 3000
          command:
            - bundle
            - exec
            - puma
            - -C
            - config/puma.rb
          resources:
            requests:
              cpu: "300m"
              memory: "600Mi"
            limits:
              cpu: "600m"
              memory: "1.2Gi"
      volumes:
        - name: system-files
          persistentVolumeClaim:
            claimName: mastodon-system-pvc
deployment-web.yaml
kubectl apply -f deployment-web.yaml
Apply it
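A note before moving on: on a completely fresh database, the web pod will crash-loop until the Mastodon schema exists. A one-off Job like the sketch below can initialize it; the Job name and filename are my own, and SAFETY_ASSURED=1 is Mastodon's guard flag for running destructive database tasks in production (use db:migrate instead of db:setup if your database already has data):
apiVersion: batch/v1
kind: Job
metadata:
  name: mastodon-db-setup
  namespace: mastodon
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: db-setup
          image: tootsuite/mastodon:v4.4.5
          envFrom:
            - secretRef:
                name: mastodon-env
          env:
            # Mastodon refuses destructive db tasks in production without this flag
            - name: SAFETY_ASSURED
              value: "1"
          command: ["bundle", "exec", "rails", "db:setup"]
job-db-setup.yaml
kubectl apply -f job-db-setup.yaml
Apply it once and delete the Job after it completes.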
Sidekiq
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mastodon-sidekiq
  namespace: mastodon
spec:
  selector:
    matchLabels:
      app: mastodon-sidekiq
  template:
    metadata:
      labels:
        app: mastodon-sidekiq
    spec:
      containers:
        - name: sidekiq
          image: tootsuite/mastodon:v4.4.5
          command:
            - bash
            - -lc
            - >
              bundle exec sidekiq
              -c 5
              -q default,8
              -q push,6
              -q ingress,4
              -q mailers,2
              -q fasp,2
              -q pull,1
              -q scheduler,1
          envFrom:
            - secretRef:
                name: mastodon-env
          env:
            - name: DB_POOL
              value: "5"
            - name: MALLOC_ARENA_MAX
              value: "2"
          resources:
            requests:
              cpu: "500m"
              memory: "300Mi"
            limits:
              cpu: "1000m"
              memory: "600Mi"
deployment-sidekiq.yaml
kubectl apply -f deployment-sidekiq.yaml
Apply it
Sidekiq Autoscaler
I preferred a horizontal autoscaler for Sidekiq to distribute resources evenly across nodes rather than running one giant process eating 2 CPUs and 3GB of RAM. It's one of the most demanding services, so I allow up to 3 replicas so that additional pods start during peak loads. You may skip this if everything works well for you and there's no intensive federation.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: mastodon-sidekiq-hpa
  namespace: mastodon
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mastodon-sidekiq
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 90
autoscaler-sidekiq.yaml
kubectl apply -f autoscaler-sidekiq.yaml
Apply it
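Keep in mind that the HPA can only read CPU usage if metrics-server is installed in your cluster. Once it's running, you can check the current and target utilization with:
kubectl -n mastodon get hpa mastodon-sidekiq-hpa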
CronJobs
To tackle the infinite growth of object storage, I run a daily cron job that automatically cleans up unnecessary media files.
Here's my simple CronJob that runs automatically every day at 3 AM to keep storage under control. You can edit it as needed. I've added --verbose flags to make errors and overall progress easy to track.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mastodon-media-cleanup
  namespace: mastodon
  labels:
    app: mastodon
    job-type: cleanup
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: mastodon
              image: tootsuite/mastodon:v4.4.5
              command: ["/bin/bash"]
              args:
                - "-c"
                - |
                  RAILS_ENV=production bin/tootctl media remove --days=7 --verbose && \
                  RAILS_ENV=production bin/tootctl media remove-orphans && \
                  RAILS_ENV=production bin/tootctl preview_cards remove --days=7 --verbose && \
                  RAILS_ENV=production bin/tootctl cache clear && \
                  RAILS_ENV=production bin/tootctl accounts prune && \
                  RAILS_ENV=production bin/tootctl statuses remove --days=7 --verbose
              envFrom:
                - secretRef:
                    name: mastodon-env
          restartPolicy: OnFailure
mastodon-media-cleanup.yaml
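kubectl apply -f mastodon-media-cleanup.yaml
Apply it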
Backup
This is one of the more tedious tasks, as you'll need to build and publish your own package to GitHub (or another registry of your choice - and if you know of an existing option that does this better, let me know).
I needed to run pg_dump against Postgres 17 and to copy media files from my S3-compatible provider to AWS S3 with rclone, so building my own image with a Bash script was unavoidable.
For this, we need to install the AWS CLI, rclone, and PostgreSQL Client 17.
First, create postgres-backup.sh.
Make sure your database user is mastodon and the database name is mastodon_production; otherwise, change them in the script.
Also, change the rclone configuration name (RCLONE_CONFIG_NAME) and the bucket name YOUR_MASTODON_BUCKET to yours.
For the destination, don't forget to change YOUR_S3_DESTINATION_BUCKET to your S3 bucket name.
If both your source and destination are plain S3, you can adjust the script at your discretion - for example, drop rclone entirely. This is my personal example of a working script.
#!/bin/bash
cd /home/root

# Timestamped archive name, e.g. mastodon-backup-20250928-0400.tar.gz
date1=$(date +%Y%m%d-%H%M)
backup_dir="mastodon-backup"
file_name="mastodon-backup-$date1.tar.gz"

mkdir -p $backup_dir/db
mkdir -p $backup_dir/media/accounts
mkdir -p $backup_dir/media/media_attachments

# Dump the database in PostgreSQL custom format (-Fc), restorable with pg_restore
PGPASSWORD="$PG_PASS" pg_dump -h postgres -p 5432 -U mastodon -d mastodon_production -Fc > $backup_dir/db/mastodon-db.dump
if [ $? -ne 0 ]; then
    echo "Error: Failed to create PostgreSQL dump"
    exit 1
fi

# Mirror media from the S3-compatible source bucket into the backup directory
rclone sync RCLONE_CONFIG_NAME:YOUR_MASTODON_BUCKET/accounts/ $backup_dir/media/accounts/ --config=/home/root/rclone.conf
if [ $? -ne 0 ]; then
    echo "Error: Failed to copy accounts media files with rclone"
    exit 1
fi

rclone sync RCLONE_CONFIG_NAME:YOUR_MASTODON_BUCKET/media_attachments/ $backup_dir/media/media_attachments/ --config=/home/root/rclone.conf
if [ $? -ne 0 ]; then
    echo "Error: Failed to copy media attachments with rclone"
    exit 1
fi

# Sanity check: warn if the synced media looks suspiciously small
media_size=$(du -sb $backup_dir/media | cut -f1)
if [ "$media_size" -lt 1024 ]; then
    echo "Warning: Media files size is too small ($media_size bytes)"
fi

tar -zcvf $file_name $backup_dir
if [ $? -ne 0 ]; then
    echo "Error: Failed to create archive"
    exit 1
fi

# Upload the archive to AWS S3 only if it is non-trivially sized
if [ "$(stat -c %s $file_name)" -gt 10 ]; then
    aws s3 cp $file_name s3://YOUR_S3_DESTINATION_BUCKET/
    if [ $? -ne 0 ]; then
        echo "Error: Failed to upload backup to S3"
        exit 1
    fi
    echo "Backup successful: $file_name"
else
    echo "Backup failed: Archive size too small"
    exit 1
fi

# Clean up local artifacts
rm -rf $backup_dir $file_name
postgres-backup.sh
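For reference: since the dump is created in PostgreSQL's custom format (-Fc), it can later be restored with pg_restore. A sketch using the same host, user, and database names as the script above:
PGPASSWORD="$PG_PASS" pg_restore -h postgres -p 5432 -U mastodon -d mastodon_production --clean --if-exists mastodon-backup/db/mastodon-db.dump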
Here’s my Dockerfile (remove unnecessary parts if your config differs)
FROM ubuntu:24.04
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y python3 python3-venv curl gnupg2 unzip

# Install the AWS CLI in an isolated virtualenv
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
RUN curl -sO https://bootstrap.pypa.io/get-pip.py && python3 get-pip.py
RUN pip install awscli

# Add the PGDG repository and install the PostgreSQL 17 client (for pg_dump)
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ noble-pgdg main" | tee /etc/apt/sources.list.d/pgdg.list
RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 7FCC7D46ACCC4CF8
RUN apt update && apt -y install postgresql-client-17

# Install rclone
RUN curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip && \
    unzip rclone-current-linux-amd64.zip && \
    mv rclone-*/rclone /usr/bin/ && \
    rm -rf rclone-*

# Copy the backup script and rclone config into the image
COPY postgres-backup.sh /home/root/
COPY rclone.conf /home/root/
RUN chmod +x /home/root/postgres-backup.sh
USER root
CMD ["/bin/bash"]
Dockerfile
If you already have a configured rclone.conf, just use it, or create a new one following this example. I use Infomaniak Public Cloud, which works via Swift, so your config might differ - double-check this.
[infomaniak]
type = swift
env_auth = true
user = YOUR_USER
key = YOUR_PASSWORD
auth = YOUR_AUTH_URL
domain = default
tenant = YOUR_TENANT
tenant_domain = default
region = YOUR_REGION
rclone.conf
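Before baking the config into the image, it's worth sanity-checking that rclone can actually reach your storage, for example:
rclone lsd infomaniak: --config=rclone.conf
It should list the buckets in your project, including YOUR_MASTODON_BUCKET.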
Create or use your existing repository for the upcoming package upload.
Now build the image.
docker build --progress=plain -f Dockerfile -t ghcr.io/YOUR_GITHUB_ACCOUNT/YOUR_REPO:latest --platform linux/amd64 .
If you work on macOS with an Apple Silicon (ARM) chip, specifying --platform linux/amd64 is mandatory; otherwise your image will be built for ARM and won't run on an x86 server.
If everything is okay, upload your image.
docker push ghcr.io/YOUR_GITHUB_ACCOUNT/YOUR_REPO:latest
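If the push is rejected with an authentication error, log in to ghcr.io first with a GitHub personal access token that has the write:packages scope:
echo YOUR_TOKEN | docker login ghcr.io -u YOUR_GITHUB_ACCOUNT --password-stdin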
If successful, proceed with the setup.
Now create the ghcr and aws secrets. The easiest way is to create them with these commands:
kubectl create secret docker-registry mastodon-ghcr-secret \
--docker-server=ghcr.io \
--docker-username=YOUR_USERNAME \
--docker-password=YOUR_TOKEN \
--namespace=mastodon \
--dry-run=client -o yaml > mastodon-ghcr-secret.yaml
And
kubectl create secret generic mastodon-backup-secrets \
--from-literal=aws_access_key_id=YOUR_ACCESS_KEY \
--from-literal=aws_secret_access_key=YOUR_SECRET_KEY \
--from-literal=pg_pass=YOUR_PASSWORD \
--namespace=mastodon \
--dry-run=client -o yaml > mastodon-backup-secret.yaml
You will get files in the following format:
apiVersion: v1
kind: Secret
metadata:
  name: mastodon-ghcr-secret
  namespace: mastodon
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: YOUR_BASE64_AUTH_DATA
mastodon-ghcr-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mastodon-backup-secrets
  namespace: mastodon
type: Opaque
data:
  aws_access_key_id: YOUR_BASE64_AUTH_DATA
  aws_secret_access_key: YOUR_BASE64_AUTH_DATA
  pg_pass: YOUR_BASE64_AUTH_DATA
mastodon-backup-secret.yaml
Now we need to encrypt these secrets using kubeseal.
kubeseal --controller-namespace kube-system --controller-name sealed-secrets-controller --format yaml < mastodon-ghcr-secret.yaml > sealed-mastodon-ghcr-secret.yaml
kubeseal --controller-namespace kube-system --controller-name sealed-secrets-controller --format yaml < mastodon-backup-secret.yaml > sealed-mastodon-backup-secret.yaml
Apply them
kubectl apply -f sealed-mastodon-ghcr-secret.yaml
kubectl apply -f sealed-mastodon-backup-secret.yaml
Don’t forget to delete mastodon-ghcr-secret.yaml and mastodon-backup-secret.yaml.
Now everything's ready to create our backup CronJob, which runs every night at 4 AM.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mastodon-full-backup
  namespace: mastodon
spec:
  schedule: "0 4 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          imagePullSecrets:
            - name: mastodon-ghcr-secret
          containers:
            - name: backup-job
              image: ghcr.io/YOUR_GITHUB_ACCOUNT/YOUR_REPO:latest
              env:
                - name: PG_PASS
                  valueFrom:
                    secretKeyRef:
                      name: mastodon-backup-secrets
                      key: pg_pass
                - name: AWS_ACCESS_KEY_ID
                  valueFrom:
                    secretKeyRef:
                      name: mastodon-backup-secrets
                      key: aws_access_key_id
                - name: AWS_SECRET_ACCESS_KEY
                  valueFrom:
                    secretKeyRef:
                      name: mastodon-backup-secrets
                      key: aws_secret_access_key
                - name: AWS_DEFAULT_REGION
                  value: "eu-central-2"
                - name: S3_BUCKET
                  value: "YOUR_BUCKET/mastodon-backups"
              command: ["/bin/bash", "-c", "cd /home/root && ./postgres-backup.sh"]
              imagePullPolicy: Always
          restartPolicy: OnFailure
      backoffLimit: 3
cron-mastodon-full-backup.yaml
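kubectl apply -f cron-mastodon-full-backup.yaml
Apply it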
Congratulations - now you have (or at least should have) a working Mastodon instance on Kubernetes with automatic daily backups and secure secret storage.
I would appreciate any comments, questions, or suggestions and will update this guide as Kubernetes & Mastodon evolve.