- Complete k8s manifests with Kustomize support
- Production and staging overlays
- ConfigMap/Secret management
- Ingress with TLS (Traefik/NGINX)
- Persistent storage for SQLite
- Comprehensive k8s README with operations guide
- Updated main README with k8s deployment instructions
- Gitignore for k8s secrets

Usage: kubectl apply -k k8s/overlays/production
Kubernetes Deployment
This directory contains Kubernetes manifests for deploying the DWS Dynamic DNS service on K3s, Kubernetes, or any K8s-compatible platform.
Directory Structure
k8s/
├── base/ # Base manifests (don't edit directly)
│ ├── deployment.yaml # Deployment, Service, PVC
│ ├── configmap.yaml # Non-sensitive configuration
│ ├── secrets.yaml # Sensitive configuration (placeholders)
│ ├── ingress.yaml # Ingress with TLS
│ └── kustomization.yaml # Base kustomization
│
├── overlays/
│ ├── production/ # Production environment
│ │ ├── kustomization.yaml # Production-specific settings
│ │ ├── deployment-patch.yaml # Resource adjustments
│ │ ├── namespace.yaml # Production namespace
│ │ ├── secrets.yaml # Production secrets (gitignored)
│ │ └── secrets.example.yaml # Example secrets template
│ │
│ └── staging/ # Staging environment
│ ├── kustomization.yaml # Staging-specific settings
│ ├── deployment-patch.yaml # Single replica, lower resources
│ ├── namespace.yaml # Staging namespace
│ ├── secrets.yaml # Staging secrets (gitignored)
│ └── secrets.example.yaml # Example secrets template
│
└── README.md # This file
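For orientation, the base kustomization simply ties the shared manifests together. A minimal sketch (resource list inferred from the layout above; the common label is an assumption that matches the selectors used later in this guide):

# base/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - configmap.yaml
  - secrets.yaml
  - ingress.yaml
commonLabels:
  app.kubernetes.io/name: dyn-ddns   # assumed label, matching the -l selectors below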
Quick Start
Prerequisites
- Kubernetes 1.21+ or K3s cluster
- kubectl configured with cluster access
- cert-manager installed (for TLS certificates)
- Ingress controller (Traefik, NGINX, etc.)
- Storage class for persistent volumes
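The manifests reference a cert-manager ClusterIssuer named letsencrypt-prod. If your cluster does not already have one, a typical sketch looks like this (the email and solver class are placeholders you must adjust):

# ClusterIssuer sketch (assumes cert-manager is installed; email and solver are placeholders)
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com             # placeholder
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: traefik               # or nginx, to match your ingress controller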
Production Deployment
# 1. Clone and enter directory
git clone https://git.dws.rip/DWS/dyn.git
cd dyn/k8s
# 2. Create production secrets
cp overlays/production/secrets.example.yaml overlays/production/secrets.yaml
# 3. Edit secrets with your Technitium credentials
# Replace 'your-production-api-token-here' with actual token
nano overlays/production/secrets.yaml
# 4. Deploy to production
kubectl apply -k overlays/production
# 5. Verify deployment
kubectl get pods -n dyn-ddns
kubectl get svc -n dyn-ddns
kubectl get ingress -n dyn-ddns
# 6. Check logs
kubectl logs -n dyn-ddns -l app.kubernetes.io/name=dyn-ddns -f
Staging Deployment
# Similar process for staging
cp overlays/staging/secrets.example.yaml overlays/staging/secrets.yaml
# Edit with staging credentials
kubectl apply -k overlays/staging
Configuration
Secrets (Required)
Create overlays/production/secrets.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: dyn-ddns-secrets
type: Opaque
stringData:
  # Choose ONE authentication method:

  # Method 1: API Token (recommended)
  TECHNITIUM_TOKEN: "your-actual-api-token-here"

  # Method 2: Username/Password
  # TECHNITIUM_USERNAME: "admin"
  # TECHNITIUM_PASSWORD: "your-password"

  # Optional: Trusted proxies
  # TRUSTED_PROXIES: "10.0.0.0/8,172.16.0.0/12"
Important: Never commit secrets.yaml to git. It's already in .gitignore.
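If you would rather not keep a standalone secrets.yaml at all, a Kustomize secretGenerator can build the Secret from a local env file instead. A sketch (assumes you drop base/secrets.yaml from the resource list and gitignore the env file):

# overlays/production/kustomization.yaml (sketch: generate the Secret from an env file)
secretGenerator:
  - name: dyn-ddns-secrets
    envs:
      - secrets.env                # lines like TECHNITIUM_TOKEN=..., kept out of git
generatorOptions:
  disableNameSuffixHash: true      # keep the fixed name the Deployment references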
ConfigMap (Optional Overrides)
Edit overlays/production/kustomization.yaml:
configMapGenerator:
  - name: dyn-ddns-config
    behavior: merge
    literals:
      - TECHNITIUM_URL=https://dns.dws.rip
      - BASE_DOMAIN=dws.rip
      - SPACE_SUBDOMAIN=space
      - RATE_LIMIT_PER_IP=10
      - RATE_LIMIT_PER_TOKEN=1
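The Deployment is expected to consume the ConfigMap and Secret as environment variables. A sketch of the relevant container wiring (field placement assumed, not copied from the actual manifest):

# Assumed container wiring inside base/deployment.yaml
containers:
  - name: dyn-ddns
    image: git.dws.rip/DWS/dyn:latest
    envFrom:
      - configMapRef:
          name: dyn-ddns-config
      - secretRef:
          name: dyn-ddns-secrets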
Ingress Customization
By default, the ingress is configured for Traefik with cert-manager:
annotations:
  traefik.ingress.kubernetes.io/router.entrypoints: websecure
  traefik.ingress.kubernetes.io/router.tls: "true"
  cert-manager.io/cluster-issuer: "letsencrypt-prod"
For NGINX ingress, change annotations in base/ingress.yaml:
annotations:
  kubernetes.io/ingress.class: nginx
  cert-manager.io/cluster-issuer: "letsencrypt-prod"
  nginx.ingress.kubernetes.io/ssl-redirect: "true"
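For orientation, a complete ingress along these lines would look roughly as follows (host taken from this README; the TLS secret name, service name, and port are assumptions):

# base/ingress.yaml sketch (TLS secret name and backend details assumed)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dyn-ddns
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - dyn.dws.rip
      secretName: dyn-ddns-tls         # assumed secret name
  rules:
    - host: dyn.dws.rip
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: dyn-ddns         # assumed service name
                port:
                  number: 80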
Architecture
Components
- Deployment - Runs the DDNS bridge container
  - Replicas: 2 (production), 1 (staging)
  - Resource limits configurable per overlay
  - Health checks (liveness & readiness probes)
- Service - ClusterIP exposing port 80
- Ingress - TLS termination at the edge
  - Host: dyn.dws.rip (customize as needed)
  - Automatic certificate via cert-manager
- PersistentVolumeClaim - SQLite database storage
  - Size: 1Gi (adjustable)
  - AccessMode: ReadWriteOnce
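A PVC matching that description would look roughly like this (claim name is an assumption; storage class left to the cluster default):

# PVC sketch (name assumed)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dyn-ddns-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  # storageClassName: local-path    # uncomment to pin a class, e.g. on K3s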
Resource Requirements
Production:
- CPU: 200m request / 1000m limit
- Memory: 128Mi request / 512Mi limit
- Replicas: 2
Staging:
- CPU: 100m request / 500m limit
- Memory: 64Mi request / 256Mi limit
- Replicas: 1
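Expressed as a strategic-merge patch, the production numbers above might look like this (deployment and container names are assumptions):

# overlays/production/deployment-patch.yaml sketch
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dyn-ddns
spec:
  replicas: 2
  template:
    spec:
      containers:
        - name: dyn-ddns
          resources:
            requests:
              cpu: 200m
              memory: 128Mi
            limits:
              cpu: 1000m
              memory: 512Mi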
Operations
View Logs
# All pods
kubectl logs -n dyn-ddns -l app.kubernetes.io/name=dyn-ddns
# Follow logs from the deployment
kubectl logs -n dyn-ddns -f deployment/prod-dyn-ddns
# Previous container logs (after restart)
kubectl logs -n dyn-ddns --previous deployment/prod-dyn-ddns
Scale Deployment
# Scale to 3 replicas
kubectl scale deployment -n dyn-ddns prod-dyn-ddns --replicas=3
# Edit deployment directly
kubectl edit deployment -n dyn-ddns prod-dyn-ddns
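If you would rather scale automatically, a HorizontalPodAutoscaler is an option. A sketch (assumes metrics-server is installed and uses the prod- name prefix from the production overlay; min/max values are illustrative):

# HPA sketch (requires metrics-server)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: dyn-ddns
  namespace: dyn-ddns
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: prod-dyn-ddns
  minReplicas: 2
  maxReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80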
Database Backup
The SQLite database is stored in the persistent volume at /data/dyn.db:
# Find the pod
POD=$(kubectl get pod -n dyn-ddns -l app.kubernetes.io/name=dyn-ddns -o jsonpath='{.items[0].metadata.name}')
# Copy database locally
kubectl cp dyn-ddns/$POD:/data/dyn.db ./dyn-backup-$(date +%Y%m%d).db
# Or exec into pod
kubectl exec -it -n dyn-ddns $POD -- sh
# Then: sqlite3 /data/dyn.db "SELECT * FROM spaces;"
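If your storage class is backed by a CSI driver with snapshot support, a VolumeSnapshot gives you a copy without exec'ing into the pod. A sketch (the snapshot class and PVC name are assumptions for your cluster):

# VolumeSnapshot sketch (requires the snapshot CRDs and a CSI snapshot class)
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: dyn-ddns-backup
  namespace: dyn-ddns
spec:
  volumeSnapshotClassName: csi-snapclass       # assumed class name
  source:
    persistentVolumeClaimName: dyn-ddns-data   # assumed PVC name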
Update Deployment
# Update to latest image
kubectl rollout restart deployment -n dyn-ddns prod-dyn-ddns
# Or set specific image tag
kubectl set image deployment -n dyn-ddns prod-dyn-ddns dyn-ddns=git.dws.rip/DWS/dyn:v1.0.0
# Monitor rollout
kubectl rollout status deployment -n dyn-ddns prod-dyn-ddns
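The declarative alternative is to pin the tag in the overlay kustomization and re-apply. A sketch:

# overlays/production/kustomization.yaml (sketch: pin the image tag declaratively)
images:
  - name: git.dws.rip/DWS/dyn
    newTag: v1.0.0

Then kubectl apply -k overlays/production rolls the Deployment to the new tag.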
Troubleshooting
Pod stuck in Pending:
kubectl describe pod -n dyn-ddns <pod-name>
# Check: Storage class available? PV provisioned?
Pod crash looping:
kubectl logs -n dyn-ddns --previous <pod-name>
# Check: Secrets configured? Technitium URL reachable?
Ingress not working:
kubectl describe ingress -n dyn-ddns prod-dyn-ddns
kubectl get certificate -n dyn-ddns
# Check: DNS pointing to ingress controller? Cert-manager working?
Advanced Usage
Multi-Environment Setup
Deploy to multiple environments:
# Staging
kubectl apply -k overlays/staging
# Production
kubectl apply -k overlays/production
# Verify both
kubectl get pods --all-namespaces -l app.kubernetes.io/name=dyn-ddns
Custom Overlay
Create your own overlay for specific needs:
mkdir overlays/custom
cat > overlays/custom/kustomization.yaml << 'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
namespace: my-namespace
configMapGenerator:
  - name: dyn-ddns-config
    behavior: merge
    literals:
      - TECHNITIUM_URL=https://my-dns-server.example.com
      - RATE_LIMIT_PER_IP=5
EOF
kubectl apply -k overlays/custom
Monitoring
The deployment exposes standard metrics. Add Prometheus scraping:
# Add to deployment-patch.yaml (pod template metadata)
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
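If the cluster runs the Prometheus Operator instead of annotation-based discovery, a ServiceMonitor may be preferable. A sketch (assumes the Service exposes the metrics port under the name http and serves /metrics; adjust to your setup):

# ServiceMonitor sketch (Prometheus Operator; port name and metrics path are assumptions)
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: dyn-ddns
  namespace: dyn-ddns
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: dyn-ddns
  endpoints:
    - port: http          # assumed service port name
      path: /metrics      # assumed metrics path
      interval: 30s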
Security Considerations
- Secrets Management
  - Never commit secrets to git
  - Consider using an external secrets operator (Vault, Sealed Secrets)
  - Rotate Technitium API tokens regularly

- Network Policies

  # Example network policy
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: dyn-ddns-netpol
    namespace: dyn-ddns
  spec:
    podSelector:
      matchLabels:
        app.kubernetes.io/name: dyn-ddns
    policyTypes:
      - Ingress
      - Egress
    ingress:
      - from:
          - namespaceSelector:
              matchLabels:
                name: ingress-nginx
        ports:
          - protocol: TCP
            port: 8080
    egress:
      - to: []    # Allow all egress (for Technitium API)

- Pod Security
  - Container runs as non-root (in Dockerfile)
  - Read-only root filesystem recommended
  - Drop all capabilities
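Those pod-security points can be enforced with a patch. A sketch (the numeric UID is an assumption; verify it against the image):

# Security-context sketch for a deployment patch
spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000                      # assumed non-root UID
      containers:
        - name: dyn-ddns
          securityContext:
            readOnlyRootFilesystem: true
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]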
Maintenance
Regular Tasks
- Backup database weekly
- Monitor rate limit metrics
- Review access logs
- Update base image for security patches
Version Upgrades
- Update image tag in overlay kustomization
- Apply changes:
kubectl apply -k overlays/production
- Verify rollout:
kubectl rollout status ...
- Monitor for errors in logs
Support
For issues specific to Kubernetes deployment:
- Check pod logs:
kubectl logs ...
- Describe resources:
kubectl describe ...
- Check ingress controller logs
- Verify cert-manager is issuing certificates