Production-Ready Kubernetes Templates
These manifests are the ones I run on my production clusters. Copy, adapt, deploy.
Template 1: Complete Deployment
A Deployment with the main good practices baked in: probes, resources, security context.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
    version: v1
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        version: v1
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/metrics"
    spec:
      serviceAccountName: my-app
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 1000
      containers:
        - name: my-app
          image: my-registry/my-app:v1.0.0
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          env:
            - name: APP_ENV
              value: "production"
            - name: LOG_LEVEL
              value: "info"
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: my-app-secrets
                  key: db-password
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
          livenessProbe:
            httpGet:
              path: /health/live
              port: http
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /health/ready
              port: http
            initialDelaySeconds: 5
            periodSeconds: 5
            timeoutSeconds: 3
            failureThreshold: 3
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - ALL
          volumeMounts:
            - name: tmp
              mountPath: /tmp
            - name: config
              mountPath: /app/config
              readOnly: true
      volumes:
        - name: tmp
          emptyDir: {}
        - name: config
          configMap:
            name: my-app-config
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: my-app
                topologyKey: kubernetes.io/hostname
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: my-app
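The probes assume the application exposes /health/live and /health/ready on port 8080; adjust the paths to whatever your framework provides. Once applied, a quick sanity check of the rollout and the probes (a minimal sketch, assuming the default namespace):

# Wait for the rollout to complete (fails after the timeout if pods never become Ready)
kubectl rollout status deployment/my-app --timeout=300s

# Pods should be Running, Ready, and spread across nodes/zones
kubectl get pods -l app=my-app -o wide

# Probe failures and restarts show up in the pod events
kubectl describe pod -l app=my-app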
Template 2: ClusterIP + Headless Service
# Standard Service for load balancing
apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
      targetPort: http
      protocol: TCP
  selector:
    app: my-app
---
# Headless Service for direct pod access (StatefulSet, DNS)
apiVersion: v1
kind: Service
metadata:
  name: my-app-headless
  labels:
    app: my-app
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - name: http
      port: 80
      targetPort: http
  selector:
    app: my-app
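The difference shows up in DNS: the standard Service resolves to a single virtual IP, while the headless Service returns one record per ready pod. A quick check from a throwaway pod (assuming the default namespace and a busybox image, neither of which is part of these templates):

# Standard Service: a single ClusterIP
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup my-app.default.svc.cluster.local

# Headless Service: one A record per pod
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup my-app-headless.default.svc.cluster.local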
Template 3: Ingress with TLS
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "60"
    # Rate limiting
    nginx.ingress.kubernetes.io/limit-rps: "100"
    nginx.ingress.kubernetes.io/limit-connections: "50"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.example.com
      secretName: my-app-tls
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
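This template assumes ingress-nginx and cert-manager are already installed, with a letsencrypt-prod ClusterIssuer configured; neither is defined here. Once DNS points at the ingress controller, a couple of checks that the certificate was issued and the redirect works:

# cert-manager should create and populate the TLS secret referenced above
kubectl get certificate
kubectl get secret my-app-tls

# End-to-end test, following the HTTP -> HTTPS redirect
curl -IL http://api.example.com/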
Template 4: HorizontalPodAutoscaler
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 10
          periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
        - type: Percent
          value: 100
          periodSeconds: 15
        - type: Pods
          value: 4
          periodSeconds: 15
      selectPolicy: Max
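The utilization targets are computed against the resource requests from Template 1, and they require metrics-server (or another metrics API provider) in the cluster. To watch scaling decisions as they happen:

# Current vs. target utilization and the replica count
kubectl get hpa my-app --watch

# Scaling events and the reason for the last decision
kubectl describe hpa my-app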
Template 5: PodDisruptionBudget
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app
spec:
  minAvailable: 2
  # OR: maxUnavailable: 1
  selector:
    matchLabels:
      app: my-app
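A PDB only limits voluntary disruptions (drains, evictions through the eviction API), not crashes or OOM kills. To see it in action during maintenance (the node name below is a placeholder):

# ALLOWED DISRUPTIONS: with 3 replicas and minAvailable: 2, this should read 1
kubectl get pdb my-app

# During a drain, evictions blocked by the budget are retried until pods are rescheduled elsewhere
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data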
Template 6: ConfigMap + Secret
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  config.yaml: |
    server:
      port: 8080
      timeout: 30s
    logging:
      level: info
      format: json
    features:
      cache_enabled: true
      rate_limit: 1000
---
apiVersion: v1
kind: Secret
metadata:
  name: my-app-secrets
type: Opaque
stringData:
  db-password: "CHANGE_ME_IN_PRODUCTION"
  api-key: "CHANGE_ME_IN_PRODUCTION"
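Do not commit real secret values; keep placeholders in Git and create the actual Secret out of band (or use a tool such as Sealed Secrets or External Secrets). A sketch of the imperative alternative, with hypothetical values:

# Generate the manifest locally without ever writing the real value to the repo
kubectl create secret generic my-app-secrets \
  --from-literal=db-password="$(openssl rand -base64 32)" \
  --from-literal=api-key="<real-api-key>" \
  --dry-run=client -o yaml > secret.generated.yaml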
Template 7: ServiceAccount + RBAC
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: my-app
rules:
  - apiGroups: [""]
    resources: ["configmaps", "secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-app
subjects:
  - kind: ServiceAccount
    name: my-app
roleRef:
  kind: Role
  name: my-app
  apiGroup: rbac.authorization.k8s.io
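You can verify the effective permissions by impersonating the ServiceAccount (assuming it lives in the default namespace):

# Should answer "yes": read access granted by the Role
kubectl auth can-i get configmaps --as=system:serviceaccount:default:my-app

# Should answer "no": the Role grants no write verbs
kubectl auth can-i delete secrets --as=system:serviceaccount:default:my-app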
Template 8: NetworkPolicy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-app
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Allow traffic from the ingress controller
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080
    # Allow traffic from pods in the same namespace
    - from:
        - podSelector: {}
      ports:
        - protocol: TCP
          port: 8080
  egress:
    # Allow DNS
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
    # Allow access to the database
    - to:
        - podSelector:
            matchLabels:
              app: postgresql
      ports:
        - protocol: TCP
          port: 5432
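NetworkPolicies are only enforced if the CNI plugin supports them (Calico, Cilium, etc.); on a CNI without support they are silently ignored. A rough positive/negative test, assuming the application image ships a wget or curl binary (an assumption, not part of the template):

# Allowed: a pod in the same namespace reaching the app through the Service (port 8080 behind it)
kubectl run np-test --rm -it --restart=Never --image=busybox:1.36 -- wget -qO- --timeout=5 http://my-app/

# Denied: egress from the app pods is limited to DNS and postgresql, so this should time out
kubectl exec deploy/my-app -- wget -qO- --timeout=5 http://example.com/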
Template 9: StatefulSet (Database)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql
spec:
  serviceName: postgresql-headless
  replicas: 3
  selector:
    matchLabels:
      app: postgresql
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
        - name: postgresql
          image: postgres:15-alpine
          ports:
            - name: postgres
              containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: myapp
            - name: POSTGRES_USER
              value: myapp
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgresql-secrets
                  key: password
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          resources:
            requests:
              cpu: "250m"
              memory: "512Mi"
            limits:
              cpu: "1000m"
              memory: "2Gi"
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
          readinessProbe:
            exec:
              command:
                - pg_isready
                - -U
                - myapp
            initialDelaySeconds: 5
            periodSeconds: 10
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: fast-ssd
        resources:
          requests:
            storage: 50Gi
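This StatefulSet assumes a postgresql-headless headless Service and a postgresql-secrets Secret already exist (same patterns as Templates 2 and 6). Each replica gets its own PersistentVolumeClaim from the volumeClaimTemplate, plus a stable DNS name through the headless Service:

# One PVC per replica: data-postgresql-0, data-postgresql-1, data-postgresql-2
kubectl get pvc

# Stable per-pod DNS, e.g. reaching replica 1 from replica 0 (same namespace assumed)
kubectl exec postgresql-0 -- pg_isready -h postgresql-1.postgresql-headless -U myapp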
Template 10: CronJob
apiVersion: batch/v1
kind: CronJob
metadata:
  name: backup-job
spec:
  schedule: "0 2 * * *"  # Every day at 2 AM
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      backoffLimit: 3
      activeDeadlineSeconds: 3600
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: my-registry/backup-tool:v1
              env:
                - name: BACKUP_BUCKET
                  value: "s3://my-backups"
              resources:
                requests:
                  cpu: "100m"
                  memory: "256Mi"
                limits:
                  cpu: "500m"
                  memory: "512Mi"
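To validate the job without waiting for 2 AM, trigger a one-off run from the CronJob and follow its logs:

# Create a manual Job from the CronJob template
kubectl create job backup-manual --from=cronjob/backup-job

# Follow the run and check the outcome
kubectl logs -f job/backup-manual
kubectl get jobs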
Template 11: ResourceQuota + LimitRange
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
spec:
  hard:
    requests.cpu: "20"
    requests.memory: "40Gi"
    limits.cpu: "40"
    limits.memory: "80Gi"
    pods: "50"
    services: "10"
    secrets: "20"
    configmaps: "20"
    persistentvolumeclaims: "10"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
spec:
  limits:
    - type: Container
      default:
        cpu: "500m"
        memory: "512Mi"
      defaultRequest:
        cpu: "100m"
        memory: "128Mi"
      min:
        cpu: "50m"
        memory: "64Mi"
      max:
        cpu: "2000m"
        memory: "4Gi"
    - type: PersistentVolumeClaim
      min:
        storage: "1Gi"
      max:
        storage: "100Gi"
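Both objects are namespaced, so apply them once per team namespace. To see consumption against the quota and the defaults injected by the LimitRange (team-a is a placeholder namespace):

# Used vs. hard limits for every tracked resource
kubectl describe resourcequota team-quota -n team-a

# Defaults applied to containers that omit requests/limits
kubectl describe limitrange default-limits -n team-a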
Template 12: PriorityClass
# For critical applications
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: critical
value: 1000000
globalDefault: false
description: "For critical pods (databases, etc.)"
---
# For standard applications
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 100000
globalDefault: false
description: "For production applications"
---
# Default for non-critical applications
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: low-priority
value: 1000
globalDefault: true
description: "Default priority"
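Pods opt in via spec.priorityClassName; anything that does not set it falls back to the globalDefault (low-priority here). For example, to move the Deployment from Template 1 to high-priority without editing the manifest, a patch works (it triggers a rolling restart):

kubectl patch deployment my-app --type merge \
  -p '{"spec":{"template":{"spec":{"priorityClassName":"high-priority"}}}}'

# Verify the pods picked it up
kubectl get pods -l app=my-app -o jsonpath='{.items[0].spec.priorityClassName}'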
Deployment Script
#!/bin/bash
set -euo pipefail

NAMESPACE=${1:-default}
APP_NAME=${2:-my-app}

echo "Deploying $APP_NAME to $NAMESPACE..."

# Apply in order
kubectl apply -n "$NAMESPACE" -f rbac.yaml
kubectl apply -n "$NAMESPACE" -f config.yaml
kubectl apply -n "$NAMESPACE" -f networkpolicy.yaml
kubectl apply -n "$NAMESPACE" -f deployment.yaml
kubectl apply -n "$NAMESPACE" -f service.yaml
kubectl apply -n "$NAMESPACE" -f hpa.yaml
kubectl apply -n "$NAMESPACE" -f pdb.yaml
kubectl apply -n "$NAMESPACE" -f ingress.yaml

# Wait for the rollout
kubectl rollout status "deployment/$APP_NAME" -n "$NAMESPACE" --timeout=300s

echo "Deployment complete!"
kubectl get pods -n "$NAMESPACE" -l "app=$APP_NAME"
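Assuming the templates above are saved as the files the script references and the script itself as deploy.sh:

chmod +x deploy.sh
./deploy.sh production my-app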
These templates follow Kubernetes 1.28+ best practices. Adapt the values to your needs.