Intermediate
⭐ Featured article

Docker and Containerization: A Complete Guide for Production

Master Docker and containerization: optimized images, orchestration, security, monitoring, and production deployment.

Published on December 16, 2024 · 21 min read
Author: Florian Courouge
Tags: Docker, Containers, Orchestration, Security, Production, DevOps


Docker has revolutionized application deployment by making containerization straightforward. This guide covers everything you need to master Docker in production, from building optimized images to complex orchestration.

💡Docker Fundamentals

Installation and Configuration

#!/bin/bash
# docker-setup.sh

# Install Docker on Ubuntu
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin

# Add the current user to the docker group (log out and back in to apply)
sudo usermod -aG docker $USER

# Configure the Docker daemon
sudo mkdir -p /etc/docker
cat << 'EOF' | sudo tee /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true,
  "userland-proxy": false,
  "experimental": false,
  "metrics-addr": "127.0.0.1:9323",
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 64000,
      "Soft": 64000
    }
  }
}
EOF

# Restart Docker
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker

# Verify the installation
docker --version
docker compose version
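
A quick smoke test confirms the daemon picked up the new configuration. The metrics endpoint below matches the metrics-addr set in daemon.json above; everything else is standard Docker CLI.

# Run a throwaway container to confirm the engine works end to end
docker run --rm hello-world

# The daemon now exposes Prometheus metrics on the address set in daemon.json
curl -s http://127.0.0.1:9323/metrics | head -n 5

# Confirm the active storage and cgroup drivers
docker info --format 'storage: {{.Driver}} | cgroup: {{.CgroupDriver}}'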

Docker Project Structure

# Recommended layout for a Docker project
# (brace expansion must stay on one line, with no spaces inside the braces)
mkdir -p docker-project/{docker/{development,production,testing},scripts,config,docs,.github/workflows}

# Multi-stage Dockerfile for a Node.js application
cat > docker-project/docker/production/Dockerfile << 'EOF'
# Stage 1: Build
FROM node:18-alpine AS builder

WORKDIR /app

# Copy the dependency manifests
COPY package*.json ./
COPY yarn.lock ./

# Install all dependencies (dev included; needed for the build)
RUN yarn install --frozen-lockfile --production=false

# Copy the source code
COPY . .

# Build the application
RUN yarn build

# Stage 2: Production
FROM node:18-alpine AS production

# Create a non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001

WORKDIR /app

# Install dumb-init for proper signal handling
RUN apk add --no-cache dumb-init

# Install production dependencies only
COPY package*.json ./
COPY yarn.lock ./
RUN yarn install --frozen-lockfile --production=true && \
    yarn cache clean

# Copy the build artifacts from the builder stage
COPY --from=builder --chown=nextjs:nodejs /app/dist ./dist
COPY --from=builder --chown=nextjs:nodejs /app/public ./public

# Run as the non-root user
USER nextjs

# Health check (BusyBox wget ships with Alpine; curl does not)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget -q --spider http://localhost:3000/health || exit 1

# Expose the port
EXPOSE 3000

# Entrypoint via dumb-init so signals reach the Node process
ENTRYPOINT ["dumb-init", "--"]
CMD ["node", "dist/server.js"]
EOF
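
A minimal sketch of how this image is built and exercised locally; the image name, tag, and build context are placeholders.

# Build only the production stage and tag it
docker build -f docker-project/docker/production/Dockerfile \
    --target production -t myapp:prod docker-project/

# Run it and watch the health status move from "starting" to "healthy"
docker run -d --name myapp -p 3000:3000 myapp:prod
docker inspect --format '{{.State.Health.Status}}' myapp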

💡Optimized Docker Images

Optimization Techniques

# Optimized Dockerfile for Python
FROM python:3.11-slim AS base

# Environment variables
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    PIP_NO_CACHE_DIR=1 \
    PIP_DISABLE_PIP_VERSION_CHECK=1

# Build stage
FROM base AS builder

# System packages needed to compile dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    libpq-dev \
    && rm -rf /var/lib/apt/lists/*

# Create a virtual environment
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

# Copy and install the Python dependencies
COPY requirements.txt .
RUN pip install --upgrade pip && \
    pip install -r requirements.txt

# Production stage
FROM base AS production

# Install runtime dependencies only
RUN apt-get update && apt-get install -y \
    libpq5 \
    curl \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get clean

# Create a non-root user
RUN groupadd -r appuser && useradd -r -g appuser appuser

# Copy the virtual environment
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

# Create the working directory
WORKDIR /app
RUN chown -R appuser:appuser /app

# Copy the application
COPY --chown=appuser:appuser . .

# Switch to the non-root user
USER appuser

# Health check (assumes the requests package is listed in requirements.txt)
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD python -c "import requests; requests.get('http://localhost:8000/health', timeout=5)"

EXPOSE 8000

CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "4", "app:app"]

Layer Optimization

# Dockerfile with layer optimization
FROM ubuntu:22.04

# Combine RUN commands to reduce the number of layers
RUN apt-get update && \
    apt-get install -y \
        curl \
        wget \
        git \
        vim \
        htop \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* \
    && rm -rf /tmp/* \
    && rm -rf /var/tmp/*

# Use a .dockerignore file to keep unneeded files out of the build context
# .dockerignore
# node_modules
# npm-debug.log
# .git
# .gitignore
# README.md
# .env
# coverage/
# .nyc_output

# BuildKit optimization (a separate Dockerfile: the syntax directive
# below must be the very first line of that file)
# syntax=docker/dockerfile:1
FROM node:18-alpine

WORKDIR /app

# Mount a cache so repeated builds reuse downloaded npm packages
RUN --mount=type=cache,target=/root/.npm \
    npm install -g npm@latest

COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm \
    npm ci --omit=dev

COPY . .
RUN npm run build
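
BuildKit is the default builder on current Docker releases; on older engines it can be forced per build. docker history then shows what each instruction contributed to the final size.

# Force BuildKit explicitly (a no-op on recent versions)
DOCKER_BUILDKIT=1 docker build -t myapp:buildkit .

# Inspect each layer and its size
docker history --format 'table {{.CreatedBy}}\t{{.Size}}' myapp:buildkit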

💡Docker Compose and Orchestration

A Complete Stack with Docker Compose

# docker-compose.yml
version: '3.8'

services:
  # Web application
  web:
    build:
      context: .
      dockerfile: docker/production/Dockerfile
      target: production
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://postgres:password@db:5432/myapp
      - REDIS_URL=redis://redis:6379
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    volumes:
      - ./logs:/app/logs
    networks:
      - app-network
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 512M
        reservations:
          cpus: '0.5'
          memory: 256M
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  # PostgreSQL database
  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password  # use secrets or an env file in real deployments
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./docker/postgres/init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    networks:
      - app-network
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

  # Redis cache
  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes --requirepass redispassword
    volumes:
      - redis_data:/data
    networks:
      - app-network
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "redis-cli", "--raw", "incr", "ping"]
      interval: 10s
      timeout: 3s
      retries: 5

  # Nginx reverse proxy
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./docker/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./docker/nginx/ssl:/etc/nginx/ssl:ro
      - ./logs/nginx:/var/log/nginx
    depends_on:
      - web
    networks:
      - app-network
    restart: unless-stopped

  # Monitoring with Prometheus
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./docker/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--storage.tsdb.retention.time=200h'
      - '--web.enable-lifecycle'
    networks:
      - app-network
    restart: unless-stopped

  # Grafana for visualization
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3001:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    volumes:
      - grafana_data:/var/lib/grafana
      - ./docker/grafana/provisioning:/etc/grafana/provisioning:ro
    networks:
      - app-network
    restart: unless-stopped

volumes:
  postgres_data:
    driver: local
  redis_data:
    driver: local
  prometheus_data:
    driver: local
  grafana_data:
    driver: local

networks:
  app-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16
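
Day-to-day operation of the stack then comes down to a handful of commands:

# Start everything in the background and check health states
docker compose up -d
docker compose ps

# Follow the application logs
docker compose logs -f web

# Tear down; add -v to also remove the named volumes
docker compose down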

Nginx Configuration

# docker/nginx/nginx.conf
events {
    worker_connections 1024;
}

http {
    upstream web_backend {
        server web:3000;
    }

    # Rate limiting
    limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
    limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;

    server {
        listen 80;
        server_name localhost;

        # Security headers
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-XSS-Protection "1; mode=block" always;
        add_header X-Content-Type-Options "nosniff" always;

        # Gzip compression
        gzip on;
        gzip_vary on;
        gzip_min_length 1024;
        gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        location / {
            proxy_pass http://web_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            
            # Timeouts
            proxy_connect_timeout 60s;
            proxy_send_timeout 60s;
            proxy_read_timeout 60s;
        }

        location /api/ {
            limit_req zone=api burst=20 nodelay;
            proxy_pass http://web_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        location /health {
            access_log off;
            proxy_pass http://web_backend;
        }
    }
}
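
It is worth validating this file before reloading it. Since the web upstream hostname only resolves on the compose network, run the check inside the running nginx service:

# Validate the configuration in place
docker compose exec nginx nginx -t

# Apply it without dropping connections
docker compose exec nginx nginx -s reload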

💡Container Security

Security Best Practices

# Hardened Dockerfile
FROM node:18-alpine AS base

# Update system packages
RUN apk update && apk upgrade && \
    apk add --no-cache dumb-init && \
    rm -rf /var/cache/apk/*

# Create a non-root user with explicit UID/GID
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001 -G nodejs

# Set the working directory
WORKDIR /app

# Hand the directory over to the application user
RUN chown -R nextjs:nodejs /app

# Copy files with the right ownership
COPY --chown=nextjs:nodejs package*.json ./

# Install dependencies as the non-root user
USER nextjs
RUN npm ci --omit=dev && npm cache clean --force

# Copy the source code
COPY --chown=nextjs:nodejs . .

# Remove sensitive files. Note that this only hides them from the final
# layer; keep them out of the build context via .dockerignore so they
# never land in any layer at all.
RUN rm -rf .git .env.example README.md

# Use dumb-init as PID 1
ENTRYPOINT ["dumb-init", "--"]

# Default command
CMD ["node", "server.js"]

Security Scanning

#!/bin/bash
# security-scan.sh

# Install Trivy for vulnerability scanning
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin

# Image scan helper
scan_image() {
    local image=$1
    echo "Scanning image: $image"
    
    # Vulnerability scan
    trivy image --severity HIGH,CRITICAL --format table $image
    
    # Secret scan
    trivy image --scanners secret $image
    
    # Misconfiguration scan (older Trivy releases call this scanner "config")
    trivy image --scanners misconfig $image
    
    # Generate a JSON report
    trivy image --format json --output ${image//\//_}-scan.json $image
}

# Scan every local image
for image in $(docker images --format "{{.Repository}}:{{.Tag}}" | grep -v "<none>"); do
    scan_image $image
done

# Inspect running containers
echo "Analyzing running containers..."
docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Status}}" > running-containers.txt

# Check the Docker daemon configuration
echo "Checking Docker daemon configuration..."
docker system info | grep -E "(Security Options|Cgroup Driver|Storage Driver)"

# Audit volumes and networks
echo "Auditing volumes and networks..."
docker volume ls
docker network ls
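
If you would rather not install Trivy on the host, the same scans run from its official container image; the named volume simply caches the vulnerability database between runs.

docker run --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v trivy-cache:/root/.cache \
    aquasec/trivy:latest image --severity HIGH,CRITICAL myapp:latest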

Advanced Security Configuration

# docker-compose.security.yml
version: '3.8'

services:
  web:
    build: .
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    read_only: true
    tmpfs:
      - /tmp:noexec,nosuid,size=100m
      - /var/run:noexec,nosuid,size=50m
    user: "1001:1001"
    environment:
      - NODE_ENV=production
    networks:
      - app-network
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 256M
          pids: 100
        reservations:
          cpus: '0.25'
          memory: 128M

  # Runtime security monitoring with Falco
  falco:
    image: falcosecurity/falco:latest
    privileged: true
    volumes:
      - /var/run/docker.sock:/host/var/run/docker.sock
      - /dev:/host/dev
      - /proc:/host/proc:ro
      - /boot:/host/boot:ro
      - /lib/modules:/host/lib/modules:ro
      - /usr:/host/usr:ro
      - /etc:/host/etc:ro
    environment:
      - FALCO_GRPC_ENABLED=true
    networks:
      - app-network

networks:
  app-network:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.enable_icc: "false"
      com.docker.network.bridge.enable_ip_masquerade: "true"
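
A short sketch of how to confirm the hardening is actually enforced at runtime; the last command is expected to fail on the read-only filesystem.

docker compose -f docker-compose.security.yml up -d

# Inspect the effective capabilities and root-filesystem mode
docker inspect \
    --format 'caps: {{.HostConfig.CapAdd}} / read-only: {{.HostConfig.ReadonlyRootfs}}' \
    "$(docker compose -f docker-compose.security.yml ps -q web)"

# Should be denied: only /tmp and /var/run are writable tmpfs mounts
docker compose -f docker-compose.security.yml exec web touch /test-file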

💡Monitoring and Logging

Monitoring with Prometheus

# docker/prometheus/prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

rule_files:
  - "alert_rules.yml"

alerting:
  alertmanagers:
    - static_configs:
        - targets:
          - alertmanager:9093

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'docker'
    static_configs:
      - targets: ['host.docker.internal:9323']

  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']

  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']

  - job_name: 'app'
    static_configs:
      - targets: ['web:3000']
    metrics_path: '/metrics'
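
The docker job scrapes the metrics-addr configured in daemon.json earlier (bind it to an address the Prometheus container can actually reach; 127.0.0.1 only works via host.docker.internal setups). The node-exporter and cadvisor jobs assume those two exporters are running on the compose network; a sketch of their standard invocations follows, where the network name depends on your compose project:

# cAdvisor: per-container CPU, memory and I/O metrics
docker run -d --name cadvisor --network app-network -p 8080:8080 \
    -v /:/rootfs:ro \
    -v /var/run:/var/run:ro \
    -v /sys:/sys:ro \
    -v /var/lib/docker/:/var/lib/docker:ro \
    gcr.io/cadvisor/cadvisor:latest

# node-exporter: host-level metrics
docker run -d --name node-exporter --network app-network -p 9100:9100 \
    -v /proc:/host/proc:ro \
    -v /sys:/host/sys:ro \
    quay.io/prometheus/node-exporter:latest \
    --path.procfs=/host/proc --path.sysfs=/host/sys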

Centralized Logging

# docker-compose.logging.yml
version: '3.8'

services:
  # Application with structured logging
  web:
    build: .
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
        labels: "service,environment"
    labels:
      - "service=web"
      - "environment=production"

  # Elasticsearch for log storage
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.5.0
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=false
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    networks:
      - logging

  # Logstash for log processing (optional here: Filebeat below ships straight to Elasticsearch)
  logstash:
    image: docker.elastic.co/logstash/logstash:8.5.0
    volumes:
      - ./docker/logstash/pipeline:/usr/share/logstash/pipeline:ro
      - ./docker/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
    depends_on:
      - elasticsearch
    networks:
      - logging

  # Kibana for visualization
  kibana:
    image: docker.elastic.co/kibana/kibana:8.5.0
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    depends_on:
      - elasticsearch
    networks:
      - logging

  # Filebeat for log collection
  filebeat:
    image: docker.elastic.co/beats/filebeat:8.5.0
    user: root
    volumes:
      - ./docker/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    depends_on:
      - elasticsearch
    networks:
      - logging

volumes:
  elasticsearch_data:

networks:
  logging:
    driver: bridge

Filebeat Configuration

# docker/filebeat/filebeat.yml
filebeat.inputs:
- type: container
  paths:
    - '/var/lib/docker/containers/*/*.log'
  processors:
    - add_docker_metadata:
        host: "unix:///var/run/docker.sock"
    - decode_json_fields:
        fields: ["message"]
        target: ""
        overwrite_keys: true

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  index: "docker-logs-%{+yyyy.MM.dd}"

setup.template.name: "docker-logs"
setup.template.pattern: "docker-logs-*"

logging.level: info
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
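
Once the stack is up, a quick query confirms logs are flowing. Elasticsearch is only exposed on the logging network here, hence the exec (the official image ships curl):

docker compose -f docker-compose.logging.yml up -d

# The daily index should appear and grow as containers produce logs
docker compose -f docker-compose.logging.yml exec elasticsearch \
    curl -s 'http://localhost:9200/_cat/indices/docker-logs-*?v'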

💡Performance Optimization

Image Optimization

#!/bin/bash
# optimize-images.sh

# Image optimization helper
optimize_image() {
    local image=$1
    local optimized_image="${image}-optimized"
    
    echo "Optimizing image: $image"
    
    # Analyze the image layer by layer with dive (--ci runs non-interactively)
    dive $image --ci
    
    # Shrink the image with docker-slim
    docker-slim build --target $image --tag $optimized_image \
        --http-probe-cmd /health \
        --include-path /app \
        --include-path /usr/local/lib/node_modules \
        --continue-after 60
    
    # Compare the sizes
    original_size=$(docker images $image --format "{{.Size}}")
    optimized_size=$(docker images $optimized_image --format "{{.Size}}")
    
    echo "Original size: $original_size"
    echo "Optimized size: $optimized_size"
}

# Optimized multi-stage build
cat > Dockerfile.optimized << 'EOF'
# Use a distroless image for production
# (assumes a "builder" stage defined earlier in the same Dockerfile)
FROM gcr.io/distroless/nodejs18-debian11

WORKDIR /app

# Copy only what the application needs
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./

# Run as a non-root user
USER 1001

EXPOSE 3000

CMD ["dist/server.js"]
EOF

Runtime Optimization

# docker-compose.performance.yml
version: '3.8'

services:
  web:
    build: .
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 512M
        reservations:
          cpus: '0.5'
          memory: 256M
    environment:
      - NODE_ENV=production
      - NODE_OPTIONS=--max-old-space-size=400
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
    sysctls:
      - net.core.somaxconn=1024
    volumes:
      - type: tmpfs
        target: /tmp
        tmpfs:
          size: 100M
    networks:
      - app-network

  # Redis cache with tuned settings
  redis:
    image: redis:7-alpine
    command: >
      redis-server
      --maxmemory 256mb
      --maxmemory-policy allkeys-lru
      --save 900 1
      --save 300 10
      --save 60 10000
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 300M
        reservations:
          cpus: '0.25'
          memory: 256M
    networks:
      - app-network

networks:
  app-network:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: 1450
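
Limits are easy to mis-set, so verify what is actually enforced once the stack is running:

docker compose -f docker-compose.performance.yml up -d

# MEM USAGE / LIMIT should reflect the 512M and 300M caps defined above
docker stats --no-stream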

💡CI/CD with Docker

GitLab CI Pipeline

# .gitlab-ci.yml
stages:
  - test
  - build
  - security
  - deploy

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"
  IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  LATEST_TAG: $CI_REGISTRY_IMAGE:latest

services:
  - docker:20.10.16-dind

before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin $CI_REGISTRY

# Unit tests
test:
  stage: test
  image: node:18-alpine
  script:
    - npm ci
    - npm run test
    - npm run lint
  coverage: '/Lines\s*:\s*(\d+\.\d+)%/'
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage/cobertura-coverage.xml

# Docker image build
build:
  stage: build
  image: docker:20.10.16
  script:
    - docker build -t $IMAGE_TAG -t $LATEST_TAG .
    - docker push $IMAGE_TAG
    - docker push $LATEST_TAG
  only:
    - main
    - develop

# Security scan
security_scan:
  stage: security
  image: aquasec/trivy:latest
  script:
    - trivy image --exit-code 1 --severity HIGH,CRITICAL $IMAGE_TAG
  allow_failure: true
  only:
    - main

# Deploy to staging
deploy_staging:
  stage: deploy
  image: alpine:latest
  before_script:
    - apk add --no-cache curl
    - curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
    - chmod +x kubectl
    - mv kubectl /usr/local/bin/
  script:
    - kubectl set image deployment/myapp myapp=$IMAGE_TAG -n staging
    - kubectl rollout status deployment/myapp -n staging
  environment:
    name: staging
    url: https://staging.example.com
  only:
    - develop

# Deploy to production
deploy_production:
  stage: deploy
  image: alpine:latest
  before_script:
    - apk add --no-cache curl
    - curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
    - chmod +x kubectl
    - mv kubectl /usr/local/bin/
  script:
    - kubectl set image deployment/myapp myapp=$IMAGE_TAG -n production
    - kubectl rollout status deployment/myapp -n production
  environment:
    name: production
    url: https://example.com
  when: manual
  only:
    - main

Deployment Scripts

#!/bin/bash
# deploy.sh

set -e

# Configuration
REGISTRY="registry.example.com"
IMAGE_NAME="myapp"
VERSION=${2:-latest}
ENVIRONMENT=${3:-staging}

# Colors for log output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'

log() {
    echo -e "${GREEN}[$(date +'%Y-%m-%d %H:%M:%S')] $1${NC}"
}

warn() {
    echo -e "${YELLOW}[$(date +'%Y-%m-%d %H:%M:%S')] WARNING: $1${NC}"
}

error() {
    echo -e "${RED}[$(date +'%Y-%m-%d %H:%M:%S')] ERROR: $1${NC}"
    exit 1
}

# Pre-flight checks
check_prerequisites() {
    log "Checking prerequisites..."
    
    command -v docker >/dev/null 2>&1 || error "Docker is not installed"
    docker compose version >/dev/null 2>&1 || error "Docker Compose is not installed"
    
    docker info >/dev/null 2>&1 || error "Docker daemon is not running"
    
    log "Prerequisites check passed"
}

# Build the image
build_image() {
    log "Building image: $REGISTRY/$IMAGE_NAME:$VERSION"
    
    docker build \
        --build-arg VERSION=$VERSION \
        --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') \
        --build-arg VCS_REF=$(git rev-parse --short HEAD) \
        -t $REGISTRY/$IMAGE_NAME:$VERSION \
        -t $REGISTRY/$IMAGE_NAME:latest \
        .
    
    log "Image built successfully"
}

# Test the image
test_image() {
    log "Testing image..."
    
    # Start a disposable test container
    docker run -d --name test-container \
        -p 3001:3000 \
        $REGISTRY/$IMAGE_NAME:$VERSION
    
    # Wait for the application to come up
    sleep 10
    
    # Health check
    if curl -f http://localhost:3001/health; then
        log "Health check passed"
    else
        error "Health check failed"
    fi
    
    # Clean up
    docker stop test-container
    docker rm test-container
    
    log "Image testing completed"
}

# Push to the registry
push_image() {
    log "Pushing image to registry..."
    
    docker push $REGISTRY/$IMAGE_NAME:$VERSION
    docker push $REGISTRY/$IMAGE_NAME:latest
    
    log "Image pushed successfully"
}

# Deploy
deploy() {
    log "Deploying to $ENVIRONMENT..."
    
    # Pick the compose file for the target environment
    COMPOSE_FILE="docker-compose.$ENVIRONMENT.yml"
    
    if [[ ! -f $COMPOSE_FILE ]]; then
        error "Compose file not found: $COMPOSE_FILE"
    fi
    
    # Rolling update
    export IMAGE_TAG=$VERSION
    docker compose -f $COMPOSE_FILE pull
    docker compose -f $COMPOSE_FILE up -d --remove-orphans
    
    # Verify the deployment
    sleep 15
    if docker compose -f $COMPOSE_FILE ps | grep -q "Up"; then
        log "Deployment successful"
    else
        error "Deployment failed"
    fi
}

# Rollback
rollback() {
    local previous_version=$1
    
    warn "Rolling back to version: $previous_version"
    
    export IMAGE_TAG=$previous_version
    docker compose -f docker-compose.$ENVIRONMENT.yml up -d --remove-orphans
    
    log "Rollback completed"
}

# Cleanup
cleanup() {
    log "Cleaning up old images..."
    
    # Remove unused images
    docker image prune -f
    
    # Remove stopped containers
    docker container prune -f
    
    log "Cleanup completed"
}

# Entry point
main() {
    case ${1:-deploy} in
        "build")
            check_prerequisites
            build_image
            test_image
            ;;
        "push")
            push_image
            ;;
        "deploy")
            check_prerequisites
            deploy
            ;;
        "rollback")
            rollback $2
            ;;
        "cleanup")
            cleanup
            ;;
        "full")
            check_prerequisites
            build_image
            test_image
            push_image
            deploy
            cleanup
            ;;
        *)
            echo "Usage: $0 {build|push|deploy|rollback|cleanup|full} [version] [environment]"
            exit 1
            ;;
    esac
}

main "$@"

💡Conclusion

Docker and containerization bring real benefits to application deployment, provided the practices covered in this guide are applied consistently: optimized multi-stage images, hardened containers, centralized monitoring and logging, and automated CI/CD pipelines.


Mastering Docker is essential for any modern DevOps professional. The techniques presented in this article will help you deploy robust, secure containerized applications to production.

For help containerizing your own applications, contact me for a personalized consultation.

About the author

Florian Courouge is a DevOps and Apache Kafka expert with more than five years of experience in distributed-systems architecture and infrastructure automation.

Did you find this article useful?

Check out my other technical articles, or contact me to discuss your DevOps and Kafka projects.