DEVOPS
Advanced

Redis in Production: A Complete Guide to Caching and Sessions

Deploying and tuning Redis in production: cluster architecture, persistence, security, monitoring, and advanced patterns for caching and session management.

Florian Courouge
Redis
Cache
NoSQL
Performance
Cluster
Sessions
Pub/Sub


Redis is far more than a simple in-memory cache: it is a versatile data-structure server capable of handling millions of operations per second. This guide covers everything you need to know to deploy and operate Redis in production.

[Figure: Redis cluster architecture with replication]

Redis Fundamentals

Data Structures

Redis offers several native data structures:

Structure    | Use case                     | Complexity
-------------|------------------------------|------------------
String       | Caching, counters, sessions  | O(1)
Hash         | Objects, user profiles       | O(1)
List         | Queues, timelines            | O(1) at the ends
Set          | Tags, unique relations       | O(1)
Sorted Set   | Leaderboards, indexes        | O(log N)
Stream       | Event sourcing, logs         | O(1)
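
To make these concrete, here is a quick sketch with redis-py (keys and values are illustrative):

import redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

# String: atomic counter
r.incr('page:views')

# Hash: structured object fields
r.hset('user:42', mapping={'name': 'Ada', 'plan': 'pro'})

# List: push a job onto a queue, pop from the other end
r.lpush('jobs', 'job-1')
r.rpop('jobs')

# Sorted Set: leaderboard ordered by score
r.zadd('leaderboard', {'ada': 1500, 'bob': 1200})
print(r.zrevrange('leaderboard', 0, 2, withscores=True))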

Basic Configuration

# /etc/redis/redis.conf

# Network
bind 0.0.0.0
port 6379
protected-mode yes
tcp-backlog 511
timeout 0
tcp-keepalive 300

# Memory
maxmemory 4gb
maxmemory-policy allkeys-lru

# Persistence
save 900 1
save 300 10
save 60 10000
appendonly yes
appendfsync everysec

# Security
requirepass your_strong_password_here

# Performance
io-threads 4
io-threads-do-reads yes
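
Most of these settings can be inspected and hot-changed at runtime; a small sketch with redis-py:

import redis

r = redis.Redis(host='localhost', port=6379, password='your_strong_password_here')

# Inspect the effective memory limit and eviction policy
print(r.config_get('maxmemory'))
print(r.config_get('maxmemory-policy'))  # {'maxmemory-policy': 'allkeys-lru'}

# Hot-change a setting; run CONFIG REWRITE to persist it to redis.conf
r.config_set('maxmemory-policy', 'volatile-lru')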

Caching Patterns

Cache-Aside Pattern

import hashlib
import json
from functools import wraps

import redis

redis_client = redis.Redis(
    host='redis-cluster',
    port=6379,
    password='your_password',
    decode_responses=True
)

def cache_aside(ttl=3600, prefix="cache"):
    """Decorator implementing the Cache-Aside pattern"""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            # Build a stable cache key. Python's hash() is seeded per
            # process, so use a deterministic digest instead.
            raw_key = f"{args}:{sorted(kwargs.items())}"
            digest = hashlib.sha256(raw_key.encode()).hexdigest()
            cache_key = f"{prefix}:{func.__name__}:{digest}"

            # Try the cache first
            cached = redis_client.get(cache_key)
            if cached is not None:
                return json.loads(cached)

            # Cache miss: call the underlying function
            result = func(*args, **kwargs)

            # Store the result with a TTL
            redis_client.setex(
                cache_key,
                ttl,
                json.dumps(result)
            )

            return result
        return wrapper
    return decorator

@cache_aside(ttl=300, prefix="user")
def get_user_profile(user_id: int) -> dict:
    """Fetch the user profile from the database"""
    # `db` stands in for your data-access layer; always use
    # parameterized queries, never string interpolation
    return db.query("SELECT * FROM users WHERE id = %s", (user_id,))

Write-Through Pattern

class WriteThroughCache:
    def __init__(self, redis_client, db_client):
        self.redis = redis_client
        self.db = db_client

    def get(self, key: str):
        # Always read from Redis first
        value = self.redis.get(key)
        if value is not None:
            return json.loads(value)

        # Fall back to the database and repopulate the cache
        value = self.db.get(key)
        if value is not None:
            self.redis.set(key, json.dumps(value))
        return value

    def set(self, key: str, value: dict, ttl: int = 3600):
        # Write to the database first: if it fails, the cache
        # never holds a value the database does not have
        self.db.set(key, value)

        # Then write synchronously to the cache
        self.redis.setex(key, ttl, json.dumps(value))

    def invalidate(self, key: str):
        # Remove the entry from the cache
        self.redis.delete(key)
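
A minimal usage sketch, assuming db_client is any object exposing get/set (a thin DAO, for instance):

cache = WriteThroughCache(redis_client, db_client)

# One call writes to the database, then to the cache
cache.set('user:42', {'name': 'Ada', 'plan': 'pro'}, ttl=600)

# Reads are served from Redis; the database is only hit on a miss
profile = cache.get('user:42')

# If something updates the database behind this class's back,
# invalidate the stale cache entry explicitly
cache.invalidate('user:42')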

Cache Stampede Protection

import time
import random

def get_with_probabilistic_refresh(key: str, ttl: int, beta: float = 1.0):
    """
    Cache-stampede protection via probabilistic early refresh.
    The lower the remaining TTL, the higher the chance of refreshing.
    """
    cached = redis_client.get(key)
    if cached is None:
        return None

    data = json.loads(cached)
    remaining_ttl = redis_client.ttl(key)

    # Linear refresh probability, scaled by beta:
    # p = beta * (1 - remaining_ttl / ttl)
    if remaining_ttl > 0:
        probability = 1 - (remaining_ttl / ttl)
        if random.random() < (probability * beta):
            # Early refresh: signal the caller to recompute
            return None

    return data


import uuid

def distributed_lock(key: str, timeout: int = 10):
    """
    Distributed lock to avoid redundant recomputation.
    """
    lock_key = f"lock:{key}"
    # Random token identifying the lock owner (a plain timestamp
    # could collide between processes)
    identifier = uuid.uuid4().hex

    # Try to acquire the lock
    acquired = redis_client.set(
        lock_key,
        identifier,
        nx=True,  # only set if the key does not exist
        ex=timeout
    )

    if acquired:
        return identifier
    return None


def release_lock(key: str, identifier: str):
    """Release the lock atomically."""
    lock_key = f"lock:{key}"

    # Lua script: delete the key only if we still own it
    script = """
    if redis.call("get", KEYS[1]) == ARGV[1] then
        return redis.call("del", KEYS[1])
    else
        return 0
    end
    """
    redis_client.eval(script, 1, lock_key, identifier)
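
Putting the lock and the cache together: on a miss, only the lock holder recomputes while other workers back off briefly and retry. compute_value is a hypothetical expensive loader:

def get_or_compute(key: str, ttl: int = 300):
    cached = redis_client.get(key)
    if cached is not None:
        return json.loads(cached)

    identifier = distributed_lock(key, timeout=10)
    if identifier:
        try:
            # We hold the lock: recompute and repopulate the cache
            value = compute_value(key)  # hypothetical expensive loader
            redis_client.setex(key, ttl, json.dumps(value))
            return value
        finally:
            release_lock(key, identifier)

    # Someone else is recomputing: back off briefly, then retry
    time.sleep(0.1)
    return get_or_compute(key, ttl)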

Redis Cluster

Cluster Configuration

# redis-cluster.conf
port 6379
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
appendfilename "appendonly.aof"

# Replication
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync yes
repl-diskless-sync-delay 5

Creating the Cluster

# Create a cluster with 3 masters and 3 replicas
redis-cli --cluster create \
    redis-1:6379 redis-2:6379 redis-3:6379 \
    redis-4:6379 redis-5:6379 redis-6:6379 \
    --cluster-replicas 1 \
    -a your_password

# Check cluster health
redis-cli -c -h redis-1 -a your_password cluster info

# List the nodes
redis-cli -c -h redis-1 -a your_password cluster nodes

# Resharding (moving hash slots between nodes)
redis-cli --cluster reshard redis-1:6379 \
    --cluster-from node_id_source \
    --cluster-to node_id_dest \
    --cluster-slots 1000 \
    --cluster-yes \
    -a your_password

Python Cluster Client

from redis.cluster import RedisCluster

# Connect to the cluster
rc = RedisCluster(
    host='redis-1',
    port=6379,
    password='your_password',
    decode_responses=True,
    # redis-py >= 4.1; the old skip_full_coverage_check flag
    # came from the separate redis-py-cluster package
    require_full_coverage=False
)

# Operations are routed to the right node automatically
rc.set('user:1000', 'John Doe')
rc.set('user:1001', 'Jane Doe')

# Cluster-aware pipeline
with rc.pipeline() as pipe:
    # Commands are grouped by hash slot
    for i in range(1000):
        pipe.set(f'key:{i}', f'value:{i}')
    pipe.execute()
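
One cluster-specific subtlety: multi-key commands only work when all keys map to the same hash slot. Hash tags (the {...} part of a key) let you force that:

# Both keys hash on "user:1000" only, so they land in the same slot
rc.set('{user:1000}:profile', 'John Doe')
rc.set('{user:1000}:settings', 'theme=dark')

# A multi-key command now works because the slots match
rc.mget('{user:1000}:profile', '{user:1000}:settings')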

Session Management

Session Store with Redis

import time

import redis
from flask import Flask, session, jsonify
from flask_session import Session

app = Flask(__name__)

# Redis-backed session configuration
app.config['SESSION_TYPE'] = 'redis'
app.config['SESSION_PERMANENT'] = True
app.config['SESSION_USE_SIGNER'] = True
app.config['SESSION_KEY_PREFIX'] = 'session:'
app.config['SESSION_REDIS'] = redis.Redis(
    host='redis-cluster',
    port=6379,
    password='your_password',
    db=0
)
app.config['PERMANENT_SESSION_LIFETIME'] = 86400  # 24h

Session(app)

@app.route('/login', methods=['POST'])
def login():
    # ... authenticate and load `user` ...
    session['user_id'] = user.id
    session['roles'] = user.roles
    session['login_time'] = time.time()
    return jsonify({'status': 'logged_in'})

@app.route('/logout')
def logout():
    session.clear()
    return jsonify({'status': 'logged_out'})

Sessions with a Refresh Token

import json
import secrets
import time

class SessionManager:
    def __init__(self, redis_client):
        self.redis = redis_client
        self.session_ttl = 3600  # 1h
        self.refresh_ttl = 86400 * 7  # 7 days

    def create_session(self, user_id: str, metadata: dict) -> dict:
        session_id = secrets.token_urlsafe(32)
        refresh_token = secrets.token_urlsafe(64)

        # Store the session
        session_data = {
            'user_id': user_id,
            'created_at': time.time(),
            **metadata
        }

        pipe = self.redis.pipeline()
        pipe.hset(f'session:{session_id}', mapping=session_data)
        pipe.expire(f'session:{session_id}', self.session_ttl)

        # Store the refresh token
        pipe.set(
            f'refresh:{refresh_token}',
            json.dumps({
                'session_id': session_id,
                'user_id': user_id
            }),
            ex=self.refresh_ttl
        )

        # Index of the user's sessions
        pipe.sadd(f'user_sessions:{user_id}', session_id)
        pipe.execute()

        return {
            'session_id': session_id,
            'refresh_token': refresh_token,
            'expires_in': self.session_ttl
        }

    def refresh_session(self, refresh_token: str) -> dict:
        data = self.redis.get(f'refresh:{refresh_token}')
        if not data:
            raise ValueError('Invalid refresh token')

        data = json.loads(data)
        session_id = data['session_id']

        # Extend the session
        self.redis.expire(f'session:{session_id}', self.session_ttl)

        return {'session_id': session_id, 'expires_in': self.session_ttl}

    def invalidate_all_sessions(self, user_id: str):
        """Log the user out of every device"""
        sessions = self.redis.smembers(f'user_sessions:{user_id}')

        pipe = self.redis.pipeline()
        for session_id in sessions:
            pipe.delete(f'session:{session_id}')
        pipe.delete(f'user_sessions:{user_id}')
        pipe.execute()
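
A usage sketch of this manager (user ID and metadata are illustrative):

manager = SessionManager(redis_client)

# Login: issue a short-lived session plus a long-lived refresh token
tokens = manager.create_session('user-42', {'ip': '203.0.113.7'})

# Later: extend the session without forcing a new login
manager.refresh_session(tokens['refresh_token'])

# Password change or "log out everywhere"
manager.invalidate_all_sessions('user-42')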

Pub/Sub and Streams

Pub/Sub Pattern

import threading

class PubSubManager:
    def __init__(self, redis_client):
        self.redis = redis_client
        self.pubsub = self.redis.pubsub()
        self.handlers = {}

    def subscribe(self, channel: str, handler):
        """Subscribe to a channel with a handler"""
        self.handlers[channel] = handler
        self.pubsub.subscribe(**{channel: self._message_handler})

    def _message_handler(self, message):
        if message['type'] == 'message':
            channel = message['channel']
            if channel in self.handlers:
                self.handlers[channel](message['data'])

    def start_listening(self):
        """Start listening in a background thread"""
        thread = threading.Thread(target=self._listen)
        thread.daemon = True
        thread.start()

    def _listen(self):
        for message in self.pubsub.listen():
            pass  # registered handlers are invoked automatically

    def publish(self, channel: str, message: str):
        return self.redis.publish(channel, message)


# Usage
pubsub = PubSubManager(redis_client)

def handle_notification(data):
    print(f"Notification: {data}")

pubsub.subscribe('notifications', handle_notification)
pubsub.start_listening()

# Publish
pubsub.publish('notifications', json.dumps({
    'type': 'new_message',
    'user_id': '1234'
}))

Redis Streams for Event Sourcing

class EventStream:
    def __init__(self, redis_client, stream_name: str):
        self.redis = redis_client
        self.stream = stream_name

    def add_event(self, event_type: str, data: dict) -> str:
        """Append an event to the stream"""
        event = {
            'type': event_type,
            'timestamp': str(time.time()),
            'data': json.dumps(data)
        }
        return self.redis.xadd(self.stream, event)

    def read_events(self, last_id: str = '0', count: int = 100):
        """Read events starting from a given ID"""
        events = self.redis.xread(
            {self.stream: last_id},
            count=count,
            block=5000  # 5s timeout
        )
        return events

    def create_consumer_group(self, group_name: str):
        """Create a consumer group"""
        try:
            self.redis.xgroup_create(
                self.stream,
                group_name,
                id='0',
                mkstream=True
            )
        except redis.ResponseError:
            pass  # the group already exists

    def consume(self, group_name: str, consumer_name: str):
        """Consume as part of a group (delivery guarantees)"""
        events = self.redis.xreadgroup(
            group_name,
            consumer_name,
            {self.stream: '>'},
            count=10,
            block=5000
        )
        return events

    def ack(self, group_name: str, *message_ids):
        """Acknowledge processed messages"""
        self.redis.xack(self.stream, group_name, *message_ids)

# Usage in an order-processing system
order_stream = EventStream(redis_client, 'orders')

# Producer
order_stream.add_event('order_created', {
    'order_id': '12345',
    'customer_id': 'C001',
    'items': [{'sku': 'PROD1', 'qty': 2}]
})

# Consumer
order_stream.create_consumer_group('order_processors')

while True:
    events = order_stream.consume('order_processors', 'worker-1')
    for stream, messages in events:
        for msg_id, fields in messages:
            process_order(fields)
            order_stream.ack('order_processors', msg_id)
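
If a worker dies mid-processing, its unacknowledged messages stay in the group's pending list. A sketch of inspecting and reclaiming entries idle for more than a minute (worker-2 is illustrative):

# Summary of delivered-but-unacknowledged messages
print(redis_client.xpending('orders', 'order_processors'))

# Detailed view of up to 10 pending entries
details = redis_client.xpending_range(
    'orders', 'order_processors', min='-', max='+', count=10
)

# Re-assign entries idle for over 60s to this worker
stuck_ids = [entry['message_id'] for entry in details]
if stuck_ids:
    redis_client.xclaim(
        'orders', 'order_processors', 'worker-2',
        min_idle_time=60000, message_ids=stuck_ids
    )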

Monitoring and Performance

Key Metrics

# Real-time statistics
redis-cli -a your_password INFO stats

# Memory metrics
redis-cli -a your_password INFO memory

# Connected clients
redis-cli -a your_password CLIENT LIST

# Slow commands
redis-cli -a your_password SLOWLOG GET 10

# Latency
redis-cli -a your_password --latency

# Big keys (be careful in production!)
redis-cli -a your_password --bigkeys
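
One number worth tracking continuously is the cache hit ratio; a sketch deriving it from INFO stats with redis-py:

stats = redis_client.info('stats')

hits = stats['keyspace_hits']
misses = stats['keyspace_misses']
total = hits + misses

if total > 0:
    print(f"Cache hit ratio: {hits / total:.2%}")

# A climbing eviction count means maxmemory is too small
# for the working set
print(f"Evicted keys: {stats['evicted_keys']}")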

Prometheus Configuration

# prometheus.yml
scrape_configs:
  - job_name: 'redis'
    static_configs:
      - targets:
        - redis-exporter:9121
    metrics_path: /scrape
    params:
      target: ['redis://redis-1:6379']

Key Alerts

# alerts.yml
groups:
  - name: redis
    rules:
      - alert: RedisDown
        expr: redis_up == 0
        for: 1m
        labels:
          severity: critical

      - alert: RedisMemoryHigh
        expr: redis_memory_used_bytes / redis_memory_max_bytes > 0.9
        for: 5m
        labels:
          severity: warning

      - alert: RedisReplicationBroken
        expr: redis_connected_slaves < 1
        for: 2m
        labels:
          severity: critical

      - alert: RedisTooManyConnections
        expr: redis_connected_clients > 1000
        for: 5m
        labels:
          severity: warning

Security

SSL/TLS Configuration

# redis.conf
tls-port 6379
port 0
tls-cert-file /etc/redis/tls/redis.crt
tls-key-file /etc/redis/tls/redis.key
tls-ca-cert-file /etc/redis/tls/ca.crt
tls-auth-clients yes
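
On the client side, redis-py can connect over TLS along these lines (the client cert paths are illustrative; a client certificate is required here because tls-auth-clients is enabled):

import redis

r = redis.Redis(
    host='redis-cluster',
    port=6379,
    password='your_strong_password_here',
    ssl=True,
    ssl_ca_certs='/etc/redis/tls/ca.crt',
    # Client certificate, required by tls-auth-clients yes
    ssl_certfile='/etc/redis/tls/client.crt',
    ssl_keyfile='/etc/redis/tls/client.key',
)
r.ping()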

ACLs (Redis 6+)

# Create a user with limited permissions
redis-cli ACL SETUSER app_user on >password123 \
    ~app:* \
    +get +set +del +expire +ttl \
    -@dangerous

# Read-only user
redis-cli ACL SETUSER reader on >readpass \
    ~* \
    +get +mget +keys +scan \
    -@write -@dangerous

# List users
redis-cli ACL LIST
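
Clients then authenticate with a username on top of the password; a redis-py sketch:

import redis

app_conn = redis.Redis(
    host='redis-cluster',
    port=6379,
    username='app_user',
    password='password123',
)

# Allowed: a granted command on a key matching ~app:*
app_conn.set('app:config', 'v1')

# Denied: key outside app:* -> raises redis.exceptions.NoPermissionError
# app_conn.set('other:key', 'value')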

Kubernetes Deployment

# redis-cluster.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
      - name: redis
        image: redis:7-alpine
        command:
          - redis-server
          - /etc/redis/redis.conf
          - --cluster-enabled
          - "yes"
          - --cluster-config-file
          - /data/nodes.conf
        ports:
        - containerPort: 6379
          name: client
        - containerPort: 16379
          name: gossip
        volumeMounts:
        - name: data
          mountPath: /data
        - name: config
          mountPath: /etc/redis
        resources:
          requests:
            memory: "1Gi"
            cpu: "500m"
          limits:
            memory: "2Gi"
            cpu: "1000m"
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
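
The StatefulSet above references a redis-cluster service that has to exist separately; a minimal headless Service sketch (labels match the pod template):

# redis-cluster-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster
spec:
  clusterIP: None  # headless: each pod gets a stable DNS name
  selector:
    app: redis-cluster
  ports:
  - port: 6379
    name: client
  - port: 16379
    name: gossip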

Conclusion

Redis is an extremely powerful tool for:

  • Caching: cut latency and offload your databases
  • Sessions: manage authentication in a distributed fashion
  • Pub/Sub: real-time communication between services
  • Streams: event sourcing and real-time data processing

The key points for production:

  1. Always enable persistence (RDB + AOF)
  2. Use Redis Cluster for high availability
  3. Monitor memory usage and configure maxmemory-policy
  4. Secure the deployment with TLS and ACLs
  5. Test failovers regularly

Florian Courouge

DevOps & Kafka expert | Freelance consultant specializing in distributed architectures and data streaming.
