Deploying Nginx in Container Environments
In this lesson, we'll explore how to deploy Nginx in modern container environments using Docker and Docker Compose. You'll learn to containerize Nginx applications efficiently while leveraging the configuration skills you've developed throughout this course.
Why Containerize Nginx?
Containerization provides consistent environments across development, testing, and production. Nginx in containers offers:
- Portability: Run the same Nginx configuration anywhere Docker runs
- Isolation: Keep Nginx and its dependencies contained
- Scalability: Easily scale Nginx instances horizontally
- Version control: Track Nginx and configuration versions together
Basic Nginx Docker Setup
Let's start with a simple Nginx container using the official image:
FROM nginx:1.24-alpine
# Copy custom configuration
COPY nginx.conf /etc/nginx/nginx.conf
# Copy static content
COPY static/ /usr/share/nginx/html/
# Expose port 80
EXPOSE 80
Build and run the image, mapping host port 8080 to the container's port 80:
docker build -t my-nginx .
docker run -p 8080:80 my-nginx
Use the alpine variant for smaller image sizes and a reduced attack surface. The Alpine-based Nginx image is considerably smaller than the Debian-based default image.
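You can check the difference locally; exact sizes vary by tag and architecture:
# Pull both variants and compare their sizes
docker pull nginx:1.24
docker pull nginx:1.24-alpine
docker images nginx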
Docker Compose for Multi-Service Deployments
Most real-world Nginx deployments work with other services. Here's a complete setup:
version: '3.8'

services:
  nginx:
    build: .
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./ssl:/etc/nginx/ssl:ro
      - nginx-logs:/var/log/nginx
    depends_on:
      - app-server
    networks:
      - app-network

  app-server:
    image: node:18-alpine
    working_dir: /app
    volumes:
      - ./app:/app
    command: ["node", "server.js"]
    networks:
      - app-network

networks:
  app-network:
    driver: bridge

volumes:
  nginx-logs:
    driver: local
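With this file saved as docker-compose.yml, the stack can be built, started, and inspected with standard Compose commands:
# Build the Nginx image and start both services in the background
docker compose up -d --build
# Follow the Nginx logs
docker compose logs -f nginx
# Tear everything down
docker compose down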
Managing Configuration and Content
Configuration Strategies
The nginx.conf below works inside the Compose network above: it proxies requests to the app-server service by name and exposes a lightweight health endpoint used by the container checks later in this lesson.
events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Log format with container info
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" '
                    'container_id=$hostname';

    access_log /var/log/nginx/access.log main;
    error_log /var/log/nginx/error.log warn;

    server {
        listen 80;
        server_name localhost;

        # Lightweight endpoint for health checks and probes
        location = /health {
            access_log off;
            return 200 "ok\n";
        }

        location / {
            proxy_pass http://app-server:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        location /static/ {
            alias /usr/share/nginx/html/;
            expires 1y;
            add_header Cache-Control "public, immutable";
        }
    }
}
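Because nginx.conf is mounted read-only into the running container, you can validate and apply changes without rebuilding the image:
# Check the syntax of the mounted configuration
docker compose exec nginx nginx -t
# Reload workers gracefully to pick up the change
docker compose exec nginx nginx -s reload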
Dynamic Configuration with Environment Variables
Values such as upstream addresses often differ between environments. One approach is to keep a configuration template in the image and render it with envsubst when the container starts:
FROM nginx:1.24-alpine
# Install envsubst for template processing
RUN apk add --no-cache gettext
# Copy configuration template
COPY nginx.conf.template /etc/nginx/nginx.conf.template
# Copy startup script
COPY docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
The docker-entrypoint.sh script renders the template and then hands control to the image's CMD:
#!/bin/sh
# Substitute only the listed variables so Nginx's own runtime variables
# (such as $host and $remote_addr) in the template are left untouched.
# BACKEND_HOST and BACKEND_PORT are example names; list whichever variables
# your template actually uses.
envsubst '${BACKEND_HOST} ${BACKEND_PORT}' < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf
# Start nginx (the CMD passed to this entrypoint)
exec "$@"
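A minimal template sketch that pairs with this entrypoint; BACKEND_HOST and BACKEND_PORT are illustrative variable names chosen for this example, not something the official image defines:
# nginx.conf.template (excerpt)
server {
    listen 80;
    location / {
        proxy_pass http://${BACKEND_HOST}:${BACKEND_PORT};
    }
}

Supply the values at run time:
docker run -p 8080:80 -e BACKEND_HOST=app-server -e BACKEND_PORT=3000 my-nginx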
Production-Ready Nginx Container
A production image builds on the same base but drops root privileges, removes the default content, and adds a health check:
FROM nginx:1.24-alpine
# Security: prepare writable paths so Nginx can run as the non-root nginx user
RUN chown -R nginx:nginx /var/cache/nginx && \
    chmod -R 755 /var/cache/nginx && \
    touch /var/run/nginx.pid && \
    chown nginx:nginx /var/run/nginx.pid
# Remove default content
RUN rm -rf /usr/share/nginx/html/*
# Copy custom configuration (it must listen on an unprivileged port such as 8080)
COPY nginx.conf /etc/nginx/nginx.conf
COPY security-headers.conf /etc/nginx/conf.d/security-headers.conf
# Copy static assets
COPY build/ /usr/share/nginx/html/
# Health check: use wget from BusyBox, since curl is not installed in the Alpine image
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD wget -q --spider http://localhost:8080/health || exit 1
USER nginx
EXPOSE 8080
Always run Nginx containers as non-root users in production. The official Nginx image includes an nginx user for this purpose. Update your configuration to listen on ports above 1024 when running as non-root.
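As a minimal sketch, the server block for the production image above only needs the listen port changed and a health endpoint for the HEALTHCHECK to hit:
server {
    listen 8080;          # unprivileged port, matching EXPOSE 8080
    server_name localhost;

    location = /health {  # endpoint used by the HEALTHCHECK above
        access_log off;
        return 200 "ok\n";
    }

    # ... remaining locations unchanged
}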
Kubernetes Deployment
For orchestrated environments, here's a basic Kubernetes setup:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: my-nginx:latest
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "64Mi"
              cpu: "50m"
            limits:
              memory: "128Mi"
              cpu: "100m"
          livenessProbe:
            httpGet:
              path: /health
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 2
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer
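Assuming the manifests above are saved as nginx-deployment.yaml (a file name chosen here for illustration), deploy and verify them with kubectl:
kubectl apply -f nginx-deployment.yaml
kubectl rollout status deployment/nginx
kubectl get pods -l app=nginx
kubectl get service nginx-service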
Common Pitfalls
- Configuration persistence: Forgetting to mount configuration files as volumes leads to losing custom settings when containers restart
- Log management: Not configuring log rotation can fill up container storage (one common fix is shown after this list)
- Static content handling: Copying large static files into the image instead of using volumes increases build time and image size
- Port conflicts: Running multiple Nginx containers on the same host port without proper mapping
- Health checks: Missing health checks in production deployments can prevent proper load balancing and self-healing
- Security context: Running containers as root or with unnecessary privileges
- Resource limits: Not setting memory and CPU limits in production can lead to resource exhaustion
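For the log management pitfall above, a common approach is to write logs to the container's standard streams and let the container runtime handle rotation; the official image symlinks its default log files to these streams for the same reason:
# In nginx.conf: send logs to stdout/stderr instead of files inside the container
access_log /dev/stdout main;
error_log  /dev/stderr warn;

Logs then appear in docker logs (or kubectl logs) and are rotated by the host's logging driver.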
Summary
Containerizing Nginx enables consistent deployments across environments while maintaining the powerful configuration capabilities you've mastered. Key takeaways include:
- Use official Nginx images with Alpine variants for efficiency
- Mount configuration files as read-only volumes for flexibility
- Implement health checks for container orchestration
- Run as non-root users and configure appropriate security headers
- Use Docker Compose for local development and testing
- Prepare for orchestration platforms like Kubernetes with proper resource limits and probes