Implementing Reverse Proxy with Nginx

Now that you're comfortable with Nginx basics and serving static content, let's explore one of Nginx's most powerful features: the reverse proxy. This lesson will teach you how to forward client requests to backend applications while maintaining a single entry point for your web services.

Learning Goals

  • Understand what a reverse proxy is and when to use it
  • Configure Nginx to proxy requests to backend servers
  • Handle headers and timeouts properly
  • Configure failover and passive health checks for backend services
  • Debug common reverse proxy issues

What is a Reverse Proxy?

A reverse proxy sits between clients and backend servers, forwarding each client request to the appropriate backend service and returning its response. Unlike a forward proxy, which acts on behalf of clients, a reverse proxy acts on behalf of servers: it hides them from the outside world and can load balance traffic across them.

Common use cases:

  • Hiding backend server details for security
  • Load balancing across multiple application servers
  • SSL termination
  • Caching static content
  • A/B testing by routing traffic to different backends

Basic Reverse Proxy Configuration

Let's start with a simple reverse proxy setup that forwards requests to a Node.js application running on port 3000.

/etc/nginx/sites-available/reverse-proxy
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Tip: Always include the proxy_set_header directives above to preserve the original client information. Backend applications need it to log real client IPs and handle redirects correctly.
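
If the backend also builds absolute URLs or inspects the original port, you can forward those values too. A minimal sketch, assuming the same Node.js backend as above; the X-Forwarded-Host and X-Forwarded-Port names are common conventions, but check what your framework actually expects:

location / {
    proxy_pass http://localhost:3000;

    # Standard client-information headers
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # Optional: original host and port for backends that build absolute URLs
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;
}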

Advanced Proxy Configuration

Handling Timeouts and Buffering

/etc/nginx/sites-available/advanced-proxy
server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://localhost:8080;

        # Headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Timeouts
        proxy_connect_timeout 30s;
        proxy_send_timeout 30s;
        proxy_read_timeout 30s;

        # Buffering
        proxy_buffering on;
        proxy_buffer_size 4k;
        proxy_buffers 8 4k;

        # Failover: retry the request on the next upstream server for these
        # errors (only useful when proxy_pass points at an upstream group)
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
    }
}
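
Buffering is the right default for most request/response traffic, but for streaming responses (Server-Sent Events, long polling) you usually want Nginx to pass bytes through as they arrive. A minimal sketch; the /events/ path and port 8080 are placeholders:

location /events/ {
    proxy_pass http://localhost:8080;
    proxy_set_header Host $host;

    # Stream the response instead of buffering it in Nginx
    proxy_buffering off;

    # Allow long-lived responses without the proxy timing out
    proxy_read_timeout 3600s;
}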

Proxying to Multiple Backends

/etc/nginx/sites-available/multiple-backends
upstream backend_servers {
    server 192.168.1.10:8000;
    server 192.168.1.11:8000;
    server 192.168.1.12:8000 backup;
}

server {
    listen 80;
    server_name myapp.example.com;

    location / {
        proxy_pass http://backend_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
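
Open-source Nginx performs passive health checks: a server that fails repeatedly is temporarily taken out of rotation. You can tune the thresholds per upstream server; the values below are illustrative:

upstream backend_servers {
    # Skip a server for 30s after 3 failed attempts
    server 192.168.1.10:8000 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:8000 max_fails=3 fail_timeout=30s;
    server 192.168.1.12:8000 backup;
}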

Warning: When using multiple backend servers, ensure your application is stateless or uses shared session storage. Otherwise, users might lose their session when routed to a different backend.
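
If you cannot make the application stateless, the ip_hash load-balancing method keeps each client IP on the same backend. A minimal sketch (note that ip_hash does not allow backup servers):

upstream backend_servers {
    # Session affinity: hash the client IP to pick a backend
    ip_hash;
    server 192.168.1.10:8000;
    server 192.168.1.11:8000;
}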

WebSocket Proxy Configuration

WebSocket connections require special handling in Nginx:

/etc/nginx/sites-available/websocket-proxy
server {
    listen 80;
    server_name ws.example.com;

    location /chat/ {
        proxy_pass http://localhost:8080;

        # WebSocket-specific headers
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;

        # Longer timeouts for persistent connections
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;
    }
}
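
The configuration above always sends Connection: upgrade to the backend. The pattern from the Nginx documentation goes one step further and uses a map in the http context, so that ordinary HTTP requests to the same location keep normal connection handling:

# In the http {} context
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;
    server_name ws.example.com;

    location /chat/ {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }
}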

Path-Based Routing

You can route different URL paths to different backend services:

/etc/nginx/sites-available/path-routing
server {
    listen 80;
    server_name microservices.example.com;

    # API service
    location /api/ {
        proxy_pass http://localhost:3001/;
        proxy_set_header Host $host;
    }

    # Admin dashboard
    location /admin/ {
        proxy_pass http://localhost:3002/;
        proxy_set_header Host $host;
    }

    # Main application
    location / {
        proxy_pass http://localhost:3000/;
        proxy_set_header Host $host;
    }
}

Note: The trailing slash in proxy_pass matters. With location /api/ and proxy_pass http://backend; (no URI part), a request for /api/users is forwarded to http://backend/api/users. With proxy_pass http://backend/; (trailing slash), the matched /api/ prefix is replaced, so the same request is forwarded to http://backend/users.
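
A side-by-side sketch of the two forms (alternatives for the same location, not meant to coexist in one server block; the port is a placeholder):

# Alternative A: no URI part, the full original path is passed through
# GET /api/users  ->  http://localhost:3001/api/users
location /api/ {
    proxy_pass http://localhost:3001;
}

# Alternative B: trailing slash, the matched /api/ prefix is replaced
# GET /api/users  ->  http://localhost:3001/users
location /api/ {
    proxy_pass http://localhost:3001/;
}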

Common Pitfalls

  • Missing headers: Forgetting proxy_set_header directives can break applications that rely on client IP addresses or host information
  • Trailing slash confusion: Incorrect trailing slashes in location and proxy_pass can cause unexpected URL rewriting
  • Buffer size issues: Too small buffer sizes can cause performance problems with large responses
  • Timeout mismatches: Proxy timeouts shorter than backend processing time can cause premature connection closures
  • DNS resolution: With a domain name in proxy_pass and no resolver directive, Nginx resolves the name only once at startup or reload, so later DNS changes are not picked up (see the sketch after this list)
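
To make Nginx re-resolve a backend hostname at runtime, a common technique is to combine the resolver directive with a variable in proxy_pass, since variables defer resolution to request time. A sketch; the resolver address and the backend hostname are placeholders:

server {
    listen 80;
    server_name api.example.com;

    # Nameserver to use, re-resolving cached answers every 30 seconds
    resolver 127.0.0.53 valid=30s;

    location / {
        # A variable in proxy_pass forces DNS resolution at request time
        set $backend "http://backend.internal.example:3000";
        proxy_pass $backend;
        proxy_set_header Host $host;
    }
}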

Summary

Reverse proxy configuration is essential for modern web architectures. You learned how to:

  • Set up basic reverse proxy with proper headers
  • Configure timeouts and buffering for performance
  • Handle WebSocket connections
  • Route traffic based on URL paths
  • Avoid common configuration mistakes

The reverse proxy pattern enables you to build scalable, secure applications by separating concerns between your web server and application logic.
