Lesson 13: Performance Tuning and Optimization
Welcome to Lesson 13! After mastering Nginx monitoring in the previous lesson, you're now ready to optimize your web server's performance. In this lesson, you'll learn practical techniques to make your Nginx deployment faster and more efficient under heavy loads.
Learning Objectives
By the end of this lesson, you will be able to:
- Configure worker processes and connections for optimal performance
- Implement efficient buffer and timeout settings
- Optimize static file serving and connection handling
- Use performance monitoring tools to validate improvements
Worker Process Optimization
Nginx uses a master process that manages multiple worker processes. Proper configuration of these workers is crucial for handling concurrent connections efficiently.
Configuring Worker Processes and Connections
worker_processes auto; # Automatically set to number of CPU cores
worker_rlimit_nofile 100000; # Increase file descriptor limit
events {
    worker_connections 4096;  # Connections per worker
    use epoll;                # Efficient event method for Linux
    multi_accept on;          # Accept multiple connections at once
}
Use worker_processes auto; to automatically match your CPU core count. For high-traffic servers, monitor CPU usage and adjust if needed.
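To confirm what auto resolves to on your machine, compare the core count with the number of running workers. A quick check on a typical Linux host (output will vary):
nproc                          # Number of CPU cores available
ps -o pid,psr,comm -C nginx    # Nginx processes and the CPU each one is currently running on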
Optimizing Worker Affinity
worker_processes 4;
worker_cpu_affinity 0001 0010 0100 1000;  # One bitmask per worker: each worker is pinned to its own CPU core
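If you prefer not to maintain bitmasks by hand, Nginx 1.9.10 and later can pin workers automatically:
worker_processes auto;
worker_cpu_affinity auto;  # Bind each worker to an available CPU without manual masks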
Connection and Buffer Optimization
Proper buffer and timeout settings prevent memory issues and improve response times.
Buffer Size Configuration
http {
    client_body_buffer_size 16k;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    large_client_header_buffers 4 8k;

    # Timeout settings
    client_body_timeout 12;
    client_header_timeout 12;
    send_timeout 10;

    # Keepalive settings
    keepalive_timeout 65;
    keepalive_requests 1000;
}
Buffer Sizing Guidelines:
- client_body_buffer_size: Match your average POST request size (see the log-analysis sketch after this list)
- client_header_buffer_size: 1k is sufficient for most headers
- Adjust large_client_header_buffers if using long authentication tokens
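If you are unsure what your typical request size is, one rough approach is to log it and average over real traffic. This is a minimal sketch: the log_format name and file path are illustrative, and $request_length counts the request line and headers as well as the body, so treat the result as an upper bound.
# In the http context
log_format reqsize '$request_length $body_bytes_sent';
access_log /var/log/nginx/reqsize.log reqsize;
After collecting some traffic, average the first column:
awk '{ total += $1; n++ } END { if (n) printf "%.0f bytes\n", total / n }' /var/log/nginx/reqsize.log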
Static File Serving Optimization
Optimize how Nginx serves static content for maximum performance.
Efficient Static File Configuration
server {
    location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        access_log off;

        # Optimized serving
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
    }

    location /static/ {
        # Zero-copy kernel transfer for frequently accessed files
        sendfile on;
        sendfile_max_chunk 1m;

        # Direct I/O for large files
        directio 4m;
        directio_alignment 512;
    }
}
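Once the configuration is reloaded, verify that the caching headers are actually emitted. For example (the host and file name are placeholders):
curl -sI http://yourserver.com/static/app.css | grep -iE 'cache-control|expires'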
Advanced Performance Features
File Descriptor Caching
http {
    open_file_cache max=200000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
}
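To judge whether the cache size and your descriptor limits are adequate, count how many descriptors each worker currently holds. A quick check, assuming you run it as root so /proc is readable:
for pid in $(pgrep -f 'nginx: worker'); do
    echo "worker $pid: $(ls /proc/$pid/fd | wc -l) open file descriptors"
done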
Thread Pools for Blocking Operations
http {
    aio threads;    # Requires Nginx built with thread support (--with-threads)
    sendfile on;

    server {
        location /video/ {
            # Use thread pools for large file serving
            aio threads;
            output_buffers 1 2m;
        }
    }
}
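You can also define a dedicated, named thread pool in the main context and reference it from specific locations; the pool name and sizes below are illustrative:
# Main (top-level) context
thread_pool video_pool threads=16 max_queue=65536;

http {
    server {
        location /video/ {
            aio threads=video_pool;    # Use the dedicated pool instead of the default one
            output_buffers 1 2m;
        }
    }
}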
Performance Monitoring and Validation
Use these tools to measure your optimization results:
Nginx Status Module
server {
    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
}
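With that location in place, you can poll the endpoint locally; the numbers in the sample output below are purely illustrative:
curl -s http://127.0.0.1/nginx_status
# Example output:
# Active connections: 291
# server accepts handled requests
#  16630948 16630948 31070465
# Reading: 6 Writing: 179 Waiting: 106
The accepts and handled counters should normally match (a gap indicates dropped connections), and Waiting shows idle keepalive connections.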
Testing with Apache Bench
# Test with 1000 requests and 100 concurrent connections
ab -n 1000 -c 100 http://yourserver.com/
# Validate the configuration and apply changes before re-testing
nginx -t && nginx -s reload
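To measure the effect of your keepalive settings specifically, repeat the benchmark with connection reuse enabled via ab's -k flag and compare requests per second:
ab -k -n 1000 -c 100 http://yourserver.com/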
Common Pitfalls
- Over-allocating workers: More workers than CPU cores can cause context switching overhead
- Insufficient file descriptors: Set worker_rlimit_nofile high enough for your connection load
- Ignoring OS limits: Check ulimit -n and adjust system limits if needed (see the sketch after this list)
- Buffer size mismatches: Buffers that are too small cause extra I/O; buffers that are too large waste memory
- Forgetting sendfile: Always enable sendfile on for static content on Linux
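If ulimit -n reports a value lower than worker_rlimit_nofile, the OS limit wins. On a systemd-managed host you can raise it with a drop-in override; this is a minimal sketch, and the pid file path used in the verification step is the common default:
sudo mkdir -p /etc/systemd/system/nginx.service.d
printf '[Service]\nLimitNOFILE=100000\n' | sudo tee /etc/systemd/system/nginx.service.d/limits.conf
sudo systemctl daemon-reload && sudo systemctl restart nginx

# Verify the limit applied to the running master process
grep 'open files' /proc/$(cat /run/nginx.pid)/limits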
Memory Considerations: Large buffer settings can significantly increase memory usage. Monitor your server's memory consumption after making changes and adjust accordingly.
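A simple way to watch for this is to compare resident memory per Nginx process before and after a change:
ps -o pid,rss,comm -C nginx    # RSS is resident memory in kilobytes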
Summary
In this lesson, you've learned essential Nginx performance tuning techniques:
- Optimized worker process configuration for your hardware
- Fine-tuned buffer sizes and timeout settings
- Implemented efficient static file serving with caching headers
- Configured advanced features like file descriptor caching and thread pools
- Set up monitoring to validate performance improvements
Remember that performance tuning is an iterative process. Start with conservative settings, monitor your server's behavior under load, and gradually optimize based on real-world usage patterns.
Quiz
Nginx Performance Optimization and Tuning
What does `worker_processes auto;` automatically configure?