Reverse Proxy and Load Balancing
2026/3/20
Forward Proxy vs. Reverse Proxy
Concept Comparison
| Feature | Forward Proxy | Reverse Proxy |
|---|---|---|
| Proxies for | The client | The server |
| Deployed on | Client side | Server side |
| Client awareness | Client must be configured to use the proxy | Transparent to the client |
| Server awareness | Server does not see the real client | Server sees requests coming from the proxy |
| Typical uses | Circumventing network restrictions, caching, access control | Load balancing, security, cache acceleration |
| Representative software | Squid, V2Ray | Nginx, HAProxy |
Reverse Proxy Configuration in Detail
Basic proxy_pass Usage
server {
listen 80;
server_name example.com;
# Basic reverse proxy
location / {
proxy_pass http://127.0.0.1:8080;
}
# Proxy to an HTTPS backend
location /secure/ {
proxy_pass https://backend.example.com;
}
# Proxy to a Unix domain socket
location /app/ {
proxy_pass http://unix:/var/run/app.sock;
}
}
proxy_pass With and Without a URI
This is where configuration mistakes happen most often:
# Assume the request: GET /api/users
# Case 1: proxy_pass without a URI (no trailing slash)
location /api/ {
proxy_pass http://backend;
# Forwards: http://backend/api/users
# The original URI is preserved
}
# Case 2: proxy_pass with a URI (trailing slash)
location /api/ {
proxy_pass http://backend/;
# Forwards: http://backend/users
# /api/ is replaced by /
}
# Case 3: proxy_pass with a path
location /api/ {
proxy_pass http://backend/v2/;
# Forwards: http://backend/v2/users
# /api/ is replaced by /v2/
}
# Case 4: proxy_pass with a path (no trailing slash)
location /api/ {
proxy_pass http://backend/v2;
# Forwards: http://backend/v2users
# Note: the missing slash produces a broken, concatenated path!
}
Important rules
- proxy_pass without a URI: the full original URI is forwarded to the backend
- proxy_pass with a URI (including a bare /): the part of the path matched by location is replaced with the URI given in proxy_pass
- When location uses a regex, proxy_pass must not include a URI
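The replacement rule can be modeled in a few lines of Python (a standalone sketch, not Nginx's actual implementation; `forward_uri` is a hypothetical helper):

```python
def forward_uri(location, proxy_pass, request_uri):
    """Model of how proxy_pass rewrites the request URI.

    If proxy_pass has no URI part after the host, the original
    request URI is forwarded unchanged; otherwise the prefix
    matched by `location` is replaced by the proxy_pass URI.
    """
    # Split proxy_pass into scheme, host, and optional URI part
    scheme, _, rest = proxy_pass.partition("://")
    host, slash, uri = rest.partition("/")
    if not slash:                      # no URI part: forward as-is
        return f"{scheme}://{host}{request_uri}"
    # URI part present: substitute the matched location prefix
    return f"{scheme}://{host}/{uri}{request_uri[len(location):]}"

# The four cases from the text, for request GET /api/users
print(forward_uri("/api/", "http://backend", "/api/users"))      # http://backend/api/users
print(forward_uri("/api/", "http://backend/", "/api/users"))     # http://backend/users
print(forward_uri("/api/", "http://backend/v2/", "/api/users"))  # http://backend/v2/users
print(forward_uri("/api/", "http://backend/v2", "/api/users"))   # http://backend/v2users
```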
proxy_set_header Configuration
location / {
proxy_pass http://backend;
# Pass the original Host header
proxy_set_header Host $host;
# Pass the client's real IP
proxy_set_header X-Real-IP $remote_addr;
# Append the client IP to the proxy chain
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# Pass the original protocol (http/https)
proxy_set_header X-Forwarded-Proto $scheme;
# Pass the original Host
proxy_set_header X-Forwarded-Host $host;
# Pass the original port
proxy_set_header X-Forwarded-Port $server_port;
# Pass a request ID (for distributed tracing)
proxy_set_header X-Request-ID $request_id;
# Clear the Connection header so upstream keepalive can work
proxy_set_header Connection "";
# Use HTTP/1.1 toward the backend (required for keepalive)
proxy_http_version 1.1;
}
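The $proxy_add_x_forwarded_for variable appends the connecting peer's address to any incoming X-Forwarded-For header, building up the proxy chain hop by hop. A Python sketch of that behavior (function name is illustrative):

```python
def proxy_add_x_forwarded_for(incoming_xff, remote_addr):
    """Mimics Nginx's $proxy_add_x_forwarded_for: append the
    connecting peer's address to the existing X-Forwarded-For
    header, or start a new one if there is none."""
    if incoming_xff:
        return f"{incoming_xff}, {remote_addr}"
    return remote_addr

# First proxy hop: no header yet
print(proxy_add_x_forwarded_for("", "203.0.113.7"))          # 203.0.113.7
# Second hop: the first proxy is now the connecting peer
print(proxy_add_x_forwarded_for("203.0.113.7", "10.0.0.2"))  # 203.0.113.7, 10.0.0.2
```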
proxy_redirect Configuration
# When the backend returns a redirect, rewrite its Location header.
# The variants below are alternatives; use the one you need.
location / {
proxy_pass http://backend;
# Default behavior: automatic replacement
proxy_redirect default;
# Replace a specific URL
proxy_redirect http://backend/ http://example.com/;
# Regex replacement
proxy_redirect ~^http://[^/]+(/.*) http://example.com$1;
# Disable rewriting
proxy_redirect off;
}
Proxy Buffering and Timeout Settings
location / {
proxy_pass http://backend;
# === Buffering ===
# Enable proxy buffering
proxy_buffering on;
# Buffer size for the response headers
proxy_buffer_size 4k;
# Response body buffers (count size)
proxy_buffers 8 16k;
# Buffers that may be busy sending to the client
proxy_busy_buffers_size 32k;
# Maximum size of the temporary spill file
proxy_max_temp_file_size 1024m;
# Write chunk size for the temporary file
proxy_temp_file_write_size 32k;
# === Timeouts ===
# Timeout for establishing a connection to the backend
proxy_connect_timeout 60s;
# Timeout between two successive reads from the backend
proxy_read_timeout 60s;
# Timeout between two successive writes to the backend
proxy_send_timeout 60s;
# Overall time limit for trying the next upstream server
proxy_next_upstream_timeout 30s;
# Maximum attempts across upstream servers
proxy_next_upstream_tries 3;
# Conditions that trigger failover to the next server
proxy_next_upstream error timeout http_502 http_503 http_504;
}
WebSocket Proxying
http {
# Map for the WebSocket upgrade handshake
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 80;
server_name ws.example.com;
location /ws/ {
proxy_pass http://websocket_backend;
proxy_http_version 1.1;
# Headers required for the WebSocket upgrade
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
# Pass the real client IP
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# WebSocket timeouts (default 60s; raise for long-lived connections)
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
# Disable buffering
proxy_buffering off;
}
}
}
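The map block's logic is simple: if the client sent an Upgrade header, forward `Connection: upgrade`; for ordinary requests, send `Connection: close`. Stated as a one-line Python sketch (illustrative):

```python
def connection_header(http_upgrade):
    """Sketch of the map above: $http_upgrade is the client's
    Upgrade header value ('' when absent); the result becomes
    the Connection header sent to the backend."""
    return "upgrade" if http_upgrade else "close"

print(connection_header("websocket"))  # upgrade
print(connection_header(""))           # close
```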
gRPC Proxying
server {
listen 443 ssl http2;
server_name grpc.example.com;
ssl_certificate /etc/nginx/ssl/cert.pem;
ssl_certificate_key /etc/nginx/ssl/key.pem;
location / {
# gRPC proxy
grpc_pass grpc://127.0.0.1:50051;
# Or gRPC over TLS
# grpc_pass grpcs://127.0.0.1:50051;
# gRPC error handling
error_page 502 = /error502grpc;
}
location = /error502grpc {
internal;
default_type application/grpc;
add_header grpc-status 14;
add_header content-type application/grpc;
add_header grpc-message "unavailable";
return 204;
}
}
Load Balancing with upstream
Round Robin
# The default algorithm: requests are handed to each server in turn
upstream backend {
server 192.168.1.101:8080;
server 192.168.1.102:8080;
server 192.168.1.103:8080;
}
server {
listen 80;
location / {
proxy_pass http://backend;
}
}
Weighted Round Robin
# Requests are distributed in proportion to server weight
upstream backend {
server 192.168.1.101:8080 weight=5; # receives 5/8 of requests
server 192.168.1.102:8080 weight=2; # receives 2/8 of requests
server 192.168.1.103:8080 weight=1; # receives 1/8 of requests
}
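Nginx implements this as "smooth" weighted round robin, which interleaves picks rather than sending a burst of 5 requests to the heaviest server. A Python sketch of the algorithm (simplified; real Nginx also factors in server health):

```python
def smooth_wrr(servers, n):
    """Smooth weighted round robin: on each pick, every server's
    current weight grows by its configured weight; the server with
    the highest current weight wins and is then penalized by the
    total weight, which spreads picks evenly over time."""
    current = {s: 0 for s in servers}
    total = sum(servers.values())
    picks = []
    for _ in range(n):
        for s, w in servers.items():
            current[s] += w
        best = max(current, key=current.get)
        current[best] -= total
        picks.append(best)
    return picks

# Weights from the config above: 5 / 2 / 1; over one full cycle of
# 8 picks, each server is chosen exactly `weight` times.
picks = smooth_wrr({"101": 5, "102": 2, "103": 1}, 8)
print(picks.count("101"), picks.count("102"), picks.count("103"))  # 5 2 1
```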
IP Hash
# Requests from the same client IP always go to the same backend
upstream backend {
ip_hash;
server 192.168.1.101:8080;
server 192.168.1.102:8080;
server 192.168.1.103:8080;
}
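ip_hash hashes only the first three octets of an IPv4 address, so all clients in the same /24 network land on the same backend. A Python sketch (the rolling hash mirrors the published algorithm, but treat the whole thing as illustrative):

```python
def ip_hash_pick(client_ip, servers):
    """Sketch of ip_hash: hash the /24 network part of an IPv4
    address with a simple rolling hash, then index into the
    server list (server weights and failures are ignored here)."""
    octets = client_ip.split(".")[:3]      # first three octets only
    h = 0
    for o in octets:
        h = (h * 113 + int(o)) % 6271      # rolling hash
    return servers[h % len(servers)]

servers = ["192.168.1.101:8080", "192.168.1.102:8080", "192.168.1.103:8080"]
# Two clients in the same /24 always hit the same backend
print(ip_hash_pick("203.0.113.7", servers) ==
      ip_hash_pick("203.0.113.99", servers))  # True
```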
Least Connections
# Requests go to the server with the fewest active connections
upstream backend {
least_conn;
server 192.168.1.101:8080;
server 192.168.1.102:8080;
server 192.168.1.103:8080;
}
Consistent Hashing (hash)
# Distribute requests by hashing a specified key
upstream backend {
hash $request_uri consistent; # 'consistent' enables ketama consistent hashing
server 192.168.1.101:8080;
server 192.168.1.102:8080;
server 192.168.1.103:8080;
}
# Cookie-based hashing
upstream backend {
hash $cookie_jsessionid consistent;
server 192.168.1.101:8080;
server 192.168.1.102:8080;
}
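The point of the consistent parameter: when a server is added or removed, only the keys that mapped to that server move. A minimal hash ring with virtual nodes, sketched in Python (MD5 here is an illustrative stand-in for Nginx's ketama-style hashing):

```python
import bisect
import hashlib

def _h(key):
    # Stable integer hash (illustrative choice)
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    """Consistent-hash ring with virtual nodes: each server owns
    many points on the ring; a key goes to the first point at or
    after the key's own hash (wrapping around)."""
    def __init__(self, servers, vnodes=100):
        self.ring = sorted((_h(f"{s}#{i}"), s)
                           for s in servers for i in range(vnodes))
        self.points = [p for p, _ in self.ring]
    def lookup(self, key):
        idx = bisect.bisect(self.points, _h(key)) % len(self.ring)
        return self.ring[idx][1]

servers = ["192.168.1.101:8080", "192.168.1.102:8080", "192.168.1.103:8080"]
ring = HashRing(servers)
keys = [f"/api/item/{i}" for i in range(1000)]
before = {k: ring.lookup(k) for k in keys}

# Remove one server: only keys that lived on it should move
smaller = HashRing(servers[:2])
moved = sum(1 for k in keys
            if before[k] != smaller.lookup(k) and before[k] != servers[2])
print(moved)  # 0
```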
Load Balancing Algorithm Comparison
| Algorithm | Directive | Characteristics | Best for |
|---|---|---|---|
| Round robin | (default) | Simple and even, no configuration | Backends with equal capacity |
| Weighted round robin | weight | Distributes in proportion to capacity | Backends with unequal capacity |
| IP hash | ip_hash | Same client IP hits the same backend | Stateful session scenarios |
| Least connections | least_conn | Adapts to current load | Widely varying request times |
| Consistent hashing | hash ... consistent | Minimizes remapping on topology changes | Cache server clusters |
| Random | random [two [method]] | Power-of-two-choices random selection | Multi-tier load balancing |
upstream Server Parameters
upstream backend {
# weight: server weight, default 1
server 192.168.1.101:8080 weight=5;
# max_fails: failures allowed before the server is marked unavailable
# fail_timeout: both the failure-counting window and the downtime duration
server 192.168.1.102:8080 max_fails=3 fail_timeout=30s;
# backup: used only when all primary servers are unavailable
server 192.168.1.103:8080 backup;
# down: permanently marked unavailable
server 192.168.1.104:8080 down;
# max_conns: cap on concurrent connections (limits pressure on one backend)
server 192.168.1.105:8080 max_conns=1000;
# slow_start: ramp-up period (commercial Nginx Plus)
# after recovery, the weight grows gradually from 0 to the configured value
# server 192.168.1.106:8080 slow_start=30s;
# resolve: dynamic DNS re-resolution (commercial)
# server backend.example.com resolve;
# Number of idle keepalive connections cached per worker
keepalive 32;
# Idle keepalive connection timeout
keepalive_timeout 60s;
# Maximum requests per keepalive connection
keepalive_requests 1000;
}
Health Checks
Passive Health Checks
upstream backend {
# After max_fails failures within fail_timeout, the server is marked
# unavailable; once fail_timeout elapses it is tried again
server 192.168.1.101:8080 max_fails=3 fail_timeout=30s;
server 192.168.1.102:8080 max_fails=3 fail_timeout=30s;
server 192.168.1.103:8080 max_fails=3 fail_timeout=30s;
}
server {
location / {
proxy_pass http://backend;
# Errors that trigger trying the next upstream server
proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
# Overall time limit for failover attempts
proxy_next_upstream_timeout 10s;
# Maximum retry attempts
proxy_next_upstream_tries 3;
}
}
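The max_fails / fail_timeout behavior can be modeled with a small state machine (a simplified sketch; Nginx's real bookkeeping is per-worker and more involved, and the parameter names below simply mirror the directives):

```python
import time

class PassiveChecker:
    """Sketch of passive health checking: after `max_fails`
    failures within `fail_timeout` seconds, the server is
    skipped for `fail_timeout` seconds, then retried."""
    def __init__(self, max_fails=3, fail_timeout=30):
        self.max_fails, self.fail_timeout = max_fails, fail_timeout
        self.fails, self.window_start, self.down_until = 0, 0.0, 0.0
    def available(self, now=None):
        now = time.time() if now is None else now
        return now >= self.down_until
    def report_failure(self, now=None):
        now = time.time() if now is None else now
        if now - self.window_start > self.fail_timeout:
            self.fails, self.window_start = 0, now    # new counting window
        self.fails += 1
        if self.fails >= self.max_fails:
            self.down_until = now + self.fail_timeout  # mark down
            self.fails = 0

srv = PassiveChecker(max_fails=3, fail_timeout=30)
for t in (0, 1, 2):              # three failures inside the window
    srv.report_failure(now=t)
print(srv.available(now=10))     # False: down at t=2, until t=32
print(srv.available(now=40))     # True: fail_timeout has elapsed
```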
Active Health Checks (nginx_upstream_check_module)
# Requires the third-party nginx_upstream_check_module
upstream backend {
server 192.168.1.101:8080;
server 192.168.1.102:8080;
server 192.168.1.103:8080;
# Active health check
# interval: check interval (ms)
# rise: consecutive successes before marking healthy
# fall: consecutive failures before marking unhealthy
# timeout: check timeout (ms)
# type: check type (tcp|http|ssl_hello|mysql|ajp)
check interval=3000 rise=2 fall=3 timeout=1000 type=http;
# HTTP check request and accepted responses
check_http_send "HEAD /health HTTP/1.0\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 80;
location / {
proxy_pass http://backend;
}
# Health-check status page
location /upstream_status {
check_status;
access_log off;
allow 192.168.0.0/16;
deny all;
}
}
Session Persistence
IP Hash
upstream backend {
ip_hash;
server 192.168.1.101:8080;
server 192.168.1.102:8080;
server 192.168.1.103:8080;
# To remove a server temporarily, mark it down instead of deleting
# the line; deleting it would redistribute the hash mapping
# server 192.168.1.103:8080 down;
}
Cookie-Based
# Method 1: map a cookie to an upstream group
# (the backend application sets the backend_server cookie
# to pin each client to a server)
map $cookie_backend_server $backend_pool {
default backend; # no or unknown cookie: use the full pool
server_1 backend_1;
server_2 backend_2;
}
upstream backend {
server 192.168.1.101:8080;
server 192.168.1.102:8080;
}
upstream backend_1 {
server 192.168.1.101:8080;
}
upstream backend_2 {
server 192.168.1.102:8080;
}
server {
location / {
proxy_pass http://$backend_pool;
}
}
# Method 2: Nginx Plus sticky cookie (commercial)
# upstream backend {
# sticky cookie srv_id expires=1h domain=.example.com path=/;
# server 192.168.1.101:8080;
# server 192.168.1.102:8080;
# }
Route-Based
# Route requests by URI prefix
map $request_uri $backend_server {
~^/user/ "192.168.1.101:8080";
~^/order/ "192.168.1.102:8080";
~^/product/ "192.168.1.103:8080";
default "192.168.1.101:8080";
}
server {
location / {
proxy_pass http://$backend_server;
}
}
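The map above is a first-match routing table. Sketched in Python (addresses copied from the config; the helper name and default are illustrative):

```python
import re

# First matching URI pattern wins, mirroring the map block order
ROUTES = [
    (re.compile(r"^/user/"),    "192.168.1.101:8080"),
    (re.compile(r"^/order/"),   "192.168.1.102:8080"),
    (re.compile(r"^/product/"), "192.168.1.103:8080"),
]

def pick_backend(request_uri, default="192.168.1.101:8080"):
    """Return the backend address for a request URI, falling back
    to the default entry when no pattern matches."""
    for pattern, backend in ROUTES:
        if pattern.search(request_uri):
            return backend
    return default

print(pick_backend("/order/42"))  # 192.168.1.102:8080
print(pick_backend("/health"))    # 192.168.1.101:8080
```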
Practical Configuration Examples
Proxying a Decoupled Frontend and Backend
server {
listen 80;
server_name www.example.com;
root /var/www/frontend/dist;
index index.html;
# Frontend static assets; fall back to index.html for SPA routes
location / {
try_files $uri $uri/ /index.html;
}
# Proxy API requests to the backend
location /api/ {
proxy_pass http://api_backend/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# WebSocket proxy (requires the $connection_upgrade map shown earlier)
location /ws/ {
proxy_pass http://ws_backend/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_read_timeout 3600s;
}
# Cache static assets
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff|woff2)$ {
expires 30d;
add_header Cache-Control "public, immutable";
}
}
upstream api_backend {
least_conn;
server 127.0.0.1:8001;
server 127.0.0.1:8002;
keepalive 32;
}
upstream ws_backend {
ip_hash;
server 127.0.0.1:9001;
server 127.0.0.1:9002;
}
Proxying Multiple Backend Services
# Microservice gateway configuration
upstream user_service {
server 192.168.1.101:8080;
server 192.168.1.102:8080;
keepalive 16;
}
upstream order_service {
server 192.168.1.103:8080;
server 192.168.1.104:8080;
keepalive 16;
}
upstream product_service {
server 192.168.1.105:8080;
server 192.168.1.106:8080;
keepalive 16;
}
server {
listen 80;
server_name api.example.com;
# User service
location /api/users/ {
proxy_pass http://user_service/;
include proxy_params.conf;
}
# Order service
location /api/orders/ {
proxy_pass http://order_service/;
include proxy_params.conf;
}
# Product service
location /api/products/ {
proxy_pass http://product_service/;
include proxy_params.conf;
}
# Unified error handling
error_page 502 503 504 = @fallback;
location @fallback {
default_type application/json;
return 503 '{"code": 503, "message": "Service Unavailable"}';
}
}
Shared proxy parameter file:
# /etc/nginx/proxy_params.conf
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Connection "";
proxy_connect_timeout 10s;
proxy_read_timeout 30s;
proxy_send_timeout 30s;
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 16k;
Layer 4 TCP/UDP Proxying (stream module)
# Requires Nginx built with --with-stream
stream {
# Log format for stream sessions
log_format tcp_log '$remote_addr [$time_local] '
'$protocol $status $bytes_sent $bytes_received '
'$session_time "$upstream_addr"';
access_log /var/log/nginx/stream.access.log tcp_log;
# MySQL proxy
upstream mysql_cluster {
least_conn;
server 192.168.1.201:3306 weight=5;
server 192.168.1.202:3306 weight=3;
server 192.168.1.203:3306 backup;
}
server {
listen 3306;
proxy_pass mysql_cluster;
proxy_timeout 300s;
proxy_connect_timeout 10s;
}
# Redis proxy
upstream redis_cluster {
server 192.168.1.211:6379;
server 192.168.1.212:6379;
}
server {
listen 6379;
proxy_pass redis_cluster;
proxy_timeout 60s;
}
# SSH proxy
server {
listen 2222;
proxy_pass 192.168.1.100:22;
proxy_timeout 600s;
}
# DNS proxy over UDP
upstream dns_servers {
server 8.8.8.8:53;
server 8.8.4.4:53;
}
server {
listen 53 udp;
proxy_pass dns_servers;
proxy_timeout 5s;
proxy_responses 1; # expect one response datagram per query
}
}
Canary Release Configuration
http {
# Method 1: canary selection by cookie
map $cookie_version $backend_pool {
default "stable_backend";
"canary" "canary_backend";
}
# Method 2: canary selection by request header
map $http_x_canary $backend_pool_header {
default "stable_backend";
"true" "canary_backend";
}
# Method 3: canary selection by client IP
geo $backend_pool_ip {
default stable_backend;
192.168.1.0/24 canary_backend;
10.0.0.0/8 canary_backend;
}
# Method 4: canary by traffic percentage (10% of traffic to the canary pool)
split_clients "${remote_addr}${request_uri}" $backend_pool_weight {
10% canary_backend;
* stable_backend;
}
upstream stable_backend {
server 192.168.1.101:8080;
server 192.168.1.102:8080;
}
upstream canary_backend {
server 192.168.1.201:8080;
}
server {
listen 80;
server_name example.com;
location / {
proxy_pass http://$backend_pool_weight;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
# Add a response header identifying which pool served the request
add_header X-Backend-Pool $backend_pool_weight;
}
}
}
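split_clients hashes the given string into the 32-bit space and assigns buckets by percentage ranges. A Python sketch of the semantics (Nginx uses MurmurHash2; CRC32 here is an illustrative stand-in, and the function name is hypothetical):

```python
import zlib

def split_clients(key, buckets):
    """Sketch of split_clients: hash the key into the 32-bit
    space, then walk the percentage buckets in order; "*"
    catches everything left over."""
    point = zlib.crc32(key.encode()) & 0xFFFFFFFF
    edge = 0
    for percent, pool in buckets:
        if percent == "*":
            return pool
        edge += int(0xFFFFFFFF * float(percent) / 100)
        if point < edge:
            return pool
    return None

# Mirrors the config: 10% canary, the rest stable
buckets = [("10", "canary_backend"), ("*", "stable_backend")]
hits = sum(split_clients(f"192.0.2.{i}/api/x", buckets) == "canary_backend"
           for i in range(1000))
print(hits)  # close to 100 for a uniform hash
```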
Summary
This chapter covered the core of Nginx reverse proxying and load balancing:
- Proxy concepts: a forward proxy acts on behalf of clients; a reverse proxy acts on behalf of servers
- Reverse proxy configuration: proxy_pass path rules, header forwarding, buffering and timeouts
- Special protocols: WebSocket and gRPC proxying
- Load balancing algorithms: round robin, weighted round robin, IP hash, least connections, consistent hashing
- Health checks: passive and active mechanisms
- Session persistence: IP hash, cookie-based, and route-based approaches
- Practical applications: decoupled frontend/backend, microservice gateways, layer 4 proxying, canary releases