proxy_cache_lock

The `proxy_cache_lock` directive serializes requests for the same resource on a cache miss, so that only one request populates a new cache element while the rest wait, reducing load on the upstream server. — ngx_http_proxy_module

proxy_cache_lock
Syntax: proxy_cache_lock on | off;
Default: off
Context: http, server, location
Module: ngx_http_proxy_module
Arguments: flag

Description

When `proxy_cache_lock` is set to on, NGINX allows only one request at a time to populate a new cache element. The first request to arrive fetches the resource from the upstream server; concurrent requests for the same element wait until it appears in the cache (or until the lock is released) and are then served from the cache.

This is particularly useful when cache misses would otherwise cause a sudden spike of identical upstream requests (the "cache stampede" or "thundering herd" problem). By enabling `proxy_cache_lock`, NGINX minimizes load on backend servers and reduces the chance of overwhelming them with duplicate requests for the same item. The directive may be used in the `http`, `server`, or `location` context, and is most valuable in high-traffic environments where caching is critical for performance and resource management.

Example configuration

http {
    proxy_cache_path /tmp/cache levels=1:2 keys_zone=my_cache:10m max_size=1g;

    # Upstream group referenced by proxy_pass below; the address is a
    # placeholder and must match your actual backend.
    upstream backend {
        server 127.0.0.1:8080;
    }

    server {
        location / {
            proxy_pass http://backend;
            proxy_cache my_cache;
            proxy_cache_lock on;
        }
    }
}

Enabling `proxy_cache_lock` may introduce latency for concurrent requests, since they must wait for the first request to finish populating the cache before they can be served.
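One way to reduce this waiting time is to serve an expired cached copy while a single request refreshes the item. A minimal sketch, assuming the `my_cache` zone and `backend` upstream from the example above; note that `proxy_cache_use_stale updating` only helps when a (possibly expired) copy already exists in the cache:

```nginx
location / {
    proxy_pass http://backend;
    proxy_cache my_cache;
    proxy_cache_lock on;
    # Serve a stale cached response to other clients while one
    # request is updating the cache element, instead of making
    # them wait on the lock.
    proxy_cache_use_stale updating;
}
```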

This directive should be used cautiously with short cache TTLs (Time-To-Live), since frequently expiring items force repeated lock contention and, under high traffic, may lead to longer waiting times.
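The waiting behaviour can be bounded with the related lock-timing directives. A hedged sketch, again assuming the `my_cache` zone and `backend` upstream from the example above; the timeout values shown are illustrative defaults, not recommendations:

```nginx
location / {
    proxy_pass http://backend;
    proxy_cache my_cache;
    proxy_cache_lock on;
    # A waiting request is released to the upstream after this long
    # if the element has not appeared in the cache; its response is
    # not cached.
    proxy_cache_lock_timeout 5s;
    # If the request holding the lock has not completed within this
    # time, one more request may be passed to the upstream.
    proxy_cache_lock_age 5s;
}
```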