fastcgi_cache_lock
The 'fastcgi_cache_lock' directive controls locking during FastCGI cache population, preventing simultaneous requests for the same resource from causing a cache stampede.
Description
The 'fastcgi_cache_lock' directive prevents multiple requests from populating the same cache element concurrently. When enabled, if two or more requests arrive for the same uncached resource, only the first is passed to the FastCGI server to generate the response. The remaining requests wait until the response appears in the cache (or the lock is released) and then serve the cached copy instead of triggering duplicate upstream requests; the maximum wait is bounded by the related 'fastcgi_cache_lock_timeout' directive.
This directive accepts a flag ('on' or 'off'); the default is 'off'. When set to 'on', NGINX allows only one request at a time to populate a given cache element while the rest wait. When set to 'off', every request for the same uncached resource may trigger its own upstream request, which can cause load spikes and degraded performance in high-traffic environments.
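In practice the lock is tuned together with its companion directives. The sketch below is illustrative; the location path, zone name, and timeout values are placeholders rather than recommendations:

location /app {
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_cache my_cache;
    fastcgi_cache_lock on;
    # Maximum time a request waits for the lock; when it expires, the
    # request is passed to the upstream but its response is not cached.
    fastcgi_cache_lock_timeout 5s;
    # If the locking request has not finished after this time, one more
    # waiting request is released to the upstream (NGINX 1.7.8+).
    fastcgi_cache_lock_age 5s;
}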
Using 'fastcgi_cache_lock' improves efficiency where cache stampedes are likely, such as expensive responses that many clients request at once, but it should be paired with sensible cache key and validity settings so that queued requests are not kept waiting longer than necessary.
Config Example
# The 'my_cache' zone must be defined at the http level, for example:
# fastcgi_cache_path /var/cache/nginx keys_zone=my_cache:10m;
# (the path and zone size above are illustrative)

location /api {
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_cache my_cache;
    fastcgi_cache_lock on;
    fastcgi_cache_key "$scheme$request_method$host$request_uri";
}

Enabling cache locking can increase response time for queued requests, so test under load to find settings that work for your traffic.
If 'fastcgi_cache_lock' is enabled, make sure your cache validity settings are also tuned (for example with 'fastcgi_cache_valid'), or stale responses may be served for longer than desired.
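As a sketch of that tuning, the directives below would sit inside the same location block as the example above; the validity times are placeholders to adjust for your workload:

    # Cache successful responses longer than everything else.
    fastcgi_cache_valid 200 301 10m;
    fastcgi_cache_valid any 1m;
    # Optionally serve a stale entry while a locked request refreshes it.
    fastcgi_cache_use_stale updating;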