scgi_cache_lock
The 'scgi_cache_lock' directive controls whether only one request at a time is allowed to populate a new cache element, so that concurrent requests for the same uncached SCGI response wait instead of all being sent to the backend.
Description
The 'scgi_cache_lock' directive in NGINX enables or disables request locking while a cache entry is being populated. When one request is already generating a response for a resource that is not in the cache, subsequent requests for the same resource wait for that response to appear in the cache instead of querying the backend themselves. This guarantees that only one request at a time populates a given cache element, which reduces load on the backend and avoids a stampede of identical upstream requests. The wait is bounded by the related 'scgi_cache_lock_timeout' directive (5 seconds by default): once it expires, a waiting request is passed to the backend, though its response is not cached.
The directive takes a single flag argument, 'on' or 'off'. The default is 'off', meaning multiple concurrent requests for the same uncached resource can each trigger a backend call, potentially straining the backend and producing inconsistent response times. Evaluate your application's concurrency pattern before enabling it.
Config Example
http {
    scgi_cache_path /path/to/cache levels=1:2 keys_zone=my_zone:10m;

    server {
        location /scgi {
            scgi_pass backend;
            scgi_cache my_zone;
            scgi_cache_lock on;
        }
    }
}
Note that the 'location' block must sit inside a 'server' block. Using 'scgi_cache_lock on' can increase response times when many concurrent requests arrive, since waiting requests are held until the first response completes.
Locking should be used judiciously, as enabling it can lead to delays in response times if backend processing is slow.
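To bound those delays, the lock can be tuned with the related 'scgi_cache_lock_timeout' and 'scgi_cache_lock_age' directives (both default to 5 seconds). A sketch, with placeholder paths, zone name, and timing values chosen purely for illustration:

http {
    scgi_cache_path /path/to/cache levels=1:2 keys_zone=my_zone:10m;

    server {
        location /scgi {
            scgi_pass backend;
            scgi_cache my_zone;
            scgi_cache_lock on;

            # Requests waiting on the lock are released to the backend
            # after 2 seconds; their responses are not cached.
            scgi_cache_lock_timeout 2s;

            # If the request holding the lock has not completed within
            # 10 seconds, one more waiting request is passed to the backend.
            scgi_cache_lock_age 10s;
        }
    }
}

Lowering the timeout trades backend load for latency: slow backends no longer stall every waiting client, at the cost of some uncached duplicate requests.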