fastcgi_cache_use_stale
The `fastcgi_cache_use_stale` directive controls whether stale cached responses are served during specific scenarios, such as when an upstream server is down or when a request times out.
Description
The `fastcgi_cache_use_stale` directive allows NGINX to serve stale cached responses under certain conditions, improving user experience and performance by reducing downtime or delays during backend issues. Supported parameters include `error`, `timeout`, `invalid_header`, `updating`, and status-specific values such as `http_500`, `http_503`, `http_403`, `http_404`, and `http_429`. When a request hits one of these conditions, instead of returning an error, NGINX can serve the last successfully cached response. This behavior is particularly useful for high-availability setups where maintaining user engagement is critical even when backend services face temporary issues.
This directive can be specified in the http, server, and location contexts, making it flexible for various configurations. Each parameter names a scenario in which stale data may be returned; if multiple parameters are given, stale content is served when any one of them applies. The directive works in conjunction with the `fastcgi_cache` directive and strengthens the caching mechanism's robustness by limiting the impact of upstream outages or slow responses on the client experience.
Config Example
```nginx
location /api {
    fastcgi_pass  backend;
    fastcgi_cache my_cache;
    fastcgi_cache_use_stale error timeout;
}
```

Stale responses should be managed carefully to ensure that users are not served outdated or incorrect data.
Using this directive without proper cache expiration and invalidation strategies can lead to stale data issues.
Make sure the scenarios you specify are relevant to your application's needs. For instance, serving stale data on every error may be acceptable for mostly-static content but inappropriate for transactional endpoints.
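As a fuller sketch of how this directive fits into a complete cache setup, the following configuration combines it with `updating` so that while one request refreshes an expired entry, other clients receive the stale copy instead of all hitting the backend at once. The cache path, zone name, sizes, timings, and the `backend` upstream are illustrative assumptions, not recommendations; a real FastCGI location would also include the usual `fastcgi_params`.

```nginx
# Hypothetical cache zone; path, size, and timings are placeholder values.
fastcgi_cache_path /var/cache/nginx/fcgi keys_zone=my_cache:10m
                   max_size=1g inactive=60m;

server {
    location /api {
        fastcgi_pass  backend;                 # assumes an upstream named "backend"
        fastcgi_cache my_cache;
        fastcgi_cache_key   $scheme$request_method$host$request_uri;
        fastcgi_cache_valid 200 301 10m;       # fresh window before entries go stale

        # Serve stale entries on backend failure, and while one request
        # refreshes an expired entry ("updating" avoids a thundering herd).
        fastcgi_cache_use_stale error timeout updating;

        # Optional (nginx 1.11.10+): refresh expired entries in the
        # background while clients are served the stale copy.
        fastcgi_cache_background_update on;
    }
}
```

Pairing `updating` with `fastcgi_cache_background_update` is the usual way to keep response latency flat across cache expirations, at the cost of briefly serving content that is a few seconds out of date.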