NGINX Variables
195 variables across 12 modules — searchable, with examples and gotchas for each.
Description
The $http_user_agent variable in NGINX is used to capture the User-Agent string from incoming HTTP requests. This string identifies the client software making the request, which generally includes information about the client's browser type, version, and operating system. The User-Agent header is typically set by web browsers, mobile applications, and various user agents to inform the server about their capabilities and characteristics. It's important to note that this variable is populated in the request processing phase when the server receives the request, making it available for logging, processing, or conditional configuration. When the NGINX server processes an incoming request, it parses various HTTP headers, and among these headers is the User-Agent. If the User-Agent header is present in the request, its value is stored in the $http_user_agent variable, allowing server administrators to implement rules based on the type of client making the request. For example, administrators may choose to serve different content to different browsers or devices based on their User-Agent string. Typical values for this variable might include strings like "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36" for Google Chrome or "Mozilla/5.0 (iPhone; CPU iPhone OS 14_0 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0 Mobile/15E148 Safari/604.1" for Safari on iOS.
Config Example
server {
    listen 80;
    location / {
        access_log /var/log/nginx/access.log;
        if ($http_user_agent ~* "MSIE") {
            return 403; # Deny access to Internet Explorer users
        }
        # other configuration...
    }
}
Ensure that the User-Agent header is set in the request; otherwise, the variable will be empty.
Improper usage in conditional blocks can lead to unintended behavior if the User-Agent is not correctly parsed.
Be mindful of user-agent spoofing; clients can modify this header, making it unreliable for security decisions.
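A minimal sketch of the device-based handling mentioned above, using a map to classify clients by User-Agent; the $is_mobile name, the regexes, and the upstream names are illustrative assumptions rather than built-ins:
# map blocks must be declared at the http level
map $http_user_agent $is_mobile {
    default     0;
    "~*iphone"  1;
    "~*android" 1;
}
server {
    listen 80;
    location / {
        proxy_pass http://desktop_backend;
        # proxy_pass without a URI part is safe to use inside "if"
        if ($is_mobile) {
            proxy_pass http://mobile_backend;
        }
    }
}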
Description
The $http_referer variable is used in NGINX to access the value of the 'Referer' HTTP request header, which indicates the URL of the webpage that linked to the resource being requested. This variable can help determine where visitors are coming from, allowing server administrators to execute logic based on the originating pages. For instance, if a website offers referral bonuses, it can use this variable to track whether users are coming from an affiliate site. The value of $http_referer is set based on the HTTP headers sent by the browser. If a browser does not send a Referer header (which can occur due to user privacy settings), the variable will be empty. Typical values for this variable include URLs from websites, but it may also be absent in secure contexts or when redirected across different protocols (e.g., from HTTPS to HTTP). Therefore, it’s crucial to handle the potential absence of this variable in your configurations or scripts to avoid unexpected behavior. Additionally, one must ensure proper sanitization when using this variable for purposes such as logging or decision-making, as it could be manipulated by clients. This variable is typically employed in access control, logging, or redirection rules, allowing for fine-grained control based on the origin of the request.
Config Example
location /example {
    if ($http_referer ~* "example.com") {
        return 403;
    }
}
If the client does not send a Referer header, $http_referer will be empty, which might lead to unexpected behavior if not properly handled.
Relying on this variable for access control can be risky, as users can modify their Referer header or use privacy tools that exclude it.
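For the common hotlink-protection case, the referer module's valid_referers directive handles missing or stripped Referer headers for you ("none" and "blocked") and sets $invalid_referer; a minimal sketch, with the protected path and domain names as assumptions:
location /images/ {
    valid_referers none blocked example.com *.example.com;
    if ($invalid_referer) {
        return 403;
    }
}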
Description
The `$http_via` variable in NGINX is used to access the value of the 'via' header from an HTTP request. This header is typically added by web proxies to indicate the intermediary servers through which the request has passed, providing insight into the request's routing and any caching mechanisms involved. When a client sends a request through one or more proxies, the 'via' header is appended by each proxy, concatenating the information, and allowing the downstream servers to analyze the request path. Setting this variable occurs automatically when NGINX processes the headers of incoming requests. If the 'via' header is present, `$http_via` will hold its value, which can include multiple pieces of information if the request has traversed several proxies. Common values for this header might include details such as the software of the proxy server and its version. If the request does not have a 'via' header, the variable will be empty, allowing for conditional logic in your server configuration based on the presence of this header.
Config Example
server {
    listen 80;
    location / {
        if ($http_via) {
            add_header X-Proxy-Server $http_via;
        }
        proxy_pass http://backend;
    }
}
Ensure that the 'via' header is actually appended by the upstream proxies; otherwise, the variable will be empty.
Be cautious of relying on the presence of `$http_via` for security or access control, as any client could spoof HTTP headers in requests sent directly to the server.
Description
The $http_x_forwarded_for variable in NGINX is used to extract the value of the X-Forwarded-For HTTP header, which is commonly added by proxies in a request chain to indicate the source IP address of the client. When a request passes through one or more proxies, the original client's IP address is included in this header, allowing the downstream server to capture this information. It could contain a single IP address or a comma-separated list of IPs that represent the client's address and any subsequent proxies that handled the request. Typically, the X-Forwarded-For header will have a value that looks like this: "203.0.113.195" which is an example of a direct IP, or "203.0.113.195, 198.51.100.0" if there were proxies involved. When configuring NGINX, this variable can be utilized in access logs, conditional configurations, or security checks to allow or deny access based on the client's originating address. It's important to ensure that any configuration appropriately handles the incoming header values, especially if multiple proxies are in the request chain to avoid incorrect identification of the client's IP. NGINX will only set this variable if the client has sent an X-Forwarded-For header. It can be particularly useful in load-balanced environments where several back-end servers need to determine the true client IP. Users should note that this header can be easily spoofed if appropriate security measures are not in place, making validation necessary in scenarios where security is a concern.
Config Example
http {
    log_format custom '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log custom;
    server {
        listen 80;
        server_name example.com;
        location / {
            # Use $http_x_forwarded_for for access control
            if ($http_x_forwarded_for ~* '203.0.113.195') {
                return 403;
            }
            proxy_pass http://backend_servers;
        }
    }
}
Ensure the X-Forwarded-For header is being correctly set by your proxies; otherwise, the variable may contain unexpected values.
Since the header can be easily spoofed, implement proper validation and trust only known proxies when relying on this header for security decisions.
Consider the possibility of receiving multiple IP addresses in the header; parsing them can lead to complexity.
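When the real client address is needed for access control or logging, a common approach is the realip module, which rewrites $remote_addr from X-Forwarded-For but only for proxies you declare as trusted; a sketch, assuming the module is available and 10.0.0.0/8 is your proxy range:
server {
    listen 80;
    set_real_ip_from 10.0.0.0/8;      # only trust X-Forwarded-For from these addresses
    real_ip_header X-Forwarded-For;   # derive the client address from this header
    real_ip_recursive on;             # skip trailing trusted proxies in the list
    location / {
        proxy_pass http://backend_servers;
    }
}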
Description
In NGINX, the $http_cookie variable is used to access the value of the Cookie header sent by clients in their HTTP requests. This variable is set by NGINX during the processing of a request when it encounters the client-provided Cookie header. Typically, $http_cookie will contain a string of name-value pairs for each cookie, with pairs separated by semicolons. For example, if a request includes 'Cookie: user_id=12345; session_token=abcde;', then $http_cookie would return 'user_id=12345; session_token=abcde'. The variable is especially useful for web applications that rely on cookies for session management, tracking user states, or personalizing content based on user preferences. It can be accessed within various contexts like server, location, or if directives. When using this variable, it is important to keep in mind that its output may vary depending on the cookies set in the client's browser and whether they are allowed or modified by any related directives in the NGINX configuration. Given that cookies can contain sensitive information, it is advisable to handle this variable carefully to avoid unintentional exposure of data in logs or error messages, and to ensure that secure and HTTP-only cookie flags are respected in the application logic.
Config Example
location / {
    if ($http_cookie ~* "session_id") {
        # Logic that depends on the session_id cookie
    }
}
The $http_cookie variable will be empty if there are no cookies sent by the client.
Ensure that your application handles cookie data securely, especially if it contains sensitive information.
Using the variable in an 'if' context can introduce complexities regarding request handling and should be tested thoroughly.
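When only a single cookie's value is needed, the built-in $cookie_NAME form avoids matching the whole Cookie header by hand; a minimal sketch, where the session_id cookie name is an assumption:
location / {
    # $cookie_session_id holds the value of the "session_id" cookie, or an empty string
    if ($cookie_session_id = "") {
        return 401;
    }
}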
Description
The $content_length variable in NGINX provides the length of the request body that is included in the 'Content-Length' HTTP header. It is set only for requests where this header is present, typically in POST and PUT methods. If there is no 'Content-Length' header sent by the client, the variable will be empty. This variable is primarily useful for handling requests with known content sizes, allowing for conditional logic based on the size of incoming data. When processing requests, NGINX sets $content_length after parsing the headers of the incoming request. If the 'Content-Length' header is specified, it will hold the numeric value; however, if the header is absent or malformed, $content_length will not reflect a valid size. In practice, valid values for this variable are strictly non-negative integers, indicating the byte count of the request body. Developers often use this variable to impose size limits, log request sizes, or conditionally handle large payloads in a more efficient manner.
Config Example
http {
    server {
        listen 80;
        location /upload {
            # "if" cannot compare numbers, so match the digit count instead:
            # five or more digits means a declared body of 10000 bytes or more
            if ($content_length ~ "^[0-9]{5,}$") {
                return 413; # Request entity too large
            }
        }
    }
}
Ensure that the 'Content-Length' header is actually present; otherwise, $content_length will be empty.
$content_length is only useful for methods such as POST and PUT where a body is usually present; for GET requests, it will not be set.
Not all clients correctly use the 'Content-Length' header, which can lead to unexpected results. When absent, the variable will not yield the size.
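For plain upload limits, the client_max_body_size directive is usually the more robust tool, since NGINX then returns 413 on its own and also covers chunked bodies that carry no Content-Length header; a minimal sketch:
location /upload {
    client_max_body_size 10k;
}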
Description
The $content_type variable in NGINX holds the value of the Content-Type header sent by the client in the request, following the media-type format defined by RFC 6838. It is populated while NGINX parses the request headers, so it is available in the http, server, and location contexts for logging, conditional handling, or passing along to upstream servers. Typical values include 'application/json' for JSON payloads, 'application/x-www-form-urlencoded' or 'multipart/form-data; boundary=...' for form submissions, and 'text/plain' for simple text bodies. Requests without a body, such as most GET requests, usually carry no Content-Type header, in which case the variable is empty. Note that this variable describes the request body only: the Content-Type of the response NGINX sends is a separate concern, governed for static files by the 'types' map and the 'default_type' directive rather than by $content_type.
Config Example
http {
    log_format with_type '$remote_addr [$time_local] "$request" $status "$content_type"';
    server {
        listen 80;
        location /upload {
            # Record the request body's declared media type alongside each upload
            access_log /var/log/nginx/upload.log with_type;
            proxy_pass http://backend;
        }
    }
}
If the client sends no Content-Type header (common for GET requests and other bodyless requests), $content_type will be empty.
The response Content-Type is not taken from this variable; for static files it is derived from the 'types' map and the 'default_type' directive.
Clients control this header, so validate its value rather than trusting it for security decisions.
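A minimal sketch of using the request Content-Type to reject unexpected payloads; the /api path, the JSON-only policy, and the backend name are assumptions:
location /api {
    if ($content_type !~* "application/json") {
        return 415; # Unsupported Media Type
    }
    proxy_pass http://backend;
}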
Description
The $host variable is central to handling incoming requests: it holds, in order of precedence, the host name from the request line (when the client used an absolute URI), otherwise the value of the Host request header, and otherwise the primary server name of the server block that handled the request. This behavior allows NGINX to respond appropriately to requests made to different domains or subdomains that point to the same server instance. The value is lowercased and never includes a port number. For example, if a user requests "http://example.com/path", the $host variable will contain "example.com". If the request does not specify a hostname and a default server is defined, $host falls back to that server's configured name. When NGINX is configured with multiple server blocks for different hostnames, the value of $host is commonly used for routing, logging, and building redirect URLs that preserve the name the client actually asked for. Note that the fallback case depends on the server_name directive: if the matching server name is a wildcard or regular expression, the fallback value reflects that configured name rather than a concrete hostname, so in multi-tenant configurations it is generally preferable to rely on clients sending a Host header.
Config Example
server {
    listen 80;
    server_name example.com;
    location / {
        return 200 "Host is: $host";
    }
}
If the Host header is not sent by the client, the $host variable may not behave as expected, potentially affecting routing.
Ensure that the server block with the desired server_name is configured correctly, otherwise you might get unexpected values for $host.
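A common companion pattern is forwarding the original host name to an upstream so the backend sees what the client requested; the upstream name is an assumption:
location / {
    proxy_set_header Host $host;
    proxy_pass http://backend;
}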
Description
The $binary_remote_addr variable in NGINX holds the client's IP address for the current request in binary form: 4 bytes for IPv4 and 16 bytes for IPv6. It is populated during the initial processing of the request from the socket address information of the connection, just like $remote_addr, but without converting the address to text. Its main purpose is to serve as a compact, fixed-length key, which is why it is the conventional key for shared-memory zones declared with limit_req_zone and limit_conn_zone: the fixed size keeps the zone small and makes lookups cheap even in high-throughput scenarios. Directives such as 'allow' and 'deny' take address literals or CIDR ranges rather than variables, so this variable is not used there; whenever a human-readable address is needed, for example in logs or headers, the string form $remote_addr is the right choice.
Config Example
http {
    # Use the compact binary address as the key of a shared-memory zone
    limit_conn_zone $binary_remote_addr zone=per_ip:10m;
    server {
        location / {
            limit_conn per_ip 10; # at most 10 concurrent connections per client address
        }
    }
}
This variable is not easily human-readable compared to its string equivalent ($remote_addr) and should only be used in contexts where binary format is acceptable.
Care must be taken when using it alongside other directives that expect string formats to avoid misconfigurations.
Description
The $remote_addr variable is populated by NGINX when a client makes a request to the server. It obtains the client's IP address directly from the request's socket information, specifically from the `struct sockaddr` associated with the connection. This variable is essential for access control and logging scenarios, where knowing the originating IP of requests can dictate the response behavior or record the source for auditing. If the NGINX server is behind a reverse proxy or load balancer, the $remote_addr may need to be supplemented by other variables like $http_x_forwarded_for to accurately reflect the client’s original IP address. In a standard configuration, the value will typically be an IPv4 or IPv6 address, represented in standard dotted-decimal or colon-separated formats, respectively. When using this variable in an access control setup (such as with the allow/deny directives), the server can make real-time decisions based on client IPs, enhancing security by permitting or blocking access to specific IPs. The versatile nature of this variable makes it a foundational element in both security and analytics configuration within NGINX.
Config Example
location / {
    deny 192.168.1.0/24;
    allow all;
    access_log /var/log/nginx/access.log combined;
}
If NGINX is behind a load balancer or reverse proxy, $remote_addr may not reflect the actual client IP unless configured correctly with proxy headers.
Care should be taken to configure trusted proxies in the 'set_real_ip_from' to mitigate spoofing of IP addresses when using reverse proxies.
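A minimal sketch of keying a geo block on the client address ($remote_addr is the default source for geo); the $blocked variable name and the address range are assumptions:
# geo blocks must be declared at the http level
geo $blocked {
    default        0;
    192.168.1.0/24 1;
}
server {
    location / {
        if ($blocked) {
            return 403;
        }
    }
}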
Description
The $remote_port variable is a built-in NGINX variable that captures the TCP port number from which the client has established a connection to the server. It is primarily useful for logging and for correlating requests with connection-level information such as firewall or packet-capture data. This variable is set as part of the request handling process, when NGINX processes the incoming connection and initializes the request structure where various client-related details are stored. When a client connects to a server, the source port is chosen by the client's operating system (an ephemeral port, typically in the range 1024-65535, since ports 1-1023 are reserved for well-known services) and made available to NGINX. It also varies frequently, as clients may connect from different applications or environments, and successive connections from the same client will normally use different ports. In configurations, $remote_port most often appears in custom 'log_format' definitions or in map-based conditions keyed on the source port.
Config Example
# log_format must be defined at the http level and referenced by name in access_log
log_format port_log 'Client IP: $remote_addr, Port: $remote_port';
server {
    listen 80;
    location / {
        access_log /var/log/nginx/access.log port_log;
    }
}
}The value of $remote_port is not always predictable; it can vary between requests from the same client.
Using $remote_port in security rules should be done with caution; port spoofing can occur from client-side applications.
This variable is not available in all contexts; ensure you are using it in a correct context such as http, server, or location.
Description
The variable $proxy_protocol_addr is part of NGINX's support for the PROXY protocol, which allows passing the original client IP address through a proxy server. When a client connects through a proxy to an NGINX listener that has PROXY protocol support enabled, NGINX parses the PROXY protocol header sent by the proxy and extracts the original client address. This enables NGINX to log and act on the client's actual IP address instead of the proxy's IP address, which would otherwise be all it sees. This variable is typically used in configurations where NGINX sits behind load balancers or reverse proxies that implement the PROXY protocol. The variable is populated only when the incoming connection carries a PROXY protocol header; otherwise it is empty. Note that the proxy_protocol parameter applies to the whole listening socket, so clients connecting directly to such a listener without sending the header will have their connections rejected rather than silently falling back. Commonly, the variable contains either an IPv4 or IPv6 address, depending on the client's connection type.
Config Example
server {
    listen 80 proxy_protocol;
    location / {
        proxy_pass http://backend;
        add_header X-Real-IP $proxy_protocol_addr;
    }
}
Ensure that the PROXY protocol is enabled on both the NGINX server and the upstream proxy.
If the incoming connection does not include a PROXY protocol header, this variable will be empty.
Make sure that only trusted proxies are allowed to connect with proxy protocol to avoid spoofing.
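If you want $remote_addr itself (and therefore allow/deny, logging, and limits) to use the address carried by the PROXY protocol, the realip module can take it from the header; a sketch, assuming the module is available and 10.0.0.0/8 is the trusted load-balancer range:
server {
    listen 80 proxy_protocol;
    set_real_ip_from 10.0.0.0/8;      # only rewrite the address for trusted proxies
    real_ip_header proxy_protocol;    # take the client address from the PROXY protocol header
    location / {
        proxy_pass http://backend;
    }
}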
Description
The $proxy_protocol_port variable is utilized within NGINX to retrieve the client's source port carried in the PROXY protocol header for connections that have been proxied. It is primarily relevant in configurations where NGINX sits behind a reverse proxy or load balancer, allowing the server to identify the original port from which the client connection was made. This information can be useful for logging, security measures, or backend server decisions. The variable is set only when the PROXY protocol is enabled via the proxy_protocol parameter of the listen directive and the connection actually carries a PROXY protocol header; otherwise it is empty. Typical values are ephemeral client ports rather than well-known service ports such as 80 or 443. Using $proxy_protocol_port provides additional context about the incoming request, which is especially valuable in multi-tier architectures where backend processing may depend on the original client connection details.
Config Example
server {
    listen 80 proxy_protocol;
    location / {
        add_header X-Proxy-Port $proxy_protocol_port;
    }
}
Ensure that the PROXY protocol is enabled at the listen directive; otherwise, the variable will not be set and will be empty.
For the variable to provide meaningful data, it must be used in a context where the PROXY protocol is supported and correctly configured.
Description
The variable $proxy_protocol_server_addr is used in configurations involving the PROXY protocol, which lets NGINX receive details about the original connection when requests are relayed through another layer (such as AWS ELB or other reverse proxies). This variable is populated when the proxy_protocol parameter is enabled on the listen directive. Its value is the server (destination) address recorded in the PROXY protocol header, that is, the address on the proxy or load balancer that the client originally connected to; the client's own address is exposed separately through $proxy_protocol_addr. Knowing the original destination address can be useful for logging, for telling apart traffic that entered through different front-end addresses, or for routing decisions in multi-homed setups. If the PROXY protocol is not in use, or the connection does not carry the header, the value of $proxy_protocol_server_addr will be an empty string. As such, it is common to see this variable used together with $proxy_protocol_addr and $proxy_protocol_server_port so that the full original connection tuple is captured.
Config Example
# log_format must be defined at the http level and referenced by name
log_format pp_addr '$proxy_protocol_server_addr';
server {
    listen 80 proxy_protocol;
    location / {
        access_log /var/log/nginx/access.log pp_addr;
    }
}
Ensure the proxy protocol is explicitly enabled; otherwise, this variable will not be populated.
Misconfiguration of the upstream proxy can lead to empty or incorrect values.
If running behind multiple layers of proxies, ensure that the correct proxy protocol headers are retained.
Description
The $proxy_protocol_server_port variable is used in NGINX configurations to retrieve the server (destination) port number carried in the PROXY protocol header. It is relevant when the PROXY protocol is enabled, allowing NGINX to see the port the client originally connected to on the proxy or load balancer in front of it. When a request is processed, if the PROXY protocol is activated via the proxy_protocol parameter of the listen directive, the variable is populated from the connection details relayed by the proxy. For example, if the client reached the load balancer on port 443 and the load balancer forwarded the connection to NGINX on port 80, $proxy_protocol_server_port would report 443. This is useful for distinguishing traffic by its original entry port, for example to drive redirects or protocol-specific behavior that would otherwise be lost behind the proxy. Typical values are therefore '80' for HTTP or '443' for HTTPS connections, depending on how clients reach the front-end proxy.
Config Example
server {
    listen 80 proxy_protocol;
    location / {
        return 200 "Server port: $proxy_protocol_server_port";
    }
}
Ensure that the proxy in front of NGINX actually sends the PROXY protocol header and that the listen directive carries the proxy_protocol parameter; otherwise, this variable will remain empty.
Be aware that using this variable without the necessary proxy settings will cause unexpected behavior, as it relies on the presence of the PROXY protocol header.
If using SSL, ensure the upstream server is transmitting both the PROXY protocol and the SSL settings correctly.
Description
The $proxy_protocol_tlv_ variables are prefixed variables in NGINX that expose individual Type-Length-Value (TLV) records from a PROXY protocol version 2 header. The protocol, often used in load-balancing scenarios, transmits connection information from the proxy server to the backend server in a structured format, and version 2 allows additional TLV records to be appended after the fixed address fields. Each record is accessed by appending its name or numeric type to the prefix: NGINX documents names such as alpn, authority, unique_id, netns, and the ssl_* family, and other records can be read by their hexadecimal type, for example $proxy_protocol_tlv_0xEA. These variables are available when NGINX is configured to accept the PROXY protocol on the listening socket, such as when receiving traffic from a load balancer that implements the protocol. They enable backend servers to make more informed decisions based on connection metadata that would otherwise be lost behind NAT or similar setups; note that the client address and port themselves are carried in the fixed part of the header and exposed via $proxy_protocol_addr and $proxy_protocol_port rather than via TLVs. If the upstream proxy does not send the corresponding TLV record, the variable is empty. This is particularly useful in scenarios requiring accurate traffic logging or customization based on the originating connection's details.
Config Example
server {
    listen 80 proxy_protocol;
    location / {
        default_type text/plain;
        return 200 "Unique ID: $proxy_protocol_tlv_unique_id\n";
    }
}
Ensure that the Proxy Protocol is enabled on the upstream proxy to populate these variables; otherwise, the variables will be unset.
Misconfiguring the Proxy Protocol settings may lead to empty variable values, causing unintended application behavior or errors.
Description
The `$server_addr` variable is part of the NGINX Core module and provides the actual IP address of the server that is responding to a request. This variable is set during the processing of a request, specifically within the context of TCP/IP communication, allowing it to reflect the address that the client connected to. If the server has multiple IP addresses or is behind a load balancer, `$server_addr` will represent the primary address that NGINX uses to respond to incoming client requests. This variable can be particularly useful in logging or in the construction of response headers. It is worth noting that `$server_addr` reflects the address from which a request was processed, and it is determined before the request is passed to any `location` blocks or handlers. In scenarios where NGINX operates without a specific bound server IP (such as in a containerized environment where the IP can change), this variable may require appropriate configuration to ensure it always returns a valid address. Typically, it will return either an IPv4 or an IPv6 address, depending on the server's configuration and the nature of client requests.
Config Example
server {
    listen 80;
    server_name example.com;
    location / {
        add_header X-Server-IP $server_addr;
        # other configurations
    }
}
$server_addr always refers to the address NGINX accepted the connection on, never the client's address; behind a reverse proxy it will be the address of the NGINX host itself, not the public-facing address the client connected to.
In IPv6 configurations, ensure that the server is properly configured to handle IPv6 addresses to avoid unexpected results.
Description
The $server_port variable in NGINX is dynamically set during the processing of a request to represent the port on which the server is listening for incoming connections. It is particularly useful when configuring responses that may vary based on the port number, such as enabling specific response headers or applying firewall rules. The variable takes into consideration both the port specified in the configuration and the port used by the client to initiate the request, providing clarity in multiport setups. This variable is generally set when the server block is defined in the NGINX configuration file. For instance, if a server block is configured to listen on port 80, any requests arriving on that port will update the $server_port variable to 80. In contrast, if the server block listens on an alternate port such as 443 (commonly used for HTTPS), the variable will reflect that value. This makes $server_port essential in crafting configurations that rely on distinguishing between, for example, HTTP and HTTPS traffic, as well as when managing applications that can run on non-standard ports. Typical values of $server_port include 80 for HTTP, 443 for HTTPS, or any other user-defined port. The variable does not include the protocol (HTTP/HTTPS) but solely focuses on the port number itself, giving flexibility and specificity for a range of server configurations.
Config Example
server {
    listen 80;
    server_name example.com;
    location / {
        add_header X-Server-Port "$server_port";
    }
}
The value of $server_port is determined at runtime from the listening socket that accepted the connection, so when a server block listens on several ports it reflects whichever port the request actually arrived on.
If NGINX is behind a load balancer or proxy that alters the port, $server_port will reflect the port at which NGINX receives the request, not the external-facing port.
Description
The $server_protocol variable in NGINX is dynamically set during the processing of a request and returns the protocol version that the client used to connect to the server. This variable is particularly useful for logging, conditional configurations, and response handling, as it allows you to distinguish between different protocol versions such as HTTP/1.0, HTTP/1.1, or HTTP/2.0. Typically, the variable is set during the server's request phase, becoming available in various contexts, including http, server, and location blocks. When a request arrives at the NGINX server, the module evaluates the incoming request details and populates the $server_protocol variable accordingly. Its typical values are strings like "HTTP/1.1", "HTTP/2", etc., but it reflects the exact protocol that was negotiated during the connection establishment phase with the client. Understanding the $server_protocol variable is important for developers and system administrators as it can serve as a basis for security policies or functionality toggles. For instance, you might want to allow or deny access based on whether a connection was established over HTTPS or HTTP. Logging the protocol can also provide valuable insights for performance and security analysis.
Config Example
server {
    listen 80;
    server_name example.com;
    location / {
        add_header X-Protocol $server_protocol;
    }
}
Ensure that the variable is used in the right context where it's available (http, server, location).
Do not confuse $server_protocol with $scheme: $scheme only distinguishes 'http' from 'https', while $server_protocol carries the protocol version string negotiated for the connection, such as "HTTP/1.1" or "HTTP/2.0".
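A minimal sketch of a protocol-based policy, refusing legacy HTTP/1.0 clients; the choice of status code is an assumption:
location / {
    if ($server_protocol = "HTTP/1.0") {
        return 426; # Upgrade Required
    }
}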
Description
The $scheme variable is essential for determining the protocol used by the client to initiate a request to the server. It reports whether the connection to the NGINX server was made using HTTPS or HTTP: when the request arrives over a TLS-encrypted connection, $scheme evaluates to 'https', while for plain HTTP requests it evaluates to 'http'. The variable is set during request processing in the NGINX core, and its value is determined by whether the request was accepted on a listening socket with the ssl parameter (or otherwise terminated TLS), not by the presence or validity of any particular certificate. Scripts, redirects, and other configuration options often use this variable to generate correct URLs in response outputs or HTTP headers. For example, when redirecting a user or building links dynamically, using the $scheme variable ensures that links reference the appropriate protocol based on how the user accessed the server. It therefore plays an important role in environments where both secure and non-secure access exist simultaneously.
Config Example
server {
    listen 80;
    server_name example.com;
    location / {
        return 301 $scheme://www.example.com$request_uri;
    }
}
Ensure SSL is configured properly; otherwise, it will always return 'http'.
Avoid using $scheme in contexts where it may not be defined, such as inside certain directives that do not handle requests.
Using $scheme too liberally can lead to security issues if not handled correctly, such as exposing internal endpoints over HTTP.
Description
The $https variable is a built-in variable in NGINX that indicates whether the current request arrived over an SSL/TLS connection. When a request is handled over a secure connection, $https is set to 'on'; for a plain HTTP request the variable is an empty string rather than the string 'off'. This makes it useful for redirecting HTTP traffic to HTTPS or for configuring conditional responses based on the security of the connection. The value is 'on' when NGINX accepts the request on a listening socket with the ssl parameter and a working certificate configuration; for example, if a server block specifies an SSL certificate and listens on port 443 with ssl, $https will be 'on' for requests on that socket. Conversely, for requests accepted on a plain port 80 listener the variable is empty. It is important to note that the variable takes no value other than 'on' or the empty string, so conditions should test for 'on' (or for emptiness) rather than for 'off'.
Config Example
server {
    listen 80;
    listen 443 ssl;
    server_name example.com;
    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;
    location / {
        if ($https != "on") {
            return 301 https://$host$request_uri;
        }
        # Handle the secure request
    }
}
Ensure that SSL is correctly configured; otherwise, $https will not work as expected.
The variable may not be reliable if NGINX is behind a proxy that does SSL termination; use appropriate headers to check the original connection scheme.
Description
The $request_port variable in NGINX represents the port on which the request was received from the client. This variable is set during the processing of the request and is derived from the client's connection details as specified in the request header. When a client makes an HTTP or HTTPS request, the request is routed to the specified server block based on matching criteria, which includes the port number. For standard HTTP requests, this is typically port 80, while for HTTPS requests, it's usually port 443. If the request is made using a non-standard port, $request_port will reflect that specific port number. For example, if the client connects to port 8080, then the value of $request_port will be "8080". It is important to note that when using protocols like HTTP/2, the port can be dynamically negotiated, but NGINX will still capture and reflect the port number used for that connection in the $request_port variable. Being aware of this value can be helpful for logging, conditional configuration, or when rewriting URLs based on the received port.
Config Example
server {
    listen 80;
    location / {
        return 200 "Request received on port: $request_port";
    }
}
The variable will be empty if the server is running behind a proxy that does not forward the original client port.
Using this variable in certain contexts may lead to unexpected results if the request is forwarded or modified by an upstream server.
Description
The `$is_request_port` variable is used to determine if the incoming HTTP request was made on a non-standard port, allowing the server to differentiate between requests made on the standard ports (80 for HTTP and 443 for HTTPS) and those made on alternate ports. This variable is evaluated during the request processing phase, specifically when the request is being parsed and before it reaches any handlers. When this variable is set, it contains a value of '1' if the request was made on an alternate port and '0' if it was made on the standard HTTP or HTTPS ports. This is particularly useful in scenarios where applications need to apply specific configurations, logging, or access rules based on the port from which the request originated. The variable is automatically initialized by NGINX based on the incoming request's socket information and does not require manual setup. Developers often use this variable in combination with access control directives or in conditional blocks to provide tailored responses or logging based on whether the request was made to a standard or non-standard port, thereby enhancing control over server behavior based on client requests.
Config Example
server {
    listen 8080;
    location / {
        if ($is_request_port) {
            return 403; # Deny access from non-standard ports
        }
        # Other configurations...
    }
}
Ensure that the rewrite rules or access controls you set do not inadvertently block legitimate traffic when using this variable.
Be aware that the variable only evaluates to 1 or 0 based on the port; it does not provide the actual port number.
Description
The $request_uri variable is a core variable in NGINX that captures the complete URI requested by the client. This includes both the path and the query string, if any is provided. It is set during the processing of a request, specifically when NGINX reads the request line from the client, which allows it to create a response based on that URI. This variable is vital for routing requests, generating logs, and creating redirects or rewrites as needed. Typical values of $request_uri could be paths like "/products/item?id=123" or "/api/v1/users", where the former includes a query string while the latter does not. This makes $request_uri a comprehensive way to reference the precise request from the user's perspective. It is often used in conjunction with other variables to log requests, control access, and optimize response behaviors depending on the type or parameters of the request.
Config Example
location /api {
    if ($request_uri ~* "/products") {
        # Handle product requests
        proxy_pass http://backend;
    }
}
location / {
    # A format that records $request_uri must be declared at the http level, e.g.:
    # log_format main '$remote_addr - $remote_user [$time_local] "$request $request_uri" $status $body_bytes_sent';
    access_log /var/log/nginx/access.log main;
}
Make sure to use $request_uri only when you need the full URI. If you need just the path without the query string, consider using $uri instead.
Be cautious when using $request_uri in conditional blocks as it may lead to unexpected behaviors if not properly evaluated.
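A canonical use of $request_uri is a whole-site redirect that preserves the client's original path and query string exactly as received:
server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}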
Description
The $uri variable is set during the processing of an HTTP request in NGINX and holds the current, normalized request URI: the path part of the URL following the domain and protocol, with percent-encoded characters decoded, dot segments and repeated slashes resolved, and without the query string. Its initial value is derived from the request line sent by the client, but unlike $request_uri it is not frozen there: internal rewrites, index processing, and error_page handling all update $uri as the request moves through the configuration, so it always reflects the URI currently being processed rather than the one originally received. When used in server configurations, the $uri variable allows you to make decisions based on the resource actually being resolved. It is especially useful in location blocks for access control, logging, try_files fallbacks, or redirecting traffic. The value is a string representing a path, formatted like '/images/photo.jpg' or '/api/v1/users'. If the request was rewritten under a different location block, $uri reflects that change.
Config Example
location /images {
    alias /var/www/images;
    error_page 404 = /404.html;
}
location /api/v1 {
    proxy_pass http://backend;
    access_log /var/log/nginx/api_access.log;
    if ($uri ~* /user/(\d+)) {
        # Do something specific for user URIs
    }
}
}The value of $uri can change based on internal rewrites configured in the NGINX configuration, so be cautious when using it to match specific locations.
$uri does not include the query string; use $request_uri if you need both the URI and the query string.
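A typical use of $uri is the static-file fallback pattern with try_files; the front-controller path /index.php?$args is an assumption about the application behind NGINX:
location / {
    try_files $uri $uri/ /index.php?$args;
}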
Description
The $document_uri variable is set during the processing of a request in NGINX, representing the part of the URI that is used to identify the requested resource. Unlike the $request_uri variable which includes the query string, $document_uri strictly contains the URI path, making it suitable for scenarios where the query string is not needed. This variable is particularly useful when documenting or logging the URIs being accessed on the server, enabling the tracking of intended resource requests without interference from any URL query parameters. The value of $document_uri is determined after the request has been processed by the NGINX routing system, specifically within the location context. This means that it can reflect decisions made by NGINX regarding URI rewriting or the use of specific location blocks. Typical values for $document_uri can range from simple paths like "/index.html" to structured URIs like "/products/item123". Any percent-encoded characters in the URI are decoded to ensure cleaner and more human-readable output when using this variable. Usage of this variable within access logs, rewrites, or conditional statements can simplify configuration, especially when specific actions are needed based on the resource that is being accessed. For example, log formats can include $document_uri to trace which documents are frequently requested.
Config Example
log_format mylog '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$document_uri"';
access_log /var/log/nginx/access.log mylog;
Remember that $document_uri does not include the query string; use $request_uri if the query string is needed.
The variable is set after the request is mapped to a location; ensure that you are in a valid context where it is defined.
Description
The `$request` variable in NGINX is set during the processing of an HTTP request and captures the entire request line, which includes the HTTP method (GET, POST, etc.), the requested URI, and the HTTP protocol version (HTTP/1.1, HTTP/2, etc.). This variable is populated when NGINX begins to handle a request and is utilized in various contexts to provide detailed information about the request being processed. As such, its value typically looks like 'GET /index.html HTTP/1.1' or 'POST /api/v1/resource HTTP/2'. The `$request` variable can be particularly useful for logging purposes or for customizing responses based on specific HTTP methods or requested URIs. For example, by inspecting the contents of the `$request` variable, server rules can be designed to conditionally rewrite URLs, log access patterns, or even apply security policies based on the request method. It is worth noting that once a request has been processed and a response has been generated, the `$request` variable holds the state of the request as it was when initially received, unmodified by later processing steps or internal actions.
Config Example
http {
    log_format custom_format '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent';
    access_log /var/log/nginx/access.log custom_format;
}
Ensure proper context usage; it is primarily available in request-processing contexts such as location and server.
Using `$request` does not account for rewritten URIs unless specifically requested after an internal rewrite.
Description
The $document_root variable in NGINX provides the file system path to the root directory from which files are served for a given request. This directory is typically set in the server or location block through the `root` directive. Its value is determined by the most specific `root` directive applicable to the current request. If a `root` directive exists in both the server block and the location block, the more specific location block's value will take precedence. This variable is useful for constructing file paths, especially when combined with other variables like `$uri` or `$request_filename`. When NGINX handles incoming requests, it parses the configuration and assigns the appropriate document root based on the matched location directive. If no `root` is set, the variable will return an empty string which can lead to errors if misused in configurations expecting a valid path. One important note is that if the server block defines a `root` and no specific `location` block with a `root` directive exists, all requests matching that server will use the server block's document root. The variable is often used within various NGINX directives, including `try_files`, `rewrite`, and `alias`, to alter the way requests are processed based on the defined file structure.
Config Example
server {
    listen 80;
    server_name example.com;
    root /var/www/example.com;
    location / {
        try_files $uri $uri/ =404;
    }
    location /images/ {
        # This more specific root wins here: /images/foo.png is looked up as /var/www/static/images/foo.png
        root /var/www/static;
    }
    # Example usage in a log (the format itself must be declared at the http level):
    # log_format docroot '... "$document_root"';
    access_log /var/log/nginx/access.log;
}
If no root directive is defined, $document_root falls back to the compiled-in default (usually "html" under the NGINX prefix), which rarely points where you expect and commonly leads to `404 Not Found` errors.
Be cautious when using $document_root within a `try_files` directive; this can result in unexpected behavior if paths are not correctly structured.
Mixing up `root` and `alias` directives can lead to confusion, as they handle paths differently.
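A common concrete use of $document_root is building the on-disk script path handed to FastCGI; the PHP-FPM socket path is an assumption:
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php-fpm.sock;
}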
Description
The $realpath_root variable in NGINX provides the absolute file path to the root directory of the current location block for a given request, following all symbolic links. When a request is processed, NGINX resolves the requested URI against the configured document root, which is defined using the root or alias directives. If the root directory specified is a symbolic link, NGINX will resolve it to its actual physical location on the filesystem, ensuring that the path is accurate rather than pointing at the symlink itself. This variable is particularly useful in configurations where symlinked directories are used for serving files, as it guarantees the resolved path is used for security checks or file-related operations. It is set during the request processing phase when NGINX examines the configuration block handling the request. Typical values for $realpath_root would include paths like '/var/www/html' or '/usr/share/nginx/html' depending on the user-defined configurations in the respective `server` or `location` contexts.
Config Example
location /images/ {
    alias /var/www/data/images/;
    try_files $uri $uri/ =404;
    # Expose the resolved (symlink-free) root, e.g. for debugging
    add_header X-Realpath-Root $realpath_root;
}
Ensure that the path set in the root or alias directive is valid and exists, otherwise $realpath_root may not return a usable path.
Be cautious of using $realpath_root in conjunction with directory traversals, as it resolves paths that may bypass security checks if not properly configured.
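A sketch of the release-by-symlink pattern where $realpath_root matters most: if /var/www/app/current is a symlink swapped on each deploy, pinning FastCGI to the resolved directory keeps requests on the release that was live when they arrived (paths and socket are assumptions):
location ~ \.php$ {
    root /var/www/app/current/public;   # "current" is assumed to be a symlink to the active release
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
    fastcgi_param DOCUMENT_ROOT $realpath_root;
    fastcgi_pass unix:/run/php-fpm.sock;
}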
Description
The `$query_string` variable in NGINX captures the query string from the client's request URI. The query string is part of the URL that follows the '?' character and typically includes parameters and their values. For example, in the URL `http://example.com/index.html?name=John&age=30`, the query string is `name=John&age=30`. This variable is automatically populated by NGINX when a request is received and can be utilized in various contexts, particularly within location blocks where conditional logic is applied. This variable is useful for implementing logic based on the specific parameters passed in a query string. For instance, you can use `$query_string` in rewrites, access control, or logging configuration to direct requests based on their parameters. It is important to note that this variable is a simple string representation of the query string and does not parse its content; it is up to the administrator to handle parsing if more fine-grained control is needed over the parameters themselves.
Config Example
location /search {
    if ($query_string ~* "keyword=(.*)") {
        set $search_keyword $1;
        add_header Search-Keyword "$search_keyword";
    }
}
Ensure your query string parameters do not contain sensitive information, as they can be logged or exposed in URLs.
Remember that the `$query_string` variable does not include the '?' character. This might lead to confusion when constructing URLs or redirects.
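When only one parameter matters, the built-in $arg_NAME form avoids hand-written regexes over $query_string; a minimal sketch, where the "keyword" parameter name is an assumption:
location /search {
    # $arg_keyword holds the value of the "keyword" query parameter, or an empty string
    if ($arg_keyword = "") {
        return 400;
    }
}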
Description
In NGINX, the $args variable is used to retrieve the query string from the URL of the incoming request. It includes any parameters that are passed as part of the URL following the '?' character. For example, in a URL like 'http://example.com/page?name=John&age=30', the $args variable would return 'name=John&age=30'. The $args variable is automatically populated when NGINX processes an HTTP request, making it readily available in several contexts, including within the server or location blocks, as well as in directives that accept variables. It's important to note that if there are no arguments in the query string, the value of $args will be an empty string. Additionally, this variable should be used cautiously to avoid accidentally exposing sensitive information that might be included in the query string.
Config Example
location /search {
    if ($args) {
        set $search_query $args;
        # Log or handle the search query
        access_log /var/log/nginx/search.log;
    }
}
Remember that $args includes all query parameters, so be careful with sensitive data.
If there are no query parameters, $args will return an empty string, not 'null'.
Properly decode any URL-encoded values before using them, as $args will include them encoded.
Description
The $is_args variable in NGINX returns a string that is either empty or contains a single question mark ('?'). It specifically signifies the presence of query string parameters in the URI for the current request. When there are query parameters associated with a request (for example, '/path?arg=value'), the $is_args variable will evaluate to '?', which can be useful in constructing redirects or rewrites that preserve the query string in the target URI. The variable is set during the processing of a request within the NGINX request processing lifecycle. If the request URI does not include any query parameters, $is_args will be an empty string. On the contrary, if the request carries query parameters, such as 'arg1=value1&arg2=value2', NGINX will set $is_args to '?' only, regardless of the parameters present. Therefore, its value does not reflect the content of the query string itself but simply indicates its existence.
Config Example
location /example {
    if ($is_args) {
        # Re-attach the original query string only when one exists
        return 301 /another_example$is_args$args;
    }
    return 301 /another_example;
}
Using $is_args in a context where it is not defined will lead to unexpected behavior. Ensure it is used in an appropriate context, typically inside a location or server block.
Avoid assuming that $is_args will provide the actual query string; it only provides a '?' if parameters exist, not their content.
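Another common spot for $is_args is a proxy_pass URI built from variables, where NGINX no longer appends the query string automatically; the upstream name is an assumption:
location /api/ {
    # "backend" must be declared as an upstream block (or be resolvable)
    proxy_pass http://backend$uri$is_args$args;
}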
Description
The $request_filename variable in NGINX is set during the processing of a request and provides the complete filesystem path to the requested resource. It is constructed based on the request URI and the specified root or alias directives in the configuration. When a request is made, NGINX combines the root path defined in the relevant context with the URI, resulting in the absolute path on the server where the requested file or directory can be found. For example, if a request is made for '/images/logo.png' and the server block’s root is set to '/var/www/html', then $request_filename would be '/var/www/html/images/logo.png'. This variable is particularly useful for modules that require file access, such as handling static files, enabling custom logging formats, or implementing conditional logic with the `try_files` directive. $request_filename gets re-evaluated for each request context, ensuring it reflects the current request’s target accurately. Common use cases include logging the requested file path or checking its existence before performing further processing with `try_files` or other configuration directives.
Config Example
location /images/ {
    root /var/www/html;
    if (!-f $request_filename) {
        return 404;
    }
}
Ensure that the root or alias is correctly set to avoid incorrect path resolution.
Do not confuse $request_filename with $document_root, as the latter only returns the root path without appending the request URI.
Using $request_filename in a rewrite directive might yield unexpected results if not properly managed with respect to recursion.
Description
The $server_name variable in NGINX holds the primary name of the server block that processed the request, i.e. the first name listed in that block's server_name directive. It does not track the Host header of the incoming request; use $host or $http_host when the client-supplied name is what you need. The value depends on how the server block is defined: it can be a plain name, a wildcard, or, when the matched server_name is a regular expression, the text of that expression itself. Requests that match no explicitly named block are handled by the default server for the listening socket, so $server_name then reflects that block's primary name, which may be an empty string if none was configured. This variable is typically used in logging, error pages, and rewrite or redirect rules where a stable, configured name is preferable to whatever the client sent. When a server block lists several names, matching against incoming requests prioritizes exact matches over wildcards and regular expressions, but $server_name itself always reports the first configured name regardless of which one matched. It can also be included in logs to provide context about which virtual server handled a request, aiding debugging and analytics.
Config Example
server {
    listen 80;
    server_name example.com www.example.com;
    location / {
        root /var/www/example;
        index index.html;
    }
    error_page 404 /404.html;
    access_log /var/log/nginx/example.access.log;
    error_log /var/log/nginx/example.error.log;
}
If a request matches no named server block, it is handled by the default server for that listener, so $server_name reports that block's primary name (possibly an empty string), which can be confusing in logs or redirects.
Avoid using $server_name in locations where it might be impacted by regex or wildcard definitions in server blocks, as results may vary.
Remember that the presence of www or subdomains needs to be accurately reflected in server_name to ensure correct behavior.
Description
The $request_method variable is part of the core NGINX variables and is used to capture the method of the HTTP request being processed. This variable will contain the method verb, such as GET, POST, PUT, DELETE, or HEAD, which indicates the action that the client intends to perform on the given resource. It is crucial in scenarios where specific handling of request types is required, for instance, distinguishing between GET and POST for processing data differently. This variable is set during the request processing phase when NGINX parses the incoming HTTP request headers. It is often used in conditionals or in logging configurations to capture details about the request for better diagnostics or to apply different rules based on the request type. For example, some server configurations might only allow certain methods for particular endpoints, and the $request_method variable can enforce this with a condition such as: if ($request_method !~ ^(GET|POST)$) { return 405; } — which restricts access to GET and POST, returning a 405 Method Not Allowed status for all others.
Config Example
location /example {
    if ($request_method = POST) {
        # Handle POST requests
    }
    if ($request_method = GET) {
        # Handle GET requests
    }
}
Ensure proper casing: methods are case-sensitive, so 'get' and 'GET' are treated differently.
Avoid using $request_method in situations where its value changes mid-request, as it reflects the value at the start of the request.
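For simply restricting which methods a location accepts, the built-in limit_except directive is usually cleaner than chained if blocks; a minimal sketch (note that allowing GET implicitly allows HEAD):
location /api {
    limit_except GET POST {
        deny all;
    }
}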
Description
In NGINX, the $remote_user variable is set when the server is configured to use HTTP Basic Authentication. This occurs when the `auth_basic` directive is used in a configuration block, prompting clients to enter a username and password. The username entered by the client is then made available to the server and can be accessed through the $remote_user variable. If the client does not authenticate successfully or if the request does not require authentication, $remote_user will be empty. Typically, the $remote_user variable is used in logging or for authorization purposes within server configurations. It can be included in custom log formats, allowing administrators to track who is accessing certain resources. Additionally, this variable can influence access control decisions in combination with conditional configuration directives, enabling or denying access based on the authenticated user's identity. This variable is primarily useful in scenarios where security is critical, such as when exposing sensitive data or services that require user identification for access control. However, it should be noted that this information might be sensitive, and using it in logs should be done with consideration for privacy and security practices, ensuring that access logs do not expose personally identifiable information.
Config Example
http {
    # log_format is only valid in the http context; 'combined' is predefined and
    # cannot be redefined, so a new format name is used here
    log_format auth_log '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"';
    server {
        listen 80;
        server_name example.com;
        location / {
            auth_basic "Restricted Area";
            auth_basic_user_file /etc/nginx/.htpasswd;
            # Include the authenticated $remote_user in the access log
            access_log /var/log/nginx/access.log auth_log;
        }
    }
}
Ensure that the `auth_basic` directive is set in the relevant context or location block, or $remote_user will always be empty.
Be aware of the security implications of logging sensitive information such as usernames. Always consider privacy guidelines when logging $remote_user.
Check the path set in `auth_basic_user_file` to prevent unauthorized access to authentication files.
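Beyond logging, the authenticated username can be forwarded to an upstream application; a minimal sketch, assuming a proxied backend and an X-Authenticated-User header name chosen purely for illustration:
location /app/ {
    auth_basic           "Restricted Area";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass           http://backend;
    # Pass the verified username so the backend does not need its own login step
    proxy_set_header X-Authenticated-User $remote_user;
}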
Description
The $bytes_sent variable in NGINX tracks the total number of bytes transmitted to the client for a response. This variable is computed during the request processing phase and is available in all contexts where logging occurs. It reflects not only the raw response body but also any additional bytes sent, such as HTTP headers. The value is established when NGINX begins delivering the response to the client and continues to accumulate throughout transmission until the response is fully sent. Typically, $bytes_sent varies significantly based on the type of content being delivered; for instance, static files will report roughly the file size plus headers, while dynamic content generated by scripts can be less predictable. Factors such as response headers and compression also affect the size reported. Overall, developers and system administrators use this variable primarily for analyzing bandwidth usage, calculating transfer statistics, and tuning server performance for efficient client response handling.
Config Example
log_format main '$remote_addr - $remote_user [$time_local] "$request" $status $bytes_sent';
access_log /var/log/nginx/access.log main;
The value of $bytes_sent is cumulative and only provides the total bytes sent for the specific request being processed at that moment.
In cases of small responses, or when responses are cached, the value may not reflect expected bandwidth metrics.
For chunked transfer encoding responses, $bytes_sent may not equate to a precise content size measurement during response delivery.
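To see how much of the traffic is header overhead, $bytes_sent can be logged next to $body_bytes_sent; a small sketch (the format name and log path are illustrative):
http {
    # The difference between the two counters approximates header overhead per response
    log_format bandwidth '$remote_addr "$request" body=$body_bytes_sent total=$bytes_sent';
    access_log /var/log/nginx/bandwidth.log bandwidth;
}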
Description
The $body_bytes_sent variable is a significant metric used in NGINX logging and performance monitoring. It captures the total amount of data sent to the client, specifically the bytes that are transmitted in the response body. This variable is set during the processing of each request, once the response body has been completed and sent. It is important to note that this variable does not count the headers sent to the client, only the body content. This variable becomes particularly useful in scenarios involving performance evaluation and bandwidth calculations. As the handling of HTTP requests progresses, NGINX maintains an accumulator for the transmitted bytes, which culminates in the final value for $body_bytes_sent. Typical values can range from 0 (for cases where the request results in an error or no body content is sent) to any arbitrary size based on the nature of the response—small for simple HTML pages or large for file downloads or media streaming. In practice, the value of $body_bytes_sent can help administrators identify traffic patterns, user engagement with content, and potential areas of optimization within their applications. By logging this variable, one can analyze bandwidth usage as well as diagnose issues related to slow responses, by correlating the bytes transmitted with other variables such as response time and request rate.
Config Example
log_format custom_format '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent';
access_log /var/log/nginx/access.log custom_format;
Make sure that the variable is accessed after the response body has been set or transmitted, otherwise it may return unexpected values like 0.
Remember that $body_bytes_sent does not include headers; if you require total bytes sent (including headers), you'll need to account for that separately.
Description
The $pipe variable is used within the NGINX server to determine if a request is being processed via HTTP pipelining. Pipelining is a technique where multiple requests can be sent on a single connection, allowing the client to send additional requests before the previous responses are received. This can improve performance in specific scenarios, particularly for web browsers sending multiple requests to load resources concurrently. The variable can be referenced within NGINX configuration files, most commonly in log formats. For each request, NGINX sets the value of the $pipe variable accordingly: it contains "p" if the request was pipelined and "." otherwise. Typical usage involves logging, so that pipelined and non-pipelined requests can be distinguished when analyzing access logs, or so that response handling can be tuned based on how clients are sending requests.
Config Example
http {
log_format custom_format '[$pipe] $remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"';
access_log /var/log/nginx/access.log custom_format;
}
Pipelining is not supported by all browsers; this could lead to confusion if $pipe is used for significant server logic without proper checks.
If the configuration does not log the value of $pipe explicitly, it may lead to misunderstandings during debugging or performance tuning.
Description
The $request_completion variable is utilized in NGINX to provide insight into the state of request processing. It is set by the NGINX core and contains "OK" once a request has completed and the response has been fully sent; otherwise it is an empty string, for example when the client closes the connection before the full response is delivered. This makes it useful for logging and for spotting requests that were never completed. Because the value only becomes meaningful once processing has finished, it is typically evaluated at log-writing time rather than in earlier configuration phases. This status reporting helps in diagnostics and performance monitoring of the NGINX server.
Config Example
log_format custom_format '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_completion';
access_log /var/log/nginx/access.log custom_format;
Ensure that you are not using this variable in the wrong context, such as within an 'if' block, which can lead to unexpected behavior.
The variable is an empty string until the request has fully completed, so it is only meaningful at log-writing time.
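One practical pattern is logging only requests that did not complete; a sketch using a map together with the conditional access_log parameter (the format name and log path are illustrative):
http {
    # Anything other than "OK" is treated as an incomplete request
    map $request_completion $incomplete {
        OK      0;
        default 1;
    }
    log_format aborted '$remote_addr "$request" $status completion="$request_completion"';
    # Write an entry only when the request did not finish normally
    access_log /var/log/nginx/aborted.log aborted if=$incomplete;
}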
Description
The $request_body variable in NGINX captures the body of the client request after it has been read by the server. It is primarily populated when the request method is one that typically carries a body, such as POST or PUT. The variable's value is made available in locations processed by the proxy_pass, fastcgi_pass, uwsgi_pass, and scgi_pass directives, and only when the body has been read into a memory buffer; if the body was written to a temporary file instead, $request_body is empty (see $request_body_file). Once set, it holds all data sent by the client, making it useful for inspecting forms, file uploads, or any payloads sent in requests. When utilizing the $request_body variable, it is crucial to note that the request body is buffered. Its size is limited by the client_max_body_size directive, which restricts the size of the body that is accepted; if the body exceeds this limit, the request is rejected, and the variable won't hold any data. Typical values for $request_body can include JSON payloads, multipart form data, or simple text, depending on the content type and the client's intent.
Config Example
server {
listen 80;
server_name example.com;
location /submit {
# Capture the request body
proxy_pass http://backend;
proxy_set_header X-Request-Body $request_body;
}
}
Ensure client_max_body_size is set appropriately to avoid issues with larger request bodies being discarded.
Remember that $request_body is only available for methods that support a body, such as POST and PUT.
If using post-processing in a subsequent location block, make sure previous location blocks do not consume the request body if you intend to access it later.
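For debugging form submissions, the body can be written to a dedicated log; a sketch assuming a proxied /submit location (escape=json keeps quotes and newlines in the body from breaking the log line):
http {
    log_format postdata escape=json '$remote_addr "$request" body=$request_body';
    server {
        location /submit {
            access_log /var/log/nginx/postdata.log postdata;
            proxy_pass http://backend;
        }
    }
}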
Description
The $request_body_file variable is used in scenarios where an HTTP request body exceeds NGINX's buffer size limitations. When a request is made, NGINX reads the request body; if the body is larger than the configured buffers (or if writing to a file is forced with client_body_in_file_only), it saves the body to a temporary file on the server filesystem. This variable then holds the path of that temporary file, allowing further processing of the request body. The variable is set during the request processing phase, specifically when one of the request-body handlers, such as the proxy or FastCGI modules, decides that the body should not be kept entirely in memory. Typical values are file paths on the filesystem, such as `/tmp/nginx_request_body_XXXXXX` (the exact location depends on client_body_temp_path). The temporary file is normally deleted after the request has been processed, thereby managing server disk space efficiently. In configurations where handling large payloads (like file uploads) is common, configuring request body handling appropriately, including buffer sizes, is crucial to ensure optimal performance and avoid server overloads.
Config Example
http {
    log_format bodyfile '$remote_addr $request_body_file';
    server {
        client_max_body_size 10M;
        location /upload {
            # Force the request body to be written to a temporary file
            client_body_in_file_only clean;
            proxy_pass http://backend;
            proxy_request_buffering on;
            # log_format cannot be declared here, so the format above lives in the http context
            access_log /var/log/nginx/custom.log bodyfile;
        }
    }
}
Make sure the client_max_body_size is set appropriately to use $request_body_file effectively; otherwise, large requests will be rejected.
Be cautious about file system permissions for the temporary directory to allow NGINX to write request bodies. If not set correctly, it could lead to errors or inaccessible files.
Not all modules will handle the temporary file correctly; double-check compatibility when combining with various NGINX modules.
Description
The $request_length variable in NGINX holds the full length of the client request in bytes, including the request line, the headers, and the request body (if any). This variable is particularly useful for monitoring the size of incoming requests and for analyzing the bandwidth consumed on the request side. NGINX sets the $request_length value as it reads and parses the incoming request, and it is generally used in log formats in both server and location contexts. Typical values vary greatly depending on the nature of the application: a simple GET request with no body still amounts to a few hundred bytes of request line and headers, while complex POST requests containing forms or file uploads can reach many kilobytes or more. This variable can help keep track of the bandwidth consumed by client requests; note, however, that limits on request size are enforced with directives such as client_max_body_size and large_client_header_buffers rather than with this variable.
Config Example
http {
    log_format sizes '$remote_addr "$request" length=$request_length';
    server {
        listen 80;
        location /upload {
            # Oversized request bodies are rejected automatically with a 413 response
            client_max_body_size 1M;
            access_log /var/log/nginx/upload.log sizes;
        }
    }
}
$request_length covers the whole request (request line, headers, and body), so even bodyless GET requests report a non-zero value.
The 'if' directive cannot compare numbers, so $request_length cannot be used directly to reject large requests; rely on client_max_body_size (for bodies) and large_client_header_buffers (for headers) instead.
Description
The $request_time variable provides a measurement of the time taken to handle a client request in NGINX. It records, in seconds with millisecond resolution, the time elapsed from the moment the first bytes of the request were read from the client until the last bytes of the response were sent, which is why it is normally evaluated at log-writing time, after the response has been delivered. This allows you to assess performance and troubleshoot latency issues in your application. It's important to note that the time recorded can be influenced by several factors including network latency, server processing time, and client behavior (such as connection speed). As a result, typical values for $request_time vary widely depending on the application and server load; they can range from a few milliseconds to several seconds, especially under high traffic conditions or resource-intensive operations.
Config Example
log_format custom '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" request_time: $request_time';
access_log /var/log/nginx/access.log custom;
Always remember that $request_time does not include the time spent waiting for a connection if persistent connections are utilized.
If your NGINX configuration has multiple requests per connection (like in keep-alive), $request_time will only reflect the time for the last response, not the total for the connection. Use $upstream_response_time if you need backend response times instead.
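A sketch of a timing-oriented log format that pairs $request_time with $upstream_response_time, as suggested above (the format name and log path are illustrative):
http {
    log_format timing '$remote_addr "$request" $status '
                      'request_time=$request_time upstream_time=$upstream_response_time';
    access_log /var/log/nginx/timing.log timing;
}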
Description
The $request_id variable provides a unique identifier for each request handled by the NGINX server. This identifier is useful for request tracing and logging, particularly in distributed systems or microservices architectures, where correlating logs across multiple services aids debugging and monitoring. NGINX generates the value itself from 16 random bytes, so $request_id is a 32-character hexadecimal string; it is not taken from any client-supplied header such as X-Request-ID, and if you want to honor an ID sent by the client you must handle that explicitly (for example with a map). The identifier stays constant for the lifetime of the request, so it remains consistent through the processing steps NGINX applies (authentication, access rules, and so on). The variable is available in NGINX 1.11.0 and later. Where external systems or services are involved (such as API gateways or load balancers), passing the request ID along in a request header ensures consistent tracking across the entire request lifecycle. Because the value is long and random, collisions are extremely unlikely, making it well suited for identifying and correlating log entries pertaining to the same request across different system components or layers.
Config Example
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$request_id"';
    server {
        access_log /var/log/nginx/access.log main;
        location / {
            proxy_pass http://backend;
            proxy_set_header X-Request-ID $request_id;
        }
    }
}
$request_id requires NGINX 1.11.0 or later; older binaries do not provide this variable.
The variable will not be available until the request processing phase where it is assigned, so using it in early phases (like server block) may yield unexpected results.
Not all upstream services may support or be able to log this request ID unless explicitly configured.
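In addition to forwarding the ID upstream, it can be echoed back to the client so that a support ticket can cite it; a minimal sketch (the header name is chosen for illustration):
location / {
    proxy_pass http://backend;
    # "always" ensures the header is also added to error responses
    add_header X-Request-ID $request_id always;
}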
Description
The $status variable in NGINX represents the status code of the response generated by the server for the current request. It is set automatically by NGINX during the processing of the request, reflecting the outcome of the request handling. Common HTTP status codes include 200 for successful responses, 404 for not found errors, and 500 for server errors, among others. This variable becomes accessible once NGINX handles the request and generates a corresponding response. You can utilize this variable for various purposes, such as logging the status of requests or implementing conditional actions based on the outcome of the requests. For example, if a request results in a 404 status code, you might want to trigger specific error handling behavior or logging to track how often users encounter page not found errors.
Config Example
log_format custom '$remote_addr - $remote_user [$time_local] "${request}" $status ${body_bytes_sent} "$http_referer" "$http_user_agent"';
access_log /var/log/nginx/access.log custom;
Ensure that the $status variable is accessed after the request has been processed. Accessing it too early in the configuration may yield unexpected results.
When using the $status variable in conditional expressions, be aware of NGINX's evaluation order to avoid incorrect assumptions based on its value.
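A common use of $status is conditional logging, for example recording only error responses; a sketch using a map together with the access_log if= parameter (the log path is illustrative):
http {
    # Skip 2xx and 3xx responses; log only client and server errors
    map $status $loggable {
        ~^[23]  0;
        default 1;
    }
    access_log /var/log/nginx/errors-only.log combined if=$loggable;
}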
Description
The $sent_http_content_type variable is a built-in NGINX variable that captures the value of the Content-Type HTTP header sent in the server's response. This variable is often used after a response has been generated, allowing you to access the Content-Type that was determined or set within the server's configuration or during request processing. When a request is processed, NGINX determines the appropriate Content-Type based on various factors such as the file extension of requested resources, content negotiation, or explicit settings in the configuration files. After the response is created and just before it is sent to the client, the $sent_http_content_type variable can be used to evaluate or log the outgoing Content-Type. It is not defined until the response header has been generated, and hence it might not be available for every request if the response is not valid or has not been set. Typical values for this variable could include standard MIME types such as "text/html", "application/json", "image/png", etc., depending on the content that NGINX serves. If no response is sent, or if the Content-Type is not specified, this variable would be empty. Therefore, it is recommended to check if it's set before using it in logging or decision-making.
Config Example
location / {
add_header Custom-Header "$sent_http_content_type";
proxy_pass http://backend;
}
The variable is not set until after response headers are created, so it cannot be used in pre-processing stages of a request.
If the response headers do not include a Content-Type, this variable will be empty. Ensure that your application or backend service sets it properly.
Description
When NGINX processes a request, it gathers various response headers that will be sent back to the client. The $sent_http_content_length variable specifically captures the Content-Length header that indicates the size of the response body in bytes. This variable is set during the request processing phase and will typically contain a numerical value representing the length of the response that is generated by the server, or it may be empty if the response length is not known or if chunked transfer encoding is used instead of a concrete content length. The Content-Length header is important for HTTP/1.1 communications, as it allows the client to know exactly how many bytes to expect in the response body. In scenarios where dynamic content is generated, NGINX might not determine the content length until the body is completely processed. Therefore, using this variable can help in scenarios where specific handling of response length is required, such as conditional logging or response modification based on content size.
Config Example
server {
listen 80;
location / {
add_header X-Content-Length $sent_http_content_length;
proxy_pass http://backend;
}
}
If the response uses chunked encoding, $sent_http_content_length will be empty.
Ensure that the variable is accessed after the response headers have been set to avoid unexpected results.
Description
The $sent_http_location variable reflects the value of the 'Location' header in the response that NGINX sends to the client, which is typically present for 3xx redirect responses. The header may be produced by directives such as return or rewrite with a redirect, by error_page pointing at an external URI, or it may come from an upstream response (possibly adjusted by proxy_redirect). The variable holds the URL that the client will be sent to, which is useful for logging where requests are being redirected or for applying additional response logic. It only has a value when a Location header is actually part of the response. Typical values include absolute URLs (e.g., "http://example.com/target") or relative URIs (e.g., "/target") depending on how redirection is configured in the NGINX server.
Config Example
http {
    log_format redirects '$remote_addr "$request" $status -> $sent_http_location';
    server {
        listen 80;
        access_log /var/log/nginx/redirects.log redirects;
        location /example {
            return 302 /new-location;
        }
    }
}
$sent_http_location only has a value when the response actually carries a Location header (typically 3xx redirects); for other responses it is empty.
Ensure that the Location header is actually set; otherwise, this variable will be empty.
The variable is only populated once the response headers (including Location) have been produced, so it is mainly useful at log-writing time.
Description
The $sent_http_last_modified variable is automatically set by NGINX during the processing of a request that involves fetching a resource. When NGINX responds to a request and includes a Last-Modified header in the HTTP response, this variable captures the value of that header. The Last-Modified header is used to indicate the date and time at which the resource was last modified, allowing clients to make decisions about caching or re-fetching the resource based on its freshness. This variable is useful in scenarios where caching behavior needs to be finely tuned or when implementing conditional GET requests using the If-Modified-Since header on the client side. By examining the value of $sent_http_last_modified, server-side logic can be implemented to alter headers or the content of responses based on the freshness of served resources. Typical values for this variable align with common HTTP date formats, such as 'Wed, 21 Oct 2015 07:28:00 GMT'. Note that if the Last-Modified header is not sent, the variable will be empty.
Config Example
location /example {
    # Echo the outgoing Last-Modified value into a custom header
    add_header X-Last-Modified $sent_http_last_modified;
}
If the Last-Modified header is not set in the response, the variable will be empty, leading to unexpected behavior in configurations dependent on it.
Ensure your application logic checks if the variable is set before using it to avoid errors in processing.
Description
$sent_http_connection is a variable in NGINX that captures the value of the 'Connection' response header that is sent to the client. This header indicates to the client whether the connection to the server should be kept open after the current request is processed. Typical values for this variable might include 'keep-alive' or 'close', depending on the server's configuration and the nature of the connection requested by the client. This variable is primarily useful in scenarios where the server needs to dynamically adjust connection behavior based on the client's request or patterns of usage. For example, if certain clients request persistent connections, the server can respond accordingly. The value of $sent_http_connection is set during the processing of a request, specifically when the response headers are being constructed. The value assigned to this variable is determined by the server's configuration directives within the relevant context. An important point to note is that if the 'Connection' header is not specified in the configuration or if it is not explicitly set during processing, this variable will not contain any value, and thus should be used cautiously in conditional expressions or logging to prevent unintended display of empty headers.
Config Example
server {
listen 80;
location / {
add_header X-Connection $sent_http_connection;
}
}
$sent_http_connection returns an empty string if the 'Connection' header is not set in the response; be cautious when using it in logging or conditionals.
Ensure that the server's connection handling configurations (e.g., persistent connections) are appropriately defined to set this header correctly.
Description
The $sent_http_keep_alive variable is set by the NGINX core when the response is being prepared for the client. It reflects the 'Keep-Alive' header sent in the HTTP response, which informs the client whether the server wants to keep the connection open for multiple HTTP requests or close it after the current transaction. This variable can take values such as 'timeout=5', which indicates how long the connection should be kept alive before timing out, or 'timeout=0' indicating that the connection should be closed immediately after the response is sent. When configured with keep-alive settings, NGINX manages persistent connections efficiently and uses this variable to communicate the server's preferences back to the client. If keep-alive is disabled or not applicable, this variable may be left blank. The state of $sent_http_keep_alive can also be influenced by various directives such as 'keepalive_timeout', which determines how long the server will allow a connection to remain idle before it is closed, making this variable critical for performance in high-traffic environments where maintaining open connections can reduce latency for subsequent requests from the same client.
Config Example
http {
keepalive_timeout 65;
server {
listen 80;
location / {
add_header X-Keep-Alive "$sent_http_keep_alive";
}
}
}
The variable will be empty if keep-alive is not set or disabled; make sure to enable keep-alive in your configuration.
If the Keep-Alive header is not included in the response for specific requests, the variable will not have any value.
Description
The $sent_http_transfer_encoding variable in NGINX holds the value of the 'Transfer-Encoding' header as sent in the HTTP response to the client. This variable is particularly useful during HTTP responses that use chunked transfer encoding. For a successful assignment of this variable, the NGINX configuration must specify a transfer encoding method, which typically occurs when a dynamic content generation mechanism (like PHP or an upstream server) is in use. When NGINX itself generates a response, the value of this variable may default to an empty string unless specifically set. This variable can take several values, most commonly `chunked`, which denotes that the response will be sent in a series of chunks instead of as a single complete message. This is especially useful for large responses that can be streamed in smaller parts or when the total size of the response is not known beforehand. If no transfer encoding is set by the server, the variable will not contain any value, reflecting that no transfer encoding is in use.
Config Example
location /api {
proxy_pass http://backend;
proxy_set_header Accept-Encoding '';
}
location /stream {
proxy_pass http://stream_backend;
# NGINX manages the Transfer-Encoding header itself; expose the chosen value via a custom header instead
add_header X-Transfer-Encoding $sent_http_transfer_encoding;
}
Ensure that the backend service supports and correctly implements chunked transfer encoding to avoid unexpected behavior.
Using $sent_http_transfer_encoding outside the appropriate contexts may yield unexpected or empty results. It is typically used in conjunction with proxy settings.
Description
The $sent_http_cache_control variable retrieves the value of the 'Cache-Control' HTTP response header that NGINX sends to the client. It is set when NGINX processes a response, and it can be influenced by various configurations, such as `expires`, `add_header`, or `proxy_cache`. This variable is particularly useful when debugging or customizing the response headers returned by your server, allowing you to capture and log what caching directives are being sent out. Typically, the value of this variable could include directives like 'no-cache', 'private', 'max-age=3600', 'public', or could be empty if the header is not set. It can be used in combination with other variables to dynamically adjust response headers based on various server conditions or configurations, enhancing control over how responses are cached by clients and intermediaries in the HTTP chain. The variable is evaluated only after the response headers are sent out, meaning it reflects the final value that was included in the response. If there are multiple instances or manipulations of the Cache-Control header, this variable will hold the last value that was set before the response was completed.
Config Example
http {
    log_format cache_log '$remote_addr - $remote_user [$time_local] "$request" $status $sent_http_cache_control';
    server {
        location /example {
            proxy_pass http://backend;
            add_header Cache-Control "private, max-age=3600";
            access_log /var/log/nginx/access.log cache_log;
        }
    }
}
The variable will be empty if the 'Cache-Control' header is not set in the response.
Ensure that you set the header in the right context; if set inside a location block, it will only apply to requests that fall within that context.
Changing response headers after they are sent will not affect this variable.
Description
The `$sent_http_link` variable is utilized in NGINX when the Link header is set in the response. This variable is particularly relevant for APIs and web services, where providing information on related resources can enhance client-side processing and navigation. It gets populated based on the `Link` header defined in the response context, typically constructed using multiple URIs that might be relevant to the current resource. When an HTTP response is generated, if the `Link` header has been set via configuration or through dynamic settings in Lua, `$sent_http_link` will contain this value for subsequent processing. It can then be utilized in rewrites, logging, or further response manipulations. Typical values of this variable would be a string representing one or more URIs, often with relations specified (like `rel="next"` or `rel="prev"`). This variable is especially useful when implementing pagination in APIs, allowing clients to deduce the next or previous pages easily. By setting the Link header appropriately, NGINX can pass relevant information through the response without needing substantial application logic to handle these references.
Config Example
http {
    server {
        location /api {
            # The URI in the Link header is illustrative
            add_header Link '<https://example.com/api?page=2>; rel="next"';
        }
    }
}
If the Link header is not set, $sent_http_link will be empty.
Ensure that headers are correctly configured to be sent; this variable only holds a value if the corresponding header is present.
Description
The `$limit_rate` variable in NGINX is used to limit the transfer speed for responses sent to clients. By adjusting this variable, administrators can cap the bandwidth consumed by a specific connection, which is particularly useful when managing server resources becomes crucial, such as during high traffic periods. The variable can be set to a specific byte value or derive its value from other variables, allowing flexible configurations depending on the needs of the application. It can be set dynamically based on various conditions within the configuration, such as IP addresses or specific request characteristics. By default, `$limit_rate` is zero, meaning there is no limit on the transfer speed. If the value is greater than zero, NGINX applies the limitation per request and adjusts the speed accordingly, which might significantly impact user experience if the rate is set too low. Values are given in bytes per second; for example, setting it to `1048576` would restrict the rate to 1 MB/s.
Config Example
http {
server {
location / {
# Limit transfer rate
limit_rate 500k;
}
}
}
If set to zero, it bypasses any rate limiting and allows full bandwidth usage.
Using `$limit_rate` in conjunction with other rate limiting directives can lead to confusion if not configured correctly.
Ensure to adjust the limit rate according to server capacity and expected traffic patterns to avoid an unsatisfactory user experience.
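Because $limit_rate is a writable variable, the limit can also be applied conditionally with set, as mentioned in the description; a sketch where the /downloads/ path, the .iso pattern, and the 500k value are illustrative:
location /downloads/ {
    # Throttle only requests for large archive files; everything else keeps full speed
    if ($request_uri ~* "\.iso$") {
        set $limit_rate 500k;
    }
}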
Description
The $connection variable is a built-in variable in NGINX that gives you a unique connection identifier for each request processed by the server. It is particularly useful for debugging and logging, allowing administrators to track connection details across multiple requests. The value is a serial number that NGINX assigns to each accepted connection, so it is an integer that differs for every connection. Every time a new request is handled, this variable identifies which connection is being used, which helps in spotting concurrent requests from the same client or differentiating between users in a log file. During request processing, when a client connects to the NGINX server, a connection object is created that holds various connection-related information such as the client's address and connection state; the $connection variable directly correlates to this connection object. As each client may make multiple requests over a single connection, this variable is invaluable in contexts where connection-based logic is implemented, such as rate limiting, access control, and customized logging. Typical values are integers specific to the current connections, and they also apply to HTTP/2 multiplexed streams, where the same connection can serve multiple requests simultaneously. Note that the same $connection value is shared among requests on a connection until it is closed, and new connections receive new serial numbers.
Config Example
http {
    log_format conn_log '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$connection"';
    server {
        listen 80;
        location / {
            access_log /var/log/nginx/access.log conn_log;
        }
    }
}
Using $connection in if statements can lead to unexpected behaviors, as the variable may not evaluate correctly in some contexts.
When using $connection for logging, ensure the accessed logs are appropriately formatted to handle multi-valued connections like HTTP/2.
Description
The $connection_requests variable in NGINX tracks the number of requests that have been served over the life of a single connection. It is 1 for the first request on a newly established connection and is incremented for each further request made over that connection, typically via keep-alive. This variable is particularly useful for monitoring and understanding request patterns, optimizing connection reuse, and debugging performance issues related to connection handling. The lifecycle of the $connection_requests value begins when a new connection is established; for each incoming request on that connection, the counter increases until the connection is closed. In scenarios where connections are kept alive, this variable allows web administrators to analyze how effectively keep-alive connections handle multiple requests compared to opening a new connection for each request. Given that it reflects the number of requests, it can be instrumental in configurations aimed at performance tuning or detailed request logging, where the insights gained from monitoring it can lead to improvements in resource allocation and response times. Logging this variable provides valuable insight into client behavior and request patterns, making it easier to identify potential bottlenecks or improve efficiency.
Config Example
log_format custom_format '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$connection_requests"';
access_log /var/log/nginx/access.log custom_format;
The variable is only applicable when keep-alive is utilized; otherwise, it will always default to 1 for single requests.
Be cautious of logging verbosity; logging the $connection_requests variable in high-traffic scenarios may affect performance.
Ensure server blocks and locations are correctly configured to make use of keep-alive; otherwise the variable increments will not reflect multiple requests.
Description
The $connection_time variable in NGINX reports how long the current client connection has been open, in seconds with millisecond resolution. The value is measured from the moment the worker process accepted the connection, so when it is evaluated (for example at log-writing time) it covers everything that has happened on the connection so far, including any earlier keep-alive requests. It is available in NGINX 1.19.10 and later and is primarily suitable for logging or metrics collection associated with connection lifetimes. Common values range from fractions of a second for short-lived connections to many seconds or minutes for long-lived keep-alive or streaming connections. Developers and administrators can use this variable to gain insight into connection reuse and client behavior, especially when monitoring the server under various network conditions or loads; analyzing these values can help identify unusually long-lived connections or potential resource issues.
Config Example
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for" '
                'Connection time: $connection_time';
access_log /var/log/nginx/access.log main;
The value reflects how long the connection has been open at the moment it is evaluated (typically at log-writing time); it is not a measure of how long the TCP handshake took or of processing time for an individual request.
In scenarios where proxying occurs, ensure that connection time metrics are interpreted correctly, as they may not represent the client-to-server connection time accurately.
Description
The variable $nginx_version is set by NGINX at startup and it reflects the version of the NGINX server that is currently running. This variable is derived directly from the version information found in the NGINX source code during compilation, which allows users to check the exact version of the server being used. It is commonly indicated in a format such as '1.21.0' or '1.19.10'. The variable can be accessed in various contexts within NGINX configuration files, such as in log formats or response headers. This makes it particularly useful for debugging and maintenance purposes, as administrators can quickly ascertain the version of NGINX without having to execute commands on the server. Additionally, knowing the version can be critical for ensuring compatibility with certain modules or configurations, particularly in environments where multiple NGINX instances with varying versions might exist. Overall, the $nginx_version variable serves primarily as an informational tool for both administrators and developers, providing quick access to versioning details essential for maintaining and upgrading NGINX setups.
Config Example
http {
log_format my_log '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" NGINX/$nginx_version';
access_log /var/log/nginx/access.log my_log;
}
Ensure you are viewing the variable in the correct context, as it may not be available in all contexts.
Using it in log formats or response headers may not output the expected information if misconfigured.
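Besides log formats, the version can be exposed on an internal endpoint for quick checks; a small sketch (the /healthz path is an illustrative choice and should normally be access-restricted):
location /healthz {
    default_type text/plain;
    add_header X-NGINX-Version $nginx_version;
    return 200 "ok\n";
}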
Description
The $hostname variable contains the host name of the machine on which NGINX is running, as reported by the operating system (the same value shown by the 'hostname' command on Linux systems). It is set when NGINX starts and does not depend on the server_name directive or on the Host header of the incoming request; if you need the name the client asked for, use $host, and if you need the configured name of the matched server block, use $server_name. The variable can be used in contexts such as logging, custom response headers, or configuration templates, which is handy in clustered or load-balanced environments where it helps identify which machine actually served a request. Because the value comes from the operating system, it is the same for every server block on that machine, regardless of how many virtual hosts are defined.
Config Example
server {
listen 80;
server_name example.com;
location / {
add_header X-Host $hostname;
}
}
$hostname always reflects the machine's hostname, not the server_name of the matched server block, which can lead to unexpected values in multi-host configurations; use $host or $server_name if you need the requested or configured name.
Modifications to the system's hostname do not translate to changes in the NGINX configuration until NGINX is restarted, which may lead to inconsistency during runtime.
Description
The $pid variable in NGINX provides the process identifier (PID) of the worker process that is currently processing a client request. This variable is particularly useful for debugging and logging purposes, allowing administrators to identify which worker process is handling specific requests. The PID is initialized when the NGINX worker processes start and remains constant for the lifespan of the worker process. The value of $pid is numeric and reflects the system-assigned PID of the respective worker, which can be useful in scenarios where logging detailed information about process handling is necessary. Since NGINX can spawn multiple worker processes, each may be handling different requests concurrently. The $pid variable helps in correlating logs with specific PIDs, thus facilitating better diagnostics and monitoring. Typically, $pid values start from a defined number and increment for each new process spawned by the operating system. Users can utilize this variable in customized logging formats to provide insights on which processes are serving traffic, or when diagnosing performance bottlenecks or issues with individual worker processes.
Config Example
log_format custom_log '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" Ver: $nginx_version Pid: $pid';
access_log /var/log/nginx/access.log custom_log;
The $pid variable will only show relevant information if used within a supported context, such as inside a log or rewrite block.
It will reflect the PID of the current worker process, which may not correspond to the master process, potentially leading to confusion in environments with multiple processes.
Description
The $msec variable is a built-in variable in NGINX that provides high-resolution time information. It contains the current time in seconds since the Unix epoch (January 1, 1970) with millisecond resolution, expressed as a decimal number such as 1696420496.123. The value is captured at the time it is evaluated, typically when a log entry is written, and is commonly used for logging, debugging, and performance measurement purposes. This precision makes it valuable when you need to correlate events across systems or compute delays between log entries. The variable is often used alongside other time-related variables such as $time_iso8601 for generating detailed logs that require both a formatted date and sub-second precision. Keep in mind that the accuracy of $msec is subject to the underlying operating system and hardware clock resolution.
Config Example
log_format custom_format '$remote_addr - [$time_local] "${request}" $status $body_bytes_sent "$http_referer" "$http_user_agent" $msec';
access_log /var/log/nginx/access.log custom_format;
Ensure the time on your server is synchronized (e.g., using NTP) to avoid discrepancies in logged times.
Be cautious when comparing $msec with non-millisecond precision timestamps; this can lead to confusion during debugging.
Using $msec in conjunction with request processing and timing mechanisms (e.g., measuring request duration) should be done carefully to prevent performance issues. In high-throughput situations, excessive logging could impact performance.
Description
The $time_iso8601 variable outputs the date and time in a format that complies with the ISO 8601 standard. This timestamp is typically formatted as 'YYYY-MM-DDTHH:MM:SS+ZONE', which for example corresponds to '2023-10-04T12:34:56+00:00'. The variable is populated by NGINX during the processing of a request and is available across various contexts within a configuration file. It leverages the server's local time zone settings to ensure that the displayed time aligns with the server's current time settings. It is particularly useful for logging purposes or in the context of HTTP headers, where an unambiguous representation of the date and time can aid in client interpretation, debugging and data analytics. Typically, NGINX sets this variable each time a request is processed, ensuring that it is up-to-date when accessed during the request lifecycle. As part of the timestamp output, it incorporates timezone information, making it suitable for scenarios where timezone variance is a consideration. Due to its standard format, $time_iso8601 is favored in RESTful API responses and web services that require precise timestamps, as it allows clients to interpret the received data accurately across different timezones. The variable can be formatted in the same manner as traditional date command formats in NGINX, maintaining its usability across various installation environments.
Config Example
http {
log_format combined '$remote_addr - $remote_user [$time_iso8601] "$request" '
'$status $body_bytes_sent "$http_referer" "$http_user_agent"';
access_log /var/log/nginx/access.log combined;
}
Ensure the server's timezone is set correctly, as this affects the output of $time_iso8601.
$time_iso8601 does not change once assigned in a specific request; it remains the same throughout the request handling process.
Description
The $time_local variable in NGINX provides the local server time when a request is received. It is formatted as 'day/month/year:hour:minute:second zone', following the Common Log Format, which is universally recognized for log entries. This variable is particularly useful for logging purposes, allowing administrators to track when requests are made based on the timezone settings of the server. The value of $time_local is set during the request processing phase, specifically when a request is received in the application. As such, it ensures that the time stamp corresponds accurately to the moment a request is processed. By default, this variable is formatted according to the server's local timezone, which can be specified using directives in the NGINX configuration. Typical values of this variable might look like '12/Apr/2023:18:30:12 +0300', indicating the exact date and time along with the timezone offset. Given its importance in logging, it is often used in custom log formats to capture the requested timestamps precisely, thus allowing for easier troubleshooting and monitoring of server activity.
Config Example
log_format my_log_format '$remote_addr - $remote_user [$time_local] "$request"';
access_log /var/log/nginx/access.log my_log_format;
Ensure the server's timezone is correctly set; otherwise, $time_local may not reflect the expected local time.
When using in custom log formats, make sure the log format string is correctly quoted to avoid syntax errors.
Description
The $tcpinfo_rtt variable is part of NGINX's TCP connection module and is utilized to provide metrics for the TCP connection performance. It returns the round-trip time (RTT) measurements for a TCP connection, reflecting the time it takes for a data packet to go from the sender to the receiver and back again. This variable is especially useful for diagnosing network latency and assessing the responsiveness of a server to incoming requests. The value of $tcpinfo_rtt is available only when the connection is established and may not be immediately set for incoming connections. Typically, it returns an integer value that represents the time in microseconds. For a new connection, if RTT information isn't yet available, it might return zero until the information can be gathered from the TCP stack. Practically, typical values can range widely depending on the network, and developers can use this indicator to fine-tune their applications to better handle latency-related adjustments. This variable can be leveraged in logging directives, where server administrators might want to track connection performance over time. By including this metric in access logs, administrators can analyze trends and identify potential bottlenecks in service delivery.
Config Example
log_format custom_log '$remote_addr - $remote_user [$time_local] "$request" $status $tcpinfo_rtt';
access_log /var/log/nginx/access.log custom_log;
The $tcpinfo_rtt variable may not be set for the very first requests due to connection establishment delays.
If the connection has not recently sent or received data, the RTT value might be inaccurate or return zero. Adjust logging for intervals that reflect actual traffic patterns.
Description
The $tcpinfo_rttvar variable in NGINX provides diagnostics related to the latency stability of TCP connections by indicating the round-trip time (RTT) variation. This metric is derived from the TCP stack information, specifically measuring the variance of the RTT observed for packets sent over a TCP connection. When a client sends a request, NGINX captures the TCP information for that session, which includes the RTT values as reported by the underlying operating system's TCP stack. Typically, $tcpinfo_rttvar becomes available only after the TCP handshake is completed, and it is particularly useful in performance tuning and monitoring scenarios. Typical values can vary significantly based on the network conditions and the specific characteristics of the traffic; lower values indicate more stable and consistent network latency, while higher values reflect instability. Therefore, monitoring this variable can help in diagnosing network-related issues affecting application performance and user experience. This variable is mainly set whenever NGINX processes a request over a TCP connection in scenarios where TCP metrics are applicable. You can leverage its value in various contexts, such as logging or conditional configurations, to apply tailored optimizations depending on the RTT variance seen during a request.
Config Example
log_format my_logging_format '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $tcpinfo_rttvar';
access_log /var/log/nginx/access.log my_logging_format;
$tcpinfo_rttvar is only available for TCP connections and will not work for UDP.
Make sure to enable TCP information retrieval in your OS settings if you are not seeing expected values.
Values are only available post TCP handshake; accessing this variable too early in the request processing may yield empty results.
Description
The $tcpinfo_snd_cwnd variable in NGINX provides the current TCP send congestion window size for a connection, which is critical for understanding the flow and responsiveness of data transmission over the network. This variable is relevant in scenarios where NGINX is used as a reverse proxy or load balancer, as it allows for real-time monitoring of TCP metrics, which can be crucial for performance tuning and optimization. The congestion window size indicates the amount of data that can be sent before waiting for an acknowledgment from the receiver, influencing the overall network throughput. The value of $tcpinfo_snd_cwnd is set during an active TCP connection (i.e., when TCP is the transport layer) and reflects changes in the congestion window size as data is being transmitted. The typical values range widely depending on network conditions, load, and configuration; they can be very small equivalent to a few TCP packets, or quite large if the network path is reliable and has sufficient bandwidth. Monitoring this variable can help administrators identify potential performance bottlenecks and adjust their server configurations accordingly.
Config Example
location /status {
    default_type text/plain;
    return 200 "Current congestion window size: $tcpinfo_snd_cwnd\n";
}
This variable is only available in TCP connections and will not be set in UDP-based configurations.
Make sure the NGINX version used supports TCPINFO; otherwise, this variable may not be available or produce accurate results.
Description
The $tcpinfo_rcv_space variable in NGINX provides information about the size of the TCP receive buffer, which is critical for managing incoming connections over TCP. This variable reflects the current value of the receive space allocated by the TCP stack for handling incoming data packets. It is set when the TCP layer establishes a connection and is governed by system-level parameters such as the kernel's socket receive buffer limits (for example net.core.rmem_max and net.ipv4.tcp_rmem on Linux), which cap the maximum receive buffer size. When a connection is established, the TCP kernel allocates a portion of memory for incoming data; as data is received and consumed, the advertised receive space changes, affecting the flow of incoming data. This variable can be particularly useful for identifying network performance issues or monitoring application behavior under load, as it can indicate whether the system is making good use of the available buffer space. Typical values for $tcpinfo_rcv_space vary widely depending on system configuration and network conditions, ranging from a few kilobytes to several megabytes based on system settings and load. Proper tuning of these values can lead to optimized performance and reduced latency in data transmission.
Config Example
server {
listen 80;
location /status {
add_header X-TCP-Receive-Space "$tcpinfo_rcv_space";
# Additional status logic...
}
}
Ensure that the TCP stack is configured to allow large receive buffer sizes; otherwise, the variable may return lower than expected values.
Not all operating systems may expose the TCP receive buffer size through the $tcpinfo_rcv_space variable, leading to inconsistencies in behavior across platforms.
This variable is only applicable in contexts where TCP connections are managed, such as `http` and `stream`. For other contexts, it may not return meaningful data.
Description
In NGINX, "$http_" is a dynamic prefix for accessing arbitrary HTTP request headers sent by the client. The variable name takes the form $http_<name>, where <name> is the header name converted to lowercase with dashes replaced by underscores; for example, the User-Agent header is available as $http_user_agent and X-Forwarded-For as $http_x_forwarded_for. If the named header is not present in the request, the variable evaluates to an empty string. These variables can be used anywhere ordinary variables are allowed, such as in log formats, map blocks, conditional logic, or when passing headers on to an upstream server.
Config Example
server {
listen 80;
location /example {
if ($http_user_agent ~* "Googlebot") {
return 403;
}
add_header X-Your-Header "$http_x_your_header";
}
}
Remember to convert the header name to lowercase and replace dashes with underscores when forming the variable name, as in `$http_x_forwarded_for`.
Ensure that the headers are present in the client request; accessing a non-existent header will simply yield an empty string.
Use the correct context for this variable to ensure it behaves as expected. It might not work as intended if invoked in the wrong context.
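As a sketch of how an arbitrary request header can drive configuration without an if block, the map directive can translate a header value into a custom variable; the header, variable, and upstream names below are placeholders:
map $http_x_forwarded_proto $original_scheme {
default $scheme;
https https;
}
server {
listen 80;
location / {
# Pass the detected scheme on to the upstream application
proxy_set_header X-Original-Scheme $original_scheme;
proxy_pass http://backend;
}
}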
Description
In NGINX, the $sent_http_ prefix variable is automatically generated when certain response headers are set in a server or location block. For each HTTP response header that is sent to the client, there exists a corresponding variable prefixed with $sent_http_. For example, if you send an HTTP header 'X-My-Header' from your server configuration, the corresponding variable will be $sent_http_x_my_header. These variables allow you to access the values of HTTP headers that have been sent in the response directly within NGINX configuration or log formats. This variable is particularly useful in scenarios where you need to perform conditional logging or when customizing output based on the values of specific headers. The $sent_http_ variables are only populated if the corresponding headers are actually set in the response. If a header has not been set, the corresponding variable will be empty. This mechanism helps with tracking and managing various header information that is crucial in scenarios such as debugging or verification of server behavior, especially under testing.
Config Example
location /example {
add_header X-My-Header "MyValue";
add_header X-Another-Header "AnotherValue";
# Log sent headers
access_log /var/log/nginx/example.log custom_format;
}
log_format custom_format 'Sent HTTP Headers: $sent_http_x_my_header, $sent_http_x_another_header';Ensure that the corresponding HTTP headers are actually set; otherwise, the variable will be empty.
The variable names must be lowercase, with underscores instead of hyphens, as NGINX transforms header names accordingly (e.g., X-My-Header becomes $sent_http_x_my_header).
Description
The $sent_trailer_ prefix variable is used with HTTP responses that include trailers, which are additional fields sent after the message body (over chunked transfer encoding in HTTP/1.1, or natively in HTTP/2). This variable allows NGINX to access specific trailer fields that have been set in the response. It is formed by appending the trailer name, lowercased and with dashes replaced by underscores, to the prefix: `$sent_trailer_my_field` returns the value of the `My-Field` trailer if it was included in the response. Typically, this variable is set during the HTTP response phase when the trailers have been determined and are ready to be sent. The trailers themselves can be populated by the backend application or added with the add_trailer directive, and the values of the $sent_trailer_ variables reflect whatever is set before NGINX sends the final response. If the specified trailer exists, the variable returns its value; otherwise, it returns an empty string. Trailers are not widely used in front-end server responses, and their support largely depends on the client and server implementations. Use cases include providing checksum data or other metadata that should come after the payload in streaming scenarios, making trailers a useful feature for specific applications.
Config Example
location /example {
add_trailer My-Trailer "myvalue";
# The value of the trailer is then available as $sent_trailer_my_trailer,
# for example in a log_format definition
}If the specified trailer does not exist, accessing the variable will return an empty string, which can lead to confusion when debugging.
Ensure that the backend actually sends trailer headers; otherwise, this variable will have no value.
Remember that not all clients support HTTP trailers, which may limit your use cases.
Description
In NGINX, variables prefixed with $cookie_ allow for easy access to cookie values set by clients in their HTTP requests. When a client sends a request to an NGINX server, it may include various cookies that store user-specific information, session details, or preferences. By referencing $cookie_ followed by the cookie's name (for example, $cookie_user_id for a cookie named user_id), the configuration can read that cookie's value directly. If the request does not carry the named cookie, the variable evaluates to an empty string. These variables are commonly used for conditional handling, routing, logging, or passing cookie values on to upstream servers.
Config Example
server {
listen 80;
server_name example.com;
location / {
if ($cookie_user_id) {
add_header X-User-ID $cookie_user_id;
}
}
}Ensure the cookie name does not contain special characters or spaces, as this can lead to unexpected behavior.
Be aware that missing cookies will result in an empty string, which could impact logic in conditional statements.
If caching is used, ensure that cookies are appropriately managed to avoid stale responses based on user sessions.
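A common pattern is to route traffic based on a cookie through map rather than if; the cookie name, upstream names, and addresses below are purely illustrative placeholders:
map $cookie_ab_group $ab_backend {
default backend_a;
"b" backend_b;
}
upstream backend_a { server 10.0.0.1:8080; }
upstream backend_b { server 10.0.0.2:8080; }
server {
listen 80;
location / {
# Requests with cookie ab_group=b go to backend_b, everything else to backend_a
proxy_pass http://$ab_backend;
}
}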
Description
In NGINX, variables prefixed with $arg_ are used to extract query parameters from the request URI. When a client makes an HTTP request, it may include a query string consisting of key-value pairs appended to the URL. For example, in the request URL `http://example.com/path?arg1=value1&arg2=value2`, NGINX allows access to these parameters through corresponding variables like `$arg_arg1` and `$arg_arg2`. The value returned by a specific `$arg_` variable corresponds to the value of the associated query parameter; if the parameter does not exist, the variable returns an empty string. These variables can be particularly useful in scenarios where request handling needs to be influenced by client parameters, such as redirects, access control, or conditional responses. They are evaluated each time a request is processed, making them versatile for dynamic content generation based on client input. Notably, because they are context-sensitive, their proper usage requires an understanding of NGINX configuration, especially regarding the placement of directives that depend on these variables.
Config Example
location /example {
if ($arg_user_id) {
return 200 "User ID: $arg_user_id";
}
return 400 "No User ID provided";
}Ensure that the query parameter is properly encoded in the request URL; otherwise, it may not be parsed correctly.
Using if statements with these variables can lead to unexpected behavior if not carefully structured; it’s best to use them within the context of a clean location block.
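Because of the pitfalls of if inside location blocks, a map-based sketch can often achieve the same effect; the parameter name, header, and messages here are placeholders:
map $arg_user_id $user_id_status {
"" "missing";
default "present";
}
server {
listen 80;
location /example {
add_header X-User-Id-Status $user_id_status;
return 200 "User ID: $arg_user_id\n";
}
}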
Description
In NGINX, the variable $proxy_host is integral in reverse proxy configurations that direct client requests to backend servers. When a request is proxied, this variable is set to the host part of the URL specified in the proxy_pass directive. It reflects the upstream server's hostname as determined by the proxy settings in the configuration file. The $proxy_host variable is available during request processing, particularly in contexts where a request is being proxied, such as inside a location block that uses the proxy_pass directive. If the upstream server is defined with a hostname or an IP address, $proxy_host will return that value, allowing for dynamic handling of requests based on the destination server's identity. Typical values for $proxy_host might be an actual hostname like "api.example.com" or an IP address if specified directly in the upstream definition. When using multiple upstream servers, this variable becomes crucial for logging and debugging purposes. It helps track which server a request was sent to, especially when dealing with load-balanced configurations. It's also useful in constructing custom headers or modifying requests based on the resolved upstream hostname.
Config Example
location /api {
proxy_pass http://backend;
proxy_set_header Host $proxy_host;
}$proxy_host is only set for requests that are actually handled by a proxy_pass directive; outside of a proxied context it will be empty.
Ensure the proxy_pass directive is resolved to a valid upstream to avoid unexpected behaviors.
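In load-balanced setups, $proxy_host is often logged next to $upstream_addr, which records the concrete peer that actually served the request. A minimal sketch, assuming the upstream name, log path, and format name are placeholders:
log_format upstream_log '$remote_addr "$request" proxy_host=$proxy_host upstream=$upstream_addr status=$upstream_status';
server {
listen 80;
location /api {
proxy_pass http://backend;
access_log /var/log/nginx/upstream.log upstream_log;
}
}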
Description
The $proxy_port variable in NGINX is used to obtain the port number of the proxied server as specified in a proxy_pass directive. This variable is particularly useful when you need to log or conditionally handle requests based on the destination server's port. It is set whenever a request is proxied using the `proxy_pass` directive. If a hostname is given in `proxy_pass`, NGINX resolves this to an address and port, and the $proxy_port variable retrieves that port. Commonly, this port will be 80 for HTTP requests and 443 for HTTPS, but it can vary based on your upstream server's configuration. In scenarios where a request is not proxied, $proxy_port is empty; if the port is not given explicitly in the proxy_pass URL, the variable holds the protocol's default port (80 for http, 443 for https). It's essential to ensure that your proxy_pass configuration correctly specifies a protocol and, where needed, a port to utilize this variable effectively. You can also combine it with other variables such as $proxy_host to create dynamic behaviors in your configurations.
Config Example
location /example {
proxy_pass http://backend_server:8080;
# Assumes a format defined in the http block, e.g.: log_format proxy_port_log 'Proxy to port: $proxy_port';
access_log /var/log/nginx/proxy_access.log proxy_port_log;
}If the request is not proxied, $proxy_port will be empty; when the proxy_pass URL omits the port, the variable reflects the protocol default (80 for http, 443 for https).
Remember to check conditional handling if you plan to use $proxy_port in if statements, as they can behave unexpectedly.
Description
The $proxy_add_x_forwarded_for variable is utilized in NGINX to construct the X-Forwarded-For header, which is essential in proxy setups to maintain the original client's IP address. It effectively combines the client's IP address with any existing IP addresses already defined in the X-Forwarded-For header, ensuring that a chain of proxy addresses can be preserved when multiple proxies are involved. The variable is set during request processing when the NGINX server is in proxy mode, typically within the configuration of a location block that uses the proxy_pass directive. When NGINX processes a request, if the X-Forwarded-For header is present, this variable will take its existing value and append the client's IP address. If the header is absent, it will only contain the client's IP address. For example, if the client's IP is 192.168.1.5 and a previous proxy has set the X-Forwarded-For header as 10.1.1.1, then $proxy_add_x_forwarded_for will result in "10.1.1.1, 192.168.1.5". This is a crucial mechanism for applications hosted behind multiple layers of proxies, so they can accurately log and trace the original client IPs. It is particularly used in load balancing scenarios to keep the request trace intact.
Config Example
location /api {
proxy_pass http://backend;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}If you do not set the X-Forwarded-For header correctly, your application might not receive the correct client IP address.
Ensure that the proxying behavior of NGINX is correctly configured to utilize this variable effectively.
Beware of potential IP spoofing if incoming headers from the client are not trusted.
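On the receiving side of a proxy chain, the spoofing concern above is usually addressed with the ngx_http_realip_module, which only honors X-Forwarded-For entries coming from explicitly trusted proxies. A sketch, where the address ranges and upstream name are placeholders for your own environment:
server {
listen 80;
# Only trust X-Forwarded-For when the connection comes from these proxies
set_real_ip_from 10.0.0.0/8;
set_real_ip_from 192.168.0.0/16;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
location / {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://backend;
}
}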
Description
The $proxy_add_via variable is dynamically generated by NGINX when an HTTP request is proxied to another server. It constructs a 'Via' header that helps identify the proxying method used. This variable is typically set to the format '1.1 NGINX' when the request is proxied, indicating the version of the protocol used along with the server name. This is particularly useful for debugging and managing caching systems, as it allows client applications and intermediate proxies to trace back the request through the NGINX proxy. This variable is usually set when using the proxy_pass directive in the configuration. For instance, if the `proxy_pass` directive is configured properly for an upstream server, NGINX will automatically populate $proxy_add_via with the appropriate value for all requests forwarded to that server. If multiple proxies are chained, the 'Via' headers will reflect the entire route the request has taken, enabling better control and visibility into shared caches and routing mechanisms. Typical values might include '1.1 NGINX' or indicate additional proxy services if they are being used alongside NGINX.
Config Example
location /api {
proxy_pass http://backend_service;
proxy_set_header Host $host;
add_header Via $proxy_add_via;
}Ensure that the 'add_header' directive includes the appropriate context for the header to show up in responses (e.g. using `always` if necessary).
The 'Via' header can expose sensitive server information, ensure that this is considered when configuring public-facing proxies.
When running multiple NGINX instances, each may append its 'Via' header leading to a long chain if not managed properly.
Description
The $proxy_internal_host variable in NGINX is utilized primarily in configurations where requests are proxied to internal locations. Its role is to store and return the internal (or backend) hostname that is typically determined by the server’s internal network settings. This variable can be particularly useful in scenarios involving load balancing or reverse proxy setups, where multiple upstream servers might handle incoming requests. Depending on the context, this variable will be set to the internal hostname specified in the proxy configuration or defaults to the server's host if not defined explicitly. $proxy_internal_host is evaluated during the processing of a request that has been altered via proxy-related directives, such as "proxy_pass". If a proxy_name is defined, that name will be utilized as the value of this variable. If the internal hostname is not defined, it may fall back to the default behavior prescribed by the NGINX configuration. This ensures that requests are routed correctly based on either explicitly defined configurations or internal defaults. This allows advanced configurations where decisions can be made based on the $proxy_internal_host variable, enabling and managing internal routing policies or security checks based on internal hostnames. It can also enhance logging and debugging processes, where administrators may want to inspect internal proxying setups more intuitively.
Config Example
location /api {
proxy_pass http://backend_server;
proxy_set_header Host $proxy_internal_host;
}Ensure that the internal hostname is correctly set in your proxy configurations to avoid unexpected behaviors.
$proxy_internal_host may not behave as expected if used outside of a proxy context, as it depends on proxied requests to be defined.
Remember that this variable does not include port information; if needed, it must be handled separately.
Description
The variable $proxy_internal_connection is set in the context of processing a proxied request in NGINX when a request is being forwarded to another upstream server. It specifically checks if the outgoing connection uses an internal network, which is determined by whether the socket for the connection is part of the internal routing rules set up in your NGINX configuration. By default, this variable will evaluate to '1' (true) if the outgoing connection is internal and '0' (false) if it is external. This variable is particularly useful in scenarios involving multiple layers of proxying or configurations where security constraints are enforced based on whether connections are internal or external. For instance, if you have rules defined that restrict access or modify headers based on the connection type, this variable can help fine-tune such behaviors. Users can set up specific configurations in their NGINX setup to react differently depending on the value of this variable, enhancing the control over connection handling within proxied requests. Typical usage scenarios might include increased logging, modifying request headers for internal connections but not for external ones, or applying special security measures to protect against exposing certain features over public-facing connections. Logs can then indicate different routing paths based on the status of this variable, allowing for straightforward troubleshooting and performance monitoring.
Config Example
location /proxy {
proxy_pass http://backend;
# Log whether the connection is internal or not
access_log /var/log/nginx/internal_access.log;
if ($proxy_internal_connection) {
# Internal requests can have specific handling
set $internal 'true';
}
}Ensure that your network routing is correctly configured, as misconfigurations may lead to incorrect values for this variable.
The variable is only applicable in contexts where proxying is happening, so using it in unrelated contexts will not yield any results.
Double-check the value in conditions since evaluating $proxy_internal_connection is dependent on specific server setup and connectivity rules.
Description
The `$proxy_internal_body_length` variable in NGINX is utilized within the context of proxying between servers, specifically when NGINX acts as a reverse proxy. It holds the length of the request body that NGINX is about to forward to the upstream server and is used internally by the proxy module when constructing the proxied request. The value is determined once the client request body has been read and the proxied request is being built. This can be useful in configurations where the size of the forwarded body needs to be monitored, for example in logging or conditional processing based on body size. Typical values vary per request and generally reflect the number of bytes in the request body being sent upstream; for requests without a body the value will be zero or empty.
Config Example
location /proxy {
proxy_pass http://backend;
# Assumes a format defined in the http block, e.g.: log_format body_len_log 'body length: $proxy_internal_body_length';
access_log /var/log/nginx/proxy.log body_len_log;
}This variable is only meaningful within the context of a proxied request and may be empty or zero if accessed outside that context.
Be cautious of expecting a non-zero value when the upstream server sends an empty body; it will be set to zero in such cases.
If using `$proxy_internal_body_length` in conditional processing, ensure that it is accessed after the request body has been processed.
Description
The variable $proxy_internal_chunked controls the transfer encoding method utilized by NGINX for internal responses forwarded by the proxy module. When set to "on", it signifies that responses without a predefined Content-Length should be transmitted using chunked transfer encoding, enabling the streaming of data to clients as it becomes available. This is particularly useful for dynamic content, where the length of the response is not known in advance. This variable is typically set in location blocks or server contexts, primarily during proxying when a client request is handled by a backend server. The default behavior of NGINX uses chunked encoding if the upstream server does not specify a Content-Length header, which allows clients to begin processing the response immediately without waiting for the entire body to be sent. Developers can selectively enable or disable this feature based on their specific use cases and the behavior of the upstream services they are connecting to, making it a flexible option in managing how data is served. In practice, proxy_internal_chunked may result in performance benefits when dealing with large or streaming responses, but it can introduce complexity for clients that may not handle chunked responses correctly. Therefore, careful consideration of client compatibility and performance implications should be observed when configuring NGINX to use this variable.
Config Example
location /api {
proxy_pass http://backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_buffering off;
# Enable chunked transfer for internal responses
set $proxy_internal_chunked on;
}Ensure that upstream servers can handle chunked transfer encoding properly, as some older clients may not support it.
Disabling buffering with 'proxy_buffering off;' may result in unexpected performance issues if the upstream is slow.
Remember that chunked responses can complicate content delivery over HTTP/1.0 and may not work with certain proxies or intermediaries.
Description
The $fastcgi_script_name variable is used in the NGINX configuration to retrieve the script name that is passed to the FastCGI server. This variable is primarily set when using the `fastcgi_pass` directive, which forwards requests to a FastCGI server. When an incoming request is processed, NGINX extracts the script part of the URI from the request and sets the value of this variable accordingly. It ensures that the FastCGI application receives the correct script path for processing, which is crucial for applications that rely on specific routing mechanisms, like PHP or Python applications running under FastCGI. In typical usage, $fastcgi_script_name will return a value such as `/index.php` or `/app/script.php`, reflecting the script that should be executed. If no script is found or if the request does not target a script, the variable may be empty. Therefore, this variable is not only important for routing to the correct script, but also plays a critical role in applications where file access is strictly controlled by URL structure.
Config Example
location / {
include fastcgi_params;
fastcgi_pass 127.0.0.1:9000;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}Ensure that the `fastcgi_param SCRIPT_FILENAME` is correctly set to include the document root and the script name, or the FastCGI server may not locate the file correctly.
Remember that if the request does not point to a specific script or if the script does not exist, $fastcgi_script_name may be empty, leading to unexpected errors.
When modifying the request URI, make sure the $fastcgi_script_name is updated accordingly to reflect those changes.
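A typical PHP-FPM location pairs $fastcgi_script_name with a try_files guard so that requests for non-existent scripts fail early instead of reaching the FastCGI server; the root and socket path below are placeholders:
location ~ \.php$ {
root /var/www/html;
# Return 404 before contacting PHP-FPM if the script does not exist on disk
try_files $uri =404;
include fastcgi_params;
fastcgi_pass unix:/run/php/php-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}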
Description
The $fastcgi_path_info variable is used in NGINX when interfacing with FastCGI applications. It provides the additional path information that may follow the script name in the URL, which is important when working with RESTful APIs or web applications that pass parts of the URL to the backend for processing. For instance, for a request to /index.php/users/123, the '/users/123' portion is the path info returned in $fastcgi_path_info. The variable is populated by the fastcgi_split_path_info directive: its regular expression is matched against the request URI, the first capture group becomes $fastcgi_script_name, and the second capture group becomes $fastcgi_path_info. This allows NGINX to correctly map incoming requests to the corresponding FastCGI application, enabling flexible routing for web frameworks that rely on parsing the path info. If the URL does not contain any additional path segments after the script name, $fastcgi_path_info will be empty. In a typical FastCGI location block, you define the split pattern and then pass the resulting value to the backend through the PATH_INFO parameter, which the application can use as needed.
Config Example
location ~ \.php(/|$) {
include fastcgi_params;
fastcgi_split_path_info ^(.+?\.php)(/.*)$;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
}Make sure to define PATH_INFO in your fastcgi_param settings to ensure NGINX passes it correctly to the backend.
Neglecting to set SCRIPT_FILENAME can result in errors, as paths need to be fully resolved for FastCGI scripts.
Not all FastCGI applications handle the path information correctly, so you should test your application to ensure it behaves as expected.
Description
The $grpc_internal_trailers variable is utilized within the context of gRPC protocols to provide access to the trailer fields of a gRPC response. Trailers in gRPC are key-value pairs that can be sent at the end of a response, similar to HTTP headers, allowing additional metadata to be communicated alongside the response body. This variable is only populated when the server processes a gRPC request and is capable of handling trailer fields. When processing a gRPC response, if the relevant trailer information is set to be included, it becomes accessible through $grpc_internal_trailers. Therefore, this variable will typically contain a string representation of the trailer fields formatted similarly to HTTP headers, separated by commas or newlines. If the request does not return any trailers, this variable will remain empty. The use of trailers can enhance the efficiency of the communication by allowing the server to send significant information along with the response without adding additional overhead to the body.
Config Example
location /example {
grpc_pass grpc://backend;
# Assumes a format defined in the http block, e.g.: log_format grpc_log 'trailers: $grpc_internal_trailers';
access_log /var/log/nginx/grpc.log grpc_log;
}Ensure that your upstream gRPC server actually sets the trailers, otherwise the variable will be empty.
Avoid using $grpc_internal_trailers in contexts where gRPC is not being used, as it will not be populated.
Description
The $ssl_protocol variable in NGINX exposes the specific SSL/TLS protocol version negotiated for a secure connection between a client and the server. This variable is set only when SSL is enabled in the configuration, and it reflects the latest protocol version used for the current request at runtime. It can return values such as 'TLSv1.2', 'TLSv1.3', or 'SSLv3', depending on the protocols enabled in the server's SSL configuration. When utilized, $ssl_protocol is particularly useful for logging and conditional processing within NGINX configurations. For instance, an administrator may want to adjust settings based on the security level of the protocol used. The variable is typically populated during the TLS handshake process, and like other connection-level variables, it can be accessed in various contexts such as location or server blocks. This makes it straightforward to apply conditional logic or enhance security measures based on the protocol version in use. It's important for users to understand that the availability of different protocol values largely depends on the cipher suites and protocols configured in the server block. If a particular protocol version (like TLSv1.3) is not enabled, the value of $ssl_protocol may reflect the next available version that can be negotiated by the server and the client.
Config Example
server {
listen 443 ssl;
ssl_certificate /path/to/cert.pem;
ssl_certificate_key /path/to/key.pem;
# Log the SSL protocol used (assumes a format defined in the http block,
# e.g.: log_format ssl_proto_log 'SSL Protocol: $ssl_protocol';)
access_log /var/log/nginx/ssl_protocol.log ssl_proto_log;
}Ensure SSL is enabled in your server configuration; otherwise, $ssl_protocol will not be set.
Check your SSL settings; if no suitable protocol is negotiated, $ssl_protocol may not yield expected values.
Remember that the protocol value depends on server capabilities and client requests, which may limit expected outputs.
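As a sketch of conditional handling based on the negotiated version, a map can flag connections made with older protocols; the header name and the policy itself are illustrative only:
map $ssl_protocol $legacy_tls {
default 0;
"TLSv1" 1;
"TLSv1.1" 1;
}
server {
listen 443 ssl;
ssl_certificate /path/to/cert.pem;
ssl_certificate_key /path/to/key.pem;
location / {
# Expose a flag so clients or monitoring can see they connected over legacy TLS
add_header X-Legacy-TLS $legacy_tls always;
}
}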
Description
The $ssl_cipher variable in NGINX provides the name of the cipher suite that is currently being used for an SSL/TLS connection. This variable is particularly useful for logging and debugging purposes, as it allows administrators to determine which cipher was negotiated for secure connections. The value of $ssl_cipher is set during the SSL handshake process, which occurs when a client initiates a connection over HTTPS. Depending on the chosen cipher suite, the $ssl_cipher variable could contain values like 'ECDHE-RSA-AES256-GCM-SHA384' or 'ECDHE-ECDSA-CHACHA20-POLY1305', among others. When configuring SSL/TLS in NGINX, it is crucial to be aware of the ciphers supported by both the NGINX server and the client's SSL implementation. The supported ciphers can be specified in the NGINX configuration file using the ssl_ciphers directive. The effective value of $ssl_cipher will reflect the configuration set in this directive and may also depend on the OpenSSL version used in the NGINX build. If a client attempts to connect using a cipher not supported by the server, the connection will fail, and the $ssl_cipher variable will not be set. Aside from its existence during SSL connection handling, $ssl_cipher can also be combined with logging directives to capture security-related data, thus assisting in analyzing and monitoring secure sessions. This visibility helps in understanding client connections, enforcing security compliance, and optimizing cipher configurations.
Config Example
server {
listen 443 ssl;
ssl_certificate /path/to/cert.pem;
ssl_certificate_key /path/to/key.pem;
ssl_ciphers 'HIGH:!aNULL:!MD5';
access_log /var/log/nginx/access.log combined;
location / {
add_header X-SSL-Cipher $ssl_cipher;
}
}Ensure that SSL is properly enabled in your NGINX configuration; otherwise, the variable will not be set.
Be aware that the value of $ssl_cipher depends on both server configuration and client's capabilities; mismatches can lead to connection failures.
When combining with logging, make sure to format the logs correctly to capture the cipher values without breaking the log format. They can clutter the output if not handled properly.
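For auditing cipher usage across clients, $ssl_cipher is commonly logged together with $ssl_protocol; a minimal sketch in which the format name and log path are placeholders:
log_format tls_audit '$remote_addr [$time_local] proto=$ssl_protocol cipher=$ssl_cipher "$request"';
server {
listen 443 ssl;
ssl_certificate /path/to/cert.pem;
ssl_certificate_key /path/to/key.pem;
access_log /var/log/nginx/tls_audit.log tls_audit;
}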
Description
The `$ssl_ciphers` variable in NGINX is automatically set during the processing of HTTPS requests. It holds the list of cipher suites offered by the client during the SSL/TLS handshake (available since NGINX 1.19.9): known ciphers are listed by name, while unknown ones are shown in hexadecimal. It is particularly useful for logging and debugging purposes, as it allows the server administrator to see which cipher suites connecting clients are capable of using. This variable is populated only when the SSL module is enabled in NGINX and is applicable only in server blocks that handle SSL connections; it is fully supported when NGINX is built against OpenSSL 1.1.1 or newer, while older versions expose it only for new sessions and list only known ciphers. Typical values might look like 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384', a colon-separated list of cipher suite names. Note that this list reflects the client's capabilities rather than the server's `ssl_ciphers` configuration directive; the single cipher actually negotiated for the connection is exposed separately in $ssl_cipher. Comparing the client-offered list against the configured policy can help when tightening the `ssl_ciphers` directive without locking out legacy clients.
Config Example
server {
listen 443 ssl;
ssl_certificate /path/to/certificate.crt;
ssl_certificate_key /path/to/private.key;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384';
access_log /var/log/nginx/ssl_ciphers.log combined;
location / {
add_header X-Client-Ciphers "$ssl_ciphers";
}
}Ensure that the SSL module is compiled and enabled in your NGINX installation; otherwise, the variable will be empty.
Avoid using this variable in non-SSL server blocks to prevent unexpected behavior.
When logging or displaying the value of `$ssl_ciphers`, ensure proper handling to avoid exposing sensitive information. In production environments, displaying such information can be a security risk.
Description
The $ssl_curve variable is utilized in NGINX to expose the elliptic curve being used for establishing an SSL/TLS connection. This variable is specific to connections that utilize the SSL protocol and is set once the SSL handshake has been completed successfully. The value of this variable can vary depending on the elliptic curve that has been negotiated between the client and the server during the handshake process. Common values include curve names such as 'prime256v1', 'secp384r1', or 'X25519', as reported by the underlying OpenSSL library; curves the library does not recognize are shown in hexadecimal. The value of $ssl_curve is particularly useful for logging or conditional access control scenarios within your NGINX configuration. It can help administrators understand which curves are being employed, which might translate to performance considerations or security compliance needs. It is essential to note that this variable will only be valid in contexts where SSL is active, and it should not be used outside of an SSL-enabled server block or a location handling secure traffic. Usage of the $ssl_curve variable in configurations should be approached with an understanding of the SSL/TLS protocols and the significance of the elliptic curve chosen for cryptographic operations, especially in constantly evolving security contexts.
Config Example
server {
listen 443 ssl;
server_name example.com;
ssl_certificate /path/to/cert.pem;
ssl_certificate_key /path/to/key.pem;
location / {
add_header X-Ssl-Curve $ssl_curve;
# Other location directives
}
}Ensure SSL is enabled in the server block; otherwise, this variable will not be set.
Be mindful of the security implications of exposing elliptic curve information in headers; potential information leakage on the cipher suite in use.
This variable may not be available in all builds of NGINX, ensure your build includes the necessary modules.
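If you want to influence which curve ends up in $ssl_curve, the server-side preference is set with the ssl_ecdh_curve directive; the curve list below is only an example and should match your own security requirements:
server {
listen 443 ssl;
ssl_certificate /path/to/cert.pem;
ssl_certificate_key /path/to/key.pem;
# Offer only these curves for the key exchange
ssl_ecdh_curve X25519:prime256v1;
location / {
add_header X-Ssl-Curve $ssl_curve;
}
}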
Description
$ssl_curves is a variable in NGINX that contains the list of elliptic curves supported by the client, as offered during the SSL/TLS handshake (available since NGINX 1.21.5). This variable is particularly relevant when using modern encryption standards to ensure secure communications. The value is set when an SSL connection is established and can list multiple curves; known curves are shown by name and unknown ones in hexadecimal, for example "prime256v1:secp384r1:X25519". This variable can be especially useful for monitoring and troubleshooting SSL/TLS connections, as it gives insight into the cryptographic capabilities clients advertise. Administrators can use this information to check that their configuration offers curves that the bulk of their clients support, promoting better security practices in web application deployments. It is worth noting that the variable depends on a sufficiently recent OpenSSL library, and that the curve actually negotiated for the key exchange is exposed separately in $ssl_curve. If you are looking to control or optimize the curve settings for your SSL connections, comparing $ssl_curves with your ssl_ecdh_curve configuration can provide valuable insight into how well your setup aligns with current security best practices.
Config Example
server {
listen 443 ssl;
ssl_certificate /path/to/cert.pem;
ssl_certificate_key /path/to/key.pem;
location /status {
# Show the negotiated SSL curves for monitoring
default_type text/plain;
return 200 "$ssl_curves";
}
}Ensure that your OpenSSL version supports the curves you wish to use; otherwise, the variable may return unexpected results.
Remember that the curves negotiated depend on both the client's and server's supported curves; a misconfiguration may lead to none being selected.
Description
The $ssl_sigalg variable is set during the SSL handshake process, specifically when a client establishes a secure connection with an NGINX server. This variable captures and provides information on the signature algorithm that has been employed in the exchange, which is instrumental for auditing, logging, or conditional configurations based on the security parameters established during the connection. Typical values for $ssl_sigalg can include various cryptographic algorithms used in SSL/TLS communication, such as `SHA256 with RSA Encryption`, `SHA1`, or even `ECDSA` signatures, among others, depending on the server's SSL configuration and the client's supported protocols. As SSL/TLS protocols evolve, the available algorithms and their respective representations may also change, reflecting in the values returned by this variable. The variable is generally used in scenarios where specific handling is required based on the strength of the signature algorithm, allowing server administrators to enforce security policies. The $ssl_sigalg variable is primarily utilized in the context of SSL-enabled virtual servers, where it can be used with conditional expressions in directives like `if`, or simply output in log formats for monitoring purposes. It adds a layer of configurability and insight for users wishing to ensure secure practices regarding signature algorithms utilized by clients connecting to their services.
Config Example
log_format custom_format '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$ssl_sigalg"';
access_log /var/log/nginx/access.log custom_format;
Ensure SSL is enabled and configured correctly, otherwise the variable will not be set or accessible.
The value of $ssl_sigalg is only available in contexts where SSL is negotiated; it will not yield any output on plain HTTP connections.
Description
In NGINX, the $ssl_session_id variable represents the session ID associated with the currently active SSL session. This variable becomes available when the SSL connection has been established and is often used in conjunction with SSL session caching configurations to optimize SSL/TLS handshakes. Each established SSL session can be referenced by its session ID, which is utilized to resume sessions without needing to renegotiate handshakes. This variable is particularly useful for monitoring, logging, or managing user sessions securely. The $ssl_session_id is a hex string that uniquely identifies the SSL session. It is created during the handshake process and is used whenever a client attempts to resume an SSL session. If session resumption is not supported or disabled, this variable may return an empty string. Therefore, it's essential to ensure that the server has SSL session caching enabled to utilize the benefits of this variable effectively. The value is a plain hexadecimal string without separators (for a typical 32-byte session ID, 64 hex characters). The variable is applicable in various contexts, such as logging configurations to capture SSL session information, or when specific access controls are enabled based on SSL session properties. Additionally, using related variables such as $ssl_protocol and $ssl_cipher can provide deeper insights when paired together with $ssl_session_id.
Config Example
http {
server {
listen 443 ssl;
ssl_certificate /etc/ssl/certs/my_cert.pem;
ssl_certificate_key /etc/ssl/private/my_cert.key;
location / {
access_log /var/log/nginx/access.log combined;
add_header X-SSL-Session-ID $ssl_session_id;
}
}
}Make sure SSL is properly configured for the variable to be available; otherwise, it will return an empty value.
If session resumption is disabled, be aware that the $ssl_session_id will not provide useful information.
Using this variable in non-SSL contexts will result in an empty string.
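Since the variable is only meaningful when sessions can actually be resumed, the sketch below enables a shared session cache; the cache name, sizes, and paths are illustrative:
http {
# Shared cache so worker processes can resume each other's sessions
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
server {
listen 443 ssl;
ssl_certificate /etc/ssl/certs/my_cert.pem;
ssl_certificate_key /etc/ssl/private/my_cert.key;
add_header X-SSL-Session-ID $ssl_session_id;
}
}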
Description
The $ssl_session_reused variable indicates whether the SSL session was reused during the handshake: it returns "r" if an existing session was resumed and "." if a new session was negotiated (available since NGINX 1.5.11). Reuse typically occurs when a client reconnects to a server using the same SSL session parameters, which speeds up the handshake process by avoiding a full re-negotiation. NGINX utilizes session IDs or session tickets to facilitate session reuse. If a client presents a valid session ID or ticket that the server recognizes, it will use the existing session, leading to reduced latency and improved performance. Monitoring this variable can help diagnose the effectiveness of SSL session management in your NGINX setup and is particularly useful in high-performance applications with frequent SSL connections. The variable can be included in log formats or response headers to assist with troubleshooting and performance analysis.
Config Example
log_format main '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" "$ssl_session_reused"';
access_log /var/log/nginx/access.log main;
Ensure that SSL session caching is enabled in the server configuration, as the variable will not report reuse without it.
Using $ssl_session_reused in log formats requires the access log to be properly configured to capture relevant SSL data.
Description
In NGINX, the $ssl_early_data variable is used when the server is configured to support TLS 1.3, which allows for a feature known as "0-RTT" or early data. This feature lets clients send data before the TLS handshake is fully completed, enabling faster communication but with certain security considerations. When early data is received and the handshake is not yet complete, the value of $ssl_early_data is "1"; otherwise it is an empty string. The variable is only populated when TLSv1.3 is available and the ssl_early_data directive is enabled. This variable is particularly useful for determining how to handle requests that may have been sent using early data. For example, relying on early data may introduce risks such as replay attacks, so users must implement appropriate logic to safeguard against such vulnerabilities. NGINX can leverage the value of this variable in conditional configurations, or forward it to the backend (commonly in an Early-Data request header), to apply different restrictions or responses based on whether early data was used during the request.
Config Example
server {
listen 443 ssl;
ssl_certificate /path/to/cert.pem;
ssl_certificate_key /path/to/key.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_early_data on;
location / {
# Check if early data was received
if ($ssl_early_data) {
return 400; # Reject early data requests if needed
}
# Normal processing for regular requests
proxy_pass http://backend;
}
Using $ssl_early_data without enabling TLSv1.3 and the ssl_early_data directive will always yield an empty value (no early data).
Assuming that early data processing is safe without implementing proper security measures can lead to vulnerabilities.
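A common hedge against replay attacks, instead of rejecting early data outright, is to forward the indicator to the backend so the application can answer non-idempotent requests with 425 Too Early. A minimal sketch, with the certificate paths and upstream name as placeholders:
server {
listen 443 ssl;
ssl_certificate /path/to/cert.pem;
ssl_certificate_key /path/to/key.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_early_data on;
location / {
# The backend sees Early-Data: 1 while the handshake is still incomplete
proxy_set_header Early-Data $ssl_early_data;
proxy_pass http://backend;
}
}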
Description
The variable $ssl_server_name contains the server name requested by the client through the TLS Server Name Indication (SNI) extension during the SSL handshake. NGINX uses this name to select the virtual server (and therefore the certificate) among the configured server blocks, but the variable itself always holds the raw value sent by the client rather than the name of the server block that was ultimately selected. If the client does not send SNI, $ssl_server_name will be empty and the default server for the listening socket handles the connection. This variable differs from $server_name, which reflects the configured name of the matched server block; $ssl_server_name is populated during the SSL handshake, allowing it to be used for logging, access control, and generating responses based on the hostname the client asked for. Typical values are DNS names such as 'example.com' or 'www.example.com'.
Config Example
server {
listen 443 ssl;
server_name example.com www.example.com;
ssl_certificate /path/to/cert.pem;
ssl_certificate_key /path/to/key.pem;
location / {
add_header X-SSL-Server-Name "$ssl_server_name";
}
}Ensure that SSL is enabled for the server block using 'listen 443 ssl;'.
If the client does not send an SNI name, the variable will be empty; be cautious during server configuration to avoid confusion.
$ssl_server_name cannot be used within configurations that are not related to SSL. Make sure the context is appropriate.
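Where only known hostnames should be served, a default server can refuse handshakes that carry an unexpected or missing SNI value; the sketch below uses ssl_reject_handshake, which requires NGINX 1.19.4 or later, and the names and paths are placeholders:
server {
listen 443 ssl default_server;
# Refuse TLS handshakes whose SNI does not match any configured server_name
ssl_reject_handshake on;
}
server {
listen 443 ssl;
server_name example.com www.example.com;
ssl_certificate /path/to/cert.pem;
ssl_certificate_key /path/to/key.pem;
}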
Description
The $ssl_alpn_protocol variable is specific to configurations that utilize HTTP/2 or other protocols that employ ALPN during the TLS handshake. This variable is set when a connection is made over HTTPS and allows the server to determine which application layer protocol has been selected by the client. It is primarily useful for servers that implement protocol variation in their service, such as providing content over both HTTP/1.1 and HTTP/2, depending on client capabilities. The value of this variable can be either the protocol name, such as 'h2' for HTTP/2 or 'http/1.1', representing the negotiated protocol. This variable is particularly important for performance optimization, as it allows the server to respond to client requests using the most appropriate protocol that the client supports according to the ALPN list. When a client initiates a TLS handshake, it can propose multiple protocols, and the server decides the best-suited protocol for that session. If the client does not support any of the protocols that the server offers, the variable will be unset, and a typical fallback may occur to a default protocol, often HTTP/1.1. This mechanism of negotiation enhances compatibility and performance across different client implementations.
Config Example
server {
listen 443 ssl;
ssl_certificate /path/to/certificate.crt;
ssl_certificate_key /path/to/private.key;
# Using the variable in a log format to capture the negotiated protocol
access_log /var/log/nginx/access.log custom_format;
location / {
if ($ssl_alpn_protocol = 'h2') {
# Perform actions specific to HTTP/2
}
# Other processing...
}
}
log_format custom_format '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" Protocol: $ssl_alpn_protocol';Ensure that your server is configured to handle protocols that support ALPN. If not configured properly, $ssl_alpn_protocol may return an empty value.
Use the variable only when SSL/TLS is enabled, as it will not be set for non-TLS connections.
Be cautious when using in combination with redirect rules, as certain configurations may unintentionally alter the protocol received.
Description
The $ssl_ech_status variable is set within the context of an SSL connection that supports Encrypted ClientHello (ECH). It communicates the client's ECH status as determined during the SSL handshake process. The variable can return various values indicating whether ECH was used or if there were any errors related to its use during the handshake. Common values include 'on' if ECH was successfully negotiated, 'off' if it was not supported by the client, and error codes for other specific issues. The handling of this variable happens when the NGINX server is configured with SSL and ECH support enabled. When a client connects and attempts to initiate an ECH handshake, the server evaluates the request and sets the $ssl_ech_status accordingly. This allows webmasters to implement fine-grained access control or customize responses based on the status of ECH, thus enhancing the privacy features offered to clients that support it. In practice, the variable is useful for logging or for writing conditions in configuration files that can tailor responses depending on client support for encryption. This might involve customizing the behavior of application servers or even redirecting clients based on their security capabilities.
Config Example
server {
listen 443 ssl;
ssl_certificate /etc/ssl/cert.pem;
ssl_certificate_key /etc/ssl/key.pem;
location / {
if ($ssl_ech_status = 'on') {
add_header X-ECH-Status 'Enabled';
}
if ($ssl_ech_status = 'off') {
return 403;
}
}
}Ensure that SSL is properly configured; otherwise, the variable may not be set or may return unexpected results.
Be aware that this variable only exists in the context of SSL connections; it will not be available for plain HTTP requests.
Description
The `$ssl_ech_outer_server_name` variable is a unique feature available in NGINX's SSL module that captures the outer server name provided by the client during the TLS handshake when the Encrypted ClientHello (ECH) extension is utilized. This allows clients to maintain privacy regarding the actual server they are connecting to while still enabling servers to decide how to handle requests based on the intended destination server name. This variable is set during the SSL handshake when the ECH extension is negotiated by the client. If the client includes an outer server name in its Encrypted ClientHello, this variable will contain that name. Typical values for this variable are domain names or host headers, such as 'www.example.com' or 'api.example.com', depending on the ECH implementation on the client’s side. If no outer server name is provided or the ECH negotiation fails, the variable will be empty. Using this variable can be particularly beneficial for applications that wish to respond differently based on the client's original intended destination while still preserving some level of privacy in communication. However, this functionality necessitates a compatible client setup to utilize ECH, meaning that not all requests will include this variable based on the client's configuration during the handshake.
Config Example
server {
listen 443 ssl;
server_name example.com;
ssl_certificate /path/to/cert.pem;
ssl_certificate_key /path/to/cert.key;
location / {
if ($ssl_ech_outer_server_name) {
add_header X-Outer-Server-Name $ssl_ech_outer_server_name;
}
}
}Ensure that your NGINX version supports Encrypted ClientHello and is correctly configured for SSL/TLS to capture this variable correctly.
If the client does not support ECH or does not send an outer server name, the variable will not be set, which might lead to unexpected configuration behavior.
Description
The $ssl_client_cert variable is utilized when SSL client certificate verification occurs in NGINX. This variable stores the client's SSL certificate provided during the SSL handshake, formatted as a PEM encoded string in which every line except the first is prefixed with a tab character so that the value can be embedded in a request header; the variable is deprecated, and $ssl_client_escaped_cert should be preferred for new configurations. It is set to the value of the client's certificate when the 'ssl_verify_client' directive is configured to 'on' and a certificate is successfully validated. It becomes particularly useful in scenarios requiring secure communications where client identity verification is essential, such as in API services or user authentication systems. If no client certificate is provided or if the verification fails, this variable will be empty. The content of this variable can be logged, evaluated in conditional statements, or passed to backend applications if necessary. Typical uses include authorizing access based on client certificate attributes, such as the Common Name (CN) or Subject Alternative Name (SAN), thereby enhancing security at the application level. This approach is critical in services that must ensure the identity of each client in a secure manner, making use of SSL/TLS protocols.
Config Example
server {
listen 443 ssl;
server_name example.com;
ssl_certificate /path/to/server.crt;
ssl_certificate_key /path/to/server.key;
ssl_client_certificate /path/to/ca.pem;
ssl_verify_client on;
location / {
if ($ssl_client_cert) {
add_header X-Client-Cert $ssl_client_cert;
}
}
}Ensure ssl_verify_client is set to 'on' for $ssl_client_cert to be populated.
$ssl_client_cert will be empty if the client does not present a certificate or if the verification fails.
Be cautious about logging $ssl_client_cert since it contains sensitive information.
Description
The $ssl_client_raw_cert variable is populated when NGINX is configured to handle SSL/TLS connections and the client presents a certificate for authentication. This variable becomes available in the context of a request being processed over an SSL connection when the 'ssl_verify_client' directive is set to 'on' or 'optional'. The client certificate is returned in PEM format (the base64-encoded certificate together with its BEGIN/END CERTIFICATE delimiter lines and line breaks) and can be utilized for logging, access control, or any application-specific processing that requires knowledge about the client's certificate. This variable is particularly useful in secure environments where mutual TLS (mTLS) is implemented, as it allows server administrators and application developers to impose rules based on the client's certificate, such as logging information for auditing or dynamically altering request processing based on the certificate's validity or attributes. Because the PEM value spans multiple lines, it is not suitable for direct use in response or proxy headers; $ssl_client_escaped_cert provides a URL-encoded, single-line form for that purpose. Settings such as 'ssl_verify_client' need careful configuration as they can impact application security directly.
Config Example
server {
listen 443 ssl;
ssl_certificate /path/to/server.crt;
ssl_certificate_key /path/to/server.key;
ssl_verify_client on;
location / {
# The PEM value spans multiple lines, so it cannot be placed in a header as-is;
# forward the URL-encoded form to the backend instead:
proxy_set_header X-Client-Cert $ssl_client_escaped_cert;
proxy_pass http://backend;
}
}Ensure that the 'ssl_verify_client' directive is set correctly; otherwise, the variable will be empty.
This variable is only available in SSL-enabled server contexts; it won't exist in HTTP contexts.
Logging raw client certificates can expose sensitive information; ensure compliance with security policies.
Description
The $ssl_client_escaped_cert variable is specifically used within the HTTP server context when HTTPS connections utilize client certificate authentication. When a client establishes a secure connection and provides a certificate, this variable captures the full certificate in a URL-encoded form, making it safe to transmit in request headers or query parameters without introducing syntax errors. This variable is set only if client certificate verification is enabled and a valid client certificate is presented during the SSL handshake. It holds the encoded content of the client's certificate, allowing server-side applications to utilize the client's identity for authorization or access control purposes. The underlying value is the PEM-encoded certificate, base64 data wrapped between the standard PEM header and footer lines, with the whole value URL-encoded into a single line. If no client certificate is presented, this variable is empty. In combination with other SSL-related variables, such as $ssl_client_cert and $ssl_client_verify, the $ssl_client_escaped_cert provides a comprehensive mechanism for managing client verification within secure environments, catering to scenarios that require not just identity verification but safe transmission of certificate data for further processing.
Config Example
server {
listen 443 ssl;
server_name example.com;
ssl_certificate /path/to/server.crt;
ssl_certificate_key /path/to/server.key;
ssl_client_certificate /path/to/client_ca.crt;
ssl_verify_client on;
location /secure-data {
add_header X-Client-Cert "$ssl_client_escaped_cert";
}
}Make sure SSL client authentication is enabled for this variable to be populated; otherwise, it will remain empty.
Using this variable without proper escaping in your application context can lead to malformed URLs.
It's essential to configure NGINX to log the variable securely to prevent exposure of sensitive certificate information in access logs.
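In mTLS deployments the escaped certificate is most often forwarded to the application tier rather than returned to the client; a sketch in which the header name, paths, and upstream are placeholders:
server {
listen 443 ssl;
ssl_certificate /path/to/server.crt;
ssl_certificate_key /path/to/server.key;
ssl_client_certificate /path/to/client_ca.crt;
ssl_verify_client on;
location / {
# Backend decodes the URL-encoded PEM to inspect the client identity
proxy_set_header X-Client-Cert $ssl_client_escaped_cert;
proxy_pass http://backend;
}
}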
Description
The $ssl_client_s_dn variable is populated when the client presents a valid SSL/TLS certificate during the SSL handshake process, and it contains the distinguished name (DN) of the client specified in that certificate. This variable is derived from the client's certificate and provides a way to identify the certificate holder by presenting the attributes of the distinguished name, which typically includes information like the common name, organization, country, and other relevant fields. The presence and content of this variable depend on the `ssl_verify_client` directive being set to either "on" or "optional", which ensures that NGINX requires or requests a client certificate. Only when a valid client certificate is successfully verified does this variable become available in NGINX context.
Config Example
server {
listen 443 ssl;
ssl_certificate /path/to/server.crt;
ssl_certificate_key /path/to/server.key;
ssl_verify_client on;
location / {
if ($ssl_client_s_dn) {
add_header X-Client-DN $ssl_client_s_dn;
}
}
}Ensure that `ssl_verify_client` is set to `on` or `optional` for this variable to be populated.
If no client certificate is provided, `$ssl_client_s_dn` will be empty.
Remember that this variable is only accessible in contexts where SSL is enabled.
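One way to act on the subject DN is an allow-list implemented with map; the DN values, paths, and response codes below are purely illustrative:
map $ssl_client_s_dn $dn_allowed {
default 0;
"CN=service-a,O=Example Corp,C=US" 1;
"CN=service-b,O=Example Corp,C=US" 1;
}
server {
listen 443 ssl;
ssl_certificate /path/to/server.crt;
ssl_certificate_key /path/to/server.key;
ssl_client_certificate /path/to/ca.crt;
ssl_verify_client on;
location / {
# Reject clients whose certificate subject is not on the allow-list
if ($dn_allowed = 0) {
return 403;
}
}
}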
Description
The $ssl_client_i_dn variable is populated when an SSL/TLS connection is established using client certificates. Specifically, it contains the distinguished name (DN) of the issuer of the client's certificate, that is, the certificate authority that signed it, as presented during the TLS handshake. This variable is available primarily in contexts that require SSL/TLS client authentication, which must be enabled in the NGINX configuration using directives such as 'ssl' and 'ssl_verify_client'. Typically, the issuer DN includes information such as the issuing CA's common name (CN), organization (O), and country (C), formatted according to RFC 2253. When the variable is set, it can be used in various configurations, from setting up access controls based on which CA issued the client certificate to customizing responses accordingly. However, if client certificates are not utilized in the SSL/TLS connection, this variable will not contain any data. It's important to note that this variable depends on successful client authentication; if the client fails to provide a valid certificate or if verification fails, $ssl_client_i_dn will be empty. The subject DN of the certificate holder itself is available separately in $ssl_client_s_dn.
Config Example
server {
listen 443 ssl;
ssl_certificate /path/to/server.crt;
ssl_certificate_key /path/to/server.key;
ssl_client_certificate /path/to/client_ca.crt;
ssl_verify_client on;
location / {
if ($ssl_client_i_dn) {
return 200 'Issuer DN: $ssl_client_i_dn';
}
return 403 'Access denied';
}
}Ensure that client certificate verification is enabled with 'ssl_verify_client on'; otherwise, the variable will be empty.
Be cautious when using this variable in access control logic to avoid inadvertently denying legitimate requests due to misconfigured SSL settings.
This variable only contains data when valid client certificates are supplied; it will not populate on failed authentications.
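Because the issuer DN identifies the signing CA rather than the individual client, a typical pattern is to allow only certificates issued by specific CAs. A hedged sketch using map (the DN string and the variable name $trusted_issuer are placeholders; match the exact RFC 2253 string your CA produces):
map $ssl_client_i_dn $trusted_issuer {
default 0;
"CN=Example Corporate CA,O=Example Corp,C=US" 1; # placeholder issuer DN
}
location / {
if ($trusted_issuer = 0) {
return 403;
}
}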
Description
The $ssl_client_s_dn_legacy variable is populated when the NGINX server handles an SSL/TLS connection that uses client authentication. It contains the subject distinguished name (DN) of the client's certificate, covering attributes such as identity, organization, and country, rendered in the older OpenSSL "oneline" notation that NGINX used before version 1.11.6, for example '/C=US/ST=California/L=San Francisco/O=Example Corp/CN=example.com' (the current $ssl_client_s_dn uses the comma-separated RFC 2253 form instead). The variable is set only when `ssl_verify_client` requires or requests client certificates; if no certificate is presented or verification fails, it is empty. It is useful for logging, access control, or application logic that depends on the client's identity, particularly when existing tooling or log parsers expect the legacy DN format.
Config Example
server {
listen 443 ssl;
ssl_certificate /path/to/server.crt;
ssl_certificate_key /path/to/server.key;
ssl_client_certificate /path/to/ca.crt;
ssl_verify_client on;
location /protected {
if ($ssl_client_s_dn_legacy) {
add_header X-Client-DN $ssl_client_s_dn_legacy;
}
}
}Ensure `ssl_verify_client` is set to `on` to get useful values from this variable; otherwise, it will be empty.
If client certificates are not provided by clients, this variable will not hold any information, leading to potential conditional processing errors.
Ensure that NGINX is correctly configured to handle SSL connections before relying on this variable.
Description
The $ssl_client_i_dn_legacy variable is set when NGINX terminates an SSL/TLS connection that uses client authentication. When a client presents a certificate during the handshake, this variable holds the issuer distinguished name (DN) of that certificate, that is, the DN of the certificate authority that signed it, with attributes such as common name (CN), organization (O), and country (C). The "_legacy" suffix indicates that the DN is rendered in the format NGINX used before version 1.11.6 (the slash-separated OpenSSL style), kept for backward compatibility with configurations and log parsers that expect it; the RFC 2253 form is available via $ssl_client_i_dn. The exact string varies with how the issuing certificate was created.
Config Example
server {
listen 443 ssl;
server_name example.com;
ssl_certificate /path/to/your/cert.pem;
ssl_certificate_key /path/to/your/key.pem;
ssl_client_certificate /path/to/your/ca.pem;
ssl_verify_client on;
location / {
add_header X-Client-Issuer-DN $ssl_client_i_dn_legacy;
}
}$ssl_client_i_dn_legacy is only available when client verification is enabled (ssl_verify_client on).
The format of the DN string may differ based on the client's certificate details, which could lead to processing issues if not properly accounted for.
Improperly configured SSL settings may lead to the variable being empty or undefined.
Description
The $ssl_client_serial variable is set when NGINX is configured to handle SSL/TLS connections that require client certificates for authentication. It retrieves the serial number of the client's SSL certificate if client verification is enabled, which typically occurs in a server block configured with the "ssl_verify_client" directive set to "on" or "optional". When presented with a valid client certificate during the TLS handshake, NGINX can access various attributes of the certificate, including the serial number, which is a unique identifier for the certificate instance. The value of $ssl_client_serial is often formatted as a hexadecimal string, representing the certificate's serial number. If client verification is not enabled, or if no client certificate is provided by the client, this variable will not be set and will return an empty string. This variable is especially useful for implementing access control, logging, or auditing mechanisms based on the identity of the client certificates being used.
Config Example
server {
listen 443 ssl;
ssl_certificate /path/to/your/server.crt;
ssl_certificate_key /path/to/your/server.key;
ssl_client_certificate /path/to/your/client_ca.crt;
ssl_verify_client on;
location / {
if ($ssl_client_serial) {
add_header X-Client-Serial $ssl_client_serial;
}
}
}Ensure SSL is properly configured; otherwise, this variable will not be set.
If client certificates are not provided or verification is disabled, the variable will be empty.
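When only a small, fixed set of certificates should be accepted, their serial numbers can be allow-listed with a map. A sketch under that assumption (the serial values and the variable name $serial_allowed are placeholders; compare against the exact strings NGINX reports, for example from your access logs):
map $ssl_client_serial $serial_allowed {
default 0;
"0A1B2C3D4E5F" 1; # placeholder serial
"112233445566" 1; # placeholder serial
}
location / {
if ($serial_allowed = 0) {
return 403;
}
}
Serial numbers are only unique per issuing CA, so this pattern assumes a single trusted CA.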
Description
The $ssl_client_fingerprint variable is available when SSL client authentication is used. It contains the SHA-1 fingerprint of the client's certificate: a hash computed over the DER encoding of the whole certificate and rendered as a 40-character hexadecimal string. The variable is set only when the client has presented a certificate during the SSL handshake, which in turn requires the server to request one via ssl_verify_client. Because the fingerprint uniquely identifies a particular certificate, it is useful for logging, certificate pinning, and applying custom logic based on client identity.
Config Example
server {
listen 443 ssl;
ssl_certificate /path/to/server.crt;
ssl_certificate_key /path/to/server.key;
ssl_client_certificate /path/to/ca.crt;
ssl_verify_client on;
location / {
# Use the fingerprint in access logs
access_log /var/log/nginx/access.log combined; # uses the predefined "combined" log format
set $client_fp $ssl_client_fingerprint;
if ($client_fp) {
# More logic can be applied here based on the fingerprint
}
}
}Ensure that SSL client authentication is enabled; otherwise, this variable will not be set.
Be careful not to rely on this variable for authentication purposes without proper validation of the SSL handshake.
The value is a SHA-1 digest rendered as a 40-character hexadecimal string; if you compare it against fingerprints produced by other tools, make sure they use the same algorithm and formatting (no colons, consistent case).
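Since the fingerprint identifies one specific certificate, it can be used for simple certificate pinning. A sketch (the hexadecimal value and the variable name $pinned_client are placeholders; map matches exact strings, so use the same formatting NGINX emits):
map $ssl_client_fingerprint $pinned_client {
default 0;
"d7a0e5b2c4f61a3e9b8c0d2f4a6e8c1b3d5f7a90" 1; # placeholder fingerprint
}
location / {
if ($pinned_client = 0) {
return 403;
}
}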
Description
The variable $ssl_client_verify is set when the NGINX server is configured for SSL client certificate verification. It reports the result of verifying the client certificate: "SUCCESS" when a certificate was presented and validated against the trusted CA certificates configured with ssl_client_certificate, "FAILED" (since 1.11.7 followed by a colon and a reason, for example "FAILED:certificate has expired") when a certificate was presented but could not be verified, and "NONE" when no certificate was presented at all. Verification is controlled by the "ssl_verify_client" directive and includes checks such as the certificate's validity period and its chain up to a trusted Certificate Authority (CA). Because NGINX sets the value while processing the request, it can be used for conditional handling or logging; for example, with "ssl_verify_client optional" the server can accept connections without certificates and let the configuration or the backend decide what to do based on this variable.
Config Example
server {
listen 443 ssl;
ssl_certificate /path/to/server.crt;
ssl_certificate_key /path/to/server.key;
ssl_client_certificate /path/to/trusted_ca.crt;
ssl_verify_client on;
location /private {
if ($ssl_client_verify != SUCCESS) {
return 403;
}
# handle the request for authenticated clients
}
}Ensure that SSL client verification is enabled; otherwise, the variable will not be set as expected.
Be aware of context limitations; using $ssl_client_verify in inappropriate contexts like server{} or http{} may yield unexpected results.
The if directive has well-known pitfalls inside location blocks; for anything beyond a simple return, prefer using $ssl_client_verify in a map, a log format, or a header passed to the backend.
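A widespread pattern is to terminate mutual TLS in NGINX with ssl_verify_client optional and let the backend decide how to treat unauthenticated clients. A sketch of forwarding the result (the header names and upstream address are illustrative):
location / {
proxy_set_header X-SSL-Client-Verify $ssl_client_verify;
proxy_set_header X-SSL-Client-S-DN $ssl_client_s_dn;
proxy_pass http://127.0.0.1:8080; # placeholder upstream
}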
Description
The $ssl_client_v_start variable is part of NGINX's SSL/TLS support (available since 1.11.7) and relates to the client certificate rather than to handshake timing. It returns the start date of the client certificate's validity period, the certificate's notBefore field, as a textual date such as 'Jan  1 00:00:00 2024 GMT', not a Unix timestamp. The variable is populated only when a client certificate has been presented, which requires ssl_verify_client to be set to "on" or "optional"; otherwise it is empty. It is mainly useful for logging and auditing, for example to record how recently the certificates of connecting clients were issued.
Config Example
server {
listen 443 ssl;
server_name example.com;
ssl_certificate /etc/ssl/certs/example.com.crt;
ssl_certificate_key /etc/ssl/private/example.com.key;
ssl_client_certificate /etc/ssl/certs/client_ca.crt;
ssl_verify_client optional;
access_log /var/log/nginx/access.log combined;
location / {
add_header X-SSL-Client-V-Start "$ssl_client_v_start";
}
}The variable is only populated when a client certificate is presented, which requires ssl_verify_client to be set to "on" or "optional" in an SSL-enabled server block.
If no certificate is supplied, or the handshake fails before the certificate is received, this variable will be empty.
Description
The $ssl_client_v_end variable is available when NGINX is configured for SSL with client certificate authentication (since 1.11.7). It returns the end date of the client certificate's validity period, the certificate's notAfter field, as a textual date such as 'Dec 31 23:59:59 2025 GMT'. It does not describe when the TLS session ends; it describes when the presented certificate expires. Together with $ssl_client_v_start and $ssl_client_v_remain it supports logging and monitoring of certificate lifetimes, for example to spot clients whose certificates are about to expire and warn them before access breaks. The variable is populated only when a client certificate has been presented (ssl_verify_client "on" or "optional"); otherwise it is empty.
Config Example
server {
listen 443 ssl;
server_name example.com;
ssl_certificate /path/to/cert.pem;
ssl_certificate_key /path/to/key.pem;
ssl_client_certificate /path/to/client_ca.pem;
ssl_verify_client optional;
location / {
access_log /var/log/nginx/access.log;
set $end_time $ssl_client_v_end;
# additional configuration
}
Ensure that SSL and client certificate verification (ssl_verify_client) are configured; otherwise the variable will be empty.
The value is a certificate expiry date, so treat it as a string when logging or comparing rather than as a Unix timestamp.
Description
The variable $ssl_client_v_remain is part of NGINX's SSL module (available since 1.11.7) and is populated when a client presents a certificate during the TLS handshake. It returns the number of days remaining until the client certificate expires, that is, the difference between the certificate's notAfter date and the current time expressed in whole days. A value of 0 means the certificate is effectively out of validity, while larger numbers indicate how much lifetime is left. This makes the variable convenient for monitoring certificate hygiene in mutual TLS (mTLS) deployments: it can be logged, exposed to backends, or used to warn clients whose certificates are close to expiry. As with the other client certificate variables, it requires ssl_verify_client to be set to "on" or "optional" and is empty when no certificate is presented.
Config Example
server {
listen 443 ssl;
ssl_certificate server.crt;
ssl_certificate_key server.key;
ssl_client_certificate client_ca.crt;
ssl_verify_client optional;
location / {
# expose the number of days left before the client certificate expires
add_header X-SSL-Client-Days-Remaining $ssl_client_v_remain;
}
}Ensure that SSL is properly configured and enabled to use this variable; otherwise, it will not be set.
This variable will only be available after the SSL handshake is complete.
If client certificate authentication is not enabled, or no certificate is presented, this variable will be empty rather than zero.
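NGINX if conditions have no numeric comparison operators, so checks on the remaining days are usually expressed with map and a regular expression. A sketch that flags certificates with fewer than ten days left (the variable name $client_cert_expiring and the threshold are illustrative):
map $ssl_client_v_remain $client_cert_expiring {
default 0;
"" 0;          # no client certificate presented
"~^[0-9]$" 1;  # single digit: fewer than 10 days remaining
}
location / {
add_header X-Client-Cert-Expiring $client_cert_expiring;
}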
Description
The $ssl_client_sigalg variable is set when a client provides an SSL certificate during a TLS handshake. It stores the signature algorithm that was used to sign the client’s certificate. This variable is critical for applications that validate client certificates and may decide to allow or deny access based on the security level of the signing algorithm. The values for $ssl_client_sigalg can vary; common algorithms include 'sha256WithRSAEncryption', 'sha1WithRSAEncryption', and others, depending on the client’s configuration and security policies. NGINX will only set this variable in contexts where client SSL certificates are enabled, specifically in 'server' and 'location' blocks where the ssl_verify_client directive is set to 'on' or 'optional'. If no client certificate is provided, the variable will be empty. This is particularly useful for logging or debugging purposes, as it allows server administrators to see which signing algorithms are in use by clients accessing their services, and make informed decisions based on that information. Moreover, changes to client SSL configurations may alter the values generated here and should be monitored accordingly.
Config Example
server {
listen 443 ssl;
ssl_certificate /path/to/server.crt;
ssl_certificate_key /path/to/server.key;
ssl_client_certificate /path/to/client_ca.crt;
ssl_verify_client on;
location / {
if ($ssl_client_sigalg = 'sha256WithRSAEncryption') {
# Log or take action for clients using SHA256
access_log /var/log/nginx/ssl.log;
}
}
}This variable will be empty if SSL client verification is not enabled or if the client does not provide a certificate.
Misconfiguring the SSL context may lead to the variable not being set, leading to unexpected behavior in your access logic.
Description
The $realip_remote_addr variable belongs to the ngx_http_realip_module and preserves the original client address of the connection, that is, the address of the peer that actually opened the TCP connection to NGINX. The realip module is used when NGINX sits behind a proxy or load balancer: with set_real_ip_from and real_ip_header configured, NGINX replaces $remote_addr (and related values) with the end-user address taken from a header such as X-Real-IP or X-Forwarded-For, provided the request arrived from a trusted address. After that substitution, $remote_addr reports the end user, while $realip_remote_addr keeps the address NGINX really received the connection from, typically the proxy or load balancer itself. If the realip module is not active for a request (no trusted source matched, or the header is absent), the two variables are identical. Logging both is useful for auditing, because it shows which intermediary forwarded each request as well as the claimed client address.
Config Example
http {
set_real_ip_from 192.0.2.0/24; # Allow this range to set the real IP
real_ip_header X-Forwarded-For; # Specify the header to use
server {
listen 80;
location / {
# Access can be logged with the original client's IP
access_log /var/log/nginx/access.log main;
}
}
}Ensure that the `set_real_ip_from` directive is correctly set to allow specific proxy IPs to modify the client IP.
Be cautious with untrusted proxies; otherwise, it can lead to IP spoofing.
The `real_ip_header` directive must name the header your proxies actually send (such as `X-Forwarded-For` or `X-Real-IP`); without a matching header from a trusted source, `$remote_addr` is not rewritten and `$realip_remote_addr` simply equals it.
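Because $realip_remote_addr always holds the address of the directly connected peer, it can double-check that traffic really arrived through the expected load balancer. A sketch using geo (the 10.0.0.0/8 range and the variable name $from_lb are placeholders for your proxy network):
geo $realip_remote_addr $from_lb {
default 0;
10.0.0.0/8 1; # placeholder: the network your load balancers live in
}
server {
listen 80;
location / {
if ($from_lb = 0) {
return 403;
}
}
}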
Description
The $realip_remote_port variable (available since NGINX 1.11.0) complements $realip_remote_addr: it keeps the original client port of the connection, that is, the source port of the peer that actually connected to NGINX. It is relevant when the realip module is active behind a load balancer or reverse proxy and $remote_addr/$remote_port have been rewritten from a trusted header or from the PROXY protocol; in that case this variable still reports the port of the direct TCP connection, which usually belongs to the intermediary rather than to the end user. When the realip module is not in effect, it matches $remote_port. Typical values are ephemeral source ports (for example 51432) rather than well-known service ports, and the variable is mainly useful for logging and troubleshooting connection paths through multiple network layers.
Config Example
http {
set_real_ip_from 192.168.1.1; # example trusted proxy
real_ip_header X-Forwarded-For;
log_format realip '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" "$http_user_agent" '
'direct_peer=$realip_remote_addr:$realip_remote_port';
server {
listen 80;
location / {
access_log /var/log/nginx/access.log realip;
}
}
}Ensure that the real_ip module (--with-http_realip_module) is included in your NGINX build; without it, this variable will not be available.
Without set_real_ip_from and real_ip_header in effect, $realip_remote_port simply matches $remote_port, the port of the direct connection.
Headers such as X-Forwarded-For normally carry only an address, so the end user's source port is generally not recoverable from them; this variable records the port of the peer that actually connected to NGINX.
Description
The $invalid_referer variable is set by the ngx_http_referer_module and indicates whether the Referer header of an incoming request is acceptable according to the valid_referers directive. When a request is processed, the module compares the Referer value against the list configured with valid_referers, which may include the keywords none (no Referer header), blocked (a header stripped or mangled by a proxy or firewall), server_names, literal hostnames with optional wildcards, and regular expressions. If the header does not match any entry, $invalid_referer is set to "1"; if it matches, the variable is an empty string. This is most commonly used for hotlink protection, preventing other sites from embedding your images or downloads, and for simple origin-based access rules. Because clients can freely forge or omit the Referer header, the check is a deterrent rather than a security boundary, and it should not be the sole basis for protecting sensitive resources.
Config Example
location /protected {
valid_referers none blocked example.com;
if ($invalid_referer) {
return 403;
}
}Ensure the referer header is actually sent by clients, as some may choose not to send it, leading to unexpected behavior.
Consider the implications of using `if` inside a location block, as it can create unexpected results if not used carefully.
Be aware that referers can be spoofed; do not rely solely on this variable for critical security decisions.
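A classic application is hotlink protection for static assets, rejecting requests whose Referer is neither absent nor one of your own hosts. A sketch (the file extensions and hostnames are illustrative):
location ~* \.(png|jpe?g|gif)$ {
valid_referers none blocked server_names *.example.com; # placeholder domains
if ($invalid_referer) {
return 403;
}
}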
Description
The $secure_link variable is provided by the ngx_http_secure_link_module and reports the result of checking a request against an expected, secret-based token, which is how NGINX restricts access to protected resources such as private downloads. In the secure_link/secure_link_md5 mode, the client supplies a hash (and usually an expiration timestamp) as request arguments; NGINX recomputes the MD5 over the expression given in secure_link_md5, typically combining the URI, the expiry, possibly the client address, and a shared secret, and compares it with the supplied value. $secure_link is then an empty string if the hash does not match, "0" if the hash matches but the link has expired, and "1" if the link is valid. In the older secure_link_secret mode the variable instead holds the remaining part of the URI when the check succeeds and is empty otherwise. The variable never contains the secret itself; it only conveys the outcome of the check, which the configuration can turn into 403 or 410 responses or allow to proceed. Token generation, expiry policy, and keeping the secret confidential all have to be managed outside NGINX for the scheme to remain secure.
Config Example
location /protected_file {
secure_link $arg_md5,$arg_expires;
secure_link_md5 "$secure_link_expires$uri$remote_addr your_secret";
if ($secure_link = "") {
return 403; # hash mismatch: deny access
}
if ($secure_link = "0") {
return 410; # link has expired
}
root /path/to/your/files;
add_header Content-Disposition "attachment; filename=protected_file";
}Ensure that the secure link secret is kept confidential; exposing it can compromise security.
Tokens must be generated correctly; passing incorrect parameters will result in invalid secure links.
Always define expiration times to prevent indefinite access through outdated tokens.
Description
The `$secure_link_expires` variable in NGINX holds the lifetime (expiration timestamp) of a secure link as passed in the request, and it is intended to be used only inside the `secure_link_md5` expression. When the `secure_link` directive is given two parameters, the hash and an expiration value (usually taken from request arguments such as $arg_md5 and $arg_expires), NGINX copies the second parameter into `$secure_link_expires`. The value is a Unix timestamp marking when the link stops being valid. Including `$secure_link_expires` in the `secure_link_md5` expression ensures the expiry is covered by the hash, so a client cannot extend a link's lifetime by editing the argument; NGINX then compares the timestamp with the current time, and if it has passed, `$secure_link` is set to "0". If the `secure_link` directive is used with a single parameter, no expiration is checked and this variable stays empty. The mechanism is typically used to issue short-lived download or media links: the application generates the hash and expiry, embeds them in the URL, and NGINX enforces both authenticity and the time limit.
Config Example
location /protected {
secure_link $arg_md5,$arg_expires;
secure_link_md5 "$secure_link_expires$uri your_secret";
if ($secure_link = "") {
return 403; # hash mismatch
}
if ($secure_link = "0") {
return 410; # link has expired
}
}Ensure that the secure link directive is properly set up; otherwise, this variable may not return valid values.
If the secure_link directive is given only the hash argument, no expiration is checked and $secure_link_expires remains empty, so links stay valid indefinitely.
Description
The $uid_got variable belongs to the ngx_http_userid_module, which issues cookies that identify individual browsers for tracking and analytics. The variable contains the cookie name together with the client identifier received from the browser, for example 'uid=D7C4B2A1E8F3...' (the identifier is rendered in hexadecimal). It is populated only when the request carries a userid cookie previously issued by this module, i.e. on repeat visits from a client that accepted the cookie; on a first visit, or when cookies are blocked, the variable is empty. Despite the name, it has nothing to do with UNIX user IDs or authentication: it identifies a browser, not a logged-in user. Its main uses are custom log formats that correlate requests from the same visitor and configuration logic that distinguishes returning clients. The companion variable $uid_set holds the identifier NGINX sends when it issues a new cookie.
Config Example
server {
userid on;
userid_name uid;
location /tracked {
add_header X-UID-Got "$uid_got";
proxy_pass http://backend;
}
}$uid_got is only populated when the browser sends back a cookie previously issued by the userid module; on a client's first visit it is empty.
The value contains the cookie name and a hexadecimal identifier (for example 'uid=...'), so compare against complete strings rather than bare numbers.
This variable is unrelated to UNIX user IDs or authenticated users; it identifies browsers via the userid cookie.
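For visitor analytics, the received and issued identifiers are usually written to a dedicated log so returning clients can be correlated across requests. A sketch (the format name userid_log and the file paths are illustrative; log_format belongs at the http level):
log_format userid_log '$remote_addr [$time_local] "$request" got=$uid_got set=$uid_set';
server {
listen 80;
userid on;
userid_name uid;
access_log /var/log/nginx/userid.log userid_log;
}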
Description
The $uid_set variable is provided by the ngx_http_userid_module and contains the cookie name together with the client identifier that NGINX sends to the browser when it issues a new identification cookie for the current request (for example 'uid=D7C4B2A1E8F3...'). It is populated only on responses where the module actually generates and sets a cookie, typically the visitor's first request or after the identifier has been reset, and is empty otherwise, including when the client already presented a valid userid cookie (in that case see $uid_got). The value is determined by the userid family of directives (userid, userid_name, userid_domain, userid_expires, and related settings) rather than by any authentication mechanism, so it should be read as a browser identifier, not a user account. Its usual role is in log formats that record when new identifiers are handed out.
Config Example
server {
listen 80;
server_name example.com;
location / {
userid on;
userid_name uid;
userid_expires 365d;
# $uid_set is best recorded via a custom log_format defined at the http level
access_log /var/log/nginx/userid.log userid_log;
}
}The userid module must be enabled with the userid directive; $uid_set is non-empty only on responses where NGINX actually issues a new identifier cookie.
Using $uid_set in the wrong context (e.g., outside request processing) may lead to unexpected results or errors.
Description
The $uid_reset variable also belongs to the ngx_http_userid_module, but unlike $uid_got and $uid_set it acts as an input: the configuration assigns a value to it (with set or map), and the module reacts to that value. If, when the cookie is processed, $uid_reset holds a non-empty string other than "0", the currently received client identifiers are discarded and new ones are issued to the browser. The special value "log" does the same and additionally writes messages about the reset identifiers to the error log. Leaving the variable unset or set to "0" means normal behavior: existing cookies are kept. This makes it possible to force re-identification of visitors under conditions the administrator chooses, for example when a tracking scheme changes, when a particular query argument is present, or for requests from certain networks, without touching application code. It has nothing to do with allow/deny access control; it only governs whether userid cookies are reissued.
Config Example
http {
# a hypothetical ?reset_uid=1 query argument forces the userid cookie to be reissued
map $arg_reset_uid $uid_reset {
default 0;
"1" 1;
}
server {
userid on;
userid_name uid;
location / {
proxy_pass http://backend;
}
}
}Assign the value with set or map before the userid module processes the cookie; merely reading $uid_reset has no effect.
The special value "log" also writes information about the reset identifiers to the error log; any other non-empty, non-zero value resets them silently.
Description
The $gzip_ratio variable is computed by NGINX's gzip module and expresses the achieved compression ratio for a response, calculated as the original (uncompressed) response size divided by the compressed size. It is only available when the response was actually compressed on the fly by the gzip filter; in that case a value such as 3.50 means the original body was three and a half times larger than what was sent on the wire, and higher numbers indicate more effective compression. A ratio close to 1.00 means compression gained almost nothing, which is typical for content that is already compressed (JPEG images, ZIP archives, most video), and such types are usually excluded from gzip_types to save CPU. Because the ratio is only known once the response has been compressed, the variable is meant for logging, most commonly as a field in a custom log_format, rather than for request-time decisions. Administrators use it to verify that gzip settings (gzip_types, gzip_min_length, gzip_comp_level) are delivering the expected bandwidth savings.
Config Example
gzip on;
gzip_min_length 1024;
log_format gzip_stats '$remote_addr "$request" $status $body_bytes_sent gzip_ratio=$gzip_ratio';
access_log /var/log/nginx/access.log gzip_stats;
$gzip_ratio only reflects values when gzip compression is applied; if gzip is not enabled, it will not be set.
Using $gzip_ratio before gzip compression starts will yield empty or default values, causing potential miscalculations in logging or response handling.
Keep in mind that $gzip_ratio is only relevant for at least HTTP/1.1 clients, as older protocols may not support gzip negotiation.
Description
The variable $limit_conn_status reports what the connection-limiting machinery decided for the current request. It is available since NGINX 1.17.6 and is set when a request passes through a limit_conn directive backed by a zone declared with limit_conn_zone. Possible values are PASSED (the request stayed within the configured limit), REJECTED (the request was refused because the limit was exceeded), and REJECTED_DRY_RUN (the limit was exceeded but limit_conn_dry_run is enabled, so the request was still allowed through and only accounted). If no limit_conn applies to the request, the variable is empty. Because rejected requests are answered by NGINX itself with the status configured by the limit_conn_status directive (503 by default), the variable's main purpose is observability: including it in a custom log format makes it easy to see which clients are hitting the limits and to validate dry-run tuning before enforcing limits in production.
Config Example
http {
limit_conn_zone $binary_remote_addr zone=addr:10m;
log_format limits '$remote_addr "$request" $status conn_limit=$limit_conn_status';
server {
limit_conn addr 1;
location / {
access_log /var/log/nginx/access.log limits;
}
}
}The limit_conn_zone directive must be declared in the http context before the zone it defines is referenced by limit_conn.
The variable is set while the limit is evaluated during request processing, so it is primarily useful in log formats rather than in rewrite-phase if conditions.
When a request is rejected, NGINX already returns the code configured by the limit_conn_status directive (503 by default), so no extra handling is normally required.
Description
The $limit_req_status variable reports the outcome of request-rate limiting for the current request and is available since NGINX 1.17.6. It is not an HTTP status code; its possible values are PASSED (the request was within the rate), DELAYED (the request exceeded the rate and was queued within the configured burst), REJECTED (the request was refused), and the corresponding DELAYED_DRY_RUN and REJECTED_DRY_RUN values when limit_req_dry_run is enabled. If the request never passes through a limit_req directive, the variable is empty. The HTTP code returned to rejected clients is controlled separately by the limit_req_status directive (503 by default, often changed to 429). The variable is therefore most useful for logging and monitoring: recording it alongside the request makes it easy to see how often clients are being delayed or rejected and to tune the rate, burst, and delay parameters with confidence.
Config Example
limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;
log_format req_limits '$remote_addr "$request" $status req_limit=$limit_req_status';
server {
location /api {
limit_req zone=one burst=5;
limit_req_status 429; # HTTP code returned when a request is rejected
access_log /var/log/nginx/api_access.log req_limits;
}
}
Make sure the limit_req directives are configured properly; otherwise $limit_req_status may not be set as expected.
It is best to use this variable in conjunction with logging or custom error handling to get meaningful insights.
The variable only reflects the status for requests processed under rate limiting rules, so ensure the rate limiting is correctly applied.
Description
In NGINX's stream module, the variable $bytes_received holds the number of bytes received from the client over the course of a TCP or UDP session (available since 1.11.4). The counter covers everything the client sent on that connection, which makes it a straightforward basis for traffic accounting and bandwidth analysis. It is maintained per connection and is most commonly written to a stream access log, where it is evaluated once the session ends and therefore reflects the final total; reading it earlier yields the amount received so far. Typical values range from a few bytes for short probes to many megabytes for long-lived or bulk-transfer sessions, depending entirely on the traffic the stream server proxies.
Config Example
stream {
log_format bytes '$remote_addr - $bytes_received bytes received';
server {
listen 12345;
proxy_pass 127.0.0.1:8080;
access_log /var/log/nginx/stream_access.log bytes;
}
}Ensure the stream module is enabled in NGINX, as this variable is specific to it.
The value is accumulated per connection and is only final when the session ends, which is why it is normally recorded via the access log at connection close.
Remember that this variable counts bytes only from the moment a connection is established until it is terminated.
Description
The $session_time variable in NGINX's stream module reports the duration of the client session, in seconds with millisecond resolution (for example 12.007). It is tracked from the moment the connection is accepted until it is closed, and it is most meaningful when the session finishes, which is why it is normally emitted through a stream access log, where it records the total lifetime of each TCP or UDP session. This is useful for monitoring connection behaviour, spotting unusually long or short sessions, and analysing usage patterns in proxied streaming or database traffic. The variable exists only in the stream module; it is not available in HTTP contexts, where $request_time plays the analogous role.
Config Example
stream {
log_format custom_format '$remote_addr - $session_time';
server {
listen 12345;
proxy_pass 127.0.0.1:8080;
access_log /var/log/nginx/stream_access.log custom_format;
}
}Ensure that the Stream module is enabled in your NGINX configuration; otherwise, the variable will not be accessible.
The variable $session_time is only valid during stream processing and will not work with HTTP contexts.
If $session_time is read before the session ends, it reflects the elapsed time so far rather than the final duration; the value logged at connection close is the authoritative one.
Description
The $protocol variable in NGINX's stream module is set automatically for each accepted connection and indicates the transport protocol in use, with the values 'TCP' or 'UDP'. It is determined by how the listening socket was declared (a plain listen for TCP, listen ... udp for UDP) and becomes available as soon as the connection is accepted. Its main uses are logging and conditional handling in deployments where the same stream configuration serves both TCP and UDP traffic, for example DNS proxies, allowing log entries or maps to distinguish the two kinds of sessions.
Config Example
stream {
log_format custom_format '$remote_addr - $protocol';
server {
listen 12345;
proxy_pass backend;
access_log /var/log/nginx/access.log custom_format;
}
}Ensure that the stream block is properly defined; otherwise, the variable will not be set.
The $protocol variable is only available in stream contexts, not in http context or other contexts.
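The stream variables above are frequently combined into a single per-session log line covering protocol, traffic volume, and duration. A sketch for a UDP proxy (the port numbers, upstream address, and format name are illustrative):
stream {
log_format session '$remote_addr $protocol rcvd=$bytes_received sent=$bytes_sent time=$session_time';
server {
listen 53 udp;
proxy_pass 127.0.0.1:5353; # placeholder upstream resolver
access_log /var/log/nginx/stream_sessions.log session;
}
}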
Description
The $remote_passwd variable is part of the NGINX CoolKit module and plays a pivotal role in handling Basic HTTP Authentication. Specifically, it extracts the decoded password that is included in the Authorization header of the HTTP request when a client authenticates with a username and password. When a client sends an authentication request, the username and password are typically base64-encoded and sent in the format 'Authorization: Basic base64(username:password)'. NGINX, with the help of the CoolKit module, decodes this information so that the password can be used in subsequent processing. The variable is set during the processing of HTTP requests that include an Authorization header. If the header is not present, or if it does not follow the expected format for Basic Authentication, the value of $remote_passwd will be empty. This means that in secure applications where authorization and identity verification are important, the variable must be used cautiously, ensuring that the presence of valid credentials is checked before using the password in any logic. In practice, $remote_passwd can be a crucial piece of data when working with backend authentication against databases or services that require username/password verification, as exemplified in the example configurations provided in the module's documentation.
Config Example
location = /auth {
internal;
set_quote_sql_str $user $remote_user;
set_quote_sql_str $pass $remote_passwd;
postgres_pass database;
postgres_query "SELECT login FROM users WHERE login=$user AND pass=$pass";
postgres_rewrite no_rows 403;
postgres_output none;
}Ensure that your web server is configured to handle Basic Authentication properly, as missing headers may lead to an empty $remote_passwd.
Use with care in internal logic to avoid potential security issues. Non-encoded passwords should not be logged or exposed inappropriately.
Depending on the configuration, the Authorization header might be stripped by upstream proxies, resulting in an empty variable.
Description
The $location variable in the NGINX CoolKit Module provides the name of the matched location block for the incoming request. It is evaluated during the handling of an HTTP request whenever NGINX processes location directives in its configuration. As the request traverses through the configuration, NGINX checks each location block to find the most specific match for the requested URI. Once a match is found, the name of the corresponding location block is stored in this variable. Typical values for the $location variable can be exact paths, such as '/auth', or more complex patterns defined by regular expressions. This functionality is beneficial for logging, conditional processing, and customizing responses based on the matched location. The $location variable is particularly useful in configurations where URL rewrites, or access controls, are dictated by the matched location. Being a non-cacheable variable, $location reflects real-time changes in the request-processing context. This ensures that it is dynamically updated as the NGINX processes requests against the defined location contexts, making it essential for scenarios where behavior must vary depending on the matched URI.
Config Example
http {
server {
location = /auth {
internal;
}
location / {
add_header X-Matched-Location $location;
proxy_pass http://backend;
}
}
}Ensure that your location blocks are defined correctly to match desired URIs. Misconfigured blocks can lead to unexpected matches or no matches at all.
Avoid using $location in contexts where it might not yet be defined, such as in server or main configuration contexts.
Description
The $google variable is part of the NGINX Module for Google, specifically designed to facilitate the deployment of Google mirror sites. This variable is dynamically set during the processing of requests that have been configured with the `google` directive in the NGINX configuration. When a request is handled, the variable is populated with relevant information that determines if the request is being processed as a Google mirror, which allows NGINX to adjust its behavior accordingly. This value is typically a string indicating the status of the Google filter and can vary depending on the particular configuration and the specific request being handled. When the `google` directive is enabled in a location block, as shown in the practical example below, the $google variable can change based on whether additional configurations for features like Google Scholar or language settings are activated. It is particularly useful when combined with conditionals or for logging purposes in order to debug or verify that requests are being correctly mirrored as specified. Typical values include enabled states or specific identifiers relevant to the mirror operation, ensuring streamlined handling of Google’s resources.
Config Example
location / {
google on;
error_log /var/log/nginx/google_error.log;
access_log /var/log/nginx/google_access.log;
}Ensure that the `google` directive is properly configured in the location block; otherwise, the variable may not be set as expected.
This variable is sensitive to context and may not work outside of designated contexts like location or server blocks.
Be cautious when using this variable in conjunction with caching mechanisms, as its dynamic nature may lead to stale responses being served if not properly accounted for.
Description
The `$google_host` variable is part of the NGINX Module for Google, specifically designed for creating mirrors of Google services. This variable is dynamically set during a request when the Google mirror functionality is enabled. Essentially, it extracts the host part of the incoming request URL directed at Google, thus allowing servers to manipulate or log the host as needed. It is typically utilized in locations where response manipulations or logging based on the Google host are necessary. When the Google filter is activated, the module hooks into the request lifecycle, capturing the request details including the host information. If the request is made to a valid Google service, this variable will reflect that service's host name (like `www.google.com`, `news.google.com`, etc.). The variable is marked as changeable, meaning it can be adjusted or updated during request processing, allowing for real-time adjustments based on application logic or conditions within different request contexts.
Config Example
location / {
google on;
add_header X-Google-Host $google_host;
}Ensure the Google filter is enabled for the $google_host variable to be populated; otherwise, it will not return a valid value.
Using $google_host outside of appropriate contexts (like server or location) may lead to undefined behavior or incorrect values.
Description
The $google_schema variable is part of the NGINX Module for Google mirror creation and, despite the spelling, refers to the URL scheme rather than to structured-data markup: it reflects whether the mirrored request is being handled over 'http' or 'https'. The module uses this value when it rewrites links and redirects so that the mirrored Google pages keep pointing back at the mirror with the correct scheme. The variable is populated while a request is processed in a location where the google filter is enabled (google on;), so it tracks the scheme of the current connection. It is mainly useful for logging and for emitting diagnostic headers while verifying that a mirror running behind a TLS-terminating proxy generates links with the intended scheme.
Config Example
location / {
google on;
add_header X-Google-Schema $google_schema;
}
}Ensure that the google module is enabled in the server context for this variable to work properly.
If the google filter is not enabled for the matching location, the variable will be empty and responses will not be rewritten.
If using multiple locations, be sure that each location handles the schema variables as intended to avoid conflicts.
Description
The $google_schema_reverse variable is utilized within the NGINX Google filter module to manage the directionality of Google schema implementations during the processing of requests. This variable is set when specific conditions in the NGINX configuration trigger its behavior, such as when certain request URIs or parameters are matched according to predefined patterns in the module's logic. If activated, the variable influences how the schema information is represented in the HTTP response, particularly determining if it should reflect a 'reverse' mapping or interpretation of the default Google schema. Typically, the values of $google_schema_reverse can be either 'true' or 'false', allowing for a straightforward conditional check in configuration files. This enables finer control over how NGINX interacts with requests meant for Google services, particularly when there's a need to present or handle schema data differently depending on the application's requirements, such as compatibility or reverse indexing needs for search engines. This capability ensures that end-users may navigate the mirrored Google experience with the desired schema representation servicing their specific use case.
Config Example
location / {
google on;
if ($google_schema_reverse) {
# Additional configuration for reverse schema processing
add_header X-Google-Schema 'Reversed';
}
}Ensure that the google filter module is properly enabled in the NGINX configuration; otherwise, the variable will not behave as expected.
Check $google_schema_reverse only after the google filter has been enabled for the location, and avoid conflicting location rules; otherwise the conditional logic may never be reached.
Description
The `$ssl_session_reused` variable is part of NGINX's SSL support and indicates whether the TLS session for the current connection was resumed rather than negotiated from scratch. Session resumption, via the server-side session cache or TLS session tickets, skips the full handshake, reducing latency and CPU cost, which matters on servers handling large volumes of HTTPS traffic. The variable returns 'r' when the session was reused and '.' when a new session was established. It is only meaningful for SSL/TLS connections; resumption behaviour is governed by directives such as `ssl_session_cache`, `ssl_session_timeout`, and `ssl_session_tickets`. Monitoring the proportion of 'r' values in the access log is a simple way to check whether the session cache is sized and configured effectively.
Config Example
http {
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
server {
listen 443 ssl;
ssl_certificate /path/to/server.crt;
ssl_certificate_key /path/to/server.key;
location / {
add_header X-SSL-Session-Reused $ssl_session_reused;
}
}
Ensure that SSL is enabled in your NGINX configuration; for plain HTTP connections the variable is empty.
Remember to configure the SSL session cache properly to see meaningful statistics about session reuse.
Avoid using this variable in inappropriate contexts such as non-SSL configurations.
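To measure how effective session resumption actually is, the variable is typically logged next to the negotiated protocol and cipher. A sketch, assuming an SSL server block like the one above (the format and file names are illustrative):
log_format ssl_reuse '$remote_addr "$request" reused=$ssl_session_reused proto=$ssl_protocol cipher=$ssl_cipher';
access_log /var/log/nginx/ssl_reuse.log ssl_reuse;
The share of lines with reused=r indicates how well the session cache or tickets are working.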