http - Caching a POST response with nginx: Should I forward it to the client?
Our system uses POST requests to preload a list of assets. Given the same list of asset identifiers, the server responds with the same list of asset data. Since the list of identifiers can be long (it's a multipart request containing a JSON list), we used POST instead of GET, even though the operation is idempotent.
We use nginx as a reverse proxy in front of these servers. The configuration works, but something about it feels wrong: we return a cache-control: max-age=3600
header in the POST responses we want cached, and then have nginx strip it before returning the response to the client.
RFC 7234 says the method and URI are used as the cache key; using the Vary header seems limited to varying on other headers...
I'm not sure how reliable browser behavior would be here. It "seems" that if I make an HTTP POST response cacheable, it gets cached for "future requests", which is not what I intend.
So, my choices seem to be:
- Return the
cache-control
header, knowing (or hoping?) that there is a reverse proxy in front stripping the header.
- Return the
cache-control
header and let it go through. If someone can explain why that's reliable, this would be simple (or is there a similar header that would be?).
- Do not use
cache-control
at all, and instead "hardcode" the URLs to cache directly in the nginx configuration (I couldn't make that work yet, though).
Is there a reliable approach I can use to achieve what I need here? Thanks a lot for the help.
Here's an excerpt of the nginx configuration, in case it helps someone:
proxy_cache_path /path/to/nginx/cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;

location /path/to/post/request {
    proxy_pass http://remote-server;
    proxy_cache my_cache;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_cache_lock on;
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    proxy_cache_methods POST;
    proxy_cache_key "$scheme$proxy_host$uri$is_args$args|$request_body";
    proxy_cache_valid 5m;
    client_max_body_size 1500k;
    proxy_hide_header Cache-Control;
}
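As an aside on how that proxy_cache_key behaves: nginx hashes the expanded key string with MD5 to name the cache file on disk, and with levels=1:2 the last hex character of the digest becomes the first directory level and the two preceding characters the second. A small sketch of that mapping (the key value below is made up for illustration):

```python
import hashlib

def nginx_cache_path(root: str, key: str, levels=(1, 2)) -> str:
    """Reproduce how nginx maps a proxy_cache_key value to a file path.

    The key is MD5-hashed; directory levels are taken from the end of
    the hex digest (levels=1:2 -> /<last 1 char>/<previous 2 chars>/).
    """
    digest = hashlib.md5(key.encode()).hexdigest()
    parts, end = [], len(digest)
    for width in levels:
        parts.append(digest[end - width:end])
        end -= width
    return "/".join([root, *parts, digest])

# Illustrative key shaped like the proxy_cache_key in the config above
key = 'httpremote-server/path/to/post/request|["asset-1","asset-2"]'
print(nginx_cache_path("/path/to/nginx/cache", key))
```

This is also why including $request_body in the key makes two POSTs with different identifier lists cache independently: their MD5 digests differ, so they land in different files.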