I am currently managing an Open OnDemand instance at my institution, and we keep seeing this error come up:
```
# Bad Request
Your browser sent a request that this server could not understand.
Size of a request header field exceeds server limit.
```
The error is not persistent; it only comes up every once in a while. If you open OnDemand in an incognito tab or a different browser, it works. And if you retry in the problematic browser a few hours later, it also works. I am so confused.
Any idea why this is happening? Any idea how to get around it?
Hello! It sounds like this is something that would show up in the Apache logs (Logging — Open OnDemand 3.1.0 documentation). Is it possible for you to catch this in your error logs and share it?
I did not find anything useful in the logs. In fact, my account’s access.log/error.log were blank. No error/access gzip files were generated with today’s date.
OK, after some research, I’ve found this has to do with the number of cookies being sent via request headers. You can use oidc_state_max_number_of_cookies to increase this limit. You can read more about the variable that’s being set here: mod_auth_openidc/auth_openidc.conf at master · OpenIDC/mod_auth_openidc · GitHub and definitely let me know if you have more questions.
Edit: Forgive me, I misspoke - you want to decrease the max number of cookies so that the request header size is smaller.
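For anyone finding this later: if you manage Apache through OOD's `ood_portal.yml`, the mod_auth_openidc directive can be set under `oidc_settings`. A sketch, assuming the standard ood-portal-generator setup; the value `"5 false"` here is just an example, not a recommendation:

```yaml
# /etc/ood/config/ood_portal.yml (fragment)
oidc_settings:
  # Cap the number of mod_auth_openidc state cookies so that the
  # Cookie request header stays under Apache's header size limit.
  # The second token controls whether the oldest state cookie is
  # deleted when the cap is reached (the module default is "7 false").
  OIDCStateMaxNumberOfCookies: "5 false"
```

After editing, regenerate the portal config (e.g. with `update_ood_portal`) and restart Apache for the change to take effect.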
Sorry to loop back in so much later. The original solution mostly worked, with a caveat.
I took your previous advice and lowered oidc_state_max_number_of_cookies from 10 to 5. This made the error less prevalent, but it still comes up every once in a while. Can you tell me more about the effect of this variable?
If I lower it all the way down to 1, how would the user's experience be affected?
We’ve seen this often too, with Azure OIDC auth and the default oidc_state_max_number_of_cookies. The mod_auth_openidc default appears to be "7 false", so I am interested to know whether lowering it would eliminate, or at least reduce, the prevalence of these errors for users.
For us, "5 true" reduced it significantly, to the point that it is no longer much of an issue. I would like to eliminate it completely at some point, but for now "5 true" has made it a very uncommon error.
On the off chance it does come up, we advise users to clear their browser cache/cookies, and everything goes back to normal.
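If you want to confirm the cause rather than wait for a user to hit it, one quick check is to send a deliberately oversized Cookie header at the portal. Apache's default LimitRequestFieldSize is 8190 bytes, so a ~9 KB cookie should reproduce the 400. A sketch, with a made-up hostname:

```shell
# Send a ~9 KB Cookie header; anything over Apache's default
# LimitRequestFieldSize (8190 bytes) should trigger the same
# "Bad Request" response and print "400".
curl -s -o /dev/null -w '%{http_code}\n' \
  -H "Cookie: big=$(head -c 9000 /dev/zero | tr '\0' 'a')" \
  https://ondemand.example.edu/
```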
I’ll play with this number on our test instance and update this thread if I find something better.
Unfortunately, now I just get a similar error from the nginx backend.
It took me a while to figure out how to achieve the same in the PUN (Per-User Nginx) configs.
First, I tried using a pun_pre_hook_root_cmd script to modify the "http" context of the PUN config. This was not effective, however, because the PUN config seems to be (re)generated after the script is triggered.
Then I found a much simpler solution:
Create the file `/var/lib/ondemand-nginx/config/apps/sys/nginx_server_settings.conf` containing the line `large_client_header_buffers 4 32k;`
At the end of each PUN config, app configs are included inside the "server" context, which also accepts the large_client_header_buffers directive:
```nginx
# Include all app configs user has access to
include /var/lib/ondemand-nginx/config/apps/dev/MyUserName/*.conf;
include /var/lib/ondemand-nginx/config/apps/usr/*/*.conf;
include /var/lib/ondemand-nginx/config/apps/sys/*.conf;
}
}
```
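Putting it together: the extra file picked up by the sys include is just a one-line nginx fragment. The 4 x 32k buffer sizing is what worked for us; tune it to your environment:

```nginx
# /var/lib/ondemand-nginx/config/apps/sys/nginx_server_settings.conf
# Raise the per-connection header buffers so that large Cookie
# headers no longer trigger nginx's 400 "Request Header Or Cookie
# Too Large" response from the PUN.
large_client_header_buffers 4 32k;
```

Since this lands in the server context of every user's PUN, no per-user changes are needed; restarting the PUN (e.g. via the dashboard's "Restart Web Server" help option) picks it up.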