Scaling Best Practice

Is there advice on best practices for scaling Open OnDemand? We are currently running a single Open OnDemand server, but at times we have seen pretty high load. Have others implemented multiple OOD servers behind a load balancer, or taken another approach to manage high demand?

Yes, others have run OnDemand behind a proxy. I believe the only caveat is that you need to use sticky sessions. We're still looking for someone to contribute documentation on this, since we don't run OnDemand behind a proxy ourselves.
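
For anyone who wants a starting point, a minimal sketch of cookie-based sticky sessions in HAProxy might look like the following. The hostnames, IPs, and certificate path are placeholders, and this is untested on our end, not official documentation:

```
# Rough sketch only: HAProxy in front of two OOD hosts with cookie-based
# sticky sessions. IPs, hostnames, and the cert path are placeholders.
frontend ood_frontend
    bind *:443 ssl crt /etc/haproxy/certs/ood.pem
    default_backend ood_backend

backend ood_backend
    balance roundrobin
    # The inserted cookie pins each browser session to one OOD node
    cookie OODNODE insert indirect nocache
    server ood1 192.0.2.11:443 check ssl verify none cookie ood1
    server ood2 192.0.2.12:443 check ssl verify none cookie ood2
```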

Keep in mind that the high load you are seeing is likely from the Passenger “bug” that is in the process of being corrected in upcoming OOD versions.

Yep, we've seen similar load issues too. If you're working in a research- or HPC-heavy environment, scaling out with multiple OOD servers behind a load balancer definitely helps keep things smoother during peak usage.

I should note that the patch I mentioned has been merged and is included in the most recent OOD release.

Thanks, @mjbludwig, for looping back! Yes, we did get the Passenger patch added to 4.0.5, and we actually backported it into 3.1.13 as well.

Hello @edsills, I have scaled ours past a single-server setup. We currently have three VMs with HAProxy as the load balancer. For our setup I used OpenID Connect with Dex and configured Dex against LDAP, which required adding an etcd VM as shared storage for Dex and a Redis VM for OIDC. A rough sketch of the Dex side is below.
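
Here is a minimal sketch of what that Dex configuration could look like, with etcd as the shared storage backend and an LDAP connector. The issuer URL, hostnames, DNs, secrets, and the `ondemand` client entry are all placeholders, not our actual values:

```yaml
# Minimal Dex sketch: etcd as shared storage, LDAP as the connector.
# All hostnames, DNs, and secrets below are placeholders.
issuer: https://ood.example.org/dex

storage:
  type: etcd
  config:
    endpoints:
      - http://etcd.example.org:2379

connectors:
  - type: ldap
    id: ldap
    name: LDAP
    config:
      host: ldap.example.org:636
      bindDN: cn=dex,ou=services,dc=example,dc=org
      bindPW: changeme
      userSearch:
        baseDN: ou=people,dc=example,dc=org
        filter: "(objectClass=posixAccount)"
        username: uid
        idAttr: uid
        emailAttr: mail
        nameAttr: cn

staticClients:
  - id: ondemand
    name: Open OnDemand
    secret: changeme
    redirectURIs:
      - https://ood.example.org/oidc
```

With etcd holding Dex's state, all three OOD VMs can point at the same Dex instance (or Dex can itself be run on more than one node) without each keeping its own local database.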