Experiences with Minimum Hardware Requirements

Hi OSC/OOD Community:

https://osc.github.io/ood-documentation/master/requirements.html#hardware-requirements

Do these recommendations still stand? I'd like to hear what hardware other sites are using to stand up their production resources for running OOD 1.8.

We plan to look at supporting a unique OOD instance per cluster, since we have different storage mounts (and different $HOME directories) for each.
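To make that concrete, here is a rough sketch of what we are picturing: each instance would carry only the clusters.d entry for the one cluster it serves, so its mounts and $HOME line up with that cluster. The hostname, cluster name, and Slurm adapter below are placeholders for illustration, not our actual configuration:

```yaml
# /etc/ood/config/clusters.d/cluster_a.yml
# Only cluster definition present on this particular OOD instance (names are placeholders).
---
v2:
  metadata:
    title: "Cluster A"
  login:
    host: "cluster-a.example.edu"   # login host for this cluster
  job:
    adapter: "slurm"                # scheduler adapter used by this cluster
    cluster: "cluster_a"
    bin: "/usr/bin"
```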

Obviously every site is different, but has any site needed to implement multiple instances, or needed to adjust a VM or bare-metal resource running OOD after the initial deployment? Are there any general scalability concerns we should plan for?

Thanks,
Kevin

P.S. After posting this I realized a similar query came up here:

It depends on what your users do, but since OOD is basically relaying (as opposed to processing) information, the demands on it are not that high. We run a dual-core 2.8GHz AMD VM with 16GB of RAM, and that doesn't seem to get itself in trouble with an average of 45 unique users (~200 user processes) during the day and about half that overnight. We have about 10GB free in /tmp. We started back in mid-2017 with a single-core 4GB system, and we weren't sure anyone would use it :blush:. CPU utilization is low (~800 seconds of user time per hour on average), though we do see 3000-4000 seconds per hour during prime time. Our single instance serves 3 clusters.

Cheers,

Ric

The recommendations are still appropriate as far as OSC is concerned. We have 4 production instances of OOD (all branded/configured a bit differently), all connected to the same 3 clusters, and each running on a VM. Our classroom-oriented instance regularly sees peaks of 200+ users simultaneously connected to it.