Install on single server

Is there anyone here who has installed Open OnDemand and all associated pieces on a single server? We have a small server with two six-core processors, 384 GB of RAM, and 4 GeForce RTX 2080 GPUs. It's running CentOS 7.

I'm new to all of this. I currently have this server, which hasn't seen use yet, set up with JupyterHub, Singularity, and Slurm. Since this is the same node users log in to, I have concerns about them using ssh to run jobs and bypassing Slurm. I have JupyterHub configured to launch user notebooks through Slurm. My department chair is worried about making sure jobs are queued so resources are allocated fairly. I'm also not certain about C/C++ programs that use CUDA and then create a visualization that needs to be launched in a window (as some of the CUDA samples do). I have found that applications like that cannot be launched via X forwarding since they use OpenGL.
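For context, running JupyterHub notebooks through Slurm is commonly done with the batchspawner project; if that's your setup, the config is roughly along these lines (a sketch only — the partition name and resource values below are placeholders, and it assumes the jupyterhub-batchspawner package is installed):

```python
# jupyterhub_config.py -- minimal sketch, assuming jupyterhub-batchspawner
c = get_config()  # provided by JupyterHub when it loads this file

# Launch each user's notebook server as a Slurm job rather than a local
# process, so the scheduler enforces resource limits
c.JupyterHub.spawner_class = 'batchspawner.SlurmSpawner'

# Illustrative resource requests -- adjust to your partition and hardware
c.SlurmSpawner.req_partition = 'main'   # placeholder partition name
c.SlurmSpawner.req_runtime = '8:00:00'
c.SlurmSpawner.req_memory = '16G'
```

With something like this in place, a user who opens a notebook is consuming a Slurm allocation just like a batch job would, which addresses the fairness concern for the notebook path (though not for plain ssh sessions).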

I ran across Open OnDemand in some of my searches and wondered if it might solve my problem of controlling users' access a bit more tightly. I'd appreciate any info from anyone willing to discuss this with me. I could possibly create an Open OnDemand front end on a VM (I say possibly because it would require some cooperation from the IT folks), as long as users could still store all their data on this single server, since that is where the major disk storage would be.


(Sorry for the last edit; there's a lot to unpack here and I just hit Ctrl+Enter too soon.)

Hi, and welcome!

It seems you have two routes here. One is to get serious about cgroup allocations on this machine to limit what any one user can do; that would apply both to ssh sessions and to processes launched through OOD. This is beyond my Linux administration capabilities, but your IT folks may be quite good at it. You could install OOD on this compute machine and still limit ssh access to localhost only, so folks can't log in from the outside world, but they can get a shell through the OOD web server.
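For reference, one common way to do per-user cgroup limits on a systemd-based distro is a drop-in for the user slice, along these lines (a sketch only — the limit values are placeholders, and directive support varies by systemd version; CentOS 7 ships an older systemd on cgroup v1, so the `user-.slice` templating shown here may need per-UID drop-ins there instead):

```shell
# Sketch: cap every interactive user session via a user-slice drop-in.
sudo mkdir -p /etc/systemd/system/user-.slice.d
sudo tee /etc/systemd/system/user-.slice.d/50-limits.conf <<'EOF'
[Slice]
CPUQuota=200%      # at most two cores' worth of CPU time per user
MemoryLimit=32G    # cgroup v1 directive; MemoryMax= on cgroup v2
EOF
sudo systemctl daemon-reload
```

The nice part of this route is that it catches resource use from any entry point, ssh included, without requiring every workload to go through the scheduler.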

The other seems to be to block ssh access to it and install OOD as a facade (this is the general use case we see at sites, where OOD is just a GUI for a login node and users can't generally ssh into compute nodes). Now, users will likely still want to ssh somewhere and have a terminal session (they just love a CLI!), so you're likely going to want the OOD web server to be a machine users can ssh into, rather than this larger machine you have.
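If you go the facade route, restricting sshd on the big machine so that only connections coming through the OOD host (and localhost) are accepted can be as small as one sshd_config line (a sketch; the address below is a placeholder for your OOD server's IP):

```
# /etc/ssh/sshd_config on the compute/storage server (sketch)
# 10.0.0.5 stands in for the OOD web server's address
AllowUsers *@10.0.0.5 *@127.0.0.1
```

Everything else about the accounts stays intact — users keep their home directories and data on the big server, they just can't open a shell on it directly from the outside.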

We tend to promote Open OnDemand as an additive convenience in your stack, but you're right that we can act as a proxy to your compute node(s). That's especially useful when a GUI is involved, because we can proxy to a node that you won't allow folks to ssh into (if, say, they wanted to X11 forward).
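On the OpenGL point specifically: the usual workaround inside a remote desktop session is VirtualGL, which redirects an application's OpenGL rendering to the GPU while the finished frames travel over VNC. A sketch of what that looks like from inside such a session (the `particles` binary here is just an illustrative CUDA sample name):

```shell
# Inside a VNC-backed remote desktop session (sketch):
# vglrun sends the app's OpenGL calls to the GPU; the rendered 2D
# result is what gets carried back over the VNC connection.
vglrun ./particles
```

That's the mechanism OOD interactive desktop apps typically lean on for GPU-accelerated visualization, so those CUDA samples that fail over plain X forwarding can still run.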

As to libraries and the like, that's easily handled by containers or modules.
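Since you already have Singularity on the box, the CUDA side of that is mostly the `--nv` flag, which bind-mounts the host's NVIDIA driver libraries into the container (a sketch; the image tag is illustrative):

```shell
# Pull a CUDA-enabled image (tag is illustrative) and run a GPU
# command inside it; --nv maps the host NVIDIA driver stack in.
singularity pull docker://nvidia/cuda:11.4.3-devel-centos7
singularity exec --nv cuda_11.4.3-devel-centos7.sif nvidia-smi
```

That keeps CUDA toolkit versions inside images rather than on the host, so different users can build against different CUDA releases on the same machine.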

Hope that helps!

I edited my first post, so I’m re-commenting just in case you didn’t get the email/update for the edit.

Hi, just checking in. Is there anything else we can help you with?

Hey, I'm going to mark this topic as solved, but please reach out again if you have more questions!