Cannot log out or use two consecutive users in the Open OnDemand development container

Hi all,

I’m following the instructions to run Open OnDemand in a container, with good results (I’m also amazed by the capabilities of Ruby! It is my first contact with that language). Now I’m playing around with it a bit and I want to check some things with another user, but I can’t log out by pressing the logout button, and I can’t log in with another user at the same time either (also, the bind mounts point to the first user logged in), so it seems I can’t do multi-user testing in that container.

Is there any way to have the container accessible to two or more users?

Thank you in advance for your time

Elisabeth

Hi, thanks for the post!

The container is designed for a single user, largely to focus on development work on OOD itself.

What is the goal you’re after in setting up the second user? Are you just taking things for a spin to see it, or is there something specific you are trying to understand or set up?

Hi Travis,
I’m one of the engineers behind a European project called HEROES, and I’m currently evaluating Open OnDemand to be used as a GUI that lets users send jobs using a pipeline tool. I’m using the development version so I can modify the code to accommodate Open OnDemand to our workflow. Our to-do list (better said, wish list) is:

  • Changing the authentication screen to one that uses our own API and retrieves some information about the user at that stage (e.g. the clusters available to them, their work folders, the status of their jobs, etc.)
  • Adding a customized app to launch the pipeline tool (Nextflow).
  • Modifying the Job status screen to show the exit status of the different steps of the pipeline, not only the result of the execution of the complete pipeline.
  • Showing the remote folder from MinIO or a shared drive (that is an extra if we finish all the previous steps on time…)

If, using this container, we are not able to have more than one user running the platform at the same time, we will have to think of other ways to use Open OnDemand. Also, OOD should be containerized because the complete solution (which has at least 7 more “pieces”) will be orchestrated by Kubernetes.

Do you have any hint on which would be the most suitable way to deploy/execute/containerize Open OnDemand for our purposes?

Thanks for the context. First, Ruby is an absolutely splendid language and I hope you enjoy it as much as I do.

Secondly, the TL;DR of this answer is: we have never deployed OOD to Kubernetes, so we don’t have that recipe available, and indeed I suspect it will be hard to do and get right.

Though you may not be the first to blaze this trail, I don’t know who has, other than from inference. This is the only one I could find.

Here’s why this is difficult: we really expect you to be you, and we’ve set up some infrastructure around that and around the assumption that OOD is not containerized at all and is indeed running on a real OS. That is, we expect you to have the UID/GID(s) of a normal, regular, non-privileged user. You may see Per User NGINX (PUN) in the documentation; we utilize NGINX’s ability to fork a process and set its effective UID & GID.
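To make that concrete, here is a minimal Ruby sketch (not OOD’s actual code) of the general pattern: a root-owned master forks a child and drops the child’s effective GID/UID to a regular account before doing anything on that user’s behalf. The spawn_as helper and the exec of id are purely illustrative.

require 'etc'

# Minimal illustration: a privileged parent forks a child and drops privileges
# to the named account. This only works when the parent runs as real root.
def spawn_as(username)
  pw = Etc.getpwnam(username)             # look up the user's UID and primary GID

  Process.fork do
    Process.initgroups(username, pw.gid)  # pick up the user's supplementary groups
    Process::GID.change_privilege(pw.gid) # drop the group first...
    Process::UID.change_privilege(pw.uid) # ...then the user, while we still may

    # From here on the child runs as the unprivileged user; exec whatever
    # should serve that user's requests (here just `id` to show the result).
    exec 'id'
  end
end

spawn_as('johrstrom') if Process.uid.zero?
Process.waitall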

Here’s the ps -elf output I just pulled from our test system; these are what my (user johrstrom’s) PUN processes actually are. A couple of takeaways: the master process is being run by root. Real root, because it’s a real OS. Because it’s real root, it can make the system calls to set the effective UID & GID to me (uid=30961(johrstrom) gid=5515(PZS0714)).

The other takeaway is that some processes’ parent PID is 1, i.e. systemd. systemd doesn’t like being in a container, but there are ways around that. The important bit really is that systemd is doing the heavy lifting in terms of forking processes, and it is very good at that on real systems. I always use podman for unprivileged users - plus those containers don’t interact with external systems (they’re really only for development) - so I don’t know how good that forking story is without systemd.

4 S johrstr+  54856      1  0  80   0 - 100183 poll_s 13:11 ?       00:00:00 Passenger watchdog
0 S johrstr+  54859  54856  0  80   0 - 378357 poll_s 13:11 ?       00:00:00 Passenger core
5 S root      54878      1  0  80   0 - 30614 sigsus 13:11 ?        00:00:00 nginx: master process (johrstrom) -c /var/lib/ondemand-nginx/config/puns/johrstrom.conf
5 S johrstr+  54879  54878  0  80   0 - 33309 ep_pol 13:11 ?        00:00:00 nginx: worker process

So that’s the big hurdle with having everything bundled in one container - having all those real LDAP users like johrstrom in a container. What’s more, you’ll likely have to boot the container as real root so that it can continue to boot PUNs with the right UIDs. And that gives me a bit of pause, because the containers have to be privileged in this way.

Conversely, you may be wondering: well, why not boot all the PUNs in their own Kubernetes pods and have Apache route to that k8s pod instead of a local PUN? We currently do this routing through a simple file check. I’m the REMOTE_USER named johrstrom (we got that from our OpenID Connect provider), and Apache uses a simple file-structure scheme to find the unix socket (a local file) that my PUN is listening on, namely /var/run/ondemand-nginx/johrstrom/passenger.sock. So if you were to try to break up these components (Apache and NGINX), you’d have to solve how to route requests; see the sketch below.
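Here’s a small Ruby sketch of that lookup, assuming only the socket path convention shown above; the helper names are mine, not part of OOD, and the real Apache routing logic is of course different.

require 'socket'

# Map the authenticated REMOTE_USER onto the per-user socket path convention
# described above, then check whether a PUN is actually listening there.
def pun_socket_for(remote_user)
  "/var/run/ondemand-nginx/#{remote_user}/passenger.sock"
end

def pun_running?(remote_user)
  sock = pun_socket_for(remote_user)
  return false unless File.socket?(sock)

  UNIXSocket.new(sock).close   # the file may exist while the PUN is down, so try to connect
  true
rescue SystemCallError
  false
end

puts(pun_running?('johrstrom') ? 'route to the existing PUN' : 'need to start a PUN first')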

The easiest way to deploy is on a VM. Sorry if that’s not what you’re looking for. It’s probably not impossible to fully containerize Open OnDemand; you’d just be forging a path where we haven’t yet.

Hope that helps!
