rserver in the Singularity container vs rstudio-server installed from RPM

Hi. We are trying to install the RStudio app following the successful model of Jupyter.
After starting a session the output log has:
FATAL: exec /.singularity.d/actions/run failed: no such file or directory
The singularity container was created using “singularity pull”.
It seems to call “rserver” but we installed RStudio on each node using the rpm which uses “rstudio-server” as the executable.
Am I missing something obvious?
Thanks
Chris

@chrisreidy If you installed the RPM located at https://download2.rstudio.org/rstudio-server-rhel-1.1.463-x86_64.rpm, then rserver lives at /usr/lib/rstudio-server/bin/rserver, so you need to append that directory to your PATH.
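For example (a sketch assuming the default install prefix used by that RPM; adjust the path if your site installs elsewhere):

```shell
# Prepend the RStudio Server bin directory from the RPM layout.
export PATH="/usr/lib/rstudio-server/bin:$PATH"

# rserver should now resolve; print a hint if the RPM isn't installed.
command -v rserver || echo "rserver still not found (is the RPM installed?)"
```

In an OOD app you would typically do this in the app's script template rather than in a login shell.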

After sitting on this thing with strace, I think I’ve finally found the root cause of

FATAL: exec /.singularity.d/actions/run failed: no such file or directory

which is what started our hunt for rserver. The real problem was that the downloaded

rserver-launcher-centos7.simg

behaves in strange, wonderful ways when the host OS’s (CentOS 6) /usr is bound over the image’s /usr, as script.sh.erb does. Rebuilding the image from the “def” file provided in the docs, but changing the OSVersion to 6, resolved the FATAL error above. Now to see if we can get further…
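For reference, the change amounts to editing the bootstrap header of the def file. This is a sketch based on the stock Singularity yum bootstrap format; the MirrorURL shown is illustrative and should match whatever the docs’ def file uses:

```
BootStrap: yum
OSVersion: 6
MirrorURL: http://mirror.centos.org/centos-%{OSVERSION}/%{OSVERSION}/os/x86_64/
Include: yum
```

With OSVersion matching the host’s major release, the host /usr bind no longer mixes library versions from two different OS releases.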

Ric

@azric Knowing that the host is CentOS 6, that makes sense. I am in the process of updating our docs and examples to point out that the guest should be the same OS type and major version as the host.

@rodgers.355 Requiring the same OS version between the OnDemand server and the target cluster is a needless constraint that would limit the usefulness of OOD. Because the constraint isn’t present today, we were able to add OnDemand as a new way to access our existing CentOS 6 cluster. If there had been a requirement for “same OS underneath OOD as is used on the cluster”, we probably would not have deployed OOD at all; I tried OOD on CentOS 6, and it wasn’t workable because of the state of the tools in SCL for CentOS 6.

I see this as an ongoing problem as well. At some point, we’ll have an additional cluster running who knows what (CentOS 7, 8, or ?), and that really shouldn’t affect OOD one way or the other, as long as sbatch/qsub/… works.

Ric

Agreed that the web host and compute hosts should be permitted to run separate OSes; I meant that the compute host and the Singularity guest will probably need to be the same.

I’m curious as to why this is being done. Isn’t a major point of Singularity to insulate processes from the underlying distribution?

@michaelkarlcoleman Agreed that this is an odd way to do things, but Singularity is being used to isolate the process from the underlying OS. Specifically, Singularity is used to provide a per-user /tmp so that multiple RStudio servers can run concurrently without running into permissions problems; RStudio Server hard-codes a few paths (ServerConstants.hpp) that would otherwise collide. Our original solution used proot to accomplish the same thing. By binding in almost everything (except /tmp) from the host, users have access to all the build tools and libraries that a given compute host would normally provide, without admins having to spend time customizing the container.
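A minimal sketch of that per-user /tmp trick (the image name and rserver path below are illustrative; the real invocation lives in script.sh.erb):

```shell
# Give each user a private tmp directory on the host...
HOST_TMP="/tmp/rstudio-${USER:-$(id -un)}"
mkdir -p "$HOST_TMP"

# ...then bind it over the container's /tmp, so hard-coded paths like
# /tmp/rstudio-server no longer collide between users. Printed rather
# than executed here, since it needs singularity and the image present:
echo singularity exec -B "${HOST_TMP}:/tmp" rserver-launcher-centos7.simg \
  /usr/lib/rstudio-server/bin/rserver
```

Each user sees their own writable /tmp inside the container, while every other host path is bound through unchanged.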

I see–thanks for clarifying. There do indeed seem to be a number of occurrences of /tmp in the repo. I particularly enjoyed this one:

secureKeyPath = core::FilePath("/tmp/rstudio-server").complete(filename);

:slight_smile:

I can imagine alternatives, but I’m not sure any is better. TIL about unshare, though, and it sure looks interesting: linux - Is it possible to fake a specific path for a process? - Unix & Linux Stack Exchange
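For the curious, the unshare approach from that thread looks roughly like this. It is a sketch that assumes unprivileged user namespaces are enabled on the kernel, and the directory names are illustrative:

```shell
# A per-user directory that will stand in for the hard-coded path.
mkdir -p "$HOME/rstudio-tmp"

# Create a private mount namespace (mapping ourselves to root inside it),
# then bind our directory over /tmp/rstudio-server. Only the namespaced
# process and its children see the bind; other users are unaffected.
if unshare --mount --map-root-user true 2>/dev/null; then
  unshare --mount --map-root-user sh -c '
    mkdir -p /tmp/rstudio-server
    mount --bind "$HOME/rstudio-tmp" /tmp/rstudio-server
    ls -ld /tmp/rstudio-server'
else
  echo "unprivileged user namespaces not available here"
fi
```

In principle you would exec rserver at the end of that inner shell instead of ls, getting the same per-user isolation without a container image.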