Mate Desktop in a Singularity container?

At the moment, we have allocated a number of the compute nodes in our cluster as “desktop nodes” for running Mate Desktop sessions and have created custom images for them that include all of the desktop packages. We normally do not install such packages on our compute nodes in order to avoid “bloat” and ensure higher performance. What we would like to do, though, is to allow our users to run the Mate Desktop as a Singularity container so that it could, at least in theory, run on any compute node in our cluster without our having to install all of the many desktop packages on each one, since those would be installed in the container image. Has anyone already done this who would be willing to share some how-to info? (We are currently running OOD v1.5.5, btw.)

Thank you,

Richard

Perhaps the person in this topic below has, but what comes with the RPM doesn’t have it enabled by default.

I have, however, recently opened this ticket because I think it’s a fairly good idea; I just haven’t gotten around to working on it yet.

Yes, we do. The modifications are very minimal. Basically, it’s another copy of /var/www/ood/apps/sys/bc_desktop called something else (we appended _container to ours), with modifications to both the submit.yml.erb and the script.sh.erb files. We also do not have the VNC server installed in the image, so we work around that as well.

Basically, for submit.yml.erb, all we do is load the Singularity environment module, make another alteration to $PATH, and define $WEBSOCKIFY_CMD, since we also don’t have websockify installed in our image. Then in template/script.sh.erb, all we do is change the command in the way it’s described in the above quote. We pass some command-line flags to Singularity, but those are site-specific bind mounts, and could just as easily be handled by singularity.conf as on the command line.

Do you already have the Singularity image? That part I didn’t personally work on, but I bet the definition for it is lying around somewhere. It seems like it’s probably just installing the “MATE Desktop” group or similar, and perhaps the scheduler?

We haven’t built the image yet. I’m still very much on the learning curve with regard to containerization. It is most likely that I’ll be building a container in a Docker Swarm environment, once I have access, then copying the container to our HPC cluster and running it with Singularity. I’m hoping I can just install the same packages we added to the images we used to deploy our custom desktop images. I just wanted to find out if it was actually doable, since I’d seen no discussion of it yet in discourse. I’m hoping we can take this approach and avoid bloating our compute nodes with all the “pretty” packages that user desktop interfaces tend to require. Thanks!

I don’t think you’ll have much trouble, and in fact I would not bother with all of that personally. My start looks like this:

Bootstrap: yum
OSVersion: 7
#DistType: centos
MirrorURL: http://mirror.centos.org/centos-%{OSVERSION}/%{OSVERSION}/os/$basearch/
Include: yum

%post   
    yum -y install epel-release
    yum -y groupinstall 'MATE Desktop'
    yum -y groupinstall 'Compatibility Libraries'
    yum -y install https://github.com/openhpc/ohpc/releases/download/v1.3.GA/ohpc-release-1.3-1.el7.x86_64.rpm
    yum -y install evince eog
    yum -y install glx-utils
    yum -y install systemd-libs
    yum -y install ohpc-slurm-client lmod-ohpc
    <followed by lots of mkdir for local bind mounts>

That’s a pretty basic Singularity definition file, but it’s most of what’s involved. Then it’s just a "singularity build containerfilename.sif containerdef.def" (where the latter is a file containing something like the above). What’s left for me is reconciling the result of that with what’s in our current image, and then seeing whether it works with our current setup, and if not, why not.
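If it helps, the build and a quick smoke test are just a couple of commands; something like this (the file names here are only placeholders):

# build the image from the definition file (as root, or with --fakeroot on newer Singularity versions)
sudo singularity build mate-desktop.sif mate-desktop.def

# quick sanity check that the MATE session binary made it into the image
singularity exec mate-desktop.sif which mate-session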

I’d definitely be interested to see your exact modifications :+1:
How does it work without vnc? I thought that was required to display the desktop connection on the compute node?

@novosirj, we are trying to use the Interactive Desktop with Singularity in the same way you have done. Your posts and directions on the related topic are extremely useful, so thank you so much for that!

When I start the Mate Desktop, I am getting:
Script starting…
Generating connection YAML file…
Restoring modules from user’s default
Launching desktop ‘mate’…
(process:23566): dconf-WARNING **: 18:47:07.198: failed to commit changes to dconf: Error spawning command line “dbus-launch --autolaunch=156661177db84bf3aec13595cc4277c5 --binary-syntax --close-stderr”: Child process exited with code 1
(process:23570): dconf-WARNING **: 18:47:07.213: failed to commit changes to dconf: Error spawning command line “dbus-launch --autolaunch=156661177db84bf3aec13595cc4277c5 --binary-syntax --close-stderr”: Child process exited with code 1

I used the “mate.sh” script from https://raw.githubusercontent.com/OSC/bc_desktop/master/template/desktops/mate.sh. Have you maybe encountered this error during your setup? I made a custom Singularity image similar to yours, with the addition of Websockify and TurboVNC and the respective variables. I use “mate” for the Desktop attribute in “form.yml” and the “basic” template in “submit.yml.erb”, since “vnc” doesn’t work. If you have any idea about the error I am encountering, please let me know.

I hope everyone is staying safe and well!

@npavlovikj If I do a quick Google search for that error, it seems it happens when you can’t write to ~/.dconf or ~/.config.

That said, I’ve found I have to mount /var in the container to avoid other Dbus errors (not this one specifically, but others).

@jeff.ohrstrom, thank you so much for your prompt reply! You were right that it was a binding issue - I already had $HOME in SINGULARITY_BINDPATH, but adding “/run” to the binding solved the issue since that is the location where the dbus stuff on our cluster is.
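In case it helps anyone else hitting the same dconf/dbus warnings, the relevant line now looks roughly like this (the exact paths are of course site-specific):

export SINGULARITY_BINDPATH="$HOME,/var,/run"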

Now, when I start the Interactive Desktop, I am getting:
Script starting…
Generating connection YAML file…
Restoring modules from user’s default
Launching desktop ‘mate’…
(process:10977): dconf-WARNING **: 18:33:28.255: failed to commit changes to dconf: Cannot autolaunch D-Bus without X11 $DISPLAY
** (mate-session:11010): WARNING **: 18:33:28.783: Cannot open display:
Desktop ‘mate’ ended…
Cleaning up…

After this, I added “export DISPLAY=:0.0” to the “mate.sh” script. Although the “X11 $DISPLAY” message was resolved, the display can still not be opened:
Script starting…
Generating connection YAML file…
Restoring modules from user’s default
Launching desktop ‘mate’…
** (mate-session:11404): WARNING **: 18:35:29.375: Cannot open display:
Desktop ‘mate’ ended…
Cleaning up…

The error looks similar to the one described in Module not found launching Desktop. I am indeed using the “basic” template, but that is because when I use “vnc”, the Singularity image is not launched and I am getting “vncserver command is not found”, although the image has “vncserver” in it. Do you maybe have any suggestions on how to proceed and fix this error? If I should open a different issue, please let me know as well.

This topic is fine by me. You need to use the vnc template for sure. Besides that, you may have to add the vncserver directory to your PATH in a script_wrapper. Here’s what I had just recently, because TurboVNC installs into /opt, which is odd. Also, exporting a hard-coded DISPLAY probably isn’t the way to go; I believe we set and export it in the vnc script so that one user can be on display 1, another on 2, and so on.

Here I’ve given it as a part of the cluster.d/ configuration for the cluster so it’s applied to every script all the time, but you could similarly add the batch_connect portion to just the submit.yml.erb of your desktop app.

v2:
  batch_connect:
    vnc:
      script_wrapper: "export PATH=$PATH:/opt/TurboVNC/bin\n%s"
      websockify_cmd: '/usr/bin/websockify'

The appropriate VNC server is required. It doesn’t have to be in the image. We have it on a shared filesystem that is bound into the container.

That’s how we do it, in the submit.yml.erb. We have the following added in:

---
batch_connect:
  header: |
    module use /projects/community/modulefiles
    module load singularity/3.1.0
    export PATH=/projects/community/containers/bin:$PATH
    export WEBSOCKIFY_CMD=/projects/community/containers/bin/websockify
  template: vnc

That added directory is where vncserver lives in the PATH.

We definitely use “vnc” in our submit.yml.erb, as you can see above. I see no modifications to our mate.sh. The meat of it is in /var/www/ood/apps/sys/app_name/template/script.sh.erb.

Basically you change this line:

source "<%= session.staged_root.join("desktops", "#{context.desktop}.sh") %>"

To something like this:

singularity exec -B ... $IMAGE /bin/bash "<%= session.staged_root.join("desktops", "#{context.desktop}.sh") %>"

…where $IMAGE is your Singularity image, and the elided -B arguments are where we do a large number of bind mounts to include critical config directories, scheduler-related paths (Slurm needs /run/munge, for example, and /etc/slurm for the scheduler config), and user filesystems that exist on the compute node outside the container. You could probably handle that differently depending on your needs, but for us it’s fine to have a long list of -B arguments.
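Purely as an illustration (the real list is long and site-specific, and /home and /scratch here are just hypothetical stand-ins for user filesystems), the expanded command ends up looking something like:

singularity exec \
    -B /run/munge -B /etc/slurm \
    -B /home -B /scratch \
    $IMAGE /bin/bash "<%= session.staged_root.join("desktops", "#{context.desktop}.sh") %>"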

@jeff.ohrstrom and @novosirj - thank you both for your comments.
I have followed the instructions in the documentation and here for the Interactive Desktop. I now have the “vnc” template in “submit.yml.erb”, and the binding and the Singularity command in “template/script.sh.erb”.

I have TurboVNC and websockify in the image, and not the compute node, with paths “/usr/bin/” and “/usr/local/bin/” respectively. Within the image itself, I added these paths to PATH and WEBSOCKIFY_CMD, so when I execute the image, both variables have the correct values.

I didn’t set PATH and WEBSOCKIFY_CMD in “clusters.d” since those are relative to the image. When I run the Interactive Desktop with the “vnc” template I am getting:
Setting VNC password…
/var/spool/slurmd/job1998684/slurm_script: line 98: vncpasswd: command not found
Starting VNC server…
/var/spool/slurmd/job1998684/slurm_script: line 107: vncserver: command not found
/var/spool/slurmd/job1998684/slurm_script: line 110: vncserver: command not found
[the two “vncserver: command not found” lines above repeat several more times]
Cleaning up…
/var/spool/slurmd/job1998684/slurm_script: line 27: vncserver: command not found

Because of this error, I thought the Singularity image was not being started at all, since the TurboVNC executables should be accessible from it. That is why I next tried the “basic” template, which led me to the post above. Currently, I use a custom Docker image with Singularity, but I have also tried the .sing and .sif formats, as well as more official TurboVNC images, and the error persists. Is this because I don’t set PATH and WEBSOCKIFY_CMD before the image is executed? Or is there some issue with the image I use? Besides Slurm and Lmod, our cluster nodes don’t have any of the other dependencies for the Interactive Desktop, so the idea is to start the Desktop via a Singularity image that contains all the necessary dependencies.

What I can say is that the person who did our setup appeared to have put vncserver in the Singularity image, later removed it, and put it on the shared filesystem. I don’t know the reason for that, or if there’s a reason it has to be that way. So ours is not in the image. I don’t understand how the vnc template works well enough to know if there’s a reason it won’t work inside the image.

We set PATH and WEBSOCKIFY_CMD in submit.yml.erb in the app directory in /var/www/ood/apps/sys/, as I’d written above. So it’s possible that vncserver/websockify can’t be in the image, but it’s also possible that your PATH is not being kept somehow I guess (though it would seem like /usr/bin would be in any default PATH). @jeff.ohrstrom may know better than I.

I’ve not shared our config up till now because I was having a hard time figuring out what files we actually had to change, without comparing to a stock install. It seems like the only two files we needed to change were:

/var/www/ood/apps/sys/<appname>/submit.yml.erb
/var/www/ood/apps/sys/<appname>/template/script.sh.erb

There doesn’t seem to be anything related to this in /etc at all, and nothing else in the app directory seems to be changed specifically to support Singularity. Ideally, and this was my complaint, I could place replacements for these two files in /etc/ood/apps/ instead of needing to edit the distribution copy. But that’s not possible, so at our site it’s all done by copying /var/www/ood/apps/sys/bc_desktop to /var/www/ood/apps/sys/bc_desktop_container, and then changing the configuration files accordingly in /etc so they live at, for example, /etc/ood/apps/bc_desktop_container instead of bc_desktop. But the <clustername>.yml file has nothing special in it at all.
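Put another way, the whole change amounts to roughly this (a sketch of our setup; the app name and paths are just what we happen to use):

cp -a /var/www/ood/apps/sys/bc_desktop /var/www/ood/apps/sys/bc_desktop_container
# edit submit.yml.erb and template/script.sh.erb in the copy as described above,
# then point the matching config at it (e.g. /etc/ood/apps/bc_desktop_container)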

We also change form.yml in the /var/www/ood/apps/sys/bc_desktop_container, but that’s related to the submission form for a session (default options, memory, etc.); again, nothing in there related to Singularity.

HTH.


OK, I’m sorry it’s taken this long to come to this realization, but I’ve realized this week that this is not as straightforward as I’d thought. Right now you can put all the desktop libraries (MATE, GNOME, and so on) in the container, and that’s fairly straightforward and will work. But you still need vncserver and websockify on the host machine.

The issue is that the batch connect infrastructure is set up to do all of this, but it’s never told to do it in a container. It just executes it all in a regular bash environment, as prep work before executing your script (whatever it may be). Then your singularity command is given after all of this has executed. But containers have one entrypoint, right? Well, we need to force that entrypoint to come before all of our batch connect commands.

That’s the main issue we’re having right now. All of this is being executed outside of any container.

I believe we may be able to hack this through the script_wrapper parameter, if we dump the contents of the script (%s) to a file and then singularity run that file as the entrypoint. It may look something like this, though I don’t know offhand if it works; it could throw the formatting off or any number of other things.

batch_connect:
  template: vnc
  script_wrapper: |
    cat << EOF > container.sh
        %s
    EOF

    singularity run myimage.sif container.sh

This way everything gets executed inside the container.

I’ll give this approach a shot when I get the chance and report back one way or another.

Hi @jeff.ohrstrom,

Thank you so much for your clear and detailed explanation. And I am really sorry for not being more specific earlier about what I was trying to achieve.

I had a chance to try your suggestion:

$ cat submit.yml.erb

batch_connect:
  template: vnc
  script_wrapper: |
    cat << "EOF" > container.sh
    %s
    EOF
    TMPDIR=$(mktemp -d)
    export SINGULARITY_BINDPATH="$HOME,$WORK,/media,/mnt,/opt,/srv,/var,/run,/run/munge"
    singularity run docker://npavlovikj/vnc_test /bin/bash container.sh

I needed to use “EOF” instead of EOF, otherwise the “display” variable was null, which caused an issue with the unary operators.
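For anyone else who hits that: the difference is just heredoc quoting. A minimal illustration (not our actual script):

# unquoted delimiter: variables are expanded while the wrapper writes container.sh,
# so $display (which is only set later, inside the container) comes out empty
cat << EOF > container.sh
echo "display is $display"
EOF

# quoted delimiter: the script is written out literally, and $display
# is expanded only when container.sh actually runs
cat << "EOF" > container.sh
echo "display is $display"
EOF

Running the Interactive Desktop after that produced this: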

Setting VNC password...
Starting VNC server...
WARNING: The first attempt to start Xvnc failed, possibly because the font
catalog is not properly configured.  Attempting to determine an appropriate
font path for this system and restart Xvnc using that font path ...
Could not start Xvnc.
failed to parse log params:vnc.log
Unrecognized option: -log

It still failed, but it is definitely progress. Thank you so much for thinking about this and providing a valuable suggestion!

Do you think the current error is due to version issue? I have tigervnc-server-1.8.0-17.el7.x86_64 installed in my image.

OMG! :tada: I’m kind of surprised that it did work. And to your explanation, it’s completely my fault that I was just not thinking with the smart part of my brain.

That issue is because of a different vncserver (TigerVNC) located at /usr/bin/vncserver. I believe TurboVNC is installed in /opt/TurboVNC, and you probably want to remove tigervnc-server with a yum remove.

Just to be clear, we need turbovnc as the vncserver command. We provide RPMs for both websockify and turbovnc on our yum repo.

https://yum.osc.edu/ondemand/1.6/compute/el7Server/x86_64/
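For example, installing both into an image (or onto a host) from that repo is just a couple of yum installs (the exact versions there will change over time):

yum install -y https://yum.osc.edu/ondemand/1.6/compute/el7Server/x86_64/python-websockify-0.8.0-1.el7.noarch.rpm
yum install -y https://yum.osc.edu/ondemand/latest/compute/el7Server/x86_64/turbovnc-2.2.3-1.el7.x86_64.rpm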

Hi @jeff.ohrstrom,

I was excited that the suggestion worked earlier and didn’t realize I had the wrong TurboVNC version in the image, so I am sorry for bothering you with that. With the correct version, I am getting the following when I launch the Desktop:

Setting VNC password…
Starting VNC server…

Desktop ‘TurboVNC: c1505.rhino.hcc.unl.edu:1 (npavlovikj)’ started on display c1505.rhino.hcc.unl.edu:1

Log file is vnc.log
Successfully started VNC server on c1505.rhino.hcc.unl.edu:5901…
Script starting…
Starting websocket server…
Launching desktop ‘mate’…
echo /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/TurboVNC/bin/:/opt/libjpeg-turbo/bin/:/opt/noVNC/
cat: /etc/xdg/autostart/mate-volume-control-applet.desktop: No such file or directory
WebSocket server settings:

  • Listen on :18956
  • No SSL/TLS support (no cert file)
  • Backgrounding (daemon)
    Scanning VNC log file for user authentications…
    Generating connection YAML file…
    _IceTransmkdir: ERROR: euid != 0,directory /tmp/.ICE-unix will not be created.
    mate-session[49627]: WARNING: Could not parse desktop file /home/deogun/npavlovikj/.config/autostart/mate-volume-control-applet.desktop: Key file does not start with a group
    mate-session[49627]: GLib-GObject-CRITICAL: object GsmAutostartApp 0x558ec2b9cbf0 finalized while still in-construction
    mate-session[49627]: GLib-GObject-CRITICAL: Custom constructor for class GsmAutostartApp returned NULL (which is invalid). Please use GInitable instead.
    mate-session[49627]: WARNING: could not read /home/deogun/npavlovikj/.config/autostart/mate-volume-control-applet.desktop
    mate-session[49627]: WARNING: Unable to find provider ‘’ of required component ‘dock’
    Window manager warning: Log level 16: XPresent is not compatible with your current system configuration.

** (process:49757): WARNING **: 17:16:21.609: /build/indicator-session-MYtBYD/indicator-session-17.3.20+17.10.20171006/src/backend-dbus/users.c:302 on_user_list_ready: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.Accounts was not provided by any .service files

(process:49759): indicator-sound-WARNING **: 17:16:21.837: volume-control-pulse.vala:744: Unable to connect to dbus server at ‘unix:path=/tmp/tmp.G8c6e08VC8/tmp.Jt3rdfT3nM/pulse-PKdhtXMmr18n/dbus-socket’: Could not connect: No such file or directory

(process:49759): indicator-sound-WARNING **: 17:16:21.867: accounts-service-access.vala:151: unable to find Accounts path for user npavlovikj: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.Accounts was not provided by any .service files
no maximize: true
caja-dropbox is not installed, doing nothing.
*** ERROR ***
TI:17:16:23 TH:0x55ebc0605070 FI:gpm-manager.c FN:gpm_manager_systemd_inhibit,1782

  • Error in dbus - GDBus.Error:org.freedesktop.DBus.Error.AccessDenied: Permission denied
    Traceback:
    mate-power-manager(+0x19dff) [0x55ebbeec4dff]
    mate-power-manager(+0x12959) [0x55ebbeebd959]
    /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0(g_type_create_instance+0x1e5) [0x2b59dce929c5]
    /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0(+0x15748) [0x2b59dce73748]
    /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0(g_object_new_with_properties+0x2f5) [0x2b59dce74ee5]
    /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0(g_object_new+0xc1) [0x2b59dce75961]
    mate-power-manager(+0x12da2) [0x55ebbeebdda2]
    mate-power-manager(+0x8792) [0x55ebbeeb3792]
    /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7) [0x2b59dd9afb97]
    mate-power-manager(+0x8b0a) [0x55ebbeeb3b0a]
    Initializing caja-open-terminal extension

(nm-applet:49837): nm-applet-WARNING **: 17:16:23.720: NetworkManager is not running

** (update-notifier:49811): WARNING **: 17:16:23.750: can’t read /var/lib/update-notifier/user.d/ directory
No nvidia-settings and prime-select detected.
RuntimeError: object at 0x2b2912e54870 of type FolderColorMenu is not initialized
RuntimeError: object at 0x2b2912e40f50 of type RenameMenu is not initialized
/usr/lib/python3/dist-packages/blueman/plugins/applet/AppIndicator.py:8: PyGIWarning: AppIndicator3 was imported without specifying a version first. Use gi.require_version(‘AppIndicator3’, ‘0.1’) before import to ensure that the right version gets loaded.
from gi.repository import AppIndicator3 as girAppIndicator
ERROR:dbus.proxies:Introspect error on org.bluez:/org/bluez: dbus.exceptions.DBusException: org.freedesktop.DBus.Error.ServiceUnknown: The name org.bluez was not provided by any .service files
ERROR:dbus.proxies:Introspect error on org.blueman.Mechanism:/: dbus.exceptions.DBusException: org.freedesktop.DBus.Error.ServiceUnknown: The name org.blueman.Mechanism was not provided by any .service files
ERROR:dbus.proxies:Introspect error on org.bluez:/: dbus.exceptions.DBusException: org.freedesktop.DBus.Error.ServiceUnknown: The name org.bluez was not provided by any .service files
ERROR:dbus.proxies:Introspect error on org.bluez:/: dbus.exceptions.DBusException: org.freedesktop.DBus.Error.ServiceUnknown: The name org.bluez was not provided by any .service files
Gtk-Message: 17:16:25.535: GtkDialog mapped without a transient parent. This is discouraged.
INFO:root:The HUD is disabled via org.mate.hud in gsettings.

Now, I can go a step further, and I can actually click on “Launch Desktop Container” from the website, which is amazing! This link opens noVNC, but with the error message “Failed to connect to server”. I wonder if some dependencies are missing from the image, or some bindings as well. I used the Docker image, with additional mounts.

After this, I created another image with the RPMs you shared with me. With that, I am getting:

Setting VNC password…
Starting VNC server…
WARNING: The first attempt to start Xvnc failed, possibly because the vncserver
script was not able to figure out an appropriate X11 font path for this system
or because the font path you specified with the -fp argument was not valid.
Attempting to restart Xvnc using the X Font Server (xfs) …
Could not start Xvnc.
Unrecognized option: -x509cert
use: X [: <display>] [option]

I will double-check the images and the versions used, and see what error is easier to tackle next.

At this point I’m able to replicate the second issue, though I don’t know what’s going on.

The first could be any sort of MATE installation/library load problem, which we’ve seen in different topics on here and throughout Google searches, so it’s never clear what resolves the issue.

Either way, I can replicate the second issue, so I’m working on an image that group-installs ‘MATE Desktop’.

OK, I think I’ve solved this (finally :sweat_smile:).

For the first issue you had, it turns out the easiest thing to do is just to remove mate-power-manager. Users shouldn’t be able to restart the node anyhow, which I think is what that widget does.

The second is something we’d apparently already seen, and I just forgot about it. If you’ll notice, in the definition file there’s a different RPM URL. I was not able to solve the issue with TurboVNC 2.1.1: it says you need the openssl or gnutls libraries, but having both of them installed made no difference, so I just upgraded.

This is my .def file which has MATE + websockify + turbovnc installed in it.

Bootstrap: docker

From: centos:7

%post   
    yum install -y epel-release
    yum groupinstall -y 'MATE Desktop'
    yum install -y python2-pip
    pip install ts
    yum install -y https://yum.osc.edu/ondemand/1.6/compute/el7Server/x86_64/python-websockify-0.8.0-1.el7.noarch.rpm
    yum install -y https://yum.osc.edu/ondemand/latest/compute/el7Server/x86_64/turbovnc-2.2.3-1.el7.x86_64.rpm
    yum remove -y tigervnc-server python2-pip mate-power-manager
    yum clean all
    rm -rf /var/cache/yum/*

This is the submit.yml.erb file (that you would put into, say, /etc/ood/config/apps/bc_desktop/submit/container.yml.erb) I’ve used to get it to work. It’s super similar to yours, only with the websockify command being defined.

<%
  # your image location will differ
  image="/users/PZS0714/johrstrom/Public/images/sing/mate.sif"
%>
---
script:
  native:
    # your native.resources will differ
    resources:
      nodes: "<%= bc_num_slots %><%= node_type %>"
  template: "vnc"
batch_connect:
  websockify_cmd: '/usr/bin/websockify'
  script_wrapper: |
    cat << "CTRSCRIPT" > container.sh
    export PATH="$PATH:/opt/TurboVNC/bin"
    %s  
    CTRSCRIPT

    # your bindpath will differ
    export SINGULARITY_BINDPATH="$HOME,/fs,/srv,/var,/run,/tmp:$TMPDIR"

    singularity run <%= image %> /bin/bash container.sh

So, for all the users in this thread and beyond, I’m able to show that OOD handles this out of the box. The only thing left now is to get some proper documentation in place.
