Systemd user sessions in interactive desktop

I’m not sure if it’s an issue with our configuration, but when users start an interactive desktop session (we use RHEL9 and TurboVNC) there is no associated systemd user session. This causes problems with, for example, podman, which requires a systemd user session. From investigating, one possible solution may be to add systemd-run --user to this line in OOD core: ood_core/lib/ood_core/batch_connect/templates/vnc.rb at cf9f0c421a9029d150161356a96bb0714507eb24 · OSC/ood_core · GitHub

Before I do that, I was just wondering whether this is expected behaviour, or if anyone else has experience with this?

We’re not entirely sure. This was a community contribution that we (OSC) don’t use, so I’m not 100% sure here.

Should I look at contributing a setting for this? I don’t want to affect other user workflows.

Sorry, I’m confusing myself because I thought you’d meant the systemd adapter.

I guess I’d have to see what your proposed solution is. I know for us, we have to set XDG_RUNTIME_DIR for newer systems because we often can’t write to /var/run/$(id -u). Is that perhaps a symptom of the same issue you’re running into?
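For reference, the workaround we use is roughly the following (a sketch only; exactly where it goes depends on your job scripts):

```shell
# Point XDG_RUNTIME_DIR at the per-user runtime directory that
# systemd-logind normally creates; podman and the user session DBus
# socket both live under this path.
export XDG_RUNTIME_DIR="/run/user/$(id -u)"
echo "$XDG_RUNTIME_DIR"
```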

I’d probably start by trying to add systemd-run --user --scope to that line and seeing if anything else needs to change further up the stack. We also have to set XDG_RUNTIME_DIR in some situations with podman.
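Concretely, I was imagining something along these lines in the generated script (a hypothetical sketch, not the actual vnc.rb change; the vncserver path is a placeholder, and this assumes a working systemd user instance on the compute node):

```shell
# Launch the VNC server inside a transient systemd user scope so that
# it and its descendants (the desktop, podman, etc.) are placed in the
# user's systemd slice instead of the batch job's raw cgroup.
systemd-run --user --scope /opt/TurboVNC/bin/vncserver -noxstartup
```

The --scope flag keeps systemd-run in the foreground and runs the command in the caller's environment, which is likely what you want inside a batch script rather than a detached transient service.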

We previously came across this same issue.

Our solution was to handle the creation and lingering of user sessions through Slurm, using loginctl in the prolog and epilog scripts. It’s worked well at our site.

The Slurm documentation describes how:
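In practice, our fragments look roughly like this (hypothetical minimal prolog/epilog sketches; adapt paths and error handling to your site, and note that Slurm runs these as root on the compute node):

```shell
#!/bin/bash
# prolog.sh (Slurm Prolog, runs as root before the job starts):
# ask systemd-logind to keep a user manager alive for the job owner,
# which creates /run/user/<uid> and the user session DBus socket.
loginctl enable-linger "$SLURM_JOB_USER"

# epilog.sh (Slurm Epilog, runs as root after the job ends):
# drop lingering again once the user is done on this node.
# (Checking whether the user still has other jobs on the node is
# site-specific and omitted here.)
loginctl disable-linger "$SLURM_JOB_USER"
```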

Thanks for the pointer. I have followed the steps there, but I’m still seeing the same issue unless I disable pam_systemd.so. Is it possible to do this only for OOD-specific logins?

I’m not sure how you’d do that for just OOD logins.

But reviewing for a minute, what’s your indication that there’s no systemd user session? From a terminal in the VNC session, does loginctl user-status <username> show anything? (Ensure you specify the username to bypass whatever is set in $DBUS_SESSION_BUS_ADDRESS)

I’m also having an issue with podman over TurboVNC on RHEL 9, and I’m wondering if this is the same thing.
My problem: although there is a systemd user session, processes inside the VNC session can’t see it because TurboVNC starts its own DBus session bus.

Hmm, I think you’re right: I’m having the same issue described in your post. I was checking with loginctl and assumed no session existed, but when I include the username I do see a session with lingering enabled for my user.

Perhaps the solution would be to PR additional arguments to the TurboVNC invocation?

I tried adding -userdbus, but it didn’t work. I’m not very familiar with TurboVNC’s internals, however.

I’m not sure what the best approach is. Although TurboVNC has configuration options to use the systemd user DBus, they are ignored because OOD invokes vncserver with the -noxstartup option:

Whether you set $userDBus = 1 in the config file or pass -userdbus as an option, you can see by walking through /opt/TurboVNC/bin/vncserver that those settings only come into play once TurboVNC runs the /opt/TurboVNC/bin/xstartup.turbovnc script. With -noxstartup, that script never runs.

Possible question for @jeff.ohrstrom: is -noxstartup really necessary?
If so, then it’s probably a question for the TurboVNC developers: can -userdbus be decoupled from execution of xstartup.turbovnc?

Is it as simple as setting DBUS_SESSION_BUS_ADDRESS to $XDG_RUNTIME_DIR/bus?

EDIT: looks like no. I’ve also tried enabling xstartup, which died because it expects a window manager to be provided. A blank xstartup script was also to no avail.

Hello, just following up after I had a chance to revisit this issue recently:

We identified a simple workaround by commenting out this line in /var/www/ood/apps/bc_desktop/template/desktops/xfce.sh:

This prevents the creation of a duplicate DBus session, so the default systemd-created session bus at $XDG_RUNTIME_DIR/bus is used instead.
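As a quick sanity check, from a terminal inside the VNC desktop (these are just diagnostic commands, assuming a logind-managed session; exact output will vary by site):

```shell
# With the duplicate dbus-launch removed, the session bus address
# should point at the systemd-managed socket,
# e.g. unix:path=/run/user/<uid>/bus.
echo "$DBUS_SESSION_BUS_ADDRESS"

# And the logind user session should be visible again
# (pass the username explicitly, as noted earlier in the thread):
loginctl user-status "$USER"
```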

Thanks so much for sharing. Have implemented this fix as well.