SELinux + Rocky 9 with local submission to configless SLURM and MUNGE

Hi,

I’m trying to get OnDemand 3.0.3 running on Rocky 9, submitting to SLURM with configless mode enabled and NFS home directories. Out of the box, SELinux denies this even though the appropriate booleans are turned on:

ondemand_manage_user_home_dir --> off
ondemand_manage_vmblock --> off
ondemand_use_kerberos --> on
ondemand_use_kubernetes --> off
ondemand_use_ldap --> off
ondemand_use_nfs --> on
ondemand_use_slurm --> on
ondemand_use_smtp --> off
ondemand_use_ssh --> on
ondemand_use_sssd --> on
ondemand_use_torque --> off
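
For reference, the list above is getsebool output; the booleans were enabled persistently with something like:

# list the OnDemand-related booleans (produces the output above)
getsebool -a | grep ondemand

# persistently enable the booleans this setup needs
setsebool -P ondemand_use_slurm=on ondemand_use_nfs=on \
    ondemand_use_ssh=on ondemand_use_sssd=on ondemand_use_kerberos=on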

The relevant audit log entries are:

type=AVC msg=audit(1702056761.842:27908): avc: denied { write } for pid=184962 comm="sbatch" path=2F6D656D66643A736C75726D2E636F6E66202864656C6574656429 dev="tmpfs" ino=1126 scontext=system_u:system_r:ood_pun_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=file permissive=0
type=AVC msg=audit(1702385994.291:31251): avc: denied { read } for pid=209293 comm="id" name="userdb" dev="tmpfs" ino=41 scontext=system_u:system_r:ood_pun_t:s0 tcontext=system_u:object_r:systemd_userdbd_runtime_t:s0 tclass=dir permissive=0
type=AVC msg=audit(1702385994.291:31252): avc: denied { read } for pid=209293 comm="id" name="userdb" dev="tmpfs" ino=41 scontext=system_u:system_r:ood_pun_t:s0 tcontext=system_u:object_r:systemd_userdbd_runtime_t:s0 tclass=dir permissive=0
type=AVC msg=audit(1702385995.550:31253): avc: denied { read } for pid=209297 comm="PassengerAgent" name="userdb" dev="tmpfs" ino=41 scontext=system_u:system_r:ood_pun_t:s0 tcontext=system_u:object_r:systemd_userdbd_runtime_t:s0 tclass=dir permissive=0
type=AVC msg=audit(1702385995.594:31254): avc: denied { read } for pid=209328 comm="nginx" name="userdb" dev="tmpfs" ino=41 scontext=system_u:system_r:ood_pun_t:s0 tcontext=system_u:object_r:systemd_userdbd_runtime_t:s0 tclass=dir permissive=0
type=AVC msg=audit(1702386011.975:31257): avc: denied { open } for pid=209377 comm="squeue" path="/var/spool/slurmd/conf-cache/slurm.conf" dev="dm-0" ino=33554582 scontext=system_u:system_r:ood_pun_t:s0 tcontext=system_u:object_r:var_spool_t:s0 tclass=file permissive=0
type=AVC msg=audit(1702386017.570:31258): avc: denied { open } for pid=209385 comm="squeue" path="/var/spool/slurmd/conf-cache/slurm.conf" dev="dm-0" ino=33554582 scontext=system_u:system_r:ood_pun_t:s0 tcontext=system_u:object_r:var_spool_t:s0 tclass=file permissive=0
type=AVC msg=audit(1702386051.332:31259): avc: denied { open } for pid=209413 comm="sbatch" path="/var/spool/slurmd/conf-cache/slurm.conf" dev="dm-0" ino=33554582 scontext=system_u:system_r:ood_pun_t:s0 tcontext=system_u:object_r:var_spool_t:s0 tclass=file permissive=0
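
In case it helps anyone following along, these denials can be pulled out of the audit log with ausearch; a sketch:

# recent AVC denials involving the OnDemand per-user NGINX domain
ausearch -m AVC -ts today | grep ood_pun_t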

The ‘userdb’ denials are already discussed in the topic various-selinux-denials-on-rocky-9-with-ood-3/3073.

Configless SLURM downloads the configuration into /var/spool/slurmd/conf-cache, so ood_pun_t needs access to files labelled var_spool_t. sbatch is also being denied write access to tmpfs: the hex path in the first denial decodes to /memfd:slurm.conf (deleted), i.e. the in-memory copy of slurm.conf. The policy lines generated by audit2allow are:

allow ood_pun_t tmpfs_t:file write;
allow ood_pun_t var_spool_t:file open;
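
The usual way to turn those lines into a loadable module is the audit2allow -M / semodule workflow; a sketch, where the module name ood_local is just a placeholder:

# generate a local policy module from the logged denials and install it
ausearch -m AVC -ts today | audit2allow -M ood_local
semodule -i ood_local.pp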

This gets me a little closer, but the SLURM commands are then denied access to the MUNGE socket:

type=SYSCALL msg=audit(1702458006.484:32223): arch=c000003e syscall=42 success=no exit=-13 a0=4 a1=7ffeab69b130 a2=6e a3=0 items=0 ppid=216531 pid=216652 auid=4294967295 uid=YYYY gid=ZZZZ euid=YYYY suid=YYYY fsuid=YYYY egid=ZZZZ sgid=ZZZZ fsgid=ZZZZ tty=(none) ses=4294967295 comm="squeue" exe="/usr/bin/squeue" subj=system_u:system_r:ood_pun_t:s0 key=(null) ARCH=x86_64 SYSCALL=connect AUID="unset" UID="XXXX" GID="XXXX" EUID="XXXX" SUID="XXXX" FSUID="XXXX" EGID="XXX" SGID="XXXX" FSGID="XXXX"
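
The rule audit2allow suggests (below) targets unconfined_service_t, since munged ships without a dedicated SELinux policy and the systemd-started daemon therefore runs unconfined. One way to confirm the peer domain, assuming munged is running under systemd:

# the connectto target is the peer process's domain, not the socket file's label
ps -eZ | grep munged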

allow ood_pun_t unconfined_service_t:unix_stream_socket connectto;

This rule fixes it: jobs are now submitted and partitions are viewable in OnDemand.
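
For completeness, all three rules can go into one hand-written module instead of iterating with audit2allow; a sketch, with ood_slurm_local as a placeholder name:

# ood_slurm_local.te
module ood_slurm_local 1.0;

require {
    type ood_pun_t;
    type tmpfs_t;
    type var_spool_t;
    type unconfined_service_t;
    class file { write open };
    class unix_stream_socket connectto;
}

# sbatch writing the in-memory slurm.conf (memfd on tmpfs)
allow ood_pun_t tmpfs_t:file write;
# SLURM commands reading /var/spool/slurmd/conf-cache/slurm.conf
allow ood_pun_t var_spool_t:file open;
# SLURM commands connecting to the MUNGE socket
allow ood_pun_t unconfined_service_t:unix_stream_socket connectto;

Compile and load it with:

checkmodule -M -m -o ood_slurm_local.mod ood_slurm_local.te
semodule_package -o ood_slurm_local.pp -m ood_slurm_local.mod
semodule -i ood_slurm_local.pp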

Hi and welcome!

Thanks, I’ll file a ticket upstream for this, but I must say pull requests are welcome (especially if you already have the fix)!