I don’t think there is an easy way to modify the Job Composer’s submission arguments, though that does sound like a good idea.
If users can successfully submit the job with sbatch from a login node, but the same sbatch command fails from the web node, there is another approach you could take.
You can provide a wrapper script for sbatch that SSHes to the login node and executes sbatch there. For example wrapper scripts and associated overrides, see https://github.com/puneet336/OOD-1.5_wrappers/tree/master/openondemand/1.5/wrappers/slurm/bin
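A minimal sketch of such a wrapper follows. The login host name (`login1`) and the remote sbatch path are placeholders for your site, and the wrapper is written to /tmp here purely for illustration; you would install it somewhere like /usr/local/bin/sbatch_ssh_wrapper on the web node. OnDemand pipes the job script to sbatch on stdin, which ssh forwards by default.

```shell
# Install a hypothetical sbatch-over-ssh wrapper (paths/hostname are placeholders).
cat > /tmp/sbatch_ssh_wrapper <<'EOF'
#!/bin/bash
# Forward all sbatch arguments, plus the job script arriving on stdin,
# to sbatch on the login node. exec preserves sbatch's exit status
# so OnDemand sees submission failures correctly.
exec ssh -q login1 /usr/bin/sbatch "$@"
EOF
chmod +x /tmp/sbatch_ssh_wrapper
```

One caveat: ssh flattens its arguments into a single remote command line, so arguments containing spaces or shell metacharacters may need extra quoting in a production wrapper.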
See https://osc.github.io/ood-documentation/master/installation/resource-manager/slurm.html. You would deploy your sbatch wrapper script on the web node, for example at /usr/local/bin/sbatch_ssh_wrapper, and then modify the cluster config to use it for sbatch:
```diff
 job:
   adapter: "slurm"
   cluster: "my_cluster"
   bin: "/path/to/slurm/bin"
   conf: "/path/to/slurm.conf"
+  bin_overrides:
+    sbatch: "/usr/local/bin/sbatch_ssh_wrapper"
```
Note that if you do this, it affects how all of OnDemand submits to that particular cluster, not just the Job Composer. Here is a relevant Discourse discussion: Question About Passing Environment Variables for PBS Job