I am using two different servers:
- OoD server (#1)
- Cluster headnode (#2)
Using PuTTY I can stay logged into #2 for hours. Using the OoD console shell on #1, which SSHes into #1, the session only stays open for about a minute.
Could you post the cluster config you have for the login node?
clusters.d/example2.yml
v2:
  metadata:
    title: "prod"
  login:
    host: "10.121.188.63"
  job:
    adapter: "slurm"
    submit_host: "10.121.188.63"
    cluster: "prod-al2"
    bin: "/opt/slurm/bin"
    strict_host_checking: false
    conf: "/opt/slurm/etc/slurm.conf"
  batch_connect:
    basic:
      script_wrapper: |
        module purge
        %s
      set_host: "host=$(hostname -f | awk '{print $1}')"
    vnc:
      script_wrapper: |
        module purge
        export PATH="/opt/TurboVNC/bin:$PATH"
        export WEBSOCKIFY_CMD="/usr/local/bin/websockify"
        %s
      set_host: "host=$(hostname -f | awk '{print $1}')"
I’m surprised this is not in our docs and am having a hard time finding the setting for this.
Have you tried idle_timeout under the job setting? I'm going to keep searching. It can be done; it's just not documented, apparently.
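To be concrete, something like the fragment below, merged into your existing job: block, is what I had in mind. Since the setting isn't documented, the placement and the value are guesses on my part:

v2:
  job:
    adapter: "slurm"
    # idle_timeout is the undocumented setting suggested above;
    # 3600 is an arbitrary value, assumed to be in seconds
    idle_timeout: 3600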
Added idle_timeout. No change, still times out at a minute.
Anything else I can try on this one?
I’m not sure at the moment, but I did find this issue that sounds similar to what you are experiencing:
The code in the link fixed the issue. Many thanks.