Configure partitions as "clusters"

Greetings – Our current Slurm cluster has been administratively divided into two “clusters”, with separate network hosts (name1.case.edu and name2.case.edu) providing access to distinct sets of login nodes. One Slurm partition holds resources for use by people in specific courses (-p class), while other partitions designate ‘batch’, ‘gpu’ and ‘smp’ nodes.

So far, I have duplicated the ‘name1’ cluster configuration and altered it for ‘name2’ to designate the ‘name2.case.edu’ network host. Since both configurations reference the same Slurm controller, the “active jobs” app sees every active job as belonging to both clusters.
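
For reference, the ‘name2’ config is essentially the ‘name1’ config with the login host swapped; roughly like this (the `bin` path here is illustrative):

```yaml
# /etc/ood/config/clusters.d/name2.yml
v2:
  metadata:
    title: "Name2"
  login:
    host: "name2.case.edu"   # the only real difference from name1.yml
  job:
    adapter: "slurm"
    bin: "/usr/bin"          # illustrative path to the Slurm binaries
```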

Is there another field that can be introduced into /etc/ood/config/clusters.d/name2.yml so as to restrict that cluster definition to just the ‘class’ partition? Or is the cluster definition determined by slurm.conf?

With the dual cluster configs, our local OnDemand does appropriately support separate shell access to the ‘name1’ and ‘name2’ login nodes.

Thanks
P.S. Yes, this situation is pushing us to discuss implementing a second cluster in Slurm itself; however, that is not an immediate option.

The current behavior is centered around a cluster as defined by /etc/ood/config/clusters.d/cluster.yml; that folder is where we look for cluster info. Indeed, looking at the code, slurm.conf is only used to set the SLURM_CONF environment variable, and then only if the conf field is set. All of our logic centers around a cluster being a cluster, not a cluster being just a partition/queue of another cluster (a sort of virtual cluster, if you like; though, as a complete aside, I’d bet we’d like to have this functionality).
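
As a concrete sketch (paths here are illustrative), the only Slurm-specific file the cluster YAML references is that optional conf field, and it just becomes SLURM_CONF for the Slurm commands we shell out to:

```yaml
# /etc/ood/config/clusters.d/name2.yml (job stanza only; paths illustrative)
v2:
  job:
    adapter: "slurm"
    bin: "/usr/bin"                # where sbatch, squeue, etc. live
    conf: "/etc/slurm/slurm.conf"  # optional; exported as SLURM_CONF when set
```

There is no field in this schema that scopes a cluster to a particular partition.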

It seems to me you have all the behavior you want, save for active jobs? Active jobs is currently duplicating jobs and you can’t tell which is which? Is this the right assessment?

There’s another topic asking how to add fields to active jobs, and adding the partition field there may suit your needs. Though everything will still show up as one cluster, you’ll at least be able to distinguish between the partitions. Second, you’ll need to filter out the results of one of the clusters (like name2). We may be able to come up with an initializer that can do these things, if that sounds like what you’d want?

Hi, Jeff – Thanks for sharing these thoughts, and sorry for the delay in responding – I was away from the office for much of last week.

I think what you summarize is correct:
– we are referencing the same slurm.conf for two ‘virtual clusters’
– each cluster has distinct partitions (all in the same physical cluster)
– the result is that active jobs shows two entries for each job, one on each cluster.

Happy to learn more about implementing an initializer. I understand from another topic that these can be written in Ruby and are not part of the core. Could you share (or point me to) examples? I’m also happy to learn after the fact if the team wants to put together an initializer for this situation.

Thanks

OK cool. Here is an example initializer. It adds an item to the dropdown menu at the top right for filtering jobs started by your primary group.

I’m not sure how helpful that is, because that’s an option and you want something more static. The idea behind initializers is to override or add to the behavior of a given class. I’m not sure off the top of my head which class we’d have to modify for you to remove this duplication.

I can set time aside at some point soon and try to come up with something.

Hi, Jeff – Thanks, this is a useful start.
Take your time, of course – in the meantime, I’ll look into implementing a version of the active jobs app as a sandbox app, so I can mess about freely with an initializer approach.

Thanks again,
~ Em

When using squeue and sbatch from the separate network hosts, do all jobs submitted with sbatch (or seen with squeue) go to the correct partition?

If so, another option would be to use bin_overrides for sbatch and squeue to execute those via ssh on the appropriate host.
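
As a sketch (bin_overrides is a real cluster-config option; the wrapper script paths and names here are hypothetical), name2.yml could point sbatch and squeue at small wrappers that run the real commands on the name2 host over ssh:

```yaml
# /etc/ood/config/clusters.d/name2.yml (job stanza only)
v2:
  job:
    adapter: "slurm"
    bin: "/usr/bin"
    bin_overrides:
      # hypothetical wrapper scripts, each doing something like:
      #   exec ssh name2.case.edu /usr/bin/sbatch "$@"
      sbatch: "/etc/ood/config/bin_overrides/sbatch_name2"
      squeue: "/etc/ood/config/bin_overrides/squeue_name2"
```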

That is a common need, so in OnDemand 1.8 we are going to try to make supporting that use case easier: executing the actual Slurm commands on a separate host.

Another feature we could add that would enable this is some type of arg_overrides option in the cluster config’s job section, so you could force every invocation of squeue and sbatch for that cluster to always use a specific partition, for example.
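
Purely as a sketch of that proposal (arg_overrides is hypothetical and not a supported option in any current release), it might look something like:

```yaml
# hypothetical future option; shown only to illustrate the idea
v2:
  job:
    adapter: "slurm"
    arg_overrides:
      sbatch: ["--partition=class"]   # force submissions into the class partition
      squeue: ["--partition=class"]   # only list jobs in the class partition
```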