I would like to add memory and GPU attributes to the launcher form and am struggling to figure out how to do it. I used the info from the below thread to add the widgets, but I’m not sure where I need to configure the back end to utilize the selections upon job launch. Any info would be greatly appreciated!
You should use the native attributes. Look at how we do it for MATLAB at OSC as an example. You can browse that whole organization for bc_<app>-related things, with all sorts of examples.
We’re using the native attribute to specify the CLI arg ppn=<%= ppn %><%= node_type %>. We use Torque; that’s why there’s a native/node/resources hierarchy in the MATLAB YAML.
Something like this should work for you. I think ppn is the CLI flag for hardware requests in PBS Pro? In any case, native relates directly to CLI flags, so if you want to pass a CLI flag called foo, you’d call it out directly, like foo=<%= my_foo_param %>.
Also note that you can pass many flags/arguments here, not just one. So native: "foo=bar a=b c=d --long-arg" would pass all of those into the CLI command.
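To make that concrete, here’s a rough sketch of what that could look like in a submit.yml.erb. The flag names (foo, a, c, --long-arg) are placeholders, and my_foo_param is a hypothetical form attribute you’d define in your form.yml, so substitute your scheduler’s real flags:

```yaml
# submit.yml.erb — sketch only; flag names are placeholders
batch_connect:
  template: "basic"
script:
  # everything in native is passed straight through to the scheduler's CLI
  native: "foo=<%= my_foo_param %> a=b c=d --long-arg"
```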
I do get the correct menu items; however, the instance always starts with one core and one node. I need one node, but I’m looking for the ability to change the number of cores.
This is the job_script_options.json file generated by the RStudio job
I think you may have 2 things wrong. First, the YAML isn’t indented correctly: script and batch_connect should be at the same indent (the 0th indent). See below.
Secondly, I think the CLI argument for cores in Slurm is -c / --cpus-per-task. Though I could be wrong on that; I only quickly checked the docs, so you may need to tweak it, or you may be more familiar with Slurm’s CLI than I am. What you’ve listed above looks like Torque’s CLI.
batch_connect:
  template: "basic"
# script has the same indent as batch_connect
script:
  # I think this is the Slurm CLI for nodes (-N) and cpus (--cpus-per-task)
  native:
    - "-N <%= bc_num_slots.blank? ? 1 : bc_num_slots.to_i %>"
    - "--cpus-per-task=<%= ncpus %>"
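For that ERB to resolve, the form also needs an ncpus attribute defined; otherwise the template has nothing to interpolate. A minimal sketch of the matching form.yml, where the widget type, label, and bounds are illustrative and should be adjusted to your cluster:

```yaml
# form.yml — sketch; defines the ncpus attribute referenced
# by <%= ncpus %> in submit.yml.erb above
attributes:
  ncpus:
    widget: "number_field"
    label: "Number of cores"
    value: 1
    min: 1
form:
  - bc_num_slots
  - ncpus
```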