When I update _footer.html.erb and change the image tag to logo.svg, I get this:
App 2839311 output: [2025-04-09 06:06:31 -0700 ] INFO "method=GET path=/pun/sys/dashboard/ format=html controller=DashboardController action=index status=500 allocations=9674 duration=10.24 view=0.00"
App 2839311 output: [2025-04-09 06:06:31 -0700 ] FATAL "ActionView::Template::Error (Permission denied @ dir_s_mkdir - /var/www/ood/apps/sys/dashboard/tmp/cache):\n 2: <div class=\"me-2\">\n 3: <%= link_to \"https://docs.com\" do %>\n 4: <%=\n 5: image_tag(\n 6: \"logo.svg\",\n 7: class: \"footer-logo\",\n 8: alt: \"Powered by HPC\",\n \napp/views/layouts/_footer.html.erb:5\napp/views/layouts/_footer.html.erb:3\napp/views/layouts/application.html.erb:105"
App 2839311 output: [2025-04-09 06:06:31 -0700 ] WARN "Announcement file not found: /etc/ood/config/announcement.md"
App 2839311 output: [2025-04-09 06:06:31 -0700 ] WARN "Announcement file not found: /etc/ood/config/announcement.yml"
App 2839311 output: [2025-04-09 06:06:31 -0700 ] INFO "method=GET path=/pun/sys/dashboard/500 format=html controller=ErrorsController action=internal_server_error status=500 allocations=9196 duration=8.60 view=0.00"
App 2839311 output: Error during failsafe response: Permission denied @ dir_s_mkdir - /var/www/ood/apps/sys/dashboard/tmp/cache
App 2839311 output: /usr/share/ruby/fileutils.rb:402:in `mkdir'
App 2839311 output: /usr/share/ruby/fileutils.rb:402:in `fu_mkdir'
App 2839311 output: /usr/share/ruby/fileutils.rb:380:in `block (2 levels) in mkdir_p'
App 2839311 output: /usr/share/ruby/fileutils.rb:378:in `reverse_each'
App 2839311 output: /usr/share/ruby/fileutils.rb:378:in `block in mkdir_p'
App 2839311 output: /usr/share/ruby/fileutils.rb:370:in `each'
App 2839311 output: /usr/share/ruby/fileutils.rb:370:in `mkdir_p'
here’s my file
<footer class="d-flex m-0 mt-4 justify-content-between align-items-center p-4">
  <div class="me-2">
    <%= link_to "https://docs.com" do %>
      <%=
        image_tag(
          "logo.svg",
          class: "footer-logo",
          alt: "Powered by HPC",
          height: "40px",
        )
      %>
    <% end %>
  </div>
  <span>OnDemand version: <%= Configuration.ood_version %></span>
</footer>
I have added a logo.svg file in this directory
/var/www/ood/apps/sys/dashboard/app/assets/images
You cannot change the assets that we ship. OnDemand is currently failing because it’s trying to recompile the assets - which it can’t. That’s what image_tag is trying to do: recompile the assets, because logo.svg hasn’t been compiled.
To add an asset, place it in the public directory (/var/www/ood/public) and reference it through the href="/public/my_logo.png".
You may be able to use image_tag if you use skip_pipeline: true, but honestly I think just writing the plain HTML is safer - skip Rails and whatever Rails may try to do altogether.
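For example, assuming you’ve copied logo.svg into /var/www/ood/public (which is served at /public), the footer image could look like either of these - the plain HTML version is the safer bet, and the skip_pipeline variant may work per the note above:

```erb
<%# Option 1: plain HTML, bypassing the asset pipeline entirely %>
<img src="/public/logo.svg" class="footer-logo" alt="Powered by HPC" height="40">

<%# Option 2: image_tag with skip_pipeline, which avoids asset compilation %>
<%= image_tag("/public/logo.svg", class: "footer-logo",
              alt: "Powered by HPC", height: "40", skip_pipeline: true) %>
```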
Got it.
One more question:
Is it possible to run the jobs on the same node on which OOD is running? (Not scheduling a job through Slurm, just running a shell script on the OOD host.) Is there any configuration for this?
These two adapters, systemd and linuxhost, may work for you, though they both work over SSH. So the users still have to SSH to localhost, which is counterintuitive, but generally folks use regular login nodes as the destination for these, not the web node (the node/machine OnDemand is installed on).
So my OOD is installed on the login node, which is connected to the HPC cluster. I am able to schedule jobs on the cluster via Slurm.
This is just something I want to experiment with. Inside cluster.d I have created a .yml file which has a definition for my HPC cluster. Do I need to do something similar if I want to execute shell scripts on the login node (with OOD)?
Yea you’ll do something similar. Basically OnDemand treats it as another “scheduler” and will use systemd for example to “schedule” work.
You just have to be careful here because an actual scheduler will allocate resources appropriately whereas a single user on a login node can basically crash it for other users if they’re not careful about what they’re doing.
so something like this will work?
---
v2:
  metadata:
    title: "Localhost"
    url: "https://localhost"
    hidden: true
  login:
    host: "localhost"
  job:
    adapter: "systemd"
    submit_host: "localhost"
    debug: true
    strict_host_checking: false
I don’t think you should have the URL, but yea off the top of my head I think that’s OK.
getting this error now
Failed to submit session with the following error:
no implicit conversion of String into Integer
If this job failed to submit because of an invalid job name please ask your administrator to configure OnDemand to set the environment variable OOD_JOB_NAME_ILLEGAL_CHARS.
The Desktop session data for this session can be accessed under the staged root directory.
the config is fine, it works fine with my other cluster config file.
Maybe you need this piece?
ssh_hosts:
  - localhost
still the same. here is my file
---
v2:
  metadata:
    title: "Localhost"
    hidden: true
  login:
    host: "localhost"
  job:
    adapter: "systemd"
    submit_host: "localhost"
    ssh_hosts:
      - localhost
    debug: true
    strict_host_checking: false
I think it’s an issue with your submit.yml.erb, specifically the native portion. Most adapters accept an Array in the native configuration; IIRC this one accepts a key/value pair map. You can see in this section of the docs how to supply native parameters.
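So the shape would be roughly this - a sketch only; the key names under native below are placeholders, and the keys this adapter actually accepts are in its documentation:

```yaml
# submit.yml.erb
---
batch_connect:
  template: basic
script:
  native:
    # key/value map (not an array) for the systemd adapter;
    # these keys are illustrative placeholders
    some_option: "some_value"
```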
I managed to execute the job, but it’s showing this error, and I am not able to delete this job from the portal. Any ideas on this?
The systemd adapter was a community contribution, so I’m not entirely sure how to debug it.
If the unit is actually gone (I’m sure there’s a way to query the currently running systemd units), then you can remove the file under ~/ondemand/data/sys/batch_connect/db (I’m recalling that path from memory; it could be slightly off).
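Something along these lines - a sketch only; the db path is the from-memory one above (verify it on your install), and the session-id is a placeholder:

```shell
#!/bin/sh
# Confirm the job's unit is actually gone first (user-level units;
# run on the host where the job was started):
#   systemctl --user list-units --type=service

# Location of the batch_connect session cards (path recalled from memory,
# verify it locally before deleting anything):
db_dir="$HOME/ondemand/data/sys/batch_connect/db"
echo "$db_dir"

# Once the unit is confirmed gone, remove the stale session card, e.g.:
#   rm "$db_dir/<session-id>"
```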