Unable to open XDMoD - https://localhost:4443/

Hi,

I’m really new to Open OnDemand and have been going through the tutorials with the local Docker installation. I wanted to create some containers to run different HPC scripts, but while going through the tutorials I get errors when opening XDMoD at https://localhost:4443/. In Open OnDemand, on the right-hand side, I see this error: “TypeError: Failed to fetch. Please ensure you are logged into Open XDMoD first, and then try again.”

Could it be because I have no jobs currently running, and that’s why XDMoD errors out with “localhost refused to connect”?

Jesse

Hi and welcome!

I would have to boot up the project to check, but it seems to me that XDMoD didn’t start up correctly. The logs may say as much, but if you’re getting a connection refused when trying to reach XDMoD, that suggests it’s not up and running.
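
For example, you can tail the logs for just that service with something like this (assuming the compose service is named xdmod; docker compose ps will show the actual name):

$ docker compose logs --tail=50 xdmod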

You could try restarting that container specifically (not the entire compose set of containers) and see if it comes back up correctly.
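
Something along these lines (again assuming the service is named xdmod; older installs use docker-compose with a hyphen instead):

$ docker compose restart xdmod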

I restarted the xdmod container, but the connection to https://localhost:4443/ is still refused. I didn’t do anything unusual; I just followed the tutorial instructions.

Are there any Zoom office hours I can attend, just to troubleshoot?

There was one this morning, but it’s over now. I’m juggling a few things, but I’ll try to circle back to this.

In the interim you should be able to do everything else, just without the XDMoD functionality. The OnDemand tutorial should still be fine, minus the XDMoD portions, which aren’t a lot.

Thank you Jeff, appreciate it. Yep, the other parts are working currently.

Open XDMoD can be a bit fickle to set up. I don’t believe there is a specifically maintained container image for it either. Did you containerize it yourself?

Hi Morgan, I’m pretty new to Docker and still trying to understand everything, but docker compose ps shows an xdmod container, so I assumed it was containerized. Maybe it would be best to join your office hours. Really appreciate the help.

Docker Compose is the orchestration mechanism for handling multiple Docker containers. If you just ran docker-compose ps on some system, it will only show the containers on that local system. That would mean someone was likely working on an Open XDMoD container on that system, but it’s hard to say whether it’s functional. There have been some attempts in the past to containerize it, e.g. GitHub - jtfrey/xdmod-container: Linux containers running XDMoD service (MariaDB + Apache), but I can’t speak to whether it still works. If you are new to containers, this might be a tricky path to go down, as you would need to really learn how Open XDMoD works as well as containers.
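
For reference, these are the plain Docker commands I’d use to see what’s actually on a host (nothing project-specific here):

$ docker ps             # running containers on this host
$ docker ps -a          # includes stopped/exited containers
$ docker compose ps     # only the services from the compose file in the current directory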

I would probably start by standing up Open XDMoD in a VM and getting it working with your Slurm controller. Keep in mind, though, that Open XDMoD is easiest to install via RPMs, so you would want to be on a RHEL-based OS such as RHEL 8, Rocky 8, Alma 8, or even Oracle Linux 8.
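
As a rough sketch of what that looks like on a Rocky/Alma 8 VM (the version number and release URL below are placeholders; grab the real ones from the Open XDMoD install docs and GitHub releases page):

$ dnf install -y epel-release
$ dnf install -y https://github.com/ubccr/xdmod/releases/download/vX.Y.Z/xdmod-X.Y.Z-1.0.el8.noarch.rpm
$ xdmod-setup    # interactive setup: database, organization, resources, etc.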

Open XDMoD is its own project, though, and while there is integration with OOD, it’s a little out of scope here. I’m sure folks will be happy to try to help out, but it may not be something office hours is designed for.

@mjbludwig he’s going through the HPC Toolset tutorial: GitHub - ubccr/hpc-toolset-tutorial: Tutorial for installing Open XDMoD, OnDemand, & ColdFront

Oh whoops! That explains that! I completely forgot this existed. That makes my post moot!

All the containers in the HPC Toolset tutorial worked as of a month ago, when I used them for the OnDemand conference. We haven’t changed anything since then, so they should work fine. I’d recommend using the destroy option to get rid of everything and then restarting:

$ ./hpcts destroy
$ docker container list
(Should show no containers)

$ docker volume list
(Should show no volumes)

If either containers or volumes are listed, manually delete them with:

$ docker container rm [ContainerID]
$ docker volume rm [VolumeName]
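
If you’d rather not delete them one by one, and nothing else on the machine matters, Docker’s prune commands do the same in bulk (exact volume-prune behavior varies a bit between Docker versions, so read the confirmation prompt):

$ docker container prune    # removes all stopped containers
$ docker volume prune       # removes unused volumes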

Then restart the container environment with:

$ ./hpcts start

Usually this works when I run into unexplainable issues. Let us know!

Thanks,
Dori