Help with Open OnDemand shell application

I’m trying to set up Open OnDemand on Kubernetes with Helm. We have a working OnDemand deployment on virtual machines at our site, but we want to build a version that’s packaged as a Helm chart. Everything else seems to be working, but we’ve run into a lot of issues with the Shell application. I can SSH into the target node just fine with the same user credentials that Keycloak pulls from our LDAP server. Yet whenever I try to use the Shell app in the OnDemand portal, it redirects me to a page saying “We’re sorry, but something went wrong.”

Error ID: 9d77dcd9
Web application could not be started by the Phusion Passenger application server.

Here’s the output from the /var/log/ondemand-nginx/user/error.log file:

App 5459 output: [2020-11-20 21:51:24 +0000 ]  INFO "method=GET path=/pun/sys/dashboard/ format=html controller=DashboardController action=index status=200 duration=17.37 view=6.25"
[ N 2020-11-20 21:56:24.9023 5091/T4 age/Cor/CoreMain.cpp:1147 ]: Checking whether to disconnect long-running connections for process 5459, application /var/www/ood/apps/sys/dashboard (production)
App 5704 output: internal/modules/cjs/loader.js:638
App 5704 output:     throw err;
App 5704 output:     ^
App 5704 output:
App 5704 output: Error: Cannot find module 'pty.js'
App 5704 output:     at Function.Module._resolveFilename (internal/modules/cjs/loader.js:636:15)
App 5704 output:     at Function.Module._load (internal/modules/cjs/loader.js:562:25)
App 5704 output:     at Module.require (internal/modules/cjs/loader.js:692:17)
App 5704 output:     at Module.require (/opt/ood/ondemand/root/usr/share/passenger/helper-scripts/node-loader.js:80:25)
App 5704 output:     at require (internal/modules/cjs/helpers.js:25:18)
App 5704 output:     at Object.<anonymous> (/var/www/ood/apps/sys/shell/app.js:6:17)
App 5704 output:     at Module._compile (internal/modules/cjs/loader.js:778:30)
App 5704 output:     at Object.Module._extensions..js (internal/modules/cjs/loader.js:789:10)
App 5704 output:     at Module.load (internal/modules/cjs/loader.js:653:32)
App 5704 output:     at tryModuleLoad (internal/modules/cjs/loader.js:593:12)
[ E 2020-11-20 22:44:27.0440 5091/T1a age/Cor/App/Implementation.cpp:221 ]: Could not spawn process for application /var/www/ood/apps/sys/shell: The application process exited prematurely.
  Error ID: 3489b4ce
  Error details saved to: /tmp/passenger-error-t3sWF9.html

Any ideas on how to fix this? We’ve been stuck on it for a long time now. In our VM setup we never hit this error page when using the Shell application. Do you think it could be an issue with running under Kubernetes, or something to do with Passenger?

I really appreciate the help, thanks.

Hi and welcome! You need Node.js 10, which I’m guessing would need to be in your container image. What image are you using?

Please let us know how it goes and what your architecture looks like. I’m guessing you have to run the containers in privileged mode so they can start processes as a different user? Also, do all of the PUNs start within the container? I’m guessing that needs significant resources? (I have many questions about this setup, but I’m super happy to see you’ve gotten this far!)

Hey, thanks for the quick response. We’re using our own Docker image that we built following the OnDemand docs. Node.js 10 and Ruby 2.5 are installed. I can share the Dockerfile if you’d like.

Edit: The PUNs are starting within the container, I think. We’re trying to set up SSH to a remote node from a server running the Helm chart we built.

You’re in luck that I’m just waiting for something else.

In any case, sure, we’d love to see the Dockerfile. But I’m quite sure that error comes from Node.js not being a high enough version, so somehow nodejs-6 snuck in. If only nodejs-10 is in the image, then I’d check the mount points from the host, as one of them may be providing node 6.

Okay, here’s the Dockerfile:

FROM centos:7
RUN yum update -y && \
    yum install -y epel-release && \
    yum install -y supervisor centos-release-scl subscription-manager openssh-server openssh-clients && \
    yum install -y wget 

# Set up SSSD and edit PAM files
RUN yum install -y sssd authconfig openldap oddjob-mkhomedir && \
    yum clean all
COPY sssd/sssd.conf /etc/sssd
RUN chown root:root /etc/sssd/sssd.conf
RUN chmod 600 /etc/sssd/sssd.conf
WORKDIR /etc/pam.d
RUN rm -f system-auth password-auth
COPY sssd/PAM-system-auth ./system-auth
COPY sssd/PAM-password-auth ./password-auth
RUN chmod 744 system-auth
RUN chmod 744 password-auth
RUN authconfig --update --enablesssd --enablesssdauth --enablemkhomedir

# Install Ruby 2.5 and Node.js 10
RUN yum install -y centos-release-scl-rh
RUN yum-config-manager --enable rhel-server-rhscl-7-rpms
RUN yum install -y rh-ruby25
RUN yum install -y rh-nodejs10

# Copy in the filesystem-map
COPY filesystem.txt /root

# Install OnDemand
RUN yum install -y && \
    yum install -y ondemand && \
    yum clean all
RUN yum install ondemand-selinux -y

# Install OpenID auth module
RUN yum install -y httpd24-mod_auth_openidc
# config file for ood-portal-generator
ADD ood_portal.yml /etc/ood/config/ood_portal.yml
# Build and install the new Apache configuration file
RUN /opt/ood/ood-portal-generator/sbin/update_ood_portal
# FIX: Contains secret values
ADD auth_openidc-sample.conf /opt/rh/httpd24/root/etc/httpd/conf.d/auth_openidc.conf

# Install Singularity
WORKDIR /usr/local
# (sudo removed: it isn't installed in centos:7 and builds already run as root)
RUN yum groupinstall -y 'Development Tools'
RUN yum install -y openssl-devel libuuid-devel libseccomp-devel wget squashfs-tools cryptsetup
RUN wget
RUN tar -C /usr/local -xzf go1.14.7.linux-amd64.tar.gz
RUN rm go1.14.7.linux-amd64.tar.gz
RUN yum install golang -y
# NOTE: `export` in a RUN step (and `source ~/.bashrc`) does not persist to
# later build steps; use ENV so the Go toolchain stays on PATH
ENV GOPATH=/root/go
ENV PATH=/usr/local/go/bin:$PATH:/root/go/bin
RUN export VERSION=3.6.0 && \
    wget${VERSION}/singularity-${VERSION}.tar.gz && \
    tar -xzf singularity-${VERSION}.tar.gz
WORKDIR /usr/local/singularity
RUN yum update -y && yum clean all
RUN ./mconfig && \
    make -C ./builddir && \
    make -C ./builddir install
RUN yum install -y singularity

# Add cluster.yaml files
RUN mkdir /etc/ood/config/clusters.d
COPY ood-island.yml /etc/ood/config/clusters.d/ood-island.yml
RUN mkdir /opt/ood/linuxhost_adapter
WORKDIR /opt/ood/linuxhost_adapter
RUN singularity pull docker://centos:7.6.1810
RUN mv centos_7.6.1810.sif centos_7.6.sif

# Some security precautions
RUN chmod 600 /etc/ood/config/ood_portal.yml
RUN chgrp apache /opt/rh/httpd24/root/etc/httpd/conf.d/auth_openidc.conf
RUN chmod 640 /opt/rh/httpd24/root/etc/httpd/conf.d/auth_openidc.conf
RUN groupadd ood
RUN useradd -g ood ood
WORKDIR /home/ood

ADD supervisord.conf /etc/supervisord.conf
CMD ["/bin/sh", "-c", "/usr/bin/supervisord -c /etc/supervisord.conf"]

Ah, I think you might be right about node6. It says node64 is mounted:

# findmnt | grep node
├─/proc                                     proc                                                                                                                                                                                              proc    rw,nosuid,nodev,noexec,relatime
│ ├─/dev/mqueue                             mqueue                                                                                                                                                                                            mqueue  rw,nosuid,nodev,noexec,relatime,seclabel
│ ├─/dev/termination-log                    /dev/mapper/utah--dev1-root[/var/lib/kubelet/pods/5cef9d8f-8f62-46a8-9e58-52a185adacab/containers/open-ondemand/af790679]                                                                         xfs     rw,relatime,seclabel,attr2,inode64,noquota
│ └─/dev/shm                                shm                                                                                                                                                                                               tmpfs   rw,nosuid,nodev,noexec,relatime,seclabel,size=65536k
├─/sys                                      sysfs                                                                                                                                                                                             sysfs   ro,nosuid,nodev,noexec,relatime,seclabel
│ └─/sys/fs/cgroup                          tmpfs                                                                                                                                                                                             tmpfs   ro,nosuid,nodev,noexec,relatime,seclabel,mode=755
│   ├─/sys/fs/cgroup/systemd                cgroup[/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cef9d8f_8f62_46a8_9e58_52a185adacab.slice/docker-fbdc43991ba1ea72223b6ad66c5f9cb39e18c983321fa00e303a9b0f2cfd693f.scope] cgroup  ro,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd
│   ├─/sys/fs/cgroup/pids                   cgroup[/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cef9d8f_8f62_46a8_9e58_52a185adacab.slice/docker-fbdc43991ba1ea72223b6ad66c5f9cb39e18c983321fa00e303a9b0f2cfd693f.scope] cgroup  ro,nosuid,nodev,noexec,relatime,seclabel,pids
│   ├─/sys/fs/cgroup/cpuset                 cgroup[/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cef9d8f_8f62_46a8_9e58_52a185adacab.slice/docker-fbdc43991ba1ea72223b6ad66c5f9cb39e18c983321fa00e303a9b0f2cfd693f.scope] cgroup  ro,nosuid,nodev,noexec,relatime,seclabel,cpuset
│   ├─/sys/fs/cgroup/freezer                cgroup[/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cef9d8f_8f62_46a8_9e58_52a185adacab.slice/docker-fbdc43991ba1ea72223b6ad66c5f9cb39e18c983321fa00e303a9b0f2cfd693f.scope] cgroup  ro,nosuid,nodev,noexec,relatime,seclabel,freezer
│   ├─/sys/fs/cgroup/net_cls,net_prio       cgroup[/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cef9d8f_8f62_46a8_9e58_52a185adacab.slice/docker-fbdc43991ba1ea72223b6ad66c5f9cb39e18c983321fa00e303a9b0f2cfd693f.scope] cgroup  ro,nosuid,nodev,noexec,relatime,seclabel,net_prio,net_cls
│   ├─/sys/fs/cgroup/hugetlb                cgroup[/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cef9d8f_8f62_46a8_9e58_52a185adacab.slice/docker-fbdc43991ba1ea72223b6ad66c5f9cb39e18c983321fa00e303a9b0f2cfd693f.scope] cgroup  ro,nosuid,nodev,noexec,relatime,seclabel,hugetlb
│   ├─/sys/fs/cgroup/cpu,cpuacct            cgroup[/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cef9d8f_8f62_46a8_9e58_52a185adacab.slice/docker-fbdc43991ba1ea72223b6ad66c5f9cb39e18c983321fa00e303a9b0f2cfd693f.scope] cgroup  ro,nosuid,nodev,noexec,relatime,seclabel,cpuacct,cpu
│   ├─/sys/fs/cgroup/blkio                  cgroup[/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cef9d8f_8f62_46a8_9e58_52a185adacab.slice/docker-fbdc43991ba1ea72223b6ad66c5f9cb39e18c983321fa00e303a9b0f2cfd693f.scope] cgroup  ro,nosuid,nodev,noexec,relatime,seclabel,blkio
│   ├─/sys/fs/cgroup/perf_event             cgroup[/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cef9d8f_8f62_46a8_9e58_52a185adacab.slice/docker-fbdc43991ba1ea72223b6ad66c5f9cb39e18c983321fa00e303a9b0f2cfd693f.scope] cgroup  ro,nosuid,nodev,noexec,relatime,seclabel,perf_event
│   ├─/sys/fs/cgroup/memory                 cgroup[/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cef9d8f_8f62_46a8_9e58_52a185adacab.slice/docker-fbdc43991ba1ea72223b6ad66c5f9cb39e18c983321fa00e303a9b0f2cfd693f.scope] cgroup  ro,nosuid,nodev,noexec,relatime,seclabel,memory
│   └─/sys/fs/cgroup/devices                cgroup[/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cef9d8f_8f62_46a8_9e58_52a185adacab.slice/docker-fbdc43991ba1ea72223b6ad66c5f9cb39e18c983321fa00e303a9b0f2cfd693f.scope] cgroup  ro,nosuid,nodev,noexec,relatime,seclabel,devices
├─/shared                                   /dev/mapper/utah--dev1-root[/var/lib/kubelet/pods/5cef9d8f-8f62-46a8-9e58-52a185adacab/volumes/]                                                               xfs     rw,relatime,seclabel,attr2,inode64,noquota
├─/etc/resolv.conf                          /dev/mapper/utah--dev1-root[/var/lib/docker/containers/8b166aa4154645f2323023a87edff84b016e4887aa4a4257c225965312e8384a/resolv.conf]                                                              xfs     rw,relatime,seclabel,attr2,inode64,noquota
├─/etc/hostname                             /dev/mapper/utah--dev1-root[/var/lib/docker/containers/8b166aa4154645f2323023a87edff84b016e4887aa4a4257c225965312e8384a/hostname]                                                                 xfs     rw,relatime,seclabel,attr2,inode64,noquota
├─/etc/hosts                                /dev/mapper/utah--dev1-root[/var/lib/kubelet/pods/5cef9d8f-8f62-46a8-9e58-52a185adacab/etc-hosts]                                                                                                 xfs     rw,relatime,seclabel,attr2,inode64,noquota

I’m not sure; I read that as the xfs mount option inode64 rather than anything to do with a Node.js version.

It’s taking me a minute to build this image and I’m about to log off, but I’d say hop into that container and use locate, find, or which to determine whether there really is only one node binary there and whether it’s the right version (10).

You should also try node --version and /opt/ood/nginx_stage/bin/node --version. The latter is what we use so that’s an important one because it’s a wrapper so it could have a slightly different PATH.
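If you want to script that comparison, here’s a minimal POSIX-shell sketch (the helper name and the threshold of 10 are illustrative, not anything OnDemand ships) that takes a vX.Y.Z string like the one node --version prints and tests whether the major version is high enough:

```shell
# Illustrative helper: does a "vX.Y.Z" version string have major >= 10?
version_ok() {
  major="${1#v}"          # strip the leading "v"
  major="${major%%.*}"    # keep only the major component
  [ "$major" -ge 10 ]
}

version_ok "v10.24.1" && echo yes || echo no   # prints "yes"
version_ok "v6.17.1"  && echo yes || echo no   # prints "no"
```

You’d feed it the output of both node --version and /opt/ood/nginx_stage/bin/node --version, since the two can resolve different binaries.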

Okay, it looks like there’s only one node binary:

# find / -name "node"

# node --version

OK, it looks like you have the right node; I may have jumped to a conclusion there.

Let’s focus on this error then: “Error: Cannot find module 'pty.js'”.

This library is located here: /var/www/ood/apps/sys/shell/node_modules/node-pty/.

We import it with:

const pty       = require('node-pty');

Does that directory exist, and is it readable?
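As a trivial illustration of that check (the helper name is made up, and /tmp here is just a stand-in path), in shell:

```shell
# Illustrative helper (not part of OnDemand): succeeds iff the argument
# is a directory the current user can read.
dir_readable() {
  [ -d "$1" ] && [ -r "$1" ]
}

dir_readable /tmp && echo readable || echo "missing or unreadable"
# prints "readable"
```

For the shell app you’d point it at /var/www/ood/apps/sys/shell/node_modules/node-pty instead, running as the same user the PUN runs as, since a root-only directory would produce the same “Cannot find module” failure.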

Hey, you were on the right track; we were able to fix it the other day. It turns out we were running an outdated version of the shell app that uses pty.js instead of node-pty, and we also needed to set an environment variable in /etc/ood/config/apps/shell/env.
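For anyone who finds this later: that env file is just KEY=value lines that the shell app reads at startup. As a sketch only (the variable name depends on your OnDemand release, and the hostname below is a placeholder for your own login node):

```shell
# /etc/ood/config/apps/shell/env -- example only; check the shell app
# docs for the variable names your OnDemand version actually supports.
DEFAULT_SSHHOST="login.example.edu"
```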

Thanks for the help though!