Kubernetes Jupyter NFS error

Hi,
I tried to configure a Jupyter app based on the Add a Jupyter App on a Kubernetes Cluster — Open OnDemand 2.0.20 documentation.

For the mount in submit.yml.erb I used NFS:

- type: nfs
  name: home
  server: 172.31.82.164
  path: /
  destination_path: /home/ondemand

But I got this error as soon as I tried to launch the app:

error: error validating "STDIN": error validating data: ValidationError(Pod.spec.volumes[1].nfs): missing required field "server" in io.k8s.api.core.v1.NFSVolumeSource; if you choose to ignore these errors, turn validation off with --validate=false
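For reference, the field the validator complains about belongs to the raw Kubernetes pod spec, not the Open OnDemand mount syntax. A minimal NFS volume in a pod spec looks roughly like this (a sketch using the same server and path as above, just to show what the generated pod has to contain):

# Minimal raw Kubernetes NFS volume; io.k8s.api.core.v1.NFSVolumeSource
# requires both "server" and "path".
volumes:
  - name: home
    nfs:
      server: 172.31.82.164
      path: /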

Any advice is appreciated

Hello and welcome!

If you go up just a bit in that documentation page, you can see a clearer example of the mounts: Add a Jupyter App on a Kubernetes Cluster — Open OnDemand 2.0.20 documentation

Without seeing more of the submit.yml.erb, it is likely that you need to include the port for the server as well. That should be clearer in the example I linked, where you can see the cold storage mount using NFS.

Hi Travis,
Thanks for the reply. I included the port number and still get the same error.
Here is the submit.yml.erb

<%
  pwd_cfg = "c.NotebookApp.password=u'sha1:${SALT}:${PASSWORD_SHA1}'"
  host_port_cfg = "c.NotebookApp.base_url='/node/${HOST_CFG}/${PORT_CFG}/'"

  configmap_filename = "ondemand_config.py"
  configmap_data = "c.NotebookApp.port = 8080"
  utility_img = "ohiosupercomputer/ood-k8s-utils"

  user = OodSupport::User.new
%>

script:
  accounting_id: "<%= account %>"
  wall_time: "<%= wall_time.to_i * 3600 %>"
  native:
    container:
      name: "jupyter"
      image: "jupyter/scipy-notebook:python-3.9.2"
      command: "/usr/local/bin/start.sh /opt/conda/bin/jupyter notebook --config=/ood/ondemand_config.py"
      working_dir: "<%= Etc.getpwnam(ENV['USER']).dir %>"
      restart_policy: 'OnFailure'
      env:
        NB_UID: "<%= user.uid %>"
        NB_USER: "<%= user.name %>"
        NB_GID: "<%= user.group.id %>"
        HOME: "<%= user.home %>"
      port: "8080"
      cpu: "<%= cpu %>"
      memory: "<%= memory %>Gi"
    configmap:
      files:
        - filename: "<%= configmap_filename %>"
          data: |
            c.NotebookApp.port = 8080
            c.NotebookApp.ip = '0.0.0.0'
            c.NotebookApp.disable_check_xsrf = True
            c.NotebookApp.allow_origin = '*'
            c.Application.log_level = 'DEBUG'
          mount_path: '/ood'
    mounts:
      - type: nfs
        name: home
        server: 172.31.82.164:2049
        path: /
        destination_path: /home/ondemand
    init_containers:
      - name: "init-secret"
        image: "<%= utility_img %>"
        command:
          - "/bin/save_passwd_as_secret"
          - "user-<%= user.name %>"
      - name: "add-passwd-to-cfg"
        image: "<%= utility_img %>"
        command:
          - "/bin/bash"
          - "-c"
          - "source /bin/passwd_from_secret; source /bin/create_salt_and_sha1; /bin/add_line_to_configmap \\\"<%= pwd_cfg %>\\\" <%= configmap_filename %>"
      - name: "add-hostport-to-cfg"
        image: "<%= utility_img %>"
        command:
          - "/bin/bash"
          - "-c"
          - "source /bin/find_host_port; /bin/add_line_to_configmap \\\"<%= host_port_cfg %>\\\" <%= configmap_filename %>"

Thanks for the update.

You’ve found an error in the docs actually, so apologies and thank you!

The issue is that server is the wrong key for the value we need to pass to Kubernetes, so it never makes it into the generated pod spec.

Please use host: 172.31.82.164 instead and see if that works correctly.
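So the mounts entry would become something like this (a sketch based on the server address and export path you posted, with only the key name changed):

mounts:
  - type: nfs
    name: home
    host: 172.31.82.164      # "host" rather than "server"
    path: /
    destination_path: /home/ondemand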

We are making the change to the docs now.

Thanks for the fix; it resolved that error, but I got another one.

The mount block in submit.yml.erb is:

  - type: nfs
    name: nfshome
    host: 172.31.82.164:2049
    path: /
    destination_path: /data

Note that from the node where the pod should run, I was able to mount the file system manually:
$> mount 172.31.82.164:/ /data/

When you flip the path and destination_path entries, does it work as you expect?

I think path is intended as the actual path on the NFS server to mount, and destination_path is intended as the mount point inside the container, which the doc example gives as:

    path: /some/location
    destination_path: /some/container/location
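If that reading is right, an entry like the one above should end up in the generated pod roughly as follows (an assumed mapping for illustration only; the volume and container names here are placeholders, not the adapter's actual output):

volumes:
  - name: nfshome                     # placeholder volume name
    nfs:
      server: 172.31.82.164           # taken from "host"
      path: /some/location            # taken from "path"
containers:
  - name: jupyter                     # placeholder container name
    volumeMounts:
      - name: nfshome
        mountPath: /some/container/location   # taken from "destination_path"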

The new config also does not work:

- type: nfs
  name: nfshome
  host: 172.31.82.164:2049
  path: /
  destination_path: /var/nfs

Even though this pod.yml works fine:

kind: Pod
apiVersion: v1
metadata:
  name: pod-using-nfs
spec:
  # Add the server as an NFS volume for the pod
  volumes:
    - name: nfs-volume
      nfs:
        # URL for the NFS server
        server: 172.31.82.164
        path: /

  # In this container, we'll mount the NFS volume
  # and write the date to a file inside it.
  containers:
    - name: app
      image: alpine

      # Mount the NFS volume in the container
      volumeMounts:
        - name: nfs-volume
          mountPath: /var/nfs

      # Write to a file inside our NFS
      command: ["/bin/sh"]
      args: ["-c", "while true; do date >> /var/nfs/dates.txt; sleep 5; done"]

Could you attach the pod.yml file that is generated, removing any passwords or sensitive data first if needed?

It’s a bit hard to read some of this formatting, and the pod.yml that is created can help me identify what is not being fed into Kubernetes correctly.

pod.yml (10.8 KB)

Thanks for sharing that.

I need some more information to understand what is happening. Are you able to run the describe command at all to get the state of the container?

What is the current error you are actually seeing right now? Understanding the error and behavior would be a big help.

Hi,
I have errors in many places.

I feel that sending files back and forth will be difficult to trace, so can we meet over Zoom tomorrow so I can show you the whole setup and the errors?

Please let me know.
Faras

I think that might be the easiest way forward as well.

Are you free in the afternoon? I’m about to sign off for the day but anytime after 1pm would be ideal, though we can do earlier if that works for your schedule better. I’m very flexible and happy to work around your best times.

Great! How about tomorrow, Thursday, at 2pm EST? I can send a Zoom link if that works for you.

That sounds great!

A zoom link here works fine, and my email is travert@osc.edu if you need it.
