Hi, this is a separate issue from getting set up with Kubernetes in general (I think). I’ve got OOD doing everything generally right when connecting to an RKE2 system with Calico as the network CNI. I’m testing the bc_k8s_jupyter application, and the container keeps crashing; I think it’s because the generated config looks like this:
I’m guessing that HOST_CFG is getting either a newline or lots of extra spaces. Is there any way you can think of to strip out newlines and stuff from those variables? I could try a different CNI to see if it generates more sane hostnames, but it’d be nice to clear out the garbage from the variable too.
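For what it’s worth, a stray newline or padding can be stripped in the shell before the variable is used. This is just a sketch: `HOST_CFG` here stands in for wherever the hostname actually gets set, and the dirty value is fabricated for illustration.

```shell
# Simulate a hostname that arrived with a trailing newline and spaces
# (the actual source of the garbage is unknown; this is a stand-in).
HOST_CFG='k8s-node.example.org
   '

# tr -d '[:space:]' deletes every whitespace character, newlines included.
HOST_CFG="$(printf '%s' "$HOST_CFG" | tr -d '[:space:]')"

echo "$HOST_CFG"
# → k8s-node.example.org
```

Note this assumes the hostname itself never legitimately contains whitespace, which should be safe for a DNS name.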
Ok, looking at the code and ENV vars, I’m not sure where that variable is actually coming from. Is it maybe set somewhere else in your app? Part of script.sh? You may need to export those variables to make them available.
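To illustrate the export point: a variable assigned in the parent shell is invisible to child processes (such as a launched script.sh) unless it is exported. The variable names below are made up for the demo.

```shell
# Plain assignment: local to this shell only.
FOO=unexported

# Exported: inherited by every child process.
export BAR=exported

# A child shell sees only the exported variable.
sh -c 'echo "FOO=${FOO:-unset} BAR=${BAR:-unset}"'
# → FOO=unset BAR=exported
```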
Thanks. I’ll take a look at that larger example. The line I put in did get the container running, so it fixed the original issue. There’s still something funky with the NodePort being created, though, as I can’t connect to the host on that port.
Definitely something is off with the way the OnDemand pod/service is being set up, such that I can’t reach the NodePort on the k8s host. iptables is blocking it, so something didn’t get created right, and I’m not really seeing what.
For example, if I have a simple pod/service setup:
```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app.kubernetes.io/name: test
  name: test
  namespace: default
spec:
  containers:
  - image: rancher/hello-world
    imagePullPolicy: Always
    name: container-0
    ports:
    - containerPort: 80
      protocol: TCP
  restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: myport
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: test
  ports:
  - protocol: TCP
    port: 80
```
This gets created fine, and if I open a browser to the host:port I get the test web page.
When launching with OnDemand, I see the service being created and the Jupyter session running and all that, but I can’t get to it (it just hangs and times out). Just to see what’s up, I logged in on the console of the Kubernetes node, and from there I can reach Jupyter. Something in the way the networking is being set up in the YAML is not quite right.
Yeah, I think that might be a big piece of it. I wonder how it really should be set up, though, since the intention is to block access from other namespaces, yet you still need to be able to browse into the container from outside.
I just went ahead and deleted the network policy, now when I click on the link, I get a 404: Not found error. My URL is:
If I click on the ‘jupyter notebook’ link above that I get a proxy error.
If I just connect directly to k8s-node.example.org:31094 I do connect (also getting a 404 error), and clicking ‘Jupyter’ above that takes me to the password page.
I think the containers are still namespaced correctly and separated; /etc/ood/config/hook.env looks to handle just the networking.
I’ll be honest, I’ve never even seen these hooks until your post, so I’m still trying to figure this out myself.
But from what I understand of k8s, we are allowing requests from the local network into the container in this namespace, and that won’t break the separation between namespaces.
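A policy along those lines could look roughly like the sketch below: allow ingress from the local network by source IP, which leaves traffic from pods in other namespaces blocked by default. The namespace name, label, and CIDR are all assumptions for illustration, not what OnDemand actually generates, and note that depending on the CNI and NodePort SNAT behavior the source IP seen by the policy may differ from the client’s.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-local-network
  namespace: user-jsmith        # per-user namespace (assumed name)
spec:
  podSelector: {}               # apply to every pod in this namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 192.168.0.0/16    # your local/campus network (assumption)
    # ipBlock matches only source IPs; pods in other namespaces are not
    # matched by it, so cross-namespace isolation is preserved.
```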
Hmmm, this is making a bit more sense now. The hook files below will likely answer your questions around networking and namespacing:
Namespace hook:
Networking hook:
So when all of this is cobbled together, you’ll be able to use the hooks to get network access with those settings while still keeping your containers namespaced. I’m not a k8s expert, so hopefully I’m using all these terms correctly.