So for the testing my group is doing, we have set up a test k8s cluster using microk8s. The Kubernetes setup is hosted on a different machine than the one OOD is on, and we have confirmed that the OOD system can talk to the k8s setup via the kube API. Based on the corresponding page in the docs, I’ve set up the following `k8s.yml` at `/etc/ood/config/clusters.d/k8s.yml`:
```yaml
v2:
  metadata:
    title: "K8s"
  login:
    host: "192.168.7.254"
  job:
    adapter: "kubernetes"
    config_file: "/home/ubuntu/kubeconfig"
    cluster: "k8s"
    context: "k8s"
    bin: "/bin/kubectl"
    username_prefix: ""
    namespace_prefix: ""
    all_namespaces: false
    auto_supplemental_groups: false
    server:
      endpoint: "https://192.168.7.254:16443"
      # cert_authority_file: "/etc/pki/tls/certs/kubernetes-ca.crt"
    auth:
      type: "oidc"
    mounts: []
  batch_connect:
    ssh_allow: true
    header: "#!/bin/bash"
```
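In case it’s relevant, the connectivity I mentioned above can be double-checked from the OOD host against the same kubeconfig and context the file references (the paths and names below are just ours):

```bash
# From the OOD host: confirm the kubeconfig referenced in k8s.yml actually
# contains a "k8s" context, and that the API at 192.168.7.254:16443 answers.
kubectl --kubeconfig=/home/ubuntu/kubeconfig config get-contexts
kubectl --kubeconfig=/home/ubuntu/kubeconfig --context=k8s cluster-info
kubectl --kubeconfig=/home/ubuntu/kubeconfig --context=k8s get nodes
```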
I don’t immediately see anything that would cause this setup not to work, but I was hoping someone could shed some more light on the various components of the cluster configuration file and clarify whether the bootstrapping steps are necessary to run. If so, I assume the VERSION I need is the server/client version reported by `kubectl version`.
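For reference, this is where I was planning to pull that value from, using the same kubeconfig as above (the output format varies by kubectl release, so treat this as a sketch):

```bash
# Report the client and server versions as seen from the OOD host, using the
# kubeconfig and context from k8s.yml. The server version is what I'd assume
# VERSION should match.
kubectl --kubeconfig=/home/ubuntu/kubeconfig --context=k8s version
```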
Additionally, if simply clicking the ‘shell access’ button won’t prove that things are working properly, I’d be happy to know another way to test that OOD is able to use k8s.
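To make the question concrete, the kind of manual smoke test I had in mind looks roughly like the following; the pod name and image are placeholders I picked, and it only exercises kubectl with the kubeconfig OOD is configured with rather than OOD itself, so I’m not sure it proves much more than the shell button would:

```bash
# Rough smoke test from the OOD host: can the credentials/context that OOD is
# configured with actually create a pod, read its logs, and clean it up?
KCFG=/home/ubuntu/kubeconfig

kubectl --kubeconfig="$KCFG" --context=k8s run ood-smoke-test \
  --image=busybox --restart=Never -- sh -c 'echo hello from k8s'

# Give the pod a moment to schedule and run, then check its status and output.
sleep 15
kubectl --kubeconfig="$KCFG" --context=k8s get pod ood-smoke-test
kubectl --kubeconfig="$KCFG" --context=k8s logs ood-smoke-test

# Clean up the test pod.
kubectl --kubeconfig="$KCFG" --context=k8s delete pod ood-smoke-test
```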