Kubernetes – List of Frequently Used Commands

— This is a living post — New commands will be added as I discover any of use —


#get information about namespace
kubectl describe namespaces <namespace>
#get information about a pod
kubectl describe pods <podname> -n <namespace>
#delete pod in specific namespace
kubectl delete pods -n <namespace> <pod_name>
#logs for specific pod
kubectl logs <podname>
#cordon off node
kubectl cordon <nodename>
#list Services
kubectl get services
#delete ALL namespaces (destructive – use with caution)
kubectl delete namespaces --all


#count number of returned pods (--no-headers keeps the header row out of the count)
kubectl get pods -n <namespace> --no-headers | wc -l
#change lifetime of namespace (lifetime-minutes is a custom annotation)
kubectl annotate namespace <namespace> lifetime-minutes=400 --overwrite
#find pods running on same node and output in wide format
kubectl get pods --all-namespaces --field-selector spec.nodeName=<nodename> -o wide
#logs for specific container on pod
kubectl logs <podname> -c <containername>
#watch nodes 
kubectl get nodes -w
#execute shell in container on pod
kubectl exec -it <podname> -- bash
#List nodes with their labels
kubectl get nodes --show-labels
#FORCE delete when pods are stuck in TERMINATING
kubectl delete pods --grace-period=0 --force --all --namespace <namespace>
#Get all pods in all namespaces
kubectl get pods --all-namespaces --output=wide


#scale all deployments in a namespace to zero
kubectl scale --replicas=0 -n <namespace> deployment $(kubectl get deployments -n <namespace> --no-headers | awk '{ print $1 }')
#drain node (newer kubectl versions use --delete-emptydir-data instead of --delete-local-data)
kubectl drain <nodename> --force --ignore-daemonsets --delete-local-data
#show usage for node
kubectl top node <nodename>  
#show usage for pod
kubectl top pod <podname>
#follow logs
kubectl logs <pod-name> -n <namespace> -f
#view a running container's environment variables
kubectl exec -it -n <ci-namespace> <runner-pod> -- env | grep <CI_BUILD>
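Several of the one-liners above pipe `kubectl get` output through standard text tools. As a quick illustration of the pattern – using canned output rather than a live cluster – this is how `awk` pulls out the first column the way the scale-all-deployments one-liner relies on (the deployment names here are invented for the demo):

```shell
# Canned output in the shape `kubectl get deployments` produces (assumed format)
sample='NAME   READY   UP-TO-DATE   AVAILABLE   AGE
web    3/3     3            3           10d
api    2/2     2            2           5d'

# Print the first column of every line except the header row
echo "$sample" | awk 'NR > 1 { print $1 }'   # prints: web api (one per line)
```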

DevSecOps – Integrating Vulnerability Scanning Within the CI/CD Pipeline Part 1

Recently I made a slight career change – a shift into DevSecOps as opposed to pure security – and I’m excited and loving it so far. My first task: integrate security scanning within the pipeline. Disclaimer: this article is very beginner friendly.

The tech stack that supports our CI/CD consists of Kubernetes, Docker and Gitlab. I didn’t find much on getting automated security scanning going with Gitlab, so I basically started from scratch.

Picking a tool

Coming from a pentesting background, I’m comfortable with both OWASP ZAP and Burp Suite. Burp Suite recently released an enterprise edition that touts CI integration, but my gut told me it would be easier with ZAP, since there would likely be more support online if (which turned into when) I ran into difficulties. The first thing to do was find a Docker container with ZAP installed. This did the trick.

Getting Started

I had found a container online, but I had no idea how Gitlab worked. I knew Gitlab was what we used for our entire SDLC, but I wasn’t sure exactly how the CI/CD integration worked. It turns out it works as follows:

Once you have committed your code, it is compiled and a build is created. That build is run by Gitlab, which carries out a specific set of jobs that you can define in a YAML file. A YAML file is basically a configuration file that tells the Gitlab runners (which carry out the jobs) what to do. In our case, it will need to tell the runners to kick off an automated scan. Remember, the idea here is to get a scan going every time a developer commits a piece of code. If the scan picks up any HIGH severity vulnerabilities, the build should fail.
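As a sketch of the shape such a file takes, a minimal `.gitlab-ci.yml` with a single job might look like the following (the job name, stage name and script path are placeholders for illustration, not our actual pipeline):

```yaml
stages:
  - scan            # pipeline stages run in the order listed here

run-zap-scan:       # a job; a Gitlab runner picks this up
  stage: scan
  script:
    - ./run-scan.sh # the commands the runner executes
```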

Configuring ZAP

ZAP needs to be able to kick off a scan automatically and fail the build if a High or Critical vuln is discovered. The following does the job:


#!/bin/bash
zap-cli --verbose quick-scan -sc --start-options '-config api.disablekey=true' \
  https://xxxx.xxxx.xxx.xxx \
  -l Informational | tee scan_result.txt

# Get results: count the lines flagged High or Critical
alerts=$(grep -cE 'High|Critical' scan_result.txt)
echo "$alerts"

if [ "$alerts" -gt 0 ]; then
  echo "$alerts findings. Build failing"
  exit 1
fi
echo "success - no findings identified"
exit 0
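The counting logic is the part that most often goes wrong, so here is the High/Critical gate exercised against a canned result file – the findings below are invented purely for the demo:

```shell
# Write a fake scan result to a temp file (contents made up for illustration)
cat > /tmp/scan_result_demo.txt <<'EOF'
Informational: Comment found in HTML
Low: X-Content-Type-Options header missing
High: Cross Site Scripting (Reflected)
High: SQL Injection
EOF

# -c counts matching lines; -E enables the High|Critical alternation
alerts=$(grep -cE 'High|Critical' /tmp/scan_result_demo.txt)
echo "$alerts"   # prints 2

# In the real script this branch would `exit 1` to fail the build
if [ "$alerts" -gt 0 ]; then
  echo "$alerts findings. Build failing"
fi
```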

Save the ZAP scan script in your repo as a shell script. In my case I am using Gitlab, so the next step is to tell the Gitlab runner what to do by creating a Gitlab CI YAML file. The below does the trick.

image: docker-registry.public-xxxxx.xxxxx.xxxxx/owasp/zap2docker-weekly

stages:
  - zap

zap-scan:
  stage: zap
  tags: [ kubernetes-deploy ]
  script:
    - ./quick-scan-zap.sh