
WordPress: How to Protect Your Site When You Think It's Been Hacked by Ammar Naeem


You're running your WordPress site like a real champ, publishing the latest blog posts and selling lots of products. Everything is going great.

All of a sudden, your site gets hacked. Before, you were a proud sailor sailing the smooth seas. Now, you're faced with a danger you've never witnessed before. Your next steps will dictate the rest of your WordPress journey.

What will you do?

Well, the first mistake webmasters make is to panic. It's tempting, we know, but panicking should be the last thing on your mind. 

The first thing would be to stay calm and identify where the hack has occurred. If you are in this situation, we feel that this article will be of great importance to you. 

Let us look at some of the signs that indicate whether or not your site is hacked.

Six Common Signs Your WordPress Security Has Been Breached

There are some subtle and some tell-tale signs that your site has been breached. However, it is essential to differentiate between what constitutes a hack, and what doesn't.

That said, the following are some symptoms indicating that your site has been hacked: 

  • You're witnessing changes to your site that you haven't made. 
  • Your username and password are fine, but somehow you can't log in.
  • You're being redirected to another site. 
  • You're getting warnings from Google that your site may have been hacked.
  • You've got a notification from your hosting provider saying your site's been hacked. 
  • Your security plugin is giving you a notification about unexpected changes. 

Why Was Your Site Breached in The First Place?

The reasons behind this vary from site to site. But generally, such hacks occur due to the following reasons:

Hackable Passwords:

It's 2020, and there are still people who use "admin" or their site name as their WordPress password. Speaking of which, some users still have "password" as their password. Imagine that.

Not only is this harmful, but it also indicates a lack of awareness with regard to security. Having a secure password is a necessity, not only for your WordPress admin account, but also for your user, FTP, and hosting accounts.

Outdated Software:

Your plugin and theme providers are continually making updates to their respective projects. Once they release an upgrade, you get a notification to update to the newer version. Failing or neglecting to update your plugins will ultimately leave your site vulnerable to hacks.

Dodgy Codebase: 

The biggest mistake people make is installing themes or plugins from providers that aren't listed in the official directory. You must always install them from a reputable plugin provider (when going for paid add-ons or themes) or the official WordPress directory.

While such plugins may promise "superior" features, you are always at risk of installing a plugin with an insecure codebase.

How Do Such Breaches Take Place?

To give you some perspective, here are some of the most common ways hackers can gain access to your site:

  • Backdoors: Hackers compromise your site by planting malicious code within script files.
  • Pharma Hacks: Again, malicious code is inserted, this time into outdated WordPress versions.
  • Brute-Force: Hackers use automated scripts to try endless username and password combinations until one grants access.
  • Malware Redirection: Through backdoors, hackers add malware-filled redirects to your site.
  • Cross-Site Scripting (XSS): Enables hackers to inject malicious scripts into your WordPress pages, which then execute in your visitors' browsers.
  • DoS Attacks: Denial-of-Service (DoS) attacks happen when hackers find vulnerabilities and exploit them to make the site unusable.

While breaches are common, to the general reader concerned about their security they are a cause for worry. But don't panic: what follows will help you overcome these vulnerabilities, even if you're not as tech-savvy as your digital enemies.

WordPress Security: The Action Plan Against Vulnerabilities

Now, let's look at the action plan you want to take, step by step, in order to protect your site from vulnerabilities. Since we have already talked about not panicking in such a situation, this section will dive right into the technicalities of the whole situation.

Step 1: Ground Control – Putting Your Site on Maintenance Mode

Putting your site on maintenance mode has its benefits: you can work on fixing the vulnerabilities without letting your visitors see your site in its compromised state while you work.

The best practice at this point is to use a maintenance mode plugin that lets you build a landing page for your visitors, who can come back later when you're done making the fixes.

When looking for such a plugin, you must make sure that it lets you customize the maintenance page with your site's logo and color palette.

Step 2: Remove Malware

The next step is to install a malware-scanning plugin on your WordPress site. The benefit of installing such a plugin is that it automatically sniffs out malware on your site and makes the removal process a lot simpler. There are plenty of plugins you can use for that purpose.

Step 3: Reset Passwords

As we mentioned in the previous section, most breaches occur due to bad passwords. When your site gets hacked, you don't know which password caused the breach.

Therefore, you will want to perform a complete overhaul of all your passwords. Updating your passwords and making them stronger prevents hackers from easily accessing your site again.

From your hosting provider to your SFTP, user passwords, and more, make sure that the password changes are thorough.
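If you need strong replacement passwords in bulk, a password manager will generate them for you; as a minimal illustration of the idea, here is a short sketch using Python's secrets module:

import secrets
import string

def generate_password(length=24):
    # Cryptographically strong randomness, unlike the plain random module
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return ''.join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())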

Step 4: Update Plugins and Themes

Updating your plugins and themes is an important step in ensuring that your site does not get hacked in the future.

Visit your WordPress dashboard and go to Updates. Once there, install updates for everything that's outdated.

Make sure to attempt this fix before anything else, since outdated plugins can aggravate the vulnerabilities even further. Complete all the updates before you perform the more in-depth repairs.

Step 5: Remove Users

In your Users list, if you see a user that you don't remember adding, feel free to remove it. Before doing this, however, check with your administrators and other users to confirm whether they have recently changed their credentials.

Step 6: Remove Unwanted Files

With the help of a plugin like Wordfence or Sucuri, you can scan your site for potentially harmful files that may have infected your WordPress installation. Keeping these plugins in the long run is also beneficial, since they keep you regularly updated about changes made to your files.
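Such plugins typically compare your files against known-good copies and flag recent changes. As a rough illustration of the underlying idea (the site root path and the time window below are hypothetical), a scan for recently modified files might look like this in Python:

import os
import time

SITE_ROOT = "/var/www/html"   # hypothetical WordPress root
WINDOW = 7 * 24 * 3600        # flag files changed within the last 7 days

now = time.time()
for dirpath, _, filenames in os.walk(SITE_ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        if now - os.path.getmtime(path) < WINDOW:
            print("recently modified:", path)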

Step 7: Clean Out Your Sitemap and Resubmit to Google

Hacks are a nightmare for SEO personnel because search engines start to penalize your site. When a search engine like Google crawls your site, it checks your sitemap.xml file; if it finds several potentially harmful files there, it downgrades your rankings.

Using a plugin like Yoast, or any SEO plugin of your choice, you can resubmit your sitemap to Google Search Console. But be patient, since it takes time for the crawler to revisit your site.

Step 8: Reinstall Plugins and Themes

If you still feel that your site is facing problems, then it's best to reinstall the previously installed plugins or themes.

Speaking of themes, if you purchased yours from an external vendor and are still facing vulnerabilities, then it's time to consider switching to a new vendor or installing a theme from the official WordPress theme directory.

Step 9: Reinstall WordPress Core

If you've performed all of the security measures we've talked about but are still facing security issues, then as a last resort, you should reinstall the WordPress core itself.

With a clean WordPress installation, you can upload secure versions of your theme and plugins. Before you do that, it's best to back up your site, including the wp-config.php and .htaccess files, to prevent data loss in case they're overwritten.

Step 10: Clean Out Your Database

This step is reserved for users who feel or are certain that their WordPress database has also been hacked.

If you're such a user, then it's best to clean up your database, since doing so not only makes your site run faster but also reduces its resource usage.

WordPress security: Preventing a Future Breach

So, you've fixed the issues currently plaguing your site. Now it's time to plan and ensure that such a breach does not occur again.

While the previous section runs in tandem with this one, there are plenty of other steps you can take to prevent your site from being hacked again. Apart from the ones we have already talked about, let's look at some additional steps you can take to avoid future hacks:

1. Don't Install Insecure Plugins or Themes

When you go about purchasing or installing a plugin on your WordPress site, make sure that it is compatible with your version of WordPress. Also, try to confirm that the plugin provider is a reputable source by reading reviews of both the plugin and the provider.

2. Install SSL on Your Site

SSL adds an extra layer of security to your site and is an indication to Google that you care about your site's security. If your hosting provider gives you an SSL certificate, that's great. If not, you can purchase one and integrate it into your site through an SSL plugin.

3. Avoid Cheap Hosting

Shared servers, while being suitable for beginners, are generally not that beneficial to users who want a secure website.  

Look at it this way. In a shared apartment, you only have a dingy little room and a shared living space. For a bachelor (read: a WordPress beginner) that's more or less okay, but for a family man with privacy concerns, it's not suitable.

If you're looking at your business in the long term, then a managed or advanced hosting service is the best option.

Not only does it give you your own "house" to work in, but it also ensures that you don't run into any complications with regard to security in the future.

WPEngine and Kinsta are great choices if you are looking for a reliable, managed hosting provider that's built to scale.

4. Set up a Firewall

There are plenty of firewall plugins available that prevent malware from entering your site one way or another. A firewall also creates an additional barrier protecting your site from heinous DoS attacks.

5. Install a Security Plugin

Similar to the previous step, there are plenty of security plugins that keep you updated regarding the condition of your site. They ensure that you are aware of any unwanted activity, or any unwanted data files on your WordPress site.

Summary

We get it, having your site hacked is a bad experience, which you would ideally never want to go through.

Its impact on business performance, user experience, and your bottom line cannot be disregarded. In the world of WordPress, therefore, vigilance with such matters is an important consideration.

The symptoms, steps, and prevention strategies that we've mentioned above can prove rather useful if you want to protect your site from breaches today, and for the foreseeable future.

Lastly, it pays to stay vigilant. So stay safe, and stay informed.


About the Author:
Ammar Naeem is a security nerd and WP-writer at Codup.co. When he's not busy covering the latest WordPress trends, you will find him reading comics and history books, or watching TV shows.



Robber is an open source tool for finding executables prone to DLL hijacking


Robber is a free open source tool developed using Delphi XE2 without any 3rd party dependencies.

  • In Version 1.7, Robber doesn't require administrator rights by default thanks to the new write-permission check feature; if you want to scan somewhere like 'Program Files', you will need to run Robber with admin rights.

What is DLL hijacking?

Windows has a search path for DLLs in its underlying architecture. If you can figure out which DLLs an executable requests without an absolute path (triggering this search process), you can place your hostile DLL somewhere higher up the search path so it'll be found before the real version, and Windows will happily feed your attack code to the application.

So, let's pretend Windows's DLL search path looks something like this:

A) . <-- the executable's own directory, highest priority, first check

B) \Windows

C) \Windows\system32

D) \Windows\syswow64 <-- lowest priority, last check

and some executable "Foo.exe" requests "bar.dll", which happens to live in the syswow64 (D) subdirectory. This gives you the opportunity to place your malicious version in A), B), or C), and it will be loaded into the executable.

That said, even an absolute full path can't protect against this if you can replace the DLL itself with your own version.

Microsoft Windows protects system paths like System32 using the Windows File Protection mechanism, but the best ways to protect an executable from DLL hijacking in enterprise solutions are:

  • Use an absolute path instead of a relative path
  • If you have a code-signing certificate, sign your DLL files and verify the signature in your application before loading a DLL into memory. Otherwise, compare the hash of the DLL file with the original DLL's hash.

And of course, this isn't really limited to Windows either. Any OS which allows for dynamic linking of external libraries is theoretically vulnerable to this.

Robber uses a simple mechanism to figure out which DLLs are prone to hijacking (a sketch of step 1 in Python follows the list):

  1. Scan the import table of the executable and find the DLLs linked to it
  2. Search for DLL files in the executable's directory that match the linked DLLs (as noted above, the executable's own directory has the highest priority)
  3. If any DLLs are found, scan their export tables
  4. Compare the import table of the executable with the export table of each DLL; if any match is found, flag the executable and the matched common functions as DLL hijack candidates
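Step 1 can be sketched in Python with the third-party pefile library (pip install pefile); the executable name here is just a placeholder, and Robber's own implementation is in Delphi, so this is only an approximation of the same idea:

import pefile

pe = pefile.PE("Foo.exe")  # hypothetical target executable
if hasattr(pe, "DIRECTORY_ENTRY_IMPORT"):
    for entry in pe.DIRECTORY_ENTRY_IMPORT:
        dll_name = entry.dll.decode()
        # Ordinal-only imports have no name, hence the filter
        funcs = [imp.name.decode() for imp in entry.imports if imp.name]
        print(dll_name, funcs)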

Features :

  • Ability to select scan type (signed/unsigned applications)
  • Determine executable signer
  • Determine which referenced DLLs are candidates for hijacking
  • Determine exported method names of candidate DLLs
  • Configure rules to determine which hijacks are the best or good choices, and show them in different colors
  • Ability to check write permission of the executable's directory, which is a good indicator of a hijacking candidate

Find the latest Robber executable here


More: https://github.com/MojtabaTajik/Robber


Application Security: A Broader Perspective by Hardik Shah


Modern applications come with many challenges, and security is critical yet often under-emphasized. Apps are a favorite medium for cybercriminals seeking to steal data or breach users’ security defenses. According to cybersecurity research, there were over 3,800 publicly disclosed data breaches, exposing 4.1 billion compromised records. There’s a vast amount of data stored in applications, and with a considerable number of transactions taking place in them, comprehensive app security is a must.

In this blog post, you will learn:

  • What is application security?
  • What is the importance of application security?
  • Classes of threats
  • Application Security Checklist
  • Security Testing Approaches
  • Application Security Tools

What is application security?

Application security, or “AppSec,” is the process of making apps more secure by finding and fixing vulnerabilities and enhancing the security of applications. It also involves protecting confidential data from exposure to unauthorized individuals, ensuring that users cannot misuse the application’s functionality, and ensuring that no user can deny the app’s functionality to other users.

What is the importance of application security?

In this modern digital world, going online can expose everyone to several harmful cyber threats. Whether it is inputting credit card data or confirming our identity, there is always a risk. In the same way, apps developed without considering security can expose users to vulnerabilities that cause different levels of damage. To prevent data breaches and secure users’ data, such as credit/debit card and bank details, application security is vitally important.

When it comes to threats, application-layer attacks are a frequent pattern. As these threats intensify, so do the security regulations organizations have to understand and comply with. Hence, in the new software-driven landscape, application security has become crucial.

Classes of Threats 

Make sure to account for the following common classes of threats while designing security into apps:

SQL Injection

SQL Injection (SQLi) is one of the most common application-layer attacks. It uses malicious SQL code for backend manipulation to access information that was not intended to be displayed. This can include any sensitive data: the company’s confidential information, user lists, or private customer details. SQL injection takes advantage of loopholes in an app’s implementation, allowing a hacker to compromise the system.

To test for SQL injection, focus on input fields like text boxes and comment forms. To prevent injection, special characters in input should be properly escaped, or better, values should be passed to the database through parameterized queries.
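As a minimal illustration using Python’s sqlite3, the difference between a vulnerable string-built query and a parameterized one looks like this:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Vulnerable: attacker-controlled input is concatenated into the SQL string.
# query = "SELECT email FROM users WHERE name = '" + user_input + "'"

# Safe: the driver binds the value, so it is never parsed as SQL.
rows = conn.execute("SELECT email FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # [] -- the injection attempt matches no user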

Unauthorized Data Access

Unauthorized access refers to individuals accessing data, networks, endpoints, apps, or devices without permission. It is one of the most widespread threats, and it is all about gaining unauthorized access to data within the app, whether on servers or across a network. This threat includes unauthorized access to:

  • Data through data-fetching operations
  • Data by monitoring the access of others
  • Reusable client authentication information by monitoring the access of others

Privilege Elevation

It is a category of threat where hackers who have accounts on a system use them to increase their privileges to a higher level than they were meant to have. If successful, this type of attack can result in a hacker gaining privileges as high as root on a UNIX system. Once a hacker gains such privileges, he/she can run code with this level of privilege, and the whole system is effectively compromised.

URL Manipulation

It is the process of manipulating the website URL query strings and capturing critical information by hackers. It generally happens when the app uses the HTTP GET method to pass information/data between the client and the server. The information or data is passed in parameters in the form of a query string. The tester modifies a parameter value in the query string to check if the server accepts it. 

Cross-Site Scripting (XSS)

XSS is a type of computer security vulnerability commonly found in web applications. It lets attackers inject client-side scripts into web pages viewed by other users.

Attackers typically trick users into clicking a crafted URL. Once the user’s browser executes it, the code performs actions such as changing the behavior of the website, stealing personal information, and performing actions on behalf of the user.
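The standard defense is to escape untrusted data before it is rendered; a minimal sketch using Python’s html module:

import html

user_comment = '<script>steal(document.cookie)</script>'

# Rendered raw, this would execute in the victim's browser.
# Escaped, it becomes inert text.
print(html.escape(user_comment))
# &lt;script&gt;steal(document.cookie)&lt;/script&gt;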

Data Manipulation

In a data manipulation threat, a hacker changes the data used by a website to gain some advantage. Hackers may also gain access to HTML pages and change them to be offensive.

Denial of Service (DoS)

A DoS threat is an explicit attempt to make network or machine resources unavailable to authorized users. Apps can be attacked in ways that render the app, and eventually the entire machine, unusable.

Application Security Checklist

Securing an app against numerous cyber threats means facing a veritable jungle of products, solutions, and services. Stick with the following app security checklist for securing and protecting your data in the current threat environment:

  1. Eliminate vulnerabilities before apps go into production.

It’s pivotal to address application security once the development is completed. On top of that, it is all-important to build security into your development teams, processes, and tools (technology).

  2. Embrace security tools that integrate into developers’ environments.

This can be done with an IDE plugin that allows developers to see the results of security tests directly in the IDE as they work on their code.

  1. Don’t forget to address security in architecture, design, and open-source third-party components

If you’re checking for bugs or running penetration tests against your system, you are likely to miss a substantial number of vulnerabilities in the software. 

  4. Make an “AppSec toolbelt” that brings together the solutions needed to address your app security risks.

An effective AppSec toolbelt must include integrated solutions that address app security risks end-to-end, providing analysis of vulnerabilities in proprietary code, open-source components, and runtime configuration and behavior.

  5. Analyze your app security risk profile so that you can focus your efforts.

It’s pivotal to know what is essential in terms of requiring a team of experienced security experts to analyze an app portfolio quickly and identify the specific risk profile for each app and its environment. 

  6. Make sure the team has appropriate resources and skills.

It is essential to provide high-quality training solutions to raise the level of application security skills in your firm.

  7. Develop a program to raise awareness of AppSec competency in your firm.

Don’t forget to mention focusing on the actions that will create value and a positive impact on your software security program at the minimal cost. 

  8. Augment internal staff to address skills and resource gaps.

It is often best to find a partner that can provide on-demand expert testing, optimize resource allocation at an affordable cost, and ensure complete testing coverage of your portfolio.

  9. Develop a structured plan to coordinate security initiative improvements with cloud migration.

Once you completely understand the risks, you can easily create a roadmap for cloud migration that ensures all teams are aligned and priorities are clear.

Security Testing Approaches

  • Security Architecture Study: The very first step is to understand the business’s requirements, goals, and objectives in terms of the firm’s security compliance. Test planning should account for all security factors.
  • Security Architecture Analysis: It includes understanding and analyzing the requirements of the app under test.
  • Security Testing Classification: This approach collects all system setup information used for the development of software & networks like operating systems, hardware, and technology. It includes the listing of vulnerabilities and security risks. 
  • Threat Modeling: It is based on the above step, and prepares a Threat profile. It works to identify, communicate, and understand threats. Also, it can be applied to a wide variety of things, including software, application systems, networks, business processes, etc. Threat modeling can be done at any stage of development, especially early - so that findings can inform the design. 
  • Test Planning: This approach is based on identified threats, vulnerabilities, security, and risks. It is all about preparing a test plan to address these issues. 
  • Traceability Matrix Preparation: This approach is prepared for each identified threat, vulnerabilities, and security risks. 
  • Security Testing Tool Identification: Not every type of security test can be executed manually, so it’s important to identify tools that can perform all security test cases faster and more reliably.
  • Test Case Preparation: This approach is all about preparing the security tests case document. The test case is a set of actions executed to verify a specific feature or functionality of a software app. It contains test steps, test data, precondition, and postcondition developed for particular test scenarios to verify the requirements.
  • Test Case Execution: This is the most important phase in the entire development lifecycle, because every team member’s contribution and work is validated in it.

This phase covers executing the security test cases and retesting the defect fixes. It is the process of executing the code and comparing the expected and actual results. It also covers executing the regression test cases. Regression testing is a partial or full selection of already executed test cases, re-executed to ensure existing functionality works seamlessly.

  • Reports: This includes preparing a detailed security testing report that contains the vulnerabilities and threats found, details the risks, and lists any issues that are still open.

Application Security Tools

Security mechanisms can be included right from the initial stages of development. Businesses have been gradually moving towards incorporating security practices in the development process to achieve the highest level of security. Application security testing is mainly divided into two categories:

  • Static Analysis or SAST (Static Application Security Testing)
  • Dynamic Analysis or DAST (Dynamic Application Security Testing)

Static Analysis

Also known as static application security testing (SAST) or white-box testing, this technique looks at the app from the inside out. Testing is performed without executing the program; instead, the source code, byte code, or application binaries are examined for signs of security vulnerabilities. SAST scans an app before the code is compiled.

SAST takes place early in the software development lifecycle (SDLC), as it does not require a working application. The best thing about SAST is that it quickly resolves issues without breaking builds. SAST tools give developers real-time feedback while coding, helping them fix issues early rather than treating security as an afterthought.

SAST tools also provide graphical representations of the issues found and make it easier to navigate the code. They offer in-depth guidance on how to fix issues and the best place in the code to fix them, without requiring in-depth security domain knowledge.

  • It helps to find the exact location of vulnerability.
  • It scales more easily.
  • It integrates easily into the development process.
  • It finds vulnerabilities earlier in SDLC. 

Dynamic Analysis

Dynamic Analysis, also known as dynamic application security testing (DAST), is a form of black-box testing. DAST examines an app while it is running and tries to hack it just like an attacker would, simulating attacks against a web app and analyzing the app’s reactions to determine whether it is vulnerable.

SCA (Software Composition Analysis)

Developers are under extreme pressure to deliver code quickly, which is why the usage of open-source components has increased. In the wake of the Heartbleed and Struts-Shock vulnerabilities, several organizations are looking for ways to manage and track their component use.

SCA technologies help keep track of which apps are using each component and which versions are being used. With such data, corporations can more easily update components to the latest version when new vulnerabilities are discovered.
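The inventory step can be sketched for a Python environment with the standard library; a real SCA tool would then match each name and version against a vulnerability database:

from importlib.metadata import distributions

# List every installed package and its version (Python 3.8+).
for dist in distributions():
    print(dist.metadata["Name"], dist.version)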

Penetration Testing 

Penetration testing, also known as a pen test, is a simulated cyberattack against computer systems to check for exploitable vulnerabilities. 

In this testing, a security consultant or pen tester manually checks an app for security vulnerabilities, with no visibility into the internal workings of the app. It is commonly used to augment a Web Application Firewall (WAF) in the context of web application security. The good thing about penetration testing is that it has a low false-positive rate and is a comprehensive method of security testing.

RASP (Runtime Application Self Protection)

RASP is a technology that runs on a server and kicks in when an app runs. It is specifically designed to detect attacks on apps in real time. When an app starts to run, RASP protects it from malicious input or behavior by analyzing both the app’s behavior and the context of that behavior.

Conclusion

Since demand for applications keeps increasing, the need for application security has been growing in organizations for the last few years. An application security program has therefore become a necessity for many organizations. People, process, and technology must all be addressed to ensure effective application security.

If you want to share any suggestions or feedback, please use the comment box.


About the Author:

Hardik Shah works as a Tech Consultant at Simform that provides application development services. He leads large scale mobility programs that cover platforms, solutions, governance, standardization, and best practices. Connect with him to discuss the best practices of software methodologies @hsshah_



FalconZero - A stealthy, targeted Windows Loader for delivering second-stage payloads (shellcode) to the host machine undetected


Introducing FalconZero v1.0 - a stealthy, targeted Windows Loader for delivering second-stage payloads (shellcode) to the host machine undetected - the first public release of the Loader/Dropper from the FALCONSTRIKE project

Features

  • Dynamic shellcode execution
  • Usage of Github as the payload storage area - the payload is fetched from Github
  • Targeted implant Loader - only execute on targeted assets - thwart automated malware analysis and hinder reverse engineering on non-targeted assets
  • Killdates - implant expires after a specific date
  • Stealthy shellcode injection technique without allocating RWX memory pages in victim process to evade AV/EDRs - currently injects to explorer.exe
  • Sensitive strings encrypted using XOR (see the sketch after this list)
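On that last point, XOR with a repeating key is symmetric: applying it twice restores the original. A minimal Python sketch of the general idea (the key and the string are invented, and FalconZero's actual implementation may differ):

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key; encryption and decryption are identical
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = b"https://raw.githubusercontent.com/..."  # e.g. a payload URL
key = b"\x5a\xc3\x19"                              # hypothetical key

encrypted = xor_bytes(secret, key)    # what would be embedded in the binary
assert xor_bytes(encrypted, key) == secret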

Payload Compatibility

And support for many more...

The ones mentioned in the list are the ones verified by the testing team.

Usage

There are many hard things in life, but generating an implant shouldn't be one. That's why the generate_implant.py script has been created to make your life a breeze. The process is as simple as:

1. Generate your shellcode as a hex string.
2. Upload it to Github and copy the Github raw URL. For testing (MessageBox shellcode): https://raw.githubusercontent.com/slaeryan/DigitalOceanTest/master/messagebox_shellcode_hex_32.txt
3. git clone https://github.com/slaeryan/FALCONSTRIKE.git
4. cd FALCONSTRIKE
5. pip3 install -r requirements.txt
6. python3 generate_implant.py

Follow the on-screen instructions and you'll find the output in the bin directory if everything goes well.

AV Scan of FalconZero implant

TO-DO

This is an alpha release, and depending on the response, many more upgrades to existing functionality are coming soon.

Some of them are:

  • Integrate various Sandbox detection algorithms
  • Integrate support for more stealthy shellcode injection techniques
  • Integrate function obfuscation to make it stealthier
  • Include a network component to callback to a C2 when a Stage-2 payload is released or to change targets/payloads and configure other options on-the-fly
  • Inject to a remote process from where network activity is not unusual for fetching the shellcode - better OPSEC
  • Include active hours functionality - Loader becomes active during a specified period of day, etc.

Feel free to communicate any further features that you want to see in the next release. Suggestions for improving existing features are also warmly welcome :)

Author

Upayan (@slaeryan) [slaeryan.github.io]


More: https://github.com/slaeryan/FALCONSTRIKE


Proxy Cheat Sheet by James Kattler


With more cyber threats emerging and governments trying to access more information about our activity online, users turn to proxies to remain anonymous. However, mere anonymity is not the only reason why this technology became so popular over the past couple of years. Proxies are a great aid in a lot of business processes and complex tech tasks. To understand this tool, let’s study all the details about it.

First, let us define proxies

A proxy is a server users can connect to. It can be a standard server you’d imagine if you think about a data center, for example. And it can be some device that works as a proxy server. We will get into all the differences a bit later. For now, we will focus on the concept.

So, we have a remote server or a device, and we can connect to it. Doing so, we will reroute our traffic through it, and only then head to the destination website. On our way, we will pick up the IP address of the server or gadget and mask our real IP. Therefore, when we reach the destination website, it will not see our real data. That’s how we can pretend to be someone else by using proxies. And that’s how one can remain anonymous online.

If you’re new to proxies, they might seem a bit fishy to you. But in reality, they’re completely legal, and you’re not breaking any law by using them. Well, until you try to use someone’s device as a proxy without their consent. But if you’re not an advanced enough hacker to do that, your proxy provider is responsible for making sure its network doesn’t violate anyone’s rights.

Proxies are very similar to a VPN, so many people confuse them. The primary difference is that you can apply a proxy to a certain stream of traffic — for example, your browser, or some other program that is connected to the Internet. Also, you can control what server you’re connecting to and, therefore, which IP you’re using. A VPN, on the other hand, applies changes to all the outgoing traffic and doesn’t let you choose IPs. So proxies are more precise, and for some tasks, that’s exactly the precision you need.
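For example, with Python's requests library a proxy applies only to the requests you route through it (the proxy address and credentials below are placeholders):

import requests

proxies = {
    "http": "http://user:pass@proxy.example.com:8080",
    "https": "http://user:pass@proxy.example.com:8080",
}

# Only this request is rerouted; the rest of the system's traffic is untouched.
resp = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(resp.text)  # shows the proxy's IP address, not yours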

When proxies are used

The most widely spread use case for proxies is data gathering. Since you can apply this tool to a web scraper and control the rotation of IP addresses, proxies are very convenient for acquiring information from the internet. They allow a scraper to access geo-restricted pages, gather more accurate data, and avoid anti-scraping measures website owners use — this is how price aggregators can scale the process of gathering price intelligence.

Another way to utilize proxies is to make sure your target audience from different locations sees your targeted ads or to check out the ads of your competitors. That’s why this tool became so popular among marketing managers. SEO specialists also like using proxies to check the results of the optimization and gather some valuable information from websites of competitors. And SMM specialists use proxies to manage several accounts on social media without the risk of getting blocked. 

Proxies are also useful for testing. One can apply them to make sure the interface of a site or app works properly from all locations. Also, proxies are useful for cybersecurity testing — using them, specialists can simulate attacks. 

So, as you can see, there are many uses for proxies. However, many people get them to simply access geo-restricted content.

What are forward and reverse proxies?

When trying to understand proxies, many users get confused when they encounter the terms “forward” and “reverse”. This is a more advanced detail about proxies that will be useful for IT specialists.

Proxies that you’d use for accessing geo-restricted websites, scraping, social media marketing, marketing research, and so on, are forward ones. They process your traffic, apply their IP address to it, and forward it to a destination server — the website you want to visit. Thus, they hide your identity from the server.

Reverse proxies hide the main server from users. They retrieve data from users without allowing them to access the main server, but the traffic, in the end, is assigned to this server. Reverse proxies are useful for protecting websites from DDoS attacks and malware. They can also distribute traffic across several servers to reduce load. Webmasters can use them to compress content or to force traffic through another website first.

But if you simply need to hide your IP for any reason, there is no need for you to fathom all the details of reverse proxies because you need forward ones.

Different kinds of forward proxies

Before we jump to all the types of proxies, we want to talk about free ones a bit. You can find free proxies, and they might satisfy your needs if all you want from them is to let you access a geo-restricted website. But they’re usually of low quality, and it will be very hard to use them for any more complex purposes. So if you’re looking for proxies for your professional needs, we advise you to stick to paid ones.

Some providers, such as Infatica, maintain a good balance between price and quality and offer reliable proxies at affordable costs. If you check out any of the existing vendors, you will see that they have different kinds of proxies.

Data center proxies

If you’re looking for the cheapest solution, these proxies should be your choice. Using them, you will connect to a shared server along with other customers of your provider. It will mask your IP address, but since there are many users connected to one server, you might experience issues with such tasks as scraping. Datacenter proxies can’t offer impeccable anonymity. However, they will be quite fitting for the needs of a social media manager, for example.

Residential proxies

Using these proxies, you will connect to a device that has a unique IP address issued by a real ISP. Proxy vendors source such IPs through a completely compliant network, so you have nothing to worry about: you will not violate the rights of the device's owner. Residential proxies offer high anonymity because you will appear to be a real resident of the country where the mediating device is located. They are perfect for scraping and marketing research.

Mobile proxies

These are residential proxies with IP addresses that are issued by a mobile operator and belong only to mobile devices. They’re great for testing and some specific marketing needs. Mobile proxies are the most expensive kind you can get because they’re difficult to source.

Now you can feel confident when looking for a proxy provider for your business needs. And remember — a good reliable vendor will always help you out if you can’t decide which proxies fit you best or you have some additional questions.


About the Author:

James Kattler is a web proxy solution specialist at infatica.io. His interests include information security, ethical hacking, and web development.


Sniffle - A sniffer for Bluetooth 5 and 4.x LE


Sniffle has a number of useful features, including:

  • Support for BT5/4.2 extended length advertisement and data packets
  • Support for BT5 Channel Selection Algorithms #1 and #2
  • Support for all BT5 PHY modes (regular 1M, 2M, and coded modes)
  • Support for sniffing only advertisements and ignoring connections
  • Support for channel map, connection parameter, and PHY change operations
  • Support for advertisement filtering by MAC address and RSSI
  • Support for BT5 extended advertising (non-periodic)
  • Support for capturing advertisements from a target MAC on all three primary advertising channels using a single sniffer. This makes connection detection nearly 3x more reliable than most other sniffers that only sniff one advertising channel.
  • Easy to extend host-side software written in Python
  • PCAP export compatible with the Ubertooth

Prerequisites for Sniffle

If you don't want to go through the effort of setting up a build environment for the firmware, you can just flash prebuilt firmware binaries using UniFlash/DSLite. Prebuilt firmware binaries are attached to releases on the GitHub releases tab of this project. When using prebuilt firmware, be sure to use the Python code corresponding to the release tag rather than master to avoid compatibility issues with firmware that is behind the master branch.

Note: it should be possible to compile Sniffle to run on CC1352P Launchpad boards with minimal modifications, but I have not yet tried this.

Installing GCC

The arm-none-eabi-gcc provided through various Linux distributions' package manager often lacks some header files or requires some changes to linker configuration. For minimal hassle, I suggest using the ARM GCC linked above. You can just download and extract the prebuilt executables.

Installing the TI SDK

The TI SDK is provided as an executable binary that extracts a bunch of source code once you accept the license agreement. On Linux and Mac, the default installation directory is inside ~/ti/. This works fine and my makefiles expect this path, so I suggest just going with the default here. The same applies for the TI SysConfig tool.

Once the SDK has been extracted, you will need to edit one makefile to match your build environment. Within ~/ti/simplelink_cc13x2_26x2_sdk_4_10_00_78 (or wherever the SDK was installed) there is a makefile named imports.mak. The only paths that need to be set here to build Sniffle are for GCC, XDC, and SysConfig. We don't need the CCS compiler. See the diff below as an example, and adapt for wherever you installed things.

diff --git a/imports.mak b/imports.mak
index 5a8fb0cb..e99a03e7 100644
--- a/imports.mak
+++ b/imports.mak
@@ -18,12 +18,12 @@
 # will build using each non-empty *_ARMCOMPILER cgtool.
 #
 
-XDC_INSTALL_DIR        ?= /home/username/ti/xdctools_3_61_00_16_core
-SYSCONFIG_TOOL         ?= /home/username/ti/ccs1000/ccs/utils/sysconfig_1.4.0/sysconfig_cli.sh
+XDC_INSTALL_DIR        ?= $(HOME)/ti/xdctools_3_61_00_16_core
+SYSCONFIG_TOOL         ?= $(HOME)/ti/sysconfig_1.4.0/sysconfig_cli.sh
 
 
-CCS_ARMCOMPILER        ?= /home/username/ti/ccs1000/ccs/tools/compiler/ti-cgt-arm_20.2.0.LTS
-GCC_ARMCOMPILER        ?= /home/username/ti/ccs1000/ccs/tools/compiler/gcc-arm-none-eabi-9-2019-q4-major
+CCS_ARMCOMPILER        ?= $(HOME)/ti/ccs1000/ccs/tools/compiler/ti-cgt-arm_20.2.0.LTS
+GCC_ARMCOMPILER        ?= $(HOME)/arm_tools/gcc-arm-none-eabi-9-2019-q4-major
 
 # The IAR compiler is not supported on Linux
 # IAR_ARMCOMPILER      ?=

Obtaining DSLite

DSLite is TI's command line programming and debug server tool for XDS110 debuggers. The CC26xx and CC13xx Launchpad boards both include XDS110 debuggers. Unfortunately, TI does not provide a standalone command line DSLite download. The easiest way to obtain DSLite is to install UniFlash from TI. It's available for Linux, Mac, and Windows. The DSLite executable will be located at deskdb/content/TICloudAgent/linux/ccs_base/DebugServer/bin/DSLite relative to the UniFlash installation directory. On Linux, the default UniFlash installation directory is inside ~/ti/.

You should add the directory containing the DSLite executable to your $PATH.

Building and Installation of Sniffle

Once GCC, DSLite, and the SDK are installed and operational, building Sniffle should be straightforward. Just navigate to the fw directory and run make. If you didn't install the SDK to the default directory, you may need to edit SIMPLELINK_SDK_INSTALL_DIR in the makefile.

To install Sniffle on a (plugged in) CC26x2 Launchpad using DSLite, run make load within the fw directory. You can also flash the compiled sniffle.out binary using the UniFlash GUI.

If building for or installing on a CC1352R Launchpad instead of a CC26x2R, you must specify PLATFORM=CC1352R1F3, either as an argument to make, or by defining it as an environment variable prior to invoking make. Similarly, specify PLATFORM=CC2652RB1F when building for CC2652RB Launchpad instead of the regular CC26x2R version. Be sure to perform a make clean before building for a different platform.

Sniffer Usage

[skhan@serpent python_cli]$ ./sniff_receiver.py --help
usage: sniff_receiver.py [-h] [-s SERPORT] [-c {37,38,39}] [-p] [-r RSSI]
                         [-m MAC] [-a] [-e] [-H] [-l] [-o OUTPUT]

Host-side receiver for Sniffle BLE5 sniffer

optional arguments:
  -h, --help            show this help message and exit
  -s SERPORT, --serport SERPORT
                        Sniffer serial port name
  -c {37,38,39}, --advchan {37,38,39}
                        Advertising channel to listen on
  -p, --pause           Pause sniffer after disconnect
  -r RSSI, --rssi RSSI  Filter packets by minimum RSSI
  -m MAC, --mac MAC     Filter packets by advertiser MAC
  -i IRK, --irk IRK     Filter packets by advertiser IRK
  -a, --advonly         Sniff only advertisements, don't follow connections
  -e, --extadv          Capture BT5 extended (auxiliary) advertising
  -H, --hop             Hop primary advertising channels in extended mode
  -l, --longrange       Use long range (coded) PHY for primary advertising
  -o OUTPUT, --output OUTPUT
                        PCAP output file name

The XDS110 debugger on the Launchpad boards creates two serial ports. On Linux, they are typically named ttyACM0 and ttyACM1. The first of the two created serial ports is used to communicate with Sniffle. By default, the Python CLI communicates using /dev/ttyACM0, but you may need to override this with the -s command line option if you are not running on Linux or have additional USB CDC-ACM devices connected.

For the -r (RSSI filter) option, a value of -40 tends to work well if the sniffer is very close to or nearly touching the transmitting device. The RSSI filter is very useful for ignoring irrelevant advertisements in a busy RF environment. The RSSI filter is only active when capturing advertisements, as you always want to capture data channel traffic for a connection being followed. You probably don't want to use an RSSI filter when MAC filtering is active, as you may lose advertisements from the MAC address of interest when the RSSI is too low.

To hop along with advertisements and have reliable connection sniffing, you need to set up a MAC filter with the -m option. You should specify the MAC address of the peripheral device, not the central device. To figure out which MAC address to sniff, you can run the sniffer with RSSI filtering while placing the sniffer near the target. This will show you advertisements from the target device including its MAC address. It should be noted that many BLE devices advertise with a randomized MAC address rather than their "real" fixed MAC written on a label.

For convenience, there is a special mode for the MAC filter by invoking the script with -m top instead of -m with a MAC address. In this mode, the sniffer will lock onto the first advertiser MAC address it sees that passes the RSSI filter. The -m top mode should thus always be used with an RSSI filter to avoid locking onto a spurious MAC address. Once the sniffer locks onto a MAC address, the RSSI filter will be disabled automatically by the sniff receiver script (except when the -e option is used).

Most new BLE devices use Resolvable Private Addresses (RPAs) rather than fixed static or public addresses. While you can set up a MAC filter to a particular RPA, devices periodically change their RPA. RPAs can be resolved (associated with a particular device) if the Identity Resolving Key (IRK) is known. Sniffle supports automated RPA resolution when the IRK is provided, which avoids the need to keep updating the MAC filter whenever the RPA changes. You can specify an IRK for Sniffle with the -i option; the IRK should be provided in hexadecimal format, with the most significant byte (MSB) first. Specifying an IRK allows Sniffle to channel hop with an advertiser the same way it does with a MAC filter. The IRK-based MAC filtering feature (-i) is mutually exclusive with the static MAC filtering feature (-m).

To enable following auxiliary pointers in Bluetooth 5 extended advertising, enable the -e option. To improve performance and reliability in extended advertising capture, this option disables hopping on the primary advertising channels, even when a MAC filter is set up. If you are unsure whether a connection will be established via legacy or extended advertising, you can enable the -H flag in conjunction with -e to perform primary channel hopping with legacy advertisements, and scheduled listening to extended advertisement auxiliary packets. When combining -e and -H, the reliability of connection detection may be reduced compared to hopping on primary (legacy) or secondary (extended) advertising channels alone.

To sniff the long range PHY on primary advertising channels, specify the -l option. Note that no hopping between primary advertising channels is supported in long range mode, since all long range advertising uses the BT5 extended mechanism. Under the extended mechanism, auxiliary pointers on all three primary channels point to the same auxiliary packet, so hopping between primary channels is unnecessary.

If for some reason the sniffer firmware locks up and refuses to capture any traffic even with filters disabled, you should reset the sniffer MCU. On Launchpad boards, the reset button is located beside the micro USB port.

Scanner Usage

sultan@sultan-neon-vm:~/sniffle/python_cli$ ./scanner.py --help
usage: scanner.py [-h] [-s SERPORT] [-c {37,38,39}] [-r RSSI] [-e] [-l]

Scanner utility for Sniffle BLE5 sniffer

optional arguments:
  -h, --help            show this help message and exit
  -s SERPORT, --serport SERPORT
                        Sniffer serial port name
  -c {37,38,39}, --advchan {37,38,39}
                        Advertising channel to listen on
  -r RSSI, --rssi RSSI  Filter packets by minimum RSSI
  -e, --extadv          Capture BT5 extended (auxiliary) advertising
  -l, --longrange       Use long range (coded) PHY for primary advertising

The scanner command line arguments work the same as the sniffer. The purpose of the scanner utility is to passively gather a list of nearby devices advertising, without having the deluge of fast scrolling data you get with the sniffer utility. The hardware/firmware works exactly the same, but the scanner utility will record and report observed MAC addresses only once without spamming the display. Once you're done capturing advertisements, press Ctrl-C to stop scanning and report the results. The scanner will show the last advertisement and scan response from each target. Scan results will be sorted by RSSI in descending order.

Usage Examples of Sniffle

Sniff all advertisements on channel 38, ignore RSSI < -50, stay on advertising channel even when CONNECT_REQs are seen.

./sniff_receiver.py -c 38 -r -50 -a

Sniff advertisements from MAC 12:34:56:78:9A:BC, stay on advertising channel even when CONNECT_REQs are seen, save advertisements to data1.pcap.

./sniff_receiver.py -m 12:34:56:78:9A:BC -a -o data1.pcap

Sniff advertisements and connections for the first MAC address seen with RSSI >= -40. The RSSI filter will be disabled automatically once a MAC address has been locked onto. Save captured data to data2.pcap.

./sniff_receiver.py -m top -r -40 -o data2.pcap

Sniff advertisements and connections from the peripheral with big endian IRK 4E0BEA5355866BE38EF0AC2E3F0EBC22.

./sniff_receiver.py -i 4E0BEA5355866BE38EF0AC2E3F0EBC22

Sniff BT5 extended advertisements and connections from nearby (RSSI >= -55) devices.

./sniff_receiver.py -r -55 -e

Sniff legacy and extended advertisements and connections from the device with the specified MAC address. Save captured data to data3.pcap.

./sniff_receiver.py -eH -m 12:34:56:78:9A:BC -o data3.pcap

Sniff extended advertisements and connections using the long range primary PHY on channel 38.

./sniff_receiver.py -le -c 38

Passively scan on channel 39 for advertisements with RSSI greater than -50, and enable capture of extended advertising.

./scanner.py -c 39 -e -r -50

Obtaining the IRK

If you have a rooted Android phone, you can find IRKs (and LTKs) in the Bluedroid configuration file. On Android 8.1, this is located at /data/misc/bluedroid/bt_config.conf. The LE_LOCAL_KEY_IRK specifies the Android device's own IRK, and the first 16 bytes of LE_KEY_PID for every bonded device in the file indicate the bonded device's IRK. Be aware that keys stored in this file are little endian, so the byte order of keys in this file will need to be reversed. For example, the little endian IRK 22BC0E3F2EACF08EE36B865553EA0B4E needs to be changed to 4E0BEA5355866BE38EF0AC2E3F0EBC22 (big endian) when being passed to Sniffle with the -i option.
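The reversal is a one-liner in Python:

irk_le = "22BC0E3F2EACF08EE36B865553EA0B4E"  # little endian, as stored in bt_config.conf
irk_be = bytes.fromhex(irk_le)[::-1].hex().upper()
print(irk_be)  # 4E0BEA5355866BE38EF0AC2E3F0EBC22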


More: https://github.com/nccgroup/Sniffle


What Role Does Data Destruction Play In Cybersecurity? by Daniel Santry

$
0
0


Very often in organisations, conversations about cybersecurity take the form of how they can best protect the data they keep. This can be in the form of software, such as firewalls, anti-virus software, machine learning, and AI technology, or in shielding against the human element - tiered access, door passcodes, or awareness training.

The problem lies in the fact that few organisations consider data destruction - how to dispose of data safely and responsibly so that it can't be accessed by others after disposal. It is an important part of cybersecurity, both for meeting the expectations of customers, clients, and partners that their personal data is safe, and for complying with government regulations.

Data destruction - What it is and what it isn't

In the days before computers and the internet took over the world, data destruction was a much simpler process. An organisation only had to run papers through a shredder, drop the shreds off at a recycling plant, keep a record of what was shredded, and all regulatory compliance was met. This was enough to ensure that data was not lost or picked up by prying eyes and criminals.

Digital storage has made data destruction a more difficult task. While some employees may believe that just deleting files is enough to get rid of the data, this couldn't be further from the truth. The vast majority of drives will just flag the data for overwriting, so a user on the operating system can't see the files, but the data remains intact on the drive. Data must either be overwritten, cleared electromagnetically, or the drives must be destroyed physically.

This is why making sure that drives are purged of all data correctly before disposal is of the utmost importance to an organisation. Data destruction should be systemised and handled by the company's data controller to ensure there is a responsible chain of command. Unless the organisation has specialist facilities set up in house, it should always use a reputable data destruction company.

What are fool-proof ways of destroying data?


There are a few ways that companies destroy data permanently, but three main techniques are used today.

Degaussing is the most prevalent form of data destruction. A degausser electromagnetically destroys the magnetic field a hard disk drive uses to store the bits on its platters. This not only scrambles all the data on the drive, but also destroys the servo firmware, rendering the drive completely unusable even if recovered.

Over-writing is another form of destruction. As the name suggests, it consists of over-writing the entire drive with 0s, 1s, or a random scramble of both. This completely erases any data while keeping the hard drive usable, so it's a fantastic option for an organisation that runs routine clears on a server without having to source new hardware.
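As a rough, file-level illustration of the idea in Python (real sanitization tools operate on the raw device and must account for SSD wear-leveling, remapped sectors, and verification):

import os

def overwrite_file(path: str, passes: int = 3) -> None:
    # Overwrite a file's contents in place with random bytes, then delete it.
    # Illustration only: filesystems and SSDs can keep old blocks around.
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)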

Physical destruction is the other popular form of data destruction. Again, as the name would suggest, it involves destroying the disk drive with trauma or chemicals in order to render it unusable and unreadable. The problem with this method is that unless it's done properly, a savvy criminal can recover pieces of the hard drive and may be able to still recover data from it. Therefore, it is essential to work with an experienced and reputable company to ensure your data is not compromised.

How does data destruction help an organisation?

Photo by Markus Spiske via Unsplash

An organisation can hold personal data on many people, and if they're in certain industries such as legal or financial, this data can be especially sensitive, so regulations on data storage can be very strong.

The other important thing for organisations to bear in mind, however, is that with competition in the marketplace becoming stronger, some less than moral companies and individuals are on the lookout for sensitive intelligence wherever they can find it to both use and sell. Many businesses only think about keeping their networks and servers secure and protected, but neglect their data destruction methods.

Snoopers are well aware of this vulnerability in companies, so will be on the lookout for hard drives they can get their hands on. This will include drives thrown in the trash, drives being transported, and at times, perhaps a laptop or USB stick left on a train. Systems must be in place to keep a business from losing clients’ personal information and businesses' critical intelligence. This is of the utmost priority for an organisation that doesn't want to be hit by large fines, as well as losing customers due to a data breach.

What needs to be considered when choosing how to destroy data?

The first thing to be considered is the type of hardware a business is using. A record log should be kept of all hardware being used, so when a business needs drives destroyed, the destruction company will know which techniques to use.

The second is to research the reputation of the destruction companies being shortlisted. They should have good testimonials from other businesses for their work, they should offer destruction certification and, if possible, video proof.

Lastly, consider time: it takes time to organise which drives need to be destroyed and when, to transport them to the destruction facility, and then, depending on the method and the number of drives, the lead time itself can vary.


About the Author: 

Daniel Santry is US Business Development Executive for Wisetek, who are global leaders in IT Asset Disposition, Data Destruction, & IT Reuse. 

The post What Role Does Data Destruction Play In Cybersecurity? by Daniel Santry appeared first on Hakin9 - IT Security Magazine.

Mouse Framework is an iOS and macOS post-exploitation framework


Mouse Framework is an iOS and macOS post-exploitation framework that gives you a command-line session with extra functionality between you and a target machine using only a simple Mouse payload. Mouse gives you the power and convenience of uploading and downloading files, tab completion, taking pictures, location tracking, shell command execution, escalating privileges, password retrieval, and much more.

Getting started

Mouse installation

cd mouse

chmod +x install.sh

./install.sh

Mouse uninstallation

cd mouse

chmod +x uninstall.sh

./uninstall.sh

Mouse Framework execution

To launch Mouse Framework, run the following command:

mouse

Why Mouse Framework

  • Simple and clear UX/UI.

Mouse Framework has a simple and clear UX/UI. It is easy to understand, which makes Mouse Framework easier to master.

  • A lot of different functions.

There are a lot of different functions in Mouse CLI such as displaying alerts, recording mic sound and taking pictures on a remote iOS/macOS device.

  • A lot of different payloads.

There are a lot of different payloads in Mouse Framework such as Target shell and Duck or Arduino payloads.


Mouse Framework disclaimer

Usage of the Mouse Framework for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state, federal, and international laws. Developers assume no liability and are not responsible for any misuse or damage caused by this program.


More: https://github.com/entynetproject/mouse and http://entynetproject.simplesite.com

The post Mouse Framework is an iOS and macOS post-exploitation framework appeared first on Hakin9 - IT Security Magazine.


2020’s Biggest Plague for Industries - Ransomware by Devin Smith


As time has gone on, ransomware has grown exponentially, and its variants have slowly come to light in different forms of attack. A ransomware attack occurs when a target's computer is infected with ransomware, typically with the aid of a link in an email.

Conventional ransomware is a sophisticated effort: a pre-planned, purpose-built infrastructure is used to distribute malware created with advanced development techniques. It should be noted that offline encryption is also becoming popular, with ransomware taking advantage of system components such as Microsoft's CryptoAPI.


A report by Trustwave describes how ransomware has worked its way into everyone's lives like a disease and has affected infrastructures globally, documenting its widespread use.

For businesses, holding steady is a real challenge because attacks have become advanced and sophisticated: hard to predict and hard to prevent.

Ransomware Targeting Industries


Security has been compromised across multiple sectors of the market, shaking economies; over a trillion security events have been analysed over the past year, and the one commonality in those analyses was ransomware.

Ransomware attacks are now more common than payment card thefts because cybercriminals keep reshaping their malicious activities to get the biggest financial reward for the least amount of effort.

2019 alone saw a 10% rise in email blackmail, where hackers steal an individual's personal details and demand a ransom in cryptocurrency. These attacks are usually successful because many entities choose to pay the ransom to protect their company's finances and privacy.

But in reality, it backfires. These ransoms often exceed six-figure sums because cryptocurrency is demanded as the means of payment: it's clever, simple, and hard to trace. One car company spent $27,000,000 last year over the compromise of a business email.

Retail and finance industries are the ones that have seen major hits because of their scale and prospects.


The retail industry faced breaches of CNP (card-not-present) data, which is standard in e-commerce. Industries that did not directly face customers saw a mix of different attacks that directly stole money.

Predominant Breach Locations: Where Attacks Were Easiest

More and more research has focused on the loopholes that led to hacks and defamation. POS cash registers took a major hit, as many still process cards with a magnetic stripe scanner rather than an EMV chip reader. The operating system they run, whether Windows or Linux, has also been a downfall, as hackers crashed the OS and harvested the details of those cards.

Additionally, the perpetrators behind the Sodinokibi ransomware threatened to sell a major database compromised from the global currency exchange Travelex, right after a sophisticated malware attack took the company offline and toppled its entire business in January. Travelex alone ended up with the financial burden of paying out $2.3 million in Bitcoin.

Analysts state that ransomware perpetrators will initially post only screenshots of the stolen data, as a warning to victims to pay the ransom on time and not take the threat lightly. If the payment isn't made in time, the attackers follow through on their threat and make the confidential files available on the internet for public download.

However, no matter how much damage ransomware can cause, it is feasible to guard against it. Organisations have to ensure that networks are patched and kept up to date, and that data is strongly encrypted, so that ransomware and other malware cannot take advantage of known vulnerabilities.

The basics are always key: patching, passwords and policy. Make sure all software is running the latest stable version.

Organisations must additionally make certain that any ports which don't need to face the outside world aren't exposed, which helps prevent attackers from breaching the network in the first place. Multi-factor authentication, among other measures, should also be applied across the network, so that if attackers do try to brute-force logins, there is a last barrier to stop them.

Finally, organisations should frequently back up the whole network - and keep those backups offline - so that if something bad happens and a ransomware attack succeeds, the network can be restored without having to entertain the idea of giving in to extortion.


About the Author:

Devin Smith is a tech-mech by profession and an IT Security Analyst at Reviewsed. He is passionate about exploring every corner of the tech world. He studied Computer Science and is now turning that exposure into experience.

The post 2020’s Biggest Plague for Industries - Ransomware by Devin Smith appeared first on Hakin9 - IT Security Magazine.

NTLMRecon - A tool to enumerate information from NTLM authentication enabled web endpoints 🔎


NTLMRecon is built with flexibility in mind. Need to run recon on a single URL, an IP address, an entire CIDR range, or a combination of them all in a single input file? No problem! NTLMRecon has you covered. Read on.

A fast and flexible NTLM reconnaissance tool without external dependencies. Useful to find out information about NTLM endpoints when working with a large set of potential IP addresses and domains.

TODO

  1. Implement aiohttp based solution for sending requests
  2. Integrate a spraying library
  3. Add other authentication schemes found to the output
  4. Automatic detection of autodiscover domains if the domain

Overview of NTLMRecon

NTLMRecon looks for NTLM enabled web endpoints, sends a fake authentication request and enumerates the following information from the NTLMSSP response:

  1. AD Domain Name
  2. Server name
  3. DNS Domain Name
  4. FQDN
  5. Parent DNS Domain

Since NTLMRecon leverages a Python implementation of NTLMSSP, it eliminates the overhead of running the Nmap NSE script http-ntlm-info for every successful discovery.

On every successful discovery of an NTLM-enabled web endpoint, the tool enumerates and saves information about the domain to a CSV file, as follows:

URL:               https://contoso.com/EWS/
AD Domain Name:    XCORP
Server Name:       EXCHANGE01
DNS Domain Name:   xcorp.contoso.net
FQDN:              EXCHANGE01.xcorp.contoso.net
Parent DNS Domain: contoso.net
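Under the hood, this information lives in the AV pairs of the NTLMSSP Type 2 (challenge) message that the server returns to a fake negotiate request. The sketch below illustrates the technique in Python; it is not NTLMRecon's actual code, and while the minimal Type 1 token and field offsets follow MS-NLMP, treat the details as assumptions:

import base64
import struct
import requests

# Minimal NTLM Type 1 (negotiate) token commonly used for fingerprinting.
NEGOTIATE = "TlRMTVNTUAABAAAAB4IIAAAAAAAAAAAAAAAAAAAAAAA="
AV_NAMES = {1: "Server Name", 2: "AD Domain Name", 3: "FQDN",
            4: "DNS Domain Name", 5: "Parent DNS Domain"}

def ntlm_info(url):
    r = requests.get(url, headers={"Authorization": "NTLM " + NEGOTIATE})
    auth = r.headers.get("WWW-Authenticate", "")
    token = base64.b64decode(auth.split("NTLM ")[1].split(",")[0])
    # The TargetInfo field descriptor sits at offset 40 of the Type 2 message.
    info_len, _, info_off = struct.unpack_from("<HHI", token, 40)
    fields, pos = {}, info_off
    while pos < info_off + info_len:
        av_id, av_len = struct.unpack_from("<HH", token, pos)
        pos += 4
        if av_id == 0:  # MsvAvEOL terminates the AV pair list
            break
        if av_id in AV_NAMES:
            fields[AV_NAMES[av_id]] = token[pos:pos + av_len].decode("utf-16-le")
        pos += av_len
    return fields

print(ntlm_info("https://contoso.com/EWS/"))  # hypothetical endpoint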

Installation of NTLMRecon

BlackArch

NTLMRecon is already packaged for BlackArch and can be installed by running pacman -S ntlmrecon

Arch

If you're on Arch Linux or any Arch Linux based distribution, you can grab the latest build from the Arch User Repository.

PyPI

You can simply run pip install ntlmrecon to fetch the latest build from PyPI

Build from source

  1. Clone the repository : git clone https://github.com/sachinkamath/ntlmrecon/
  2. RECOMMENDED - Install virtualenv : pip install virtualenv
  3. Start a new virtual environment : virtualenv venv and activate it with source venv/bin/activate
  4. Run the setup file : python setup.py install
  5. Run ntlmrecon : ntlmrecon --help

Usage of NTLMRecon

 $ ntlmrecon --help                                                                                                                                                                                                                                 

         _   _ _____ _     ___  _________                     
        | \ | |_   _| |    |  \/  || ___ \                    
        |  \| | | | | |    | .  . || |_/ /___  ___ ___  _ __  
        | . ` | | | | |    | |\/| ||    // _ \/ __/ _ \| '_ \ 
        | |\  | | | | |____| |  | || |\ \  __/ (_| (_) | | | |
        \_| \_/ \_/ \_____/\_|  |_/\_| \_\___|\___\___/|_| |_|

             v.0.2 beta - Y'all still exposing NTLM endpoints?


usage: ntlmrecon [-h] [--input INPUT | --infile INFILE] [--wordlist WORDLIST] [--threads THREADS] [--output-type] [--outfile OUTFILE] [--random-user-agent] [--force-all] [--shuffle] [-f]

optional arguments:
  -h, --help           show this help message and exit
  --input INPUT        Pass input as an IP address, URL or CIDR to enumerate NTLM endpoints
  --infile INFILE      Pass input from a local file
  --wordlist WORDLIST  Override the internal wordlist with a custom wordlist
  --threads THREADS    Set number of threads (Default: 10)
  --output-type, -o    Set output type. JSON (TODO) and CSV supported (Default: CSV)
  --outfile OUTFILE    Set output file name (Default: ntlmrecon.csv)
  --random-user-agent  TODO: Randomize user agents when sending requests (Default: False)
  --force-all          Force enumerate all endpoints even if a valid endpoint is found for a URL (Default : False)
  --shuffle            Break order of the input files
  -f, --force          Force replace output file if it already exists

Example Usage

NTLMRecon on a single URL

$ ntlmrecon --input https://mail.contoso.com --outfile ntlmrecon.csv

Recon on a CIDR range or IP address

$ ntlmrecon --input 192.168.1.1/24 --outfile ntlmrecon-ranges.csv

Recon on an input file

The tool detects the type of input on each line and handles it accordingly; CIDR ranges are expanded automatically, even when read from a text file.

The input file can be as mixed up as:

mail.contoso.com
CONTOSOHOSTNAME
10.0.13.2/28
192.168.222.1/24
https://mail.contoso.com

To run recon with an input file, just run:

$ ntlmrecon --infile /path/to/input/file --outfile ntlmrecon-fromfile.csv

Demo

Acknowledgments

@nyxgeek for the awesome wordlist in lyncsmash repository and for the idea behind ntlmscan.

Feedback

If you'd like to see a feature added into the tool or something doesn't work for you, please open a new issue.


More: https://github.com/sachinkamath/ntlmrecon/

The post NTLMRecon - A tool to enumerate information from NTLM authentication enabled web endpoints 🔎 appeared first on Hakin9 - IT Security Magazine.

DalFox - Parameter Analysis and XSS Scanning tool based on golang


DalFox - Finder of XSS, and Dal is the Korean pronunciation of the moon.

What is DalFox

Just an XSS scanning and parameter analysis tool. I previously developed XSpear, a Ruby-based XSS tool, and this time a full rewrite happened while porting it to Golang, so I created it as a new project! The basic concept is to analyze parameters, find XSS, and verify findings based on a DOM parser.

About the naming: Dal (달) is the Korean pronunciation of 'moon', and Fox was made from 'Finder Of XSS'.

Key features

  • Parameter analysis (find reflected parameters, find free/bad characters, identify injection points)
  • Static analysis (check for bad headers, such as CSP and X-Frame-Options, based on the base request/response)
  • Payload query optimization
    • Checks the injection point through abstraction and generates a fitting payload.
    • Eliminates unnecessary payloads based on bad characters
  • XSS scanning and DOM-based verification
  • All test payloads (built-in, custom and blind) are tested in parallel with the encoders.
    • Support for double URL encoding
    • Support for HTML hex encoding
  • Friendly pipelining (single URL, from a file, from IO)
  • And the various options required for testing :D
    • Built-in/custom grepping to find other vulnerabilities
    • Found-action: run a follow-up action when something is found
    • etc.

How to Install

There are a total of three ways to install DalFox. Personally, I recommend go-install.

Developer version (go-get or go-install)

go-install

  1. Clone this repo
$ git clone https://github.com/hahwul/dalfox
  2. Install from the cloned dalfox path
$ go install
  3. Use dalfox
$ ~/go/bin/dalfox

go-get

  1. Go get dalfox!
$ go get -u github.com/hahwul/dalfox
  2. Use dalfox
$ ~/go/bin/dalfox

Release version

  1. Open latest release page https://github.com/hahwul/dalfox/releases/latest
  2. Download and extract the file that fits your OS.
  3. Put the binary in an executable path and use it, e.g.
$ cp dalfox /usr/bin/

Usage

    _..._
  .' .::::.   __   _   _    ___ _ __ __
 :  :::::::: |  \ / \ | |  | __/ \\ V /
 :  :::::::: | o ) o || |_ | _( o )) (
 '. '::::::' |__/|_n_||___||_| \_//_n_\
   '-.::''
Parameter Analysis and XSS Scanning tool based on golang
Finder Of XSS and Dal is the Korean pronunciation of moon. @hahwul


Usage:
  dalfox [command]

Available Commands:
  file        Use file mode(targets list or rawdata)
  help        Help about any command
  pipe        Use pipeline mode
  url         Use single target mode
  version     Show version

Flags:
  -b, --blind string            Add your blind xss (e.g -b https://hahwul.xss.ht)
      --config string           Using config from file
  -C, --cookie string           Add custom cookie
      --custom-payload string   Add custom payloads from file
  -d, --data string             Using POST Method and add Body data
      --delay int               Milliseconds between send to same host (1000==1s)
      --found-action string     If found weak/vuln, action(cmd) to next
      --grep string             Using custom grepping file (e.g --grep ./samples/sample_grep.json)
  -H, --header string           Add custom headers
  -h, --help                    help for dalfox
      --only-discovery          Only testing parameter analysis
  -o, --output string           Write to output file
      --output-format string    -o/--output 's format (txt/json/xml)
  -p, --param string            Only testing selected parameters
      --proxy string            Send all request to proxy server (e.g --proxy http://127.0.0.1:8080)
      --silence                 Not printing all logs
      --timeout int             Second of timeout (default 10)
      --user-agent string       Add custom UserAgent
  -w, --worker int              Number of worker (default 40)
$ dalfox [mode] [flags]

Single target mode

$ dalfox url http://testphp.vulnweb.com/listproducts.php\?cat\=123\&artist\=123\&asdf\=ff -b https://hahwul.xss.ht

Multiple target mode from file

$ dalfox file urls_file --custom-payload ./mypayloads.txt

Pipeline mode

$ cat urls_file | dalfox pipe -H "AuthToken: bbadsfkasdfadsf87"

Other tips: see the wiki for detailed instructions!

ScreenShot


More: https://github.com/hahwul/dalfox

The post DalFox - Parameter Analysis and XSS Scanning tool based on golang appeared first on Hakin9 - IT Security Magazine.


Kaiten - An Undetectable Payload Generator


An undetectable payload generator. This tool is for educational purposes only; usage of Kaiten for attacking targets without prior mutual consent is illegal. Developers assume no liability and are not responsible for any misuse or damage caused by this program.

What is it and why was it made?

We made it for our penetration testing jobs and for learning purposes. Our Kaiten C2 has since moved on to a better source. And antivirus is dumb. Source: https://www.shadowlabs.cc/kaiten

Requirements

  • MingW (64 & 32)
  • GCC
  • OSSLSIGNCODE

Features

  • Undetectable Payload Generation
  • Stealth FUD Payload
  • Self Signing Certificate
  • Random Junk code

Affected Devices and Operating Systems

  • Windows
  • Android (soon)
  • Mac/Linux

Diagrams (also it's cool hehe)



More: https://github.com/shadowlabscc/Kaiten

The post Kaiten - An Undetectable Payload Generator appeared first on Hakin9 - IT Security Magazine.

Saferwall is an open source malware analysis platform.


A hackable malware sandbox for the 21st Century - https://saferwall.com

It aims for the following goals:

  • Provide a collaborative platform to share samples among malware researchers.
  • Acts as a system expert to help researchers generate automated malware analysis reports.
  • Hunting platform to find new malware.
  • Quality assurance for signatures before release.

Features

  • Static analysis:
    • Crypto hashes, packer identification
    • Strings extraction
  • Multiple AV scanners, including major antivirus vendors:
    Vendor          Status    Vendor              Status
    Avast           ✔         FSecure             ✔
    Avira           ✔         Kaspersky           ✔
    Bitdefender     ✔         McAfee              ✔
    ClamAV          ✔         Sophos              ✔
    Comodo          ✔         Symantec            ✔
    ESET            ✔         Windows Defender    ✔

Installation

Saferwall takes advantage of Kubernetes for its high availability, scalability, and the huge ecosystem behind it.

Everything runs inside Kubernetes. You can either deploy it in the cloud or have it self-hosted.

To make it easy to get a production-grade Kubernetes cluster up and running, we use kops. It automatically provisions a Kubernetes cluster hosted on AWS, GCE, DigitalOcean or OpenStack, and also on bare metal. For the time being, only AWS is officially supported.

Steps to deploy in AWS: (This still needs to be improved)

  1. Clone the project: git clone https://github.com/saferwall/saferwall
  2. Using a Debian-based Linux, make sure build-essential is installed: sudo apt-get install build-essential.
  3. Rename the example.env to .env and fill the secrets according to which AVs you want to have.
  4. Install it: make saferwall.
  5. Edit the deployments/values.yaml to match your needs.
  6. Logs are found in Elasticsearch.

Built with:

Current architecture / Workflow:

Here is a basic workflow which happens during a file scan:

  • The frontend talks to the backend via REST APIs.
  • The backend uploads samples to the object storage.
  • The backend pushes a message into the scanning queue.
  • A consumer fetches the file and copies it into the NFS share, avoiding pulling the sample on every container.
  • The consumer asynchronously calls the scanning services (such as the AV scanners) via gRPC and waits for results.

Acknowledgements

Contributing

Please read docs/CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.


More: https://saferwall.com and https://github.com/saferwall/saferwall

The post Saferwall is an open source malware analysis platform. appeared first on Hakin9 - IT Security Magazine.

Sharingan is a recon multitool for offensive security and bug bounty


Sharingan is a recon multitool for offensive security/bug bounty

This is very much a work in progress, and I'm relatively new to offensive security in general, so if you see something that can be improved, please open an issue or PR with suggested changes.

Cloning for development

Outside of your GOPATH: git clone https://github.com/leobeosab/sharingan

Installing

go get github.com/leobeosab/sharingan/cmd/sharingancli

Dependencies

  • NMap
  • Go

Usage

Note

Order matters when it comes to flags: it must be sharingancli [globalflags] command [commandflags]. If this isn't a wanted feature, I can change it, but I like how clean it is.

DNS

bruteforce

DNS-busts the target with a wordlist you provide:

sharingancli --target targetname dns --dns-wordlist ~/path/to/wordlist --root-domain target.com 

addsubs

Adds subdomains to the program's storage from stdin using pipes:

cat subs | sharingancli --target targetname dns addsubs

Scan

Scans all available hosts stored in the target using Nmap:

sharingancli --target target scan 

interactive

Scan a single host from the list of subdomains stored in the target:

sharingancli --target target scan interactive 

info

domains

Outputs all domains as a list to stdout:

sharingancli --target target info domains  

Features to come

  • Dir brute-forcing -- Currently being worked on
  • JSON and regular file exports
  • Automated scans through a daemon?
  • add a way to do SYN / -sS scanning [ must be root so it presents a challenge ]
  • Possible Web UI / HTML export

More: https://github.com/leobeosab/sharingan

The post Sharingan is a recon multitool for offensive security and bug bounty appeared first on Hakin9 - IT Security Magazine.


Identity and Access Management for “Dummies” by Richard Azu


Photo by Daria Shevtsova on Unsplash

Is your system and network environment being managed effectively against exponentially increasing attacks?

Do you have close control over user access defined in your environment?

If you answered anything but “yes” to either of these questions, read on to learn more about Identity Access Management (IAM) and how to implement it successfully.

What’s Identity Access Management (IAM)?

IAM is a system used to define and manage user identities and access permissions. With the right framework for IAM in place, system administrators can manage user access to critical data within your enterprise. System administrators also use IAM to regulate users’ access to systems and networks based on set definitions.

Identity and access management solutions deployed by teams like these top IT security companies in the UK consist of four major components: Authentication, Authorization, Administration and Central Identity Stores. These solutions provide users with access to systems in a seamless but secure way.

Authentication

Authentication is the process of verifying the identity of a user, system or device. The authentication process is invoked whenever a user, system or device initially makes the attempt to access a corporate network. During this process, users, systems and devices must verify their identity before being granted access to systems and networks. Once a user, system or device is authenticated, a session is created and referred to during all system interactions until the user, device or system logs off or the session is automatically timed out. 

To make it difficult for hackers to gain access to the entire network with a compromised username and password pair, additional steps are introduced during verification of identity. The additional steps require users to provide more information such as a One-Time Pin token (OTP), a fingerprint or a code sent to a mobile device. This extra level of authentication is commonly known as Multi-factor Authentication (MFA).
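For example, the OTP factor is typically generated with the time-based one-time password algorithm (TOTP, RFC 6238). Below is a minimal sketch; the base32 secret is illustrative, and in practice both the server and the user's device derive the same code from a shared secret:

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, period=30, digits=6):
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // period)  # current time step
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # illustrative shared secret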

Authorization

Authorization refers to the process that determines what a user, device or system can do within a network. This is the next process after authentication is successful and you’re sure about the user, device or system trying to access the network.

This part of IAM determines whether a user, device or system is permitted to access a resource within a network. It does this by checking the access request presented by the user, device or system against the authorization policies defined in IAM (if any exist). If the request matches a defined policy, access is granted; if not, access is denied.
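In its simplest form, that check is a deny-by-default policy lookup. The sketch below is a minimal illustration of the idea, not any specific IAM product; the policy structure and names are assumptions:

# Deny by default: a request is granted only if a matching policy exists.
POLICIES = {
    ("alice", "payroll-db"): {"read"},
    ("bob", "payroll-db"): {"read", "write"},
}

def is_authorized(user, resource, action):
    return action in POLICIES.get((user, resource), set())

print(is_authorized("alice", "payroll-db", "read"))   # True  -> access granted
print(is_authorized("alice", "payroll-db", "write"))  # False -> access denied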

Administration

Administration is the method by which profiles are created for users, devices and systems.

This component of IAM covers functions such as profile creation, propagation, and the maintenance of profiles and privileges. It has three sub-components: Delegated Administration, Provisioning and Self-Service.

Delegated Administration

Delegated Administration is the process of granting system administrators the ability to view another user’s identity data and execute actions on that profile.

Provisioning

Provisioning is the process of organizing the creation of user profiles and their dependencies in the form of roles.

Self-Service

Self-service is the process by which a user requests to modify her/his own identity attributes in the IAM system. This process also includes requests for new access rights.

Central Identity Stores

A central identity store is a directory that contains identity information about a collection of users. Identity stores in IAM hold group membership information and the information required to validate credentials submitted by clients. The stores in IAM are the primary source and database for all the access profiles in IAM. Establishing a central identity store is necessary for centralizing IAM tasks and functions such as role-based access controls and the provisioning or deprovisioning of access profiles.

Risks of not having Identity and Access Management

Configuring Correct Access Profiles

Without an IAM solution, it would be difficult for organisations to control users’ access to their systems. Even though most organisations pay attention to external hackers, internal users contribute to many corporate security breaches. This makes it important to ensure users are configured with the right access profiles. This is strictly enforced and required for organisations that deal with very sensitive data for both internal and external clients. Ensuring the correct access profile is configured for each user should be an on-going activity that lasts for each user’s lifetime in the system. 

Termination of Access Profiles

After configuring the correct access profiles for users, system administrators may forget to terminate accounts when their users have changed roles, resigned or had their appointments terminated. The life cycle of a user's access profile must be monitored from its creation until the profile is no longer required. There's always significant focus on creating access profiles for users during initial employment, but the same urgency is lost when it's time for that access to be removed or deprovisioned. It's important to manage the removal of such access profiles to prevent disgruntled employees from using old credentials to access organisational data after they leave.

Audits

One major problem with having no access management is dealing with audits and maintaining required compliance levels. When there are no systems in place to manage access, corporate organisations aren’t able to ensure they meet the required standards or rules in audits.

The importance of implementing an Identity and Access Management Solution

An SME or corporate organisation without an IAM solution leaves room for data breaches and several levels of security issues. An IAM solution ensures security requirements for organisations are met. The minimum IAM solution should include a process for provisioning and deprovisioning user profiles and for monitoring them throughout their life cycle. This ensures users have just the right access required for their roles.

The core of an IAM solution oversees all the authentication, authorization, administration and central identity store processes. System administrators may manage the entire process from authentication to central identity stores, but the entire organisation can be impacted if user access profiles and their management aren’t properly aligned. Fortunately, a team of IT experts can create an automated IAM solution for your organisation that will minimize operational costs and streamline IAM operations.


About the Author:

Richard has a Diploma in Telecommunications Engineering from the Multimedia University – Malaysia and a Bsc. Engineering Physics from the University of Cape Coast, Ghana. He’s currently a member of the Institution of Engineering and Technology (IET  - UK). With over 16 years of experience in Network/Telecom Engineering, he’s experienced in the deployment of voice and data over the media; radio, copper and fibre. He is currently looking for ways to derive benefit from the WDM technology in Optics. Using Kali as a springboard, he has developed an interest in digital forensics and penetration testing.

The post Identity and Access Management for “Dummies” by Richard Azu appeared first on Hakin9 - IT Security Magazine.

GitHound - A batch-catching, pattern-matching, patch-attacking secret snatcher


GitHound pinpoints exposed API keys and other sensitive information across all of GitHub using pattern matching, commit history searching, and a unique result scoring system. GitHound has earned me over $7,500 in bug bounty research. Corporate and bug bounty hunter use cases are outlined below. More information on methodology is available in the accompanying blog post.

Features

  • GitHub/Gist code searching. This enables GitHound to locate sensitive information exposed across all of GitHub, uploaded by any user.
  • Generic API key detection using pattern matching, context, Shannon entropy, and other heuristics
  • Commit history digging to find improperly deleted sensitive information (for repositories with <6 stars)
  • Scoring system to emphasize confident results, filter out common false positives, and to optimize intensive repo digging
  • Base64 detection and decoding
  • Options to build GitHound into your workflow, like custom regexes and results-only output mode

Usage

echo "\"tillsongalloway.com\"" | git-hound or git-hound --subdomain-file subdomains.txt

Setup

  1. Download the latest release of GitHound
  2. Create a ./config.yml or ~/.githound/config.yml with your GitHub username and password. Optionally, include your 2FA TOTP seed. See config.example.yml.
    1. If it's your first time using the account on the system, you may receive an account verification email.
  3. echo "tillsongalloway.com" | git-hound

Use cases

Corporate: Searching for exposed customer API keys

Knowing the pattern for a specific service's API keys enables you to search GitHub for these keys. You can then pipe matches for your custom key regex into your own script to test the API key against the service and to identify the at-risk account.

echo "api.halcorp.biz" | githound --dig-files --dig-commits --many-results --regex-file halcorp-api-regexes.txt --results-only | python halapitester.py

For detecting future API key leaks, GitHub offers Push Token Scanning to immediately detect API keys as they are posted.

Bug Bounty Hunters: Searching for leaked employee API tokens

My primary use for GitHound is for finding sensitive information for Bug Bounty programs. For high-profile targets, the --many-results hack and --languages flag are useful for scraping >100 pages of results.

echo "\"uberinternal.com\"" | githound --dig-files --dig-commits --many-results --languages common-languages.txt --threads 100

How does GitHound find API keys?

GitHound finds API keys with a combination of exact regexes for common services like Slack and AWS and a context-sensitive generic API key regex, which catches long strings that look like API keys surrounded by keywords like "Authorization" and "API-Token". GitHound initially assumes these matches are false positives and then proves their legitimacy with Shannon entropy, dictionary word checks, uniqueness calculations, and encoding detection, outputting only high-certainty positives. For files that encode secrets, it decodes base64 strings and searches the decoded strings for API keys. See https://github.com/tillson/git-hound/blob/master/internal/app/keyword_scan.go for the implementation.
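To get a feel for the entropy heuristic, here is an illustrative sketch in Python (GitHound itself is written in Go, and the threshold below is an assumption for illustration):

import math
from collections import Counter

def shannon_entropy(s):
    # Bits of entropy per character of s.
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in Counter(s).values())

print(shannon_entropy("aaaaaaaaaaaaaaaaaaaa"))  # 0.0  - filler, not a key
print(shannon_entropy("AKIAIOSFODNN7EXAMPLE"))  # ~3.7 - random-looking
# A candidate scoring above roughly 3.5 bits/char deserves closer inspection.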

Flags

  • --subdomain-file - The file with the subdomains
  • --dig-files - Clone and search the repo's files for results
  • --dig-commits - Clone and search the repo's commit history for results
  • --many-results - Use result sorting and filtering hack to scrape more than 100 pages of results
  • --results-only - Print only regexed results to stdout. Useful for piping custom regex matches into another script
  • --no-repos - Don't search repos
  • --no-gists - Don't search Gists
  • --threads - Specify max number of threads for the commit digger to use.
  • --regex-file - Supply a custom regex file
  • --language-file - Supply a custom file with languages to search.
  • --config-file - Custom config file (default is config.yml)
  • --pages - Max pages to search (default is 100, the page maximum)
  • --no-scoring - Don't use scoring to filter out false positives
  • --no-api-keys - Don't perform generic API key searching. GitHound uses common API key patterns, context clues, and a Shannon entropy filter to find potential exposed API keys.
  • --no-files - Don't flag interesting file extensions
  • --only-filtered - Only search filtered queries (languages)
  • --debug - Print verbose debug messages.
  • --otp-code - Github account 2FA code for sign-in. (Only use if you have authenticator 2FA setup on your Github account)

User feedback

These are discussions about how people use GitHound in their workflows and how we can improve GitHound to fulfill those needs. If you use GitHound, consider leaving a note in one of the active issues. List of issues requesting user feedback

Sponsoring

If GitHound helped you earn a big bounty, consider sending me a tip with GitHub Sponsors.

References


More: https://github.com/tillson/git-hound

The post GitHound - A batch-catching, pattern-matching, patch-attacking secret snatcher appeared first on Hakin9 - IT Security Magazine.

What You Need to Know About Network Security by Richard Azu


Photo by Pixabay from Pexels

Is Your Network Immune from Attacks?

Network security is the practice of implementing standards to protect network systems against unauthorized access to, or improper disclosure of, corporate data. This practice includes the use of hardware as well as software technologies to achieve the best solution for network defence.

The criticality of an organisation’s data and infrastructure often requires a certain level of network security expertise that can only be provided by knowledgeable cyber security companies. This ensures any organisation can defend its network resources from the exponentially increasing threats of cybercrime.

Our current network architecture is faced with ever-changing threats and intruders who are constantly evolving their methods to find and exploit vulnerabilities. 

Let’s look at the types of network security you need to remain safe and secure.

Types of Network Security

Network security acts as the layer of protection between your network and any malicious activity being executed by a hacker, either internally or externally. This layer remains accessible or penetrable until the right solution to protect your network is implemented. The following types of network security will help you understand and select which one needs to be implemented based on your requirements.

Access Control

Organisational networks shouldn’t allow every user automatic access. There should be policies to restrict or terminate unrecognized devices from accessing the network. Profiles for devices and users that are classified as white or trusted should only be able to work within the scope they’ve been allowed. Blocking such non-compliant devices and user profiles can save your network against possible security breaches. This process is called Network Access Control (NAC).

System Behaviour Analytics

In order to spot irregular patterns in a network, it's important to understand and analyse its normal behaviour. System behaviour analytics is the use of software tools to detect network and system anomalies as they happen. These tools establish a baseline of what defines normal behaviour for user profiles, applications and network activities.

Anti-malicious Software or “Anti-malware”

Malware, or “malicious software”, is software designed by cyber hackers with the primary intention of gaining access or causing damage to a computer system or network. It’s a form of cyber-attack that keeps evolving. While some may destroy files or corrupt data once they come into contact, others create undetectable routes or backdoors into systems for hackers to exploit. The best anti-malware shouldn’t just scan your network and go idle; it must also monitor the network traffic in real time for malware and look for irregular patterns within the network.

Email Security

Email gateways are the number one attack vector for hackers looking to launch a security breach. Attackers can gather personal information from publicly available social media sites like LinkedIn, Facebook, etc. They use this information and social engineering tactics to generate phishing campaigns that deceive recipients into visiting malware sites or portals. Email security applications scan outgoing mail for sensitive data to prevent the loss of critical information, and monitor incoming mail to block attacks.

Firewalls

Firewalls are network security devices, software or hardware, that scan incoming and outgoing traffic and decide to allow or block specific traffic based on a set of defined policies. A firewall is the first line of defence in securing networks. It establishes a barrier between the protected internal network that can be trusted and the untrusted outside network, thereby preventing threats from hackers.

Network Segmentation

Segmentation divides a computer network into smaller portions, all with unique hosts. The smaller networks become a subnet of the larger network. Its purpose is to help enforce easier security policies and improve network performance. Segmentation allows role-based and location-based access profiles for users, and thus helps to contain and remediate suspicious devices.

VPN

Encrypting the connectivity between a device and any untrusted network creates a Virtual Private Network (VPN). This method of encryption allows remote access to secure corporate applications and other network resources. VPNs add additional levels of security and privacy to untrusted networks.

Web Security

This network security solution checks the level of access profiles defined for users, classifies users as either authorized or unauthorized, scans for vulnerabilities in web applications, and protects sensitive data from being compromised. It also checks for security levels deployed in websites and denies access when they don’t meet defined security standards.

Intrusion Detection and Prevention Systems

An Intrusion Detection and Prevention System (IDPS) scans network traffic in real time to actively block attacks that match global intelligence threat signatures. It also tracks malicious files and patterns and prevents them from replicating across the network.

Wireless Security

The fact remains that wireless networks aren’t quite as secure as wired networks. With the emergence of Bring Your Own Device (BYOD), mobile office culture and hot-desking, wireless access points have now become a channel for security breaches. A properly implemented wireless security system prevents unauthorized users from accessing an organisation’s wireless network.

The Principles of Network Security

Network security is built around three important components: Confidentiality, Integrity and Availability (C-I-A). When all three elements work simultaneously, a network is considered secure. 

Confidentiality is the security principle that manages access to information. 

It’s implemented to ensure users with the wrong access cannot gain access to restricted data, while users with the right access profiles can access restricted data. 

The second component, integrity, ensures critical data is from a genuine source, not broken, and isn’t altered or modified during transmission. 

The third component, availability, guarantees constant and reliable access to critical data. It ensures access to critical data is only possible with the right access profile.

The Importance of Network Security

It’s critical to understand the importance of network security.

Whether you own a start-up or a multinational corporation, network security should be implemented all the same. A solid network security system is one that combines hardware tools, software tools, policies, best practices, and the three network security components to prevent unauthorised access to your systems.


About the Author:

Richard has a Diploma in Telecommunications Engineering from the Multimedia University – Malaysia and a Bsc. Engineering Physics from the University of Cape Coast, Ghana. He’s currently a member of the Institution of Engineering and Technology (IET  - UK). With over 16 years of experience in Network/Telecom Engineering, he’s experienced in the deployment of voice and data over the media; radio, copper and fibre. He is currently looking for ways to derive benefit from the WDM technology in Optics. Using Kali as a springboard, he has developed an interest in digital forensics and penetration testing.

The post What You Need to Know About Network Security by Richard Azu appeared first on Hakin9 - IT Security Magazine.

ExploitDB and searchsploit [FREE COURSE CONTENT]


In this video from our OSINT for Hackers online course by Atul Tiwari, you will learn how to utilize ExploitDB and searchsploit during your OSINT activities. Using the databases available to you is a great way to make your life easier and your work more efficient - jump in and see for yourself!



In the age of social networking, where people post everything about themselves on the insecure internet, it becomes easy to hunt for or harvest information with the help of open-source intelligence gathering. The only thing required is the right mindset combined with the right set of open-source tools.

We can get almost everything, from credit card numbers to social security numbers, personal data, complete profiles of any person, vulnerable and misconfigured servers, private or internal IP addresses of an organization, passwords for admin panels, and the geo-location of IP addresses; more than 80 percent of the desired information can be obtained using only OSINT (open source intelligence gathering).

This course is focused only on OSINT tools that are free to use. We have used numerous such tools that act as a silver bullet in terms of accessing public sources. In module 1, starting with DNS enumeration, getting useful URLs, and IP and host finding, we will dive into harvesting email addresses anonymously and finding information about an email address. Google dorks, and the Google Hacking Database, will play a crucial role in digging up complete information about almost anything. Netcraft, web archives, and cached data will complete this module, giving you an outstanding command of all the topics discussed. You can start OSINT straight from here.

The exercises of the module focus on: 

  • Harvesting email addresses
  • Using Google dorks to find hidden data
  • Searching for cached data
  • Using Automater, ExploitDB, searchsploit, and other tools to make OSINT easier (see the sketch after this list)
  • Gathering DNS records
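As a taste of the tooling, here is a hypothetical Python helper that queries a local Exploit-DB copy through searchsploit; the -j (JSON output) flag and the RESULTS_EXPLOIT key are assumptions about recent searchsploit versions:

import json
import subprocess

def find_exploits(term):
    # Ask searchsploit for machine-readable results.
    out = subprocess.run(["searchsploit", "-j", term],
                         capture_output=True, text=True, check=True)
    return json.loads(out.stdout).get("RESULTS_EXPLOIT", [])

for hit in find_exploits("vsftpd 2.3.4"):
    print(hit["Title"], "->", hit["Path"])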

Related Posts:

The post ExploitDB and searchsploit [FREE COURSE CONTENT] appeared first on Hakin9 - IT Security Magazine.

Proxy.py – A lightweight, single file HTTP proxy server in python


To facilitate end-to-end testing for such scenarios, I architected a proxy infrastructure; a stripped-down version of it became proxy.py - a lightweight HTTP proxy server in Python.

Blog post: https://abhinavsingh.com/proxy-py-a-lightweight-single-file-http-proxy-server-in-python/

Github page: https://github.com/abhinavsingh/proxy.py

Features

  • Fast & Scalable
    • Scales by using all available cores on the system
    • Threadless executions using coroutine
    • Made to handle tens-of-thousands connections / sec
      # On Macbook Pro 2015 / 2.8 GHz Intel Core i7
      ❯ hey -n 10000 -c 100 http://localhost:8899/
      
      Summary:
        Total:	0.6157 secs
        Slowest:	0.1049 secs
        Fastest:	0.0007 secs
        Average:	0.0055 secs
        Requests/sec:	16240.5444
      
        Total data:	800000 bytes
        Size/request:	80 bytes
      
      Response time histogram:
        0.001 [1]     |
        0.011 [9565]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
        0.022 [332]	|
  • Lightweight
    • Uses only ~5-20MB RAM
    • No external dependency other than standard Python library
  • Programmable
    • Optionally enable builtin Web Server
    • Customize proxy and http routing via plugins
    • Enable plugin using command line option e.g. --plugins proxy.plugin.CacheResponsesPlugin
    • Plugin API is currently in development phase, expect breaking changes.
  • Realtime Dashboard
    • Optionally enable bundled dashboard.
      • Available at http://localhost:8899/dashboard.
    • Inspect, Monitor, Control and Configure proxy.py at runtime.
    • Extend dashboard using plugins.
    • Dashboard is currently in development phase, expect breaking changes.
  • Secure
  • Man-In-The-Middle
    • Can decrypt TLS traffic between clients and upstream servers
    • See TLS Interception
  • Supported proxy protocols
    • http(s)
      • http1
      • http1.1 pipeline
    • http2
    • websockets
  • Optimized for large file uploads and downloads
  • IPv4 and IPv6 support
  • Basic authentication support
  • Can serve a PAC (Proxy Auto-configuration) file
    • See --pac-file and --pac-file-url-path flags

Install

Using PIP

Stable Version with PIP

Install from PyPi

❯ pip install --upgrade proxy.py

or from GitHub master branch

❯ pip install git+https://github.com/abhinavsingh/proxy.py.git@master

Development Version with PIP

❯ pip install git+https://github.com/abhinavsingh/proxy.py.git@develop

Using Docker

Stable Version from Docker Hub

❯ docker run -it -p 8899:8899 --rm abhinavsingh/proxy.py:latest

Build Development Version Locally

❯ git clone https://github.com/abhinavsingh/proxy.py.git
❯ cd proxy.py
❯ make container
❯ docker run -it -p 8899:8899 --rm abhinavsingh/proxy.py:latest

WARNING docker image is currently broken on macOS due to incompatibility with vpnkit.

Using HomeBrew

Stable Version with HomeBrew

❯ brew install https://raw.githubusercontent.com/abhinavsingh/proxy.py/develop/helper/homebrew/stable/proxy.rb

Development Version with HomeBrew

❯ brew install https://raw.githubusercontent.com/abhinavsingh/proxy.py/develop/helper/homebrew/develop/proxy.rb

Start proxy.py

From command line when installed using PIP

When proxy.py is installed using pip, an executable named proxy is placed under your $PATH.

Run it

Simply type proxy on the command line to start it with the default configuration.

❯ proxy
...[redacted]... - Loaded plugin proxy.http_proxy.HttpProxyPlugin
...[redacted]... - Starting 8 workers
...[redacted]... - Started server on ::1:8899

Understanding logs

Things to notice from the above logs:

  • Loaded plugin - proxy.py loads proxy.http_proxy.HttpProxyPlugin by default. As the name suggests, this core plugin adds http(s) proxy server capabilities to proxy.py.
  • Started N workers - Use the --num-workers flag to customize the number of worker processes. By default, proxy.py will start as many workers as there are CPU cores on the machine.
  • Started server on ::1:8899 - By default, proxy.py listens on IPv6 ::1, the equivalent of IPv4 127.0.0.1. If you want to access proxy.py externally, use --hostname :: or --hostname 0.0.0.0, or bind to any other interface available on your machine (see the quick check below).
  • Port 8899 - Use the --port flag to customize the default TCP port.
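To quickly verify the proxy is relaying traffic, you can route a request through it; below is a minimal check using the Python requests library (assuming the default localhost:8899):

import requests

# Point both http and https traffic at the local proxy.py instance.
proxies = {"http": "http://localhost:8899", "https": "http://localhost:8899"}
r = requests.get("http://httpbin.org/get", proxies=proxies, timeout=10)
print(r.status_code)  # expect 200 if proxy.py relayed the request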

Enable DEBUG logging

All the logs above are INFO level logs, default --log-level for proxy.py.

Let's start proxy.py with DEBUG level logging:

❯ proxy --log-level d
...[redacted]... - Open file descriptor soft limit set to 1024
...[redacted]... - Loaded plugin proxy.http_proxy.HttpProxyPlugin
...[redacted]... - Started 8 workers
...[redacted]... - Started server on ::1:8899

As we can see, before starting up:

  • proxy.py also tries to set the open file soft limit (ulimit) on the system.
  • Default value for --open-file-limit used is 1024.
  • --open-file-limit flag is a no-op on Windows operating systems.

See flags for full list of available configuration options.

From command line using repo source

If you are trying to run proxy.py from source code, there is no binary file named proxy in the source code.

To start proxy.py from source code follow these instructions:

  • Clone repo
    ❯ git clone https://github.com/abhinavsingh/proxy.py.git
    ❯ cd proxy.py
  • Create a Python 3 virtual env
    ❯ python3 -m venv venv
    ❯ source venv/bin/activate
  • Install deps
    ❯ pip install -r requirements.txt
    ❯ pip install -r requirements-testing.txt
  • Run tests
    ❯ make
  • Run proxy.py
    ❯ python -m proxy

Also see Plugin Developer and Contributor Guide if you plan to work with proxy.py source code.

Docker image

Customize startup flags

By default, the Docker image starts proxy.py with IPv4 networking flags:

--hostname 0.0.0.0 --port 8899

To override the input flags, start the Docker image as follows. For example, to check the proxy.py version within the Docker image:

❯ docker run -it \
    -p 8899:8899 \
    --rm abhinavsingh/proxy.py:latest \
    -v

Plugin Examples

  • See plugin module for full code.
  • All the bundled plugin examples also work with https traffic.
  • Plugin examples are also bundled with the Docker image.

HTTP Proxy Plugins

ShortLinkPlugin

Add support for short links in your favorite browsers / applications.

Start proxy.py as:

❯ proxy \
    --plugins proxy.plugin.ShortLinkPlugin

Now you can speed up your daily browsing experience by visiting your favorite websites using single character domain names :). This works across all browsers.

Following short links are enabled by default:

Short Link    Destination URL
a/            amazon.com
i/            instagram.com
l/            linkedin.com
f/            facebook.com
g/            google.com
t/            twitter.com
w/            web.whatsapp.com
y/            youtube.com
proxy/        localhost:8899
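
To try it, request one of the short links above through the proxy, e.g. with curl (the exact behavior, an HTTP redirect versus a transparent host rewrite, depends on the plugin implementation):

❯ curl -v -x localhost:8899 http://t/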

ModifyPostDataPlugin

Modifies POST request body before sending request to upstream server.

Start proxy.py as:

❯ proxy \
    --plugins proxy.plugin.ModifyPostDataPlugin

By default, the plugin replaces the POST body content with the hardcoded b'{"key": "modified"}' and enforces Content-Type: application/json.

Verify the same using curl -x localhost:8899 -d '{"key": "value"}' http://httpbin.org/post

{
  "args": {},
  "data": "{\"key\": \"modified\"}",
  "files": {},
  "form": {},
  "headers": {
    "Accept": "*/*",
    "Content-Length": "19",
    "Content-Type": "application/json",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.54.0"
  },
  "json": {
    "key": "modified"
  },
  "origin": "1.2.3.4, 5.6.7.8",
  "url": "https://httpbin.org/post"
}

Note following from the response above:

  1. POST data was modified "data": "{\"key\": \"modified\"}". Original curl command data was {"key": "value"}.
  2. Our curl command did not add any Content-Type header, but our plugin did add one: "Content-Type": "application/json". The same can also be verified by looking at the json field in the output above:
    "json": {
     "key": "modified"
    },
    
  3. Our plugin also added a Content-Length header to match length of modified body.

MockRestApiPlugin

Mock responses for your REST API server. Use it to test and develop client-side applications without the need for an actual upstream REST API server.

Start proxy.py as:

❯ proxy \
    --plugins proxy.plugin.ProposedRestApiPlugin

Verify mock API response using curl -x localhost:8899 http://api.example.com/v1/users/

{"count": 2, "next": null, "previous": null, "results": [{"email": "you@example.com", "groups": [], "url": "api.example.com/v1/users/1/", "username": "admin"}, {"email": "someone@example.com", "groups": [], "url": "api.example.com/v1/users/2/", "username": "admin"}]}

Verify the same by inspecting proxy.py logs:

2019-09-27 12:44:02,212 - INFO - pid:7077 - access_log:1210 - ::1:64792 - GET None:None/v1/users/ - None None - 0 byte

The access log shows None:None as server ip:port. None simply means that a server connection was never made, since the response was returned by our plugin.

Now modify ProposedRestApiPlugin to return REST API mock responses as expected by your clients.
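
A modification might look like the sketch below. Note that the REST_API_SPEC attribute name and the response layout here are assumptions for illustration only; consult the bundled plugin source in the plugin module for the authoritative structure:

from proxy.plugin import ProposedRestApiPlugin


class MyMockApiPlugin(ProposedRestApiPlugin):
    # Assumed attribute: maps request paths to canned JSON responses.
    REST_API_SPEC = {
        b'/v1/users/': {
            'count': 1,
            'next': None,
            'previous': None,
            'results': [
                {'email': 'you@example.com', 'groups': [], 'username': 'admin'},
            ],
        },
    }

Then start proxy.py with --plugins my_module.MyMockApiPlugin instead of the bundled class.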

RedirectToCustomServerPlugin

Redirects all incoming http requests to a custom web server. By default, it redirects client requests to the inbuilt web server, which also runs on port 8899.

Start proxy.py and enable inbuilt web server:

❯ proxy \
    --enable-web-server \
    --plugins proxy.plugin.RedirectToCustomServerPlugin

Verify using curl -v -x localhost:8899 http://google.com

... [redacted] ...
< HTTP/1.1 404 NOT FOUND
< Server: proxy.py v1.0.0
< Connection: Close
<
* Closing connection 0

The above 404 response was returned by the proxy.py web server.

Verify the same by inspecting the logs for proxy.py. Along with the proxy request log, you must also see an http web server request log.

2019-09-24 19:09:33,602 - INFO - pid:49996 - access_log:1241 - ::1:49525 - GET /
2019-09-24 19:09:33,603 - INFO - pid:49995 - access_log:1157 - ::1:49524 - GET localhost:8899/ - 404 NOT FOUND - 70 bytes

FilterByUpstreamHostPlugin

Drops traffic by inspecting the upstream host. By default, the plugin drops traffic for google.com and www.google.com.

Start proxy.py as:

❯ proxy \
    --plugins proxy.plugin.FilterByUpstreamHostPlugin

Verify using curl -v -x localhost:8899 http://google.com:

... [redacted] ...
< HTTP/1.1 418 I'm a tea pot
< Proxy-agent: proxy.py v1.0.0
* no chunk, no close, no size. Assume close to signal end
<
* Closing connection 0

The above 418 I'm a tea pot response is sent by our plugin.

Verify the same by inspecting logs for proxy.py:

2019-09-24 19:21:37,893 - ERROR - pid:50074 - handle_readables:1347 - HttpProtocolException type raised
Traceback (most recent call last):
... [redacted] ...
2019-09-24 19:21:37,897 - INFO - pid:50074 - access_log:1157 - ::1:49911 - GET None:None/ - None None - 0 bytes

CacheResponsesPlugin

Caches Upstream Server Responses.

Start proxy.py as:

❯ proxy \
    --plugins proxy.plugin.CacheResponsesPlugin

Verify using curl -v -x localhost:8899 http://httpbin.org/get:

... [redacted] ...
< HTTP/1.1 200 OK
< Access-Control-Allow-Credentials: true
< Access-Control-Allow-Origin: *
< Content-Type: application/json
< Date: Wed, 25 Sep 2019 02:24:25 GMT
< Referrer-Policy: no-referrer-when-downgrade
< Server: nginx
< X-Content-Type-Options: nosniff
< X-Frame-Options: DENY
< X-XSS-Protection: 1; mode=block
< Content-Length: 202
< Connection: keep-alive
<
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.54.0"
  },
  "origin": "1.2.3.4, 5.6.7.8",
  "url": "https://httpbin.org/get"
}
* Connection #0 to host localhost left intact

Get path to the cache file from proxy.py logs:

... [redacted] ... - GET httpbin.org:80/get - 200 OK - 556 bytes
... [redacted] ... - Cached response at /var/folders/k9/x93q0_xn1ls9zy76m2mf2k_00000gn/T/httpbin.org-1569378301.407512.txt

Verify the contents of the cache file using cat /path/to/your/cache/httpbin.org.txt:

HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Content-Type: application/json
Date: Wed, 25 Sep 2019 02:24:25 GMT
Referrer-Policy: no-referrer-when-downgrade
Server: nginx
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
Content-Length: 202
Connection: keep-alive

{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.54.0"
  },
  "origin": "1.2.3.4, 5.6.7.8",
  "url": "https://httpbin.org/get"
}

ManInTheMiddlePlugin

Modifies upstream server responses.

Start proxy.py as:

❯ proxy \
    --plugins proxy.plugin.ManInTheMiddlePlugin

Verify using curl -v -x localhost:8899 http://google.com:

... [redacted] ...
< HTTP/1.1 200 OK
< Content-Length: 28
<
* Connection #0 to host localhost left intact
Hello from man in the middle

Response body Hello from man in the middle is sent by our plugin.

ProxyPoolPlugin

Forward incoming proxy requests to a set of upstream proxy servers.

By default, ProxyPoolPlugin is hard-coded to use localhost:9000 and localhost:9001 as upstream proxy servers.

Let's start upstream proxies first.

Start proxy.py instances on ports 9000 and 9001:

❯ proxy --port 9000
❯ proxy --port 9001

Now, start proxy.py with ProxyPoolPlugin (on default 8899 port):

❯ proxy \
    --plugins proxy.plugin.ProxyPoolPlugin

Make a curl request via 8899 proxy:

curl -v -x localhost:8899 http://httpbin.org/get

Verify that 8899 proxy forwards requests to upstream proxies by checking respective logs.

HTTP Web Server Plugins

Reverse Proxy

Extend in-built Web Server to add Reverse Proxy capabilities.

Start proxy.py as:

❯ proxy --enable-web-server \
    --plugins proxy.plugin.ReverseProxyPlugin

With the default configuration, the ReverseProxyPlugin is equivalent to the following Nginx config:

location /get {
    proxy_pass http://httpbin.org/get;
}

Verify using curl -v localhost:8899/get:

{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "localhost",
    "User-Agent": "curl/7.64.1"
  },
  "origin": "1.2.3.4, 5.6.7.8",
  "url": "https://localhost/get"
}

Web Server Route

Demonstrates inbuilt web server routing using a plugin.

Start proxy.py as:

❯ proxy --enable-web-server \
    --plugins proxy.plugin.WebServerPlugin

Verify using curl -v localhost:8899/http-route-example, which should return:

HTTP route response

Plugin Ordering

When using multiple plugins, depending upon plugin functionality, it might be worth considering the order in which plugins are passed on the command line.

Plugins are called in the same order as they are passed. For example, say we are using both FilterByUpstreamHostPlugin and RedirectToCustomServerPlugin. The idea is to drop all incoming http requests for google.com and www.google.com, and to redirect all other http requests to our inbuilt web server.

Hence, in this scenario, it is important to use FilterByUpstreamHostPlugin before RedirectToCustomServerPlugin. If we enable RedirectToCustomServerPlugin before FilterByUpstreamHostPlugin, google requests will also get redirected to the inbuilt web server instead of being dropped.
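
For this scenario, the invocation would pass the filter plugin first, combining flags already shown above (the --plugins flag accepts a comma separated list):

❯ proxy \
    --enable-web-server \
    --plugins proxy.plugin.FilterByUpstreamHostPlugin,proxy.plugin.RedirectToCustomServerPlugin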

End-to-End Encryption

By default, proxy.py uses the http protocol for communication with clients, e.g. curl or a browser. To enable end-to-end encryption using TLS / https, first generate certificates:

make https-certificates

Start proxy.py as:

❯ proxy \
    --cert-file https-cert.pem \
    --key-file https-key.pem

Verify using curl -x https://localhost:8899 --proxy-cacert https-cert.pem https://httpbin.org/get:

{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.54.0"
  },
  "origin": "1.2.3.4, 5.6.7.8",
  "url": "https://httpbin.org/get"
}

TLS Interception

By default, proxy.py will not decrypt https traffic between client and server. To enable TLS interception first generate root CA certificates:

❯ make ca-certificates

Let's also enable CacheResponsesPlugin so that we can verify the decrypted response from the server. Start proxy.py as:

❯ proxy \
    --plugins proxy.plugin.CacheResponsesPlugin \
    --ca-key-file ca-key.pem \
    --ca-cert-file ca-cert.pem \
    --ca-signing-key-file ca-signing-key.pem

:note: macOS users also need to pass an explicit CA file path for validation of peer certificates. See the --ca-file flag.

Verify TLS interception using curl

❯ curl -v -x localhost:8899 --cacert ca-cert.pem https://httpbin.org/get
*  issuer: C=US; ST=CA; L=SanFrancisco; O=proxy.py; OU=CA; CN=Proxy PY CA; emailAddress=proxyca@mailserver.com
*  SSL certificate verify ok.
> GET /get HTTP/1.1
... [redacted] ...
< Connection: keep-alive
<
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.54.0"
  },
  "origin": "1.2.3.4, 5.6.7.8",
  "url": "https://httpbin.org/get"
}

The issuer line confirms that the response was intercepted.

Also verify the contents of cached response file. Get path to the cache file from proxy.py logs.

❯ cat /path/to/your/tmp/directory/httpbin.org-1569452863.924174.txt

HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Content-Type: application/json
Date: Wed, 25 Sep 2019 23:07:05 GMT
Referrer-Policy: no-referrer-when-downgrade
Server: nginx
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
Content-Length: 202
Connection: keep-alive

{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.54.0"
  },
  "origin": "1.2.3.4, 5.6.7.8",
  "url": "https://httpbin.org/get"
}

Voila!!! If you remove the CA flags, you will find encrypted data in the cached file instead of plain text.

Now use the CA flags with other plugin examples to see them work with https traffic.

Proxy Over SSH Tunnel

Requires paramiko to work. See requirements-tunnel.txt

Proxy Remote Requests Locally

                        |
+------------+          |            +----------+
|   LOCAL    |          |            |  REMOTE  |
|   HOST     | <== SSH ==== :8900 == |  SERVER  |
+------------+          |            +----------+
:8899 proxy.py          |
                        |
                     FIREWALL
                  (allow tcp/22)

What

Proxy HTTP(s) requests made on a remote server through a proxy.py server running on localhost.

How

  • Requested remote port is forwarded over the SSH connection.
  • proxy.py running on the localhost handles and responds to remote proxy requests.

Requirements

  1. localhost MUST have SSH access to the remote server
  2. remote server MUST be configured to proxy HTTP(s) requests through the forwarded port number e.g. :8900.
    • remote and localhost ports CAN be same e.g. :8899.
    • :8900 is chosen in ascii art for differentiation purposes.

Try it

Start proxy.py as:

# On localhost
❯ proxy --enable-tunnel \
    --tunnel-username username \
    --tunnel-hostname ip.address.or.domain.name \
    --tunnel-port 22 \
    --tunnel-remote-host 127.0.0.1 \
    --tunnel-remote-port 8899

Make an HTTP proxy request on the remote server and verify that the response contains the public IP address of localhost as origin:

# On remote
❯ curl -x 127.0.0.1:8899 http://httpbin.org/get
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.54.0"
  },
  "origin": "x.x.x.x, y.y.y.y",
  "url": "https://httpbin.org/get"
}

Also, verify that proxy.py logs on localhost contain the remote IP as the client IP.

access_log:328 - remote:52067 - GET httpbin.org:80

Proxy Local Requests Remotely

                        |
+------------+          |     +----------+
|   LOCAL    |          |     |  REMOTE  |
|   HOST     | === SSH =====> |  SERVER  |
+------------+          |     +----------+
                        |     :8899 proxy.py
                        |
                    FIREWALL
                 (allow tcp/22)

Embed proxy.py

Blocking Mode

Start proxy.py in embedded mode with default configuration by using the proxy.main method. Example:

import proxy

if __name__ == '__main__':
  proxy.main()

Customize startup flags by passing a list of input arguments:

import proxy

if __name__ == '__main__':
  proxy.main([
    '--hostname', '::1',
    '--port', '8899'
  ])

or, customize startup flags by passing them as kwargs:

import ipaddress
import proxy

if __name__ == '__main__':
  proxy.main(
    hostname=ipaddress.IPv6Address('::1'),
    port=8899
  )

Note that:

  1. Calling main is simply equivalent to starting proxy.py from command line.
  2. main will block until proxy.py shuts down

Non-blocking Mode

Start proxy.py in non-blocking embedded mode with default configuration by using the start method. Example:

import proxy

if __name__ == '__main__':
  with proxy.start([]):
    ...  # ... your logic here ...

Note that:

  1. start is similar to main, except start won't block.
  2. start is a context manager. It will start proxy.py when called and will shut it down once scope ends.
  3. Just like main, startup flags with the start method can be customized either by passing flags as a list of input arguments, e.g. start(['--port', '8899']), or by passing them as kwargs, e.g. start(port=8899).
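
Putting it together, here is a minimal sketch that routes a request through the embedded proxy using only the standard library (the port and target URL are illustrative):

import urllib.request

import proxy

if __name__ == '__main__':
    with proxy.start(['--num-workers', '1', '--port', '8899']):
        # proxy.py is accepting connections while inside this block.
        opener = urllib.request.build_opener(
            urllib.request.ProxyHandler({'http': 'http://localhost:8899'}))
        with opener.open('http://httpbin.org/get', timeout=10) as response:
            print(response.status)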

Unit testing with proxy.py

proxy.TestCase

To setup and teardown proxy.py for your Python unittest classes, simply use proxy.TestCase instead of unittest.TestCase. Example:

import proxy


class TestProxyPyEmbedded(proxy.TestCase):

    def test_my_application_with_proxy(self) -> None:
        self.assertTrue(True)

Note that:

  1. proxy.TestCase overrides unittest.TestCase.run() method to setup and teardown proxy.py.
  2. proxy.py server will listen on a random available port on the system. This random port is available as self.PROXY_PORT within your test cases.
  3. Only a single worker is started by default (--num-workers 1) for faster setup and teardown.
  4. Most importantly, proxy.TestCase also ensures proxy.py server is up and running before proceeding with execution of tests. By default, proxy.TestCase will wait for 10 seconds for proxy.py server to start; upon failure, a TimeoutError exception will be raised.

Override startup flags

To override default startup flags, define a PROXY_PY_STARTUP_FLAGS variable in your test class. Example:

class TestProxyPyEmbedded(proxy.TestCase):

    PROXY_PY_STARTUP_FLAGS = [
        '--num-workers', '1',
        '--enable-web-server',
    ]

    def test_my_application_with_proxy(self) -> None:
        self.assertTrue(True)

See test_embed.py for full working example.

With unittest.TestCase

If for some reason you are unable to directly use proxy.TestCase, then simply override unittest.TestCase.run yourself to setup and teardown proxy.py. Example:

from typing import Any, Optional

import unittest
import proxy


class TestProxyPyEmbedded(unittest.TestCase):

    def test_my_application_with_proxy(self) -> None:
        self.assertTrue(True)

    def run(self, result: Optional[unittest.TestResult] = None) -> Any:
        with proxy.start([
                '--num-workers', '1',
                '--port', '... random port ...']):
            super().run(result)

or simply setup / teardown proxy.py within setUpClass and tearDownClass class methods, as sketched below.
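
A sketch of that approach, hand-managing the proxy.start context manager across class-level setup and teardown (assuming start returns a standard context manager as described above; flags are illustrative):

import unittest

import proxy


class TestWithProxy(unittest.TestCase):

    @classmethod
    def setUpClass(cls) -> None:
        # Enter the proxy.start context manually; it is exited in tearDownClass.
        cls.proxy_ctx = proxy.start(['--num-workers', '1', '--port', '8899'])
        cls.proxy_ctx.__enter__()

    @classmethod
    def tearDownClass(cls) -> None:
        cls.proxy_ctx.__exit__(None, None, None)

    def test_my_application_with_proxy(self) -> None:
        self.assertTrue(True)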

Plugin Developer and Contributor Guide

Everything is a plugin

As you might have guessed by now, in proxy.py everything is a plugin.

  • We enabled proxy server plugins using the --plugins flag. All the plugin examples above implement HttpProxyBasePlugin; a minimal skeleton is sketched after this list. See documentation of HttpProxyBasePlugin for available lifecycle hooks. Use HttpProxyBasePlugin to modify the behavior of the http(s) proxy protocol between client and upstream server. Example: FilterByUpstreamHostPlugin.
  • We also enabled the inbuilt web server using --enable-web-server. The inbuilt web server implements the HttpProtocolHandlerPlugin plugin. See documentation of HttpProtocolHandlerPlugin for available lifecycle hooks. Use HttpProtocolHandlerPlugin to add new features for http(s) clients. Example: HttpWebServerPlugin.
  • There is also a --disable-http-proxy flag. It disables the inbuilt proxy server. Use this flag with the --enable-web-server flag to run proxy.py as a programmable http(s) server. HttpProxyPlugin also implements HttpProtocolHandlerPlugin.
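
To make the first bullet concrete, below is a skeleton HttpProxyBasePlugin subclass. This is a sketch: the hook names follow the lifecycle hooks named above, while the import paths are assumptions based on the package layout shown under Internal Documentation; consult the plugin module for the authoritative signatures.

from typing import Optional

from proxy.http.parser import HttpParser
from proxy.http.proxy import HttpProxyBasePlugin


class NoOpProxyPlugin(HttpProxyBasePlugin):
    """Skeleton plugin: passes requests and responses through untouched."""

    def before_upstream_connection(self, request: HttpParser) -> Optional[HttpParser]:
        # Returning the request lets proxy.py connect upstream as usual.
        return request

    def handle_client_request(self, request: HttpParser) -> Optional[HttpParser]:
        # Inspect or mutate the parsed client request here.
        return request

    def handle_upstream_chunk(self, chunk: bytes) -> bytes:
        # Inspect or mutate raw response bytes from the upstream server here.
        return chunk

    def on_upstream_connection_close(self) -> None:
        pass

Load it like any bundled example, e.g. PYTHONPATH=. proxy --plugins my_module.NoOpProxyPlugin.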

Internal Architecture

  • HttpProtocolHandler thread is started with the accepted TcpClientConnection. HttpProtocolHandler is responsible for parsing incoming client request and invoking HttpProtocolHandlerPlugin lifecycle hooks.
  • HttpProxyPlugin which implements HttpProtocolHandlerPlugin also has its own plugin mechanism. Its responsibility is to establish connection between client and upstream TcpServerConnection and invoke HttpProxyBasePlugin lifecycle hooks.
  • HttpProtocolHandler threads are started by Acceptor processes.
  • --num-workers Acceptor processes are started by AcceptorPool on start-up.
  • AcceptorPool listens on the server socket and passes the handler to Acceptor processes. Workers are responsible for accepting new client connections and starting an HttpProtocolHandler thread.

Development Guide

Setup Local Environment

Contributors must start proxy.py from source to verify and develop new features / fixes.

See Run proxy.py from command line using repo source for details.

Setup pre-commit hook

The pre-commit hook ensures lint checks and test execution.

  1. cd /path/to/proxy.py
  2. ln -s $PWD/git-pre-commit .git/hooks/pre-commit

Sending a Pull Request

Every pull request is tested using GitHub actions.

See GitHub workflow for list of tests.

Utilities

TCP Sockets

new_socket_connection

Attempts to create an IPv4 connection first, then IPv6, and finally a dual-stack connection to the provided address.

>>> conn = new_socket_connection(('httpbin.org', 80))
>>> ...[ use connection ]...
>>> conn.close()

socket_connection

socket_connection is a convenient decorator + context manager around new_socket_connection which ensures conn.close is called implicitly.

As a context manager:

>>> with socket_connection(('httpbin.org', 80)) as conn:
>>>   ... [ use connection ] ...

As a decorator:

>>> @socket_connection(('httpbin.org', 80))
>>> def my_api_call(conn, *args, **kwargs):
>>>   ... [ use connection ] ...

Http Client

build_http_request

Generate HTTP GET request

>>> build_http_request(b'GET', b'/')
b'GET / HTTP/1.1\r\n\r\n'
>>>

Generate HTTP GET request with headers

>>> build_http_request(b'GET', b'/',
        headers={b'Connection': b'close'})
b'GET / HTTP/1.1\r\nConnection: close\r\n\r\n'
>>>

Generate HTTP POST request with headers and body

>>> import json
>>> build_http_request(b'POST', b'/form',
        headers={b'Content-type': b'application/json'},
        body=proxy.bytes_(json.dumps({'email': 'hello@world.com'})))
    b'POST /form HTTP/1.1\r\nContent-type: application/json\r\n\r\n{"email": "hello@world.com"}'

build_http_response

build_http_response(
    status_code: int,
    protocol_version: bytes = HTTP_1_1,
    reason: Optional[bytes] = None,
    headers: Optional[Dict[bytes, bytes]] = None,
    body: Optional[bytes] = None) -> bytes
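
By analogy with build_http_request, a response can be assembled as below (a sketch; the exact bytes returned, e.g. whether a Content-Length header is added automatically, depend on the implementation):

>>> build_http_response(200, reason=b'OK',
        headers={b'Content-Type': b'application/json'},
        body=b'{"key": "value"}')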

PKI

API Usage

gen_private_key

gen_private_key(
    key_path: str,
    password: str,
    bits: int = 2048,
    timeout: int = 10) -> bool

gen_public_key

gen_public_key(
    public_key_path: str,
    private_key_path: str,
    private_key_password: str,
    subject: str,
    alt_subj_names: Optional[List[str]] = None,
    extended_key_usage: Optional[str] = None,
    validity_in_days: int = 365,
    timeout: int = 10) -> bool

remove_passphrase

remove_passphrase(
    key_in_path: str,
    password: str,
    key_out_path: str,
    timeout: int = 10) -> bool

gen_csr

gen_csr(
    csr_path: str,
    key_path: str,
    password: str,
    crt_path: str,
    timeout: int = 10) -> bool

sign_csr

sign_csr(
    csr_path: str,
    crt_path: str,
    ca_key_path: str,
    ca_key_password: str,
    ca_crt_path: str,
    serial: str,
    alt_subj_names: Optional[List[str]] = None,
    extended_key_usage: Optional[str] = None,
    validity_in_days: int = 365,
    timeout: int = 10) -> bool

See pki.py and test_pki.py for usage examples.
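
For instance, the signatures above can be combined to produce a self-signed certificate (a sketch; the file paths and password are illustrative):

from proxy.common import pki

# Generate an encrypted private key, then a self-signed public certificate.
pki.gen_private_key(key_path='key.pem', password='changeit')
pki.gen_public_key(
    public_key_path='cert.pem',
    private_key_path='key.pem',
    private_key_password='changeit',
    subject='/CN=example.com')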

CLI Usage

Use proxy.common.pki module for:

  1. Generating public and private keys
  2. Generating CSR requests
  3. Signing CSR requests using a custom CA

❯ python -m proxy.common.pki -h
usage: pki.py [-h] [--password PASSWORD] [--private-key-path PRIVATE_KEY_PATH]
              [--public-key-path PUBLIC_KEY_PATH] [--subject SUBJECT]
              action

proxy.py v2.1.2 : PKI Utility

positional arguments:
  action                Valid actions: remove_passphrase, gen_private_key,
                        gen_public_key, gen_csr, sign_csr

optional arguments:
  -h, --help            show this help message and exit
  --password PASSWORD   Password to use for encryption. Default: proxy.py
  --private-key-path PRIVATE_KEY_PATH
                        Private key path
  --public-key-path PUBLIC_KEY_PATH
                        Public key path
  --subject SUBJECT     Subject to use for public key generation. Default:
                        /CN=example.com

Internal Documentation

Browse through internal class hierarchy and documentation using pydoc3. Example:

❯ pydoc3 proxy

PACKAGE CONTENTS
    __main__
    common (package)
    core (package)
    http (package)
    main

FILE
    /Users/abhinav/Dev/proxy.py/proxy/__init__.py

Frequently Asked Questions

Threads vs Threadless

Pre v2.x, proxy.py used to spawn new threads for handling client requests.

Starting v2.x, proxy.py added support for threadless execution of client requests using asyncio.

In the future, threadless execution will become the default mode.

Until then, if you are interested in trying it out, start proxy.py with the --threadless flag.

SyntaxError: invalid syntax

proxy.py is strictly typed and uses Python typing annotations. Example:

>>> my_strings : List[str] = []
>>> #############^^^^^^^^^#####

Hence a Python version that understands typing annotations is required. Make sure you are using Python 3.6+.

Verify the version before running proxy.py:

❯ python --version

All typing annotations can be replaced with comment-only annotations. Example:

>>> my_strings = [] # List[str]
>>> ################^^^^^^^^^^^

That would enable proxy.py to run on Python versions before 3.6, even 2.7. However, since all future versions of Python will support typing annotations, this has not been pursued.

Unable to load plugins

Make sure plugin modules are discoverable by adding them to PYTHONPATH. Example:

PYTHONPATH=/path/to/my/app proxy --plugins my_app.proxyPlugin

...[redacted]... - Loaded plugin proxy.HttpProxyPlugin
...[redacted]... - Loaded plugin my_app.proxyPlugin

OR, simply pass the fully-qualified path as a parameter, e.g.

proxy --plugins /path/to/my/app/my_app.proxyPlugin

Unable to connect with proxy.py from remote host

Make sure proxy.py is listening on the correct network interface. Try the following flags:

  • For IPv6 --hostname ::
  • For IPv4 --hostname 0.0.0.0

Basic auth not working with a browser

Most likely it's a browser integration issue with the system keychain.

  • First, verify that basic auth is working using curl: curl -v -x username:password@localhost:8899 https://httpbin.org/get
  • See this thread for further details.

Docker image not working on macOS

It's a compatibility issue with vpnkit.

See moby/vpnkit exhausts docker resources and Connection refused: The proxy could not connect for some background.

GCE log viewer integration for proxy.py

A starter fluentd.conf template is available.

  1. Copy this configuration file as proxy.py.conf under /etc/google-fluentd/config.d/
  2. Update the path field to the log file path used with the --log-file flag. By default, the /tmp/proxy.log path is tailed.
  3. Reload google-fluentd: sudo service google-fluentd restart

Now proxy.py logs can be browsed using GCE log viewer.

ValueError: filedescriptor out of range in select

proxy.py is made to handle thousands of connections per second without any socket leaks.

  1. Use the --open-file-limit flag to customize ulimit -n.
  2. Make sure to adjust the --backlog flag for higher concurrency.
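
For example, both can be raised at startup (values are illustrative):

❯ proxy --open-file-limit 65535 --backlog 4096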

If nothing helps, open an issue with the requests-per-second being sent and the output of the following debug script:

❯ ./helper/monitor_open_files.sh <proxy-py-pid>

None: None in access logs

Sometimes you may see None:None in access logs. It simply means that an upstream server connection was never established, i.e. upstream_host=None, upstream_port=None.

There can be several reasons for no upstream connection; a few obvious ones include:

  1. Client established a connection but never completed the request.
  2. A plugin returned a response prematurely, avoiding connection to upstream server.

Flags

❯ proxy -h
usage: proxy [-h] [--backlog BACKLOG] [--basic-auth BASIC_AUTH]
             [--ca-key-file CA_KEY_FILE] [--ca-cert-dir CA_CERT_DIR]
             [--ca-cert-file CA_CERT_FILE]
             [--ca-signing-key-file CA_SIGNING_KEY_FILE]
             [--cert-file CERT_FILE]
             [--client-recvbuf-size CLIENT_RECVBUF_SIZE]
             [--devtools-ws-path DEVTOOLS_WS_PATH]
             [--disable-headers DISABLE_HEADERS] [--disable-http-proxy]
             [--enable-dashboard] [--enable-devtools] [--enable-events]
             [--enable-static-server] [--enable-web-server]
             [--hostname HOSTNAME] [--key-file KEY_FILE]
             [--log-level LOG_LEVEL] [--log-file LOG_FILE]
             [--log-format LOG_FORMAT] [--num-workers NUM_WORKERS]
             [--open-file-limit OPEN_FILE_LIMIT] [--pac-file PAC_FILE]
             [--pac-file-url-path PAC_FILE_URL_PATH]
             [--pid-file PID_FILE] [--plugins PLUGINS] [--port PORT]
             [--server-recvbuf-size SERVER_RECVBUF_SIZE]
             [--static-server-dir STATIC_SERVER_DIR] [--threadless]
             [--timeout TIMEOUT] [--version]

proxy.py v2.1.2

optional arguments:
  -h, --help            show this help message and exit
  --backlog BACKLOG     Default: 100. Maximum number of pending connections to
                        proxy server
  --basic-auth BASIC_AUTH
                        Default: No authentication. Specify colon separated
                        user:password to enable basic authentication.
  --ca-key-file CA_KEY_FILE
                        Default: None. CA key to use for signing dynamically
                        generated HTTPS certificates. If used, must also pass
                        --ca-cert-file and --ca-signing-key-file
  --ca-cert-dir CA_CERT_DIR
                        Default: ~/.proxy.py. Directory to store dynamically
                        generated certificates. Also see --ca-key-file, --ca-
                        cert-file and --ca-signing-key-file
  --ca-cert-file CA_CERT_FILE
                        Default: None. Signing certificate to use for signing
                        dynamically generated HTTPS certificates. If used,
                        must also pass --ca-key-file and --ca-signing-key-file
  --ca-file CA_FILE     Default: None. Provide path to custom CA file for peer
                        certificate validation. Specially useful on MacOS.
  --ca-signing-key-file CA_SIGNING_KEY_FILE
                        Default: None. CA signing key to use for dynamic
                        generation of HTTPS certificates. If used, must also
                        pass --ca-key-file and --ca-cert-file
  --cert-file CERT_FILE
                        Default: None. Server certificate to enable end-to-end
                        TLS encryption with clients. If used, must also pass
                        --key-file.
  --client-recvbuf-size CLIENT_RECVBUF_SIZE
                        Default: 1 MB. Maximum amount of data received from
                        the client in a single recv() operation. Bump this
                        value for faster uploads at the expense of increased
                        RAM.
  --devtools-ws-path DEVTOOLS_WS_PATH
                        Default: /devtools. Only applicable if --enable-
                        devtools is used.
  --disable-headers DISABLE_HEADERS
                        Default: None. Comma separated list of headers to
                        remove before dispatching client request to upstream
                        server.
  --disable-http-proxy  Default: False. Whether to disable
                        proxy.HttpProxyPlugin.
  --enable-dashboard    Default: False. Enables proxy.py dashboard.
  --enable-devtools     Default: False. Enables integration with Chrome
                        Devtool Frontend. Also see --devtools-ws-path.
  --enable-events       Default: False. Enables core to dispatch lifecycle
                        events. Plugins can be used to subscribe for core
                        events.
  --enable-static-server
                        Default: False. Enable inbuilt static file server.
                        Optionally, also use --static-server-dir to serve
                        static content from custom directory. By default,
                        static file server serves out of installed proxy.py
                        python module folder.
  --enable-web-server   Default: False. Whether to enable
                        proxy.HttpWebServerPlugin.
  --hostname HOSTNAME   Default: ::1. Server IP address.
  --key-file KEY_FILE   Default: None. Server key file to enable end-to-end
                        TLS encryption with clients. If used, must also pass
                        --cert-file.
  --log-level LOG_LEVEL
                        Valid options: DEBUG, INFO (default), WARNING, ERROR,
                        CRITICAL. Both upper and lowercase values are allowed.
                        You may also simply use the leading character e.g.
                        --log-level d
  --log-file LOG_FILE   Default: sys.stdout. Log file destination.
  --log-format LOG_FORMAT
                        Log format for Python logger.
  --num-workers NUM_WORKERS
                        Defaults to number of CPU cores.
  --open-file-limit OPEN_FILE_LIMIT
                        Default: 1024. Maximum number of files (TCP
                        connections) that proxy.py can open concurrently.
  --pac-file PAC_FILE   A file (Proxy Auto Configuration) or string to serve
                        when the server receives a direct file request. Using
                        this option enables proxy.HttpWebServerPlugin.
  --pac-file-url-path PAC_FILE_URL_PATH
                        Default: /. Web server path to serve the PAC file.
  --pid-file PID_FILE   Default: None. Save parent process ID to a file.
  --plugins PLUGINS     Comma separated plugins
  --port PORT           Default: 8899. Server port.
  --server-recvbuf-size SERVER_RECVBUF_SIZE
                        Default: 1 MB. Maximum amount of data received from
                        the server in a single recv() operation. Bump this
                        value for faster downloads at the expense of increased
                        RAM.
  --static-server-dir STATIC_SERVER_DIR
                        Default: "public" folder in directory where proxy.py
                        is placed. This option is only applicable when static
                        server is also enabled. See --enable-static-server.
  --threadless          Default: False. When disabled a new thread is spawned
                        to handle each client connection.
  --timeout TIMEOUT     Default: 10. Number of seconds after which an inactive
                        connection must be dropped. Inactivity is defined by
                        no data sent or received by the client.
  --version, -v         Prints proxy.py version.

Proxy.py not working? Report at:
https://github.com/abhinavsingh/proxy.py/issues/new

Changelog

v2.x

  • No longer a single file module.
  • Added support for threadless execution.
  • Added dashboard app.
  • Added support for unit testing.

v1.x

  • Python3 only.
    • Deprecated support for Python 2.x.
  • Added support for multi-core accept.
  • Added plugin support.

v0.x

  • Single file.
  • Single threaded server.

For a detailed changelog, refer to release PRs or commit history.


Blog post: https://abhinavsingh.com/proxy-py-a-lightweight-single-file-http-proxy-server-in-python/

Github page: https://github.com/abhinavsingh/proxy.py

