Introduction

Ionize has a long history of providing security services to a wide range of clients across the government, academic, and commercial sectors. In our experience, there is considerable confusion about which style of security assessment will achieve an organisation's goals, and about the questions that should be asked of potential providers of security services. This article aims to provide a resource which will alleviate some of this confusion.

Broadly speaking, there are four categories of security assessment commonly referred to in the market: vulnerability assessments, penetration tests, red team / attack simulations, and bug bounties. Each has different advantages and will suit organisations with different goals, budgets, and levels of security maturity. Much of the confusion in the market stems from the lack of standardised terminology, and from unscrupulous providers of security services miscategorising their activities (e.g. promising a penetration test, yet conducting only a vulnerability assessment). Because of this, we will define each term before outlining the circumstances where it fits into the security landscape.

It is also worth noting that although as humans we enjoy classifying things into neatly defined categories, the real world doesn't often accommodate our obsessive-compulsive tendencies. There is no single key factor that defines the difference between a vulnerability assessment and a penetration test, nor one between a penetration test and a red team (exactly what these factors are will be a constant source of Twitter arguments for the foreseeable future). The key takeaway is the underlying themes that apply to each assessment type, which have the bonus of allowing you to detect when you're being sold a vulnerability assessment labelled as a penetration test.

Vulnerability Assessments

Vulnerability assessments (VAs) are defined as a high-level overview of a product, service, or network. They are conducted with automated tooling (vulnerability scanners such as Nessus, Nexpose, and others) and receive little to no human review. A key difference from the later assessment types is that no active exploitation of the network occurs during a vulnerability assessment. This style outlines potential security risks, as opposed to confirming security risks through the act of exploitation.

The advantages of this style of assessment centre on cost, low technical barriers, and network impact. With little human intervention come few technical barriers to entry. This allows many organisations to conduct vulnerability assessments in house for a similar price to engaging outside security services, with the additional benefit of being able to repeat the activity on an ongoing basis. The lack of active exploitation can also provide assurance to operations staff who need to ensure there is no impact on business systems or other infrastructure.

Many disadvantages are apparent with the vulnerability assessment model. It is by far the most unreliable style of security assessment, with tools often producing a large number of false positive or, worse, false negative results about security risks within your network. Security products have a habit of inflating vulnerability criticality (which scanner is better: the one that finds 3 critical vulnerabilities on the network, or the one that finds 3 medium ones?), which makes it difficult to assess the real threat posed by findings, doubly so when combined with the lack of evidence the VA model provides. Another unfortunate side effect is that the low barrier to entry allows a large number of fly-by-night operators to sell vulnerability assessments (often dressed up as penetration tests) into the market.

Finally, you're relying on an automated tool to check for default settings and assign a pass / fail on that basis. We've encountered plenty of security assessments where vulnerability scanners have missed critical findings because a URL was non-standard, or because the vulnerability was behind authentication with "admin:admin" as the username / password. Automated security products still cannot effectively deal with variations that are trivial for a human to understand and bypass.
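The default-credential case above can be sketched in a few lines of Python. This is a minimal illustration, not a real scanner: `attempt_login` is a hypothetical stand-in for an HTTP login request against a non-standard admin path, and the credential list is a small invented sample.

```python
# Hypothetical sketch of the default-credential check a human tester makes
# and a URL-pattern-driven scanner often misses. Nothing here touches a
# network: attempt_login stands in for a real HTTP login request.

COMMON_DEFAULTS = [
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("tomcat", "tomcat"),
]

def attempt_login(username, password):
    """Stub simulating a login form at a non-standard path."""
    # Models an appliance still running with factory credentials.
    return (username, password) == ("admin", "admin")

def find_default_credentials(login):
    """Return the first default credential pair the target accepts, else None."""
    for user, password in COMMON_DEFAULTS:
        if login(user, password):
            return (user, password)
    return None

print(find_default_credentials(attempt_login))  # ('admin', 'admin')
```

The point is not the code itself, but that the human decides *where* to point the check; a scanner keyed to standard URLs never issues the request in the first place.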

Often after a vulnerability assessment you'll get a list of areas where you can begin looking for problems within your network, but the real findings will be mixed in amongst tens, if not hundreds, of other scary-sounding findings which don't pose a threat in the real world. It's something we would recommend organisations adopt as part of an ongoing penetration testing or red team programme; however, we don't recommend counting on it alone to keep you secure.

Vulnerability Assessments – Summary

Key Themes

– Automated results and tool output.
– No exploitation.
– Little to no human involvement.

Pros / Cons

+ Lowest cost out of any assessment style.
+ Low technical knowledge required, allowing in house scanning.
+ Typically non-intrusive.
– Most unreliable technique by far.
– Large numbers of false positives and false negatives.
– Security products inflate criticality.
– Only finds low-hanging fruit and cannot deal with organisation-specific circumstances.
– No or very low evidence of exploitability.
– Difficult to develop roadmaps from results due to lack of evidence / inaccurate criticality.

Penetration Testing

Penetration testing is defined as an in-depth security review of a product, service, or network by an (ideally experienced) penetration tester. It is conducted using several tools, some of which may even be those used during a vulnerability assessment (but they shouldn't be the only tools!). Testers should not only identify potential vulnerabilities, but confirm their existence through active exploitation or concrete examples of how these vulnerabilities could be abused to pose a security risk to the organisation. This style of assessment should provide an outline of actual vulnerabilities, a substantiated explanation of how each vulnerability poses a security risk, and ideally evidence of exploitation. As a bonus, a penetration testing report should also suggest remediation methods, although this becomes difficult to do comprehensively, as the tester will often not be fully aware of the business requirements surrounding the product, service, or network.

Leveraging an experienced tester will increase the number of relevant findings within your application. Organisation-specific quirks, such as vulnerabilities behind weak authentication, bespoke code, non-standard application paths, and hundreds of other cases, can be handled where an automated tool would fail. Active exploitation allows the organisation to examine the real-world impact of a vulnerability and assign an appropriate priority to its remediation. Groups of low-severity vulnerabilities can be chained together to demonstrate high-risk impact. This evidence-based approach has the added benefit of eliminating false positives.

Testers can write scripts or produce detailed step-by-step exploitation guides to help the organisation reproduce the findings and confirm the issue has been remediated after patching. On the topic of remediation, testers can liaise with developers to discuss why the application was vulnerable and suggest modifications to ensure the security of future versions.
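As a hedged sketch of what such a reproduction script might look like, the following Python checks for a hypothetical insecure direct object reference (IDOR). The record IDs and the `fetch_record` stub are invented for illustration; in a real engagement the stub would be an authenticated HTTP request against the finding's endpoint.

```python
# Hypothetical reproduction script of the kind a tester might hand over so a
# finding can be re-verified after patching. All names and IDs are invented.

def fetch_record(session_user_id, target_id):
    """Stub modelling the vulnerable, pre-patch server: no ownership check."""
    return {"id": target_id, "owner": target_id}

def is_idor_present(fetch, session_user_id=1001, other_id=1002):
    """True if a logged-in user can read a record belonging to someone else."""
    record = fetch(session_user_id, other_id)
    return record is not None and record.get("owner") != session_user_id

print("VULNERABLE" if is_idor_present(fetch_record) else "FIXED")  # VULNERABLE
```

The value of a script like this is that the client's own team can re-run it after deploying a patch and get a yes/no answer, rather than re-engaging the tester for every fix.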

There are several elements of trust involved in penetration testing and other security assessments. Firstly, there is a strong possibility that testers will gain access to sensitive information about your organisation or client data, so you must be comfortable with the people you are allowing onto these systems. Secondly, you are allowing the tester to actively exploit systems, which has the potential to affect data and the reliability of those systems. This can often be mitigated by using pre-prod or UAT environments along with dummy data, but the best mitigation is ensuring you have reliable, experienced testers who are unlikely to bring down systems. Thirdly, you're engaging these services for the explicit purpose of having all discovered vulnerabilities disclosed, so you need to trust that the provider will disclose this information to you. Finally, you must be aware that third parties will now hold information about security vulnerabilities within your organisation, so trusting their ability to keep this information confidential and secure is paramount. These issues must be considered when evaluating whether engaging security services is right for you, along with your choice of provider.

Penetration testing is not the answer to every security issue. There are several downsides which must be considered and, where possible, mitigated. As with vulnerability assessments, a penetration test is a point-in-time assessment. It can provide insight into the current security state of your project, application, or service; however, it doesn't necessarily provide assurance going forward once code changes or further releases have been made.

One of the aspects we use to define a penetration test, and perhaps one of its strongest limitations, concerns the scope of the engagement. Any engagement will rightfully have defined rules about which systems may or may not be tested, yet this can lead to a false sense of security around any given system. Consider the scenario where you ask for your latest project to be subjected to a penetration test because it contains gigabytes of sensitive information. On careful consideration, we don't really care whether the system is secure; we care about the security of the data held within it. The system can be totally secure, but if an attacker can compromise a totally unrelated legacy system elsewhere on your network and then attack the data via your internal network, the result is the same as your newest project being hacked. Your penetration test, which correctly focused only on the application, does not inform you of these risks. The penetration testing report advising that the system is secure is correct, yet it is still potentially lulling you into a false sense of security.

Penetration Test – Summary

Key Themes

– In-depth / exhaustive dive into a project, service, or network.
– Active exploitation to gather evidence of vulnerabilities.
– Triaged reports and suggested remediations.
– Scoped around a system or small number of systems.

Pros / Cons

+ Evidence based testing
+ Comprehensive test of a product or service
+ Can demonstrate impact, assuming scope allows
+ Human testers can deal with organisation specific circumstances
+ Realistic assessment of criticality with supporting evidence
+ Elimination of false positives allowing organisation to triage remediation
+ Suggestions for remediation techniques and / or code snippets
+ Scripts to reproduce vulnerabilities to test remediation
+ Known reputation and accountability for testers
+ Demonstration of low impact vulnerabilities can be chained together to identify high impact issues which need priority remediation
– Often limited in scope when engaged on a project basis
– Limited scope can misrepresent the security of the product
– A test is a point-in-time assessment, which doesn't provide assurance for the future
– If not done correctly, testing may impact production systems and cause business disruption
– Knowledge of vulnerabilities in your infrastructure is held by a third party

Red Teaming / Attack Simulations

Attack simulations are defined as assessments with limited or no scope against an organisation. They are conducted by one or multiple testers, typically over a long period of time, though this can vary depending on the engagement. As the name suggests, this style of engagement aims to mimic the real-world threats facing an organisation by leveraging techniques used by dedicated attackers. This may include activities such as phishing, social engineering, physical security testing, or attacking the entirety of the organisation's internet-facing technical services.

Typically, the security testing on any one system will not be as exhaustive as in a penetration test. This is because the focus is on examining a wide range of targets (the entire organisation) and isolating issues which pose a higher risk. As an example, SSL-based vulnerabilities are something you may see in the "deep dive" of a penetration test, whereas an attack simulation will focus only on higher-risk findings such as SQL injection, remote code execution, or arbitrary file upload. Yes, those SSL errors should be fixed to comply with best practice, but let's take care of the glaring issues first.
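To make the distinction concrete, here is a minimal, self-contained illustration of the kind of high-risk finding an attack simulation prioritises: SQL injection. It uses an in-memory SQLite database with an invented table; the vulnerable and safe lookups differ only in whether user input is concatenated into the query or passed as a parameter.

```python
import sqlite3

# Invented demo data in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

def lookup_vulnerable(name):
    # BAD: user input concatenated straight into the SQL string.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

def lookup_safe(name):
    # GOOD: parameterised query; input is treated as data, not SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
print(lookup_vulnerable(payload))  # [('alice',), ('bob',)] - every row leaks
print(lookup_safe(payload))        # [] - the payload matches nothing
```

A flaw like this exposes data directly, which is why it outranks a weak TLS cipher suite in an attack simulation's triage.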

There are multiple advantages to this style of testing, which leads Ionize to recommend it to our clients in several cases. As mentioned, the entire organisation is tested, meaning that legacy systems, or other parts of the business which usually don't have the budget for dedicated penetration testing, are captured within the security review. We are strong proponents of evidence-based testing, and focusing on high-risk findings allows testers to demonstrate impact and direct our clients' limited resources towards actions that can make a difference to their organisation's security posture. Remediation can be driven by high-impact results, which secure buy-in from the executive level, or by a clear roadmap of the weakest systems throughout the network.

Red teams allow organisations to learn a large amount about their attack surface and how their systems look from an attacker's perspective. During several engagements, we have identified servers or systems the organisation had no knowledge of. A red team will also provide insight into how much intelligence can be gathered about internal systems, organisational layout, key staff members, and much more. These assessments also test all controls holistically, rather than in isolation as with other assessment types. Essentially, it can be summed up with the question: "Attackers are not limited in the methods they use to compromise you, so why constrain your testers from discovering those methods first?"

Red teams offer great educational opportunities for the client as well, given the testers are simulating a dedicated adversary. This gives the defenders (the blue team) a chance to examine their tools and alerting to discover which malicious actions are detected and which are missed. Post-engagement debriefs offer the attacking team a chance to walk through the process step by step with defenders and offer advice on how different techniques could have been detected.

You'll notice that the benefits of a penetration test also apply to the attack simulation. Experienced testers can deal with organisational quirks and quickly make connections which automated tools will miss. Findings in reports can be triaged to ensure the removal of false positives, and the inclusion of clear reproduction steps and remediation suggestions. Some of the negatives also apply. For example, you will have to be very comfortable with the service providers you choose, as they will quite often achieve a privileged position within your network. Testing will also, once again, only provide you with a snapshot of your current security posture, which will change over time as network changes occur and new threats evolve.

Red Team / Attack Simulation – Summary

Key Themes

– Broad security assessment focused on high risk findings.
– Little to no scope which allows for realistic attack vectors.
– Assesses security from an organisational, rather than project-based, perspective.

Pros / Cons

+ Evidence based testing
+ Comprehensive test from an organisational perspective
+ Can demonstrate impact
+ Human testers can deal with organisation specific circumstances
+ Realistic assessment of criticality with supporting evidence
+ Controls are tested holistically rather than in isolation
+ Allows defenders to experience a realistic attack and fine-tune alerting and defensive tools
+ Legacy systems are tested which are typically not captured within penetration tests
+ Learn how the organisation looks from the attacker’s perspective
+ Vulnerabilities in different systems can be leveraged to compromise the organisation as a whole
+ Elimination of false positives allowing organisation to triage remediation
+ Suggestions for remediation techniques and / or code snippets
+ Scripts to reproduce vulnerabilities to test remediation
+ Known reputation and accountability for testers
+ Demonstration of low impact vulnerabilities can be chained together to identify high impact issues which need priority remediation
– A test is a point-in-time assessment, which doesn't provide assurance for the future
– If not done correctly, testing may impact production systems and cause business disruption
– Testers have access to sensitive internal systems
– Knowledge of vulnerabilities in your infrastructure is held by a third party

Bug Bounties

Bug bounties are the final style of security assessment addressed in this article. They are defined as crowd-sourcing the testing of applications, services, or networks, and paying for results on a per-vulnerability basis via reports submitted by the testers. The scope of bounties varies from company to company, but typically these assessments focus on services exposed to the internet.

Due to the nature of bounties and not having testers directly employed, it is not possible to coordinate a methodical approach ensuring all aspects of the application are tested. However, over the lifetime of the bounty, it's reasonable to say a comprehensive test will be achieved. Bounties can be more economically favourable to the company, given you only pay for valid vulnerability reports. The range of experience bug hunters bring can also benefit the company, with many having niche areas of expertise that they apply across a wide range of bounty programs. As with other assessment styles, a human reviewing the system will identify a much wider range of vulnerabilities than automated tools can discover.

There are hidden costs associated with a bounty program which may not be apparent from the outset. The largest factor is often the signal-to-noise ratio. Approximately 90% of submissions to most bug bounty programs are deemed invalid or to have little to no practical security impact [1]. In practical terms, you will need a reasonably large team of people dedicated to triaging and processing issues, around nine out of ten of which will often end up being invalid. There are many factors behind this, but most of it stems from a combination of inexperienced testers submitting a huge swath of "bugs" in the hope of being paid, combined with the race to submit as fast as possible in case a fellow bug hunter submits the same bug first and the later submission is deemed an invalid "duplicate". For some hilarious example submissions, feel free to read these [2][3][4].

The counter-argument often touted is that you can hire someone (i.e. the bug bounty platform) to validate and triage these issues and present you only the valid findings. This is possible, but in that event you are transitioning from a per-vulnerability cost to maintaining an ongoing team of contractors. Your security issues are also now being triaged by a third party (or the sub-contractors they hire) who may not have the same understanding of your business systems as your organisation, and sensitive information about your vulnerabilities is being distributed to a wider circle.

On the issue of trust, bounties obviously have issues with allowing the public at large to test the security of your systems. I'm confident that 99% of these people are honest, well intentioned, and often experienced penetration testers working for extra cash on the side. There remains a very small element who may be disgruntled with the bounty payout amount, who feel the company isn't taking an issue seriously enough [5], or who simply wish to push the rules to chase further bounty payments. Looking at Facebook's bounty program, we see incidents where bounty hunters have accessed private company data against the program rules [6] and even modified compromised server login pages to capture employee login details and explore the internal network further [7]. When motivated purely on a pay-per-bug basis, we've seen examples of hunters withholding information and leveraging vulnerabilities to enumerate more information so they may submit more and more findings, all while retaining access.

Once more there is a common counter-argument, typically surrounding private, "invite only" bounty programs. While it is true that these limit the wider public from participating, they do not provide any guarantee as to the hunters' morality, only their technical competency (private bounty invites are usually given based on a history of successful findings).

These issues with the public at large testing an application can progress beyond malicious intent into unconscious incompetence. With bounties advertised as a great way to get experience in the security industry, we've seen many inexperienced testers firing off tools blindly in the hope of discovering a finding that will result in payment. This may cause significant load on production systems and, in rare cases, business disruption. Finally, this large increase in noise can be detrimental to defenders looking for indicators of compromise, as it's very difficult to differentiate malicious activity from the ongoing noise of bounty programs.

A key differentiator of bounties is that testing is performed on an ongoing basis, rather than as the point-in-time assessment seen in other security services. This is fantastic for participants, and we believe it's one of the legitimately great things about bounty programs, as continuing security assessment is critical for all organisations.

In summary, I think bounties can be a great tool for the ongoing security of an organisation, but a high level of security maturity and resources are required before they can be pulled off effectively.

Bug Bounties – Summary

Key Themes

– Opens security testing to a large pool of testers, typically the public.
– Pays for findings on a per-vulnerability basis instead of a time basis.
– Typically used for external facing services, much rarer to assess internal services.

Pros / Cons

+ Evidence based testing
+ Can demonstrate impact, assuming scope allows
+ Testing can be cost effective, paying for results instead of effort
+ Human testers can deal with organisation specific circumstances
+ Testing is a continuing process, allowing a level of assurance throughout the program
+ A large range of testers can examine the product or service
+ Demonstration of low impact vulnerabilities can be chained together to identify high impact issues which need priority remediation
– Lower tester accountability can result in untrusted people accessing sensitive information
– A large amount of effort and organisational resource is required to triage and verify reports
– Large numbers of false positives and low-quality reports
– Inexperienced testers can impact production systems through misconfigured testing tools
– Muddies blue team indicators of malicious activity vs bug bounty activity


Disclaimer: Ionize is a provider of penetration testing and attack simulation services. While this article has done its best to remain objective, the reader should always be aware of author biases. Ionize does not offer bounty services, and as a result I've tried to provide references for any negative statements I made about bounties (there are a few). I'd also like to add this talk from the CISO of LinkedIn on bug bounties, who raises some of these points – it's worth a watch – Video