For those unfamiliar with cybersecurity team colors: Blue Teams are the defenders, and Red Teams simulate attackers, with the goal of proving those defences can be circumvented.
There are a few other colors too; the only other well-known one is the _Purple Team_, cleverly named as a mix of Blue and Red. Its perhaps less expected goal is to have the Blue and Red Teams work together, teaching one another the tools, tactics, techniques, and procedures each uses. This sharing helps both advance towards greater success in their individual goals.
Who are the Red Team?
With all the hype and popularisation of hacking, many certified "ethical white hat" hackers refer to themselves as a Red Team capability within their organisation. On rare occasions this is actually true, but most are Blue Team practitioners trying to be trendy and riding the hype train.
Red Teams are often employed to fill penetration tester, threat hunter, and vulnerability management roles. Generally none of these are unique Red Team tasks; all are common tasks for a Blue Team. Blue Teams defend their organisation, which many see as applying preventative controls to stop a threat, but the bulk, if not all, of a Blue Team's effort goes into monitoring and remediation - what information security professionals define as detective and corrective controls. The popularisation of ethical hacking has driven many advancements in reconnaissance tooling, which is directly a detective effort for a Blue Team, and one a Red Team must also perform before it can find assets and exploit them.
Who are the Blue Team?
The most effective Blue Teams have a head start - or at least they should - because they begin with full knowledge of what needs to be defended, often known as an asset inventory, a bill of materials, or simply the billing from vendors. Either way, running recon tools is not a unique Red Team task; a Red Team can skip recon entirely by acquiring the target's asset inventory and going straight to exploit development, which is the work that actually defines a Red Team.
Anyone in a Blue Team who has seen reports from the top penetration testing firms knows that maybe a couple of them deliver reports that could be useful. Most are just recon and findings. For a report to act in a Red Team capacity, the firm would be required to develop its own exploits; otherwise it is not offering any skills that a Blue Team doesn't already have.
A report contains plenty of other content and, apart from the exploits, all of it is suitable for the broader organisation. What the Blue Team requires is the exploit; everything else they can do themselves.
What exactly is an exploit?
An exploit is the procedure, often in the form of code, that is used to validate a vulnerability. Until you have a working exploit, a vulnerability is properly called a finding.
Just focus on that for a second: it is only considered a vulnerability after the exploit is used to validate a finding. This is why professional penetration testing reports only give you a findings report for most white box or grey box penetration tests. But if you engage a firm to perform a black box test, it is disingenuous for the report to contain only findings; black box reports are expected to provide vulnerabilities, and therefore every item should inherently include a working exploit. White box tests must define a scope, including assets and exclusion lists, whereas a black box test is usually void of any scope or advance target knowledge, being primarily scenario based or emulating a specific threat actor. That makes it difficult to justify the value of a black box report with mere unvalidated findings, which might be acceptable for a white box report used as evidence for compliance audits that heavily scrutinise the scoped targets.
If you skip the validation step - i.e. have no corresponding exploit to prove a finding is actually a real vulnerability - and then ingest these findings into your vulnerability management process, you are adding a lot of noise and often very few (if any) vulnerabilities to the vulnerability management workload. You will start to commit time and resources to fixing unverified findings as though they were actual vulnerabilities, which can be more damaging to the business than a vulnerability in the first place.
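This validation gate can be sketched in a few lines. The structure and field names below are hypothetical, purely for illustration: the point is that only items carrying exploit-backed proof ever reach the vulnerability management queue, and everything else stays in a validation backlog.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A reported issue; hypothetical structure for illustration."""
    identifier: str
    description: str
    exploit_validated: bool = False  # set True only once a working exploit proves it

def triage(findings: list[Finding]) -> tuple[list[Finding], list[Finding]]:
    """Split validated vulnerabilities from unvalidated findings.

    Only validated items should enter the vulnerability management queue;
    the rest return to a validation backlog, not to remediation teams.
    """
    vulnerabilities = [f for f in findings if f.exploit_validated]
    backlog = [f for f in findings if not f.exploit_validated]
    return vulnerabilities, backlog
```

The design choice worth noting is that the default is `exploit_validated=False`: an item must be explicitly promoted, never assumed to be a vulnerability.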
What is the problem with Findings?
A Finding cannot be given a Risk score; a vulnerability can, and should, be assigned a CVSS score.
Most mature businesses would want to align their Risk Management program of work to the Risk ratings they assign, based on the CVSS of each vulnerability identified and tracked in the vulnerability management strategy. Risk ratings are not the same as a CVSS score - they are very different - but that is a topic for a whole other post.
Anyone intimately familiar with CVSS knows the components that contribute to the scoring. If you're unfamiliar, just know that only a few attributes can be scored for a Finding - confidence or complexity may apply - but the attributes related to the business asset, like criticality, the likelihood of an occurrence in this environment, the confidentiality, integrity, and availability modifiers, and of course the impact if the threat is realised, are almost all impossible to assign a value to for a Finding.
Equally, a vendor or security researcher may attempt to score the CVSS attributes, but they too are unaware of your environment and assets. So beware of supplied CVSS scores; you should always rate vulnerabilities yourself.
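To make concrete why these attributes matter, here is a minimal sketch of the CVSS v3.1 base score formula for the scope-unchanged case, using the metric weights and Roundup function from the FIRST.org specification. It covers base metrics only; the environmental modifiers discussed above are exactly the parts a third party cannot fill in for you.

```python
# CVSS v3.1 base metric weights, scope unchanged (FIRST.org specification).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                         # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}              # Privileges Required
UI = {"N": 0.85, "R": 0.62}                         # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}              # C/I/A impact values

def roundup(x: float) -> float:
    """CVSS v3.1 Roundup: smallest value to 1 decimal place >= x."""
    i = round(x * 100000)
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(av: str, ac: str, pr: str, ui: str, c: str, i: str, a: str) -> float:
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # → 9.8
```

Notice that every input is a property of the vulnerability itself; nothing here captures asset criticality or business impact, which is why the same CVSS score can translate to very different Risk ratings in different organisations.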
There is a fundamental misunderstanding here of how exploits relate to the concepts of Findings versus Vulnerabilities, but it is not the fault of the organisation's Risk or Vulnerability management programs. These strategies are likely mature, frequently scrutinised, and well understood. Unfortunately they share one critical flaw that makes them ineffective: an inherent trust in the vendors that inform them. These vendors assure complete and accurate data, yet in practice barely understand what completeness represents, and therefore deliver faulty data and fundamentally compromise the programs.
How to identify a real exploit in reports?
In your reports, an occurrence with the name finding or vulnerability is usually (hopefully) followed by a procedure for you to reproduce it yourself (if not, find a better vendor).
There are two types of reproduction procedures:
Reproduces a finding: executing the procedure will force an event, or run a tool that informs you something risky was identified. These are sometimes measurements against a standard, benchmark, policy, or compliance obligation. Others report a possible CVE, which only tells you that someone has, at least once in the past, proved a vulnerability exists that looks like your finding. But that reporter did not validate the vulnerability in your environment and is not related to your vendor in any way, so the reporter who filed the CVE can't possibly know anything about the finding in your report or inform you whether you have a vulnerability or not.
Reproduces a vulnerability: usually contains both a procedure and a payload which, upon execution and resulting detonation, will produce confidential or material data. These may instead result in a compromise, elevated access, lateral movement, or the uncovering of some other vector to perform the next attack that leads to a compromise.
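The distinction between the two can be sketched in code. Everything here is hypothetical - the product name, the banner format, and the payload are invented for illustration - but the shapes differ tellingly: the finding check merely pattern-matches, while the vulnerability check detonates a payload and demands evidence back.

```python
import re

def reproduce_finding(banner: str) -> bool:
    """Finding-style reproduction: a tool merely flags something risky.

    Here, matching a service banner against a known-affected version range
    ("ExampleServer" is a made-up product). This proves nothing about
    exploitability in *your* environment.
    """
    return bool(re.search(r"ExampleServer/1\.(0|1|2)\b", banner))

def reproduce_vulnerability(send_payload) -> bool:
    """Vulnerability-style reproduction: detonate a payload and demand proof.

    `send_payload` is a hypothetical callable that delivers the exploit to
    the target and returns whatever the target disclosed, or None.
    """
    evidence = send_payload("'; SELECT secret FROM credentials; --")  # example payload
    return evidence is not None and "secret" in evidence
```

Only the second function, by returning actual disclosed material, upgrades a finding to a validated vulnerability.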
In the case of a Finding there is no exploit, so you must still validate a Finding before including it in your vulnerability management program. This practice of finding validation reduces the vulnerabilities you need to report to auditors, and hugely reduces the eventual work for other business units that flows out of typical vulnerability management programs.
As a security or risk professional: unless you have a working exploit for a vulnerability, it is not actually a vulnerability yet, and the finding may be completely benign in your environment. Before polluting your risk and vulnerability management with unvalidated Findings, ensure you have working exploits and verify each vulnerability is real.
As a self-proclaimed Red Team professional: if you are reporting only findings, you are actually a Blue Team professional. Real, honest Red Team professionals develop exploits, never detonate them without express permission from an authorised owner, and ethically disclose them. Blue Teams recon their networks and assets to identify threats and findings; they check for misconfigured services, threat hunt, and scan public places for exposures. That practice does not make you a Red Team professional.
As a penetration tester: your reports only provide findings until you actually prove your work by disclosing a reproduction procedure for the exploits that validate the vulnerabilities really exist. Stop calling these findings vulnerabilities; it is a disservice and shameful. And no, a CVE is not an exploit either - you actually need to perform a procedure that gains you further access, or breaches the confidentiality, availability, or integrity of the target.