
Guide to Application Security tools

First, what makes a good, or great, AppSec tool?

The best tools

The best AppSec tools are implemented:

  • directly in the IDE
  • as a git pre-commit hook
  • natively in the programming language's package manager
  • with configurable custom rules for your business

Apart from these, a good tool offers a Command Line Interface (CLI) with structured data outputs (e.g. JSON, CSV) and uses proper Linux signals and exit codes.
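Exit codes are what make a CLI scriptable in CI. grep is a handy illustration of the convention a good AppSec tool should follow: zero for a match (a finding), non-zero otherwise. The file name and contents below are made up for the example.

```shell
# grep follows the exit-code convention: 0 = match found, 1 = no match
printf 'password=hunter2\n' > sample.env

grep -q 'password=' sample.env
echo "secret check exit code: $?"    # prints: secret check exit code: 0

grep -q 'api_token=' sample.env
echo "token check exit code: $?"     # prints: token check exit code: 1
```

A pipeline can branch on that exit code directly, with no output parsing at all.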

A bad tool will:

  • be API only
  • be proprietary, without the ability to add customisations
  • exist only as a plugin to a CI/CD pipeline tool, not runnable locally

So that's the best, the good, and the bad; the ugly tools are ones where documentation is either non-existent, not public, public but not community driven (and therefore typically poor quality), or available only through a support system. Developers and AppSec Engineers are creative, curious, self-paced learners with a passion and willingness to contribute improvements.

Let's take a look at the categories.


Linters

A linter is essentially a code quality tool, focused mostly on syntax correctness or on rules from a programming language standard (e.g. PEP 8 for Python).
Other linters ensure the correctness of an open configuration format (e.g. Terraform), or validate that your code conforms to a security policy (e.g. conftest or the CIS Benchmarks).
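At its simplest, linting means parsing code without executing it. The shell's own -n flag is a minimal example of a syntax-correctness check (the file names here are illustrative):

```shell
# sh -n parses a script and reports syntax errors without running it
printf 'if true; then echo ok; fi\n' > good.sh
sh -n good.sh && echo "good.sh: lint passed"

printf 'if true then\n' > bad.sh
sh -n bad.sh 2>/dev/null || echo "bad.sh: lint failed"
```

Real linters layer style, correctness, and security rules on top of exactly this parse-without-running approach.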

Secrets Detection

When secrets enter the git history they can be harvested by read-only contributors or unintended team members. This is particularly an issue when a project is public, but source code can be exposed in any number of ways, unintentionally or maliciously. In large teams there are some secrets (like that stripe.com API key) that not all team members should have access to, so keeping them out of the git history, with proper secrets access management in place, is pretty important.

So ensure you choose tools that utilise git pre-commit hooks, so secrets never make it into the git history. No one wants to be the engineer whose name is on the commit of a secret that caused a breach, so use git pre-commit hooks for peace of mind that you won't be the person referred to in the news when management blames a breach on an employee.
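The check a pre-commit hook runs can be as simple as scanning the staged diff before the commit object is created. This sketch uses a deliberately naive regex for AWS access key IDs; real secret scanners ship far broader rule sets. The repo and file names are made up for the example.

```shell
# Sketch of a pre-commit secret check: scan staged changes for strings
# shaped like AWS access key IDs (the regex is illustrative only)
git init -q demo-repo 2>/dev/null && cd demo-repo
printf 'aws_key=AKIAABCDEFGHIJKLMNOP\n' > config.txt
git add config.txt

if git diff --cached -U0 | grep -qE 'AKIA[0-9A-Z]{16}'; then
  echo "possible secret staged; blocking commit"
  # an actual .git/hooks/pre-commit script would 'exit 1' here,
  # which aborts the commit before anything enters history
fi
```

Because the hook inspects only what is staged, the secret is caught before it ever becomes part of the git history.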

SCA - Software Composition Analysis

NotPetya, ASUS, MS Exchange, HashiCorp, SolarWinds (enough said?)
If you're unfamiliar with supply-chain attacks on software, or how the above high-profile breaches were possible, it's fairly straightforward to explain.

Most code you run in production is written by others: the operating system, the software running alongside your application on the server, the network infrastructure and supporting systems, the non-production systems. Even within the application itself you mostly use third-party dependencies; in comparison to all this, the code you write is an almost insignificant amount of what runs in production.

How secure is all that code? Did you do anything other than trust it is secure?

Would you plug a USB drive you found in the car park into your machine?

We all run untrusted code with blind inherent trust that it won't harm us.
In AppSec we attempt to gain a small amount of security confidence for at least the third-party dependency code we choose to integrate into our application code - using SCA.

Essentially, SCA looks for known vulnerabilities in dependencies. These are ethical disclosures from security researchers, tracked by CWE, CVE, NVD, or a number of vendor-specific security advisory databases. SCA typically will not identify vulnerabilities known only within the blackhat community - what the security industry frequently calls a 0-day (zero-day) exploit. Some tools claim they do this, but are inconsistent or offer no proof of how it is done.
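Conceptually, SCA is a join between your dependency manifest and an advisory database. Here is a toy version with flat files; the file formats are made up for illustration, though CVE-2020-8203 is a real lodash advisory.

```shell
# Toy SCA: match each "name version" dependency against a local advisory list
printf 'lodash 4.17.15\nexpress 4.18.2\n' > deps.txt
printf 'lodash 4.17.15 CVE-2020-8203\n' > advisories.txt

while read -r name version; do
  if grep -q "^$name $version " advisories.txt; then
    echo "vulnerable: $name $version"
  fi
done < deps.txt
# prints: vulnerable: lodash 4.17.15
```

Real SCA tools do the same matching against continuously updated databases, with version-range logic instead of exact string comparison.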

Checksums / Digests / Fingerprints (oh my!)

These enable users of software to verify that what they downloaded has not been accidentally or intentionally modified since the moment it was published by its developer. They are usually derived from the output of a cryptographic hash function (e.g. SHA-256), or a PGP key fingerprint is used as the form of verification.

It's not entirely difficult to do as a software publisher that produces a distributable file: a JAR for Java, a wheel or egg for Python, a gem for Ruby, an executable binary for Go, etc. Even if you distribute multiple files, you would normally want these compressed into an archive (e.g. my_software.tar.gz), which GitHub will do for you when you tag and release the source code.

Step 1

# my_software.tar.gz
sha256sum my_software.tar.gz | cut -d' ' -f1 > my_software.tar.gz.sha256sum

Step 2

Make my_software.tar.gz and my_software.tar.gz.sha256sum files available for users to download.

Step 3

As a user you may choose to trust the files, or you might verify them using:

echo "$(cat my_software.tar.gz.sha256sum)  my_software.tar.gz" | sha256sum -c
# my_software.tar.gz: OK

Programmers can script and automate this, but for a normal user it is a bit more difficult, so the Linux Foundation announced a project called sigstore to make it far easier. It will work automatically with many software distribution tools, but has chosen to start with container technology, and has also demonstrated a method using the humble curl command line utility.

The below is under construction


OSS Licences


What other tools can be used for AppSec?