
Exploiting Orphaned Webserver Files

Detect as a means to defend

The idea of this attack is to identify old, forgotten dependencies that are still reachable on the webserver and have known exploits.

Even some of the most security-conscious clients, with excellent patching practices, remain vulnerable years after they believe a vulnerability was patched.

Many competent web developers are now adopting the immutable server paradigm, which largely mitigates this type of attack: if the server is rebuilt each time the website is updated, you are unlikely to retain any old dependencies that leave you vulnerable.
A CDN (Content Delivery Network) can also be a good mitigation, since the files are not actually on your servers. A strict CSP (Content Security Policy) may not allow you to use a CDN, though; and even if it does, are all dependencies really loaded from the CDN?
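As a hypothetical example, a strict policy that only allows scripts from your own origin and one named CDN might look like this (the CDN hostname is a placeholder):

Content-Security-Policy: script-src 'self' https://cdn.example.com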

However, immutable servers, patching, and CDNs were not always applied, and this attack can expose you to threats you long believed you had avoided.

Exploit Tool Chain

The following tools will be used:

If you haven't already installed Node.js: https://github.com/nodejs/help/wiki/Installation

Same for Go: https://golang.org/doc/install

Install parallel on Debian/Ubuntu: apt install parallel
Install parallel on macOS: first get brew if you haven't already, then simply brew install parallel

Be a good GNU consumer: parallel --citation

Install retire.js: npm install --save-dev retire

Make sure the retire binary is in your PATH:

# Append the project-local node_modules/.bin directory if it isn't already in PATH
# (npm bin was removed in npm 9; use "$(npm root)/.bin" there instead)
if [[ $PATH != *$(npm bin)* ]]; then
  export PATH="$PATH:$(npm bin)"
fi

Install waybackurls: go get github.com/tomnomnom/waybackurls (on Go 1.17 and later, use go install github.com/tomnomnom/waybackurls@latest instead)

Again, make sure the binary is in your PATH:

if [[ -d "$HOME/go/bin" ]] && [[ $PATH != *"$HOME/go/bin"* ]] ; then
  export PATH="$PATH:$HOME/go/bin"
fi

Lastly, install searchsploit (part of the Exploit Database project; it comes preinstalled on Kali Linux). You should be able to run searchsploit -h once it is set up.

Gathering a list of URLs

If you're familiar with OSINT, you likely have a number of techniques for finding the active or current assets of your target; this tool can find possibly forgotten assets cached by the Wayback Machine.

For this example we will just look at JavaScript files, which are commonly used for XSS attacks, but with this method you can identify all sorts of files that carry threats.

waybackurls target.hostname.tld | grep "\.js" | sort -u >>urls_file.tmp

Now, this is just a list of URLs that might have existed at some time in the past, so let's filter it down to files that still exist:

# HEAD each URL in parallel; keep only those that still return HTTP 200
cat urls_file.tmp | \
  parallel -j50 -q curl -ILw 'Status:%{http_code}\t Size:%{size_download}\t %{url_effective}\n' -o /dev/null -sk | \
  grep 'Status:200' | \
  grep -Eo 'https?://[^ ]+' >>urls_file
rm urls_file.tmp

Now we have only valid target URLs, so let's download them (notice we only made HEAD requests above):

mkdir -p target_files; cd target_files
cat ../urls_file | xargs wget

Now simply run retire here to find out whether any of our target's old dependencies have common known vulnerabilities.
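For example, still inside the target_files directory (--path tells retire which directory to scan):

# Scan the downloaded files against retire.js's database of known-vulnerable versions
retire --path .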

Assuming you find a jQuery upload plugin in there, you can look for an exploit to launch to verify you are vulnerable with searchsploit -w jquery upload, which might show you:

blueimp's jQuery 9.22.0 - (Arbitrary) File Upload (Metasploit) | https://www.exploit-db.com/exploits/45790/
jQuery Uploadify 2.1.0 - Arbitrary File Upload                 | https://www.exploit-db.com/exploits/11218/
jQuery-File-Upload 9.22.0 - Arbitrary File Upload              | https://www.exploit-db.com/exploits/45584/

Now go to one of the links in the results to inspect the exploit.
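If you'd rather stay in the terminal, searchsploit can also show you the local copy of an exploit by its EDB-ID (the number at the end of each URL), for example:

# -x / --examine opens the exploit file for inspection
searchsploit -x 45790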

Here is the entire scan chained into one script, a sketch reconstructing the Gist from the steps above (it takes the target hostname as its first argument):
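#!/usr/bin/env bash
# Orphaned-file scan: Wayback Machine URLs -> still-live check -> download -> retire.js

# Target hostname: first argument, falling back to the placeholder used above
target="${1:-target.hostname.tld}"

# 1. Collect historical JavaScript URLs cached by the Wayback Machine
waybackurls "$target" | grep "\.js" | sort -u >>urls_file.tmp

# 2. HEAD each URL in parallel; keep only those that still return HTTP 200
cat urls_file.tmp | \
  parallel -j50 -q curl -ILw 'Status:%{http_code}\t Size:%{size_download}\t %{url_effective}\n' -o /dev/null -sk | \
  grep 'Status:200' | \
  grep -Eo 'https?://[^ ]+' >>urls_file
rm urls_file.tmp

# 3. Download the surviving files
mkdir -p target_files
cd target_files
cat ../urls_file | xargs wget

# 4. Check the downloads against retire.js's known-vulnerability database
retire --path .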

Defending

As previously discussed, you might want to implement immutable web servers, so that each patch and code update purges all unused code. This is constrained to certain types of infrastructure and might not be possible for you; an alternative is to periodically purge all assets from persistent storage and rebuild them from a known and, most importantly, current inventory, which also lets you verify that what you are putting into production is at least safe at the time.
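A rough sketch of such a periodic job, assuming a hypothetical asset_inventory.txt listing every asset that should exist and a builds/current directory holding freshly built copies:

# Wipe the web root, then redeploy only the assets named in the inventory
rm -rf /var/www/assets/*
while read -r asset; do
  install -D "builds/current/$asset" "/var/www/assets/$asset"
done < asset_inventory.txt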

Another excellent idea is to periodically perform static analysis of the assets stored persistently, not just of the next code deployment candidate. If you have any findings, cross-reference your inventory (you have one of these, right?) and risk register in case the risk has already been assessed.
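As a sketch, a nightly cron entry could run retire against what is actually being served (paths are hypothetical; --outputformat and --outputpath write a machine-readable report):

# Scan the live web root at 02:00 every night, not just the next deploy candidate
0 2 * * * retire --path /var/www/assets --outputformat json --outputpath /var/log/retire-scan.json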

Lastly, automate scanning tools similar to what is in this post. There are a lot of ways to stay ahead of attackers; at the very least, scan your own assets frequently with the common tools attackers are using, to make sure you're not the easy target.
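For instance, if you saved the consolidated script above as orphaned_scan.sh, a simple self-audit over a hypothetical our_domains.txt could look like:

# Run the same chain an attacker would, against every domain you own
while read -r domain; do
  ./orphaned_scan.sh "$domain"
done < our_domains.txt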