Interviews | May 17, 2019

Serverless Computing is Creating New Security Concerns: Checkmarx, ThreatConnect, Vectra


Maty Siman
Founder and CTO

Checkmarx

Q1. What's driving demand for interactive application security testing? What questions should you be asking when considering IAST tools for your organization?

The motivation behind the turn to IAST is a desire for improved DevSecOps: greater accuracy, speed, and efficiency. To keep up with the fast pace of releases and the speed of DevOps, organizations need accurate and automated security testing tools that can easily scale and produce actionable results. Historically, AppSec programs were characterized by the use of Static Application Security Testing (SAST) tools, which analyze the code or binary itself, and Dynamic Application Security Testing (DAST) tools, which simulate attacks to see how an application reacts. Fast forward to 2019: while SAST is able to fit fast and iterative development processes, point-in-time DAST is slow and manual, rendering it unfit for DevOps-like processes. This is where Interactive Application Security Testing (IAST) comes in.

IAST is a dynamic and continuous security testing solution that detects vulnerabilities on a running application by leveraging existing functional testing activities. IAST is designed to fit agile, DevOps and CI/CD processes. Unlike legacy DAST solutions, IAST does not introduce any delays to the software development lifecycle. Moreover, IAST requires little security training or expertise to use. Testing is continuous and automatic, significantly reducing the burden placed on developers.
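
To make the idea concrete, here is a minimal conceptual sketch (in Python, with entirely hypothetical names; this is not Checkmarx's implementation) of how runtime instrumentation can observe a dangerous data flow while ordinary functional tests exercise the application:

```python
# Minimal conceptual sketch of IAST-style runtime instrumentation. Sources mark
# untrusted input, sinks report when that input reaches them, and the findings
# fall out of ordinary functional test runs. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Tainted:
    """Wraps a value that originated from an untrusted source."""
    value: str

def from_request(value: str) -> Tainted:
    # Instrumented source: anything read from the request is marked tainted.
    return Tainted(value)

def execute_sql(query) -> None:
    # Instrumented sink: report when untrusted data arrives unsanitized.
    if isinstance(query, Tainted):
        print(f"IAST finding: untrusted input reached a SQL sink: {query.value!r}")
        return
    print("query executed")

def lookup_user(user_id: str) -> None:
    # Application code under test: forwards request input straight to the sink.
    execute_sql(from_request(user_id))

if __name__ == "__main__":
    # An ordinary functional test exercises the code path; the instrumentation
    # reports the dangerous data flow as a side effect of the test run.
    lookup_user("42 OR 1=1")
```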

Organizations considering adopting IAST tools should ask themselves questions like: Is my code being delivered quickly enough and is the final product secure? Are my teams wasting any time with the current tools we have in place? Do our current solutions provide adequate visibility into vulnerabilities within our applications and the code they interact with?

Q2. What impact do you see the rise in serverless computing having on application security testing? What, if any, new security concerns does the trend create?

In serverless computing environments, such as those offered by AWS, Azure, and Google Cloud, applications run without a managed server, but they still execute code. If that code is written in an insecure manner, it is still vulnerable to application-level attacks. For this reason, the increasing adoption of serverless computing is driving new requirements for application security testing: supported integrations with serverless computing vendors; new functionality that scans the metadata (settings, etc.) of configurations and policies in addition to the application code; more complex security debugging, since runtime analysis depends on the serverless vendors' monitoring and logging capabilities; and versioning to test all available versions of serverless functions.
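
As a rough illustration of what scanning that configuration metadata might look like, here is a minimal sketch that checks a simplified, hypothetical function configuration for a few risky settings; real scanners would of course parse each vendor's actual deployment format:

```python
# Minimal sketch of scanning serverless configuration metadata in addition to
# application code. The configuration schema is hypothetical and simplified.

def scan_function_config(config: dict) -> list[str]:
    findings = []
    # Overly broad permissions let a compromised function reach other resources.
    for statement in config.get("permissions", []):
        if statement.get("action") == "*" or statement.get("resource") == "*":
            findings.append(f"{config['name']}: wildcard permission {statement}")
    # Unauthenticated triggers expose the function to anyone on the internet.
    if config.get("trigger", {}).get("auth") == "none":
        findings.append(f"{config['name']}: HTTP trigger allows unauthenticated access")
    # Long timeouts give an attacker more room to abuse a hijacked invocation.
    if config.get("timeout_seconds", 0) > 300:
        findings.append(f"{config['name']}: unusually long timeout")
    return findings

if __name__ == "__main__":
    example = {
        "name": "process-upload",
        "permissions": [{"action": "*", "resource": "arn:aws:s3:::*"}],
        "trigger": {"type": "http", "auth": "none"},
        "timeout_seconds": 900,
    }
    for finding in scan_function_config(example):
        print(finding)
```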

While serverless computing may bring cost and productivity benefits to organizations, it also creates new security concerns. Attack and defense techniques are different in the serverless world from what we are used to in the traditional application world. There are shifts in ownership and responsibility. Organizations are now dependent on the security of their cloud vendor and responsibility for application security is now in the hands of developers—or the DevOps team—not IT.

And there are shifts in technology. In serverless, there is often a false sense of "there's no server to attack," while in fact permanent (second-order) injections are still possible. Serverless also creates the illusion of code isolation. In older generations, when applications were monoliths, it was easier to isolate each monolith into its own environment. With the introduction of microservices and serverless, almost every function of almost every application can communicate with any other function, making it challenging, and in practice impossible, to isolate each application in a separate environment. This means that a vulnerability in one application can be abused to hack into other applications running within the same environment, since the isolation barrier has been removed in serverless environments.

Putting it slightly differently: because functions from different applications are able to talk to each other, the negative side effect is that vulnerabilities can also flow between functions belonging to different applications.
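
Here is a hedged sketch of how such a permanent, second-order injection can play out between two functions; the handler names, event shapes, and shared datastore are illustrative assumptions, with sqlite3 standing in for whatever store the functions share:

```python
# Hedged sketch of a permanent (second-order) injection spanning two
# hypothetical serverless functions that share one datastore.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE profiles (user_id TEXT, display_name TEXT)")
db.execute("CREATE TABLE audit (entry TEXT)")

def save_profile_handler(event: dict) -> None:
    # Function A stores user input verbatim; nothing attacks "the server" here.
    db.execute("INSERT INTO profiles VALUES (?, ?)",
               (event["user_id"], event["display_name"]))

def report_handler(event: dict) -> None:
    # Function B later trusts the stored value and concatenates it into SQL,
    # so the payload planted through Function A fires here: second order.
    row = db.execute("SELECT display_name FROM profiles WHERE user_id = ?",
                     (event["user_id"],)).fetchone()
    db.executescript(f"INSERT INTO audit VALUES ('report for {row[0]}')")

if __name__ == "__main__":
    save_profile_handler({"user_id": "42",
                          "display_name": "x'); DROP TABLE audit; --"})
    report_handler({"user_id": "42"})
    try:
        db.execute("SELECT * FROM audit")
    except sqlite3.OperationalError as err:
        print("second-order injection succeeded:", err)   # no such table: audit
```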

Q3. What does Checkmarx plan to highlight at Black Hat USA 2019?

Checkmarx plans to highlight our Software Exposure Platform, a comprehensive software security solution that tightly integrates SAST, SCA, IAST, and developer training via a unified management and orchestration layer to address the entire software exposure lifecycle. The Checkmarx solution addresses software security from end to end, empowering organizations to move to a true DevSecOps model and deliver secure software faster.

We live in a world of digital transformation and the rate at which software is being developed is continuing to increase exponentially. Software is everywhere, creating a massive attack surface for hackers. There's never been a greater need for solutions like ours that improve software security, while meeting the needs of the modern software development landscape.

Additionally, members of Checkmarx's security research team, including myself and Erez Yalon, will return to Black Hat USA to discuss our latest research findings, including emerging IoT threats, as well as some novel attack techniques to raise awareness among security practitioners.


Andy Pendergast
Vice President of Product

ThreatConnect

Q1. Why are security playbooks important for SOCs? What are some of the most common challenges organizations face in creating these playbooks for their environments?

SecOps teams today are overwhelmed by alerts and by finding the appropriate course of action for each of them. The problem is an inability to scale triage and response and a lack of consistent processes when responding. SecOps teams need to automate complex workflows for scaled, repeatable, and faster decision making across the whole team. The most effective way to do this is with a baked-in understanding of the relevant threats they're facing and the knowledge of how to defend against them: in other words, threat intelligence. For example, data found in an active alert can be correlated with intelligence found in a historical incident, leading the analyst, or the analytic driving automated decision making, to take the appropriate course of action. Teams need to anticipate and prevent threats specific to their organization, as well as reduce the time to detect and respond to a wide range of threats by making their security operations and analysts more efficient, while providing real-time insights to security leaders to inform business decisions.
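
As a rough sketch of the kind of step a playbook automates, the following correlates indicators from an alert with a small store of historical intelligence and picks a course of action; this is not the ThreatConnect API, and all names, data, and thresholds are illustrative:

```python
# Hedged sketch of an intelligence-led triage step a playbook might automate:
# indicators from an alert are enriched against historical intel and a course
# of action is chosen. Everything here is illustrative.

KNOWN_BAD = {
    "203.0.113.7":  {"confidence": 90, "source": "historical incident IR-2018-114"},
    "198.51.100.9": {"confidence": 40, "source": "open-source feed"},
}

def triage(alert: dict) -> dict:
    # Enrich: attach intel context to every indicator we already know about.
    matches = [
        {"indicator": ioc, **KNOWN_BAD[ioc]}
        for ioc in alert.get("indicators", [])
        if ioc in KNOWN_BAD
    ]
    # Decide: map the strongest match to a consistent, repeatable action.
    score = max((m["confidence"] for m in matches), default=0)
    if score >= 80:
        action = "isolate host and open incident"
    elif score >= 50:
        action = "escalate to analyst with enrichment attached"
    else:
        action = "close as informational"
    return {"alert_id": alert["id"], "matches": matches, "action": action}

if __name__ == "__main__":
    print(triage({"id": "A-1009", "indicators": ["203.0.113.7", "10.0.0.5"]}))
```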

With ThreatConnect Playbooks, SecOps can establish consistent, repeatable, and trackable processes, and document those processes more efficiently and consistently. Metrics are another important aspect of Playbooks. ThreatConnect provides trackable metrics on time and money saved to demonstrate ROI and the value of individual Playbooks and/or the tools that have been integrated into them. All security teams want reduced workload, predictable costs, simple deployment, flexible integrations with a range of technologies, and for everyone to work cohesively in one system.

This kind of intelligence-led automation can also help reduce turnover by allowing security team members to focus less on mundane, soul-killing tasks and more on cognitively satisfying analysis. ThreatConnect was designed by analysts, but built for the entire team. From the easily accessible intelligence and analytics to the way we simplify and automate workflows, every team member benefits from using the same platform.

Q2. What do organizations need to know about automated security testing? What are the most common misconceptions people have around automated testing?

In the context of deploying and running software, automated security testing is part of your continuous development, testing, integration, and deployment cycle. Automation gives you the opportunity to build security testing directly into your development lifecycle without disruption. For automated security testing to be effective, two big things should be in place: an understanding of the objective or success criteria of the tests, and integration of the testing into your overall dev, test, integration, and deployment cycle. Know what other people, processes, and technology are involved to ensure the proper checkpoints are in place and integrations are solid.
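
To illustrate the "success criteria" point, here is a minimal sketch of a gate a CI job might run after the scanner finishes; the report format, field names, and severity threshold are assumptions rather than any particular tool's output:

```python
# Hedged sketch of a CI gate that encodes the success criteria for automated
# security testing: the build fails only when the scan report (format assumed
# here) contains findings above an agreed severity threshold.
import json
import sys

FAIL_ON = {"critical", "high"}   # agreed success criteria for this pipeline

def gate(report_path: str) -> int:
    with open(report_path) as fh:
        findings = json.load(fh).get("findings", [])
    blocking = [f for f in findings if f.get("severity", "").lower() in FAIL_ON]
    for f in blocking:
        print(f"BLOCKING: [{f['severity']}] {f.get('title', 'unnamed finding')}")
    print(f"{len(findings)} findings total, {len(blocking)} block the build")
    return 1 if blocking else 0     # nonzero exit fails the pipeline stage

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "scan-report.json"))
```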

There are no silver bullets in security. Automated security testing will not solve all your problems. For example, it won't help you if you are deploying to a poorly architected or poorly secured environment. Measure which risks it will inform and enable mitigations for, and which it won't. You will also need to revalidate that periodically as your threat landscape, risk tolerance, and environment change, and adapt your automation and tooling accordingly.

Q3. Why is it important for ThreatConnect to be at Black Hat USA 2019? Which of your products and services do you expect will garner the most attention at the event?

We introduced the ThreatConnect Platform here in 2013, so Black Hat will always be a special place for us. We see Black Hat as the conference of our peers - our true users. We can get the pulse of the industry in just a few days and get great feedback while we are here. Truthfully, we really love showing off our t-shirts, and seeing the reactions every year too.

This year we have undergone some changes to our brand and our product. We went from offering a suite of products to now offering the Platform in its entirety. Because the ThreatConnect Platform has use cases for TI, security operations, IR, and security management, we felt that all security teams should reap all the benefits from the Platform's capabilities. We look forward to hearing feedback on our new approach from guests to our booth. With your entire team and all your knowledge in one place, you will drastically improve your ability to put security data in context with intelligence and analytics, establish process consistency with playbooks, workflows and a centralized system of record, and measure the effectiveness of your organization with cross-platform analytics and customizable dashboards. ThreatConnect fuses intelligence, automation, orchestration, and response to enable organizations of any size to be more predictive, proactive, and efficient.


Chris Morales
Head of Security Analytics

Oliver Tavakoli
CTO

Vectra

Q1. Chris, how have requirements and expectations for SOCs evolved in recent years?

The intent and function of security has always been the same – to find threats inside an organization before they cause damage. Security operations can be viewed on a maturity scale of measuring how proactive an organization is in threat hunting. It is this maturity that has changed.

Historically, most organizations have been reactive in their tools and processes, primarily because being proactive requires human capital and the right technology to enable threat-hunting efforts. Investigations require a broad and specialized set of skills, including malware analysis, forensic packet and log analysis, and the correlation of massive amounts of data from a wide range of sources. Security event investigations can last hours, and a full analysis of an advanced threat can take days, weeks, or even months. Time is the most important factor in detecting network breaches. To keep key assets from being stolen or damaged, organizations must detect attackers in real time.

It is in the technology stack and its capabilities that the industry has seen a dramatic improvement in the SOC's ability to reduce these long investigation times. Historically, security products have required a significant time investment by highly skilled professionals who are adept at extracting actionable cybersecurity intelligence. For example, at Vectra we have augmented human security analysts' ability to hunt by automating the detection, triage, and prioritization of events, which helps an analyst know where to hunt.

Q2. Oliver, from a technology standpoint what are some must-have capabilities for a modern SOC?

The combination of SIEM, EDR, and NDR (Network Detection and Response) – what Anton Chuvakin at Gartner refers to as the "SOC Nuclear Triad" – is the must-have set of capabilities. What's not in this list is an organization's stationary defenses—firewalls, EPP, etc.—as those fall more into the prevention bucket and SOCs are set up to deal with everything that wasn't stopped on the first attempt.

The SIEM is often the place where data from the different elements of your security infrastructure is correlated and where several types of compliance checks can be performed. EDR is a relatively mature space that includes products from a number of different companies; this is where SOC teams look into behavior on the endpoint which, with the benefit of hindsight, should be considered suspicious and may require some form of action or remediation.

NDR is the new kid on the block. It provides overall visibility into what is going on in your network, the means to find patterns of behavior indicative of attackers who have already established a foothold inside the organization, and the ability to investigate incidents or hunt for threats based on theories you may have or on weak indicators of security issues present in your environment.
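
To give one hedged example of such a pattern, the sketch below flags internal hosts that contact a rarely seen destination at near-constant intervals (beaconing); the log fields and thresholds are illustrative assumptions, not how any specific NDR product works:

```python
# Hedged sketch of one network behavior an NDR tool might look for: internal
# hosts contacting a rarely seen destination at regular intervals (beaconing).
from collections import defaultdict
from statistics import pstdev

def find_beacons(connections, min_events=6, max_jitter=5.0, rarity=2):
    """connections: iterable of (timestamp_seconds, src_host, dst_domain)."""
    by_pair = defaultdict(list)
    dst_popularity = defaultdict(set)
    for ts, src, dst in connections:
        by_pair[(src, dst)].append(ts)
        dst_popularity[dst].add(src)

    findings = []
    for (src, dst), times in by_pair.items():
        if len(times) < min_events or len(dst_popularity[dst]) > rarity:
            continue                       # too few events, or destination is popular
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        if pstdev(gaps) <= max_jitter:     # near-constant interval looks automated
            findings.append((src, dst, round(sum(gaps) / len(gaps), 1)))
    return findings

if __name__ == "__main__":
    logs = [(t * 60, "10.1.1.23", "rare-cdn.example") for t in range(10)]
    logs += [(t * 37, "10.1.1.50", "mail.example") for t in range(4)]
    print(find_beacons(logs))   # [('10.1.1.23', 'rare-cdn.example', 60.0)]
```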

Q3. Chris, what are the biggest challenges organizations face with respect to operationalizing threat intelligence?

For many large organizations, intelligence-driven IR is now the goal. This level of incident-response maturity requires a detailed and up-to-date understanding of threat actors, including their objectives, motivation, and TTP profile. This knowledge of what attackers do is leveraged to architect controls in a manner that allows the security team to apply the appropriate response to the problem, including actions to disrupt, degrade, and deny an adversary's ability to reach its objectives.

The challenge with threat intelligence is that it lacks the environmental context of what is really happening inside enterprise environments. It is great to know that a hacker group might be targeting a specific vertical and which tools are in use, but operationalizing this data in a meaningful way requires a high level of maturity in proactive hunting and security operations. It requires an analyst skilled in threat hunting who knows how to translate threat intelligence into hunting techniques, and it requires access to the right data to correlate that intelligence with.
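
Here is a hedged sketch of that correlation step, using illustrative indicator values, log fields, and hostnames rather than real data:

```python
# Hedged sketch of correlating external threat intelligence with internal
# logs the team already collects, turning "this group uses these tools"
# into concrete hunting leads. All values and field names are illustrative.

intel_indicators = {
    "evil-updates.example": "C2 domain tied to a group targeting our vertical",
    "9f2b1c3d":             "hash of a loader associated with the same group",
}

internal_logs = [
    {"host": "wks-041", "event": "dns_query",  "value": "evil-updates.example"},
    {"host": "wks-112", "event": "dns_query",  "value": "cdn.example"},
    {"host": "srv-007", "event": "file_write", "value": "a1b2c3d4"},
]

def hunt(logs, indicators):
    # Keep only log records whose observable matches a known indicator,
    # attaching the intel context so an analyst knows why it matters.
    return [
        {**record, "why": indicators[record["value"]]}
        for record in logs
        if record["value"] in indicators
    ]

if __name__ == "__main__":
    for lead in hunt(internal_logs, intel_indicators):
        print(f"{lead['host']}: {lead['event']} -> {lead['value']} ({lead['why']})")
```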

Q4. Oliver, what do you want attendees at Black Hat USA 2019 to take away from Vectra's presence at the event?

I want Black Hat attendees to come away with a clear understanding of the role NDR plays in the "SOC triad" – in particular, what use cases or threat models it can help address and what this does to the risk posture of their organizations.

In a world of increasingly complex and distributed systems and networks, how SOCs prepare to meet modern threats is a topic that's top-of-mind for most CISOs. Black Hat brings together SOC team members, security architects and security managers and executives in the same place and facilitates a series of meaningful discussions for us that would take many flights and hotel stays to recreate.
