Interviews | July 12, 2021
A Risk Based Security Strategy is Key to Managing Tool Sprawl
Q1. What are the main takeaways for enterprise organizations from attacks like the one on the water treatment facility in Oldsmar, Florida earlier this year?
I will start by saying these attacks are not new or unique in their TTPs or in their targeting of critical infrastructure. However, they are being taken much more seriously by government officials, the media, and the public at large than in the past. Next, these attacks aren’t going away and it’s not practical or realistic to remove network access to Operational Technology (OT) systems.
Organizations need to start with basic IT hygiene like implementing and auditing password policies, vulnerability management, and patching processes. This may seem like a no-brainer not worth repeating, but it is still a huge gap for many organizations. Embracing identity and access management technologies as well as zero trust principles is also a key step alongside these hygiene basics to harden the environment, make it tougher for adversaries to gain access in the first place, and lengthen the time they need to break out and move laterally. Ultimately, organizations must be humble and acknowledge that a persistent adversary will almost certainly get through their perimeter defenses, but speed in response still counts. With that in mind, proactive threat hunting programs and pervasive visibility through technologies such as EDR are a must to prevent those incidents from turning into full-blown breaches.
Q2. What are some key requirements for effective threat hunting? What are some of the most common mistakes that security leaders make when implementing a threat hunting capability?
At a high level there are two key requirements for effective threat hunting. The first is getting the right data to hunt on. This is where most organizations tend to focus initially because it's easy to identify collection requirements, devise a plan, and execute against it. From a prioritization perspective, not all telemetry sources are equal, nor do they carry the same confidence levels. For example, network data is often extremely easy to get, but due to fragmented networks, the rise in working from home, and pervasive use of encryption, it won't be as valuable as endpoint data. Endpoint data in turn is usually the most actionable for threat hunters, but it can have gaps related to the usage of cloud services and embedded/IoT devices.
The second requirement is making sure the data is actionable. For example, where and how you store the data is critical. Collecting years of logs by sticking them in cold storage is much less valuable to your threat hunters than being able to efficiently access the last few days of activity in real time and at scale. Even more important is ensuring you have the capability to generate insights. If data sets can't be correlated into efficient hunting leads, and if those leads can't in turn be properly investigated, all you have accomplished is the creation of a data lake for log management, probably at great expense, rather than a true threat hunting capability.
Too often I see organizations focus exclusively on the first requirement and fail to properly plan for the second until after the fact. They fall into the trap of fetishizing data collection for the sake of more data rather than focusing on the correct outcome which is threat hunting and breach prevention. Instead, I recommend that organizations begin planning for both in parallel and recognize that creating meaningful insights often requires not just technology but highly skilled experts as well. The need for skilled experts will ultimately require many organizations to consider partnering with some type of managed service due to the widely publicized skill and hiring gap that exists in cybersecurity today.
Q3. What are CrowdStrike's plans at Black Hat USA 2021? What does the company have lined up for the event?
CrowdStrike is a proud titanium sponsor at Black Hat 2021. Enter the Adversary Universe at Booth #1836 for a chance to own an adversary, chat with CrowdStrike experts, and watch product demos to see our solutions in real time. We are hosting an in-person session entitled “Getting Ahead of the Ransomware Operations Lifecycle” by Jason Rivera, CrowdStrike’s Director of Strategic Threat Advisory Group. If you can’t make it in person this year, we have plenty of virtual sessions with topics ranging from Zero Trust, Cloud Security, and Modernizing Your SOC. You can find a full list of our sessions on our Black Hat event page: www.crowdstrike.com/events/blackhat2021.html
Q1. You took over as CEO at Qualys relatively recently. What are some of your immediate priorities? Where do you see the biggest growth opportunities for Qualys in the next few years?
I started my Qualys journey in 2003 as a software engineer. Qualys was in its infancy and promoting a SaaS-based platform for security, which, while it now seems visionary, was advanced for the time. This background has shaped me as a leader. I have learned a lot from my direct work with customers. Further, being responsible for the engineering and the security of Qualys influenced how I look at security problems and solutions. I know the pain that IT and security leaders face, and this inspires me as we work to deliver solutions that solve customers’ problems.
Today cybersecurity is too complex. It isn’t uncommon to see security teams managing 30 to 80 individual solutions to get the security and visibility they need. This scenario creates too many silos. One solution looks at inventory, another at prevention and others at detection, which leaves the customer to “glue” them all together into one entity.
My vision and priority for the company is to continue to build out the Qualys Cloud Platform to address the issue of agent sprawl. In addition, we strive to deliver innovation that simplifies and automates security – from a deployment and number-of-solutions point of view – and provides all the information needed within one platform to better secure hybrid environments. Our goal is to help customers address security challenges all the way from shift left in DevOps to shift right in security monitoring and response.
Q2. How has the shift to a more remote workforce environment exacerbated the IT asset visibility challenge for organizations? What should organizations be doing to address the issue?
The pandemic pushed enterprises to rethink their IT architecture and security, which caused them to accelerate their move to the cloud on all fronts. Seemingly overnight, they went from no more than 5% of their employees working outside the corporate network to a 100% remote workforce. Companies had to act quickly to figure out how to protect all the employees working from laptops on home networks at various locations around the world. They needed to know where the devices were and how to protect them.
One of the byproducts of this shift was the urgent need to know what assets were connecting to the corporate network, to secure the environment. Knowing what you have in terms of endpoints is the starting point of any cybersecurity program. And, while many companies have configuration management databases (CMDBs), they require you to manually enter assets which means that nine times out of 10, you are relying on out-of-date information.
It quickly became clear that the security teams needed different inventory information than the IT teams. For example, security needs to know things like who installed the software, how long it has been on the system, and whether it is end of life – all things that an IT team typically doesn't care about. Discovering what you have is a difficult job, and you need a network of agent-based and agentless sensors to provide this data. You need to know that the assets have basic policies in place to ensure that they aren't running things they should not.
Traditional vulnerability management solutions don’t necessarily help because the software may not have any vulnerabilities. This is why we created a way for customers to focus on asset inventory from a cybersecurity perspective. Our CyberSecurity Asset Management Solution helps customers identify all assets in real time, to develop a solid program and have confidence that all systems inside and outside the corporate network are identified. Then, they can build on this to mitigate the risk to these remote endpoints by leveraging the same cloud-based approach for vulnerability management, patch management and EDR.
Q3. What can security professionals at Black Hat USA 2021 expect from Qualys at the event? What is your main message to existing and potential customers?
In the current environment, many organizations have shied away from attending in-person events. We are excited to show support for Black Hat, our customers, and partners, through our physical presence at the show, our first in-person event in over a year. At Black Hat USA, Qualys will focus on the value we bring to customers through our innovative Qualys Cloud Platform.
Qualys will share how security teams can gain the upper hand against ransomware and other sophisticated attacks by unifying their security strategy. Through a single unified platform, Qualys helps you manage asset inventory for cybersecurity, prioritize vulnerabilities, automate remediation with zero-touch patching based on threat indicators, and take an effective multi-vector approach to detecting and responding to malicious attacks.
Qualys offers more than 20 applications running on our cloud platform that help companies of all sizes reduce their overall TCO for security and bring valuable context and insights to risk management and compliance. Stop by Booth #1437 and see how you can eliminate silos and consolidate your IT, compliance, and security stacks into a single platform to get more security with Qualys.
Q1. A recent study that ReliaQuest conducted showed that organizations only use a relatively small percentage of their installed security tools. Why is that the case? What should security leaders be doing to prevent and control tool sprawl?
There are several reasons for this. The main reason is the proliferation of tools, which causes security analysts to get bogged down in mundane, menial tasks such as administering and managing those tools. As tools increase in sophistication, it becomes increasingly difficult to optimize them to keep pace with, or stay ahead of, threats. Many purchases are made because something is new and the hope is that it will cover a gap; there isn't necessarily a well-thought-out strategy behind it. Also, many times folks buy tools so they don't lose their budgets – many orgs have a 'use it or lose it' policy that drives tactical purchases. A tool is purchased, the product vendor can help with initial implementation, and then the IT and security teams have a hard time managing and optimizing it in the long run. A lot of these security products do not integrate well with each other, and that integration work is left to either the organization or a service provider, if they have the skillset. While the product vendor might be an expert in that specific product or security area, they don't bring a holistic understanding of the organization's security environment and needs. And now the organization is stuck with ensuring the tool is integrated properly into their ecosystem, a skill that most do not have.
It is not that these tools are not effective, and organizations do not need to throw them out. It is about figuring out where they can be most effective. To do this they need a strategy based on risk – an informed strategy that considers the critical assets that need protection, how IT is looking to transform the business, where data is going to live, the risk scenarios that are of most concern, etc. By using frameworks like the NIST Cybersecurity Framework (CSF) or MITRE ATT&CK, security teams can map out the controls they would need and how to implement them. This tells them what capabilities, and hence tools, might be most effective and what is needed. It is also important to note that you don't have to staff for everything – a lot of organizations are extending their teams and amplifying their talent by outsourcing services. This brings in security expertise you don't have to build in-house while ensuring you have the skills necessary to tackle specific functions. Such a scheme also helps ensure that the tools you invest in are put to good use and optimized for efficacy.
Additionally, to overcome resource issues, investing in automation – something that has proven valuable in many areas of business and IT – can free scarce analyst teams from low-value, repetitive, mundane tasks so they can focus on high-value priorities.
Q2. What exactly is Open XDR? What's driving the need for it?
XDR stands for eXtended Detection and Response, and it purports to solve a critical problem security teams face. With so many tools that don't talk to each other, security teams must hop from tool to tool to piece together any semblance of a threat. Many times, they end up with false positives and have to start the process all over again. This process buries scarce analyst resources, who spend an inordinate amount of time collecting data, administering tools, and performing other activities that take them away from true protection functions. This puts analysts in constant fire-drill mode and security operations in a suboptimal state. Some technologies such as SIEM and EDR have matured to bring these disparate tools together but have created their own silos. They have also fallen short of their promised capabilities, leaving efficiency gaps and incomplete visibility across sources such as cloud applications or business-critical applications like SAP. The idea of Open XDR is to connect all these silos and bring integrated visibility across the security tools, business applications, or any security-relevant sources for singular, actionable situational awareness.
Open XDR is also about being vendor-agnostic, so organizations can bring any tool from any vendor into the mix and continue to invest in what they prefer. Open XDR not only drives singular visibility across the tools; it also provides a unified workbench from which analysts can manage the security lifecycle – from monitoring to detection to investigation and response.
Q3. Why is it important for ReliaQuest to be at Black Hat USA 2021? What do you want security professionals at the event to know about ReliaQuest and its strategy over the next few years?
Black Hat has always been the security professionals' conference for exchanging ideas, driving innovation, and maturing the discipline as a whole. We look forward to again being face-to-face with hundreds of our clients at one of the first hybrid conferences post-pandemic. We have a lot to share. Over the last 18 months we have been extremely focused on innovating the ReliaQuest GreyMatter cloud-native platform, which allows us to deliver our Open XDR-as-a-Service approach to security.
Additionally, we are built by security professionals – people who deliver services and support security teams in organizations around the world every day. That gives us a front-row seat to the challenges security operations teams face. This combined approach makes us a force multiplier for our customers' security operations teams as they continue to mature and improve their practices to protect their businesses. While tools and technologies are important, given the challenges in the security realm, we strongly believe that expert services in combination with technology are the right answer to progress cybersecurity and the industry as a whole.
Q1. How will Synopsys' recent acquisition of Code Dx benefit customers? What new or complementary capabilities does Code Dx bring to the table?
Code Dx provides immediate benefits to our customers through testing orchestration, issue correlation and reporting to prioritize vulnerability remediation efforts to reduce application security risk. Code Dx comes with over 75 integrations with application security testing, network security, container security and developer tools from Synopsys, third-party vendors, and open source for full coverage across all security testing activities.
With Code Dx, our customers can combine and correlate the results from all these different tools – from us and from their existing investments in other tools – for a consolidated risk view of their applications. Code Dx automatically prioritizes the most important issues from these sources in the interest of developer productivity while providing a complete picture of application security risk. We see Code Dx as a perfect extension of our recently announced Intelligent Orchestration solution for orchestrating software testing in DevOps workflows, and, like Code Dx, Intelligent Orchestration works with Synopsys and third-party tools.
Q2. What impact has DevOps and cloud transformation had on software security? What changes do you expect over the next few years?
When you look at DevOps and cloud transformation together, the most dramatic impact on software security has been that it has put security teams in very reactive, uncomfortable positions of trying to keep up with the pace of development and deployment. By itself, DevOps represents a gradual evolution in how software is built, delivered, and operated. It has had primarily a process impact on how security teams needed to think about where they should attempt to insert security testing, stage gates, or verification steps before shipping. This has been a challenging approach because the processes and methods that security teams were used to advocating in waterfall development did not adapt well to the iterative development and continuous deployment that DevOps enabled. This has led to more friction, from the perception that software security practices just slow things down when velocity is the very goal DevOps tries to achieve.
DevOps has had more of a process impact, whereas the acceleration of cloud transformation over the last 1-2 years has had an impact on the architecture, composition and very definition of applications. This has led to security teams needing to rapidly adapt their security tooling to work with these new architectures and applications composed of containers, microservices and cloud services. These dual pressures of delivery velocity and cloud architecture will require significant changes to software security approaches over the next few years with a heavy focus on automation of the testing activity, orchestration of the software security activities and machine learning prioritization applied to all the testing artifacts to try to achieve effective software security at DevOps speed. Throughout the pipeline, orchestrated security services will automatically reinforce the policy guardrails and enable risk-based vulnerability management for overburdened, under-resourced security teams that are challenged to get in front of cloud adoption.
Q3. What is Synopsys' focus at Black Hat USA 2021? What do you expect will be top-of-mind issues for your customers at the event?
Synopsys will be focusing this year on the need to automate software security at scale through intelligently orchestrating testing activity and prioritizing all of the issues effectively to enable quick, effective risk management for the software supply chain. The awareness of security threats from application vulnerabilities is at an all-time high due to recent supply chain attacks and headline-grabbing malware that exploits zero-day vulnerabilities in commercial off-the-shelf software. This has elevated the integrity and trust of the software supply chain to a top-of-mind issue for all our customers. Software security has traditionally focused only on securing the software that organizations write as custom code, and then on a combination of custom and open-source code. Now most progressive organizations see much greater risk in not understanding the source and trustworthiness of all the software that they build, buy, and run, as well as all of the third-party services that software interacts with. This increasingly complex attack surface, with ever more complicated software, requires a more holistic look at risk across the entire portfolio of applications.