Chief Security Officer
Q: As the chief security officer of a major company, what security issue or issues keeps you awake at night the most?
Stan Black: Traditional security technologies don’t cut it anymore. What keeps me awake is that threats today are bigger than just malware campaigns or spreading viruses. Today, we have to protect against the old threats as well as new and evolving ones from hacktivists, nation-state attackers, espionage, criminal enterprises; the list goes on. We have to be smarter than those trying to attack our company and do our best to stay ahead of criminal “innovation.” There isn’t a solution on the market today that solves all problems.
My job is to figure out which solutions are the right ones and which of the many attack vectors we should protect against first for our company and our customers. As an industry, we need to move away from traditional device-level, platform-specific, end-point security approaches.
CSOs and IT security organizations need to stop focusing on devices and things and every single threat these could bring to the organization, and instead focus on how to protect applications and data as they are being used, as they cross the network and while they’re stored. My rule of thumb is to focus on five buckets: the security of apps, data and the network; identity and access management; and monitoring and response. Keeping tabs on these areas by setting up, enforcing and monitoring smart policies will allow your organization to filter out the noise and focus on the real threats and bad actors that are trying to prevent access to your data or steal it for their own use or sale.
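The five buckets above can be treated as a simple coverage checklist. The sketch below is purely illustrative (the control inventory and bucket names are hypothetical, not Citrix terminology): it maps deployed controls to buckets and reports the ones left uncovered, so policy work can be prioritized.

```python
# The five focus areas named above, as a toy policy-coverage check.
BUCKETS = ["apps", "data", "network", "identity_access", "monitoring_response"]

# Hypothetical inventory: which bucket each deployed control covers.
controls = {
    "app-whitelisting": "apps",
    "disk-encryption": "data",
    "tls-everywhere": "network",
    "mfa": "identity_access",
}

def coverage_gaps(controls, buckets):
    """Return buckets with no deployed control, so gaps can be prioritized."""
    covered = set(controls.values())
    return [b for b in buckets if b not in covered]

print(coverage_gaps(controls, BUCKETS))  # -> ['monitoring_response']
```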
Q: What are some of the key considerations that enterprises need to keep in mind when securing their Citrix XenApp and XenDesktop environments?
Black: Enterprises should always stay up-to-date on security patches and hardening guidance. For XenApp and XenDesktop specifically, we recommend setting up policies to block jailbroken applications or environments, hardening your network to deter network boundary jumping, and being consistent in your defensive measures. As with any product, we also recommend reviewing configurations for logging and alerts to be sure you’re focusing on the right data sets or anomalies. Defining “known goods” and behaviors in your system will help further mitigate risk.
Q: As a Black Hat USA Platinum Plus sponsor, what does Citrix hope to accomplish at Black Hat this year? What is it you want attendees to take away from Citrix’s presence at the event?
Black: We hope that the Black Hat audience will come away with a better understanding of our role in the security space. No one can completely prevent attacks from happening, but our goal is to protect sensitive applications and data from loss and theft in all stages: at rest, in use, and in motion on devices, servers, networks or the cloud. Ultimately, Citrix reduces risk and makes it possible for people to do their jobs from anywhere, on the device, platform and network they want.
We make it possible for IT professionals to move away from traditional device-level, platform-specific, end-point security approaches that have historically under-delivered to new models that solve specific security challenges.
President and CEO
Q: At a time when many cybersecurity vendors are focused on detection and response, Cylance is among the handful of firms still advocating a proactive threat prevention approach. Why is that? What is it about your technology that is different from traditional perimeter-based intrusion prevention products?
Stuart McClure: As cyber attacks have multiplied and mutated, and as more and more have succeeded in getting past perimeter defenses, the security industry seems to have given up on the idea of prevention. My Cylance co-founder Ryan Permeh and I spent years working on endpoint protection tools and came to the realization that the only way to effectively prevent malware from deploying was to create a completely independent conviction engine that would work prior to execution. Of course that conviction engine would have to be dramatically more effective than traditional anti-malware technologies, and that’s why we’ve created an artificial intelligence approach that leverages math, machine learning, and our own knowledge of how malware works.
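The general idea of a pre-execution conviction engine can be illustrated with a toy sketch. This is not Cylance’s model; it is a minimal, assumed example where static features (entropy, a PE header check, size) are computed without running the sample and scored by a hand-set linear model. A real engine would train on millions of labeled samples.

```python
import math

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0-8.0)."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def extract_features(data: bytes) -> dict:
    """Static features computed without ever executing the sample."""
    return {
        "entropy": byte_entropy(data),
        "size_kb": len(data) / 1024,
        "has_mz_header": data[:2] == b"MZ",  # PE executable magic bytes
    }

def convict(features: dict, weights: dict, bias: float) -> bool:
    """Toy linear model: positive score means 'convict' (block pre-execution)."""
    score = bias + sum(weights[k] * float(features[k]) for k in weights)
    return score > 0.0

# Illustrative weights only; a real model would be trained, not hand-tuned.
WEIGHTS = {"entropy": 0.9, "has_mz_header": 2.0, "size_kb": 0.0}
BIAS = -7.0

packed_like = b"MZ" + bytes(range(256)) * 8   # high-entropy, PE-like payload
plain_text = b"hello world " * 100            # low-entropy benign text

print(convict(extract_features(packed_like), WEIGHTS, BIAS))  # convicted
print(convict(extract_features(plain_text), WEIGHTS, BIAS))   # allowed
```

Because the decision uses only static features, the verdict is produced before any instruction of the sample runs, which is the property the answer above emphasizes.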
Q: You are a big proponent of applying machine learning in the cybersecurity context. How exactly does machine learning help bolster cyber security?
McClure: Security is a numbers game today. We have incredibly skilled incident responders and security analysts who are utterly outnumbered and outgunned by threat actors with deep pockets who are targeting the private sector with unprecedented force. The only way to level the playing field is by employing artificial intelligence that works much faster, and more independently and silently, to prevent cyber attacks than even the largest and most talented security team can. Our algorithm works within milliseconds to convict malware without referencing the cloud and without letting the malware execute so it can be studied while damage is being done. We effectively prevent fires proactively so that security teams have time to do the arson investigation: answering how the malware made it past the perimeter and what it intended to do next had it been allowed to execute. This allows experts the rare luxury of focus.
Q: What do you want attendees at Black Hat USA 2016 to know about your company’s approach to cyber security?
McClure: Our approach is simple. Cylance technology does not use signatures. CylancePROTECT is a completely math- and AI-based solution that runs invisibly on endpoints of every type. Our Cylance research labs will continue to study the latest malware types and apply those findings to the continuous evolution of our mathematical model. When it comes to anti-malware vendor claims, we encourage people to know the truth and test for themselves.
Chief Security Officer
Q: Fidelis has recently talked about how its technology can help enterprises create a world where cyber attackers have no place to hide. What exactly does that involve from a technology and process standpoint?
Justin Harvey: It really requires being able to see everything that happens on the network and on the endpoint. One might think that this means recording every single packet that traverses the network or recording every single byte that is processed on an endpoint, and that’s where we’re different. Fidelis has been pioneering the concept of “rich metadata” collection and analysis for quite some time. It becomes too costly to collect and analyze vast amounts of full packets. Our approach is to stream each network flow into memory, assemble it into a session and collect metadata not only at the protocol layer but at the application layer as well. From a process standpoint, it’s critical for organizations to realize that we don’t live in a “prevention” age any more. You cannot simply install firewalls, IDS/IPS and anti-virus solutions and be assured that your organization won’t be breached. The fact of the matter is, in the breaches we’re seeing today, threat actors slice through preventative systems like a hot knife through butter. In the absence of a fully preventative solution, what’s the answer? Detection. Moving from a prevention mindset to finding threats faster on the network and endpoints is the only answer.
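The session-assembly-plus-metadata idea can be sketched in a few lines. This is an assumed, simplified illustration (the hard-coded packets, flow keys and field names are hypothetical, not Fidelis’s format): payloads are grouped into a session, then reduced to compact protocol-layer and application-layer metadata instead of a full packet capture.

```python
from collections import defaultdict

# Hypothetical packet records: (flow_key, payload_bytes). In practice these
# would come from a capture interface; here they are hard-coded to illustrate.
packets = [
    (("10.0.0.5", "93.184.216.34", 6), b"GET /index.html HTTP/1.1\r\n"),
    (("10.0.0.5", "93.184.216.34", 6), b"Host: example.com\r\n\r\n"),
]

def assemble_sessions(packets):
    """Group packets by flow key and concatenate payloads into one session."""
    sessions = defaultdict(bytearray)
    for key, payload in packets:
        sessions[key].extend(payload)
    return sessions

def session_metadata(key, data: bytes) -> dict:
    """Keep compact metadata rather than storing the full packet capture."""
    src, dst, proto = key
    meta = {"src": src, "dst": dst, "proto": proto, "bytes": len(data)}
    # Application-layer enrichment: parse an HTTP request line and Host header.
    text = data.decode("ascii", errors="replace")
    first_line = text.split("\r\n", 1)[0]
    if " HTTP/" in first_line:
        meta["http_method"], meta["http_path"] = first_line.split(" ")[:2]
        for line in text.split("\r\n"):
            if line.lower().startswith("host:"):
                meta["http_host"] = line.split(":", 1)[1].strip()
    return meta

records = [session_metadata(k, bytes(v)) for k, v in assemble_sessions(packets).items()]
print(records[0])
```

The point of the design is the data reduction: a 47-byte HTTP session here becomes a handful of searchable fields, which is what makes analysis at scale affordable compared to full-packet retention.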
Q: What kind of skills and technology resources are really required to build and operate a security operations center (SOC) that is robust enough to deal with current and emerging threats?
Harvey: From a technology standpoint, being able to automate the trace back and recording from a network and endpoint standpoint is critical. Many security operations centers are too focused on manual alert remediation, and this falls into the category of moving from a prevention process flow to a detection (detect, dig, destroy) mindset.
Targeted threats don’t always create obvious alerts like, ‘Network Attack Detected’. These attacks do, however, leave breadcrumbs behind: telltale signs that an attack is happening or has happened in the past. This is where a SOC’s resources should be looking for threats, and this is why it is so critical to be able to automate the menial jobs of a SOC, like investigating the multitude of false positives received in the SIEM. Fidelis is focused on orchestrating and automating as much of the menial work as possible, so that SOCs can stay focused on finding threats.
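One common form of this automation is baselining: alerts that match behavior analysts have already vetted are auto-closed, so only novel activity reaches a human. The sketch below is a minimal assumed example (the alert fields and known-good tuples are hypothetical, not any vendor’s schema).

```python
# Hypothetical SIEM alerts; in practice these would come from the SIEM's API.
alerts = [
    {"id": 1, "rule": "outbound-beacon", "process": "backup_agent.exe"},
    {"id": 2, "rule": "outbound-beacon", "process": "svch0st.exe"},
    {"id": 3, "rule": "admin-login", "process": "sshd"},
]

# "Known goods": behaviors vetted once by analysts, then auto-closed thereafter.
KNOWN_GOOD = {
    ("outbound-beacon", "backup_agent.exe"),
    ("admin-login", "sshd"),
}

def triage(alerts, known_good):
    """Auto-close alerts matching the known-good baseline; escalate the rest."""
    escalated, auto_closed = [], []
    for alert in alerts:
        key = (alert["rule"], alert["process"])
        (auto_closed if key in known_good else escalated).append(alert)
    return escalated, auto_closed

escalated, auto_closed = triage(alerts, KNOWN_GOOD)
print([a["id"] for a in escalated])  # only the novel alert reaches an analyst
```

Here the typo-squatted process name (`svch0st.exe`) is the only alert that survives triage, which is exactly the breadcrumb-style anomaly the paragraph above says analysts should be spending their time on.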
Q: Why is being at Black Hat USA important for Fidelis? What is your message for attendees at the event this year?
Harvey: Black Hat USA is so important for Fidelis because it’s the only major conference focused on “the people”—the men and women who are dedicated to cybersecurity research, development, defense, innovation and response. Fidelis sees Black Hat USA as a great venue to reach like-minded security professionals who are devoted to defending their organizations from threats and to share our collective expertise and research. For us, Black Hat USA is not just about selling our wares, but learning and gathering new ideas from attendees and the sessions we attend.
Security Intelligence Lead
Q: Lockheed Martin was the first to pioneer the concept of a cyber kill chain in the information security context. How is it different from, or how does it build on, a traditional perimeter-focused enterprise defense strategy?
Justin Lachesky: When we talk about an enterprise defense strategy, for Lockheed Martin that means Intelligence Driven Defense (IDD). IDD is, in simplest terms, using intelligence to inform our decision-making. The Cyber Kill Chain (CKC) is a critical cornerstone of this strategy, as it provides the framework that analysts need to truly understand the attacks and attackers threatening the enterprise.
When we look at this tandem, there are three critical differences compared to a traditional perimeter-focused enterprise defense strategy:
- Threat-focused. It looks at the problem from the perspective of the attacker’s actions. All of the traditional best practices are in terms of “what should a defender be doing?” But you can’t succeed without understanding the threat, and the CKC gives defenders and analysts a means to do exactly that.
- Redefines defense-in-depth. The CKC allows defense-in-depth to be defined in a more impactful way by looking at defense across the entire lifecycle of the attack. It’s not just about having more devices at the perimeter, it’s about having visibility, detections, and mitigations across the entire CKC, which spans from the perimeter to the endpoint and back again.
- Changes the paradigm. We’ve always been told, “the defender needs to be right every time, but the attacker only needs to be right once.” That’s a pretty bleak outlook, and one that we didn’t like. The CKC reframes attacker activity as sequential steps, meaning that defenders only need to be right once to break the chain. This can shift the advantage back to defenders if we can understand each step and exploit that understanding defensively.
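The "right once to break the chain" point follows directly from modeling the attack as sequential stages. The sketch below is an illustrative toy (the outcome strings are invented), using the seven CKC stage names: detecting or mitigating any single stage stops everything downstream.

```python
# The seven Cyber Kill Chain stages, in order.
KILL_CHAIN = [
    "reconnaissance", "weaponization", "delivery", "exploitation",
    "installation", "command-and-control", "actions-on-objectives",
]

def attack_outcome(detected_stages: set) -> str:
    """An intrusion succeeds only if every stage completes undetected.
    Breaking any single link stops all stages downstream of it."""
    for stage in KILL_CHAIN:
        if stage in detected_stages:
            return f"broken at {stage}"
    return "attack succeeded"

# Mitigating just one stage (e.g. blocking the delivery email) breaks the chain.
print(attack_outcome({"delivery"}))  # -> broken at delivery
print(attack_outcome(set()))         # -> attack succeeded
```

The asymmetry reversal is visible in the model: the attacker must pass all seven checks, while the defender needs a detection at only one of them.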
Q: What are the biggest challenges that organizations face when it comes to fighting Advanced Persistent Threats (APTs)?
Lachesky: There are a lot of challenges facing organizations when it comes to defending against any threat, let alone Advanced Persistent Threats. We see some common themes across industries and threats, as well as some unique aspects of APTs that contribute to additional challenges.
As defenders, some of the biggest challenges we face stem from our own capabilities and operating environments. If we can’t see what an attacker is doing, we can’t begin to understand or defend against it. If we don’t have a way of consistently analyzing and understanding the attacker’s actions, we can’t learn from them and use them to our advantage. If we lack the authority to effect change in our enterprise, we can’t use what we learn to defend ourselves. Many of the other challenges we see are rooted in these three or are symptoms of them.
These challenges are further amplified when we think of them in the context of defending against APTs. These actors are operating with the same mission-focused approach that we use as defenders. They are well equipped and motivated, which means they can be innovative and resilient and adapt to changes in the defensive landscape. In other words, they can react to the things we do as defenders the same way we react to the things they do as attackers. In practical terms, this means we must continually learn, adapt and advance in order to defend against APTs. The adversary is not static, so we must be active in our defense.
Q: Lockheed Martin open sourced its Laika BOSS malware detection platform at the Black Hat conference last year. Has that move accelerated innovation around the technology in the manner you expected? What do attendees at this year’s Black Hat USA need to know about threat intelligence management?
Lachesky: When we open sourced LaikaBOSS, we weren’t really sure what to expect, but we were optimistic. We’ve been amazed at the reception from the community: over 400 GitHub accounts are “watching” the project, which shows there’s a lot of interest. Even more exciting are the numerous contributions we’ve gotten to the project from the community. Seeing others using LaikaBOSS in new and innovative ways and contributing back to the community is exactly what we hoped to foster when we open sourced the project. In fact, we’ve been so encouraged by the response from the community that we’ve open sourced our milter server as well. This provides integration between email delivery systems and LaikaBOSS, enabling teams to go from lab-developed detections to real-world active defense, driving more adoption and further innovation.
In terms of threat intelligence management, that’s something we view as an invaluable enabler for an effective defensive strategy. Being able to develop, then adequately capture, store, and apply threat intelligence is critical. A lot of times we see an almost exclusive focus on external threat intelligence, but threat intelligence management also needs to cover internal threat intelligence created through defensive operations and analysis. It’s also important to think about it in terms of the underlying analyst tradecraft: how threats are analyzed and understood. In fact, the criticality of this type of capability is what led us to develop a commercially available threat intelligence platform we call Palisade. Regardless of the tool you’re using, threat intelligence management is a critical function for successfully executing any active defense strategy.
Senior Director of Cyber Security Services
Q: What are some of the common mistakes that companies make when incorporating threat intelligence into their cybersecurity programs?
Clint Sand: Symantec’s MSS and DeepSight intelligence analysts and Incident Response teams work with hundreds of customers a day around the globe to help them stay informed of what’s coming, detect when it’s happening, and respond with precision. In these interactions we often note that while most organizations are able to collect and consume threat intelligence in their programs, their ability to apply what they’ve consumed to stay ahead of threats is limited. We often observe organizations confusing data with intelligence and quantity with quality, applying threat intelligence only to tactical uses, and ignoring warnings associated with evidence of compromise.
To avoid these common mistakes, Symantec recommends that organizations:
- Learn to interpret threat intelligence with business context.
- Partner with a threat intelligence vendor that has a comprehensive view, leveraging telemetry data to analyze a threat landscape that goes beyond a single attacker profile like “APT”.
- Ensure that lessons-learned analysis includes understanding what threat intelligence contributed to the resolution of an incident, and what clues were available within your threat intelligence program that, if properly leveraged, might have prevented the incident or helped reduce the scope of damage.
- Implement “Incident Peer Review” during the triage process. Whether organizations run their own SOC or outsource to an MSSP, implementing a peer review process that puts a second set of eyes on a potential problem enables the organization to see the same data differently.
Q: Symantec recently released its Internet Security Threat Report for 2016. What, in your opinion, is the most significant takeaway for enterprises in it?
Sand: Symantec’s Internet Security Threat Report (ISTR), Volume 21, revealed an organizational shift by cybercriminals: they are adopting corporate best practices and establishing professional businesses in order to increase the efficiency of their attacks against enterprises. This new class of professional cybercriminal spans the entire ecosystem of attackers, extending the reach of enterprise threats and fueling the growth of online crime.
Data breaches continue to impact the enterprise. In fact, large businesses that are targeted for attack will on average be targeted three more times within the year. Additionally, we saw the largest data breach ever publicly reported last year, with 191 million records compromised in a single incident. There was also a record-setting total of nine reported mega-breaches. While 429 million identities were exposed, the number of companies that chose not to report the number of records lost jumped by 85 percent. A conservative estimate by Symantec of those unreported breaches pushes the real number of records lost to more than half a billion.
Q: What is Symantec’s primary focus going to be at Black Hat USA and why?
Sand: This year, Symantec’s focus at Black Hat is to engage with and inform attendees about our enterprise security solutions, research and broad security expertise. Symantec is focusing on organic innovation and has a robust portfolio of enterprise solutions to help companies achieve a higher level of security. We’re proud to be helping organizations around the world take control of their infrastructure, enhance their cybersecurity skills and stay ahead of advanced threats.