Q: Despite all the money spent on security, many organizations still have a very hard time detecting breaches in a timely fashion. What is it they are failing to do or are not doing enough of?
Paul Martini: I think organizations are still focusing heavily on preventive technology, building higher walls or better mousetraps, so that when a new technology is introduced everyone jumps on it. But as we all know by now, nothing is completely foolproof, which is why we hear of these massive data losses after a breach.
Today, after a major data breach occurs, we are no longer surprised to discover that gigabytes and even terabytes of data have been stolen. Some of these breaches have gone on for months without discovery, even years, which is hard to believe. This window between infection and detection, referred to as “dwell time,” is clearly the most critical factor when it comes to detecting breaches. It exists primarily because organizations are putting too much emphasis on one area, prevention, at the expense of a more balanced security approach.
As a first step to a more robust defense against data loss, organizations need to accept the fact that they will get infected. It’s just not realistic to assume otherwise. And when an organization does get compromised, the faster they can detect the active infection and shorten dwell time, the greater their chances are to minimize the data loss.
Dwell time will never be zero, which is why focusing on your data and proactively monitoring all your traffic is so important. Behavioral analytics technology that monitors and analyzes your outbound data looking for anomalies can narrow this security gap by shortening dwell time, giving companies a significant advantage.
Imagine the difference between losing a few hundred records and losing 10 million. The more progressive organizations are pursuing technology that can find active infections on the network, identify where they are located, determine what happened, and so on. And the even more progressive ones are taking it a step further by focusing on the data itself: proactively monitoring it, knowing where it’s moving and how much is being transferred.
Q: iboss has positioned itself as one of the few companies that can help enterprises detect and stop breaches before loss occurs. What is it about your company’s node-based cybersecurity architecture that enables this capability?
Martini: iboss differentiates itself from other security vendors in a number of ways, beginning with our node-based architecture. Standard cloud security is based on a monolithic architecture, where data from multiple organizations is shared in the cloud. We recognized disadvantages with this approach because a breach affecting one customer could migrate to other organizations’ data in the cloud, creating vulnerabilities we thought could be avoided with a node-based design.
In a node-based architecture, nodes are isolated by OS boundaries. Each node is self-contained with its own OS, memory, and processing: everything it needs to function without mixing an organization’s data with any other’s. Each organization gets its own node collection, so that even if something happens to one customer, others aren’t impacted.
Nodes deliver all iboss features and can exist anywhere, on-premises or in the iboss cloud, so customers can have any configuration they require. For instance, you could keep your data within your own datacenter, securing users on-premises when they are at corporate headquarters and then via the cloud when they are remote or roaming, without having to proxy data back to corporate or deploy hardware.
The second differentiator is iboss’ advanced threat defense, which detects data breaches and reduces data loss with behavioral DLP. This technology leverages our visibility across the full inbound and outbound Web stream, continuously monitoring and analyzing against a network baseline looking for anomalies. It is watching the data to see where it moves and how much data is moving, looking for suspicious transfers. Once a problem is detected, the data transfer is automatically stopped, giving you time to remediate the problem.
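The baselining approach Martini describes can be illustrated with a minimal sketch. This is not iboss code: the host names, byte counts, and the three-sigma threshold below are all hypothetical, and a real product would use far richer behavioral models.

```python
from statistics import mean, stdev

def build_baseline(history):
    """Per-host baseline (mean, standard deviation) of past outbound byte counts."""
    return {host: (mean(vals), stdev(vals)) for host, vals in history.items()}

def is_anomalous(host, outbound_bytes, baseline, threshold=3.0):
    """Flag a transfer that deviates more than `threshold` standard
    deviations above the host's historical outbound volume."""
    mu, sigma = baseline[host]
    if sigma == 0:
        return outbound_bytes > mu
    return (outbound_bytes - mu) / sigma > threshold

# Hourly outbound byte counts previously observed for a host (made-up data).
history = {"db-server": [1200, 1100, 1300, 1250, 1150]}
baseline = build_baseline(history)

print(is_anomalous("db-server", 1280, baseline))     # typical volume -> False
print(is_anomalous("db-server", 500_000, baseline))  # sudden bulk transfer -> True
```

In the spirit of the interview, a production system would not merely report the second case but automatically halt the transfer while analysts remediate.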
We also have a cyber threat score feature that analyzes the threat intelligence gathered in our incident response center. [It] applies the same algorithms used by the world’s biggest banks to determine credit risk, and issues a threat score. This speeds response time because security analysts know which incidents to pursue first. These features combine to give organizations the tools they need to find breaches faster, minimize dwell time and thus reduce data loss.
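The scorecard idea Martini compares to credit-risk models can be sketched as a weighted sum over risk factors. The factor names and weights here are entirely hypothetical and chosen only to show how a score orders a triage queue.

```python
# Hypothetical risk factors and weights, loosely modeled on a credit-risk
# scorecard: each factor present in an incident adds to a 0-100 threat score.
WEIGHTS = {
    "known_malware_family": 40,
    "outbound_to_blacklisted_ip": 30,
    "off_hours_activity": 15,
    "privileged_account_involved": 15,
}

def threat_score(incident):
    """Sum the weights of the risk factors present in the incident record."""
    return sum(w for factor, w in WEIGHTS.items() if incident.get(factor))

incidents = [
    {"id": "A", "known_malware_family": True, "off_hours_activity": True},
    {"id": "B", "outbound_to_blacklisted_ip": True},
]

# Triage queue: highest score first, so analysts know which incident to pursue.
for inc in sorted(incidents, key=threat_score, reverse=True):
    print(inc["id"], threat_score(inc))
```

The point of the score is exactly the one made above: it tells analysts which incidents to pursue first, shortening response time.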
Q: What do you want attendees at Black Hat Europe 2016 to take away from your presence at the show?
Martini: We want them to take away a couple of things. First, we’d like them to leave with an understanding of the iboss node-based architecture and the advantages it offers, and of how its flexibility can give even the most cloud-averse organizations cloud security on their terms. It’s also a very fluid way for organizations to transition to the cloud because nodes can exist anywhere and can be created or destroyed instantly. You can have nodes on-premises and if you want to transition to cloud nodes, it’s literally just a click of a button, because nodes are synchronized across your enterprise.
We also hope they will learn how committed we are to our customers and to advancing cybersecurity technology. It’s no accident that we have a renewal rate of over 98%.
Our customers are loyal because we continue to maintain an aggressive development schedule in our efforts to keep their organizations secure. It’s become almost cliché to say that cybersecurity vendors fight to stay ahead of criminal hackers, but it’s true. We realize we are in a cyberwar with a very dedicated and sophisticated adversary and we don’t take our commitment to security lightly.
VP and CSO, EMEA
Palo Alto Networks
Q: Palo Alto Networks recently launched a new cloud-based threat analysis and prevention service called WildFire for your European clients. What do you want customers to know about the service?
Greg Day: WildFire is a cloud-based malware analysis and prevention environment which provides granular and coordinated threat analysis for all traffic and attack vectors across thousands of applications, including web traffic, email protocols (i.e., SMTP, IMAP, POP) and FTP, regardless of location in the organization, ports, or deception techniques, such as hiding behind encryption (SSL).
WildFire automatically creates protections against new threats – delivering them to all subscribers of the service within as little as five minutes – to help organizations prevent cyber breaches. In late August, we announced that customers in Europe [could] now benefit from the power of Palo Alto Networks WildFire cloud-based threat analysis and prevention capabilities from a data center located in the Netherlands. With local resiliency built-in, this helps European organizations meet their data privacy needs via the WildFire EU cloud.
With the launch of the new EU WildFire data center, customers’ submissions can now remain in the EU, where they are processed by EU-contracted staff. This means organizations still gain the value of global threat prevention from one of the world’s largest unknown-threat analysis networks, whilst the detection capabilities from the in-region analysis allow global customers to gain the benefits of the blocking controls that are generated through WildFire Threat Intelligence.
This delivers a regional solution that meets customers’ data usage requirements but still provides global protection against today’s latest threats. The more we – ‘the good guys’ – collaborate, the more we can out-crowdsource the relatively small collection of ‘bad guys’.
Q: Now that the European Union General Data Protection Regulation has been formally published what should enterprise security executives be doing to prepare for compliance?
Day: In our recently published research, completed in collaboration with IDC, there’s a paradox around the state-of-the-art requirements of the GDPR and the NIS Directive. Many believe they will be prepared for May 2018, but beneath the surface too many have yet to understand the details, let alone how this legislation will impact their cybersecurity strategies.
Executives must seek guidance from their trusted cybersecurity advisors, their data protection officer (if they have one; if not, they will probably need to appoint one now) and their legal teams. The core challenge lies in understanding how they will measure and qualify risk. If you can’t qualify this for your own business, how will you justify it to auditors? They must realize this is an ongoing process that should be reviewed every three to six months.
New to the regulation is the need to notify when attacks have taken place. Notification requires discovery and the right best practices to pull together the required information. Businesses should consider whether this is done in-house or through external sources. Core to this legislation is focusing on the right goal: by achieving the relevant state-of-the-art cybersecurity capabilities, the requirement to notify should be occasional, not constant. Incident response is like a fire brigade – it’s only as effective as the drills, which keep relevant business teams prepared.
Purposefully I have left the penalties to last – the backstop to compliance. Good breach prevention security practice can reduce repercussions of shocking headlines. Often, it’s the brand damage that’s the biggest business cost but this can be managed by a well-established business response strategy. This new legislation is gaining executive visibility and its requirements provide a rare opportunity for businesses to take stock of their current cybersecurity capabilities. Organizations need to verify that their fundamental principles around cybersecurity still apply to the current digital landscape.
Q: As a Platinum Sponsor of Black Hat Europe 2016 what is your main messaging going to be, at the event?
Day: Our main messaging is around how traditional antivirus is no longer the solution to endpoint security – it’s the problem. AV can no longer stop today’s threats. We’ve recently announced some updates to our Traps advanced endpoint security, which replaces AV with “multi-method prevention” – a proprietary combination of malware and exploit prevention methods that pre-emptively block both known and unknown threats, and ransomware.
VP of Strategy
Tenable Network Security
Q: Why do organizations continue to have such a hard time measuring their return on security spending?
Matt Alderman: As an industry, we don’t truly understand security risk. We talk about security risks, but we don’t tie them back to business risks, which would help us quantify them. And only by truly understanding security risk can we measure or justify security spending. This has been the longstanding problem: we don’t align security risk to business risks. We should learn from other industries, as they have figured this out.
Let’s use life insurance as an example. What’s the return on your life insurance policy? It can only be measured when something happens, but we buy it to mitigate risks in our life. We pay for a level of protection that we are comfortable with to support our family in the case of death. The life insurance company, on the other hand, calculates the risk of death based on your personal risk factors and determines a premium to insure you. We then decide if it’s a reasonable price to provide the level of protection we wanted.
The same holds true for security. Security is like an insurance policy. We have to determine the risk factors to mitigate and decide if it’s a reasonable price to spend.
Q: What do enterprises need to know about using metrics to better understand and mitigate the security threats and exposures they face?
Alderman: First, let’s start with a definition: Metrics are quantifiable measures to track performance. In this case, it’s the performance of our security program. But the security metrics we track need to align with the business objectives of the organization, which is where most organizations struggle. If your security metrics cannot be communicated to business owners in a language they understand, your message will be lost. Therefore, focus security metrics on two key elements: 1) aligning them to business objectives and 2) defining good metrics.
The first element, aligning them to business objectives, uses a standard risk assessment process. Based on your business objectives, identify the key control activities that need to be monitored. This will allow you to map your metrics to controls that have been mandated by policy to meet your objectives. In short, business objectives drive the control activities you monitor, and each metric maps back to one of those controls.
The second element is defining good metrics for each control. Here, focus on the SANS criteria known as SMART. Each metric should be:
- Specific: Targeted to the area being measured, not a byproduct or result
- Measurable: Data can be collected that is accurate
- Actionable: Easy to understand the data and take action on it
- Relevant: Measure what’s important with the data
- Timely: The data is available when you need it
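As an illustration of the SMART criteria above, here is one way a hypothetical metric, the percentage of critical vulnerabilities remediated within an SLA, could be computed. The 30-day window and the vulnerability records are made up for the example.

```python
from datetime import date, timedelta

SLA = timedelta(days=30)  # hypothetical remediation window mandated by policy

def remediation_metric(vulns):
    """Percent of critical vulnerabilities fixed within the SLA window.
    Specific (one control), Measurable (simple counts), Actionable
    (traceable per vulnerability), Relevant (critical severity only),
    Timely (computable from current ticket data)."""
    critical = [v for v in vulns if v["severity"] == "critical"]
    if not critical:
        return 100.0
    on_time = sum(1 for v in critical
                  if v["fixed"] and v["fixed"] - v["found"] <= SLA)
    return 100.0 * on_time / len(critical)

vulns = [
    {"severity": "critical", "found": date(2016, 9, 1), "fixed": date(2016, 9, 20)},
    {"severity": "critical", "found": date(2016, 9, 5), "fixed": None},
    {"severity": "low",      "found": date(2016, 9, 7), "fixed": None},
]
print(remediation_metric(vulns))  # -> 50.0
```

Because the result is a single percentage tied to a policy-mandated control, it can be reported to business owners in a language they understand.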
Q: Tenable is a Platinum Sponsor at Black Hat Europe 2016. What do you want enterprises to learn about your organization?
Alderman: The IT landscape is shifting. Adoption of cloud computing, the transition to web-based applications and the shift from traditional endpoints to mobile are changing the security threats and attack surfaces. We need to evolve our security programs to address these new technologies, along with our legacy systems, to build a comprehensive set of capabilities. That’s our focus.
Tenable Network Security helps organizations transform their security technology for the business needs of tomorrow. We focus on comprehensive solutions that provide continuous visibility and critical context across emerging technology and legacy systems; enabling organizations to take decisive actions to protect critical assets and data.