
Pedro Canahuati

Vice President, Engineering, Security and Privacy


Facebook

Q1. Your team at Facebook has worked on enabling a more collaborative culture between your software development and operations teams. Describe for us Facebook's approach to integrating security into the DevOps pipeline.

Given how essential security is to every technology today, we have built the same collaborative culture into how we integrate security into engineering and operations. Our security engineers, analysts and investigators are part of the engineering discipline, supported by the engineering organization just like the operations teams. Traditionally, these disciplines are kept separate; we believe that integrating them ensures security is foundational in how we build and stress-test our products.

Our security and privacy engineering teams consist of experts in corporate and application security, detection infrastructure, data protection, data privacy, and more. Our goal is to build the systems that will standardize security, privacy, and transparency across Facebook's platforms. As we work to scale this effort, my team collaborates closely with Facebook's Infrastructure, Product and Policy teams to streamline product design, incorporate security throughout the entire development cycle, and maintain the platform's health and integrity. This is a global effort that spans teams, departments and time zones, and requires us all to work in lock step together and share responsibility for how we protect the privacy and security of people on Facebook.

For each challenge we face, we're bringing our best people together and engaging outside experts to thoroughly understand the issue and address it in a systematic and thoughtful manner. We will continue to evolve our team structure to position ourselves to be as effective and efficient as we can in protecting people using our services.

Q2. Facebook is arguably one of the biggest targets in the world for criminal hackers, phishers and social engineering scammers of all kinds. What is your guiding philosophy when it comes to implementing strategies/plans for dealing with such security threats? How do you prioritize the things that need attention when you are constantly under attack?

Security at Facebook is a massive effort across the company. We are fortunate to play an important role in people's lives; that is a privilege, but also an enormous responsibility that we don't take lightly. We build technical solutions at scale that solve entire categories of issues. For example, we ship reusable components and infrastructure to solve full classes of problems such as key management or access control. Instead of providing guidance or best practices to every single team, we build libraries and/or APIs that allow developers across the company to integrate security, thereby accelerating the development of services and significantly reducing the heavy lifting required to make a service secure. We are constantly improving our tools, applying new security testing, reviewing approaches, and staying agile as we implement solutions to security and privacy challenges.
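The idea of a reusable security component, as described above, can be sketched as follows. This is a hypothetical illustration, not Facebook's actual internal API: product teams call a shared access-control library instead of each writing their own checks, so the default-deny policy logic lives in one place.

```python
# Hypothetical sketch of a shared access-control component. All names
# here are illustrative, not Facebook's actual internal APIs.

class AccessControl:
    """Central access-control library shared across product teams."""

    def __init__(self):
        # permission -> set of user ids explicitly allowed to exercise it
        self._grants = {}

    def grant(self, user_id, permission):
        self._grants.setdefault(permission, set()).add(user_id)

    def check(self, user_id, permission):
        # Default deny: anything not explicitly granted is refused.
        return user_id in self._grants.get(permission, set())


# A product team integrates the shared component instead of rolling its own:
acl = AccessControl()
acl.grant("alice", "photos.delete")

assert acl.check("alice", "photos.delete") is True
assert acl.check("bob", "photos.delete") is False
```

Because every team calls the same `check`, a fix or policy improvement in the library immediately benefits every service that uses it, which is the scaling effect the answer describes.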

Managing a global and complex technology environment with billions of users and millions of lines of code produces a great deal of security insights that we know can be valuable to the rest of the industry, which is why we invest in sharing our tools and knowledge with the security community. We also work closely with outside experts through our bug bounty program, one of the longest running in the industry.

Q3. What do you want security practitioners at Black Hat Europe to know about Facebook's commitment to data security and privacy?

Facebook is deeply committed to protecting people's security and privacy. To meet this responsibility, my team and the entire company are working hard to maintain a firm foundation by standardizing privacy and security engineering practices across the platform. To me, our success boils down to two key components: 1) people's ability to clearly understand how they can control and protect their information on Facebook, and 2) people's trust that we are fulfilling our responsibility to understand and mitigate security risks facing our community every day.

As we work towards these goals, we are investing heavily in technology and talent. This year, we have doubled the number of people who work on security and safety issues to more than 20,000, including content reviewers, software engineers, and security engineers. Our security teams are focused on ensuring that we continue to improve our capacity to detect and respond to threats, while ensuring that the right people can access the right information when and where they need it.

On the data privacy side, my goal is to ensure that everyone has a full and clear view into how they can control and manage their data across Facebook services, and that we as a company have strong governance models in place to protect people's privacy.






PJ Kirner

Chief Technology Officer


Matthew Glenn

Vice President of Product Management


Illumio

Q1. PJ, your company is betting on micro-segmentation as the best approach for preventing the spread of breaches inside the cloud and data centers. How is micro-segmentation different from network segmentation? Why doesn't traditional network segmentation work for security?

Traditional network segmentation, well understood by security and infrastructure teams, was designed to subdivide the network into smaller segments through VLANs, subnets, and zones. Although these constructs can provide some isolation, their primary function is to boost network performance, and they require control of the infrastructure, which is often a challenge in the public cloud.

In contrast, micro-segmentation was designed to prevent the spread of breaches and enforce security policies — what should and should not be allowed to communicate among various points on the network.

The goal of micro-segmentation is to decrease the network attack surface. By applying micro-segmentation rules down to the workload or application, IT significantly reduces the risk of an impactful data breach. For example, when a bad actor can only move between three application workloads vs. 3,000, their chances of accessing and stealing critical data without being detected are severely limited. Plus, the smaller attack surface makes it easier and faster for IT to find bad actors once they are in, because they cannot move everywhere to hide.

One of the challenges with segmentation is you must know what to segment. Mapping the connections between workloads, applications, and environments requires real-time visibility into application dependencies, which many enterprises lack. Lack of visibility makes it harder to reduce the attack surface, protect applications, and reduce cyber risk. Creating a real-time context-rich application dependency map is the first step towards successful micro-segmentation.
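The dependency-mapping step described above can be sketched in a few lines. This is an illustrative toy, not Illumio's product: observed connections between workloads are collapsed into a map of edges, each annotated with the ports seen on it.

```python
# Illustrative sketch of building an application dependency map from
# observed connections. Workload names and ports are invented.
from collections import defaultdict

# Hypothetical flow records: (source workload, destination workload, port)
flows = [
    ("web-1", "app-1", 8080),
    ("web-2", "app-1", 8080),
    ("app-1", "db-1", 5432),
    ("app-1", "db-1", 5432),  # repeat observations collapse into one edge
]

def build_dependency_map(flows):
    """Map each (source, destination) pair to the set of ports observed."""
    deps = defaultdict(set)
    for src, dst, port in flows:
        deps[(src, dst)].add(port)
    return dict(deps)

deps = build_dependency_map(flows)
assert deps[("app-1", "db-1")] == {5432}
assert len(deps) == 3  # three distinct edges, despite four flow records
```

The resulting map is what makes the next step, writing segmentation policy, tractable: each edge is a candidate rule, and anything not on the map is a candidate for blocking.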

With micro-segmentation, security is decoupled from the underlying network hardware constructs like IP addresses and port numbers. This approach leverages user-defined labels that can be imported from a system of record like vCenter or a CMDB to create policies, making it much easier to deploy – while also reducing IT burden and errors. Once defined, the policies follow the workload, making them truly portable and agnostic of the workload form factor (bare-metal server, virtual machine, or container) and location (on-premises, public cloud). The policies are elastic as well, which means any new workloads instantiated in that group will inherit the policy.
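A minimal sketch of label-based policy, assuming invented labels and rules: rules match user-defined labels (role, environment) rather than IP addresses, so a newly instantiated workload with the same labels inherits the policy automatically.

```python
# Minimal sketch of label-based segmentation policy. Labels and rules
# here are invented for illustration; they are not Illumio's syntax.

policies = [
    # (source labels, destination labels): explicitly allowed
    ({"role": "web", "env": "prod"}, {"role": "app", "env": "prod"}),
    ({"role": "app", "env": "prod"}, {"role": "db", "env": "prod"}),
]

def is_allowed(src_labels, dst_labels):
    """Default deny: a connection needs an explicit label-based rule."""
    for rule_src, rule_dst in policies:
        # dict.items() views support subset comparison in Python 3.
        if rule_src.items() <= src_labels.items() and rule_dst.items() <= dst_labels.items():
            return True
    return False

# A brand-new workload inherits the policy from its labels alone;
# its IP address plays no part in the decision:
new_workload = {"role": "app", "env": "prod", "host": "10.0.0.42"}
assert is_allowed({"role": "web", "env": "prod"}, new_workload)
assert not is_allowed({"role": "web", "env": "prod"}, {"role": "db", "env": "prod"})
```

Note how the `host` field is ignored entirely: the rule keys on labels, which is what makes the policy portable across form factors and locations.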

Q2. Matthew, you recently noted in a blog that the key to protecting the network is to understand that there is no network. But organizations have cumulatively spent tens of billions of dollars on network security over the years. What's your advice on how best they can continue leveraging that investment while moving to a more workload/application-centric security model?

I think that is the challenge many people face when they think about micro-segmentation. The word "segmentation" implies a network-centric challenge when, in reality, an IT team may not own the network – and the network doesn't "bend" so well. Realizing that forces organizations to bend the way they think and find better solutions.

I often think in metaphors and when I wrote that I was thinking about the movie “The Matrix,” when Neo takes the spoon from the child who is bending it and makes it straight again.

Spoon kid: Do not try and bend the spoon. That's impossible. Instead, only try to realize the truth.
Neo: What truth?
Spoon kid: There is no spoon.

When you don’t own the network, then there is no network.

Q3. PJ, help us understand how micro-segmentation can help organizations implement a Zero Trust security strategy?

Micro-segmentation is a key building block of the Zero Trust strategy. It implies a least privilege security model – only allow explicitly specified communications. And while normally we think about this in terms of users and data, which of course is a critical part of Zero Trust, the next step is to apply that same least privilege model to all the applications and workloads in your data center and cloud.

To have precise and granular control of communications between workloads, applications, and processes running across the data center or public cloud, one needs a mechanism to easily create micro-perimeters around applications and processes. Micro-segmentation allows you to do precisely that, which is the first step towards restricting all communications unless specifically allowed.

The first thing organizations need to do when adopting a Zero Trust strategy is to have a good real-time view of their application landscape mapping all communications between workloads and application tiers. This is known as application dependency mapping and is a very important step before jumping into actual micro-segmentation. You can’t segment what you can’t see. A real-time map allows you to create the right policies that reflect the intended security posture.
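The relationship between the map and the policy described above can be made concrete with a small sketch (illustrative data, not a product feature): once communications have been observed and vetted, the map itself becomes a default-deny allowlist.

```python
# Sketch of turning an observed, vetted dependency map into a
# default-deny allowlist. Workload names and ports are invented.
observed = {
    ("web", "app"): {8080},
    ("app", "db"): {5432},
}

def allowed(src, dst, port, deps):
    """Zero Trust posture: permit only what the real-time map justified."""
    return port in deps.get((src, dst), set())

assert allowed("app", "db", 5432, observed)
assert not allowed("web", "db", 5432, observed)  # never observed: denied
```

Anything that was never seen and vetted, like the `web` tier talking directly to `db`, is denied by construction, which is exactly the unauthorized lateral movement the strategy aims to block.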

It is no surprise that micro-segmentation has quickly become part of corporate security strategy. Organizations want to reduce risk in the event of a breach and a Zero Trust strategy enabled by micro-segmentation helps them prevent unauthorized lateral movement. Customers have started to assign budget towards micro-segmentation projects.

Another driver for micro-segmentation is regulatory compliance. The fundamental principle of Zero Trust is reflected in many compliance and auditing mandates, so micro-segmentation helps organizations meet those regulations.

Q4. What are some of the questions you are expecting, or hoping, that attendees at Black Hat Europe will have for Illumio at the event?

Awareness that breaches are inevitable has IT organizations taking a pragmatic look at minimizing their exposure. Based on our experience with the largest micro-segmentation deployments in the world, we expect to have conversations around real-world deployments, operational aspects, and the need to scale.

Micro-segmentation deployments have risen significantly in the last year and the technology has become mainstream with customers like Morgan Stanley, Salesforce, and Oracle NetSuite implementing it globally and at scale. Some of the largest micro-segmentation deployments in the world are using the Illumio Adaptive Security Platform – and have production deployments with tens of thousands of workloads. Many enterprises are looking to scale micro-segmentation beyond their critical applications to include business applications and core services, which demands a level of scale never before envisioned. We had the foresight to think about this early and the Illumio Adaptive Security Platform is built for scale and resiliency.

We are hoping to talk to attendees about their specific use cases, scalability requirements, and operational challenges. Our technical experts thrive on new and interesting challenges presented by attendee scenarios and engage in real-time problem solving on the show floor.






Matthew Wilson

VP Product Management


Neustar

Q1. In the current threat environment, what are some of the key requirements for enabling an actionable understanding of who or what is on the other end of every digital interaction?

In a world where cyber-attacks are becoming more sophisticated and complex, using state-of-the-art data analytics and modeling software is essential in helping to understand who, or what, is on the other end of a digital interaction. By using this technology, authoritative decisions can be made about who is trying to engage with an organization – including what their intention may be – ensuring an appropriate response in real time.

By identifying IP addresses, organizations can determine whether an IP is being used by a human, a non-human bot, or is simply server traffic. Businesses can then swiftly uncover whether the address has been associated with malicious activity in the past and is too risky to trust, along with its history and the last time it was seen.

By analyzing traffic behavior and identifying threats, organizations can monitor for patterns that indicate certain behaviors, interactions and risk. Abnormalities in these patterns can then easily be identified and queried, helping to identify hostile activity and block non-human intrusions.
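A hedged illustration of the kind of scoring these answers describe (this is not Neustar's actual model): combine what the IP appears to be (human, bot, server) with its observed history to decide whether to trust the interaction.

```python
# Hypothetical IP risk scoring. The fields, weights, and thresholds are
# invented for illustration; they are not Neustar's actual model.

def risk_score(ip_profile):
    """Return a 0-100 risk score from a hypothetical IP reputation record."""
    score = 0
    if ip_profile.get("actor") == "bot":
        score += 40
    if ip_profile.get("past_malicious"):
        score += 40
    # An address not seen for a long time carries more uncertainty.
    if ip_profile.get("days_since_last_seen", 0) > 90:
        score += 20
    return min(score, 100)

profile = {"actor": "bot", "past_malicious": True, "days_since_last_seen": 120}
assert risk_score(profile) == 100
assert risk_score({"actor": "human", "past_malicious": False}) == 0
```

In practice such a score would gate a real-time decision: allow, challenge, or block the interaction before it reaches the application.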

This comprehensive approach is what enables businesses to mitigate and protect against cyber-attacks, knowing exactly when someone is not who they claim to be. Thanks to these insights, fraud and denial-of-service attacks can be stopped before they become a problem, and organizations gain increased resilience against disruptions.

Q2. What are some emerging trends around identity resolution and management? How do you see capabilities in this space evolve over the next few years?

Responsible identity is dynamic, not static, and should be at the heart of every business – knowing exactly who or what is behind every interaction, transaction and communication. Today, however, identity also acts as the ultimate competitive advantage, helping enterprises grow, guard and guide, while understanding how to connect people, places and things.

Over the next few years, the cumulative impact of understanding identity can help businesses win in the connected world.

Q3. If Neustar were able to leave attendees at Black Hat Europe with just one takeaway this year, what would it be?

Cyber security is a rapidly evolving landscape, with organizations constantly working to stay at least one step ahead of hackers. However, cyber criminals are more determined than ever, never running out of new tactics in their attempts to penetrate an organization's defensive layers.

As part of these industry developments, we're starting to see distributed denial of service (DDoS) attacks become heavily targeted, meaning organizations are using all of their resources to mitigate the threat. Meanwhile, with that distraction in place, hackers work to exploit other areas of the business, gaining easy access while defenders' attention is elsewhere.

In addition, the explosion of IoT devices coming online every day continues to open the door to vulnerabilities and makes it easier for hackers to launch denial of service attacks to infiltrate a company's systems. In the coming years, there will be many more devices connected to the Internet. Something will need to manage the directory/registry information and authentication credentials for all these devices, and DNS is a logical platform to do that. The real changing force in the Internet, however, will come not from the "thingafication" of the Internet, but from its balkanisation.

The reality is no organization should consider itself safe, no matter its size, sector or level of security. The next big threat in DNS security will probably be very small micro-attacks. While large-scale DDoS attacks will still exist, we are effective as an industry at mitigating those. Where we are vulnerable is our inability to detect small, targeted attacks that either corrupt the DNS response during transit or exploit a weakness in the DNS servers or software.
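The detection gap described above can be illustrated with a toy approach (hypothetical, not Neustar's detection logic): a large flood stands out in aggregate traffic, but a small targeted attack only becomes visible against a per-target baseline.

```python
# Illustrative per-target baseline detection for small, targeted
# attacks. The data and threshold are invented for illustration.
from statistics import mean, stdev

def is_anomalous(history, current, threshold=3.0):
    """Flag a query count more than `threshold` standard deviations
    above this specific target's own historical baseline."""
    mu, sigma = mean(history), stdev(history)
    # Floor sigma so a perfectly flat history doesn't flag tiny jitter.
    return current > mu + threshold * max(sigma, 1.0)

# Per-target history keeps a micro-attack visible even though its
# absolute volume would be lost in aggregate traffic:
history = [100, 105, 98, 102, 101]
assert is_anomalous(history, 400)       # small spike, but huge for this target
assert not is_anomalous(history, 105)   # within normal variation
```

The point of the sketch is the granularity: the same 400 queries that trip a per-target baseline would be invisible as a fraction of total DNS volume.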






Matthias Maier

Security Evangelist


Splunk

Q1. There is a lot of buzz around security orchestration, automation, and response (SOAR). What exactly is SOAR and how is it different from a fully integrated Threat Intelligence Platform?

It’s not buzz – these tools are actually helping to address the pain organizations face as they build out security strategies and capabilities that go beyond prevention to detection and response. This strategy requires skilled and experienced security personnel, and the significant skills shortage and limited availability of security analysts have increased the demand for automating repeatable tasks, allowing organizations to work smarter and respond faster.

Q2. Do you need to be a mature security organization in order to be able to implement a SOAR capability? What are some of the prerequisites for implementing SOAR?

To begin implementing SOAR, a mid-level security maturity is required – this can be defined as having at least some security analysts/responders, having a SIEM in place, and having some repeatable and well-defined processes that would benefit from automation. The next step would be to utilize one of the out-of-the-box playbooks and templates available from either Splunk or the Splunk community.

However, the process should align with your organization's architecture and might involve orchestration across other teams. Once in place, security analysts are freed up to think about what further attack scenarios might happen and to strengthen detection with their SIEM. They can also review which other regular security operations processes would benefit from automation, and at which point the analyst or sysadmin from the department next to the security team should still be involved to review a particular action or outcome.
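The playbook pattern described above can be sketched as follows. This is a generic illustration (not Splunk SOAR's actual API): repeatable enrichment steps run automatically, while the defined human review point is preserved for ambiguous cases.

```python
# Minimal sketch of a SOAR-style playbook. Steps, IP ranges, and
# decisions are invented for illustration; this is not Splunk's API.

def enrich(alert):
    """Automated, repeatable step: attach reputation context."""
    # A real playbook would query threat-intel feeds here; this stub
    # treats the 203.0.113.0/24 documentation range as "known bad".
    alert["reputation"] = (
        "known-bad" if alert["src_ip"].startswith("203.0.113.") else "unknown"
    )
    return alert

def playbook(alert):
    """Run automated steps, then decide: auto-contain or escalate."""
    alert = enrich(alert)
    if alert["reputation"] == "known-bad":
        return "auto-contain"        # well-defined case: no analyst needed
    return "escalate-to-analyst"     # ambiguous case: the human review point

assert playbook({"src_ip": "203.0.113.7"}) == "auto-contain"
assert playbook({"src_ip": "198.51.100.9"}) == "escalate-to-analyst"
```

The escalation branch is the "review point" from the answer above: automation handles the well-defined majority, and analyst time is spent only on the cases that genuinely need judgment.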

Q3. What do you want Black Hat Europe attendees to know about Splunk's ability to help SOCs?

We offer a full portfolio that allows organizations to operate their Security Operations Center at the level of a Security Nerve Center. However, it's not built or purchased as a service overnight. For all our customers there is a maturity journey that starts with establishing visibility, enabling investigative capabilities, solving hard security problems, and breaking silos to establish security operations procedures. The next level is to optimize and automate the response where possible – to scale and defend your organization's infrastructure against risk with just a limited set of people.






 
