Interviews | February 15, 2019
Digital Transformation Has Opened Up Enterprises to New Threats: Facebook, Qualys, Tenable
Q1. Over the past few years you have led the effort to bring Facebook's production engineering group and software engineering teams together to better address business requirements, particularly around data security. What's been the most challenging part of doing this? What lessons have you learned along the way?
It's not uncommon in the industry for operations and software engineering to be two fairly independent disciplines within tech companies. The software engineering (SWE) teams often focus solely on writing code while relying on a centralized operations team to run services somewhat independently. In the last few years, companies have increasingly been looking to bring these disciplines closer together under "Secure DevOps," and that makes a lot of sense. My goal at Facebook, from the very start, has been bringing these teams as close together as possible and making sure their values, roles and responsibilities are aligned across the board.
I strongly believe that operations isn't just one team's responsibility — it's everyone's. From the software engineers who are building the software, to the embedded production engineers (PEs) who deploy and run the services, to the leaders who determine priorities: everyone should be engaged in building reliable, feature-rich services in a holistic way.
The same applies to securing our infrastructure and services. The most rewarding and impactful work my team is focused on is ensuring security is built into services and products across the entire design lifecycle. Making sure software engineers are involved in how their software is deployed, how it's stress-tested, how it recovers during an outage and how it could be exploited naturally leads to technology products that are a lot more resilient. As Facebook has grown, the operations and security teams have become more integrated with the software teams, and this has enabled us to scale out our infrastructure with stronger automation and resiliency.
More specifically, to ensure engineers can keep up with their daily responsibilities while securing our products, we invest heavily in building coding frameworks that give engineers built-in safeguards as they write code, and automated testing tools that can inspect code and find security errors at scale, as quickly as possible. For example, our security team built abstractions into code to remove entire classes of issues, like XSS and CSRF vulnerabilities. What we've learned, sometimes the hard way, is that we need to work closely together to prevent and solve these problems instead of depending on the security team to fix other teams' errors.
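To illustrate the kind of abstraction described here (a minimal sketch, not Facebook's actual framework), a templating helper that HTML-escapes every substituted value by default removes the whole class of reflected XSS bugs, because developers never have to remember to escape:

```python
import html

def render(template: str, **values: str) -> str:
    """Substitute values into a template, HTML-escaping each one by
    default so markup in user input is neutralized before output.
    Hypothetical example; real frameworks layer on context-aware escaping."""
    return template.format(**{k: html.escape(v) for k, v in values.items()})

# User-controlled input containing a script payload is escaped, not executed.
page = render("<p>Hello, {name}</p>", name="<script>alert(1)</script>")
print(page)  # <p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;</p>
```

The design point is that the safe behavior is the default path, so a code reviewer only needs to scrutinize the rare places that deliberately opt out of escaping.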
Q2. How has Facebook's operations team needed to evolve over the years to ensure that Facebook's products are available 24x7, in a secure way, to the tens of millions of users who hit its infrastructure on a daily basis?
Just like with operations, security is never just one team's mission. Given how essential security has become to every technology, we believe that every team has a responsibility to protect the systems they build. The traditional approach of centralized responsibility for security — long dominant in the technology industry — is due for a change. To enable this change, we need to examine the way we, as a community, enable data protection and shift from using audits, checkbox compliance, or bureaucracy as a crutch for getting security done.
Our teams at Facebook have had to evolve to empower our software teams to take ownership and responsibility for security and operations to enable data protection at scale. We've taken a defense-in-depth approach to ensure our software teams are as efficient as possible and continue to innovate while using multi-tiered security frameworks to catch bugs through code reviews, static analysis, bug bounties and rigorous pen-testing. We invest in talking to product and other teams about how their services could be exploited so we can work with them to implement secure frameworks or re-architect systems to ensure their resilience. We build these frameworks and tools that help engineers to easily integrate things like service authentication and authorization with minimal security team involvement.
We work on evangelizing the security mindset across our software teams — through software development frameworks and abstractions, and also through training and fun initiatives like Hacktober. Every October is our Security Awareness month, which means we run CTF competitions and smaller scoped Red Team exercises. Employees earn points and swag in addition to bragging rights when they solve these puzzles or discover and report rogue authentication pushes. This helps us raise security awareness across our entire company and exercise the organizational muscles we would need during a real incident. We know that sophisticated adversaries will always be interested in services like ours, so we continually examine our internal processes to ensure that our security operations and technology keep pace to combat future threats.
Q3. Why is it important for Facebook to be at security events like Black Hat Asia 2019? What is your main focus at the event?
Everyone in the industry is working towards the same goal of making the Internet more secure. Many of us face the same security challenges and that's why communities like Black Hat are so important. To us, this is an opportunity to share our insights and discoveries around threats with one another so we can collectively get better at defending against sophisticated adversaries.
We want to hear from our industry colleagues about the practices and methods they use to protect people online so we can learn from them and apply those lessons to our own security work. This type of open sharing has played a key role in our security operations, including through our Bug Bounty program, one of the longest running in the industry. We're committed to working with the security researcher community on improving the security of our infrastructure so we can get faster at finding, fixing and preventing bugs.
We've also long invested in sharing our insights, tools and technology with our infosec peers to help shore up our collective defenses against evolving threats. This continued collaboration is important now more than ever as we all increasingly rely on interconnected technologies in our daily lives.
Q1. How is the growing adoption of containers, functions and serverless infrastructure impacting enterprise security postures? What are the most common mistakes you are seeing organizations make when moving mission critical workloads to this new paradigm?
Containers have caused a massive disruption in the way applications are built and deployed — one that has perhaps not been seen since virtual machines disrupted the data center and led to the advent of what we now know as the Cloud. However, unlike virtual machines, the impact of containers is not limited to infrastructure; it extends into applications as well. Application containers pack all of their dependencies with them, leaving only a minimal dependency on the underlying infrastructure: all a container itself needs to run is the kernel. While this approach of bundling applications has led to a significant increase in portability and agility for deploying applications, it has also increased compute density, since many more containerized applications can share the same compute infrastructure than virtual machines can.
All of these advantages come at a cost, as is always the case: significantly lower isolation boundaries between applications sharing the same underlying infrastructure. With these new ways to build, deploy and manage applications, traditional virtual machine-based security and monitoring tools just don't work, exposing applications to significant compromise risks. The biggest and most common mistake enterprises make in this newly adopted paradigm is trying to force-fit their existing virtual machine-based solutions to the challenges brought about by containers. The challenges are further aggravated when application containers are deployed in a serverless or function-as-a-service environment, where traditional host-based agents are not allowed because of the shared compute fabric.
Q2. What impact has DevOps and agile practices for containerized and serverless workloads had on application security? What kind of requirements are they driving from a security standpoint?
The end-to-end automated CI/CD pipelines for containerized and function workloads have made it possible to seamlessly ship numerous monthly or even daily releases of features and bug fixes. However, they have also made it significantly easier for any developer to introduce security risk through this process. For example, a developer could include an open source package that has serious vulnerabilities, is unapproved by the enterprise for use, or contains malware, and that package can quickly slide through the automated pipelines and show up in production environments.
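One common mitigation for this risk is a dependency gate inside the pipeline itself. The sketch below is a hypothetical example (the package names, versions and lists are made up, and real pipelines would query a vulnerability feed rather than hard-code one): the build fails if any declared dependency is on a known-vulnerable list or off the approved list.

```python
# Hypothetical CI gate: flag dependencies that are known-vulnerable
# or not on the enterprise's approved package list. Example data only.
KNOWN_VULNERABLE = {("requests", "2.5.0"), ("pyyaml", "3.12")}
APPROVED = {"requests", "pyyaml", "flask"}

def check_dependencies(deps):
    """Return human-readable violations for a {name: version} mapping."""
    violations = []
    for name, version in deps.items():
        if name not in APPROVED:
            violations.append(f"{name}: not on the approved package list")
        if (name, version) in KNOWN_VULNERABLE:
            violations.append(f"{name}=={version}: known vulnerable release")
    return violations

# A build declaring a vulnerable release and an unapproved package
# would produce two violations, and the pipeline step would fail.
print(check_dependencies({"requests": "2.5.0", "leftpad": "1.0"}))
```

Running such a check as a mandatory pipeline stage is what makes the control "native" to the DevOps workflow rather than a manual audit bolted on afterward.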
Containerized and serverless workloads require new visibility and security control methods and tools that are native to their build and deployment environments, and can provide automatic visibility and protection throughout the application lifecycle without disrupting the existing DevOps pipelines and practices.
Q3. What do you want attendees at Black Hat Asia 2019 to know about Qualys, its capabilities and its strategy for addressing this new reality in the coming years?
Though Qualys has traditionally focused on providing vulnerability scanning and compliance solutions for infrastructure, for the past couple of years Qualys has been expanding its platform to provide a single-pane view of security and compliance across all of the infrastructure —including on-premises assets, clouds, containers, remote workforces, applications, APIs, and soon mobile devices— which organizations are adopting as they embark on Digital Transformation.
Some of the most valuable tools for securing the digital transformation are solutions that provide visibility and control for monitoring, compliance and protection across the entire application lifecycle, regardless of how applications are built and bundled, and across the different infrastructure stacks on which they are deployed. As new computing and cloud service models are introduced, Qualys wants to ensure that enterprises don't have to compromise visibility and protection while adopting these new paradigms, by providing solutions that are native to them and that deliver visibility, accuracy, scale, immediacy and transparent orchestration of security.
Q1. You were a member of a panel discussion on cybersecurity at the recent Davos Summit. What were some of the key takeaways from that discussion especially for enterprise organizations?
Some of the key themes that came out of the Cyber Future Dialogue conference in Davos were cyber risk's role in business risk, the criticality of framing cybersecurity in business terms and third-party threats. These takeaways speak to the dynamic threat environment that enterprises are grappling with on a daily basis and how those threats are impacting the business.
We now live in a world of connected-everything, from IoT devices to operational technology to traditional IT. This connectivity brought on by digital transformation has opened up enterprises to new threats, expanded their attack surface and, in many ways, made their jobs more difficult. At the same time, boards of directors and C-suite executives are now asking their CISOs critically important questions: Where are we exposed? Where should we prioritize based on risk? Are we reducing exposure over time? And, how do we compare to our peers?
In order to answer these questions, CISOs must first be able to get their arms around the assets, data and systems in their purview. They then have to effectively measure and manage their overall exposure and cyber risk. And finally, they need to translate that cyber risk into business terms in order for executives to make informed decisions.
Q2. What are some of the common challenges organizations face when it comes to cyber risk assessment? Why do so many enterprises have such a hard time understanding and quantifying risk?
One of the biggest challenges for organizations today is a lack of adequate technology. The security industry simply hasn't kept up with the pace of innovation. This has meant organizations are forced to use traditional approaches to cybersecurity that were created for the world of on-premises servers and workstations. In today's highly dynamic and constantly evolving attack surface, these decades-old approaches don't cut it.
Cybersecurity was largely considered a tactical function 5 or 10 years ago; its reach was limited to the confines of the IT team. Fast-forward to today and cybersecurity is now a strategic imperative for organizations globally. And while this evolution means security is a board-level issue, many CISOs don't have the tools in place to accurately manage, measure and reduce their cyber risk.
Furthermore, many organizations are suffering from the cybersecurity workforce shortage while at the same time dealing with a barrage of new vulnerabilities. According to an independent study conducted by Ponemon Institute on behalf of Tenable, less than one third (29%) of respondents surveyed reported having sufficient visibility into their attack surface to effectively reduce their exposure to risk. Compounding this lack of visibility, more than half of respondents (58%) said their security function lacks adequate staffing to scan for vulnerabilities in a timely manner, with only 35% scanning when it's deemed necessary by an assessment of risks to sensitive data.
Q3. What are Tenable's plans for Black Hat Asia 2019? What is your main messaging going to be at the event?
Digital transformation has created a complex digital infrastructure of Cloud, DevOps, mobility and IoT. This has expanded the attack surface and created a massive gap in organizations' ability to truly understand their Cyber Exposure at any given time. As in previous years, we remain laser-focused on our Cyber Exposure vision — helping organizations manage, measure and reduce their cyber risk in the digital era. While we look to the future, we'll also be celebrating the last 20+ years of Nessus, which paved the way for the vulnerability management and Cyber Exposure markets. Nessus is an integral part of Tenable's past, present and future.
The next phase of our Cyber Exposure journey is the launch of a new capability called Predictive Prioritization, which enables organizations to reduce business risk by focusing on the three percent of vulnerabilities with the greatest likelihood of being exploited in the next 28 days. Effectively prioritizing cyber threats is fundamental to modern vulnerability management, but understanding where an organization is most exposed is increasingly daunting given the barrage of new vulnerabilities. With this innovation, vulnerability remediation efforts transition from reactive to predictive, and because prioritization is so critical, we've made Predictive Prioritization a core feature of our vulnerability management platform offerings. The capability is now generally available in Tenable.sc 5.9, for on-premises vulnerability management, and will be generally available in Tenable.io, for cloud-based vulnerability management, later in 2019.
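The mechanics of this kind of likelihood-based triage can be sketched in a few lines. This is a hypothetical illustration, not Tenable's actual model: assume each finding carries a predicted probability of near-term exploitation, and remediation effort is directed at the top-scoring fraction of the backlog.

```python
# Hypothetical sketch of exploit-likelihood prioritization: rank findings
# by predicted probability of exploitation and surface only the
# highest-risk fraction for immediate remediation. Scores are invented.
def prioritize(findings, fraction=0.03):
    """findings: list of (vuln_id, exploit_probability) pairs.
    Returns the top `fraction` of findings, highest likelihood first,
    always keeping at least one so the queue is never empty."""
    ranked = sorted(findings, key=lambda f: f[1], reverse=True)
    cutoff = max(1, round(len(ranked) * fraction))
    return ranked[:cutoff]

findings = [("CVE-A", 0.02), ("CVE-B", 0.91), ("CVE-C", 0.40), ("CVE-D", 0.05)]
print(prioritize(findings, fraction=0.25))  # highest-likelihood finding first
```

The contrast with severity-only triage is the point: a Critical-rated bug with a negligible exploitation likelihood can rank below a Medium-rated bug that attackers are actively weaponizing.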