Interviews | March 1, 2019

The Cloud Has Dis-Intermediated Security Professionals: Cloudflare, ExtraHop, IntSights Cyber Intelligence, and Synopsys

Joe Sullivan
Chief Security Officer


Q1. When conducting hearings on the condition of Internet security as part of President Obama's cyber commission in 2016 you repeatedly asked panelists the question about who exactly should be responsible for a safer Internet. What is your own opinion on the matter?

Being on a presidential commission was a unique opportunity to step back from my day job as a CSO dealing with specific security challenges, and think strategically about the Internet ecosystem as a whole. In that context, I was often struck by the lack of support individuals and small businesses have online. It is so different from other contexts where governments have played an effective role managing risk. I believe we need to invest as much in the upkeep of the roads of the Internet as we do on making every street corner safer. Governments have reduced our risk for driving a vehicle on the street, sailing a boat across the ocean, setting up a store in a mall, going to an accredited learning institution—the list goes on. But when it comes to the Internet, governments have prioritized punishing crime and regulating content providers, rather than investing in preventing harm from happening in the first place. This focus on going after criminals after the fact, and on pressuring services that don't want extremist or copyrighted content on their platforms anyway, rather than on building foundations for safety up front, has pulled security and infrastructure companies in unproductive directions and left the public to fend for themselves far too much.

I asked the question of who should be responsible for a safer Internet repeatedly, because I wanted to hear answers beyond the usual knee-jerk response of regulating the companies that are trying to embrace the Internet. I believe governments can do better. Examples we identified through this dialogue included: contributing to open source efforts on critical parts of our shared Internet infrastructure, creating incentives and platforms for sharing threat information, breaking down barriers to cross-border collaboration, driving market change by using government spending power to purchase modern technology only when it meets high security standards, and promoting education and awareness.

Q2. Is the Internet any safer now than it was in 2016 when you held those hearings on Internet security? What's driving the change?

I can't speak to measuring safety holistically, but there are promising trends when it comes to online security. We see the latest smartphones shipping with improved default security settings. Hardware and software companies are recognizing the brand halo that comes from investing in security. Corporate CEOs understand that poor security reflects not only on their companies but also on them personally. Media coverage has moved a bit from sensationalism into pragmatic advice. Security teams are seeing a stronger and more diverse talent pipeline into the field. The move to the cloud has taken some security responsibility from small businesses and shifted it to major companies, which are in turn better equipped to respond. And the increased focus on security has driven the venture community to invest heavily in the next generation of security startups.

Q3. What do you want those attending Black Hat Asia 2019 to know about Cloudflare and its mission and strategy?

I've had the good fortune to work as a federal prosecutor, be on a presidential commission, and serve on some of the best Internet security teams in the world at eBay, Facebook, Uber, and now Cloudflare—yet I have never reached what I would call an ideal state of security. Even governments and large companies that have the resources and talent to face the extreme challenges of operating online will fall short sometimes. If this is a challenge for them, you can be sure that others, from individuals to small businesses, don't stand a chance without support. People should be able to step online confidently, and businesses shouldn't have to operate in fear.

I joined Cloudflare because this company matches my passion for securing the whole Internet. Cloudflare strategically places data centers around the globe to empower the network and, in turn, bring users everywhere (not just big enterprises and governments) a safer and faster online connection. The company certainly approaches things differently—from providing free versions of security products from its very early days, to mobilizing Project Galileo to ensure those at risk don't lose their voice online, to launching services that give all Internet users a more private and faster connection, for free.

I've been blown away by the pace of change in development online. I cannot wait to see where it goes. But I also worry that the cloud has dis-intermediated many security professionals from worrying about a good part of the Internet. It should not be a surprise that we are seeing an uptick in very serious DNS hijackings and DDoS attacks. It is the same old story about squeezing one part of the balloon leading to expansion in another area—as the endpoints and applications get a bit more secure, the network feels the heat. Fortunately, Cloudflare has had a singular focus on that network since its start, and continues to push the envelope to address the problems facing the Internet. Cloudflare now runs one of the world's largest networks, supporting nearly 10 percent of all Internet requests worldwide. We are continuing to partner with companies small and large, as well as individuals, as they deal with these new and challenging security and performance issues generated beyond the infrastructure they manage themselves.

Albert Kuo
Vice President, Asia Pacific


Q1. How is machine learning and AI helping advance cybersecurity? What are some of the biggest use cases for these technologies in the cybersecurity context?

Security teams at enterprises still drown in too many warnings. In November, Enterprise Management Associates (EMA) found that 64% of alerts go uninvestigated, and only 23% of respondents investigate all of their most critical alerts.

Machine learning – a building block for AI – lets augmented analytics help security staff decide what to investigate, detect low-and-slow attacks that defences have missed, and gain enough time to explore the serious problems. ML can discern indicators of attacks from collections of loosely related data faster and more reliably than an overworked (and often under-experienced) analyst. In security operations, ML helps combat a genuine, compelling, and intractable problem – the shortage of security analysts.

ML models evolve over time based on what they observe, or how they are trained. Used on authoritative data sets, ML helps prioritise those indicators that are materially interesting and automate aspects of investigation that slow and complicate the security operations center (SOC).
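The prioritisation idea above can be sketched with a small unsupervised model. This is a toy illustration, not how any particular vendor's product works: the alert features, values, and the use of scikit-learn's `IsolationForest` are all assumptions chosen for the example.

```python
# Sketch: triaging alerts with an unsupervised ML model. Each row is one
# alert, described by invented features: [bytes_out, failed_logins, off_hours].
import numpy as np
from sklearn.ensemble import IsolationForest

alerts = np.array([
    [1_200,  0, 0],   # routine-looking activity
    [1_100,  1, 0],
    [1_300,  0, 0],
    [1_250,  0, 1],
    [95_000, 30, 1],  # large transfer plus many failed logins
])

model = IsolationForest(contamination=0.2, random_state=0).fit(alerts)
scores = model.decision_function(alerts)  # lower score = more anomalous

# Triage order: most anomalous alerts first
priority = np.argsort(scores)
print(priority)
```

In practice the model would be trained on far richer, authoritative data; the point is only that a score lets an overloaded analyst start with the alerts least like everything else.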

AI adds on to this idea by letting the machine either suggest or take action based on its models and observations. The challenge here is that while this sounds marvelous in theory, it remains largely aspirational in practice. For years, security teams have avoided even basic automated responses for fear of disrupting business. The two-person rule, privileged access controls, playbooks, and surprise audits – these practices offset the risk of errors caused by haste, ignorance, or poor judgement.

Yet cybersecurity leaders have seen the value of automation in DevOps and other areas, and are now beginning to embrace automation for cybersecurity. The same wait-and-follow adoption model will apply to AI in cybersecurity – just not yet. Right now, AI in security is still mostly artificial and not that intelligent. With that in mind, we will let other markets and operational teams find the bugs and breakdowns before we put our businesses, reputations, and careers at risk. In the meantime, although not all ML delivers equally, the approach has plenty of scope for positive impact without AI's downsides.

Q2. How are trends like cloud adoption and data center consolidation impacting cybersecurity requirements? What changes are these trends requiring companies to make?

Consolidating your data centre is an effective way to strengthen your security posture. Having one data centre with updated systems, instead of multiple, spread-out locations running outdated legacy systems, means fewer points of entry and exit and an easier time detecting threats.

If your data centre consolidation involves moving data to the cloud, you may have security concerns. There is a common misconception that the cloud is less secure than a physical data center, and it's easy to see why—if you aren't hosting your own data in a physical space, you may feel less in control, and less secure. However, control does not equal security.

Your data centre is the gateway to your cloud data, and your cloud security is only as good as your data centre security.

The network is in a great position to be an enabler for digital transformation efforts and initiatives. In order to take advantage of the scale and elasticity of the cloud without disrupting business operations, user experience, or enterprise security, cloud-focused IT Operations and Security Operations teams need two things: more visibility and trustworthy automation to help them turn insight into action.

Q3. Why is it important for ExtraHop to be at Black Hat Asia 2019? What's your main messaging at the event?

Enterprise security. For some people, those words conjure up images of red-shirted crew members about to be zapped by creatures beefing with the United Federation of Planets. For others, enterprise security means finding ways to protect ever-expanding attack surfaces of their corporate networks and revenue-generating applications.

If you're attending Black Hat, you're probably at least in the latter group and interested in discovering how network traffic analysis (NTA) solutions, a fast-growing category of network detection and response, stack up against traditional security solutions.

Network traffic analysis platforms inspect real-time wire data from all network communications, including encrypted communications, from layer two through layer seven. NTA products use a far richer data source than just NetFlow, which is a useful, but now mostly legacy, data source for network security.

By analysing every transaction and reconstructing every conversation on the network through full-stream reassembly, NTA products can provide more conclusive insights into security events, and forensic-level evidence that SecOps teams can use to understand and report the exact scope of incidents.

Fueled by rich wire data, NTA products use advanced machine learning to identify anomalous behaviours and security incidents, trigger automated investigations, fire alerts, and in some cases trigger automated responses through integrations with firewalls, SOAR products, and other in-line response solutions.
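One simple form of the behavioural detection described above is baselining what services each host normally talks to, then flagging deviations. The sketch below is a deliberately minimal stand-in: the flow records, field layout, and hosts are invented, and real NTA products work from fully reassembled wire data rather than toy tuples.

```python
# Toy behavioural baseline: record each host's observed (protocol, port)
# pairs, then alert on flows to services the host has never used before.
from collections import defaultdict

baseline_flows = [
    ("10.0.0.5", "tcp", 443),
    ("10.0.0.5", "tcp", 443),
    ("10.0.0.7", "udp", 53),
]

new_flows = [
    ("10.0.0.5", "tcp", 443),   # matches baseline: ignore
    ("10.0.0.7", "tcp", 4444),  # never-seen service: worth an alert
]

seen = defaultdict(set)
for host, proto, port in baseline_flows:
    seen[host].add((proto, port))

alerts = [f for f in new_flows if (f[1], f[2]) not in seen[f[0]]]
print(alerts)  # [('10.0.0.7', 'tcp', 4444)]
```

A production system layers ML over many such behavioural features instead of a single exact-match rule, but the detect-deviation-from-baseline shape is the same.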

Nick Hayes
Vice President of Strategy


Q1. As a former industry analyst, what do you see as fundamental drivers of the market for threat intelligence? How do you see capabilities in this space evolving over the next few years?

The easy answer is better visibility into, and context around, your external threats. But when you dive deeper into customer responses and their use cases, it's so much more than that. Cybersecurity and threat teams use threat intel to strengthen existing secops, to improve perimeter security with rapid detection and blacklisting, and to actively protect execs, strategic assets, and intellectual property. Over the next few years, I expect the cyber arms race to escalate as attackers find new ways to use AI for evil, and security teams take on new initiatives to protect their external digital environment.

Q2. Why is it important for organizations to apply a digital risk protection (DRP) approach to cybersecurity? How is it fundamentally different from traditional approaches to protecting enterprise data and assets against cyber threats?

The last thing security pros need is another, entirely new approach. Digital risk protection as a cybersecurity initiative aims to streamline threat intel processes and feeds. It seeks to automate the routine tasks of asset discovery and threat remediation in a continuous three-step process: map, monitor, mitigate. The more you standardize and coordinate activities at each of these three stages, the less of a burden it places on security teams and the better protected organizations are beyond their firewalls.
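The map → monitor → mitigate cycle can be pictured as a simple pipeline. Everything in this sketch is hypothetical: the asset names, the "leaked-credential" finding, and the ticket-opening action are stand-ins for real discovery feeds, threat intel matching, and response playbooks.

```python
# Minimal sketch of a continuous DRP loop: discover external assets,
# check them against intel, and route findings to an automated response.
def map_assets():
    # Stage 1: discover externally visible assets (stubbed)
    return ["app.example.com", "files.example.com"]

def monitor(assets):
    # Stage 2: match assets against threat intel (stubbed finding)
    return [{"asset": a, "issue": "leaked-credential"}
            for a in assets if a == "files.example.com"]

def mitigate(findings):
    # Stage 3: hand each finding to a remediation playbook
    return [f"ticket-opened:{f['asset']}" for f in findings]

actions = mitigate(monitor(map_assets()))
print(actions)  # ['ticket-opened:files.example.com']
```

In a real deployment this loop runs continuously and each stage is standardized, which is exactly where the burden reduction described above comes from.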

Q3. What do you want attendees at Black Hat Asia 2019 to know about IntSights' strategy and product roadmap?

I spent the past decade as an industry analyst evaluating this, and other relevant, cybersecurity markets. I joined IntSights because of the fast-paced innovation, continued focus on fundamentals, and leadership dedication to building a product that offers real value and strategic support for every security practitioner. I couldn't be more excited for what we have in store for customers the rest of this year and 2020.

Geok Cheng Tan
Managing Director, Asia Pacific


Q1. What are the requirements for static application security testing in today's development environment? What questions should you be asking in evaluating SAST tools?

Static Application Security Testing (SAST) is one of the most popular techniques for detecting security vulnerabilities in web apps and other software, and there are several reasons organizations are embracing SAST as part of their development process. First and foremost, SAST can be used in the early stages of the software development life cycle (SDLC) when vulnerabilities are cheaper to remediate. Second, it's relatively easy to use and it provides detailed results, such as the specific lines of code that contain the vulnerabilities. Lastly, SAST can be automated and integrated seamlessly into development workflows so you don't have to halt operations and spend extra cycles running security tests.

When evaluating SAST vendors, the question you need to ask up front is, does this solution support the programming languages and frameworks you use to build your applications? SAST technology analyzes source code, so it needs to be able to understand or interpret a given programming language in order to find vulnerabilities written in it. Some vendors specialize in SAST for just one or a few programming languages, while others support a broad range of languages.

Another important factor to consider is whether, and how well, a SAST solution integrates with your other development tools. If you have adopted Agile or DevOps, this is particularly important because it enables your security testing activities to keep pace with your development velocity. Some SAST solutions have IDE plug-ins that developers can use to perform SAST locally on their desktops, enabling them to find and fix problems before they even check in their code. This can save considerable time and money down the road. SAST solutions should also integrate with continuous integration (CI) tools via plug-ins or APIs so that scans can be triggered as part of the automated build process.

Lastly, the accuracy of your SAST solution is paramount. If your SAST solution doesn't effectively detect all critical vulnerabilities in your code, you leave your organization at risk of a cyberattack or data breach. Conversely, if your SAST solution produces a lot of false positives, meaning it flags issues that aren't real vulnerabilities, you end up wasting resources investigating non-issues and your development organization will ultimately reject or circumvent the tool.

Q2. What are the biggest technical challenges in securing customer-facing web applications given the emphasis on speed in software delivery these days?

One of the biggest challenges organizations face in securing their customer-facing web applications today is the rapid, continuous pace of modern software development and delivery. Development paradigms like Agile, DevOps, and CI/CD are becoming mainstream, and application delivery cadences have compressed from quarterly or monthly releases to, in some cases, dozens of code changes per day. In theory, one small code change—a single line of code—could be the difference between a secure application and a massive data breach. With application code bases in constant flux, this evolution has turned application security into a moving target. In response, application security technologies and best practices have had to evolve too. The AppSec market is moving towards fully automated, continuous security testing solutions, technologies that automatically prioritize the most critical vulnerabilities, and tools that perform incremental scans based on changes to a code base.

Another side effect of modern software development is the widespread use of vulnerable open source software components. Open source software itself is not a security problem, but the use of outdated, insecure open source components or the failure to patch them when new vulnerabilities are disclosed has left many organizations exposed. With modern applications comprising more open source than proprietary code, and with 15 to 20 unique open source vulnerabilities being discovered each day, managing open source security has become a major challenge. Software composition analysis, an application security testing technology that automatically identifies and tracks vulnerable open source components, is quickly gaining traction with organizations seeking to proactively address this challenge.
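At its core, software composition analysis matches a project's declared dependencies against a vulnerability feed. The sketch below illustrates that matching step only; the package names, version ranges, and "CVE-XXXX" identifiers are all made up, and real tools also fingerprint transitive and vendored dependencies rather than reading a flat manifest.

```python
# Toy composition analysis: flag dependencies whose version falls below
# the fixed version in a (fabricated) vulnerability feed.
deps = {"webframe": "2.1.0", "jsonlib": "1.4.2", "cryptokit": "0.9.1"}

vuln_feed = [
    {"package": "jsonlib",   "affected_below": (1, 5, 0), "id": "CVE-XXXX-0001"},
    {"package": "cryptokit", "affected_below": (0, 8, 0), "id": "CVE-XXXX-0002"},
]

def parse(version):
    # "1.4.2" -> (1, 4, 2) so versions compare numerically, not lexically
    return tuple(int(part) for part in version.split("."))

findings = [v["id"] for v in vuln_feed
            if v["package"] in deps
            and parse(deps[v["package"]]) < v["affected_below"]]
print(findings)  # ['CVE-XXXX-0001']
```

Here `jsonlib 1.4.2` is flagged because it predates the fixed version, while `cryptokit 0.9.1` is already patched; the hard parts in practice are accurate inventory and version comparison across ecosystems, not this final comparison.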

Q3. What does Synopsys plan to highlight, or focus on, at Black Hat Asia 2019 and why?

At Black Hat Asia 2019, Synopsys will be showcasing our portfolio of industry-leading tools and services designed to help organizations build secure, high quality software faster. Our solutions include static application security testing, software composition analysis, and dynamic application security testing, which enable teams to quickly find and fix vulnerabilities and defects in proprietary code, open source components, and application behavior.

We will also be unveiling our new application security testing platform that unifies our tools and services into a centralized management and reporting console. Organizations need to employ a combination of security testing techniques at various stages within the SDLC to secure their applications against evolving threats, but they also need to do so in an efficient and effective manner that is conducive to agility and innovation. Over the past several years, we have successfully developed a portfolio of differentiated products and services that address most organizations' security testing gaps, and now we're delivering on the promise to drive efficiencies and synergies across these solutions. The new platform represents an important step forward in unifying our Software Integrity portfolio into an offering that is more valuable than the sum of its parts.
