Interviews | July 19, 2016
Black Hat USA Sponsor Interviews: Booz Allen Hamilton, CloudPassage, Neustar, TrapX, and VMWare
Vice President, Dark Labs Director, Cyber Futures, Strategic Innovations Group
Booz Allen Hamilton
Q: BAH offers an Embedded Vulnerability Analysis (EVA) service focused on helping enterprises find vulnerabilities in embedded and IoT devices. What do organizations need to understand about the nature of the embedded threat as more devices and things get Internet-enabled?
Chad Gray: Anything that is connected to the Internet has the potential to be part of a computer security incident, including IoT and embedded devices. Even devices that are seemingly benign can provide entry points into an organization's network or be used to acquire information that can lead to more damaging attacks once connected. While the types of processors, protocols, and operating systems used by connected devices can differ from those used on desktops, servers, and mobile phones, this is not a barrier to an attacker. Security researchers have repeatedly demonstrated their ability to reverse engineer the software and protocols used by IoT devices to find vulnerabilities. As more research is conducted and published in this area, the cost of entry for both new security researchers and malicious actors will continue to decrease.
Many companies building connected devices today are new to computer security. Formed to join the boom of the consumer IoT industry, these companies, spanning the industrial, transportation, and medical sectors, are adding connectivity to devices that previously had none. They often lack the experience the desktop and server industries have gained dealing with security vulnerabilities, and as a result have not developed the best practices that are now standard in those more mature industries. Because of this, bugs are found in embedded IoT devices that are rarely seen in desktops and servers today. Additionally, in some cases connectivity is being added to legacy devices that are still running outdated operating systems and libraries. Some of these devices lack a secure and efficient means of deploying updates, making it very difficult to correct vulnerabilities once they are discovered.
Q: Why are software reverse engineering and machine learning skills important capabilities for organizations to have?
Gray: Organizations that operate in the cyber realm are often faced with the task of understanding the behavior and/or functionality of software written by a third party. These third parties can include software vendors and malware authors. It is typically the case that these third parties do not disclose their source code for a variety of reasons. In the case of malware, one of the objectives of the authors is to intentionally obfuscate the behavior and functionality to conceal the intent. In the case of software vendors, they may not wish to disclose source for proprietary reasons.
Regardless of the rationale for a third party not releasing the source code, it may be necessary for the organization to assess the software binary directly to determine if there are any vulnerabilities and/or verify that the binary will behave in an acceptable manner from a security perspective. Static analysis is an important phase of the binary assessment where the analyst will look through the binary to reverse engineer key components of the code. The process of reverse engineering key components has the potential to provide valuable insight into the inner workings of the binary. These insights can then ultimately be used to answer important security questions.
New technologies, and the threats that accompany them, are emerging at an exponential rate, even as the demand for expert talent grows. Industry still lacks the ability to fully automate this type of reverse engineering and vulnerability discovery, but we are making progress. One example is using LLVM (Low Level Virtual Machine) compilers together with SAT/SMT solvers to rapidly find vulnerabilities in intermediate representations of binaries (credit to Josh Jones and the Dark Labs EVA engineers for their research and testing of this with ILLUVIUM). With this progress, we aren't far from applying machine learning to these techniques for automated, rapid testing to discover vulnerable states.
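The core idea behind solver-driven vulnerability discovery is to encode a program path as a logical formula and ask whether any input both satisfies the program's validation checks and reaches a vulnerable state. The sketch below is a toy illustration of that query, not the LLVM/ILLUVIUM pipeline itself: the flawed length check and buffer size are hypothetical, and a brute-force search stands in for what a real SMT solver such as Z3 would answer symbolically.

```python
# Toy illustration of solver-style vulnerability discovery:
# encode a path condition plus a vulnerability predicate, then
# search for a "witness" input satisfying both.

BUF_SIZE = 16

def path_condition(length: int) -> bool:
    """Validation the (hypothetical) program performs before copying.
    Flawed check: only the low byte of the length is compared."""
    return (length & 0xFF) <= BUF_SIZE

def vulnerable(length: int) -> bool:
    """True when the copy would overflow the fixed-size buffer."""
    return length > BUF_SIZE

# An SMT solver would resolve this symbolically; here we brute-force
# a small input space as a stand-in.
witness = next(
    (n for n in range(1 << 10) if path_condition(n) and vulnerable(n)),
    None,
)
print(witness)  # 256: low byte is 0, passes the check, still overflows
```

Any length from 17 to 255 is correctly rejected; 256 is the first value whose truncated low byte slips past the check, which is exactly the kind of witness a solver returns.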
Q: Booz Allen and the Kaizen team are back again this year with a Capture the Flag and Hacker Dojo workshop at Black Hat USA. Why is the CTF event so popular? What will participants learn or take away from the event?
Gray: CTFs offer a window into the cybersecurity field by letting participants face challenges that mirror real-world vulnerabilities. The competitive, gamified learning environment generates excitement and is a great way for information security enthusiasts to showcase their skills. The friendly competition motivates participants to work their way to the top of the leaderboard, and there's an immense satisfaction that comes from solving a difficult challenge and finally submitting the flag.
We've been hosting a half-day CTF at Black Hat since 2013, and our room has always been packed to capacity. In order to allow more people to experience the event, this year we're making it a full-day event, and allowing people to hack away at our challenges around the conference via Wi-Fi.
Our CTF consists of the categories you'd expect to find in a jeopardy-style event: networking, forensics, web, reverse engineering, and binary exploitation. There will be challenges for all skill levels, from beginners to ninjas.
We're bringing our Hacker Dojo training to Black Hat this year, in an effort to make our CTF more beginner friendly and encourage learning. Our staff will be giving short talks on a variety of tools and techniques which can be applied to solve our challenges. Write-ups will also be available after the event, so participants can learn how to solve any challenges they may have missed.
We see CTFs as a valuable tool for increasing employee morale, identifying talent across specific cybersecurity disciplines, and providing training that mirrors the skills security practitioners use every day. It's a tool we've used internally at Booz Allen for several years now, and a service offering we now regularly provide to our clients in conjunction with our other advanced cyber training offerings.
Q: What exactly will participants learn from your "Crash Course in Data Science for Hackers" at Black Hat USA?
Gray: In our Crash Course in Data Science for Hackers (CCDS) students will learn techniques for getting value out of raw data. Unlike other data science training courses out there, our class is specifically geared towards challenges security professionals face, and we tailor all our exercises to make them relevant for people in the security industry.
In our course, students will walk through the entire data science process, learning how to ingest, explore, visualize, make predictions from, and derive value out of raw data. One of the biggest challenges data scientists face is data preparation: study after study shows that data scientists spend between 50% and 90% of their time preparing their data for further analysis. As an antidote to this problem, students in our classes will learn how to prepare their data in an extremely time-efficient manner using Python and advanced data analytics libraries such as Pandas, Matplotlib, Bokeh, and others.
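To make the "ingest and prepare" step concrete, here is a minimal standard-library sketch of the kind of cleanup a security data scientist does before any analysis: turning free-text log lines into structured records, then counting events of interest. The log format and field names are hypothetical; in the course's toolchain the `records` list would typically become a Pandas DataFrame.

```python
import re
from collections import Counter

# Hypothetical raw SSH auth-log lines -- the messy input that eats
# 50-90% of an analyst's time before modeling can start.
raw = [
    "Jul 19 10:02:11 host sshd[412]: Failed password for root from 203.0.113.9",
    "Jul 19 10:02:14 host sshd[412]: Failed password for admin from 203.0.113.9",
    "Jul 19 10:03:01 host sshd[413]: Accepted password for alice from 198.51.100.7",
]

pattern = re.compile(
    r"(?P<status>Failed|Accepted) password for (?P<user>\S+) from (?P<ip>\S+)"
)

# Ingest + structure: free text -> list of dict records.
records = [m.groupdict() for line in raw if (m := pattern.search(line))]

# Explore: count failed attempts per source IP.
failures = Counter(r["ip"] for r in records if r["status"] == "Failed")
print(failures.most_common(1))  # [('203.0.113.9', 2)]
```

The same three steps (parse, structure, aggregate) carry over directly to Pandas, where the aggregation becomes a one-line `groupby`.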
Students will learn how to apply various machine learning techniques to security data as well as how to evaluate the effectiveness and assess the accuracy of various models. While there is a lot of advanced math associated with machine learning, we present the concepts in an easy-to-understand manner, helping students understand how the algorithms work, as well as how to apply them. Finally, students will also be exposed to several cutting-edge big data technologies and will be able to apply the techniques they have learned to extremely large datasets.
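Evaluating model effectiveness, as described above, ultimately reduces to a handful of counts over predictions. The sketch below shows those computations in plain Python for a hypothetical malicious/benign classifier (the labels and predictions are made up for illustration); no ML library is needed to see what accuracy, precision, and recall actually measure.

```python
# Hypothetical ground truth and model predictions:
# 1 = malicious, 0 = benign.
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false alarms
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # missed attacks
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives

accuracy  = (tp + tn) / len(y_true)  # overall correctness
precision = tp / (tp + fp)           # how trustworthy the alerts are
recall    = tp / (tp + fn)           # how many real attacks were caught
print(accuracy, precision, recall)   # 0.75 0.75 0.75
```

In security work, precision and recall usually matter more than raw accuracy: a detector that labels everything benign can score high accuracy on imbalanced data while catching nothing.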
Q: CloudPassage's recent Cloud Security Spotlight report shows that a majority of respondents think traditional security tools are either somewhat or completely ineffective in the cloud context. Why is that the case? Where exactly are traditional tools failing?
Sami Laine: Traditional security tools were designed for traditional data center environments, where you could rely on dedicated security appliances at network choke points, packet inspection through taps and span ports, physical network segmentation, static IP addresses, manual change control processes, and relatively slow rates of change. All of this changes drastically when you move your compute into private cloud and especially public cloud. The modern IT infrastructure is elastic, autoscaling, agile, and flexible, but most importantly it is now being created through automation and orchestration. Traditional security tools were never designed for that, and fundamentally break down when used with modern server and workload orchestration. If your security process requires manual changes to firewall rules, or redirecting all your traffic through existing infrastructure appliances, you can't scale and reap the key benefits that IaaS can provide in speed and agility.
Q: Many of the respondents in the survey felt that security slowed down continuous development methods like DevOps. What can security teams do to address the situation?
Laine: Two big shifts are really called for, and only one of those is about technology. If we start with that, security teams need to evaluate the security technology and tooling they need that can effectively support the DevOps workflow. If you can't create security design patterns based on the system type and tagging that gets automatically applied as those systems are created through the toolchains used, you have a problem. Attaching the security automatically to each server and cloud workload or instance as they get created is the key, and you need a platform with broad security capabilities that are designed to be workload-centric and purpose-built for automation. Secondly, a culture shift is required. Even with the right tooling that supports the DevOps teams' need to run as fast as automation allows, the security teams need to adopt the same cross-functional culture of respect and collaboration that is at the heart of successful DevOps teams and become business-enabling partners in the process – some call this approach DevSecOps or SecDevOps.
Q: The LinkedIn information security community recently named CloudPassage as the Most Innovative Company for Cybersecurity in 2016. To what do you attribute that distinction?
Laine: For large-enterprise security and compliance organizations that must protect agile environments, Halo is the only workload security platform that offers efficient, effective, and flexible security assurance by providing automated orchestration of broad security controls at any scale, on demand. Forrester Research recognized CloudPassage Halo as the only solution found 'not lacking' in Forrester's first-ever cloud workload security report (published June 2015).
Halo's architecture combines elastic cloud compute and big-data technology into a platform purpose-built for security computing; this means protected servers and workloads don't have to sacrifice compute power, as the analytics heavy lifting is shifted to the cloud-based Halo SaaS platform.
The entire platform is environmentally agnostic and uses an HTTPS-based asynchronous messaging protocol; these factors mean Halo agents can be deployed on servers and workloads running anywhere from traditional datacenters to any public IaaS provider. Halo provides instant visibility and continuous protection through a single, easy-to-use interface. Delivered as a service, Halo deploys in minutes and scales dynamically to solve four major challenges:
- Visibility: Within seconds of deploying Halo, security teams gain full visibility into every server within their entire infrastructure. Halo makes it faster and easier to discover vulnerabilities, configuration security problems, policy violations, and potential compromises, so you can secure these critical assets.
- Speed: Halo bakes automated security right into the development process to ensure that all workloads are protected from the start. By automating workload security through the lifecycle from development to QA, staging and production, Halo enables businesses to safely and securely keep up with the speed of modern development and DevOps.
- Segmentation: Halo protects against malicious east-west traffic and lateral movement of threats by reducing the attack surface through micro-segmentation, host firewall orchestration, traffic discovery, and layered protection at every workload.
- Compliance: Halo replaces manual processes with full automation so security teams can track and prove the security posture of all assets in scope of regulations within seconds.
Q: What do you want attendees at Black Hat USA to take away from your sponsored workshop, 'Best practices for workload security moving to cloud environments'?
Laine: This workshop, which we're co-presenting with our customer Xero, really focuses on the shift in thinking needed when moving from traditional data centers to an Infrastructure-as-a-Service delivery model. We plan to discuss the broader context of how this change in the way IT is delivered is impacting the role of security teams, how their tooling needs are going to change, and what the new best practices look like. The takeaways we hope to leave people with include insights into shifts in culture, operational practice, and toolchains, and the role automation can and should play in securing servers and instances in the modern datacenter and cloud.
Q: Neustar has announced its intention to separate into two independent publicly traded companies. What does it mean for your cybersecurity customers?
Evan Uhl: For the past 20 years, Neustar has played an integral role in driving our connected world forward. Neustar has announced that it intends to separate into two public companies in order to embrace new market opportunities, such as the security and management of the Internet of Things. The separation will enable Neustar to enhance its internal growth through distinct management teams, enabling a strategic focus on its security services.
Q: Neustar is working with Limelight to enhance Neustar's SiteProtect DDoS mitigation network. How exactly is the network being upgraded and what capabilities will it support?
Uhl: Neustar has announced its partnership with Limelight Networks to create the world's largest DDoS mitigation network, with a capacity of 10 Tbps and 27 mitigation centers around the globe. The result is more than just the largest network for mitigating DDoS attacks: it moves resolution and risk reduction closer to the attack source, reducing risk to Neustar customers while ensuring optimum network performance during DDoS attacks.
Additionally, this massive network will become a significant source of threat intelligence data, providing detailed insights and automation potential to help organizations more quickly react to threats and protect operational states.
Q: What are the biggest misconceptions that enterprises have when it comes to Cloud-Based Managed DNS Services?
Uhl: One of the biggest misconceptions about cloud-based DNS services is that the same capabilities can be managed internally. Many organizations still seem uneasy with a third party managing their DNS in the cloud, but the fact is that DNS experts are few and far between. Unless an organization has a dedicated DNS expert managing it internally, cloud-based DNS will always be more secure; and even then, internal management will remain more expensive.
DNS networks remain high-profile targets for multiple cyber attacks, including DDoS attacks, so it is not enough to manage your DNS internally; you must also secure it. Additionally, the proliferation of IoT devices can quickly complicate these operations. Cloud-based authoritative and secure DNS networks ensure high performance without compromise.
Q: Neustar has sponsored a workshop interestingly titled 'Crushing the DNSSEC paradox when more security means more vulnerability' at Black Hat USA. Why is it important for Neustar to sponsor a workshop at the event?
Uhl: Neustar is a pioneer and market leader in the DNS and DDoS mitigation space. As a result, it has accumulated, and continues to accumulate, vast knowledge and insight on infrastructure and digital security trends, behavior, and threats. Neustar believes the Black Hat conference is an important venue for communicating important insights, lessons, and counsel to the organization representatives in attendance, specifically those responsible for understanding and preserving the operational integrity and security of companies at risk. This is exemplified in this year's workshop, which examines a recent trend in which the exploitation of DNS services is increasingly used to cripple online businesses. Neustar's study of this problem, and its active practice of combatting and defeating such attacks, offers insight that is particularly relevant to this year's attendees.
Q: Your recent MedJack2 report offers a pretty sobering analysis of the vulnerability of medical devices to malicious attacks. Data theft appears to be the primary motive behind most of the attacks. Should we be worried about other risks as well, such as sabotage of critical medical devices?
Carl Wright: During the research we conducted to produce the MEDJACK.1 and MEDJACK.2 reports we analyzed attacker activity emanating from command and control on several blood gas analyzers. In order to do this we acquired a blood gas analyzer, set it up on a network, and had a remote white hat security team recreate the suspected attack used by actual attackers within client hospitals. Surprisingly, we found that the attackers had complete access to the data and control panel displays within the medical devices.
This access enabled us to modify the readings on the front panel of the device. Based upon this analysis, we believe that attackers have the technical capability to compromise patient data and hence patient safety. Devices like blood gas analyzers are used in critical patient care situations. This suggests patients could be hurt by manipulated device readings and/or activity. To be clear, we have not seen any evidence of attacks, planned or ongoing, that have intentionally compromised patient safety. We have only seen attacks targeting the theft of data or seeking extortion (ransomware).
Q: TrapX recently announced additional VC funding and plans to expand its operations because of growing customer demand for deception technologies. What's driving demand for this class of products?
Wright: The legacy defense-in-depth strategy is based upon establishing and defending a strong perimeter to keep attackers out. This basic strategy has been failing at an increasing rate over the past several years. New best practices call for acquiring technology, like deception, to find attackers that have already penetrated a company's network. A perimeter/endpoint system alone cannot keep the bad actors out; with deception, however, even the smallest SOC team can reduce the time to breach detection and minimize or eliminate damage. This new thinking is driving demand for the deception technology industry. On June 15, 2016, Gartner identified its top ten technologies in cyber security, which included deception technology: "By 2018, Gartner predicts that 10 percent of enterprises will use deception tools and tactics, and actively participate in deception operations against attackers." Many customers within finance, healthcare, and manufacturing have started projects to evaluate and acquire deception technology. Here is a link to the Gartner article: gartner.com/newsroom/id/3347717.
Q: In most cases, when enterprises deploy deception technologies are they using it to block attacks at the perimeter or to detect and to respond to intrusions sooner?
Wright: Deception technology has emerged as a new security layer that solves the problem of detecting malicious insiders who have already bypassed the perimeter. Deception provides a new approach, a significant change from traditional monitoring tools that have been prone to false positives or unable to detect intruders using legitimate devices and applications within the network. Deception offers customers the ability to instantly create decoys that are mixed in with the customer's real assets, alerting on any attacker activity. Once an attacker is detected, the company is provided with targeted, actionable intelligence, allowing it to move rapidly to stop the attack and resume normal business operations.
Q: You are sponsoring a workshop on escalating attacks on the healthcare industry at Black Hat USA. What can attendees expect to learn from it?
Wright: Healthcare cyber attacks have been on the rise, and most recently have contributed to major breaches making news headlines. MEDJACK.1 and MEDJACK.2 are targeted attacks focused on healthcare institutions globally for the purpose of stealing highly valuable healthcare data. These attacks represent a significant threat to hospital operations because of the nature of the attack vector and the targeting of very specific and relatively unprotected medical devices. Our first session will be a panel discussion titled "Healthcare under Siege". Our panelists will come from Texas Health Resources, US CERT, the Department of Homeland Security, and the FDA. Our second session, titled "MEDJACK.2 Escalates Attacks on the Healthcare Industry", will review the research process behind the MEDJACK.2 report.
Q: How exactly does network virtualization help enable better security across both private and public clouds?
Matt De Vincentis: As a security industry, we've done a reasonable job of protecting the perimeter of our private and public clouds. The problem, however, is that once security threats penetrate perimeter defenses, there have been very few controls to stop them from spreading laterally throughout the network.
Network virtualization enables an entirely new security architecture. This is not simply a new security point-product, but an entirely new approach to network security. With a virtualized network, the security intelligence normally found in dedicated hardware appliances is moved into software. This fundamentally changes how and where we can apply security to workloads, regardless of whether they are running in public or private clouds. For example, security controls like firewalling can be enforced at every workload, at the hypervisor layer. This hypervisor-layer security enforcement just isn't possible with a physical network infrastructure. We call this concept micro-segmentation, because it enables you to literally segment the network at the virtual machine or workload level, even if the workloads are on the same layer 2 network or VLAN.
Because these security controls are tied to the actual workload, rather than some piece of physical equipment, the workload is protected whether it's running in a private cloud environment, or moved out to a public cloud. So the same level of security and control can be achieved, regardless of where the workload happens to be running at a particular point in time.
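The key property described here is that policy is keyed to the workload's identity rather than to an IP, VLAN, or appliance. As a conceptual sketch only (this is not the NSX API; the tags, rules, and tier names are invented for illustration), a default-deny, tag-based check applied to every flow looks like this:

```python
from dataclasses import dataclass, field

@dataclass
class Workload:
    """A VM or cloud instance; policy follows its tags, not its address."""
    name: str
    tags: set = field(default_factory=set)

# Policy expressed over workload tags: (src_tag, dst_tag, port, allow).
RULES = [
    ("web", "app", 8443, True),   # web tier may reach the app tier
    ("app", "db",  5432, True),   # app tier may reach the database
]

def allowed(src: Workload, dst: Workload, port: int) -> bool:
    """Hypervisor-level check on every flow; anything unmatched is denied."""
    return any(
        s in src.tags and d in dst.tags and p == port and ok
        for s, d, p, ok in RULES
    )

web, app, db = (Workload(n, {n}) for n in ("web", "app", "db"))
print(allowed(web, app, 8443))  # True: permitted tier-to-tier flow
print(allowed(web, db, 5432))   # False: lateral movement blocked
```

Because the check depends only on the workloads' tags, the same policy holds whether the two VMs share a layer 2 segment in a private cloud or run in different public clouds, which is the portability point made above.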
Q: What are some of the primary use cases driving adoption of VMware NSX?
De Vincentis: The primary use cases driving the adoption of NSX are around automation and security.
Once the network is virtualized and running in software, automating operations is greatly simplified. Entire network constructs, including complex switching, routing, firewalling, and load balancing, can be moved, copied, deleted, and restored without any physical changes to the network. This effectively gives the entire network the operational model of a virtual machine, enabling both private and public clouds to scale efficiently and securely.
From a security perspective, customers are rapidly implementing micro-segmentation, as I was discussing earlier. This doesn't only apply to server workloads though, we're seeing customers deploying NSX to secure their end user VDI and enterprise mobility infrastructure too.
It's also important to note that NSX is a true platform, not a point-product, and it enables our ecosystem of security partners to integrate additional security services directly into NSX. This allows further security protection to be inserted into a network flow. For example, a customer could use the IPS capability delivered by one of our partners with the next-gen firewall capability of another partner, and chain these defenses together. In this example, if the IPS engine detects a potential intrusion attempt, it can dynamically trigger the firewall to isolate the workload from the network, preventing the attack from spreading without manual administrator intervention.
Q: VMware recently announced its plans to purchase Arkin. What does that mean for your customers?
De Vincentis: The Arkin acquisition has actually closed now and we're excited to have them as part of VMware. Arkin brings a number of key capabilities to enhance the operation and security of a virtualized NSX network.
With Arkin, customers gain complete visibility of their network at both the virtual and physical layer. It does things like security group and firewall modeling, as well as continuous monitoring and compliance posture auditing. It has a very intuitive UI and Google-like search capabilities, so customers can quickly monitor their network to ensure health and performance.
Arkin also offers a pre-assessment tool that can be run in a traditional physical network environment to provide a snapshot of data center network flows and recommendations for micro-segmentation.
Q: What are you hoping customers will learn about VMware's NSX security capabilities from the workshop you are sponsoring at Black Hat USA?
De Vincentis: The number one thing I hope customers will learn about VMware's NSX security capabilities is just how easy it is to implement micro-segmentation once the network is virtualized.
Over the years, customers have certainly tried to segment their network using VLANs, physical or virtual firewall appliances, and other methods. But they quickly found this was operationally infeasible and frankly, a nightmare to manage. VMware NSX and network virtualization is finally making micro-segmentation operationally feasible for our customers.