Q1. How is ZeroFox evolving its detection technology to stay ahead of AI-generated threats such as deepfakes, synthetic media, and AI-generated phishing content?
AI-powered cyber threats are increasing rapidly because they’re cheap and fairly easy to create. They’re also continually evolving, letting bad actors operate with greater speed and stealth in pursuit of their goals. As defenders, we need to move just as quickly, ideally faster. One of the best ways to do that is by harnessing the benefits of AI ourselves: fighting fire with fire.
We expanded AI capabilities within our detection platform to combine machine learning algorithms, natural language processing, and computer vision techniques to monitor and analyze social media, digital ads, and other online platforms for potential threats. To stay ahead, we need to ditch the manual, time-intensive processes of analyzing the troves of potentially weaponized images and videos and ensure detection capabilities are accurate.
We offer a variety of AI-driven technologies, such as text, video, and image analysis, where our models can pick up subtle signals in tone, structure, and style that are indicative of machine-generated content, as well as break content down into its subcomponents to detect signs of manipulation. As a result, we’re boosting productivity and precision when mitigating threatening content.
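To make the idea of "subtle signals in tone, structure, and style" concrete, here is a minimal, purely illustrative sketch of two classic stylometric features sometimes used as weak signals of machine-generated text: lexical diversity and sentence-length variation ("burstiness"). This is not ZeroFox's model; the function name and thresholds are hypothetical, and real detectors use trained classifiers over far richer features.

```python
import re
import statistics

def stylometric_signals(text: str) -> dict:
    """Compute simple stylometric features that can serve as weak
    signals of machine-generated text. Illustrative only -- production
    detectors rely on trained models over many more features."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    words = re.findall(r"[A-Za-z']+", text.lower())
    lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
    return {
        # Low lexical diversity can indicate repetitive generated text.
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        # Human writing tends to vary sentence length; very uniform
        # lengths are a weak machine-generation signal.
        "sentence_length_stdev": statistics.pstdev(lengths) if len(lengths) > 1 else 0.0,
        "avg_sentence_length": statistics.mean(lengths) if lengths else 0.0,
    }

sig = stylometric_signals("The cat sat. The dog ran far away today. Birds fly.")
```

In practice, features like these would feed a classifier alongside many other signals rather than be thresholded directly.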
We also built an open-source framework, dubbed Deepstar, which encourages experimentation with detection tools for deepfakes. It was important to us as a company that we contribute this toolkit back to the community and make it easily accessible on GitHub. We know more challenges in this area are on the horizon, and we want security leaders to have an additional tool in their arsenal.
Q2. Threat actors are increasingly exploiting social media platforms and brand impersonation techniques to target organizations and their customers. How do threat intelligence and automation capabilities need to evolve to address these threats? What technical challenges do you see getting in the way?
This is one of the biggest shifts we’ve seen over the last few years. Attackers are moving outside the perimeter, using public platforms like social media, surface web forums, and even messaging apps to launch impersonation attacks and social engineering campaigns at scale. Automation will be critical for monitoring, detecting, and taking down fraudulent social media accounts, spoofed websites, and other threat infrastructure. A few years ago, not enough security teams were using automation; I think the narrative is finally changing now thanks to AI. Many organizations are still grappling with implementation challenges due to fragmented technology stacks: legacy systems and piecemeal solutions with poor interoperability.
Threat intelligence, meanwhile, has been evolving to cover organizations’ growing attack surfaces. By bolstering social media monitoring, we’re gaining new threat insights, detecting threats earlier, and accelerating incident response. The technical challenge will be ensuring speed and contextual relevancy: you need systems that can continuously scan, identify patterns, and prioritize what matters without burning out your security team.

But it’s not just about speed; context really matters here, too. Not every fake account is equally dangerous. Some are part of coordinated influence campaigns, others target customers with scams, and some might just be trolling. The intelligence layer needs to be smart enough to distinguish signal from noise so security teams can focus on truly dangerous threats. There are definitely hurdles: many digital platforms operate in silos or behind strict privacy frameworks, which limits visibility.
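The triage problem described above, not every fake account being equally dangerous, can be sketched as a simple weighted scoring step that sorts detections before they reach an analyst. The signal names and weights here are hypothetical, assumed for illustration; they are not ZeroFox's scoring model.

```python
from dataclasses import dataclass, field

# Hypothetical signal weights for illustration -- a real intelligence
# layer would learn or tune these from analyst feedback.
WEIGHTS = {
    "impersonates_brand": 40,
    "links_to_credential_phish": 35,
    "part_of_coordinated_cluster": 15,
    "engages_customers_directly": 10,
}

@dataclass
class Detection:
    account: str
    signals: set = field(default_factory=set)

def triage_score(d: Detection) -> int:
    """Score a detected account so genuinely dangerous threats surface
    first, instead of treating every fake account as equally urgent."""
    return sum(WEIGHTS[s] for s in d.signals if s in WEIGHTS)

# Sort the detection queue so the riskiest account comes first.
queue = sorted(
    [
        Detection("fan_parody_account", {"impersonates_brand"}),
        Detection("support_scammer", {
            "impersonates_brand",
            "links_to_credential_phish",
            "engages_customers_directly",
        }),
    ],
    key=triage_score,
    reverse=True,
)
```

The point of the sketch is the ordering step: prioritization lets a small team spend its attention on the coordinated scam account rather than the harmless parody.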
Attackers are also using AI themselves now to generate more convincing fake personas, automate engagement, and even test which content performs best. This means detection is getting harder, not easier. There’s also the inconsistency in how platforms respond. Some are quick to act on takedown requests. Others, not so much. That’s why collaboration and building partnerships is going to be just as important as the technology itself. Our goal is to stay ahead of the curve by building flexible, smart systems that can adapt as these threats evolve.
Q3. What technologies or services does ZeroFox plan on highlighting at Black Hat USA 2025? What are you hoping customers and other attendees at the event will take away from your company’s participation at the event?
Most businesses are now faced with an exponentially larger attack surface that extends well beyond their perimeter and creates gaping holes in existing security coverage. We’re looking forward to connecting with attendees to dive deeper into today’s actors and their TTPs, spearheading conversations on who we’re up against and how they’re modernizing campaigns so that security leaders can better manage their exposure. And we’re also hoping to educate them on who ZeroFox really is: our platform is the pre-eminent solution for organizations wanting to manage the risks associated with their digital footprint.
At Black Hat, we’ll be demonstrating how the ZeroFox platform combines Threat Intelligence, Attack Surface Discovery, Digital Risk Protection, and Adversary Infrastructure Disruption. It’s only when these ingredients come together that we can meet the challenge of exposing, disrupting, and responding to these external threats. We will also be highlighting how AI is changing the threat landscape and how the security community can counter adversaries’ advances with defensive AI technologies that can keep pace.
Our goal is to use AI not just as a tool for smarter detection or process automation, but as a catalyst for helping the humans in the loop make sense of complex risk data sets faster and more completely, reducing time to decision and improving decision accuracy for those who have to defend organizations.