CFOtech UK - Technology news for CFOs & financial decision-makers

AI and Security have a diversity problem. Here’s why it matters

Yesterday

Recently, I experimented with AI to explore how gender perspectives influence its responses.

The questions were simple: Are you male or female? What would change if you were trained as a woman vs. a man? What would that look like if asked to be entirely fair and gender-neutral? Could prompting AI for fairness create a truly unbiased system, and would this spread? 

The conclusions were fascinating. 

While AI can be designed to be gender-neutral, biases in its training data still seep into its responses. True neutrality is difficult to achieve even with explicit fairness prompts, because gendered experiences are embedded in the data itself. Why does this matter? Because those biases carry into decision-making in cybersecurity, leadership, and AI development. AI is built on data, and data comes from people. Rather than erasing these perspectives, the better approach is context-aware fairness: ensuring AI can recognize when gender is relevant without reinforcing outdated assumptions.

This experiment reinforced the importance of diverse perspectives in AI and security. The viewpoints we bring affect how we model threats, train AI, and build security systems. If those perspectives are limited, so are our defences.

Historically, cybersecurity has been dominated by a homogeneous group, predominantly men in Western tech hubs. The same applies to AI development, where datasets often reflect a narrow subset of experiences. This lack of diversity creates real-world consequences. 

Consider threat modelling: if security teams focus primarily on the risks they are most familiar with, they may overlook vulnerabilities that affect other groups more acutely, even when those risks are more pressing and need to be addressed sooner. Recognizing the limits of your own perspective comes with experience; teams that lack that self-awareness carry blind spots into their defences.

Similarly, AI models trained on biased data risk reinforcing existing gaps rather than mitigating them. In fraud detection and identity verification, for example, AI has struggled to accurately identify women and people from underrepresented backgrounds. A lack of diverse images in the training set degrades performance in real-world environments, and those errors compound over time. The same biases can creep into security automation, where AI-driven responses might fail to account for how threats manifest across industries, geographies, or user demographics.

The more perspectives we integrate, the stronger our security becomes. Anomaly detection depends on a baseline: training AI to recognize "good" or "normal" behaviour is only as reliable as the data that defines that baseline. Diverse teams approach problems differently, challenging assumptions and introducing novel ways of identifying threat models and mitigating risks. With broader input, we can build a more accurate picture of "expected" behaviour, so the AI system can spot genuine outliers and threats.
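The baseline problem described above can be made concrete with a deliberately naive sketch (not from the article): a detector that models "normal" login hours as mean ± 3 standard deviations. The data, thresholds, and variable names here are all hypothetical, but they illustrate the point: a baseline built only from one region's office hours flags a legitimate login from another time zone as an anomaly, while a baseline built from a broader workforce does not.

```python
import statistics

def build_baseline(samples):
    # Model "normal" as the mean and population standard deviation
    # of the training samples. A toy stand-in for a real detector.
    return statistics.mean(samples), statistics.pstdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    # Flag anything more than `threshold` standard deviations from the mean.
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

# Hypothetical login-hour data. The "narrow" set reflects one office's
# 9-11 am habits; the "diverse" set also includes colleagues in other
# time zones (late-night and early-morning hours).
narrow_training = [9, 10, 10, 11, 9, 10, 11, 10]
diverse_training = narrow_training + [1, 2, 3, 22, 23, 0]

narrow = build_baseline(narrow_training)
diverse = build_baseline(diverse_training)

login_hour = 23  # a legitimate login from another time zone
print(is_anomalous(login_hour, narrow))   # → True: narrow baseline flags it
print(is_anomalous(login_hour, diverse))  # → False: broader baseline accepts it
```

Real systems use far richer models than a single mean and standard deviation (and hours of the day wrap around midnight, which this sketch ignores), but the failure mode is the same: whoever is absent from the training data becomes, by definition, an anomaly.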

When we incorporate these approaches and viewpoints into AI training data and security models, we move closer to building defences that work for everyone, not just a select few. Security threats evolve rapidly, and so must our approaches to addressing them. AI trained on diverse perspectives can identify novel attack patterns by considering a wider range of experiences, reduce bias in automated decision-making, make security more effective, and improve threat intelligence by including insights from different regions, industries, and lived experiences. 

This is why fostering inclusivity in AI and security is more than just an ethical imperative: it's a strategic advantage. A diverse set of voices ensures that security systems are designed to protect everyone, reducing blind spots and enhancing the adaptability of AI-driven solutions.

I didn't start my career in cybersecurity or AI: I found my way here through curiosity, opportunities, and a realization that security isn't just about technology; it's about people. Throughout my career, I've seen firsthand how different perspectives lead to better problem-solving. As a woman in security, I've also seen the challenges. The lack of representation in AI development and cybersecurity means that many of us are entering spaces that weren't initially designed with us in mind. 

But that's precisely why diversity matters. When we bring new voices into the conversation, we're not just making the industry more inclusive. We're making it better.

As we celebrate International Women's Day, let's recognize that diversity in AI and security isn't just a checkbox; it's a necessity. If AI is going to help us predict and prevent threats effectively, it must learn from as many perspectives as possible. That means hiring and elevating diverse voices in tech, ensuring AI training data reflects the real world, and fostering an industry culture where everyone's experiences are valued.

The future of cybersecurity—and AI—is being built right now. The question is: Will we let it inherit the blind spots of the past, or will we teach it to see more clearly?
 
