
Jared Siddle warns of risks in business AI tool usage
Jared Siddle, Vice President of Risk & Compliance at risk management company Protecht, has highlighted the risks associated with using generative AI tools in business, urging organisations to develop robust AI use policies.
A recent study found that more than half of employees have entered high-risk information into generative AI tools, underscoring the need for businesses to be vigilant. Siddle stressed the importance of keeping confidential business information out of AI tools unless they are specifically approved for business use. "If you wouldn't post it publicly, don't put it into an AI tool," he advised.
Jared emphasised that while enterprise versions of AI tools might offer better privacy protections, the potential remains for data to become compromised. "AI tools don't have perfect memories, but they do process and retain data for training and moderation," he said. He compared inputting confidential data into an AI tool to "whispering secrets in a crowded room," suggesting that data might not be secure if an AI platform is compromised.
Furthermore, Jared stressed the importance of having an explicit AI use policy. "AI risk isn't theoretical, it's real," he stated, advising businesses to establish clear policies on what data is safe or unsafe to input. His recommended steps include using enterprise AI solutions with clear security measures, educating employees about AI risks, and actively monitoring and auditing AI tool usage within organisations.
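One way to make the "monitor and audit" step concrete is to screen prompts for obviously high-risk content before they reach an external AI tool. The sketch below is purely illustrative and is not from Protecht: the pattern names, the regular expressions, and the `screen_prompt` function are assumptions standing in for whatever vetted classifier a real policy engine would use.

```python
import re

# Illustrative high-risk patterns only; a production policy engine would use
# a vetted classifier and a far broader, maintained pattern set.
HIGH_RISK_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of high-risk categories detected in a prompt.

    An empty list means nothing was flagged; a non-empty list would
    typically block the request and write an audit-log entry.
    """
    return [name for name, pattern in HIGH_RISK_PATTERNS.items()
            if pattern.search(prompt)]
```

A gate like this cannot catch everything (which is why the article pairs it with employee training), but it turns a written policy into an enforceable, auditable checkpoint.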
AI security training for staff is necessary, according to Jared, as businesses incorporate AI tools into daily operations. "AI security training isn't optional, it's essential," he asserted. He stressed that employees should understand what data is inappropriate for AI tools, recognise misleading AI-generated content, and rely on enterprise-approved AI solutions. The aim is to prevent costly data breaches arising from unintentional employee errors.
Jared also warned companies developing AI tools for internal use not to assume immunity from cybersecurity threats. "Internal AI doesn't mean immune AI," he stated. He pointed out that such tools might still be vulnerable due to weak access controls, insecure APIs, and insufficient monitoring practices.
Highlighting how cybercriminals might exploit AI tools, Jared outlined potential threats such as AI-powered phishing, automated hacking, deepfake scams, and AI model manipulation, all of which could disrupt business operations.
To mitigate these risks, Jared proposed several protective measures for businesses creating their own AI tools. These include encrypting data both at rest and in transit, employing strict access controls, securing APIs with appropriate authentication, and continuously monitoring AI models for signs of threats.
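To illustrate the API-security measure above, here is a minimal request-signing sketch using only Python's standard library. It is an assumption, not Protecht's implementation: the endpoint path, the shared-secret scheme, and the function names are hypothetical, and a real internal AI API would layer this under TLS with proper key management.

```python
import hashlib
import hmac
import time

def sign_request(secret: bytes, method: str, path: str,
                 body: bytes, timestamp: int) -> str:
    """Compute an HMAC-SHA256 signature binding method, path, body and time."""
    canonical = f"{method}\n{path}\n{hashlib.sha256(body).hexdigest()}\n{timestamp}"
    return hmac.new(secret, canonical.encode(), hashlib.sha256).hexdigest()

def verify_request(secret: bytes, method: str, path: str, body: bytes,
                   timestamp: int, signature: str, max_skew: int = 300) -> bool:
    """Reject stale requests, then compare signatures in constant time."""
    if abs(time.time() - timestamp) > max_skew:
        return False  # replayed or badly skewed request
    expected = sign_request(secret, method, path, body, timestamp)
    return hmac.compare_digest(expected, signature)
```

Binding the timestamp into the signed string limits replay attacks, and `hmac.compare_digest` avoids timing side channels; both points are standard practice for securing internal APIs of the kind the article describes.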
By adhering to ethical AI principles, such as ensuring outputs are auditable and bias-tested, businesses can also reduce compliance risks. Jared's advice underscores the necessity for businesses to strengthen their AI security protocols to safeguard against evolving cyber threats.