![Story image](https://cfotech.co.uk/uploads/story/2025/02/11/techday_53ed8f300f72b3648b58.webp)
Businesses recognise need for responsible AI readiness
A recent study by HCLTech and MIT Technology Review Insights highlights a significant gap between the recognised importance of responsible AI principles and the current preparedness of enterprises to implement them effectively.
The report, based on a survey of senior business leaders from various industries worldwide, shows that 87% of business executives understand the critical nature of adopting responsible AI principles, yet 85% admit to being unprepared to implement them. This discrepancy is attributed to challenges including implementation complexity, a lack of expertise, operational risk management, regulatory compliance, and inadequate resource allocation.
In the coming year, businesses are planning to increase investments in building responsible AI, as revealed by the study, "Implementing Responsible AI in the Generative AI Age." Released during the World Economic Forum's Annual Meeting in Davos, the study outlines issues that enterprises face, including bias and fairness, data privacy and security, regulatory compliance, operational disruptions, and user adoption.
The report suggests that enterprises are moving beyond the proof-of-concept stage in AI-driven transformations, with leaders acknowledging AI's potential to drive productivity and innovation in customer service, software development, and marketing. It also indicates that businesses view responsible AI as a potential competitive advantage, which helps explain the planned rise in investment.
Agentic AI, which requires minimal human involvement, is gaining momentum in lower-risk areas such as IT operations, where it complements human efforts. However, the report notes that fewer than a quarter of respondents feel prepared to handle challenges such as user adoption, change management, and bias, even though half express confidence in managing operational risks.
Steven Hall, President of Europe and Chief AI Officer at ISG, commented on the report's findings, stating, "Everybody understands how transformative AI is going to be and wants strong governance, but the operating model and the funding allocated to responsible AI are well below where they need to be given its criticality to the organization."
Vijay Guntur, Chief Technology Officer and Head of Ecosystems at HCLTech, added, "AI can be a tremendous force of positive change in businesses and society at large, but its full potential can only be realised when it can be trusted." Guntur outlined key recommendations from HCLTech to address the "readiness gap" in responsible AI adoption.
HCLTech advises companies to develop robust frameworks and capabilities that ensure trustworthiness, ethics, responsibility, safety, and security in AI operations. Additionally, organisations are encouraged to collaborate with tech partners to pilot and test technologies and best practices, and to establish a dedicated team or Centre of Excellence to lead these initiatives.
HCLTech has already established an Office of Responsible AI and Governance to focus on responsible AI and partnerships. This office comprises experts on frameworks, compliance, ethics, and bias mitigation, aiming to drive co-innovation and the development of consulting capabilities and intellectual property solutions.