UK government forms AI safety research team with tech industry leaders
The UK Government has announced the formation of an AI safety research team, further bolstering its Frontier AI Taskforce with renowned tech industry partners. The new research cohort comprises Advai, Gryphon Scientific and Faculty AI, all sought out for their expertise and critical insights.
The Frontier AI Taskforce is building an AI safety research team dedicated to assessing and mitigating the risks associated with advanced AI development. This function of the taskforce was first described in its initial progress report, released on 7th September this year, which announced partnerships with notable organisations including RAND, ARC Evals and Trail of Bits.
Since then, the Frontier AI Taskforce has made further strides by partnering with Advai, Gryphon Scientific and Faculty AI. These partnerships will examine how AI could augment human capabilities in specialist fields and audit the rigour of current safeguards. Their findings will feed into roundtable discussions at the forthcoming AI Safety Summit.
These discussions will bring together civil society groups, government representatives, leading AI companies and research experts to make collective headway in understanding AI's evolving landscape. The newly forged contracts are intended to capture and address the manifold complexities of AI technology.
John Kirk, Deputy CEO at ITG, endorses the collaborative approach, stating, "Seeing experts collaborate to tackle cautions and fears surrounding AI is key to enhancing confidence for its widespread adoption. AI has the potential to accelerate business operations in all areas, and the UK establishing such a team helps better position it for tech superpower status." Kirk further notes that this momentum in AI will enable the creative industries, among others, to further enhance their global campaigns.
Faculty AI, a company specialising in applied AI, has served the UK government for nearly a decade, providing software, consulting and services. Its work contributed significantly to building the COVID-19 early warning system and to detecting ISIS propaganda online. In partnership with the Frontier AI Taskforce, Faculty AI will assess how Large Language Models (LLMs) could improve a novice bad actor's capabilities and how future systems may increase this risk.
Advai, a UK-based firm focused on Simple, Safe, Secure AI adoption, brings its proficiency in identifying vulnerabilities and limitations in AI systems in order to improve and defend them. Gryphon Scientific, a physical and life sciences research and consulting company known for its work alongside governments worldwide, will apply its analytical expertise to exploring the potential for LLMs to accelerate progress in the life sciences.
This development follows the Frontier AI Taskforce's progress report last month, which announced the establishment of its expert advisory panel, the appointment of two research directors and several critical partnerships, advancing the path towards safe AI development.