The U.K. has started to draft regulations that will focus on controlling the most powerful AI models such as OpenAI's ChatGPT, Bloomberg reported.
Officials at the Department for Science, Innovation and Technology have begun drafting a proposal to regulate AI models.
UK Seeks Regulation on Powerful AI Models
According to people familiar with the matter, the department is in the early stages of developing legislation to mitigate the potential risks posed by the technology. The sources also said the government would wait until after the next AI conference to launch a consultation.
Meanwhile, officials from the Department for Culture, Media & Sport have proposed an amendment to the U.K.'s copyright legislation. According to the department, companies and individuals must be allowed to opt out of having their data used as training material for language models.
Prime Minister Rishi Sunak's spokesperson, Dave Pares, said the U.K. is not rushing into regulation. However, he emphasized that "we've always been clear that all countries would eventually need to introduce some form of AI legislation."
UK Establishes AI Safety Institute After AI Safety Summit
Last year, the U.K. hosted the first global AI Safety Summit at Bletchley Park. Following the summit, the government created the AI Safety Institute, which is tasked with evaluating the safety of AI models.
Meanwhile, big tech companies such as Google, OpenAI, Meta, and Microsoft have pressed for details on how long AI model testing will take. The companies also want clarification on the implications should a model be found risky.
Although the companies voluntarily submit their AI models for testing, the institute has no policy preventing companies from releasing models that have not been evaluated for safety. The U.K. also lacks the power to force companies to pull their models from the market even if a violation is found.