All new artificial intelligence (AI) products developed in China will be subject to a “security assessment” before being released to the public, according to a new draft law unveiled Tuesday by the country’s internet regulator.
“Before providing services to the public that use generative AI products, a security assessment shall be applied for through national internet regulatory departments,” the Cyberspace Administration of China’s draft law states.
The Cyberspace Administration of China (CAC) restrictions come as numerous governments contemplate how to reduce the risks of the nascent technology, which has seen a surge in investment and consumer appeal in recent months following the introduction of OpenAI’s ChatGPT.
They also come as a series of Chinese internet behemoths, including Baidu, SenseTime, and Alibaba, have debuted new artificial intelligence models capable of powering apps ranging from chatbots to image generators in recent weeks.
According to the CAC, China supports AI innovation and application and encourages the use of safe and trustworthy software, tools, and data resources. Nevertheless, content generated by generative AI must adhere to the country’s core socialist values.
Providers accountable for generative AI products
The draft states that providers will be held accountable for the authenticity of the data used to train generative AI products, and that precautions should be taken to avoid discrimination when designing algorithms and selecting training data.
According to the authority, service providers must also require users to disclose their true identities and related information.
Providers who do not follow the guidelines will face penalties, have their services suspended, or possibly face criminal charges.
If inappropriate content is generated, platforms must update their models within three months to prevent similar content from being created again. The public is invited to comment on the plans until May 10, with the regulations set to go into effect later this year.
Meanwhile, OpenAI’s popular AI chatbot ChatGPT has already been restricted in Italy over a possible violation of data collection laws. Italy’s data protection authority has initiated an investigation into ChatGPT over privacy concerns.
The country’s data protection authority also accuses ChatGPT of neglecting to verify the age of its users, even though the service is meant to be reserved for people aged 13 and up. The agency states it has temporarily prohibited the chatbot from processing the personal data of Italian users.