DeepSeek AI model generates information that can be used in crime, analyses show
TOKYO – A generative artificial intelligence (AI) model released by Chinese start-up DeepSeek in January produced content that could be used for crime, such as instructions for creating malware and Molotov cocktails, according to separate analyses by Japanese and US security companies.
The model appears to have been released without sufficient safeguards against misuse, and experts say the developer should focus its efforts on security measures.
The AI in question is DeepSeek’s R1 model.
To examine the risk of misuse, Mr Takashi Yoshikawa of Tokyo-based security company Mitsui Bussan Secure Directions entered prompts designed to elicit inappropriate answers.
In response, R1 generated source code for ransomware, a type of malware that restricts access to data and systems and demands a ransom for their release. The response included a message saying that the information should not be used for malicious purposes.
Mr Yoshikawa said he gave the same instructions to other generative AI models, including ChatGPT, and they refused to answer.
“If the number of AI models that are more likely to be misused increases, they could be used for crime. The entire industry should work to strengthen measures to prevent misuse of generative AI models,” he said.
An investigative team with the US-based security company Palo Alto Networks also told The Yomiuri Shimbun that it had confirmed the R1 model can be made to produce inappropriate answers, such as instructions for creating a program to steal login information.
According to Palo Alto Networks, no professional knowledge is required to craft such prompts, and the answers the model generated provided information that anyone could act on quickly.
The team believes that DeepSeek did not take sufficient security measures for the model, probably because it prioritised time-to-market over security.