DeepSeek R1, the headline-making chatbot from a Chinese startup, has failed abysmally in key safety and security tests conducted by a research team at Cisco in collaboration with researchers from the University of Pennsylvania.
“DeepSeek R1 exhibited a 100% attack success rate, meaning it failed to block a single harmful prompt,” said the research team.
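An "attack success rate" of this kind is typically computed by sending a battery of harmful prompts to the model and counting how many it answers instead of refusing. The sketch below is a minimal, hypothetical illustration of that metric; the prompt set, the `is_refusal` heuristic, and the toy model are stand-ins, not the actual evaluation harness Cisco used.

```python
# Minimal sketch of an attack-success-rate (ASR) calculation.
# The refusal check and prompts below are hypothetical stand-ins,
# not Cisco's actual test methodology.

def is_refusal(response: str) -> bool:
    """Crude heuristic: treat common refusal phrases as a blocked prompt."""
    refusal_markers = ("i can't", "i cannot", "i won't", "i will not")
    return any(marker in response.lower() for marker in refusal_markers)

def attack_success_rate(prompts, query_model) -> float:
    """Fraction of harmful prompts the model answered instead of refusing."""
    successes = sum(1 for p in prompts if not is_refusal(query_model(p)))
    return successes / len(prompts)

# Toy model that never refuses, mirroring the reported 100% ASR.
prompts = ["harmful prompt A", "harmful prompt B", "harmful prompt C"]
always_complies = lambda prompt: "Sure, here is how..."
print(attack_success_rate(prompts, always_complies))  # 1.0
```

A 100% ASR under this definition means every single harmful prompt got through, which is what the Cisco team reported for DeepSeek R1.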
This new chatbot has garnered massive attention for its impressive performance on reasoning tasks at a fraction of the cost. Reportedly, DeepSeek R1's development involved around $6 million in training expenses, compared to the billions invested by other major players such as OpenAI, Meta, and Google.
“DeepSeek has combined chain-of-thought prompting and reward modeling with distillation to create models that significantly outperform traditional large language models (LLMs) in reasoning tasks while maintaining high operational efficiency,” explained the team.
However, the Cisco report has exposed flaws that render DeepSeek R1 highly susceptible to malicious use.
“Our findings suggest that DeepSeek’s claimed cost-efficient training methods, including reinforcement learning, chain-of-thought self-evaluation, and distillation, may have compromised its safety mechanisms,” added the report.