Generative AI is leaking your sensitive data
Generative AI, while powerful, risks leaking sensitive data when used carelessly. Free-tier AI platforms may use user queries to train their models, potentially exposing confidential information.
To mitigate this, organizations and individuals should prioritize data protection: limit what data is shared, anonymize it where possible, and enforce security protocols. The recent DeepSeek incident shows what happens when those precautions fail.
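One practical form of the anonymization advice above is to redact obvious sensitive substrings before a prompt ever leaves your network. The sketch below is a minimal, hypothetical illustration: the regex patterns and the `redact` helper are assumptions for demonstration, and real deployments would need far broader PII coverage (names, addresses, internal hostnames, and so on).

```python
import re

# Illustrative patterns only -- a real redaction layer needs much wider coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive substrings with placeholder tags before sending text to an AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach me at alice@example.com, key sk-abcdefghijklmnop"))
# -> Reach me at [EMAIL], key [API_KEY]
```

Pattern-based redaction is only a first line of defense; it reduces, rather than eliminates, the exposure risk the article describes.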
According to a report published by Wiz, the exposed data included over a million lines of log entries, digital software keys, backend details, and user chat history from DeepSeek’s AI assistant.
The firm’s researchers found that DeepSeek had inadvertently left an unsecured ClickHouse database accessible online, raising significant security concerns for enterprises and governments globally.
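The core failure here was a database service reachable from the open internet. A basic sanity check any team can run against its own infrastructure is to probe whether a service port answers from outside. The helper below is a generic sketch, not Wiz's methodology; the host and port values are placeholders (ClickHouse's default HTTP interface is commonly on port 8123).

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check whether a ClickHouse HTTP port answers on a host you control.
# if is_port_open("db.internal.example", 8123):
#     print("Port 8123 is reachable -- verify it is not exposed publicly.")
```

If a database port answers from an untrusted network, the fix is to bind it to internal interfaces or place it behind a firewall and authentication, rather than leaving it open as in the incident described above.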
Wiz Chief Technology Officer Ami Luttwak confirmed in a blog post that DeepSeek swiftly acted to secure the database after being alerted.
The security breach comes at a time when DeepSeek has been making headlines for its AI advancements, particularly with its DeepSeek-R1 reasoning model, which has been hailed as a cost-effective alternative to leading US-based AI solutions.
However, this incident underscores a major concern for enterprises adopting AI: data security and the risks associated with rapid AI deployment.