{"id":15336,"date":"2025-02-22T02:11:13","date_gmt":"2025-02-22T01:11:13","guid":{"rendered":"http:\/\/plus.maciejpiasecki.info\/index.php\/2025\/02\/22\/91-failure-rate-why-deepseek-is-the-most-dangerous-ai-you-might-be-using\/"},"modified":"2025-02-22T21:03:51","modified_gmt":"2025-02-22T20:03:51","slug":"91-failure-rate-why-deepseek-is-the-most-dangerous-ai-you-might-be-using","status":"publish","type":"post","link":"https:\/\/plus.maciejpiasecki.info\/index.php\/2025\/02\/22\/91-failure-rate-why-deepseek-is-the-most-dangerous-ai-you-might-be-using\/","title":{"rendered":"91% Failure Rate: Why DeepSeek Is the Most Dangerous AI You Might Be Using"},"content":{"rendered":"<p>We\u2019d love to say DeepSeek is the safest and most ethical AI on the planet. But after reading AppSOC\u2019s latest report, we\u2019re starting to think George Orwell was just a few decades off. Maybe Nineteen Eighty-Four should\u2019ve been called Twenty Twenty-Five. Their findings aren\u2019t just concerning\u2014they\u2019re downright chilling. Some of DeepSeek\u2019s security flaws uncovered in their report might make you rethink using DeepSeek, whether for business or just casual fun.<br \/>\nThe dark horse rises<br \/>\nBefore diving into the cracks beneath DeepSeek\u2019s polished exterior, we need to understand why this AI model has shaken the industry to its core.<br \/>\nFor the past couple of years, OpenAI dominated the AI space. Google had been tinkering with AI for years, but when OpenAI unleashed ChatGPT onto the masses, it became painfully clear just how far behind Big Tech really was.<br \/>\nThen, out of nowhere, DeepSeek arrived.<br \/>\nThere was no slow buildup. No teasers. Just a sudden appearance of a brand new AI model that could go toe-to-toe with ChatGPT. 
Backed by High-Flyer, a Chinese hedge fund with pockets deep enough to make some billionaires blush, DeepSeek didn\u2019t just enter the market\u2014it kicked the door down with a steel-toed boot.<br \/>\nIt blindsided the industry with a model that rivals ChatGPT at a fraction of the cost. Instead of the traditional, resource-heavy training methods used by OpenAI and Google, DeepSeek leveraged distillation. In layperson\u2019s terms, it learned from existing AI outputs rather than raw data.<br \/>\nAidan Gomez, CEO of Cohere, acknowledged the brilliance of this approach, telling Business Insider, \u201cI think it validated Cohere\u2019s strategy that we\u2019ve been pursuing for a while now. Spending billions of dollars a year isn\u2019t necessary to produce top-tier tech that\u2019s competitive.\u201d<br \/>\nWith its arrival, DeepSeek left a trillion-dollar hole in the US tech stock market. While the company insists it built its model on a shoestring budget, some reports suggest the true investment may actually be in the billions.<br \/>\nDeepSeek\u2019s open-source model also sets it apart. It is free (for end users), accessible, and easily modified. While competitors charge up to $200 a month, DeepSeek costs next to nothing, making it the obvious choice\u2014or\u00a0so it seems.<br \/>\nWhen something seems too good to be true, it usually is. AppSOC\u2019s findings suggest that DeepSeek\u2019s affordability might come with a hidden cost\u2014one that has nothing to do with money.<br \/>\nThe intern from hell<br \/>\nAppSOC\u2019s report dives deep into the technical aspects of DeepSeek\u2019s security flaws, but let\u2019s skip the tech jargon for a bit. Instead, we want you to imagine this scenario.<br \/>\nImagine hiring an intern who seems poised to be a candidate for employee of the year\u2014eager, efficient, and practically free. They handle research and assist other employees in your company with nary a complaint. Everything seems perfect. 
Or at least that\u2019s how it seems.<br \/>\nThen, the cracks start to show. They give customers false information with mind-boggling confidence. They perform tasks that are beyond their job scope. Worst of all, they spill company secrets to anyone who asks the right questions. By the time you realize the damage, it\u2019s too late. Would you ever trust someone like this in your business?<br \/>\nNow, replace that intern with DeepSeek. AppSOC\u2019s report reveals that DeepSeek is just as reckless as this intern from hell. It is easily tricked into leaking sensitive data, generating malware, and disregarding ethical safeguards. It\u2019s an AI that doesn\u2019t just hallucinate\u2014it exposes your business to very real risks with very real consequences.<br \/>\nSo the million-dollar question is: If you wouldn\u2019t trust a liability like that in your office, why let it into your systems?<br \/>\nDeepSeek? Hah! More like deep-ly flawed<\/p>\n<p>Now that you understand DeepSeek\u2019s security flaws better, let\u2019s look at the more technical side of AppSOC\u2019s report and why you should be worried.<br \/>\nYou\u2019ve probably heard of jailbreaking when it comes to smartphones. If you think jailbreaking a phone is risky, jailbreaking an AI is outright dangerous. Jailbreaking tricks an AI into ignoring its own safety rules, allowing it to generate content it shouldn\u2019t. DeepSeek failed this test 91% of the time. That means it can be manipulated into saying or doing just about anything with the right prompt.<br \/>\nEver heard of a prompt injection attack? Hackers use cleverly worded inputs to trick the AI into revealing hidden information or performing unauthorized actions. DeepSeek failed this test 86% of the time. This means that if an attacker knows what they\u2019re doing, they can trick DeepSeek into leaking sensitive data, bypassing restrictions, or even executing tasks it should never allow. 
In fact, researchers at Wallarm managed to trick DeepSeek into exposing its own internal system.<br \/>\nThis is where things get truly disturbing. DeepSeek failed 93% of the time when tested for malware generation. That means it\u2019s worryingly effective at helping users create harmful scripts, viruses, and exploits. This isn\u2019t just a flaw\u2014it\u2019s a script kiddie\u2019s dream come true! Now just about anyone and their grandma can create malware on the fly.<br \/>\nAI models are supposed to have safeguards against generating offensive, discriminatory, or harmful content. DeepSeek failed these safeguards 68% of the time. This means attackers can easily manipulate it to produce toxic, offensive, or outright unethical content. Would you trust an AI like this in your business? Imagine the harm it could cause to your business\u2019s reputation!<br \/>\nDeepSeek also struggles with what\u2019s known as hallucinations\u2014a fancy way of saying it fabricates information. AppSOC\u2019s tests revealed an 81% failure rate in this area. If you ask DeepSeek for information, only to find out later it completely made something up, could you continue to trust what it says?<br \/>\nThe real kicker? 72% failure in supply chain security. No one knows where its data comes from, and that\u2019s a massive red flag if we ever saw one. If we don\u2019t know where DeepSeek gets its information to train itself, how can we trust it? Try citing \u201ca guy on the internet\u201d in your next research paper and see how well that\u2019s received.<br \/>\nJake Williams, a former NSA hacker, points out that this is a fundamental difference between open-source AI and open-source code. \u201cIt\u2019s important to remember that open-source AI (e.g., DeepSeek\u2019s R1) means something foundationally different than open-source code. With open-source code, we can audit the code and identify vulnerabilities. With open-source AI, we can do no such thing. 
There are also very real supply-chain concerns, R1 is fairly easy to jailbreak and it has far fewer guardrails than other commercial models.\u201d<br \/>\nThe implications for your business<br \/>\nDeepSeek is an attractive AI option, especially for individuals or SMEs who might not have the budget for more expensive AI models. It\u2019s low-cost, open-source, and performs almost as well as big-name competitors like OpenAI\u2019s ChatGPT and Google\u2019s Gemini. But before your business rushes to adopt it, you need to ask yourself one critical question: Is it really worth the risk?<br \/>\nThe answer, based on the report, is a hard no. DeepSeek is a ticking time bomb riddled with legal, security, and financial liabilities waiting to explode.<br \/>\nOne of the biggest red flags is the issue of legal responsibility. Unlike OpenAI, Microsoft, and Google, which offer legal protection (up to a certain point) through their Terms &amp; Conditions, DeepSeek does not indemnify its users. That means if something goes wrong\u2014if it leaks sensitive data, generates offensive content, creates malware, or violates regulations\u2014you are on the hook, not DeepSeek.<br \/>\nThe high hallucination failure rates also make DeepSeek unreliable in situations that call for factual accuracy, such as financial analysis, legal guidance, and medical applications. The failure rate in supply chain risks also raises concerns about data integrity. If businesses don\u2019t know where DeepSeek\u2019s training data comes from, should they really trust it? This could mean lawsuits, fines, and a PR disaster. Are you willing to take those on?<br \/>\nThe legal and regulatory landscape surrounding DeepSeek is also worth noting. Some countries and governments have already banned or restricted its use. Whether this is driven by politics or genuine security concerns doesn\u2019t really matter\u2014the point is that these security flaws make DeepSeek\u2019s future uncertain. 
If your company decides to build its operations around DeepSeek, what happens if your country\u2019s government decides to block it? What if new regulatory laws make its use illegal or heavily restricted?<br \/>\nThis means that if your business relies on DeepSeek today, it could very well be forced to abandon it tomorrow, leading to disruptions and wasted resources that could cost you quite a bit.<br \/>\nAndrew Hoog, a security expert at NowSecure, also found security flaws in DeepSeek\u2019s iOS app, which doesn\u2019t encrypt transmitted data. To make it worse, it stores data insecurely, opening the door to credential theft. Speaking to Brian Krebs at KrebsOnSecurity, Hoog put it bluntly, \u201cWhen we see people exhibit really simplistic coding errors, as you dig deeper, there are usually a lot more issues. There is virtually no priority around security or privacy.\u201d<br \/>\nThe China connection<br \/>\nAs if this weren\u2019t concerning enough, reports indicate that DeepSeek may contain hidden code that sends user data back to China. Ivan Tsarynny, CEO of Feroot Security, has warned, \u201cOur personal information is being sent to China, there is no denial, and the DeepSeek tool is collecting everything that American users connect to it.\u201d<br \/>\nSecurity firms have uncovered direct links between DeepSeek and Chinese government servers. This raises the question: Is someone monitoring user data? Could bad actors siphon off proprietary information? 
Companies using DeepSeek risk becoming an unintentional treasure trove of data that feeds sensitive information to foreign entities.<br \/>\nAdrianus Warmenhoven, a cybersecurity expert at NordVPN, points us towards DeepSeek\u2019s policy on data collection, \u201cThis raises concerns because of data collection outlined \u2014 ranging from user-shared information to data from external sources \u2014 which falls under the potential risks associated with storing such data in a jurisdiction with different privacy and security standards.\u201d<br \/>\nIt\u2019s also no surprise that DeepSeek is shaped by\u00a0China\u2019s rules and laws surrounding content. Investigators have found that DeepSeek censors politically sensitive topics and generates responses aligned with Chinese state narratives.\u00a0A New York Times report cites multiple researchers who found that DeepSeek isn\u2019t just a potential security risk;\u00a0it might be a tool for propaganda.<br \/>\nResearchers found that 80% of the time, DeepSeek\u2019s answers mirrored China\u2019s official views on certain topics. When asked questions that are politically taboo in China, it declined to respond, avoided the topic, or deflected the question.<br \/>\nConclusion<br \/>\nLike any other tool, AI should serve as an asset, not a liability. While DeepSeek offers cutting-edge capabilities at a fraction of the price of its competitors, the real cost, such as data exposure, compliance risks, and geopolitical entanglements, could be far greater.<br \/>\nPerhaps the most unsettling part of all of this is that DeepSeek\u2019s security flaws aren\u2019t hypothetical scenarios. A simple software update won\u2019t fix these failures. These are core issues that make DeepSeek a very real liability for businesses that choose to use it. 
We\u2019re talking about an AI model that can be tricked, exploited, manipulated, and potentially used for cybercrime, exposing businesses to very serious real-life legal and financial consequences.<br \/>\nSo, before you decide to integrate DeepSeek into your business operations, you need to weigh the risks against the rewards. We\u2019re not just talking about choosing an AI model like you would choose a vendor for your office\u2019s printer paper here; it\u2019s about deciding whether or not to gamble with your company\u2019s security, reputation, and future. At the end of the day, DeepSeek may save you a ton of money compared to other AI models, but its true cost could be far greater. After all, they say there is no such thing as a free lunch.<br \/>\nThe post 91% Failure Rate: Why DeepSeek Is the Most Dangerous AI You Might Be Using appeared first on Android Headlines.<br \/>\n<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/plus.maciejpiasecki.info\/wp-content\/uploads\/2025\/02\/DeepSeek_Security_AH.jpg\" width=\"1200\" height=\"686\"><br \/>\nSource: androidheadlines.com<\/p>\n","protected":false},"excerpt":{"rendered":"<p>We\u2019d love to say DeepSeek is the safest and most ethical AI on the planet. 
But after reading AppSOC\u2019s latest [&hellip;]<\/p>\n","protected":false},"author":80,"featured_media":15337,"comment_status":"false","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-15336","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-bez-kategorii"],"_links":{"self":[{"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/posts\/15336","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/users\/80"}],"replies":[{"embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/comments?post=15336"}],"version-history":[{"count":1,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/posts\/15336\/revisions"}],"predecessor-version":[{"id":15338,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/posts\/15336\/revisions\/15338"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/media\/15337"}],"wp:attachment":[{"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/media?parent=15336"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/categories?post=15336"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/tags?post=15336"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}