{"id":15472,"date":"2025-03-05T20:20:32","date_gmt":"2025-03-05T19:20:32","guid":{"rendered":"http:\/\/plus.maciejpiasecki.info\/index.php\/2025\/03\/05\/google-gemini-has-been-used-to-generate-ai-deepfake-terrorism\/"},"modified":"2025-03-05T21:11:41","modified_gmt":"2025-03-05T20:11:41","slug":"google-gemini-has-been-used-to-generate-ai-deepfake-terrorism","status":"publish","type":"post","link":"https:\/\/plus.maciejpiasecki.info\/index.php\/2025\/03\/05\/google-gemini-has-been-used-to-generate-ai-deepfake-terrorism\/","title":{"rendered":"Google Gemini has been used to generate AI deepfake terrorism"},"content":{"rendered":"<p>The mass adoption of AI-powered services has opened the door to countless possibilities in virtually every area of the tech industry and consumer market. From easily editing an image to summarizing long documents, artificial intelligence just makes things easier. However, malicious actors also have access to this technology. In a recent report, Google revealed that its AI tools have been used to generate deepfake terrorism content.<br \/>\nIn Australia, big tech companies are required to periodically submit reports to the authorities on their efforts to minimize harmful uses of their products. The Australian eSafety Commission is in charge of receiving and analyzing these reports. Repeated violations of the law expose companies to fines or potential sanctions.<br \/>\nGoogle discloses that Gemini has generated deepfake terrorism and child abuse material<br \/>\nGoogle\u2019s latest security report covers the period from April 2023 to February 2024. According to the Australian agency, the Mountain View giant\u2019s technology was responsible for generating AI deepfake terrorism content. 
Additionally, Google\u2019s report mentions the use of Gemini to generate child abuse material.<br \/>\n\u201cThis underscores how critical it is for companies developing AI products to build in and test the efficacy of safeguards to prevent this type of material from being generated,\u201d said eSafety Commissioner Julie Inman Grant.<br \/>\nGoogle says it received 258 reports of AI deepfake content related to terrorism or violent extremism, along with 86 reports of bad actors generating child exploitation or abuse material with Gemini. Google is stricter about removing child exploitation material: the company uses a hash-matching system to detect such images and remove them as quickly as possible. However, Google does not apply that technology to extremism-related content.<br \/>\nOne of regulators\u2019 main goals regarding artificial intelligence is for companies to establish stricter safeguards to prevent the creation of this type of material. The arrival of ChatGPT in 2022 raised the first concerns in this regard. Years later, the issue remains, although perhaps to a lesser extent.<br \/>\nThe Australian Commission has already sanctioned Telegram and X<br \/>\nThe eSafety Commission praised Google for its transparency in revealing the malicious uses that some criminal actors are making of its AI tools, calling Google\u2019s report a \u201cworld-first insight.\u201d Other firms have not received such favorable words from the agency. 
Telegram and X (formerly Twitter) received fines due to shortcomings in their reports.<br \/>\nThe post Google Gemini has been used to generate AI deepfake terrorism appeared first on Android Headlines.<br \/>\n<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/plus.maciejpiasecki.info\/wp-content\/uploads\/2025\/03\/Google-Logo-AM-AH-3.jpg\" width=\"1600\" height=\"900\"><br \/>\nSource: androidheadlines.com<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The mass adoption of AI-powered services has opened the door to countless possibilities in virtually every area of the tech industry [&hellip;]<\/p>\n","protected":false},"author":67,"featured_media":15473,"comment_status":"false","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-15472","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-bez-kategorii"],"_links":{"self":[{"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/posts\/15472","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/users\/67"}],"replies":[{"embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/comments?post=15472"}],"version-history":[{"count":1,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/posts\/15472\/revisions"}],"predecessor-version":[{"id":15474,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/posts\/15472\/revisions\/15474"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/media\/15473"}],"wp:attachment":[{"href":"https:\/\/plus.maciej
piasecki.info\/index.php\/wp-json\/wp\/v2\/media?parent=15472"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/categories?post=15472"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/tags?post=15472"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}