{"id":15859,"date":"2025-04-03T21:07:01","date_gmt":"2025-04-03T19:07:01","guid":{"rendered":"http:\/\/plus.maciejpiasecki.info\/index.php\/2025\/04\/03\/agi-arrival-could-cause-existential-risks-google-deepmind-says\/"},"modified":"2025-04-03T22:03:32","modified_gmt":"2025-04-03T20:03:32","slug":"agi-arrival-could-cause-existential-risks-google-deepmind-says","status":"publish","type":"post","link":"https:\/\/plus.maciejpiasecki.info\/index.php\/2025\/04\/03\/agi-arrival-could-cause-existential-risks-google-deepmind-says\/","title":{"rendered":"AGI arrival could cause &#039;existential risks,&#039; Google DeepMind says"},"content":{"rendered":"<p>Google is one of the main names in the artificial intelligence field. The company\u2019s DeepMind division produces dozens of developments and technologies that eventually find their way into Google\u2019s core products and services. Recently, Google DeepMind shared a safety paper that warned about the AGI era. However, there are voices opposing the document\u2019s conclusions.<br \/>\nOpenAI coined the term AGI (Artificial General Intelligence) a while ago. It refers to artificial intelligence systems capable of performing virtually any task a human could do.. There are very different views among experts in the field about when AGIs will arrive. Some are optimistic, while others are less so.<br \/>\nSafety paper discloses Google DeepMind\u2019s view of AGI<br \/>\nAccording to Google DeepMind, true AGIs could arrive as early as 2030. \u201c[We anticipate] the development of an Exceptional AGI before the end of the current decade,\u201d the document states. \u201cAn Exceptional AGI is a system that has a capability matching at least the 99th percentile of skilled adults on a wide range of non-physical tasks, including metacognitive tasks like learning new skills.\u201d<br \/>\nHowever, the company doesn\u2019t portray this potential reality as entirely positive. 
The paper warns that these systems could cause \u201csevere harm.\u201d It also says that the arrival of the technology could entail \u201cexistential risks\u201d that \u201cpermanently destroy humanity.\u201d<br \/>\nThe DeepMind team highlights key differences between its \u201cAGI risk mitigation\u201d approach and that of others. Google\u2019s branch says that Anthropic places less emphasis on \u201crobust training, monitoring, and security.\u201d OpenAI, meanwhile, is overly optimistic about \u201cautomating\u201d alignment research, according to DeepMind. Alignment research is an area of AI safety focused on ensuring that AI systems pursue the goals their developers intend.<br \/>\nAI superintelligence may not be viable, the company believes<br \/>\nDeepMind researchers also have doubts about the viability of AI \u201csuperintelligence,\u201d something OpenAI has also referred to. The paper says there is currently no \u201csignificant architectural innovation\u201d that points in that direction. However, they do see the possibility of \u201crecursive AI improvement\u201d with existing technology: that is, AI that can create more advanced AIs through its own research. Even so, DeepMind warns that this would be very dangerous.<br \/>\n\u201cThe transformative nature of AGI has the potential for both incredible benefits as well as severe harms,\u201d the document reads. \u201cAs a result, to build AGI responsibly, it is critical for frontier AI developers to proactively plan to mitigate severe harms.\u201d<br \/>\nThere are experts who disagree with DeepMind<br \/>\nSome voices in the industry disagree with the conclusions of the DeepMind report. Heidy Khlaaf, chief AI scientist at the nonprofit AI Now Institute, argues that it is still too early to expect the AGI concept to be \u201crigorously evaluated scientifically.\u201d Matthew Guzdial, assistant professor at the University of Alberta, says that the recursive AI improvement mentioned by DeepMind is also unrealistic. 
\u201cWe\u2019ve never seen any evidence for it working,\u201d he stated.<br \/>\nSandra Wachter, a researcher studying tech and regulation at Oxford, believes that the real concern is the training of future AIs on \u201cinaccurate outputs.\u201d She appears to be referring to the industry\u2019s growing use of synthetic data, which is data generated by AI models rather than collected from the real world.<br \/>\n\u201cWith the proliferation of generative AI outputs on the internet and the gradual replacement of authentic data, models are now learning from their own outputs that are riddled with mistruths, or hallucinations,\u201d she stated. \u201cAt this point, chatbots are predominantly used for search and truth-finding purposes. That means we are constantly at risk of being fed mistruths and believing them because they are presented in very convincing ways.\u201d<br \/>\nThe post AGI arrival could cause &#8216;existential risks,&#8217; Google DeepMind says appeared first on Android Headlines.<br \/>\n<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/plus.maciejpiasecki.info\/wp-content\/uploads\/2025\/04\/Google-DeepMind-logo-featured.jpg\" width=\"1200\" height=\"630\"><br \/>\nSource: androidheadlines.com<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Google is one of the main names in the artificial intelligence field. 
The company\u2019s DeepMind division produces dozens of developments [&hellip;]<\/p>\n","protected":false},"author":67,"featured_media":15860,"comment_status":"false","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-15859","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-bez-kategorii"],"_links":{"self":[{"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/posts\/15859","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/users\/67"}],"replies":[{"embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/comments?post=15859"}],"version-history":[{"count":1,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/posts\/15859\/revisions"}],"predecessor-version":[{"id":15861,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/posts\/15859\/revisions\/15861"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/media\/15860"}],"wp:attachment":[{"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/media?parent=15859"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/categories?post=15859"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/tags?post=15859"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}