{"id":14310,"date":"2024-12-11T20:29:38","date_gmt":"2024-12-11T19:29:38","guid":{"rendered":"http:\/\/plus.maciejpiasecki.info\/index.php\/2024\/12\/11\/say-hello-to-google-gemini-2-0\/"},"modified":"2024-12-11T21:04:10","modified_gmt":"2024-12-11T20:04:10","slug":"say-hello-to-google-gemini-2-0","status":"publish","type":"post","link":"https:\/\/plus.maciejpiasecki.info\/index.php\/2024\/12\/11\/say-hello-to-google-gemini-2-0\/","title":{"rendered":"Say hello to Google Gemini 2.0!"},"content":{"rendered":"<p>We should be pretty familiar with Gemini nowadays, as Google has been squeezing it into all of its products. While Google has made some significant strides with its AI models, we\u2019ve all been using Gemini versions 1 and 1.5. Well, Google just announced the next generation of Gemini, Gemini 2.0.<br \/>\nIt\u2019s important to know that this was just an announcement. We\u2019re not going to see Gemini 2.0 implemented into any services just yet. However, we\u2019re going to see it in one of the most anticipated AI tools that Google showed off.<br \/>\nGoogle just announced Gemini 2.0.<br \/>\nThe company\u2019s announcement shows that we\u2019re truly in the age of AI automation. Google released a short and sweet video detailing some of what the company has in store. In it, we see that Gemini 2.0 will focus on capable AI agents that can perform tasks on behalf of the user. Other companies, like Motorola, are also working on models that can perform actions across apps.<\/p>\n<p>Google\u2019s ambitions are bigger, as you can imagine. Using Gemini 2.0 as a base, the company could develop tools that create an agent that can do just about anything. Multimodality is the key for this to work, as Gemini 2.0 will be able to take in information from several forms of input. Imagine being able to point your phone at an object in the real world and ask questions about it.<br \/>\nDoes that sound familiar? 
This is what Google showed off when it revealed Project Astra. Well, according to the video, Gemini 2.0 will power Project Astra. In case you forgot what it is, it\u2019ll let you point your phone at an object in the real world and ask questions about it. You\u2019ll be able to speak directly to the agent, and you\u2019ll get a vocal response back.<br \/>\nPeople have been waiting for this tool since Google I\/O. We don\u2019t know when the company will release this to the public, but we\u2019re sure that it will be a hit with users.<br \/>\nThis could be a super helpful model<br \/>\nJust know that this video showcases what Google is planning. It\u2019s not a representation of what the company will launch. One thing that Google showed off in the video was a concept of Gemini 2.0 helping someone play Clash of Clans. The player asked Gemini where to attack the enemy base, and it was able to get the context from what was on the player\u2019s screen.<br \/>\nThen, we heard a voice explaining where to attack the base from and why. While that seems rather lazy on the player\u2019s part, it shows that Google wants its AI to extend pretty deeply into your smartphone experience.<br \/>\nProject Mariner<br \/>\nGoogle gave us a sneak peek at its next big project. Project Mariner will have Gemini perform complex tasks from a simple command. Let\u2019s just say you want Gemini to find the most famous post-impressionist painter, find a painting of theirs on Google Arts and Culture, then add some colorful paints to your Etsy cart. That sounds rather specific, but you might be able to do that when Google fully realizes Project Mariner.<br \/>\nRight now, the only model that Google is talking about is called Gemini 2.0 Flash Experimental. This means that the company is in the process of testing it out. So, we don\u2019t know when the company will push a final version to the masses.<br \/>\nThe post Say hello to Google Gemini 2.0! 
appeared first on Android Headlines.<br \/>\n<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/plus.maciejpiasecki.info\/wp-content\/uploads\/2024\/12\/Gemini-2-screenshot.jpg\" width=\"1920\" height=\"1080\"><br \/>\nSource: androidheadlines.com<\/p>\n","protected":false},"excerpt":{"rendered":"<p>We should be pretty familiar with Gemini nowadays, as Google has been squeezing it into all of its products. While [&hellip;]<\/p>\n","protected":false},"author":27,"featured_media":14311,"comment_status":"false","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-14310","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-bez-kategorii"],"_links":{"self":[{"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/posts\/14310","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/users\/27"}],"replies":[{"embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/comments?post=14310"}],"version-history":[{"count":1,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/posts\/14310\/revisions"}],"predecessor-version":[{"id":14312,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/posts\/14310\/revisions\/14312"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/media\/14311"}],"wp:attachment":[{"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/media?parent=14310"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/c
ategories?post=14310"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/tags?post=14310"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}