{"id":7874,"date":"2021-06-17T22:12:33","date_gmt":"2021-06-17T20:12:33","guid":{"rendered":"http:\/\/plus.maciejpiasecki.info\/index.php\/2021\/06\/17\/facebook-baby-steps-toward-identifying-deep-fake-images-their-source\/"},"modified":"2021-06-18T00:25:40","modified_gmt":"2021-06-17T22:25:40","slug":"facebook-baby-steps-toward-identifying-deep-fake-images-their-source","status":"publish","type":"post","link":"https:\/\/plus.maciejpiasecki.info\/index.php\/2021\/06\/17\/facebook-baby-steps-toward-identifying-deep-fake-images-their-source\/","title":{"rendered":"Facebook Baby Steps Toward Identifying Deep Fake Images &amp; Their Source"},"content":{"rendered":"<p>Facebook and Michigan State University have revealed a new method for identifying deep fake images and tracing them back to their source. Or, at the very least, tracing back to which generative model was used to create the images. The new system, according to reports surrounding the reveal, uses a complex reverse engineering technique. Specifically, to identify patterns behind the AI model used to generate a deep fake image.<br \/>\nThe system works by running images through a Fingerprint Estimation Network (FEN), to parse out patterns \u2014 fingerprints \u2014 in those images. Those fingerprints are effectively built from a set of known variables in deep fake images. With generative models leaving behind measurable patterns in \u201cfingerprint magnitude, repetitive nature, frequency range, and symmetrical frequency response.\u201d<br \/>\nAnd, after feeding those constraints back through the FEN, the method can detect which images are deep fakes. 
Those are then fed back through a system to separate the images via \u201chyperparameters,\u201d which are set to guide the system to self-learn various generative models.<br \/>\nThis is still in its infancy, but it does move one step closer toward identifying and tracing deep fake images.<br \/>\nOne of the big setbacks to the current iteration of the system serves to highlight that this is still new technology, nowhere near ready for primetime. Namely, it can\u2019t detect fake images created by a generative model that it hasn\u2019t been trained on. And there are countless such models in use.<br \/>\nWhat\u2019s more, this is by no means a finalized method for identifying deep fake images from Facebook and MSU. Not only is there no way to be sure that every generative model is accounted for; there also aren\u2019t any other research studies related to this topic, or at the very least no data sets to build up a baseline for comparison. In short, there\u2019s no way of knowing for sure just how good the new AI model is.<br \/>\nThe team behind the project indicates that there is \u201ca much stronger and generalized correlation between generated images and the embedding space of meaningful architecture hyperparameters and loss function types.\u201d And it compares that to a random vector of the same length and distribution. But that\u2019s based on its own, self-created baseline.<br \/>\nSo, without further research, the only takeaway is that the model detects AI-made deep fake images and their source better than a straightforward guess.<br \/>\nWhat could this be used for?<br \/>\nThe goal of the project, as presented by the team, is to develop a way to trace deep fake images back to their source after identifying them. That could potentially make enforcement of misinformation policies and rules easier. 
That is particularly true as it pertains to social media sites and the still-rampant spread of misinformation.<\/p>\n<p>The post Facebook Baby Steps Toward Identifying Deep Fake Images &amp; Their Source appeared first on Android Headlines.<br \/>\n<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/plus.maciejpiasecki.info\/wp-content\/uploads\/2021\/06\/Facebook-Surprise-Illustration-lol-AH-DB.jpg\" width=\"1600\" height=\"900\"><br \/>\nSource: androidheadlines.com<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Facebook and Michigan State University have revealed a new method for identifying deep fake images and tracing them back to [&hellip;]<\/p>\n","protected":false},"author":6,"featured_media":7875,"comment_status":"false","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-7874","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-bez-kategorii"],"_links":{"self":[{"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/posts\/7874","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/comments?post=7874"}],"version-history":[{"count":1,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/posts\/7874\/revisions"}],"predecessor-version":[{"id":7876,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/posts\/7874\/revisions\/7876"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/media\/7875"}],"wp:attachment":[{"href":
"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/media?parent=7874"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/categories?post=7874"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/plus.maciejpiasecki.info\/index.php\/wp-json\/wp\/v2\/tags?post=7874"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}