{"id":1246,"date":"2024-05-23T11:04:08","date_gmt":"2024-05-23T15:04:08","guid":{"rendered":"https:\/\/www.carloswsmith.com\/blog\/?p=1246"},"modified":"2024-05-23T11:04:08","modified_gmt":"2024-05-23T15:04:08","slug":"openai-just-gave-away-the-entire-game","status":"publish","type":"post","link":"https:\/\/www.carloswsmith.com\/blog\/?p=1246","title":{"rendered":"OpenAI Just Gave Away the Entire Game"},"content":{"rendered":"<div class=\"header reader-header reader-show-element\">\n<h1 class=\"reader-title\">OpenAI Just Gave Away the Entire Game<\/h1>\n<div class=\"credits reader-credits\">Charlie Warzel<\/div>\n<div class=\"meta-data\">\n<div class=\"reader-estimated-time\" dir=\"ltr\" data-l10n-args=\"{&quot;range&quot;:&quot;8\u201310&quot;,&quot;rangePlural&quot;:&quot;other&quot;}\" data-l10n-id=\"about-reader-estimated-read-time\">8\u201310 minutes<\/div>\n<\/div>\n<\/div>\n<hr \/>\n<div class=\"content\">\n<div class=\"moz-reader-content reader-show-element\">\n<div id=\"readability-page-1\" class=\"page\">\n<article>\n<header data-event-module=\"hero\">\n<div>\n<div>\n<p>The Scarlett Johansson debacle is a microcosm of AI\u2019s raw deal: It\u2019s happening, and you can\u2019t stop it.<\/p>\n<\/div>\n<\/div>\n<\/header>\n<section data-event-module=\"article body\" data-flatplan-body=\"true\">\n<p data-flatplan-paragraph=\"true\">If you\u2019re looking to understand the philosophy that underpins Silicon Valley\u2019s latest gold rush, look no further than OpenAI\u2019s Scarlett Johansson debacle. The story, <a href=\"https:\/\/www.npr.org\/2024\/05\/20\/1252495087\/openai-pulls-ai-voice-that-was-compared-to-scarlett-johansson-in-the-movie-her\" data-event-element=\"inline link\">according<\/a> to Johansson\u2019s lawyers, goes like this: Nine months ago, OpenAI CEO Sam Altman approached the actor with a request to license her voice for a new digital assistant; Johansson declined. 
She alleges that just two days before the company\u2019s keynote event last week, in which that <a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2024\/05\/openai-gpt4o-siri-iphone\/678371\/\" data-event-element=\"inline link\">assistant was revealed as part of a new system called GPT-4o<\/a>, Altman reached out to Johansson\u2019s team, urging the actor to reconsider. Johansson and Altman allegedly never spoke, and Johansson allegedly never granted OpenAI permission to use her voice. Nevertheless, the company debuted Sky two days later\u2014a program with a voice many believed was alarmingly similar to Johansson\u2019s.<\/p>\n<p data-flatplan-paragraph=\"true\">Johansson told NPR that she was \u201cshocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine.\u201d In response, Altman issued a statement denying that the company had cloned her voice and saying that it had already cast a different voice actor before reaching out to Johansson. (I\u2019d encourage you to <a href=\"https:\/\/www.youtube.com\/watch?v=D9byh4MAsUQ\" data-event-element=\"inline link\">listen for yourself<\/a>.) Curiously, Altman said that OpenAI would take down Sky\u2019s voice from its platform \u201c<a href=\"https:\/\/www.forbes.com\/sites\/antoniopequenoiv\/2024\/05\/21\/sam-altman-apologizes-to-scarlett-johansson-over-openai-chatbot-voice-she-called-eerily-similar-to-hers\/?sh=4a261d152c86\" data-event-element=\"inline link\">out of respect<\/a>\u201d for Johansson. This is a messy situation for OpenAI, complicated by Altman\u2019s own social-media posts. On the day that OpenAI released ChatGPT\u2019s assistant, Altman posted a cheeky, one-word <a href=\"https:\/\/x.com\/sama\/status\/1790075827666796666\" data-event-element=\"inline link\">statement<\/a> on X: \u201cHer\u201d\u2014a reference to the 2013 film of the same name, in which Johansson is the voice of an AI assistant that a man falls in love with. 
Altman\u2019s post is reasonably damning, implying that Altman was aware, even proud, of the similarities between Sky\u2019s voice and Johansson\u2019s.<\/p>\n<p data-flatplan-paragraph=\"true\">On its own, this seems to be yet another example of a tech company blowing past ethical concerns and operating with impunity. But the situation is also a tidy microcosm of the raw deal at the center of generative AI, a technology that is built off data scraped from the internet, <a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2023\/09\/books3-database-generative-ai-training-copyright-infringement\/675363\/\" data-event-element=\"inline link\">generally without the consent of creators or copyright owners<\/a>. Multiple artists and publishers, <a href=\"https:\/\/www.nytimes.com\/2023\/12\/27\/business\/media\/new-york-times-open-ai-microsoft-lawsuit.html\" data-event-element=\"inline link\">including <em>The New York Times<\/em><\/a>, have <a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2024\/02\/generative-ai-lawsuits-copyright-fair-use\/677595\/\" data-event-element=\"inline link\">sued<\/a> AI companies for this reason, but the tech firms remain unchastened, <a href=\"https:\/\/futurism.com\/the-byte\/openai-executive-choked-sora-youtube\" data-event-element=\"inline link\">prevaricating when asked point-blank<\/a> about the provenance of their training data. At the core of these deflections is an implication: The hypothetical superintelligence they are building is too big, too world-changing, too important for prosaic concerns such as copyright and attribution. The Johansson scandal is merely a reminder of AI\u2019s manifest-destiny philosophy: <em>This is happening, whether you like it or not. <\/em><\/p>\n<p data-flatplan-paragraph=\"true\">Altman and OpenAI have been candid on this front. 
The end goal of OpenAI has always been to build a so-called artificial general intelligence, or AGI, that would, in their imagining, alter the course of human history forever, ushering in an unthinkable revolution of productivity and prosperity\u2014a utopian world where jobs disappear, replaced by some form of universal basic income, and humanity experiences quantum leaps in science and medicine. (Or, the machines <a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2023\/06\/ai-regulation-sam-altman-bill-gates\/674278\/\" data-event-element=\"inline link\">cause life on Earth as we know it to end<\/a>.) The stakes, in this hypothetical, are unimaginably high\u2014all the more reason for OpenAI to accelerate progress by any means necessary. Last summer, my colleague Ross Andersen <a href=\"https:\/\/www.theatlantic.com\/magazine\/archive\/2023\/09\/sam-altman-openai-chatgpt-gpt-4\/674764\/\" data-event-element=\"inline link\">described<\/a> Altman\u2019s ambitions thusly:<\/p>\n<div>\n<blockquote><p>As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we\u2019re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.<\/p><\/blockquote>\n<\/div>\n<p data-flatplan-paragraph=\"true\">Part of Altman\u2019s reasoning, he told Andersen, is that AI development is a geopolitical race against autocracies like China. \u201cIf you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI\u201d rather than that of \u201cauthoritarian governments,\u201d he said. He noted that, in an ideal world, AI should be a product of nations. But in <em>this <\/em>world, Altman seems to view his company as akin to its own nation-state. 
Altman, of course, has testified before Congress, urging lawmakers to regulate the technology while also <a href=\"https:\/\/www.nytimes.com\/2023\/05\/16\/technology\/openai-altman-artificial-intelligence-regulation.html#:~:text=Altman%20implored%20lawmakers%20to%20regulate,over%20A.I.'s%20potential%20harms.\" data-event-element=\"inline link\">stressing<\/a> that \u201cthe benefits of the tools we have deployed so far vastly outweigh the risks.\u201d Still, the message is clear: The future is coming, and you ought to let us be the ones to build it.<\/p>\n<p data-flatplan-paragraph=\"true\">Other OpenAI employees have offered a less gracious vision. In a video <a href=\"https:\/\/t.co\/YX3fOERVF5\" data-event-element=\"inline link\">posted<\/a> last fall on YouTube by a group of <a href=\"https:\/\/www.theatlantic.com\/ideas\/archive\/2022\/11\/cryptocurrency-effective-altruism-ftx-sam-bankman-fried\/672149\/\" data-event-element=\"inline link\">effective altruists<\/a> in the Netherlands, three OpenAI employees answered questions about the future of the technology. In response to one question about AGI rendering jobs obsolete, Jeff Wu, an engineer for the company, confessed, \u201cIt\u2019s kind of deeply unfair that, you know, a group of people can just build AI and take everyone\u2019s jobs away, and in some sense, there\u2019s nothing you can do to stop them right now.\u201d He added, \u201cI don\u2019t know. Raise awareness, get governments to care, get other people to care. Yeah. Or join us and have one of the few remaining jobs. I don\u2019t know; it\u2019s rough.\u201d Wu\u2019s colleague Daniel Kokotajlo jumped in with the justification. \u201cTo add to that,\u201d he said, \u201cAGI is going to create tremendous wealth. 
And if that wealth is distributed\u2014even if it\u2019s not equitably distributed, but the closer it is to equitable distribution, it\u2019s going to make everyone incredibly wealthy.\u201d (There is no evidence to suggest that the wealth will be evenly distributed.)<\/p>\n<p data-flatplan-paragraph=\"true\">This is the unvarnished logic of OpenAI. It is cold, rationalist, and paternalistic. That such a small group of people should be anointed to build a civilization-changing technology is inherently unfair, they note. And yet they will carry on because they have both a vision for the future and the means to try to bring it to fruition. Wu\u2019s proposition, which he offers with a resigned shrug in the video, is telling: <em>You can try to fight this, but you can\u2019t stop it. Your best bet is to get on board.<\/em><\/p>\n<p data-flatplan-paragraph=\"true\">You can see this dynamic playing out in OpenAI\u2019s content-licensing agreements, which it has struck with platforms such as Reddit and news organizations such as Axel Springer and Dotdash Meredith. Recently, a tech executive I spoke with compared these types of agreements to a hostage situation, suggesting they believe that AI companies will find ways to scrape publishers\u2019 websites anyhow, if they don\u2019t comply. Best to get a paltry fee out of them while you can, the person argued.<\/p>\n<p data-flatplan-paragraph=\"true\">The Johansson accusations only compound (and, if true, validate) these suspicions. Altman\u2019s alleged reasoning for commissioning Johansson\u2019s voice was that her familiar timbre might be \u201ccomforting to people\u201d who find AI assistants off-putting. Her likeness would have been less about a particular voice-bot aesthetic and more of an adoption hack or a recruitment tool for a technology that many people didn\u2019t ask for, and seem uneasy about. Here, again, is the logic of OpenAI at work. 
It follows that the company would plow ahead, consent be damned, simply because it might believe the stakes are too high to pivot or wait. When your technology aims to rewrite the rules of society, it stands to reason that society\u2019s current rules need not apply.<\/p>\n<p data-flatplan-paragraph=\"true\">Hubris and entitlement are inherent in the development of any transformative technology. A small group of people needs to feel confident enough in its vision to bring it into the world and ask the rest of us to adapt. But generative AI stretches this dynamic to the point of absurdity. It is a technology that requires a mindset of manifest destiny, of dominion and conquest. It\u2019s not stealing to build the future if you believe it has belonged to you all along.<\/p>\n<\/section>\n<div data-event-module=\"footer\">\n<address id=\"article-writer-0\" data-event-element=\"author\" data-flatplan-bio=\"true\">\n<div>\n<p><a href=\"https:\/\/www.theatlantic.com\/author\/charlie-warzel\/\" data-label=\"https:\/\/www.theatlantic.com\/author\/charlie-warzel\/\" data-action=\"click author - name\">Charlie Warzel<\/a> is a staff writer at <em>The Atlantic<\/em> and the author of its newsletter <a href=\"https:\/\/www.theatlantic.com\/newsletters\/sign-up\/galaxy-brain\/\">Galaxy Brain<\/a>, about technology, media, and big ideas.<\/p>\n<\/div>\n<p>https:\/\/www.theatlantic.com\/technology\/archive\/2024\/05\/openai-scarlett-johansson-sky\/678446\/<\/p>\n<\/address>\n<\/div>\n<\/article>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI Just Gave Away the Entire Game Charlie Warzel 8\u201310 minutes The Scarlett Johansson debacle is a microcosm of AI\u2019s raw deal: It\u2019s happening, and you can\u2019t stop it. If you\u2019re looking to understand the philosophy that underpins Silicon Valley\u2019s latest gold rush, look no further than OpenAI\u2019s Scarlett Johansson debacle. 
The story, according to [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[7,22],"tags":[],"class_list":["post-1246","post","type-post","status-publish","format-standard","hentry","category-ai","category-risks"],"_links":{"self":[{"href":"https:\/\/www.carloswsmith.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/1246","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.carloswsmith.com\/blog\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.carloswsmith.com\/blog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.carloswsmith.com\/blog\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.carloswsmith.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1246"}],"version-history":[{"count":1,"href":"https:\/\/www.carloswsmith.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/1246\/revisions"}],"predecessor-version":[{"id":1247,"href":"https:\/\/www.carloswsmith.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/1246\/revisions\/1247"}],"wp:attachment":[{"href":"https:\/\/www.carloswsmith.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1246"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.carloswsmith.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1246"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.carloswsmith.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1246"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}