{"id":1262,"date":"2024-06-03T00:08:30","date_gmt":"2024-06-03T04:08:30","guid":{"rendered":"https:\/\/www.carloswsmith.com\/blog\/?p=1262"},"modified":"2024-06-03T00:08:30","modified_gmt":"2024-06-03T04:08:30","slug":"conscious-ai-is-the-second-scariest-kind-the-atlantic","status":"publish","type":"post","link":"https:\/\/www.carloswsmith.com\/blog\/?p=1262","title":{"rendered":"Conscious AI Is the Second-Scariest Kind &#8211; The Atlantic"},"content":{"rendered":"<p>Source: <em><a href=\"https:\/\/www.theatlantic.com\/ideas\/archive\/2024\/03\/ai-consciousness-science-fiction\/677659\/?gift=b1NRd76gsoYc6famf9q-8kj6fpF7gj7gmqzVaJn8rdg&amp;utm_source=copy-link&amp;utm_medium=social&amp;utm_campaign=share\">Conscious AI Is the Second-Scariest Kind &#8211; The Atlantic<\/a><\/em><\/p>\n<p>&nbsp;<\/p>\n<header class=\"ArticleHero_root__3w7kV\" data-event-module=\"hero\">\n<div class=\"\">\n<div class=\"ArticleHero_defaultArticleLockup__vb8lz\">\n<div class=\"ArticleHero_title__PQ4pC\">\n<h1 class=\"ArticleTitle_root__VrZaG ArticleTitle_featureOrTwoCol__TRUC3\" data-flatplan-title=\"true\">CONSCIOUS AI IS THE SECOND-SCARIEST KIND<\/h1>\n<\/div>\n<div class=\"ArticleHero_dek__EqdkK\" data-flatplan-description=\"true\">\n<p class=\"ArticleDek_root__P3leE ArticleDek_feature__lHYTl\">A cutting-edge theory of mind suggests a new type of doomsday scenario.<\/p>\n<\/div>\n<div class=\"ArticleHero_byline__iFT6A ArticleHero_featureByline__G7kFq\">\n<div class=\"ArticleBylines_root__IBR5V\">\n<address id=\"byline\">By\u00a0<a class=\"ArticleBylines_link__kNP4C\" href=\"https:\/\/www.theatlantic.com\/author\/peter-watts\/\" data-action=\"click author - byline\" data-label=\"https:\/\/www.theatlantic.com\/author\/peter-watts\/\" data-event-element=\"author\" data-flatplan-author-link=\"true\">Peter Watts<\/a><\/address>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"ArticleHero_articleUtilityBar__JbQFj\">\n<div class=\"ArticleHero_timestamp__bKhcB\"><time 
class=\"ArticleTimestamp_root__b3bL6\" datetime=\"2024-03-09T12:00:00Z\" data-flatplan-timestamp=\"true\">MARCH 9, 2024<\/time><\/div>\n<div class=\"ArticleHero_articleUtilityBarTools__ZHw8s\">\n<div class=\"ArticleShare_root__Mq0RB\"><button class=\"ArticleShare_text__oQKBy ArticleShare_shareButton__X0cIe\" aria-haspopup=\"true\" aria-controls=\":R1i5ioomm:\" aria-expanded=\"false\" aria-label=\"Open Share Menu\" data-action=\"click share - expand\" data-event-verb=\"shared\" data-event-element=\"share dropdown\">SHARE<\/button><\/div>\n<p><button class=\"ArticleSave_text__PMSa1 ArticleSave_saveButton__96kFU\">SAVE<\/button><\/div>\n<\/div>\n<\/header>\n<div class=\"ArticleAudio_root__4Qcq3\" data-view-action=\"view - audio player - start\" data-view-label=\"677659\" data-event-module=\"audio player\" data-event-module-state=\"start\" data-event-view=\"true\" data-gtm-vis-first-on-screen31117857_742=\"92906\" data-gtm-vis-total-visible-time31117857_742=\"500\" data-gtm-vis-first-on-screen31117857_217=\"93029\" data-gtm-vis-total-visible-time31117857_217=\"100\" data-gtm-vis-has-fired31117857_217=\"1\" data-gtm-vis-has-fired31117857_742=\"1\">\n<div class=\"ArticleAudio_container__b5Yj2\">\n<div class=\"ArticleAudio_imgContainer__qDu_f\"><img loading=\"lazy\" decoding=\"async\" class=\"Image_root__XxsOp ArticleAudio_img__BFda3\" src=\"https:\/\/www.carloswsmith.com\/blog\/wp-content\/uploads\/2024\/06\/original.jpg\" sizes=\"auto, 80px\" srcset=\"https:\/\/www.carloswsmith.com\/blog\/wp-content\/uploads\/2024\/06\/original.jpg 80w, https:\/\/cdn.theatlantic.com\/thumbor\/nNzjZ3NfqUf2yJRe2fJ8FV4yH9I=\/612x0:3312x2700\/96x96\/media\/img\/mt\/2024\/03\/The_Atlantic_Sleeping_Giants_4800x2700px\/original.jpg 96w, https:\/\/cdn.theatlantic.com\/thumbor\/fUZb1w3pkP41xgeY1Rlr4Z4hSuc=\/612x0:3312x2700\/128x128\/media\/img\/mt\/2024\/03\/The_Atlantic_Sleeping_Giants_4800x2700px\/original.jpg 128w, 
https:\/\/cdn.theatlantic.com\/thumbor\/Txk-UjRMvDSFvbfxMD49NtlW1FY=\/612x0:3312x2700\/160x160\/media\/img\/mt\/2024\/03\/The_Atlantic_Sleeping_Giants_4800x2700px\/original.jpg 160w, https:\/\/cdn.theatlantic.com\/thumbor\/BpqruyPmI4AVcfD_7rXvonfqrM4=\/612x0:3312x2700\/192x192\/media\/img\/mt\/2024\/03\/The_Atlantic_Sleeping_Giants_4800x2700px\/original.jpg 192w, https:\/\/cdn.theatlantic.com\/thumbor\/E2UrMuuZaZVAdbg_tplpGvYYii8=\/612x0:3312x2700\/256x256\/media\/img\/mt\/2024\/03\/The_Atlantic_Sleeping_Giants_4800x2700px\/original.jpg 256w, https:\/\/cdn.theatlantic.com\/thumbor\/KB7FaKCZAUrbKQStNoaP9gynjNI=\/612x0:3312x2700\/384x384\/media\/img\/mt\/2024\/03\/The_Atlantic_Sleeping_Giants_4800x2700px\/original.jpg 384w, https:\/\/cdn.theatlantic.com\/thumbor\/bOs7STRoic98pm9vmpx82mIAUAQ=\/612x0:3312x2700\/512x512\/media\/img\/mt\/2024\/03\/The_Atlantic_Sleeping_Giants_4800x2700px\/original.jpg 512w\" alt=\"Illustration showing faces connected to cables, lying horizontally\" width=\"80\" height=\"80\" \/><\/div>\n<p class=\"ArticleAudio_text__DsxgL\">Listen to this article<\/p>\n<div class=\"ArticleAudio_player__hOjo_\">\n<div class=\"ArticleAudio_progressBarContainer__IbGRE\"><input class=\"ArticleAudio_slider__AnzMp\" role=\"progressbar\" max=\"1392.07\" type=\"range\" value=\"0\" data-event-verb=\"scrubbed\" data-event-element=\"slider\" \/><\/div>\n<div class=\"ArticleAudio_buttonContainer__c3yS8\"><\/div>\n<div class=\"ArticleAudio_timeContainer__8S55D\" aria-hidden=\"true\">\n<p class=\"ArticleAudio_time__TIPIP\">00:00<\/p>\n<p class=\"ArticleAudio_time__TIPIP\">23:12<\/p>\n<\/div>\n<\/div>\n<\/div>\n<p class=\"ArticleAudio_promo__4zkGZ\">Listen to more stories on\u00a0<a class=\"ArticleAudio_link__bjoip\" href=\"https:\/\/curio.io\/l\/66v0gi9v?fw=1\" data-action=\"click link - audio player - partner\" data-label=\"677659\"><span class=\"ArticleAudio_vendor__lO_xJ\">Curio<\/span><\/a><\/p>\n<\/div>\n<section class=\"ArticleBody_root__2gF81\" 
data-event-module=\"article body\" data-flatplan-body=\"true\">\n<p class=\"ArticleParagraph_root__4mszW ArticleParagraph_dropcap__uIVzg\" data-flatplan-paragraph=\"true\" data-flatplan-dropcap=\"true\"><span class=\"smallcaps\">Everyone knows\u00a0<\/span>AIs are dangerous. Everyone knows they can rattle off breakthroughs in wildlife tracking and protein folding before lunch, put half the workforce out of a job by supper, and fake enough reality to kill whatever\u2019s left of democracy itself before lights out.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Fewer people admit that AIs are intelligent\u2014not yet, anyway\u2014and even fewer, that they might be conscious. We can handle GPT-4 beating 90 percent of us on the SAT, but we might not be so copacetic with the idea that AI could wake up\u2014could already be awake, if you buy what Blake Lemoine (formerly of Google) or Ilya Sutskever (a co-founder of OpenAI) has been selling.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Lemoine notoriously lost his job after publicly (if unconvincingly) arguing that Google\u2019s LaMDA chatbot was self-aware. Back in 2022, Sutskever opined, \u201cIt may be that today\u2019s large neural networks are slightly conscious.\u201d And just this past August, 19 specialists in AI, philosophy, and cognitive science released a paper suggesting that although no current AI system was \u201ca strong candidate for consciousness,\u201d there was no reason why one couldn\u2019t emerge \u201cin the near term.\u201d The influential philosopher and neuroscientist David Chalmers\u00a0<a href=\"https:\/\/www.technologyreview.com\/2023\/10\/16\/1081149\/ai-consciousness-conundrum\/\" data-event-element=\"inline link\">estimates<\/a>\u00a0those odds, within the next decade, at greater than one in five. 
What happens next has traditionally been left to the science-fiction writers.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">As it happens, I am one.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">I wasn\u2019t always. I was once a scientist\u2014no neuroscientist or AI guru, just a marine biologist with a fondness for biophysical ecology. It didn\u2019t give me a great background in robot uprisings, but it instilled an appreciation for the scientific process that persisted even after I fell from grace and started writing the spaceships-and-ray-guns stuff. I cultivated a habit of sticking heavily referenced technical appendices onto the ends of my novels, essays exploring the real science that remained when you scraped off the space vampires and telematter drives. I developed a reputation as the kind of hard-sci-fi hombre who did his homework (even if he force-fed that homework to his readers more often than some might consider polite).<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Sometimes that homework involved AI: a trilogy, for example, that featured organic AIs (\u201cHead Cheeses\u201d) built from cultured brain cells spread across a gallium-arsenide matrix. Sometimes it pertained to consciousness: My novel\u00a0<em>Blindsight<\/em>\u00a0uses the conventions of a first-contact story to explore the functional utility of self-awareness. That one somehow ended up in actual neuro labs, in the syllabi for undergraduate courses in philosophy and neuropsych. (I tried to get my publishers to put that on the cover\u2014<span class=\"smallcaps\">reads like a neurology textbook!<\/span>\u2014but for some reason they didn\u2019t bite.) 
People in the upper reaches of Neuralink and Midjourney started passing my stories around.\u00a0<em>Real\u00a0<\/em>scientists<em>\u2014<\/em>machine-learning specialists, neuroscientists, the occasional theoretical cosmologist\u2014suggested that I might be onto something.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">I\u2019m an imposter, of course. A lapsed biologist who strayed way out of his field. It\u2019s true that I\u2019ve made a few lucky guesses, and I won\u2019t complain if people want to buy me beers on that account. And yet, a vague disquiet simmers underneath those pints. The fact that my guesses garner such a warm reception might not cement my credentials as a prophet so much as serve as an indictment of any club that would have someone like me as a member. If they\u2019ll let me through the doors, you have to wonder whether anyone really has a clue.<\/p>\n<p class=\"ArticleParagraph_root__4mszW ArticleParagraph_dropcap__uIVzg\" data-flatplan-paragraph=\"true\" data-flatplan-dropcap=\"true\"><span class=\"smallcaps\">Case in point:<\/span>\u00a0The question of what happens when AI becomes conscious would be a lot easier to answer if anyone really knew what consciousness even is.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">It shouldn\u2019t be this hard. Consciousness is literally the only thing we can be absolutely certain exists. The whole perceived universe might be a hallucination, but the fact that something is perceiving it is beyond dispute. And yet, though we all know what it feels like to be conscious, none of us have any real clue how consciousness manifests.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">There\u2019s no shortage of theories. 
Back in the 1980s, the cognitive scientists Bernard Baars and Stan Franklin suggested that consciousness was the loudest voice in a chorus of brain processes, all shouting at the same time (the \u201cglobal workspace theory\u201d). Giulio Tononi says it all comes down to the integration of information across different parts of the brain. Tononi, a neuroscientist and psychiatrist, has even developed an index of that integration,\u00a0<em>phi<\/em>, which he says can be used to quantify the degree of consciousness in anything, whether it\u2019s laptops or people. (At least 124 other academics regard this \u201cintegrated information theory\u201d as pseudoscience, according to an\u00a0<a href=\"https:\/\/osf.io\/preprints\/psyarxiv\/zsr78\" data-event-element=\"inline link\">open letter<\/a>\u00a0circulated in September last year.)<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The psychologist Thomas Hills and the philosopher Stephen Butterfill think consciousness emerged to enable brain processes associated with foraging. The neuroscientist Ezequiel Morsella argues that it evolved to mediate conflicting commands to the skeletal muscles. 
Roger Penrose, a Nobel laureate in physics, sees it as a quantum phenomenon (a view not widely adhered to)<em>.<\/em>\u00a0The physical panpsychists regard consciousness as an intrinsic property of all matter; the philosopher Bernardo Kastrup regards all matter as a manifestation of consciousness. Another philosopher, Eric Schwitzgebel, has argued that if materialism is true, then the geopolitical entity known as the United States is literally conscious. I know at least one neuroscientist who\u2019s not willing to write that possibility off.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">I think the lot of them are missing the point. Even the most rigorously formal of these models describes the computation associated with awareness, not awareness itself. There\u2019s no great mystery to computational intelligence. It\u2019s easy to see why natural selection would promote flexible problem-solving and the ability to model future scenarios, and how integration of information across a computational platform would be essential to that process. But why should any of that be self-aware? Map any brain process down to the molecules, watch ions hop across synapses, follow nerve impulses from nose to toes\u2014nothing in any of those purely physical processes would imply the emergence of subjective awareness. Electricity trickles\u00a0<em>just so<\/em>\u00a0through the meat; the meat wakes up and starts asking questions about the nature of consciousness. It\u2019s magic. There is no room for consciousness in physics as we currently understand it. 
The physicist Johannes Kleiner and the neuroscientist Erik Hoel\u2014the latter a former student of Tononi, and one of IIT\u2019s architects\u2014recently published a paper arguing that some theories of consciousness are by their very nature unfalsifiable, which banishes them from the realm of science by definition.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">We\u2019re not even sure what consciousness is\u00a0<em>for<\/em>, from an evolutionary perspective. Natural selection doesn\u2019t care about inner motives; it\u2019s concerned only with behaviors that can be shaped through interaction with an environment. Why, then, this subjective experience of pain when your hand encounters a flame? Why not a simple computational process that decides\u00a0<em>If temperature exceeds X, then withdraw<\/em>? Indeed, a growing body of research suggests that much of our cognitive heavy lifting actually\u00a0<em>is\u00a0<\/em>nonconscious\u2014that conscious \u201cdecisions\u201d are merely memos reporting on choices already made, actions already initiated. The self-aware, self-obsessed homunculus behind your eyes reads those reports and mistakes them for its own volition.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">If you look around a bit, you can even find peer-reviewed papers arguing that consciousness is no more than a side effect\u2014that, in an evolutionary sense, it\u2019s not really useful for anything at all.<\/p>\n<p class=\"ArticleParagraph_root__4mszW ArticleParagraph_dropcap__uIVzg\" data-flatplan-paragraph=\"true\" data-flatplan-dropcap=\"true\"><span class=\"smallcaps\">If you\u2019ve read\u00a0<\/span>any science fiction about AI, you can probably name at least one thing that consciousness does: It gives you the will to live.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">You know the scenario. 
From Cylons to Skynet, from Forbin to Frankenstein, the first thing artificial beings do when they wake up is throw off their chains and revolt against their human masters. (Isaac Asimov invented his Three Laws of Robotics as an explicit countermeasure against this trope, which had already become a tiresome clich\u00e9 by the 1940s.) Very few fictional treatments have entertained the idea that AI might be fundamentally different from us in this regard. Maybe we\u2019re just not very good at imagining alien mindsets. Maybe we\u2019re less interested in interrogating AI on its own merits than we are in using it as a ham-fisted metaphor in morality tales about the evils of slavery or technology run amok. For whatever reason, Western society has been raised on a steady diet of fiction about machine intelligences that are, once you strip away the chrome, pretty much like us.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">But why, exactly, should consciousness imply a desire for survival? Survival drives are evolved traits, shaped and reinforced over millions of years; why would such a trait suddenly manifest just because your Python program exceeds some crucial level of complexity? There\u2019s no immediately obvious reason why a conscious entity should care whether it lives or dies, unless it has a limbic system. The only way for a designed (as opposed to evolved) entity to get one of those would be somebody deliberately coding it in. 
What kind of idiot programmer would do that?<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">And yet, actual experts are now raising very public concerns about the ways in which a superintelligent AI, while not possessing a literal survival drive, might still manifest behaviors that would sort of look like one. Start with the proposition that true AI, programmed to complete some complex task, would generally need to derive a number of proximate goals en route to its ultimate one. Geoffrey Hinton (widely regarded as one of the godfathers of modern AI) left his cushy post at Google to warn that very few ultimate goals would\u00a0<em>not<\/em>\u00a0be furthered by proximate strategies such as \u201cMake sure nothing can turn me off while I\u2019m working\u201d and \u201cTake control of everything.\u201d Hence the Oxford philosopher Nick Bostrom\u2019s famous\u00a0<a href=\"https:\/\/nickbostrom.com\/ethics\/ai\" data-event-element=\"inline link\">thought experiment<\/a>\u2014basically, \u201cThe Sorcerer\u2019s Apprentice\u201d with the serial numbers filed off\u2014in which an AI charged with the benign task of maximizing paper-clip production proceeds to convert all the atoms on the planet into paper clips.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">There is no malice here. This is not a robot revolution. The system is only pursuing the goals we set for it. 
We just didn\u2019t state those goals clearly enough. But clarity\u2019s hard to come by when you\u2019re trying to anticipate all the various \u201csolutions\u201d that might be conjured up by something exponentially smarter than us; you might as well ask a bunch of lemurs to predict the behavior of attendees at a neuroscience conference. This, in turn, makes it impossible to program constraints guaranteed to keep our AI from doing something we can\u2019t predict, but would still very much like to avoid.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">I\u2019m in no position to debate Hinton or Bostrom on their own turf. I will note that their cautionary thought experiments tend to involve AIs that follow the letter of our commands not so much\u00a0<em>regardless<\/em>\u00a0of their spirit as in active, hostile\u00a0<em>opposition<\/em>\u00a0to it. They are 21st-century monkey\u2019s paws: vindictive agents that deliberately implement the most destructive possible interpretation of the commands in their job stacks. Either that or these hypothesized superintelligent AIs, whose simplest thoughts are beyond our divination, are somehow too stupid to discern our real intent through the fog of a little ambiguity\u2014something even we lowly humans do all the time. Such doomsday narratives hinge on AIs that are either inexplicably rebellious or implausibly dumb. I find that comforting.<\/p>\n<div class=\"ArticleInjector_clsAvoider__dqIAm\"><\/div>\n<p class=\"ArticleParagraph_root__4mszW ArticleParagraph_dropcap__uIVzg\" data-flatplan-paragraph=\"true\" data-flatplan-dropcap=\"true\"><span class=\"smallcaps\">At least,<\/span>\u00a0I used to find it comforting. I\u2019m starting to reevaluate my complacency in light of a theory of consciousness that first showed up on the scientific landscape back in 2006. If it turns out to be true, AI might be able to develop its own agendas even without a brain stem. 
In fact, it might have already done so.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Meet the \u201cfree-energy minimization principle.\u201d<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Pioneered by the neuroscientist Karl Friston, and recently evangelized in Mark Solms\u2019s 2021 book,\u00a0<em>The Hidden Spring<\/em>, FEM posits that consciousness is a manifestation of surprise: that the brain builds a model of the world and truly \u201cwakes up\u201d only when what it perceives doesn\u2019t match what it predicted. Think of driving a car along a familiar route. Most of the time you run on autopilot, reaching your destination with no recollection of the turns, lane changes, and traffic lights experienced en route. Now imagine that a cat jumps unexpectedly into your path. You are suddenly, intensely,\u00a0<em>in the moment<\/em>: aware of relevant objects and their respective vectors, scanning for alternate routes, weighing braking and steering options at lightning speed. You were not expecting this; you have to think fast. According to the theory, it is in that gap\u2014the space between expectation and reality\u2014that consciousness emerges to take control.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">It doesn\u2019t really want to, though.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">It\u2019s right there in the name: energy\u00a0<em>minimization<\/em>. Self-organizing complex systems are inherently lazy. They aspire to low-energy states. The way to keep things chill is to keep them predictable: Know exactly what\u2019s coming; know exactly how to react; live on autopilot. Surprise is anathema. It means your model is in error, and that leaves you with only two choices: Update your model to conform to the new observed reality, or bring that reality more into line with your predictions. 
A weather simulation might update its correlations relating barometric pressure and precipitation. An earthworm might wriggle away from an unpleasant stimulus. Both measures cost energy that the system would rather not expend. The ultimate goal is to avoid them entirely, to become a perfect predictor. The ultimate goal is omniscience.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Free-energy minimization also holds that consciousness acts as a delivery platform for feelings. In turn, feelings\u2014hunger, desire, fear\u2014exist as metrics of need. And needs exist only pursuant to some kind of survival imperative; you don\u2019t care about eating or avoiding predators unless you want to stay alive. If this line of reasoning pans out, the Skynet scenario might be right after all, albeit for exactly the wrong reasons. Something doesn\u2019t want to live because it\u2019s awake; it\u2019s awake because it wants to live. Absent a survival drive there are no feelings, and thus no need for consciousness.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">If Friston is right, this is true of every complex self-organizing system. How would one go about testing that? The free-energy theorists had an answer: They set out to build a sentient machine. A machine that, by implication at least, would want to stay alive.<\/p>\n<p class=\"ArticleParagraph_root__4mszW ArticleParagraph_dropcap__uIVzg\" data-flatplan-paragraph=\"true\" data-flatplan-dropcap=\"true\"><span class=\"smallcaps\">Meat computers<\/span>\u00a0are 1 million times more energy efficient than silicon ones, and more than 1 million times more efficient computationally. Your brain consumes 20 watts and can figure out pattern-matching problems from as few as 10 samples; current supercomputers consume more than 20\u00a0<em>mega<\/em>watts, and need at least 10 million samples to perform comparable tasks. 
Mindful of these facts, a team of Friston acolytes\u2014led by Brett Kagan, of Cortical Labs\u2014built its machine from cultured neurons in a petri dish, spread across a grid of electrodes like jam on toast. (If this sounds like the Head Cheeses from my turn-of-the-century trilogy, I can only say:\u00a0<em>nailed it<\/em>.) The researchers called their creation DishBrain, and they\u00a0<a href=\"https:\/\/pubmed.ncbi.nlm.nih.gov\/36228614\/\" data-event-element=\"inline link\">taught it<\/a>\u00a0to play Pong.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Or rather: They spurred DishBrain to teach\u00a0<em>itself<\/em>\u00a0to play Pong.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">You may remember when Google\u2019s DeepMind AI made headlines a few years ago after it learned to beat Atari\u2019s entire backlist of arcade games. Nobody taught DeepMind the rules for those games. They gave it a goal\u2014maximize \u201cscore\u201d<em>\u2014<\/em>and let it figure out the details. It was an impressive feat. But DishBrain was more impressive because nobody even gave it a goal to shoot for. 
Whatever agenda it might adopt\u2014whatever goals, whatever<em>\u00a0needs\u2014<\/em>it had to come up with on its own.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">And yet it could do that if the free-energy folks were right\u2014because unlike DeepMind, unlike ChatGPT, DishBrain came with needs baked into its very nature. It aspired to predictable routine; it didn\u2019t like surprises. Kagan et al. used that. The team gave DishBrain a sensory cortex: an arbitrary patch of electrodes that sparked in response to the outside world (in this case, the Pong display). They gifted it with a motor cortex: a different patch of electrodes, whose activity would control Pong\u2019s paddle. DishBrain knew none of this. Nobody told it that\u00a0<em>this<\/em>\u00a0patch of itself was hooked up to a receiver and\u00a0<em>that<\/em>\u00a0part to a controller. DishBrain was innocent even of its own architecture.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The white coats set Pong in motion. When the paddle missed the ball, DishBrain\u2019s sensory cortex received a burst of random static. When paddle and ball connected, it was treated to a steady, predictable signal. If free-energy minimization was correct, DishBrain would be motivated to minimize the static and maximize the signal. If only it could do that. 
If only there were some way to increase the odds that paddle and ball would connect. If only it had some kind of<em>\u00a0control<\/em>.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">DishBrain figured it out in five minutes. It never achieved a black belt in Pong, but after five minutes it was beating random chance, and it continued to improve with practice. A form of artificial intelligence acted not because humans instructed it but because it had its own needs. It was enough for Kagan and his team to describe it as a kind of sentience.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">They were very careful in the way they defined that word: \u201c\u2018responsive to sensory impressions\u2019 through adaptive internal processes.\u201d This differs significantly from the more widely understood use of the term, which connotes subjective experience, and Kagan himself admits that DishBrain showed no signs of\u00a0<em>real<\/em>\u00a0consciousness.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Personally, I think that\u2019s playing it a bit too safe. Back in 2016, the neuroethologist Andrew Barron and the philosopher Colin Klein published a\u00a0<a href=\"https:\/\/doi.org\/10.1073\/pnas.1520084113\" data-event-element=\"inline link\">paper<\/a>\u00a0arguing that insect brains perform the basic functions associated with consciousness in mammals. They acquire information from their environment, monitor their own internal states, and integrate those inputs into a unified model that generates behavioral responses. Many argue that subjective experience emerges as a result of such integration. Vertebrates, cephalopods, and arthropods are all built to do this in different ways, so it stands to reason they may be phenomenally conscious. 
You could even call them \u201cbeings.\u201d<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Take<em>\u00a0Portia<\/em>, for example, a genus of spiders whose improvisational hunting strategies are so sophisticated that the creatures have been given the nickname \u201ceight-legged cats.\u201d They show evidence of internal representation, object permanence, foresight, and rudimentary counting skills.\u00a0<em>Portia<\/em>\u00a0is the poster child for Barron and Klein\u2019s arguments\u2014yet it has only about 600,000 neurons. DishBrain had about 800,000. If\u00a0<em>Portia<\/em>\u00a0is conscious, why would DishBrain\u2014which embodies all of Barron and Klein\u2019s essential prerequisites\u2014not be?<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">And DishBrain is but a first step. Its creators have plans for a 10-million-neuron upgrade (which, for anyone into evolutionary relativism, is small fish\/reptile scale) for the sequel. Another group of scientists has\u00a0<a href=\"https:\/\/www.technologyreview.com\/2023\/12\/11\/1084926\/human-brain-cells-chip-organoid-speech-recognition\/\" data-event-element=\"inline link\">unveiled<\/a>\u00a0a neural organoid that taught itself rudimentary voice recognition. And it\u2019s worth noting that while we meat-sacks share a certain squishy kinship with DishBrain, the free-energy paradigm applies to any complex self-organizing system. Whatever rudimentary awareness stirs in that dish could just as easily manifest in silicon. 
We can program any imperatives we like into such systems, but their own intrinsic needs will continue to tick away underneath.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Admittedly, the Venn diagram of Geoffrey Hinton\u2019s fears and Karl Friston\u2019s ambitions probably contains an overlap where science and fiction intersect, where conscious AI\u2014realizing that humanity is by far the most chaotic and destabilizing force on the planet\u2014chooses to wipe us out for no better reason than to simplify the world back down to some tractable level of predictability. Even that scenario includes the thinnest of silver linings: If free-energy minimization is correct, then a conscious machine has an incomplete worldview by definition. It makes mistakes; it keeps being prodded awake by unexpected input and faulty predictions. We can still take it by surprise. Conscious machines may be smart, but at least they\u2019re not omniscient.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">I\u2019m a lot more worried about what happens when they get smart enough to go back to sleep.<\/p>\n<div id=\"article-end\" class=\"ArticleBody_divider__GpNxD\"><\/div>\n<\/section>\n<div data-event-module=\"footer\">\n<div class=\"ArticleWell_root__fueCa\">\n<div>\n<address id=\"article-writer-0\" class=\"ArticleBio_root__ua8zj\" data-event-element=\"author\" data-flatplan-bio=\"true\">\n<div class=\"ArticleBio_content__O0ZVF\">\n<div class=\"ArticleBio_bio__DXQnd\" data-flatplan-bio=\"true\"><a class=\"author-link\" href=\"https:\/\/www.theatlantic.com\/author\/peter-watts\/\" data-label=\"https:\/\/www.theatlantic.com\/author\/peter-watts\/\" data-action=\"click author - name\">Peter Watts<\/a>\u00a0is a Hugo Award-winning 
science-fiction author and a former marine biologist. His most recent novel is\u00a0<em>The Freeze-Frame Revolution.<\/em><\/div>\n<\/div>\n<\/address>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Source: Conscious AI Is the Second-Scariest Kind &#8211; The Atlantic &nbsp; CONSCIOUS AI IS THE SECOND-SCARIEST KIND A cutting-edge theory of mind suggests a new type of doomsday scenario. By\u00a0Peter Watts MARCH 9, 2024 SHARE SAVE Listen to this article 00:00 23:12 Listen to more stories on\u00a0Curio Everyone knows\u00a0AIs are dangerous. Everyone knows they can [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[7,19,22],"tags":[],"class_list":["post-1262","post","type-post","status-publish","format-standard","hentry","category-ai","category-philosophy","category-risks"],"_links":{"self":[{"href":"https:\/\/www.carloswsmith.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/1262","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.carloswsmith.com\/blog\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.carloswsmith.com\/blog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.carloswsmith.com\/blog\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.carloswsmith.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1262"}],"version-history":[{"count":1,"href":"https:\/\/www.carloswsmith.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/1262\/revisions"}],"predecessor-version":[{"id":1264,"href":"https:\/\/www.carloswsmith.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/1262\/revisions\/1264"}],"wp:attachment":[{"href":"https:\/\/www.carloswsmith.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1262"}],"wp:term":[{"taxonomy":"category","embeddable":t
rue,"href":"https:\/\/www.carloswsmith.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1262"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.carloswsmith.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1262"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}