Good technical and philosophical exploration of strong AI. As someone who is scientifically literate but not well-versed in AI, I found the scientific arguments surprisingly accessible. The book falls a bit flat on character development (it also includes perhaps the most unnecessary sex scene I have ever read in a book), but the themes and questions explored make the book worth it anyway.
Read this book for the ideas, not the characters or the writing. We're going to need to reckon with simulation-based life sooner than we think, so it would do everyone some good to reflect on it through fiction or some other entertainment-based means, at the very least.
ALL THAT SAID, I wanted to be more receptive to the ideas in this novel, but the execution was terrible. The writing is very dry; the characters are flat; the pacing is very off, with the most interesting action happening in the last 100 pages.
Stay with it, but make sure you're on a stationary bike or doing something to keep you awake whilst you read.
challenging, reflective, medium-paced
Plot or Character Driven: Plot
Strong character development: Complicated
Loveable characters: Complicated
Diverse cast of characters: No
Flaws of characters a main focus: Yes
challenging, informative, mysterious, reflective, medium-paced
Plot or Character Driven: Plot
Strong character development: No
Loveable characters: No
Diverse cast of characters: No
Flaws of characters a main focus: No
I don't think most of the other reviews do justice to how much of an outlier this novel is. Other books have toyed with the idea of mind uploads, but none have investigated the implications quite like Permutation City. What is it, the book asks, about a sequence of neural states that yields the internal experience of living life? If you record the states and replay them, does the person still experience life? It seems like they should, right, since the same circuits are passing through the same states. But then, what if you shuffle the states? Ultimately, if the experience of thought can be embodied in something that abstract, does it need an embodiment at all?
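To make the thought experiment a bit more concrete, here is a toy sketch of my own (nothing from the novel, and the update rule and numbers are entirely made up): treat a trivial deterministic "mind" as a state machine, record its trajectory, then replay and shuffle the recording. It settles nothing philosophically; it just shows that the replayed and permuted logs contain exactly the same states, which is the puzzle the book worries at.

```python
# Toy illustration (hypothetical, not from the book): a deterministic "mind"
# as a tiny state machine. Run it, record its states, then replay and permute
# the recording -- the permuted log holds exactly the same states.
import random

def step(state: int) -> int:
    """A fixed, deterministic update rule (a stand-in for 'neural dynamics')."""
    return (state * 31 + 7) % 1000

def run(initial: int, n_steps: int) -> list[int]:
    states = [initial]
    for _ in range(n_steps):
        states.append(step(states[-1]))
    return states

original = run(initial=42, n_steps=10)                # the "lived" sequence
replayed = list(original)                             # a verbatim replay
shuffled = random.sample(original, k=len(original))   # same states, new order

print(original)
print(replayed == original)                  # True: replay is state-for-state identical
print(sorted(shuffled) == sorted(original))  # True: same states, different order
```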
It's amazing to me now that this book is almost 30 years old.
I know it's not a fit for everyone, but here's a test that might work: Did you love Douglas Hofstadter's Gödel, Escher, Bach? How about The Mind's I, which Hofstadter coauthored with Daniel Dennett? If so, you should give Egan a try.
Fractious Fakes
What happens when your virtual clone hates your guts? Well, apparently “Panic. Regret. Analysis. Acceptance”, in that order. “People reacted badly to waking up as Copies.” Well, yeah, of course. It’s a bit like finding out your girlfriend is really a transgender biker - a mixture of fearful awe and fascinated interest.
From a literary point of view, Egan has done something both awesome and interesting: he’s created a sort of reverse allegory. Instead of language taking on an alternative meaning from its literal referents, he has people taking on the literal qualities of language - vocabulary, grammar, and effects. You aren’t what you eat but what can be said about you, and programmed, in Permutation City.
The key to Egan’s intention, I think, is in his protagonist’s muttering of a secret password, “Abulafia.” This is a reference to a medieval Kabbalist who, as Kabbalists are wont to do, turned everything into language in order to disorient those who use it - people, in other words - and, paradoxically, thereby to free language-users from the insidious power of the language which is actually using them (See: https://www.goodreads.com/review/show/2456619563).
The device of using a virtual reality ‘Copy’ within a virtual reality world - “urine and feces production optional” - is something Abulafia would have grasped immediately as obvious and necessary given the availability of the technology. This is Tohu, the Shattering of the Vessels through which the original unity of the universe is broken into fragments, both physical and spiritual. Tohu happens psychically as well for individuals. That is, bits of the Self are strewn about creation in a most unsatisfactory and unhappy state.
These spiritual bits can become quite unruly in their condition of fragmented isolation. They are desperate to end their loneliness by re-integrating with the original whole. This is Tikkun, a sort of reconstruction of psychic pieces into a new entity. Paradoxically, of course, such a ‘rebirth’ also involves the ‘death’ of the fragmented Selves. If anything were to impede this process, an aberrant techno-savvy Kabbalist for example, there is an interesting story to be told.
And Egan tells the story masterfully. I can only marvel at how he finds his inspiration for a high-tech tale in an ancient wisdom like Kabbalah, and then proceeds to out-Kabbalah even the Kabbalists with his creativity.
I tried, but this is just not my style of scifi.
The experience of reading Permutation City is akin to having met a very intelligent man at a party and having him attempt, at length, to stage a debate with you over topics that are very interesting but about which you don’t really have any position or any desire to debate, and after all, you’re in the middle of a party.
Mr. Egan is no doubt very intelligent, and the topics in which he is interested are very interesting indeed; however, the experience of reading Permutation City is disappointingly unlike reading a novel.
Two things are jarringly wrong with this book. The first involves an unfortunate plot choice wherein a male character invents something extremely technologically advanced (and far-fetched), and he hires a female coder to help him make it a reality. Once his big idea is revealed to her, she is skeptical and finds his concept absurd and impossible. Then, of course, he turns out to be correct, and her resistance is revealed to have been empty. It seems like Egan had very little concern about gender in this context, because what it amounts to is a whole lotta mansplainin' throughout the story. Call it a side-effect of his story-telling that Egan seems to have been oblivious to, but it's a big negative in the way many aspects of the story are communicated. Egan has positioned the female character in the role of the reader. She's the doubter, just like we are the doubters...because the premise is so far-fetched. And yet she is overcome just as we readers...are supposed to be overcome by Egan's argument. This woman has become little more than a rhetorical device. A strawman, if you will, to be proven shortsighted.
The second aspect of this story that bothered me...was the entire absurd premise itself. Unlike the female character, whose disbelief was falsified by the author's fictional plot (like putting "God" into a story just to prove God is real), I was unconvinced. Egan has an impressive grasp of technological and scientific language. In his first novel in this sequence (not a series), called Quarantine, which I reviewed here (https://www.goodreads.com/review/show/2849960402), Egan did an amazing job taking seriously the Copenhagen Interpretation of quantum theory and envisioning potential repercussions of what it would mean if this theory were accurate. In this book, Egan takes seriously the idea of digitizing consciousness. And frankly, I found it utterly ridiculous. He twists up so many convoluted knots that relate to the idea of digitized consciousness becoming real that it does nothing but demonstrate how farcical such a belief is. Reading such an elaborate story all about the repercussions of digitized consciousness struck me as a hell of a lot of sound and fury signifying nothing.
Those systems in computer science that are called "artificial intelligence" are truly nothing like actual intelligence. They are nothing more than task-based code that gets better at doing its task through optimization. There is nothing in them related to actual consciousness or the ability to not just do something but understand what it's doing. Machine "learning" isn't really learning; it's simply the optimization of variables.
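To put a concrete face on what "optimizing variables" means here, a minimal sketch of my own (not tied to any particular system the reviewer or the novel discusses, and the data are invented): a bare-bones gradient descent that tunes one parameter to fit a few data points. Whatever one makes of the philosophical claim, this loop is the mechanical core of most machine learning.

```python
# Minimal sketch of "learning as optimization": fit y = w * x by nudging w
# downhill on squared error. Purely illustrative; the numbers are made up.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]    # roughly y = 2x

w = 0.0                      # the single "variable" being optimized
lr = 0.01                    # learning rate

for epoch in range(500):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad           # take a small step downhill

print(round(w, 3))           # settles near 2.0 -- no "understanding" required
```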
I have never read a convincing argument for the possibility of artificial intelligence, which is essentially the same as digitizing human consciousness. I think it's a big misunderstanding to believe that a) something fluid can be made digital--that somehow a "snapshot" of the brain at any given moment would capture its total functionality, and b) something physical (the meat of the brain, the physicality of the neurons and their connections) can be digitized. As a result, this whole book felt like a lot of wasted effort. One side-effect to note about digitizing consciousness: if consciousness could be turned into code, then you would (for sure) be eliminating free will. Once something becomes a series of commands, it can no longer make decisions other than reading the next step in its code. The system can be run backwards and forwards, and the decision tree would never change, because code always makes the same decisions as it optimizes.
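The determinism point can be made concrete with another toy sketch of my own (not an argument from the novel; the seed and choices are arbitrary): run the same seeded program twice and the "decision" trace comes out bit-for-bit identical.

```python
# Toy illustration of deterministic replay: with a fixed seed, the program
# "decides" the same things on every run. Hypothetical example only.
import random

def decisions(seed: int, n: int) -> list[str]:
    rng = random.Random(seed)
    return ["left" if rng.random() < 0.5 else "right" for _ in range(n)]

run_1 = decisions(seed=1234, n=8)
run_2 = decisions(seed=1234, n=8)

print(run_1)
print(run_1 == run_2)   # True: the "decision tree" never changes on replay
```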
In support of my skepticism of artificial intelligence, I'm going to paste a portion of an interesting article published this month in Salon magazine.
Artificial intelligence research may have just hit a dead end -- here's why
Thomas Nail, Salon, May 01, 2021
Philip K. Dick's iconic 1968 sci-fi novel, "Do Androids Dream of Electric Sheep?" posed an intriguing question in its title: would an intelligent robot dream? In the 53 years since publication, artificial intelligence research has matured significantly. And yet, despite Dick being prophetic about technology in other ways, the question posed in the title is not something AI researchers are that interested in; no one is trying to invent an android that dreams of electric sheep. Why? Mainly, it's that most artificial intelligence researchers and scientists are busy trying to design "intelligent" software programmed to do specific tasks. There is no time for daydreaming. Or is there? What if reason and logic are not the source of intelligence, but its product? What if the source of intelligence is more akin to dreaming and play? Recent research into the "neuroscience of spontaneous fluctuations" points in this direction. If true, it would be a paradigm shift in our understanding of human consciousness. It would also mean that just about all artificial intelligence research is heading in the wrong direction.
The quest for artificial intelligence grew out of the modern science of computation, started by the English mathematician Alan Turing and the Hungarian-American mathematician John von Neumann 65 years ago. Since then, there have been many approaches to studying artificial intelligence. Yet all approaches have one thing in common: they treat intelligence computationally, i.e., like a computer with input and output of information. Scientists have also tried modeling artificial intelligence on the neural networks of human brains. These artificial neural networks use "deep-learning" techniques and "big data" to approach and occasionally surpass particular human abilities, like playing chess, go, poker, or recognizing faces. But these models also treat the brain like a computer as do many neuroscientists. But is this the right idea for designing intelligence?
The present state of artificial intelligence is limited to what those in the field call "narrow AI." Narrow AI excels at accomplishing specific tasks in a closed system where all possibilities are known. It is not creative and typically breaks down when confronted with novel situations. On the other hand, researchers define "general AI" as the innovative transfer of knowledge from one problem to another. So far, this is what AI has failed to achieve and what many in the field believe to be only an extremely distant possibility. Most AI researchers are even less optimistic about the possibility of a so-called "superintelligent AI" that would become more intelligent than humans due to a hypothetical "intelligence explosion."
Does the brain transmit and receive binary information like a computer? Or, do we think of it this way because, since antiquity, humans have always used their latest technology as a metaphor for describing our brains? There are certainly some ways that the computer-brain metaphor makes sense. We can undoubtedly assign a binary number to a neuron that has either fired "1" or not "0." We can even measure the electrochemical thresholds needed for individual neurons to fire. In theory, a neural map of this information should give us the causal path or "code" for any given brain event. But experimentally, it does not. For starters, this is because neurons do not have fixed voltages for their logic gates like transistors that can determine what will activate "1" or not activate "0" in a given neuron. Decades of neuroscience have experimentally proven that neurons can change their function and firing thresholds, unlike transistors or binary information. It's called "neuroplasticity," and computers do not have it.
Computers also do not have equivalents of chemicals called "neuromodulators" that flow between neurons and alter their firing activity, efficiency, and connectivity. These brain chemicals allow neurons to affect one another without firing. This violates the binary logic of "either/or" and means that most brain activity occurs between an activated and nonactivated state. Furthermore, the cause and pattern of neuron firing are subject to what neuroscientists call "spontaneous fluctuations." Spontaneous fluctuations are neuronal activities that occur in the brain even when no external stimulus or mental behavior correlates to them. These fluctuations make up an astounding 95% of brain activity while conscious thought occupies the remaining 5%. In this way, cognitive fluctuations are like the dark matter or "junk" DNA of the brain. They make up the biggest part of what's happening but remain mysterious.
Neuroscientists have known about these unpredictable fluctuations in electrical brain activity since the 1930s, but have not known what to make of them. Typically, scientists have preferred to focus on brain activity that responds to external stimuli and triggers a mental state or physical behavior. They "average out" the rest of the "noise" from the data. However, precisely because of these fluctuations, there is no universal activation level in neurons that we can call "1." Neurons are constantly firing, but, for the most part, we don't know why. What might be the source of these spontaneous fluctuations? Recent studies in the neuroscience of spontaneous thought suggest that these fluctuations may be related to internal neural mechanics, heart and stomach activity, and tiny physical movements in response to the world. Other experiments have demonstrated that neuronal firing creates electromagnetic fields strong enough to affect and perturb how neighboring neurons may fire.
The brain gets even wilder when we zoom in. Since electrochemical thresholds activate neurons, a single proton could, in principle, be the difference that causes a neuron to fire. If a proton spontaneously jumped out of its atomic bonds, in what physicists call "quantum tunneling," this could cause a cascade of sudden neuron activity. So even at the tiniest measurable level, the neuron's physical structure has a non-binary indeterminacy. Computer transistors have the same problem. The smaller manufacturers make electronics, the smaller the transistor gets, and the more frequently electrons will spontaneously quantum tunnel through the thinner barriers producing errors. This is why computer engineers, just like many neuroscientists, go to great lengths to filter out "background noise" and "stray" electrical fields from their binary signal. This is a big difference between computers and brains. For computers, spontaneous fluctuations create errors that crash the system, while for our brains, it's a built-in feature.
What if these anomalous fluctuations are at the heart of human intelligence, creativity, and consciousness? This is precisely what neuroscientists such as Georg Northoff, Robin Carhart-Harris, and Stanislas Dehaene are showing. They argue that consciousness is an emergent property born from the nested frequencies of synchronized spontaneous fluctuations. Applying this theory, neuroscientists can even tell whether someone is conscious or not just by looking at their brain waves. AI has been modeling itself on neuroscience for decades, but can it follow this new direction? Stanislas Dehaene, for instance, considers the computer model of intelligence "deeply wrong," in part because "spontaneous activity is one of the most frequently overlooked features" of it. Unlike computers, "neurons not only tolerate noise but even amplify it" to help generate novel solutions to complex problems.
"Just as an avalanche is a probabilistic event, not a certain one, the cascade of brain activity that eventually leads to conscious perception is not fully deterministic: the very same stimulus may at times be perceived and at others remain undetected. What makes the difference? Unpredictable fluctuations in neuronal firing sometimes fit with the incoming stimulus, and sometimes fight against it." Accordingly, Dehaene believes that AI would require something akin to synchronized spontaneous fluctuations to be conscious. Johnjoe McFadden, a Professor of Molecular Genetics at the University of Surrey, speculates that spontaneous electromagnetic fluctuations might even have been an evolutionary advantage to help closely packed neurons generate and synchronize novel adaptive behaviors. "Without EM field interactions, AI will remain forever dumb and non-conscious." The German neuroscientist Georg Northoff argues that a "conscious…artificial creature would need to show spatiotemporal mechanisms such as… the nestedness and expansion" of spontaneous fluctuations.
Relatedly, Colin Hales, an artificial intelligence researcher at the University of Melbourne, has observed how strange it is that AI scientists have not yet tried to create an artificial brain in the same way other scientists have made artificial hearts. Instead, AI researchers have created theoretical models of neuron patterns without their corresponding physics. It is as if instead of building airplanes, AI researchers are designing flight simulators that never leave the ground. If contemporary neuroscience is correct, AI cannot be a computer with input and output of binary information. Like the human brain, 95% of its activity would have to be "nested" spontaneous fluctuations akin to our unconscious, wandering, and dreaming minds. Goal-directed and instrumental behaviors would be a tiny fraction of its developed form. If we looked at its electroencephalogram (EEG), it would have to have similar "signatures of consciousness" to what Dehaene has experimentally shown to be necessary. Why would we expect consciousness to exist independently of the signatures that define our own? Yet, that is what AI research is doing. AI would also likely need to make use of the quantum and electrodynamic perturbations that scientists are presently filtering out.
Spontaneous fluctuations come from the physical material of embedded consciousness. There is no such thing as matter-independent intelligence. Therefore, to have conscious intelligence, scientists would have to integrate AI in a material body that was sensitive and non-deterministically responsive to its anatomy and the world. Its intrinsic fluctuations would collide with those of the world like the diffracting ripples made by pebbles thrown in a pond. In this way, it could learn through experience like all other forms of intelligence without pre-programmed commands.
Experimental research confirms that dreams help consolidate memories and facilitate learning. Dreaming is also a state of exceptionally playful and freely associated cognitive fluctuations. If this is true, why should we expect human-level intelligence to emerge without dreams? This is why newborns dream twice as much as adults. In my view, there will be no progress toward human-level AI until researchers stop trying to design computational slaves for capitalism and start taking the genuine source of intelligence seriously: fluctuating electric sheep.
This book is a blast. Classic Egan: huge, complex, and ultimately--well, plausible, if plausible is when you can't really refute it.
It's almost completely irrelevant to the main plot of the book, but the extent to which Egan was able to guess at the general shape of the early-21st century internet & related economy is quietly awe-inspiring.
Like a few other of his books, I find myself having to squint sideways a bit and do some handwaving for a few of his ideas--observer-effect kind of quantum woo. In this case, it's the idea that internally self-consistent narratives are basically self-sustaining, "computing" themselves across any arrangement of matter, anywhere in space-time, that matches up.
Even setting that giant idea aside, there's tons going on here. Egan blows Conway's Game of Life up to a really interesting "artificial life" that is a constant counterpoint to merely "virtual" reality; and he starts delving into what post-humanity could really look like in a virtual world. That's even before he starts getting into a kind of epistemologically warring Platonic-forms plot played out on virtual Von Neumann/Turing machines.
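For readers who haven't met it, Conway's Game of Life (the jumping-off point mentioned above; the novel's own "Autoverse" is a much richer, chemistry-like automaton) can be stated in a few lines. A minimal sketch, assuming a simple set-of-live-cells representation:

```python
# Minimal Conway's Game of Life step, illustrative only. The novel's Autoverse
# is far more elaborate; this just shows the flavor of cellular-automaton life.
from collections import Counter

def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """One generation: count live neighbors, apply Conway's birth/survival rules."""
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A "glider" that crawls across the grid forever under these rules.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))
```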
Characters here are...odd, a bit disjointed, solipsistic, really. Of the four major point-of-view streams (Durham, Maria, Peer, Riemann), only Durham & Maria have any overlap. Riemann's self-imposed hell is an odd narrative choice in such a relatively short novel that's already trying to cram a lot in, and something to ponder. Peer's is the only timeline that feels like much of a genuine arc, as he actually embraces the possibilities of his situation.
The gutsy way that Egan spins really abstract concepts into concrete plot twists is a delight. Qua novel, this book is weak in a lot of ways, but it has stuck with me in a strangely affecting way. Recommended.