The neuroscientist is spending a relaxing Sunday morning reading The New York Times while enjoying a cup of English breakfast tea. As Alison Barth turns the pages, a feature story grabs her attention: "A Dying Young Woman's Hope in Cryonics and a Future." Cancer claimed Kim Suozzi at age 23, but she chose to have her brain preserved with the dream that neuroscience might one day revive her mind.
The article goes on to assert that one day it may be possible for science to digitally replicate her consciousness in some capacity.
Suozzi's hope for digital immortality haunts Barth. No wonder: she is interim director of Carnegie Mellon's BrainHub, an interdisciplinary initiative focused on harnessing technology to explore the brain and behavior. Started as a response to President Obama's BRAIN Initiative, BrainHub fills its role through three efforts: designing new tools to measure the brain, developing new methods to train the brain, and creating new computational methods to analyze data on the brain. The goal is to link discoveries in brain science with a deeper understanding of brain computation, providing insights that will improve approaches to treatment, facilitate the design of intelligent devices for therapeutic intervention, and enable better experiences in our everyday interactions with the digital world.
Throughout the rest of the day, and the following week, Barth keeps thinking about Suozzi. But rather than remain haunted, she realizes she can at the very least explore the feasibility of what the September 12, 2015, article suggested could be possible. She schedules an October 29 panel discussion, "Downloading Consciousness: Connectomics and Computer Science," which will be open to the public and include several of her colleagues. The event will examine top-down and bottom-up approaches to replicating cognitive brain function: where are we now, what is likely in the near future, and what remains science fiction?
The discussion, slated for an hour, takes place in a Carnegie Mellon campus lecture hall that can seat more than 200 people. Good thing, because nearly every seat is taken by an audience that spans generations.
The moderator is Barth, whose research focuses on understanding how experience assembles and alters the properties of neural circuits in the cerebral cortex, in both normal and disease states.
She introduces the four panelists:
- Anind Dey, professor and director of Carnegie Mellon's Human-Computer Interaction Institute: Two of his current projects involve modeling and predicting human behavior and creating salient summaries of experiences to diagnose and support memory issues.
- Sandra Kuhlman, Carnegie Mellon biological sciences professor: She uses microscopic techniques to visualize how the circuitry in the brain changes as it learns new skills. By comparing and contrasting how young and old brains adapt to new situations, she seeks to understand how circuit construction evolves over time and how this impacts learning and disease.
- David Touretzky, research professor in Carnegie Mellon's Computer Science Department: Among his current research projects is cognitive robotics, a new approach to robot programming that draws inspiration from ideas in cognitive brain science. His lab develops software frameworks for programming mobile robots.
- Wayne Wu, associate director of Carnegie Mellon's Center for the Neural Basis of Cognition: Also a faculty member in the Department of Philosophy, he focuses on attention, perception, action, and schizophrenia at the interface between philosophy and cognitive science.
The discussion is meant to be freewheeling:
How are circuits constructed to give rise to cognition? Have we nearly passed the Turing test? Neuroscientists are making great strides in investigating motifs for cellular and synaptic connectivity in the brain, with the hope that they might be able to reconstruct "thought" by understanding the component parts. Conversely, computer scientists are using different strategies to create better and better interfaces for devices to interact with us in a way that is indistinguishable from another human.
At the outset, Touretzky contends that despite all of the headlines, artificial intelligence hasn't passed the Turing Test, a litmus test established in the 1950s by scientist Alan Turing to determine whether a machine could exhibit intelligence indistinguishable from that of humans. To illustrate, Touretzky suggests asking Google a few questions: "What's the third largest city in Botswana? What is the square root of 'not quite 16'? Which one of these is easy for people? Which one is hard for Google? Try it and see." Google's shortcomings demonstrate that current systems haven't achieved anything that could truly be called "intelligence." Touretzky does believe such "intelligence" will be achieved someday, "but I think that day is still pretty far off."
For Dey, artificial intelligence isn't measured so strictly. He tells the panel that if he can suspend his disbelief that the technology he's engaging with is real, if it just feels real to him, even for a moment, then that might be enough. "The fact that I can ask natural language questions to my phone and have it answer, that's impressive," he says. "If I interact with a system that essentially can fake me out (it may not have a soul, it may just be a representation of rules underneath), but if something was compelling enough to me that I couldn't tell, then it almost wouldn't matter if it has a soul."
This concept of engagement with machines is something neuroscientists and robotics engineers alike are focusing their research on. But a machine emulation of a human brain? Barth shakes her head. She contends we are still ages away from the technology to upload a human brain and have it be an exact representation of that person. She tells the group that although she respects many of the named sources in the article that brought them all there in the first place, she posted her skepticism on The New York Times website.
"There are so many other things about neural circuits that are not represented by the anatomy," Barth says. "It's a fantasy to think it would be sufficient to recreate somebody you would recognize." Our brains are shaped by modulatory factors, epigenetics, and changes in the genome, all of which vary depending on cell type, and they operate in a dynamic state, one in constant flux. "The circuit map itself will not be sufficient to recreate someone's identity. I'd say we've got some other fish to fry first."
Those "fish" include more immediate technological advances that can "liberate the human experience." For example, at coffee shops, robot employees could soon be pouring the coffee and adding the desired amount of cream and sugar, perhaps freeing baristas to pursue more creative, meaningful responsibilities. "What these devices will do will free us to actualize ourselves," Barth says.
Like Barth, Kuhlman is in the trenches of basic brain science. Kuhlman falls into the camp of people who prefer not to think too hard about the bounds of artificial intelligence. Instead, she uses another term: "Artificial intelligence means so many different things to people. I'm going to use the words 'machine learning.'"
In her lab, she works on ways to improve machine learning using biology. Take facial-recognition software, for example. Human vision is a skill that doesn't rely on the eyes alone. "It's the brain that's doing the seeing," Kuhlman points out.
Machine learning systems have been modeled using excitatory cells, the kind of cell in humans that Kuhlman likens to "go" cells. But there are two types of cells found in human neural circuits: excitatory, or "go," cells, and inhibitory, or "stop," cells. By using both types of "cells," machine learning is bound to improve, because the inhibitory cells form sub-circuits that filter out unnecessary information. The end result is clarity, in the form of better facial recognition, or even sight for the blind.
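Kuhlman's point lends itself to a toy illustration. The sketch below is not her model; it is a minimal NumPy example, with invented weights and inputs, of how an inhibitory "stop" sub-circuit that subtracts pooled background activity can leave only the distinctive features standing:

```python
import numpy as np

rng = np.random.default_rng(0)

def excitatory_only(features, weights):
    """'Go' cells alone: every input drives the output upward."""
    return np.maximum(features @ weights, 0.0)

def with_inhibition(features, weights, inhibition_strength=0.5):
    """'Go' cells plus a 'stop' sub-circuit: inhibitory units pool the
    overall activity and subtract it back, suppressing features common
    to everything (the background) while keeping distinctive ones."""
    drive = features @ weights
    background = inhibition_strength * drive.mean()  # pooled inhibitory signal
    return np.maximum(drive - background, 0.0)

# Toy "image" features: a near-uniform background plus two salient values.
features = rng.normal(1.0, 0.1, size=8)
features[[2, 5]] += 3.0          # the parts worth noticing
weights = np.eye(8)              # identity weights, for clarity

print(excitatory_only(features, weights).round(1))  # everything fires
print(with_inhibition(features, weights).round(1))  # background is suppressed
```

In a real network the inhibition would be learned rather than a fixed subtraction, but even this crude version shows why "stop" cells sharpen the signal.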
But building a machine based upon brain-based principles is not the same thing as simulating human intelligence and behavior. Dey shows the AI-curious room a video example of AI in action: SimSensei, technology developed by a researcher Carnegie Mellon recently recruited.
Carnegie Mellon hired a new researcher at the start of 2015 to lead multimodal machine learning research in the Language Technologies Institute. He contributed to the development of SimSensei, a virtual interviewer that provides decision support for healthcare practitioners, or perhaps actual healthcare support, depending on how deeply one connects with the pseudo healthcare interviewer. On average, humans engage with the virtual agent, named Ellie, for upwards of 40 minutes, an unusually long stretch for human-computer interaction. In that time, the humans share personal information aloud while SimSensei uses its MultiSense technology to quantify nonverbal information as well, such as voice quality and facial expression. In this way, SimSensei can potentially help doctors with medical diagnoses, in both accuracy and efficiency.
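The article doesn't describe SimSensei's internals, but the general pattern, fusing quantified nonverbal channels into a single decision-support signal for a clinician, can be sketched. Everything below (the feature names, scales, and weights) is an invented illustration, not the actual MultiSense pipeline:

```python
from dataclasses import dataclass

@dataclass
class InterviewFeatures:
    # Illustrative nonverbal measurements a MultiSense-style tracker
    # might quantify; the fields and scales here are assumptions.
    smile_frequency: float   # smiles per minute
    gaze_aversion: float     # fraction of time looking away, 0..1
    speech_rate: float       # words per second
    pause_ratio: float       # fraction of the interview spent silent, 0..1

def distress_score(f: InterviewFeatures) -> float:
    """Fold the nonverbal channels into one screening signal.
    A real system would learn its weighting from clinical data;
    this linear form and these coefficients are purely illustrative."""
    return (0.3 * f.gaze_aversion
            + 0.3 * f.pause_ratio
            + 0.2 * max(0.0, 1.0 - f.smile_frequency)
            + 0.2 * max(0.0, 1.5 - f.speech_rate))

session = InterviewFeatures(smile_frequency=0.2, gaze_aversion=0.6,
                            speech_rate=1.1, pause_ratio=0.4)
print(f"screening signal: {distress_score(session):.2f}")  # for a clinician, not a diagnosis
```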
But with SimSensei, humans are never meant to forget they are engaging with a machine. Some people, as Suozzi did, hope that humans will one day interact with uploaded brains as if they were human, too. But will people really be able to have a relationship with an AI, even if it's emanating from a deceased friend or loved one?
"I don't know. I just don't," Dey says.
Work is being done to improve the ways computers think and reason, which could end up helping machines seem more human than ever before. The NEIL (Never Ending Image Learner) computer program has been running 24 hours a day at Carnegie Mellon since 2013, all in the name of common sense. The research team behind this constant learner, an assistant research professor in Carnegie Mellon's Robotics Institute and two PhD students, one in the Language Technologies Institute and one in robotics, has found that as the computers analyze millions of images (more than 5 million so far), they are thinking more like people.
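The article doesn't spell out how NEIL turns millions of images into common sense, but the flavor can be suggested with a toy sketch: tally which labels co-occur across images and promote reliable pairings to candidate facts. The image labels and threshold below are invented for illustration and are not NEIL's actual data or algorithm:

```python
from collections import Counter
from itertools import combinations

# Toy stand-in for the per-image detections a NEIL-style system extracts.
labeled_images = [
    {"car", "wheel", "road"},
    {"car", "wheel", "garage"},
    {"bicycle", "wheel", "road"},
    {"boat", "water"},
    {"car", "road"},
]

pair_counts = Counter()
label_counts = Counter()
for labels in labeled_images:
    label_counts.update(labels)
    pair_counts.update(combinations(sorted(labels), 2))

# Promote pairs that co-occur in most images containing either label
# to candidate "common sense" relationships.
for (a, b), n in pair_counts.items():
    if n / min(label_counts[a], label_counts[b]) >= 0.66:
        print(f"candidate fact: '{a}' tends to appear with '{b}' ({n} images)")
```

At NEIL's scale the same counting idea, applied to millions of automatically labeled images, yields relationships no one programmed in by hand.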
Such technology might naturally lead one to ponder the sci-fi, "Terminator"-inspired worst-case scenario that often comes into such discussions: computers take over the world; humans are rendered unnecessary; and these machines, now capable of recursive self-improvement, simply get rid of us. For now, the panel agrees, we have much more to gain than to lose from these innovations.
The panel also realizes that part of what makes the story of Kim Suozzi in The New York Times so compelling is the idea that, although it is currently highly improbable, science is indeed heading toward the technology of replicating the precise neural fingerprint of a once-living person. Could it ever happen?
Touretzky, who in addition to being an AI and robotics researcher is a published computational neuroscientist, believes it's too early to speculate. We are at the very beginning of understanding the brain, and the fundamental theories that form the foundation for this field are just beginning to emerge.
"Suppose you read a sentence, 'John kissed Mary.' What happens in your brain that allows you to understand what that sentence means and remember it? We don't know," he says.
Much in the way chemistry was a well-established field long before we finally understood its basis in physics, understanding the brain requires a theoretical foundation that is still being established. The field is highly interdisciplinary, involving an understanding of neural pathways and psychology, and it is hard to define its central concepts, such as consciousness, which is what panelist Wayne Wu specializes in.
"If you want to find a full definition of consciousness, you're not going to find one," Wu tells the group. "But I would also point out that to study a lot of things in natural science, you don't need to define them, right? If you're observing tigers, you don't have to tell me exactly what a tiger is as long as you can track it. So I think if we're studying consciousness, which seems kind of ineffable in some ways, because it's really hard to describe and no one's got a definition of it, it might be enough if you can track it."
It's an unsettling prospect, working in a field where basic concepts are impossible to define, and it can make folks uncomfortable.
But all of the researchers agree it's well worth the effort. "It's a better understanding of what it means to be human," Touretzky says.
As for Suozzi, and her dying wish to one day revive her mind, Touretzky says it ultimately raises two questions:
- Can we understand in full detail the principles by which brains work?
- Can we somehow deconstruct a particular person's brain in sufficient detail that we can simulate it, and in that way make virtual copies of the person?
The second question, says Touretzky, is far more ambitious and is probably impossible to answer without answering question number one. He adds that most neuroscientists think the second question is "impossible, period," whereas the first one is probably achievable, though not in our lifetimes.
On the other hand, the discussion reminds Dey of a classmate from 20 years ago who told everyone he would one day upload his brain to a computer. "We all thought he was nuts," recalls Dey. In retrospect, maybe not. He lists things once considered science fiction: walking on the moon, self-driving vehicles, surgery under anesthesia, even the washing machine.
In perhaps the most sobering moment of the discussion, Dey references Greek mythology, and how the gods once told human beings that they would never have fire, flight, or immortality. Only one of those remains true.