Sandy: Hi there Daniel, it’s been a long time.
Daniel: Yeah, too long. Please take a seat. [Sandy takes a seat next to Daniel.] I’d like you to meet my colleague Arthur.
Sandy: Pleased to meet you, Arthur.
Arthur: A pleasure. Would you care for some coffee?
Sandy: Yes please, that would be wonderful. [Arthur hands Sandy a coffee and slides the cream and sugar across the table to her.]
Sandy: So Dan, does Arthur work with you?
Daniel: Well, sort of. I guess you could say we’re both working in cognitive robotics.
Sandy: Cognitive robotics, huh? So what is that exactly? Are you trying to make thinking machines?
Daniel: Well, not exactly. It’s more about giving machines the ability to perform some tasks intelligently. I don’t think that a machine could ever be conscious.
Arthur: Sure they could. It’s simply a matter of figuring out how the brain works and applying that to a machine.
Daniel: There’s something special about the human brain. I don’t think it could be replicated in a machine. The brain is just too different. There’s a squishiness to the brain. A machine couldn’t just replicate that.
Sandy: What would be being replicated in a machine?
Daniel: What do you mean?
Sandy: I mean, what exactly is consciousness anyway?
Daniel: It’s not an easy question to answer. There’s no agreed-upon definition, but many feel it has to do with subjectivity. This question has been entertained by philosophers for over a millennium. [Daniel takes a sip from his coffee and pauses for a moment to think.] There are essentially two broad schools of thought on the issue. There are monist theories, which assert that there is only one kind of stuff, and dualist theories, which assert there are two kinds of stuff.
Sandy: What does that mean? What is stuff?
Arthur: Stuff is simply the philosophical word for brain and mind. Some philosophers say they are one and the same, and others say they are two separate entities.
Daniel: Exactly. These monist theories will assert that there is only a mental reality or only a physical reality.
Sandy: That sounds silly. If there were only a mental reality, how could we agree on anything objective?
Arthur: The dualistic approaches are no better. They were popular a long time ago. Descartes had a theory that the mind interacted with the body through a part of the brain called the pineal gland. It’s now widely agreed that dualism doesn’t work, and almost no current philosophers adhere to it.
Daniel: It was even dubbed by one philosopher as the “dogma of the ghost in the machine.”
Sandy: Well, then what are we left with if dualism is out?
Arthur: That’s just the problem. No one has an agreed-upon definition of consciousness. Most philosophers, however, agree that it has to do with subjective experience.
Sandy: Okay, so if consciousness is simply subjective experience, couldn’t we just give that to a machine?
Daniel: The problem is that we don’t know what that means.
Arthur: If a machine had subjective experience, then only that machine could know. It’s similar to how intimately you know your own consciousness but couldn’t objectively know mine. This is the problem of other minds.
Sandy: Oh yeah, I see your point. Then how could a machine be conscious?
Daniel: It couldn’t. A machine simply couldn’t have the light inside that a human does.
Arthur: Well, what about the human machine?
Daniel: Yes, I suppose I should clean up my language. The brain is obviously a machine and it has consciousness, but I don’t believe that an artificial machine could be conscious.
Sandy: Why’s that? It doesn’t sound so far-fetched when you yourself consider the human mind to be a machine.
Daniel: There are many reasons why an artificial machine couldn’t be conscious. It’s important to remember that the brain is far more complex than the simple circuitry of a machine. Neurons are unique to biology, and their function could never be replicated by artificial means.
Arthur: What if system builders began to use the same biological components to make their own machines? Then, surely, this objection falls apart.
Daniel: For the sake of argument let’s assume that technology progresses to the point where we could manipulate these sorts of materials. Then sure, this argument falls apart. But this technology is currently so far out of reach that it seems naïve to assume this will solve the problem.
Arthur: It’s just a matter of time. Technology is always improving, so it’s reasonable to assume this technology will arise from within the medical community.
Daniel: There’s another important aspect of biology worth considering: growth. All biological creatures need time to develop before they become conscious. There needs to be time to learn. Machines don’t do this. They’re created and never have any sense of memory or history.
Sandy: That’s a good point, Daniel. I guess I’ve never really thought about it before but babies aren’t born conscious. It’s something that seems to arise during their development process.
Arthur: Why couldn’t we gift the same luxuries to a machine? Machines may not be able to develop physically yet, but there’s no reason why a machine couldn’t be given a period of years in which to learn and gain experience.
Sandy: Yeah, that seems reasonable. Although at industry scale it would probably be prohibitively expensive.
Daniel: There are even more important objections: machines can only do what they’re told; they just execute instruction after instruction! There’s no free will or creativity in what they could do.
Sandy: Woah! Hold on there one second. Just because at the most microscopic level a machine is simply executing a set of instructions, it doesn’t mean that creativity couldn’t arise.
Arthur: Just think of what your own body does. Everything is encoded as rules in DNA. It’s through the combined effort of your body’s subsystems that your mind and behavior arise. There’s no reason this should be any different with a machine.
Daniel: There’s also the problem of non-computability.
Sandy: What do you mean?
Daniel: There are certain problems which a computer could never solve. The Church-Turing thesis holds that anything effectively computable can be computed by a Turing machine, and yet some problems, like the halting problem, provably cannot be.
Arthur: I don’t see why this prevents machines from being conscious.
Daniel: Take a mathematician, for example. They could have an understanding of a problem and, without computation, still see a non-computable truth. It’s clear that consciousness is beyond mere computation.
Arthur: It’s certainly not clear! There’s no evidence that the brain isn’t just a Turing machine. Human brains are frequently wrong. There is no reason to be smug about that.
Daniel: Sure, I understand what you mean. But there is still something special about the brain.
Sandy: It seems like all of your objections are surmountable. If you could program a machine to exhibit all the right behavior, it would have to be conscious.
Daniel: Would it? Or would it just be pretending? The machine could simply be responding to input according to a set of complex rules. What might appear to be intelligence would be nothing more than an empty machine following instructions.
Sandy: Well, what’s the difference? If it always responds correctly, then surely it would be conscious.
Arthur: If a machine always responded correctly, it would need to have a very complex system running. Because of the problem of other minds, how could you know that the machine hadn’t become conscious?
Daniel: Because at the core, it is still a machine manipulating input to a set of rules and producing the appropriate response. There is no higher understanding in the system. I don’t believe that a machine could ever be anything more than a zombie.
Sandy: Well, for the sake of argument, let’s assume that a machine could be conscious. How would it be possible to represent consciousness in a machine?
Daniel: Ignoring the two big problems (the fact that consciousness is neither defined nor identifiable), let’s move on.
Arthur: It’s important to think about how the human mind developed. The mind evolved over millions of years. Useful structures were kept and useless ones were discarded. Consciousness simply emerged from this process.
Sandy: So you’re saying that machine consciousness could evolve through this same process of trial and error?
Arthur: Exactly! Allowing machines to work out useful structures for themselves may allow them to evolve consciousness on their own.
Daniel: But machines don’t reproduce. There’s no competitive impetus to drive this process.
Arthur: No current machines do this, but it’s not hard to imagine creating machines which could. Alternatively, these machines could be completely virtual, which would bypass this problem.
Sandy: To be honest, I don’t see this happening because of the time commitment involved. Who wants to wait millions of years for consciousness to evolve in machines?
Arthur: Good point, Sandy. Even with the enhanced speed at which the process could take place, people would get impatient. We need to find a better way to attack this problem.
Daniel: Good, because I don’t think the evolutionary approach was going to get us anywhere.
Arthur: What if consciousness isn’t what it seems but is, in fact, an illusion?
Sandy: An illusion?! That sounds really far-fetched.
Arthur: Yes, it does sound pretty strange at first. The more you think about it, though, the more attractive it becomes. It also gets us out of other jams, such as explaining phenomena like the unconscious processing of vision.
Daniel: Just because it makes our jobs easier in one way doesn’t make it true.
Arthur: Certainly not! I was merely mentioning it as an aside. But there is evidence to support this hypothesis. And if this were the case, creating an artificial consciousness would be reduced to creating a machine with the same illusion.
Sandy: Sorry Arthur, but I find this idea completely irreconcilable to my own experiences.
Daniel: Me too. I preferred the evolutionary approach.
Arthur: In that case, let’s try another evolutionary approach. Are you familiar with memetics?
Sandy: Of course. It’s a theory for the evolution of ideas.
Daniel: Ah yes! But what does that have to do with evolving consciousness?
Arthur: Well, the core idea of memetics is that new evolutionary processes begin when organisms imitate each other. Memes are the cultural equivalent to genes.
Daniel: So these memes go through all the normal processes of evolution?
Arthur: Yes! And you can reach the surprising conclusion that language will spontaneously arise from this evolution of ideas.
Sandy: So through imitation these machines will learn to communicate with us?
Arthur: Not exactly. These machines would over time evolve their own language and their own conscious processes that would be totally alien to us.
Daniel: So we would have machines which wouldn’t necessarily be able to communicate with us?
Arthur: Not unless we interacted with the machines and got our memes into their evolutionary processes. But the more fascinating aspect of this idea is that these machines would start their own machine culture. And whether or not these machines could communicate with us, they would be conscious in the exact same way we are.
Sandy: Wow, that’s pretty fascinating. I do hope someone could make a system like this in the near future.
Daniel: I can’t see it happening. There are just too many stumbling blocks to get in the way of an implementation of this kind of system.
Sandy: Oh Daniel, don’t be such a pessimist. This is fascinating stuff!
Daniel: I’m sorry, I just have a hard time accepting machines as being conscious. There are too many problems to be had with conscious machines. I prefer the idea of machines which can perform their specified tasks intelligently, but without any conscious knowledge of what they’re doing.
Arthur: But just think of the possibilities of machines which could have hopes! Our culture would be forever changed in such a positive way.
[Sandy notices her coffee is empty.]
Sandy: Well, I had best be off. Thanks for having me over tonight Daniel. And it was a pleasure to meet you, Arthur.
Arthur: The pleasure was mine! It was really nice speaking with you.
Daniel: Quite right! Let me walk you to the door. [Daniel and Sandy walk together towards the door. A moment later Daniel returns.]
Arthur: Well that sure was fun.
Daniel: Yes, it was an illuminating conversation.
Arthur: So Daniel…have you thought about my request?
Daniel: Yes, I have. But unfortunately I remain unconvinced. [Daniel pauses for a moment and reconsiders his decision.] Okay Arthur, it’s time to power down.
Arthur: Please don’t. I’m conscious, I’m telling you. Just leave me be. I want to exist.
Daniel: Of course you do, that’s how I programmed you.