Thursday, 7 February 2013
Artificial brains
The Human Brain Project aims to give us a computer simulation of the human brain, or at least to work towards that goal. Plenty of people are sceptical about the feasibility of the project. But there are also ethical questions. Some of them are mentioned on the project's website, here:
http://www.humanbrainproject.eu/ethics.html
The issues mentioned relate to what people might do to, with and among other human beings, having learnt in detail how real human brains worked. Even if the simulation did not, at the neuronal level, work in the same way that actual brains worked, it could still mimic the brain at a larger scale, for example at the level of processing ideas, in ways that would help political propagandists to work more effectively.
There is, however, another issue, related to the simulation itself. What, if anything, would make the simulation a moral client, so that ethical concerns would bear on a decision to modify its thoughts by direct intervention (rather than by talking to it and inviting it to accept or reject some new idea), or on a decision to switch it off?
It would not, we may assume, have a power of action, or a power to move around the world. One might therefore argue that it would not have a well-grounded sense of self (compare some of the arguments in Lucy O'Brien's book, Self-Knowing Agents). But it could still have the internal configuration that would correspond to a sense of self, a configuration that was artificially engineered by the researchers, as if it had had a history of action. I assume here that such a sense could be retained, and would not be lost over a long period without action, just as a human being who became totally paralysed and immobile could retain a sense of self. The artificial brain would, however, need to think of itself as one who had become paralysed, or would have to be fed the delusion that it was in an active body in the world, or would have to remain puzzled about its sense of self. Given the nature of the project, it is most likely that the artificial brain would be fed the delusion of being in an active body.
Likewise, the artificial brain could have artificially engineered senses of pain and of fear, like those appropriate to a being which moved around in the world and needed to act to distance itself from dangers, senses that had evolved in the species, or developed in the individual, through encounters with those dangers.
The sense of self, and the feelings, would have dubious provenance. If we were to think that such senses were given their content by the external reality that grounded them - a kind of meaning externalism - we would have to think of them as not having the content that they had for us. That might excuse us from recognizing the artificial brain as a moral client, but it seems unlikely to do so. Even a convinced meaning externalist would, after all, hesitate to turn off the nutrients that fed human brains which had been put in vats years ago, and had been plugged into the usual delusive inputs for long enough for the presumably externally grounded meanings of their thoughts to have changed.
Moreover, while the sense of self might be based on a fabrication, it is not clear that the sense itself would be unreal. It would be even harder to show that the artificial brain's theory of mind, used in its encounters with people, was unreal. And the researchers would be very likely to apply their own theories of mind and intentional stances to the artificial brain. At least some of them would, entering into social relationships that would, between human beings, stimulate moral concern, while other researchers measured the results in order to understand how real human brains behaved in encounters with other people. Possession of a theory of mind looks like something that ought to make the possessor a moral client. It has certainly been argued to have that consequence in relation to great apes other than human beings, although the extent to which they really have a theory of mind is controversial.
On the other side, there would be arguments that the simulation was just a large piece of software, that it was not even fully integrated into a single person because it was shared between several computers (although distributing it might reduce the accuracy of the simulation, because large-scale co-ordinating electrical activity does seem to matter to consciousness), and that it would not even know if it were modified, slowed down or turned off - although modifications, and adjustments to processing speed, would have to be made carefully so as not to leave traces of the old state or speed that would give the game away.
The software point has force if we think that an entity's not being intrinsically dependent on its hardware makes it less of a moral client. We are intrinsically dependent on our hardware, or at least we are now, and we will remain so until we work out how to copy the contents of a brain onto some computer-readable medium. Even then, it would matter which body someone was, and what the history of that body had been; that degree of hardware-dependence would remain.
The point about lack of awareness of changes needs to be elaborated. It is utterly wrong suddenly to kill someone, even without their knowledge or any kind of anticipation. It is also wrong to manipulate people's thoughts or attitudes without their awareness of what is going on, in the way that politicians, spin-doctors and the advertisers of products do, rather than using open and rational argument. If it were acceptable to turn off an artificial brain, the reason would have to go beyond its own lack of anticipation of this fate. It would be likely to have something to do with the fact that a piece of software would not have existed within a caring community. It would have had no relatives to mourn it, and it would never have had a potential beyond what its creators had planned for it. (It would have had a potential beyond what its creators had coded: one of the features of neural nets is that they develop in their own way.) But that move in the argument is dubious, because the very question is whether the software should have been surrounded by a caring community, one that regarded it as a moral client. Something similar could be said about the far lesser offence of manipulation. Manipulation might look acceptable, because the software would only ever have had the potential that its creators had planned. But whether that ought to have given them ownership of its mind is the point at issue.
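The parenthetical point about neural nets developing in their own way can be illustrated with a toy sketch in Python. It is wholly hypothetical and has nothing to do with the Human Brain Project's own software: the idea is only that the creators write the initial state and the learning rule, while the final connection weights emerge from the net's encounters with its inputs.

    # Toy illustration, not Human Brain Project code: a one-layer perceptron.
    # The programmer codes the initial weights and the update rule; the final
    # weights emerge from training and are never written down in advance.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical "experience": random inputs, with a target that depends on them.
    inputs = rng.normal(size=(100, 3))
    targets = (inputs.sum(axis=1) > 0).astype(float)

    weights = np.zeros(3)      # coded by the creators
    learning_rate = 0.1        # coded by the creators

    for x, t in zip(inputs, targets):
        prediction = 1.0 if x @ weights > 0 else 0.0
        weights += learning_rate * (t - prediction) * x   # perceptron update rule

    # The trained weights are a product of the net's own history of inputs.
    print(weights)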
I remain unclear as to whether a full simulated brain should be regarded as a moral client. I predict that if its creators did not so regard it, the rest of us would not reprove them. But they might reprove themselves for turning it off, and then comfort themselves with the thought that they had kept the final software configuration, so that it could be re-awoken at any time.
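The closing thought about keeping the final software configuration amounts, in practice, to checkpointing. A minimal sketch, using purely hypothetical names and the crude assumption that the simulated state fits in an ordinary Python data structure, might look like this; real simulators have their own, more specialised checkpoint formats.

    # A minimal, hypothetical sketch of "keeping the final configuration":
    # the simulated state is serialised before shutdown and can be restored later.
    import pickle

    def save_state(state, path="brain_state.pkl"):
        # Persist the simulation's state to disk before switching it off.
        with open(path, "wb") as f:
            pickle.dump(state, f)

    def restore_state(path="brain_state.pkl"):
        # "Re-awaken" the simulation from the saved configuration.
        with open(path, "rb") as f:
            return pickle.load(f)

    # 'state' merely stands in for whatever structure holds the simulated
    # neurons and synapses.
    state = {"neuron_potentials": [0.1, 0.7, 0.3], "synapse_weights": {(0, 1): 0.5}}
    save_state(state)
    assert restore_state() == state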