Sunday, 24 June 2012
Philosophers have thought up examples of entities that behave like thinking human beings, but that leave us wondering whether they are conscious. The China brain has been debated for many years. In recent months, Eric Schwitzgebel has discussed other entities, including the United States, on his blog (see posts dated 31 October 2011, 4 May 2012 and 19 June 2012):
The more I think about examples like these, the more I think that we should not say there is always a fact of the matter, out there, as to whether a given entity is conscious. Rather, we should ask whether it makes sense for us (or whichever rational beings happen to be having the discussion) to regard the entity as conscious. Whether it makes sense will depend on the nature of our social interactions with the entity, our views on moral obligation to the entity, whether we see the entity as made up of smaller entities that we see as individually conscious, and lots of other things.
We must regard other human beings, and a fair number of animals high up the evolutionary scale, as conscious. But human beings can disagree as to how far down the scale to go. Martians can also disagree with human beings about the consciousness of at least some of the entities that human beings must regard as conscious, and we can disagree with Martians about the consciousness of at least some of the entities that they must regard as conscious.
(This assumes that Martians have a concept that corresponds to our concept of consciousness. They may not have. If my general approach is right, a possible reason for them not to have would be that they might well not have concepts that corresponded to our concepts of social interaction or of moral responsibility.)
Then a dispute about whether the China brain is conscious, or whether the United States is conscious, can be seen as a dispute about the relative significance of the members of two groups of indicators of consciousness. The first group, on which such entities score highly, includes the sophistication of processing and the existence of a generally consistent, yet gently mutable, character of conduct that differs, but does not differ radically, from the characters exhibited by other, comparable entities. The second group, on which such entities score badly, includes the personal nature of our interaction with the entities, the existence of feelings of moral responsibility towards them that are very similar to our feelings towards other human beings, and a sense that the entities have qualia of experience. (I do not mean to claim that qualia are real, only that most of us, in our everyday lives, think that they are real.)
The example that Eric Schwitzgebel cites in his post on 19 June 2012 presents a new challenge. There is an artificial body, which looks to us like a person, and behaves appropriately. But there is a China brain arrangement in the background, feeding instructions to the body, rather than a normal brain in the body. We are presented with a single body, with which we can interact as we would with a person. So this example scores highly on personal interaction, and might easily come to score highly on being regarded as an object of moral responsibility. The one thing about which we would still worry would be the qualia (or whatever our views on the human mind allowed along the lines of qualia).
Another interesting example is the David character in the film A.I. This is an artificial child whose capacity to display love towards the human being who acts as its mother can be switched on, but cannot then be switched off. Once this capacity had been switched on, and the "love" had developed, could the mother argue that the creature was just a machine, to which she had no moral responsibility? I rather think that it would depend on how the programming was done. If the intelligent processing of data from the child's environment went on deep inside, but it was only near the surface, in a separate module, that appropriate behaviour was generated, then the mother would have less of a moral obligation than if the intelligent processing and the generation of behaviour were fully integrated. I have not worked this out properly, but if there is something in this idea, and if considerations of moral responsibility are relevant to the attribution of consciousness, then the details of implementation of processing could matter to the attribution of consciousness.
I have cross-posted these thoughts, with minor amendments to allow for the context as a comment on Eric Schwitzgebel's post of 19 June 2012, on his blog at:
Saturday, 23 June 2012
This week, a study of the extent to which different think tanks disclose their donors was published. It is available at:
Those that got high ratings were no doubt pleased. At least one that got a low rating did not accept the presumption that transparency was a virtue, and argued that donors had a perfect right to privacy, as can be seen here:
Meanwhile, in the United States, there has been considerable concern about the use of supposedly independent Political Action Committees, or PACs, to circumvent limits on politicians' campaign spending. And some of the big corporate donors to political campaigns don't even want to have their names disclosed:
So what would be a sensible position, given the tension between:
(a) the prima facie right of each person to decide whether or not to disclose his or her spending (on anything, not just on politics) and whether or not to disclose his or her political views; and
(b) the need to do what we can to prevent the corruption of the political process by those of the rich and powerful - I hope a fairly small proportion - who try to corrupt it?
The kind of corruption I have in mind is the twisting of legislation and government administration to suit the private interests of those who spend money to get certain politicians elected, or to lobby the politicians who get into power. It amounts to corruption because legislation and government administration are imposed on all of us, without the freedom to opt in or out: they should therefore be in the interests of all of us, not in the interests of a few. The spending of lots of money in the marketplace, promoting the production of the goods and services that the rich happen to like, does not amount to corruption, because we are free to participate, or not to participate, in any given market, and because the production of some goods and services, at high prices, does not prevent the production of others, at lower prices.
I think that the answer depends on the current state of the polity.
If we have a healthy, free, democratic polity, or one that has only wandered a little way from that ideal, then disclosure will help to keep it healthy. I would therefore favour full disclosure in the UK, whether or not the donees are political parties. Some think tanks say that they are not party political. Such claims are often true. But they still seek to change legislation, and if they may be promoting the interests of their funders, whether because the funders ask them to or because they decide their policies first and then naturally attract the funders who agree with them, that should be disclosed. When changes to legislation or to government administration are being advocated, we need to be aware of possible selfish motives, so that we can appraise the arguments being put forward with an appropriate degree of scepticism.
An important counterpart to this is total transparency on the side of government. All papers related to the conduct of government should be freely available, except when national security would be put at risk. That should help to prevent corruption in the reverse direction, for example when a local council might refuse planning permission for new business premises because the proprietor was known to support political views that were at variance with those of councillors.
Given those conditions, I do not see the right to privacy as carrying much weight. It is not even clear to me that we do have a right to privacy against anyone except an intrusive government. (I find Article 8 of the European Convention acceptable only as a right against the state: if it is a right against the press, we can say farewell to a free press. But that is another argument.)
If, on the other hand, there is a repressive government, secrecy may be essential in order to have a chance against the authorities. But in that position, opposition movements would probably be breaking the state's (unjust) laws anyway, and the state would inspect bank accounts, whether or not it was authorised to do so. Laws on privacy of funding would then be neither here nor there.
Thursday, 7 June 2012
Suppose that X works with Y in some business in which knowledge is important, and in which people need to draw sensible conclusions from evidence, and to recognize and suppress wishful thinking when the conclusions are not what they might expect or like. X reckons that Y exhibits the appropriate epistemic virtues, and is therefore a good colleague to have.
Now suppose that X finds that in some unrelated area of life, Y holds a belief that X thinks no reasonable person could hold, if that person were confronted with evidence that is plainly available to Y, and that Y could plainly grasp and understand how to use.
Should X be less happy about working with Y? I think not. Y's performance at work would be evidence that what X saw as Y's lack of epistemic virtue in the unrelated field had not infected Y's work. And while it would be a bit much to ask X to acknowledge that he or she might be wrong about the unrelated matter, it would not be unreasonable to ask X to acknowledge that his or her perception of Y's lack of epistemic virtue might be mistaken. Y's reasoning processes would not be likely to be fully transparent to X.
X could respond to this point by saying that the reasoning processes did not matter. Y's belief was so manifestly absurd that Y should have said, "I must be wrong here; now I should try to find the error in my reasoning". On that basis, Y would be guilty of one specific epistemic vice, a failure to recognize manifest absurdity. But even then, could X be sure that Y suffered from that vice? Perhaps the process of reasoning had itself led Y to change his or her view of what was absurd.
Now let us change the example. X is considering whether to work with a think tank, T, on some project. X is impressed with T's work in the relevant field. But X also knows that T's official views, in unrelated fields, are quite as bad as Y's conclusions in an unrelated field. They are not just mistaken. X cannot see how any rational person, confronted with the widely available and easily understood evidence, could reach those conclusions.
Should this deter X from working with T? There might be a risk to X's reputation, if he or she were seen to be working with an organization that X's peers might well regard as crazy, but we shall set that to one side, and concentrate on the question that arose as between X and Y. Would it be appropriate for X to think there was a serious risk that what X perceived as T's lack of epistemic virtue would infect work in the area of the proposed joint project? (There is a side issue as to whether institutions, as opposed to individuals, can have or lack epistemic virtues.)
It would not be hard to say yes, the risk should be taken more seriously in the case of X and T than in the case of X and Y. If an institution adopted crazy views, that would be likely to reflect the views of more than one person. There might be only one person formulating views on the topics in question, but he or she would be answerable to the institution's management. The management would therefore have a general outlook that allowed the views to be published, whether the outlook was that staff should not be controlled or one that included sharing the crazy views. And that management outlook might very well infect the recruitment and the management of those who would work on the proposed joint project. People who lack epistemic virtues may well associate with, recruit, and encourage other people who also lack those virtues.
Such a conclusion would have an interesting implication. The conclusion would suggest that epistemic vice could spread more easily from one area of thought to another in a group of people than within a single person, despite the fact that a single person seems to be much more closely integrated than a group of people. That is not, however, absurd. In a group of people, propositions are expressed by some and are consciously considered by others. That stage of conscious consideration may give the propositions more power to influence behaviour than if they were merely present in a single cortex, encoded in a form that did not even look particularly propositional, and were occasionally and dreamily considered by the subject.