One complaint against those who deny, or doubt, anthropogenic global warming is that they are mistaken. That must be the fundamental complaint: we won't do the right things unless we get the facts right.
Another complaint, less fundamental but more interesting ethically, is that by opposing the scientific consensus, they get in the way of action. This complaint is sometimes mentioned, for example in paragraph 7 of this piece by James Garvey:
http://www.guardian.co.uk/environment/2012/feb/27/peter-gleick-heartland-institute-lie
There will always be some politicians who will oppose action on global warming, and if some people offer arguments that action would be wasteful and pointless, they will give those politicians material to use in their own campaigns. This will reduce the probability of action at the level of governments, whether they act singly or co-operate at the international level.
My question here is this: is there a specific ethical failing in opposing a consensus, when the likely result is to delay action, or to reduce its scale, on a problem that is real and pressing if the opponents of the consensus are indeed mistaken?
If the opponents know that the evidence for their position is of poor quality, or if they are reckless as to whether it is of good or poor quality, there is a failing, but it is of a rather more general sort than any failing of opposing a consensus. The failing is that of not being sufficiently careful to put forward true claims, and to avoid making false claims, in a situation in which one ought to take more than usual care, because the consequences of people being misled are so serious. William Kingdon Clifford set a high standard here, in his essay "The Ethics of Belief". His standard is conveniently summed up in his words towards the end of its first section: "It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence". We might debate whether this was an appropriate standard when the stakes were very low. But with global warming, the stakes are very high, and the standard is undoubtedly apt.
Not all global warming sceptics are reckless as to the quality of the evidence for their position. Some would be too ill-informed to be aware that the quality of their evidence needed to be reviewed. But the corporate lobbyists, and their think tank pals, who push the sceptical position, cannot plead that kind of innocence. They know about the evidence. They study it when formulating their positions. They also know about the need to look critically at the quality of evidence. They spend a lot of time criticizing the evidence on the consensus side. Given that the overwhelming majority of scientific, as distinct from political, commentary is on the side that anthropogenic global warming is real, it must be reckless for them to take a contrary position, unless they have a large and thoroughly analysed body of evidence to support that contrary position. The corporate lobbyists and think tanks that take up global warming scepticism have no reason to suppose that they have such a body of evidence. They are, therefore, at best reckless, and they stand condemned by Clifford's standard.
This does not mean that there is anything wrong with their challenging the detail of the scientific claims that are made, searching for errors, and highlighting alternative interpretations of the data. It does, however, mean that they should not crow that there is no sound support for the consensus scientific view, when they have no good reason to think that they have established that the consensus scientific view is baseless. One can be reckless, not only in taking up an overall position, but in making claims about how much one has shown, or about the extent to which the things one has shown undermine the other side's view.
Now let us leave the example of global warming, and ask whether there is a specific ethical failing of opposing a consensus, assuming that:
(1) the opponent of the consensus is not reckless as to the quality of the evidence for his position;
(2) if the consensus is correct, failure to act as the consensus view would recommend would have serious consequences;
(3) if the consensus is mistaken, there would be significant waste in acting as the consensus view would recommend.
It seems to me that there is no ethical failing here, unless, perhaps, the opponent of the consensus is in a position of exceptional power, so that his position is very likely to prevail, whatever the merits of the arguments on his side and on the consensus side. Absent such power, the opponent is acting in a way that is, in general, likely to lead to the advancement of our knowledge. The opponent is simply engaged in subjecting other people's claims to criticism. Criticism exposes error, and can strengthen correct positions that show themselves able to withstand the criticism, as Mill remarked (On Liberty, chapter 2).
What if we leave out condition (3), and suppose that while serious loss would follow from failing to act on the consensus view if it were correct, there would be no serious loss from acting on the consensus view if it were incorrect?
We might take the attitude of Pascal's Wager: we might as well act on the consensus view anyway. Then it might seem that opposition to the consensus view would be reprehensible, if it would be likely to hinder that desirable course of action.
However, even assuming that action on the consensus view would be the only sensible thing to do, and that opposition might hinder that course of action, it would not follow that opposition would be reprehensible. Opposition could still fortify belief in the consensus view, as suggested by Mill, if the opponent's arguments were found wanting. And if we were to adopt Clifford's attitude, we would want opponents of the consensus to be as active as they could be, in order to root out any mistaken beliefs that might happen to be held by the majority. That would be so, even if retention of the beliefs in question would not lead to any immediate adverse consequences, whether they were mistaken or correct. It would be so, both because an accumulation of erroneous beliefs, individually harmless, might have adverse consequences long after the beliefs were acquired, and because truth is something that is to be valued in itself.
Saturday, 23 March 2013
Tricks of presentation
Yesterday, I saw the new David Bowie exhibition at the Victoria and Albert Museum. The exquisitely curated sequence of pieces of music, video clips, costumes, and other artefacts demonstrates the power of presentation, far more than a single concert would do. I suppose the reason is that when we see contrasting tools of presentation, used in different concerts, we become aware of what each tool does, because it does not do quite the same as the corresponding tool in the next display. But we can remain carried away by each act of a pop star, because we know it is only an act. It does not profess to convey anything much about the Universe.
Now suppose we saw an exhibition of papal inaugurations and other grand religious ceremonies, and we therefore became more aware of how the tools of presentation worked in each one. Those who are currently carried away by such shows might cease to be so, because when something that is supposed to be more than an act, and to represent a point of contact with some profound truth about the Universe, is revealed to be shot through with tricks of presentation, doubt must be cast on the supposed profound truth. It should not need those tricks. And if the supposed profound truth is discarded, the show ceases to be a piece of fun, and is reduced to a charade. It cannot survive the loss of its raison d'être.
It is tempting to define an epistemic virtue, and its corresponding vice, in the following terms.
1. Suppose that there is some information.
2. The information might be presented in a variety of ways.
3. We shall only consider ways that would allow the subject to grasp the information.
4. The subject may form a view on the information's truth, or on its trustworthiness.
5. A propensity to form the same view, regardless of the way in which the information was presented, would be an epistemic virtue.
6. A propensity to form different views, depending on the way in which the information was presented, would be an epistemic vice.
This virtue and vice would come close to the virtue of scepticism (in the sense of being appropriately critical, not the Pyrrhonian sense or anything close to that sense) and the vice of gullibility, but the focus here is on the method of presentation, not on any other ways in which people might avoid being led astray, or might be led astray.
We could not test for the virtue, or the vice, in a given individual, by presenting the same information to him or her several times, in different ways, because the individual would remember previous presentations. We might use repeated presentation, with different forms of presentation being used in various orders, on a large sample of people, to test for the effectiveness of different methods of persuasion across the population as a whole, but that would be a different exercise.
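For concreteness, here is a minimal sketch of that population-level exercise. Everything in it is hypothetical: the presentation formats, the response model and the sample size are invented for illustration, and no individual-level virtue or vice could be read off from its output.

```python
# A toy, between-subjects version of the exercise described above: the same
# claim is shown to different people in different presentation formats, and
# acceptance rates are compared across formats. Formats, response model and
# sample size are all invented placeholders.
import random
from collections import defaultdict

FORMATS = ["plain_text", "glossy_video", "charismatic_speaker"]  # hypothetical

def run_trial(n_participants, respond):
    """Assign each participant one format at random; return acceptance rates."""
    shown = defaultdict(int)
    accepted = defaultdict(int)
    for person in range(n_participants):
        fmt = random.choice(FORMATS)
        shown[fmt] += 1
        if respond(person, fmt):  # True if this participant accepts the claim
            accepted[fmt] += 1
    return {fmt: accepted[fmt] / shown[fmt] for fmt in FORMATS if shown[fmt]}

def respond(person, fmt):
    # Placeholder response model in which glossier formats persuade more often.
    base_rates = {"plain_text": 0.40, "glossy_video": 0.55, "charismatic_speaker": 0.65}
    return random.random() < base_rates[fmt]

print(run_trial(3000, respond))
```

A gap between the acceptance rates would measure how much presentation sways the sample as a whole; it would say nothing about whether any particular individual has the virtue or the vice defined above.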
We might note whether an individual was in general susceptible to the vice, by seeing whether he or she tended to accept the content of adverts or of propaganda. But our conclusions would probably be impressionistic, rather than based on a rigorous consideration of evidence. Indeed, if someone comes along with a test that purports to tell us whether a given individual is, or is not, uniformly susceptible to persuasion through the use of tricks of presentation, we should be sceptical of that claim.
Monday, 18 March 2013
Artificial intelligence and values
An article in the latest issue of Cambridge Alumni Magazine (issue 68, pages 22-25, but 24-27 of the version readable online) discusses the work of the Centre for the Study of Existential Risk. Huw Price, the Bertrand Russell Professor of Philosophy at Cambridge, takes us through some of the issues.
The magazine is available here:
http://www.alumni.cam.ac.uk/news/cam/
and some information about the Centre is available here:
http://cser.org/index.html
I found this comment by Huw Price particularly striking:
"It is probably a mistake to think that any artificial intelligence, particularly one that just arose accidentally, would be anything like us and would share our values, which are the product of millions of years of evolution in social settings."
This is an interesting thought, as well as a scary one. It leads us to reflect on what it would be for an artificially intelligent entity to have alternative values.
I shall start by considering what values do. I take it that they are, or provide, resources that allow us to decide what to do, when the choice is not to be made on purely instrumental grounds. That is, the choice is not to be made purely by answering the question, "What is the most efficient way to achieve result R?". The choice as to what to do might be one that was not to be made on purely instrumental grounds, either because of the lack of a well-defined result that was definitely to be achieved, or because it was not clear that all of the possible means, drawn from the feasible range, would be justified by the ends.
It would be possible for some artificially intelligent entity not to have any values at all, even if it could always decide what to do. It might choose both its goals and its methods by reference to some natural feature that we would regard as very poorly correlated, or as not correlated at all, with any conception of goodness. For example, it might always decide to do what would minimize entropy on Earth (dumping the corresponding increase in disorder in outer space, since it could not evade thermodynamics). We may not care about outer space, but we know that the minimization of entropy in our locality is not always a good ethical rule. When such a decision procedure failed to give clear guidance, the system could fall back on a random process to make its choices, and we would be at least as dissatisfied with that, as with the entropy rule.
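To make the example concrete, here is a small sketch of such a decision procedure. The scoring function and the action descriptions are invented; the point is only that the procedure is perfectly well defined while tracking nothing we would recognize as goodness, and that it falls back on chance when its one criterion gives no clear guidance.

```python
# A decision procedure driven by a single natural feature (a fictitious
# "predicted local entropy change") rather than by anything resembling values.
import random

def predicted_entropy_change(action):
    """Hypothetical model of how much an action would change local entropy."""
    return action["delta_entropy"]

def choose(actions, tolerance=1e-6):
    best = min(predicted_entropy_change(a) for a in actions)
    # Keep every action whose score is indistinguishable from the best.
    tied = [a for a in actions if predicted_entropy_change(a) - best <= tolerance]
    # When the rule gives no clear guidance, fall back on a random process.
    return random.choice(tied)

options = [
    {"name": "tidy the warehouse", "delta_entropy": -3.2},
    {"name": "refreeze the lake", "delta_entropy": -3.2},
    {"name": "help a patient", "delta_entropy": 0.4},
]
print(choose(options)["name"])  # one of the two entropy-minimizing acts, chosen at random
```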
The entropy example shows that there are things that do the job that I have just assigned to values, allowing entities to make decisions when the choice is not to be made on purely instrumental grounds, but that do not count as values. It is a nice question, how broad our concept of values should be. But we probably would have to extend it to naturalistic goals such as the maximization of human satisfaction, in order to have an interesting discussion about the values that artificially intelligent entities might have, in the context of the current or immediately foreseeable capacities of such entities. That is, we would have to allow commission of the naturalistic fallacy (supposing it to be a fallacy). It would be a big step, and one that would take us into a whole new area, to think of such entities as having the intuitive sense of the good that G E Moore would have had us possess.
We not only require the possessor of values to meet a certain standard in the content of its values, the standard (whatever it may be) that tells us that a purported value of entropy minimization does not count. We also require there to be some systematic method by which it gets from the facts of a case, and the list of values, to a decision. Without such a method, we could not regard the entity as being able to apply its values appropriately. It would, for practical purposes, not have values.
Sometimes the distinction between values and method is clear: honesty and courage are values, and deciding what action they recommend in a given situation is something else. At other times, the distinction is unclear. A utilitarian has the supreme value of the promotion of happiness, and a method of decision - the schema of utilitarian computations - that is intimately bound up with that value. From here on, I shall refer to a system of values, meaning the values and the method together.
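As an illustration of what I mean by a system of values, here is a toy utilitarian one, in which the value (the promotion of happiness) and the method (the schema of utilitarian computations) are two faces of the same piece of machinery. The happiness figures are, of course, invented.

```python
# A toy utilitarian "system of values": the supreme value and the method of
# decision are bound together in one computation. All numbers are invented.
def expected_happiness(option):
    """Hypothetical estimate of the total happiness an option would produce."""
    return sum(option["happiness_deltas"])

def utilitarian_decide(options):
    # Applying the method just is applying the value.
    return max(options, key=expected_happiness)

options = [
    {"action": "keep the promise", "happiness_deltas": [2, 1, 1]},
    {"action": "break the promise", "happiness_deltas": [5, -1, -1]},
]
print(utilitarian_decide(options)["action"])  # "keep the promise" (4 against 3)
```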
Now suppose that an artificially intelligent entity had some decision-making resources that we would recognize as a system of values, but that we would say was not one of our systems of values, perhaps because we could see that the resources would lead to decisions that we would reject on ethical grounds, and would do so more often than very rarely. What would need to be true of the architecture of the software, for the entity to have that system of values (or, indeed, for it to have our system of values)?
One option would be to say that the architecture of the software would not matter, so long as it led the entity to make appropriate decisions, and to give appropriate explanations of its values on request.
That would, however, give a mistaken impression of latitude in software design. While several different software architectures might do the job, not just any old architecture that would yield appropriate decisions and explanations most of the time would do.
The point is not that a badly chosen architecture, such as a look-up table that would take the entity from situations to decisions, would be liable to yield inappropriate decisions and explanations in some circumstances. An extensive look-up table, with refined categories, might make very few mistakes.
Rather, the point is that when the user of a system of values goes wrong by the lights of that system - falls into a kind of ethical paradox - we expect the conflict to be explicable by reference to the system of values (unless the conflict is to be explained by inattention, or by weakness in the face of temptation, and an artificially intelligent entity should be immune from both of those). That is, a conflict of this kind should shed light on a problem with the system of values. This is one reason why philosophers dream up hard cases, in order to expose the limits of particular sets of values, or of general approaches to ethics: the hard cases generate ethical paradoxes, in the sense that they show how decisions reached in particular ways can conflict with the intuitions that are generated by our overall systems of values. If the same requirement that conflicts should shed light on problems with systems were to hold for the use that an artificially intelligent entity made of a system of values, the software architecture would need to reflect the structure of the system of values, and in particular the ways in which values were brought to bear in specific situations. Several different architectures might be up to the job, but not just any old architecture would do.
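To fix ideas, here is a toy contrast between the two kinds of architecture just discussed: a bare look-up table, and a structure in which each decision carries a trace of the values that produced it, so that a counter-intuitive result can be traced back to the system of values. Both are inventions for this post, not real AI designs, and the values and weights are placeholders.

```python
# (a) A look-up table from situations to decisions. It may get most cases
# right, but when it goes wrong there is nothing in it that explains why.
LOOKUP = {
    "stranger drowning, little risk to rescuer": "rescue",
    "promise conflicts with a small kindness": "keep the promise",
}

def lookup_decide(situation):
    return LOOKUP.get(situation, "no entry")

# (b) A structured architecture: each decision records which values bore on
# the case and with what weight, so a surprising result sheds light on the
# system of values itself. Values and weights are invented placeholders.
VALUES = {"honesty": 2.0, "benevolence": 3.0}

def structured_decide(options):
    trace = []
    for option in options:
        # option["supports"] says how strongly the option serves each value.
        score = sum(VALUES[v] * w for v, w in option["supports"].items())
        trace.append((option["action"], score, option["supports"]))
    best_action = max(trace, key=lambda entry: entry[1])[0]
    return best_action, trace  # the trace is the raw material for an explanation

action, explanation = structured_decide([
    {"action": "tell the truth", "supports": {"honesty": 1.0, "benevolence": -0.2}},
    {"action": "tell a white lie", "supports": {"honesty": -1.0, "benevolence": 0.5}},
])
print(action)  # "tell the truth" on these invented weights
```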
Given a systematic software architecture, for a system of values that we would regard as unacceptably different in its effects from our own system of values, we can ask another question. Where would we need to act, to correct the inappropriateness?
We should not simply make superficial corrections, at the point where decisions were yielded. That would amount to creating a look-up table that would override the normal results of the decision-making process in specified circumstances. That would not really correct the entity's system of values. It would also be liable to fail in circumstances that we did not anticipate, but in which the system of values would still yield decisions that we would find unacceptable.
We would need to make corrections somewhat deeper in the system. Here, a new challenge might arise. As noted above, it is possible for values and methods of decision to be intertwined. In such value systems in particular, and perhaps also in value systems in which we do naturally and cleanly separate values from methods, it is perfectly possible that the software architecture would have intertwined values and methods in ways that would not make much sense to us. That is, we might look at the software, and not be able to see any natural analysis into values and methods, or any natural account of how values and methods had combined to produce a given overall decision-making system.
This could easily happen if the software had evolved under its own steam, for example, in the manner of a neural net. It could also happen if the whole architecture had been developed entirely under human control, but with the programmers thinking in terms of an abstract computational problem, rather than regarding the task as an exercise in the imitation of our natural patterns of thought about values and their application. Even if the intention were to imitate our natural patterns of thought, that might not be the result. The programming task would be vast, and it would involve many people, working within a complex system of software configuration management. It is perfectly possible that no one person would have a grasp of the whole project, at the level of detail that would be necessary to steer the project towards the imitation of our patterns of thought.
A significant risk may lurk here. If we become aware that artificially intelligent entities are operating under inappropriate systems of values, we may be able to look at their software, but unable to see how best to fix the problem. It might seem that we could simply turn off any objectionable entities, but if they had become important to vital tasks like ensuring supplies of food and of clean water, that might not be an option.
Saturday, 2 March 2013
Reasonableness
An English jury, in a criminal trial, must decide whether the prosecution has shown, beyond reasonable doubt, that the accused committed the crime. Normally, all 12 jurors must agree, but judges sometimes allow verdicts agreed on by only ten jurors.
Suppose that a jury starts its deliberations, and after some discussion, takes a vote. Eight say that the prosecution has shown guilt beyond reasonable doubt, but four say that it has not. All of these opinions have been reached by considering only the evidence presented in court, and comments by jurors on that evidence. After more discussion, it becomes clear that no juror is going to change his or her opinion, so long as only those things are considered.
Jurors might then consider the pattern of voting. Any one of the eight might reason as follows.
"Some people in the jury room think that the prosecution has not discharged its burden of proof. There is no ground to think that they are not reasonable people, and in any case, it is unlikely that one would get four unreasonable people among 12 randomly chosen people, although one might get one or two. If the prosecution had discharged its burden of proof, they would probably have been convinced, because reasonable people generally hold reasonable views on such questions. They are not convinced, so I should change my view and vote for acquittal."
(It might be thought that there would be a mirror-image argument for the four: "Eight apparently reasonable people have concluded that the prosecution has discharged its burden of proof, and they would not have concluded that if it had not, so I should change my view". But that argument should be excluded by the fact that the burden of proof is on the prosecution. Doubt trumps certainty. The views of the four could plant doubts in the minds of the eight, but the views of the eight could only plant doubts over whether to acquit in the minds of the four, and a doubt as to whether to acquit is not enough to convict.)
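The juror's hunch about how often one would find four unreasonable people among twelve can be given rough numbers. Suppose, purely for illustration, that 10 per cent of randomly chosen people are unreasonable about questions of this kind, independently of one another; a binomial calculation then runs as follows.

```python
# Rough numbers behind the juror's hunch, under the purely illustrative
# assumption that 10% of randomly chosen people are "unreasonable" about such
# questions, independently of one another.
from math import comb

def prob_at_least(k, n=12, p=0.10):
    """P(at least k unreasonable people among n), under a binomial model."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"P(at least 1 of 12): {prob_at_least(1):.2f}")  # about 0.72
print(f"P(at least 2 of 12): {prob_at_least(2):.2f}")  # about 0.34
print(f"P(at least 4 of 12): {prob_at_least(4):.2f}")  # about 0.03
```

On that assumption, one or two unreasonable jurors would be unremarkable, while four would be a genuine surprise, which is the shape of the juror's reasoning; a different base rate would shift the numbers, so the calculation illustrates the structure of the argument rather than settling it.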
Clearly, this reasoning is not always followed in practice. If it were, we would never get deadlocked juries, because the reasoning would transform deadlock into acquittal. The psychological explanation may well be that at least some jurors think their job is to decide whether the accused is guilty, rather than to decide whether the prosecution has shown guilt beyond reasonable doubt. A seemingly more respectable reason would be that jurors made up their minds individually, on the basis of the evidence presented and other jurors' comments on that evidence, and did not consider that their views should be influenced by the views of other jurors. And yet, we may ask whether that reason really would be respectable. It would imply that each juror should decide on the basis of the standard of reasonableness of doubt that he or she would use when he or she had no-one else's guidance available, rather than on the basis of a standard of reasonableness of doubt that had been tested by reference to the conveniently available sample of 11 other people in the jury room. Would not such a test be likely to improve one's grasp of the appropriate standard? The concept at work should be as objective as possible: doubt that is reasonable, not doubt that an individual, with his or her foibles, might happen to consider reasonable.
Occasionally, comparable considerations are tackled explicitly in legislation. The UK is about to have a general anti-abuse rule put into its tax legislation. Assuming that the legislation follows the draft that was published in December 2012, the use of a tax avoidance scheme will only be caught by this new rule, removing the anticipated tax saving, if that use "cannot reasonably be regarded as a reasonable course of action in relation to the relevant tax provisions". (Even if the use of a scheme is not caught by the new rule, it may very well be caught by other rules.) For the text and commentary, see GAAR Guidance Part A, 11 December 2012, section 5.2, available here:
http://www.hmrc.gov.uk/budget-updates/11dec12/gaar-guidancepart-a.pdf
This is known as the double reasonableness test. If there are reasonable views on both sides, both that the use of a scheme was a reasonable course of action and that it was not, then use of the scheme will not be caught. It is natural, and very likely to be correct, to determine whether a view is reasonable by considering whether it is supported by arguments that a substantial proportion of people who are well-informed about the subject matter would consider to be reasonable, and whether there is an absence of any manifest objection that would lead most well-informed people to reject the view.
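The logical shape of the double reasonableness test can be set out schematically. The sketch below is not the statutory test or HMRC's guidance; it simply stipulates, for illustration, that a view counts as reasonable when at least a quarter of well-informed people hold it, and then asks whether any reasonable view regards the course of action as reasonable.

```python
# Schematic shape of the double reasonableness test. The threshold and the
# opinion data are stipulations for illustration, not the legal test.
def is_caught(opinions, threshold=0.25):
    """opinions: True/False answers from well-informed people to the question
    'was this a reasonable course of action?'. The use of a scheme is caught
    only if no reasonable view answers yes."""
    share_saying_yes = sum(opinions) / len(opinions)
    some_reasonable_view_says_yes = share_saying_yes >= threshold
    return not some_reasonable_view_says_yes

print(is_caught([True, True, False, False, False, False]))  # False: a reasonable view says yes
print(is_caught([False] * 11 + [True]))                      # True: one lone yes is not enough
```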
We should note a subtlety here. The legislation focuses on the reasonableness of a view, not of people who hold it. A reasonable person may happen to hold some unreasonable views. As the official commentary points out, it is not enough simply to produce an eminent lawyer or accountant, who states that a course of action was reasonable. (Ibid., paragraph 5.2.2.2)
If, however, a substantial number of eminent lawyers and accountants stated that a course of action was reasonable, it would be hard to maintain that this view was unreasonable. When we test for reasonableness, as distinct from correctness, the number of votes among experts carries some weight. (It may also carry weight in relation to correctness, but in a rather different way, and a minority of one can be correct.)
This reflects the fact that reasonableness is a normative concept that is directly related to the adoption of views, in a way that, where there is real debate among experts, correctness is not. What should you do? In fields in which there is such a thing as expertise, and in which you are not yourself an expert, you should limit yourself to adopting reasonable views, or suspending judgement. How can you avoid adopting unreasonable views? See what the experts think, and limit yourself to views that are adopted by decent numbers of experts (or to suspension of judgement).
Correctness of views is an aspiration, at least in relation to views of a type that have any prospect of being classified as correct or incorrect, but in areas where there is real debate among experts, views do not carry labels, "correct" and "incorrect", so the concept of correctness does not regulate the conduct of non-experts directly. We can only adopt strategies that are likely to lead to the avoidance of incorrect views, like limiting ourselves to reasonable views, or making ourselves experts and studying the evidence ourselves.