Friday 25 October 2013

Government accountability and freedom of information

Gus O'Donnell, Cabinet Secretary until the end of 2011, has recently made waves with proposals for improving the government of the UK, including vetting prospective MPs before they stand for Parliament. Amongst the responses has been an excellent one from Douglas Carswell, calling for accountability in the other direction: we the electorate, and backbench MPs, need to be able to hold civil servants and ministers to account. His piece is available at:

http://blogs.telegraph.co.uk/news/douglascarswellmp/100242390/how-to-expose-whitehall-cock-ups-give-us-the-power-to-sack-ministers-and-civil-servants/

It is, however, difficult to hold a government to account unless we can see what considerations ministers and civil servants have weighed, and how they have reached their decisions. Unfortunately, there is a major obstacle to that in the exemptions from disclosure of documents conferred by the Freedom of Information Act 2000: section 35 (formation of government policy, etc.) and section 36 (prejudice to effective conduct of public affairs). As a rule, we cannot get to see the policy papers that go back and forth among civil servants, and between them and ministers, and which show what evidence was considered and how decisions were reached. If the exemptions were abolished, MPs, journalists and the public could see the papers, and would have a valuable way to keep tabs on civil servants and on ministers.

The traditional argument is that if these papers had to be disclosed, that would inhibit free and frank discussion among civil servants, and between them and ministers. We should not accept this argument. Everyone would know in advance that such papers would be full of odd policy ideas that got rejected as silly, uncertainties about data (which should be disclosed anyway), queries over the value of evidence received from external consultees, and mentions of factors that might be seen either as risks of policies, or as advantages of them, depending on one's political stance. We know that the policy formation process is messy, and we would not think any the less of governments if we had confirmation of that fact. The worst consequence would be a bit of political embarrassment, and that matters far less than our ability to see whether the people we pay to run the public sector, and to formulate legislation, are doing a good job.

One might fear that there would be too much paper to plough through, and no good way to identify the key documents quickly. There would also be the difficulty of wording freedom of information requests so as to find out what was wanted. If a request asks for the wrong thing, or leaves it open to the relevant department to supply very little information, the response may easily fail to give the requester what he or she wanted.

These difficulties could, however, be overcome if external users had access to departmental document management systems, so they could search the stock of documents themselves. The official response to such a radical move might well be, "But then you might get access to personal data on taxpayers, or NHS patients, or litigants". But it is unlikely that this would really be a problem. It would not be difficult to tag documents as needing redaction before they could be accessed - although there would need to be severe disciplinary measures against civil servants who were found to be tagging everything, just so as to make life difficult for outsiders.

Will any of this happen? Maybe not. But if enough backbenchers from all parties wanted it to happen, they could force it on those who are in government, or who are on the Opposition front bench and hope that they will in due course be in government.

Thursday 12 September 2013

The life without examination

In the Apology, at 38a5, Plato famously reports Socrates as saying that the unexamined life is not worth living. At least, that is the standard English translation. But every now and then, someone contests the claim. Why should a life be worthless if it is wholly outward-looking, devoid of introspection? (We may note in passing that "not worth living" may be too strong a translation. "Not the sort of life one should live" would be a possible reading.)

Sometimes, when I have seen this view, or some related view, I have wondered out loud about the translation. (Examples where I have commented are Brian Leiter's blog on 17 January 2012, and Stephen Law's blog on 12 September 2013 - original post dated 7 September.)

I put my thoughts here now, in the hope that some expert in Plato's Greek may have a view. I am not such an expert, so my own thoughts are mere speculation.

My concern is the word "anexetastos". This is the word that is standardly translated as "unexamined", with the implication that it is one's own life that needs examining. Liddell and Scott (the big one, colloquially known as the Great Scott) gives two meanings:

(a) not searched out, not inquired into or examined;

(b) without inquiry or investigation.

This dictionary refers to Apology 38a5 in giving the latter meaning.

An Intermediate Greek-English Lexicon (the Middle Liddell) reinforces the point by giving (a) not inquired into or examined, (b) uninquiring, and again links Plato to the latter meaning, although without a reference to the Apology.

On the scope for a verbal adjective to have both active and passive meanings, see Smyth's Greek Grammar, page 157, paragraph 472.

Liddell and Scott's decision to link Plato to (b) and not (a) does not in itself prove anything. I assume that they simply followed the opinion of Plato scholars as to the translation. But if (b) were the meaning to adopt, Socrates' prescription would look rather different. It would amount to saying that you should enquire into things and strive to find out the truth about the world. You might yourself be a main object of your enquiry, or you might turn your gaze outwards, making enquiries in physics, natural history, geography, philosophy, or whatever else was of interest. The prescription would amount to an injunction not to be slothful intellectually, but to pursue knowledge and understanding.

Monday 2 September 2013

Most recent common ancestor

There has been a pause in my blogging. I have been engrossed in other work. The pause may last for a while longer. So here is a little puzzle, as an entr'acte.

According to Wikipedia, Elizabeth of York, wife of Henry VII, is the most recent common ancestor of all English monarchs. I have not checked this claim, but let us assume that it is correct. The puzzle, which we would need to solve before checking the correctness of the claim, is as follows.

How should we define "most recent common ancestor" in this context, so that a determinate person, or a determinate couple, is picked out, and so that it is interesting that this person, or this couple, is picked out?

The condition of interest would be failed if our definition led us to pick out George VI, or even if it led us to pick out George V, or Victoria. We want some sense of "furthest back most recent". But then, we must not allow Elizabeth of York's mother, Elizabeth Woodville, to displace her.

Technical terms from mathematics may be used freely.

Blogspot does not, so far as I know, support the LaTeX math environment, so please feel free either to cut and paste logical symbols from elsewhere, or to use E for the existential quantifier, V for the universal quantifier, - for negation, and > for implication.
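For readers who prefer code to quantifiers, here is one candidate definition, sketched in Python over a toy pedigree (the names and the pedigree are hypothetical placeholders, not real genealogy): a most recent common ancestor of a group is a common ancestor of whom no other common ancestor is a descendant. Whether this "furthest forward" condition captures the intended sense, and satisfies the condition of interest, is exactly the puzzle.

```python
# Toy pedigree, mapping each person to their known parents.
# All names are hypothetical placeholders.
parents = {
    "child1": ["mother", "father"],
    "child2": ["mother", "father"],
    "mother": ["grandmother"],
}

def ancestors(person):
    """The strict ancestors of a person: the transitive closure of parenthood."""
    found = set()
    stack = list(parents.get(person, []))
    while stack:
        p = stack.pop()
        if p not in found:
            found.add(p)
            stack.extend(parents.get(p, []))
    return found

def most_recent_common_ancestors(group):
    """Common ancestors of everyone in the group, keeping only those of whom
    no other common ancestor is a descendant."""
    common = set.intersection(*(ancestors(g) for g in group))
    return {a for a in common if not any(a in ancestors(c) for c in common)}
```

On the toy data, the definition returns the couple {"mother", "father"} rather than "grandmother": the grandmother, like Elizabeth Woodville, is excluded because a more recent common ancestor descends from her. Note that the definition can pick out a determinate person or a determinate couple, matching the wording of the puzzle.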

Monday 17 June 2013

Bonn, Berlin and Gettier


This month, we are fifty years on from the publication of Edmund Gettier's celebrated paper, "Is Justified True Belief Knowledge?" (Analysis, volume 23, number 6, June 1963, pages 121-123). Today, 17 June, is the sixtieth anniversary of the uprising in East Germany, which might have led to the early reunification of Germany (at least according to a broadcast on B5 radio this morning), had the Soviets not sent the tanks in.

We may also recall, for no particular calendrical reason, Bertrand Russell's note of the Gettier phenomenon, avant la lettre. The story is told in J E Littlewood, Littlewood's Miscellany, edited by Béla Bollobás, 1986, page 128. "He told me (c.1911) that he had conceived a theory that 'knowledge' was 'belief' in something which was 'true'. But he met a man who believed that the Prime Minister's name began with a B. So it did, but it was Bannerman and not Balfour as the man had supposed." [Balfour was Prime Minister from 1902 to 1905, and Bannerman from 1905 to 1908.]

Let us stick with the letter B, but move forward to the present day. Suppose that Claude, who has never lived in Germany, and whose access to German news is very limited, was taught in school, in the 1970s, that Bonn was the capital of West Germany, that West Germany was much larger, and much more significant economically, than East Germany, and that one part of Berlin was the capital of East Germany. Since then, Claude has heard of German reunification, but not of any change of capital. He reasons that the capital of Germany is probably still Bonn, but that it might have been moved somewhere else, and that if it had been moved, Berlin would have been the most likely choice.

He therefore forms the justified true belief that the name of the capital of Germany begins with a B. Does he know this?

The case is not on all fours with the Balfour-Bannerman case. Claude is aware that he might be wrong in his belief that Bonn is the capital, but reasons that even if he is, there is still a good chance that he is right in his belief that the name begins with a B. That reasoning is perfectly sound, and not just because the capital was in fact moved to Berlin. It would still have been sound reasoning, had Frankfurt or Hamburg been chosen. The problem then would have been not lack of justification for the belief that the name began with a B, but its falsity.

The reasoning would also have been sound if Bielefeld had been chosen, but then the truth of the belief that the name began with a B would have had no connection with Claude's reasoning, and we would probably have wanted to say that Claude's belief did not qualify as knowledge.

Let us return to the real world, in which the capital is moved to Berlin, Claude's reasoning is sound, and the truth of his belief that the name begins with a B is connected with his reasoning. Does he know that the name begins with a B?

It may depend on how Claude assesses the risk that the capital has moved. If he thinks there is a pretty good chance that the capital has moved, that supports a claim to knowledge, because it gives significance to the reasoning that Berlin would be the most likely new location. In giving significance to that reasoning, it also gives significance to what it is that actually makes the belief true. Claude reduces his reliance on the false lemma (that Bonn is the capital), and increases his reliance on the true lemma (that Berlin is the capital). If, on the other hand, Claude does not think it at all likely that the capital has moved, then his conscious reliance is on the false lemma, and the reasoning about the move is merely a precaution, upon which he does not expect to rely. We might then wish to deny Claude knowledge.
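The shift in reliance can be made numerically explicit. A minimal sketch, with purely hypothetical credences (the figures 0.7 and 0.8 are my own inventions, not anything in the text):

```python
# Hypothetical credences, purely for illustration.
p_bonn = 0.7                # Claude's credence that Bonn is still the capital
p_berlin_if_moved = 0.8     # his credence that any move would have been to Berlin

# Claude's overall credence that the name begins with a B decomposes into
# the weight resting on the false lemma and the weight resting on the true one.
false_lemma_weight = p_bonn                            # "Bonn is the capital"
true_lemma_weight = (1 - p_bonn) * p_berlin_if_moved   # "Berlin is the capital"
p_begins_with_b = false_lemma_weight + true_lemma_weight  # 0.7 + 0.24 = 0.94
```

As p_bonn falls, the weight shifts from the false lemma to the true one, which is the movement of reliance that inclines us towards, or away from, granting Claude knowledge.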

There is one more twist. Suppose that Claude thinks it is quite likely that the capital has moved, and that inclines us to allow him knowledge that the name begins with a B. That puts pressure on the quality of his reasoning that if the capital has moved, it has moved to Berlin. He cannot be sure of this. So unless he has good reason to think that any move would be to Berlin, with no other city in serious contention, we might not allow him to know that the name began with a B after all.

Wednesday 22 May 2013

Phenomenal consciousness


An e-mail has just invited me to consider whether phenomenal consciousness is a causally inert epiphenomenon, and a pointless by-product of brain activity. It is the sort of question that can be transformed into a different question.

First, to what does the claim amount?

It is not a claim about self-awareness of the type that underpins an agent's knowledge that the actions that she can make happen just by thinking are the actions that will take place in her immediate vicinity, and are performed by the entity that must be kept fed in order for her thought processes to continue. That sort of self-consciousness can be characterized as having a grasp of what Perry called the essential indexical. It is the sort of self-consciousness that lies at the heart of Lucy O'Brien's argument in her book, Self-Knowing Agents. It is something that must be represented in our brain cells, one way or another, in order for us to function as we do.

Rather, the claim is about how things feel to us, the qualia (or apparent qualia, if one thinks that there are no qualia) that we have, but that zombies would not have. So the second part of the claim, that phenomenal consciousness is a pointless by-product of brain activity, is a claim that zombies could do just as well as we do. It might turn out that any creature that functioned as we function could not be a zombie, for example, because we would need to wire up the cells in certain ways which would inevitably produce phenomenal consciousness as a by-product. But even if that were so, the second part of the claim could still be made. It would merely need to be worded as "Disregarding the practicalities of making zombies, they could do just as well as we do".

If, however, one were able to substantiate the second part of the claim, in the form of a claim that zombies could do just as well as we do, that would not mean that we would have established the first part of the claim. (It may be that the second part of the claim could be taken in some other form, such that substantiating it would substantiate the first part of the claim, but it is not obvious how to re-cast the second part in order to give this result.)

We would not be able to move straight from the second part to the first part, because phenomenal consciousness might be causally effective in human beings, creatures who have it, even though an alternative way of achieving the same results that we achieve, an alternative that did not involve phenomenal consciousness, would be available. We should not suppose that the difference between a human being and a zombie is simply the presence or absence of phenomenal consciousness, with no other differences. And if there are other differences, such that we would not get from a zombie to a human being simply by adding phenomenal consciousness, those differences could build in a causal role for phenomenal consciousness.

We need to say something about how phenomenal consciousness could have a causal role. It could be said that only fundamental particles and forces have causal roles, but that would be a very narrow way of speaking: we are more inclined to say that macroscopic objects also have causal roles. Now suppose that a subject has his fingers on some buttons, which carry modest electrical charges that produce a pleasant tingling sensation, but which are also getting steadily hotter. Suppose further that a particular configuration of brain cells corresponds to a particular feeling in the subject, and can be picked out in no other way than by saying "this is the configuration when the subject feels heat at his fingertips", and that an analysis of his brain processes shows that the configuration leads to conscious thoughts about when it would become sensible for him to withdraw his fingers and forgo the pleasant tingling sensation. Then it would not be obviously inappropriate to say that the feeling played a causal role, any more than it is inappropriate to say that a ball that rolls off a table and onto the floor plays a causal role in making a noise, rather than saying that the particles of which the ball is composed play causal roles in disturbing particles in the atmosphere. Berent Enç's thoughts on causation and conditionals, in his book How We Act, are relevant here.

It would not be obviously inappropriate to speak in that way, but it might still be inappropriate. What might make it appropriate, or inappropriate?

A claim of appropriateness would best be sustained by a claim that states of phenomenal consciousness were on a par with macroscopic physical objects. There is a sense in which both are not really there, but are mere causally inert epiphenomena of the fundamental particles and forces. If they were causally inert in the same sense, that would strengthen the position of those who said that it was appropriate to regard states of phenomenal consciousness as more than epiphenomena, to the extent that it was appropriate to regard macroscopic physical objects as more than epiphenomena.

So the new question, into which we can transform the original question, is this: Are states of phenomenal consciousness epiphenomenal on the fundamental particles and forces, in the same way as macroscopic physical objects?

One reason to say that states of phenomenal consciousness and macroscopic physical objects were epiphenomenal in different ways, would be that laws of the same general type, physical laws, give us a grip on the behaviour of particles, and on the behaviour of macroscopic physical objects. It is a debatable claim, but not a crazy claim, that we could derive the whole of chemistry and biology from physics, too. (This is an upwards claim, different from the claim that we could reduce biology and chemistry downwards to physics.) We could claim that any emergence would not obstruct such an upwards derivation.

But then, could one also argue that no emergence would get in the way of an upwards derivation of facts about states of phenomenal consciousness? If it would not, and if this upwards derivation were feasible, we would have failed to separate off consciousness as something different from biology in a relevant way.

One sign that there might be an obstacle to such an upwards derivation is that descriptions of states of phenomenal consciousness are readily appreciated by us, but they might not mean anything to, for example, Martians, whereas human biology would be perfectly meaningful to Martians.

Another sign is that we do not yet have much idea of what such an upwards derivation would look like, whereas we have a pretty good conception of upwards derivations of chemistry, and of biology, from physics. But it would be unwise to assume that this is how things will rest. Our knowledge of the brain has advanced enormously over the last 20 years. We do not know how much we will learn in the next 20 years.

Finally, we must consider the objection that if we did see how facts about states of phenomenal consciousness were to be derived, the descriptions of the derived objects would not look like descriptions of states of phenomenal consciousness. They would be descriptions of brain states. The descriptions would not glow with the feelings that we have, when our brains are in those states. But it is only Martians who definitely could not see the descriptions as glowing with those feelings. If we were to follow a Churchland-type programme of reform of our manner of speaking, so that we started to speak in terms of brain states, and if we associated descriptions of brain states with inner feelings, the descriptions might well come to glow with the corresponding feelings, even when we read them in neurology textbooks.

Tuesday 7 May 2013

Wings of Desire


Every so often in the film Wings of Desire (alternatively entitled Der Himmel über Berlin), directed by Wim Wenders and released in 1987, we hear extracts from the Lied vom Kindsein (Song of Childhood), by Peter Handke. The text is here:

http://www.wim-wenders.com/movies/movies_spec/wingsofdesire/wod-song-of-childhood-german.htm

and an English translation is here:

http://www.wim-wenders.com/movies/movies_spec/wingsofdesire/wod-song-of-childhood.htm

Plenty of lines in the Lied illustrate the claim that philosophy begins in wonder (Plato, Theaetetus 155d; Aristotle, Metaphysics 982b11-13): for example, "Warum bin ich ich und warum nicht du?" ("Why am I me, and why not you?"); and "Wann begann die Zeit und wo endet der Raum?" ("When did time begin, and where does space end?").

It would be an interesting exercise to answer all of the questions that the Lied poses, and an equally interesting one to make all the connections that could be made between the text of the Lied and contemporary philosophy. For example, "Wie kann es sein, daß ich, der ich bin, bevor ich wurde, nicht war, und daß einmal ich, der ich bin, nicht mehr der ich bin, sein werde?" ("How can it be that I, who am I, was not before I came to be, and that one day I, who am I, will no longer be the one I am?"), invites us to think about the conditions under which indexical and non-indexical terms can refer, and about the peculiar effects of using non-indexical terms to refer to oneself (with echoes both of Moore's Paradox and of Perry's essential indexical), as well as inviting us to think about coming to be and ceasing to be.

I shall here take a look at these words: "Ist das Leben unter der Sonne nicht bloß ein Traum? Ist was ich sehe und höre und rieche nicht bloß der Schein einer Welt vor der Welt?" ("Is life under the sun not just a dream? Is what I see and hear and smell not just the appearance of a world before the world?").

They have a particular relevance in the context of the film, in which angels see the world and the people in it only in black and white, and they cannot intervene causally to change people's lives, but on the other hand, they can listen to people's thoughts. But let us allow ourselves to go beyond that context, and ask what connection there may be between the idea that life is merely a dream, and the idea that we may perceive only an appearance, rather than the world.

I shall take the former idea to be that life is but a fleeting set of impressions, which if properly understood, could not be taken to be of any importance. The sentiment here is Pindar's, that a person is but a dream of a shadow (skias onar: Pythian Ode 8, line 95), ignoring the comfort that the following two lines give, with their reference to the effects of Zeus's favour. If we take the idea that life is a dream to concern importance, that idea is kept securely, and interestingly, separate from the idea that we do not perceive the real world. If we took the former idea to concern the process that produced our perceptions, then there would be a risk that the two ideas could come together, especially if it were possible to have dreams that were reliably veridical, for example by virtue of a mechanism of pre-established harmony, or by virtue of some unknowable that produced both the real world and our perceptions in parallel.

I shall take the latter idea to be that we perceive an appearance of a world, where that perceived world is distinct from the real world, and stands between us and the real world. The presence of the additional world, required by the text ("der Schein einer Welt vor der Welt", not "der Schein der Welt vor der Welt"), might be thought to steer us towards the sort of picture that would be given by a two-world interpretation of Kant, but we must allow for two non-Kantian elements. The first is that there is no requirement to regard the perceived world as empirically real. The second is the possibility (but not the certainty) that the perceived world is straightforwardly caused by the real world. The talk of our perceiving an appearance of the additional world also gives scope to introduce sense-data, or some such intermediary, but that is not the main point. The essential thing is the additional world.

First, suppose that life was a dream, in the sense of unimportance that I have just given. Could it be that we would nonetheless perceive the real world, and do so accurately - or, alternatively, perceive an intermediate world that was like the real world in its details, so that we at least had an accurate representation of the real world, and arguably did (indirectly) perceive the real world?

We could, so long as we could not act in the real world. That is, we would need to be in the impotent position of the angels in Wings of Desire, or of the deceased in Sartre's Les jeux sont faits. That would be both necessary and sufficient to make our perceptions of no importance, even though what was perceived or at least represented to us, the real world, was of importance, and even though the perceptions of people who could act in the real world, perceptions that might be qualitatively identical to our own, would be important.

If we could act to change the real world, perceptions that conveyed the state of the real world would matter, no matter how convoluted the process by which they conveyed that state. Perceptions of a world that was not the real world, and that did not convey the state of the real world, would not matter. They would be like the dreams that we, real agents in the world, in fact have when we sleep. Those actual dreams do not matter, except perhaps in indirect ways that do not rely on the accuracy with which they represent the real world.

Now suppose that life was not a dream, in the sense that our perceptions did matter. Would that impose any restrictions on the perceptual process?

We would need to be able to act in the real world, in order for our perceptions to be important. But our perceptions would also need to be useful guides to action. They would not need to give us wholly accurate information about the real world, but they would need to be such that paying attention to them led to action that was, on the whole, more appropriate than our actions would be if we did not pay attention to them. They could, however, fulfil that condition, whether we perceived the real world or some other, intermediate, world.

Sunday 21 April 2013

Manipulating the distribution of lifespans


Suppose that you have the power to alter people's lifespans, but only on a statistical basis, not person by person. Perhaps you can do this through genetic engineering, or through putting something in the water supply. (As is normal in philosophical thought experiments, we shall not worry about practicalities.) Furthermore, you cannot ask people's permission before acting, and you are the only person in the world who can take this action.

To be precise, you can take, or refrain from taking, action that will have the following effects. The effects are only on offer as a single package: you cannot pick and choose selected effects.

A. The mean of the distribution of life expectancies for the affected group will rise.

B. The dispersion of that distribution will fall, but the overall shape of the distribution will remain unchanged. If, for example, the distribution was normal, it will remain normal, but with a lower standard deviation. The fall in the dispersion will be great enough to ensure that despite the increase in the mean, there is some age, above which the high-lifespan tail of the new distribution will fall below the high-lifespan tail of the old distribution. (Both distributions are to be given in terms of proportions of the populations to which they apply, so as to avoid issues about the effects of a higher mean on the size of the population.)

C. The mean of the distribution of years of ill health or disability at the end of life, and the overall shape and the dispersion of that distribution, will remain unchanged. Note that this effect is given in terms of years, not in terms of proportion of life. So the number of years of ill health or disability remains the same, even if lifespan increases.

D. Any correlation between position in the distribution of lifespans, and position in the distribution of years of ill health or disability, remains unchanged.

Thus, average prospects will improve: longer life, with no more years of ill health or disability at the end of life. But there is a tail of the distribution of lifespans, in which prospects are made worse by the change.
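Effects A and B can be illustrated numerically. A minimal sketch, assuming for simplicity that both distributions are normal, and using figures of my own invention (a mean of 80 years with standard deviation 15 before the action, and 85 with 8 after it):

```python
import math

def survival(age, mean, sd):
    """P(lifespan > age) under a normal distribution - a simplifying assumption."""
    return 0.5 * math.erfc((age - mean) / (sd * math.sqrt(2)))

# Hypothetical figures: the action raises the mean lifespan from 80 to 85
# years, while cutting the standard deviation from 15 to 8.
old_mean, old_sd = 80.0, 15.0
new_mean, new_sd = 85.0, 8.0

# At age 90, the new distribution still gives better prospects of survival...
better_below = survival(90, new_mean, new_sd) > survival(90, old_mean, old_sd)
# ...but at age 100, beyond the crossover, the long-lived tail is worse off.
worse_above = survival(100, new_mean, new_sd) < survival(100, old_mean, old_sd)
```

Whenever the standard deviation falls, the thinner tail of the new normal distribution must eventually drop below the fatter tail of the old one, however much the mean rises, which is why effect B can be guaranteed as part of the package.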

Now we come to the questions.

1. Should you take this action, if it is to apply to all people already born or currently in the womb, but not to anyone else?

2. Should you take this action, if it is to apply to all people who have not yet been conceived, but not to anyone else?

3. Should you take this action, if it is to apply to all people already born, currently in the womb, or not yet conceived?

Two points should be noted, before we consider answers.

The first point is that you are confronted with a single choice: take the action, in accordance with the terms of whichever one of 1., 2., and 3. is in force, or do not take the action at all. You are told which of the three is in force. You do not get to choose which one is in force.

The second point is that in drawing a line at conception, I do not mean to take a stance on whether embryos are people. I simply want to recognize the difference between entities that already have a determined genetic endowment, and potential entities that do not. The significance of genetic endowment is that it is likely to have quite a lot to do with prospects for lifespan and for infirmity in old age.

The case against taking action in 1. looks quite strong. Given that genetic endowment and lifestyle to date have a substantial influence on lifespan, there would be some group of people, ones with good genetic endowments and healthy lifestyles, who constituted a modest proportion of the population, such that we were aware of the criteria for membership of that group, and such that we could say that taking the action would shift the odds against members of that group, to an extent that one would not regard as trivial. Kantian objections to the use of people as means to other people's ends come to mind. The other people in question would be the people outside the group.

The fact that the members of the group would be people who were already well off, in that they would be people at the high end of the distribution of lifespans, hardly seems to be a satisfactory response. Imposing reductions in lifespans seems to be rather more fundamental than imposing high tax rates on high incomes.

My initial feeling is that the obvious utilitarian case for taking action in 1. is outweighed by considerations such as these.

One objection to this view would be that a decision not to take the action would itself be a decision to act: that is, that refraining from action would not be abstention, but a positive choice for the other side, and that one would be just as responsible for that as one would be for a choice to take the action, and responsible in the same way. Then a choice not to act would amount to deliberately disadvantaging those who were not in the group with long life expectancies. I shall not explore this view here, but it is a view that could be argued.

The case against taking action in 2. looks much weaker. People as yet unconceived have neither any particular genetic endowment, nor any particular lifestyle. It is tempting to say that there would be, in the future population, some members of a specifiable group of people (the group of people with good genes and healthy lifestyles), who would be worse off than they would have been, had the action not been taken. But that objection would not be rightly phrased. The word "they" would have no referent. The complaint that some members of a specifiable group would be worse off, once re-phrased to remove this difficulty, can amount to no more than a complaint that the distribution of lifespans, for the whole population, would be different. Given that, the utilitarian case seems to be a good one. In 2., a choice to take the action would be the right choice.

It might seem that the word "they" would have a referent, at least in the next few generations, if there was a strong hereditary element to longevity. It could refer to the descendants of people who were currently alive and who had good genes. But I doubt that this would be enough to create a referent. Since we are concerned with people not yet conceived, there would only be potential descendants. Any particular person currently alive and with good genes might not have any descendants, even though it would be very probable that some people or other, drawn from the set of those currently alive and with good genes, would have descendants.

There is a parallel with John Rawls's veil of ignorance. It is a veil of total ignorance. The people behind it have no particular characteristics, until they are dropped into the society that they have designed. So they can only sensibly think about overall distributions. There is a sense in which they cannot take up arms on behalf of a particular group on the basis that some arrangement would do an injustice to its members, even though criteria for membership of the group may be perfectly clear, because no-one has, at the time of design, characteristics that would determine whether he or she was a member. "Its members" lacks a referent. To make the parallel closer, we can imagine Rawls's deliberators thinking about possible changes to distributions that already apply in an actual society, a society which all the designers will join as a complete replacement population, in roles and with characteristics that will be allocated at random, once all existing members of the society have died.

I shall not reach any conclusion on the third possibility, applying the change to current and future people. The key question is this. If the future population will be, in total over the centuries, very much larger than the current population, can the utilitarian case for action outweigh the case against action that was set out in relation to 1.?

There is one more complication. Is it in the interests of a person to have descendants who enjoy long lives? If it is, then that would influence one's thoughts in relation to 3. Currently living people with good genes might lose some of their own lifespans. But suppose that the beneficial influence of good genes on lifespan was not passed down the generations to any significant extent, so that, for example, the great-grandchild of someone with genes that substantially improved prospects had no better chance than the average in the population of having genes that conferred such good prospects. Then the good genes of the current generation would not tend to keep a significant proportion of their descendants in the group that would potentially lose from the action. In that case, the members of the current generation with good genes might have self-regarding reason to favour the action.

Sunday 31 March 2013

Opposition to consensus

One of the complaints against those who deny, or doubt, anthropogenic global warming is that they are mistaken. That must be the fundamental complaint. We won't do the right things, unless we get the facts right.

Another complaint, less fundamental but more interesting ethically, is that by opposing the scientific consensus, they get in the way of action. This complaint is sometimes mentioned, for example in paragraph 7 in this piece by James Garvey:

http://www.guardian.co.uk/environment/2012/feb/27/peter-gleick-heartland-institute-lie

There will always be some politicians who will oppose action on global warming, and if some people offer arguments that action would be wasteful and pointless, they will give those politicians material to use in their own campaigns. This will reduce the probability of action at the level of governments, whether they act singly or co-operate at the international level.

My question here is whether there is a specific ethical failing in opposing a consensus, where the likely result is to delay action, or to reduce its scale, on a problem that is real and pressing if the opponents of the consensus are indeed mistaken.

If the opponents know that the evidence for their position is of poor quality, or if they are reckless as to whether it is of good or poor quality, there is a failing, but it is of a rather more general sort than any failing of opposing a consensus. The failing is that of not being sufficiently careful to put forward true claims, and to avoid making false claims, in a situation in which one ought to take more than usual care, because the consequences of people being misled are so serious. William Kingdon Clifford set a high standard here, in his essay "The Ethics of Belief". His standard is conveniently summed up in his words towards the end of its first section: "It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence". We might debate whether this was an appropriate standard when the stakes were very low. But with global warming, the stakes are very high, and the standard is undoubtedly apt.

Not all global warming sceptics are reckless as to the quality of the evidence for their position. Some would be too ill-informed to be aware that the quality of their evidence needed to be reviewed. But the corporate lobbyists, and their think tank pals, who push the sceptical position, cannot plead that kind of innocence. They know about the evidence. They study it when formulating their positions. They also know about the need to look critically at the quality of evidence. They spend a lot of time criticizing the evidence on the consensus side. Given that the overwhelming majority of scientific, as distinct from political, commentary is on the side that anthropogenic global warming is real, it must be reckless for them to take a contrary position, unless they have a large and thoroughly-analysed body of evidence to support that contrary position. The corporate lobbyists and think tanks that take up global warming scepticism have no reason to suppose that they have such a body of evidence. They are, therefore, at best reckless, and they stand condemned by Clifford's standard.

This does not mean that there is anything wrong with their challenging the detail of the scientific claims that are made, searching for errors, and highlighting alternative interpretations of the data. It does, however, mean that they should not crow that there is no sound support for the consensus scientific view, when they have no good reason to think that they have established that the consensus scientific view is baseless. One can be reckless, not only in taking up an overall position, but in making claims about how much one has shown, or about the extent to which the things one has shown undermine the other side's view.

Now let us leave the example of global warming, and ask whether there is a specific ethical failing of opposing a consensus, assuming that:

(1) the opponent of the consensus is not reckless as to the quality of the evidence for his position;

(2) if the consensus is correct, failure to act as the consensus view would recommend would have serious consequences;

(3) if the consensus is mistaken, there would be significant waste in acting as the consensus view would recommend.

It seems to me that there is no ethical failing here, unless, perhaps, the opponent of the consensus is in a position of exceptional power, so that his position is very likely to prevail, whatever the merits of the arguments on his side and on the consensus side. Absent such power, the opponent is acting in a way that is, in general, likely to lead to the advancement of our knowledge. The opponent is simply engaged in subjecting other people's claims to criticism. Criticism exposes error, and can strengthen correct positions that show themselves able to withstand the criticism, as Mill remarked (On Liberty, chapter 2).

What if we leave out condition (3), and suppose that while serious loss would follow from failing to act on the consensus view if it were correct, there would be no serious loss from acting on the consensus view if it were incorrect?

We might take the attitude of Pascal's Wager: we might as well act on the consensus view anyway. Then it might seem that opposition to the consensus view would be reprehensible, if it would be likely to hinder that desirable course of action.

However, even assuming that action on the consensus view would be the only sensible thing to do, and that opposition might hinder that course of action, it would not follow that opposition would be reprehensible. Opposition could still fortify belief in the consensus view, as suggested by Mill, if the opposer's arguments were found wanting. And if we were to adopt Clifford's attitude, we would want opponents of the consensus to be as active as they could be, in order to root out any mistaken beliefs that might happen to be held by the majority. That would be so, even if retention of the beliefs in question would not lead to any immediate adverse consequences, whether they were mistaken or correct. It would be so, both because an accumulation of erroneous beliefs, individually harmless, might have adverse consequences long after the beliefs were acquired, and because truth is something that is to be valued in itself.

Saturday 23 March 2013

Tricks of presentation


Yesterday, I saw the new David Bowie exhibition at the Victoria and Albert Museum. The exquisitely curated sequence of pieces of music, video clips, costumes, and other artefacts, demonstrates the power of presentation, far more than a single concert would do. I suppose the reason is that when we see contrasting tools of presentation, used in different concerts, we become aware of what each tool does, because it does not do quite the same as the corresponding tool in the next display. But we can remain carried away by each act of a pop star, because we know it is only an act. It does not profess to convey anything much about the Universe.

Now suppose we saw an exhibition of papal inaugurations and other grand religious ceremonies, and we therefore became more aware of how the tools of presentation worked in each one. Those who are currently carried away by such shows might cease to be so, because when something that is supposed to be more than an act, and to represent a point of contact with some profound truth about the Universe, is revealed to be shot through with tricks of presentation, doubt must be cast on the supposed profound truth. It should not need those tricks. And if the supposed profound truth is discarded, the show ceases to be a piece of fun, and is reduced to a charade. It cannot survive the loss of its raison d'être.

It is tempting to define an epistemic virtue, and its corresponding vice, in the following terms.

1. Suppose that there is some information.
2. The information might be presented in a variety of ways.
3. We shall only consider ways that would allow the subject to grasp the information.
4. The subject may form a view on the information's truth, or on its trustworthiness.
5. A propensity to form the same view, regardless of the way in which the information was presented, would be an epistemic virtue.
6. A propensity to form different views, depending on the way in which the information was presented, would be an epistemic vice.
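The propensity in 5. and 6. can be expressed as a simple invariance test. A minimal sketch in Python, assuming condition 3. holds for all of the presentations supplied (the judging functions and presentation labels here are invented for illustration):

```python
def is_presentation_invariant(judge, info, presentations):
    """Return True if judge reaches the same verdict on info however
    it is presented (the virtue in 5.); False if the verdict varies
    with the mode of presentation (the vice in 6.)."""
    verdicts = {judge(info, p) for p in presentations}
    return len(verdicts) == 1

# A subject swayed by glossy presentation exhibits the vice;
# one who judges the same way regardless exhibits the virtue.
gullible = lambda info, pres: "trustworthy" if pres == "glossy" else "doubtful"
steadfast = lambda info, pres: "doubtful"
```

The sketch makes the structure of the definition plain: the virtue is a property of the function from presentations to verdicts, not of any single verdict.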

This virtue and vice would come close to the virtue of scepticism (in the sense of being appropriately critical, not the Pyrrhonian sense or anything close to that sense) and the vice of gullibility, but the focus here is on the method of presentation, not on any other ways in which people might avoid being led astray, or might be led astray.

We could not test for the virtue, or the vice, in a given individual, by presenting the same information to him or her several times, in different ways, because the individual would remember previous presentations. We might use repeated presentation, with different forms of presentation being used in various orders, on a large sample of people, to test for the effectiveness of different methods of persuasion across the population as a whole, but that would be a different exercise.

We might note whether an individual was in general susceptible to the vice, by seeing whether he or she tended to accept the content of adverts or of propaganda. But our conclusions would probably be impressionistic, rather than based on a rigorous consideration of evidence. Indeed, if someone comes along with a test that purports to tell us whether a given individual is, or is not, uniformly susceptible to persuasion through the use of tricks of presentation, we should be sceptical of that claim.

Monday 18 March 2013

Artificial intelligence and values


An article in the latest issue of Cambridge Alumni Magazine (issue 68, pages 22-25, but 24-27 of the version readable online) discusses the work of the Centre for the Study of Existential Risk. Huw Price, the Bertrand Russell Professor of Philosophy at Cambridge, takes us through some of the issues.

The magazine is available here:

http://www.alumni.cam.ac.uk/news/cam/

and some information about the Centre is available here:

http://cser.org/index.html

I found this comment by Huw Price particularly striking:

"It is probably a mistake to think that any artificial intelligence, particularly one that just arose accidentally, would be anything like us and would share our values, which are the product of millions of years of evolution in social settings."

This is an interesting thought, as well as a scary one. It leads us to reflect on what it would be for an artificially intelligent entity to have alternative values.

I shall start by considering what values do. I take it that they are, or provide, resources that allow us to decide what to do, when the choice is not to be made on purely instrumental grounds. That is, the choice is not to be made purely by answering the question, "What is the most efficient way to achieve result R?". The choice as to what to do might be one that was not to be made on purely instrumental grounds, either because of the lack of a well-defined result that was definitely to be achieved, or because it was not clear that all of the possible means, drawn from the feasible range, would be justified by the ends.

It would be possible for some artificially intelligent entity not to have any values at all, even if it could always decide what to do. It might choose both its goals and its methods by reference to some natural feature that we would regard as very poorly correlated, or as not correlated at all, with any conception of goodness. For example, it might always decide to do what would minimize entropy on Earth (dumping the corresponding increase in disorder in outer space, since it could not evade thermodynamics). We may not care about outer space, but we know that the minimization of entropy in our locality is not always a good ethical rule. When such a decision procedure failed to give clear guidance, the system could fall back on a random process to make its choices, and we would be at least as dissatisfied with that, as with the entropy rule.
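The entropy rule can be made concrete with a toy sketch in Python (the candidate actions and their predicted entropy changes are invented numbers; the point is only that the procedure yields decisions without anything we would recognize as values):

```python
import random

def entropy_rule(actions, predicted_change):
    """Choose the action predicted to minimize local entropy.
    Falls back on a random choice when the rule gives no clear
    guidance, i.e. when several actions tie for the minimum."""
    best = min(predicted_change[a] for a in actions)
    tied = [a for a in actions if predicted_change[a] == best]
    return random.choice(tied)

# Invented predictions of local entropy change for three actions:
predicted = {"tidy the lab": -3.0, "grow a crystal": -5.0, "do nothing": 0.0}
choice = entropy_rule(list(predicted), predicted)
# choice == "grow a crystal": the unique minimizer, selected without
# reference to any conception of goodness.
```

Note that both branches of the procedure — the minimization and the random tie-break — leave us equally dissatisfied, which is just the point made above.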

The entropy example shows that there are things that do the job that I have just assigned to values, allowing entities to make decisions when the choice is not to be made on purely instrumental grounds, but that do not count as values. It is a nice question, how broad our concept of values should be. But we probably would have to extend it to naturalistic goals such as the maximization of human satisfaction, in order to have an interesting discussion about the values that artificially intelligent entities might have, in the context of the current or immediately foreseeable capacities of such entities. That is, we would have to allow commission of the naturalistic fallacy (supposing it to be a fallacy). It would be a big step, and one that would take us into a whole new area, to think of such entities as having the intuitive sense of the good that G E Moore would have had us possess.

We not only require the possessor of values to meet a certain standard in the content of its values, the standard (whatever it may be) that tells us that a purported value of entropy minimization does not count. We also require there to be some systematic method by which it gets from the facts of a case, and the list of values, to a decision. Without such a method, we could not regard the entity as being able to apply its values appropriately. It would, for practical purposes, not have values.

Sometimes the distinction between values and method is clear: honesty and courage are values, and deciding what action they recommend in a given situation is something else. At other times, the distinction is unclear. A utilitarian has the supreme value of the promotion of happiness, and a method of decision - the schema of utilitarian computations - that is intimately bound up with that value. From here on, I shall refer to a system of values, meaning the values and the method together.

Now suppose that an artificially intelligent entity had some decision-making resources that we would recognize as a system of values, but that we would say was not one of our systems of values, perhaps because we could see that the resources would lead to decisions that we would reject on ethical grounds, and would do so more often than very rarely. What would need to be true of the architecture of the software, for the entity to have that system of values (or, indeed, for it to have our system of values)?

One option would be to say that the architecture of the software would not matter, so long as it led the entity to make appropriate decisions, and to give appropriate explanations of its values on request.

That would, however, give a mistaken impression of latitude in software design. While several different software architectures might do the job, not just any old architecture that yielded appropriate decisions and explanations most of the time would do.

The point is not that a badly chosen architecture, such as a look-up table that would take the entity from situations to decisions, would be liable to yield inappropriate decisions and explanations in some circumstances. An extensive look-up table, with refined categories, might make very few mistakes.

Rather, the point is that when the user of a system of values goes wrong by the lights of that system - falls into a kind of ethical paradox - we expect the conflict to be explicable by reference to the system of values (unless the conflict is to be explained by inattention, or by weakness in the face of temptation, and an artificially intelligent entity should be immune from both of those). That is, a conflict of this kind should shed light on a problem with the system of values. This is one reason why philosophers dream up hard cases, in order to expose the limits of particular sets of values, or of general approaches to ethics: the hard cases generate ethical paradoxes, in the sense that they show how decisions reached in particular ways can conflict with the intuitions that are generated by our overall systems of values. If the same requirement that conflicts should shed light on problems with systems were to hold for the use that an artificially intelligent entity made of a system of values, the software architecture would need to reflect the structure of the system of values, and in particular the ways in which values were brought to bear in specific situations. Several different architectures might be up to the job, but not just any old architecture would do.

Given a systematic software architecture, for a system of values that we would regard as unacceptably different in its effects from our own system of values, we can ask another question. Where would we need to act, to correct the inappropriateness?

We should not simply make superficial corrections, at the point where decisions were yielded. That would amount to creating a look-up table that would override the normal results of the decision-making process in specified circumstances. That would not really correct the entity's system of values. It would also be liable to fail in circumstances that we did not anticipate, but in which the system of values would still yield decisions that we would find unacceptable.
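The failure mode of such superficial correction can be sketched as follows. Everything here is hypothetical: `flawed_decide` stands in for the entity's existing system of values, and the situations and decisions are invented for illustration:

```python
def flawed_decide(situation):
    # Stand-in for the entity's underlying system of values, which
    # (by hypothesis) sometimes yields unacceptable decisions.
    if "hospital" in situation:
        return "divert power from hospital"
    if "school" in situation:
        return "divert power from school"
    return "proceed"

# Superficial correction: override known-bad outputs at the point
# where decisions are yielded, leaving the system itself untouched.
OVERRIDES = {"divert power from hospital": "keep power on"}

def patched_decide(situation):
    raw = flawed_decide(situation)
    return OVERRIDES.get(raw, raw)

# The anticipated case is fixed...
patched_decide("blackout near the hospital")  # "keep power on"
# ...but an unanticipated case slips straight through:
patched_decide("blackout near the school")    # "divert power from school"
```

The override table grows only as fast as our foresight, while the uncorrected system of values keeps generating decisions in every circumstance.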

We would need to make corrections somewhat deeper in the system. Here, a new challenge might arise. As noted above, it is possible for values and methods of decision to be intertwined. In such value systems in particular, and perhaps also in value systems in which we do naturally and cleanly separate values from methods, it is perfectly possible that the software architecture would have intertwined values and methods in ways that would not make much sense to us. That is, we might look at the software, and not be able to see any natural analysis into values and methods, or any natural account of how values and methods had combined to produce a given overall decision-making system.

This could easily happen if the software had evolved under its own steam, for example, in the manner of a neural net. It could also happen if the whole architecture had been developed entirely under human control, but with the programmers thinking in terms of an abstract computational problem, rather than regarding the task as an exercise in the imitation of our natural patterns of thought about values and their application. Even if the intention were to imitate our natural patterns of thought, that might not be the result. The programming task would be vast, and it would involve many people, working within a complex system of software configuration management. It is perfectly possible that no one person would have a grasp of the whole project, at the level of detail that would be necessary to steer the project towards the imitation of our patterns of thought.

A significant risk may lurk here. If we become aware that artificially intelligent entities are operating under inappropriate systems of values, we may be able to look at their software, but unable to see how best to fix the problem. It might seem that we could simply turn off any objectionable entities, but if they had become important to vital tasks like ensuring supplies of food and of clean water, that might not be an option.

Saturday 2 March 2013

Reasonableness


An English jury, in a criminal trial, must decide whether the prosecution has shown, beyond reasonable doubt, that the accused committed the crime. Normally, all 12 jurors must agree, but judges sometimes allow verdicts agreed on by only ten jurors.

Suppose that a jury starts its deliberations, and after some discussion, takes a vote. Eight say that the prosecution has shown guilt beyond reasonable doubt, but four say that it has not. All of these opinions have been reached by considering only the evidence presented in court, and comments by jurors on that evidence. After more discussion, it becomes clear that no juror is going to change his or her opinion, so long as only those things are considered.

Jurors might then consider the pattern of voting. Any one of the eight might reason as follows.

"Some people in the jury room think that the prosecution has not discharged its burden of proof. There is no ground to think that they are not reasonable people, and in any case, it is unlikely that one would get four unreasonable people among 12 randomly chosen people, although one might get one or two. If the prosecution had discharged its burden of proof, they would probably have been convinced, because reasonable people generally hold reasonable views on such questions. They are not convinced, so I should change my view and vote for acquittal."

(It might be thought that there would be a mirror-image argument for the four: "Eight apparently reasonable people have concluded that the prosecution has discharged its burden of proof, and they would not have concluded that if it had not, so I should change my view". But that argument should be excluded by the fact that the burden of proof is on the prosecution. Doubt trumps certainty. The views of the four could plant doubts in the minds of the eight, but the views of the eight could only plant doubts over whether to acquit in the minds of the four, and a doubt as to whether to acquit is not enough to convict.)
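The juror's probabilistic claim can be checked with a simple binomial calculation. Suppose, purely for illustration, that each randomly chosen juror independently has a 10 per cent chance of being unreasonable:

```python
from math import comb

def prob_at_least(k, n, p):
    """Binomial tail: probability of at least k 'successes' in n
    independent trials, each with success probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

P = 0.1  # assumed chance that a random juror is unreasonable
one_or_two = prob_at_least(1, 12, P) - prob_at_least(3, 12, P)
four_or_more = prob_at_least(4, 12, P)
# one_or_two comes to about 0.61, four_or_more to about 0.026: one or
# two unreasonable jurors would be unsurprising, four would not.
```

On that assumption, the juror's reasoning is sound arithmetic: four dissenters are far more plausibly four reasonable people than four unreasonable ones.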

Clearly, this reasoning is not always followed in practice. If it were, we would never get deadlocked juries, because the reasoning would transform deadlock into acquittal. The psychological explanation may well be that at least some jurors think their job is to decide whether the accused is guilty, rather than to decide whether the prosecution has shown guilt beyond reasonable doubt. A seemingly more respectable reason would be that jurors made up their minds individually, on the basis of the evidence presented and other jurors' comments on that evidence, and did not consider that their views should be influenced by the views of other jurors. And yet, we may ask whether that reason really would be respectable. It would imply that each juror should decide on the basis of the standard of reasonableness of doubt that he or she would use when he or she had no-one else's guidance available, rather than on the basis of a standard of reasonableness of doubt that had been tested by reference to the conveniently available sample of 11 other people in the jury room. Would not such a test be likely to improve one's grasp of the appropriate standard? The concept at work should be as objective as possible: doubt that is reasonable, not doubt that an individual, with his or her foibles, might happen to consider reasonable.

Occasionally, comparable considerations are tackled explicitly in legislation. The UK is about to have a general anti-abuse rule put into its tax legislation. Assuming that the legislation follows the draft that was published in December 2012, the use of a tax avoidance scheme will only be caught by this new rule, removing the anticipated tax saving, if that use "cannot reasonably be regarded as a reasonable course of action in relation to the relevant tax provisions". (Even if the use of a scheme is not caught by the new rule, it may very well be caught by other rules.) For the text and commentary, see GAAR Guidance Part A, 11 December 2012, section 5.2, available here:

http://www.hmrc.gov.uk/budget-updates/11dec12/gaar-guidancepart-a.pdf

This is known as the double reasonableness test. If there are reasonable views on both sides, both that the use of a scheme was a reasonable course of action and that it was not, then use of the scheme will not be caught. It is natural, and very likely to be correct, to determine whether a view is reasonable by considering whether it is supported by arguments that a substantial proportion of people who are well-informed about the subject matter would consider to be reasonable, and whether there is an absence of any manifest objection that would lead most well-informed people to reject the view.

We should note a subtlety here. The legislation focuses on the reasonableness of a view, not of people who hold it. A reasonable person may happen to hold some unreasonable views. As the official commentary points out, it is not enough simply to produce an eminent lawyer or accountant, who states that a course of action was reasonable. (Ibid., paragraph 5.2.2.2)

If, however, a substantial number of eminent lawyers and accountants stated that a course of action was reasonable, it would be hard to maintain that this view was unreasonable. When we test for reasonableness, as distinct from correctness, the number of votes among experts carries some weight. (It may also carry weight in relation to correctness, but in a rather different way, and a minority of one can be correct.)

This reflects the fact that reasonableness is a normative concept that is directly related to the adoption of views, in a way that, where there is real debate among experts, correctness is not. What should you do? In fields in which there is such a thing as expertise, and in which you are not yourself an expert, you should limit yourself to adopting reasonable views, or suspending judgement. How can you avoid adopting unreasonable views? See what the experts think, and limit yourself to views that are adopted by decent numbers of experts (or to suspension of judgement).

Correctness of views is an aspiration, at least in relation to views of a type that have any prospect of being classified as correct or incorrect, but in areas where there is real debate among experts, views do not carry labels, "correct" and "incorrect", so the concept of correctness does not regulate the conduct of non-experts directly. We can only adopt strategies that are likely to lead to the avoidance of incorrect views, like limiting ourselves to reasonable views, or making ourselves experts and studying the evidence ourselves.

Thursday 7 February 2013

Artificial brains


The Human Brain Project aims to give us a computer simulation of the human brain, or at least to work towards that goal. Plenty of people are sceptical about the feasibility of the project. But there are also ethical questions. Some of them are mentioned on the project's website, here:

http://www.humanbrainproject.eu/ethics.html

The issues mentioned relate to what people might do, among, with and to other human beings, having learnt in detail how real human brains worked. Even if the simulation did not, at the neuronal level, work in the same way that actual brains worked, it could still mimic the brain at a larger scale, for example the level of processing ideas, in ways that would, for example, help political propagandists to work more effectively.

There is, however, another issue, related to the simulation itself. What, if anything, would make the simulation a moral client, so that ethical concerns would bear on a decision to modify its thoughts by direct intervention (rather than by talking to it and inviting it to accept or reject some new idea), or would bear on a decision to switch it off?

It would not, we may assume, have a power of action, or a power to move around the world. One might therefore argue that it would not have a well-grounded sense of self (compare some of the arguments in Lucy O'Brien's book, Self-Knowing Agents). But it could still have the internal configuration that would correspond to a sense of self, a configuration that was artificially engineered by the researchers, as if it had had a history of action. I assume here that such a sense could be retained, and would not be lost over a long period without action, just as a human being who became totally paralysed and immobile could retain a sense of self. The artificial brain would, however, need to think of itself as one who had become paralysed, or would have to be fed the delusion that it was in an active body in the world, or would have to remain puzzled about its sense of self. Given the nature of the project, it is most likely that the artificial brain would be fed the delusion of being in an active body.

Likewise, the artificial brain could have artificially engineered senses of pain and of fear, like those that were appropriate to a being which moved around in the world and needed to act to distance itself from dangers, and that had been evolved in the species, or developed in the individual, through encounters with dangers.

The sense of self, and the feelings, would have dubious provenance. If we were to think that such senses were given their content by the external reality that grounded them - a kind of meaning externalism - we would have to think of them as not having the content that they had for us. That might let us off recognizing the artificial brain as a moral client, but it seems unlikely that it would do so. Even a convinced meaning externalist would, after all, hesitate to turn off the nutrients that fed human brains which had been put in vats years ago, and had been plugged into the usual delusive inputs for long enough for the presumably externally grounded meanings of their thoughts to have changed.

Moreover, while the sense of self might be based on a fabrication, it is not clear that the sense of self would itself be unreal. It would be even harder to show that the artificial brain's theory of mind, used in its encounters with people, was unreal. And the researchers would be very likely to apply their own theories of mind and intentional stances to the artificial brain. At least, some of them would, entering into social relationships that would, between human beings, stimulate moral concern, while other researchers measured the results in order to understand how real human brains behaved in encounters with other people. Possession of a theory of mind looks like something that ought to make the possessor a moral client. It has certainly been argued to have that consequence in relation to great apes other than human beings, although the extent to which they really have a theory of mind is controversial.

On the other side, there would be arguments that the simulation would just be a large piece of software, that it was not even fully integrated into a single person because it was shared between several computers (although this might reduce the accuracy of the simulation, because large-scale co-ordinating electrical activity does seem to matter to consciousness), and that it would not even know if it were modified, slowed down or turned off - although modifications, and processing speed adjustments, would have to be made carefully in order not to leave traces of the old state or speed that would give the game away.

The software point has strength if we think that when an entity is not intrinsically dependent on its hardware, that makes it less of a moral client. We are intrinsically dependent on our hardware, or at least, we are now, and will remain so until we work out how to copy the contents of a brain onto some computer-readable medium. Even then, it would matter which body someone was, and what the history of that body had been; that degree of hardware-dependence would remain.

The point about lack of awareness of changes needs to be elaborated. It is utterly wrong suddenly to kill someone, even without their knowledge or any kind of anticipation. It is also wrong to manipulate people's thoughts or attitudes, in the way that politicians, spin-doctors and the advertisers of products do, without people's awareness of what is going on, rather than using open and rational argument. If it were to be acceptable to turn off an artificial brain, the reason would have to go beyond its own lack of anticipation of this fate. It would be likely to have something to do with the fact that a piece of software would not have existed within a caring community. It would have had no relatives to mourn it, and it would never have had a potential beyond what its creators had planned for it. (It would have had a potential beyond what its creators had coded: one of the features of neural nets is that they develop in their own way.) But that move in the argument is dubious, because the question is that of whether the software should have been surrounded by a caring community, one that regarded it as a moral client. Something similar could be said about the far lesser offence of manipulation. This might look acceptable, because the software would only ever have had the potential that its creators had planned. But whether that ought to have given them ownership of its mind is the point at issue.

I remain unclear as to whether a full simulated brain should be regarded as a moral client. I predict that if its creators did not so regard it, the rest of us would not reprove them. But they might reprove themselves for turning it off, and then comfort themselves with the thought that they had kept the final software configuration, so that it could be re-awoken at any time.

Friday 25 January 2013

Dostoevsky on deciding what to do


There is an intriguing view of human decision-making and freedom in Dostoevsky's Notes from Underground, part 1, chapters 7 and 8. Here, I shall analyse an argument from chapter 7.

Dostoevsky starts with a question. Why do we, again and again, knowingly and deliberately act in ways that are contrary to our own advantage? He promptly moves on to another question. What is a person's advantage? Then he claims that there is an advantage that cannot possibly be included in any catalogue of advantages.

The advantage that cannot be included in any catalogue is that of making an unfettered choice, in defiance of any careful computation of advantage. The argument may be reconstructed as follows. (This reconstruction goes beyond what Dostoevsky says, in order to secure the argument against some obvious logical objections.)

1. Suppose that a person, D, is aware of the contents of a supposedly complete set, S, of advantages to him or her, along with information about their relative importance, about how to obtain the advantages, and about the possibilities for obtaining various combinations of them. "Advantages to D" has a broad meaning. It may include advantages of benefiting others, at no obvious gain to D. There is no suggestion that D need be selfish in making the best possible selection of advantages from S.

2. D can now make a careful computation of what to do, in order to maximize the net advantage to D.

3. D can also exercise freedom, by going against the result of the computation.

4. The exercise of freedom is in itself an advantage to D, but it cannot be the result of the computation, otherwise it would not be an assertion of D's freedom. D could freely decide to act in accordance with the result of the computation, but that would not be the kind of unfettered freedom that is required here. It would not assert D's ability to live unconstrained by such computations.

This argument does leave space for action in defiance of the result of the computation to be included in S. But defiance could not coherently appear as all or part of the result of the computation. Its inclusion in S would therefore be idle.

Suppose, first, that the computation produces a single recommendation, to act in defiance of the result of the computation. Compliance with the recommendation would amount to defiance of it, and defiance of it would amount to compliance with it. (There would be the additional, substantial, difficulty that D would not know what to do. "Act in defiance of a recommendation to eat healthily", would give D an idea of some specific action. "Do not do what this sentence tells you to do", would give no idea of any specific action.)

Now suppose that the computation produces several recommendations, say "Eat healthily", "Move to another city", and "Act in defiance of the result of the computation". If we read this list as a conjunction, as I think we should, we find that D cannot comply with all conjuncts. Suppose that D eats healthily and moves to another city. Then if D complies with the final conjunct, it can only be by defying that conjunct, since that is the only remaining way to break the terms of the conjunction. If, on the other hand, D defies the final conjunct, then D must comply with the whole conjunction, which must mean complying with the final conjunct, as well as with the other two.
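The impossibility of complying with all three conjuncts can be checked mechanically. The following Python sketch (a toy model of my own devising, not anything in Dostoevsky) represents compliance with each conjunct as a truth value, and encodes the point that complying with the third conjunct just is failing to comply with the whole conjunction:

```python
# Brute-force consistency check for the three-conjunct result.
# c1: D complies with "Eat healthily"
# c2: D complies with "Move to another city"
# c3: D complies with "Act in defiance of the result of the computation"
# Complying with the third conjunct means NOT complying with the whole
# conjunction, so a consistent situation must satisfy:
#     c3 == not (c1 and c2 and c3)
from itertools import product

consistent = [(c1, c2, c3)
              for c1, c2, c3 in product([True, False], repeat=3)
              if c3 == (not (c1 and c2 and c3))]
print(consistent)
# → [(True, False, True), (False, True, True), (False, False, True)]
```

No consistent assignment has D eating healthily and moving to another city while settling the third conjunct either way; the only consistent assignments are those in which D defies at least one of the first two conjuncts and thereby complies with the third, which is exactly the apparent escape route examined in the next paragraph.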

The one apparently coherent option would be to defy either or both of the first two conjuncts, and thereby comply with the third. But on closer inspection, we can see that this would not work either. The reason is that it would be known in advance that "Act in defiance of the result of this computation" would amount to "Discard at least one of the specific prescriptions". Given that acting in defiance of the computation was considered to be an advantage, a member of S, this discard would be required in order to yield the optimal solution, if the prescription to act in defiance of the result were to be part of the result. Therefore, the discard of at least one of the specific prescriptions would form part of the calculation, before the result was given. (The prescriptions to be discarded might be chosen by some rule, or at random.) But then the result would be a conjunction of the remaining specific prescriptions and the prescription to act in defiance of the result. Again, at least one of the specific prescriptions would have to be discarded, in order to achieve the optimal result while still keeping the prescription to defy as part of that result, and this too would have to form part of the calculation. We would continue until only the prescription to act in defiance of the result was left. But as already noted, that would lead to incoherence.
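The regress just described, in which the calculation must fold each discard back into the result until only the prescription to defy remains, can be sketched as a short loop. This Python toy model is my own illustration (the names `DEFY` and `stabilise` are invented, and the choice of which specific prescription to discard at each step is arbitrary, standing in for the rule or random choice mentioned above):

```python
# Toy model of the discard regress.
# A result is a list of prescriptions; DEFY is the prescription to act
# in defiance of the result. Since complying with DEFY means dropping a
# specific prescription, the calculation must incorporate that drop into
# the result itself, and the step repeats.
DEFY = "Act in defiance of the result of the computation"

def stabilise(result):
    """Apply the discard step until no specific prescription remains.

    Returns the list of successive results, ending with [DEFY] alone."""
    steps = [list(result)]
    while DEFY in result and len(result) > 1:
        specifics = [p for p in result if p != DEFY]
        # Discard one specific prescription (here, simply the first).
        result = specifics[1:] + [DEFY]
        steps.append(list(result))
    return steps

for step in stabilise(["Eat healthily", "Move to another city", DEFY]):
    print(step)
```

However many specific prescriptions the computation starts with, the loop terminates with the prescription to defy standing alone, which is the incoherent single-recommendation case already dismissed.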

We may therefore conclude that while the prescription to act in defiance of the result could be included in S, contrary to what Dostoevsky asserted, it could not coherently feature in a prescription of what to do in order to maximize advantage, so its inclusion in S would be idle.