Analysis and Synthesis — Richard Baron's largely philosophical blog.

Publication and the shape of the knowledge base (9 February 2024)

<h2 style="text-align: left;">Introduction</h2><p>This post is about the dissemination and use of pieces of work that set out the results of research, that review specific pieces of research, or that survey fields of research.</p><p>Journals are the established form of dissemination. But technology has facilitated both new ways to make works available and new formats for individual works.</p><h2 style="text-align: left;">Old and new ways</h2><p>The traditional journal is managed by an editorial team, printed on paper, and sent out by post (as well as often being available online). The number of papers per year is for practical reasons tightly limited. Submissions routinely far exceed capacity. Decisions as to what to publish are taken by editors, either at sight or on the basis of reports received from referees.</p><p>Consequences of this system are a degree of prestige for those authors whose papers are published, the rejection of a great deal of work that is of a perfectly good standard, and a degree of confidence among readers that published papers have been scrutinised and meet appropriate standards of quality. Readers do not however have grounds to think that published papers are better than at least a substantial number of the papers that are not accepted for publication.</p><p>This long-standing system remains significant, with the modification that some journals now only exist in online format.
But original research of perfectly good quality can now appear in other ways.</p><p>Most conspicuously there are online repositories such as the arXiv and the Social Science Research Network. These repositories accept papers subject only to very light quality control, and they do not limit the numbers of papers accepted. Some of the papers will be modified by their authors, and the arXiv in particular has a system for keeping successive versions available. Some of the papers will go on to be published in more traditional journals, again perhaps in modified form. Such a traditionally published version is then regarded as the version of record, the one to cite when others refer to the research.</p><p>There are also online overlay journals which pick out papers in online repositories and link to them, sometimes with comments from the editors of the overlay journals. The idea is to bring together some papers particularly worth reading in a given field, and to add a layer of quality appraisal that is not provided by the repositories.</p><p>An environment of repositories and overlay journals separates the two tasks of making work available and directing attention to work of high quality, tasks which are combined in traditional journals. Far more work is made available, but those who want to have their attention directed to work that might well have made it into traditional journals can rely on overlay journals. Breaking down a single process into two tasks that can be performed separately has its advantages.</p><p>There are also ways to make work available that exist outside even the structure provided by organised repositories, although they carry a correspondingly low (but not zero) probability that links to the work will appear in overlay journals. 
Authors may post work on their own websites or blogs, and may draw attention to the work by posting links on social media.</p><p>Finally, long-standing journals may find themselves in competition with new ones that retain some of the features of their forebears. A recent example is the new journal <i>Political Philosophy</i>, which has been created by former editors of the <i>Journal of Political Philosophy</i>. The new journal is published online only, by the Open Library of Humanities, but it is too early to know whether this means that far more papers will be accepted than would be practical with a paper journal. Peer review is carried over from the traditional journal model. One thing that is facilitated by this new journal's online model, and by the fact that no commercial publisher is involved, is that articles will be free to access while no publication charge is imposed on authors. </p><h2 style="text-align: left;">Publication and citation</h2><p>There are those who do not regard papers placed on websites or in online repositories as published. They reserve that term for papers which have appeared in journals of the traditional sort, where there is an expectation that there will be quality control through peer review, through a restriction on the number of papers published even among papers that meet some standard which can be applied without full peer review, or through both. The view is that publication requires not only dissemination, but also a gatekeeper. </p><p>It is tempting to see this as a grumpy old establishment keeping control of its territory. But there is another aspect. Works get cited as support for arguments in later works. Earlier works present arguments and results on which the authors of later works seek to rely. There is a case for saying that only works which have got past a gatekeeper should be considered respectable enough to be cited in this way. 
And it might be thought convenient to take publication in the traditional sense as the primary indicator that works are good enough to be cited.</p><p>There is however scope to challenge a view that the gatekeepers of traditional publication are the ideal source of quality control. One may doubt that the gatekeepers of traditional publication are always good at their job. A mere restriction to papers which strike the editors as good enough to outrank others in competition for the available space in journals is certainly not likely to weed out all or only the papers on which others should not rely. And peer review can vary greatly in its quality, both because reviewers may lack expertise in every aspect of a paper's topic and because time-pressured and unpaid reviewers may not devote great effort to the task. It does not help that reviewers' reputations are usually not on the line. It is normal for them to remain anonymous and for their reports not to be shared at all widely.</p><p>An alternative that would address such concerns in the new world of online repositories would be no control over initial acceptance but then open comment, by individuals who would be named so that their expertise could be assessed, either on papers as wholes or on specific points. So long as the comments were collected in repositories alongside the relevant papers, everyone could benefit. And adverse comment on one element in a paper, which might have led to a refusal to publish in a traditional journal (even if the verdict was to revise and resubmit, if the author could not or would not revise to meet the objection), would not deprive people of access to other elements which might be of considerable value to them. Moreover, an easy way to add comments would make it easy to add new material which might change one's view of the original paper or of parts of it. 
In the traditional system, drawing the attention of readers of the original paper to relevant new material may need to await publication of a whole paper which comments on the original paper. And the authors of such later papers may be so concerned to convey their own views that they do not bother to make all the points on earlier papers that they could make.</p><p>Overlay journals provide an additional quality control mechanism in the world of online repositories. They may suffer from the same disadvantages as traditional journals. Publication in an overlay journal is neither necessary nor sufficient for a paper to be worth reading. But overlay journals can provide the kind of gatekeeping that traditional journals provide, while the repositories over which they lie ensure that papers which would not pass a gatekeeper still become available.</p><p>Finally, we should note the different degrees of significance of the controls discussed here in different areas of work.</p><p>The risk of being driven to inappropriate conclusions as a result of relying on mistaken content of earlier work which might have been corrected if the earlier work had been subjected to better quality control arises most frequently and on the whole most severely in mathematics and the natural sciences, less frequently or severely in the social sciences, and least frequently or severely in the humanities. One reason is that deductive or close to deductive chains of reasoning are most common in mathematics and the natural sciences, and least common in the humanities. Some mistaken content of earlier work may drive the author of later work directly to a mistaken conclusion whenever the chain of reasoning is deductive or close to deductive, but if links in the chain are weaker an earlier mistake is less likely to drive the author of later work to a mistaken conclusion. 
If the links in the chain are nowhere near deductive and a prospective conclusion happens to look doubtful on some other ground, there will often be scope to place little weight on the earlier content even when it is not recognised as mistaken. This is however only how things stand on the whole, not universally. For example, a historian might misdate a document and that mistake might deductively rule out certain analyses by other historians of events in which the document played a role.</p><p>Turning to the use of research to choose practical actions, the greatest risk of serious mistake by relying on mistaken content of earlier work again arises when the earlier work was in mathematics or the natural sciences. Conclusions of such work come across as more solid than other considerations when choosing actions, so they are likely to carry considerable weight. Such conclusions can also have the potential to rule out certain actions absolutely. So undetected mistakes can have significant consequences. When it comes to the social sciences, there is likely to be more wariness of reliance on conclusions because it is clear that conclusions in the social sciences are less soundly based than those in the natural sciences. There is also less likelihood that a conclusion will rule out an action absolutely. Finally, conclusions in the humanities are not of a kind that should lead directly to choices of action, although they may well inform the outlooks of people who make such choices.</p><h2 style="text-align: left;">The shape of the knowledge base</h2><p>Traditional publishing gives us individual papers which contain references to other papers. Within each paper one finds a large number of pieces of information. They are not explicitly isolated from one another as detachable items. Isolation might in any case be impractical without extensive re-writing because pieces of information are stated in ways that rely on their having the context of the rest of the paper. 
And connections between pieces of information are not usually set out explicitly in a web of links. Thus the largely implicit connections between pieces of information within a paper are mostly of a different nature from that of the connections to other papers that are given by explicit reference.</p><p>None of this need change with a move to online publishing in repositories, with or without overlay journals. But online technology does allow changes. In particular, papers no longer need to be confined to the linear form.</p><p>Let us present a radical possibility, while acknowledging that actual developments may turn out to be less radical. A paper could comprise a number of different files, which we shall call notes even though they might be in a final and polished form. Each note would set out one piece or a few pieces of information, with links between the notes and a contents list which would include links to all of them. That list could optionally set out the notes in a hierarchical structure or provide a suggested order in which to read them. Each note would have its own public identifier to facilitate linking to individual notes by other authors.</p><p>The overall effect would be that of a published Obsidian vault for each paper, or for all the work of a given author, with links between notes (whether in the same vault or in different vaults) as well as links to papers that were in more traditional forms. Ultimately, if this style became the norm and many links were also made between notes created by different authors in their own vaults, the effect would be much the same as that of one giant Obsidian vault which was made up of all the vaults of individual papers or authors.</p><p>One would want to maintain separate vaults for separate authors within such a giant vault, both to assign authorial control and to allow attribution. So the giant vault would be a notional one. 
But others could send an author their suggestions for amendments, which the author could accept or reject. Systems for recording versions and handling suggestions could be added, perhaps using software on the lines of Git.</p><p>There would be an effect on the ways in which authors used other authors' work. The notional giant vault would be full of notes that captured single thoughts or small groups of thoughts, notes of the sort that when they contain single thoughts are called atomic notes. Searches might tend to lead to the thoughts of several authors on a specific point, rather than to the thoughts of one author on that point and related points. On the one hand this would be an advantage. But on the other hand there would be an increased risk of gathering atomic notes from several authors and misinterpreting them because they were viewed outside their original contexts. The new context of the collection of notes on a single point would tend to drive the contexts of the original papers out of readers' minds.</p><p>Correspondingly, there would be less focus on individual papers. An imperative to read papers by authors B, C, and D on some topic would be replaced by an imperative to comb the giant vault for whatever had been said on particular points. One might gain in depth on specific points, but lose an appreciation of overall ways to handle topics.</p><h2 style="text-align: left;">Attribution</h2><p>The traditional approach of complete papers which have named authors and which are accessed as wholes gives clarity of attribution. If a paper includes a piece of information, an idea, or an overall approach to a topic, then the information, idea, or approach should be attributed to that paper's author unless he or she acknowledges or should have acknowledged another source.</p><p>Attribution could be preserved even if all work was within the notional giant vault. 
Individual authors would have their own vaults, and new vaults could be created for specific groups of authors working together on single projects. There would be a minor complication when a team worked together so that they wanted a single vault, but it was thought appropriate to attribute some particular piece of work within a project only to selected members of that team; notes within the vault that related to such a piece of work could, however, be tagged appropriately. The difficulty would be no greater than exists in the traditional system when a team produces papers that are to be attributed to selected members.</p><p>Attribution might however be affected, whether work was produced by one author working alone or by all or part of a team. When an idea is seen within a whole paper, the context can serve to distinguish it from similar ideas in other papers. If however ideas tended to be seen in an atomic way, detached from the contexts of the vaults in which they occurred, similar ideas might be seen as so close that it became unclear whether one author or another could really claim ownership of any one of them. There might be visible priority in time, if all notes were time-stamped and the history of amendments to them could be viewed.
But when the interval of time between two notes was small, the different authors were all working within the same intellectual context (a context which would be enlarged, merging contexts that might otherwise have been seen as different, by the existence of the notional giant vault), and both of the ideas could easily have developed from the context as it was shortly before either of them had been written down, such an order of temporal priority would be of little significance for the purposes of attribution.</p><p>(We may add that when notes written by different authors supplied the same idea, rather than similar ideas, or the same piece of information, and copying could be excluded, it would be appropriate simply to attribute the idea or the information to each of the authors independently.)</p><p>So a move from complete papers to atomic notes within a notional giant vault could at least sometimes reduce clarity of attribution. Would this matter?</p><p>From the point of view of individual authors affected, it might matter a great deal. People like to be given credit for their own ideas. On the other hand, if attribution ever came to be seen as unimportant, more useful work might be produced because producers would not devote any time or mental capacity to tracking down the sources of ideas and giving due credit except when that was necessary in order to show that some claim on which the new work relied had indeed been established by earlier work. 
More generally, if attribution came to be seen as unimportant, the focus would be on the corpus of knowledge that had been generated and was being enlarged all the time rather than on the contributions of particular people.</p><h2 style="text-align: left;">A bright future</h2><p>If the dissemination of work were to develop along the lines indicated here, at least in the direction of a notional giant vault of atomic notes, then the future could very well be on balance brighter than it would have been in the absence of such development, whether or not the extreme of a notional giant vault was ever reached. Searches for material relevant to new work would be faster and more comprehensive, new work could be made available quickly and without being compiled into the currently recognised form of a complete paper, and gaps in knowledge could be filled in one by one as and when material to fill them occurred to individual authors.</p><p>On the other hand, the discipline imposed by the recognised form of a complete paper might be lost. There is something to be said for requiring an author to set out in sequence the question addressed, how evidence was gathered, what evidence was obtained, the argument to conclusions, the conclusions themselves, and a discussion of their significance. Such discipline could however be restored by a norm that a batch of notes would be accompanied by a table of contents with links to the individual notes, such that when the notes were read in the order given the traditional sequence would be followed.</p><p>One danger to avoid would be a slide towards centralisation, with some authoritative figures seeing a notional giant vault as requiring management and directing individuals to working on certain topics in certain ways. Fortunately software like Obsidian works perfectly well with individuals all doing their own thing, even if there is notionally a single giant vault. 
Equally fortunately, academics and the like are strongly inclined to do the work they choose and to do it in their own ways. If there is a danger, it comes from people in authority threatening non-conformists with obstruction to their career progression. But any such danger should not be allowed to obstruct the spread of new ways to disseminate work done, or the growth of banks of work in forms that may be more useful than the traditional forms.</p><h2 style="text-align: left;">References</h2><p>arXiv: <a href="https://arxiv.org/">https://arxiv.org/</a></p><p>Git: <a href="https://git-scm.com/">https://git-scm.com/</a></p><p>Journal of Political Philosophy: <a href="https://onlinelibrary.wiley.com/journal/14679760">https://onlinelibrary.wiley.com/journal/14679760</a></p><p>Obsidian: <a href="https://obsidian.md/">https://obsidian.md/</a></p><p>Open Library of Humanities: <a href="https://www.openlibhums.org/">https://www.openlibhums.org/</a></p><p>Political Philosophy: <a href="https://politicalphilosophyjournal.org/">https://politicalphilosophyjournal.org/</a></p><p>Social Science Research Network: <a href="https://www.ssrn.com/">https://www.ssrn.com/</a></p>

Repeated encounters with works of art (11 January 2024)

<h2 style="text-align: left;">1. The question</h2><p>This post concerns encounters with works of art, buildings, and cityscapes. We shall refer to all of these as works. And encounters with works shall mean being in the physical presence of the originals and perceiving them in the ordinary, direct way, although perhaps with the aid of magnifying glasses or binoculars.</p><p>Many works repay repeated encounters. But many other works are waiting to be encountered for the first time. Our question is this.
When a choice has to be made, what reasons might one have to choose repeated encounters with works already encountered over encounters with works not already encountered? (For brevity, we shall simply speak of repeated encounters and new works.)</p><p>In order to make the question a serious one we shall be concerned exclusively with works of great merit, such that there would be a real cultural loss in never encountering them. And when we use the term "works" without qualification, we shall mean works of great merit.</p><p>Within the category of works of great merit, we shall not think in terms of an order of merit. The number of truly outstanding works is small enough that one could get round most of those within one or two given artistic traditions, without having to miss out on too many other works of great merit within those chosen traditions. So not establishing an order between works of great merit will not limit thought along our lines. And to encounter the truly outstanding works within a large number of traditions would be too ambitious for anyone, unless perhaps one were never to encounter most of the other works of great merit within many of those traditions and thereby deprive oneself of a full understanding of the context of the truly outstanding works.</p><p>When one happens to live near works already encountered, there is only a very modest trade-off between repeated encounters and encounters with new works. But when works already encountered are in city C at some distance from one's home, and other works equally worthy of attention are in many other cities, also at some distance from one's home, from city C and from one another, there is a more serious trade-off. 
The extent of art and the brevity of life together mean that repeated encounters in city C will require forgoing even single encounters with new works in some of the other cities.</p><p>There may even be a serious trade-off when city C contains many new works, so that encounters with them can effortlessly be combined with repeated encounters in city C and the result might seem to be a full life of artistic appreciation. This is because some artistic traditions may be far better represented by works in other cities, so that drinking one's fill in city C would still leave significant gaps in one's experience. Even if one only had a taste for a particular broad category of art, such as the art of Western and Central Europe or the art of East Asia, there would be many traditions and sub-traditions to explore. And a broad appreciation of those traditions and sub-traditions would enhance appreciation of particular works.</p><p>In what follows, we are only concerned to explore our question. We do not mean to put a returner to previously encountered works in the dock on a charge of irrationality. Choices like this are both unimportant to the rest of us and none of our business. We merely wish to investigate the reasons a returner, or another person who prefers new works to repeated encounters, might have for their choice. And we are concerned with personal benefits to the individual, rather than with what someone who might make a serious contribution to the discipline of the history of art should do for the sake of making such a contribution. We shall therefore not issue any prescription, or even try to reach an overall conclusion.</p><p>We shall investigate reasons to choose returns or to choose new works under two headings: the magic of the original and intellectual benefits. References are given at the end of the post.</p><h2 style="text-align: left;">2. The magic of the original</h2><p>Take a work of great merit.
There is something magical about being in the presence of the original. Photographs available online would not be enough. And repeated encounters would allow at least some magic to be experienced again and again.</p><p>This may be so, but would the renewed experience be the most worthwhile use of one's limited time? Or would it be better to move on to new works, simply on the ground of available magic (quite apart from the intellectual benefits of encountering a wide variety of works)? After all, the magic is not a single magic of works in general. It is a different magic in relation to each work. Relatedly, experiencing the magic in relation to several works does not diminish the magic in relation to the next work one encounters.</p><p>By far the largest dose of magic is likely to come from the first encounter. What is really special is to have been in the presence of the work, rather than to have been in its presence several times. To acknowledge this is not to fall into the vulgarity of "been there, done that, tick". Rather, it is to note that what is special about the original, as opposed to a perfect copy, is something intangible, a direct connection with the work's creation and subsequent history, and with the tradition within which the work originated (on tradition see Benjamin, "The Work of Art in the Age of Mechanical Reproduction", section 4). The direct connection is something to be felt rather than observed. And the most significant experience of any feeling is very often the first experience of it.</p><p>(On different causal connections and their effects on the worth of experience see Bertamini and Blakemore, "Seeing a Work of Art Indirectly: When a Reproduction Is Better Than an Indirect View, and a Mirror Better Than a Live Monitor".
The studies reported there did however involve hypothetical rather than actual encounters with a work or with a reproduction, and they made the assumption that the original work would not be encountered in our sense at all. Our assumption is that some works have been encountered for the first time, and that new works can either be or not be encountered. In no case do we consider the option of only seeing photographs of a work or only having indirect perception without at some time actually encountering the work itself.</p><p>See also Newman and Bloom, "Art and Authenticity: the Importance of Originals in Judgments of Value". That paper is concerned with the relative values of originals and copies. But the authors do identify the importance of the artist's physical contact with the original, which they call contagion. One could extend the importance of contact to cover the chain of contact from artist to viewer that would at the very least be seriously weakened by the substitution of a copy for the original, even though the chain all the way from the artist to the viewer of a copy would still be causal. One reason for the weakening would be that the proximate human producer in the chain would be a mere copier, rather than the creative artist.)</p><p>While the first encounter is likely to give the largest dose of magic, there may also be significant additional magic to be enjoyed on repeated encounters. This is particularly so when features of a work at a higher level than its detail are of great significance. Works on a large scale provide examples. Some paintings and sculptures, and many interiors, buildings, and boulevards and cityscapes, impress partly by their size. How did the creator keep control of such a large work and create a harmony that endures as one moves from the whole to small parts and back again? Scale creates its own magic, and repeated encounters will allow the enjoyment of doses of magic that were not available on the first encounter. 
Likewise genius of composition or the fit of a work with other works, for example within a sequence of works intended to be displayed together, can be a source of significant magic on repeated encounters.</p><p>Even aside from such considerations specific to particular features of works, there might be fresh magic on second or later encounters with the same work which was of a different nature from the magic on the first encounter. There might for example be something like the special feeling of greeting a friend already known, which is not a weaker version of the feeling of encountering someone interesting for the first time.</p><p>We therefore have ample reason to allow that often, neither all nor nearly all of the magic will be given by a first encounter. There is good reason to encounter works of great merit more than once. So there is a trade-off between repeated encounters and encounters with new works.</p><p>If magic were all that mattered, one might seek a rule of thumb that would maximise the total magic enjoyed by a given individual over his or her lifetime. And we would want to allow different rules for people with different characteristics. But magic is not all that matters. We should also consider the intellectual benefits of studying particular works of art.</p><h2 style="text-align: left;">3. Intellectual benefits</h2><p>Encounters with works may bring not only magic and pleasure, but also intellectual benefits. Even if one is not going to make a significant contribution to the discipline of the history of art, one's brain may be exercised and one's understanding of humanity and history may be enlarged. How does this affect the trade-off between repeated encounters and encounters with new works?</p><p>One might think that the intellectual benefits of repeated encounters could easily be replaced by the benefits of studying photographs available online. 
Study of a work may even be better done in that way, because one can zoom in to a work of art and get closer than would be permitted in a museum, or can study a building's upper reaches in a way that would not be possible from the ground. This would tilt the balance in favour of encountering new works rather than repeatedly travelling to view works already encountered.</p><p>However, when large scale plays a significant role, pictures online will fail to capture how the scale has been handled. And the same is true of small scale. An enlarged reproduction of a miniature does not fully reflect its nature. This may argue for repeated encounters with original works.</p><p>There may also be benefit from studying a work in its presence. One gets to exercise the brain to see what one can notice without the benefit of zooming in or otherwise manipulating the image. And one can see the work as its creator expected it to be seen, within the technological constraints of its time of creation. Thus one may come to appreciate how the creator overcame any limits on perception imposed by those constraints. Again, we have an argument for repeated encounters.</p><p>There is also a connection between intellectual benefits and magic. It is a feature of many works of great merit that there is always more to be seen, in the detail, in higher-level features such as the composition or the lighting, or in the way in which the artist worked. Even someone who is not a professional art historian can see enough to be able to write whole essays that explore works in such ways. (See for example the essays in Barnes, <i>Keeping an Eye Open: Essays on Art</i>.) And there is a certain magic in getting the experience from the original. The artist reaches out to the viewer and the channel of transmission is the original work alone, with no stage of reproduction being involved. 
To the extent that this is so, repeated encounters may be amply justified by the attraction of further study.</p><h2 style="text-align: left;">References</h2><p>Barnes, Julian. <i>Keeping an Eye Open: Essays on Art</i>. London, Jonathan Cape, 2015.</p><p>Benjamin, Walter. "The Work of Art in the Age of Mechanical Reproduction". In Benjamin, <i>Illuminations: Essays and Reflections</i>, edited by Hannah Arendt, translated by Harry Zohn. New York, NY, Schocken Books, 1969.</p><p>Bertamini, Marco, and Colin Blakemore. "Seeing a Work of Art Indirectly: When a Reproduction Is Better Than an Indirect View, and a Mirror Better Than a Live Monitor". <i>Frontiers in Psychology</i>, volume 10, article 2033, 2019, pages 1-12. <a href="https://doi.org/10.3389/fpsyg.2019.02033">https://doi.org/10.3389/fpsyg.2019.02033</a></p><p>Newman, George E., and Paul Bloom. "Art and Authenticity: The Importance of Originals in Judgments of Value". <i>Journal of Experimental Psychology: General</i>, volume 141, number 3, 2012, pages 558-569. <a href="https://doi.org/10.1037/a0026035">https://doi.org/10.1037/a0026035</a></p>Richard Baron, 2023-12-12: What if there were no prestige?<h2 style="text-align: left;">1. Introduction</h2><h3 style="text-align: left;">1.1 The question</h3><p>Some people enjoy prestige. They may be given titles which carry a certain cachet. Their holding such titles may lead others to think highly of them, and that may in turn induce positive self-perception on the part of the holders. Even if no titles are involved, records of achievements may be read in ways that lead others to think highly of the achievers, and that may in turn induce positive self-perception on the part of the achievers.
In either case, prestige takes us beyond what would be implied by the mere noting of facts and the detached evaluation of those facts as, for example, indicators of likely performance in future tasks. Prestige involves a bit of magic, albeit magic that is explicable entirely in natural psychological and social terms.</p><p>We shall speak of people being accorded prestige, meaning that they are widely thought of as being special. The according of prestige will often not be a conscious act. It will usually be just how people tend to think of a person, given that person's titles or achievements. Anthropologists sometimes speak of a prestige economy, alongside any monetary economy. Prestige is accorded in line with the norms of the relevant society, and it is found to be a valuable reward and motivator. </p><p>We shall also speak of people having titles, awards or positions bestowed on them. Bestowal will be of interest to us when the titles, awards or positions are ones that on the whole lead recipients to be accorded prestige. Bestowal will be a conscious act on the part of those who have the power to bestow. It may or may not lead to the beneficiary being accorded prestige. It would be perfectly possible for the wider population to think nothing of the title, award or position. And someone could be accorded prestige without any title, award or position ever being bestowed, simply because the population admired some achievement.</p><p>If prestige is not accorded, it will vanish. It exists only in the minds of the beholders. Our question is as follows. 
What would be the consequences if this magic which we have called prestige vanished for everybody, so that a title, award or position was never taken to be anything more than a label with descriptive content, and records of achievements were thought significant only to the extent that they were useful indicators of likely future performance?</p><h3 style="text-align: left;">1.2 Our scope</h3><p>Our topic is prestige that is noticed in society generally or at least across a large class of people, rather than prestige that only arises in narrow settings such as a family or a small institution, the inner workings of which do not attract public attention.</p><p>Prestige marks out a proportion of a particular group of people, which must amount to a modest proportion of the (perhaps larger) group of people who accord the prestige.</p><p>If the two groups are the same, as for example with the prestige of some engineers among engineers generally, the proportion of that group must be modest. If most members of the group were accorded the prestige, it would not be valued.</p><p>If however the two groups are different, that restriction need not apply. For example, doctors might enjoy a certain prestige among the population as a whole simply by virtue of their profession. Merely being a doctor would not give rise to prestige among doctors, but it might do so among people in general.</p><p>Prestige may arise formally, out of the receipt of specific honours (prizes bestowed by institutes, election to national learned academies with restricted membership, honours bestowed by the state, and so on) or out of the holding of specific positions. Prestige may also arise informally out of achievements that are recognised to be difficult, either naturally difficult such as climbing high mountains or artificially difficult such as publication in certain journals where competition for space is fierce.
And there are other things in between the formal and the informal, such as invitations to chair or speak at important conferences and distinguished roles in professional bodies. Indeed, some academics now include in their CVs not only lists of their publications but also sections headed "Indicators of esteem", where esteem may be seen as a step on the way to prestige.</p><p>We shall be concerned with prestige that attaches to distinctions, the achievement of which is easily identified and the quantity of which is tightly limited, whether by explicit or informal quota (as with state honours and membership of some learned academies), by the nature of the position (as when a council only needs one person to chair it), or by human nature (which renders it so difficult to climb high mountains or to make significant scientific discoveries that few people manage such things).</p><p>Identifiability of achievement excludes from our scope general recognition by colleagues. And tight limits on quantity exclude mere membership of an esteemed profession. Tight limits also allow us to distinguish prestige from mere esteem, which may be accorded by members of a group to a large proportion of that group.</p><p>One significant consequence of our choice of scope will be that the prestige which interests us is something that someone might well pursue by following a plan to obtain it.</p><p>We shall now explore our question as to the consequences of prestige's vanishing. We shall do so largely by going through the effects that the possibility of being accorded prestige may have.</p><h2 style="text-align: left;">2. Achievement</h2><h3 style="text-align: left;">2.1 Motivation</h3><p>To the extent that people like to be accorded prestige, its availability may drive them to work hard. 
It may also drive them to get a move on, given that we have limited lifespans and even more limited spans of time during which we are at the height of our powers.</p><p>So if prestige were to vanish, less might get done. This could be to the detriment both of the individual, who might fall short of fulfilling his or her potential, and of society. But it is also possible for the pursuit of prestige to lead people astray, so its disappearance might not be wholly disadvantageous either to the individual or to society.</p><h3 style="text-align: left;">2.2 The distortion of choices</h3><p>If someone's desire for prestige encourages him or her to work hard at tasks in which success increases the probability that prestige will be accorded, that is likely to be beneficial. And it may be so even if the individual is motivated by the prospect of prestige rather than by any feeling that one should do the best job one can.</p><p>If however prestige is of great importance to the individual, he or she may be greatly concerned to do what other people will admire.</p><p>In some areas, the effect will still be very likely to be beneficial. What people admire will be aligned with what any rational agent would pursue. In athletics, what is admired by members of society is the same as what should matter to the individual: faster, higher, stronger. In those academic disciplines in which it is appropriate to think in terms of answers the correctness of which cannot sensibly be doubted until new evidence comes along, we see the same alignment. Getting those answers is admired and should also be the individual's aim. This can be said of mathematics, a large majority of work in the natural sciences, and a smaller but still substantial proportion of work in the social sciences and the humanities.</p><p>In other areas, the alignment between what people admire and what any rational agent would pursue will be far from guaranteed. Artistic achievement is the obvious example.
Indeed, some now-admired artistic movements were at first scorned. Impressionism suffered from that treatment. But the same misalignment could arise in politics, with some campaigns against injustice being more deplored than admired, or in business, where someone who sacrifices profit to what is perceived as socially responsible consideration for those known as stakeholders may be admired, and someone with a single-minded focus on profit deplored, even though the profit-seeker may not only obtain higher rewards but also have a better prospect of serving the customer well and keeping the business's employees in their jobs.</p><p>There are two potential disadvantages when someone is intent on prestige but there is no guaranteed alignment between what people admire and what any rational agent would pursue.</p><p>The first disadvantage is that effort may be wrongly directed. A new artistic movement which would in fact be of value may go undeveloped. An important political campaign may not be pursued. Or a business may not be run to best advantage, whether of the owner or of the customers or employees.</p><p>The second disadvantage is that the agent may fail to be true to himself or herself. Whether one sees this as a serious disadvantage will depend on how much one admires the person who insists on living on his or her own terms, and who refuses to conform merely for the sake of conforming. Some of us do see it as a great virtue to live in that way. For a powerful case in favour, and against the conformist pursuit of compliance with social expectations for the sake of prestige, see Ayn Rand's novel <i>The Fountainhead</i>.</p><p>Both of these disadvantages would be removed if prestige disappeared.</p><h3 style="text-align: left;">2.3 Manipulating the system</h3><p>There would be other things to do in the pursuit of prestige which would not be directly detrimental to the quality of work.
They would primarily be to the detriment of any relevant system of bestowal, bringing it into contempt.</p><p>These would include both the outright fabrication of achievements, and practices which were less obviously unacceptable. Someone might ask friends who were also friends of bestowers of titles, awards or positions to put in a good word for them. Another approach would be to put great effort into presenting a case for recognition, putting everything in the best possible light without exactly lying.</p><p>Moreover, it would often not be difficult to sway the judgement of bestowers. The criteria for bestowal are generally not wholly objective, other than in special cases such as awards for triumph in sports where there is a wholly objective measure of achievement (such as time or height, rather than style) or awards for being the first to solve mathematical or technical problems which have been defined in advance.</p><h3 style="text-align: left;">2.4 When prestige is incidental</h3><p>People may act in ways that lead to their being accorded prestige, but without having prestige as a goal.</p><p>This may be the ideal, so long as prestige exists. If they have sufficient motivation, the lack of motivation from prestige will be no handicap. And they should neither be tempted to manipulate the bestowers of titles, awards or positions, nor be led astray by social conditions on the according of prestige.</p><p>One might indeed see lack of interest in prestige as a virtue in itself. Lack of interest would allow people to do things because they were seen as worthy things to do, rather than for any ulterior motive. And the vanishing of prestige would not matter to people with such an attitude.</p><h2 style="text-align: left;">3. Personal satisfaction</h2><p>Prestige may give rise to personal satisfaction. It feels good to have one's achievements admired by others. And such admiration confirms that the worth of one's achievement is not a figment of one's own imagination. 
It may indeed be anticipation of such satisfaction that motivates the pursuit of prestige.</p><p>The perception of validation is however not always justified. We noted above that there are areas of work in which what people admire will be aligned with what any rational agent would pursue, and areas in which such alignment is far from guaranteed.</p><p>Someone who was accorded prestige for achievements in areas of the first kind could have considerable confidence in the judgement of his or her admirers, except when prestige was accorded only by a few eccentrics and the one who basked in the prestige refused to acknowledge this fact.</p><p>Someone who was accorded prestige in areas of the second kind might legitimately have confidence in the judgement of his or her admirers, whether the judgement of those who bestowed awards which were widely regarded as prestigious or the judgement of people generally who thought well of the specific bestowal of an award on that person or thought well of that person's achievements.</p><p>Someone who was accorded prestige in such areas might however have doubts. What if the standards of his or her admirers were inappropriate? What if prestige was in fact accorded largely on the basis of conformity to current norms as to what kind of work should be produced, so that mediocre but conformist work could earn it while excellent but nonconformist work would not?</p><p>What would be the effects of prestige's vanishing? Individuals could still be satisfied with their achievements, but they would lose one source of validation of their satisfaction. Among some people, this might reduce motivation to achieve. As for society, any loss of motivation might be unfortunate. But at least there would be a reduction in motivation to conform to inappropriate standards.</p><h2 style="text-align: left;">4. The allocation of resources</h2><p>Institutions have to decide how to allocate scarce resources. Who will make the best use of them? 
The question can arise both in relation to resources for specific projects, and in relation to the allocation of longer-term jobs. Prestige can be used as a guide, on the basis that past performance is at least an indicator of having the talents needed for good future performance. We may ask whether it would be easier or harder to allocate resources appropriately in the absence of prestige.</p><p>If prestige were accorded on a basis which ensured that there was a high positive rank correlation with level of achievement, it would be sensible to use prestige as at least a rough guide to the allocation of resources. But if prestige were accorded on other grounds, for example on the basis of conformity to expectations which in fact deserved to be challenged, its use could easily lead to an inappropriate allocation of resources.</p><p>Even if prestige were accorded on the basis of level of achievement, there would be a difficulty. Unproven talent might be shut out. It is not that all new entrants into a field should get resources. Some would actually lack talent. But prestige would not be enough to ensure the best allocation of resources. There should also be an enquiry into work done by at least some new people. Sadly, that might be obstructed by a desire among those who currently enjoyed high prestige and control over resources to exclude challengers.</p><p>Overall, the disappearance of prestige might well reduce the misallocation of resources where prestige was not accorded on a basis which ensured a high positive rank correlation with level of achievement.</p><p>Where prestige was accorded on a basis which ensured a high positive rank correlation, its disappearance would remove a useful rule of thumb in the initial selection of potential recipients of resources. 
But that could be replaced by a cursory examination of the achievements of potential recipients.</p><p>A more detailed examination of their achievements and talents would in any case be needed to whittle down a shortlist to the list of actual recipients, if there was to be much prospect of the whittling down's leading to an allocation of resources which was not manifestly inferior to one or more alternative allocations. (We speak in these terms because it would be very unlikely that anyone could identify an optimal allocation, even supposing that an optimum would exist.)</p><h2 style="text-align: left;">5. Power</h2><p>Our discussion of the allocation of resources brings us on to the topic of power.</p><p>Having prestige can in itself give power, and the appropriateness of such power may be questioned. In particular, if the distribution of prestige reflects any considerations other than real talent and achievement, the influence on the distribution of power is quite likely to be inappropriate.</p><p>(We deliberately speak of appropriate and inappropriate distributions of power, not legitimate and illegitimate distributions. Our concern is with whether it would be sensible for certain people to have power in order that good results be achieved, not with whether it would be just for them to have power.)</p><p>We can see what might go right, or wrong, by considering different types of power that might be involved.</p><h3 style="text-align: left;">5.1 The power to influence other people's views</h3><p>Having prestige may make it more likely than otherwise that one's views on contentious topics will be accepted. If the distribution of prestige tracks real expertise, that may be beneficial. And a failure to have some way or other to indicate who had expertise and who did not would be unfortunate - although prestige might be too broad-brush an indicator to be ideal for this purpose.
But if the distribution of prestige does not track real expertise, or does so only poorly, the result may be unhelpful. Views of people whose expertise is modest may be given more weight than views of others whose expertise is greater. The problem would disappear if prestige vanished. It would however be useful to have some alternative rule of thumb to identify those whose views were likely to be worth serious consideration.</p><p>Prestige may also make it easier than it would otherwise be to get one's views disseminated, for example by getting one's own works published or by having others report one's views. Here there is a serious potential disadvantage, even if prestige is well correlated with expertise. The question of whether views are worth considering can only be answered by those who have become aware of the views. It is reasonable to direct one's attention to those who have expertise, and to ignore people who manifestly do not, but within the range of experts one should at least cursorily survey all views, not just those from prestigious people. Otherwise one could very well miss valid challenges to established orthodoxy. Again, this problem would disappear if prestige were to vanish.</p><h3 style="text-align: left;">5.2 The power to influence selectors for positions</h3><p>Having prestige may make it more likely than otherwise that one will be promoted to positions of leadership. The case here is slightly different from that with expertise, because effectiveness in leadership requires abilities that are less well defined than those which expertise requires but are more easily recognised by non-experts. So even if prestige is not ideally allocated, it may play a useful and relatively harmless role in getting candidates who enjoy prestige onto a shortlist.</p><p>Having said that, the final choice of candidate should be made without regard to prestige and should depend on actual possession of the required talents. 
The main case in which prestige can lead selectors badly astray is that of someone who comes from outside a given area of work, lacking a relevant track record but in possession of considerable prestige from some other area of work. For example, someone with great political or civil service prestige but little business sense might seek an appointment to run a business, and the selectors might be influenced by the prestige. If prestige were to vanish, this danger would go away. On the other hand, where this danger did not arise, a useful rule of thumb in the compiling of shortlists would be lost.</p><h3 style="text-align: left;">5.3 The power to allocate resources and positions</h3><p>Prestige may make one more likely to have powers of allocation of resources and positions.</p><p>This gives rise to a serious concern. If resources and positions are allocated by people with high prestige, the result may be a self-perpetuating oligarchy. The resources may well, and the positions are very likely to, enhance the prestige of the recipients after a little while. Then they may inherit power to allocate resources and positions.</p><p>This concern would arise whether or not prestige was accorded in line with actual talent. A self-perpetuating oligarchy would be likely to slow down the progress of those who, while they would do perfectly good work, had faces which did not fit. It would also tend to perpetuate established views on how people in the relevant area should think and work.</p>Richard Baron, 2023-11-30: Infinity and the unlimited<p>I have recently placed a post on this topic on the blog of the sculptor Dr. Gindi.
The post is here:</p><p><a href="https://www.dr-gindi.com/essays/infinity-and-the-unlimited">https://www.dr-gindi.com/essays/infinity-and-the-unlimited</a></p><p>Posts by others on the topic of the infinite are here:</p><p><a href="https://www.dr-gindi.com/essays">https://www.dr-gindi.com/essays</a></p>Richard Baron, 2023-10-27: Will future people adjust our present?<h2 style="text-align: left;">Introduction</h2><p>"If no one comes from the future to stop you from doing it, then how bad can it be?"</p><p>This slogan has been doing the rounds on social media. It is in a photograph of the title page of a spiral-bound document. The document is attributed to US Robots and Mechanical Men Inc., a fictitious company in Isaac Asimov's Robot stories.</p><p>In this post we shall assume that people from the future could come back to our time and make adjustments on the basis of what they could see would be the consequences of actions taken now. They might do so in order to make things better for some people (perhaps us or themselves) at some time later than now. And they might do so indirectly, by leaving information that was relevant to predictions somewhere where we would find it. Even if they intervened directly, for example by changing people's brains just before they made certain decisions or by introducing fatigue into machines that people were using so that the machines did not carry out the actions that the people had chosen, the affected people would not be aware of the fact of the interventions.
They might in retrospect be puzzled that they had taken decisions which were out of character, or that previously reliable machines had failed at critical moments, but that would be all.</p><p>What consequences might there be for some of our decision-making, and what would be the scope of and limits on the actions of the time-travellers?</p><p>In order to keep track, we shall give numbers to years.</p><p>Year 1 is now.</p><p>Year 600 is the year in which the time-travellers who visit us live. We shall assume that after visiting us they would go back there. It would be where all their friends lived and where the lifestyle was one to which they were accustomed.</p><p>Year 900 is a time when everyone alive in year 600 and in the next few generations after them will have died.</p><p>We shall speak of people living in, for example, year 600, to mean people who would be confined to that year and decades either side of it if there were no time travel. That is, the people who live in year 600 are the people for whom that year and decades around it are their home period of time.</p><h2 style="text-align: left;">The benefit for consequentialists</h2><p>One of the problems with selecting actions in year 1 on the basis of their consequences is that it is impractical to work out more than a few of the consequences, or to work out the long-term consequences. It is possible that an apparently harmless action would in fact have disastrous consequences years into the future, although it is also arguable that little blame should attach to a specific action in year 1 because many other actions not yet taken would also influence what happened years into the future. Consequentialists would love to be able to foresee all significant consequences of actions, but they cannot do so.</p><p>Once one reached, say, year 50, it would at least theoretically be possible to see what the long-term consequences up to that point of an action in year 1 had been. 
It would not be easy, because many other things would have happened in the intervening years. And to do the job in a way that would be of practical help to consequentialists in year 1 who had magically gained access to information that was in fact only available in year 50, one would have to work out how the world would have turned out following all of the alternative actions or inaction that might plausibly have produced a better outcome. That might remain impossible, or it might require the running of many simulations on the lines envisaged by the simulation argument that Nick Bostrom puts forward.</p><p>This is the message of the slogan. If actions in year 1 would lead to disastrous consequences, people living in year 600 would come back to year 1 and either prompt us to different actions or make other changes so as to break the causal chains that would have led to disaster. Then consequentialists could do the best they could in year 1, and be confident that their truly massive mistakes would be nullified. But would people living in year 600 bother? Could they coherently intervene? And what might people living in year 900 do?</p><h2 style="text-align: left;">Would people living in year 600 bother?</h2><p>We might expect people living in year 600 to intervene if it would make life better for themselves. And it might be that year 1 would be the easiest point at which to intervene, before the consequences of actions in year 1 had spread too widely and deeply. On the other hand, it might be hard to foresee the effects in year 600 of adjustments to year 1. Perhaps the best option would be a broad-brush adjustment to year 1 followed by finer adjustments to years 100, 200, 300, 400, 500, 550 and 575, making adjustments to each year as it would be modified by adjustments to earlier years. (Later we shall note a reservation about making adjustments to years close to year 600.)</p><p>Would people living in year 600 care about people earlier than themselves? 
They might foresee bad consequences of year-1 actions for people living in year 2, or year 50, or year 300. Those people would include the ancestors of people living in year 600. There is a large philosophical debate about what we owe to our near and distant descendants. Here we ask whether, given the opportunity, people should benefit their ancestors.</p><p>There is a natural feeling that people probably would do so, assuming no significant cost to themselves. We care about the living elderly. We may not give much thought to the states of mind of the dead, given that there are no such states of mind and there is no current way to affect the states of mind of the dead when they were alive. But if we could do something that would have benefited them back in their lifetimes, doing so would be a natural extension of our current habit of caring for the living.</p><p>Having said that, if there were a significant cost to people living in year 600 of benefiting their ancestors, one can envisage them saying that if they did not act, the ancestors would never be aware of the lost opportunity to be helped. But then, if intervention in the past became routine, people living further forward (say in year 800) might complain that they seemed to have got things wrong, say in year 799, making year 800 a bad year, and that they had not been helped by people living in year 1200. There might come to be an expectation that future generations, able to estimate the consequences of actions, should help. That expectation might be seen by those future generations as creating an obligation on them to help.</p><h2 style="text-align: left;">Could people living in year 600 coherently intervene?</h2><h3 style="text-align: left;">The risk of incoherence</h3><p>A common issue in discussions of backward time travel is that of coherence.
If someone living in year 600 comes back and changes something in year 1, it seems likely that things in year 600 would have to change too because the intervening course of history would have been different. Moreover, all the history between years 1 and 600 would need modification.</p><p>There are extreme concerns over coherence, for example that someone coming back from year 600 might not have existed under the re-written history, which would at the least seem to make it difficult for them to return to year 600. Then there are apparently more modest concerns that things would have had to be a bit different in various years. </p><p>Concerns of both sorts would however be of comparable severity in one sense. It is a fundamental principle that the world has, from any particular point of view, one history. At least, this is so subject to the isolation of some regions of spacetime from others on account of the finite speed of light and the expansion of space.</p><p>Call the history up to year 600 absent any intervention History-F (for first). An alternative, History-S (for second), would have to displace History-F from the point of intervention onward. Then what would have happened to History-F?</p><h3 style="text-align: left;">Multiple histories</h3><p>One answer involves the idea of multiple histories. History-F continues to exist and play out, but at the point at which a visitor from year 600 to year 1 intervenes, History-S starts to run in parallel. Anybody after that point of intervention is either in History-F and can only look back over that history, or in History-S and can only look back over that history. The histories over which they can look back are the same at all times before the point of intervention.</p><p>Whatever merits this approach may have in the interpretation of quantum mechanics, it does seem a bit much at the level of human life. 
It could be correct, but it would raise further questions about which was the real history, where there would be space for all the alternatives, and so on. The counter-intuitive nature of an approach does not show that it is mistaken. We must put up with a great deal that is counter-intuitive in physics. But at the human level, it at least intuitively (and therefore perhaps circularly) seems that intuition should carry more weight, if not as a guide to what is correct then at least as an indicator of the manifest incorrectness of some ideas.</p><p>Such concerns would arise even if a split into History-F and History-S was followed by a merger well before year 600, so that the picture was like that of a main railway track and a track that branched off to one side, ran in parallel to the main track, and then rejoined it. There would be some points at which there would be two different histories running in parallel. Indeed, concerns over coherence would be more acute. Without a subsequent merger one might at least say that there was no instant of time at which two histories were running, because each history would have its own time. But with a merger there would be much more pressure to accommodate a common timescale with two histories running at the same time, simply because one could count backward in time from a post-merger point. Moreover, there would be a question of which history should be considered to have preceded the merger. It would be a condition of merger that only one set of records and memories existed. So only one history would be written, and there would be no awareness of the branching and the merger. Nonetheless one could consider the unverifiable possibility of those events and the period of parallel histories having happened. 
This would challenge the intuitive idea that however obscure the causal pathways, the present was caused by a single past.</p><p>(We may note effects on motivation if there were such a merger, so that year 600 and years around it would be the same with or without the intervention. On the one hand the merger would make people living in year 600 more willing to intervene in the past, because it would make no difference to their lives. On the other hand they might say that there would be no benefit to them from intervening, so they might not bother. If the point of merger were before year 400, there would not even be any benefit to anyone whom anyone alive in year 600 had ever met.)</p><p>Now let us put all such concerns, with or without a merger, to one side. Just suppose that multiple histories would be created and that someone comes back from year 600 in History-F. They intervene and make a change to the benefit of people in some year later than year 1 (it need not be people living in year 600). That would make some people in History-S better off than they would have been had their history been the same as History-F. But it would leave unchanged the lives of everyone in History-F, including all of the intervener's friends back in the year 600 as it was when they set out on their journey (and as it would be on their return if there was a merger).</p><p>We could then ask what the benefit of the intervention would be. If it was to save people from some disaster, there would be a new set of people who did not experience the disaster, but the original people, in History-F, would still experience it. So there would be extra people who would have a better life. If it would overall be a good life, we could see the point of the intervention. But then, would people living in year 600 bother to create better histories for people who would otherwise not have existed at all? 
There are connections both with the question we posed above as to whether one should care about people who no longer exist, and with the view that every positive life is worth creating, which leads to Parfit's repugnant conclusion. (Intervention in the case we consider might however not be as repugnant as the breeding of as many people with even marginally positive lives as possible, because in History-S, and even in History-F, lives might be far better than marginally positive.)</p><p>One alternative which would remove many of the concerns we have raised would be to have the intervention wipe out History-F, so that it no longer occurred. This would however not be a panacea. It would go against our intuition that once a course of events has unrolled in time, the events cannot be undone. And interveners would have to leave a year 600 from History-F before returning to a year 600 from History-S. A question of the identity of the leavers with the returners would arise, although one might borrow from work on possible worlds and in particular David Lewis's counterpart theory. Alternatively one might claim that History-S was the only history there ever was, it being fixed from the start that the interveners would do their work. One might compare that idea with the conjecture of superdeterminism in quantum mechanics.</p><h3 style="text-align: left;">Changes at a safe distance</h3><p>Another answer is offered by the possibility that changes made in year 1 by visitors from year 600 would make no difference that would bother humanity before year 601. It is not that no change would be made anywhere. Information and its transmission have energy costs. There is no free information, and no free causation. But if the difference made, which would technically create a split in the universe, were parked somewhere out of sight, for example on a planet that orbited a distant star, the idea of a split history would not be so disturbing. 
Nobody would have to contemplate the existence of alternative human beings who had split off from them and who would have noticeably different lives, even if there were such splits in theory. The effects of the distant change would then work their way back to Earth at the appropriate time.</p><p>The challenge of possible incoherence would not go away, but at least it would not be so directly disturbing to our idea of how human lives play out in time. The effects of the change made by interveners who came back from year 600 would not enter into the human realm until year 601 or later, so there would be no split in human history.</p><p>The motive of interveners who came back from year 600 would also be clear. They would be making life better for their own society, including themselves once they returned to year 600. Indeed, the approach suggested here would not allow the interveners to arrange any benefit to people between years 1 and 600, on pain of reintroducing incoherence that would be visible in the human realm.</p><p>On the other hand, if that were all that would be done, it would not be clear that there was much point in intervening in year 1. Why not just use, in year 600, the technology then available to change how things would be in year 601 or later years? It is plausible that if people living in year 600 had the technology for time travel and to effect changes far away from Earth, they would have the technology to effect local changes at reasonably short notice. And that would probably be safer, both because time travel might not work as intended and because the causal chain from some distant planet over a period of 600 years would be highly uncertain.</p><h2 style="text-align: left;">What might people living in year 900 do?</h2><p>People living in year 900 would have more information than people living in year 600. 
They would know how the world turned out in years 601 to 899, and they might attribute some of the things that happened in that timespan to interventions made in years before 600 by people who lived in year 600. They might also have more information relevant to the choice of interventions that would influence years before 600, because they might have developed better methods of simulation of alternative realities.</p><p>It would therefore not be surprising if people living in year 900 intervened to over-write the work of people who lived in year 600, in addition to making changes in years from 601 to 899.</p><p>The layering of interventions on interventions in the period from year 1 to year 599 would not generate new problems of coherence, although the problems already identified would still arise, and in the slightly stronger form that there would be a need to handle three or more alternative histories instead of just two.</p><p>There would be a new concern for people who lived in year 600. They could have no confidence that their interventions would be final. But there would also be new reassurance for them. If their interventions were not in fact ideal, and if people who lived in year 900 cared enough, those people would intervene to make corrections.</p><p>The process could of course go on, with subsequent generations making further interventions. There would however be two possible restrictions, both of which might also apply to people who lived in year 600 who made the first set of interventions.</p><p>The first possible restriction is this. People at any time T might be reluctant to make changes directly within the lifetimes of people who were still alive at time T or who had been personally known to people still alive at time T. The thought here is that there might be a reluctance to disrupt the lives of people one knew or had known, in whatever sense of disruption was appropriate given how the problems of coherence and multiple histories were handled. 
And direct disruption at points within the lifetimes of those people might be considered significantly more offensive than disruptions at earlier points in time, which would likewise change the lives of the people in question. Any such gradation of offence would probably be based on a feeling that people at the point of intervention would feel a sudden jolt. That feeling would probably be mistaken, but unless the effects of intervention in the past became much better understood by year 600 than they are now, it would be hard to displace.</p><p>The second possible restriction is this. People might eventually lose interest in the welfare of people in the distant past, and would also think it safer to adjust their own welfare by making interventions in the more recent past so as to keep the causal chains shorter and their simulations less prone to serious error. So people living in year 600 might want to intervene as far back as year 1, but people living in year 900 might not go back further than year 300, people living in year 1200 might not go back further than year 600, and so on. This would limit the number of layers of intervention that would build up in any particular year.</p><h2 style="text-align: left;">The Prime Directive</h2><p>In the science fiction stories of Star Trek, there is the Prime Directive. There does not seem to be a canonical text, but the gist is that Starfleet personnel shall not interfere in the normal development of any alien society, for example by introducing technology the society has not yet discovered for itself. Even intervention in order to save a starship or its crew is not allowed.</p><p>It is possible that people living in year 600, even though they had overcome the technical challenges of time travel, would decide that the philosophical challenges were too great. 
Then they might adopt their version of the Prime Directive and leave us alone, even though we could hardly be called an alien society.</p><p>Richard Baron, 10 October 2023</p><h2 style="text-align: left;">Kant, Rand, and the world</h2><h2 style="text-align: left;"> Introduction</h2><p>This post is about the relationship between our discourse about the world and the world itself.</p><p>We have already explored the topic in Accounts and Reality:</p><p><a href="https://rbphilo.com/accounts.html">https://rbphilo.com/accounts.html</a></p><p>There we put forward a way to look at the relationship which would allow us to put to one side the question of scientific realism and parallel questions outside the natural sciences.</p><p>The aim here is different. It is to explore ideas from two authors, Immanuel Kant and Ayn Rand. Our main focus will be on Rand.</p><p>Our aim is not exegesis. We shall freely borrow material and take ideas out of context to suit our purpose. We shall also develop Rand's ideas beyond what she published. She might or might not have agreed with all of the developments we propose. But her central ideas do provide the framework for our developments, and to that extent we remain true to her thought.</p><h2 style="text-align: left;">The problem</h2><p>There is indubitably a world that exists and that has its nature independently of our thoughts about it. That nature is stable over time. It has a constancy that we capture by formulating laws of nature. We would never have evolved if there were not such an independent and stable world.</p><p>We give sophisticated descriptions of the world, sometimes in the terms of physics and chemistry, sometimes in the terms of other natural sciences, and sometimes in the terms of the social sciences and the humanities. 
We shall concentrate on descriptions that are expected to be very widely applicable across different places (on and off our planet) and times. Such descriptions are given in the natural sciences.</p><p>Our problem is to find a satisfactory way to describe the relationship between the independent world and our descriptions of it. In what sense, if any, do our descriptions set out the actual nature of the world?</p><p>The task is immediately made difficult by the fact that when we want to discuss the relationship between two things, we normally start by understanding the natures of the things separately. But here, we cannot say anything about the independent world without describing it. We can draw aside the curtain of a description, for example when we reduce some high-level phenomenon to lower-level phenomena, but then we have another description.</p><h2 style="text-align: left;">Immanuel Kant</h2><p>Kant, in his <i>Critique of Pure Reason</i>, tells us that our inability to escape from a sequence of descriptions points directly to the nub of the problem. We can never get beyond our own perceptions and descriptions of the world, which are perceptions and descriptions of the phenomenal world, the world as it appears to us.</p><p>This is a serious obstacle because in formulating our descriptions we impose mental structures, such as those of space, time and causation, which we have to impose in order for us to be subjects of coherent experience and to acquire knowledge.</p><p>Moreover, we should not conclude that the applicability of these mental structures discloses the nature of things as they might be independently of experience, things which he calls the noumena. In order for us to conclude that, things as they might be independently of experience would have to be presented to us in experience. 
Then the mental structures would inevitably apply, simply as a condition of experience, and we could not conclude that the structures were required by virtue of the nature of the noumena. All we can do is take it that there is some nature of reality as it is independently of experience, and then say nothing about that nature.</p><p>We should however be content with the phenomenal world, the world as it appears to us. It is the world as described by our sciences. It has stability and we can all know about a single phenomenal world. It is not our fleeting and subjective perceptions.</p><h2 style="text-align: left;">Ayn Rand</h2><p>In this section we shall set out Rand's scheme as elaborated by us. (This qualification is important: we have added elements that are not in Rand's writings. And as we are not engaged in exposition of her views, we have mostly done so silently. Our aim is to show that there is a useful system on Randian lines.)</p><h3 style="text-align: left;">The intrinsic, percepts, and the objective</h3><p>Rand regards as a huge mistake the Kantian generation of mystery as to the nature of the world as it is in itself. Her specific criticisms of Kant are contested. But she also offers an alternative approach.</p><p>For Rand we have the intrinsic, which is the world that is independent of our thoughts. We also have our percepts, where a percept is "a group of sensations automatically retained and integrated by the brain of a living organism" (Rand, <i>Introduction to Objectivist Epistemology</i>, chapter 1). Percepts may well differ from one person to another, and their nature will depend on the perceptual apparatus of the organism. Finally, we have the objective. This is the set of facts about the world. Facts are discovered when we apply the appropriate concepts to make sense of our percepts, where those percepts are in turn generated by the intrinsic world (which includes our own perceptual apparatus). 
The objective is therefore a relation between the intrinsic world and the appropriate concepts.</p><p>We need to interpret the idea of a relation in the right way. We should see relations not as processes of interaction but as connections, for example the connection of siblinghood, which can exist between two people and which can be seen when we put two siblings side by side. For a given state of the intrinsic world and a given set of concepts, a conceptual scheme, there will be a given set of purported facts that can be seen. If the concepts are appropriate, they will be facts and therefore parts of the objective.</p><p>The objective must comprise a single set of facts. All rational creatures, and all other objects, exist in a single world, and things may be said about that world. If the notion of a fact is to make sense at all, each fact has to be a fact for everyone. The alternatives would be on the one hand disjoint sets of facts for different groups of rational creatures, as if they were speaking languages between which no translation was possible, and on the other hand a shared language or translatable languages within which different creatures expressed their personal truths without any requirement to avoid contradicting one another. Neither of these options would make any sense.</p><p>The need for a single set of facts gives rise to a problem for Rand's approach. Different people may have different percepts. And if we consider non-human rational creatures, we can expect them to have different perceptual apparatus. So for a given state of the intrinsic world we would need a function, defined on ordered pairs each comprising a set of percepts and a conceptual scheme, that always had the same value. That value would be the single set of facts. How could this be so?</p><p>The solution lies in the role of concepts. 
Different conceptual schemes would be appropriate to rational creatures of different natures, specifically those with different perceptual apparatus and the consequently different sets of percepts. (We restrict ourselves to differences in percepts by virtue of differences in apparatus, and exclude differences by virtue of happening to observe different things.) So rational creatures with different perceptual apparatus could be taken to the same facts. The differences in conceptual schemes would balance out the differences in percepts.</p><p>There is however one more stage. When appropriate concepts are used the result will be statements of facts, and the use of different conceptual schemes will result in different statements. For Rand's approach to work, it must be possible to claim that different statements could be seen as stating the same facts. But this claim could be made. We may envisage scope to hold different statements together by applying rules of translation between them. Such identities of facts stated in different ways might however be limited to statements of large and coherent sets of facts. Individual facts might not be identifiable across different conceptual schemes, because schemes would carve up the world in different ways.</p><p>In such a way we can see the objective as common property among all rational creatures, and a particular statement of it as common property within a given community of rational creatures, although different members of a community may grasp parts of the objective with varying degrees of sophistication and some parts may not yet be grasped by anyone. Any conflicts as to contents of the objective, that is, as to whether some given purported facts are facts, will indicate that mistakes have been made, so that at least one party to a disagreement has not in fact captured the relevant parts of the objective. A particularly likely reason is that they are not using the appropriate concepts. 
We have indeed spoken of the (single) conceptual scheme that is appropriate to rational creatures of a given nature, rather than an appropriate scheme. We shall say more about the appropriateness of conceptual schemes below.</p><p>In speaking of different concepts and different statements of the same facts, we have added to Rand's thought. But this development is within the spirit of Rand. She is very keen to emphasise that facts are as they are, independently of what we or any other rational creatures might think.</p><p>Having said that, our development does require careful handling in order to remain consistent with what Rand says. For her, the objective is a relation between the intrinsic world and the appropriate concepts. If we recognise the role of different conceptual schemes and then consider the relation between the intrinsic world and a given conceptual scheme, it may seem that the relation would have to comprise not the objective but a given statement of the objective. This would be a significant deviation from Rand's thought and not a development of it. Fortunately, we can save the day and allow the relation to be the objective itself.</p><p>One cumbersome way to save the day would be to say that the objective was a relation between the intrinsic world and the entire set of appropriate conceptual schemes. But that would raise questions about the definition of the boundary of the set. For example, which types of perceptual apparatus should be considered?</p><p>A more straightforward way is the following. While a given appropriate conceptual scheme would only take us to one statement of the objective, it would be a statement of the one objective. And it would be the same objective whichever appropriate scheme was used. Thus the relation is still the objective rather than a statement of it. 
It might however be necessary to extend the chosen conceptual scheme in order to allow it to cover all the ground covered by all conceptual schemes, so as to avoid having parts of the objective accessible to some rational creatures but not to others.</p><h3 style="text-align: left;">The objective and the facts that comprise it</h3><p>It is sometimes appropriate to think of the objective as a single entity that comprises all the facts. This can help us to bring out the structure of Rand's approach and contrast it with Kant's. But when we consider questions of human progress and error, it is more helpful to consider specific purported facts and ask whether they are indeed facts, meaning that they are elements in a relation between the intrinsic world and either the whole appropriate conceptual scheme or some parts of it. (Henceforth, "the appropriate conceptual scheme" will be used to mean the scheme appropriate to rational creatures of the relevant nature.) And when we refer to facts without qualification, rather than purported facts, this is what we shall mean.</p><p>Concentration on individual facts will have the advantage of taking away any need to think it practical ever to set out the entire appropriate conceptual scheme. What will matter will be concepts that are needed to relate to the intrinsic world in order to yield the facts in question. These will need to be appropriate concepts, in the sense that they would have their places within the appropriate scheme.</p><p>Defining the objective as a relation between the intrinsic and the appropriate conceptual scheme does not allow for the distinction between the world as it is in itself and the world as it appears to us on which Kant relied. 
The objective, considered as a single entity which can be identified given a pair comprising the intrinsic world and the appropriate conceptual scheme, is not to be seen as parallel to the world in itself in the way that, for Kant, the world as it appears to us is parallel to the world as it is in itself. ("Parallel" is not meant to imply separability. We mean to permit a one-world interpretation of Kant under which one would regard the world as it was in itself and the world as it appeared to us as two aspects of a single world rather than as two worlds.) Rather, while the intrinsic world remains the world to which we relate (and in which we exist), we get away from thinking of an objective world. Instead we should speak only of the objective and statements of it. The intrinsic world is the only world. The objective is how things are, a notion that is more a concern of epistemology than of metaphysics. To echo Wittgenstein (<i>Tractatus</i> 1.1), it is the totality of facts and not of things. But to contrast with Wittgenstein, while this totality of facts is the objective, it is not the world. We know facts that are parts of the objective, and we state those facts in ways that reflect our concepts, but we know the world, the intrinsic world rather than the objective.</p><h3 style="text-align: left;">The appropriate conceptual scheme</h3><p>We have spoken of the appropriate conceptual scheme for rational creatures of a given nature as being paired with the intrinsic world to yield the objective as a relation between them. We need to give some substance to this notion of appropriateness.</p><p>A conceptual scheme should take us from our percepts to the objective, which is set out in the statements of facts that we make (and which would equally be set out in different statements given by other rational creatures). The objective should not depend on our perceptual apparatus, but our percepts clearly do depend on that apparatus. 
It is the conceptual scheme that provides the necessary flexibility. The appropriate scheme for rational creatures of some given nature is the one that is appropriate to their perceptual apparatus, where that apparatus encompasses both their external organs and whatever processing in their brains may be considered involuntary and inevitable. The conceptual scheme must be the one that will take the rational creatures from whatever percepts they have to the objective rather than to some merely apparent objective.</p><p>This point extends to routes to the objective that run through scientific instruments. Rational creatures construct them to pick up the values of certain variables. Then the creatures read off the data and draw conclusions. The instruments become parts of the creatures' perceptual apparatus. The creatures still need to find the appropriate conceptual scheme that will lead them to the objective, given how the instruments have been constructed.</p><p>The objective includes not only facts that once established may be expressed without reference to percepts, but also facts that are to be expressed in perceptual terms. The sizes, shapes, speeds and colours of objects, the pitches and volumes of sounds, and so on, are all facts that fall within the objective.</p><p>Moreover, all the facts included in the objective should in principle be open to being stated by creatures with any perceptual apparatus (although some facts would be hard to verify with the wrong perceptual apparatus). Colours and pitches as perceived by human beings might not be detectable directly by creatures with the wrong apparatus. They would have to say "to human beings, this object is blue", or something similar. But they could say it like that, picking out the fact of blueness to human beings. Human beings could identify the same fact. 
It is just that they would not normally bother to say "to human beings".</p><p>What we have said so far defines the appropriateness of a conceptual scheme in terms of leading to the facts. This fits very well with Rand's approach. The facts are there, whether we grasp them or not. And we have to make use of our perceptual apparatus and develop our conceptual scheme in order to improve our grasp of the facts. Now we shall look at how the development of conceptual schemes may proceed. In so doing we shall get a better idea of how we can tell whether our concepts are indeed appropriate, so that we can have confidence that the purported facts we identify are indeed facts.</p><h3 style="text-align: left;">The development of conceptual schemes</h3><h4 style="text-align: left;">The origin of concepts</h4><p>For Rand, our concepts arise neither in the intrinsic world alone (a realist position) nor in our consciousness alone (a nominalist position). Rather, they arise out of the need of consciousness to get a grip on the intrinsic world, with the grip being tested by reference to the perceptual appearance of the intrinsic world. They may be regarded as appropriate concepts when there is no conflict between the purported facts that are stated using the concepts and our observations of the world.</p><h4 style="text-align: left;">Measurement-omission</h4><p>An important stage in the development of concepts which will give us a grip on the world is the process of omitting inappropriate details, a process that Rand calls measurement-omission. For example, to arrive at a sensible concept of a table we need to focus on the presence of a flat surface that can support objects and of legs that keep the surface off the floor. 
We should omit specifics of size, colour, or the wood or metal used.</p><p>Inappropriate details are those which should be ignored because they would destroy the concept by excessive fragmentation or would be irrelevant given the nature of the concept.</p><p>On fragmentation, the concept of an animal remains valid even after cats are distinguished from dogs. To incorporate details of the specifically feline or canine into the concept of an animal would be to destroy it by fragmentation.</p><p>On irrelevance, it is for example in the nature of the concept of a cat not to mention physical location, so location would be an irrelevant detail. We are of course free to mention the locations of specific cats, and we might regard it as significant that a given subspecies only bred in a specific place. But we might not incorporate location even into the concept of that subspecies, because we might want to allow accidental genetic twins in other parts of the world to be covered by the same concept.</p><h4 style="text-align: left;">Error in the selection of concepts</h4><p>Error is for Rand an issue in relation to all concepts, from the most fundamental to the most superficial. This is because for her we gain, or through error fail to gain, a grip on the intrinsic world.</p><p>Specifically, Rand could not use a defence against error that Kant could use in relation to the most fundamental concepts. Kant could say that we simply had to use those concepts. Their use would not yield a false picture of the world as it was in itself, because it would not give us any picture of that world. It would only give us a grip on the world as it appeared to us. And since the concepts moulded that appearance, the result would necessarily be a grip on the world as it appeared to us. 
There would therefore be no such thing as error through choosing the wrong fundamental concepts.</p><p>For Rand, there is no such recourse to a mere appearance in relation to which any concepts would be bound to be appropriate because they gave us only a grip on the world as it appeared under the influence of precisely those concepts. And we cannot ignore the possibility of error, given that different people could use conflicting sets of concepts.</p><p>The response of Rand is that we must choose the right concepts to fit with the intrinsic world and yield a statement of the objective. If we choose the wrong concepts, we shall eventually find this out because the result will be a set of beliefs that contradict one another or that give us no grip or a poor grip on the world. (This is a point at which percepts do real work, alongside the work they do in guiding the development of our concepts. If what our concepts incline us to think is misaligned with our percepts, we can tell that something is wrong.) The objective, the set of actual facts rather than erroneous fictions, is to be discovered by getting things right, under the harsh discipline of the intrinsic world.</p><h4 style="text-align: left;">The refinement of conceptual schemes</h4><p>Conceptual schemes can become more refined, capturing more details of the world. The objective can be seen as captured in more or less detail by more or less refined conceptual schemes. Moreover, progress in that way would not require the rejection of facts established using less advanced conceptual schemes.</p><h4 style="text-align: left;">The revision of conceptual schemes</h4><p>Conceptual schemes can also change more radically, in ways that require rejection of what had earlier been thought of as facts. The development of relativity and of quantum mechanics are examples.</p><p>This is not in itself an objection to Rand's approach. 
Indeed it reinforces her message that we must find the concepts to fit the intrinsic world, and not expect the intrinsic world to mould itself to our concepts. But given the risk of conceptual revision, how can we have confidence that what emerges as the apparent objective is anywhere close to the relevant parts of the objective? It might be primarily a product of our conceptual scheme, with our appreciation of the intrinsic world having been distorted by that scheme.</p><p>The answer is that we cannot be wholly confident. But we can have reasonable confidence, in the sense that our apparent objective is as good as we can get for the time being and we have no specific reason to think that we are mistaken. (Occasionally we do have specific reason to think that while we may be on the right lines, there are still defects in our understanding.) We may also be able to celebrate the fact that what we currently take to be a statement of relevant parts of the objective is empirically adequate to a very high degree: there is no direct clash between the purported facts that we state and the observations we are so far able to make. And we may argue that our taking of sets of purported facts to be parts of the objective is regulated by the intrinsic world, since it is sometimes precisely observations that lead us to revise our view of what the facts are.</p><p>We should not however be wholly confident that we are making steady progress toward a full statement of the objective along a path that will avoid the need for any significant backtracking. It would be possible for us to pursue a line of conceptual development, and to gather data in ways that were shaped by our concepts, such that we tended toward a merely locally optimal set of concepts and apparent grasp of the intrinsic world which was still seriously defective. 
A large-scale rethink would then be required, although it might be many years before we discovered that this was so.</p><h4 style="text-align: left;">Against relativism</h4><p>Different conceptual schemes would give rise to different relations with the intrinsic world. Sometimes the outcome would be the same objective, because the differences between schemes merely counteracted differences between percepts that were produced by different perceptual apparatus. But sometimes the outcome would be different apparent objectives. This would not however mean that there was really more than one objective or that relativism could gain a foothold.</p><p>We should start by discarding mistaken conceptual schemes, on the basis of their leading us into contradiction or on the basis of conflicts between their predictions and our actual percepts.</p><p>After discarding mistaken schemes, we should be left with access to a single objective that could be set out in ways which were more or less refined, depending on the specific concepts from a single scheme that we chose to deploy. (The ways in which the objective was set out might or might not all be reducible to some fundamental way. The idea of a single conceptual scheme does not require us to believe in universal reducibility.)</p><p>At least that could be expected in the natural sciences, although even there it would not be safe to assert that the current conceptual scheme would last for ever, only that it should be regarded as the appropriate one given the evidence currently available. 
We would in practice only be able to say that we had high confidence that we had reached the objective, because we could not be wholly confident that all mistaken schemes had been discarded.</p><p>In the social sciences and the humanities we might expect more scope for there to be schemes that yielded purported facts which were inconsistent with one another, while the various schemes were all considered to be acceptable for the time being so that no scheme could justifiably be selected as the single appropriate one.</p><p>The need to allow for that possibility is however a challenge to the notion that relevant assertions made in the social sciences and the humanities are facts. Where there is scope to put different conceptual schemes to work and conflicting purported facts emerge from the interactions between the different schemes and the intrinsic world as presented through percepts, we should hesitate to regard any of the purported facts as parts of the objective at all. The most we could confidently say would be that all but one of the schemes which yielded conflicting purported facts might at some time be discarded, so that some of the purported facts might justifiably come to be regarded as facts.</p><p>It might seem that this way of keeping relativism at bay was circular. It seems to amount to saying that if relativism appears to get a foothold, then it is not the objective that is discovered, or at least that not more than one of any set of conflicting facts can be regarded as part of the objective. That is, relativism about the objective is excluded by fiat.</p><p>There is truth in this charge of circularity. It is assumed that there is a single objective to be found, so contradictory purported facts cannot be admitted. And a consequence is that not every study of the world can be counted as consistently uncovering the objective. 
But the success of some studies of the world, particularly in the natural sciences, gives us every reason to think that there is an objective to be uncovered. And there is plenty of scope for the objective in the social sciences and the humanities too. Many conclusions survive shifts between conceptual schemes.</p><h2 style="text-align: left;">Kant and Rand</h2><p>The ideas of Immanuel Kant and of Ayn Rand are far apart, and in some respects directly opposed. Nonetheless we can explore scope to modify their ideas in ways that might allow some reconciliation, and scope to find common ground even without modification.</p><h3 style="text-align: left;">Merging the noumenal and the phenomenal</h3><p>One way to seek a reconciliation between Kant and Rand would be to make Kant's world as presented by phenomena as good to Rand as the reality of which she speaks. But Rand would require reality as presented by phenomena to be the full extent of reality.</p><p>Kant would agree that reality as presented by phenomena was all the reality we could talk about in any detail. And the fact that given a certain intrinsic world and a certain set of concepts, Rand's objective would have to be as it was, would be entirely acceptable to Kant. His position is one of empirical realism. In his view we make empirical discoveries, with the world and not our preferences being dominant. That thought fits well with Rand's approach.</p><p>But for Kant, there are still the noumena. The noumena feature in a metaphysical theory. The supposition of the noumena is motivated by a view that certain metaphysical questions make sense, even though all attempts to tackle them as if they were questions about the world as we experience it would end in paradox and confusion, and by the need to find a place for the free will of human beings. But nothing about the noumena features in any natural science, and any attempt to learn anything specific about noumenal reality would be fruitless. 
Alas for any hope of reconciliation between Kant and Rand through matching the phenomenal world with the intrinsic world, being all the reality worth talking about in any detail would not be the same as being the full extent of reality, something that Rand would demand.</p><p>This might seem an odd demand to infer from Rand's work, since she spoke both of intrinsic reality and of the objective. But for her, intrinsic reality, and nothing else, could be paired with a conceptual scheme so that the objective could be identified as a relation between those two elements. There is no space for two sorts of reality, or even for one reality considered in two different ways. The objective is not reality viewed differently. It is the view of reality that we get by putting the appropriate concepts to work. For Kant, on the other hand, there is a reality as it appears to us, and we then perceive it. This would be a real point of disagreement between Rand and Kant. Rand's assertion is not a kind of logical positivist one that there would be no sense in talking of anything apart from intrinsic reality. Rather, it is a metaphysical assertion that there really is nothing else.</p><p>Even if it were a logical positivist assertion, that would not be acceptable to Kant. He would agree that nothing specific could be said about the noumena, but he would also require that reference to the noumena not be regarded as vacuous.</p><h3 style="text-align: left;">The emphasis on epistemology</h3><h4 style="text-align: left;">Forms of intuition and axioms</h4><p>Both Kant and Rand are concerned with what we know. For Kant, a study of our powers to know leads on to the conclusion that what is known is not the qualities of the world as it is in itself. For Rand, anything that counts as knowledge is knowledge of the nature of the intrinsic world. At that level, the conflict between Kant and Rand persists. 
But the centrality of knowledge to the approaches of both of them leaves open the hope of some agreement at the level of the nature of the act of knowing, and in particular in relation to the process of the application of concepts to get to grips with the world.</p><p>For Kant, there are certain forms of intuition (space and time) and concepts that we just have to apply. The need to use them reflects our nature, and the necessity of applying them should not mislead us into thinking that they reflect the nature of the world as it is in itself.</p><p>For Rand, there are three axioms that we must at least implicitly adopt if we are to think about the world at all. These are the axioms of existence, identity, and consciousness. The axiom of existence is that existence exists, or to spell this out a bit, that there really is a world out there, the intrinsic world, which is independent of our consciousness of it. The axiom of identity is the somewhat gnomic "A is A". To spell this out, each thing has the specific nature that it has, and its characteristics constitute its identity (Rand, <i>The Ayn Rand Lexicon</i>, entry for Identity). As a practical corollary, one must face up to the natures of things as they are. One can take action to change things in the world, but merely wishing that things were different or trying to view them using inappropriate concepts will leave them as they are. The axiom of consciousness is that one exists possessing consciousness, which is the faculty of perceiving that which exists. As Rand puts it to pull the picture together, "Existence is Identity, Consciousness is Identification" (Rand, <i>The Ayn Rand Lexicon</i>, entry for Identity).</p><p>The axioms underpin a way of thinking that Rand picks out as fundamental. She argues that a crucial step on the road to making sense of the world is to recognise units, such as a person or a table. These units are then grouped together under concepts. 
A modest degree of conceptualisation is needed to pick out units at all, as when someone identifies "that thing". But it is only once units have been picked out that a person can group them on the basis of some appreciation of similarities and differences, and then enquire into their characteristics so as to define concepts which allow sensible groupings. Moreover, even to get as far as picking out units in a way that will allow the application of concepts, things must be out there in the world (existence), they must have properties by which they can be recognised and grouped (identity), and we must be conscious of them (consciousness). We might get as far as picking out units and grouping them merely because existence, identity and consciousness prevailed, without our being aware of their prevalence. But in order to achieve knowledge at anything like the level we have in fact achieved we need to think about what we are doing, reflect on what we know or seem to know, and consciously work out what to do next in the way of making further empirical investigations or defining new concepts. So we need to be fully aware of existence, identity and consciousness.</p><p>The axioms are not concepts. But they correspond to the three axiomatic concepts, the concepts we must put to work, the concepts of existence, identity, and consciousness. And acceptance of the axioms and the concepts does parallel Kant's requirement to use the forms of intuition and the concepts he lists in the sense that acceptance is required in order to make any progress. There is even a noteworthy parallel in the role accorded to time, the form of inner intuition for Kant and something fundamental to our grasp of the world for Rand. In chapter 6 of <i>Introduction to Objectivist Epistemology</i> Rand notes that it is only once we consciously take on board the concepts of existence, identity and consciousness that we can make real progress by understanding continuity and change in the world. 
And continuity and change are essentially temporal concepts.</p><p>There is however a difference, particularly from the use of Kant's forms of intuition. For Rand, our thought must accord with her axioms in order for us to have any hope of getting things right. For Kant, we have to perceive in the ways that use of his forms of intuition ensures. So for Rand it is about thought, while for Kant it is about perception.</p><p>Having said that, the relevant sense of perception in Kant's thought is not the basic level of incoming sensations but the more elevated level of organising them, a level which arguably involves a degree of unconscious thought. This does however correspond to Rand's stage of percepts, which follows the stage of sensations but precedes the stage of the application of concepts.</p><h4 style="text-align: left;">Extensive knowledge of a stable world</h4><p>For Rand, the one and only world is what we come to know. And while our knowledge is never likely to become complete, we can in principle explore everything there is, with nothing beyond our reach for reasons that would be supplied by metaphysics or epistemology. We may be delayed by reasons supplied by the sciences. It may for example be very difficult to make instruments of a desired refinement. And we may, particularly in fundamental physics, find ourselves not knowing whether there is information about what particles are doing that is inevitably hidden from us or whether there is no fact of the matter about what particles are doing. But our being faced with such a disconcerting dilemma would not reflect some inevitable metaphysical or epistemic principle. Rather, it would merely reflect the inappropriateness of our expectation that the microscopic world should conform to the style of the macroscopic world.</p><p>Another important point is that for Rand, the world has its own stability. Things do endure, and they participate in causal relationships in accordance with their natures. 
This makes knowledge possible. Kant would likewise be happy to allow stability in the sense of endurance, natures and causal relationships in the world as it appeared to us. But for Rand, the stability we find when we learn about the things in the world and connect their natures to their causal powers is a stability of the world as it is in itself. Kant is by his own theory debarred from even speculating about the stability of things in themselves.</p><h4 style="text-align: left;">Knowing the world from the inside</h4><p>An important part of Rand's picture is that we are elements in the world, whose perceptual and intellectual capacities are given by the nature of the world itself. We can explore how our eyes and brains work, and what we have discovered or may eventually discover is what there is to be known about our intellectual functioning. Our abilities to identify and re-identify objects, to conceptualise, and to trace chains of causes and effects give us immense power, but they do not make us beings outside the world. We are in the world and what we will discover is what is going on around us in that world. Moreover, there is no mystery about how we know. Our processes of perception and conceptualisation are natural, not magical.</p><p>There is a contrast here with Kant, who sees us as rational creatures with a true nature that is not to be defined in the terms of the world as it appears to us (although Kant would of course allow research into the eye and the brain). This view goes hand in hand with the idea that there is a true nature of the world which is outwith the grasp of empirical exploration. </p><p>Kant's view may also make it somewhat mysterious how we perceive the world. The one sure way to remove the mystery would be to adopt a firmly one-world interpretation of Kant that took an empirical description of our processes of perception to be a good guide to how those processes really worked. 
That would however risk making claims about us as we were in ourselves, claims of a sort that Kant would forbid.</p><h2 style="text-align: left;">Conclusion</h2><p>Rand and Kant have markedly different theories of how the world is, and those differences spill over into differences as to the nature of our knowledge. But there are some points at which their two lines of thought can be related to each other. And their basic questions about the world and about knowledge are the same, even if their answers are different.</p><h2 style="text-align: left;">Sources</h2><p>Kant, Immanuel. <i>Critique of Pure Reason</i>.</p><p>Rand, Ayn. <i>Introduction to Objectivist Epistemology</i>, expanded second edition edited by Harry Binswanger and Leonard Peikoff, with an additional Essay by Leonard Peikoff. New York, NY, Meridian, 1990.</p><p>Rand, Ayn. <i>The Ayn Rand Lexicon: Objectivism from A to Z</i>, edited by Harry Binswanger and with an Introduction by Leonard Peikoff. New York, NY, Meridian, 1988.</p><p>Wittgenstein, Ludwig. <i>Tractatus Logico-Philosophicus</i>.</p><div><br /></div>Richard Baronhttp://www.blogger.com/profile/17869390364282686725noreply@blogger.com0tag:blogger.com,1999:blog-2055583406917680385.post-9807910704772264012023-06-17T11:19:00.000+01:002023-06-17T11:19:28.753+01:00Tax reduction: law and ethics<h2 style="text-align: left;">Introduction</h2><p>This post is about different ways to try to reduce the tax one pays, and about relationships between those ways, the law, and ethical considerations.</p><p>This post is not advice on tax law. Anyone with questions or concerns about the position of an actual taxpayer should seek appropriate professional advice. We will not provide advice or recommend an adviser.</p><p>The focus is on the UK. We shall therefore refer to the tax authority as HMRC, which is short for HM Revenue and Customs. 
Other countries handle some of the issues differently, but the same general style of enquiry may be useful in relation to those other countries.</p><p>We have written two earlier blog posts on issues related to legal provisions we shall discuss:</p><p><a href="https://analysisandsynthesis.blogspot.com/2012/07/tax-avoidance-and-problem-of.html">https://analysisandsynthesis.blogspot.com/2012/07/tax-avoidance-and-problem-of.html</a></p><p><a href="https://analysisandsynthesis.blogspot.com/2013/03/reasonableness.html">https://analysisandsynthesis.blogspot.com/2013/03/reasonableness.html</a></p><h2 style="text-align: left;">Categories of conduct</h2><h3 style="text-align: left;">The traditional categories</h3><p>There is a traditional distinction between three things one can do in order to reduce the tax one pays: tax planning, tax avoidance, and tax evasion.</p><p>Broadly, planning involves things like putting money into tax-privileged investments such as pension funds, or ensuring that a home bought to rent out becomes the owner's main private residence for certain periods so as to minimise the proportion of the gain on sale that is taxable, or structuring a group of companies so that if one company makes a loss, it can be netted off against the profits made by another company to reduce the amount that is taxable. Avoidance is planning that looks too much like the exploitation of loopholes. It often involves complex structures and transactions that have no purpose other than to save tax. It is common to speak of avoidance schemes, where a scheme is a complete structure and set of transactions. Evasion is the provision of false information to HMRC, or the withholding of information, in order to pay less tax.</p><p>There is also a well-established distinction between different consequences of actions that in some way harm others. This is the distinction between restitution, making good the loss suffered, and penalties. 
In the context of tax underpaid, restitution arises in the form of payment to HMRC of the tax that would otherwise have gone unpaid.</p><p>Traditionally, tax avoidance which did not succeed because HMRC argued successfully for an interpretation of the law that left no loophole only led to restitution. The tax which the taxpayer had hoped to avoid had to be paid, perhaps with interest, but that was all. There was nothing illegal in setting up complex structures and carrying out complex transactions. They might simply fail to achieve their aim. Evasion, on the other hand, traditionally led to penalties.</p><h3 style="text-align: left;">The categories we shall explore</h3><p>We shall explore categories that are not the traditional ones, although they are related to the traditional ones. We shall do so because our alternatives seem to be more useful in understanding certain issues, particularly some relationships between the legal position and the ethical position. </p><p>Our first category, which we shall call failed avoidance, covers tax planning or avoidance (we can run them together) that does not achieve its aim, for legal reasons rather than because some mistake was made in implementing the planning or avoidance.</p><p>Our second category, which we shall call unethical avoidance, covers tax planning or avoidance that on ethical grounds should not be undertaken.</p><p>Our third category, which we shall call penalised conduct, covers conduct that is liable to lead to penalties as distinct from mere restitution. We shall however not be concerned with the deliberate falsification or concealment of information.</p><p>Our fourth category, which we shall call culpable conduct, covers conduct that on ethical grounds should lead to penalties.</p><p>One of our questions will be that of the sharpness of the boundaries of these categories. 
Another will be that of the match or mismatch between the boundaries of the first and second categories, and between the boundaries of the third and fourth categories. We shall also be concerned with the extent to which, contrary to tradition, failed avoidance can amount to penalised conduct.</p><p>We shall start with failed avoidance, then move on to unethical avoidance and relationships between failed and unethical avoidance. After that, we shall move on to penalised and culpable conduct. We shall then consider the ethics of legislating in certain ways. We shall conclude with a discussion of categories and general ethical principles.</p><h2 style="text-align: left;">Failed avoidance</h2><p>Tax planning can be very straightforward, for example putting money into a pension fund to get a tax deduction now and tax-free investment returns, subject to taxation (perhaps at a lower rate) when the pension is paid out. At the other extreme it can involve very complex structures and transactions, often using multiple companies, trusts and the like and mixtures of share capital and loans, all spread across several jurisdictions, in an attempt to ensure that the overall tax rate on a group's worldwide profits is far less than one would naturally expect it to be.</p><p>HMRC will seek to prevent tax avoidance from succeeding when the tax at stake is substantial in an individual case, when it would be substantial if a scheme were widely used, or when the avoidance is considered to be brazenly contrary to the policy of the tax system. Complexity is not essential for any of these conditions to be met, but it is quite common for schemes to be complex and to rely on loopholes in the law that may not have been apparent when it was drafted.</p><h3 style="text-align: left;">Legal provisions</h3><p>Avoidance in which the necessary steps were executed correctly can only be prevented from succeeding if the legal tools are available. 
Some avoidance is blocked by specific legal provisions, often ones which were inserted as soon as the form of avoidance in question came to light. But the enactment of such provisions can easily lead to the creation of new schemes that stay just outside the ambit of the new provisions. More significant for our purposes are general provisions that are intended to lead to the failure of classes of schemes defined by some aspect of their overall nature or by their results, rather than by their details. We shall look at two such provisions, the General Anti-Abuse Rule (the GAAR) and the Diverted Profits Tax (the DPT).</p><h4 style="text-align: left;">The GAAR</h4><p>In 2013, the UK introduced the General Anti-Abuse Rule (the GAAR). Links to the legislation and the guidance can be found here:</p><p><a href="https://www.gov.uk/government/publications/tax-avoidance-general-anti-abuse-rules">https://www.gov.uk/government/publications/tax-avoidance-general-anti-abuse-rules</a></p><p>The basic idea is that a tax avoidance scheme does not succeed in delivering the expected tax saving if it fails the double reasonableness test. In the words of Finance Act 2013, section 207(2):</p><p>(2) Tax arrangements are "abusive" if they are arrangements the entering into or carrying out of which cannot reasonably be regarded as a reasonable course of action in relation to the relevant tax provisions, having regard to all the circumstances including -</p><p>(a) whether the substantive results of the arrangements are consistent with any principles on which those provisions are based (whether express or implied) and the policy objectives of those provisions,</p><p>(b) whether the means of achieving those results involves one or more contrived or abnormal steps, and</p><p>(c) whether the arrangements are intended to exploit any shortcomings in those provisions.</p><p>One argument for the double reasonableness test is that it makes it quite difficult for tax avoidance to be caught by the GAAR. 
This is legislation with a much less well-defined scope than most of the legislation that targets specific tax avoidance schemes. It has to have a fuzzy boundary of application because if it had a precise boundary, avoidance schemes that stayed just the right side of the boundary would soon be devised. So to be fair to taxpayers, the grey area created by the fuzziness of the boundary should be such that it is easy to keep a safe distance away from it. Routine tax planning is definitely safe because it would be perfectly reasonable to regard it as reasonable. This is so even though there are also reasonable positions from which it would be regarded as unreasonable, for example a position that people should act without regard to tax considerations and then just accept whatever tax consequences happened to follow.</p><p>We should note the connections between the general notion of reasonableness and the circumstances to consider that are listed in section 207(2)(a), (b) and (c).</p><p>The effect of the circumstances mentioned in (a) and (c) is to link the notion of reasonableness to the intention of Parliament as it may be inferred from the relevant legislation and to the policy intention (an intention of the Government rather than of Parliament). Unreasonableness is indicated, although not demonstrated, by action which would frustrate those intentions. And (b) draws attention to contrived or abnormal steps as an independent indicator of unreasonableness.</p><p>Thus the notion of reasonableness at stake is closely tied to what Parliament and the Government at the time wanted the law to achieve. This is not surprising. The whole point of the GAAR was to give a new way to stop taxpayers dodging the intended effect of the law. But it will raise the question of whether what one might regard as unreasonable on general ethical grounds would align with what would be caught by the double reasonableness test of the GAAR. 
Has the GAAR defined a boundary of ethical significance, or merely one of governmental convenience?</p><h4 style="text-align: left;">The DPT</h4><p>The Diverted Profits Tax (DPT) was introduced in 2015. The legislation is in Finance Act 2015, part 3, and may be viewed here:</p><p><a href="https://www.legislation.gov.uk/ukpga/2015/11/part/3">https://www.legislation.gov.uk/ukpga/2015/11/part/3</a></p><p>This is a tax in addition to corporation tax, not an element within corporation tax. It is imposed on profits which would have been subject to UK corporation tax if avoidance steps had not been taken.</p><p>For the tax to apply, the taxpayer must have taken steps which lacked sufficient economic substance (sections 80(1)(f) and 86(2)(e)). This condition is defined in section 110(4), (5), (6) and (7). There are two parts. It must be reasonable to assume that the steps were designed to secure a tax reduction. And it must not be reasonable to assume that the non-tax benefits would exceed the tax benefits (section 110(7)(b) introduces a variation on this relative benefits rule which we shall not discuss).</p><p>We here have a different kind of reasonableness test from that used in the GAAR. The important difference is not between single and double reasonableness, with reasonable-design replacing reasonable-reasonable. Rather, it is between the DPT's "it is reasonable to assume" design to secure a tax reduction and the GAAR's "cannot reasonably be regarded as a reasonable course of action". The use of "cannot" in the GAAR leaves scope to survey all reasonable views in search of one favourable to the taxpayer. The use of "is" in the DPT pushes us to suppose some authoritative standard of reasonableness on the basis of which various possible sets of steps taken by a taxpayer could be divided into those it was reasonable to assume were designed to secure a tax reduction and those about which it would not be reasonable to make that assumption. 
The GAAR avoids the need to suppose an authoritative standard by allowing the full range of reasonable positions to be surveyed.</p><p>The inability to survey a full range of reasonable positions, and the consequent fact that the inevitable grey area would be closer to regular commercial practice, would make it harder for a company to be confident that it was safe from the DPT than that it was safe from the GAAR. But there is another difference that might, but need not, pull in the other direction. The GAAR looks directly at what the taxpayer did. The DPT legislation looks at whether what the taxpayer did appeared to be designed to reduce tax. That is, application of the DPT requires consideration of design.</p><p>Having said that, it is only design as may be inferred from action that matters, not actual design as might be disclosed by internal company documents. To avoid its being reasonable to assume that transactions were designed to achieve a tax reduction, one would at a minimum have to argue that the transactions could easily have been chosen without tax reduction in mind. This much would be needed because section 110(9)(b) allows there to be design to secure a tax reduction despite there also being design "to secure any commercial or other objective". </p><p>There is however one final safeguard for the taxpayer. This is the rule that the DPT does not apply if it is reasonable to assume that the non-tax benefits would exceed the tax benefits.</p><p>There is a penalty for failure to notify that the DPT might apply. It can be up to 100% of the tax at stake. Notification does not however imply an admission that the tax applies.</p><h2 style="text-align: left;">Unethical avoidance</h2><h3 style="text-align: left;">What is unethical?</h3><p>Tax avoidance may be seen as unethical. It at least tends to be against the spirit of the law, which is to collect tax at prescribed rates on income, gains, sales and so on as evaluated in a common-sense way. 
If for example someone receives income of £90,000, and there is no express provision to treat income of that kind as being less than is received, they should pay income tax on that amount minus the personal allowance at the applicable rates of 20% and 40%. It would be contrary to the spirit of the law to find a loophole which meant that tax was only levied on £20,000.</p><p>The ethical case against avoidance could be strengthened by pointing out that the avoider relied on goods and services provided out of taxation, such as education, healthcare and roads. Such goods and services are not only used directly in daily life. They also make it possible to run businesses so that people can have jobs or make investment returns. That is, they help to generate the income on which tax is supposed to be paid. If a democratically elected Parliament has decided how the cost of the tax-funded goods and services is to be shared out, it would seem to be unethical to dodge paying one's allocated share.</p><p>A contrary view would be that tax was an artificial imposition, with ethical obligations being defined entirely by the words of the law so that the only unethical conduct would be to act contrary to those words. If an avoidance scheme navigated round the words of the law without actually breaking any of the rules, nobody would have any ground to condemn the avoider as unethical. It would even be possible to take a positive view of avoidance, on the ground that the state had no business violating individual property rights without individual consent (as opposed to collective consent).</p><p>There is very unlikely to be universal agreement on the boundary between acceptable and unethical tax planning or avoidance. Some people think that if the law allows a piece of tax avoidance to succeed it is ethically acceptable, however contrived it may be. Others think that anything in the least bit contrived is ethically unacceptable. 
And there are plenty of options in between, allowing that certain degrees of avoidance are acceptable but condemning more extreme avoidance.</p><p>It is not surprising that there should be a wide range of views. There are many factors to consider in political questions, and several angles from which those questions may be viewed. We do not find a single dominant factor or angle that might allow general agreement on conclusions, in the way that there is general agreement on the conclusion that physical violence is wrong.</p><p>Not only is there no general agreement on where to place an ethical boundary. Almost any boundary would be hazy. The only positions that would be guaranteed to yield sharp boundaries would be extreme ones: that all tax planning was wrong, so that people should make their commercial decisions as if tax did not exist and then accept whatever tax consequences arose, or that all tax avoidance, however contrived, was ethically acceptable.</p><p>Even an ethic that was intrinsically disposed to precise conclusions as to what to do in given cases would be unlikely to help. A Kantian ethic might tell taxpayers to act from a good will, and to consider the feasibility of universalising the maxims on which they acted, but any one of a range of principles on which to approach tax planning, or indeed any one of a range of possible decisions on particular occasions, could easily comply with such requirements. And a Benthamite approach of maximising happiness could not be used to work out either principles on which to act or which choices to make on particular occasions, simply because of the impossibility of working out what would in fact do the most to promote human happiness. On the one hand it could be argued that the democratically controlled state was the best guardian of resources to be applied for the common good, so that tax planning should be minimal. 
On the other hand it could be argued that the flourishing of the economy would be vital, that this would demand not letting too much wealth leach into the public sector, and that the state was incompetent at allocating resources, so that tax payments should be minimised.</p><h3 style="text-align: left;">Failed avoidance and unethical avoidance</h3><p>We shall now consider the relationship between the boundary that divides effective from failed avoidance and possible boundaries that would divide acceptable from unethical avoidance.</p><p>Perfect correspondence in fact, so that tax planning was ethically acceptable if and only if it was legally effective in the sense that it would lead the planner to keep their envisaged tax saving, seems unlikely. One would expect it if there were a general ethical principle that whatever planning the law did not defeat was acceptable, and whatever it defeated was unacceptable. One might argue for the first leg of this general principle on the basis that there was no natural morality about the levying of taxes, although on that basis ethical acceptability would amount not to endorsement but only to the absence of ethical objection. And one might separately argue for the second leg of the general principle on the basis that there was an ethical requirement to show respect for the law which went beyond abstention from illegal actions and extended to abstention from actions, the hoped-for consequences of which would be frustrated by operation of law. (Remember that tax planning, even aggressive avoidance, is in general not disobedience. There is in general nothing illegal about setting up complex commercial structures, even if the purpose is tax avoidance. The law may merely render the use of such structures ineffective for tax purposes.) But it would be very hard indeed to argue for both legs at the same time, unless one were to assert that, in the areas of life it covered, the law was the definitive guide to the ethical. 
That would be to allow legal positivism to steamroll its way across ethical discourse.</p><h3 style="text-align: left;">Ethical thought and the GAAR</h3><p>The GAAR relies on the double reasonableness test: could the tax arrangements reasonably be regarded as a reasonable course of action? </p><p>The resultant haziness as to which tax avoidance would be defeated by the GAAR is inevitable, given that precise rules directed against specific avoidance schemes routinely leave scope to devise new schemes.</p><p>There is a special contribution that ethical thought might make to deciding specific cases. Ethical thought might contribute to deciding what would be reasonable, or at least (taking account of the double reasonableness test) to deciding what could reasonably be regarded as reasonable. What follows here on this point is however speculative. The potential scope to contribute is not recognised in current legislation, and attempts to make use of ethical thought might get nowhere in court. We include the topic here in order to suggest that ethical thought might have a role to play in the writing of future tax law or in the development of approaches to deciding cases.</p><p>A Kantian approach would best be suited to saying what could reasonably be regarded as reasonable. An understanding of the notion of a good will and of the categorical imperative would not suffice to say what would be reasonable in tax planning. The factors to consider would be too complex for that. But a claim that some particular piece of tax planning could reasonably be regarded as reasonable could be tested by asking whether it was plausible that the action might be inspired by a good will, or whether it was plausible that the maxims behind the tax planning could be universalised. 
(We here mean maxims specific to planning of the relevant sort, and do not envisage a single maxim or set of maxims that would cover tax planning of all sorts.)</p><p>A utilitarian approach would likewise best be suited to saying what could reasonably be regarded as reasonable. The only actions that would in fact be reasonable, according to such an ethic, would be actions that maximised total happiness. But it would be far too hard to work out consequences of particular pieces of tax planning or of general approaches to tax planning. On the other hand, it might be feasible to say that it was reasonable to regard some particular piece of tax planning, or some particular general approach, as having a plausible prospect of maximising total happiness.</p><p>Virtue ethics may have a better prospect of leading to judgements as to which specific pieces of tax planning, or general approaches, were in fact reasonable. A piece of tax planning or a general approach could be judged on the basis of whether it accorded with specific virtues. Honesty would be an obvious virtue, but so long as we were not considering evasion, dishonesty might not be in question in any case. Community spirit might be a relevant virtue (although there are philosophies, such as that of Ayn Rand, which would not see it as a virtue of any significance). Paying for resources on which one relied even though they were technically provided free of charge at the point of use, such as the education of employees, might be a virtue. And so on.</p><h2 style="text-align: left;">Penalised and culpable conduct</h2><h3 style="text-align: left;">Restitution and penalties</h3><p>We have noted the distinction between two types of demand for payment that HMRC may make.</p><p>Restitution is making good a loss, for example when tax avoidance fails and the tax which it had been intended to save is payable after all. The restitution is of the loss to the state. 
It is not exactly like damages payable for loss in a civil action between private individuals or companies, but the basic idea is the same. Adversely affected parties are to be restored to the financial positions in which they would have been had something unfortunate not happened. Most importantly, for any payment to amount to restitution, its amount must be determined by reference to the loss that arose.</p><p>A penalty is a punishment that is inflicted for reasons other than restitution, for example in order to deter. The severity of a penalty may happen to be determined by the size of the loss in question, as when a penalty for making an inaccurate tax return is a percentage of the tax that would have been lost. But there is no need for there to be a link between the size of the loss and the size of the penalty.</p><h3 style="text-align: left;">The extent of penalties</h3><p>Penalties do of course apply when a taxpayer deliberately falsifies tax returns, or conceals information that is required. Such actions fall under the traditional heading of evasion. We shall not be concerned with such conduct here.</p><p>One would naturally expect that failed avoidance would lead only to restitution. Indeed, that has been the traditional position. There is in principle nothing illegal about setting up complex structures and carrying out complex transactions, even if the aim is to reduce tax liabilities.</p><p>Things have however changed. Failed avoidance can incur penalties, and can do so even when all the structures and transactions are in themselves perfectly legal and full disclosure has been made to HMRC. We shall consider two examples here. The first one relates to mistaken claims to successful avoidance. 
The second one relates to the GAAR.</p><h4 style="text-align: left;">Mistaken claims to successful avoidance</h4><p>If a taxpayer completes a tax return on the basis that an avoidance scheme succeeds, when in fact it fails, the return will be inaccurate and there may be penalties for that reason, a reason which is distinct from the creation of a complicated commercial structure and its use to engage in complicated transactions.</p><p>This can happen even if the structure and the transactions involved are disclosed in full, and any applicable requirement to notify the use of an avoidance scheme or of transactions that may suggest avoidance is also met. (The use of some specific schemes must be notified. And transactions with certain characteristics that are typically associated with avoidance must also be notified, even if no avoidance is in fact involved.) </p><p>The return can be inaccurate because, under the self-assessment regime that has prevailed in the UK since the 1990s, the taxpayer is responsible for working out what tax they owe. So if a taxpayer has engaged in tax avoidance that on close inspection would be found not to save tax, they should state their tax position without taking account of the purported saving. The normal penalty would be the one for carelessness in making an inaccurate tax return, although other penalties would also be possible (Finance Act 2007, schedule 24, paragraph 3A). The penalty for carelessness typically ranges from nil to 30% of the tax at stake (more if the transactions involve entities in certain countries), but if HMRC find out about the error before the taxpayer discloses it unprompted the minimum penalty is 15%.</p><h4 style="text-align: left;">The GAAR</h4><p>In relation to the GAAR, the UK has adopted an approach which allows penalties to be imposed merely because avoidance fails. A taxpayer may be put on notice that HMRC believe the GAAR to frustrate a piece of tax avoidance. 
If the taxpayer does not promptly agree and re-assess their tax as if the avoidance did not succeed, a legal challenge may follow. If the taxpayer loses, they must pay not only the tax not saved but also a penalty of 60% of that tax. The only way to guarantee not to suffer this penalty is to concede that the scheme is defeated by the GAAR as soon as HMRC claim that it is so defeated. If the taxpayer fights the case and loses, the penalty will be due.</p><h3 style="text-align: left;">Culpable conduct</h3><p>The absence of general agreement on an ethical boundary, together with the fact that the individual steps taken in tax avoidance are often steps that could just as well be taken for other reasons (such as setting up trusts in order to safeguard assets from the spendthrift, or making loans in order to finance business ventures), makes it arguable that when tax avoidance is frustrated by law the consequences should be limited to restitution and should not include a penalty. The law has to draw its boundary between effective and failed avoidance in one place, and there is not much prospect of general agreement that it is ethically the right place. So there is not likely to be general agreement that a given taxpayer has acted in a way that would make penalties appropriate in addition to restitution.</p><p>This puts the spotlight on cases in which the approach is to impose penalties despite the fact that all that is involved is failed avoidance. Can it be right to do so, given the legality of the underlying structures and transactions?</p><p>One argument in favour of such penalties would be that the law could draw a secondary boundary within failed avoidance, between respectable and outrageous avoidance. Another argument would be that penalties would deter speculative avoidance which was unlikely to succeed unless HMRC failed to look at it. 
A third argument would be that while the structures and transactions might be perfectly legal, the motive would very often be one of tax reduction. Then the taxpayer should be regarded as unethical and the state should not be expected to pull its punches.</p><p>There are counter-arguments to all three of these. To take the first argument, while a legally defined boundary between respectable and outrageous avoidance might be said to exist already in the double reasonableness test of the GAAR, there would be little prospect of general agreement on the location of a corresponding ethical boundary. The deterrent effect in the second argument might go too far. It would extend to avoidance that had a reasonably good prospect of succeeding, and the state should not deter claims to what might well turn out to be the taxpayer's legal entitlement were the matter to go to court. (It is a feature of such cases that it often cannot be known in advance of a court hearing exactly what the law would require.) Finally, the argument that unethical conduct on the part of the taxpayer should allow the state a freer hand than it would normally have would introduce notions of fair dealing into the tax system in a very strong form. Not being bullied by the state would become dependent on being kind to the state.</p><p>The GAAR brings such issues sharply into focus. It may easily be unclear whether some tax avoidance would be defeated by the GAAR. There is guidance on the GAAR, but it is not going to answer every question, and in any case guidance is not the law under which the question of whether tax avoidance succeeds is determined. And yet Parliament has legislated to impose a penalty of 60% of the tax at stake on avoidance that fails by virtue of the GAAR, the only sure protection against which is to concede as soon as HMRC challenge the avoidance.</p><p>For the penalty actually to apply, the avoidance must be established to fail. 
Then it would be a penalty like the general penalty for making a return that was inaccurate by virtue of allowing for avoidance that was found to fail. But the scale of the penalty would be doubled, from a usual maximum of 30% to a fixed 60%, if the taxpayer, at the earlier stage of not knowing whether the avoidance would succeed, had the temerity to challenge HMRC's view that the avoidance would fail. Such an encouragement to taxpayers not to stand up for themselves but instead meekly to accept HMRC's view of the law is at the very least ethically dubious.</p><p>To raise ethical objections to this kind of penalty is not to endorse tax avoidance. The kind of avoidance involved will nearly always be highly contrived, and would not have been instigated unless the taxpayer was determined to pay significantly less tax than the general principles of the tax system would lead one to expect. But such considerations do not suffice to give the state a free pass.</p><p>We should add in favour of the state that a decision to challenge avoidance under the GAAR would not be made lightly or on the say-so of a single tax inspector. There is an Advisory Panel, and HMRC must seek the Panel's opinion on each case before invoking the GAAR. The Panel can decide that the GAAR should not be used, for example in the case discussed here:</p><p><a href="https://www.pinsentmasons.com/out-law/news/gaar-advisory-panel-backs-uk-taxpayer-first-time">https://www.pinsentmasons.com/out-law/news/gaar-advisory-panel-backs-uk-taxpayer-first-time</a></p><p>One may however have reservations about the strictness of the control imposed by the existence of the Panel. While its members are not HMRC staff, they are appointed by HMRC and are supported by an HMRC-staffed secretariat. While HMRC must seek the Panel's opinion on each case before invoking the GAAR, they are not bound by that opinion but are only required to consider it (Finance Act 2013, schedule 43, paragraph 12). 
And the Panel does not apply the double reasonableness test, but only the test less favourable to the taxpayer of asking whether the taxpayer's actions were in fact a reasonable course of action (Finance Act 2013, schedule 43, paragraph 11(3)(b)).</p><p>Thus one can still be concerned about the existence of a penalty for avoidance, the only sure way of avoiding which is to concede to HMRC without taking the matter to a judicial hearing, even if one disapproves of tax avoidance and despite the existence of the Advisory Panel.</p><p>We do not here discuss penalties under the Serial Tax Avoidance Regime. We do however note that there would be things to say in connection with that regime about the imposition of penalties, rather than merely requiring restitution, on the basis of the fact that the taxpayer had previously attempted avoidance.</p><h2 style="text-align: left;">The ethics of legislating</h2><h3 style="text-align: left;">Clarity</h3><p>It is a general principle of law that individuals and companies should know in advance what the consequences of various possible actions would be. This argues in favour of clear boundaries, and against hazy ones. Legislation that does not provide such boundaries where it would be possible to provide them is arguably unethical, even if hazy boundaries would make it easier for the state to achieve its objectives.</p><p>In the present context this means that it should ideally be clear in advance which kinds of tax avoidance would fail, and which conduct would be penalised (as opposed to merely requiring payment of tax saved by way of restitution).</p><p>With avoidance that is defeated by the GAAR, the boundary between successful and failed avoidance is deliberately hazy. 
A precise boundary would lead to too narrow a range of avoidance being defeated by the GAAR, either because the boundary had been made precise by specifying commercial features of schemes (such as loan transactions that went through intermediate group entities which merely received money and passed it on, or the use of branches which were commercially significant but fell outside the definition of permanent establishments), or because the boundary had been made precise by reference to figures like effective tax rates. In either case, it would not be long before significant avoidance schemes which fell just outside the boundary were devised.</p><p>The same case for a hazy boundary could be made in relation to the DPT, although the potential scope of that tax is made a bit more precise by the need for a specific effect, the diversion of profits, which narrows the range of structures and transactions that could give rise to the charge.</p><p>In defence of haziness in both these cases, it could be said that those who engage in normal commercial transactions with little focus on saving tax will find themselves to be at a safe distance from the hazy boundary, although this is less true of the DPT than of the GAAR because of the absence with the DPT of an option to survey a full range of reasonable positions.</p><p>The argument for clear boundaries is particularly strong when penalties, rather than merely restitution, are involved. We naturally associate penalties with criminal conduct, where conduct must clearly fall within the scope of an offence defined in legislation. But penalties are involved both when the GAAR applies and when the failure of avoidance leads to a penalty for making an inaccurate return, another area of hazy boundaries because it can easily be unclear in advance whether avoidance will succeed even when the avoidance is clearly not within the scope of the GAAR. (The penalty for failure to notify that the DPT might apply is however rather different. 
It is not in itself enough to make haziness of the boundary objectionable, because notification does not imply liability.)</p><h3 style="text-align: left;">Motives and penalties</h3><p>It might be thought that penalties, and even the GAAR penalty for having the temerity to challenge HMRC's view and then losing, would be acceptable so long as a taxpayer had unacceptable motives.</p><p>It might also be thought that dubious motives were indeed present, both when the GAAR applied and when the penalty for making an inaccurate return applied. After all, the pursuit of loopholes and the minimisation of one's contribution to the cost of providing infrastructure, a health service, and so on do not look like good objectives.</p><p>We should however recognise counter-arguments.</p><p>One is that the duty of taxpayers to the state is laid down in statute, and statute is nearly all drafted in terms of precise rules. The rules must be obeyed, but the limits of the rules are the limits of the taxpayer's obligations. There is nothing in statute about the spirit of the rules, and the relationship of taxpayer to state is not like the relationship of two cricket teams to each other in which the spirit of the game should be observed. (As it happens there is a law of cricket, law 41, which requires observation of the spirit.) 
The closest one gets in tax law to reference to the spirit of the rules is in references to the principles and policy objectives of legislation and to the exploitation of shortcomings in legislation which we find in the GAAR legislation quoted above (Finance Act 2013, section 207(2)(a) and (c)).</p><p>A second counter-argument is that the state gets enough money, and wastes a lot of it because politicians and officials who spend it are not personally at risk of financial loss, so it is good to keep the state on a tight rein by not paying more tax than one must.</p><p>A response to concerns about penalties might be that it would be impractical to handle all the tax avoidance there would be if attempting it were a game without risk, in that the taxpayer would in cases of failure merely have to pay the tax they would have had to pay without the attempt. Penalties change the balance of risk, and deter the taxpayer population as a whole. But that would be to say that the need to keep the population cooperative outweighed considerations of justice to individual taxpayers. One could make a case that this was so, but the case could in turn be challenged.</p><h2 style="text-align: left;">Categories and our general ethical principles</h2><p>We have proposed four categories to analyse conflicts between on the one hand actions short of the outright criminal to reduce tax payable, and on the other hand the law and ethics: failed and unethical tax avoidance, and penalised and culpable conduct. These categories strike us as more useful than the traditional categories of planning (being sensible), avoidance (being too clever by half), and evasion (lying), when we want to think about relationships between legal rules and ethical principles.</p><p>Having said that, the categories we use are not particularly novel. They are clearly related to and inspired by the traditional categories. And the traditional categories still have important roles. 
They serve to explain how the law has developed. They also help to relate general ethical principles from outside the tax world both to the general scheme of the law and to ethical assessments of what taxpayers do: being sensible is generally accepted both in law and in ethics, being too clever by half may easily be seen as something to be blocked by the law and is also ethically debatable, and lying is both reasonable to punish and ethically unacceptable.</p><p>It is not that the traditional categories are essential to underpin judgements as to what is legally appropriate or ethically unacceptable. But whenever we tackle questions of what the rules should be or what is unacceptable conduct in a specific area, we do need some way to relate the question to more general principles. And it so happens that in the area of taxation the traditional categories can perform that role. Since they also need to be kept in play in order to understand the history of the current law and the context of much ethical comment on what taxpayers may do, and they do not interfere with other ways of thinking about the issues, we see no reason to discard them. It is merely that they are not the only useful categories.</p><div><br /></div>Richard Baronhttp://www.blogger.com/profile/17869390364282686725noreply@blogger.com0tag:blogger.com,1999:blog-2055583406917680385.post-31843809098499770892023-04-29T12:45:00.000+01:002023-04-29T12:45:48.499+01:00One thing<h2 style="text-align: left;">City Slickers</h2><p>In the 1991 film <i>City Slickers</i>, the wise old cowboy Curly explains something vital to Mitch, one of the three slickers who have come out to join the cattle drive and rediscover meaning in their lives. The dialogue goes as follows.</p><p><i>Curly:</i> Do you know what the secret of life is?</p><p><i>Curly:</i> This. (He holds up his right index finger)</p><p><i>Mitch:</i> Your finger?</p><p><i>Curly:</i> One thing. Just one thing. 
You stick to that and everything else don't mean shit.</p><p><i>Mitch:</i> That's great, but what's the one thing? (Mitch smiles and holds up his right index finger)</p><p><i>Curly:</i> That's what you've gotta figure out.</p><p>(Mitch looks uncertainly at his finger)</p><p>This is more intriguing, and more powerful, than the many self-help platitudes each of which claims to be the ultimate secret. It may not be in the league of Aristotle or Marcus Aurelius, but a one-liner does not have the same objective as a book. And figuring out what we might make of it from an impersonal standpoint is just as much of a challenge as an individual's figuring out what his or her personal one thing might be.</p><p>The phrase "one thing", while absolutely right in its original context in which Curly holds up one finger, can sound inelegant when used in other contexts. So we shall speak of an individual's project, with the implication that only one project will really matter to him or her at a given time.</p><p>In speaking of projects, we shall narrow the range of things that Curly invites us to identify. Projects have goals and results. Curly would allow an individual to select something that was not so teleological, for example "family" rather than "bringing up children". Our narrowing will allow us to be more specific in what we say than would otherwise be possible. But there would be other comments to make on a proposal to focus on things which were not defined in teleological terms.</p><p>This post will explore some complexities that come to light when we look at Curly's advice and its implications. Complexities are laid out, but not resolved. As may be apt for advice directed to individuals in relation to their own lives, resolution is left as an exercise for the reader.</p><h2 style="text-align: left;">What kind of project?</h2><h3 style="text-align: left;">Options</h3><p>There are plenty of projects on which someone might focus. 
Examples include bringing up a family (existing or planned), pursuing a career, or undertaking a business, academic or artistic project.</p><p>We can take it that Curly's recommendation would be limited to projects that really mattered to the individual, or that would at least have a good prospect of coming to be of great importance to the individual after he or she had got involved in them.</p><h3 style="text-align: left;">False steps</h3><p>There would be scope for false steps, as there usually is in life. The individual might pick a project, even one that was already of great importance to him or her, and after a while find that it was not sufficiently important to justify making it the single focus of his or her current life. If the risk of that kind of false step was very high, one should perhaps not follow Curly's advice.</p><p>We here take it that the appropriate arbiter of importance is the individual, not some independent standard with which the individual might disagree. This reflects the fact that Curly offers advice to the individual. There is no proposal to manage society so that the various focuses of people would collectively produce the best overall result. Curly is a cowboy, and cowboys are not collectivists.</p><h3 style="text-align: left;">Breadth</h3><p>An individual might select a project that was broad or one that was narrow. Both breadth and narrowness could have advantages. And a consequence of following Curly's advice and selecting only one project would be that there was a trade-off: an individual could not enjoy both the advantages of breadth and the advantages of narrowness at the same stage in life.</p><h4 style="text-align: left;">Robustness and adjustment</h4><p>A broad project should be more robust than a narrow one. A narrow project, such as becoming an Olympic-level athlete, could easily be frustrated by some random accident, such as injury caused by slipping on an icy pavement. 
A broad project could be adjusted in its details to accommodate unexpected difficulties.</p><p>There is a risk in adjustment. Modest and rare adjustments would maintain the single focus that Curly recommends, even if the cumulative consequence was that the project after 20 years was not recognisably the same project as the one that was first adopted. But substantial or frequent adjustments would betoken a loss of focus, so that some of the benefit of following Curly's advice would be lost.</p><h4 style="text-align: left;">Guidance</h4><p>The adoption of a project should yield guidance on what to do. There might be a lot of detail to fill in, not evident from the description of the project and perhaps not even implied by that description. But the general lines should be clear.</p><p>The statement of a broad project might not give much guidance. Statement of one of the broadest ones, such as "be happy" or "achieve worthwhile goals", would give hardly any guidance beyond indicating some types of activity to avoid (in our examples, activities that would induce misery or would waste time and energy). And adoption of a project as broad as that would not amount to following Curly's advice. But statement of a moderately broad project, such as "start a family" or "write a book", might give enough guidance. And statement of a narrow project should give quite a lot of guidance.</p><h2 style="text-align: left;">Focus</h2><h3 style="text-align: left;">The nature of focus</h3><p>The basic idea is that of focus on the individual's one project.</p><p>We take this focus to require primarily attaching value to progress in the project. How things go in relation to anything else is not to be important. There is indeed a suggestion in Curly's words that how other things go will automatically cease to matter, so long as the individual is focused on the one project.</p><p>Focus can seem like a good idea. 
More will be achieved in the area of focus, and the individual will be less bothered by things going on outside that area. At least, these results should follow so long as the project is not defined too broadly. But complexities crowd in quickly.</p><h3 style="text-align: left;">The sense in which a project matters</h3><p>A project might matter to the individual, or it might be one that would be generally agreed mattered to society or reasonably could matter to an individual. Finally, if one were to allow talk of moral facts or other facts of a comparably unusual nature, one might see a project as mattering by reference to some factual standard.</p><p>Focus would only work as a motivator and as a way to stop other things mattering if a project really mattered to the individual, and mattering to the individual might be sufficient as well as necessary for focus to work. But we should not ignore the other ways in which a project might matter.</p><p>Social acceptance that a project was at least one which reasonably could matter to an individual might be needed in order to ensure that the individual was neither obstructed by social disapproval of the project nor deprived of friends.</p><p>The existence of a factual standard by which a project mattered would not contribute anything if the factual correctness of such standards was not manifest to most people, and it would seem that it would not be. People might claim that such-and-such standards existed, but there would not be any independent way to check their claims. And while ethical intuitionists might regard their conclusions as self-evident, general agreement with those conclusions would not always be found. So all we would have to go on would be claims that the standards existed, together with whatever support particular alleged standards might have garnered by virtue of having emerged from empirically informed debate over such matters as the social effects of respecting or violating certain standards. 
And there would be no good reason to heed any claims that were only espoused by a few people. Thus so far as power to motivate individuals went, reference to supposed factual standards would not in terms of content take us beyond reference to social acceptance.</p><p>Reference to supposed factual standards might however take us further in terms of motivation. If an individual thought that focus on the chosen project complied with standards that he or she took to be factual, or even better, if the individual thought that focus on the chosen project was (for him or her) positively recommended by such standards, that should encourage focus on the project.</p><p>Correspondingly, if relevant standards were supposed to be factual, an individual's fear that his or her choice of project might conflict with those standards would undermine motivation. It would not be possible for the individual to think that the conflict was merely with views of other people that could be disregarded because one was entitled to follow one's own star.</p><h3 style="text-align: left;">Neglecting other things</h3><p>A focused life might have advantages, but one would need to ask whether it was acceptable for an individual to attach little or no importance to concerns outside the scope of his or her chosen project.</p><p>Whether it would be acceptable to the individual would depend on his or her psychology. Some people would not worry that other things which they might at other times have thought important were being neglected. Others would be seriously concerned. And while we might say that the former turn of mind would be more effective, we could hardly make an ethical judgement as to which turn of mind was psychologically preferable.</p><p>We could however discuss substantive ethical questions without judging the individual's psychology directly.</p><p>We may start with the positive benefits of focus. 
In favour of not worrying about the neglect of other things, one could say not only that more might be achieved but that a focused life would itself be a good life. It would not be the only kind of good life, but such a life would offer a better prospect of turning out to be good than several other kinds of life because it would be likely to be a life of high or at least respectable achievement. (We could only be sure of the quality of a life in retrospect. While it was being lived, the best one could do would be to act in such a way that there were good prospects.)</p><p>We now turn to the disadvantages of neglecting other things.</p><p>At the level of society as a whole, or by reference to long-term measures such as human progress, neglect would be very unlikely to matter. It would be exceptionally rare for what any one individual might have done or not done outside his or her main project to matter much. Someone else would have done something just as good. It is true that very widespread neglect of certain things, such as friendship, family, or civil society, would be a serious loss. But that is not a likely consequence of many people focusing on their single projects. Different people would focus on different things, and some people would focus on friends, family, or civil society.</p><p>There is a more troubling ethical question about the effects of neglect on people close to the individual, whether family or friends. (Work colleagues are not included here because someone who did not work hard enough would simply be replaced by someone who was willing to devote more effort to the job.)</p><p>We may distinguish two types of case, although the boundary between them would be decidedly hazy.</p><p>In a case of the first type, someone might gradually evolve into a highly focused person, with his or her personal relationships evolving in parallel to fit around that focus. 
Someone who needed a lot of time, or a nomadic life, to pursue his or her chosen project would form friendships of types that would fit with such demands. There would not need to be any specific person who could legitimately claim to have been unjustly left out of friendship, because those unable to be accommodated by virtue of the demands of the individual's focus would not have become friends, or at least not close friends, in the first place.</p><p>In a case of the second type, there would be existing personal relationships which would be damaged by the individual's coming to focus on a demanding project. One might expect this to apply to relationships with family members, who would automatically have a status of closeness to the individual and who might be counterparties to obligations on the individual. It could also apply to close or long-standing friends. Here there would be an ethical argument against adopting a project, focus on which would damage existing relationships. Having said that, one could also argue that each person was entitled to give priority to his or her own life and that it would be wrong for some friend or family member to expect that the individual should abandon his or her own aims. There is little hope of general rules to adjudicate individual cases. But we can say that there would be some dependence on whether the individual had voluntarily taken on the relationships in question, as when someone had chosen to start a family with a partner. There might also be some dependence on whether the individual's chosen project actually succeeded (something that could only be confirmed too late) or had a good prospect of succeeding (something that might be assessed in advance). 
We might here bring in what Bernard Williams said about Gauguin, who abandoned his family in pursuit of the fortunately fulfilled project of becoming a great artist ("Moral Luck", in Williams, <i>Moral Luck: Philosophical Papers 1973-1980</i>, Cambridge University Press, 1981).</p><p>Finally, we can ask whether an individual's life might fall short of being among the better sorts of life by virtue of constraints on relationships. Certain types of engagement with others which are widely considered to contribute to a good life might be ruled out by a focus on some demanding project. And it might be that while close relationships were formed, they would be at higher than usual risk of having to be broken off. That risk might taint relationships even before any break.</p><h2 style="text-align: left;">Change over life</h2><p>Suppose that an individual selects a project and puts a great deal of energy into it for a few years. Various goals that fall within the scope of the overall project and that are worthwhile in themselves are achieved, so that the effort to date would not be wasted even if the project was no longer pursued. But there is a good deal more that could be done within the scope of the project.</p><p>Now suppose that the individual's priorities or attitudes change, as can easily happen as people get older. As a result the individual either deliberately abandons the project or, without a decision to abandon, devotes less and less energy to it.</p><p>There is no reason why such a change should devalue the achievements to date. 
The project was at the time worthwhile, it would still be directly perceived as worthwhile by anyone who had the priorities and attitudes the individual used to have, and its worth could still be appreciated indirectly by anyone who could imagine having those priorities and attitudes.</p><p>There is however a way in which the possibility of future change could legitimately concern an individual and might undermine his or her current motivation. Future change might arise not out of the kind of development in priorities and attitudes that is natural to human beings, but out of a realisation that a mistake had been made in selecting the project. The project might turn out to be too challenging, given the individual's abilities. Or it might have appeared to be one that was appropriate to the individual's priorities and attitudes, but only because there were implications of pursuit of the project that did not come to light early on. Careful thought in advance might reduce the risk of such a mistake, and once a mistake was appreciated there would be nothing for it but to start again. It is the possibility that a mistake would in due course come to light that might undermine current motivation. Not to lose motivation for that reason would be to exhibit the virtue of cheerfully living with uncertainty.</p><div><br /></div><h2 style="text-align: left;">The plausibility test</h2><p>Richard Baron, 15 April 2023.</p><p>In this post, we shall explore the test of whether responses to questions are plausible. We shall consider use of the test in philosophy and in history.</p><p>The plausibility test is in principle less demanding than the test of whether responses are correct. But it is still important, for two reasons. The first one is that it is not always possible to say whether a response is correct.
The second one is that a focus on plausibility brings out certain important requirements for responses to be acceptable, such as that they should not outrage our background understanding and that (in some cases) they should confer Verstehen.</p><p>References are given at the end of the post.</p><h2 style="text-align: left;">The test</h2><p>Suppose that one seeks a response to a philosophical question, or to the historical question of why events in the human world took the course they did. Such a response may be a one-line statement of some conclusion, or an elaborate account that implicitly answers the question.</p><p>We shall use the term "response" to cover the full range of such possibilities. And we shall sometimes be able to speak of responses being correct or incorrect, meaning either that they are answers to questions where those answers could be identified as correct or incorrect in the ordinary sense, or that they are more discursive responses which could nonetheless attract attributions of correctness or incorrectness which would be based on their head-on collisions with facts.</p><p>There will however be some responses which are too discursive to collide with facts in the required way, or which do not collide with facts in that way because of the nature of the subject matter or of the approach to it. Concepts of correctness and incorrectness will then be inapplicable. And there will be gradations, with some responses more or less open to classification as correct or incorrect.</p><p>When a response is not open to classification as correct or incorrect, an important control is to ask whether it strikes experts as plausible. In this context, plausibility will require making sense, not being outlandish or in serious conflict with how one thinks the world works, and so on. It will not just mean within the bounds of possibility.
On the other hand, our intended sense of plausibility is not that a response is to be believed cautiously, or that it is to be assigned some respectable probability of being correct (such as 0.25). Rather, the sense is that adoption of the response would not be unreasonable.</p><p>This plausibility test is what will concern us here. When applied in an academic discipline, it will not stand alone. Even if a judgement as to correctness is not expected, there will still be scope for detailed argument as to the evidential and other reasons to accept or reject a response. And what is learnt in the course of such detailed argument should influence judgements as to whether the response is plausible. But the plausibility test will still take investigation a step forward from the stage of detailed argument. It is a safeguard against getting lost in the trees of analysis so that one fails to see the wood of the overall picture, an overall picture which will include the background of existing understanding.</p><p>The test also has a role when a judgement as to correctness is envisaged. Even in such cases, once all the detailed work on evidence and reasoning has been done, it is worth standing back and asking whether the response is plausible. This final stage may not be especially worthwhile in disciplines in which one can be confident both that all the relevant evidence has been collected and interpreted correctly, and that its analysis will drive one to an inevitable conclusion as to whether a response is correct. But such happy conditions are only met in physics, chemistry, and a few other parts of the natural sciences. Elsewhere, and certainly across the humanities, the plausibility test is a valuable additional check. This role of the test in writing history was for example highlighted by Geoffrey Elton (<i>The Practice of History</i>, chapter 2, section 5). 
Elton did not draw our distinction between contexts in which judgements as to correctness are envisaged and contexts in which they are not, but he did stress the need for a stage of detailed analysis of the available evidence.</p><p>There is no algorithm for the plausibility test. A judgement as to whether the test is passed will not rest directly on a detailed analysis of evidence or reasoning, even though it may be influenced by matters that have come to light in the course of such an analysis. Rather, whether a response is plausible will simply be manifest to an expert, who will not then seek further justification for their judgement.</p><p>One might speak of intuitive judgements. We shall however not do so, except when we refer to other authors' work on intuition. A reference to intuition may be helpful in grasping the notion of judgements that responses are plausible. But the concept of intuition is liable to bring some baggage with it. One can see some of this baggage in discussions of the role of intuitions in philosophy. Philosophers are widely thought to rely on intuitions, but there is also a case to be made that intuitions are not needed (Cappelen, <i>Philosophy without Intuitions</i>). One might argue that the proper role of intuitions in philosophy was not in reaching or directly supporting specific conclusions, but in making judgements of plausibility, albeit subject to the qualification that a response which failed the plausibility test might nonetheless be correct (since intuitions can mislead). We shall not pursue that line of argument here. But we do note that it would be challenged by the fact that a lot of philosophical discussion centres on the scope for intuitions to give direct support to specific conclusions (see the papers in Booth and Rowbottom (eds.), <i>Intuitions</i>).</p><p>We should add that expertise does matter. Someone must have had the right kind of education and experience for their judgements to count for much. 
To describe a response's plausibility or implausibility as manifest, or to refer to intuition, is not to throw the door open to the views of the ill-informed or the untrained.</p><h2 style="text-align: left;">Examples</h2><p>We shall now set out some examples of use of the test. We shall include some comments on use of the test in different contexts. Later on we shall focus on the nature and the value of the test itself, rather than on examples of its use.</p><p>Our examples will be drawn from the disciplines of philosophy and history. The sense in which a response to a question can be plausible differs as between these disciplines. The normal sense in the areas of philosophy we shall highlight is that a response may be plausible given our own attitudes and habits of thought. In history, the point of reference is not our own attitudes and habits of thought but those of people of the past, along with what was feasible at the time studied. Our own attitudes and habits of thought, along with our current knowledge of people's physical and mental capacities and our grasp of the history of technology, will however be the starting point in compiling the point of reference we need.</p><p>Our treatment of the tests in the areas of philosophy we shall highlight and in history as a single test is justified by the fact that in both cases, the point of reference is primarily the nature of human beings. This also opens up scope for applying what would be broadly the same test across other humanities and to some extent across the social sciences. But while there can be good reason to ask whether an account in the natural sciences is plausible, the point of reference would be different enough that one could not say that the plausibility test would be the same kind of test.</p><h3 style="text-align: left;">Meta-ethics</h3><p>Various responses to the question of the nature of ethical claims may be accepted by some ethicists and rejected by others on technical grounds. 
But the most powerful reason for rejecting some responses can be that the responses simply strike one as implausible against the background of a common human understanding of the nature of ethics.</p><p>Emotivism may be rejected because it seems plain that an ethical claim is more than an expression of preference, or even an expression of preference with which one would expect others to agree. And one can reject emotivism as implausible on such grounds even before noticing technical concerns such as those raised by the Frege-Geach problem (Geach, "Assertion", pages 463-464).</p><p>Some forms of moral realism may be rejected because it seems that there is no space in the world as we generally understand it for moral facts or moral properties on a par with non-moral facts or properties. (One source for such arguments is Mackie, <i>Ethics: Inventing Right and Wrong</i>, chapter 1, section 9.)</p><h3 style="text-align: left;">Substantive ethics</h3><p>Suppose that an ethicist argues for some specific response to the question of how to act in a situation of a given type. Other people may agree, or they may disagree but think that the response is nonetheless plausible. As an example of such disagreement, one ethicist might endorse telling small lies to save people's feelings, and another one might say that while their own commitment to honesty was too strong for them to agree, they could still see that the use of small lies for such purposes could be a sensible policy. Whether there was agreement or such eirenic disagreement, the first ethicist's response would pass the plausibility test in the eyes of the second one.</p><p>On the other hand, an ethicist might offer a response from which many people would recoil. One such response would be that nobody should ever lie, even to protect someone from a potential murderer. 
Then the response would in the eyes of most people not pass the plausibility test, and we would start to look for defects in the reasoning or in the premises from which it started.</p><p>In order for us to fit the application of the plausibility test to substantive ethics into our discussion, we need to take it that there is scope for disagreement about what to do in situations of different types. An emotivist position would stand in the way of seeing scope for disagreement. But there are other meta-ethical positions, so we shall explore application of the plausibility test on the assumption that there is scope for disagreement about what to do.</p><p>Use of the plausibility test in substantive ethics draws on our personal inclinations. We may accept one response to a question as to what to do as reasonable, and reject another one as manifestly immoral, because of our own values. This need not be illegitimate. Ethics concerns how to conduct human lives. So it seems reasonable to give a conspicuous role to how human beings regard it as appropriate to live. But that does give rise to questions. How respectable are the origins of our views on how people should live? And are those views consistent enough between people and across cultures for judgements of plausibility to be thought of as having more value than idiosyncratic preferences?</p><p>On the respectability of origins, we may compare reaching a verdict on the plausibility of some response to a question of what to do with the route to conclusions that has been advocated by intuitionists. Intuitionists take the correctness of an ethical claim to be self-evident, without the need for direct support from argument (Stratton-Lake, "Intuitionism in Ethics"; for a full exploration of ways in which intuitionists may reach their ethical conclusions and advocacy of one particular way see Roeser, <i>Moral Emotions and Intuitions</i>). 
Correctness in the eyes of intuitionists does not however mean obviousness at first glance. It may become evident only after detailed thought which helps to bring out what is salient. Likewise, a judgement of the plausibility of a response will not depend on arguments that support the response directly, but it will be likely to follow detailed argument that relates to aspects of the response other than its plausibility. Having said that, there is a difference between the intuitionist approach and our approach. An intuitionist's claim that it is self-evident that a particular ethical claim is correct is a very strong claim. All conflicting claims are ruled out. To say that it is self-evident that a response which makes the claim is plausible can be to say something much weaker, because several conflicting responses might still be admitted to be plausible.</p><p>On consistency of views, we may look to surveys that have been conducted under the banner of experimental philosophy. (Much has been and is being written in this area. Two starting points for what we say here, including our remark about Gettier cases under the heading of epistemology, are Knobe, "Philosophical Intuitions are Surprisingly Robust Across Demographic Differences"; Stich and Machery, "Demographic Differences in Philosophical Intuition: a Reply to Joshua Knobe". On personal identity under our heading of metaphysics see Tobia (ed.), <i>Experimental Philosophy of Identity and the Self</i>.)</p><p>There is some evidence from experimental philosophy that people's ethical views do vary across cultures and can be affected by framing. That would count against the worth of views on the plausibility of responses to ethical questions where those views were not based on detailed argument, perhaps reducing the views to expressions of idiosyncratic preferences.
Having said that, there are reasons why we should perhaps not be too concerned.</p><p>One reason not to be too concerned is that interpretations of the data differ. Not everyone agrees that there are wide variations in views.</p><p>A second reason not to be too concerned is that the data often reflect the views of people in general, while we are interested in application of the plausibility test by experts (although sometimes experts are surveyed and turn out to have varying views). There may not be a clear distinction between people in general and experts when it comes to substantive ethics, but there is still a distinction between what people will say when answering a questionnaire quickly and what they would say if they were encouraged to reflect on their views before answering. And those who had already spent some time considering ethical questions could be expected to reflect more thoroughly and more effectively than those who had not.</p><p>A third reason not to be too concerned is that even if results varied between cultures, the results obtained in a particular culture could still be seen as valid for people within that culture. The significance of an ethical claim's being thought correct or of a response to an ethical question's being thought plausible would however then have to be limited to what could be concluded from its being thought correct or plausible relative to the relevant culture. No conclusion based on any supposed general correctness or plausibility could be drawn.</p><p>Finally, when reviewing data on what people think, we must be sensitive to whether the questions they were asked related to correctness or to plausibility. These can be hard to disentangle. A claim will typically be that in all situations, or in all situations of certain types, some specified conduct is required, acceptable, or forbidden. 
And the question put to people is quite likely to amount to "Is it required/acceptable/forbidden to perform such and such action in such and such circumstances?". Such a question would relate to the conduct in question rather than to the claim considered as a response to a question. One might conclude from what a subject said that agreement with a response encapsulated by a claim was being shown. For example, agreement that some specified conduct was required would imply a view that a response to the ethical question to the effect that the conduct was required was correct. It would be less straightforward to get at views on the plausibility of responses. Thinking a claim correct would imply thinking that a response which the claim encapsulated was plausible, but it would be left unclear how wide a range of people thought a claim incorrect but still regarded such a response as plausible, and therefore unclear how far views on plausibility of responses varied between different cultures or between segments of the population defined in other ways. One might however at least hope that if ratios of regarding claims as correct to regarding them as incorrect were much the same across cultures or across other segments, proportions regarding corresponding responses to ethical questions as plausible would be much the same too.</p><h3 style="text-align: left;">Metaphysics</h3><p>In fundamental physics, any plausibility test that is of value will be couched in highly technical terms. It will only be usable by physicists who are deeply immersed in current research. It would be misleading to think of it in the way in which we have been thinking of the plausibility test more generally. And if the results obtained or their conceptualisation turn out to be utterly counter-intuitive to non-experts, so much the worse for those people.
Any objection that a non-technical plausibility test was failed would rightly be ignored by physicists.</p><p>Turning back to philosophy, we do reasonably demand a commonsense metaphysics of space, time and matter. That is however allowed us, with no need to challenge physics. We may borrow an image supplied by Max Tegmark, without needing to accept his whole theory. The world as described by counter-intuitive physics can unproblematically be seen as giving rise to the consensus reality of space, time, objects, and the observable interactions of objects within which we conduct our lives (Tegmark, <i>Our Mathematical Universe</i>, chapter 9).</p><p>There are other areas of metaphysics in which there is not the same need to accept the supremacy of potentially non-intuitive physics and then recover our everyday world.</p><p>One example is given by personal identity. A response to the question of what constitutes personal identity over time may be reviewed to see whether it offers a secure identity through all sorts of changes. These will include both gradual changes such as those of maturing and ageing, and sudden changes such as those caused by brain injuries. And the identity must be substantial enough to fulfil its practical roles in settling to which people we relate in certain ways (as family, friends, colleagues, and so on), in settling attributions of responsibility and property, and so on.</p><p>How the plausibility test can affect debates is interesting. Philosophers will imagine strange cases, including brain swaps, brain divisions, and successful and interrupted teleportations. Then they will precisify or amend everyday notions of personal identity to find notions that will yield plausible verdicts on those cases. This will be an initial application of the plausibility test. 
But the test must be applied again in relation to cases that actually occur.</p><p>It will usually be trivial to show that a notion of personal identity yields plausible verdicts in the most straightforward everyday cases. But there may be difficulties in less straightforward cases, such as extreme memory loss, where we naturally want to say that identity is in fact preserved but have difficulty in doing so. There may also be concern that a notion provided by philosophers is not robust enough to meet our everyday requirements. For example, a criterion based on a sense of attachment to one's past or to one's forthcoming conscious life might be thought to be too focused on ephemeral mental phenomena. And philosophers like Derek Parfit (<i>Reasons and Persons</i>, chapter 12) who say that identity is not the important thing may be thought to have insufficient respect for a notion that is central to our personal and social lives. In all the cases in this paragraph we may see applications of the plausibility test, both by philosophers and by other people who take an interest. And even if one were to discount the views of non-philosophers, a notion that they would criticise as implausible might well also be criticised as implausible by many philosophers.</p><p>Another example is given by the question of human free will. From the inside, we have a clear sense of making our own decisions and acting on them. But from the outside, we may be told that all of our thoughts and actions as they may be characterised in human terms supervene on physical reality, and that this reality's evolution reflects a mixture of determinism and randomness.</p><p>(For an introduction to views we now mention see Fischer, Kane, Pereboom and Vargas, <i>Four Views on Free Will</i>.)</p><p>Some philosophers will tell us that free will is indeed illusory. 
Such responses to the question of free will would fail any plausibility test, even among philosophers, until good reason had been given to think that there was no alternative.</p><p>Other philosophers, the libertarian incompatibilists, will tell us that while the physical world cannot accommodate free will, we have it anyway. Such a response might fail the plausibility test not because it would be disappointing, but because it would be unclear how free will would arise.</p><p>Finally, there are the compatibilists who work on our initial conceptions of free will. Schopenhauer located freedom in our freedom to do what we will, and dismissed the idea of our being able to will what we will (<i>Prize Essay on the Freedom of the Will</i>). More recent philosophers have developed the notion of guidance control: our choices and actions reflect our own personalities because lines of causal influence flow through our brains and bodies, but this does not imply that things could have turned out differently by virtue of influences originating within ourselves and not directly or indirectly prompted by any prior events in the external world. Compatibilist responses to the question of free will would seem to have the best prospect of passing the plausibility test. This reflects the fact that a response can on reflection be found plausible by virtue of some adjustment to our initial demands, in this case an adjustment to everyday conceptions of free will.</p><h3 style="text-align: left;">Epistemology</h3><p>Most people are confident that many facts are known by humanity, and that they themselves know a fair few. Even experts in various disciplines, well aware of the difficulty of making discoveries and the risk of error, take the same view of the contents of their disciplines. 
Responses to the question of how to define knowledge that would make knowledge very hard to obtain, for example definitions that would require no possibility of error, would therefore fail the plausibility test.</p><p>A more interesting case is that of definitions which discriminate between examples of knowledge and non-knowledge in ways that provoke debate, both among people generally and among experts in various disciplines. Typically the examples are justified true beliefs where there has been some element of luck, for example through harmless or even positively helpful reliance on false premises or on defective reasoning. A response that supplies a given definition may be thought to fail the plausibility test if the definition too often classifies such beliefs as knowledge when people are inclined not to do so, or if it too often classifies them as non-knowledge when people are inclined to think of them as knowledge.</p><p>As with substantive ethics, such verdicts as to plausibility can legitimately have force. The facts that we know may not be human constructs, save to the extent that we have invented specific concepts to express those facts, and even then the independent world may have forced us to use certain concepts and not others. But the notion of knowledge is our own construct. It has been created to capture important facts about our relationship to the world, such as the fact that people with knowledge tend to get on better than people without it. The notion has been tailored to capture the fact that there is something more advantageous than true belief, following the issue raised by Plato (<i>Meno</i>, 97-99). And the verdicts on individual beliefs that a definition gives had better not be too far out of line with what people would think prior to philosophical reflection.</p><p>But as with substantive ethics, we must ask about the consistency across cultures of pre-philosophical thought. Again, experimental philosophy has something to say. 
Views on Gettier cases and the like do vary. But as with substantive ethics, we can ask how great the variation really is.</p><p>One difference from substantive ethics is that it is harder with knowledge to minimise concerns about a lack of consistency by saying that views may be for particular cultures rather than for the whole of humanity. Cultural relativity need not be overly troubling in relation to substantive ethical views. Different people live in different circumstances, with different histories. So different judgements of good and bad, right and wrong, may be apt to different societies. But the notion of knowledge is closely tied to the notion of truth. And we tend to regard both the notion of truth and the set of propositions that are true as universal.</p><h3 style="text-align: left;">History</h3><p>At first glance, the discipline of history might seem to amount to the narration of facts on the basis of evidence. That would leave no room for a plausibility test, save in making judgements when the available evidence was not decisive.</p><p>But history goes far beyond chronology. Historical accounts impute motives, they abstract from factual details to identify political, economic and social forces, and they give narratives that are powerfully explanatory and that confer understanding on those who read them.</p><p>Having said that, history is far from being a natural science. There is no scope for repeated experiments, nor for precise and decisive calculations of the extent to which evidence supports conclusions. And both the motives of human beings and the causal links between what they think and what they do are too ill-defined to admit of comprehensive calculation.</p><p>In such a context, there is work for the plausibility test to do. 
Are portrayals of people's thoughts, fears and desires, and identifications of reasons for significant actions, plausible?</p><p>The test will usually be passed in published work, because historians are also human beings and will have a good inner sense of what would be realistic portrayals of the people they study. But we may still see the test as having played a role in processes of thinking, writing and re-writing, perhaps playing that role without historians' being conscious of its having done so. </p><p>The plausibility test acts as a filter to dispose of responses to questions of why events took the course they did that would not be much good, rather than as a way to show that a given response which passes the test is correct. In the context of history, the test may first be applied to see whether Verstehen (understanding) is conferred. Its conferral would be a good sign, given the need for consonance with our background understanding of people and the world if it is to be conferred. Then the test may be applied in two directions, running between Erklären (explanation in a broadly scientific sense) and Verstehen.</p><p>The first direction runs from Erklären to Verstehen. Suppose that some quasi-mechanical causal account has been given as a response to the question of why events took the course they did. If the account is good enough, it will amount to Erklären. But is it good enough? Given the lack of mechanical precision in the world as viewed by historians, it is useful to have a test that brings in a different set of requirements. This is what the plausibility test can do. It can be used to assess whether the quasi-mechanical detail in the account affords a route to Verstehen. 
The move is from testing the proposed mechanism by reference to technical and quasi-mechanical principles to testing whether the account would resonate with us on the basis of our common understanding of how human beings and the world work.</p><p>The second direction runs from Verstehen to Erklären. An account may seem to confer understanding, and that may be checked in an initial application of the plausibility test. But an impression of understanding may be too easily given. If understanding depends on accepting an account that assumes a pattern of causes and effects which is in quasi-mechanical terms implausible, then the account should be rejected as an implausible response to the question of why events took the course they did.</p><p>Historians must be cautious when checking for plausibility by reference to Verstehen, whether in an initial application of the test or to see whether the quasi-mechanical detail provided affords a route to Verstehen.</p><p>The need for caution arises from the fact that principles of human motivation and action, the satisfaction of which in an account is required for the account to confer Verstehen, vary as between societies. Given that the test should be of whether the people studied could plausibly have acted as described, the relevant principles will be the ones of that society. (In anthropological terms, an emic approach should be preferred to an etic one.) These relevant principles may not be the ones that first occur to historians, especially if the historians come from a society other than the one being studied or if they are looking back many centuries. 
Just how easy it can be for principles to differ can be seen from one anthropologist's attempt to explain the story of Hamlet to members of a west African community (Bohannan, "Shakespeare in the Bush").</p><h2 style="text-align: left;">A general consideration of the test</h2><p>We now move on from specific applications of the plausibility test to a more general consideration of its worth.</p><h3 style="text-align: left;">Plausibility and correctness</h3><p>A judgement that a response is plausible is not always a judgement that the response is uniquely correct or that it is among the correct responses. It is however at least a judgement that the response should not be discarded yet, but should continue to be kept in play and explored as a useful way to look at relevant features of the world. It is therefore at least a judgement that the response should not currently be regarded as incorrect, because responses regarded as incorrect are never worth keeping in play except perhaps as part of a lateral thinking exercise in which they may prompt new thoughts.</p><p>There are three possibilities to consider.</p><p>The first possibility is that it was expected that one response would deserve to be regarded as correct. Then the identification of only one response as plausible might amount to a judgement of its correctness.</p><p>The second possibility is that it was expected that several responses would deserve to be regarded as correct. If it was thought that all plausible responses deserved to be regarded as correct, a judgement of plausibility would amount to a judgement of correctness. (This case might not be reducible to the one-response case. 
Differences in the aspects of the topic on which responses focused might prevent us from simply taking their conjunction and presenting it as a single, perhaps unwieldy, correct response.)</p><p>The third possibility is that the nature of the discipline or of the topic would make it inappropriate to assert that every response could be shown to be correct or shown to be incorrect, so that there would be space for an intermediate category of responses which, while plausible, could not have their status as correct or as incorrect determined. Then a judgement of plausibility would not in general amount to a judgement of correctness, although it might do so in relation to some responses. (There might or might not be responses which could never have their status determined. It might only be that at any one time, there would be responses which could not have their status determined in the near term. As the discipline progressed, they might have their status determined or they might cease to be of any interest, but it would be likely that new responses of indeterminate status would also come into play.)</p><p>When a judgement of correctness is made, a judgement of plausibility becomes redundant save as a path to a judgement of correctness. Correct responses have to be accepted whether one likes them or not. The favourable result of any test of plausibility would play no more than a supporting role, showing the absence of a certain kind of objection to a response. There is also pressure on experts to resolve any disagreement as to correctness.</p><p>It is when no judgement of correctness is made, or when conflicting judgements of correctness are made by different experts and there is no prospect of resolving their disagreement in the near term, that the role of plausibility becomes interesting. A judgement of plausibility is not rendered redundant, but may be valuable in its own right. 
So a test of plausibility may make a real contribution, moving us forward from a plethora of possible responses either to one plausible response, or to a modest range of plausible responses.</p><p>We should think in terms of a range of responses, however many responses are currently in play. Even if only one is in play, we should allow for a potential range that would encompass responses which could be introduced. That possibility would be interesting because while there might be grounds to think that only one response could be correct, and there would always be reason to think that when responses conflicted no more than one of them could be correct, there would not in general be reason to think even that only one out of a range of conflicting responses could be plausible, at least not when determinations of correctness were not expected to be available in the near term. </p><p>It may be perfectly possible to consider each member of a set of responses plausible, even if various responses in the range would not sit comfortably together or would contradict one another. This would however need to be limited to saying that each one was plausible individually, not that the conjunction of all of them would be plausible.</p><p>If conflict fell short of contradiction, for example because different responses identified different ethical or experiential considerations as central while each allowed some role for considerations picked out as central by others, or because different responses identified different factors in the explanation of some historical event as the most significant factors, there would be a hope and maybe an expectation that either current approaches to the question or knowledge of the world would in due course advance so that some responses would drop out and the conflict would be resolved. It would however be possible to live with the thought that the conflict might never be resolved. 
If there was contradiction, there would be a more pressing need to resolve the conflict. Then a thought that the conflict might never be resolved would amount to a thought that our grasp of the world and of life might be irremediably inadequate.</p><p>One feature of the humanities would contribute to making it tolerable to regard several conflicting responses as acceptable. This is the fact that a given response is prone to carry with it a given way to weigh up competing considerations and sometimes a given way to interpret evidence. So someone who favours one response may see someone who favours another response not as debating with them against a background of a completely shared understanding of the significance of different considerations and of the meaning of the evidence, but as debating against a background that was in some respects different. The effect would be a bit like that of perspective-taking in the natural sciences, where what might be seen as tension between different accounts of the same phenomenon can be defused by a recognition that different scientists may approach a single topic from different perspectives.</p><h3 style="text-align: left;">The significance of tolerance</h3><p>We now turn to the significance of the scope to tolerate several conflicting responses as plausible. We shall explore what it might mean for the worth of a response's passing the plausibility test, not merely in cases in which several conflicting responses are in fact considered plausible, but in general. We shall say something about the nature of the test which will be relevant whether or not, in a particular case, several conflicting responses all pass it.</p><p>A judgement of plausibility is at least a judgement that it makes sense to look at some feature of the world in a particular way. It is stronger than a judgement that it is pedagogically helpful to look at the feature of the world in that way. Pedagogical helpfulness will reflect the psychology of students. 
Responses that experts would regard as misleading may turn out to be helpful. We on the other hand are concerned with what fully trained experts, who would not need the same level of assistance as students, would say about keeping possible responses in play.</p><p>Having said that, a response may be plausible in our sense when the usefulness of keeping it in play reflects the powers of thought of experts, while some hypothetical more advanced being would regard the response as misleading. Such a being might view it in the ways that human experts would view ways to look at the relevant feature of the world which were needed only to help students. We cannot have more than a speculative grasp of what the powers of such a more advanced being might be, so we cannot tell which answers might be downgraded in the eyes of such a being. </p><p>One might argue that such advanced beings were already among us, in the form of artificial intelligence systems. We might declare certain ways to look at features of the world of which the systems appeared to have no need to be of merely pedagogical helpfulness. We would however be reluctant to do so if, as would be entirely possible, we could not grasp how such systems thought. Dispensing with our own ways to look at features of the world would then leave us with artificial intelligence systems which could assure us that the evidence found should not surprise us, or which could make accurate predictions, but the systems would not give us any understanding. Concerns about degrees of sophistication and comprehensibility of different responses would however mainly arise in relation to the natural sciences, rather than in relation to the humanities which are our concern here.</p><p>While a judgement of plausibility is stronger than a judgement of pedagogical helpfulness, it is weaker than a judgement that the world actually is as described, at least when the judgement of plausibility falls short of a judgement of correctness. 
It is weaker than such a factual judgement even if the factual judgement is read in ways that either anti-realists or perspectivists in the philosophy of science might advocate. (We take perspectivists to read factual judgements as saying "From this perspective, the world is like this".)</p><p>The comparative weakness of a judgement of plausibility explains why it is possible to judge several responses plausible even when they contradict one another. If responses were judged to say how the world was, such tension between them would be intolerable. If one description of the world was thought correct, contradictory ones would have to be thought mistaken. This is not because the world is known not to be an awkward place which could encourage contradictory descriptions. The existence of sensible if inconclusive discussion about adjusting logic in the face of quantum mechanics shows that we cannot be confident of the world's not being so awkward. Rather, our aversion to regarding contradictory responses as correct springs from the fact that doing so would undermine our notion of judgements of correctness as saying how the world actually was and implying decisively how it was not. If our judgements of plausibility do not amount to judgements of correctness, we can tolerate contradiction more easily. Contradiction will however remain uncomfortable.</p><p>Likewise, the comparative weakness of a judgement of plausibility makes it easier to judge several responses plausible when they would conflict with one another in some way that fell short of contradiction than it would be to judge all of those responses correct. The prospect of the world's being awkward enough to encourage conflicting but non-contradictory descriptions is more tolerable than that of the world's encouraging contradictory descriptions. It would undermine not our notion of correctness, but our confidence that we were on the high road to a full understanding. 
And finding several conflicting but non-contradictory responses plausible would not be as depressing as thinking we had to regard them all as correct. It might indeed be encouraging, in that it showed we were capable of developing several lines of enquiry without knowing which one would turn out to be the most fruitful.</p><p>We can see one way in which it can be tolerable to keep conflicting responses in play by reflecting on how things are in certain types of philosophy and in history. An important feature of these areas of work, and of some other work in the humanities, is that decisive judgements as to the truth of assertions can often be expected to lie permanently out of reach. This is so because in order to say anything interesting, one has to go beyond what the evidence requires one to say or not to say, and to interpret it in ways that are optional. Given that the crunch point at which final verdicts of truth and falsity are announced is not expected to be reached, conflicting responses can be tolerated because their coexistence will not be forced to come to an end. This does not however mean that anything goes. There are still standards to be met. For standards in history see Baron, <i>Epistemic Respectability in History</i>.</p><h3 style="text-align: left;">The value of the test</h3><p>As we have already said, the fact that a response passes the plausibility test does not show that it should be taken to be correct, at least not without some substantial additional conditions being met. And we can often tolerate having conflicting responses to the same question which all pass the test. The test is only a filter to reject some responses, while potentially leaving several others in play. Does all this mean that the test is too undemanding for its administration to have value? No, it does not.</p><p>The fact that the test is only a filter to rule out some responses does not undermine the whole process of review of responses, because it is not the only test. 
It will normally be administered in addition to a detailed analysis of evidence and reasoning.</p><p>But why should such a filter add much to detailed analysis?</p><p>One reason is that detailed analysis cannot be as conclusive in the humanities, or indeed in the social sciences and some parts of the natural sciences, as it can be in physics and chemistry. Forms of evidence are diverse enough for there to be scope to overlook relevant evidence, and the evidence that is found can be misinterpreted.</p><p>A second reason is that evidence may be open to alternative interpretations, all arguably legitimate, in the light of different background theories. The use of such theories may be required in order to make the evidence speak at all, so it is not to be avoided. From within a theory, its interpretation of the evidence cannot be seen as illegitimate, and there may be no neutral standpoint from which to weigh up the alternative theories and their approaches to the interpretation of evidence. Then the plausibility test, based on general principles which are not specific to particular theories, may help to screen out theories which fall short in some way. The test would typically do so by finding that responses in some way failed to make sense, implying that there might have been something wrong with the theories under which they were produced. The test would however still not offer a neutral standpoint from which theories could be examined, because it would not have the tools with which to examine theories directly. Indeed, its use might lead to the identification of a theory as unsatisfactory without specifying the defects in that theory, merely condemning it by reference to its results.</p><p>A third reason is that in the humanities in particular, there is a legitimate demand for Verstehen. Responses to questions must bear an appropriate relationship to the nature of human readers, one that really does confer understanding. 
It is hard to check for that through detailed analysis. A general test like the plausibility test is what is needed.</p><p><br /></p><h2 style="text-align: left;">References</h2><p><br /></p><p>Baron, Richard. <i>Epistemic Respectability in History</i>. CreateSpace, 2019.</p><p><a href="https://rbphilo.com/history.html">https://rbphilo.com/history.html</a></p><p><br /></p><p>Bohannan, Laura. "Shakespeare in the Bush". Chapter 5 of James Spradley and David W McCurdy (eds.), <i>Conformity and Conflict: Readings in Cultural Anthropology</i>, fourteenth edition. London, Pearson, 2011.</p><p><br /></p><p>Booth, Anthony Robert, and Darrell P. Rowbottom (eds.). <i>Intuitions</i>. Oxford, Oxford University Press, 2014.</p><p><a href="https://doi.org/10.1093/acprof:oso/9780199609192.001.0001">https://doi.org/10.1093/acprof:oso/9780199609192.001.0001</a></p><p><br /></p><p>Cappelen, Herman. <i>Philosophy without Intuitions</i>. Oxford, Oxford University Press, 2012.</p><p><a href="https://doi.org/10.1093/acprof:oso/9780199644865.001.0001">https://doi.org/10.1093/acprof:oso/9780199644865.001.0001</a></p><p><br /></p><p>Elton, Geoffrey R. <i>The Practice of History</i>, second edition with an afterword by Richard J. Evans. Oxford, Blackwell, 2002. (First edition, Sydney, NSW, Sydney University Press, 1967.)</p><p><br /></p><p>Fischer, John Martin, Robert Kane, Derk Pereboom and Manuel Vargas. <i>Four Views on Free Will</i>. Oxford, Blackwell, 2007.</p><p><br /></p><p>Geach, Peter T. "Assertion". <i>Philosophical Review</i>, volume 74, number 4, 1965, pages 449-465.</p><p><a href="https://doi.org/10.2307/2183123">https://doi.org/10.2307/2183123</a></p><p><br /></p><p>Knobe, Joshua. "Philosophical Intuitions are Surprisingly Robust Across Demographic Differences". 
<i>Epistemology and the Philosophy of Science</i>, volume 56, number 2, 2019, pages 29-36.</p><p><a href="https://doi.org/10.5840/eps201956225">https://doi.org/10.5840/eps201956225</a></p><p><br /></p><p>Mackie, John L. <i>Ethics: Inventing Right and Wrong</i>. Harmondsworth, Penguin, 1977.</p><p><br /></p><p>Parfit, Derek. <i>Reasons and Persons</i>, corrected edition. Oxford, Clarendon Press, 1987.</p><p><br /></p><p>Plato. <i>Meno</i>.</p><p><br /></p><p>Roeser, Sabine. <i>Moral Emotions and Intuitions</i>. Basingstoke, Palgrave Macmillan, 2011.</p><p><br /></p><p>Schopenhauer, Arthur. <i>Prize Essay on the Freedom of the Will</i>, edited by Günter Zöller, translated by Eric F. J. Payne. Cambridge, Cambridge University Press, 1999.</p><p><br /></p><p>Stich, Stephen P., and Edouard Machery. "Demographic Differences in Philosophical Intuition: a Reply to Joshua Knobe". <i>Review of Philosophy and Psychology</i>, published online 2022, no volume or part number assigned at the time of writing.</p><p><a href="https://doi.org/10.1007/s13164-021-00609-7">https://doi.org/10.1007/s13164-021-00609-7</a></p><p><br /></p><p>Stratton-Lake, Philip. "Intuitionism in Ethics". <i>Stanford Encyclopedia of Philosophy</i>, 2020.</p><p><a href="https://plato.stanford.edu/entries/intuitionism-ethics/">https://plato.stanford.edu/entries/intuitionism-ethics/</a></p><p><br /></p><p>Tegmark, Max. <i>Our Mathematical Universe: My Quest for the Ultimate Nature of Reality</i>. New York, NY, Alfred A. Knopf, 2014.</p><p><br /></p><p>Tobia, Kevin (ed.). <i>Experimental Philosophy of Identity and the Self</i>. London, Bloomsbury, 2022.</p><div><br /></div>Richard Baronhttp://www.blogger.com/profile/17869390364282686725noreply@blogger.com0tag:blogger.com,1999:blog-2055583406917680385.post-22140865657632706212023-02-16T19:04:00.000+00:002023-02-16T19:04:40.199+00:00Chatbots, AI, education, and research<p>Some sophisticated chatbots are now available. 
One is ChatGPT, which is being incorporated into Microsoft's search engine Bing. Another is Bard, which is being incorporated into Google's search engine. Connection with web searches allows bots to do more than write in a human style. They can also gather information on which to base what they write.</p><p>This post discusses some implications of chatbots, and of artificial intelligence (AI) more generally, for education at the university level and for academic research. We shall start with education, because that is the area in which the greater number of people will be affected directly. Some of what we shall say in relation to education will carry over to research. This is a natural consequence of the fact that at the university level, students should develop skills of enquiry, analysis and the reporting of conclusions that are also needed in research.</p><p>We shall assume that AI will get a good deal better than it is at the moment. In particular, we can expect systems that meet the needs of specific disciplines to develop. Such systems would draw on appropriate corpora of material both in their training and to respond to queries, and would reason their way to conclusions and present the results of their work in ways that were appropriate to their disciplines.</p><p>References to publications cited are given at the end of the post.</p><h2 style="text-align: left;">Education - developing students' minds</h2><p>When a student finishes a course of education, they should not only have acquired information. They should also have developed ways to think that would be effective if they wanted to respond to novel situations or to advance their understanding under their own steam. Even if they did not continue to work in the disciplines in which they had been educated, such skills would be useful. Many careers demand an ability to think clearly and to respond intelligently to novel challenges. 
And there is also the less practical but still vital point that the ability to think for oneself is important to a flourishing life.</p><p>Ways to think are developed by making students go through the process of finding information, organizing it, drawing conclusions, and writing up the results of their work. They must start with problems to solve or essay questions. Then they must work in laboratories, or use libraries and online repositories to find relevant source material (whether experimental data, historical evidence, or academic papers). They must think through what they have found, draw conclusions, and produce solutions to the problems set or essays on the relevant topics.</p><p>If critical stages in this process were outsourced to computers, educational benefit would be lost. But which stages should not be outsourced, and which stages could be outsourced harmlessly?</p><p>The traditional function of search engines, to show what material is available on a given topic, seems harmless. It looks like merely a faster version of the old custom of trawling through bibliographies or the footnotes in books and articles which surveyed the relevant field. </p><p>Search engines do however have an advantage besides speed over old methods. A search engine will typically put what it thinks will be the most useful results first. This is helpful, so long as the search engine's criteria of usefulness match the student's needs. But even then, it can detract from the training that the student should receive in judging usefulness for themselves.</p><p>The latest generation of search engines, with ChatGPT, Bard or the like built in, take this one step further. If students can express their requirements with sufficient clarity and precision to prompt the search engines appropriately, the results can be well targeted. Rather than giving several pages of links, the search engines can provide statements in response to questions. 
References to support those statements could be supplied along with the statements, but if not, they would be reasonably easy to find by making further searches. The assistance of search engines in going directly to answers to questions might be helpful, but it would also take away practice in reviewing available items of material, judging their relative reliability and importance, and choosing which items to use.</p><p>Moving on to putting material to work, developing arguments, and reaching conclusions, there is not much sign that the new generation of chatbots will in their current form be helpful.</p><p>This reflects the way in which they work. (A good explanation is given by Nate Chambers of the US Naval Academy in his video <i>ChatGPT - Friendly Overview for Educators</i>.) When working out what to say, they rely on knowledge of how words are associated in the many texts that can be found online. They get the associations of words right, but they do not first formulate sophisticated ideas and then work out the best ways to express them. Texts they produce tend to be recitals of relevant bits of information in some reasonably sensible order, rather than arguments that move from information to conclusions which are supported by the intelligent combination of disparate pieces of information or by other sophisticated reasoning.</p><p>So training in the important skill of constructing arguments would not seem to be put at risk by students' use of chatbots. But there is other software to consider, software which can engage in sophisticated reasoning. AI is for example used in mathematics (a web search on "theorem prover" will turn up examples), and in chemistry (Baum et al., "Artificial Intelligence in Chemistry: Current Trends and Future Directions").</p><p>If students relied on software like that to respond to assignments set by their professors, they would not acquire the reasoning skills they should acquire. 
And it would be no answer to say that if they went on to careers in research, they would always be able to rely on such tools. If someone had not in their training reasoned their own way through problems, they would not understand what the conclusions provided by AI systems really meant. Then they would not be able to appreciate the strengths and the weaknesses of those conclusions. They would also be unable to assess the level of confidence one should have in the conclusions, because they would not have a proper grasp of the processes by which the conclusions were reached and what false steps might have been made.</p><p>Software that does all or most of the reasoning required to reach interesting conclusions is not to be expected everywhere. We can expect it to be far more widespread in mathematics and the natural sciences than in other disciplines. In the social sciences and the humanities, there may be data which are open to being marshalled into a form that is suitable for mathematical analysis, and such analysis may yield robust conclusions. Some such analyses may be found under the rubric of digital humanities. But while such analyses might be devised and conducted entirely by AI systems, and their results might be eminently publishable, those results would be unlikely to amount to conclusions that conferred insights of real value, at least not in the humanities and to some extent not in the social sciences either. Humane interpretation of the sort that may confer Verstehen is still needed, and that does not yet appear to be a forte of AI.</p><p>Having said that, where the thinking that AI might be expected to do would suffice to reach interesting conclusions, the combination of such a system with a language system to write up the results could be powerful. All the work could be done for the student. 
</p><p>Such a combination will surely be feasible in the near future in mathematics and the natural sciences, where the results of reasoning are unambiguous and there are standard ways to express both the reasoning and the results. There are already systems, such as SciNote's Manuscript Writer, which will draft papers so long as their users do some initial work in organizing the available information. That package is, as its maker states, not yet up to writing the discussion sections of papers, but we should not suppose that accomplishment to be far away. To write discussion sections, a system would need to have a sense of what was really interesting and of how results might be developed further. But we should not suppose such a sense to remain beyond AI systems for long.</p><p>In other disciplines, and particularly in the humanities, it is much less clear that such a combination of AI reasoner and AI writer would be feasible in the near future. The results of reasoning are more prone to ambiguity, and ways to express those results are less standardized. It might also not be feasible to break down the task of building a complete system into two manageable parts. The distinction between reasoning and final expression for publication is never absolute, and outside the natural sciences it is decidedly hazy, involving substantial influences in both directions.</p><p>Another issue is that in work of some types the expression of results, as well as the reasoning, needs to be apt to confer Verstehen. As with reasoning, that does not yet appear to be a forte of AI. The main obstacle, in relation both to reasoning and to expression, may be that AI systems do not lead human lives. They do not have a grasp of humanity from the inside. Human experience might one day be fabricated for them, but we are some way off that at the moment. (There is more on the themes of Verstehen and the human point of view in Baron, <i>Confidence in Claims</i>, particularly section 5.6.) 
Having said that, AI may well improve in this respect. Systems may come to align their ways of thinking with human ways by having human beings rank their responses to questions, a method that is already in use to help them to get better at answering questions in ways that human beings find helpful.</p><p>We may conclude that AI could put at risk the development of any of the skills that students should acquire, from gathering and organizing information to drawing conclusions and expressing them, but that the relevant software would vary from skill to skill and the threat would probably become serious in the natural sciences first, then in the social sciences, and finally in the humanities.</p><h2 style="text-align: left;">Education - grading</h2><p>There has been concern among professors that students will use ChatGPT to write essays. So far, the concern seems to be exaggerated. If ChatGPT is offered a typical essay title, the results are often poor assemblies of facts without any threads of argument, and are sometimes laughably incorrect even at the merely factual level. But chatbots will get better, and may merge with the reasoning software we have already mentioned to yield complete systems which could produce work that would improperly earn decent grades for students who submitted it as their own.</p><p>Some remedies have been suggested. One remedy would be to make grades depend on old-fashioned supervised examinations, perhaps on computers provided by universities, so as to compensate for the modern loss of the skill of handwriting while not allowing the use of students' own computers, which could not reliably be checked for software that would provide improper assistance. 
Another remedy would be to make students write their assignments on documents that automatically tracked the history of changes so that professors could check that there had been a realistic amount of crossing out and re-writing, something which would not be seen if work produced elsewhere had simply been pasted in. A third remedy would be to quiz students on the work submitted, to see whether they had actually thought about the ideas in their work. A fourth remedy would be to set multi-part assignments, with responses to each part to be submitted before the next part was disclosed to students. This idea relies on the fact that current software finds it difficult to develop arguments over several stages while maintaining coherence. Finally, anti-plagiarism software is already being developed to spot work written by chatbots, although it is not clear whether it will be possible for detection software to keep up with ever more sophisticated reasoning and writing software.</p><p>Alternatively, AI might shock educators into the abolition of grading. It is not that AI would adequately motivate the abolition of grading. Rather, it could merely be the trigger for such a radical move.</p><p>There are things to be said against such a move. Grades may be motivators. Employers like to see what grades potential employees have achieved. And there are some professions, such as medicine, in which it would be dangerous to let people work if their knowledge and skills had not been measured to ensure that they were adequate.</p><p>There are however things to be said in favour of the abolition of grading. Alfie Kohn has argued that a system of grading has the disadvantage of controlling students and encouraging conformity (Kohn, <i>Punished by Rewards</i>). Robert Pirsig puts into the mouth of his character Phaedrus an inspiring account of how students at all levels can actually work harder and do better when grades are abolished. 
As he puts it when commenting on the old system:</p><p>"Schools teach you to imitate. If you don’t imitate what the teacher wants you get a bad grade. Here, in college, it was more sophisticated, of course; you were supposed to imitate the teacher in such a way as to convince the teacher you were not imitating, but taking the essence of the instruction and going ahead with it on your own. That got you A's. Originality on the other hand could get you anything - from A to F. The whole grading system cautioned against it." (Pirsig, <i>Zen and the Art of Motorcycle Maintenance</i>, chapter 16)</p><p>So the end of grading could have its advantages, particularly in the humanities where there is, even at the undergraduate level, no one right way to approach a topic. (This is not to say that all ways would be acceptable. Some would clearly be wrong. And at the detailed level of how to check things like the reliability of sources, there may be very little choice of acceptable ways to work.)</p><h2 style="text-align: left;">Research - the process</h2><p>AI that collates information, reasons forward to conclusions, and expresses the results can be expected to play a considerable role in research in the reasonably near future. There is however some way to go. Current systems seem to be good at editing text but not so good at generating it in a way that ensures the text reflects the evidence. Their specialist knowledge is insufficient. And as noted above, they cannot yet write sensible discussion sections of scientific papers, let alone sensible papers in the humanities. (For an outline of current capabilities and shortcomings see Stokel-Walker and Van Noorden, "What ChatGPT and Generative AI Mean for Science".)</p><p>Influences of AI on research are likely to parallel influences on education. 
The burdens of tracking down, weighing up and organizing existing information, analysing new data, reasoning forward to interesting conclusions, and expressing the results of all this work might be taken off the shoulders of researchers in the near future, leaving them only with the jobs of deciding what to investigate and then reviewing finished work to make sure that the AI had considered a suitably wide range of evidence, that it had reasoned sensibly, and that the conclusions made sense. Having said that, the issues would differ from those that arise in education.</p><p>There could be very great benefit if more research got done, particularly when research addressed pressing needs such as the need for new medical treatments or for greater efficiency in engineering projects. There would be no such benefit in the context of education rather than research, although if the use of AI made education more efficient students might progress to research sooner.</p><p>There is also the point that if an AI system absorbed the content of all the research being done in a given area and interacted with human researchers, this could create a hive mind of pooled expertise and knowledge which would be more effective than the hive mind that is currently created by people reading one another's papers and meeting at conferences. (We here mean a hive mind in the positive sense of sharing expertise and knowledge, not in the negative sense of conformity to a majority view.)</p><p>The development of minds by thinking through problems would be less important at the level of research, because minds should already have been developed through education. The loss of opportunities for development on account of the use of AI would however still be a loss. Every mind has room for improvement. 
In addition to concern about the continued development of general skills, there is the point that not actively reasoning in the light of new research would reduce the extent to which someone came to grasp that research and its significance. Finally, only a researcher who had a firm grasp of the state of the discipline, including the most recent advances, would be able to judge properly whether the results of AI's work passed the important test of making sense.</p><p>A concern related to the development of minds is that novel methods of reasoning by AI might not be properly understood, leading to the misinterpretation of conclusions or a failure to estimate their robustness properly. Well-established techniques should have been covered in a researcher's student days, but new techniques would be developed. A researcher who had never gone through the process of applying them laboriously using pen and paper might very easily not have a proper grasp of what they achieved, how reliably they achieved it, and what they did not achieve.</p><p>Another concern is that the processes of reasoning by AI systems could be opaque to human researchers. This would be an instance of the AI black box problem. Reasoning might be spelt out in the presentation of work, but there would be a risk that the reasoning as represented was not the actual reasoning. If satisfactory reasoning were set out, that might appear to address the issue. But that reasoning might not in fact be related appropriately to the evidence (while the internal reasoning was so related), and this might not be noticed by human researchers reviewing the work.</p><p>One specific form of opacity that should concern us is the risk that when AI systems search for material, they may be influenced by inappropriate criteria. 
Search engines can already give high rankings to links that it is in their commercial interests to favour, or push material that is disfavoured by the political establishment down the rankings (the practice of shadow-banning). If AI used by researchers did the same sort of thing, research could be skewed in wholly improper ways.</p><h2 style="text-align: left;">Research - credit</h2><p>Researchers like to get credit for their work, and are annoyed when other people take credit for work that is not their own. Names on publications need to be the right ones, and if any material is taken from someone else's work it must be attributed in a footnote. One reason for this ethos is that non-compliance would be considered to amount to bad manners, or theft, or something in between these extremes. Another reason is that jobs, promotion and funding depend on the work one has done, so each person needs to be able to take credit for their own work, whether it is published by them or used by others.</p><p>Now suppose that a researcher had relied on an AI system to organize material and reason the way to conclusions, rather than merely to find material. And suppose compliance with the minimal requirement that the use of AI should be disclosed. How should its use affect the allocation of credit?</p><p>One might argue that the use of AI was not in principle different from the use of any other tool. In many disciplines, the use of sophisticated computer systems is routine and is not thought to give rise to special issues of attribution. Even in the pre-computer age, people relied on bibliographies and on other people's footnotes to track down material, and that was not thought to lessen the credit due to researchers who relied on such aids.</p><p>On the other hand, AI systems that were good enough to help with reasoning would learn as they went along, pooling knowledge gained in their use by all researchers in a given field. 
Then any particular user would rely indirectly on the work of other researchers, and might easily be unaware of which other researchers' work was involved. There could be no more than a general acknowledgement, enough to tell the world that not everything was the author's own work but not enough to give credit to specific previous researchers.</p><p>We should not however think that such a failure to credit previous researchers would be entirely new. At the moment, identifiable contributions by others are expected to be acknowledged. But there is also the accumulated wisdom of a discipline, which may be called common knowledge or prevalent ways to think. That wisdom depends on the contributions of many researchers who are not likely to be acknowledged. One may stand on the shoulders of people of middling stature without knowing who they were, and it is not thought improper to fail to acknowledge them by name. The new difficulty that would be created by the use of AI to produce reasoning would not lie there. It would instead be that contributions which would be easy to acknowledge if one worked in traditional ways might accidentally go unacknowledged. On the other hand, one might get an AI system to track such contributions and generate appropriate footnotes.</p><p>We now turn to the significance of credit when allocating jobs, promotion and funding. The basic idea is a sensible one. The aim should be to employ, promote and fund people who produced the best work, and ensure that it was their own work without undisclosed reliance on other people's work. How might the use of AI to reason the way to conclusions matter? We shall again assume that its use would be disclosed.</p><p>Given that the difference made by the use of AI would vary from one piece of work to another, and that a researcher who routinely relied on AI might in fact have the talent to do just as well without its help, it might become harder to decide reliably between candidates. 
On the other hand, such decisions are unlikely to be particularly reliable as it is, at least not when choices are made between several candidates all of whom are of high quality. So any loss might not be great.</p><p>Finally, there is the question of credit for the clear and elegant description of work and expression of conclusions. AI that was rather more sophisticated than is currently available could do this work, and a human author might take credit. Fortunately it is not normal to credit other researchers for one's style of writing anyway, so there would be no appropriate footnotes giving credit to others to be omitted even if the AI had developed its style by reviewing the work of many researchers. And when it comes to the allocation of jobs, promotion and funding, reasoning should in any case be a good deal more important than style.</p><p><br /></p><h2 style="text-align: left;">References</h2><p>Baron, Richard. <i>Confidence in Claims</i>. CreateSpace, 2015.</p><p><a href="https://rbphilo.com/confidence.html">https://rbphilo.com/confidence.html</a></p><p><br /></p><p>Baum, Zachary J., Xiang Yu, Philippe Y. Ayala, Yanan Zhao, Steven P. Watkins, and Qiongqiong Zhou. "Artificial Intelligence in Chemistry: Current Trends and Future Directions". Journal of Chemical Information and Modeling, volume 61, number 7, 2021, pages 3197-3212.</p><p><a href="https://doi.org/10.1021/acs.jcim.1c00619">https://doi.org/10.1021/acs.jcim.1c00619</a></p><p><br /></p><p>Chambers, Nate. <i>ChatGPT - Friendly Overview for Educators</i>.</p><p><a href="https://www.youtube.com/watch?v=fMiYNrjDPyI">https://www.youtube.com/watch?v=fMiYNrjDPyI</a></p><p><br /></p><p>Kohn, Alfie. <i>Punished by Rewards: The Trouble with Gold Stars, Incentive Plans, A's, Praise, and Other Bribes</i>, with a new afterword by the author. Boston, MA, Houghton Mifflin, 1999.</p><p><br /></p><p>Pirsig, Robert M. <i>Zen and the Art of Motorcycle Maintenance: An Inquiry into Values</i>, 25th anniversary edition. 
London, Vintage, 1999.</p><p><br /></p><p>SciNote Manuscript Writer.</p><p><a href="https://www.scinote.net/manuscript-writer/">https://www.scinote.net/manuscript-writer/</a></p><p><br /></p><p>Stokel-Walker, Chris, and Richard Van Noorden. "What ChatGPT and Generative AI Mean for Science" (corrected version of 8 February 2023). Nature, volume 614, number 7947, 2023, pages 214-216.</p><p><a href="https://doi.org/10.1038/d41586-023-00340-6">https://doi.org/10.1038/d41586-023-00340-6</a></p><p><br /></p><h2 style="text-align: left;">Why is there something rather than nothing?</h2><p>(Richard Baron, 1 February 2023)</p><p>This was the question at our Cambridge philosophy café on 22 January 2023. The first impression is that it is both a question that demands a decent answer, and a question that cannot have one. This post does not provide an answer. Instead, it sketches some of the territory.</p><h2 style="text-align: left;">Why is an answer demanded?</h2><p>There are items of many types in the world (meaning not just the world as it is today, but the world with all its history). There are physical objects, events, relationships of space and time (or of spacetime when we focus on physics), laws of nature, mathematical results, thoughts, feelings, and so on. We may be more or less generous in what we regard as an item in the world. But whenever we admit something as an item, we can ask why it is in the world. And we tend to think either that an answer will already be known, or that the discovery of an answer would be perfectly conceivable. Even if we do not have much hope that an answer will in fact emerge, for example where historical records have been lost, we still think that an answer could have been found if things had been different in perfectly identifiable ways. 
At the extreme, we might forgo even that hope and say that no answer could be found, but even then, we would think that there was some unknowable reason for the presence of the item in the world.</p><p>To put all this in traditional philosophical terms, we have a strong inclination to subscribe to the principle of sufficient reason. And we are disturbed when quantum mechanics suggests that there may be no reason why some observations rather than others come to be made. We are left hoping that physics will advance to restore compliance with the principle. The hope may be forlorn, but it is there.</p><p>If we routinely find reasons for the presence of items in the world, and if we can put quantum mechanical worries to one side by noting that they do not extend to everyday items when characterized in the macroscopic terms we in fact use to individuate and describe those items, it would seem reasonable to ask why the whole ensemble is present (not present in the world, for the ensemble is the world, but simply present). And the question of why there is something rather than nothing is a less demanding question than that of why the particular ensemble we find is present, in that an answer to the latter question would automatically be an answer to the former one but not vice versa. Moreover, not only would the question seem reasonable. A response that it should not be asked, or could not be answered, would seem to be unreasonable.</p><h2 style="text-align: left;">Types of explanans</h2><p>Our explanandum is the existence of some non-empty world or other (we do not need to explain its actual make-up). Our explanans could be causal and within the world, causal and external to the world, or non-causal.</p><p>By way of background, we shall make some remarks on ways to explain the existence of items within the world. Then we shall consider causal options, followed by non-causal options. 
Finally, we shall look at the option of dismissing the question.</p><h2 style="text-align: left;">Ways to explain the existence of items within the world</h2><p>We can find reasons why individual items are in the world.</p><p>Sometimes reasons are directly causal, as when stresses between continental plates cause earthquakes, or the values of certain physical constants together with some conditions in the early universe determine which forms of matter have come into being.</p><p>Sometimes reasons are indirectly causal. We may for example say that a particular taxonomy of animals has been created because of the causal influences that have produced the actual variety of animals. Or we may say that a thought exists because of causal influences on neurons. Or we may say that an emotion in general (as distinct from individual instances) exists because causal patterns in the world are such as to generate particular types of human reaction in particular circumstances, where those reactions are reasonably systematic.</p><p>Sometimes reasons are not causal at all, as when we identify the reason why some mathematical statement is a theorem. </p><p>There are also items that would be amenable both to non-causal explanation and to indirectly causal explanation. For example, the existence of Mannerism as an item in the history of European art could be explained non-causally by the presence of distinctive formal features in the relevant paintings and sculptures. And it could also be explained causally, albeit in an indirect way, by an analysis of the thoughts in the minds of artists and their patrons, in turn explained by a causal history of their neurons and of preceding developments which created the artistic context.</p><h2 style="text-align: left;">Internal causes of the whole</h2><p>Now let us consider the whole universe, in all its history. 
The prospects for a causal explanation that relies on causes within the world are not bright.</p><p>Causes generally precede their effects. A cause might arise at the same time as its effect, but even then we require a direction of causation. If one thing caused another, it was in fact the case that the second thing did not cause the first one, even if in other circumstances the second thing could have caused the first one.</p><p>Moreover, we do not expect A to cause B to cause C to cause A. In mathematical terms, we want the causal connections between things to form a directed acyclic graph. In particular, cycles directly from something to itself are ruled out. No self-caused things are permitted. We can however allow that A might cause B on some small scale, which then caused A to grow, which caused B to grow, and so on. Causal feedback is allowed.</p><p>(As usual, quantum mechanics complicates things. See for example the analysis and the references to earlier work in Barrett, Lorenz and Oreshkov, "Cyclic Quantum Causal Models", at <a href="https://doi.org/10.1038/s41467-020-20456-x">https://doi.org/10.1038/s41467-020-20456-x</a>.)</p><p>As soon as we have a direction of causation, we get causal chains to trace backward. We would expect to find something at the head of a chain into which all chains merged as we worked backward, or maybe several things at the heads of various chains which did not merge. A head item would explain everything that followed in its chain, so if it went unexplained, we would still not have explained why there was something rather than nothing because explanations for the existence of other things would be contingent on its existence. 
And everything that was not at the head of a chain would have to stand in some chain or other, so its existence would only have such a contingent explanation.</p><p>At this point we may remark that telling us that empty space has non-zero energy does not answer the question, at least not in the obsessive form in which philosophers are apt to pose it. Empty space with its scientifically determinable properties is not nothing, but a something that could have led in a causal way to what we see today. It has been said that when Lawrence Krauss published <i>A Universe from Nothing</i>, it should have been entitled <i>A Universe from Not Very Much</i>. And to a philosopher, that criticism has bite. (Krauss is however fully aware of the issue. He discusses it in chapter 9. On page 149, he acknowledges that he takes empty space and the laws of physics to exist within his "nothing".)</p><p>We must tread carefully here. It would be possible for everything to be caused without there being anything uncaused, in the way that every positive real number has positive real numbers smaller than it (in fact, infinitely many of them) without there being any smallest one.</p><p>More interestingly, we may need to be careful because temporal precedence relies on there being time, and strange things may have happened with spacetime at the very beginning. Temporal precedence may not be the only type of precedence to give a direction of causation, if we allow for causes to be simultaneous with their effects. But it is the prevalent type. 
If the notion of temporal precedence were to get into difficulties in the context of the early universe, cosmologists of that early stage might be able to offer us a loophole that would allow a cause of things which was within the universe to be fully explanatory.</p><p>There would also be an argument to be had over whether one should see the quest for a causal reason for there being things in general in the same terms as the quest for a causal reason for there being some particular thing or other, even an unspecified thing that would stand as a representative member of things in general.</p><p>While one might raise such doubts about the easy argument that there cannot be, within the universe, a cause of the collectivity of things, there is enough room for worry that we should not expect to find such a cause.</p><h2 style="text-align: left;">External causes of the whole</h2><p>Perhaps there could be a causal explanation which would be saved by not being within the world, so that it did not itself stand in need of a worldly causal explanation.</p><p>Sadly, this idea looks like a non-starter. Something that was causally effective but outside the world would look suspiciously like a god. And what could reasonably be substantiated about a supposed god would be so little as to make the supposition of one nothing more than a place-holder for an answer. Theists may rely on holy texts to describe their gods, but those texts are claims, not evidence. And in the absence of substantiated properties of a god or gods, the assertion of their existence does no more to answer our question as to why there is something rather than nothing than to say "If we had an answer, it would go here". Leibniz, who made an explicit connection with the principle of sufficient reason, may have thought he had done more (<i>Principles of Nature and of Grace</i>, 7-9; <i>Monadology</i>, 36-39). 
But he had not.</p><h2 style="text-align: left;">Non-causal options</h2><p>It is time to look at explanations which would avoid the requirement to respect a unidirectional relation from each specific explanans to its specific explanandum. They would be non-causal explanations.</p><p>Such explanations would comprise facts about the world, or facts about the abstract realm independently of any connection it might have to the world, rather than comprising things within or outside the world. The facts might mention things, but they would not be those things. Given the difficulty of conceiving of facts about things outside the world, aside from unsubstantiated and possibly incoherent claims about supposed gods, we need not distinguish between internal and external facts in the way that we distinguished between internal and external causes. But we must recognize that facts may be about abstract entities, any relationship or lack of relationship of which to the world is not given and is not to be presumed (beyond the observation that logic and mathematics are readily applicable to the world). Such abstract facts may include structures of relationships in which the specific relata are unimportant.</p><p>We seek facts that would make the existence of something rather than nothing unsurprising. There are options.</p><h3 style="text-align: left;">Many possible universes</h3><p>One option is the claim that there are a great many possible universes. An easy argument would be that since there would be only one possible empty universe and a vast number of possible non-empty universes, it is no surprise that a non-empty universe should turn up. All that would then be lacking would be an explanation of why any possible universe at all was actual. If some random one were actual, it would probably be a non-empty one.</p><p>This argument would rely on the claim that there was only one empty possible universe (or at least, not many of them). 
Fortunately, within the realm of the merely possible, the claim of a single empty universe could be made just as, in mathematics, there is only one empty set. It is only in the concrete world that there are many empty boxes.</p><h3 style="text-align: left;">Many actual universes</h3><p>Another option is the multiverse claim that there are in fact a great many universes, all but one of them beyond our ken. It would not be necessary to argue that most of them would be non-empty. Any one non-empty universe would suffice. But this argument would only give a route from there being many universes to there being something rather than nothing. It would leave unanswered the question of why there actually were many universes.</p><p>Either this option or the option of many possible universes could be backed up by a weak anthropic principle to the effect that whatever we observe will be a universe sufficiently complex to support conscious life, and therefore not empty. An empty universe might be a possibility, or there might be one or more empty universes among a number of actual universes, but we could not be in an empty universe.</p><h3 style="text-align: left;">The universe as an abstract structure</h3><p>A third option would be to say that the universe was itself some abstract structure, not dependent on anything for its existence.</p><p>The leading example is the mathematical universe hypothesis that Max Tegmark has set out in <i>Our Mathematical Universe</i> (2014), but that is better and more briefly explained in his paper "The Mathematical Universe" from 2007, available at <a href="https://arxiv.org/abs/0704.0646">https://arxiv.org/abs/0704.0646</a> (take care to obtain version 2, dated 8 October 2007).</p><p>In Tegmark's view, the universe actually is a mathematical structure. This claim goes beyond the observation that the universe is amenable to mathematical characterization. 
Moreover, we are substructures within that structure who are conscious and self-conscious by virtue of the complexity of those substructures.</p><p>Tegmark's approach has the advantage that it is plausible, although not uncontentious, to think that mathematical structures simply exist, independently of anything else's existence. In particular, and importantly for him, they can be seen as existing independently of any human predisposition to think in particular ways (for example to think in terms of objects and causation). They are all form, definable in strictly mathematical terms, with no non-formal content. And this is argued to be enough to characterize any universe. As physics digs deeper and deeper into the nature of reality, it identifies symmetries and conservation laws which say everything that physicists feel the need to say. The need for a separate stage of explaining how the mathematical structures identified give rise to what we perceive does not detract from the sufficiency of physics without that extra stage.</p><p>Tegmark's approach can however be challenged.</p><p>An apparent difficulty is that we need to explain how it is that the particular mathematical structures we find here are instantiated, rather than all possible structures being instantiated to form each and every possible universe. (He does posit a multiverse.) The problem is that once mathematics gets going, even with an empty set and minimal set theory, there is nothing within itself to stop its expansion. We would get all of mathematics everywhere, subject to a few large-scale decision points such as whether to adopt classical or constructivist mathematics or whether to adopt the axiom of choice.</p><p>This apparent difficulty is easily addressed. Tegmark thinks that what comprises any given universe is not a theory but a model of a theory. 
That allows for variation between universes, especially if we allow universes to comprise models of parts of theories.</p><p>This does not however leave an entirely satisfactory picture. A clue to the difficulty lies in Tegmark's invocation of the mathematical principle that "same up to isomorphism" amounts to "same" (the 2007 paper, section II.D). If our universe is isomorphic to a mathematical structure, then in his view it is that structure.</p><p>The problem is that "same up to isomorphism" does not amount to "same" in the everyday sense of "same". When we think in physical terms, we are happy with the idea of several distinct but isomorphic things. We could even imagine several isomorphic universes within a multiverse, so long as the multiverse was realised in a physical or quasi-physical way and not in a purely mathematical way. We would have to presuppose Tegmark's conclusion that we should think of the universe as a mathematical object in order to require ourselves to think of sameness in the mathematical way. </p><h2 style="text-align: left;">Dismissing the question</h2><p>The option of dismissing the question of why there is something rather than nothing remains.</p><p>One could simply take the existence of something rather than nothing as a brute fact.</p><p>One could add that the principle of sufficient reason did not have to be accepted. There are after all things in the quantum world which, so far as we can tell, simply happen. More precisely, there are observations which are such that we cannot currently give any sufficient reason why a specific observation was made rather than an alternative.</p><p>One could dismiss all questions as to why things were as they were and simply ask how things came to be as they were. 
An argument for doing so, given by Lawrence Krauss, is that why-questions assume that there is some purpose to be found, and that there is no sign of any such purpose at the level of the universe (<i>A Universe from Nothing</i>, page 143). But it is not clear that this is so. A why-question need not assume purpose. It may be a rephrased how-question which, in the context of asking why there is something rather than nothing, carries the implication that one is going to go on asking how-questions until one reaches something that plainly has to be the case so that the stream of how-questions can come to a stop.</p><p>One could challenge the formulation of the question, on the ground that it was not possible to talk about nothing. But that argument might easily not succeed. While there might be nothing to which to refer, one could talk about the falsity of every proposition of first-order predicate logic which started with an existential quantifier, the variable of which actually bound an occurrence later in the proposition. (This is only a sketch of a solution. It would need to be worked out in detail.) Or one could point out that we have no difficulty in thinking mathematically of the contents of the empty set, even if philosophers have found that natural languages run into difficulties over such notions.</p><p>Finally, one could treat the existence of something rather than nothing as a mystery. In the words of Wittgenstein (<i>Tractatus</i> 6.44), "Nicht <i>wie</i> die Welt ist, ist das Mystische, sondern <i>daß</i> sie ist". But that would seem to amount to no more than accepting the existence of something rather than nothing as a brute fact and taking up a particular emotional attitude to the fact, unless one had good grounds to think that the mystical was itself a repository of information that was so far inaccessible and might eternally remain so. 
That would be a hope without justification.</p><p><br /></p>Richard Baronhttp://www.blogger.com/profile/17869390364282686725noreply@blogger.com0tag:blogger.com,1999:blog-2055583406917680385.post-33414704012774411652017-02-08T20:57:00.000+00:002017-02-15T23:26:40.067+00:00The humerus side of lifeIn November, when with a friend on our way back from seeing someone depart for Paradise by way of Kensal Green, I tripped and fell on a railway platform and sustained a fracture to my left arm, at the upper end of the humerus. This must rank very low on the scale of injuries requiring hospital treatment, and indeed the orthopaedic surgeon who first looked at it chose to put the arm in a sling and let it heal, rather than resorting to surgery. But the injury was still enough to provoke a few thoughts, which I record here.<br />
<br />
Before doing so, however, I would like to record my very great thanks to everyone who helped, both friends and transport and medical staff. Among the latter I number (in order of encounter):<br />
<br />
1. the platform staff at Oxford Circus station. They were with me within 30 seconds of my falling, and made sure I was alright and able to continue safely (in the company of the friend who was with me) to where I could get medical help;<br />
<br />
2. the pharmacist at Boots on Kingsway whom I first approached in search of a sling and painkillers, who told me that there might be a fracture and sent me to hospital;<br />
<br />
3. the Accident and Emergency and Urgent Treatment Centre staff at University College Hospital. I was passed smoothly round the necessary doctors, orthopaedic surgeon, nurses and radiographers, and while work in such departments must be stressful, they were at all times calm and friendly;<br />
<br />
4. the staff of the Fracture Clinic at the same hospital, which I visited three times from December up to early February. Again, orthopaedic surgeons, nurses, radiographers and administrative staff ran the operation very smoothly;<br />
<br />
5. the radiographer and the orthopaedic surgeon who took a look to make sure there was no displacement when I was in Hong Kong for a while in December;<br />
<br />
6. the physiotherapist whose clinic at University College Hospital I now attend, and probably will attend for a little while to come. Each time I go I get a friendly and efficient review of progress, and am sent away with clear instructions as to which exercises to practise.<br />
<br />
The National Health Service sometimes gets a bad press. This has been by far my most significant encounter with the Service to date, and I want to say out loud that they have been absolutely superb. The next time you read a story saying that they have got something wrong, please bear in mind that they get things right thousands of times a day, and the newspapers hardly ever notice.<br />
<br />
Now to the thoughts provoked by the fracture. None of them is original, but some of them only come to mind in abnormal circumstances.<br />
<br />
1. What happens is so contingent on trivial details. If I had stepped off the train a little bit differently, my course might have been to one side of where it was, or my feet might have fallen at each pace a few centimetres behind where they in fact fell, and the trip might have been avoided. As a corollary to this, there was no practical way to see the risk coming. Even approximations to the predictive power of Laplace's demon are unavailable. <br />
<br />
2. Following on from this first thought, trivial differences can make a big difference to the next few months. My reading and writing were interrupted and then slowed down, and a couple of talks I was due to give had to be cancelled. (A trip to Hong Kong and Singapore, on the other hand, starting ten days after the accident, went ahead, although I don't think the orthopaedic surgeon in the London fracture clinic was very happy about that.) But the effects need not be life-changing. My life is now back to what it probably would have been if the fracture had not occurred. Of course something life-changing might have happened at the talks, had I given them, but that is mere speculation, and the possibility is too ill-specified for there to be any fact of the matter about whether something life-changing would have happened. (For comparison, I buy a lucky dip lottery ticket for each Saturday's draw, but given that the numbers are generated randomly and depend on the shop one uses and the precise time of purchase, there is no fact of the matter as to whether, when I happen to miss a Saturday, I would have won had I bought a ticket.)<br />
<br />
3. It is sometimes said that one should get on and do the important things in life, because one might for all one knows die quite soon. This argument is less persuasive than it used to be, at least when addressed to people who are young or in middle age, because while there are early deaths, the probability of dying when young or in middle age is much lower than it used to be. We should perhaps replace the argument with one which turns on the fact that one might lose some important abilities relatively early in life. My fracture is healing nicely, but I could easily have suffered a permanent injury. And in a world that is structured around people with four fully functioning limbs, good eyesight, and so on, moderate injuries can impose significant limitations, despite provision being made for the disabled. To take a trivial example, when one arm is out of action or cannot be used to apply any significant force, it takes much longer than normal to get dressed, and you have to find someone else to do up your shoelaces.<br />
<br />
4. It is remarkable how little one notices about one's body until something goes wrong. The big lesson for me was that no body part is an island. Everything is connected under the skin. Thus for the first week, any movement of the upper left arm was painful. So I took care to do everything with the right arm. But certain movements of the right arm led to pain in the left arm. And if I had to pick something up from the floor, crouching down (moving only the legs, not the arms) also led to pain in the left arm.Richard Baronhttp://www.blogger.com/profile/17869390364282686725noreply@blogger.com1tag:blogger.com,1999:blog-2055583406917680385.post-23750382286399350722016-11-16T10:13:00.000+00:002016-11-16T10:16:09.676+00:00Court decisions and arguments for themHere is a little puzzle, inspired by a brief visit to the UK Supreme Court yesterday.<br />
<br />
Suppose that an appeal, which turns purely on questions of law rather than on questions of fact, comes to a supreme court. (That is important - we are to assume that this hearing is the end of the line, so if something goes wrong, there is no further appeal procedure to put it right.) And suppose that 9 justices hear the appeal, and decide who wins by a simple majority.<br />
<br />
5 justices would allow the appeal, and 4 would dismiss it, so the appeal succeeds. But the 5 are split 3-2. 3 of them reach their conclusion by using argument-3, and make clear their view that another argument, argument-2, would be not merely inadequate but mistaken. The other 2 rely on argument-2, and make clear their view that argument-3 would be mistaken.<br />
<br />
The 4 who would dismiss the appeal all rely on exactly the same argument, argument-4.<br />
<br />
Given that we expect legal decisions to be made for good reasons, should we worry that the court's decision is opposite to the decision which would be supported by the argument that attracts more support than any other argument?Richard Baronhttp://www.blogger.com/profile/17869390364282686725noreply@blogger.com0tag:blogger.com,1999:blog-2055583406917680385.post-29548301564324902952016-09-26T21:36:00.001+01:002016-09-26T21:38:09.095+01:00Tax and softwareHere is the response I have just sent to the HM Revenue & Customs consultation paper "Making Tax Digital: Bringing business tax into the digital age", which is available here:<br />
<br />
<a href="https://www.gov.uk/government/consultations/making-tax-digital-bringing-business-tax-into-the-digital-age">https://www.gov.uk/government/consultations/making-tax-digital-bringing-business-tax-into-the-digital-age</a><br />
<br />
I write in response to the consultation paper Making Tax Digital: Bringing business tax into the digital age, published 15 August 2016. I write as a private individual, not on behalf of any business or organization. (I used to be in charge of tax policy for the Institute of Directors, but I left that post in 2013.)<br />
<br />
I only have comments on one topic, the supply of software to taxpayers to enable them to work within the proposed digital system.<br />
<br />
The proposal is to make people use software from suppliers who have obtained the approval of HMRC, coupled with a requirement that those suppliers agree to provide cut-down versions (which will lack full functionality) free of charge.<br />
<br />
I think this proposal addresses the problem in the wrong way, and in a way that will cause considerable trouble for HMRC. My reasons for saying this are as follows.<br />
<br />
1. In order to comply with their legal obligations, many businesses will have to spend money in addition to paying their tax. This is fundamentally wrong. You should be able to have all necessary dealings with the state at no additional cost. (Compare the fact that if you want to go to court without spending money on a lawyer, you can do so by representing yourself.)<br />
<br />
2. The software companies will want to offer as little as possible in the free versions, while HMRC will want a reasonable amount to be offered. HMRC will get involved in tangled and wholly unnecessary negotiations, and there will inevitably be the suspicion of cosy deals being done - unjustified suspicion, perhaps, but there nonetheless. HMRC will also be at risk of litigation from rival software providers, when they are seen as having done deals with some providers that are more generous to the providers than the deals they have done with others.<br />
<br />
3. The negotiations with providers will go on year after year, as the parties argue over which updates are to be provided free of charge.<br />
<br />
<br />
It would be far better if HMRC did not get into the business of doing deals, but instead did at least one, and perhaps both, of the following. (Of the two, I think the second, providing a full package, would be the better.)<br />
<br />
<br />
A. Publish the interface and allow anyone to produce software<br />
<br />
Publish the interface - the exact requirements for any file submitted to HMRC. Then anyone could produce software to match, and anyone could make it open-source if they wanted.<br />
<br />
It is not clear how far HMRC already intend to go down this route. Paragraph 2.14 speaks of "developing and releasing new APIs". For this to be satisfactory, it must mean release in full to the public at large (and not just authorized suppliers), and in a form that allows anyone to try out software with them. On this last point, see the remarks by Toby Parkins in response to Q250 of the evidence that the Treasury Committee took on 06 September 2016, available here:<br />
<br />
<a href="http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/treasury-committee/shifting-sands-an-inquiry-into-uk-tax-policy-and-the-tax-base/oral/38512.pdf">http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/treasury-committee/shifting-sands-an-inquiry-into-uk-tax-policy-and-the-tax-base/oral/38512.pdf</a><br />
<br />
HMRC might object that they need to authorize the software for reasons of security of the HMRC system - they don't want people finding ways to feed false data into the system. While that is a legitimate concern, I very much doubt that the authorization of software would be the remedy. After all, people will find ways to spoof authorized software, and whatever security systems will be used with the authorized software (eg public and private keys) could be made a requirement of the software that anyone was allowed to develop.<br />
<br />
Paragraph 2.18 also indicates that HMRC would undertake a good deal of testing of software, something which would only be practicable if HMRC had just a few packages to test. While that might be reassuring to taxpayers, it is unlikely to be necessary. If a product is a duff one, word will quickly spread. And it would be open to HMRC and others to provide sets of test data which people could put into software, and copies of the files which should come out at the other end for sending to HMRC. Consumer groups could then run the test data and compare the files which emerged with the files which should emerge, to see whether the software was any good.<br />
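The consumer-group check described here could be sketched as follows. The JSON format and the field names are purely illustrative assumptions on my part; HMRC have not specified a file format, and the point is only that comparing a package's output against a published expected file is mechanical.<br /><br />

```python
import json

def compare_submission(produced_path, expected_path):
    """Compare the file a package produced with the file HMRC say should
    be produced for the same test data. Returns a dict mapping each
    mismatched field to a (produced, expected) pair; an empty dict
    means the package passed this test case."""
    with open(produced_path) as f:
        produced = json.load(f)
    with open(expected_path) as f:
        expected = json.load(f)
    mismatches = {}
    for field, expected_value in expected.items():
        if produced.get(field) != expected_value:
            mismatches[field] = (produced.get(field), expected_value)
    return mismatches
```

Run against each published test case, a non-empty result pinpoints exactly which figures the package got wrong, which is all a reviewer would need in order to warn other taxpayers off a duff product.<br />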
<br />
<br />
B. Provide a full package free of charge<br />
<br />
HMRC could provide their own full-featured package to do everything, including the keeping of accounting records that would feed into submissions to HMRC, free of charge (and preferably open-source, so that people could make their own modified versions of it if they wished, or at least look at it in detail and propose amendments), and fully functional on all of Windows, Mac and Linux.<br />
<br />
That approach would be far more likely to wean people off the shoebox-full-of-invoices system of record-keeping than asking taxpayers to spend money on commercial systems. And it would probably involve a good deal less effort for HMRC in total than entering into negotiations with commercial suppliers and testing all their various products.<br />
<br />
HMRC's traditional argument against providing full-featured packages free of charge is that it would undermine the private sector software industry. Well, tough. It is not HMRC's job to create work that people then have to pay to get done. The concern in paragraph 2.22 that HMRC should provide reassurance to companies that they could provide both free and paid-for packages and still make money is wholly misplaced. Companies' profitability is not in principle HMRC's problem, and it is only made HMRC's problem if they first take the decision to rely on commercial suppliers, rather than providing a full and free package.<br />
<br />
Moreover, acting to keep commercial providers in business runs right against the idea of reducing regulatory burdens. And in the old days of paper records, the Inland Revenue happily provided all the forms you needed, free of charge.<br />
<br />
Finally, here is an admittedly off-the-wall suggestion. If HMRC do go down this route, it will be worth considering ways to get a package developed cheaply. It would make a fine project for computer science students - some to write the package, and others to find out what was wrong with it and propose corrections. Just look at what the open-source community has achieved, using collaboration tools like GitHub, and you will see that it really would not be that difficult. HMRC's main concern would probably be time discipline - the package would have to be ready by a certain date in order to fit in with broader plans. But it is not clear that such worries would be any greater than they would be with more traditional methods of software development that have been tried by Government agencies in the past.<br />
<br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2055583406917680385.post-90863151396254692142016-07-25T20:37:00.002+01:002016-07-25T20:37:08.437+01:00Pokémon Go: Philosophy ExaminationPhilosophy Examination<br /><br />Subject: Pokémon Go<br /><br />Answer as many questions as take your fancy<br /><br />Time allowed: for as long as you have nothing better to do<br /><br /><br />Note: this examination is based on a simplified version of Pokémon Go, which is not identical to the actual game. The broad idea of capturing different monsters which appear on the screen of a mobile phone when one is in the right area is carried over from the actual game. Other features which are to be assumed are indicated by the questions. Candidates may assume such other features as are reasonable given what the questions ask, so long as no assumptions which would make the questions trivial to answer are made.<br /><br />1. Assume that when several people are at the same location, surrounding a table, they can all see a monster in the middle of that table, and they see it in ways that reflect their relative positions. (For example, if someone facing south sees the monster face-on, someone facing west sees its right hand side.) And when a viewer moves round the table, his or her view of the monster changes accordingly.<br /><br />(a) Under what additional conditions, if any, would this be evidence that the monster was real?<br /><br />(b) What kind of evidence, if any, would help to settle the question of whether the viewers saw one monster or numerically distinct but qualitatively identical monsters?<br /><br />(c) How would your answer to (b) be affected by whether the monster disappeared from all screens when it was captured by one of the viewers?<br /><br />2. Assume that a monster is at an absolutely fixed position relative to the Earth's surface. Does it exist when no-one is looking at it?<br /><br />3. 
What conditions, if any, of numerical identity over time could be applied to monsters?<br /><br />4. New monsters are supposed to come out of eggs which have been fertilized and laid following an encounter between two monsters, and the species of a new monster is systematically related to the mother's ancestry. (It seems that these facts are known from other Pokémon games, rather than from Go.) But it appears that no-one has seen a monster lay an egg, and the eggs are simply found.<br /><br />(a) Is inference to the breeding-and-laying account an example of inference to the best explanation?<br /><br />(b) Is it a respectable inference (of whatever type it may be)?<br /><br />5. Monsters are apparently distributed in a way that is positively correlated with human population density. Such a distribution would give people in rural areas very limited access to monsters, unless they travelled to urban areas.<br /><br />(a) Assuming that there is a cost to the creation and maintenance of monsters, which is a cost per monster unrelated to the location of the monster or the number of other monsters near it, would such a distribution policy accord with utilitarian principles?<br /><br />(b) If you were behind Rawls's veil of ignorance, would you approve of such a distribution policy?<br /><br />6. Some of these questions come out of a conversation between the examiner and a (human) interlocutor who is happy to remain anonymous. 
Other people may have already had and published the same or similar thoughts.<br /><br />(a) If thoughts which are qualitatively identical to those set out in the questions here have already been captured by others, do they count as captured by the examiner and the interlocutor, and do they get points for capturing them?<br /><br />(b) Is there any identity other than qualitative identity for (i) thoughts (ii) Pokémon monsters?Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2055583406917680385.post-48084060141910388992015-04-28T07:41:00.000+01:002015-04-29T07:13:41.386+01:00Buridan's assBuridan's ass thought he had the answer. Each time he could not decide between two bales of hay, he would toss a coin. If it came up heads, he would start with the bale on the right; if tails, he would start with the bale on the left.<br />
<br />
But there would be an equally good rule, heads-left, tails-right. How could he decide between these two rules?<br />
<br />
Then he had another idea. He would build an electronic device with a loudspeaker and a button. When he pushed the button, it would randomly say "Right" or "Left", and he would follow its instruction. It would not matter whether it was genuinely random, or determined but as near random as made no difference (for example, if it picked up whatever radio waves happened to be around and used them to decide when to stop a counter that was flipping left-right a million times a second). He did not have to worry about every philosophical problem.<br />
<br />
But the device would not have any notion of the meanings of the words "right" and "left". It would be like the person handling the dictionaries inside Searle's Chinese room. So the ass would have to make an arbitrary choice between the rule that said the sound of the word "right" meant he was to choose the bale on the right, and the equally good rule that it meant he was to choose the bale on the left.<br />
<br />
He began to wonder whether there was any way to get rid of the arbitrariness, as opposed to just moving it to a different place. Perhaps there could not be. Perhaps for any rule or system of rules, there would have to be a mirror image rule or system that would always have the opposite effect. This might be an inevitable result of the symmetry of the situation that created the problem.<br />
<br />
At this point he felt hungry. He went out into the yard, and was very pleased to see that there was only one bale of hay. And as it was directly in front of him, he would start by eating the middle part of it.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2055583406917680385.post-47942827681751862932015-03-01T15:41:00.005+00:002015-03-02T21:50:44.761+00:00TetralogueOxford University Press have kindly sent me a review copy of Tetralogue: I'm Right, You're Wrong, by Timothy Williamson. It is an introduction to a wide range of philosophical problems, presented as a conversation between four people on a train.<br />
<br />
Dialogues have a distinguished history in philosophy: think of Plato, Berkeley and Hume. Many such dialogues steer the reader gently but firmly towards the author's preferred solutions. This one is different. The problems are put in front of us. Some sketches of solutions are identified, and are explored sufficiently to show us that they could not easily be worked up into fully developed solutions with which all would be comfortable. But we are neither told nor shown that one sketch is the right one upon which to concentrate. The reader who asks "What sort of thing does Timothy Williamson want me to think about this?" will get no answer, whether "this" be relativism, vague language, the primacy of science, the status of logic or the nature of ethical claims. Readers can reach their own conclusions, but they need to work at reaching them for themselves. They also need to be wary of identifying with any one of the four participants and then casually agreeing with the chosen one. The participants have such different outlooks that it would be easy to take to one and against the others. Bob insists on making room for the unscientific, Sarah has no truck with such nonsense, Zac is a relativist who always knows that he can be helpful, and Roxana is the Queen of Logic.<br />
<br />
For whom is the book? The beginner in philosophy will engage with serious philosophical questions without realizing it. Those who are steeped in the western philosophical tradition will enjoy the many allusions to writers, problems and solutions that are hidden in the dialogue. The benefits to these contrasting readers draw our attention to a third reader, who will get more out of the book than either of the first two. This is the beginner who has the benefit of guidance from an expert who can show where there is more to the dialogue than meets the eye, and who can push the beginner to use the dialogue as a springboard for his or her own arguments. This would be a very good book to use in an informal course for adult or teenage beginners, a course that was aimed at broadening the mind while having fun rather than at passing an examination.<br />
<br />
The allusions pose a puzzle. Some of them must be deliberate. For example, on pages 35-36 Zac adopts the voice of Plato's Socrates to commit Sarah to a position with which she is not at all comfortable. Others may or may not have been planted by the author. On page 134, we have a suggestion that we should take into account only the views of those who can look at things from all relevant points of view (those who are both accountants and soccer fans in this case). Is this a deliberate allusion to Mill's justification for preferring the opinion of Socrates to that of the fool, and the opinion of the human being to that of the pig (Utilitarianism, chapter 2)? Or is that connection created solely by the reader? And if the author did not intend the allusion, does it still qualify as an allusion? We might go on to distinguish the case where it was unintended but the words were a consequence of the author's having read Mill from the case where there was no causal connection between that reading and what the author wrote, and then argue about whether the concept of causal connection was the right concept of connection in this context, and so on. Thinking about the author and the reader, rather than solely about the contents of the book, can set off such a torrent of questions. At least we can be confident that the author will be perfectly happy to have given us a meta-problem to ponder, whether he did so accidentally or deliberately.<br />
<br />
The last section of the book is about ethics, and the tone changes. The gap between the dialogue in the book and how a discussion might go in a philosophy class shrinks. This is a reminder that ethical issues arise in all of our lives, so that a direct presentation of the issues can engage anyone. It is not only philosophers who see that the issues are important. But this change of tone is no let-down. Like the train blending smoothly into the dusk at the end of the book, the book blends philosophy smoothly into the life of the reader. This is a serious book that is also a lot of fun.<br />
<br />
<br />
Tetralogue: I'm Right, You're Wrong, by Timothy Williamson. Oxford University Press, 2015, 160 pages, hardback £10.99. ISBN 978-0-19-872888-7.<br />
<br />
Publisher's web page:<br />
<a href="http://ukcatalogue.oup.com/product/9780198728887.do">http://ukcatalogue.oup.com/product/9780198728887.do</a><br />
<br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2055583406917680385.post-15053128852487190512014-02-28T18:37:00.001+00:002014-02-28T19:47:41.929+00:00Open access publicationIn a recent incident, a number of fake academic papers, which were computer-generated gibberish, got published (and then withdrawn when the problem was spotted). Nature's account is here:<br />
<br />
<a href="http://www.nature.com/news/publishers-withdraw-more-than-120-gibberish-papers-1.14763">http://www.nature.com/news/publishers-withdraw-more-than-120-gibberish-papers-1.14763</a><br />
<br />
It is all very entertaining. It is also not surprising that there should be occasional incidents like this, when so much gets published, and appropriate peer reviewers may be in short supply. At least we can hope that when a paper is gibberish, as opposed to merely bad, even a first year student would be able to see that it was gibberish. That is, there should be an effective control at the point of reading.<br />
<br />
But is this a problem that could be solved, at least partially, by the purest form of the open access approach to academic publishing, an approach that is in any case gaining ground?<br />
<br />
I have in mind the free publication of papers on websites, in ways that make them freely accessible, rather than in traditional journals. This is happening anyway. The arXiv, <a href="http://arxiv.org/">http://arxiv.org/</a> , sets a particularly high standard of organization and ease of use, including the tracking of revisions of papers, but there are other worthy sites too. Such sites do need funding: the arXiv is funded by Cornell University Library and others. But the expense must be modest, compared to the immense value of the service.<br />
<br />
Often, a paper published in this way is a draft, not the final version, which will typically appear in a traditional journal and will be the version of record, the version for others to cite. Publication in a traditional journal creates expense, either for the author, or for libraries or individual readers. Some of that expense is justified, because the selection of papers, the organization of peer review and the checking of formatting takes effort - although reviewers themselves are not likely to be paid, and the rise of LaTeX and stylesheets to use with it means that formatting, and the checking of it, involves much less work than it used to require. But some publishers exploit the prestige of their journals, and the ability to deny online access to past years' editions if one stops paying the subscription, in order to make very large profits, at the expense of university libraries who could otherwise use the money elsewhere.<br />
<br />
The obvious solution would be to move to free online publication of the version of record of each paper, and to buy permanent open access to past years' editions of journals with one-off payments to publishers.<br />
<br />
But what would then happen to quality control?<br />
<br />
It could be imposed by all who read papers - and open access would greatly increase the pool of potential readers, improving the effectiveness of this form of quality control. A well-organized website could allow readers to rate papers on a simple scale, and to leave comments. Ideally, the raters and commentators should be identified, and should be allowed to rate one another, so that expert commentators (as identified by the ratings of others) found that their ratings and comments carried more weight than those of people without such recognition. Indeed, people who were highly rated should have their ratings of other commentators given more weight than the ratings of commentators that were given by those who were not themselves highly rated.<br />
<br />
There would be interesting problems here in the design of voting systems. How could the risk of conspiracies to distort the voting be minimized? (The risk could not be eliminated, unless an authority imposed weights by reference to a list of approved academics.) But something that was good enough should be possible.<br />
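One of many possible designs for the weighting idea sketched above is a fixed-point iteration: each commentator's weight is the weighted average of the ratings others give them, so ratings from highly rated commentators count for more. Everything here, the function name, the rating scale, the normalisation, is my own illustrative assumption, not a worked-out voting system.<br /><br />

```python
def reputation_weights(ratings, iterations=50):
    """Compute commentators' weights from their ratings of one another.
    ratings[i][j] is the rating (0 to 1) that commentator i gives
    commentator j, or None if i has not rated j. Each weight is the
    weighted average of the ratings a person receives, iterated to a
    (near) fixed point, then normalised so the weights average 1."""
    n = len(ratings)
    weights = [1.0] * n  # start by trusting everyone equally
    for _ in range(iterations):
        new = []
        for j in range(n):
            num = sum(weights[i] * ratings[i][j] for i in range(n)
                      if i != j and ratings[i][j] is not None)
            den = sum(weights[i] for i in range(n)
                      if i != j and ratings[i][j] is not None)
            new.append(num / den if den else weights[j])
        total = sum(new)
        weights = [w * n / total for w in new] if total else weights
    return weights
```

A commentator rated highly by already-trusted commentators ends up with more influence than one rated highly only by unknowns. This is essentially the idea behind eigenvector centrality; as noted above, conspiracies to distort the voting would still need separate safeguards.<br />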
<br />
But what about the academic job market, the allocation of research grants, and so on? Lists of publications in prestigious journals currently form important parts of applications. Here, I think there is not much difficulty. An application would include links to papers, and to the comments on them. Those who allocated jobs and grants would know whose comments were most worthy of note. They would then have the advantage of often having more than one opinion on the applicant's work. And one would also eliminate the unfairness that springs from the current element of luck in publication. Luck arises because each journal can publish far fewer papers than are submitted, and is therefore likely to reject papers that were just as worthy of publication as the accepted ones. Finally, any papers by an applicant that had not been exposed to open comment in this way would be treated with suspicion by the allocators of jobs and of grants, encouraging a fairly rapid shift to the system.<br />
<br />
So how would this solve the problem of fake papers? The first people to look at such a paper would simply leave comments to the effect that it was gibberish. It would then be ignored by the academic community.<br />
<br />
Finally, why should we cut out commercial publishers? Why is the purest form of open access publishing the most desirable form in this context, apart from the obvious point that it would allow universities to make better use of their funds? The answer is that if there is no commercial interest, there is no reason to design a system that would be anything other than the best system for the academic purposes of honest appraisal and the encouragement of good work. A commercial publisher would always have an urge to control adverse comment on papers that it had published, and to solicit favourable comment from prestigious commentators. Good publishers would resist the urge, but it would be safer to take money out of the system altogether.<br />
<br />
<h2 style="text-align: left;">Government accountability and freedom of information (25 October 2013)</h2>Gus O'Donnell, Cabinet Secretary until the end of 2011, has made waves recently by making proposals for improving the government of the UK which include vetting prospective MPs before they stand for Parliament. Amongst the responses has been an excellent one from Douglas Carswell, calling for accountability in the other direction: we the electorate, and backbench MPs, need to be able to hold civil servants and ministers to account. His piece is available at:<br />
<br />
<a href="http://blogs.telegraph.co.uk/news/douglascarswellmp/100242390/how-to-expose-whitehall-cock-ups-give-us-the-power-to-sack-ministers-and-civil-servants/">http://blogs.telegraph.co.uk/news/douglascarswellmp/100242390/how-to-expose-whitehall-cock-ups-give-us-the-power-to-sack-ministers-and-civil-servants/</a><br />
<br />
It is, however, difficult to hold a government to account, unless we can see what considerations ministers and civil servants have weighed, and how they have reached their decisions. Unfortunately, there is a major obstacle to that in the exemptions from disclosure of documents that are conferred by the Freedom of Information Act 2000: section 35 (formation of government policy, etc.) and section 36 (prejudice to effective conduct of public affairs). As a rule, we cannot get to see the policy papers that go back and forth among civil servants, and between them and ministers, and which show what evidence was considered and how decisions were reached. If the exemptions were abolished, MPs, journalists and the public could see the papers, and would have a valuable way to keep tabs on civil servants and on ministers.<br />
<br />
The traditional argument is that if these papers had to be disclosed, that would inhibit free and frank discussion among civil servants, and between them and ministers. We should not accept this argument. Everyone would know in advance that such papers would be full of odd policy ideas that got rejected as silly, uncertainties about data (which should be disclosed anyway), queries over the value of evidence received from external consultees, and mentions of factors that might be seen either as risks of policies, or as advantages of them, depending on one's political stance. We know that the policy formation process is messy, and we would not think any the less of governments if we had confirmation of that fact. The worst consequence would be a bit of political embarrassment, and that matters far less than our ability to see whether the people we pay to run the public sector, and to formulate legislation, are doing a good job.<br />
<br />
One might fear that there would be too much paper to plough through, and no good way to identify the key documents quickly. There would also be the difficulty of wording freedom of information requests so as to find out what was wanted. If such a request asks for the wrong thing, or leaves it open to the relevant department to supply very little information, the response can easily fail to give the person who made the request what they wanted.<br />
<br />
These difficulties could, however, be overcome if external users had access to departmental document management systems, so they could search the stock of documents themselves. The official response to such a radical move might well be, "But then you might get access to personal data on taxpayers, or NHS patients, or litigants". But it is unlikely that this would really be a problem. It would not be difficult to tag documents as needing redaction before they could be accessed - although there would need to be severe disciplinary measures against civil servants who were found to be tagging everything, just so as to make life difficult for outsiders.<br />
<br />
Will any of this happen? Maybe not. But if enough backbenchers from all parties wanted it to happen, they could force it on those who are in government, or who are on the Opposition front bench and hope that they will in due course be in government.<br />
<br />
<h2 style="text-align: left;">The life without examination (12 September 2013)</h2>In the Apology, at 38a5, Plato famously reports Socrates as saying that the unexamined life is not worth living. At least, that is the standard English translation. But every now and then, someone contests the claim. Why should a life be worthless if it is wholly outward-looking, devoid of introspection? (We may note in passing that "not worth living" may be too strong a translation. "Not the sort of life one should live" would be a possible reading.)<br />
<br />
Sometimes, when I have seen this view, or some related view, I have wondered out loud about the translation. (Examples where I have commented are Brian Leiter's blog on <a href="http://leiterreports.typepad.com/blog/2012/01/the-examined-life-is-not-worth-living.html" target="_blank">17 January 2012</a>, and Stephen Law's blog on <a href="http://stephenlaw.blogspot.co.uk/2013/09/george-ross-memorial-lecture-tomorrow.html" target="_blank">12 September 2013</a> - original post dated 7 September.)<br />
<br />
I put my thoughts here now, in the hope that some expert in Plato's Greek may have a view. I am not such an expert, so my own thoughts are mere speculation.<br />
<br />
My concern is the word "anexetastos". This is the word that is standardly translated as "unexamined", with the implication that it is one's own life that needs examining. Liddell and Scott (the big one, colloquially known as the Great Scott) gives two meanings:<br />
<br />
(a) not searched out, not inquired into or examined;<br />
<br />
(b) without inquiry or investigation.<br />
<br />
This dictionary refers to Apology 38a5 in giving the latter meaning.<br />
<br />
An Intermediate Greek-English Lexicon (the Middle Liddell) reinforces the point by giving (a) not inquired into or examined, (b) uninquiring, and again links Plato to the latter meaning, although without a reference to the Apology.<br />
<br />
On the scope for a verbal adjective to have both active and passive meanings, see Smyth's Greek Grammar, page 157, paragraph 472.<br />
<br />
Liddell and Scott's decision to link Plato to (b) and not (a) does not in itself prove anything. I assume that they simply followed the opinion of Plato scholars as to the translation. But if (b) were the meaning to adopt, Socrates' prescription would look rather different. It would amount to saying that you should enquire into things and strive to find out the truth about the world. You might yourself be a main object of your enquiry, or you might turn your gaze outwards, making enquiries in physics, natural history, geography, philosophy, or whatever else was of interest. The prescription would amount to an injunction not to be slothful intellectually, but to pursue knowledge and understanding.<br />
<br />
<h2 style="text-align: left;">Most recent common ancestor (2 September 2013)</h2>There has been a pause in my blogging. I have been engrossed in other work. The pause may last for a while longer. So here is a little puzzle, as an entr'acte.<br />
<br />
According to Wikipedia, Elizabeth of York, wife of Henry VII, is the most recent common ancestor of all English monarchs. I have not checked this claim, but let us assume that it is correct. The puzzle, which we would need to solve before checking the correctness of the claim, is as follows.<br />
<br />
How should we define "most recent common ancestor" in this context, so that a determinate person, or a determinate couple, is picked out, and so that it is interesting that this person, or this couple, is picked out?<br />
<br />
The condition of interest would be failed if our definition led us to pick out George VI, or even if it led us to pick out George V, or Victoria. We want some sense of "furthest back most recent". But then, we must not allow Elizabeth of York's mother, Elizabeth Woodville, to displace her.<br />
<br />
Technical terms from mathematics may be used freely.<br />
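By way of a baseline, here is the standard order-theoretic candidate, sketched in Python on a toy parent map (the names are invented): count a common ancestor as "most recent" if it is not itself an ancestor of any other common ancestor. The puzzle is precisely that this baseline does not yet guarantee a single determinate person or couple, nor an interesting one, so it is a starting point rather than an answer.<br />

```python
def ancestors(person, parents):
    """All strict ancestors of person, by walking the parent map."""
    seen = set()
    stack = list(parents.get(person, ()))
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(parents.get(p, ()))
    return seen

def most_recent_common_ancestors(group, parents):
    """Common ancestors of everyone in group, keeping only those who are not
    themselves ancestors of another common ancestor. This rules out, e.g., the
    mother of a common ancestor, but it may still return several people, or none."""
    common = set.intersection(*(ancestors(g, parents) for g in group))
    return {a for a in common
            if all(a not in ancestors(b, parents) for b in common if b != a)}

# Toy data: two monarchs descend from "link", whose parent is "forebear".
parents = {"monarch1": ["link"], "monarch2": ["link"],
           "link": ["forebear"], "forebear": []}
```

On this toy tree the function returns only "link": "forebear" is excluded because, like Elizabeth Woodville, she is an ancestor of another common ancestor.<br />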
<br />
Blogspot does not, so far as I know, support LaTeX math environment, so please feel free either to cut and paste logical symbols from elsewhere, or to use E for the existential quantifier, V for the universal quantifier, - for negation, and > for implication.<br />
<br />
<h2 style="text-align: left;">Bonn, Berlin and Gettier (17 June 2013)</h2><br />
This month, we are fifty years on from the publication of Edmund Gettier's celebrated paper, "Is Justified True Belief Knowledge?" (Analysis, volume 23, number 6, June 1963, pages 121-123). Today, 17 June, is the sixtieth anniversary of the uprising in East Germany, which might have led to the early reunification of Germany (at least according to a broadcast on B5 radio this morning), had the Soviets not sent the tanks in.<br />
<br />
We may also recall, for no particular calendrical reason, Bertrand Russell's note of the Gettier phenomenon, avant la lettre. The story is told in J E Littlewood, Littlewood's Miscellany, edited by Béla Bollobás, 1986, page 128. "He told me (c.1911) that he had conceived a theory that 'knowledge' was 'belief' in something which was 'true'. But he met a man who believed that the Prime Minister's name began with a B. So it did, but it was Bannerman and not Balfour as the man had supposed." [Balfour was Prime Minister from 1902 to 1905, and Bannerman from 1905 to 1908.]<br />
<br />
Let us stick with the letter B, but move forward to the present day. Suppose that Claude, who has never lived in Germany, and whose access to German news is very limited, was taught in school, in the 1970s, that Bonn was the capital of West Germany, that West Germany was much larger, and much more significant economically, than East Germany, and that one part of Berlin was the capital of East Germany. Since then, Claude has heard of German reunification, but not of any change of capital. He reasons that the capital of Germany is probably still Bonn, but that it might have been moved somewhere else, and that if it had been moved, Berlin would have been the most likely choice.<br />
<br />
He therefore forms the justified true belief that the name of the capital of Germany begins with a B. Does he know this?<br />
<br />
The case is not on all fours with the Balfour-Bannerman case. Claude is aware that he might be wrong in his belief that Bonn is the capital, but reasons that even if he is, there is still a good chance that he is right in his belief that the name begins with a B. That reasoning is perfectly sound, and not just because the capital was in fact moved to Berlin. It would still have been sound reasoning, had Frankfurt or Hamburg been chosen. The problem then would have been not lack of justification for the belief that the name began with a B, but its falsity.<br />
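Claude's reasoning can be put in probabilistic terms. The particular credences below (0.6 and 0.8) are my own hypothetical numbers, purely to make the structure visible; nothing in the case fixes them.<br />

```python
# Hypothetical credences for Claude; the numbers are illustrative assumptions.
p_bonn_still_capital = 0.6   # Claude thinks the capital is probably still Bonn
p_berlin_if_moved = 0.8      # and that any move would most likely be to Berlin

# Both "Bonn" and "Berlin" begin with a B, so either branch supports the belief:
# the belief is true if nothing has changed, or if the move was to Berlin.
p_name_begins_with_b = (p_bonn_still_capital
                        + (1 - p_bonn_still_capital) * p_berlin_if_moved)
# On these numbers, roughly 0.92: higher than the credence in either lemma alone.
```

The point of the arithmetic is that the belief about the initial letter is better supported than the belief that Bonn is still the capital, which is why the case differs from the Balfour-Bannerman one.<br />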
<br />
The reasoning would also have been sound if Bielefeld had been chosen, but then the truth of the belief that the name began with a B would have had no connection with Claude's reasoning, and we would probably have wanted to say that Claude's belief did not qualify as knowledge.<br />
<br />
Let us return to the real world, in which the capital is moved to Berlin, Claude's reasoning is sound, and the truth of his belief that the name begins with a B is connected with his reasoning. Does he know that the name begins with a B?<br />
<br />
It may depend on how Claude assesses the risk that the capital has moved. If he thinks there is a pretty good chance that the capital has moved, that supports a claim to knowledge, because it gives significance to the reasoning that Berlin would be the most likely new location. In giving significance to that reasoning, it also gives significance to what it is that actually makes the belief true. Claude reduces his reliance on the false lemma (that Bonn is the capital), and increases his reliance on the true lemma (that Berlin is the capital). If, on the other hand, Claude does not think it at all likely that the capital has moved, then his conscious reliance is on the false lemma, and the reasoning about the move is merely a precaution, upon which he does not expect to rely. We might then wish to deny Claude knowledge.<br />
<br />
There is one more twist. Suppose that Claude thinks it is quite likely that the capital has moved, and that inclines us to allow him knowledge that the name begins with a B. That puts pressure on the quality of his reasoning that if the capital has moved, it has moved to Berlin. He cannot be sure of this. So unless he has good reason to think that any move would be to Berlin, with no other city in serious contention, we might not allow him to know that the name began with a B after all.<br />
<br />
<h2 style="text-align: left;">Phenomenal consciousness (22 May 2013)</h2><br />
An e-mail has just invited me to consider whether phenomenal consciousness is a causally inert epiphenomenon, and a pointless by-product of brain activity. It is the sort of question that can be transformed into a different question.<br />
<br />
First, to what does the claim amount?<br />
<br />
It is not a claim about self-awareness of the type that underpins an agent's knowledge that the actions that she can make happen just by thinking, are the actions that will take place in her immediate vicinity, and are performed by the entity that must be kept fed in order for her thought processes to continue. That sort of self-consciousness can be characterized as having a grasp of what Perry called the essential indexical. It is the sort of self-consciousness that lies at the heart of Lucy O'Brien's argument in her book, Self-Knowing Agents. It is something that must be represented in our brain cells, one way or another, in order for us to function as we do.<br />
<br />
Rather, the claim is about how things feel to us, the qualia (or apparent qualia, if one thinks that there are no qualia) that we have, but that zombies would not have. So the second part of the claim, that phenomenal consciousness is a pointless by-product of brain activity, is a claim that zombies could do just as well as we do. It might happen to be that any creature that functioned as we function could not be a zombie, for example, because we would need to wire up the cells in certain ways which would inevitably produce phenomenal consciousness as a by-product. But even if that were so, the second part of the claim could still be made. It would merely need to be worded as "Disregarding the practicalities of making zombies, they could do just as well as we do".<br />
<br />
If, however, one were able to substantiate the second part of the claim, in the form of a claim that zombies could do just as well as we do, that would not mean that we would have established the first part of the claim. (It may be that the second part of the claim could be taken in some other form, such that substantiating it would substantiate the first part of the claim, but it is not obvious how to re-cast the second part in order to give this result.)<br />
<br />
We would not be able to move straight from the second part to the first part, because phenomenal consciousness might be causally effective in human beings, creatures who have it, even though an alternative way of achieving the same results that we achieve, an alternative that did not involve phenomenal consciousness, would be available. We should not suppose that the difference between a human being and a zombie is simply the presence or absence of phenomenal consciousness, with no other differences. And if there are other differences, such that we would not get from a zombie to a human being simply by adding phenomenal consciousness, those differences could build in a causal role for phenomenal consciousness.<br />
<br />
We need to say something about how phenomenal consciousness could have a causal role. It could be said that only fundamental particles and forces have causal roles. But that would be a very narrow way of speaking. We are more inclined to say that macroscopic objects also have causal roles. Now suppose that a subject has his fingers on some buttons, which carry modest electrical charges that produce a pleasant tingling sensation, but which are also getting steadily hotter. Suppose also that a particular configuration of brain cells corresponded to a particular feeling in the subject, and could not be picked out in any other way than by saying "this is the configuration when the subject feels heat at his fingertips", and that an analysis of his brain processes showed that the configuration led to conscious thoughts about when it would become sensible for him to withdraw his fingers and forgo the pleasant tingling sensation. Then it would not be obviously inappropriate to say that the feeling played a causal role, any more than it is inappropriate to say that a ball that rolls off a table and onto the floor plays a causal role in making a noise, rather than saying that the particles of which the ball is composed play causal roles in disturbing particles in the atmosphere. Berent Enç's thoughts on causation and conditionals, in his book How We Act, are relevant here.<br />
<br />
It would not be obviously inappropriate to speak in that way, but it might still be inappropriate. What might make it appropriate, or inappropriate?<br />
<br />
A claim of appropriateness would best be sustained by a claim that states of phenomenal consciousness were on a par with macroscopic physical objects. There is a sense in which both are not really there, but are mere causally inert epiphenomena of the fundamental particles and forces. If they were causally inert in the same sense, that would strengthen the position of those who said that it was appropriate to regard states of phenomenal consciousness as more than epiphenomena, to the extent that it was appropriate to regard macroscopic physical objects as more than epiphenomena.<br />
<br />
So the new question, into which we can transform the original question, is this: Are states of phenomenal consciousness epiphenomenal on the fundamental particles and forces, in the same way as macroscopic physical objects?<br />
<br />
One reason to say that states of phenomenal consciousness and macroscopic physical objects were epiphenomenal in different ways, would be that laws of the same general type, physical laws, give us a grip on the behaviour of particles, and on the behaviour of macroscopic physical objects. It is a debatable claim, but not a crazy claim, that we could derive the whole of chemistry and biology from physics, too. (This is an upwards claim, different from the claim that we could reduce biology and chemistry downwards to physics.) We could claim that any emergence would not obstruct such an upwards derivation.<br />
<br />
But then, could one also argue that no emergence would get in the way of an upwards derivation of facts about states of phenomenal consciousness? If it would not, and if this upwards derivation were feasible, we would have failed to separate off consciousness as something different from biology in a relevant way.<br />
<br />
One sign that there might be an obstacle to such an upwards derivation is that descriptions of states of phenomenal consciousness are readily appreciated by us, but they might not mean anything to, for example, Martians, whereas human biology would be perfectly meaningful to Martians.<br />
<br />
Another sign is that we do not yet have much idea of what such an upwards derivation would look like, whereas we have a pretty good conception of upwards derivations of chemistry, and of biology, from physics. But it would be unwise to assume that this is how things will rest. Our knowledge of the brain has advanced enormously over the last 20 years. We do not know how much we will learn in the next 20 years.<br />
<br />
Finally, we must consider the objection that if we did see how facts about states of phenomenal consciousness were to be derived, the descriptions of the derived objects would not look like descriptions of states of phenomenal consciousness. They would be descriptions of brain states. The descriptions would not glow with the feelings that we have, when our brains are in those states. But it is only Martians who definitely could not see the descriptions as glowing with those feelings. If we were to follow a Churchland-type programme of reform of our manner of speaking, so that we started to speak in terms of brain states, and if we associated descriptions of brain states with inner feelings, the descriptions might well come to glow with the corresponding feelings, even when we read them in neurology textbooks.<br />
<br />
<h2 style="text-align: left;">Wings of Desire (7 May 2013)</h2><br />
Every so often in the film Wings of Desire (alternatively entitled Der Himmel über Berlin), directed by Wim Wenders and released in 1987, we hear extracts from the Lied vom Kindsein, by Peter Handke. The text is here:<br />
<br />
<a href="http://www.wim-wenders.com/movies/movies_spec/wingsofdesire/wod-song-of-childhood-german.htm">http://www.wim-wenders.com/movies/movies_spec/wingsofdesire/wod-song-of-childhood-german.htm</a><br />
<br />
and an English translation is here:<br />
<br />
<a href="http://www.wim-wenders.com/movies/movies_spec/wingsofdesire/wod-song-of-childhood.htm">http://www.wim-wenders.com/movies/movies_spec/wingsofdesire/wod-song-of-childhood.htm</a><br />
<br />
Plenty of lines in the Lied illustrate the claim that philosophy begins in wonder (Plato, Theaetetus 155d; Aristotle, Metaphysics 982b11-13): for example, "Warum bin ich ich und warum nicht du?" ("Why am I I, and why not you?"); and "Wann begann die Zeit und wo endet der Raum?" ("When did time begin, and where does space end?").<br />
<br />
It would be an interesting exercise to answer all of the questions that the Lied poses, and an equally interesting one to make all the connections that could be made between the text of the Lied and contemporary philosophy. For example, "Wie kann es sein, daß ich, der ich bin, bevor ich wurde, nicht war, und daß einmal ich, der ich bin, nicht mehr der ich bin, sein werde?" ("How can it be that I, who am I, was not before I came to be, and that one day I, who am I, will no longer be the one I am?"), invites us to think about the conditions under which indexical and non-indexical terms can refer, and about the peculiar effects of using non-indexical terms to refer to oneself (with echoes both of Moore's Paradox and of Perry's essential indexical), as well as inviting us to think about coming to be and ceasing to be.<br />
<br />
I shall here take a look at these words: "Ist das Leben unter der Sonne nicht bloß ein Traum? Ist was ich sehe und höre und rieche nicht bloß der Schein einer Welt vor der Welt?" ("Is life under the sun not just a dream? Is what I see and hear and smell not just the semblance of a world before the world?").<br />
<br />
They have a particular relevance in the context of the film, in which angels see the world and the people in it only in black and white, and they cannot intervene causally to change people's lives, but on the other hand, they can listen to people's thoughts. But let us allow ourselves to go beyond that context, and ask what connection there may be between the idea that life is merely a dream, and the idea that we may perceive only an appearance, rather than the world.<br />
<br />
I shall take the former idea to be that life is but a fleeting set of impressions, which if properly understood, could not be taken to be of any importance. The sentiment here is Pindar's, that a person is but a dream of a shadow (skias onar: Pythian Ode 8, line 95), ignoring the comfort that the following two lines give, with their reference to the effects of Zeus's favour. If we take the idea that life is a dream to concern importance, that idea is kept securely, and interestingly, separate from the idea that we do not perceive the real world. If we took the former idea to concern the process that produced our perceptions, then there would be a risk that the two ideas could come together, especially if it were possible to have dreams that were reliably veridical, for example by virtue of a mechanism of pre-established harmony, or by virtue of some unknowable that produced both the real world and our perceptions in parallel.<br />
<br />
I shall take the latter idea to be that we perceive an appearance of a world, where that perceived world is distinct from the real world, and stands between us and the real world. The presence of the additional world, required by the text ("der Schein einer Welt vor der Welt", the semblance of a world before the world, not "der Schein der Welt vor der Welt", the semblance of the world before the world), might be thought to steer us towards the sort of picture that would be given by a two-world interpretation of Kant, but we must allow for two non-Kantian elements. The first is that there is no requirement to regard the perceived world as empirically real. The second is the possibility (but not the certainty) that the perceived world is straightforwardly caused by the real world. The talk of our perceiving an appearance of the additional world also gives scope to introduce sense-data, or some such intermediary, but that is not the main point. The essential thing is the additional world.<br />
<br />
First, suppose that life was a dream, in the sense of unimportance that I have just given. Could it be that we would nonetheless perceive the real world, and do so accurately - or, alternatively, perceive an intermediate world that was like the real world in its details, so that we at least had an accurate representation of the real world, and arguably did (indirectly) perceive the real world?<br />
<br />
We could, so long as we could not act in the real world. That is, we would need to be in the impotent position of the angels in Wings of Desire, or of the deceased in Sartre's Les jeux sont faits. That would be both necessary and sufficient to make our perceptions of no importance, even though what was perceived or at least represented to us, the real world, was of importance, and even though the perceptions of people who could act in the real world, perceptions that might be qualitatively identical to our own, would be important.<br />
<br />
If we could act to change the real world, perceptions that conveyed the state of the real world would matter, no matter how convoluted the process by which they conveyed that state. Perceptions of a world that was not the real world, and that did not convey the state of the real world, would not matter. They would be like the dreams that we, real agents in the world, in fact have when we sleep. Those actual dreams do not matter, except perhaps in indirect ways that do not rely on the accuracy with which they represent the real world.<br />
<br />
Now suppose that life was not a dream, in the sense that our perceptions did matter. Would that impose any restrictions on the perceptual process?<br />
<br />
We would need to be able to act in the real world, in order for our perceptions to be important. But our perceptions would also need to be useful guides to action. They would not need to give us wholly accurate information about the real world, but they would need to be such that paying attention to them led to action that was, on the whole, more appropriate than our actions would be if we did not pay attention to them. They could, however, fulfil that condition, whether we perceived the real world or some other, intermediate, world.<br />
<h2 style="text-align: left;">Manipulating the distribution of lifespans (21 April 2013)</h2><br />
Suppose that you have the power to alter people's lifespans, but only on a statistical basis, not person by person. Perhaps you can do this through genetic engineering, or through putting something in the water supply. (As is normal in philosophical thought experiments, we shall not worry about practicalities.) Furthermore, you cannot ask people's permission before acting, and you are the only person in the world who can take this action.<br />
<br />
To be precise, you can take, or refrain from taking, action that will have the following effects. The effects are only on offer as a single package: you cannot pick and choose selected effects.<br />
<br />
A. The mean of the distribution of life expectancies for the affected group will rise.<br />
<br />
B. The dispersion of that distribution will fall, but the overall shape of the distribution will remain unchanged. If, for example, the distribution was normal, it will remain normal, but with a lower standard deviation. The fall in the dispersion will be great enough to ensure that despite the increase in the mean, there is some age, above which the high-lifespan tail of the new distribution will fall below the high-lifespan tail of the old distribution. (Both distributions are to be given in terms of proportions of the populations to which they apply, so as to avoid issues about the effects of a higher mean on the size of the population.)<br />
<br />
C. The mean of the distribution of years of ill health or disability at the end of life, and the overall shape and the dispersion of that distribution, will remain unchanged. Note that this effect is given in terms of years, not in terms of proportion of life. So the number of years of ill health or disability remains the same, even if lifespan increases.<br />
<br />
D. Any correlation between position in the distribution of lifespans, and position in the distribution of years of ill health or disability, remains unchanged.<br />
<br />
Thus, average prospects will improve: longer life, with no more years of ill health or disability at the end of life. But there is a tail of the distribution of lifespans, in which prospects are made worse by the change.<br />
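The package of effects A and B can be made concrete with hypothetical numbers. The means (80 and 83 years) and standard deviations (10 and 6 years) below are my own illustrative assumptions, not part of the thought experiment; what matters is only that for two such normal distributions there is an age above which the new upper tail falls below the old one.<br />

```python
# Illustration of effects A and B with invented numbers: the mean rises
# (80 -> 83 years) while the standard deviation falls (10 -> 6 years).
from math import erf, sqrt

def survival(age, mean, sd):
    """Proportion of a normally distributed population living beyond `age`."""
    return 0.5 * (1.0 - erf((age - mean) / (sd * sqrt(2.0))))

old = dict(mean=80.0, sd=10.0)
new = dict(mean=83.0, sd=6.0)

# The upper tails cross where the two z-scores agree:
# (x - 83)/6 = (x - 80)/10  =>  x = 87.5 on these numbers.
crossing = (old["sd"] * new["mean"] - new["sd"] * old["mean"]) / (old["sd"] - new["sd"])

# Below the crossing age, a larger proportion survives under the new distribution;
# above it, a smaller proportion does - the worsened high-lifespan tail.
better_at_85 = survival(85, **new) > survival(85, **old)
worse_at_95 = survival(95, **new) < survival(95, **old)
```

So, on these numbers, more people live past 85 after the change, but fewer live past 95: exactly the trade-off that questions 1 to 3 below ask about.<br />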
<br />
Now we come to the questions.<br />
<br />
1. Should you take this action, if it is to apply to all people already born or currently in the womb, but not to anyone else?<br />
<br />
2. Should you take this action, if it is to apply to all people who have not yet been conceived, but not to anyone else?<br />
<br />
3. Should you take this action, if it is to apply to all people already born, currently in the womb, or not yet conceived?<br />
<br />
Two points should be noted, before we consider answers.<br />
<br />
The first point is that you are confronted with a single choice: take the action, in accordance with the terms of whichever one of 1., 2., and 3. is in force, or do not take the action at all. You are told which of the three is in force. You do not get to choose which one is in force.<br />
<br />
The second point is that in drawing a line at conception, I do not mean to take a stance on whether embryos are people. I simply want to recognize the difference between entities that already have a determined genetic endowment, and potential entities that do not. The significance of genetic endowment is that it is likely to have quite a lot to do with prospects for lifespan and for infirmity in old age.<br />
<br />
The case against taking action in 1. looks quite strong. Given that genetic endowment and lifestyle to date have a substantial influence on lifespan, there would be a group of people, those with good genetic endowments and healthy lifestyles, constituting a modest proportion of the population. We would be aware of the criteria for membership of that group, and we could say that taking the action would shift the odds against its members to an extent that one would not regard as trivial. Kantian objections to the use of people as means to other people's ends come to mind. The other people in question would be the people outside the group.<br />
<br />
The fact that the members of the group would be people who were already well off, in that they would be people at the high end of the distribution of lifespans, hardly seems to be a satisfactory response. Imposing reductions in lifespans seems to be rather more fundamental than imposing high tax rates on high incomes.<br />
<br />
My initial feeling is that the obvious utilitarian case for taking action in 1. is outweighed by considerations such as these.<br />
<br />
One objection to this view would be that a decision not to take the action would itself be a decision to act: that is, that refraining from action would not be abstention, but a positive choice for the other side, and that one would be just as responsible for that as one would be for a choice to take the action, and responsible in the same way. Then a choice not to act would amount to deliberately disadvantaging those who were not in the group with long life expectancies. I shall not explore this view here, but it is a view that could be argued.<br />
<br />
The case against taking action in 2. looks much weaker. People as yet unconceived have neither any particular genetic endowment, nor any particular lifestyle. It is tempting to say that there would be, in the future population, some members of a specifiable group of people (the group of people with good genes and healthy lifestyles), who would be worse off than they would have been, had the action not been taken. But that objection would not be rightly phrased. The word "they" would have no referent. The complaint that some members of a specifiable group would be worse off, once re-phrased to remove this difficulty, can amount to no more than a complaint that the distribution of lifespans, for the whole population, would be different. Given that, the utilitarian case seems to be a good one. In 2., a choice to take the action would be the right choice.<br />
<br />
It might seem that the word "they" would have a referent, at least in the next few generations, if there was a strong hereditary element to longevity. It could refer to the descendants of people who were currently alive and who had good genes. But I doubt that this would be enough to create a referent. Since we are concerned with people not yet conceived, there would only be potential descendants. Any particular person currently alive and with good genes might not have any descendants, even though it would be very probable that some people or other, drawn from the set of those currently alive and with good genes, would have descendants.<br />
<br />
There is a parallel with John Rawls's veil of ignorance. It is a veil of total ignorance. The people behind it have no particular characteristics, until they are dropped into the society that they have designed. So they can only sensibly think about overall distributions. There is a sense in which they cannot take up arms on behalf of a particular group on the basis that some arrangement would do an injustice to its members, even though criteria for membership of the group may be perfectly clear, because no-one has, at the time of design, characteristics that would determine whether he or she was a member. "Its members" lacks a referent. To make the parallel closer, we can imagine Rawls's deliberators thinking about possible changes to distributions that already apply in an actual society, a society which all the designers will join as a complete replacement population, in roles and with characteristics that will be allocated at random, once all existing members of the society have died.<br />
<br />
I shall not reach any conclusion on the third possibility, applying the change to current and future people. The key question is this. If the future population will be, in total over the centuries, very much larger than the current population, can the utilitarian case for action outweigh the case against action that was set out in relation to 1.?<br />
<br />
There is one more complication. Is it in the interests of a person to have descendants who enjoy long lives? If it is, then that would influence one's thoughts in relation to 3. Currently living people with good genes might lose some of their own lifespans. But suppose that the beneficial influence of good genes on lifespan was not passed down the generations to any significant extent, so that, for example, the great-grandchild of someone with genes that substantially improved prospects had no better chance than the average in the population of having genes that conferred such good prospects. Then the good genes of the current generation would not tend to keep a significant proportion of their descendants in the group that would potentially lose from the action, and the members of the current generation with good genes might have self-regarding reason to favour the action.