1. The Techno-Optimist Manifesto
In 2023, Marc Andreessen published The Techno-Optimist Manifesto. There is a link at the end of this post.
The Manifesto makes the following claims.
Technology is the way forward, with the potential to improve our lives.
Resources may be scarce, but technology holds out the promise of unlimited growth. It will let us make more with less.
Free markets impose the best discipline. They encourage producers to make things better and cheaper. And they create the surplus wealth that makes social welfare systems possible.
There are great gains from having human and artificial intelligence working together.
Technology allows increased energy production, which is essential to economic progress, without undue environmental impact.
Technology brings down prices, so that low-income people can enjoy the same goodies as high-income people.
The extension of technology is a heroic and exciting enterprise.
The enemies include visions of a deliberately engineered utopia, and the precautionary principle.
The message of the Manifesto is overwhelmingly optimistic, but conditional on innovators being allowed to get on with their jobs as they see fit. State intervention to tell them how to do their jobs is not welcome, and intervention to stop them would be even worse.
Technology has indeed made a huge difference to our lives. One high point has been the management of large amounts of energy (steam power, internal combustion engines, nuclear power, electricity distribution networks, and so on). Another has been the management of information, from Jacquard cards to computers and the Internet. A third has been the management of chemistry, for example in the extraction of useful elements from the Earth and in the manufacture of plastics. A fourth has been the management of biology, whether in medicine or in the genetic engineering of crops. The aristocrats of past centuries about whom we learn when we tour stately homes may have had fine enough lives, at least until they got seriously ill. But the vast majority of the population had a far worse time of it than they do today.
There are however questions to consider. Here are some of them, and answers that may allow optimism to be sustained.
2. Will the trend continue?
We may start by asking whether the trend of advances improving lives will continue. We do not know what the next innovations will be. So we cannot tell either whether they will be of great benefit to humanity, or whether the right innovations will arrive in time to address pressing challenges we may face.
2.1 Benefit or harm?
We might reasonably suppose, on the basis of past experience, that technology would go well. There are no obvious refutations of a claim that benefits are likely to be considerable. There is only a valid point that considerable benefits are not certain. But there is also reason not to be confident that detrimental effects will be small or manageable. This is because recent innovations have brought ever more energy, information, chemistry and biology under management. There is therefore greater potential than in the past to do serious damage, either by the deliberate direction of energy, information, chemistry or biology to do harm, or by accident.
We should however distinguish between known and unknown unknowns. The known unknowns relate to technology that we can already introduce and technology that, while it is not yet practical, we can already adumbrate. This latter category would broadly cover the technology that we can imagine bringing within reach by the use of existing scientific knowledge or by filling in well-defined gaps in existing knowledge. The unknown unknowns relate to technology that we cannot yet describe in any scientific detail, although we might be able to imagine its capabilities in broad terms. Supersonic flying cars have now moved into the category of known unknowns. Implants to enable human telepathy are still in the category of unknown unknowns. And then there are the totally unknown unknowns, where we cannot imagine the capabilities of technologies even in broad terms but they will nonetheless become feasible eventually.
In fact there is a scale with fine gradations, from the technology we could introduce simply by investing resources to mass produce it, through the technology we can imagine with high scientific precision, then the technology we can only imagine in vaguer terms, up to the wholly unimaginable but in fact feasible in the far future.
The location of a technology on this scale affects the confidence we may have in any judgement as to its potential for harmful effects. The closer to the known end a technology is, the greater our confidence may be. We should however never have complete confidence in such a judgement. The future is never perfectly predictable.
There is a specific reason for optimism in the face of uncertainty about the future. Lack of knowledge about the potential effects of a new technology is likely to reflect a lack of specific detail as to how it would work. We may reasonably hope that the less detail we have, the further away introduction of the technology will be and the longer we will have to work out what precautions might be needed. As introduction of a technology comes closer to being feasible, there should be more detail that we could use to improve our predictions.
We may however wonder how much confidence to repose in this optimistic thought. It is possible for technology, including some potentially highly destructive technology, to develop very fast. In addition, both commercial and government developers may be inclined to keep their work secret in order to prevent others from catching up and taking away their commercial or military advantage. Secrecy would hamper the assessment of risks and the development of precautions.
2.2 Will developments arrive in time?
Humanity always faces challenges. It does not make much sense to complain that technical solutions may not arrive quickly to address existing problems. If they do not, that is not an objection to the message of the Manifesto. And letting technology advance is the best hope of hastening the arrival of solutions.
It is more of an objection to point out that challenges brought about by new technology can arise, and to be concerned as to whether technology will also address them quickly. Challenges can arise because of the direct side-effects of technology. They can also arise because of more indirect effects, as when technology increases lifespans so that there are new pressures both on total population and on the ratio of active to inactive people.
The fact that technology cannot be relied upon to address reasonably quickly the challenges that it itself creates, although it will sometimes do so, should temper the optimism of the Manifesto. But we should make a broad utilitarian calculation. If technology creates new challenges for which there are no quick solutions, we should weigh in the balance the problems that the new technology does solve. That benefit may substantially outweigh the disadvantage of the new challenges, at least unless those new challenges are severe and will not be addressed fast enough to avoid significant damage.
3. Should governments regulate?
3.1 For and against
3.1.1 Utilitarian arguments
It is easy to make a plausible case for government regulation of new technology. We cannot know in advance what all the consequences of innovation will be, and we may very well not even have the data to estimate either the sizes or the probabilities of undesirable effects. Several such concerns are outlined in Hossenfelder, "Our Enemy is the Ivory Tower" American Tech Billionaire Says. That video also points out that knowledge has to precede technology, so knowledge is the true source of progress.
We should however hesitate to endorse regulation.
One reason to hesitate, which applies in all contexts, is that governments typically show considerable enthusiasm for regulation and downplay its adverse effects. Politicians have an urge to be seen to be doing something, and officials are happy to play along because regulations that must be administered secure their continued employment.
Another reason to hesitate is specific to the regulation of new technology. Technology can bring great benefits. If governments slow its introduction, people who would benefit from it for large parts of their lives may only benefit for smaller parts of their lives. To that extent the total happiness of those who live at any time from the present onward, and their average happiness, will be diminished. The effects over the entire future of humanity might be tiny proportions of total happiness and average happiness, because new technology would only be delayed for a few years. But if one were to concentrate on people now alive, or on them plus those who will be born during the lifetimes of people now alive, the effects would be proportionately much greater. Not only would it be natural to concentrate on that select group, as the people to whom we can relate. It would also be defensible in utilitarian terms, because the far future is so hard to predict that bringing it into utilitarian calculations amounts to mere guesswork.
Moreover, given that each advance enables new advances, the loss to people alive now or to be born during the lifetimes of people now alive could be even greater than would appear from looking at each technology separately. A delay in one technology would delay the development of another. And this would be so even if regulation did not prevent scientific research, but only widespread implementation. Widespread implementation may be needed both to fund future development, and to identify the gaps in benefit that could usefully be filled by the next round of development.
We must also consider the point that given the unknown consequences of new technology, there is a good chance that governments will regulate in unhelpful ways. They suffer from the same ignorance as everyone else. And in addition, government decision makers often lack both technical expertise and business sense. It is not surprising that they often fall back on the precautionary principle, that things ought to be forbidden whenever there might be adverse consequences. And this principle has been much criticised as a shackle on progress.
This is not to say that there should definitely be no regulation. One of the features of some recent technological developments is that they could easily have very large adverse consequences across the whole planet. The greater the extent of our control over energy, information, chemistry and biology, the more damage we could do at one fell swoop.
3.1.2 An argument of principle
As well as utilitarian arguments for and against regulation, we should consider an argument of principle. This is the argument that government intervention is inherently objectionable and must be thoroughly justified. The starting point of this argument is that we are free human beings who may choose to establish governments because we find them helpful, but who thereby become citizens and not subjects. We surrender some of our liberties because it is worthwhile to do so, but we retain the rest, which were ours to begin with and do not become gifts from the state. So governments should not interfere with our lives beyond their proper remit, and there is at least a rebuttable presumption against regulation.
Applying this argument to the regulation of new technology, we might say that not regulating is in principle preferable to any degree of regulation, and that less regulation is in principle preferable to more regulation. One could fill out the argument by reference to a specific right to use one's talents and labour. In outline, the argument would be that if someone has an idea for a new technology, he or she should be free to make use of the idea. And this could be so whether or not there was any system of intellectual property rights to allow exclusivity of exploitation.
3.2 The pointlessness of control
3.2.1 Difficulties in exercising government control
Governments could not always regulate the spread of new technology, even if it were good that they should do so. We can easily identify some problems they would face.
One problem would arise for governments of separate countries. Technology is transmitted in three forms: ideas for how to do things, software (including both programs and data), and physical objects. The flow of the third can be regulated, but not the flow of the first two. Nor can use of the first two be regulated without the heaviest of surveillance of everyday activity - and even then, some users will manage to hide from the surveillance. So when it comes to ideas and software, individual governments can do very little once technologies become available somewhere in the world. And if they do find ways to block ideas and software at their borders, people will find ways round those blocks. They may for example use encryption or virtual private networks.
Another problem would arise both for governments of separate countries and for any hypothetical world government. Some innovations are easily introduced, without any need for rare resources, once the ideas are available. Software is like this. So the innovations are likely to be used by lots of people in many locations. Stopping a few people might be possible. Stopping many people would not be. And with anything in the software field, there are plenty of methods to hide what one is doing from the authorities.
These two problems would not arise with every new technology. Some things require physical objects, sometimes large ones. Some, such as new ways to generate electricity, require changes to existing infrastructure. And some, such as medical interventions, require both the participation of skilled professionals whom governments could identify reasonably easily, and the use of substantial laboratories or care facilities. But the two problems would arise with quite a lot of new technology. And even when physical objects were involved, it might be difficult to maintain boundaries between countries. Small items can be smuggled, and genetically modified seeds and animals can stray across frontiers.
Finally, there is the concern that blocking use of some new technology in one country while it is introduced in others may be economically detrimental. It may for example lead to other countries destroying the market for important exports from the country that introduces a block, because they have a better product or a more efficient way to make the same product.
3.2.2 International agreements
We have noted that having a world government would remove some but not all of the obstacles to the regulation of innovation. There is no realistic prospect of a world government being introduced in the near future. But international agreements represent a step in that direction. Do they give a route to control?
Alas for those who favour regulation, they cannot be relied upon to do the job. They are often vague in their content. This can be because vagueness is necessary in order for large numbers of countries to agree. And in the specific context of new technology, it can also be because the precise nature of future technology is unknown. So rules cannot be set out with sufficient precision even to come close to regulating all and only the members of a sensible set of targets. Moreover, not all countries will sign any given agreement, and those which do not sign will make up their own regulations or have no regulations at all. Then the difficulties of preventing the spread of technologies from countries that allow them to countries that do not allow them will arise again.
3.2.3 "Ought" and "can"
We may conclude that for some technologies, whatever the case for control may be, control cannot be applied. There are things that people might argue governments should do, but they simply cannot do them. "Ought" may not imply "can".
Having said that, there might be a case for introducing ineffectual controls. They can be a political statement of the kind of society that a government sees as desirable. They can also create an atmosphere of caution or abstention. This would however be a very weak case for ineffectual controls. Such controls could equally have the effect of making the state into a laughing stock.
3.3 Utopia
3.3.1 The danger of an ideal
The Manifesto comes out against Utopias. One might wonder why. Is it not sensible to want to make things as good as possible? There are however reasons why opposition to Utopian visions can make sense.
One reason is that there is no such thing as a perfect world. At least, there is no perfect world that human beings can create. There will always be scope to improve. If we ever thought we had reached perfection, we would rest on our laurels and forgo further improvement. And if we decided in advance what would constitute Utopia, we would be limited by what we could currently imagine. That would be likely to fall far short of what would in fact be possible, because new possibilities would emerge as advances currently beyond us were made.
Another reason to oppose Utopian visions is that if there is a predefined shared goal, someone will have to direct activity so as to achieve it. So central direction of economic activity would be needed. And ample experience shows that this would be disastrous.
What we can laud as an ideal it would be sensible to have would be a world in which the direction was upward, a world of continual improvement. But we should not try to define in advance what improvements there should be, save in the short term when we have well-defined problems to solve such as the security and privacy of information stored online, or a medical condition without an available cure.
3.3.2 Sub-optimal courses of development
We should also be aware that it is possible for us accidentally to follow a path of improvement that would trap us in a sub-optimal corner of the space of possibilities, requiring us to backtrack. But such risks are inherent in any progress into the unknown, and are not to be avoided by any kind of central planning or by an absurd extension of the precautionary principle to not doing things that would be good because they might not be the best.
We may note an interesting way in which sub-optimality might arise. This is a less than ideal order of work. Suppose that two developments, called B and C, are already envisaged in sufficient detail for it to be possible to start work, and that we could work on either of them first. If we worked on development B first and then development C, we might miss out on an opportunity to have done better by starting with C. This could happen because C might have opened up the way to a better form of B, or to an alternative to B that would have been preferable.
While there are unavoidable risks of sub-optimality, we can console ourselves with the thought that losses would probably be for the short term only, with the option to backtrack as necessary. And there would be no reason to let governments regulate in an attempt to avoid sub-optimal paths of development, because they would be as much in the dark as to the best choices as anyone else.
3.4 Accelerationism
3.4.1 The push for progress
There is a trend in modern thought that goes by the name of accelerationism. The broad idea is deliberately to push for progress to happen faster and faster. Various political beliefs have clustered around this idea, some rather strange and some incompatible with others. They include a belief that faster progress is a good way to bring down existing political and social structures, with a view to creating a new and allegedly better world.
The Manifesto explicitly endorses accelerationism as a general concept, although fortunately there is no commitment to strange political beliefs associated with it. The endorsement gives rise to the question of whether governments should force the pace of progress. The political beliefs give rise to the question of whether progress itself carries a risk of destruction.
3.4.2 Should governments force the pace?
Should governments force the pace of progress, rather than merely not regulate and avoid creating any other barriers to progress?
Forcing the pace of progress would be unwise. It would require applying pressure in particular ways, encouraging some forms of progress more than others. Even if the forms were only defined in very broad terms, there would be an influence of political priorities in an arena in which decisions should be taken by businesses and private individuals. There would also be a degree of selection of likely winners, and politicians are very bad at that.
3.4.3 Political beliefs and the risk of destruction
There are political beliefs sometimes associated with accelerationism which see rapid progress as a way to break down existing institutions. Such beliefs are often rather silly. Deliberately breaking down institutions is not an intelligent thing to do unless one has a well-tested and widely acceptable set of institutions with which to replace them, along with the means to ensure a smooth transition.
However, even if we simply dismiss such political beliefs, we are prompted to ask whether progress itself can be too destructive. If the people with the political beliefs see rapid progress as a way to destroy institutions, maybe institutions and social structures could also be destroyed by accident. And maybe governments should at least intervene to reduce the risk of this.
We can see some ways in which excessive destruction might happen. Developments that quickly eliminated many jobs, at times when the redundant workers had little opportunity to find alternative employment, could lead to a degree of social breakdown. This reflects the fact that work is the leading way we have to distribute income. To take another example, developments that introduced new ways to undertake everyday transactions and to make social connections, which some people adopted quickly and others refrained from using, might create patterns of exclusion. These and similar social strains might in turn lead to a widespread, although probably not universal, loss of confidence in existing political institutions.
These risks are not to be dismissed. We should face them. But it is unlikely that they mean we should put brakes on the introduction of new technology. The losses from doing so could easily be far greater than the gains from avoiding difficulties. It is likely that a better approach would be to assess the risks and see what measures might be held in readiness to counter adverse effects that actually arose. For widespread redundancy, this might be retraining programmes. For social exclusion, it might be steps to preserve old methods of transaction and interaction. And so on.
3.5 Security concerns
3.5.1 National security
There is one specific area in which it may seem that state intervention would be entirely justified, and perhaps essential. This is national security.
There are valid concerns about new technology that would gather and store data on people, on economic activity, or on infrastructure, and that might then transmit the data to potentially hostile foreign powers. New technology might also hand remote control of processes to foreign powers.
In such cases, there would be no point in trying to stop the development of the technology. If a hostile power saw the advantages of using it, that power would develop it. One could only try to limit its deployment.
In the public sector, this would in principle be straightforward. One would simply insist that all technologies be analysed for such risks before buying them. It might however be difficult in practice, because objectionable pathways of data transmission and of remote control might be well hidden.
In private sector businesses, it would be harder to control deployment. One can try to regulate private sector procurement, but there might not be the resources to analyse all potentially objectionable technology. And if a supplier were condemned, some other supplier might replace it, repackaging a product with the same objectionable qualities. Moreover, it would be economically counter-productive for the state to regulate heavily the use of all technology within a broad range when only a small proportion of its uses might in fact be objectionable. One might therefore have to take the risk, and it could be a considerable one: if tools on which businesses relied included ways for hostile powers to turn them off remotely, a hostile power could cause serious disruption by turning off banking services, supermarket logistics, or water and sewage services.
Among the general public, control over deployment would be harder still. If people like a product, and it can be obtained without bringing physical goods across a boundary where there are controls (typically a national boundary), they will get it. Blocks to the sharing of software can generally be circumvented. So technology that will result in the sharing of data on individuals or in remote interference with their devices can easily be deployed, and there is not much that governments can do about it. Fortunately, the scope to harm national security in this way is limited.
There are two general points to draw out of what we have just said. The first is that controls might reasonably be desired, but they may be hard to impose. The second is that if one were to take a strictly utilitarian approach, one would proportion attempts to control deployment to the risks involved. The risks are bound to be high in some parts of the public sector, most obviously the military, security services, and some parts of the police. They can be substantial in some other parts of the public sector and in parts of the private sector that are critical to the smooth running of society. Risks at the societal level are much lower in the generality of businesses and among individuals.
Fortunately, ease of controlling deployment of technologies is positively correlated with level of risk at the societal level, although the correlation is far from perfect.
We should also note that none of the concerns here would make much of an argument against development of the technologies in question. Those technologies may bring considerable benefits. If they would also bring risks, the point at which the risks arose would be deployment. And while control at that point may be difficult, it would in any case be the only option given that development can happen anywhere in the world.
3.5.2 Personal data and the state
A specific concern with much new technology is that it allows the collection and analysis of large amounts of personal data. Sometimes this will be done by private enterprise, and sometimes by the state.
The obstruction of technology that allowed the collection of data would be a bad move. If a service provider, whether a business or, for example, a state health service, has information on the people with whom it interacts, it can provide a better service. The sensible response is therefore likely to be laws that give people rights over the use of their data.
Here the state is a particular concern. It is unlikely to make laws against itself. It may even make laws to frustrate attempts to keep data private, for example when the UK government issues secret orders to technology companies to undermine the encryption that they offer to their customers.
State monitoring may help to fight crime, but it has adverse effects too. If people think they are being constantly watched, they will self-censor. Even if current governments are unlikely to intervene against opinions they dislike, so that there is no immediate reason for concern, data will be stored up indefinitely, and future governments might not be so benign. Moreover, state surveillance would make it very difficult for a whistle-blowing civil servant, aware of wrongdoing, to pass information safely to journalists, so the level of corruption would be very likely to rise.
Given that government regulation of technology is useless in the area of state data collection, our best hope to control the adverse effects of state monitoring that relies on new technology is more new technology. Examples are encryption systems that are developed by companies or communities immune to orders from particular governments to build in backdoors, and software such as virtual private networks to hide web browsing.
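The point about encryption beyond the reach of government orders can be made concrete. The sketch below is illustrative only, and assumes Python with the widely used open-source cryptography package (it is not something discussed in the Manifesto itself). It shows how little code is needed to encrypt a message under a key that only the communicating parties hold: no order to a platform operator can recover such a message without the key.

```python
# A minimal sketch of strong symmetric encryption, assuming the
# open-source "cryptography" package (pip install cryptography).
# Anyone can run this; the key never needs to touch a third party.
from cryptography.fernet import Fernet

# Generate a fresh random key (URL-safe base64-encoded).
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt: the resulting token is authenticated, so any tampering
# is detected when it is decrypted.
token = cipher.encrypt(b"meet at the usual place")

# Only a holder of the key can recover the plaintext.
assert cipher.decrypt(token) == b"meet at the usual place"
```

The significance is not this particular library but that such primitives are public knowledge, implemented in open-source code maintained by dispersed communities, and therefore hard for any single government to suppress or backdoor.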
3.5.3 Misinformation
The world wide web in general, and content generated by artificial intelligence in particular, have been seen as threats to the social and political environment by virtue of their scope to distribute false information in ways that make it easy for people to think it is true. Fiction and guesses, masquerading as established fact, can be disseminated very quickly. And deepfakes can be used to make it look as though respected authorities have made certain assertions when in fact they have not.
There are two reasons why this might be regarded as a security issue. The first is that people may be led to act in ways that are foolish and disruptive. The second is that the normal functioning of democracy may be undermined, because that functioning requires voters to have accurate information on issues facing the state and on political parties so that they can choose accordingly.
As already noted, there would be no prospect of controlling the development and the widespread use of such technologies. And there would be a strong case against attempts to control either development or deployment. On utilitarian grounds, such controls would limit the considerable benefits available from technologies like the web and artificial intelligence. On grounds of principle, the ability to exchange ideas is a basic liberty. The controls that would be necessary to prevent, for example, the generation of messages by artificial intelligence would greatly limit the exchange of ideas more widely.
One might make a case for control downstream, targeting particular messages or categories of message. But that would be outright censorship, to which there are ample objections. One could add that established political parties themselves happily mislead people and misrepresent issues as part of the routine political process.
4. Control by corporate interests
It is a common concern that new technologies which play a significant part in people's lives are controlled by corporations that are in turn under the control of their founders. These are seen as immensely wealthy people who may try both to frustrate government attempts at control and, more broadly, to foist their own political agendas on society.
It is not surprising that control by a few very wealthy people should be reasonably common in relation to new technologies. Specific people had to have the new ideas, then drive them forward to commercial reality. And there has not been time to diffuse control through the sale of equity in the relevant businesses to a broad range of shareholders.
We shall not spend time on moral objections to striking disparities of wealth. There is however one point about disparities that is particularly relevant to technological progress. This is that innovations which generate vast wealth for a few are also ones that confer great benefits on the many. Those benefits are why there are many customers, and hence vast wealth to be made. In consequence, the rewards to the innovators are a tiny fraction of the benefits to people generally. Looked at that way, large rewards to innovators may look entirely proportionate.
Should we be concerned that innovators may try to frustrate attempts at government control? Perhaps not, if we think that governments are too keen to control and too unaware of the damage that excessive control can do. It may well be that innovators who resist control provide a useful counterweight to the urges of governments. And to the extent that attempts to control are ineffective anyway, for the reasons we set out in section 3.2, there will be little loss from effective resistance to what governments may attempt.
There is however a way in which innovators may positively encourage government control. They may want measures to shut out competitors. They may for example seek to strengthen existing intellectual property protections, or they may seek new regulations that impose safety conditions which they happen to satisfy without any difficulty but which new competitors would find hard to satisfy. This is something that governments should of course resist.
Turning to rich people seeking to impose wide political agendas, this is not a problem specific to innovators. It just so happens that many of the most publicly recognised rich people have made their money in that way. So if controls are needed, they are not specifically for innovators, nor does innovation create a new need. Any controls would be a matter for the wider political system, and would raise general political questions such as those that have come up in relation to super-PACs in the United States.
It might still be thought that there should be a special concern because innovation can so easily give rise to many rich people. But the loss from attempts to hold back innovation would be so great that it would be folly to tackle the supposed political problem in that way. There is also the point that the more billionaires seek to influence the political process, the more likely they are to hold a range of political views and to work against one another to some extent, lessening their overall impact.
5. Value and fulfilment
The Manifesto portrays technological advance as an inherently noble pursuit. It is valuable and a source of human fulfilment. There is an implicit nod to some of Ayn Rand's characters, particularly Hank Rearden in Atlas Shrugged and Howard Roark in The Fountainhead, and a mention of John Galt from Atlas Shrugged.
5.1 Value
We could affirm the value of technological advance by appeal to long-standing approaches to ethics.
Most obviously, utilitarians would endorse actions that increased overall human happiness, and technology can certainly do that - although utilitarians would also want to avoid innovations that were on balance harmful.
Virtue ethicists could also endorse innovation because it is mostly a way to benefit humanity. And they could endorse work to advance technology, because it requires the exercise both of intellectual virtues such as sound reasoning, creativity and skill, and of other virtues such as determination.
So the view that technological advance is ethically valuable could enjoy widespread support.
One might hesitate here. People who drive our technology forward are easily viewed as selfish, out to make as much money as they can, and selfishness tends to attract ethical opprobrium. On the other hand, there are those (such as Ayn Rand) who argue that selfishness is a virtue, so long as one relies on the consent of others to make deals and does not coerce people into handing over their resources. If you set out to improve your own position by non-coercive means, the only ways to succeed are either to use resources that lie unused and unowned, or to offer to fulfil the needs of others so that they are willing to do business with you. Thus the pursuit of your own ends leads to the fulfilment of other people's ends.
5.2 Fulfilment
Whether work on technology is fulfilling depends on the individual. But it can very easily be fulfilling. It demands the exercise of intellect, and sometimes physical skill. And the benefits to humanity mean that what is done can be seen as worthwhile. The career of an engineer can be as fulfilling as the career of a poet.
This may however sometimes be hard to see. An engineer will typically work in a big team, and there may be many other people who would have done the same work. A poet, on the other hand, will typically work alone. And the poems written will be such as would not have been written by anyone else, even if plenty of other people would have written poems of equal merit. So if uniqueness of contribution matters to fulfilment, the poet will have the edge over the engineer. On the other hand, if consistent acknowledgement of merit that does not vary much with individual taste matters to fulfilment, the engineer will have the edge over the poet.
References
Andreessen, Marc. "The Techno-Optimist Manifesto." Online publication, 2023.
https://a16z.com/the-techno-optimist-manifesto/
Hossenfelder, Sabine. "'Our Enemy is the Ivory Tower' American Tech Billionaire Says." Video, 2024.
https://www.youtube.com/watch?v=kibFAk8HwS4