Good afternoon all! We at PEA Soup are happy to share another entry at Soup of the Day (formerly ‘The Pebble’). Today’s post is brought to you by Dr. Alexandros Koliousis (currently Senior Lecturer in Computer Science at New College of the Humanities, London and program director of NCH’s MSc in Artificial Intelligence with a Human Face) and Dr. Brian Ball (currently Head of Faculty, and Senior Lecturer in Philosophy at New College of the Humanities, London, and Associate Member of the Faculty of Philosophy at the University of Oxford). We here at PEA Soup thank Dr. Koliousis and Dr. Ball for choosing to post with us at PEA Soup of the Day. Here they are now:

The field of Artificial Intelligence (AI), according to Russell and Norvig (2010), is an engineering discipline – one that aims to build certain artifacts. Specifically, AI looks to construct rational agents: pieces of hardware or software capable of performing tasks for which intelligence is required; agents, we would add, which must contribute practically to (often collaborative) human projects. If this is right, we argue, it is time to train philosopher engineers – people able to put their understanding of theoretical principles in, for example, ethics, epistemology, and the philosophy of mind into practice in the development of AI systems.

In reality, the interaction between philosophy and AI is, or ought to be, a two-way street. As Boden (2018) points out, AI has scientific as well as engineering goals: in other words, it aims to yield an understanding of the operations of intelligence. Indeed, years ago, Dennett (1978) aptly described AI “as Philosophy and as Psychology” – a field that answers the (transcendental) question of how it is possible to perform a given task requiring intelligence, without presupposing that a solution captures the (empirically verifiable) way humans or other animals actually do so. This, in turn, suggests another direction of potentially fruitful influence. Insofar as philosophers (and cognitive scientists) understand naturally occurring intelligent systems, they can suggest ways in which artifacts might be built to perform certain tasks. After all, whatever is actual is certainly possible!

Dennett also noted that, computers being rather unforgiving machines, philosophers’ armchair speculations can benefit from the intellectual rigor required to build computer models that succeed in carrying out a given task – a point which has been made recently, in a different context, by Mayo-Wilson and Zollman (2020) in support of the use of computer simulations in philosophical research. (Assumptions need to be made explicit – and precise!) Indeed, we might think of the construction of computational models as a method by which philosophy students and researchers alike can engage in active, experiential learning – putting their theories to the test (of possibility at least, if not actuality).

In short, the potential benefits of collaboration between computer scientists and philosophers are numerous, and the need for interdisciplinary training is real.

Past, present and future

The term “Artificial Intelligence” was coined at the 1956 Dartmouth Conference by mathematicians interested in computing. But the field is arguably older, and more interdisciplinary, than this key episode suggests. Turing’s paper “Computing Machinery and Intelligence”, for instance, was published in 1950… in the philosophy journal Mind; and the theory of computation he developed (and on which AI is built and depends) emerged from 19th-century advances in formal logic that were due, in large part, to philosophers such as Frege. In fact, Russell and Norvig (2010) find another important – and much older – connection between AI and philosophy: Aristotle, they say, suggested an algorithm for action planning that “was implemented 2,300 years later by Newell and Simon in their [General Problem Solver, or] GPS program” (2010:7). Built in 1957, this regression planner was one of the paradigmatic early successes of AI.

Part of the role of philosophers in this area has been to spur innovation through critiques of AI. Indeed, at the AI@50 conference in 2006, Jim Moor identified three main objections: (i) an argument based on Gödel’s incompleteness theorems, developed by Penrose in the 1990s, but initially advanced in the 1960s by Lucas, an Oxford philosopher; (ii) Searle’s famous Chinese Room Argument; and (iii) Dreyfus’ Heideggerian critique of AI, based on the idea that intelligence is typically both embodied and embedded (or situated). Admittedly, the last of these, in its initial 1965 articulation, “Alchemy and Artificial Intelligence”, initiated an AI winter in the US (Wooldridge, 2020). (In the UK, this role was reserved for Lighthill, a mathematician whose 1973 report restricted government funding for AI research to just two universities.) But ultimately, even this criticism has resulted in improved AI systems, techniques, and methods (Dreyfus, 2007). And recently, there have been more sympathetic criticisms of AI from philosophers and cognitive scientists alike – for instance, Cantwell Smith (2019) and Marcus and Davis (2019) have both taken a consciously constructive approach to the articulation of their objections to AI as currently practiced.

There is now a pressing need for interdisciplinary collaboration in this area. AI has recently been monetized, through the application (over the past decade or so) of deep learning techniques to big data sets. This points in the first instance to a need for ethical input in AI: where there is money to be made, there are likely to be diverse interests to be navigated; and who better to help with this than those philosophically trained in ethical thinking? Moreover, this ethical input should occur from the beginning – it should not be a mere afterthought. What’s needed, in other words, is a value sensitive design that engages human values at every stage of building AI (Friedman & Hendry, 2019). Of course, for this to be possible, ethicists must also understand crucial technical aspects of the problems they are engaging with, and of the process of engineering solutions for them.

The monetization of AI also points to the need for appropriate theoretical understanding of e.g. which techniques perform well on which sorts of task. In short, it requires an understanding of how intelligence works. And philosophical insights are likely to prove helpful: otherwise, equipped with “hammers” (e.g. those of machine learning algorithms operating on big data sets), engineers may see every problem as a nail; yet those with philosophical training may draw attention to the fact that the problem at hand involves a screw, and needs to be worked on with a different tool (e.g. an expert system, or other piece of symbolic AI). Such insights can obviously help with advancing the understanding of the solution to the problem – and with the financial bottom line, if resources are directed appropriately.

Interactions

Philosophical engagement with AI can be difficult now, primarily due to the sheer scale of modern AI systems. When AI engineers build a system, they ride a big wave, in which more data, more parameters, and more compute result in more accurate solutions for a given task (Kaplan et al., 2020). Anyone without an abundance of data and computing resources can only observe the high-water mark left by that wave – what level of performance an AI system has achieved, but not how the solution was encoded. Even if scale (i.e. a “bigger brain”) is all that’s needed for a truly intelligent artifact, contemporary AI is often opaque; and accordingly, it remains insulated from external criticisms and contributions.
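For readers who want to see what “riding the wave” amounts to quantitatively, the sketch below (in Python) illustrates the kind of power-law relationship Kaplan et al. (2020) report between test loss and model or dataset size. The constants are only roughly those from the paper and should be treated as illustrative, not as a reproduction of their fits.

```python
# Illustrative power-law scaling of test loss with model and dataset size,
# in the spirit of Kaplan et al. (2020). Constants are approximate and
# included only to make the shape of the relationship concrete.

def loss_vs_params(n_params, n_c=8.8e13, alpha_n=0.076):
    """Predicted test loss as a function of (non-embedding) parameter count."""
    return (n_c / n_params) ** alpha_n

def loss_vs_tokens(n_tokens, d_c=5.4e13, alpha_d=0.095):
    """Predicted test loss as a function of dataset size, in tokens."""
    return (d_c / n_tokens) ** alpha_d

if __name__ == "__main__":
    for n in (1e8, 1e9, 1e10, 1e11):
        print(f"{n:.0e} parameters -> predicted loss ~ {loss_vs_params(n):.2f}")
```

The practical upshot is the one noted above: to a first approximation, each further improvement is a matter of buying more data and compute – resources that outside critics rarely have.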

It is time for AI engineers to pay less attention to marginal improvements and start engaging with philosophers, so that the contributions of the latter are embedded in the design process. This is a balancing act. On the one hand, we need engineers who can explain simply, yet clearly and precisely, what is technically important (e.g. the role of convolution in image classification (He et al., 2016) and of self-attention in language understanding (Vaswani et al., 2017)). On the other hand, those with philosophical training need to know and understand key details of how computer and data scientists approach AI tasks, so that they can contribute in maximally helpful ways (e.g. during the training and evaluation of a model). Specifically, there are three broad stages in the design process of an AI system at which thoughtful reflection must be embedded: deployment planning (what is the system’s intended use?); objective setting (what should the system be trained to do?); and training (what input data should be used, and how?).
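As a flavor of the sort of explanation we have in mind, here is a minimal NumPy sketch of scaled dot-product self-attention, the core operation behind Vaswani et al. (2017). It is a single “head” with randomly initialized projections, no masking and no training – purely an illustration of the mechanism, not an implementation of a real system.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q, w_k, w_v: (d_model, d_k) projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v            # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])        # how strongly each token attends to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over each row
    return weights @ v                             # outputs are weighted mixtures of the value vectors

# Toy usage: a "sentence" of 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # (4, 8)
```

The take-away a philosopher engineer needs is modest but precise: each output is a weighted average of value vectors, with the weights determined by how strongly one token’s query matches every other token’s key.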

We typically praise AI systems for their super-human performance on certain tasks, or “benchmarks”, performed in a narrow, controlled environment, while little is known about their validity for deployment in a more general, “real-world” environment – one that is often malleable, changing through the actions of the system or of others, and in which the effects of any social and societal bias can be much more dire. The problem is then perpetuated (and possibly exacerbated) when, in an effort to build more advanced user services, engineers deploy complex, pre-trained systems as part of their processing pipelines without first analyzing the ecological validity of such an integration – an analysis for which our philosopher engineers, trained in complex, situated ethical decision making, are well suited.
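The gap between benchmark and deployment performance can be made vivid with a toy sketch. In the hypothetical example below, a simple classifier is evaluated both on held-out data from its training (“benchmark”) distribution and on data from a “deployed” environment in which the underlying decision rule has drifted; all data is synthetic and the setup is ours, invented purely for illustration.

```python
# Toy illustration of benchmark vs. "real-world" performance under drift.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_data(n, threshold=0.0):
    """Two features; the true decision rule compares their sum to a threshold."""
    x = rng.normal(size=(n, 2))
    y = (x[:, 0] + x[:, 1] > threshold).astype(int)
    return x, y

X_train, y_train = make_data(2000)                 # training ("benchmark") environment
X_bench, y_bench = make_data(1000)                 # held-out test set, same environment
X_world, y_world = make_data(1000, threshold=1.0)  # deployed environment has drifted

model = LogisticRegression().fit(X_train, y_train)
print("benchmark accuracy:", round(model.score(X_bench, y_bench), 2))   # high
print("deployment accuracy:", round(model.score(X_world, y_world), 2))  # noticeably lower
```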

Moreover, when a machine learns to perform a specific task, it must have some level of inductive bias in order to perform well “in the wild” (that is, on previously unseen inputs). By unveiling the learning process (e.g. by identifying the discriminative input features that favor one action over another), philosopher engineers can discover whether this inductive bias also results in some form of societal bias, and in inappropriate actions. But such bias may equally be the result of a kind of human error. This will be so if the learning objective (set by humans) is itself inherently biased, perhaps in some non-obvious manner. After all, a machine learns by minimizing a loss function (or maximizing a reward function) that measures how well or badly it acts, as determined by the learning objective. But it has no moral compass; nor does it have any awareness that its actions can be explicitly linked with information external to the system, in order to make further inferences – for good or ill. For example, an AI system may be trained to predict eligibility for a loan by excluding race from its input features and relying instead on data about postcodes. Yet such a system might nevertheless be socially unjust, given that its predictions can be linked with area demographics. If so, it might be best to set an alternative objective for such a crucial element of financial decision making.
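The loan example can be made concrete with a small, entirely synthetic sketch: the protected attribute is never shown to the model, yet its decisions still track that attribute through the postcode proxy. The variable names and data-generating assumptions below are ours, invented for illustration.

```python
# Hypothetical illustration of proxy bias: the model never sees the protected
# attribute, but postcode is correlated with it, so its decisions track it anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, size=n)             # protected attribute (never given to the model)
postcode = group + rng.normal(0, 0.3, size=n)  # postcode strongly correlated with group
income = rng.normal(50 - 10 * group, 5, size=n)
approved = (income + rng.normal(0, 5, size=n) > 45).astype(int)  # historical decisions

X = np.column_stack([postcode, income])        # the protected attribute is deliberately excluded
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

# Approval rates still differ sharply by group, even though group was never an input.
for g in (0, 1):
    print(f"group {g}: approval rate {pred[group == g].mean():.2f}")
```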

Finally, deep learning systems will not necessarily develop discriminative features for categories of samples that are rare in the input data. The loss landscape is often a complex, high-dimensional space with many valleys and saddle points, and settling in a valley (that is, finding a minimum) is usually the result of overfitting to the most common input features (e.g. white women amongst images of mothers – who are not predominantly white, on a global basis). Working with a data scientist, the philosopher engineer can mitigate this effect qualitatively, e.g. by identifying underrepresented categories (e.g. black mothers) or anticipating shifts in input data distributions (e.g. due to demographic change), and even address it quantitatively, e.g. by equalizing or normalizing the representation of different categories in the data.
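As one concrete form the quantitative mitigation might take, the sketch below uses scikit-learn’s “balanced” class-weight heuristic to up-weight rare categories so that they contribute as much to the loss as common ones; the category labels and counts are invented for illustration.

```python
# Re-weighting training examples so under-represented categories are not
# drowned out by the most frequent one. Labels and counts are illustrative.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

labels = np.array(["white_mother"] * 900 + ["black_mother"] * 80 + ["asian_mother"] * 20)
classes = np.unique(labels)

# Weights inversely proportional to class frequency; rare classes get larger weights.
weights = compute_class_weight(class_weight="balanced", classes=classes, y=labels)
for c, w in zip(classes, weights):
    print(f"{c}: weight {w:.2f}")
```

Most scikit-learn classifiers accept such weights via a `class_weight` argument, and deep learning frameworks accept per-class weights in their loss functions, so that minimizing the loss no longer amounts to fitting only the most frequent category.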

In modern AI systems that work from raw input (i.e. no hand-crafted features), with generic architectures and objectives (cf. LeCun, 2020), to solve narrow AI tasks, the main source of error appears to be of human origin – from data collection to task specification. Much like machine learning itself, minimizing the effect of design errors will be an iterative process, and it should involve philosopher engineers in the loop.

Conclusion

Long ago, Plato recommended the abandonment of democracy and the institution of a Republic ruled by philosopher kings. We are more sanguine about the prospects for democracy (despite recent setbacks) – but we see a role (in fact, many) for the philosopher within the Corporation… and beyond. In the tech sector, the abilities to think and communicate clearly about AI are needed both inside research and development teams, and in other organizational departments that interact with them, and with clients. Of course, elsewhere in the private, public, and third sectors, AI is increasingly prevalent. And in government, regulatory oversight of AI must be informed by genuine understanding of the technology and its applications.
To fill these roles, philosophers will need to learn from, and in some cases become, the engineers. Students will need training, not only in thinking carefully about how to resolve ethical difficulties as they arise in practice, and in comparing artificially and naturally intelligent systems, but also in the technical aspects of computational data analysis and usage. This will help the advancement of science in this area, yielding a deeper understanding of intelligent action; and it will help to ensure the ethical development and deployment of AI.

References

  1. Boden, M. A. (2018). Artificial Intelligence: A Very Short Introduction. Oxford University Press.
  2. Dennett, D. C. (1978). Brainstorms: Philosophical Essays on Mind and Psychology. The MIT Press.
  3. Dreyfus, H. L. (2007). Why Heideggerian AI failed and how fixing it would require making it more Heideggerian. Philosophical Psychology, 20(2), 247–268.
  4. Friedman, B. & Hendry, D. G. (2019). Value Sensitive Design: Shaping Technology with Moral Imagination. The MIT Press.
  5. He, K., Zhang, X., Ren, S. & Sun, J. (2016). Deep Residual Learning for Image Recognition. In IEEE CVPR’16, pp. 770–778.
  6. Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B., Chess, B., Child, R., Gray, S., Radford, A., Wu, J. & Amodei, D. (2020). Scaling Laws for Neural Language Models. arXiv:2001.08361 [cs.LG].
  7. LeCun, Y. (2020, June 22). Tweet.
  8. Marcus, G. & Davis, E. (2019). Rebooting AI: Building Artificial Intelligence We Can Trust. Pantheon Books.
  9. Mayo-Wilson, C. & Zollman, K. J. (2020). The computational philosophy: Simulation as a core philosophical method. Preprint.
  10. Russell, S. & Norvig, P. (2010). Artificial Intelligence: A Modern Approach, 3rd Edition. Pearson.
  11. Cantwell Smith, B. (2019). The Promise of Artificial Intelligence: Reckoning and Judgment. The MIT Press.
  12. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł. & Polosukhin, I. (2017). Attention Is All You Need. In NeurIPS’17, pp. 6000–6010.
  13. Wooldridge, M. (2020). The Road to Conscious Machines: The Story of AI. Pelican Books.
