Research
In philosophie Magazin (Germany)
Is AI an Author? Text generation alone isn’t enough – AI systems lack the mental states needed for authorship, contends Daniel Bracker.
Is ChatGPT an author? Given its capacity to generate something that reads like human-written text in response to prompts, it might seem natural to ascribe authorship to ChatGPT. However, by scrutinizing the normative aspects of authorship, we argue that ChatGPT is not an author. ChatGPT fails to meet the criteria of authorship because it lacks the ability to perform illocutionary speech acts such as promising or asserting, lacks the fitting mental states like knowledge, belief, or intention, and can provide testimony only with many qualifications. Three perspectives are compared: liberalism (which ascribes authorship to ChatGPT), conservatism (which denies ChatGPT’s authorship for normative and metaphysical reasons), and moderatism (which treats ChatGPT as if it possesses authorship without committing to the existence of mental states like knowledge, belief, or intention). We conclude that conservatism provides a more nuanced understanding of authorship in AI than liberalism and moderatism, without denying the significant potential, influence, or utility of AI technologies such as ChatGPT.
The development of Artificial Intelligence (AI) has given rise to numerous philosophical questions and debates on the limits and possibilities of computational systems. In his book, “The Myth of Artificial Intelligence,” AI scientist Erik J. Larson critically examines these issues, focusing on the capabilities of AI in the domain of natural language processing and its relevance to the Turing Test. This paper offers a comprehensive analysis of Larson’s arguments and perspectives on AI, with a particular emphasis on the concept of inference as the foundation of intelligent systems.
Larson highlights the importance of understanding the types of inference present in advanced AI systems from companies such as Google, Twitter, Amazon, and Netflix. Drawing from the extensive history of intellectual thought on inference, he argues that induction alone cannot provide general intelligence. Furthermore, he contends that computational systems are limited in their ability to reproduce abductive inferences and must rely on induction and deduction to create hybrid systems.
The paper examines Larson’s disjunctive conclusion that the development of truly intelligent AI systems either requires a miracle or is fundamentally impossible. It explores his assertion that there are inherent differences between minds and machines, emphasizing the idea that artifacts, including computers, cannot exhibit personhood.
This book chapter is on the tension between two epistemic goods. The first is the good of having at one’s disposal accurate information, quickly, easily, and cast in a handy format. If you want to know what a rational number is, or what to do if your child has smallpox, or how to get to Paris from where you are as quickly as possible, or what Lavoisier’s law is—and suppose there is a certain urgency in knowing this—then it is certainly good and appropriate to Google it. You get answers quickly, easily, and cast in a handy format. You might wonder, of course, whether the answers are accurate. But let us suppose, for the sake of the argument, that Google, or any other AI device that you are using, is indeed accurate in its deliverances. And let us suppose, moreover, that you needn’t even type your question on a keyboard, but that an AI device is inserted into your brain such that when you merely ask yourself in thought “What is a rational number?”, the AI provides the accurate answer.[1] That situation, and your plight in it, is, from an epistemic point of view, you might think, very good.
But there is another epistemic good that is in tension with it, viz. the good of finding things out for yourself, the good of ‘doing your own research’—the good of autonomous thinking. To illustrate that autonomous thinking is a good, compare two ways of filling out a crossword puzzle. You can do it the traditional way: think for yourself, use your own creativity, do your own research. But you can also do it by looking up the answers in the published filled-out version, and just copying the answers into your empty version. The traditional way has a value that copying lacks. As Ernest Sosa comments, “Attaining the truth by just copying the right answers is not in the right spirit. Rather, your aim must be not just success but firsthand success.”[2] If you have that device implanted in your brain that provides you with accurate answers to any question that you happen to think of, isn’t that just a variant way of ‘copying’, and hence without much value, as it is not a case of firsthand success?
It is this tension that this paper studies. We start with a discussion of the notion of epistemic autonomy, and take as our point of departure Immanuel Kant’s famous essay “An Answer to the Question: ‘What is Enlightenment?’”[3], in which he urges: “Have courage to use your own reason!” This raises two initial questions: first, what is it “to use your own reason”, and second, why should one have that courage? What is so good about using your own reason?[4]
[1] This is not just a thought experiment. Elon Musk’s Neuralink aims to create a human-AI symbiosis. See Carter 2022, 1-8 for more on so-called Brain-Computer Interfaces (BCIs) and the challenge they pose for the analysis of knowledge.
[2] See Sosa 2021, 13.
[3] We use Lewis Beck’s translation.
[4] And there is a third question, which we won’t address: is ‘using your own reason’ a virtue, or is it a command?
AI judges pass sentences, AI police predict crime, and ChatGPT writes term papers and emails. Whether or not the reader assesses this as morally permissible, it is to be expected that the future will see many more instances of such “cognitive outsourcing”. We will see more of it because cognitive outsourcing to AI has value for us: it saves time and promises to be less prone to human error.
However, analytic philosophers have recently reminded us of the importance of another value: epistemic autonomy – the value of “thinking for yourself”. There is value in “figuring things out on your own”, in writing an email or term paper yourself. This foundational research in philosophy, however, has not been systematically applied to AI. The central research question of this project is therefore: how much value should we place on epistemic autonomy in the age of AI, and how can we design AI that promotes this value (assuming it is important)?
This paper brings together two fields of research – the philosophy of epistemic autonomy and research on AI. It also shifts the focus in the philosophy of AI from its ethical aspects to its epistemic aspects.