DANIEL BRACKER

Research

Done
Authorship and ChatGPT

Is ChatGPT an author? Given its capacity to generate something that reads like human-written text in response to prompts, it might seem natural to ascribe authorship to ChatGPT. However, by scrutinizing the normative aspects of authorship, we argue that ChatGPT is not an author. ChatGPT fails to meet the criteria of authorship because it lacks the ability to perform illocutionary speech acts such as promising or asserting, lacks fitting mental states such as knowledge, belief, or intention, and can provide testimony only with many qualifications. Three perspectives are compared: liberalism (which ascribes authorship to ChatGPT), conservatism (which denies ChatGPT’s authorship for normative and metaphysical reasons), and moderatism (which treats ChatGPT as if it possesses authorship without committing to the existence of mental states like knowledge, belief, or intention). We conclude that conservatism provides a more nuanced understanding of authorship in AI than liberalism and moderatism, without denying the significant potential, influence, or utility of AI technologies such as ChatGPT.

In Progress
Artificial Intelligence and Epistemic Autonomy

AI judges pass sentences; AI police predict crime; ChatGPT writes term papers and emails. Whether or not the reader judges this to be morally permissible, we can expect the future to hold many more instances of such “cognitive outsourcing”. We will see more of it because cognitive outsourcing to AI has value for us: it saves time and promises to be less prone to human error.

However, analytic philosophers have recently reminded us of the importance of another value: epistemic autonomy – the value of “thinking for yourself”. There is value in “figuring things out on your own”, in writing an email or term paper yourself. This foundational research in philosophy, however, has not yet been systematically applied to AI. Thus, the central research question of this project is this: How much value should we place on epistemic autonomy in the age of AI, and how can we design AI that promotes this value (assuming it is important)?

This paper brings together two fields of research – the philosophy of epistemic autonomy and research on AI. It also shifts the focus in the philosophy of AI from the ethical to the epistemic aspects of AI.

The Influence of AI on Ignorance: Exploring Algorithmic Bias in Automatic Gender Recognition and Predictive Policing

Artificial Intelligence (AI) has revolutionized various aspects of modern life, with applications ranging from predictive policing to automatic gender recognition and search engine algorithms. However, the increasing reliance on AI-driven solutions raises concerns about the potential perpetuation of ignorance, especially in relation to discriminatory and exclusionary practices. In this paper, I examine the influence of AI on ignorance, concentrating on two particular algorithms: automatic gender recognition and PredPol (predictive policing). Drawing on the work of Mills and Alcoff, I explore how these algorithms might reinforce gender and racial power dynamics.

The paper begins with a concise presentation of the problem of algorithmic bias as found in the existing literature. Following that, I investigate how ignorance operates as a substantive practice that reinforces gender and racial power relations. Subsequently, I analyze the two algorithms mentioned above, examining their potential to exacerbate the discrimination and exclusion described by Mills and Alcoff. Lastly, the paper discusses the potential of algorithms to counteract these patterns of discrimination and exclusion. I argue that for AI to genuinely alleviate these issues, it must transcend mere statistical frequency, which often mirrors biases in the input data; a simple simulation of this dynamic is sketched below. Furthermore, I highlight ongoing efforts within the AI community to tackle these challenges and provide examples of initiatives that aspire to develop more equitable and inclusive AI systems.
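To illustrate why mere statistical frequency can entrench rather than correct bias, here is a minimal, hypothetical Python sketch (the numbers are illustrative assumptions, not drawn from PredPol or any deployed system): two districts have identical true crime rates, but one starts with more recorded incidents simply because it was patrolled more heavily, and patrols are then reallocated each round in proportion to the records.

```python
import random

random.seed(0)

TRUE_CRIME_RATE = 0.1      # identical underlying rate in both districts
TOTAL_PATROLS = 100        # patrols allocated per round
CHECKS_PER_PATROL = 5      # opportunities to record an incident per patrol

# Historical records are skewed toward district A purely by past policing.
recorded = {"A": 60, "B": 40}

for rnd in range(10):
    total = sum(recorded.values())
    # Patrols follow recorded (not true) crime frequencies.
    patrols = {d: TOTAL_PATROLS * recorded[d] / total for d in recorded}
    for district, n_patrols in patrols.items():
        checks = int(n_patrols * CHECKS_PER_PATROL)
        # Each check records an incident with the same true probability.
        recorded[district] += sum(
            random.random() < TRUE_CRIME_RATE for _ in range(checks)
        )
    share_a = recorded["A"] / sum(recorded.values())
    print(f"round {rnd + 1}: district A's share of records = {share_a:.2f}")

# Despite equal true crime rates, district A's share hovers around its
# initial 0.60 instead of correcting toward 0.50: the recorded frequencies
# mirror the bias in the historical input data, not the world.
```

Under these assumptions the initial skew never self-corrects; work on runaway feedback loops in predictive policing suggests that under stronger feedback it can even grow.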

‘The Myth of Artificial Intelligence’: A Critical Examination of Erik Larson’s Arguments and Perspectives on the Limits and Possibilities of AI

The development of Artificial Intelligence (AI) has given rise to numerous philosophical questions and debates on the limits and possibilities of computational systems. In his book, “The Myth of Artificial Intelligence,” AI scientist Erik J. Larson critically examines these issues, focusing on the capabilities of AI in the domain of natural language processing and its relevance to the Turing Test. This paper offers a comprehensive analysis of Larson’s arguments and perspectives on AI, with a particular emphasis on the concept of inference as the foundation of intelligent systems.

Larson highlights the importance of understanding the types of inference present in advanced AI systems from companies such as Google, Twitter, Amazon, and Netflix. Drawing from the extensive history of intellectual thought on inference, he argues that induction alone cannot yield general intelligence. Furthermore, he contends that computational systems are limited in their ability to reproduce abductive inference and must instead rely on induction and deduction, combined into hybrid systems.

The paper examines Larson’s disjunctive conclusion that the development of truly intelligent AI systems either requires a miracle or is fundamentally impossible. It explores his assertion that there are inherent differences between minds and machines, emphasizing the idea that artifacts, including computers, cannot exhibit personhood.