Forthcoming
Digital Development: Technology, Ethics and Governance
Infringements of AI on Epistemic Autonomy. A Graded Approach
2025
Authorship and ChatGPT
Is ChatGPT an author? Given its capacity to generate something that reads like human-written text in response to prompts, it might seem natural to ascribe authorship to ChatGPT. However, by scrutinizing the normative aspects of authorship, we argue that ChatGPT is not an author. ChatGPT fails to meet the criteria of authorship because it lacks the ability to perform illocutionary speech acts such as promising or asserting, lacks the fitting mental states like knowledge, belief, or intention, and only with many qualifications can provide testimony. Three perspectives are compared: liberalism (which ascribes authorship to ChatGPT), conservatism (which denies ChatGPT's authorship for normative and metaphysical reasons), and moderatism (which treats ChatGPT as if it possesses authorship without committing to the existence of mental states like knowledge, belief, or intention). We conclude that conservatism provides a more nuanced understanding of authorship in AI than liberalism and moderatism, without denying the significant potential, influence, or utility of AI technologies such as ChatGPT.
→ Link
2025
Vom Mut zu denken in Zeiten algorithmischer Verführung
"Computers can now see, think, and understand," promise Sam Altman and Jony Ive.
So why make the effort to think at all, if AI takes over our thinking?
→ Link
2024
Ist KI ein Autor?
Is authorship exhausted by the writing of texts? No, argues Daniel Bracker: AI systems lack a genuinely human capacity that would make them legitimate authors.
→ Link
2025
Tokyo Forum for Analytic Philosophy, Tokyo, Japan
Why is Epistemic Autonomy Valuable in the Age of AI?
1. Understanding vs. Information
2. Responsibility and Moral Agency
3. Resistance to Manipulation
4. Democratic Integrity
5. Intellectual Flourishing
So Why Not Just Let AI Think for Us?
—
June 27, 2025
Upcoming
International Conference on the Philosophy of Artificial Intelligence, Amsterdam
6th Conference on the Philosophy of Artificial Intelligence, an event designed to explore the philosophical implications and challenges presented by artificial intelligence:
AI and Authorship
AI and consciousness
The future of work in an AI-driven economy
The philosophical underpinnings of machine learning
Autonomy, responsibility, and AI
The nature of intelligence: Human vs. Artificial
Bias, fairness, and transparency in AI systems
AI and influence on belief, evidence, intellectual character
→ Link
—
October 23, 2025 – October 24, 2025
2025
Design for Human Autonomy, TU Delft
How can we ensure that we remain in control of the technology we create?
How can technology be designed and deployed in ways that support and enhance human autonomy, rather than diminish it?
Can technology enhance our autonomy or help us transcend our limitations?
Might there even be scenarios where reducing human autonomy is beneficial in light of our limitations?
When and why does technology pose a risk of manipulation?
→ Link
—
June 18, 2025
2025
XX International Conference on Political Philosophy, University of Barcelona
'Reevaluating epistemic autonomy in the digital age: from social media to generative AI'
—
March 20, 2025 – March 21, 2025
2025
Artificial Intelligence, Art, and the Future of Human Expression at The New School, New York
Teaching Philosophy of AI at The New School in New York offered an exploration of how artificial intelligence is reshaping our relationship with knowledge and creativity. Working with artists, musicians, and performers, we examined everything from ChatGPT and machine learning to epistemic autonomy and AI-mediated knowledge.
It was fascinating to see how artists approach AI differently from technologists. While tech discussions often focus on capabilities and limitations, these creators brought fresh perspectives on AI as a form of cultural technology and knowledge mediator. We explored metaphors, from "stochastic parrots" to "autocomplete on steroids," each revealing different aspects of how AI shapes our understanding and creative processes.
These discussions deeply connected with my current research on epistemic autonomy and AI authorship. As we increasingly rely on large language models and AI systems for knowledge and creative work, how do we maintain independent thinking and artistic agency? The insights from The New School's creative community added valuable perspectives to these philosophical questions about knowledge, creativity, and human-AI collaboration.
—
February 10, 2025 – February 14, 2025
2024
Yuval Noah Harari: NEXUS in TivoliVredenburg Utrecht, The Netherlands
"ChatGPT lied," Yuval Noah Harari declared during our recent conversation at TivoliVredenburg in Utrecht, where he presented the Dutch translation of his latest book "Nexus." His statement, while compelling, reveals a deeper philosophical confusion about the nature of deception and truth.
The accusation of "lying" presupposes consciousness, intent, and an understanding of truth, qualities that large language models fundamentally lack. These systems don't make truth claims in any meaningful sense; they generate probabilistic responses based on patterns in their training data. When a human lies, they consciously choose to deceive, knowing the difference between truth and falsehood. An AI system can produce incorrect information, but calling this a "lie" anthropomorphizes a statistical process.
During our discussion, I challenged Harari's framing. The real issue isn't that AI systems lie or tell the truth, but rather that they operate in a space entirely outside the traditional binary of truth and falsehood. They are sophisticated pattern matchers generating outputs that can appear convincing while being disconnected from any genuine understanding or truth-seeking behavior.
—
October 20, 2024
2024
A Computational and Neural Model for Mood Dynamics at Santa Fe Institute, New Mexico
John Krakauer presented research on the science of happiness and depression. Using smartphone-based data collection and neuroimaging, his team explored how daily experiences shape overall well-being. We discussed links between happiness, brain activity, and dopamine levels. He also shared insights on major depression, using computational models to analyze decision-making and the mood-behavior relationship in depressed individuals. The talk highlighted the importance of happiness as a societal metric and offered new perspectives on understanding and potentially treating depression.
—
August 6, 2024 – September 6, 2024
2024
“AI and Implications for the Future in the Arts” at The New School, New York City
Teaching Performing Arts Strategies for the Future for The New School's Master's in Arts Management and Entrepreneurship.
—
April 8, 2024
2023
PhAI 2023 “Philosophy of Artificial Intelligence Conference”
The philosophy of AI dialogue will be enriched by the contributions of invited speakers from diverse academic backgrounds, including Joanna Bryson from the Hertie School, Berlin, Herman Cappelen from the University of Hong Kong, Marta Halina from the University of Cambridge, and Sven Nyholm from LMU Munich.
—
December 15, 2023 – December 16, 2023
2023
Authorship and ChatGPT, Epistemology Research Centre Glasgow, Scotland
Is ChatGPT an author? Given its capacity to generate something that reads like human-written text in response to prompts, it might seem natural to ascribe authorship to ChatGPT. However, by scrutinizing the normative aspects of authorship, we argue that ChatGPT is not an author. ChatGPT fails to meet the criteria of authorship because it lacks the ability to perform illocutionary speech acts such as promising or asserting, lacks the fitting mental states like knowledge, belief, or intention, and only with many qualifications can provide testimony. Three perspectives are compared: liberalism (which ascribes authorship to ChatGPT), conservatism (which denies ChatGPT's authorship for normative and metaphysical reasons), and moderatism (which treats ChatGPT as if it possesses authorship without committing to the existence of mental states like knowledge, belief, or intention). We conclude that conservatism provides a more nuanced understanding of authorship in AI than liberalism and moderatism, without denying the significant potential, influence, or utility of AI technologies such as ChatGPT.
—
August 23, 2023
2023
Virtue Epistemology 2023 at Eindhoven University of Technology
Virtue Epistemology is deeply connected with my topic, "Artificial Intelligence and the value of epistemic autonomy." At the conference we discussed intellectual virtues, with virtue reliabilists emphasizing natural cognitive capacities like memory and perception, and virtue responsibilists focusing on a good epistemic life or character. By understanding these different approaches to intellectual virtues, we can better navigate the challenges and opportunities presented by artificial intelligence and emerging technologies.
—
April 17, 2023 – April 21, 2023
2023
Continental Philosophy of TechnoScience 2023
It was great to connect with fellow philosophers specializing in philosophy of technology and science. Together, we delved into the latest advancements in philosophy of technoscience, emphasizing modern and continental perspectives. Stimulating discussions on topics such as Artificial Intelligence, Synthetic Cells, and the Anthropocene cultivated a welcoming and intellectually invigorating atmosphere.
—
March 20, 2023 – March 31, 2023
Daniel Bracker
Department of Philosophy
Vrije Universiteit Amsterdam
De Boelelaan 1105
1081 HV
Amsterdam
The Netherlands
Speaking & Writing
Open to speaking invitations and writing opportunities.
Get in touch