Events
John Krakauer presented research on the science of happiness and depression. Using smartphone-based data collection and neuroimaging, his team explored how daily experiences shape overall well-being. We discussed links between happiness, brain activity, and dopamine levels. He also shared insights on major depression, using computational models to analyze decision-making and the mood-behavior relationship in depressed individuals. The talk highlighted the importance of happiness as a societal metric and offered new perspectives on understanding and potentially treating depression.
It was fun to guest-teach Performing Arts Strategies for the Future in the New School's Master's program in Arts Management and Entrepreneurship.
In my talk at AGORA, I explore the implications of AI for our ability to think for ourselves, drawing insights from philosophers like Kant, Kierkegaard, and Wittgenstein. I discuss both the potential benefits and downsides of AI, including how tech companies can erode our epistemic autonomy, as well as the risks and uncertainties surrounding the development of artificial general intelligence (AGI).
The dialogue on the philosophy of AI will be enriched by the contributions of invited speakers from diverse academic backgrounds, including Joanna Bryson from the Hertie School in Berlin, Herman Cappelen from the University of Hong Kong, Marta Halina from the University of Cambridge, and Sven Nyholm from LMU Munich.
At a time when the value and nature of the Humanities are being questioned, this seminar will address questions about the role of the Humanities within the University. The seminar will be structured around discussion of chapters from a forthcoming book on the Philosophy of the Humanities, alongside talks by scholars on topics of their own that fall under the broad theme of "Philosophy in the Humanities".
Is ChatGPT an author? Given its capacity to generate something that reads like human-written text in response to prompts, it might seem natural to ascribe authorship to ChatGPT. However, by scrutinizing the normative aspects of authorship, we argue that ChatGPT is not an author. ChatGPT fails to meet the criteria of authorship because it lacks the ability to perform illocutionary speech acts such as promising or asserting, lacks fitting mental states like knowledge, belief, or intention, and can provide testimony only with many qualifications. We compare three perspectives: liberalism (which ascribes authorship to ChatGPT), conservatism (which denies ChatGPT's authorship for normative and metaphysical reasons), and moderatism (which treats ChatGPT as if it possesses authorship without committing to the existence of mental states like knowledge, belief, or intention). We conclude that conservatism provides a more nuanced understanding of authorship in AI than liberalism and moderatism, without denying the significant potential, influence, or utility of AI technologies such as ChatGPT.
The international workshop brought together philosophers, sociologists, and lawyers from around the world and encompassed phenomena such as extremism, conspiracy theorizing, and terrorism.
In the first part of the workshop we discussed the weighty question of responsibility. We deliberated on who should be held accountable for extreme beliefs and behaviors. Should it be the individual, the community, or should we look to broader structural elements of society?
The second part expanded on the types of responsibility associated with extreme beliefs. We explored the intertwining threads of legal, moral, and epistemic responsibility. The dynamics of these responsibilities, their intersections, overlaps, and the occasional friction between them made for a thought-provoking discourse.
The third part focused on the appropriateness of responsibility attributions. We considered when to assign or withhold responsibility for extreme beliefs, bearing in mind factors that might excuse or exempt an individual or group.
Virtue Epistemology is deeply connected with my topic, "Artificial Intelligence and the value of epistemic autonomy." At the conference we discussed intellectual virtues, with virtue reliabilists emphasizing natural cognitive capacities like memory and perception, and virtue responsibilists focusing on a good epistemic life or character. By understanding these different approaches to intellectual virtues, we can better navigate the challenges and opportunities presented by artificial intelligence and emerging technologies.
It was great to connect with fellow philosophers specializing in philosophy of technology and science. Together, we delved into the latest advancements in philosophy of technoscience, emphasizing modern and continental perspectives. Stimulating discussions on topics such as Artificial Intelligence, Synthetic Cells, and the Anthropocene cultivated a welcoming and intellectually invigorating atmosphere.
At the OZSW Conference 2023, I gave a talk on "The Importance of Epistemic Autonomy in a World of Artificial Intelligence." This talk addressed the often overlooked impact of AI on epistemic autonomy, focusing on the potential implications of Brain-Computer Interface (BCI) technology, which may eventually enable information to be 'pre-loaded' into an individual's mind. I presented Adam Carter's account of epistemic autonomy and discussed the transition from a 'JTB + X' (Justified True Belief + X) to a 'JTAB + X' (Justified True Attitudinal Belief + X) template. By examining the relationship between epistemic autonomy and propositional knowledge, I argued that epistemic autonomy adds value to justified, true anti-Gettiered beliefs, emphasizing its significance in the age of AI.
It was enjoyable to connect with thought leaders and experts from all over the world, discussing the societal impacts of emerging technologies. We delved into a variety of cutting-edge and transformative technologies, such as artificial intelligence, robotics, neurotechnology, synthetic biology, 3D printing, and energy transition tech, among others.