
Academic Writings

Since the 1950s, philosophers and AI researchers have held that disambiguating natural language sentences depends on common sense. In 2012, the Winograd Schema Challenge was established to evaluate the common-sense reasoning abilities of a machine by testing its ability to disambiguate sentences. The designers argued that only a system capable of “thinking in the full-bodied sense” would be able to pass the test. By 2023, however, the original authors had conceded that the test had been soundly defeated by large language models, which still seem to lack common sense or full-bodied thinking. In this paper, we argue that disambiguating sentences only seemed like a good test of common sense on a certain picture of the relationship between linguistic comprehension and semantic knowledge—one typically associated with the early computational theory of mind and Symbolic AI. If this picture is rejected, as it is by most LLM researchers, then disambiguation ceases to look like a comprehensive test of common sense and instead appears only to test linguistic competence. The upshot is that any linguistic test, not just disambiguation, is unlikely to tell us much about common sense or genuine intelligence.

Written with Zed Adams. Inquiry

A central question for understanding social media platforms is whether the design of these systems is itself responsible for the harmful effects they have on society. Do these systems push users toward unhealthy forms of engagement? Is there something inherently toxic about the design that distorts who we are when we use it? In a recent paper, C. Thi Nguyen argues that the design of Twitter is responsible for many of its most toxic outcomes. Nguyen’s argument is based on an analogy between Twitter and games. He argues that Twitter’s game-like features encourage users to rack up Likes and Retweets rather than engage in the rich and subtle activity of communication. For Nguyen, this drive for high scores leads to many of the toxic effects of the platform. In this paper, we critique Nguyen’s argument. We contend that, in a crucial respect, Twitter is not game-like, and we show that the apparent plausibility of Nguyen’s argument rests upon overlooking this disanalogy. Moreover, drawing out how Nguyen’s analogy breaks down makes clear not just that his account fails to explain Twitter’s toxicity, but also that it actively occludes the design choices that have negative effects on its users.

In current histories, C.I. Lewis is credited with bringing the strict concept of qualia – concerned solely with sensory states – into contemporary philosophy. It is this strict notion that is then said to have introduced worries about inverted spectra, philosophical zombies, and the idea that we can individuate the senses introspectively. In this paper, I argue that this is a mistaken reading of Lewis and the history of qualia. I argue that the strict notion of qualia stems from Johannes Müller's mid-nineteenth-century work on individuating the senses. The structuralist psychologists who followed in his wake, in turn, developed an account on which the qualitative character of experience played no causal role. I also show that Lewis adamantly rejects this strict concept of qualia. He instead endorses a pragmatic conception of qualia, derived from William James, in which evaluative states – such as the painfulness of pain – play an essential, causal role in the life of the organism. The upshot is that Lewis positions himself against the strict conception of qualia, arguing that it is phenomenologically false and philosophically wrong-headed.

AI & Society

Recent advances in artificial intelligence, especially in large language models (LLMs), have increasingly shown glimpses of human-like intelligence. This has led to bold claims that these systems are no longer a mere “it” but now a “who,” a kind of person deserving respect. In this paper, I argue that this view depends on a Cartesian account of personhood, on which identifying someone as a person is based on their cognitive sophistication and ability to address common-sense reasoning problems. I contrast this with a social account of personhood, on which an agent is a person if they are autonomous, responsive to norms, and culpable for their actions. On this latter account, I show that LLMs are not person-like, as evidenced by their propensity for dishonesty, inconsistency, and offensiveness. Moreover, I argue that current LLMs, given the way they are designed and trained, cannot be persons—either social or Cartesian. The upshot is that contemporary LLMs are not, and never will be, persons.

Written with Mark Theunissen. Ethics and Information Technology, 2022

There is an ongoing debate about whether, and in what sense, machine learning systems used in the medical context need to be explainable. Those arguing in favor contend that these systems require post hoc explanations of each individual decision to increase trust and ensure accurate diagnoses. Those arguing against suggest that the high accuracy and reliability of these systems are sufficient for providing epistemically justified beliefs without the need to explain each individual decision. But, as we show, both solutions have limitations, and it is unclear whether either addresses the epistemic worries of the medical professionals using these systems. We argue that these systems do require an explanation, but an institutional explanation. These explanations provide the reasons why the medical professional should rely on the system in practice; that is, they aim to address the epistemic concerns of those using the system in specific contexts and on specific occasions. But ensuring that these institutional explanations are fit for purpose means ensuring that the institutions designing and deploying these systems are transparent about the assumptions baked into them. This requires coordination with experts and end-users concerning how the system will function in the field, the metrics used to evaluate its accuracy, and the procedures for auditing it to prevent biases and failures from going unaddressed. We contend that this broader explanation is necessary for either post hoc explanations or accuracy scores to be epistemically meaningful to the medical professional, making it possible for them to rely on these systems as effective and useful tools in their practices.

International Journal of Philosophical Studies, 2022

Despite its substantial influence, there is surprisingly little agreement about how to read C.I. Lewis’s Mind and the World Order. Lewis has historically been read as a reductionist attempting to ground knowledge in qualia, but more recently it has become fashionable to read him as a pragmatist engaged in a non-reductive, transcendental project. In this essay, I argue that while Lewis does have pragmatist leanings, the way he defines the fundamental categories of “the given,” “meaning,” and “concepts” ultimately commits him to a narrow reductionism. This is because he regards philosophy as an individualistic practice whose categories are verifiable only in introspection. I show how this commits Lewis to reductionism by focusing on his trouble grounding ethics and knowledge of other minds. The upshot is that while Lewis often leans towards pragmatism, his metaphysical commitments force him towards reductionism.

European Journal of Philosophy, 2022

A central issue in debates about Kant and nonconceptualism concerns the nature of intuition. There is sharp disagreement among Kant scholars about both whether, prior to conceptualization, mere intuition can be considered conscious and, if so, how determinate this consciousness is. In this article, I argue that Kant regards pre-synthesized intuition as conscious but indeterminate. To make this case, I contextualize Kant's position through the work of H.S. Reimarus, a predecessor who influenced Kant's views on animals, infants, and the role of attention. I use Reimarus to clarify Kant's otherwise ambiguous commitments on the determinacy of intuition in animals and newborns, and the role that attention, concepts, and judgment play in making intuited contents determinate. This contextualization helps to shed light on Kant's discussion of pre-synthesized intuition in the threefold synthesis of the A-Deduction by demonstrating that Kant's theory of mind in the deduction offers transcendental grounding for empirical accounts of infant development like those of Reimarus and Kant himself. The upshot is a Kant at odds with many recent interpretations of his theory of mind: pre-synthesized intuition is conscious but indeterminate.

Written with Zed Adams. Cultural History of Color in the Modern Age, 2021

The study of color expanded rapidly in the 20th century. With this expansion came fragmentation, as philosophers, physicists, physiologists, psychologists, and others explored the subject in vastly different ways. There are at least two ways in which the study of color became contentious. The first was with regard to the definitional question: what is color? The second was with regard to the location question: are colors inside the head or out in the world? In this chapter, we summarize the most prominent answers that color scientists and philosophers gave to the definitional and location questions in the 20th century. We identify some of the different points at which their work intersected, as well as the most prominent schisms between them. One overarching theme of the chapter is the surprising proliferation of different views on color. Whereas some assume that progress in science must take the form of convergence, the 20th-century history of color exhibited a marked divergence in views. The chapter leaves open the question of whether an ultimate unification of views is possible, or whether the only thing that ties together the study of “color” is the shared inheritance of a word.

History of Philosophy & Logical Analysis, 2021

Over the last thirty years, a group of philosophers associated with the University of Pittsburgh—Robert Brandom, James Conant, John Haugeland, and John McDowell—have developed a novel reading of Kant. Their interest turns on Kant’s problem of objective purport: how can my thoughts be about the world? This paper summarizes these four philosophers’ shared reading of Kant’s Transcendental Deduction and how it solves the problem of objective purport. But I also show that these philosophers radically diverge in how they view Kant’s relevance for contemporary philosophy. I highlight an important distinction between those who offer a quietist response to Kant, evident in Conant and McDowell, and those who offer a constructive response, evident in Brandom and Haugeland.

Kantian Review, 2021

Close attention to Kant’s comments on animal minds has resulted in radically different readings of key passages in Kant. A major disputed text for understanding Kant on animals is his criticism of G. F. Meier’s view in the 1762 False Subtlety of the Four Syllogistic Figures. In this article, I argue that Kant’s criticism of Meier should be read as an intervention into an ongoing debate between Meier and H. S. Reimarus on animal minds. Specifically, while broadly aligning himself with Reimarus, Kant distinguishes himself from both Meier and Reimarus on the role of judgement in human consciousness.

Written with Zed Adams. Journal of Consciousness Studies, 2020

The meta-problem of consciousness is the problem of explaining why we have problem intuitions about consciousness, that is, why we intuitively think that conscious experience cannot be scientifically explained. In his discussion of this problem, David Chalmers briefly considers the possibility of a 'genealogical' solution, according to which problem intuitions are 'accidents of cultural history' (2018, p. 33). Chalmers' response to this solution is largely dismissive. In this paper, we defend the viability of a genealogical solution. Our strategy is to focus on a particular problem intuition: the thought that the phenomenal character of colour experience is irreducibly subjective. We use the history of the inverted spectrum thought experiment as a window into how various philosophers have thought about colour experience. Our genealogy reveals that problem intuitions about colour are not timeless, but instead arise in a specific historical context, one that, in large part, explains why we have these intuitions.

dialectica, 2019

In Mind and World, John McDowell provided an influential account of how perceptual experience makes knowledge of the world possible. He recommended a view he called “conceptualism”, according to which concepts are intimately involved in perception and there is no non-conceptual content. In response to criticisms of this view (especially those from Charles Travis), McDowell has more recently proposed a revised account that distinguishes between two kinds of representation: the passive non-propositional contents of perceptual experience – what he now calls “intuitional content” – and the propositional contents of judgment – what he now calls “discursive content.” In this paper, I criticize McDowell's account of intuitional content. I argue that he equivocates between two different notions of intuitional content, which propose different, and incompatible, ways of understanding how a perceiver makes a judgment based on perceptual experience. This equivocation results from an underlying indeterminacy as to what, if anything, McDowell now means by “conceptual” when he claims that intuitional content is conceptual.

Edited with Zed Adams, with an introduction. MIT Press, 2017.

In his work, the philosopher John Haugeland (1945–2010) proposed a radical expansion of philosophy’s conceptual toolkit, calling for a wider range of resources for understanding the mind, the world, and how they relate. Haugeland argued that “giving a damn” is essential for having a mind—suggesting that traditional approaches to cognitive science mistakenly overlook the relevance of caring to the understanding of mindedness. Haugeland’s determination to expand philosophy’s array of concepts led him to write on a wide variety of subjects that may seem unrelated—from topics in cognitive science and philosophy of mind to examinations of such figures as Martin Heidegger and Thomas Kuhn. Haugeland’s two books with the MIT Press, Artificial Intelligence and Mind Design, show the range of his interests. This book offers a collection of essays in conversation with Haugeland’s work. The essays, by prominent scholars, extend Haugeland’s work on a range of contemporary topics in philosophy of mind—from questions about intentionality to issues concerning objectivity and truth to the work of Heidegger. Giving a Damn also includes a previously unpublished paper by Haugeland, “Two Dogmas of Rationalism,” as well as critical responses to it. Finally, an appendix offers Haugeland’s “Outline of Kant’s Transcendental Deduction.”
