Talks and Presentations

Concept Alignment

December 15, 2023

Poster Presentation, 1st NeurIPS Workshop on AI Meets Moral Philosophy and Moral Psychology: An Interdisciplinary Dialogue about Computational Ethics, New Orleans, Louisiana

Discussion of AI alignment (alignment between humans and AI systems) has focused on value alignment, broadly referring to creating AI systems that share human values. We argue that before we can even attempt to align values, it is imperative that AI systems and humans align the concepts they use to understand the world. We integrate ideas from philosophy, cognitive science, and deep learning to explain the need for concept alignment, not just value alignment, between humans and machines. We summarize existing accounts of how humans and machines currently learn concepts, and we outline opportunities and challenges in the path towards shared concepts. Finally, we explain how we can leverage the tools already being developed in cognitive science and AI research to accelerate progress towards concept alignment.

Semantics and Syntax Co-emerge in Adaptive Reservoir Network Dynamics

July 19, 2023

Conference Talk, Diverse Intelligences (DI) Summit, St. Andrews University, Scotland

It is well known that a word can carry different meanings in different contexts, but how exactly do lexical semantics interact with a broader syntactic context? On the one hand, meaning at the lexical level may influence how one syntactically parses the sentence within which it is embedded. Snedeker & Trueswell (2004) found behavioral evidence that the interpretation of syntactically ambiguous verb-argument structures is influenced by a verb’s bias toward one or another of these structures, and more recent evidence from Ryskin et al. (2017) indicates that these verb biases can be manipulated through experience. This constitutes behavioral evidence that semantics can influence syntax in language use. On the other hand, in neural network simulations, changes in lexico-syntactic context have been shown to subtly modulate the population encoding of lexical inputs, or, in other words, to induce shifts in semantics (Elman, 2009). These findings suggest that the distinction between semantics and syntax may be more conceptual than indicative of two natural kinds. Building on the findings of Falandays et al. (2021), we test whether an unsupervised, adaptive reservoir computing network can learn long-distance dependencies in a simple linguistic environment of sentences containing verbs with differential biases toward possible arguments. The future goal of this work is to use this model to explore whether changes in the statistical patterns of verb bias result in semantic shifts. This work uses computational modeling to explore how semantics and syntax can co-emerge as linguistic properties from the simple adaptive behaviors of a cognitive agent entraining with its environment.
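As a rough illustration of this style of modeling, the sketch below implements a standard echo state network: a fixed random reservoir driven by one-hot word inputs, with a ridge-regression readout trained to predict the next word. The toy corpus, dimensions, and supervised readout are illustrative assumptions for exposition only, not the unsupervised, adaptive model from Falandays et al. (2021) used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy corpus: "served" takes either argument, "cooked" only one (illustrative).
sentences = ["the chef cooked the meal .",
             "the chef served the guest .",
             "the chef served the meal ."]
tokens = " ".join(sentences).split()
vocab = sorted(set(tokens))
idx = {w: i for i, w in enumerate(vocab)}
one_hot = np.eye(len(vocab))

V, N = len(vocab), 100                               # vocabulary size, reservoir size
W_in = rng.uniform(-0.5, 0.5, (N, V))                # fixed input weights
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))      # keep spectral radius below 1

# Drive the reservoir word by word and collect (state, next word) pairs.
x = np.zeros(N)
states, targets = [], []
for t in range(len(tokens) - 1):
    x = np.tanh(W_in @ one_hot[idx[tokens[t]]] + W @ x)
    states.append(x.copy())
    targets.append(one_hot[idx[tokens[t + 1]]])
X, Y = np.array(states), np.array(targets)

# Linear readout trained by ridge regression to predict the next word.
W_out = np.linalg.solve(X.T @ X + 1.0 * np.eye(N), X.T @ Y).T

# Probe: feed a fresh prompt and inspect the readout's next-word scores.
x = np.zeros(N)
for w in "the chef served the".split():
    x = np.tanh(W_in @ one_hot[idx[w]] + W @ x)
print({w: round(float(s), 2) for w, s in zip(vocab, W_out @ x)})
```

In this setup only the readout is trained; the reservoir itself stays fixed, which is the main simplification relative to the adaptive, self-organizing dynamics explored in the talk.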

Probing the Methodology and Interpretation of Learned Categorical Perception Research

September 30, 2020

Poster Presentation, Vassar College Undergraduate Research Summer Institute (URSI), Poughkeepsie, New York

Learned categorical perception (CP) is a phenomenon in which learning to place objects in categories influences how similar they appear, with objects in different categories becoming easier to tell apart and/or objects in the same category becoming harder to tell apart. Although these effects have been widely demonstrated, past studies exhibit low statistical power and the literature lacks a unifying theoretical framework. We seek to rectify these issues by conducting a systematic methodological investigation of learned CP, starting by replicating the effect under the conditions with which it has traditionally been reported and then exploring how successive methodological changes affect the presence of the effect. Our replication failed to show a pattern indicative of learned CP when comparing discrimination performance between a group that had learned a category distinction and a control group that had not. Exploring our data for key differences between our study and previous demonstrations of learned CP, we hypothesized that the absence of CP effects was due to a combination of our stimuli being too easy to discriminate and subjects memorizing individual stimuli along each dimension, both of which would obscure the influence of category membership on discrimination behavior. We addressed these issues by lowering the discriminability of stimulus pairs and by increasing the number of stimuli in each category. Preliminary results suggest a possible learned CP effect, and we plan to collect additional data to clarify the nature of the pattern.

Hearing That It Is Silent: How to Hallucinate a Non-Perception

March 27, 2020

Paper Talk, 10th Annual Mid-Hudson Valley Undergraduate Philosophy Conference, Marist College, Poughkeepsie, New York

What do we perceive when we experience silence? The traditional position denies that experiences of silence are perceptions because to perceive is to perceive something. If this is the case, then it does not seem possible to hallucinate silence, because there would be no object of the hallucinatory perceptual experience. Yet our ability to hallucinate silence is intuitively plausible. In this paper, I develop an account of the traditional view on which the hallucination of perceptual absences is possible. Utilizing a disjunctivist notion of hallucination, I emphasize one’s epistemic access to the relevant perceptual information rather than one’s perceptual object to account for the experience of silence for a hearing person in both veridical and hallucinatory cases. (Conference canceled due to COVID-19)

Psychophysical Adaptive Procedure: Developing a New, Generalizable Method

September 25, 2019

Poster Presentation, Vassar College Undergraduate Research Summer Institute (URSI), Poughkeepsie, New York

“Staircasing” is a classic experimental procedure in psychophysics in which, when a subject successfully discriminates between two stimuli that differ on one perceptual dimension, the stimuli become more similar to each other on that dimension, and when the subject fails, the stimuli become more different. The purpose of this procedure is to find the size of the difference that produces a specific level of discrimination success. Traditional staircase procedures require subjects to be run individually for many trials. Our goal was to develop a procedure for crowd-sourced online data collection in which subjects can be tested simultaneously using only a few trials each, with every new piece of data altering the stimulus pair shown on the next trial. We do this by continually fitting subject responses with a plausible psychometric function and using the fit to predict the difference that should produce the desired accuracy level. Presenting only differences predicted to yield the desired performance allows us to find the difference corresponding to this accuracy level faster than sampling the whole range of stimulus differences. The stimulus values we determined with this method were used for research on the phenomenon of learned categorical perception. We present this new technique as an efficient way of determining psychophysical values more generally, for the purpose of controlling stimulus discriminability in experimental research.
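A minimal sketch of this fit-and-invert loop is below, assuming a two-alternative forced-choice task, a logistic psychometric function with a chance floor, a 75% target accuracy, and a simulated observer standing in for pooled online responses; all of these specifics are illustrative assumptions rather than the exact procedure used in the project.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(1)
TARGET = 0.75                      # desired proportion correct (illustrative)
CHANCE = 0.5                       # guessing rate for an assumed 2AFC task

def p_correct(diff, thresh, slope):
    """Logistic psychometric function with a chance-level floor."""
    return CHANCE + (1 - CHANCE) * expit(slope * (diff - thresh))

def neg_log_lik(params, diffs, correct):
    thresh, slope = params
    d, c = np.asarray(diffs), np.asarray(correct)
    p = np.clip(p_correct(d, thresh, slope), 1e-6, 1 - 1e-6)
    return -np.sum(c * np.log(p) + (1 - c) * np.log(1 - p))

def next_difference(diffs, correct):
    """Refit the function to all responses so far and invert it at TARGET."""
    fit = minimize(neg_log_lik, x0=[0.5, 5.0], args=(diffs, correct),
                   bounds=[(0.01, 1.0), (0.1, 50.0)])
    thresh, slope = fit.x
    return thresh + np.log((TARGET - CHANCE) / (1 - TARGET)) / slope

# Simulated observer with a known threshold and slope (assumption for the demo).
true_thresh, true_slope = 0.3, 10.0
diffs, correct = [0.1, 0.9], [0, 1]            # two seed trials spanning the range
for trial in range(50):
    d = float(np.clip(next_difference(diffs, correct), 0.01, 1.0))
    resp = rng.random() < p_correct(d, true_thresh, true_slope)
    diffs.append(d)
    correct.append(int(resp))

print(f"difference presented on the last trial: {diffs[-1]:.3f}")
print(f"difference where the true function reaches {TARGET:.0%}: "
      f"{true_thresh + np.log((TARGET - CHANCE) / (1 - TARGET)) / true_slope:.3f}")
```

Because every response updates the fit, the presented differences concentrate near the level predicted to yield the target accuracy, which is what lets the procedure converge with far fewer trials per subject than a traditional staircase.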