Darcey Riley


About me:

I am a 4th-year PhD student in the Department of Computer Science and Engineering at the University of Notre Dame, where I work in David Chiang's NLP group.

My current research focuses on generation from language models. I'm interested in the entire probability distribution defined by these models, and particularly the ways in which it differs from the "human distribution". Why do language models like to generate overly short or repetitive text? And what can this tell us about how their language processing differs from human language processing? I am also interested in decoding methods that make it possible to access different regions of the language model's probability distribution.

My roots are in probabilistic modeling and Bayesian epistemology: can we build agents that continually learn about the world by making observations and then updating their probability distributions? What sorts of approximations are needed to make this tractable? What can the process of approximating the ideal Bayesian agent teach us about human thought? Although I am not currently working on this topic, it remains in the back of my mind and continues to inform my research perspective.


Publications:


Recorded Talks:


Education and work experience:


Professional service:


In my spare time...

I enjoy dressing up in CS-themed costumes for Halloween.

My actual proudest NLP accomplishment is this sonnet I once wrote about HMMs.