Search Results for author: Curt Tigges

Found 3 papers, 1 paper with code

Transformer-Based Models Are Not Yet Perfect At Learning to Emulate Structural Recursion

no code implementations • 23 Jan 2024 • Dylan Zhang, Curt Tigges, Zory Zhang, Stella Biderman, Maxim Raginsky, Talia Ringer

The framework includes a representation that captures the general *syntax* of structural recursion, coupled with two different frameworks for understanding their *semantics* — one that is more natural from a programming languages perspective, and one that helps bridge that perspective with a mechanistic understanding of the underlying transformer architecture.

Linear Representations of Sentiment in Large Language Models

1 code implementation • 23 Oct 2023 • Curt Tigges, Oskar John Hollinsworth, Atticus Geiger, Neel Nanda

Sentiment is a pervasive feature in natural language text, yet it is an open question how sentiment is represented within Large Language Models (LLMs).


Can Transformers Learn to Solve Problems Recursively?

no code implementations • 24 May 2023 • Shizhuo Dylan Zhang, Curt Tigges, Stella Biderman, Maxim Raginsky, Talia Ringer

Neural networks have in recent years shown promise for helping software engineers write programs and even formally verify them.
