How Do Large Language Monkeys Get Their Power (Laws)?

Rylan Schaeffer, Joshua Kazdan, John Hughes, Jordan Juravsky, Sara Price, Aengus Lynch, Erik Jones, Robert Kirk, Azalia Mirhoseini, Sanmi Koyejo

International Conference on Machine Learning (ICML) 2025, Oral Presentation

July 2025

Abstract

We investigate the origins of power-law scaling in large language model inference-time compute, explaining why repeated sampling yields predictable improvements in task performance.
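A minimal sketch of the phenomenon (an illustrative simulation, not the paper's code or data): suppose each problem has a fixed per-attempt success probability, and those probabilities follow a heavy-tailed distribution near zero (here a Beta distribution, a hypothetical choice). Per problem, the failure rate falls exponentially in the number of attempts k, yet averaging over the heavy-tailed mixture produces an aggregate pass@k curve with approximate power-law behavior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw per-problem single-attempt success probabilities from a Beta
# distribution: its density behaves like p**(a - 1) near zero, i.e. a
# heavy tail of very hard problems. (Illustrative choice of a and the
# second shape parameter, not values from the paper.)
a = 0.3
p = rng.beta(a, 2.0, size=100_000)

# For one problem, failure decays exponentially in the attempt count k:
# (1 - p)**k. Averaging over the heavy-tailed mixture of problems gives
# aggregate scaling -log(pass@k) that is approximately a power law in k.
ks = np.array([64, 256, 1024, 4096, 16384])
neg_log_pass = np.array(
    [-np.log(np.mean(1.0 - (1.0 - p) ** k)) for k in ks]
)

# Slope of log(-log pass@k) against log k; for this mixture it sits
# near -a, tracking the left-tail exponent of the Beta distribution.
slope = np.polyfit(np.log(ks), np.log(neg_log_pass), 1)[0]
print(f"fitted power-law exponent: {slope:.2f}")
```

The point of the sketch is the contrast: no single problem scales as a power law, but the aggregate over a heavy-tailed population does, with an exponent set by how much mass the distribution places on near-impossible problems.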

Summary

Understanding the origins of power-law scaling in large language model inference-time compute.


Oral presentation at ICML 2025.