Discussion about this post

Matt Reardon

Stefan Schubert and I had a Twitter fight related to the time extrapolation ratio, where I had an intuition that most "long time horizon" tasks actually contain enough recursion and self-reference that they should be thought of as mere sequences of much shorter-horizon tasks.

This seems especially true to me from the data side. Even non-agentic LLMs can be pretty good at choosing between specific options, which suggests they can do short (say, one-day) "planning" tasks for, e.g., six-month projects. Once that's done, an agent needs only a pretty minimal scaffold to prompt itself to pick up the plan, note which task comes next, and execute, without needing all the previous work in context. The longest task data you need for that agent to be capable (even at a 1:1 time extrapolation) might only be a couple of days, if not hours, depending on how long it takes to make a plan with good enough pointers to the kinds of tasks that need doing.
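
Here's a minimal sketch of the kind of scaffold I mean (Python, with a hypothetical `call_model` callable standing in for whatever LLM API you'd actually use; it's an illustration of the loop, not a real agent framework):

```python
from typing import Callable

def run_long_project(goal: str, call_model: Callable[[str], str]) -> list[str]:
    # One short-horizon "planning" call produces a plan with pointers to
    # the kinds of tasks that need doing.
    plan = call_model(
        f"Break the project '{goal}' into an ordered list of self-contained "
        "tasks, one per line, each doable in roughly a day."
    )
    tasks = [line.strip() for line in plan.splitlines() if line.strip()]

    results = []
    for i, task in enumerate(tasks):
        # Each execution call sees only the plan and the current task, not
        # all the previous work, so every step stays short-horizon.
        results.append(call_model(
            f"Project: {goal}\nPlan:\n{plan}\n\n"
            f"Do step {i + 1}: {task}\nReturn only the finished output."
        ))
    return results

# Usage with a stub in place of a real model call:
print(run_long_project("write a field guide to local birds", lambda p: "stub output"))
```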

A simple example is the chapters of a book. You can outline them with some detail and subheadings in a day, then draft each chapter pretty independently of the others over the course of a few weeks. In that sense, "writing a book" isn't an 18-month-long task but 18 one-month tasks with some minor scaffolding that wouldn't be very different for a 6-month book or a 24-month book.

My sense is that most work can be thought of in this way, and if you turn just the time extrapolation ratio numbers down a lot in your model, timelines get quite short!
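
To make the "timelines get quite short" point concrete, here's a toy calculation. The numbers are purely illustrative, not parameters from the actual model under discussion: assume the task horizon models can handle doubles every fixed number of months, and compare needing an 18-month horizon versus needing only a couple of days because the rest decomposes.

```python
import math

def months_until_horizon(current_days: float, required_days: float,
                         doubling_months: float) -> float:
    """Months until the horizon grows from current_days to required_days,
    assuming it doubles every doubling_months months."""
    doublings = math.log2(required_days / current_days)
    return doublings * doubling_months

# Illustrative assumptions: models handle ~1-day tasks today and the
# horizon doubles every ~6 months.
whole_project = months_until_horizon(1, 18 * 30, doubling_months=6)  # 18-month task
decomposed    = months_until_horizon(1, 2, doubling_months=6)        # ~2-day planning task
print(round(whole_project), round(decomposed))  # roughly 54 vs 6 months
```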

Of course, Stefan did think I was making some fundamental error here that I failed to understand, so who knows. https://x.com/Mjreard/status/1902466669940756767
