Discussion about this post

Henry Josephson:

More anecdata + agreement wrt robustness for careers:

The people I know who are making big career choices in the shadow of transformative AI seem to be making choices pretty robust to uncertainty. It's more "I had an AWS SWE offer right out of undergrad, but I decided to move to the Bay and do a mechinterp startup instead," than "I'm telling everyone I love to FUCK OFF and joining a cult."

Suppose it's 2040, and Nothing Ever Happened. The person who turned down the AWS offer because she wrongly believed Something Would Happen now has no retirement account and no job (the mechinterp startup is dead; there's nothing to interpret). Where does that leave her? In the same boat as 4 in 10 Americans[1], and probably with at least one economically useful skill. (Startups will probably still exist in 2040.) She's certainly counterfactually worse off than if she'd gone to Seattle instead of the Bay... but not so much worse off that I'd be comfortable calling her plans "not robust to uncertainty."

If the pivoters I know are wrong, they don't enter Yann LeCun-world with a couple embarrassing-in-hindsight Substack comments and zero career capital — they enter with a couple embarrassing-in-hindsight Substack comments and pretty-good career capital. Maybe titotal and I (and you) just disagree about what "robust" means?

[1] https://news.gallup.com/poll/691202/percentage-americans-retirement-savings-account.aspx

Also, the diamondoid bacteria link is broken :(

Harjas Sandhu:

> A few years of reduced savings or delayed family planning seems like a fair hedge against the real possibility of transformative AI this century.

I think my issue with this is the assumption that we'll actually know the state of play in a few years. Obviously if AGI happens and we hit superhuman coders in 2027, we'll know by 2027. But if we don't, there's no reason to assume that we won't be having the exact same debate in 2027. It might get even worse as we get closer to AGI; for example, we can imagine a timeline in which ChatGPT almost reaches AGI in 2027 and causes significant job losses (or at least shifts in the job market) via increased automation and productivity, but doesn't manage to fully replace humans.

In that timeline, I would expect AI Doomers to be even more frantic about basing their life decisions on short timelines. But as you point out, the question is transformative AI this *century*. In that case, I think that the "just a few more years" framework is pretty problematic. At some point, you have to make an arbitrary decision to just keep living your life, and I think that will only get harder as we get "closer" to AGI.

Otherwise, great post! I particularly like the framing of inaction as a bet, too: it being the default makes it no less of a choice than action.
