
I know you are kind of joking, but to give a serious answer:

As someone who doesn't have strong opinions on exact timelines, but thinks that short timelines are at least possible, I'd advise planning as if it either won't happen or will happen a very long time from now.

Mostly because, if it happens soon and you lived your life normally, then you missed out on a few years of hedonism. If it doesn't happen at all, or happens in a very long time, and you lived your life as if it were imminent, then you probably wind up having a very enjoyable few years, and then the rest of your life is pretty hosed: you've blown your savings, and probably damaged your professional life and maybe even some of your relationships.

So in my opinion, it's better to live your life under the optimistic assumptions, even if one believes that society should be investing time and effort under more pessimistic assumptions.


I'm listening to an economist who focuses on AI's impact on the economy right now. His best guess is that if we get a slow AI rollout, society might navigate it well enough to figure out entirely new social structures; if we get a fast rollout, we will have very serious political instability. I'm personally pretty worried about AI (if you're a writer, you're already screwed unless you have a big personal brand), but I also take some comfort that current AI models, the generative pretrained transformers, seem to have hit a ceiling until someone can figure out how to fix the "hallucination" problem. So far, more than half of the businesses that have tried to embed AI in their operations have seen it fail, either because it is not ready for prime time in many applications or because workers don't trust it, rightly view it as a threat to their jobs, and so are slow adopters. If the "hallucination" problem gets solved, then we're in very deep shit.


I'm not *too* worried about "mundane" AI risks, that is, disruptions to various professions, cultural norms, etc. Not because I think they will be small or won't cause problems (I think mundane issues could be both large and pretty bad in their impacts), but mostly because I trust that, in the long run, society will be able to figure them out and adapt, and that afterwards it will be better off.

Rather, I'm worried (to the extent that I'm worried) about existential risks. I think the likelihood is pretty low, but you don't get a do-over. If it happens, that's it: our species lost.

It's the same reason I think we should be investing more resources in asteroid detection (although I think that one has actually gotten quite a bit better in recent years).
