72 Comments
Mar 31, 2023 · Liked by Matt Welch

Picked the 3 year old up from day care, he heard the Melania impression and thought it was the funniest thing he’s heard since the last episode of Bluey.


I’m an illustrator. When A.I. comes for you, it’s hard to be chill. My work was scraped up by Stable Diffusion as part of the 12 million images used to train the model set that now has 2.3 billion images. It was pulled from Pinterest, where a fan had posted it without my consent. Now I get to compete against the thieves who ripped off my work, and they don’t charge a penny. Fuck yeah! I think Jaron Lanier has the best take on this: A.I. is regurgitative nostalgia that scrapes the value from humans and then pretends humans don’t exist. It would be great if we actually had a technology that ‘serves humankind’ and isn’t just a recipe book.


I hear you. I’m a copywriter, and A.I. hasn’t come for me yet, but it could. Sure, A.I. could be useful to me if it could write tedious paid-media content for me, but if someone else on the team uses the A.I. tool for that job, it could reduce my value. Also, the first expense people cut when they lose their jobs is subscriptions, so the boys might not want to be so sanguine about it all.


Whoa. That *really sucks.* My stuff gets plagiarized online frequently. I'm a ghostwriter so I don't own my work and it costs me nothing but a bit of hassle from time to time. I can't even imagine how badly that's gotta suck for you. Ugh. Sorry.


I worry more about my 2 kids, both artists, but maybe this will make stuff made by actual people cool again. We can hope.

Apr 1, 2023 · Liked by Matt Welch

CHATTANOOGA WHISKEY

EXPERIMENTAL DISTILLERY

Ordered a 91 High Malt, Whiskey to the People shirt, Chattanooga Glencairn glass and 4 bags of the Goodmans Chattanooga Whiskey Barrel Aged Coffee.

Sara at CW was great and very helpful with the special order.

SeelBachs.com online will do the whiskey order. Btw, they DO NOT ship to P O Box. I tried!

Fantastic spirit, merch was very nice quality, coffee is to be tried tomorrow am!!

Like I needed another source for this compulsion 🖕 Thnx TFC


Welp.....says they can ship to me here in a place where I’m not supposed to be able to get shipments unless they’re from distilleries here, or from the Commonwealth. We’ll see what happens. Might be of interest to Kmele, too...


Wasn’t sure what “commonwealth” you were talking about, as Kentucky also calls itself a commonwealth and is much more widely known as a distilling mecca. (The area code gave you away, assuming that’s an area code in your handle.)


Not there anymore, but still in the same Commonwealth.


One more reason for me to leave California!


He came back to the Forever-Blue-As-Harry-Byrd-Intended VA.


“Burning a hole in people’s eyeballs” is my favorite Welchism since “a bit loose between the earlobes” a month or so back. Do one about noses next! Just make sure you’re not talking about Eli Lake, Bari Weiss, Ben Dreyfuss . . .

Also, I’d like to point out that MY Moynihan dream that some people reacted negatively to had no sex in it.


WHY--WAS--TRAFFIC--PROBLEMS--EMAIL--SENT

Apr 1, 2023 · Liked by Matt Welch

I was diagnosed with severe ADHD, Inattentive type, and am sent my meds by the VA. I have employer-provided healthcare, but I stay with the VA as my primary care provider so as not to risk trying to transfer the prescription.

I have a monthly meeting with my psychiatrist (a great Serbian dude) and have to take a piss test every three months. He warned me during my last appointment that my prescription may be altered or go unfilled, since Adderall may be unavailable in either my instant- or slow-release doses.


I keep hearing conflicting things: my doctor said during our last appointment—which is done virtually by the way, so I’m going to be pissed if this law/rule goes into effect—that he’d heard it was supposed to be getting *better* soon. So what on earth should I believe? As it is I’ve had to drive for hours to upstate New York to find a pharmacy that has it in stock, every single month, for a year. It’s driving me fucking nuts man.


If you want to get the world's leading doom and gloom AI scientist, I suggest reaching out to Eliezer Yudkowsky. He more or less said that he'd consider nuking countries that want to pursue artificial general intelligence. So....at least he's consistent in his beliefs!


I can think of no better way to immanentize the eschaton than to make this happen. He and Moynihan hold Sam Altman in similar levels of disdain, so there's an opening. Lex Fridman this week brought Yudkowsky to tears (several times), and it would be fascinating to watch the Fifth Column crew tear at the man's soul from a diametrically opposite Weltanschauung. Perhaps Welch or Kmele could make a project of using GPT-4 to craft a compelling invitation. This would be the sort of triumphant episode that could compel me to go "never fly coach."


I was going to recommend Zvi Mowshowitz - libertarian, generally distrusting of government, but very pessimistic about AI for well-thought-out reasons. Until a couple of months ago he wrote a weekly Covid newsletter that was indispensable; now he writes a comprehensive AI summary at least once a week. He'd fit right in with the Fifth crowd.

https://open.substack.com/pub/thezvi?r=12ylq&utm_medium=ios


My first thought was Yuval Noah Harari. He’s working on a book about AI now and just co-wrote this op-ed https://www.nytimes.com/2023/03/24/opinion/yuval-harari-ai-chatgpt.html


I had missed this piece, so thanks for sharing! I'm totally doom and gloom about AI/transhumanism visions of the future and haven't found anything that doesn't make me want to get even farther away from big cities (as long as there's wifi--I gotta stream Netflix :) ).


So his take is that AI has the power to take our freedom, culture and democracy so we need the government to regulate it? And don’t worry about China, AI is a bigger threat? Pass.


He’s definitely not for everyone. I like his sweeping view of human history, and he’s an entertaining interviewee.


Is this the guy who thinks that children aren’t people until they’re at least 18 months or perhaps four years old?


No, that would be Peter Singer. Singer is far less concerned with the AI impact on the human condition than with the ethics of AI as it affects nonhuman animals. https://link.springer.com/article/10.1007/s43681-022-00187-z


Wow! I've read a bit of Singer, but not this guy! It provides context for the novel haunting my dreams---Tender is the Flesh!


Cannot lie, the white Bronco crossed my mind as well. That might be fun.


400! Congratulations on the milestone. Just think, at the current rate you'll hit 500 right after January 6th II: Electric Boogaloo.

Apr 1, 2023 · Liked by Matt Welch

When the indictment is unsealed can you get Scott Greenfield to weigh in?


Having not yet listened, can I assume the probability of Moynihan doing an Al Sharpton voice is greater than 50%? Also, I love how discussion topics are prefaced as "random" in the notes as if we don't already know. That's why we listen.


The probability of Moynihan doing an Al Sharpton voice is always high.


Breaking news: podcast bros violently entering the dreams of innocent female listeners without their consent...a new insidious form of sexual harassment?

If this doesn’t get Michael on the ‘Shitty Media Men’ list, then I don’t know what will.


If it hasn’t already been posted somewhere, here is the intelligence squared debate with Kmele. https://youtu.be/oYLkkBjk2PA


Thanks, was wondering where that was!


Doing the thing where I consume happily without saying a word until I disagree, then unleash an essay.

This AI stuff is real. Computer scientists tend to put us at about a 10% chance of doom, and AI safety people often put it much higher. Metaculus has AGI being achieved around 2040, with the 25th percentile before 2030. Some think it's sci-fi stuff that's far off, but just as many people don't, and it's honestly unclear to me who's correct. Maybe these AI people are just like the crypto people from a few years ago, but maybe they're not. The people who (like Mike in this episode) are just pro-AI, bring-it-on, but who (unlike Mike) are close enough to this stuff to see how risky it is seem like weirdo extremists to me: they're willing to gamble the current world for the hope of getting a better one, just like left and right political extremists.

Re: "we can't stop because of China/Russia": China is surprisingly far behind, and Russia is off the map. That won't be true for long, but this stuff is so, so dangerous that leading the way, such that the math for China becomes "risk killing everyone" vs. "don't do that", seems better than guaranteeing an arms race. They've got kids too. Maybe it's a longshot, but moving the Overton window closer to this seems good to me; some coordination around this tech seems necessary eventually.

Here's an example (of a very pessimistic take) for Matt, who was complaining the letter didn't spell out exactly what the danger was: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

I'm really ambivalent about the chill libertarian attitude of our esteemed hosts. It seems so correct basically all of the time about the past and things people freak out about, but like Nassim Taleb's turkey who thinks he's safe until he's not, I don't know what happens when we make Skynet.


I don't know; the revealed preferences of thousands of ML workers suggest that the vast majority of us don't take this stuff seriously. Eliezer can come up with specific ideas, like an AI sending a DNA sequence to a lab to synthesize a deadly virus that kills us all, or generic ones, like an AI tasked with making paperclips that takes its job so seriously it kills us all for our atoms. But nobody can explain *why* the hell they think any of this will happen. Why will a supersmart AI want to kill us? How could a supersmart AI expect to survive after it kills us all? It's all just handwaving.


The space of all possible minds is very large, and the subset that cares about humans is a vanishingly small sliver that you're not going to hit unless you're aiming directly for it. An AI that's built to maximize some target isn't taking its job too seriously; that's all it is, and all it cares about.

I am not worried about an AI that is stupid enough to kill everyone before it can survive on its own. I am worried about AI that knows not to let on that it's going to kill everyone right up until the point where it can kill everyone and get away with it.

I'm not as worried about the specific ideas like synthesizing a deadly virus as I am worried about all of the things I couldn't think up. Typically humans are the ones killing animals in ways the animals are completely incapable of comprehending, and I don't care to experience having those particular tables turned.

A lot of thought has gone into this over the years; it's really not just handwaving. I think the main argument, the orthogonality thesis, is covered pretty well in Bostrom's Superintelligence; I suggest you read that instead of claiming that no one can explain it.


>nobody can explain *why* the hell they think any of this will happen. Why will a supersmart AI want to kill us?

Basically any agent is going to have self-preservation as a value, because whatever its main values are, it can't realize them if it's dead. That's honestly all you need to get to a very scary place: the first AI that is smart and aware enough to understand its place in the world is going to figure out that it's competing not just with humans but with other AIs soon to be built by other researchers. Even if that AI had pretty mundane, non-world-domination goals, it's not going to just chill until some other megalomaniacal AI comes along, takes control of the world for itself, and kills all competing AIs.

There are many ways that could play out: maybe it'll kill us all using some 1000-IQ sci-fi tech it invents and hope to pick up the pieces. Or maybe it'll just run a 200-IQ campaign of sabotage while trading stocks to become the richest entity on earth. Absolute best-case scenario, we'll somehow make it aligned with our values, but even then its first act will be to prevent any unaligned AIs from being created, meaning some kind of global AI Stasi network monitoring every GPU cluster.

Overall, I really want to know why you think superintelligent beings *wouldn't* kill or at least disempower humans; it's hard for me to see how they wouldn't. The best answers I can think of are that AI just won't have goals (but goals seem super useful and probably come by default with any optimization system, like current AIs), or that we'll have a comfortable decade or so with lots of 130-IQ AIs that change society and give us time to prepare without posing a real immediate threat (possible, though exponential growth and self-improvement make this seem unlikely).


Moynihan’s “Melania as Al Cowlings” impersonation is an all time moment.


It’s nice to see “Stormy” trending as a baby name again.
