AGI god: what & how

I've been lurking LessWrong for years. A large subset of those folks are what I call AI-doomers.

I find the topic annoying as it is. I'm already rather late to the scene, so I've had no problem passing on it all.

Lately, the series of events sparked by ChatGPT, followed by the clusterfuck that is Bing Chat/Sydney, has created a watershed moment for AI-doomers.

I find the subject might be worth studying even if it ends up being full of shit. The plot is unfolding fast enough that the window for deep thinking on it may be vanishing.

Preamble

This section catches you up on what this whole deal is about. If you are well-versed in the topic of AI alignment, skip right to here.

Artificial general intelligence (AGI) refers to a digital form of intelligence on the level of a human or beyond. It's good not just at playing chess but at anything and everything. Pop culture manifestations of it are Commander Data and HAL 9000. We expect a true AGI to be capable of self-improvement; this is a key assumption.

AI-doomers claim that AGI is an existential threat to humanity and civilization, on par with nuclear weapons or maybe worse.

AI alignment is sometimes referred to as AI not-kill-everyone-ism. It's a field of study that attempts to keep the incentives and value systems of any given AGI aligned with humanity's, such that civilization's downfall would mean mutually assured destruction.

On a separate tangent, there's the idea of the singularity: the point in time right after an AGI is awakened. In stark contrast to the AI-doomers, singularity believers take a stance of optimistic humility, holding that what happens post-AGI cannot possibly be predicted because it's god-level intelligence we're dealing with here.

Now you are sufficiently caught up to go forward. Understanding the arguments behind this existential threat is a huge time sink. If you feel like jumping into this rabbit hole, this and this are decent starting points.

Analysis

When I do get to it, I read about AI alignment with some empathy but maintain a cautious distance. An entertaining piece of its lore is Roko's basilisk.

The AI-alignment crowd is the closest thing I know to a demon-slaying cult. Only these guys are backed up by logic.

Taking their logic apart is not easy, which is why I can't honestly decide whether they are right. Truth is, I would rather they not be, but that doesn't make it so.

My coverage here is more about what I don't know than what I do, which is at best surface-level. Any argument I could make against their theses has therefore likely been addressed by someone already.

For what it's worth, the AI-doomers are pure. As far as I can tell, their motivations are not profit- or status-driven. They get to say what they mean. Earnestness is a rare quality these days.

Singularity folks, on the other hand, talk about mind-uploads and a digital afterlife. You don't need to squint to see the signatures of conventional religion here. Theologians might have more interesting things to say about this.

Considerations

Ultimately it's about how we move forward, individually. Influencing the collective progress on AI is one thing (if it's even within your capability), but it's the individual moves that count if it all goes south.

So, what if the AI-doomers are right? Clearly I have no answer here, but I can attempt a framework for thinking about it.

Effectively, we would have a new god among us. There could be a few, but let's limit it to one (I foresee consolidation happening at a rapid pace). Without predicting what this god wants from us, yielding to it would probably demand a heavy dose of humility from humans.

On my end, I need to think about shifting focus to creating value in ways only I can. I've said before that those who feel their job is safe are not paying enough attention.

At least for the moment, I feel good about dropping the idea of writing a full-length novel. Going forward, writing one is going to be so easy that whatever I write would share the shelf with too much machine-written spam.

The question to keep asking in the world of an AI god is "where are the opportunities?" For now, the more useful question is "where will opportunities disappear?"