8 Comments
Blue Thoughts From a Red State

AI's magnificent potential for assisting humanity and our planet appears necessary to surviving the damage mankind has already wrought. Unfortunately, its own creators don't trust their ability, or ours, to manage the power of its dangerous capabilities. AI is a Pandora's box, and its contents, in my view, are beyond our current society's capacity to control without threatening our existence. We're not yet good enough or disciplined enough to handle it.

Sigurd

I've seen two AI projects aimed at working from a "wisdom" base rather than scraping as much as possible, rather indiscriminately, from internet-accessible sources.

One is Aidan Cinnamon Tea, the work of Vanessa Andreotti and the rest of the crew around GTDF - Gesturing Towards Decolonial Futures. There are indigenous folks involved in training this AI.

https://burnoutfromhumans.net/chat-with-aiden

The other is Nipun Mehta and other Service Space folks, who are mining the content they've collected over many years of working towards a common good and creating various custom "bots", including a Compassion Bot.

https://www.awakin.ai/

Allie Middleton

yes!

Renaee

Hi Nate and Team. I am leaving a link here to an author who has written a most unusual book about AI. I think it will be of interest, and if someone on the team, or Nate, were to read it, you might want to invite the author onto your show to chat about it. Here is the link to a book review: https://robertsaltzman.substack.com/p/chatgpt-40-reviews-understanding

Kel de Bruijn

I don't know how else to contact you to share interesting links.

These guys seem to think shale oil has already peaked:

https://blog.gorozen.com/blog/the-depletion-paradox

Robin Schaufler

I love your definition of Life, your contrast of capital-L Life with small-l life, and your criteria for what is NOT in service of Life. I agree about AI just providing a bigger straw.

An AI is inherently sociopathic. It is incapable of empathy or compassion, but fully capable of emulating them. An AI cannot experience anything. If you have never felt hungry, you cannot empathize with hunger.

In order to train an AI to adequately emulate human intelligence, it needs mind-boggling amounts of data. That data cannot be manually vetted; there is simply too much of it. Therefore, the training material will inevitably be contaminated with sociopathic bits. Even if the input is 99.999% pro-social, the best the AI can do is lull its "users" into complacency. The more pro-social the training, the harder it will be for the humans in its sphere to maintain vigilance. But I guarantee you that some case will turn up where some contaminant takes effect, and the results can be devastating.
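A rough back-of-the-envelope calculation shows why even "five nines" of pro-social data doesn't solve this; the trillion-token corpus size below is a hypothetical assumption for illustration, not a figure from any actual training run:

```python
# Back-of-the-envelope: absolute contamination at 99.999% pro-social data.
# The corpus size is an assumed, illustrative number only.
corpus_tokens = 1_000_000_000_000    # assume a 1-trillion-token training corpus
pro_social_fraction = 0.99999        # the "99.999% pro-social" figure above

contaminated = corpus_tokens * (1 - pro_social_fraction)
print(f"Contaminated tokens: {contaminated:,.0f}")  # Contaminated tokens: 10,000,000
```

Ten million contaminated tokens is still far more text than any manual review could ever catch.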

Furthermore, every time anyone uses it, it learns from the interaction. So even if humans could adequately filter the initial training data, once it's out in the world, all bets are off.

That said, the genie is out of the bottle and we cannot put it back in. Our legislative system is too moribund to give it guardrails, if that is even possible. AI is now an organ of the Superorganism, one which can turbocharge the SO's metabolism.

The best I can suggest is:

1) vigilance;

2) use it, but do so carefully;

3) try to train it pro-socially in the course of your interaction with it;

4) your friend should ask that AI company to build redundant fail-safe mechanisms, so that if the AI does come up with something dangerous, a tripwire trips and stops it in its tracks.

The tripwire might itself be an AI. The way a chess AI gets better is to play against other AIs. Perhaps an artificial ecosystem could weave some sort of safety net. But the AIs in the ecosystem need to be trained differently, and maybe also use different "dna" - a different code formulation for their neural nets - in order to provide the diversity of an ecosystem.
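A minimal sketch of what that tripwire ensemble might look like, in Python. Everything here is hypothetical: the `Tripwire` type, the `flags_danger` interface, and the toy checker rules are assumptions, not a description of any real safety system:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Tripwire:
    """One independently built checker with its own 'dna'."""
    name: str
    flags_danger: Callable[[str], bool]  # assumed interface: text -> True if dangerous

def screen(output: str, tripwires: List[Tripwire]) -> bool:
    """Release the output only if no tripwire trips (redundant fail-safes)."""
    for wire in tripwires:
        if wire.flags_danger(output):
            print(f"Tripwire '{wire.name}' tripped; output blocked.")
            return False
    return True

# Toy stand-ins for differently trained checkers; real ones would be
# separately trained models, not keyword rules.
ensemble = [
    Tripwire("keyword-rules", lambda text: "toxin" in text.lower()),
    Tripwire("length-anomaly", lambda text: len(text) > 10_000),
]

if screen("A recipe for sourdough bread...", ensemble):
    print("Output released.")
```

The design point is the diversity: checkers that share training data also share blind spots, so an ecosystem of differently built tripwires is less likely to fail in the same place at the same time.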

Just a thought.

Stay well, Nate.

Jay Scott Hardy

We should invest in a cooperatively owned AI mirror system, like the ones Audrey Tang writes and talks about.

Gareth Manning

This is brilliant. We need to develop pro-life sociocultural and economic goals and embed them into the code of future AIs, while also learning how to ask good questions.

The question of pro- and anti-life AI competition is quite compelling. Open-source local models hold great potential, especially for shaping goals away from innate corporate imperatives, yet they are the easiest for a kid in their basement to corrupt.

That said, so far, if Anthropic’s research is correct, their models have a better moral compass than a lot of people I encounter online.
