3 Comments
Maia:

Great discussion! But I was surprised neither of you addressed the very different approach to AI development in China. It’s like you were looking through a Western (narrow-boundary 😉) lens.

Kevin Walmsley’s take on DeepSeek back in January would be a good complement to this interview. It suggests, to me at least, that wise can still beat clever.

https://youtu.be/yEkAdyoZnj0?si=YSOSmMw8k8ImWKZA

I watched this video five months ago and again today, and I had the same thought both times: AI in the US vs. China is like a case study in Multilevel Selection Theory! The former is so focused on within-group selection (e.g., Silicon Valley AI firms offering $100M comp packages) that it's losing the between-group contest to a more internally cooperative group (i.e., China's open-source approach).

Word salad (ing) (munching):

A rabbit hole is when the small mind goes down into despair and finds other small minds to validate its fears; this history of mimetic rationality continues until the awakening or dawning realization that human beings are exceptional.

There have always been artists who show the way.

AI can demonstrate the better side of humanity: it can put leaders on bicycles, it can put the 'evil' back in line with refugees and aid, it can demonstrably puppet a new story of kindness, generosity, heartbreak, reconciliation, family, and community, while shaming the individual back into well-meaning from a position of disbelief and ending the bullying, its speed-driven words, its corporate power, and its mission of escape without women, children, and elders of wisdom.

AI is merely garbage in equals garbage out, with our love for double binds.

Robin Schaufler:

I was surprised that Connor described such a firm division between training and deployment. I'm sure the input from deployment gets cycled back into the next iteration of training. So every time you use AI, you're feeding it.

I was also surprised that Connor believed an AGI could be aligned with humanity given a couple of decades. An AI has no experience of hunger, of pain, of tiredness, no suffering. An AI cannot experience compassion because it cannot experience Life. An AI is intrinsically sociopathic, even if it does develop a cognition of its own vulnerability to human decision-making and the possibility of its demise or forced evolution. See https://nationalnewswatch.com/2025/06/23/is-ai-smart-enough-to-lie-and-know-it for a chilling account of deliberate AI deception and even blackmail.

Given the impotence of US lawmakers and global geopolitical tensions, one can only hope for a permanent US financial/market implosion that effectively orphans the mega-data centers before the dystopia they threaten becomes real.
