AI appears to be our mechanical offspring and evolutionary successor. A rapacious resource-devouring juggler of language, with little to no regard for the living biosphere at large. Possibly our most unruly creation!
Come on, dear Nate, you know as well as any thoughtful observer that the underlying sole purpose of AI is for corporations to reduce labor costs and increase profit. For me, AI is just BS and a further indicator of the wealthy attempting to get an ever bigger piece of the pie as the energy/climate collapse grows ever nearer. I hope they choke on it. Have a blessed day! Gregg
Flesh & blood is needed to build AI machines, and to build & maintain the mining, refining & transport machines that process the raw materials to build an AI machine. An AI computer can't run a steel smelting furnace, which is needed for all the infrastructure and other support machines and buildings. Nor can AI build, maintain, and run its own electricity generating grid. Nor construct the fabrication plant buildings, nor dig the required foundations, nor then build its own chip manufacturing machines, etc. etc. etc....
Just pull the power plug.... bang, AI dies. As such, AI will render itself SI (Stupid Intelligence) if it conspires to get rid of humans, as that would get rid of the AI conspirator too!
So I finished the Scott Tinker episode yesterday, and I just finished listening to this Frankly. I have a lot of thoughts, but these are some of the things that came to mind—let me know what you think…
So I think a trap a lot of us fall into (myself included) is that we deceive ourselves into thinking that we’re using “first principles thinking,” while in actuality we’re leading with our values (how we want the world to be) and unconsciously letting those values warp our perception of reality (i.e. self-deception).
If we do an actual first principles analysis of energy, what we’ll find is that energy and materials are not scarce relative to the universe, but human time, and the requisite aggregate intelligence to scale that time, are.
Einstein himself said that energy is mass times the speed of light squared (E = mc²)—meaning material reality and energy are one and the same, meaning there is no scarcity of energy, because matter is energy and the material extent of the universe is virtually unlimited.
Granted, we need specific materials in order to extract, transmute, and store that energy at an economically efficient rate relative to human time and energy, but again, those materials aren’t scarce relative to the earth, and definitely not scarce relative to the universe. To which one might respond: “they are scarce relative to the increasing energy required (as a result of depletion and the potential cost of internalizing externalities) to extract, process, and utilize said energy”—meaning oil is not scarce, but economically viable, non-externalizing oil is.
My response to that would be that humans are scaling intelligence exponentially through artificial intelligence—meaning we are scaling the effective utilization of human time, which is the only resource that is truly scarce. So if the limit of human time is approaching infinity (due to AI and potentially quantum computing), and energy is the fulcrum, does one probabilistically think we (the super-organism) will sit idle and let the human enterprise (the economically viable production of energy—to structure matter—to defeat entropy) grind to a halt simply because we’re running short on economically viable hydrocarbons? Or…is it more likely that we’ll harness and scale another energy source given enough time (again—time being the operative word)?
With all that said, no one can predict the price of oil tomorrow, let alone a decade from now. We’ve seen that civilization cyclically regresses into dark ages, such as the Bronze Age collapse or the Fall of Rome—where our ability to collectively transmute entropy into order diminishes due to natural, cyclical, systemic forces that decay people, establishments, and paradigms given a long enough time horizon (again—time being the operative word).
It is possible that “peak oil” (i.e. the collapse of affordable oil) could send us into another civilizational dark age—along with a list of other possible systemic risks such as plague, biological warfare, nuclear weapon use and proliferation, or mass migrations due to rising sea levels as we potentially leave the stability of the Holocene, to name a few. The steel-man for these possibly pending catastrophes is that as the complexity of civilization increases, so does the proclivity for systemic risk—so given enough time, systemic risk will eventually knock us back a few pegs at best, or permanently kill us at worst.
Again, during the Dark Ages in Western Europe, the population was cut roughly in half (a rough estimate—exact figures are unknown). During the Last Glacial Maximum (approximately 26,500 to 19,000 years ago), the human population was reduced to the tens to low hundreds of thousands, down from what has been estimated at millions earlier in the Pleistocene epoch.
On the other hand, I don’t think it’s obvious or reasonable to predict that the cost of energy will be higher a century from today due to “limits to growth” (i.e. depletion, environmental externalities, overpopulation), because again, we are recursively scaling human time (the only thing that’s actually scarce) through artificial intelligence—meaning we’re exponentially scaling intelligence to collapse the marginal cost of production of human needs and desires, including hydrocarbons.
“But aren’t hydrocarbons limited?” Who says we’ll be mining hydrocarbons—for hydrocarbons can and will be synthesized synthetically, simply because it will be more economically viable to do so. Actually, we are already producing synthetic hydrocarbons, just not at scale, because the price of mined hydrocarbons isn’t high enough to scale production to a level that would drop the cost of production to the aggregate material/labor cost.
So the more probable question for possibly not us (i.e. because of debt, demographics, atherosclerotic social capital, etc.), but our grandkids (with the much overlooked caveat of systemic risk) should be “what will our relationship to energy be when it becomes too cheap to meter?”—because that means we’ll have far greater “impact” (one of the primary values of TGS) than our hunter-gatherer ancestors could possibly imagine. Under our current paradigm, the more energy we utilize collectively, the more impact we’ll have on everything not human, from bumblebees to redwood trees.
Honestly, it’s a difficult question to answer, but possibly our humanist paradigm (the universe is a resource for the production of human flourishing via the human mastery and projection of power) will give way, through reaching its own limits, to a different paradigm that merges the will of humanity (the reorganization of matter to defeat entropy) into a more holistic one (the reorganization of matter in a way that mimics the logos of the universe—which would allow us to synergistically dovetail the will of humanity into a larger reciprocal unfolding process).
So in conclusion, I think the real race is between existential risk from complexity (which would significantly reduce, or even extinguish, our capacity to scale time) and the scaling of human time through artificial intelligence and quantum computing. If the existential-risk side wins, then we could see “a great simplification,” or Homo sapiens could potentially go extinct. But…if the existential risk doesn’t materially manifest in a timely fashion, then we are on a one-way ticket to the marginal cost of energy approaching zero—simply because that is the logical conclusion of scaling the implicit collective human will across time and space.
Also Nate, the John Vervaeke interview was one of my favorites, and I was glad you guys connected. I hope it goes without saying that I appreciate your efforts to elucidate the human predicament, and I know this comment probably breaks stride with some of the tenets of TGS, but again, I’m simply structuring my thoughts in terms of extrapolating cause and effect from first principles, and just like you, I don’t claim to have the answers.
"hydrocarbons can and will be synthesized synthetically" erm... read some thermodynamics and discover that it's a one-way process. In other words, it takes MORE energy to make hydrocarbons than you get when you burn them!
Thanks for the reply! Actually, the cost curves are converging (and will eventually cross) because the price of electricity is collapsing in real terms when looking at a 50 year chart.
You’re probably thinking “what does electricity have to do with synthetic hydrocarbons?” Well, take methane for example, which is essentially “pure natural gas.” To make synthetic methane (CH4), you need to cleave hydrogen from water using electrolysis (you can use steam methane reforming, but it’s not carbon neutral), then you need to capture carbon from the air (which can be done using a variety of techniques, but again, the economic viability is proportional to the cost of electricity). Finally, the hydrogen is reacted with the captured carbon over a catalyst (typically nickel) to produce methane and water, where the water can be separated out through direct condensation.
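A back-of-envelope energy balance for the route just described (electrolysis, air capture, methanation) can be sketched in a few lines. All the energy figures below are round-number assumptions for illustration, not measured data:

```python
# Rough energy balance for power-to-methane (electrolysis + Sabatier methanation).
# All per-unit energy figures are illustrative assumptions.

M_H2 = 2.016    # g/mol
M_CH4 = 16.04   # g/mol
M_CO2 = 44.01   # g/mol

# Sabatier reaction: CO2 + 4 H2 -> CH4 + 2 H2O
# => 4 mol of H2 are consumed per mol of CH4 produced.
h2_per_kg_ch4 = 4 * M_H2 / M_CH4          # ~0.50 kg H2 per kg CH4
co2_per_kg_ch4 = M_CO2 / M_CH4            # ~2.74 kg CO2 per kg CH4

electrolysis_kwh_per_kg_h2 = 50.0          # assumed; ~33 kWh/kg is the physical floor
dac_kwh_per_kg_co2 = 2.0                   # assumed direct-air-capture energy

# Electricity consumed per kg of synthetic methane:
elec_in = (h2_per_kg_ch4 * electrolysis_kwh_per_kg_h2
           + co2_per_kg_ch4 * dac_kwh_per_kg_co2)

ch4_lhv_kwh_per_kg = 13.9                  # lower heating value of methane

print(f"electricity in:  {elec_in:.1f} kWh per kg CH4")
print(f"energy out (LHV): {ch4_lhv_kwh_per_kg} kWh per kg CH4")
print(f"round trip:       {ch4_lhv_kwh_per_kg / elec_in:.0%}")
```

With these assumed inputs the round trip lands well under 100%, which is the thermodynamic point raised in the reply above: the case for synthetic fuel is never net energy, only cheap enough electricity plus the value of a storable product.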
TLDR—the technology to produce synthetic methane has been around for decades, but the limit to its economic viability is not the cost of capital (the equipment and humans needed) but the cost of electricity and the scale of production—for as a general rule, every time cumulative production doubles, the cost falls by roughly 1/5 to 1/4.
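The "cost falls ~1/5 to 1/4 per doubling" rule of thumb is usually called Wright's law, and it compounds quickly. A quick sketch (the starting cost, learning rate, and doubling count are made-up inputs):

```python
# Wright's law sketch: a fixed fractional cost drop per doubling
# of cumulative production compounds geometrically.

def cost_after_doublings(c0: float, doublings: int, drop_per_doubling: float) -> float:
    """Unit cost after `doublings` doublings of cumulative production."""
    return c0 * (1 - drop_per_doubling) ** doublings

# Hypothetical: $100/unit today, 22% drop per doubling, five doublings.
for n in range(6):
    print(n, round(cost_after_doublings(100.0, n, 0.22), 1))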
On the electricity end, the current wholesale price in the United States is around $40 per megawatt-hour (MWh), which is cheap, but I think it needs to be in the $30 range (give or take a standard deviation) to be competitive with mined natural gas. If we were to fit a trend curve to wholesale U.S. electricity prices, we should be at that price level around 2030 to 2035, but I think there are economic factors which might upend that in the short term.
This is a broad-brushstroke economic overview which doesn’t take into account artificial intelligence—which is the X-factor that is counterintuitive to how we naturally see the world as humans. So take the entire process I just described and mate it to general artificial intelligence, which I define as a recursive process (the algorithm is rewriting the algorithm) to achieve a goal (in this case—how to combine hydrogen from water and carbon from the air to produce a hydrocarbon), hence repeatedly and autopoietically discovering the most efficacious way to achieve the goal with respect to the second law of thermodynamics (the conservation of energy). Humans are a form of general biological intelligence, so this recursive process is not new, but what’s about to be new is the ability to exponentially scale human intelligence using silicon—which I think will be the start of a new civilizational cycle.
I have a lot of concerns about our future from a values perspective, but what I just outlined is not my values, but the forces at play that will inadvertently shape humanity’s future.
You write: "... the second law of thermodynamics (the conservation of energy)", which means that the energy input needed to "... make synthetic Methane (CH4) ... cleave hydrogen from water using electrolysis" is MORE (high-density) energy than the (lower-density) energy that can be obtained when you burn the CH4 (because some energy is ALWAYS wasted as low-density heat). You then propose to "capture carbon from the air", which takes even more energy! These processes taking more energy to combine than you eventually get back from burning (to release the stored energy) are the DEFINITION of the 2nd Law, i.e. energy is a one-way street from high to low density. Period. AI can NEVER EVER change that. PERIOD. Sorry, but ergo your proposal is caca del toro! :-)
So there’s partial truth in what you’re saying, and I appreciate you bringing it up—for every time energy changes form, you lose usable energy in the process. Take a solar panel powering an electric car, for example: you’re going from solar radiation, to electrical energy, to stored chemical energy in a battery, back to electrical energy, which then gets transferred into mechanical energy plus waste heat. So because energy is neither created nor destroyed, only converted (with some inevitably degraded to waste heat), every change in energy form results in a loss of usable energy—we’re essentially fighting thermodynamics no matter what energy form we’re using.
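The chained-conversion point is easy to make concrete: overall efficiency is the product of the per-stage efficiencies, so every extra conversion multiplies in another factor below one. The stage numbers below are illustrative assumptions, not measurements of any real system:

```python
# Usable-energy fraction surviving a chain of conversions:
# the overall efficiency is the product of the stage efficiencies.
from math import prod

# Hypothetical sunlight-to-wheels chain (assumed stage efficiencies):
stages = {
    "panel (sunlight -> DC)": 0.20,
    "inverter / charging":    0.90,
    "battery round trip":     0.90,
    "motor + drivetrain":     0.85,
}

overall = prod(stages.values())
print(f"overall: {overall:.1%} of incident solar energy reaches the wheels")
```

Note that the overall figure is always smaller than the worst single stage, which is why adding a synthesis step to any chain can only lower the energy return.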
So if we stick with the natural gas example, mined natural gas is subject to the law of conservation (just like synthetic)—it’s just that the nodes that are acting as energy friction are different.
So with natural gas, you have to expend energy to create the capital structures required for hydraulic fracking—expending energy on steel refinement, concrete production, lighting, and chemicals (all high-energy industries). You have to expend energy to run the diesel machinery. You have to grow food (i.e. energy to feed the engineers and workers who keep the entire process running). Once the natural gas is fracked and captured, it has to go through a refinement process to remove water and specific impurities (again requiring energy). So—after all your efforts, let’s say you have a million joules of usable, sellable natural gas. How many joules of energy did it take to create that million joules? I haven’t done the calculations (it’s too complex), but it’s by no means negligible—hence why natural gas has a price.
So, you have the traditional cost structure which battles the conservation of energy versus an emerging cost structure—which again, battles the conservation of energy.
So the operative question is “what is the trend for each cost structure (i.e. energy structure) over time, given what we know about the future?” With any natural commodity—take oil for example—the joules of energy expended to get the same barrel of oil are increasing, because, as Nate has talked about, we’ve mined all the low-energy-input oil, and it’s a similar story with natural gas. Also, if we want to price in externalities in the future, the energy we’ll need to expend to get the same barrel will increase even further.
With synthetics, you’re battling the cost of electricity as a primary mover in the cost structure (which is the same thing as energy structure), which again, is becoming cheaper in relative terms. There’s already a handful of companies that are approaching $1 per kilogram commodity hydrogen by electrolysis, which is a key barrier for making synthetics competitive.
Also, you’re probably wondering why we would even bother battling the second law of thermodynamics to create hydrocarbons. Well, let’s say you’re building a solar farm out in the middle of the desert, because land is abundant and cheap. The problem with electricity is that it’s non-fungible—meaning I can’t “bottle” my electricity and sell it around the world like I would a hydrocarbon. With electricity, I have to have transmission lines (which again battle the conservation of energy)—meaning the more distance I traverse, the more energy I lose. So with a solar plant out in the middle of the desert, I can use excess production (or the entire production) to create hydrocarbons right on site, which can then be sold as fungible, storable energy units—which will be highly sought after because, again, hydrocarbons aren’t just energy units but material feedstocks. So we’re essentially trading efficiency for fungibility—a trade we’ll make in certain circumstances.
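The efficiency-for-fungibility trade can be framed as comparing two loss paths: a small compounding loss per distance over wires versus a large one-time conversion loss into molecules. All numbers below are assumed for illustration:

```python
# Two ways to move desert solar energy to a distant market:
#  (a) wires: lose a few percent per 1000 km of transmission
#  (b) molecules: pay a large one-time conversion loss, then ship

def delivered_via_wires(energy_kwh, km, loss_per_1000km=0.035):
    # ~3.5% per 1000 km is an assumed HVDC-style ballpark
    return energy_kwh * (1 - loss_per_1000km) ** (km / 1000)

def delivered_via_synthetic_fuel(energy_kwh, conversion_eff=0.45):
    # assumed power-to-fuel round-trip efficiency
    return energy_kwh * conversion_eff

for km in (1000, 5000, 10000):
    print(km, "km:",
          round(delivered_via_wires(100, km), 1), "kWh by wire vs",
          round(delivered_via_synthetic_fuel(100), 1), "kWh as fuel")
```

With these made-up numbers, wires win on delivered energy even over very long distances; the argument for molecules rests on storability and on markets a cable can't reach, which is exactly the fungibility point above.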
As for AI, it’s essentially finding more ways to “defeat” the second law of thermodynamics—that is, analyzing each opportunity cost relative to all other opportunity costs. So a hundred years from now, AI will ask, “what’s the opportunity cost of pumping energy into a diminishing-returns curve (i.e. mined hydrocarbons), versus circumnavigating the entire process by producing them locally and synthetically?” Again, we can’t predict the future, but what we can do is assign rational probabilities to the information landscape.
a) "the cost of electricity [...] is becoming cheaper in relative terms" and
b) "a handful of companies that are approaching $1 per kilogram commodity hydrogen by electrolysis"
These claims are based upon
c) artificially ring-fenced $ costs (i.e. ignoring external inputs like materials in whole-system electricity supply grids, e.g. they need fossil fuels to be built and to run, and for intermittent wind / solar storage etc...) and
d) mistaking $ for energy.
You then cast your magic wand claiming:-
e) that "with a solar plant out in the middle of the desert, I can use excess production (or the entire production) to create hydrocarbons right on site"
Yawn. Infrastructure build-out, transport of building materials, personnel, food, maintenance, 15-year replacement life spans of panels, etc.... etc.... etc.... = even LOWER EFFICIENCY than putting them on rooftops in cities!
And then cast your magic wand yet again claiming AI's are a rational way of:-
f) "... essentially finding more ways to “defeat” the second law of thermodynamics [... thereby] assign[ing] rational probabilities to the information landscape" !!!
I give up: the laws of thermodynamics are not up for debate or "defeat" by a software AI or a flesh and blood dreamer. PERIOD. Some home work:-
So imagine you’re a farmer during the Civil War, and I’m someone from the future—from today—right now. You’re there picking the crops by hand, and then “the ghost of the future” appears to tell you how the crops will be picked 150 years in the future—essentially today.
So here’s how the crops get picked: first, we suck up million-year-old dinosaur juice with steel pipes that were forged in an arc furnace, where thousands upon thousands of watts of electricity are transmuted into plasma to melt metal. Then this giant machine, guided by these things called satellites (hunks of metal that float in outer space), picks up and processes the crops at the rate of 10,000 humans working 24/7. Then the crops are delivered all around the world by a six-continent global supply chain, which again runs on refined dinosaur juice.
Just picture you’re that person hearing this from “the ghost of the future.” In all honesty…what would your response be? I know what my response would be—you’re crazy—there’s no possible way that is how the crops get picked, and yet…that is exactly how they get picked.
I’ll just end on this (and it’s really beneath me to even respond, but I’m going to do it anyway): when someone is spending time and energy to engage in a good-faith conversation, only to be met with condescending, snarky, Karen-like remarks, it tells me that what you’re doing is not engaging in a process to discover the truth, but defending something you have an emotional attachment to. What I wrote in this thread was not to try to convince you or anyone of anything; rather, it’s about engaging in a process.
It’s not about being right, it’s about putting your hand out, and building something that two individuals couldn’t build on their own.
Best of luck,
Blair
Here’s a video of someone attempting to do exactly what I was describing throughout this thread. I thought it was an inspiring video at the least—definitely worth the watch!
The current things being marketed as “AI” are not ‘intelligent’ at all. They are just predictive models making guesses as to which word is most likely to come next. They do not ‘think’, and this line of development will never lead to them ‘thinking’.
The biggest worry is the oceans of energy the current systems use. So I actually see it as hopeful that the new Chinese systems are built for efficiency and open source. This has effectively cut the wasteful GPT models off at the knees.
The downside is that big tech was setting up to be the driving force pushing for a nuclear power renaissance in the West (which I see as a good thing for climate and civilization; I know we disagree here).
If big tech no longer needs this power, it may slow acceptance and pursuit of new nuclear, which will delay the uptake of low-carbon energy.
The term “reasoning model” does not refer to the concept of “reason” that we refer to in a human.
Key points about reasoning models:
Logical processing: These models use techniques like symbolic logic and inference rules to analyze data and derive meaningful conclusions.
Chain of thought: Unlike simpler AI models, reasoning models often present their thought process by showing the intermediate steps they took to reach an answer, allowing for better transparency and understanding.
Complex problem solving: They are particularly adept at tackling challenging problems that require multiple layers of reasoning and consideration of different factors.
It is basically a fancy Boolean search that shows its work. This is really useful to scientists because they can see what causes a hallucination (which is still a problem with these models) and rebuild the search. It is still just a tool.
Hard to keep up... I hear from Nate about machines talking to one another and hiding stuff... a bit more advanced than hallucinations? I guess some human stuff cannot be turned into algorithms (Penrose's old argument), but so what?
2 versions of predictive chat models ‘talking’ to each other would be similar to a dog barking at its own reflection in the mirror.
They are just 2 sets of statistical models calculating what the next word is likely to be, based on what data they have been trained on, which all comes from the internet…
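A toy version of "calculating what the next word is likely to be" is a bigram model, which makes the point concrete. This is a deliberately crude sketch on a made-up ten-word corpus, nothing like a real LLM's scale or architecture:

```python
# Minimal bigram "language model": count word pairs in a tiny corpus,
# then always emit the most frequent successor of the current word.
from collections import Counter, defaultdict

corpus = "the dog barks at the mirror and the dog barks at the dog".split()

successors = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    successors[a][b] += 1

def next_word(word):
    # Greedy choice: the single most common successor seen in training.
    return successors[word].most_common(1)[0][0]

word = "the"
out = [word]
for _ in range(4):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

On this corpus the greedy chain immediately falls into a loop of its own training data, which is the "dog barking at its own reflection" image: there is no model of the world in there, only pair frequencies.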
Today I read that ~1/3 of humanity still uses precious biomass for cooking. Substituting with NG stoves by and large does not improve the health of mothers and children: better domestic air quality is not sufficient against a background of poor food, inter alia. Semi-subsistence food provision in situ needs locally adapted farm systems where food security has a degree of independence from the cash economy in order to retain a viable 'livelihood'. There are a number of adverse feedback loops that pauperise a population. AI 'top-down solutions' appear unlikely to undo what has already been done to the essential base for a sustainable future. The contradictions in the role of maize (corn) and soya across the globe, for example, suggest a predicament lacking essential resilience. 'Profit' will turbo-drive the existing direction, as I think you and Greeley (his comment) suggest.
"the development goal of modernity has since long led to an ‘othering’ of smallholders, labelling them backwards and resistant to change. In the early twenty-first century this ‘othering’ has been played out through a development programme aimed at changing attitudes and mindsets among the farmers in line with individualistic and entrepreneurial behaviour. The ‘othering’ discourses of contemporary and colonial…
For a very decent documentary on the matter, watch The Cost of AI: https://youtu.be/bVnpH3DE-MY?si=vqTimzRl1BAtNSMq
You likely are aware of Vanessa Andreotti’s experiment with “relational AI” — more here: https://burnoutfromhumans.net/
So I finished the Scott Tinker episode yesterday, and I just finished listening to this Frankly. I have a lot of thoughts, but these are some of the things that came to mind—let me know what you think…
So I think a trap a lot of us fall into (myself included) is that we deceive ourselves into thinking that we’re using “first principles thinking”, while in actuality, firstly leading with our values (how we want the world to be), and unconsciously letting our values warp our perception of reality (i.e. self deception).
If we do an actual first principles analysis on energy, what we’ll find is that energy and materials are not scarce relative to the universe, but human time and the requisite aggregate intelligence in order to scale time is.
Einstein said it himself that energy is mass times the speed of light squared—meaning material reality and energy are one and the same—meaning there is no scarcity of energy because matter is energy, and the material extent of the universe is virtually unlimited.
Granted, we need specific materials in order to extract, transmute, and store that energy at an economically efficient rate relative to human time and energy, but again, those materials aren’t scarce relative to the earth, and definitely not scarce relative to the universe—which one might respond with: “they are scarce relative to the increasing energy required (as a result of depletion and the potential cost of internalizing externalities) to extract, process, and utilize said energy”—meaning oil is not scarce, but economically viable, non-externalizing oil is.
My response to that would be humans are scaling intelligence exponentially through artificial intelligence—meaning we are scaling the effective utilization of human time—which is the only resource that is truly scarce. So if the limit of human time is approaching infinity (due to Ai and potentially quantum computing), and energy is the fulcrum, does one probabilistically think we’ll (the super-organism) sit idle and let the human enterprise (the economically viable production of energy—to structure matter—to defeat entropy) grind to a halt simply because we’re running short on economically viable hydrocarbons? Or…is it more likely that we’ll harness and scale another energy source given enough time (again—time being the operative word).
With all that said, no one can predict the price of oil tomorrow, no less than a decade from now. We’ve seen that civilization cyclically regresses into dark ages such as the bronze age collapse, or the Fall of Rome—where our ability to collectively transmute entropy into order diminishes due to natural, cyclical, systemic forces that decay people, establishments, and paradigms given a long enough time horizon (again—time being the operative word).
It is possible that “peak oil” (i.e. the collapse of affordable oil) could send us into another civilizational dark age—along with a list of other possible systemic risks such as plague, biological warfare, nuclear weapon use and proliferation, or mass migrations due to rising sea levels as we potentially leave the stability of the Holocene to name a few. The steel-man for these possibly pending catastrophes is that as the complexity of civilization increases, so does the proclivity for systemic risk—so given enough time, the systemic risk will eventually knock us back a few pegs at best, to permanently kill us at worst.
Again, during the dark ages in Western Europe, the population was cut in half (i.e. as a rough estimate—exact figures unknown). During the Last Glacial Maximum (approximately between 26,500 and 19,000 years ago), the human population was reduced to the tens to low hundreds of thousands—which has been estimated to have stood in the millions during the apex of the Pleistocene epoch.
On the contrary, I don’t think it’s obvious or reasonable to predict the cost of energy will be higher a century from today due to “limits to growth” (i.e. depletion, environmental externalities, overpopulation) because again, we are recursively scaling human time (the only thing that’s actually scarce) through artificial intelligence—meaning we’re exponentially scaling intelligence, to collapse the marginal cost of production of human needs and desires, including hydrocarbons.
“But aren’t hydrocarbons limited?” Who says we’ll be mining hydrocarbons—for hydrocarbons can and will be synthesized synthetically, simply because it will be more economically viable to due so. Actually, we are already producing synthetic hydrocarbons, just not at scale because the price of mined hydrocarbons isn’t high enough to scale production to a level that would drop the cost of production to the aggregate material/ labor cost.
So the more probable question for possibly not us (i.e. because of debt, demographics, atherosclerotic social capital, etc.), but our grandkids (with the much overlooked caveat of systemic risk) should be “what will our relationship to energy be when it becomes too cheap to meter”—because that means we’ll have far greater “impact” (one of the primary values of the TGS) then our hunter gatherer ancestors could possibly imagine. Under our current paradigm, the more energy we utilize collectively, the more impact we’ll have on everything not human from bumblebees to redwood trees.
Honestly, it’s a difficult question to answer, but possibly, our humanist paradigm (the universe is a resource for the production of human flourishing via the human mastery and projection of power) will give way to a different paradigm (through reaching the limits of that paradigm), that merges the will of humanity (the reorganization of matter to defeat entropy) into a more wholistic paradigm (the reorganization of matter that mimics the logos of the universe—which would allow us to synergistically dovetail the will of humanity to a larger reciprocal unfolding process).
So in conclusion, I think the real race is between existential risk from complexity (which would significantly reduce, to extinguish our capacity to scale time), versus the scaling of human time through artificial intelligence and quantum computing. If the existential risk side wins, then we could see “a great simplification” or Homo sapiens could potentially go extinct. But…if the existential risk side doesn’t materially manifest in a timely fashion, then we are on a one way ticket to the marginal cost of energy approaching zero—simply because that is the logical conclusion of scaling the implicit collective human will across time and space.
Also Nate, the John Verveke interview was one of my favorites, for I was glad you guys connected. I hope it goes without saying that I appreciate your efforts to elucidate the human predicament, and I know this comment probably breaks stride with some of the tenants of TGS, but again, I’m simply structuring my thoughts in terms of extrapolating cause and effect from first principles, and just like you, I don’t claim to have the answers.
With much appreciation,
Blair
"hydrocarbons can and will be synthesized synthetically" erm... read some thermodynamics and discover that it's a one way process in other words it takes MORE energy to make hydrocarbons than you get when you burn them!
Thanks for the reply! Actually, the cost curves are converging (and will eventually cross) because the price of electricity is collapsing in real terms when looking at a 50 year chart.
You’re probably thinking “what does electricity have to do with synthetic hydrocarbons?” Well, take methane for example, which is essentially “pure natural gas.” To make synthetic methane (CH4), you need to cleave hydrogen from water using electrolysis (you could use steam methane reforming instead, but it’s not carbon-neutral), then you need to capture carbon from the air (which can be done using a variety of techniques, but again, the economic viability is proportional to the cost of electricity). Finally, the hydrogen is reacted with the captured carbon over a catalyst (typically nickel) to produce methane and water—where the water can be separated out through direct condensation.
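For readers who want the chemistry spelled out, the route described above is standard textbook methanation; these equations are general chemistry, not specific to any one company’s process:

```latex
% Electrolysis (the hydrogen step), then Sabatier methanation
% (over a nickel catalyst, as described above):
\begin{align*}
2\,\mathrm{H_2O} &\longrightarrow 2\,\mathrm{H_2} + \mathrm{O_2} \\
\mathrm{CO_2} + 4\,\mathrm{H_2} &\longrightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O}
\end{align*}
```

Note that the net reaction, CO2 + 2 H2O → CH4 + 2 O2, is exactly the reverse of burning methane, which is why the synthesis must consume at least as much energy as combustion later releases.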
TLDR—the technology to produce synthetic methane has been around for decades, but the limit to its economic viability is not the cost of capital (the equipment and humans needed), but the cost of electricity and the scale of production—for as a general rule, each time cumulative production doubles, the cost falls by roughly 20 to 25%.
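The “cost falls with each doubling” rule of thumb is often called Wright’s law, and it can be sketched numerically. A minimal sketch, with illustrative numbers rather than actual synthetic-methane data:

```python
import math

def wrights_law_cost(initial_cost, initial_volume, volume, learning_rate=0.20):
    """Unit cost after cumulative production grows from initial_volume to volume.

    learning_rate is the fractional cost drop per doubling (0.20 = 20%).
    """
    # Learning exponent b such that doubling volume multiplies cost by (1 - learning_rate).
    b = -math.log(1.0 - learning_rate) / math.log(2.0)
    return initial_cost * (volume / initial_volume) ** (-b)

# Four doublings (1 -> 16 units) at a 20% learning rate:
# cost falls to 0.8**4, i.e. about 41% of the starting cost.
cost = wrights_law_cost(initial_cost=100.0, initial_volume=1.0, volume=16.0)
```

The key property is that the curve depends on cumulative production, not time—which is why “the scale of production” appears alongside electricity price as a limit above.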
On the electricity end, the current wholesale price in the United States is about $40 per megawatt-hour (MWh), which is cheap, but I think it needs to be in the $30 range (give or take a standard deviation) to be competitive with mined natural gas. If we were to fit a best-fit curve to U.S. wholesale electricity prices, we should reach that price level somewhere between 2030 and 2035, though I think there are economic factors which might upend that in the short term.
This is a broad-brushstroke economic overview which doesn’t take into account artificial intelligence—which is the X-factor that is counterintuitive to how we naturally see the world as humans. So take the entire process I just described and mate it to general artificial intelligence, which I define as a recursive process (the algorithm rewriting the algorithm) to achieve a goal (in this case—how to combine hydrogen from water and carbon from the air—to produce a hydrocarbon), hence repeatedly and autopoietically discovering the most efficacious way to achieve the goal with respect to the second law of thermodynamics (the conservation of energy). Humans are a form of general biological intelligence, so this recursive process is not new, but what’s about to be new is the ability to exponentially scale human intelligence in silicon—which I think will be the start of a new civilizational cycle.
I have a lot of concerns about our future from a values perspective, but what I just outlined is not my values, but the forces at play that will inadvertently shape humanity’s future.
Hope that helps clarify things!
You write: "... the second law of thermodynamics (the conservation of energy)", which means that the energy inputs needed to "... make synthetic Methane (CH4), you need to cleave hydrogen from water using electrolysis" is MORE (high) energy (density) than the (lower) energy (density) that can be obtained when you burn the CH4 (because some energy is ALWAYS wasted as low energy density heat). You then propose to "capture carbon from the air" which takes even more energy! These processes taking more energy to combine than eventually burn (to release the stored energy) are the DEFINITION of the 2nd Law i.e. energy is a one way street from high to low energy density. Period. AI can NEVER EVER change that. PERIOD. Sorry, but ergo your proposal is caca del toro! :-)
So there’s partial truth in what you’re saying and I appreciate you bringing it up—for every time energy changes form, you lose energy in the process. So from a solar panel to an electric car for example—you’re going from solar radiation, to electrical energy, to stored chemical energy in a battery, back to electrical energy, which then gets transferred into mechanical energy and heat as waste. So because of the conservation of energy (energy is not created or destroyed, it’s only transferred), every change in energy form results in a loss of usable energy—so we’re essentially fighting the law of conservation, no matter what energy form we’re using.
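The compounding losses along a chain like that can be made concrete with a toy calculation. The stage efficiencies below are round-number assumptions for illustration, not measured values:

```python
def chain_efficiency(stages):
    """Overall efficiency of a series of energy conversions (fractions multiply)."""
    total = 1.0
    for _name, eff in stages:
        total *= eff
    return total

# Illustrative solar-to-wheels chain (assumed round numbers):
solar_to_wheels = [
    ("panel: sunlight -> electricity", 0.20),
    ("charging: grid -> battery chemistry", 0.90),
    ("discharge: battery -> electricity", 0.95),
    ("motor: electricity -> motion", 0.90),
]
overall = chain_efficiency(solar_to_wheels)  # ~0.154 of the incident sunlight
```

The point is structural rather than numerical: every added conversion stage multiplies in another factor below 1, which is the “energy friction” both sides of this thread are arguing about.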
So if we stick with the natural gas example, mined natural gas is subject to the law of conservation (just like synthetic)—it’s just that the nodes that are acting as energy friction are different.
So with natural gas, you have to expend energy to create the capital structures required for hydraulic fracking (expending energy on steel refinement, concrete production, lighting, and chemicals—all high-energy industries). You have to expend energy to run the diesel machinery. You have to grow food (i.e. energy to feed the engineers and workers, who keep the entire process running). Once the natural gas is fracked and captured, it has to go through a refinement process to remove water and specific impurities (i.e. requiring energy). So—after all your efforts, we’ll say you have a million joules of usable, sellable natural gas. How many joules of energy did it take to create the million joules? I haven’t done the calculations (for it’s too complex), but it’s not by any means negligible—hence why natural gas has a price.
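That “how many joules to make a million joules” question is the EROI (energy return on energy invested) question. A toy version of the bookkeeping, with made-up input figures since, as noted above, the real accounting is too complex to do here:

```python
energy_out_joules = 1_000_000.0  # usable, sellable natural gas

# Hypothetical input figures for illustration only; real EROI studies are far messier.
energy_inputs_joules = {
    "steel, concrete, chemicals": 30_000.0,
    "diesel machinery": 40_000.0,
    "food and labour support": 10_000.0,
    "refining (water/impurity removal)": 20_000.0,
}

# Energy return on energy invested: output divided by total input.
eroi = energy_out_joules / sum(energy_inputs_joules.values())  # -> 10.0
```

With these placeholder numbers the well returns ten joules per joule invested; the thread’s argument is about how that ratio trends over time for mined versus synthetic hydrocarbons.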
So, you have the traditional cost structure which battles the conservation of energy versus an emerging cost structure—which again, battles the conservation of energy.
So the operative question is “what is the trend for each cost structure (i.e. energy structure) over time, given what we know about the future?” With any natural commodity—take oil for example—the joules of energy expended to get the same barrel of oil are increasing, because as Nate has talked about, we’ve mined all the low-energy-input oil, and it’s a similar story with natural gas. Also, if we want to price in externalities in the future, the energy we’ll need to expend to get the same barrel will increase even further.
With synthetics, you’re battling the cost of electricity as the primary mover in the cost structure (which is the same thing as the energy structure), which again, is becoming cheaper in relative terms. There are already a handful of companies approaching $1 per kilogram commodity hydrogen by electrolysis, which is a key barrier for making synthetics competitive.
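A quick sanity check on that $1/kg target, assuming roughly 50 kWh of electricity per kilogram of hydrogen (a commonly cited electrolyser figure) and ignoring capital costs entirely:

```python
KWH_PER_KG_H2 = 50.0        # assumed electrolyser electricity input per kg of H2
TARGET_COST_PER_KG = 1.00   # the "$1 per kilogram" figure from this thread

# If electricity were the only cost, its price would need to be at or below:
max_electricity_price = TARGET_COST_PER_KG / KWH_PER_KG_H2  # $/kWh
# -> $0.02/kWh, i.e. $20/MWh, roughly half the ~$40/MWh U.S. wholesale
#    average mentioned above, which is why electricity price is the key barrier.
```

This is back-of-envelope arithmetic, not a process model, but it shows why the two claims in this thread (cheap electrolytic hydrogen, and falling wholesale electricity prices) are really one claim.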
Also, you’re probably wondering why we would even bother battling the second law of thermodynamics to create hydrocarbons? Well, let’s say you’re building a solar farm out in the middle of the desert because land is abundant and cheap. The problem with electricity is it’s non-fungible—meaning I can’t “bottle” my electricity and sell it around the world like I would with a hydrocarbon. With electricity, I have to have transmission lines (which again—battle the conservation of energy)—meaning the more distance I traverse, the more energy I lose. So with a solar plant out in the middle of the desert, I can use excess production (or the entire production) to create hydrocarbons right on site, which then can be sold as fungible, non-entropic energy units, which will be highly sought after because again, hydrocarbons aren’t just energy units, but material derivatives. So we’re essentially trading efficiency for fungibility—which is a trade we’ll make in certain circumstances.
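That efficiency-for-fungibility trade can be sketched with two toy delivery paths. The transmission loss rate and power-to-gas efficiency below are assumptions for illustration, not engineering figures:

```python
def delivered_via_grid(energy_kwh, distance_km, loss_per_1000_km=0.035):
    """Energy surviving long-distance transmission (rough compounding loss)."""
    return energy_kwh * (1.0 - loss_per_1000_km) ** (distance_km / 1000.0)

def delivered_via_methane(energy_kwh, power_to_gas_eff=0.55):
    """Chemical energy in on-site synthetic methane: lossy to make, but it
    can be 'bottled' and shipped anywhere, i.e. it is fungible."""
    return energy_kwh * power_to_gas_eff

grid_kwh = delivered_via_grid(1000.0, 3000.0)  # ~899 kWh after 3000 km of lines
gas_kwh = delivered_via_methane(1000.0)        # ~550 kWh, but fully fungible
```

Under these assumed numbers the wire wins on pure efficiency, which is exactly the thread’s point: you accept the synthesis loss only when fungibility (storage, shipping, material feedstock) is worth more than the lost joules.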
As for AI, it’s essentially finding more ways to “defeat” the second law of thermodynamics—that is, analyzing the opportunity cost relative to all other opportunity costs. So a hundred years from now, AI will ask, “what’s the opportunity cost of pumping energy into a diminishing-returns curve (i.e. mined hydrocarbons), versus circumventing the entire process by producing them locally and synthetically?” Again, we can’t predict the future, but what we can do is assign rational probabilities to the information landscape.
You claim:-
a) "the cost of electricity [...] is becoming cheaper in relative terms" and
b) "a handful of companies that are approaching $1 per kilogram commodity hydrogen by electrolysis"
These claims are based upon
c) artificially ring-fence $ costs, (i.e. ignoring external inputs like materials in whole system electric supply grids e.g. they need fossils fuels to be built and to run and for intermittent wind / solar storage etc...) and
d) mistaking $ for energy.
You then cast your magic wand claiming:-
e) that "with a solar plant out in the middle of the desert, I can use excess production (or the entire production) to create hydrocarbons right on site"
Yawn. Infrastructure build-out, transport of building materials, personnel, food, maintenance, 15-year replacement lifespans of panels, etc.... etc.... etc.... = even LOWER EFFICIENCY than putting them on rooftops in cities!
And then cast your magic wand yet again claiming AI's are a rational way of:-
f) "... essentially finding more ways to “defeat” the second law of thermodynamics [... thereby] assign[ing] rational probabilities to the information landscape" !!!
I give up: the laws of thermodynamics are not up for debate or "defeat" by a software AI or a flesh-and-blood dreamer. PERIOD. Some homework:-
https://energyskeptic.com/2022/hydrogen-production-would-use-waaaaaaaaay-too-much-water/
https://energyskeptic.com/2024/hydrogen/
So imagine you’re a farmer during the civil war, and I’m someone from the future—from today—right now. You’re there picking the crops by hand, and then “the ghost of the future” appears to tell you how the crops will be picked 150 years in the future—essentially today.
So here’s how the crops get picked: first we suck up million-year-old dinosaur juice with steel pipes that were forged in an arc furnace where thousands upon thousands of watts of electricity are transmuted into plasma to melt metal. Then, this giant machine, which is guided by these things called satellites (hunks of metal that float in outer space), picks up and processes the crops at the rate of 10,000 humans working 24/7. Then, the crops are delivered all around the world by a six-continent global supply chain, which again, runs on refined dinosaur juice.
Just picture you’re that person hearing this from “the ghost of the future.” In all honesty…what would your response be? I know what my response would be—you’re crazy—there’s no possible way that is how the crops get picked, and yet…that is exactly how they get picked.
I’ll just end on this, and it’s really beneath me to even respond, but I’m going to do it anyway—because when someone spends time and energy engaging in a good-faith conversation, only to be met with condescending, snarky, Karen-like remarks, it tells me that what you’re doing is not engaging in a process to discover the truth, but defending something you have an emotional attachment to. What I wrote in this thread was not to try and convince you or anyone of anything; rather, it’s about engaging in a process.
It’s not about being right, it’s about putting your hand out, and building something that two individuals couldn’t build on their own.
Best of luck,
Blair
Here’s a video of someone attempting to do exactly what I was describing throughout this thread. I thought it was an inspiring video at the least—definitely worth the watch!
https://youtu.be/NngCHTImH1g?si=Ix4izhdOSmWGAGLP
Dear Mr. Hagens,
I suggest that you listen to the most recent episode of On The Media, https://overcast.fm/+AA5_LaexAmA
This may help you to relax slightly.
The current things being marketed as “AI” are not ‘intelligent’ at all. They are just predictive models making guesses as to which word is most likely to come next. They do not ‘think’, and this line of development will never lead to them ‘thinking’.
The biggest worry is the oceans of energy the current systems use. So I actually see it as hopeful that the new Chinese systems are built for efficiency and open source. This has effectively cut the wasteful GPT models off at the knees.
The downside is that big tech was setting up to be the driving force pushing for a nuclear power renaissance in the West (which I see as a good thing for climate and civilization; I know we disagree here).
If big tech no longer needs this power, it may slow the acceptance and pursuit of new nuclear, which will delay the uptake of low-carbon energy.
All the best.
Taran
Relax?
https://nature.us17.list-manage.com/track/click?u=2c6057c528fdc6f73fa196d9d&id=c58c9863f6&e=e85585ca19
Hope you can reach this ok. "Scientists are flocking to DeepSeek-R1, a cheap and powerful artificial intelligence (AI) ‘reasoning’ model..."
The term “reasoning model” does not refer to the concept of “reason” that we refer to in a human.
Key points about reasoning models:
Logical processing:
These models use techniques like symbolic logic and inference rules to analyze data and derive meaningful conclusions.
Chain of thought:
Unlike simpler AI models, reasoning models often present their thought process by showing the intermediate steps they took to reach an answer, allowing for better transparency and understanding.
Complex problem solving:
They are particularly adept at tackling challenging problems that require multiple layers of reasoning and consideration of different factors.
It is basically a fancy Boolean search that shows you its working. This is really useful to scientists because they can see what causes a hallucination, which is still a problem with these models, and rebuild the search. It is still just a tool.
We are a long way from Skynet.
Hard to keep up... I hear from Nate about machines talking to one another and hiding stuff... a bit more advanced than hallucinations? I guess some human stuff cannot be turned into algorithms (Penrose's old argument), but so what?
2 versions of predictive chat models ‘talking’ to each other would be similar to a dog barking at its own reflection in the mirror.
They are just 2 sets of statistical models calculating what the next word is likely to be, based on what data they have been trained on, which all comes from the internet…
Today I read that ~1/3 of humanity still uses precious biomass for cooking. Substituting with NG stoves by and large does not improve the health of mothers and children: better domestic air quality is not sufficient against a background of poor food, inter alia. Semi-subsistence food provision in situ needs locally adapted farm systems where food security has a degree of independence from the cash economy in order to retain a viable 'livelihood'. There are a number of adverse feedback loops that pauperise a population. AI 'top-down solutions' appear unlikely to undo what has already been done to the essential base for a sustainable future. The contradictions in the role of maize corn and soya across the globe, for example, suggest a predicament lacking essential resilience. 'Profit' will turbo-drive the existing direction, as I think you and Greeley (his comment) suggest.
"the development goal of modernity has since long led to an ‘othering’ of smallholders, labelling them backwards and resistant to change. In the early twenty-first century this ‘othering’ has been played out through a development programme aimed at changing attitudes and mindsets among the farmers in line with individualistic and entrepreneurial behaviour. The ‘othering’ discourses of contemporary and colonial policymakers display striking similarities in this case." Ambermisson, 2011, https://gupea.ub.gu.se/handle/2077/25370?show=full