> For now, this is a rough comparison that helps us keep AI energy use in perspective - especially next time someone says chatbot use is causing an energy disaster.
The main concern isn't energy use from AI, it's how companies are getting that energy, and how much their emissions are increasing due to it. [Google says its total greenhouse gas emissions climbed nearly 50% over five years, mostly due to electricity that powers AI data centers](https://www.npr.org/2024/07/10/nx-s1-5028558/artificial-inte...). [Chevron to build gas plants to power data centers amid AI boom](https://www.reuters.com/business/energy/chevron-partners-wit...). [Electric utility companies are building more power plants that will burn natural gas to meet demands of a data center construction boom](https://www.washingtonpost.com/climate-environment/2024/11/1...). Considering this, the argument this piece makes falls apart rather quickly.
Not to mention the article is a bit disingenuous, because it focuses only on inference on the consumer side (which also helps drive up demand for training?)
Author is prof at EPFL
I find his other article on generative bio LLMs more interesting
https://substack.com/home/post/p-159626430
>Remarkably, Evo 2 wasn’t even trained on human variant data - only on the standard human reference genome. Yet it can still infer which mutations are harmful in humans, because it seems to have learned the evolutionary constraints on genomic sequences.