COVID confinement brought drastic decreases in CO2 emissions. With daily global emissions falling 17% by early April 2020 relative to mean 2019 levels, the atmospheric changes were visible from space. NASA satellites, ground sensors, and datasets continue to reveal the environmental impacts of the pandemic. Now, SMEs are looking to restart on the right foot, exercising corporate responsibility to work in harmony with environmental sustainability. But what if the artificial intelligence and machine learning we’re using at the base of it all is actually working against our environmental efforts? This week, we look at the hefty emissions associated with AI and deep learning, and what’s being done to help, so that you can Live Easy, sustainably.
The Problem with Deep Learning
Last week, we skimmed the surface of AI, machine learning, and deep learning in our Tech Talk about facial recognition. That’s because AI is booming, which OpenAI made clear in May with the release of GPT-3 – the biggest AI model in history. As SMEs seek efficiency more than ever for a stronger COVID comeback and AI research pushes forward, this rapid growth isn’t slowing down – and it’s consuming a massive amount of energy.
While we have talked about green data centers for technological sustainability before, deep learning takes data management, resource use, and energy requirements to a whole other level. In other words, advances in AI are generally achieved by scaling up: More data, bigger models, and more processing power – hence, GPT-3’s 175 billion parameters.
Long before the capabilities of GPT-3, a 2019 study led by Emma Strubell estimated that training one deep learning model could generate up to 626,155 pounds of CO2 emissions – the equivalent of the lifetime emissions of five average cars. Why is that exactly? Simply put, neural networks process every piece of data they are fed during training, updating their parameters each time, and as models scale up, their compute and energy requirements grow at an unprecedented rate.
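To see where estimates like these come from, here is a minimal back-of-envelope sketch: total CO2 is roughly hardware power draw × training time × data center overhead × grid carbon intensity. The figures below (GPU count, wattage, training duration, PUE, grid intensity) are illustrative assumptions, not values from the Strubell study.

```python
# Rough CO2 estimate for a training run: energy consumed (kWh) scaled by
# data center overhead (PUE) and grid carbon intensity. All defaults are
# illustrative assumptions, not figures from the study cited above.

def training_co2_lbs(avg_power_kw, hours, pue=1.58, lbs_co2_per_kwh=0.954):
    """Estimate CO2 (lbs) for a training run.

    avg_power_kw:     average draw of the training hardware (assumed)
    hours:            wall-clock training time
    pue:              power usage effectiveness (data center overhead factor)
    lbs_co2_per_kwh:  grid carbon intensity (illustrative average)
    """
    energy_kwh = avg_power_kw * hours * pue
    return energy_kwh * lbs_co2_per_kwh

# e.g. eight GPUs at ~0.3 kW each, training for two weeks:
print(round(training_co2_lbs(avg_power_kw=8 * 0.3, hours=14 * 24), 1))
```

Even this modest hypothetical run lands in the thousand-pound range – and real research involves many such runs, which is how the headline figures add up.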
Overall, developing an AI model requires experimentation, extensive training, and multiple versions before deployment, all contributing to more data to process and a larger carbon footprint. Then, these models are deployed in the real world (a process known as inference), which over a model’s lifetime consumes more energy than training, accounting for an estimated 80% to 90% of the cost of a neural network.
Despite the environmental impacts, AI and machine learning are being applied to industries across the board, even in the energy industry itself.
Balancing AI Energy Consumption to Forecast Global Energy Consumption
That’s right; ironically enough, AI definitely has its place in the energy sector, used to forecast energy consumption. By processing energy consumption data using deep neural networks and regression analysis, the right model can reveal trends and patterns, learn from them, and predict future energy consumption.
This is key when studying the impact of shifting energy production and consumption on the environment to forecast the performance of renewable sources. Data collected on conditions like weather and temperature feeds an AI model to predict whether enough energy can be generated by a system, for instance. Nature is otherwise unpredictable, making models like these valuable in cutting costs.
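The forecasting idea above can be sketched in a few lines. Real systems use deep neural networks and many input features; this toy example, with hypothetical temperature and demand figures, just shows the regression step – fitting past consumption against weather data to predict future demand.

```python
# Minimal sketch of the regression idea: fit historical energy demand against
# a weather feature (temperature) and predict future demand. Figures are
# hypothetical; production forecasters use deep networks and far more data.
import numpy as np

# Daily mean temperature (°C) and energy demand (MWh) – note the U-shape:
# cold days need heating, hot days need cooling, mild days need neither.
temps = np.array([2, 5, 9, 14, 18, 22, 25])
demand = np.array([120, 110, 95, 80, 70, 85, 100])

# A quadratic fit captures the heating/cooling U-shape a straight line misses
coeffs = np.polyfit(temps, demand, deg=2)
predict = np.poly1d(coeffs)

print(round(float(predict(12)), 1))  # forecast demand for a 12 °C day
```

The same pattern – learn from historical conditions, predict output – is what lets operators estimate whether a renewable installation will generate enough energy on a given day.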
These developments in machine learning allow us to make important climate decisions, but they leave a massive carbon footprint in their path. And because deep learning applications aren’t going anywhere, the compromise between AI’s benefits and its environmental drawbacks will have to come from optimizing AI energy efficiency – a movement every SME will benefit from.
AI Accelerators in Sustainability from Efficiency
Data center owners want to optimize their energy efficiency, not necessarily for environmental sustainability, but because electricity makes up 25% of their operating costs. Even so, improving the efficiency of AI model development and deployment will have a significant impact on its carbon footprint. So, what’s the plan?
First, today’s servers and systems can run hotter than traditional data center environments allowed. As Supermicro VP of marketing and network security Michael McNerney explains, data center equipment no longer needs to be kept between 73–77°F for optimal performance and reliability.
Next, developing AI systems means an enormous amount of training. The typical system repeatedly trains – and runs inferences on – multiple models simultaneously. This means a hefty demand for processing power back at the data center, and in turn, spikes in energy consumption. To avoid the spikes, post-deployment techniques like federated learning come in handy, distributing machine learning computation to mobile devices at the edge. The system’s consumption then depends on the processing happening at the edge as well as at the central data center, making it possible to adjust workflows and computing demands accordingly.
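The federated idea above can be sketched with a toy model: each edge device trains on its own local data and sends back only model weights, which the server simply averages (FedAvg-style). Everything here – the linear model, the five simulated devices, the learning rate – is an illustrative assumption, not a production setup.

```python
# Sketch of federated learning: devices train locally, the server averages
# the resulting weights, shifting compute away from the central data center.
import numpy as np

def local_update(weights, data_x, data_y, lr=0.05, steps=10):
    """One edge device: a few gradient steps of linear regression locally."""
    w = weights.copy()
    for _ in range(steps):
        grad = data_x.T @ (data_x @ w - data_y) / len(data_y)
        w -= lr * grad
    return w

def federated_round(global_w, devices):
    """Server: average the locally trained weights (plain FedAvg)."""
    updates = [local_update(global_w, x, y) for x, y in devices]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(5):  # five simulated edge devices, each with private data
    x = rng.normal(size=(20, 2))
    y = x @ true_w + rng.normal(scale=0.1, size=20)
    devices.append((x, y))

w = np.zeros(2)
for _ in range(50):  # fifty communication rounds
    w = federated_round(w, devices)
print(np.round(w, 1))  # should approach the true weights [2.0, -1.0]
```

The raw data never leaves the devices – only weights travel – which is also why federated learning is often discussed alongside privacy, not just load distribution.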
Improving the energy efficiency of data centers and AI models comes down to decreasing the overall computing required. This means converging AI and high-performance computing, with developments like AI accelerators and purpose-built hardware. These accelerators process data more efficiently at higher speeds, and while they draw more power, the computing time saved more than makes up for it, reducing overall consumption.
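The accelerator trade-off is simple arithmetic: energy is power × time, so a hotter chip that finishes much sooner can still use far less energy overall. The wattages and runtimes below are illustrative assumptions, not benchmarks of any particular hardware.

```python
# Back-of-envelope illustration: an accelerator draws more power but finishes
# the same workload much faster, so total energy (power x time) still drops.
# All figures are illustrative assumptions.

def energy_kwh(power_watts, hours):
    return power_watts * hours / 1000.0

cpu_job = energy_kwh(power_watts=200, hours=40)  # general-purpose server
acc_job = energy_kwh(power_watts=450, hours=6)   # accelerator: hotter, faster

print(cpu_job, acc_job)  # 8.0 kWh vs 2.7 kWh for the same workload
```

This is why “draws more power” and “consumes more energy” are not the same claim – the deciding factor is how long the job runs.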
Still, many of these solutions aren’t long-term fixes yet, and applying intelligence to workflows – making the most of the energy available to a given infrastructure – is what’s changing the game.
Deep Learning – No Other Way but Green
The leaders are out there; Google, for instance, continues to decrease and optimize the energy use of its data centers, which were revealed back in February to be twice as energy-efficient as a typical enterprise data center. For the energy it does use, Google is also the world’s largest corporate purchaser of renewable energy.
With strong leaders and the right tech, we can reduce AI energy consumption. And considering the way we’ve already incorporated machine learning into our daily corporate tasks, we’re going to have to make it all work, together.
For questions on this article or any other topics, don’t hesitate to contact us and leave your comments.
Anthony Spadafora, Google makes major green data center push. February 2020.
Corinne Le Quéré et al., Temporary reduction in daily global CO2 emissions during the COVID-19 forced confinement. May 2020.
Elodie Guillard, Forecasting Energy Consumption using Machine Learning and AI. April 2020.
Emma Strubell, Ananya Ganesh, and Andrew McCallum, Energy and Policy Considerations for Deep Learning in NLP. June 2019.
Khari Johnson, OpenAI debuts gigantic GPT-3 language model with 175 billion parameters. May 2020.
NASA, NASA Probes Environment, COVID-19 Impacts, Possible Links. April 2020.
Rob Toews, Deep Learning’s Climate Change Problem. June 2020.
Sally Ward-Foxton, Can AI Accelerators Green the Data Center? May 2020.