Artificial intelligence is everywhere. It writes emails, recommends shows, powers online searches and drives major innovations in health care, finance and education. Behind the buzz, though, lies a hidden reality: AI is not weightless. The tools many of us interact with every day, whether a chatbot or an image generator, require enormous amounts of electricity, water and physical infrastructure to function. Far from immaterial, AI has a growing environmental footprint with real consequences for people, communities and ecosystems.

For LGBTQ+ readers, the climate story is not separate from the story of technology. Just as AI can amplify bias and harm against queer and trans communities, its environmental costs fall disproportionately on vulnerable groups already on the frontlines of climate change, health inequities and systemic discrimination. To understand what ethical AI should mean, we must look beyond algorithms to their environmental impact.

What is AI, really?

Artificial intelligence refers to computer systems designed to perform tasks that normally require human intelligence, such as recognizing speech, analyzing images, generating text or making predictions. The most familiar branch today is generative AI, which can create new text, art, or audio based on patterns in huge datasets.

Training these systems requires staggering amounts of computational power. A single large language model can consume as much electricity as an entire U.S. neighborhood uses in a year. Running them day to day, through millions of user queries and image requests, multiplies that impact many times over.

The infrastructure that makes this possible, massive warehouses of servers known as data centers, relies on continuous cooling. That cooling often depends on water drawn from local reservoirs or rivers and energy from the electrical grid. The more AI grows, the larger this footprint becomes.

The environmental footprint

The energy required to train and run large AI models is already pushing electrical grids to their limits. Typical chatbot prompts can use around five times more electricity than a standard web search, with some studies estimating an AI query consumes up to 23 times more energy. As adoption accelerates, experts warn of a dramatic rise in greenhouse gas emissions if the electricity comes primarily from fossil fuels. To meet this demand, some tech companies are experimenting with nuclear energy. Meta, for example, recently signed a deal to supply its data centers with nuclear power, even while maintaining investments in gas-fired plants. The result is a messy, uneven transition that pairs promises of clean energy with continued dependence on fossil fuels.
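To make those multipliers concrete, here is a rough back-of-envelope sketch in Python. The per-search figure (0.3 watt-hours) and the daily query volume are illustrative assumptions chosen for the example, not measurements from the studies cited above; only the 5x and 23x multipliers come from the text.

```python
# Back-of-envelope comparison of query energy use.
# All baseline figures are illustrative assumptions, not measured values.
WEB_SEARCH_WH = 0.3          # assumed watt-hours per standard web search
MULTIPLIERS = {"low estimate (5x)": 5, "high estimate (23x)": 23}
QUERIES_PER_DAY = 1_000_000  # hypothetical daily query volume

for label, m in MULTIPLIERS.items():
    ai_wh = WEB_SEARCH_WH * m                    # energy per AI query
    daily_kwh = ai_wh * QUERIES_PER_DAY / 1000   # total at 1M queries/day
    print(f"{label}: {ai_wh:.1f} Wh per query, ~{daily_kwh:,.0f} kWh per day")
```

Even under these toy numbers, the gap between a search and an AI query compounds quickly once millions of requests are involved, which is why small per-query differences matter at scale.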

Water is an equally pressing concern. An investigation by the Associated Press found that large data centers can consume millions of gallons every day, enough to match the demand of a small city. When these facilities are built in water-stressed regions, the strain on local ecosystems and communities can be devastating. In underserved rural areas, data centers often drive up utility prices while competing with residents for limited water supplies.

The environmental impact of AI extends further. Servers require rare earth minerals mined under exploitative conditions, and hardware disposal generates toxic e-waste. Efficiency gains often backfire through a phenomenon known as the Jevons Paradox: as tools become faster and cheaper, usage skyrockets, wiping out sustainability gains. Without regulation, AI’s footprint risks becoming one of the fastest-growing contributors to climate stress.
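The Jevons Paradox dynamic described above can be shown with simple arithmetic. The numbers below are hypothetical, chosen only to illustrate how a doubling of efficiency can be wiped out when usage grows even faster.

```python
# Jevons Paradox sketch: per-query efficiency improves,
# but usage grows faster, so total energy use still rises.
# All numbers are hypothetical illustrations.
energy_per_query = 3.0   # Wh before an efficiency improvement
queries = 1_000_000      # daily queries before

total_before = energy_per_query * queries / 1000  # in kWh

# Suppose better hardware and models halve the per-query cost...
energy_per_query_after = energy_per_query / 2
# ...but cheaper, faster AI triples usage.
queries_after = queries * 3

total_after = energy_per_query_after * queries_after / 1000  # in kWh

print(f"before: {total_before:,.0f} kWh/day")
print(f"after:  {total_after:,.0f} kWh/day")  # rises despite 2x efficiency
```

The efficiency gain is real at the level of a single query, yet total consumption grows, which is the paradox the paragraph describes.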

Why it matters for LGBTQ+ communities

At first glance, AI’s environmental impact might seem like a distant issue, an infrastructure problem left to governments and corporations. In reality, the consequences are deeply intersectional. Pollution and water stress caused by data centers often fall hardest on marginalized communities. LGBTQ+ people, particularly queer and trans people of color, already face disproportionate health disparities that these additional burdens only worsen. Rising energy and water costs compound the problem, placing an even heavier weight on low-income households where many LGBTQ+ people are already struggling to make ends meet.

Advocacy groups like LGBT Tech have warned that without intentional design, AI risks reinforcing bias against queer and trans communities. As anthropologist Mary L. Gray has observed, AI often fails LGBTQ+ people because it struggles to capture the fluidity of identities. The same extractive mindset that reduces queer lives to rigid categories also reduces ecosystems to consumable resources. Both are consequences of designing technology without consent, nuance or accountability.

Building an ethical and sustainable AI framework

Around the world, momentum is growing to regulate AI responsibly. The EU’s AI Act, the first of its kind, bans certain harmful uses, sets risk categories and explicitly links responsible AI to environmental sustainability. In the U.S., federal rules remain piecemeal, with executive orders and voluntary industry commitments that critics argue lack enforceable power.

Nonprofits and industry groups are also stepping in. From IBM’s principles of accountability and fairness to certification efforts by AI4Good, the Responsible AI Institute and the Ethical AI Alliance, the goal is to tie technological progress to ethics, transparency and sustainability.

What unites these efforts is the recognition that AI cannot be left to market forces alone. Communities deserve a say when their water, electricity and land are used for data centers, and they should see benefits when resources are consumed. The Indigenous principle of data sovereignty shapes conversations about consent in AI, and applying similar frameworks to environmental stewardship could help ensure technology serves both people and planet.

Moving from awareness to action

It is easy to feel powerless in the face of massive tech companies and global energy systems. But just as individuals and communities have driven climate action, they can also shape AI’s trajectory.

Not all AI models require the same resources. Smaller, open-source models often have lower energy costs. Demanding transparency from vendors about energy use, water consumption and sustainability strategies can create pressure for accountability. If organizations rely heavily on resource-intensive AI, they can reinvest in local climate justice or conservation efforts to repair the impact. And when communities and businesses talk openly about AI's footprint, they help normalize the idea that sustainability matters in technology choices.

Health systems in California are already piloting this approach. The California Health Care Foundation reports that by forming AI working groups, they ensure new tools both improve care and reduce disparities. That kind of intentionality can be applied far beyond medicine.

Looking ahead

AI is at a crossroads. Progress framed only in terms of competitiveness and efficiency is not progress if it comes at the cost of ecosystems, water supplies and community health. Yet AI also holds real promise. Researchers are developing models that require less energy, engineers are testing renewable-powered data centers, and policymakers are beginning to link sustainability with ethics and equity. Tools like planetary accounting can help measure AI’s impact across water, carbon and biodiversity, while innovations in hardware design may drastically reduce resource demands.

For LGBTQ+ communities, the fight for ethical AI is inseparable from the fight for climate justice. Both demand visibility, accountability and shared power, and both offer opportunities to build systems that uplift rather than extract. Liberation movements have always been strongest when struggles are connected. The climate fight is a queer fight. The AI ethics fight is a queer fight.

A more sustainable AI future is possible. With intention, transparency and creativity, we can mitigate environmental costs while embracing the innovations that help us thrive.