Critical Thinking Resource

AI Concerns: A Critical Thinking Q&A

Honest answers to the hardest questions about AI. Every concern here is legitimate. Every one deserves more than a headline.

By Liz B. Baker

These questions came from real students. In a VCU communications class on AI, my daughter Rachel and I were invited as debate adversaries. She represented the skeptics. I represented the case for strategic AI engagement. The students were informed, vocal, and unapologetically critical. They brought every concern on this list and more.

I welcomed it. These aren't fringe objections. They are the questions every serious person should be asking. What follows are my best attempts at honest answers, written the way I wish I'd had more time to respond in that room.

Liz B. Baker  ·  Founder, Global Institute for AI & Humanity

"AI content is slop. It's garbage."

A lot of it is. The internet is flooded with low-effort, AI-generated content that nobody asked for and nobody benefits from. That's real and it's a problem.

But the conclusion that all AI output is garbage doesn't survive scrutiny. The quality of AI output depends entirely on the quality of the human interaction behind it. An unconfigured AI used as a copy-paste machine produces slop. A configured AI used as a thinking partner produces work that reflects the human's judgment, experience, and standards, because the human is thinking harder, not less.

The critical thinking question: Are you evaluating the tool by its worst use or its best? You wouldn't judge writing by the worst blog post on the internet. You'd judge it by what a skilled writer produces. The same standard applies to AI collaboration.

"AI steals from creators. It's built on stolen work."

AI models were trained on vast amounts of publicly available text and images, much of it created by people who were never compensated or consulted. That's a legitimate grievance, and it's being litigated in courts right now. Multiple lawsuits are active. The legal and ethical frameworks are still being written.

But critical thinking adds nuance here. Every technology in history has disrupted the people who came before it. The printing press displaced scribes. Photography disrupted portrait painters. Digital music disrupted physical media. In every case, the technology also created new opportunities for creators that didn't exist before. That doesn't erase the harm to the people who were displaced. It means the conversation has to hold both truths at once.

The critical thinking question: Can you acknowledge that creator displacement is a real harm AND that the technology creates new forms of creative expression simultaneously? If you can only hold one of those truths, you're doing criticism, not critical thinking.

"AI uses too much water and energy. It's environmentally irresponsible."

The environmental footprint of AI is real and significant. Data centers consume enormous amounts of water for cooling and electricity for processing. This is a legitimate concern that deserves serious attention and better solutions from the companies building these systems.

And. The smartphone in your pocket, the social media you use daily, the streaming services you watch, the cloud storage that holds your photos: all of these run on the same data centers, use the same water, and consume the same energy. Social media's environmental footprint was already astronomical before generative AI entered the picture. The supply chains that build the devices you use to criticize AI have their own environmental and labor costs.

The critical thinking question: Are you applying this standard consistently, or only to the technology you've already decided to oppose? If water usage disqualifies AI, does it also disqualify the platforms you use every day? If not, why not? What would it look like to advocate for environmental accountability across all of tech, not just the part that's new?

"AI is biased against impoverished communities. It takes their jobs."

AI systems can and do reflect biases present in their training data. This is documented, it's harmful, and it requires active work to address. Hiring algorithms have discriminated. Facial recognition has higher error rates for people of color. Credit scoring models have perpetuated existing inequities. These are facts, not opinions.

The jobs question is more complex. Historically, technology has displaced some jobs and created others. The real risk isn't that AI eliminates jobs universally. It's that the transition disproportionately harms people who lack access to training and education, which are the same communities that are already underserved.

The critical thinking question: If AI is going to exist regardless of how you feel about it, what's the more effective response: refusing to engage with it, or demanding that impoverished communities get access, training, and a seat at the table where deployment decisions are made? Which response actually helps the people you're concerned about?

"AI should only be available to the government and important people. The public shouldn't have access."

This is the most dangerous position in the room, and it sounds like the most virtuous one.

Who decides who counts as "important people"? Who defines responsible use? Every centralization of power in history has been justified by the claim that the powerful would use it responsibly. And every centralization of power has eventually been used to control, surveil, or oppress the people who were denied access to it.

In China, AI-powered surveillance tracks ethnic minorities, assigns social credit scores, and feeds a system where people disappear for posting the wrong thing. That's what "only the government has access" looks like at scale.

The critical thinking question: If you restrict AI to governments and elites, who protects the people those governments and elites have historically failed to protect? Is it possible that broad public access to AI, with appropriate regulation and education, is actually safer than concentrated access?

"AI will go away. It's financially unsustainable."

Some AI products will fail. That's a product failure, not a technology failure. It's like watching one dot-com startup go bankrupt in 2001 and concluding the internet was over.

The enterprise AI market is accelerating. Microsoft, Google, Amazon, Apple, and Meta are collectively investing hundreds of billions. Employers across 31 countries say they won't hire candidates without AI skills. Workers with AI fluency earn a significant wage premium. The technology isn't going away because one consumer product didn't find a market.

The critical thinking question: Are you confusing the failure of specific products with the viability of the underlying technology? Would you have bet against the internet in 2001 because Pets.com failed?

"I wouldn't read your book because you collaborated with AI to write it."

That's a choice. But let's examine what it means.

The book is The AI Thinking Model: Reclaiming Critical Thinking in the Age of Artificial Intelligence. The ideas in it are human ideas. The research is sourced from peer-reviewed institutions. The stories are lived experiences. The framework was developed through thousands of hours of practice. AI was used as a thinking partner, an editor, a research assistant, and a collaborator, the same way an author might use a human editor, a research assistant, or a writing group.

Refusing to engage with ideas because of the process used to develop them isn't a principled stand. It's prejudice dressed as a standard. If the ideas are wrong, challenge them. If the framework doesn't work, prove it. But dismissing them because AI was part of the process is choosing ignorance over engagement.

The critical thinking question: Would you reject a medical breakthrough if the researchers used AI to analyze data? Would you refuse a building if the architects used AI-assisted design tools? Where exactly is your line, and is it consistent?

"AI will make us dumber. It will replace human thinking."

If you use AI as a shortcut, yes, it will. The research supports this. Students who used unconfigured AI as a study tool scored significantly worse on exams than students who never used AI at all. The tool that felt helpful was actively undermining their learning.

But the same research shows the opposite effect when AI is configured strategically. Students who used a well-configured AI tutor performed just as well as those without AI and solved significantly more practice problems. They learned more, not less.

The tool is the same. The configuration determines the outcome.

This is the problem the AI Thinking Model was built to solve. The free Configuration Wizard takes fifteen minutes and reconfigures AI to develop your thinking rather than replace it. That is not a pitch. It is the direct answer to this concern.

The critical thinking question: Are you evaluating AI based on how most people use it, or based on what's possible when it's used well? If a gym made some people less healthy because they used the equipment wrong, would you close the gym or teach people how to use it?

"The US government uses AI to surveil citizens. It's the same as China."

The US government does use technology for surveillance, and that deserves scrutiny and accountability. Speed cameras, facial recognition pilots, data collection: these are worth debating and challenging.

But equating a traffic camera with ethnic persecution fails the most basic test of proportionality. In the US, you can contest a ticket in court. You can vote for representatives who ban speed cameras. Multiple cities and states have done exactly that. You can protest surveillance programs publicly without disappearing. You can write, speak, and organize against government technology use without consequence.

The critical thinking question: Can you hold two truths simultaneously? Government surveillance in the US is worth watching and challenging. Government surveillance in China is an active tool of ethnic oppression. These are not the same thing. If you can't distinguish between them, you're collapsing a distinction that matters for the billions of people living under actual authoritarian AI surveillance.

"I've already made up my mind that AI is bad."

That's honest. And honesty is the prerequisite for growth.

But a mind that's made up is a mind that's stopped thinking. Critical thinking isn't arriving at a conclusion and defending it forever. It's holding your conclusion up to scrutiny as rigorously as you hold up everyone else's.

Every concern on this list is real. Water usage is real. Bias is real. Creator displacement is real. The question isn't whether these problems exist. It's whether the response to them should be rejection or engagement. Rejection feels principled. Engagement is harder. It requires sitting with complexity, tolerating ambiguity, and accepting that a technology can be both harmful in some applications and transformative in others.

The future doesn't belong to people who rejected AI. It doesn't belong to people who accepted it uncritically either. It belongs to the people who engaged with it thoughtfully, demanded accountability from the people building it, and configured it to develop rather than diminish the humans who use it.

That's what the AI Thinking Model is built for. Not to convince you AI is good. To help you think about it clearly. Explore the AI Thinking Model.

The future belongs to the people who engaged with AI thoughtfully, demanded accountability, and configured it to develop rather than diminish the humans who use it.

Liz B. Baker · GIAH

Keep thinking.

The AI Thinking Model is built to help anyone engage with AI as a tool for sharper thinking, not a replacement for it. Explore the framework, or take action.

Explore the AI Thinking Model · Hard Questions