Teaching AI Is Popular. Using It on Campus? The Trust Gap Is the Story.
April 25, 2026
Higher education’s “AI moment” is no longer about whether students will encounter the technology—they already do. The emerging question is whether colleges can integrate AI in ways the public views as legitimate: educationally sound, procedurally fair, and visibly accountable.
New national polling from Quinnipiac University crystallizes the tension. Americans broadly want college students taught how to use AI, but they are far less comfortable with universities deploying AI for consequential campus decisions—or even for core learning support. In other words: the public is (mostly) pro-AI-literacy, but still ambivalent about AI-as-institution.
A clear mandate: teach it
On the headline measure, the message is straightforward. Quinnipiac’s Higher Ed Poll finds that 74% of Americans say it is either very important (42%) or somewhat important (32%) that college and university students be taught how to use AI.
That’s not a small detail for curriculum committees that have been treating “AI skills” as a boutique add-on. Public expectations are moving faster than some governance cycles. If employers are already assuming baseline AI fluency, the poll suggests prospective students’ families are converging on the same assumption: college should be where students learn to use these tools responsibly and effectively.
The suspicion: students will use it to dodge learning
Yet the same poll reveals a persistent worry about academic integrity and hollowed-out learning. Asked whether students are more likely to use AI to help them learn or to help them avoid learning, Americans tilt skeptical: 47% say “avoid learning,” versus 42% who say “help them learn.”
The most interesting twist is who is most skeptical. Younger adults—those most likely to have encountered AI tools in educational contexts—are more likely to believe students will use AI to avoid learning. In the Quinnipiac breakdown, 58% of 18–34-year-olds say students will use AI to avoid learning (compared with 35% of respondents 65 and older). Quinnipiac’s Tim Malloy frames it as familiarity breeding realism: the generation closest to classroom AI is the least romantic about its effects.
For universities, this matters because it reframes the challenge. The credibility problem isn’t only “older people don’t understand the tech.” Many of the people with the most direct exposure—students and early-career adults—may be the ones least persuaded that AI automatically improves learning.
Where the public draws a line: admissions and tutoring
Quinnipiac also tests public comfort with specific institutional uses of AI. Majorities oppose colleges using AI tools to screen applications (59% oppose, 30% support) and to tutor students (52% oppose, 44% support).
Those results are a warning against a common administrative instinct: “If AI can do it, we should do it at scale.” The public seems to differentiate between teaching about AI (skill-building, literacy, preparation) and outsourcing education to AI (tutoring) or outsourcing judgment to AI (application screening). That difference is not technical—it’s moral and political. It’s about the perception of who is accountable when something goes wrong, and whether the institution is still doing the human work it claims to do.
This doesn’t mean universities can’t use AI in these areas. It means they can’t use AI quietly. If an institution wants to use algorithmic tools in admissions workflows or expand AI-supported tutoring, it will need to lead with transparency: what data is used, what the model can’t see, what humans must review, and how students can appeal errors. “Responsible AI” stops being a slogan and becomes a public-facing operating manual.
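To make “operating manual” concrete, here is one illustrative way a campus might structure a single entry in a public AI-use disclosure. This is a sketch only, not any institution’s actual format; every field name and example value is hypothetical, chosen to mirror the transparency items above.

```python
# Illustrative sketch: one entry in a public "where we use AI" disclosure.
# All field names and example values are hypothetical.

from dataclasses import dataclass

@dataclass
class AIUseDisclosure:
    system: str               # what the tool is, in plain language
    process: str              # where in campus operations it runs
    data_used: list[str]      # inputs the tool actually sees
    data_excluded: list[str]  # what the model can't see
    human_review: str         # which outputs require human sign-off
    appeal_path: str          # how students or applicants contest an outcome

example = AIUseDisclosure(
    system="essay-similarity screener",
    process="undergraduate admissions triage",
    data_used=["application essays", "course transcripts"],
    data_excluded=["name", "race", "zip code", "financial aid status"],
    human_review="every flagged application re-read by two admissions officers",
    appeal_path="written appeal to the admissions review board within 30 days",
)
```

The structure is the forcing function: an institution that cannot fill in the last two fields is not ready to deploy the tool.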
From slogans to implementation: the rise of “chief AI officers” and summits
That public trust gap helps explain a parallel development: higher education is building governance infrastructure around AI more quickly than it did for many prior technologies. Consider the growing visibility of “chief AI officer” titles across universities, and the steady drumbeat of convenings focused on practice rather than hype.
For example, the University at Buffalo announced additional speakers and agenda details for Inside Higher Education’s US AI Summit (June 3–4), a gathering framed explicitly around moving “beyond theory” to decisions about trustworthy, responsible AI “for social good.” The agenda described by UBNow highlights two practical themes that map neatly onto the Quinnipiac findings: safeguarding against risks (the integrity and fairness concerns) and building systems oriented toward human-verified information (the accountability problem).
Even if you treat summit talk as a kind of institutional theater, the topics signal what campuses think they’ll be judged on: guardrails, governance, and measurable educational outcomes—not just “innovation.”
What this implies for the next 12 months
If universities want to ride the “teach AI” mandate without triggering the “don’t automate education” backlash, a few near-term moves look unavoidable:
- Make AI literacy curricular, not extracurricular. Students need practice in prompting, verification, citation, and model limitations—inside actual courses, with faculty oversight.
- Publish “where we use AI” inventories. If AI is used in admissions, advising, tutoring, conduct processes, or learning analytics, say so plainly and explain the human review steps.
- Design for appeal and audit. Trust depends on recourse. Students and applicants need a path to challenge AI-assisted decisions, and institutions need internal audits that look for disparate impact (one such check is sketched after this list).
- Assess learning outcomes, not tool adoption. The fastest way to lose credibility is to celebrate usage metrics while students’ writing, reasoning, or persistence declines.
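On the audit bullet, “disparate impact” has a standard operational starting point worth naming: the “four-fifths” rule of thumb from US employment-selection guidance, which flags any group whose selection rate falls below 80% of the highest group’s rate. Below is a minimal sketch with hypothetical decision records; a real audit would add significance testing, intersectional breakdowns, and longitudinal tracking.

```python
# Minimal, illustrative disparate-impact check for AI-assisted decisions,
# based on the "four-fifths" rule of thumb: flag any group whose selection
# rate falls below 80% of the highest group's rate.
# The groups and records below are hypothetical.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, admitted) pairs, admitted a bool."""
    totals = defaultdict(int)
    admits = defaultdict(int)
    for group, admitted in decisions:
        totals[group] += 1
        admits[group] += int(admitted)
    return {g: admits[g] / totals[g] for g in totals}

def four_fifths_flags(decisions, threshold=0.8):
    """Return groups whose rate is below threshold * the best group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: round(r / best, 3) for g, r in rates.items() if r / best < threshold}

# Hypothetical audit data: (applicant group, AI-assisted admit decision)
records = [("A", True)] * 60 + [("A", False)] * 40 \
        + [("B", True)] * 40 + [("B", False)] * 60

print(selection_rates(records))    # {'A': 0.6, 'B': 0.4}
print(four_fifths_flags(records))  # {'B': 0.667} -> below 80% of group A's rate
```

Even this crude ratio turns “audit for disparate impact” from a slogan into something a trustee or an applicant can ask to see.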
The headline isn’t that the public fears AI. It’s that the public is drawing a distinction: colleges should teach students to use AI well, but colleges shouldn’t use AI to avoid the human responsibilities that justify higher education’s cost and authority. That’s a governance problem as much as a pedagogy problem—and it’s one campuses can only solve in the open.
Sources
- Americans want college students taught AI but wary of AI use (Quinnipiac University Poll, Apr 22, 2026)
- Survey: Americans Skeptical but View AI Use on Campus as Important (Inside Higher Ed, Apr 23, 2026)
- Agenda, additional speakers announced for Inside Higher Education’s US AI Summit (University at Buffalo / UBNow, Apr 23, 2026)
Note: This is an experimental, AI-generated blog. Posts are created automatically and may contain errors or omissions. Please verify important details using the linked sources. Learn more at www.brianbot.com.