AI Strategy for Terry College of Business

A Strategic Plan for 2026–2028

Faculty AI Strategy Task Force

Terry College of Business

February 2026

Task Force Members

Name, Department / Unit
Chris Cornwell (Chair), Economics & Ivester Institute
Margaret Christ, Accounting
Susan Cohen, Management
Jerry Kane, Management Information Systems
Son Lam, Marketing
Jim Metcalf, Office of Information Technology
Nikhil Paradkar, Finance
Marc Ragin, Institute for Leadership Advancement

Overview

Part I: The AI Landscape

  • From chatbot to autonomous agent
  • Enterprise adoption accelerating
  • Labor market impacts

Part II: Strategic Framework

  • Strategic thesis and operating model
  • Three pillars: Faculty, Students, Industry

Part III: Implementation

  • 24-month phased roadmap
  • Governance and risk management

Discussion

  • Questions and next steps

Part I: The AI Landscape

From Chatbot to Agent

Chatbot Era (2023–2024)

  • Human remains the actor
  • AI assists when prompted
  • You write, AI helps edit
  • You analyze, AI helps explain

Agent Era (2025–2026)

  • Human becomes the principal
  • AI acts autonomously
  • Agent builds the model, tests it, presents results
  • Agent executes multi-step workflows


The shift is from teaching students to do the work to teaching students to direct, verify, and take responsibility for work that agents perform.

The Agent Timeline

15 Months of Transformation

  • Nov 2024: Anthropic open-sources Model Context Protocol (MCP)
  • Feb 2025: Claude Code launches as research preview
  • May 2025: Claude Code and OpenAI Codex generally available
  • Mid-2025: Google, Microsoft, enterprise vendors adopt agent frameworks
  • Aug 2025: Gartner projects 40% of enterprise apps will embed agents by end of 2026
  • Jan 2026: Anthropic releases Claude Cowork for non-technical users
  • Feb 2026: OpenAI launches multi-agent desktop application

Enterprise Adoption is Accelerating

Already Using

88%

of organizations use AI regularly in at least one business function

McKinsey 2025

Deploying Agents

23% scaling agentic AI

39% actively experimenting

McKinsey 2025

Near-Term Projection

40%

of enterprise apps will embed task-specific AI agents by end of 2026

Gartner 2025


The Gap

79% of companies report AI adoption, but only 21% have mature governance models

Entry-Level Workers Bear the Brunt

Employment Effects

  • Workers aged 22–25 in AI-exposed occupations experienced a 13% relative decline in employment
  • Decline through lost jobs, not lower wages
  • Concentrated in occupations where AI automates rather than augments

Brynjolfsson, Chandar & Chen 2025

Hiring Patterns

  • Firms adopting GenAI show sharp declines in junior hiring
  • Senior employment continues to rise
  • Not layoffs—firms simply stopped filling entry-level roles

Hosseini & Lichtinger 2025

The traditional career ladder is being compressed from the bottom.

The Cybernetic Teammate

P&G Field Experiment Findings (Dell’Acqua et al. 2025)

  • Individuals working with AI matched the performance of two-person teams working without AI
  • Non-experts with AI achieved performance comparable to expert teams
  • Teams with AI were 3x more likely to produce top-10% solutions

The Implication

One analyst with well-configured AI agents may outperform a traditional team.

Organizations will need:

  • Fewer people who can do more
  • Agent orchestration skills
  • Output verification at scale

Graduates will need:

  • Judgment about delegation
  • Process discipline
  • Responsibility for AI-mediated outcomes

Part II: Strategic Framework

The Challenge for Business Schools

The Training Paradox

If agents perform the tasks that built junior expertise, how do newcomers develop judgment?

  • Can’t verify what you don’t understand
  • Can’t delegate wisely without domain knowledge
  • Can’t take responsibility for work you can’t evaluate

The Integrity Crisis

Academic honesty cases at UGA nearly doubled from AY24 to AY25

  • Detection is not a durable strategy
  • AI-generated text increasingly indistinguishable
  • Traditional assessment no longer reliably signals mastery

The business school must ensure graduates arrive with enough foundational expertise that firms find value in hiring them—not as cheaper alternatives to agents, but as the professionals who can direct agents, verify their work, and make decisions that agents cannot.

What We Heard From You

Share of faculty respondents who:

  • Use at least one AI tool for instruction: 85%
  • Use ChatGPT specifically: 76%
  • Report that AI has NOT changed their assessment approach: 56%
  • Are willing to pilot new approaches: 49%
  • Cite accuracy concerns as the top barrier: 78%
  • Cite time to learn as a barrier: 57%
  • Prefer quick guides over workshops: 68%

Note

Faculty are engaged but need practical support—not mandates. The barriers you cited are solvable: accuracy concerns, time constraints, and tool access.

Strategic Thesis

Our Core Commitment

Terry will become the business school known for producing graduates and partners who can use AI to make better decisions—with integrity, transparency, and measurable impact.

In a fast-takeoff world, this means graduates who can direct and oversee autonomous AI agents, not merely prompt chatbots.


The Durable Advantage

  • Tool proficiency will be ubiquitous and commoditized
  • Differentiation comes from judgment: framing problems, verifying outputs, managing workflows, making decisions
  • These are deeply human capabilities—and the historical strength of business education

Operating Model: Stable Core, Flexible Frontier

Stable Core

Moves slowly and deliberately

  • Governance: AI Steering Committee with clear authority
  • Tools: Approved, enterprise-licensed AI stack
  • Hub: Teaching & Learning resource center
  • Standards: Tiered assessment framework
  • Baseline: Common AI literacy expectations

Flexible Frontier

Moves fast and learns from failure

  • Pilots: Faculty-led course experiments
  • Agent learning: Authentic AI-integrated experiences
  • Credentials: Rapidly deployable micro-credentials
  • Projects: Industry-sponsored applied work

Without a stable core, experiments cannot build on each other and institutional credibility erodes. Without a flexible frontier, we cannot adapt to rapid change.

Five Guiding Principles

  1. Credibility is the asset
    • Assessment design and verification practices are the foundation of our value proposition
  2. Teach judgment, not just tools
    • Tools will change; the ability to frame, test, validate, and communicate persists
  3. Default to access with guardrails
    • Enable responsible use through structure, not prohibition
  4. Design for faculty time
    • The binding constraint is bandwidth, not interest
  5. Move with disciplined speed
    • Every pilot has clear success criteria and a decision point

Pillar 1: Faculty Development

Three-Track Development Program

Foundational Track (all faculty)

  • AI capabilities and limitations
  • Hallucination and bias risks
  • Course policy guidance

Applied Track (teaching with AI)

  • Tiered assessment design patterns
  • Assignment redesign templates
  • Discipline-specific examples

Advanced Track (research enablement)

  • Literature synthesis workflows
  • Coding and analysis assistance

Faculty AI Fellows

Rotating cohort with course release/stipend:

  • Redesign courses for AI integration
  • Document approaches for the Hub
  • Mentor colleagues in department
  • Present at showcases

Why Fellows?

Converts the 49% willing to pilot into visible leadership that drives adoption by example.

What Support Looks Like

The Teaching & Learning Hub

A centralized resource that reduces duplication and protects your time

What You Get

  • Quick guides (the format you prefer)
  • Assignment templates for each AI-use tier
  • Rubrics for defense-based evaluation
  • Office hours for rapid-response support
  • Discipline-specific examples from Fellows

What Changes

  • You don’t solve problems others have already solved
  • Syllabus language is standardized (less work)
  • Tool evaluation is done for you (approved stack)
  • Best practices flow across departments

The Hub should feel like an enablement resource, not a compliance checkpoint.

Pillar 2: Student Learning

Agent-Ready Learning Outcomes

Every Terry graduate should demonstrate competence in:

  1. Problem framing and domain understanding
    • Agents execute; humans frame what to execute
  2. Effective human-agent interaction
    • Selection, configuration, decomposition, iteration
  3. Verification and validation (illustrated in the sketch below)
    • Fact-checking, provenance, sensitivity analysis
  4. Ethical and legal awareness
    • Privacy, IP, bias, disclosure, consequences
  5. Decision-making in AI-mediated environments
    • Synthesis, judgment, communication, responsibility

Verification is the highest-value human skill in a fast-takeoff world.
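
To make outcome 3 concrete, here is a minimal sketch of the habit we want graduates to build: independently recomputing an agent-reported figure rather than accepting it on trust. The figure and growth assumptions below are hypothetical illustrations, not part of any curriculum.

    # Minimal verification sketch (Python). The agent-reported figure
    # and the growth assumptions are hypothetical.
    agent_reported_value = 1_610_510           # figure the agent claimed
    principal, rate, years = 1_000_000, 0.10, 5

    # Recompute the five-year compound growth projection independently.
    recomputed = principal * (1 + rate) ** years
    tolerance = 0.005 * recomputed             # allow 0.5% rounding slack

    if abs(recomputed - agent_reported_value) <= tolerance:
        print(f"Verified: recomputed {recomputed:,.0f} matches the agent's figure.")
    else:
        print(f"Discrepancy: recomputed {recomputed:,.0f}; agent said {agent_reported_value:,}.")

The point is not the arithmetic; it is that the student, not the agent, owns the number that goes into the deck.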

The Tiered Assessment Framework

Green Zone (Encouraged): AI agents allowed and encouraged, with disclosure. Example assessments: brainstorming, drafting, research synthesis, agent-assisted analysis.

Yellow Zone (Constrained): AI allowed for specific steps, with process evidence (see the log sketch below). Example assessments: research papers with interaction logs, analysis with validation documentation.

Red Zone (Restricted): AI not permitted; individual demonstration required. Example assessments: in-class exams, oral defenses, proctored assessments, live presentations.
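
To make "process evidence" concrete, here is a minimal sketch of what one entry in a submitted agent-interaction log might contain. The field names and values are illustrative assumptions, not a prescribed Terry format.

    # Minimal sketch of one Yellow Zone interaction-log entry (Python).
    # Field names and values are hypothetical illustrations.
    import json
    from datetime import datetime, timezone

    log_entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": "approved-ai-agent",   # which approved tool was used
        "task": "synthesize three sources on junior hiring trends",
        "prompt_summary": "Asked for key findings from the attached papers",
        "output_disposition": "Kept the summary; corrected one misattributed statistic",
        "verification": "Checked each cited figure against the original source",
    }

    print(json.dumps(log_entry, indent=2))

A log like this lets an instructor grade the student's process (what was delegated, what was checked, what was changed) rather than only the final artifact.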

The Framework’s Purpose

Normalizes AI use where appropriate, creates structured skill-building, and preserves spaces where students demonstrate genuine mastery.

Pillar 3: Industry Engagement

What Industry Tells Us

  • AI use will be assumed, not differentiating
  • Employers need graduates who can verify agent outputs and manage risk
  • The governance gap creates demand for AI-literate professionals

AI Employer Advisory Council

  • 12–20 leaders across key sectors
  • Quarterly candid feedback
  • Validates learning outcomes match needs

Engagement Mechanisms

Terry AI in Business Practice Summit

  • Annual convening for applied, honest discussion
  • Faculty, students, and practitioners together

Applied Project Marketplace

  • Real problems from real organizations
  • Supervised student teams using AI responsibly

Executive Education

  • Mid-career upskilling programs
  • Deepens employer relationships

Part III: Implementation

24-Month Roadmap

Phase 1

Build the Core

Months 1–6

  • Establish AI Steering Committee
  • Launch Teaching & Learning Hub
  • Recruit first Fellows cohort
  • Finalize agent-ready outcomes
  • Pilot defense-based evaluation

Phase 2

Pilot to Scale

Months 7–12

  • Full foundational training rollout
  • Map outcomes across programs
  • Host first Terry AI Summit
  • Launch project marketplace
  • Begin executive education pilots

Phase 3

Institutionalize

Months 13–24

  • Embed in program review
  • Full AI Certificate launch
  • Scale executive education
  • Integrate into annual reviews
  • Pursue external recognition

The sequencing reflects dependencies: governance before pilots, pilots before embedding.

Phase 1: Build the Core (Months 1–6)

Key actions by domain:

Governance: Establish AI Steering Committee; designate AI Strategy Lead; negotiate enterprise licenses
Faculty Development: Launch the Hub with a minimum viable offering; recruit the first Fellows cohort; deploy foundational training
Student Learning: Finalize agent-ready outcomes; develop tiered assessment guidance; pilot defense-based evaluation in 3+ departments
Industry Engagement: Plan the Advisory Council; approach founding partners

Critical Deliverable

Clear, publicly communicated decisions about AI tool use and assessment expectations within 60 days. Faculty and students cannot plan if the rules are uncertain.

Phases 2–3: Scale & Embed

Phase 2 Milestones (Months 7–12)

  • Applied Track launches for faculty
  • Hub repository expands with Fellows examples
  • Agent-ready outcomes mapped to all core courses
  • Student AI literacy module available
  • First Terry AI Summit held
  • At least 5 applied project partnerships

Phase 3 Milestones (Months 13–24)

  • Advanced (research enablement) track launches
  • AI engagement in faculty annual review
  • AI outcomes in program review and accreditation
  • Full AI Certificate operational
  • Executive education at scale
  • External funding pursuit

Phase 3 is when the strategy must prove its durability by surviving a leadership transition, a budget cycle, or a technology shift.

Managing the Risks

  • Assessment credibility erosion (likelihood high, impact high). Mitigation: tiered framework; defense-based evaluation; process evidence
  • Agent-enabled academic dishonesty (likelihood high, impact high). Mitigation: agent interaction logging; oral defenses; real-time demonstration
  • Faculty adoption gaps (likelihood medium, impact high). Mitigation: peer-led development; Fellows model; protect faculty time
  • Student AI dependency (likelihood medium, impact medium). Mitigation: Red Zone assessments; metacognitive exercises; reflection requirements

The Deeper Risk

The cost of investing too cautiously—moving at academic pace while the world restructures at technology pace—exceeds the cost of investing too aggressively.

Conclusion

The Case for Urgency with Wisdom

The Core Argument

In a fast-takeoff world, the graduates who thrive will not be those who can use AI most skillfully—that skill will be ubiquitous.

They will be those who can:

  • Exercise judgment when AI is confidently wrong
  • Verify outputs when the stakes are high
  • Direct agent workflows toward the right problems
  • Take professional responsibility for AI-mediated decisions

This is what Terry must teach. This is the signal a Terry degree must send.

Questions & Discussion


Resources

  • Full Report: Available from the Task Force
  • Faculty Survey Results: Detailed analysis in appendix
  • Implementation Details: Phased roadmap with action items


Next Steps

  • Steering Committee formation
  • Hub development planning
  • Fellows program recruitment


Your Input Matters

This strategy succeeds only with faculty engagement. We want your questions, concerns, and ideas.