Public sector leaders responsible for AI deployments need to assess whether their organization has the governance, data, stakeholder, and psychological safety foundations required to accelerate AI adoption responsibly. They need a systematic way to identify and close critical gaps before launching pilots, so they can meet federal mandates, demonstrate measurable productivity gains, and avoid the high rate of pilot failure that comes from attempting implementation without established foundations.
Our Advice
Critical Insight
Public sector AI acceleration fails not because of inadequate technology or strategy documents, but because organizations attempt implementation before establishing critical pre-conditions. Success requires diagnosing organizational maturity across four foundation dimensions – governance infrastructure, enterprise architecture integration, stakeholder foundations, and psychological safety – then systematically addressing gaps before scaling AI adoption. Treating psychological safety as technical infrastructure enables systematic resistance management before pilots launch.
Impact and Result
Diagnose organizational foundations before accelerating AI adoption. Identify the gaps blocking successful implementation with an evidence-based assessment. Close the trust gap between business leaders and the general workforce. Launch pilots matched to organizational maturity for 50%+ success rates versus the 12% industry average (Lenovo, 2024). Save $69,000 to $112,000 in consulting fees and complete the assessment in two to three hours versus nine to thirteen weeks of consulting.
Accelerate AI in Government for Improved Impact and Results
Build foundations first, then accelerate with confidence.
Analyst perspective
A strong foundations assessment will increase the odds of a successful initiative.
Data from around the world shows that translating AI pilots into long-term operational results is very difficult: across sectors, most AI pilots (88%) never reach production,1 while across the EU public sector, 58% of AI solutions remain in pilot or development phases.2 Meanwhile, even high-performing Australian public organizations have moved fewer than half of their pilots into full production.3 This isn't a technology problem; it's a foundations problem.
Public sector organizations are under intense pressure to accelerate AI adoption, yet most organizations attempt implementation before establishing the critical preconditions for success.
The evidence is clear: 47% of workers feel unprepared for AI,4 23% fear job obsolescence,4 and research shows AI adoption significantly reduces psychological safety.5 Meanwhile, a consistent 10-percentage-point trust gap separates business leaders from employees.6 This creates a destructive pathway from fear to resistance to failed implementation.
This blueprint provides what technology leaders and chief data officers actually need: a diagnostic-first approach that assesses organizational foundations across governance, architecture, stakeholders, and psychological safety, then systematically addresses gaps before scaling.
The principle is simple: diagnose first, ensure your strong foundations are in place, then accelerate and deliver results with confidence.

Andy Best
Research Director, Public Sector
Info-Tech Research Group
1. Lenovo, 2024
2. OECD, 2025
3. Deloitte, 2024
4. "When Automation Backfires," SHRM, 2024
5. Kim et al., 2025
6. Workday, 2024
Executive summary
Your Challenge
- Public sector CIOs face pressure to harness AI responsibly while turning mandates into measurable productivity gains.
- Most AI pilots never reach production and many fail to deliver measurable impact.1 Forty-seven percent of workers feel unprepared.2
- Multilayer governance tensions block progress, while federal mandates demand action within 12 to 18 months.
Common Obstacles
Organizations fail because they attempt implementation before establishing foundations. Obstacles include:
- Pilot-centric thinking without diagnosis.
- Tech optimism bias ignoring the trust gap.3
- Strategy-first fallacy over foundations.
- Psychological safety deficit as the silent killer.
- Shadow AI proliferation creating ungoverned risk.
Info-Tech's Approach
- Diagnose foundations first, then accelerate with confidence.
- Use the Strong Foundations Assessment across four dimensions, considering maturity-appropriate acceleration paths and psychological safety as critical infrastructure, to get gap-closing recommendations with quick wins.
- Complete the diagnostic in two to three hours vs. nine to thirteen weeks of consulting.
1. Lenovo, 2024
2. "When Automation Backfires," SHRM, 2024
3. Workday, 2024
Info-Tech Insight
AI initiatives fail not because of inadequate technology, but because organizations attempt implementation before establishing strong governance, building architecture, aligning with stakeholders, and ensuring psychological safety.
Your challenge
This research is designed to help public sector organizations who need to:
Translate AI mandates into measurable results - federal requirements (OMB M-25-21, Canada's AI Strategy) demand action, but strategies alone don't deliver productivity gains.
Engage staff without triggering fear - 23% fear job obsolescence and replacement.1 AI adoption reduces psychological safety, creating resistance.
Navigate multilayer governance tensions - balance security frameworks (e.g. Communications Security Establishment), operational needs, and political acceleration pressure simultaneously.
Assess foundations before investing - without a framework to diagnose readiness, you risk running AI pilots that fail to reach production.
The core problem: Organizations are under pressure to accelerate AI but lack a systematic method to assess whether foundations exist to support successful implementation.
88% of AI pilots never reach production
Source: Lenovo, 2024
47% of workers feel unprepared for AI
Source: "When Automation Backfires," SHRM, 2024
1. "When Automation Backfires," SHRM, 2024
Common obstacles
These barriers make AI acceleration difficult for most public sector organizations
Pilot-centric thinking: Launching multiple pilots hoping something succeeds, rather than diagnosing foundation gaps first and launching strategically selected initiatives only when foundations are in place.
Technology optimism bias: Assuming that if the technology works, adoption will follow. This ignores the general workforce trust gap and the psychological safety crisis.
Strategy-first fallacy: Investing heavily in strategy documents and roadmaps while neglecting foundational assessment. Attempting implementation without knowing if governance, data, or stakeholder alignment exists.
Psychological safety deficit (primary root cause): AI adoption significantly reduces psychological safety, increasing resistance. Organizations launch AI into hostile environments where non-adoption and passive sabotage become inevitable.
62% of business leaders welcome AI adoption
vs. only
52% of employees
Source: Workday, 2024
The Silent Killer: This trust gap is rarely measured, acknowledged, or systematically addressed. Initiatives launched across this chasm face predictable resistance.
Info-Tech's approach
A diagnostic-first framework: Assess foundations, close gaps, then accelerate
Success requires a fundamental shift: Technology leaders must first diagnose their organization's maturity across four foundation dimensions, then systematically address gaps before scaling AI adoption.
1. Governance Infrastructure
Policy, risk, decision rights, ethics framework, governance model selection
2. Enterprise Architecture
Data readiness, platform integration, security controls, AI tooling
3. Stakeholder Foundations
Executive commitment, staff readiness, change management capacity
4. Psychological Safety
Trust, fear management, experimentation culture, transparency (weighted 1.5x)
The Info-Tech Difference
We treat psychological safety as technical infrastructure rather than "soft" change management. This enables systematic resistance management before pilots launch.
The Strong Foundations Assessment
- Four foundational areas, including psychological safety, assessed across four maturity stages
- Automated gap analysis with dimension scores and status ratings
- Prioritized gap-closing recommendations
- 60-day gap closure roadmap
- Quick-win identification for early momentum
Complete in two to three hours: Save $69,000 to $112,000 in consulting
AI Acceleration Strong Foundations Model
From pilot failure to strategic success: Diagnose first, then accelerate.
AI initiatives face:
88% PILOT FAILURE1
10-POINT TRUST GAP2
50%+ TARGET
Why AI initiatives fail
- Psychological safety deficit - root cause
- 10-percentage-point trust gap between management and the general workforce2
- Maturity-mismatched acceleration
- Governance-architecture misalignment
- Implementation knowledge vacuum
47% of workers feel unprepared3
23% fear job loss3
FOUR FOUNDATIONAL DIMENSIONS
Governance Infrastructure
Policy, Risk, Ethics, Decision Rights
Enterprise Architecture
Data, Platform, Security, AI Tools
Stakeholder Foundations
Leadership, Staff, Change Capacity
Psychological Safety
Trust, Fear Mgmt., Culture (Weighted 1.5×)
METHODOLOGY: CONTEXT → DIAGNOSE → ALIGN → LAUNCH
1 Context
Shadow AI, sentiment
2 Diagnose
Four foundations
3 Align
Close gaps, govern
4 Launch
Pilots to maturity
TOP INSIGHT
AI acceleration fails not because of inadequate technology or strategy, but because organizations attempt implementation before establishing critical foundations across governance, architecture, stakeholders, and psychological safety.
Strong Foundations Assessment
- Maturity Assessment of Four Dimensions, Including Psychological Safety
- Foundations Scorecard With Gap Analysis
- Prioritized Gap-Closing Recommendations
WHAT YOU ACHIEVE
- Evidence-based readiness assessment
- Prioritized gap closure action plan
- Trust gap navigation strategy
- Path to 50%+ pilot success rate
1. Lenovo, 2024
2. Workday, 2024
3. "When Automation Backfires," SHRM, 2024
Info-Tech's methodology for accelerating AI delivery
| | 1. Discover Context | 2. Diagnose Foundations | 3. Align and Close Gaps | 4. Launch Initiative |
|---|---|---|---|---|
| Phase Steps | 1.1 Understand the use case. 1.2 Map stakeholder sentiment. 1.3 Political vs. operational context. 1.4 Union/labor consultation. | 2.1 Complete Strong Foundations Assessment. 2.2 Review psychological safety baseline. 2.3 Generate foundations scorecard. 2.4 Evaluate governance model fit. | 3.1 Implement gap-closing actions. 3.2 Assess stakeholder preparedness. 3.3 Create governance structures. 3.4 Address knowledge gaps. | 4.1 Verify critical gaps addressed. 4.2 Implement psychological safety interventions. 4.3 Launch selected pilot. 4.4 Establish monitoring and improvement. |
| Phase Outcomes | | | | |
Insight summary
Build strong foundations and then accelerate your deployment and success
AI acceleration fails not because of inadequate technology or strategy documents, but because organizations attempt implementation before establishing critical preconditions. Success requires diagnosing foundations across governance, architecture, stakeholders, and psychological safety, then systematically addressing gaps before scaling.
Uncover critical gaps
48% of public servants already use AI, half of them with unsanctioned tools.1 You cannot diagnose what you cannot see: a shadow AI inventory and stakeholder mapping are prerequisites, not optional.
Ensure psychological safety
Psychological safety is the primary root cause of initiative failure, yet most organizations treat it as "soft" change management. It must be assessed and addressed as technical infrastructure.
Custom messaging is key
The AI confidence trust gap requires differentiated communication. One-size-fits-all messaging fails across this chasm.
Maturity awareness and diagnosis are key
Maturity-mismatched acceleration contributes to the 88% pilot failure rate reported by Lenovo.2 Organizations at Aware or Active stages must not attempt Operational or Systemic initiatives until foundations support them.
1. KPMG Canada, 2025
2. Lenovo, 2024
Executive brief case study
INDUSTRY
Government
SOURCE
UK Government Digital Service (GDS), 2025
UK Ministry of Justice
The UK Ministry of Justice is a major government department responsible for the courts, prisons, probation services, and attendance centers. The Ministry employs over 80,000 people and handles millions of documents annually.
Microsoft Copilot Pilot Program
The Ministry conducted a controlled pilot of Microsoft Copilot with proper foundations in place, including governance structures, stakeholder preparation, and psychological safety measures.
Foundations-First Approach
A governance framework was established before pilot launch. Stakeholder communication addressed augmentation vs. replacement concerns. Training and support infrastructure were deployed alongside technology, and clear success metrics were defined upfront.
Results
Workers saved 26 minutes per day, translating to 13 days of productivity gain per worker annually. The pilot demonstrated that proper foundations enable sustainable AI adoption, in contrast to the high failure rate seen when organizations skip foundational work.
Executive brief case study
INDUSTRY
Government
SOURCE
City of Edmonton, 2024
City of Edmonton
The City of Edmonton is one of Canada's largest municipalities, serving over one million residents across Alberta's capital region. The city processes thousands of development and building permits annually, a traditionally paper-intensive process.
AutoReview: AI-Powered Permit Automation
Edmonton deployed AutoReview, an AI-powered permit automation system built on proper data foundations and governance structures. The city invested in data quality, process documentation, and staff preparation before deployment.
Foundations-First Approach
The city established robust data foundations and governance frameworks before AI implementation. Staff were engaged early to understand how automation would augment their expertise. Clear metrics and accountability structures ensured measurable public value.
Results
Permit approval time reduced from two weeks to one day, delivering $5.3M in savings and demonstrating the impact of proper foundations versus the trend of AI pilots that fail without them.
Blueprint deliverables
This blueprint is accompanied by a supporting deliverable to help you accomplish your goals.
KEY DELIVERABLE
AI Acceleration Strong Foundations Assessment
Interactive Excel workbook with automated scoring and visual dashboards. Complete in two to three hours to diagnose your organization's readiness across four critical dimensions.
Assessment Components:
- Four-Stage Maturity Self-Assessment (four foundations, including psychological safety)
- Foundations Scorecard (dimension scores and gap analysis)
- Prioritized Gap-Closing Recommendations
Four-Stage Maturity Self-Assessment (four foundations): Rate your organization on Governance, Architecture & Data, Stakeholder Alignment, and Psychological Safety across four maturity stages, from Aware to Systemic. Psychological safety is weighted 1.5x in the overall score and measures workforce trust, fear levels, and experimentation culture with four targeted questions.
Foundations Scorecard (0-100 with dimension scores): Auto-generated overall readiness score with individual dimension breakdowns and status ratings: Critical (below 50), Warning (50 to 74), or Adequate (75 and above).
Prioritized Gap-Closing Recommendations: Specific actions for each foundation area - prioritize three to five actions for critical gaps, one to two for warning areas.
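The scorecard arithmetic described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the workbook's actual formulas: it assumes the overall score is a weighted average of four 0-100 dimension scores, with psychological safety counted at 1.5x, and applies the stated status thresholds (Critical below 50, Warning 50 to 74, Adequate 75 and above).

```python
def status(score: float) -> str:
    """Map a 0-100 score to the scorecard's status rating."""
    if score < 50:
        return "Critical"
    if score < 75:
        return "Warning"
    return "Adequate"

def overall_readiness(governance: float, architecture: float,
                      stakeholders: float, psych_safety: float) -> float:
    """Assumed formula: weighted average with psychological safety at 1.5x."""
    weighted = governance + architecture + stakeholders + 1.5 * psych_safety
    return weighted / 4.5  # total weight: 1 + 1 + 1 + 1.5

# Example: strong governance and architecture, weak psychological safety.
score = overall_readiness(80, 75, 70, 40)
print(round(score, 1), status(score))  # 63.3 Warning
```

Note how the 1.5x weighting pulls an otherwise adequate organization into Warning territory when psychological safety lags: the deficit cannot be averaged away by strong technical dimensions.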
Value: Complete the foundations assessment in two to three hours vs. nine to thirteen weeks of consulting; save $69,000 to $112,000 in consulting fees.
Blueprint benefits
| IT Benefits | Business Benefits |
|---|---|
| | |
Measure the value of this blueprint
The cost of undertaking an AI foundations assessment varies by organizational size.
- External consulting firms typically charge $50,000 to $150,000 for assessments, with implementation roadmaps adding another $75,000 to $200,000.
- Consulting cost avoidance: $69,000 to $112,000 in external assessment and planning fees.
- Pilot failure risk mitigation: $220,000 average cost of a failed AI pilot × 88% failure probability.
- Time savings: 47 days (two to three hours with Info-Tech's assessment vs. nine to thirteen weeks of traditional consulting).
Source: Average IT consulting rate in the United States is $100 to $250 per hour (Clutch, 2026).
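The figures above can be combined into a simple back-of-the-envelope calculation. The expected-loss framing below (average failed-pilot cost multiplied by the failure probability) is an illustrative assumption, not Info-Tech's published ROI model; it uses only the numbers quoted in this section.

```python
# Figures quoted in this section (USD).
consulting_low, consulting_high = 69_000, 112_000  # assessment fee avoidance range
failed_pilot_cost = 220_000                        # avg. cost of a failed AI pilot
baseline_failure_rate = 0.88                       # industry pilot failure rate

# Expected loss of launching a pilot at the baseline failure rate.
expected_pilot_loss = failed_pilot_cost * baseline_failure_rate
print(f"Expected pilot loss: ${expected_pilot_loss:,.0f}")  # $193,600

# Midpoint of the consulting-fee range avoided by the self-assessment.
avoided_fees_midpoint = (consulting_low + consulting_high) / 2
print(f"Avoided fees (midpoint): ${avoided_fees_midpoint:,.0f}")  # $90,500
```

Even under these rough assumptions, the avoided consulting fees and the risk-adjusted cost of a failed pilot each exceed the two-to-three-hour effort of running the self-assessment by a wide margin.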
Info-Tech offers various levels of support to best suit your needs
| DIY Toolkit | Guided Implementation | Workshop | Executive & Technical Counseling | Consulting |
|---|---|---|---|---|
| "Our team has already made this critical project a priority, and we have the time and capability, but some guidance along the way would be helpful." | "Our team knows that we need to fix a process, but we need assistance to determine where to focus. Some check-ins along the way would help keep us on track." | "We need to hit the ground running and get this project kicked off immediately. Our team has the ability to take this over once we get a framework and strategy in place." | "Our team and processes are maturing; however, to expedite the journey we'll need a seasoned practitioner to coach and validate approaches, deliverables, and opportunities." | "Our team does not have the time or the knowledge to take this project on. We need assistance through the entirety of this project." |
Diagnostics and consistent frameworks are used throughout all five options.