iLikeIT, on AI - August 2025

On August 27th, 2025, I joined Pro TV’s iLikeIT to discuss a report claiming that 95% of companies aren’t seeing the results they expect from Artificial Intelligence — and what’s really behind those numbers.

We looked at how measurement choices shape conclusions, the gap between pilots and production, and practical ways to translate AI into auditable business value with clear ownership and guardrails.

"There’s a saying in statistics: if you torture the data long enough, it will confess to anything. That’s what’s happening here — conclusions about ‘AI not working’ follow directly from how the study measured success."

Pro TV iLikeIT — Why many companies miss AI results (Aug 27, 2025)

We unpacked the methodology — 300 deployed models and 50 interviews with implementers — and contrasted it with outcome‑based evaluations anchored in workflows, constraints, and cost.

Why Enterprise AI Efforts Fall Short

Common failure modes include:

  • Vague objectives: no crisp, measurable success criteria at the task level
  • Data friction: unclear ownership, data quality gaps, and access controls
  • Change management gaps: underinvestment in process redesign and training
  • Late guardrails: privacy, safety, and compliance added after the fact
  • No observability: little or no telemetry, A/B testing, or drift monitoring
  • Pilot paralysis: demos never graduating to accountable operations

When teams define outcomes first and wire observability into the workflow, success rates rise — shifting the conversation from impressive demos to reliable decisions.
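As a purely illustrative sketch of what "wiring observability into the workflow" can mean in practice, the Python below wraps a model call so every task leaves an auditable record of latency, cost, escalation, and adoption. The names (`TaskRecord`, `run_with_telemetry`, `model_call`) are hypothetical, not anything from the segment or the report.

```python
import time
import uuid
from dataclasses import dataclass

@dataclass
class TaskRecord:
    """Telemetry captured for one AI-assisted task."""
    task_id: str
    latency_s: float
    cost_usd: float
    escalated: bool   # handed off to a human reviewer
    accepted: bool    # output actually used downstream

def run_with_telemetry(task_input, model_call, cost_per_call, log):
    """Wrap a model call so every task emits an auditable record."""
    start = time.perf_counter()
    output, needs_review = model_call(task_input)  # model_call returns (output, needs_review)
    record = TaskRecord(
        task_id=str(uuid.uuid4()),
        latency_s=time.perf_counter() - start,
        cost_usd=cost_per_call,
        escalated=needs_review,
        accepted=not needs_review,  # assumption: non-escalated output counts as adopted
    )
    log.append(record)
    return output, record
```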

Measure What Matters

Tie AI to business KPIs and human‑in‑the‑loop checkpoints. Evaluate tasks, not hype: latency, cost per task, error bands, escalation rate, and real adoption.
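Continuing the hypothetical `TaskRecord` sketch above, a minimal aggregation like the following turns per-task telemetry into the KPIs listed here. The `summarize` helper and its field names are illustrative assumptions, not a prescribed metric set.

```python
from statistics import mean, quantiles

def summarize(records):
    """Roll per-task TaskRecord telemetry up into business-facing KPIs."""
    latencies = sorted(r.latency_s for r in records)
    cuts = quantiles(latencies, n=20)  # 19 cut points: 5%, 10%, ..., 95% (needs >= 2 tasks)
    return {
        "tasks": len(records),
        "latency_p50_s": cuts[9],
        "latency_p95_s": cuts[18],
        "cost_per_task_usd": mean(r.cost_usd for r in records),
        "escalation_rate": mean(int(r.escalated) for r in records),
        "adoption_rate": mean(int(r.accepted) for r in records),
    }
```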

"Models can sound confident, but value is measured in outcomes — accuracy where it matters, lower cost, and faster cycle time with safe escalation."

From Hype to Reliable Value

Start with one real workflow, instrument end‑to‑end, set gates for expansion, and revisit assumptions as telemetry arrives. Keep humans in the loop and make reversibility a first‑class requirement.
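A rough sketch of what "gates for expansion" could look like on top of the hypothetical `summarize` output above; the thresholds are invented placeholders, and real gates would come from the workflow's own KPI targets.

```python
# Placeholder thresholds for illustration only; set real gates from business targets.
GATES = {
    "latency_p95_s":     lambda v: v <= 3.0,
    "cost_per_task_usd": lambda v: v <= 0.05,
    "escalation_rate":   lambda v: v <= 0.15,
    "adoption_rate":     lambda v: v >= 0.60,
}

def expansion_decision(kpis):
    """Return (expand, failed_gates) so the go/no-go call stays auditable."""
    failed = [name for name, ok in GATES.items() if not ok(kpis[name])]
    return not failed, failed

# Usage: ok, failed = expansion_decision(summarize(log))
# If not ok, hold the rollout and revisit the failed gates before expanding.
```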

This iLikeIT segment on Pro TV emphasized rigorous measurement and operational discipline — the difference between impressive prototypes and durable, trustworthy impact.