Live on TV: AI is clicking your mouse

On March 26th, I joined Pro TV's iLikeIT to discuss a new frontier in artificial intelligence: AI models that can take control of your computer and perform tasks autonomously — browsing, clicking, typing, and navigating applications just like a human would.

Pro TV iLikeIT AI Computer Control March 2026

The segment explored how the latest generation of AI models — from Anthropic's Claude, to OpenAI's Operator, to Google's Project Mariner — can now see your screen, understand what's on it, and interact with your computer directly. Instead of just answering questions, these models can book flights, fill out forms, organize files, and complete multi-step workflows without human intervention.

"We're moving from AI that talks to AI that acts. These models don't just generate text — they see your screen, move the mouse, click buttons, and complete tasks. It's a fundamental shift: the computer becomes the interface, and AI becomes the operator."

How Computer-Use AI Works

Unlike traditional chatbots that only process text, computer-use models take screenshots of your screen, interpret what they see, and decide what action to take next — a click, a keystroke, a scroll. They operate in a loop: observe, think, act, observe again. This allows them to navigate complex interfaces, switch between applications, and handle real-world tasks that previously required a human sitting at the keyboard.
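The observe–think–act loop described above can be sketched in a few lines. This is a toy illustration, not any vendor's real API: `Action`, `toy_model`, and the string "screenshots" are all hypothetical stand-ins for the vision model and the OS-level actions a real agent would take.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # "click", "type", or "done" (illustrative action set)
    payload: str = ""

def toy_model(screenshot: str) -> Action:
    """Stand-in for the vision-language model: maps what it 'sees' to an action."""
    if "login" in screenshot:
        return Action("type", "user@example.com")
    if "submit" in screenshot:
        return Action("click", "submit-button")
    return Action("done")

def run_agent(screens: list[str], max_steps: int = 10) -> list[Action]:
    """Observe -> think -> act loop: one screenshot in, one action out, repeat."""
    history = []
    for step, screenshot in enumerate(screens):
        if step >= max_steps:            # safety cap on autonomous steps
            break
        action = toy_model(screenshot)   # think: decide the next action
        history.append(action)           # act (here: just record it)
        if action.kind == "done":        # stop when the model declares the task complete
            break
    return history

actions = run_agent(["login page", "submit form", "confirmation"])
print([a.kind for a in actions])  # ['type', 'click', 'done']
```

Real systems replace `toy_model` with a multimodal model call and execute each action against the actual desktop, but the control flow is exactly this loop, which is also where safeguards like step limits and human confirmation are inserted.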

The Opportunity and the Risk

The potential is enormous — imagine delegating hours of repetitive computer work to an AI agent. But the risks are equally real. An AI with access to your screen can see passwords, personal messages, and sensitive documents. A wrong click could send an email to the wrong person, delete important files, or authorize a payment. The technology is powerful, but it demands a new level of trust — and caution.

Where We Are Today

These tools are already available — some in preview, some fully launched. Anthropic's Claude can control a computer through its API, OpenAI's Operator handles web-based tasks, and Google is testing similar capabilities through Project Mariner. We're at the early stages, but the direction is clear: AI is evolving from assistant to autonomous agent, and the way we interact with computers is about to change fundamentally.

The key takeaway: this technology works best when you stay in the loop. Let AI handle the repetitive work, but keep oversight over anything sensitive. We're not handing over the keys — we're getting a very capable co-pilot.

Watch the live show here.