AI in IT operations often fails not because of the models themselves but because of incomplete, siloed, or stale data. A robust data foundation is what makes AI effective.
Everyone is adding AI to their IT tools right now. Vendors are racing to slap “AI-powered” on dashboards, helpdesk platforms, monitoring tools, and security scanners. And IT leaders, under pressure to modernize and do more with less, are buying in.
I get it. The promise is genuinely exciting. Self-healing environments. Predictive remediation before users even notice a problem. Autonomous IT that closes tickets before they’re opened. If it works, it’s transformational.
But here’s what I keep hearing when I talk to IT teams who’ve piloted these tools: the AI wasn’t the problem. The data was.
When an AI-powered IT tool fails to deliver, and plenty of them do, the instinct is to blame the model. It wasn’t smart enough. It wasn’t trained on enough scenarios. We need a better algorithm.
That’s almost never the real issue.
The real issue is that the AI was asked to make intelligent decisions on top of incomplete, siloed, or stale data. And no matter how sophisticated the model is, “garbage in” means “garbage out”. You wouldn’t ask a doctor to diagnose a patient without running any tests. Asking AI to fix your IT environment without comprehensive telemetry is exactly the same mistake.
The teams succeeding with AI in IT operations aren’t the ones with the most advanced models. They’re the ones who built the richest, most complete view of their environment first, and then let AI work on top of that foundation.
Most IT environments have data. The problem is it’s fragmented, spread across a network monitoring tool, a security scanner, an endpoint manager, a helpdesk platform, and a handful of spreadsheets someone in ops maintains manually. Each tool sees a slice of the picture. None of them see the whole thing.
AI needs the whole thing.
Here’s what I mean. Say an employee submits a ticket: their video calls keep dropping, and they can’t figure out why. Here’s what happens when your AI tools only see their own slice: the network monitoring tool checks its links and reports everything is healthy. The security scanner finds no threats and no policy violations. The endpoint manager shows the device checked in on schedule and is fully patched.
Three tools, three clean bills of health, and the employee is still suffering. Why? Because none of those tools could see what was really happening: the device CPU was spiking to 95% during every call, a VPN split-tunneling configuration had drifted four days earlier after a routine update, and six other employees on the same subnet were experiencing the identical pattern.
That correlation across device, OS, network, application, and configuration data is what intelligent IT requires. And it’s only possible when your data foundation covers all of it together.
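If you want to picture what that correlation looks like under the hood, here’s a rough sketch in Python. Every data source, field name, and threshold in it is a made-up placeholder for whatever telemetry feeds you actually have; the point is simply that the insight comes from merging domains onto one timeline, not from any one tool.

```python
# Hypothetical sketch: merge per-domain signals for one ticket onto a single timeline.
# The feed shapes (dicts keyed by user, lists of samples) and field names are invented
# for illustration; they are not any product's real API.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class Finding:
    domain: str
    detail: str
    when: datetime


def correlate(user: str, device_metrics: dict, config_changes: dict,
              subnet_peers: list, window_days: int = 7) -> list[Finding]:
    """Collect anomalies from separate domains into one time-ordered view."""
    since = datetime.now(timezone.utc) - timedelta(days=window_days)
    findings: list[Finding] = []

    # Device domain: sustained CPU saturation on the user's machine.
    for s in device_metrics.get(user, []):
        if s["cpu_pct"] >= 90 and s["when"] >= since:
            findings.append(Finding("device", f"CPU at {s['cpu_pct']}%", s["when"]))

    # Configuration domain: anything that drifted inside the window.
    for c in config_changes.get(user, []):
        if c["when"] >= since:
            findings.append(Finding("config", f"{c['setting']} changed", c["when"]))

    # Network domain: peers on the same subnet reporting the same symptom.
    affected = [p for p in subnet_peers if p["symptom"] == "dropped_calls"]
    if len(affected) > 1:
        findings.append(Finding("network",
                                f"{len(affected)} users on this subnet affected",
                                max(p["when"] for p in affected)))

    # The "intelligence" here is nothing more than the merged view no single tool has.
    return sorted(findings, key=lambda f: f.when)
```

None of those checks is clever on its own. The value is that the device spike, the config drift, and the subnet pattern land in the same place, in the same order they happened.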
In my experience, most organizations fall into one of three traps before they ever get to the AI layer.
The first trap is siloed data. Security data lives in one tool. Performance data lives in another. User sentiment, if it’s captured at all, lives somewhere else entirely. AI can’t connect dots that it can’t see. When your monitoring environment is fragmented, your AI is too.
The second trap is incident-only data. Many IT tools only capture detailed data when something goes wrong. That means you’re training your AI on incidents rather than on the full, continuous picture of what normal looks like. An AI that’s only ever seen broken environments has no baseline to work from. It can’t tell you when something is trending toward failure before it gets there.
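To see why the baseline matters, here is a deliberately simple sketch. The metric, window size, and threshold are all illustrative and not tuned to anything real; the only point is that the comparison against “normal” is impossible if normal was never recorded.

```python
# Illustrative only: a rolling baseline over continuous telemetry, with made-up numbers.
import statistics


def trending_toward_failure(samples: list[float], window: int = 30,
                            z_threshold: float = 3.0) -> bool:
    """Flag the latest reading if it sits far outside the recent 'normal' baseline.

    With incident-only data there is no such history, so there is no baseline,
    and this comparison cannot be made at all.
    """
    if len(samples) <= window:
        return False  # not enough normal history to establish a baseline yet
    baseline = samples[-window - 1:-1]            # the recent window of normal readings
    mean = statistics.fmean(baseline)
    spread = statistics.pstdev(baseline) or 1e-9  # guard against a perfectly flat baseline
    return (samples[-1] - mean) / spread > z_threshold


# Example: a latency metric that has been stable for weeks, then jumps.
history = [5.0 + 0.1 * (i % 3) for i in range(40)] + [9.5]
print(trending_toward_failure(history))  # True: flagged before anything actually breaks
```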
The third trap is partial coverage. If your monitoring doesn’t cover every device type and every location (remote workers, BYOPC devices, ChromeOS, Linux, frontline worker devices), your AI has blind spots. And blind spots are where real problems hide. Partial visibility isn’t just a gap in your reporting. It’s a gap in your AI’s intelligence.
This is where I’ll be direct: ControlUp has spent years building exactly the data foundation that AI in IT operations requires, and we didn’t build it for AI. We built it because comprehensive telemetry is what great IT support has always needed.
Today, ControlUp collects thousands of metrics across every layer of the digital employee experience: physical and virtual devices, operating systems, networks, local applications, SaaS and web applications, unified communications, security posture, and user sentiment. Windows, macOS, Linux, ChromeOS. On-premises, cloud, hybrid.
That breadth isn’t incidental. It’s the point.
When AI works on top of ControlUp’s telemetry, it isn’t reasoning from a single tool’s perspective. It’s reasoning from a complete picture of the environment, the same picture that an experienced IT engineer would want before making a call. It can see that the network looks fine and that the device is struggling and that the security configuration changed and that user sentiment scores on that team dropped this week. It can surface correlations that no single-domain tool could find, because no single-domain tool has all the data.
For IT teams using ControlUp alongside other tools (a separate ITSM, a security scanner, an endpoint manager), ControlUp becomes the intelligence layer that sees across the entire environment, connecting signals from domains that otherwise never talk to each other. The AI doesn’t just help ControlUp customers make better decisions. It helps their whole IT operation get smarter, regardless of what else is in the stack.
AI is going to change IT operations. I believe that genuinely. But the transformation won’t come from the model. It’ll come from the data underneath it.
Before you evaluate another AI-powered IT tool, start with one question: what data is this AI actually working from? Then press further: how complete is it? How current is it? How many domains does it cover? If the answer is “just our tool’s data,” you already know the ceiling.
The teams that win with AI at their side in the next few years won’t be the ones who bought the cleverest algorithm. They’ll be the ones who did the unglamorous work of building a complete, continuous, cross-domain view of their environment, and then let AI do what AI is actually good at.
Get the data right first. The intelligence follows.