Organisations spend 47% of their cloud storage budgets on fees, not capacity. That’s what happens when architectural decisions get made without understanding where money actually goes.
There’s a saying in enterprise computing: “bring the compute to your data.” The logic is sound: moving petabytes across networks is slow and costly, so run your algorithms as close as possible to where the files live.
But some workloads force us to question this orthodoxy. Sometimes it’s right for the data to move to the compute. The answer really depends on what you’re trying to achieve.
Let’s think of it like manufacturing. Sometimes shipping raw materials to a central factory is right; other times it’s better to assemble components on-site. Neither is universally correct, and the economics depend on what you’re building and how often.
AI model training is a clear case for bringing data to compute. Training needs massive parallel I/O to keep GPU farms fed, and distributing that workload across scattered edge locations simply can’t deliver data fast enough. The data has to consolidate where the horsepower lives.
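A back-of-envelope calculation shows why consolidation wins. The figures below are illustrative assumptions (a hypothetical 500 TB training corpus, a 10 Gbps site-to-site WAN link, a 400 Gbps cluster fabric), not benchmarks:

```python
# Sketch: time to move a training dataset over different links.
# All numbers are illustrative assumptions, not measurements.

def transfer_hours(dataset_tb: float, link_gbps: float) -> float:
    """Hours to move a dataset of `dataset_tb` terabytes over a link."""
    bits = dataset_tb * 1e12 * 8            # TB -> bits
    return bits / (link_gbps * 1e9) / 3600  # seconds -> hours

wan = transfer_hours(500, 10)      # assumed 10 Gbps WAN between sites
fabric = transfer_hours(500, 400)  # assumed 400 Gbps fabric in-cluster

print(f"over WAN: {wan:.0f} h, local fabric: {fabric:.1f} h")
# -> over WAN: 111 h, local fabric: 2.8 h
```

Even with these generous assumptions, feeding GPUs from a remote site costs days per epoch of fresh data; once the corpus sits next to the cluster, the same movement takes hours.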
But inference flips the equation. When a self-driving car needs to identify a pedestrian, it can’t afford a round trip to a distant data center. The compute must live as close to the source of the data as physically possible. Latency isn’t a performance metric here; it’s a safety requirement.
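The safety framing becomes concrete with one line of physics: how far does the car travel while a decision is in flight? The round-trip figures below are assumptions for illustration (120 ms to a distant region, 5 ms to on-vehicle or roadside compute):

```python
# Sketch: metres a vehicle travels during one network round trip.
# RTT figures are illustrative assumptions, not measurements.

def metres_during_rtt(speed_kmh: float, rtt_ms: float) -> float:
    """Distance travelled (metres) during a round trip of `rtt_ms`."""
    speed_ms = speed_kmh * 1000 / 3600  # km/h -> m/s
    return speed_ms * (rtt_ms / 1000)   # ms -> s

remote = metres_during_rtt(100, 120)  # assumed RTT to a distant region
local = metres_during_rtt(100, 5)     # assumed RTT to nearby compute

print(f"remote: {remote:.1f} m, local: {local:.2f} m")
# -> remote: 3.3 m, local: 0.14 m
```

At highway speed, waiting on a distant data center means the car is blind for several metres of travel per decision; nearby compute shrinks that to centimetres.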
Media and Entertainment workflows sit somewhere in the middle, and that’s what makes them interesting. Adobe’s AI tools in Photoshop can run locally on your workstation or leverage cloud processing for heavier lifts. Either is technically possible, but here we realise that capability isn’t the only thing we need to think about.
Architectural decisions also hinge on governance, data sovereignty, and risk tolerance. A production handling unreleased footage might need processing to stay on-premises regardless of technical efficiency. The compute-to-data conversation is often as much about policy as performance.
The 47% waste isn’t a technology failure, it’s a cry for help from the workflow. Companies that think about compute placement for each stage of production, rather than defaulting to a single architecture, stop paying for orthodoxy. The question isn’t where your data lives. It’s whether anyone asked why.
