
How Akamai's CIO balances enthusiasm and concerns about AI technology | Fortune

  Published in Science and Technology by Fortune
          🞛 This publication is a summary or evaluation of another publication. 🞛 This publication contains editorial commentary or bias from the source.

Akamai’s Chief Information Officer has been quietly testing artificial‑intelligence (AI) solutions across the company’s global network, while repeatedly cautioning that a full‑scale rollout remains unlikely in the near term. In a recent interview, CIO Alex Fulton outlined the potential gains from AI, from faster content delivery and smarter security to deeper customer insights, while also highlighting a series of practical roadblocks that keep the company from committing wholesale to the technology.

The pilots that are already in motion

Akamai, the world’s leading edge‑computing platform that delivers websites, APIs, and streaming video to millions of users, has begun deploying AI in a handful of high‑impact areas. According to Fulton, the company has run three AI pilots in the last 18 months:

  1. Threat‑detection engines – Using supervised learning on traffic logs, Akamai has trained models that can flag anomalous behavior at the edge of its network, reducing false‑positive alerts by roughly 35 %.
  2. Dynamic resource allocation – An unsupervised model that predicts traffic surges in real time has allowed Akamai to pre‑warm servers in high‑density regions, cutting latency for key clients by an estimated 12 ms.
  3. Personalized content delivery – A generative model that tailors ad placements and video transcoding parameters to individual users’ device profiles has delivered a 4 % uplift in engagement for a pilot customer.

These pilots have been run in isolated “sandbox” environments that do not yet touch production traffic. Fulton emphasized that the pilots are proof‑of‑concept experiments rather than operational services, and that the company has set strict metrics for success before any broader adoption.
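The first pilot, supervised anomaly flagging on traffic logs, can be illustrated with a minimal sketch. Everything here is hypothetical: the feature names, the `LogEntry` shape, and the nearest‑centroid model are generic stand‑ins for whatever Akamai actually trains, chosen only to show the shape of "label traffic, fit a model, flag outliers at the edge":

```python
from dataclasses import dataclass

@dataclass
class LogEntry:
    # Illustrative per-client features; not Akamai's real log schema.
    requests_per_sec: float   # request volume from one client
    error_rate: float         # fraction of 4xx/5xx responses
    distinct_paths: int       # unique URLs probed (high can mean scanning)

def features(e: LogEntry):
    return (e.requests_per_sec, e.error_rate, float(e.distinct_paths))

class CentroidFlagger:
    """Toy supervised classifier: flag an entry as anomalous (1) if it
    lies closer to the centroid of labeled-anomalous training samples
    than to the centroid of labeled-benign ones."""

    def fit(self, entries, labels):
        def centroid(cls):
            pts = [features(e) for e, y in zip(entries, labels) if y == cls]
            return tuple(sum(dim) / len(pts) for dim in zip(*pts))
        self.benign = centroid(0)
        self.anomalous = centroid(1)
        return self

    def predict(self, entry):
        f = features(entry)
        dist = lambda c: sum((a - b) ** 2 for a, b in zip(f, c))
        return 1 if dist(self.anomalous) < dist(self.benign) else 0
```

A production system would use a far richer feature set and model, but the workflow is the same: fit offline on labeled history, then run cheap inference against live edge traffic.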

Cost: the elephant in the room

The most frequently cited obstacle to wider AI deployment is cost. Akamai’s business model depends on delivering edge‑computed content to billions of end‑users, a process that already requires massive data‑center investment and sophisticated infrastructure. Adding a layer of AI requires:

  • Compute resources – Training large language models or deep neural networks on terabytes of traffic data demands GPU‑enabled clusters that cost millions of dollars to build and maintain.
  • Data‑labeling pipelines – Supervised learning models need high‑quality labeled data, which means hiring or contracting data‑science teams that can tag billions of log entries accurately.
  • Operational overhead – AI models are not “set‑it‑and‑forget‑it” solutions; they require continuous retraining, monitoring for drift, and updating for new attack vectors.
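The "monitoring for drift" item above is one place the ongoing cost shows up concretely. A minimal sketch of such a check, assuming a simple mean‑shift z‑test on one feature (real pipelines typically use richer tests such as PSI or Kolmogorov–Smirnov, and nothing here reflects Akamai's actual tooling):

```python
from statistics import mean, stdev

def drifted(train_values, live_values, z_threshold=3.0):
    """Flag drift when the mean of the live window moves more than
    `z_threshold` standard errors away from the training mean.
    A toy heuristic: one feature, one statistic, one threshold."""
    mu = mean(train_values)
    sigma = stdev(train_values)
    std_err = sigma / (len(live_values) ** 0.5)
    z = abs(mean(live_values) - mu) / std_err
    return z > z_threshold
```

When a check like this fires, the model behind it is stale and must be retrained, which is exactly the recurring operational cost the bullet describes.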

A quick financial audit by Akamai’s CFO in 2024 revealed that the company’s AI research spend had already surpassed the budget for its next‑generation data‑center expansion. For a business that relies on low‑latency delivery, the return on investment for AI must be clear and immediate.

Technology maturity and vendor lock‑in

Fulton’s caution is also rooted in the technical maturity of AI tools that can run at the edge. Most open‑source deep‑learning libraries and hardware accelerators are designed for large data‑center clusters, not for the small, low‑power servers that populate Akamai’s edge. In addition, Akamai must ensure that any AI component complies with its strict security posture, which is already vetted through the company’s long‑standing “Secure by Design” framework.

The risk of vendor lock‑in is another factor. Akamai has been exploring partnerships with several AI‑in‑hardware providers, such as NVIDIA and Intel, but the company remains wary of becoming dependent on a single supplier for its core threat‑detection or optimization algorithms. In an interview, Fulton said, “If we’re going to invest billions in AI infrastructure, we need to be able to switch vendors without compromising performance or security.”

Regulatory and ethical considerations

The article also highlights the increasing regulatory scrutiny that AI solutions face, especially in the context of edge computing. Because Akamai’s network touches users around the world, it is exposed to a patchwork of privacy laws—from the EU’s General Data Protection Regulation (GDPR) to the California Consumer Privacy Act (CCPA). AI models that process user data must be auditable and explainable, something that is still a work in progress for many machine‑learning teams.

Moreover, Akamai’s security division has a longstanding reputation for zero‑day vulnerability detection. Introducing AI into that pipeline raises the question of whether a false negative in an AI‑driven security model could undermine the company’s brand. Fulton said, “We can’t afford a single misstep when it comes to security. That’s why we’re moving slowly and rigorously.”

Industry context: A company in a pivot

Akamai’s approach reflects a broader industry trend where network operators are experimenting with AI but remain cautious. Competing CDNs such as Cloudflare, Fastly, and Limelight are also rolling out AI‑powered services, but most leaders emphasize a “hybrid” strategy: incremental pilots followed by staged rollouts once the cost‑benefit calculus is clear.

In the same Fortune series, an article on “Edge AI” by Dan Sullivan (June 2025) noted that only 18 % of large CDN operators have fully integrated AI into their core service layers. The article highlighted that the main barrier was the high upfront cost and the lack of proven, production‑ready AI frameworks that can operate at the scale of global edge networks.

What’s next for Akamai?

While Akamai’s CIO remains skeptical about full adoption, the company is investing in several initiatives to reduce AI costs and improve maturity:

  • Edge‑optimized AI frameworks – Akamai’s internal team is working on a lightweight inference engine that can run on low‑power ARM cores typical of edge devices.
  • Model compression and quantization – By reducing the precision of neural networks from 32‑bit floating‑point to 8‑bit integer, Akamai expects to cut GPU usage by 70 %.
  • Open‑source contributions – The company has begun contributing to open‑source projects such as ONNX Runtime and TensorRT, hoping to foster a community around edge‑ready AI.
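The model‑compression bullet above (32‑bit float to 8‑bit integer) can be illustrated with a minimal sketch of symmetric per‑tensor quantization. This is the generic textbook scheme, offered only as an assumption about what such a step looks like, not a description of Akamai's implementation:

```python
def quantize_int8(weights):
    """Map float weights to int8 range [-127, 127] with one
    per-tensor scale factor (symmetric quantization)."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights; error is at most ~scale/2
    per weight, the price paid for the 4x smaller representation."""
    return [v * scale for v in q]
```

Each weight shrinks from 4 bytes to 1, which is where memory and compute savings of the kind cited above come from; the engineering work is keeping the resulting accuracy loss acceptable.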

In a closing statement, Fulton summed up the company’s stance: “We’re not dismissing AI. We’re just being realistic about what’s feasible today and what will pay off tomorrow.” With the pilots continuing to collect data, Akamai’s CIO plans to revisit the cost model next quarter. Whether the company ultimately adopts AI at scale remains to be seen, but the cautious, data‑driven approach may position Akamai as a more disciplined leader in the fast‑evolving world of edge intelligence.


Read the Full Fortune Article at:
[ https://fortune.com/2025/10/15/akamais-cio-pilots-ai-but-isnt-often-sold-on-full-adoption-due-to-worries-about-costs-and-technology-maturity/ ]