New publication on the parameterized complexity of caching in networks accepted at AAAI 2025

Robert Ganian (TU Wien), Fionn Mc Inerney (Telefónica Research), and Dimitra Tsigkari (Telefónica Research) have co-authored a new paper, titled “Parameterized Complexity of Caching in Networks”, in the context of the ELASTIC project, in which Fionn Mc Inerney is Telefónica’s lead researcher. The work was presented and published at AAAI 2025, one of the leading international conferences in artificial intelligence. The study focuses on a fundamental problem in distributed systems and content delivery networks: how to allocate contents across a network of caches so as to maximize performance metrics such as the cache hit rate. While the caching problem has been studied extensively from an empirical perspective, little was known about its theoretical complexity boundaries. This paper takes a major step forward by providing a comprehensive complexity-theoretic analysis that precisely identifies when the caching problem becomes computationally tractable and when it remains intractable.
The central contribution of the paper lies in its detailed classification of the caching problem through the lens of parameterized complexity. The authors examined three different versions of the problem, distinguished by how content sizes are represented: the homogeneous setting where all contents have the same size, a unary-encoded heterogeneous setting, and a binary-encoded heterogeneous setting. For each of these variants, they investigated how the complexity changes with respect to six natural parameters: the number of caches in the system, the number of users, the number of contents, the maximum cache capacity, the maximum degree of the network, and the number of requests a user may issue. By systematically studying these parameterizations, the authors established a clear “complexity landscape” that marks the frontier between tractable and intractable cases.
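To make the setting concrete, the sketch below illustrates the homogeneous variant described above, where every content has unit size. It is not taken from the paper; the model (users connected to a subset of caches, a request counting as a hit if any reachable cache stores the content) and all names are simplifying assumptions for illustration. The brute-force search runs in time exponential in the number of caches and contents, which hints at why parameters such as these govern tractability:

```python
from itertools import combinations, product

def best_hit_rate(contents, capacity, user_caches, user_requests):
    """Brute-force optimal placement for a simplified homogeneous
    caching model: every content has unit size, each cache stores at
    most `capacity` contents, and a user's request is a hit if any
    cache reachable by that user holds the requested content.
    Illustrative only; exponential in caches x contents."""
    num_caches = 1 + max(c for caches in user_caches for c in caches)
    # All ways to fill one cache: subsets of contents of size <= capacity.
    fillings = [set(s) for r in range(capacity + 1)
                for s in combinations(contents, r)]
    total = sum(len(reqs) for reqs in user_requests)
    best = 0
    # Try every assignment of a filling to every cache.
    for placement in product(fillings, repeat=num_caches):
        hits = sum(1
                   for u, reqs in enumerate(user_requests)
                   for item in reqs
                   if any(item in placement[c] for c in user_caches[u]))
        best = max(best, hits)
    return best / total

# Two caches of capacity 1; user 0 reaches cache 0, user 1 reaches cache 1.
rate = best_hit_rate(['a', 'b'], 1, [[0], [1]], [['a', 'a'], ['b']])
print(rate)  # 1.0: placing 'a' in cache 0 and 'b' in cache 1 serves all requests
```

Even this toy search examines every placement, so it only scales when the relevant parameters (caches, contents, capacity) are tiny, mirroring the kind of parameterized analysis the paper carries out rigorously.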
The methodology combines parameterized algorithm design with hardness results to delineate the boundary of tractability, which is directly useful to both theoreticians and practitioners. In particular, caching underpins modern video streaming platforms, cloud services, and mobile networks, and is becoming increasingly central to artificial intelligence, where trained models, datasets, and inference workloads must be distributed across edge–cloud infrastructures. The analysis presented in this work clarifies which caching strategies can be computed optimally in practice and where only approximate or heuristic solutions may be realistic.
For the ELASTIC project, these findings directly support ongoing research on edge–cloud orchestration. Resource allocation and workload placement, including caching, are fundamental to building efficient and adaptive systems. By identifying the boundary of tractability, the paper provides guidance for the design of ELASTIC demonstrators such as:
  • Smart connected factory scenarios, where efficient caching and workload placement can enhance responsiveness and reduce costs.
  • IT/OT migration pilots, where caching decisions affect performance guarantees.

The full publication is available on the project’s Zenodo repository.