This review explores the theoretical foundations, evolution, applications, and future potential of Kolmogorov-Arnold Networks (KANs), a neural network model inspired by the Kolmogorov-Arnold representation theorem. KANs set themselves apart from traditional neural networks by employing learnable, spline-parameterized functions rather than fixed activation functions, allowing for flexible and interpretable representations of high-dimensional functions. The review examines KAN's architectural strengths, including adaptive edge-based activation functions that enhance parameter efficiency and scalability across varied applications such as time series forecasting, computational biomedicine, and graph learning. Key advancements, including Temporal-KAN (T-KAN), FastKAN, and Partial Differential Equation (PDE) KAN, illustrate KAN's growing applicability in dynamic environments, significantly improving interpretability, computational efficiency, and adaptability for complex function approximation tasks. Moreover, the paper discusses KAN's integration with other architectures, such as convolutional, recurrent, and transformer-based models, showcasing its versatility in complementing established neural networks for tasks that require hybrid approaches. Despite its strengths, KAN faces computational challenges in high-dimensional and noisy data settings, sparking continued research into optimization strategies, regularization techniques, and hybrid models. This paper highlights KAN's expanding role in modern neural architectures and outlines future directions to enhance its computational efficiency, interpretability, and scalability in data-intensive applications.
Kolmogorov-Arnold Networks (KANs) introduce a novel edge-based approach in which learnable spline functions replace traditional fixed activation functions. Instead of applying non-linearities at nodes (as with ReLU), each edge in a KAN applies a parameterized transformation, typically modeled using B-splines.
This design enables adaptive, interpretable, and compositional learning, inspired by the Kolmogorov–Arnold representation theorem. It enhances both model flexibility and explainability by learning customized activation behavior along each connection.
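For reference, the Kolmogorov-Arnold representation theorem states that any continuous multivariate function on a bounded domain can be written as a finite composition of continuous univariate functions and addition:

$$
f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)
$$

KANs relax this exact two-level form into stacks of arbitrary depth, with each layer learning its own univariate edge functions.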

Figure 1: Schematic representation of the KAN architecture. Each layer applies learnable univariate spline functions along the edges (between nodes), enabling edge-based transformations rather than traditional node-based activations. While the classical Kolmogorov-Arnold formulation applied outer functions only at the final stage, modern deep KAN architectures generalize this by applying spline-based transformations at every layer, ensuring compositional and interpretable modeling across the network.
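To make the edge-based design concrete, below is a minimal, illustrative sketch of a single KAN layer in PyTorch. It is not the pykan reference implementation: for brevity it uses a degree-1 (piecewise-linear) B-spline basis on a fixed grid and omits the residual base activation and adaptive grid updates used in the original work. All names here (KANLayer, num_knots, grid_range) are our own.

```python
import torch
import torch.nn as nn


class KANLayer(nn.Module):
    """Minimal edge-based KAN layer (illustrative sketch).

    Each edge (i -> j) carries its own learnable univariate function,
    parameterized by coefficients over a degree-1 (piecewise-linear)
    B-spline basis on a fixed knot grid.
    """

    def __init__(self, in_dim, out_dim, num_knots=8, grid_range=(-1.0, 1.0)):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        # Fixed knot grid shared by all edges.
        self.register_buffer("grid", torch.linspace(*grid_range, num_knots))
        # One coefficient vector per edge: shape (out_dim, in_dim, num_knots).
        self.coeffs = nn.Parameter(torch.randn(out_dim, in_dim, num_knots) * 0.1)

    def forward(self, x):
        # x: (batch, in_dim)
        step = self.grid[1] - self.grid[0]
        # Degree-1 B-spline basis = "hat" functions centered on each knot;
        # dist has shape (batch, in_dim, num_knots).
        dist = (x.unsqueeze(-1) - self.grid) / step
        basis = torch.clamp(1.0 - dist.abs(), min=0.0)
        # phi[b, j, i] = sum_k coeffs[j, i, k] * basis[b, i, k]:
        # the learned edge function phi_{j,i} evaluated at x_i.
        phi = torch.einsum("bik,jik->bji", basis, self.coeffs)
        # Node j simply sums its incoming edge functions; there is no
        # separate node-based activation.
        return phi.sum(dim=-1)
```

The key contrast with a standard MLP layer is visible in the last two steps: the non-linearity lives on the edges (one learned univariate curve per input-output pair), and nodes perform only summation.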
Since its introduction, the Kolmogorov-Arnold Network (KAN) has inspired several powerful architectural variants. Each is tailored to specific tasks or computational challenges, while preserving the core idea of edge-based, learnable spline activations.
These variants demonstrate KAN's adaptability across domains such as physics, time-series analysis, graph learning, and signal processing.
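As a usage note, these variants keep the per-edge spline structure and typically differ in how layers are composed or how the basis is evaluated. Continuing the hypothetical KANLayer sketch above, a deep KAN is just a stack of such layers (widths here are arbitrary):

```python
import torch
import torch.nn as nn

# Builds on the KANLayer sketch above: a two-layer KAN mapping 4 inputs
# to a scalar output through a hidden width of 5.
model = nn.Sequential(KANLayer(4, 5), KANLayer(5, 1))
x = torch.rand(32, 4) * 2 - 1   # batch of 32 points in [-1, 1]^4
y = model(x)                    # shape: (32, 1)
# Caveat: hidden activations outside the fixed grid range fall outside all
# hat functions and map to zero; real implementations extend or update the grid.
```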
This comprehensive survey has been officially published in ACM Computing Surveys, one of the most prestigious journals in computer science, reflecting its significance and contribution to the field. The work has garnered significant attention within the machine learning community, contributing to the growing understanding and adoption of Kolmogorov-Arnold Networks.
Beyond the academic publication, our research has fostered a vibrant community ecosystem. We maintain an awesome KAN repository: a curated collection of KAN resources including libraries, projects, tutorials, papers, and more, specifically designed for researchers and developers working in this exciting field. This repository serves as a comprehensive hub for anyone looking to explore, implement, or contribute to KAN-related research.
The work has been featured in various academic platforms and has sparked discussions about the future of interpretable neural architectures. To learn more about the broader impact of this research and its implications for the field, you can explore our publication story on Kudos, which provides additional insights into the research journey, community reception, and potential applications of KAN architectures.