Research Article | Peer-Reviewed

Equilibrium-based Deterministic Learning in AI via σ-Regularization

Received: 9 January 2026     Accepted: 19 January 2026     Published: 30 January 2026
Abstract

Gradient-based learning methods such as Gradient Descent (GD), Stochastic Gradient Descent (SGD), and Conjugate Gradient Descent (CGD) are widely used in supervised learning and inverse problems. However, when the underlying system is underdetermined, these iterative approaches do not converge to a unique solution; instead, their outcomes depend strongly on initialization, learning rates, numerical precision, and stopping criteria. This study presents a deterministic σ-regularized equilibrium framework, referred to as the Cekirge Method, in which model parameters are obtained through a single closed-form computation rather than iterative optimization. Using a controlled time-indexed dataset, the deterministic equilibrium solution is compared directly with GD, SGD, and CGD under identical experimental conditions. While gradient-based methods follow distinct optimization trajectories and require substantially longer runtimes, the σ-regularized formulation consistently yields a unique and numerically stable solution with minimal computational cost. The results demonstrate that the inability of gradient-based methods to reproduce the deterministic equilibrium in underdetermined systems is not an algorithmic shortcoming, but a structural consequence of trajectory-based optimization in a non-unique solution space. The analysis focuses on formulation-level properties rather than predictive accuracy, emphasizing equilibrium existence, numerical conditioning, parameter stability, and reproducibility. By prioritizing equilibrium recognition over iterative search, the proposed framework highlights deterministic algebraic learning as a complementary paradigm to conventional gradient-based methods, particularly for time-indexed systems where stability and repeatability are critical.
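
The following is a minimal sketch of the comparison described in the abstract, not the paper's implementation. It assumes the σ-regularized equilibrium takes the standard Tikhonov-type closed form w = (XᵀX + σI)⁻¹Xᵀy; the exact Cekirge formulation, the dataset, and all hyperparameters below (lr, steps, sigma) are illustrative placeholders.

    import numpy as np

    # Illustrative sketch only. ASSUMPTION: the sigma-regularized
    # equilibrium is taken in the standard Tikhonov/ridge closed form
    #   w = (X^T X + sigma I)^(-1) X^T y;
    # the paper's exact formulation may differ.

    rng = np.random.default_rng(0)

    # Underdetermined system: fewer equations than unknowns, so X w = y
    # alone has infinitely many exact solutions.
    n_samples, n_features = 20, 50
    X = rng.standard_normal((n_samples, n_features))
    y = rng.standard_normal(n_samples)
    sigma = 1e-2

    # Deterministic equilibrium: one linear solve, with no iterations,
    # no initialization, no learning rate, and no stopping criterion.
    w_eq = np.linalg.solve(X.T @ X + sigma * np.eye(n_features), X.T @ y)

    # Plain gradient descent on the same regularized least-squares loss
    # 0.5*||X w - y||^2 + 0.5*sigma*||w||^2 (hypothetical settings).
    def gradient_descent(w0, lr=1e-3, steps=5000):
        w = w0.copy()
        for _ in range(steps):
            w -= lr * (X.T @ (X @ w - y) + sigma * w)
        return w

    # Two initializations: at any finite iteration count they still
    # differ along the null-space directions of X, which decay only at
    # the slow rate (1 - lr*sigma) per step.
    w_a = gradient_descent(np.zeros(n_features))
    w_b = gradient_descent(rng.standard_normal(n_features))

    print("||w_a - w_eq||:", np.linalg.norm(w_a - w_eq))
    print("||w_b - w_eq||:", np.linalg.norm(w_b - w_eq))
    print("||w_a - w_b|| :", np.linalg.norm(w_a - w_b))

With σ > 0 the regularized objective has a unique minimizer, yet after any finite number of steps the two trajectories remain separated along the null space of X; this is the structural, rather than algorithmic, dependence on initialization and stopping criteria that the abstract describes.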

Published in American Journal of Artificial Intelligence (Volume 10, Issue 1)
DOI 10.11648/j.ajai.20261001.15
Page(s) 48-60
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2026. Published by Science Publishing Group

Keywords

Deterministic Learning, σ-Regularization, Underdetermined Systems, Equilibrium Computation, Gradient-based Optimization, Time-indexed Systems, Non-recurrent Models

Cite This Article
  • APA Style

    Cekirge, H. M. (2026). Equilibrium-based Deterministic Learning in AI via σ-Regularization. American Journal of Artificial Intelligence, 10(1), 48-60. https://doi.org/10.11648/j.ajai.20261001.15


  • ACS Style

    Cekirge, H. M. Equilibrium-based Deterministic Learning in AI via σ-Regularization. Am. J. Artif. Intell. 2026, 10(1), 48-60. doi: 10.11648/j.ajai.20261001.15


  • AMA Style

    Cekirge HM. Equilibrium-based Deterministic Learning in AI via σ-Regularization. Am J Artif Intell. 2026;10(1):48-60. doi: 10.11648/j.ajai.20261001.15


  • @article{10.11648/j.ajai.20261001.15,
      author = {Huseyin Murat Cekirge},
      title = {Equilibrium-based Deterministic Learning in AI via σ-Regularization},
      journal = {American Journal of Artificial Intelligence},
      volume = {10},
      number = {1},
      pages = {48-60},
      doi = {10.11648/j.ajai.20261001.15},
      url = {https://doi.org/10.11648/j.ajai.20261001.15},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.ajai.20261001.15},
      year = {2026}
    }
    


  • TY  - JOUR
    T1  - Equilibrium-based Deterministic Learning in AI via σ-Regularization
    AU  - Huseyin Murat Cekirge
    Y1  - 2026/01/30
    PY  - 2026
    N1  - https://doi.org/10.11648/j.ajai.20261001.15
    DO  - 10.11648/j.ajai.20261001.15
    T2  - American Journal of Artificial Intelligence
    JF  - American Journal of Artificial Intelligence
    JO  - American Journal of Artificial Intelligence
    SP  - 48
    EP  - 60
    PB  - Science Publishing Group
    SN  - 2639-9733
    UR  - https://doi.org/10.11648/j.ajai.20261001.15
    VL  - 10
    IS  - 1
    ER  - 

