Laboratory «Hybrid modeling and optimization methods in complex systems»

Contract number: 075-15-2022-1121
Time span of the project: 2022-2024

As of 01.12.2023:

Number of staff members: 36
Scientific publications: 27
Objects of intellectual property: 6
General information
Name of the project: Hybrid methods of modeling and optimization in complex systems

Research directions: Computer and information sciences and technologies

Goals and objectives

Project goal:

Creating a laboratory that will develop and apply metaheuristic and other hybrid methods of general-purpose optimization.

Project objectives:

  1. Creating a technology for automated algorithmic design that efficiently combines exact, mathematically substantiated methods with heuristic methods for solving complex optimization problems.
  2. Creating an education center on the basis of the laboratory to implement education programs, as well as separate modules, on the mathematical modeling and optimization of complex systems, including applications to machine learning and artificial intelligence systems.
The practical value of the study

Scientific results:

  1. Hybrid evolutionary algorithms have been developed for identifying dynamic systems in the explicit form of differential equations and systems of differential equations. The algorithmic basis of the approaches is genetic programming and differential evolution. The value of the developed approaches lies in the symbolic form of the resulting models, which remains open to further interpretation (a minimal parameter-identification sketch follows this list).
  2. We developed a hybrid algorithm based on a fixed-step subgradient method for the problem of equidistant placement of points on a sphere in a high-dimensional space, which can also be formulated as suboptimal ball packing or as minimization of the correlation of vector frames. The problem has applications in radio engineering; in particular, it arises when developing approaches to building sixth-generation cellular networks (see the subgradient sketch after this list).
  3. The correction of metric matrices in quasi-Newton methods (QNM) was considered from the perspective of machine learning theory. Based on training information for estimating the matrix of second derivatives of a function, we formulated a quality functional and minimized it with gradient machine learning algorithms. We demonstrated that this approach leads to the well-known updates of metric matrices used in QNM, such as the Broyden–Fletcher–Goldfarb–Shanno (BFGS) and Davidon–Fletcher–Powell (DFP) methods. The machine learning algorithm for finding metric matrices performs minimization along a system of directions, whose orthogonality determines the convergence rate of the learning process (a toy illustration follows this list).
  4. Several new expressions have been proved for the m-weak group inverse. An effective algorithm for computing the m-weak group inverse in terms of the QR decomposition is proposed. Applying the m-weak group inverse, we present the uniquely determined solution to the restricted minimization problem in the Frobenius norm.
  5. We introduced tensor generalized bilateral inverses (TGBIs) under the Einstein tensor product as an extension of generalized bilateral inverses (GBIs) in the matrix environment. Moreover, the TGBI class includes the composite generalized inverses (CGIs) considered so far for matrices and tensors. A few characterizations of known CGIs (such as the CMP, DMP, MPD, MPCEP, and CEPMP) are derived. The main properties of the TGBIs are exploited and verified through numerical examples.
  6. We investigated applications of the gradient method for nonlinear optimization in the development of Gradient Neural Networks (GNN) and Zhang Neural Networks (ZNN). In particular, the solution of the time-varying matrix equation AXB=D was studied using a novel GNN model, termed GGNN(A,B,D). The GGNN model is developed by applying GNN dynamics to the gradient of the error matrix used in the development of the GNN model. The convergence analysis shows that the neural state matrix of the GGNN(A,B,D) design converges asymptotically to a solution of the matrix equation AXB=D for any initial state matrix. It is also shown that the limit is a least-squares solution, which depends on the selected initial state matrix. A hybridization of GGNN with the analogous modification GZNN of the ZNN dynamics was also considered (a minimal GNN integration sketch follows this list).
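
The sketch below illustrates only the parameter-identification half of the approach in item 1: differential evolution fits the coefficients of a fixed candidate ODE structure to an observed trajectory. In the laboratory's method the model structure itself is also evolved by genetic programming; the candidate structure, synthetic data, bounds, and settings here are illustrative assumptions rather than the published implementation.

```python
# Minimal sketch (assumed setup): recover the coefficients (a, b) of a fixed
# candidate structure dx/dt = a*x + b*x**2 from a noisy trajectory using
# differential evolution. The full hybrid method also evolves the structure
# itself with genetic programming, which is not shown here.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
true_params = (1.0, -0.5)                  # illustrative "unknown" coefficients
t_obs = np.linspace(0.0, 5.0, 50)

def rhs(t, x, a, b):
    return a * x + b * x**2

x_obs = solve_ivp(rhs, (0.0, 5.0), [0.1], t_eval=t_obs, args=true_params).y[0]
x_obs = x_obs + 0.01 * rng.standard_normal(x_obs.shape)   # measurement noise

def fitness(params):
    # Mean squared deviation between the candidate trajectory and the data;
    # a large penalty is returned if the integration fails.
    sol = solve_ivp(rhs, (0.0, 5.0), [0.1], t_eval=t_obs, args=tuple(params))
    if not sol.success or sol.y.shape[1] != t_obs.size:
        return 1e6
    return float(np.mean((sol.y[0] - x_obs) ** 2))

# Search bounds chosen so that candidate models stay bounded on [0, 5].
result = differential_evolution(fitness, bounds=[(-3.0, 3.0), (-3.0, 0.0)],
                                seed=1, maxiter=100)
print("recovered (a, b):", result.x)       # expected to be close to (1.0, -0.5)
```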
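
Item 2 rests on a fixed-step subgradient iteration; the sketch below shows only that core: repeatedly take a subgradient step against the largest pairwise correlation of the frame vectors and renormalize back to the unit sphere. The dimension, frame size, step length, and iteration count are arbitrary illustrative choices, and the laboratory's full hybrid algorithm is not reproduced.

```python
# Minimal sketch: spread n unit vectors in R^d by repeatedly decreasing the
# largest pairwise |<x_i, x_j>| with a fixed-step subgradient step, then
# projecting (renormalizing) back onto the unit sphere. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 24                        # assumed dimension and frame size
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)

step = 0.05                         # fixed subgradient step length
for _ in range(5000):
    G = X @ X.T                     # Gram matrix of the current frame
    np.fill_diagonal(G, 0.0)
    i, j = np.unravel_index(np.argmax(np.abs(G)), G.shape)
    s = np.sign(G[i, j])
    # A subgradient of max_{k<l} |<x_k, x_l>| is supported on the pair (i, j)
    # attaining the maximum: s * x_j with respect to x_i, s * x_i w.r.t. x_j.
    gi, gj = s * X[j], s * X[i]
    X[i] -= step * gi
    X[j] -= step * gj
    X /= np.linalg.norm(X, axis=1, keepdims=True)

coherence = np.abs(X @ X.T - np.eye(n)).max()
print("final maximal pairwise correlation:", round(float(coherence), 4))
```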
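
The next toy example conveys the machine-learning view behind item 3: secant pairs (s, y) of a quadratic function play the role of training data, and a metric matrix is fitted by gradient descent on a least-squares quality functional. The specific functionals and step choices that reproduce the BFGS and DFP updates are those of the cited work; the problem size, learning rate, and the functional ||Bs - y||^2 used below are illustrative assumptions.

```python
# Toy illustration of the "learning" view of quasi-Newton corrections:
# treat secant pairs (s, y) of the quadratic f(x) = 0.5 * x^T A x as training
# data and fit a metric matrix B by gradient descent on the quality functional
# phi(B) = ||B s - y||^2. This only conveys the idea; it is not the exact
# functional or step rule that recovers BFGS/DFP in the cited work.
import numpy as np

rng = np.random.default_rng(0)
d = 5
Q = rng.standard_normal((d, d))
A = Q @ Q.T + d * np.eye(d)        # true Hessian of the quadratic model

B = np.eye(d)                      # current metric-matrix estimate
lr = 0.01                          # learning rate of the gradient steps
for _ in range(2000):
    s = rng.standard_normal(d)     # random trial step
    y = A @ s                      # exact gradient difference for a quadratic
    r = B @ s - y                  # secant residual
    # Gradient of phi(B) = ||B s - y||^2 with respect to B equals 2 * r s^T.
    B -= lr * 2.0 * np.outer(r, s)
    B = 0.5 * (B + B.T)            # keep the estimate symmetric

print("relative error ||B - A|| / ||A||:",
      round(float(np.linalg.norm(B - A) / np.linalg.norm(A)), 6))
```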
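
Finally, a minimal sketch of the classical GNN dynamics that item 6 builds on: Euler integration of the gradient flow dX/dt = -gamma * A^T (A X B - D) B^T for a time-invariant equation A X B = D, starting from the zero state. The GGNN model of the paper applies the GNN dynamics to the gradient of the error matrix and also treats time-varying data, which is not reproduced here; the matrix sizes, gamma, and the integration step are illustrative assumptions.

```python
# Minimal sketch: Euler integration of GNN dynamics for A X B = D, i.e. the
# gradient flow of 0.5 * ||A X B - D||_F^2. From the zero initial state the
# neural state should approach the least-squares solution pinv(A) @ D @ pinv(B).
# All sizes and the parameters gamma/dt/steps are illustrative.
import numpy as np

rng = np.random.default_rng(0)
m, n, p, q = 6, 3, 5, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))
D = rng.standard_normal((m, q))    # a generally inconsistent right-hand side

gamma, dt, steps = 1.0, 1e-3, 100_000
X = np.zeros((n, p))               # zero initial neural state
for _ in range(steps):
    E = A @ X @ B - D              # error matrix of the current state
    X -= dt * gamma * (A.T @ E @ B.T)

X_ls = np.linalg.pinv(A) @ D @ np.linalg.pinv(B)   # reference LS solution
print("distance to the pseudoinverse-based solution:",
      float(np.linalg.norm(X - X_ls)))
```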

Education and professional retraining of personnel:

Candidate theses:

  1. Evolutionary algorithms for solving symbolic regression problems for the identification of dynamic systems.
  2. Assessment and management of territorial technosphere risks of social-natural-technogenic systems of industrial regions of Siberia.
  3. Hybrid method of resource management in distributed dynamic computing systems.

Doctoral thesis «Models and methods of managing the processes of creating permanent connections at enterprises of the rocket and space industry»

Cooperation:

  1. LLC «Center for Computational Technologies»
  2. LLC «Huawei Techcompany»
  3. LLC «Aktiv Tuim»

Publications:

  1. Krutikov V., Tovbis E., Stanimirović P. [et al.] Machine Learning in Quasi-Newton Methods // Axioms. 2024.
  2. Mosić D., Stanimirović P. S., Kazakovtsev L. A. Application of m-weak group inverse in solving optimization problems // Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A, Matemáticas. 2024.
  3. Behera R., Sahoo J. K., Stanimirović P. S. [et al.] Computing Tensor Generalized Bilateral Inverses // Commun. Appl. Math. Comput. 2024.
  4. Stanimirović P. S., Tešić N., Gerontitis D., Milovanović G. V., Petrović M. J., Kazakovtsev V. L., Stasiuk V. Application of Gradient Optimization Methods in Defining Neural Dynamics // Axioms. 2024.
  5. Stanovov V., Kazakovtsev L., Semenkin E. Hyper-Heuristic Approach for Tuning Parameter Adaptation in Differential Evolution // Axioms. 2024.
  6. Mosić D., Stanimirović P. S., Mourtas S. D. Minimal Rank Properties of Outer Inverses with Prescribed Range and Null Space // Mathematics. 2023.
Other laboratories and scientists

Laboratory «Research of ultra-low-latency network technologies with ultra-high density based on the extensive use of artificial intelligence for 6G networks»
Hosting organization: The Bonch-Bruevich Saint Petersburg State University of Telecommunications
Field of studies: Computer and information sciences
City: St. Petersburg
Invited researcher: Abd El-Latif Ahmed Abdelrahim (Egypt)
Time span of the project: 2022-2024

Laboratory for Non-linear and Microwave Photonics
Hosting organization: Ulyanovsk State University (USU)
Field of studies: Computer and information sciences
City: Ulyanovsk
Invited researcher: Taylor James Roy (United Kingdom, Ireland)
Time span of the project: 2021-2023

Multi-scale Neurodynamics for Smart Systems
Hosting organization: Skolkovo Institute of Science and Technology (Skoltech)
Field of studies: Computer and information sciences
City: Moscow
Invited researcher: Wang Jun
Time span of the project: 2021-2023