• Control Systems
December 16, 2022

Control Theory, Advancements, and Challenges

Control system engineering is a fast-growing field that has played a significant role in technology development over recent decades. This article reviews the evolution of control theory, its methods and challenges, the paradigm shifts along the way, and the future of the field.

“If physics is defined as the science of understanding the physical environment, then control theory may be viewed as the science of modifying that environment, in the physical, biological, or even social sense.” Control theory is a branch of mathematics with a close connection to dynamical systems and is thus linked with physics, engineering, and technology. The term dynamical system refers to a system that evolves over time. From a theoretical perspective, dynamical systems connect subjects ranging from linear algebra and differential equations to numerical analysis and geometry. Control theory encompasses the methods used to influence the behavior of a dynamical system so that it reaches a certain goal.
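To make the notion of a dynamical system concrete, the sketch below simulates a simple mass-spring-damper in state-space form, dx/dt = Ax + Bu. The plant parameters, time step, and initial condition are illustrative assumptions, not taken from any particular application.

```python
import numpy as np

# A dynamical system evolving over time: a mass-spring-damper in
# state-space form dx/dt = A x + B u, with state x = [position, velocity].
# All numbers below are illustrative.
m, c, k = 1.0, 0.4, 2.0                      # mass, damping, stiffness
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
B = np.array([[0.0], [1.0 / m]])

x = np.array([[1.0], [0.0]])                 # initial condition
dt = 0.01
for _ in range(int(10.0 / dt)):              # uncontrolled response, u = 0
    x = x + dt * (A @ x)                     # explicit Euler integration

print("state after 10 s:", x.ravel())
```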

Control theory has evolved in several stages. In the 1920s and 1930s, the contributions of Minorsky, who determined stability from the differential equations of a system, and Nyquist, who developed a procedure for determining the stability of closed-loop systems, were significant for the development of the field. Minorsky was the first to propose PID control for a practical application: the automatic steering of navy vessels. PID controllers have been widely used in industrial applications since the 1940s and 1950s. Over these decades, frequency-response and root-locus methods were established to analyze system dynamics and were used for control design. These techniques form the core of classical control theory, by which requirements such as stability can be fulfilled, though they do not lead to optimal control in any sense.
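As an illustration of the PID law mentioned above, the minimal sketch below closes a PID loop around a first-order plant. The plant time constant and the gains Kp, Ki, and Kd are hypothetical values chosen only to show the structure of the controller; a real design would tune them against the actual plant.

```python
# Minimal discrete PID loop on an illustrative first-order plant
# dy/dt = (-y + u) / tau. Plant and gains are hypothetical.
dt, tau = 0.01, 0.5          # time step [s], plant time constant [s]
Kp, Ki, Kd = 2.0, 1.5, 0.05  # illustrative PID gains
setpoint, y = 1.0, 0.0
integral, prev_error = 0.0, 0.0

for _ in range(int(5.0 / dt)):                # simulate 5 seconds
    error = setpoint - y
    integral += error * dt                    # integral term
    derivative = (error - prev_error) / dt    # derivative term
    u = Kp * error + Ki * integral + Kd * derivative
    prev_error = error
    y += dt * (-y + u) / tau                  # Euler step of the plant

print(f"output after 5 s: {y:.3f} (setpoint {setpoint})")
```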

Optimal control was extensively studied in the 1960s and 1970s. An optimal controller gives high performance when a good model of the system is available and is therefore regarded as a Model-Based Controller (MBC). Model-Based Design (MBD) is the process of developing functions based on a model of the system. Model-based control also includes other methods for both linear and nonlinear systems, e.g., Lyapunov-based controllers. In MBC theory, the plant model is first identified and the controller is then designed based on the dynamic properties of that model. In this regard, system identification and state estimation techniques, such as the Kalman filter, have been developed to provide a plant model within a model set that approximates the real system. Nevertheless, there will always be a deviation from reality, unmodeled dynamics, that may degrade performance. In other words, the robustness of the controller against model errors is a significant challenge in control design. However, creating a model of the system offers far more advantages than just enabling optimal control through an MBC, such as increasing the understanding of the system and drastically reducing development time and time-to-market KPIs. This is why the MBD approach has exploded in industry over the last decade.
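As an example of the estimation side of model-based control, the sketch below runs a minimal Kalman filter that recovers position and velocity from noisy position measurements. The state-transition model, noise covariances, and simulation length are illustrative assumptions.

```python
import numpy as np

# Minimal Kalman filter estimating a constant-velocity state from
# noisy position measurements. All matrices and noise levels are illustrative.
rng = np.random.default_rng(0)
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])              # only position is measured
Q = 1e-3 * np.eye(2)                    # process noise covariance
R = np.array([[0.05]])                  # measurement noise covariance

x_hat = np.zeros((2, 1))                # state estimate
P = np.eye(2)                           # estimate covariance
x_true = np.array([[0.0], [1.0]])       # true state: position 0, velocity 1

for _ in range(100):
    x_true = F @ x_true
    z = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), (1, 1))  # noisy measurement
    # predict
    x_hat = F @ x_hat
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_hat = x_hat + K @ (z - H @ x_hat)
    P = (np.eye(2) - K @ H) @ P

print("estimated velocity:", x_hat[1, 0])  # should be close to the true value 1.0
```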

Optimal control may be the most used state-space formulation of control today, though the subsequent paradigm shift was towards robust control. The popular optimal control strategy Linear Quadratic Gaussian (LQG) combines a Kalman filter state estimator with optimal Linear Quadratic Regulator (LQR) state feedback. In 1978, John Doyle published the astonishing paper “Guaranteed Margins for LQG Regulators”, a one-page publication with a three-word abstract: “There are none.” In this work, it is shown that when the Kalman filter is in the loop, the combined LQG controller provides no global, system-independent guaranteed robustness properties. That means a system may show high performance while its robustness is arbitrarily low. Therefore, for a control system, not only performance and stability but also robustness, i.e., the sensitivity of the system with respect to uncertainties, and the trade-offs between them should be investigated.
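To give a feel for the optimal-control half of LQG, the sketch below computes an LQR state-feedback gain by solving the continuous-time algebraic Riccati equation with SciPy. The double-integrator plant and the Q and R weights are assumptions chosen purely for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQR state-feedback gain for the deterministic half of an LQG design,
# on an illustrative double-integrator plant. Q and R weights are assumptions.
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator: x1' = x2, x2' = u
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])                 # state weighting
R = np.array([[1.0]])                    # control effort weighting

P = solve_continuous_are(A, B, Q, R)     # solve the algebraic Riccati equation
K = np.linalg.inv(R) @ B.T @ P           # optimal gain: u = -K x
print("LQR gain K:", K)

# closed-loop eigenvalues should all have negative real parts (stable)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```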

Control system engineering has moved forward with the industrial revolutions. The shift from analog and mechanical technology to digital automation, with the adoption of computers in the third revolution, was a major step in applying control theory in practice. Digital computers are now widely used as part of control systems, and a growing number of control software requirements are in place for different applications. Nowadays, we are in the course of the fourth industrial revolution, Industry 4.0, which enhances the computerization of the third revolution with smart and autonomous systems through the introduction of Cyber-Physical Systems (CPS). CPS are autonomous systems with integrated computational and physical capabilities that gave birth to the Internet of Things (IoT) and Big Data.

The recent industrial revolutions are enablers of data-driven control methods. Another motivation for data-driven control is that obtaining a simple model for MBC is impractical for some systems, for example in neuroscience, turbulence, epidemiology, climate, and finance. While reduced-order models are one solution for high-dimensional linear systems, the nonlinear control objective can also be posed as an optimization problem with a high-dimensional, nonconvex cost function. When high-quality measurement data is available, Machine Learning (ML) techniques can be adopted for nonlinear optimization in high-dimensional spaces to control strongly nonlinear and multi-scale systems. Generally, ML techniques may be used either to identify a system for MBC or to determine the control law directly. Reinforcement Learning (RL), Iterative Learning Control (ILC), and Genetic Algorithms (GA) are examples of such methods. One of the most established applications of ML is self-driving vehicles, where the system has to interact with a complex environment and with humans. There are also other data-driven, model-free control methods, like Extremum Seeking Control (ESC). ESC is an effective technique to track the local extremum of an objective function despite disturbances and nonlinearities. Maximizing the road-tire interaction force during braking is a well-known example of where extremum seeking control can be applied.
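The sketch below shows a minimal perturbation-based extremum seeking loop that climbs towards the maximum of an unknown objective. The objective function itself, the perturbation amplitude and frequency, and the filter and integrator gains are all illustrative assumptions standing in for a real performance map such as braking force versus wheel slip.

```python
import numpy as np

# Minimal perturbation-based extremum seeking loop. The objective J is a
# stand-in for an unknown performance map; all gains and parameters are illustrative.
def J(theta):
    return 1.0 - (theta - 0.7) ** 2      # unknown maximum at theta = 0.7

dt = 0.01
omega = 5.0        # perturbation frequency [rad/s]
a = 0.05           # perturbation amplitude
gain = 2.0         # integrator gain
wc = 1.0           # high-pass cut-off [rad/s]
theta_hat = 0.0    # initial parameter estimate
lp = 0.0           # low-pass filter state (used to high-pass the output)

for k in range(int(60.0 / dt)):
    t = k * dt
    theta = theta_hat + a * np.sin(omega * t)   # probe the objective
    y = J(theta)
    lp += dt * wc * (y - lp)                    # low-pass filter of y
    y_hp = y - lp                               # high-passed output
    grad_est = y_hp * np.sin(omega * t)         # demodulate: gradient estimate
    theta_hat += dt * gain * grad_est           # integrate towards the extremum

print(f"estimated optimum: {theta_hat:.3f} (true optimum 0.7)")
```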

Control system engineering is a rapidly developing field that is evolving with technological advancements and real-world challenges. The future of the field is tied to the improvement of data-driven methods for high-dimensional nonlinear problems and to the adoption of more advanced solutions building on the possibilities offered by technological and industrial progress. Addressing complex control challenges requires multi-disciplinary competencies. We at Combine Control Systems aim at the edge of the field by combining these competencies to deliver solutions for control systems and let you “enter the next level”.

Author: Alireza Marzbanrad
 
 


Sources

  • “Control theory (mathematics)”. Encyclopaedia Britannica. https://www.britannica.com/science/control-theory-mathematics
  • Steven L. Brunton and J. Nathan Kutz. “Data-Driven Science and Engineering”. Cambridge University Press.
  • Katsuhiko Ogata. “Modern Control Engineering”. Prentice Hall.
  • Zhong-Sheng Hou and Zhuo Wang. “From model-based control to data-driven control: Survey, classification and perspective”. Information Sciences 235 (2013), pp. 3–35.
  • John Doyle. “Guaranteed Margins for LQG Regulators”. IEEE Transactions on Automatic Control, Vol. 23, Iss. 4, 1978, pp. 756–757.
  • Hannaneh Najdataei. “Efficient Data Streaming Analytic Designs for Parallel and Distributed Processing”. Doctoral thesis in computer science and engineering, Chalmers University of Technology.