- 1 Introduction to Classical Control
- 2 Introduction to State-Space Control Systems
- 3 Introduction to Modern Control Systems
- 4 Introduction to Digital Control Systems
- 5 Introduction to Non-Linear Control Systems
- 6 Artificial Intelligence in Control Systems
- 7 References
1 Introduction to Classical Control
Open-loop systems lack feedback and therefore cannot compensate for errors. Before the advent of sensors and control theory, people provided the feedback for these systems. As the complexity and speed of machines grew, people could no longer provide adequate feedback.
Today we have a large variety of sensors for feedback, allowing for fully automated control. With automated control comes the ability to make adjustments to a given system at rates well above any human's capacity (e.g. a 1 kHz update is reasonable for a computer and impossible for a person). Higher update rates allow a properly designed system to make corrections sooner. These corrections will be small since errors have had little time to accumulate.
Classical control techniques can be broken into Frequency Domain techniques and Time Domain techniques. Frequency Domain techniques are a suite of analysis and design tools that includes Root Locus, Pole Placement, Bode plots, and Nyquist plots. State-Space techniques are the only part of Classical control done in the Time Domain.
Single-Input Single-Output (SISO) systems are the primary focus of Classical control. Multiple-Input Multiple-Output (MIMO) systems can be controlled through Classical control techniques, but doing so usually requires particular conditions to hold. For example, SISO controllers can be used on a MIMO system if each input-output pair can be decoupled. In that case the system can be treated as a collection of SISO systems, and each controller can be designed without consideration of the others.
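As a sketch of the decoupled case, consider a hypothetical 2-input 2-output plant whose channels do not interact; each channel then gets its own proportional controller, designed without reference to the other. All plant dynamics, gains, and setpoints below are made up for illustration:

```python
# Two non-interacting first-order channels: x1' = -x1 + u1, x2' = -2*x2 + u2,
# each closed with its own independently designed P controller.

dt = 0.001
x = [0.0, 0.0]
gains = [5.0, 8.0]              # one SISO P controller per channel
setpoints = [1.0, -0.5]

for _ in range(20000):          # 20 s of simulated time, forward Euler
    u = [gains[i] * (setpoints[i] - x[i]) for i in range(2)]
    x[0] += dt * (-x[0] + u[0])
    x[1] += dt * (-2.0 * x[1] + u[1])

# P control leaves the usual steady-state error: equilibria are 5/6 and -0.4
print([round(v, 2) for v in x])
```

Because the channels never couple, each loop behaves exactly as its SISO design predicts.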
1.1 Control Systems: Generic Block Diagram
Design the pre-filter (W) and controller (K) on the basis of a nominal model for the plant such that the feedback system exhibits the following properties:
- Stability: if the system is perturbed, it returns to equilibrium
- Small tracking error
- Good low frequency command following
- Good low frequency disturbance attenuation
- Good high frequency noise attenuation
The stated goals must be achieved in the presence of the following sources of uncertainty:
- The plant model is not known exactly
- The disturbance and noise signals are not known exactly
- The sensor model is not known exactly
See Block Diagram Quick Reference for the necessary equations to form the open loop, closed loop, and disturbance rejection transfer functions.
1.2 Controller Design
Controllers are really just filters that shape the feedback error in order to achieve the desired system response. The input to a controller is the error between the desired system output and the measured system output. The controller's output is a filtered version of that error. A cleverly designed filter adjusts the feedback error so that the system responds well.
Controller design is a matter of choosing poles, zeros, and gains. Nothing more. Poles give a controller integral action, which responds slowly to errors but drives the system's step response to zero steady-state error. Zeros give a controller derivative action, which is fast but can amplify noise and leaves a non-zero steady-state error. Gains provide an instantaneous response to errors but leave a non-zero steady-state error.
The shortcomings of integral, derivative, and proportional control led to controllers that combine these properties. The most famous of these is the PID controller. See Standard Controller Forms for specifics on classic controllers like the PID.
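As a sketch of how the three actions combine, here is a minimal discrete-time PID controller driving a hypothetical first-order plant. The gains and sample time are illustrative, not tuned for any real system:

```python
# Minimal discrete PID controller sketch.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # accumulated error (integral action)
        self.prev_error = 0.0    # last error (for derivative action)

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # proportional + integral + derivative terms
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a first-order plant x' = -x + u toward a setpoint of 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(2000):            # 20 s of simulated time
    u = pid.update(1.0, x)
    x += (-x + u) * pid.dt       # forward-Euler plant step
print(round(x, 3))
```

Note that the integral term is what removes the steady-state error a pure P or PD controller would leave.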
2 Introduction to State-Space Control Systems
A state-space representation is a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations. To abstract from the number of inputs, outputs and states, the variables are expressed as vectors and the differential and algebraic equations are written in matrix form. The state-space representation provides a convenient and compact way to model and analyze MIMO systems. We would otherwise have to write down Laplace transforms to encode all the information about a system. Unlike the classical control (frequency domain) approach, the use of the state-space representation is not limited to systems with linear components and zero initial conditions. "State-space" refers to the space whose axes are the state variables. The state of the system can be represented as a vector within that space.
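As a sketch of how compact the representation is, here is a hypothetical mass-spring-damper (m = 1, b = 0.5, k = 2) written as x' = Ax + Bu, y = Cx + Du and simulated with a simple forward-Euler step; all values are illustrative:

```python
# Forward-Euler simulation of a state-space model x' = Ax + Bu, y = Cx.
# States: x1 = position, x2 = velocity; output is position.

A = [[0.0, 1.0],
     [-2.0, -0.5]]       # spring k = 2, damper b = 0.5, mass m = 1
B = [0.0, 1.0]
C = [1.0, 0.0]
dt = 0.001

x = [0.0, 0.0]
u = 2.0                   # constant force input (a step)
for _ in range(20000):    # 20 s of simulated time
    dx = [A[i][0] * x[0] + A[i][1] * x[1] + B[i] * u for i in range(2)]
    x = [x[i] + dt * dx[i] for i in range(2)]
y = C[0] * x[0] + C[1] * x[1]
print(round(y, 2))        # at equilibrium k*x1 = u, so position -> 1.0
```

The same four matrices (A, B, C, D) describe a system with any number of inputs and outputs, which is what makes the form convenient for MIMO work.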
2.1 State Variables
The internal state variables are the smallest possible subset of system variables that can represent the entire state of the system at any given time. State variables must be linearly independent (i.e. not a linear combination of other state variables). The minimum number of state variables required to represent a given system, n, is usually equal to the order of the system's defining differential equation. If the system is represented in transfer function form, the minimum number of state variables is equal to the order of the transfer function's denominator after it has been reduced to a proper fraction. It is important to understand that converting a state-space realization to transfer function form may lose internal information about the system: the transfer function may describe a system as stable when the state-space realization is internally unstable, because an unstable mode can be hidden by a pole-zero cancellation.
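The loss of internal information can be demonstrated with a small, made-up example: a realization whose unstable pole never appears in the output, so the transfer function looks stable while an internal state grows without bound:

```python
# A realization with a hidden unstable mode. The output only sees x2, so the
# transfer function from u to y reduces to 1/(s + 2), which looks stable,
# even though the internal state x1 has an unobservable pole at s = +1.

dt = 0.001
x1, x2 = 0.0, 0.0
u = 1.0
for _ in range(20000):            # 20 s of simulated time, forward Euler
    x1 += dt * (1.0 * x1 + u)     # x1' = +x1 + u  (unstable, unobservable)
    x2 += dt * (-2.0 * x2 + u)    # x2' = -2*x2 + u
y = x2                            # output never sees x1

print(round(y, 2))                # output settles near the stable value 0.5
print(x1 > 1e6)                   # meanwhile the hidden state has blown up
```

Any analysis done on the reduced transfer function alone would miss the runaway internal state.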
2.2 State-Space Controller Design
In many SISO systems a controller can be designed through a technique named Loop Shaping using Classical Control techniques; the most important tool in Loop Shaping is the Bode plot. For MIMO systems the singular value decomposition (SVD) provides a tool similar to the Bode plot. Loop Shaping is more difficult for MIMO systems than for SISO systems.
Since Loop Shaping is difficult, and sometimes impossible, for MIMO systems, other techniques were developed for designing state-space controllers. The two most popular state-space controllers are the Linear Quadratic Regulator (LQR) and the Linear Quadratic Gaussian (LQG).
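As a sketch of the LQR idea, here is the discrete-time Riccati recursion worked out for a hypothetical scalar plant; the plant and cost weights are illustrative:

```python
# Scalar discrete-time LQR sketch: minimize the sum of q*x^2 + r*u^2 for
# x[k+1] = a*x[k] + b*u[k] by iterating the Riccati recursion to a fixed
# point, then applying the optimal state feedback u = -K*x.

a, b = 1.1, 1.0          # open-loop unstable plant (|a| > 1)
q, r = 1.0, 1.0          # state and input weights in the cost

p = q                     # Riccati iteration: p converges to the fixed point
for _ in range(1000):
    k_gain = (b * p * a) / (r + b * p * b)
    p = q + a * p * a - a * p * b * k_gain

K = (b * p * a) / (r + b * p * b)   # optimal feedback gain

# The closed loop x[k+1] = (a - b*K) x[k] must satisfy |a - b*K| < 1.
x = 1.0
for _ in range(50):
    x = (a - b * K) * x
print(abs(a - b * K) < 1.0, round(x, 6))
```

The same recursion generalizes to matrices, which is where the state-space form pays off for MIMO plants.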
3 Introduction to Modern Control Systems
LQR and LQG are just the beginning of MIMO controller design. Modern control systems extend the suite of state-space controller designs. These controllers fall into one of two categories: Optimal or Robust.
Optimal controllers seek to achieve the desired results while minimizing or maximizing a particular cost function. An example of a cost function would be the energy required to achieve the desired result.
Robust controllers seek to achieve the desired results in the presence of significant uncertainties. An example of uncertainty is an RLC circuit where each resistor, inductor, and capacitor has a nominal value plus or minus some tolerance. If the tolerances are large enough then the model of that RLC circuit is only nominal, and the uncertainty is large enough to cause control problems. Robust controllers incorporate certain design techniques and enough margin to overcome the predicted uncertainty.
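The RLC example can be sketched with a small Monte Carlo experiment; the component values and the 10% tolerance below are made up for illustration:

```python
# Monte Carlo sketch of how component tolerances turn a nominal RLC model
# into a family of plants. Nominal L = 10 mH and C = 100 nF with a +/-10%
# uniform tolerance; the resonant frequency f0 = 1/(2*pi*sqrt(L*C)) then
# varies from manufactured unit to unit.

import math
import random

random.seed(0)
L_nom, C_nom, tol = 10e-3, 100e-9, 0.10

f_nominal = 1.0 / (2 * math.pi * math.sqrt(L_nom * C_nom))
freqs = []
for _ in range(10000):
    L = L_nom * random.uniform(1 - tol, 1 + tol)
    C = C_nom * random.uniform(1 - tol, 1 + tol)
    freqs.append(1.0 / (2 * math.pi * math.sqrt(L * C)))

spread = (max(freqs) - min(freqs)) / f_nominal
print(round(f_nominal), round(spread, 2))   # roughly 20% spread around nominal
```

A controller designed only for the nominal resonance must still behave acceptably across that whole spread, which is exactly the robust-control problem.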
3.1 Optimal Controller Design
As stated before, Optimal Control revolves around the minimization of a cost function. The example I was given as a student was that of the travelling salesman.
3.1.1 Optimal Control Systems Example: Travelling Salesman
The travelling salesman must drive to 3 cities, visiting each at least once. Minimizing the number of miles travelled is a way to minimize (optimize) the cost of doing business. In the travelling salesman problem it is easy to understand why the cost function to optimize is the number of miles driven. For a problem with only 3 cities the solution is fairly straightforward: add up the miles for every possible route and pick the shortest one.
However, when the travelling salesman example is expanded to 10 or more cities the number of calculations required becomes very large. At some point, at a fairly small number of cities, the number of possible routes becomes so large that they can't all be added up. This has led to a variety of analytical methods for finding the optimal route.
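The add-up-every-route approach can be sketched directly; the mileage table below is made up for illustration:

```python
# Brute-force travelling salesman: enumerate every route, sum the miles,
# keep the shortest. With n cities there are n! orderings, which is why
# this approach stops working at even modest n.

from itertools import permutations

# Symmetric mileage between a home base (0) and three cities (1-3).
miles = {
    (0, 1): 120, (0, 2): 200, (0, 3): 90,
    (1, 2): 100, (1, 3): 150, (2, 3): 110,
}

def dist(a, b):
    return miles[(min(a, b), max(a, b))]

def route_length(route):
    stops = (0,) + route + (0,)     # start and end at the home base
    return sum(dist(stops[i], stops[i + 1]) for i in range(len(stops) - 1))

best = min(permutations([1, 2, 3]), key=route_length)
print(best, route_length(best))     # shortest route and its mileage
```

For 3 cities this checks 6 routes; for 10 cities it would already be 3,628,800.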
3.1.2 Optimization Example: Microprocessor Manufacturing
Another example I was given in school was that of Intel. The example goes like this: Intel can manufacture 10 different chips. Each chip has a different profit margin, and each chip has a different number of units that can be sold each week. What is the optimal mix of chips to manufacture to maximize Intel's profit?
Problems like these are hard to solve exactly at scale. Solutions to optimal problems usually lead to an answer which is not quite optimal but very close to optimal. Typically the analytical solution is better than any solution found through other means.
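A scaled-down, made-up version of the chip-mix problem (three chip types, a demand cap per chip, and an assumed fixed fab capacity that the original example leaves implicit) is small enough to solve by exhaustive search:

```python
# Toy chip-mix optimization: three chip types, each with a per-unit profit
# and a weekly demand cap, sharing a fixed fabrication capacity. All numbers
# are invented for illustration. Exhaustive search over the feasible mixes
# finds the true optimum at this size.

profit   = {"A": 50, "B": 30, "C": 20}   # dollars per chip
demand   = {"A": 4,  "B": 6,  "C": 8}    # max sellable per week
capacity = 10                             # chips/week the fab can make

best_mix, best_profit = None, -1
for a in range(demand["A"] + 1):
    for b in range(demand["B"] + 1):
        for c in range(demand["C"] + 1):
            if a + b + c <= capacity:
                p = a * profit["A"] + b * profit["B"] + c * profit["C"]
                if p > best_profit:
                    best_mix, best_profit = (a, b, c), p

print(best_mix, best_profit)
```

Like the travelling salesman, the search space explodes as chip types and quantities grow, which is what pushes real problems toward analytical methods.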
3.1.3 The Cost of Optimal Control
Optimal control comes at a cost. That cost is usually one of performance. The cost function to be minimized is almost always actuation energy, tracking error, or something similar. It is typically not settling time or overshoot.
3.2 Robust Controller Design
Robust controller design is typically concerned with maintaining control and stability despite uncertainties in the plant, sensor, and actuation models. Robust control is useful in manufacturing environments where you are building multiple copies of the same plant. You want to design one controller for the nominal plant which is capable of controlling and stabilizing every plant manufactured, without designing a specific controller for each.
When variations from the nominal plant are small then classical control and state-space control methods are adequate. As the variations grow so does the need for Robust control methods.
3.2.1 Robust Control: H∞ Control
3.2.2 The Cost of Robust Control
Robust control also comes at a cost, and again that cost is usually one of performance. Margin held in reserve to tolerate the predicted uncertainty is margin not spent on performance, so a robust controller is generally more conservative than one tuned to a single, exactly known plant.
4 Introduction to Digital Control Systems
Typically a continuous-time controller is designed for the plant; then, using the Z-transform, a digital controller is formed from it. As a result there aren't many truly digital control design techniques.
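As a sketch of the continuous-to-digital step, here is the bilinear (Tustin) substitution applied to an illustrative first-order low-pass filter C(s) = a/(s + a); the pole and sample period are made up:

```python
# Forming a digital filter from a continuous design via the bilinear
# (Tustin) transform, s -> (2/T)(z - 1)/(z + 1).

a = 10.0     # continuous filter pole (rad/s), illustrative
T = 0.01     # sample period (s)

# Substituting s = (2/T)(z - 1)/(z + 1) into a/(s + a) and rearranging
# gives the difference equation:
#   y[k] = ((2 - a*T) y[k-1] + a*T (u[k] + u[k-1])) / (2 + a*T)
def step_filter(u_seq):
    out, y_prev, u_prev = [], 0.0, 0.0
    for u in u_seq:
        y = ((2 - a * T) * y_prev + a * T * (u + u_prev)) / (2 + a * T)
        out.append(y)
        y_prev, u_prev = y, u
    return out

# Unit-step response: like the continuous filter, it settles to DC gain 1.
ys = step_filter([1.0] * 500)    # 5 s of samples
print(round(ys[-1], 3))
```

The same substitution applied to a continuous controller's transfer function yields the difference equation the software actually runs.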
Analog control requires the controller be implemented in hardware. This is far more difficult than implementing the controller in software.
In today's aerospace environment the control of any satellite subsystem of reasonable complexity is designed by a team of people. This requires that the control engineer be cross-disciplinary: not just familiar with the topics but adept at mechanical and electrical design. It's easy to make mistakes, and these designs need double and triple checking.
With digital controls you can take a measurement of the plant to be controlled then develop a controller optimized for that plant. Variations from the nominal plant can be minimized by custom design for the measured plant. Many incorrectly modeled items can be dealt with when the controller is implemented digitally.
5 Introduction to Non-Linear Control Systems
Most real-world systems have non-linearities. Control theory, however, revolves around linear systems: all classical control theory is based on the assumption that the plant to be controlled and the controller itself are linear time-invariant (LTI) systems.
Typically control systems are designed so that, around the nominal operating point, the system can be modeled with linear approximations. To a large extent, the linearization of non-linear systems is what non-linear control theory encompasses. Often a non-linear system is linearized around a particular operating point and a controller is designed for that point. Then another operating point is considered and a new controller is designed for it. On top of these per-operating-point controllers, a transition scheme must be developed.
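One common transition scheme is gain scheduling: interpolate between the gains designed at each operating point. The operating points and gains below are made up for illustration:

```python
# Gain-scheduling sketch: a proportional gain designed at a few operating
# points, with linear interpolation as the transition scheme between them.

# (operating point, proportional gain designed at that point)
schedule = [(0.0, 2.0), (5.0, 3.5), (10.0, 6.0)]

def scheduled_gain(op):
    """Interpolate the controller gain for the current operating point."""
    if op <= schedule[0][0]:
        return schedule[0][1]
    if op >= schedule[-1][0]:
        return schedule[-1][1]
    for (x0, k0), (x1, k1) in zip(schedule, schedule[1:]):
        if x0 <= op <= x1:
            frac = (op - x0) / (x1 - x0)
            return k0 + frac * (k1 - k0)

def control(setpoint, measurement, op):
    # the control law itself is ordinary P control; only the gain moves
    return scheduled_gain(op) * (setpoint - measurement)

print(scheduled_gain(0.0), scheduled_gain(2.5), scheduled_gain(7.5))
```

Between design points the controller is neither of the two neighboring designs, so the blended behavior still has to be verified across the whole operating range.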
6 Artificial Intelligence in Control Systems
Artificial Intelligence (AI) is not widely used in control, but where it is used it is typically for systems which are
- non-linear or
- the plant/environment changes rapidly
There are promising areas of study employing AI control through neural networks to maintain control and stability for systems where the environment changes (e.g. a submarine diving through different thermal layers in the water) or where the plant changes (e.g. an airplane where one engine fails or the fuselage is damaged).
There are many areas of research in this area, and I find it fascinating, but I don't know enough about it to add much more.
7 References
- Leigh, J. R. (2004). Control Theory. ISBN 0863413390