In The Name of Omnipotent
What is Digital control?
Digital control is a branch of control theory that uses digital computers to act as system controllers. Depending on the requirements, a digital control system can range from a microcontroller to an ASIC to a standard desktop computer. Since a digital computer is a discrete system, the Laplace transform is replaced with the Z-transform. Also, since a digital computer has finite precision (see quantization), extra care is needed to ensure that errors in coefficients, A/D conversion, D/A conversion, etc. do not produce undesired or unplanned effects.
The application of digital control can readily be understood in the use of feedback. Since the creation of the first digital computer in the early 1940s, the price of digital computers has dropped considerably, which has made them key components of control systems for several reasons:
- Cheap: under $5 for many microcontrollers
- Flexibility: easy to configure and reconfigure through software
- Static operation: digital computers are much less prone to environmental conditions than capacitors, inductors, etc.
- Scaling: programs can scale to the limits of the memory or storage space without extra cost
- Adaptive: parameters of the program can change with time (See adaptive control)
Digital Controller Implementation
A digital controller is usually cascaded with the plant in a feedback system; the rest of the system can be either digital or analog.
Typically, a digital controller requires:
- A/D conversion to convert analog inputs to machine readable (digital) format
- D/A conversion to convert digital outputs to a form that can be input to a plant (analog)
- A program that relates the outputs to the inputs
Output Program
- Outputs from the digital controller are functions of current and past input samples, as well as past output samples - this can be implemented by storing relevant values of input and output in registers. The output can then be formed by a weighted sum of these stored values.
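A minimal sketch of such an output program, assuming an illustrative second-order controller with made-up weights (the deques play the role of the registers mentioned above):

```python
from collections import deque

# Hypothetical weights; a real controller derives these from its design.
b = [0.5, 0.3, 0.2]    # weights on current and past input samples
a = [0.1, 0.05]        # weights on past output samples

x_reg = deque([0.0] * 3, maxlen=3)  # stores x[n], x[n-1], x[n-2]
y_reg = deque([0.0] * 2, maxlen=2)  # stores y[n-1], y[n-2]

def step(x_new):
    """One sample period: shift the registers, form the weighted sum."""
    x_reg.appendleft(x_new)
    y = (sum(bi * xi for bi, xi in zip(b, x_reg))
         - sum(ai * yi for ai, yi in zip(a, y_reg)))
    y_reg.appendleft(y)
    return y

print(step(1.0))  # 0.5: only the current-input register is non-zero yet
```

Each call to `step` models one A/D sample arriving; in a real controller the returned value would be sent to the D/A converter.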
The program can take numerous forms and perform many functions:
- A digital filter for low-pass filtering (analog filters are preferred because digital filters introduce more delay)
- A state space model of a system to act as a state observer
- A telemetry system
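The state observer item can be illustrated with a scalar discrete-time sketch; the plant model, observer gain, and input below are all assumed for illustration:

```python
# Assumed scalar plant: x[n+1] = 0.9*x[n] + u[n], measured as y[n] = x[n].
a, b, c, L = 0.9, 1.0, 1.0, 0.5   # L is an assumed observer gain

x, xhat = 5.0, 0.0                 # true state vs. initial estimate
for n in range(60):
    u = 0.1                        # arbitrary known control input
    y = c * x                      # measurement from the plant
    xhat = a * xhat + b * u + L * (y - c * xhat)  # observer update
    x = a * x + b * u              # plant update
print(abs(x - xhat) < 1e-3)        # True: the estimate converged
```

The estimation error obeys e[n+1] = (a - L*c) * e[n], so any gain with |a - L*c| < 1 makes the estimate converge to the true state.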
Stability
Note that although a controller may be stable when implemented as an analog controller, it could be unstable when implemented as a digital controller due to a large sampling interval. The sampling rate thus characterizes the transient response and stability of the compensated system, and the values at the controller input must be updated often enough so as not to cause instability.
Stability of digital control systems can be checked by applying a specific bilinear transform that maps the z-domain into the Laplace domain, allowing the use of the Routh-Hurwitz stability criterion. This bilinear transform is application specific and cannot be used to compare system attributes, such as transient responses, across the s and z domains.
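As a concrete sketch: for a second-order characteristic polynomial z^2 + a1*z + a0, substituting the bilinear map z = (1 + w)/(1 - w) (which carries the unit disc to the left half-plane) and clearing denominators gives a quadratic in w, to which the Routh-Hurwitz test (for a quadratic: all coefficients of the same sign) applies:

```python
def stable_2nd_order(a1, a0):
    """Stability of z^2 + a1*z + a0 via the bilinear map z = (1+w)/(1-w).

    Substituting and clearing (1 - w)^2 gives
        (1 - a1 + a0)*w^2 + (2 - 2*a0)*w + (1 + a1 + a0),
    and Routh-Hurwitz for a quadratic requires all coefficients > 0.
    """
    c2 = 1 - a1 + a0
    c1 = 2 - 2 * a0
    c0 = 1 + a1 + a0
    return c2 > 0 and c1 > 0 and c0 > 0

print(stable_2nd_order(-0.5, 0.06))  # True:  poles at z = 0.2, 0.3
print(stable_2nd_order(-2.0, 0.75))  # False: poles at z = 0.5, 1.5
```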
What is Nonlinear control?
Non-linear control is a sub-division of control engineering which deals with the control of non-linear systems. The behavior of a non-linear system cannot be described as a linear function of the state of that system or the input variables to that system. For linear systems, there are many well-established control techniques, for example root-locus, Bode plot, Nyquist criterion, state-feedback, pole-placement etc.
Properties of non-linear systems
Some properties of non-linear dynamic systems are:
- They do not follow the principle of superposition (linearity and homogeneity).
- They may have multiple isolated equilibrium points (a linear system has at most one isolated equilibrium point).
- They may exhibit properties such as limit-cycle, bifurcation, chaos.
- Finite escape time: The state of an unstable nonlinear system can go to infinity in finite time.
- Non-linear systems cannot be described in terms of their eigenvectors, unlike, for instance, translation-invariant linear systems that can be completely described by the input/output relationship of sinusoidal inputs.
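The first property, failure of superposition, can be checked numerically; f(x) = x^2 serves as an assumed toy nonlinearity:

```python
def linear(x):     # a linear map: satisfies superposition
    return 3 * x

def nonlinear(x):  # an assumed toy nonlinearity
    return x * x

# Superposition: f(x1 + x2) == f(x1) + f(x2) for all inputs.
print(linear(1 + 2) == linear(1) + linear(2))           # True
print(nonlinear(1 + 2) == nonlinear(1) + nonlinear(2))  # False: 9 != 5
```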
Analysis and control of non-linear systems
There are several well-developed techniques for analyzing nonlinear feedback systems:
- Describing function method
- Phase plane method
- Lyapunov stability analysis
- Singular perturbation method
- Popov criterion (described in The Lur'e Problem below)
- Center manifold theorem
- Small-gain theorem
- Passivity analysis
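As a toy illustration of Lyapunov stability analysis, take the assumed system dx/dt = -x^3 with candidate function V(x) = x^2; V should decrease along trajectories, which a forward-Euler simulation confirms:

```python
def f(x):   # assumed nonlinear system: dx/dt = -x**3
    return -x ** 3

def V(x):   # candidate Lyapunov function
    return x * x

x, dt = 2.0, 1e-3
vals = []
for _ in range(5000):   # 5 seconds of simulated time
    vals.append(V(x))
    x += dt * f(x)      # forward-Euler step
print(all(b <= a for a, b in zip(vals, vals[1:])))  # True: V never increases
```

Analytically, dV/dt = 2*x*(-x^3) = -2*x^4 <= 0, so the origin is stable; the simulation merely makes this visible.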
Control design techniques for non-linear systems also exist. These can be subdivided into techniques which attempt to treat the system as a linear system over a limited range of operation and apply well-known linear design techniques to each region:
- Gain scheduling
Those that attempt to introduce auxiliary nonlinear feedback in such a way that the system can be treated as linear for purposes of control design:
- Feedback linearization
And Lyapunov-based methods:
- Lyapunov Redesign
- Nonlinear Damping
- Backstepping
- Sliding mode control
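A sketch of the last technique, sliding mode control, for an assumed double-integrator plant x'' = u with sliding surface s = x' + x; once the switching control forces s to 0, the dynamics reduce to x' = -x and the state decays to the origin:

```python
def sign(s):
    return 1.0 if s > 0 else -1.0 if s < 0 else 0.0

x, v = 1.0, 0.0        # position and velocity of the double integrator
dt, k = 1e-3, 2.0      # step size and an assumed switching gain
for _ in range(5000):  # 5 seconds of simulated time
    s = v + x          # sliding surface s = x' + x
    u = -k * sign(s)   # switching control drives the state onto s = 0
    v += dt * u
    x += dt * v
print(abs(x) < 0.05)   # True: the state has reached a neighborhood of 0
```

The discontinuous sign function produces the well-known chattering of sliding mode control, visible here as small oscillations of v around -x once the sliding surface is reached.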