INTRODUCTION
Control theory is an interdisciplinary branch of engineering and mathematics. It deals with the response of a dynamic system to a given input and with changing that behaviour through feedback. Automatic control is important in many engineering applications and in science: it is used in process plants, robotics, space vehicles and in many industrial operations such as the monitoring and control of temperature, pressure, humidity, etc.
The commonly used control theories are conventional/classical control theory, modern control theory and robust control theory. A basic knowledge of the Laplace transform, differential equations, partial-fraction expansion and vector-matrix algebra is required for a complete understanding of control theory.
Control theories and techniques are roughly classified into:
· Classical Control: the Proportional-Integral-Derivative (PID) controller, used by many industries since the 1940s to control pressure, temperature, etc. Examples: process control in chemical plants, aeroplanes. (A minimal PID sketch in Python appears just after this list.)
· Optimal Control: the Kalman filter and the linear quadratic regulator (LQR), developed in the 1960s to achieve optimal performance.
· Modern Control: centred around robust control and associated topics, developed from the 1980s to the 1990s.
· Robust Control: H∞ (H-infinity) control, developed to handle systems with uncertainties and disturbances while maintaining high performance.
· Non-linear Control: a very active research topic, developed to handle non-linear systems with high performance. Examples: missiles.
· Intelligent Control: these techniques adopt various AI approaches such as artificial neural networks (ANN), fuzzy logic, knowledge-based control, adaptive control, evolutionary computation and genetic algorithms to control highly dynamic systems. Developed in the 1990s to handle systems with unknown models. Examples: ecosystems, human systems.
There are many other classifications in control theory besides the above; the list above is only an introduction to the main types of control theory.
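To make the classical PID idea concrete, here is a minimal discrete-time PID sketch in Python. The first-order plant, the gains and the time step are illustrative assumptions only, not values taken from any particular process.

```python
class PID:
    """Textbook discrete PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Drive an assumed first-order plant dy/dt = (-y + u) / tau to a setpoint of 1.0.
dt, tau = 0.01, 1.0
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=dt)
y = 0.0
for _ in range(3000):                 # simulate 30 seconds
    u = pid.update(setpoint=1.0, measurement=y)
    y += dt * (-y + u) / tau          # Euler step of the plant model
print(round(y, 3))                    # ~1.0: integral action removes steady-state error
```

In practice the derivative term is usually filtered and the gains are tuned to the real plant (for example with the Ziegler-Nichols rules mentioned later), but the loop structure stays the same.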
1868 The control systems field began with the work of the physicist James Clerk Maxwell on the dynamic analysis of the centrifugal Watt governor used for speed control of a steam engine.
1877 Edward John Routh extended Maxwell's results to the general class of linear systems. Adolf Hurwitz independently analysed system stability using differential equations; the result is now known as the Routh-Hurwitz stability criterion.
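As a small aside, the Routh-Hurwitz test is easy to sketch in Python: build the Routh array from the characteristic-polynomial coefficients and check that the first column keeps one sign. This simplified version ignores the special cases (a zero appearing in the first column) and is only meant to illustrate the idea.

```python
def routh_first_column(coeffs):
    """coeffs: highest power first, e.g. s^3 + 3s^2 + 3s + 1 -> [1, 3, 3, 1]."""
    row_prev = [float(c) for c in coeffs[0::2]]   # s^n row
    row_curr = [float(c) for c in coeffs[1::2]]   # s^(n-1) row
    width = len(row_prev)
    row_curr += [0.0] * (width - len(row_curr))
    first_col = [row_prev[0], row_curr[0]]
    for _ in range(len(coeffs) - 2):              # remaining rows of the array
        new_row = [(row_curr[0] * row_prev[j + 1] - row_prev[0] * row_curr[j + 1])
                   / row_curr[0] for j in range(width - 1)] + [0.0]
        row_prev, row_curr = row_curr, new_row
        first_col.append(row_curr[0])
    return first_col


def is_stable(coeffs):
    # Stable when the first column shows no sign change (leading coefficient > 0).
    return all(c > 0 for c in routh_first_column(coeffs))


print(is_stable([1, 3, 3, 1]))  # (s + 1)^3: all roots in the left half-plane -> True
print(is_stable([1, 1, 2, 8]))  # s^3 + s^2 + 2s + 8 has right-half-plane roots -> False
```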
1922 Minorsky worked on automatic controllers for steering ships and showed how stability could be determined from the differential equations describing the system.
1932 Nyquist developed a relatively simple procedure for determining the stability of closed-loop systems on the basis of the open-loop response to steady-state sinusoidal inputs.
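A rough numerical sketch of that idea, for an assumed open loop G(s) = K / (s(s+1)(s+2)) under unity feedback: for this simple minimum-phase example the closed loop is stable when the open-loop gain is below 1 at the frequency where the phase reaches -180 degrees. (The full Nyquist criterion counts encirclements of the -1 point; this is only the gain-margin shortcut.)

```python
import numpy as np

def open_loop(w, K):
    s = 1j * w
    return K / (s * (s + 1) * (s + 2))        # assumed open-loop transfer function

def stable_by_gain_margin(K):
    w = np.linspace(0.01, 100.0, 100000)      # frequency grid (rad/s)
    g = open_loop(w, K)
    phase = np.unwrap(np.angle(g))
    idx = np.argmin(np.abs(phase + np.pi))    # phase-crossover frequency
    return np.abs(g[idx]) < 1.0               # gain margin above 1 -> stable

print(stable_by_gain_margin(2.0))    # True:  K below the critical gain of 6
print(stable_by_gain_margin(10.0))   # False: K above the critical gain of 6
```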
1934 Hazen introduced the term SERVOMECHANISM for position
control systems.
1940 The frequency-response method was developed, which enabled engineers to design linear closed-loop control systems.
1940-1950 PID controllers were used in many industrial control systems to control pressure, temperature, etc. Ziegler and Nichols suggested rules for tuning PID controllers, known as the Ziegler-Nichols tuning rules. During this period the root-locus method due to Evans was fully developed.
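For reference, the classic "ultimate gain" form of the Ziegler-Nichols rules can be written down in a few lines. Ku is the proportional gain at which the loop first sustains a constant-amplitude oscillation, Tu is the period of that oscillation, and the Ku and Tu values in the example are hypothetical.

```python
def ziegler_nichols(ku, tu, controller="PID"):
    """Classic Ziegler-Nichols (ultimate gain) tuning rules."""
    rules = {
        "P":   {"Kp": 0.50 * ku},
        "PI":  {"Kp": 0.45 * ku, "Ti": tu / 1.2},
        "PID": {"Kp": 0.60 * ku, "Ti": tu / 2.0, "Td": tu / 8.0},
    }
    return rules[controller]


# Hypothetical plant measurements: Ku = 10, Tu = 2 s.
print(ziegler_nichols(10.0, 2.0))   # {'Kp': 6.0, 'Ti': 1.0, 'Td': 0.25}
```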
1960 Control theory for modern plants with many inputs and many outputs was developed. Such systems are complex and require a large number of equations. Modern control theory, based on time-domain analysis and synthesis using state variables, emerged in this period; time-domain analysis of complex systems became practical with the availability of digital computers.
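A minimal sketch of the state-variable (state-space) form x' = Ax + Bu, y = Cx + Du, with an assumed mass-spring-damper as the plant and plain Euler integration; the numerical values are illustrative only.

```python
import numpy as np

# Assumed plant: m*x'' + c*x' + k*x = u, with states [position, velocity].
m, c, k = 1.0, 4.0, 4.0
A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])
B = np.array([[0.0],
              [1.0 / m]])
C = np.array([[1.0, 0.0]])    # measure the position only
D = np.array([[0.0]])

dt = 0.001
x = np.array([[0.0], [0.0]])  # initial state
u = np.array([[1.0]])         # constant unit force input
for _ in range(10000):        # simulate 10 seconds with Euler steps
    x = x + dt * (A @ x + B @ u)
y = C @ x + D @ u
print(round(float(y[0, 0]), 3))   # ~0.25, the static deflection u/k
```

State-space models of this kind are what the multi-input, multi-output design methods of the 1960s, and later LQR and Kalman filtering, operate on.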
1960-1980 Optimal control of both deterministic and stochastic systems was investigated. Adaptive and learning control of complex systems were also studied.
1980-1990 Robust control and its associated topics were developed.
REFERENCES
· Katsuhiko Ogata, Modern Control Engineering, 5th Edition.
· Wikipedia