Introduction to System Modeling and Control

Outline:
- Introduction
- Basic Definitions
- Different Model Types
- System Identification
- Neural Network Modeling

Mathematical Modeling (MM)

A mathematical model represents a physical system in terms of mathematical equations. It is derived from physical laws (e.g., Newton's laws, Hooke's law, circuit laws) in combination with experimental data. It quantifies the essential features and behavior of a physical system or process, and may be used for prediction, design modification, and control.

Engineering Modeling Process
[Diagram: theory (e.g., m dv/dt = f - b*v) and experimental data feed into a mathematical model of the engineering system; the model yields numerical solutions and solution data, and supports model reduction and control design.]

Example: Automobile Engine
- Design and control
- Heat & vibration analysis
- Structural analysis
- Graphical visualization/animation

System Variables
Every system is associated with three variables:
[Diagram: input u -> system (state x) -> output y]

- Input variables (u) originate outside the system and are not affected by what happens inside it.
- State variables (x) constitute a minimal set of system variables necessary to describe completely the state of the system at any given time.
- Output variables (y) are a subset, or a functional combination, of the state variables that one wishes to monitor or regulate.

Mathematical Model Types
[Taxonomy: model classes include lumped-parameter, distributed-parameter, and discrete-event models; the lumped-parameter class is the one used here.]

- General state-space model:
    x' = f(x, u, t)
    y  = h(x, u, t)
- Linear time-invariant (LTI):
    x' = A*x + B*u
    y  = C*x + D*u
- Input-output model:
    y^(n) = f(y^(n-1), ..., y', y, u^(n), ..., u', u, t)
- LTI input-output model:
    y^(n) + a1*y^(n-1) + ... + a(n-1)*y' + an*y = b0*u^(n) + ... + b(n-1)*u' + bn*u
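As a minimal sketch of how the LTI state-space form x' = A*x + B*u, y = C*x + D*u can be simulated, the snippet below uses a forward-Euler step. The matrices are arbitrary illustrative examples, not values from the text.

```python
import numpy as np

# Illustrative LTI state-space simulation: x' = A x + B u, y = C x + D u.
# A, B, C, D below are arbitrary stable example matrices.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def simulate_lti(A, B, C, D, u, x0, dt):
    """Forward-Euler simulation of an LTI model for an input sequence u."""
    x = np.array(x0, dtype=float)
    ys = []
    for uk in u:
        ys.append((C @ x + D.flatten() * uk).item())
        x = x + dt * (A @ x + B.flatten() * uk)  # Euler step
    return np.array(ys)

# Unit-step response; steady state is y_ss = -C A^{-1} B = 0.5 for these matrices
y = simulate_lti(A, B, C, D, u=np.ones(5000), x0=[0.0, 0.0], dt=0.001)
```

A fixed-step Euler integration is the simplest choice; any ODE solver would serve the same purpose.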

- Discrete-time model: replace derivatives with time shifts, i.e., x'(t) -> x(t+1) and y^(i)(t) -> y(t+i).
- Transfer function model: Y(s) = G(s)*U(s).

Example: Accelerometer (Text 6.6.1)
Consider the mass-spring-damper system shown below (it may be used as an accelerometer or seismograph).
[Free-body diagram: mass M acted on by spring force fs and damper force fd; base displacement u, mass displacement x, relative displacement y = u - x.]
- fs(y): position-dependent spring force
- fd(y'): velocity-dependent damper force

Newton's 2nd law:
  M*x'' = fd(y') + fs(y)
Linearized model:
  M*y'' + b*y' + k*y = M*u''

Accelerometer Transfer Function

Accelerometer model: M*y'' + b*y' + k*y = M*u''
With a = u'' (the base acceleration), the transfer function is
  Y(s)/A(s) = 1 / (s^2 + 2*z*wn*s + wn^2)
where wn = (k/M)^(1/2) is the natural frequency and z = b/(2*M*wn) is the damping factor.
The model can be used to evaluate the sensitivity of the accelerometer.

Impulse Response
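The accelerometer transfer function above can be evaluated on a frequency grid to see the usable flat band below the natural frequency. The parameter values wn and z here are illustrative choices, not from the text.

```python
import numpy as np

# Frequency response of the accelerometer model Y/A = 1/(s^2 + 2 z wn s + wn^2).
# wn and z are illustrative values, not from the text.
wn, z = 100.0, 0.7          # natural frequency (rad/s), damping factor
w = np.logspace(0, 4, 400)  # frequency grid (rad/s)
s = 1j * w
G = 1.0 / (s**2 + 2 * z * wn * s + wn**2)
mag_db = 20 * np.log10(np.abs(G))

# Well below wn the gain is flat at 1/wn^2 (about -80 dB here); this flat
# band is what makes the device usable as an accelerometer.
low_freq_gain_db = mag_db[0]
```

Plotting mag_db against w on a log scale reproduces the magnitude half of the Bode diagram shown on the slides.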

[Impulse-response plot omitted.]

Frequency Response
[Bode diagram of Y/A: magnitude (dB) and phase (deg) vs. normalized frequency w/wn, from 10^-1 to 10^1 rad/sec.]

Mixed Systems
- Most systems in mechatronics are of the mixed type, e.g., electromechanical, hydromechanical, etc.
- Each subsystem within a mixed system can be modeled as a single-discipline system first.
- Power transformations among the various subsystems are used to integrate them into the overall system.
- The overall mathematical model may be assembled into a system of equations, or a transfer function.

Electro-Mechanical Example
[Schematic: DC motor with armature resistance Ra, inductance La, current ia, rotor inertia J, damping B.]
Input: voltage u. Output: angular velocity w.

Electrical subsystem (loop method):
  u = Ra*ia + La*(dia/dt) + eb,   where eb is the back-emf voltage
Mechanical subsystem:
  Tmotor = J*(dw/dt) + B*w

Power Transformation:
- Torque-current: Tmotor = Kt*ia
- Voltage-speed: eb = Kb*w
where Kt is the torque constant and Kb is the velocity constant. For an ideal motor, Kt = Kb.

Combining the previous equations results in the following mathematical model:
  La*(dia/dt) + Ra*ia + Kb*w = u
  J*(dw/dt) + B*w - Kt*ia = 0

System Identification
Experimental determination of the system model. There are two methods of system identification:
- Parametric identification: the input-output model coefficients are estimated to fit the input-output data.
- Frequency-domain (non-parametric): the Bode diagram [|G(jw)| vs. w in log-log scale] is estimated directly from the input-output data. The input can be either a sweeping sinusoid or a random signal.

Electro-Mechanical Example
Transfer function with La = 0:
  W(s)/U(s) = (Kt/Ra) / (J*s + B + Kt*Kb/Ra) = k/(T*s + 1)
[Step-response plot: for a step input u, the output rises toward k*u; here k = 10, T = 0.1. The time constant T is read off where the response reaches about 63% of its final value.]
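The graphical first-order identification described by the step-response plot can be sketched as follows: read the gain k from the steady state and the time constant T from the 63.2% rise time. The data here are synthetic, generated from the slide's example values k = 10, T = 0.1.

```python
import numpy as np

# Graphical first-order identification sketch: a first-order step response is
#   y(t) = k u (1 - exp(-t/T)),
# so k is the steady-state gain and T the time to reach 63.2% of it.
k_true, T_true, u0 = 10.0, 0.1, 1.0   # values from the slide's example
t = np.arange(0.0, 0.5, 1e-4)
y = k_true * u0 * (1 - np.exp(-t / T_true))

k_est = y[-1] / u0                        # steady-state gain estimate
T_est = t[np.argmax(y >= 0.632 * y[-1])]  # 63.2% rise-time estimate
```

With noisy measurements both readings degrade quickly, which is exactly the limitation the next slide raises.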

Comments on First-Order Identification
The graphical method is:
- difficult to optimize with noisy data and multiple data sets
- only applicable to low-order systems
- difficult to automate

Least Squares Estimation
Given a linear system with uniformly sampled input-output data (u(k), y(k)), then
  y(k) + a1*y(k-1) + ... + an*y(k-n) = b1*u(k-1) + ... + bn*u(k-n) + noise
A least-squares curve-fitting technique may be used to estimate the coefficients of the above model, called an ARMA (Auto-Regressive Moving Average) model.

Nonlinear System Modeling & Control: Neural Network Approach
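The least-squares estimate of the ARMA model coefficients amounts to one linear regression. A minimal sketch with n = 2, using coefficients chosen here purely for illustration:

```python
import numpy as np

# Least-squares estimation of the ARMA model coefficients from I/O data:
#   y(k) + a1 y(k-1) + a2 y(k-2) = b1 u(k-1) + b2 u(k-2) + noise
# Data are simulated from known coefficients so the estimate can be checked.
rng = np.random.default_rng(0)
a, b = [-1.5, 0.7], [1.0, 0.5]          # true (illustrative) coefficients
u = rng.standard_normal(2000)
y = np.zeros(2000)
for k in range(2, 2000):
    y[k] = -a[0] * y[k - 1] - a[1] * y[k - 2] + b[0] * u[k - 1] + b[1] * u[k - 2]

# Each regression row is [ -y(k-1), -y(k-2), u(k-1), u(k-2) ]
Phi = np.column_stack([-y[1:-1], -y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
# theta estimates [a1, a2, b1, b2]
```

With noise-free data the recovery is exact; with noise, the same regression gives the least-squares fit the slide refers to.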

- Real-world nonlinear systems are often difficult to characterize by first-principle modeling.
- First-principle models are often suitable for control design.
- Modeling is often accomplished with input-output maps built from experimental data.
- Neural networks provide a powerful tool for data-driven modeling of nonlinear systems.

Input-Output (NARMA) Model
[Block diagram: delayed inputs and outputs (z^-1 blocks) feed a nonlinear map g.]
  y[k] = g(y[k-m], ..., y[k-1], u[k-m], ..., u[k-1])

What is a Neural Network?
Artificial Neural Networks (ANNs) are massively parallel computational machines (programs or hardware) patterned after biological neural nets. ANNs are used in a wide array of applications requiring reasoning/information processing, including:
- pattern recognition/classification
- monitoring/diagnostics
- system identification & control
- forecasting
- optimization

Advantages and Disadvantages of ANNs
Advantages:
- learning from data
- parallel architecture
- adaptability
- fault tolerance and redundancy

Disadvantages:
- hard to design
- unpredictable behavior
- slow training
- curse of dimensionality

Biological Neural Nets
- A neuron is the building block of biological networks.
- A single-cell neuron consists of the cell body (soma), dendrites, and an axon.
- The dendrites receive signals from the axons of other neurons. The pathway between neurons is the synapse, which has variable strength.

Artificial Neural Networks
- ANNs are used to learn a given input-output relationship from input-output data (exemplars).
- The neural network type depends primarily on its activation function.
- Most popular ANNs:

  - sigmoidal multilayer networks
  - radial basis function networks
  - NLPN (Sadegh et al., 1998, 2010)

Multilayer Perceptron
The MLP is used to learn, store, and reproduce input-output relationships:
  y = sum_i wi * phi(sum_j vij * xj)
where wi and vij are the weights and phi is the activation function.

The activation function phi(x) is a suitable nonlinear function:
- Sigmoidal: phi(x) = tanh(x)
- Gaussian: phi(x) = e^(-x^2)
- Triangular (to be described later)

[Plot: sigmoidal and Gaussian activation functions on x in [-5, 5].]

Multilayer Networks
[Diagram: layered network mapping x to y through weight matrices W0, ..., Wp.]
Wk,ij: weight from node i in layer k-1 to node j in layer k.

  y = Wp^T * phi(W(p-1)^T * phi( ... W1^T * phi(W0^T * x)))

Universal Approximation Theorem (UAT)
A single-hidden-layer perceptron network with a sufficiently large number of neurons can approximate any continuous function arbitrarily closely.
Comments:
- The UAT does not say how large the network should be.
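The layered expression above can be made concrete with a single-hidden-layer forward pass. Sizes and weight values below are illustrative placeholders.

```python
import numpy as np

# Forward pass of a single-hidden-layer perceptron,
#   y = W1^T tanh(W0^T x + b0) + b1,
# an instance of the layered form y = Wp^T phi(... phi(W0^T x)).
# Shapes and random weight values are illustrative.
rng = np.random.default_rng(1)
W0 = rng.standard_normal((2, 8))   # input (2) -> hidden (8) weights
b0 = rng.standard_normal(8)
W1 = rng.standard_normal((8, 1))   # hidden (8) -> output (1) weights
b1 = rng.standard_normal(1)

def mlp(x):
    h = np.tanh(W0.T @ x + b0)      # hidden-layer activations
    return (W1.T @ h + b1).item()   # linear output layer

y_out = mlp(np.array([0.5, -1.0]))
```

By the UAT, widening the hidden layer (and training the weights) lets this structure approximate any continuous map on a compact set.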

- Optimal design and training may be difficult.

Training
Objective: given a set of training input-output data (x, yt), find the network weights that minimize the expected error
  L = E(||y - yt||^2)
Steepest descent method: adjust the weights in the direction of steepest descent of L, to make dL as negative as possible:
  dL = E(e^T dy),   e = y - yt

Neural Networks with Local Basis Functions

- These networks employ basis (or activation) functions that act locally, i.e., they are activated only by a certain range of stimuli.
- Examples:
  - Cerebellar Model Articulation Controller (CMAC, Albus)
  - B-spline CMAC
  - radial basis functions
  - Nodal Link Perceptron Network (NLPN)

Biological Underpinnings
- The cerebellum is responsible for complex voluntary movement and balance in humans.
- The Purkinje cells in the cerebellar cortex are believed to have a CMAC-like architecture.

Nodal Link Perceptron Network (NLPN) [Sadegh, 95, 98]
- Piecewise-multilinear network (extension of the 1-dimensional spline)
- Good approximation capability (2nd order)

- Convergent training algorithm
- Globally optimal training is possible
- Has been used in real-world control applications

NLPN Architecture
Input-output equation:
  y = sum_i wi * Phi_i(x, v)
Basis function (a product of one-dimensional factors):
  Phi_i(x, v) = phi_i1(x1, v) * phi_i2(x2, v) * ... * phi_in(xn, v)

Each phi_ij is a 1-dimensional triangular basis function over a finite interval.

NLPN Approximation: 1-D Functions
Consider a scalar function f(x). On the interval [ai, a(i+1)], f(x) can be approximated by a line:
  f(x) ~ (1 - (x - ai)/(a(i+1) - ai)) * wi + ((x - ai)/(a(i+1) - ai)) * w(i+1)

Basis Function Approximation
Define the triangular activation/basis functions on the knots a1, ..., aN:
  phi_i(x) = (x - a(i-1)) / (ai - a(i-1)),      x in [a(i-1), ai]
  phi_i(x) = 1 - (x - ai) / (a(i+1) - ai),      x in [ai, a(i+1)]
  phi_i(x) = 0,                                  otherwise
Then f can be expressed as
  f(x) = sum_i wi * phi_i(x, a)
(a 1st-order B-spline CMAC). This is also similar to fuzzy-logic approximation with triangular membership functions.

Neural Network Approximation of the NARMA Model
[Block diagram: delayed inputs u[k-1], ... and delayed outputs y[k-m], ... feed a neural network producing y.]
Question: is an arbitrary neural network model consistent with a physical system (i.e., one that has an internal realization)?

State-Space Model
[Diagram: input u -> system with states x1, ..., xn -> output y]
  x[k+1] = f(x[k], u[k])
  y[k] = h(x[k])

State-Space Realizable Models
Consider the input-output model:
  y[k] = g(y[k-m], ..., y[k-1], u[k-m], ..., u[k-1])
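The triangular-basis construction for 1-D functions can be sketched directly: with the weights set to the nodal values, the expansion sum_i wi*phi_i(x) reduces to piecewise-linear interpolation between knots. The knot grid and target function here are illustrative choices.

```python
import numpy as np

# 1-D NLPN / 1st-order B-spline CMAC: f(x) ~ sum_i w_i phi_i(x) with
# triangular basis functions on knots a_1..a_N. Setting w_i = f(a_i)
# reproduces piecewise-linear interpolation of f.
a = np.linspace(0.0, 1.0, 11)          # knot grid (illustrative)

def tri_basis(x, a, i):
    """Triangular basis centered at knot a[i]: 1 at a[i], 0 at neighbors."""
    step = a[1] - a[0]
    left = a[i - 1] if i > 0 else a[i] - step
    right = a[i + 1] if i < len(a) - 1 else a[i] + step
    if left <= x <= a[i]:
        return (x - left) / (a[i] - left)
    if a[i] < x <= right:
        return 1 - (x - a[i]) / (right - a[i])
    return 0.0

f = np.sin                              # function to approximate
w = f(a)                                # weights = nodal values of f
x = 0.37
approx = sum(w[i] * tri_basis(x, a, i) for i in range(len(a)))
```

At any x, at most two basis functions are active, which is the "local" property that distinguishes these networks from sigmoidal MLPs.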

When does the input-output model have a state-space realization
  x[k+1] = f(x[k], u[k]),   y[k] = h(x[k]) ?

Comments on State Realization of the Input-Output Model
- A generic input-output model does not necessarily have a state-space realization (Sadegh 2001, IEEE Trans. on Automatic Control).
- There are necessary and sufficient conditions for realizability.
- Once these conditions are satisfied, the state-space model may be constructed symbolically or computationally.
- A general class of input-output models may be constructed that is guaranteed to admit a state-space realization.

The Model Form
The following input-output model always admits a minimal state realization:
  g(y1, ..., ym, u1, ..., um) = sum_{i=0}^{m-2} g(m-i)(y(i+1), y(i+2), u(i+1)) + g1(ym, um)

State-Space Realization
The state model of the input-output model is as follows, with y = x1 (here x+ denotes the one-step time shift):
  x1+ = x2 + g1(x1, u)
  x2+ = x3 + g2(x1+, x1, u)
  ...
  x(m-1)+ = xm + g(m-1)(x1+, x1, u)
  xm+ = gm(x1+, x1, u)

Neural Networks
- Reduced coupling results in sub-networks.
- One cannot use prepackaged software, but the standard training methods are the same.

Nodal Link Perceptron Networks
- Local basis functions, similar to CMAC networks.
- Reduced coupling also results in sub-networks.

Simulation Example
- Nonlinear mass-spring-damper system.
- Data sampled at 0.01 s; the output is the velocity of the 2nd mass.
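A data generator in the spirit of the simulation example can be sketched as below. The masses, stiffnesses, the cubic spring nonlinearity, and the input signal are all hypothetical choices for illustration, not the parameters used on the slides; only the 0.01 s sampling time and the choice of output (velocity of the 2nd mass) follow the text.

```python
import numpy as np

# Hypothetical nonlinear two-mass spring-damper data generator.
# Parameters and the cubic coupling spring are illustrative assumptions.
m1 = m2 = 1.0
k1, k2, k3 = 5.0, 5.0, 2.0   # linear stiffnesses and cubic coefficient
b1, b2 = 0.5, 0.5
dt = 0.01                     # sampling time from the slides

def simulate(u, x0=np.zeros(4)):
    """States: [p1, v1, p2, v2]; output: velocity of the 2nd mass."""
    x = x0.astype(float).copy()
    y = np.zeros(len(u))
    for k, uk in enumerate(u):
        p1, v1, p2, v2 = x
        # coupling spring with a cubic (hardening) term -> nonlinear system
        fs = k2 * (p1 - p2) + k3 * (p1 - p2) ** 3
        a1 = (uk - k1 * p1 - b1 * v1 - fs) / m1
        a2 = (fs - b2 * v2) / m2
        x = x + dt * np.array([v1, a1, v2, a2])
        y[k] = x[3]           # sampled output: velocity of mass 2
    return y

y = simulate(np.sin(0.5 * np.arange(1000) * dt))
```

Pairs (u[k], y[k]) from such a simulation are exactly the kind of sampled input-output data the four models below are fit to.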

Simulation Results
[Four panels comparing model response vs. system response over 10 s:]
- I: Linear model. mse = 0.0281; training (static) mse = 0.0059.
- II: NARMA model. mse = 0.0082; training (static) mse = 0.0021.
- III: Neural network. mse = 3.6034e-4; training (static) mse = 0.0016.
- IV: NLPN. mse = 7.2765e-4; training (static) mse = 2.6622e-4.

Simulation Results (second data set)
[Four panels comparing model response vs. system response over 8 s:]
- I: Linear model. mse = 0.0271.
- II: NARMA model. mse = 0.0067.
- III: Neural network. mse = 5.3790e-4.
- IV: NLPN. mse = 7.1835e-4.

Conclusions
- A number of data-driven modeling techniques are suitable for systems admitting an observable state-space transformation.
- Rough guidelines were given for when and how to use NARMA, neural network, and NLPN models.
- NLPN modifications make it an easily trainable option with excellent capabilities.
- Substantial training & design issues include the data sampling rate and input repetition due to the reduced-coupling restriction.

Fluid Power Application

INTRODUCTION
APPLICATIONS:

- Robotics
- Manufacturing
- Automobile industry
- Hydraulics

EXAMPLE: EHPV (electro-hydraulic poppet valve) control
- Highly nonlinear
- Time-varying characteristics
- Control schemes are needed to open two or more valves simultaneously

Motivation
- The valve opening is controlled by means of the solenoid input current.
- The standard approach is to calibrate the current-opening relationship for each valve.
- Manual calibration is time-consuming and inefficient.

Research Goals
- Precisely control the conductivity of each valve using a nominal input-output relationship.
- Auto-calibrate the input-output relationship.
- Use the auto-calibration for precise control without requiring the exact input-output relationship.

EXAMPLE:
- Several EHPVs were used to control the hydraulic piston.
- Each EHPV is supplied with its own learning controller.
- The learning controller employs a neural network (NLPN) in the feedback loop.
- Satisfactory results for a single EHPV used for pressure control.

Control Design

Nonlinear system (lifted to a square system):
  x[k+n] = F(x[k], u[k])
Feedback control law:
  u = phi(xd, xd') + Kp*(xd - x)
where phi(xd, xd') is the neural network output. The neural network controller is trained directly based on the time history of the tracking error.

Learning Control Block Diagram
[Block diagram omitted.]

Experimental Results
[Experimental plots omitted.]
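The learning-control idea above, a fixed feedback term plus a learned feedforward term updated from the tracking-error history, can be sketched on a toy problem. Everything below is a hypothetical stand-in: a simple first-order plant replaces the EHPV, a lookup table ff plays the role of the NLPN output, and the gains are illustrative.

```python
import numpy as np

# Minimal learning-control sketch: u = ff[k] + Kp (xd - x), where the stored
# feedforward ff (standing in for the neural network output) is updated from
# the tracking error over repeated trials. Plant and gains are illustrative.
dt, n = 0.01, 200
t = np.arange(n) * dt
xd = np.sin(2 * np.pi * t)       # desired trajectory
Kp, gamma = 2.0, 2.0             # feedback gain, learning rate

def run_trial(ff):
    """One pass over the trajectory; returns the tracking-error sequence."""
    x = 0.0
    e = np.zeros(n)
    for k in range(n):
        u = ff[k] + Kp * (xd[k] - x)   # feedback + learned feedforward
        x = x + dt * (-x + u)           # simple stable plant x' = -x + u
        e[k] = xd[k] - x                # error recorded after the step
    return e

ff = np.zeros(n)
for _ in range(50):                     # repeated trials
    e = run_trial(ff)
    ff = ff + gamma * e                 # error-driven feedforward update
```

Over the trials the stored feedforward absorbs the repeatable tracking error, so the feedback term has progressively less work to do, which mirrors the auto-calibration goal stated for the EHPV.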