PID Gains: P: 5, I: 0.1, D: 0.1

What is constrained, optimal, model predictive control? **MPC** itself is a control scheme that makes use of past data, model physics, and future demands in order to better control a system.

Consider a basic model such as a second order transfer function:

$$ H(s) = \frac{a \cdot s + b}{s^2 + c \cdot s + d} \quad\quad (1) $$ Where $ a,b,c,d $ represent optimally chosen model parameters. The optimal choice of these parameters is known as system identification. While this identification process is out of the scope of this blog post, note the Zallus Controller must do this in order to implement MPC of an arbitrary system. The next step is to discretize the system, which can be done through various methods such as ZOH, forward difference, or the bilinear transform, to name a few. I have a basic forward difference example in this blog post.

$$ \frac{Y(k)}{U(k)} = \frac{A \cdot z^{-1} + B \cdot z^{-2}}{1 -C \cdot z^{-1} -D \cdot z^{-2}} \quad\quad (2) $$ $$ \hat{y}(k+1) = C \cdot y(k) + D \cdot y(k-1) + A \cdot u(k) + B \cdot u(k-1) \quad\quad (3) $$

Equation $ (2) $ shows the transfer function in the Z domain where $ A,B,C,D $ represent the discretized model parameters. With algebra and applying the definition of the Z-transform one can convert this into a recursive predictor form as shown in equation $ (3) $ . Now with some structure in place, consider a basic example of a 2nd order transfer function with input constraints where the input can only be between 0 and 1:

$$ H(k) = \frac{0.6 \cdot z^{-1} + 0.2 \cdot z^{-2}}{1 -0.9 \cdot z^{-1} +0.05 \cdot z^{-2}} \ , \quad 0 < u(k) < 1 \quad\quad (4) $$ Next, if we formulate this discrete transfer function in the form of equation $ (3) $ , and then solve for the present input, one gets the following:
$$ u(k) = \frac{\hat{y}(k+1) -0.9 \cdot y(k) +0.05 \cdot y(k-1) -0.2 \cdot u(k-1)}{0.6} \ , \quad 0 < u(k) < 1 \quad\quad (5) $$
Equation $ (5) $ is known as one step ahead control. This control scheme can execute a trajectory perfectly assuming the input can take any value. However, given the reality of input constraints, this control is no longer optimal as seen in the below plot where the input is clamped at 1 and 0:

[Plot: one step ahead control with the input clamped at 0 and 1]
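For reference, the clamped one step ahead scheme is only a few lines of code. Here is a minimal Python sketch (the reference trajectory `r` and all names are my own illustration, not from the Zallus source) for the model in equation $(4)$:

```python
# One step ahead control of the equation (4) model, with the computed
# input clamped to the 0 < u(k) < 1 constraint.

def simulate(r, n_steps):
    y1, y2 = 0.0, 0.0      # y(k), y(k-1)
    u1 = 0.0               # u(k-1)
    y_log, u_log = [], []
    for k in range(n_steps):
        # invert the predictor for u(k), then clamp to the constraint
        u = (r(k + 1) - 0.9 * y1 + 0.05 * y2 - 0.2 * u1) / 0.6
        u = min(1.0, max(0.0, u))
        # plant: y(k+1) = 0.9 y(k) - 0.05 y(k-1) + 0.6 u(k) + 0.2 u(k-1)
        y1, y2 = 0.9 * y1 - 0.05 * y2 + 0.6 * u + 0.2 * u1, y1
        u1 = u
        y_log.append(y1)
        u_log.append(u)
    return y_log, u_log

# step reference: 0 until k = 5, then 2.0 (too big to reach in one step)
ys, us = simulate(lambda k: 0.0 if k < 5 else 2.0, 30)
```

Removing the `min`/`max` clamp recovers the perfect-tracking behavior described above; with the clamp, the input saturates at 1 for several steps after the step change.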

As seen in the plot, with real-world constraints, one step ahead control is no longer an optimal control. Consequently, to achieve optimal control, constraints must be factored in when solving for the optimal inputs. To handle MPC as a large-scale constrained optimization problem, the problem should be stated in matrix form rather than as a recursive algorithm. This matrix form must also handle initial conditions so that MPC can continuously update once new information becomes available. An example will be shown for predicting 4 steps into the future. In this matrix form, future outputs $ \hat{y}(k+1), \ldots, \hat{y}(k+N) $ will be denoted as a function of future inputs $ \hat{u}(k), \ldots, \hat{u}(k+N) $ as well as past data $ \tilde{y}(k-1), \tilde{u}(k-1) $, where $ \tilde{} $ denotes a constant / initial condition.

$$ \hat{y} = J \cdot \hat{u} + C \quad\quad (6) $$ $$ \begin{bmatrix} \hat{y}(k+1) \\ \hat{y}(k+2) \\ \hat{y}(k+3) \\ \hat{y}(k+4) \\ \cdots \end{bmatrix} = \begin{bmatrix} A & 0 & 0 & 0 & \cdots \\ B + A \cdot C & A & 0 & 0 & \cdots \\ B \cdot C + A \cdot (C^2 + D) & B + A \cdot C & A & 0 & \cdots \\ A \cdot (C^3 + 2 \cdot D \cdot C) + B \cdot (C^2 + D) & B \cdot C + A \cdot (C^2 + D) & B + A \cdot C & A & \cdots \\ \cdots & \cdots & \cdots & \cdots & \cdots \end{bmatrix} \cdot \begin{bmatrix} \hat{u}(k) \\ \hat{u}(k+1) \\ \hat{u}(k+2) \\ \hat{u}(k+3) \\ \cdots \end{bmatrix} + $$ $$ \begin{bmatrix} C & D & B \\ C^2 + D & C \cdot D & B \cdot C \\ C \cdot D + C \cdot (C^2 + D) & D \cdot (C^2 + D) & B \cdot (C^2 + D) \\ C \cdot (C^3 + 2 \cdot D \cdot C) + D \cdot (C^2 + D) & D \cdot (C^3 + 2 \cdot D \cdot C) & B \cdot (C^3 + 2 \cdot D \cdot C) \\ \cdots & \cdots & \cdots \end{bmatrix} \cdot \begin{bmatrix} \tilde{y}(k-1) \\ \tilde{y}(k-2) \\ \tilde{u}(k-1) \end{bmatrix} \quad\quad (7) $$

Equation $ (6) $ shows the short-hand matrix prediction equation where $ \hat{y} $ is our predicted future outputs and $ \hat{u} $ is our chosen/calculated future inputs. The future inputs are multiplied by $ J $, which is the **Jacobian** matrix of the system, and matrix $ C $ is a matrix of constants that depend on the initial conditions. Note that Jacobians are crucial for nearly all optimization algorithms, as is their 2nd order equivalent, the Hessian (often approximated as $H \approx J^T \cdot J $ ). Note the structure of the Jacobian in MPC problems is a lower triangular matrix as well as a Toeplitz matrix. These features are key for validation as well as an efficient implementation of inversion/solving.
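One convenient consequence of this structure: the first column of $J$ is just the impulse response of the predictor in equation $(3)$, and each subsequent column is that sequence shifted down one row. A small plain-Python sketch (function names are my own) that builds $J$ this way:

```python
def impulse_response(A, B, C, D, n):
    """First n terms of the impulse response of
    y(k+1) = C y(k) + D y(k-1) + A u(k) + B u(k-1)."""
    g = []
    y1 = y2 = 0.0      # y(k), y(k-1)
    u, u1 = 1.0, 0.0   # unit impulse at k = 0
    for _ in range(n):
        y = C * y1 + D * y2 + A * u + B * u1
        g.append(y)
        y2, y1 = y1, y
        u1, u = u, 0.0
    return g

def mpc_jacobian(A, B, C, D, n):
    """Lower triangular Toeplitz J: column j is g shifted down j rows."""
    g = impulse_response(A, B, C, D, n)
    return [[g[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)]

# the example model of equation (4): A=0.6, B=0.2, C=0.9, D=-0.05
J = mpc_jacobian(0.6, 0.2, 0.9, -0.05, 4)
```

For the equation $(4)$ model this reproduces the symbolic entries above, e.g. $B + A \cdot C = 0.74$ in the second row of the first column.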

Moving on, equation $ (6) $ can be solved for the optimal inputs given target output values as well as a set of initial conditions.

$$ \hat{u} = J^{-1} \cdot (\hat{y} -C) \quad\quad (8) $$ $$ \hat{u} = J^{-1} \cdot (\hat{y} -C) \quad st. \quad lb < \hat{u} < ub \quad\quad (9) $$
Equation $ (8) $ solves for optimal inputs given target future outputs as well as initial conditions. However, this solution is still an **un-constrained** solution. Equation $ (9) $ states the constrained case where $ lb,ub $ are the lower and upper bounds respectively. Note this is not an easy problem to solve optimally, in my humble opinion. This problem has $ 2\cdot N $ constraints and is a box-constrained quadratic programming problem. Furthermore, with so many constraints, if not handled with care, the computation time can become excessive. The easiest way to solve this problem would be with an active set method; however, this would be a bad idea since active set methods generally only identify 1-2 active constraints at a time. For MPC, a common case may be where half to all of the constraints are simultaneously active. Consequently, I would recommend a more intelligent method such as an interior point method. Matlab/Octave of course have solvers for these types of problems, but relying on such solvers is not ideal for a cross-platform IoT system. Consequently, implementing an efficient constrained solver in javascript was probably the hardest challenge of getting this Zallus Controller working. Looking at matlab’s nonlinear constrained algorithms is a good place to start for algorithm ideas. Once successfully implemented, the MPC solution to the previous example should look like the following:

[Plot: constrained optimal MPC solution for the example system]

As expected, the constrained optimal MPC reacts well ahead of time to future demands and bisects the step changes in the output to minimize the error. Naturally, this produces lower error than the previously shown control methods. Interestingly enough, at $ k=27 $ and $ k=36 $ there appears to be some ripple and seemingly sub-optimal control. Surprisingly, those points are indeed optimal: the controller is making use of the dynamics of the model to achieve a better curve. The following plot shows all the displayed control techniques together for reference.

[Plot: all of the control techniques overlaid for comparison]

While the previous examples demonstrate the concept of MPC, they are missing a key construct for practical implementation: a finite horizon. Without limiting the scope of future predictions, the calculations involve optimization over $ N \times N $ matrices where $ N $ is the length of the target. To make on-the-fly computation feasible, the number of future predictions must be limited. The plots previously shown were produced with a so-called infinite horizon, i.e. a horizon covering all the steps, which was $ N = 50 $. However, to achieve optimal control of the previous examples, a future horizon of only $ N = 4 $ was sufficient. The required finite horizon for optimal control depends on the model characteristics as well as the target trajectory. To illustrate the effect of non-infinite horizons, please see the following plot. Note that the accuracy is reduced and converges towards one step ahead control as the finite horizon is reduced. Also note that the model and target were modified to better illustrate the effect of the horizon.

[Plot: effect of reducing the finite horizon on tracking accuracy]

As seen above, when the finite horizon is reduced, the control has less ability to anticipate/optimize for future demand. Consequently, the overall error is increased, but the computation time per step is decreased. Accordingly, optimal MPC is a balancing act between computation time and tracking accuracy.
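Returning to the constrained solve of equation $(9)$: a deliberately simple way to see why the constraints must live inside the solver loop is projected gradient descent, sketched below in plain Python (helper names are my own; this is only an illustration, not the interior point method recommended earlier, and far slower than a production solver):

```python
def mat_vec(M, v):
    """Matrix-vector product for lists of lists."""
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def projected_gradient(J, target, lb=0.0, ub=1.0, iters=20000):
    """Minimize ||J*u - target||^2 subject to lb <= u <= ub by taking
    gradient steps and projecting (clamping) back onto the box."""
    # conservative step size: 1/||J||_F^2 is a safe bound for this quadratic
    L = sum(v * v for row in J for v in row)
    step = 1.0 / L
    u = [0.0] * len(J[0])
    Jt = transpose(J)
    for _ in range(iters):
        resid = [yi - ti for yi, ti in zip(mat_vec(J, u), target)]
        grad = mat_vec(Jt, resid)   # gradient of 0.5*||J u - target||^2
        u = [min(ub, max(lb, ui - step * gi)) for ui, gi in zip(u, grad)]
    return u

# tiny 2-step example: the unconstrained optimum wants u > 1, so it clamps
J = [[0.6, 0.0], [0.74, 0.6]]
u = projected_gradient(J, [1.2, 2.0])
```

Each iteration takes a gradient step on the squared residual and then projects back onto the box, so the set of active constraints is discovered by the iteration itself rather than by clamping an unconstrained solution afterwards.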

This concludes my intro to optimal constrained MPC. Happy controlling! I hope to see some other optimal MPC control products out there!


**Intro:** Unfortunately, we live in a complex but wonderful world that cannot always be sufficiently modeled by basic linear systems. Naturally, nonlinear models are in widespread use and range from distribution models such as the normal distribution to more convoluted models such as deep neural networks. Fortunately, Kenneth Levenberg and Donald Marquardt contrived a fairly robust and simple algorithm for optimizing nonlinear models via nonlinear least squares.

Before jumping into nonlinear least squares, it is prudent to review linear least squares. “Least squares”, as the name implies, is a method for finding the minimal (least) squared error between a model and data. Consequently, it is used to determine model coefficients in cases of over-determined systems. In simpler words, it is a method of curve fitting data. With the assistance of my javascript math/plotting library, one can generate some random data.

[Plot: randomly generated noisy line data]

/* Maths.js code @ https://github.com/natezimmer/maths.js */
var m = Math.random()*20;
var b = Math.random()*20;
var X = Matrixs.range(100).addNoise(0.9); // Creates a 0-99 value array
var Y = X.multiply(m).add(b); // Line equation
Plots.create([Y],{type:'scatter',div:'myPlotData'});

So what is the best fit line for this data? The approach is generic for any linear model. First write out the model.

$$ f(x) = m \cdot x + b $$ Now we take the gradient of the model with respect to its parameters $m$ and $b$, stacking one gradient row per sample:

$$ A = \begin{bmatrix} \underline{ \nabla f} \\ \nabla f_1 \\ \nabla f_2 \\ \cdots \\ \nabla f_m \end{bmatrix} = \begin{bmatrix} \underline{m} & \underline{b} \\ x_1 & 1 \\ x_2 & 1 \\ \cdots & \cdots \\ x_m & 1 \end{bmatrix} $$

Array $A$ has dimensions [samples, 2]: one row per sample and 2 features. Since our model is linear, we can use this array directly in the over-determined least squares normal equation shown below:

$$ [m ; b] = \big( A^T \cdot A \big)^{-1} \cdot A^T \cdot f(x) $$

With this mathmagical formula one can easily fit a huge assortment of curves. Let’s fit the initially shown curve.

/* Maths.js code @ https://github.com/natezimmer/maths.js */
var A = X.catHorizontal(Matrixs.ones(X.value.length,1));
res = A.transpose().multiply(A).invert().multiply(A.transpose()).multiply(Y).flatten(); // Normal Equation
var X2 = Matrixs.range(100);
Ynew = X2.multiply(res[0]).add(res[1]);
Plots.create([Y],{type:'scatter',div:'myPlotData2'});
Plots.add([Ynew],{type:'lines',div:'myPlotData2'});
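As a cross-check on the normal equation itself, the same two-parameter fit can be written in dependency-free Python; for a two-column $A$, the product $(A^T \cdot A)^{-1} \cdot A^T \cdot y$ collapses to the closed-form expressions below (a sketch with my own names, separate from the maths.js snippets):

```python
def line_fit(xs, ys):
    """Normal equation [m; b] = (A^T A)^{-1} A^T y for f(x) = m*x + b,
    written out in closed form for the 2x2 case."""
    n = len(xs)
    sx = sum(xs)
    sy = sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx        # determinant of A^T A
    m = (n * sxy - sx * sy) / det
    b = (sxx * sy - sx * sxy) / det
    return m, b

xs = list(range(100))
ys = [3.0 * x + 7.0 for x in xs]   # noiseless line for illustration
m, b = line_fit(xs, ys)            # recovers m = 3, b = 7
```

On noiseless data the fit is exact; with noise added, the same expressions return the least-squares estimates.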

A line is pretty basic. This same principle can be extended to any linear function. For a more complex example, I will fit a 4th order polynomial to a noisy 3rd order polynomial:

$$ f(x)= -1.3 \cdot x^3 + 13 \cdot x^2 -5 \cdot x + 30 $$

Take the gradient of a 4th order polynomial of the form:

$$ f(x) = a \cdot x^4 + b \cdot x^3 + c \cdot x^2 + d \cdot x +e $$ $$ \nabla f = \big[ x^4 , x^3, x^2 , x^1, 1 \big] $$ $$ [a;b;c;d;e] = \big( A^T \cdot A \big)^{-1} \cdot A^T \cdot f(x) $$

Here is a plot of the 3rd order poly as well as the 4th order least squares best fit.

X = Matrixs.range(0,0.1,10);
Y = X.pow(3).multiply(-1.3).add(X.pow(2).multiply(13)).add(X.multiply(-5)).add(30).addNoise(0.3); // create Y
var A = X.pow(4).catHorizontal(X.pow(3)).catHorizontal(X.pow(2)).catHorizontal(X.pow(1)).catHorizontal(Matrixs.ones(X.value.length,1)); // Create A matrix, 5 features
res = A.transpose().multiply(A).invert().multiply(A.transpose()).multiply(Y).flatten(); // Normal Equation
Ynew = X.pow(4).multiply(res[0]).add(X.pow(3).multiply(res[1])).add(X.pow(2).multiply(res[2])).add(X.multiply(res[3])).add(res[4]); // Use least squares result to create new poly
Plots.create([X,Y],{type:'scatter',div:'myPlotData3'});
Plots.add([X,Ynew],{type:'lines',div:'myPlotData3'});

So as observed, a higher order function can generally fit unknown lower order data nearly perfectly. I could take linear regression much further but this was simply a soft introduction to make way for nonlinear least squares.

Consider the following equation:

$$ f(x) = e^{-a \cdot x} + b \cdot \sin(c \cdot x) $$ Consider the equation’s gradient with respect to its **model parameters** $(a,b,c)$:

$$ \nabla f = \bigg[ -x \cdot e^{-a \cdot x},\quad \sin(c \cdot x ),\quad b \cdot x \cdot \cos( c \cdot x) \bigg] $$ Now look at the previous gradients that were calculated. Those gradients only involved the independent variable, because the models were linear in their parameters. That linearity is what allows the normal equation to be applied directly, and it no longer holds here. Furthermore, taking derivatives of more complicated functions may not be analytically possible. These factors are the introductory challenges of non-linear least squares (NLLS). An important step in NLLS is obtaining the array of functional partial derivatives that we will now call $J$, the Jacobian.

The Jacobian is essentially an array of functional gradients and it takes a similar form to the linear matrix A as seen below:

$$ J = \begin{bmatrix} \underline{ \nabla f} \\ \nabla f_1 \\ \nabla f_2 \\ \cdots \\ \nabla f_m \end{bmatrix} = \begin{bmatrix} \underline{\partial f / \partial a} & \underline{ \partial f / \partial b } & \underline{ \partial f / \partial c } & \cdots \\ \partial f_1 / \partial a & \partial f_1 / \partial b & \partial f_1 / \partial c & \cdots \\ \partial f_2 / \partial a & \partial f_2 / \partial b & \partial f_2 / \partial c & \cdots \\ \cdots & \cdots & \cdots & \cdots \\ \partial f_m / \partial a & \partial f_m / \partial b & \partial f_m / \partial c & \cdots \end{bmatrix} $$

Similar to the previous linear matrix A, the Jacobian will have dimensions [samples,features]. While numerical calculation of the Jacobian is fundamentally inefficient, it is a simple starting point for solving new non-linear systems. A numerical derivative is trivial to calculate when one recalls the limit definition of a derivative.

$$ f'(x) = \lim_{\Delta x \to 0} \frac{f(x + \Delta x) -f(x)}{\Delta x} \approx \frac{f(x + 0.000001) -f(x)}{0.000001} $$ This approximation is similar to the approximation I used in my previous blog post regarding the conversion from Laplace-transforms to Z-transforms. Note there are a multitude of methods for taking numerical derivatives, but I find this method is generally ‘good enough’ for generic applications. This method can be extended to a 2D array as seen in my numerical Jacobian code.
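As an illustration (the function and parameter names below are my own, not from the maths.js source), a forward-difference numerical Jacobian can be sketched in a few lines of Python:

```python
import math

def numerical_jacobian(f, params, xs, h=1e-6):
    """Forward-difference Jacobian: J[i][j] = d f(x_i) / d param_j,
    where f(params, x) evaluates the model at one sample point."""
    base = [f(params, x) for x in xs]
    J = [[0.0] * len(params) for _ in xs]
    for j in range(len(params)):
        bumped = list(params)
        bumped[j] += h          # bump one parameter at a time
        for i, x in enumerate(xs):
            J[i][j] = (f(bumped, x) - base[i]) / h
    return J

# model from the text: f(x) = e^{-a x} + b sin(c x), at a=0.5, b=2, c=1.5
model = lambda p, x: math.exp(-p[0] * x) + p[1] * math.sin(p[2] * x)
J = numerical_jacobian(model, [0.5, 2.0, 1.5], [0.0, 0.5, 1.0])
```

At $x=1$ the three columns agree with the analytic gradient $[-x e^{-a x}, \sin(c x), b x \cos(c x)]$ to roughly five decimal places.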

Another mathematical construct highly recommended for non-linear least squares is the pseudo inverse. The pseudo inverse represents the optimal compromise for a rank deficient matrix. Through traditional matrix inversion methods, an inverse cannot be found if the matrix’s determinant is 0. Singular matrices are often encountered in NLLS, and consequently having the pseudo inverse is crucial. The pseudo inverse can be calculated with the assistance of Singular Value Decomposition (SVD). SVD decomposes a matrix $X$ into three parts such that $X = U \cdot \Sigma \cdot V^T$. Two of these parts, $U$ and $V$, are orthogonal, and $\Sigma$ is a diagonal matrix of singular values. By plugging this into the normal equation and applying matrix simplification, we can arrive at a simple result:

$$ X = U \cdot \Sigma \cdot V^T $$ $$ (X^T \cdot X)^{-1} \cdot X^T \cdot Y \approx $$ $$ \bigg( (U \cdot \Sigma \cdot V^T)^T \cdot (U \cdot \Sigma \cdot V^T) \bigg)^{-1}\cdot (U \cdot \Sigma \cdot V^T)^T \cdot Y \approx $$ $$ V \cdot \Sigma^{-1} \cdot U^T \cdot Y $$

Here is an example of taking the pseudo inverse in Maths.js.

var A = Matrixs.make([[1,2,3],[4,5,6],[7,8,9]]);
A.pinv().print();

With this we can see an approximate inverse of a previously non-invertible singular matrix:

$$ A^{-1} = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}^{-1} \approx \begin{bmatrix} -0.639 & -0.167 & 0.306 \\ -0.056 & -0.000 & 0.056 \\ 0.528 & 0.167 & -0.194 \end{bmatrix} $$

With a Jacobian and a reliable matrix inverse, one can begin to implement the Levenberg–Marquardt algorithm (LMA). This algorithm is a hybrid between gradient descent and Gauss–Newton. It is recommended that one knows both, but in short summary: gradient descent moves too slowly and Gauss–Newton can often move too fast. Both have convergence problems, and so an algorithm was developed as a hybrid between the two methods. The first new vector to define is the residual vector (**r**), or, more practically speaking, the error vector.

$$ r(x) = \hat{y}(x) -y(x) $$ Note that $ \hat{y}(x) $ is our estimated model of $ y(x) $. Naturally, the goal is to minimize the RMS value of the error (residual). Another new matrix to define is an approximation of the Hessian, the matrix of second partial derivatives of the objective. It is often approximated as the following:

$$ H(x) \approx J^T \cdot J $$ The near-final form of the Levenberg step $p$ is the following:

$$ p = (J^T \cdot J + \lambda I)^{-1} \cdot J^T \cdot r $$ $p$ is the **step** that the features will take upon completion of the calculation. Note, this is consequently an **iterative** process, as are most non-linear solvers. Secondly, $ \lambda $ is the damping parameter; LMA is sometimes called a damped Gauss–Newton method. When $ \lambda $ is large, each step is nearly equivalent to a gradient descent step. When $ \lambda $ is small, each step is equivalent to a Gauss–Newton step. Note, $ \lambda $ should update every iteration based upon whether $ r_{k+1} > r_{k}$. If that is true, the error increased, so the previous step should be reversed and $ \lambda $ should be increased by a factor of 10 (commonly). The following is pseudo code for the algorithm:

$$ \text{loop } k \text{ iterations:} $$ $$ p_{k+1} = (J^T_k \cdot J_k + \lambda_k I)^{-1} \cdot J^T_k \cdot r_k \ , \quad \text{inverse computed as } V \cdot \Sigma^{-1} \cdot U^T $$ $$ p_{k+1} = (r_{k+1} > r_{k}) \quad ? \quad p_{k} : p_{k+1} $$ $$ \lambda = (r_{k+1} > r_{k}) \quad ? \quad \lambda \cdot 10 : \lambda \cdot 0.1 $$ $$ x = x -p_{k+1} $$
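As a self-contained illustration of that pseudo code, here is a minimal Python sketch fitting a hypothetical two-parameter model $f(x) = a \cdot e^{b \cdot x}$ (my own example, not the author's javascript solver), including the accept/reject $\lambda$ update:

```python
import math

def solve2(M, v):
    # Solve the 2x2 system M * p = v by Cramer's rule.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(v[0] * M[1][1] - M[0][1] * v[1]) / det,
            (M[0][0] * v[1] - M[1][0] * v[0]) / det]

def lma_fit(xs, ys, a, b, lam=1e-3, iters=100):
    """Minimal Levenberg-Marquardt for the model f(x) = a * exp(b * x)."""
    def sse(a_, b_):
        try:
            return sum((a_ * math.exp(b_ * x) - y) ** 2 for x, y in zip(xs, ys))
        except OverflowError:
            return float("inf")
    err = sse(a, b)
    for _ in range(iters):
        # accumulate J^T J and J^T r with the analytic gradient
        JtJ = [[0.0, 0.0], [0.0, 0.0]]
        Jtr = [0.0, 0.0]
        for x, y in zip(xs, ys):
            e = math.exp(b * x)
            g = [e, a * x * e]          # [df/da, df/db]
            r = a * e - y               # residual
            for i in range(2):
                Jtr[i] += g[i] * r
                for j in range(2):
                    JtJ[i][j] += g[i] * g[j]
        JtJ[0][0] += lam                # damping term: lambda * I
        JtJ[1][1] += lam
        p = solve2(JtJ, Jtr)
        trial_a, trial_b = a - p[0], b - p[1]
        trial_err = sse(trial_a, trial_b)
        if trial_err < err:             # accept: trust the model more
            a, b, err, lam = trial_a, trial_b, trial_err, lam * 0.1
        else:                           # reject: damp harder, keep old params
            lam *= 10.0
    return a, b

xs = [i * 0.1 for i in range(30)]
ys = [2.0 * math.exp(-1.0 * x) for x in xs]   # noiseless data: a = 2, b = -1
a, b = lma_fit(xs, ys, a=1.0, b=-0.5)
```

From the deliberately wrong guess (1, -0.5), the damped iteration converges to the true parameters (2, -1) on this noiseless data.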

My javascript source for this can be found here. It is also running in your browser right now ;). Next this method will be applied to the nonlinear function previously shown.

function nls(param){ this.param = param; } // Create object
nls.prototype.fnc = function(x) // Create object's function
{
    var X = Matrixs.make(x);
    var a = this.param[0];
    var b = this.param[1];
    var c = this.param[2];
    var Y = X.multiply(a).apply(Math.exp).add(X.multiply(c).apply(Math.cos).multiply(b));
    return Y.value;
}
var X = Matrixs.range(-5,0.05,5);
var Y = Matrixs.make((new nls([-0.5,4,1.5])).fnc(X)).addNoise(0.3);
var guess = [-1,2,1];
var testModel = new nls(guess);
var inputObj = {input: X.value, output: Y.value};
var resultObj = Solvers.levenbergMarquardt(inputObj,testModel);
Plots.create([X,Y],{type:"scatter", size:10,div:'myPlotData5'});
Plots.add([X,Matrixs.make(testModel.fnc(X.value))],{type:"line",div:'myPlotData5'});

Note that in the code above, a ‘guess’ was required. A fundamental weakness of the Levenberg–Marquardt algorithm is that a good initial guess is required, else the solver will get stuck in a local minimum.

While LMA struggles with initial guesses, models that support pre-conditioning are ideal for LMA. Neural networks for example generally constrain inputs and outputs which makes initial guesses far less of a problem. Consequently, LMA becomes a rapid batch solver for smaller scale neural networks. Here is a classic example:

$$ f = a \ \text{xor} \ b $$ $$ \text{Truth Table} $$ $$ \begin{bmatrix} \underline{ a } & \underline{ b } & | &\underline{ f } \\ 0 & 0 & | & 0 \\ 0 & 1 & | & 1 \\ 1 & 0 & | & 1 \\ 1 & 1 & | & 0 \end{bmatrix} $$

var xx = new Models.neuralNet();
xx.setInputNumber(2);
xx.setLayerSizes([4]);
xx.setOutputNumber(1);
xx.init();
var inputs = [[0,0],[0,1],[1,0],[1,1]];
var outputs = [[0],[1],[1],[0]];
var inputObjs = {input:inputs,output:outputs};
var resultObj = Solvers.levenbergMarquardt(inputObjs,xx);
A = Matrixs.make(xx.fnc(inputs)).round(1);

Visualization of the NN is still an ongoing feature needing writing in my library. Anyhow, that’s all for now folks.

**References:**

Imagingshop LMA

JS Numeric’s SVD

Wiki SVD

Wiki LMA

Matlab LSQ

Many engineers and hobbyists alike are unable to solve the following ‘simple’ problem without the assistance of simulation. If you don’t believe me, try it yourself.

How does one calculate the behavior of a circuit/system with a square wave input? Furthermore, how is this solved on an embedded platform without the aid of a spice simulator like LTSpice? This is a relevant question because at times one may be forced to solve/utilize differential equations on embedded systems. This problem may seem awkward since most academic literature uses a step or impulse as an input as opposed to a real-world arbitrary/exogenous input (such as a square wave). In addition, note that neither phasor analysis nor Laplace analysis alone is adequate to arrive at a simple solution. However, with their assistance and the Z-transform, one can contrive a simple solution to this problem.

*[Disclaimer]* As for all problems, many solution methods exist and the following is simply my approach. Step 1, convert the system to the Laplace domain. Step 2, discretize the system and convert it to the Z domain. Step 3, apply the definition of the Z-transform to convert from the discrete frequency domain to the time domain. Step 4, apply the square wave to the resulting function and calculate the answer.

First we convert our circuit to the Laplace domain. Capacitors can be re-written as $1/(s \cdot C)$ and inductors (if we had any) are converted to $(s \cdot L)$. Here the values are replaced with their generic counterparts.

Writing out the nodal equation for Vout we have the following:

$$\frac{u -V_{out}}{R} = \frac{V_{out} -0}{\frac{1}{s \cdot C}}$$

Now with some algebra one can write the equation in the form of its output and subsequently the transfer function.

$$V_{out} = \frac{u}{1 + s \cdot R \cdot C} $$ $$ H(s) = \frac{1}{1 + s \cdot R \cdot C}$$

Now from here, the traditional approach might be to do the inverse Laplace transform. Often though, this is done for step response or impulse response analysis. You must choose your input ahead of time, which eliminates the opportunity to apply an arbitrary input. In addition, the Laplace representation of a square wave is tricky to contrive and can be difficult to invert. Thankfully, a simpler method exists.

This is where the long lost friend of the Z-transform comes to the rescue. With the Z transform, one can easily calculate an arbitrary input for a given transfer function. However, how do we convert from the Laplace domain to the Z domain? Note this process is sometimes referred to as system discretization. A simpler method will be provided than what is shown on Wikipedia/Z-transform for conversion. An approximation can be made by considering the numerical derivative and its correlation to the frequency domain.

$$ \text{Numerical Derivative} = \frac{x(k+1) -x(k)}{\Delta T} $$ $$ \text{Laplace Derivative} = s $$ $$ s \approx \frac{z -1}{\Delta T} $$

Note that $\Delta T$ is the sampling period. Now with the approximation of $s$ in terms of $z$, one can substitute it into original transfer function.

$$\frac{Y(k)}{X(k)} = \frac{1}{1 + (\frac{z -1}{\Delta T}) \cdot R \cdot C} $$

Okay, that’s great, we are in the Z-domain, but now how do we go back to the time domain so that we can apply this function? Well again, if you look at Wikipedia for the inverse Z-transform you will see a frightening, rarely used function such as this: $Z^{-1}\{X(z)\} = \frac{1}{2 \pi j}\oint \limits_{C} X(z)z^{n-1}dz$. However, do not forget the definition of the Z-transform, where $z$ is an advance and $z^{-1}$ is a delay of a function. One can apply this definition of the Z-transform after re-arrangement and simplification of the original function as seen below.

$$\frac{Y(k)}{X(k)} = \frac{1}{1 + (\frac{z -1}{\Delta T}) \cdot R \cdot C} $$ $$ Y(k) = \frac{\Delta T \cdot X(k)}{\Delta T + (z -1) \cdot R \cdot C} $$ $$ Y(k) \cdot (\Delta T + (z -1) \cdot R \cdot C) = \Delta T \cdot X(k) $$ $$ Y(k) \cdot \big( (\Delta T -R \cdot C) + z \cdot R \cdot C \big) = \Delta T \cdot X(k) $$

Now if we take into account a realistic sampling period $(\Delta T = 0.01s)$ and that $R \cdot C = 1$, we now have:

$$ Y(k) \cdot (z -0.99) = 0.01 \cdot X(k)$$

Now we apply the definition of the Z-transform and convert back to the discrete time domain. Note the variables change from $ X,Y $ to $ x,y $ which is a common method of annotating the difference between frequency domain and time domain.

$$ y(k+1) = 0.99 \cdot y(k) + 0.01 \cdot x(k) $$ And presto! Here is a function in the discrete time domain that can easily be programmed in C (or your embedded preference).

float y = 0.99 * y + 0.01 * x;

Next apply the given square wave input to the contrived function. Note the design period was 0.01s, so everything is multiplied by a factor of 100.

float y = 0; int x = 0; // Note sampling rate 100 Hz
for(int i = 1; i < 340; i++) // 3.4 seconds
{
    y = 0.990 * y + 0.01 * x;
    x = ((i % 50) == 0) ? (!x) : x; // Square wave of 1 Hz
}
printf("Vout = %3.3f V\r\n", y); // Output: Vout = 0.400 V

Tada, with our 8-bit micro-controller, or hopefully a 32-bit ARM, we are able to calculate that $V_{out}$ at 3.4 seconds is $\approx 0.40 V$. Look mom, no simulator! Furthermore, if one puts the values of y in an array and plots it, we get the following:

With that plot, this refresher of Laplace and Z-Transforms comes to a close. Note other methods of solving this problem include an ODE solver or Fourier analysis. However, it is doubtful these methods would produce a solution as simple as the Z-transform.

**References:**

Many thanks to my undergrad controls professor, Dr. Glower, and his easy to follow notes @: http://www.bisonacademy.com/ECE461

Welcome! The above plots are live temperature streams from my apartment. How does one achieve this? Well that is what will be explained in this blog post. As all hobby projects start, undoubtedly one begins with elegantly executed and orderly hardware design.

After you get over the amazing architecture of the design, you may notice the hardware involved. The primary hardware is the ESP8266 WiFi SoC, a MAX6675 SPI thermocouple IC, and a thermocouple. This marks my first project with the ESP8266, and I thought streaming data to my website would be a fun place to start. The reason I’m using a thermocouple is that, sadly, I don’t own a single thermistor. This was unfortunate, as I had to get SPI up and running on the ESP8266. Anyhow, I’m assuming you can imagine the hardware design in your head, and I won’t bore you with further details regarding that.

The above image attempts to show how all the software works. Firstly, the ESP8266 sends TCP packets to a wifi router. These TCP packets are formulated as HTTP Post request packets. The wifi router exposes these packets to the WWW and these packets are sent specifically to a PHP file on my server. The PHP file parses the post request and saves the data to a JSON file. This JSON file can then be distributed to viewers with web code such as HTML,CSS,Javascript,AJAX. AJAX/Jquery is used to fetch the JSON file and HTML/CSS/Javascript is used to make the plot visible and purty.

….

Viewing the frequency spectra of signals is useful for filtering and for signal extraction. To convert a signal to the frequency domain, numerous transforms and algorithms exist. Frequency spectrum conversions for signals include the Fourier Transform (FT), Discrete Fourier Transform (DFT), Fast Fourier Transform (FFT), Laplace Transform, Z-Transform, etc. With such a wide variety of conversions, it is easy to wonder which is the best and most useful transformation. Naturally, the answer is that each transformation has its own advantages and disadvantages. This article will present how to use the Discrete Fourier Transform to analyze data. The advantage of the DFT is that it is very easy to program and consequently easy to implement in any coding language. The disadvantage is the computational time required, but this is becoming less and less of an issue in the presence of modern computing. An equation for a DFT is presented below:

$$DFT[k] = \bigg| \sum^{N-1}_{n=0}x[n] \cdot e^{-j \frac{2 \pi k n}{N}} \bigg| $$

So how does it work? In the continuous time domain, multiplying a signal by a complex exponential ($e^{-j 2 \pi f \cdot t}$) and integrating decomposes the signal into its frequency components. In discrete time, integration becomes summation. In the above equation, the index $k$ selects the frequency being tested while the summing variable $n$ iterates through the values of the discrete signal $x[n]$.
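That summation is short enough to sketch directly. Here it is in plain Python (function names are my own), applied to one second of a 5 Hz cosine so that bin $k$ lines up with $k$ Hz:

```python
import cmath
import math

def dft_magnitude(x):
    """|DFT[k]| by direct summation: O(N^2), but trivial to write."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N)))
            for k in range(N)]

# 1 s of cos(10*pi*t) sampled at 100 Hz, so bin k corresponds to k Hz
N, fs = 100, 100.0
x = [math.cos(10 * math.pi * n / fs) for n in range(N)]
spec = dft_magnitude(x)
peak = max(range(N // 2), key=lambda k: spec[k])   # the 5 Hz bin dominates
```

Because the sampled window here contains a whole number of cosine periods, the energy lands cleanly in a single bin; the leakage artifacts discussed below appear when it does not.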

To demonstrate this process, let’s look at a simple test signal $x[n] = cos(10 \pi t)$ and plot it in matlab/octave.

clc;clear;
Step = 0.001;
t = 0:Step:1;
x = cos(10*pi*t);
plot(t,x,'linewidth',2)
axis([0 1 -1.2 1.2])
title('cos(10\pit)')
grid on

Next we will perform the DFT on the signal in order to observe its frequency components. Naturally a cosine has a single pure frequency component. In this case, the cosine wave $x[n]=cos(10 \pi n)$ has a frequency of $5 Hz$. Note that the frequency of $cos(k \cdot x)$ can be calculated with $f = k / (2 \pi) = 5$. The following image is the DFT waveform:

Note that in this waveform, the DFT is not simply $\delta ( k - 5)$ as we would expect. Reasons for this include that the DFT is an approximation since the cosine is not of infinite resolution, that the DFT is performed using only $k$ spectra, and that the sampled signal is not of infinite period. Consequently, there are artifacts of many signals other than just 5 Hz. Nonetheless, this approximation clearly shows that 5 Hz is the dominant frequency component of the signal. In regards to coding the DFT, built-in matlab / octave functions can be used. However, to gain a better understanding of the algorithm, I implemented it in matlab / octave with the following code:

function [DFT,DFTX] = DFTP(X,TimeStep,Points,maxFreq)
  N = length(X);
  j = (-1)^(1/2);
  if nargin < 4 % if max freq is passed in, use it, else use the alias freq
    AliasFreq = 1/TimeStep/2;
  else
    AliasFreq = maxFreq;
  end
  Total_Time = TimeStep*length(X);
  DFTX = 1:Points;
  DFTX = DFTX*AliasFreq/Points;
  DFT = zeros(Points,1);
  for n=1:N
    for k=1:Points
      DFT(k) = DFT(k) + X(n)*exp(-j*2*pi*k*AliasFreq*Total_Time/(N*Points)*n);
    end
  end
  DFT = abs(DFT); % Remove imaginary component, look at magnitude
  DFT = DFT./sum(DFT); % represent signal magnitude as a PDF
end

This function allows the user to specify the resolution of the DFT with $Points$. It also allows a user to specify the time step of the signal as well as the max frequency. If a max frequency is not specified, the Nyquist (anti-alias) frequency of $f = 1 / (2 \cdot TimeStep)$ is used. This function can be called and plotted with the following code:

clc;clear;
Step = 0.001;
t = 0:Step:3;
x = cos(10*pi*t);
[DFTY,DFTX] = DFTP(x,Step,1000,200);
plot(DFTX,DFTY);

Next, to help demonstrate the usefulness of the DFT, a more complicated signal can be plotted and white noise $e(t)$ added:

$$x[n]=sin\big( 150(2\pi\cdot n)\big) + sin\big( 40(2\pi\cdot n)\big) + sin\big( 5(2\pi\cdot n)\big) + 4\cdot e(n) $$

When the above signal is plotted and the DFT is performed, we observe the following:

Clearly the dominant spectra are at $f=5,40,150Hz$ with roughly even distribution.

Note that my DFT function’s height is in terms of a probability mass function (pmf, $f_k[k]$) for discrete time. This is the discrete-time analogue of the probability density function for continuous time. The probability mass function is defined by the following characteristic:

$$\sum^{N}_{k=0}f[k]=1 \ , \quad P(X=k) = f[k] $$
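The pmf normalization my `DFTP` function performs, versus the peak-to-one alternative discussed next, can be compared with a small sketch (Python for portability; the magnitude lists are arbitrary illustration values, not real DFT output):

```python
def pmf_normalize(mags):
    """Scale magnitudes so they sum to 1, like a probability mass function."""
    total = sum(mags)
    return [m / total for m in mags]

def peak_normalize(mags):
    """Scale magnitudes so the largest bin equals 1."""
    peak = max(mags)
    return [m / peak for m in mags]

pure = [0.1, 8.0, 0.1, 0.1]    # one dominant spectral line
mixed = [2.0, 2.0, 2.0, 2.0]   # energy spread evenly across bins

# The pmf form preserves how concentrated the energy is...
print(pmf_normalize(pure))     # dominant bin near 0.96
print(pmf_normalize(mixed))    # every bin 0.25

# ...whereas peak normalization hides it: both signals now peak at 1.0.
print(max(peak_normalize(pure)), max(peak_normalize(mixed)))
```

This is why the pmf form is better for comparing one DFT against another: the bin heights remain comparable across signals.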

Note that the height of a DFT is somewhat arbitrary, which is why I normalized it into a pmf. Normalizing the peak to one would have worked as well, but it would be poor for comparing signals. A demonstration of comparing DFT pmfs can be seen below:

Here the purer signal $y[n]$ is of higher magnitude than the combined signal $x[n]$. This difference is useful when calculating the cross-correlation or cross-covariance, and it would have been nullified had the peaks been normalized to one. For reference, the complete source code for the above plot is included below:

```matlab
clc; clear;
figure(1)
subplot(2,1,1)
Step = 0.001;
t = 0:Step:3;
xx = cos(200*pi*t) + 1*rand(1,length(t)) - 0.5;
x = 1*sin(150*(2*pi*t)) + 1*sin(40*(2*pi*t)) + sin(5*(2*pi*t)) + 1*rand(1,length(t)) - 0.5;
plot(t,x,t,xx,'linewidth',1)
axis([0 0.4 -3 3.5])
legend('x[n]','y[n]')
grid on
title('x[n]=sin(150(2\pi\cdotn)) + sin(40(2\pi\cdotn)) + sin(5(2\pi\cdotn)) + e(n)\newliney[n]=cos(100(2\pi\cdotn))+e(n)')
subplot(2,1,2)
[DFTY,DFTX] = DFTP(x,Step,1000,200);
[DFTYY,DFTXX] = DFTP(xx,Step,1000,200);
plot(DFTX,DFTY,DFTXX,DFTYY,'linewidth',1);
title('DFT(x[n]), DFT(y[n])')
xlabel('Hz')
ylabel('pdf')
ylim([0,0.08])
legend('DFT(x[n])','DFT(y[n])')
grid on
```

https://en.wikipedia.org/wiki/Discrete_Fourier_transform

Many thanks to Dr. Josse of Marquette University for his fundamental probability course.

Hello friends, and welcome to my guide on how NOT to make a tachometer. A tachometer is a device that reads RPM. The story behind this one: I needed a tachometer, didn't have the ideal components to make one, and made do with what I had. Here are the circuit contents:

**Rough Component List:**

1x LM358

Xx 1k, 10k, 100k 1/4W Resistors

Xx 1uF Caps

1x Helping Hands

2x LEDs

1x NPN

**Principle of operation**: Most transducers (speakers, motors, etc.) also work as "generators". The same is true of LEDs. I wanted to use an LED as a sensor and another LED of the same type as a transmitter. However, since a given LED isn't designed to work as a photodiode, it makes a very poor sensor. Nonetheless, with a bit of effort you can make it work! The first step is to amplify the small bit of excitation power the LED produces when an LED of the same wavelength is shined at it. To do this, I selected an NPN BJT. The gain of a single transistor was not large enough, so I used a Darlington configuration. Below is the circuit + scope shot with the propeller spinning.

Again, above is the circuit and a scope shot of the output. When the rotor passes in between the two LEDs, the lower LED no longer receives light and consequently shuts off the NPN pair. When the NPNs are shut off, the output is pulled high by the open-collector pull-up resistor (R8). Notice we have 60 Hz coupled onto the output in addition to some spikes. The spikes, of course, occur when the rotor passes over the LED. This signal by itself is rather crappy and hard to read with a microcontroller. We want to turn these spikes into nice pulses that our MCU can easily read via a GPIO pin. Looking ahead, using a comparator we should be able to find the peaks and generate pulses for the tachometer. However, the DC offset on the signal makes the comparator reference somewhat messy. The next step is to create a simple high-pass filter to block the DC offset.

Now that the DC is blocked, we still need a reference for our comparator. The problem with a fixed voltage is that the spike output voltage varies with speed. Consequently, using the average of the signal is ideal. To average the signal, we use a low-pass filter with a very low cutoff frequency. Next, I wired an op amp in a comparator configuration.
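Both filters follow the first-order RC corner formula $f_c = 1/(2\pi RC)$. The component values below are hypothetical picks from the rough component list (1k/10k/100k resistors, 1 uF caps), not the exact ones used in the build:

```python
import math

def rc_cutoff_hz(r_ohms, c_farads):
    """Corner frequency of a first-order RC filter: f_c = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# High-pass to block the DC offset while passing the spikes:
print(rc_cutoff_hz(10e3, 1e-6))    # 10k + 1uF -> ~15.9 Hz corner

# Low-pass with a much lower corner, to average the signal
# into a speed-tracking comparator reference:
print(rc_cutoff_hz(100e3, 1e-6))   # 100k + 1uF -> ~1.6 Hz corner
```

The key design point is only the ratio: the averaging filter's corner must sit well below the slowest expected spike rate so its output barely moves between pulses.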

At this point, you're probably thinking I'm trolling you, which... well, trying to use an LED for this application is a bit trollish. Anyhow, the reason the output is pure garbage is that the low-frequency content of the signal is noisy and periodically crosses our averaged reference. This is why you almost never use a comparator without a hysteresis region. Adding some good ole positive feedback creates a hysteresis region, which will prevent small signals from creating pulses on the output of our op amp.
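The effect of the positive feedback can be quantified. In one common topology (reference into the + input through one resistor, output fed back through another; the resistor and rail values below are hypothetical, not taken from the build), the thresholds work out as:

```python
def hysteresis_thresholds(v_ref, v_out_hi, v_out_lo, r_ref, r_fb):
    """Switching thresholds of a comparator with positive feedback.
    The + input sits on a divider between the reference (through r_ref)
    and the output (through r_fb); the threshold shifts with the output state."""
    v_th_hi = (v_ref * r_fb + v_out_hi * r_ref) / (r_ref + r_fb)
    v_th_lo = (v_ref * r_fb + v_out_lo * r_ref) / (r_ref + r_fb)
    return v_th_hi, v_th_lo

# Hypothetical values: 2.5 V averaged reference, 0-5 V output swing,
# 1k from the reference, 100k positive feedback.
hi, lo = hysteresis_thresholds(2.5, 5.0, 0.0, 1e3, 100e3)
print((hi - lo) * 1000)  # hysteresis band in mV, ~49.5 mV
```

Any noise smaller than that band can no longer toggle the output, which is exactly what cleans up the garbage pulses.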

That's a tad better. Look mom, no photodiode! Of course, from here it's peanuts to get an MCU hooked up and counting the pulse timings. A simple GPIO rising-edge interrupt + timer would do the trick. I needed this tachometer with a voltage output in order to do frequency verification of my "ESC" (sensorless BLDC controller). With two pulses per revolution, converting this signal to RPM is simply 60 / (2 × time between rising-edge interrupts). Anyhow, thought ya might find this interesting.
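The pulse-to-RPM conversion is sketched below (Python for illustration; on the MCU this would run in the interrupt handler). This assumes a two-blade rotor, i.e. two pulses per revolution:

```python
def rpm_from_pulse_period(period_s, pulses_per_rev=2):
    """Convert the time between rising-edge interrupts to RPM.
    With a two-blade rotor there are 2 pulses per revolution, so one
    full revolution takes pulses_per_rev * period_s seconds."""
    return 60.0 / (pulses_per_rev * period_s)

# e.g. 5 ms between rising edges on a two-blade rotor -> 6000 RPM
print(rpm_from_pulse_period(0.005))
```

In practice you would average a few consecutive periods before converting, since a single noisy edge otherwise produces a wild RPM reading.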

Thanks,

Nate

]]>

**http://www.zallus.com/product/zallus-oven-kit/**

The Zallus Reflow Oven Kit is a comprehensive kit allowing you to easily convert a toaster oven into an industry-compliant reflow oven. This kit is powerful and unique because of its touch screen interface as well as its USB interface for designing and downloading custom profiles.

An assembled control board with the following:

- Touch Screen Interface for Displaying Profiles
- USB Interface for Downloading Profiles & PC control
- Fan Input & Buzzer (Rev 2 Hardware only)
- Legitimate Thermocouple connector & Cold Compensation Measurements

Other crucial components included:

- K Type Quick Response Thermocouple
- 20A / 280VAC Max Solid State Relay
- Mounting standoffs & Screws
- USB Cable for Power or PC Control
- Wiring for Relay/Control Board Connecting

Apart from the obvious exterior features, the Zallus Oven Controller actually verifies your profiles against industry reflow standards. While other controllers may have learn features or display the output temperature, few if any actually verify whether the profiles implemented are optimal for the solder or for the components. It is easy to melt solder: you simply apply heat. However, the whole point of reflowing is to increase component reliability by adhering to strict standards. The Zallus Profile Creator tool allows you to verify the compliance of the profiles you draw as well as profiles that are run.

While hot-airing / hand soldering / blind reflowing appears to work in the lab, you will eventually run into issues with the following if you do it long enough:

- Sensitive RF components can change in value, tolerance, and rating (especially when hand soldered)
- Assemblies with 100s of joints per board are far more likely to experience bad-joint issues
- Outside the lab, components experience operational shock, thermal shock, vibration, etc., and are far more likely to fail without properly formed joints

After some time spent in industry doing electronic design, producing assemblies on manufacturing floors, experiencing field returns, and solving product test failures, I deemed this feature critical to any assemblies that I produce.

A good chunk of Zallus Oven Controller users are not actually using it for reflowing. The Profile Creator design tool makes creating profiles so easy that people have applied it to numerous applications. From heating ceramics to roasting coffee beans, having a reliable and user-friendly interface makes such applications easy.

Many thanks for reading up on my oven controller, and be sure to check out my Kickstarter at the following: https://www.kickstarter.com/projects/1070729460/zallus-oven-controller

I’m always looking to improve these controllers and would love to hear from you!

]]>

Video:

It's no secret that most of us nerds don't get enough exercise. A handy way to get some is exercising while nerding out on the PC, which is where a treadmill desk comes in. Or a dog... http://xkcd.com/1329/ . The issue with most treadmill desk tutorials out there is that they're all about building a desk platform on the existing bars/display of the treadmill. This is annoying because I already have a desk that I like, and I don't want to build another one on top of a treadmill. I kinda suck at mechanical stuff anyway, so I figured it'd be easier to hack the controls of the internal controller. Turns out, it wasn't so bad.

I'm using the Gold's Gym Trainer 420 treadmill from Walmart. This thing costs about 380-something USD (ouch!). I couldn't find a good cheapie on Craigslist, and the beauty of the 420 is that it doesn't rely on the vertical bars to support the platform, making it insanely simple to convert to a treadmill desk. One simply skips that part of the assembly, designs a simple control board, and presto, you are done!

Here is the treadmill I used:

http://www.walmart.com/ip/Gold-s-Gym-Trainer-420-Treadmill/27407264

Thankfully this treadmill is well designed and consequently is easy to control. It uses a standard 100mil spaced header and only uses 8 wires. See picture:

Starting from right to left, here is the pinout:

1. Black –> GND

2. Pink –> 12V

3. Green –> 3.3V (you must source this, ideally get it from the 12V)

4. Blue –> Speed Control, 1.5V Signal, PWM (3.3V IO through 1kohm resistor seems to work)

5. Orange –> Incline Control Drive+

6. Yellow –> Incline Control Drive-

7. Purple –> Incline Control Encoder Feedback

8. Black –> Ground

**Speed Control:**

The speed control on this device is elegant and simple. It is driven by a simple PWM signal where the positive duty cycle correlates to the speed. The frequency is 20 Hz, and a 10 ms on-time corresponds to 1 mph. The on-time then increases by 0.4 ms per 0.1 mph increase. Consequently, 3 mph = 10 ms + (3 - 1 mph) × 10 × 0.4 ms = 18 ms positive pulse width. See the screenshot below for reference; this screenshot is at 1.3 mph.
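The mph-to-pulse-width arithmetic can be sketched as follows (Python for illustration; the actual PWM would be generated on the MCU driving the blue wire):

```python
def pulse_width_ms(mph):
    """Positive pulse width for the treadmill's 20 Hz speed PWM:
    10 ms corresponds to 1 mph, plus 0.4 ms per additional 0.1 mph
    (i.e. 4 ms per mph above 1 mph)."""
    return 10.0 + (mph - 1.0) * 4.0

def duty_cycle(mph, period_ms=50.0):
    """Fractional duty cycle; 20 Hz PWM means a 50 ms period."""
    return pulse_width_ms(mph) / period_ms

print(pulse_width_ms(3.0))  # 18.0 ms, matching the 3 mph example
print(pulse_width_ms(1.3))  # 11.2 ms, the speed in the scope screenshot
print(duty_cycle(3.0))      # 0.36
```

Note this mapping is only characterized down to 1 mph; behavior below that (and the valid maximum) would need to be measured on the treadmill itself.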

**Todo**

- Incline control walkthrough
- Add distance & calorie burn support
- Add 802.11 support to send exercise information to the cloud / Xively

Commands will be sent through an nRF24L01 2.4 GHz radio to a dumb controller that will simply pulse-width modulate according to the speed sent. The main controller will be the same controller I designed for my smart scale. It has 2.2″ touch support, 2.4 GHz nRF24L01 support, as well as 802.11 support.

]]>

Howdy all, thanks so much for your support! Currently I have only released the deluxe version of this Reflow Oven Kit and have been trying to find some spare time to write the documentation on how to build the thing. Everything is still a WIP, so please feel free to email me if you have questions.

1. MSP430 Launchpad for programming

2. Soldering iron w/ solder

3. Cold beer (required)

**1. Verify that your screen works.**

All surface mount components necessary for 2.2″ touch screen operation will come pre-soldered. Simply plug the MSP430 Launchpad with processor into the bottom headers. Program the MSP430 processor with the code located in the GitHub repo here: https://github.com/NateZimmer/Reflow_Oven_Kit . Import the code into Code Composer Studio, then compile and flash it to the board. At this point you should see the touch screen become fully operational and be able to navigate through the menus. Unplug the Launchpad and proceed to the next step.

**2. Solder through-hole components.**

Solder in the through-hole components as shown in the reference image above. The schematic can also be consulted to correlate components with their reference designators.

Tips:

1. Ensure you solder polarized components correctly!

2. Don’t insert ICs backwards

3. Don’t insert ICs in incorrect pads

4. Brush up on wikipedia for resistor color codes if you don’t know them.

]]>

**Deluxe kit includes and features:**

– 1x 2.2″ Touch LCD Display

– Discrete Cold Compensation Circuit

– A high temp low thermal mass thermocouple

– A high Current Solid State Relay

– A RGB LED

– Optional Female Header interface for launchpad

– Optional External Power interface for wall supply.

Price: 50 USD + Shipping

**Standard kit includes and features:**

– 1x Nokia 5110 Display

– Discrete Cold Compensation Circuit

– A high temp low thermal mass thermocouple

– A high Current Solid State Relay

– Optional Female Header interface for launchpad

– Optional External Power interface for wall supply.

Price: 40 USD + Shipping

]]>