### Stitching Local Functions Together Using Equality Constrained Least-Squares

Finding a function that describes complex data over an entire domain can be very difficult. By subdividing the domain into smaller regions, it is possible to use simpler functions locally which, when joined together, predict the entire domain well. Fitting the local functions using least-squares does not, however, guarantee a globally continuous function; for that we need equality constraints. In the book Matrix Computations, Golub and Van Loan describe a simple method for solving least-squares problems with equality constraints (LSE) using plain linear algebra. The method involves two factorizations and a matrix multiplication.

The LSE algorithm is implemented in LAPACK as `GGLSE` and is available in, e.g., Intel MKL.

Given \(A \in \mathbb{R}^{m \times n}\), \(B \in \mathbb{R}^{p \times n}\), \(b \in \mathbb{R}^m\), \(d \in \mathbb{R}^p\), \(\text{rank}(A) = n\), and \(\text{rank}(B) = p\), we can solve (the first equation in the least-squares sense)

$$\begin{array}{rcl}
Ax & = & b \qquad \text{(regression)} \\
Bx & = & d \qquad \text{(constraint)}
\end{array}$$

using

$$\begin{array}{rcl}
B^T & = & QR \\
\text{Solve} \; R(1:p, \, 1:p)^T y & = & d \\
A & = & AQ \\
\text{Find } z \text{ minimizing} & & \| A(:, \, p+1:n)z - (b - A(:, \, 1:p)y) \|_2 \\
x & = & Q(:, \, 1:p)y + Q(:, \, p+1:n)z
\end{array}$$
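The steps above translate almost line-for-line into NumPy. A minimal sketch (the function name `lse` and the toy data are my own; in production one would call LAPACK's `GGLSE` directly):

```python
import numpy as np

def lse(A, b, B, d):
    """Minimize ||A x - b||_2 subject to B x = d, following the
    QR-based steps above (Golub & Van Loan, Matrix Computations)."""
    p = B.shape[0]
    # Step 1: B^T = QR (full QR, so Q is square and orthogonal).
    Q, R = np.linalg.qr(B.T, mode="complete")
    # Step 2: solve R(1:p,1:p)^T y = d -- a small triangular system.
    y = np.linalg.solve(R[:p, :p].T, d)
    # Step 3: rotate A into the new coordinates: A <- A Q.
    AQ = A @ Q
    # Step 4: unconstrained least squares for the free part z.
    z, *_ = np.linalg.lstsq(AQ[:, p:], b - AQ[:, :p] @ y, rcond=None)
    # Step 5: assemble x = Q(:,1:p) y + Q(:,p+1:n) z.
    return Q[:, :p] @ y + Q[:, p:] @ z

# Tiny example: fit two unknowns to three observations while
# forcing x[0] + x[1] = 1.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
B = np.array([[1.0, 1.0]])
d = np.array([1.0])
x = lse(A, b, B, d)   # the constraint holds exactly
```

Note that the constraint is satisfied through a triangular solve, so it holds to machine precision rather than merely being approximated.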

The first example is a piecewise linear function whose domain is divided into six intervals. Each local function is fitted using the data points present in its interval. The end points do not match up, leading to a discontinuous function representation.

By using LSE to force each segment to match its neighbours at the end points (the derivatives remain discontinuous) while the mean-square error is minimized, we obtain the following result:
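A sketch of how such a fit can be set up (the data, interval count, and names here are stand-ins, not the original figures' data): each interval gets its own intercept and slope, and one constraint row per interior breakpoint forces neighbouring segments to agree there.

```python
import numpy as np

def lse(A, b, B, d):
    # QR-based equality-constrained least squares (the steps above).
    p = B.shape[0]
    Q, R = np.linalg.qr(B.T, mode="complete")
    y = np.linalg.solve(R[:p, :p].T, d)
    AQ = A @ Q
    z, *_ = np.linalg.lstsq(AQ[:, p:], b - AQ[:, :p] @ y, rcond=None)
    return Q[:, :p] @ y + Q[:, p:] @ z

# Hypothetical data: noisy samples of a smooth curve on [0, 6],
# split into six unit-width intervals.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 6.0, 120))
v = np.sin(t) + 0.05 * rng.standard_normal(t.size)

k = 6
edges = np.arange(1, k)                  # interior breakpoints 1..5
seg = np.minimum(t.astype(int), k - 1)   # interval index per sample

# Regression matrix: columns (intercept_i, slope_i) per interval;
# each row only touches the columns of its own interval.
A = np.zeros((t.size, 2 * k))
A[np.arange(t.size), 2 * seg] = 1.0
A[np.arange(t.size), 2 * seg + 1] = t

# One constraint per interior breakpoint e:
#   a_i + b_i e = a_{i+1} + b_{i+1} e
B = np.zeros((k - 1, 2 * k))
for i, e in enumerate(edges):
    B[i, 2 * i: 2 * i + 4] = [1.0, e, -1.0, -e]
d = np.zeros(k - 1)

coef = lse(A, v, B, d)
# Evaluate each junction from both sides; they now agree.
left = coef[0::2][:-1] + coef[1::2][:-1] * edges
right = coef[0::2][1:] + coef[1::2][1:] * edges
```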

Using higher-order polynomials allows even better descriptions of the data. Fitting third-order polynomials on the same intervals as the linear case, without LSE, does not look very nice.

Since we have a higher-order polynomial we can set equality constraints for both the end points and the first-order derivatives, leaving a discontinuity only in the second-order derivative and higher.
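One way to build those constraint rows (again a sketch with hypothetical data; the solver is the same five-step routine as above): each interior breakpoint contributes two rows, one matching the values and one matching the first derivatives of the adjacent cubics.

```python
import numpy as np

def lse(A, b, B, d):
    # QR-based equality-constrained least squares (the steps above).
    p = B.shape[0]
    Q, R = np.linalg.qr(B.T, mode="complete")
    y = np.linalg.solve(R[:p, :p].T, d)
    AQ = A @ Q
    z, *_ = np.linalg.lstsq(AQ[:, p:], b - AQ[:, :p] @ y, rcond=None)
    return Q[:, :p] @ y + Q[:, p:] @ z

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 6.0, 240))
v = np.sin(t) + 0.05 * rng.standard_normal(t.size)

k, deg = 6, 3                      # six intervals, cubic segments
nc = deg + 1                       # coefficients per segment
seg = np.minimum(t.astype(int), k - 1)

# Regression matrix: a cubic Vandermonde block per interval.
A = np.zeros((t.size, nc * k))
for j in range(nc):
    A[np.arange(t.size), nc * seg + j] = t ** j

# Two constraint rows per interior breakpoint e: equal value and
# equal first derivative across the junction.
rows = []
for i, e in enumerate(range(1, k)):
    val = np.array([e ** j for j in range(nc)], float)
    der = np.array([j * e ** (j - 1) if j else 0.0 for j in range(nc)])
    for c in (val, der):
        row = np.zeros(nc * k)
        row[nc * i: nc * (i + 1)] = c
        row[nc * (i + 1): nc * (i + 2)] = -c
        rows.append(row)
B = np.array(rows)
d = np.zeros(B.shape[0])

coef = lse(A, v, B, d)
```

Higher-order smoothness (matching second derivatives, as in cubic splines) would just add a third row per breakpoint in the same pattern.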

Taking the step to two-dimensional data is straightforward. Here we are using a set of points generated by the inverse of the squared distance from (0,0).

Dividing the domain into 5×5 square subdomains, in which the points are described using two-dimensional functions without any constraints, looks like a storm-damaged tiled roof.

Applying the LSE-algorithm on each corner point effectively stitches all edges together.

Subdividing the domain further gives a better fit, but the global function risks overfitting since fewer data points are used for each subdomain. Regularizing the final problem can help if this happens. With 8×8 intervals, the function looks smoother.
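One regularization that fits the LSE framework without any new machinery is Tikhonov (ridge) regularization: append \(\sqrt{\lambda}\,I\) as extra regression rows with zero targets, which penalizes \(\|x\|^2\) while the equality constraints are still enforced exactly. A sketch (the name `lse_ridge` and the random test problem are mine):

```python
import numpy as np

def lse(A, b, B, d):
    # QR-based equality-constrained least squares (the steps above).
    p = B.shape[0]
    Q, R = np.linalg.qr(B.T, mode="complete")
    y = np.linalg.solve(R[:p, :p].T, d)
    AQ = A @ Q
    z, *_ = np.linalg.lstsq(AQ[:, p:], b - AQ[:, :p] @ y, rcond=None)
    return Q[:, :p] @ y + Q[:, p:] @ z

def lse_ridge(A, b, B, d, lam):
    # Minimize ||A x - b||^2 + lam * ||x||^2 subject to B x = d by
    # augmenting the regression rows; the constraint stays exact.
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    return lse(A_aug, b_aug, B, d)

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 6))
b = rng.standard_normal(30)
B = rng.standard_normal((2, 6))
d = np.array([1.0, -1.0])

x0 = lse_ridge(A, b, B, d, 0.0)   # lam = 0 reduces to plain LSE
x1 = lse_ridge(A, b, B, d, 5.0)   # shrunk, constraints still exact
```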

This is just one application where a mean-squared error measure can be minimized while fulfilling equality constraints without having to resort to an iterative optimization algorithm. Linear algebra, despite being linear, can be incredibly powerful in practice.