
Blog

Introduction

Scheduling constrained resources over time is a tough problem. In fact, the problem is NP-hard. One part of the problem is to find a feasible solution where all constraints are satisfied simultaneously. Another part is to find a solution which also satisfies some measure of optimality. In practice, it is often sufficient to solve the problem partially, such that the solution is better than the first feasible solution found.

Combinatorial scheduling problems can be solved using mixed-integer linear programming (MIP) or heuristic approaches such as tabu search.

The Flow Approach

There are several ways to formulate the scheduling problem. The flow formulation presented by Christian Artigues in 2003 is one interesting approach. Assume for simplicity a process with two tasks:

Nodes 1 and 2 represent the tasks. “S” is the start task, to which no edges can go back. “E” is the end task, and only incoming edges are allowed there. The directed graph shows all possible edges for this configuration per resource type.

The graphs tell us how resources can be transported between different tasks. Hence, “S” can give resources to task 1 and task 2, while sending resources that are not needed directly to “E.”

Tasks can handle resources in three different ways:

  1. A task can consume a resource such that it disappears.
  2. A task can produce a new resource making it available for someone else to interact with.
  3. A task can pass a resource through for others to use (e.g., a person or a tool).

Assume that we have the following resources available in “S” to start with:

  • One operator.
  • Two tools.
  • Three raw materials.

Task 1 requires the following to produce one product A:

  • One operator.
  • One tool.
  • One raw material.

Task 2 requires the following to produce one product B:

  • One operator.
  • Two tools.
  • One product A (the product produced by task 1).
  • One raw material.

This problem can easily be solved by hand yielding:

The solution is a directed acyclic graph where resources flow from “S” to “E”, fulfilling the constraints given by each node. Based on the solution, task 2 must wait for task 1 to finish. Also, note that the edge from 2 to 1 has been removed since it is not needed.
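To make the solution structure concrete, here is a minimal sketch, assuming Python with the networkx package, that encodes the hand-derived resource flows and verifies that the result is acyclic so that a task order can be read off (the edge annotations are illustrative, not taken from the figure):

```python
# Minimal sketch of the hand-derived flow solution, assuming Python with networkx.
# Resource quantities on each edge are illustrative stand-ins for the figure.
import networkx as nx

flow = nx.DiGraph()
flow.add_edge("S", "1", resources={"operator": 1, "tool": 1, "raw": 1})
flow.add_edge("S", "2", resources={"tool": 1, "raw": 1})
flow.add_edge("S", "E", resources={"raw": 1})          # unused raw material goes straight to E
flow.add_edge("1", "2", resources={"operator": 1, "tool": 1, "product_A": 1})
flow.add_edge("2", "E", resources={"operator": 1, "tool": 2, "product_B": 1})

assert nx.is_directed_acyclic_graph(flow)              # a valid solution must be cycle-free
print(list(nx.topological_sort(flow)))                 # e.g. ['S', '1', '2', 'E']
```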

Solving more complex resource-constrained scheduling problems rapidly increases the number of available combinations, making the problem harder and harder to solve, as can be seen from the full graphs for 3, 6 and 10 tasks:

Scheduling

In some cases, it is possible to change the order between tasks or even execute them in parallel without violating any constraints. The graph tells us whether any task must follow any other task(s), either directly or through a chain of events. What is left is to take the duration of each task into consideration to produce the final time schedule.
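As an illustration, the step from precedence relations to start times can be sketched in a few lines of Python (the durations below are made up for the example):

```python
# Sketch: computing earliest start times from precedence constraints and task durations.
# Durations are hypothetical; the precedence comes from the flow solution above.
durations = {"1": 3.0, "2": 2.0}
predecessors = {"1": [], "2": ["1"]}        # task 2 waits for task 1

def earliest_starts(durations, predecessors):
    """Longest-path pass over the precedence DAG (simple memoized recursion)."""
    memo = {}
    def start(task):
        if task not in memo:
            memo[task] = max(
                (start(p) + durations[p] for p in predecessors[task]), default=0.0
            )
        return memo[task]
    return {t: start(t) for t in durations}

print(earliest_starts(durations, predecessors))   # {'1': 0.0, '2': 3.0}
```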

Conclusion

The resource-constrained scheduling problem is relatively simple to solve if there is an excess of either tasks or resources compared to the other, since the number of combinations is then much reduced. When the number of available resources is close to the number of tasks involved, the problem gets much harder to solve.

Adjusting the measure of optimality can yield many different results depending on the formulation. For example, some tasks could be prioritized to execute before others, or tasks could be distributed over a time interval to maximize robustness against deviations, and so forth.

The problem can be solved using mixed-integer linear programming (MIP) for which there are several mature solvers available on the market.
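As a rough sketch of what such a formulation can look like, assuming Python with the open-source PuLP package, here is a tiny model that minimizes the makespan of two tasks with the precedence constraint from the example above (the resource-flow variables of the full flow formulation are omitted):

```python
# Sketch: minimizing the makespan of two tasks with a precedence constraint, using PuLP.
# Durations are hypothetical; a full flow formulation also adds resource-flow variables.
import pulp

durations = {"task1": 3, "task2": 2}

prob = pulp.LpProblem("tiny_schedule", pulp.LpMinimize)
start = {t: pulp.LpVariable(f"start_{t}", lowBound=0) for t in durations}
makespan = pulp.LpVariable("makespan", lowBound=0)

prob += makespan                                                 # objective: finish as early as possible
prob += start["task2"] >= start["task1"] + durations["task1"]    # task 2 needs task 1's product
for t, d in durations.items():
    prob += makespan >= start[t] + d

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({t: start[t].value() for t in durations}, makespan.value())
```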


Introduction

Ordinary linear differential equations can be solved as trajectories given some initial conditions. But what if your initial conditions are given as probability distributions? It turns out that the problem is relatively simple to solve.

Transformation of Random Variables

If we have a random system described as

$$\dot{X}(t) = f(X(t),t) \qquad X(t_0) = X_0$$

we can write this as

$$X(t) = h(X_0,t)$$

which is an algebraic transformation of one set of random variables into another, representing a one-to-one mapping. Its inverse transform is written as

$$X_0 = h^{-1}(X,t)$$

and the joint density function \(f(x,t)\) of \(X(t)\) is given by

$$f(x,t) = f_0 \left[ x_0 = h^{-1}(x,t) \right] \left| J \right|$$

where \(J\) is the Jacobian

$$J = \left| \frac{\partial x^T_0}{\partial x} \right|$$.

Solving Linear Systems

For a system of differential equations written as

$$\dot{x}(t) = A x(t) + B u(t)$$

a transfer matrix can be defined

$$\Phi(t,t_0) = e^{A(t-t_0)}$$

which can be used to write the solution as

$$x(t) = \Phi(t,t_0) x(t_0) + \int_{t_0}^{t} \Phi(t,s) B u(s) \, ds$$.

The inverse formulation of this solution is

$$x(t_0) = \Phi^{-1}(t,t_0) x(t) - \Phi^{-1}(t,t_0) \int_{t_0}^{t} \Phi(t,s) B u(s) \, ds$$.

Projectile Trajectory Example

Based on the formulations above we can now move on to a concrete example where a projectile is sent away in a vacuum. The differential equations to describe the motion are

$$\left\{ \begin{array}{rcl} \dot{p}_{x_1}(t) & = & p_{x_2}(t) \\ \dot{p}_{x_2}(t) & = & 0 \\ \dot{p}_{y_1}(t) & = & p_{y_2}(t) \\ \dot{p}_{y_2}(t) & = & -g \end{array} \right.$$

where \(p_{x_1}\) and \(p_{y_1}\) are cartesian coordinates of the projectile in a two dimensional space while \(p_{x_2}\) is the horizontal velocity and \(p_{y_2}\) is the vertical velocity. We only have gravity as an external force, \(-g\), and no wind resistance which means that the horizontal velocity will not change.

The matrix representation of this system becomes

$$A = \left( \begin{array}{cccc} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array} \right)$$

with

$$B^T = \left( \begin{array}{cccc} 0 & 0 & 0 & 1 \end{array} \right)$$.

The transfer matrix is (matrix exponential, not element-wise exponential)

$$\Phi(t,t_0) = e^{A(t-t_0)} = \left( \begin{array}{cccc} 1 & 0 & t-t_0 & 0 \\ 0 & 1 & 0 & t-t_0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right)$$

Calculating the solution of the differential equation gives

$$x(t) = \Phi(t,0) x(0) + \int_0^t \Phi(t,s) B u(s) \, ds$$

where \(u(s) = -g\) is the constant gravitational input and \(x^T(0) = \left( \begin{array}{cccc} 0 & 0 & v_x & v_y \end{array} \right)\). The parameters \(v_x\) and \(v_y\) are the initial velocities of the projectile.

The solution becomes

$$x(t) = \left( \begin{array}{c} v_x t \\ v_y t - \frac{g t^2}{2} \\ v_x \\ v_y - g t \end{array} \right)$$

and the time when the projectile hits the ground is given by

$$p_{y_1}(t) = v_y t - \frac{g t^2}{2} = 0 \qquad t > 0$$

as

$$t_{y=0} = 2 \frac{v_y}{g}$$.

A visualization of the trajectory given \(v_x = 1\) and \(v_y = 2\) with gravity \(g = 9.81\) shows an example of the motion of the projectile:
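A minimal numerical sketch of this trajectory, assuming Python with NumPy and SciPy, propagates the state with the matrix exponential instead of the closed-form expression:

```python
# Sketch: propagating the projectile state with the matrix exponential Phi(t, 0) = expm(A t).
# Parameter values are taken from the example; the integral uses a plain trapezoidal rule.
import numpy as np
from scipy.linalg import expm

g, vx, vy = 9.81, 1.0, 2.0
A = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0],
              [0, 0, 0, 0]], dtype=float)
B = np.array([0.0, 0.0, 0.0, 1.0])
x0 = np.array([0.0, 0.0, vx, vy])
u = -g                                               # constant gravitational input

def state(t, n_quad=200):
    """x(t) = Phi(t,0) x0 + integral_0^t Phi(t,s) B u ds."""
    s = np.linspace(0.0, t, n_quad)
    integrand = np.array([expm(A * (t - si)) @ B * u for si in s])
    forced = (integrand[:-1] + integrand[1:]).sum(axis=0) * (s[1] - s[0]) / 2.0
    return expm(A * t) @ x0 + forced

t_land = 2 * vy / g
print(state(t_land))      # approximately [vx*t_land, 0, vx, -vy]
```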

Now, if we assume that the initial state \(x(0)\) can be described by a joint Gaussian distribution, we can use the formula shown earlier to say that

$$f(x,t) = f_0\left[x(0)=h^{-1}(x,t)\right] \left|J\right| = \frac{1}{\sqrt{\left|2 \pi \Sigma \right|}} e^{-\frac{1}{2}(x(0)-\mu)^T \Sigma^{-1} (x(0)-\mu)}$$,

where \(\left| J \right| = \left| \Phi^{-1}(t) \right|\), \(\mu^T = \left( \begin{array}{cccc} 0 & 0 & v_x & v_y \end{array} \right)\) and

$$\Sigma = \left( \begin{array}{cccc} 0.00001 & 0 & 0 & 0 \\ 0 & 0.00001 & 0 & 0 \\ 0 & 0 & 0.01 & 0 \\ 0 & 0 & 0 & 0.01 \end{array} \right)$$

which means that we have high confidence in the firing position but less in the initial velocity.
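Since the map from \(x(0)\) to \(x(t)\) is affine, the state at time \(t\) remains Gaussian, and the density above can be evaluated by propagating the mean and covariance directly. A minimal sketch, assuming NumPy and using the \(\Phi\) and forced term from the example:

```python
# Sketch: propagating the Gaussian initial state through the affine map x(t) = Phi x(0) + b(t).
# Values are the post's; b(t) is the forced response for the constant input u = -g.
import numpy as np

g, vx, vy = 9.81, 1.0, 2.0
t = 2 * vy / g                                       # the deterministic landing time
Phi = np.array([[1, 0, t, 0],
                [0, 1, 0, t],
                [0, 0, 1, 0],
                [0, 0, 0, 1]], dtype=float)
b = np.array([0.0, -g * t**2 / 2, 0.0, -g * t])

mu0 = np.array([0.0, 0.0, vx, vy])
Sigma0 = np.diag([1e-5, 1e-5, 1e-2, 1e-2])

mu_t = Phi @ mu0 + b                                 # mean of x(t)
Sigma_t = Phi @ Sigma0 @ Phi.T                       # covariance of x(t)

# Marginalizing out the velocities keeps the position entries of the mean
# and the top-left 2x2 block of the covariance.
mu_pos, Sigma_pos = mu_t[:2], Sigma_t[:2, :2]
print(mu_pos, Sigma_pos)
```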

We are only interested in where the projectile lands and we can marginalize the velocities to get:

$$f\left(p_{x_1},p_{y_1},t\right) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x,t) dp_{x_2} dp_{y_2}$$

which when plotted gives

Since we have used the landing time of the deterministic trajectory, we get a spread across the y-axis as well (the ground is located at \(p_{y_1} = 0\)). We could also marginalize the y-direction to end up with:

This shows the horizontal distribution of the projectile at the time when the deterministic trajectory of the projectile is expected to hit the ground.

Conclusion

Given a set of ordinary differential equations, it is possible to derive the uncertainty of the states given a probability distribution in the initial conditions. There are two other important cases to look into as well: stochastic input signals and random parameters.


The process of testing the software includes both virtual testing, as mentioned above, and actual physical testing with purpose-built hardware. The whole process of testing can be compared to a multistage rocket, where a sequence of “in-the-loop” tests targets different areas within the software development.

In this post, we will look into this family of “in-the-loop” tests and present the different members. Most of all, we will present the latest sibling in the family, the vHIL, and the introduction of virtual processors into the “in-the-loop” environment.

Physical testing

Still, physical testing is important, since the final goal of the software is to control some physical hardware. However, physical testing has its drawbacks. To perform physical testing, one needs plants, physical test prototypes, which can be both expensive and time-consuming to produce. Because of this, the number of plant objects is highly restricted. Also, they might have to be shared among multiple development teams, so the effective time for testing is greatly limited. Add to this that the plants are often hard-configured, which means that to test another setup it is necessary to send the plant to the workshop to be modified, a time-consuming and error-prone procedure.

Physical testing is also a kind of gamble, since this can be the first time the software, uploaded to the ECU connected to the plant, is in contact with any physical hardware, and the software can have one or several bugs that can damage the plant. Since the demand for test prototypes can be very high, one wants to avoid the risk of damaging the plants. Therefore, physical testing first takes place late in the development process, when the software has gone through some safety checks; this is where “in-the-loop” tests come into the picture.

MIL – Model in the loop

The whole purpose of the “in-the-loop” process is to have the software as mature as possible when it has its first contact with the physical plant. At the first stage, the control software components, or control models, are tested; one checks the behavior of the control algorithms against a virtual plant model. The purpose of these virtual plant models is to simulate the physical behavior of the physical test objects. Often, the software components are limited to controlling a specific task, which also limits the scope of the virtual plant model.

The testing starts by considering single components (unit testing), and as they start to function properly, larger and larger groups of components are considered. With these groups, it is possible to test the interfaces and the communication between the components.

When the tests show that the behavior is satisfactory, the control algorithms are implemented. The implementation can either be performed manually by writing C code or by using code-generation tools, such as Simulink.

SIL – Software in the loop

At the software-in-the-loop stage, the implementation of the control algorithms is validated. The compiled code is simulated against the same virtual model of the plant used in the MIL testing. In this way, it is possible to compare the results from the two stages to verify that the implementation has not changed the behavior of the system.
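A schematic sketch of such a back-to-back comparison, written in plain Python with NumPy (the controller, plant model and tolerance are made-up stand-ins, not a real toolchain):

```python
# Sketch: back-to-back comparison of MIL and SIL results against the same plant model.
# controller_model / controller_compiled and the plant are hypothetical stand-ins.
import numpy as np

def plant(state, u, dt=0.01):
    """Toy first-order plant: x' = -x + u."""
    return state + dt * (-state + u)

def controller_model(ref, meas):
    """MIL stage: the control model (e.g. exported from a modeling tool)."""
    return 2.0 * (ref - meas)                     # simple P controller

def controller_compiled(ref, meas):
    """SIL stage: stand-in for the compiled C code (e.g. called through a wrapper)."""
    return 2.0 * (ref - meas)

def run(controller, steps=1000, ref=1.0):
    x, trace = 0.0, []
    for _ in range(steps):
        x = plant(x, controller(ref, x))
        trace.append(x)
    return np.array(trace)

mil, sil = run(controller_model), run(controller_compiled)
assert np.allclose(mil, sil, atol=1e-6), "implementation changed the closed-loop behavior"
```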

The problem with SIL is that the C code is compiled for the workstation the engineer is sitting at, not for the actual microcontroller. Between these, there are several differences, for example when it comes to precision and data types. The microcontroller is also much more limited when it comes to memory and performance.

PIL – Processor in the loop

The third stage is not that commonly used. The purpose here is to remedy the shortcomings of the SIL environment and compile the code for the microcontroller architecture. Again, the implementation of the control algorithms as compiled C code is tested, and the same setup of testing against virtual plant models as in the first two “in-the-loop” stages is used. Besides testing the compiled code, it is also possible to check memory allocation and execution times.

HIL – Hardware in the loop

It is first at this stage that the code leaves the host PC and is uploaded to a real ECU/MCU. When leaving the host PC, the control algorithms can no longer be handled as single units or small groups of units. Instead, they are one component among many in a software stack. Here, they share space with other software components, like other control algorithms, but also with other software layers, like the OS, drivers, and middleware.

Getting the software stack compiled and the components in it to function together as a unit is the first task in the procedure of setting up an HIL environment. It is not uncommon that multiple iterations through all previous “in-the-loop” stages are required to have a mature software stack ready for the HIL environment.

The actual HIL environment consists of an ECU/MCU connected to the virtual plant model through an HIL simulation box, which runs the simulation of the virtual plant model in real time. The problem is that this environment suffers from some of the drawbacks common to physical testing: expensive setups shared among multiple teams doing integration and testing.

vHIL – Virtual hardware in the loop

Lately, fast simulation models of digital ECU/MCU hardware have appeared on the market, so-called virtual prototypes, or VPs. These execute the same binary software as the target MCU and can be simulated against the same virtual plant models used for the MIL and SIL testing. This new environment is the vHIL.

It should be stated that the vHIL will not replace the HIL. Instead, it smooths the path between the SIL and the HIL. Since it does not require any physical ECU, the vHIL can be up and running early in the process and deliver a more mature software stack to be integrated into the HIL environment, reducing the bring-up time from weeks to days, which gives more time for actual HIL testing.

With the VP, one can achieve better control and visibility compared to testing on a real ECU/MCU. Many of the simulation platforms for VPs give the opportunity to debug, analyze and automate the testing procedure. It is possible to set breakpoints in the software for debugging, which halt the whole simulation when hit. From these breakpoints, one can step through the software line by line and jump in and out of subroutines. The platforms can also visualize the communication paths in the software and check the code coverage during a test.

With the vHIL, more tests are performed early in the process, since one does not need to wait for a physical prototype of the ECU. Also, the vHIL is easier to scale up and have running at multiple virtual locations at the same time. More testing at an early stage will result in higher-quality testing in the HIL environment and in the physical testing. It is also possible, with the vHIL, to prepare the tests for the upcoming HIL in advance, compose and test them in a virtual environment, and have them ready when the prototypes of the physical ECUs appear.

Two examples of VP platforms are the Virtualizer from Synopsys and the Vehicle System Integrator, VSI, from Mentor Graphics.


For more complex models it is possible to solve the Bayesian problem numerically using, for example, MCMC (Markov chain Monte Carlo). It is a computationally expensive method which gives the solution as a set of points in parameter space, distributed according to the likelihood of the parameters given the data at hand.

Regression Example

For this example, we are working with a linear model of the form \(f(x)=a+bx+\epsilon\), where \(\epsilon \sim N\left(0,\sigma^2\right)\) (normally distributed noise).

First, we need to generate some random data, choosing 50 samples where \(a=1\), \(b=2\) and \(\sigma^2=1\):
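A minimal sketch of the data generation, assuming Python with NumPy (the seed and input range are arbitrary choices):

```python
# Sketch: generate 50 noisy samples from y = a + b*x + eps, eps ~ N(0, sigma^2).
import numpy as np

rng = np.random.default_rng(0)
a, b, sigma = 1.0, 2.0, 1.0
x = rng.uniform(0.0, 10.0, size=50)          # arbitrary choice of input range
y = a + b * x + rng.normal(0.0, sigma, size=50)
```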

One simple way to implement MCMC is the Metropolis-Hastings method. It is based on evaluating changes in the posterior likelihood function one parameter at a time, performing a random walk that tries to stay in a region of high probability. If the likelihood is multi-modal, it is, of course, possible to get stuck in one mode.
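A compact sketch of random-walk Metropolis-Hastings for the two parameters \(a\) and \(b\), assuming NumPy and the data generated above (the proposal width, starting point and burn-in are assumptions for illustration):

```python
# Sketch: random-walk Metropolis-Hastings for (a, b), with sigma assumed known and flat priors.
import numpy as np

def log_likelihood(theta, x, y, sigma=1.0):
    a, b = theta
    resid = y - (a + b * x)
    return -0.5 * np.sum(resid**2) / sigma**2

def metropolis_hastings(x, y, n_samples=100_000, step=0.05, seed=1):
    rng = np.random.default_rng(seed)
    theta = np.array([0.0, 0.0])                  # arbitrary starting point
    logp = log_likelihood(theta, x, y)
    samples = np.empty((n_samples, 2))
    for i in range(n_samples):
        for j in range(2):                        # one parameter at a time, as described above
            proposal = theta.copy()
            proposal[j] += rng.normal(0.0, step)
            logp_new = log_likelihood(proposal, x, y)
            if np.log(rng.uniform()) < logp_new - logp:
                theta, logp = proposal, logp_new
        samples[i] = theta
    return samples

samples = metropolis_hastings(x, y)               # x, y from the data-generation sketch
a_hat, b_hat = samples[len(samples) // 2:].mean(axis=0)   # discard burn-in, then average
```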

The resulting estimated likelihood given 100,000 samples for the linear regression is shown below, where the red dot represents the highest likelihood and the blue dot the true parameters. The contours show a smoothed kernel estimate of the density of the distribution. Note that there is a slight covariance between the parameters a and b, which means that if one of the parameters changes, the other has to change as well.

It turns out that the maximum likelihood of the MCMC estimate and the least-squares method give the same result, which is expected since maximum likelihood and least squares are equivalent in the presence of Gaussian noise.

This example has been a simple demonstration of how to find a good fit for model parameters given some measured data. Based on the likelihood plots above, we obtain some understanding of how sensitive the fit is to changes in the parameters and whether they need to be correlated to obtain maximum likelihood.

In the presence of non-Gaussian noise and high-dimensional, complex models, MCMC might be the only way to obtain a solution at all, at the cost of long computation times.


Combine Control Systems visited this year’s UAS Forum conference in Linköping. UAS is the abbreviation for Unmanned Aerial System, a subject of interest at the company since the first master’s thesis on the topic started in 2012. For two days, the UAS Forum, with lectures, presentations and a fair, was held in connection with the international workshop on research, education and development of unmanned aerial systems (RED-UAS), which has been held every second year since 2011. This substantiates Linköping’s position as Sweden’s aviation capital.

During Combine Control Systems’ visit to the conference, the major topics were “Introduction of UAS in an organization”, “How do we share the airspace” and “The need for artificial intelligence in UAS”, as well as presentations of companies in the region such as CybAero and UMS Skeldar. It was very interesting to hear from both known and lesser-known actors in the industry and to understand their thoughts about using UAS in their businesses.

Thanks to all participants and organizers, and a special thanks to Jan Holmbom, who moderated the event. If you are interested in talking more about UAS or other flight systems, please contact us at Combine Control Systems. Some of our previous projects related to UAS can be found on the webpage, such as cluster behavior for UAS.


This time, we will consider the development of an arbitrary mechatronic system, a system consisting of both hardware and software. As hardware, we count all physical components, ranging all the way from processors and integrated circuits through actuators and sensors to engines and ventilation circuits. With the term software, we refer to the embedded code uploaded to processors and integrated circuits.

Back in the day, which is not that many years ago, the graph in the figure above could have been a good schematic view of a development process. Here, we have time along the x-axis, a start point to the left and a delivery point to the right. At that time, it was more or less necessary to have a sequential process, where the actual hardware had to be available before the development of the software could start.

In this graph, it can be noticed that,

  • The knowledge about the system, the green curve, increases with time. The knowledge is obtained by testing different solutions, and the more tests that can be performed, the more knowledge the developers obtain about the system.
  • The possibility to make changes, the red curve, instead decreases with time. The closer one gets to the point of delivery, the more limited the possibility to make changes. The short time left to delivery makes it hard to get large changes in place, and even small changes can rock the foundation of the rigid structure that the system has become by the delivery point.
  • The yellow area between the two curves and the x-axis, within the considered development time, is a measure of the effective work that can be performed during the process.

Clearly, the goal should be to maximize the effective work during the process. The more useful work, the more tests can be performed and the more bugs and faults can be found. Fewer bugs and errors result in better system quality and a better final product.

One important variable that we have disregarded in this graph is the cost. We all know that there are always functions within companies whose main responsibility is to keep costs as low and income as high as possible. One efficient way to achieve this is to reduce the development time, shortening the time-to-market, which is exactly what is visualized in the figure below.

Directly, two significant consequences can be noticed. First, the amount of productive work is more limited. Secondly, the sequential procedure, where the software development starts only after the hardware is in place, no longer fits within the time for development.

The introduction of Model-Based Design (MBD) has made it possible to separate the software into different components. Some of these are in direct contact with the hardware and interact with it, while others, like the controller software components (C-SWC), whose purpose is to control the behavior of some physical quantities, have several layers of software between themselves and the hardware. To test the C-SWC, it might not be necessary to have the actual device at an early stage of the project. Instead, a virtual model describing the dynamics of the physical quantities, and how they are perturbed by other quantities, can be the object to test the C-SWC code against: a so-called plant model.

The entry of plant models and separable software components made it possible to start the development of the software earlier and test it on desktop computers instead of on physical test benches. The effect of the virtual testing is visualized by the graph in the figure above. Before virtual testing was introduced, one had to wait for an available test bench before testing the software, described by the blue curve in the graph. With the introduction of MBD, “only” plant models were needed to verify part of the software for bugs and faults in a virtual environment. The bugs and errors are found at an earlier stage of the process, which is what the red curve tells us. Still, there is a need for physical testing, but the amount has now been reduced.

The schematic view of the development process changes with the introduction of virtual testing; see the figure above. The curve for obtaining knowledge about the system is steeper at the beginning of the process. This corresponds to the possibility of performing virtual tests at an earlier stage. Knowledge obtained faster increases the effective work performed during the process, and the question now is: how can one get even more knowledge at an early stage?

One way is to increase the number of tests that are performed, and a virtual environment is an ideal place for performing tests in large numbers; see the figure below. A physical test bench is usually designed for a specific type of test; if one wants to do something outside its specification, one has to rearrange the setup or build a new test bench, which can be both expensive and time-consuming. With virtual testing, a new test bench can be just a few lines in a script away, which makes it simple to switch between test configurations and set up automated processes.
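As an illustration of what “a few lines in a script” can mean, here is a sketch of a parameterized virtual test bench in plain Python (the plant, controller and pass criterion are hypothetical stand-ins):

```python
# Sketch: a scripted virtual test bench that sweeps plant configurations automatically.
# Plant, controller and pass criterion are hypothetical stand-ins for real models.
import itertools

def simulate(mass, gain, steps=2000, dt=0.001, ref=1.0):
    """Toy closed loop: P controller with damping acting on a double integrator of given mass."""
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        force = gain * (ref - pos) - 0.5 * vel
        vel += dt * force / mass
        pos += dt * vel
    return pos

# The "test bench configuration" is just data: every combination becomes one virtual test.
masses = [0.5, 1.0, 2.0]
gains = [5.0, 10.0, 20.0]
for mass, gain in itertools.product(masses, gains):
    final_pos = simulate(mass, gain)
    status = "PASS" if abs(final_pos - 1.0) < 0.1 else "FAIL"
    print(f"mass={mass:>4}  gain={gain:>5}  final={final_pos:.3f}  {status}")
```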

A growing field within virtual testing is Model-Based Testing (MBT), where software algorithms are used to design the test cases, run the test procedures and analyze the results. These algorithms can automatically produce a substantial number of test cases and can even feed information from the results back into the process in order to create new and better test cases. An example is TestWeaver, which is described as playing chess with the system under test [1]:

Testing a system under test (SUT) is like playing chess against the SUT and trying to
drive it into a state where it violates its specification.

Most, if not all, applications presented so far have been introduced to benefit the software development. Plant models and separable software gave the developers access to virtual test benches. Would it also be possible to use virtual hardware models to improve the development of the physical hardware, as well as the software?

If the virtual hardware can be in place early in the process, it would be possible to test combinations of different components in virtual test benches and obtain early knowledge for both hardware and software developers. This will, of course, require much more detailed models of the hardware than exist today, including non-trivial behaviors and limitations that could be triggered from the virtual test environment.

Hardware models of fine granularity will benefit the development of both the hardware and the software. With a structure of common virtual test benches, into which both hardware and software teams deliver models, it will be possible to test the robustness of the systems in new ways. For example, how will the software react to signals coming from hardware components that are old and no longer functioning perfectly? Or, how should the hardware components be designed in order to withstand the large forces that can appear with rapid actions from the control algorithms? Being able to test these kinds of scenarios at an early stage will not only generate knowledge within the hardware and software teams themselves, but also bring the teams closer together, making it possible for them to find solutions together.

Model-Based Testing and virtual hardware are two examples of concepts that will increase the knowledge about the system at an early stage and decrease the need for expensive physical test environments.

[1]: https://www.qtronic.de/doc/TestWeaver_Modelica2008.pdf
