
Blog

The engineering needed to control a missile comprises many separate fields: control theory, aerodynamics, propulsion, material science, and so on. Here only control theory is discussed, and only a subset of that.

A typical architecture for missile guidance and control can be described as follows:

Figure 1. Missile guidance and control.

The target state is measured by sensor(s). This measurement, together with the missile state, is fed into the missile control system. This system can be split into two parts: guidance and autopilot. The guidance portion determines what the optimal maneuver is for the missile. The autopilot performs that maneuver by controlling the missile, typically with control surfaces such as rudders. This discussion considers the guidance, i.e. how the missile should maneuver in an optimal way. The sensors, autopilot and missile dynamics are assumed to be ideal, with no latency, noise or other issues.

Guidance principles and strategies

Even though many different strategies for missile guidance exist, they can be divided into two major ones to consider when designing modern missiles: Proportional Navigation and Command-to-Line-Of-Sight.

Proportional Navigation

Proportional Navigation (PN) is a guidance law that exploits the fact that two vehicles that have constant Line-of-Sight (LOS) to each other are on a collision course. In other words, if the LOS to the target does not rotate seen from the missile, it is on an intercept course. This has been known in shipping for hundreds of years and is used to avoid collisions between ships.

PN tries to achieve a constant LOS angle by accelerating the missile in the direction of the LOS rotation, thereby eliminating the rotation. Basically, the assumption is that the best guess on the future target trajectory is that it will continue its current course.

The guidance law in its simplest form can be described as:

$$
a_m =N\dot{\lambda} |V_c|
$$

\(a_m\): Commanded acceleration perpendicular to LOS
\(\dot{\lambda}\): rotation of LOS
\(|V_c|\): closing speed of missile relative to target
\(N\): navigation constant, design parameter

In a missile with an onboard target tracking system, such as a camera or an IR sensor, \(\dot{\lambda}\) is easily available. \(|V_c|\) can in some cases be measured but is often approximated. Since the missile should have a much higher speed than the target, a good approximation is the missile's own speed.

N is a design parameter, typically in the range 3-6 and always greater than 2. A high value guides the missile onto an intercept course faster but requires higher accelerations and makes the missile more sensitive to noise. In practice it is common to have a varying value of N, a low value at launch and a higher one when closer to the target.

Note that the range to the target is not needed. This is an important property that has contributed to the widespread use of this principle.
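As an illustration (not part of the original post), one planar PN step could look roughly like this in Python; the geometry handling, variable names and gain are assumptions for the sketch:

import numpy as np

def pn_acceleration(p_m, v_m, p_t, v_t, N=4.0):
    """Proportional Navigation sketch: a_m = N * lambda_dot * |V_c|,
    applied perpendicular to the LOS, in the direction of the LOS rotation."""
    r = p_t - p_m                       # LOS vector from missile to target
    v_rel = v_t - v_m                   # relative velocity
    # LOS rotation rate in 2D: (r x v_rel) / |r|^2
    lambda_dot = (r[0] * v_rel[1] - r[1] * v_rel[0]) / np.dot(r, r)
    closing_speed = -np.dot(r, v_rel) / np.linalg.norm(r)   # |V_c|, positive when closing
    a_mag = N * lambda_dot * closing_speed
    los_hat = r / np.linalg.norm(r)
    normal = np.array([-los_hat[1], los_hat[0]])             # unit vector perpendicular to LOS
    return a_mag * normal

Note that, as in the guidance law itself, the range only enters through the LOS direction; no absolute range measurement is required.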

The above guidance law cannot guarantee interception against a maneuvering target. That requires an extension of the guidance law where the target acceleration is considered, called Augmented Proportional Navigation (APN):

$$
a_m=N\dot{\lambda}|V_c|+\frac{Na_{\bot}}{2}
$$
\(a_{\bot}\):target acceleration normal to the LOS

The target acceleration can seldom be measured directly by a sensor and must be estimated, which can be difficult and introduces noise. Note also that the required acceleration of the missile is proportional to N. This means that choosing a more responsive missile, i.e. a high N, requires more acceleration from the missile.

 

Command-to-Line-Of-Sight

Command-to-Line-Of-Sight (CLOS) works by keeping the missile on the line seen from the sight towards the target. If the missile is closing on the target it will eventually intercept its path regardless of the target range.

Figure 3. Command-to-Line-of-sight. The missile is kept on the line between the sight and the target.

The resulting flight path is one where the missile leads the target more and more the closer it is to intercept. This can be seen in Figure 3 as an increasing angle between the LOS and missile velocity vector.

At launch the missile flies straight at the target, and near intercept the missile matches the target's angular velocity. This is intuitively a good strategy: at launch it is hard to predict where the target is heading and how it will maneuver in the future, so a good guess is its current direction. But closer to intercept, the target has little time left to maneuver and the intercept point can be predicted, i.e. use full lead angle.

Note that this guidance principle also doesn't require the range to the target. All that is needed is some way to measure the missile's position relative to the LOS as seen from the sight. Also, the missile is guaranteed to intercept the target if kept on the LOS, regardless of the target maneuver. The missile does not, in theory, need to maneuver more than the target, unlike with PN and APN.
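As a minimal sketch of the idea (a plain proportional controller on the cross-track error; this is an illustration, real CLOS implementations also add feedforward for the rotating LOS):

import numpy as np

def clos_acceleration(p_sight, p_target, p_missile, k_p=5.0):
    """CLOS sketch: accelerate the missile back onto the sight-target line."""
    los = p_target - p_sight
    los_hat = los / np.linalg.norm(los)
    rel = p_missile - p_sight
    # component of the missile position perpendicular to the LOS (cross-track error)
    cross_track = rel - np.dot(rel, los_hat) * los_hat
    return -k_p * cross_track          # push back towards the LOS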

Other guidance principles

Other principles than PN and CLOS exist. One that has been used is Pure Pursuit, which works by simply pointing the missile velocity vector straight at the target. Unless the target is stationary, or very close to stationary, this results in a missile trajectory that requires very high, in theory infinite, accelerations close to the target. The principle is chosen where simplicity is more important than performance, i.e. against targets with negligible movement.

Simulation

The properties of the principles can be examined by some simulation examples. In the simulations the missile has constant speed which is unrealistic, but it helps to clearly show the properties of the guidance principles. The missile is initialized with a velocity pointing straight at the target.

In the first scenario the target is travelling with a constant velocity, from right to left. CLOS and PN result in the following missile trajectories.

In Figure 4 it is clear that at launch PN accelerates towards a straight intercept course. In comparison, CLOS does not accelerate as much at launch, but its trajectory requires more acceleration closer to intercept with the target. For a target with a constant velocity, PN generally travels a shorter distance.

In the next simulation the target starts with a constant velocity (right to left), but will after some time do a 90° maneuver, and then continue with a constant velocity.

At launch and until the target maneuver (#1 – #2) the missiles behave as in the previous example. PN maneuvers towards an intercept course assuming the target will continue with a constant velocity. This results in a greater course change for PN during and after the target maneuver (#2 – #3), since the new predicted intercept course has moved. CLOS, on the other hand, did not fully commit to the intercept course before the target maneuver and requires smaller course corrections after it.

Note also that PN is not guaranteed to intercept the target during its maneuver, but CLOS is (in theory).

Both CLOS and extended PN are useful as guidance principles. Which one is optimal is, as always, a matter of how “optimal” is defined. CLOS is in practice only used for shorter ranges, since the target must be seen by the sight at all times.
Missiles using PN typically have the angular measuring sensor in the missile, which gives increasing accuracy and precision when closing on the target. CLOS has the sensor in the sight, which requires a better sensor to achieve acceptable performance because of the longer range to the target. However, a sensor in the missile needs to be small, cheap, and disposable, whereas a sensor in the sight can be designed with fewer compromises.

In practice the guidance principle is chosen based on a number of considerations, such as: what is the kinematic performance of the missile? What are the intended targets? What kind of sensors are available?

Path optimization in nature

In nature there exist several techniques used by predators to pursue their prey. The optimal strategy can vary depending on the goal. As for missile guidance, the chance of catching the prey can be optimized, but there can also be other optimization variables. Animals have evolved to detect motion; predators can therefore try to minimize their movement against the perceived background to limit the reaction time the prey has until it is caught (Zamani & Amador Kane, 2014; Mizutani, Chahl, & Srinivasan, 2003).

These techniques that have been shown to be used in nature are very similar to modern missile guidance laws[1].

When the background, e.g. trees and bushes, is sufficiently close, minimizing movement against that background is accomplished by staying on the line between a landmark and the prey. This is of course very similar to CLOS, where the sight is exactly behind the missile as seen from the target.

When the background is far away, as the sky is for birds attacking from above, the strategy instead becomes “Parallel Navigation” where the line between the predator and prey has a constant bearing.

The same strategy has also been seen with bats, where they keep a constant bearing towards their prey. But since bats hunt at night this is not to avoid detection but rather because it is an efficient strategy, and quite close to optimal for catching erratically moving insects  (Ghose, Horiuchi, Krishnaprasad, & Moss, 2006). Bats and their interaction with prey is interesting in many aspects. The echolocating sonar they use has made some of their prey evolve countermeasures against it, where they emit sound to “jam” the sonar. This has then caused the bats to evolve a more complex sonar to counteract the jamming. Compare this to the military techniques of ECM and ECCM.

Conclusion

Choosing the strategy for guiding an object to intercept another object is an interesting engineering problem. A theoretical analysis is useful for showing how the principles behave in practice. Choosing the “optimal” principle is only possible if there are stated goals and requirements, as well as known prerequisites and limitations.

Choosing PN as a guidance principle can seem optimal when looking at the problem under specific conditions, such as a target traveling with constant velocity. But when the target maneuvers, CLOS seems to be a better choice. PN can, however, be modified to Augmented PN, which may then give better performance.

The analysis can show the basic properties of the principles, but the final choice needs to consider all aspects, including such things as development cost and time. But this is all part of engineering!

References

Armstrong, R. E., Drapeau, M. D., Loeb, C. A., & Valdes, J. J. (2010). Bio-inspired Innovation and National Security.

Ghose, K., Horiuchi, T. K., Krishnaprasad, P. S., & Moss, C. F. (2006). Echolocating Bats Use a Nearly Time-Optimal Strategy to Intercept Prey. PLOS Biology.

Mizutani, A., Chahl, J. S., & Srinivasan , M. V. (2003). Motion camouflage in dragonflies. Nature.

Zamani, M., & Amador Kane, S. (2014). Falcons pursue prey using visual motion cues: new perspectives from animal-borne cameras. Journal of Experimental Biology.

Zarchan, P. (2013). Tactical and Strategic Missile Guidance.

[1] https://www.newscientist.com/article/dn3870-dragonfly-trick-makes-missiles-harder-to-dodge/

Read more

March came by, and brought about another PyCon event, this time in Bratislava. The range of topics presented was wide, as usual, spanning things like robotics, machine learning, operations, but also some of the more social aspects of software engineering, software projects in government, or even an entire track focused on education. Of course, we did not want to miss it and are happy to share some of the highlights here.

The opening talk of the conference was about Anvil, a platform making it possible to build interactive web applications entirely with Python, without having to deal with concepts like CSS, HTML, Javascript, or SQL. It seems like a really neat tool, especially when building an application with limited resources. On the other hand, it is not an open platform, which would make lock-in a concern.

Anton Caceres shared some of his insights on architectures based around micro-services, which has been a very popular trend this decade. An important point was that even when using micro-services, it is beneficial to share the same base stack among all of the services, such as the language, frameworks, discovery mechanisms, failover, etc. In that case they only need to be maintained once, rather than for each specific flavor separately. He also presented a number of common patterns, such as sidecar containers, ambassador, or a pattern which combines a service registry with a side-car to keep all services informed about each other.

The first day was wrapped up with a talk by Miroslav Šedivý, who went on a deep dive into tzdata, the time zone database, sometimes referred to as the Olson database, which aims to be a complete compilation of all the information about the world’s time zones since 1970. Among other things we learned that Czechoslovakia was the only country which had, in addition to the standard time, not just summer time, but also a third winter time one year, and that both the Czech Republic and Slovakia have inherited the law which makes it possible to declare a winter time again. The key takeaway from this talk was that whenever you are dealing with time zones, it is of utmost importance to use a library that is based on tzdata, such as pytz, since dealing with the ever-changing definitions of all the world’s time zones on your own is simply not feasible.
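As a small illustration of the point (the date and zone below are arbitrary examples, not from the talk):

from datetime import datetime
import pytz

bratislava = pytz.timezone("Europe/Bratislava")
# Localize a naive datetime; pytz consults tzdata for the correct UTC offset,
# including historical and DST changes for that zone.
meeting = bratislava.localize(datetime(2019, 3, 23, 9, 0))
print(meeting.isoformat())           # 2019-03-23T09:00:00+01:00
print(meeting.astimezone(pytz.utc))  # 2019-03-23 08:00:00+00:00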

On Saturday, a keynote talk was given by Honza Král, a former core developer of Django, in which he shared his insight on what skills are necessary in order to be a good software engineer. It is very common for people in technical fields, like data science, software engineering, or information security, to think technical skills are the key to success. This is also reinforced by the usual framing of “soft” vs. “hard” skills, which makes it easy for us to downplay the importance of the latter. After all, “soft” implies “fuzzy”, “non-exact”, and that is antithetical to the perceived exact nature of the field of software development.

However, we can implement the most brilliant piece of software, and it will not be worth much if we cannot explain that fact to other people in a polite, efficient way, and collaborate with each other. That is why it has been suggested to change the labels we apply to the different skills to, for example, technical and professional skills.

Next up, Ján Suchal, and Gabriel Lachmann gave a talk that was of particular interest to the Slovak audience members. The topic was IT projects in the Slovak government. For decades, the modus operandi was that the majority of government IT projects were defined in such a way, that there was exactly one supplier who could fulfill all the requirements, usually one with ties to people sitting in the government organization making the order. As a result, the typical project was way overpriced, delayed ad infinitum, and would rarely produce any usable result.

That is why several years ago, a group of professionals, who were tired of this, founded an NGO called Slovensko.Digital. They are lobbying to open up the processes, pushing for open access to data and government platforms, and highlighting any shady practices going on within the world of government IT. Ján and Gabriel presented their vision, some of their recent successes, and how members of the public can get involved. While the current situation is still far from perfect, things have improved somewhat over the past years, and there is yet hope for the Slovak government.

On Sunday, one of the speakers could not make it to the conference, so in order to fill the hole in the schedule, the organizers played back a recording of Kenneth Reitz’s talk from PyCon US 2018 about Pipenv. This was a very useful introduction for those of us in the audience who never took the time to look into Pipenv. This tool automates the tasks of keeping a list of direct dependencies, a list of all pinned transitive dependencies, and an up-to-date Python virtualenv. It is really nice how adding a new dependency, while maintaining all of the above, only takes a single short command. Not to mention that it also includes other bells and whistles, such as sanity checks that all direct dependencies are reflected in the pins to prevent deployments using inconsistent state, or automatic checks of dependencies against known security vulnerabilities.

As one of the last talks of the conference, Ingrid Budau gave us an introduction to pandas, a popular library often used in data science, and in machine learning to manipulate large data sets. She walked us through the basics of importing a dataset, the data types that pandas recognizes, and how to work with variants. Then Ingrid moved on to show us how pandas can be used to detect malformed input data by looking at rows with shifted values, how to deal with that, or how to fill in missing data.
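As a rough sketch of that kind of workflow (the file and column names are made up for illustration):

import pandas as pd

# Hypothetical dataset; file and column names are placeholders.
df = pd.read_csv("measurements.csv")

print(df.dtypes)                      # the data types pandas inferred per column
print(df[df["value"].isna()])         # rows with missing values

# Fill in missing data, e.g. with the column mean
df["value"] = df["value"].fillna(df["value"].mean())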

Read more

You have switched between testing, specifications and leadership in your assignments. Tell us more about this.
I really like to dig into the technology that makes products work (or not), but I also generate a lot of ideas on how to solve various problems. So, my assignments have varied between finding and analyzing the cause of problems to building tools and processes for aftermarket documentation, or leading activities aimed at test automation.

How has your career developed at Combine?
I had previously worked in the telematics area and aftermarket. At Combine I got into engineering IT and finally back to telematics, navigation specifically. I’m currently working as a product owner, which I suppose is a fancier title as well.

When Combine sends you to help a customer, what can the customer expect?
They can expect me to get deep into the technical workings of their products and to generate ideas for improvement. Nowadays I can add value to the customer as a solution architect or by leading development of tools and processes, typically for testing. In short, I often end up evolving my original assignments into something far more valuable for the customer; improved way of working.

What do you do outside work?
My family lives on a property with several buildings and large fields. We have horses and chickens.

I have recently installed solar panels and bought a Model 3, so I am looking at zero fuel costs with electricity to spare for other uses.
A property like that requires quite a lot of maintenance and renovation, but it also gives me the opportunity to ”not think about work” during my spare time. For instance, building a music studio or renovating an old car, or setting up a retro phone booth in the yard.

The engineers at Combine especially enjoyed my article on re-baking a surface mounted circuit board by heating it in the oven.
Having originally been a real ski bum going regularly to the Alps, I now frequently ride an Electric Unicycle (EUC) instead. Anyone is welcome to get in touch and give it a try!

I have also joined Hemvärnet and enjoy it quite a lot.

Read more

Autonomous systems are expected to have a significant impact on many aspects of our society. One of them, the transportation of goods and people, is already undergoing a profound transformation toward autonomous delivery. Combine is contributing to the HUGO Delivery Robot by Berge AB by developing advanced positioning systems to tackle a major challenge in the field – the area close to the destination, where small-scale adjustments and precise solutions are required for timely and efficient delivery.   

The idea of ordering takeaway from a local restaurant and having it quickly delivered by autonomous robots not only sounds like sci-fi come to life, it also promises gains in efficiency and safety. So why are our streets not yet filled with R2D2’s nimble cousins whizzing about with culinary delights or important packages? Unfortunately, there are still many challenges to be overcome, and localization is the most important among them. To solve it, we need accurate position and orientation data, a task we also face with the Hugo Delivery Robot.

There are many different sensors that can be used for localization, but they all have different advantages and drawbacks. The most crucial difference is the type of error they produce. The good news is that instead of causing overwhelming chaos, the differing nature of these error types can be used for cross-verification, like a high-frequency version of the cross-examination popularized by police procedurals, with sensors as criminals ratting each other out.

IMU
IMU sensors are a very common sensor type that can be found even in an ordinary phone or smartwatch. These sensors generally measure rotational velocity, linear acceleration and, via Hall sensors, the magnetic field. All these measurements are done by measuring capacitance and resistance at the silicon level. As depicted in the image below, the sensor structure is mostly dominated by loosely connected inertial masses, which shift their positions in response to sensor motion. By capturing the resulting changes in capacitance of the conductive wall elements, both linear acceleration and angular velocity can be measured.

 

Nesterenko, T., Koleda, A., & Barbin, E. S. (2018). Integrated microelectromechanical gyroscope under shock loads. IOP Conference Series: Materials Science and Engineering, 289, 012003. doi:10.1088/1757-899X/289/1/012003.

On the other hand, measuring the yaw/pitch/roll axes is trickier than measuring acceleration and velocity. The image below illustrates the problem: since the angle measurement depends on the Earth’s magnetic field B, an IMU sensor placed near other magnetic field sources such as electric motors might end up checking the motor’s condition instead of measuring angles.

Mathias Schubert, Philipp Kühne, Vanya Darakchieva, and Tino Hofmann, “Optical Hall effect—model description: tutorial,” J. Opt. Soc. Am. A 33, 1553-1568 (2016).

Since an IMU measures acceleration and angular velocity, this data has to be integrated over time to determine angle and position. As we can guess, no sensor is free of error, and all these errors accumulate during integration, causing the orientation and position data to drift with time, as depicted in the graph below. A common first measure to get rid of this error is to merge the data with another source characterized by a different error type using a Kalman Filter, a proven approach with a wealth of field testing and literature.

I. Sa and P. Corke, 100Hz Onboard Vision for Quadrotor State Estimation, Australasian Conference on Robotics and Automation, 2012.
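A minimal sketch of why naive integration drifts; the bias and noise figures below are made-up illustrative numbers, not measured values:

import numpy as np

dt = 0.01                                   # 100 Hz IMU samples
t = np.arange(0, 60, dt)                    # one minute of data
true_rate = np.zeros_like(t)                # the robot is actually not rotating
gyro = true_rate + 0.02 + 0.05 * np.random.randn(t.size)   # bias + noise (deg/s)

heading = np.cumsum(gyro) * dt              # naive integration of angular rate
print(heading[-1])                          # ends up around 1.2 degrees off after 60 s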

As highly accurate IMUs can be prohibitively expensive to employ in a robot fleet, reaching well into the 10k SEK range, a more sensible option is to buy low-end 9-axis sensors with an internal fusion algorithm. However, these sensor types are sensitive to the geographical location, since they use the Earth’s magnetic field for error correction. Typically, moving them by more than 100 km would require recalibration. Since we are not considering fighter jets or long-distance cargo, however, they are a perfect choice for delivery robots servicing a fixed local area.

Wheel Encoders
Wheel encoders are devices that measure the position or velocity of the wheels. These sensors produce very clean data, as can be seen in the image below.

 

Gougeon, O., & Beheshti, M. H. (2017). 2D Navigation with a Differential Wheeled Unmanned Ground Vehicle. doi:10.13140/RG.2.2.20876.16006.

Since the wheels have some slippage (especially in skid-steer vehicles), this data also drifts, at a rate determined by surface friction. This error is worse than the IMU error because the surface friction constant is very hard to determine, e.g. by using cameras or any other cheap sensor. A surface friction map of the environment would be an ideal, yet impractical solution. Another approach could involve using machine learning on a large-scale database of surface images and friction constants. As a more straightforward alternative, the wheel encoder data can also be fed into the Kalman Filter to be merged with the other data sources.
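For reference, a minimal dead-reckoning sketch for a differential-drive robot; the wheel base and the function signature are assumptions for illustration:

import numpy as np

def odometry_step(x, y, theta, d_left, d_right, wheel_base=0.5):
    """Dead-reckoning update from encoder distances (metres) for one time step.
    Wheel slippage is exactly what this model cannot see, which is why the
    estimate drifts on low-friction surfaces."""
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / wheel_base
    x += d_center * np.cos(theta + d_theta / 2.0)
    y += d_center * np.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta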

GPS Receiver
GPS is a ubiquitous technology inseparable from our daily lives. The GPS receiver needs the signal from at least four satellites, all of which have to be from the same constellation, to derive a position fix. Since the clock speed of the GPS receiver is limited, it can only measure the delay between different satellites very coarsely. The error characteristics of the receiver are shown in the error graph below. If we consider the world as an area of 5×5 meter squares, a low-end GPS receiver would only be able to determine the square it is located in. Were it located in an indoor environment, the resolution could degrade down to about 50 meters in each direction, since the satellite signals are attenuated when propagating through a medium like steel or concrete.

 

Bshara, M., Orguner, U., Gustafsson, F., & Van Biesen, L. (2012). Enhancing GPS Positioning Accuracy from the Generation of Ground-Truth Reference Points for On-Road Urban Navigation.

Kalman Filter
As mentioned above, the Kalman Filter is used to combine the advantages of individual sensors while suppressing their different errors. Thus, data that is accurate but imprecise can be combined with precise but inaccurate data to obtain the best of both worlds, a process known as sensor fusion. For example, the precise but inaccurate IMU data is corrected by the accurate but imprecise GPS position so that its drift can be reset once it travels outside the 5×5 meter grid or the absolute drift exceeds 2.5 meters on average. 
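A heavily simplified, one-dimensional sketch of that fusion idea (real implementations track several states and full covariance matrices; the noise values below are illustrative, with the GPS variance taken from the 2.5 m figure above):

def kalman_1d(x_est, p_est, velocity, gps_pos, dt, q=0.05, r_gps=2.5**2):
    """Predict the position with a velocity estimate (e.g. from IMU/encoders),
    then correct it with a noisy but drift-free GPS position."""
    # Prediction step: dead-reckoning, uncertainty grows by the process noise q
    x_pred = x_est + velocity * dt
    p_pred = p_est + q
    # Correction step: blend in GPS according to the relative uncertainties
    k = p_pred / (p_pred + r_gps)       # Kalman gain
    x_new = x_pred + k * (gps_pos - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new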

2.5 meters might seem like a very poor spatial resolution at first. However, when combined with the highly accurate data of wheel encoders, it is possible to achieve a positioning accuracy of 5 centimeters with a heading accuracy of 0.5 degrees for an autonomous robot. The results of applying a Kalman Filter to another fusion system are shown in the graph below. 

Wong, Alexander. (2019). Low-Cost Visual/Inertial Hybrid Motion Capture System for Wireless 3D Controllers. 

Point Cloud Data
Many other measures can help increase accuracy without the need for expensive sensors.  In our case, it was point cloud data which was generated by a stereo infrared camera. Stereo cameras are based on the same principle as human vision, in that they derive depth data by comparing the images produced by two different image sensors. Since the distance between the image sensors is a known parameter, frames can be easily processed to generate a point cloud (or depth data). 

 

In our specific case, the major problem was the rotation of the robot around its center axis. During this motion, its tires were folding under the wheels and acting like a spring. When the spring force exceeds the robot’s weight, the robot jumps by a few millimeters, causing a displacement which is not detected by the wheel encoders and which appears as noise to the IMU (because it is not a continuous motion but an impact).

To solve this, the features in the point cloud were used. The algorithm checks for any flat surface in the point cloud and compares the differences in sensor data – IMU, the wheel encoder as well as the flat surface position and angle – to correct the robot’s heading and position data. 
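A hedged sketch of the flat-surface part of that idea: fitting a plane to a point cloud patch by least squares so its normal and position can be compared between frames. The real pipeline presumably also does segmentation and outlier rejection:

import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an (N, 3) point cloud patch.
    Returns the plane normal and a point on the plane (the centroid)."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the normal
    _, _, vh = np.linalg.svd(points - centroid)
    normal = vh[-1]
    return normal, centroid

# Comparing the fitted normal and centroid between frames gives a correction
# that is independent of wheel slippage and IMU impact noise.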

This method did not derive the perfect result but improved our position and orientation data sufficiently while only marginally increasing the CPU load. The improvements point toward efficient implementations and accurate localizations, promising to soon bring a friendly R2D2 clone to your doorstep bearing delicious food! 

 

Read more

Basic physics for simulating fabrics

A common method to simulate cloth for computer graphics and the movie industry is to use a so-called spring and dampener system. This method is based on splitting the fabric up on a grid and simulating the movement of the cloth at each point (intersection) on the grid. By enforcing forces and constraints that act on these points you can simulate many different types of fabrics, and by using a finer grid resolution you get more natural and realistic simulations.

By varying different constants during the simulation you can simulate many different types of fabrics. In the example below we are going for a fairly stiff and smooth material.

Each point on the grid can be simulated using Newtonian physics to update the velocity and position of the points. However, in order for the grid to behave as a fabric and not just a collection of points, we need to add constraints and forces that act on these points. Typically this is done with the metaphor of springs and dampeners. We imagine that there exists a spring-like force between nearby points that applies a force to keep them at a set distance. Likewise, we imagine that there exist dampeners in addition to the springs that simulate friction, getting rid of the unwanted oscillations that one would otherwise get from a perfect spring.

Consider the grid with springs and dampeners in the illustration above. We can here define the force acting on each point as a function of the extension of the springs and the relative velocities of the two points. If we store the positions and velocities of the grid points in the arrays P and V, we can compute the forces that act on the points as follows. For the case of a single connection between grid cell [i,j] and [i+1,j]:
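A sketch of the standard spring-damper form (implementations differ in whether the direction vector is normalized):

$$
F_{[i,j]} = k_s \left( \left\lVert p_{[i+1,j]} - p_{[i,j]} \right\rVert - l \right) \hat{e} + k_d \left( v_{[i+1,j]} - v_{[i,j]} \right)
$$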

where \(k_s\) and \(k_d\) are the spring constant and dampening constant, respectively, \(l\) is the spring length at rest, and \(\hat{e}\) is the unit vector from point [i,j] towards [i+1,j].

In order to efficiently calculate the forces over the whole grid, we use array operations to perform the calculations above in a single step. In numpy this can be formulated as follows for the spring part of the calculation:


# skips the last point
pos1 = pos[:-1, j, :]
# skips the first point
pos2 = pos[1:, j, :]
# vector from pos1 to pos2
v12 = pos2 - pos1
# length of vector
r12 = np.linalg.norm(v12, axis=1)
f = (v12.T * (r12 - size)).T * ks
self.force[:-1, j, :] += f
self.force[1:, j, :] -= f

Note how the code above computes the force for all “horizontal” connections, and updates the forces on both points that are connected together.

To compute the updated positions and velocities we can use any of the standard physics integrators such as an Euler integrator (worst), Verlet integrator (better) or even a Runge-Kutta integrator (RK4, best). In our case we pick a simple Euler integrator since it uses a formulation that is easier to translate to Runge-Kutta RK4 in the end.

Structural forces

If we only consider connections between the four direct neighbours (left, right, up, down) then we will not get a very lifelike material. We need the grid to resist stretching in the plane of the object (structural forces), shearing forces, and bending forces.

The structural forces can be given directly with the four connections to the direct neighbours. These connections are illustrated with the black dashed lines below.

For the shearing forces, we need to add constraints along the diagonal, again by adding springs and optionally dampeners. This is illustrated by the red arrows below.

Finally, in order to prevent the fabric from bending too much, we add springs and optionally dampeners along connections of length 2. This allows the springs and dampeners to resist bending by adding a force whenever the distance between two such points is not exactly twice the resting length of the springs, which happens if and only if the grid is bent, thus creating a force that straightens the grid. These connections are illustrated by the green arrows below.

Note that the resting length of each spring is calculated from the length of the corresponding connection in the original grid, i.e. the diagonals have sqrt(2) times the length of the direct connections. By changing the resting length (and connectivity) of the connections we can alter the default shape of the object. By changing the spring constants and dampening constants we can alter the type of material that is simulated.
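As a sketch of how the three connection types could be enumerated (this bookkeeping is only an illustration, not the post's actual code):

import numpy as np

l = 0.1   # resting length of a direct connection (the grid spacing), illustrative value

# (di, dj, rest_length) for each connection type; each pair gets spring
# (and optionally damper) terms analogous to the i+1 code above
connections = (
    [(1, 0, l), (0, 1, l)]                                  # structural
    + [(1, 1, np.sqrt(2) * l), (-1, 1, np.sqrt(2) * l)]     # shear (diagonals)
    + [(2, 0, 2 * l), (0, 2, 2 * l)]                        # bend (length-2)
)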

Accelerating the simulation using TensorFlow

To speed up the calculations we will use TensorFlow to perform all array calculations. To get started with TensorFlow, take a look at one of the many tutorials that are available. In the code below we use the original TensorFlow method of first building up a graph that describes the calculations to perform, as opposed to using the slower TensorFlow eager mode that performs the calculations on the go.

We start by defining the position and velocities of the grid as tensorflow variables.

args = {"trainable": False, "dtype": tf.float32}
pos_t = tf.Variable(pos, name="pos", **args)
vel_t = tf.Variable(vel, name="vel", **args)

where pos and vel are pre-existing numpy arrays that contain the starting positions and starting velocity.

We define useful constants for the calculations such as step time, gravity etc. as tensor constants:

mass = tf.constant(1.0, name="mass")
dt = tf.constant(2e-3, dtype=tf.float32, name="dt")
gravity = tf.constant(np.array([0.0,-9.81,0.0]), dtype=tf.float32, name="g")
size = tf.constant(0.1, dtype=tf.float32, name="size")
...

We can calculate the internal forces that act on the cloth by converting the elementwise operation from before into tensor operations. We accumulate all the different force calculations into a list of tensors, and perform a final summation of them as a last step.


forces = []

# Spring forces for i+1
pos1 = pos_t[:-1, :, :]
pos2 = pos_t[1:, :, :]
v12 = pos2 - pos1
r12 = tf.norm(v12, axis=2)
f = (v12 * tf.expand_dims((r12 - size), axis=2)) * ks
f_before = -tf.pad(f, tf.constant([[1,0],[0,0],[0,0]]))
f_after = tf.pad(f, tf.constant([[0,1],[0,0],[0,0]]))
forces.append(f_before)
forces.append(f_after)

# Dampening for i+1
vel1 = vel_t[:-1, :, :]
vel2 = vel_t[1:, :, :]
f = (vel2 - vel1) * kd
f_before = -tf.pad(f, tf.constant([[1,0],[0,0],[0,0]]))
f_after = tf.pad(f, tf.constant([[0,1],[0,0],[0,0]]))
forces.append(f_before)
forces.append(f_after)

total_force = tf.add_n(forces)

Similarly to the example above we add the calculations for:

  • spring forces for neighbours on the same row/column: i+1, i+2, j+1, j+2,
  • spring forces for neighbours on diagonals: (i+1, j+1), and (i-1, j+1)
  • dampening forces for direct neighbours: i+1, j+1

We can also add collision forces with a ball and the ground in order to make for a more interesting simulation:

ball_center = tf.constant(np.array([0,0,0]), dtype=tf.float32, name="ball")
ball_radius = tf.constant(1.0, dtype=tf.float32, name="rad")
V = pos_t - ball_center
r = tf.norm(V, axis=2)
p = tf.maximum(tf.constant(0, dtype=tf.float32, name="0"), ball_radius - r)
r = tf.reshape(r, r.shape.dims+[tf.Dimension(1)])
p = tf.reshape(p, p.shape.dims+[tf.Dimension(1)])
force = V / r * p * tf.constant(1e5, dtype=tf.float32)

Note how we can reformulate the problem of collision detection into computing how far into the ball a point is and apply an outwards force, using a MAX operation to ensure that points that are not inside the object (ball_radius - r is negative) are not affected. This allows us to do collision detection without any conditional operators which would have been slow otherwise.

Finally, we add collisions with the ground in a similar manner and add gravity that affects all points.

The update step is done as a naive Euler implementation (for now).


self.delta_vel = self.force4 * self.dt / mass
self.delta_pos = self.vel_t * self.dt

self.step = tf.group(
self.vel_t.assign_add(self.delta_vel),
self.pos_t.assign_add(self.delta_pos))

When generating the animation below we alternate between a number of physics steps and extracting the position data from GPU memory to CPU memory and visualising the mesh:

for i in range(100): session.run(self.step)
self.pos = self.pos_t.eval(session=session)
self.draw()

If we were to use a more advanced integrator such as RK4 we could have a larger step size without introducing instabilities into the simulation. That would overall serve to speed up the simulation and allow us to get away with fewer calls to TensorFlow.
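For reference, a generic RK4 step looks like this (the state packing and force evaluation are left abstract; using it for the cloth requires re-evaluating all forces at the intermediate states):

def rk4_step(f, y, t, dt):
    """One fourth-order Runge-Kutta step for dy/dt = f(t, y); y can be any array-like state."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)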

Execution time

To see how effective our TensorFlow implementation was, we measured the time per single update step, i.e. calculating the forces and updating the velocities and positions one time. We measure this time as a function of the total array size (width * height), since we know that the total execution time scales with the total number of points, i.e. the square of the grid resolution.

As we can see in the graph below, we have a fairly large constant time offset, which means that the number of points barely has any effect below 40,000 points (200 × 200). This is most likely caused by the time needed to start the GPU kernels containing the calculations above, which limits how useful this method is when simulating many smaller objects.

Read more

As I mentioned in previous blog posts, Combine is expanding to Stockholm and we have now started the initiative to open an office.

Possibilities
Last week my colleague Peter and I visited Stockholm for interviews and customer meetings. We had some very interesting meetings regarding circular economy, autonomous drive and AI for Cleantech that we hope will lead to projects or prototype platforms. Stay tuned for more information in upcoming posts.

Job posts
We are still going through applications for the positions as Head of Stockholm, Data Scientist and Control Systems Engineer, so visit our homepage and apply!

We will start by recruiting suitable engineers followed by the manager, so the engineers might have the possibility to be part of the hiring process of their manager.

Office
Regarding the office space, we aim to find an office near the central station. The main reason is that we want to decrease the need for transportation by car between offices.

Competence needed
There seems to be a big need for experience working with GPUs, Machine Learning, Deep Learning and AI. We also see the possibility to package solutions that we can deliver as projects from our office instead of engineers on site. We prefer building our business with both assignments on site and solutions.

Being honest
Finally, I would like to highlight an issue that is not linked to Stockholm but is something I feel is important for our profession.

When we presented Combine and our services we were well received. The fact that we focus on technology and how we can help our customers differs quite a lot from suppliers that only consider business possibilities without taking good partnership or customer success into account. Some of the customers were surprised that we were more interested in getting things right than finding an assignment here and now.

We prefer being honest, doing the right thing, delivering quality over time and focusing on people; and we believe that this way of working will lead to success. It is also good for the soul 😊

So, I’d like to end this blog post by Combine’s motto:

ENTER THE NEXT LEVEL
“Our vision is to enhance engineering organizations in the world. Enter the next level is our way of expressing this, by helping our clients reach a higher level in their business. Our success comes with the success of our clients.”

Finally,
Thank you for reading.
Erik Silfverberg

CEO, Combine Control Systems AB

Read more

Modelica is a non-proprietary language for object-oriented, equation-based modeling maintained by the Modelica Association. Using Modelica, complex models can be built by combining components from different domains such as mechanical, electrical, thermal and hydraulic. There are many libraries, both public and commercial, for modeling various types of systems. Modelica models can be built and simulated using a wide range of tools, both commercial and free of charge.

Here a model of a residential house will be built using the public Modelica Buildings Library and the open source modeling and simulation environment OpenModelica.

The house that we are modeling is a one-story gable roof house with a solid ground floor. The model of the house will contain:

  • the envelope of the house
  • two air volumes, the residential area and the attic, separated by the internal ceiling
  • the interior walls of the house lumped into one wall
  • a solid ground floor with underfloor heating
  • a ventilation system with heat recovery
  • a fan coil unit

The heat transfer between the house and the environment is modeled using heat conduction and heat convection. The environment is described by the air temperature, wind speed and wind direction. Since we include the wind direction in the model we need to take the orientation of the outside walls into consideration and cannot lump all walls into one. So first a model of an exterior wall is created that consists of three models from the Buildings Library:

  • HeatTransfer.Convection.Exterior extConv, a model of exterior convection that takes wind speed and direction into account
  • HeatTransfer.Conduction.MultiLayer cond, a model of conduction through a multi-layer construction
  • HeatTransfer.Convection.Interior intConv, a model of interior convection

The inputs to the model are the outdoor conditions, and the interaction with the indoor air is through the heat port, port. The parameters of the model are the area of the wall, the azimuth of the wall and the construction of the wall. The construction of the wall is specified as an instance of Buildings.HeatTransfer.Data.OpaqueConstructions.Generic with the materials of each layer. Each material specifies the layer thickness and the material properties such as density, thermal conductivity and specific heat capacity; the number of states in the spatial discretization of each layer can also be specified. Similar models are created for the roof and the interior ceiling.

Now a model of the house can be put together using the created models. First the materials and constructions need to be specified for the different constructions, below is an excerpt of the Modelica code that shows the definition of the exterior wall construction:

constant Buildings.HeatTransfer.Data.Solids.Brick brickWall(x = 0.12);

constant Buildings.HeatTransfer.Data.Solids.InsulationBoard insulationWall(x = 0.10);

constant Buildings.HeatTransfer.Data.Solids.GypsumBoard gypsum(x = 0.013);

constant Buildings.HeatTransfer.Data.OpaqueConstructions.Generic wallLayers(nLay = 3, material = {brickWall, insulationWall, gypsum});

The air in the residential area and the attic are modeled using a Buildings.Fluid.MixingVolumes.MixingVolume which has a heat port and a variable number of fluid ports.

Now the various sub-models can be connected for the envelope and the interior air volumes. The heat ports of the wall, roof and ceiling segments are connected to the air volume that they are facing, and the outdoor conditions are connected to an external input to the house model.

Next the floor with underfloor heating and an input for internal heat load disturbances are added. The underfloor heating is modeled by inserting a prescribed heat flow between two layers of the floor, and the internal heat load is modeled by connecting a prescribed heat flow to the indoor air. The floor is connected to the ground, which is set to a prescribed temperature of 10 °C.

 

The ventilation system that provides the house with fresh air is modeled using an exhaust fan, a heat exchanger and a fluid boundary with a variable temperature connected to the outdoor temperature. The exhaust fan is modeled using a Buildings.Fluid.Movers.FlowControlled_m_flow with a constant mass flow rate determined by the specified air replacement time, typically 2 hours. To recover heat from the exhaust air a heat exchanger is used, modeled by Buildings.Fluid.HeatExchangers.ConstantEffectiveness with an efficiency of 80%. The ventilation system is connected to two fluid ports of the indoor air volume.
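As a rough sketch of that constant flow (the house volume of 300 m³ and the air density are illustrative assumptions, not values taken from the model):

$$
\dot{m} = \frac{\rho_{air} V_{house}}{t_{replace}} \approx \frac{1.2\ \mathrm{kg/m^3} \cdot 300\ \mathrm{m^3}}{2\ \mathrm{h}} \approx 0.05\ \mathrm{kg/s}
$$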

In a similar way the fan coil unit is modeled using a flow-controlled fluid mover, but instead of a heat exchanger a Buildings.Fluid.HeatExchangers.HeaterCooler_u is used, with a specified max power of 4 kW. The mass flow rate of the fan is set as a function of the requested power, starting from ¼ of the max flow at zero requested power and rising to the max flow at the maximum requested power.
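Expressed as a formula (a restatement of the sentence above, with \(u \in [0, 1]\) denoting the normalized requested power):

$$
\dot{m}_{fan}(u) = \dot{m}_{max}\left(\tfrac{1}{4} + \tfrac{3}{4}u\right)
$$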

Then temperature sensors, an energy usage calculation and the corresponding outputs are added, and the model of the house is complete.

Now we can use the house for simulation. First, we build a model for simulating the open loop responses to different inputs.

To get some understanding of how the house responds to the outdoor conditions and the different heating systems, step responses are performed at an operating point where the outdoor temperature is 10 °C, the wind speed is 0 m/s, the wind direction is north, and the indoor temperature is 22 °C. Four step responses are simulated:

  • The outdoor temperature is raised to 11 °C
  • The wind speed is increased to 10 m/s
  • The fan coil power is increased by 200 W
  • The floor heating power is increased by 200 W

The figure shows that all step responses settle in about the same time and reach a steady state in 1000 h, about 42 days. However, the initial transients of the steps are quite different, and it can also be seen that the fan coil unit raises the indoor temperature slightly more than the floor heating at 200 W. To make further comparisons, the normalized step responses are plotted in two different time scales.

The plot of the normalized step responses in the 1000 h time scale confirms that the time to reach steady state is about the same. The plot showing the first 24 h of the step responses shows that a change in the outdoor temperature or fan coil power initially changes the indoor temperature very quickly. This is because they are directly connected to the indoor air volume: the outdoor temperature through the ventilation system, and the fan coil power through the heater that heats the air that is blown through it.

Studying the step responses, the following conclusions can be drawn.

  • Heating the house with a fan coil unit is more energy efficient if only the indoor temperature is considered; using floor heating, more energy is lost to heat transfer to the ground, but a warm floor may give a higher perceived comfort for the occupants of the house.
  • If it is desired to keep the indoor temperature close to a specified setpoint at all times this can only be achieved using a fan coil unit.

This model of a house is not modeling all aspects of a real building, for instance there are no windows or doors in the building envelope and radiation heat transfer between the building and the environment is not modeled. This means that absolute energy performance calculations using this model may be inaccurate. However, the model can be used to evaluate different control strategies with respect to control and energy performance.

 

Read more

What’s your background story?

I moved to Linköping when I started studying Engineering Physics and Electrical Engineering which was some years ago. I found an interest in control systems, so much in fact that I spent three years after my initial studies to earn myself a licentiate degree in that field. I was eager to put my acquired knowledge to practical use and started working for a consulting company in the region. At that time, Combine didn’t have an office in Linköping.

How is it you came in to work at Combine?

Well, the first time I noticed Combine was a job advertisement in a newspaper. I think it was in Ny Teknik, but this was a long time ago. Although it sounded great they only had offices in Lund and Gothenburg at the time and moving was out of the question. But when I saw that they were opening a new office in Linköping I contacted them and here we are now.

What was it that sounded so great about Combine?

Combine is very focused on the fields in which they work, that is control systems and data science. The thing that caught my immediate interest in the advertisement was that they really pinpointed the field of control systems, which I haven’t seen any other company do in the same way. When I looked at the qualifications, I felt that everything matched me perfectly. Now that I work for Combine I can only agree with my initial feeling: instead of being the biggest, Combine focuses on being the best.

You have experience of working as a consultant for a long time, and in different consulting companies. You are also very appreciated by our clients. What would you say is your success formula?

I don’t have a formula or a good answer for that matter, maybe I’m just suited to being a consultant. I like to learn new things and to face new challenges. I also feel a need to get an overview of the things I work with right away, so that I can contribute as fast as possible. I guess the social parts also contribute when it comes to being a good consultant.

At the moment you work part-time while also being on parental leave. How do you handle that?

Yes, me and my wife are blessed with two fantastic children and until the youngest will start preschool I only work half of the week. It hasn’t been a big issue since the client I am working for is very understanding. I try to repay the favor by being as productive as possible when I work.

Without mentioning the client, we can state that you work in the automotive business. Do you have a favorite car?

Not really. Thanks to my kids I would say the dumper Chuck in the animated series The Adventures of Chuck and Friends.

Read more

How did you learn about Combine?

It was just after I finished my PhD in physics at the University of Geneva, Switzerland. During this time I was looking for a job in Sweden in order to move here. It was actually at Charm, the job fair at Chalmers University, where I more or less stumbled into the booth of Combine. My interest in Combine was immediately awakened when talking to my future colleagues about the technical problems which needed to be solved.

Which of your skills acquired during your PhD are you using in your daily work life?

Since my PhD was about the very fundamental physical properties of novel materials, those parts are not at all important for my daily work life. It is more the broad mathematics and physics knowledge, as well as the secondary skills you acquire during a PhD, that I am using during my work day, e.g. programming experience and data analysis skills. During my academic career I got very interested in developing my own data analysis tools and in optimizing our algorithms. When I came to the end of my PhD I was sure I wanted to continue in this direction, but working in industry.

Tell us about the different projects you were working on at Combine?

I started helping out with the development of Combine’s own data analysis tool “Sympathy for Data”, which I wish I had known about during my PhD. I believe it would have saved me many hours of developing my own scripts and tools over and over again. I also like the visual representation to quickly grasp and structure a workflow. Furthermore, it appealed to me to contribute to open source software. (Editor’s note: you can read more about Sympathy for Data here.)

Then followed a smaller project implementing a server application, before I started a two-year on-site project at one of our customers, designing and helping implement a framework for automated end-to-end verification. This last project was very challenging on many levels, from learning the customer’s needs and designing the system from the ground up, to fighting for the right resources. But I am a person who likes a good challenge and uses it to grow. I believe I succeeded and left the group in a good place before I started my new role as group manager of Data Science Solutions in our Gothenburg office.

How do you see the future of your new group?

There are two things which are very important to me and to Combine in general. Firstly, I want to provide our customers with the right solution, meaning quality and usefulness. And secondly, I want to provide a great working environment for our consultants, where they have the possibility to grow professionally and personally. I strongly believe that sharing knowledge between our on-site and in-house consultants will boost our capabilities to provide our customers with the right and complete solution.

Read more

Combine has set out on a journey of adventures, exploring different industries with the following question in mind: how can we utilize the competence at Combine to help our customers “Enter the next level”?

In this blog post, we are exploring the retail business together with NIMPOS.

NIMPOS is a Swedish company offering a revolutionarily simple and safe point-of-sale system suitable for both large and small companies thanks to its scalability. A full description of NIMPOS and their products can be found here. Having access to more and more data from transactions, NIMPOS is asking Combine for guidance on how to utilize the stored transaction data to help their customers enhance their business.

Combine develops and maintains a free and open source data analysis tool called Sympathy for Data. Sympathy is a software platform for data analysis built upon Python. It hides the complexity of handling large amounts of data in the data analysis process, which enables the user to focus on what is really important.

The first step in any data analysis task is getting access to the data. After creating a VPN connection, the data from the NIMPOS database is easily imported into Sympathy by utilizing its powerful import routines.

Some of the data we got access to:

  • Reference ID (one per transaction)
  • Article ID
  • Article Name
  • Transaction Date
  • Quantity (number of sold articles)
  • Article Price

Now, with the data imported, the powerful data processing capabilities of Sympathy are at our fingertips. The data is first preprocessed to filter out missing and unreasonable data, after which the analysis can start.

A few analyses have been implemented:

  1. Predicting the increase in the number of customers.
  2. Expected number of sold articles together with confidence bounds.
  3. Customer intensity variation
    • For each day of the week
    • Hour-by-hour for each weekday

An overview of the flow is presented in the figure.
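As an illustration of what the intensity analyses boil down to, here is a rough pandas sketch; the actual processing is done in Sympathy nodes, and the file and column names below mirror the data list above but are assumptions:

import pandas as pd

df = pd.read_csv("transactions.csv", parse_dates=["TransactionDate"])

# Count unique transactions (reference IDs) per weekday and per weekday/hour
per_weekday = df.groupby(df["TransactionDate"].dt.day_name())["ReferenceID"].nunique()
per_hour = df.groupby([df["TransactionDate"].dt.day_name(),
                       df["TransactionDate"].dt.hour])["ReferenceID"].nunique()
print(per_weekday)
print(per_hour)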

Sadly we do not have any information to connect an individual transaction to unique customers, and no other customer features are available, e.g. age or sex, and this narrows down the possible analyses.

This post is the first in a series, where we have laid the ground for upcoming posts. We introduced the reader to the problem, some of the data, the tools, and a few analyses implemented.

In one of the upcoming posts, we will showcase the possibilities of connecting the strengths of Sympathy for Data, for processing and analyzing data, together with the interactive reporting made possible by Sympathy web services.

Stay tuned and don’t miss out on future posts. In the meantime, I suggest you read earlier posts or download Sympathy for Data and start playing around with some example flows. You won’t regret it!

Read more