Blog

You have switched between testing, specifications and leadership in your assignments. Tell us more about this.
I really like to dig into the technology that makes products work (or not), but I also generate a lot of ideas on how to solve various problems. So, my assignments have ranged from finding and analyzing the causes of problems, to building tools and processes for aftermarket documentation, to leading activities aimed at test automation.

How has your career developed at Combine?
I had previously worked in the telematics area and aftermarket. At Combine I got into engineering IT and finally back to telematics, navigation specifically. I’m currently working as a product owner, which I suppose is a fancier title as well.

When Combine sends you to help a customer, what can the customer expect?
They can expect me to get deep into the technical workings of their products and to generate ideas for improvement. Nowadays I can add value to the customer as a solution architect or by leading the development of tools and processes, typically for testing. In short, I often end up evolving my original assignments into something far more valuable for the customer: an improved way of working.

What do you do outside work?
My family lives on a property with several buildings and large fields. We have horses and chickens.

I have recently installed solar panels and bought a Model 3, so I am looking at zero fuel costs with electricity to spare for other uses.
A property like that requires quite a lot of maintenance and renovation, but it also gives me the opportunity to ”not think about work” during my spare time, for instance by building a music studio, renovating an old car or setting up a retro phone booth in the yard.

The engineers at Combine especially enjoyed my article on re-baking a surface-mounted circuit board by heating it in the oven.
I used to be a real ski bum, going regularly to the Alps; these days I frequently ride an electric unicycle (EUC) instead, and anyone is welcome to get in touch and give it a try!

I have also joined Hemvärnet (the Swedish Home Guard) and enjoy it quite a lot.

Read more

Autonomous systems are expected to have a significant impact on many aspects of our society. One of them, the transportation of goods and people, is already undergoing a profound transformation toward autonomous delivery. Combine is contributing to the HUGO Delivery Robot by Berge AB by developing advanced positioning systems to tackle a major challenge in the field – the area close to the destination, where small-scale adjustments and precise solutions are required for timely and efficient delivery.   

The idea of ordering takeaway from a local restaurant and having it quickly delivered by autonomous robots not only sounds like sci-fi come to life, it also promises gains in efficiency and safety. So why are our streets not yet filled with R2D2’s nimble cousins whizzing about with culinary delights or important packages? Unfortunately, there are still many challenges to be overcome, and localization is the most important among them. To solve it, we need accurate position and orientation data, a task we also face with the HUGO Delivery Robot.

There are many different sensors that can be used for localization, but they all have different advantages and drawbacks. The most crucial difference is the type of error each of them produces. The good news is that, instead of causing overwhelming chaos, the differing nature of these error types can be used for cross-verification – like a high-frequency version of the cross-examination popularized by police procedurals, with sensors as criminals ratting each other out.

IMU
IMU sensors are a very common sensor type, found even in an ordinary phone or smartwatch. They generally measure rotational velocity, linear acceleration and, via Hall sensors, the magnetic field. All these measurements are done by measuring capacitance and resistance at the silicon level. As depicted in the image below, the sensor structure is mostly dominated by loosely connected inertial masses, which shift their positions in response to sensor motion. By capturing the resulting changes in capacitance of the conductive wall elements, both linear acceleration and angular velocity can be measured.

 

Nesterenko, T., Koleda, A. & Barbin, E. S. (2018). Integrated microelectromechanical gyroscope under shock loads. IOP Conference Series: Materials Science and Engineering, 289, 012003. doi:10.1088/1757-899X/289/1/012003.

On the other hand, measuring the yaw/pitch/roll angles is trickier than measuring acceleration and angular velocity. The image below illustrates the problem – since the angle measurement depends on the Earth’s magnetic field B, an IMU placed near other magnetic field sources, such as electric motors, might end up monitoring the motor’s condition instead of measuring angles.

Mathias Schubert, Philipp Kühne, Vanya Darakchieva, and Tino Hofmann, ”Optical Hall effect—model description: tutorial,” J. Opt. Soc. Am. A 33, 1553-1568 (2016).

Since an IMU measures acceleration and angular velocity, these signals have to be integrated over time to obtain orientation and position. As one can guess, no sensor is free of error, and these errors accumulate during integration, causing the orientation estimate to drift with time, as depicted in the graph below. A common first measure to get rid of this error is to merge the data with another source characterized by a different error type using a Kalman Filter, a proven approach with a wealth of field testing and literature behind it.
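As a toy illustration of the drift effect (not the HUGO code), integrating a rate-gyro signal with a small constant bias makes the heading error grow without bound, even for a sensor that is standing perfectly still. The bias and noise values below are made up for the example:

import numpy as np

dt = 0.01                                   # 100 Hz sample rate
t = np.arange(0, 600, dt)                   # ten minutes of data
true_rate = np.zeros_like(t)                # the sensor is not moving at all
bias = np.deg2rad(0.05)                     # assumed 0.05 deg/s gyro bias
noise = np.random.normal(0, np.deg2rad(0.1), t.shape)
measured_rate = true_rate + bias + noise
heading = np.cumsum(measured_rate) * dt     # naive integration of the gyro
print("heading drift after 10 minutes: %.1f degrees" % np.rad2deg(heading[-1]))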

I. Sa and P. Corke, 100Hz Onboard Vision for Quadrotor State Estimation, Australasian Conference on Robotics and Automation, 2012.

Highly accurate IMUs can be prohibitively expensive for a robot fleet, reaching well into the 10k SEK range, so a more sensible option is to buy low-end 9-axis sensors with an internal fusion algorithm. However, these sensors are sensitive to geographical location, since they use the Earth’s magnetic field for error correction; typically, moving them by more than 100 km requires recalibration. Since we are not considering fighter jets or long-distance cargo, however, they are a perfect choice for delivery robots servicing a fixed local area.

Wheel Encoders
Wheel encoders are devices that measure the position or velocity of the wheels. They produce very clean data, as can be seen in the image below.

 

Gougeon, O. & Beheshti, M. H. (2017). 2D Navigation with a Differential Wheeled Unmanned Ground Vehicle. doi:10.13140/RG.2.2.20876.16006.

Since the wheels have some slippage (especially on skid-steer vehicles), this data also drifts, at a rate governed by surface friction. This error is worse than the IMU error because it is very hard to determine the surface friction constant, e.g. by using cameras or any other cheap sensor. A surface friction map of the environment would be an ideal, yet impractical, solution. Another approach could involve machine learning on a large-scale database of surface images and friction constants. As a more straightforward alternative, the wheel encoder data can simply be fed into the Kalman Filter and merged with the other data sources.
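To make the drift mechanism concrete, here is a minimal dead-reckoning sketch for a differential-drive robot (the tick count, wheel radius and wheel base are illustrative values, not HUGO’s). Any wheel slip goes straight into the integrated pose and is never corrected:

import numpy as np

TICKS_PER_REV = 2048           # encoder resolution (assumed)
WHEEL_RADIUS = 0.08            # metres (assumed)
WHEEL_BASE = 0.40              # distance between the wheels, metres (assumed)

def update_pose(x, y, theta, d_ticks_left, d_ticks_right):
    """Integrate one encoder sample into the pose estimate (x, y, theta)."""
    d_left = 2 * np.pi * WHEEL_RADIUS * d_ticks_left / TICKS_PER_REV
    d_right = 2 * np.pi * WHEEL_RADIUS * d_ticks_right / TICKS_PER_REV
    d_center = (d_left + d_right) / 2.0          # distance travelled by the robot
    d_theta = (d_right - d_left) / WHEEL_BASE    # change in heading
    x += d_center * np.cos(theta + d_theta / 2.0)
    y += d_center * np.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta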

GPS Receiver
GPS is a ubiquitous technology inseparable from our daily lives. A GPS receiver needs signals from at least four satellites, all from the same constellation, to derive its position. Since the clock speed of the receiver is limited, it can only measure the signal delays from the different satellites rather coarsely. The error characteristics of the receiver are shown in the error graph below. If we consider the world as an area of 5×5 meter squares, a low-end GPS receiver would only be able to determine which square it is located in. Indoors, the resolution can degrade to about 50 meters in each direction, since the satellite signals are attenuated when propagating through materials like steel or concrete.

 

Bshara, M., Orguner, U., Gustafsson, F. & Van Biesen, L. (2012). Enhancing GPS Positioning Accuracy from the Generation of Ground-Truth Reference Points for On-Road Urban Navigation.

Kalman Filter
As mentioned above, the Kalman Filter is used to combine the advantages of the individual sensors while suppressing their different errors. Data that is accurate but imprecise can thus be combined with data that is precise but inaccurate to obtain the best of both worlds, a process known as sensor fusion. For example, the precise but inaccurate IMU data is corrected by the accurate but imprecise GPS position, so that the IMU drift can be reset once the robot travels outside its 5×5 meter grid cell or the absolute drift exceeds 2.5 meters on average.
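A minimal one-dimensional sketch of this fusion (for illustration only, not the filter running on HUGO): the IMU acceleration drives the prediction step, the coarse GPS position drives the correction step, and the covariances Q and R (values below are assumptions) decide how much each source is trusted.

import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition, state = [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])       # how an IMU acceleration enters the state
H = np.array([[1.0, 0.0]])                # the GPS measures position only
Q = np.eye(2) * 1e-4                      # process noise: IMU bias and noise (assumed)
R = np.array([[2.5**2]])                  # measurement noise: ~2.5 m GPS std dev (assumed)

def predict(x, P, accel):
    """Propagate the state using the IMU acceleration."""
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    return x, P

def correct(x, P, gps_position):
    """Correct the prediction with a (coarse) GPS position fix."""
    y = np.array([[gps_position]]) - H @ x        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros((2, 1)), np.eye(2)
x, P = predict(x, P, accel=0.3)                   # frequent IMU updates...
x, P = correct(x, P, gps_position=1.2)            # ...and occasional GPS fixes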

2.5 meters might seem like a very poor spatial resolution at first. However, when combined with the highly accurate data of wheel encoders, it is possible to achieve a positioning accuracy of 5 centimeters with a heading accuracy of 0.5 degrees for an autonomous robot. The results of applying a Kalman Filter to another fusion system are shown in the graph below. 

Wong, Alexander. (2019). Low-Cost Visual/Inertial Hybrid Motion Capture System for Wireless 3D Controllers. 

Point Cloud Data
Many other measures can help increase accuracy without the need for expensive sensors. In our case, it was point cloud data generated by a stereo infrared camera. Stereo cameras are based on the same principle as human vision, in that they derive depth by comparing the images produced by two different image sensors. Since the distance between the image sensors is a known parameter, the frames can easily be processed into a point cloud (or depth map).
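The underlying relation is simple: with focal length f (in pixels) and baseline B (the distance between the two image sensors), the depth of a pixel is Z = f·B/d, where d is the disparity between the two images. A small sketch of that conversion, with made-up focal length and baseline rather than the actual camera’s parameters:

import numpy as np

def disparity_to_depth(disparity, focal_px=640.0, baseline_m=0.05):
    """Convert a disparity map (in pixels) to a depth map (in metres).
    focal_px and baseline_m are illustrative values, not the HUGO camera's."""
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full_like(disparity, np.inf)   # zero disparity means "infinitely far away"
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth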

 

In our specific case, the major problem was the rotation of the robot around its center axis. During this motion, the tires fold under the wheels and act like springs. When the spring force exceeds the robot’s weight, the robot jumps by a few millimeters, causing a displacement that is not detected by the wheel encoders and is treated as noise by the IMU (because it is an impact rather than a continuous motion).

To solve this, features in the point cloud were used. The algorithm looks for flat surfaces in the point cloud and compares the differences between the sensor data – the IMU, the wheel encoders, and the position and angle of the flat surface – to correct the robot’s heading and position.
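A sketch of the flat-surface idea (a simple least-squares plane fit; the production algorithm may differ): the normal of the fitted plane gives a stable orientation reference that can be compared against the IMU and wheel-encoder estimates.

import numpy as np

def fit_plane(points):
    """points: (N, 3) array of point cloud samples belonging to one surface.
    Returns (centroid, unit normal) of the best-fit plane in the least-squares sense."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # The plane normal is the direction of least variance, i.e. the right
    # singular vector belonging to the smallest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)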

This method did not give a perfect result, but it improved our position and orientation data sufficiently while only marginally increasing the CPU load. The improvements point toward efficient implementations and accurate localization, promising to soon bring a friendly R2D2 clone to your doorstep bearing delicious food!

 

Read more

Basic physics for simulating fabrics

A common method to simulate cloth for computer graphics and the movie industry is to use a so-called spring and dampener system. This method is based on splitting the fabric up into a grid and simulating the movement of the cloth at each point (intersection) of the grid. By enforcing forces and constraints that act on these points you can simulate many different types of fabrics, and by using a finer grid resolution you get more natural and realistic simulations.

By varying different constants during the simulation you can simulate many different types of fabrics. In the example below we are going for a fairly stiff and smooth material.

Each point on the grid can be simulated using Newtonian physics to update its velocity and position. However, in order for the grid to behave as a fabric and not just a collection of points, we need to add constraints and forces that act on these points. Typically this is done with the metaphor of springs and dampeners. We imagine that there is a spring-like force between nearby points that acts to keep them at a set distance. Likewise, we imagine that there are dampeners in addition to the springs that simulate friction, getting rid of the unwanted oscillations that one would otherwise get from a perfect spring.

Consider the grid with springs and dampeners in the illustration above. We can here define the force acting on each point as a function of the extension of the springs and the relative velocities of the two points. If we store the positions and velocities of the grid points in the arrays P and V we can compute the forces that act on the points as follows. For the case of a single connection between grid cell [i,j] and [i+1,j]:

F = k_s (|P[i+1,j] - P[i,j]| - l) (P[i+1,j] - P[i,j]) + k_d (V[i+1,j] - V[i,j])

with F applied to the point [i,j] and -F applied to the point [i+1,j], where k_s and k_d are the spring constant and dampening constant, respectively, and l is the spring length at rest.

In order to efficiently calculate the forces over the whole grid, we use array operations to perform the calculations above in a single step. In numpy this can be formulated as follows for the spring part of the calculation:


# skips the last point
pos1 = pos[:-1, j, :]
# skips the first point
pos2 = pos[1:, j, :]
# vector from pos1 to pos2
v12 = pos2 - pos1
# length of vector
r12 = np.linalg.norm(v12, axis=1)
# spring force: ks is the spring constant k_s, size is the rest length l
f = (v12.T * (r12 - size)).T * ks
self.force[:-1, j, :] += f
self.force[1:, j, :] -= f

Note how the code above computes the force for all ”horizontal” connections, and updates the forces on both points that are connected together.

To compute the updated positions and velocities we can use any of the standard physics integrators, such as an Euler integrator (worst), a Verlet integrator (better) or even a Runge-Kutta integrator (RK4, best). In our case we pick a simple Euler integrator, since it uses a formulation that is easy to translate to Runge-Kutta RK4 later on.
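For reference, a plain numpy version of the Euler update could look like the sketch below, assuming arrays force, vel and pos of shape (width, height, 3) and scalar mass and dt. It mirrors the TensorFlow update step shown further down, where both deltas are computed from the current state:

delta_vel = force / mass * dt     # Newton's second law a = F / m, integrated over one step
delta_pos = vel * dt              # position change from the current velocity
vel = vel + delta_vel
pos = pos + delta_pos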

Structural forces

If we only consider connections between the four direct neighbours (left, right, up, down) then we will not get a very lifelike material. What we need to do is to preserve forces in the plane of the object (structural forces) and shearing forces, as well as to resist bending.

The structural forces can be given directly with the four connections to the direct neighbours. These connections are illustrated with the black dashed lines below.

For the shearing forces, we need to add constraints along the diagonal, again by adding springs and optionally dampeners. This is illustrated by the red arrows below.

Finally, in order to prevent the fabric from bending too much we add springs, and optionally dampeners, along connections of length 2. This lets the springs and dampeners resist bending by adding a force only when the distance between two points is not exactly twice the resting length of the springs, which happens if and only if the grid is bent, thus creating a force that straightens the grid. These connections are illustrated by the green arrows below.

Note that the resting length of each spring is calculated from the length of the corresponding connection in the original grid, i.e. the diagonals have sqrt(2) times the length of the direct connections. By changing the resting lengths (and connectivity) of the connections we can alter the default shape of the object. By changing the spring constants and dampening constants we can alter the type of material that is simulated.

Accelerating the simulation using TensorFlow

To speed up the calculations we will use TensorFlow to perform all array calculations. To get started with TensorFlow, take a look at one of the many tutorials that are available. In the code below we use the original TensorFlow approach of first building up a graph that describes the calculations to perform, as opposed to the slower eager mode that performs the calculations on the go.

We start by defining the position and velocities of the grid as tensorflow variables.

args = {"trainable": False, "dtype": tf.float32}
pos_t = tf.Variable(pos, name="pos", **args)
vel_t = tf.Variable(vel, name="vel", **args)

where pos and vel are pre-existing numpy arrays that contain the starting positions and starting velocity.

We define useful constants for the calculations such as step time, gravity etc. as tensor constants:

mass = tf.constant(1.0, name="mass")
dt = tf.constant(2e-3, dtype=tf.float32, name="dt")
gravity = tf.constant(np.array([0.0,-9.81,0.0]), dtype=tf.float32, name="g")
size = tf.constant(0.1, dtype=tf.float32, name="size")
...

We can calculate the internal forces that act on the cloth by converting the elementwise operation from before into tensor operations. We accumulate all the different force calculations into a list of tensors, and perform a final summation of them as a last step.


forces = []

# Spring forces for i+1
pos1 = pos_t[:-1, :, :]
pos2 = pos_t[1:, :, :]
v12 = pos2 - pos1
r12 = tf.norm(v12, axis=2)
f = (v12 * tf.expand_dims((r12 - size), axis=2)) * ks
f_before = -tf.pad(f, tf.constant([[1,0],[0,0],[0,0]]))
f_after = tf.pad(f, tf.constant([[0,1],[0,0],[0,0]]))
forces.append(f_before)
forces.append(f_after)

# Dampening for i+1
vel1 = vel_t[:-1, :, :]
vel2 = vel_t[1:, :, :]
f = (vel2 - vel1) * kd
f_before = -tf.pad(f, tf.constant([[1,0],[0,0],[0,0]]))
f_after = tf.pad(f, tf.constant([[0,1],[0,0],[0,0]]))
forces.append(f_before)
forces.append(f_after)

total_force = tf.add_n(forces)

Similarly to the example above we add the calculations for:

  • spring forces for neighbours on the same row/column: i+1, i+2, j+1, j+2,
  • spring forces for neighbours on diagonals: (i+1, j+1), and (i-1, j+1)
  • dampening forces for direct neighbours: i+1, j+1

We can also add collision forces with a ball and the ground in order to make for a more interesting simulation:

ball_center = tf.constant(np.array([0,0,0]), dtype=tf.float32, name="ball")
ball_radius = tf.constant(1.0, dtype=tf.float32, name="rad")
V = pos_t - ball_center
r = tf.norm(V, axis=2)
p = tf.maximum(tf.constant(0, dtype=tf.float32, name="0"), ball_radius - r)
r = tf.reshape(r, r.shape.dims+[tf.Dimension(1)])
p = tf.reshape(p, p.shape.dims+[tf.Dimension(1)])
force = V / r * p * tf.constant(1e5, dtype=tf.float32)

Note how we can reformulate the problem of collision detection into computing how far into the ball a point is and applying an outwards force, using a max operation to ensure that points that are not inside the object (where ball_radius - r is negative) are not affected. This allows us to do collision detection without any conditional operators, which would otherwise have been slow.

Finally, we add collisions with the ground in a similar manner and add gravity that affects all points.
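For completeness, a sketch of what that could look like, assuming the ground is the plane y = 0, that pos_t has shape [width, height, 3] with y as the second component, and that these terms are appended to the forces list before the final tf.add_n:

ground_level = tf.constant(0.0, dtype=tf.float32, name="ground")
# penetration depth below the ground plane, clamped at zero (same max() trick as for the ball)
penetration = tf.maximum(tf.constant(0.0, dtype=tf.float32), ground_level - pos_t[:, :, 1])
up = tf.constant(np.array([0.0, 1.0, 0.0]), dtype=tf.float32, name="up")
forces.append(tf.expand_dims(penetration, axis=2) * up * tf.constant(1e5, dtype=tf.float32))
# gravity acts on every point with force m * g
forces.append(tf.ones_like(pos_t) * gravity * mass)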

The update step is done as a naive Euler implementation (for now).


self.delta_vel = self.force4 * self.dt / mass
self.delta_pos = self.vel_t * self.dt

self.step = tf.group(
self.vel_t.assign_add(self.delta_vel),
self.pos_t.assign_add(self.delta_pos))

When generating the animation below we alternate between a number of physics steps and extracting the position data from GPU memory to CPU memory and visualising the mesh:

for i in range(100):
    session.run(self.step)
self.pos = self.pos_t.eval(session=session)
self.draw()

If we were to use a more advanced integrator such as RK4 we could have a larger step size without introducing instabilities into the simulation. That would speed up the simulation overall and allow us to get away with fewer calls to TensorFlow.

Execution time

To see how effective our TensorFlow implementation was, we measured the time per single update step, i.e. calculating the forces and updating the velocities and positions once. We measure this time as a function of the total array size (width * height), since we know that the total execution time scales with the total number of points, or the square of the grid resolution.
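A minimal sketch of how such a measurement could be done (assuming the session and self.step from the code above): run a few warm-up steps first, then average the wall-clock time over many steps.

import time

for _ in range(10):                 # warm-up, excludes graph and kernel startup cost
    session.run(self.step)

n_steps = 1000
t0 = time.perf_counter()
for _ in range(n_steps):
    session.run(self.step)
elapsed = time.perf_counter() - t0
print("time per update step: %.3f ms" % (1e3 * elapsed / n_steps))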

As we can see in the graph below, there is a fairly large constant time offset, so the number of points barely has any effect below 40,000 points (200 in width and height). This is most likely caused by the time needed to launch the GPU kernels for the calculations above, which limits how useful this method is when simulating many smaller objects.

Read more

As I mentioned in previous blog posts, Combine is expanding to Stockholm, and we have now started the initiative to open an office.

Possibilities
Last week my colleague Peter and I visited Stockholm for interviews and customer meetings. We had some very interesting meetings regarding circular economy, autonomous drive and AI for cleantech that we hope will lead to projects or prototype platforms. Stay tuned for more information in upcoming posts.

Job posts
We are still going through applications for the positions of Head of Stockholm, Data Scientist and Control Systems Engineer, so visit our homepage and apply!

We will start by recruiting suitable engineers, followed by the manager, so the engineers might get the chance to be part of hiring their own manager.

Office
Regarding office space, we aim to find an office near the central station. The main reason is that we want to reduce the need for car travel between offices.

Competence needed
There seems to be a big need for experience with GPUs, machine learning, deep learning and AI. We also see the possibility of packaging solutions that we can deliver as projects from our own office instead of placing engineers on site. We prefer building our business on both on-site assignments and delivered solutions.

Being honest
Finally, I would like to highlight an issue that is not linked to Stockholm but is something I feel is important for our profession.

When we presented Combine and our services we were well received. The fact that we focus on technology and on how we can help our customers differs quite a lot from suppliers that only consider business opportunities without taking good partnership or customer success into account. Some of the customers were surprised that we were more interested in getting things right than in finding an assignment here and now.

We prefer being honest, doing the right thing, delivering quality over time and focusing on people; and we believe that this way of working will lead to success. It is also good for the soul 😊

So, I’d like to end this blog post with Combine’s motto:

ENTER THE NEXT LEVEL
“Our vision is to enhance engineering organizations in the world. Enter the next level is our way of expressing this, by helping our clients reach a higher level in their business. Our success comes with the success of our clients.”

Finally,
Thank you for reading.
Erik Silfverberg

CEO, Combine Control Systems AB

Read more

Modelica is a non-proprietary language for object-oriented, equation-based modeling maintained by the Modelica Association. Using Modelica, complex models can be built by combining components from different domains such as mechanical, electrical, thermal and hydraulic. There are many libraries, both public and commercial, for modeling various types of systems. Modelica models can be built and simulated using a wide range of tools, both commercial and free of charge.

Here a model of a residential house will be built using the public Modelica Buildings Library and the open source modeling and simulation environment OpenModelica.

The house that we are modeling is a one-story gable roof house with a solid ground floor. The model of the house will contain:

  • the envelope of the house
  • two air volumes, the residential area and the attic, separated by the internal ceiling
  • the interior walls of the house lumped into one wall
  • a solid ground floor with underfloor heating
  • a ventilation system with heat recovery
  • a fan coil unit

The heat transfer between the house and the environment is modeled using heat conduction and heat convection. The environment is described by the air temperature, wind speed and wind direction. Since we include the wind direction in the model, we need to take the orientation of the outside walls into consideration and cannot lump all walls into one. So first a model of an exterior wall is created that consists of three models from the Buildings Library:

  • HeatTransfer.Convection.Exterior extConv, a model of exterior convection that takes wind speed and direction into account
  • HeatTransfer.Conduction.MultiLayer cond, a model of conduction through a multi-layer construction
  • HeatTransfer.Convection.Interior intConv, a model of interior convection

The input to the model is the outdoor conditions, and the interaction with the indoor air happens through the heat port, port. The parameters of the model are the area of the wall, the azimuth of the wall and the construction of the wall. The construction of the wall is specified as an instance of Buildings.HeatTransfer.Data.OpaqueConstructions.Generic with the materials of each layer. Each material specifies the layer thickness and the material properties such as density, thermal conductivity and specific heat capacity; the number of states in the spatial discretization of each layer can also be specified. Similar models are created for the roof and the interior ceiling.

Now a model of the house can be put together using the created models. First the materials and constructions need to be specified for the different constructions, below is an excerpt of the Modelica code that shows the definition of the exterior wall construction:

constant Buildings.HeatTransfer.Data.Solids.Brick brickWall(x = 0.12);

constant Buildings.HeatTransfer.Data.Solids.InsulationBoard insulationWall(x = 0.10);

constant Buildings.HeatTransfer.Data.Solids.GypsumBoard gypsum(x = 0.013);

constant Buildings.HeatTransfer.Data.OpaqueConstructions.Generic wallLayers(nLay = 3, material = {brickWall, insulationWall, gypsum});

The air in the residential area and the attic are modeled using a Buildings.Fluid.MixingVolumes.MixingVolume which has a heat port and a variable number of fluid ports.

Now the various sub-models can be connected for the envelope and the interior air volumes. The heat ports of the wall, roof and ceiling segments are connected to the air volume that they are facing, and the outdoor conditions are connected to an external input to the house model.

Next, the floor with underfloor heating and an input for internal heat load disturbances are added. The underfloor heating is modeled by inserting a prescribed heat flow between two layers of the floor, and the internal heat load is modeled by connecting a prescribed heat flow to the indoor air. The floor is connected to the ground, which is set to a prescribed temperature of 10 °C.

 

The ventilation system that provides the house with fresh air is modeled using an exhaust fan, a heat exchanger and a fluid boundary with a variable temperature connected to the outdoor temperature. The exhaust fan is modeled using a Buildings.Fluid.Movers.FlowControlled_m_flow with a constant mass flow rate determined by the specified air replacement time, typically 2 hours. To recover heat from the exhaust air, a heat exchanger is used, modeled by Buildings.Fluid.HeatExchangers.ConstantEffectiveness with an efficiency of 80%. The ventilation system is connected to two fluid ports of the indoor air volume.

In a similar way the fan coil unit is modeled using a flow-controlled fluid mover, but instead of a heat exchanger a Buildings.Fluid.HeatExchangers.HeaterCooler_u is used, with a specified max power of 4 kW. The mass flow rate of the fan is set as a function of the requested power, starting from ¼ of the max flow at zero requested power up to the max flow at the maximum requested power.

Then temperature sensors, an energy usage calculation and the corresponding outputs are added, and the model of the house is complete.

Now we can use the house for simulation. First, we build a model for simulating the open loop responses to different inputs.

To get some understanding of how the house responds to the outdoor conditions and the different heating systems, step responses are performed at an operating point where the outdoor temperature is 10 °C, the wind speed is 0 m/s, the wind direction is north, and the indoor temperature is 22 °C. Four step responses are simulated:

  • The outdoor temperature is raised to 11 °C
  • The wind speed is increased to 10 m/s
  • The fan coil power is increased by 200 W
  • The floor heating power is increased by 200 W

The figure shows that all step responses settle in about the same time and reach steady state in 1000 h, about 42 days. However, the initial transients are quite different, and it can also be seen that the fan coil unit raises the indoor temperature slightly more than the floor heating at 200 W. To make further comparisons, the normalized step responses are plotted on two different time scales.

The plot of the normalized step responses on the 1000 h time scale confirms that the time to reach steady state is about the same. The plot showing the first 24 h of the step responses shows that a change in the outdoor temperature or the fan coil power initially changes the indoor temperature very quickly. This is because they are directly connected to the indoor air volume: the outdoor temperature through the ventilation system, and the fan coil power through the heater that heats the air blown through it.

Studying the step responses, the following conclusions can be drawn.

  • Heating the house with a fan coil unit is more energy efficient if only the indoor temperature is considered; with floor heating, more energy is lost through heat transfer to the ground, but a warm floor may give a higher perceived comfort for the occupants of the house.
  • If it is desired to keep the indoor temperature close to a specified setpoint at all times this can only be achieved using a fan coil unit.

This model does not capture all aspects of a real building; for instance, there are no windows or doors in the building envelope, and radiation heat transfer between the building and the environment is not modeled. This means that absolute energy performance calculations using this model may be inaccurate. However, the model can be used to evaluate different control strategies with respect to control and energy performance.

 

Read more

What’s your background story?

I moved to Linköping when I started studying Engineering Physics and Electrical Engineering, which was some years ago. I found an interest in control systems, so much in fact that I spent three years after my initial studies earning a licentiate degree in the field. I was eager to put my acquired knowledge to practical use and started working for a consulting company in the region. At that time, Combine didn’t have an office in Linköping.

How did you come to work at Combine?

Well, the first time I noticed Combine was in a job advertisement in a newspaper. I think it was in Ny Teknik, but this was a long time ago. Although it sounded great, they only had offices in Lund and Gothenburg at the time, and moving was out of the question. But when I saw that they were opening a new office in Linköping I contacted them, and here we are now.

What was it that sounded so great about Combine?

Combine is very focused on the fields in which it works, that is, control systems and data science. The thing that caught my immediate interest in the advertisement was that they really pinpointed the field of control systems, which I haven’t seen any other company do in the same way. When I looked at the qualifications, I felt that everything matched me perfectly. Now that I work for Combine I can only agree with my initial feeling: instead of being the biggest, Combine focuses on being the best.

You have long experience of working as a consultant, at several different consulting companies. You are also highly appreciated by our clients. What would you say is your success formula?

I don’t have a formula, or a good answer for that matter; maybe I’m just suited to being a consultant. I like to learn new things and to face new challenges. I also feel a need to get an overview of the things I work with right away, so that I can contribute as quickly as possible. I guess the social side also contributes to being a good consultant.

At the moment you work part-time while also being on parental leave. How do you handle that?

Yes, my wife and I are blessed with two fantastic children, and until the youngest starts preschool I only work half the week. It hasn’t been a big issue, since the client I am working for is very understanding. I try to repay the favor by being as productive as possible when I do work.

Without mentioning the client, we can state that you work in the automotive business. Do you have a favorite car?

Not really. Thanks to my kids I would say the dumper Chuck in the animated series The Adventures of Chuck and Friends.

Read more

How did you learn about Combine?

It was just after I finished my PhD in physics at the University of Geneva, Switzerland. During that time I was looking for a job in Sweden in order to move here. It was actually at Charm, the job fair at Chalmers University, where I more or less stumbled into Combine’s booth. My interest in Combine was immediately awakened by talking to my future colleagues about the technical problems they needed to solve.

Which of your skills acquired during your PhD are you using in your daily work life?

Since my PhD was about very fundamental physical properties of novel materials, those parts are not at all important for my daily work life. It is more the broad mathematics and physics knowledge, as well as the secondary skills you acquire during a PhD, that I use during my work day, e.g. programming experience and data analysis skills. During my academic career I got very interested in developing my own data analysis tools and in optimizing our algorithms. When I came to the end of my PhD I was sure I wanted to continue in this direction, but working in industry.

Tell us about the different projects you have worked on at Combine.

I started out helping with the development of Combine’s own data analysis tool ”Sympathy for Data”, which I wish I had known about during my PhD. I believe it would have saved me many hours of developing my own scripts and tools over and over again. I also like the visual representation, which lets you quickly grasp and structure a workflow. Furthermore, it appealed to me to contribute to open source software. (Editor’s note: you can read more about Sympathy for Data here.)

That was followed by a smaller project implementing a server application, before I started a two-year on-site project at one of our customers, designing and helping to implement a framework for automated end-to-end verification. This last project was very challenging on many levels, from learning the customer’s needs and designing the system from the ground up to fighting for the right resources. But I am a person who likes a good challenge and uses it to grow. I believe I succeeded and left the group in a good place before I started my new role as group manager for Data Science Solutions in our Gothenburg office.

How do you see the future of your new group?

There are two things which are very important to me and to Combine in general. Firstly, I want to provide our customers with the right solution, meaning quality and usefulness. Secondly, I want to provide a great working environment for our consultants, where they have the possibility to grow professionally and personally. I strongly believe that sharing knowledge between our on-site and in-house consultants will boost our capability to provide our customers with the right and complete solution.

Read more

Combine has set out on a journey of adventures, exploring different industries with the following question in mind: how can we utilize the competence at Combine to help our customers ”Enter the next level”?

In this blog post, we are exploring the retail business together with NIMPOS.

NIMPOS is a Swedish company that offers a revolutionarily simple and safe point-of-sale system, suitable for both large and small companies thanks to its scalability. A full description of NIMPOS and their products can be found here. With access to more and more transaction data, NIMPOS has asked Combine for guidance on how to utilize the stored transaction data to help their customers enhance their business.

Combine develops and maintains a free and open source data analysis tool called Sympathy for Data. Sympathy is a software platform for data analysis built upon Python. It hides the complexity of handling large amounts of data in the data analysis process, which enables the user to focus on what is really important.

The first step in any data analysis task is getting access to the data. After creating a VPN connection, the data from the NIMPOS database is easily imported into Sympathy using its powerful import routines.

Some of the data we got access to:

  • Reference ID (one per transaction)
  • Article ID
  • Article Name
  • Transaction Date
  • Quantity (number of sold articles)
  • Article Price

Now, with the data imported, the powerful data processing capabilities of Sympathy are at our fingertips. The data is first preprocessed to filter out missing and unreasonable entries, after which the analysis can start.

A few analyses have been implemented:

  1. Predicting the increase in the number of customers.
  2. Expected number of sold articles together with confidence bounds.
  3. Customer intensity variation
    • For each day of the week
    • Hour-by-hour for each weekday

An overview of the flow is presented in the figure.
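As a rough illustration of the preprocessing and the intensity analyses listed above, here is what an equivalent sketch could look like in plain pandas. The file and column names are illustrative; the actual work is done with Sympathy nodes and subflows.

import pandas as pd

# Load the exported transaction rows (hypothetical file and column names).
df = pd.read_csv("transactions.csv", parse_dates=["TransactionDate"])

# Preprocessing: drop missing values and unreasonable rows.
df = df.dropna(subset=["ReferenceID", "ArticleID", "Quantity", "ArticlePrice"])
df = df[(df["Quantity"] > 0) & (df["ArticlePrice"] >= 0)]

# One transaction = one ReferenceID; use it as a proxy for customer visits.
transactions = df.drop_duplicates("ReferenceID").copy()
transactions["weekday"] = transactions["TransactionDate"].dt.day_name()
transactions["hour"] = transactions["TransactionDate"].dt.hour

# Customer intensity per weekday, and hour-by-hour for each weekday.
per_weekday = transactions.groupby("weekday").size()
per_hour = transactions.groupby(["weekday", "hour"]).size()

print(per_weekday)
print(per_hour.head(24))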

Sadly, we do not have any information connecting individual transactions to unique customers, and no other customer features, e.g. age or sex, are available, which narrows down the possible analyses.

This post is the first in a series, where we have laid the ground for upcoming posts. We introduced the reader to the problem, some of the data, the tools, and a few analyses implemented.

In one of the upcoming posts, we will showcase the possibilities of connecting the strengths of Sympathy for Data, for processing and analyzing data, together with the interactive reporting made possible by Sympathy web services.

Stay tuned and don’t miss out on future posts. In the meantime, I suggest you read earlier posts or download Sympathy for Data and start playing around with some example flows. You won’t regret it!

Read more

Some thoughts after PyCon 2018

Python has become, if not the de facto standard for data science, then at least one of the biggest contenders. As we wrote in a previous entry, we sent a group of our top data engineers and developers to learn about the latest news in data science and Python development in general. Below we share some notes and impressions from this year’s PyCon conference for those of you who didn’t have the chance or time to attend.

Ethics in data science

One very interesting and thought-provoking keynote was about the ethics of data science, held by Lorena Mesa from GitHub. She is a former member of the Obama campaign as well as a member of the Python Software Foundation board. In the talk, she presented experiences from the 2008 US presidential campaign and the role of data science in the rise of social media as a political platform. She also discussed the dangers that have emerged from that in the years that followed. Data science has become a powerful tool for spreading well-intended information, not-so-well-intended (dis)information, monitoring people for their political views, or even attempting preemptive policing.

One of the scariest examples was a decidedly Minority Report-style scenario in which police used an automated, opaque system to give individuals scores from 0 to 500 estimating how likely they were to commit crime, and used this information to steer policing actions (this was done in Chicago, and there has been a strong backlash in the media). An extra worrisome part is the black-box approach, in which we cannot quite know what factors the system takes into consideration, or the biases that are inherent in the data with which it was built. Another example was an investigation by the American Civil Liberties Union (ACLU), in which they took a facial recognition tool (with its recommended settings) that had been sold to the police and used it to match members of the U.S. Congress against a database of 25,000 mugshots. The system falsely matched 28 members of Congress to mugshots, with a disproportionate number of these matches being against members of colour. This is a tricky problem where the socioeconomic issues lying behind the source material (the mugshots) are carried through to the predictions made by the system in non-obvious ways – something that surely needs to be addressed and taken into consideration before we can allow ourselves to trust the results of such a system.

Finally, perhaps it is time for us data engineers to consider, and at least start the discussion about, the larger ramifications of the type of data we collect, the algorithms we train, and how our results affect our society. Perhaps it is time for a Hippocratic oath for data scientists?

Quantum Computing with Python

Over the last decade, quantum computing has advanced from the realm of science fiction to actual machines in research labs, and is now even available as a cloud computing resource. IBM Q is one of the companies at the frontier of quantum computing research, and they provide the open-source library qiskit, which allows anyone to experiment with quantum computing algorithms. You can use qiskit either to run a simulator for your quantum programs, or to connect over the cloud to an actual quantum computer housed at IBM’s facilities and run the algorithms there.
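As a small taste of what that looks like, here is a Bell-state example using the pre-1.0 qiskit API that was current around the time of the conference (recent qiskit releases have reorganized these imports):

from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into superposition
qc.cx(0, 1)                  # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])

backend = Aer.get_backend("qasm_simulator")
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)                # roughly half '00' and half '11'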

The size of the machines, counted as the number of quantum bits, has been quite limited for a long time, but it is now fast approaching sizes that cannot conveniently be simulated on normal machines.

Contrary to popular belief, a quantum computer is not believed to be able to solve NP-hard problems in polynomial time. The class of problems that can be solved efficiently by a quantum computer is called BQP; it is believed to extend beyond P and to contain some problems in NP, but not the NP-hard ones. We also know that BQP is a subset of PSPACE.

This has the consequence that a sufficiently large quantum computer could quickly solve important cryptographic problems such as prime factorization, but it could not necessarily solve e.g. NP-complete problems (such as 3-SAT), planning, or many of the other problems important for artificial intelligence. Nonetheless, the future of quantum computing is indeed exciting, and it will completely change not just encryption but also touch almost all other parts of computer science – an exciting future made more accessible through the Python library qiskit.

A developer amongst (data) journalists

Eléonore Mayola shared her insights from her involvement in an organization called J++, short for Journalism++, where she works as a software developer who helps journalists sift through vast troves of data to uncover newsworthy facts, and also teaches them basic programming skills. She showcased a number of data-driven journalistic projects, ranging from interactive maps of Sweden displaying statistics on moose hunts or insurance prices, through the Panama Papers revelations, to The Migrants’ Files, a project tallying up the cost of the migrant crisis in terms of money and lost human lives.

When it comes to her experience teaching journalists to code, one of the main takeaways was that even the most basic concepts, which many professional software developers would find trivial, can already have a big impact in this environment. Another point was that it is important to keep a reasonable pace and avoid overwhelming students with too much information at once. Last, but not least, the skills of software developers are sorely needed even in fields that many of us probably wouldn’t consider working in.

Read more

The first model was a simple Equivalent Circuit Model (ECM), whose parameters were first identified to fit the model used for evaluation; the ECM was then used to perform the optimization. The circuit can be seen in figure 1. The model used for evaluation was an advanced electrochemical model (EM) implemented in a framework called LIONSIMBA, which models the chemical reactions inside the battery with partial differential equations and is therefore not suitable for optimal control. The method used to fit the ECM to the EM could also be applied to fit the ECM to a physical battery, making it useful in real-world applications as well.

Figure 1: ECM of a lithium-ion battery cell

The system of equations in equation 1 shows the dynamics of the ECM as well as the models used for temperature and State of Charge (SoC) estimation.
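Equation 1 itself is not reproduced here. For reference, a generic two-RC-pair ECM with SoC and lumped thermal models takes the following form (a textbook sketch, not necessarily the exact parameterization used in this work), with i > 0 during charging:

\dot{v}_1 = -\frac{v_1}{R_1 C_1} + \frac{i}{C_1}, \qquad \dot{v}_2 = -\frac{v_2}{R_2 C_2} + \frac{i}{C_2}, \qquad v_s = v_1 + v_2

v_t = \mathrm{OCV}(\mathrm{SoC}) + R_0\, i + v_s, \qquad \dot{\mathrm{SoC}} = \frac{i}{Q_{\mathrm{cell}}}

m c_p\, \dot{T} = R_0 i^2 + \frac{v_1^2}{R_1} + \frac{v_2^2}{R_2} - h A\,(T - T_{\mathrm{amb}})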

Since the goal was to charge as fast as possible, we wanted to minimize the charging time, which was done through minimum-time optimization. One way to solve minimum-time optimization problems, and the one used by us, can be seen in equation 2.
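A common way to pose such a problem (a generic sketch of the technique, not necessarily the exact formulation in equation 2) is to make the final time t_f a decision variable and rescale time as \tau = t / t_f, so that the problem is solved on a fixed horizon:

\min_{t_f,\, i(\cdot)} \; t_f \quad \text{s.t.} \quad \frac{dx}{d\tau} = t_f\, f\big(x(\tau), i(\tau)\big), \quad \tau \in [0, 1]

x(0) = x_0, \qquad \mathrm{SoC}(1) = \mathrm{SoC}_{\mathrm{target}}

T(\tau) \le T_{\max}, \qquad v_s(\tau) \le v_{s,\max}, \qquad 0 \le i(\tau) \le i_{\max}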

As there are a number of harmful phenomena that can occur in a battery, additional constraints were needed as well. Two of the most important effects are lithium plating and overcharging, both of which we take into consideration. Both lead to decreased capacity, increased internal resistance and a higher rate of heat generation. It is known that there is some kind of connection, although not a linear one, between these effects and the voltage over the RC-pairs, vs. This is why we applied a constraint to this voltage: without it, the solver would only take the temperature constraint into consideration, which would lead to damaging the battery.

The EM allows us to see what happens inside the battery with regard to the harmful effects when we apply the current obtained by solving the optimization problem. One of the evaluated cases can be seen in figure 2, where the results from both the ECM and the EM are included. This case is for charging from 20-80% at an initial temperature of 15 °C.

 

Figure 2: Results and model comparison for the EM and ECM.

The top left plot in the figure above shows the lithium plating voltage, which has to be kept above 0 and is controlled by the linear constraint put on vs, which is also shown. The top right plot shows whether the battery is being overcharged, which is also controlled by the constraint on vs. The bottom left plot shows the temperature, and the bottom right one shows the current, which is the result of solving the optimization problem.

The next thing we did was to compare our fast charging to a conventional charging method, namely constant current-constant voltage (CC-CV) charging. The constant current part was maximized in all cases to reach the same maximum values, to make the comparison fair. The following plots are the same as above but compare our fast charging with CC-CV charging instead, showing that the fast charging is 22% faster and does not come as close to zero in terms of lithium plating voltage as the CC-CV method, although it has a higher average temperature due to the higher average input current.

 

Figure 3: Comparison between the optimized fast charging and CC-CV charging.

A summary of the charging times and the improvement over CC-CV can be seen in tables 1 & 2, for charging from 20-80% and 10-95% at different temperatures, respectively.

Conclusion
By performing optimization on an equivalent circuit model of a lithium-ion cell simulated in LIONSIMBA, it was possible to achieve charging times that in some cases were up to 40% faster than with traditional CC-CV charging, while still keeping the battery within the same constraints. To control the charging and avoid both lithium plating and overcharging, a linear constraint was applied to the voltage over the two RC-pairs in the equivalent circuit model. The results clearly show that the method has potential and that it should be possible to apply it to a physical battery, even though it will be more difficult to choose constraints for the optimization.

Read more