Blog - Page 3 of 10 - Combine


A process using simulation and a neural network has been developed to investigate the possible benefits of a completely digital approach to the friction estimation problem. Simulating the vehicle dynamics removes errors that are very difficult to eliminate in the real world, but it also creates problems that did not exist before, for example a mismatch between the real car and the simulated car.

So why do we need to improve friction estimation? Several problems exist with the development and testing process; read more about them in the earlier post!

In order to solve, or at least circumvent, those problems, a digital vehicle model was built in a simulation environment created in Unity 3D. The digital vehicle is based on vehicle data from NEVS and can be driven and tested at any friction coefficient, in any situation from ordinary driving to extreme manoeuvres, and the friction is always known.

When it comes to making an estimate, the friction estimation methods that are popular today are extremely complex and are based on complicated tyre models with an immense number of dynamic variables.

The purpose of these methods is to find a connection between the input variables and the friction coefficient through a complex, physically derived relationship. To circumvent the complexity, the “Universal approximation theorem” was instead used to find a purely mathematical connection between the inputs and the friction coefficient, by asking a different question.

A multilayer perceptron with two hidden layers was used, since the theorem proves that such a network can approximate the required function, and it gives relatively quick insight into the performance of the process for evaluation. The network was used to classify the sensor readings into one of four friction categories, from ice to dry asphalt.
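As an illustration of such a classifier, the forward pass of a two-hidden-layer perceptron can be sketched in a few lines of NumPy. The input size, layer widths, and random weights below are assumptions for the example, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class FrictionMLP:
    """Two-hidden-layer MLP mapping sensor readings to 4 friction classes."""

    def __init__(self, n_in=8, n_hidden=32, n_classes=4):
        # Random weights stand in for trained parameters.
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
        self.b2 = np.zeros(n_hidden)
        self.W3 = rng.normal(0.0, 0.1, (n_hidden, n_classes))
        self.b3 = np.zeros(n_classes)

    def predict(self, x):
        h1 = relu(x @ self.W1 + self.b1)
        h2 = relu(h1 @ self.W2 + self.b2)
        probs = softmax(h2 @ self.W3 + self.b3)
        return int(np.argmax(probs))  # 0 = ice ... 3 = dry asphalt

model = FrictionMLP()
sensors = rng.normal(size=8)  # hypothetical signals: wheel speeds, slip, accelerations
friction_class = model.predict(sensors)
```

In the actual process, the weights would of course be fitted on simulated driving data labelled with the known friction coefficient rather than drawn at random.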

In an optimal scenario, where the network is evaluated on driving scenarios similar to those it was trained on and the model mismatch is small, the network reached an estimation accuracy of 94.3%. However, when it was evaluated on driving scenarios dissimilar to the training scenarios, and with a large model mismatch, it classified only 37.4% correctly.

The conclusion from the results is that the digitalisation process can be highly beneficial if the difference between the physical and the digital vehicle is minimised and the model is trained on the scenarios it will be exposed to. Used adversely, the process only introduces a set of additional errors.

This blog post is based on a thesis that Jonas Karlsson carried out at Combine in collaboration with NEVS during the fall of 2019. The full report will be published shortly and will be publicly available to interested parties.


Who are you?

My name is Amr Salem, and I’m 26 years old. I was born and raised in Cairo, Egypt. I have for as long as I can remember been an adventurous person. I dream of piloting an aircraft, which I think is one of the reasons I became an engineer: to explore new technical areas, push the limits, create something new, and bring innovations to life that will help people.

I also love to explore and connect with other countries and cultures, so travel is a big part of my life. When I do travel, I try to do it like one of the locals, as the cultural experience is so much bigger that way, and I get to meet new exciting people. In my spare time I like to learn new stuff, play soccer, or hang out with my friends. Another interest of mine that I have been able to exercise here in Gothenburg is horseback riding. I’m blessed with a landlady who also owns a stable, in which I’m lucky to be able to help out sometimes!

How did you end up in Gothenburg?

I enrolled in an Italian school in Cairo, as it was one of the most prestigious engineering schools in Egypt. I did it for various reasons; one was, of course, the engineering part, another was the curiosity and longing to learn a new language. The education included both hands-on work with CNC machines and other types of machinery, and preparation for continuing at a technical university.

When it was time for me to choose a university, I wanted to explore the world and experience adventure outside Egypt. Since I understood Italian from my high school, Italy was my first country of choice. I had a hard time deciding between a university in the beautiful city of Milan or the well-renowned University of Trento. In the end my better side won that battle, and I enrolled at the University of Trento.

I was excited when I travelled to Trento before my first day, since it was only my second time outside Egypt. The first time had been a short two-week language course in Italy a couple of years earlier, and it was my first time travelling alone! In the end, I lived in Italy for a total of five years, and I believe I might end up there again in the future!

Studying in Italy awakened a desire for more semesters abroad, and I decided I wanted to study abroad during my masters as well. During my research into possibilities to study abroad, a university in Canada caught my eye: McMaster University in Hamilton. They had a thorough application process that involved tests, interviews, and a bit of luck. The competition for the two positions was fierce, so I applied to the Erasmus programme as well. When I read the reply, I couldn't believe my eyes at first: I was one of the two selected candidates!

An interesting thesis work was also vital to me, so I started to contact professors all around the globe.

My time in Canada was fantastic; the country is amazing, and the people there were so kind. I submerged myself in the books, and the semesters passed at a fast pace. I studied so hard that my friend caught me by surprise when he called in the middle of the night, asking if I still wanted an exchange year through the Erasmus programme. So, after I had completed my exchange year in Canada, I went on to Sweden and Chalmers for a second exchange year. Here I stumbled upon one of the professors I had interviewed with the year before about a master thesis. As I approached him, I found to my surprise that he also remembered me, and my master thesis was settled.

The thesis work was performed at Volvo Group through Chalmers. The scope, limitations, and conclusions of the thesis became a battle between the three stakeholders, who did not always agree. It all worked out in the end, even if it demanded several flights back to Trento, and now my solution is patent pending.


During my short professional career, I have worked at two big companies, and now I work at a smaller one (65 employees). The difference is striking. Combine is by far the best employer I have had. The environment, the assignments, the people, and the flexibility are on an entirely different level. Here I feel seen: I'm entrusted to work in a small team on a high-paced project with hard deliveries, and they trust me to deliver according to my level, which I'm raising daily.

The project I'm working on right now is to deliver a measurement system for the train industry. It was scary in the beginning, as I was moving away from my domain, mechatronics, and entering a field where I combine my skills in embedded systems with software engineering. But we have a technical lead in the project who helped set up the software architecture and answers all questions with ease, so the project is steamrolling right now. I have never learned as much as I currently do. In everything I do, I can see how it is used in the final product, and that gives me a purpose to work even harder. It feels like I have embraced the challenge and adapted myself to it.

Next step

When the project has made a quality-assured delivery to its customer, I would like to take on a new assignment. The ideal project would relate to control theory for aircraft or autonomous driving. Path planning and trajectory control are topics I find both challenging and exciting. It is a rather new subject with no clear path forward or industry standard, which makes the field even more appealing. I have tried to plan my future meticulously, but I have realised that it is the uncertainties and unexpected shifts in my plans that I have enjoyed the most, so nowadays I try to include some margin in my plans for these sorts of events.


In this latest release of Sympathy for Data, we have packed in a large number of bug fixes and improvements to existing nodes, as well as several new features. Here we present some of them briefly.

To improve the development experience and user feedback possibilities, we spent some time implementing improved message handling and a new issue-reporting system. Sympathy's worker processes now send their messages, e.g., exceptions and warnings, instantly to the platform. You no longer have to wait until you close your configuration windows, or for an execution to finish, before you see messages, warnings, and errors.

Furthermore, sending issue reports to the developers of Sympathy for Data has become easier. This release adds a feature to send issue and bug reports to us directly from within Sympathy. You can either go to the help menu or right-click on a message in the “Messages” window and select “Report issue”. By collecting the relevant information directly, this feature will enable us to improve the stability and behavior of Sympathy even faster. More details can be found in the documentation. Alongside the issue-reporting system, we added the possibility to send anonymous usage statistics, with your explicit agreement. The data we collect includes only statistics such as which nodes are used, how often, and their execution time. Sympathy for Data does not share any personal information, nor data on the analysis being performed. With the help of this data, we plan to improve Sympathy for Data further and add functionality such as enhanced search and node recommendation systems in upcoming releases.

If you use the Figure node but have always found the interface a bit too complex, we have added a wizard to the configuration panel. It allows you to select a plot type and configure its data inputs quickly. You will find it by clicking on the wizard's hat at the top right (see image below).

To help users working with the text and JSON data formats within Sympathy, we added, besides some new nodes for data format conversion, a search function in our text editor/viewer, accessible as usual via Ctrl-F (Cmd-F on macOS).

For those users who use our Windows installer, we are happy to tell you that the documentation for the platform and standard library now comes pre-built with the installer. Furthermore, we made several other changes to how the documentation is built and which kind of information is added automatically, for example, deprecation warnings. For a list of all changes, see the news. 

Last but not least, we have updated several of the underlying Python packages to recent versions. As usual, this is only a short selection of new features and fixes. Navigate to the news section of the documentation to learn more about all changes in Sympathy for Data 1.6.2. If you want to hear more about Sympathy for Data and how it can help your organization handle, manage, analyze, and visualize data, and create reports, please do not hesitate to contact us.


A possible way of categorizing the games would be to look at the information available when a decision needs to be made.

Chess and Risk are games that provide full information at any given time, but only in chess is the outcome of decisions given (the result of any sequence of moves is deterministic and has a single result). In Risk the momentary outcome might be influenced by rolling dice, so the result of a chain of events is still deterministic but there are many possible results depending on how the dice fall.
A further complication is the fact that there are other players and we cannot decide for them (which is true for all the games we are looking at).

In bridge card play, disregarding the bidding stage and information from it, partial information is available. The declarer (or “attacker” if you like) has 100% information on his side's assets and 100% influence on what decisions his side makes. The defenders each have 50% information on their side's assets and 50% on the opposing side's assets. Furthermore, each defender can only decide which card to play from their own hand; their partner makes his/her own decisions.
Interestingly, the information available grows, or rather the number of possibilities diminishes, with each card that is played. Possible card distributions are reduced rapidly, and the defenders supply both each other and the declarer with additional information. Decisions made by other players, as well as any agreed signalling conventions between the defenders, are clues (defenders often have agreements on how to improve their partner's available information, but this is also information to the declaring opponent).
Since all 52 cards are dealt, the contents of the hidden hands are known, but not how those contents are distributed between the hands. A human might say that the game is mostly technical with a significant portion of psychology.

In poker there is not a lot of information available initially. Since Texas Hold'em is so popular, we can use it as an example. The “raw” information (cards) consists of your own two cards. Later, first three, then one more, and finally one more card is added to your known world of information. However, since there is betting at all these stages, most of the information available is what decisions your opponents make (or do not make). Since the balance between “raw” information and information gleaned from interpreting the opponents' actions leans so heavily towards interpretation, a human might say that the game is mainly psychological with a significant portion of odds calculation.

So how do algorithms compare to humans when it comes to making decisions in these games?

If we begin with chess, we find a vast amount of literature, research and practical implementations.
From a computing point of view, chess is typically a tree-search problem. Since the number of possible moves at each step easily exceeds 30 in the middle game, a brute-force approach quickly becomes impractical. For instance, a search 8 steps deep with a choice of 40 different moves at each step requires evaluating 40^8, i.e. more than 6,500 billion positions.
In order to reduce the number of evaluations needed, pruning algorithms are used. The most popular one for this type of problem is called alpha-beta.
The reasoning behind alpha-beta is that the opponent has no reason to pick a move that would give us the opportunity to reach a better position than we could achieve had the opponent picked another move, and vice versa. Alpha and Beta represent the max and min values used to prune the tree along this line of reasoning.
Let us consider the tree of depth 8 (depth 0 is the current position):

  • We traverse the tree left-to-right and therefore begin by reaching the bottom-left position in the tree at depth 8 (we have made 4 moves and so has the opponent, who has made the last move, which we will call O4). When we reach this position, our evaluation algorithm is run and returns an assessment of the position, value X. We then continue to evaluate all the possible moves for the opponent (all O4) that stem from the same last move we made (call it U4). Our opponent's best move will be the one with the lowest X: Alpha.
  • Now we move on to our next move at depth 7 and begin to evaluate the opponent's moves at depth 8 in a similar fashion. The difference is that we now know it would be silly of us to choose a move at depth 7 that gives the opponent the possibility of making a move with a lower X than Alpha. As soon as we encounter any such possible move for the opponent, we can prune that entire sub-tree. In effect, Alpha is our lower bound.
  • As you have probably figured out by now, the reasoning at depth 6 is reversed. The opponent would not willingly give us the opportunity to score above an upper bound, the Beta value. Our lower bound is the opponent's upper bound and vice versa.
  • And so on, until the entire tree from depth 8 and up is aggregated, resulting in a choice of best move and an evaluation of that move.
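The steps above can be condensed into a short recursive sketch. The toy tree and leaf values below are made up for illustration:

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta pruning over an explicit game tree."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:  # opponent will never allow this line
                break          # prune the remaining siblings
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if beta <= alpha:  # we would never choose this line
                break
        return value

# Tiny depth-2 example: we pick a move, the opponent replies.
TREE = {"root": ["b", "c"], "b": ["d", "e"], "c": ["f", "g"]}
LEAVES = {"d": 3, "e": 5, "f": 2, "g": 9}

best = alphabeta("root", 2, float("-inf"), float("inf"), True,
                 lambda n: TREE.get(n, []), lambda n: LEAVES.get(n, 0))
# best == 3: the opponent answers "b" with "d" (3) and "c" with "f" (2),
# so our best first move is "b"; the sub-tree under "g" is pruned.
```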

Theoretically, the number of position evaluations can be reduced to roughly the square root of the brute-force count, assuming that excellent moves are found early in the evaluation. Developments in this area focus on good evaluation algorithms and on getting good Alpha and Beta values early. Those interested can look at the Scout improvement.

In contrast, expert human players focus on determining a limited selection of candidate moves. They evaluate a tree with much fewer possible moves at each depth.

Comparing computers to humans, computers have been the winners for quite a few years now. There are still certain types of positions where the horizon (calculation depth) is too far away for a computer to make the correct decision: typically positions with interlocked pawn chains and no material gains in sight, or endgames where a small mistake ends in a draw due to rules limiting consecutive moves with nothing in particular happening. But computers will not choose these positions, because programmers have developed opening books that avoid anti-computer positions, and some of the horizon problems are solved with endgame books containing specific solutions to problems that are not solvable within the horizon.

From a “computers trying to solve problems better than humans” perspective chess is relatively easy. The Japanese game of Go is a similar example.

The Risk problem, on the other hand, appears both simple and difficult. Tree-search solutions (as for chess) typically require c^d evaluations brute force, where c is the number of choices in each position and d is the depth, so it seems we might accept a large c in exchange for a limited d. We would also have to consider the possible outcomes of die rolls. With that line of reasoning, we might examine all possible outcomes of die rolls (weighted according to probability) and perhaps achieve a decent algorithm by running a tree search for each die-roll outcome combination. That seems simple, even if c becomes very large.
There has been work done using genetic algorithms and even TD(lambda) (machine learning) for Risk. This indicates that maybe it isn't that simple after all.
However, I like to think that commercially available Risk games probably just use a simple greedy algorithm, and perhaps even some manipulation of the die rolls for different settings. Greedy algorithms just grab as much as is immediately available, with no eye to the future.

  • Consider a tree of depth 2 with two branches at each level. Each node has a numerical value
  • Problem formulation: choose a path that gives the greatest sum of values in the nodes you have passed
  • A greedy algorithm will always choose the node with the highest value at the nearest level. This might lead to a selection of nodes with low values at the next level, while a different first choice might have led to a jackpot node. In the long run, all things being equal, a greedy algorithm will still outperform random behavior
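The scenario in the list above can be sketched directly. The node values are invented for the example:

```python
# Depth-2 binary tree: each node maps to (name, value) pairs for its children.
TREE = {
    "root": [("A", 3), ("B", 2)],     # greedy picks A (3 > 2)...
    "A":    [("A1", 1), ("A2", 2)],
    "B":    [("B1", 10), ("B2", 1)],  # ...and misses the jackpot under B
}

def greedy_path(tree):
    """Always take the highest-valued child at the nearest level."""
    total, node = 0, "root"
    while node in tree:
        node, value = max(tree[node], key=lambda kv: kv[1])
        total += value
    return total

def best_path(tree, node="root"):
    """Exhaustive search: the true optimum, for comparison."""
    if node not in tree:
        return 0
    return max(v + best_path(tree, n) for n, v in tree[node])
```

Here `greedy_path` collects 5 (root → A → A2) while `best_path` finds 12 (root → B → B1), illustrating how the locally best choice can forfeit the jackpot node.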

Evaluation of a position seems to be a challenge, but perhaps a simplistic approach (the number of reinforcements generated by the position in the next round, for instance) is workable.

We might suspect that humans use a sort of semi-greedy approach. On the one hand grabbing is good. On the other hand there are reinforcements to be gained by holding continents until the next turn, and allowing opponents to generate vast numbers of reinforcements will eventually lose. Also the map is not symmetrical and the continents are not identical. So humans tend to balance greed against many other factors.

In bridge card play, most programs use so-called “double dummy” evaluations. This means simulating possible card distributions in the hidden hands and then running searches for the best card to play overall. An interesting point is that these evaluations have two not-so-obvious flaws:

  • If the number of simulations is too low, situations where the location of a certain card is actually a 50/50 proposition will instead be skewed in some direction. This can lead to selecting card plays that actively guess where the card is, while alternative plays would score better because guessing is sometimes avoided
  • The double dummy approach is similar to the approach described for chess and Risk (and Go, and so on). The difference is that the players do not actually see all the cards. Each search uses just one possible distribution of the cards. Therefore, algorithms assuming that the opponents will make the best play to counter any play we make are basically flawed. Given a choice, opponents will often choose the play with the best odds, which is a weighted result over many possible distributions of the cards.
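The first flaw is easy to demonstrate with a tiny sampling sketch: when a key card sits in either hidden hand with equal probability, a handful of simulated deals can come out heavily skewed, while a large sample converges towards 50/50. The deal counts below are arbitrary:

```python
import random

random.seed(1)

def fraction_with_left_opponent(n_deals):
    """Fraction of simulated deals placing the key card in the left hand."""
    left = sum(random.random() < 0.5 for _ in range(n_deals))
    return left / n_deals

few = fraction_with_left_opponent(5)        # can only be 0.0, 0.2, ..., 1.0
many = fraction_with_left_opponent(10_000)  # close to the true 0.5
```

A program that ran only the small sample would “know” where the card is and play for that, instead of choosing a line that avoids the guess altogether.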

More advanced algorithms consider the fact that the opponents will be in a similar situation to the one we are in. Such approaches are called “single dummy” in bridge. They tend to score significantly better in defense and better in offense (remember, the defenders have fragmented information and distributed decision-making).

Human experts play exactly like this. Sometimes a double dummy approach is sufficient, for instance if a play is found that always leads to the goal. The single dummy considerations gain weight the more the outcome is influenced by the opponents' decisions than by the specific cards they hold.

Humans still beat machines sometimes at bridge, but it seems the machines will be clearly better quite soon.

Poker is a real challenge, both for humans and computers. Information is clearly limited and any straightforward approach will probably only be successful for so-called “heads-up” (when only two players are involved).

Algorithm development has involved Bayes' theorem, Nash equilibrium, Monte Carlo simulation, and neural networks, all of which are imperfect techniques. The current trend for poker AIs is to use reinforcement learning or similar techniques, with huge training samples. Training is usually done AI vs AI.

Since poker is so much about interpreting the actions of other players, future development will probably be based on game theory and attempting to dynamically model the current opponents (so that game strategies can be individually adapted for each current opponent, where weaknesses can be exploited and bad risk/reward ratios avoided).

Human experts tend to (over?)exploit the risk/reward ratios (“I have an ace in hand, so I will bet enough that an opponent with no pair and no ace will often fold immediately”). They also tend to introduce a random element to make it more difficult for the opponents to build a mental model of them, while spending small or reasonable amounts of money early in a session (and observing the other players) to build their own models of the opponents. The fact that humans leak information beyond betting or not betting and the size of the bet, such as twitching, hesitating or talking a lot (and observe such leaks in others), makes the human live game more different from computer play than any of the other games we have looked at. It is also possible to put on some play-acting and try to fool any observers.

On-line poker, on the other hand, is basically the same regardless of whether the players are human or AI.

An interesting observation is that poker AIs do not always produce strategies similar to the ones human experts use. There is also absolutely no fear of personal loss involved in computer evaluations.

It is entertaining to think that very good human poker players either have the ability to turn off any fear of losing or that they simply lack the common sense to worry about losing lots of money and therefore can exploit any such weaknesses in other players. AI development has led to computer programs being able to perform well against humans. Recent developments have led to computer programs beating humans even in multi-player scenarios.

As a conclusion, we can say that regardless of whether the games we look at have perfect or imperfect information, algorithms have been developed that enable computers to beat humans at them. Considering the difficulty as well as the variation in the games, one cannot help but wonder whether the same might be true for other areas of society. Balancing complex systems in society is a challenge that seems (and probably is) much more difficult than any of the games we have looked at, but perhaps algorithms could help us make better decisions, or even be trusted to make the decisions for us.




If you look at the world's population, we have “never” been in worse shape. We live in our greatest time yet and see innovations every day that help us live better and more luxurious lives. At the same time, we do not compensate for the lack of physical movement our new lifestyle brings with physical activity. Looking at running, for instance, there are studies of marathon finishing times and how they have changed over time. One such study looked at finishing times for the 5K, 10K, half marathon and marathon, and across all distances the average finishing time has increased. It is not only the average time of all attendees that has increased, but the times among the fastest runners as well. So you cannot argue that a higher number of recreational competitors is raising the total average.

Humans are by nature lazy beings; that is why we see such a high rate of innovation, especially innovations that ease our way of living. As Bill Gates once said, “I always choose a lazy person to do a hard job, because a lazy person will find an easy way to do it.” Whether you want to believe it or not, there is always some truth to a story, however absurd it might seem.

It is a fact that there is a connection between physical exercise and health in general. At the beginning of humanity, there was no need for training, as we got all the physical exercise we needed from just trying to survive. In today's fast-paced life, there is often not much margin for error. Tasks that earlier took weeks should now be carried out in just a couple of days.

One way to handle stress is to give yourself time to reflect. It will not only decrease your stress level but also give the mind time to learn and process new impressions. This will hopefully make you a better and more balanced person and colleague, as you now can be more active with friends and family.

One great way to give yourself time to reflect is training, especially low-intensity cardio sessions. Believe it or not, if you go for a run in the forest, you will not only get in better physical shape but also improve mentally. As the doctor and author Anders Hansen explains in his book “Hjärnstark: Hur motion och träning stärker din hjärna”, our brain grows and becomes more potent during physical exercise, and cardio sessions in particular. Several research papers investigate how the human brain reacts to physical stimuli. For instance, a great way to decrease stress and increase resilience to stress is regular low-intensity cardio training. In the book, he also elaborates on the importance of cardio sessions for memory improvement and how they prevent the brain from deteriorating with age.

For control systems engineers, it is intuitive to understand complex systems in different domains. While researchers have spent decades searching for the optimal control algorithm, the closest we have today is Model Predictive Control, an approach that requires huge computational power for satisfactory performance and demands near-perfect models of the environment. One can see similarities to the human mind. As soon as we move our limbs, we activate our internal control algorithms. For example, when we grab a cup of coffee, we need to control our arm and hand in a synchronised fashion. A similar problem occurs during physical exercise such as running, biking or swimming, where we control several body parts simultaneously.

Athletes put much effort into improving their speed and technique to set new records. Just two weeks ago, during the INEOS 1:59 Challenge in Vienna, Eliud Kipchoge became the first person ever to run a marathon in under two hours. Later the same day, on the other side of the planet, Jan Frodeno set a new record at the Ironman World Championship in Kona, Hawaii: he completed the 3.8 km swim, 180 km bike and 42.2 km run in just 7 hours and 51 minutes! These impressive results show that the human body is still not at its limit, and what can be achieved with optimisation of mind and body.

One does not need to run marathons and Ironmans, especially not at such speeds, but it is obvious to us that we need to activate our bodies, not only to be good athletes but also to improve ourselves as engineers. That is one of the reasons why Combine prioritizes company activities such as running, obstacle course racing, and skiing.

Read more

The Problem

In this blog post we are going to create a model which counts the number of fingers (1-5) a hand is holding up, based on a picture of that hand. First we need to collect data; this data then needs to be processed into a format suitable for deep learning. Once the data is ready, we will train a deep convolutional neural network and build a basic interface which shows the results in real time.

Collecting Data

For our problem we need pictures of a hand holding up different numbers of fingers, together with corresponding labels: the number of fingers the hand is holding up. For this we need a camera, more specifically a webcam we can connect to our computer. We don’t want to overcomplicate the problem right away, so let’s keep the data very consistent and take all pictures against a plain white background, for example a white desk or a white wall. Place the camera at a distance where the hand fills the majority of the image; the ideal distance depends on your camera, but 20-30 cm from the plain white surface should be about right.

Our camera is now in place and we are ready to start taking pictures... but we soon run into two problems. First, we don’t want to take potentially hundreds of pictures manually, one by one; second, we need to label the data with the correct label (i.e. the number of fingers shown). We could of course take each picture and label it by hand, but that would be very time-consuming, so let us automate the process instead. In this post we will use the OpenCV Python library (cv2), which lets us capture images from the camera from a Python script, solving the first problem of having to take pictures manually.

We write a script which uses cv2 to capture images from the camera, and we solve the labelling problem by telling the user how many fingers to hold up. Using cv2 we show the current image on screen, together with some text telling the user how many fingers they should be holding up. In a loop we then take multiple pictures, automatically labelling each one with the number of fingers we told the user to show (this assumes that the user follows the instructions correctly). Once we have taken a good number of pictures, we let the user know it’s time to hold up a different number of fingers. The pseudo-code for the process is shown below.

Figure 1: Pseudo code for collecting data

For fast reading and writing to disk, and to simplify things later on, we recommend saving the data as NumPy arrays rather than as image files.
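As a rough sketch of that collection loop (the `grab_frame` callback is my stand-in for a cv2 camera read, and all names here are my own, not from the original figure):

```python
import numpy as np

def collect_dataset(grab_frame, n_classes=5, shots_per_class=100):
    """Collect labelled images: for each finger count 1..n_classes,
    prompt the user, then grab shots_per_class frames with that label."""
    images, labels = [], []
    for fingers in range(1, n_classes + 1):
        # In the real script: cv2.putText(...) overlays
        # "Hold up N fingers" on the preview window before capturing.
        for _ in range(shots_per_class):
            images.append(grab_frame())
            labels.append(fingers)
    return np.array(images), np.array(labels)
```

With OpenCV this might be wired up as `cap = cv2.VideoCapture(0)` and `grab_frame = lambda: cap.read()[1]`, with `np.save(...)` storing the resulting arrays.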

Data Pre-processing

We now have our images with corresponding labels, and will process the data so it can be used for training a neural network. The first step is to one-hot-encode the labels; if you have encountered classification problems before, this should be familiar. Next we convert the images to grayscale: we are not interested in any colour-related features, so working with grayscale images reduces the complexity. We then rescale the pixel values, for example to lie between 0 and 1. Lastly, we resize the images (to 64×64 pixels) to reduce the complexity further.
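A minimal NumPy sketch of those steps (the 64×64 resize is left to `cv2.resize` to keep this NumPy-only, and `preprocess` is a name I have made up):

```python
import numpy as np

def preprocess(images, labels, n_classes=5):
    """Grayscale, rescale to [0, 1], and one-hot-encode the labels."""
    gray = images.mean(axis=-1)                 # average RGB channels -> grayscale
    gray = gray.astype(np.float32) / 255.0      # pixel values into [0, 1]
    # For the 64x64 step, cv2.resize(img, (64, 64)) per image is the usual choice.
    one_hot = np.eye(n_classes)[labels - 1]     # labels 1..5 -> one-hot rows
    return gray, one_hot
```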

We can either write a separate script for the pre-processing, or add it to our data collection script, reducing disk usage and removing the need to run two different scripts. The pseudo-code for data collection plus pre-processing is shown below.

Figure 2: Pseudo code for collecting and pre-processing data

Here we also save all the images and labels together in one large NumPy array instead of saving each one separately.

Training a model

With our data ready we can now define and train a model. Since we are working with images, we will use a convolutional neural network, and our output should be a class, i.e. how many fingers are being held up. An excellent framework for defining and training neural networks is Keras. With it we can easily create our network exactly as we want it, and Keras takes care of all the difficult mathematical operations in the background. Below is the pseudo-code for defining our network and training it on our data.

Figure 3: Pseudo code for defining and training the model

The exact architecture of the network, how much dropout is applied, and the choice of activation functions can all be altered to get the best performance on your dataset.
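One possible sketch in Keras (layer sizes and the dropout rate are my own guesses for illustration, not the exact network from the figure):

```python
from tensorflow.keras import layers, models

def build_model(n_classes=5):
    """A small CNN mapping 64x64 grayscale images to a finger-count class."""
    model = models.Sequential([
        layers.Input(shape=(64, 64, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dropout(0.5),                       # regularisation, tune freely
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# training might then look like:
# model.fit(images[..., None], one_hot_labels, epochs=10, validation_split=0.2)
```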

Showing the results

With our trained model, the last step is to make a simple interface to try out our model. For this we can revisit the cv2 library we used for data collection: simply take a picture, run it through our trained model, and show the picture plus the predicted class on screen. The pseudo-code, and an example of how it can look, is shown below.

Figure 4: Showing the results
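The per-frame step might be sketched like this (`model_predict` stands in for the trained model’s `predict`, and the frame is assumed to already be 64×64; both names are my own):

```python
import numpy as np

def predict_fingers(frame, model_predict):
    """Preprocess one camera frame and return the predicted finger count."""
    gray = frame.mean(axis=-1).astype(np.float32) / 255.0  # same steps as training
    x = gray[np.newaxis, ..., np.newaxis]                  # add batch + channel axes
    probs = model_predict(x)                               # e.g. model.predict in Keras
    return int(np.argmax(probs)) + 1                       # class index 0..4 -> 1..5
```

In the cv2 loop, the returned count can be drawn onto the frame with `cv2.putText` before `cv2.imshow` displays it.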


Read more


The new office in Stockholm is in progress. Our first two engineers started Monday this week, and the office at Dalagatan is beginning to take form.

Welcome, Spyros and Michele, to the team.

Our focus now is to get exciting projects and assignments to our office. We are hopeful to have a specific project up and running soon.

Estimation of RUL

The interest in our start-up AiTree Technology is still high, and we believe we will have customers signed up in the near future. At this point, there seems to be significant interest not only in predicting RUL but also in helping companies build energy storage solutions. Stay tuned for more information.

Sympathy for Data

We will soon release a new updated version of the tool Sympathy for Data, version 1.6.2, that will have improvements in the platform, user interface, nodes as well as new functionality. Stay tuned for more information on version 1.6.2.

Market analysis

The year has so far been volatile, with downsizing in the automotive industry for consultants in general.

We have long claimed that being consistent in focusing on quality and specialist services would be the right strategy in the long run, rather than focusing on EBIT for the year. It turns out that our strategy over the years is paying off. We have not seen any change in demand for our services, even if we have not been able to grow as much as planned. Instead, we have used this period to increase the share of projects and the turnover from new customers, with the target of being less dependent on a single industry in the future.

There is also a lot of technology-driven disruption going on (connected devices, automation of services, sustainability, and environmental adaptation, etc.). Our services should fit in nicely in that kind of future and transformation.

Moreover, I must say, it is rewarding that our strategy, to stick with our expertise, is as appealing to our customers as it is to me.

To summarize

The market, in general, is not as strong as before, but our Data Science services specifically continue to be in high demand. Therefore, we are still positive, yet careful, looking at the coming years.

We will continue to focus on cutting edge technology, being proud of what we do, continue to be honest, and at the same time, have fun. How hard can it be?!

Read more

There is no doubt that model-based design is one of the methods that has brought control, communications, signal processing and dynamic systems to a great level. Designing, for example, model predictive or even nonlinear control systems is more feasible and less error-prone using this approach.

Model-based design methodology, especially in the early stages, is used in many fields such as automotive, aircraft, robotics and others. Other industries, for example the automation industry, tend to neglect this phase and instead hard-code the software in the PLC and connect it to a simulation platform to evaluate system performance. In this article, we briefly discuss the improvement and the value of starting the design process with the model-based design approach, and how that can impact the automation industry.

In the model-based design scheme, knowing the mathematical representation, it is easy for a developer to design a model of the plant. Based on that, one can synthesize a suitable controller for that plant, in the absence of the actual hardware, using graphical blocks that represent simple arithmetic, logic and other basic operations, or more complicated ones such as PID and model predictive control blocks. This saves a developer a huge amount of time compared to coding the whole system, and as a result it is much easier to debug and to improve the quality of the control algorithms. What is more interesting, even without knowing the mathematical representation of the plant, you can model it by depicting its electrical or mechanical circuits and connecting them to scope or display blocks to observe their outputs. Verification of the design can be handled through Model-In-the-Loop (MIL) and Hardware-In-the-Loop (HIL) simulations. In MIL you can test and validate the simulated controller and plant in the early phases, without physical components. Once the model has been tested in MIL, you can generate HDL code, C code, IEC 61131-3 Structured Text (using a PLC coder) and reports. In the HIL method, HIL simulators act as the real plant and communicate with the controllers through sensors and actuators, making testing more realistic; after that, you are ready to test the prototype.
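As a toy illustration of the MIL idea (controller and plant both simulated, no hardware involved), here is a discrete PI controller driving an assumed first-order plant; the gains, plant model and time step are all invented for the sketch:

```python
def simulate_mil(setpoint=1.0, steps=200, dt=0.01):
    """Model-in-the-loop toy: PI controller + first-order plant x' = -x + u."""
    kp, ki = 8.0, 20.0          # controller gains, chosen by hand
    x, integral = 0.0, 0.0      # plant state and error integrator
    for _ in range(steps):
        error = setpoint - x
        integral += error * dt
        u = kp * error + ki * integral       # PI control law
        x += (-x + u) * dt                   # Euler step of the plant model
    return x

print(simulate_mil())  # settles close to the setpoint
```

The point of the exercise is that the whole loop can be tuned and debugged long before any PLC or physical plant exists.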

There are many advantages to using the model-based design approach. For example, most of the verification and validation can be done before the hardware exists, adding new features takes less time, and the development schedule is shortened. Moreover, some model-based design platforms provide a code generation feature that optimizes the code, giving a smaller memory footprint and higher execution speed. All in all, one can see the benefits of considering a model-based design approach in the development process: it increases the quality of system testing and decreases errors that could be expensive in the real application.

Read more

Introducing Transformers

The idea of training language models on large datasets and then using these pre-trained models to enhance performance on smaller, similar datasets has been a crucial breakthrough for progress in many NLP challenges. However, pre-training for a specific task and embedding long-term sequential dependencies have been huge constraints to training more generalised language models. Transformer models are unsupervised models capable of training on unlabelled, unstructured text to perform a large array of downstream NLP tasks, including question-and-answer for dialogue systems, named entity recognition (NER) and sequence-level tasks such as text generation. The typical Transformer architecture is illustrated below in Figure 1:

Figure 1: Transformer Blocks [4]

As shown in Figure 1, the Transformer architecture consists of a block of encoders (left) and a block of decoders (right). Instead of using a hidden state between layers (as in recurrent neural network architectures), the encodings themselves are passed between each encoder, and the final encoder output is then passed to the first decoder in the decoder block. Each encoder/decoder in the Transformer contains a self-attention layer, which aims to determine which part of the sequence is most important when processing a particular word, e.g. in the sentence “James enjoys the beach because he likes to swim”, the self-attention layer should learn to link the word “he” to “James” as the most important for its embedding. Additionally, each decoder contains an “Encoder-Decoder Attention” layer, which refers to the relative importance of each encoder when the decoder predicts the output.
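A minimal NumPy sketch of the scaled dot-product self-attention at the heart of each such layer (toy projection matrices, no learned weights; the function name is my own):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of embeddings X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise relevance of positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V, weights                      # each output mixes the values

# Each row of `weights` sums to 1: it says how much every other word
# (e.g. "James") contributes when processing this word (e.g. "he").
```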


The Bidirectional Encoder Representations from Transformers, or BERT for short, is one of the most influential Transformer-based models. It has earned its reputation by beating multiple benchmark performances in various NLP tasks with its bi-directional attention mechanism. This means that BERT considers not only the previous context but also looks ahead when learning embeddings. The BERT model focuses on building a language model and thus on the encoder block of the Transformer. Figure 2 below shows the composition of BERT embeddings as consisting of the word token embeddings, the segment embedding (for longer sequences) and a positional embedding that keeps track of the input order:

Figure 2: BERT embeddings illustrated [1]

Fine-tuning BERT

To build our search engine, we first acknowledge that 72 data points is insufficient to fine-tune the BERT model for our specific task. Instead, we make use of a benchmark dataset for sentence similarity, STS-B, consisting of 8,000 pairs of semantically similar sentences from news articles, captions and forums [3]. Since BERT is not specifically designed for sentence embeddings, we use a modified version of BERT for sentence encoding (proposed by Reimers and Gurevych [2]), which adds a pooling layer to the standard architecture and is trained with a regression objective based on a Siamese network, i.e. each sentence passes through its own network and their outputs are combined and then evaluated (see Figure 3). The regression objective function here is the cosine similarity measure between these sentence embeddings, which is used as a loss function for the fine-tuning task. From the bottom to the top, we see that each sentence is first encoded using the standard BERT architecture, and thereafter our pooling layer is applied to output another vector, which is then used to compute the cosine similarity measure. As described in [2], we compute this similarity measure between each query and the 72 docstrings that we obtain from the Sympathy modules, and return the top 5 nodes according to this measure.

Figure 3: Siamese BERT network for sentence similarity illustrated [2]
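The final ranking step can be sketched as plain cosine similarity over precomputed embeddings (the encoder itself is abstracted away; the function name and shapes are my own assumptions):

```python
import numpy as np

def top_k_nodes(query_vec, doc_vecs, k=5):
    """Return indices of the k docstring embeddings most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    D = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = D @ q                          # cosine similarity to each docstring
    return np.argsort(sims)[::-1][:k]     # highest similarity first
```

In the search engine, `doc_vecs` would hold the 72 docstring embeddings and `query_vec` the embedding of the user’s query.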

The Result

We have been able to build a working prototype of the semantic search engine for the 72 nodes currently available in Sympathy for Data, which we hope to integrate as a fully-fledged plugin in the future. Our search engine performs impressively given that it has only been trained on around 8,000 pairs of semantically similar sentences (i.e. 16,000 sentences). Below is an illustrative example of how this works in practice.


[1] Devlin, J., Chang, M., Lee, K. and Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. [online] [Accessed 24 Sep. 2019].
[2] Reimers, N. and Gurevych, I. (2019). Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. [online] [Accessed 24 Sep. 2019].
[3] Cer, D., Diab, M., Agirre, E., Lopez-Gazpio, I. and Specia, L. (2017). SemEval-2017 Task 1: Semantic Textual Similarity Multilingual and Cross-lingual Focused Evaluation. Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017).
[4] Understanding Transformers in NLP: State-of-the-Art Models. (2019). [online] Analytics Vidhya. [Accessed 24 Sep. 2019].

Read more

Electricity from heat

Well, no big news there. But how about using existing waste heat instead of burning oil or splitting atoms? Instead of superheating steam, just settling for 70-120 °C source temperatures?
The technology is surprisingly simple, but clever. Here is some text and an image from the Climeon homepage:

The heat, from geothermal sources, industrial waste heat or power production, is fed to the Climeon unit. Inside the Climeon unit a heat exchanger transfers the heat to an internal fluid, which vaporizes due to its lower boiling point. The vapors are then expanded over a turbine to run a generator and produce electricity.

Fundamentally the same electricity generation scheme as a nuclear power plant, but no nuclear stuff.
The energy efficiency of, for instance, a nuclear plant design might be considered poor considering the amount of heat that is wasted (just cooled off for no gain). Plants that combine electricity generation and district heating are more efficient from that point of view, but perhaps transporting heat to remote districts using nuclear coolant is not a great idea.
In this case, the concept is to use heat that is already there and unused, so efficiency can instead be measured solely as the amount of electricity generated per unit of heat energy. If the source is geothermal it’s basically electricity for free, once you make your initial investment and maintenance allocations.
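To put rough numbers on those source temperatures: the Carnot limit η = 1 − T_cold/T_hot bounds how much of the heat can ever become electricity (the temperatures below are just illustrative, not Climeon’s figures):

```python
def carnot_efficiency(t_hot_c, t_cold_c):
    """Upper bound on heat-to-electricity conversion, temperatures in Celsius."""
    t_hot, t_cold = t_hot_c + 273.15, t_cold_c + 273.15  # convert to kelvin
    return 1.0 - t_cold / t_hot

# A 120 C source against 20 C cooling can convert at most about a quarter
# of the heat; real low-temperature cycles achieve a fraction of this bound.
print(round(carnot_efficiency(120, 20), 3))
```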

I think the concept is great and hope they do well.

Batteries, when they are no longer suitable for their initial purpose?

There seem to be four basic answers to this question

  1. We made our money while they worked; now we need to get rid of them at as low a cost as possible
  2. We are hoping to recycle them efficiently and make use of that
  3. We are hoping someone else wants them and will make use of them
  4. Oh boy, where did all these batteries come from?

The first answer is understandable, but not convincing from an environmental or “big picture” point of view. Established recycling technology for Lithium-Ion batteries has a couple of glaring drawbacks, mainly that it doesn’t work that well and that it is based on melting (which costs a lot of energy).

The second answer is hopeful and often based on the idea that recycling will improve. Research is underway; the most promising is based on technologies that have existed in the mining industry for over 100 years. The idea in mining is to crush the material and mix it with a fluid containing molecules that attach to the element one wishes to extract. The newly formed molecules float up to the surface of the fluid and can be skimmed off (or assume whatever property might make it easy to separate them from the fluid). A further stage then filters out the desired element. The research aims to do this similarly in steps, separating all the desired elements along the way.

The third answer is also hopeful. As we have discussed in previous posts, the idea of a functioning business with second and possibly third life applications for used batteries is quite dependent on buyers and sellers knowing the condition of the batteries. We are hoping to do something of our own in this area, as you know.

Unfortunately, the fourth answer does exist. I am not going to point any fingers and just leave it there.

Unless someone comes up with a better battery technology soon, we are looking at an ever-increasing need for answers 2 and 3 to win out.
Authorities are also unlikely to accept answers 1 or 4 in the long run, IMO (global perspective, visualize massive toxic junkyards in some third world country). The pressure is more likely to increase than decrease on manufacturers, and it will be interesting to see where in the value chain responsibilities land. Passing the buck will probably not be that easy without some serious documentation to show where the batteries went and who is responsible for them.

Pet project

To wind this up I am going to talk a bit about a pet project. We have been asked to demonstrate something on the theme “technology is fun” for an event (Netgroup anniversary) taking place at the Göteborg opera house.
I am going to attempt to build a plasma arc speaker. They have always caught my eye (you can look them up or watch some videos on Youtube), so even if it has already been done, I think it is a perfect fit considering the venue.

First, I would like to point out that this is a high-voltage design, so building it at home with a simple on/off switch is not a great idea if you have small (or overly curious) children running around. It can cause serious heart problems or kill you, and it produces ozone which can be lethal at concentrations of more than 50ppm. Great fun, right?

Anyway, the idea I am using is something like this

For the power source, I will use a standard 700W PC power supply, using the 12V output. This will go to the flyback transformer and switching MOSFETs.

The audio source will probably be an obsolete MP3 player. The signal will go to a 555, which will then control the switching MOSFETs (I’ll use 3 parallel STP40NF10L).

The flyback transformer has the property of being able to produce high voltages, in the X kV range. Also, instead of being fed by a DC source it is typically fed by a switched source in the XY kHz range.
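That switching frequency will come from the 555; in astable mode it is commonly approximated as f = 1.44 / ((R1 + 2·R2)·C). The component values below are placeholders for illustration, not my actual build:

```python
def astable_555_freq(r1_ohm, r2_ohm, c_farad):
    """Approximate output frequency of a 555 timer in astable mode."""
    return 1.44 / ((r1_ohm + 2.0 * r2_ohm) * c_farad)

# e.g. 1 kOhm + 2 x 10 kOhm with 2.2 nF lands in the tens-of-kHz range,
# which is the kind of drive a flyback transformer expects.
print(round(astable_555_freq(1e3, 10e3, 2.2e-9)))
```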

My idea is to produce the arc between two stainless steel screws of some respectable dimension.

So, kV and kHz? This means we get a modulated plasma arc that can play the higher frequencies of music well. It should actually be able to do it very well, since there are no moving parts, unlike speaker membranes and similar. It won’t be very loud since I have no plans to ionize western Sweden or kill the guests at the event, but it will be fun to see if I can make it work.

If anyone feels a huge urge to fiddle around with it together with me, I am looking for someone who can prevent me from electrocuting myself and maybe has some ideas for an ozone trap.

Read more

    Contact us