
If we instead turn our attention to the future and look at our medium-range goals, they are:

  • Growing the business in preparation for our next initiative
  • Consolidating our business in Stockholm and to some extent in Linköping
  • Complementing our MBD services with more embedded systems work
  • Continuing the development of Sympathy for Data

However, the market might not always be moving in a direction that aligns perfectly with our goals.

For large customer accounts (in our case, this often means time & materials assignments for automotive customers), we should assume that margins might shrink as customers worry about their profitability and order books. At the same time, these same customers will probably continue to ask for help with many interesting assignments. They need to maintain momentum during this period of technology and market-offer disruption (As a Service, Electrification, Connection, Automation). A look at the statistics confirms this view: the number of inquiries from these customers at the 19/20 break is about the same as at the 18/19 break, with some exceptions.

For medium-size customers and many other customers without a large R&D department, we think that services in Data Science (perhaps particularly Data Engineering) as well as SW development and cross-functional projects will continue to be in high demand.

Our MBD services are so well established that we run a risk of taking them for granted. An obvious initiative is to complement MBD with more embedded, but we might also consider co-simulation or other future technologies for the development of our MBD offer. There is an ROI limit somewhere that makes it much more likely that larger R&D customers need Modeling and Simulation services than smaller ones. This limit is also influenced by the complexity of the customers’ systems/products.

Our engineers are excellent!

Future Combine engineers need to be recruited in the same spirit in order to reach our goals. Still, we will need to focus more on finding engineers with some industrial or higher academic experience, even more so for our offices in Linköping and Stockholm. This does not mean that we should stop recruiting inexperienced but well-educated engineers, but perhaps limit this to Göteborg and, to some extent, Lund, both of which have established customer relations that allow us to give our junior engineers exciting tasks.

Enter the Next Level!


Our biggest asset is our employees

I could, as usual, highlight our initiatives this year, or discuss the market and the long-term vision or strategy. However, this time I would like to focus on what is most important in building a competitive company: the people working there. At Combine, I have been surrounded by talented, inspiring colleagues, all eager to learn new things, but also to share their thoughts on technical content and other exciting topics.

We believe in empowering individuals and teams, and that being able to affect one's own situation and the outcome makes a significant difference. I think Combine's biggest strength is its ability to see each employee as an asset.

So, since this is my last blog post as the CEO of Combine, I would like to thank all current and previous colleagues. It has been an honor.

Christmas and donation to charity

It is soon Christmas, and during the holiday we often think of people who don't have the same privileges as we do.

For the last couple of years, we have donated a significant amount of money to Stadsmissionen as a Christmas gift.

We have chosen to continue this year as well, but this time to these four causes instead:

  • Plan International – a brighter future starts with girls' education
  • WWF – preserve biodiversity, reduce pollution and unsustainable consumption, contribute to sustainable utilization of renewable natural resources
  • ECPAT – contribution to the removal of more child sexual abuse images online
  • Swedish Childhood Cancer Fund – support to continue the long-term research, support affected children and their families, and come closer to our vision of eradicating childhood cancer

Rather than buying candy or a Christmas gift such as a towel, we hope that you share our belief that this is a better alternative.

We also hope that by doing this, we inspire other companies to do the same!

Make sure you take care of your family and yourself this Christmas.

Or as Ida Sand sings in my current favorite Christmas carol: ”Now the time is here, make sure that you are near, everyone that you hold dear.”

Merry Christmas and a Happy New Year.


A process using simulation and a neural network has been developed to investigate the possible benefits of using a completely digital approach to the friction estimation problem. Using simulation of vehicle dynamics, the process has removed errors that are very difficult to eliminate in the real world, but has also created problems that didn’t exist before, for example a mismatch between the real car and the simulated car.

So why do we need to improve friction estimation? Several problems exist with the development and testing process, read more about them in the earlier post!

In order to solve – or at least circumvent – those problems, a digital vehicle model was built in a simulation environment developed in Unity 3D. The digital vehicle is based on vehicle data from NEVS and can be driven around and tested for any friction coefficient in any situation, from ordinary driving to extreme manoeuvres, and the friction is always known.

When it comes to making an estimate, the friction estimation methods that are popular today are extremely complex and based on complicated tyre models with an immense number of dynamic variables.

The purpose of the methods is to find a connection between the input variables and the friction coefficient using a complex physically derived connection. To circumvent the complexity, the “Universal approximation theorem” was instead used to find a purely mathematical connection between the inputs and the friction coefficient, by asking a different question.

A multilayer perceptron with two hidden layers was used, since the theorem proves that such a network can approximate functions, and it gives relatively quick insight into the performance of the process. The neural network model was used to classify the sensor readings into 1 of 4 friction categories, from ice to dry asphalt.
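As an illustration, a minimal Keras sketch of such a classifier might look as follows. The input width, layer sizes and training settings are assumptions made for the sketch, not values from the thesis:

```python
import numpy as np
from tensorflow import keras

# Assumed dimensions: 10 sensor signals per sample, 4 friction classes.
n_features, n_classes = 10, 4

# Multilayer perceptron with two hidden layers, as described above.
model = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# X: simulated sensor readings, y: known friction class (0 = ice ... 3 = dry
# asphalt). Random placeholders stand in for the simulator output here.
X = np.random.rand(1000, n_features).astype("float32")
y = np.random.randint(0, n_classes, size=1000)
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2)
```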

In an optimal scenario, where the neural network was trained on driving scenarios similar to the ones it was evaluated on and with little model mismatch, the network reached an estimation accuracy of 94.3%. However, when the network was evaluated on driving scenarios dissimilar to the training scenarios, and with a large model mismatch, it estimated only 37.4% correctly.

The conclusion from the results is that the digitalisation process can be highly beneficial if used in a way where the difference between the physical and digital vehicle is minimised and the model is trained on scenarios it will be exposed to. If the process is used adversely, it only introduces a set of additional errors.

This blog post is based on a thesis that Jonas Karlsson has done at Combine in collaboration with NEVS in the fall of 2019. The full report will be published shortly and will be publicly available to interested parties.


Who are you?

My name is Amr Salem, and I’m 26 years old. I was born and raised in Cairo, Egypt. I have for as long as I can remember been an adventurous person. I dream of piloting an aircraft, which I think is one of the reasons I became an engineer: to explore new technical areas, push the limits, create something new, and bring innovations to life that will help people.

I also love to explore and connect with other countries and cultures, so travel is a big part of my life. When I do travel, I try to do it like one of the locals, as the cultural experience is so much bigger that way, and I get to meet new exciting people. In my spare time I like to learn new stuff, play soccer, or hang out with my friends. Another interest of mine that I have been able to exercise here in Gothenburg is horseback riding. I’m blessed with a landlady who also owns a stable, in which I’m lucky to be able to help out sometimes!

How come you ended up in Gothenburg?

I enrolled in an Italian school in Cairo, as it was one of the prestigious engineering schools in Egypt. I did it for various reasons; one was, of course, the engineering part, another was curiosity and a longing to learn a new language. The education included both hands-on work with CNCs and other types of machinery, and preparation for continuing at a technical university.

When it was time for me to choose a university, I wanted to explore the world and experience adventure outside Egypt. Since I understood Italian from high school, Italy was my first choice of country. I had a hard time deciding between a university in the beautiful city of Milan and the well-renowned University of Trento. In the end, my better side won that battle, and I enrolled at the University of Trento.

I was excited when I traveled to Trento before my first day, since it was only my second time outside Egypt. The first time had been a short two-week language course in Italy a couple of years before, and it was my first time traveling alone! In the end, I lived in Italy for a total of five years, and I believe I might end up in Italy again in the future!

Studying in Italy awakened a desire for more semesters abroad, and I decided I wanted to study abroad during my master's as well. During my research into possibilities to study abroad, a university in Canada caught my eye: McMaster University in Hamilton. They had a thorough application process that involved tests, interviews, and a bit of luck. The competition for the two positions was fierce, so I applied to the Erasmus program as well. But when I read the reply, I couldn't believe my eyes at first. I was one of the two selected candidates!

An interesting thesis project was also vital to me, so I started to contact professors all around the globe.

My time in Canada was fantastic; the country is just so amazing, and the people there were so kind. I immersed myself in the books, and the semesters passed by at a fast pace. I studied so hard that my friend caught me by surprise when he called in the middle of the night, asking if I still wanted an exchange year through the Erasmus program. So, after I had completed my exchange year in Canada, I went on to Sweden and Chalmers for a second exchange year. Here I stumbled upon one of the professors I had interviewed with the year before about a master's thesis. As I approached him, I found to my surprise that he also remembered me, and my master's thesis was settled.

The thesis work was performed at Volvo Group through Chalmers. The scope, limitations, and conclusions of the thesis work became a battle between the three stakeholders, who did not always agree. It all worked out in the end, even if it demanded several flights back to Trento, and now my solution is patent pending.

Currently

During my short professional career, I have worked at two big companies, and now I work at a smaller one (65 employees). The difference is striking. Combine is by far the best employer I have had. The environment, the assignments, the people, and the flexibility are on an entirely different level. Here I feel seen. I'm entrusted to work in a small team on a high-paced project with demanding deliveries, and they trust me to deliver according to my level, which I'm raising daily.

The project I'm working on right now is to deliver a measurement system for the train industry. It was scary in the beginning, as I was moving away from my domain, mechatronics, and entering a field where I combine my skills in embedded systems with software engineering. But we have a technical lead in the project who helped set up the software architecture and answered all questions with ease, so the project is moving at full steam right now. I have never learned as much as I currently do. In everything I do, I can see how it is used in the final product, and that gives me a purpose to work even harder. It feels like I have embraced the challenge and adapted myself to it.

Next step

When the project has made a quality-assured delivery to its customer, I would like to take on a new assignment. The ideal project would relate to control theory for aircraft or autonomous drive. Path planning and trajectory control are topics I find challenging and exciting. It is a rather new field with no clear path forward and no established industry standard, which makes it even more appealing. I have tried to plan my future meticulously, but I have realized that it is the uncertainties and unexpected shifts in my plan that I have enjoyed the most, so nowadays I try to include some margin in my plans for this sort of event.


In this latest release of Sympathy for Data, we have packed a large number of bug fixes and improvements to existing nodes, as well as several new features. Here we want to present some of them briefly.

To improve the development experience and user feedback possibilities, we spent some time implementing improved message handling and a new issue-reporting system. Sympathy's worker processes now send their messages, e.g., exceptions and warnings, instantly to the platform. You will no longer have to wait until you close your configuration windows or until an execution finishes before you see messages, warnings, and errors.

Furthermore, sending issue reports to the developers of Sympathy for Data has become easier. This release adds a feature to send issue and bug reports to us directly from within Sympathy. You can either go to the help menu or right-click on a message in the “Messages” window and select “Report issue”. This feature will enable us to improve the stability and behavior of Sympathy even faster by collecting the relevant information directly. More details can be found in the documentation. Alongside the issue-reporting system, we added the possibility to send anonymous user statistics, with your explicit agreement. The data we collect includes only statistics such as which nodes are used, how often, and the execution time. Sympathy for Data does not share any personal information or data on the analysis being performed. With the help of this data, we plan to improve Sympathy for Data further and add functionality such as enhanced search and node recommendation systems in upcoming releases.

If you are one of the users who use the Figure node but always found the interface a bit too complex, we have added a wizard to the configuration panel. It allows you to select a plot type and configure its data inputs quickly. You will find it by clicking on the wizard's hat on the top right (see image below).

To help users working with the text and JSON data formats within Sympathy, we have added, besides some new nodes for data format conversion, a search function to our text editor/viewer, accessible as usual via Ctrl-F (Cmd-F on macOS).

For those who use our Windows installer, we are happy to tell you that the documentation for the platform and standard library now comes pre-built with the installer. Furthermore, we have made several other changes to how the documentation is built and which kinds of information are added automatically, for example, deprecation warnings. For a list of all changes, see the news.

Last but not least, we have updated several of the underlying Python packages to recent versions. As usual, this is only a short selection of new features and fixes. Navigate to the news section of the documentation to learn more about all changes in Sympathy for Data 1.6.2. If you want to hear more about Sympathy for Data and how it can help your organization handle, manage, analyze, and visualize data, and create reports, please do not hesitate to contact us at sympathy@combine.se.


A possible way of categorizing the games would be to look at the information available when a decision needs to be made.

Chess and Risk are games that provide full information at any given time, but only in chess is the outcome of decisions given (the result of any sequence of moves is deterministic and has a single result). In Risk, the momentary outcome may be influenced by rolling dice, so a chain of events still resolves deterministically once the dice are cast, but there are many possible results depending on how the dice fall.
A further complication is the fact that there are other players and we cannot decide for them (which is true for all the games we are looking at).

In bridge card play, disregarding the bidding stage and information from that, partial information is available. The declarer (or “attacker” if you like) has 100% information on his side's assets and 100% influence on what decisions his side makes. The defenders each have 50% information on their side's assets and 50% information on the opposing side's assets. Furthermore, each defender can only decide what card to play from their own hand; their partner makes his/her own decisions.
Interestingly, the information available grows, or rather the number of possibilities diminishes, with each card that is played. Possible card distributions are reduced rapidly, and the defenders supply both each other and the declarer with additional information. Decisions made by other players, as well as any agreed signaling conventions between the defenders, are clues (defenders often have agreements on how to improve their partner's available information, but this is also information to the declaring opponent).
Since all 52 cards are dealt, the combined contents of the hidden hands are known, but not how they are distributed between the hands. A human might say that the game is mostly technical with a significant portion of psychology.

In poker, there is not a lot of information available initially. Since Texas Hold'em is so popular, we can use that as an example. The “raw” information (cards) consists of your own two cards. Later, first three cards, then one more, and finally one more card are added to your known world of information. However, since there is betting at all these stages, most of the information available lies in what decisions your opponents make (or do not make). Since the balance between “raw” information and information gleaned from interpreting the opponents' actions leans so heavily towards interpretation, a human might say that the game is mainly psychological with a significant portion of odds calculation.

So how do algorithms compare to humans when it comes to making decisions in these games?

If we begin with chess, we find a vast amount of literature, research and practical implementations.
From a computing point of view, chess is typically a tree-search problem. Since the number of possible moves at each step easily exceeds 30 in the middle game, a brute-force approach quickly becomes impractical. For instance, a search 8 steps deep with a choice of 40 moves at each step requires the evaluation of 40^8, i.e. more than 6500 billion, positions.
In order to reduce the number of evaluations needed, pruning algorithms are used. The most popular one for this type of problem is called alpha-beta.
The reasoning behind alpha-beta is that the opponent has no reason to pick a move that would allow us to reach a better position than we could achieve had the opponent picked another move, and vice versa. Alpha and Beta represent the max and min values used to prune the tree using this line of reasoning.
Let us consider the tree of depth 8 (depth 0 is the current position):

  • We traverse the tree left-to-right and therefore begin by reaching the bottom-left position in the tree at depth 8 (we have made 4 moves and so has the opponent, who has made the last move, which we will call O4). When we reach this position, our evaluation algorithm is run and returns an assessment of the position, value X. We then continue to evaluate all the possible moves for the opponent (all O4) that stem from the same last move we made (call it U4). Our opponent's best move will be the one with the lowest X, Alpha.
  • Now we move on to our next move at depth 7 and begin to evaluate the opponent's moves at depth 8 in a similar fashion. The difference is that we now know that it would be silly of us to choose a move at depth 7 that would give the opponent the possibility of making a move with a lower X than Alpha. As soon as we encounter any such possible move for the opponent, we can prune that entire sub-tree. In effect, Alpha is our lower bound.
  • As you have probably figured out by now, the reasoning at depth 6 is reversed. The opponent would not willingly give us the opportunity to score above an upper bound, the Beta value. Our lower bound is the opponent's upper bound and vice versa.
  • And so on, until the entire tree from depth 8 and up is aggregated, resulting in a choice of best move and an evaluation of that move

Theoretically, the number of position evaluations can be reduced to roughly the square root of the brute-force number of positions, assuming that excellent moves are found early in the evaluation. Developments in this area focus on good evaluation algorithms and on getting good Alpha and Beta values early. For those interested, you can look at the Scout improvement.
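To make the pruning concrete, here is a minimal sketch of alpha-beta in Python. The toy tree encodes leaf evaluations directly; in a real chess engine, `children` would generate legal moves and `evaluate` would assess a position:

```python
import math

def children(pos):
    # Toy representation: an inner node is a list of child positions,
    # a leaf is a number (its pre-computed evaluation).
    return pos if isinstance(pos, list) else []

def evaluate(pos):
    return pos  # leaves carry their own score

def alphabeta(pos, depth, alpha, beta, maximizing):
    if depth == 0 or not children(pos):
        return evaluate(pos)
    if maximizing:
        best = -math.inf
        for child in children(pos):
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:   # the opponent will never allow this line:
                break           # prune the remaining siblings
        return best
    best = math.inf
    for child in children(pos):
        best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
        beta = min(beta, best)
        if beta <= alpha:       # we would never choose this line
            break
    return best

# Two of our moves, each answered by two opponent moves. The 9 is never
# evaluated: once the opponent can hold us to 2, that sub-tree is pruned.
tree = [[3, 5], [2, 9]]
print(alphabeta(tree, 2, -math.inf, math.inf, True))  # 3
```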

In contrast, expert human players focus on determining a limited selection of candidate moves. They evaluate a tree with far fewer possible moves at each depth.

Comparing computers to humans, computers have been the winners for quite a few years now. There are still certain types of positions where the horizon (calculation depth) is too far away for a computer to make the correct decision, typically positions with interlocked pawn chains and no material gains in sight, or endgames where a small mistake ends in a draw due to rules limiting consecutive moves with nothing in particular happening. But computers will not choose these positions, because programmers have developed opening books that avoid anti-computer positions, and some of the horizon problems are solved with endgame books containing specific solutions to problems that are not solvable within the horizon.

From a “computers trying to solve problems better than humans” perspective, chess is relatively easy. The game of Go is a similar example.

The Risk problem, on the other hand, appears both simple and difficult. Tree-search solutions (as for chess) typically require c^d evaluations brute force, where c is the number of choices in each position and d is the depth, so it seems that we might accept a large c in exchange for a limited d. We would also have to consider the possible outcomes of die rolls. With that line of reasoning, we might examine all possible outcomes of die rolls (weighted according to probability) and perhaps achieve a decent algorithm by running a tree search for each die-roll outcome combination. That seems simple, even if c becomes very large.
There has been work done using genetic algorithms and even TD(lambda) (machine learning) for Risk. This indicates that maybe it isn't that simple after all.
However, I like to think that commercially available Risk games probably just use a simple greedy algorithm, and perhaps even some manipulation of the die rolls for different settings. Greedy algorithms just grab as much as is immediately available, with no eye to the future.
Explanation:

  • Consider a tree of depth 2 with two branches at each level. Each node has a numerical value
  • Problem formulation: choose a path that gives the greatest sum of values in the nodes you have passed
  • A greedy algorithm will always choose the node with the highest value at the nearest level. This might lead to a selection of nodes with low values at the next level, while a different first choice might have led to a jackpot node (see the sketch below). In the long run, all things being equal, a greedy algorithm will still outperform random behavior
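A toy numerical version of that tree, as a sketch (the values are made up purely to show the pitfall):

```python
# Depth-2 tree: each first move carries a value and two leaf values.
tree = {
    "L": (3, {"LL": 1, "LR": 2}),   # greedy pick: 3 now, poor children
    "R": (2, {"RL": 1, "RR": 9}),   # patient pick: 2 now, jackpot later
}

def greedy(tree):
    # Always take the highest value at the nearest level.
    first = max(tree, key=lambda k: tree[k][0])
    value, leaves = tree[first]
    return value + max(leaves.values())            # 3 + 2 = 5

def optimal(tree):
    # Exhaustive search over all root-to-leaf paths.
    return max(value + leaf
               for value, leaves in tree.values()
               for leaf in leaves.values())        # 2 + 9 = 11

print(greedy(tree), optimal(tree))                 # 5 11
```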

Evaluation of a position seems to be a challenge, but perhaps a simplistic approach (the number of reinforcements generated by the position in the next round, for instance) is workable.

We might suspect that humans use a sort of semi-greedy approach. On the one hand, grabbing is good. On the other hand, there are reinforcements to be gained by holding continents until the next turn, and allowing opponents to generate vast numbers of reinforcements will eventually lose you the game. Also, the map is not symmetrical and the continents are not identical. So humans tend to balance greed against many other factors.

In bridge card play, most programs use so-called “double dummy” evaluations. This means simulating possible card distributions in the hidden hands and then searching for the best card to play overall. An interesting point is that these evaluations have two not-so-obvious flaws (a schematic sketch follows the list below):

  • If the number of simulations is too low, situations where the location of a certain card is actually a 50/50 proposition will instead be skewed in some direction. This can lead to selecting card plays that actively guess where the card is, while alternative plays would score better because guessing is sometimes avoided
  • The double dummy approach is similar to the approach described for chess and Risk (and Go, etc.). The difference is that the players do not actually see all the cards. Each search just uses one possible distribution of the cards. Therefore, algorithms assuming that the opponents will make the best play to counter any play we make are basically flawed. Given a choice, opponents will often choose the play with the best odds, which is a weighted result over many possible distributions of the cards.
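A schematic sketch of the double dummy idea, with the deal generator and the full-information solver left as stubs (a real program would use a constraint-respecting dealer and a proper solver):

```python
import random
from collections import defaultdict

def sample_deal(rng):
    """Stub: split the 26 unseen cards randomly between the two hidden
    hands. A real generator respects inferences from bidding and play."""
    unseen = list(range(26))
    rng.shuffle(unseen)
    return unseen[:13], unseen[13:]

def solve_double_dummy(deal, play):
    """Stub: score a candidate play as if all 52 cards were visible.
    Here a random number stands in for the solver's trick count."""
    return random.random()

def choose_play(candidate_plays, n_deals=100):
    rng = random.Random(0)
    totals = defaultdict(float)
    for _ in range(n_deals):
        deal = sample_deal(rng)            # one possible hidden layout
        for play in candidate_plays:
            # The second flaw above lives here: each deal is scored as if
            # the opponents see everything, which real opponents do not.
            totals[play] += solve_double_dummy(deal, play)
    return max(candidate_plays, key=lambda p: totals[p])

print(choose_play(["low heart", "spade finesse"]))
```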

More advanced algorithms consider the fact that the opponents will be in a similar situation to the one we are in. Such approaches are called “single dummy” in bridge. They tend to score significantly better in defense and better in offense (remember, the defenders have fragmented information and distributed decision-making).

Human experts play exactly like this. Sometimes a double dummy approach is sufficient, for instance when a play is found that always leads to the goal. The single dummy considerations gain weight the more the outcome is influenced by the opponents' decisions than by the specific cards they hold.

Humans still beat machines sometimes at bridge, but it seems the machines will be clearly better quite soon.

Poker is a real challenge, both for humans and computers. Information is clearly limited and any straightforward approach will probably only be successful for so-called “heads-up” (when only two players are involved).

Algorithm development has involved Bayes' theorem, Nash equilibrium, Monte Carlo simulation, and neural networks, all of which are imperfect techniques. The current trend for poker AIs is to use reinforcement learning or similar techniques, with huge training samples. Training is usually done AI vs AI.

Since poker is so much about interpreting the actions of other players, future development will probably be based on game theory and attempting to dynamically model the current opponents (so that game strategies can be individually adapted for each current opponent, where weaknesses can be exploited and bad risk/reward ratios avoided).

Human experts tend to (over?)exploit the risk/reward ratios (“I have an ace in hand, so I will bet enough that an opponent with no pair and no ace will often fold immediately”). They also tend to introduce a random element to make it more difficult for the opponents to build a mental model of them, while spending small or reasonable amounts of money early in a session (and observing the other players) to build their own models of the opponents. The fact that humans leak information beyond betting or not betting, the size of the bet, and so on, such as twitching, hesitating or talking a lot, and observe such leaks in others, makes the live human game more different from computer play than any other game we have looked at. It is also possible to put on some play-acting and try to fool any observers.

Online poker, on the other hand, is basically the same regardless of whether the players are humans or AIs.

An interesting observation is that poker AIs do not always produce similar strategies to the ones human experts use. There is also absolutely no fear or personal loss involved in computer evaluations.

It is entertaining to think that very good human poker players either have the ability to turn off any fear of losing or that they simply lack the common sense to worry about losing lots of money and therefore can exploit any such weaknesses in other players. AI development has led to computer programs being able to perform well against humans. Recent developments have led to computer programs beating humans even in multi-player scenarios.

As a conclusion, we can say that regardless of whether the games we look at have perfect or imperfect information, algorithms have been developed that enable computers to beat humans at them. Considering the difficulty as well as the variation in these games, one cannot help but wonder if the same might be true for other areas of society. Balancing complex systems in society is a challenge that seems (and probably is) much more difficult than any of the games we have looked at, but perhaps algorithms could help us make better decisions, or even be trusted to make the decisions for us.


Productivity

It is a fact that there is a connection between physical exercise and health in general. At the beginning of humanity, there was no need for training, as we got all the physical exercise we needed from just trying to survive. In today's fast-paced life, there is often not much margin for error. Tasks that used to take weeks should now be carried out in just a couple of days.

One way to handle stress is to give yourself time to reflect. It will not only decrease your stress level but also give the mind time to learn and process new impressions. This will hopefully make you a better and more balanced person and colleague, as you now can be more active with friends and family.

One great way to give yourself time to reflect is training, especially low-intensity cardio sessions. Believe it or not, if you go for a run in the forest, you will not only get in better physical shape but also improve mentally. As the doctor and author Anders Hansen explains in his book ”Hjärnstark: Hur motion och träning stärker din hjärna”, our brain grows and becomes more potent during physical exercise, and cardio sessions in particular. Several research papers investigate how the human brain reacts to physical stimuli. For instance, a great way to decrease stress and increase resistance towards stress is regular low-intensity cardio training. In the book, he also elaborates on the importance of cardio sessions for memory improvement and how they prevent the brain from deteriorating with age.

For control systems engineers, it is intuitive to understand complex systems in different domains. While researchers have spent decades searching for the optimal control algorithm, the closest we have today is Model Predictive Control. The approach requires huge computational power for satisfactory performance and demands near-perfect models of the environment. One can see similarities to the human mind. As soon as we move our limbs, we activate our internal control algorithms. For example, when we grab a cup of coffee, we need to control our arm and hand in a synchronised fashion. A similar problem occurs during physical exercise such as running, biking or swimming, where we control several body parts simultaneously.

Athletes put much effort into improving their speed and technique to set new records. Just two weeks ago, during the Ineos project in Vienna, Eliud Kipchoge became the first person ever to run a marathon under two hours. Later the same day, on the other side of the planet, Jan Frodeno set a new world record at the Ironman World Championship. He completed the 3.7 km swim, 180 km bike and 42 km run in Kona, Hawaii in just 7 hours and 51 minutes! These impressive results show that the human body is still not at its limit and what can be achieved with optimisation of mind and body.

At the same time, if you look at the world's population as a whole, we have “never” been in worse shape. We see innovations every day that help us live a better and more comfortable life, but we do not compensate for the lack of physical movement our new lifestyle brings. Looking at running, for instance, there are studies of marathon finishing times and how they have changed over time. One such study looked at finishing times for the 5K, 10K, half marathon and marathon, and across all distances the average finishing time has increased. It is not only the average time of all participants that has increased, but the times among the fastest runners as well, so you cannot argue that a growing number of recreational competitors is what raises the total average.

Humans are by nature lazy beings; that is why we see such a high rate of innovation, especially aimed at easing our way of living. As Bill Gates reputedly said, “I always choose a lazy person to do a hard job, because a lazy person will find an easy way to do it.” Whether you want to believe it or not, there is always some truth to a story, however absurd it might seem.

One does not need to run marathons and Ironmans, especially in such fast times, but it is obvious for us that we need to activate our body, not only to be good athletes but also to improve ourselves as engineers. That is one of the reasons why Combine prioritizes company activities such as running, obstacle course racing, or skiing. 


The Problem

In this blog post we are going to create a model that counts the number of fingers (1-5) a hand is holding up, based on a picture of that hand. First we need to collect data; this data will then need to be processed into a format suitable for deep learning. Once the data is ready, we will train a deep convolutional neural network and make a basic interface that shows the results in real time.

Collecting Data

For our problem we need pictures of a hand holding up different numbers of fingers, with corresponding labels. Each label should be the number of fingers the hand is holding up. For this purpose we will need a camera, more specifically a webcam that we can connect to our computer. We don't want to overcomplicate the problem right away, so let's keep the data very consistent and take all the pictures against a plain white background, for example a white desk or a white wall. We want to place the camera at a distance such that the hand fills the majority of the image; the ideal distance will depend on your camera, but 20-30 cm away from the plain white surface should be about right.

Our camera is now in place and we are ready to start taking pictures... but we soon realize two problems. First, we don't want to take potentially hundreds of pictures manually one by one, and secondly, we need to somehow label the data with the correct label (i.e., the number of fingers shown). Of course, we could take a picture and manually give it the correct label, but this process would be very time-consuming. Let us automate the process instead. In this blog post we will use the CV2 Python library (OpenCV). This library lets us capture images from our camera from a Python script, solving the first problem of having to take pictures manually.

We make a script that uses the CV2 library to capture images from our camera, and we take care of the labelling problem by telling the user how many fingers to hold up. Using the CV2 library, we show the current image on screen, together with text telling the user how many fingers they should be holding up. Using a loop, we can then take multiple pictures, each automatically labelled with the number of fingers we told the user to hold up (this assumes that the user follows the instructions correctly). Once we have taken a good number of pictures, we let the user know it's time to hold up a different number of fingers. The pseudo-code of the process is shown below.

Figure 1: Pseudo code for collecting data
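A sketch of what that loop might look like using the CV2 library. The frame counts, window name and timing are illustrative choices, not from the original script:

```python
import cv2
import numpy as np

PICTURES_PER_CLASS = 200            # illustrative amount
cap = cv2.VideoCapture(0)           # first connected webcam

images, labels = [], []
for fingers in range(1, 6):         # classes 1..5
    count = 0
    while count < PICTURES_PER_CLASS:
        ok, frame = cap.read()
        if not ok:
            continue
        shown = frame.copy()
        cv2.putText(shown, f"Hold up {fingers} finger(s)", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
        cv2.imshow("collect", shown)
        cv2.waitKey(50)             # short pause between captures
        images.append(frame)        # the label is the instruction we showed
        labels.append(fingers)
        count += 1

cap.release()
cv2.destroyAllWindows()
# Save as numpy arrays for fast reading and writing, as recommended below.
np.save("images.npy", np.array(images))
np.save("labels.npy", np.array(labels))
```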

For fast reading and writing to disk, and to simplify later on, we recommend saving the data as a numpy array and not as images.

Data Pre-processing

We now have our images with corresponding labels. We will now process the data so it can be used for training a neural network. The first thing we need to do is one-hot-encode our labels; if you have encountered classification problems before, this should be familiar. Next, we want to convert our images to grayscale. In this problem we are not interested in any colour-related features, so we can work with grayscale images to reduce the complexity. Next, we want to rescale our image pixel values, for example to lie between 0 and 1. Lastly, we resize the images (to 64×64 pixels) to reduce the complexity further.

We can either make a separate script that does the pre-processing, or we can add it to our data collection script, reducing the disk space used and the need to run two different scripts. The pseudo-code for data collection plus pre-processing is shown below.

Figure 2: Pseudo code for collecting and pre-processing data
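A sketch of the pre-processing steps described above (the file names and the 64×64 size follow the text; the rest is an assumption):

```python
import cv2
import numpy as np
from tensorflow.keras.utils import to_categorical

images = np.load("images.npy")      # collected by the script above
labels = np.load("labels.npy")      # values 1..5

processed = []
for img in images:
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)       # drop colour
    small = cv2.resize(gray, (64, 64))                 # reduce complexity
    processed.append(small.astype("float32") / 255.0)  # rescale to 0-1

X = np.array(processed)[..., np.newaxis]       # shape (N, 64, 64, 1)
y = to_categorical(labels - 1, num_classes=5)  # one-hot, classes 0..4

# Everything saved together in large arrays instead of one file per image.
np.save("X.npy", X)
np.save("y.npy", y)
```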

Here we also save all the images and labels together in one large numpy array instead of each one separately.

Training a model

With our data ready, we can now define and train a model. Since we are working with images, we will use a convolutional neural network. Our output should be a class, i.e., how many fingers are being held up. One excellent choice of framework for defining and training a neural network is Keras. With this toolbox we can easily create our network exactly as we want it, and Keras will take care of all the difficult mathematical operations in the background. Below is the pseudo-code for defining our network and training it on our data.

Figure 3: Pseudo code for defining and training the model
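A minimal Keras sketch of such a network. The layer sizes, dropout rate and number of epochs here are starting points to tune, as noted below:

```python
import numpy as np
from tensorflow import keras

X = np.load("X.npy")                # (N, 64, 64, 1), values in 0-1
y = np.load("y.npy")                # one-hot labels, 5 classes

model = keras.Sequential([
    keras.layers.Input(shape=(64, 64, 1)),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=15, batch_size=32, validation_split=0.2)
model.save("fingers.h5")
```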

The exact architecture of the network, how much dropout is applied and activation functions can be altered to get the best performance on your dataset.

Showing the results

With our model trained, the last step is to make a simple interface to try it out. For this we can revisit the CV2 library we used for data collection: simply take a picture, run it through our trained model, and show the picture plus the resulting class from our model on screen. The pseudo-code and an example of how it can look are shown below.

Figure 4: Showing the results
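A sketch of such an interface, reusing the camera loop and applying the same pre-processing as during training (the window name and quit key are arbitrary choices):

```python
import cv2
import numpy as np
from tensorflow import keras

model = keras.models.load_model("fingers.h5")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Same pre-processing as during training: grayscale, 64x64, 0-1.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (64, 64)).astype("float32") / 255.0
    probs = model.predict(small[np.newaxis, ..., np.newaxis], verbose=0)
    fingers = int(np.argmax(probs)) + 1          # back to the 1..5 range
    cv2.putText(frame, f"Fingers: {fingers}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 255), 2)
    cv2.imshow("result", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):        # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```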



Stockholm

The new office in Stockholm is in progress. Our first two engineers started Monday this week, and the office at Dalagatan is beginning to take form.

Welcome, Spyros and Michele, to the team.

Our focus now is to get exciting projects and assignments to the office. We hope to have a specific project up and running soon.

Estimation of RUL

The interest in our start-up AiTree Technology is still high, and we believe we will have customers signed up in the near future. At this point, there seems to be significant interest not only in predicting RUL (remaining useful life) but also in helping companies build energy storage solutions. Stay tuned for more information.

Sympathy for Data

We will soon release a new version of the tool Sympathy for Data, version 1.6.2, with improvements in the platform, user interface and nodes, as well as new functionality. Stay tuned for more information on version 1.6.2.

Market analysis

The year has so far been volatile, with downsizing in the automotive industry for consultants in general.

We have long claimed that being consistent in focusing on quality and specialist services would be the right strategy in the long run, rather than focusing on the EBIT of the year. It turns out that our strategy over the years is paying off. We have not seen any changes in demand for our services, even if we have not been able to grow as much as planned. Instead, we have used this period to increase the share of projects and the turnover from new customers, with the target of being less dependent on a single industry in the future.

There is also a lot of technology-driven disruption going on (connected devices, automation of services, sustainability, and environmental adaptation, etc.). Our services should fit in nicely in that kind of future and transformation.

Moreover, I must say, it is rewarding that our strategy, to stick with our expertise, is as appealing to our customers as it is to me.

To summarize

The market, in general, is not as strong as before, but our Data Science services specifically continue to be in high demand. Therefore, we remain positive, yet careful, about the coming years.

We will continue to focus on cutting edge technology, being proud of what we do, continue to be honest, and at the same time, have fun. How hard can it be?!


There is no doubt that model-based design is one of the methods that has brought the development of control, communications, signal processing and dynamic systems to a new level. Designing, for example, model predictive or even nonlinear control systems is more feasible and less error-prone using this approach.

Model-based design methodology, especially in the early stages, is used in many fields such as automotive, aircraft, robotics and others. Other industries, for example the automation industry, tend to neglect this phase and instead hard-code the software program in the PLC and connect it to a simulation platform to evaluate system performance. In this article, we briefly discuss the improvement and the value of starting the design process with the model-based design approach and how that can impact the automation industry.

In the model-based design scheme, knowing the mathematical representation, it is easier for a developer to design a model of the plant. Based on that, one can synthesize a suitable controller for the plant using graphical user interface blocks that represent simple arithmetic, logic and other basic operations, or even more complicated blocks such as PID and model predictive controllers, all in the absence of the actual hardware. This saves a huge amount of time compared to coding the whole system, and as a result it is much easier to debug and improve the quality of the control algorithms. What is more interesting, even without knowing the mathematical representation of the plant, you can model the plant by depicting electrical or mechanical circuits and connecting them to scope or display blocks to observe their outputs. Verification of the design can be handled through Model-In-The-Loop (MIL) and Hardware-In-The-Loop (HIL) simulations. In MIL, you can test and validate the simulated controller and plant in the early phases without physical components. Once the model is tested in MIL, you can generate HDL code, C code, IEC 61131-3 Structured Text (using a PLC coder) and reports. In the HIL method, HIL simulators act as the real plant and communicate with the controllers through sensors and actuators, which makes testing more realistic; then you are ready to test the prototype.
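As a text-only illustration of the MIL idea, here is a minimal sketch that closes the loop between a plant model and a controller entirely in simulation, before any hardware exists. The first-order plant and the PID gains are made-up examples:

```python
# Model-in-the-loop sketch: a first-order plant dx/dt = (u - x) / tau
# regulated by a discrete PID controller. All numbers are illustrative.
dt, tau = 0.01, 0.5          # time step [s], plant time constant [s]
kp, ki, kd = 2.0, 1.0, 0.05  # controller gains
setpoint = 1.0               # desired plant output

x = 0.0                      # plant state (e.g., a valve position)
integral, prev_error = 0.0, 0.0

for step in range(1000):     # simulate 10 seconds
    error = setpoint - x
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative   # controller output
    prev_error = error
    x += dt * (u - x) / tau                            # plant model update

print(f"state after 10 s: {x:.3f} (target {setpoint})")
```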

There are many advantages to using the model-based design approach. For example, most of the verification and validation can be done before the hardware exists, adding new features takes less time, and the development schedule is shortened. Moreover, some model-based design platforms provide code generation features that optimize the code, reducing memory usage and increasing execution speed. All in all, one can see the benefits of adopting a model-based design approach in the development process, and how it increases the quality of system testing and decreases errors that could be expensive in the real application.
