
Blog

Productivity


It is a fact that there is a connection between physical exercise and health in general. At the beginning of humanity there was no need for training, as we got all the physical exercise we needed from just trying to survive. In today's fast-paced life there is often little margin for error, and tasks that used to take weeks are now expected to be carried out in just a couple of days.

One way to handle stress is to give yourself time to reflect. It will not only decrease your stress level but also give the mind time to learn and process new impressions. This will hopefully make you a better and more balanced person and colleague, as you can now be more active with friends and family.

One great way to give yourself time to reflect is training, especially low-intensity cardio sessions. Believe it or not, a run in the forest will improve not only your physical shape but also your mental one. As the doctor and author Anders Hansen explains in his book "Hjärnstark: Hur motion och träning stärker din hjärna", our brain grows and becomes more capable through physical exercise, and cardio sessions in particular. Several research papers investigate how the human brain reacts to physical stimuli. For instance, regular low-intensity cardio training is a great way to decrease stress and increase resilience to stress. In the book, he also elaborates on the importance of cardio sessions for memory improvement and how they prevent the brain from deteriorating with age.

For control systems engineers, it is intuitive to understand complex systems in different domains. While researchers have spent decades searching for the optimal control algorithm, the closest we have today is Model Predictive Control. The approach requires considerable computational power for satisfactory performance and demands accurate models of the environment. One can see similarities to the human mind. As soon as we move our limbs, we activate our internal control algorithms. For example, when we grab a cup of coffee, we need to control our arm and hand in a synchronised fashion. A similar problem occurs during physical exercise such as running, biking or swimming, where we control several body parts simultaneously.

Athletes put much effort into improving their speed and technique to set new records. Just two weeks ago, during the INEOS 1:59 Challenge in Vienna, Eliud Kipchoge became the first person ever to run a marathon under two hours. Later the same day, on the other side of the planet, Jan Frodeno set a new course record at the Ironman World Championship in Kona, Hawaii, completing the 3.8km swim, 180km bike and 42km run in just 7 hours and 51 minutes! These impressive results show that the human body is still not at its limit and what can be achieved by optimising mind and body.

At the same time, if you look at the world's population, we have arguably never been in worse shape. We live in our greatest time yet, with innovations arriving every day to make life easier and more comfortable, but we do not compensate for the physical movement our new lifestyle has taken away. Humans are by nature lazy beings, which is one reason innovation rates are so high, especially for anything that eases our way of living; as Bill Gates supposedly said, "I always choose a lazy person to do a hard job, because a lazy person will find an easy way to do it." Studies of race results across the 5K, 10K, half marathon and marathon distances show that average finishing times have increased over time, and not only because more recreational runners take part: the fastest runners have become slower as well.

One does not need to run marathons and Ironman races, especially not at such speeds, but it is obvious to us that we need to activate our bodies, not only to be good athletes but also to improve ourselves as engineers. That is one of the reasons why Combine prioritizes company activities such as running, obstacle course racing, and skiing.


The Problem

In this blog we are going to create a model which counts the number of fingers (1-5) a hand is holding up, based on a picture of that hand. First we need to collect data, which then needs to be processed into a format suitable for deep learning. Once the data is ready, we will train a deep convolutional neural network and build a basic interface which shows the results in real time.

Collecting Data

For our problem we need pictures of a hand holding up different numbers of fingers, together with corresponding labels. The label should be the number of fingers the hand is holding up. For this purpose we will need a camera, more specifically a webcam which we can connect to our computer. We don't want to overcomplicate the problem right away, so let's keep the data very consistent and take all the pictures against a plain white background, for example a white desk or a white wall. We want to place the camera at a distance where the hand fills the majority of the image; the ideal distance depends on your camera, but 20-30 cm away from the plain white surface should be about right.

Our camera is now in place and we are ready to start taking pictures, but we soon realize two problems. First, we don't want to take potentially hundreds of pictures manually, one by one, and second, we somehow need to label the data correctly (i.e. with the number of fingers shown). Of course we could take a picture and manually give it the correct label, but this process would be very time-consuming. Let us automate the process instead. In this blog we will use the OpenCV Python library (cv2). This library lets us capture images from our camera from a Python script, solving the first problem of having to take pictures manually.

We write a script which uses cv2 to capture an image from our camera, and we take care of the labelling problem by telling the user how many fingers to hold up. Using cv2 we show the current image on screen, together with some text telling the user how many fingers they should be holding up. Using a loop we can then take multiple pictures, each automatically labelled with the number of fingers we told the user to hold up (this assumes that the user follows the instructions correctly). Once we have taken a good amount of pictures, we let the user know it's time to hold up a different number of fingers. The pseudo-code of the process is shown below.

Figure 1: Pseudo code for collecting data

For fast reading and writing to disk, and to simplify later on, we recommend saving the data as a numpy array and not as images.
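As a rough illustration, a minimal version of such a collection script could look like the sketch below; the webcam index, window handling and number of images per class are assumptions, not requirements:

```python
# Minimal sketch of the collection loop described above; assumes a webcam
# at index 0 and the opencv-python (cv2) and numpy packages.
import cv2
import numpy as np

IMAGES_PER_CLASS = 200          # assumed amount per class; adjust to taste
images, labels = [], []

cap = cv2.VideoCapture(0)
for fingers in range(1, 6):                      # the classes: 1-5 fingers
    count = 0
    while count < IMAGES_PER_CLASS:
        ok, frame = cap.read()
        if not ok:
            continue
        shown = frame.copy()
        cv2.putText(shown, "Hold up %d finger(s)" % fingers, (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 255), 2)
        cv2.imshow("capture", shown)
        cv2.waitKey(50)                          # roughly 20 images per second
        images.append(frame)                     # store the unannotated frame
        labels.append(fingers)                   # automatic label
        count += 1
cap.release()
cv2.destroyAllWindows()

# Saving as numpy arrays makes later reading and writing fast.
np.save("images.npy", np.array(images))
np.save("labels.npy", np.array(labels))
```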

Data Pre-processing

We now have our images with corresponding labels, and we will now process the data so it can be used for training a neural network. The first thing we need to do is one-hot-encode our labels; if you have encountered classification problems before, this should be familiar. Next we convert our images to gray-scale: in this problem we are not interested in any colour-related features, so working with gray-scale images reduces complexity. We then rescale the pixel values, for example to lie between 0 and 1. Lastly, we resize the images to 64×64 pixels to reduce complexity further.

We can either make a separate script which does the pre-processing, or we can add it to our data collection script, reducing disk space and the need to run two different scripts. The pseudo-code for data collection plus pre-processing is shown below.

Figure 2: Pseudo code for collecting and pre-processing data

Here we also save all the images and labels together in one file instead of each one separately.
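As a sketch of the separate-script variant, the pre-processing described above could look something like this (the file names are the ones assumed in the collection sketch):

```python
# Possible pre-processing step matching the text: gray-scale, rescale to
# 0-1, resize to 64x64, and one-hot-encode the labels.
import cv2
import numpy as np

images = np.load("images.npy")
labels = np.load("labels.npy")

processed = []
for img in images:
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)        # drop colour information
    small = cv2.resize(gray, (64, 64))                  # reduce complexity
    processed.append(small.astype(np.float32) / 255.0)  # pixel values in 0-1

X = np.array(processed)[..., np.newaxis]   # shape (N, 64, 64, 1)

# One-hot encoding: label k (1-5) becomes a vector with a 1 at index k-1.
y = np.eye(5)[labels - 1]

# Store images and labels together in one file.
np.savez("dataset.npz", X=X, y=y)
```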

Training a model

With our data ready we can now define and train a model. Since we are working with images, we will use a convolutional neural network. Our output should be a class, i.e. how many fingers are being held up. One excellent choice of framework for defining and training a neural network is Keras. With this toolbox we can easily create the network exactly as we want it, and Keras will take care of all the difficult mathematical operations in the background. Below is the pseudo-code for defining our network and training it on our data.

Figure 3: Pseudo code for defining and training the model

The exact architecture of the network, how much dropout is applied and which activation functions are used can all be altered to get the best performance on your dataset.
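To make this concrete, here is one possible (by no means definitive) way such a network could be defined and trained in Keras; every architectural choice below is just a starting point:

```python
# Hedged sketch of a small CNN for the 64x64 gray-scale images above.
import numpy as np
from tensorflow.keras import layers, models

data = np.load("dataset.npz")
X, y = data["X"], data["y"]

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dropout(0.5),                     # dropout rate is a free choice
    layers.Dense(128, activation="relu"),
    layers.Dense(5, activation="softmax"),   # one output per class (1-5 fingers)
])

model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=10, validation_split=0.2)
model.save("fingers.h5")
```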

Showing the results

With our trained model, the last step is to make a simple interface to try out our model. For this we can revisit the cv2 library we used for data collection: simply take a picture, run it through the trained model, and show the picture together with the predicted class on screen. The pseudo-code and an example of how it can look are shown below.

Figure 4: Showing the results
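A minimal version of such an interface, reusing the pre-processing from before and the model saved above, might look like this:

```python
# Live demo loop: grab a frame, pre-process it exactly like the training
# data, run the model and overlay the predicted class on the image.
import cv2
import numpy as np
from tensorflow.keras import models

model = models.load_model("fingers.h5")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (64, 64)).astype(np.float32) / 255.0
    pred = model.predict(small[np.newaxis, ..., np.newaxis])
    fingers = int(np.argmax(pred)) + 1        # classes are 1-5
    cv2.putText(frame, "Fingers: %d" % fingers, (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("result", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):     # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```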

 


Stockholm

The new office in Stockholm is in progress. Our first two engineers started Monday this week, and the office at Dalagatan is beginning to take form.

Welcome, Spyros and Michele, to the team.

Our focus now is to get exciting projects and assignments to our office. We are hopeful to have a specific project up and running soon.

Estimation of RUL

The interest in our start-up AiTree Technology is still high, and we believe we will have customers signed up in the near future. At this point there seems to be significant interest not only in predicting RUL (remaining useful life) but also in helping companies build energy storage solutions. Stay tuned for more information.

Sympathy for Data

We will soon release an updated version of the tool Sympathy for Data, version 1.6.2, with improvements to the platform, user interface and nodes, as well as new functionality. Stay tuned for more information on version 1.6.2.

Market analysis

The year has so far been volatile, with downsizing in the automotive industry affecting consultants in general.

We have long claimed that staying consistent in our focus on quality and specialist services would be the right strategy in the long run, rather than focusing on EBIT for the year. That strategy is now paying off. We have not seen any changes in demand for our services, even if we have not been able to grow as much as planned. Instead, we have used this period to increase the share of projects and the turnover from new customers, with the target of being less dependent on a single industry in the future.

There is also a lot of technology-driven disruption going on (connected devices, automation of services, sustainability, and environmental adaptation, etc.). Our services should fit in nicely in that kind of future and transformation.

Moreover, I must say, it is rewarding that our strategy, to stick with our expertise, is as appealing to our customers as it is to me.

To summarize

The market, in general, is not as strong as before, but our Data Science services specifically continue to be in high demand. Therefore, we remain positive, yet careful, about the coming years.

We will continue to focus on cutting edge technology, being proud of what we do, continue to be honest, and at the same time, have fun. How hard can it be?!


There is no doubt that model-based design is one of the methods that brings control, communications, signal processing and dynamic systems to a new level. Designing, for example, model predictive or even nonlinear control systems is more feasible and less error-prone using this approach.

Model-based design methodology, especially in the early stages, is used in many fields such as automotive, aircraft, robotics and others. Other industries, for example the automation industry, tend to neglect this phase and instead hard-code the software in the PLC and connect it to a simulation platform to evaluate system performance. In this article, we briefly discuss the value of starting the design process with the model-based design approach and how that can impact the automation industry.

In the model-based design scheme, knowing the mathematical representation, it is easy for a developer to design a model of the plant. Based on that, one can synthesize a suitable controller for the plant using graphical blocks that represent simple arithmetic, logic and other operations, or more complicated ones such as PID and model predictive control blocks, all in the absence of the actual hardware. Compared to hand-coding the whole system, this saves the developer a huge amount of time, and it becomes much easier to debug and improve the quality of the control algorithms. Even more interesting, without knowing the mathematical representation of the plant you can model it by depicting electrical or mechanical circuits and connecting them to scope or display blocks to observe their outputs.

Verification of the design can be handled through Model-in-the-Loop (MIL) and Hardware-in-the-Loop (HIL) simulations. In MIL you can test and validate the simulated controller and plant in the early phases without physical components. Once the model is tested in MIL, you can generate HDL code, C code, IEC 61131-3 Structured Text (using a PLC coder) and reports. In the HIL method, HIL simulators act as the real plant and communicate with the controllers through sensors and actuators, making the testing more realistic; after that you are ready to test the prototype.
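To illustrate the MIL idea outside any specific tool, here is a toy sketch in Python: a hand-written first-order plant model and a PI controller simulated together before any hardware exists. The plant, gains and setpoint are made-up values for illustration only:

```python
# Toy model-in-the-loop simulation: controller model + plant model, no hardware.
import numpy as np

dt, T = 0.01, 5.0          # step size and simulated time [s]
tau, K = 0.5, 2.0          # assumed first-order plant: tau*dy/dt + y = K*u
kp, ki = 1.2, 3.0          # assumed PI controller gains
setpoint = 1.0

y, integral = 0.0, 0.0
for _ in np.arange(0.0, T, dt):
    error = setpoint - y
    integral += error * dt
    u = kp * error + ki * integral     # the "controller model"
    y += dt * (K * u - y) / tau        # Euler step of the "plant model"

print("output after %.0f s: %.3f (setpoint %.1f)" % (T, y, setpoint))
```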

There are many advantages to using the model-based design approach. For example, most of the verification and validation can be done before the hardware exists, adding new features takes less time, and the development schedule is shortened. Moreover, some model-based design platforms provide code generation features that optimize the code for smaller memory footprint and higher execution speed. All in all, one can see the benefits of including a model-based design approach in the development process: it increases the quality of system testing and decreases errors that could be expensive in the real application.


Introducing Transformers

The idea of training language models on large datasets and then using these pre-trained models to enhance performance on smaller, similar datasets has been a crucial breakthrough for progress in many NLP challenges. However, pre-training for a specific task and embedding long-term sequential dependencies have been major constraints on training more generalised language models. Transformer models can be pre-trained on unlabelled, unstructured text to perform a large array of downstream NLP tasks, including question answering for dialogue systems, named entity recognition (NER) and sequence-level tasks such as text generation. The typical Transformer architecture is illustrated below in Figure 1:

Figure 1: Transformer Blocks [4]

As shown in Figure 1, the Transformer architecture consists of a block of encoders (left) and a block of decoders (right). Instead of using a hidden state between layers (as in recurrent neural network architectures), the encodings themselves are passed between each encoder, and the final encoder output is then passed to the first decoder in the decoder block. Each encoder/decoder in the Transformer contains a self-attention layer, which aims to determine which part of the sequence is most important when processing a particular word. For example, in the sentence "James enjoys the beach because he likes to swim", the self-attention layer should learn to link the word "he" to "James" as the most important for its embedding. Additionally, each decoder contains an "Encoder-Decoder Attention" layer, which refers to the relative importance of each encoder when the decoder predicts the output.
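To make the self-attention mechanism concrete, here is a small numpy sketch of scaled dot-product attention; the dimensions and random projection matrices are arbitrary stand-ins for learned weights:

```python
# Scaled dot-product self-attention in a few lines of numpy.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 8, 16                 # e.g. 8 tokens, 16-dim embeddings
x = rng.normal(size=(seq_len, d_model))  # stand-in token embeddings

# Q, K, V projections are learned in a real model; random here.
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv

scores = Q @ K.T / np.sqrt(d_model)           # token-to-token relevance
scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)

attended = weights @ V      # each token becomes a weighted mix of all tokens
print(weights[0].round(2))  # how strongly token 0 attends to every token
```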

BERT

The Bidirectional Encoder Representations from Transformers model, or BERT for short, is one of the most influential Transformer-based models. It has earned its reputation by beating multiple benchmark performances in various NLP tasks with its bi-directional attention mechanism. This means that BERT considers not only the previous context but also looks ahead when learning embeddings. The BERT model focuses on building a language model and thus on the encoder block of the Transformer. Figure 2 below shows the composition of BERT embeddings as consisting of the word token embeddings, the segment embedding (for longer sequences) and a positional embedding that keeps track of the input order:

Figure 2: BERT embeddings illustrated [1]

Fine-tuning BERT

To build our search engine, we first acknowledge that 72 data points is insufficient to fine-tune the BERT model for our specific task. Instead, we make use of a benchmark dataset for sentence similarity, STS-B, consisting of 8,000 pairs of semantically similar sentences from news articles, captions and forums [3]. Since BERT is not specifically designed for sentence embeddings, we use a modified version of BERT for sentence encoding (proposed by Reimers and Gurevych [2]), which adds a pooling layer to the standard architecture and is trained with a regression objective based on a siamese network, i.e. each sentence passes through its own network and the outputs are combined and then evaluated (see Figure 3). The regression objective function here is the cosine similarity between the sentence embeddings, which is used as a loss function for the fine-tuning task. From the bottom to the top, we see that each sentence is first encoded using the standard BERT architecture, after which our pooling layer is applied to output another vector that is used to compute the cosine similarity. As described in [2], we compute this similarity measure between each query and the 72 docstrings that we obtain from the Sympathy modules, and return the top 5 nodes according to this measure.

Figure 3: Siamese BERT network for sentence similarity illustrated [2]
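As an illustration of the ranking step, a sketch along these lines can be written with the sentence-transformers library that accompanies [2]; the model name and the example docstrings below are assumptions for demonstration:

```python
# Hedged sketch: embed docstrings and a query, rank by cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

# A BERT model fine-tuned on NLI + STS-B data, as in the approach above.
model = SentenceTransformer("bert-base-nli-stsb-mean-tokens")

docstrings = [                     # stand-ins for the 72 node docstrings
    "Read a CSV file into a table.",
    "Filter the rows of a table using a condition.",
    "Fit a linear regression model to the input data.",
]
query = "import data from csv"

doc_emb = np.asarray(model.encode(docstrings))    # (n_docs, dim)
query_emb = np.asarray(model.encode([query]))[0]  # (dim,)

# Cosine similarity between the query and every docstring.
sims = doc_emb @ query_emb / (
    np.linalg.norm(doc_emb, axis=1) * np.linalg.norm(query_emb))

for i in np.argsort(-sims)[:5]:                   # best matches first
    print(round(float(sims[i]), 3), docstrings[i])
```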

The Result

We have been able to build a working prototype of the semantic search engine for the 72 nodes currently available in Sympathy for Data, which we hope to integrate as a fully-fledged plugin in the future. Our search engine performs impressively given that it has only been trained on around 8,000 pairs of semantically similar sentences (i.e. 16,000 sentences). Below is an illustrative example of how this works in practice.

References:

[1] Devlin, J., Chang, M., Lee, K. and Toutanova, K. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv. Available at: https://arxiv.org/abs/1810.04805 [Accessed 24 Sep. 2019].
[2] Reimers, N. and Gurevych, I. (2019). Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. arXiv. Available at: https://arxiv.org/abs/1908.10084 [Accessed 24 Sep. 2019].
[3] Cer, D., Diab, M., Agirre, E., Lopez-Gazpio, I. and Specia, L. (2017). SemEval-2017 Task 1: Semantic Textual Similarity Multilingual and Cross-lingual Focused Evaluation. Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017).
[4] Understanding Transformers in NLP: State-of-the-Art Models (2019). Analytics Vidhya. Available at: https://www.analyticsvidhya.com/blog/2019/06/understanding-transformers-nlp-state-of-the-art-models/ [Accessed 24 Sep. 2019].


Electricity from heat

Well, no big news there. But how about using existing waste heat instead of burning oil or splitting atoms? Instead of superheating steam, how about settling for source temperatures of 70-120°C?
The technology is surprisingly simple, but clever. Here is a description from the Climeon homepage (www.climeon.com):

The heat, from geothermal sources, industrial waste heat or power production, is fed to the Climeon unit. Inside the Climeon unit a heat exchanger transfers the heat to an internal fluid, which vaporizes due to its lower boiling point. The vapors are then expanded over a turbine to run a generator and produce electricity.

Fundamentally the same electricity generation scheme as a nuclear power plant, but without the nuclear stuff.
The energy efficiency of, for instance, a nuclear plant might be considered poor given the amount of heat that is wasted (cooled off for no gain). Plants that combine electricity generation and district heating are more efficient from that point of view, but perhaps transporting heat to remote districts using nuclear coolant is not a great idea.
In this case, the concept is to use heat that is already there and unused, so efficiency can instead be measured simply as the amount of electricity generated per unit of heat energy. If the source is geothermal, it is basically electricity for free once you make your initial investment and maintenance allocations.
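For a sense of scale, even a perfect heat engine is bounded by the Carnot limit at these temperatures; with a 120°C source and 20°C cooling water:

$$ \eta_{\max} = 1 - \frac{T_{\text{cold}}}{T_{\text{hot}}} = 1 - \frac{293\ \text{K}}{393\ \text{K}} \approx 25\% $$

so every percentage point of real-world efficiency at low source temperatures is hard-won, which makes "free" waste heat all the more attractive as an input.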

I think the concept is great and hope they do well.

Batteries, when they are no longer suitable for their initial purpose?

There seem to be four basic answers to this question

  1. We made our money while they worked, now we need to get rid of them at as low cost as possible
  2. We are hoping to recycle them efficiently and make use of that
  3. We are hoping someone else wants them and hopefully make use of that
  4. Oh boy, where did all these batteries come from?

The first answer is understandable, but not convincing from an environmental or “big picture” point of view. Established recycling technology for Lithium-Ion batteries has a couple of glaring drawbacks, mainly that it doesn’t work that well and that it is based on melting (which costs a lot of energy).

The second answer is hopeful and often based on the idea that recycling will improve. Research is underway; the most promising builds on technologies that have existed in the mining industry for over 100 years. The idea in mining is to crush the material and mix it with a fluid containing molecules that attach to the element one wishes to extract. The newly formed molecules float up to the surface of the fluid and can be skimmed off (or take on whatever property makes them easy to separate from the fluid). A further stage then filters out the desired element. The research is looking to do this similarly in steps, separating all the desired elements along the way.

The third answer is also hopeful. As we have discussed in previous posts, the idea of a functioning business with second and possibly third life applications for used batteries is quite dependent on buyers and sellers knowing the condition of the batteries. We are hoping to do something of our own in this area, as you know.

Unfortunately, the fourth answer does exist. I am not going to point any fingers and just leave it there.

Unless someone comes up with a better battery technology soon, we are looking at an ever-increasing need for answers 2 and 3 to win out.
Authorities are also unlikely to accept answers 1 or 4 in the long run, IMO (global perspective, visualize massive toxic junkyards in some third world country). The pressure is more likely to increase than decrease on manufacturers, and it will be interesting to see where in the value chain responsibilities land. Passing the buck will probably not be that easy without some serious documentation to show where the batteries went and who is responsible for them.

Pet project

To wind this up, I am going to talk a bit about a pet project. We have been asked to demonstrate something on the theme "technology is fun" for an event (a Netgroup anniversary) taking place at the Göteborg opera house.
I am going to attempt to build a plasma arc speaker. They have always caught my eye (you can look them up or watch some videos on YouTube), so even if it has already been done, I think it is a perfect fit considering the venue.

First, I would like to point out that this is a high-voltage design, so building it at home with a simple on/off switch is not a great idea if you have small (or overly curious) children running around. It can cause serious heart problems or kill you, and it produces ozone which can be lethal at concentrations of more than 50ppm. Great fun, right?

Anyway, the idea I am using is something like this:

For the power source, I will use a standard 700W PC power supply, using the 12V output. This will go to the flyback transformer and switching MOSFETs.

The audio source will probably be an obsolete MP3 player. The signal will go to a 555, which will then control the switching MOSFETs (I’ll use 3 parallel STP40NF10L).

The flyback transformer has the property of being able to produce high voltages, in the X kV range. Also, instead of being fed by a DC source it is typically fed by a switched source in the XY kHz range.
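For reference, if the 555 is run as a standard astable oscillator, its switching frequency follows the usual RC relation (the component values are whatever one ends up picking):

$$ f \approx \frac{1.44}{(R_1 + 2R_2)\,C} $$

which is how the audio signal can be made to modulate a carrier in the kHz range.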

My idea is to produce the arc between two stainless steel screws of some respectable dimension.

So, kV and kHz? This means we get a modulated plasma arc that can play the higher frequencies of music well. It should actually be able to do it very well, since there are no moving parts, unlike speaker membranes and similar. It won’t be very loud since I have no plans to ionize western Sweden or kill the guests at the event, but it will be fun to see if I can make it work.

If anyone feels a huge urge to fiddle around with it together with me, I am looking for someone who can prevent me from electrocuting myself and maybe has some ideas for an ozone trap.


INTRODUCTION

Today’s modern vehicles feature a large set of advanced driver-assistance systems (ADAS), such as electronic stability control, lane departure warning systems, anti-lock brakes, and several others. These systems are dependent on multiple inputs to model the current state of the vehicle as well as the environment, and one can argue that the vehicle’s interaction with the road is the most important input.

The tire-road friction is essential to the stability of the vehicle and has been found to be the most important factor in avoiding crashes. About a quarter of all crashes occur due to weather-related issues, and accidents are twice as likely to happen on wet asphalt as on dry asphalt. There is, however, no way to measure the available friction directly, so some type of estimation algorithm needs to be developed.

TODAY’S OBSTACLES

All of these systems undergo extensive testing and are required to be evaluated for a large number of test scenarios. However, this introduces two major issues.

First, real-world data can only be used to analyse what has already happened; there is no certainty about what would happen in an untested reality, and there is an increased risk of unforeseen conditions and edge cases. It simply takes too much time to test enough driving cases.

Second, testing across a large set of scenarios is impractical because the true friction value must be known in order to evaluate the system. At dedicated testing sites it often is, but on a public road the "ground truth" is itself an estimate, so estimations end up being compared to other estimations; this makes the testing prone to errors and limits the number of sites where valid verification is possible. And if the approach is to train a machine learning algorithm, there is then no reference value of a correct answer to learn from.

PROPOSED SOLUTION

To overcome these issues, simulation has been proposed as a solution. By using a digitally controlled environment, the true value of the tire-road friction is known. Furthermore, simulation allows for controllability, reproducibility and standardization, as measurement errors and uncertainties can be both eliminated and introduced at will.
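As a toy illustration of this pipeline, the sketch below generates fake "simulated" signals with a known friction coefficient and trains a regressor on them; the features, noise levels and model choice are all assumptions for demonstration, not the thesis setup:

```python
# Toy version of the proposed pipeline: simulated signals with a known
# friction coefficient as label, and a regressor trained to estimate it.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
mu = rng.uniform(0.2, 1.0, n)       # ground-truth friction (known in simulation)
slip = rng.uniform(0.0, 0.2, n)     # wheel slip ratio

# Fake "measured" signals derived from the simulation ground truth:
long_acc = mu * 9.81 * np.tanh(20 * slip) + rng.normal(0, 0.1, n)
wheel_speed_diff = 25 * slip + rng.normal(0, 0.05, n)

X = np.column_stack([slip, long_acc, wheel_speed_diff])
X_tr, X_te, y_tr, y_te = train_test_split(X, mu, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out simulated data:", round(model.score(X_te, y_te), 3))
```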

Simulation of high friction driving

Combine is currently running a master thesis together with a major Swedish automotive company, investigating the possibility of digitalizing the testing process. Using the generated simulation data, we will train a machine learning algorithm to estimate the tire-road friction. The thesis is planned to be finalized by the end of the year, so stay tuned for the results.


Who are you?

My name is Jannes Germishuys, and I am all the way from Cape Town, South Africa. I recently completed my master’s in data science here in Gothenburg and joined Combine straight after graduation. Before my segue into data science, I actually majored in actuarial science, more commonly known as insurance mathematics, and after my studies I worked at a data science startup for 2 years.

What brings you to Sweden?

One of my main reasons for choosing Sweden was that during my visits here, I was always amazed by the openness of people to innovation and technological progress. I realized that I wanted to deepen my knowledge and experience in such an environment and found a master’s programme that perfectly matched my interests. I also wanted to broaden my horizons by experiencing a different culture, and the diversity of Sweden’s academic and working environments made me feel welcomed as an international student.

Why did you end up choosing Combine?

My primary goal when I started job-hunting was to find a great team of people with a shared sense of drive and purpose. Within a few minutes of meeting Benedikt (group manager for Data Science Solutions Gothenburg) and the rest of the team, I immediately felt that it would be a great cultural fit. I was also drawn to the ‘Enter the next level’ philosophy, which means that the technical problems Combine takes on are not only relevant but also interesting and important for progress in data science.

Which areas of Data Science interest you the most and why?

I have been fortunate enough to be involved in a diverse array of projects, from building speech-to-text engines using natural language processing to modelling water distribution networks using probabilistic graphs. This means that I usually look for the interesting problems rather than the ones that match a particular part of the data science toolkit. However, during my years of work and study, I worked deeply in natural language processing and also developed a strong research interest, as I helped to develop the initial framework for Swedish fake news detection with the Research Institutes of Sweden (RISE) for my master’s thesis project.

Can you tell us an interesting fact that not many people know about you?

Sure. I think people may notice a slight twang in my accent, and that’s because I went to high school in the island nation of Mauritius in the Indian Ocean, where I learned French and became a certified open water diver.


Prediction of lithium-ion batteries' complete lifetime

Combine is a co-founder of the company AiTree Technology AB. The vision is to provide a data-driven machine learning solution that predicts the complete lifetime of lithium-ion batteries, from first life to end of life. The interest in our solution is immense; we see large, medium-sized and small companies alike looking for a way to handle their batteries more efficiently.
Stay tuned for more information!

The IP of the tool “Sympathy for Data”

In April Combine acquired the intellectual property of the data science tool "Sympathy for Data".
Our intention is to continue to license Sympathy as an open-source tool, where add-on products such as cloud services, cluster support, etc. will be included in an enterprise license. The focus is now on developing functionality such as streaming support, cluster support and cloud services to further strengthen our ability to deliver kick-ass solutions to our customers.

Stockholm

I am glad to announce that we are moving ahead with the establishment of an office in Stockholm.
We have now signed the contract for the office at Dalagatan 7, close to the central station.
We have also signed our first two engineers in Stockholm. More information about this will follow after the summer.

Hardware In the Loop

Combine will, together with a new partner, develop and sell an off-the-shelf HIL solution.
All partners have the know-how and a strong network from previous work with vehicles, control systems and HIL solutions.
We aim to provide our customers with a more efficient, easily calibratable, plug-and-play solution built on open standards.

Ocean Data Factory

We are excited to announce that Combine will participate as AI experts in the collaborative work of building an Ocean Data Factory (ODF)!
ODF, which is a part of Vinnova’s investments to speed up development within AI, will be an arena to build competence and nurture innovation.
Data collected from the ocean poses challenges such as numerous data sources with varying characteristics and time scales, communication difficulties, and harsh environments for the sensors, which can lead to poor data quality. Overcoming these challenges using efficient AI will be vital for the future of the blue economy and sustainable ecosystems.

To summarize

The start of this year has been exciting, with new initiatives that strengthen our position both as a specialist supplier and as an innovative product development company. I believe that our investments will be fully up and running during this year, leading to more interesting opportunities in the future.

Now, I’m heading to Italy for some relaxation and vineyards.
Have a nice summer.


Introduction

One of the things that has always drawn my attention is automated vehicular control strategies and how they could reshape the transport sector dramatically. One of the methods that many automotive manufacturers have recently been developing is called platooning. A platoon is a convoy of trucks that maintains fixed inter-vehicular distances, as shown in Figure 1, and is usually applied on highways.

Figure 1: Trucks Platoon

The advantages go beyond the driver's convenience and comfort. Having a lead truck with a large frontal area reduces the air drag force acting on the succeeding vehicles. Therefore, the torque required to drive the trucks at a certain speed decreases, which leads to lower fuel consumption and, of course, less CO2 emissions and a lower financial burden.

However, at the single-vehicle level there is another approach that has been investigated for better fuel economy. This approach utilizes future topography information to optimize the speed and gear of a vehicle travelling in hilly terrain, by exploiting the vehicle's potential and kinetic energy storage. In this approach the velocity varies along the road depending on the road gradient. This look-ahead strategy could be seen as a contradiction to the platooning approach, in which vehicles maintain almost the same speed along the road.

HOW TO HANDLE IT?

A combination of these approaches could be implemented using the model predictive control (MPC) scheme. Since there are many process constraints, such as inter-vehicular distances, maximum engine torque and road velocity limits, MPC is a perfect candidate, especially since in many cases the system will be operating close to the limits. The control design can be handled in two ways: centralized control design and decoupled control design. In the centralized design, as shown in Figure 2, all the vehicles' private data such as mass and engine specs, in addition to their states such as velocity and time headway, are sent via vehicle-to-vehicle communication to the central predictive controller, which could reside in one of the trucks, probably the lead vehicle, or even in the cloud. One method used for the optimal control is to formulate a convex quadratic programming problem (CQPP), in which every local minimum is a global minimum. The problem is as follows:

$$ \begin{aligned}
\min\ & z = f_0(x) \\
\text{s.t.}\ & f_i(x) \leq 0, \quad i = 1, \dots, m \\
& Ax = b
\end{aligned} $$

where $f_0$ is the objective function, the inequality constraint functions $f_1, \dots, f_m$ are convex, and the equality constraints are affine. In the platoon case, some convexification is needed in order to obtain a CQPP. Hence, the problem is solved and the optimal speed and time headway references are sent back to the vehicles' local controllers. This approach optimizes the fuel consumption for the whole platoon rather than for individual vehicles: the group interest comes first. One drawback is that solving the problem requires handling huge matrices, since all the vehicles' information is handled at once. In other words, this approach is rather computationally expensive.

Figure 2: Centralized adaptive cruise control
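As a toy sketch of such a convex QP (for a single vehicle, using the cvxpy library), one could optimize a speed profile against a cruise set-speed under speed and acceleration limits; all the numbers below are illustrative assumptions, not values from a real platoon:

```python
# Toy convex QP in the spirit of the formulation above: quadratic objective,
# convex inequality constraints and affine equality constraints.
import cvxpy as cp
import numpy as np

N = 50                      # prediction horizon [steps]
dt = 1.0                    # step length [s]
v_set = 22.0                # cruise speed set by the driver [m/s]
v_min, v_max = 15.0, 25.0   # road speed limits [m/s]
a_max = 0.5                 # allowed acceleration magnitude [m/s^2]

v = cp.Variable(N)          # speed profile to optimize

# Quadratic objective: stay close to the set speed (a crude stand-in
# for a fuel cost), which keeps the problem a convex QP.
objective = cp.Minimize(cp.sum_squares(v - v_set))
constraints = [
    v >= v_min,                          # convex inequality constraints
    v <= v_max,
    cp.abs(cp.diff(v)) <= a_max * dt,    # bounded acceleration
    v[0] == 20.0,                        # current speed (affine equality)
]
cp.Problem(objective, constraints).solve()
print(np.round(v.value, 2))              # the optimal speed profile
```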

The decoupled architecture, as depicted in Figure 3, could be a solution to the computational capacity issue. Instead of solving the quadratic programming (QP) problem for the whole platoon, each vehicle considers only itself, which is why this is called the greedy approach. The problem is solved starting from the leading vehicle and going backwards: each vehicle solves its QP, considering the gap in front of it and the road topography, and sends its states to the succeeding vehicles. The pros of this approach are that the trucks need not share their private data and the matrices are much smaller, so the computation time is less than for the centralized controller, but the solution is not as optimal.

Figure 3: Greedy approach

CHALLENGES

As mentioned above, a convex quadratic programming problem is formulated to get the fuel-saving velocities. Since the vehicle dynamics are quite nonlinear, linear approximations are needed; therefore, finding an appropriate velocity reference is essential, assuming that the vehicle will be driven close to the reference. Finding such a reference should consider many factors, such as the maximum traction force along the road, road speed limits and the cruise speed set by the driver. Another challenge is gear optimization, which could be solved using dynamic programming. The complexity of the dynamic programming problem increases exponentially with the number of vehicles; as a result, the problem becomes computationally demanding and therefore not very suitable for real-time implementation.

