Blogg | Combine


Electricity from heat

Well, no big news there. But how about using existing waste heat instead of burning oil or splitting atoms? Instead of superheating steam, settling for source temperatures of 70–120 °C?
The technology is surprisingly simple, but clever. Here is some text and an image from the Climeon homepage:

The heat, from geothermal sources, industrial waste heat or power production, is fed to the Climeon unit. Inside the Climeon unit a heat exchanger transfers the heat to an internal fluid, which vaporizes due to its lower boiling point. The vapors are then expanded over a turbine to run a generator and produce electricity.

Fundamentally the same electricity generation scheme as a nuclear power plant, but no nuclear stuff.
The energy efficiency of, for instance, a nuclear plant design might be considered poor given the amount of heat that is wasted (simply cooled off for no gain). Plants that combine electricity generation and district heating are more efficient from that point of view, but perhaps transporting heat to remote districts using nuclear coolant is not a great idea.
In this case, the concept is to use heat that is already there and unused, so efficiency can instead be measured solely as the amount of electricity generated per unit of heat energy. If the source is geothermal, it's basically electricity for free once you have made your initial investment and maintenance allocations.
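A quick way to see why low source temperatures yield modest conversion efficiency is the Carnot limit. The sketch below uses illustrative temperatures from the range mentioned above, not Climeon's actual figures:

```python
# Carnot limit for a low-temperature heat source: an upper bound on how much
# of the waste heat can be turned into electricity. Temperatures here are
# illustrative, not Climeon's actual operating points.

def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    """Maximum theoretical efficiency of a heat engine (temperatures in Celsius)."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

# A 120 C source cooled against 20 C water can never beat ~25 % conversion;
# real machines land well below that. Free heat makes even that worthwhile.
for source in (70, 90, 120):
    print(f"{source} C source: Carnot limit {carnot_efficiency(source, 20):.1%}")
```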

I think the concept is great and hope they do well.

What happens to batteries when they are no longer suitable for their initial purpose?

There seem to be four basic answers to this question:

  1. We made our money while they worked; now we need to get rid of them at as low a cost as possible
  2. We are hoping to recycle them efficiently and make use of that
  3. We are hoping someone else wants them and can make use of them
  4. Oh boy, where did all these batteries come from?

The first answer is understandable, but not convincing from an environmental or "big picture" point of view. Established recycling technology for lithium-ion batteries has a couple of glaring drawbacks: mainly that it doesn't work that well, and that it is based on melting (which costs a lot of energy).

The second answer is hopeful and often based on the idea that recycling will improve. Research is underway; the most promising work builds on technologies that have existed in the mining industry for over 100 years. The idea in mining is to crush the material and mix it into a fluid containing molecules that attach to the element one wishes to extract. The newly formed molecules float up to the surface of the fluid and can be skimmed off (or assume whatever property makes them easy to separate from the fluid). A further stage then filters out the desired element. The research aims to do this similarly in steps, separating all the desired elements along the way.

The third answer is also hopeful. As we have discussed in previous posts, the idea of a functioning business with second and possibly third life applications for used batteries is quite dependent on buyers and sellers knowing the condition of the batteries. We are hoping to do something of our own in this area, as you know.

Unfortunately, the fourth answer does exist. I am not going to point any fingers and just leave it there.

Unless someone comes up with a better battery technology soon, we are looking at an ever-increasing need for answers 2 and 3 to win out.
Authorities are also unlikely to accept answers 1 or 4 in the long run, IMO (global perspective, visualize massive toxic junkyards in some third world country). The pressure is more likely to increase than decrease on manufacturers, and it will be interesting to see where in the value chain responsibilities land. Passing the buck will probably not be that easy without some serious documentation to show where the batteries went and who is responsible for them.

Pet project

To wind this up I am going to talk a bit about a pet project. We have been asked to demonstrate something on the theme “technology is fun” for an event (Netgroup anniversary) taking place at the Göteborg opera house.
I am going to attempt to build a plasma arc speaker. They have always caught my eye (you can look them up or watch some videos on Youtube), so even if it has already been done, I think it is a perfect fit considering the venue.

First, I would like to point out that this is a high-voltage design, so building it at home with a simple on/off switch is not a great idea if you have small (or overly curious) children running around. It can cause serious heart problems or kill you, and it produces ozone, which can be lethal at concentrations of more than 50 ppm. Great fun, right?

Anyway, the idea I am using is something like this

For the power source, I will use a standard 700W PC power supply, using the 12V output. This will go to the flyback transformer and switching MOSFETs.

The audio source will probably be an obsolete MP3 player. The signal will go to a 555 timer, which will then control the switching MOSFETs (I'll use three STP40NF10Ls in parallel).

The flyback transformer has the property of being able to produce high voltages, in the X kV range. Also, instead of being fed by a DC source it is typically fed by a switched source in the XY kHz range.
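As a rough sanity check on the kV figure, the ideal-transformer relation gives a feel for the turns ratio involved. All numbers below are invented for illustration, ignoring leakage, ringing and losses:

```python
# Back-of-the-envelope for the flyback stage: the output voltage scales
# roughly with the turns ratio times the switched input voltage. The turn
# counts here are made up; real flyback transformers also rely on stored
# magnetic energy, which this ideal-transformer view ignores.

def flyback_output_voltage(v_in: float, n_primary: int, n_secondary: int) -> float:
    """Ideal-transformer estimate of the secondary voltage."""
    return v_in * (n_secondary / n_primary)

# e.g. 12 V switched across a 10-turn primary with a 10000-turn secondary
v_out = flyback_output_voltage(12.0, 10, 10_000)
print(f"~{v_out / 1000:.0f} kV (ideal, before losses)")
```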

My idea is to produce the arc between two stainless steel screws of some respectable dimension.

So, kV and kHz? This means we get a modulated plasma arc that can play the higher frequencies of music well. It should actually be able to do it very well, since there are no moving parts, unlike speaker membranes and similar. It won’t be very loud since I have no plans to ionize western Sweden or kill the guests at the event, but it will be fun to see if I can make it work.

If anyone feels a huge urge to fiddle around with it together with me, I am looking for someone who can prevent me from electrocuting myself and maybe has some ideas for an ozone trap.

Read more


Modern vehicles feature a large set of advanced driver-assistance systems (ADAS), such as electronic stability control, lane departure warning systems, anti-lock brakes, and several others. These systems depend on multiple inputs to model the current state of the vehicle as well as the environment, and one can argue that the vehicle's interaction with the road is the most important input.

The tire-road friction is essential to the stability of the vehicle and has been found to be the most important factor in avoiding crashes. About a quarter of all crashes occur due to weather-related issues, and accidents are twice as likely on wet asphalt as on dry asphalt. There is, however, no way to accurately measure the available friction, so some type of estimation algorithm needs to be developed.


All of these systems undergo extensive testing and are required to be evaluated for a large number of test scenarios. However, this introduces two major issues.

First, real-world data can only be used to analyse what has already happened; thus there is no certainty about what would happen in an untested situation, and there is an increased risk of unforeseen conditions and edge cases. It simply takes too much time to test enough driving cases.

Second, testing across a large set of scenarios is impractical because the true friction value must be known in order to evaluate the system. At dedicated testing sites this value is usually known, but in an environment where the actual friction is itself an estimate, i.e. a public road, testing becomes error-prone: estimations end up being compared to other estimations. This limits the number of sites where valid verification is possible. And if the approach is to train a machine learning algorithm, there is no reference value to serve as the correct answer.


To overcome these issues, simulation has been proposed as a solution. By using a digitally controlled environment, the true value of the tire-road friction is known. Furthermore, simulation allows for controllability, reproducibility, and standardization as measurement errors and uncertainties can be both eliminated and introduced at will. 
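As a toy illustration of the ground-truth point, one can generate simulated data where the true friction is known and score a simple learned estimator against it. The "sensor model" below is invented for the sketch, not the actual setup of the thesis:

```python
import numpy as np

# In simulation we *know* the true friction mu for every sample, so a learned
# estimator can be scored against ground truth. The signal models below
# (fake wheel slip and fake peak deceleration) are made up for illustration.
rng = np.random.default_rng(0)

n = 1000
mu_true = rng.uniform(0.1, 1.0, n)                      # ground-truth friction
slip = 0.1 * mu_true + 0.01 * rng.standard_normal(n)    # fake wheel-slip signal
accel = 9.81 * mu_true + 0.2 * rng.standard_normal(n)   # fake deceleration signal

# Least-squares estimator: mu ~ w1*slip + w2*accel + b
X = np.column_stack([slip, accel, np.ones(n)])
w, *_ = np.linalg.lstsq(X, mu_true, rcond=None)
mu_est = X @ w

# Only possible because ground truth exists; on a public road, mu_true
# would itself be an estimate.
rmse = float(np.sqrt(np.mean((mu_est - mu_true) ** 2)))
print(f"RMSE against simulated ground truth: {rmse:.3f}")
```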

Simulation of high friction driving

Combine is currently doing a master thesis together with a major Swedish automotive company in which we are investigating the possibility of digitalizing the testing process. Using the generated simulation data, we will train a machine learning algorithm to estimate the tire-road friction. The master thesis is planned to be finalized by the end of the year, so stay tuned for the results.

Read more

Who are you?

My name is Jannes Germishuys, and I am all the way from Cape Town, South Africa. I recently completed my master’s in data science here in Gothenburg and joined Combine straight after graduation. Before my segue into data science, I actually majored in actuarial science, more commonly known as insurance mathematics, and after my studies I worked at a data science startup for 2 years.

What brings you to Sweden?

One of my main reasons for choosing Sweden was that during my visits here, I was always amazed by the openness of people to innovation and technological progress. I realized that I wanted to deepen my knowledge and experience in such an environment and found a master’s programme that perfectly matched my interests. I also wanted to broaden my horizons by experiencing a different culture, and the diversity of Sweden’s academic and working environments made me feel welcomed as an international student.

Why did you end up choosing Combine?

My primary goal when I started job-hunting was to find a great team of people with a shared sense of drive and purpose. Within a few minutes of meeting Benedikt (group manager for Data Science Solutions Gothenburg) and the rest of the team, I immediately felt that it would be a great cultural fit. I was also drawn to the ‘Enter the next level’ philosophy, which means that the technical problems Combine takes on are not only relevant but also interesting and important for progress in data science.

Which areas of Data Science interest you the most and why?

I have been fortunate enough to be involved in a diverse array of projects, from building speech-to-text engines using natural language processing to modelling water distribution networks using probabilistic graphs. This means that I usually look for the interesting problems rather than the ones that match a particular part of the data science toolkit. However, during my years of work and study, I worked deeply in natural language processing and also developed a strong research interest, as I helped to develop the initial framework for Swedish fake news detection with the Research Institutes of Sweden (RISE) for my master’s thesis project.

Can you tell us an interesting fact that not many people know about you?

Sure. I think people may notice a slight twang in my accent, and that’s because I went to high school in the island nation of Mauritius in the Indian Ocean, where I learned French and became a certified open water diver.

Read more

Predicting the complete lifetime of lithium-ion batteries

Combine is a co-founder of the company AiTree Technology AB. The vision is to provide a data-driven machine learning solution that predicts the complete lifetime of lithium-ion batteries, from first life to end of life. The interest in our solution is immense; we see large, medium-sized and small companies alike looking for a way to handle their batteries more efficiently.
Stay tuned for more information!

The IP of the tool “Sympathy for Data”

In April Combine acquired the Intellectual Property of the data science tool “Sympathy for Data”.
Our intention is to continue to license Sympathy as an open-source tool, where add-on products such as cloud services, cluster support, etc. will be included in an enterprise license. The focus is now on developing functionality such as streaming support, cluster support and cloud services to further strengthen our ability to deliver kick-ass solutions to our customers.


I am glad to announce that we are moving ahead with the establishment of an office in Stockholm.
We have now signed the contract for the office at Dalagatan 7, close to the central station.
We have also signed our first two engineers in Stockholm. More information about this will follow after the summer.

Hardware In the Loop

Combine will, together with a new partner, develop and sell an off-the-shelf HIL solution.
All partners have the know-how and a strong network from previous work with vehicles, controls systems, and HIL solutions.
We aim to provide our customers with a more efficient, easily calibratable and plug-and-play solution that is built on open standards.

Ocean Data Factory

We are excited to announce that Combine will participate as AI experts in the collaborative work of building an Ocean Data Factory (ODF)!
ODF, which is a part of Vinnova’s investments to speed up development within AI, will be an arena to build competence and nurture innovation.
Data collected from the ocean poses challenges such as numerous data sources with varying characteristics and time scales, communication difficulties, and harsh environments for the sensors, which can lead to poor data quality. Overcoming these challenges using efficient AI will be vital for the future of the blue economy and sustainable ecosystems.

To summarize

The start of this year has been exciting, with new initiatives that strengthen our position both as a specialist supplier and as an innovative product development company. I believe that our investments will be fully up and running during this year, leading to more interesting opportunities in the future.

Now, I’m heading to Italy for some relaxation and vineyards.
Have a nice summer.

Read more


One of the things that has always drawn my attention is automated vehicular control strategies and how they could reshape the transport sector dramatically. One method that many automotive manufacturers have recently been developing is called platooning. A platoon is a convoy of trucks that maintains fixed inter-vehicular distances, as shown in Figure 1, and is usually applied on highways.

Figure 1: Trucks Platoon

The advantages go beyond the driver's convenience and comfort. Having a lead truck with a large frontal area reduces the aerodynamic drag acting on the succeeding vehicles. Therefore, the torque required to drive the trucks at a certain speed decreases, which leads to lower fuel consumption. That means, of course, less CO2 emissions and a lower financial burden.

At the single-vehicle level, however, there is another approach that has been investigated for better fuel economy. It utilizes future topography information to optimize the speed and gear of a vehicle travelling in hilly terrain, exploiting the vehicle's potential and kinetic energy storage. In this approach the velocity varies along the road depending on the road gradient. This look-ahead strategy could be seen as a contradiction to the platooning approach, in which vehicles maintain almost the same speed along the road.


A combination of these approaches could be implemented using a model predictive control (MPC) scheme. Since there are many process constraints, such as inter-vehicular distances, maximum engine torque and road speed limits, MPC is a perfect candidate, especially since in many cases the system operates close to the limits. The control design can be handled in two ways: centralized control and decoupled control. In the centralized controller, as shown in Figure 2, all the vehicles' private data, such as mass and engine specs, together with their states, such as velocity and time headway, are sent via vehicle-to-vehicle communication to a central predictive controller, which could be located in one of the trucks (probably the lead vehicle) or even in a cloud. One method used for optimal control is to formulate a convex quadratic programming problem (CQPP), in which every local minimum is a global minimum. The problem is as follows:

$$ \min\, z = f_0(x) \\
\text{s.t.}\;\; f_i(x) \leq 0, \quad i = 1, \dots, m \\
Ax = b $$

where f0 is the objective function and f1, …, fm are the inequality constraint functions, all of which are convex; the equality constraints are affine. In the platoon case, some convexification is needed in order to obtain a CQPP. Hence, the problem is solved and the optimal speed and time headway references are sent back to the vehicles' local controllers. This approach optimizes the fuel consumption for the whole platoon rather than for individual vehicles: the group interest comes first. One drawback is that solving the problem requires handling huge matrices, since all the vehicles' information is processed at once. In other words, this approach is rather computationally demanding.

Figure 2: Centralized adaptive cruise control
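To make the CQPP structure concrete, here is a tiny equality-constrained instance solved via its KKT system. This is a minimal sketch, not the platoon problem itself; inequality constraints, which the platoon problem also has, would additionally need an active-set or interior-point method:

```python
import numpy as np

# Minimal instance of the convex QP structure above: minimize a quadratic
# objective subject to linear equality constraints by solving the KKT system.
# Numbers are invented for illustration.

Q = np.array([[2.0, 0.0], [0.0, 2.0]])   # positive definite -> convex
c = np.array([-2.0, -4.0])               # objective: 0.5 x'Qx + c'x
A = np.array([[1.0, 1.0]])               # equality constraint: x1 + x2 = 1
b = np.array([1.0])

# KKT conditions as one linear system: [Q A'; A 0] [x; lam] = [-c; b]
kkt = np.block([[Q, A.T], [A, np.zeros((1, 1))]])
rhs = np.concatenate([-c, b])
sol = np.linalg.solve(kkt, rhs)
x, lam = sol[:2], sol[2:]
print("optimal x:", x)   # unique global minimum, since the problem is convex
```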

The decoupled architecture, as depicted in Figure 3, could be a solution to the computational capacity issues. Instead of solving the quadratic programming (QP) problem for the whole platoon, each vehicle considers only itself, which is why the approach is called greedy. The problem is solved starting from the leading vehicle and going backwards: each vehicle solves its QP, considering the gap in front of it and the road topography, and sends its states to the succeeding vehicles. The pros of this approach are that the trucks do not need to share their private data and the matrices are much smaller, so the computation time is less than in the centralized strategy. The solution, however, is not as optimal as the centralized one.

Figure 3: Greedy approach


As mentioned above, a convex quadratic programming problem is formulated to obtain the fuel-saving velocities. Since the vehicle dynamics are quite nonlinear, linear approximations are needed; therefore, finding an appropriate velocity reference is essential, under the assumption that the vehicle will be driven close to that reference. Finding such a reference should consider many factors, such as the maximum traction force along the road, the road speed limits and the cruise speed set by the driver. Another challenge is gear optimization, which could be solved using dynamic programming. The complexity of the dynamic programming problem increases exponentially with the number of vehicles; as a result the problem becomes computationally demanding and therefore not very suitable for real-time implementation.

Read more

The simplest diagnostic example would basically consist of two sensors, y and z, measuring the same unknown quantity x. Considering that the sensor values could include errors f1 and f2, the resulting system becomes:

y = x + f1                           (eq. 1)
z = x + f2                           (eq. 2)

As x is the only unknown variable, this system of equations is overdetermined.
This enables the construction of a residual, that is, a combination of known quantities that equals zero in a fault-free scenario. Residuals are usually denoted r, which in this case results in the following residual:

r = z – y = f2 – f1

The residual r can detect the physical faults f1 and f2, but there is no way to determine which of the faults has caused r to deviate from zero. The ability to pinpoint which fault has caused the deviation is known as the isolability of the system. By adding a third equation to the system, full isolability is achieved.

u = x + f3                           (eq. 3)

r = z – y = f2 – f1
r1 = y – u = f1 – f3
r2 = z – u = f2 – f3
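A quick numerical version of these residuals shows the isolability in practice. The fault size, noise level and detection threshold below are arbitrary choices for the sketch:

```python
import numpy as np

# Three sensors y, z, u measure the same x; inject a fault on sensor z (f2)
# and see which residuals react. Fault size, noise and threshold are arbitrary.
rng = np.random.default_rng(1)
x = 5.0
noise = 0.01
f1, f2, f3 = 0.0, 1.0, 0.0          # fault only on sensor z

y = x + f1 + noise * rng.standard_normal()
z = x + f2 + noise * rng.standard_normal()
u = x + f3 + noise * rng.standard_normal()

r  = z - y          # sensitive to f1, f2
r1 = y - u          # sensitive to f1, f3
r2 = z - u          # sensitive to f2, f3

fired = [abs(res) > 0.5 for res in (r, r1, r2)]
print(fired)        # r and r2 fire, r1 does not: the fault is isolated to f2
```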

It is however possible to create residuals through which all three faults are detectable by combining all three equations, e.g.

r3 = z – 0.5y – 0.5u

This residual does not contribute any additional information compared to what is already given by r, r1 and r2, which follows from the sets of equations used to create each residual:

{E1, E2} resulting in r                   (set 1)
{E1, E3} resulting in r1                 (set 2)
{E2, E3} resulting in r2                 (set 3)
{E1, E2, E3} resulting in r3           (set 4)

What distinguishes the top three sets from the bottom one is that the top three are Minimal Structurally Overdetermined sets of equations, also known as MSOs. The minimal part means that an MSO does not contain any other overdetermined set of equations; set 2 is a subset of set 4, for instance, so set 4 is not minimal. The structural part of MSOs enables analysis of very complex systems, as only the existence of unknown variables and faults is taken into account, not how they enter the equations. For example, equation 1 would structurally be summarized as "x and f1 exist". For a system of equations this can be plotted as a matrix, where each row corresponds to an equation and each column represents the existence or non-existence of a fault or unknown variable. This structure can then be analyzed using the Dulmage-Mendelsohn decomposition.

For additional information on computing MSOs, see the Fault Diagnosis Toolbox on GitHub. One interesting application of residuals is model validation, which is possible because if a model is correct, the residual value is likely to be low, and vice versa. If a model has low accuracy, it is often of interest to pinpoint the inaccuracy to a particular subpart of the model, if possible. This can be achieved by letting the faults {f1, f2, f3} represent model equation errors {fe1, fe2, fe3} and then generating residuals based on MSOs.

By using as few equations as possible in each residual, maximum isolability with respect to model inaccuracy can be achieved. One method to convert residual values into a single metric (in order to compare the validity of different model equations) is to compute the mean value of each residual sensitive to a specific fault fex, and then multiply these means together into a single value R_fex. The absolute value of R_fex doesn't provide much information, but by comparing R_fex to the values generated from residuals sensitive to other faults (fey, fez, …) an indication of model accuracy is obtained.
R_fe1 > R_fe2 -> equation 1 is likely of lower accuracy than equation 2.
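A minimal sketch of this metric, with invented residual means and a hypothetical sensitivity table; only the relative comparison between faults carries meaning:

```python
# R_fex metric as described above: for each fault, multiply together the mean
# values of the residuals sensitive to it. All numbers here are made up.

residual_means = {"r": 0.9, "r1": 0.05, "r2": 0.85}   # |mean| per residual
sensitivity = {                                        # which residuals see which fault
    "f1": ["r", "r1"],
    "f2": ["r", "r2"],
    "f3": ["r1", "r2"],
}

R = {fault: 1.0 for fault in sensitivity}
for fault, residuals in sensitivity.items():
    for name in residuals:
        R[fault] *= residual_means[name]

ranked = sorted(R, key=R.get, reverse=True)
print(ranked)   # the fault with the largest product is the prime suspect
```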

For further information and examples on bigger models see (Karin Lockowandt, 2017, p.30).

Read more

ODF will be an arena to build competence and nurture innovation. It is open to all who believe that crunching data from the ocean is first of all fun, secondly, holds the answers to a sustainable blue economy and, thirdly, gets really productive when different competencies work together! Data collected from the ocean poses challenges such as numerous data sources with varying characteristics and time scales, communication difficulties, and harsh environments for the sensors, which can lead to poor data quality. Overcoming these challenges using efficient AI will be vital for the future of the blue economy and sustainable ecosystems.

ODF will be headed by professor Robin Teigland from Chalmers University of Technology. SCOOT (Swedish Centre for Ocean Observing Technology) takes on the coordinating role. Stay tuned for more information in the future.

ODF is part of Vinnova’s investment to speed up development within AI.

Read more


Deep learning (DL) is a hot topic in modern society and one of the most rapidly growing technical fields today. One of many subjects that could benefit from deep learning is control theory. Its nonlinearities enable the implementation of a wider range of functions and adaptability to more complex systems. There has been significant progress in generating control policies in simulated environments using reinforcement learning. Algorithms are capable of solving complex physical control problems in continuous action spaces in a robust way. Even though the tasks are claimed to have real-world complexity, it is hard to find an example of such high-level algorithms in an actual application. Moreover, we have found that in most applications in which these algorithms have been implemented, they have been trained on the hardware itself. This not only places high demands on the hardware of the system but might also be time-consuming or even practically infeasible for some systems. In these cases, a more efficient solution would be to train on a simulated system and transfer the algorithm to the real world.

Furthermore, one might wonder if a traditional control method would perform better or worse on the same system. In order to recognize how well the deep learning algorithm is actually performing, it would be interesting to compare it to another method on a similar control level. 

The main purpose of this project was to provide an example of a fair comparison between a traditional control method and an algorithm based on DL, both run on a benchmark control problem. It should also demonstrate how algorithms developed in simulation can be transferred to a real physical system. 



Due to its unstable equilibrium point, the inverted pendulum is a commonly used benchmark in control theory. Many variations of this system exist, all based on the same principal dynamics. One example is the unicycle, whose principal dynamics can be viewed as an inverted pendulum in two dimensions. Thus, as a platform to conduct our experiments, we constructed a unicycle.

Figure 1: CAD model of the unicycle




Our main focus for the design was to keep it as lightweight and simple as possible. To emphasise the low hardware requirements, we chose the low-cost ESP32 microcontroller to act as the brain of our system. On it, we implemented all sensor fusion and communication to surrounding electronics necessary to easily test the two control algorithms on hardware. We dedicated one core specifically for the two control algorithms and added a button to switch between the two algorithms with a simple press. 

To be used in simulation and control synthesis, we derived a nonlinear continuous-time mathematical model using Lagrangian dynamics. The unicycle is modelled as 3 parts, the wheel, the body and the reaction disk, including the inertia from all components in the hardware. It has 4 degrees of freedom; the spin of the wheel, the movement of the system, the pitch of the system and the rotation of the disk. The external forces on the system come from the disk and wheel motors. 


Controller Synthesis 

The infinite-horizon linear quadratic regulator (LQR) is a model-based control problem whose solution is a state feedback controller. The feedback gain is determined offline by minimizing, from an arbitrary initial state, a weighted sequence of states and inputs over a time horizon that tends towards infinity. The LQR problem is one of the most commonly solved optimal control problems. Since a mathematical model of the system is available, and given its characteristics, we implemented an LQR controller for this project.
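The offline gain computation can be sketched as follows, using a discrete double integrator as a stand-in since the unicycle model itself is not reproduced here:

```python
import numpy as np

# Not the unicycle model: a discrete double integrator stands in, with the
# LQR gain found by iterating the discrete Riccati equation to a fixed point.
# Weights dt, Q, R are illustrative choices.

dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([10.0, 1.0])   # state weights
R = np.array([[1.0]])      # input weight

P = Q.copy()
for _ in range(5000):      # fixed-point iteration of the Riccati equation
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# The closed loop A - B K should be stable: eigenvalues inside the unit circle
eigs = np.linalg.eigvals(A - B @ K)
print("gain K:", K, "closed-loop |eig|:", np.abs(eigs))
```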

For our deep learning control of the unicycle, we chose proximal policy optimization (PPO). The method is built on policy-based reinforcement learning, which offers practical ways of dealing with continuous spaces and an infinite number of actions. PPO has shown superiority in complex control tasks compared to other policy-based algorithms and is considered a state-of-the-art method for reinforcement learning in continuous spaces.
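The core of PPO, the clipped surrogate objective, can be shown in isolation on made-up probability ratios and advantages, with no network or environment involved:

```python
import numpy as np

# PPO's clipped surrogate objective in isolation. The ratios and advantages
# below are invented; in training they come from the policy network and an
# advantage estimator.

def ppo_clip_loss(ratio: np.ndarray, advantage: np.ndarray, eps: float = 0.2) -> float:
    """Negative clipped surrogate: the quantity PPO minimizes."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return float(-np.mean(np.minimum(unclipped, clipped)))

ratio = np.array([0.5, 1.0, 1.5])      # new policy prob / old policy prob
advantage = np.array([1.0, 1.0, 1.0])  # estimated advantages

# The clip caps the third sample's contribution at 1.2 even though its
# ratio is 1.5, discouraging overly large policy updates.
print(ppo_clip_loss(ratio, advantage))
```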

To make a long story short, we trained the algorithm by writing up the mathematical model of the unicycle in Python as an environment for the agent to train in. The actions the agent can take are the inputs to the two motors. After taking an action it moves to a new state and receives a reward. After millions of iterations of taking actions and receiving rewards, the agent eventually learns how to behave in this environment and creates a policy to stabilize the unicycle.



Both methods successfully managed to stabilize the system. The LQR outperformed the PPO in most respects in which the hardware did not limit the control. As an example, in practice the LQR managed to stabilize from a maximal pitch deviation of 28 degrees, compared to 20 degrees for the PPO method. We observed this suboptimal behaviour of the PPO in several situations. Another example can be seen when applying an external impulse to the system.

Figure 2

As can be seen, the LQR handles the impulse in a somewhat expected way, while the PPO goes its own way.

This unexpected behaviour is not desirable for this system but we think it might be seen as beneficial for other systems. For example, systems with unspecified or even unknown optimal behaviour. However, for systems with a specified known optimal or expected behaviour, we would recommend the good old LQR, if applicable. 

Even when exposed to model errors, the PPO did not show any sign of unreliability compared to the LQR in states it had encountered during training. However, when introduced to unknown states, the performance of the PPO is impacted. By keeping the limits of the training environment general enough this should not be an issue. However, when dealing with systems with large or even unknown state limits, LQR is probably a safer option. 

We believe our project has shown a good and fair comparison between these two methods on the same system, and has given a good and informative example of how a DL algorithm trained in simulation can be transferred to a real physical system. The unicycle is of course only an example of such a system, but we encountered a lot of interesting features that can be generalized to benefit other projects. If you have doubts, please read our report!

Read more

"The cloud", or properly "cloud computing", is a general term for computation or data storage performed on a server at another location, accessible through an internet connection, i.e. "in the cloud". Hence the computation is made neither on your local computer nor on a local server.

The area of use for cloud computing has primarily been data/file storage and offline computation, whereas online computation is currently on the rise. It is particularly strong within the field of data science, which looks at large amounts of data to gain insight and retrieve relevant information for making appropriate decisions. This is one of Combine's specialty areas, supported by our open-source tool Sympathy for Data and its cloud companion Sympathy Cloud.

Connected services within the automotive industry 

An area where online cloud computation is especially on the rise is the automotive industry, because once one automotive manufacturer delivers functionality within a new growing area, the race begins. And it has officially started. Some of the key players so far are:

How it works 

One application of the cloud within the automotive industry is to keep track of the location of all connected vehicles at any time and assess whether a particular vehicle requires information for its current trajectory. This information could, for example, be of informative or safety critical character or crucial for the driver/vehicle to plan ahead. 

The assessment is broken down into localization, trajectory estimation and the translation of everything to the same timeframe. Depending on where you put the responsibility for the final assessment, you could either have the cloud make the complete assessment or send the necessary information down to the intended vehicle for final assessment through onboard data processing.
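A much-simplified sketch of such an assessment, with invented thresholds and flat-earth geometry standing in for real localization and trajectory estimation:

```python
import math

# Cloud-side relevance check: given a vehicle's position/heading and a
# reported hazard, decide whether the hazard lies inside a corridor along the
# vehicle's projected straight-line trajectory. All thresholds and the
# straight-line assumption are simplifications for illustration.

def hazard_relevant(vehicle_xy, heading_rad, hazard_xy,
                    corridor_m=50.0, horizon_m=2000.0) -> bool:
    """True if the hazard falls inside a corridor along the projected path."""
    dx = hazard_xy[0] - vehicle_xy[0]
    dy = hazard_xy[1] - vehicle_xy[1]
    # Rotate into along-track / cross-track coordinates
    along = dx * math.cos(heading_rad) + dy * math.sin(heading_rad)
    cross = -dx * math.sin(heading_rad) + dy * math.cos(heading_rad)
    return 0.0 <= along <= horizon_m and abs(cross) <= corridor_m

# Vehicle at origin heading east; broken-down truck 500 m ahead, 10 m off-center
print(hazard_relevant((0.0, 0.0), 0.0, (500.0, 10.0)))   # relevant
print(hazard_relevant((0.0, 0.0), 0.0, (-200.0, 0.0)))   # behind the vehicle
```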

The main application for this technology so far has been the flow of information to the driver about potential safety threats along the road ahead, such as hazardous obstacles (broken-down vehicles or road work areas) or ambient information (road friction). But by introducing cloud-to-cloud communication, where for example one cloud holds infrastructure information (such as traffic light states), the spectrum of information is broadened.

However, the main benefit of the cloud would come from using it as a “mother of all sensors”: a sensor able to deliver statistically substantiated data about the vehicle’s surroundings. It could then help autonomous vehicles reach level 5 driving automation.

The main challenges in getting there are:

  • Latency: since this is a real-time system, the latency of both communication and computation is crucial.
  • Computational reliability: reliability is always crucial, but in an online cloud computation framework the information sent to the vehicle must be fully trustworthy for the vehicle to act on it without hesitation.
  • Having enough data: both enough data from a single vehicle to assess a situation, and data from a sufficient number of vehicles.

The future of online cloud computing 

As internet accessibility improves and data transmission becomes cheaper, especially with 5G on the horizon, the future of online cloud computing looks promising. If more and more companies move towards data sharing, the range of possible applications grows even further.

The main issue when collecting and sharing personal data (such as vehicle data or data from your mobile phone) in such quantity and quality, however, is privacy. Sharing and storing data continuously, at every instant, opens the door to misuse such as surveillance or tracking, no matter how anonymized the data may be. In that sense, we risk moving towards an “all-seeing eye” society.


Kickstarted by the need for portable electronics such as phones and laptops, and fueled by the increasing demand for storage capacity, battery technology has seen unprecedented improvements over the past decade. Nevertheless, even the most advanced battery types share a common trait: they degrade over time, decreasing the amount of energy (or capacity) they can store. After they drop to about 80 % of their nominal capacity – typically after several thousand cycles – they are considered to have reached the end of their so-called remaining useful life (RUL) and are usually discarded. But the end of RUL does not have to mean retirement. Batteries are prime candidates for second-life applications in systems with lower requirements, such as stationary installations or auxiliary units. But how can we determine how long a battery has left until absolute failure? After all, any investment in cycling applications requires a reliable assessment of the remaining capacity and of the projected degradation trajectory. Enter another promising field experiencing rapid growth: machine learning.

One can argue that the most crucial part of machine learning is acquiring well-annotated data. Even the most complex model will not be able to make sense of a dataset that is too small, missing key variables or of poor quality. Without good-quality data, the choice of model or parameter tuning is of little importance. The team behind ”Data-driven prediction of battery cycle life before capacity degradation” (Severson et al., 2019) provides an extensive dataset monitoring battery degradation. In their paper, the authors describe it as the largest publicly available dataset so far, containing close to 97,000 charge/discharge cycles of 124 commercial LiFePO4 battery cells.

During the charge/discharge cycles, the team continuously monitored the voltage, current, temperature and internal resistance of the batteries. External factors influencing battery degradation were limited during data generation by performing the measurements in a temperature-controlled environmental chamber set to 30 °C. The batteries were subjected to varied fast-charging conditions to produce different degradation rates while keeping identical discharge conditions.

In the feature engineering process, the authors found a single feature with a linear correlation factor of -0.92 with the log of RUL: the log of the variance of the difference between the discharge capacity curves, as functions of voltage, of the 100th and the 10th cycle. This engineered feature could thus be used by itself to achieve strong RUL predictions after the 100th charge/discharge cycle.
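The engineered feature is straightforward to compute once the discharge capacity curves have been interpolated onto a common voltage grid. A minimal sketch (the function name and array layout are our own):

```python
import numpy as np

def delta_q_feature(q_late, q_early):
    """log10 of the variance of the discharge-capacity difference curve.

    q_late and q_early are discharge capacities (Ah) sampled on a common
    voltage grid, for a late cycle (e.g. the 100th) and an early
    reference cycle (the 10th in the paper, the 1st in our variant).
    """
    delta_q = np.asarray(q_late) - np.asarray(q_early)
    return np.log10(np.var(delta_q))
```

In the paper this value, computed between cycles 100 and 10, is the single strongest predictor of the log of RUL.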

Using Elastic Nets, with default parameters, we obtained the following prediction results after the 100th cycle. 
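For illustration, here is a minimal sketch of that modelling step using scikit-learn’s ElasticNet with default parameters. The data is synthetic, standing in for the real feature values (we do not reproduce the dataset here), and the roughly linear log-log relation is assumed only for the demo:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)

# Synthetic stand-in for the engineered feature: one log-variance value
# per cell, roughly linearly related to log10(RUL) as in the paper.
X = rng.uniform(-5, -1, size=(124, 1))
log_rul = -0.9 * X[:, 0] + rng.normal(0.0, 0.05, size=124)

model = ElasticNet()                 # default parameters, as in the text
model.fit(X, log_rul)
pred_rul = 10 ** model.predict(X)    # back-transform from log10 to cycles
```

With the real feature values, fitting in log space like this is what produces the prediction chart below.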


Note that we decided to use the first cycle as the reference when creating the feature, instead of the 10th as in the paper. Next, the degradation of five (randomly selected) individual batteries is shown together with their predicted RUL. The exact RUL value is difficult to predict correctly, but the precision may suffice for simple classification tasks.


While the engineered feature has an outstanding correlation with RUL, it is nevertheless very restrictive: using it means performing 100 charge/discharge cycles in a controlled environment before a prediction is possible. In a commercial setting, such a setup would be too time-consuming, costly and thus impractical. Therefore, finding other features that are less restrictive but still offer good predictive performance is important. For example, we found that using the first cycle as the reference instead of the 10th is a suitable candidate for predicting RUL, increasing the commercial viability of the prediction method. The figure below visualizes the correlation coefficient between the feature and RUL for each cycle up to 200 cycles.


The figure tells us that the linear correlation is largest around 100 cycles, but it might still be possible to use the feature after only 40-50 cycles with a modest reduction in performance. Keep in mind that linear correlation is most indicative of prediction performance for linear machine learning methods; non-linear methods can extract useful information for prediction even where the linear correlation is low.
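The per-cycle correlation curve in the figure can be computed along these lines (the function name and array layout are our assumptions):

```python
import numpy as np

def correlation_by_cycle(q_curves, rul, ref_cycle=0, max_cycle=200):
    """Pearson correlation between the log-variance feature and log10(RUL)
    for every candidate cycle up to max_cycle.

    q_curves: array of shape (n_cells, n_cycles, n_voltage_points),
    discharge capacity curves sampled on a common voltage grid.
    rul: cycle life per cell.
    """
    log_rul = np.log10(rul)
    corrs = []
    for c in range(ref_cycle + 1, max_cycle):
        delta = q_curves[:, c, :] - q_curves[:, ref_cycle, :]
        feat = np.log10(np.var(delta, axis=1))
        corrs.append(np.corrcoef(feat, log_rul)[0, 1])
    return np.array(corrs)
```

Plotting the returned array against the cycle index reproduces the shape of the figure above.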

Until now, we have only considered one feature for predicting RUL, but several more can be engineered from the charge/discharge cycle data. Introducing more features leads to another problem, namely feature selection. Some regression methods can report the importance of each feature for prediction after training. One example is the Random Forest Regressor (RFR), which is also a non-linear estimator. Below is an example of the feature importances reported by an RFR after fitting.
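A minimal sketch of reading feature importances from a fitted RFR, using synthetic data in which only the first of ten hypothetical features actually drives the target:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Hypothetical feature matrix: ten engineered cycle features per cell,
# of which only the first determines the (synthetic) cycle life.
X = rng.normal(size=(124, 10))
rul = 10 ** (3.0 - X[:, 0])

rfr = RandomForestRegressor(n_estimators=100, random_state=0)
rfr.fit(X, rul)
print(rfr.feature_importances_)   # normalized importances, summing to 1
```

On real data each column would be one engineered feature, and the printed vector is what the importance chart above visualizes.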


Selecting the smallest subset with a combined importance of more than 0.8 yielded the top 6 features, and the following prediction chart was obtained on test data.
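The subset selection itself can be sketched as follows; the helper name and the example importance values are invented for illustration:

```python
import numpy as np

def smallest_important_subset(importances, threshold=0.8):
    """Indices of the smallest feature subset (taken in order of
    decreasing importance) whose combined importance exceeds threshold."""
    order = np.argsort(importances)[::-1]          # most important first
    cum = np.cumsum(importances[order])
    k = int(np.searchsorted(cum, threshold) + 1)   # first index past threshold
    return order[:k]

imp = np.array([0.35, 0.05, 0.25, 0.08, 0.15, 0.12])
print(smallest_important_subset(imp))              # -> [0 2 4 5]
```

Here features 0, 2, 4 and 5 together account for 0.87 of the total importance, so four features suffice; on the real importances this procedure selected the top 6.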


As can be seen, the predictions are best when the RUL is smaller than 200, while between 250 and 1500 the mean prediction stays close to the true RUL at the cost of increasing variance. Only half of the battery cells live longer than 850 cycles, which reduces the amount of training data for larger RULs and introduces a bias towards better predictions at smaller RULs.

The eager reader might by now be wondering about the elephant in the room – what about actual batteries? After all, they would be the focus of any real battery lifetime prediction effort. The discussion above, however, has considered individual cells, tens or even hundreds of which are usually combined into battery packs to obtain the necessary voltage and current in commercial applications. Unfortunately, no public dataset of the same magnitude as the Stanford study is available. As a stopgap measure, we can construct a collection of virtual batteries by simply aggregating cells to mimic existing products – e.g. a 72 Ah LiFePO4 pack. This method is akin to bootstrapping, in this case choosing 68 cells with replacement from the dataset to obtain a collection of training and testing packs. It is also a poor approximation of reality, since it does not model any of the possibly complex interactions within a pack; it thus serves more as a preview of possible future studies. We can then train a Random Forest Regressor on the whole lifetime until failure of the training selection and predict the RUL of the testing collection. The figure below presents the resulting predictions in orange against the blue RUL lines of five selected packs, showing not only their striking similarity but also the high accuracy of our predictions (\(r^2\) ≈ 0.99). It depicts our main idea – by aggregating cells we effectively “smooth” over their individual impurities, removing uncommon outliers and thus enhancing the predictive capabilities of our model.
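A minimal sketch of the virtual-pack construction, assuming the per-cell features and cycle lives have already been computed. Taking the pack value as the mean over its cells is our simplification of the aggregation; as noted, it ignores intra-pack interactions:

```python
import numpy as np

def make_virtual_packs(cell_features, cell_rul, n_packs=50,
                       cells_per_pack=68, seed=0):
    """Bootstrap virtual battery packs by drawing cells with replacement.

    Each pack's feature vector and cycle life are taken as the mean over
    its cells, which smooths out per-cell outliers but models none of
    the interactions inside a real pack.
    """
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(cell_rul), size=(n_packs, cells_per_pack))
    pack_features = cell_features[idx].mean(axis=1)
    pack_rul = cell_rul[idx].mean(axis=1)
    return pack_features, pack_rul
```

The resulting pack-level features and RULs can then be split into training and testing collections and fed to the Random Forest Regressor exactly as before.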


Of course, the applicability of this approach needs to be tested extensively against real-world battery health data. Gathering and analyzing it will be the next exciting step on our journey and our contribution toward saving the world. After all, we wholeheartedly agree with Mr. Anderson, and humbly add – the future is electric or not at all.  

AiTree Tech, together with Combine, is eager to contribute to a better environment and a sustainable future. While many actors focus on today’s transition from fossil fuels to batteries, we try to look ahead, focusing on how to reuse batteries after their 1st and 2nd lives.


AiTree’s technology provides a data-driven machine learning solution to predict a lithium-ion battery’s (complete) lifetime, from its 1st life to end of life.
Let’s start the journey together!

Johan Stjernberg CEO@ AiTree Tech 
+46(0)733 80 08 44 

Erik Silfverberg CTO@ AiTree Tech 
+46(0)730 87 40 20 

AiTree Technology@LinkedIn 
