
Blog

Here is an example of an intuitive, easy-to-use interactive dashboard which shows the number of cases for different countries, as well as the cumulative total for the world: https://www.gohkokhan.com/corona-virus-interactive-dashboard-tweaked/

Another issue for the scientific world is how to gather all the information from different sources and regions. To date, around 29 000 scientific papers focused on Covid-19 have been published (https://www.theregister.co.uk/2020/03/17/ai_covid_19/). Going through, categorizing, analysing and joining the information from all these papers is a huge task if done manually. But this is where AI could come into play. Using Natural Language Processing (NLP) techniques, an AI model could go through and gather all the important information from all papers in a fraction of the time it would take a human to do the same. Without a doubt, more research on the subject will emerge and the dataset will only grow from here, so utilizing the power of our computers and AI methods will be essential to keep growing our knowledge globally going forward. In fact, this work has already started in the U.S., and an open dataset is now available (https://pages.semanticscholar.org/coronavirus-research). This dataset is meant to be machine readable, to enable machine learning algorithms to utilize the information within.
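For the curious, below is a minimal sketch of how one could start mining such a machine-readable dataset. It assumes the release ships a metadata file (here called metadata.csv) with title and abstract columns; the exact file layout may differ between releases, so treat it as an illustration rather than a recipe.

```python
# Illustrative sketch: scan paper abstracts in the open dataset for a topic.
# Assumes a metadata file with "title" and "abstract" columns; adjust the
# path and column names to match the release you download.
import pandas as pd

meta = pd.read_csv("metadata.csv", usecols=["title", "abstract"]).dropna()

keyword = "incubation period"
hits = meta[meta["abstract"].str.contains(keyword, case=False)]

print(f"{len(hits)} of {len(meta)} abstracts mention '{keyword}'")
print(hits["title"].head(10).to_string(index=False))
```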

Lastly, we would like to mention that the largest social networks have announced that they will pick up the fight against misinformation on their platforms. As history has already proven, online advertisements and misinformation can really affect the population's behaviour, so this announcement is well received.

Read more

One of the most significant tasks in developing a successful control system is calibration. Companies often have large teams responsible for calibration alone, and the result is more often than not "good enough", as it is extremely hard to find an optimal calibration. We believe that the best approach to calibration involves gaining an understanding of your system, working systematically, and repeatedly testing different sets of calibrations.

The first part of the calibration, done in simulation, is often relatively easy, as one can see the result of a change in a parameter almost instantly. In practice, however, one often needs to compile and upload everything to the ECU in question for even a small parameter change. This approach is both time-consuming and complicated. Since few free-of-charge tools tackle this problem, the task grows even more demanding for smaller companies that do not have the same resources, and for control engineers or students who develop control systems in their free time. To address this issue, we developed an application in which one can conveniently see the results of the calibration in real time. Below, we will describe the concept of the application and how to use it.

Connection

To minimize the performance impact on the user's ECU (from here on referred to as the unit), the application communicates with a small external real-time device over Bluetooth. The wiring consists of connecting two wires to a UART port of the unit as well as positive and negative power. All communication is handled in the C++ library, so all the user needs to do is call the basic library functions. If the user has performance to spare and an unused Bluetooth socket, it is possible to skip the external device and set up a Bluetooth connection directly to the unit.

Calibration

Once connected to the device, the user can start to calibrate, which is done in the Calibration fragment of the application, see Figure 2.a. The calibration is done by adjusting the so-called "Tunebars", either by directly writing the desired value or by dragging the bar left or right. Every time a value has been changed, the application will notify the unit of the change. For us at Combine, user customizability is a big deal, so the user can either add any number of custom Tunebars using an arbitrary parameter name, or remove any Tunebars that are unnecessary, by pressing the "+" or "-" buttons. Hence, the user can calibrate any arbitrary parameter the unit uses in real time.

To enable both parameter identification and fine-tuning, the precision of the Tunebar can be adjusted by changing its factor. By doing this, one can start by finding an approximate parameter value using a large factor. Once found, one can set the approximated value as an offset and fine-tune it using a lower factor, see Figure 2.b.

One can also add any number of buttons, controlling either so-called "Main States" or "Modes", see Figure 2.c. The Main States are defined as mutually exclusive states that the device acts in. This function is meant for the user to easily switch between control states that can only be active one at a time, such as "start", "stop", or "cruise". The Modes, however, are not mutually exclusive and can be used to activate specific functions the user may have, e.g., switch on a light or activate a specific control objective. The Main States and the Modes can be accessed through the navigation bar as well, to easily switch between them in all fragments of the application, see Figure 3.a. The user can define any number of arbitrary Main States and Modes through the C++ library.

Finally, once the user has found a satisfactory calibration, or wants to save the customized Tunebars and buttons, the layout with the current combination of Tunebars and each corresponding parameter value can be saved to be loaded at a later time, see Figure 2.d. This way, the user can quickly go back to another combination of calibration values and compare what effect a specific change of parameter tuning had.

Plot

In most cases, the human eye alone is not sufficient to decide whether a specific calibration is better or worse than another. For this reason, we added a plot function to the application as well. It can be accessed through the navigation bar, see Figure 3.a. The plot works by the user specifying which signals he or she is interested in observing. By specifying the pointers to these signals using the C++ library, the signals will automatically appear in a list in the plot fragment of the application, see Figure 3.b. In this dialog, the user can then choose which signals to plot and adjust some plot settings. Similar to the Calibration fragment, the user can also add or remove any number of plots and scroll between them, see Figure 3.c.

Manual Control and Debug Console

One of the main objectives behind the application was to make it as generic as possible. In many cases, one can imagine sending reference signals as a useful feature. Even though this could be achieved using the Calibration fragment, the user would only be able to give linear reference signals to the unit. There exist many systems in which angles and power are used as references, so we decided to include these cases in the initial release of the application. Thus, we created a fragment in which the user can give manual control references using a joystick, yielding a steering angle and a reference power. To increase the customizability, one can specify how the joystick should work. For instance, the user can redefine the angle range in case the system has a hardware limit, or redefine the angle depending on which quadrant the joystick is in. This can be achieved by defining the angle in one, two, or all four quadrants, see Figure 4.a. For quick access to the Main States and the Modes, the option to add buttons in this fragment was added as well. For a quick setup of the joystick and the buttons, we implemented a save and load function here too.

Finally, for the user to have full control over the unit, we also decided to add a debug console prompt. In this console, the user can transmit any message from the application to the unit or the other way around. Utilizing this, the user can define their own commands to which the unit should respond in a specific way, see Figure 4.b. It can also be used as a debug console in which the unit transmits status information that the user wishes to observe. For easy observation, we decided to add the debug console to the Manual Control fragment. However, we also added it as a fragment of its own in order to get a larger prompt, see Figure 4.d.

The source code of the project can be found on our GitHub page, see https://github.com/combine-control-systems/combine-connect 

Happy calibration!

Read more

Gathering information and understanding the spread of the disease is the first step towards being able to control it. Important insights range from basic statistics, such as the mortality rate, to more complex correlations between e.g. a country's health care system and the growth in infected people.

Around the world, data scientists are gathering data, extracting information and building models aimed at helping experts deploy the best solutions and countermeasures for halting the virus's spread.

If you are an ambitious and curious data scientist, you might be interested in doing some digging yourself. Finding data is of course the first step, and there is plenty available online. One option is to use the daily updated dataset provided by Johns Hopkins CSSE, found on GitHub: https://github.com/CSSEGISandData/COVID-19. This repo is updated daily with statistics on the number of infected people, deaths, and recoveries by country and region.
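As a starting point, the sketch below shows one way to pull the confirmed-case time series straight from the repository and compute per-country totals with pandas. The file path reflects the repository layout at one point in time and has changed before, so adjust it to the current structure.

```python
# Illustrative sketch: load the CSSE time series and list the countries with
# the highest cumulative confirmed case counts. Column names (Province/State,
# Country/Region, Lat, Long) follow the repository's CSV layout.
import pandas as pd

url = ("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/"
       "csse_covid_19_data/csse_covid_19_time_series/"
       "time_series_covid19_confirmed_global.csv")

confirmed = pd.read_csv(url)
by_country = (confirmed
              .drop(columns=["Province/State", "Lat", "Long"])
              .groupby("Country/Region")
              .sum())

latest = by_country.iloc[:, -1].sort_values(ascending=False)  # most recent date
print(latest.head(10))
```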

To get some ideas about what information can be extracted from the data, and what might be interesting, free online blog posts are a good source of inspiration; one example can be found here: https://towardsdatascience.com/9-fascinating-novel-coronavirus-statistics-and-data-visualizations-710cfa039dfd.

If you are interested in digging even deeper, more advanced methods can be used to predict the spread of the disease. One such example is described in https://towardsdatascience.com/using-kalman-filter-to-predict-corona-virus-spread-72d91b74cc8, where an adaptive Kalman filter is used to predict the spread of the virus.
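To make the idea concrete, here is a toy version of such a filter: a two-state (level and daily growth) Kalman filter run over a cumulative case series and then rolled forward to forecast. It is a deliberately simplified model choice for illustration, not the formulation used in the linked post.

```python
# Toy constant-growth Kalman filter over cumulative case counts.
import numpy as np

def kalman_forecast(cases, q=10.0, r=50.0, horizon=7):
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition: [level, growth]
    H = np.array([[1.0, 0.0]])               # we only observe the level
    Q = q * np.eye(2)                         # process noise covariance
    R = np.array([[r]])                       # measurement noise covariance
    x = np.array([[float(cases[0])], [0.0]])
    P = np.eye(2) * 1e3

    for z in cases[1:]:
        x = F @ x                             # predict
        P = F @ P @ F.T + Q
        y = np.array([[float(z)]]) - H @ x    # update with new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P

    # Roll the model forward without measurements to forecast the next days.
    forecast = []
    for _ in range(horizon):
        x = F @ x
        forecast.append(float(x[0, 0]))
    return forecast

print(kalman_forecast([10, 18, 30, 52, 88, 140, 230]))
```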

Stay safe.

Read more

What is ODF?

Ocean Data Factory (ODF) Sweden, started in July 2019, is an initiative at the intersection of industry, academia and the public sector to liberate data from our oceans. ODF Sweden is a part of Vinnova’s investment to speed up national development within AI. The project has two main objectives: to build broader AI competence and to encourage innovation. At the heart of this mission is a principle of openness that encourages broad cross-disciplinary participation from anyone eager to use ocean data to address ocean challenges.


For more information visit the ODF Sweden website HERE


Combine in ODF

As an industrial partner, Combine contributes mainly to the design and implementation of AI and machine learning solutions for use cases within ODF.

General approach to use cases

Once ODF Sweden has selected a use case for further investigation, the methodology for AI implementation follows a key series of iterative steps:

  1. Data collection
  2. Data preparation and cleaning
  3. Setup of training, validation and test sets
  4. Training the models
  5. Evaluating the model using suitable targets
  6. Interpreting model output
  7. Continuing until the output is actionable

During the first six months, our team focused mainly on the use case of the invasive species Dikerogammarus villosus in the Baltic Sea region.

Use case 1: Invasive species D. villosus

Figure 1: D. villosus (the Killer Shrimp) [1]

Initial problem formulation:
  • The killer shrimp's (D. villosus) presence has been recorded in rivers in Western Europe,
  • presumably by travelling through inland waterways from the Black Sea, and
  • assumed to be carried by cargo ships where ocean expanses are too vast to traverse.
Research Question:

Can Machine Learning methods help us predict the areas of the Baltic Sea which would be suitable for the Killer Shrimp?

Data used

  • Presence data from the North Sea & Baltic Sea regions (roughly 3000 data points)
  • Pseudo-absence data from the Baltic Sea region (2.8 million data points)
  • Environmental rasters for key environmental drivers, informed by subject experts, which include surface temperature, surface salinity, substrates, exposure and depth (averaged over the winter months, where appropriate).

Figure 2: Raster feature layers stacked onto a basemap [2]

Finding a needle in a haystack

There is an extreme class imbalance in the presence-absence data that merits additional caution when applying any machine learning classifier. In this case, a naive classifier would have an accuracy of roughly 99.9% if it simply always chooses the majority class – ”Absent”.
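The back-of-the-envelope check below, using the rough counts given above, shows just how hollow that accuracy figure is.

```python
# Why raw accuracy is misleading: ~3 000 presence vs ~2.8 million absence points.
n_presence, n_absence = 3_000, 2_800_000
always_absent_accuracy = n_absence / (n_presence + n_absence)
print(f"{always_absent_accuracy:.4%}")  # ~99.89%, yet recall on "Present" is 0
```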

 

Figure 3: Confusion matrix with some common evaluation metrics

To evaluate model performance in a more useful way, we also consider the importance of each class. In this case, finding all the presence locations is more crucial than missing out on some absence locations, i.e. we can accept more False Positives (FP) than False Negatives (FN). In other words, we favour maximising the Recall score, which tells us how successful we are at identifying the presence locations, over the Precision score. To evaluate this trade-off, we use the AUROC (Area Under the Receiver Operating Characteristic curve), which tells us how well our model discriminates between these two classes.

Figure 4: ROC curve example [3]

By looking at these metrics, we can separate naive majority classifier models (with an AUROC close to 0.5) from models that choose appropriate features to improve our classification performance on the positive (”Presence”) class (AUROC above 0.8).
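A small sketch of this evaluation, using scikit-learn's standard metrics on placeholder labels and scores, could look as follows.

```python
# Illustrative evaluation: recall, precision and AUROC on placeholder data.
from sklearn.metrics import precision_score, recall_score, roc_auc_score

y_true  = [1, 0, 0, 1, 0, 0, 0, 1]                  # 1 = "Present", 0 = "Absent"
y_score = [0.9, 0.2, 0.4, 0.7, 0.1, 0.6, 0.3, 0.8]  # model probability of "Present"
y_pred  = [int(s >= 0.5) for s in y_score]

print("Recall   :", recall_score(y_true, y_pred))     # share of presences we find
print("Precision:", precision_score(y_true, y_pred))  # share of predicted presences that are real
print("AUROC    :", roc_auc_score(y_true, y_score))   # threshold-free class separation
```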

Models used

Tree-based models (single and ensemble) seemed most appropriate, as no feature selection or pre-processing had to be performed, which let us avoid such biases. In addition, tree-based models are easier to interpret, which allows us to directly investigate model predictions and understand the underlying driving factors. We also opted for a deep feed-forward neural network in order to capture more complex features than those provided by tree-based models alone.
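A rough sketch of such a model line-up is shown below, using scikit-learn stand-ins (a single decision tree, a random forest, and a small feed-forward network) on synthetic imbalanced data; the actual models, features and hyperparameters in the use case differ.

```python
# Illustrative model line-up on synthetic, heavily imbalanced data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=20_000, n_features=5, weights=[0.99],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "decision_tree": DecisionTreeClassifier(class_weight="balanced"),
    "random_forest": RandomForestClassifier(n_estimators=200, class_weight="balanced"),
    "neural_net": MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    auroc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name:>13}: AUROC = {auroc:.3f}")
```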

Results

When we consider the AUROC and Recall metrics, we see that the Neural Network manages to outperform both of the other models. We also see that the strong F1 scores attached to the Decision Tree and Random Forest models were mainly due to their preference for predicting the majority class.

Evaluating model decisions

Figure 5: Example of tree model decision on one test case based on SHAP values

Decision tree models allow us to look ”under the hood” and see how individual features contribute to decisions.

In this case, we make use of SHAP values, first discussed by Lundberg and Lee [4], which use a game-theoretic approach to explain the contribution of each feature to the prediction. In Figure 5, we see both the magnitude and direction of the average impact of a feature on the decision to classify this case as "Absent". Some notable factors are that we have a sandy substrate (denoted by 1 in this model) and that the temperature is outside the normal range, but most of all the depth is outside the normal range of D. villosus, which pushes towards the absence outcome.
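For readers who want to reproduce this kind of analysis, the sketch below shows the general pattern with the shap package on synthetic data, with feature names borrowed from the drivers listed above purely for illustration. Figure 5 shows a single-case explanation; the summary plot here gives the closely related global view of feature impact.

```python
# Illustrative SHAP usage for a fitted tree-based classifier on synthetic data.
import shap
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2_000, n_features=5, random_state=0)
X = pd.DataFrame(X, columns=["temp", "salinity", "substrate", "exposure", "depth"])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Some shap versions return one array per class (or a 3-D array) for
# classifiers; pick the positive ("Present") class before plotting.
sv = shap_values[1] if isinstance(shap_values, list) else shap_values
if getattr(sv, "ndim", 2) == 3:
    sv = sv[:, :, 1]
shap.summary_plot(sv, X)   # average magnitude and direction per feature
```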

Visualising model predictions and the potential impacts of climate change

Since our features come in the form of rasters (which are grids of cells with feature values), we are able to use our trained models to make predictions for each cell in the raster grid. The output from the model is then the probability of "presence" in that cell. Below, we have built a web application that helps us visualise the probabilities from some of these models, as well as the impact of future climate change on these probabilities in the Baltic Sea. Specifically, notice the increased suitability of Åland and the eastern coast of Sweden (along Östersjön) under future climate forecasts provided by the Swedish Meteorological and Hydrological Institute (SMHI), one of the partners of ODF Sweden.
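The sketch below illustrates the mechanics of that step: stack the feature rasters, flatten them to one sample per grid cell, run the classifier, and fold the probabilities back into a map. The names model and layers are placeholders for the trained classifier and the stacked rasters.

```python
# Turn per-cell raster features into a presence-probability map.
import numpy as np

def predict_raster(model, layers):
    """model: trained classifier with predict_proba; layers: list of 2-D arrays."""
    stack = np.stack(layers, axis=-1)            # (rows, cols, n_features)
    rows, cols, n_features = stack.shape
    flat = stack.reshape(-1, n_features)         # one sample per grid cell
    valid = ~np.isnan(flat).any(axis=1)          # skip land / missing cells

    prob = np.full(rows * cols, np.nan)
    prob[valid] = model.predict_proba(flat[valid])[:, 1]   # P("Present")
    return prob.reshape(rows, cols)
```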


Click HERE to run our web application


Key takeaways

  • GIS modelling involves domain knowledge of the underlying phenomena which becomes very important for model output interpretation.
  • Data, data, data… The more data, the better our choices of models and the richer our potential insights.
  • Documentation of methods and data extraction is crucial for communicating methods and ideas to groups from a wide range of backgrounds.
  • Results should always be critically approached since assumptions about the data and the models strongly impact the model outcomes and success criteria.
  • The methods used have demonstrated that useful insights can be generated, which has raised many other interesting questions. For example, given the direction of currents along a particular coastline, which paths become most probable for shrimp migration?

Try it out on Kaggle:

Our progress has been fully documented on Kaggle Notebooks to encourage further discussion and collaboration:

Next steps

  • The nature of this project is that our problems continually evolve in line with our ability to access an increasing amount of data and better understand the important questions that need answering. This is clear in the transition of our methods from Notebook v1 to Notebook v2 in Kaggle. Our hope is that this will continue as outside participation increases and more data becomes available.
  • One avenue we are exploring is to expand the current deep learning model to raster features using a convolutional approach, because there are spatial correlations in rasters that make our point-based model costly and inefficient. This would allow us to also forecast abundance figures and not simply presence, and answer a host of other questions (e.g. predicting raster density landscapes). To achieve this, we would need to obtain significantly more data, either by collecting more data or by augmenting the data we currently have. Figure 6 below illustrates what such a model might look like:

Figure 6: A convolutional model proposed by Christophe Botella, Alexis Joly, Pierre Bonnet, Pascal Monestiez and François Munoz [5]

 

  • Another future aim is to gain a better understanding of the migration pattern of this species, which so far is assumed to spread via shipping traffic, but whose presence in particular parts of the Baltic Sea seems to show that there is more to the story. For example, recent data on currents in the Baltic Sea along the coasts of Poland and Kaliningrad seems to show how currents drive the migration of the Killer Shrimp in this region.

Figure 7: Illustration of currents in the Baltic Sea along with the presence of D. villosus (in red)

 

  • Lastly, it is crucial that we continue to document and share the progress made and challenges encountered. In this way, we may be able to identify the lessons learned that are applicable to invasive species in general and those which apply specifically to this case so that methods may be carried over to new problems in ODF Sweden and beyond.

 

References:

[1] https://upload.wikimedia.org/wikipedia/commons/thumb/0/03/Scheme_amphipod_anatomy-en.svg/220px-Scheme_amphipod_anatomy-en.svg.png

[2] https://www.oceanecology.ca/species_model_data.jpg

[3] https://commons.wikimedia.org/wiki/File:Roc-draft-xkcd-style.svg

[4] Lundberg, Scott M. and Lee, Su-In (2017). A Unified Approach to Interpreting Model Predictions. Advances in Neural Information Processing Systems 30. Available at: http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf [Accessed 07 Feb. 2020].

[5] Botella, Christophe, Joly, Alexis, Bonnet, Pierre, Monestiez, Pascal and Munoz, François (2018). A Deep Learning Approach to Species Distribution Modelling. In: Joly, A., Vrochidis, S., Karatzas, K., Karppinen, A. and Bonnet, P. (eds.), Multimedia Tools and Applications for Environmental & Biodiversity Informatics, Springer, pp. 169-199. ISBN 978-3-319-76444-3. DOI: 10.1007/978-3-319-76445-0_10. hal-01834227.

Read more

There are many fields in which Deep Learning (DL) can be applied, such as computer vision, speech recognition, data science and many more. As we love cutting-edge technology and specialize in control systems, we are interested in how DL can act as an alternative to traditional model-based control algorithms, such as LQR, MPC and H∞.

Last spring, a thesis was carried out here at Combine that investigated this question by applying both an LQR controller and a DL controller, based on a state-of-the-art reinforcement learning method, to a practical system. In this post, we will dig a bit deeper into the comparison and the future capabilities of DL as an alternative to traditional control algorithms.

 

To begin with, one can compare which systems the algorithms can be applied to. Traditional control algorithms are often based on a linearized system. This implies that the system on which the controller is applied needs to be linear enough in a neighborhood of the operating point. The controllers can only guarantee stability in the vicinity of that point. DL controllers (from here on called policies), however, are nonlinear controllers, which means that they are in theory not limited to linear-like systems. In practice, on the other hand, things might not look as bright for the DL controller. For instance, some systems may be sensitive to failures, and training on the real physical system may be hard or even practically infeasible. In the literature, there are many examples of policies controlling a simulated system, but it is hard to find a high-level algorithm in an actual real-world application.

To address this problem, our thesis workers decided to construct such a critical system (a unicycle) and benchmarked the DL policy against a traditional LQR controller. They solved the issue of impractical real-world training by training the policy in simulation and then transferring it to the real system, much like the process of designing a traditional controller. This approach implies that one cannot eliminate the need for a mathematical model of the system, which is often a challenge when constructing traditional controllers.

One can then ask: if you cannot eliminate the need for a mathematical model, why would one choose the more complex DL policy over the model-based controller that is optimized for the system? There can be several reasons. First, one needs to keep in mind that the model-based controller is only optimal for the linearized version of the system. If the system is highly nonlinear, one can get better performance using the nonlinear DL approach. Secondly, one can include nonlinear features in the system considerably more easily when designing a DL policy than when designing a traditional controller. Let's dig further into this using the unicycle as an example.

Imagine one were to design the controller to be robust to external disturbances (e.g. a push from the side). For the traditional controllers, one would have to model this push in a linear way and describe it in the frequency domain to then implement an H∞ controller. This is possible, but as the number of features one would like to add increases, the complexity of implementing it increases significantly. This is a problem if one wishes to add another disturbance the unicycle should be robust to, e.g. signal noise. If one were to implement this using DL, the only thing one would need to do is to add a small subroutine to simulate the feature every now and then during the training. As one can model the features in a nonlinear way, this can be very powerful while keeping the implementation simple.
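To make this concrete, here is a hypothetical sketch (not the thesis implementation) of what such a subroutine could look like when training in simulation: an occasional random push plus sensor noise wrapped around an ordinary environment step.

```python
# Hypothetical disturbance injection during policy training in simulation.
# env.apply_external_force is an assumed hook on the simulator, not a real API.
import numpy as np

def disturbed_step(env, action, push_prob=0.02, push_scale=5.0, noise_std=0.01):
    if np.random.rand() < push_prob:
        env.apply_external_force(push_scale * np.random.randn(3))  # random push
    obs, reward, done, info = env.step(action)
    obs = obs + noise_std * np.random.randn(*np.shape(obs))        # signal noise
    return obs, reward, done, info
```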

As promised, let's compare the methods when deployed in a real-world application. In the thesis, both methods stabilized the unicycle in a satisfactory fashion. However, the traditional LQR controller outperformed the DL policy in most respects where the hardware of the unicycle did not impose limitations, both in practice and in simulation. This is most likely due to the policy converging to a local optimum. The impression is that developing a high-performing DL policy requires more time and resources compared to a traditional controller. As thesis projects are limited in time, our thesis workers had to accept the stabilizing local-optimum solution and evaluate that policy. An interesting next step would be to fine-tune the policy on the actual hardware. This would give the DL controller a chance to train on the real system to increase its performance, something that is not possible with a traditional control method.

 

Another interesting aspect is how one can guarantee stability of the system. For traditional controllers, one can often verify stability within a state space using well-known concepts. For DL policies, it is harder to guarantee stability. One way of doing it is to show that the policy has stabilized the system in a large number of simulations. However, if the state space is large, it may be time-consuming to reach and explore all states in the state space, both during training and verification. DL policies may show degraded performance if they reach unexplored states. Signs of this were seen in the thesis, as tests were made in unexplored states and the policy performed worse in these states.

In conclusion, DL policies can act as an alternative to traditional model-based controllers for certain systems. If one faces a system that will be subject to several sets of known disturbances, or a system that will reach nonlinear states, one should at least consider a DL approach. On the other hand, if one faces a system that will act in a region in which it can be described as linear, a traditional model-based linear controller is recommended. This is also the case for systems with a large state space. Finally, DL policies show signs of great potential, such as fine-tuning the controller on a real system to increase practical performance, or training it subject to nonlinear features such as noise. In the future, we see the potential for DL-based controllers to replace traditional controllers on even more systems than is possible today.

If you are interested, you can read the complete thesis work here: https://hdl.handle.net/20.500.12380/300526

Read more

If we instead turn our attention to the future and look at our medium-range goals, they are

  • Growing the business in preparation for our next initiative
  • Consolidating our business in Stockholm and to some extent in Linköping
  • Complementing our services in MBD with more embedded
  • Continuing the development of Sympathy for Data

However, the market might not always be moving in a direction that aligns perfectly with our goals.

For large customer accounts (in our case, this often means time & materials assignments for automotive customers), we should assume that margins might shrink due to customers worrying about their profitability and order books. At the same time, these same customers will probably continue to ask for help with many interesting assignments. They need to maintain momentum during this period of technology and market offer disruption (As a Service, Electrification, Connection, Automation). A look at the statistics confirms this view, as the number of inquiries from these customers at the 19/20 break is about the same as at the 18/19 break, with some exceptions.

For medium-size customers and many other customers without a large R&D department, we think that services in Data Science (perhaps particularly Data Engineering) as well as SW development and cross-functional projects will continue to be in high demand.

Our MBD services are so well established that we run a risk of taking them for granted. An obvious initiative is to complement MBD with more embedded, but we might also consider co-simulation or other future technologies for the development of our MBD offer. There is an ROI limit somewhere that makes it much more likely that larger R&D customers need Modeling and Simulation services than smaller ones. This limit is also influenced by the complexity of the customers’ systems/products.

Our engineers are excellent!

Future Combine engineers need to be recruited in the same spirit in order to reach our goals. Still, we will need to focus more on finding engineers with some industrial or higher academic experience, even more so for our offices in Linköping and Stockholm. This does not mean that we should stop recruiting inexperienced but well-educated engineers, but perhaps limit this to Göteborg and, to some extent Lund, both of which have established customer relations that allow us to give our junior engineers exciting tasks.

Enter the Next Level!

Read more

Our biggest asset is our employees

I could, as usual, highlight our initiatives this year, discuss the market and the long term vision or strategy. However, this time, I would like to focus on what is most important in building a competitive company, and that is the people working there. At Combine, I have been surrounded by talented, inspiring colleagues, everyone eager to learn new stuff, but also to share their thoughts on technical content or other exciting topics.

We believe in empowering individuals and teams, and that being able to affect one's own situation and outcome makes a significant difference. I think that Combine's biggest strength is the ability to see each employee as an asset.

So, since this is my last blog post as the CEO of Combine, I would like to thank all current and previous colleagues. It has been an honor.

Christmas and donation to charity

It is soon Christmas, and during the holiday, we often think of people that don’t have the same privileges as we do.

For the last couple of years, we have donated a significant amount of money to Stadsmissionen as a Christmas gift.

We have chosen to continue this year as well, but this year the donation goes to these four causes instead:

  • Plan International – a brighter future starts with girls' education
  • WWF – preserve biodiversity, reduce pollution and unsustainable consumption, contribute to sustainable utilization of renewable natural resources
  • ECPAT – contribution to the removal of more child sexual abuse images online
  • Swedish Childhood Cancer Fund – support to continue the long-term research, support affected children and their families, and come closer to our vision of eradicating childhood cancer

Rather than buying candy or a Christmas gift such as a towel, etc., we hope that you share our belief that this is a better alternative.

We also hope that by doing this, we inspire other companies to do the same!

Make sure you take care of your family and yourself this Christmas.

Or as Ida Sand sings, in my favorite Christmas Carol at the time being: ”Now the time is here, make sure that you are near, everyone that you hold dear.”

Merry Christmas and a Happy New Year.

Read more

A process using simulation and a neural network has been developed to investigate the possible benefits of using a completely digital approach to the friction estimation problem. Using simulation of vehicle dynamics, the process has removed errors that are very difficult to eliminate in the real world, but has also created problems that didn’t exist before, for example a mismatch between the real car and the simulated car.

So why do we need to improve friction estimation? Several problems exist with the development and testing process; read more about them in the earlier post!

In order to solve – or at least circumvent – those problems, a digital vehicle model was created in a simulation environment built using Unity 3D. The digital vehicle is based on vehicle data from NEVS, and can be driven around and tested for any friction coefficient in any situation, from ordinary driving to extreme manoeuvres, and the friction is always known.

When it comes to making an estimate, the friction estimation methods that are popular today are extremely complex and are based on complicated modelling of tyres with an immense number of dynamic variables.

The purpose of these methods is to find a connection between the input variables and the friction coefficient using a complex, physically derived relationship. To circumvent the complexity, the universal approximation theorem was instead relied upon to find a purely mathematical connection between the inputs and the friction coefficient, by asking a different question.

A multilayer perceptron type of neural network with two hidden layers was used, as the theorem proves that it can approximate functions, and it can relatively quickly give an insight into the performance of the process for evaluation. The neural network model was used to classify the sensor readings into 1 out of 4 categories of friction, from ice to dry asphalt.
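As an illustration of what such a network could look like, here is a minimal Keras sketch with two hidden layers and a four-class softmax output; the layer sizes and the number of input signals are assumptions, not the values used in the thesis.

```python
# Minimal two-hidden-layer MLP classifying sensor readings into four
# friction classes (ice ... dry asphalt). Sizes are illustrative only.
import tensorflow as tf

n_signals = 12  # placeholder for the number of sensor inputs used

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_signals,)),
    tf.keras.layers.Dense(64, activation="relu"),    # hidden layer 1
    tf.keras.layers.Dense(64, activation="relu"),    # hidden layer 2
    tf.keras.layers.Dense(4, activation="softmax"),  # four friction categories
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=50)
```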

In an optimal scenario, where the neural network has been trained on driving scenarios that are similar to the scenarios it is evaluated on and there is little model mismatch, the network reached an estimation accuracy of 94.3%. However, when the network was evaluated on driving scenarios that were dissimilar to the training scenarios, and with a large model mismatch, it estimated only 37.4% correctly.

The conclusion from the results is that the digitalisation process can be highly beneficial if used in a way where the difference between the physical and the digital vehicle is minimised and the model is trained on scenarios it will be exposed to. If the process is used in an adverse way, only a set of additional errors has been introduced.

This blog post is based on a thesis that Jonas Karlsson has done at Combine in collaboration with NEVS in the fall of 2019. The full report will be published shortly and will be publicly available to interested parties.

Read more

Who are you?

My name is Amr Salem, and I’m 26 years old. I was born and raised in Cairo, Egypt. I have for as long as I can remember been an adventurous person. I dream of piloting an aircraft, which I think is one of the reasons I became an engineer: to explore new technical areas, push the limits, create something new, and bring innovations to life that will help people.

I also love to explore and connect with other countries and cultures, so travel is a big part of my life. When I do travel, I try to do it like one of the locals, as the cultural experience is so much bigger that way, and I get to meet new exciting people. In my spare time I like to learn new stuff, play soccer, or hang out with my friends. Another interest of mine that I have been able to exercise here in Gothenburg is horseback riding. I’m blessed with a landlady who also owns a stable, in which I’m lucky to be able to help out sometimes!

How did you end up in Gothenburg?

I enrolled in an Italian school in Cairo, as that school was one of the most prestigious engineering schools in Egypt. I did it for various reasons; one was, of course, the engineering part, another was the curiosity and longing to learn a new language. The education included both hands-on work with CNC machines and other types of machinery, and preparation for continued studies at a technical university.

When it was time for me to choose a university, I wanted to explore the world and experience adventure outside Egypt. Since I understood Italian from my high school, Italy was my first country of choice. I had a hard time deciding between a university in the beautiful city of Milan or the well-renowned University of Trento. In the end my better side won that battle, and I enrolled at the University of Trento.

I was excited when I traveled to Trento before my first day, since it was only my second time outside Egypt. The first time was only a short two-week language course in Italy a couple of years before, and it was my first time traveling alone! In the end, I lived in Italy for a total of five years, and I believe I might end up in Italy again in the future!

Studying in Italy awakened a desire for more semesters abroad, and I decided I wanted to study abroad during my master's as well. While researching possibilities to study abroad, a university in Canada caught my eye: McMaster University in Hamilton. They had a thorough application process which involved tests, interviews, and a bit of luck. The competition for the two positions was fierce, so I applied to the Erasmus programme as well. But when I read the reply, I couldn't believe my eyes at first. I was one of the two selected candidates!

An interesting thesis work was also vital to me, so I started to contact professors all around the globe.

My time in Canada was fantastic; the country is just so amazing, and the people there were so kind. I submerged myself in the books, and the semesters passed by at a fast pace. I studied so hard that my friend caught me by surprise when he called in the middle of the night, asking if I still wanted an exchange year within the Erasmus programme. So, after I had completed my exchange year in Canada, I went on to Sweden and Chalmers for a second exchange year. Here I stumbled upon one of the professors I had interviewed with the year before about a master's thesis. As I approached him, I found to my surprise that he also remembered me, and my master's thesis was settled.

The thesis work was performed at Volvo Group through Chalmers. The scope, limitations, and conclusions of the thesis work became a battle between the three stakeholders, who did not always agree. It all worked out in the end, even if it demanded several flights back to Trento, and now my solution is patent pending.

Currently

During my short professional career, I have worked at two big companies, and now I work at a smaller one (65 employees). The difference is striking. Combine is by far the best employer I have had. The environment, the assignments, the people, and the flexibility are on an entirely different level. Here I feel seen, and I'm entrusted to work in a small team on a high-paced project with hard deliveries, and they trust me to deliver according to my level, which I'm raising daily.

The project I'm working on right now is to deliver a measurement system for the train industry. It was scary in the beginning, as I was moving away from my domain, mechatronics, and entering a field where I combine my skills in embedded systems with software engineering. But we have a technical lead in the project who helped set up the software architecture and answered all questions with ease, so the project is moving at full speed right now. I have never learned as much as I currently do. In everything I do, I can see how it is used in the final product, and that gives me a purpose to work even harder. It feels like I have embraced the challenge and adapted myself to it.

Next step

When the project has made a quality-assured delivery to its customer, I would like to take on a new assignment. The ideal project would relate to control theory for aircraft or autonomous driving. Path planning and trajectory control are topics which I find both challenging and exciting. It is a rather new subject, and the fact that there is no clear path forward or established industry standard makes the field even more appealing. I have tried to plan my future meticulously, but I have realized that it is the uncertainties and unexpected shifts in my plan that I have enjoyed the most, and thus, nowadays I try to include some margin in my plans for this sort of event.

Read more

In this latest release of Sympathy for Data, we packed a large number of bug fixes and improvements to existing nodes, as well as several new features. Here we want to present some of them briefly.

To improve the development experience and user feedback possibilities, we spent some time implementing improved message handling and a new issue-reporting system. Sympathy's worker processes now send their messages, e.g., exceptions and warnings, instantly to the platform. You will no longer have to wait until you close your configuration windows or for an execution to finish before you see messages, warnings, and errors.

Furthermore, sending issue reports to the developers of Sympathy for Data has become easier. This release adds a feature to send issue and bug reports to us directly from within Sympathy. One can either go to the help menu or right-click on a message in the "Messages" window and select "Report issue". This feature will enable us to improve the stability and behavior of Sympathy even faster by collecting the relevant information directly. More details can be found in the documentation. Alongside the issue-reporting system, we added the possibility to send anonymous user statistics, with your explicit agreement. The data we collect includes only statistics such as which nodes are used, how often, and their execution time. Sympathy for Data does not share any personal information or data on the analysis being performed. With the help of this data, we are planning to improve Sympathy for Data further and add functionality such as enhanced search and node recommendation systems in upcoming releases.

If you use the Figure Node but have always found the interface a bit too complex, we have added a wizard functionality to the configuration panel. It allows you to select a plot type and configure its data inputs quickly. You will find it by clicking on the wizard's hat at the top right (see image below).

To help users working with the text and JSON data formats within Sympathy, we added, besides some new nodes for data format conversion, a search function to our text editor/viewer, accessible as usual via Ctrl-F (Cmd-F on macOS).

For those users who use our Windows installer, we are happy to tell you that the documentation for the platform and standard library now comes pre-built with the installer. Furthermore, we made several other changes to how the documentation is built and what kind of information is added automatically, for example, deprecation warnings. For a list of all changes, see the news.

Last but not least, we have updated several of the underlying Python packages to recent versions. As usual, this is only a short selection of new features and fixes. Navigate to the news section of the documentation to learn more about all changes in Sympathy for Data 1.6.2. If you want to hear more about Sympathy for Data and how it can help your organization handle, manage, analyze and visualize data, and create reports, please do not hesitate to contact us at sympathy@combine.se.

Read more