
As I mentioned in previous blog posts, Combine is expanding to Stockholm, and we have now started the initiative to open an office.

Possibilities
Last week my colleague Peter and I visited Stockholm for interviews and customer meetings. We had some very interesting meetings regarding circular economy, autonomous driving and AI for Cleantech that we hope will lead to projects or prototype platforms. Stay tuned for more information in upcoming posts.

Job posts
We are still going through applications for the positions of Head of Stockholm, Data Scientist and Control Systems Engineer, so visit our homepage and apply!

We will start by recruiting suitable engineers and then the manager, so the engineers may get the opportunity to take part in hiring their own manager.

Office
Regarding office space, we aim to find an office near the central station. The main reason is that we want to reduce the need for car transportation between offices.

Competence needed
There seems to be a strong demand for experience with GPUs, Machine Learning, Deep Learning and AI. We also see the possibility of packaging solutions that we can deliver as projects from our own office instead of placing engineers on site. We prefer building our business on both on-site assignments and packaged solutions.

Being honest
Finally, I would like to highlight an issue that is not linked to Stockholm but is something I feel is important for our profession.

When we presented Combine and our services, we were well received. Our focus on technology and on how we can help our customers differs quite a lot from suppliers that only consider business opportunities, without taking good partnership or customer success into account. Some of the customers were surprised that we were more interested in getting things right than in finding an assignment here and now.

We prefer being honest, doing the right thing, delivering quality over time and focusing on people; and we believe that this way of working will lead to success. It is also good for the soul 😊

So, I’d like to end this blog post with Combine’s motto:

ENTER THE NEXT LEVEL
“Our vision is to enhance engineering organizations in the world. Enter the next level is our way of expressing this, by helping our clients reach a higher level in their business. Our success comes with the success of our clients.”

Finally,
Thank you for reading.
Erik Silfverberg

CEO, Combine Control Systems AB

Read more

Modelica is a non-proprietary language for object-oriented, equation-based modeling, maintained by the Modelica Association. Using Modelica, complex models can be built by combining components from different domains such as mechanical, electrical, thermal and hydraulic. There are many libraries, both public and commercial, for modeling various types of systems. Modelica models can be built and simulated using a wide range of tools, both commercial and free of charge.

Here, a model of a residential house will be built using the public Modelica Buildings Library and the open-source modeling and simulation environment OpenModelica.

The house that we are modeling is a one-story gable-roof house with a solid ground floor. The model of the house will contain:

  • the envelope of the house
  • two air volumes, the residential area and the attic, separated by the internal ceiling
  • the interior walls of the house lumped into one wall
  • a solid ground floor with underfloor heating
  • a ventilation system with heat recovery
  • a fan coil unit

The heat transfer between the house and the environment is modeled using heat conduction and heat convection. The environment is described by the air temperature, wind speed and wind direction. Since we include the wind direction in the model, we need to take the orientation of the outside walls into consideration and cannot lump all walls into one. So first a model of an exterior wall is created, consisting of three models from the Buildings Library:

  • HeatTransfer.Convection.Exterior extConv, a model of exterior convection that takes wind speed and direction into account
  • HeatTransfer.Conduction.MultiLayer cond, a model of conduction through a multi-layer construction
  • HeatTransfer.Convection.Interior intConv, a model of interior convection

The input to the model is the outdoor conditions, and the interaction with the indoor air is through the heat port, port. The parameters of the model are the area, the azimuth and the construction of the wall. The construction is specified as an instance of Buildings.HeatTransfer.Data.OpaqueConstructions.Generic with the materials of each layer. Each material specifies the layer thickness and material properties such as density, thermal conductivity and specific heat capacity; the number of states in the spatial discretization of each layer can also be specified. Similar models are created for the roof and the interior ceiling.
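As a sketch, the composition described above could look as follows in Modelica. The parameter and connector names follow the Buildings Library documentation but should be checked against the library version used; this is not the exact code from the post:

model ExteriorWall "Sketch of an exterior wall model"
  parameter Modelica.SIunits.Area A "Wall area";
  parameter Modelica.SIunits.Angle azi "Azimuth of the wall";
  parameter Buildings.HeatTransfer.Data.OpaqueConstructions.Generic layers "Construction of the wall";
  // Outdoor conditions as inputs
  Modelica.Blocks.Interfaces.RealInput TOut "Outdoor air temperature";
  Modelica.Blocks.Interfaces.RealInput v "Wind speed";
  Modelica.Blocks.Interfaces.RealInput dir "Wind direction";
  // The three sub-models from the Buildings Library
  Buildings.HeatTransfer.Convection.Exterior extConv(A = A, azi = azi, til = Buildings.Types.Tilt.Wall) "Wind-dependent exterior convection";
  Buildings.HeatTransfer.Conduction.MultiLayer cond(A = A, layers = layers) "Conduction through the layered construction";
  Buildings.HeatTransfer.Convection.Interior intConv(A = A, til = Buildings.Types.Tilt.Wall) "Interior convection";
  Modelica.Thermal.HeatTransfer.Sources.PrescribedTemperature TAir "Outdoor temperature as a heat port";
  Modelica.Thermal.HeatTransfer.Interfaces.HeatPort_a port "Heat port to the indoor air";
equation
  connect(TOut, TAir.T);      // drive the outdoor-side boundary with the temperature input
  connect(v, extConv.v);      // wind speed to the exterior convection model
  connect(dir, extConv.dir);  // wind direction to the exterior convection model
  connect(TAir.port, extConv.fluid);
  connect(extConv.solid, cond.port_a);  // outdoor surface of the construction
  connect(cond.port_b, intConv.solid);  // indoor surface of the construction
  connect(intConv.fluid, port);         // to the indoor air volume
end ExteriorWall;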

Now a model of the house can be put together using the created models. First the materials and constructions need to be specified for the different constructions, below is an excerpt of the Modelica code that shows the definition of the exterior wall construction:

// Material layers, with thickness x given in metres
constant Buildings.HeatTransfer.Data.Solids.Brick brickWall(x = 0.12);
constant Buildings.HeatTransfer.Data.Solids.InsulationBoard insulationWall(x = 0.10);
constant Buildings.HeatTransfer.Data.Solids.GypsumBoard gypsum(x = 0.013);

// The exterior wall construction assembled from the three layers
constant Buildings.HeatTransfer.Data.OpaqueConstructions.Generic wallLayers(nLay = 3, material = {brickWall, insulationWall, gypsum});

The air in the residential area and the attic is modeled using Buildings.Fluid.MixingVolumes.MixingVolume, which has a heat port and a variable number of fluid ports.
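A sketch of such an instantiation, where the numeric values are placeholders rather than the ones used in the post:

Buildings.Fluid.MixingVolumes.MixingVolume airResidential(
  redeclare package Medium = Buildings.Media.Air,
  V = 300,                // air volume in m3 (placeholder)
  m_flow_nominal = 0.05,  // nominal mass flow rate in kg/s (placeholder)
  nPorts = 4)             // fluid ports, e.g. for the ventilation and fan coil circuits
  "Indoor air of the residential area";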

Now the various sub-models can be connected for the envelope and the interior air volumes. The heat ports of the wall, roof and ceiling segments are connected to the air volume that they are facing, and the outdoor conditions are connected to an external input to the house model.

Next, the floor with underfloor heating and an input for internal heat load disturbances are added. The underfloor heating is modeled by inserting a prescribed heat flow between two layers of the floor, and the internal heat load is modeled by connecting a prescribed heat flow to the indoor air. The floor is connected to the ground, which is set to a prescribed temperature of 10 °C.

 

The ventilation system that provides the house with fresh air is modeled using an exhaust fan, a heat exchanger and a fluid boundary with a variable temperature connected to the outdoor temperature. The exhaust fan is modeled using a Buildings.Fluid.Movers.FlowControlled_m_flow with a constant mass flow rate determined by the specified air replacement time, typically 2 hours. To recover heat from the exhaust air, a heat exchanger is used, modeled by Buildings.Fluid.HeatExchangers.ConstantEffectiveness with an effectiveness of 80%. The ventilation system is connected to two fluid ports of the indoor air volume.
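The constant mass flow rate follows from replacing the whole air volume once per replacement time. A sketch with placeholder values:

parameter Modelica.SIunits.Volume V_air = 300 "Ventilated air volume (placeholder)";
parameter Modelica.SIunits.Time t_rep = 2*3600 "Air replacement time of 2 h";
parameter Modelica.SIunits.Density rho_air = 1.2 "Approximate density of air";
parameter Modelica.SIunits.MassFlowRate m_flow_vent = rho_air*V_air/t_rep "Constant ventilation flow, here 0.05 kg/s";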

In a similar way, the fan coil unit is modeled using a flow-controlled fluid mover, but instead of a heat exchanger a Buildings.Fluid.HeatExchangers.HeaterCooler_u is used, with a specified maximum power of 4 kW. The mass flow rate of the fan is set as a function of the requested power, going from ¼ of the maximum flow at zero requested power to the maximum flow at maximum requested power, as sketched below.
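In other words, the flow setpoint is an affine function of the power request. As a Modelica-style sketch, with placeholder variable names:

// u in [0, 1] is the requested power as a fraction of the 4 kW maximum
m_flow_set = m_flow_max*(0.25 + 0.75*u);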

Then temperature sensors, an energy-usage calculation and the corresponding outputs are added, and the model of the house is complete.

Now we can use the house model for simulation. First, we build a model for simulating the open-loop responses to different inputs.

To get some understanding of how the house responds to the outdoor conditions and the different heating systems, step responses are performed at an operating point where the outdoor temperature is 10 °C, the wind speed is 0 m/s, the wind direction is north, and the indoor temperature is 22 °C. Four step responses are simulated:

  • The outdoor temperature is raised to 11 °C
  • The wind speed is increased to 10 m/s
  • The fan coil power is increased by 200 W
  • The floor heating power is increased by 200 W

The figure shows that all the step responses settle in about the same time and reach steady state after about 1000 h, roughly 42 days. However, the initial transients are quite different, and it can also be seen that the fan coil unit raises the indoor temperature slightly more than the floor heating does at 200 W. For further comparison, the normalized step responses are plotted on two different time scales.

The plot of the normalized step responses on the 1000 h time scale confirms that the time to reach steady state is about the same. The plot showing the first 24 h shows that a change in the outdoor temperature or the fan coil power initially changes the indoor temperature very quickly. This is because both are directly connected to the indoor air volume: the outdoor temperature through the ventilation system, and the fan coil power through the heater that heats the air blown through it.

Studying the step responses, the following conclusions can be drawn.

  • Heating the house with a fan coil unit is more energy efficient if only the indoor temperature is considered; with floor heating, more energy is lost to the ground, but a warm floor may give the occupants a higher perceived comfort.
  • If it is desired to keep the indoor temperature close to a specified setpoint at all times, this can only be achieved using a fan coil unit.

This house model does not capture all aspects of a real building; for instance, there are no windows or doors in the building envelope, and radiation heat transfer between the building and the environment is not modeled. This means that absolute energy performance calculations using this model may be inaccurate. However, the model can be used to evaluate different control strategies with respect to control and energy performance.

 

Read more

What’s your background story?

I moved to Linköping when I started studying Engineering Physics and Electrical Engineering which was some years ago. I found an interest in control systems, so much in fact that I spent three years after my initial studies to earn myself a licentiate degree in that field. I was eager to put my acquired knowledge to practical use and started working for a consulting company in the region. At that time, Combine didn’t have an office in Linköping.

How did you come to work at Combine?

Well, the first time I noticed Combine was through a job advertisement in a newspaper. I think it was in Ny Teknik, but this was a long time ago. Although it sounded great, they only had offices in Lund and Gothenburg at the time, and moving was out of the question. But when I saw that they were opening a new office in Linköping I contacted them, and here we are now.

What was it that sounded so great about Combine?

Combine is very focused on the fields in which it works, that is, control systems and data science. The thing that caught my immediate interest in the advertisement was that they really pinpointed the field of control systems, which I haven’t seen any other company do in the same way. When I looked at the qualifications, I felt that everything matched me perfectly. Now that I work for Combine I can only agree with my initial feeling: instead of being the biggest, Combine focuses on being the best.

You have worked as a consultant for a long time, and in different consulting companies. You are also highly appreciated by our clients. What would you say is your success formula?

I don’t have a formula, or a good answer for that matter; maybe I’m just suited to being a consultant. I like to learn new things and to face new challenges. I also feel a need to quickly get an overview of the things I work with, so that I can contribute as fast as possible. I guess the social side also contributes to being a good consultant.

At the moment you work part-time while also being on parental leave. How do you handle that?

Yes, my wife and I are blessed with two fantastic children, and until our youngest starts preschool I only work half of the week. It hasn’t been a big issue, since the client I am working for is very understanding. I try to repay the favor by being as productive as possible when I work.

Without mentioning the client, we can state that you work in the automotive business. Do you have a favorite car?

Not really. Thanks to my kids I would say the dumper Chuck in the animated series The Adventures of Chuck and Friends.

Read more

How did you learn about Combine?

It was just after I finished my PhD in physics at the University of Geneva, Switzerland. During that time I was looking for a job in Sweden, planning to move here. It was actually at Charm, the job fair at Chalmers University, that I more or less stumbled into Combine’s booth. My interest in Combine was immediately awakened when talking to my future colleagues about the technical problems they needed to solve.

Which of your skills acquired during your PhD are you using in your daily work life?

Since my PhD was about the very fundamental physical properties of novel materials, those parts are not at all important for my daily work. It is rather the broad mathematics and physics knowledge, as well as the secondary skills you acquire during a PhD, that I use in my work day, e.g. programming experience and data analysis skills. During my academic career I got very interested in developing my own data analysis tools and in optimizing our algorithms. When I came to the end of my PhD I was sure I wanted to continue in this direction, but working in industry.

Tell us about the different projects you have worked on at Combine.

I started helping out with the development of Combine’s own data analysis tool “Sympathy for Data”, which I wish I had known about during my PhD. I believe it would have saved me many hours of developing my own scripts and tools over and over again. I also like the visual representation, which makes it quick to grasp and structure a workflow. Furthermore, it appealed to me to contribute to open-source software. (Editor’s note: you can read more about Sympathy for Data here.)

Next came a smaller project implementing a server application, before I started a two-year on-site project at one of our customers, designing and helping to implement a framework for automated end-to-end verification. That last project was very challenging on many levels: learning the customer’s needs, designing the system from the ground up, and fighting for the right resources. But I am a person who likes a good challenge and uses it to grow. I believe I succeeded and left the group in a good place before starting my new role as group manager for Data Science Solutions in our Gothenburg office.

How do you see the future of your new group?

There are two things that are very important to me, and to Combine in general. Firstly, I want to provide our customers with the right solution, meaning quality and usefulness. Secondly, I want to provide a great working environment for our consultants, where they have the possibility to grow professionally and personally. I strongly believe that sharing knowledge between our on-site and in-house consultants will boost our capability to provide our customers with the right and complete solution.

Read more

Combine has set out on a journey of adventures, exploring different industries with the following question in mind: how can we utilize the competence at Combine to help our customers “Enter the next level”?

In this blog post, we are exploring the retail business together with NIMPOS.

NIMPOS is a Swedish company that offers a revolutionary, simple and safe point-of-sale system, suitable for both large and small companies thanks to its scalability. A full description of NIMPOS and their products can be found here. With access to more and more transaction data, NIMPOS has asked Combine for guidance on how to utilize the stored transaction data to help their customers enhance their business.

Combine develops and maintains a free and open-source data analysis tool called Sympathy for Data. Sympathy is a software platform for data analysis built on Python. It hides the complexity of handling large amounts of data in the data analysis process, enabling the user to focus on what is really important.

The first step in any data analysis task is getting access to the data. After setting up a VPN connection, the data from the NIMPOS database is easily imported into Sympathy using its powerful import routines.

Some of the data we got access to:

  • Reference ID (one per transaction)
  • Article ID
  • Article Name
  • Transaction Date
  • Quantity (number of sold articles)
  • Article Price

Now, with the data imported, the powerful data processing capabilities of Sympathy are at our hands. The data is first preprocessed to filter out missing and unreasonable records, after which the analysis can start.

A few analyses have been implemented:

  1. Predicting the increase in the number of customers.
  2. Expected number of sold articles together with confidence bounds.
  3. Customer intensity variation
    • For each day of the week
    • Hour-by-hour for each weekday

An overview of the flow is presented in the figure.

Sadly, we do not have any information connecting individual transactions to unique customers, and no other customer features are available (e.g. age or sex), which narrows down the possible analyses.

This post is the first in a series, where we have laid the ground for upcoming posts. We introduced the reader to the problem, some of the data, the tools, and a few analyses implemented.

In one of the upcoming posts, we will showcase the possibilities of connecting the strengths of Sympathy for Data, for processing and analyzing data, together with the interactive reporting made possible by Sympathy web services.

Stay tuned and don’t miss out on future posts. In the meantime, I suggest you read earlier posts or download Sympathy for Data and start playing around with some example flows. You won’t regret it!

Read more

Some thoughts after PyCon 2018

Python has become, if not the de facto standard for data science, then at least one of the biggest contenders. As we wrote in a previous entry, we sent a group of our top data engineers and developers to learn about the latest news in data science, and in Python development in general. Below we share some notes and impressions from this year’s PyCon conference for those of you who didn’t have the chance, or time, to attend.

Ethics in data science

One very interesting and thought-provoking keynote, held by Lorena Mesa from GitHub, was about the ethics of data science. She is a former member of the Obama campaign, as well as a member of the Python Software Foundation board. In the talk, she presented experiences from the 2008 US presidential campaign and the role of data science in the rise of social media as a political platform. She also discussed the dangers we have seen emerge from that in the years that followed. Data science has emerged as a powerful tool for spreading well-intended information or not-so-well-intended (dis)information, for monitoring people for their political views, and even for attempts at preemptive policing.

One of the scariest examples was a decidedly Minority Report-style scenario in which police used an automated, opaque system that assigned individuals scores from 0 to 500 estimating how likely they were to commit crime, and used this information to guide policing actions (this was done in Chicago, and there has been a strong backlash in the media). An extra worrisome part of this is the black-box approach, in which we cannot quite know what factors the system takes into consideration, or the biases that are inherent in the data with which it was built. Another example on this note was an investigation by the American Civil Liberties Union (ACLU), in which they took a facial recognition tool (with its recommended settings) that had been sold to police and used it to match members of the U.S. Congress against a database of 25,000 mugshots. The system falsely matched 28 members of Congress to mugshots, with a disproportionate number of the false matches being Congress members of colour. This is a tricky problem, where the socioeconomic issues behind the source material (the mugshots) are carried through to the predictions made by the system in non-obvious ways; something that surely needs to be addressed and taken into consideration before we can allow ourselves to trust the results of such a system.

Finally, perhaps it is time for us data engineers to consider, and at least start the discussion about, the larger ramifications of the data we collect, the algorithms we train, and how our results affect society. Perhaps it is time for a Hippocratic oath for data scientists?

Quantum Computing with Python

Over the last decade, quantum computing has advanced from the realm of science fiction to actual machines in research labs, and it is now even available as a cloud computing resource. IBM Q is one of the frontier companies in quantum computing research, and they provide the open-source library qiskit, which allows anyone to experiment with quantum computing algorithms. You can use qiskit either to run a simulator for your quantum programs, or to connect over the cloud to an actual quantum computer housed at IBM’s facilities and test run your algorithms.

The size of the machines, counted as the number of quantum bits (qubits), has been quite limited for a long time, but it is now fast approaching sizes that cannot conveniently be simulated on classical machines.

Contrary to popular belief, a quantum computer is not expected to solve NP-hard problems in polynomial time. The class of problems that can be solved efficiently by a quantum computer is called BQP; it is believed to extend beyond P and is known to contain some problems in NP (such as factoring), but is believed not to contain the NP-hard problems. We also know that BQP is a subset of PSPACE.
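In symbols, the known inclusions and the open questions can be summarized as:

\[
P \subseteq BQP \subseteq PSPACE, \qquad \text{factoring} \in BQP, \qquad NP \subseteq BQP \;\text{(open)}, \qquad P = BQP \;\text{(open)}
\]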

The consequence is that a sufficiently large quantum computer could quickly solve important cryptographic problems such as prime factorization, but not necessarily NP-complete problems (such as 3-SAT), planning, or many of the other problems important for artificial intelligence. Nonetheless, the future of quantum computing is indeed exciting, and it will not only change encryption completely but also touch almost all other parts of computer science. An exciting future made more accessible through the Python library qiskit.

A developer amongst (data) journalists

Eléonore Mayola shared her insights from her involvement in an organization called J++, short for Journalism++, where she, as a software developer, helps journalists sift through vast troves of data to uncover newsworthy facts, and also teaches them basic programming skills. She showcased a number of data-driven journalistic projects, ranging from interactive maps of Sweden displaying statistics on moose hunts or insurance prices, through the Panama Papers revelations, to The Migrants’ Files, a project tallying up the cost of the migrant crisis in terms of money and lost human lives.

When it comes to her experience teaching journalists to code, one of the main takeaways was that even the most basic concepts, which many professional software developers would find trivial, can already have a big impact in this environment. Another point was that it is important to keep a reasonable pace and avoid overwhelming students with too much information at once. Last, but not least, the skills of software developers are sorely needed even in fields that many of us probably wouldn’t consider working in.

Read more

The first model was a simple Equivalent Circuit Model (ECM), whose parameters were first identified to fit the model used for evaluation, after which the ECM was used to perform the optimization. The circuit can be seen in figure 1. The model used for evaluation was an advanced Electrochemical Model (EM) implemented in a framework called LIONSIMBA, which models the chemical reactions inside the battery with partial differential equations and is therefore not suitable for optimal control. The method used to fit the ECM to the EM could also be applied to fit the ECM to a physical battery, making the approach useful in real-world applications as well.

Figure 1: ECM of a lithium-ion battery cell

The system of equations in equation 1 shows the dynamics of the ECM, as well as the models used for temperature and State of Charge (SoC) estimation.
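Equation 1 appears as an image in the original post. For reference, a two-RC-pair ECM with SoC and a lumped thermal state is typically written as follows; this is a sketch of the general form, not necessarily the exact equations used:

\[
\begin{aligned}
\dot v_k &= -\frac{v_k}{R_k C_k} + \frac{i}{C_k}, \quad k = 1, 2\\
v_{cell} &= v_{oc}(SoC) + R_0\,i + v_1 + v_2\\
\dot{SoC} &= \frac{i}{3600\,Q}\\
\dot T &= \frac{1}{m c_p}\left(R_0\,i^2 + \frac{v_1^2}{R_1} + \frac{v_2^2}{R_2} - hA\,(T - T_{amb})\right)
\end{aligned}
\]

Here i > 0 is the charging current, Q is the cell capacity in Ah, and v_s = v_1 + v_2 is the voltage over the RC pairs that is constrained further down.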

Since the goal was to charge as fast as possible, we wanted to minimize the charging time, which was done through minimum-time optimization. One way of solving minimum-time optimization problems, and the one we used, can be seen in equation 2.
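Equation 2 is likewise an image in the original post. A common way of posing the minimum-time problem, sketched here without claiming it is the exact formulation used, is to normalize time by the free final time t_f:

\[
\min_{t_f,\; i(\cdot)} \; t_f
\quad \text{subject to} \quad
\frac{dx}{d\tau} = t_f\, f\big(x(\tau), i(\tau)\big), \quad
x(0) = x_0, \quad
SoC(1) = SoC_{target}, \quad
g\big(x(\tau), i(\tau)\big) \le 0,
\]

where \tau = t/t_f \in [0, 1] and g collects the temperature and v_s constraints. The problem then becomes a fixed-horizon optimal control problem with t_f as an extra decision variable.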

As there are a number of harmful phenomena that can occur in a battery, additional constraints were needed as well. Two of the most important effects are lithium plating and overcharging, both of which we take into consideration. Both lead to decreased capacity, increased internal resistance and a higher rate of heat generation. It is known that there is some kind of connection, although not a linear one, between these effects and the voltage over the RC pairs, vs. This is why we applied a constraint to this voltage: without it, the solver would only take the temperature constraint into consideration, which would lead to damaging the battery.

The EM allows us to see what happens inside the battery with regard to the harmful effects when we apply the current obtained by solving the optimization problem. One of the evaluated cases can be seen in figure 2, where the results from both the ECM and the EM are included. This case is for charging from 20-80% at an initial temperature of 15 °C.

 

Figure 2: Results and model comparison for the EM and ECM.

The top left plot in the figure above shows the lithium plating voltage, which has to be kept above 0 and is controlled by the linear constraint put on vs, which is also shown. The top right plot shows whether the battery is being overcharged, which is also controlled by the constraint on vs. The bottom left plot shows the temperature, and the bottom right one shows the current resulting from solving the optimization problem.

The next thing we did was to compare our fast charging to a conventional charging method, namely constant current-constant voltage (CC-CV) charging. The constant-current part was maximized for all cases to reach the same maximum values, to make the comparison fair. The following plots are the same as above but compare our fast charging with CC-CV charging, showing that the fast charging is 22% faster and does not come as close to zero in terms of lithium plating voltage as the CC-CV method, although it has a higher average temperature due to the higher average input current.
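For reference, the CC-CV strategy we compare against can be summarized as (general form):

\[
i(t) =
\begin{cases}
I_{cc}, & \text{while } v_{cell}(t) < v_{max} \quad \text{(constant current)}\\
\text{such that } v_{cell}(t) = v_{max}, & \text{afterwards (constant voltage, decaying current)}
\end{cases}
\]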

 

Figure 3: Comparison between the optimized fast charging and CC-CV charging.

A summary of the charging times and the improvement over CC-CV can be seen in tables 1 and 2, for charging from 20-80% and from 10-95% respectively, at different temperatures.

Conclusion
By performing optimization on an equivalent circuit model of a lithium-ion cell, evaluated in LIONSIMBA, it was possible to achieve charging times that in some cases were up to 40% faster than with traditional CC-CV charging, while still keeping the battery within the same constraints. To control the charging and avoid both lithium plating and overcharging, a linear constraint was applied to the voltage over the two RC pairs in the equivalent circuit model. The results clearly show that the method has potential and that it should be possible to apply it to a physical battery, even though it will be more difficult to choose constraints for the optimization.

Read more

Agile budgeting
We handle most of our internal processes with an agile approach, meaning that we believe in high-performance teams that are autonomous, with decentralized decision making. Personally, I’m an agilist who strongly believes in empowering individuals and teams, and that results and dedication improve when people can affect their own situation and outcome.

So, starting this year, we will also implement an agile way of budgeting.

The reason is that the current way of steering the business and the company is not effective when the market changes and we know less about the future. I must admit that I’ve never been a fan of the standard way of budgeting, since we always see a divergence just a few months into each new year.

Instead of setting a budget for the whole year, we will work with a rolling two-month prognosis and adjust the budget/prognosis as we learn more about the actual outcome. I’m optimistic that this way of working with budgets will be the “future way” for many companies.

Since we are SW and Data Science nerds, this will mostly be automated using different available systems. If you have questions about the model, please contact me for more information.

Control Systems
Last year we strengthened our control groups at all sites, taking on more advanced projects both in-house and at customer sites. We see a continued strong market for our services in controls and embedded solutions this year as well.

Data Science (AI, Machine Learning, Deep Learning etc)
We have invested heavily in our ability to deliver even more advanced projects in Data Science. We have upgraded our computational hardware and hired several new data scientists and computer engineers. We now provide complete data science solutions to our customers, ranging from analysis of big data to developing, training and deploying machine learning models. In addition, we will roll out our new tool “Sympathy Cloud Services” as an add-on to the already established tool “Sympathy for Data”, in combination with different toolkits for Machine Learning and more.

Stockholm
According to our business plan, this is the year we start the expansion to Stockholm. And why have a business plan without following it?!

The Stockholm region is very interesting from a technical perspective, with cool high-tech companies in CleanTech, Energy, Telecom, Automotive and more. We will start looking for a manager in the region and, in parallel, start visiting customers and hiring engineers.

Clean Tech
What could be more important than our planet and the legacy we leave to our children?

Therefore, we will put more energy into being part of the change required to overcome the climate crisis our generation will have to endure. We already have customers in this segment, but we will level up our focus on Clean Tech from now on.

“We are going to exit the fossil fuel era. It is inevitable.” – Elon Musk

Finally,
Thank you for reading.

Read more

Strategic initiatives
Worth mentioning for the year are our strategic efforts in the field of Data Science and the development of the tool Sympathy for Data. At the beginning of the year we began developing toolkits for Machine Learning, Deep Learning, Image Processing, etc., as well as a cloud service associated with Sympathy. As the demand for computational capacity and data analysis has increased significantly, we have acquired a new calculation server. This enables our customers to outsource projects and solutions such as predictive maintenance, where we handle everything from hosting and ETL to calculation and reporting.

Marketing
Our new website and graphic profile were launched in 2018: www.combine.se.
The investment gives a clearer message about our strengths and technical depth, which strengthens both recruitment and our ability to approach new customers. In addition, we implemented a much more focused social media marketing plan to showcase our high level of technology, our highly skilled engineers and our company atmosphere.

Market analysis
The year has been politically overwhelming, with a trade war, Brexit, weaker trading on the stock exchange and a failure (in my opinion) to cooperate after the Swedish election. The vehicle cluster in Gothenburg is extremely strong, and despite slightly worse economic conditions we are cautious but still positive about the coming years.

AI, Machine Learning, Deep Learning
For us, the year has been particularly interesting from a technical perspective, since there has been big interest in buzzwords such as AI, Machine Learning and Deep Learning. These are areas where we have been active for many years, but the market has not been mature and receptive until now. Therefore, we see strong interest in our expertise and experience in the field, not least in the tool Sympathy for Data.

Parental leave
Since I have been on parental leave during the second half of the year, I really want to point out the huge benefit we have in Sweden with parental leave. I think that everyone, especially men in leading positions, should try to see the huge reward of being at home with their children. One should also take into account the great benefit our society gains from equity between women and men, both at home and at work.

Finally, I would like to thank all my wonderful colleagues. You make my job both easy and inspiring.

Thank you,
Erik Silfverberg

Read more

As Python is one of the most prominent programming languages in data science, we often find ourselves using it to implement our own products, such as Sympathy for Data, as well as tools and other software for our clients. As members of the Python community and contributors to open-source software, we want to keep our developers at the forefront of the development of Python as a language, as well as of the whole Python ecosystem.

Tomorrow is the PyCon Sweden conference in Stockholm, with several keynote speakers well known in the Python community. Combine will of course participate, and today four of our data engineers and software developers take the train up to Stockholm to stay the night, listen to the talks and contribute to the discussions. If you’re there, try to catch us for a quick chat about Python or Sympathy for Data and how we use it to solve data science problems for our customers.
