Blog

As Python is one of the most prominent programming languages in data science, we often find ourselves implementing our own products, such as Sympathy for Data, as well as tools and other software for our clients, in Python. As members of the Python community and contributors to Open-Source software, we want to keep our developers at the forefront of the development of Python as a language as well as of the whole Python ecosystem.

Tomorrow, the PyCon Sweden conference takes place in Stockholm, with several keynote speakers well known in the Python community. Combine will of course participate: today four of our data engineers and software developers take the train up to Stockholm to stay the night, listen to the talks, and contribute to the discussions. If you’re there, try to catch us for a quick chat on Python, or on Sympathy for Data and how we use it to solve data science problems for our customers.

Read more

The Commodore 64 home computer was a major success back in the 1980s. It still has a cult status and coders are still pushing its hardware to its limits.
The graphics capabilities of the C64 were limited. The multicolor bitmap mode has a resolution of 160×200 pixels, where each pixel has an aspect ratio of 2:1. A total of 16 predefined colors were available, and for each character position only a subset of four colors was allowed due to the design of the hardware. There are tricks available to emulate more colors utilizing interlaced video. How to map the video signals to obtain an accurate RGB representation of how the original colors appeared on a TV in the 1980s has been studied by several people measuring the video signals. As a result, the 16 colors can be represented as shown here.

An interesting problem is how to translate an ordinary image to a resolution similar to the C64’s (with the same pixel aspect ratio) and how to map the colors to the fixed C64 palette. The naïve solution is to measure the Euclidean distance between RGB colors. The problem is that this distance does not properly represent how humans perceive differences between colors. Luckily, there are decades of research available, published by the International Commission on Illumination (CIE). The RGB model is not a good representation of how human vision registers colors; the CIE came up with an alternative model called XYZ (tristimulus values). The CIE also defined the CIELAB color space, also known as CIE L*a*b* (or “Lab”), where L* is the lightness while a* is the green-red and b* the blue-yellow color component. The CIELAB color space has proven useful when calculating differences between colors according to various models.

First, the RGB value has to be converted to an XYZ representation. This is done using a linear transformation, and depending on the device we are working with, different matrices can be chosen. The L*a*b* values are then calculated from the XYZ values.
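
For readers who want to try this themselves, a minimal Python sketch of the conversion could look like the following. It assumes sRGB input with the D65 white point; as noted above, the exact matrix depends on the device.

```python
import numpy as np

# sRGB (D65) to XYZ matrix and D65 white point, values from the sRGB standard.
M_RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                         [0.2126, 0.7152, 0.0722],
                         [0.0193, 0.1192, 0.9505]])
WHITE_D65 = np.array([0.95047, 1.0, 1.08883])

def rgb_to_lab(rgb):
    """Convert an sRGB triplet with components in [0, 255] to CIE L*a*b*."""
    c = np.asarray(rgb, dtype=float) / 255.0
    # Undo the sRGB gamma to get linear RGB.
    linear = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    xyz = M_RGB_TO_XYZ @ linear
    # Normalize by the white point and apply the cube-root compression.
    t = xyz / WHITE_D65
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return np.array([L, a, b])
```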

In 1976 the CIE released CIE76, which is simply the Euclidean distance between two color representations in the L*a*b* color space. CIE76 was followed by CIE94 in 1994, which is defined in the L*C*h* color space, where C* is chroma and h* is hue. CIE94 introduced parameters to distinguish between color differences in graphic arts and on textiles. In 2000 a new definition called CIEDE2000 was released, containing five additional corrections that improve performance in the blue region.
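
Implementations of all three metrics are available in common libraries; scikit-image, for example, ships deltaE_cie76, deltaE_ciede94 and deltaE_ciede2000. A small sketch of mapping one pixel to the nearest palette color, assuming the 16 C64 colors have already been converted to L*a*b* (for instance with the helper above), could look like this:

```python
import numpy as np
from skimage import color

def nearest_palette_index(lab_pixel, lab_palette, metric=color.deltaE_ciede2000):
    """Index of the palette entry closest to a pixel under the chosen difference metric.

    skimage.color also provides deltaE_cie76 and deltaE_ciede94, so the same
    helper can be used to compare all three metrics discussed here.
    """
    distances = [float(metric(lab_pixel, lab)) for lab in lab_palette]
    return int(np.argmin(distances))
```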

Greyscale pictures are often easier to convert. The first picture is of a woman in water, taken by Jeremy Bishop. You can click the image to see it in full size. There are five versions of each picture: from the left we have the original picture, the converted image using Euclidean distance in RGB space, and then the results using CIE76, CIE94 and finally CIEDE2000. In this case, CIE94 and CIEDE2000 give similar results, while RGB and CIE76 want to include other colors.

In the next image we have a picture of a woman, taken by Dani Vivanco, where there are many similar color tones across the whole image. RGB, CIE94 and CIEDE2000 look quite similar, but CIE76 has problems and includes various blue tones.

Now we will look at a fairly complicated image taken by Damar Jati Pranandaru. There are big differences between the different methods and we could argue that CIEDE2000 is the best performer (which it should be). CIE76 still has problems with blue tones.

A picture taken by Alex Hawthorne shows how difficult the blue tones can be to capture properly. Look at the jacket of the woman and also the blueish tone of the snow in the background. CIE76 is still a bad performer, with blue lips and a brown jacket, and it does not want to include any blue tones in the snow. CIEDE2000 is best at capturing the facial tones. RGB wants to include more blue color than the other models, while CIEDE2000 includes only a small amount of blue.

Just to explore the blue performance further, we have a picture taken by Toa Heftiba. RGB is very keen on choosing blue colors for the background. CIE76 includes blue colors in the face and green for the highlight on top of the head. We could argue whether CIE94 or CIEDE2000 is the best representation of the original image. CIEDE2000 adds more colors to the jacket.

From Gabriel Siverio we have a very difficult image to represent. RGB adds too much green to the skin. CIE76 has problems with blue colors where it should be brown/red. CIEDE2000 adds a small patch of green to the skin to represent a highlighted area.

Dark colors are interesting to try as well, as in this photo by JC Gellidon. RGB wants to choose red/brown colors, CIE76 wants to go blue/purple, while CIE94 and CIEDE2000 choose the two red colors available in the palette.

CIEDE2000 is good at picking up good colors in difficult images like this one taken by Marius Christensen. CIE76 replaces many blue colors with red/brown and CIEDE2000 adds a skin tone to the shadow of the face.

Caleb Lucas has a similar difficult image where CIEDE2000 outperforms the other color models. CIE76 performs the worst again by adding too much blue to the face.

The intention of the corrections in CIEDE2000 was to handle the blue tones better, and based on this final picture by Maria Badasian it is clear that it worked. RGB and CIEDE2000 are quite similar, but CIE76 and CIE94 differ more.

Measuring color differences in RGB space works in some cases but fails miserably in others. CIEDE2000 performs very well and should be the primary choice when comparing colors based on human perception.

The source code used to generate the images can be found on GitHub.

Read more

How did you come in contact with Combine?
I came across a job advertisement on LinkedIn. The timing of it was perfect, as I was looking for a new job that was more in line with my Ph.D. studies and research. Combine caught my eye, as they mentioned many of the skills that I felt I possessed and wanted to use on a daily basis.

Are you using these skills now? Please describe a typical work day.
At the moment, I’m back in the automotive industry, which I like. I have an assignment where I sit at my client’s facility, doing a lot of sensor fusion. Using existing sensors, I develop different algorithms to estimate the surroundings of a vehicle. The client works in an agile way, which suits me well, with daily check-ins to avoid problems. Apart from doing the software development, I also take part in testing it on the hardware. It’s fun to see the result of one’s work on real hardware. That was something I couldn’t do on my last assignment.

Why not?
I can’t talk about it but let’s just say that assignment wasn’t in the automotive industry.

Ok. Do you have any preferences regarding industries to work within?
I have to say that from my experience I like the automotive industry. My Ph.D. is in vehicular systems and I have many years of experience working in that field. It is a fast-moving industry, where I as a software engineer can work with virtual models and simulations to get instant feedback on my work. When applicable, I can generate code automatically, upload it to a vehicle, and test my algorithms on the real hardware shortly after developing them. Another benefit is that the industry is well known to the public; I can easily explain what I do to friends and colleagues.

It sounds like you have a good assignment right now?
Yes, it is a really nice one. Although it would be fun to work even closer to my research, developing efficient diesel engines. But that would probably mean that I would have to move, and with family and friends in Linköping I prefer to stay here.

How do you feel about the balance between work and family?
For me it works very well. At the moment I work about 85% of full time. I have understood that this can be an issue in other countries. However, my current client is very understanding and supportive, so my hours are quite flexible. And that is also something I like about my work, since I somehow need to juggle work, family and other activities.

What kind of other activities do you have?
I play the alto saxophone and go running at least once a week.

Has Johan increased your interest in Combine? Want to know more about us and our colleagues? Please contact us and we can discuss how we can accommodate your needs and find the best solutions to your problems.

Read more

This blog post is a continuation of a series of posts on using Sympathy for Data for image processing. Sympathy is an Open-Source tool for graphically programming data flows, which lends itself well to quickly setting up and testing different image processing and classical machine learning algorithms that we can use to classify objects in an industrial setting. We will show how we can perform simple object recognition using only a modicum of feature engineering, a very small dataset, and simple machine learning algorithms. By doing feature engineering on the input data we get a high-precision training set for the machine learning algorithm, sufficient for classifying objects. This can be contrasted with the shotgun approach of deep learning, which requires vast datasets of training examples to solve the same task.

The task

In the previous entry we started on an algorithm for automatically extracting objects from an image taken top-down of objects against a neutral background. These example objects consist of a mix of screws, washers and nuts on a conveyor belt, and we would ideally like to classify them in order to sort them in a later step.

The output from our previous step was a list containing the mask for each object found in the input image. We will continue from this step by using image processing to do feature engineering as a pre-processing step, before applying a simple machine learning algorithm to do the classification.

What is feature engineering?

Many simplistic approaches to object classification using machine learning feed the raw pixel data to machine learning algorithms such as support vector machines, random forests or classical neural networks in order to solve tasks such as the MNIST classification. While these approaches have been successful in small domains such as the 28×28 pixel images of MNIST, it is much more problematic to do classification of arbitrarily sized objects in larger images, due in part to an explosion in the number of parameters in the models, which leads to a need for very large datasets. We cannot reasonably train a model with fewer examples than the number of free parameters.

Solutions to this problem include either feature learning or feature engineering. While the former is within the purview of deep learning and out of scope for our solution, we can instead use the latter, applying classical image processing to extract new features that enable the machine learning algorithm to work with the images.

A classical algorithm used for pre-processing images before feeding them to machine learning algorithms is SIFT (PDF). The original algorithm, proposed in 1999, was considered by many to be a large step forward, since it makes it possible to extract features for points on a real-world object such that the points extracted from two different images of the same object are close in feature space, regardless of the scale (size) and rotation of the object. This allows us to compare the features from the same object in two different images.

Each feature consists of the XY position of a keypoint (e.g. a corner) as well as a multi-dimensional vector that describes that point in such a way that the description is mostly invariant under different scale, rotation and lighting conditions. While this algorithm has been used by many for successful object recognition, it is not used as often today, both because it is patented and because many newer alternative algorithms for extracting image features are available.

One good free alternative to SIFT (and the later SURF) is the ORB algorithm (PDF), which combines two other algorithms, one for keypoint detection and one for creating a feature descriptor for each such point. We will base our solution on this algorithm.
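
Sympathy exposes ORB through the image statistics node used below; outside Sympathy, the same kind of features can be extracted with OpenCV. A minimal sketch, where the file name is just a placeholder, could look like this:

```python
import cv2

# Placeholder file name; the training image contains only screws, as described below.
image = cv2.imread("screws_only.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
keypoints, descriptors = orb.detectAndCompute(image, None)

# Each keypoint carries an (x, y) position; descriptors is an N x 32 byte matrix,
# i.e. 256 bits per keypoint, corresponding to the f0 ... f255 columns mentioned below.
points = [kp.pt for kp in keypoints]
```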

Using ORB features for object recognition

Step one is to load an image containing only the object we want to detect, in this case an example with a number of screws. We give this image to the ORB feature extractor in the image statistics node. With the default arguments we get an output that contains a number of XY points (see table below) as well as a feature vector f0 … f255 describing each such point. We can draw a small circle around each XY point in order to see which points have been extracted in the image, and we see that we have a number of such points for each object in the image at key locations such as the head and bottom of the screws.

Next we can train a one-class classification algorithm to match these features. Two options included in the default Sympathy are the isolation forest and one-class support vector machine algorithms. We will use the former to create a machine learning model that matches features that are present in the first image, while rejecting all other features. Note that by having only a single image with a few screws as a training example we are only doing a very light and cheap form of machine learning, and should adjust our expectations for the end result accordingly.

Before feeding the feature points to the classification algorithm we remove the XY coordinates using the select columns from table node. We use the fit node to train the isolation forest on the features from the training image, and the predict node to create a prediction for each of the features in a test image containing screws, washers and nuts.
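
Outside Sympathy, the fit/predict step can be sketched with scikit-learn; the arrays below are random placeholders standing in for the tables in the flow (in the real flow the rows are the f0 … f255 columns from the ORB node):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Random placeholders for the descriptor tables, with the XY columns already removed.
rng = np.random.default_rng(0)
train_descriptors = rng.random((120, 256))   # descriptors from the training image
test_descriptors = rng.random((300, 256))    # descriptors from the test image

model = IsolationForest(random_state=0)
model.fit(train_descriptors)

# predict() returns +1 for descriptors that resemble the training features and
# -1 for outliers, mirroring the Y column produced by the predict node.
y = model.predict(test_descriptors)
```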

The output of the predict node is a single column Y with the value +1 for features that match the original features and -1 for all other features. We can use this Y value to determine a color to draw on top of each feature, in order to see how the model handles each feature in the test image.

As we can see in the images above, we have a large number of positive features (Y=1, white circles) for the screws in the image and mostly negative ones (Y=-1, black circles) for the washers and nuts. In order to make a final classification we just need to count the number of positive versus negative features for each object identified in the image. If the ratio of positive to negative features exceeds a threshold (e.g. 0.6) then we classify the object as a screw.

We do this by creating a lambda subflow that takes two inputs. The first input should be the table with a y0 prediction for each feature. The second input should be an image mask. Note that on the main flow you can click on your lambda and select add input port to make these two ports visible, so you can give test inputs to them. By connecting the table with features/predictions as well as an input mask to the lambda, you can test-run the lambda on these values while you are editing it.

Once we have our inputs to the lambda we take a look at its content. You can right-click on the lambda subflow and select edit, just like you would on a normal subflow. The first thing we do inside the lambda is to use morphology to extend the border around each object, since we want keypoints that are not only inside the objects but also along their borders.

After that it is a small matter of extracting the value of the mask (true/false) at each keypoint and summing the keypoints that have y0=1 and y0=-1 respectively in a calculator node. We do this by giving the XY coordinates of each keypoint to the Extract Image Data node, which gives a table with a single column ch0_values containing the mask value at the XY coordinate of each keypoint. Next we can use the following expression in the calculator node to compute the ratio of positive to negative features for each object:
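
A sketch of that computation, written here as ordinary numpy rather than calculator syntax and with small example columns, could look like this:

```python
import numpy as np

# Example stand-ins for the two columns described below: ch0_values is the mask
# value (True/False) at each keypoint, y0 the classifier's +1 / -1 prediction.
ch0_values = np.array([True, True, True, False, True])
y0 = np.array([1, -1, 1, 1, 1])

correct = np.sum(ch0_values * (y0 == 1))    # keypoints inside the mask predicted as screw
incorrect = np.sum(ch0_values * (y0 != 1))  # keypoints inside the mask predicted as other
ratio = correct / (correct + incorrect)     # 0.75 for this toy example
```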

What this means is that we require a keypoint to be inside the mask, using the column ch0_values, and multiply it with a check that the column y0 has the value 1. The result is 1 only for points that have y0=1 and True in the input mask. The sum of all these is the correct column, which gives the number of features predicted true by the classifier.

Similarly, comparing the column y0 != 1 instead gives us the number of keypoints predicted false by the classifier.

The final step is to apply this lambda to the classified data and map it on each input mask, in order to get a classification for each object.

Note that we need to use apply first, with the table as input, since we only have one table that should be used for all invocations of the lambda. We then use map, since we have a list of input masks to check and want a list of outputs. Finally we can use the filter list predicate function to keep only the outputs that have a sufficiently high score.

The final output is a list of all the objects that were classified as screws. Note that with the given threshold of 0.6 we miss two of the screws. You can experiment with different values of the threshold and with different parameters for the basic classifier (isolation forest) to get better results. You can also try to use the One class SVM node instead of an isolation forest.
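
If you want to try the one-class SVM outside Sympathy as well, the scikit-learn class is a drop-in replacement for the isolation forest in the sketch above; the parameters here are only illustrative, not tuned:

```python
from sklearn.svm import OneClassSVM

# Alternative one-class model for the same fit/predict step as above.
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
```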

Summary

We have shown how you can use the built-in nodes in Sympathy for Data to solve a simple image classification task, using a simple one-class machine learning node with ORB features as a pre-processing step on the image. The final system can work with only a single example of the object to detect, although with a high misclassification rate. For better classifications, more training examples and/or a different machine learning algorithm can be substituted for the isolation forest, while keeping ORB features as the feature engineering method.

Read more

Although usually forgotten, it is extremely important to remember that all these technologies run on computers. It doesn’t matter if they are in the cloud, on internal resources or at some external company; at the bottom of the stack there is always a computer! The usual approach, when the performance of the algorithm cannot be improved any further and more computing power is still required, is to scale up the hardware the algorithm is running on, creating massive computer clusters that are often inefficient, expensive and difficult to understand. At this point one question might already have crossed your mind: instead of trying to scale up the hardware we are running on, why don’t we try to optimize it? It would definitely be cooler to have an algorithm running on a 20-node computer cluster than on a single big optimized server, but it is also way more expensive! All this brings us to today’s blog post topic: Linux Containers.

For several years, big data computing has been executed inside virtual machines, mainly for two reasons: resource optimization (the same algorithm is hardly ever running 24/7, but computers are, so usually several algorithms are installed and run in batches or simultaneously) and security/integrity (when different kinds of algorithms are installed on the same computer, it is crucial that a breach in one of them does not affect the others). However, although modern hypervisors have close-to-native performance in terms of CPU usage, filesystem access is considerably slower than accessing the filesystem from outside the hypervisor. This problem can often be minimized, but it is impossible to fix completely, since the hypervisor’s filesystem access always has to go through the underlying manager first.

To avoid this problem (and still with the security and resource optimization goals in mind), a different technique, known as OS virtualization, jails or containers, has been developed over the years, only reaching maturity and mainstream adoption in recent years [1] with the implementation in Linux and the growth of Docker and Linux Containers. What differentiates containers from virtual machines is that processes run isolated from the rest of the system while still sharing the same kernel, and in consequence they have access to external devices in the same fashion (and thus with the same performance) as the native system. Linux Containers is a tool designed with security in mind, able to run a fully functional OS sharing the kernel and a branch of the filesystem with the host, while being completely unaware of the existence of the host or other containers. In consequence, it is an extremely useful tool for sharing resources on the same host. However, the fact that the kernel is shared between the containers, and that limiting CPU and memory resources is still not as effective as with virtual machines, makes it not a feasible implementation for “the Cloud”, which still lags behind in terms of storage throughput and latency compared to our internal resources built on top of Linux Containers.

[1] As an example, FreeBSD jails were introduced in 2000 and Solaris containers in 2004, but full support was not finished in Linux kernel until 2013 and first user implementations were not usable until a bit later.

 

Read more

A cat does not like when its environment changes drastically. It might be stressed and start urinating on furniture to protest. Humans may not behave in the same way, but they have ways of showing discontent both directly and indirectly.

To transform an organization we need information to be able to act objectively. A company has a mixture of structured and unstructured data from which we can extract historic events and how well we have performed up to now. This is where most companies stop. Going further requires much more work.

The first step is to find out why we are where we are. Getting there requires data crunching, deduction, and discussions with domain experts. The results are reported to management and we are done. The natural continuation is to find out what would happen if the historic trends were to continue into the future, and what to expect. Once we know all of this we need to find out what our next steps should be and decide on some action. An action is a mutation of the current state of affairs and requires a change, either small or big.

The change could involve minor adjustments which fit within existing processes (easy changes) or major changes which would disrupt the existing daily patterns of the company’s employees. People are driven by their interests, and if their interests are harmed by change they will, according to action theory, try to hold on to or increase their influence. They could do so by (Learning to Change, Caluwé & Vermaak):

“…behaving unpredictably; by concealing information or distorting it; by imposing rules for the game, or, on the contrary, simply ignoring them; by forming coalitions; or by blackening somebody’s reputation.”

There are both formal and informal organizations, where the latter is undocumented and constantly changing over time. To make people change (not just the formal organization), Caluwé & Vermaak discuss seven different ways to change things, of which the last two are inappropriate to use in professional settings.

  1. Yellow: Change using power and processes to get everyone on the same wavelength.
  2. Blue: Rational change using blueprints with a given outcome (waterfall design).
  3. Red: Change using inducements and/or penalties.
  4. Green: Change by letting people grow through education and learning.
  5. White: Change through self-organization and evolution.
  6. Steel: Change using violence and repression.
  7. Silver: Change through circumstances (“if God wants”).

Which method (among the first five) to choose (or which to combine) depends on the company, the individuals within the company, and the culture. Data-based change might often end up in the blue category, but depending on the conclusions drawn from the data and how much impact the change has on humans and company structures (both formal and informal), blue might not be the best choice. The conclusion is to never forget the humans involved, because if the basic needs and relations of humans are disrupted, major discontent might be the result. Pure rationality and objectivity do not always work out as expected.

Read more

It seems to me that it is a long way to travel to Sweden to do a master’s in engineering. Why did you choose Sweden?
Yes, that is correct. The interest in doing a master’s started during my career as a hardware design engineer back in India. For most of my bachelor’s in electrical and electronics in India, I was interested in projects which were more hardware-oriented. Right after graduation I got a job assembling and testing electronic control units (ECUs) for solar inverters with a Swedish electrical company, ABB. During this time, I also became curious about how the software in the ECUs was created. Thus, I took the decision to pursue a master’s degree, and Sweden was close at hand due to its good universities and positive work culture.

Why did you choose Combine?
Well, when I reached the end of my studies I got involved in several hiring processes and a few of them went far. It was the employees’ market at that time and the lack of engineers was striking. I bumped into Combine at a job fair for graduating students and thought that the people in the booth were both social and technically experienced within the fields where I wanted to make a difference. Moreover, during the hiring process, I got a completely different experience. The process was a complete eye-opener for me, since the manager who was to hire me pushed me to answer difficult questions like, ‘What is your background?’, ‘Why did you choose engineering?’, ‘What do you want to do in your career?’ Those questions created a feeling that this company really cared about their employees, and that feeling is still there.

Tell me something from your assignment that you are proud of?
In my assignment, I work with software- and model-in-the-loop testing and how to streamline the testing process. In the beginning, I did tests by hand, which was a tedious process and not creative enough. It didn’t take the team long before we started looking for other approaches to do tests more effectively and increase the test coverage of the software. After some time, we stumbled upon an automatic testing framework, and adopting it was both interesting and challenging. During the last year, we have tried, with the help of Python and existing tools, to incorporate an automatic testing framework on a larger scale than what I have seen before. It was an intriguing task to set the whole framework up, and I am really proud of the teamwork and co-operation while doing so. Now my work is focused more on setting up the individual test requirements rather than the framework itself.

Is the work as an engineer what you had envisioned?
Well, I didn’t have any clear idea of what an engineer should do. With experience, a picture started to take form of an engineer as someone who solves problems and issues. The problems vary in both shape and size, but yes, that is what I and my fellow colleagues do every day. I really enjoy what I do and would love to keep facing more challenging tasks in the future.

You have been in Sweden for a few years now. What do you think about Sweden?
Well, to begin with, I am extremely fond of the ‘fika’ culture in Sweden. It’s such a nice custom, where colleagues get together every week just to have some cake and coffee! Moreover, I really enjoy working in Sweden because of the support, flexibility and co-operation that you enjoy at your workplace. The weather can be difficult at times, but after three winters in Sweden, I don’t feel it is going to bother me anymore.

Read more

Earlier this month WARA PS, a part of the WASP arena, demonstrated an autonomous search-and-rescue system used at sea. This was done with one UAV automatically searching for people in the water while transmitting aerial images to the rescue center. An autonomous boat, followed by a second UAV showing the situation closer to the water surface, was guided by the first UAV when a person was found in the water, thus making it possible for the person in the water to be saved by the boat.

Almost the same procedure is used when finding people on land. One UAV systematically scans a predetermined area for missing people. If a person in need is found, a second UAV is contacted which flies to the destination and releases a help package, such as a phone or medicine. All of this is performed automatically.

In January this year, a UAV dropped lifebuoys to swimmers in Australia who were in trouble, possibly saving their lives.

But it is not only human lives that can be saved by UAVs. Project Ngulia, which aims to develop technical solutions to monitor rhinos and thus combat poaching, is, among other solutions, looking at UAVs to support park rangers. This summer, a new agreement spanning three years was signed between the Kenya Wildlife Service and Linköping University. Hopefully, UAVs can become a cost-efficient and sustainable solution, saving the black rhinos from extinction.

Some of the technologies used in the project are tested at Kolmården Wildlife Park, which is located in close proximity to the technical team at Linköping University while offering a realistic savannah. The test site also produces data that are not highly classified, unlike the data from real parks and sanctuaries.

These are just a few examples of ongoing projects that use UAVs to save lives. However, the sectors in which UAVs are used have increased greatly. They are now used in marketing, professional film making, construction, delivery, imaging, agriculture, family occasions, entertainment, inventory, weather forecasting, environmental monitoring, insurance, policing, sports and more.

The development and enhancement of UAVs is of great interest to Combine, since it involves problems in one of our main fields of expertise. But equally interesting is how the UAVs are used, as saving lives and protecting wildlife are important steps in creating a better planet.

Read more

Introduction

At Combine, we play board games during so-called “Game Nights”. On several occasions, there have been discussions regarding how to shuffle cards efficiently (i.e. having an unpredictable order of the cards).

A deck of ordinary playing cards consists of four suits of 13 cards each, giving a total of 52 cards. The total number of permutations of the card deck is given by:

$$
P^n_r = \frac{n!}{(n-r)!}
$$

where \(n = 52\) is the total number of cards and \(r = 52\) is the length of the sequence we want to generate from the \(n\) cards. Since \(n = r\), the denominator is \(0! = 1\) and we obtain \(n! = 52!\), which is approximately \(8 \cdot 10^{67}\) permutations.
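
Python’s arbitrary-precision integers make this easy to verify:

```python
import math

# Number of orderings of a 52-card deck.
permutations = math.factorial(52)
print(f"{float(permutations):.2e}")  # about 8.07e+67
```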

How to shuffle cards has been studied before and also discussed in other ways, and these sources have been used as a foundation for this text.

Given a deck of 52 cards, each card is numbered from 0 to 51 in order (\(F_i = i\)). After shuffling the deck we know the id of each card in the new order. The Shannon Entropy is then calculated by first estimating the distribution of distances between consecutive cards in the new order:

$$
\Delta F_j = F_{j+1} - F_j
$$

The Shannon Entropy is then calculated using

$$
E = \sum_{j=0}^{51} -p_j \log_2(p_j)
$$

The variable \(p_j\) is a normalized histogram of distances between cards.
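
A minimal numpy sketch of this measure, following the definitions above:

```python
import numpy as np

def shannon_entropy(deck):
    """Entropy (in bits) of the card-to-card distance distribution for a shuffled deck.

    `deck` is a permutation of 0..51 giving the id of the card at each position.
    """
    distances = np.diff(deck)
    _, counts = np.unique(distances, return_counts=True)
    p = counts / counts.sum()          # normalized histogram of distances
    return float(-np.sum(p * np.log2(p)))

print(shannon_entropy(np.arange(52)))              # unshuffled deck: 0 bits
print(shannon_entropy(np.random.permutation(52)))  # random order: typically above 5 bits
```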

The maximum possible entropy is \(\log_2(52) = 5.7\) (measured in the unit “bits”), which is useful as a reference.
According to literature, it is enough to cut the deck and riffle shuffle seven times to obtain an unpredictable order.

Overhand Shuffle

Not everyone is able to perform the riffle shuffle; many use the overhand shuffle instead. We experimented with the overhand shuffle, wrote down the order of the cards after each shuffle, and ended up with the following increase in entropy (the red line is the maximum possible entropy).

The first iteration does not increase the entropy at all since the deck was only cut once without any shuffling taking place.

Hash Shuffle

One idea which was discussed at one Game Night was to shuffle cards using a method similar to how hash values are calculated in computer science. This is a deterministic shuffling method, but by adding some random elements, like cutting the deck, we get some interesting results.

The idea is to choose a number of piles and divide the cards between them. This would force cards to interleave with each other, introducing a distance between them. Just doing this once without any random elements gives the following entropies for different numbers of piles:

If we apply the hash shuffle twice and try all combinations of between 2 and 10 piles, we find that using 5 piles the first time and then 5 piles again the second time gives the highest entropy. In practice, you should cut the deck between each operation as well.

If you want to repeat the hash shuffle for a fixed number of piles you should obviously avoid 2 and 4 piles.
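
One simple way to model a single hash-shuffle pass, dealing the cards round-robin into piles and stacking the piles while ignoring the physical details, is sketched below; the result can be fed to the entropy function above.

```python
import numpy as np

def hash_shuffle(deck, n_piles):
    """Deal the deck one card at a time into n_piles piles, then stack the piles."""
    piles = [deck[i::n_piles] for i in range(n_piles)]
    return np.concatenate(piles)

# Two passes with 5 piles; in practice, cut the deck between the passes as noted above.
deck = hash_shuffle(hash_shuffle(np.arange(52), 5), 5)
```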

Riffle Shuffle

The riffle shuffle is claimed to be one of the best ways to shuffle a deck of cards. And, indeed, given the Shannon Entropy measure the riffle shuffle is by far the best way to shuffle according to the following figure:

The entropy rises very fast. Using other measures, it is claimed that 7 riffle shuffles should be enough, and that more than \(2 \log_2(52) = 11.4\) shuffles are not necessary.
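
The riffle shuffle is commonly simulated with the Gilbert-Shannon-Reeds model; a minimal sketch (not necessarily the exact model behind the figure above) looks like this:

```python
import numpy as np

def riffle_shuffle(deck, rng):
    """One riffle shuffle under the Gilbert-Shannon-Reeds model."""
    n = len(deck)
    cut = rng.binomial(n, 0.5)                     # cut the deck roughly in half
    left, right = list(deck[:cut]), list(deck[cut:])
    shuffled = []
    while left or right:
        # Drop the next card from a half with probability proportional to its size.
        if rng.random() * (len(left) + len(right)) < len(left):
            shuffled.append(left.pop(0))
        else:
            shuffled.append(right.pop(0))
    return np.array(shuffled)

rng = np.random.default_rng(0)
deck = np.arange(52)
for _ in range(7):
    deck = riffle_shuffle(deck, rng)
```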

Conclusion

When shuffling you should use the riffle shuffle. Just make sure that you cut the deck between each shuffle since the top and bottom cards tend to get stuck otherwise.

Read more