Sensomind wins DGI innovation competition with a unique solution for usage optimization in sports facilities

Out of 80 applicants, only 12 were selected for the bootcamp and pitch event in Copenhagen on the 24th of November. Sensomind was among the 7 winners selected to share the prize of 2 million DKK. Our pitch revolved around how our AI analytics platform can be used to optimize the usage of sports facilities and thereby get more out of the billions that are wasted every year on empty sports facilities. Our technology uses cameras to measure, in real time and down to individual court level, how much facilities are actually used. Moving forward, we will team up with DGI to see how we can roll out this technology more widely.

Link to official press release:

https://www.dgi.dk/om/nyheder-og-presse/nyhedscenter/seneste-nyt/dgi-investerer-millioner-i-fremtidens-faellesskaber


Sportshub selected for DGI Impact bootcamp finals in Copenhagen (24th Nov)

Sportshub has been selected as one of 12 finalists to pitch at the DGI Impact bootcamp finals on the 24th of November. DGI is the second-largest sports organisation in Denmark, with more than 1.5 million members.

Our Pitch

Many billions are spent every year, in Denmark and the rest of the world, on building and maintaining sports facilities. The fact remains, however, that most of these facilities are poorly utilized. Most are empty during the daytime and only partially used during peak hours. To combat this, facility managers need better insights into how their facilities are used and need to be better at attracting users. Sportshub solves these problems using two key technologies combined into one product:

  • A court monitoring package provides 24/7 real-time counting of floor usage (down to individual courts, even on multi-sport game courts), giving the insights needed to measure and solve problems related to poor usage.
  • An automated stats and video analytics platform attracts more users by making training more exciting and motivating, while providing the individual and team insights needed to take their game to the next level.


Three Myths About Digital Twins

Creating replicas of resources and processes has always made sense. Architects build cardboard models of buildings before constructing them. Back in the Apollo-mission days, NASA kept a replica of the lunar lander so they could simulate problems and fixes. Digital twins are just that and more. They create a tangible representation of resources, processes, and even people. They allow you to test scenarios (what happens if I exchange this component?), run queries (which machine parameters optimize my yield?), and forecast (what will the load on my production line be in half an hour?). Many large companies are starting to adopt digital twins for parts of their processes, but even the best adopters still have much room to improve. One of the big reasons for the lack of adoption is the myths that have built up over the last 20 years, during which both hype and poor technology have hampered progress.

Myth 1:  I need a CAD model of my process

Many digital twins were born out of highly accurate simulations. Some of the pioneers in this area were the aerospace and car industries in the 1990s, where supercomputers ran fluid-dynamics simulations that could be used to minimize drag, let designers visualize their designs, and so on. The fact remains, however, that most systems and processes are about input and output, which means CAD simulations are not the only way to go. A data-driven approach often allows more accurate simulations, because the data is based on real measurements rather than the limitations of a physical model in CAD. Today even the aerospace and car industries are increasingly using data-driven approaches; one of the biggest success stories is modern flight simulators, which increasingly rely on recorded flight data rather than purely traditional model-based approaches. So the answer is clearly no, you don't need CAD models, and if you have gone down this route you might be heading down a blind alley.

Myth 2:  I need an army of statisticians to model my process

Data-driven approaches have historically been laborious to implement. Traditionally you needed good statisticians to model system components and to make decisions about sensor noise models and which variables relate to which. Thanks to modern machine learning algorithms like deep learning, this is no longer needed. How different process components behave can now be modelled efficiently from historical data. Deep learning takes care of modelling the system components, which means you can concentrate your efforts on generating insights instead of being bogged down by multivariate statistics. You get results faster, cheaper, and in a more flexible manner, without having to maintain large amounts of code describing your process.
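As a rough illustration of what this looks like in practice, the sketch below models a single process component directly from a historical log using a small neural network (scikit-learn's MLPRegressor). The file name and column names are placeholders for whatever your process actually records:

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    # Hypothetical historical log: a few sensor readings plus the measured output of one process step.
    df = pd.read_csv("process_history.csv")            # placeholder file name
    X = df[["temperature", "pressure", "feed_rate"]]   # placeholder sensor columns
    y = df["output_quality"]                           # placeholder target column

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # A small neural network learns the input/output behaviour of the component directly from data,
    # with no hand-built sensor noise model or explicit multivariate statistics.
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    model.fit(X_train, y_train)
    print("R^2 on held-out data:", model.score(X_test, y_test))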

Myth 3:  Machine learning algorithms require large amounts of data

A common misconception is that you need bucketloads of data. Some machine learning algorithms have indeed required large amounts of data to reach a good solution, but modern methods like deep learning can be made very robust with fairly small datasets from your process. It is also worth pointing out that deep learning is not synonymous with big data: you don't need big-data infrastructure like Hadoop to get started. Where the two overlap is that deep learning can handle arbitrarily large datasets, but that is far from a requirement. We have successfully trained systems with only a few hundred data samples, and have also handled huge gigabyte-scale datasets with the same software.
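One common way to get deep learning working with only a few hundred labelled images is to start from a network pretrained on a large public dataset and train only a small classification head on top. The sketch below uses Keras; the data folder and the two-class setup are hypothetical stand-ins for your own data:

    import tensorflow as tf

    # Hypothetical folder with a few hundred labelled images, e.g. data/ok/ and data/defect/
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "data", image_size=(224, 224), batch_size=16)

    # Start from a network pretrained on ImageNet and freeze its weights.
    base = tf.keras.applications.MobileNetV2(
        include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False

    # Only this small head is trained on our few hundred samples.
    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects inputs in [-1, 1]
        base,
        tf.keras.layers.Dense(2, activation="softmax"),     # two classes in this toy setup
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(train_ds, epochs=5)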

Quantum AI – Fact, Fiction, or Future?

We recently had the pleasure of visiting the newly opened Microsoft Quantum Materials Lab in Kgs. Lyngby, Denmark. Considering that quantum computing has been popping up in the media for almost 25 years, why is it suddenly generating such hype? One reason is that big companies like IBM, Intel, Microsoft, and Google have started investing in it at an unprecedented scale. Their motivation is obvious: quantum computing promises enormous amounts of computing power. Like nuclear fusion, it could truly change how the world works. A Microsoft representative mentioned that they needed several data centers just to simulate a machine with only, say, 50 qubits (qubits being the quantum-computing equivalent of bits). Among other things, even such a small machine promises to break cryptographic codes that would take classical brute force essentially forever.

To understand how this can impact AI, you just need to scroll back a few years. What really kicked off the modern AI revolution was that, during the noughties (the 2000s), really good software was built for training AI models on GPUs, which suddenly made it feasible to train on large datasets. If we had a quantum AI computer, we would likely be able to train almost instantly on datasets that today take days or even weeks. That also means we could start tackling much tougher problems and get much better solutions on existing datasets.

So are we still 25 years away from a quantum AI computer we can start using? The answer is no. Quantum computing research is not just one thing: a lot of the work targets a general-purpose quantum computer, but hybrid systems with many qubits already exist today that can solve more specific problems. For example, the company D-Wave has built a quantum annealing machine with a staggering 2000 qubits that you can buy today if your pockets are deep enough. Quantum annealing tackles optimization problems and is closely related to a popular classical method called simulated annealing, which most people working with machine learning will have run into during their studies; without going into the details, it is a good algorithm for solving a number of machine learning problems.

This is also where it starts getting interesting for AI. Today the interfaces are essentially non-existent, but people at Google AI and other big companies are working on bridging the gap between such machines and everyday AI tools such as TensorFlow. Within a few years, expect that it will be possible to upload your TensorFlow code to a quantum cloud service and have it train almost instantly. At first this will only work for specific types of AI problems, but in the future it could take over a whole range of tasks performed today by GPUs. IBM already runs a simple quantum computing cloud service; since it works so fast, it is rented not by the hour but by the minute!
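For readers who have not met it, simulated annealing explores a cost landscape by occasionally accepting worse solutions, with that tolerance shrinking as a "temperature" parameter is lowered; quantum annealers attack the same kind of optimization problems in hardware. A toy sketch of the classical method in Python (the cost function and cooling schedule are made up purely for illustration):

    import math
    import random

    def cost(x):
        return x * x + 10 * math.sin(3 * x)                   # non-convex toy objective

    def simulated_annealing(x0, temp=10.0, cooling=0.995, steps=5000):
        x, best = x0, x0
        for _ in range(steps):
            candidate = x + random.uniform(-0.5, 0.5)          # propose a small random move
            delta = cost(candidate) - cost(x)
            # Always accept improvements; accept worse moves with probability exp(-delta / temp)
            if delta < 0 or random.random() < math.exp(-delta / temp):
                x = candidate
                if cost(x) < cost(best):
                    best = x
            temp *= cooling                                    # gradually lower the temperature
        return best

    print(simulated_annealing(x0=5.0))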


How AI is helping re-imagine the role of industrial vision

There is tremendous interest in how AI is changing industrial vision. Our track on big data and artificial intelligence for industrial vision was fully booked at the High Tech Summit in Copenhagen, Denmark. AI is leading to a paradigm shift in how quality monitoring and optimization are done today.

Many industrial vision companies still rely on software developers to hand-craft their computer vision algorithms, but they are slowly learning that AI can do the same things faster and more robustly. Many applications within food and pharma have been too hard to solve with traditional computer vision because of large product variations; it is now possible to solve them simply by training from examples. This is also giving rise to new applications – together with Teknologisk Institut we demonstrated our work on slaughterhouse cobots that can adapt to individual variation between animals and cut the meat accordingly (Augmented Cellular Meat Production – ACMP). Although still slower than a human, the ability to parallelize the process in cell-based production will ultimately allow such systems to increase throughput while increasing redundancy and thus lowering the risk of downtime.

Applications within industrial vision are no longer restricted to individual machines: through AI-based digital twins they can optimize entire production lines, leading to higher-quality products and increased yield. By combining production-level insights with industrial cameras, it is possible to train complete systems that learn what to look for in images based on Key Performance Indicators. We saw massive interest in food and pharma applications such as optimizing fermentation tanks and other complex, multi-step bio-processes. Understanding cause and effect in such processes is very hard for humans and traditional process models, but much easier for AI-based methods. And why stop there? The approach scales easily to supply chains – as long as you have access to the data, you no longer have to worry about modelling individual sensors and processes, which previously was a show-stopper for many digitalization strategies.
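To make the idea of "learning what to look for based on a KPI" a little more concrete, one simple formulation is to regress a production KPI directly from camera images and let the network discover which visual features matter. A minimal sketch with Keras, where the image array and KPI values are random stand-ins for real production data:

    import numpy as np
    import tensorflow as tf

    # Stand-ins for real data: camera frames from the line and the KPI (e.g. yield) measured per frame.
    images = np.random.rand(200, 128, 128, 3).astype("float32")
    kpi = np.random.rand(200).astype("float32")

    # A small CNN that regresses the KPI directly from the image.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1),                    # predicted KPI value
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(images, kpi, epochs=5, batch_size=16)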


What is Artificial Intelligence?

If there is a question that keeps popping up again and again amongst our clients, it is: what is Artificial Intelligence (AI)? Depending on who you ask, you will get a different answer. The reason is simple: AI has its roots in multiple sources, including the scientific community and popular culture, and even geography plays a role. If you ask the scientific community in the USA, you might hear that AI was invented with military (DARPA) funding in the 1950s, although work on emulating human neurons dates back even further. Either way, our modern Western understanding of AI has been heavily influenced by popular culture, from HAL in 2001: A Space Odyssey (1960s) and Skynet in Terminator (1980s) to more recent TV series such as A.L.I.E. from The 100 (Netflix). Cultures in Asia, on the other hand, have had a different starting point for defining AI. In Japan, for example, there is folklore surrounding "tsukumogami", where inanimate objects may possess souls; even a rock may be considered intelligent. This has led to a different understanding of AI and a much broader acceptance of things as intelligent.

In Western culture we have devised scientific methods such as the famous Turing test, which seeks to determine whether a system exhibits intelligent behaviour, under the idea that if it looks and acts intelligent, then it might as well be intelligent. Are there, then, any rules for when something can or cannot be an AI? One of the big recent notions has been data-based intelligence, popularized by artificial neural networks in which millions of neurons are trained to perform intelligent functions. Earlier work saw the use of layered rule-based systems (subsumption architectures), where even simple rules (e.g. if-then-else) gave rise to complicated interactions. More recently, the notion of hiveminds, where crowd-sourcing is used for decision-making, ideation, and so on, can probably also be called AI, even though in the end it relies on real human brains.
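As a toy illustration of the layered rule-based idea, the sketch below wires a few if-then rules into priority layers, where the first layer that fires suppresses the ones below it; the sensor readings and behaviours are invented for the example:

    # Highest-priority layer: avoid obstacles.
    def avoid_obstacle(state):
        return "turn_left" if state["obstacle_ahead"] else None

    # Middle layer: head towards a visible goal.
    def seek_goal(state):
        return "move_toward_goal" if state["goal_visible"] else None

    # Lowest layer: default behaviour when nothing else fires.
    def wander(state):
        return "move_forward"

    LAYERS = [avoid_obstacle, seek_goal, wander]   # ordered from highest to lowest priority

    def decide(state):
        for layer in LAYERS:
            action = layer(state)
            if action is not None:                 # first layer that fires wins
                return action

    print(decide({"obstacle_ahead": False, "goal_visible": True}))   # -> move_toward_goal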

This gives rise to another question: how intelligent does a system or object need to be to be called an AI? Hiveminds have been shown to provide super-human intelligence, and many deep learning systems likewise show super-human performance on highly specific tasks like image recognition. On the other hand, most people will quickly point out that modern AIs like Amazon Alexa and Microsoft Cortana are pretty dumb (most were also built around rule-based systems). The answer is probably that AIs do not need to be very smart if they attempt to mimic human behaviour.

Moving on to the scientific community, all over the internet you will see Venn diagrams and onion diagrams that try to describe Artificial Intelligence as an umbrella for machine learning tools (deep learning, optimization), in terms of applications (expert systems, robotics, etc.), or as a subset of various fields (psychology, mathematics, etc.). The tools part in particular has been difficult, and it boils down to: what should an AI be able to do to be called an AI? Most AI systems today do not learn anything on the fly. They are in essence "programmed" once and can then only do that one thing; experts are needed to "reprogram" (i.e. train) them for new tasks. Systems that can learn more quickly, and do so on the fly, are therefore clearly the next step for AIs. Reinforcement learning and transfer learning are two current research areas that seek to alleviate this obvious shortcoming.

We hope this helped you get a better understanding of what AI is and inspired you to learn more.

// Sensomind Team