Senior Product Manager, NVXL Technology, Inc.
Part 1 of 2
Machine learning is a buzzword everyone seems to be throwing around these days. A subset of artificial intelligence, machine learning essentially gives systems the ability to automatically learn and improve from experience without being explicitly programmed.
The process of machine learning begins with observations or data, such as examples, direct experience, or instruction, so that the system can look for patterns in the data and make better decisions in the future based on the examples provided. The primary goal of machine learning is to allow computers to learn automatically, without human intervention or assistance, and adjust their actions accordingly.
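To make "learning from examples" concrete, here is a minimal sketch (illustrative only, not NVXL code) of one of the simplest such methods, a 1-nearest-neighbor classifier: it makes decisions about new points based purely on labeled observations it has seen before, with no explicitly programmed rules.

```python
def nearest_neighbor(train, query):
    """Return the label of the training example closest to `query`."""
    def dist2(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(train, key=lambda ex: dist2(ex[0], query))
    return label

# Observations: (feature vector, label) pairs provided as "experience".
examples = [
    ((1.0, 1.0), "small"),
    ((1.2, 0.8), "small"),
    ((8.0, 9.0), "large"),
    ((9.1, 8.7), "large"),
]

print(nearest_neighbor(examples, (1.1, 0.9)))  # -> small
print(nearest_neighbor(examples, (8.5, 9.2)))  # -> large
```

The decision rule here is never written down by the programmer; it emerges entirely from the examples, which is the essential idea the paragraph above describes.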
Some of the cooler recent advances in machine learning combine machine learning, natural language processing, and information retrieval techniques. In 2011, IBM’s Watson beat two human champions in a Jeopardy! competition. This was followed a year later by Google’s neural network that learned to recognize cats by watching unlabeled images taken from frames of YouTube videos. Then, in 2014, Facebook researchers published their work on DeepFace, a system that uses neural networks to identify faces with 97.35% accuracy. The results were an impressive improvement of more than 27% over previous systems, rivaling human performance. That same year, Google showcased Sibyl, a proprietary platform for massively parallel machine learning used internally by Google to make predictions about user behavior and provide recommendations. Google topped itself in 2016 with its AlphaGo program, the first computer Go program to beat an unhandicapped professional human player, using a combination of machine learning and tree search techniques. Go is widely thought to be the most complicated board game in the world, and Google has continued to make upgrades to the program.
In addition, there are reinforcement learning algorithms: learning methods that interact with their environment by producing actions and discovering errors or rewards. Trial-and-error search and delayed reward are the most relevant characteristics of reinforcement learning.
In essence, a system trained to do a particular job learns on its own, based on its previous experiences and outcomes while doing similar jobs.
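Those two characteristics can be sketched in a few lines of tabular Q-learning (a standard reinforcement learning algorithm) on a toy 5-state corridor. The environment and its parameters are hypothetical, chosen only for illustration: the agent starts at state 0, and only reaching state 4 yields a reward, so feedback for the early moves is delayed and the agent must discover the path by trial and error.

```python
import random

N_STATES = 5
ACTIONS = [-1, +1]            # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Apply an action; the reward arrives only at the goal state."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state, done = 0, False
    while not done:
        # Trial and error: mostly exploit what was learned, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # The Q-learning update propagates the delayed reward backward.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# Greedy policy after training: the best action in each non-terminal state.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

No one tells the agent to head right; early episodes wander, but once a run stumbles onto the reward, the update rule propagates its value back through the states visited earlier, and the learned policy converges to moving right everywhere.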
All of these forms of machine learning enable the analysis of massive quantities of data. And while machine learning generally delivers faster, more accurate results as part of identifying profitable opportunities, trends and patterns or dangerous risks, it may also require additional time and resources to train properly. Finally, in order to process these large volumes of information, massive compute power is needed.
Despite the growing need for more advanced compute performance, machine learning itself has been around for a very long time. In the 1940s, ENIAC, the first general-purpose electronic computer, was invented. In the 1950s, the first computer game program claiming to be able to beat the checkers world champion was previewed.
Meanwhile, Moore’s law, the trend that long drove gains in compute power, has slowed dramatically in the past decade. From 1970 to 2008, computing performance experienced a 10,000-fold improvement. From 2008 until now, the industry has seen just an 8 to 10x performance increase. Yet capacity demands remain daunting, with no end in sight.
What is called for is a new generation of compute acceleration solutions that can drive all these compute-hungry applications. In Part 2 of this blog series, I will discuss NVXL’s game-changing approach and what it means for the future of compute acceleration.