Brave New World
How Machine Learning Will Shape Your Future
Part 3 of 3

By Julie DiBene
Director of Marketing
NVXL Technology, Inc.

The year is 2042 and it is 6 a.m. on a weekday. Music cues up, lights glow softly, and that’s your wake-up call, your cue to get out of bed. Energy-conscious sensors turn everything off once your feet hit the floor, and your shower starts cranking as you head into the bathroom. By the time you cross your bedroom floor, the water temperature is exactly 107.5°F and you enjoy a perfect morning shower.

From the temperature of your morning shower and the type of coffee your smart pantry automatically reorders, to the way you live your life, your preferences and needs have long been tracked and recorded by machine learning (ML). Machine learning is a discipline of computer science that uses statistical techniques to enable computer systems to learn and progressively improve at a specific task without being explicitly programmed. A subset of artificial intelligence (AI), machine learning concerns the study and construction of algorithms that can learn from data and make predictions on it.

Machine learning is closely aligned with computational statistics, which also focuses on prediction-making through the use of computers. Within the field of data analytics, machine learning is used to create complex models and algorithms for the purposes of prediction, generally known as predictive analytics. These analytical models allow researchers, data scientists, engineers, and analysts to construct consistent, fast, and repeatable calculations while discovering concealed insights via learning from historical relationships and trends found in the data.
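
To make that concrete, here is a minimal sketch of predictive analytics in Python. The sales figures and the scikit-learn model below are purely illustrative and are not drawn from any particular system; the point is simply that the model “learns” a trend from historical data and uses it to predict the next value.

# A minimal predictive-analytics sketch: fit a model to historical data,
# then use it to predict a future value. The numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Historical relationship: month of operation vs. units sold (made-up data)
months = np.array([[1], [2], [3], [4], [5], [6]])
units_sold = np.array([110, 125, 139, 151, 168, 180])

model = LinearRegression()
model.fit(months, units_sold)          # "learn" the trend from past data

print(model.predict(np.array([[7]])))  # predict next month's demand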

Machine learning has long been inextricably woven into our reality. From tracking which online advertisements we watch so companies can make better product recommendations, to identifying and tagging friends in photos, machine learning already influences many of the decisions we make every day.

Fast forward to 2042, and machine learning will have taken quality of life and efficiency to new levels. Home assistants first appeared more than three decades earlier. No longer do users simply ask Alexa to put on soothing music or order more cat food; machine learning has completely taken over managing the home, freeing up busy home dwellers to pursue other interests. From what you eat, to when you exercise, to your peak hours for productivity, machine learning has calculated the answer. By 2042, the automation of our domestic lives will have been honed to a fine science in which nearly every aspect of our daily lives can be analyzed and improved by machine learning. Stuck in traffic? No problem. En route to work, you can peek inside your fridge and order whatever you need. No need to drop by the grocery store: your groceries can be ordered on the road and delivered to your door at your convenience.

As for transportation? By 2042, very few people will actually know how to drive or possess a driver’s license, much less own a car. Driverless car technology will have all but put the DMV out of business, and new houses will no longer be built with garages. You will simply tell your PA to sync up everyone’s schedules with the driverless car service you use and, minutes after the kids depart for school, your PA will even remind you to send a verbal consent for your kid’s museum field trip. Kids have been forging their parents’ signatures since the first permission slip was sent home on papyrus millennia ago, but voices are nearly impossible to copy. In the future, all the schools use voice recognition, including verbal confirmation that parents have received a copy of their children’s grades. And this is all before you take off for your virtual reality lesson on the court or with your non-human piano teacher.

While ML is compelling, challenges abound. These evolving technologies require massive amounts of compute power, and that appetite is the single biggest problem facing machine learning. From 1970 to 2008, computing performance improved roughly 10,000-fold. From 2008 onward, however, the industry saw only an 8x to 10x performance increase. This is problematic because rapidly developing ML applications come with extensive datasets and require computationally intensive ML models. Training these powerful models requires terabytes or petabytes of data, and the compute infrastructure must handle hundreds of thousands of requests per second from billions of users with response times of less than tens of milliseconds. With enough compute power, these applications will be able to quickly identify trends and patterns that would otherwise be difficult or too time-consuming to detect, and take appropriate action. If the industry can address this voracious need for compute power, machine learning is poised to change the world.
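
A quick back-of-envelope calculation shows why those serving numbers are so demanding. The request rate and latency budget below come straight from the figures just cited; the per-request compute cost is an assumption chosen purely for illustration.

# Back-of-envelope sizing for an ML inference service, using the figures above.
# The per-request compute cost is an assumption for illustration only.
requests_per_second = 500_000        # "hundreds of thousands of requests per second"
latency_budget_s    = 0.010          # "less than tens of milliseconds"
flops_per_inference = 2e9            # assumed: roughly 2 GFLOPs per model inference

# Little's law: requests in flight = arrival rate x time each request spends in the system
concurrent_requests = requests_per_second * latency_budget_s
sustained_flops     = requests_per_second * flops_per_inference

print(f"Requests in flight at any instant: {concurrent_requests:,.0f}")
print(f"Sustained compute required: {sustained_flops / 1e12:.0f} TFLOP/s")

Even with these conservative assumptions, the service must sustain on the order of a petaflop per second of inference throughput, which is exactly the kind of load that pushes general-purpose servers past their limits.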

Before NVXL, compute provisioning was handled in several ways, most commonly a traditional over-provisioned approach in which compute resources were allocated for peak demand, which essentially meant data centers had to run at peak capacity all the time. This was inefficient and came with a high total cost of ownership (TCO). Then along came time-based scaling, commonly used in cloud data centers to provision computing resources based on historical workload patterns. This approach offered more granular resource provisioning, but overall compute utilization was still less than optimal. When NVXL introduced polymorphic acceleration, real-time scaling became the technology of choice. Rather than provisioning for peak load, real-time scaling allocates resources to match actual workload demand at any moment, offering fine-grained and far more efficient utilization of resources along with improved latency and a far lower TCO. Polymorphic acceleration also replaced the silos of application-specific computing architectures that were once the norm with a software-defined acceleration platform built on a flexible accelerator pool that can easily be expanded and upgraded. Matching resources to actual demand keeps system utilization high, enables much higher levels of efficiency, and reduces TCO.
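
To see why matching capacity to demand matters for TCO, consider a toy comparison. The hourly demand curve below is invented and the sketch is not NVXL’s scheduler; it simply contrasts a fleet sized for the worst hour with one that scales in real time.

# A toy comparison of provisioning strategies over a 24-hour demand curve.
# The demand numbers are invented; the point is the utilization gap.
demand = [20, 15, 10, 10, 15, 30, 60, 90, 100, 95, 90, 85,
          80, 85, 90, 95, 100, 95, 80, 60, 45, 35, 30, 25]  # accelerators needed, per hour

peak_capacity  = max(demand)                   # over-provisioning: size for the worst hour
overprov_hours = peak_capacity * len(demand)   # capacity paid for around the clock
realtime_hours = sum(demand)                   # real-time scaling: capacity tracks demand

utilization = realtime_hours / overprov_hours
print(f"Over-provisioned fleet utilization: {utilization:.0%}")   # about 60% with this curve
print(f"Capacity-hours saved by real-time scaling: {overprov_hours - realtime_hours}")

In this toy example the over-provisioned fleet sits idle roughly 40 percent of the time, and every idle accelerator-hour is cost without benefit; scaling in real time recovers that waste.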

NVXL is at the forefront of addressing ML inference with a revolutionary technology platform built from the ground up to accelerate and scale compute performance with industry-leading density, performance/power, and performance/cost. Harnessing the power and flexibility of FPGAs, the NVXL acceleration platform integrates seamlessly with multiple deep learning frameworks and with Apache Spark’s scalable machine learning library. NVXL, with its massively scalable, software-defined acceleration platform, is shattering the boundaries of compute performance to enable the next generation of machine learning applications.

I invite you to request a demonstration of this breakthrough technology, or to continue the discussion of polymorphic acceleration and find out firsthand how we can help you take your compute-intensive applications to the next level, by contacting us.