At the Hot Chips 2017 event, Microsoft announced its new project, Project Brainwave, a deep learning platform intended to deliver accelerated, real-time AI in the cloud.
The move appears to be a response to Google's introduction in May of its new AI chip, the Tensor Processing Unit (TPU), which was designed to optimize deep learning algorithms.
Microsoft's project uses Intel's new Stratix 10 FPGA (Field-Programmable Gate Array) to accelerate machine learning operations, sustaining 39.5 teraflops with a latency of less than one millisecond.
According to Microsoft engineer Doug Burger, the FPGAs will give the company more flexibility than dedicated chips such as Google's TPU, because FPGAs can be reprogrammed after manufacturing.
"We designed the system for real-time AI, which means the system processes requests as fast as it receives them, with ultra-low latency," Burger wrote in a blog post. "Real-time AI is becoming increasingly important as cloud infrastructures process live data streams, whether they be search queries, videos, sensor streams, or interactions with users."
Also, unlike its peers, Microsoft's AI hardware platform will support multiple deep learning frameworks, including Google's TensorFlow, Microsoft's CNTK, and Facebook's Caffe2.
Project Brainwave will be built from three basic layers: a high-performance distributed system architecture, a hardware DNN engine, and a compiler and runtime for easy deployment of trained models.
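To make the compiler-and-runtime layer concrete, here is a minimal toy sketch of the general idea: a framework-level model description is lowered into a flat stream of hardware ops, which an engine then executes in order. All names here (`Op`, `compile_model`, `HardwareEngine`) are illustrative inventions, not part of Brainwave's actual toolchain, and the "engine" is ordinary Python rather than an FPGA.

```python
# Hypothetical sketch of a compiler/runtime pair for a DNN hardware engine.
# Not Microsoft's implementation; a stand-in to illustrate the layering.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Op:
    kind: str                       # "matmul" or "relu"
    weights: Optional[list] = None  # row-major weight matrix for matmul ops


def compile_model(layers):
    """Lower a framework-level layer list into a flat op stream."""
    program = []
    for layer in layers:
        if layer["type"] == "dense":
            program.append(Op("matmul", layer["weights"]))
            program.append(Op("relu"))
        else:
            raise ValueError(f"unsupported layer type: {layer['type']}")
    return program


class HardwareEngine:
    """Toy stand-in for the hardware DNN engine: executes ops in sequence."""

    def run(self, program, x):
        for op in program:
            if op.kind == "matmul":
                x = [sum(w * v for w, v in zip(row, x)) for row in op.weights]
            elif op.kind == "relu":
                x = [max(0.0, v) for v in x]
        return x


# Usage: compile a tiny two-layer "trained" model, then run an input.
model = [
    {"type": "dense", "weights": [[1.0, -1.0], [0.5, 0.5]]},
    {"type": "dense", "weights": [[2.0, 0.0]]},
]
program = compile_model(model)
print(HardwareEngine().run(program, [3.0, 1.0]))  # -> [4.0]
```

The point of the split is the same one the three-layer design suggests: models authored in any supported framework can be lowered to one common op stream, so the hardware engine never needs to know which framework produced them.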
In the blog post, Burger also mentioned that the company plans to bring this real-time AI capability to Azure users soon, letting customers run complex machine learning models at high performance and scale.
The new project is a clear indicator of Microsoft's efforts to bring state-of-the-art AI capabilities to market, and such projects will help it compete with rivals such as Amazon, IBM, Google, and Apple.