Baidu, a leading provider of internet-related and AI services, has launched a new cloud-to-edge AI chip called Kunlun to meet the high processing demands of artificial intelligence (AI) workloads.
Announced at the Baidu Create 2018 conference, Kunlun can handle both cloud and edge scenarios, such as data centers, public cloud services, and autonomous vehicles. It comes in two models: the 818-300 (for AI training) and the 818-100 (for inference).
“Kunlun is a high-performance and cost-effective solution for the high processing demands of AI. It leverages Baidu’s AI ecosystem, which includes AI scenarios like search ranking and deep learning frameworks like PaddlePaddle. Baidu’s years of experience in optimizing the performance of these AI services and frameworks afforded the company the expertise required to build a world class AI chip,” wrote Baidu in the announcement post.
AI is an emerging technology whose applications are growing dramatically, and those applications demand ever more computational power than traditional solutions can deliver. Baidu said its new chip will be the right answer to this demand.
The company began developing an FPGA (field-programmable gate array)-based AI accelerator for deep learning in 2011. Baidu claims that Kunlun is around 30 times faster than that FPGA-based accelerator.
As for specifications, the chip is made up of thousands of small cores, provides 512 GB/s of memory bandwidth, and can perform 260 tera operations per second (TOPS) while consuming 100 watts of power.
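From the quoted figures alone, one can sketch a rough power-efficiency number for the chip. The snippet below is a back-of-the-envelope calculation based only on the 260 TOPS and 100 W values reported above; it is not an official Baidu benchmark.

```python
# Rough efficiency estimate from the announced specs:
# 260 tera operations per second at 100 watts.
tops = 260         # tera operations per second (announced)
power_watts = 100  # power consumption (announced)

tops_per_watt = tops / power_watts
print(f"{tops_per_watt:.1f} TOPS/W")  # → 2.6 TOPS/W
```

Raw TOPS/W figures like this ignore workload mix, precision, and memory-bandwidth limits, so they are only useful for coarse comparisons between accelerators.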
Kunlun will support common open-source deep learning algorithms, as well as a broad range of AI applications such as voice recognition, search ranking, natural language processing, autonomous driving, and large-scale recommendations.
Baidu will continue to develop Kunlun to expand its use cases and more efficiently meet the demands of several AI fields, including intelligent vehicles, intelligent devices, voice recognition, and image recognition.