
Google’s Cloud TPU chips now available in beta on its public cloud 

Google recently announced the beta availability of Cloud Tensor Processing Units (TPUs) on Google Cloud Platform. The offering will help machine learning (ML) experts train and run their models more quickly.

Google Cloud TPUs are chips designed to scale up and accelerate TensorFlow-based AI workloads. Each Cloud TPU consists of application-specific integrated circuits (ASICs) paired with 64GB of high-bandwidth memory on a single board.

Each board can be used on its own or connected to other boards over a dedicated network; the connected boards form multi-petaflop ML supercomputers called TPU pods.

Each Cloud TPU can deliver up to 180 teraflops of performance, enabling enterprises to train several variants of a business-critical ML model overnight rather than waiting days or weeks.
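As a rough back-of-the-envelope check on what that figure means, the sketch below totals the compute one Cloud TPU could deliver overnight; the 8-hour window is an illustrative assumption, not a number from Google:

```python
# Back-of-the-envelope: total compute from one Cloud TPU overnight.
# The 180-teraflops peak figure is from Google's announcement; the
# 8-hour "overnight" window is an assumption for illustration.
PEAK_TFLOPS = 180   # peak performance per Cloud TPU (teraflops)
HOURS = 8           # assumed overnight training window

total_flops = PEAK_TFLOPS * 1e12 * HOURS * 3600
print(f"{total_flops:.3e} floating-point operations")  # 5.184e+18, i.e. ~5 exaflops of work
```

Real-world throughput would of course fall short of the peak number, but the order of magnitude explains the "overnight instead of weeks" claim.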

Cloud TPUs come preconfigured, so users can avoid the hassle of keeping drivers up to date. Google will apply the same sophisticated security practices to these TPUs as it does to all Google Cloud services.

“Instead of committing the capital, time and expertise required to design, install and maintain an on-site ML computing cluster with specialized power, cooling, networking and storage requirements, you can benefit from large-scale, tightly-integrated ML infrastructure that has been heavily optimized at Google over many years,” wrote Google in a blog post.

Customers running machine learning workloads on Cloud TPUs will also be able to take advantage of Google Cloud Platform's storage, networking and data analytics technologies.

Additionally, Google provides reference models such as ResNet-50 and RetinaNet, which help users solve image classification and object detection problems. The Transformer model from Tensor2Tensor lets users experiment with machine translation and language modeling.

When Google announced Cloud TPUs last year, it claimed that TPUs deliver 15 to 30 times faster performance on AI workloads than contemporary GPUs and CPUs.

For accelerated workloads such as machine learning and data processing, Google also offers Nvidia Tesla GPUs attached to on-demand VM instances via Google Compute Engine.

Currently, Cloud TPUs are available in limited quantities, with usage billed by the second at a rate of $6.50 per Cloud TPU per hour.
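Since billing is per-second, short experiments cost a proportional fraction of the hourly rate. A minimal sketch of that arithmetic, using the rate quoted above (the helper name and the 30-minute example are illustrative):

```python
# Per-second billing at $6.50 per Cloud TPU per hour (rate from the article).
RATE_PER_HOUR = 6.50

def tpu_cost(seconds, num_tpus=1):
    """Cost in USD for running `num_tpus` Cloud TPUs for `seconds` seconds."""
    return RATE_PER_HOUR / 3600 * seconds * num_tpus

# A 30-minute experiment on a single Cloud TPU:
print(f"${tpu_cost(30 * 60):.2f}")  # $3.25
```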
