On Monday, tech behemoth Google announced that it would make this chip available to other companies, for a fee, through a cloud-computing service. The chips will be called "Cloud TPUs," short for Cloud Tensor Processing Units. Each provides customers with specialized circuits built solely to accelerate AI computation; in Google's tests, 64 of them trained the ResNet-50 image-recognition model in only 30 minutes.
"We are trying to reach as many people as we can as quickly as we can," said Zak Stone, who works alongside the small team of Google engineers that designs these chips.
It is common for technology companies of this scale to design much of their own hardware in-house, both to cut costs and to improve the efficiency of their multibillion-dollar data centers.
"This is about packing as much computing power as possible within a small area, within a heat budget, within a power budget," said Casey Bisson, who helps oversee a cloud computing service called Joyent.
Potential applications for such a chip include computer vision: teaching computers to recognize objects. Lyft, for instance, has been trying to apply this technology to its development of driverless cars, attempting to get the vehicles to visually identify street signs or pedestrians. This market for AI chips is currently dominated by Nvidia, whose GPUs were originally designed to render graphics for games and other software.
However, Google is unfazed by smaller competitors. "Google has become so big, it makes sense to invest in chips," said Fred Weber, who spent a decade as the chief technology officer at the chip maker AMD. "That gives them leverage. They can cut out the middleman."
Google is currently charging $6.50 per TPU per hour, though that pricing may shift once the hardware is generally available. Right now, Google is still throttling the Cloud TPU quotas available to its customers, but anyone can request access to the new chips. Once people get access to the Cloud TPUs, Google has several optimized reference models available that will let them get up to speed and use the hardware to accelerate AI computation.
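For a rough sense of what that pricing means in practice, here is a back-of-the-envelope sketch using only the figures cited in this article ($6.50 per TPU per hour, 64 TPUs, a 30-minute ResNet-50 training run); actual billing could of course differ:

```python
# Back-of-the-envelope cost estimate for the ResNet-50 run described above.
# All figures come from the article; real-world billing may differ.
TPU_HOURLY_RATE = 6.50   # dollars per Cloud TPU per hour
NUM_TPUS = 64            # TPUs used in Google's ResNet-50 test
TRAINING_HOURS = 0.5     # 30 minutes

cost = NUM_TPUS * TRAINING_HOURS * TPU_HOURLY_RATE
print(f"Estimated cost of the run: ${cost:.2f}")  # → $208.00
```

By this estimate, a training job that once took many GPU-hours comes out to a few hundred dollars of rented TPU time.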