Google is a multinational corporation and one of the main pillars of the online world. The company handles the majority of the world's online search queries and offers many other Internet-related products, including hardware platforms, operating systems, and enterprise services. With so many operations under one roof, the company needs powerful infrastructure to run them, and rather than relying on outside vendors, the tech giant has favored in-house solutions. That is how Google came to build its own AI chip, designed to run its machine-learning workloads faster than anything else it could buy on the market.
Google Stated that its AI Chip Is up to 30x Faster than CPUs
The new chip is called the Tensor Processing Unit, or TPU. Its existence was partially disclosed at a 2016 developer conference, but at the time all that was publicly known was that it had been built specifically to run the company's machine-learning framework, TensorFlow. Only recently has Google finally published a more comprehensive account of its AI chip.
Google takes pride in its new TPU, reporting that it runs its inference workloads 15x to 30x faster than contemporary CPUs and GPUs. For its case study, Google benchmarked the chip against Intel Haswell processors and Nvidia K80 GPUs. The TPU is purpose-built to sustain a heavy daily load of machine-learning work. The chip also addresses another substantial concern for the tech giant, namely power consumption: Google says it delivers up to 80 times higher TeraOps per watt.
The Project Has Been in the Making Since 2006
Google has also revealed a difference between its chip and similar products on the market. Hardware designers usually optimize for convolutional neural networks, a kind of neural network that is well suited to tasks such as image recognition. Google's production workloads, by contrast, lean heavily on multi-layer perceptrons.
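To make the distinction concrete, a multi-layer perceptron is just a stack of fully connected layers, which boils down to dense matrix multiplies plus nonlinearities. This is exactly the kind of arithmetic a TPU accelerates. The sketch below is purely illustrative (the layer sizes and code are our own example, not Google's):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Standard ReLU nonlinearity used between hidden layers.
    return np.maximum(x, 0.0)

def mlp_forward(x, weights, biases):
    """Forward pass through a stack of fully connected layers.

    Each layer is a dense matrix multiply plus a bias; this dense
    matrix arithmetic is the workload a TPU is built to speed up.
    """
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)              # hidden layers
    return h @ weights[-1] + biases[-1]  # linear output layer

# A toy 784 -> 128 -> 10 network with random parameters
# (hypothetical sizes, chosen only for illustration).
sizes = [784, 128, 10]
weights = [rng.standard_normal((m, n)) * 0.01
           for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

batch = rng.standard_normal((32, 784))   # a batch of 32 input vectors
logits = mlp_forward(batch, weights, biases)
print(logits.shape)  # (32, 10): one 10-way output per input
```

A convolutional network would replace those dense multiplies with small sliding-window filters, which is why hardware tuned for one style does not automatically excel at the other.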
The company says it has been exploring this idea since 2006, when its engineers realized they could combine FPGAs, ASICs, and GPUs into a single product. The project accelerated in 2013, when Google faced a choice between developing a new solution and spending a fortune on additional CPU capacity. Google has now published the design details of its AI chip, though there is little chance the company will ever sell it commercially. Nonetheless, now that the information is public, others can study the technology and build something similar of their own.