To meet its own demand for AI computation, rather than build more data centers, Google developed a high-performance special-purpose processor for AI calculations: the TPU (Tensor Processing Unit). In a blog post, Google said the TPU's performance is competitive with Intel's processors, and in some respects even exceeds them.
According to Google's tests, the TPU ran AI workloads on average 15-30 times faster than a comparable server-class Intel Haswell CPU or NVIDIA K80 GPU. More importantly, its performance per watt was 25-80 times higher than that of an ordinary GPU. Google's engineers also measured one neural-network workload, CNN1, running more than 70 times faster on the TPU than on a normal CPU.
At the same time, Google said that because the TPU was built specifically for machine-learning workloads, it needs fewer transistors per calculation than a traditional CPU or GPU. More work can therefore be squeezed out of each transistor, letting the chip run more complex and powerful machine-learning models per second and accelerating their use, which helps users get answers more quickly.
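The "fewer transistors per calculation" idea comes from reduced-precision arithmetic: an 8-bit integer multiply needs far less silicon than a 32-bit float multiply. A minimal sketch of the underlying trick, symmetric 8-bit quantization, is shown below; the weight values and scale factor are made up for illustration, not taken from any TPU implementation.

```python
import numpy as np

# Hypothetical float32 weights (illustrative values only).
weights = np.array([0.12, -0.5, 0.33, 0.9, -0.77], dtype=np.float32)

# Symmetric linear quantization to signed 8-bit integers:
# map the range [-max|w|, +max|w|] onto [-127, 127].
scale = np.abs(weights).max() / 127.0
q = np.round(weights / scale).astype(np.int8)

# Dequantize to check the approximation error.
recovered = q.astype(np.float32) * scale
max_err = np.abs(weights - recovered).max()

print(q)                 # the compact 8-bit representation
print(max_err <= scale)  # error bounded by one quantization step
```

Each value now occupies a quarter of its original width, so a fixed transistor budget can perform several times as many multiply-accumulate operations per second, at the cost of a small, bounded rounding error.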
Google pointed out that its team has been running TPUs in the data center for more than a year, and found that they improve machine-learning performance per watt by roughly an order of magnitude. Roughly speaking, that is the equivalent of fast-forwarding Moore's law by about seven years, or three generations of chips. Google's data centers have reportedly been using TPUs to accelerate AI services since 2015, with the desired results: user requests are handled faster and feedback latency is reduced.
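The "seven years of Moore's law" framing can be sanity-checked with simple arithmetic, assuming the classic doubling period of roughly two years (an assumption of this sketch, not a figure from Google): an order-of-magnitude gain corresponds to a bit more than three doublings.

```python
import math

doubling_period_years = 2.0  # classic Moore's-law assumption
gain = 10.0                  # an order-of-magnitude perf/watt jump

# Doublings needed to reach a 10x gain, and the years that implies.
doublings = math.log2(gain)
years = doublings * doubling_period_years

print(round(doublings, 2), round(years, 1))  # 3.32 6.6
```

About 6.6 years at a two-year cadence, which is in the same ballpark as the "seven years" Google quotes.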
It is worth mentioning that Google believes the current TPU hardware and software still leave considerable room for optimization: for example, if the TPU used GDDR5 memory, as the NVIDIA K80 GPU does, it could deliver even better performance.