Facebook’s Caffe2 framework brings AI to low-power mobile devices
Aiming to bring artificial intelligence to low-power mobile devices, Facebook Inc. unveiled a new open-source deep learning framework at its F8 developer conference Tuesday.
Called Caffe2, it’s the successor to Facebook’s original deep learning framework, Caffe, which was developed alongside researchers at the Berkeley Artificial Intelligence Research lab. Caffe2’s GitHub page describes the framework as “an experimental refactoring of Caffe [that] allows a more flexible way to organize computation.”
Facebook elaborated in a blog post, describing Caffe2 as a “lightweight and modular deep learning framework emphasizing portability while maintaining scalability and performance.” The company worked closely with Amazon Web Services, Intel Corp., Microsoft Corp., Qualcomm Inc. and Nvidia Corp. to optimize Caffe2 for the cloud and mobile devices, it said.
Caffe2 can be used to program AI features into mobile devices, enabling them to recognize images, speech, text and video and become more “situationally aware,” Facebook said. That’s significant because such capabilities have until now been offloaded to remote servers in the cloud, meaning smartphones and other devices had to be connected to the web to offer this kind of functionality. Caffe2, by contrast, works within the low-power constraints of such devices, so a connection to the cloud is no longer necessary to deliver AI capabilities.
Caffe2 can do this because it leverages the advanced computing power of next-generation mobile chips to accelerate deep learning tasks. In smartphones, for example, Caffe2 harnesses the power of the Adreno graphics processing units and Hexagon digital signal processors on Qualcomm’s Snapdragon chips.
“We’re committed to providing the community with high-performance machine learning tools so that everyone can create intelligent apps and services,” Facebook said in a blog post on the Caffe2 website.
Facebook’s partners emphasized that Caffe2 is much faster than its predecessor, Caffe. Benchmark tests from Intel, Qualcomm and Nvidia showed significant performance and speed gains compared with other machine learning frameworks. Nvidia said in a blog post that Caffe2 is “significantly” faster on its high-end GPUs than the original Caffe. Some of Nvidia’s latest GPUs for machine learning come with low-precision floating-point computing capabilities that it says are instrumental for building neural networks that can make accurate inferences.
The post Facebook’s Caffe2 framework brings AI to low-power mobile devices appeared first on SiliconANGLE.