Like superheroes, deep learning packages usually have origin stories. Yangqing Jia created the Caffe project while earning his doctorate at U.C. Berkeley. The project continues as open source under the auspices of the Berkeley Vision and Learning Center (BVLC), with community contributions. The BVLC is now part of the broader Berkeley Artificial Intelligence Research (BAIR) Lab. The scope of Caffe has likewise expanded beyond vision to include nonvisual deep learning problems, although the published models for Caffe remain overwhelmingly related to images and video.
Caffe is a deep learning framework made with expression, speed, and modularity in mind. Among its promised strengths are configuration-driven definition of models and optimization, with no hard-coding, and the ability to switch between CPU and GPU by setting a single flag: train on a GPU machine, then deploy to commodity clusters or mobile devices.
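The configuration-driven approach and the single-flag CPU/GPU switch can be seen in Caffe's solver definition, which is written in protobuf text format. The sketch below is illustrative only; the net filename and hyperparameter values are assumptions, not taken from any published model.

```
# solver.prototxt -- a minimal sketch; net path and hyperparameters are illustrative
net: "train_val.prototxt"   # network architecture, also defined by configuration
base_lr: 0.01               # starting learning rate
momentum: 0.9
max_iter: 10000             # number of training iterations
snapshot: 5000              # save model weights every 5000 iterations
snapshot_prefix: "snapshots/caffenet"
solver_mode: GPU            # the single flag: change to CPU to run without a GPU
```

Training then runs from the command line with `caffe train --solver=solver.prototxt`, so moving between GPU training and CPU deployment requires editing only the `solver_mode` line, not any code.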