Is Open Access

      A Scalable Framework for Acceleration of CNN Training on Deeply-Pipelined FPGA Clusters with Weight and Workload Balancing

Preprint


          Abstract

          Deep Neural Networks (DNNs) have revolutionized numerous applications, but the demand for ever more performance remains unabated. Scaling DNN computations to larger clusters is generally done by distributing tasks in batch mode using methods such as distributed synchronous SGD. One issue with this approach is that, to keep the distributed cluster at high utilization, the workload assigned to each node must be large, which implies nontrivial growth in the SGD mini-batch size. In this paper, we propose a framework called FPDeep, which uses a hybrid of model and layer parallelism to configure distributed reconfigurable clusters to train DNNs. This approach has numerous benefits. First, the design does not suffer from batch-size growth. Second, novel workload and weight partitioning leads to balanced loads of both among nodes. Third, the entire system operates as a fine-grained pipeline. This yields high parallelism and utilization and also minimizes the time features must be cached while waiting for back-propagation. As a result, storage demand is reduced to the point where only on-chip memory is needed for the convolution layers. We evaluate FPDeep with the AlexNet, VGG-16, and VGG-19 benchmarks. Experimental results show that FPDeep scales well to large numbers of FPGAs, with the limiting factor being the FPGA-to-FPGA bandwidth. With 6 transceivers per FPGA, FPDeep scales linearly up to 83 FPGAs. Energy efficiency is evaluated in terms of GOPs/J. FPDeep provides, on average, 6.36x higher energy efficiency than comparable GPU servers.
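The workload-balancing idea described in the abstract can be illustrated with a small sketch. This is a hypothetical greedy heuristic, not the authors' partitioning algorithm: given per-layer compute costs, assign contiguous runs of layers to pipeline stages (one stage per FPGA) so that each stage receives roughly equal work.

```python
# Hypothetical sketch of compute-balanced, contiguous layer-to-stage
# assignment for a deeply-pipelined cluster. Illustrative only; FPDeep's
# actual partitioning also balances weights and is finer-grained.

def partition_layers(costs, n_devices):
    """Split `costs` (per-layer compute, e.g. GOPs) into `n_devices`
    contiguous groups whose totals are as even as a greedy pass allows."""
    total = sum(costs)
    target = total / n_devices
    groups, current, acc = [], [], 0.0
    for i, c in enumerate(costs):
        current.append(i)
        acc += c
        # Close this stage once it reaches its fair share, keeping enough
        # layers in reserve so every remaining device gets at least one.
        remaining_layers = len(costs) - i - 1
        remaining_devices = n_devices - len(groups) - 1
        if acc >= target and remaining_devices > 0 and remaining_layers >= remaining_devices:
            groups.append(current)
            current, acc = [], 0.0
    groups.append(current)
    return groups

# Example: 6 layers with uneven costs split across 3 devices.
stages = partition_layers([4, 2, 2, 5, 1, 4], 3)
# stages -> [[0, 1], [2, 3], [4, 5]] with per-stage costs 6, 7, 5
```

Because the cluster is a pipeline, stages must be contiguous layer ranges; a greedy cut at the per-device cost target is the simplest way to keep stage loads comparable.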


                Author and article information

                Published: 04 January 2019 (arXiv preprint)
                arXiv ID: 1901.01007
                Record ID: 9640c1aa-4484-4223-85fd-36df65e8651f
                License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
                Custom metadata: 20 pages, 20 figures
                arXiv classes: cs.LG, cs.AR, cs.DC, stat.ML
                Keywords: Machine learning, Networking & Internet architecture, Artificial intelligence, Hardware architecture
