
Enabling Incremental Training with Forward Pass for Edge Devices

(2103.14007)
Published Mar 25, 2021 in cs.LG and cs.AR

Abstract

Deep Neural Networks (DNNs) are commonly deployed on end devices that operate in constantly changing environments. For the system to maintain its accuracy, it must be able to adapt to changes and recover by retraining parts of the network. However, end devices have limited resources, making on-device training challenging. Moreover, training deep neural networks is both memory- and compute-intensive due to the backpropagation algorithm. In this paper, we introduce a method based on an evolutionary strategy (ES) that can partially retrain the network, enabling it to adapt to changes and recover after an error has occurred. This technique enables training on inference-only hardware without backpropagation and with minimal resource overhead. We demonstrate the ability of our technique to retrain a quantized MNIST neural network after injecting noise into the input. Furthermore, we present the micro-architecture required to enable training on HLS4ML (an inference hardware architecture) and implement it in Verilog. We synthesize our implementation for a Xilinx Kintex UltraScale Field Programmable Gate Array (FPGA), requiring less than 1% resource utilization to implement the incremental training.
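The paper does not include code, but the core idea of the abstract, retraining a small subset of weights using only forward-pass evaluations and random perturbations rather than backpropagation, can be illustrated with a standard evolution-strategy update. The sketch below is hypothetical: the function names, layer shapes, and hyperparameters are assumptions for illustration, not the authors' implementation or the HLS4ML micro-architecture.

```python
import numpy as np

# Hypothetical sketch of forward-pass-only (ES) retraining; not the paper's code.

def forward(W, b, x):
    """Single dense layer + softmax; stands in for the final layer of a frozen
    inference-only network."""
    z = x @ W + b
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def loss(W, b, x, y):
    """Cross-entropy loss computed from forward passes only (no gradients)."""
    p = forward(W, b, x)
    return -np.log(p[np.arange(len(y)), y] + 1e-9).mean()

def es_retrain(W, b, x, y, steps=200, pop=20, sigma=0.05, lr=0.1):
    """Perturb only the last-layer weights, score each candidate with a forward
    pass, and step in the direction favored by the better-scoring perturbations."""
    for _ in range(steps):
        noise = np.random.randn(pop, *W.shape)
        rewards = np.array([-loss(W + sigma * n, b, x, y) for n in noise])
        rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-9)
        # The reward-weighted sum of perturbations approximates the loss gradient.
        W = W + lr / (pop * sigma) * np.tensordot(rewards, noise, axes=1)
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(256, 64)).astype(np.float32)  # stand-in for noisy MNIST features
    y = rng.integers(0, 10, size=256)                   # stand-in labels
    W, b = rng.normal(scale=0.1, size=(64, 10)), np.zeros(10)
    print("loss before:", loss(W, b, x, y))
    W = es_retrain(W, b, x, y)
    print("loss after: ", loss(W, b, x, y))
```

Because every candidate is scored by a plain forward pass, this kind of update can in principle run on inference-only hardware; the paper's contribution is the micro-architecture that supports such updates on HLS4ML with minimal FPGA overhead.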
