BlockAMC: Scalable In-Memory Analog Matrix Computing for Solving Linear Systems

(2401.10042)
Published Jan 18, 2024 in cs.AR and cs.DC

Abstract

Recently, in-memory analog matrix computing (AMC) with nonvolatile resistive memory has been developed for solving matrix problems in one step, e.g., matrix inversion for solving linear systems. However, the analog nature of AMC poses a barrier to scalability, due to limits on the manufacturability and yield of large resistive memory arrays, device and circuit non-idealities, and the cost of hardware implementation. Aiming to deliver a scalable AMC approach for solving linear systems, this work presents BlockAMC, which partitions a large original matrix into smaller block matrices mapped onto different memory arrays. A macro is designed to perform matrix inversion and matrix-vector multiplication with the block matrices, producing partial solutions that are combined to recover the original solution. The size of the block matrices can be reduced exponentially by applying multiple stages of divide-and-conquer, yielding a two-stage solver design that enhances the scalability of the approach. BlockAMC also alleviates the accuracy issues of AMC, especially in the presence of device and circuit non-idealities such as conductance variations and interconnect resistance. Compared to a single AMC circuit solving the same problem, BlockAMC improves area and energy efficiency by 48.83% and 40%, respectively.
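The block-wise recovery described in the abstract can be illustrated with Schur-complement elimination on a 2x2 partition. The NumPy sketch below is a minimal illustration, not the paper's circuit-level macro: the function name `solve_blockwise` and the test matrix are assumptions made here for demonstration; each `np.linalg.inv`/`np.linalg.solve` call stands in for an analog block-inversion macro, and each `@` product stands in for an analog matrix-vector multiplication. Applying the same split recursively to A11 and to the Schur complement would mirror the paper's two-stage divide-and-conquer design.

```python
import numpy as np

def solve_blockwise(A, b):
    """Solve A x = b via a 2x2 block partition (Schur-complement elimination).

    Illustrative sketch of BlockAMC-style recovery: block inversions and
    matrix-vector products on the smaller blocks yield partial solutions
    that are combined into the full solution.
    """
    n = A.shape[0]
    k = n // 2
    A11, A12 = A[:k, :k], A[:k, k:]
    A21, A22 = A[k:, :k], A[k:, k:]
    b1, b2 = b[:k], b[k:]

    A11_inv = np.linalg.inv(A11)                       # block inversion (analog macro)
    S = A22 - A21 @ A11_inv @ A12                      # Schur complement of A11
    x2 = np.linalg.solve(S, b2 - A21 @ A11_inv @ b1)   # partial solution for the second block
    x1 = A11_inv @ (b1 - A12 @ x2)                     # recover the remaining unknowns
    return np.concatenate([x1, x2])

# Usage: compare against a direct solve on a well-conditioned test matrix (assumed for illustration).
rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n)) + n * np.eye(n)        # diagonally dominant, so A11 and S are invertible
b = rng.standard_normal(n)
print(np.allclose(solve_blockwise(A, b), np.linalg.solve(A, b)))  # True
```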
