Efficient FeFET Crossbar Accelerator for Binary Neural Networks
Taha Soliman1, Ricardo Olivo2, Tobias Kirchner1, Cecilia De la Parra1, Maximilian Lederer2, Thomas Kämpfe2, Andre Guntoro1 and Norbert Wehn3
1 Robert Bosch GmbH, Renningen, Germany 2 Fraunhofer IPMS, Center Nanoelectronic Technologies (CNT), Dresden, Germany 3 TU Kaiserslautern, Germany
This paper presents a novel ferroelectric field-effect transistor (FeFET) in-memory computing architecture dedicated to accelerating Binary Neural Networks (BNNs). We present in-memory convolution, batch normalization, and dense-layer processing through a grid of small crossbars with reduced unit size, which enables multi-bit operation and value accumulation. Additionally, we explore the possible parallelization of operations for maximal computational performance. Simulation results show that our new architecture achieves a computing performance of up to 2.46 TOPS together with a power efficiency of up to 111.8 TOPS/W, at an area of 0.026 mm² in 22nm FDSOI technology.
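As background to the in-memory convolution described above: BNN accelerators commonly reduce the multiply-accumulate of bipolar {-1,+1} vectors to an XNOR followed by a popcount, which is what a binary crossbar can realize in the analog or digital domain. The following sketch is purely illustrative and not taken from the paper; the function name `bnn_dot` and the bit-packing convention (bit 1 for +1, bit 0 for -1) are our own assumptions.

```python
# Illustrative sketch (not the paper's implementation): XNOR-popcount
# dot product of two n-element {-1,+1} vectors, each packed into an
# integer with bit 1 encoding +1 and bit 0 encoding -1.

def bnn_dot(a_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two packed n-element bipolar vectors."""
    # XNOR marks positions where activation and weight agree;
    # mask to n bits, then count the matches.
    matches = bin(~(a_bits ^ w_bits) & ((1 << n) - 1)).count("1")
    # Each match contributes +1, each mismatch -1:
    return 2 * matches - n

# Example: a = [+1,-1,+1,-1], w = [+1,+1,-1,-1]
# dot = (+1) + (-1) + (-1) + (+1) = 0
print(bnn_dot(0b1010, 0b1100, 4))  # -> 0
```

In hardware, the popcount corresponds to the accumulated current or count along a crossbar column, so one column evaluates one output neuron's binary dot product in a single step.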