Show simple item record

dc.contributor.author: Yang, Tao [en_US]
dc.contributor.author: Wei, Yadong [en_US]
dc.contributor.author: Tu, Zhijun [en_US]
dc.contributor.author: Zeng, Haolun [en_US]
dc.contributor.author: Kinsy, Michel A. [en_US]
dc.contributor.author: Zheng, Nanning [en_US]
dc.contributor.author: Ren, Pengju [en_US]
dc.date.accessioned: 2020-01-15T20:11:38Z
dc.date.available: 2020-01-15T20:11:38Z
dc.date.issued: 2019-10
dc.identifier.citation: Tao Yang, Yadong Wei, Zhijun Tu, Haolun Zeng, Michel A Kinsy, Nanning Zheng, Pengju Ren. 2019. "Design Space Exploration of Neural Network Activation Function Circuits." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Volume 38, Issue 10, pp. 1974-1978. https://doi.org/10.1109/tcad.2018.2871198
dc.identifier.issn: 0278-0070
dc.identifier.issn: 1937-4151
dc.identifier.uri: https://hdl.handle.net/2144/39100
dc.description.abstract: The widespread application of artificial neural networks has prompted researchers to experiment with field-programmable gate array (FPGA) and customized ASIC designs to speed up their computation. These implementation efforts have generally focused on weight multiplication and signal summation operations, and less on the activation functions used in these applications. Yet, efficient hardware implementations of nonlinear activation functions such as exponential linear units (ELU), scaled ELU (SELU), and hyperbolic tangent (tanh) are central to designing effective neural network accelerators, since these functions require considerable hardware resources. In this paper, we explore efficient hardware implementations of activation functions using purely combinational circuits, with a focus on two widely used nonlinear activation functions, SELU and tanh. Our experiments demonstrate that neural networks are generally insensitive to the precision of the activation function. The results also show that the proposed combinational-circuit-based approach is very efficient in terms of speed and area, with negligible accuracy loss on the MNIST, CIFAR-10, and ImageNet benchmarks. Synopsys Design Compiler synthesis results show that the circuit designs for tanh and SELU reduce area by 3.13×–7.69× and 4.45×–8.45×, respectively, compared to look-up table/memory-based implementations, and can operate at 5.14 GHz and 4.52 GHz using the 28-nm SVT library. The implementation is available at: https://github.com/ThomasMrY/ActivationFunctionDemo. [en_US]
dc.format.extent: p. 1974 - 1978 [en_US]
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE) [en_US]
dc.relation.ispartof: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
dc.rights: Copyright notice: "© 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works." [en_US]
dc.subject: Activation functions [en_US]
dc.subject: Artificial neural networks (ANNs) [en_US]
dc.subject: Exponential linear units (ELUs) [en_US]
dc.subject: Hyperbolic tangent (tanh) [en_US]
dc.subject: Scaled ELUs (SELUs) [en_US]
dc.subject: Electrical and electronic engineering [en_US]
dc.subject: Computer hardware [en_US]
dc.subject: Computer hardware & architecture [en_US]
dc.title: Design space exploration of neural network activation function circuits [en_US]
dc.type: Article [en_US]
dc.description.version: Accepted manuscript [en_US]
dc.identifier.doi: 10.1109/tcad.2018.2871198
pubs.elements-source: crossref [en_US]
pubs.notes: Embargo: Not known [en_US]
pubs.organisational-group: Boston University [en_US]
pubs.organisational-group: Boston University, College of Engineering [en_US]
pubs.organisational-group: Boston University, College of Engineering, Department of Electrical & Computer Engineering [en_US]
pubs.publication-status: Published [en_US]
dc.identifier.mycv: 401024
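
For reference, the two activation functions discussed in the abstract have standard closed-form definitions. The sketch below is general background (using the commonly cited SELU constants from the self-normalizing-networks literature), not values taken from this record:

\[
\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}},
\qquad
\operatorname{SELU}(x) = \lambda
\begin{cases}
x, & x > 0 \\
\alpha \left( e^{x} - 1 \right), & x \le 0
\end{cases}
\qquad \lambda \approx 1.0507,\ \alpha \approx 1.6733.
\]

The exponential terms in both definitions are what make direct hardware evaluation costly, which is presumably why the abstract compares the proposed combinational circuits against look-up table/memory-based baselines.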

