Bachelor of Science
FPGA, Artificial neural network, Verilog
The enormous cost, in both time and power, of training an industry-scale neural network has motivated the exploration of alternative hardware on which to run the network. The Field Programmable Gate Array (FPGA), with its high degree of parallelism, is a prime candidate for making neural network training more efficient. In this investigation, a basic neural network consisting of two input neurons, three hidden layers, and one output layer is implemented in Verilog as a proof of concept. In designing the network structure, including the storage of weights on connections between neurons, the timing with which each layer proceeds forward or backward, and the mechanism for updating the weights, multiple potential design choices were considered in order to exploit the parallelism of the FPGA. Because the network was intended to learn addition, it was designed to take two inputs and produce one output. The successful implementation of node parallelism in this investigation demonstrates the FPGA's potential to run neural networks more efficiently than CPUs.
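The node parallelism described above can be illustrated with a minimal SystemVerilog sketch. This is not the thesis code; the module name, parameter values, and fixed-point width are assumptions for illustration. The key idea is that each neuron owns its weight registers and all neurons in a layer compute their weighted sums in the same clock cycle, rather than sequentially as on a CPU.

```verilog
// Hypothetical sketch of a node-parallel layer (not the thesis code).
// Every neuron has one weight register per incoming connection, and the
// for-loops unroll into parallel multiply-accumulate hardware, so all
// N_OUT neurons update simultaneously on each clock edge.
module hidden_layer #(
    parameter N_IN  = 2,   // two input neurons, as in the thesis
    parameter N_OUT = 3,   // illustrative hidden-layer width (assumption)
    parameter W     = 16   // fixed-point word width (assumption)
)(
    input  logic                  clk,
    input  logic signed [W-1:0]   in  [N_IN],
    output logic signed [W-1:0]   out [N_OUT]
);
    // Per-connection weight storage: one register per edge.
    logic signed [W-1:0] weight [N_OUT][N_IN];

    always_ff @(posedge clk) begin
        for (int i = 0; i < N_OUT; i++) begin
            // Each iteration becomes an independent MAC tree in hardware.
            automatic logic signed [W-1:0] acc = '0;
            for (int j = 0; j < N_IN; j++)
                acc = acc + in[j] * weight[i][j];
            out[i] <= acc;
        end
    end
endmodule
```

In a CPU implementation the two loops would execute serially; here synthesis produces one arithmetic path per neuron, which is the source of the efficiency gain the abstract refers to.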
©2019 Luya Gao. Access limited to the Smith College community and other researchers while on campus. Smith College community members also may access from off-campus using a Smith College log-in. Other off-campus researchers may request a copy through Interlibrary Loan for personal use.
Gao, Luya, "Implementation of an FPGA-based artificial neural network" (2019). Honors Project, Smith College, Northampton, MA.