Implementation of Neural Networks using GPU


The KIPS Transactions: Part B, Vol. 11, No. 6, pp. 735-742, Oct. 2004
DOI: 10.3745/KIPSTB.2004.11.6.735

Abstract

We present a new use of commodity graphics hardware to accelerate artificial neural networks, and we examine how the GPU improves the time performance of an image processing system based on a neural network. When multiple input sets are computed in parallel, the vector-matrix products become matrix-matrix multiplications, which lets us fully exploit the parallelism of the GPU. The sigmoid activation and the bias-term addition are also implemented with pixel shaders on the GPU. Our preliminary results show a speedup of about thirty times on an ATI RADEON 9800 XT board.
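The batching idea in the abstract can be sketched in NumPy: stacking the input vectors as columns of a matrix turns many vector-matrix products into one matrix-matrix multiplication, followed by the bias addition and elementwise sigmoid that the paper maps to pixel shaders. This is an illustrative sketch, not the paper's GPU implementation; the layer sizes and function names are hypothetical.

```python
import numpy as np

def sigmoid(x):
    # Elementwise logistic activation (computed per pixel in the shader version).
    return 1.0 / (1.0 + np.exp(-x))

def forward_layer(W, b, X):
    """One neural-network layer applied to a batch of inputs.

    W: (out, in) weight matrix, b: (out,) bias vector,
    X: (in, batch) matrix whose columns are the input vectors.
    The single W @ X matrix-matrix product replaces `batch`
    separate vector-matrix products, which is the parallelism
    the paper exploits on the GPU.
    """
    return sigmoid(W @ X + b[:, None])

# Hypothetical sizes for illustration.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
b = rng.standard_normal(4)
X = rng.standard_normal((3, 8))   # 8 input vectors batched as columns

Y = forward_layer(W, b, X)
print(Y.shape)  # (4, 8): one output column per input vector
```

Processing each input column separately gives the same result; the batched form simply exposes all of the independent multiply-accumulates to the hardware at once.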



Cite this article
[IEEE Style]
K. S. Oh and K. C. Jung, "Implementation of Neural Networks using GPU," The KIPS Transactions: Part B, vol. 11, no. 6, pp. 735-742, 2004. DOI: 10.3745/KIPSTB.2004.11.6.735.

[ACM Style]
Kyoung Su Oh and Kee Chul Jung. 2004. Implementation of Neural Networks using GPU. The KIPS Transactions: Part B, 11, 6, (2004), 735-742. DOI: 10.3745/KIPSTB.2004.11.6.735.