The Inferior Olive (IO) in the brain, in conjunction with the cerebellum, is responsible for crucial sensorimotor-integration functions in humans. In this paper, we simulate a computationally challenging IO neuron model, consisting of three compartments per neuron, in a network arrangement on GPU platforms. Several GPU platforms based on the two latest NVIDIA GPU architectures (Fermi, Kepler) have been used to simulate large-scale IO-neuron networks. These networks have been ported to 4 diverse GPU platforms, and the implementation has been optimized, achieving a 3x speedup over the unoptimized version. The effects of GPU L1-cache configuration, thread block size, and the application's numerical precision on performance have been evaluated, and the best configurations have been chosen. Overall, a maximum speedup of 160x has been achieved with respect to a reference CPU platform.
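The sketch below is a minimal illustration (not the authors' code) of the three tuning knobs the abstract names: L1-cache preference, thread block size, and numerical precision. The kernel body, network size, time step, and all identifiers are hypothetical placeholders; the actual three-compartment IO model is far more involved.

```cuda
// Minimal CUDA sketch of the tuning parameters discussed in the abstract.
// The "neuron update" is a toy placeholder, not the three-compartment IO model.

#include <cstdio>
#include <cuda_runtime.h>

// Numerical precision selected at compile time (e.g. with -DUSE_DOUBLE),
// one of the factors whose performance impact was evaluated.
#ifdef USE_DOUBLE
typedef double real_t;
#else
typedef float real_t;
#endif

// Placeholder per-neuron state update; the real model integrates three
// coupled compartments (dendrite, soma, axon) per neuron.
__global__ void neuronStepKernel(real_t* state, int numNeurons, real_t dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < numNeurons) {
        // Toy leaky relaxation toward zero stands in for the compartment dynamics.
        state[i] += dt * (-state[i]);
    }
}

int main() {
    const int numNeurons = 1 << 16;        // hypothetical network size
    const int blockSize  = 128;            // thread block size: a tuning parameter
    const real_t dt      = (real_t)0.025;  // hypothetical integration step

    real_t* d_state = nullptr;
    cudaMalloc(&d_state, numNeurons * sizeof(real_t));
    cudaMemset(d_state, 0, numNeurons * sizeof(real_t));

    // Prefer a larger L1 cache over shared memory for this kernel,
    // one of the L1 configurations available on Fermi/Kepler GPUs.
    cudaFuncSetCacheConfig(neuronStepKernel, cudaFuncCachePreferL1);

    int gridSize = (numNeurons + blockSize - 1) / blockSize;
    for (int step = 0; step < 1000; ++step) {
        neuronStepKernel<<<gridSize, blockSize>>>(d_state, numNeurons, dt);
    }
    cudaDeviceSynchronize();

    printf("Simulated %d neurons for 1000 steps\n", numNeurons);
    cudaFree(d_state);
    return 0;
}
```

In practice, the block size and cache preference would be swept over candidate values and the fastest configuration retained, which is the kind of evaluation the abstract describes.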

hdl.handle.net/1765/85829
2015 Design, Automation and Test in Europe Conference and Exhibition (DATE 2015)
Department of Neuroscience

Nguyen, H. A. D., Al-Ars, Z., Smaragdos, G., & Strydis, C. (2015). Accelerating complex brain-model simulations on GPU platforms. Presented at the 2015 Design, Automation and Test in Europe Conference and Exhibition (DATE 2015). Retrieved from http://hdl.handle.net/1765/85829