Exploring complex brain-simulation workloads on multi-GPU deployments
In silico brain simulations are the de facto tools computational neuroscientists use to understand large-scale and complex brain-function dynamics. Current brain simulators do not scale efficiently to large problem sizes (e.g., >100,000 neurons) when simulating biophysically complex neuron models. The goal of this work is to explore the use of true multi-GPU acceleration through NVIDIA's GPUDirect technology on computationally challenging brain models and to assess their scalability. The brain model used is a state-of-the-art, extended Hodgkin-Huxley, biophysically meaningful, three-compartmental model of the inferior-olivary nucleus. The Hodgkin-Huxley model is the most widely adopted conductance-based neuron representation, and thus the results from simulating this representative workload are relevant to many other brain experiments. Not only the actual network-simulation times but also the network-setup times were taken into account when designing and benchmarking the multi-GPU version, an aspect often ignored in similar previous work. Network sizes ranging from 65K to 2M cells, with 10 and 1,000 synapses per neuron, were executed on 8, 16, 24, and 32 GPUs. Without loss of generality, simulations were run for 100 ms of biological time. Findings indicate that communication overheads do not dominate overall execution and that scaling up the network size remains computationally tractable. This scalable design proves that large-network simulations of complex neural models are possible using a multi-GPU design with GPUDirect.
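For readers unfamiliar with the workload, the classical Hodgkin-Huxley formalism referenced above describes the membrane potential of each (compartment of a) neuron via a current-balance equation; the paper's three-compartmental, extended variant builds on this standard form (shown here only as background, not as the paper's exact model):

```latex
C_m \frac{dV}{dt} = I_{\mathrm{ext}}
  - \bar{g}_{\mathrm{Na}}\, m^3 h \,(V - E_{\mathrm{Na}})
  - \bar{g}_{\mathrm{K}}\, n^4 \,(V - E_{\mathrm{K}})
  - \bar{g}_{L}\,(V - E_{L})
```

Here \(m\), \(h\), and \(n\) are voltage-dependent gating variables, each governed by its own ODE, which is what makes the model computationally demanding at scale and a representative conductance-based benchmark for GPU acceleration.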
| Keywords | Multi-GPU, Multi-node, Neural networks |
| Persistent URL | dx.doi.org/10.1145/3371235, hdl.handle.net/1765/123823 |
| Journal | ACM Transactions on Architecture and Code Optimization |
Van Der Vlag, M.A. (Michiel A.), Smaragdos, G., Al-Ars, Z., & Strydis, C. (2019). Exploring complex brain-simulation workloads on multi-GPU deployments. ACM Transactions on Architecture and Code Optimization, 16(4). doi:10.1145/3371235