Highlight

Novel Parallel Algorithms for Fast Multi-GPU-based Generation of Massive Scale-free Networks

Fig. 1. A visualization of a network generated by cuPPA using 10000 vertices.

Achievement

A novel parallel algorithm is presented for generating random scale-free networks using the preferential attachment model. The algorithm, named cuPPA, is custom-designed for the single-instruction, multiple-data (SIMD) style of parallel processing supported by modern processors such as graphics processing units (GPUs). To the best of our knowledge, our algorithm is the first to exploit GPUs for this problem, and it is also the fastest implementation available today for generating scale-free networks using the preferential attachment model. A detailed performance study is presented to understand the scalability and runtime characteristics of cuPPA. Also presented is another version of the algorithm, called cuPPA-Hash, tailored for multiple GPUs. On a single GPU, the original cuPPA algorithm delivers the best performance, but it is challenging to port to a multi-GPU implementation; for multi-GPU platforms, cuPPA-Hash is therefore used as the parallel algorithm, achieving perfect linear speedup on up to 4 GPUs. In one of the best cases, when executed on an NVIDIA GeForce 1080 GPU, the original cuPPA generates a scale-free network of two billion edges in less than 3 seconds. On multi-GPU platforms, cuPPA-Hash generates a scale-free network of 16 billion edges in less than 7 seconds on a machine consisting of 4 NVIDIA Tesla P100 GPUs.
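To make the underlying model concrete, the classical preferential attachment process can be sketched sequentially. The sketch below is a minimal Python illustration of the model only, not the authors' CUDA implementation; the function and parameter names are ours. It uses the standard trick that sampling a uniformly random entry of a flat edge-endpoint array is equivalent to degree-proportional sampling, since each vertex appears in that array once per incident edge.

```python
import random

def preferential_attachment(n, d, seed=None):
    """Generate a scale-free graph on n vertices: each new vertex
    attaches to d existing vertices chosen in proportion to degree.

    Picking a uniform entry of `endpoints` is degree-proportional
    sampling, because a vertex of degree k appears k times there.
    """
    rng = random.Random(seed)
    endpoints = []  # flat list of all edge endpoints seen so far
    edges = []
    # Seed graph: a clique on the first d+1 vertices.
    for u in range(d + 1):
        for v in range(u):
            edges.append((u, v))
            endpoints.extend((u, v))
    # Each later vertex attaches to d distinct existing vertices.
    for u in range(d + 1, n):
        targets = set()
        while len(targets) < d:
            targets.add(rng.choice(endpoints))
        for v in targets:
            edges.append((u, v))
            endpoints.extend((u, v))
    return edges

edges = preferential_attachment(1000, 4, seed=42)
```

The sequential loop above has a strict dependency between iterations, which is precisely what a GPU algorithm like cuPPA must break to exploit SIMD parallelism.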

Significance and Impact

Networks are prevalent in many complex systems, such as circuits, chemical compounds, protein structures, biological networks, social networks, the Web, and XML documents. As these complex systems grow larger, the ability to generate progressively larger random networks becomes all the more important. It is well known that the structure of large networks is fundamentally different from that of small networks, and many patterns, such as communities, emerge only in massive networks. Sequential algorithms can generate networks with millions of edges in a reasonable amount of time; however, generating networks with billions of edges can take a prohibitively large amount of time. This motivates the need for efficient parallel algorithms for generating such networks. Graphics processing units (GPUs) are a cost-effective, energy-efficient, and widely available parallel processing platform, used pervasively in areas such as scientific computation, complex simulations, big data analytics, machine learning, and data mining. However, there is a lack of GPU-based graph/network generators, especially for scale-free networks such as those based on the preferential attachment model. In this paper, we extend cuPPA and present cuPPA-Hash, a hash-based implementation of cuPPA that scales to multiple GPUs. Using cuPPA-Hash, we are able to generate a network with 16 billion edges in just 7 seconds. To the best of our knowledge, this is the first and the fastest GPU-based algorithm for generating networks using the exact preferential attachment model.

Research Details

  • Designed and implemented a GPU-based parallel algorithm for generating scale-free networks
  • Extended the algorithm with multi-GPU capability
  • The algorithm is efficient and scalable to a large number of vertices
  • Generated networks have the same properties as those generated by sequential algorithms

Overview

A novel GPU-based algorithm, named cuPPA, has been presented together with a detailed performance study. Its combination of scale and speed is demonstrated by the ability to generate networks with up to two billion edges in under three seconds of wall clock time. The algorithm is customizable with respect to the structure of the network by varying a single parameter: a probability measure that captures the preference style of new edges in the preferential attachment model. A high degree of concurrency in the generator's workload per thread or processor is observed when that probability is a very small fraction greater than zero.
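One common way to express such a probability-parameterized preferential attachment is the copy model: for each new edge, with probability p the endpoint is a uniformly random earlier vertex, and otherwise the endpoint is "copied" from a previously generated edge, which biases selection toward high-degree vertices. The sequential Python sketch below follows that formulation under our own naming and may differ in detail from the paper's exact parameterization; a parallel algorithm such as cuPPA must resolve the copy dependencies concurrently rather than in order.

```python
import random

def copy_model_edges(n, d, p, seed=None):
    """Sketch of a copy-model preferential attachment generator.

    target[i] holds the chosen endpoint of edge i; vertex u owns edges
    u*d .. u*d+d-1. Vertex 0's edges are self-loops so that every copy
    lookup resolves to something.
    """
    rng = random.Random(seed)
    target = [0] * (n * d)
    for u in range(1, n):
        for k in range(d):
            i = u * d + k
            if rng.random() < p:
                target[i] = rng.randrange(u)   # direct uniform choice
            else:
                j = rng.randrange(u * d)       # pick a random earlier edge
                target[i] = target[j]          # copy its endpoint
    return [(i // d, t) for i, t in enumerate(target)]
```

Note that when p is small, most edges are copies of earlier edges, producing long dependency chains; how such chains are resolved in parallel is the crux of the GPU design.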

Last Updated: May 28, 2020 - 4:03 pm