The Future Is Big Graphs: A Community View on Graph Processing Systems


By Sherif Sakr, Angela Bonifati, Hannes Voigt, Alexandru Iosup, Khaled Ammar, Renzo Angles, Walid Aref, Marcelo Arenas, Maciej Besta, Peter A. Boncz, Khuzaima Daudjee, Emanuele Della Valle, Stefania Dumbrava, Olaf Hartig, Bernhard Haslhofer, Tim Hegeman, Jan Hidders, Katja Hose, Adriana Iamnitchi, Vasiliki Kalavri, Hugo Kapp, Wim Martens, M. Tamer Özsu, Eric Peukert, Stefan Plantikow, Mohamed Ragab, Matei R. Ripeanu, Semih Salihoglu, Christian Schulz, Petra Selmer, Juan F. Sequeda, Joshua Shinavier

Communications of the ACM,
September 2021,
Vol. 64 No. 9, Pages 62-71



[Illustration: checkerboard patterns with red pins. Credit: Alli Torban]

Graphs are, by nature, 'unifying abstractions' that can leverage interconnectedness to represent, explore, predict, and explain real- and digital-world phenomena. Although real users and consumers of graph instances and graph workloads understand these abstractions, future problems will require new abstractions and systems. What must happen in the next decade for big graph processing to continue to succeed?


Key Insights


We are witnessing an unprecedented growth of interconnected data, which underscores the vital role of graph processing in our society. Instead of a single, exemplary ("killer") application, we see big graph processing systems underpinning many emerging but already complex and diverse data management ecosystems, in many areas of societal interest.a

To name just a few recent, notable examples, the importance of this field for practitioners is evidenced by the large number (more than 60,000) of people registeredb to download the Neo4j book Graph Algorithmsc in just over one-and-a-half years, and by the enormous interest in the use of graph processing in the artificial intelligence (AI) and machine learning (ML) fields.d Moreover, the timely Graphs 4 COVID-19 initiativee is evidence of the importance of big graph analytics in alleviating the pandemic.

Academics, start-ups, and even big tech companies such as Google, Facebook, and Microsoft have provided diverse systems for managing and processing the growing presence of big graphs. Google's PageRank (late 1990s) showcased the power of Web-scale graph processing and motivated the development of the MapReduce programming model, which was originally used to simplify the construction of the data structures used to handle searches, but has since been used extensively outside of Google to implement algorithms for large-scale graph processing.
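The idea behind PageRank can be sketched as a simple power iteration; the toy code below is a minimal illustration of the algorithm's principle (a dict-of-lists adjacency representation, a hypothetical `pagerank` helper), not Google's implementation or its MapReduce formulation.

```python
def pagerank(graph, damping=0.85, iterations=50):
    """Power-iteration PageRank over a dict mapping node -> list of out-links."""
    nodes = list(graph)
    rank = {v: 1.0 / len(nodes) for v in nodes}
    for _ in range(iterations):
        # every node keeps a base share, the rest flows along out-links
        new_rank = {v: (1.0 - damping) / len(nodes) for v in nodes}
        for v, out in graph.items():
            if out:
                share = damping * rank[v] / len(out)
                for w in out:
                    new_rank[w] += share
            else:
                # dangling node: spread its rank uniformly over all nodes
                for w in nodes:
                    new_rank[w] += damping * rank[v] / len(nodes)
        rank = new_rank
    return rank

web = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
ranks = pagerank(web)
```

The iteration preserves total rank mass, so the scores form a probability distribution over pages.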

Motivated by scalability, the 2010 Google Pregel "think-like-a-vertex" model enabled distributed PageRank computation, while Facebook, Apache Giraph, and ecosystem extensions support more elaborate computational models (such as task-based and not always distributed) and data models (such as diverse, possibly streamed, possibly wide-area data sources) useful for social network data. At the same time, an increasing number of use cases revealed RDBMS performance problems in managing highly connected data, motivating various startups and innovative products, such as Neo4j, Sparksee, and the current Amazon Neptune. Microsoft Trinity and later Azure SQL DB provided an early distributed database-oriented approach to big graph management.
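The vertex-centric model can be sketched on a single machine: in each superstep, every vertex with incoming messages updates its value and sends messages to its neighbors, until no messages remain. This is an illustrative toy API (not Giraph's or Pregel's actual interface), shown here computing single-source shortest paths.

```python
INF = float("inf")

def pregel(graph, values, inbox, compute, max_supersteps=30):
    """Toy synchronous vertex-centric loop; graph maps v -> [(neighbor, edge_cost)]."""
    for _ in range(max_supersteps):
        outbox = {v: [] for v in graph}
        for v in graph:
            if inbox[v]:  # a vertex is active only when it has messages
                values[v], messages = compute(v, values[v], inbox[v], graph[v])
                for target, msg in messages:
                    outbox[target].append(msg)
        inbox = outbox
        if not any(outbox.values()):  # global halt: no messages in flight
            break
    return values

def sssp(v, dist, msgs, neighbors):
    """Vertex program: adopt the best known distance and propagate improvements."""
    best = min([dist] + msgs)
    if best < dist:
        return best, [(w, best + cost) for w, cost in neighbors]
    return dist, []

g = {"s": [("a", 1), ("b", 4)], "a": [("b", 1)], "b": []}
values = {v: INF for v in g}
inbox = {v: [] for v in g}
inbox["s"] = [0]  # seed message activates the source vertex
dist = pregel(g, values, inbox, sssp)
```

In a real system, the same vertex program runs partitioned across workers, with the message exchange implemented over the network.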

The diversity of models and systems led initially to the fragmentation of the market and a lack of clear direction for the community. Opposing this trend, we see promising efforts to bring together the programming languages, ecosystem structure, and performance benchmarks. As we have argued, there is no killer application that could help unify the community.

What must happen in the next decade for big graph processing to continue to succeed?

Co-authored by a representative sample of the community (see the sidebar, "A Joint Effort by the Computer Systems and Data Management Communities"), this article addresses the questions: What will the next-decade big-graph processing systems look like from the perspectives of the data management and the large-scale-systems communities?f What can we say today about the guiding design principles of these systems in the next 10 years?

Figure 1 outlines the complex pipeline of future big graph processing systems. Data flows in from diverse sources (already graph-modeled as well as non-graph-modeled) and is persisted, managed, and manipulated with online transactional processing (OLTP) operations, such as insertion, deletion, updating, filtering, projection, joining, uniting, and intersecting. The data is then analyzed, enriched, and condensed with online analytical processing (OLAP) operations, such as grouping, aggregating, slicing, dicing, and rollup. Finally, it is disseminated and consumed by a variety of applications, including machine learning (such as ML libraries and processing frameworks); business intelligence (BI), such as report-generating and planning tools; scientific computing; visualization; and augmented reality (for inspection and interaction by the user). Note that this is not typically a purely linear process and hybrid OLTP/OLAP processes can emerge. Considerable complexity stems from (intermediate) results being fed back into early-process steps, as indicated by the blue arrows.


Figure 1. Illustration of a complex data pipeline for graph processing.

For example, to study coronaviruses and their impact on human and animal populations (for instance, the COVID-19 disease), the pipeline depicted in Figure 1 could be purposed for two major kinds of analysis: network-based 'omics' and drug-related search, and network-based epidemiology and spread-prevention. For the former, the pipeline would have the following steps:

  1. Initial genome sequencing results in identifying related diseases.
  2. Text (non-graph data) and structured (database) searches help identify genes related to the disease.
  3. A network treatment coupled with various kinds of simulations could reveal different drug targets and viable inhibitors, and could result in effective prioritization of usable drugs and treatments.

For the latter, social media and location data, and data from other privacy-sensitive sources, could be combined into social interaction graphs, which could be traversed to establish super-spreaders and the super-spreading events related to them, which could result in the establishment of prevention policies and containment actions. However, the current generation of graph processing technology cannot support such a complex pipeline.

For example, on the COVID-19 knowledge graph,g useful queries can be posed against individual graphsh examining the papers, patents, genes, and most influential COVID-19 authors. However, analyzing several data sources in a full-fledged graph processing pipeline across multiple graph datasets, as illustrated in Figure 1, raises many challenges for current graph database technology. In this article, we formulate these challenges and build our vision for next-generation, big-graph processing systems by focusing on three major aspects: abstractions, ecosystems, and performance. We present the anticipated data models and query languages, and the inherent relationships among them in a lattice of abstractions, and discuss these abstractions and the ability of lattice structures to accommodate future graph data models and query languages. This will solidify the understanding of the fundamental concepts of graph data extraction, exchange, processing, and analysis, as illustrated in Figure 1.

A second important aspect, as we will discuss, is the vision of an ecosystem governing big graph processing systems and enabling the tuning of various components, such as OLAP/OLTP operations, workloads, standards, and performance needs. These aspects make big graph processing systems more complex than what was seen in the last decade. Figure 1 gives a high-level view of this complexity in terms of inputs, outputs, processing needs, and final consumption of graph data.

A third aspect is to understand and control performance in these future ecosystems. We have significant performance challenges to overcome, from methodological aspects about performing meaningful, tractable, and reproducible experiments to practical aspects regarding the trade-off of scalability with portability and interoperability.



Abstractions

Abstractions are widely used in programming languages, computational systems, and database systems, among others, to hide technical aspects in favor of more user-friendly, domain-oriented logical views. Currently, users need to choose from a large spectrum of graph data models that are similar, but differ in terms of expressiveness, cost, and intended use for querying and analytics. This 'abstraction soup' poses significant challenges to be solved in the future.

Understanding data models. Today, graph data management confronts many data models (directed graphs, RDF, variants of property graphs, etc.) with key challenges: deciding which data model to choose per use case and mastering the interoperability of data models where data from different models is combined (as in the left-hand side of Figure 1).

Both challenges require a deeper understanding of data models regarding:

  1. How do humans conceptualize data and data operations? How do data models and their respective operators support or hinder the human thought process? Can we measure how "natural" or "intuitive" data models and their operators are?
  2. How can we quantify, compare, and (partly) translate the (modeling and operational) expressive power of data models? Concretely, Figure 2 illustrates a lattice for a selection of graph data models. Read bottom-up, this lattice shows which feature has to be added to a graph data model to obtain a model of richer expressiveness. The figure also underlines the diversity of data models used in theory, algorithms, standards, and relatedi industrial systems. How can we extend this comparative understanding across multiple data model families, such as graph, relational, or document? What are the costs and benefits of choosing one model over another?
  3. Interoperability between different data models can be achieved through mappings (semantic assertions across concepts in different data models) or with direct translations (for instance, W3C's R2RML). Are there universal methods or building blocks for expressing such mappings (category theory, for example)?
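Point (3) can be made concrete with a toy direct translation between model families. The sketch below maps RDF-style triples onto a property graph under assumed conventions (literal-valued triples become vertex properties, IRI-valued triples become labeled edges); it is illustrative only, in the spirit of such translations, and is not the R2RML algorithm (which maps relational data to RDF).

```python
def triples_to_property_graph(triples):
    """Toy mapping: (subject, predicate, object) triples -> (vertices, edges).

    Convention assumed here: objects that look like IRIs (":name") become
    target vertices of labeled edges; other objects become vertex properties.
    """
    vertices, edges = {}, []
    for s, p, o in triples:
        vertices.setdefault(s, {})
        if isinstance(o, str) and o.startswith(":"):  # IRI-like object -> edge
            vertices.setdefault(o, {})
            edges.append((s, p, o))
        else:  # literal object -> property on the subject vertex
            vertices[s][p] = o
    return vertices, edges

triples = [
    (":alice", "name", "Alice"),
    (":alice", "knows", ":bob"),
    (":bob", "name", "Bob"),
]
vertices, edges = triples_to_property_graph(triples)
```

Even this toy shows why such mappings need semantic care: the choice of what becomes a property versus an edge is a modeling decision, not a mechanical one.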


Figure 2. Example lattice shows graph data model variants with their model characteristics.8

Studying (1) requires significant investigation of how users work with data and data models, which is uncommon in the data management field and should be carried out collaboratively with other fields, such as human-computer interaction (HCI). Work on HCI and graphs exists, for example, in the HILDA workshops at SIGMOD. However, these do not yet explore the search space of graph data models.

Studying (2) and (3) can build on existing work in database theory, but could also leverage findings from neighboring computer science communities on comparison, featurization, graph summarization, visualization, and model transformation. For example, graph summarization22 has been widely exploited to provide succinct representations of graph properties in graph mining,1 but it has seldom been used by graph processing systems to make processing more efficient, more effective, and more user centered. For example, approximate query processing for property graphs cannot rely on sampling as done by its relational counterpart and may need to use quotient summaries for query answering.
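A quotient summary in the simplest possible form can be sketched as follows: vertices that are equivalent under some criterion (here, assumed to be "same label") collapse into one summary node, and edges collapse into counted super-edges. This is a minimal toy under that assumption, not the summarization technique of any particular cited system.

```python
from collections import Counter

def quotient_summary(labels, edges):
    """Collapse a labeled graph into its label-quotient.

    labels: vertex -> label; edges: list of (u, v) pairs.
    Returns counts of vertices per label and of edges per label pair.
    """
    super_nodes = Counter(labels.values())
    super_edges = Counter((labels[u], labels[v]) for u, v in edges)
    return super_nodes, super_edges

labels = {1: "Paper", 2: "Paper", 3: "Author", 4: "Author"}
edges = [(3, 1), (3, 2), (4, 2)]
nodes, sedges = quotient_summary(labels, edges)
```

An approximate query processor could consult such counts (for example, "how many Author-to-Paper edges exist?") instead of scanning the full graph.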

Logic-based and declarative formalisms. Logic provides a unifying formalism for expressing queries, optimizations, integrity constraints, and integration rules. Starting from Codd's seminal insight relating logical formulae to relational queries,12 many first-order (FO) logic fragments have been used to formally define query languages with desirable properties such as decidable evaluation. Graph query languages are essentially a syntactic variant of FO augmented with recursive capabilities.


Logic provides a yardstick for reasoning about graph queries and graph constraints. Indeed, a promising line of research is the application of formal tools, such as model checking, theorem proving,15 and testing to establish the functional correctness of complex graph processing systems, in general, and of graph database systems, in particular.

The impact of logic is pivotal not only for database languages, but also as a foundation for combining logical reasoning with statistical learning in AI. Logical reasoning derives explicit conclusions about a piece of data by logical deduction. Statistical learning derives such conclusions by learning statistical models on known data and applying them to new data. Both leverage the topological structure of graphs (ontologies and knowledge graphsj or graph embeddings such as Node2vecd) to obtain better insights than on non-connected data. However, the two currently happen in isolation. Combining both techniques can lead to important advancements.

For example, deep learning (unsupervised feature learning) applied to graphs allows us to infer structural regularities and obtain meaningful representations for graphs that can be further leveraged by indexing and querying mechanisms in graph databases and exploited for logical reasoning. As another example, probabilistic models and causal relationships can be naturally encoded in property graphs and are the basis of advanced graph neural networks.k Property graphs allow us to synthesize more accurate models for ML pipelines, thanks to their inherent expressivity and embedded domain knowledge.

These considerations unveil important open questions, as follows: How can statistical learning, graph processing, and reasoning be combined and integrated? Which underlying formalisms make that possible? How can we weigh between the two mechanisms?

Algebraic operators for graph processing. Currently, there is no standard graph algebra. The outcome of the Graph Query Language (GQL) Standardization Project could influence the design of a graph algebra alongside existing and emerging use cases.25 However, next-generation graph processing systems should address questions about their algebraic components.

What are the fundamental operators of this algebra in comparison with other algebras (relation, group, quiver or path, incidence, or monadic algebra comprehensions)? What core graph algebra should graph processing systems support? Are there graph analytical operators to include in this algebra? Can this graph algebra be combined and integrated with an algebra of types to make type systems more expressive and to facilitate type checking?

A "relational-like" graph algebra able to express all first-order queries11 and enhanced with a graph pattern-matching operator16 seems like a good starting point. However, the most interesting graph-oriented queries are navigational, such as reachability queries, and cannot be expressed with the limited recursion of relational algebra.3,8 Moreover, relational algebra is a closed algebra; that is, input(s) and output of each operator is a relation, which makes relational algebra operators composable. Should we aim for a closed graph algebra that encompasses both relations and graphs?
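The recursion point can be made concrete: reachability is the fixpoint of repeatedly joining newly derived pairs with the edge relation, which no fixed number of relational joins can express for graphs of unbounded diameter. Below is a minimal sketch in the style of semi-naive Datalog evaluation (illustrative code, not an engine's implementation).

```python
def transitive_closure(edges):
    """Semi-naive fixpoint: derive reach(x, z) from reach(x, y) and edge(y, z)."""
    closure = set(edges)
    delta = set(edges)  # pairs derived in the previous round
    while delta:
        # join only the newly derived pairs with the base edge relation
        new = {(x, z) for (x, y) in delta for (y2, z) in edges
               if y == y2 and (x, z) not in closure}
        closure |= new
        delta = new
    return closure

edges = {("a", "b"), ("b", "c"), ("c", "d")}
reach = transitive_closure(edges)
```

The number of iterations grows with the longest shortest path, which is exactly why a closed algebra for graphs needs a recursion (or path) operator rather than a bounded join expression.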

Current graph query engines combine algebra operators and ad hoc graph algorithms into complex workloads, which complicates implementation and affects performance. An implementation based on a single algebra also seems utopian. A query language with full Turing Machine capabilities (like a programming language), however, entails tractability and feasibility problems.2 Algebraic operators that work in both centralized and distributed environments, and that can be exploited by both graph algorithms and ML models such as GNNs, graphlets, and graph embeddings, would be highly desirable for the future.



Ecosystems

Ecosystems behave differently from mere systems of systems; they couple many systems developed for different purposes and with different processes. Figure 1 exemplifies the complexity of a graph processing ecosystem through high-performance OLAP and OLTP pipelines working together. What are the ecosystem-related challenges?

Workloads in graph processing ecosystems. Workloads affect both the functional requirements (what a graph processing ecosystem will be able to do) and the non-functional (how well). Survey data25 points to pipelines, as in Figure 1: complex workflows, combining heterogeneous queries and algorithms, managing and processing diverse datasets, with characteristics summarized in the sidebar "Known Properties of Graph Processing Workloads."

In Figure 1, graph processing links to general processing, including ML, as well as to domain-specific processing ecosystems, such as simulation and numerical methods in science and engineering, aggregation and modeling in business analytics, and ranking and recommendation in social media.

Standards for data models and query languages. Graph processing ecosystem standards can provide a common technical foundation, thereby increasing the mobility of applications, tooling, developers, users, and stakeholders. Standards for both OLTP and OLAP workloads should standardize the data model, the data manipulation and data definition language, and the exchange formats. They should be easily adoptable by existing implementations and also enable new implementations in the SQL-based technological landscape.

It is important that standards embrace existing industry practices by following widely used graph query languages. To this end, ISO/IEC started the GQL Standardization Project in 2019 to define GQL as a new graph query language. GQL is backed by 10 national standards bodies with representatives from major industry vendors and support from the property graph community as represented by the Linked Data Benchmark Council (LDBC).l

With an initial focus on transactional workloads, GQL will support composable graph querying over multiple, possibly overlapping, graphs using enhanced regular path queries (RPQs),3 graph transformation (views), and graph updating capabilities. GQL enhances RPQs with pattern quantification, ranking, and path-aggregation. Syntactically, GQL combines SQL style with the visual graph patterns pioneered by Cypher.14

In the long term, it could be useful to standardize building blocks of graph algorithms, analytical APIs and workflow definitions, graph embedding techniques, and benchmarks.28 However, broad adoption of these aspects requires further maturation.

Reference architecture. We identify the challenge of defining a reference architecture for big graph processing. The early definition of a reference architecture has greatly benefited the discussion around the design, development, and deployment of cloud and grid computing solutions.13

For big graph processing, our main insight is that many graph processing ecosystems match the common reference architecture of datacenters,18 from which Figure 3 derives. The Spark ecosystem depicted here is one among thousands of possible instantiations. The challenge is to capture the evolving graph processing field.


Figure 3. A reference architecture for graph processing ecosystems.

Beyond scale-up vs. scale-out. Many graph platforms focus on either scale-up or scale-out. Each has relative advantages.27 Beyond merely reconciling scale-up and scale-out, we envision a scalability continuum: given a diverse workload, the ecosystem would automatically decide how to run it, and on what kind of heterogeneous infrastructure, meeting service-level agreements (SLAs).

Numerous mechanisms and techniques exist to implement scale-up and scale-out decisions, such as data and work partitioning, migration, offloading, replication, and elastic scaling. All decisions can be taken statically or dynamically, using diverse optimization and learning techniques.
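As a minimal illustration of one such mechanism, here is a sketch of static hash-based data partitioning of an edge list across workers (toy code, not drawn from any of the systems above); production systems additionally weigh locality, load balance, and SLAs.

```python
def hash_partition(edges, num_workers):
    """Assign each edge to a worker by hashing its source vertex,
    so that all out-edges of a vertex are co-located."""
    parts = [[] for _ in range(num_workers)]
    for u, v in edges:
        parts[hash(u) % num_workers].append((u, v))
    return parts

edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "a")]
parts = hash_partition(edges, 2)
```

Co-locating a vertex's out-edges suits vertex-centric computation, but skewed degree distributions can overload a single worker, which is precisely what vertex-cut schemes address.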

Dynamic and streaming aspects. Future graph processing ecosystems should address dynamic and streaming graph data. A dynamic graph extends the standard notion of a graph to account for updates (insertions, changes, deletions) such that the current and previous states can be seamlessly queried. Streaming graphs can grow indefinitely as new data arrives. They are typically unbounded, thus the underlying systems are unable to keep the entire graph state. Sliding window semantics6 allow the two notions to be unified, with insertions and deletions being considered as arrivals and removals from the window.
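The sliding-window unification can be sketched directly: only edges whose timestamp falls within the last `window` time units belong to the current graph state, so arrivals act as insertions and expirations act as deletions. This is an illustrative toy (hypothetical `WindowedGraph` class), not the semantics of any particular streaming engine.

```python
from collections import deque

class WindowedGraph:
    """Maintains the graph induced by edges arriving in the last `window` units."""

    def __init__(self, window):
        self.window = window
        self.edges = deque()  # (timestamp, u, v), in arrival (time) order
        self.adj = {}         # adjacency restricted to the current window

    def insert(self, t, u, v):
        self.edges.append((t, u, v))
        self.adj.setdefault(u, set()).add(v)
        self._expire(t)

    def _expire(self, now):
        # deletions are implicit: edges falling out of the window are removed
        while self.edges and self.edges[0][0] <= now - self.window:
            _, u, v = self.edges.popleft()
            self.adj[u].discard(v)

    def neighbors(self, u):
        return self.adj.get(u, set())

g = WindowedGraph(window=10)
g.insert(0, "a", "b")
g.insert(5, "a", "c")
g.insert(12, "b", "c")  # the edge (a, b) from t=0 has now expired
```

Queries such as neighborhood lookups then always reflect the windowed state, without the system ever holding the unbounded stream.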

Since current stream processing technologies are relatively simple, for instance aggregations and projections as in industrial graph processing libraries (such as Gelly on Apache Flink), the need for "complex graph data streams" is evident, along with more advanced graph analytics and ML ad hoc operators. Another research challenge is to identify the graph-query processing operators that can be evaluated on dynamic and streaming graphs while accounting for recursive operators7,23 and path-oriented semantics, as required by standard query languages such as GQL and G-Core.4

Graph processing platforms are also dynamic; discovering, understanding, and controlling the dynamic phenomena that occur in complex graph processing ecosystems is an open challenge. As graph processing ecosystems become more mainstream and are embedded in larger data-processing pipelines, we expect to increasingly observe known systems phenomena, such as performance variability, the presence of cascading failures, and autoscaling resources. What new phenomena will emerge? What programming abstractions20 and systems techniques can respond to them?



Performance

Graph processing raises unique performance challenges, from the lack of a widely used performance metric other than response time to the methodological challenge of comparing graph processing systems across architectures and tuning processes to performance portability and reproducibility. Such challenges become much more daunting for graph processing ecosystems.

Benchmarks, performance measurement, and methodological aspects. Graph processing suffers from methodological issues similar to those of other computing disciplines.5,24 Running comprehensive graph processing experiments, especially at scale, lacks tractability9—that is, the ability to implement, deploy, and experiment within a reasonable amount of time and cost. As in other computing disciplines,5,24 we need new, reproducible, experimental methodologies.

Graph processing also raises unique challenges in performance measurement and benchmarking related to complex workloads and data pipelines (Figure 1). Even seemingly minute HPAD variations, for example in the graph's degree distribution, can have significant performance implications.17,26 The lack of interoperability hinders fair comparisons and benchmarking. Indexing and sampling techniques could prove useful to improve and predict the runtime and performance of graph queries,8,21,30 challenging the communities of large-scale systems, data management, data mining, and ML.


Graph processing systems depend on complex runtimes that combine software and hardware platforms. It can be a daunting task to capture system-under-test performance—including parallelism, distribution, streaming vs. batch operation—and to test the operation of possibly hundreds of libraries, services, and runtime systems present in real-world deployments.

We envision a mix of approaches. As in other computing disciplines,5,24 we need new, reproducible experimental methodologies. Concrete questions arise: How can we facilitate fast yet meaningful performance testing? How can we define more faithful metrics for executing a graph algorithm, query, program, or workflow? How can we generate workloads with combined operations, covering temporal, spatial, and streaming aspects? How can we benchmark pipelines, including ML and simulation? We also need organizations such as the LDBC to curate benchmark sharing and to audit benchmark use in practice.

Specialization vs. portability and interoperability. There is much tension between specializing graph processing stacks for performance reasons and enabling productivity for the domain scientist, through portability and interoperability.

Specialization, through custom software and especially hardware acceleration, leads to significant performance improvements. Specialization to graph workloads, as noted in the sidebar, focuses on diversity and irregularitym in graph processing: sheer dataset scale (addressed by Pregel and later by the open source project Giraph), the (truncated) power-law-like distributions of vertex degrees (PowerGraph), localized and community-oriented updates (GraphChi), diverse vertex-degree distributions across datasets (PGX.D, PowerLyra), irregular or non-local vertex access (Mosaic), affinity to specialized hardware (the BGL family, HAGGLE, and more).

The high-performance computing domain has proposed specialized abstractions and C++ libraries for them, and high-performance, efficient runtimes across heterogeneous hardware. Examples include BGL,28 CombBLAS, and GraphBLAS. Data management approaches, including Neo4j, GEMS,10 and Cray's Urika, focus on convenient query languages such as SPARQL and Cypher to ensure portability. Ongoing work also focuses on (custom) accelerators.

Portability through reusable components seems promising, but no standard graph library or query language currently exists. More than 100 big graph processing systems exist, but they do not support portability: graph systems will soon need to support constantly evolving processes.

Lastly, interoperability means integrating graph processing into broader workflows with multi-domain tools. Integration with ML and data mining processes, and with simulation and decision-making instruments, seems vital but is not supported by existing frameworks.

A memex for big graph processing systems. Inspired by Vannevar Bush's 1940s concept of a personal memex, and by a 2010s specialization into a Distributed Systems Memex,19 we posit that it would be both interesting and useful to create a Big Graph Memex for collecting, archiving, and retrieving meaningful operational information about such systems. This could be beneficial for learning about and eliminating performance and related issues, for enabling more creative designs and extending automation, and for meaningful and reproducible testing, and thus serve as a building block of smart graph processing.



Conclusion

Graphs are a mainstay abstraction in today's data-processing pipelines. How can future big graph processing and database systems provide highly scalable, efficient, and diverse querying and analytical capabilities, as demanded by real-world requirements?

To tackle this question, we have undertaken a community approach. We started through a Dagstuhl Seminar and, soon after, formed the structured connections presented here. We have focused in this article on three interrelated elements: abstractions, ecosystems, and performance. For each of these elements, and across them, we have provided a view into what's next.

Only time can tell if our predictions provide useful directions to the community. In the meantime, join us in solving the problems of big graph processing. The future is big graphs.


Figure. Watch the authors discuss this work in the exclusive Communications video.



1. Aggarwal, C.C. and Wang, H. Managing and Mining Graph Data. Advances in Database Systems 40. Springer (2010).

2. Aho, A.V. and Ullman, J.D. Universality of data retrieval languages. In Proceedings of the 6th ACM SIGACT-SIGPLAN Symposium on Principles of Programming Languages (1979), 110–119.

3. Angles, R. et al. Foundations of modern query languages for graph databases. ACM Computing Surveys 50, 5 (2017), 68:1–68:40.

4. Angles, R. et al. G-CORE: A core for future graph query languages. SIGMOD Conf. (2018), 1421–1432.

5. Angriman, E. et al. Guidelines for experimental algorithmics: A case study in network analysis. Algorithms 12, 7 (2019), 127.

6. Babcock, B., Babu, S., Datar, M., Motwani, R., and Widom, J. Models and issues in data stream systems. PODS (2002), 1–16.

7. Bonifati, A., Dumbrava, S., and Gallego Arias, E.J. Certified graph view maintenance with regular datalog. Theory Pract. Log. Program. 18, 3–4 (2018), 372–389.

8. Bonifati, A., Fletcher, G.H.L., Voigt, H., and Yakovets, N. Querying Graphs. Synthesis Lectures on Data Management. Morgan & Claypool Publishers (2018).

9. Bonifati, A., Holubová, I., Prat-Pérez, A., and Sakr, S. Graph generators: State of the art and open challenges. ACM Comput. Surv. 53, 2 (2020), 36:1–36:30.

10. Castellana, V.G. et al. In-memory graph databases for web-scale data. IEEE Computer 48, 3 (2015), 24–35.

11. Chandra, A.K. Theory of database queries. PODS (1988), 1–9.

12. Codd, E.F. A relational model of data for large shared data banks. Commun. ACM 13, 6 (June 1970), 377–387.

13. Foster, I. and Kesselman, C. The Grid 2: Blueprint for a New Computing Infrastructure. Elsevier (2003).

14. Francis, N. et al. Cypher: An evolving query language for property graphs. SIGMOD Conf. (2018), 1433–1445.

15. Gonthier, G. et al. A machine-checked proof of the odd order theorem. Intern. Conf. on Interactive Theorem Proving (2013), 163–179.

16. He, H. and Singh, A.K. Graphs-at-a-time: Query language and access methods for graph databases. SIGMOD Conf. (2008), 405–418.

17. Iosup, A. et al. LDBC Graphalytics: A benchmark for large-scale graph analysis on parallel and distributed platforms. Proc. VLDB Endow. 9, 13 (2016), 1317–1328.

18. Iosup, A. et al. Massivizing computer systems: A vision to understand, design, and engineer computer ecosystems through and beyond modern distributed systems. ICDCS (2018), 1224–1237.

19. Iosup, A. et al. The AtLarge vision on the design of distributed systems and ecosystems. ICDCS (2019), 1765–1776.

20. Kalavri, V., Vlassov, V., and Haridi, S. High-level programming abstractions for distributed graph processing. IEEE Trans. Knowl. Data Eng. 30, 2 (2018), 305–324.

21. Leskovec, J. and Faloutsos, C. Sampling from large graphs. KDD (2006), 631–636.

22. Liu, Y., Safavi, T., Dighe, A., and Koutra, D. Graph summarization methods and applications: A survey. ACM Comput. Surv. 51, 3 (2018), 62:1–62:34.

23. Pacaci, A., Bonifati, A., and Özsu, M.T. Regular path query evaluation on streaming graphs. SIGMOD Conf. (2020), 1415–1430.

24. Papadopoulos, A.V. et al. Methodological principles for reproducible performance evaluation in cloud computing. IEEE Trans. Software Engineering (2020), 93–94.

25. Sahu, S. et al. The ubiquity of large graphs and surprising challenges of graph processing: Extended survey. VLDB J. 29, 2 (2020), 595–618.

26. Saleem, M. et al. How representative is a SPARQL benchmark? An analysis of RDF triplestore benchmarks. WWW Conf. (2019), 1623–1633.

27. Salihoglu, S. and Özsu, M.T. Response to "Scale up or scale out for graph processing." IEEE Internet Computing 22, 5 (2018), 18–24.

28. Siek, J.G., Lee, L.Q., and Lumsdaine, A. The Boost Graph Library: User Guide and Reference Manual. Addison-Wesley (2002).

29. Uta, A., Varbanescu, A.L., Musaafir, A., Lemaire, C., and Iosup, A. Exploring HPC and big data convergence: A graph processing study on Intel Knights Landing. CLUSTER (2018), 66–77.

30. Zhao, P. and Han, J. On graph query optimization in large networks. Proc. VLDB Endow. 3, 1 (2010), 340–351.

Authors


Sherif Sakr was a professor at the Institute of Computer Science at the University of Tartu, Estonia. He passed away on March 25, 2020 at the age of 40.

Angela Bonifati ([email protected]) is a professor at Lyon 1 University and Liris CNRS in Villeurbanne, France.

Hannes Voigt is a software engineer at Neo4j, Germany.

Alexandru Iosup is a professor at Vrije Universiteit Amsterdam and a visiting professor at Delft University of Technology, The Netherlands.

Footnotes


a. As indicated by a user survey12 and by a systematic literature survey of 18 application domains, including biology, security, logistics and planning, social sciences, chemistry, and finance. See

b. See

c. See

d. Many highly cited articles support this observation, including "Inductive Representation Learning on Large Graphs" by W. Hamilton et al. (2017) and "DeepWalk: Online Learning of Social Representations" by B. Perozzi et al. (2014);

e. See

f. The abstract of the Dagstuhl seminar. See

g. See

h. See

i. The figure does not aim to give a complete list of graph DBMS products. Please consult, for example, and other market surveys for comprehensive overviews.

j. A recent practical example is the COVID-19 Knowledge Graph:

k. "A Comprehensive Survey on Graph Neural Networks" by Z. Wu et al., 2019; abs/1901.00596.

l. See

m. Irregularity can be seen as the opposite of the locality principle commonly leveraged in computing.


Sidebar: A Joint Effort by the Computer Systems and Data Management Communities

The authors of this article met in Dec. 2019 in Dagstuhl for Seminar 19491 on Big Graph Processing Systems.a The seminar gathered a diverse group of 41 high-quality researchers from the data management and large-scale-systems communities. It was an excellent opportunity to open the discussion about next-decade opportunities and challenges for graph processing.

This is a community publication. The first four authors co-organized the community event resulting in this article and coordinated the creation of this manuscript. All other authors contributed equally to this work. Sadly, Sherif Sakr passed away during the period between the event and the completion of this article. This article is published in memoriam.



Sidebar: Known Properties of Graph Processing Workloads

Graph workloads may exhibit several properties:

  1. Graph workloads are functional for many, widely different domains.24,25,26 Salient aspects include edge orientation; extra information, such as properties/timestamps for edges and nodes; graph methods (neighborhood statistics, pathfinding and traversal, and subgraph mining); programming models (think-like-a-vertex, think-like-an-edge, and think-like-a-subgraph); diverse graph sizes, including trillion-edge graphs;26 and query and task selectivities.9
  2. Graph workloads can be highly irregular, mixing (short-term) data-intensive and compute-intensive phases.26 The source of irregularity, such as different datasets, algorithms, and computing platforms, significantly impacts performance. Their interdependency forms the Hardware-Platform-Algorithm-Dataset (HPAD) Law.29
  3. Graph processing uses a complex pipeline, combining a diversity of tasks other than querying and algorithms.1,24 From traditional data management, workloads include: transactional (OLTP) workloads in multi-user environments, with many short, discrete, possibly atomic transactions; and analytical (OLAP) workloads with fewer users but complex and resource-intensive queries or processing jobs with longer runtimes (minutes). Emerging tasks also include extract, transform, load (ETL); visualization; cleaning; mining; and debugging and testing, including synthetic graph generation.
  4. Scalability, interactivity, and usability affect how graph users design their workloads.24
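To make the think-like-a-vertex programming model mentioned above concrete, here is a minimal, self-contained sketch (not taken from any particular system) of a Pregel-style superstep loop: each vertex repeatedly adopts the smallest label it receives from its neighbors, which converges to connected-component labels.

```python
# Minimal "think-like-a-vertex" sketch: label propagation for connected
# components. Each superstep, active vertices send their current label to
# their neighbors; a vertex that receives a smaller label adopts it and
# stays active. The loop ends when no vertex changes (vote-to-halt).

def vertex_centric_components(edges):
    # Build an undirected adjacency list from the edge list.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    label = {v: v for v in adj}   # each vertex starts with its own id
    active = set(adj)             # all vertices are active initially
    while active:
        # Superstep: every active vertex sends its label to its neighbors.
        inbox = {}
        for u in active:
            for w in adj[u]:
                inbox.setdefault(w, []).append(label[u])
        # Vertices that receive a smaller label update and remain active.
        active = set()
        for w, msgs in inbox.items():
            smallest = min(msgs)
            if smallest < label[w]:
                label[w] = smallest
                active.add(w)
    return label

labels = vertex_centric_components([(1, 2), (2, 3), (4, 5)])
print(labels)  # {1: 1, 2: 1, 3: 1, 4: 4, 5: 4}
```

Real systems distribute the same per-vertex compute function and message passing across machines; this single-process version only illustrates the programming model's shape.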

The Digital Library is published by the Association for Computing Machinery. Copyright © 2021 ACM, Inc.
