Introducing TensorFlow GNN 1.0: Building Graph Neural Networks at Large Scale

Are you tired of dealing with those ***** tabular datasets? Do you crave the thrill of working with graphs instead? Well, my friend, have I got news for you!

Introducing TensorFlow GNN 1.0: Building Graph Neural Networks at Large Scale!

That’s right, we’ve finally released our latest library designed to make it easy to work with graph-structured data using TensorFlow. And let me tell ya, this thing is a game-changer. No more struggling with messy CSV files or dealing with the headache of working with heterogeneous graphs (you know what I mean).

So why use GNNs? Well, for starters, they’re pretty ***** useful for answering questions at every level of a graph. At the graph level, we try to predict characteristics of the entire graph, like identifying cycles that might represent sub-molecules or close social relationships. At the node level, GNNs can classify the nodes of a graph and predict partitions and affinity, similar to image classification or segmentation. Finally, at the edge level, GNNs can discover connections between entities, perhaps “pruning” edges to identify the state of objects in a scene.

But enough about the why; let’s get into the what. The initial release of the TF-GNN library contains a number of utilities and features for use by beginners and experienced users alike, including:

1) A high-level Keras-style API to create GNN models that can easily be composed with other types of models. GNNs are often used in combination with ranking, deep retrieval (dual encoders), or mixed with other model types (image, text, etc.). See the third sketch after this list.

2) A well-defined schema to declare the topology of a graph, and tools to validate it. This schema describes the shape of the training data and serves to guide other tools (fourth sketch below).

3) A GraphTensor composite tensor type which holds graph data, can be batched, and has graph manipulation routines available (first sketch below).

4) Various efficient broadcast and pooling operations on nodes and edges, and related tools (second sketch below).

5) A library of standard baked-in graph convolutions that can be easily extended by ML engineers and researchers (also shown in the third sketch below).

6) An encoding of graph-shaped training data on disk, as well as a library to parse this data into a data structure from which your model can extract the various features (fourth sketch below).
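
First up, the GraphTensor from item 3. Here’s a minimal sketch of building one by hand. The node-set and edge-set names (“papers” and “cites”) and the toy feature values are hypothetical, made up purely for illustration; the tfgnn constructors are from the library’s public API, but treat this as a sketch rather than gospel.

```python
import tensorflow as tf
import tensorflow_gnn as tfgnn

# A tiny citation graph: 3 "papers" nodes, 4 "cites" edges.
# All names and values here are hypothetical, for illustration only.
graph = tfgnn.GraphTensor.from_pieces(
    node_sets={
        "papers": tfgnn.NodeSet.from_fields(
            sizes=tf.constant([3]),
            features={"words": tf.constant([[1., 0.], [0., 1.], [1., 1.]])},
        ),
    },
    edge_sets={
        "cites": tfgnn.EdgeSet.from_fields(
            sizes=tf.constant([4]),
            adjacency=tfgnn.Adjacency.from_indices(
                source=("papers", tf.constant([0, 1, 2, 2])),
                target=("papers", tf.constant([1, 2, 0, 1])),
            ),
        ),
    },
)

print(graph.node_sets["papers"]["words"])         # per-node features
print(graph.edge_sets["cites"].adjacency.source)  # edge endpoints
```

Because GraphTensor is a composite tensor, values like this flow through tf.data pipelines and batch like any other tensor.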
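
Next, the broadcast and pooling operations from item 4, run on the toy graph above. Broadcasting copies node states onto edges, and pooling aggregates them back at the receiving nodes; together, that’s one step of message passing. Again, a sketch, assuming the graph from the previous example:

```python
# Copy each source node's "words" feature onto its outgoing "cites" edges.
messages = tfgnn.broadcast_node_to_edges(
    graph, "cites", tfgnn.SOURCE, feature_name="words")

# Aggregate the messages at each target node by summing over incoming edges.
pooled = tfgnn.pool_edges_to_node(
    graph, "cites", tfgnn.TARGET, reduce_type="sum", feature_value=messages)
```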
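
Then the Keras-style API from item 1, together with a ready-made convolution from item 5. This sketch wires one round of message passing into a node-level classifier; the layer sizes, the two-class head, and the “papers”/“cites” names are all assumptions carried over from the hypothetical graph above.

```python
def build_model(graph_spec, num_classes=2):
    # graph_spec is the tf.TypeSpec of the input GraphTensor.
    inputs = tf.keras.layers.Input(type_spec=graph_spec)

    # Initialize each node's hidden state from its raw "words" feature.
    graph = tfgnn.keras.layers.MapFeatures(
        node_sets_fn=lambda node_set, *, node_set_name:
            tf.keras.layers.Dense(16, "relu")(node_set["words"]))(inputs)

    # One round of message passing along the "cites" edges.
    graph = tfgnn.keras.layers.GraphUpdate(node_sets={
        "papers": tfgnn.keras.layers.NodeSetUpdate(
            {"cites": tfgnn.keras.layers.SimpleConv(
                message_fn=tf.keras.layers.Dense(16, "relu"),
                reduce_type="sum",
                receiver_tag=tfgnn.TARGET)},
            tfgnn.keras.layers.NextStateFromConcat(
                tf.keras.layers.Dense(16)))})(graph)

    # Node-level prediction head: classify each paper.
    logits = tf.keras.layers.Dense(num_classes)(
        graph.node_sets["papers"][tfgnn.HIDDEN_STATE])
    return tf.keras.Model(inputs, logits)
```

Swapping SimpleConv for another convolution class, or stacking more GraphUpdate layers, is how you’d grow this into a deeper model or compose it with image/text towers.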
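
Finally, items 2 and 6 together: the graph schema and the on-disk encoding. A GraphSchema is a protobuf (usually kept in a .pbtxt file) that declares node sets, edge sets, and their features; from it you derive the spec that parses serialized training examples back into GraphTensors. The schema text and the TFRecord filename below are hypothetical, matching the toy “papers”/“cites” graph:

```python
schema_pbtxt = """
node_sets {
  key: "papers"
  value {
    features {
      key: "words"
      value { dtype: DT_FLOAT shape { dim { size: 2 } } }
    }
  }
}
edge_sets {
  key: "cites"
  value { source: "papers" target: "papers" }
}
"""

schema = tfgnn.parse_schema(schema_pbtxt)
tfgnn.validate_schema(schema)  # the validation tooling from item 2

# The schema yields the type spec that guides parsing.
example_spec = tfgnn.create_graph_spec_from_schema_pb(schema)

# Read serialized tf.Example protos from disk (hypothetical filename),
# batch them, and parse each batch into a GraphTensor.
dataset = (
    tf.data.TFRecordDataset(["training_graphs.tfrecord"])
    .batch(32)
    .map(lambda records: tfgnn.parse_example(example_spec, records))
)
```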

So what are you waiting for? Go ahead and give it a spin! And if you have any questions or feedback, feel free to reach out to us at [email protected]; we’d love to hear from you!

Later!

SICORPS