How Neural Networks See Social Networks


Over the last year, we have started using a novel deep learning architecture as part of Lynx’s Spark-based social network analytics platform. A common worry with neural network applications is that they are “black boxes”: data goes in, good predictions come out, but nobody can explain how. We had to address this not only to reassure clients but also to help ourselves debug and improve our graph-oriented neural models.

We reviewed recent advances in feature visualization and attribution, and then figured out how to adapt these techniques to graph convolutional networks. Feature visualization generates examples that illustrate what a specific neuron is looking for; a good feature visualization translates the patterns the network responds to into human concepts. Attribution explains individual predictions; a good attribution highlights the patterns in the data on which the network based its decision.
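To make the two ideas concrete, here is a minimal NumPy sketch, not Lynx's code: the toy graph, the weights, and every function name are invented for illustration. It applies the standard gradient-based recipes to a single graph-convolution layer H = relu(Â X W): attribution as the gradient of one node's output score with respect to the input features, and feature visualization as gradient ascent on the input toward what one hidden unit responds to.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 5 nodes, symmetric adjacency with self-loops, degree-normalized.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(5)
deg = A_hat.sum(axis=1)
A_hat = A_hat / np.sqrt(np.outer(deg, deg))   # D^{-1/2} (A + I) D^{-1/2}

F, H = 4, 3                                   # input features, hidden units
W = rng.normal(size=(F, H))                   # stand-in for trained weights
v = rng.normal(size=H)                        # readout vector for one output score

def forward(X):
    Z = A_hat @ X @ W                         # graph convolution, pre-activation
    return Z, np.maximum(Z, 0)                # ReLU hidden activations

def attribution(X, node):
    """Gradient (saliency) of one node's score w.r.t. every input feature."""
    Z, _ = forward(X)
    mask = (Z[node] > 0).astype(float)        # ReLU gate for that node
    # d score / d X[k, f] = A_hat[node, k] * sum_h W[f, h] * v[h] * mask[h]
    return np.outer(A_hat[node], W @ (v * mask))

def feature_visualization(node, unit, steps=200, lr=0.1):
    """Gradient-ascend an input that excites one hidden unit at one node."""
    X = rng.normal(size=(5, F)) * 0.01
    for _ in range(steps):
        # Ascend on the pre-activation Z[node, unit] to avoid a dead ReLU;
        # its gradient w.r.t. X is A_hat[node, k] * W[f, unit].
        X += lr * np.outer(A_hat[node], W[:, unit])
    return X

X = rng.normal(size=(5, F))
print("attribution for node 2:\n", attribution(X, node=2))
print("input that excites unit 1 at node 2:\n", feature_visualization(2, 1))
```

For a one-layer model these gradients can be written out by hand, as above; for the deeper models discussed in the talk one would rely on automatic differentiation and regularized optimization instead, but the recipe is the same.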

I will start the talk by introducing the problem and giving some context. The rest of the time will be split between explaining how our solution works and walking through an example where it gives interesting insights.

Session hashtag: #DLSAIS11

About Dániel Darabos

Daniel Darabos has spent the past three years as a Software Engineer at Lynx Analytics R&D, building the graph analytics system described in the talk. Before that, Daniel worked on ads serving and machine learning as a Site Reliability Engineer at Google.

About Janos Maginecz

Janos joined Lynx in 2016 to work on its big graph analytics platform. He's now one of the founding members of the Lynx Research Team, focusing on neural network applications on graphs.