ServiceNow Research

Exploring the Design Space of Generative Diffusion Processes for Sparse Graphs

Abstract

We extend score-based generative diffusion processes (GDPs) to sparse graphs and other inherently discrete data, with a focus on scalability. GDPs apply diffusion to training samples, then learn a reverse process that generates new samples out of noise. Previous work applying GDPs to discrete data effectively relaxes discrete variables to continuous ones. Our approach is different: we consider jump diffusion (\textit{i.e.}, diffusion with punctual discontinuities) in $\mathbb{R}^d \times \mathcal{G}$, where $\mathcal{G}$ models the discrete components of the data. We focus our attention on sparse graphs: our \textsc{Dissolve} process gradually breaks apart a graph $(V,E) \in \mathcal{G}$ through a number of distinct jump events. This confers significant advantages over GDPs that use less efficient representations and/or that destroy the graph information in a sudden manner. Gaussian kernels allow for efficient training with denoising score matching; standard GDP methods can be adapted with just an extra argument to the score function. We consider improvement opportunities for \textsc{Dissolve} and discuss necessary conditions for generalizing to other kinds of inherently discrete data.
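To make the idea concrete, here is a toy sketch of what a Dissolve-style forward process could look like. This is a hypothetical illustration, not the paper's implementation: the function name `dissolve_forward`, the one-edge-per-jump removal rule, and the variance schedule are all assumptions made for exposition. It shows the two ingredients the abstract describes: continuous features diffusing under Gaussian noise while the edge set is broken apart at discrete jump times.

```python
import math
import random

def dissolve_forward(edges, x, T=1.0, sigma=1.0, seed=0):
    """Toy sketch (hypothetical) of a Dissolve-style forward process.

    Continuous node features `x` receive Gaussian perturbations whose
    variance grows with time, while the discrete edge set is gradually
    destroyed via punctual jump events, one edge removed per jump.
    """
    rng = random.Random(seed)
    edges = list(edges)
    # One jump event per edge in this toy version, so the graph is fully
    # dissolved by time T.  Jump times are sampled uniformly in (0, T).
    jump_times = sorted(rng.uniform(0.0, T) for _ in range(len(edges)))
    trajectory = []
    for t in jump_times:
        # Discrete part: a punctual discontinuity removes one random edge.
        edges.pop(rng.randrange(len(edges)))
        # Continuous part: Gaussian kernel with variance growing in t
        # (a variance-exploding-style schedule, chosen for illustration).
        noisy_x = [xi + rng.gauss(0.0, sigma * math.sqrt(t)) for xi in x]
        trajectory.append((t, list(edges), noisy_x))
    return trajectory

# Usage: dissolve a triangle graph with scalar node features.
traj = dissolve_forward([(0, 1), (1, 2), (0, 2)], x=[1.0, -0.5, 0.3])
```

The learned reverse process would then invert both components: denoising the continuous features via the score function (which, per the abstract, takes the graph state as an extra argument) and re-inserting edges at the jump times.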

Publication
Workshop at Neural Information Processing Systems (NeurIPS)
Pierre-André Noël
Applied Research Scientist

Applied Research Scientist on the Low Data Learning team in Montreal, QC, Canada.