Imagination is a crucial aspect of human intelligence that enables us to combine concepts in novel ways and make sense of new situations. Such a capacity for compositional reasoning about unseen scenarios is not yet within reach of machine learning models. In this work, we build upon the notion of imagination to propose a modular framework for compositional data augmentation in the context of visual analogical reasoning. Our method, denoted Object-centric Compositional Neural Module Network (OC-NMN), decomposes visual generative reasoning tasks into a series of primitives that are applied to objects, without using a domain-specific language. We show that our modular architectural choices can be used to generate new training tasks that lead to better out-of-distribution generalization. We compare our model to existing and new baselines on a proposed visual reasoning benchmark that consists of applying arithmetic operations to visual digits.
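The compositional-augmentation idea above can be illustrated with a toy sketch: hold out some (digit, operation) pairings at training time and test on the unseen combinations. The primitive names (`add_1`, `add_2`, `sub_1`), the `generate_tasks` helper, and the modulo-10 arithmetic are all illustrative assumptions, not the paper's actual benchmark or method.

```python
import itertools

# Hypothetical primitives standing in for the benchmark's arithmetic
# operations on visual digits (illustrative, not the paper's definitions).
PRIMITIVES = {
    "add_1": lambda x: (x + 1) % 10,
    "add_2": lambda x: (x + 2) % 10,
    "sub_1": lambda x: (x - 1) % 10,
}

def generate_tasks(digits, ops, holdout):
    """Compositionally pair digits with primitive operations, holding out
    some (digit, op) combinations to probe out-of-distribution generalization."""
    train, test = [], []
    for d, op in itertools.product(digits, ops):
        example = (d, op, PRIMITIVES[op](d))  # (input, task, target)
        (test if (d, op) in holdout else train).append(example)
    return train, test

# All 10 digits x 3 primitives = 30 pairings; one combination is held out.
train, test = generate_tasks(range(10), list(PRIMITIVES), holdout={(7, "add_2")})
```

Because each task is built by recombining independent pieces (an input digit and a primitive), a model that has seen `7` with other operations and `add_2` on other digits can, in principle, generalize to the held-out pairing.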