ServiceNow Research

Few-shot Learning

A Guide To Effectively Leveraging LLMs for Low-Resource Text Summarization: Data Augmentation and Semi-supervised Approaches
Existing approaches for low-resource text summarization primarily employ large language models (LLMs) like GPT-3 or GPT-4 at inference …
MixSumm: Topic-based Data Augmentation using LLMs for Low-resource Extractive Text Summarization
Low-resource extractive text summarization is a vital but heavily underexplored area of research. Prior literature either focuses on …
PromptMix: A Class Boundary Augmentation Method for Large Language Model Distillation
Data augmentation is a widely used technique to address the problem of text classification when there is a limited amount of training …
MAPL: Parameter-Efficient Adaptation of Unimodal Pre-Trained Models for Vision-Language Few-Shot Prompting
Large pre-trained models have proved to be remarkable zero- and (prompt-based) few-shot learners in unimodal vision and language tasks. …
Towards Learning to Imitate from a Single Video Demonstration
Agents that can learn to imitate from video observation alone, without direct access to state or action information, are more …
A Closer Look at Embedding Propagation for Manifold Smoothing
Supervised training of neural networks requires a large amount of manually annotated data and the resulting networks tend to be …
Overcoming challenges in leveraging GANs for few-shot data augmentation
In this paper, we explore the use of GAN-based few-shot data augmentation as a method to improve few-shot classification performance. …
A Survey of Self-Supervised and Few-Shot Object Detection
Labeling data is often expensive and time-consuming, especially for tasks such as object detection and instance segmentation, which …
Synbols: Probing Learning Algorithms with Synthetic Datasets
Progress in the field of machine learning has been fueled by the introduction of benchmark datasets pushing the limits of existing …
Embedding Propagation: Smoother Manifold for Few-Shot Classification
Few-shot classification is challenging because the data distribution of the training set can be widely different from that of the test set, as …