Evaluating In-Context Learning of Libraries for Code Generation
Contemporary Large Language Models (LLMs) exhibit a high degree of code generation and comprehension capability. A particularly …
Reducing hallucination in structured outputs via Retrieval-Augmented Generation
A common and fundamental limitation of Generative AI (GenAI) is its propensity to hallucinate. While large language models (LLMs) have …
Investigating Interaction Friction in Generative AI: Improving User Experience and Decision-Making
Incorporating ethical principles of human-centered AI, such as fostering human autonomy and mindful decision-making, challenges the …
Efficient Dynamics Modeling in Interactive Environments with Koopman Theory
The accurate modeling of dynamics in interactive environments is critical for successful long-range prediction. Such a capability could …
Multi-View Causal Representation Learning with Partial Observability
We present a unified framework for studying the identifiability of representations learned from simultaneously observed views, such as …
TACTIS-2: Better, Faster, Simpler Attentional Copulas for Multivariate Time Series
We introduce a new model for multivariate probabilistic time series prediction, designed to flexibly address a range of tasks including …
Workflow Discovery in Low Data Regimes
Text-based dialogues are now widely used to solve real-world problems. In cases where solution strategies are already known, they can …
Generalization bounds with arbitrary complexity measures
In statistical learning theory, a generalization bound usually involves a complexity measure imposed by the considered theoretical …
GEO-Bench: Toward Foundation Models for Earth Monitoring
Recent progress in self-supervision shows that pre-training large neural networks on vast amounts of unsupervised data can lead to …
LLM aided semi-supervision for efficient Extractive Dialog Summarization
Generating high-quality summaries for chat dialogs often requires large labeled datasets. We propose a method to efficiently use …