An Empirical Exploration of Trust Dynamics in LLM Supply Chains
With the widespread proliferation of AI systems, trust in AI is an important and timely topic to navigate. Researchers so far have …
Understanding Stakeholders' Perceptions and Needs Across the LLM Supply Chain
Explainability and transparency of AI systems are undeniably important, leading to several research studies and tools addressing them. …
IntentGPT: Few-shot Intent Discovery with Large Language Models
In today’s digitally driven world, dialogue systems play a pivotal role in enhancing user interactions, from customer service to …
Self-evaluation and self-prompting to improve the reliability of LLMs
In order to safely deploy Large Language Models (LLMs), they must be capable of dynamically adapting their behavior based on their …
WorkArena: How Capable are Web Agents at Solving Common Knowledge Work Tasks?
We study the use of large language model-based agents for interacting with software via web browsers. Unlike prior work, we focus on …
Towards Disentangled High-level Causal Explanations in Text
In this work, we propose a causal representation learning framework for learning disentangled and intervenable high-level explanations …
A Sparsity Principle for Partially Observable Causal Representation Learning
Causal representation learning (CRL) aims at identifying high-level causal variables from low-level data, e.g. images. Current methods …
Capture the Flag: Uncovering Data Insights with Large Language Models
The extraction of a small number of relevant insights from vast amounts of data is a crucial component of data-driven decision-making. …
Lag-Llama: A Foundation Model for Probabilistic Time Series Forecasting
In this work, we present Lag-Llama, a general-purpose probabilistic time series forecasting model trained on a large collection of time …
Multi-View Causal Representation Learning with Partial Observability
We present a unified framework for studying the identifiability of representations learned from simultaneously observed views, such as …