ServiceNow AI Research
Publications tagged: Generative AI
XC-Cache: Cross-Attending to Cached Context for Efficient LLM Inference
In-context learning (ICL) approaches typically leverage prompting to condition decoder-only language model generation on reference …
João Monteiro, Étienne Marcotte, Pierre-André Noël, Valentina Zantedeschi, David Vazquez, Nicolas Chapados, Christopher Pal, Perouz Taslakian
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024.
PDF · Cite · Code
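The opening line of this abstract refers to the standard prompt-based ICL baseline, in which reference text is simply prepended to the query so the decoder-only model must re-read it on every call. Below is a minimal, hypothetical sketch of that baseline (not the paper's XC-Cache cross-attention method); the model choice, prompt template, and generation settings are illustrative assumptions.

```python
# Sketch of the prompt-based ICL baseline: the reference context is prepended
# to the question, so the decoder-only model re-processes it at every query.
# Model choice and prompt template are illustrative assumptions, not XC-Cache.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

context = "Reference document text goes here."
question = "What does the document say?"
prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```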
Fine-Tuning Web Agents: It Works, But It's Trickier Than You Think
Recent advancements in large language models (LLMs) have sparked interest in developing autonomous web agents capable of performing …
Massimo Caccia, Megh Thakkar, Léo Boisvert, Thibault Le Sellier De Chezelles, Alexandre Piche, Nicolas Chapados, Alexandre Drouin, Maxime Gasse, Alexandre Lacoste
NOW AI Conference (NOWAI), 2024.
PDF · Cite
An Ecosystem for Web Agents: WorkArena, BrowserGym, AgentLab and more
The BrowserGym ecosystem addresses the growing need for efficient evaluation and benchmarking of web agents, particularly those …
Alexandre Lacoste, Maxime Gasse, Thibault Le Sellier De Chezelles, Massimo Caccia, Léo Boisvert, Megh Thakkar, Alexandre Drouin, Nicolas Chapados
Montreal AI Symposium (MAIS), 2024.
Cite
Multimodal foundation world models for generalist embodied agents
Learning generalist embodied agents, able to solve multitudes of tasks in different domains, is a long-standing problem. Reinforcement …
Pietro Mazzaglia, Tim Verbelen, Bart Dhoedt, Aaron Courville, Sai Rajeswar Mudumba
Workshop at the International Conference on Machine Learning (ICML), 2024.
PDF · Cite · Code
Evaluating In-Context Learning of Libraries for Code Generation
Contemporary Large Language Models (LLMs) exhibit a high degree of code generation and comprehension capability. A particularly …
Arkil Patel, Siva Reddy, Dzmitry Bahdanau, Pradeep Dasigi
North American Chapter of the Association for Computational Linguistics (NAACL), 2024.
PDF · Cite · Code
Reducing hallucination in structured outputs via Retrieval-Augmented Generation
A common and fundamental limitation of Generative AI (GenAI) is its propensity to hallucinate. While large language models (LLMs) have …
Patrice Béchard, Orlando Marquez
North American Chapter of the Association for Computational Linguistics (NAACL), 2024.
PDF · Cite · Video
Exploring validation metrics for offline model-based optimisation with diffusion models
In model-based optimisation (MBO) we are interested in using machine learning to design candidates that maximise some measure of reward …
Christopher Beckham, Alexandre Piche, David Vazquez, Christopher Pal
Transactions on Machine Learning Research (TMLR), 2024.
PDF · Cite · Code
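As a rough illustration of the offline MBO setting named in this abstract (not the paper's diffusion-model approach or its validation metrics), the sketch below fits a proxy reward model on a fixed dataset of designs and ranks new candidates by its predictions; the data, proxy model, and candidate pool are placeholder assumptions.

```python
# Toy offline model-based optimisation loop: learn a proxy reward model from a
# fixed (design, reward) dataset, then select candidates that maximise the
# proxy's predicted reward. All data and model choices are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_offline = rng.uniform(-1.0, 1.0, size=(500, 8))   # logged candidate designs
y_offline = -(X_offline ** 2).sum(axis=1)            # their observed rewards

proxy = RandomForestRegressor(n_estimators=100, random_state=0)
proxy.fit(X_offline, y_offline)

candidates = rng.uniform(-1.0, 1.0, size=(1000, 8))  # new designs to score
scores = proxy.predict(candidates)
print("best candidate by proxy reward:", candidates[np.argmax(scores)])
```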
Investigating Interaction Friction in Generative AI: Improving User Experience and Decision-Making
Incorporating ethical principles of human-centered AI, such as fostering human autonomy and mindful decision-making, challenges the …
Pauline Malaguti, Alexander J. Karran, Di Le, Hayley Mortin, Constantinos K. Coursaris, Sylvain Sénécal, Pierre-Majorique Léger
Special Interest Group on Computer-Human Interaction (SIGCHI), 2024.
PDF · Cite
IntentGPT: Few-shot Intent Discovery with Large Language Models
In today’s digitally driven world, dialogue systems play a pivotal role in enhancing user interactions, from customer service to …
Juan A. Rodriguez, Nicholas Botzer, David Vazquez, Christopher Pal, Marco Pedersoli, Issam H. Laradji
Workshop at the International Conference on Learning Representations (ICLR), 2024.
PDF · Cite
Self-evaluation and self-prompting to improve the reliability of LLMs
To be safely deployed, Large Language Models (LLMs) must be capable of dynamically adapting their behavior based on their …
Alexandre Piche, Aristides Milios, Dzmitry Bahdanau, Christopher Pal
Workshop at the International Conference on Learning Representations (ICLR), 2024.
PDF · Cite · Video