ServiceNow AI Research
Tag: Efficient Inference
Auto-Cypher: Improving LLMs on Cypher generation via LLM-supervised generation-verification framework
Graph databases like Neo4j are gaining popularity over traditional relational databases for handling complex, interconnected data in …
Aman Tiwari, Shiva Krishna Reddy Malay, Vikas Yadav, Masoud Hashemi, Sathwik Tejaswi Madhusudhan
North American Chapter of the Association for Computational Linguistics (NAACL), 2025.
PDF · Cite
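For readers curious what an LLM-supervised generation-verification loop can look like in practice, here is a minimal sketch (not the paper's actual pipeline): one LLM call drafts a Cypher query, the database checks that it executes, and a second LLM call acts as verifier. `call_llm` and `run_cypher` are hypothetical stand-ins for whatever model endpoint and Neo4j driver you use.

```python
# Minimal illustrative sketch of a generation-verification loop for Cypher.
# `call_llm` and `run_cypher` are hypothetical helpers, not the paper's API.

GENERATOR_PROMPT = (
    "Given this Neo4j graph schema:\n{schema}\n"
    "Write a single Cypher query answering: {question}\n"
    "Return only the query."
)

VERIFIER_PROMPT = (
    "Schema:\n{schema}\nQuestion: {question}\nCandidate Cypher:\n{query}\n"
    "Does the query correctly answer the question? Reply YES or NO."
)


def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around whichever LLM endpoint you use."""
    raise NotImplementedError


def run_cypher(query: str):
    """Hypothetical wrapper that executes the query against Neo4j
    (e.g. via the official `neo4j` Python driver) and raises on errors."""
    raise NotImplementedError


def generate_verified_cypher(schema: str, question: str, max_attempts: int = 3) -> str:
    for _ in range(max_attempts):
        query = call_llm(GENERATOR_PROMPT.format(schema=schema, question=question)).strip()
        try:
            run_cypher(query)          # executability check against the database
        except Exception:
            continue                   # invalid query: draft again
        verdict = call_llm(VERIFIER_PROMPT.format(schema=schema, question=question, query=query))
        if verdict.strip().upper().startswith("YES"):
            return query               # accepted by the LLM verifier
    raise RuntimeError("No verified Cypher query produced within the attempt budget")
```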
Unifying Autoregressive and Diffusion-Based Sequence Generation
We take significant steps toward unifying autoregressive and diffusion-based sequence generation by extending the SEDD discrete …
Nima Fathi, Torsten Scholak, Pierre-André Noël
Workshop at the International Conference on Learning Representations (ICLR), 2025.
PDF · Cite
XC-Cache: Cross-Attending to Cached Context for Efficient LLM Inference
In-context learning (ICL) approaches typically leverage prompting to condition decoder-only language model generation on reference …
João Monteiro, Étienne Marcotte, Pierre-André Noël, Valentina Zantedeschi, David Vazquez, Nicolas Chapados, Christopher Pal, Perouz Taslakian
Workshop at the Conference on Neural Information Processing Systems (NeurIPS), 2024.
PDF · Cite · Code
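As a rough illustration of the general idea in the title, the sketch below caches encoded reference-context states once and lets a decoder layer cross-attend to that cache at generation time, instead of re-reading the context through the prompt. The module layout and shapes are illustrative assumptions, not the architecture from the paper.

```python
# PyTorch sketch: cross-attention to cached context representations.
import torch
import torch.nn as nn


class CrossAttendingDecoderLayer(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor, cached_context: torch.Tensor) -> torch.Tensor:
        # Causal self-attention over the tokens being generated.
        seq_len = x.size(1)
        causal = torch.triu(
            torch.full((seq_len, seq_len), float("-inf"), device=x.device), diagonal=1
        )
        h, _ = self.self_attn(x, x, x, attn_mask=causal)
        x = self.norm1(x + h)
        # Cross-attention to cached context states, computed once and reused
        # across queries rather than re-encoded inside every prompt.
        h, _ = self.cross_attn(x, cached_context, cached_context)
        x = self.norm2(x + h)
        return self.norm3(x + self.ffn(x))


# Encode the reference document once, then reuse the cache for many queries.
context_states = torch.randn(1, 1024, 512)   # stand-in for cached context states
layer = CrossAttendingDecoderLayer()
query_tokens = torch.randn(1, 16, 512)       # stand-in embeddings of a user query
out = layer(query_tokens, context_states)
```

Because the context states are precomputed, the per-query cost in this sketch depends only on the query length plus cross-attention reads of the cache.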
Layer-Wise Quantization: A Pragmatic and Effective Method for Quantizing LLMs Beyond Integer Bit-Levels
We present a simple meta quantization approach that quantizes different layers of a large language model (LLM) at different bit levels, …
Razvan-Gabriel Dumitru, Vikas Yadav, Rishabh Maheshwary, Paul-Ioan Clotan, Sathwik Tejaswi Madhusudhan, Mihai Surdeanu
arXiv, 2024.
PDF · Cite · Code
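The abstract's core idea, assigning different bit widths to different layers, can be illustrated with a small NumPy sketch. The per-layer bit plan below is a made-up example, not the paper's selection criterion; mixing integer widths across layers is what yields a fractional average bit level for the model as a whole.

```python
# NumPy sketch of layer-wise quantization: each layer gets its own bit width.
import numpy as np


def quantize_tensor(w: np.ndarray, bits: int) -> np.ndarray:
    """Symmetric uniform quantization of one weight tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1                 # e.g. 7 for 4 bits, 1 for 2 bits
    scale = max(np.abs(w).max(), 1e-8) / qmax  # map the largest weight to the grid edge
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale                           # dequantize to simulate the error


def layer_wise_quantize(layers: dict, bit_plan: dict) -> dict:
    """Apply a per-layer bit width given by `bit_plan` (hypothetical policy)."""
    return {name: quantize_tensor(w, bit_plan[name]) for name, w in layers.items()}


# Toy example: a 4-layer "model" with a mixed 4/3/2-bit plan.
rng = np.random.default_rng(0)
layers = {f"layer_{i}": rng.normal(size=(8, 8)) for i in range(4)}
bit_plan = {"layer_0": 4, "layer_1": 4, "layer_2": 3, "layer_3": 2}
quantized = layer_wise_quantize(layers, bit_plan)
print(f"average bits per layer: {sum(bit_plan.values()) / len(bit_plan):.2f}")
```

How the bit budget is distributed across layers is the interesting design choice; the uniform hand-picked plan above only shows the mechanics.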