Leon Chlon

I'm a machine-learning researcher working on AI safety, LLM reliability, and the mechanistic understanding of foundation models. My recent work asks why language models hallucinate, and how structured inductive biases, symmetry-aware learning, and a Bayesian perspective can make them more reliable and interpretable.

I also work on world models, multimodal learning, and physics-informed ML, where principled modeling bridges theory and scalable systems. I am currently a Visiting Fellow at the University of Oxford (Torr Vision Group) and the founder of Hassana Labs, a research organization opening doors in AI for researchers from marginalized communities. I'm writing an open-source book, Information Geometry for Generative Models, released for free, with donations going to Lebanese refugees.

email · cv (pdf) · github · linkedin · arxiv


News


Selected publications

  1. Predictable Compression Failures: Why Language Models Actually Hallucinate

    L. Chlon · ICML 2026 (accepted)

  2. LLMs are Bayesian, in Expectation, not in Realization

    L. Chlon · NeurIPS 2026 (under review)

  3. Attention Deficits in Language Models: Causal Explanations for Procedural Hallucinations

    L. Chlon · NeurIPS 2026 (under review)


Open source

Public research code: ITO (★65) · teta (★33) · factuality-slice (★31) · aecf (★4).

Core contributions to DeepMind's Optax and to HuggingFace, including a speedup in SAM image processing. EWOR Fellowship (0.1% acceptance rate).


Selected work


Talks & writing


Education