AI understands text, images, audio, and video.
But the real world runs on time.
Every heartbeat, price tick, sensor pulse, machine log, and user click is a temporal signal.
Current models can't reason about these signals.
We're changing that.
A New Class of Foundation Models
Time-Series Language Models (TSLMs) are multimodal foundation models that treat time series as a native modality alongside text, enabling direct reasoning, explanation, and forecasting over temporal data in natural language.
Our research shows order-of-magnitude gains in temporal reasoning, achieved on smaller, faster backbones. TSLMs are not an add-on. They're a new modality for AI.
Open Core, Frontier Edge
OpenTSLM: Lightweight base models trained on public data and released openly. They set the standard for temporal reasoning and power a global developer and research ecosystem.
Frontier TSLMs: Advanced proprietary models trained on specialized data, delivering enterprise-grade performance and powering APIs, fine-tuning, and vertical solutions.
Our Vision
We're building the temporal interface for AI: the layer that connects continuous real-world signals to intelligent decisions and autonomous agents.
A universal TSLM will power proactive healthcare, adaptive robotics, resilient infrastructure, and new forms of human-AI collaboration.
About Us
OpenTSLM is a team of scientists, engineers, and builders from ETH, Stanford, Harvard, Cambridge, TUM, CDTM, Google, Meta, AWS, and beyond. We are the original authors of the OpenTSLM paper.