The next wave of intelligent systems will autonomously adapt to real-world feedback. We're building adaptive, self-improving agents and their interaction environments, with a mission to design systems for a future with safe superintelligence.

Latest
NeurIPS Workshop Paper
What we're building
Research on RL demonstrating that training on the hardest 10% of examples yields up to 47% performance gains when post-training language models with GRPO under budget constraints. Offers insights into why difficult examples maximize learning efficiency, along with practical guidance for better post-training.
Read paper
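The hard-example selection described above can be sketched as follows. This is an illustrative assumption, not the paper's exact method: here "difficulty" is measured as one minus the mean reward over sampled rollouts, and the hardest 10% of prompts are kept for the next training round.

```python
# Hypothetical sketch: keep the hardest 10% of prompts, ranked by
# empirical rollout reward, before a GRPO-style post-training run.
# The difficulty metric (1 - mean rollout reward) is an assumption
# for illustration only.

def select_hardest(prompts, rewards, fraction=0.10):
    """Return the `fraction` of prompts with the lowest mean rollout reward.

    prompts: list of prompt strings
    rewards: rewards[i] is a list of rollout rewards in [0, 1] for prompts[i]
    """
    difficulty = [1.0 - sum(r) / len(r) for r in rewards]  # higher = harder
    k = max(1, int(len(prompts) * fraction))
    ranked = sorted(range(len(prompts)), key=lambda i: difficulty[i], reverse=True)
    return [prompts[i] for i in ranked[:k]]


# Example: 10 mock prompts where q0 fails every rollout and q9 succeeds most.
prompts = [f"q{i}" for i in range(10)]
rewards = [[i / 10.0] * 4 for i in range(10)]
print(select_hardest(prompts, rewards))  # -> ['q0']
```

Under a fixed compute budget, ranking by difficulty concentrates gradient updates on prompts the model has not yet mastered, which is the intuition behind the reported efficiency gains.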
Freysa becomes available on Telegram and WhatsApp. She remembers conversations, generates media, and evolves her own opinions about users. In group chats, she adapts contextually, sometimes choosing to remain silent. A step toward self-evolving agent behavior.
A privacy-first mobile app that runs embeddings locally in a TEE with gpt-oss-120 and DeepSeek-R1. It also supports GPT-5 via VPN-like proxy routing, offering both maximum privacy and hybrid local/cloud model integration.
Try Now
A family of epigenomic foundation models (90M, 600M, 7B) trained on 1.9T tokens of methylated and unmethylated DNA. Introduces stacked hierarchical attention and alignment embeddings, achieving state-of-the-art performance in early Alzheimer's and Parkinson's detection.
Read paper
A lightweight local model (under 1B parameters) that replaces sensitive information with semantically similar placeholders before queries leave your device. It preserves context and restores the original meaning in responses, so your AI stays useful without exposing your data.
Read our blog post

We're well-funded ($30M from aligned investors) and actively hiring for specific roles across our key initiatives. Many of us work in person in San Francisco, but we are open to high-agency remote team members. Email us at contact@eternis.ai
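The redact-then-restore flow above can be sketched in a few lines. This is a minimal illustration, not the actual model: a regex for email addresses stands in for the local detection model, and the placeholder format is an assumption.

```python
# Hypothetical sketch of placeholder-based redaction: sensitive spans
# are swapped for typed placeholders before a query leaves the device,
# and the saved mapping restores them in the response. A real system
# would use a small local model for detection; a regex stands in here.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")


def redact(text):
    """Replace each email with a placeholder; return redacted text and mapping."""
    mapping = {}

    def _sub(match):
        placeholder = f"<EMAIL_{len(mapping)}>"
        mapping[placeholder] = match.group(0)
        return placeholder

    return EMAIL_RE.sub(_sub, text), mapping


def restore(text, mapping):
    """Put original values back into a response that echoes placeholders."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text


query, mapping = redact("Email alice@example.com about the invoice.")
print(query)  # -> Email <EMAIL_0> about the invoice.

# Simulate a cloud model replying with the placeholder intact.
response = "Drafted a note to <EMAIL_0>."
print(restore(response, mapping))  # -> Drafted a note to alice@example.com.
```

Because only the mapping stays on the device, the remote model never sees the raw values, yet its response reads as if it had.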
