DeepJournal
February 4, 2026

Kimi K2.5: A New Open-Source LLM Advancing the Frontier

For years, closed-source models such as GPT-5.2 and Claude Opus 4.5 have set the performance benchmarks for large language models. With Kimi K2.5, Moonshot AI introduces a large-scale open-source multimodal LLM that competes directly with these systems, marking a significant step forward for open-source AI.

Kimi K2.5 features 1 trillion total parameters with 32 billion active parameters per token, making it the largest open-source LLM released to date. Its public release under an open-source license positions it as a credible alternative to proprietary models in both performance and capability.
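The gap between total and active parameters implies a sparse mixture-of-experts design: only a small fraction of the weights is touched for each token. A minimal sketch of that arithmetic, using the 1T/32B figures stated above:

```python
# Active-parameter ratio for Kimi K2.5's sparse architecture
# (1 trillion total, 32 billion active per token, per the figures above).
total_params = 1_000_000_000_000   # 1T
active_params = 32_000_000_000     # 32B

active_fraction = active_params / total_params
print(f"Active per token: {active_fraction:.1%}")  # 3.2% of weights per token
```

This is why such a large model can remain practical to serve: per-token compute scales with the 32B active parameters, not the full 1T.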

Benchmark Performance and Capabilities

In internal evaluations, Kimi K2.5 delivers competitive results on several major benchmarks covering reasoning, agentic workflows, and coding:

Benchmark            Kimi K2.5    GPT‑5.2    Claude Opus 4.5
HLE‑Full                  50.2       45.5               43.2
SWE‑Bench Verified        76.8       80.0               80.9
MMMU‑Pro                  78.5       79.5               74.0
VideoMMMU                 86.6       85.9               84.4
BrowseComp                74.9       57.8               59.2
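One way to read the table is as head-to-head margins: Kimi K2.5's lead (or deficit) against the stronger of the two closed baselines on each benchmark. A short sketch, with the scores copied from the table above:

```python
# Scores from the table above: (Kimi K2.5, GPT-5.2, Claude Opus 4.5)
scores = {
    "HLE-Full":           (50.2, 45.5, 43.2),
    "SWE-Bench Verified": (76.8, 80.0, 80.9),
    "MMMU-Pro":           (78.5, 79.5, 74.0),
    "VideoMMMU":          (86.6, 85.9, 84.4),
    "BrowseComp":         (74.9, 57.8, 59.2),
}

for name, (kimi, gpt, claude) in scores.items():
    # Positive margin = Kimi K2.5 beats the best closed model on that benchmark.
    margin = kimi - max(gpt, claude)
    print(f"{name:<20} {margin:+.1f}")
```

The pattern this surfaces: the model leads on reasoning and agentic-search benchmarks (HLE‑Full, BrowseComp) while trailing the closed models on SWE‑Bench Verified.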

Available in DeepJournal with Confidential Encryption

DeepJournal integrates Kimi K2.5 as one of its confidential encrypted models available to users. The model executes within hardware-isolated trusted execution environments (TEEs), and all requests and responses are encrypted end-to-end, ensuring that journal content and prompts remain private even while being processed by the LLM.
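Concretely, the end-to-end pattern means the client encrypts the prompt before it leaves the device, and only the TEE holds the key needed to decrypt it; every hop in between sees ciphertext only. A toy sketch of that flow, using a one-time-pad XOR as a stand-in for the real authenticated cipher and key exchange (DeepJournal's actual protocol details are not described here):

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Toy one-time pad; stands in for a real authenticated cipher."""
    return bytes(a ^ b for a, b in zip(data, key))

# Session key shared only between the client and the TEE
# (in practice, established via attested key exchange, not shown).
session_key = secrets.token_bytes(64)

prompt = b"Private journal entry: ..."
ciphertext = xor(prompt, session_key)   # client encrypts before sending

# Load balancers and host OS between client and TEE see only ciphertext.
assert ciphertext != prompt

# Inside the TEE, the model service decrypts, runs inference,
# and encrypts the response the same way before returning it.
decrypted = xor(ciphertext, session_key)
assert decrypted == prompt
```

The key property is that the operator of the serving infrastructure never handles plaintext: decryption happens only inside the attested enclave.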

DeepJournal users can select Moonshot’s Kimi K2.5 alongside other open-source models such as:

  • DeepSeek R1 (DeepSeek)
  • GPT-OSS 120B (OpenAI)

This integration allows users to leverage cutting-edge open-source LLM capabilities without giving up privacy.