The AI model 'MiniMax-M1', which supports the world's longest context window of 1 million input tokens and 80,000 output tokens, was trained for only about 78 million yen and has been released as open source, available for anyone to download.



MiniMax, an AI company based in Shanghai, China, has announced that it is open-sourcing its latest large language model (LLM), MiniMax-M1. MiniMax-M1 supports the world's longest context window of 1 million input tokens and 80,000 output tokens, and is touted as offering the most advanced agentic capabilities among open-source models.




GitHub - MiniMax-AI/MiniMax-M1: MiniMax-M1, the world's first open-weight, large-scale hybrid-attention reasoning model.
https://github.com/MiniMax-AI/MiniMax-M1

MiniMax-M1 - a MiniMaxAI Collection
https://huggingface.co/collections/MiniMaxAI/minimax-m1-68502ad9634ec0eeac8cf094

MiniMax touts MiniMax-M1 as 'the world's first open-weight, large-scale hybrid-attention reasoning model.' MiniMax-M1 combines a hybrid Mixture-of-Experts (MoE) architecture with the Lightning Attention mechanism.

Developed on the basis of MiniMax-Text-01, MiniMax-M1 contains 456 billion parameters in total, with 45.9 billion activated per token. Like MiniMax-Text-01, MiniMax-M1 natively supports a context window of 1 million tokens, eight times larger than that of DeepSeek R1.
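Because the weights are openly published on Hugging Face, the model can be loaded with standard tooling. The following is a minimal sketch, assuming the repository id MiniMaxAI/MiniMax-M1-80k (an 80,000-token-output variant) and that the custom architecture requires trust_remote_code=True; check the model card for the recommended deployment recipe, which may differ.

```python
# Minimal sketch: loading the open MiniMax-M1 weights via Hugging Face transformers.
# Assumptions: the repo id "MiniMaxAI/MiniMax-M1-80k" and trust_remote_code=True for
# the custom hybrid-attention architecture; see the model card for the exact setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniMaxAI/MiniMax-M1-80k"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,   # the repo ships its own architecture code
    device_map="auto",        # shard the 456B-parameter MoE across available GPUs
    torch_dtype="auto",
)

messages = [{"role": "user", "content": "Summarize the MiniMax-M1 architecture."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```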

MiniMax-M1 employs the Lightning Attention mechanism, enabling efficient scaling of test-time compute. For example, when generating 100,000 tokens, MiniMax-M1 consumes only 25% of the FLOPs required by DeepSeek R1. This characteristic makes MiniMax-M1 particularly well suited to complex tasks that demand long context windows and extended reasoning.
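Lightning Attention is an I/O-aware realization of linear attention, which replaces the quadratic softmax attention matrix with kernel feature maps so that compute grows roughly linearly in sequence length. As a rough illustration of why long generations become cheap, here is a minimal single-head, non-causal linear-attention sketch in NumPy; it shows only the generic idea, not MiniMax's optimized kernels.

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard attention: O(n^2 * d) time, with an n-by-n score matrix."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])            # (n, n) is the bottleneck
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, eps=1e-6):
    """Kernelized linear attention: O(n * d^2) time, no n-by-n matrix.

    Uses the elu(x)+1 feature map so phi(q).phi(k) >= 0 stands in for exp(q.k).
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))   # elu(x) + 1
    Qf, Kf = phi(Q), phi(K)
    kv = Kf.T @ V                                         # (d, d) summary, built once
    z = Qf @ Kf.sum(axis=0) + eps                         # per-query normalizer
    return (Qf @ kv) / z[:, None]

n, d = 1024, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) * 0.1 for _ in range(3))
print(softmax_attention(Q, K, V).shape, linear_attention(Q, K, V).shape)
```

Doubling n doubles the cost of the linear variant but quadruples the score matrix in the softmax variant, which is the intuition behind the FLOPs gap at 100,000-token generations.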

The graph below compares MiniMax-M1's performance against leading commercial AI models on competitive mathematics, coding, software engineering, agentic tool use, and text comprehension tasks. The red bars show MiniMax-M1, which performs comparably to the competing models across all tasks.



Examples of how to use MiniMax-M1 are as follows:

Simply drop a prompt to instantly generate an HTML page with a canvas-based animated particle background.




'We asked the MiniMax-M1 to create a typing speed test for us, and it produced a clean and functional web app that tracks WPM in real time.
No plugins, no setup required. Just practical, production-ready AI. 👇'




The following HTML page was generated with the prompt, 'Create an HTML page with a canvas-based animated particle background. The particles should move smoothly and connect as they approach each other. Add heading text to the center of the canvas.' The resulting page is built from an HTML canvas element and JavaScript.
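To give a concrete sense of how such a demo could be reproduced, here is a hedged sketch that sends the same prompt to a locally served copy of the model. It assumes the weights are served through vLLM's OpenAI-compatible server (the repository id MiniMaxAI/MiniMax-M1-80k is an assumption; check the model card) and uses the standard openai Python client.

```python
# Sketch: reproducing the particle-background demo against a local deployment.
# Assumes the weights are served with vLLM's OpenAI-compatible server, e.g.:
#   vllm serve MiniMaxAI/MiniMax-M1-80k --trust-remote-code
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

prompt = (
    "Create an HTML page with a canvas-based animated particle background. "
    "The particles should move smoothly and connect as they approach each other. "
    "Add heading text to the center of the canvas."
)

response = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M1-80k",
    messages=[{"role": "user", "content": prompt}],
    max_tokens=8192,
)

# Write the raw reply to disk; in practice the HTML may need to be extracted
# from markdown fences or separated from the model's reasoning trace.
with open("particles.html", "w") as f:
    f.write(response.choices[0].message.content)
```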




This is a maze generator and pathfinding visualization tool generated by entering the following prompt: 'Create a maze generator and pathfinding visualization tool. Randomly generate a maze and visualize how an algorithm solves it step by step. Use canvas and animation to make it visually appealing.'




AI instructor Min Choi has also shared examples of MiniMax-M1 in use, such as a Netflix clone app that can play trailers.




AI strategist David Hendrickson reports, 'We compared the newly released MiniMax-M1 80B with Claude Opus 4. MiniMax-M1 outperformed Claude Opus 4 on several benchmarks, particularly the longer context-driven benchmarks.'



MiniMax is one of a group of AI startups backed by internet giants Tencent and Alibaba that have raised billions of dollars in funding over the past year. DeepSeek's success, however, has forced most of them to scale back basic research and focus on applied research.

According to the AI media outlet implicator.ai, however, the announcement of MiniMax-M1 suggests that MiniMax has found a different path: rather than retreating from model development, it has focused on efficiency and open access.

The training budget for MiniMax-M1 was only $534,700 (approximately 78 million yen), suggesting that cutting-edge AI models no longer require vast corporate resources.

in AI, Video, Software, Posted by logu_ii