
Microsoft Releases Largest 1-Bit LLM, Letting Powerful AI Run on Some Older Hardware

19 April 2025 by Naruto Uzumaki


Microsoft Unveils 1-Bit Compact LLM that Runs on CPUs


Microsoft has unveiled BitNet b1.58 2B4T, the largest 1-bit large language model (LLM) released to date. The open-source model is designed to run efficiently on CPUs, including older hardware, without requiring specialized GPUs.

Key Features of BitNet b1.58 2B4T

  • Parameter Size: Approximately 2 billion parameters.
  • Training Data: Trained on a massive dataset of 4 trillion tokens, equivalent to about 33 million books.
  • Weight Quantization: Uses ternary weight values (-1, 0, +1), effectively making it a "1.58-bit" model. This approach significantly reduces memory usage and computational requirements compared to traditional models.
  • Performance: Despite its low-bit architecture, BitNet b1.58 2B4T performs competitively in various benchmarks with other models of similar size, such as Meta's Llama 3.2 1B and Google's Gemma 3 1B.
  • Resource Efficiency: The model consumes only about 400MB of memory, less than 30% of what comparable models require, making it suitable for devices with limited resources.
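
To make the ternary scheme concrete, here is a minimal sketch of "absmean" quantization in the style described in the BitNet b1.58 paper: each weight is scaled by the tensor's mean absolute value, then rounded and clipped into {-1, 0, +1}. The function name and the example values are illustrative, not from Microsoft's code; the footprint arithmetic shows why ~1.58 bits per weight lines up with the ~400MB figure above.

```python
import numpy as np

def ternarize(weights: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Absmean quantization: scale by mean |w|, round, clip to {-1, 0, +1}."""
    gamma = np.abs(weights).mean()              # per-tensor scale factor
    return np.clip(np.round(weights / (gamma + eps)), -1, 1)

w = np.array([[0.9, -0.05, -1.2],
              [0.02, 0.7, -0.6]])
print(ternarize(w))                             # every entry is -1, 0, or +1

# Rough footprint check: 2e9 weights at log2(3) ≈ 1.58 bits each
approx_mb = 2e9 * 1.58 / 8 / 1e6
print(round(approx_mb))                         # on the order of the ~400MB reported
```

Storing three-valued weights is what drives the memory savings; the 1.58 figure is simply log2(3), the information content of a ternary digit.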

Performance on Older Hardware

BitNet b1.58 2B4T's efficient design allows it to run effectively on CPUs, including those found in older systems. For instance, it has been demonstrated to operate on Apple's M2 chip without the need for additional hardware acceleration. This makes advanced AI capabilities more accessible to users with legacy hardware.
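
Part of why CPUs suffice is that with weights restricted to {-1, 0, +1}, each dot product in a matrix multiply collapses into additions and subtractions, with zero-weights skipped entirely, so no floating-point multiplications are needed. The sketch below (illustrative, not Microsoft's kernel) makes that visible:

```python
import numpy as np

def ternary_matvec(W: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Matrix-vector product for ternary W using only adds and subtracts."""
    out = np.zeros(W.shape[0])
    for i in range(W.shape[0]):
        row = W[i]
        # +1 weights add the input, -1 weights subtract it, 0 weights are skipped
        out[i] = x[row == 1].sum() - x[row == -1].sum()
    return out

W = np.array([[1, 0, -1],
              [0, 1, 1]])
x = np.array([2.0, 3.0, 4.0])
print(ternary_matvec(W, x))   # same result as W @ x, computed without multiplies
```

Optimized kernels such as those in bitnet.cpp exploit this structure (plus bit-packed weights) far more aggressively, but the arithmetic simplification is the same.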

Availability

The model is openly available under the MIT license and can be accessed through platforms like Hugging Face. Additionally, Microsoft has provided tools such as bitnet.cpp to facilitate efficient inference on various CPU architectures.
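
As a rough sketch of how one might try the model locally with bitnet.cpp, the steps below follow the pattern published in Microsoft's BitNet repository; the exact scripts, flags, and model/file names are assumptions here and may differ, so check the repository README before running:

```shell
# Clone Microsoft's bitnet.cpp repository (submodules included)
git clone --recursive https://github.com/microsoft/BitNet.git
cd BitNet
pip install -r requirements.txt

# Fetch the GGUF weights from Hugging Face (repo name may differ)
huggingface-cli download microsoft/BitNet-b1.58-2B-4T-gguf \
    --local-dir models/BitNet-b1.58-2B-4T

# Build the environment for the model, then run CPU inference
python setup_env.py -md models/BitNet-b1.58-2B-4T -q i2_s
python run_inference.py -m models/BitNet-b1.58-2B-4T/ggml-model-i2_s.gguf \
    -p "You are a helpful assistant" -cnv
```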
