News

Microsoft researchers have developed and released a hyper-efficient AI model that can run on CPUs, including Apple's M2.
The BitNet LLM enables fast, energy-saving AI on CPUs, outperforming LLaMA 3.2 1B while using just 0.4 GB of memory and no GPU.
Memory requirements are the most obvious advantage of reducing the complexity of a model's internal weights: BitNet b1.58 2B4T runs in roughly 0.4 GB, a fraction of what comparable full-precision models need.
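As a back-of-the-envelope check (an illustrative sketch, not official figures): a 2-billion-parameter model at 1.58 bits per weight needs about 0.4 GB for its weights, versus roughly 4 GB at 16-bit precision.

```python
# Back-of-the-envelope weight-memory estimate (illustrative only).
def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate memory for model weights, in gigabytes (10^9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

ternary = weight_memory_gb(2e9, 1.58)   # BitNet-style ternary weights
fp16 = weight_memory_gb(2e9, 16)        # conventional 16-bit weights

print(f"ternary: {ternary:.2f} GB")  # ~0.40 GB
print(f"fp16:    {fp16:.2f} GB")     # ~4.00 GB
```

This counts weights only; activations, the KV cache, and runtime overhead add to the real footprint.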
Microsoft’s model BitNet b1.58 2B4T is available on Hugging Face, but it doesn’t run on GPUs and requires Microsoft’s own bitnet.cpp framework.
What just happened? Microsoft has introduced BitNet b1.58 2B4T, a new type of large language model engineered for exceptional efficiency. Unlike conventional AI models that rely on 16- or 32-bit weights, BitNet compresses each weight to roughly 1.58 bits.
BitNet employs a radical simplification: ternary weights with just three possible values (-1, 0, +1). Since log2(3) ≈ 1.58, each weight carries about 1.58 bits of information, rather than the full 16- or 32-bit precision used by conventional models.
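The ternary idea can be illustrated with absmean quantization, the scheme described in the BitNet b1.58 paper (a minimal sketch, not Microsoft's actual implementation): scale each weight matrix by the mean of its absolute values, then round and clip every entry to {-1, 0, +1}.

```python
import numpy as np

def absmean_ternary(W: np.ndarray, eps: float = 1e-8):
    """Quantize a weight matrix to {-1, 0, +1} via absmean scaling.

    Returns the ternary matrix and the scale gamma, so that
    gamma * W_ternary approximates W (sketch of BitNet b1.58's scheme).
    """
    gamma = np.abs(W).mean() + eps                 # per-tensor scale
    W_ternary = np.clip(np.round(W / gamma), -1, 1)
    return W_ternary.astype(np.int8), gamma

W = np.array([[0.9, -0.05, -1.2],
              [0.3,  0.0,  -0.4]])
Wq, gamma = absmean_ternary(W)
# Wq contains only -1, 0, +1; gamma * Wq approximates W
```

Because the quantized matrix holds only -1, 0, and +1, matrix multiplication reduces to additions and subtractions, which is what makes CPU-only inference practical.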
Final Thoughts: Running BitNet b1.58 2B4T on a 1997-era Pentium II CPU demonstrates what optimisation and invention can achieve. It emphasises that even old systems can still contribute meaningful AI work.
Microsoft has published BitNet b1.58 2B4T on Hugging Face, a collaboration platform for the AI community, describing it as “the first open-source, native 1-bit Large Language Model (LLM) at the 2-billion parameter scale.” The researchers call it the largest-scale 1-bit AI model, or “bitnet,” to date; it is openly available under an MIT license and can run on CPUs, including Apple’s M2.