Small changes in the large language models (LLMs) at the heart of AI applications can result in substantial energy savings, according to a report released by the United Nations Educational, Scientific ...
San Sebastian, Spain – June 12, 2025: Multiverse Computing has developed CompactifAI, a compression technology capable of reducing the size of LLMs (Large Language Models) by up to 95 percent while ...
Morning Overview on MSN
A quantum trick is shrinking bloated AI models fast
Artificial intelligence has grown so large and power-hungry that even cutting-edge data centers strain to keep up, yet a technique borrowed from quantum physics is starting to carve these systems down ...
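The snippets above do not spell out CompactifAI's actual algorithm, but quantum-inspired compression methods belong to the same family as low-rank factorization of weight matrices. As a minimal, generic sketch (not Multiverse Computing's method; the matrix size and rank are arbitrary), truncated SVD shows how replacing one dense layer with two thin factors cuts the parameter count:

```python
import numpy as np

# Generic low-rank compression sketch -- NOT CompactifAI's algorithm.
# Shapes and rank are illustrative assumptions.
rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))  # a dense "layer" weight matrix

# Truncated SVD: keep only the top-r singular components.
r = 64
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]  # shape (1024, r)
B = Vt[:r, :]         # shape (r, 1024); W is approximated by A @ B

original = W.size            # 1024 * 1024 parameters
compressed = A.size + B.size  # 2 * 1024 * r parameters
print(f"parameters: {original} -> {compressed} "
      f"({100 * (1 - compressed / original):.1f}% smaller)")
```

Here the factored form stores 87.5% fewer parameters; real systems report larger savings by combining such factorizations with quantization and retraining to recover accuracy.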
Tech Xplore on MSN
Shrinking AI memory boosts accuracy, study finds
Researchers have developed a new way to compress the memory used by AI models to increase their accuracy in complex tasks or help save significant amounts of energy.