Crypto Gloom

An MIT study found that AI image generators can run 30 times faster.

Researchers at the Massachusetts Institute of Technology (MIT) have made significant progress in artificial intelligence (AI), demonstrating a new technique that makes AI image generators run 30 times faster.

In a preprint paper, the researchers describe a technique that collapses the multi-step process used in diffusion models into a single step. The method, called ‘distribution matching distillation’ (DMD), allows new AI models to mirror the behavior of existing image generators without going through the usual 100-step process.

Diffusion models such as Midjourney and Stable Diffusion typically rely on a complex pipeline from input to output. Most combine an image information generator, a decoder, and many iterative “denoising” steps, a lengthy process that becomes more demanding as image quality increases.
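The iterative denoising loop described above can be sketched in a few lines. This is a toy illustration, not the real algorithm: the actual denoiser is a large neural network, whereas the hypothetical `denoise_step` below simply removes a fraction of the estimated noise relative to a fixed stand-in “clean image”.

```python
import numpy as np

# Hypothetical "clean image" the sampler should converge to (16 pixels).
TARGET = np.full(16, 0.5)

def denoise_step(x, step, num_steps):
    """One reverse-diffusion step: remove part of the estimated noise."""
    noise_estimate = x - TARGET  # stand-in for the model's noise prediction
    return x - noise_estimate / (num_steps - step)

def sample(num_steps=100, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(16)  # start from pure Gaussian noise
    for step in range(num_steps):
        x = denoise_step(x, step, num_steps)
    return x

image = sample()  # 100 sequential steps: this is what DMD collapses to one
```

The point of the sketch is the loop itself: every output requires `num_steps` sequential model evaluations, which is why cutting the count to one yields such large speedups.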

DMD adopts a “teacher-student” approach: a compact student model is trained to behave the same way as a more complex teacher image generator. A closer look at how DMD operates reveals that it integrates generative adversarial networks (GANs) with diffusion models, opening up numerous possibilities.

The researchers point to several advantages of DMD, including savings in computational power and time. They also noted that DMD reduced image generation time from 2.59 seconds to 90 milliseconds without affecting output quality.

“Our work is a novel method that accelerates current diffusion models such as Stable Diffusion and DALLE-3 by a factor of 30,” said lead researcher Tianwei Yin. “These advances not only significantly reduce computation times, but also maintain, if not surpass, the quality of the generated visual content.”

DMD achieves this feat by combining two main components: a regression loss and a distribution matching loss. The regression loss anchors and stabilizes the training process, while the distribution matching loss ensures that the generated images match how often those images occur in the real world.
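The two training signals can be illustrated with a minimal sketch. This is a hedged approximation, not the paper's method: the real distribution matching loss is computed with diffusion-model score functions, whereas here it is approximated by matching the mean and standard deviation of generated versus real samples, and the `teacher` is a hypothetical fixed function standing in for the multi-step generator.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher(z):
    """Stand-in for the expensive multi-step teacher generator."""
    return 2.0 * z + 1.0

def regression_loss(student_w, student_b, z):
    """Anchor the one-step student to the teacher's outputs (MSE)."""
    pred = student_w * z + student_b
    return np.mean((pred - teacher(z)) ** 2)

def distribution_matching_loss(student_w, student_b, z, real):
    """Crude distribution match: compare first and second moments."""
    fake = student_w * z + student_b
    return (fake.mean() - real.mean()) ** 2 + (fake.std() - real.std()) ** 2

z = rng.standard_normal(1000)                    # noise inputs
real = teacher(rng.standard_normal(1000))        # "real-world" samples
total = regression_loss(2.0, 1.0, z) + distribution_matching_loss(2.0, 1.0, z, real)
```

When the student's parameters match the teacher's (as in the last line), the regression term vanishes and the distribution term is small, which is the trained state the method aims for.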

“Reducing the number of iterations has been the holy grail of diffusion models since their inception,” said researcher Fredo Durand. “We are excited to finally support single-step image creation. This can significantly reduce computing costs and accelerate processes.”

LLMs are not excluded

While researchers work on diffusion models, large language models (LLMs) and other emerging technologies are enjoying significant innovation. In mid-March, a group of Chinese researchers unveiled a new compression technique for LLMs to bypass hardware limitations during deployment.

The researchers noted that by pruning unnecessary parameters, users can significantly cut inference costs without training new models. Dubbed ShortGPT, the method, according to the research paper, “significantly outperforms previous state-of-the-art (SOTA) methods in model pruning.”

For artificial intelligence (AI) to function properly within the law and succeed in the face of growing challenges, it must integrate enterprise blockchain systems that ensure data input quality and ownership. This keeps data safe while also guaranteeing its immutability. Check out CoinGeek’s coverage to learn more about why enterprise blockchain will become the backbone of AI.

See: Blockchain can hold AI accountable


Are you new to blockchain? To learn more about blockchain technology, check out CoinGeek’s Blockchain for Beginners section, our ultimate resource guide.