LLM Inference on GPUs (Intel)


A technical paper titled “Efficient LLM inference solution on Intel GPU” was published by researchers at Intel Corporation. Abstract: "Transformer based Large Language Models (LLMs) have been widely used in many fields, and the efficiency of LLM inference becomes a hot topic in real applications. However, LLMs are usually complicatedly designed in model structure with massive operations and..."
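
The efficiency problem the abstract names is dominated by the autoregressive decode loop, where each new token attends over all previous tokens via a growing key/value cache. The numpy sketch below illustrates that pattern in miniature; it is a generic illustration of KV-cached decoding, not Intel's implementation, and the names attend, k_cache, and v_cache are hypothetical.

```python
import numpy as np

def attend(q, k_cache, v_cache):
    """Single-head scaled dot-product attention for one new query token.

    q:        (d,)   query for the current decode step
    k_cache:  (t, d) keys for all previously generated tokens
    v_cache:  (t, d) values for all previously generated tokens
    """
    scores = k_cache @ q / np.sqrt(q.shape[0])   # (t,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over past tokens
    return weights @ v_cache                     # (d,)

d, steps = 64, 8
rng = np.random.default_rng(0)
k_cache = np.empty((0, d))
v_cache = np.empty((0, d))

for step in range(steps):
    # In a real model, q, k, v come from projecting the current hidden
    # state; random vectors stand in for them here.
    q, k, v = rng.normal(size=(3, d))
    k_cache = np.vstack([k_cache, k])            # cache grows one row per token
    v_cache = np.vstack([v_cache, v])
    out = attend(q, k_cache, v_cache)

print("context length:", k_cache.shape[0], "output dim:", out.shape)
```

Because the cache grows with every generated token, both memory traffic and attention cost rise over the course of generation, which is why decode-time optimizations target exactly this loop.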

Efficient LLM Inference With Limited Memory (Apple)


A technical paper titled “LLM in a flash: Efficient Large Language Model Inference with Limited Memory” was published by researchers at Apple. Abstract: "Large language models (LLMs) are central to modern natural language processing, delivering exceptional performance in various tasks. However, their intensive computational and memory requirements present challenges, especially for device..."
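
When model weights exceed available DRAM, a common approach is to keep them in flash storage and page in only the rows a given step actually needs. The sketch below shows that access pattern with a memory-mapped weight file; sparse_matvec and the predictor-chosen row indices are hypothetical stand-ins for illustration, not Apple's API.

```python
import numpy as np, tempfile, os

# Create a toy "flash-resident" weight matrix on disk (rows = output neurons).
rows, cols = 4096, 1024
path = os.path.join(tempfile.mkdtemp(), "weights.npy")
w = np.lib.format.open_memmap(path, mode="w+", dtype=np.float16,
                              shape=(rows, cols))
w[:] = 0.01          # placeholder values
w.flush()
del w

def sparse_matvec(path, x, active_rows):
    """Compute W[active_rows] @ x while reading only those rows from disk.

    The memory map means untouched rows never enter DRAM; only the rows
    selected (e.g. by a sparsity predictor) are paged in on access.
    """
    w = np.load(path, mmap_mode="r")   # no bulk read happens here
    return w[active_rows] @ x.astype(np.float16)

x = np.ones(cols, dtype=np.float16)
active = np.array([0, 17, 123, 4000])  # pretend a predictor chose these rows
print(sparse_matvec(path, x, active))
```

The point of the pattern is that DRAM usage scales with the active rows per step rather than with total model size, at the cost of flash read latency on each access.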

LLM Inference On CPUs (Intel)


A technical paper titled “Efficient LLM Inference on CPUs” was published by researchers at Intel. Abstract: "Large language models (LLMs) have demonstrated remarkable performance and tremendous potential across a wide range of tasks. However, deploying these models has been challenging due to the astronomical amount of model parameters, which requires a demand for large memory capacity and..."
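
A standard way to shrink the memory footprint the abstract describes is weight-only low-bit quantization, which stores roughly 8x fewer bytes than fp32 weights in exchange for a small accuracy loss. Below is a minimal group-wise INT4 round-trip in numpy; quantize_int4 and its group size of 32 are illustrative choices, not the paper's exact recipe.

```python
import numpy as np

def quantize_int4(w, group=32):
    """Symmetric 4-bit weight-only quantization with per-group scales.

    Each run of `group` weights along the flattened array shares one fp32
    scale; codes live in [-8, 7] and would be packed two per byte in practice.
    """
    flat = w.reshape(-1, group)
    scale = np.abs(flat).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0                               # avoid divide-by-zero
    codes = np.clip(np.round(flat / scale), -8, 7).astype(np.int8)
    return codes, scale

def dequantize(codes, scale, shape):
    """Reconstruct approximate fp32 weights from codes and scales."""
    return (codes * scale).reshape(shape).astype(np.float32)

rng = np.random.default_rng(0)
w = rng.normal(size=(128, 256)).astype(np.float32)
codes, scale = quantize_int4(w)
w_hat = dequantize(codes, scale, w.shape)
print("max abs error:", np.abs(w - w_hat).max())
print("bytes fp32:", w.nbytes, "-> int4 packed + scales:",
      codes.size // 2 + scale.nbytes)
```

In a real CPU inference path the codes would stay packed and be dequantized on the fly inside the matmul kernel, so the bandwidth savings carry through to runtime rather than only to storage.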