{
"title": "Joint MoE Scaling Laws: Mixture of Experts Can Be Memory Efficient",
"authors": [
"Jan Ludziejewski",
"Maciej Pióro",
"Jakub Krajewski",
"Maciej Stefaniak",
"Michał Krutul",
"Jan Małaśnicki",
"Marek Cygan",
"Piotr Sankowski",
"Kamil Adamczewski",
"Piotr Miłoś",
"Sebastian Jaszczur"
],
"abstract": "Mixture of Experts (MoE) architectures have significantly increased\ncomputational efficiency in both research and real-world applications of\nlarge-scale machine learning models. However, their scalability and efficiency\nunder memory constraints remain relatively underexplored. In this work, we\npresent joint scaling laws for dense and MoE models, incorporating key factors\nsuch as the number of active parameters, dataset size, and the number of\nexperts. Our findings provide a principled framework for selecting the optimal\nMoE configuration under fixed memory and compute budgets. Surprisingly, we show\nthat MoE models can be more memory-efficient than dense models, contradicting\nconventional wisdom. To derive and validate the theoretical predictions of our\nscaling laws, we conduct over 280 experiments with up to 2.7B active parameters\nand up to 5B total parameters. These results offer actionable insights for\ndesigning and deploying MoE models in practical large-scale training scenarios.",
"pdf_url": "http://arxiv.org/pdf/2502.05172v1",
"entry_id": "http://arxiv.org/abs/2502.05172v1",
"categories": [
"cs.LG",
"cs.AI",
"cs.CL"
]
}