5 docs tagged with "neural magic"

Optimizing LLMs

Optimize large language models (LLMs) for efficient inference using one-shot pruning and quantization. Learn how to improve model performance and reduce costs without sacrificing accuracy.
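As a rough illustration of the idea behind this doc, the sketch below applies one-shot magnitude pruning followed by post-training dynamic quantization using plain PyTorch utilities. It is not the SparseML API; the toy model, 50% sparsity level, and int8 dtype are placeholder assumptions.

```python
# Illustrative sketch only (not SparseML): one-shot pruning + quantization.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy stand-in for an LLM block; any nn.Module with Linear layers works.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))

# One-shot pruning: zero out 50% of the smallest-magnitude weights in a
# single pass, with no retraining.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the mask into the weights

# Post-training (one-shot) quantization: convert Linear weights to int8.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)
print(quantized)
```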

Sparse Fine-Tuning With LLMs

Improve the inference performance of your large language models (LLMs) through sparse fine-tuning with Neural Magic's SparseML. Optimize LLMs for specific tasks while maintaining accuracy.

Sparse Transferring LLMs

Adapt large language models (LLMs) to new domains and tasks using sparse transfer learning with Neural Magic's SparseML. Maintain accuracy while optimizing for efficiency.
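The two entries above share the same core mechanism: train a pruned model on new task or domain data while keeping its zero pattern fixed. The sketch below illustrates that masking idea with plain PyTorch; it is not Neural Magic's SparseML recipe flow, and the layer sizes, task head, and hyperparameters are placeholders.

```python
# Illustrative sketch of sparsity-preserving fine-tuning / transfer (not SparseML).
import torch
import torch.nn as nn

# Stand-in for a pruned, pre-trained backbone layer; a real workflow would
# load a sparsified LLM checkpoint instead.
backbone = nn.Linear(512, 512)
with torch.no_grad():
    backbone.weight.mul_((torch.rand_like(backbone.weight) > 0.7).float())

# Record the zero pattern once, then keep it fixed throughout training.
mask = (backbone.weight != 0).float()

head = nn.Linear(512, 4)  # fresh head for a hypothetical 4-class target task
model = nn.Sequential(backbone, nn.ReLU(), head)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

for _ in range(10):  # stand-in for the target-domain dataloader
    x, y = torch.randn(8, 512), torch.randint(0, 4, (8,))
    loss = nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    backbone.weight.grad.mul_(mask)  # pruned weights receive no gradient updates
    optimizer.step()
    with torch.no_grad():
        backbone.weight.mul_(mask)   # re-apply the mask so the sparsity pattern stays exact
```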

What is Neural Magic?

Neural Magic empowers you to optimize and deploy deep learning models on CPUs with GPU-class performance. Unlock efficiency, accessibility, and cost savings with our software solutions.