Copyright Notice:

The documents distributed by this server have been provided by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Publications of SPCL

E. Frantar, S. Ashkboos, T. Hoefler, D. Alistarh:

 GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers

(In The Eleventh International Conference on Learning Representations, May 2023)

Abstract

Generative Pre-trained Transformer (GPT) models set themselves apart through breakthrough performance across complex language modelling tasks, but also by their extremely high computational and storage costs. Specifically, due to their massive size, even inference for large, highly-accurate GPT models may require multiple performant GPUs to execute, which limits the usability of such models. While there is emerging work on relieving this pressure via model compression, the applicability and performance of existing compression techniques are limited by the scale and complexity of GPT models. In this paper, we address this challenge, and propose GPTQ, a new one-shot weight quantization method based on approximate second-order information, that is both highly accurate and highly efficient. Specifically, GPTQ can quantize GPT models with 175 billion parameters in approximately four GPU hours, reducing the bitwidth down to 3 or 4 bits per weight, with negligible accuracy degradation relative to the uncompressed baseline. Our method more than doubles the compression gains relative to previously-proposed one-shot quantization methods, preserving accuracy, allowing us for the first time to execute a 175 billion-parameter model inside a single GPU. We show experimentally that these improvements can be leveraged for end-to-end inference speedups over FP16 of around 2x when using high-end GPUs (NVIDIA A100) and 4x when using more cost-effective ones (NVIDIA A6000).
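To make the bitwidth reduction concrete, the sketch below shows plain round-to-nearest uniform quantization of a weight matrix to 4-bit integer codes with a per-row scale. This is only an illustrative baseline, not the GPTQ method itself: GPTQ additionally uses approximate second-order (Hessian) information to compensate the error each quantized weight introduces, which is what preserves accuracy at 3–4 bits. All names here are ours, chosen for illustration.

```python
import numpy as np

def quantize_rtn(W, bits=4):
    """Round-to-nearest uniform quantization (baseline, NOT GPTQ).

    Returns integer codes plus a per-row scale; GPTQ improves on this
    by reordering and error-compensating with second-order information.
    """
    qmax = 2 ** (bits - 1) - 1                            # e.g. 7 for signed 4-bit
    scale = np.abs(W).max(axis=1, keepdims=True) / qmax   # per-row scale factor
    q = np.clip(np.round(W / scale), -qmax - 1, qmax)     # integer codes
    return q.astype(np.int8), scale

def dequantize(q, scale):
    """Map integer codes back to approximate floating-point weights."""
    return q.astype(np.float32) * scale

# Example: quantize a small random weight matrix and measure the error.
W = np.random.randn(8, 16).astype(np.float32)
q, scale = quantize_rtn(W, bits=4)
W_hat = dequantize(q, scale)
max_err = np.abs(W - W_hat).max()
```

At 4 bits each weight is stored in a quarter of the FP16 footprint (plus a small per-row scale), which is the storage saving the abstract refers to; the per-element reconstruction error of this baseline is bounded by half the row's scale.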

Documents

download article:
access preprint on arxiv:

BibTeX

@inproceedings{frantar2023gptq,
  author={Elias Frantar and Saleh Ashkboos and Torsten Hoefler and Dan Alistarh},
  title={{GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers}},
  year={2023},
  month={05},
  booktitle={The Eleventh International Conference on Learning Representations},
  doi={10.48550/arXiv.2210.17323},
}