NVFP4 request with 16-bit activations

#5
by chriswritescode - opened

I don't have the resources to convert the model myself, so I would greatly appreciate this.

QuantTrio org
edited Sep 29

Hi Chris, appreciate your interest in this repo!

Just to clarify, what you're asking for with "16-bit activations" is essentially what AWQ already does: weights are quantized to 4-bit while activations stay in 16-bit (often written W4A16).
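
To make the W4A16 data flow concrete, here's a minimal PyTorch sketch. It uses plain round-to-nearest quantization with per-group scales; real AWQ additionally searches for activation-aware scaling factors and stores packed 4-bit weights, which this toy version omits:

```python
import torch

def fake_quantize_w4(weight: torch.Tensor, group_size: int = 128) -> torch.Tensor:
    # Round-to-nearest 4-bit quantization with per-group scales, then dequantize.
    # Real AWQ also searches activation-aware scales; this only shows the data flow.
    out_features, in_features = weight.shape
    w = weight.reshape(out_features, in_features // group_size, group_size)
    scale = w.abs().amax(dim=-1, keepdim=True) / 7  # int4 range is [-8, 7]
    q = torch.clamp(torch.round(w / scale), -8, 7)  # the stored 4-bit codes
    return (q * scale).reshape(out_features, in_features)

x = torch.randn(1, 4096, dtype=torch.bfloat16)      # activations stay 16-bit
w = torch.randn(11008, 4096, dtype=torch.bfloat16)  # full-precision weights
y = x @ fake_quantize_w4(w).T                       # W4A16 linear layer
```

Only the weights lose precision; the matmul itself still runs on 16-bit inputs, which is why no calibration data is needed.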

NVFP4 is different: vLLM currently only supports static NVFP4 quantization, which requires a large, representative calibration dataset to estimate activation ranges ahead of time. Without quantization-aware training (QAT) to recover accuracy, I'm afraid NVFP4 could perform worse than expected.
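
To make the static-vs-dynamic distinction concrete, here's a toy sketch in plain PyTorch (not vLLM's actual kernels, and it ignores NVFP4's per-block FP8 scaling; the layer and calibration data are made up):

```python
import torch

def calibrate_static_scale(layer_fn, calib_batches) -> float:
    # Static quantization: estimate ONE fixed activation scale from calibration
    # data, then reuse it forever at inference. If the calibration set isn't
    # representative, out-of-range activations get clipped and accuracy drops.
    observed_absmax = 0.0
    for batch in calib_batches:
        acts = layer_fn(batch)
        observed_absmax = max(observed_absmax, acts.abs().max().item())
    return observed_absmax / 6.0  # FP4 (E2M1) max magnitude is 6

layer = lambda x: x * 1.5                            # stand-in for a real layer
calib = [torch.randn(8, 4096) for _ in range(16)]    # toy calibration set
static_scale = calibrate_static_scale(layer, calib)

x = torch.randn(8, 4096)
static_q = torch.clamp(layer(x) / static_scale, -6.0, 6.0)  # fixed scale, may clip
dynamic_scale = layer(x).abs().max() / 6.0                   # recomputed per input
dynamic_q = layer(x) / dynamic_scale                         # always in range
```

Dynamic quantization recomputes the scale from each incoming batch, so it needs no calibration set at all; that's the mode we're waiting on.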

Until dynamic NVFP4 becomes available, there are no plans to provide NVFP4 builds here yet. For now, AWQ and MXFP4 remain the recommended options.
