---
base_model:
  - HuggingFaceTB/SmolLM2-360M
language:
  - en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
  - safetensors
  - onnx
  - transformers.js
  - pruna-ai
---

# Model Card for davidberenstein1957/SmolLM2-360M-Instruct-smashed

This model was created using the pruna library. Pruna is a model optimization framework built for developers, enabling you to deliver more efficient models with minimal implementation overhead.

## Usage

First things first, you need to install the pruna library:

```bash
pip install pruna
```

You can load the model with the transformers library, but this might not apply all optimizations by default.
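For reference, a minimal sketch of the plain transformers route; note that Pruna-specific steps, such as the torchao quantization listed in the configuration below, may not be re-applied this way:

```python
# Loading with plain transformers; Pruna-specific optimizations
# (e.g. the torchao quantizer) may not be re-applied on load.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davidberenstein1957/SmolLM2-360M-Instruct-smashed"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```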

To ensure that all optimizations are applied, load the model with the pruna library:

```python
from pruna import PrunaModel

loaded_model = PrunaModel.from_pretrained(
    "davidberenstein1957/SmolLM2-360M-Instruct-smashed"
)
```

For inference, you can use the inference methods of the original model, as shown in its model card. Alternatively, see the full Pruna documentation for more information.
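As a concrete example, here is a minimal inference sketch, assuming the wrapped model exposes the usual transformers generate API; the prompt and generation settings are illustrative, not prescribed by this card:

```python
# A minimal inference sketch; PrunaModel is expected to forward
# generate() to the underlying causal LM.
from transformers import AutoTokenizer
from pruna import PrunaModel

model_id = "davidberenstein1957/SmolLM2-360M-Instruct-smashed"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = PrunaModel.from_pretrained(model_id)

inputs = tokenizer("What is model compression?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```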

## Smash Configuration

The compression configuration of the model is stored in the smash_config.json file, which describes the optimization methods that were applied to the model.

```json
{
    "batcher": null,
    "cacher": null,
    "compiler": null,
    "factorizer": null,
    "pruner": null,
    "quantizer": "torchao",
    "torchao_excluded_modules": "none",
    "torchao_quant_type": "int8dq",
    "batch_size": 1,
    "device": "cpu",
    "device_map": null,
    "save_fns": [
        "save_before_apply"
    ],
    "load_fns": [
        "transformers"
    ],
    "reapply_after_load": {
        "factorizer": null,
        "pruner": null,
        "quantizer": "torchao",
        "cacher": null,
        "compiler": null,
        "batcher": null
    }
}
```
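If you want to inspect this configuration programmatically, a minimal sketch follows; using hf_hub_download to fetch the file is an assumption for illustration, not part of the pruna API:

```python
# A sketch for reading smash_config.json from the Hub; hf_hub_download
# is one way to fetch the file, assumed here for illustration.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="davidberenstein1957/SmolLM2-360M-Instruct-smashed",
    filename="smash_config.json",
)
with open(path) as f:
    smash_config = json.load(f)
print(smash_config["quantizer"])  # expected: "torchao"
```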

## 🌍 Join the Pruna AI community!

Twitter GitHub LinkedIn Discord Reddit