---
base_model:
- HuggingFaceTB/SmolLM2-360M
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- safetensors
- onnx
- transformers.js
- pruna-ai
---

# Model Card for davidberenstein1957/SmolLM2-360M-Instruct-smashed

This model was created using the [pruna](https://github.com/PrunaAI/pruna) library. Pruna is a model optimization framework built for developers, enabling you to deliver more efficient models with minimal implementation overhead.

## Usage

First, install the pruna library:

```bash
pip install pruna
```

You can [use the transformers library to load the model](https://huggingface.co/davidberenstein1957/SmolLM2-360M-Instruct-smashed?library=transformers), but this might not apply all optimizations by default.

To ensure that all optimizations are applied, load the model with the pruna library:

```python
from pruna import PrunaModel

loaded_model = PrunaModel.from_pretrained(
    "davidberenstein1957/SmolLM2-360M-Instruct-smashed"
)
```

For inference, you can use the inference methods of the original model, as shown [in the original model card](https://huggingface.co/davidberenstein1957/SmolLM2-360M-Instruct-smashed?library=transformers). For more information, see the full [pruna documentation](https://pruna.readthedocs.io/en/latest/index.html).
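As a minimal sketch (assuming the repository ships the original tokenizer and that the loaded model exposes the standard transformers `generate` method, as the note above suggests; the prompt is illustrative), inference could look like this:

```python
from pruna import PrunaModel
from transformers import AutoTokenizer

model_id = "davidberenstein1957/SmolLM2-360M-Instruct-smashed"

# Load the optimized model and the tokenizer shipped with the repository.
model = PrunaModel.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Tokenize a prompt and generate a completion via the usual transformers API.
inputs = tokenizer("What is model quantization?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```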

## Smash Configuration

The compression configuration of the model is stored in the `smash_config.json` file, which describes the optimization methods that were applied. For this model, the only active step is `torchao` quantization with the `int8dq` (int8 dynamic) quantization type; all other steps are disabled (`null`).

```json
{
    "batcher": null,
    "cacher": null,
    "compiler": null,
    "factorizer": null,
    "pruner": null,
    "quantizer": "torchao",
    "torchao_excluded_modules": "none",
    "torchao_quant_type": "int8dq",
    "batch_size": 1,
    "device": "cpu",
    "device_map": null,
    "save_fns": [
        "save_before_apply"
    ],
    "load_fns": [
        "transformers"
    ],
    "reapply_after_load": {
        "factorizer": null,
        "pruner": null,
        "quantizer": "torchao",
        "cacher": null,
        "compiler": null,
        "batcher": null
    }
}
```
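If you want to inspect this configuration programmatically, one option (a sketch using the standard `huggingface_hub` client, not a pruna-specific API) is to download and parse the file directly:

```python
import json

from huggingface_hub import hf_hub_download

# Download smash_config.json from the model repository and parse it.
config_path = hf_hub_download(
    repo_id="davidberenstein1957/SmolLM2-360M-Instruct-smashed",
    filename="smash_config.json",
)
with open(config_path) as f:
    smash_config = json.load(f)

print(smash_config["quantizer"])  # -> "torchao"
```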

## 🌍 Join the Pruna AI community!

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.com/invite/rskEr4BZJx)
[![Reddit](https://img.shields.io/reddit/subreddit-subscribers/PrunaAI?style=social)](https://www.reddit.com/r/PrunaAI/)