---

library_name: transformers
tags:
  - custom_generate
---

## Description
Implementation of [Contrastive Search](https://huggingface.co/blog/introducing-csearch), a decoding strategy that jointly optimizes model confidence and a degeneration penalty to produce fluent, coherent, and low-repetition text. At each step, the model considers the top-k candidate tokens and selects the one maximizing:

`score(v) = (1 - alpha) * p(v | context) - alpha * max_cosine_similarity(h_v, H_context)`

where `alpha` controls the trade-off between model confidence and the degeneration penalty, `h_v` is the hidden state of candidate `v`, and `H_context` contains the hidden states of the preceding tokens.
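
To make the selection rule concrete, here is a minimal sketch of a single step in plain PyTorch. It assumes the candidates' hidden states have already been computed with a forward pass; `contrastive_step` and its argument names are illustrative, not part of this repo's API:

```py
import torch.nn.functional as F

def contrastive_step(probs, cand_hidden, context_hidden, alpha=0.6, k=4):
    """One contrastive-search selection step (illustrative sketch).

    probs:          (vocab,) next-token distribution from the model
    cand_hidden:    (k, d) hidden states of the top-k candidates, ordered
                    to match `probs.topk(k)`
    context_hidden: (t, d) hidden states of the tokens generated so far
    """
    top_p, top_ids = probs.topk(k)                    # confidence term
    cand = F.normalize(cand_hidden, dim=-1)
    ctx = F.normalize(context_hidden, dim=-1)
    degeneration = (cand @ ctx.T).max(dim=-1).values  # max cosine sim per candidate
    scores = (1 - alpha) * top_p - alpha * degeneration
    return top_ids[scores.argmax()]                   # id of the winning candidate
```

In the full strategy this step runs once per generated token, with the `k` candidate hidden states typically obtained from one batched forward pass.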

This strategy typically:

- Reduces repetition compared to greedy/beam search
- Preserves semantic coherence better than pure sampling

---

## Base model

- `Qwen/Qwen2.5-0.5B-Instruct` (example)

---

## Model compatibility

- Decoder-only and encoder-decoder transformer models that support text generation

---

## Additional Arguments

- `top_k` (int): Number of candidate tokens considered at each step (e.g., 4)
- `penalty_alpha` (float): Weight of the degeneration penalty (e.g., 0.6)

Tips:
- Larger `top_k` explores more candidates but increases compute
- `penalty_alpha` in `[0.3, 0.8]` often works well; `0.0` reduces the strategy to greedy decoding (a quick sweep, sketched below, can help pick a value)
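
A minimal sweep sketch, reusing `model`, `tokenizer`, and `inputs` from the usage example below (the `custom_generate` repo id is an assumption):

```py
# Sketch: compare a few penalty_alpha values on the same prompt.
for alpha in (0.0, 0.3, 0.6, 0.8):
    out = model.generate(
        **inputs,
        custom_generate="transformers-community/contrastive-search",  # assumed repo id
        trust_remote_code=True,
        penalty_alpha=alpha,
        top_k=4,
        max_new_tokens=64,
    )
    print(f"alpha={alpha}:", tokenizer.decode(out[0], skip_special_tokens=True))
```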

---

## Output Type changes

(none): returns the same structure as standard `transformers` generation
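
For instance, structured outputs look the same as with built-in strategies. A small sketch, reusing the setup from the usage example below (the `custom_generate` repo id is an assumption):

```py
# Sketch: structured output matches standard `generate()` behavior.
out = model.generate(
    **inputs,
    custom_generate="transformers-community/contrastive-search",  # assumed repo id
    trust_remote_code=True,
    penalty_alpha=0.6,
    top_k=4,
    max_new_tokens=32,
    return_dict_in_generate=True,
)
print(out.sequences.shape)  # (batch_size, prompt_len + new_tokens), as usual
```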

---

## Example usage

```py
from transformers import AutoModelForCausalLM, AutoTokenizer, infer_device

device = infer_device()

model_id = "Qwen/Qwen2.5-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).to(device)

inputs = tokenizer(["DeepMind Company is"], return_tensors="pt").to(device)

# Contrastive search
gen_out = model.generate(
    **inputs,
    custom_generate="transformers-community/contrastive-search",
    penalty_alpha=0.6,
    top_k=4,
    max_new_tokens=128,
    trust_remote_code=True,
)

print(tokenizer.batch_decode(gen_out, skip_special_tokens=True))
```