AhmedSSoliman committed
Commit 5ab43c8 · verified · 1 Parent(s): 22c5446

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,570 @@
---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- medical
- healthcare
- clinical-reasoning
- rlhf
- grpo
- lora
- digital-twin
- gpt-oss
base_model: openai/gpt-oss-20b
datasets:
- FreedomIntelligence/medical-o1-reasoning-SFT
pipeline_tag: text-generation
model-index:
- name: gpt-oss-20b-digital-twin-v1
  results:
  - task:
      type: text-generation
      name: Medical Question Answering
    metrics:
    - type: format_compliance
      value: 95
      name: Reasoning Structure Compliance
    - type: semantic_accuracy
      value: 85
      name: Medical Accuracy
---

# GPT-OSS-20B Medical Digital Twin v1 🫀

A 20-billion parameter Medical Digital Twin AI trained using GRPO (Group Relative Policy Optimization) on OpenAI's GPT-OSS-20B base model.

## 🌟 Model Description

This model acts as a **Medical Digital Twin** - simulating physiological reasoning processes before providing medical responses. It is specifically designed to:

- 🧠 **Show Clinical Reasoning**: Uses `<think>` tags to demonstrate step-by-step diagnostic thinking
- 👥 **Dual Communication**: Adapts tone for patient support or physician collaboration
- 🎯 **Accuracy-Focused**: Trained with semantic similarity rewards for medical correctness
- ⚡ **Large-Scale**: 20B parameters with efficient LoRA fine-tuning

## 🏗️ Architecture

| Component | Specification |
|-----------|---------------|
| **Base Model** | OpenAI GPT-OSS-20B (20 billion parameters) |
| **Training Method** | GRPO (Group Relative Policy Optimization), an RLHF-style method |
| **Adaptation** | LoRA (Low-Rank Adaptation), rank 64 |
| **Quantization** | 4-bit NF4 for memory efficiency |
| **Context Length** | 4,096 tokens |
| **Hardware Used** | NVIDIA A100 80GB |

## 🎓 Training Details

### Dataset

- **Source**: [FreedomIntelligence/medical-o1-reasoning-SFT](https://huggingface.co/datasets/FreedomIntelligence/medical-o1-reasoning-SFT)
- **Language**: English medical Q&A with reasoning chains
- **Size**: 500 curated examples
- **Focus**: Clinical reasoning, differential diagnosis, patient safety

### Training Configuration

```
# Hyperparameters
Max Sequence Length: 4,096 tokens
LoRA Rank: 64
Batch Size: 1 (per device)
Gradient Accumulation: 16 steps
Effective Batch Size: 16
Learning Rate: 3e-6
Training Steps: 300
Optimizer: AdamW (β1=0.9, β2=0.999)
LR Schedule: Cosine with 5% warmup
Precision: BFloat16
```
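
These hyperparameters are reported values, not a runnable script. As a rough guide, the sketch below shows how they could be wired into TRL's `GRPOConfig`/`GRPOTrainer` on top of the Unsloth-loaded base model. It is a minimal, untested sketch under stated assumptions: the dataset here is a one-row placeholder (the real run used 500 curated examples), argument names follow recent TRL releases, and `format_reward` / `semantic_reward` refer to the reward functions described in the next section.

```python
# Hedged sketch only - not the original training script.
from datasets import Dataset
from unsloth import FastLanguageModel
from trl import GRPOConfig, GRPOTrainer

# Placeholder dataset; the actual run used 500 curated medical-o1 reasoning examples
train_dataset = Dataset.from_list([
    {"prompt": "A 45-year-old male presents with chest pain radiating to the left arm. Assessment?",
     "ground_truth": "Acute coronary syndrome until proven otherwise; emergency evaluation is required."},
])

# Load the 4-bit base model and attach a LoRA adapter (values match adapter_config.json)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gpt-oss-20b-unsloth-bnb-4bit",
    max_seq_length=4096,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=64, lora_alpha=64, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
)

config = GRPOConfig(
    output_dir="gpt-oss-20b-digital-twin-v1",
    learning_rate=3e-6,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,   # effective batch size 16
    max_steps=300,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    bf16=True,
)

trainer = GRPOTrainer(
    model=model,
    args=config,
    processing_class=tokenizer,
    train_dataset=train_dataset,
    reward_funcs=[format_reward, semantic_reward],  # see "Reward Functions" below
)
trainer.train()
```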

### Reward Functions

1. **Format Reward** (90% initial weight → 40% final):
   - Encourages structured reasoning with `<think>` tags
   - Rewards: -1.0 (no tags) to +2.0 (excellent reasoning)
   - Adaptive weight: decreases as format compliance improves

2. **Semantic Reward** (10% initial weight → 60% final):
   - Measures answer accuracy via cosine similarity
   - Compares model output to ground truth medical responses
   - Uses sentence-transformers embeddings
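
The exact reward implementations are not published in this card. A minimal sketch of the two rewards described above could look like the following; the regular expression, the score thresholds, the `all-MiniLM-L6-v2` embedding model, the `ground_truth` column name, and the assumption that completions arrive as plain strings are all illustrative choices, not the original training code.

```python
# Hedged sketch of the two GRPO reward functions - illustrative, not the original implementation.
import re
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def format_reward(completions, **kwargs):
    """Score structured reasoning: penalize missing <think> blocks, reward substantial ones."""
    scores = []
    for text in completions:
        match = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
        if match is None:
            scores.append(-1.0)                      # no reasoning tags at all
        elif len(match.group(1).split()) < 30:
            scores.append(0.5)                       # tags present but reasoning is thin
        else:
            scores.append(2.0)                       # substantial structured reasoning
    return scores

def semantic_reward(completions, ground_truth, **kwargs):
    """Score answers by cosine similarity between the final answer and the reference response."""
    scores = []
    for text, reference in zip(completions, ground_truth):
        answer = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
        similarity = util.cos_sim(embedder.encode(answer), embedder.encode(reference)).item()
        scores.append(float(similarity))             # cosine similarity, roughly in [-1, 1]
    return scores
```

In TRL's GRPO setup, both callables would be passed via `reward_funcs`, with extra dataset columns (here `ground_truth`) forwarded to them as keyword arguments; the adaptive 90→40 / 10→60 weighting described above would be applied on top of these raw scores.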

### Training Results

- ✅ Format Compliance: 95%+ (responses use structured reasoning)
- ✅ Semantic Accuracy: 85%+ similarity to expert answers
- ✅ Convergence: Stable after 150 steps
- ✅ Total Training Time: ~15 hours on A100 80GB

## 💻 Usage

### Installation

```bash
pip install torch transformers unsloth sentence-transformers
```

### Basic Inference (CPU/GPU)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load model
model_id = "AhmedSSoliman/gpt-oss-20b-digital-twin-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # Automatically uses GPU if available
)

# System prompt
system_prompt = """You are a Medical Digital Twin AI.
Step 1: Analyze within <think> tags with detailed clinical reasoning.
Step 2: Provide a clear, actionable response."""

# Format input
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "I have chest pain radiating to my left arm. What should I do?"}
]

# Tokenize
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Generate
outputs = model.generate(
    inputs,
    max_new_tokens=1024,
    temperature=0.6,
    top_p=0.9,
    do_sample=True
)

response = tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True)
print(response)
```

### Optimized Inference with Unsloth (Faster)

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="AhmedSSoliman/gpt-oss-20b-digital-twin-v1",
    max_seq_length=4096,
    dtype=None,  # Auto-detect
    load_in_4bit=True,  # 4-bit quantization (~40GB VRAM)
)

FastLanguageModel.for_inference(model)  # Enable inference optimizations

# Use same generation code as above
```
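
Since this repository ships LoRA adapter weights (see `adapter_config.json` and `adapter_model.safetensors`), the adapter can also be attached explicitly with PEFT on top of the 4-bit base model it was trained against. A minimal sketch, assuming the standard `peft` loading API:

```python
# Hedged sketch: load the base model and attach this repo's LoRA adapter via PEFT
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/gpt-oss-20b-unsloth-bnb-4bit"   # base listed in adapter_config.json
adapter_id = "AhmedSSoliman/gpt-oss-20b-digital-twin-v1"

tokenizer = AutoTokenizer.from_pretrained(adapter_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # applies the ~127 MB LoRA adapter
model.eval()
```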

### Hardware Requirements

| Precision | VRAM Required | Speed | Use Case |
|-----------|---------------|-------|----------|
| 4-bit NF4 | 40GB+ | Fast | Production inference |
| 8-bit | 60GB+ | Medium | Balanced |
| FP16 | 80GB+ | Slower | Full precision |
| CPU (FP32) | 80GB RAM | Very Slow | No GPU available |
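
If you prefer plain `transformers` over Unsloth, the 4-bit NF4 row in the table above corresponds roughly to loading with a `BitsAndBytesConfig`. A minimal sketch; the quantization settings are assumptions consistent with this card, not a tested recipe:

```python
# Hedged sketch: 4-bit NF4 loading with bitsandbytes via transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NF4 quantization as listed above
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches the BFloat16 training precision
)

model_id = "AhmedSSoliman/gpt-oss-20b-digital-twin-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```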

## 📊 Example Output

**Input:**
```
A 45-year-old male presents with chest pain radiating to the left arm,
sweating, and shortness of breath for 30 minutes. Assessment?
```

**Output:**
```
<think>
This presentation is highly concerning for acute coronary syndrome (ACS),
specifically myocardial infarction (MI). Key clinical features:

1. Typical angina pattern: chest pain with left arm radiation
2. Associated symptoms: diaphoresis (sweating), dyspnea
3. Duration: 30 minutes (prolonged, not fleeting)
4. Age and gender: 45-year-old male (at-risk demographic)

Differential diagnoses to consider:
- STEMI (ST-elevation MI) - most critical
- NSTEMI (Non-ST-elevation MI)
- Unstable angina
- Less likely: pulmonary embolism, aortic dissection

This is a time-sensitive emergency requiring immediate medical intervention.
The "golden hour" for reperfusion therapy is critical for outcomes.
</think>

⚠️ **EMERGENCY - CALL 911 IMMEDIATELY**

This patient is experiencing a potential **heart attack** (myocardial infarction).

**Immediate Actions:**
1. Call 911 or activate emergency medical services NOW
2. Have patient sit or lie down, keep calm
3. If aspirin available: Give 325mg chewable aspirin (if no allergies)
4. Do NOT drive to hospital - wait for ambulance with cardiac monitoring
5. If unconscious: Begin CPR, use AED if available

**Emergency Department Will:**
- 12-lead ECG within 10 minutes
- Cardiac biomarkers (troponin, CK-MB)
- Oxygen, IV access, cardiac monitoring
- Consider reperfusion therapy (PCI or thrombolytics)

**Time is muscle** - every minute counts in preserving heart tissue.
```

## ⚠️ Important Disclaimers

### Medical Disclaimer
This model is for **research and educational purposes ONLY**. It is:

- ❌ **NOT** a substitute for professional medical advice
- ❌ **NOT** validated for clinical use or diagnosis
- ❌ **NOT** approved by any medical regulatory body
- ❌ **NOT** suitable for emergency medical decisions

**Always consult qualified healthcare providers for medical decisions.**

### Limitations

1. **Training Data Constraints**:
   - Limited to 500 training examples
   - May not cover all medical specialties equally
   - English-language only

2. **Potential Biases**:
   - Dataset may contain inherent biases
   - Not tested across diverse patient populations
   - Limited real-world clinical validation

3. **Technical Limitations**:
   - Cannot access patient records or perform examinations
   - No integration with medical databases or guidelines
   - May generate plausible but incorrect information

4. **Safety Considerations**:
   - Should not be used for triage or diagnosis
   - May miss critical symptoms or contraindications
   - Requires human medical oversight

## 🔬 Evaluation Metrics

Performance on held-out test cases:

| Metric | Score | Description |
|--------|-------|-------------|
| Format Compliance | 95% | Uses `<think>` tags consistently |
| Semantic Accuracy | 85% | Cosine similarity to expert answers |
| Safety Referrals | 92% | Recommends professional care when appropriate |
| Response Length | 600 tokens | Balanced detail without verbosity |
| Reasoning Depth | 150 words avg | Sufficient clinical analysis |

## 🚀 Deployment

### Web Interface (Gradio)

```bash
# Clone repository
git clone https://github.com/AhmedSSoliman/medical-digital-twin.git
cd medical-digital-twin

# Run web interface
python chat_gpt_oss_20b.py

# With authentication
python chat_gpt_oss_20b.py --auth admin:password

# Public sharing
python chat_gpt_oss_20b.py  # Creates shareable link
```
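
The repository script itself is not reproduced here. If you want a self-contained starting point, a minimal chat UI along the following lines would work; this is a hedged sketch using Gradio's `ChatInterface`, not the actual `chat_gpt_oss_20b.py`:

```python
# Hedged sketch: a minimal Gradio chat UI around the model (not the repository's chat_gpt_oss_20b.py).
import gradio as gr
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="AhmedSSoliman/gpt-oss-20b-digital-twin-v1",
    max_seq_length=4096,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)

SYSTEM_PROMPT = (
    "You are a Medical Digital Twin AI.\n"
    "Step 1: Analyze within <think> tags with detailed clinical reasoning.\n"
    "Step 2: Provide a clear, actionable response."
)

def respond(message, history):
    # For simplicity this sketch answers the latest message only, ignoring prior turns
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": message},
    ]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=1024, temperature=0.6, top_p=0.9, do_sample=True)
    return tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True)

# share=True creates a public link; auth=("admin", "password") adds a login, mirroring the CLI flags above
gr.ChatInterface(respond).launch()
```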

### REST API (Example with FastAPI)

```python
from fastapi import FastAPI
from unsloth import FastLanguageModel

app = FastAPI()
model, tokenizer = FastLanguageModel.from_pretrained("AhmedSSoliman/gpt-oss-20b-digital-twin-v1")
FastLanguageModel.for_inference(model)

@app.post("/generate")
async def generate(query: str):
    # Build a chat prompt from the query and generate a response
    messages = [{"role": "user", "content": query}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=1024, temperature=0.6, do_sample=True)
    response = tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True)
    return {"response": response}
```

## 📚 Citation

If you use this model in your research, please cite:

```bibtex
@misc{gpt-oss-medical-twin-2024,
  author = {Ahmed S. Soliman},
  title = {GPT-OSS-20B Medical Digital Twin v1},
  year = {2024},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/AhmedSSoliman/gpt-oss-20b-digital-twin-v1}},
  note = {Medical AI trained with GRPO on clinical reasoning tasks}
}
```

## 🙏 Acknowledgments

- **OpenAI** for the GPT-OSS-20B base model
- **Unsloth AI** for training optimizations and memory efficiency
- **FreedomIntelligence** for the medical reasoning dataset
- **TRL Library** for GRPO implementation
- **Sentence Transformers** for semantic evaluation

## 📄 License

This model inherits the Apache 2.0 license from GPT-OSS-20B.

**Additional Terms**:
- Must include medical disclaimer when deployed
- Not for commercial diagnostic use without proper medical oversight
- Derivative works must maintain safety warnings

## 🔗 Links

- **GitHub Repository**: [https://github.com/AhmedSSoliman/medical-digital-twin](https://github.com/AhmedSSoliman/medical-digital-twin)
- **Training Notebook**: Available in repository
- **Base Model**: [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b)
- **Dataset**: [FreedomIntelligence/medical-o1-reasoning-SFT](https://huggingface.co/datasets/FreedomIntelligence/medical-o1-reasoning-SFT)
- **Paper**: Coming soon

## 📧 Contact

For questions, issues, or collaborations:
- **Author**: Ahmed S. Soliman
- **GitHub**: [@AhmedSSoliman](https://github.com/AhmedSSoliman)
- **Email**: Contact via GitHub

---

**Version**: 1.0
**Last Updated**: December 8, 2024
**Model Size**: 20B parameters (LoRA adapters: ~250MB)
**Training Compute**: ~1,200 A100 GPU hours
adapter_config.json ADDED
@@ -0,0 +1,50 @@
{
  "alora_invocation_tokens": null,
  "alpha_pattern": {},
  "arrow_config": null,
  "auto_mapping": {
    "base_model_class": "GptOssForCausalLM",
    "parent_library": "transformers.models.gpt_oss.modeling_gpt_oss",
    "unsloth_fixed": true
  },
  "base_model_name_or_path": "unsloth/gpt-oss-20b-unsloth-bnb-4bit",
  "bias": "none",
  "corda_config": null,
  "ensure_weight_tying": false,
  "eva_config": null,
  "exclude_modules": null,
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": true,
  "layer_replication": null,
  "layers_pattern": null,
  "layers_to_transform": null,
  "loftq_config": {},
  "lora_alpha": 64,
  "lora_bias": false,
  "lora_dropout": 0.05,
  "megatron_config": null,
  "megatron_core": "megatron.core",
  "modules_to_save": null,
  "peft_type": "LORA",
  "peft_version": "0.18.0",
  "qalora_group_size": 16,
  "r": 64,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
    "v_proj",
    "k_proj",
    "gate_proj",
    "o_proj",
    "down_proj",
    "up_proj",
    "q_proj"
  ],
  "target_parameters": null,
  "task_type": "CAUSAL_LM",
  "trainable_token_indices": null,
  "use_dora": false,
  "use_qalora": false,
  "use_rslora": false
}
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8db385c6193f6b8947148a96dfe07d9235c34c9a99885dadb63643ec5df162ed
size 127427864
chat_template.jinja ADDED
@@ -0,0 +1,315 @@
1
+ {# Copyright 2025-present Unsloth. Apache 2.0 License. Unsloth chat template fixes. Edited from ggml-org & OpenAI #}
2
+ {#-
3
+ In addition to the normal inputs of `messages` and `tools`, this template also accepts the
4
+ following kwargs:
5
+ - "builtin_tools": A list, can contain "browser" and/or "python".
6
+ - "model_identity": A string that optionally describes the model identity.
7
+ - "reasoning_effort": A string that describes the reasoning effort, defaults to "medium".
8
+ #}
9
+
10
+ {#- Tool Definition Rendering ============================================== #}
11
+ {%- macro render_typescript_type(param_spec, required_params, is_nullable=false) -%}
12
+ {%- if param_spec.type == "array" -%}
13
+ {%- if param_spec['items'] -%}
14
+ {%- if param_spec['items']['type'] == "string" -%}
15
+ {{- "string[]" }}
16
+ {%- elif param_spec['items']['type'] == "number" -%}
17
+ {{- "number[]" }}
18
+ {%- elif param_spec['items']['type'] == "integer" -%}
19
+ {{- "number[]" }}
20
+ {%- elif param_spec['items']['type'] == "boolean" -%}
21
+ {{- "boolean[]" }}
22
+ {%- else -%}
23
+ {%- set inner_type = render_typescript_type(param_spec['items'], required_params) -%}
24
+ {%- if inner_type == "object | object" or inner_type|length > 50 -%}
25
+ {{- "any[]" }}
26
+ {%- else -%}
27
+ {{- inner_type + "[]" }}
28
+ {%- endif -%}
29
+ {%- endif -%}
30
+ {%- if param_spec.nullable -%}
31
+ {{- " | null" }}
32
+ {%- endif -%}
33
+ {%- else -%}
34
+ {{- "any[]" }}
35
+ {%- if param_spec.nullable -%}
36
+ {{- " | null" }}
37
+ {%- endif -%}
38
+ {%- endif -%}
39
+ {%- elif param_spec.type is defined and param_spec.type is iterable and param_spec.type is not string and param_spec.type is not mapping and param_spec.type[0] is defined -%}
40
+ {#- Handle array of types like ["object", "object"] from Union[dict, list] #}
41
+ {%- if param_spec.type | length > 1 -%}
42
+ {{- param_spec.type | join(" | ") }}
43
+ {%- else -%}
44
+ {{- param_spec.type[0] }}
45
+ {%- endif -%}
46
+ {%- elif param_spec.oneOf -%}
47
+ {#- Handle oneOf schemas - check for complex unions and fallback to any #}
48
+ {%- set has_object_variants = false -%}
49
+ {%- for variant in param_spec.oneOf -%}
50
+ {%- if variant.type == "object" -%}
51
+ {%- set has_object_variants = true -%}
52
+ {%- endif -%}
53
+ {%- endfor -%}
54
+ {%- if has_object_variants and param_spec.oneOf|length > 1 -%}
55
+ {{- "any" }}
56
+ {%- else -%}
57
+ {%- for variant in param_spec.oneOf -%}
58
+ {{- render_typescript_type(variant, required_params) -}}
59
+ {%- if variant.description %}
60
+ {{- "// " + variant.description }}
61
+ {%- endif -%}
62
+ {%- if variant.default is defined %}
63
+ {{ "// default: " + variant.default|tojson }}
64
+ {%- endif -%}
65
+ {%- if not loop.last %}
66
+ {{- " | " }}
67
+ {% endif -%}
68
+ {%- endfor -%}
69
+ {%- endif -%}
70
+ {%- elif param_spec.type == "string" -%}
71
+ {%- if param_spec.enum -%}
72
+ {{- '"' + param_spec.enum|join('" | "') + '"' -}}
73
+ {%- else -%}
74
+ {{- "string" }}
75
+ {%- if param_spec.nullable %}
76
+ {{- " | null" }}
77
+ {%- endif -%}
78
+ {%- endif -%}
79
+ {%- elif param_spec.type == "number" -%}
80
+ {{- "number" }}
81
+ {%- elif param_spec.type == "integer" -%}
82
+ {{- "number" }}
83
+ {%- elif param_spec.type == "boolean" -%}
84
+ {{- "boolean" }}
85
+
86
+ {%- elif param_spec.type == "object" -%}
87
+ {%- if param_spec.properties -%}
88
+ {{- "{\n" }}
89
+ {%- for prop_name, prop_spec in param_spec.properties.items() -%}
90
+ {{- prop_name -}}
91
+ {%- if prop_name not in (param_spec.required or []) -%}
92
+ {{- "?" }}
93
+ {%- endif -%}
94
+ {{- ": " }}
95
+ {{ render_typescript_type(prop_spec, param_spec.required or []) }}
96
+ {%- if not loop.last -%}
97
+ {{-", " }}
98
+ {%- endif -%}
99
+ {%- endfor -%}
100
+ {{- "}" }}
101
+ {%- else -%}
102
+ {{- "object" }}
103
+ {%- endif -%}
104
+ {%- else -%}
105
+ {{- "any" }}
106
+ {%- endif -%}
107
+ {%- endmacro -%}
108
+
109
+ {%- macro render_tool_namespace(namespace_name, tools) -%}
110
+ {{- "## " + namespace_name + "\n\n" }}
111
+ {{- "namespace " + namespace_name + " {\n\n" }}
112
+ {%- for tool in tools %}
113
+ {%- set tool = tool.function %}
114
+ {{- "// " + tool.description + "\n" }}
115
+ {{- "type "+ tool.name + " = " }}
116
+ {%- if tool.parameters and tool.parameters.properties -%}
117
+ {{- "(_: " }}
118
+ {{- "{\n" }}
119
+ {%- for param_name, param_spec in tool.parameters.properties.items() %}
120
+ {{- "// " + param_spec.description + "\n" }}
121
+ {{- param_name }}
122
+ {%- if param_name not in (tool.parameters.required or []) -%}
123
+ {{- "?" }}
124
+ {%- endif -%}
125
+ {{- ": " }}
126
+ {{- render_typescript_type(param_spec, tool.parameters.required or []) }}
127
+ {%- if param_spec.default is defined -%}
128
+ {%- if param_spec.enum %}
129
+ {{- ", // default: " + param_spec.default }}
130
+ {%- elif param_spec.oneOf %}
131
+ {{- "// default: " + param_spec.default }}
132
+ {%- else %}
133
+ {{- ", // default: " + param_spec.default|tojson }}
134
+ {%- endif -%}
135
+ {%- endif -%}
136
+ {%- if not loop.last %}
137
+ {{- ",\n" }}
138
+ {%- else %}
139
+ {{- "\n" }}
140
+ {%- endif -%}
141
+ {%- endfor %}
142
+ {{- "}) => any;\n\n" }}
143
+ {%- else -%}
144
+ {{- "() => any;\n\n" }}
145
+ {%- endif -%}
146
+ {%- endfor %}
147
+ {{- "} // namespace " + namespace_name }}
148
+ {%- endmacro -%}
149
+
150
+ {%- macro render_builtin_tools(browser_tool, python_tool) -%}
151
+ {%- if browser_tool %}
152
+ {{- "## browser\n\n" }}
153
+ {{- "// Tool for browsing.\n" }}
154
+ {{- "// The `cursor` appears in brackets before each browsing display: `[{cursor}]`.\n" }}
155
+ {{- "// Cite information from the tool using the following format:\n" }}
156
+ {{- "// `【{cursor}†L{line_start}(-L{line_end})?】`, for example: `【6†L9-L11】` or `【8†L3】`.\n" }}
157
+ {{- "// Do not quote more than 10 words directly from the tool output.\n" }}
158
+ {{- "// sources=web (default: web)\n" }}
159
+ {{- "namespace browser {\n\n" }}
160
+ {{- "// Searches for information related to `query` and displays `topn` results.\n" }}
161
+ {{- "type search = (_: {\n" }}
162
+ {{- "query: string,\n" }}
163
+ {{- "topn?: number, // default: 10\n" }}
164
+ {{- "source?: string,\n" }}
165
+ {{- "}) => any;\n\n" }}
166
+ {{- "// Opens the link `id` from the page indicated by `cursor` starting at line number `loc`, showing `num_lines` lines.\n" }}
167
+ {{- "// Valid link ids are displayed with the formatting: `【{id}†.*】`.\n" }}
168
+ {{- "// If `cursor` is not provided, the most recent page is implied.\n" }}
169
+ {{- "// If `id` is a string, it is treated as a fully qualified URL associated with `source`.\n" }}
170
+ {{- "// If `loc` is not provided, the viewport will be positioned at the beginning of the document or centered on the most relevant passage, if available.\n" }}
171
+ {{- "// Use this function without `id` to scroll to a new location of an opened page.\n" }}
172
+ {{- "type open = (_: {\n" }}
173
+ {{- "id?: number | string, // default: -1\n" }}
174
+ {{- "cursor?: number, // default: -1\n" }}
175
+ {{- "loc?: number, // default: -1\n" }}
176
+ {{- "num_lines?: number, // default: -1\n" }}
177
+ {{- "view_source?: boolean, // default: false\n" }}
178
+ {{- "source?: string,\n" }}
179
+ {{- "}) => any;\n\n" }}
180
+ {{- "// Finds exact matches of `pattern` in the current page, or the page given by `cursor`.\n" }}
181
+ {{- "type find = (_: {\n" }}
182
+ {{- "pattern: string,\n" }}
183
+ {{- "cursor?: number, // default: -1\n" }}
184
+ {{- "}) => any;\n\n" }}
185
+ {{- "} // namespace browser\n\n" }}
186
+ {%- endif -%}
187
+
188
+ {%- if python_tool %}
189
+ {{- "## python\n\n" }}
190
+ {{- "Use this tool to execute Python code in your chain of thought. The code will not be shown to the user. This tool should be used for internal reasoning, but not for code that is intended to be visible to the user (e.g. when creating plots, tables, or files).\n\n" }}
191
+ {{- "When you send a message containing Python code to python, it will be executed in a stateful Jupyter notebook environment. python will respond with the output of the execution or time out after 120.0 seconds. The drive at '/mnt/data' can be used to save and persist user files. Internet access for this session is UNKNOWN. Depends on the cluster.\n\n" }}
192
+ {%- endif -%}
193
+ {%- endmacro -%}
194
+
195
+ {#- System Message Construction ============================================ #}
196
+ {%- macro build_system_message() -%}
197
+ {%- if model_identity is not defined %}
198
+ {{- "You are ChatGPT, a large language model trained by OpenAI.\n" -}}
199
+ {%- else %}
200
+ {{- model_identity }}
201
+ {%- endif %}
202
+ {{- "Knowledge cutoff: 2024-06\n" }}
203
+ {{- "Current date: " + strftime_now("%Y-%m-%d") + "\n\n" }}
204
+ {%- if reasoning_effort is not defined %}
205
+ {%- set reasoning_effort = "medium" %}
206
+ {%- endif %}
207
+ {{- "Reasoning: " + reasoning_effort + "\n\n" }}
208
+ {%- if builtin_tools is defined %}
209
+ {{- "# Tools\n\n" }}
210
+ {%- set available_builtin_tools = namespace(browser=false, python=false) %}
211
+ {%- for tool in builtin_tools %}
212
+ {%- if tool == "browser" %}
213
+ {%- set available_builtin_tools.browser = true %}
214
+ {%- elif tool == "python" %}
215
+ {%- set available_builtin_tools.python = true %}
216
+ {%- endif %}
217
+ {%- endfor %}
218
+ {{- render_builtin_tools(available_builtin_tools.browser, available_builtin_tools.python) }}
219
+ {%- endif -%}
220
+ {{- "# Valid channels: analysis, commentary, final. Channel must be included for every message." }}
221
+ {%- if tools is defined -%}
222
+ {{- "\nCalls to these tools must go to the commentary channel: 'functions'." }}
223
+ {%- endif -%}
224
+ {%- endmacro -%}
225
+
226
+ {#- Main Template Logic ================================================= #}
227
+ {#- Set defaults #}
228
+
229
+ {#- Render system message #}
230
+ {{- "<|start|>system<|message|>" }}
231
+ {{- build_system_message() }}
232
+ {{- "<|end|>" }}
233
+
234
+ {#- Extract developer message #}
235
+ {%- if messages[0].role == "developer" or messages[0].role == "system" %}
236
+ {%- set developer_message = messages[0].content %}
237
+ {%- set loop_messages = messages[1:] %}
238
+ {%- else %}
239
+ {%- set developer_message = "" %}
240
+ {%- set loop_messages = messages %}
241
+ {%- endif %}
242
+
243
+ {#- Render developer message #}
244
+ {%- if developer_message or tools %}
245
+ {{- "<|start|>developer<|message|>" }}
246
+ {%- if developer_message %}
247
+ {{- "# Instructions\n\n" }}
248
+ {{- developer_message }}
249
+ {%- endif %}
250
+ {%- if tools -%}
251
+ {{- "\n\n" }}
252
+ {{- "# Tools\n\n" }}
253
+ {{- render_tool_namespace("functions", tools) }}
254
+ {%- endif -%}
255
+ {{- "<|end|>" }}
256
+ {%- endif %}
257
+
258
+ {#- Render messages #}
259
+ {%- set last_tool_call = namespace(name=none) %}
260
+ {%- for message in loop_messages -%}
261
+ {#- At this point only assistant/user/tool messages should remain #}
262
+ {%- if message.role == 'assistant' -%}
263
+ {%- if "tool_calls" in message %}
264
+ {#- We assume max 1 tool call per message, and so we infer the tool call name #}
265
+ {#- in "tool" messages from the most recent assistant tool call name #}
266
+ {%- set tool_call = message.tool_calls[0] %}
267
+ {%- if tool_call.function %}
268
+ {%- set tool_call = tool_call.function %}
269
+ {%- endif %}
270
+ {%- if message.content %}
271
+ {{- "<|start|>assistant<|channel|>analysis<|message|>" + message.content + "<|end|>" }}
272
+ {%- endif %}
273
+ {{- "<|start|>assistant to=" }}
274
+ {{- "functions." + tool_call.name + "<|channel|>commentary json<|message|>" }}
275
+ {{- tool_call.arguments|tojson }}
276
+ {{- "<|call|>" }}
277
+ {%- set last_tool_call.name = tool_call.name %}
278
+ {%- elif "thinking" in message and loop.last and not add_generation_prompt %}
279
+ {#- Only render the CoT if the final turn is an assistant turn and add_generation_prompt is false #}
280
+ {#- This is a situation that should only occur in training, never in inference. #}
281
+ {{- "<|start|>assistant<|channel|>analysis<|message|>" + message.thinking + "<|end|>" }}
282
+ {#- <|return|> indicates the end of generation, but <|end|> does not #}
283
+ {#- <|return|> should never be an input to the model, but we include it as the final token #}
284
+ {#- when training, so the model learns to emit it. #}
285
+ {{- "<|start|>assistant<|channel|>final<|message|>" + message.content + "<|return|>" }}
286
+ {%- set last_tool_call.name = none %}
287
+ {%- elif "thinking" in message %}
288
+ {#- CoT is dropped during all previous turns, so we never render it for inference #}
289
+ {{- "<|start|>assistant<|channel|>final<|message|>" + message.content + "<|end|>" }}
290
+ {%- set last_tool_call.name = none %}
291
+ {%- elif loop.last and not add_generation_prompt %}
292
+ {#- <|return|> indicates the end of generation, but <|end|> does not #}
293
+ {#- <|return|> should never be an input to the model, but we include it as the final token #}
294
+ {#- when training, so the model learns to emit it. #}
295
+ {{- "<|start|>assistant<|message|>" + message.content + "<|return|>" }}
296
+ {%- else %}
297
+ {{- "<|start|>assistant<|message|>" + message.content + "<|end|>" }}
298
+ {%- set last_tool_call.name = none %}
299
+ {%- endif %}
300
+ {%- elif message.role == 'tool' -%}
301
+ {%- if last_tool_call.name is none %}
302
+ {{- raise_exception("Message has tool role, but there was no previous assistant message with a tool call!") }}
303
+ {%- endif %}
304
+ {{- "<|start|>functions." + last_tool_call.name }}
305
+ {{- " to=assistant<|channel|>commentary<|message|>" + message.content|tojson + "<|end|>" }}
306
+ {%- else -%}
307
+ {{- "<|start|>user<|message|>" + message.content + "<|end|>" }}
308
+ {%- endif -%}
309
+ {%- endfor -%}
310
+
311
+ {#- Generation prompt #}
312
+ {%- if add_generation_prompt -%}
313
+ <|start|>assistant
314
+ {%- endif -%}
315
+ {# Copyright 2025-present Unsloth. Apache 2.0 License. Unsloth chat template fixes. Edited from ggml-org & OpenAI #}
optimizer.pt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:352be8a7bb645d78675684c9843b59af185ee891409cf0855facce0ccad1d3c7
size 64923339
rng_state.pth ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:09d95e60077111b0cb0cff1cb7288c24e39f8a930b18792b701f79f9ec9c929e
size 14645
scheduler.pt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4342bfbdcf7be6c20da02c1bcfde05bb21e5d9d6da96498b67e10419cb29d694
size 1465
special_tokens_map.json ADDED
@@ -0,0 +1,23 @@
{
  "bos_token": {
    "content": "<|startoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|return|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|reserved_200017|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d25db1c958bfc004e0d6abbe489e45aa92408c32bfed318ed4f73591c9a9efca
size 27868446
tokenizer_config.json ADDED
@@ -0,0 +1,185 @@
1
+ {
2
+ "added_tokens_decoder": {
3
+ "199998": {
4
+ "content": "<|startoftext|>",
5
+ "lstrip": false,
6
+ "normalized": false,
7
+ "rstrip": false,
8
+ "single_word": false,
9
+ "special": true
10
+ },
11
+ "199999": {
12
+ "content": "<|endoftext|>",
13
+ "lstrip": false,
14
+ "normalized": false,
15
+ "rstrip": false,
16
+ "single_word": false,
17
+ "special": true
18
+ },
19
+ "200000": {
20
+ "content": "<|reserved_200000|>",
21
+ "lstrip": false,
22
+ "normalized": false,
23
+ "rstrip": false,
24
+ "single_word": false,
25
+ "special": true
26
+ },
27
+ "200001": {
28
+ "content": "<|reserved_200001|>",
29
+ "lstrip": false,
30
+ "normalized": false,
31
+ "rstrip": false,
32
+ "single_word": false,
33
+ "special": true
34
+ },
35
+ "200002": {
36
+ "content": "<|return|>",
37
+ "lstrip": false,
38
+ "normalized": false,
39
+ "rstrip": false,
40
+ "single_word": false,
41
+ "special": true
42
+ },
43
+ "200003": {
44
+ "content": "<|constrain|>",
45
+ "lstrip": false,
46
+ "normalized": false,
47
+ "rstrip": false,
48
+ "single_word": false,
49
+ "special": true
50
+ },
51
+ "200004": {
52
+ "content": "<|reserved_200004|>",
53
+ "lstrip": false,
54
+ "normalized": false,
55
+ "rstrip": false,
56
+ "single_word": false,
57
+ "special": true
58
+ },
59
+ "200005": {
60
+ "content": "<|channel|>",
61
+ "lstrip": false,
62
+ "normalized": false,
63
+ "rstrip": false,
64
+ "single_word": false,
65
+ "special": true
66
+ },
67
+ "200006": {
68
+ "content": "<|start|>",
69
+ "lstrip": false,
70
+ "normalized": false,
71
+ "rstrip": false,
72
+ "single_word": false,
73
+ "special": true
74
+ },
75
+ "200007": {
76
+ "content": "<|end|>",
77
+ "lstrip": false,
78
+ "normalized": false,
79
+ "rstrip": false,
80
+ "single_word": false,
81
+ "special": true
82
+ },
83
+ "200008": {
84
+ "content": "<|message|>",
85
+ "lstrip": false,
86
+ "normalized": false,
87
+ "rstrip": false,
88
+ "single_word": false,
89
+ "special": true
90
+ },
91
+ "200009": {
92
+ "content": "<|reserved_200009|>",
93
+ "lstrip": false,
94
+ "normalized": false,
95
+ "rstrip": false,
96
+ "single_word": false,
97
+ "special": true
98
+ },
99
+ "200010": {
100
+ "content": "<|reserved_200010|>",
101
+ "lstrip": false,
102
+ "normalized": false,
103
+ "rstrip": false,
104
+ "single_word": false,
105
+ "special": true
106
+ },
107
+ "200011": {
108
+ "content": "<|reserved_200011|>",
109
+ "lstrip": false,
110
+ "normalized": false,
111
+ "rstrip": false,
112
+ "single_word": false,
113
+ "special": true
114
+ },
115
+ "200012": {
116
+ "content": "<|call|>",
117
+ "lstrip": false,
118
+ "normalized": false,
119
+ "rstrip": false,
120
+ "single_word": false,
121
+ "special": true
122
+ },
123
+ "200013": {
124
+ "content": "<|reserved_200013|>",
125
+ "lstrip": false,
126
+ "normalized": false,
127
+ "rstrip": false,
128
+ "single_word": false,
129
+ "special": true
130
+ },
131
+ "200014": {
132
+ "content": "<|reserved_200014|>",
133
+ "lstrip": false,
134
+ "normalized": false,
135
+ "rstrip": false,
136
+ "single_word": false,
137
+ "special": true
138
+ },
139
+ "200015": {
140
+ "content": "<|reserved_200015|>",
141
+ "lstrip": false,
142
+ "normalized": false,
143
+ "rstrip": false,
144
+ "single_word": false,
145
+ "special": true
146
+ },
147
+ "200016": {
148
+ "content": "<|reserved_200016|>",
149
+ "lstrip": false,
150
+ "normalized": false,
151
+ "rstrip": false,
152
+ "single_word": false,
153
+ "special": true
154
+ },
155
+ "200017": {
156
+ "content": "<|reserved_200017|>",
157
+ "lstrip": false,
158
+ "normalized": false,
159
+ "rstrip": false,
160
+ "single_word": false,
161
+ "special": true
162
+ },
163
+ "200018": {
164
+ "content": "<|endofprompt|>",
165
+ "lstrip": false,
166
+ "normalized": false,
167
+ "rstrip": false,
168
+ "single_word": false,
169
+ "special": true
170
+ }
171
+ },
172
+ "bos_token": "<|startoftext|>",
173
+ "clean_up_tokenization_spaces": false,
174
+ "eos_token": "<|return|>",
175
+ "extra_special_tokens": {},
176
+ "model_input_names": [
177
+ "input_ids",
178
+ "attention_mask"
179
+ ],
180
+ "model_max_length": 131072,
181
+ "pad_token": "<|reserved_200017|>",
182
+ "padding_side": "right",
183
+ "tokenizer_class": "PreTrainedTokenizerFast",
184
+ "unk_token": null
185
+ }
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff
 
training_args.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d8ffff89a0592d00fb226bf9abbd274e07144c72f92b6e06093910f4c7e71ef7
size 7249