---
library_name: peft
license: mit
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
tags:
- generated_from_trainer
model-index:
- name: DeepSeek-R1-Distill-Qwen-1.5B-2-contract-sections-classification-v4-10
  results: []
---

[Visualize in Weights & Biases](https://wandb.ai/mvgdr/classificacao-secoes-contratos-v4-deepseek-r1-distil-qwen/runs/eb6kb40u)

# DeepSeek-R1-Distill-Qwen-1.5B-2-contract-sections-classification-v4-10

This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5912
- Accuracy Evaluate: 0.2065
- Precision Evaluate: 0.2225
- Recall Evaluate: 0.2011
- F1 Evaluate: 0.2053
- Accuracy Sklearn: 0.2065
- Precision Sklearn: 0.2227
- Recall Sklearn: 0.2065
- F1 Sklearn: 0.2079
- Acuracia Rotulo Objeto: 0.2355
- Acuracia Rotulo Obrigacoes: 0.2290
- Acuracia Rotulo Valor: 0.2808
- Acuracia Rotulo Vigencia: 0.1969
- Acuracia Rotulo Rescisao: 0.1939
- Acuracia Rotulo Foro: 0.1538
- Acuracia Rotulo Reajuste: 0.1886
- Acuracia Rotulo Fiscalizacao: 0.0662
- Acuracia Rotulo Publicacao: 0.3892
- Acuracia Rotulo Pagamento: 0.0616
- Acuracia Rotulo Casos Omissos: 0.5616
- Acuracia Rotulo Sancoes: 0.0183
- Acuracia Rotulo Dotacao Orcamentaria: 0.0385

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy Evaluate | Precision Evaluate | Recall Evaluate | F1 Evaluate | Accuracy Sklearn | Precision Sklearn | Recall Sklearn | F1 Sklearn | Acuracia Rotulo Objeto | Acuracia Rotulo Obrigacoes | Acuracia Rotulo Valor | Acuracia Rotulo Vigencia | Acuracia Rotulo Rescisao | Acuracia Rotulo Foro | Acuracia Rotulo Reajuste | Acuracia Rotulo Fiscalizacao | Acuracia Rotulo Publicacao | Acuracia Rotulo Pagamento | Acuracia Rotulo Casos Omissos | Acuracia Rotulo Sancoes | Acuracia Rotulo Dotacao Orcamentaria |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|:------------------:|:---------------:|:-----------:|:----------------:|:-----------------:|:--------------:|:----------:|:----------------------:|:--------------------------:|:---------------------:|:------------------------:|:------------------------:|:--------------------:|:------------------------:|:----------------------------:|:--------------------------:|:-------------------------:|:-----------------------------:|:-----------------------:|:------------------------------------:|
| 3.7843 | 1.0 | 1000 | 3.6004 | 0.0655 | 0.0875 | 0.0674 | 0.0486 | 0.0655 | 0.1125 | 0.0655 | 0.0557 | 0.0744 | 0.0337 | 0.1203 | 0.0394 | 0.0388 | 0.2923 | 0.0890 | 0.0032 | 0.0 | 0.0 | 0.0 | 0.0092 | 0.1758 |
| 3.235 | 2.0 | 2000 | 3.2248 | 0.0762 | 0.0869 | 0.0680 | 0.0601 | 0.0762 | 0.1029 | 0.0762 | 0.0721 | 0.1178 | 0.0825 | 0.1662 | 0.0787 | 0.0499 | 0.1731 | 0.0854 | 0.0063 | 0.0148 | 0.0109 | 0.0049 | 0.0275 | 0.0659 |
| 2.8811 | 3.0 | 3000 | 3.0158 | 0.125 | 0.1463 | 0.1190 | 0.1212 | 0.125 | 0.1467 | 0.125 | 0.1250 | 0.1426 | 0.1650 | 0.2407 | 0.1601 | 0.0471 | 0.1308 | 0.0569 | 0.0063 | 0.0148 | 0.0145 | 0.5074 | 0.0275 | 0.0330 |
| 2.681 | 4.0 | 4000 | 2.8826 | 0.1635 | 0.1933 | 0.1554 | 0.1597 | 0.1635 | 0.1936 | 0.1635 | 0.1638 | 0.1446 | 0.2323 | 0.2951 | 0.1601 | 0.1330 | 0.1077 | 0.1210 | 0.0568 | 0.1527 | 0.0145 | 0.5419 | 0.0275 | 0.0330 |
| 2.495 | 5.0 | 5000 | 2.7849 | 0.1777 | 0.1958 | 0.1675 | 0.1711 | 0.1777 | 0.1985 | 0.1777 | 0.1774 | 0.1612 | 0.2525 | 0.2980 | 0.1654 | 0.1801 | 0.1038 | 0.1423 | 0.0631 | 0.2020 | 0.0217 | 0.5419 | 0.0183 | 0.0275 |
| 2.3573 | 6.0 | 6000 | 2.7106 | 0.197 | 0.2106 | 0.1877 | 0.1907 | 0.197 | 0.2128 | 0.197 | 0.1963 | 0.2211 | 0.2576 | 0.2865 | 0.1680 | 0.1967 | 0.1077 | 0.1708 | 0.0662 | 0.3202 | 0.0326 | 0.5616 | 0.0183 | 0.0330 |
| 2.2544 | 7.0 | 7000 | 2.6566 | 0.201 | 0.2131 | 0.1950 | 0.1969 | 0.201 | 0.2139 | 0.201 | 0.2001 | 0.2231 | 0.2441 | 0.2865 | 0.1601 | 0.1911 | 0.1308 | 0.1851 | 0.0662 | 0.3842 | 0.0507 | 0.5616 | 0.0183 | 0.0330 |
| 2.198 | 8.0 | 8000 | 2.6198 | 0.2015 | 0.2157 | 0.1961 | 0.1994 | 0.2015 | 0.2156 | 0.2015 | 0.2019 | 0.2335 | 0.2357 | 0.2865 | 0.1549 | 0.1884 | 0.1423 | 0.1851 | 0.0662 | 0.3842 | 0.0543 | 0.5616 | 0.0183 | 0.0385 |
| 2.1718 | 9.0 | 9000 | 2.5984 | 0.206 | 0.2216 | 0.2004 | 0.2046 | 0.206 | 0.2219 | 0.206 | 0.2073 | 0.2335 | 0.2290 | 0.2808 | 0.2021 | 0.1939 | 0.15 | 0.1886 | 0.0662 | 0.3842 | 0.0580 | 0.5616 | 0.0183 | 0.0385 |
| 2.1267 | 10.0 | 10000 | 2.5912 | 0.2065 | 0.2225 | 0.2011 | 0.2053 | 0.2065 | 0.2227 | 0.2065 | 0.2079 | 0.2355 | 0.2290 | 0.2808 | 0.1969 | 0.1939 | 0.1538 | 0.1886 | 0.0662 | 0.3892 | 0.0616 | 0.5616 | 0.0183 | 0.0385 |

### Framework versions

- PEFT 0.14.0
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.3.0
- Tokenizers 0.21.0
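The two metric families above are related: reading each "Acuracia Rotulo X" (per-label accuracy) as the accuracy restricted to examples whose true label is X, i.e. per-class recall, their unweighted mean reproduces the reported macro "Recall Evaluate" of 0.2011. A minimal sketch checking this, using the final-epoch numbers from the card (the per-class-recall reading is an assumption, not stated in the card):

```python
# Per-label accuracies from the evaluation set (final epoch).
# Assumption: each value is the recall of that class, so the macro
# recall ("Recall Evaluate") should be their unweighted mean.
per_label = {
    "Objeto": 0.2355, "Obrigacoes": 0.2290, "Valor": 0.2808,
    "Vigencia": 0.1969, "Rescisao": 0.1939, "Foro": 0.1538,
    "Reajuste": 0.1886, "Fiscalizacao": 0.0662, "Publicacao": 0.3892,
    "Pagamento": 0.0616, "Casos Omissos": 0.5616, "Sancoes": 0.0183,
    "Dotacao Orcamentaria": 0.0385,
}

macro_recall = sum(per_label.values()) / len(per_label)
print(round(macro_recall, 4))  # 0.2011, matching "Recall Evaluate"
```

The "Sklearn" family, by contrast, matches support-weighted averaging: weighting each class by its example count recovers "Recall Sklearn" = overall accuracy, which is why the two recall figures differ (0.2011 vs 0.2065).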