SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2 on the rag_finetuning_for_engineering dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: sentence-transformers/all-MiniLM-L6-v2
- Maximum Sequence Length: 256 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset: rag_finetuning_for_engineering
Model Sources
- Documentation: Sentence Transformers Documentation (https://www.sbert.net)
- Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
- Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
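The stack above is just a BERT encoder, masked mean pooling, and L2 normalization, so the embeddings can be reproduced without the sentence-transformers wrapper. A minimal sketch using plain transformers (an illustration, assuming the checkpoint's transformer weights load via AutoModel, as is typical for Sentence Transformers repositories):

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("zacCMU/miniLM2-ENG3")
model = AutoModel.from_pretrained("zacCMU/miniLM2-ENG3")

encoded = tokenizer(["example sentence"], padding=True, truncation=True,
                    max_length=256, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # (batch, seq_len, 384)

# (1) Pooling: mean over real tokens only, weighted by the attention mask
mask = encoded["attention_mask"].unsqueeze(-1).float()
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# (2) Normalize: unit length, so dot product equals cosine similarity
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print(sentence_embeddings.shape)  # torch.Size([1, 384])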
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("zacCMU/miniLM2-ENG3")
# Run inference
sentences = [
'10.7 Information to be provided to the FIA and Competitors \na) In order that an FIA observer may be appointed, Competitors must inform the FIA and all \nother Competitors of any planned TPC, PE or DE at least 72 hours before it is due to \ncommence, and the following information must be provided: \ni) The precise specification of the car(s) to be used. ii) The name(s) of the driver(s). iii) The type of activity.',
'Competitors must notify the FIA and other teams at least 72 hours in advance of any planned technical testing, physical evaluations, or development exercises, providing detailed information about the cars, drivers, and nature of the activity.',
"The aerodynamic design of a Formula 1 car's rear wing is crucial in determining its overall downforce and drag characteristics, requiring a delicate balance between speed and stability.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, -0.0570, 0.9087],
# [-0.0570, 1.0000, 0.0005],
# [ 0.9087, 0.0005, 1.0000]])
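Because the Normalize() module makes every embedding unit length, cosine similarity is just a dot product, and the model can be used directly for retrieval. A small hypothetical semantic-search example (corpus and query invented for illustration):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("zacCMU/miniLM2-ENG3")

corpus = [
    "Competitors must notify the FIA at least 72 hours before any planned test.",
    "The rear wing design balances downforce against drag.",
]
query = "How far in advance must teams announce testing?"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# Cosine similarity between the query and every corpus entry
scores = model.similarity(query_embedding, corpus_embeddings)  # shape (1, len(corpus))
best = int(scores.argmax())
print(corpus[best], float(scores[0, best]))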
Training Details
Training Dataset
rag_finetuning_for_engineering
- Dataset: rag_finetuning_for_engineering at bddb325
- Size: 637 training samples
- Columns: anchor, positive, and negative
- Approximate statistics based on the first 637 samples:

|  | anchor | positive | negative |
|---|---|---|---|
| type | string | string | string |
| details | min: 9 tokens, mean: 112.88 tokens, max: 256 tokens | min: 18 tokens, mean: 60.28 tokens, max: 176 tokens | min: 15 tokens, mean: 35.03 tokens, max: 57 tokens |
- Samples:

| anchor | positive | negative |
|---|---|---|
| A penalty in accordance with Article 54.3d) will be imposed on any driver who fails to start the race from the pit lane. If any driver needs assistance after the fifteen (15) second signal, he must raise his arm and, when the remainder of the cars able to do so have left the pit lane, marshals will be instructed to push the car into the inner lane. In this case, marshals with yellow flags will stand beside any car concerned to warn drivers behind. | A driver who fails to start the race from the pit lane will incur a penalty. If a driver requires assistance after the 15-second signal, they must signal for help and marshals will then guide their car into the inner lane, warning other drivers with yellow flags. | The aerodynamic design of modern Formula 1 cars requires a delicate balance between downforce and drag to achieve optimal speed on the track. |
| If a driver wishes to leave his car before it is weighed, he must ask the Technical Delegate to weigh him in order that this weight may be added to that of the car. e) If a car stops on the circuit during the qualifying session or the sprint qualifying session and the driver leaves the car, he must go to the FIA garage immediately on his return to the pit lane in order for his weight to be established. 35.2 After the sprint session or the race any classified car may be weighed. | To avoid penalties, a driver must ensure their weight is accurately recorded before leaving their car, either by having the Technical Delegate weigh them or by being weighed in the FIA garage after returning to the pit lane. This process is crucial during qualifying sessions, sprint qualifying sessions, and after the sprint session or the race. | The aerodynamic design of a Formula 1 car's rear wing plays a crucial role in generating downforce, but its impact on the overall handling and stability of the vehicle is often overlooked by teams in their pursuit of speed. |
| d) When leaving the pits a driver may overtake, or be overtaken by, another car on the track before he reaches the second safety car line. e) When the safety car is returning to the pits it may be overtaken by cars on the track once it has reached the first safety car line. f) Whilst in the pit entry road, pit lane or pit exit road a driver may overtake another car which is also in one of these three areas. | When exiting the pits, a driver is allowed to overtake or be overtaken by another car on the track before reaching the second safety car line. Additionally, the safety car can be overtaken by cars on the track once it has reached the first safety car line, and drivers can also overtake each other while in the pit entry road, pit lane, or pit exit road. | The aerodynamic design of modern Formula 1 cars relies heavily on complex computational fluid dynamics simulations to optimize their downforce and drag characteristics. |

- Loss: TripletLoss with these parameters: { "distance_metric": "TripletDistanceMetric.EUCLIDEAN", "triplet_margin": 5 }
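With distance_metric = TripletDistanceMetric.EUCLIDEAN and triplet_margin = 5, each triplet is trained so that the positive sits at least 5 units closer to the anchor than the negative does. A minimal sketch of that objective (mirroring, not reproducing, the library implementation):

import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=5.0):
    # Euclidean distances from the anchor to the positive and negative embeddings
    d_pos = F.pairwise_distance(anchor, positive, p=2)
    d_neg = F.pairwise_distance(anchor, negative, p=2)
    # Hinge: penalize unless d_pos + margin <= d_neg
    return F.relu(d_pos - d_neg + margin).mean()

# Unit-normalized dummy embeddings, matching the model's Normalize() output
a, p, n = (F.normalize(torch.randn(4, 384), dim=1) for _ in range(3))
print(triplet_loss(a, p, n))

Note that distances between unit vectors are at most 2, so with a margin of 5 the hinge term never reaches zero; this is consistent with the training loss below plateauing near 4.5 rather than approaching 0.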
Training Hyperparameters
Non-Default Hyperparameters
- per_device_train_batch_size: 16
- learning_rate: 5e-06
- num_train_epochs: 5
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 8
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-06
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 5
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- parallelism_config: None
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- project: huggingface
- trackio_space_id: trackio
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- hub_revision: None
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: no
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- liger_kernel_config: None
- eval_use_gather_object: False
- average_tokens_across_devices: True
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
- router_mapping: {}
- learning_rate_mapping: {}
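The original training script is not included in the card, but the non-default hyperparameters above can be reproduced with the Sentence Transformers trainer API. A sketch under stated assumptions (the Hub dataset id zacCMU/rag_finetuning_for_engineering is a guess from the dataset name; revision bddb325 is taken from the card):

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import TripletLoss, TripletDistanceMetric

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
# Hypothetical dataset id; columns are anchor / positive / negative per the card
dataset = load_dataset("zacCMU/rag_finetuning_for_engineering",
                       split="train", revision="bddb325")

loss = TripletLoss(model,
                   distance_metric=TripletDistanceMetric.EUCLIDEAN,
                   triplet_margin=5)

args = SentenceTransformerTrainingArguments(
    output_dir="miniLM2-ENG3",
    per_device_train_batch_size=16,
    learning_rate=5e-6,
    num_train_epochs=5,
)

trainer = SentenceTransformerTrainer(model=model, args=args,
                                     train_dataset=dataset, loss=loss)
trainer.train()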
Training Logs
| Epoch | Step | Training Loss |
|---|---|---|
| 0.25 | 10 | 5.5537 |
| 0.5 | 20 | 5.4171 |
| 0.75 | 30 | 5.3388 |
| 1.0 | 40 | 5.2232 |
| 1.25 | 50 | 5.1218 |
| 1.5 | 60 | 5.1178 |
| 1.75 | 70 | 5.0529 |
| 2.0 | 80 | 4.9699 |
| 2.25 | 90 | 4.9154 |
| 2.5 | 100 | 4.8769 |
| 2.75 | 110 | 4.8344 |
| 3.0 | 120 | 4.7987 |
| 3.25 | 130 | 4.6987 |
| 3.5 | 140 | 4.6958 |
| 3.75 | 150 | 4.5954 |
| 4.0 | 160 | 4.6363 |
| 4.25 | 170 | 4.5681 |
| 4.5 | 180 | 4.5330 |
| 4.75 | 190 | 4.5215 |
| 5.0 | 200 | 4.5214 |
Framework Versions
- Python: 3.12.12
- Sentence Transformers: 5.1.2
- Transformers: 4.57.1
- PyTorch: 2.9.0+cu126
- Accelerate: 1.11.0
- Datasets: 4.0.0
- Tokenizers: 0.22.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
TripletLoss
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}