Merge Experiments
Collection of 24 items, sorted from newest (top) to oldest (bottom) on the collection page.
Grimoires:
- v0a - DARE_TIES of 4 models (A+B+C+E)
- v0b - DARE_TIES of 4 models (A+C+D+E)
- v0c - SLERP of 2 models (C+E)
- v0d - DARE_TIES of 5 models, balanced evenly
- v0e - DARE_TIES of 5 models, 3 heavy 2 light
- v0f - DARE_TIES of 4 models (A+B+D+E)
- v0g - DARE_TIES of 3 models (A+B+E)
- v0h - SLERP of 2 models (B+E)
- v0i - SLERP of 2 models (A+B)
- v0j - DARE_TIES of 5 models, balanced unevenly, B heavy
- v0k - DARE_TIES of 5 models, balanced unevenly, A heavy
- v2a - KARCHER of the same 5 models as v0k, balanced evenly
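Each of the grimoires above was produced by feeding a YAML config (two are preserved below) to mergekit's mergekit-yaml CLI. A typical invocation, with the config filename and output directory chosen purely for illustration:

mergekit-yaml v2a.yaml A:\LLM\merged\v2a --cuda

mergekit then writes the merged weights and the tokenizer into the output directory.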
The v2a config as used (karcher over all five source models, referenced by local paths):

architecture: MistralForCausalLM
merge_method: karcher
dtype: bfloat16
models:
  - model: A:\LLM\.cache\huggingface\hub\!models--dphn--dolphin-2.8-mistral-7b-v02\fixed
  - model: A:\LLM\.cache\huggingface\hub\!models--fearlessdots--WizardLM-2-7B-abliterated\fixed
  - model: A:\LLM\.cache\huggingface\hub\!models--KoboldAI--Mistral-7B-Erebus-v3
  - model: A:\LLM\.cache\huggingface\hub\!models--LeroyDyer--SpydazWeb_AI_HumanAI_RP
  - model: A:\LLM\.cache\huggingface\hub\!models--maywell--PiVoT-0.1-Evil-a\fixed
parameters:
tokenizer:
  source: union
chat_template: auto

A dare_ties config over the same five models, weighted most heavily toward dolphin (consistent with the v0k "A heavy" recipe):

base_model: fearlessdots/WizardLM-2-7B-abliterated
merge_method: dare_ties
architecture: MistralForCausalLM
dtype: bfloat16
models:
  - model: dphn/dolphin-2.8-mistral-7b-v02
    parameters:
      density: 0.55
      weight: 0.55
  - model: fearlessdots/WizardLM-2-7B-abliterated
    parameters:
      density: 0.4
      weight: 0.2
  - model: KoboldAI/Mistral-7B-Erebus-v3
    parameters:
      density: 0.2
      weight: 0.05
  - model: LeroyDyer/SpydazWeb_AI_HumanAI_RP
    parameters:
      density: 0.3
      weight: 0.1
  - model: maywell/PiVoT-0.1-Evil-a
    parameters:
      density: 0.3
      weight: 0.1
tokenizer:
  source: union
chat_template: auto
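No SLERP config survived in these notes. For the two-model grimoires (v0c, v0h, v0i) a minimal mergekit slerp config in the same style would look like the sketch below; the A-E letter mapping is never stated above, so the model pair and the interpolation value t here are illustrative assumptions, not the recipe actually used:

merge_method: slerp
base_model: fearlessdots/WizardLM-2-7B-abliterated
dtype: bfloat16
models:
  - model: fearlessdots/WizardLM-2-7B-abliterated
  - model: maywell/PiVoT-0.1-Evil-a
parameters:
  t: 0.5   # 0 keeps the base model, 1 keeps the second model
tokenizer:
  source: union
chat_template: auto

mergekit's slerp also accepts a per-filter t (separate values for self_attn and mlp tensors) when the blend should favour one parent in attention and the other in the MLP layers.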
pip install transformers torch

# Load one of the merged checkpoints (replace the repo id with the grimoire you want to try)
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("your/model_name", torch_dtype="bfloat16")
tokenizer = AutoTokenizer.from_pretrained("your/model_name")
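A quick smoke test of whichever merge was loaded above (the prompt and sampling settings are arbitrary):

prompt = "Describe a rainy night in three sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))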