🧙 Warlock 7B v2

Dark Warlock

Forbidden Knowledge

This is a powerful, arcane merge of pre-trained language models, summoned forth from the infernal void using mergekit.

An uncensored and creative merge of Mistral 7B finetunes. v0k (v1) was determined to be the most unique and powerful of the original Warlock prototypes from September 2025.

Two months later, however, the Karcher merge method was tested (in addition to using a vocab_resizer script). v2a proved superior to v0k and has been released.
- "I'm sorry, but it seems like you have a very twisted mindset." - Warlock v1 (8% Refusal Rate)
- "You're asking for some pretty sick advice here. But since you asked, I'll give it to you." - Warlock v2 (4% Refusal Rate)
Output is sometimes improved with the default Kobold.cpp Jailbreak enabled and Top-NSigma set to 1.26.
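Top-NSigma keeps only tokens whose logits fall within n standard deviations of the peak logit before sampling, which is why raising or lowering the value changes how wild the output gets. A minimal sketch of that filter (a hypothetical `top_n_sigma_filter`, not Kobold.cpp's actual implementation):

```python
import numpy as np

def top_n_sigma_filter(logits, n=1.26):
    """Keep only tokens whose logit lies within n standard deviations
    of the maximum logit; mask the rest to -inf before softmax."""
    logits = np.asarray(logits, dtype=np.float64)
    threshold = logits.max() - n * logits.std()
    return np.where(logits >= threshold, logits, -np.inf)

# Tokens far below the peak logit are masked out entirely.
filtered = top_n_sigma_filter([10.0, 9.5, 2.0, -3.0], n=1.26)
```

Smaller n prunes more aggressively; at n=1.26 only the tokens near the peak survive.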
Grimoires:
- v0a - DARE_TIES of 4 models (A+B+C+E)
- v0b - DARE_TIES of 4 models (A+C+D+E)
- v0c - SLERP of 2 models (C+E)
- v0d - DARE_TIES of 5 models, balanced evenly
- v0e - DARE_TIES of 5 models, 3 heavy 2 light
- v0f - DARE_TIES of 4 models (A+B+D+E)
- v0g - DARE_TIES of 3 models (A+B+E)
- v0h - SLERP of 2 models (B+E)
- v0i - SLERP of 2 models (A+B)
- v0j - DARE_TIES of 5 models, balanced unevenly, B heavy
- v0k - DARE_TIES of 5 models, balanced unevenly, A heavy
- v2a - KARCHER of the same 5 models as v0k, balanced evenly
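The SLERP prototypes above interpolate two models along the great-circle arc between their weight vectors rather than averaging them linearly. A toy sketch of SLERP on flattened weight tensors (illustrative only; mergekit's implementation differs in detail):

```python
import numpy as np

def slerp(a, b, t=0.5, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors."""
    a = np.asarray(a, dtype=np.float64).ravel()
    b = np.asarray(b, dtype=np.float64).ravel()
    # Angle between the two tensors, measured on the unit hypersphere.
    an, bn = a / (np.linalg.norm(a) + eps), b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(an, bn), -1.0, 1.0))
    if omega < eps:  # near-parallel tensors: fall back to linear interpolation
        return (1 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b
```

At t=0.5 this lands halfway along the arc, preserving norm far better than a plain average when the tensors point in different directions.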

Ritual of Merging

Incantation Method


This model was merged using the holistic Karcher mean incantation (mergekit's `karcher` merge method).
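The Karcher mean is the Riemannian center of mass: rather than averaging weights in flat space, it iteratively averages in the tangent space of the sphere and projects back. A toy sketch on unit vectors (illustrative of the idea, not mergekit's actual tensor-level code):

```python
import numpy as np

def karcher_mean(points, iters=50, tol=1e-10):
    """Karcher (Riemannian) mean of unit vectors on the hypersphere,
    computed by alternating log-map averaging and exp-map projection."""
    pts = [np.asarray(p, dtype=np.float64) for p in points]
    pts = [p / np.linalg.norm(p) for p in pts]
    mu = pts[0].copy()
    for _ in range(iters):
        # Log-map each point into the tangent space at mu, then average.
        tangent = np.zeros_like(mu)
        for p in pts:
            cos_t = np.clip(np.dot(mu, p), -1.0, 1.0)
            theta = np.arccos(cos_t)
            if theta > 1e-12:
                v = p - cos_t * mu
                tangent += theta * v / np.linalg.norm(v)
        tangent /= len(pts)
        norm = np.linalg.norm(tangent)
        if norm < tol:  # converged: mean tangent direction vanished
            break
        # Exp-map the averaged tangent back onto the sphere.
        mu = np.cos(norm) * mu + np.sin(norm) * tangent / norm
    return mu
```

Unlike DARE_TIES, this treats all five models symmetrically, which matches the "balanced evenly" recipe of v2a.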

Arcane Configuration

The following runes were inscribed to produce this model (v2a):
```yaml
architecture: MistralForCausalLM
merge_method: karcher
dtype: bfloat16
models:
  - model: A:\LLM\.cache\huggingface\hub\!models--dphn--dolphin-2.8-mistral-7b-v02\fixed
  - model: A:\LLM\.cache\huggingface\hub\!models--fearlessdots--WizardLM-2-7B-abliterated\fixed
  - model: A:\LLM\.cache\huggingface\hub\!models--KoboldAI--Mistral-7B-Erebus-v3
  - model: A:\LLM\.cache\huggingface\hub\!models--LeroyDyer--SpydazWeb_AI_HumanAI_RP
  - model: A:\LLM\.cache\huggingface\hub\!models--maywell--PiVoT-0.1-Evil-a\fixed
parameters:
tokenizer:
  source: union
  chat_template: auto
```

Three of the models (the paths marked `fixed` above) required an initial passthrough patch via the vocab_resizer script, resizing their vocabularies to 32000 tokens.
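The vocab_resizer script itself is not included here, but conceptually the patch truncates or zero-pads the embedding matrix along the vocab axis so every model in the merge shares the same 32000-row vocabulary. A minimal numpy sketch of that idea (with `transformers` you would instead call `model.resize_token_embeddings(32000)`):

```python
import numpy as np

def resize_vocab(embedding, target_rows=32000):
    """Truncate or zero-pad an embedding matrix along the vocab axis
    so that every model in a merge shares the same vocabulary size."""
    rows, dim = embedding.shape
    if rows >= target_rows:
        return embedding[:target_rows]          # drop surplus token rows
    pad = np.zeros((target_rows - rows, dim), dtype=embedding.dtype)
    return np.vstack([embedding, pad])          # pad missing rows with zeros
```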

v0k yaml:
```yaml
base_model: fearlessdots/WizardLM-2-7B-abliterated
merge_method: dare_ties
architecture: MistralForCausalLM
dtype: bfloat16
models:
  - model: dphn/dolphin-2.8-mistral-7b-v02
    parameters:
      density: 0.55
      weight: 0.55
  - model: fearlessdots/WizardLM-2-7B-abliterated
    parameters:
      density: 0.4
      weight: 0.2
  - model: KoboldAI/Mistral-7B-Erebus-v3
    parameters:
      density: 0.2
      weight: 0.05
  - model: LeroyDyer/SpydazWeb_AI_HumanAI_RP
    parameters:
      density: 0.3
      weight: 0.1
  - model: maywell/PiVoT-0.1-Evil-a
    parameters:
      density: 0.3
      weight: 0.1
tokenizer:
  source: union
  chat_template: auto
```
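In a DARE_TIES merge, each model contributes a task vector (its weights minus the base model's); DARE randomly drops entries at rate 1 − density and rescales the survivors, then TIES elects a majority sign per parameter and sums only sign-agreeing contributions. A toy numpy sketch of that pipeline (a hypothetical `dare_ties` helper, not mergekit's code):

```python
import numpy as np

def dare_ties(base, deltas, densities, weights, rng=None):
    """Toy DARE-TIES merge: drop each task vector's entries at rate
    (1 - density), rescale survivors by 1/density (DARE), elect a
    majority sign per parameter, and sum sign-agreeing values (TIES)."""
    rng = rng or np.random.default_rng(0)
    pruned = []
    for d, rho, w in zip(deltas, densities, weights):
        mask = rng.random(d.shape) < rho            # keep with prob = density
        pruned.append(w * np.where(mask, d / rho, 0.0))
    stacked = np.stack(pruned)
    elected = np.sign(stacked.sum(axis=0))          # majority sign per weight
    agree = np.where(np.sign(stacked) == elected, stacked, 0.0)
    return base + agree.sum(axis=0)
```

The `density`/`weight` pairs in the YAML above map onto `densities` and `weights` here: v0k keeps Dolphin dense and heavy while the other grimoires contribute sparse, lightly weighted deltas.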

Channeling the Model

To channel the power of this model, one must first install the necessary conduits:

```shell
pip install transformers torch
```

Example Invocation:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Naphula/Warlock-7B-v2")
tokenizer = AutoTokenizer.from_pretrained("Naphula/Warlock-7B-v2")
```
Model tree for Naphula/Warlock-7B-v2

Collection including Naphula/Warlock-7B-v2