---
license: apache-2.0
pipeline_tag: image-text-to-text
language:
- en
- zh
base_model:
- prithivMLmods/Camel-Doc-OCR-062825
library_name: transformers
tags:
- Document
- VLM
- OCR
- VL
- Camel
- Openpdf
- text-generation-inference
- Extraction
- Linking
- Markdown
- Document Digitization
- Intelligent Document Processing (IDP)
- Intelligent Word Recognition (IWR)
- Optical Mark Recognition (OMR)
---

# **Gliese-OCR-7B-Post1.0**
> The **Gliese-OCR-7B-Post1.0** model is a fine-tuned version of **[Camel-Doc-OCR-062825](https://huggingface.co/prithivMLmods/Camel-Doc-OCR-062825)**, optimized for **Document Retrieval**, **Content Extraction**, and **Analysis Recognition**. Built on top of the Qwen2.5-VL architecture, this model enhances document comprehension capabilities with focused training on the Opendoc2-Analysis-Recognition dataset for superior document analysis and information extraction tasks.
> [!note]
> This model shows significant improvements in [LaTeX rendering and Markdown rendering for OCR tasks](https://huggingface.co/prithivMLmods/Gliese-OCR-7B-Post1.0/blob/main/Gliese-OCR-7B-Post1.0(4-bit)-reportlab/Gliese_OCR_7B_Post1_0(4_bit)_reportlab.ipynb).
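
The linked notebook runs the model in 4-bit. A minimal sketch of such a setup with `bitsandbytes` quantization through `transformers` (assuming the `bitsandbytes` package is installed; the exact configuration used in the notebook may differ):

```python
import torch
from transformers import (
    AutoProcessor,
    BitsAndBytesConfig,
    Qwen2_5_VLForConditionalGeneration,
)

# NF4 4-bit weights with fp16 compute; roughly quarters the memory footprint.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Gliese-OCR-7B-Post1.0",
    quantization_config=bnb_config,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("prithivMLmods/Gliese-OCR-7B-Post1.0")
```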
# Key Enhancements
* **Context-Aware Multimodal Extraction and Linking for Documents**: Advanced capability for understanding document context and establishing connections between multimodal elements within documents.
* **Enhanced Document Retrieval**: Designed to efficiently locate and extract relevant information from complex document structures and layouts.
* **Superior Content Extraction**: Optimized for precise extraction of structured and unstructured content from diverse document formats.
* **Analysis Recognition**: Specialized in recognizing and interpreting analytical content, charts, tables, and visual data representations.
* **State-of-the-Art Performance Across Resolutions**: Achieves competitive results on OCR and visual QA benchmarks such as DocVQA, MathVista, RealWorldQA, and MTVQA.
* **Video Understanding up to 20+ minutes**: Supports detailed comprehension of long-duration videos for content summarization, Q&A, and multi-modal reasoning (a video sketch follows the quick-start example below).
* **Visually-Grounded Device Interaction**: Enables mobile/robotic device operation via visual inputs and text-based instructions using contextual understanding and decision-making logic.
# Quick Start with Transformers
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the model weights and the matching processor.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Gliese-OCR-7B-Post1.0", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("prithivMLmods/Gliese-OCR-7B-Post1.0")

# A single-turn conversation containing one image and a text instruction.
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Build the chat prompt and gather the vision inputs it references.
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to(model.device)

# Generate, then decode only the newly produced tokens.
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
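
The key enhancements above mention long-video comprehension. A minimal sketch of passing a video through the same pipeline, reusing the `model` and `processor` loaded in the quick start — the file path and prompt here are placeholders, not values from the model card:

```python
# The video path is a placeholder; `fps` controls how densely frames are
# sampled by qwen_vl_utils before being handed to the processor.
video_messages = [
    {
        "role": "user",
        "content": [
            {"type": "video", "video": "file:///path/to/presentation.mp4", "fps": 1.0},
            {"type": "text", "text": "Summarize the key points of this video."},
        ],
    }
]
text = processor.apply_chat_template(
    video_messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(video_messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=256)
```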
# Intended Use
This model is intended for:
* Context-aware multimodal extraction and linking for complex document structures.
* High-fidelity document retrieval and content extraction from various document formats (a prompt sketch follows this list).
* Analysis recognition of charts, graphs, tables, and visual data representations.
* Document-based question answering for educational and enterprise applications.
* Extraction and LaTeX formatting of mathematical expressions from printed or handwritten content.
* Retrieval and summarization from long documents, slides, and multi-modal inputs.
* Multilingual document analysis and structured content extraction for global use cases.
* Robotic or mobile automation with vision-guided contextual interaction.
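
For document digitization and extraction tasks like these, the output format is steered by the prompt. A hedged sketch reusing the quick-start pipeline — the image URL and prompt wording are illustrative placeholders, not canonical values for this model:

```python
# Ask for the page as Markdown with LaTeX math. The image URL is a
# placeholder; swap in any document page image.
ocr_messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://example.com/scanned_page.png"},
            {
                "type": "text",
                "text": "Convert this page to Markdown. Preserve headings and "
                        "tables, and render equations as LaTeX.",
            },
        ],
    }
]
# Then run the same apply_chat_template / process_vision_info / generate
# steps shown in the quick start.
```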
# Limitations
* May show degraded performance on extremely low-quality or occluded images.
* Not optimized for real-time applications on low-resource or edge devices due to computational demands.
* Variable accuracy on uncommon or low-resource languages/scripts.
* Long video processing may require substantial memory and is not optimized for streaming applications.
* Visual token settings affect performance; suboptimal configurations can degrade results (see the processor sketch below).
* In rare cases, outputs may contain hallucinated or contextually misaligned information.
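
On the visual-token point above: the processor exposes `min_pixels` / `max_pixels` bounds (a standard Qwen2.5-VL processor option) to trade recognition quality against memory and speed. A minimal sketch:

```python
from transformers import AutoProcessor

# Each visual token corresponds to a 28x28 pixel patch, so these bounds cap
# how many tokens a single image can produce.
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
    "prithivMLmods/Gliese-OCR-7B-Post1.0",
    min_pixels=min_pixels,
    max_pixels=max_pixels,
)
```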