Overview¶
MinerU VL text extraction module.
MinerU VL is a vision-language model for document layout detection and text, table, and equation recognition. It performs extraction in two steps:

1. Layout Detection: detect regions with types (text, table, equation, etc.)
2. Content Recognition: extract content from each detected region
Example
from omnidocs.tasks.text_extraction import MinerUVLTextExtractor
from omnidocs.tasks.text_extraction.mineruvl import MinerUVLTextPyTorchConfig

# Initialize with PyTorch backend
extractor = MinerUVLTextExtractor(
    backend=MinerUVLTextPyTorchConfig(device="cuda")
)

# Extract text
result = extractor.extract(image)
print(result.content)

# Extract with detailed blocks
result, blocks = extractor.extract_with_blocks(image)
for block in blocks:
    print(f"{block.type}: {block.content[:50]}...")
MinerUVLTextAPIConfig
¶
Bases: BaseModel
API backend config for MinerU VL text extraction.
Connects to a deployed VLLM server with OpenAI-compatible API.
Example
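A minimal sketch of pointing the extractor at a deployed VLLM server. The field names used here (base_url, model_name, api_key) are assumptions for illustration, not confirmed against the class definition; check MinerUVLTextAPIConfig for its actual fields.

from omnidocs.tasks.text_extraction import MinerUVLTextExtractor
from omnidocs.tasks.text_extraction.mineruvl import MinerUVLTextAPIConfig

# Hypothetical field names -- consult MinerUVLTextAPIConfig for the real schema
config = MinerUVLTextAPIConfig(
    base_url="http://localhost:8000/v1",  # OpenAI-compatible VLLM endpoint (assumed field)
    model_name="mineru-vl",               # whatever name the server registers (placeholder)
    api_key="EMPTY",                      # VLLM usually ignores the key unless configured (assumed field)
)

extractor = MinerUVLTextExtractor(backend=config)
result = extractor.extract("page.png")
print(result.content)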
MinerUVLTextExtractor
¶
Bases: BaseTextExtractor
MinerU VL text extractor with layout-aware extraction.
Performs two-step extraction:

1. Layout detection (detect regions)
2. Content recognition (extract text/table/equation from each region)

Supports multiple backends:

- PyTorch (HuggingFace Transformers)
- VLLM (high-throughput GPU)
- MLX (Apple Silicon)
- API (VLLM OpenAI-compatible server)
Example
from omnidocs.tasks.text_extraction import MinerUVLTextExtractor
from omnidocs.tasks.text_extraction.mineruvl import MinerUVLTextPyTorchConfig

extractor = MinerUVLTextExtractor(
    backend=MinerUVLTextPyTorchConfig(device="cuda")
)
result = extractor.extract(image)
print(result.content)  # Combined text + tables + equations
print(result.blocks)   # List of ContentBlock objects
Initialize MinerU VL text extractor.
| PARAMETER | DESCRIPTION |
|---|---|
| backend | Backend configuration (PyTorch, VLLM, MLX, or API) |
Source code in omnidocs/tasks/text_extraction/mineruvl/extractor.py
extract
¶
extract(
    image: Union[Image, ndarray, str, Path],
    output_format: Literal["html", "markdown"] = "markdown",
) -> TextOutput
Extract text with layout-aware two-step extraction.
| PARAMETER | DESCRIPTION |
|---|---|
| image | Input image (PIL Image, numpy array, or file path) |
| output_format | Output format ('html' or 'markdown') |

| RETURNS | DESCRIPTION |
|---|---|
| TextOutput | TextOutput with extracted content and metadata |
Source code in omnidocs/tasks/text_extraction/mineruvl/extractor.py
extract_with_blocks
¶
extract_with_blocks(
    image: Union[Image, ndarray, str, Path],
    output_format: Literal["html", "markdown"] = "markdown",
) -> tuple[TextOutput, List[ContentBlock]]
Extract text and return both TextOutput and ContentBlocks.
This method provides access to the detailed block information including bounding boxes and block types.
| PARAMETER | DESCRIPTION |
|---|---|
| image | Input image |
| output_format | Output format |

| RETURNS | DESCRIPTION |
|---|---|
| tuple[TextOutput, List[ContentBlock]] | Tuple of (TextOutput, List[ContentBlock]) |
Source code in omnidocs/tasks/text_extraction/mineruvl/extractor.py
MinerUVLTextMLXConfig
¶
Bases: BaseModel
MLX backend config for MinerU VL text extraction on Apple Silicon.
Uses MLX-VLM for efficient inference on M1/M2/M3/M4 chips.
Example
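A minimal sketch for Apple Silicon. Constructing the config with no arguments is an assumption (the fields and their defaults are not documented here), so check MinerUVLTextMLXConfig before relying on it.

from omnidocs.tasks.text_extraction import MinerUVLTextExtractor
from omnidocs.tasks.text_extraction.mineruvl import MinerUVLTextMLXConfig

# Default construction; MLX-VLM runs on the unified memory of M1/M2/M3/M4 chips
extractor = MinerUVLTextExtractor(backend=MinerUVLTextMLXConfig())
result = extractor.extract("page.png")
print(result.content)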
MinerUVLTextPyTorchConfig
¶
Bases: BaseModel
PyTorch/HuggingFace backend config for MinerU VL text extraction.
Uses HuggingFace Transformers with Qwen2VLForConditionalGeneration.
Example
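A short usage sketch based on the extractor example above; device="cuda" is shown there, while falling back to device="cpu" is an assumption about the accepted values.

from omnidocs.tasks.text_extraction import MinerUVLTextExtractor
from omnidocs.tasks.text_extraction.mineruvl import MinerUVLTextPyTorchConfig

# GPU inference via HuggingFace Transformers; device="cpu" is assumed to work when no CUDA device is available
config = MinerUVLTextPyTorchConfig(device="cuda")
extractor = MinerUVLTextExtractor(backend=config)
result = extractor.extract("page.png")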
BlockType
¶
Bases: str, Enum
MinerU VL block types (22 categories).
ContentBlock
¶
Bases: BaseModel
A detected content block with type, bounding box, angle, and content.
Coordinates are normalized to [0, 1] range relative to image dimensions.
to_absolute
¶
Convert normalized bbox to absolute pixel coordinates.
Source code in omnidocs/tasks/text_extraction/mineruvl/utils.py
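A sketch of converting a block's normalized bbox back to pixels. The exact signature of to_absolute is not shown here; passing the image width and height is an assumption, so verify it against the source.

from PIL import Image

image = Image.open("page.png")
result, blocks = extractor.extract_with_blocks(image)
for block in blocks:
    # bbox is normalized to [0, 1]; to_absolute is assumed to rescale it to pixel coordinates
    abs_bbox = block.to_absolute(image.width, image.height)
    print(block.type, abs_bbox)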
MinerUSamplingParams
¶
MinerUSamplingParams(
    temperature: Optional[float] = 0.0,
    top_p: Optional[float] = 0.01,
    top_k: Optional[int] = 1,
    presence_penalty: Optional[float] = 0.0,
    frequency_penalty: Optional[float] = 0.0,
    repetition_penalty: Optional[float] = 1.0,
    no_repeat_ngram_size: Optional[int] = 100,
    max_new_tokens: Optional[int] = None,
)
Bases: SamplingParams
Default sampling parameters optimized for MinerU VL.
Source code in omnidocs/tasks/text_extraction/mineruvl/utils.py
SamplingParams
dataclass
¶
SamplingParams(
    temperature: Optional[float] = None,
    top_p: Optional[float] = None,
    top_k: Optional[int] = None,
    presence_penalty: Optional[float] = None,
    frequency_penalty: Optional[float] = None,
    repetition_penalty: Optional[float] = None,
    no_repeat_ngram_size: Optional[int] = None,
    max_new_tokens: Optional[int] = None,
)
Sampling parameters for text generation.
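A sketch of how the two classes relate: SamplingParams leaves every field as None (defer to the backend), while MinerUSamplingParams pins near-greedy defaults tuned for MinerU VL. Importing both from the utils module is an assumption based on the source paths shown on this page, and how these objects are wired into the extractor is not shown here.

from omnidocs.tasks.text_extraction.mineruvl.utils import (
    MinerUSamplingParams,
    SamplingParams,
)

# Near-greedy defaults tuned for MinerU VL (temperature=0.0, top_k=1, ...)
default_params = MinerUSamplingParams()

# A custom override; unset fields stay None and defer to the backend's own defaults
custom_params = SamplingParams(temperature=0.2, max_new_tokens=2048)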
MinerUVLTextVLLMConfig
¶
Bases: BaseModel
VLLM backend config for MinerU VL text extraction.
Uses VLLM for high-throughput GPU inference with:

- PagedAttention for efficient KV cache
- Continuous batching
- Optimized CUDA kernels
Example
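A minimal sketch; constructing the config with no arguments is an assumption (the fields, such as GPU memory or model options, are not documented here), so check MinerUVLTextVLLMConfig for its actual schema.

from omnidocs.tasks.text_extraction import MinerUVLTextExtractor
from omnidocs.tasks.text_extraction.mineruvl import MinerUVLTextVLLMConfig

# VLLM backend for high-throughput batched inference on a CUDA GPU
extractor = MinerUVLTextExtractor(backend=MinerUVLTextVLLMConfig())
result = extractor.extract("page.png")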
convert_otsl_to_html
¶
Convert OTSL table format to HTML.
Source code in omnidocs/tasks/text_extraction/mineruvl/utils.py
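A sketch of where this conversion fits. It assumes a single-string-argument signature, that table blocks carry the raw OTSL string in block.content, and that the block type value compares equal to "table"; all three are assumptions to verify against the source.

from omnidocs.tasks.text_extraction.mineruvl.utils import convert_otsl_to_html

result, blocks = extractor.extract_with_blocks(image)
for block in blocks:
    if block.type == "table":  # assumed: table blocks hold raw OTSL in block.content
        html = convert_otsl_to_html(block.content)
        print(html)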
parse_layout_output
¶
Parse layout detection model output into ContentBlocks.
Source code in omnidocs/tasks/text_extraction/mineruvl/utils.py
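A sketch of where this function sits in the pipeline. run_layout_model is a hypothetical stand-in for the layout-detection call, the single string argument is an assumption, and the type/bbox attribute names follow the ContentBlock description above.

from omnidocs.tasks.text_extraction.mineruvl.utils import parse_layout_output

raw_output = run_layout_model(layout_image)  # hypothetical: raw text emitted by the layout step
blocks = parse_layout_output(raw_output)
for block in blocks:
    print(block.type, block.bbox)  # bbox in normalized [0, 1] coordinates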
api
¶
API backend configuration for MinerU VL text extraction.
MinerUVLTextAPIConfig
¶
Bases: BaseModel
API backend config for MinerU VL text extraction.
Connects to a deployed VLLM server with OpenAI-compatible API.
Example
extractor
¶
MinerU VL text extractor with layout-aware two-step extraction.
MinerU VL performs document extraction in two steps:

1. Layout Detection: detect regions with types (text, table, equation, etc.)
2. Content Recognition: extract text/table/equation content from each region
MinerUVLTextExtractor
¶
Bases: BaseTextExtractor
MinerU VL text extractor with layout-aware extraction.
Performs two-step extraction:

1. Layout detection (detect regions)
2. Content recognition (extract text/table/equation from each region)

Supports multiple backends:

- PyTorch (HuggingFace Transformers)
- VLLM (high-throughput GPU)
- MLX (Apple Silicon)
- API (VLLM OpenAI-compatible server)
Example
from omnidocs.tasks.text_extraction import MinerUVLTextExtractor
from omnidocs.tasks.text_extraction.mineruvl import MinerUVLTextPyTorchConfig

extractor = MinerUVLTextExtractor(
    backend=MinerUVLTextPyTorchConfig(device="cuda")
)
result = extractor.extract(image)
print(result.content)  # Combined text + tables + equations
print(result.blocks)   # List of ContentBlock objects
Initialize MinerU VL text extractor.
| PARAMETER | DESCRIPTION |
|---|---|
| backend | Backend configuration (PyTorch, VLLM, MLX, or API) |
Source code in omnidocs/tasks/text_extraction/mineruvl/extractor.py
extract
¶
extract(
    image: Union[Image, ndarray, str, Path],
    output_format: Literal["html", "markdown"] = "markdown",
) -> TextOutput
Extract text with layout-aware two-step extraction.
| PARAMETER | DESCRIPTION |
|---|---|
| image | Input image (PIL Image, numpy array, or file path) |
| output_format | Output format ('html' or 'markdown') |

| RETURNS | DESCRIPTION |
|---|---|
| TextOutput | TextOutput with extracted content and metadata |
Source code in omnidocs/tasks/text_extraction/mineruvl/extractor.py
extract_with_blocks
¶
extract_with_blocks(
    image: Union[Image, ndarray, str, Path],
    output_format: Literal["html", "markdown"] = "markdown",
) -> tuple[TextOutput, List[ContentBlock]]
Extract text and return both TextOutput and ContentBlocks.
This method provides access to the detailed block information including bounding boxes and block types.
| PARAMETER | DESCRIPTION |
|---|---|
| image | Input image |
| output_format | Output format |

| RETURNS | DESCRIPTION |
|---|---|
| tuple[TextOutput, List[ContentBlock]] | Tuple of (TextOutput, List[ContentBlock]) |
Source code in omnidocs/tasks/text_extraction/mineruvl/extractor.py
mlx
¶
MLX backend configuration for MinerU VL text extraction (Apple Silicon).
MinerUVLTextMLXConfig
¶
Bases: BaseModel
MLX backend config for MinerU VL text extraction on Apple Silicon.
Uses MLX-VLM for efficient inference on M1/M2/M3/M4 chips.
Example
pytorch
¶
PyTorch/HuggingFace backend configuration for MinerU VL text extraction.
MinerUVLTextPyTorchConfig
¶
Bases: BaseModel
PyTorch/HuggingFace backend config for MinerU VL text extraction.
Uses HuggingFace Transformers with Qwen2VLForConditionalGeneration.
Example
utils
¶
MinerU VL utilities for document extraction.
Contains data structures, parsing, prompts, and post-processing functions for MinerU VL document extraction pipeline.
This file contains code adapted from mineru-vl-utils:

- https://github.com/opendatalab/mineru-vl-utils
- https://pypi.org/project/mineru-vl-utils/

The original mineru-vl-utils is licensed under AGPL-3.0 (Copyright (c) OpenDataLab):
https://github.com/opendatalab/mineru-vl-utils/blob/main/LICENSE.md
Adapted components
- BlockType enum (from structs.py)
- ContentBlock data structure (from structs.py)
- OTSL to HTML table conversion (from post_process/otsl2html.py)
BlockType
¶
Bases: str, Enum
MinerU VL block types (22 categories).
ContentBlock
¶
Bases: BaseModel
A detected content block with type, bounding box, angle, and content.
Coordinates are normalized to [0, 1] range relative to image dimensions.
to_absolute
¶
Convert normalized bbox to absolute pixel coordinates.
Source code in omnidocs/tasks/text_extraction/mineruvl/utils.py
SamplingParams
dataclass
¶
SamplingParams(
    temperature: Optional[float] = None,
    top_p: Optional[float] = None,
    top_k: Optional[int] = None,
    presence_penalty: Optional[float] = None,
    frequency_penalty: Optional[float] = None,
    repetition_penalty: Optional[float] = None,
    no_repeat_ngram_size: Optional[int] = None,
    max_new_tokens: Optional[int] = None,
)
Sampling parameters for text generation.
MinerUSamplingParams
¶
MinerUSamplingParams(
    temperature: Optional[float] = 0.0,
    top_p: Optional[float] = 0.01,
    top_k: Optional[int] = 1,
    presence_penalty: Optional[float] = 0.0,
    frequency_penalty: Optional[float] = 0.0,
    repetition_penalty: Optional[float] = 1.0,
    no_repeat_ngram_size: Optional[int] = 100,
    max_new_tokens: Optional[int] = None,
)
Bases: SamplingParams
Default sampling parameters optimized for MinerU VL.
Source code in omnidocs/tasks/text_extraction/mineruvl/utils.py
convert_bbox
¶
Convert bbox from model output (0-1000) to normalized format (0-1).
Source code in omnidocs/tasks/text_extraction/mineruvl/utils.py
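A sketch of the coordinate conversion this function performs. Treating it as a pure function of a four-element bbox is an assumption about its signature; the expected output below follows directly from the 0-1000 to [0, 1] description.

from omnidocs.tasks.text_extraction.mineruvl.utils import convert_bbox

# Model output uses a 0-1000 coordinate grid; the result is normalized to [0, 1]
normalized = convert_bbox([100, 200, 500, 800])  # assumed signature: a single 4-element bbox
print(normalized)  # expected: [0.1, 0.2, 0.5, 0.8]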
parse_angle
¶
Parse rotation angle from model output tail string.
parse_layout_output
¶
Parse layout detection model output into ContentBlocks.
Source code in omnidocs/tasks/text_extraction/mineruvl/utils.py
get_rgb_image
¶
Convert image to RGB mode.
prepare_for_layout
¶
prepare_for_layout(
    image: Image,
    layout_size: Tuple[int, int] = LAYOUT_IMAGE_SIZE,
) -> Image.Image
Prepare image for layout detection.
Source code in omnidocs/tasks/text_extraction/mineruvl/utils.py
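A sketch of the layout-preparation step. LAYOUT_IMAGE_SIZE is taken from the signature above; calling get_rgb_image with a single image argument and importing both helpers from the utils module are assumptions.

from PIL import Image
from omnidocs.tasks.text_extraction.mineruvl.utils import get_rgb_image, prepare_for_layout

image = get_rgb_image(Image.open("page.png"))  # ensure RGB mode before layout detection
layout_image = prepare_for_layout(image)       # resized to the default LAYOUT_IMAGE_SIZE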
resize_by_need
¶
Resize image if needed based on aspect ratio constraints.
Source code in omnidocs/tasks/text_extraction/mineruvl/utils.py
prepare_for_extract
¶
prepare_for_extract(
    image: Image,
    blocks: List[ContentBlock],
    prompts: Dict[str, str] = None,
    sampling_params: Dict[str, SamplingParams] = None,
    skip_types: set = None,
) -> Tuple[
    List[Image.Image],
    List[str],
    List[SamplingParams],
    List[int],
]
Prepare cropped images for content extraction.
Source code in omnidocs/tasks/text_extraction/mineruvl/utils.py
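A sketch of the recognition-side preparation based on the signature above. The interpretation of the final list of ints (assumed here to index back into blocks) should be verified against the source.

from omnidocs.tasks.text_extraction.mineruvl.utils import prepare_for_extract

# blocks come from the layout step (see parse_layout_output above)
crops, prompts, params, indices = prepare_for_extract(image, blocks)

# One cropped region, prompt, and sampling configuration per recognition request
for crop, prompt, sp in zip(crops, prompts, params):
    ...  # send (crop, prompt) to the VL model using sampling parameters sp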
convert_otsl_to_html
¶
Convert OTSL table format to HTML.
Source code in omnidocs/tasks/text_extraction/mineruvl/utils.py
simple_post_process
¶
Simple post-processing: convert OTSL tables to HTML.
Source code in omnidocs/tasks/text_extraction/mineruvl/utils.py
vllm
¶
VLLM backend configuration for MinerU VL text extraction.
MinerUVLTextVLLMConfig
¶
Bases: BaseModel
VLLM backend config for MinerU VL text extraction.
Uses VLLM for high-throughput GPU inference with:

- PagedAttention for efficient KV cache
- Continuous batching
- Optimized CUDA kernels