Extractor

LightOn text extractor with multi-backend support.

LightOn OCR is optimized for document text extraction and recognition. It supports multiple backends: PyTorch, VLLM, and MLX.

LightOnTextExtractor

LightOnTextExtractor(backend: LightOnTextBackendConfig)

Bases: BaseTextExtractor

LightOn text extractor with multi-backend support.

LightOn OCR is optimized for document text extraction and has multilingual capabilities.

Supports multiple backends:

- PyTorch (HuggingFace Transformers)
- VLLM (high-throughput GPU)
- MLX (Apple Silicon)
- API (VLLM OpenAI-compatible server)

Example
from omnidocs.tasks.text_extraction import LightOnTextExtractor
from omnidocs.tasks.text_extraction.lighton import LightOnTextPyTorchConfig

# PyTorch backend
extractor = LightOnTextExtractor(
    backend=LightOnTextPyTorchConfig(device="cuda", torch_dtype="bfloat16")
)
result = extractor.extract(image)
print(result.content)

# VLLM backend for high-throughput inference
from omnidocs.tasks.text_extraction.lighton import LightOnTextVLLMConfig

extractor = LightOnTextExtractor(
    backend=LightOnTextVLLMConfig(gpu_memory_utilization=0.85)
)
result = extractor.extract(image)
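The backend choice in the examples above is static. If you want to pick a backend at runtime based on the host machine, one way to sketch it is below. This sketch is self-contained and only returns the *name* of a config class rather than importing omnidocs; the selection logic (MLX on macOS, VLLM on CUDA GPUs, PyTorch otherwise) is an illustrative assumption, not something the library prescribes.

```python
import platform


def pick_backend_name(system: str, has_cuda: bool) -> str:
    """Pick a LightOn backend config class name for the current machine.

    Returns the name of one of the documented config classes:
    LightOnTextMLXConfig, LightOnTextVLLMConfig, or LightOnTextPyTorchConfig.
    """
    if system == "Darwin":
        return "LightOnTextMLXConfig"      # Apple Silicon
    if has_cuda:
        return "LightOnTextVLLMConfig"     # high-throughput GPU serving
    return "LightOnTextPyTorchConfig"      # generic / CPU fallback


# Example: decide for the current host (CUDA detection is left to the caller).
print(pick_backend_name(platform.system(), has_cuda=False))
```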

Initialize LightOn text extractor.

PARAMETER DESCRIPTION
backend

Backend configuration (PyTorch, VLLM, MLX, or API)

TYPE: LightOnTextBackendConfig

Source code in omnidocs/tasks/text_extraction/lighton/extractor.py
def __init__(self, backend: LightOnTextBackendConfig):
    """
    Initialize LightOn text extractor.

    Args:
        backend: Backend configuration (PyTorch, VLLM, MLX, or API)
    """
    self.backend_config = backend
    self._client = None
    self._processor = None
    self._loaded = False
    self._sampling_params_class = None
    self._mlx_config = None
    self._apply_chat_template = None
    self._load_model()
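The constructor above follows a load-once pattern: it records the backend config, initializes the backend handles to `None`, loads the model eagerly, and sets a `_loaded` flag that later calls check. A minimal self-contained sketch of that pattern (stub class, not the real extractor):

```python
class LazyExtractor:
    """Sketch of the load-once pattern used by the extractor's __init__:
    the constructor stores the config and loads the model exactly once,
    and extract() refuses to run unless loading succeeded."""

    def __init__(self, backend):
        self.backend_config = backend
        self._client = None          # backend handle, populated by _load_model
        self._loaded = False
        self._load_model()

    def _load_model(self):
        # Stand-in for loading weights / starting a backend client.
        self._client = object()
        self._loaded = True

    def extract(self, image):
        if not self._loaded:
            raise RuntimeError("Model not loaded.")
        return f"extracted:{image}"


ex = LazyExtractor(backend="pytorch")
print(ex.extract("page.png"))
```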

extract

extract(
    image: Union[Image, ndarray, str, Path],
    output_format: Literal["html", "markdown"] = "markdown",
) -> TextOutput

Extract text from an image.

PARAMETER DESCRIPTION
image

Input image (PIL Image, numpy array, or file path)

TYPE: Union[Image, ndarray, str, Path]

output_format

Desired output format ('html' or 'markdown')

TYPE: Literal['html', 'markdown'] DEFAULT: 'markdown'

RETURNS DESCRIPTION
TextOutput

TextOutput with extracted text content

Source code in omnidocs/tasks/text_extraction/lighton/extractor.py
def extract(
    self,
    image: Union[Image.Image, np.ndarray, str, Path],
    output_format: Literal["html", "markdown"] = "markdown",
) -> TextOutput:
    """
    Extract text from an image.

    Args:
        image: Input image (PIL Image, numpy array, or file path)
        output_format: Desired output format ('html' or 'markdown')

    Returns:
        TextOutput with extracted text content
    """
    if not self._loaded:
        raise RuntimeError("Model not loaded.")

    # Prepare image
    image_obj = self._prepare_image(image)

    # Run extraction
    config_type = type(self.backend_config).__name__

    if config_type == "LightOnTextPyTorchConfig":
        raw_output = self._extract_pytorch(image_obj)
    elif config_type == "LightOnTextVLLMConfig":
        raw_output = self._extract_vllm(image_obj)
    elif config_type == "LightOnTextMLXConfig":
        raw_output = self._extract_mlx(image_obj)
    else:
        raise TypeError(f"Unknown backend: {config_type}")

    # Post-process output
    content = _simple_post_process(raw_output)

    # Convert to desired format
    if output_format == "html":
        content = self._markdown_to_html(content)

    return TextOutput(
        content=content,
        format=OutputFormat(output_format),
        raw_output=raw_output,
        plain_text=content if output_format == "markdown" else self._html_to_text(content),
        image_width=image_obj.width,
        image_height=image_obj.height,
        model_name="lighton-ocr",
    )