PyTorch

PyTorch/HuggingFace backend configuration for Qwen3-VL text extraction.

QwenTextPyTorchConfig

Bases: BaseModel

PyTorch/HuggingFace backend configuration for Qwen3-VL text extraction.

This backend uses the transformers library with PyTorch for local GPU inference.

Requires: torch, transformers, accelerate, qwen-vl-utils

Example
config = QwenTextPyTorchConfig(
    model="Qwen/Qwen3-VL-8B-Instruct",
    device="cuda",
    torch_dtype="bfloat16",
)
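To show how a config like this typically feeds model loading, here is a minimal, self-contained sketch. The dataclass below is a hypothetical stand-in for QwenTextPyTorchConfig (the real class is a pydantic BaseModel), and to_from_pretrained_kwargs is an illustrative helper, not part of the library; it maps the config fields onto keyword arguments of the kind transformers' from_pretrained accepts.

```python
from dataclasses import dataclass


# Hypothetical stand-in for QwenTextPyTorchConfig; field names and
# defaults mirror the example above, not the library's actual schema.
@dataclass
class QwenTextPyTorchConfig:
    model: str = "Qwen/Qwen3-VL-8B-Instruct"
    device: str = "cuda"
    torch_dtype: str = "bfloat16"


def to_from_pretrained_kwargs(cfg: QwenTextPyTorchConfig) -> dict:
    """Illustrative mapping from config fields to from_pretrained-style
    keyword arguments (transformers accepts dtype names as strings)."""
    return {
        "pretrained_model_name_or_path": cfg.model,
        "torch_dtype": cfg.torch_dtype,
        "device_map": cfg.device,
    }


kwargs = to_from_pretrained_kwargs(QwenTextPyTorchConfig())
print(kwargs)
```

In actual use, such kwargs would be passed to a transformers model-loading call on a machine with the listed dependencies installed; the sketch only demonstrates the shape of the configuration.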