MLX-VLM: Run VLMs on Mac with MLX Inference & Fine-Tuning
The MLX-VLM package runs vision-language models (VLMs) and omni models on Apple Silicon via MLX. It supports text, image, audio, and video inference with multi-modal inputs, exposes CLI, UI, and server interfaces, and includes LoRA fine-tuning.
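As a quick sketch of the CLI interface described above (requires an Apple Silicon Mac; the model name and exact flags below are illustrative and should be checked against the package's current documentation):

```shell
# Install the package (pulls in MLX; Apple Silicon only)
pip install mlx-vlm

# Run image + text inference from the command line.
# The model identifier is an example quantized checkpoint from the
# mlx-community Hugging Face organization; any supported VLM works.
python -m mlx_vlm.generate \
  --model mlx-community/Qwen2-VL-2B-Instruct-4bit \
  --max-tokens 100 \
  --prompt "Describe this image." \
  --image path/to/image.jpg
```

The same workflow is available from Python via the package's `load` and `generate` functions for embedding inference in scripts or servers.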