Align Anything

Training All-Modality Models to Follow Instructions with Language Feedback

An open-source framework for aligning any-to-any modality large models with human intentions and values using language feedback.

PKU-Alignment

Key Features

  1. Multi-Modality Support: Alignment across text, image, audio, and video modalities in various input/output combinations.

  2. Multiple Alignment Algorithms: Supports SFT, DPO, PPO, GRPO, and other alignment algorithms out of the box.

  3. Highly Modular Architecture: Designed for easy customization and extension to different tasks and model types.

  4. Vision-Language-Action Training: Dedicated support for embodied AI models with safety-focused alignment.
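
To make the algorithm list above concrete, here is a minimal, framework-agnostic sketch of the DPO objective (one of the listed methods). This is an illustrative implementation of the standard DPO loss in plain Python, not Align-Anything's actual API; the function name and arguments are hypothetical.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for a single preference pair (illustrative sketch).

    Each argument is the summed log-probability of the chosen or
    rejected response under the trainable policy or the frozen
    reference model; beta scales the implicit reward.
    """
    # Implicit reward margins of policy relative to the reference model
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_margin - rejected_margin)
    # -log(sigmoid(logits)), written stably as softplus(-logits)
    return math.log1p(math.exp(-logits))
```

When the policy matches the reference model exactly, both margins are zero and the loss is log 2; as the policy raises the chosen response's likelihood relative to the rejected one, the loss falls toward zero.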