Align Anything
Training All-Modality Models to Follow Instructions with Language Feedback
An open-source framework for aligning any-to-any modality large models with human intentions and values using language feedback.
PKU-Alignment
Key Features
- Multi-Modality Support: Alignment across text, image, audio, and video modalities in various input/output combinations.
- Multiple Alignment Algorithms: Supports SFT, DPO, PPO, GRPO, and other alignment methods out of the box.
- Highly Modular Architecture: Designed for easy customization and extension to different tasks and model types.
- Vision-Language-Action Training: Dedicated support for embodied AI models with safety-focused alignment.
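To give a flavor of what one of the listed alignment methods computes, here is a minimal standalone sketch of the DPO (Direct Preference Optimization) objective for a single preference pair. This is the generic DPO loss formula, not the framework's actual API; the function name and arguments are illustrative only.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Generic DPO loss for one preference pair (illustrative, not the
    framework's API).

    Inputs are summed log-probabilities of the chosen and rejected
    responses under the trainable policy and a frozen reference model.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(margin)); log1p(exp(-x)) is the numerically stable form
    return math.log1p(math.exp(-margin))

# When policy and reference agree exactly, the margin is zero
# and the loss is log 2:
print(round(dpo_loss(-10.0, -12.0, -10.0, -12.0), 4))  # → 0.6931
```

The loss falls below log 2 as soon as the policy prefers the chosen response more strongly than the reference does, which is what pushes training toward the human-preferred outputs.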