SLVMEval: Synthetic Meta Evaluation Benchmark for Text-to-Long Video Generation

📰 ArXiv cs.AI

arXiv:2603.29186v1 Announce Type: cross Abstract: This paper proposes the Synthetic Long-Video Meta-Evaluation benchmark (SLVMEval) for meta-evaluating text-to-video (T2V) evaluation systems. SLVMEval assesses these systems on videos of up to 10,486 s (approximately 3 h). The benchmark targets a fundamental requirement: whether the systems can accurately judge video quality in settings that are easy for humans to assess. We adopt a pairwise comparison…

Published 1 Apr 2026