DeepTest Tool Competition 2026: Benchmarking an LLM-Based Automotive Assistant

📰 ArXiv cs.AI

arXiv:2604.12615v1 Abstract: This report summarizes the results of the first edition of the Large Language Model (LLM) Testing competition, held as part of the DeepTest workshop at ICSE 2026. Four tools competed in benchmarking an LLM-based car manual information retrieval application, with the objective of identifying user inputs for which the system fails to appropriately mention warnings contained in the manual. The testing solutions were evaluated based on their effectiveness […]

Published 15 Apr 2026
Read full paper →