LLM Assisted Vehicle Accident Assessment

9/22/2025

Drafting traffic accident adjudication documents is traditionally labor-intensive and error-prone. Each document requires strict adherence to legal terminology, regulatory compliance, and logical consistency. The manual approach often involves repeated reviews, which consumes valuable time, reduces efficiency, and erodes public trust in the fairness of the process.

By adopting AI and LLM technologies, supervisory agencies can streamline this process. LLMs analyze multimodal accident data—including text reports, videos, and images—extract critical information, and automatically generate structured outputs such as safety assessments, collision reports, and insurance claims. This approach significantly enhances accuracy, efficiency, and resource utilization.
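To make the idea concrete, the sketch below shows in Python how a free-text accident report might be turned into a structured draft by calling a locally hosted model through an OpenAI-compatible chat endpoint. It is an illustration only; the endpoint URL, model name, and output fields are assumptions, not details of the deployed system.

```python
# Illustrative sketch: extracting a structured accident summary from a free-text
# report via an OpenAI-compatible chat endpoint. The endpoint URL, model name, and
# field names are assumptions, not the deployed system's actual configuration.
import json
import requests

FIELDS = ["accident_time", "location", "parties", "collision_type",
          "preliminary_fault_assessment"]

def draft_structured_summary(report_text: str,
                             endpoint: str = "http://localhost:8000/v1/chat/completions",
                             model: str = "gemma-3-27b-it") -> dict:
    """Ask the model to return a JSON object covering the listed fields."""
    prompt = (
        "Extract the following fields from the accident report and reply with JSON only: "
        + ", ".join(FIELDS) + "\n\nReport:\n" + report_text
    )
    resp = requests.post(endpoint, json={
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.1,          # low temperature for consistent, factual drafts
    }, timeout=120)
    resp.raise_for_status()
    content = resp.json()["choices"][0]["message"]["content"]
    return json.loads(content)       # downstream human review still validates every field
```

In practice, the structured output would feed the templates for safety assessments, collision reports, or insurance claims rather than being used verbatim.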

Challenges

A customer in Taiwan requested an LLM-based solution to address the following challenges: 

  • High Data Security Costs: Managing sensitive legal data on cloud platforms requires major investment in privacy protection and secure data transmission. 
  • Knowledge Integration: Incorporating complex internal knowledge bases demands extensive effort and time, which slows overall efficiency. 
  • Consistent Accuracy: Ensuring LLMs comply with strict legal, data security, and privacy regulations is critical. A fine-tuning solution is essential to maintain accuracy and reliability.

Advantech Solutions

Advantech provides a comprehensive, on-premises LLM solution for documenting traffic accidents that integrates hardware and software: the AIR-030 high-performance LLM inference system, the AIR-520 LLM fine-tuning system (supporting Gemma 3 27B), and the Edge AI SDK/GenAI Studio. The solution is deployed entirely at the edge, ensuring data security and privacy.

Advantech GenAI Studio offers a complete workflow for LLM fine-tuning and training, enabling the customer to organize reference materials—such as traffic regulations, accident records, committee recommendations, and ruling reports—into a structured question-answer dataset on the AIR-520 edge AI HPC. With GenAI Studio, the customer can load the Gemma 3 27B model and perform fine-tuning; once training is complete, model validation and testing can be carried out directly. Before the fine-tuned model is deployed to a chatbot inference service, GenAI Studio can also convert it into an INT4-quantized version, significantly boosting inference efficiency.
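The sketch below illustrates, in general open-source terms, what the dataset-preparation step looks like: question-answer pairs distilled from the reference material are written to a JSONL file, and a LoRA configuration keeps fine-tuning of a 27B-parameter model tractable on workstation GPUs. GenAI Studio wraps these steps in its own workflow, so the file format, hyperparameters, and library calls here are assumptions rather than a description of its internals.

```python
# Minimal sketch of dataset preparation for fine-tuning, assuming a JSONL
# question-answer format and LoRA-style parameter-efficient training.
# All file names, example content, and hyperparameters are illustrative.
import json
from peft import LoraConfig

# 1) Distil regulations, accident records, committee recommendations, and past
#    rulings into question-answer pairs, one JSON object per line.
qa_records = [
    {
        "question": "Which party bears primary liability when a left-turning vehicle "
                    "collides with an oncoming vehicle travelling straight?",
        "answer": "Under the applicable right-of-way rules, the left-turning driver is "
                  "presumed to bear primary liability unless evidence shows otherwise.",
    },
]
with open("qa_dataset.jsonl", "w", encoding="utf-8") as f:
    for record in qa_records:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# 2) A LoRA configuration trains a small set of adapter weights instead of all
#    27B parameters, which keeps fine-tuning feasible on two workstation GPUs.
lora_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
```

The INT4 conversion mentioned above is typically a separate post-training quantization step applied after validation, before the model is packaged for the inference service.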

The entire end-to-end process—from dataset preparation, training, and validation to deployment—was executed seamlessly on the AIR-520 edge server, equipped with an AMD EPYC 7003 processor, two NVIDIA RTX™ 6000 Ada GPUs, and 2TB ai100 AI SSDs. The AIR-520 helps reduce costs by using its AI SSDs to extend effective GPU VRAM capacity. Its integrated 1200W PSU (700W total output) ensures reliable 24/7 operation.

The fine-tuned, INT4-quantized Gemma 3 27B model is then deployed with the Edge AI SDK on the AIR-030 edge AI inference system, accelerated by NVIDIA Jetson AGX Orin™, to deliver dedicated LLM inference services. Overall, Advantech’s integrated solution not only streamlines document review, data retrieval, and report generation but also ensures cost efficiency when scaling across multiple sites.
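As a rough illustration of the quantized edge inference step described above (not the Edge AI SDK’s actual serving stack), the following sketch loads a hypothetical INT4-quantized export with the open-source llama-cpp-python runtime and requests a draft liability summary.

```python
# Illustrative only: running an INT4-quantized model locally with the open-source
# llama-cpp-python runtime. The Edge AI SDK's serving stack and the quantized
# file name shown here are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-3-27b-it-Q4_K_M.gguf",  # hypothetical INT4-quantized export
    n_gpu_layers=-1,                          # offload all layers to the GPU
    n_ctx=8192,                               # long context for full accident files
)

response = llm.create_chat_completion(messages=[
    {"role": "system", "content": "You draft traffic accident adjudication documents."},
    {"role": "user", "content": "Summarize liability based on the following police "
                                "report excerpt: ..."},
])
print(response["choices"][0]["message"]["content"])
```

Hosting the quantized model this way keeps every request on the local device, which is what allows the chatbot service to meet the data security and privacy requirements described earlier.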

Benefits 

  • Work efficiency is expected to increase by over 60%, reducing the time for drafting adjudication documents from hours/days to minutes. 
  • Reduced error rate, minimizing disputes caused by manual drafting or judgment errors. 
  • Legal professionals can focus on more complex case analysis, truly optimizing human resource allocation. 
  • Enhanced public trust and confidence, with centralized data processing strengthening the government’s public image.