Secure and Scalable Multi-Modal Vehicle Systems: A Cloud-Based Framework for Real-Time LLM-Driven Interactions
Abstract
This research explores the development of secure and scalable multi-modal vehicle systems built on a cloud-based framework for real-time interactions driven by Large Language Models (LLMs). Integrating multi-modal interaction capabilities with LLMs and cloud computing infrastructure extends the functionality of automotive systems. The framework streams sensor data from cameras, LiDAR, and radar to the cloud for real-time processing, enabling tasks such as object detection, driver assistance, and navigation support. Robust cybersecurity measures ensure data integrity and privacy throughout the system. Experimental evaluations with Tesla Model S vehicles demonstrate high object-detection accuracy, low-latency processing, and efficient resource utilization. The study contributes to advancing driver safety and comfort and to the evolution of autonomous vehicle technologies, emphasizing scalability, security, and user-centric design in automotive applications.
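To give a rough sense of the kind of multi-modal upload with integrity protection that the abstract describes, the Python sketch below packages a camera/LiDAR/radar snapshot and attaches an HMAC-SHA256 tag before it would be sent to a cloud inference endpoint. This is a minimal illustration under stated assumptions, not the authors' published implementation; all names here (SensorFrame, SECRET_KEY, build_payload, verify_payload) are hypothetical.

```python
"""Illustrative sketch only: packaging a multi-modal sensor frame with an
integrity tag before upload to a (hypothetical) cloud inference endpoint."""

import hashlib
import hmac
import json
import time
from dataclasses import dataclass, asdict

# Shared key for HMAC-based integrity/authenticity. In practice this would be
# provisioned per vehicle via a key-management service (assumption).
SECRET_KEY = b"vehicle-demo-key"


@dataclass
class SensorFrame:
    """One synchronized snapshot of the vehicle's multi-modal sensors (placeholder fields)."""
    vehicle_id: str
    timestamp: float
    camera_jpeg_b64: str   # base64-encoded camera image (placeholder)
    lidar_points: list     # simplified [x, y, z] point list (placeholder)
    radar_tracks: list     # simplified [range_m, velocity_mps] pairs (placeholder)


def build_payload(frame: SensorFrame) -> dict:
    """Serialize a frame and attach an HMAC-SHA256 tag for integrity checking."""
    body = json.dumps(asdict(frame), sort_keys=True).encode("utf-8")
    tag = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body.decode("utf-8"), "hmac_sha256": tag}


def verify_payload(payload: dict) -> bool:
    """Cloud-side check that the payload was not tampered with in transit."""
    expected = hmac.new(
        SECRET_KEY, payload["body"].encode("utf-8"), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, payload["hmac_sha256"])


if __name__ == "__main__":
    frame = SensorFrame(
        vehicle_id="demo-vehicle-001",
        timestamp=time.time(),
        camera_jpeg_b64="",            # image data omitted in this sketch
        lidar_points=[[1.2, 0.4, 0.1]],
        radar_tracks=[[35.0, -2.5]],
    )
    payload = build_payload(frame)
    print("integrity check passed:", verify_payload(payload))
```

The HMAC step stands in for the paper's broader cybersecurity measures; a production system would additionally use transport encryption and managed key rotation, which are outside the scope of this sketch.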
Article Details
This work is licensed under a Creative Commons Attribution 4.0 International License.