Research on Optimizing Virtual Reality User Experience Based on Large Language Models
DOI: https://doi.org/10.54097/zgaxvc97

Keywords: Large language model, Virtual reality, User experience optimization

Abstract
With the rapid development of virtual reality (VR) technology, further improving the user experience in this field has become a research hotspot. This paper examines the application of Large Language Models (LLMs) in VR and the paths for optimizing it. First, the basic principles and core technologies of LLMs are explained, with emphasis on their working mechanism. Next, applications of LLMs in VR are discussed, including virtual assistants, intelligent recommendation, natural language interaction, and multi-modal collaboration. Finally, an LLM-based path for optimizing the VR user experience is proposed: improving the accuracy of voice interaction, delivering personalized content recommendation, raising the interaction quality of dialogue systems, and strengthening multi-modal data fusion, thereby enhancing the immersion and interactivity of virtual reality.
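To make the proposed optimization path concrete, the following is a minimal, illustrative Python sketch of an LLM-backed voice-interaction loop for a VR assistant that fuses a spoken command with recent interaction history before querying a language model. All names here (transcribe_speech, llm_complete, UserProfile, handle_utterance) are hypothetical placeholders for this sketch, not components described in the paper; a real system would substitute an actual speech-to-text model and a hosted LLM.

```python
# Illustrative sketch only: placeholder ASR and LLM calls stand in for
# real services. Names are hypothetical, not from the paper.
from dataclasses import dataclass, field


@dataclass
class UserProfile:
    """Interaction history used for personalized recommendation."""
    history: list = field(default_factory=list)


def transcribe_speech(audio: bytes) -> str:
    """Placeholder ASR step; a real system would call a speech-to-text model."""
    return "show me the museum exhibit on ancient Rome"


def llm_complete(prompt: str) -> str:
    """Placeholder LLM call; a real system would query a hosted model."""
    return "Loading the Ancient Rome gallery and highlighting three related exhibits."


def handle_utterance(audio: bytes, profile: UserProfile) -> str:
    """One turn of the voice-interaction loop: transcribe, fuse context, respond."""
    text = transcribe_speech(audio)
    profile.history.append(text)
    # Fuse the current command with recent interaction history so the LLM
    # can personalize its response (the recommendation/fusion idea above).
    prompt = (
        "You are a VR assistant. Recent user actions: "
        + "; ".join(profile.history[-5:])
        + f"\nUser said: {text}\nRespond with the next scene action."
    )
    return llm_complete(prompt)


if __name__ == "__main__":
    profile = UserProfile()
    print(handle_utterance(b"<raw audio>", profile))
```

In a production pipeline, the prompt-construction step is where multi-modal signals (gaze, gesture, scene state) would be serialized alongside the transcript; the loop structure itself stays the same.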
License
Copyright (c) 2025 Journal of Computing and Electronic Information Management

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.