US20240371371
2024-11-07
Physics
G10L15/22
The system and method described focus on enhancing user interactions across domains and languages within a metaverse environment. Users join a metaverse session as avatars and interact with a virtual advisor avatar. The system processes voice queries to identify the user's preferred language and the query's domain, so that responses are delivered in that preferred language.
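As a minimal Python sketch of the query-profiling step, the following uses simple keyword lookups as stand-ins for language detection and domain classification; the patent does not name any specific speech or NLP components, and the function and table names here are illustrative assumptions.

```python
from dataclasses import dataclass

# Hypothetical stand-ins: language detection and domain classification are
# mocked with keyword lookups, since no specific components are named.

@dataclass
class QueryProfile:
    text: str
    language: str   # user's preferred language
    domain: str     # domain of the query

LANGUAGE_HINTS = {"hola": "es", "bonjour": "fr", "hello": "en"}
DOMAIN_HINTS = {"loan": "banking", "refund": "retail", "coverage": "insurance"}

def profile_voice_query(transcript: str) -> QueryProfile:
    """Identify the user's preferred language and the query's domain."""
    words = transcript.lower().split()
    language = next((LANGUAGE_HINTS[w] for w in words if w in LANGUAGE_HINTS), "en")
    domain = next((DOMAIN_HINTS[w] for w in words if w in DOMAIN_HINTS), "general")
    return QueryProfile(transcript, language, domain)

if __name__ == "__main__":
    print(profile_voice_query("Hello, I have a question about my loan payment"))
```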
This technology addresses challenges in user-entity interactions, particularly where advisors may lack the necessary knowledge or language skills to assist users effectively. The system aims to streamline communication by leveraging cross-domain and cross-linguistic capabilities, enhancing service efficiency and user satisfaction.
The system receives a voice query from a user, determines the user's preferred language, and identifies the query's domain. It then searches a knowledge server for a relevant response. If a matching response is found, it is converted into a voice response in the user's preferred language and delivered through the virtual advisor avatar. If no direct response is available, the system involves a regional advisor who can assist in real time, with translation support if necessary.
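The response-resolution flow could be sketched as below, assuming placeholder callables for the knowledge-server lookup, translation, text-to-speech, and advisor hand-off; none of these names or behaviors are specified in the source.

```python
from typing import Optional

# Hypothetical sketch of the response-resolution flow: look up a response,
# voice it in the preferred language, or escalate to a regional advisor.

KNOWLEDGE_SERVER = {
    ("banking", "loan payment due date"): "Your loan payment is due on the 5th of each month.",
}

def search_knowledge_server(domain: str, query: str) -> Optional[str]:
    return KNOWLEDGE_SERVER.get((domain, query))

def translate(text: str, target_language: str) -> str:
    return f"[{target_language}] {text}"           # placeholder translation

def synthesize_voice(text: str, language: str) -> bytes:
    return translate(text, language).encode()      # placeholder text-to-speech

def escalate_to_regional_advisor(query: str, language: str) -> bytes:
    # Real-time hand-off to a regional advisor, with translation support
    # if the advisor and user languages differ.
    return synthesize_voice("A regional advisor will join your session shortly.", language)

def resolve_query(domain: str, query: str, preferred_language: str) -> bytes:
    answer = search_knowledge_server(domain, query)
    if answer is not None:
        return synthesize_voice(answer, preferred_language)   # relayed via the advisor avatar
    return escalate_to_regional_advisor(query, preferred_language)
```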
The described system facilitates effective communication between users and advisors, even when language barriers exist. It updates its knowledge base with each interaction, reducing future reliance on physical advisors. This approach not only improves service efficiency but also optimizes resource usage, such as memory and network bandwidth.
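The knowledge-base update step might look like the sketch below, where an advisor-provided answer is cached against its domain and query so that repeat queries no longer require a physical advisor; the helper name and keying scheme are assumptions for illustration.

```python
# Hypothetical sketch of the knowledge-base update: once a regional advisor
# resolves a query, the answer is stored so future identical queries are
# served from the knowledge base instead of an advisor.

def record_resolution(knowledge: dict, domain: str, query: str, answer: str) -> None:
    """Cache an advisor-provided answer keyed by (domain, query)."""
    knowledge.setdefault((domain, query), answer)

knowledge_base: dict = {}
record_resolution(knowledge_base, "banking", "wire transfer limit", "The daily limit is 10,000.")
# A repeated query now resolves from the knowledge base, not a physical advisor.
assert ("banking", "wire transfer limit") in knowledge_base
```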
A metaverse server facilitates user authentication and avatar generation for metaverse sessions. A decentralized computing system processes voice queries, identifying language preferences and query domains. Secondary bots within this system manage query responses, verifying that each response matches the user's preferences before it is relayed through the metaverse server to the user.
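The component split described above could be sketched as follows, with a metaverse server handling authentication, avatar generation, and delivery, and a decentralized processing layer using a secondary bot to check the response language against the user's preference before relay; all class and method names are illustrative assumptions, not taken from the patent.

```python
# Hypothetical component sketch of the described architecture.

class MetaverseServer:
    def authenticate(self, user_id: str, credentials: str) -> bool:
        return bool(user_id and credentials)            # placeholder check

    def generate_avatar(self, user_id: str) -> str:
        return f"avatar-{user_id}"

    def relay_to_user(self, user_id: str, voice_response: bytes) -> None:
        print(f"Delivering {len(voice_response)} bytes to {user_id}'s session")

class SecondaryBot:
    """Checks that a response matches the user's language preference."""
    def __init__(self, preferred_language: str):
        self.preferred_language = preferred_language

    def matches_preference(self, response_language: str) -> bool:
        return response_language == self.preferred_language

class DecentralizedProcessor:
    """Relays a response through the metaverse server only when it matches preferences."""
    def __init__(self, server: MetaverseServer):
        self.server = server

    def handle(self, user_id: str, response: bytes, response_language: str,
               bot: SecondaryBot) -> None:
        if bot.matches_preference(response_language):
            self.server.relay_to_user(user_id, response)

server = MetaverseServer()
if server.authenticate("user42", "token"):
    server.generate_avatar("user42")
    DecentralizedProcessor(server).handle("user42", b"...", "es", SecondaryBot("es"))
```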