US20240115954
2024-04-11
Human necessities
A63F13/69
The technology transforms two-dimensional images into three-dimensional neural radiance fields (NeRFs), which are then tailored to match text descriptions personalized to a player. This allows unique game items that fit a player's specific character requests to be generated rapidly, enhancing the gaming experience.
Creating characters and their accessories in computer games is traditionally a lengthy process. By applying machine-learning generation models, developers can streamline this process and produce characters and items far more quickly. The system builds on existing techniques such as text-to-image generation to enable real-time creation of highly personalized in-game assets.
The method uses a Contrastive Language-Image Pre-training (CLIP) model to score how well an image aligns with the provided text. Guided by this alignment score, a modified NeRF is derived from a base model so that it matches the player's description; the modified NeRF can subsequently be converted into a polygonal mesh. This mesh represents the virtual character's accessory, ready for integration into various computer simulations.
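The paragraph above leaves the guidance loop implicit, so the following is a minimal sketch of CLIP-guided optimization, assuming PyTorch and OpenAI's clip package. For brevity, a directly optimized image tensor stands in for views rendered from the base NeRF; in the described system the same gradient signal would instead adjust the NeRF itself, and the text prompt, hyperparameters, and stand-in "renderer" are illustrative assumptions rather than the patent's implementation.

```python
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model.eval()
for p in model.parameters():          # CLIP is used only as a frozen critic
    p.requires_grad_(False)

# Player-specific text description (see the personalization paragraph below).
tokens = clip.tokenize(["a jeweled dragon-scale shield"]).to(device)
with torch.no_grad():
    text_feat = model.encode_text(tokens)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

# Stand-in for views rendered from the base NeRF: a learnable 224x224 image.
view = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([view], lr=0.05)

# CLIP's expected input normalization.
mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

for step in range(200):
    optimizer.zero_grad()
    img_feat = model.encode_image((view.clamp(0, 1) - mean) / std)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    # Alignment score: cosine similarity between image and text embeddings.
    loss = -(img_feat * text_feat).sum()
    loss.backward()
    optimizer.step()

# In the described pipeline the same loss would update the base NeRF's parameters;
# the modified NeRF's density field could then be converted to a polygonal mesh
# (e.g. via marching cubes) for use as the in-game accessory.
```

Methods in this family typically render several camera views of the NeRF per step and average their CLIP scores; the single-image stand-in is only meant to show where the alignment score enters the loss.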
The personalization aspect is crucial; the text descriptions used to generate game items can be derived from player preferences or affinities towards certain games. This ensures that the generated accessories are not only unique but also resonate with individual players, enhancing their engagement and satisfaction within the game environment.
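Since the text descriptions are said to be derived from player preferences, the short sketch below shows one plausible way such a prompt might be assembled; the PlayerProfile fields and the template string are hypothetical illustrations, not the patent's schema.

```python
from dataclasses import dataclass

@dataclass
class PlayerProfile:
    favorite_genre: str       # e.g. inferred from the player's play history
    preferred_palette: str    # e.g. inferred from past cosmetic choices
    requested_item: str       # free-text request entered by the player

def build_item_prompt(profile: PlayerProfile) -> str:
    """Combine the player's explicit request with inferred affinities."""
    return (f"{profile.requested_item} with an {profile.preferred_palette} "
            f"color scheme, styled for a {profile.favorite_genre} game")

print(build_item_prompt(PlayerProfile(
    favorite_genre="dark fantasy",
    preferred_palette="emerald and silver",
    requested_item="a jeweled dragon-scale shield",
)))
```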
The described system operates within a broader ecosystem of consumer electronics, including various gaming consoles and devices. It supports seamless communication between client and server components over networks, ensuring efficient data exchange and enabling the rapid deployment of personalized game items across different platforms.
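To make the client/server exchange concrete, the sketch below shows one plausible request a client device might send, using only the Python standard library; the endpoint URL, field names, and response shape are assumptions, as the summary does not specify a wire format.

```python
import json
from urllib import request

# Hypothetical request body: the player's identifier plus the personalized
# text description produced from their preferences (see above).
payload = {
    "player_id": "p-12345",
    "description": "a jeweled dragon-scale shield with an emerald and silver "
                   "color scheme, styled for a dark fantasy game",
    "target_platform": "console",
}

req = request.Request(
    "https://game-items.example.com/v1/generate",   # hypothetical endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# A server implementing the described pipeline would run the CLIP-guided NeRF
# generation and reply with something like {"item_id": ..., "mesh_url": ...};
# the request is deliberately not sent here because the endpoint is illustrative.
```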