US20250085131
2025-03-13
Physics
G01C21/3626
The patent application describes innovative methods and systems for enhancing automatic content generation, imaging, and navigation using generative artificial intelligence (AI). By integrating various inputs such as mapping data, original imagery, live information, and user-generated data, the system generates prompts that serve as intermediate outputs. These prompts are then fed into generative AI systems, which produce enhanced images, maps, and navigation directions. This approach leverages advanced AI models, including neural networks, to improve the quality and relevance of the generated content.
Current mapping applications often rely on static satellite images, which can be outdated and lack detail. Generative AI platforms such as OpenAI's models and Midjourney use natural language processing (NLP) to turn text prompts into generated content. However, these systems require users to understand specific prompt conventions, which can be cumbersome. Existing mapping solutions such as Google Earth and Google Maps offer basic integrations of weather data and 3D models but lack real-time satellite imagery and comprehensive environmental effects.
The proposed system addresses these limitations by integrating various data sources to produce enhanced audiovisual content. Inputs such as live traffic and weather data, user-generated selections, and existing imagery are processed to create prompts. These prompts are used by generative AI systems to produce enhanced outputs like images or videos that reflect real-time conditions. For instance, a search for a business can result in an updated image with current weather effects or traffic conditions.
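The prompt-generation step described above can be illustrated with a minimal sketch. The names (`LiveContext`, `build_prompt`) and the prompt wording are illustrative assumptions, not taken from the application; the point is only that live inputs are folded into a text prompt that serves as the intermediate output fed to a generative model.

```python
from dataclasses import dataclass

@dataclass
class LiveContext:
    """Hypothetical container for the live inputs named in the application."""
    weather: str       # e.g. "light rain"
    traffic: str       # e.g. "heavy congestion"
    time_of_day: str   # e.g. "dusk"

def build_prompt(place: str, ctx: LiveContext) -> str:
    """Combine a user's search target with live conditions into a text
    prompt suitable for a text-to-image model (the intermediate output)."""
    return (
        f"Photorealistic street-level view of {place}, "
        f"{ctx.time_of_day}, {ctx.weather}, {ctx.traffic} on nearby roads"
    )

# A search for a business plus current conditions yields one prompt string:
prompt = build_prompt("Joe's Diner", LiveContext("light rain", "moderate traffic", "dusk"))
```

In this sketch the prompt string would then be submitted to whatever generative AI system produces the enhanced image or video.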
This technology offers significant improvements for navigation and content generation applications. Users searching for properties or businesses can receive real-time images or videos that reflect current appearances. Driving directions can be accompanied by dynamic imagery that updates as the user progresses along a route. The system's ability to incorporate live data ensures that the content remains relevant and informative.
The application includes detailed descriptions of various embodiments and processes illustrated in accompanying figures. These include flowcharts depicting the generation of user interfaces, prompts, content layers, and map updates. The figures demonstrate how inputs are processed to enhance map images with overlays based on weather data and other external information. Additionally, examples show how 2D and 3D map images can be augmented with rendered objects and environmental effects to provide a more immersive experience.
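The overlay behavior the figures depict, augmenting a base map image with a semi-transparent environmental layer, can be approximated by per-pixel alpha blending. The following stdlib-only sketch uses assumed function names and represents images as nested lists of RGB tuples; a real implementation would operate on image buffers.

```python
def blend_pixel(base, overlay, alpha):
    """Alpha-blend one RGB overlay pixel onto a base map pixel.
    base and overlay are (r, g, b) tuples; alpha in [0, 1] is overlay opacity."""
    return tuple(round(alpha * o + (1 - alpha) * b) for b, o in zip(base, overlay))

def apply_overlay(base_img, overlay_img, alpha=0.4):
    """Composite a semi-transparent weather layer (e.g. a rain haze)
    over every pixel of a base map image (rows of RGB tuples)."""
    return [
        [blend_pixel(bp, op, alpha) for bp, op in zip(brow, orow)]
        for brow, orow in zip(base_img, overlay_img)
    ]

# A blue-grey "rain" pixel blended over a green map pixel at 40% opacity:
# blend_pixel((50, 160, 60), (120, 130, 150), 0.4) -> (78, 148, 96)
```

Stacking several such layers (traffic coloring, weather effects, rendered 3D objects) over a 2D or 3D base image is one straightforward way to realize the content layers the flowcharts describe.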