US20250219859
2025-07-03
Electricity
H04L12/1822
The patent application describes a collaborative platform featuring a digital collaboration assistant that enhances meeting experiences by continuously monitoring and analyzing shared content in real time. This content includes voice, text chat messages, shared links, documents, and presentation materials. The assistant updates a structured summary log of important meeting content and interacts with participants to answer questions or provide additional information during the meeting.
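The application itself contains no code, but the structured summary log it describes can be pictured as a timestamped, per-source record that the assistant appends to as content arrives. The following is a minimal illustrative sketch, not taken from the patent; all class and method names are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SummaryEntry:
    timestamp: str
    source: str    # e.g. "voice", "chat", "document" (hypothetical labels)
    summary: str

@dataclass
class StructuredSummaryLog:
    """Hypothetical running log the assistant updates during a meeting."""
    entries: list[SummaryEntry] = field(default_factory=list)

    def add(self, source: str, summary: str) -> None:
        # Append a timestamped summary of newly shared or discussed content.
        ts = datetime.now(timezone.utc).isoformat()
        self.entries.append(SummaryEntry(ts, source, summary))

    def by_source(self, source: str) -> list[str]:
        # Retrieve summaries for one content channel, e.g. to answer
        # a participant's question about what was said in chat.
        return [e.summary for e in self.entries if e.source == source]
```

In this sketch, each monitored channel (voice transcript, chat, shared documents) feeds the same log, so the assistant can answer questions during the meeting rather than only afterward.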
Traditional collaborative platforms facilitate content sharing during meetings, improving document sharing, task tracking, and information exchange. However, existing digital collaboration assistants are limited to capturing transcripts and generating post-meeting summaries, often resulting in excessive and irrelevant information. The disclosed solution addresses these shortcomings by providing real-time analysis and interaction capabilities.
The collaborative platform is designed to operate in environments like Microsoft® Teams®, supporting phone/video calls, chat threads, email threads, document sharing, and task tracking. It leverages a digital collaboration AI assistant that monitors and analyzes meeting content in real time or near real time. The assistant updates structured summary logs of important content and interacts with participants to enhance the meeting experience.
The system includes a collaborative platform server connected to multiple user devices via a network. Participants can share and update meeting content through an application linked to the server. Content discussed or shared is stored in an insight database or semantic memory, accessible based on permissions. Participants can customize their collaborative environment displayed on their device's user interface.
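The permission-gated insight database described above could take many forms; as a minimal sketch under assumed semantics (the patent does not specify a schema, and every name here is hypothetical), access can be modeled as a mapping from insight to content plus an allow-list of participants:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class InsightDatabase:
    """Hypothetical store for meeting insights with per-insight permissions."""
    # insight id -> (content, set of participant ids allowed to read it)
    _items: dict[str, tuple[str, set[str]]] = field(default_factory=dict)

    def store(self, insight_id: str, content: str, allowed: set[str]) -> None:
        # Persist content discussed or shared during the meeting.
        self._items[insight_id] = (content, allowed)

    def fetch(self, insight_id: str, participant: str) -> Optional[str]:
        # Return the content only if this participant has permission.
        item = self._items.get(insight_id)
        if item is not None and participant in item[1]:
            return item[0]
        return None
```

A production system would presumably back this with the semantic memory the application mentions; the point of the sketch is only that retrieval is conditioned on the requesting participant's permissions.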
The collaboration assistant comprises several components: a collaboration monitor, content analyzer, insight extractor, collaboration interactor, structured summary log generator, and meeting summary generator. These components utilize machine-learning models to extract semantic meaning from content objects and determine actions based on quantitative similarity between embeddings. This setup allows the assistant to perform diverse functions using general generative models.
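The action-selection step described above, choosing an action by quantitative similarity between embeddings, is commonly implemented as a cosine-similarity comparison. The sketch below is illustrative only: the toy `embed` function is a stand-in for the machine-learning models the application describes, and all names are assumptions.

```python
import math

def embed(text: str) -> list[float]:
    # Toy stand-in for a learned embedding model: hashes characters
    # into a small fixed-size vector and normalizes it to unit length.
    vec = [0.0] * 8
    for i, ch in enumerate(text):
        vec[i % 8] += ord(ch)
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Dot product of unit vectors equals their cosine similarity.
    return sum(x * y for x, y in zip(a, b))

def choose_action(utterance: str, actions: dict[str, str]) -> str:
    # Pick the action whose description embeds closest to the utterance,
    # mirroring the "determine actions based on quantitative similarity
    # between embeddings" step in the application.
    query = embed(utterance)
    return max(actions, key=lambda name: cosine_similarity(query, embed(actions[name])))
```

With a real embedding model in place of `embed`, this single mechanism lets one general generative assistant route among diverse functions (answering questions, updating the summary log, surfacing documents) without per-function rules.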