JEPA (Joint Embedding Predictive Architecture), an emerging technique from Meta, makes predictions in latent space rather than pixel space: conceptually similar to inpainting, but operating on deeper semantic representations.
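A minimal sketch of the latent-space prediction idea, using toy linear encoders with random placeholder weights (all names and dimensions here are illustrative assumptions, not Meta's implementation): a context encoder and a target encoder map observations into a shared latent space, and a predictor is scored on how well it maps the context embedding toward the target embedding, with no pixel reconstruction involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; real JEPA variants use deep encoders, not linear maps.
dim_obs, dim_latent = 8, 4
W_context = rng.normal(size=(dim_latent, dim_obs))  # context encoder
W_target = rng.normal(size=(dim_latent, dim_obs))   # target encoder
W_pred = rng.normal(size=(dim_latent, dim_latent))  # latent predictor

def jepa_loss(context_obs: np.ndarray, target_obs: np.ndarray) -> float:
    # Encode both views, predict the target's latent from the context's,
    # and measure the error in latent space (not pixel space).
    z_ctx = W_context @ context_obs
    z_tgt = W_target @ target_obs
    z_hat = W_pred @ z_ctx
    return float(np.mean((z_hat - z_tgt) ** 2))

loss = jepa_loss(rng.normal(size=dim_obs), rng.normal(size=dim_obs))
```

The key design point carried over from the note above: the loss never touches raw observations, only their embeddings.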
Transmitting embeddings instead of raw data can help preserve privacy.
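A hedged sketch of that privacy pattern, with a deliberately simplistic stand-in encoder (the hash-based `embed` function is a placeholder assumption, not a real model): the client encodes locally and only the fixed-size vector crosses the network, so the raw record never leaves the device.

```python
import hashlib
import numpy as np

def embed(raw: bytes, dim: int = 16) -> np.ndarray:
    # Placeholder encoder: derive a deterministic fixed-size vector from
    # a hash of the raw bytes. A real system would use a learned encoder.
    digest = hashlib.sha256(raw).digest()
    return np.frombuffer(digest[:dim], dtype=np.uint8).astype(np.float32) / 255.0

raw_record = b"camera frame bytes ..."  # stays on-device
payload = embed(raw_record)             # only this is transmitted
```

Note that plain embeddings are not a privacy guarantee by themselves (inversion attacks exist); this only illustrates the data-minimization idea in the note.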
Moreover, the design of FL-related protocols is likely to bring further new regulations.
Many types of information cannot be perceived or collected through direct sensor-based sensing, such as predicting traffic congestion or identifying potential security threats in real time. For such applications, where straightforward sensing is unavailable, it is challenging to infer and predict the true state by directly fusing various indirect supporting sensory data, which highlights a key limitation of traditional multimodal research.
Distributed scene understanding among camera agents: cameras need to understand what they see, not merely perform classification.
Predicting traffic congestion:
Challenge: Direct sensors like traffic cameras and road sensors provide data on vehicle counts and speeds. However, predicting congestion accurately requires integrating additional data such as weather conditions, public event schedules, and historical traffic patterns, which are not directly sensed. Traditional methods might not capture the nuanced factors affecting traffic flow, leading to less effective congestion management.
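To make the integration concrete, here is an illustrative sketch (all feature names, normalization constants, and weights are assumptions for illustration only): directly sensed features (vehicle counts, speeds) are fused with contextual signals no single sensor provides (weather, events, historical patterns) before scoring congestion.

```python
def fuse_features(vehicle_count, avg_speed_kmh, rain_mm, event_nearby, hist_index):
    # Combine sensed and non-sensed signals into one normalized vector.
    return [
        vehicle_count / 500.0,       # assumed road capacity of 500 vehicles
        avg_speed_kmh / 120.0,       # assumed free-flow speed of 120 km/h
        rain_mm / 50.0,              # assumed heavy-rain ceiling of 50 mm
        1.0 if event_nearby else 0.0,
        hist_index,                  # historical congestion index in [0, 1]
    ]

def congestion_score(features, weights=(0.35, -0.30, 0.15, 0.10, 0.30)):
    # Toy linear model: counts, rain, events, and history raise the score;
    # higher average speed lowers it. Weights are placeholders.
    return sum(w * f for w, f in zip(weights, features))

score = congestion_score(fuse_features(400, 20, 10, True, 0.8))
```

The point of the sketch is structural: the congestion estimate depends on inputs that cameras and road sensors alone never observe.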
Identifying potential security threats in real time:
Challenge: Surveillance cameras and police reports provide explicit data on incidents. However, predicting and preventing threats requires analyzing unobvious supporting data like social media trends, crowd movement patterns, and anonymous tip-offs, which are not directly captured by sensors. Reliance on direct sensor data alone can result in missed early indicators of potential threats, compromising public safety.
Monitoring air quality in urban areas:
Challenge: Direct sensors measure pollutants like CO2 and particulate matter. However, understanding the full impact on air quality requires data on traffic patterns, industrial activities, weather conditions, and population density, which are not all directly sensed. Traditional approaches might fail to integrate these diverse data sources effectively, resulting in incomplete or delayed responses to air quality issues.
In autonomous driving, real-time integration of data from multiple sensors (LiDAR, cameras, GPS, etc.) allows the vehicle to adapt promptly to changing road conditions and traffic situations.
Our overarching goal is to enhance the prediction ability of sensing systems by deriving and integrating meaningful insights from diverse distributed data.
To address the limitations of traditional multimodal research, our solution develops a "Distributed Brain" mechanism that integrates diverse data modalities through distributed networks and leverages LLMs for semantic extraction, enabling comprehensive, context-rich, real-time analysis and predictive insights.
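A hedged sketch of the "Distributed Brain" flow under stated assumptions (the `extract_semantics` stub stands in for an actual LLM call, and all function and agent names are hypothetical): each agent converts its local observation into a compact semantic summary, and a central fusion step merges the summaries into one shared context for downstream analysis.

```python
def extract_semantics(agent_id: str, observation: str) -> dict:
    # Stand-in for LLM-based semantic extraction on the agent side.
    return {"agent": agent_id, "summary": observation.lower().strip()}

def fuse(summaries: list) -> str:
    # Central fusion: merge per-agent summaries into a shared context
    # string that a reasoning model could consume.
    return " | ".join(f"{s['agent']}: {s['summary']}" for s in summaries)

reports = [
    extract_semantics("cam-1", "Crowd forming near the north gate"),
    extract_semantics("cam-2", "Traffic slowing on Main St"),
]
context = fuse(reports)
```

Only the compact summaries travel to the fusion point, which matches both the privacy note (embeddings/summaries instead of raw data) and the distributed-integration goal above.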