Monitoring and interpreting audio-visual streams to support operators in emergencies
When multiple operators in a control room observe large areas through numerous audio-visual cameras, an emergency produces a flood of incoming information that must be monitored, interpreted and acted upon. The SenseCity project seeks to develop multimodal scene analysis algorithms, a scalable real-time monitoring platform and optimal workflows to help operators accomplish their tasks more effectively.
Thanks to recent technological advances, large areas can now easily be observed remotely from control rooms. This is especially useful in public-domain monitoring as well as in transport and industrial applications such as the surveillance of bridges, tunnels, oil refineries, chemical plants and ports. However, the resulting data overload puts additional strain on the human operators who must surveil and analyze the scene.
SenseCity seeks to lighten that burden by tackling two main challenges. First, the project aims to reduce both the errors made and the time needed to detect an incident and alert the operator. Second, it aims to ensure that everyone involved in the response – from the police department to the fire brigade and medical services – receives a personalized information feed during an emergency.
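To make the idea of personalized feeds concrete, here is a minimal Python sketch of role-based alert routing. Everything in it – the Incident record, the SUBSCRIPTIONS table and the route_alert function – is a hypothetical illustration, not an interface published by the project.

from dataclasses import dataclass

# Hypothetical incident record produced by the scene analysis layer.
@dataclass
class Incident:
    kind: str          # e.g. "fire", "collision", "crowd_panic"
    location: str
    severity: int      # 1 (minor) .. 5 (critical)

# Assumed mapping from stakeholder role to the incident kinds it cares about.
SUBSCRIPTIONS = {
    "police":  {"collision", "crowd_panic"},
    "fire":    {"fire", "collision"},
    "medical": {"collision", "crowd_panic", "fire"},
}

def route_alert(incident: Incident) -> dict:
    # Build a per-role message for every stakeholder subscribed to this kind.
    feeds = {}
    for role, kinds in SUBSCRIPTIONS.items():
        if incident.kind in kinds:
            feeds[role] = (f"[severity {incident.severity}] "
                           f"{incident.kind} at {incident.location}")
    return feeds

print(route_alert(Incident("collision", "Bridge A, lane 2", severity=4)))
# -> one message each for police, fire and medical

In a real deployment, each role would map to a delivery channel (radio, dashboard, mobile push) rather than a printed string, but the filtering logic would stay the same.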
Current solutions present all operators involved in a monitoring activity with an identical visualization of the available information. SenseCity seeks to construct a context-aware, multimodal monitoring platform that analyzes video and audio data simultaneously and can be used in real time by multiple parties on many devices. The project will also investigate machine learning techniques to automate parts of the monitoring, data interpretation and information visualization process.
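As a rough illustration of what simultaneous audio-video analysis could involve, the sketch below uses late fusion: a detector per modality scores every time window for anomalies, and a weighted combination of those scores is thresholded to raise an event. The weights, threshold and toy scores are placeholder assumptions, not the project's actual algorithms.

import numpy as np

def fuse_scores(video_scores, audio_scores,
                w_video=0.6, w_audio=0.4, threshold=0.7):
    # Late fusion: weighted average of per-window anomaly scores in [0, 1].
    # Returns a boolean mask marking the windows flagged as anomalous.
    fused = w_video * np.asarray(video_scores) + w_audio * np.asarray(audio_scores)
    return fused >= threshold

# Toy example: five one-second windows scored by hypothetical detectors.
video = [0.1, 0.2, 0.9, 0.8, 0.3]   # e.g. sudden crowd motion
audio = [0.0, 0.1, 0.7, 0.9, 0.2]   # e.g. scream or breaking glass
print(fuse_scores(video, audio))    # -> [False False  True  True False]

Late fusion keeps the per-modality detectors independent, which makes it easy to add or drop sensors; tighter coupling (joint audio-visual models) is also conceivable but requires synchronized training data.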
SenseCity brings together industrial and academic experts in control rooms, workflow optimization, context-aware graphical user interfaces, machine learning, distributed intelligence in Internet of Things (IoT) environments, and smart cities.
As its main innovation goal, SenseCity will deliver a proof-of-concept platform for detecting events and anomalies across a large set of devices in an urban context. The platform will be tested in the City of Things living lab and open new avenues for applications in public-domain and industrial surveillance.
“SenseCity aims to support human operators who rely on audio-visual data from remote cameras to observe, interpret and act upon emergency situations. To achieve that goal, the project seeks to develop multimodal scene analysis algorithms, a scalable real-time monitoring platform and optimal workflows.”
SenseCity is an imec.icon research project funded by imec and Agentschap Innoveren & Ondernemen.
It ran from 01.06.2019 to 31.05.2021.