Adhuran, Jayasingam, Khan, Nabeel and Martini, Maria G. (2024) Lossy encoding of time-aggregated neuromorphic vision sensor data based on point cloud compression. In: Workshop on Neuromorphic Vision: Advantages and Applications of Event Cameras (NeVi 2024); 29 Sep 2024, Milan, Italy. (Lecture Notes in Computer Science)
Abstract
Neuromorphic vision sensors capture visual scenes by reporting only changes in light intensity, in the form of spikes or events represented by their location in the (x, y) plane, a timestamp and a polarity (positive or negative change). This enables extremely high temporal resolution and high dynamic range as well as a compact representation of visual data, and the relevant sensors operate with very limited energy requirements. Such data can be compressed further prior to transmission, e.g. in an Internet of Things scenario. We have shown in previous work that lossless compression can be achieved by appropriately representing the data as a point cloud and adopting point cloud compression. In this paper, we show that the data can be compressed much further if minor losses in the data representation are accepted. For this purpose, we propose a modification of a classical point cloud encoder and define quality metrics specific to this use case. Results are reported in terms of achievable compression ratios for a specific compression level and different time aggregation intervals, and in terms of spatial and temporal distortion versus bits per event, supporting coding decisions based on the trade-off between quality and bitrate.
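To illustrate the representation the abstract describes, the sketch below shows one plausible way to map time-aggregated event data to point cloud frames: events within each aggregation interval become points whose coordinates are (x, y, time offset), with polarity carried as a per-point attribute. This is a minimal sketch under stated assumptions, not the authors' implementation; the function name `events_to_point_cloud`, the interval length, the sensor resolution and the synthetic data are all illustrative, and the paper's modified point cloud encoder is not replicated here.

```python
import numpy as np

def events_to_point_cloud(events, t_start, interval_us):
    """Map events in [t_start, t_start + interval_us) to one point cloud frame.

    events: structured array with fields 'x', 'y', 't' (microseconds), 'p'.
    Returns (points, polarities): points is an (N, 3) array of
    (x, y, t_offset) coordinates; polarities is an (N,) array of +/-1.
    """
    mask = (events["t"] >= t_start) & (events["t"] < t_start + interval_us)
    sel = events[mask]
    # Use the time offset within the interval as the third geometric axis,
    # so each aggregation interval becomes an independent point cloud frame
    # that a point cloud codec could then compress.
    points = np.stack(
        [sel["x"], sel["y"], sel["t"] - t_start], axis=1
    ).astype(np.float64)
    return points, sel["p"]

# Example: 1000 synthetic events over a 10 ms span, aggregated in 1 ms frames.
# The 346x260 resolution is an assumption (e.g. a DAVIS-class sensor).
rng = np.random.default_rng(0)
events = np.zeros(1000, dtype=[("x", "u2"), ("y", "u2"), ("t", "u8"), ("p", "i1")])
events["x"] = rng.integers(0, 346, 1000)
events["y"] = rng.integers(0, 260, 1000)
events["t"] = np.sort(rng.integers(0, 10_000, 1000))
events["p"] = rng.choice([-1, 1], 1000)

for t0 in range(0, 10_000, 1_000):
    pts, pol = events_to_point_cloud(events, t0, 1_000)
    print(f"frame at t={t0} us: {len(pts)} points")
```

With such a mapping, a longer aggregation interval yields denser point cloud frames (fewer frames, more points each), which is the knob behind the "different time aggregation intervals" reported in the results.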