This research project aims to design a technological solution that interprets socially occupied space automatically. We defined the following concepts for identifying group dynamics:
Physically occupied space: a tangible area whose occupation is physically marked by a human body or an object.
Socially occupied space: a tangible area in which no physical object marks the occupation, but whose occupancy is recognized through a social agreement.
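Assuming a 2D floor-plane representation, these two concepts can be sketched as simple data structures. The names and fields below are illustrative, not taken from the project code:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# (x, y) position on the floor plane, in metres (illustrative convention)
Point = Tuple[float, float]

@dataclass
class PhysicallyOccupiedSpace:
    """Area whose occupation is marked by a human body or an object."""
    footprint: List[Point]  # polygon outline of the occupied area
    occupant: str           # e.g. "person" or "object"

@dataclass
class SociallyOccupiedSpace:
    """Area occupied only by social agreement, e.g. the empty space
    enclosed by the members of a conversing group."""
    boundary: List[Point]   # polygon outline of the agreed area
    members: List[PhysicallyOccupiedSpace] = field(default_factory=list)
```

A socially occupied space is thus modelled as a region plus the physically occupied spaces of the people whose arrangement defines it.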
We focus on the information needed to interpret socially occupied space and on how to obtain it from depth sensors. From physically occupied spaces we can extract the body's location, body orientation, and viewing direction.
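As a minimal sketch of one of these features, body orientation on the ground plane can be estimated from two skeleton joints delivered by a depth camera such as the Kinect v2. The function below is a hypothetical illustration, assuming the Kinect convention that x/z span the ground plane and y points up:

```python
import math

def body_orientation(left_shoulder, right_shoulder):
    """Estimate the body's facing direction on the ground plane as the
    angle perpendicular to the shoulder line, in radians (0 = +x axis).
    Joints are (x, y, z) tuples; x/z lie on the ground plane.
    Illustrative helper, not the project's actual implementation."""
    lx, _, lz = left_shoulder
    rx, _, rz = right_shoulder
    # Shoulder vector from right shoulder to left shoulder.
    sx, sz = lx - rx, lz - rz
    # Rotate the shoulder vector by -90 degrees to get the facing normal
    # (points out of the person's chest).
    fx, fz = sz, -sx
    return math.atan2(fz, fx)
```

Viewing direction could be estimated analogously from head and neck joints; body location is simply the spine-base joint projected onto the floor plane.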
The following objectives guide this research project:
- Find an approach to technically measure social features at the spatial level in order to construct socially occupied spaces.
- Implement a set of algorithms that process social features and analyse the construction of socially occupied spaces.
- Evaluate the algorithms by comparing the detected socially occupied spaces with realistic situations in closed spaces.
- Build a sensor network system to observe movement in a larger space and interpret socially occupied space over time.
Technologies implemented (up to 03/2022)
Python, C#, desktop application, depth cameras, skeleton tracking, spatial-temporal data analysis, social behaviour modelling, unsupervised machine learning
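As a hedged illustration of the unsupervised-learning side, body positions on the floor plane could be grouped into candidate socially occupied spaces with a simple distance-threshold (single-linkage) clustering. This is a stand-in for the project's actual algorithms, and the threshold value is an assumption:

```python
import math

def cluster_positions(positions, threshold=1.2):
    """Group 2D body positions (in metres) into clusters where each
    member is within `threshold` of at least one other member
    (single-linkage). Returns a list of clusters of positions.
    The 1.2 m default is an illustrative value, roughly the upper
    bound of the personal-distance zone in Hall's proxemics."""
    clusters = []
    for p in positions:
        # Find all existing clusters that p links to.
        linked = [c for c in clusters
                  if any(math.dist(p, q) <= threshold for q in c)]
        merged = [p]
        for c in linked:
            merged.extend(c)
            clusters.remove(c)
        clusters.append(merged)
    return clusters
```

Each resulting cluster of two or more people would then be a candidate group whose enclosed area forms a socially occupied space.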
- Kinect v2 camera data analysis published in conference proceedings
- Kinect v2, Azure Kinect, and Zed2i camera data analysis in a journal submission (pre-print)
- Desktop application in C# integrating Kinect v2
- Python code for social space analysis, version 1.0, implemented; results submitted to conference proceedings and a journal (pre-print)
- TODO: integrate Azure Kinect into the C# application
- TODO: integrate visualization of social spaces into the application