The data collected during the measurement campaign were processed and aggregated using standard IT tools (relational and NoSQL databases, GIS packages, and statistical software) as well as proprietary scripts developed mainly in Python and C++. Coordinate system transformation played an important role in the data conversion and integration process. The image data captured by the cameras were located in a local plane coordinate system determined by the camera lens. Using transformation points (about 100 measurement points per camera; Fig. 1), a non-linear conversion of the coordinates to the PUWG 2000 national geodetic coordinate system (zone 7) was performed, which allowed the measurement data to be integrated into the GIS spatial database. The coordinates of the transformation points were determined by surveying methods. To automate the coordinate conversion, a set of 7 non-linear MLP-type neural networks was developed; after training, the networks were exported as C++ code and used in the conversion process.
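The approach described above can be illustrated with a minimal, self-contained sketch: a small MLP with one hidden layer is trained on control-point pairs to learn the pixel-to-geodetic mapping. This is not the project's actual code; the network size, learning rate, and the synthetic control points below (which stand in for the roughly 100 surveyed points per camera) are illustrative assumptions, and a production version would use a proper library and export the trained weights to C++ as the authors did.

```python
import math
import random

random.seed(42)

def init_mlp(n_in=2, n_hidden=8, n_out=2):
    """One hidden layer with tanh activation, linear output layer."""
    rnd = lambda: random.uniform(-0.5, 0.5)
    w1 = [[rnd() for _ in range(n_in)] for _ in range(n_hidden)]
    b1 = [0.0] * n_hidden
    w2 = [[rnd() for _ in range(n_hidden)] for _ in range(n_out)]
    b2 = [0.0] * n_out
    return [w1, b1, w2, b2]

def forward(net, x):
    w1, b1, w2, b2 = net
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    y = [sum(w * hi for w, hi in zip(row, h)) + b
         for row, b in zip(w2, b2)]
    return h, y

def train(net, samples, epochs=2000, lr=0.05):
    """Plain per-sample gradient descent on squared error."""
    w1, b1, w2, b2 = net
    for _ in range(epochs):
        for x, t in samples:
            h, y = forward(net, x)
            err = [yi - ti for yi, ti in zip(y, t)]
            # hidden-layer deltas must use the current (pre-update) w2
            dh = [(1 - h[j] ** 2) * sum(err[k] * w2[k][j]
                  for k in range(len(err))) for j in range(len(h))]
            for k, e in enumerate(err):
                for j in range(len(h)):
                    w2[k][j] -= lr * e * h[j]
                b2[k] -= lr * e
            for j, g in enumerate(dh):
                for i in range(len(x)):
                    w1[j][i] -= lr * g * x[i]
                b1[j] -= lr * g
    return net

def mse(net, samples):
    return sum(sum((yi - ti) ** 2 for yi, ti in zip(forward(net, x)[1], t))
               for x, t in samples) / len(samples)

# Synthetic control points standing in for the surveyed pairs
# (pixel and target coordinates both normalised to about [0, 1];
#  the px*py cross-term mimics a mild lens-induced non-linearity).
control = []
for _ in range(25):
    px, py = random.random(), random.random()
    control.append(((px, py),
                    (0.6 * px + 0.2 * py + 0.1 * px * py,
                     0.2 * px + 0.7 * py - 0.1 * px * py)))

net = init_mlp()
before = mse(net, control)
train(net, control)
after = mse(net, control)
```

In practice the input pixel coordinates and the PUWG 2000 target coordinates would be normalised before training and de-normalised afterwards, since the geodetic coordinates are in the millions of metres.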
In order to standardize the analysis of the resulting data, a grid of square basic fields with 2 m sides was defined and superimposed on the topographic content in the spatial database. The JSON files obtained from the object-movement recognition process (see the excerpt below) were then loaded into a MongoDB database, where the intensity of use of the individual basic fields by pedestrian/bike/car objects in different time intervals was analyzed. In order to verify the results obtained from the image analysis, this information was compared with the compiled counts of beam crossings from the 6 IoT sensors located within the campus.
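The binning step described above can be sketched as follows. The field names (`t`, `cls`, `x`, `y`) and the detection records are illustrative assumptions, not the project's actual JSON schema, and the real pipeline performed this aggregation in MongoDB rather than in Python; the sketch only shows the logic of assigning each detection to a 2 m basic field and a time interval.

```python
import json
from collections import Counter

CELL = 2.0  # side length of a basic field, in metres

def cell_of(x, y):
    """Index of the 2 m x 2 m basic field containing point (x, y)."""
    return (int(x // CELL), int(y // CELL))

def aggregate(detections, interval_s=3600):
    """Count detections per (time interval, object class, basic field)."""
    counts = Counter()
    for d in detections:
        slot = int(d["t"] // interval_s)  # hourly bins by default
        counts[(slot, d["cls"], cell_of(d["x"], d["y"]))] += 1
    return counts

# Hypothetical excerpt in the assumed schema: timestamp in seconds,
# object class, and position already converted to plane coordinates.
detections = json.loads("""[
    {"t": 0,    "cls": "pedestrian", "x": 1.5, "y": 0.5},
    {"t": 10,   "cls": "pedestrian", "x": 1.9, "y": 1.0},
    {"t": 4000, "cls": "bike",       "x": 5.0, "y": 2.2}
]""")

counts = aggregate(detections)
# counts[(0, "pedestrian", (0, 0))] == 2
# counts[(1, "bike", (2, 1))] == 1
```

Grouping by the `(interval, class, cell)` key is the same shape of result a MongoDB `$group` aggregation over the imported documents would produce.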
Fig. 1 The location of the transformation points for a part of the image recorded by camera no. 3 (“Golski”).