Update README.md

README.md

---
license: other
license_name: sla0044
license_link: >-
  https://github.com/STMicroelectronics/stm32ai-modelzoo/human_activity_recognition/ign/ST_pretrainedmodel_public_dataset/LICENSE.md
---

# IGN HAR model

## **Use case** : `Human activity recognition`

# Model description

IGN is an acronym of Ignatov; it is a convolutional neural network (CNN) based model for performing the human activity recognition (HAR) task on 3D accelerometer data. In this work we use a modified version of the IGN model presented in the [paper [2]](#2). It uses the raw 3D data with a gravity rotation and suppression filter as preprocessing. This is a light model with a very small footprint in terms of FLASH and RAM as well as computational requirements.
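
As an illustration of that preprocessing, a gravity rotation and suppression step could look like the NumPy sketch below; the function name, the smoothing factor, and the Rodrigues-based rotation are illustrative assumptions, not the model zoo's exact filter.

```python
import numpy as np

def rotate_and_suppress_gravity(window: np.ndarray, alpha: float = 0.9,
                                suppress: bool = True) -> np.ndarray:
    """Rotate a (wl, 3) accelerometer window so the estimated gravity
    vector points along +z, then optionally remove that component.
    Illustrative only; parameters are assumptions."""
    # Low-pass (exponential moving average) estimate of gravity.
    gravity = window[0].astype(np.float64)
    for sample in window[1:]:
        gravity = alpha * gravity + (1.0 - alpha) * sample
    g = gravity / (np.linalg.norm(gravity) + 1e-8)

    # Rodrigues rotation taking the gravity direction g onto the unit z axis.
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(g, z)                       # rotation axis (unnormalized)
    c = float(np.dot(g, z))                  # cosine of the rotation angle
    k = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    rot = np.eye(3) + k + (k @ k) / (1.0 + c + 1e-8)

    rotated = window @ rot.T                 # apply the rotation to every sample
    if suppress:
        rotated[:, 2] -= np.linalg.norm(gravity)  # drop the constant 1 g offset
    return rotated.astype(np.float32)
```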

This network supports any input size greater than (20 x 3 x 1), but we recommend using at least (24 x 3 x 1), i.e. a window length of 24 samples. In this folder we provide IGN models trained with two different window lengths [24 and 48].

The only inputs required to build the model are the input shape, the dropout ratio, and the number of output classes.
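
A minimal Keras sketch of such a builder is shown below, assuming an IGN-style topology of one convolution block followed by a small dense classifier; the filter counts and layer choices are illustrative, so the parameter count will not match the pretrained 3,064 exactly.

```python
import tensorflow as tf
from tensorflow.keras import layers

def get_ign_model(input_shape=(24, 3, 1), dropout=0.5, num_classes=4):
    """Illustrative IGN-style builder driven only by the three inputs
    named above: input shape, dropout ratio, and number of classes."""
    inputs = tf.keras.Input(shape=input_shape)
    x = layers.Conv2D(8, kernel_size=(8, 1), padding="same",
                      activation="relu")(inputs)   # convolve along the time axis
    x = layers.MaxPooling2D(pool_size=(3, 1))(x)   # downsample the time axis
    x = layers.Flatten()(x)
    x = layers.Dense(12, activation="relu")(x)
    x = layers.Dropout(dropout)(x)                 # the dropout-ratio input
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs, name=f"ign_wl_{input_shape[0]}")
```

Calling `get_ign_model((48, 3, 1), 0.5, 4)` would build the wl = 48 variant in the same way.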

In this folder you will find multiple copies of the IGN model pretrained on a public dataset ([WISDM](https://www.cis.fordham.edu/wisdm/dataset.php)) and a custom dataset collected by ST (mobility_v1).

## Network information

| Network Information | Value |
|:-----------------------:|:---------------:|
| Framework | TensorFlow |
| Params | 3,064 |

## Network inputs / outputs

For an input resolution of wl x 3 x 1 and P classes:

| Input Shape | Description |
| :----:| :-----------: |
| (1, wl, 3, 1) | Single (wl x 3 x 1) matrix of accelerometer values, where `wl` is the window length, 3 is the number of axes, and 1 is the number of channels, in FLOAT32. |

| Output Shape | Description |
| :----:| :-----------: |
| (1, P) | Per-class confidence for P classes in FLOAT32. |
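
To sanity-check these shapes, the snippet below loads one of the pretrained `.h5` files with Keras and runs it on a random window; the local file path is a placeholder for wherever the model was downloaded.

```python
import numpy as np
import tensorflow as tf

# Placeholder path: point it at a downloaded pretrained model.
model = tf.keras.models.load_model("ign_wl_24.h5")

window = np.random.rand(1, 24, 3, 1).astype(np.float32)  # (1, wl, 3, 1) input
probs = model.predict(window)                            # (1, P) confidences
print(probs.shape, "predicted class index:", int(np.argmax(probs)))
```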

## Recommended platforms

| Platform | Supported | Recommended |
|:----------:|:-----------:|:-----------:|
| STM32L4 | [x] | [] |
| STM32U5 | [x] | [x] |

# Performances

## Metrics

Measurements are done with the [STM32Cube.AI Dev Cloud](https://stm32ai-cs.st.com/home) version 10.0.0, with the input/output allocated option enabled and balanced optimization. The inference time is measured with **STM32Cube.AI version 10.0.0** on the **B-U585I-IOT02A** board running at a frequency of **160 MHz**.

Reference memory footprints and inference times for the IGN models are given in the table below. The accuracies on the two datasets are provided in the following sections.

| Model | Format | Input Shape | Series | Activation RAM (KiB) | Runtime RAM (KiB) | Weights Flash (KiB) | Code Flash (KiB) | Total RAM (KiB) | Total Flash (KiB) | Inference Time (msec) | STM32Cube.AI version |
|:-----:|:---------:|:-----------:|:-------:|:--------------------:|:-----------------:|:-------------------:|:----------------:|:--------------:|:-----------------:|:---------------------:|:---------------------:|
| [IGN wl 24](https://github.com/STMicroelectronics/stm32ai-modelzoo/human_activity_recognition/ign/ST_pretrainedmodel_public_dataset/WISDM/ign_wl_24/ign_wl_24.h5) | FLOAT32 | 24 x 3 x 1 | STM32U5 | 2.03 | 1.91 | 11.97 | 13.61 | 3.94 | 25.58 | 2.25 | 10.0.0 |
| [IGN wl 48](https://github.com/STMicroelectronics/stm32ai-modelzoo/human_activity_recognition/ign/ST_pretrainedmodel_public_dataset/WISDM/ign_wl_48/ign_wl_48.h5) | FLOAT32 | 48 x 3 x 1 | STM32U5 | 4.56 | 1.91 | 38.97 | 13.61 | 6.47 | 52.58 | 8.17 | 10.0.0 |

### Accuracy with mobility_v1 dataset

Dataset details: a custom dataset, not publicly available. Number of classes: 5 [Stationary, Walking, Jogging, Biking, Vehicle]. **(We kept only 4, [Stationary, Walking, Jogging, Biking], and removed Vehicle.)** Number of input frames: 81,151 (for wl = 24) and 40,575 (for wl = 48).

| Model | Format | Resolution | Accuracy (%) |
|:--------------------------------------------------------------------------------------------:|:------:|:----------:|:-----------:|
| [IGN wl 24](https://github.com/STMicroelectronics/stm32ai-modelzoo/human_activity_recognition/ign/ST_pretrainedmodel_custom_dataset/mobility_v1/ign_wl_24/ign_wl_24.h5) | FLOAT32 | 24 x 3 x 1 | 94.64 |
| [IGN wl 48](https://github.com/STMicroelectronics/stm32ai-modelzoo/human_activity_recognition/ign/ST_pretrainedmodel_custom_dataset/mobility_v1/ign_wl_48/ign_wl_48.h5) | FLOAT32 | 48 x 3 x 1 | 95.01 |

The confusion matrix for IGN wl 24 with FLOAT32 weights on the mobility_v1 dataset is given below.

![plot](./doc/img/ign_wl_24_confusion_matrix.png)

### Accuracy with WISDM dataset

Dataset details: [WISDM](https://www.cis.fordham.edu/wisdm/dataset.php), License: [CC BY 2.0](https://creativecommons.org/licenses/by/2.0/), Quotation: [[1]](#1). Number of classes: 4 (we combine [Upstairs and Downstairs] into Stairs and [Standing and Sitting] into Stationary). Number of samples: 45,579 (at wl = 24) and 22,880 (at wl = 48).

| Model | Format | Resolution | Accuracy (%) |
|:-------------------------------------------------------------------------------------:|:-------:|:----------:|:-------------:|
| [IGN wl 24](https://github.com/STMicroelectronics/stm32ai-modelzoo/human_activity_recognition/ign/ST_pretrainedmodel_public_dataset/WISDM/ign_wl_24/ign_wl_24.h5) | FLOAT32 | 24 x 3 x 1 | 91.7 |
| [IGN wl 48](https://github.com/STMicroelectronics/stm32ai-modelzoo/human_activity_recognition/ign/ST_pretrainedmodel_public_dataset/WISDM/ign_wl_48/ign_wl_48.h5) | FLOAT32 | 48 x 3 x 1 | 93.67 |
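
The class merging and the fixed-length windowing described above could be sketched as follows; the DataFrame columns, the loader being omitted, and the non-overlapping windows are illustrative assumptions, not the model zoo's exact pipeline.

```python
import numpy as np
import pandas as pd

# Assume the raw WISDM recordings were already loaded into a DataFrame
# with columns ["activity", "x", "y", "z"]; the loader itself depends on
# the raw file format and is omitted here.

def merge_classes(df: pd.DataFrame) -> pd.DataFrame:
    """Map the six WISDM activities onto the four classes used here."""
    merge = {"Upstairs": "Stairs", "Downstairs": "Stairs",
             "Standing": "Stationary", "Sitting": "Stationary",
             "Walking": "Walking", "Jogging": "Jogging"}
    return df.assign(activity=df["activity"].map(merge))

def make_windows(df: pd.DataFrame, wl: int = 24):
    """Cut each activity's samples into non-overlapping (wl, 3, 1) windows."""
    xs, ys = [], []
    for label, group in df.groupby("activity"):
        acc = group[["x", "y", "z"]].to_numpy(dtype=np.float32)
        for start in range(0, len(acc) - wl + 1, wl):
            xs.append(acc[start:start + wl])
            ys.append(label)
    return np.stack(xs)[..., np.newaxis], np.array(ys)  # (N, wl, 3, 1)
```

Non-overlapping windows are consistent with the sample counts above: doubling wl from 24 to 48 roughly halves the number of windows.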

## Retraining and integration in a simple example

Please refer to the stm32ai-modelzoo-services GitHub repository [here](https://github.com/STMicroelectronics/stm32ai-modelzoo-services).

# References

<a id="1">[1]</a>
"WISDM: Human activity recognition datasets". [Online]. Available: https://www.cis.fordham.edu/wisdm/dataset.php

<a id="2">[2]</a>
Andrey Ignatov, "Real-time human activity recognition from accelerometer data using Convolutional Neural Networks". [Online]. Available: https://www.sciencedirect.com/science/article/abs/pii/S1568494617305665?via%3Dihub