@@ -154,30 +154,8 @@ bbn_create_truth_coco \
- `/angel_system/model_files/models/hands_model.pt`: hand detection trained model
- `/angel_system/model_files/models/r18_det.pt`: object detection trained model
-
- ## Training Procedure
-
- We take the following steps:
-
- 1. Train the object detection model.
- 2. Generate the activity classification truth COCO file.
- 3. Predict objects in the scene.
- 4. Predict poses and patient bounding boxes in the scene.
- 5. Generate interaction feature vectors for the TCN.
- 6. Train the TCN.
-
- ### Example with M2
- Contents:
- - [Train Object Detection Model](#train-object-detection-model)
- - [Generate activity classification truth COCO file](#generate-activity-classification-truth-coco-file)
- - [Generate Object Predictions in the Scene](#generate-object-predictions-in-the-scene)
- - [Generate Pose Predictions](#generate-pose-predictions)
- - [Configure TCN Training Experiment](#configure-tcn-training-experiment)
- - [Run TCN Training](#run-tcn-training)
-
- #### Train Object Detection Model
- First we train the detection model on annotated data.
- This would be the same data source for both the lab and professional data.
+ ## Train or acquire an Object Detector
+ Quick-start example for YOLOv7:
```
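+ # NOTE: relative paths assume the angel_system repo root as the working directory (an assumption).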
python3 python-tpl/yolov7/yolov7/train.py \
--workers 8 --device 0 --batch-size 4 \
@@ -189,6 +167,27 @@ python3 python-tpl/yolov7/yolov7/train.py \
--name m2_all_v1_example
```
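
+ If training completes successfully, YOLOv7's standard run layout (an
+ assumption here) places the best checkpoint under the `--project` directory
+ in a folder named by `--name`, e.g.:
+ ```
+ /data/PTG/medical/training/yolo_object_detector/train/m2_all_v1_example/weights/best.pt
+ ```
+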
+ ## Train or acquire a Pose Estimator
+ TODO
+
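+ Until this section is written, here is a minimal sketch of running an
+ off-the-shelf 2D human pose model through mmpose's high-level Python API.
+ The model alias, frame path, and result indexing are illustrative
+ assumptions, not the project's trained patient pose model:
+ ```
+ # Illustrative only: off-the-shelf mmpose inference on a single frame.
+ from mmpose.apis import MMPoseInferencer
+
+ # "human" is an mmpose alias that resolves to a default human keypoint model.
+ inferencer = MMPoseInferencer("human")
+
+ # Calling the inferencer returns a generator, one result dict per input image.
+ result = next(inferencer("frames/example_frame.png"))
+
+ # First image in the batch, first detected person: a list of [x, y] keypoints.
+ keypoints = result["predictions"][0][0]["keypoints"]
+ ```
+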
+ ## Activity Classifier Training Procedure
+
+ We take the following steps:
+
+ 1. Generate the activity classification truth COCO file.
+ 2. Predict objects in the scene.
+ 3. Predict poses and patient bounding boxes in the scene.
+ 4. Generate interaction feature vectors for the TCN.
+ 5. Train the TCN.
+
+ The following uses file path and value examples for the Medical M2
+ Tourniquet use-case:
+ - [Generate activity classification truth COCO file](#generate-activity-classification-truth-coco-file)
+ - [Generate Object Predictions in the Scene](#generate-object-predictions-in-the-scene)
+ - [Generate Pose Predictions](#generate-pose-predictions)
+ - [Configure TCN Training Experiment](#configure-tcn-training-experiment)
+ - [Run TCN Training](#run-tcn-training)
+
#### Generate activity classification truth COCO file
Generate the truth MS-COCO file for per-frame activity truth annotations.
This example presumes we are using BBN Medical data as our source (as of
@@ -331,80 +330,13 @@ kwcoco_guided_subset \
```
#### Run TCN Training
- TODO
-
- ## Example with R18
-
- First we train the detection model on annotated data. This would be the same
- data source for both the lab and professional data.
+ Quick-start:
```
- cd yolov7
- python yolov7/train.py \
- --workers 8 \
- --device 0 \
- --batch-size 4 \
- --data configs/data/PTG/medical/r18_task_objects.yaml \
- --img 768 768 \
- --cfg configs/model/training/PTG/medical/yolov7_r18.yaml \
- --weights weights/yolov7.pt \
- --project /data/PTG/medical/training/yolo_object_detector/train/ \
- --name r18_all_v1_example
- ```
-
- ###### Note on training on lab data <a name="lab_data"></a>:
- Since we do not have detection ground truth for lab data, this is our
- starting point for training the TCN on the lab data.
-
- Next, we generate detection predictions in a kwcoco file using the following script.
- ```
- python yolov7/detect_ptg.py \
- --tasks r18 \
- --weights /data/PTG/medical/training/yolo_object_detector/train/r18_all_v1_example/weights/best.pt \
- --project /data/PTG/medical/training/yolo_object_detector/detect/ \
- --name r18_all_example \
- --device 0 \
- --img-size 768 \
- --conf-thres 0.25
- cd TCN_HPL/tcn_hpl/data/utils/pose_generation/configs
- ```
-
- With the above scripts, we should get a kwcoco file at:
- ```
- /data/PTG/medical/training/yolo_object_detector/detect/r18_all_example/
- ```
-
- Edit `TCN_HPL/tcn_hpl/data/utils/pose_generation/configs/main.yaml` with the
- task at hand (here, we use r18), the path to the output detection kwcoco, and
- where to output kwcoco files from our pose generation step.
- ```
- cd ..
- python generate_pose_data.py
- cd TCN_HPL/tcn_hpl/data/utils
- ```
- At this stage, there should be a new kwcoco file generated at the location
- defined by the `save_root` field in `main.yaml`:
- ```
- data:
-   save_root: <path-to-kwcoco-file-with-pose-and-detections>
- ```
-
- Next, edit the `/TCN_HPL/configs/experiment/r18/feat_v6.yaml` file with the
- correct experiment name and kwcoco file in the following fields:
+ train_command \
+ experiment=m2/feat_locsconfs \
+ paths.root_dir="$PWD" \
+ task_name=my_m2_training
```
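+
+ The `experiment=` and `paths.root_dir=` arguments above are Hydra-style
+ config overrides, which suggests a lightning-hydra-template layout; assuming
+ that layout, other configuration values (e.g. `trainer.max_epochs`, shown
+ here as an illustrative override) can be adjusted the same way:
+ ```
+ train_command \
+ experiment=m2/feat_locsconfs \
+ paths.root_dir="$PWD" \
+ task_name=my_m2_training \
+ trainer.max_epochs=100
+ ```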
- exp_name: <experiment-name>
- path:
-   dataset_kwcoco: <path-to-kwcoco-with-poses-and-dets>
- ```
-
- Then run the following commands to generate features and train the TCN:
- ```
- python ptg_datagenerator --task r18 --data_type <bbn or gyges> --config-root <root-to-TCN-HPL-configs> --ptg-root <path-to-local-angel-system-repo>
- cd TCN_HPL/tcn_hpl
- python train.py experiment=r18/feat_v6
- ```
-
- ==At this point, we have our trained model at the path specified in our config file. For real-time execution, we would need to copy it over to `angel_system/model_files`.==
- The TCN training script produces a `text_activity_preds.mscoco.json` which is used by the Global Step Predictor. That file should be copied to `/angel_system/model_files/coco/`.
-
## Docker local testing