
TRT_Ultra_Fast_Lane_Detect

TRT_Ultra_Fast_Lane_Detect is a Python-API implementation that converts Ultra Fast Lane Detection into a TensorRT model. The other work done in this project is listed below:

  • The detection procedure is encapsulated.
  • The PyTorch model is converted into an ONNX model and then into TensorRT models.
  • The TensorRT models come in three precisions: FP32, FP16, and INT8.
  • The TuSimple data set can be compressed with /calibration_data/make_mini_tusimple.py. The TuSimple data set is highly redundant, since only the 20th frame of each clip is annotated and used; the compressed data set takes about 1 GB (see the sketch after this list).
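The repo ships its own compression script; the following is only a minimal sketch of the idea, assuming the standard TuSimple layout, where each line of a label_data_*.json file has a raw_file field pointing at the annotated 20th frame of a clip. The source and destination paths are placeholders.

```python
import json
import os
import shutil

SRC = "/path/to/tusimple"       # placeholder: original data set root
DST = "/path/to/mini_tusimple"  # placeholder: compressed output root

# TuSimple training label files; each line is a JSON record whose
# "raw_file" field names the one annotated frame, e.g. "clips/0313-1/6040/20.jpg".
for label_file in ("label_data_0313.json", "label_data_0531.json", "label_data_0601.json"):
    src_label = os.path.join(SRC, label_file)
    if not os.path.exists(src_label):
        continue
    os.makedirs(DST, exist_ok=True)
    shutil.copy(src_label, DST)
    with open(src_label) as f:
        for line in f:
            raw = json.loads(line)["raw_file"]
            dst = os.path.join(DST, raw)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            # Copy only the annotated frame; the other 19 frames of the clip are dropped.
            shutil.copy(os.path.join(SRC, raw), dst)
```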

The original project, model, and paper are available at https://github.com/cfzd/Ultra-Fast-Lane-Detection

Ultra-Fast-Lane-Detection

PyTorch implementation of the paper "Ultra Fast Structure-aware Deep Lane Detection".

Updates: Our paper has been accepted by ECCV 2020.


The evaluation code is modified from SCNN and Tusimple Benchmark.

Caffe model and prototxt can be found here.

Trained models

The trained models can be obtained via the links in the following table:

| Dataset  | Metric (paper) | Metric (this repo) | Avg FPS on GTX 1080Ti | Model                               |
|----------|----------------|--------------------|-----------------------|-------------------------------------|
| Tusimple | 95.87          | 95.82              | 306                   | GoogleDrive/BaiduDrive (code: bghd) |
| CULane   | 68.4           | 69.7               | 324                   | GoogleDrive/BaiduDrive (code: w9tw) |

Installation

`pip3 install -r requirement.txt`

Convert

First of all, you have to train or download a 4-lane model trained with the PyTorch version of Ultra Fast Lane Detection. You will have to change some of the code if you want to use a different number of lanes.

Now, assume we have a trained PyTorch model named "model.pth".
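For orientation, torch2onnx.py (step 1 below) boils down to a torch.onnx.export call. The sketch below substitutes a stand-in module, because the real network class lives in the training repo; the 1x3x288x800 dummy input matches the TuSimple input resolution used by Ultra Fast Lane Detection.

```python
import torch

# Stand-in for the real lane-detection network (loaded from the training
# repo in torch2onnx.py); used here only to make the export call runnable.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, stride=4),
    torch.nn.Flatten(),
)
model.eval()

# Ultra Fast Lane Detection's TuSimple models take 288x800 RGB inputs.
dummy = torch.randn(1, 3, 288, 800)

torch.onnx.export(
    model,
    dummy,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)
```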

  1. Use torch2onnx.py to convert the model into an ONNX model. You should rename your model to "model.pth". The original configuration file is configs/tusimple_4.py.
`python3 torch2onnx.py configs/${config_file}.py`
  2. Use onnx_to_tensorrt.py to convert the ONNX model into a TensorRT model (FP16 or FP32); a sketch of the engine-building step follows this list.
`python3 onnx_to_tensorrt.py -p ${mode_in_fp16_or_fp32} --model ${model_name}`
  3. Use onnx_to_tensorrt.py to convert the ONNX model into a TensorRT model (INT8).
`python3 onnx_to_tensorrt.py --model ${model_name}`
  4. Run tensorrt_run.py to start detection.
`python3 tensorrt_run.py --model ${model_name}`
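For orientation, engine building with the TensorRT Python API (the TensorRT 7/8-era calls matching this project's vintage) looks roughly like the sketch below. The file names and the mode argument are assumptions; onnx_to_tensorrt.py's actual flags, workspace size, and INT8 calibrator are defined in the repo.

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path="model.onnx", mode="fp32"):
    builder = trt.Builder(TRT_LOGGER)
    # Explicit-batch network, as required for ONNX parsing.
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("failed to parse the ONNX model")

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30  # 1 GiB scratch space (an assumption)
    if mode == "fp16":
        config.set_flag(trt.BuilderFlag.FP16)
    elif mode == "int8":
        config.set_flag(trt.BuilderFlag.INT8)
        # INT8 additionally needs a calibrator fed with images, e.g. from the
        # mini TuSimple set: config.int8_calibrator = <your IInt8Calibrator>

    engine = builder.build_engine(network, config)
    with open("model.trt", "wb") as f:
        f.write(engine.serialize())
    return engine

if __name__ == "__main__":
    build_engine(mode="fp16")
```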

Evaluation

| Device                 | PyTorch | libtorch | TensorRT (FP32) | TensorRT (FP16) | TensorRT (INT8) |
|------------------------|---------|----------|-----------------|-----------------|-----------------|
| GTX 1060               | 55 fps  | 55 fps   | 55 fps          | Unsupported     | 99 fps          |
| Xavier AGX             | 27 fps  | 27 fps   | --              | --              | --              |
| Jetson TX1             | 8 fps   | 8 fps    | 8 fps           | 16 fps          | Unsupported     |
| Jetson Nano A01 (4GB)  | --      | --       | --              | 8 fps           | Unsupported     |

Where "--" denotes the experiment hasn't been completed yet. Anyone with untested equipment can send his results to the issues. The results will be adopted.
