Stage 3: TensorRT Engine Conversion/Inference

- [ ] Investigate using TensorRT engine plugins to bake the pre/post-processing into the engine file itself (with Amy)
- [ ] Repair the comparison scripts under `/python_wip` and `/conversion_tools` and report on the results of the different methods
- [ ] For all the verify functions, run them on a few random inputs and average the values after a warmup pass (see the timing sketch after this list)
- [ ] Copy the logic for comparing prediction consistency and confidence independently, using Valery's functions in `ONNX_verify` (see the comparison sketch after this list)
- [ ] [MNIST Inference Example](https://github.com/NVIDIA/TensorRT/tree/main/samples/sampleOnnxMNIST)
- [ ] [QuickStart Guide](https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html#run-engine-python) (see the minimal engine-run sketch after this list)
- [ ] Take a look at the [Google Drive](https://drive.google.com/drive/folders/1VhBxbrtv_gMllZ_UmPlYBvmKxV4Yi8Fn?usp=sharing) for NVIDIA examples

Specific engine plugins/extras for viewing in Drive:

- [ ] crop and resize
- [ ] efficientNMS
- [ ] flattenConcat
- [ ] NMS plugin
- [ ] nvpluginfasterRCNN
- [ ] resizeNearest
- [ ] polygraphy
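For the warmup/averaging item, a minimal sketch of one way to structure it, assuming each verify function takes a single NumPy array and returns a scalar score (`verify_fn`, `input_shape`, and the run counts are placeholders, not names from this repo):

```python
import time
import numpy as np

def timed_verify(verify_fn, input_shape, n_warmup=5, n_runs=20, seed=0):
    """Run verify_fn on random inputs; discard warmup runs, average the rest."""
    rng = np.random.default_rng(seed)
    # Warmup: the first calls pay one-time costs (CUDA context init, caching),
    # so they would skew the averages if included
    for _ in range(n_warmup):
        verify_fn(rng.standard_normal(input_shape).astype(np.float32))
    scores, latencies = [], []
    for _ in range(n_runs):
        x = rng.standard_normal(input_shape).astype(np.float32)
        t0 = time.perf_counter()
        scores.append(verify_fn(x))
        latencies.append(time.perf_counter() - t0)
    return float(np.mean(scores)), float(np.mean(latencies))
```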
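For the consistency/confidence item, Valery's `ONNX_verify` functions are not reproduced here; the sketch below is a hypothetical stand-in showing the two metrics measured independently (argmax agreement for consistency, top-1 probability gap for confidence), assuming both backends emit `(batch, classes)` logits:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def compare_outputs(onnx_logits, trt_logits):
    """Compare (batch, classes) logit arrays from two backends."""
    onnx_p, trt_p = softmax(onnx_logits), softmax(trt_logits)
    # Consistency: fraction of samples where both backends pick the same class
    consistency = float(np.mean(onnx_p.argmax(-1) == trt_p.argmax(-1)))
    # Confidence drift: mean absolute gap between top-1 probabilities
    confidence_gap = float(np.mean(np.abs(onnx_p.max(-1) - trt_p.max(-1))))
    return consistency, confidence_gap
```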
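For orientation alongside the QuickStart link, a minimal engine-loading and inference sketch in the spirit of its "run engine in Python" section, assuming TensorRT 8.x with pycuda (the engine path and the I/O shapes are placeholders for the real model's):

```python
import numpy as np
import pycuda.autoinit  # creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize a prebuilt engine (path is a placeholder)
with open("model.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Host/device buffers; shapes are placeholders
h_input = np.random.default_rng(0).standard_normal((1, 3, 224, 224)).astype(np.float32)
h_output = np.empty((1, 1000), dtype=np.float32)
d_input = cuda.mem_alloc(h_input.nbytes)
d_output = cuda.mem_alloc(h_output.nbytes)

# Copy in, run synchronously, copy out
cuda.memcpy_htod(d_input, h_input)
context.execute_v2(bindings=[int(d_input), int(d_output)])
cuda.memcpy_dtoh(h_output, d_output)
print("top-1 class:", int(h_output.argmax()))
```

Note that TensorRT 10 replaces the bindings-list API with named I/O tensors (`set_tensor_address`/`execute_async_v3`), so this sketch would need adjusting on newer versions.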