For deep learning models to perform well in noisy environments, it is critical to estimate the noise grade of the target environment. With that estimate, your system can decide how to behave under the current noise conditions.
In this notebook I describe how to grade the noise in the environment using cameras (as passive sensors). The same technique is applicable to active sensor data as well.
Next, I will show how to use this noise grade to manage the deep learning models on a device in real time, keeping object detection/classification accuracy as high as possible.
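As a preview, here is a minimal sketch of how a noise grade could drive model management on a device. It assumes a noise grade in the range [0, 1] (produced by the frequency-domain analysis later in this notebook); the thresholds and the model names (`detector_full`, `denoise_then_detect`, `detector_robust`) are hypothetical placeholders, not models defined in this notebook.

```python
# A minimal sketch, assuming a noise grade in [0, 1] from the FFT-based
# analysis described later. Thresholds and model names are illustrative
# placeholders, not values from this notebook.
def select_model(noise_grade: float) -> str:
    """Pick which on-device model (or preprocessing path) to run,
    based on the estimated environmental noise."""
    if noise_grade < 0.3:        # low noise: run the standard detector
        return "detector_full"
    elif noise_grade < 0.7:      # moderate noise: denoise first, then detect
        return "denoise_then_detect"
    else:                        # heavy noise: fall back to a noise-robust model
        return "detector_robust"

print(select_model(0.55))  # -> "denoise_then_detect"
```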
This notebook covers the following steps:

- Use NumPy's built-in FFT (`np.fft.fft2`) to transform images to the frequency domain.
- Use NumPy's built-in `fftshift` to shift the low-frequency components to the centre of the spectrum (and the high frequencies to the edges).
- Use NumPy's built-in inverse FFT to reconstruct the images after shifting/filtering.
- Observe the difference between the original and reconstructed images and grade the environmental noise accordingly (see the sketch after this list).
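
Below is a minimal, self-contained sketch of these steps using NumPy's FFT utilities. The `cutoff` size, the synthetic test image, and the noise level are illustrative assumptions, not values from this notebook; the notebook's own grading criterion may differ.

```python
# A minimal sketch of the frequency-domain noise-grading idea outlined above.
# The cutoff size, synthetic image, and noise level are illustrative assumptions.
import numpy as np

def high_frequency_ratio(gray: np.ndarray, cutoff: int = 30) -> float:
    """Estimate how much of the image energy lives in the high frequencies.

    gray:   2-D grayscale image as a float array.
    cutoff: half-width (in pixels) of the low-frequency block kept around
            the spectrum centre after fftshift.
    """
    # 2-D FFT, then shift so the zero-frequency (DC) component is centred.
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    magnitude = np.abs(spectrum)

    rows, cols = gray.shape
    crow, ccol = rows // 2, cols // 2

    # Energy inside the central low-frequency block vs. total energy.
    low = magnitude[crow - cutoff:crow + cutoff, ccol - cutoff:ccol + cutoff].sum()
    total = magnitude.sum()
    return float((total - low) / total)

def reconstruct_without_low_frequencies(gray: np.ndarray, cutoff: int = 30) -> np.ndarray:
    """Zero out the central low-frequency block and apply the inverse FFT;
    what remains is edge/noise detail, useful for visual comparison."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    rows, cols = gray.shape
    crow, ccol = rows // 2, cols // 2
    spectrum[crow - cutoff:crow + cutoff, ccol - cutoff:ccol + cutoff] = 0
    return np.abs(np.fft.ifft2(np.fft.ifftshift(spectrum)))

# Example with a synthetic image so the snippet runs without any image files:
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 256), (256, 1))   # smooth gradient image
noisy = clean + rng.normal(0, 25, clean.shape)        # add Gaussian noise

print("clean HF ratio:", round(high_frequency_ratio(clean), 4))
print("noisy HF ratio:", round(high_frequency_ratio(noisy), 4))
```

A higher high-frequency ratio for the noisy image than for the clean one indicates stronger environmental noise, which is the signal the grading step relies on.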
References: