Very random results?? #21
Comments
That's because I used the ICDAR15 vocabulary to correct the recognition output. You can add --use_vocab=False to disable it. In any case, this model is not very good; I'm training a new one now. |
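For reference, a minimal sketch of how a flag like this is usually wired up in a TF 1.x script; only the flag name --use_vocab comes from the comment above, while the defaults and the correction logic are assumptions:

```python
# Hypothetical sketch (not the repository's actual code) of a --use_vocab flag
# gating an ICDAR15 vocabulary correction of the recognized words.
import difflib
import tensorflow as tf

tf.app.flags.DEFINE_boolean('use_vocab', True,
                            'Correct recognized words with the ICDAR15 vocabulary')
FLAGS = tf.app.flags.FLAGS


def correct_word(word, vocab):
    """Return the raw prediction, or its closest vocabulary match when enabled."""
    if not FLAGS.use_vocab or not vocab:
        return word
    matches = difflib.get_close_matches(word, vocab, n=1)
    return matches[0] if matches else word
```

With a setup like this, running eval with --use_vocab=False skips the correction and returns the raw recognition output.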
Hi, can you tell me which version of TensorFlow you used? I have encountered some problems when running the eval. |
1.12.0 |
--use_vocab=False did not help; the results are still random. When can we expect a better model? |
Sorry, since I'm still running experiments, I'm not sure when I will have a better model. |
Hi Pay20Y, can you give me some guidance on this? I hit it when I run training, and I have tried every solution I could find. Thanks. |
Maybe you should check your TensorFlow version; you could try a newer one. |
Hi Pay20Y, are you sure it works with Python 2 + TensorFlow 1.12.0? |
Hi Pay20Y, thank you for your previous reply. I can almost run the demo smoothly; I just hit an OpenCV error. |
opencv-python==4.1.0.25 |
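A quick way to confirm which OpenCV build is actually being imported (a trivial sanity check, nothing repo-specific):

```python
# Verify the OpenCV build that Python actually picks up;
# opencv-python==4.1.0.25 reports itself as version 4.1.0.
import cv2
print(cv2.__version__)
```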
The reason the results are random is that dropout is still applied to the recognition branch in eval.py.
After disabling the dropout, the recognition branch still performs very poorly; it mostly outputs the same result (the letter "E") no matter what the input is.
I guess there's a bug somewhere in the implementation of the recognition branch that results in a poorly trained model? |
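To make the dropout point concrete, here is a minimal TF 1.x sketch of the usual pattern; the placeholder name and keep probabilities are assumptions, not the repository's actual code:

```python
# Minimal sketch (not the repo's code): dropout controlled by a keep_prob
# placeholder must be fed 1.0 at eval time, otherwise the recognition output
# changes randomly between runs.
import tensorflow as tf

keep_prob = tf.placeholder(tf.float32, name='keep_prob')
features = tf.placeholder(tf.float32, [None, 64, 256])  # pooled RoI features (assumed shape)

lstm_cell = tf.nn.rnn_cell.DropoutWrapper(
    tf.nn.rnn_cell.LSTMCell(256), output_keep_prob=keep_prob)
outputs, _ = tf.nn.dynamic_rnn(lstm_cell, features, dtype=tf.float32)

# Training:   feed_dict={keep_prob: 0.8, ...}
# Evaluation: feed_dict={keep_prob: 1.0, ...}   # <- disables dropout at eval time
```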
Thanks, I had indeed overlooked the dropout in the LSTM. Have the results you list above been processed by the function ground_truth_to_word? I ask because "E" is No. 40 in the char_vector declared in config.py. |
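For readers following along, a hypothetical decode helper in the spirit of ground_truth_to_word; the char_vector layout below (digits + lowercase + uppercase) is an assumption chosen only so that index 40 lands on "E" as stated above, the real one lives in config.py:

```python
# Hypothetical index-to-character decoding; not the repository's actual char_vector.
import string

char_vector = string.digits + string.ascii_lowercase + string.ascii_uppercase

def indices_to_word(indices, blank=-1):
    """Map raw integer predictions to characters, skipping a CTC blank label."""
    return ''.join(char_vector[i] for i in indices if i != blank)

print(char_vector[40])                # 'E'
print(indices_to_word([40, 40, 40]))  # 'EEE' -- the degenerate output described above
```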
@Pay20Y, no post-processing; the array listed above is the raw output from: |
Sorry, I misunderstood what you meant. I have a question about BatchNorm in the CRNN: since the number of RoIs differs in every batch, the CRNN is always trained with a variable batch size. I wonder whether that harms the BatchNorm in the CRNN. Do you have any idea? |
Yeah, I understand what you mean. This is a good question, but I'm really not sure if it does any harm or not. Maybe it does if the number of RoIs is very small in some batches..? I guess simply varying the batch size itself is not a problem. Just guessing based on this paper: |
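To make the concern concrete, a toy illustration (pure NumPy, not the repo's code) of how batch-norm moving statistics become noisier when some batches contain only one or two RoIs:

```python
# Toy illustration: batch statistics estimated over very few RoIs are high-variance,
# so the moving averages used at test time drift more than with a large fixed batch.
import numpy as np

rng = np.random.default_rng(0)
moving_mean, momentum = 0.0, 0.99

for step in range(1000):
    n_rois = int(rng.integers(1, 8))        # variable "batch size" of RoIs
    feats = rng.normal(0.0, 1.0, size=n_rois)
    batch_mean = feats.mean()               # noisy estimate when n_rois is tiny
    moving_mean = momentum * moving_mean + (1 - momentum) * batch_mean

print(moving_mean)  # hovers near 0, but with more jitter than a large fixed batch
```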
I tested the model as it is and got very poor results. Any idea why??


