Add bfloat16 + fp16 support to fractional_max_pool for CUDA and CPU (pytorch#116950)
Adds bfloat16 support to fractional_max_pool. If an op supports fp32 and fp16, it should generally also support bf16. Most ops already satisfy this, so this change adds support to the few that do not.
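As a rough sketch of what the change enables (shapes and parameters below are illustrative, not taken from the PR), a call like this should now accept a bfloat16 input on both CPU and CUDA:

```python
import torch
import torch.nn.functional as F

# Illustrative only: fractional_max_pool2d on a bfloat16 tensor,
# which this change enables on CPU and CUDA backends.
x = torch.randn(1, 3, 32, 32, dtype=torch.bfloat16)
out, indices = F.fractional_max_pool2d(
    x, kernel_size=2, output_size=(16, 16), return_indices=True
)
print(out.dtype)  # torch.bfloat16
```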
Pull Request resolved: pytorch#116950
Approved by: https://github.com/lezcano