```python
from ptflops import get_model_complexity_info

macs, params = get_model_complexity_info(model, tuple(10), as_strings=True, backend='pytorch',
                                         print_per_layer_stat=True, verbose=True)
# TypeError: 'int' object is not iterable
```
```python
from ptflops import get_model_complexity_info

macs, params = get_model_complexity_info(model, (10, 1), as_strings=True, backend='pytorch',
                                         print_per_layer_stat=True, verbose=True)
```
```
Warning: module Embedding is treated as a zero-op.
Warning: module LSTMNet is treated as a zero-op.
Flops estimation was not finished successfully because of the following exception:
<class 'RuntimeError'> : Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.cuda.FloatTensor instead (while checking arguments for embedding)
Computational complexity: None
Number of parameters: None
Total: 0 Mac
Module: Global
Flops estimation was not finished successfully because of the following exception:
<class 'RuntimeError'> : Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.cuda.FloatTensor instead (while checking arguments for embedding)
Computational complexity: None
Number of parameters: None
Total Num Params in loaded model: 143404
Traceback (most recent call last):
  File "f:\Desktop\School_Stuff\Programming\AI\.venv\Lib\site-packages\ptflops\pytorch_engine.py", line 68, in get_flops_pytorch
    _ = flops_model(batch)
  File "f:\Desktop\School_Stuff\Programming\AI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "f:\Desktop\School_Stuff\Programming\AI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1844, in _call_impl
    return inner()
  File "f:\Desktop\School_Stuff\Programming\AI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1790, in inner
    result = forward_call(*args, **kwargs)
  File "<ipython-input-3-05186ac86b6c>", line 113, in forward
    self.charEmbeddingLayer(x)
  File "f:\Desktop\School_Stuff\Programming\AI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "f:\Desktop\School_Stuff\Programming\AI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "f:\Desktop\School_Stuff\Programming\AI\.venv\Lib\site-packages\torch\nn\modules\sparse.py", line 190, in forward
    return F.embedding(
  File "f:\Desktop\School_Stuff\Programming\AI\.venv\Lib\site-packages\torch\nn\functional.py", line 2551, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
```
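The RuntimeError above is reproducible in isolation: ptflops builds its dummy batch with `new_empty`, which yields a float tensor, while `nn.Embedding` only accepts integer indices. A minimal sketch (the vocabulary and embedding sizes here are made up):

```python
import torch

emb = torch.nn.Embedding(128, 32)  # hypothetical vocab/embedding sizes

# Mirrors ptflops' default batch: new_empty inherits float32 dtype.
bad = torch.ones(()).new_empty((1, 10))
try:
    emb(bad)
except (RuntimeError, TypeError):
    print("embedding rejects float indices")

# Integer indices of the same shape work fine.
good = torch.zeros((1, 10), dtype=torch.long)
print(emb(good).shape)  # torch.Size([1, 10, 32])
```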
Batches seem to be constructed from the `input_res` here. My model is a very simple LSTM character predictor, and it uses `nn.Embedding` to encode each input character into an input vector. Thus my input shape is (32, 10) (batch size 32, 10 characters per input sequence), so ideally my `input_res` should be `(10)`, but `(10)` is not a tuple, and `(10,)` has nothing in its second index.
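This tuple pitfall is plain Python, independent of ptflops: parentheses alone don't make a tuple, only the trailing comma does, and `tuple()` (like the `*` unpacking ptflops does) needs an iterable:

```python
# (10) is just the int 10 in redundant parentheses; (10,) is a one-tuple.
print(type((10)).__name__)   # int
print(type((10,)).__name__)  # tuple

# Prepending the batch dimension the way ptflops does:
print((1, *(10,)))           # (1, 10)

# tuple(10) mirrors the first call above and fails immediately.
try:
    tuple(10)
except TypeError as e:
    print(e)                 # 'int' object is not iterable
```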
So this line

```python
torch.ones(()).new_empty((1, *input_res))
```

becomes

```python
torch.ones(()).new_empty((1, *(10)))     # TypeError: 'int' object is not iterable
# or
torch.ones(()).new_empty((1, *(10, 1)))
# or
torch.ones(()).new_empty((1, *(10,)))
```
Code: https://pastebin.com/5mxn1zxP
Test Cases: