
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information #587

Open · 1 of 3 tasks
Precola opened this issue Oct 22, 2024 · 0 comments
Precola commented Oct 22, 2024

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information


Issue type

  • [x] Bug Report
  • [ ] Feature Request
  • [ ] Help wanted
  • [ ] Other

SpikingJelly version

0.0.0.0.14

Description

...

Minimal code to reproduce the error/bug

from spikingjelly.activation_based.neuron import LIFNode, IFNode

...

self.lif = IFNode(step_mode='m', detach_reset=True, backend='torch', v_threshold=0.1)
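
For context, here is a self-contained sketch of how the error can surface. It assumes the model is wrapped with torch.compile (the traceback points into Dynamo, though the elided description does not say so explicitly), and the Net class, layer sizes, and input shape below are made up for illustration:

    import torch
    import torch.nn as nn
    from spikingjelly.activation_based.neuron import IFNode

    class Net(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc = nn.Linear(8, 8)
            self.lif = IFNode(step_mode='m', detach_reset=True,
                              backend='torch', v_threshold=0.1)

        def forward(self, x_seq):       # x_seq: [T, N, 8]
            return self.lif(self.fc(x_seq))

    net = torch.compile(Net())          # TorchDynamo wraps the module
    out = net(torch.rand(4, 2, 8))      # fails while tracing the scripted
                                        # multi-step neuron kernel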

Error

    spike_seq, self.v = self.jit_eval_multi_step_forward_hard_reset(x_seq, self.v, self.v_threshold,

Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information


You can suppress this exception and fall back to eager by setting:
    import torch._dynamo
    torch._dynamo.config.suppress_errors = True
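
As a side note on the message itself: the two variables it asks for can be exported in the shell before launching the script, or, as a minimal sketch, set from Python before torch is imported:

    import os
    os.environ["TORCH_LOGS"] = "+dynamo"      # verbose Dynamo logging
    os.environ["TORCHDYNAMO_VERBOSE"] = "1"   # full internal stack traces

    import torch  # import only after the environment is configured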

Although "import torch._dynamo
torch._dynamo.config.suppress_errors = True"
works, I still want to know how to avoid this issue without using "torch._dynamo.config.suppress_errors = True".
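
One direction that is sometimes used in this situation, as a hypothetical sketch only (not confirmed by the maintainers for this model): instead of suppressing every Dynamo error globally, exclude just the spiking neuron from compilation so the rest of the network is still optimized.

    import torch
    import torch._dynamo
    import torch.nn as nn
    from spikingjelly.activation_based.neuron import IFNode

    lif = IFNode(step_mode='m', detach_reset=True, backend='torch',
                 v_threshold=0.1)
    # Run this one module eagerly; Dynamo graph-breaks around it instead
    # of failing while tracing the scripted kernel.
    lif.forward = torch._dynamo.disable(lif.forward)

    net = torch.compile(nn.Sequential(nn.Linear(8, 8), lif))
    out = net(torch.rand(4, 2, 8))   # dummy [T, N, features] input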
