
nrf5340 support #1656

Open · wants to merge 2 commits into develop

Conversation

maxd-nordic (Contributor)

Adds support for the nRF5340.


    def __init__(self, session):
        super(NRF53XX, self).__init__(session, self.MEMORY_MAP)
        self._svd_location = SVDFile.from_builtin("nrf5340_application.svd")  # TODO

@maxd-nordic (Contributor, Author)

ideally, I would load both SVD for the specific cores. But how to do this? (seems not that important though)
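
A rough sketch of what loading both SVDs might look like, assuming a built-in network-core file named "nrf5340_network.svd" exists (that filename is an assumption, and pyocd appears to track only a single target-level SVD today, so wiring one per core remains an open question):

        # Speculative sketch: load both SVDs up front; actually attaching one
        # per core would need per-core SVD support in pyocd.
        self._svd_app = SVDFile.from_builtin("nrf5340_application.svd")
        self._svd_net = SVDFile.from_builtin("nrf5340_network.svd")  # assumed filename
        self._svd_location = self._svd_app  # default to the application core view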

@maxd-nordic (Contributor, Author)

Still got a problem to solve here: flashing works after a mass erase, but if there is already a program on the chip, it fails on the network core with a hard fault or something similar:
0008375 C target was not halted as expected after calling flash algorithm routine (IPSR=3) [__main__]
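
For what it's worth, IPSR=3 is the HardFault exception number on Cortex-M. A minimal sketch for confirming which handler the network core is stuck in, assuming an open pyocd session with that core selected (the core variable is illustrative):

        # IPSR is the low 9 bits of xPSR; 3 means the HardFault handler is active.
        ipsr = core.read_core_register("xpsr") & 0x1FF
        print("active exception number:", ipsr)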

@maxd-nordic (Contributor, Author)

@flit Could you take a look? I'll try to understand the flash algo code (the algorithms come from the pack); maybe I can find something.

@mrenner42

Hi @maxd-nordic
I tried your PR, and flashing after a mass erase works almost as expected.
Unfortunately, I get the following error when flashing the app core:

0012682 C flash program page failure (address 0x00ff8000; result code 0x68) [__main__]

It seems flashing the UICR fails with a result code of 104 (0x68) in register r0. Any idea what causes this result code?
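
Under the CMSIS-Pack flash-algorithm convention, ProgramPage returns zero on success, so a nonzero r0 such as 0x68 (104 decimal, matching the value above) is a pack-specific error code rather than something pyocd defines. A quick way to check whether the UICR write landed at all, sketched against pyocd's Python API (target name from this PR, address from the error message):

        from pyocd.core.helpers import ConnectHelper

        with ConnectHelper.session_with_chosen_probe(target_override="nrf5340") as session:
            # Erased UICR words read back as 0xFFFFFFFF; anything else means
            # at least part of the write went through.
            for offset in range(0, 32, 4):
                addr = 0x00FF8000 + offset
                print(f"{addr:#010x}: {session.target.read32(addr):#010x}")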

@maxd-nordic (Contributor, Author)

Hi @mrenner42! Cool that you're interested in this! Unfortunately, I've run into similar issues and haven't figured out how to fix them yet. Maybe I'll try building a simpler flash algo and see if that does the trick. IIRC, I had one version that only worked on a freshly erased chip and another that only worked on an already programmed one.

Signed-off-by: Maximilian Deubel <maximilian.deubel@nordicsemi.no> (both commits)
@maxd-nordic changed the base branch from develop to main on November 19, 2024 14:43
@maxd-nordic marked this pull request as ready for review on November 19, 2024 14:43
@maxd-nordic changed the title from "WIP: nrf5340 support" to "nrf5340 support" on Nov 19, 2024
@maxd-nordic (Contributor, Author)

@mrenner42 Would you like to try the latest version? It seems promising to me!

@mrenner42

@maxd-nordic Very cool, thanks a lot. I'll give it a go this week and report back!

@mrenner42

@maxd-nordic Sorry for only getting back to you now.

We tried your latest version, and it works most of the time.
However, opening the probe multiple times in a row leads to one of the two errors below, in random order. Might this be more of an issue with the DAP probe / pyocd in general?

  File "tests/test_release_update_recovery.py", line 134, in initial_setup_target
    common_test_functions.teardown_target(
  File "tests/common/common_test_functions.py", line 54, in teardown_target
    with DebuggerProbeFactory(device_under_test).probe as probe:
  File "tests/common/debugger_probe.py", line 138, in __enter__
    self._session.open()
  File "_venv/test/lib/python3.10/site-packages/pyocd/core/session.py", line 549, in open
    self._probe.open()
  File "_venv/test/lib/python3.10/site-packages/pyocd/probe/cmsis_dap_probe.py", line 316, in open
    raise self._convert_exception(exc) from exc
pyocd.core.exceptions.ProbeError: Unable to claim interface for probe xxxxxxxxxxxxxx

or

tests/test_release_update_recovery.py:54: in device_reset_workaround
    with DebuggerProbeFactory(device_under_test).probe as probe:
tests/common/debugger_probe.py:134: in __enter__
    self._session.open()
_venv/test/lib/python3.10/site-packages/pyocd/core/session.py:553: in open
    self._board.init()
_venv/test/lib/python3.10/site-packages/pyocd/board/board.py:143: in init
    self.target.init()
_venv/test/lib/python3.10/site-packages/pyocd/coresight/coresight_target.py:123: in init
    super().init()
_venv/test/lib/python3.10/site-packages/pyocd/core/soc_target.py:174: in init
    seq.invoke()
_venv/test/lib/python3.10/site-packages/pyocd/utility/sequencer.py:213: in invoke
    resultSequence.invoke()
_venv/test/lib/python3.10/site-packages/pyocd/utility/sequencer.py:208: in invoke
    resultSequence = call()
_venv/test/lib/python3.10/site-packages/pyocd/target/family/target_nRF53.py:782: in persist_unlock_app
    self.write_uicr_app(0x00FF8000, 0x50FA50FA)
_venv/test/lib/python3.10/site-packages/pyocd/target/family/target_nRF53.py:803: in write_uicr_app
    return self._write_uicr(addr, value, self.cores[0], 0x50039000)
_venv/test/lib/python3.10/site-packages/pyocd/target/family/target_nRF53.py:822: in _write_uicr
    self._wait_nvmc_ready(core, nvmc_base)
_venv/test/lib/python3.10/site-packages/pyocd/target/family/target_nRF53.py:839: in _wait_nvmc_ready
    if core.read32(nvmc_base + 0x400) != 0x00000000:  # NVMC.READY != BUSY
_venv/test/lib/python3.10/site-packages/pyocd/core/memory_interface.py:116: in read32
    return self.read_memory(addr, 32, now)
_venv/test/lib/python3.10/site-packages/pyocd/coresight/cortex_m.py:601: in read_memory
    result = self.ap.read_memory(addr, transfer_size, now)
_venv/test/lib/python3.10/site-packages/pyocd/utility/concurrency.py:29: in _locking
    return func(self, *args, **kwargs)
_venv/test/lib/python3.10/site-packages/pyocd/coresight/ap.py:1122: in _read_memory
    result = read_mem_cb()
_venv/test/lib/python3.10/site-packages/pyocd/coresight/ap.py:1100: in read_mem_cb
    res = result_cb() # type: ignore # ignore possibly unbound result_cb
_venv/test/lib/python3.10/site-packages/pyocd/coresight/dap.py:923: in read_ap_cb
    result = result_cb()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

    def read_ap_result_callback():
        try:
            value = result()
            TRACE.debug("trace: ... read_ap(addr=%#010x) -> %#010x", addr, value)
            return value
        except DAPAccess.Error as error:
            TRACE.debug("trace: ... read_ap(addr=%#010x) -> error(%s)", addr, error)
>           raise self._convert_exception(error) from error
E           pyocd.core.exceptions.TransferFaultError: Memory transfer fault @ 0x50039400-0x50039403

_venv/test/lib/python3.10/site-packages/pyocd/probe/cmsis_dap_probe.py:606: TransferFaultError
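
The faulting address here is the application NVMC's READY register (base 0x50039000 plus offset 0x400, both visible in the trace). A transfer fault on that read means the access itself was refused, not that the register held an unexpected value; a defensive sketch of the polling read, reusing the names from the trace:

        from pyocd.core.exceptions import TransferFaultError

        NVMC_BASE = 0x50039000          # application NVMC base, from the trace
        NVMC_READY = NVMC_BASE + 0x400  # NVMC.READY; 0 means BUSY per the trace

        try:
            busy = core.read32(NVMC_READY) == 0
        except TransferFaultError:
            # The AP rejected the access (e.g. core held in reset or debug
            # access blocked), so report that instead of the raw fault.
            raise RuntimeError("NVMC not accessible; is the app core debuggable?")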

@maxd-nordic (Contributor, Author)

@mrenner42 The first one is a probe issue. The second one looks like a problem with core access; that could be my target implementation.

@mrenner42

@maxd-nordic Yeah, that's what I thought. The issues occur in one of our Python tests where we reset (SW reset) the target during DFU updates (over UART). I was able to work around the TransferFaultError by disabling auto_unlock for the reset procedure. The "unable to claim" issue was probably due to parallel access to the probe.
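
For reference, a minimal sketch of that workaround using pyocd's session options (auto_unlock is a standard pyocd option; the target name is the one this PR adds):

        from pyocd.core.helpers import ConnectHelper

        # Open the session without the automatic unlock (mass erase) step,
        # matching the workaround described above.
        with ConnectHelper.session_with_chosen_probe(
                target_override="nrf5340",
                options={"auto_unlock": False}) as session:
            session.target.reset()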

Now I occasionally get the following exception:

tests/common/debugger_probe.py:124: in __enter__
    self._session.open()
_venv/test/lib/python3.10/site-packages/pyocd/core/session.py:553: in open
    self._board.init()
_venv/test/lib/python3.10/site-packages/pyocd/board/board.py:143: in init
    self.target.init()
_venv/test/lib/python3.10/site-packages/pyocd/coresight/coresight_target.py:123: in init
    super().init()
_venv/test/lib/python3.10/site-packages/pyocd/core/soc_target.py:174: in init
    seq.invoke()
_venv/test/lib/python3.10/site-packages/pyocd/utility/sequencer.py:208: in invoke
    resultSequence = call()
_venv/test/lib/python3.10/site-packages/pyocd/target/family/target_nRF53.py:773: in check_part_info
    partno = self.read32(0x00FF020C)
_venv/test/lib/python3.10/site-packages/pyocd/core/memory_interface.py:116: in read32
    return self.read_memory(addr, 32, now)
_venv/test/lib/python3.10/site-packages/pyocd/core/soc_target.py:232: in read_memory
    return self.selected_core_or_raise.read_memory(addr, transfer_size, now)
_venv/test/lib/python3.10/site-packages/pyocd/coresight/cortex_m.py:601: in read_memory
    result = self.ap.read_memory(addr, transfer_size, now)
_venv/test/lib/python3.10/site-packages/pyocd/utility/concurrency.py:29: in _locking
    return func(self, *args, **kwargs)
_venv/test/lib/python3.10/site-packages/pyocd/coresight/ap.py:1122: in _read_memory
    result = read_mem_cb()
_venv/test/lib/python3.10/site-packages/pyocd/coresight/ap.py:1100: in read_mem_cb
    res = result_cb() # type: ignore # ignore possibly unbound result_cb
_venv/test/lib/python3.10/site-packages/pyocd/coresight/dap.py:923: in read_ap_cb
    result = result_cb()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def read_ap_result_callback():
        try:
            value = result()
            TRACE.debug("trace: ... read_ap(addr=%#010x) -> %#010x", addr, value)
            return value
        except DAPAccess.Error as error:
            TRACE.debug("trace: ... read_ap(addr=%#010x) -> error(%s)", addr, error)
>           raise self._convert_exception(error) from error
E           pyocd.core.exceptions.TransferTimeoutError

_venv/test/lib/python3.10/site-packages/pyocd/probe/cmsis_dap_probe.py:606: TransferTimeoutError

I'm not sure; could it be essentially the same issue as the TransferFaultError?

@ithinuel changed the base branch from main to develop on February 22, 2025 09:16