
nvnetdrv v2 part 2 #689

Merged
merged 7 commits into from
Feb 26, 2025

Conversation

Ryzee119
Contributor

@Ryzee119 Ryzee119 commented Dec 29, 2024

Continuation from #686

Will likely end up in 3 parts.

The primary change here is to pass the ring sizes to the init function instead of using #defines. Previously the hardware ring size and the queue size could differ. It's actually much simpler to make the hardware ring exactly the same size as the queue. It also works better with lwIP, as we can create pbuf pools that exactly match our requirements.

Most other commits are just minor bug fixes and housekeeping.

This also fixes a bug where, on soft reset or when coming from an XDK app, the network would not retrieve an IP address in some cases.

Member

@thrimbor thrimbor left a comment


Sorry for taking so long to get around to reviewing this.
Most of it is completely fine (as usual 😃 ), but I left a bit of feedback on the changes to nvnetdrv_acquire_tx_descriptors.

@@ -441,17 +447,25 @@ int nvnetdrv_init (size_t rx_buffer_count, nvnetdrv_rx_callback_t rx_callback)
}

// Allocate memory for TX and RX ring descriptors.
-void *descriptors = MmAllocateContiguousMemoryEx((RX_RING_SIZE + TX_RING_SIZE) * sizeof(struct descriptor_t), 0,
+void *descriptors = MmAllocateContiguousMemoryEx((g_rxRingSize + g_txRingSize) * sizeof(struct descriptor_t), 0,

@disean disean Feb 22, 2025


For performance reasons, you may want to align this pointer on a 64-byte boundary.

Suggested change
-void *descriptors = MmAllocateContiguousMemoryEx((g_rxRingSize + g_txRingSize) * sizeof(struct descriptor_t), 0,
+// Allocate memory for TX and RX ring descriptors + alignment.
+void *descriptors = MmAllocateContiguousMemoryEx((g_rxRingSize + g_txRingSize) * sizeof(struct descriptor_t) + (64 - 1), 0,
+// Round up to the closest multiple of the cache line boundary.
+descriptors = (void *)((size_t)descriptors + (64 - (size_t)descriptors) % 64);

Member

@JayFoxRox JayFoxRox Feb 22, 2025


MmAllocateContiguousMemoryEx should allocate physical memory pages, so I'm not sure if this could ever not be aligned to 0x1000 (which is a multiple of 0x40)?

(There's some other mallocs which could be aligned_alloc but I'm not sure if there's actual benefit to be expected here)

Member


Yeah, it's always page-aligned. MmAllocateContiguousMemoryEx has an alignment parameter, but that only does something if you want a power-of-two alignment bigger than 4 KiB.

@thrimbor thrimbor merged commit 4809e79 into XboxDev:master Feb 26, 2025
6 checks passed