cleanup mrecx entry if current rx processing failed #10718
base: main
Conversation
Signed-off-by: Di Wang <ddiwang@google.com>
@shefty Could you please have a look? Thanks.
@@ -1114,6 +1114,8 @@ static void xnet_complete_rx(struct xnet_ep *ep, ssize_t ret)
 cq_error:
 	FI_WARN(&xnet_prov, FI_LOG_EP_DATA,
 		"msg recv failed ret = %zd (%s)\n", ret, fi_strerror((int)-ret));
+	if (rx_entry->ctrl_flags & XNET_MULTI_RECV)
+		xnet_srx_cancel_rx(ep->srx, &ep->srx->rx_queue, rx_entry->context);
On the surface, this doesn't look right. xnet_srx_cancel_rx() does:
slist_foreach(queue, cur, prev) {
	xfer_entry = container_of(cur, struct xnet_xfer_entry, entry);
	if (xfer_entry->context == context) {
		slist_remove(queue, cur, prev);
		xnet_report_error(xfer_entry, FI_ECANCELED);
		xnet_free_xfer(xnet_srx2_progress(srx), xfer_entry);
		return true;
	}
}
We don't need to search for the entry, since we already have it, and the code below calls xnet_report_error() and xnet_free_xfer(). So, this should end up with a double free, plus the error completion generated below will follow the cancel.
I think we just need the slist_remove() here.
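To make the concern concrete, here is a rough sketch of the cq_error path with the proposed addition in place. The two calls after the addition are paraphrased from the description above, and xnet_ep2_progress() is assumed by analogy with xnet_srx2_progress(); this is not the exact upstream code:

cq_error:
	FI_WARN(&xnet_prov, FI_LOG_EP_DATA,
		"msg recv failed ret = %zd (%s)\n", ret, fi_strerror((int)-ret));
	/* Proposed addition: if the entry matched by context were this same
	 * rx_entry, the cancel would report FI_ECANCELED and free it here... */
	if (rx_entry->ctrl_flags & XNET_MULTI_RECV)
		xnet_srx_cancel_rx(ep->srx, &ep->srx->rx_queue, rx_entry->context);
	/* ...and the existing error path would then report a second completion
	 * and free the entry again: the duplicate completion and double free
	 * noted above. */
	xnet_report_error(rx_entry, (int) -ret);
	xnet_free_xfer(xnet_ep2_progress(ep), rx_entry);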
Btw, I'm not even sure about the slist_remove(). That may not be needed anyway. Can you describe the problem you're seeing?
Hmm, this is not for the current xfer_entry, but for the entry added in xnet_start_recv -> xnet_alter_mrecv:
https://github.com/ofiwg/libfabric/blob/main/prov/tcp/src/xnet_progress.c#L675
So we need to use the context to search for the entry in the list?
The issue we are seeing is that a stale rx_entry, carrying a stale context, is left in the rx_queue; it is added by xnet_start_recv -> xnet_alter_mrecv(). If the current rx entry's receive fails, only the current entry is removed, not the mrecv entry, which can then be picked up by subsequent progress and cause the problem.
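A simplified sketch of the flow being described, to show where the stale entry comes from. The helper and field names here are illustrative assumptions (alloc_xfer_entry() is hypothetical); this is not the real xnet_alter_mrecv() body:

/* The upper layer posts a multi-recv buffer with context C. When a message
 * lands in the front of that buffer, xnet_start_recv -> xnet_alter_mrecv
 * queues a new xfer entry covering the remaining space, carrying the SAME
 * context C. Roughly: */
static void alter_mrecv_sketch(struct xnet_srx *srx,
			       struct xnet_xfer_entry *mrecv_entry,
			       size_t consumed)
{
	struct xnet_xfer_entry *remainder;

	remainder = alloc_xfer_entry(srx);		/* hypothetical helper */
	remainder->context = mrecv_entry->context;	/* same upper-layer context */
	remainder->iov[0].iov_base =
		(char *) mrecv_entry->iov[0].iov_base + consumed;
	remainder->iov[0].iov_len = mrecv_entry->iov[0].iov_len - consumed;
	remainder->ctrl_flags |= XNET_MULTI_RECV;
	slist_insert_tail(&remainder->entry, &srx->rx_queue);
}
/* If the receive into the consumed chunk then fails, the cq_error path only
 * reports and frees the current rx_entry. The remainder entry above stays on
 * srx->rx_queue and still points at context C, which mercury has already
 * freed after seeing the error: that is the stale rx this patch targets. */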
It's a multi-receive buffer. Why does the remaining buffer space need to be removed?
Ah, the current mercury implementation cleans up the context if any rx fails, i.e. the completion callback of the multi-receive is called immediately and the context is freed. Unfortunately, any rx entries still referencing that context then need to be removed/freed.
One assumption we had been making was that if the current multi-recv operation reports an error from the fi cq, we mark that operation as completed, meaning we assumed no further receives could be made even if there is still space in the buffer. Is that behavior documented somewhere?
What you are saying is that if the current multi-recv operation reports a failure, we should just keep proceeding regardless until the FI_MULTI_RECV flag gets set, marking the completion of the operation?
I don't think that behavior is documented anywhere, and I don't think that is the behavior we would want. There could be many simultaneously active messages associated with a single posted multi-recv buffer. E.g. the messages could be from different peers. An error with one message shouldn't impact the other messages which may be in progress.
The patch here is only considering any remaining buffer not currently assigned to an incoming message. But, more generally, that may not be the case.
And, yes, I think you would continue processing until the FI_MULTI_RECV flag is set.
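For reference, a minimal sketch of that consumer-side pattern, assuming the endpoint, CQ, and buffer are already set up elsewhere and ignoring error handling on the fi_* calls themselves:

#include <stdbool.h>
#include <sys/uio.h>
#include <rdma/fabric.h>
#include <rdma/fi_endpoint.h>
#include <rdma/fi_cq.h>

/* Post one multi-recv buffer and keep draining completions, including error
 * completions for individual messages, until a completion carrying
 * FI_MULTI_RECV shows the provider has released the buffer. */
static void drain_multi_recv(struct fid_ep *ep, struct fid_cq *cq,
			     void *buf, size_t len, void *context)
{
	struct iovec iov = { .iov_base = buf, .iov_len = len };
	struct fi_msg msg = { .msg_iov = &iov, .iov_count = 1, .context = context };
	struct fi_cq_data_entry comp;
	struct fi_cq_err_entry err = { 0 };
	bool released = false;

	fi_recvmsg(ep, &msg, FI_MULTI_RECV);

	while (!released) {
		ssize_t ret = fi_cq_read(cq, &comp, 1);
		if (ret == 1) {
			/* One message consumed part of the buffer; FI_MULTI_RECV
			 * in the flags means the buffer itself is now released. */
			released = comp.flags & FI_MULTI_RECV;
		} else if (ret == -FI_EAVAIL) {
			/* One message failed, but the buffer may still be in
			 * use; only stop once FI_MULTI_RECV is reported. */
			fi_cq_readerr(cq, &err, 0);
			released = err.flags & FI_MULTI_RECV;
		}
		/* ret == -FI_EAGAIN: nothing yet, keep polling. */
	}
}

The point of the loop is exactly the one under discussion: per-message errors do not retire the buffer; only the completion carrying the FI_MULTI_RECV flag does.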
OK, thanks, that was useful to know. It looks like the FI_MULTI_RECV flag was not always reported with tcp though, so I've opened #10721.
The mrecv entry should be removed if the current rx processing fails; otherwise, subsequent processing from the upper layer may pick up this stale rx.