[BUG] Error http: ContentLength=401 with Body length 0 #541
Hey, thanks for opening the issue. The use of the client could look like this:

```go
func (logger *MyLogger) SendLogEvent(event *MyLogEvent) error {
    // Marshal the struct
    s, err := json.Marshal(event)
    if err != nil {
        return WrapError("failed to marshal log event", err)
    }
    // Do the request
    _, err = logger.client.Index(
        context.Background(),
        opensearchapi.IndexReq{
            Index: logger.logsIndexName,
            Body:  bytes.NewReader(s),
        },
    )
    if err != nil {
        return err
    }
    // Success
    return nil
}
```
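For context, a minimal sketch of how the `logger.client` used above might be constructed with opensearch-go v4; the `MyLogger` fields, the constructor, and the address are assumptions based on the snippet, not code from this thread:

```go
package logger

import (
    opensearch "github.com/opensearch-project/opensearch-go/v4"
    "github.com/opensearch-project/opensearch-go/v4/opensearchapi"
)

// MyLogger matches the struct implied by SendLogEvent (hypothetical).
type MyLogger struct {
    client        *opensearchapi.Client
    logsIndexName string
}

// NewMyLogger is a hypothetical constructor; the address is a placeholder.
func NewMyLogger(indexName string) (*MyLogger, error) {
    client, err := opensearchapi.NewClient(opensearchapi.Config{
        Client: opensearch.Config{
            Addresses: []string{"https://localhost:9200"},
        },
    })
    if err != nil {
        return nil, err
    }
    return &MyLogger{client: client, logsIndexName: indexName}, nil
}
```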
OK, this morning I'm not getting the errors. I found where the error was being raised, in net/http/transfer.go:

```go
        ncopy, err = t.doBodyCopy(w, io.LimitReader(body, t.ContentLength))
        if err != nil {
            return err
        }
        var nextra int64
        nextra, err = t.doBodyCopy(io.Discard, body)
        ncopy += nextra
    }
    if err != nil {
        return err
    }
}
...
if !t.ResponseToHEAD && t.ContentLength != -1 && t.ContentLength != ncopy {
    return fmt.Errorf("http: ContentLength=%d with Body length %d",
        t.ContentLength, ncopy)
}
```

Seems like not all of the request body was being written, yet there were no I/O errors. Thanks for the info on how to use the API. I have updated my code.
@merlinz01 Glad this was helpful. Should we close this?
Now I'm getting some of these errors again even though the connectivity has much improved. So far they only show up when I set […]
OK, I'm getting it. So when it retries a failed attempt, the second time the request body is empty (see opensearch-go/opensearchtransport/opensearchtransport.go, lines 239 to 268 at ea3e57a). On line 248, an […]
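The mechanism is easy to demonstrate in isolation: a bytes.Buffer is drained by the first read, so a second read, which is what a retry amounts to, sees nothing:

```go
package main

import (
    "bytes"
    "fmt"
    "io"
)

func main() {
    buf := bytes.NewBufferString("payload")

    first, _ := io.ReadAll(buf)
    second, _ := io.ReadAll(buf) // the buffer was drained by the first read

    fmt.Printf("first=%q second=%q\n", first, second) // first="payload" second=""
}
```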
Here's how I fixed it:

```go
req.GetBody = func() (io.ReadCloser, error) {
    // We have to return a new reader each time so that retries
    // don't read from an already-consumed body.
    // This does not do any copying of the data.
    return io.NopCloser(bytes.NewReader(buf.Bytes())), nil
}
```

The code for non-gzipped bodies (line 261) does a similar thing implicitly, by copying the reader. I can open a PR if you want.
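A minimal sketch of the pattern, not the library's exact code (the payload here stands in for the gzipped bytes), showing that each GetBody call yields an independent reader:

```go
package main

import (
    "bytes"
    "fmt"
    "io"
    "net/http"
)

func main() {
    payload := []byte("stand-in for the gzipped body")

    // No network call is made; the URL only needs to parse.
    req, err := http.NewRequest(http.MethodPost, "https://example.invalid", bytes.NewReader(payload))
    if err != nil {
        panic(err)
    }
    // net/http already sets GetBody for *bytes.Reader bodies; setting it
    // explicitly here mirrors the fix for the gzip path, where the body
    // type does not get that automatic treatment.
    req.GetBody = func() (io.ReadCloser, error) {
        return io.NopCloser(bytes.NewReader(payload)), nil
    }

    r1, _ := req.GetBody()
    r2, _ := req.GetBody()
    b1, _ := io.ReadAll(r1)
    b2, _ := io.ReadAll(r2)
    fmt.Printf("%q\n%q\n", b1, b2) // both print the full payload
}
```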
Most definitely, with tests please. I think ideally the reader would be rewound only if a retry is needed.
We could implicitly copy the buffer as is done for non-gzipped bodies, but IMHO it's better to be explicit. The second method below only makes two small struct allocations, so it's not really a performance question.

```go
req.GetBody = func() (io.ReadCloser, error) {
    b := *buf
    return io.NopCloser(&b), nil
}
```

vs.

```go
req.GetBody = func() (io.ReadCloser, error) {
    reader := bytes.NewReader(buf.Bytes())
    return io.NopCloser(reader), nil
}
```
**What is the bug?**
I'm getting lots of these errors when indexing documents into a data stream:
`http: ContentLength=401 with Body length 0`

**How can one reproduce the bug?**
Here's my code: […]

**What is the expected behavior?**
No error.

**What is your host/environment?**
Debian 12 and Linux Mint 21.3, opensearch-go v4.0.0

**Do you have any screenshots?**

**Do you have any additional context?**