fix(l1): receipts invalid size libmdbx #2065
base: main
Conversation
great work!
```diff
 while let Some(receipt) = IndexedChunk::read_from_db(&mut cursor, key)? {
     receipts.push(receipt);
     receipt_index += 1;
     key = (*block_hash, receipt_index).into();
 }

-Ok(receipts.into_iter().map(|receipt| receipt.to()).collect())
+Ok(receipts)
 }
```
I think there might be a risk of an infinite loop here. If a block uses 256 chunks, then after the chunk with index 255 the addition overflows: you'll ask for chunk index 0 of the same hash and start the whole cycle over again, forever.
Maybe we should store the size somewhere.
Yes, I considered that as well. The index is defined as a u8, so we can have up to 256 chunks. With each value at 2022 bytes, a single block's receipts would need to total around 517,632 bytes (~0.5 MB) to hit that limit. To be honest, I'm not sure whether Ethereum logs can make receipts take that much space.

For now, as we discussed offline, we'll keep it as is and address any limitations if they arise. I'll also update the code to explicitly reflect this limit and leave a clear warning.
Solved in 18b2941.
Works for me on the server! Let's wait for @Oppen's approval anyway.
Motivation
Syncing Holesky.
Description
While syncing on Holesky, we encountered blocks that exceeded the maximum allowed entry size in libmdbx. Increasing the page size was not a viable solution, as even the maximum allowable size in libmdbx wasn't enough for some blocks.
To address this, we implemented an `IndexedChunk` trait that splits oversized data into smaller chunks, ensuring each piece stays within the maximum value size limit. Since libmdbx is structured as a B+-tree map, we had to guarantee the correct ordering of chunks. We achieved this by appending a byte representing the chunk index to the key; this preserves the correct sequence of chunks during retrieval.

Moving forward, we should consider whether other tables might also exceed this size limit. If so, we can extend this trait to cover them.
Closes None