Replies: 8 comments 13 replies
-
We need to put "sensible" defaults in the newly added fields in order to make the change backward compatible and not mutually exclusive with our existing batching strategy. I am not sure what the
-
I think that
-
Why `batch_length`? `batch_size` seems intuitive enough.
-
`maximum_buffer_size` signifies that there is a `minimum_buffer_size` somewhere too. Do we have plans to allow a range here? If not, then just `buffer_size` seems intuitive enough again.
-
`linger_time` might require me to look up which time unit this configuration uses. `linger_ms` by itself tells me to use milliseconds.
-
Even if background send is set to false, we might still need `buffer_size`, since there could be a back-pressure situation or an erroneous client implementation. In these cases, the client needs a way to ensure these producers don't end up consuming all the available memory. `buffer_size` or `buffer_memory` should always be set, perhaps with a default value of, say, 33 MB.
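The idea of always enforcing a memory cap can be sketched roughly as follows. This is a minimal illustration, not iggy's actual API; the type and method names are assumptions, and the 33 MB default mirrors the value suggested above.

```rust
// Hypothetical sketch: always enforce a buffer memory cap, even when
// background send is disabled, so a misbehaving producer cannot exhaust
// memory. Names are illustrative, not iggy's actual API.
const DEFAULT_BUFFER_MEMORY: usize = 33 * 1024 * 1024;

struct ProducerBuffer {
    used: usize,
    capacity: usize,
}

impl ProducerBuffer {
    fn new(capacity: Option<usize>) -> Self {
        Self {
            used: 0,
            capacity: capacity.unwrap_or(DEFAULT_BUFFER_MEMORY),
        }
    }

    /// Try to reserve space for a message; refuse instead of growing
    /// without bound under back-pressure.
    fn try_reserve(&mut self, bytes: usize) -> Result<(), String> {
        if self.used + bytes > self.capacity {
            return Err(format!(
                "buffer full: {} of {} bytes used",
                self.used, self.capacity
            ));
        }
        self.used += bytes;
        Ok(())
    }
}

fn main() {
    let mut buf = ProducerBuffer::new(Some(10));
    assert!(buf.try_reserve(8).is_ok());
    assert!(buf.try_reserve(4).is_err()); // would exceed the cap
    println!("used {} of {} bytes", buf.used, buf.capacity);
}
```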
-
"how errors on send should be handled when buffer is not full" — errors for any sync send should be communicated immediately; these could be failures or timeouts. Errors for any async send should trigger a callback method. Clients might need to log them, send them to a dead-letter queue, or do something custom with them.
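One way to surface async send errors, as an alternative to a callback, is an error channel the client can drain however it likes (log, dead-letter, custom handling). A minimal sketch under that assumption; `SendError` and the send loop are illustrative, not iggy's actual API:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical error type for a failed background send.
#[derive(Debug, PartialEq)]
struct SendError {
    message_id: u64,
    reason: String,
}

fn main() {
    let (err_tx, err_rx) = mpsc::channel::<SendError>();

    // Background sender: on failure it reports through the channel
    // instead of panicking or silently dropping the message.
    let sender = thread::spawn(move || {
        for id in 0..3u64 {
            let failed = id == 1; // simulate one failed send
            if failed {
                err_tx
                    .send(SendError {
                        message_id: id,
                        reason: "timeout".into(),
                    })
                    .unwrap();
            }
        }
    });
    sender.join().unwrap();

    // Client side: log, dead-letter, or handle the errors as needed.
    for err in err_rx {
        println!("send failed: {:?}", err);
    }
}
```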
-
@MasteMind @spetz @numinnex |
-
On 25.02.2025 there was an interesting discussion on Discord.
The discussion revolved around performance optimizations and feature enhancements in iggy compared to Kafka. Participants debated strategies such as sticky partitioning, batching, and compression, noting that intelligent client implementations have a significant impact on overall throughput and latency. They compared Kafka's round-robin partitioning with sticky partitioning, emphasizing that filling one partition's batch before moving to the next can greatly improve performance and cost efficiency by reducing network load.
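The sticky-partitioning idea described above can be sketched like this: the partitioner keeps routing messages to one partition until that partition's batch fills up, then moves on, unlike round-robin which rotates on every message. All names here are illustrative, not iggy's actual API.

```rust
// Hypothetical sticky partitioner: stay on one partition until its
// batch is full, then advance to the next.
struct StickyPartitioner {
    partition_count: u32,
    current: u32,
    batch_len: usize,
    max_batch_len: usize,
}

impl StickyPartitioner {
    fn new(partition_count: u32, max_batch_len: usize) -> Self {
        Self {
            partition_count,
            current: 0,
            batch_len: 0,
            max_batch_len,
        }
    }

    /// Returns the partition for the next message. Unlike round-robin,
    /// the choice only advances once the current batch is full.
    fn next_partition(&mut self) -> u32 {
        if self.batch_len >= self.max_batch_len {
            self.current = (self.current + 1) % self.partition_count;
            self.batch_len = 0;
        }
        self.batch_len += 1;
        self.current
    }
}

fn main() {
    let mut p = StickyPartitioner::new(3, 2);
    let assigned: Vec<u32> = (0..6).map(|_| p.next_partition()).collect();
    // Two messages per partition before moving on: [0, 0, 1, 1, 2, 2]
    println!("{:?}", assigned);
}
```

Batches then arrive full at each partition, which is where the network-load savings come from.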
The conversation also touched on an API for sending messages in the background and error handling during background sends. Participants discussed the merits of different failure handling strategies, such as blocking until a send succeeds, blocking with a timeout, or failing immediately. To address these options, they proposed introducing a new enum, `IggyBackgroundSendFailureMode`, which would allow clients to choose the desired behavior when a background send fails.

As a summary of this discussion, I propose a task to support this API:

Create an extension for the `IggyProducerBuilder` that introduces configurable parameters for background sending.

Changes in existing fields:

- `batch_size` - should be changed to `batch_length`.
- `send_interval` - should be changed to `linger_time`; determines how long messages should remain in the queue before sending.

Add new fields (optional?):

- `background_send` - determines if send-in-background functionality should be enabled.
- `batch_size` - value (in bytes) of how much data should be buffered before sending.
- `maximum_buffer_size` - maximum value (in bytes) of buffered data; if exceeded, an error will be reported (see `IggyBackgroundSendFailureMode`).
- `failure_mode` - determines the behavior of `send_messages` when the buffer is full (or in general on failure)?

Ensure that the new API integrates seamlessly with the current `send_messages` method, providing optional background sending.

Handle all cases with conflicting parameters set in the builder, e.g. `maximum_buffer_size` should not be set when `background_send` is set to false.

It is not certain how errors on send should be handled when the buffer is not full. Perhaps an error channel to communicate errors would be sufficient.
Proposal of enum:
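(The original code block appears to have been lost in extraction. A minimal sketch of what the enum might look like, based on the three failure strategies mentioned above; the variant names and the helper are assumptions, not the actual proposal.)

```rust
use std::time::Duration;

/// Hypothetical sketch of the proposed enum, covering the three
/// strategies discussed: block, block with a timeout, fail immediately.
enum IggyBackgroundSendFailureMode {
    /// Block the caller until buffer space frees up and the send succeeds.
    Block,
    /// Block, but give up with an error after the given timeout.
    BlockWithTimeout(Duration),
    /// Return an error immediately when the buffer is full.
    FailImmediately,
}

/// Illustrative helper: describe what the producer would do on failure.
fn describe(mode: &IggyBackgroundSendFailureMode) -> String {
    match mode {
        IggyBackgroundSendFailureMode::Block => {
            "block until the send succeeds".to_string()
        }
        IggyBackgroundSendFailureMode::BlockWithTimeout(t) => {
            format!("block for at most {:?}, then fail", t)
        }
        IggyBackgroundSendFailureMode::FailImmediately => {
            "fail immediately".to_string()
        }
    }
}

fn main() {
    let mode = IggyBackgroundSendFailureMode::BlockWithTimeout(Duration::from_millis(500));
    println!("{}", describe(&mode));
}
```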
@spetz @numinnex what's your opinion on this?