[BUG] Shard may not be allocated because of SameShardAllocationDecider and total_shards_per_node #12957
Comments
@kkewwei: Thanks for filing the issue. Currently, these AllocationDeciders run independently, so these issues happen when they start interfering with each other. I agree that anytime there are unassigned shards due to deciders that are not transient, the rebalancer needs to find the next optimal routing by moving shards around.
@shwetathareja, should we add an AllocationDecider to deal with these unassigned shards? If you agree, I'd be happy to try to implement it. Or if you have other ideas, I'd be glad to hear them.
@kkewwei feel free to take a stab at the solution, will be happy to review the PR.
@kkewwei A conflict between AllocationDeciders (hard constraints) will cause shards to stay unassigned. I think AllocationConstraint (a soft constraint) is a better choice for index shard balance. You can just disable the
@ViggoC, I get what you mean, just delete But it seems that doesn't solve such a case: an index with 3 shards and 1 replica: Now only shard2(r) is not allocated, even with
@kkewwei In this case,
@ViggoC My understanding: in this case, the shard count on node0 and node2 is 2, so INDEX_SHARD_PER_NODE_BREACH_CONSTRAINT_ID no longer works. Can you explain "shard2(r) can be allocated to node0 or node1 with INDEX_SHARD_PER_NODE_BREACH_CONSTRAINT_ID, then the rebalance step can relocate the other shard on the node to node2"? I am confused about this process. Thank you very much.
@kkewwei SameShardAllocationDecider will reject the allocation of shard2(r) on node2, so node0 and node1 are the candidates. And
Describe the bug
The shard will not be allocated because of SameShardAllocationDecider and total_shards_per_node; the result of _cluster/allocation/explain is as follows:
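(The explain output itself is not included here.) For reference, an explain report for a specific unassigned replica can be requested with a body like the following; the index name and shard number are placeholders:

```
GET _cluster/allocation/explain
{
  "index": "my-index",
  "shard": 2,
  "primary": false
}
```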
I've encountered this several times. Should we optimize shard allocation, or actively migrate other shards to the node to break this unhealthy balance?
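As a manual workaround for the "actively migrate" idea, a blocking shard can be moved off a full node with the cluster reroute API, which frees a slot for the unassigned replica. A sketch, with placeholder index and node names matching the toy scenario above:

```
POST _cluster/reroute
{
  "commands": [
    {
      "move": {
        "index": "my-index",
        "shard": 1,
        "from_node": "node0",
        "to_node": "node2"
      }
    }
  ]
}
```

This only works when the destination node itself passes all deciders for the moved shard; the point of the bug report is that the allocator does not find such a move on its own.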
Related component
Cluster Manager
Host/Environment (please complete the following information):