Add new vshard configuration options #2186
Conversation
Force-pushed from 66da703 to 2ef7f9d
Thanks for the quick PR!
I'm wondering, did you test the rebalancer option with vshard version >= 0.1.25?
I didn't see auto tests in the PR, and it also seems you've added the rebalancer option at the top level of the vshard config, when it should be at either the replicaset or replica level.
Here is the link to the commit which adds support for the rebalancer option.
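For illustration, a minimal sketch of where the flag is expected to live in a plain vshard config, assuming the vshard >= 0.1.25 config shape; the UUIDs, URI and credentials below are placeholders:

-- Sketch only: rebalancer is a replicaset/replica-level flag,
-- not a root-level vshard option. All identifiers are placeholders.
local vshard_cfg = {
    bucket_count = 3000,
    sharding = {
        ['aaaaaaaa-0000-4000-a000-000000000001'] = {
            rebalancer = true, -- replicaset-level flag (vshard >= 0.1.25)
            replicas = {
                ['aaaaaaaa-0000-4000-a000-000000000002'] = {
                    uri = 'storage:secret@127.0.0.1:3301',
                    name = 'storage_1_a',
                    master = true,
                    -- rebalancer = false, -- the flag can also be set per replica
                },
            },
        },
    },
}
-- Such a table would then be passed to vshard.router.cfg() / vshard.storage.cfg().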
Force-pushed from 0f26253 to 6616190
Force-pushed from 5b551f8 to e5f96af
Hi! I cannot figure out how I can simply disable the rebalancer with the rebalancer_mode = 'off' option. It would be great to be able to do it in
What about adding the ability to manipulate this option before bootstrapping, the same as with bucket_count? In my case, I would like to bootstrap the cluster knowing the rebalancer is disabled.
Lines 782 to 793 in bcb7eae
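For illustration, a hypothetical sketch of what is being asked for here. rebalancer_mode is the requested addition, not an existing cartridge.cfg parameter; bucket_count is already accepted before bootstrap:

local cartridge = require('cartridge')

-- bucket_count can already be fixed before the cluster is bootstrapped;
-- rebalancer_mode below is the proposed option, not an existing one.
local ok, err = cartridge.cfg({
    roles = {
        'cartridge.roles.vshard-router',
        'cartridge.roles.vshard-storage',
    },
    bucket_count = 3000,
    rebalancer_mode = 'off', -- hypothetical: disable rebalancing from the start
})
assert(ok, tostring(err))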
Thanks!
I've checked the rebalancer_mode option. It seems to work well.
But I had some issues with the rebalancer option. I tried to set the rebalancer option to true on a working cluster using this query:
{
  "query": "mutation editTopology($replicasets:[EditReplicasetInput]){cluster{edit_topology(replicasets:$replicasets){replicasets{uuid weight roles alias rebalancer}}}}",
  "variables": {
    "replicasets": [
      {
        "weight": 0,
        "uuid": "ebef4119-7e39-41d5-a684-6f99e83c98b0",
        "alias": "core-1",
        "rebalancer": true
      }
    ]
  }
}
The query was successful, but the response was wrong:
{
  "data": {
    "cluster": {
      "edit_topology": {
        "replicasets": [
          {
            "alias": "core-1",
            "rebalancer": null,
            "roles": [
              "vshard-router",
              "vshard-storage",
              "app.roles.sharding-storage",
              "app.roles.queue"
            ],
            "uuid": "ebef4119-7e39-41d5-a684-6f99e83c98b0",
            "weight": 0
          }
        ]
      }
    }
  }
}
Check out the "rebalancer": null part.
It seems it just isn't passed to the GraphQL Replicaset object, since it actually has been saved into the cartridge clusterwide config:
r-1> cartridge.config_get_deepcopy('topology').replicasets['ebef4119-7e39-41d5-a684-6f99e83c98b0']
---
- weight: 0
  vshard_group: default
  alias: core-1
  master:
  - 20f05d26-0560-4b8b-848f-90558f7e496d
  all_rw: false
  roles:
    app.roles.sharding-storage: true
    app.roles.queue: true
    vshard-storage: true
    ddl-manager: true
    metrics: true
    vshard-router: true
  rebalancer: true
...
I suppose you need to add the rebalancer field to the replicaset in the get-topology module, somewhere here:
cartridge/cartridge/lua-api/get-topology.lua
Lines 104 to 125 in d1a11f4
replicasets[replicaset_uuid] = {
    uuid = replicaset_uuid,
    roles = {},
    status = 'healthy',
    master = {
        uri = 'void',
        uuid = 'void',
        status = 'void',
        message = 'void',
    },
    active_master = {
        uri = 'void',
        uuid = 'void',
        status = 'void',
        message = 'void',
    },
    weight = nil,
    vshard_group = replicaset.vshard_group,
    servers = {},
    all_rw = replicaset.all_rw or false,
    alias = replicaset.alias or 'unnamed',
}
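A sketch of what that change could look like, assuming the flag is stored as a plain replicaset.rebalancer field in the clusterwide topology (as the config dump above suggests); this is not the PR's actual diff:

-- Sketch only: expose the flag stored in the clusterwide topology,
-- assuming it is kept as replicaset.rebalancer (see the config dump above).
replicasets[replicaset_uuid] = {
    uuid = replicaset_uuid,
    -- ...the fields shown above stay unchanged...
    alias = replicaset.alias or 'unnamed',
    rebalancer = replicaset.rebalancer, -- nil when the flag was never set
}

The GraphQL schema apparently already exposes the field (the query above returns it as null rather than erroring), so the lua-api side is presumably the missing piece.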
And here is a get_replicasets call lacking the rebalancer field:
r-1> lua_api_topology.get_replicasets('ebef4119-7e39-41d5-a684-6f99e83c98b0')
---
- - &0
    active_master: &1
      disabled: false
      message:
      labels: []
      electable: true
      uuid: 20f05d26-0560-4b8b-848f-90558f7e496d
      uri: localhost:3303
      alias: s-1
      clock_delta: 2.3e-05
      replicaset: *0
      priority: 1
      status: healthy
    master: *1
    status: healthy
    all_rw: false
    vshard_group: default
    alias: core-1
    weight: 0
    servers:
    - *1
    roles:
    - vshard-router
    - vshard-storage
    - app.roles.sharding-storage
    - app.roles.queue
    uuid: ebef4119-7e39-41d5-a684-6f99e83c98b0
...
LGTM, thanks!
I didn't forget about
Close #2185