Feedback requested: gh-ost on Galera / XtraDB cluster? #224
That's actually on my list to test next week. It seems like the easiest way to do it would be to hang an asynchronous replica off of the cluster (which is a supported operation) and run gh-ost there, to minimize load on the synchronous cluster. I'll let you know how the testing in our dev cluster goes.
We use Galera/XtraDB and would definitely like to use gh-ost.
Thank you both! Do you foresee a particular issue with the final cut-over?
This applies to all writes that gh-ost performs. One thing that happens with Galera/XtraDB is the potential to deadlock "frequently". In theory this can happen between threads on a normal MySQL master, but it's very rare. It is caused by the rollup of multiple transactions into a single locking operation between the multiple Galera masters. The default for this is 16 transactions, but it is tunable [0]. I personally don't recommend changing this, but some people do.
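The practical consequence of the deadlocks described above is that a client writing to a Galera cluster should be prepared to retry on MySQL error 1213 (`ER_LOCK_DEADLOCK`). The following Python snippet is a minimal, hypothetical sketch of such a retry loop; the `DeadlockError` class and `run_with_deadlock_retry` helper are illustrative stand-ins, not gh-ost code:

```python
import time

# MySQL's deadlock error code, which Galera certification conflicts
# also surface as on the client side.
MYSQL_ER_LOCK_DEADLOCK = 1213

class DeadlockError(Exception):
    """Stand-in for a driver exception carrying a MySQL error code."""
    def __init__(self, errno):
        super().__init__(f"MySQL error {errno}")
        self.errno = errno

def run_with_deadlock_retry(operation, max_attempts=5, backoff=0.0):
    """Run `operation`, retrying when it raises a 1213 deadlock error."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except DeadlockError as exc:
            # Only retry genuine deadlocks, and give up after max_attempts.
            if exc.errno != MYSQL_ER_LOCK_DEADLOCK or attempt == max_attempts:
                raise
            time.sleep(backoff * attempt)  # simple linear backoff

# Example: a write that deadlocks twice, then succeeds on the third try.
attempts = {"n": 0}
def flaky_write():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise DeadlockError(MYSQL_ER_LOCK_DEADLOCK)
    return "committed"

print(run_with_deadlock_retry(flaky_write))  # prints "committed"
```

On a cluster where deadlocks are as frequent as described, every statement the migration tool issues would need to sit behind a loop like this, which is part of why the comments below treat the problem as non-trivial.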
@SuperQ
👍 💯 🎱
@shlomi-noach We tested gh-ost on a Galera staging cluster and got a lot of deadlocks; it looks like it never stops:
The staging environment is not a slave of production and does not have to deal with live data.
@Frank-Leemkuil a few questions:
Since named locks and
Neither works in Galera? Does
Let me clarify (and apologize for not posting this last week):
When you issue
(all on node 1)
After killing a session that had a lock, running
There may be some way to safely do this with Galera, but I have not yet found it. All tests were executed on Percona XtraDB Cluster 5.6.30.
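To illustrate the per-node behavior described above, here is a toy Python model of MySQL named locks (`GET_LOCK`/`IS_FREE_LOCK`): each node keeps its own lock namespace, and write-set replication does not propagate named locks, so a lock taken on node 1 looks free on node 2. The `Node` class and the lock name are purely illustrative, not MySQL, Galera, or gh-ost code:

```python
class Node:
    """Toy model of one cluster node's named-lock (GET_LOCK) namespace."""
    def __init__(self, name):
        self.name = name
        self.locks = {}  # lock name -> owning session id, local to this node

    def get_lock(self, lock_name, session_id):
        # Mirrors GET_LOCK(): 1 on success, 0 if another session holds it.
        if self.locks.get(lock_name, session_id) != session_id:
            return 0
        self.locks[lock_name] = session_id
        return 1

    def is_free_lock(self, lock_name):
        # Mirrors IS_FREE_LOCK(): checks only *this node's* state.
        return 0 if lock_name in self.locks else 1

node1, node2 = Node("node1"), Node("node2")

# Session 10 takes the cut-over lock on node 1...
assert node1.get_lock("gh-ost.cutover", session_id=10) == 1
# ...but node 2 never hears about it: the lock appears free there,
# so a session on node 2 can take the same name and break the protocol.
assert node2.is_free_lock("gh-ost.cutover") == 1
assert node2.get_lock("gh-ost.cutover", session_id=77) == 1
```

This is the heart of the cut-over problem on Galera: gh-ost's atomic cut-over relies on a named lock being visible cluster-wide, which this model shows it is not.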
@Roguelazer thank you so much for elaborating. On
Your comment also presents a bigger problem: a lock on the migrated table will lock other tables as well. That will easily cause a complete stall.
Last, @Frank-Leemkuil's output shows "Session lock gh-ost.506.lock expected to be found but wasn't", meaning
I do not intend to work on any of these in the immediate future; Galera is not on our priority list, and personally I do not have production experience with Galera. If anyone reading this wants to give this a try, please let me know.
I used docker-compose to test a minimal 3-node Percona XtraDB cluster. The main blocker is the default cut-over algorithm, as described before, but
Doesn't work for me when I've
and I run:
I'll get
@timglabisch for the sake of clarity, can you specify what doesn't work for you? Did you mean using
@shlomi-noach sorry for the confusion, I updated my comment.
So, this works okay-ish on 5.6. On 5.7, the explicit LOCK TABLE is forbidden (which is used even during the two-step process). It's not immediately clear to me why we need a
LOCK/UNLOCK TABLES can only be executed on one PXC node (5.6, or 5.7 with pxc_strict_mode != 'ENFORCING'), so if using one node for write requests, data consistency will be guaranteed by
As described before, the way PXC manages DDL statements (Total Order Isolation) would unintentionally cause
Dear community,
At this time we are not using Galera/XtraDB Cluster. We haven't tested gh-ost on Galera. We're curious to hear whether gh-ost on Galera makes sense, given you can use Rolling Schema Upgrades.
Thank you!