Fix two issues encountered when testing LHv2 provisioning (backport #142) #144

Merged: 2 commits into v0.7.x from mergify/bp/v0.7.x/pr-142 on Sep 27, 2024

Conversation

mergify[bot] commented Sep 27, 2024

Problem:

  1. Disks added for use by LHv2 are provisioned successfully, but their status remains stuck with the error "spec and status of disks on node $NAME are being syncing and please retry later."
  2. Active LHv2 volumes attached to VMs appear as /dev/dm-* and /dev/nvme* devices on the host. NDM sees these and thinks they're real disks, and then creates new BD CRs from those devices.

Solution:

  1. Add a call to reflect.DeepEqual() to avoid unnecessary updates and the resulting stuck state.
  2. Abuse the existing vendor filter to pick up SPDK devices, and explicitly exclude /dev/dm- devices in ApplyExcludeFiltersForDisk().

Note: I intend to handle this more cleanly later as part of harvester/harvester#5059

Related Issue:
harvester/harvester#5274

Test plan:
N/A


This is an automatic backport of pull request #142 done by [Mergify](https://mergify.com).

Previously, every call to LonghornV2Provisioner.Update() would
trigger an update of the Longhorn node object and a requeue
even if nothing had changed. This resulted in the BD getting
stuck with the AddedToNode condition set to False and an error:

  admission webhook "validator.longhorn.io" denied the request:
  spec and status of disks on node harvester-node-0 are being
  syncing and please retry later.

Now we only try to sync if the DiskDriver has actually changed,
and everything behaves correctly.

Signed-off-by: Tim Serong <tserong@suse.com>
(cherry picked from commit 77c4672)
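
For context, a minimal sketch of the kind of guard described above, using reflect.DeepEqual() to skip the node update when nothing has changed. The Disk struct and syncNodeDisks helper here are hypothetical stand-ins for illustration, not the actual node-disk-manager or Longhorn types.

```go
package main

import (
	"fmt"
	"reflect"
)

// Disk is a hypothetical stand-in for the disk entry tracked on the Longhorn node.
type Disk struct {
	Path       string
	DiskDriver string
}

// syncNodeDisks mimics the guarded update: it only pushes an update (and thus
// triggers a requeue) when the desired disk actually differs from what is
// already recorded on the node object.
func syncNodeDisks(current, desired Disk) (updated bool) {
	if reflect.DeepEqual(current, desired) {
		// Nothing changed; skip the update so the validating webhook isn't
		// hit with a no-op request while the previous sync is still settling.
		return false
	}
	// ... here the real controller would update the Longhorn node object ...
	return true
}

func main() {
	current := Disk{Path: "/dev/nvme0n1", DiskDriver: "aio"}
	fmt.Println(syncNodeDisks(current, current))                                    // false: no update needed
	fmt.Println(syncNodeDisks(current, Disk{Path: "/dev/nvme0n1", DiskDriver: ""})) // true: DiskDriver changed
}
```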
When Longhorn V2 volumes are created and attached to VMs, they
appear on Harvester hosts as /dev/dm-* and /dev/nvme* devices.
This is problematic, because NDM thinks those things are actual
disks and creates BD CRs from them.

This change is a bit of a hack until I finish the work for
harvester/harvester#5059.

Signed-off-by: Tim Serong <tserong@suse.com>
(cherry picked from commit 05ca4c7)
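
As a rough illustration of the second fix (not the actual ApplyExcludeFiltersForDisk() implementation), an exclude filter along these lines would skip device-mapper paths and devices reporting an SPDK vendor string; the Device struct and field names here are assumptions made for the sketch.

```go
package main

import (
	"fmt"
	"strings"
)

// Device is a hypothetical stand-in for the block device info NDM scans.
type Device struct {
	DevPath string // e.g. /dev/dm-0, /dev/nvme1n1
	Vendor  string // e.g. "SPDK" for NVMe devices backed by LHv2 volumes
}

// excludeFromDiskScan sketches the exclusion logic: device-mapper nodes and
// SPDK-vendor devices belong to attached LHv2 volumes rather than real disks,
// so no BD CR should be created for them.
func excludeFromDiskScan(d Device) bool {
	if strings.HasPrefix(d.DevPath, "/dev/dm-") {
		return true
	}
	if strings.EqualFold(strings.TrimSpace(d.Vendor), "SPDK") {
		return true
	}
	return false
}

func main() {
	for _, d := range []Device{
		{DevPath: "/dev/dm-0", Vendor: ""},
		{DevPath: "/dev/nvme1n1", Vendor: "SPDK"},
		{DevPath: "/dev/sda", Vendor: "ATA"},
	} {
		fmt.Printf("%-14s exclude=%v\n", d.DevPath, excludeFromDiskScan(d))
	}
}
```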
@Vicente-Cheng Vicente-Cheng merged commit e9d0b47 into v0.7.x Sep 27, 2024
4 of 5 checks passed
@Vicente-Cheng Vicente-Cheng deleted the mergify/bp/v0.7.x/pr-142 branch October 23, 2024 02:20