
[Feature] Kuberay RayJob MultiKueue adapter #3892

Open

wants to merge 5 commits into main from feature/kuberay-multikueue-adapter
Conversation

mszadkow
Contributor

@mszadkow mszadkow commented Dec 19, 2024

What type of PR is this?

/kind feature

What this PR does / why we need it:

We want to be able to run KubeRay RayJob workloads in MultiKueue.
RayCluster support will come in a separate PR.

Which issue(s) this PR fixes:

Relates to #3822

Special notes for your reviewer:

Does this PR introduce a user-facing change?

NONE

@k8s-ci-robot k8s-ci-robot added release-note-none Denotes a PR that doesn't merit a release note. do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. kind/feature Categorizes issue or PR as related to a new feature. labels Dec 19, 2024
@k8s-ci-robot
Contributor

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Dec 19, 2024
@mszadkow
Contributor Author

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels Dec 19, 2024

netlify bot commented Dec 19, 2024

Deploy Preview for kubernetes-sigs-kueue canceled.

Name: kubernetes-sigs-kueue
🔨 Latest commit: 1801ddc
🔍 Latest deploy log: https://app.netlify.com/sites/kubernetes-sigs-kueue/deploys/677ce541aaa1670008f8ca02

Makefile-deps.mk (outdated review thread, resolved)
@@ -70,6 +70,7 @@ IMAGE_TAG ?= $(IMAGE_REPO):$(GIT_TAG)
JOBSET_VERSION = $(shell $(GO_CMD) list -m -f "{{.Version}}" sigs.k8s.io/jobset)
KUBEFLOW_VERSION = $(shell $(GO_CMD) list -m -f "{{.Version}}" github.com/kubeflow/training-operator)
KUBEFLOW_MPI_VERSION = $(shell $(GO_CMD) list -m -f "{{.Version}}" github.com/kubeflow/mpi-operator)
KUBERAY_VERSION = $(shell $(GO_CMD) list -m -f "{{.Version}}" github.com/ray-project/kuberay/ray-operator)
Contributor Author

So far this is the version without managedBy support.
We don't have a ray-operator image that supports managedBy; that would have to be a custom one...
latest -> 1.2.2, which is the last release

export KUBERAY_MANIFEST="${ROOT_DIR}/dep-crds/ray-operator/default/"
export KUBERAY_IMAGE=bitnami/kuberay-operator:${KUBERAY_VERSION/#v}
export KUBERAY_RAY_IMAGE=rayproject/ray:2.9.0
export KUBERAY_RAY_IMAGE_ARM=rayproject/ray:2.9.0-aarch64
Contributor Author

This one is for those of us working on macOS; it's vital for development, not so much for prod. I think it should stay.

hack/e2e-common.sh (outdated review thread, resolved)
@mszadkow mszadkow force-pushed the feature/kuberay-multikueue-adapter branch from d27f12c to ad4df0d Compare December 19, 2024 11:03
test/e2e/multikueue/e2e_test.go (outdated review thread, resolved)
test/integration/multikueue/multikueue_test.go (outdated review thread, resolved)
@mszadkow mszadkow force-pushed the feature/kuberay-multikueue-adapter branch from ad4df0d to b13ce9a Compare December 30, 2024 12:36
@k8s-ci-robot k8s-ci-robot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels Dec 30, 2024
@mszadkow mszadkow changed the title [Feature] Kuberay MultiKueue adapter [Feature] Kuberay RayJob MultiKueue adapter Dec 30, 2024
@mszadkow
Contributor Author

/retest

@mszadkow mszadkow force-pushed the feature/kuberay-multikueue-adapter branch 2 times, most recently from d3974b4 to 3c3a0c7 Compare December 30, 2024 13:38
@mszadkow mszadkow force-pushed the feature/kuberay-multikueue-adapter branch from 3c3a0c7 to 01f5354 Compare December 30, 2024 14:46
@mszadkow mszadkow marked this pull request as ready for review December 30, 2024 14:47
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Dec 30, 2024
@mszadkow mszadkow force-pushed the feature/kuberay-multikueue-adapter branch 13 times, most recently from 5c05b98 to 4e83f8b Compare January 2, 2025 12:06
@mszadkow mszadkow force-pushed the feature/kuberay-multikueue-adapter branch 3 times, most recently from 906cc02 to 6f49e56 Compare January 7, 2025 08:07
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: mszadkow
Once this PR has been reviewed and has the lgtm label, please assign tenzen-y for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@mszadkow mszadkow force-pushed the feature/kuberay-multikueue-adapter branch from 6f49e56 to 1801ddc Compare January 7, 2025 08:26
@mszadkow
Contributor Author

mszadkow commented Jan 7, 2025

Although the PR for the Kuberay RayJob was ready for review before New Year, I couldn't finish working on it due to a peculiar issue.

Only the CI integration tests failed (the problem was not spotted locally), and they failed before running any test.

The integration tests failed on "Running Suite: Multikueue" and hung forever.

Further investigation and enabling the ginkgo output gave a little more detail.

Namely, envtest failed on setup with a couple of different yet related errors:


unable to install CRDs onto control plane: unable to create CRD instances: unable to create CRD "pytorchjobs.kubeflow.org": Post "https://127.0.0.1:44817/apis/apiextensions.k8s.io/v1/customresourcedefinitions": http2: client connection lost

unable to install CRDs onto control plane: unable to create CRD instances: unable to create CRD "rayjobs.ray.io": Post "https://127.0.0.1:33909/apis/apiextensions.k8s.io/v1/customresourcedefinitions": http2: client connection lost

unable to install CRDs onto control plane: unable to create CRD instances: unable to create CRD "rayjobs.ray.io": Post "https://127.0.0.1:37813/apis/apiextensions.k8s.io/v1/customresourcedefinitions": http2: client connection lost

unable to install CRDs onto control plane: unable to create CRD instances: unable to create CRD "workloads.kueue.x-k8s.io": Post "https://127.0.0.1:32965/apis/apiextensions.k8s.io/v1/customresourcedefinitions": http2: client connection lost

Those errors were distributed over all cluster routines (as the clusters run in separate goroutines).



During the investigation with Mykhailo we were able to boil the issue down to the simple addition of the ray-operator CRDs to the suite.
#3920 was our testing ground.


We also checked:

  • extended CRD installation times for testEnv

  • an extended WorkerLostTimeout option in SetupControllers

  • increased INTEGRATION_API_LOG_LEVEL verbosity; this gave some insight, but basically we saw that etcd lost the connection without learning the cause:
    E0103 11:50:24.628853 28403 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 1.969052ms, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"


Finally, what worked was to reduce INTEGRATION_NPROCS.

Why that helps is still not understood, but we assume the envtest apiserver may have been overwhelmed, and the test-infra possibly requires a resource update.

We don't have any proof that this will work, but some people have reportedly seen similar issues (not the exact ones):
kubernetes-sigs/controller-runtime#1246 (comment)

Has anyone encountered a similar issue in the past?

/cc @mimowo @PBundyra @tenzen-y

Contributor

@mimowo mimowo left a comment


Given that you had to reduce INTEGRATION_NPROCS I suspect the cluster running the integration tests is resource constrained. However, I'm wondering if INTEGRATION_API_LOG_LEVEL=1 or --output-interceptor-mode=none could be making a difference here?

 func (j *RayJob) IsSuspended() bool {
 	return j.Spec.Suspend
 }

 func (j *RayJob) IsActive() bool {
-	return j.Status.JobDeploymentStatus != rayv1.JobDeploymentStatusSuspended
+	return (j.Status.JobDeploymentStatus != rayv1.JobDeploymentStatusSuspended) && (j.Status.JobDeploymentStatus != rayv1.JobDeploymentStatusNew)
 }
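The new predicate can be sanity-checked in isolation. A minimal sketch using a local stand-in type for rayv1.JobDeploymentStatus (the status string values here are assumptions mirroring the kuberay API, not imports of it):

```go
package main

import "fmt"

// JobDeploymentStatus is a local stand-in for rayv1.JobDeploymentStatus;
// the string values below are assumed mirrors of the kuberay ray-operator API.
type JobDeploymentStatus string

const (
	JobDeploymentStatusNew       JobDeploymentStatus = ""
	JobDeploymentStatusSuspended JobDeploymentStatus = "Suspended"
	JobDeploymentStatusRunning   JobDeploymentStatus = "Running"
)

// isActive mirrors the proposed predicate: a RayJob counts as active
// only once it is neither suspended nor still in its initial New state.
func isActive(s JobDeploymentStatus) bool {
	return s != JobDeploymentStatusSuspended && s != JobDeploymentStatusNew
}

func main() {
	for _, s := range []JobDeploymentStatus{
		JobDeploymentStatusNew,
		JobDeploymentStatusSuspended,
		JobDeploymentStatusRunning,
	} {
		fmt.Printf("status=%q active=%v\n", string(s), isActive(s))
	}
}
```

With the old predicate a job still in the New state would already count as active; the extra clause delays activity until the deployment actually starts.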
Contributor

Why is this change needed? Maybe this is a fix for regular RayJobs too? In that case we should do it in a separate PR so that it can be cherry-picked.

Comment on lines +500 to +503
E2eKuberayTestImage := "rayproject/ray:2.9.0"
if runtime.GOOS == "darwin" {
E2eKuberayTestImage = "rayproject/ray:2.9.0-aarch64"
}
Contributor

Can we read the image from the KUBERAY_RAY_IMAGE environment variable instead?
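A possible shape for that suggestion; a hypothetical helper (not the actual PR code) that prefers the KUBERAY_RAY_IMAGE value exported by hack/e2e-common.sh and only falls back to the currently hard-coded images:

```go
package main

import (
	"fmt"
	"os"
	"runtime"
)

// kuberayRayTestImage returns the Ray image for e2e tests. It prefers the
// value passed in from the environment (envImage) and falls back to the
// images currently hard-coded in the test, keyed on the OS.
func kuberayRayTestImage(envImage, goos string) string {
	if envImage != "" {
		return envImage
	}
	if goos == "darwin" {
		return "rayproject/ray:2.9.0-aarch64"
	}
	return "rayproject/ray:2.9.0"
}

func main() {
	// In the test this would replace the hard-coded assignment.
	fmt.Println(kuberayRayTestImage(os.Getenv("KUBERAY_RAY_IMAGE"), runtime.GOOS))
}
```

Taking the environment value and OS as parameters keeps the fallback logic trivially testable without mutating process state.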

}}
}

func (j *JobWrapper) RayJobSpecsDefault() *JobWrapper {
Contributor

Why introduce it? Couldn't the defaults already be set by MakeJob, as before?

Adding this to most (or all) tests makes the diff unnecessarily large.
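One way to avoid the extra call, sketched below as a hypothetical version of the wrapper pattern (field names and default values here are illustrative, not the real testingrayjob package): bake the defaults into MakeJob and let individual tests override only what they need.

```go
package main

import "fmt"

// JobWrapper is an illustrative stand-in for the RayJob test wrapper.
type JobWrapper struct {
	Name       string
	Queue      string
	Entrypoint string
	Suspended  bool
}

// MakeJob applies sensible defaults up front, so callers don't need a
// separate RayJobSpecsDefault() step after construction.
func MakeJob(name string) *JobWrapper {
	return &JobWrapper{
		Name:       name,
		Entrypoint: "sleep 100", // illustrative default
		Suspended:  true,        // jobs start suspended under Kueue
	}
}

// WithQueue overrides the target local queue for a single test.
func (j *JobWrapper) WithQueue(q string) *JobWrapper {
	j.Queue = q
	return j
}

func main() {
	j := MakeJob("rayjob").WithQueue("user-queue")
	fmt.Printf("%s queue=%s entrypoint=%q suspended=%v\n",
		j.Name, j.Queue, j.Entrypoint, j.Suspended)
}
```

With defaults applied in the constructor, existing tests keep their one-line MakeJob calls and the diff stays small.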

@mimowo
Contributor

mimowo commented Jan 7, 2025

To check if the tests are stable:
/test pull-kueue-test-integration-main
/test pull-kueue-test-multikueue-e2e-main

Labels
cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. kind/feature Categorizes issue or PR as related to a new feature. ok-to-test Indicates a non-member PR verified by an org member that is safe to test. release-note-none Denotes a PR that doesn't merit a release note. size/XL Denotes a PR that changes 500-999 lines, ignoring generated files.

4 participants