Cluster operator UI broken on OpenShift 4.2 - "Invariant Violation" #214
@nktl How are you accessing the StorageOS UI?
@nktl I think that may be the issue then. The StorageOS UI is available on any node running StorageOS on port 5705. You can access it through the StorageOS service, using port-forwarding, or possibly via the OpenShift UI. Alternatively, you can browse to port 5705 on any node directly.
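The port-forwarding route could look like the sketch below. This assumes the service is named `storageos` and lives in the `storageos-namespace` namespace (taken from the URL in the issue report); adjust both to match your installation.

```shell
# Forward local port 5705 to the StorageOS service (assumed name: "storageos",
# assumed namespace: "storageos-namespace" -- substitute your own values).
oc -n storageos-namespace port-forward svc/storageos 5705:5705

# Then open http://localhost:5705 in a browser to reach the StorageOS UI.
```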
Thanks. The funny thing is that the link/view in question actually works while the cluster is in the provisioning state, then breaks after the deployment is completed - so it looks like some value from the completed deployment is bugging it out, and it might be worth investigating still. You are correct, however, that I misnamed this thing 'UI'; it seems to be just a cluster status page. On the actual UI front, it looks to be operational - but I am not exactly sure what the creds are. The ones from the documentation (storageos/storageos) do not seem to work, and neither do the API creds set during the provisioning process.
That's very useful to know - thank you! The credentials for the UI are what you created in step 5 if you followed the OpenShift 4.1 docs. If not, then they are the apiUsername and apiPassword defined in the secret that you created as part of the installation process.
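If you are unsure what those values ended up being, you can read them back from the secret. This is a minimal sketch: the secret name `storageos-api` and the namespace are assumptions, so substitute whatever names you used during installation.

```shell
# Decode the API credentials from the install secret
# (assumed secret name: "storageos-api", assumed namespace: "storageos-namespace").
oc -n storageos-namespace get secret storageos-api \
  -o jsonpath='{.data.apiUsername}' | base64 -d; echo

oc -n storageos-namespace get secret storageos-api \
  -o jsonpath='{.data.apiPassword}' | base64 -d; echo
```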
Hi @nktl, thanks for raising this. We're going to try to reproduce it - it looks like an OpenShift error, but we might be passing it data in the wrong format. We made some changes to the status format recently (to be more compliant with OpenShift!)
We are testing a brand new, vanilla StorageOS deployment on OpenShift 4.2. When browsing the cluster operator status URL:
https://cluster-console.app.local/k8s/ns/storageos-namespace/clusterserviceversions/storageosoperator.v1.5.1/storageos.com~v1~StorageOSCluster/example-storageos
We are getting the following error:
The cluster seems to work fine, though, and we can provision volumes - so it looks like a UI glitch (although it makes it rather hard to manage / monitor state...)