I just fought with DiskSpaceCollector because it didn't report disk usage for a fuse.glusterfs-mounted directory.
The first problem (trivial) was that the default config lists "gluster" instead of "fuse.gluster". No problem, corrected.
But even after that correction, the metrics were still silently dropped.
I found that the if at diskspace.py:153 assumes the device starts with a '/'. Unfortunately, GlusterFS mountpoints usually use the device format
srv1[,srv2]:volume_name
It is possible to use
/srv1[,srv2]:volume_name
(with a leading '/'), but I think that is quite uncommon.
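For context, this is how such a mount shows up in /proc/mounts: the first (device) field has no leading '/'. A minimal sketch, with a made-up host "srv1" and volume "gv0":

```python
# Illustrative /proc/mounts line for a GlusterFS FUSE mount
# (host "srv1" and volume "gv0" are made-up examples).
line = "srv1:gv0 /mnt/gluster fuse.glusterfs rw,relatime,user_id=0,group_id=0 0 0"

# /proc/mounts fields: device, mountpoint, fstype, options, dump, pass
device, mountpoint, fstype = line.split()[:3]
print(device, fstype)  # device has no leading '/', fstype is fuse.glusterfs
```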
I made that if a no-op (by adding "(1==1) or" as the first condition), but it would be better to handle the GlusterFS case and/or emit a meaningful message explaining why the mountpoint is being discarded.
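Instead of making the check a no-op, it could be relaxed along these lines. This is only a sketch, not the actual Diamond code; the function name and the exact pattern for remote-volume devices are illustrative:

```python
import re

# GlusterFS-style remote volume devices: "srv1:volname" or
# "srv1,srv2:volname", optionally with a '/' after the colon.
GLUSTER_DEVICE_RE = re.compile(r'^[\w.\-]+(,[\w.\-]+)*:/?[\w.\-]+$')


def is_reportable_device(device):
    """Return True if this mount device should produce diskspace metrics.

    The original check only accepted devices starting with '/', which
    silently drops GlusterFS mounts of the form srv1[,srv2]:volume_name.
    """
    if device.startswith('/'):
        return True  # local block devices, e.g. /dev/sda1
    if GLUSTER_DEVICE_RE.match(device):
        return True  # GlusterFS remote volumes
    return False  # pseudo-filesystems such as tmpfs, proc, sysfs
```

A variant that logs the reason for discarding a mountpoint (instead of returning silently) would address the second half of the suggestion.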
Hope it helps.
Is this still occurring with master? Reading the collector code, it looks like it should work. If it doesn't, can you give sanitized output of /proc/mounts?