I profiled the `powerletrics -i 1000` command with py-spy: once with RAPL and once where I segmented the file open calls for reading memory and cmdline.
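For anyone who wants to reproduce the capture, a py-spy invocation can look roughly like the one below; the flags and output path are illustrative, not necessarily what was used here. Since powerletrics attaches eBPF programs it runs as root, so py-spy has to as well.

```shell
# Illustrative invocation, not the exact one used for the profiles below.
# --subprocesses follows child processes; --idle also samples parked threads,
# which helps separate real CPU work from threads merely waiting on eBPF polling.
sudo py-spy record --subprocesses --idle -o powerletrics.svg -- powerletrics -i 1000
```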
### Without RAPL and with segmented file open
What stands out is that the threading either has a large overhead or all of the eBPF work is attributed to it.

`py-spy` definitely has an issue with how it applies percentages: none of them make any sense, since they do not add up to 100%. I am therefore only looking at TotalTime.

In user space, `get_data` uses the most time. Surprisingly, the file open calls are not that costly.

Also interesting is that `get_table` takes a lot of time. Why, for a simple lookup / memory copy? How big are these tables? Can we maybe clear them more often? (A sketch of that idea follows below.)

The sorting also does not seem to be an issue.
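On the `get_table` question: assuming powerletrics talks to the kernel through BCC (an assumption on my part), `get_table` itself is cheap and just returns a wrapper around the map file descriptor; the expensive part is iterating the entries, which costs a get-next-key plus a lookup syscall per entry. Clearing the table after every read keeps it small, so each walk stays cheap. A minimal sketch of that read-and-clear pattern, with illustrative table and probe names:

```python
import time
from bcc import BPF

# Minimal stand-in program; table and probe names are illustrative, not powerletrics' own.
b = BPF(text=r"""
BPF_HASH(counts, u32, u64);
TRACEPOINT_PROBE(sched, sched_switch) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    counts.increment(pid);
    return 0;
}
""")

while True:
    time.sleep(1)
    table = b.get_table("counts")  # cheap: just wraps the map file descriptor
    # items() is the costly part: one get-next-key + lookup syscall pair per entry
    snapshot = {k.value: v.value for k, v in table.items()}
    table.clear()  # drop all entries so the next walk only sees fresh data
    print(f"{len(snapshot)} PIDs scheduled in the last interval")
```

On newer kernels, BCC also exposes batched map operations such as `items_lookup_and_delete_batch()`, which read and clear the whole table in far fewer syscalls.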
So all in all: Looks fine, apart from the wild eBPF functions and threading overhead.
### With RAPL data
The `rapl_metrics_provider_thread` costs a lot of CPU time. Something is wrong here, I believe ...