Message ID: 1617160475-1550-1-git-send-email-cang@codeaurora.org
Series: Introduce hba performance monitoring sysfs nodes
On 3/30/21 8:14 PM, Can Guo wrote:
> It works like:
> /sys/bus/platform/drivers/ufshcd/*/monitor # echo 4096 > monitor_chunk_size
> /sys/bus/platform/drivers/ufshcd/*/monitor # echo 1 > monitor_enable
> /sys/bus/platform/drivers/ufshcd/*/monitor # grep ^ /dev/null *
> monitor_chunk_size:4096
> monitor_enable:1
> read_nr_requests:17
> read_req_latency_avg:169
> read_req_latency_max:594
> read_req_latency_min:66
> read_req_latency_sum:2887
> read_total_busy:2639
> read_total_sectors:136
> write_nr_requests:116
> write_req_latency_avg:440
> write_req_latency_max:4921
> write_req_latency_min:23
> write_req_latency_sum:51052
> write_total_busy:19584
> write_total_sectors:928

Are any of these attributes UFS-specific? If not, isn't this
functionality that should be added to the block layer instead of to the
UFS driver?

Thanks,

Bart.
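For reference, the block layer already exposes some generic per-device counters
through /sys/block/<dev>/stat (documented in Documentation/block/stat.rst); the
per-request latency min/max/avg fields above have no direct counterpart there.
A minimal way to inspect them, with sdX as a placeholder device name:

/sys/block/sdX # cat stat
# space-separated fields: read I/Os, read merges, sectors read, time reading (ms),
# write I/Os, write merges, sectors written, time writing (ms), I/Os in flight,
# time the device was busy (ms), weighted time in queue (ms), plus discard/flush
# fields on newer kernels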
On 2021-03-31 11:34, Bart Van Assche wrote:
> On 3/30/21 8:14 PM, Can Guo wrote:
>> It works like:
>> /sys/bus/platform/drivers/ufshcd/*/monitor # echo 4096 > monitor_chunk_size
>> /sys/bus/platform/drivers/ufshcd/*/monitor # echo 1 > monitor_enable
>> /sys/bus/platform/drivers/ufshcd/*/monitor # grep ^ /dev/null *
>> monitor_chunk_size:4096
>> monitor_enable:1
>> read_nr_requests:17
>> read_req_latency_avg:169
>> read_req_latency_max:594
>> read_req_latency_min:66
>> read_req_latency_sum:2887
>> read_total_busy:2639
>> read_total_sectors:136
>> write_nr_requests:116
>> write_req_latency_avg:440
>> write_req_latency_max:4921
>> write_req_latency_min:23
>> write_req_latency_sum:51052
>> write_total_busy:19584
>> write_total_sectors:928
>
> Are any of these attributes UFS-specific? If not, isn't this
> functionality that should be added to the block layer instead of to the
> UFS driver?
>

Hi Bart,

I didn't think of that before because we already have the powerful
"blktrace" tool to collect the overall statistics of each layer.

I added this because I find it really comes in handy when
debugging/analyzing/profiling UFS driver/HW performance. And there will
be UFS-specific nodes added later to monitor statistics such as UFS
scaling, gating, doorbell, write booster, HPB, etc.

Thanks.

Can Guo.

> Thanks,
>
> Bart.
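As a rough illustration of the profiling workflow these nodes enable, a sampling
run might look like the following sketch. The 30-second window is arbitrary, the
glob assumes a single UFS host, and the semantics of monitor_chunk_size and of
writing 0 to monitor_enable are assumptions based on the attribute names rather
than the patch text:

#!/bin/sh
# Hypothetical profiling helper built around the monitor nodes shown above.
MON=$(echo /sys/bus/platform/drivers/ufshcd/*/monitor)  # assumes a single UFS host

echo 4096 > "$MON/monitor_chunk_size"  # presumably: only track 4096-byte requests
echo 1 > "$MON/monitor_enable"         # start monitoring
sleep 30                               # run the workload of interest here
grep ^ /dev/null "$MON"/*              # dump every counter, prefixed with its name
echo 0 > "$MON/monitor_enable"         # presumably stops monitoring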
> On 2021-03-31 11:34, Bart Van Assche wrote:
> > On 3/30/21 8:14 PM, Can Guo wrote:
> >> It works like:
> >> /sys/bus/platform/drivers/ufshcd/*/monitor # echo 4096 > monitor_chunk_size
> >> /sys/bus/platform/drivers/ufshcd/*/monitor # echo 1 > monitor_enable
> >> /sys/bus/platform/drivers/ufshcd/*/monitor # grep ^ /dev/null *
> >> monitor_chunk_size:4096
> >> monitor_enable:1
> >> read_nr_requests:17
> >> read_req_latency_avg:169
> >> read_req_latency_max:594
> >> read_req_latency_min:66
> >> read_req_latency_sum:2887
> >> read_total_busy:2639
> >> read_total_sectors:136
> >> write_nr_requests:116
> >> write_req_latency_avg:440
> >> write_req_latency_max:4921
> >> write_req_latency_min:23
> >> write_req_latency_sum:51052
> >> write_total_busy:19584
> >> write_total_sectors:928
> >
> > Are any of these attributes UFS-specific? If not, isn't this
> > functionality that should be added to the block layer instead of to the
> > UFS driver?
> >
>
> Hi Bart,
>
> I didn't think of that before because we already have the powerful
> "blktrace" tool to collect the overall statistics of each layer.
>
> I added this because I find it really comes in handy when
> debugging/analyzing/profiling UFS driver/HW performance. And there will
> be UFS-specific nodes added later to monitor statistics such as UFS
> scaling, gating, doorbell, write booster, HPB, etc.

We are using a designated analysis tool (web-based, with a lot of fancy
graphs, etc.) that relies on ftrace - the upiu tracer and others.
Once the raw data is there, the options/insights are endless.

Thanks,
Avri

>
> Thanks.
>
> Can Guo.
>
> > Thanks,
> >
> > Bart.
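For reference, the upiu tracer mentioned above is exposed through tracefs like
any other trace event; a minimal sketch of collecting the raw data, assuming the
ufs trace events are compiled into the kernel (exact event names can vary
between kernel versions):

/sys/kernel/tracing # echo 1 > events/ufs/ufshcd_upiu/enable      # UPIU tracer
/sys/kernel/tracing # echo 1 > events/ufs/ufshcd_command/enable   # per-command send/complete events
/sys/kernel/tracing # echo 1 > tracing_on
/sys/kernel/tracing # cat trace_pipe                               # raw data for offline parsing/graphing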
On 2021-03-31 14:35, Avri Altman wrote:
>> On 2021-03-31 11:34, Bart Van Assche wrote:
>> > On 3/30/21 8:14 PM, Can Guo wrote:
>> >> It works like:
>> >> /sys/bus/platform/drivers/ufshcd/*/monitor # echo 4096 > monitor_chunk_size
>> >> /sys/bus/platform/drivers/ufshcd/*/monitor # echo 1 > monitor_enable
>> >> /sys/bus/platform/drivers/ufshcd/*/monitor # grep ^ /dev/null *
>> >> monitor_chunk_size:4096
>> >> monitor_enable:1
>> >> read_nr_requests:17
>> >> read_req_latency_avg:169
>> >> read_req_latency_max:594
>> >> read_req_latency_min:66
>> >> read_req_latency_sum:2887
>> >> read_total_busy:2639
>> >> read_total_sectors:136
>> >> write_nr_requests:116
>> >> write_req_latency_avg:440
>> >> write_req_latency_max:4921
>> >> write_req_latency_min:23
>> >> write_req_latency_sum:51052
>> >> write_total_busy:19584
>> >> write_total_sectors:928
>> >
>> > Are any of these attributes UFS-specific? If not, isn't this
>> > functionality that should be added to the block layer instead of to the
>> > UFS driver?
>> >
>>
>> Hi Bart,
>>
>> I didn't think of that before because we already have the powerful
>> "blktrace" tool to collect the overall statistics of each layer.
>>
>> I added this because I find it really comes in handy when
>> debugging/analyzing/profiling UFS driver/HW performance. And there will
>> be UFS-specific nodes added later to monitor statistics such as UFS
>> scaling, gating, doorbell, write booster, HPB, etc.
> We are using a designated analysis tool (web-based, with a lot of fancy
> graphs, etc.) that relies on ftrace - the upiu tracer and others.
> Once the raw data is there, the options/insights are endless.
>

Hi Avri,

Yeah, one can dig out a lot of info from ftrace/systrace raw data.
But, most importantly, ftrace/systrace has the following disadvantages:

[1] Enabling UFS/SCSI ftrace itself can impact UFS performance (a lot),
    as per our profiling.
[2] One needs a parser tool (only if they have one) to get the wanted
    results.

So we usually use ftrace to analyze sequences, e.g., cmd-response,
suspend-resume, gating and scaling, but it is not quite suitable for
analyzing performance, see [1].

These nodes provide us a swift method to look into the statistics at
runtime, see [2].

Please let me know if you have any concerns w.r.t. the change.

Thanks,

Can Guo.

> Thanks,
> Avri
>>
>> Thanks.
>>
>> Can Guo.
>>
>> > Thanks,
>> >
>> > Bart.
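To illustrate the "swift method" point, once monitoring is enabled the counters
can be watched in place with nothing more than the shell, for example (the
1-second interval is arbitrary):

/sys/bus/platform/drivers/ufshcd/*/monitor # while true; do grep ^ /dev/null *; echo; sleep 1; done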
> On 2021-03-31 14:35, Avri Altman wrote:
> >> On 2021-03-31 11:34, Bart Van Assche wrote:
> >> > On 3/30/21 8:14 PM, Can Guo wrote:
> >> >> It works like:
> >> >> /sys/bus/platform/drivers/ufshcd/*/monitor # echo 4096 > monitor_chunk_size
> >> >> /sys/bus/platform/drivers/ufshcd/*/monitor # echo 1 > monitor_enable
> >> >> /sys/bus/platform/drivers/ufshcd/*/monitor # grep ^ /dev/null *
> >> >> monitor_chunk_size:4096
> >> >> monitor_enable:1
> >> >> read_nr_requests:17
> >> >> read_req_latency_avg:169
> >> >> read_req_latency_max:594
> >> >> read_req_latency_min:66
> >> >> read_req_latency_sum:2887
> >> >> read_total_busy:2639
> >> >> read_total_sectors:136
> >> >> write_nr_requests:116
> >> >> write_req_latency_avg:440
> >> >> write_req_latency_max:4921
> >> >> write_req_latency_min:23
> >> >> write_req_latency_sum:51052
> >> >> write_total_busy:19584
> >> >> write_total_sectors:928
> >> >
> >> > Are any of these attributes UFS-specific? If not, isn't this
> >> > functionality that should be added to the block layer instead of to the
> >> > UFS driver?
> >> >
> >>
> >> Hi Bart,
> >>
> >> I didn't think of that before because we already have the powerful
> >> "blktrace" tool to collect the overall statistics of each layer.
> >>
> >> I added this because I find it really comes in handy when
> >> debugging/analyzing/profiling UFS driver/HW performance. And there will
> >> be UFS-specific nodes added later to monitor statistics such as UFS
> >> scaling, gating, doorbell, write booster, HPB, etc.
> > We are using a designated analysis tool (web-based, with a lot of fancy
> > graphs, etc.) that relies on ftrace - the upiu tracer and others.
> > Once the raw data is there, the options/insights are endless.
> >
>
> Hi Avri,
>
> Yeah, one can dig out a lot of info from ftrace/systrace raw data.
> But, most importantly, ftrace/systrace has the following disadvantages:
>
> [1] Enabling UFS/SCSI ftrace itself can impact UFS performance (a lot),
>     as per our profiling.
> [2] One needs a parser tool (only if they have one) to get the wanted
>     results.
>
> So we usually use ftrace to analyze sequences, e.g., cmd-response,
> suspend-resume, gating and scaling, but it is not quite suitable for
> analyzing performance, see [1].
>
> These nodes provide us a swift method to look into the statistics at
> runtime, see [2].
>
> Please let me know if you have any concerns w.r.t. the change.

No - not really. It's just that this sort of thing tends to grow...

Thanks,
Avri

>
> Thanks,
>
> Can Guo.
>
> > Thanks,
> > Avri
> >>
> >> Thanks.
> >>
> >> Can Guo.
> >>
> >> > Thanks,
> >> >
> >> > Bart.