Message ID | 20181220173026.3857-2-jcrouse@codeaurora.org (mailing list archive)
---|---
State | Not Applicable, archived
Series | arm64: dts: sdm845: Add sdm845 GPU interconnect
Hi,

On Thu, Dec 20, 2018 at 9:30 AM Jordan Crouse <jcrouse@codeaurora.org> wrote:
>
> Try to get the interconnect path for the GPU and vote for the maximum
> bandwidth to support all frequencies. This is needed for performance.
> Later we will want to scale the bandwidth based on the frequency to
> also optimize for power but that will require some device tree
> infrastructure that does not yet exist.
>
> v5: Remove hardcoded interconnect name and just use the default
> v4: Don't use a port string at all to skip the need for names in the DT
> v3: Use macros and change port string per Georgi Djakov
>
> Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
> ---
>
>  drivers/gpu/drm/msm/Kconfig             |  1 +
>  drivers/gpu/drm/msm/adreno/a6xx_gmu.c   | 20 ++++++++++++++++++++
>  drivers/gpu/drm/msm/adreno/adreno_gpu.c |  9 +++++++++
>  drivers/gpu/drm/msm/msm_gpu.h           |  3 +++
>  4 files changed, 33 insertions(+)

There is very little difference between this and the previous version [1].
Maybe you could have kept Rob Clark's Acked-by? The only change was:

- gpu->icc_path = of_icc_get(dev, "gfx-mem");
+ gpu->icc_path = of_icc_get(dev, NULL);

Also: I assume that this is still intended to go through Georgi's tree?

[1] https://lkml.kernel.org/r/20181207170656.13208-1-jcrouse@codeaurora.org

-Doug
Hi,

On Thu, Dec 20, 2018 at 9:30 AM Jordan Crouse <jcrouse@codeaurora.org> wrote:
>
> Try to get the interconnect path for the GPU and vote for the maximum
> bandwidth to support all frequencies. This is needed for performance.
> Later we will want to scale the bandwidth based on the frequency to
> also optimize for power but that will require some device tree
> infrastructure that does not yet exist.
>
> v5: Remove hardcoded interconnect name and just use the default

nit: ${SUBJECT} says v3, but this is v5.

I'll put in my usual plug for considering "patman" to help post
patches. Even though it lives in the u-boot git repo it's still a gem
for kernel work.
<http://git.denx.de/?p=u-boot.git;a=blob;f=tools/patman/README>

> @@ -85,6 +89,12 @@ static void __a6xx_gmu_set_freq(struct a6xx_gmu *gmu, int index)
>                 dev_err(gmu->dev, "GMU set GPU frequency error: %d\n", ret);
>
>         gmu->freq = gmu->gpu_freqs[index];
> +
> +       /*
> +        * Eventually we will want to scale the path vote with the frequency but
> +        * for now leave it at max so that the performance is nominal.
> +        */
> +       icc_set(gpu->icc_path, 0, MBps_to_icc(7216));

You'll need to change icc_set() here to icc_set_bw() to match v13, AKA:

- https://patchwork.kernel.org/patch/10766335/
- https://lkml.kernel.org/r/20190116161103.6937-2-georgi.djakov@linaro.org

> @@ -695,6 +707,9 @@ int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu)
>         if (ret)
>                 goto out;
>
> +       /* Set the bus quota to a reasonable value for boot */
> +       icc_set(gpu->icc_path, 0, MBps_to_icc(3072));

This will also need to change to icc_set_bw().

> @@ -781,6 +798,9 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
>         /* Tell RPMh to power off the GPU */
>         a6xx_rpmh_stop(gmu);
>
> +       /* Remove the bus vote */
> +       icc_set(gpu->icc_path, 0, 0);

This will also need to change to icc_set_bw().

I have the same questions for this series that I had in response to
the email ("[v5 2/3] drm/msm/dpu: Integrate interconnect API in MDSS"):
<https://lkml.kernel.org/r/CAD=FV=XUeMTGH+CDwGs3PfK4igdQrCbwucw7_2ViBc4i7grvxg@mail.gmail.com>

Copy / pasting here (with minor name changes) so folks don't have to
follow links / search email.

==

I'm curious what the plan is for landing this series. Rob / Georgi:
do you have any preference? Options I'd imagine:

A) Wait until interconnect lands (in 5.1?) and land this through
msm-next in the version after (5.2?).

B) Georgi provides an immutable branch for interconnect when his lands
(assuming he's landing via pull request) and that gets pulled into
the relevant drm tree.

C) Rob Acks this series and indicates that it should go in through
Georgi's tree (probably only works if Georgi plans to send a pull
request). If we're going this route then (IIUC) we'd want to land
this in Georgi's tree sooner rather than later so it can get some bake
time? NOTE: as per my prior reply, I believe Rob has already Acked
this patch.

Does anyone have a preference? It'd be nice if whoever is planning to
land this could indicate whether they'd prefer Jordan send a new
version to handle the API change or if the relevant maintainer can
just do the fixup when the patch lands.

Thanks!

-Doug
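[The rename Doug is pointing at is mechanical: v13 of the interconnect
series renamed icc_set() to icc_set_bw(), keeping the same arguments
(path, average bandwidth, peak bandwidth). A minimal sketch of the three
updated call sites, using the bandwidth values from Jordan's patch:]

/*
 * Sketch of the v13 API fixup: icc_set() becomes icc_set_bw(),
 * with unchanged (path, avg_bw, peak_bw) semantics.
 */

/* __a6xx_gmu_set_freq(): vote the maximum until frequency scaling exists */
icc_set_bw(gpu->icc_path, 0, MBps_to_icc(7216));

/* a6xx_gmu_resume(): a reasonable bus quota for boot */
icc_set_bw(gpu->icc_path, 0, MBps_to_icc(3072));

/* a6xx_gmu_stop(): remove the bus vote entirely */
icc_set_bw(gpu->icc_path, 0, 0);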
On Fri, Jan 18, 2019 at 1:06 PM Doug Anderson <dianders@chromium.org> wrote:
>
> Hi,
>
> On Thu, Dec 20, 2018 at 9:30 AM Jordan Crouse <jcrouse@codeaurora.org> wrote:
> >
> > Try to get the interconnect path for the GPU and vote for the maximum
> > bandwidth to support all frequencies. This is needed for performance.
> > Later we will want to scale the bandwidth based on the frequency to
> > also optimize for power but that will require some device tree
> > infrastructure that does not yet exist.
> >
> > v5: Remove hardcoded interconnect name and just use the default
>
> nit: ${SUBJECT} says v3, but this is v5.
>
> I'll put in my usual plug for considering "patman" to help post
> patches. Even though it lives in the u-boot git repo it's still a gem
> for kernel work.
> <http://git.denx.de/?p=u-boot.git;a=blob;f=tools/patman/README>
>
> > @@ -85,6 +89,12 @@ static void __a6xx_gmu_set_freq(struct a6xx_gmu *gmu, int index)
> >                 dev_err(gmu->dev, "GMU set GPU frequency error: %d\n", ret);
> >
> >         gmu->freq = gmu->gpu_freqs[index];
> > +
> > +       /*
> > +        * Eventually we will want to scale the path vote with the frequency but
> > +        * for now leave it at max so that the performance is nominal.
> > +        */
> > +       icc_set(gpu->icc_path, 0, MBps_to_icc(7216));
>
> You'll need to change icc_set() here to icc_set_bw() to match v13, AKA:
>
> - https://patchwork.kernel.org/patch/10766335/
> - https://lkml.kernel.org/r/20190116161103.6937-2-georgi.djakov@linaro.org
>
> > @@ -695,6 +707,9 @@ int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu)
> >         if (ret)
> >                 goto out;
> >
> > +       /* Set the bus quota to a reasonable value for boot */
> > +       icc_set(gpu->icc_path, 0, MBps_to_icc(3072));
>
> This will also need to change to icc_set_bw().
>
> > @@ -781,6 +798,9 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
> >         /* Tell RPMh to power off the GPU */
> >         a6xx_rpmh_stop(gmu);
> >
> > +       /* Remove the bus vote */
> > +       icc_set(gpu->icc_path, 0, 0);
>
> This will also need to change to icc_set_bw().
>
> I have the same questions for this series that I had in response to
> the email ("[v5 2/3] drm/msm/dpu: Integrate interconnect API in MDSS"):
> <https://lkml.kernel.org/r/CAD=FV=XUeMTGH+CDwGs3PfK4igdQrCbwucw7_2ViBc4i7grvxg@mail.gmail.com>
>
> Copy / pasting here (with minor name changes) so folks don't have to
> follow links / search email.
>
> ==
>
> I'm curious what the plan is for landing this series. Rob / Georgi:
> do you have any preference? Options I'd imagine:
>
> A) Wait until interconnect lands (in 5.1?) and land this through
> msm-next in the version after (5.2?).
>
> B) Georgi provides an immutable branch for interconnect when his lands
> (assuming he's landing via pull request) and that gets pulled into
> the relevant drm tree.
>
> C) Rob Acks this series and indicates that it should go in through
> Georgi's tree (probably only works if Georgi plans to send a pull
> request). If we're going this route then (IIUC) we'd want to land
> this in Georgi's tree sooner rather than later so it can get some bake
> time? NOTE: as per my prior reply, I believe Rob has already Acked
> this patch.

I'm ok to ack and have it land via Georgi's tree, if Georgi wants to
do that. Or otherwise, I could maybe coordinate w/ airlied to send a
2nd late msm-next pr including the gpu and display interconnect
patches.

BR,
-R

> Does anyone have a preference? It'd be nice if whoever is planning to
> land this could indicate whether they'd prefer Jordan send a new
> version to handle the API change or if the relevant maintainer can
> just do the fixup when the patch lands.
>
> Thanks!
>
> -Doug
Hi,

On Fri, Jan 18, 2019 at 10:06 AM Doug Anderson <dianders@chromium.org> wrote:
> It'd be nice if whoever is planning to
> land this could indicate whether they'd prefer Jordan send a new
> version to handle the API change or if the relevant maintainer can
> just do the fixup when the patch lands.

Breadcrumbs: Jordan went ahead and posted a new version ("[PATCH v6]
drm/msm/a6xx: Add support for an interconnect path"):

https://patchwork.kernel.org/patch/10771501/

-Doug
Hi Rob,

On 1/18/19 21:16, Rob Clark wrote:
> On Fri, Jan 18, 2019 at 1:06 PM Doug Anderson <dianders@chromium.org> wrote:
>>
>> Hi,
>>
>> On Thu, Dec 20, 2018 at 9:30 AM Jordan Crouse <jcrouse@codeaurora.org> wrote:
>>>
>>> Try to get the interconnect path for the GPU and vote for the maximum
>>> bandwidth to support all frequencies. This is needed for performance.
>>> Later we will want to scale the bandwidth based on the frequency to
>>> also optimize for power but that will require some device tree
>>> infrastructure that does not yet exist.
>>>
>>> v5: Remove hardcoded interconnect name and just use the default
>>
>> nit: ${SUBJECT} says v3, but this is v5.
>>
>> I'll put in my usual plug for considering "patman" to help post
>> patches. Even though it lives in the u-boot git repo it's still a gem
>> for kernel work.
>> <http://git.denx.de/?p=u-boot.git;a=blob;f=tools/patman/README>
>>
>>> @@ -85,6 +89,12 @@ static void __a6xx_gmu_set_freq(struct a6xx_gmu *gmu, int index)
>>>                 dev_err(gmu->dev, "GMU set GPU frequency error: %d\n", ret);
>>>
>>>         gmu->freq = gmu->gpu_freqs[index];
>>> +
>>> +       /*
>>> +        * Eventually we will want to scale the path vote with the frequency but
>>> +        * for now leave it at max so that the performance is nominal.
>>> +        */
>>> +       icc_set(gpu->icc_path, 0, MBps_to_icc(7216));
>>
>> You'll need to change icc_set() here to icc_set_bw() to match v13, AKA:
>>
>> - https://patchwork.kernel.org/patch/10766335/
>> - https://lkml.kernel.org/r/20190116161103.6937-2-georgi.djakov@linaro.org
>>
>>> @@ -695,6 +707,9 @@ int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu)
>>>         if (ret)
>>>                 goto out;
>>>
>>> +       /* Set the bus quota to a reasonable value for boot */
>>> +       icc_set(gpu->icc_path, 0, MBps_to_icc(3072));
>>
>> This will also need to change to icc_set_bw().
>>
>>> @@ -781,6 +798,9 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
>>>         /* Tell RPMh to power off the GPU */
>>>         a6xx_rpmh_stop(gmu);
>>>
>>> +       /* Remove the bus vote */
>>> +       icc_set(gpu->icc_path, 0, 0);
>>
>> This will also need to change to icc_set_bw().
>>
>> I have the same questions for this series that I had in response to
>> the email ("[v5 2/3] drm/msm/dpu: Integrate interconnect API in MDSS"):
>> <https://lkml.kernel.org/r/CAD=FV=XUeMTGH+CDwGs3PfK4igdQrCbwucw7_2ViBc4i7grvxg@mail.gmail.com>
>>
>> Copy / pasting here (with minor name changes) so folks don't have to
>> follow links / search email.
>>
>> ==
>>
>> I'm curious what the plan is for landing this series. Rob / Georgi:
>> do you have any preference? Options I'd imagine:
>>
>> A) Wait until interconnect lands (in 5.1?) and land this through
>> msm-next in the version after (5.2?).
>>
>> B) Georgi provides an immutable branch for interconnect when his lands
>> (assuming he's landing via pull request) and that gets pulled into
>> the relevant drm tree.
>>
>> C) Rob Acks this series and indicates that it should go in through
>> Georgi's tree (probably only works if Georgi plans to send a pull
>> request). If we're going this route then (IIUC) we'd want to land
>> this in Georgi's tree sooner rather than later so it can get some bake
>> time? NOTE: as per my prior reply, I believe Rob has already Acked
>> this patch.
>
> I'm ok to ack and have it land via Georgi's tree, if Georgi wants to
> do that. Or otherwise, I could maybe coordinate w/ airlied to send a
> 2nd late msm-next pr including the gpu and display interconnect
> patches.

I'm fine either way. But it would be nice if both patches (this one and
the dt-bindings) go together. The v6 of this patch applies cleanly to my
tree, but the next one (2/3) with the dt-bindings doesn't.

Thanks,
Georgi
Hi,

On Mon, Jan 21, 2019 at 9:13 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>
> Hi Rob,
>
> On 1/18/19 21:16, Rob Clark wrote:
> > On Fri, Jan 18, 2019 at 1:06 PM Doug Anderson <dianders@chromium.org> wrote:
> >>
> >> Hi,
> >>
> >> On Thu, Dec 20, 2018 at 9:30 AM Jordan Crouse <jcrouse@codeaurora.org> wrote:
> >>>
> >>> Try to get the interconnect path for the GPU and vote for the maximum
> >>> bandwidth to support all frequencies. This is needed for performance.
> >>> Later we will want to scale the bandwidth based on the frequency to
> >>> also optimize for power but that will require some device tree
> >>> infrastructure that does not yet exist.
> >>>
> >>> v5: Remove hardcoded interconnect name and just use the default
> >>
> >> nit: ${SUBJECT} says v3, but this is v5.
> >>
> >> I'll put in my usual plug for considering "patman" to help post
> >> patches. Even though it lives in the u-boot git repo it's still a gem
> >> for kernel work.
> >> <http://git.denx.de/?p=u-boot.git;a=blob;f=tools/patman/README>
> >>
> >>> @@ -85,6 +89,12 @@ static void __a6xx_gmu_set_freq(struct a6xx_gmu *gmu, int index)
> >>>                 dev_err(gmu->dev, "GMU set GPU frequency error: %d\n", ret);
> >>>
> >>>         gmu->freq = gmu->gpu_freqs[index];
> >>> +
> >>> +       /*
> >>> +        * Eventually we will want to scale the path vote with the frequency but
> >>> +        * for now leave it at max so that the performance is nominal.
> >>> +        */
> >>> +       icc_set(gpu->icc_path, 0, MBps_to_icc(7216));
> >>
> >> You'll need to change icc_set() here to icc_set_bw() to match v13, AKA:
> >>
> >> - https://patchwork.kernel.org/patch/10766335/
> >> - https://lkml.kernel.org/r/20190116161103.6937-2-georgi.djakov@linaro.org
> >>
> >>> @@ -695,6 +707,9 @@ int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu)
> >>>         if (ret)
> >>>                 goto out;
> >>>
> >>> +       /* Set the bus quota to a reasonable value for boot */
> >>> +       icc_set(gpu->icc_path, 0, MBps_to_icc(3072));
> >>
> >> This will also need to change to icc_set_bw().
> >>
> >>> @@ -781,6 +798,9 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
> >>>         /* Tell RPMh to power off the GPU */
> >>>         a6xx_rpmh_stop(gmu);
> >>>
> >>> +       /* Remove the bus vote */
> >>> +       icc_set(gpu->icc_path, 0, 0);
> >>
> >> This will also need to change to icc_set_bw().
> >>
> >> I have the same questions for this series that I had in response to
> >> the email ("[v5 2/3] drm/msm/dpu: Integrate interconnect API in MDSS"):
> >> <https://lkml.kernel.org/r/CAD=FV=XUeMTGH+CDwGs3PfK4igdQrCbwucw7_2ViBc4i7grvxg@mail.gmail.com>
> >>
> >> Copy / pasting here (with minor name changes) so folks don't have to
> >> follow links / search email.
> >>
> >> ==
> >>
> >> I'm curious what the plan is for landing this series. Rob / Georgi:
> >> do you have any preference? Options I'd imagine:
> >>
> >> A) Wait until interconnect lands (in 5.1?) and land this through
> >> msm-next in the version after (5.2?).
> >>
> >> B) Georgi provides an immutable branch for interconnect when his lands
> >> (assuming he's landing via pull request) and that gets pulled into
> >> the relevant drm tree.
> >>
> >> C) Rob Acks this series and indicates that it should go in through
> >> Georgi's tree (probably only works if Georgi plans to send a pull
> >> request). If we're going this route then (IIUC) we'd want to land
> >> this in Georgi's tree sooner rather than later so it can get some bake
> >> time? NOTE: as per my prior reply, I believe Rob has already Acked
> >> this patch.
> >>
> >
> > I'm ok to ack and have it land via Georgi's tree, if Georgi wants to
> > do that. Or otherwise, I could maybe coordinate w/ airlied to send a
> > 2nd late msm-next pr including the gpu and display interconnect
> > patches.
>
> I'm fine either way. But it would be nice if both patches (this one and
> the dt-bindings) go together. The v6 of this patch applies cleanly to my
> tree, but the next one (2/3) with the dt-bindings doesn't.

Ah, right. You need to be based upon commit 85437cddf4e5 ("dt-bindings:
drm/msm/a6xx: Document GMU and update GPU bindings") from Rob Clark's
msm-next AKA <git://people.freedesktop.org/~robclark/linux>

...so I guess the easiest would be to have the bindings go through Rob
Clark's tree and the code through your tree, if that's what people want
to do?

-Doug
diff --git a/drivers/gpu/drm/msm/Kconfig b/drivers/gpu/drm/msm/Kconfig
index 843a9d40c05e..990c4350f0c4 100644
--- a/drivers/gpu/drm/msm/Kconfig
+++ b/drivers/gpu/drm/msm/Kconfig
@@ -5,6 +5,7 @@ config DRM_MSM
 	depends on ARCH_QCOM || (ARM && COMPILE_TEST)
 	depends on OF && COMMON_CLK
 	depends on MMU
+	depends on INTERCONNECT || !INTERCONNECT
 	select QCOM_MDT_LOADER if ARCH_QCOM
 	select REGULATOR
 	select DRM_KMS_HELPER
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
index 0fb4718ef0df..781b601c6045 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
@@ -2,6 +2,7 @@
 /* Copyright (c) 2017-2018 The Linux Foundation. All rights reserved. */
 
 #include <linux/clk.h>
+#include <linux/interconnect.h>
 #include <linux/pm_opp.h>
 #include <soc/qcom/cmd-db.h>
 
@@ -63,6 +64,9 @@ static bool a6xx_gmu_gx_is_on(struct a6xx_gmu *gmu)
 
 static void __a6xx_gmu_set_freq(struct a6xx_gmu *gmu, int index)
 {
+	struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu);
+	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
+	struct msm_gpu *gpu = &adreno_gpu->base;
 	int ret;
 
 	gmu_write(gmu, REG_A6XX_GMU_DCVS_ACK_OPTION, 0);
@@ -85,6 +89,12 @@ static void __a6xx_gmu_set_freq(struct a6xx_gmu *gmu, int index)
 		dev_err(gmu->dev, "GMU set GPU frequency error: %d\n", ret);
 
 	gmu->freq = gmu->gpu_freqs[index];
+
+	/*
+	 * Eventually we will want to scale the path vote with the frequency but
+	 * for now leave it at max so that the performance is nominal.
+	 */
+	icc_set(gpu->icc_path, 0, MBps_to_icc(7216));
 }
 
 void a6xx_gmu_set_freq(struct msm_gpu *gpu, unsigned long freq)
@@ -680,6 +690,8 @@ int a6xx_gmu_reset(struct a6xx_gpu *a6xx_gpu)
 
 int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu)
 {
+	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
+	struct msm_gpu *gpu = &adreno_gpu->base;
 	struct a6xx_gmu *gmu = &a6xx_gpu->gmu;
 	int status, ret;
 
@@ -695,6 +707,9 @@ int a6xx_gmu_resume(struct a6xx_gpu *a6xx_gpu)
 	if (ret)
 		goto out;
 
+	/* Set the bus quota to a reasonable value for boot */
+	icc_set(gpu->icc_path, 0, MBps_to_icc(3072));
+
 	a6xx_gmu_irq_enable(gmu);
 
 	/* Check to see if we are doing a cold or warm boot */
@@ -735,6 +750,8 @@ bool a6xx_gmu_isidle(struct a6xx_gmu *gmu)
 
 int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
 {
+	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
+	struct msm_gpu *gpu = &adreno_gpu->base;
 	struct a6xx_gmu *gmu = &a6xx_gpu->gmu;
 	u32 val;
 
@@ -781,6 +798,9 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu)
 	/* Tell RPMh to power off the GPU */
 	a6xx_rpmh_stop(gmu);
 
+	/* Remove the bus vote */
+	icc_set(gpu->icc_path, 0, 0);
+
 	clk_bulk_disable_unprepare(gmu->nr_clocks, gmu->clocks);
 
 	pm_runtime_put_sync(gmu->dev);
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index bfeea50fca8a..6629dc3506eb 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -18,6 +18,7 @@
  */
 
 #include <linux/ascii85.h>
+#include <linux/interconnect.h>
 #include <linux/kernel.h>
 #include <linux/pm_opp.h>
 #include <linux/slab.h>
@@ -695,6 +696,11 @@ static int adreno_get_pwrlevels(struct device *dev,
 
 	DBG("fast_rate=%u, slow_rate=27000000", gpu->fast_rate);
 
+	/* Check for an interconnect path for the bus */
+	gpu->icc_path = of_icc_get(dev, NULL);
+	if (IS_ERR(gpu->icc_path))
+		gpu->icc_path = NULL;
+
 	return 0;
 }
 
@@ -732,10 +738,13 @@ int adreno_gpu_init(struct drm_device *drm, struct platform_device *pdev,
 
 void adreno_gpu_cleanup(struct adreno_gpu *adreno_gpu)
 {
+	struct msm_gpu *gpu = &adreno_gpu->base;
 	unsigned int i;
 
 	for (i = 0; i < ARRAY_SIZE(adreno_gpu->info->fw); i++)
 		release_firmware(adreno_gpu->fw[i]);
 
+	icc_put(gpu->icc_path);
+
 	msm_gpu_cleanup(&adreno_gpu->base);
 }
 
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index fc4040e24a6b..66e0f28dfed8 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -19,6 +19,7 @@
 #define __MSM_GPU_H__
 
 #include <linux/clk.h>
+#include <linux/interconnect.h>
 #include <linux/regulator/consumer.h>
 
 #include "msm_drv.h"
@@ -118,6 +119,8 @@ struct msm_gpu {
 	struct clk *ebi1_clk, *core_clk, *rbbmtimer_clk;
 	uint32_t fast_rate;
 
+	struct icc_path *icc_path;
+
 	/* Hang and Inactivity Detection:
 	 */
#define DRM_MSM_INACTIVE_PERIOD  66 /* in ms (roughly four frames) */
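[Taken together, the diff follows the usual interconnect consumer
lifecycle: get a path at init, vote while the GPU is up, drop the vote at
stop, and release the path at cleanup. Below is a condensed,
self-contained sketch of that pattern. The my_gpu_* names are
hypothetical, it uses the icc_set_bw() name the thread settles on, and it
assumes (as the patch itself does, by clearing the pointer on error and
calling unconditionally) that the icc_* calls treat a NULL path as a
no-op:]

#include <linux/err.h>
#include <linux/interconnect.h>

/* Hypothetical consumer illustrating the pattern used by the patch. */
struct my_gpu {
	struct icc_path *icc_path;
};

static void my_gpu_init(struct device *dev, struct my_gpu *gpu)
{
	/* A NULL name selects the device's first (default) DT path */
	gpu->icc_path = of_icc_get(dev, NULL);
	if (IS_ERR(gpu->icc_path))
		gpu->icc_path = NULL;	/* path is optional; NULL is a no-op */
}

static void my_gpu_resume(struct my_gpu *gpu)
{
	/* avg_bw = 0, peak_bw = 3072 MB/s: the boot-time quota */
	icc_set_bw(gpu->icc_path, 0, MBps_to_icc(3072));
}

static void my_gpu_stop(struct my_gpu *gpu)
{
	icc_set_bw(gpu->icc_path, 0, 0);	/* remove the bus vote */
}

static void my_gpu_cleanup(struct my_gpu *gpu)
{
	icc_put(gpu->icc_path);			/* release the path */
}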
Try to get the interconnect path for the GPU and vote for the maximum
bandwidth to support all frequencies. This is needed for performance.
Later we will want to scale the bandwidth based on the frequency to
also optimize for power but that will require some device tree
infrastructure that does not yet exist.

v5: Remove hardcoded interconnect name and just use the default
v4: Don't use a port string at all to skip the need for names in the DT
v3: Use macros and change port string per Georgi Djakov

Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
---
 drivers/gpu/drm/msm/Kconfig             |  1 +
 drivers/gpu/drm/msm/adreno/a6xx_gmu.c   | 20 ++++++++++++++++++++
 drivers/gpu/drm/msm/adreno/adreno_gpu.c |  9 +++++++++
 drivers/gpu/drm/msm/msm_gpu.h           |  3 +++
 4 files changed, 33 insertions(+)