
[v5,1/2] dt-bindings: Documentation for qcom,llcc

Message ID 1524524972-12014-2-git-send-email-rishabhb@codeaurora.org (mailing list archive)
State Superseded, archived
Delegated to: Andy Gross

Commit Message

Rishabh Bhatnagar April 23, 2018, 11:09 p.m. UTC
Documentation for last level cache controller device tree bindings and
client binding usage examples.

Signed-off-by: Channagoud Kadabi <ckadabi@codeaurora.org>
Signed-off-by: Rishabh Bhatnagar <rishabhb@codeaurora.org>
---
 .../devicetree/bindings/arm/msm/qcom,llcc.txt      | 60 ++++++++++++++++++++++
 1 file changed, 60 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/arm/msm/qcom,llcc.txt

--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project

Comments

Rob Herring (Arm) April 27, 2018, 2:21 p.m. UTC | #1
On Mon, Apr 23, 2018 at 04:09:31PM -0700, Rishabh Bhatnagar wrote:
> Documentation for last level cache controller device tree bindings and
> client binding usage examples.
> 
> Signed-off-by: Channagoud Kadabi <ckadabi@codeaurora.org>
> Signed-off-by: Rishabh Bhatnagar <rishabhb@codeaurora.org>
> ---
>  .../devicetree/bindings/arm/msm/qcom,llcc.txt      | 60 ++++++++++++++++++++++
>  1 file changed, 60 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/arm/msm/qcom,llcc.txt

My comments on v4 still apply.

Rob
Rishabh Bhatnagar April 27, 2018, 10:57 p.m. UTC | #2
On 2018-04-27 07:21, Rob Herring wrote:
> On Mon, Apr 23, 2018 at 04:09:31PM -0700, Rishabh Bhatnagar wrote:
>> Documentation for last level cache controller device tree bindings and
>> client binding usage examples.
>> 
>> Signed-off-by: Channagoud Kadabi <ckadabi@codeaurora.org>
>> Signed-off-by: Rishabh Bhatnagar <rishabhb@codeaurora.org>
>> ---
>>  .../devicetree/bindings/arm/msm/qcom,llcc.txt      | 60 
>> ++++++++++++++++++++++
>>  1 file changed, 60 insertions(+)
>>  create mode 100644 
>> Documentation/devicetree/bindings/arm/msm/qcom,llcc.txt
> 
> My comments on v4 still apply.
> 
> Rob

Hi Rob,
Reposting our replies to your comments on v4:

This is partially true: a bunch of SoCs would support this design, but
client IDs are not expected to change. So ideally client drivers could
hard-code these IDs.

However, I have other concerns about moving the client IDs into the driver.
The way the APIs are implemented today is as follows:
#1. The client calls into the system cache driver to get a cache slice handle
with the usecase ID as input.
#2. The system cache driver gets the phandle of the system cache instance from
the client device to obtain the private data.
#3. Based on the usecase ID, perform a lookup in the private data to get the
cache slice handle.
#4. Return the cache slice handle to the client.
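
To make this flow concrete, here is a rough sketch of what we have in mind.
The llcc-specific names below (llcc_get_slice(), struct llcc_drv_data,
struct llcc_slice_desc and their fields) are placeholders for this
discussion, not the actual driver API; only the OF/platform helpers are
existing kernel functions:

#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/err.h>

/* Placeholder per-instance private data held by the system cache driver. */
struct llcc_slice_desc {
	u32 usecase_id;
	u32 slice_id;
};

struct llcc_drv_data {
	struct llcc_slice_desc *slices;
	int num_slices;
};

/* Step #1: the client passes its device and the usecase ID. */
struct llcc_slice_desc *llcc_get_slice(struct device *dev, u32 uid)
{
	struct of_phandle_args args;
	struct platform_device *pdev;
	struct llcc_drv_data *drv;
	int i, ret;

	/*
	 * Step #2: follow the client's cache-slices phandle to the LLCC
	 * instance and recover that instance's private data.
	 */
	ret = of_parse_phandle_with_args(dev->of_node, "cache-slices",
					 "#cache-cells", 0, &args);
	if (ret)
		return ERR_PTR(ret);

	pdev = of_find_device_by_node(args.np);
	of_node_put(args.np);
	if (!pdev)
		return ERR_PTR(-EPROBE_DEFER);

	drv = platform_get_drvdata(pdev);

	/* Steps #3/#4: look up the usecase ID and return the handle. */
	for (i = 0; i < drv->num_slices; i++)
		if (drv->slices[i].usecase_id == uid)
			return &drv->slices[i];

	return ERR_PTR(-ENODEV);
}

The per-instance private data is reached through the client's phandle, which
is the connection discussed below.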

If we don't have the connection between the client and the system cache, then
the private data needs to be declared as a static global in the system cache
driver, which limits us to just one instance of the system cache block.

Please let us know what you think.
Rob Herring (Arm) April 30, 2018, 2:33 p.m. UTC | #3
On Fri, Apr 27, 2018 at 5:57 PM,  <rishabhb@codeaurora.org> wrote:
> On 2018-04-27 07:21, Rob Herring wrote:
>>
>> On Mon, Apr 23, 2018 at 04:09:31PM -0700, Rishabh Bhatnagar wrote:
>>>
>>> Documentation for last level cache controller device tree bindings and
>>> client binding usage examples.
>>>
>>> Signed-off-by: Channagoud Kadabi <ckadabi@codeaurora.org>
>>> Signed-off-by: Rishabh Bhatnagar <rishabhb@codeaurora.org>
>>> ---
>>>  .../devicetree/bindings/arm/msm/qcom,llcc.txt      | 60
>>> ++++++++++++++++++++++
>>>  1 file changed, 60 insertions(+)
>>>  create mode 100644
>>> Documentation/devicetree/bindings/arm/msm/qcom,llcc.txt
>>
>>
>> My comments on v4 still apply.
>>
>> Rob
>
>
> Hi Rob,
> Reposting our replies to your comments on v4:
>
> This is partially true: a bunch of SoCs would support this design, but
> client IDs are not expected to change. So ideally client drivers could
> hard-code these IDs.
>
> However, I have other concerns about moving the client IDs into the driver.
> The way the APIs are implemented today is as follows:
> #1. The client calls into the system cache driver to get a cache slice handle
> with the usecase ID as input.
> #2. The system cache driver gets the phandle of the system cache instance from
> the client device to obtain the private data.
> #3. Based on the usecase ID, perform a lookup in the private data to get the
> cache slice handle.
> #4. Return the cache slice handle to the client.
>
> If we don't have the connection between the client and the system cache, then
> the private data needs to be declared as a static global in the system cache
> driver, which limits us to just one instance of the system cache block.

How many instances do you have?

It is easier to put the data into the kernel and move it to DT later
than vice-versa. I don't think it is a good idea to do a custom
binding here and one that only addresses caches and nothing else in
the interconnect. So either we define an extensible and future-proof
binding or put the data into the kernel for now.

Rob
Rishabh Bhatnagar May 1, 2018, 12:37 a.m. UTC | #4
On 2018-04-30 07:33, Rob Herring wrote:
> On Fri, Apr 27, 2018 at 5:57 PM,  <rishabhb@codeaurora.org> wrote:
>> On 2018-04-27 07:21, Rob Herring wrote:
>>> 
>>> On Mon, Apr 23, 2018 at 04:09:31PM -0700, Rishabh Bhatnagar wrote:
>>>> 
>>>> Documentation for last level cache controller device tree bindings and
>>>> client binding usage examples.
>>>> 
>>>> Signed-off-by: Channagoud Kadabi <ckadabi@codeaurora.org>
>>>> Signed-off-by: Rishabh Bhatnagar <rishabhb@codeaurora.org>
>>>> ---
>>>>  .../devicetree/bindings/arm/msm/qcom,llcc.txt      | 60
>>>> ++++++++++++++++++++++
>>>>  1 file changed, 60 insertions(+)
>>>>  create mode 100644
>>>> Documentation/devicetree/bindings/arm/msm/qcom,llcc.txt
>>> 
>>> 
>>> My comments on v4 still apply.
>>> 
>>> Rob
>> 
>> 
>> Hi Rob,
>> Reposting our replies to your comments on v4:
>> 
>> This is partially true: a bunch of SoCs would support this design, but
>> client IDs are not expected to change. So ideally client drivers could
>> hard-code these IDs.
>> 
>> However, I have other concerns about moving the client IDs into the driver.
>> The way the APIs are implemented today is as follows:
>> #1. The client calls into the system cache driver to get a cache slice handle
>> with the usecase ID as input.
>> #2. The system cache driver gets the phandle of the system cache instance from
>> the client device to obtain the private data.
>> #3. Based on the usecase ID, perform a lookup in the private data to get the
>> cache slice handle.
>> #4. Return the cache slice handle to the client.
>> 
>> If we don't have the connection between the client and the system cache, then
>> the private data needs to be declared as a static global in the system cache
>> driver, which limits us to just one instance of the system cache block.
> 
> How many instances do you have?
> 
> It is easier to put the data into the kernel and move it to DT later
> than vice-versa. I don't think it is a good idea to do a custom
> binding here and one that only addresses caches and nothing else in
> the interconnect. So either we define an extensible and future-proof
> binding or put the data into the kernel for now.
> 
> Rob
Hi Rob,
Currently we have only one instance, but how do you propose we handle
multiple instances in the future?
Currently we do a lookup in the driver's private data to get the slice
handle, but if we were to remove the client connection we would have to
make the lookup table global, and we couldn't have more than one instance.
Also, can you suggest any extensible interconnect binding that we can
refer to?
Rob Herring (Arm) May 8, 2018, 3:35 p.m. UTC | #5
On Mon, Apr 30, 2018 at 05:37:49PM -0700, rishabhb@codeaurora.org wrote:
> On 2018-04-30 07:33, Rob Herring wrote:
> > On Fri, Apr 27, 2018 at 5:57 PM,  <rishabhb@codeaurora.org> wrote:
> > > On 2018-04-27 07:21, Rob Herring wrote:
> > > > 
> > > > On Mon, Apr 23, 2018 at 04:09:31PM -0700, Rishabh Bhatnagar wrote:
> > > > > 
> > > > > Documentation for last level cache controller device tree bindings and
> > > > > client binding usage examples.
> > > > > 
> > > > > Signed-off-by: Channagoud Kadabi <ckadabi@codeaurora.org>
> > > > > Signed-off-by: Rishabh Bhatnagar <rishabhb@codeaurora.org>
> > > > > ---
> > > > >  .../devicetree/bindings/arm/msm/qcom,llcc.txt      | 60
> > > > > ++++++++++++++++++++++
> > > > >  1 file changed, 60 insertions(+)
> > > > >  create mode 100644
> > > > > Documentation/devicetree/bindings/arm/msm/qcom,llcc.txt
> > > > 
> > > > 
> > > > My comments on v4 still apply.
> > > > 
> > > > Rob
> > > 
> > > 
> > > Hi Rob,
> > > Reposting our replies to your comments on v4:
> > > 
> > > This is partially true: a bunch of SoCs would support this design, but
> > > client IDs are not expected to change. So ideally client drivers could
> > > hard-code these IDs.
> > > 
> > > However, I have other concerns about moving the client IDs into the driver.
> > > The way the APIs are implemented today is as follows:
> > > #1. The client calls into the system cache driver to get a cache slice handle
> > > with the usecase ID as input.
> > > #2. The system cache driver gets the phandle of the system cache instance from
> > > the client device to obtain the private data.
> > > #3. Based on the usecase ID, perform a lookup in the private data to get the
> > > cache slice handle.
> > > #4. Return the cache slice handle to the client.
> > > 
> > > If we don't have the connection between the client and the system cache, then
> > > the private data needs to be declared as a static global in the system cache
> > > driver, which limits us to just one instance of the system cache block.
> > 
> > How many instances do you have?
> > 
> > It is easier to put the data into the kernel and move it to DT later
> > than vice-versa. I don't think it is a good idea to do a custom
> > binding here and one that only addresses caches and nothing else in
> > the interconnect. So either we define an extensible and future-proof
> > binding or put the data into the kernel for now.
> > 
> > Rob
> Hi Rob,
> Currently we have only one instance, but how do you propose we handle
> multiple instances in the future?

Worry about that when you have more than one. If it's only a
theoretical possibility, then it can wait.

> Currently we do a lookup in the driver's private data to get the slice
> handle, but if we were to remove the client connection we would have to
> make the lookup table global, and we couldn't have more than one instance.
> Also, can you suggest any extensible interconnect binding that we can
> refer to?

There's been some work to add interconnect support for Qualcomm chips. At the
moment there is no binding for it; it is just a kernel driver and subsystem.
I'm sure you can Google it as easily as I can.

Rob

Patch

diff --git a/Documentation/devicetree/bindings/arm/msm/qcom,llcc.txt b/Documentation/devicetree/bindings/arm/msm/qcom,llcc.txt
new file mode 100644
index 0000000..c30d433
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/msm/qcom,llcc.txt
@@ -0,0 +1,60 @@ 
+== Introduction ==
+
+LLCC (Last Level Cache Controller) provides the last level of cache memory in
+the SoC, which can be shared by multiple clients. Clients here are different
+cores in the SoC; the idea is to minimize the local caches at the clients and
+migrate to a common pool of memory. Cache memory is divided into partitions
+called slices, which are assigned to clients. Clients can query the slice
+details, and activate and deactivate them.
+
+Properties:
+- compatible:
+	Usage: required
+	Value type: <string>
+	Definition: must be "qcom,sdm845-llcc"
+
+- reg:
+	Usage: required
+	Value type: <prop-encoded-array>
+	Definition: Start address and the range of the LLCC registers.
+
+- #cache-cells:
+	Usage: required
+	Value type: <u32>
+	Definition: Number of cache cells, must be 1
+
+- max-slices:
+	Usage: required
+	Value type: <u32>
+	Definition: Number of cache slices supported by hardware
+
+Example:
+
+	llcc: qcom,llcc@1100000 {
+		compatible = "qcom,sdm845-llcc";
+		reg = <0x1100000 0x250000>;
+		#cache-cells = <1>;
+		max-slices = <32>;
+	};
+
+== Client ==
+
+Properties:
+- cache-slice-names:
+	Usage: required
+	Value type: <stringlist>
+	Definition: A set of names that identify the usecases of a
+		client that uses cache slices. These strings are
+		used to look up the cache slice entries by name.
+
+- cache-slices:
+	Usage: required
+	Value type: <prop-encoded-array>
+	Definition: Each tuple has a phandle to the LLCC device as the
+			first argument; the second argument is the
+			usecase ID of the client.
+Example:
+	venus {
+		cache-slice-names = "vidsc0", "vidsc1";
+		cache-slices = <&llcc VIDSC0_ID>, <&llcc VIDSC1_ID>;
+	};
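
For reference, a client driver would resolve the properties above with the
standard OF helpers. The sketch below is illustrative only: venus_get_slice_id()
is a made-up name, while of_property_match_string() and
of_parse_phandle_with_args() are existing kernel APIs.

#include <linux/device.h>
#include <linux/of.h>

/*
 * Map a usecase name such as "vidsc0" to the usecase ID carried in the
 * cache-slices entry it names. Illustrative sketch only.
 */
static int venus_get_slice_id(struct device *dev, const char *name, u32 *uid)
{
	struct of_phandle_args args;
	int idx, ret;

	/* "vidsc0" -> index into the cache-slices list */
	idx = of_property_match_string(dev->of_node, "cache-slice-names", name);
	if (idx < 0)
		return idx;

	/*
	 * Entry at that index, e.g. <&llcc VIDSC0_ID>; the llcc node has
	 * #cache-cells = <1>, so the single cell is the usecase ID.
	 */
	ret = of_parse_phandle_with_args(dev->of_node, "cache-slices",
					 "#cache-cells", idx, &args);
	if (ret)
		return ret;

	*uid = args.args[0];
	of_node_put(args.np);

	return 0;
}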