
[v6,07/10] ARM: OMAP2+: gpmc: generic timing calculation

Message ID 83b963e4ebcc1009a26a8c6419c785ac6d742c0b.1345524670.git.afzal@ti.com (mailing list archive)
State New, archived

Commit Message

Afzal Mohammed Aug. 21, 2012, 10:45 a.m. UTC
Presently there are three peripherals that get their timings
by runtime calculation. Those peripherals can work with
frequency scaling that affects the gpmc clock, but the timing
calculations for them are done in different ways.

Here a generic runtime calculation method is proposed. Inputs
to this function were selected so that they represent timing
variables that are present in peripheral datasheets. The motive
behind this was to achieve DT bindings for the inputs as is.
Even though a few of the tusb6010 timings could not be made
directly related to timings normally found on peripherals, the
expressions used were translated to those that could be
justified.

There are possibilities for improving the calculations, like
calculating the timings for read & write operations in a more
similar way. The expressions derived here were tested for async
onenand on omap3evm (as the vanilla kernel does not have omap3evm
onenand support, a local patch was used). The calculations for the
other peripherals, tusb6010 and smc91x, were validated by
simulating on omap3evm.

Regarding "we_on" for onenand async, it was found that even
for muxed address/data, it need not be greater than
"adv_wr_off", but rather could be derived from the peripheral's
write setup time from the start of the access, and hence would
be more in line with the peripheral timings. With this method
it was working fine. If it is required in some cases to
have "we_on" the same as "wr_data_mux_bus" (i.e. greater than
"adv_wr_off"), another variable could be added to indicate
it. But such a requirement is not expected.

The whole of this exercise is being done to achieve the driver
and DT conversion. If timings could not be calculated in a
peripheral-agnostic way, either the gpmc driver would have to
be peripheral-aware or a wrapper arrangement over the gpmc
driver would be required.

Signed-off-by: Afzal Mohammed <afzal@ti.com>
---
 arch/arm/mach-omap2/gpmc.c             |  302 ++++++++++++++++++++++++++++++++
 arch/arm/plat-omap/include/plat/gpmc.h |   61 +++++++
 2 files changed, 363 insertions(+), 0 deletions(-)

Comments

Hunter, Jon Aug. 23, 2012, 2:58 a.m. UTC | #1
Hi Afzal,

On 08/21/2012 05:45 AM, Afzal Mohammed wrote:
> Presently there are three peripherals that get their timings
> by runtime calculation. Those peripherals can work with
> frequency scaling that affects the gpmc clock, but the timing
> calculations for them are done in different ways.
> 
> Here a generic runtime calculation method is proposed. Inputs
> to this function were selected so that they represent timing
> variables that are present in peripheral datasheets. The motive
> behind this was to achieve DT bindings for the inputs as is.
> Even though a few of the tusb6010 timings could not be made
> directly related to timings normally found on peripherals, the
> expressions used were translated to those that could be
> justified.
> 
> There are possibilities for improving the calculations, like
> calculating the timings for read & write operations in a more
> similar way. The expressions derived here were tested for async
> onenand on omap3evm (as the vanilla kernel does not have omap3evm
> onenand support, a local patch was used). The calculations for the
> other peripherals, tusb6010 and smc91x, were validated by
> simulating on omap3evm.
> 
> Regarding "we_on" for onenand async, it was found that even
> for muxed address/data, it need not be greater than
> "adv_wr_off", but rather could be derived from the peripheral's
> write setup time from the start of the access, and hence would
> be more in line with the peripheral timings. With this method
> it was working fine. If it is required in some cases to
> have "we_on" the same as "wr_data_mux_bus" (i.e. greater than
> "adv_wr_off"), another variable could be added to indicate
> it. But such a requirement is not expected.
> 
> The whole of this exercise is being done to achieve the driver
> and DT conversion. If timings could not be calculated in a
> peripheral-agnostic way, either the gpmc driver would have to
> be peripheral-aware or a wrapper arrangement over the gpmc
> driver would be required.
> 
> Signed-off-by: Afzal Mohammed <afzal@ti.com>
> ---
>  arch/arm/mach-omap2/gpmc.c             |  302 ++++++++++++++++++++++++++++++++
>  arch/arm/plat-omap/include/plat/gpmc.h |   61 +++++++
>  2 files changed, 363 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/arm/mach-omap2/gpmc.c b/arch/arm/mach-omap2/gpmc.c
> index d005b3a..d8e5b08 100644
> --- a/arch/arm/mach-omap2/gpmc.c
> +++ b/arch/arm/mach-omap2/gpmc.c
> @@ -233,6 +233,18 @@ unsigned int gpmc_round_ns_to_ticks(unsigned int time_ns)
>  	return ticks * gpmc_get_fclk_period() / 1000;
>  }
>  
> +unsigned int gpmc_ticks_to_ps(unsigned int ticks)
> +{
> +	return ticks * gpmc_get_fclk_period();
> +}
> +
> +unsigned int gpmc_round_ps_to_ticks(unsigned int time_ps)
> +{
> +	unsigned long ticks = gpmc_ps_to_ticks(time_ps);
> +
> +	return ticks * gpmc_get_fclk_period();
> +}
> +
>  static inline void gpmc_cs_modify_reg(int cs, int reg, u32 mask, bool value)
>  {
>  	u32 l;
> @@ -884,6 +896,296 @@ static void __init gpmc_mem_init(void)
>  	}
>  }
>  
> +static u32 gpmc_round_ps_to_sync_clk(u32 time_ps, u32 sync_clk)
> +{
> +	u32 temp;
> +	int div;
> +
> +	div = gpmc_calc_divider(sync_clk);
> +	temp = gpmc_ps_to_ticks(time_ps);
> +	temp = (temp + div - 1) / div;
> +	return gpmc_ticks_to_ps(temp * div);
> +}
> +
> +/* can the cycles be avoided ? */

What is the above comment referring to?

> +static int gpmc_calc_sync_read_timings(struct gpmc_timings *gpmc_t,
> +				struct gpmc_device_timings *dev_t)
> +{
> +	bool mux = dev_t->mux;
> +	u32 temp;
> +
> +	/* adv_rd_off */
> +	temp = dev_t->t_avdp_r;
> +	/* mux check required ? */
> +	if (mux) {
> +		/* t_avdp not to be required for sync, only added for tusb this
> +		 * indirectly necessitates requirement of t_avdp_r & t_avdp_w
> +		 * instead of having a single t_avdp
> +		 */
> +		temp = max_t(u32, temp,	gpmc_t->clk_activation * 1000 +
> +							dev_t->t_avdh);
> +		temp = max_t(u32,
> +			(gpmc_t->adv_on + gpmc_ticks_to_ns(1)) * 1000, temp);
> +	}
> +	gpmc_t->adv_rd_off = gpmc_round_ps_to_ticks(temp) / 1000;
> +
> +	/* oe_on */
> +	temp = dev_t->t_oeasu; /* remove this ? */
> +	if (mux) {
> +		temp = max_t(u32, temp,
> +			gpmc_t->clk_activation * 1000 + dev_t->t_ach);
> +		temp = max_t(u32, temp, (gpmc_t->adv_rd_off +
> +				gpmc_ticks_to_ns(dev_t->cyc_aavdh_oe)) * 1000);
> +	}
> +	gpmc_t->oe_on = gpmc_round_ps_to_ticks(temp) / 1000;
> +
> +	/* access */
> +	/* any scope for improvement ?, by combining oe_on & clk_activation,
> +	 * need to check whether access = clk_activation + round to sync clk ?
> +	 */
> +	temp = max_t(u32, dev_t->t_iaa,	dev_t->cyc_iaa * gpmc_t->sync_clk);
> +	temp += gpmc_t->clk_activation * 1000;
> +	if (dev_t->cyc_oe)
> +		temp = max_t(u32, temp, (gpmc_t->oe_on +
> +				gpmc_ticks_to_ns(dev_t->cyc_oe)) * 1000);
> +	gpmc_t->access = gpmc_round_ps_to_ticks(temp) / 1000;
> +
> +	gpmc_t->oe_off = gpmc_t->access + gpmc_ticks_to_ns(1);
> +	gpmc_t->cs_rd_off = gpmc_t->oe_off;
> +
> +	/* rd_cycle */
> +	temp = max_t(u32, dev_t->t_cez_r, dev_t->t_oez);
> +	temp = gpmc_round_ps_to_sync_clk(temp, gpmc_t->sync_clk) +
> +							gpmc_t->access * 1000;
> +	/* barter t_ce_rdyz with t_cez_r ? */
> +	if (dev_t->t_ce_rdyz)
> +		temp = max_t(u32, temp,
> +				gpmc_t->cs_rd_off * 1000 + dev_t->t_ce_rdyz);
> +	gpmc_t->rd_cycle = gpmc_round_ps_to_ticks(temp) / 1000;
> +
> +	return 0;
> +}

[...]

> diff --git a/arch/arm/plat-omap/include/plat/gpmc.h b/arch/arm/plat-omap/include/plat/gpmc.h
> index 1cafbfd..e59a932 100644
> --- a/arch/arm/plat-omap/include/plat/gpmc.h
> +++ b/arch/arm/plat-omap/include/plat/gpmc.h
> @@ -152,6 +152,67 @@ struct gpmc_timings {
>  	struct gpmc_bool_timings bool_timings;
>  };
>  
> +/* Device timings in picoseconds */

Why picoseconds and not nanoseconds? I understand you may need to
temporarily convert to picoseconds for rounding, but when providing
timings it seems nanoseconds are more suitable.

> +struct gpmc_device_timings {
> +	u32     t_ceasu;	/* address setup to CS valid */
> +	u32     t_avdasu;	/* address setup to ADV valid */
> +	/* XXX: try to combine t_avdp_r & t_avdp_w. Issue is
> +	 * of tusb using these timings even for sync whilst
> +	 * ideally for adv_rd/(wr)_off it should have considered
> +	 * t_avdh instead. This indirectly necessitates r/w
> +	 * variations of t_avdp as it is possible to have one
> +	 * sync & other async
> +	 */
> +	u32	t_avdp_r;	/* ADV low time (what about t_cer ?) */
> +	u32	t_avdp_w;
> +	u32	t_aavdh;	/* address hold time */
> +	u32     t_oeasu;	/* address setup to OE valid */
> +	u32     t_aa;		/* access time from ADV assertion */
> +	u32     t_iaa;		/* initial access time */
> +	u32     t_oe;		/* access time from OE assertion */
> +	u32     t_ce;		/* access time from CS assertion */
> +	u32     t_rd_cycle;	/* read cycle time */
> +	u32     t_cez_r;	/* read CS deassertion to high Z */
> +	u32     t_cez_w;	/* write CS deassertion to high Z */
> +	u32     t_oez;		/* OE deassertion to high Z */
> +	u32     t_weasu;	/* address setup to WE valid */
> +	u32     t_wpl;		/* write assertion time */
> +	u32     t_wph;		/* write deassertion time */
> +	u32     t_wr_cycle;	/* write cycle time */
> +
> +	u32	clk;
> +	u32	t_bacc;		/* burst access valid clock to output delay */
> +	u32	t_ces;		/* CS setup time to clk */
> +	u32	t_avds;		/* ADV setup time to clk */
> +	u32	t_avdh;		/* ADV hold time from clk */
> +	u32     t_ach;		/* address hold time from clk */
> +	u32	t_rdyo;		/* clk to ready valid */
> +
> +	u32	t_ce_rdyz;	/* XXX: description ?, or use t_cez instead */
> +	u32	t_ce_avd;	/* CS on to ADV on delay */
> +
> +	/* XXX: check the possibility of combining
> +	 * cyc_aavdh_oe & cyc_aavdh_we
> +	 */
> +	u8	cyc_aavdh_oe;
> +	u8	cyc_aavdh_we;
> +	u8	cyc_oe;
> +	u8	cyc_wpl;
> +	u32     cyc_iaa;
> +
> +	bool	mux;		/* address & data muxed */
> +	bool	sync_write;	/* synchronous write */
> +	bool	sync_read;	/* synchronous read */
> +
> +	bool	ce_xdelay;
> +	bool	avd_xdelay;
> +	bool	oe_xdelay;
> +	bool	we_xdelay;
> +};

I am a little concerned about the above timings structure. For example,
if I am adding support for a new device it is not clear ...

1. Which are required
2. Which are applicable for async, sync, address-data multiplexed etc.
3. Exactly how these relate to the fields in the gpmc registers.

I understand that this is based upon how timings for onenand and tusb
are being calculated today, but I am not sure that this is the way to go
for all devices. Personally, I would like to see us get away from how
those devices are calculating timings for any new device.

In general, I am concerned that we are abstracting the timings further
from the actual register fields. For example, the timings parameter
"t_ceasu" is described as "address setup to CS valid", which is not
incorrect, but this value is really just programming the CSONTIME field,
so why not call it cs_on?
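
In the patch itself that field is consumed as nothing more than:

	/* cs_on */
	gpmc_t->cs_on = gpmc_round_ns_to_ticks(dev_t->t_ceasu / 1000);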

So although this may consolidate how the timings are calculated today, I
am concerned it will be confusing to add timings for a new device. At
least if I am calculating timings, I am taking the timing information
for the device and translating that to how I need to program the
gpmc register fields.

Cheers
Jon
Tony Lindgren Aug. 24, 2012, 7:54 p.m. UTC | #2
* Jon Hunter <jon-hunter@TI.COM> [120822 19:58]:
> 
> So although this may consolidate how the timings are calculated today, I
> am concerned it will be confusing to add timings for a new device. At
> least if I am calculating timings, I am taking the timing information
> for the device and translating that to how I need to program the
> gpmc register fields.

Yes, agreed. Also, as some values make sense only in cycles, converting them
back and forth to time is wrong. So at least some values should have an
option to specify them in cycles directly, and then ignore any time-based
values.

Regards,

Tony
Afzal Mohammed Aug. 27, 2012, 10:37 a.m. UTC | #3
Hi Jon,

On Thu, Aug 23, 2012 at 08:28:44, Hunter, Jon wrote:
> On 08/21/2012 05:45 AM, Afzal Mohammed wrote:

> > +/* can the cycles be avoided ? */
> 
> What is the above comment referring to?

This was added in the initial stages and refers to the usage of
cycles in struct gpmc_device_timings. I wanted to avoid the usage
of cycles, but it seems it is logically required as well. This
was left as a note for the future, to find out whether it can
be removed at some point.

> > +/* Device timings in picoseconds */
> 
> Why picoseconds and not nanoseconds? I understand you may need to
> temporarily convert to picoseconds for rounding, but when providing
> timings it seems nanoseconds are more suitable.

For more accuracy. If you look, some of the tusb6010 calculations
are in picoseconds; this can be true for any device, although
only tusb does so today. If we hold on to picoseconds till the
last moment, the value would be more accurate. For example, 300 ps
has to be used in the case of tusb, and with ns it can't be
accounted for.
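
To make that concrete (a standalone sketch, not part of the patch; the
fclk period and the two input values below are invented only for this
example), truncating the inputs to nanoseconds before summing can cost
a whole gpmc tick:

#include <stdio.h>

#define FCLK_PS	6024	/* assumed gpmc fclk period in ps (~166 MHz) */

static unsigned int ps_to_ticks(unsigned int ps)
{
	return (ps + FCLK_PS - 1) / FCLK_PS;	/* round up to whole ticks */
}

int main(void)
{
	unsigned int t_ces = 6000, t_avdh = 300;	/* example values, ps */

	/* keeping picoseconds: 6000 + 300 = 6300 ps -> 2 ticks */
	printf("ps based: %u ticks\n", ps_to_ticks(t_ces + t_avdh));

	/* inputs truncated to ns: 6 ns + 0 ns = 6000 ps -> 1 tick */
	printf("ns based: %u ticks\n",
		ps_to_ticks((t_ces / 1000 + t_avdh / 1000) * 1000));

	return 0;
}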

> I am a little concerned about the above timings structure. For example,
> if I am adding support for a new device it is not clear ...
> 
> 1. Which are required
> 2. Which are applicable for async, sync, address-data multiplexed etc.
> 3. Exactly how these relate to the fields in the gpmc registers.

Please see at the end

> 
> I understand that this is based upon how timings for onenand and tusb
> are being calculated today, but I am not sure that this is the way to go
> for all devices. Personally, I would like to see us get away from how
> those devices are calculating timings for any new device.

You cannot do away with many of those, as logically they
are right. E.g. read data should be available at access time;
assuming a zero data hold time, we can very well derive an
expression such as

read de-assertion time (oe_off) = access time plus 1 gpmc clock,

and this is what the existing calculations do, and also the
generic routine. There could be other constraints, but this
certainly should be one. (I do realize that oe_off could be
optimized to be less than access time by relying on read
hold time, but then again the effect of that would be similar
for different peripherals, so let's forget about
optimization in the beginning.)
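
For reference, both the sync and async read paths in this patch encode
that constraint literally:

	gpmc_t->oe_off = gpmc_t->access + gpmc_ticks_to_ns(1);
	gpmc_t->cs_rd_off = gpmc_t->oe_off;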

> In general, I am concerned that we are abstracting the timings further
> from the actual register fields. For example, the timings parameter
> "t_ceasu" is described as "address setup to CS valid", which is not
> incorrect, but this value is really just programming the CSONTIME field,
> so why not call it cs_on?

Timing fields of struct gpmc_device_timings are selected such
that they can be bound by DT. At least one peripheral
datasheet has these fields. If the user knows the timings in
terms of gpmc values, he can directly use struct gpmc_timings,
but then all the values should be updated in struct gpmc_timings.
The user should not update some of the values in terms of
peripheral timings and others in terms of gpmc timings; that
would make things complex and does not seem the right way
to me.

cs_on and other gpmc-aware timings could be bound by DT, not as
peripheral timings, but as gpmc timings.

Bindings for peripheral DT timings should be something that
can be obtained from the peripheral datasheet; here it happens
to be the same as the gpmc timing, but most timings are not this
way. Also there could be other constraints that come into play
for cs_on, even though I have not come across any till now.

> 
> So although this may consolidate how the timings are calculated today, I
> am concerned it will be confusing to add timings for a new device. At
> least if I am calculating timings, I am taking the timing information
> for the device and translating that to how I need to program the
> gpmc register fields.

If I am not wrong, GPMC IP has been present for around a
decade, and so far I have not come across any generic time
calculation method that can be applied to all peripherals.
Getting the same peripheral to work at a different gpmc
frequency is another problem.

Here we are trying to generalize based on the understanding of
gpmc & peripheral timings, as well as by a kind of reverse
engineering without most of the hardware or datasheets. Think of
this as an attempt to create one, let it evolve and become
robust. If a new user wants to add a peripheral, let him add
support; the input timings are selected such that they can be
found in the peripheral datasheet. If that does not work, let
him try to add a new timing field or alter an existing expression
as he deems proper and verify that it does not break the
others. And I can help, provided I am not heavily loaded
with other work.
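
As a rough sketch of that flow (the peripheral, the retime function and
all timing values below are invented for illustration; only struct
gpmc_device_timings, gpmc_calc_timings() and the existing
gpmc_cs_set_timings() are real), a board file would do something like:

#include <linux/string.h>
#include <plat/gpmc.h>

/* hypothetical async, address/data-muxed peripheral */
static int example_retime(int cs)
{
	struct gpmc_device_timings dev_t;
	struct gpmc_timings t;

	memset(&dev_t, 0, sizeof(dev_t));
	dev_t.mux = true;		/* address & data muxed */
	dev_t.t_ceasu = 3 * 1000;	/* datasheet values, in ps */
	dev_t.t_avdp_r = 15 * 1000;
	dev_t.t_avdp_w = 15 * 1000;
	dev_t.t_oe = 45 * 1000;
	dev_t.t_wpl = 40 * 1000;
	dev_t.t_wph = 20 * 1000;
	dev_t.t_rd_cycle = 100 * 1000;
	dev_t.t_wr_cycle = 100 * 1000;

	gpmc_calc_timings(&t, &dev_t);	/* peripheral -> gpmc timings */

	return gpmc_cs_set_timings(cs, &t);
}

Nothing above is specific to gpmc register fields; they are numbers one
can read off a datasheet, which is what makes them candidates for DT
bindings.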

And at least the initial users are expected to have
some grasp on how to calculate timings; such a user will
not be much worried about your 3 concerns above, as even
now they need to have a good grasp on it anyway.

Meanwhile I will try to document more.

Regards
Afzal
Afzal Mohammed Aug. 27, 2012, 11:46 a.m. UTC | #4
Hi Tony,

On Sat, Aug 25, 2012 at 01:24:47, Tony Lindgren wrote:

> Yes, agreed. Also, as some values make sense only in cycles, converting them
> back and forth to time is wrong. So at least some values should have an
> option to specify them in cycles directly, and then ignore any time-based
> values.

Which values in struct gpmc_device_timings are you referring to?

Values that need an option to be specified in cycles have been provided
with such an option; by not updating the time-based field value, you can
achieve what you mentioned above. But if you want that to be handled in
the generic routine, I will do so.
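
For example (a sketch with invented values; only the field names are
from this series), a sync peripheral whose datasheet gives some
parameters directly in clock cycles can fill the cyc_* fields and leave
the corresponding time fields at zero, so only the cycle counts
constrain the result:

	struct gpmc_device_timings dev_t = {
		.sync_read  = true,
		.sync_write = true,
		.mux        = true,
		.clk        = 12000,	/* desired gpmc clk period, ps (example) */
		.cyc_iaa    = 7,	/* initial access in clk cycles, t_iaa left 0 */
		.cyc_oe     = 4,	/* OE access in fclk ticks, t_oe left 0 */
		.cyc_wpl    = 6,	/* write pulse in fclk ticks, t_wpl left 0 */
		/* remaining parameters supplied as times, in ps */
	};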

Regards
Afzal
Hunter, Jon Aug. 27, 2012, 8:30 p.m. UTC | #5
Hi Afzal,

On 08/27/2012 05:37 AM, Mohammed, Afzal wrote:
> On Thu, Aug 23, 2012 at 08:28:44, Hunter, Jon wrote:

[snip]

>> I understand that this is based upon how timings for onenand and tusb
>> are being calculated today, but I am not sure that this is the way to go
>> for all devices. Personally, I would like to see us get away from how
>> those devices are calculating timings for any new device.
> 
> You cannot do away with many of those, as logically they
> are right. E.g. read data should be available at access time;
> assuming a zero data hold time, we can very well derive an
> expression such as

I am not saying that we do away with them for current devices, just
maintain them as is.

> read de-assertion time (oe_off) = access time plus 1 gpmc clock,
> 
> and this is what the existing calculations do, and also the
> generic routine. There could be other constraints, but this
> certainly should be one. (I do realize that oe_off could be
> optimized to be less than access time by relying on read
> hold time, but then again the effect of that would be similar
> for different peripherals, so let's forget about
> optimization in the beginning.)
> 
>> In general, I am concerned that we are abstracting the timings further
>> from the actual register fields. For example, the timings parameter
>> "t_ceasu" is described as "address setup to CS valid", which is not
>> incorrect, but this value is really just programming the CSONTIME field,
>> so why not call it cs_on?
> 
> Timing fields of struct gpmc_device_timings are selected such
> that they can be bound by DT. At least one peripheral
> datasheet has these fields.

Right, but these are not applicable to every device and so I worry this
could be confusing. However, more documentation may help clear this up.

> If the user knows the timings in
> terms of gpmc values, he can directly use struct gpmc_timings,
> but then all the values should be updated in struct gpmc_timings.
> The user should not update some of the values in terms of
> peripheral timings and others in terms of gpmc timings; that
> would make things complex and does not seem the right way
> to me.
> 
> cs_on and other gpmc-aware timings could be bound by DT, not as
> peripheral timings, but as gpmc timings.
> 
> Bindings for peripheral DT timings should be something that
> can be obtained from the peripheral datasheet; here it happens
> to be the same as the gpmc timing, but most timings are not this
> way. Also there could be other constraints that come into play
> for cs_on, even though I have not come across any till now.
> 
>>
>> So although this may consolidate how the timings are calculated today, I
>> am concerned it will be confusing to add timings for a new device. At
>> least if I am calculating timings, I am taking the timing information
>> for the device and translating that to how I need to program the
>> gpmc register fields.
> 
> If I am not wrong, GPMC IP has been present for around a
> decade, and so far I have not come across any generic time
> calculation method that can be applied to all peripherals.

Yes not an easy problem to solve :-(

> Getting the same peripheral to work at a different gpmc
> frequency is another problem.
> 
> Here we are trying to generalize based on the understanding of
> gpmc & peripheral timings, as well as by a kind of reverse
> engineering without most of the hardware or datasheets. Think of
> this as an attempt to create one, let it evolve and become
> robust. If a new user wants to add a peripheral, let him add
> support; the input timings are selected such that they can be
> found in the peripheral datasheet. If that does not work, let
> him try to add a new timing field or alter an existing expression
> as he deems proper and verify that it does not break the
> others. And I can help, provided I am not heavily loaded
> with other work.

So long as it is maintainable ;-)

> And at least the initial users are expected to have
> some grasp on how to calculate timings; such a user will
> not be much worried about your 3 concerns above, as even
> now they need to have a good grasp on it anyway.

I would consider myself to be an initial user and I am concerned;
doesn't that count?

An example would be the following, where you have 4 timing parameters
for access time. You need to dig through the code to understand how
these are being used.

+	u32     t_aa;		/* access time from ADV assertion */
+	u32     t_iaa;		/* initial access time */
+	u32     t_oe;		/* access time from OE assertion */
+	u32     t_ce;		/* access time from CS assertion */
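
For instance, digging through gpmc_calc_async_read_timings() in this
patch, they all end up folded into the single "access" value (t_iaa is
also used by the sync path):

	temp = max_t(u32, dev_t->t_iaa,
				gpmc_t->oe_on * 1000 + dev_t->t_oe);
	temp = max_t(u32, temp,
				gpmc_t->cs_on * 1000 + dev_t->t_ce);
	temp = max_t(u32, temp,
				gpmc_t->adv_on * 1000 + dev_t->t_aa);
	gpmc_t->access = gpmc_round_ps_to_ticks(temp) / 1000;

That relationship is not something you can tell from the struct
definition alone.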

> Meanwhile I will try to document more.

Yes more documentation is definitely needed.

Cheers
Jon
Afzal Mohammed Aug. 28, 2012, 12:21 p.m. UTC | #6
Hi Jon,

On Tue, Aug 28, 2012 at 02:00:13, Hunter, Jon wrote:
> On 08/27/2012 05:37 AM, Mohammed, Afzal wrote:

> > And at least the initial users are expected to have
> > some grasp on how to calculate timings; such a user will
> > not be much worried about your 3 concerns above, as even
> > now they need to have a good grasp on it anyway.
> 
> I would consider myself to be an initial user and I am concerned;
> doesn't that count?

Yes sir, what I meant was new users who want to have runtime
calculation using the generic timing routine. For the
peripherals already making use of a custom timing routine,
I am in that role too, hence patches 8-10.

And if you have any board that makes use of the existing custom
timing calculation routines (OneNAND, tusb6010 or smc91x),
can you please give this series a try?

Regards
Afzal

Patch

diff --git a/arch/arm/mach-omap2/gpmc.c b/arch/arm/mach-omap2/gpmc.c
index d005b3a..d8e5b08 100644
--- a/arch/arm/mach-omap2/gpmc.c
+++ b/arch/arm/mach-omap2/gpmc.c
@@ -233,6 +233,18 @@  unsigned int gpmc_round_ns_to_ticks(unsigned int time_ns)
 	return ticks * gpmc_get_fclk_period() / 1000;
 }
 
+unsigned int gpmc_ticks_to_ps(unsigned int ticks)
+{
+	return ticks * gpmc_get_fclk_period();
+}
+
+unsigned int gpmc_round_ps_to_ticks(unsigned int time_ps)
+{
+	unsigned long ticks = gpmc_ps_to_ticks(time_ps);
+
+	return ticks * gpmc_get_fclk_period();
+}
+
 static inline void gpmc_cs_modify_reg(int cs, int reg, u32 mask, bool value)
 {
 	u32 l;
@@ -884,6 +896,296 @@  static void __init gpmc_mem_init(void)
 	}
 }
 
+static u32 gpmc_round_ps_to_sync_clk(u32 time_ps, u32 sync_clk)
+{
+	u32 temp;
+	int div;
+
+	div = gpmc_calc_divider(sync_clk);
+	temp = gpmc_ps_to_ticks(time_ps);
+	temp = (temp + div - 1) / div;
+	return gpmc_ticks_to_ps(temp * div);
+}
+
+/* can the cycles be avoided ? */
+static int gpmc_calc_sync_read_timings(struct gpmc_timings *gpmc_t,
+				struct gpmc_device_timings *dev_t)
+{
+	bool mux = dev_t->mux;
+	u32 temp;
+
+	/* adv_rd_off */
+	temp = dev_t->t_avdp_r;
+	/* mux check required ? */
+	if (mux) {
+		/* t_avdp not to be required for sync, only added for tusb this
+		 * indirectly necessitates requirement of t_avdp_r & t_avdp_w
+		 * instead of having a single t_avdp
+		 */
+		temp = max_t(u32, temp,	gpmc_t->clk_activation * 1000 +
+							dev_t->t_avdh);
+		temp = max_t(u32,
+			(gpmc_t->adv_on + gpmc_ticks_to_ns(1)) * 1000, temp);
+	}
+	gpmc_t->adv_rd_off = gpmc_round_ps_to_ticks(temp) / 1000;
+
+	/* oe_on */
+	temp = dev_t->t_oeasu; /* remove this ? */
+	if (mux) {
+		temp = max_t(u32, temp,
+			gpmc_t->clk_activation * 1000 + dev_t->t_ach);
+		temp = max_t(u32, temp, (gpmc_t->adv_rd_off +
+				gpmc_ticks_to_ns(dev_t->cyc_aavdh_oe)) * 1000);
+	}
+	gpmc_t->oe_on = gpmc_round_ps_to_ticks(temp) / 1000;
+
+	/* access */
+	/* any scope for improvement ?, by combining oe_on & clk_activation,
+	 * need to check whether access = clk_activation + round to sync clk ?
+	 */
+	temp = max_t(u32, dev_t->t_iaa,	dev_t->cyc_iaa * gpmc_t->sync_clk);
+	temp += gpmc_t->clk_activation * 1000;
+	if (dev_t->cyc_oe)
+		temp = max_t(u32, temp, (gpmc_t->oe_on +
+				gpmc_ticks_to_ns(dev_t->cyc_oe)) * 1000);
+	gpmc_t->access = gpmc_round_ps_to_ticks(temp) / 1000;
+
+	gpmc_t->oe_off = gpmc_t->access + gpmc_ticks_to_ns(1);
+	gpmc_t->cs_rd_off = gpmc_t->oe_off;
+
+	/* rd_cycle */
+	temp = max_t(u32, dev_t->t_cez_r, dev_t->t_oez);
+	temp = gpmc_round_ps_to_sync_clk(temp, gpmc_t->sync_clk) +
+							gpmc_t->access * 1000;
+	/* barter t_ce_rdyz with t_cez_r ? */
+	if (dev_t->t_ce_rdyz)
+		temp = max_t(u32, temp,
+				gpmc_t->cs_rd_off * 1000 + dev_t->t_ce_rdyz);
+	gpmc_t->rd_cycle = gpmc_round_ps_to_ticks(temp) / 1000;
+
+	return 0;
+}
+
+static int gpmc_calc_sync_write_timings(struct gpmc_timings *gpmc_t,
+				struct gpmc_device_timings *dev_t)
+{
+	bool mux = dev_t->mux;
+	u32 temp;
+
+	/* adv_wr_off */
+	temp = dev_t->t_avdp_w;
+	if (mux) {
+		temp = max_t(u32, temp,
+			gpmc_t->clk_activation * 1000 + dev_t->t_avdh);
+		temp = max_t(u32,
+			(gpmc_t->adv_on + gpmc_ticks_to_ns(1)) * 1000, temp);
+	}
+	gpmc_t->adv_wr_off = gpmc_round_ps_to_ticks(temp) / 1000;
+
+	/* wr_data_mux_bus */
+	temp = max_t(u32, dev_t->t_weasu,
+			gpmc_t->clk_activation * 1000 + dev_t->t_rdyo);
+	/* shouldn't mux be kept as a whole for wr_data_mux_bus ?,
+	 * and in that case remember to handle we_on properly
+	 */
+	if (mux) {
+		temp = max_t(u32, temp,
+			gpmc_t->adv_wr_off * 1000 + dev_t->t_aavdh);
+		temp = max_t(u32, temp, (gpmc_t->adv_wr_off +
+				gpmc_ticks_to_ns(dev_t->cyc_aavdh_we)) * 1000);
+	}
+	gpmc_t->wr_data_mux_bus = gpmc_round_ps_to_ticks(temp) / 1000;
+
+	/* we_on */
+	if (gpmc_capability & GPMC_HAS_WR_DATA_MUX_BUS)
+		gpmc_t->we_on = gpmc_round_ps_to_ticks(dev_t->t_weasu) / 1000;
+	else
+		gpmc_t->we_on = gpmc_t->wr_data_mux_bus;
+
+	/* wr_access */
+	/* gpmc_capability check reqd ? , even if not, will not harm */
+	gpmc_t->wr_access = gpmc_t->access;
+
+	/* we_off */
+	temp = gpmc_t->we_on * 1000 + dev_t->t_wpl;
+	temp = max_t(u32, temp,
+			(gpmc_t->wr_access + gpmc_ticks_to_ns(1)) * 1000);
+	temp = max_t(u32, temp,
+		(gpmc_t->we_on + gpmc_ticks_to_ns(dev_t->cyc_wpl)) * 1000);
+	gpmc_t->we_off = gpmc_round_ps_to_ticks(temp) / 1000;
+
+	gpmc_t->cs_wr_off = gpmc_round_ps_to_ticks(gpmc_t->we_off * 1000 +
+							dev_t->t_wph) / 1000;
+
+	/* wr_cycle */
+	temp = gpmc_round_ps_to_sync_clk(dev_t->t_cez_w, gpmc_t->sync_clk);
+	temp += gpmc_t->wr_access * 1000;
+	/* barter t_ce_rdyz with t_cez_w ? */
+	if (dev_t->t_ce_rdyz)
+		temp = max_t(u32, temp,
+				 gpmc_t->cs_wr_off * 1000 + dev_t->t_ce_rdyz);
+	gpmc_t->wr_cycle = gpmc_round_ps_to_ticks(temp) / 1000;
+
+	return 0;
+}
+
+static int gpmc_calc_async_read_timings(struct gpmc_timings *gpmc_t,
+				struct gpmc_device_timings *dev_t)
+{
+	bool mux = dev_t->mux;
+	u32 temp;
+
+	/* adv_rd_off */
+	temp = dev_t->t_avdp_r;
+	if (mux)
+		temp = max_t(u32,
+			(gpmc_t->adv_on + gpmc_ticks_to_ns(1)) * 1000, temp);
+	gpmc_t->adv_rd_off = gpmc_round_ps_to_ticks(temp) / 1000;
+
+	/* oe_on */
+	temp = dev_t->t_oeasu;
+	if (mux)
+		temp = max_t(u32, temp,
+			gpmc_t->adv_rd_off * 1000 + dev_t->t_aavdh);
+	gpmc_t->oe_on = gpmc_round_ps_to_ticks(temp) / 1000;
+
+	/* access */
+	temp = max_t(u32, dev_t->t_iaa, /* remove t_iaa in async ? */
+				gpmc_t->oe_on * 1000 + dev_t->t_oe);
+	temp = max_t(u32, temp,
+				gpmc_t->cs_on * 1000 + dev_t->t_ce);
+	temp = max_t(u32, temp,
+				gpmc_t->adv_on * 1000 + dev_t->t_aa);
+	gpmc_t->access = gpmc_round_ps_to_ticks(temp) / 1000;
+
+	gpmc_t->oe_off = gpmc_t->access + gpmc_ticks_to_ns(1);
+	gpmc_t->cs_rd_off = gpmc_t->oe_off;
+
+	/* rd_cycle */
+	temp = max_t(u32, dev_t->t_rd_cycle,
+			gpmc_t->cs_rd_off * 1000 + dev_t->t_cez_r);
+	temp = max_t(u32, temp,
+			gpmc_t->oe_off * 1000 + dev_t->t_oez);
+	gpmc_t->rd_cycle = gpmc_round_ps_to_ticks(temp) / 1000;
+
+	return 0;
+}
+
+static int gpmc_calc_async_write_timings(struct gpmc_timings *gpmc_t,
+				struct gpmc_device_timings *dev_t)
+{
+	bool mux = dev_t->mux;
+	u32 temp;
+
+	/* adv_wr_off */
+	temp = dev_t->t_avdp_w;
+	if (mux)
+		temp = max_t(u32,
+			(gpmc_t->adv_on + gpmc_ticks_to_ns(1)) * 1000, temp);
+	gpmc_t->adv_wr_off = gpmc_round_ps_to_ticks(temp) / 1000;
+
+	/* wr_data_mux_bus */
+	temp = dev_t->t_weasu;
+	if (mux) {
+		temp = max_t(u32, temp,
+			gpmc_t->adv_wr_off * 1000 + dev_t->t_aavdh);
+		temp = max_t(u32, temp, (gpmc_t->adv_wr_off +
+				gpmc_ticks_to_ns(dev_t->cyc_aavdh_we)) * 1000);
+	}
+	gpmc_t->wr_data_mux_bus = gpmc_round_ps_to_ticks(temp) / 1000;
+
+	/* we_on */
+	if (gpmc_capability & GPMC_HAS_WR_DATA_MUX_BUS)
+		gpmc_t->we_on = gpmc_round_ps_to_ticks(dev_t->t_weasu) / 1000;
+	else
+		gpmc_t->we_on = gpmc_t->wr_data_mux_bus;
+
+	/* we_off */
+	temp = gpmc_t->we_on * 1000 + dev_t->t_wpl;
+	gpmc_t->we_off = gpmc_round_ps_to_ticks(temp) / 1000;
+
+	gpmc_t->cs_wr_off = gpmc_round_ps_to_ticks((gpmc_t->we_off * 1000 +
+				dev_t->t_wph)) / 1000;
+
+	/* wr_cycle */
+	temp = max_t(u32, dev_t->t_wr_cycle,
+				gpmc_t->cs_wr_off * 1000 + dev_t->t_cez_w);
+	gpmc_t->wr_cycle = gpmc_round_ps_to_ticks(temp) / 1000;
+
+	return 0;
+}
+
+static int gpmc_calc_sync_common_timings(struct gpmc_timings *gpmc_t,
+			struct gpmc_device_timings *dev_t)
+{
+	u32 temp;
+
+	gpmc_t->sync_clk = gpmc_calc_divider(dev_t->clk) *
+						gpmc_get_fclk_period();
+
+	gpmc_t->page_burst_access = gpmc_round_ps_to_sync_clk(
+					dev_t->t_bacc,
+					gpmc_t->sync_clk) / 1000;
+
+	temp = max_t(u32, dev_t->t_ces, dev_t->t_avds);
+	gpmc_t->clk_activation = gpmc_round_ps_to_ticks(temp) / 1000;
+
+	if (gpmc_calc_divider(gpmc_t->sync_clk) != 1)
+		return 0;
+
+	if (dev_t->ce_xdelay)
+		gpmc_t->bool_timings.cs_extra_delay = true;
+	if (dev_t->avd_xdelay)
+		gpmc_t->bool_timings.adv_extra_delay = true;
+	if (dev_t->oe_xdelay)
+		gpmc_t->bool_timings.oe_extra_delay = true;
+	if (dev_t->we_xdelay)
+		gpmc_t->bool_timings.we_extra_delay = true;
+
+	return 0;
+}
+
+static int gpmc_calc_common_timings(struct gpmc_timings *gpmc_t,
+			struct gpmc_device_timings *dev_t)
+{
+	u32 temp;
+
+	/* cs_on */
+	gpmc_t->cs_on = gpmc_round_ns_to_ticks(dev_t->t_ceasu / 1000);
+
+	/* adv_on */
+	temp = dev_t->t_avdasu;
+	if (dev_t->t_ce_avd)
+		temp = max_t(u32, temp,
+				gpmc_t->cs_on * 1000 + dev_t->t_ce_avd);
+	gpmc_t->adv_on = gpmc_round_ns_to_ticks(temp / 1000);
+
+	if (dev_t->sync_write || dev_t->sync_read)
+		gpmc_calc_sync_common_timings(gpmc_t, dev_t);
+
+	return 0;
+}
+
+int gpmc_calc_timings(struct gpmc_timings *gpmc_t,
+			struct gpmc_device_timings *dev_t)
+{
+	memset(gpmc_t, 0, sizeof(*gpmc_t));
+
+	gpmc_calc_common_timings(gpmc_t, dev_t);
+
+	if (dev_t->sync_read)
+		gpmc_calc_sync_read_timings(gpmc_t, dev_t);
+	else
+		gpmc_calc_async_read_timings(gpmc_t, dev_t);
+
+	if (dev_t->sync_write)
+		gpmc_calc_sync_write_timings(gpmc_t, dev_t);
+	else
+		gpmc_calc_async_write_timings(gpmc_t, dev_t);
+
+	return 0;
+}
+
 static int __init gpmc_init(void)
 {
 	u32 l;
diff --git a/arch/arm/plat-omap/include/plat/gpmc.h b/arch/arm/plat-omap/include/plat/gpmc.h
index 1cafbfd..e59a932 100644
--- a/arch/arm/plat-omap/include/plat/gpmc.h
+++ b/arch/arm/plat-omap/include/plat/gpmc.h
@@ -152,6 +152,67 @@  struct gpmc_timings {
 	struct gpmc_bool_timings bool_timings;
 };
 
+/* Device timings in picoseconds */
+struct gpmc_device_timings {
+	u32     t_ceasu;	/* address setup to CS valid */
+	u32     t_avdasu;	/* address setup to ADV valid */
+	/* XXX: try to combine t_avdp_r & t_avdp_w. Issue is
+	 * of tusb using these timings even for sync whilst
+	 * ideally for adv_rd/(wr)_off it should have considered
+	 * t_avdh instead. This indirectly necessitates r/w
+	 * variations of t_avdp as it is possible to have one
+	 * sync & other async
+	 */
+	u32	t_avdp_r;	/* ADV low time (what about t_cer ?) */
+	u32	t_avdp_w;
+	u32	t_aavdh;	/* address hold time */
+	u32     t_oeasu;	/* address setup to OE valid */
+	u32     t_aa;		/* access time from ADV assertion */
+	u32     t_iaa;		/* initial access time */
+	u32     t_oe;		/* access time from OE assertion */
+	u32     t_ce;		/* access time from CS assertion */
+	u32     t_rd_cycle;	/* read cycle time */
+	u32     t_cez_r;	/* read CS deassertion to high Z */
+	u32     t_cez_w;	/* write CS deassertion to high Z */
+	u32     t_oez;		/* OE deassertion to high Z */
+	u32     t_weasu;	/* address setup to WE valid */
+	u32     t_wpl;		/* write assertion time */
+	u32     t_wph;		/* write deassertion time */
+	u32     t_wr_cycle;	/* write cycle time */
+
+	u32	clk;
+	u32	t_bacc;		/* burst access valid clock to output delay */
+	u32	t_ces;		/* CS setup time to clk */
+	u32	t_avds;		/* ADV setup time to clk */
+	u32	t_avdh;		/* ADV hold time from clk */
+	u32     t_ach;		/* address hold time from clk */
+	u32	t_rdyo;		/* clk to ready valid */
+
+	u32	t_ce_rdyz;	/* XXX: description ?, or use t_cez instead */
+	u32	t_ce_avd;	/* CS on to ADV on delay */
+
+	/* XXX: check the possibility of combining
+	 * cyc_aavdh_oe & cyc_aavdh_we
+	 */
+	u8	cyc_aavdh_oe;
+	u8	cyc_aavdh_we;
+	u8	cyc_oe;
+	u8	cyc_wpl;
+	u32     cyc_iaa;
+
+	bool	mux;		/* address & data muxed */
+	bool	sync_write;	/* synchronous write */
+	bool	sync_read;	/* synchronous read */
+
+	bool	ce_xdelay;
+	bool	avd_xdelay;
+	bool	oe_xdelay;
+	bool	we_xdelay;
+};
+
+extern int gpmc_calc_timings(struct gpmc_timings *gpmc_t,
+				struct gpmc_device_timings *dev_t);
+
 struct gpmc_nand_regs {
 	void __iomem	*gpmc_status;
 	void __iomem	*gpmc_nand_command;