I came across this article on Design And Reuse: http://www.design-reuse.com/articles/25029/memory-subsystem-model-noc-performance.html. In my experience as a design engineer working with memory applications, I have for quite some time been intrigued by how a DDR2 or SDRAM interface can be evaluated from a performance point of view. This month’s edition of Design And Reuse features something interesting about performance evaluation of DRAMs for Network on Chip applications. DDR controller experts might find the content trivial, but it is something I could understand easily, and it is articulated very well.
Engineers who work on, or want to work on, DRAMs will really enjoy reading this one. Apart from stressing the parameters that matter in a DDR memory interface design, the article gives you insight into why memory interfaces are tricky for bandwidth-pressed designs. It uses a Network on Chip (NoC) design as its example to show how critical DDR memories are to applications where large amounts of data buffering cannot be avoided and random access is an absolute necessity.
Quick points I take away from the article:
- Skip across banks rather than rows for efficient latency handling.
- Keep burst lengths moderate, neither too short nor too long (4 to 8).
- The efficiency of a realistic DDR controller ranges from around 50% to 80% of the theoretical bandwidth. So what does that mean? Run your controller at roughly 1.5x the frequency of your incoming data stream.
- Do not use a dumb slave model to quote performance numbers; treat its results only as a best-case figure you will long to meet.
- DDR controllers are tricky to implement, and a careless "design" can sink your project, because it is difficult to analyze the exact behavior of the slaves.
- Have a scheduler module in front of the controller if possible, for not all real applications produce data streams that are well partitioned (arranged according to banks).
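The 1.5x figure in the efficiency point above falls out of simple arithmetic: to sustain a stream at full rate through a controller that delivers only a fraction of its nominal bandwidth, you must overclock by the reciprocal of that efficiency. A quick back-of-envelope sketch in Python (the 50–80% range is from the article; everything else is just arithmetic):

```python
# Back-of-envelope check of the "run at 1.5x" rule of thumb.
# Only input taken from the article: controller efficiency of 50%-80%.

def required_controller_speedup(efficiency: float) -> float:
    """Speedup over the stream rate needed so that
    effective bandwidth >= stream bandwidth."""
    return 1.0 / efficiency

for eff in (0.50, 0.65, 0.80):
    speedup = required_controller_speedup(eff)
    print(f"efficiency {eff:.0%} -> run controller at {speedup:.2f}x stream rate")
```

At 50% efficiency you would need 2x, at 80% only 1.25x; 1.5x corresponds to the middle of the quoted range, which is why it works as a single rule of thumb.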
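The scheduler point can be illustrated with a minimal reordering queue that steers consecutive issues to different banks, which is what makes the "skip banks rather than rows" advice pay off. This is my own sketch, not the article's design: the `Request` class, the 8-bank count, and the bank-bit position (bits 10–12 of the address) are all illustrative assumptions.

```python
from collections import deque
from dataclasses import dataclass

NUM_BANKS = 8  # assumption: a typical DDR2 device has 8 banks


@dataclass
class Request:
    addr: int

    @property
    def bank(self) -> int:
        # Assumption: bank bits sit just above the column bits (bit 10).
        # Real address maps are controller-specific.
        return (self.addr >> 10) & (NUM_BANKS - 1)


class BankScheduler:
    """Reorder incoming requests so back-to-back issues hit different
    banks, hiding activate/precharge latency behind other banks' work."""

    def __init__(self):
        self.queues = [deque() for _ in range(NUM_BANKS)]
        self.last_bank = None

    def push(self, req):
        self.queues[req.bank].append(req)

    def pop(self):
        # Prefer any non-empty bank other than the one issued last;
        # fall back to the same bank only when nothing else is pending.
        candidates = [b for b in range(NUM_BANKS)
                      if self.queues[b] and b != self.last_bank]
        if not candidates:
            candidates = [b for b in range(NUM_BANKS) if self.queues[b]]
        if not candidates:
            return None
        bank = candidates[0]
        self.last_bank = bank
        return self.queues[bank].popleft()


# Usage: two requests to bank 0 and one to bank 1 come in clumped
# together; the scheduler interleaves them across banks.
sched = BankScheduler()
for addr in (0, 1, 1024):          # banks 0, 0, 1
    sched.push(Request(addr))
issued = []
while (req := sched.pop()) is not None:
    issued.append(req.bank)
print(issued)  # banks alternate: 0, 1, 0
```

A real scheduler would also weigh row-hit locality, read/write turnaround, and request age, but even this toy version shows why putting reordering in front of the controller helps when the incoming stream is not already bank-partitioned.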