Originally Posted by fanoush
So for such playback you must send only one rectangle = whole video frame (and then you are limited by bandwidth).
No, you just transfer the envelope, i.e. the smallest rectangle that contains all the changed ones. In other words, if your game uses a 640x480 window, you only transfer that window.
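
To make the envelope idea concrete, here is a rough C sketch (the names are mine, nothing from the actual drivers): collapse all the dirty rectangles of a frame into the one bounding rectangle you would transfer:

Code:
#include <stddef.h>

struct rect {
    int x0, y0;   /* top-left corner (inclusive) */
    int x1, y1;   /* bottom-right corner (exclusive) */
};

/*
 * Merge a list of dirty rectangles into their envelope: the smallest
 * single rectangle that contains all of them.  Returns 0 if the list
 * is empty (nothing to transfer).
 */
static int dirty_envelope(const struct rect *dirty, size_t n, struct rect *out)
{
    size_t i;

    if (n == 0)
        return 0;

    *out = dirty[0];
    for (i = 1; i < n; i++) {
        if (dirty[i].x0 < out->x0) out->x0 = dirty[i].x0;
        if (dirty[i].y0 < out->y0) out->y0 = dirty[i].y0;
        if (dirty[i].x1 > out->x1) out->x1 = dirty[i].x1;
        if (dirty[i].y1 > out->y1) out->y1 = dirty[i].y1;
    }
    return 1;
}

If the envelope grows close to full-screen size you are back at the bandwidth limit fanoush describes, but for a windowed game it stays the size of the window.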

Also, the overhead of starting and stopping the transfer may be bigger than the cost of sending one bigger rectangle.
From what I have seen in other ARM-based architectures, it is as simple as writing a few DMA controller registers (start and end addresses plus stride). For it to be more complicated than that, Epson must have really screwed things up.
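
Something like the following, where the register layout is completely made up for illustration (no real OMAP or Epson part looks exactly like this):

Code:
#include <stdint.h>

/*
 * Hypothetical memory-mapped DMA controller, just to show how little
 * work kicking off a transfer typically is.  The register names and
 * offsets are invented for this example; they do not match any real
 * datasheet.
 */
struct dma_regs {
    volatile uint32_t src_start;   /* first source address            */
    volatile uint32_t src_end;     /* last source address             */
    volatile uint32_t stride;      /* bytes to skip between scanlines */
    volatile uint32_t ctrl;        /* bit 0: start transfer           */
};

#define DMA_CTRL_START  (1u << 0)

static void dma_send_rect(struct dma_regs *dma,
                          uint32_t fb_base, uint32_t fb_pitch,
                          int x0, int y0, int x1, int y1, int bpp)
{
    uint32_t first = fb_base + (uint32_t)y0 * fb_pitch + (uint32_t)(x0 * bpp);
    uint32_t last  = fb_base + (uint32_t)(y1 - 1) * fb_pitch + (uint32_t)(x1 * bpp);

    dma->src_start = first;
    dma->src_end   = last;
    dma->stride    = fb_pitch - (uint32_t)((x1 - x0) * bpp);
    dma->ctrl      = DMA_CTRL_START;
}

Four register writes per rectangle; if the hardware needs much more ceremony than that, something went wrong in its design.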

And BTW, OMAP is a system on chip; it is not clear that every part (DSP, MPU, IVA, 3D accelerator - each being a separate CPU with its own caches, private SRAM, even private MMU units ...) can directly access every other part.
No, no, a SoC usually is not that disjoint. If you have a 3D engine and a video buffer on your chip, there is a 99% probability that the 3D engine renders into that buffer. Adding another 600 kB video buffer to the chip would be prohibitively expensive.

Yes, and such a step causes delay, and you must stop drawing until the frame is transferred.
You don't need to stop drawing; you just need to know where your current DMA pointer is and not overwrite that spot.
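
Roughly like this in C, where dma_busy() and dma_current_line() are placeholders for whatever progress register the controller actually exposes:

Code:
/*
 * Sketch of drawing behind an in-flight DMA transfer.  Scanlines the
 * DMA engine has already read out are safe to overwrite; the renderer
 * only waits if it catches up with the transfer.
 */
extern int dma_busy(void);                  /* hypothetical */
extern int dma_current_line(void);          /* hypothetical */
extern void draw_scanline(int line);        /* render one scanline */

static void draw_frame(int height)
{
    int line;

    for (line = 0; line < height; line++) {
        /* Busy-wait only while the DMA read pointer is still at or
         * below the line we want to redraw. */
        while (dma_busy() && dma_current_line() <= line)
            ;
        draw_scanline(line);
    }
}

The only time you actually stall is when the renderer overtakes the transfer, which for a small envelope is rare.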

Anyway, as for complexity, feel free to study the omapfb, rfbi, dispc and blizzard drivers (and lcd_mipid.c, but that one does not add any complexity) in the Linux sources (in drivers/video/omap/), each handling a different part of the hardware puzzle.
Thanks, now I know where to look.

I don't understand it completely, but seeing the code of those drivers in the kernel is good (or bad) enough for me to feel pity for anyone who must touch that code and may be tasked with throwing 3D acceleration into the mix :-)
I have seen worse code.