2013-02-21, 12:45 | #52
norayr | Posts: 148 | Thanked: 216 times | Joined Jul 2010 @ Yerevan
I wasn't talking about SoCs, but rather about normal designs with a typical MCU, a bus, and maybe 8-9 memory parts, etc.
What does the FTL do when there are not enough blocks left to cover the "virtual size" of the block device? There is no solution, since a block device by definition cannot shrink.
Soon we'll all be using MLC NAND, which rarely ships from the factory without bad blocks.
With an FTL it's very difficult to predict what will happen during brownouts and the like; that fact alone makes it unsuitable for embedded designs in general.
In my experience, the speedups I've seen when using UBIFS instead of an FTL have been huge.
Here's a good explanation of the pros and cons of FTL.
Using raw NAND with a suitable filesystem lets you use the flash memory the way you'd normally use a hard drive, without worrying about wearing it out prematurely. You can switch on things like logging, etc. And since it's transparent, you can see whether your bad-block list is growing at an alarming rate (with an FTL that information is hidden, or only reachable through a proprietary interface). A small sketch of what that transparency looks like in practice follows below.
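To illustrate, here is a minimal sketch of scanning a raw NAND partition for factory- or runtime-marked bad blocks, assuming a Linux MTD character device (the device path /dev/mtd0 is just an example) and using the standard MEMGETINFO and MEMGETBADBLOCK ioctls from mtd-user.h. It's not taken from any particular project, just a demonstration of the kind of check an FTL would hide from you:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <mtd/mtd-user.h>

int main(int argc, char **argv)
{
    /* Example device path; pass your own MTD partition as argv[1]. */
    const char *dev = (argc > 1) ? argv[1] : "/dev/mtd0";

    int fd = open(dev, O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Ask the MTD layer for partition size and erase-block size. */
    struct mtd_info_user info;
    if (ioctl(fd, MEMGETINFO, &info) < 0) {
        perror("MEMGETINFO");
        close(fd);
        return 1;
    }

    unsigned int total = info.size / info.erasesize;
    unsigned int bad = 0;

    /* Walk every erase block and ask whether it is marked bad.
     * MEMGETBADBLOCK returns 1 for a bad block, 0 for a good one. */
    for (unsigned int i = 0; i < total; i++) {
        loff_t offset = (loff_t)i * info.erasesize;
        int ret = ioctl(fd, MEMGETBADBLOCK, &offset);
        if (ret < 0) {
            perror("MEMGETBADBLOCK");
            break;
        }
        if (ret == 1) {
            bad++;
            printf("bad block at 0x%llx\n", (unsigned long long)offset);
        }
    }

    printf("%u of %u erase blocks marked bad\n", bad, total);
    close(fd);
    return 0;
}

Run it periodically (or at boot) and you can watch the bad-block count over time; a sudden jump is exactly the kind of early warning an FTL's opaque remapping would swallow.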
An FTL chip will also use at least as much power as the added CPU time and RAM would have required, so there's no difference there.
IMO FTL is just a temporary solution for using flash with old filesystems and operating systems; we don't really need it anymore and should get rid of it.