Hi Martin,

While going through the database code and schema to produce the migration script, I think I've found a few points where the code and schema could be more efficient. When retrieving tiles, you pull epoch_timestamp from the database every time. Except when you're checking whether you need (or want) to download an updated tile, that column is never used, so fetching it just wastes cycles.
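A minimal sketch of what I mean, assuming hypothetical table and column names (the real schema may differ): fetch only the blob on the hot path, and ask for the timestamp in a separate query used only by the update check.

```python
import sqlite3

# Hypothetical schema names; the real tables/columns may differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE store (x INT, y INT, z INT, tile BLOB, epoch_timestamp INT)")
conn.execute("INSERT INTO store VALUES (1, 2, 3, X'00', 1700000000)")

def get_tile(conn, x, y, z):
    # Hot path: fetch only the tile blob, not the timestamp.
    row = conn.execute(
        "SELECT tile FROM store WHERE x=? AND y=? AND z=?", (x, y, z)
    ).fetchone()
    return row[0] if row else None

def get_tile_age(conn, x, y, z):
    # Only ask for epoch_timestamp when deciding whether to re-download.
    row = conn.execute(
        "SELECT epoch_timestamp FROM store WHERE x=? AND y=? AND z=?", (x, y, z)
    ).fetchone()
    return row[0] if row else None

print(get_tile(conn, 1, 2, 3))      # tile blob only
print(get_tile_age(conn, 1, 2, 3))  # timestamp, fetched only when needed
```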
After retrieving details from the lookup table, it seems inefficient to find the right tile in the store table by x,y,z again when the lookup has already done that work. It would be more efficient to give the store table a single primary key and reference it from the lookup table entry.
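Something like this, again with hypothetical names: the lookup row carries a store_id pointing at the store table's INTEGER PRIMARY KEY, so the blob fetch becomes a single rowid seek instead of a second x,y,z search.

```python
import sqlite3

# Hypothetical sketch of the single-primary-key idea.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE store  (id INTEGER PRIMARY KEY, tile BLOB);
CREATE TABLE lookup (x INT, y INT, z INT, store_id INTEGER REFERENCES store(id));
""")
store_id = conn.execute("INSERT INTO store (tile) VALUES (X'FF')").lastrowid
conn.execute("INSERT INTO lookup VALUES (1, 2, 3, ?)", (store_id,))

def get_tile(conn, x, y, z):
    row = conn.execute(
        "SELECT store_id FROM lookup WHERE x=? AND y=? AND z=?", (x, y, z)
    ).fetchone()
    if row is None:
        return None
    # Single-column primary-key fetch: no second x,y,z search in store.
    return conn.execute("SELECT tile FROM store WHERE id=?", (row[0],)).fetchone()[0]

print(get_tile(conn, 1, 2, 3))
```

In SQLite an INTEGER PRIMARY KEY is an alias for the rowid, so that second query is about as cheap as a fetch can get.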
Moving on logically from the previous idea, the lookup table itself could probably be made more efficient. At the moment a lookup has to match three columns (x, y, z) to find the right tile. It might be cheaper to use a single index on this table, either a value that combines the x, y, z parameters or a hash based on them (neither option is simple, but either would probably speed things up a worthwhile amount).
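One possible combined-key scheme, purely as a sketch: pack z, x and y into a single 64-bit integer. At zoom z the x and y tile indices each fit in z bits, so 29 bits per coordinate plus a few bits of zoom fit comfortably in SQLite's 64-bit INTEGER, and the lookup becomes one integer comparison.

```python
# Hypothetical packing: 5 bits of zoom, 29 bits each for x and y.
COORD_BITS = 29
COORD_MASK = (1 << COORD_BITS) - 1

def pack_key(x, y, z):
    # Single integer key combining the three tile coordinates.
    return (z << (2 * COORD_BITS)) | (x << COORD_BITS) | y

def unpack_key(key):
    # Recover (x, y, z) from the packed key.
    return ((key >> COORD_BITS) & COORD_MASK,
            key & COORD_MASK,
            key >> (2 * COORD_BITS))

key = pack_key(17, 42, 6)
print(unpack_key(key))  # (17, 42, 6)
```

A hash would work too, but packing is reversible and collision-free, which keeps the lookup table simpler.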
This downloading is a serious problem. I'm trying to download a 40 km area at just the lowest zoom level that OSM will go to. That's around 350k tiles, but the download keeps crashing at about 15k tiles, which I already have.
How to get around it?
I tried switching it to just save the tiles to disk, so I could import them into the SQL db later, but I hit the same thing: it just bombs out. Any ideas?
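One workaround I'm considering, sketched below with placeholder names (fetch_tile stands in for the real HTTP download, and the table layout is assumed): commit in small batches and skip tiles already in the database, so a crash 15k tiles in can be resumed instead of starting the whole 350k over.

```python
import sqlite3

def fetch_tile(x, y, z):
    # Placeholder for the real tile download.
    return bytes([z])

def download_area(conn, coords, batch=500):
    done = 0
    for x, y, z in coords:
        exists = conn.execute(
            "SELECT 1 FROM store WHERE x=? AND y=? AND z=?", (x, y, z)
        ).fetchone()
        if exists:
            continue  # resume support: don't re-fetch what we already have
        conn.execute("INSERT INTO store VALUES (?,?,?,?)",
                     (x, y, z, fetch_tile(x, y, z)))
        done += 1
        if done % batch == 0:
            conn.commit()  # bound journal/memory growth between commits
    conn.commit()
    return done

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE store (x INT, y INT, z INT, tile BLOB)")
coords = [(x, y, 12) for x in range(10) for y in range(10)]
print(download_area(conn, coords))  # downloads everything the first time
print(download_area(conn, coords))  # second run skips all existing tiles
```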