Originally Posted by beermad
Hi Martin,

From looking through the database code and structure to produce the migration script, I think I've found a few points where the code and schema could be more efficient.
  • When you're retrieving tiles, you're pulling the epoch_timestamp from the databases. Except when you're checking whether you need (or want) to download an updated tile, this is unnecessary, so it just wastes CPU cycles.
Good catch.
Originally Posted by beermad
  • After retrieving details from the lookup table, it seems inefficient to find the right tile in the store table using x,y,z when you've already done that on the lookup table. It would be more efficient to have a single primary index in the store table, referenced in the lookup table entry.
Well, I'm not really a database architect - I basically just thought about it for quite a long time to come up with a universal schema and asked a friend who works with databases a bit. This way the lookup and store databases are independent - so if the lookup one gets corrupted, it should be possible to regenerate it just from the stores.
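To illustrate the regeneration idea - a minimal sketch, assuming the lookup and store files are both SQLite databases with x, y, z columns (the table and column names here are placeholders, not necessarily the exact modRana schema):

```python
import sqlite3

def rebuild_lookup(lookup_path, store_paths):
    # Recreate the lookup database purely from the data in the store databases.
    lookup = sqlite3.connect(lookup_path)
    lookup.execute(
        "CREATE TABLE IF NOT EXISTS tiles (x INTEGER, y INTEGER, z INTEGER, store_filename TEXT)"
    )
    for store_path in store_paths:
        store = sqlite3.connect(store_path)
        # Copy just the coordinates plus which store file holds the tile data.
        for x, y, z in store.execute("SELECT x, y, z FROM tiles"):
            lookup.execute(
                "INSERT INTO tiles (x, y, z, store_filename) VALUES (?, ?, ?, ?)",
                (x, y, z, store_path),
            )
        store.close()
    lookup.commit()
    lookup.close()
```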
Originally Posted by beermad
  • Moving on logically from the previous idea, it might be possible to make the lookup table more efficient. At the moment you need to work through three indexes (x,y,z) to find the right tile. It might be more processor efficient to have a single index on this table which is either a combination of the x,y,z parameters or a hash based on them (neither option is simple, but would probably speed things up a worthwhile amount).
Yeah - say we have x=1, y=2, z=17 -> 1217, but also x=1, y=21, z=7 -> 1217, so plain concatenation is ambiguous...
A combined key (or hash) with separators might work though: 1,2,17 vs 1,21,7 - would something like this be usable?
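Roughly what I mean, as a quick sketch (hypothetical helper functions, not actual modRana code):

```python
def naive_key(x, y, z):
    # Just gluing the numbers together - different tiles can collide.
    return "%d%d%d" % (x, y, z)

def separated_key(x, y, z):
    # Separators keep the coordinates unambiguous, so the key is unique per tile.
    return "%d,%d,%d" % (x, y, z)

assert naive_key(1, 2, 17) == naive_key(1, 21, 7)          # both "1217" - collision
assert separated_key(1, 2, 17) != separated_key(1, 21, 7)  # "1,2,17" vs "1,21,7"
```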

Also, would it be possible to maintain backward compatibility by adding these new indexes while still storing the old info? Converting all the existing database files users might already have would be quite a headache, and some developers might already be working on supporting the format in its current form (IIRC the CloudGPS developer, maybe also some others).

There is a version field in the schema, so it would be possible to do something like this:
  • 1 = current version
  • 2 = old info + new indexes
  • 3 = just new indexes
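
A minimal sketch of how such a version check could branch - assuming the version is kept in a one-row version table (table and column names here are hypothetical, so adjust to whatever the real schema uses):

```python
import sqlite3

def get_schema_version(db_path):
    # Read the schema version from the (hypothetical) one-row version table.
    with sqlite3.connect(db_path) as conn:
        row = conn.execute("SELECT v FROM version").fetchone()
        return row[0] if row else None

def open_lookup(db_path):
    version = get_schema_version(db_path)
    if version == 1:
        pass  # current format: find tiles by the x, y, z columns
    elif version == 2:
        pass  # transitional format: old x, y, z columns plus the new single index
    elif version == 3:
        pass  # new format: just the new single index
    else:
        raise ValueError("unknown tile store schema version: %r" % version)
```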


Originally Posted by 白い熊
This downloading is a serious problem. I'm trying to download a 40km area just at the lowest level of zoom that OSM will go to, that's around 350k tiles, but it keeps crashing at 15k, which I already have.
I remember getting this some time ago. I'll run a few large downloads to check if I can still reproduce it.
Still, this seems more like a bug in GLib that is just being triggered by modRana. There is even a post mentioning similar behaviour (GLib and long lists).

Originally Posted by 白い熊
How to get around it?
First, try the new version that was just released - some unrelated changes might have fixed it.
If this does not help, you can try some other batch download software, like GMapCatcher, and then import the tiles with the SQLite import script.
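In case it helps, a rough sketch of what such an import could look like - assuming the tiles are saved on disk as zoom/x/y.png and a simple store table with x, y, z and a tile blob (the actual import script and schema may differ, so treat this only as an outline):

```python
import os
import sqlite3

def import_tiles(tile_dir, db_path):
    # Walk a zoom/x/y.png directory tree and insert each tile into a SQLite store.
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS tiles (z INTEGER, x INTEGER, y INTEGER, tile BLOB)"
    )
    for root, _dirs, files in os.walk(tile_dir):
        for name in files:
            if not name.endswith(".png"):
                continue
            path = os.path.join(root, name)
            # Expecting .../<zoom>/<x>/<y>.png
            z, x = path.split(os.sep)[-3:-1]
            y = name[:-len(".png")]
            with open(path, "rb") as f:
                conn.execute(
                    "INSERT INTO tiles (z, x, y, tile) VALUES (?, ?, ?, ?)",
                    (int(z), int(x), int(y), sqlite3.Binary(f.read())),
                )
    conn.commit()
    conn.close()
```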

Originally Posted by 白い熊
Tried switching it to just save the tiles, so that I'd later import them into the sql db, but the same thing, just bombs, WTF?
Interesting - it looks like it is independent of the storage method. So the long lists + threaded access to them might really be the issue.
__________________
modRana: a flexible GPS navigation system
Mieru: a flexible manga and comic book reader
Universal Components - a solution for native-looking yet component-set-independent QML applications (QtQuick Controls 2 & Silica supported as backends)