Originally Posted by nidO:
Am I right in thinking this isn't terribly stable at the moment? I left this running on two VMs overnight and each processed a few chunks fine, then both downloaded new copies of the 11.3GB dump, and since then they're throwing out various mediawiki/database errors every time they're given a new job to process.
Thank you for setting up the VMs. I can see that your clients are uploading zero-length archives, and I received your error logs as well. If only the user database is damaged, the next automatic update should fix the issue. If important databases (i.e. databases with content) are damaged, remove the files "wikilang", "wikidate" and "commonsdate" from the state subdirectory to force the client to fetch fresh database files.
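In case it helps, here is a minimal sketch of that cleanup step in Python; the location of the state subdirectory relative to the client's working directory is an assumption, so adjust the path to your setup:

    #!/usr/bin/env python3
    # Force the client to re-fetch its database dumps by removing the
    # marker files from the state subdirectory.
    import os

    STATE_DIR = "state"  # assumption: state subdirectory in the client's working directory

    for name in ("wikilang", "wikidate", "commonsdate"):
        path = os.path.join(STATE_DIR, name)
        try:
            os.remove(path)
            print("removed", path)
        except FileNotFoundError:
            print("already absent:", path)

On the next run the client should notice the missing markers and download fresh copies of the databases.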
If you want to check that everything is OK, you can point your Apache at the mediawiki directory and open it in the browser.
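For example, an alias along these lines should do; the install path is an assumption, and the access directive depends on your Apache version:

    # Assumption: the client's mediawiki directory lives at /opt/wikiclient/mediawiki
    Alias /mediawiki /opt/wikiclient/mediawiki
    <Directory /opt/wikiclient/mediawiki>
        # Apache 2.4+; on 2.2 use "Order allow,deny" / "Allow from all" instead
        Require all granted
    </Directory>

After reloading Apache, open http://localhost/mediawiki/ and check that pages render without database errors.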