Yes, it's not trivial to work out how much filesystem I/O torrenting actually causes.

If you were downloading a 1 GB file over HTTP or FTP, each batch of blocks would extend the file, with new data blocks allocated and the inode updated each time. Overall, though, there wouldn't be too many "hot spots" in the filesystem, since the writes are sequential.
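For comparison, here's a minimal sketch of that sequential case in Python (the URL and filename are made up for illustration) - every write lands at the end of the file, so the filesystem keeps extending it:

```python
import urllib.request

URL = "http://example.com/big-file.iso"  # hypothetical source
CHUNK = 64 * 1024                        # read 64 KiB at a time

with urllib.request.urlopen(URL) as resp, open("big-file.iso", "wb") as out:
    while True:
        block = resp.read(CHUNK)
        if not block:
            break
        # Each write appends to the end of the file: the filesystem
        # allocates a few more data blocks and updates the inode's
        # size/mtime on every batch.
        out.write(block)
```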

Most torrent clients I've seen preallocate a dummy file as big as the one being downloaded, so each chunk can be written at its final offset as it arrives. That's probably BETTER than simply appending to a file, since the filesystem can allocate the whole chain of blocks in one go. However, depending on the client's design, the piece metadata might thrash the filesystem unless it's kept purely in RAM, and for very large files that state information will probably eat a lot of RAM... my guess.
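A minimal sketch of that preallocate-then-write-at-offset pattern, assuming Linux (posix_fallocate is Unix-only) and a made-up filename and piece size:

```python
import os

FILE = "download.bin"      # hypothetical target file
TOTAL_SIZE = 1 * 1024**3   # 1 GiB, as in the example above
PIECE = 256 * 1024         # a plausible torrent piece size

# Preallocate the whole file up front, the way many torrent clients do.
# posix_fallocate() asks the filesystem to map all the blocks in one go,
# so later piece writes never need to extend the file.
with open(FILE, "wb") as f:
    os.posix_fallocate(f.fileno(), 0, TOTAL_SIZE)

def write_piece(index: int, data: bytes) -> None:
    """Write one piece at its final offset, in whatever order it arrives."""
    with open(FILE, "r+b") as f:
        f.seek(index * PIECE)
        f.write(data)

# Pieces can land out of order without the file ever growing:
write_piece(3, b"\xff" * PIECE)
write_piece(0, b"\x00" * PIECE)
```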

So we're getting a bit technical. I'd like to know the answer, but maybe this is now the wrong forum!