Speed up FindObsoleteFiles

Summary:
Here's one solution we discussed for speeding up FindObsoleteFiles: keep a set of all files in DBImpl and update the set every time we create a file. I probably missed a few other spots where we create a file.

It might speed things up a bit, but it makes the code uglier. I don't really like it.
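
For reference, here is a minimal sketch of what tracking the set could look like. It is illustrative only: the class and member names (FileTracker, known_files_, RecordFileCreation) are hypothetical rather than actual DBImpl members, and the real code would use DBImpl's own mutex and get the live-file set from VersionSet instead of taking it as a parameter.

#include <cstdint>
#include <mutex>
#include <set>
#include <vector>

// Illustrative sketch only -- not an existing RocksDB class.
class FileTracker {
 public:
  // Call from every code path that creates a file (SST, WAL, manifest, ...).
  void RecordFileCreation(uint64_t file_number) {
    std::lock_guard<std::mutex> l(mu_);
    known_files_.insert(file_number);
  }

  // Instead of listing the DB directory, diff the tracked set against the
  // set of files still referenced by the current versions.
  std::vector<uint64_t> FindObsoleteFiles(const std::set<uint64_t>& live) {
    std::lock_guard<std::mutex> l(mu_);
    std::vector<uint64_t> obsolete;
    for (std::set<uint64_t>::const_iterator it = known_files_.begin();
         it != known_files_.end(); ++it) {
      if (live.count(*it) == 0) {
        obsolete.push_back(*it);
      }
    }
    return obsolete;
  }

 private:
  std::mutex mu_;
  std::set<uint64_t> known_files_;  // every file number we have created
};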

A much better approach would be to abstract all file handling into a separate class. Think of it as a layer between DBImpl and Env. Having a separate class deal with file naming and deletion would benefit code cleanliness (especially with the huge DBImpl) and speed things up. It would take a huge effort to do this, though. A rough sketch of such a layer follows.
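
To make the idea concrete, a hypothetical interface for that layer might look roughly like this. Nothing below exists in RocksDB: FileManager and all of its methods are made-up names, and the sketch assumes the public Env, EnvOptions, Status, and WritableFile types from rocksdb/env.h and rocksdb/status.h.

#include <cstdint>
#include <memory>
#include <set>
#include <string>

#include "rocksdb/env.h"
#include "rocksdb/status.h"

// Hypothetical layer between DBImpl and Env; names are illustrative.
class FileManager {
 public:
  FileManager(rocksdb::Env* env, const std::string& dbname)
      : env_(env), dbname_(dbname) {}

  // Centralized naming: DBImpl asks for paths instead of composing them.
  std::string TableFileName(uint64_t number) const {
    return dbname_ + "/" + std::to_string(number) + ".sst";
  }

  // Every file creation goes through the manager, so it always knows the
  // full set of files it has handed out.
  rocksdb::Status NewTableFile(uint64_t number,
                               std::unique_ptr<rocksdb::WritableFile>* file) {
    known_files_.insert(number);
    return env_->NewWritableFile(TableFileName(number), file,
                                 rocksdb::EnvOptions());
  }

  // Deletion is also centralized: anything the manager created that is no
  // longer referenced by a live version can be removed without scanning
  // the DB directory.
  void DeleteObsoleteFiles(const std::set<uint64_t>& live) {
    for (std::set<uint64_t>::iterator it = known_files_.begin();
         it != known_files_.end();) {
      if (live.count(*it) == 0) {
        env_->DeleteFile(TableFileName(*it));
        known_files_.erase(it++);
      } else {
        ++it;
      }
    }
  }

 private:
  rocksdb::Env* env_;
  std::string dbname_;
  std::set<uint64_t> known_files_;
};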

Let's discuss offline today.

Test Plan: Ran ./db_stress, verified that files are getting deleted

Reviewers: dhruba, haobo, kailiu, emayanke

Reviewed By: dhruba

Differential Revision: https://reviews.facebook.net/D13827
Author: Igor Canadi
Date: 2013-11-08 15:23:46 -08:00
Parent: dd218bbc88
Commit: 1510339e52
8 changed files with 169 additions and 124 deletions


@@ -387,8 +387,7 @@ struct Options {
   bool disable_seek_compaction;
   // The periodicity when obsolete files get deleted. The default
-  // value is 0 which means that obsolete files get removed after
-  // every compaction run.
+  // value is 6 hours.
   uint64_t delete_obsolete_files_period_micros;
   // Maximum number of concurrent background jobs, submitted to