Speed up FindObsoleteFiles
Summary: Here's one solution we discussed for speeding up FindObsoleteFiles: keep a set of all files in DBImpl and update the set every time we create a file. I probably missed a few other spots where we create a file. It might speed things up a bit, but it makes the code uglier, and I don't really like it. A much better approach would be to abstract all file handling into a separate class; think of it as a layer between DBImpl and Env. Having a separate class deal with file naming and deletion would benefit both code cleanliness (especially with the huge DBImpl) and speed. It will take a huge effort to do this, though. Let's discuss offline today.

Test Plan: Ran ./db_stress and verified that files are getting deleted.

Reviewers: dhruba, haobo, kailiu, emayanke

Reviewed By: dhruba

Differential Revision: https://reviews.facebook.net/D13827
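The approach described above (keep a set of live files in DBImpl, update it on every file creation, and let FindObsoleteFiles diff the on-disk listing against that set instead of rederiving the live set each time) can be sketched roughly as follows. This is a minimal illustration, not the actual DBImpl code; the names LiveFileTracker, OnFileCreated, OnFileDeleted, and FindObsolete are hypothetical.

// Minimal sketch (not the real DBImpl): track every file the DB creates so
// that obsolete-file scans can diff against this set instead of recomputing
// the live set from scratch on every pass.
#include <mutex>
#include <set>
#include <string>
#include <vector>

class LiveFileTracker {  // hypothetical helper standing in for DBImpl state
 public:
  // Call wherever a new SST/log/manifest file is created.
  void OnFileCreated(const std::string& fname) {
    std::lock_guard<std::mutex> l(mu_);
    live_.insert(fname);
  }

  // Call when a file is removed or otherwise stops being live.
  void OnFileDeleted(const std::string& fname) {
    std::lock_guard<std::mutex> l(mu_);
    live_.erase(fname);
  }

  // FindObsoleteFiles-style check: anything on disk that is not in the
  // live set is a candidate for deletion.
  std::vector<std::string> FindObsolete(
      const std::vector<std::string>& on_disk) const {
    std::lock_guard<std::mutex> l(mu_);
    std::vector<std::string> obsolete;
    for (const auto& f : on_disk) {
      if (live_.count(f) == 0) {
        obsolete.push_back(f);
      }
    }
    return obsolete;
  }

 private:
  mutable std::mutex mu_;
  std::set<std::string> live_;
};

As the summary notes, the hard part is guaranteeing that every code path that creates a file updates the set; pushing all file naming and deletion into one layer between DBImpl and Env would make that invariant far easier to keep.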
@@ -387,8 +387,7 @@ struct Options {
   bool disable_seek_compaction;
 
   // The periodicity when obsolete files get deleted. The default
-  // value is 0 which means that obsolete files get removed after
-  // every compaction run.
+  // value is 6 hours.
   uint64_t delete_obsolete_files_period_micros;
 
   // Maximum number of concurrent background jobs, submitted to
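For context, delete_obsolete_files_period_micros controls how often the DB sweeps obsolete files, and this hunk updates its documented default from 0 (sweep after every compaction run) to 6 hours. Below is a minimal usage sketch, assuming the present-day rocksdb:: namespace and headers; the database path and the explicit 6-hour value are illustrative, and the original change was made against the leveldb-namespaced fork of the code.

#include <rocksdb/db.h>
#include <rocksdb/options.h>

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  // Sweep obsolete files at most once every 6 hours (value is in
  // microseconds), matching the default described in the hunk above.
  options.delete_obsolete_files_period_micros = 6ULL * 60 * 60 * 1000000;

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/testdb", &db);  // example path
  if (s.ok()) {
    delete db;
  }
  return 0;
}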