Diffstat (limited to 'bugs/chunk-size-based-on-data-type-and-size.mdwn')
-rw-r--r--  bugs/chunk-size-based-on-data-type-and-size.mdwn  16
1 file changed, 0 insertions(+), 16 deletions(-)
diff --git a/bugs/chunk-size-based-on-data-type-and-size.mdwn b/bugs/chunk-size-based-on-data-type-and-size.mdwn
deleted file mode 100644
index 4fcfc33..0000000
--- a/bugs/chunk-size-based-on-data-type-and-size.mdwn
+++ /dev/null
@@ -1,16 +0,0 @@
-[[!tag obnam-wishlist]]
-
-Would it be beneficial to choose the size of data chunks based on the
-type of data, or its size? Text data might benefit from small chunks,
-whereas video data might benefit from large chunks, for example.
-Possibly the size of a file would be a sufficient indicator.
-
-The goal would be to reduce the number of chunks, without reducing
-the effectiveness of de-duplication.
-
-This needs measurements of real data to see whether there are
-interesting parameters.
-
---liw
-
-[[done]] interesting ideas, but old wishlist bug. --liw
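
A minimal sketch of the idea from the deleted wishlist item: pick a chunk
size from a file's size (and, optionally, its suffix) instead of using one
fixed size. The function names, suffix list, and thresholds below are
assumptions for illustration only, not Obnam's actual API or tuned values;
as the item notes, real measurements would be needed to choose parameters.

```python
import os

# Assumed thresholds (hypothetical): small files get small chunks,
# large media-like files get large chunks. Real values would need
# measurement against actual backup data.
SMALL_CHUNK = 4 * 1024        # 4 KiB
DEFAULT_CHUNK = 64 * 1024     # 64 KiB
LARGE_CHUNK = 1024 * 1024     # 1 MiB

# Hypothetical set of suffixes treated as large, poorly-deduplicating data.
LARGE_FILE_SUFFIXES = {'.mp4', '.mkv', '.avi', '.iso'}


def choose_chunk_size(path):
    """Return a chunk size in bytes based on file size and suffix."""
    size = os.path.getsize(path)
    suffix = os.path.splitext(path)[1].lower()

    if suffix in LARGE_FILE_SUFFIXES or size >= 100 * 1024 * 1024:
        return LARGE_CHUNK
    if size <= 1024 * 1024:
        return SMALL_CHUNK
    return DEFAULT_CHUNK


def chunks(path):
    """Yield the file's contents in chunks of the chosen size."""
    chunk_size = choose_chunk_size(path)
    with open(path, 'rb') as f:
        while True:
            data = f.read(chunk_size)
            if not data:
                return
            yield data
```

The trade-off the wishlist item describes is visible here: larger chunks
reduce the number of chunks to store and track, but coarser chunking can
miss de-duplication opportunities that small chunks would catch.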