I'm working with a tar.gz file (196 GB). This file needs to be extracted (240 GB) in place, and I would like the total disk usage not to go over 240 GB during the process (or as close to that as possible). Is there any way this can be done?
I am familiar with `--remove-files`. I don't think this will limit the total disk requirements. I also understand that `gunzip` automatically decompresses and then removes the source file (but the resulting .tar would still need to be extracted). Does the `gunzip <file>` command require more disk space than the extracted tar?
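For context, the straightforward two-step version I am trying to improve on looks roughly like this (the archive name is a placeholder, and the numbers are rough estimates for my sizes); as far as I can tell, neither step keeps the total under 240 GB:

```bash
# Step 1: gunzip replaces archive.tar.gz with archive.tar.
# While it runs, both the .gz (196 GB) and the growing .tar
# (up to ~240 GB) exist, so the peak is roughly 436 GB.
gunzip archive.tar.gz

# Step 2: tar extracts but leaves archive.tar in place, so the
# peak here is the .tar plus the extracted tree, roughly 480 GB.
tar -xf archive.tar
rm archive.tar
```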
To summarize:
- Question 1: What is the simplest way to extract a .tar.gz in place with minimal disk-space overhead?
- Question 2: Can I set up the sub-operations `gunzip <file>` and `tar --remove-files` to do the above?
My backup solution is to store chunked data and extract iteratively, but I would rather have a simple non-recursive solution.
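The backup plan would look roughly like the sketch below, assuming "chunked" means the data is re-packed as several smaller archives (the chunk names are hypothetical):

```bash
# Hypothetical layout: the 240 GB of data stored as several smaller
# archives (chunk-00.tar.gz, chunk-01.tar.gz, ...). Extract each one
# and delete it before moving on, so the extra disk usage at any
# moment is only one chunk's worth.
for chunk in chunk-*.tar.gz; do
    tar -xzf "$chunk" && rm -- "$chunk"
done
```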
Instead of `… | tar -xf -` you need `… | gzip -cd | tar -xf -`. I haven't tested this in detail but the whole approach seems quite sane. – Kamil Maciorowski Dec 11 '17 at 21:17
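(For clarity, the shape of the pipeline the comment describes is something like the sketch below, with `cat` standing in for whatever actually feeds the compressed stream; on its own this does not reclaim space from the archive as it is read.)

```bash
# The producer on the left supplies the compressed bytes; gzip -cd
# decompresses to stdout, and tar reads the archive from stdin.
cat archive.tar.gz | gzip -cd | tar -xf -
```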