
I'm working with a tar.gz file (196 GB). It needs to be extracted in place (240 GB of output), and I would like the total disk usage not to go over 240 GB during the process (or to stay as close to that as possible). Is there any way this can be done?
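
The straightforward one-shot approach I am trying to avoid looks roughly like this (archive.tar.gz is just a placeholder name): the compressed archive stays on disk until extraction finishes, so peak usage is approximately the sum of both sizes.

    # One-shot extraction: the .tar.gz remains on disk for the whole run,
    # so peak usage is about 196 GB (archive) + 240 GB (contents) = ~436 GB.
    tar -xzf archive.tar.gz
    rm archive.tar.gz   # the archive can only be removed once extraction succeeds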

I am familiar with --remove-files, but I don't think it limits the total disk requirement. I also understand that gunzip automatically decompresses and then removes the source file (though the resulting .tar would still need to be extracted). Does the gunzip <file> command require more disk space than the extracted tar?
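
If I understand gunzip correctly, it writes archive.tar alongside archive.tar.gz and deletes the .gz only after the .tar is complete, so both files coexist during that step:

    # Decompress in place: archive.tar.gz is replaced by archive.tar, but both
    # exist on disk until gunzip finishes (~196 GB + ~240 GB at the peak).
    gunzip archive.tar.gz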

To summarize:

  • Question 1: What is the simplest way to extract a .tar.gz in place with minimal extra storage overhead?
  • Question 2: Can I set up the sub-operations gunzip <file> and tar --remove-files to do the above? (See the sketch after this list.)
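
As far as I can tell, --remove-files only takes effect when creating an archive, not when extracting one, so the closest two-step setup I can see is the following sketch, and neither step stays near 240 GB:

    # Step 1: decompress. Peaks at ~196 GB (.gz) + ~240 GB (.tar) on disk.
    gunzip archive.tar.gz

    # Step 2: extract, then delete the .tar. The archive and its contents
    # coexist until the rm, so this step peaks at ~240 GB + 240 GB.
    tar -xf archive.tar && rm archive.tar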

My backup solution is to store the data in chunks and extract them iteratively, but I would rather have a simple, single-pass solution.
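
For reference, the chunked fallback I have in mind would look roughly like the sketch below. It assumes the archive is already stored as pieces (e.g. produced with split -b, named archive.tar.gz.aa, archive.tar.gz.ab, ...); each piece is streamed into tar and deleted as soon as it has been fed into the pipe, so disk usage stays around the not-yet-read pieces plus whatever has been extracted so far.

    # Stream the pieces into tar and remove each one once it has been consumed.
    # Peak usage is roughly (remaining pieces) + (files extracted so far),
    # which stays near the 240 GB of output rather than 196 GB + 240 GB.
    for part in archive.tar.gz.*; do
        cat "$part"
        rm "$part"   # safe: cat has already pushed this piece into the pipe
    done | tar -xzf -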


1 Answer


If you have an Internet connection fast enough to upload the archive file, or you still have the link you originally downloaded it from, or you can share it via a Google Drive link, you can use an online archive extractor:

1) Upload the archive file, or point the service at its link.

2) Download the extracted contents.