If you ran OPTIMIZE TABLE table1; multiple times and the file table1.ibd stays the same size, then the table is already defragmented as much as possible.
It is possible that the rows you deleted had very little VARCHAR, TEXT, or BLOB data, or even NULL values. Running OPTIMIZE TABLE after deleting such rows won't make a significant dent in the space used.
If all your InnoDB tables reside in .ibd files, then you should look at the file called ibdata1 and see how big it is. Back on Apr 23, 2013, I wrote How can Innodb ibdata1 file grows by 5X even with innodb_file_per_table set?. That is where I discuss how ibdata1 can hold transaction info for MVCC. That space is never reclaimed by OPTIMIZE TABLE, regardless of whether innodb_file_per_table is on or off.
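If you cannot check the filesystem directly, here is a rough way to do it from SQL (assuming MySQL 5.7 or later, where InnoDB files appear in information_schema.FILES; TOTAL_EXTENTS * EXTENT_SIZE only approximates the file size):

-- is each table in its own .ibd file?
SHOW VARIABLES LIKE 'innodb_file_per_table';
-- approximate size of the system tablespace (ibdata1)
SELECT FILE_NAME,
ROUND((TOTAL_EXTENTS * EXTENT_SIZE) / 1024 / 1024) SizeMB
FROM information_schema.FILES
WHERE TABLESPACE_NAME = 'innodb_system';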
You may want to look into the InnoDB Infrastructure CleanUp section of my StackOverflow post you referred to.
UPDATE 2022-11-20 21:37 EST
Your Comment
First, when I implemented the solution from the mentioned link, the size dropped from 1TB to 300GB. But now the size has reached 500GB, meaning a 200GB increase for the same number of rows. Second, I ran OPTIMIZE TABLE three times on the same table with the same result (no decrease). Third, my tables are (pId, fId, textField), and textField contains big JSON. Finally, ibdata1 is only 1557MB. What can I do now, please?
If you are storing JSON data uncompressed, each UPDATE of a JSON field could easily cause severe fragmentation if the JSON column value increases in size. This may explain why the table grows in size.
Even if you run the following in a week's time
SELECT AVG(LENGTH(textfield)) AverageLength FROM table1;
OPTIMIZE TABLE table1;
SELECT AVG(LENGTH(textfield)) AverageLength FROM table1;
The reduction in the average length would be minuscule at best. Just the slightest increase in the size of a JSON value is enough to cause that fragmentation. Picture it: a 3KB (3072-byte) JSON value increases in size by just a few bytes (say 10 bytes). That is enough to have the slightly larger JSON (now 3082 bytes) written to a different data page (each InnoDB data page is 16K), while the old value (3072 bytes) is left wide open as a kind of pigeon hole in the old value's InnoDB page location.
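Here is a rough way to gauge that residual fragmentation (DATA_FREE in information_schema.TABLES is only an estimate of allocated-but-unused space, and mydb is a placeholder for your database name):

-- bytes of row data vs. bytes allocated but unused
SELECT TABLE_NAME,
ROUND(DATA_LENGTH / 1024 / 1024) DataMB,
ROUND(DATA_FREE / 1024 / 1024) FreeMB
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'table1';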
SUGGESTION
You may need to consider using the COMPRESS() and UNCOMPRESS() functions when storing JSON values. It could greatly reduce the amount of residual fragmentation.
Please test it in a staging environment.
First, copy table1 into another database in staging (call it table1_copy).
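A minimal sketch of that copy (staging and mydb are placeholder database names):

CREATE TABLE staging.table1_copy LIKE mydb.table1;
INSERT INTO staging.table1_copy SELECT * FROM mydb.table1;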
Then, run the following:
SELECT
SUM(LENGTH(textfield)),SUM(LENGTH(COMPRESS(textfield)))
INTO @bytes_uncompressed,@bytes_compressed
FROM table1_copy;
SET @bytes_diff = @bytes_uncompressed - @bytes_compressed;
SELECT @bytes_uncompressed,@bytes_compressed,@bytes_diff\G
If the compression is worth it, then run
UPDATE table1_copy SET textfield = COMPRESS(textfield);
Note that COMPRESS() returns a binary string, so textfield must be a binary type (such as LONGBLOB) to hold the compressed value.
Now the really bad news: your developers must now run UNCOMPRESS(textfield) to get the JSON value out of the table, and must use COMPRESS(textfield) on every INSERT and UPDATE.
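To illustrate that read/write pattern (the values here are made up; also note that UNCOMPRESS() returns NULL if handed data that was never compressed):

-- write path: compress before storing
INSERT INTO table1 (pId, fId, textField)
VALUES (1, 10, COMPRESS('{"status": "new"}'));
UPDATE table1 SET textField = COMPRESS('{"status": "updated"}')
WHERE pId = 1 AND fId = 10;
-- read path: decompress on the way out
SELECT pId, fId, UNCOMPRESS(textField) JsonText
FROM table1 WHERE pId = 1 AND fId = 10;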
This is really all that I can suggest. The rest requires some elbow grease on the part of your developers. If the compression is worth it and you get the same INSERT and UPDATE performance, then it is worth implementing.