I currently upload daily backups of my files to an S3 bucket using the aws s3 mv command, e.g.:
aws s3 mv $webroot/backups/db/ s3://my-backups/$date/db
Everything works, except that my host (MediaTemple) recently started automatically disabling my server because, apparently, those uploads began hitting a bytes-per-second traffic limit that is in place on all of their servers.
Is there some way to limit the bandwidth used by aws s3 mv through command-line parameters or AWS CLI configuration options?
I understand there are third-party Linux utilities that can do this, e.g. throttle or trickle. However, I'd like to avoid installing additional software if there's a built-in way to do it with Amazon's own tools.
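For reference, the third-party route mentioned above would look roughly like the line below. This is only a sketch, assuming trickle is installed and works with the aws Python process; -s runs it standalone and -u caps the upload rate in KB/s:

# hypothetical throttled invocation, not the built-in solution being asked for
trickle -s -u 512 aws s3 mv $webroot/backups/db/ s3://my-backups/$date/db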
Comments:

I'll try max_concurrent_requests and see if that helps. I'm not uploading a lot of files, just a few really large ones. So I'm not sure if S3's multipart chunked uploads are considered "concurrent requests" or not. – martynasma Dec 09 '15 at 19:53

max_concurrent_requests worked. Thanks! – martynasma Dec 10 '15 at 06:02
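For completeness, the setting the commenter ended up using can be applied with aws configure set. The values below are illustrative, not recommendations, and the max_bandwidth option (a direct rate cap) only exists in newer AWS CLI releases than the one in use at the time:

# reduce how many S3 transfer threads run in parallel (also applies to multipart uploads)
aws configure set default.s3.max_concurrent_requests 2
# on newer AWS CLI versions, an explicit bandwidth cap is also available
aws configure set default.s3.max_bandwidth 1MB/s

Both settings are written to the s3 section of ~/.aws/config and are picked up by aws s3 cp/mv/sync without any change to the backup command itself.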