Calculating the size of objects in AWS S3 buckets
I ran into a problem recently: my AWS S3 bucket shows a total size of 30 TB. However, when I try to get the total size of the individual folders, they do not sum to more than 2 TB.
I used the following command:
aws s3 ls --summarize --human-readable --recursive s3://BUCKETNAME
I also used the "Calculate total size" option in the AWS console. Am I missing something? I suspect the extra size and object count may come from object versions, but if that is the case, I don't know of a query that shows the total size of each individual file including its versions. Is there a query or command I can use to get the correct size and to find the objects that are taking up the most space?
jeanid
You might have incomplete multipart uploads in your S3 bucket; these do not show up in the AWS console.
If the complete multipart upload request isn’t sent successfully, Amazon S3 will not assemble the parts and will not create any object. The parts remain in your Amazon S3 account until the multipart upload completes or is aborted, and you pay for the parts that are stored in Amazon S3. These parts are charged according to the storage class specified when the parts were uploaded.
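As a quick check from the CLI, you can list any in-progress multipart uploads in the bucket (a sketch; `BUCKETNAME` is a placeholder for your bucket name):

```shell
# List in-progress (incomplete) multipart uploads in the bucket.
# Any entries returned here have parts that are stored and billed,
# but that "aws s3 ls" will not show.
aws s3api list-multipart-uploads --bucket BUCKETNAME \
  --query 'Uploads[].{Key: Key, Initiated: Initiated, UploadId: UploadId}' \
  --output table
```

If this prints a table of uploads, those are candidates for the missing storage.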
You can follow the guide https://aws.amazon.com/blogs/aws-cloud-financial-management/discovering-and-deleting-incomplete-multipart-uploads-to-lower-amazon-s3-costs/ to check whether that is what's taking up the storage in your S3 bucket.
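On your versioning question: `aws s3 ls` only counts current object versions. If versioning is enabled on the bucket, you can sum the size of all versions, including noncurrent ones, with something like this (a sketch; `BUCKETNAME` is a placeholder, and `sum()` will fail if the bucket has no versions at all):

```shell
# Sum the sizes (in bytes) of all object versions, current and noncurrent.
# The CLI paginates through the full listing automatically.
aws s3api list-object-versions --bucket BUCKETNAME \
  --query 'sum(Versions[].Size)'
```

Adding `--prefix somefolder/` narrows this to one folder, which can help you find where the versioned data lives.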
If you do not have a lifecycle policy to abort incomplete multipart uploads, you should probably add one.
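A minimal lifecycle rule for this could look like the following (a sketch; the rule ID and the 7-day window are arbitrary choices, and `BUCKETNAME` is a placeholder):

```shell
# Write a lifecycle configuration that aborts multipart uploads
# still incomplete 7 days after they were initiated.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "abort-incomplete-multipart-uploads",
      "Status": "Enabled",
      "Filter": {},
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
    }
  ]
}
EOF

# Apply the configuration to the bucket.
aws s3api put-bucket-lifecycle-configuration --bucket BUCKETNAME \
  --lifecycle-configuration file://lifecycle.json
```

Note that `put-bucket-lifecycle-configuration` replaces the bucket's entire lifecycle configuration, so if you already have other rules, merge this one into the existing set rather than applying it on its own.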