I am trying to access my S3 bucket from an application deployed on Tomcat running on EC2.
I can see lots of posts related to this, but most of them are about not having the proper access. I do have proper access to all buckets: I am able to upload files from other applications, such as the Jenkins S3 plugin, without any issues. I am clueless as to why this happens only for a Java web application deployed on Tomcat. I have confirmed the following:
- The EC2 instance was created with an IAM role.
- The IAM role has write access to the bucket; the Puppet scripts are able to write to it.
- I tried other applications to check the IAM role, and it works fine without any issues.
As per my understanding, if I do not specify any credentials while creating the S3 client (AmazonS3Client), it falls back to the IAM role credentials by default.
This is a sample function I wrote to test the permission; it fails with the exception shown below.
public boolean checkWritePermission(String bucketName) {
    // No credentials supplied: the client should pick up the instance's IAM role.
    AmazonS3Client amazonS3Client = new AmazonS3Client();
    LOG.info("Checking bucket write permission.....");
    boolean hasWritePermissions = false;
    final ObjectMetadata metadata = new ObjectMetadata();
    metadata.setContentLength(0);

    // Create empty content
    final InputStream emptyContent = new ByteArrayInputStream(new byte[0]);

    // Create a PutObjectRequest with a test object
    final PutObjectRequest putObjectRequest =
            new PutObjectRequest(bucketName, "TestDummy.txt", emptyContent, metadata);
    try {
        if (amazonS3Client.putObject(putObjectRequest) != null) {
            LOG.info("Permissions validated!");
            // User has write permissions; test passed.
            hasWritePermissions = true;
        }
    } catch (AmazonClientException s3Ex) {
        LOG.error("Write permissions not available!", s3Ex);
    }
    return hasWritePermissions;
}

com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: xxxxxxxxxxxxxx)
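For context on the "no credentials supplied" behaviour: the v1 SDK resolves credentials through DefaultAWSCredentialsProviderChain, and on EC2 the instance-profile (IAM role) provider is only reached after environment variables, Java system properties, and the shared profile file have been tried, so stale credentials in any of those earlier sources can silently shadow the role and cause an unexpected AccessDenied. A minimal, self-contained sketch of that resolution order (the lookup logic here is illustrative, not the SDK's actual code):

```java
import java.util.List;
import java.util.Map;
import java.util.function.Supplier;

// Simplified illustration of how the v1 SDK's DefaultAWSCredentialsProviderChain
// behaves: providers are consulted in order and the first one that returns
// credentials wins. The provider names mirror the real chain; the lookup
// logic is a stand-in for demonstration only.
public class CredentialChainSketch {

    // Walk the chain; return "providerName -> credentials" for the first hit.
    static String resolve(List<Map.Entry<String, Supplier<String>>> providers) {
        for (Map.Entry<String, Supplier<String>> p : providers) {
            String creds = p.getValue().get();
            if (creds != null) {
                return p.getKey() + " -> " + creds;
            }
        }
        throw new IllegalStateException(
                "Unable to load AWS credentials from any provider in the chain");
    }

    public static void main(String[] args) {
        // Order mirrors the v1 SDK: environment variables, Java system
        // properties, the shared profile file, then the EC2 instance profile.
        List<Map.Entry<String, Supplier<String>>> chain = List.of(
                Map.entry("EnvironmentVariableCredentialsProvider",
                        (Supplier<String>) () -> null),
                Map.entry("SystemPropertiesCredentialsProvider",
                        (Supplier<String>) () -> null),
                Map.entry("ProfileCredentialsProvider",
                        (Supplier<String>) () -> null),
                Map.entry("InstanceProfileCredentialsProvider",
                        (Supplier<String>) () -> "role-credentials"));
        // On an EC2 instance with no other credentials configured, the
        // instance-profile (IAM role) provider is the one that answers.
        System.out.println(resolve(chain)); // InstanceProfileCredentialsProvider -> role-credentials
    }
}
```

If an earlier provider in the real chain returns anything, the role is never consulted, so it is worth checking for leftover AWS_ACCESS_KEY_ID environment variables or ~/.aws/credentials entries on the Tomcat host.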
I am trying to access a bucket and all its objects using the AWS SDK, but while running the code I get:

Exception in thread "main" com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: X), S3 Extended Request ID: Y=

Kindly suggest where I am going wrong and why the Access Denied error occurs, although I have granted all of the following permissions on the bucket:

s3:GetObject, s3:GetObjectVersion, s3:GetObjectAcl, s3:GetBucketAcl, s3:GetBucketCORS, s3:GetBucketLocation, s3:GetBucketLogging, s3:ListBucket, s3:ListBucketVersions, s3:ListBucketMultipartUploads, s3:GetObjectTorrent, s3:GetObjectVersionAcl

The code is as follows:
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
ClientConfiguration clientConfig = new ClientConfiguration();
clientConfig.setProtocol(Protocol.HTTP);
AmazonS3 conn = new AmazonS3Client(credentials, clientConfig);
conn.setEndpoint(bucketName);

Bucket bucket = conn.createBucket(bucketName);
ObjectListing objects = conn.listObjects(bucket.getName());
do {
    for (S3ObjectSummary objectSummary : objects.getObjectSummaries()) {
        System.out.println(objectSummary.getKey() + "\t"
                + objectSummary.getSize() + "\t"
                + StringUtils.fromDate(objectSummary.getLastModified()));
    }
    objects = conn.listNextBatchOfObjects(objects);
} while (objects.isTruncated());
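A common cause of AccessDenied with a permission list like this is mixing up resource ARNs: bucket-level actions (s3:ListBucket, s3:GetBucketLocation, and the other s3:*Bucket* actions) must be granted on the bucket ARN itself, while object-level actions (s3:GetObject and friends) must be granted on bucket/*. Note also that the code above calls createBucket, which additionally requires s3:CreateBucket. A hedged sketch of a policy with that split (the bucket name example-bucket is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BucketLevelRead",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketVersions",
        "s3:ListBucketMultipartUploads",
        "s3:GetBucketAcl",
        "s3:GetBucketCORS",
        "s3:GetBucketLocation",
        "s3:GetBucketLogging"
      ],
      "Resource": "arn:aws:s3:::example-bucket"
    },
    {
      "Sid": "ObjectLevelRead",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:GetObjectAcl",
        "s3:GetObjectVersionAcl",
        "s3:GetObjectTorrent"
      ],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```

If all of these actions were granted on a single resource (only the bucket ARN, or only bucket/*), roughly half of them silently do nothing, which shows up as AccessDenied at call time.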
asked Jun 10, 2014 at 11:02
Go to IAM and check whether the user (the access key and secret key) being used for the API has the privileges to use the S3 API.
Attach an S3 policy to that user; try S3 full access first, and fine-grain the access once this works. For more information, see Managing IAM Policies.
answered Jun 10, 2014 at 11:12
Naveen Vijay
The problem is now solved. There were the following issues in the code:
- The endpoint was not correct; the client must be pointed at the correct S3 endpoint, not the bucket name.
- Not enough permissions were granted on the bucket. The complete list of required permissions should be in place before using the bucket through the AWS SDK.
Below is the corrected code:
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
ClientConfiguration clientConfig = new ClientConfiguration();
clientConfig.setProtocol(Protocol.HTTP);
AmazonS3 conn = new AmazonS3Client(credentials, clientConfig);
conn.setEndpoint("correct end point");

Bucket bucket = conn.createBucket(bucketName);
ObjectListing objects = conn.listObjects(bucket.getName());
do {
    for (S3ObjectSummary objectSummary : objects.getObjectSummaries()) {
        System.out.println(objectSummary.getKey() + "\t"
                + objectSummary.getSize() + "\t"
                + StringUtils.fromDate(objectSummary.getLastModified()));
    }
    objects = conn.listNextBatchOfObjects(objects);
} while (objects.isTruncated());
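The original bug was passing the bucket name to setEndpoint, which expects a service endpoint host. A hypothetical helper that builds the regional endpoint host, assuming AWS's current "s3.&lt;region&gt;.amazonaws.com" naming scheme (verify the exact host for your region in the AWS endpoints reference; us-east-1 also answers at the legacy s3.amazonaws.com host):

```java
// Hypothetical helper: build the regional S3 endpoint host instead of
// hard-coding a string. The "s3.<region>.amazonaws.com" scheme is an
// assumption taken from AWS's current endpoint naming; check the AWS
// endpoints reference for your region before relying on it.
public class S3Endpoints {

    static String endpointFor(String region) {
        if (region == null || region.isBlank()) {
            throw new IllegalArgumentException("region must be non-empty");
        }
        return "s3." + region + ".amazonaws.com";
    }

    public static void main(String[] args) {
        // Would replace the hard-coded string above:
        // conn.setEndpoint(endpointFor("eu-west-1"));
        System.out.println(endpointFor("eu-west-1")); // s3.eu-west-1.amazonaws.com
    }
}
```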
answered Jun 12, 2014 at 4:35
gkbstar
I was getting the same exception and this is how I fixed it.
The S3 bucket objects were encrypted with server-side encryption using KMS (SSE-KMS). I had to add the app/Lambda role as a key user on the KMS encryption key.
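With SSE-KMS, S3 permissions on the bucket are not enough: the caller also needs KMS permissions on the key itself, and a missing KMS grant surfaces as the same S3 AccessDenied. A hedged sketch of the kind of key policy statement involved (the account ID and role name are placeholders; the exact action list depends on whether you read, write, or both):

```json
{
  "Sid": "AllowAppRoleToUseKey",
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::111122223333:role/my-app-role" },
  "Action": [
    "kms:Decrypt",
    "kms:GenerateDataKey"
  ],
  "Resource": "*"
}
```

kms:Decrypt covers reads of encrypted objects; kms:GenerateDataKey is needed for uploads.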
answered Jun 29, 2020 at 13:25
abbas
In the Permissions tab of the bucket, I unchecked:
- Manage public access control lists (ACLs) for this bucket
- Block new public ACLs and uploading public objects (Recommended)

and the problem was gone.
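For reference, those console checkboxes map onto the bucket's public access block settings; a hedged sketch of the equivalent configuration (field names per the S3 PublicAccessBlock API, and which checkbox maps to which field is my reading of the old console wording):

```json
{
  "BlockPublicAcls": false,
  "IgnorePublicAcls": false,
  "BlockPublicPolicy": true,
  "RestrictPublicBuckets": true
}
```

Disabling ACL blocking opens the bucket up to public ACLs, so granting the application access through IAM or a bucket policy is usually the safer long-term fix.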
answered Dec 1, 2018 at 4:15
phongnt
If you still see the error even after setting the right IAM policy and checking the bucket/path, check the Apache HttpClient dependency. Apache HttpClient 4.5.5 works fine, while 4.5.7 and above fail for this use case (reportedly because folder path separators are no longer encoded the way the SDK expects). In that case, explicitly pin the Apache HttpClient version to 4.5.5, or at least to some other version that works.
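If you need that pin, a hypothetical Maven fragment forcing the working version reported above (the coordinates are the standard Apache HttpClient ones; adjust if your build resolves the client transitively through the AWS SDK BOM):

```xml
<!-- Hypothetical pom.xml fragment: pin Apache HttpClient to the version the
     answer reports as working; re-test and drop the pin once a later release
     fixes the path-encoding behaviour. -->
<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpclient</artifactId>
  <version>4.5.5</version>
</dependency>
```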
answered Mar 4, 2019 at 21:03
// Sample code to save an image to an S3 bucket
byte[] imageBytes = request.getThumbnail().readAllBytes();
InputStream inputStream = new ByteArrayInputStream(imageBytes);

ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(imageBytes.length);
metadata.setContentType("image/png");

// Use the current timestamp to build a unique object key
String image = String.valueOf(System.currentTimeMillis());
String key = "image/" + image;

s3.putObject(new PutObjectRequest(bucketName, key, inputStream, metadata)
        .withCannedAcl(CannedAccessControlList.Private));
answered Oct 5, 2021 at 6:23
Please check your region and set it explicitly when building the client, for example with AmazonS3ClientBuilder.standard().withRegion(...).build().
answered Apr 1 at 3:54