DO Object Store with AWS CLI
In my day job I work with AWS's S3 extensively and use the AWS CLI for the majority of my command line tasks.
When I started playing around with Digital Ocean's object store and found that it is S3 API compatible, the first thing I wanted to test was whether I could use the AWS CLI with it.
For the majority of commands the AWS CLI works just the same as it does with AWS's S3, but you need to have the CLI send the requests to Digital Ocean. To do this, all you need to do is add the --endpoint option.
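As a rough sketch, the general shape of every command in this post is the one below, where the region placeholder stands in for whichever Spaces region you are using (nyc3 in my case):
aws s3 ls --endpoint https://<region>.digitaloceanspaces.com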
Configuring the CLI
This is for installing and configuring the AWS CLI on Ubuntu 17.04 (on a Digital Ocean droplet).
Install the AWS CLI:
sudo apt install awscli
Configuring the CLI with the API keys works the same as it does with S3:
andrew@ubuntu-512mb-sfo1-01:~$ aws configure
AWS Access Key ID [None]: 36NAAOJGEXAMPLE
AWS Secret Access Key [None]: sngY/JybNMYQNyyLGEXAMPLEGEXAMPLE
Default region name [None]:
Default output format [None]:
andrew@ubuntu-512mb-sfo1-01:~$
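If you would rather skip the interactive prompt, the CLI reads the same values from ~/.aws/credentials, so you can drop them in there yourself (these are the fake example values from above, not real credentials):
[default]
aws_access_key_id = 36NAAOJGEXAMPLE
aws_secret_access_key = sngY/JybNMYQNyyLGEXAMPLEGEXAMPLE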
Now we can test this out. For this we are going to use https://nyc3.digitaloceanspaces.com as the endpoint and list our Spaces:
andrew@ubuntu-512mb-sfo1-01:~$ aws s3 ls --endpoint https://nyc3.digitaloceanspaces.com
2017-08-25 04:20:59 examplebucket
andrew@ubuntu-512mb-sfo1-01:~$
Now, typing --endpoint https://nyc3.digitaloceanspaces.com all the time is going to get annoying, so let's make an alias for it.
andrew@ubuntu-512mb-sfo1-01:~$ alias DO='echo "--endpoint https://nyc3.digitaloceanspaces.com"'
andrew@ubuntu-512mb-sfo1-01:~$ aws s3 ls $(DO)
2017-08-25 04:20:59 examplebucket
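The alias only lasts for the current shell session, so if you want to keep it around, one option is to add the same line to your ~/.bashrc (or whichever rc file your shell uses):
# ~/.bashrc
alias DO='echo "--endpoint https://nyc3.digitaloceanspaces.com"'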
Testing this out
Put object works as expected
andrew@ubuntu-512mb-sfo1-01:~$ aws s3 ls s3://examplebucket $(DO)
andrew@ubuntu-512mb-sfo1-01:~$ aws s3api put-object --key test --bucket examplebucket $(DO)
{
    "ETag": "\"d41d8cd98f00b204e9800998ecf8427e\""
}
andrew@ubuntu-512mb-sfo1-01:~$ aws s3 ls s3://examplebucket $(DO)
2017-08-26 19:26:15          0 test
andrew@ubuntu-512mb-sfo1-01:~$
- This is a trick to create 0-byte objects
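Copying real files works the same way; just tack $(DO) onto the usual commands. The file name backup.tar.gz here is just a made-up example to show the shape of the commands:
aws s3 cp backup.tar.gz s3://examplebucket/backup.tar.gz $(DO)
aws s3 cp s3://examplebucket/backup.tar.gz restored.tar.gz $(DO)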
A note on endpoints
Now many may wonder why I am using https://nyc3.digitaloceanspaces.com and not the endpoint you would find in the Digital Ocean web console, https://examplebucket.nyc3.digitaloceanspaces.com.
This comes down to how the AWS CLI treats that endpoint:
andrew@ubuntu-512mb-sfo1-01:~$ aws s3 ls s3://examplebucket --endpoint https://examplebucket.nyc3.digitaloceanspaces.com
An error occurred (SignatureDoesNotMatch) when calling the ListObjects operation: Unknown
andrew@ubuntu-512mb-sfo1-01:~$ aws s3 ls s3://examplebucket --endpoint https://examplebucket.nyc3.digitaloceanspaces.com --debug
2017-08-26 19:28:12,927 - MainThread - awscli.clidriver - DEBUG - CLI version: aws-cli/1.11.44 Python/3.5.3 Linux/4.10.0-32-generic botocore/1.5.7
2017-08-26 19:28:12,929 - MainThread - awscli.clidriver - DEBUG - Arguments entered to CLI: ['s3', 'ls', 's3://examplebucket', '--endpoint', 'https://examplebucket.nyc3.digitaloceanspaces.com', '--debug']
# Removed
with params: {'url_path': '/examplebucket', 'query_string': {'prefix': '', 'encoding-type': 'url', 'delimiter': '/'}, 'url': 'https://examplebucket.nyc3.digitaloceanspaces.com/examplebucket?prefix=&encoding-type=url&delimiter=%2F', 'headers': {'User-Agent': 'aws-cli/1.11.44 Python/3.5.3 Linux/4.10.0-32-generic botocore/1.5.7'}, 'context': {'encoding_type_auto_set': True, 'client_config': <botocore.config.Config object at 0x7fd1f8974390>, 'client_region': None, 'has_streaming_input': False, 'signing': {'bucket': 'examplebucket'}}, 'body': b'', 'method': 'GET'}
2017-08-26 19:28:13,020 - MainThread - botocore.hooks - DEBUG - Event request-created.s3.ListObjects: calling handler <bound method RequestSigner.handler of <botocore.signers.RequestSigner object at 0x7fd1f89742b0>>
2017-08-26 19:28:13,021 - MainThread - botocore.auth - DEBUG - Calculating signature using hmacv1 auth.
2017-08-26 19:28:13,021 - MainThread - botocore.auth - DEBUG - HTTP request method: GET
2017-08-26 19:28:13,022 - MainThread - botocore.auth - DEBUG - StringToSign:
GET
Sat, 26 Aug 2017 19:28:13 GMT
/examplebucket
2017-08-26 19:28:13,030 - MainThread - botocore.endpoint - DEBUG - Sending http request: <PreparedRequest [GET]>
2017-08-26 19:28:13,393 - MainThread - botocore.parsers - DEBUG - Response headers: {'Date': 'Sat, 26 Aug 2017 19:28:13 GMT', 'Access-Control-Allow-Origin': '', 'Strict-Transport-Security': 'max-age=15552000; includeSubDomains; preload', 'Content-Type': 'application/xml', 'Content-Length': '186', 'x-amz-request-id': 'tx00000000000000de5cd64-0059a1cbcd-66a8-nyc3a', 'Accept-Ranges': 'bytes'}
2017-08-26 19:28:13,394 - MainThread - botocore.parsers - DEBUG - Response body:
b'<?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><RequestId>tx00000000000000de5cd64-0059a1cbcd-66a8-nyc3a</RequestId><HostId>66a8-nyc3a-nyc</HostId></Error>'
# Removed
botocore.exceptions.ClientError: An error occurred (SignatureDoesNotMatch) when calling the ListObjects operation: Unknown
2017-08-26 19:28:13,418 - MainThread - awscli.clidriver - DEBUG - Exiting with rc 255
An error occurred (SignatureDoesNotMatch) when calling the ListObjects operation: Unknown
andrew@ubuntu-512mb-sfo1-01:~$
If you read through that debug output you will see that the request is going to https://examplebucket.nyc3.digitaloceanspaces.com/examplebucket, so we are addressing the Space twice: once in the DNS name and once in the request path. Since the signature is calculated over the resource path, that doubled-up addressing is what triggers the SignatureDoesNotMatch error. By using the nyc3 endpoint without the Space name in the DNS name, the Space only gets addressed once.
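To put it another way, the two endpoints end up producing roughly these request URLs (the second one is what I would expect with the plain regional endpoint, given the same path-style behaviour seen in the debug output):
# bucket-specific endpoint: the Space shows up twice
https://examplebucket.nyc3.digitaloceanspaces.com/examplebucket
# regional endpoint: the Space shows up once, in the path
https://nyc3.digitaloceanspaces.com/examplebucket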
Conclusion
Overall this works with minimal tweaks, and I would expect many tools that work with S3 to be easily adapted for use with Digital Ocean's object store.