Access key per space
Please prioritise having individual keys for individual spaces. A single account-wide key is pretty useless with multiple products or clients. I can't use it for backups, for instance, because if the key is compromised, the backups of every other project are exposed as well.
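To illustrate the current scope of the problem: one Spaces key pair can enumerate and read every Space on the account via the S3-compatible API. The region and credentials below are placeholders.

    import boto3

    # Placeholders: any valid Spaces key pair for the account works here.
    s3 = boto3.client(
        "s3",
        region_name="nyc3",
        endpoint_url="https://nyc3.digitaloceanspaces.com",
        aws_access_key_id="SPACES_ACCESS_KEY",
        aws_secret_access_key="SPACES_SECRET_KEY",
    )

    # A single key pair lists every Space on the account, including other
    # clients' backup buckets; there is currently no way to scope it down.
    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"])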
Granular API token access to Object Storage (Spaces)
DigitalOcean does not yet offer granular API token access as an option. As things stand, API tokens give access to all Spaces on the same account, which is not optimal for us as a web agency. We could create an account for each client, but that would give us hundreds of accounts and much more extra work for our bookkeeper.
Automatically purge a file from the CDN when it is changed or removed
This seems like a bug or a missing feature (I know it can be worked around using the DigitalOcean API or doctl compute cdn flush $cdn_id --files $file). The issue: when any file changes in DigitalOcean Spaces, e.g. an updated file is uploaded using s3cmd, the CDN is still stuck serving the old file. Why doesn't DigitalOcean automatically purge a file from the CDN when it is identified as changed or removed? For example, if https://cdn.mydomain.com/test.jpg doesn't exist and I then upload test.jpg to the Space, https://cdn.mydomain.com/test.jpg displays it without requiring me to purge the CDN cache. That is inconsistent with the case where test.jpg already exists: after I delete it or upload a modified test.jpg, https://cdn.mydomain.com/test.jpg is still stuck on the old file. DigitalOcean should automatically identify the change and purge it from the CDN cache.
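In the meantime, here is a minimal sketch of that workaround in Python: upload via the S3-compatible API, then flush the CDN cache through the DigitalOcean API (the same operation doctl performs). The region, Space name, credentials, and CDN endpoint ID below are placeholders.

    import boto3
    import requests

    # Placeholders: substitute your own region, Space, credentials, and CDN endpoint ID.
    REGION = "nyc3"
    SPACE = "my-space"
    KEY = "test.jpg"
    CDN_ID = "your-cdn-endpoint-id"
    DO_API_TOKEN = "your-api-token"

    # Upload (or overwrite) the object through the S3-compatible endpoint.
    s3 = boto3.client(
        "s3",
        region_name=REGION,
        endpoint_url=f"https://{REGION}.digitaloceanspaces.com",
        aws_access_key_id="SPACES_ACCESS_KEY",
        aws_secret_access_key="SPACES_SECRET_KEY",
    )
    s3.upload_file("test.jpg", SPACE, KEY)

    # Purge the stale copy from the CDN cache (DELETE /v2/cdn/endpoints/{id}/cache).
    resp = requests.delete(
        f"https://api.digitalocean.com/v2/cdn/endpoints/{CDN_ID}/cache",
        headers={"Authorization": f"Bearer {DO_API_TOKEN}"},
        json={"files": [KEY]},
    )
    resp.raise_for_status()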
separate read, write and delete access keys
Separate read from write and from delete via distinct access keys. Example: we may want to use Spaces to upload backups to the S3-compatible storage. However, if the server gets compromised and the keys are stolen, the backups may also be deleted, which is a security risk. It's a good habit to have the deletion of old backups handled by a separate instance, hence the separation of delete from read and read/write.
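Spaces doesn't offer per-action permissions today, but as a rough sketch of the idea, here is an AWS-style bucket policy submitted via boto3's put_bucket_policy that lets a backup writer upload and read objects while denying deletion. The principal, bucket name, and the assumption that Spaces would enforce such a policy are all hypothetical.

    import json
    import boto3

    # Hypothetical sketch: Spaces does not currently enforce per-action
    # bucket policies like this; names below are placeholders.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://nyc3.digitaloceanspaces.com",
        aws_access_key_id="SPACES_ACCESS_KEY",
        aws_secret_access_key="SPACES_SECRET_KEY",
    )

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                # The backup writer may upload and read objects...
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam:::user/backup-writer"},
                "Action": ["s3:PutObject", "s3:GetObject"],
                "Resource": "arn:aws:s3:::my-backups/*",
            },
            {
                # ...but is explicitly denied deletion; a separate "pruner"
                # credential on another instance would expire old backups.
                "Effect": "Deny",
                "Principal": {"AWS": "arn:aws:iam:::user/backup-writer"},
                "Action": "s3:DeleteObject",
                "Resource": "arn:aws:s3:::my-backups/*",
            },
        ],
    }

    s3.put_bucket_policy(Bucket="my-backups", Policy=json.dumps(policy))

With this split, a compromised backup server can at worst add objects; it cannot destroy the existing backup history.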
Spaces needs access logs
So we can know what's happening under the hood. It's the only reason I use AWS CloudFront instead. Here's their access log documentation: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/AccessLogs.html
GEO Replication for Spaces
We would like the ability to replicate a Space/bucket between regions. I see two ways this could be implemented. First, since a Space name is globally unique, you could offer an option to host that Space in xxxx locations and have data automatically replicate to each region. Second, allow the user to choose two or more Spaces in any regions and create a replication rule between them. I think there should also be two modes, master/slave and master/master, and replication should be automatic and fast. In master/slave, all changes made on one Space/region replicate to the other region, but not vice versa; with master/master, changes made in any Space replicate to all other Spaces in the replication pool.
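For a concrete picture, here is a minimal sketch of what a master/slave rule might look like if Spaces adopted the S3-style put_bucket_replication call. The bucket names, endpoint, and Role value are hypothetical, and Spaces does not support this API today.

    import boto3

    # Hypothetical sketch: Spaces does not currently support bucket replication.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://nyc3.digitaloceanspaces.com",
        aws_access_key_id="SPACES_ACCESS_KEY",
        aws_secret_access_key="SPACES_SECRET_KEY",
    )

    # One-way (master/slave) rule: everything written to the source Space
    # is copied to the destination Space in another region.
    s3.put_bucket_replication(
        Bucket="my-space-nyc3",
        ReplicationConfiguration={
            "Role": "placeholder-role",  # required by the S3 schema; meaning on Spaces TBD
            "Rules": [
                {
                    "ID": "replicate-to-ams3",
                    "Status": "Enabled",
                    "Priority": 1,
                    "Filter": {},  # empty filter = replicate the whole bucket
                    "Destination": {"Bucket": "arn:aws:s3:::my-space-ams3"},
                    "DeleteMarkerReplication": {"Status": "Enabled"},
                }
            ],
        },
    )
    # Master/master would simply be the mirror-image rule configured on
    # the destination Space as well.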
disk utilization for volumes
Currently the Graphs do not display disk utilization for attached volumes. This should be added.
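Until that lands, one workaround is to sample utilization on the droplet itself, for instance with Python's standard library; the mount point below is a placeholder.

    import shutil

    # Placeholder mount point for the attached volume.
    MOUNT = "/mnt/my_volume"

    usage = shutil.disk_usage(MOUNT)
    percent_used = usage.used / usage.total * 100
    print(f"{MOUNT}: {percent_used:.1f}% used "
          f"({usage.used // 2**30} GiB of {usage.total // 2**30} GiB)")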
Become a Cloudflare hosting partner
I'm not aware of all the benefits or costs of you becoming a Cloudflare hosting partner, but apparently it means we would get access to the Railgun feature, which is usually reserved for sites on the premium Business plan. Railgun is a seriously cool and useful tool for dynamic websites: http://www.cloudflare.com/railgun. I think as Cloudflare becomes more ubiquitous for website owners, this will be a highly attractive feature!