How to interact with your project’s media storage¶

See also

Working with your project’s media storage in Python applications.

Your cloud project’s media file storage is held on an S3 service - typically Amazon Web Services’ S3 service, or another S3 provider. Currently, most projects use Amazon’s own S3 service, or Exoscale for projects in our Swiss region.

Locally, your projects store their media in the /data/media directory, and you can interact with those files directly. You can then use the Divio tools to push and pull media to the Cloud if required.

Occasionally you may need direct access to the S3 storage bucket for your project. You can manage this using a client of your choice that supports S3 and the particular storage provider.

Interact with your project’s Cloud S3 storage¶

Warning

Note that S3 file operations tend to be destructive and do not necessarily have the same behaviours you may be used to from other models, such as FTP. It’s important that you know what you are doing and understand the consequences of any actions or commands.

Obtain your storage access details¶

In the Control Panel for your project, visit the /doctor URL. For each of the Test and Live servers, you’ll see a DEFAULT_STORAGE_DSN value listed, for example:
This value contains the details you will need to use with a file transfer client for access to the storage bucket. The two examples below show which sections of the DSN correspond to the different parameters, for an Amazon (amazonaws.com) host and an Exoscale (exo.io) host:

  s3://AKAIIEJALP7LUT6ODIJA:TZJYGCfUZheXG%2BwANMFabbotgBs6d2lxZW06OIbD@example-test-68564d3f78d04cd2935f-8f20b19.aldryn-media.io.s3-eu-central-1.amazonaws.com/?domain=example-test-68564d3f78d04cd2935f-8f20b19.aldryn-media.io

  In this Amazon example, the key is AKAIIEJALP7LUT6ODIJA, the secret is TZJYGCfUZheXG%2BwANMFabbotgBs6d2lxZW06OIbD, the bucket name is example-test-68564d3f78d04cd2935f-8f20b19.aldryn-media.io, and the endpoint (which contains the region) is s3-eu-central-1.amazonaws.com.

  s3://EXO52e55beb39187195ddff72219:iITF12F1t321tim9zBxITexrvL_bAghgK_z4w1hEuu00@example-test-765482644ac540dbb23367cf3837580b-f0596a8.sos-ch-dk-2.exo.io/?auth=s3

  In this Exoscale example, the key is EXO52e55beb39187195ddff72219, the secret is iITF12F1t321tim9zBxITexrvL_bAghgK_z4w1hEuu00, the bucket name is example-test-765482644ac540dbb23367cf3837580b-f0596a8, and the endpoint is sos-ch-dk-2.exo.io.

The key identifies you as a user.

The secret may contain some symbols that are URL-encoded (percent-encoded), and you will need to change them back before using them:

  • %2B must be changed to +
  • %2F must be changed to /
    For any other values beginning with % use a conversion table.
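These substitutions are standard URL percent-decoding, so rather than editing by hand you can let a library do it; a minimal sketch in Python, using the example secret from the DSN above:

```python
from urllib.parse import unquote

# Percent-decode the secret copied from the DSN: %2B becomes +,
# %2F becomes /, and any other %XX sequences are handled as well.
secret = unquote("TZJYGCfUZheXG%2BwANMFabbotgBs6d2lxZW06OIbD")
print(secret)  # TZJYGCfUZheXG+wANMFabbotgBs6d2lxZW06OIbD
```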

The bucket name identifies the resource you wish to work with.

The region is contained in the endpoint, the S3 host name. Sometimes it may be implicit, as in the case of Amazon’s default us-east-1:

Provider   Endpoint                       Region        Location
Amazon     s3.amazonaws.com               us-east-1     US East (N. Virginia)
Amazon     s3-eu-central-1.amazonaws.com  eu-central-1  EU (Frankfurt)
Amazon     s3-eu-west-2.amazonaws.com     eu-west-2     EU (London)
Exoscale   sos-ch-dk-2.exo.io             ch-dk-2       Switzerland

See Amazon’s S3 regions table for more information about regions and their names.
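The pattern in the table can be expressed in a few lines of Python. This is a rough illustration only, assuming endpoints shaped like those shown above; newer AWS endpoint styles (such as dotted regional endpoints) would need different handling:

```python
def region_from_endpoint(endpoint):
    """Guess the region from an S3 endpoint host name.

    Sketch only: assumes endpoints shaped like "s3-<region>.amazonaws.com"
    or "sos-<region>.exo.io"; a bare "s3.amazonaws.com" implies us-east-1.
    """
    first_label = endpoint.split(".")[0]     # e.g. "s3-eu-central-1"
    _, _, region = first_label.partition("-")
    return region or "us-east-1"

print(region_from_endpoint("s3.amazonaws.com"))              # us-east-1
print(region_from_endpoint("s3-eu-central-1.amazonaws.com")) # eu-central-1
print(region_from_endpoint("sos-ch-dk-2.exo.io"))            # ch-dk-2
```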

The endpoint is the address that the client will need to connect to.
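Rather than copying each section of the DSN out by hand, it can be split programmatically. The sketch below is not an official Divio tool; it assumes, as in the examples above, that the endpoint portion of the host begins with a label starting with "s3" (Amazon) or "sos" (Exoscale):

```python
from urllib.parse import unquote, urlsplit

def parse_storage_dsn(dsn):
    """Split a DEFAULT_STORAGE_DSN into its access parameters.

    A sketch only: assumes the bucket name is the part of the host
    before the endpoint, and that the endpoint's first label starts
    with "s3" (Amazon) or "sos" (Exoscale).
    """
    parts = urlsplit(dsn)
    labels = parts.hostname.split(".")
    # Find the label where the endpoint begins.
    for i, label in enumerate(labels):
        if label.startswith(("s3", "sos")):
            break
    return {
        "key": parts.username,
        "secret": unquote(parts.password),  # decode %2B -> +, %2F -> /
        "bucket": ".".join(labels[:i]),
        "endpoint": ".".join(labels[i:]),
    }
```

Applied to the Amazon example DSN above, this returns the key, the decoded secret, the bucket name and the endpoint as separate values, ready to paste into a client’s configuration.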

Save the parameters¶

Copy and paste each of these parameters into a text file, so you have them ready for use. Now that you have obtained the connection parameters, you can use them to connect with the client of your choice.

Choose a client¶

How-to guides are provided below for connecting to our storage using:

  • AWS CLI, Amazon’s official S3 client
  • s3cmd, an alternative command-line utility
  • Transmit, a popular storage client for Macintosh
  • Cyberduck, a popular storage client for Macintosh and Windows

Connect using AWS CLI¶

The AWS CLI is Amazon’s official S3 client. It’s a free, Python-based application.

Install and configure AWS CLI¶

Run:

  pip install awscli
  aws configure

You will be prompted for some of the storage access parameter values, extracted from the DSN, that you copied earlier.

  • AWS Access Key ID - key
  • AWS Secret Access Key - secret key
  • Default region name - storage region
  • Default output format - leave blank

Interact with your storage¶

Run aws s3 followed by options, commands and parameters. For example, to list the contents of a bucket:

  aws s3 ls example-test-68564d3f78d04cd2935f-8f20b19.aldryn-media.io
  PRE filer_public/
  PRE filer_public_thumbnails/

Or, to copy (cp) a file from your own computer to S3:

  aws s3 cp example.png s3://example-test-68564d3f78d04cd2935f-8f20b19.aldryn-media.io/example.png
  upload: ./example.png to s3://example-test-68564d3f78d04cd2935f-8f20b19.aldryn-media.io/example.png

Using AWS CLI with other providers

For non-AWS providers, such as Exoscale, you will need to add the --endpoint-url option to the command, as the AWS CLI otherwise assumes an endpoint on .amazonaws.com. For the Exoscale example above, you would use:

  aws s3 --endpoint-url=https://sos-ch-dk-2.exo.io [...]

Note that the scheme (typically https://) must be included.
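If you script these commands, you can avoid retyping the option by building the command programmatically. The helper below is hypothetical (it is not part of Divio’s tooling); it simply assembles an aws s3 invocation, adding --endpoint-url with the https:// scheme when a non-AWS endpoint is supplied:

```python
import subprocess

def aws_s3(args, endpoint=None):
    """Build an `aws s3` command, adding --endpoint-url for non-AWS hosts.

    Hypothetical helper: `endpoint` is the bare host from the DSN,
    e.g. "sos-ch-dk-2.exo.io"; the https:// scheme is added here.
    """
    cmd = ["aws", "s3", *args]
    if endpoint:
        cmd.append(f"--endpoint-url=https://{endpoint}")
    return cmd  # pass to subprocess.run(cmd, check=True) to execute

# e.g. listing an Exoscale bucket:
# subprocess.run(aws_s3(["ls", "s3://example-bucket"],
#                       endpoint="sos-ch-dk-2.exo.io"), check=True)
```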

Additional usage information¶

Run aws s3 help for more information on commands, or refer to the AWS CLI Command Reference. The AWS CLI can maintain multiple profiles and offers other features, but it’s beyond the scope of this documentation to explain that here.

The aws configure command stores the configuration in ~/.aws.

Connect using s3cmd¶

S3cmd is a free, Python-based command line tool and client for uploading, retrieving and managing data in Amazon S3 and other cloud storage service providers that use the S3 protocol.

Install and configure s3cmd¶

Run:

  pip install s3cmd
  s3cmd --configure

You will be prompted for some of the storage access parameter values, extracted from the DSN, that you copied earlier:

  • Access Key - enter the key from the DSN
  • Secret Key - enter the secret key from the DSN
  • Default Region - enter the storage region
  • S3 Endpoint - enter the endpoint from the DSN
    All other settings can be left untouched.

When you have entered the values, s3cmd will offer to test a connection with them (note that when using AWS, this test will fail - you can ignore the failure).

Interact with your storage¶

Run s3cmd followed by options, commands and parameters. For example, to list the contents of a bucket:

  s3cmd ls s3://example-test-68564d3f78d04cd2935f-8f20b19.aldryn-media.io

Note that the scheme (s3://) is required in front of the bucket name.

Additional usage information¶

Run s3cmd for more information on commands, or refer to Usage.

Using s3cmd you can take advantage of the --recursive option for iterating over the entire bucket contents; however, it’s beyond the scope of this documentation to explain this or other features here.

s3cmd --configure creates a configuration file at ~/.s3cfg.

Connect using Transmit¶

Install the Transmit file transfer application for Macintosh.

Create a new connection. You will need to enter some of the storage access parameter values, extracted from the DSN, that you copied earlier:

Setting        Value
Protocol       Amazon S3
Address        endpoint
Access Key ID  key
Password       secret key
Remote Path    bucket name

Connect using Cyberduck¶

Install Cyberduck.

Create a new bookmark (note that you cannot simply use the Open Connection dialog, because this will not allow you to provide the required bucket name in order to proceed). You will be prompted for some of the storage access parameter values, extracted from the DSN, that you copied earlier:

Setting                 Value
Protocol                Amazon S3
Server                  endpoint
Access Key ID           key
Path (in More Options)  bucket name

On attempting to connect, you will be prompted for the Secret Access Key; use the secret key.

For Exoscale (Divio Cloud Swiss region) deployments, you can also download and install the Exoscale profile for Cyberduck, which includes some prepared configuration.

Use Divio tools for local access to Cloud storage¶

The project’s media files can be found in the /data/media directory, and can be managed and manipulated in the normal way on your own computer.

Be aware that if you edit project files locally, your operating system may save some hidden files. When you push your media to the cloud, these hidden files will be pushed too. However, this will not usually present a problem.

Pushing and pulling media files¶

The Divio app includes an option to Upload (push) and Download (pull) media files to and from the cloud test server.

The Divio CLI includes pull and push commands that target the test or live server as required.

Warning

Note that all push and pull operations completely replace all files at the destination, and do not perform any merges of assets. Locally, the /data/media directory will be deleted and replaced; on the cloud, the entire bucket will be replaced.

Limitations¶

You may encounter some file transfer size limitations when pushing and pulling media using the Divio app or the Divio CLI. Interacting directly with the S3 storage bucket is a way around this.

It can also be much faster, and allows selective changes to files in the system.

Storage ACLs (Access Control Lists)¶

When uploading files to your storage, note that you may need to specify the ACLs - in effect, the file permissions - on the files explicitly. If you don’t set the correct ACLs, you may find that attempts to retrieve them (for example in a web browser) give an “access denied” error.

On AWS S3, the public-read ACL is set by default. This is the ACL required for general use.
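With the AWS CLI, the ACL can be set explicitly at upload time using its --acl option. A sketch of such a command, reusing the example bucket from this guide (for non-AWS providers you would also append the --endpoint-url option described earlier):

```python
import subprocess

# Sketch: upload a file with an explicit public-read ACL via the AWS CLI.
# public-read makes the object readable by anyone, which is what
# web-served media requires.
cmd = [
    "aws", "s3", "cp", "example.png",
    "s3://example-test-68564d3f78d04cd2935f-8f20b19.aldryn-media.io/example.png",
    "--acl", "public-read",
]
# subprocess.run(cmd, check=True)  # uncomment to actually upload
```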

Source: http://docs.divio.com/en/latest/how-to/interact-storage.html