Switch from public to private buckets • Commercial Edition
WARNING
Starting with v1.4.0 of the Commercial Edition, Plane will use private storage buckets for any file uploaded to your Plane instance.
INFO
New installations with default storage, which is MinIO, don't need to change anything. For S3 or S3-compatible storage, see the instructions below.
While you can use the current public storage paradigm that Plane has followed so far, we highly recommend you migrate to private storage buckets which ensure greater security and give you more control over how files are accessed.
INFO
To keep public storage on external S3-compatible services, you still have to update your CORS policy.
See the instructions to switch to private storage by the provider you use below.
For default MinIO storage
Simply run the command ↓.

```bash
docker exec -it <api_container> python manage.py update_bucket
```

A successful run keeps any public files you already have accessible while moving you to private storage.
For external storage • S3 or S3 compatible
There are two parts to this—updating your CORS policy and then switching to private storage.
Update bucket's CORS policy
WARNING
This step is critical if you are using external storage to ensure continued functionality.
Here’s a sample CORS policy for your reference. Just replace <YOUR_DOMAIN> with your actual domain and apply the policy to your bucket.
```json
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "GET",
      "POST",
      "PUT",
      "DELETE",
      "HEAD"
    ],
    "AllowedOrigins": [
      "<YOUR_DOMAIN>"
    ],
    "ExposeHeaders": [
      "ETag",
      "x-amz-server-side-encryption",
      "x-amz-request-id",
      "x-amz-id-2"
    ],
    "MaxAgeSeconds": 3000
  }
]
```

Switch to private storage
WARNING
Don't start from here if you haven't updated your CORS policy.
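If you still need to apply the CORS policy from the previous section, here's one way to script it with the AWS CLI. This is a sketch: the domain and bucket name below are placeholders you must replace with your own values.

```shell
# Fill in your real domain here (placeholder value shown)
DOMAIN="https://plane.example.com"

# Write the CORS policy from this guide to cors.json
cat > cors.json <<EOF
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "POST", "PUT", "DELETE", "HEAD"],
    "AllowedOrigins": ["${DOMAIN}"],
    "ExposeHeaders": ["ETag", "x-amz-server-side-encryption", "x-amz-request-id", "x-amz-id-2"],
    "MaxAgeSeconds": 3000
  }
]
EOF

# Sanity-check that the file is valid JSON before applying it
python3 -m json.tool cors.json > /dev/null && echo "cors.json OK"

# Apply it to your bucket (requires a configured AWS CLI; bucket name is a placeholder):
# aws s3api put-bucket-cors --bucket my-plane-bucket --cors-configuration file://cors.json
```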
To migrate from public to private bucket storage, follow the instructions below:
First, make sure you have the following permissions on your S3 bucket. If you don't, make changes to get those permissions on your bucket first.

- s3:GetObject: so you can keep accessing your existing public files
- s3:ListBucket: so you can apply policies to your bucket for public access
- s3:PutObject: so you can create new files
- s3:PutBucketPolicy: so you can update your bucket's policy
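The permissions above could be granted with an IAM policy along these lines. This is only a sketch: the bucket name is a placeholder, and note that the object-level actions (GetObject, PutObject) apply to the bucket's contents while the bucket-level actions (ListBucket, PutBucketPolicy) apply to the bucket itself.

```shell
# Write a candidate IAM policy granting the four permissions
# (my-plane-bucket is a placeholder bucket name)
cat > plane-bucket-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-plane-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:PutBucketPolicy"],
      "Resource": "arn:aws:s3:::my-plane-bucket"
    }
  ]
}
EOF

# Verify the policy file parses as JSON
python3 -m json.tool plane-bucket-policy.json > /dev/null && echo "policy OK"
```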
Now, run the command ↓.

```bash
docker exec -it <api_container> python manage.py update_bucket
```

TIP
- If the command finds the necessary permissions missing, it will generate a permissions.json file which you can use to update your bucket policy manually. Here's how the permissions.json file should look.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": [
        "arn:aws:s3:::<bucket_name>/<object_1>",
        "arn:aws:s3:::<bucket_name>/<object_2>"
      ]
    }
  ]
}
```

- To copy the permissions.json file to the local machine, run the command ↓.

```bash
docker cp <api_container>:/code/permissions.json .
```
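Once copied out, the generated policy can be applied to the bucket manually. The sketch below uses a stand-in permissions.json with a placeholder bucket name and object key; your generated file will contain the real object ARNs, and the final command assumes a configured AWS CLI.

```shell
# Stand-in for the permissions.json that update_bucket generates
# (bucket name and object key are placeholders)
cat > permissions.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": ["arn:aws:s3:::my-plane-bucket/sample-object"]
    }
  ]
}
EOF

# Inspect the policy before applying it
python3 -m json.tool permissions.json

# Apply it as the bucket policy (requires a configured AWS CLI):
# aws s3api put-bucket-policy --bucket my-plane-bucket --policy file://permissions.json
```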

