From 081b14007314062560ffc6fdf6688c243cf9413d Mon Sep 17 00:00:00 2001
From: shiyuhang <1136742008@qq.com>
Date: Mon, 30 Mar 2026 12:38:15 +0800
Subject: [PATCH 1/9] premium export
---
TOC-tidb-cloud-premium.md | 3 +
tidb-cloud/premium/external-storage.md | 250 +++++++++++++++++++++++++
tidb-cloud/premium/premium-export.md | 192 +++++++++++++++++++
3 files changed, 445 insertions(+)
create mode 100644 tidb-cloud/premium/external-storage.md
create mode 100644 tidb-cloud/premium/premium-export.md
diff --git a/TOC-tidb-cloud-premium.md b/TOC-tidb-cloud-premium.md
index a1080a1bb844d..e4b6e8e8389f7 100644
--- a/TOC-tidb-cloud-premium.md
+++ b/TOC-tidb-cloud-premium.md
@@ -210,6 +210,9 @@
- [Import Parquet Files from Cloud Storage](/tidb-cloud/import-parquet-files-serverless.md)
- [Import Snapshot Files from Cloud Storage](/tidb-cloud/import-snapshot-files-serverless.md)
- [Import Data Using MySQL CLI](/tidb-cloud/premium/import-with-mysql-cli-premium.md)
+ - Export Data from TiDB Cloud Premium
+ - [Export Data from Premium](/tidb-cloud/premium/premium-export.md)
+ - [Configure External Storage Access](/tidb-cloud/premium/external-storage.md)
- Reference
- [Configure External Storage Access for TiDB Cloud](/tidb-cloud/configure-external-storage-access.md)
- [Naming Conventions for Data Import](/tidb-cloud/naming-conventions-for-data-import.md)
diff --git a/tidb-cloud/premium/external-storage.md b/tidb-cloud/premium/external-storage.md
new file mode 100644
index 0000000000000..7ae7561b7a8f7
--- /dev/null
+++ b/tidb-cloud/premium/external-storage.md
@@ -0,0 +1,250 @@
+---
+title: Configure External Storage Access
+summary: Learn how to configure cross-account access to an external storage such as Amazon Simple Storage Service (Amazon S3).
+aliases: ['/tidbcloud/serverless-external-storage']
+---
+
+# Configure External Storage Access
+
+If you want to export data to an external storage in a TiDB Cloud instance, you need to configure cross-account access. This document describes how to configure access to an external storage for {{{ .premium }}} instances.
+
+## Configure Amazon S3 access
+
+To allow a TiDB Cloud instance to export data to your Amazon S3 bucket, configure the bucket access for the instance using either of the following methods:
+
+- [Use a Role ARN](#configure-amazon-s3-access-using-a-role-arn): use a Role ARN to access your Amazon S3 bucket.
+- [Use an AWS access key](#configure-amazon-s3-access-using-an-aws-access-key): use the access key of an IAM user to access your Amazon S3 bucket.
+
+### Configure Amazon S3 access using a Role ARN
+
+It is recommended that you use [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) to create a role ARN. Take the following steps to create one:
+
+> **Note:**
+>
+> Role ARN access to Amazon S3 is only supported for instances with AWS as the cloud provider. If you use a different cloud provider, use an AWS access key instead. For more information, see [Configure Amazon S3 access using an AWS access key](#configure-amazon-s3-access-using-an-aws-access-key).
+
+1. Open the **Export** page for your target instance.
+
+ 1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**TiDB Instances**](https://tidbcloud.com/tidbs) page.
+
+ 2. Click the name of your target instance to go to its overview page, and then click **Data** > **Export** in the left navigation pane.
+
+2. Open the **Add New ARN** dialog.
+
+ 1. Click **Export data**.
+ 2. Choose **Amazon S3** in **Target Connection**.
+ 3. Fill in the **Folder URI** field.
+ 3. Choose **AWS Role ARN** and click **Click here to create new one with AWS CloudFormation**.
+
+3. Create a role ARN with an AWS CloudFormation template.
+
+ 1. In the **Add New ARN** dialog, click **AWS Console with CloudFormation Template**.
+
+ 2. Log in to the [AWS Management Console](https://console.aws.amazon.com) and you will be redirected to the AWS CloudFormation **Quick create stack** page.
+
+ 3. Fill in the **Role Name**.
+
+    4. Acknowledge the creation of IAM resources, and then click **Create stack** to create the role ARN.
+
+ 5. After the CloudFormation stack is executed, you can click the **Outputs** tab and find the Role ARN value in the **Value** column.
+
+ 
+
+If you have any trouble creating a role ARN with AWS CloudFormation, you can take the following steps to create one manually:
+
+
+
+1. In the **Add New ARN** dialog described in previous instructions, click **Having trouble? Create Role ARN manually**. You will get the **TiDB Cloud Account ID** and **TiDB Cloud External ID**.
+
+2. In the AWS Management Console, create a managed policy for your Amazon S3 bucket.
+
+ 1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/) and open the [Amazon S3 console](https://console.aws.amazon.com/s3/).
+
+ 2. In the **Buckets** list, choose the name of your bucket with the source data, and then click **Copy ARN** to get your S3 bucket ARN (for example, `arn:aws:s3:::tidb-cloud-source-data`). Take a note of the bucket ARN for later use.
+
+ 
+
+ 3. Open the [IAM console](https://console.aws.amazon.com/iam/), click **Policies** in the left navigation pane, and then click **Create Policy**.
+
+ 
+
+ 4. On the **Create policy** page, click the **JSON** tab.
+
+ 5. Configure the policy in the policy text field according to your needs. The following is an example that you can use to export data from a TiDB Cloud instance.
+
+        - Exporting data from a TiDB Cloud instance requires the **s3:PutObject** and **s3:ListBucket** permissions.
+
+ ```json
+ {
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Sid": "VisualEditor0",
+ "Effect": "Allow",
+ "Action": [
+ "s3:PutObject"
+ ],
+                    "Resource": "<Your S3 bucket ARN>/<Directory of the source data>/*"
+ },
+ {
+ "Sid": "VisualEditor1",
+ "Effect": "Allow",
+ "Action": [
+ "s3:ListBucket"
+ ],
+                    "Resource": "<Your S3 bucket ARN>"
+ }
+ ]
+ }
+ ```
+
+ In the policy text field, replace the following configurations with your own values.
+
+        - `"Resource": "<Your S3 bucket ARN>/<Directory of the source data>/*"`. For example:
+
+ - If your source data is stored in the root directory of the `tidb-cloud-source-data` bucket, use `"Resource": "arn:aws:s3:::tidb-cloud-source-data/*"`.
+ - If your source data is stored in the `mydata` directory of the bucket, use `"Resource": "arn:aws:s3:::tidb-cloud-source-data/mydata/*"`.
+
+ Make sure that `/*` is added to the end of the directory so TiDB Cloud can access all files in this directory.
+
+        - `"Resource": "<Your S3 bucket ARN>"`, for example, `"Resource": "arn:aws:s3:::tidb-cloud-source-data"`.
+
+        - If you have enabled server-side encryption with an AWS Key Management Service key (SSE-KMS) using a customer-managed key, make sure the following configuration is included in the policy. `"arn:aws:kms:ap-northeast-1:105880447796:key/c3046e91-fdfc-4f3a-acff-00597dd3801f"` is a sample KMS key of the bucket.
+
+            ```json
+ {
+ "Sid": "AllowKMSkey",
+ "Effect": "Allow",
+ "Action": [
+ "kms:Decrypt"
+ ],
+ "Resource": "arn:aws:kms:ap-northeast-1:105880447796:key/c3046e91-fdfc-4f3a-acff-00597dd3801f"
+ }
+ ```
+
+ - If the objects in your bucket have been copied from another encrypted bucket, the KMS key value needs to include the keys of both buckets. For example, `"Resource": ["arn:aws:kms:ap-northeast-1:105880447796:key/c3046e91-fdfc-4f3a-acff-00597dd3801f","arn:aws:kms:ap-northeast-1:495580073302:key/0d7926a7-6ecc-4bf7-a9c1-a38f0faec0cd"]`.
+
+ 6. Click **Next**.
+
+ 7. Set a policy name, add a tag of the policy (optional), and then click **Create policy**.
+
+3. In the AWS Management Console, create an access role for TiDB Cloud and get the role ARN.
+
+ 1. In the [IAM console](https://console.aws.amazon.com/iam/), click **Roles** in the left navigation pane, and then click **Create role**.
+
+ 
+
+ 2. To create a role, fill in the following information:
+
+ - In **Trusted entity type**, select **AWS account**.
+ - In **An AWS account**, select **Another AWS account**, and then paste the TiDB Cloud account ID to the **Account ID** field.
+ - In **Options**, click **Require external ID (Best practice when a third party will assume this role)**, and then paste the TiDB Cloud External ID to the **External ID** field.
+
+ 3. Click **Next** to open the policy list, choose the policy you just created, and then click **Next**.
+
+ 4. In **Role details**, set a name for the role, and then click **Create role** in the lower-right corner. After the role is created, the list of roles is displayed.
+
+ 5. In the list of roles, click the name of the role that you just created to go to its summary page, and then you can get the role ARN.
+
+ 
+
+
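The required permissions can be sanity-checked offline before you attach the policy. The following is a rough sketch (plain Python; the bucket ARN is the sample value used above) that confirms a policy document grants both actions that export needs:

```python
REQUIRED_ACTIONS = {"s3:PutObject", "s3:ListBucket"}  # needed for export

# The example policy from above, using the sample bucket ARN.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::tidb-cloud-source-data/*",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::tidb-cloud-source-data",
        },
    ],
}

# Collect every action granted by an Allow statement.
granted = {
    action
    for stmt in policy["Statement"]
    if stmt["Effect"] == "Allow"
    for action in stmt["Action"]
}
missing = REQUIRED_ACTIONS - granted
print("missing actions:", sorted(missing))  # → missing actions: []
```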
+
+### Configure Amazon S3 access using an AWS access key
+
+We recommend that you use an IAM user (instead of the AWS account root user) to create an access key.
+
+Take the following steps to configure an access key:
+
+1. Create an IAM user. For more information, see [creating an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console).
+
+2. Use your AWS account ID or account alias, and your IAM user name and password to sign in to [the IAM console](https://console.aws.amazon.com/iam).
+
+3. Create an access key. For more information, see [creating an access key for an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey).
+
+> **Note:**
+>
+> TiDB Cloud does not store your access keys. It is recommended that you [delete the access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey) after the import or export is complete.
+
+## Configure Azure Blob Storage access
+
+To allow TiDB Cloud to export data to your Azure Blob container, you need to create a service SAS token for the container.
+
+You can create a SAS token either by using an [Azure ARM template](https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/overview) (recommended) or by configuring one manually.
+
+To create a SAS token using an Azure ARM template, take the following steps:
+
+1. Open the **Export** page for your target instance.
+
+ 1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**TiDB Instances**](https://tidbcloud.com/tidbs) page.
+
+ 2. Click the name of your target instance to go to its overview page, and then click **Data** > **Export** in the left navigation pane.
+
+2. Open the **Generate New SAS Token via ARM Template Deployment** dialog.
+
+ 1. Click **Export Data**.
+
+ 2. Choose **Azure Blob Storage** in **Target Connection**.
+
+ 2. Click **Click here to create a new one with Azure ARM template** under the SAS Token field.
+
+3. Create a SAS token with the Azure ARM template.
+
+ 1. In the **Generate New SAS Token via ARM Template Deployment** dialog, click **Click to open the Azure Portal with the pre-configured ARM template**.
+
+ 2. After logging in to Azure, you will be redirected to the Azure **Custom deployment** page.
+
+    3. Fill in the **Resource group** and **Storage Account Name** on the **Custom deployment** page. You can get this information from the overview page of the storage account where the container is located.
+
+ 
+
+ 4. Click **Review + create** or **Next** to review the deployment. Click **Create** to start the deployment.
+
+    5. After the deployment is complete, you will be redirected to the deployment overview page. Navigate to the **Outputs** section to get the SAS token.
+
+If you have any trouble creating a SAS token with the Azure ARM template, take the following steps to create one manually:
+
+
+
+1. On the [Azure Storage account](https://portal.azure.com/#browse/Microsoft.Storage%2FStorageAccounts) page, click your storage account to which the container belongs.
+
+2. On your **Storage account** page, click **Security + networking**, and then click **Shared access signature**.
+
+ 
+
+3. On the **Shared access signature** page, create a service SAS token with needed permissions as follows. For more information, see [Create a service SAS token](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview).
+
+ 1. In the **Allowed services** section, choose the **Blob** service.
+ 2. In the **Allowed Resource types** section, choose **Container** and **Object**.
+ 3. In the **Allowed permissions** section, choose the **Read** and **Write** permissions.
+
+ 4. Adjust **Start and expiry date/time** as needed.
+ 5. You can keep the default values for other settings.
+
+ 
+
+4. Click **Generate SAS and connection string** to generate the SAS token.
+
+
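As a cross-check, the console selections above correspond to the account SAS query parameters `ss` (services), `srt` (resource types), and `sp` (permissions). This sketch only assembles those parameter strings; it is illustrative, because a usable token also needs the validity window and the signature that Azure computes:

```python
# Map the console selections above to account-SAS query parameters.
# Illustrative only: a real token also carries st/se (validity window)
# and sig (the signature Azure computes), so generate it in the portal.
selections = {
    "services": ["Blob"],                       # Allowed services
    "resource_types": ["Container", "Object"],  # Allowed resource types
    "permissions": ["Read", "Write"],           # Allowed permissions
}

ABBREV = {"Blob": "b", "Container": "c", "Object": "o", "Read": "r", "Write": "w"}

params = {
    "ss": "".join(ABBREV[s] for s in selections["services"]),
    "srt": "".join(ABBREV[r] for r in selections["resource_types"]),
    "sp": "".join(ABBREV[p] for p in selections["permissions"]),
}
print("&".join(f"{k}={v}" for k, v in params.items()))  # → ss=b&srt=co&sp=rw
```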
+
+## Configure Alibaba Cloud Object Storage Service (OSS) access
+
+To allow TiDB Cloud to export data to your Alibaba Cloud OSS bucket, you need to create an AccessKey pair that has access to the bucket.
+
+Take the following steps to configure an AccessKey pair:
+
+1. Create a RAM user and get the AccessKey pair. For more information, see [Create a RAM user](https://www.alibabacloud.com/help/en/ram/user-guide/create-a-ram-user).
+
+ In the **Access Mode** section, select **Using permanent AccessKey to access**.
+
+2. Create a custom policy with the required permissions. For more information, see [Create custom policies](https://www.alibabacloud.com/help/en/ram/user-guide/create-a-custom-policy).
+
+ - In the **Effect** section, select **Allow**.
+ - In the **Service** section, select **Object Storage Service**.
+    - In the **Action** section, select the `oss:PutObject` and `oss:GetBucketInfo` actions.
+
+ - In the **Resource** section, select the bucket and the objects in the bucket.
+
+3. Attach the custom policies to the RAM user. For more information, see [Grant permissions to a RAM user](https://www.alibabacloud.com/help/en/ram/user-guide/grant-permissions-to-the-ram-user).
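If you prefer to paste a policy document instead of using the console selectors, the selections in step 2 roughly correspond to the following RAM policy sketch (the bucket name is a placeholder; verify the exact syntax against Alibaba Cloud's RAM policy reference):

```python
import json

bucket = "my-export-bucket"  # hypothetical bucket name

# RAM policy sketch mirroring the console selections: Allow effect,
# the two OSS actions, and the bucket plus its objects as resources.
policy = {
    "Version": "1",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["oss:PutObject", "oss:GetBucketInfo"],
            "Resource": [
                f"acs:oss:*:*:{bucket}",    # the bucket itself
                f"acs:oss:*:*:{bucket}/*",  # objects in the bucket
            ],
        }
    ],
}
print(json.dumps(policy, indent=2))
```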
diff --git a/tidb-cloud/premium/premium-export.md b/tidb-cloud/premium/premium-export.md
new file mode 100644
index 0000000000000..663f6ad5bc04a
--- /dev/null
+++ b/tidb-cloud/premium/premium-export.md
@@ -0,0 +1,192 @@
+---
+title: Export Data from {{{ .premium }}}
+summary: Learn how to export data from {{{ .premium }}} instances.
+---
+
+# Export Data from {{{ .premium }}}
+
+TiDB Cloud enables you to export data from a {{{ .premium }}} instance to an external storage service. You can use the exported data for backup, migration, data analysis, or other purposes.
+
+While you can also export data using tools such as [mysqldump](https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html) and TiDB [Dumpling](https://docs.pingcap.com/tidb/dev/dumpling-overview), the export feature provided by TiDB Cloud offers a more convenient and efficient way to export data from an instance. It brings the following benefits:
+
+- Convenience: the export service provides a simple and easy-to-use way to export data from an instance, eliminating the need for additional tools or resources.
+- Isolation: the export service uses separate computing resources, ensuring isolation from the resources used by your online services.
+- Consistency: the export service ensures the consistency of the exported data without causing locks, which does not affect your online services.
+
+> **Note:**
+>
+> Please export data smaller than 100 GiB; otherwise, the process may fail. If you need to export larger datasets, please [contact Us](https://www.pingcap.com/contact-us)
+
+## Export locations
+
+You can export data to the following external storage locations:
+
+- [Amazon S3](https://aws.amazon.com/s3/)
+- [Azure Blob Storage](https://azure.microsoft.com/en-us/services/storage/blobs/)
+- [Alibaba Cloud Object Storage Service (OSS)](https://www.alibabacloud.com/product/oss)
+
+### Amazon S3
+
+To export data to Amazon S3, you need to provide the following information:
+
+- URI: `s3://<bucket-name>/<folder-path>/`
+- One of the following access credentials:
+ - [An access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html): make sure the access key has the `s3:PutObject` and `s3:ListBucket` permissions.
+ - [A role ARN](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html): make sure the role ARN (Amazon Resource Name) has the `s3:PutObject` and `s3:ListBucket` permissions. Note that only instances hosted on AWS support the role ARN.
+
+For more information, see [Configure External Storage Access](/tidb-cloud/premium/external-storage.md#configure-amazon-s3-access).
+
+### Azure Blob Storage
+
+To export data to Azure Blob Storage, you need to provide the following information:
+
+- URI: `azure://<account-name>.blob.core.windows.net/<container-name>/<folder-path>/` or `https://<account-name>.blob.core.windows.net/<container-name>/<folder-path>/`
+- Access credential: a [shared access signature (SAS) token](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview) for your Azure Blob Storage container. Make sure the SAS token has the `Read` and `Write` permissions on the `Container` and `Object` resources.
+
+For more information, see [Configure External Storage Access](/tidb-cloud/premium/external-storage.md#configure-azure-blob-storage-access).
+
+### Alibaba Cloud OSS
+
+To export data to Alibaba Cloud OSS, you need to provide the following information:
+
+- URI: `oss://<bucket-name>/<folder-path>/`
+- Access credential: An [AccessKey pair](https://www.alibabacloud.com/help/en/ram/user-guide/create-an-accesskey-pair) for your Alibaba Cloud account. Make sure the AccessKey pair has the `oss:PutObject` and `oss:GetBucketInfo` permissions.
+
+For more information, see [Configure External Storage Access](/tidb-cloud/premium/external-storage.md#configure-alibaba-cloud-object-storage-service-oss-access).
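All three locations use a bucket-plus-folder URI. As a quick client-side sanity check before creating a task, you can match a URI against rough patterns derived from the formats above (the patterns are an assumption for illustration, not TiDB Cloud's actual validation logic):

```python
import re

# Rough patterns for the URI formats listed above (an assumption for
# illustration; TiDB Cloud performs its own validation).
PATTERNS = {
    "s3": re.compile(r"^s3://[^/]+/.+"),
    "azure": re.compile(r"^(azure|https)://[^/]+\.blob\.core\.windows\.net/[^/]+/.+"),
    "oss": re.compile(r"^oss://[^/]+/.+"),
}

def is_valid_export_uri(uri: str) -> bool:
    """Return True if the URI matches one of the supported shapes."""
    return any(p.match(uri) for p in PATTERNS.values())

print(is_valid_export_uri("s3://my-bucket/exports/"))  # → True
print(is_valid_export_uri("gs://my-bucket/exports/"))  # → False
```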
+
+## Export options
+
+### Data filtering
+
+The TiDB Cloud console enables you to select the databases and tables that you want to export.
+
+### Data formats
+
+You can export data in the following formats:
+
+- `SQL`: export data in SQL format.
+- `CSV`: export data in CSV format. You can specify the following options:
+    - `delimiter`: specify the character used to quote fields in the exported data. The default value is `"`.
+ - `separator`: specify the character used to separate fields in the exported data. The default separator is `,`.
+ - `header`: specify whether to include a header row in the exported data. The default value is `true`.
+ - `null-value`: specify the string that represents a NULL value in the exported data. The default value is `\N`.
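Note that the export option named `separator` maps to the `delimiter` argument of Python's `csv` module, while the export option named `delimiter` maps to `quotechar`. A sketch of reading back an export produced with the defaults (the `\N` handling is an assumption based on the default `null-value`):

```python
import csv
import io

# Sample export produced with the defaults above: separator "," (the csv
# module's delimiter), delimiter '"' (quotechar), a header row, \N for NULL.
exported = 'id,name\n1,"Alice"\n2,\\N\n'

rows = list(csv.reader(io.StringIO(exported), delimiter=",", quotechar='"'))
header, data = rows[0], rows[1:]
# Map the exported NULL marker back to Python None (assumes default null-value).
data = [[None if field == "\\N" else field for field in row] for row in data]
print(header)  # → ['id', 'name']
print(data)    # → [['1', 'Alice'], ['2', None]]
```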
+
+The schema and data are exported according to the following naming conventions:
+
+| Item | Not compressed | Compressed |
+|-----------------|-------------------------------|--------------------------------------------------------------|
+| Database schema | {database}-schema-create.sql | {database}-schema-create.sql.{compression-type} |
+| Table schema | {database}.{table}-schema.sql | {database}.{table}-schema.sql.{compression-type} |
+| Data (CSV format) | {database}.{table}.{0001}.csv | {database}.{table}.{0001}.csv.{compression-type} |
+| Data (SQL format) | {database}.{table}.{0001}.sql | {database}.{table}.{0001}.sql.{compression-type} |
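The table above can be turned into a small helper that predicts the object names an export task writes (a sketch; `compression_type` stands for the literal suffix from the table, and `0001` is the zero-padded file index):

```python
def export_file_names(database, table, file_index=1, fmt="csv",
                      compression_type=None):
    """Predict object names per the naming table above.

    compression_type is the literal suffix from the table (for example
    "gz" if your gzip output uses that extension -- an assumption here);
    None means no compression.
    """
    ext = f".{compression_type}" if compression_type else ""
    return {
        "database_schema": f"{database}-schema-create.sql{ext}",
        "table_schema": f"{database}.{table}-schema.sql{ext}",
        "data": f"{database}.{table}.{file_index:04d}.{fmt}{ext}",
    }

print(export_file_names("shop", "orders", compression_type="gz")["data"])
# → shop.orders.0001.csv.gz
```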
+
+### Data compression
+
+You can compress the exported CSV and SQL data using the following algorithms:
+
+- `gzip` (default): compress the exported data with `gzip`.
+- `snappy`: compress the exported data with `snappy`.
+- `zstd`: compress the exported data with `zstd`.
+- `none`: do not compress the exported `data`.
+
+## Examples
+
+### Export data to Amazon S3
+
+1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**TiDB Instances**](https://tidbcloud.com/tidbs) page.
+
+ > **Tip:**
+ >
+ > You can use the combo box in the upper-left corner to switch between organizations, projects, and instances.
+
+2. Click the name of your target instance to go to its overview page, and then click **Data** > **Export** in the left navigation pane.
+
+3. On the **Export** page, click **Export Data** in the upper-right corner:
+
+ - **Task Name**: enter a name for the export task. The default value is `SNAPSHOT_{snapshot_time}`.
+ - **Source Connection**: enter Username and Password of your TiDB Instance, and then click **Test Connection** to check them.
+ - **Target Connection**:
+ - **Storage Provider**: choose Amazon S3
+        - **Folder URI**: enter the Amazon S3 URI in the `s3://<bucket-name>/<folder-path>/` format.
+ - **Bucket Access**: choose one of the following access credentials and then fill in the credential information:
+ - **AWS Role ARN**: enter the role ARN that has the permission to access the bucket. It is recommended to create the role ARN with AWS CloudFormation. For more information, see [Configure External Storage Access](/tidb-cloud/premium/external-storage.md#configure-amazon-s3-access).
+ - **AWS Access Key**: enter the access key ID and access key secret that have the permission to access the bucket.
+ - **Exported Data**: choose the databases or tables you want to export.
+ - **Data Format**: choose **SQL** or **CSV**.
+ - **Compression**: choose **Gzip**, **Snappy**, **Zstd**, or **None**.
+
+4. Click **Export**.
+
+### Export data to Azure Blob Storage
+
+1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**TiDB Instances**](https://tidbcloud.com/tidbs) page.
+
+ > **Tip:**
+ >
+ > You can use the combo box in the upper-left corner to switch between organizations, projects, and instances.
+
+2. Click the name of your target instance to go to its overview page, and then click **Data** > **Export** in the left navigation pane.
+
+3. On the **Export** page, click **Export Data** in the upper-right corner and configure the following settings:
+
+ - **Task Name**: enter a name for the export task. The default value is `SNAPSHOT_{snapshot_time}`.
+    - **Source Connection**: enter the username and password of your TiDB instance, and then click **Test Connection** to check them.
+ - **Target Connection**:
+ - **Storage Provider**: choose Azure Blob Storage
+        - **Folder URI**: enter the Azure Blob Storage URI in the `azure://<account-name>.blob.core.windows.net/<container-name>/<folder-path>/` format.
+ - **SAS Token**: enter the SAS token that has the permission to access the container. It is recommended to create a SAS token with the [Azure ARM template](https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/). For more information, see [Configure External Storage Access](/tidb-cloud/premium/external-storage.md#configure-azure-blob-storage-access).
+ - **Exported Data**: choose the databases or tables you want to export.
+ - **Data Format**: choose **SQL** or **CSV**.
+ - **Compression**: choose **Gzip**, **Snappy**, **Zstd**, or **None**.
+
+4. Click **Export**.
+
+### Export data to Alibaba Cloud OSS
+
+1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**TiDB Instances**](https://tidbcloud.com/tidbs) page.
+
+ > **Tip:**
+ >
+ > You can use the combo box in the upper-left corner to switch between organizations, projects, and instances.
+
+2. Click the name of your target instance to go to its overview page, and then click **Data** > **Export** in the left navigation pane.
+
+3. On the **Export** page, click **Export Data** in the upper-right corner and configure the following settings:
+
+ - **Task Name**: enter a name for the export task. The default value is `SNAPSHOT_{snapshot_time}`.
+    - **Source Connection**: enter the username and password of your TiDB instance, and then click **Test Connection** to check them.
+ - **Target Connection**:
+ - **Storage Provider**: choose Alibaba Cloud OSS
+        - **Folder URI**: enter the Alibaba Cloud OSS URI where you want to export the data, in the `oss://<bucket-name>/<folder-path>/` format.
+ - **AccessKey ID** and **AccessKey Secret**: enter the AccessKey ID and AccessKey Secret that have the permission to access the bucket.
+ - **Exported Data**: choose the databases or tables you want to export.
+ - **Data Format**: choose **SQL** or **CSV**.
+ - **Compression**: choose **Gzip**, **Snappy**, **Zstd**, or **None**.
+
+4. Click **Export**.
+
+### Cancel an export task
+
+To cancel an ongoing export task, take the following steps:
+
+1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**TiDB Instances**](https://tidbcloud.com/tidbs) page.
+
+ > **Tip:**
+ >
+ > You can use the combo box in the upper-left corner to switch between organizations, projects, and instances.
+
+2. Click the name of your target instance to go to its overview page, and then click **Data** > **Import** in the left navigation pane.
+
+3. On the **Import** page, click **Export** to view the export task list.
+
+4. Choose the export task you want to cancel, and then click **Action**.
+
+5. Choose **Cancel** in the drop-down list. Note that you can only cancel export tasks that are in the **Running** status.
+
+## Export speed
+
+The export speed for **{{{ .premium }}}** is up to 200 MiB/s.
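Combining this ceiling with the 100 GiB size guidance gives a rough lower bound on how long a maximal export takes (back-of-envelope only; actual throughput varies):

```python
# Back-of-envelope: the 100 GiB guidance at the 200 MiB/s ceiling.
size_mib = 100 * 1024   # 100 GiB expressed in MiB
speed_mib_s = 200       # documented upper bound
seconds = size_mib / speed_mib_s
print(f"{seconds:.0f} s ≈ {seconds / 60:.1f} min")  # → 512 s ≈ 8.5 min
```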
+
+## Pricing
+
+The export service is free during the beta period. You only need to pay for the [Request Units (RUs)](/tidb-cloud/tidb-cloud-glossary.md#request-unit-ru) generated during the export process of successful or canceled tasks. For failed export tasks, you will not be charged.
From d157667e7cf52a5fa6265cfe2885b5c617659b17 Mon Sep 17 00:00:00 2001
From: shi yuhang <52435083+shiyuhang0@users.noreply.github.com>
Date: Mon, 30 Mar 2026 12:48:00 +0800
Subject: [PATCH 2/9] Apply suggestions from code review
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
---
tidb-cloud/premium/external-storage.md | 22 +++++++++++-----------
tidb-cloud/premium/premium-export.md | 14 +++++++-------
2 files changed, 18 insertions(+), 18 deletions(-)
diff --git a/tidb-cloud/premium/external-storage.md b/tidb-cloud/premium/external-storage.md
index 7ae7561b7a8f7..248545fed7681 100644
--- a/tidb-cloud/premium/external-storage.md
+++ b/tidb-cloud/premium/external-storage.md
@@ -6,7 +6,7 @@ aliases: ['/tidbcloud/serverless-external-storage']
# Configure External Storage Access
-If you want to export data to an external storage in a TiDB Cloud instance, you need to configure cross-account access. This document describes how to configure access to an external storage for {{{ .premium }}} instances.
+If you want to export data from a TiDB Cloud instance to an external storage, you need to configure cross-account access. This document describes how to configure access to an external storage for {{{ .premium }}} instances.
## Configure Amazon S3 access
@@ -17,7 +17,7 @@ To allow a TiDB Cloud instance to export data to your Amazon S3 bucket, configur
### Configure Amazon S3 access using a Role ARN
-It is recommended that you use [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) to create a role ARN. Take the following steps to create one:
+We recommend that you use [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) to create a role ARN. Take the following steps to create one:
> **Note:**
>
@@ -34,7 +34,7 @@ It is recommended that you use [AWS CloudFormation](https://docs.aws.amazon.com/
1. Click **Export data**.
2. Choose **Amazon S3** in **Target Connection**.
3. Fill in the **Folder URI** field.
- 3. Choose **AWS Role ARN** and click **Click here to create new one with AWS CloudFormation**.
+ 4. Choose **AWS Role ARN** and click **Click here to create new one with AWS CloudFormation**.
3. Create a role ARN with an AWS CloudFormation template.
@@ -61,7 +61,7 @@ If you have any trouble creating a role ARN with AWS CloudFormation, you can tak
1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/) and open the [Amazon S3 console](https://console.aws.amazon.com/s3/).
- 2. In the **Buckets** list, choose the name of your bucket with the source data, and then click **Copy ARN** to get your S3 bucket ARN (for example, `arn:aws:s3:::tidb-cloud-source-data`). Take a note of the bucket ARN for later use.
+ 2. In the **Buckets** list, choose the name of your target bucket, and then click **Copy ARN** to get your S3 bucket ARN (for example, `arn:aws:s3:::tidb-cloud-source-data`). Take a note of the bucket ARN for later use.

@@ -85,7 +85,7 @@ If you have any trouble creating a role ARN with AWS CloudFormation, you can tak
"Action": [
"s3:PutObject"
],
- "Resource": "//*"
+ "Resource": "//*"
},
{
"Sid": "VisualEditor1",
@@ -101,10 +101,10 @@ If you have any trouble creating a role ARN with AWS CloudFormation, you can tak
In the policy text field, replace the following configurations with your own values.
- - `"Resource": "//*"`. For example:
+ - `"Resource": "//*"`. For example:
- - If your source data is stored in the root directory of the `tidb-cloud-source-data` bucket, use `"Resource": "arn:aws:s3:::tidb-cloud-source-data/*"`.
- - If your source data is stored in the `mydata` directory of the bucket, use `"Resource": "arn:aws:s3:::tidb-cloud-source-data/mydata/*"`.
+ - If you want to export data to the root directory of the `tidb-cloud-source-data` bucket, use `"Resource": "arn:aws:s3:::tidb-cloud-source-data/*"`.
+ - If you want to export data to the `mydata` directory of the bucket, use `"Resource": "arn:aws:s3:::tidb-cloud-source-data/mydata/*"`.
Make sure that `/*` is added to the end of the directory so TiDB Cloud can access all files in this directory.
@@ -165,7 +165,7 @@ Take the following steps to configure an access key:
> **Note:**
>
-> TiDB Cloud does not store your access keys. It is recommended that you [delete the access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey) after the import or export is complete.
+> TiDB Cloud does not store your access keys. For security, we recommend that you [delete the access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey) after the import or export is complete.
## Configure Azure Blob Storage access
@@ -187,7 +187,7 @@ To create a SAS token using an Azure ARM template, take the following steps:
2. Choose **Azure Blob Storage** in **Target Connection**.
- 2. Click **Click here to create a new one with Azure ARM template** under the SAS Token field.
+ 3. Click **Click here to create a new one with Azure ARM template** under the SAS Token field.
3. Create a SAS token with the Azure ARM template.
@@ -214,7 +214,7 @@ If you have any trouble creating a SAS token with the Azure ARM template, take t

-3. On the **Shared access signature** page, create a service SAS token with needed permissions as follows. For more information, see [Create a service SAS token](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview).
+3. On the **Shared access signature** page, create a service SAS token with the required permissions as follows. For more information, see [Create a service SAS token](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview).
1. In the **Allowed services** section, choose the **Blob** service.
2. In the **Allowed Resource types** section, choose **Container** and **Object**.
diff --git a/tidb-cloud/premium/premium-export.md b/tidb-cloud/premium/premium-export.md
index 663f6ad5bc04a..339d2b8bd8ee2 100644
--- a/tidb-cloud/premium/premium-export.md
+++ b/tidb-cloud/premium/premium-export.md
@@ -15,7 +15,7 @@ While you can also export data using tools such as [mysqldump](https://dev.mysql
> **Note:**
>
-> Please export data smaller than 100 GiB; otherwise, the process may fail. If you need to export larger datasets, please [contact Us](https://www.pingcap.com/contact-us)
+> Please export data smaller than 100 GiB; otherwise, the process may fail. If you need to export larger datasets, please [contact us](https://www.pingcap.com/contact-us)
## Export locations
@@ -87,7 +87,7 @@ You can compress the exported CSV and SQL data using the following algorithms:
- `gzip` (default): compress the exported data with `gzip`.
- `snappy`: compress the exported data with `snappy`.
- `zstd`: compress the exported data with `zstd`.
-- `none`: do not compress the exported `data`.
+- `none`: do not compress the exported data.
## Examples
@@ -101,14 +101,14 @@ You can compress the exported CSV and SQL data using the following algorithms:
2. Click the name of your target instance to go to its overview page, and then click **Data** > **Export** in the left navigation pane.
-3. On the **Export** page, click **Export Data** in the upper-right corner:
+3. On the **Export** page, click **Export Data** in the upper-right corner and configure the following settings:
- **Task Name**: enter a name for the export task. The default value is `SNAPSHOT_{snapshot_time}`.
- - **Source Connection**: enter Username and Password of your TiDB Instance, and then click **Test Connection** to check them.
+ - **Source Connection**: enter the username and password of your TiDB instance, and then click **Test Connection** to check them.
- **Target Connection**:
- **Storage Provider**: choose Amazon S3
- **Folder URI**: enter the URI of the Amazon S3 folder in the `s3://<bucket-name>/<folder-path>/` format.
- - **Bucket Access**: choose one of the following access credentials and then fill in the credential information:
+ - **Bucket Access**: choose one of the following access credentials and then fill in the credential information:
- **AWS Role ARN**: enter the role ARN that has the permission to access the bucket. It is recommended to create the role ARN with AWS CloudFormation. For more information, see [Configure External Storage Access](/tidb-cloud/premium/external-storage.md#configure-amazon-s3-access).
- **AWS Access Key**: enter the access key ID and access key secret that have the permission to access the bucket.
- **Exported Data**: choose the databases or tables you want to export.
@@ -175,9 +175,9 @@ To cancel an ongoing export task, take the following steps:
>
> You can use the combo box in the upper-left corner to switch between organizations, projects, and instances.
-2. Click the name of your target instance to go to its overview page, and then click **Data** > **Import** in the left navigation pane.
+2. Click the name of your target instance to go to its overview page, and then click **Data** > **Export** in the left navigation pane.
-3. On the **Import** page, click **Export** to view the export task list.
+3. On the **Export** page, view the export task list.
4. Choose the export task you want to cancel, and then click **Action**.
From e6f2142dcf2277bde1ffbcd92356e1963f0d7bdf Mon Sep 17 00:00:00 2001
From: shiyuhang <1136742008@qq.com>
Date: Mon, 30 Mar 2026 12:50:39 +0800
Subject: [PATCH 3/9] remove price
---
tidb-cloud/premium/premium-export.md | 8 --------
1 file changed, 8 deletions(-)
diff --git a/tidb-cloud/premium/premium-export.md b/tidb-cloud/premium/premium-export.md
index 339d2b8bd8ee2..8f00ea1ee90a7 100644
--- a/tidb-cloud/premium/premium-export.md
+++ b/tidb-cloud/premium/premium-export.md
@@ -182,11 +182,3 @@ To cancel an ongoing export task, take the following steps:
4. Choose the export task you want to cancel, and then click **Action**.
5. Choose **Cancel** in the drop-down list. Note that you can only cancel the export task that is in the **Running** status.
-
-## Export speed
-
-The export speed for **{{{ .premium }}}** is up to 200 MiB/s.
-
-## Pricing
-
-The export service is free during the beta period. You only need to pay for the [Request Units (RUs)](/tidb-cloud/tidb-cloud-glossary.md#request-unit-ru) generated during the export process of successful or canceled tasks. For failed export tasks, you will not be charged.
From 46a32f6d10e648e713c42f705d0b54bea7f5d6df Mon Sep 17 00:00:00 2001
From: shi yuhang <52435083+shiyuhang0@users.noreply.github.com>
Date: Mon, 30 Mar 2026 12:56:44 +0800
Subject: [PATCH 4/9] Apply suggestions from code review
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
---
TOC-tidb-cloud-premium.md | 2 +-
tidb-cloud/premium/external-storage.md | 2 +-
tidb-cloud/premium/premium-export.md | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/TOC-tidb-cloud-premium.md b/TOC-tidb-cloud-premium.md
index e4b6e8e8389f7..fa6b86b33699e 100644
--- a/TOC-tidb-cloud-premium.md
+++ b/TOC-tidb-cloud-premium.md
@@ -211,7 +211,7 @@
- [Import Snapshot Files from Cloud Storage](/tidb-cloud/import-snapshot-files-serverless.md)
- [Import Data Using MySQL CLI](/tidb-cloud/premium/import-with-mysql-cli-premium.md)
- Export Data from TiDB Cloud Premium
- - [Export Data from Premium](/tidb-cloud/premium/premium-export.md)
+ - [Export Data from TiDB Cloud Premium](/tidb-cloud/premium/premium-export.md)
- [Configure External Storage Access](/tidb-cloud/premium/external-storage.md)
- Reference
- [Configure External Storage Access for TiDB Cloud](/tidb-cloud/configure-external-storage-access.md)
diff --git a/tidb-cloud/premium/external-storage.md b/tidb-cloud/premium/external-storage.md
index 248545fed7681..5c0442d245272 100644
--- a/tidb-cloud/premium/external-storage.md
+++ b/tidb-cloud/premium/external-storage.md
@@ -31,7 +31,7 @@ We recommend that you use [AWS CloudFormation](https://docs.aws.amazon.com/AWSCl
2. Open the **Add New ARN** dialog.
- 1. Click **Export data**.
+ 1. Click **Export Data**.
2. Choose **Amazon S3** in **Target Connection**.
3. Fill in the **Folder URI** field.
4. Choose **AWS Role ARN** and click **Click here to create new one with AWS CloudFormation**.
diff --git a/tidb-cloud/premium/premium-export.md b/tidb-cloud/premium/premium-export.md
index 8f00ea1ee90a7..b4bcf3a9fb867 100644
--- a/tidb-cloud/premium/premium-export.md
+++ b/tidb-cloud/premium/premium-export.md
@@ -127,7 +127,7 @@ You can compress the exported CSV and SQL data using the following algorithms:
2. Click the name of your target instance to go to its overview page, and then click **Data** > **Export** in the left navigation pane.
-3. On the **Export** page, click **Export Data** in the upper-right corner:
+3. On the **Export** page, click **Export Data** in the upper-right corner:
- **Task Name**: enter a name for the export task. The default value is `SNAPSHOT_{snapshot_time}`.
- **Source Connection**: enter Username and Password of your TiDB Instance, and then click **Test Connection** to check them.
From aa865121de51062ee4f060b7c86569101c5d4fa1 Mon Sep 17 00:00:00 2001
From: shiyuhang <1136742008@qq.com>
Date: Mon, 30 Mar 2026 12:58:07 +0800
Subject: [PATCH 5/9] some fix
---
tidb-cloud/premium/premium-export.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tidb-cloud/premium/premium-export.md b/tidb-cloud/premium/premium-export.md
index b4bcf3a9fb867..973db791c8115 100644
--- a/tidb-cloud/premium/premium-export.md
+++ b/tidb-cloud/premium/premium-export.md
@@ -130,7 +130,7 @@ You can compress the exported CSV and SQL data using the following algorithms:
3. On the **Export** page, click **Export Data** in the upper-right corner:
- **Task Name**: enter a name for the export task. The default value is `SNAPSHOT_{snapshot_time}`.
- - **Source Connection**: enter Username and Password of your TiDB Instance, and then click **Test Connection** to check them.
+ - **Source Connection**: enter **Username** and **Password** of your TiDB Instance, and then click **Test Connection** to check them.
- **Target Connection**:
- **Storage Provider**: choose Azure Blob Storage
- **Folder URI**: enter the URI of Azure Blob Storage in the `azure://<account-name>.blob.core.windows.net/<container-name>/<folder-path>/` format.
@@ -154,7 +154,7 @@ You can compress the exported CSV and SQL data using the following algorithms:
3. On the **Export** page, click **Export Data** in the upper-right corner:
- **Task Name**: enter a name for the export task. The default value is `SNAPSHOT_{snapshot_time}`.
- - **Source Connection**: enter Username and Password of your TiDB Instance, and then click **Test Connection** to check them.
+ - **Source Connection**: enter **Username** and **Password** of your TiDB Instance, and then click **Test Connection** to check them.
- **Target Connection**:
- **Storage Provider**: choose Alibaba Cloud OSS
- **Folder URI**: enter the Alibaba Cloud OSS URI where you want to export the data, in the `oss://<bucket-name>/<folder-path>/` format.
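The folder URI formats used across these patches (Amazon S3, Azure Blob Storage, and Alibaba Cloud OSS) all follow a `scheme://host-or-bucket/folder-path/` shape. The following is a hypothetical sanity check, not a TiDB Cloud tool; the accepted schemes and the rule that the folder path must be non-empty are assumptions inferred from the formats documented here.

```python
from urllib.parse import urlparse

# Hypothetical helper: sanity-check a folder URI before pasting it into the
# Export dialog. Assumes the documented formats s3://<bucket>/<path>/,
# oss://<bucket>/<path>/, and azure://<account>.blob.core.windows.net/<container>/<path>/.
def check_folder_uri(uri: str) -> bool:
    parsed = urlparse(uri)
    if parsed.scheme not in ("s3", "oss", "azure"):
        return False
    # netloc carries the bucket (S3/OSS) or the storage-account host (Azure)
    if not parsed.netloc:
        return False
    # A non-empty path keeps exported files out of the bucket root
    return parsed.path.strip("/") != ""

print(check_folder_uri("s3://my-bucket/exports/"))  # expected: True
print(check_folder_uri("oss://my-bucket"))          # expected: False (no folder path)
```

Running the check before submitting the export task can catch a missing folder path early, rather than after the task fails.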
From 4c57d97e9dbfb0b8ef169babb4a8a9dbd1a063c7 Mon Sep 17 00:00:00 2001
From: shiyuhang <1136742008@qq.com>
Date: Mon, 30 Mar 2026 13:37:24 +0800
Subject: [PATCH 6/9] remove list bucket permission
---
tidb-cloud/premium/premium-export.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tidb-cloud/premium/premium-export.md b/tidb-cloud/premium/premium-export.md
index 973db791c8115..23622eb024c74 100644
--- a/tidb-cloud/premium/premium-export.md
+++ b/tidb-cloud/premium/premium-export.md
@@ -31,8 +31,8 @@ To export data to Amazon S3, you need to provide the following information:
- URI: `s3://<bucket-name>/<folder-path>/`
- One of the following access credentials:
- - [An access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html): make sure the access key has the `s3:PutObject` and `s3:ListBucket` permissions.
- - [A role ARN](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html): make sure the role ARN (Amazon Resource Name) has the `s3:PutObject` and `s3:ListBucket` permissions. Note that only instances hosted on AWS support the role ARN.
+ - [An access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html): make sure the access key has the `s3:PutObject` permissions.
+ - [A role ARN](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html): make sure the role ARN (Amazon Resource Name) has the `s3:PutObject` permissions. Note that only instances hosted on AWS support the role ARN.
For more information, see [Configure External Storage Access](/tidb-cloud/premium/external-storage.md#configure-amazon-s3-access).
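Patch 6 narrows the required S3 permission for export to `s3:PutObject`. As a sketch, a minimal IAM policy granting only that permission on an export prefix might look like the following; the bucket name `my-export-bucket` and the `exports/` prefix are placeholders, not values from the source, and your bucket may need extra statements depending on its configuration.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowTiDBCloudExportWrite",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-export-bucket/exports/*"
    }
  ]
}
```

A policy like this would be attached to the IAM role behind the Role ARN, or to the IAM user whose access key you supply in the export dialog.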
From fc1957d60f7191d8a131ed93ff509959d1f8500b Mon Sep 17 00:00:00 2001
From: shiyuhang <1136742008@qq.com>
Date: Mon, 30 Mar 2026 14:23:53 +0800
Subject: [PATCH 7/9] remove alias
---
tidb-cloud/premium/external-storage.md | 2 --
1 file changed, 2 deletions(-)
diff --git a/tidb-cloud/premium/external-storage.md b/tidb-cloud/premium/external-storage.md
index 5c0442d245272..2b6e55c9e9bed 100644
--- a/tidb-cloud/premium/external-storage.md
+++ b/tidb-cloud/premium/external-storage.md
@@ -1,7 +1,6 @@
---
title: Configure External Storage Access
summary: Learn how to configure cross-account access to an external storage such as Amazon Simple Storage Service (Amazon S3).
-aliases: ['/tidbcloud/serverless-external-storage']
---
# Configure External Storage Access
@@ -219,7 +218,6 @@ If you have any trouble creating a SAS token with the Azure ARM template, take t
1. In the **Allowed services** section, choose the **Blob** service.
2. In the **Allowed Resource types** section, choose **Container** and **Object**.
3. In the **Allowed permissions** section, choose the **Read** and **Write** permissions.
-
4. Adjust **Start and expiry date/time** as needed.
5. You can keep the default values for other settings.
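The manual SAS-token steps above choose the Blob service, Container and Object resource types, and Read and Write permissions. Those choices end up encoded in the token's query parameters (`ss=b`, `srt=co`, `sp=rw` for an account SAS). The following is a hypothetical check, not part of TiDB Cloud or Azure tooling; the sample token is made up and its signature redacted.

```python
from urllib.parse import parse_qs

# Hypothetical check: inspect an account SAS token's query parameters to
# confirm it matches the settings above -- Blob service (ss=b), Container and
# Object resource types (srt=co), Read and Write permissions (sp has r and w).
def sas_grants_blob_rw(sas_token: str) -> bool:
    params = parse_qs(sas_token.lstrip("?"))
    services = params.get("ss", [""])[0]
    resource_types = params.get("srt", [""])[0]
    permissions = params.get("sp", [""])[0]
    return (
        "b" in services
        and {"c", "o"} <= set(resource_types)
        and {"r", "w"} <= set(permissions)
    )

# A made-up token with a redacted signature, for illustration only
token = "sv=2022-11-02&ss=b&srt=co&sp=rw&se=2026-12-31T00:00:00Z&sig=REDACTED"
print(sas_grants_blob_rw(token))  # expected: True
```

A token that fails this check (for example, one missing the Write permission) would cause the export task to fail when writing objects to the container.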
From 06af4b777f622714da744cf64a0c7c87970f309f Mon Sep 17 00:00:00 2001
From: Aolin
Date: Thu, 2 Apr 2026 16:43:58 +0800
Subject: [PATCH 8/9] premium: replace generic "instance" with "{{{ .premium
}}} instance" and refine UI instructions in premium-export.md
---
tidb-cloud/premium/premium-export.md | 42 ++++++++++++++--------------
1 file changed, 21 insertions(+), 21 deletions(-)
diff --git a/tidb-cloud/premium/premium-export.md b/tidb-cloud/premium/premium-export.md
index 23622eb024c74..7cf321cc5cfdd 100644
--- a/tidb-cloud/premium/premium-export.md
+++ b/tidb-cloud/premium/premium-export.md
@@ -7,15 +7,15 @@ summary: Learn how to export data from {{{ .premium }}} instances.
TiDB Cloud enables you to export data from a {{{ .premium }}} instance to an external storage service. You can use the exported data for backup, migration, data analysis, or other purposes.
-While you can also export data using tools such as [mysqldump](https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html) and TiDB [Dumpling](https://docs.pingcap.com/tidb/dev/dumpling-overview), the export feature provided by TiDB Cloud offers a more convenient and efficient way to export data from an instance. It brings the following benefits:
+While you can also export data using tools such as [mysqldump](https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html) and TiDB [Dumpling](https://docs.pingcap.com/tidb/dev/dumpling-overview), the export feature provided by TiDB Cloud offers a more convenient and efficient way to export data from a {{{ .premium }}} instance. It brings the following benefits:
-- Convenience: the export service provides a simple and easy-to-use way to export data from an instance, eliminating the need for additional tools or resources.
+- Convenience: the export service provides a simple and easy-to-use way to export data from a {{{ .premium }}} instance, eliminating the need for additional tools or resources.
- Isolation: the export service uses separate computing resources, ensuring isolation from the resources used by your online services.
- Consistency: the export service ensures the consistency of the exported data without causing locks, which does not affect your online services.
> **Note:**
>
-> Please export data smaller than 100 GiB; otherwise, the process may fail. If you need to export larger datasets, please [contact us](https://www.pingcap.com/contact-us)
+> The maximum export size is 100 GiB. Exports larger than this limit might fail. To export more data or request a higher export speed, contact [TiDB Cloud Support](/tidb-cloud/tidb-cloud-support.md).
## Export locations
@@ -31,8 +31,8 @@ To export data to Amazon S3, you need to provide the following information:
- URI: `s3://<bucket-name>/<folder-path>/`
- One of the following access credentials:
- - [An access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html): make sure the access key has the `s3:PutObject` permissions.
- - [A role ARN](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html): make sure the role ARN (Amazon Resource Name) has the `s3:PutObject` permissions. Note that only instances hosted on AWS support the role ARN.
+ - [An access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html): make sure the access key has the `s3:PutObject` permission.
+ - [A role ARN](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html): make sure the role ARN (Amazon Resource Name) has the `s3:PutObject` permission. Note that only {{{ .premium }}} instances hosted on AWS support the role ARN.
For more information, see [Configure External Storage Access](/tidb-cloud/premium/external-storage.md#configure-amazon-s3-access).
@@ -99,14 +99,14 @@ You can compress the exported CSV and SQL data using the following algorithms:
>
> You can use the combo box in the upper-left corner to switch between organizations, projects, and instances.
-2. Click the name of your target instance to go to its overview page, and then click **Data** > **Export** in the left navigation pane.
+2. Click the name of your target {{{ .premium }}} instance to go to its overview page, and then click **Data** > **Export** in the left navigation pane.
-3. On the **Export** page, click **Export Data** in the upper-right corner and configure the following settings:
+3. On the **Export** page, click **Export Data** in the upper-right corner. Then configure the following settings:
- **Task Name**: enter a name for the export task. The default value is `SNAPSHOT_{snapshot_time}`.
- - **Source Connection**: enter the username and password of your TiDB instance, and then click **Test Connection** to check them.
- - **Target Connection**:
- - **Storage Provider**: choose Amazon S3
+ - **Source Connection**: enter **Username** and **Password** of your {{{ .premium }}} instance, and then click **Test Connection** to verify the credentials.
+ - **Target Connection**:
+ - **Storage Provider**: choose Amazon S3.
- **Folder URI**: enter the URI of the Amazon S3 folder in the `s3://<bucket-name>/<folder-path>/` format.
- **Bucket Access**: choose one of the following access credentials and then fill in the credential information:
- **AWS Role ARN**: enter the role ARN that has the permission to access the bucket. It is recommended to create the role ARN with AWS CloudFormation. For more information, see [Configure External Storage Access](/tidb-cloud/premium/external-storage.md#configure-amazon-s3-access).
@@ -125,14 +125,14 @@ You can compress the exported CSV and SQL data using the following algorithms:
>
> You can use the combo box in the upper-left corner to switch between organizations, projects, and instances.
-2. Click the name of your target instance to go to its overview page, and then click **Data** > **Export** in the left navigation pane.
+2. Click the name of your target {{{ .premium }}} instance to go to its overview page, and then click **Data** > **Export** in the left navigation pane.
-3. On the **Export** page, click **Export Data** in the upper-right corner:
+3. On the **Export** page, click **Export Data** in the upper-right corner. Then configure the following settings:
- **Task Name**: enter a name for the export task. The default value is `SNAPSHOT_{snapshot_time}`.
- - **Source Connection**: enter **Username** and **Password** of your TiDB Instance, and then click **Test Connection** to check them.
- - **Target Connection**:
- - **Storage Provider**: choose Azure Blob Storage
+ - **Source Connection**: enter **Username** and **Password** of your {{{ .premium }}} instance, and then click **Test Connection** to verify the credentials.
+ - **Target Connection**:
+ - **Storage Provider**: choose Azure Blob Storage.
- **Folder URI**: enter the URI of Azure Blob Storage in the `azure://<account-name>.blob.core.windows.net/<container-name>/<folder-path>/` format.
- **SAS Token**: enter the SAS token that has the permission to access the container. It is recommended to create a SAS token with the [Azure ARM template](https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/). For more information, see [Configure External Storage Access](/tidb-cloud/premium/external-storage.md#configure-azure-blob-storage-access).
- **Exported Data**: choose the databases or tables you want to export.
@@ -149,14 +149,14 @@ You can compress the exported CSV and SQL data using the following algorithms:
>
> You can use the combo box in the upper-left corner to switch between organizations, projects, and instances.
-2. Click the name of your target instance to go to its overview page, and then click **Data** > **Export** in the left navigation pane.
+2. Click the name of your target {{{ .premium }}} instance to go to its overview page, and then click **Data** > **Export** in the left navigation pane.
-3. On the **Export** page, click **Export Data** in the upper-right corner:
+3. On the **Export** page, click **Export Data** in the upper-right corner. Then configure the following settings:
- **Task Name**: enter a name for the export task. The default value is `SNAPSHOT_{snapshot_time}`.
- - **Source Connection**: enter **Username** and **Password** of your TiDB Instance, and then click **Test Connection** to check them.
- - **Target Connection**:
- - **Storage Provider**: choose Alibaba Cloud OSS
+ - **Source Connection**: enter **Username** and **Password** of your {{{ .premium }}} instance, and then click **Test Connection** to verify the credentials.
+ - **Target Connection**:
+ - **Storage Provider**: choose Alibaba Cloud OSS.
- **Folder URI**: enter the Alibaba Cloud OSS URI where you want to export the data, in the `oss://<bucket-name>/<folder-path>/` format.
- **AccessKey ID** and **AccessKey Secret**: enter the AccessKey ID and AccessKey Secret that have the permission to access the bucket.
- **Exported Data**: choose the databases or tables you want to export.
@@ -175,7 +175,7 @@ To cancel an ongoing export task, take the following steps:
>
> You can use the combo box in the upper-left corner to switch between organizations, projects, and instances.
-2. Click the name of your target instance to go to its overview page, and then click **Data** > **Export** in the left navigation pane.
+2. Click the name of your target {{{ .premium }}} instance to go to its overview page, and then click **Data** > **Export** in the left navigation pane.
3. On the **Export** page, view the export task list.
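Several hunks in this patch reference the compressed CSV output of the export service, which defaults to `gzip`. As a sketch (the sample data and in-memory file stand in for a real downloaded export file), a gzip-compressed CSV export can be read locally without decompressing to disk:

```python
import csv
import gzip
import io

# Sample CSV content standing in for an exported file; real exports are
# downloaded from the target storage (S3, Azure Blob Storage, or OSS).
sample = "id,name\n1,alice\n2,bob\n"
compressed = gzip.compress(sample.encode("utf-8"))

# gzip.open accepts a file object, so a downloaded blob can be streamed
# directly; "rt" decodes the decompressed bytes as text for the CSV reader.
with gzip.open(io.BytesIO(compressed), mode="rt", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

print(rows[0])  # expected: {'id': '1', 'name': 'alice'}
```

The same pattern applies to `.csv.gz` files produced by the export task; `snappy` and `zstd` outputs would need third-party decompression libraries instead of the standard-library `gzip` module.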
From 30de18d54bf3cc783eaa000318994cdf117e9980 Mon Sep 17 00:00:00 2001
From: Aolin
Date: Fri, 3 Apr 2026 11:13:41 +0800
Subject: [PATCH 9/9] premium: merge external-storage.md into
configure-external-storage-access.md
---
TOC-tidb-cloud-premium.md | 4 +-
.../configure-external-storage-access.md | 62 +++--
tidb-cloud/premium/external-storage.md | 248 ------------------
tidb-cloud/premium/premium-export.md | 10 +-
4 files changed, 53 insertions(+), 271 deletions(-)
delete mode 100644 tidb-cloud/premium/external-storage.md
diff --git a/TOC-tidb-cloud-premium.md b/TOC-tidb-cloud-premium.md
index fa6b86b33699e..79fd1e831d488 100644
--- a/TOC-tidb-cloud-premium.md
+++ b/TOC-tidb-cloud-premium.md
@@ -133,6 +133,7 @@
- [Connect via Private Endpoint with AWS](/tidb-cloud/premium/connect-to-premium-via-aws-private-endpoint.md)
- [Connect via Private Endpoint with Alibaba Cloud](/tidb-cloud/premium/connect-to-premium-via-alibaba-cloud-private-endpoint.md)
- [Back Up and Restore TiDB Cloud Data](/tidb-cloud/premium/backup-and-restore-premium.md)
+ - [Export Data from {{{ .premium }}}](/tidb-cloud/premium/premium-export.md)
- Use an HTAP Cluster with TiFlash
- [TiFlash Overview](/tiflash/tiflash-overview.md)
- [Create TiFlash Replicas](/tiflash/create-tiflash-replicas.md)
@@ -210,9 +211,6 @@
- [Import Parquet Files from Cloud Storage](/tidb-cloud/import-parquet-files-serverless.md)
- [Import Snapshot Files from Cloud Storage](/tidb-cloud/import-snapshot-files-serverless.md)
- [Import Data Using MySQL CLI](/tidb-cloud/premium/import-with-mysql-cli-premium.md)
- - Export Data from TiDB Cloud Premium
- - [Export Data from TiDB Cloud Premium](/tidb-cloud/premium/premium-export.md)
- - [Configure External Storage Access](/tidb-cloud/premium/external-storage.md)
- Reference
- [Configure External Storage Access for TiDB Cloud](/tidb-cloud/configure-external-storage-access.md)
- [Naming Conventions for Data Import](/tidb-cloud/naming-conventions-for-data-import.md)
diff --git a/tidb-cloud/configure-external-storage-access.md b/tidb-cloud/configure-external-storage-access.md
index 3796e472fbd11..8b3c5b4a4e87f 100644
--- a/tidb-cloud/configure-external-storage-access.md
+++ b/tidb-cloud/configure-external-storage-access.md
@@ -22,7 +22,7 @@ If you need to configure these external storages for a TiDB Cloud Dedicated clus
## Configure Amazon S3 access
-To allow a TiDB Cloud cluster or instance to access the source data in your Amazon S3 bucket, configure the bucket access for the cluster or instance using either of the following methods:
+To allow a TiDB Cloud cluster or instance to access your Amazon S3 bucket, configure the bucket access for the cluster or instance using either of the following methods:
- [Use a Role ARN](#configure-amazon-s3-access-using-a-role-arn): use a Role ARN to access your Amazon S3 bucket.
- [Use an AWS access key](#configure-amazon-s3-access-using-an-aws-access-key): use the access key of an IAM user to access your Amazon S3 bucket.
@@ -35,11 +35,11 @@ It is recommended that you use [AWS CloudFormation](https://docs.aws.amazon.com/
>
> Role ARN access to Amazon S3 is only supported for clusters or instances with AWS as the cloud provider. If you use a different cloud provider, use an AWS access key instead. For more information, see [Configure Amazon S3 access using an AWS access key](#configure-amazon-s3-access-using-an-aws-access-key).
-1. Open the **Import** page for your target cluster or instance.
+1. Open the **Import** or **Export** page for your target cluster or instance.
 1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**Clusters**](https://tidbcloud.com/project/clusters) page of your project, or navigate to the [**TiDB Instances**](https://tidbcloud.com/tidbs) page.
- 2. Click the name of your target cluster or instance to go to its overview page, and then click **Data** > **Import** in the left navigation pane.
+ 2. Click the name of your target cluster or instance to go to its overview page, and then click **Data** > **Import** or **Data** > **Export** in the left navigation pane.
2. Open the **Add New ARN** dialog.
@@ -51,10 +51,23 @@ It is recommended that you use [AWS CloudFormation](https://docs.aws.amazon.com/
- If you want to export data to Amazon S3, open the **Add New ARN** dialog as follows:
- 1. Click **Export data to...** > **Amazon S3**. If your cluster or instance has neither imported nor exported any data before, click **Click here to export data to...** > **Amazon S3** at the bottom of the page.
+
+
+ 1. Click **Export data to...** > **Amazon S3**. If your cluster has neither imported nor exported any data before, click **Click here to export data to...** > **Amazon S3** at the bottom of the page.
2. Fill in the **Folder URI** field.
3. Choose **AWS Role ARN** and click **Click here to create new one with AWS CloudFormation**.
+
+
+
+
+ 1. Click **Export Data**.
+ 2. Choose **Amazon S3** in **Target Connection**.
+ 3. Fill in the **Folder URI** field.
+ 4. Choose **AWS Role ARN** and click **Click here to create new one with AWS CloudFormation**.
+
+
+
3. Create a role ARN with an AWS CloudFormation template.
1. In the **Add New ARN** dialog, click **AWS Console with CloudFormation Template**.
@@ -80,7 +93,7 @@ If you have any trouble creating a role ARN with AWS CloudFormation, you can tak
1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/) and open the [Amazon S3 console](https://console.aws.amazon.com/s3/).
- 2. In the **Buckets** list, choose the name of your bucket with the source data, and then click **Copy ARN** to get your S3 bucket ARN (for example, `arn:aws:s3:::tidb-cloud-source-data`). Take a note of the bucket ARN for later use.
+ 2. In the **Buckets** list, choose the name of your target bucket, and then click **Copy ARN** to get your S3 bucket ARN (for example, `arn:aws:s3:::tidb-cloud-source-data`). Take a note of the bucket ARN for later use.

@@ -107,7 +120,7 @@ If you have any trouble creating a role ARN with AWS CloudFormation, you can tak
"s3:GetObjectVersion",
"s3:PutObject"
],
- "Resource": "<Your S3 bucket ARN>/<Your directory>/*"
+ "Resource": "<Your S3 bucket ARN>/<Your directory>/*"
},
{
"Sid": "VisualEditor1",
@@ -123,10 +136,10 @@ If you have any trouble creating a role ARN with AWS CloudFormation, you can tak
In the policy text field, replace the following configurations with your own values.
- - `"Resource": "<Your S3 bucket ARN>/<Your directory>/*"`. For example:
+ - `"Resource": "<Your S3 bucket ARN>/<Your directory>/*"`, where `<Your directory>` is the target directory for exported data or the source directory for imported data. For example:
- - If your source data is stored in the root directory of the `tidb-cloud-source-data` bucket, use `"Resource": "arn:aws:s3:::tidb-cloud-source-data/*"`.
- - If your source data is stored in the `mydata` directory of the bucket, use `"Resource": "arn:aws:s3:::tidb-cloud-source-data/mydata/*"`.
+ - If your data for import or export is in the root directory of the `tidb-cloud-source-data` bucket, use `"Resource": "arn:aws:s3:::tidb-cloud-source-data/*"`.
+ - If your data for import or export is in the `mydata` directory of the bucket, use `"Resource": "arn:aws:s3:::tidb-cloud-source-data/mydata/*"`.
Make sure that `/*` is added to the end of the directory so TiDB Cloud can access all files in this directory.
@@ -221,7 +234,7 @@ Take the following steps to configure a service account key:
-
+
## Configure Azure Blob Storage access
@@ -231,17 +244,36 @@ You can create a SAS token either using an [Azure ARM template](https://learn.mi
To create a SAS token using an Azure ARM template, take the following steps:
-1. Open the **Import** page for your target cluster.
+1. Open the **Import** or **Export** page for your target cluster or instance.
- 1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**Clusters**](https://tidbcloud.com/project/clusters) page of your project.
+ 1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**Clusters**](https://tidbcloud.com/project/clusters) page of your project, or navigate to the [**TiDB Instances**](https://tidbcloud.com/tidbs) page.
- 2. Click the name of your target cluster to go to its overview page, and then click **Data** > **Import** in the left navigation pane.
+ 2. Click the name of your target cluster or instance to go to its overview page, and then click **Data** > **Import** or **Data** > **Export** in the left navigation pane.
2. Open the **Generate New SAS Token via ARM Template Deployment** dialog.
- 1. Click **Export data to...** > **Azure Blob Storage**. If your cluster has neither imported nor exported any data before, click **Click here to export data to...** > **Azure Blob Storage** at the bottom of the page.
+ - If you want to import data from Azure Blob Storage:
+
+ 1. Click **Import from Azure Blob Storage**.
+ 2. Fill in the **Folder URI** field.
+ 3. In the **SAS Token** field, click **Click here to create a new one with Azure ARM template**.
+
+ - If you want to export data to Azure Blob Storage:
+
+
+
+ 1. Click **Export data to...** > **Azure Blob Storage**. If your cluster has neither imported nor exported any data before, click **Click here to export data to...** > **Azure Blob Storage** at the bottom of the page.
+ 2. Scroll down to the **Azure Blob Storage Settings** area, and then click **Click here to create a new one with Azure ARM template** under the SAS Token field.
+
+
+
+
+
+ 1. Click **Export Data**.
+ 2. Choose **Azure Blob Storage** in **Target Connection**.
+ 3. Click **Click here to create a new one with Azure ARM template** under the SAS Token field.
- 2. Scroll down to the **Azure Blob Storage Settings** area, and then click **Click here to create a new one with Azure ARM template** under the SAS Token field.
+
3. Create a SAS token with the Azure ARM template.
diff --git a/tidb-cloud/premium/external-storage.md b/tidb-cloud/premium/external-storage.md
deleted file mode 100644
index 2b6e55c9e9bed..0000000000000
--- a/tidb-cloud/premium/external-storage.md
+++ /dev/null
@@ -1,248 +0,0 @@
----
-title: Configure External Storage Access
-summary: Learn how to configure cross-account access to an external storage such as Amazon Simple Storage Service (Amazon S3).
----
-
-# Configure External Storage Access
-
-If you want to export data from a TiDB Cloud instance to an external storage, you need to configure cross-account access. This document describes how to configure access to an external storage for {{{ .premium }}} instances.
-
-## Configure Amazon S3 access
-
-To allow a TiDB Cloud instance to export data to your Amazon S3 bucket, configure the bucket access for the instance using either of the following methods:
-
-- [Use a Role ARN](#configure-amazon-s3-access-using-a-role-arn): use a Role ARN to access your Amazon S3 bucket.
-- [Use an AWS access key](#configure-amazon-s3-access-using-an-aws-access-key): use the access key of an IAM user to access your Amazon S3 bucket.
-
-### Configure Amazon S3 access using a Role ARN
-
-We recommend that you use [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) to create a role ARN. Take the following steps to create one:
-
-> **Note:**
->
-> Role ARN access to Amazon S3 is only supported for instances with AWS as the cloud provider. If you use a different cloud provider, use an AWS access key instead. For more information, see [Configure Amazon S3 access using an AWS access key](#configure-amazon-s3-access-using-an-aws-access-key).
-
-1. Open the **Export** page for your target instance.
-
- 1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**TiDB Instances**](https://tidbcloud.com/tidbs) page.
-
- 2. Click the name of your target instance to go to its overview page, and then click **Data** > **Export** in the left navigation pane.
-
-2. Open the **Add New ARN** dialog.
-
- 1. Click **Export Data**.
- 2. Choose **Amazon S3** in **Target Connection**.
- 3. Fill in the **Folder URI** field.
- 4. Choose **AWS Role ARN** and click **Click here to create new one with AWS CloudFormation**.
-
-3. Create a role ARN with an AWS CloudFormation template.
-
- 1. In the **Add New ARN** dialog, click **AWS Console with CloudFormation Template**.
-
- 2. Log in to the [AWS Management Console](https://console.aws.amazon.com) and you will be redirected to the AWS CloudFormation **Quick create stack** page.
-
- 3. Fill in the **Role Name**.
-
-    4. Select the acknowledgement check box to allow AWS CloudFormation to create IAM resources, and then click **Create stack** to create the role ARN.
-
- 5. After the CloudFormation stack is executed, you can click the **Outputs** tab and find the Role ARN value in the **Value** column.
-
- 
-
-If you have any trouble creating a role ARN with AWS CloudFormation, you can take the following steps to create one manually:
-
-
-<details>
-<summary>Click here to see details</summary>
-
-1. In the **Add New ARN** dialog described in previous instructions, click **Having trouble? Create Role ARN manually**. You will get the **TiDB Cloud Account ID** and **TiDB Cloud External ID**.
-
-2. In the AWS Management Console, create a managed policy for your Amazon S3 bucket.
-
- 1. Sign in to the [AWS Management Console](https://console.aws.amazon.com/) and open the [Amazon S3 console](https://console.aws.amazon.com/s3/).
-
- 2. In the **Buckets** list, choose the name of your target bucket, and then click **Copy ARN** to get your S3 bucket ARN (for example, `arn:aws:s3:::tidb-cloud-source-data`). Take a note of the bucket ARN for later use.
-
- 
-
- 3. Open the [IAM console](https://console.aws.amazon.com/iam/), click **Policies** in the left navigation pane, and then click **Create Policy**.
-
- 
-
- 4. On the **Create policy** page, click the **JSON** tab.
-
- 5. Configure the policy in the policy text field according to your needs. The following is an example that you can use to export data from a TiDB Cloud instance.
-
-    - Exporting data from a TiDB Cloud instance requires the **s3:PutObject** and **s3:ListBucket** permissions.
-
-      ```json
-      {
-          "Version": "2012-10-17",
-          "Statement": [
-              {
-                  "Sid": "VisualEditor0",
-                  "Effect": "Allow",
-                  "Action": [
-                      "s3:PutObject"
-                  ],
-                  "Resource": "<Your S3 bucket ARN>/<Directory of the exported data>/*"
-              },
-              {
-                  "Sid": "VisualEditor1",
-                  "Effect": "Allow",
-                  "Action": [
-                      "s3:ListBucket"
-                  ],
-                  "Resource": "<Your S3 bucket ARN>"
-              }
-          ]
-      }
-      ```
-
- In the policy text field, replace the following configurations with your own values.
-
-    - `"Resource": "<Your S3 bucket ARN>/<Directory of the exported data>/*"`. For example:
-
- - If you want to export data to the root directory of the `tidb-cloud-source-data` bucket, use `"Resource": "arn:aws:s3:::tidb-cloud-source-data/*"`.
- - If you want to export data to the `mydata` directory of the bucket, use `"Resource": "arn:aws:s3:::tidb-cloud-source-data/mydata/*"`.
-
- Make sure that `/*` is added to the end of the directory so TiDB Cloud can access all files in this directory.
-
-    - `"Resource": "<Your S3 bucket ARN>"`, for example, `"Resource": "arn:aws:s3:::tidb-cloud-source-data"`.
-
-    - If you have enabled server-side encryption with AWS Key Management Service keys (SSE-KMS) using a customer-managed key for the bucket, make sure the following configuration is included in the policy. In the configuration, `"arn:aws:kms:ap-northeast-1:105880447796:key/c3046e91-fdfc-4f3a-acff-00597dd3801f"` is a sample KMS key ARN of the bucket.
-
-      ```json
-      {
-          "Sid": "AllowKMSkey",
-          "Effect": "Allow",
-          "Action": [
-              "kms:Decrypt"
-          ],
-          "Resource": "arn:aws:kms:ap-northeast-1:105880447796:key/c3046e91-fdfc-4f3a-acff-00597dd3801f"
-      }
-      ```
-
- - If the objects in your bucket have been copied from another encrypted bucket, the KMS key value needs to include the keys of both buckets. For example, `"Resource": ["arn:aws:kms:ap-northeast-1:105880447796:key/c3046e91-fdfc-4f3a-acff-00597dd3801f","arn:aws:kms:ap-northeast-1:495580073302:key/0d7926a7-6ecc-4bf7-a9c1-a38f0faec0cd"]`.
-
- 6. Click **Next**.
-
- 7. Set a policy name, add a tag of the policy (optional), and then click **Create policy**.
-
-3. In the AWS Management Console, create an access role for TiDB Cloud and get the role ARN.
-
- 1. In the [IAM console](https://console.aws.amazon.com/iam/), click **Roles** in the left navigation pane, and then click **Create role**.
-
- 
-
- 2. To create a role, fill in the following information:
-
- - In **Trusted entity type**, select **AWS account**.
- - In **An AWS account**, select **Another AWS account**, and then paste the TiDB Cloud account ID to the **Account ID** field.
- - In **Options**, click **Require external ID (Best practice when a third party will assume this role)**, and then paste the TiDB Cloud External ID to the **External ID** field.
-
- 3. Click **Next** to open the policy list, choose the policy you just created, and then click **Next**.
-
- 4. In **Role details**, set a name for the role, and then click **Create role** in the lower-right corner. After the role is created, the list of roles is displayed.
-
- 5. In the list of roles, click the name of the role that you just created to go to its summary page, and then you can get the role ARN.
-
- 
-
-
-</details>
-
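The managed policy assembled in the steps above can also be sketched programmatically. This is a minimal sketch only; the bucket ARN, folder name, and KMS key ARN passed in are illustrative placeholders, not real resources, and the output is the same policy shape shown earlier in this section.

```python
import json

def build_export_policy(bucket_arn, folder="", kms_key_arns=None):
    """Assemble the IAM policy for exporting data to an S3 bucket.

    bucket_arn   -- ARN of the target bucket, e.g. "arn:aws:s3:::my-bucket"
    folder       -- optional directory inside the bucket
    kms_key_arns -- optional list of KMS key ARNs (only needed for SSE-KMS
                    buckets with customer-managed keys)
    """
    # s3:PutObject applies to objects, so the resource must end with /*.
    object_resource = f"{bucket_arn}/{folder + '/' if folder else ''}*"
    statements = [
        {"Sid": "AllowPutObject", "Effect": "Allow",
         "Action": ["s3:PutObject"], "Resource": object_resource},
        # s3:ListBucket applies to the bucket itself, without /*.
        {"Sid": "AllowListBucket", "Effect": "Allow",
         "Action": ["s3:ListBucket"], "Resource": bucket_arn},
    ]
    if kms_key_arns:
        statements.append({"Sid": "AllowKMSkey", "Effect": "Allow",
                           "Action": ["kms:Decrypt"], "Resource": kms_key_arns})
    return {"Version": "2012-10-17", "Statement": statements}

policy = build_export_policy("arn:aws:s3:::tidb-cloud-source-data", folder="mydata")
print(json.dumps(policy, indent=2))
```

Paste the printed JSON into the policy text field in step 2 of the manual procedure above.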
-### Configure Amazon S3 access using an AWS access key
-
-It is recommended that you use an IAM user (instead of the AWS account root user) to create an access key.
-
-Take the following steps to configure an access key:
-
-1. Create an IAM user. For more information, see [creating an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console).
-
-2. Use your AWS account ID or account alias, and your IAM user name and password to sign in to [the IAM console](https://console.aws.amazon.com/iam).
-
-3. Create an access key. For more information, see [creating an access key for an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey).
-
-> **Note:**
->
-> TiDB Cloud does not store your access keys. For security, we recommend that you [delete the access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey) after the import or export is complete.
-
-## Configure Azure Blob Storage access
-
-To allow TiDB Cloud to export data to your Azure Blob container, you need to create a service SAS token for the container.
-
-You can create a SAS token using either an [Azure ARM template](https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/overview) (recommended) or manual configuration.
-
-To create a SAS token using an Azure ARM template, take the following steps:
-
-1. Open the **Export** page for your target instance.
-
- 1. Log in to the [TiDB Cloud console](https://tidbcloud.com/) and navigate to the [**TiDB Instances**](https://tidbcloud.com/tidbs) page.
-
- 2. Click the name of your target instance to go to its overview page, and then click **Data** > **Export** in the left navigation pane.
-
-2. Open the **Generate New SAS Token via ARM Template Deployment** dialog.
-
- 1. Click **Export Data**.
-
- 2. Choose **Azure Blob Storage** in **Target Connection**.
-
- 3. Click **Click here to create a new one with Azure ARM template** under the SAS Token field.
-
-3. Create a SAS token with the Azure ARM template.
-
- 1. In the **Generate New SAS Token via ARM Template Deployment** dialog, click **Click to open the Azure Portal with the pre-configured ARM template**.
-
- 2. After logging in to Azure, you will be redirected to the Azure **Custom deployment** page.
-
-    3. Fill in the **Resource group** and **Storage Account Name** fields on the **Custom deployment** page. You can find this information on the overview page of the storage account where the container is located.
-
- 
-
- 4. Click **Review + create** or **Next** to review the deployment. Click **Create** to start the deployment.
-
-    5. After the deployment completes, you are redirected to the deployment overview page. Navigate to the **Outputs** section to get the SAS token.
-
-If you have any trouble creating a SAS token with the Azure ARM template, take the following steps to create one manually:
-
-
-<details>
-<summary>Click here to see details</summary>
-
-1. On the [Azure Storage account](https://portal.azure.com/#browse/Microsoft.Storage%2FStorageAccounts) page, click your storage account to which the container belongs.
-
-2. On your **Storage account** page, click **Security + networking** in the left navigation pane, and then click **Shared access signature**.
-
- 
-
-3. On the **Shared access signature** page, create a service SAS token with the required permissions as follows. For more information, see [Create a service SAS token](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview).
-
- 1. In the **Allowed services** section, choose the **Blob** service.
-    2. In the **Allowed resource types** section, choose **Container** and **Object**.
- 3. In the **Allowed permissions** section, choose the **Read** and **Write** permissions.
- 4. Adjust **Start and expiry date/time** as needed.
- 5. You can keep the default values for other settings.
-
- 
-
-4. Click **Generate SAS and connection string** to generate the SAS token.
-
-
-</details>
-
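Before handing a SAS token to the export dialog, you can sanity-check its query parameters locally: `ss` (signed services) must include `b` for Blob, `srt` (signed resource types) must include `c` and `o` for Container and Object, and `sp` (signed permissions) must include `r` and `w`. The following is a sketch only, and the token strings in it are fabricated examples, not real credentials.

```python
from urllib.parse import parse_qs

def sas_token_ok(token):
    """Return True if an account SAS token grants the permissions required
    for export: Blob service, Container+Object resources, Read+Write."""
    params = parse_qs(token.lstrip("?"))
    services = params.get("ss", [""])[0]         # signed services, "b" = Blob
    resource_types = params.get("srt", [""])[0]  # "c" = Container, "o" = Object
    permissions = params.get("sp", [""])[0]      # "r" = Read, "w" = Write
    return ("b" in services
            and {"c", "o"} <= set(resource_types)
            and {"r", "w"} <= set(permissions))

print(sas_token_ok("?sv=2022-11-02&ss=b&srt=co&sp=rw&se=2026-01-01&sig=abc"))  # True
print(sas_token_ok("?sv=2022-11-02&ss=b&srt=c&sp=r&se=2026-01-01&sig=abc"))    # False
```

This only checks the token's declared scope; the expiry time and signature are still validated by Azure at request time.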
-## Configure Alibaba Cloud Object Storage Service (OSS) access
-
-To allow TiDB Cloud to export data to your Alibaba Cloud OSS bucket, you need to create an AccessKey pair for the bucket.
-
-Take the following steps to configure an AccessKey pair:
-
-1. Create a RAM user and get the AccessKey pair. For more information, see [Create a RAM user](https://www.alibabacloud.com/help/en/ram/user-guide/create-a-ram-user).
-
- In the **Access Mode** section, select **Using permanent AccessKey to access**.
-
-2. Create a custom policy with the required permissions. For more information, see [Create custom policies](https://www.alibabacloud.com/help/en/ram/user-guide/create-a-custom-policy).
-
-    - In the **Effect** section, select **Allow**.
-    - In the **Service** section, select **Object Storage Service**.
-    - In the **Action** section, select the `oss:PutObject` and `oss:GetBucketInfo` actions.
-    - In the **Resource** section, select the bucket and the objects in the bucket.
-
-3. Attach the custom policies to the RAM user. For more information, see [Grant permissions to a RAM user](https://www.alibabacloud.com/help/en/ram/user-guide/grant-permissions-to-the-ram-user).
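If you prefer to paste a policy document into the RAM console's script editor instead of using the visual editor, the custom policy described above can be sketched as follows. This is a sketch only; `my-export-bucket` is an illustrative bucket name, and RAM resource ARNs use the `acs:oss:*:*:<bucket>` form (the bucket itself plus `/*` for its objects).

```python
import json

def build_oss_policy(bucket):
    """Assemble an Alibaba Cloud RAM custom policy allowing TiDB Cloud to
    write export files into the given OSS bucket."""
    return {
        "Version": "1",  # RAM policies use version "1", unlike AWS IAM
        "Statement": [{
            "Effect": "Allow",
            "Action": ["oss:PutObject", "oss:GetBucketInfo"],
            # Both the bucket and every object in it.
            "Resource": [f"acs:oss:*:*:{bucket}", f"acs:oss:*:*:{bucket}/*"],
        }],
    }

print(json.dumps(build_oss_policy("my-export-bucket"), indent=2))
```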
diff --git a/tidb-cloud/premium/premium-export.md b/tidb-cloud/premium/premium-export.md
index 7cf321cc5cfdd..57748c3c3b669 100644
--- a/tidb-cloud/premium/premium-export.md
+++ b/tidb-cloud/premium/premium-export.md
@@ -34,7 +34,7 @@ To export data to Amazon S3, you need to provide the following information:
- [An access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html): make sure the access key has the `s3:PutObject` permission.
- [A role ARN](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html): make sure the role ARN (Amazon Resource Name) has the `s3:PutObject` permission. Note that only {{{ .premium }}} instances hosted on AWS support the role ARN.
-For more information, see [Configure External Storage Access](/tidb-cloud/premium/external-storage.md#configure-amazon-s3-access).
+For more information, see [Configure External Storage Access](/tidb-cloud/configure-external-storage-access.md#configure-amazon-s3-access).
### Azure Blob Storage
@@ -43,7 +43,7 @@ To export data to Azure Blob Storage, you need to provide the following informat
- URI: `azure://<account-name>.blob.core.windows.net/<container-name>/<folder-path>/` or `https://<account-name>.blob.core.windows.net/<container-name>/<folder-path>/`
- Access credential: a [shared access signature (SAS) token](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview) for your Azure Blob Storage container. Make sure the SAS token has the `Read` and `Write` permissions on the `Container` and `Object` resources.
-For more information, see [Configure External Storage Access](/tidb-cloud/premium/external-storage.md#configure-azure-blob-storage-access).
+For more information, see [Configure External Storage Access](/tidb-cloud/configure-external-storage-access.md#configure-azure-blob-storage-access).
### Alibaba Cloud OSS
@@ -52,7 +52,7 @@ To export data to Alibaba Cloud OSS, you need to provide the following informati
- URI: `oss://<bucket-name>/<folder-path>/`
- Access credential: An [AccessKey pair](https://www.alibabacloud.com/help/en/ram/user-guide/create-an-accesskey-pair) for your Alibaba Cloud account. Make sure the AccessKey pair has the `oss:PutObject` and `oss:GetBucketInfo` permissions.
-For more information, see [Configure External Storage Access](/tidb-cloud/premium/external-storage.md#configure-alibaba-cloud-object-storage-service-oss-access).
+For more information, see [Configure External Storage Access](/tidb-cloud/configure-external-storage-access.md#configure-alibaba-cloud-object-storage-service-oss-access).
## Export options
@@ -109,7 +109,7 @@ You can compress the exported CSV and SQL data using the following algorithms:
- **Storage Provider**: choose Amazon S3.
- **Folder URI**: enter the URI of the Amazon S3 folder in the `s3://<bucket-name>/<folder-path>/` format.
- **Bucket Access**: choose one of the following access credentials and then fill in the credential information:
- - **AWS Role ARN**: enter the role ARN that has the permission to access the bucket. It is recommended to create the role ARN with AWS CloudFormation. For more information, see [Configure External Storage Access](/tidb-cloud/premium/external-storage.md#configure-amazon-s3-access).
+ - **AWS Role ARN**: enter the role ARN that has the permission to access the bucket. It is recommended to create the role ARN with AWS CloudFormation. For more information, see [Configure External Storage Access](/tidb-cloud/configure-external-storage-access.md#configure-amazon-s3-access).
- **AWS Access Key**: enter the access key ID and access key secret that have the permission to access the bucket.
- **Exported Data**: choose the databases or tables you want to export.
- **Data Format**: choose **SQL** or **CSV**.
@@ -134,7 +134,7 @@ You can compress the exported CSV and SQL data using the following algorithms:
- **Target Connection**:
- **Storage Provider**: choose Azure Blob Storage.
- **Folder URI**: enter the URI of the Azure Blob Storage folder in the `azure://<account-name>.blob.core.windows.net/<container-name>/<folder-path>/` format.
- - **SAS Token**: enter the SAS token that has the permission to access the container. It is recommended to create a SAS token with the [Azure ARM template](https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/). For more information, see [Configure External Storage Access](/tidb-cloud/premium/external-storage.md#configure-azure-blob-storage-access).
+ - **SAS Token**: enter the SAS token that has the permission to access the container. It is recommended to create a SAS token with the [Azure ARM template](https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/). For more information, see [Configure External Storage Access](/tidb-cloud/configure-external-storage-access.md#configure-azure-blob-storage-access).
- **Exported Data**: choose the databases or tables you want to export.
- **Data Format**: choose **SQL** or **CSV**.
- **Compression**: choose **Gzip**, **Snappy**, **Zstd**, or **None**.
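The folder URI formats expected by the export dialog for the three storage providers can be assembled as follows. This is a sketch under the URI formats described in this document; all account, container, bucket, and folder names below are illustrative placeholders.

```python
def s3_uri(bucket, folder):
    """Amazon S3 folder URI: s3://<bucket-name>/<folder-path>/"""
    return f"s3://{bucket}/{folder}/"

def azure_uri(account, container, folder):
    """Azure Blob Storage folder URI:
    azure://<account-name>.blob.core.windows.net/<container-name>/<folder-path>/"""
    return f"azure://{account}.blob.core.windows.net/{container}/{folder}/"

def oss_uri(bucket, folder):
    """Alibaba Cloud OSS folder URI: oss://<bucket-name>/<folder-path>/"""
    return f"oss://{bucket}/{folder}/"

print(s3_uri("my-bucket", "exports"))                    # s3://my-bucket/exports/
print(azure_uri("myaccount", "mycontainer", "exports"))
print(oss_uri("my-bucket", "exports"))                   # oss://my-bucket/exports/
```

Note the trailing slash in each URI, which marks the target as a folder rather than a single object.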