interface S3BucketProps
| Language | Type name |
|---|---|
| .NET | Amazon.CDK.AWS.KinesisFirehose.S3BucketProps |
| Go | github.com/aws/aws-cdk-go/awscdk/v2/awskinesisfirehose#S3BucketProps |
| Java | software.amazon.awscdk.services.kinesisfirehose.S3BucketProps |
| Python | aws_cdk.aws_kinesisfirehose.S3BucketProps |
| TypeScript (source) | aws-cdk-lib » aws_kinesisfirehose » S3BucketProps |
Props for defining an S3 destination of an Amazon Data Firehose delivery stream.
Example

```ts
// Create service roles for the delivery stream and destination.
// These can be used for other purposes and granted access to different resources.
// They must include the Amazon Data Firehose service principal in their trust policies.
// Two separate roles are shown below, but the same role can be used for both purposes.
const deliveryStreamRole = new iam.Role(this, 'Delivery Stream Role', {
  assumedBy: new iam.ServicePrincipal('firehose.amazonaws.com'),
});
const destinationRole = new iam.Role(this, 'Destination Role', {
  assumedBy: new iam.ServicePrincipal('firehose.amazonaws.com'),
});

// Specify the roles created above when defining the destination and delivery stream.
declare const bucket: s3.Bucket;
const destination = new firehose.S3Bucket(bucket, { role: destinationRole });
new firehose.DeliveryStream(this, 'Delivery Stream', {
  destination: destination,
  role: deliveryStreamRole,
});
```
Properties
| Name | Type | Description |
|---|---|---|
| bufferingInterval? | Duration | The length of time that Firehose buffers incoming data before delivering it to the S3 bucket. |
| bufferingSize? | Size | The size of the buffer that Amazon Data Firehose uses for incoming data before delivering it to the S3 bucket. |
| compression? | Compression | The type of compression that Amazon Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket. |
| dataFormatConversion? | Data | The input format, output format, and schema configuration for converting data from JSON to the Parquet or ORC format before writing to Amazon S3. |
| dataOutputPrefix? | string | A prefix that Amazon Data Firehose evaluates and adds to records before writing them to S3. |
| encryptionKey? | IKey | The AWS KMS key that Amazon Data Firehose uses to encrypt the data it delivers to your Amazon S3 bucket. |
| errorOutputPrefix? | string | A prefix that Amazon Data Firehose evaluates and adds to failed records before writing them to S3. |
| fileExtension? | string | Specify a file extension. |
| loggingConfig? | ILoggingConfig | Configuration that determines whether to log errors during data transformation or delivery failures, and specifies the CloudWatch log group for storing error logs. |
| processor? | IDataProcessor | The data transformation that should be performed on the data before writing to the destination. |
| processors? | IDataProcessor[] | The data transformations that should be performed on the data before writing to the destination. |
| role? | IRole | The IAM role associated with this destination. |
| s3Backup? | DestinationS3BackupProps | The configuration for backing up source records to S3. |
| timeZone? | TimeZone | The time zone you prefer. |
bufferingInterval?
Type: Duration (optional, default: Duration.seconds(300))
The length of time that Firehose buffers incoming data before delivering it to the S3 bucket.
Minimum: Duration.seconds(0). Maximum: Duration.seconds(900).
bufferingSize?
Type: Size (optional, default: Size.mebibytes(5) when record data format conversion is disabled, Size.mebibytes(128) when it is enabled)
The size of the buffer that Amazon Data Firehose uses for incoming data before delivering it to the S3 bucket.
Minimum: Size.mebibytes(1) when record data format conversion is disabled, Size.mebibytes(64) when it is enabled. Maximum: Size.mebibytes(128).
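The two buffering properties are typically tuned together, since delivery is triggered by whichever threshold is reached first. A minimal sketch, following the import conventions of the example above (the bucket is assumed to be defined elsewhere):

```ts
declare const bucket: s3.Bucket;

// Deliver to S3 once 8 MiB have accumulated or 60 seconds have passed,
// whichever happens first.
const destination = new firehose.S3Bucket(bucket, {
  bufferingInterval: cdk.Duration.seconds(60),
  bufferingSize: cdk.Size.mebibytes(8),
});
```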
compression?
Type: Compression (optional, default: UNCOMPRESSED)
The type of compression that Amazon Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket.
The compression formats SNAPPY or ZIP cannot be specified for Amazon Redshift destinations because they are not supported by the Amazon Redshift COPY operation that reads from the S3 bucket.
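For instance, GZIP is a common choice because it is also accepted by the Redshift COPY operation. A sketch using the module's Compression values:

```ts
declare const bucket: s3.Bucket;

// Compress delivered objects with GZIP, which remains compatible with
// Amazon Redshift COPY if the bucket later feeds a Redshift load.
const destination = new firehose.S3Bucket(bucket, {
  compression: firehose.Compression.GZIP,
});
```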
dataFormatConversion?
Type: Data (optional, default: no data format conversion is done)
The input format, output format, and schema configuration for converting data from JSON to the Parquet or ORC format before writing to Amazon S3.
dataOutputPrefix?
Type: string (optional, default: "YYYY/MM/DD/HH")
A prefix that Amazon Data Firehose evaluates and adds to records before writing them to S3.
This prefix appears immediately following the bucket name.
encryptionKey?
Type: IKey (optional, default: data is not encrypted)
The AWS KMS key that Amazon Data Firehose uses to encrypt the data it delivers to your Amazon S3 bucket.
errorOutputPrefix?
Type: string (optional, default: "YYYY/MM/DD/HH")
A prefix that Amazon Data Firehose evaluates and adds to failed records before writing them to S3.
This prefix appears immediately following the bucket name.
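Both prefixes accept Firehose's dynamic prefix expressions, such as !{timestamp:...} for delivery time and !{firehose:error-output-type} for the failure category. A sketch (the exact prefix layout is an illustrative assumption, not a requirement):

```ts
declare const bucket: s3.Bucket;

// Partition successful records by delivery date, and route failed records
// under a separate prefix that records the type of error.
const destination = new firehose.S3Bucket(bucket, {
  dataOutputPrefix: 'data/!{timestamp:yyyy/MM/dd}/',
  errorOutputPrefix: 'errors/!{firehose:error-output-type}/!{timestamp:yyyy/MM/dd}/',
});
```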
fileExtension?
Type: string (optional, default: the default file extension appended by the data format conversion or S3 compression features)
Specify a file extension.
It overrides the default file extension appended by the data format conversion or S3 compression features, such as .parquet or .gz.
The file extension must start with a period (.) and may contain only the allowed characters: 0-9a-z!-_.*'().
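As an illustration, a custom extension can preserve the content hint that compression would otherwise replace. The chosen extension here is an assumption for the sketch:

```ts
declare const bucket: s3.Bucket;

// GZIP-compressed objects would normally end in .gz; override the
// extension so delivered objects end in .json.gz instead.
const destination = new firehose.S3Bucket(bucket, {
  compression: firehose.Compression.GZIP,
  fileExtension: '.json.gz',
});
```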
loggingConfig?
Type: ILoggingConfig (optional, default: errors are logged and a log group is created for you)
Configuration that determines whether to log errors during data transformation or delivery failures, and specifies the CloudWatch log group for storing error logs.
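A sketch of supplying your own log group, assuming the module's EnableLogging class (which implements ILoggingConfig) and the aws_logs module imported as logs:

```ts
declare const bucket: s3.Bucket;

// Route delivery error logs to a log group you manage, rather than an
// auto-created one, so its retention is under your control.
const logGroup = new logs.LogGroup(this, 'Firehose Error Logs', {
  retention: logs.RetentionDays.ONE_MONTH,
});
const destination = new firehose.S3Bucket(bucket, {
  loggingConfig: new firehose.EnableLogging(logGroup),
});
```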
processor?
⚠️ Deprecated: Use processors instead.
Type: IDataProcessor (optional, default: no data transformation will occur)
The data transformation that should be performed on the data before writing to the destination.
processors?
Type: IDataProcessor[] (optional, default: no data transformation will occur)
The data transformation that should be performed on the data before writing to the destination.
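A sketch of wiring up a Lambda-backed transformation, assuming the module's LambdaFunctionProcessor class and a transformation function defined elsewhere:

```ts
declare const bucket: s3.Bucket;
declare const transformFn: lambda.Function;

// Run each buffered batch of records through a Lambda transformation
// before Firehose writes the results to the bucket.
const destination = new firehose.S3Bucket(bucket, {
  processors: [
    new firehose.LambdaFunctionProcessor(transformFn, {
      bufferInterval: cdk.Duration.seconds(60),
      bufferSize: cdk.Size.mebibytes(1),
    }),
  ],
});
```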
role?
Type: IRole (optional, default: a role will be created with default permissions)
The IAM role associated with this destination.
Assumed by Amazon Data Firehose to invoke processors and write to destinations.
s3Backup?
Type: DestinationS3BackupProps (optional, default: source records will not be backed up to S3)
The configuration for backing up source records to S3.
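A sketch of backing up every source record, as received, to a second bucket; the BackupMode value and the backup bucket are assumptions for illustration:

```ts
declare const bucket: s3.Bucket;
declare const backupBucket: s3.Bucket;

// Keep an untransformed copy of all incoming records in a separate bucket.
const destination = new firehose.S3Bucket(bucket, {
  s3Backup: {
    mode: firehose.BackupMode.ALL,
    bucket: backupBucket,
  },
});
```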
timeZone?
Type: TimeZone (optional, default: UTC)
The time zone you prefer.
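The time zone affects how timestamp-based prefixes are evaluated. A sketch, assuming the TimeZone class from the aws-cdk-lib core (the specific zone is illustrative):

```ts
declare const bucket: s3.Bucket;

// Evaluate timestamp prefix expressions in Japan Standard Time
// instead of the default UTC.
const destination = new firehose.S3Bucket(bucket, {
  dataOutputPrefix: 'data/!{timestamp:yyyy/MM/dd}/',
  timeZone: cdk.TimeZone.ASIA_TOKYO,
});
```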
