camel-aws2-s3-kafka-connector sink configuration
Connector Description: Store and retrieve objects from AWS S3 Storage Service using AWS SDK version 2.x.
When using camel-aws2-s3-kafka-connector as a sink, make sure to use the following Maven dependency to have support for the connector:
<dependency>
  <groupId>org.apache.camel.kafkaconnector</groupId>
  <artifactId>camel-aws2-s3-kafka-connector</artifactId>
  <version>x.x.x</version>
  <!-- use the same version as your Camel Kafka connector version -->
</dependency>
To use this sink connector in Kafka Connect, you'll need to set the following connector.class:
connector.class=org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SinkConnector
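As a minimal sketch, a complete sink configuration could look like the following properties file. The topic, bucket, region, and credential values are placeholders, and the camel.sink.* option keys follow the connector's documented naming convention — verify them against the options table for your connector version:

```properties
name=CamelAWS2S3SinkConnector
connector.class=org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SinkConnector
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter

# Kafka topic to consume from (placeholder)
topics=mytopic

# path option: the target bucket (placeholder name)
camel.sink.path.bucketNameOrArn=mycamelbucket

# endpoint options: region and static credentials (placeholders)
camel.sink.endpoint.region=eu-west-1
camel.sink.endpoint.accessKey=xxx
camel.sink.endpoint.secretKey=yyy
```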
The camel-aws2-s3 sink connector supports 71 options, which are listed below.
Name | Description | Default | Priority |
---|---|---|---|
camel.sink.path.bucketNameOrArn | Required Bucket name or ARN. | | HIGH |
camel.sink.endpoint.amazonS3Client | Reference to a com.amazonaws.services.s3.AmazonS3 in the registry. | | MEDIUM |
camel.sink.endpoint.amazonS3Presigner | An S3 Presigner for requests, used mainly in the createDownloadLink operation. | | MEDIUM |
camel.sink.endpoint.autoCreateBucket | Setting the autocreation of the S3 bucket bucketName. This also applies when the moveAfterRead option is enabled, in which case the destinationBucket will be created if it doesn't exist already. | false | MEDIUM |
camel.sink.endpoint.overrideEndpoint | Set the need for overriding the endpoint. This option must be used in combination with the uriEndpointOverride option. | false | MEDIUM |
camel.sink.endpoint.pojoRequest | Whether to use a POJO request as the body. | false | MEDIUM |
camel.sink.endpoint.policy | The policy for this queue to set in the com.amazonaws.services.s3.AmazonS3#setBucketPolicy() method. | | MEDIUM |
camel.sink.endpoint.proxyHost | To define a proxy host when instantiating the S3 client. | | MEDIUM |
camel.sink.endpoint.proxyPort | Specify a proxy port to be used inside the client definition. | | MEDIUM |
camel.sink.endpoint.proxyProtocol | To define a proxy protocol when instantiating the S3 client. One of: [HTTP] [HTTPS]. | "HTTPS" | MEDIUM |
camel.sink.endpoint.region | The region in which the S3 client needs to work. When using this parameter, the configuration expects the lowercase name of the region (for example, ap-east-1), that is, the name as returned by Region.EU_WEST_1.id(). | | MEDIUM |
camel.sink.endpoint.trustAllCertificates | Whether to trust all certificates when overriding the endpoint. | false | MEDIUM |
camel.sink.endpoint.uriEndpointOverride | Set the overriding URI endpoint. This option must be used in combination with the overrideEndpoint option. | | MEDIUM |
camel.sink.endpoint.useDefaultCredentialsProvider | Set whether the S3 client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. | false | MEDIUM |
camel.sink.endpoint.customerAlgorithm | Define the customer algorithm to use when CustomerKey is enabled. | | MEDIUM |
camel.sink.endpoint.customerKeyId | Define the id of the customer key to use when CustomerKey is enabled. | | MEDIUM |
camel.sink.endpoint.customerKeyMD5 | Define the MD5 of the customer key to use when CustomerKey is enabled. | | MEDIUM |
camel.sink.endpoint.batchMessageNumber | The number of messages composing a batch in streaming upload mode. | 10 | MEDIUM |
camel.sink.endpoint.batchSize | The batch size (in bytes) in streaming upload mode. | 1000000 | MEDIUM |
camel.sink.endpoint.deleteAfterWrite | Delete the file object after the S3 file has been uploaded. | false | MEDIUM |
camel.sink.endpoint.keyName | Setting the key name for an element in the bucket through the endpoint parameter. | | MEDIUM |
camel.sink.endpoint.lazyStartProducer | Whether the producer should be started lazily (on the first message). By starting lazily, you can allow the CamelContext and routes to start up in situations where a producer may otherwise fail during startup and cause the route to fail being started. When startup is deferred in this way, the startup failure can instead be handled during message routing via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. | false | MEDIUM |
camel.sink.endpoint.multiPartUpload | If true, Camel will upload the file using multipart format; the part size is decided by the partSize option. | false | MEDIUM |
camel.sink.endpoint.namingStrategy | The naming strategy to use in streaming upload mode. One of: [progressive] [random]. | "progressive" | MEDIUM |
camel.sink.endpoint.operation | The operation to perform if the user doesn't want to do only an upload. One of: [copyObject] [listObjects] [deleteObject] [deleteBucket] [listBuckets] [getObject] [getObjectRange] [createDownloadLink]. | | MEDIUM |
camel.sink.endpoint.partSize | Set up the partSize used in multipart upload; the default size is 25M. | 26214400L | MEDIUM |
camel.sink.endpoint.restartingPolicy | The restarting policy to use in streaming upload mode. One of: [override] [lastPart]. | "override" | MEDIUM |
camel.sink.endpoint.storageClass | The storage class to set in the com.amazonaws.services.s3.model.PutObjectRequest request. | | MEDIUM |
camel.sink.endpoint.streamingUploadMode | When streaming upload mode is true, the upload to the bucket will be done in streaming. | false | MEDIUM |
camel.sink.endpoint.streamingUploadTimeout | When streaming upload mode is true, this option sets the timeout to complete the upload. | | MEDIUM |
camel.sink.endpoint.awsKMSKeyId | Define the id of the KMS key to use when KMS is enabled. | | MEDIUM |
camel.sink.endpoint.useAwsKMS | Define whether KMS must be used or not. | false | MEDIUM |
camel.sink.endpoint.useCustomerKey | Define whether a Customer Key must be used or not. | false | MEDIUM |
camel.sink.endpoint.accessKey | Amazon AWS Access Key. | | MEDIUM |
camel.sink.endpoint.secretKey | Amazon AWS Secret Key. | | MEDIUM |
camel.component.aws2-s3.amazonS3Client | Reference to a com.amazonaws.services.s3.AmazonS3 in the registry. | | MEDIUM |
camel.component.aws2-s3.amazonS3Presigner | An S3 Presigner for requests, used mainly in the createDownloadLink operation. | | MEDIUM |
camel.component.aws2-s3.autoCreateBucket | Setting the autocreation of the S3 bucket bucketName. This also applies when the moveAfterRead option is enabled, in which case the destinationBucket will be created if it doesn't exist already. | false | MEDIUM |
camel.component.aws2-s3.configuration | The component configuration. | | MEDIUM |
camel.component.aws2-s3.overrideEndpoint | Set the need for overriding the endpoint. This option must be used in combination with the uriEndpointOverride option. | false | MEDIUM |
camel.component.aws2-s3.pojoRequest | Whether to use a POJO request as the body. | false | MEDIUM |
camel.component.aws2-s3.policy | The policy for this queue to set in the com.amazonaws.services.s3.AmazonS3#setBucketPolicy() method. | | MEDIUM |
camel.component.aws2-s3.proxyHost | To define a proxy host when instantiating the S3 client. | | MEDIUM |
camel.component.aws2-s3.proxyPort | Specify a proxy port to be used inside the client definition. | | MEDIUM |
camel.component.aws2-s3.proxyProtocol | To define a proxy protocol when instantiating the S3 client. One of: [HTTP] [HTTPS]. | "HTTPS" | MEDIUM |
camel.component.aws2-s3.region | The region in which the S3 client needs to work. When using this parameter, the configuration expects the lowercase name of the region (for example, ap-east-1), that is, the name as returned by Region.EU_WEST_1.id(). | | MEDIUM |
camel.component.aws2-s3.trustAllCertificates | Whether to trust all certificates when overriding the endpoint. | false | MEDIUM |
camel.component.aws2-s3.uriEndpointOverride | Set the overriding URI endpoint. This option must be used in combination with the overrideEndpoint option. | | MEDIUM |
camel.component.aws2-s3.useDefaultCredentialsProvider | Set whether the S3 client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. | false | MEDIUM |
camel.component.aws2-s3.customerAlgorithm | Define the customer algorithm to use when CustomerKey is enabled. | | MEDIUM |
camel.component.aws2-s3.customerKeyId | Define the id of the customer key to use when CustomerKey is enabled. | | MEDIUM |
camel.component.aws2-s3.customerKeyMD5 | Define the MD5 of the customer key to use when CustomerKey is enabled. | | MEDIUM |
camel.component.aws2-s3.batchMessageNumber | The number of messages composing a batch in streaming upload mode. | 10 | MEDIUM |
camel.component.aws2-s3.batchSize | The batch size (in bytes) in streaming upload mode. | 1000000 | MEDIUM |
camel.component.aws2-s3.deleteAfterWrite | Delete the file object after the S3 file has been uploaded. | false | MEDIUM |
camel.component.aws2-s3.keyName | Setting the key name for an element in the bucket through the endpoint parameter. | | MEDIUM |
camel.component.aws2-s3.lazyStartProducer | Whether the producer should be started lazily (on the first message). By starting lazily, you can allow the CamelContext and routes to start up in situations where a producer may otherwise fail during startup and cause the route to fail being started. When startup is deferred in this way, the startup failure can instead be handled during message routing via Camel's routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. | false | MEDIUM |
camel.component.aws2-s3.multiPartUpload | If true, Camel will upload the file using multipart format; the part size is decided by the partSize option. | false | MEDIUM |
camel.component.aws2-s3.namingStrategy | The naming strategy to use in streaming upload mode. One of: [progressive] [random]. | "progressive" | MEDIUM |
camel.component.aws2-s3.operation | The operation to perform if the user doesn't want to do only an upload. One of: [copyObject] [listObjects] [deleteObject] [deleteBucket] [listBuckets] [getObject] [getObjectRange] [createDownloadLink]. | | MEDIUM |
camel.component.aws2-s3.partSize | Set up the partSize used in multipart upload; the default size is 25M. | 26214400L | MEDIUM |
camel.component.aws2-s3.restartingPolicy | The restarting policy to use in streaming upload mode. One of: [override] [lastPart]. | "override" | MEDIUM |
camel.component.aws2-s3.storageClass | The storage class to set in the com.amazonaws.services.s3.model.PutObjectRequest request. | | MEDIUM |
camel.component.aws2-s3.streamingUploadMode | When streaming upload mode is true, the upload to the bucket will be done in streaming. | false | MEDIUM |
camel.component.aws2-s3.streamingUploadTimeout | When streaming upload mode is true, this option sets the timeout to complete the upload. | | MEDIUM |
camel.component.aws2-s3.awsKMSKeyId | Define the id of the KMS key to use when KMS is enabled. | | MEDIUM |
camel.component.aws2-s3.useAwsKMS | Define whether KMS must be used or not. | false | MEDIUM |
camel.component.aws2-s3.useCustomerKey | Define whether a Customer Key must be used or not. | false | MEDIUM |
camel.component.aws2-s3.autowiredEnabled | Whether autowiring is enabled. This is used for automatic autowiring of options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of the matching type, which is then configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS clients, etc. | true | MEDIUM |
camel.component.aws2-s3.accessKey | Amazon AWS Access Key. | | MEDIUM |
camel.component.aws2-s3.secretKey | Amazon AWS Secret Key. | | MEDIUM |
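To illustrate how the streaming-related options fit together, here is a hedged sketch of a configuration fragment (the bucket name, key name, and limits are illustrative, and the camel.sink.* keys follow the connector's documented naming convention): it enables streaming upload mode and flushes a part whenever 50 messages or roughly 1 MB have accumulated, overwriting any partial object on restart.

```properties
camel.sink.path.bucketNameOrArn=mycamelbucket
camel.sink.endpoint.keyName=file

# batch records in memory and upload in streaming
camel.sink.endpoint.streamingUploadMode=true
# a part is flushed when either limit is reached
camel.sink.endpoint.batchMessageNumber=50
camel.sink.endpoint.batchSize=1000000

# name uploaded objects progressively rather than randomly
camel.sink.endpoint.namingStrategy=progressive
# on restart, start over (override) instead of resuming from the last part
camel.sink.endpoint.restartingPolicy=override
```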
The camel-aws2-s3 sink connector supports 1 converter out of the box, which is listed below.

- org.apache.camel.kafkaconnector.aws2s3.converters.S3ObjectConverter

The camel-aws2-s3 sink connector supports 3 transforms out of the box, which are listed below.

- org.apache.camel.kafkaconnector.aws2s3.transformers.JSONToRecordTransforms
- org.apache.camel.kafkaconnector.aws2s3.transformers.RecordToJSONTransforms
- org.apache.camel.kafkaconnector.aws2s3.transformers.S3ObjectTransforms

The camel-aws2-s3 sink connector supports 1 aggregation strategy out of the box, which is listed below.

- org.apache.camel.kafkaconnector.aws2s3.aggregation.NewlineAggregationStrategy
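These transforms are wired in through the standard Kafka Connect single-message-transform properties. As a sketch (the alias ToJson is arbitrary, and this assumes the transform needs no additional configuration of its own):

```properties
transforms=ToJson
transforms.ToJson.type=org.apache.camel.kafkaconnector.aws2s3.transformers.RecordToJSONTransforms
```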