camel-hdfs-kafka-connector sink configuration

Connector Description: Read and write from/to an HDFS filesystem using Hadoop 2.x.

When using camel-hdfs-kafka-connector as a sink, make sure to use the following Maven dependency to have support for the connector:

<dependency>
  <groupId>org.apache.camel.kafkaconnector</groupId>
  <artifactId>camel-hdfs-kafka-connector</artifactId>
  <version>x.x.x</version>
  <!-- use the same version as your Camel Kafka connector version -->
</dependency>

To use this sink connector in Kafka Connect you’ll need to set the following connector.class:

connector.class=org.apache.camel.kafkaconnector.hdfs.CamelHdfsSinkConnector
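
Putting this together, a minimal sink configuration might look like the following sketch; the connector name, topic, HDFS host, and directory path below are placeholders to adapt to your environment:

name=my-hdfs-sink-connector
connector.class=org.apache.camel.kafkaconnector.hdfs.CamelHdfsSinkConnector
tasks.max=1
# Kafka topic(s) to read records from (placeholder)
topics=mytopic
# Required path options (placeholder host and directory)
camel.sink.path.hostName=namenode.example.com
camel.sink.path.port=8020
camel.sink.path.path=/data/from-kafka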

The camel-hdfs sink connector supports 30 options, which are listed below.

For each option, the name is followed by its description, its default value (if any), and its priority.

camel.sink.path.hostName

Required. The HDFS host to use.

HIGH

camel.sink.path.port

HDFS port to use.

8020

MEDIUM

camel.sink.path.path

Required. The directory path to use.

HIGH

camel.sink.endpoint.connectOnStartup

Whether to connect to the HDFS file system on starting the producer/consumer. If false, the connection is created on demand. Notice that HDFS may take up to 15 minutes to establish a connection, as it has a hardcoded 45 x 20 second retry. Setting this option to false allows your application to start up without blocking for up to 15 minutes.

true

MEDIUM
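
For example, to let the connector start up without blocking on HDFS connectivity (an illustrative setting):

# Create the HDFS connection on demand rather than at startup
camel.sink.endpoint.connectOnStartup=false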

camel.sink.endpoint.fileSystemType

Set to LOCAL to not use HDFS but a local java.io.File instead.

Enum values:

  • LOCAL

  • HDFS

"HDFS"

MEDIUM

camel.sink.endpoint.fileType

The file type to use. For more details see the Hadoop HDFS documentation about the various file types.

Enum values:

  • NORMAL_FILE

  • SEQUENCE_FILE

  • MAP_FILE

  • BLOOMMAP_FILE

  • ARRAY_FILE

"NORMAL_FILE"

MEDIUM

camel.sink.endpoint.keyType

The type for the key in case of sequence or map files.

Enum values:

  • NULL

  • BOOLEAN

  • BYTE

  • INT

  • FLOAT

  • LONG

  • DOUBLE

  • TEXT

  • BYTES

"NULL"

MEDIUM

camel.sink.endpoint.namedNodes

A comma-separated list of named nodes (e.g. srv11.example.com:8020,srv12.example.com:8020).

MEDIUM

camel.sink.endpoint.owner

The file owner must match this owner for the consumer to pick up the file; otherwise the file is skipped.

MEDIUM

camel.sink.endpoint.valueType

The type for the value in case of sequence or map files.

Enum values:

  • NULL

  • BOOLEAN

  • BYTE

  • INT

  • FLOAT

  • LONG

  • DOUBLE

  • TEXT

  • BYTES

"BYTES"

MEDIUM
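
As a sketch, the fileType, keyType, and valueType options can be combined to write sequence files with text keys and raw byte values (illustrative values):

# Write Hadoop sequence files with Text keys and Bytes values
camel.sink.endpoint.fileType=SEQUENCE_FILE
camel.sink.endpoint.keyType=TEXT
camel.sink.endpoint.valueType=BYTES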

camel.sink.endpoint.append

Append to an existing file. Notice that not all HDFS file systems support the append option.

false

MEDIUM

camel.sink.endpoint.lazyStartProducer

Whether the producer should be started lazily (on the first message). By starting lazily you can use this to allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy, the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time of that message.

false

MEDIUM

camel.sink.endpoint.overwrite

Whether to overwrite existing files with the same name.

true

MEDIUM
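
For example, to append to existing files instead of replacing them (a sketch: append is not supported on all HDFS file systems, and overwrite is disabled here on the assumption that the two behaviors conflict):

# Assumes the target file system supports append
camel.sink.endpoint.append=true
camel.sink.endpoint.overwrite=false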

camel.sink.endpoint.blockSize

The size of the HDFS blocks.

67108864L

MEDIUM

camel.sink.endpoint.bufferSize

The buffer size used by HDFS.

4096

MEDIUM

camel.sink.endpoint.checkIdleInterval

How often (in milliseconds) to run the idle checker background task. This option is only in use if the splitter strategy is IDLE.

500

MEDIUM

camel.sink.endpoint.chunkSize

When reading a normal file, the file is split into chunks, producing a message per chunk.

4096

MEDIUM

camel.sink.endpoint.compressionCodec

The compression codec to use.

Enum values:

  • DEFAULT

  • GZIP

  • BZIP2

"DEFAULT"

MEDIUM

camel.sink.endpoint.compressionType

The compression type to use (not in use by default).

Enum values:

  • NONE

  • RECORD

  • BLOCK

"NONE"

MEDIUM
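
For example, block-compressed GZIP output could be configured as follows (illustrative; RECORD and BLOCK compression are SequenceFile concepts in Hadoop, so a sequence-style fileType is assumed):

camel.sink.endpoint.fileType=SEQUENCE_FILE
camel.sink.endpoint.compressionType=BLOCK
camel.sink.endpoint.compressionCodec=GZIP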

camel.sink.endpoint.openedSuffix

When a file is opened for reading/writing, the file is renamed with this suffix to avoid reading it during the writing phase.

"opened"

MEDIUM

camel.sink.endpoint.readSuffix

Once the file has been read, it is renamed with this suffix to avoid reading it again.

"read"

MEDIUM

camel.sink.endpoint.replication

The HDFS replication factor.

3

MEDIUM

camel.sink.endpoint.splitStrategy

In the current version of Hadoop, opening a file in append mode is disabled since it’s not very reliable. So, for the moment, it’s only possible to create new files. The Camel HDFS endpoint tries to solve this problem in this way: if the split strategy option has been defined, the hdfs path will be used as a directory and files will be created using the configured UuidGenerator. Every time a splitting condition is met, a new file is created. The splitStrategy option is defined as a string with the syntax splitStrategy=ST:value,ST:value,… where ST can be one of the following (see the example after this entry):

  • BYTES: a new file is created, and the old one is closed, when the number of written bytes is more than value

  • MESSAGES: a new file is created, and the old one is closed, when the number of written messages is more than value

  • IDLE: a new file is created, and the old one is closed, when no writing happened in the last value milliseconds

MEDIUM
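
For example, the following illustrative value starts a new file after roughly 128 MB has been written or after one second without writes, whichever comes first:

# Split after 134217728 bytes (128 MB) written or 1000 ms idle
camel.sink.endpoint.splitStrategy=BYTES:134217728,IDLE:1000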

camel.sink.endpoint.kerberosConfigFileLocation

The location of the krb5.conf file used to authenticate with the Kerberos nodes.

MEDIUM

camel.sink.endpoint.kerberosKeytabLocation

The location of the keytab file used to authenticate with the Kerberos nodes. The keytab contains pairs of Kerberos principals and encrypted keys derived from the Kerberos password.

MEDIUM

camel.sink.endpoint.kerberosUsername

The username used to authenticate with the Kerberos nodes.

MEDIUM
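
Put together, a Kerberos-secured sink might set the three options above as follows; the file locations and username are hypothetical:

# Hypothetical paths and principal; adapt to your cluster
camel.sink.endpoint.kerberosConfigFileLocation=/etc/krb5.conf
camel.sink.endpoint.kerberosKeytabLocation=/etc/security/keytabs/connect.keytab
camel.sink.endpoint.kerberosUsername=connect-user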

camel.component.hdfs.lazyStartProducer

Whether the producer should be started lazily (on the first message). By starting lazily you can use this to allow CamelContext and routes to start up in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy, the startup failure can be handled during routing messages via Camel’s routing error handlers. Beware that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time of that message.

false

MEDIUM

camel.component.hdfs.autowiredEnabled

Whether autowiring is enabled. This is used for automatic autowiring of options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of the matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS clients, and so on.

true

MEDIUM

camel.component.hdfs.jAASConfiguration

To use the given configuration for security with JAAS.

MEDIUM

camel.component.hdfs.kerberosConfigFile

To use Kerberos authentication, set the value of the java.security.krb5.conf system property to an existing Kerberos configuration file. If the property is already set, a warning is logged if it differs from the specified file.

MEDIUM

The camel-hdfs sink connector has no converters out of the box.

The camel-hdfs sink connector has no transforms out of the box.

The camel-hdfs sink connector has no aggregation strategies out of the box.