camel-splunk-kafka-connector source configuration
Connector Description: Publish or search for events in Splunk.
When using camel-splunk-kafka-connector as a source, make sure to use the following Maven dependency to have support for the connector:
<dependency>
  <groupId>org.apache.camel.kafkaconnector</groupId>
  <artifactId>camel-splunk-kafka-connector</artifactId>
  <version>x.x.x</version>
  <!-- use the same version as your Camel Kafka connector version -->
</dependency>
To use this source connector in Kafka Connect, you'll need to set the following connector.class:
connector.class=org.apache.camel.kafkaconnector.splunk.CamelSplunkSourceConnector
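For instance, a minimal connector configuration file might look like the sketch below; the connector name, topic and converter choices are placeholders to adapt to your environment:
name=CamelSplunkSourceConnector
connector.class=org.apache.camel.kafkaconnector.splunk.CamelSplunkSourceConnector
tasks.max=1
# Kafka topic the source connector publishes Splunk results to (placeholder name)
topics=splunk-events
# Record key/value converters (adjust to your setup)
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter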
The camel-splunk source connector supports 40 options, which are listed below.
Name | Description | Default | Priority |
---|---|---|---|
camel.source.path.name | Required Name has no purpose. | | HIGH |
camel.source.endpoint.app | Splunk app. | | MEDIUM |
camel.source.endpoint.connectionTimeout | Timeout in MS when connecting to Splunk server. | 5000 | MEDIUM |
camel.source.endpoint.host | Splunk host. | "localhost" | MEDIUM |
camel.source.endpoint.owner | Splunk owner. | | MEDIUM |
camel.source.endpoint.port | Splunk port. | 8089 | MEDIUM |
camel.source.endpoint.scheme | Splunk scheme. | "https" | MEDIUM |
camel.source.endpoint.bridgeErrorHandler | Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. | false | MEDIUM |
camel.source.endpoint.count | A number that indicates the maximum number of entities to return. | | MEDIUM |
camel.source.endpoint.earliestTime | Earliest time of the search time window. | | MEDIUM |
camel.source.endpoint.initEarliestTime | Initial start offset of the first search. | | MEDIUM |
camel.source.endpoint.latestTime | Latest time of the search time window. | | MEDIUM |
camel.source.endpoint.savedSearch | The name of the query saved in Splunk to run. | | MEDIUM |
camel.source.endpoint.search | The Splunk query to run. | | MEDIUM |
camel.source.endpoint.sendEmptyMessageWhenIdle | If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. | false | MEDIUM |
camel.source.endpoint.streaming | Sets streaming mode. Streaming mode sends exchanges as they are received, rather than in a batch. | false | MEDIUM |
camel.source.endpoint.exceptionHandler | To let the consumer use a custom ExceptionHandler. Note that if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored. | | MEDIUM |
camel.source.endpoint.exchangePattern | Sets the exchange pattern when the consumer creates an exchange. One of: [InOnly] [InOut] [InOptionalOut]. | | MEDIUM |
camel.source.endpoint.pollStrategy | A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling that usually occurs during the poll operation, before an Exchange has been created and routed in Camel. | | MEDIUM |
camel.source.endpoint.backoffErrorThreshold | The number of subsequent error polls (failed due to some error) that should happen before the backoffMultiplier should kick in. | | MEDIUM |
camel.source.endpoint.backoffIdleThreshold | The number of subsequent idle polls that should happen before the backoffMultiplier should kick in. | | MEDIUM |
camel.source.endpoint.backoffMultiplier | To let the scheduled polling consumer back off if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt happens again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. | | MEDIUM |
camel.source.endpoint.delay | Milliseconds before the next poll. | 500L | MEDIUM |
camel.source.endpoint.greedy | If greedy is enabled, then the ScheduledPollConsumer will run immediately again if the previous run polled 1 or more messages. | false | MEDIUM |
camel.source.endpoint.initialDelay | Milliseconds before the first poll starts. | 1000L | MEDIUM |
camel.source.endpoint.repeatCount | Specifies a maximum limit on the number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. | 0L | MEDIUM |
camel.source.endpoint.runLoggingLevel | The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. One of: [TRACE] [DEBUG] [INFO] [WARN] [ERROR] [OFF]. | "TRACE" | MEDIUM |
camel.source.endpoint.scheduledExecutorService | Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. | | MEDIUM |
camel.source.endpoint.scheduler | To use a cron scheduler from either the camel-spring or camel-quartz component. Use value spring or quartz for the built-in scheduler. | "none" | MEDIUM |
camel.source.endpoint.schedulerProperties | To configure additional properties when using a custom scheduler or any of the Quartz or Spring based schedulers. | | MEDIUM |
camel.source.endpoint.startScheduler | Whether the scheduler should be auto started. | true | MEDIUM |
camel.source.endpoint.timeUnit | Time unit for initialDelay and delay options. One of: [NANOSECONDS] [MICROSECONDS] [MILLISECONDS] [SECONDS] [MINUTES] [HOURS] [DAYS]. | "MILLISECONDS" | MEDIUM |
camel.source.endpoint.useFixedDelay | Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in the JDK for details. | true | MEDIUM |
camel.source.endpoint.password | Password for Splunk. | | MEDIUM |
camel.source.endpoint.sslProtocol | Sets the SSL protocol to use. One of: [TLSv1.2] [TLSv1.1] [TLSv1] [SSLv3]. | "TLSv1.2" | MEDIUM |
camel.source.endpoint.username | Username for Splunk. | | MEDIUM |
camel.source.endpoint.useSunHttpsHandler | Use the sun.net.www.protocol.https.Handler HTTPS handler to establish the Splunk connection. Can be useful when running in application servers to avoid the app server's HTTPS handling. | false | MEDIUM |
camel.component.splunk.bridgeErrorHandler | Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. | false | MEDIUM |
camel.component.splunk.autowiredEnabled | Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS clients, etc. | true | MEDIUM |
camel.component.splunk.splunkConfigurationFactory | To use the SplunkConfigurationFactory. | | MEDIUM |
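As an illustration only, a few of the options above might be combined along the following lines; the host, credentials, query, time window and polling values are placeholders, and the path value normal follows the camel-splunk consumer examples:
# Splunk connection (placeholder values)
camel.source.path.name=normal
camel.source.endpoint.host=splunk.example.com
camel.source.endpoint.port=8089
camel.source.endpoint.scheme=https
camel.source.endpoint.username=admin
camel.source.endpoint.password=changeit
# Run a raw search, starting 10 minutes back on the first poll
camel.source.endpoint.search=search index=main sourcetype=access_combined
camel.source.endpoint.initEarliestTime=-10m
# Poll Splunk every 5 seconds (delay is in milliseconds by default)
camel.source.endpoint.delay=5000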
The camel-splunk source connector has no converters out of the box.
The camel-splunk source connector has no transforms out of the box.
The camel-splunk source connector has no aggregation strategies out of the box.