Google BigQuery Sink
Provided by: "Apache Software Foundation"
Support Level for this Kamelet is: "Preview"
Send data to a Google BigQuery table.
The message body is expected to be in JSON format, representing either a single object or an array of objects.
The credentialsFileLocation property must be a path to a service account key file.
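For example, assuming a hypothetical target table with `name` and `amount` columns (the field names are illustrative only and must match your own table schema), the message body could look like this:

```json
[
  { "name": "invoice-001", "amount": 42.5 },
  { "name": "invoice-002", "amount": 17.0 }
]
```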
Configuration Options
The following table summarizes the configuration options available for the google-bigquery-sink Kamelet:
| Property | Name | Description | Type | Default | Example |
|---|---|---|---|---|---|
| credentialsFileLocation | Google Cloud Platform Credential File | *Required* The credential to access Google Cloud Platform API services. | string | | |
| dataset | Big Query Dataset Id | *Required* The Big Query Dataset Id. | string | | |
| projectId | Google Cloud Project Id | *Required* The Google Cloud Project Id. | string | | |
| table | Big Query Table Id | *Required* The Big Query Table Id. | string | | |
Dependencies
At runtime, the google-bigquery-sink Kamelet relies upon the presence of the following dependencies:
- camel:core
- camel:kamelet
- camel:google-bigquery
- camel:jackson
Usage
This section describes how you can use the google-bigquery-sink.
Knative sink
You can use the google-bigquery-sink Kamelet as a Knative sink by binding it to a Knative object.
```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: google-bigquery-sink-binding
spec:
  source:
    ref:
      kind: Channel
      apiVersion: messaging.knative.dev/v1
      name: mychannel
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: google-bigquery-sink
    properties:
      credentialsFileLocation: The Google Cloud Platform Credential File
      dataset: The Big Query Dataset Id
      projectId: The Google Cloud Project Id
      table: The Big Query Table Id
```
Prerequisite
You have Camel K installed on the cluster.
Procedure for using the cluster CLI
- Save the google-bigquery-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:

  ```shell
  kubectl apply -f google-bigquery-sink-binding.yaml
  ```
Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
```shell
kamel bind google-bigquery-sink -p "sink.credentialsFileLocation=The Google Cloud Platform Credential File" -p "sink.dataset=The Big Query Dataset Id" -p "sink.projectId=The Google Cloud Project Id" -p "sink.table=The Big Query Table Id" channel:mychannel
```
This command creates the KameletBinding in the current namespace on the cluster.
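To check that the binding exists, you can list it with kubectl; the resource name matches the metadata.name used in the binding (here, google-bigquery-sink-binding):

```shell
kubectl get kameletbinding google-bigquery-sink-binding
```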
Kafka sink
You can use the google-bigquery-sink Kamelet as a Kafka sink by binding it to a Kafka topic.
```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: google-bigquery-sink-binding
spec:
  source:
    ref:
      kind: KafkaTopic
      apiVersion: kafka.strimzi.io/v1beta1
      name: my-topic
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: google-bigquery-sink
    properties:
      credentialsFileLocation: The Google Cloud Platform Credential File
      dataset: The Big Query Dataset Id
      projectId: The Google Cloud Project Id
      table: The Big Query Table Id
```
Prerequisites
- You’ve installed Strimzi.
- You’ve created a topic named my-topic in the current namespace.
- You have Camel K installed on the cluster.
Procedure for using the cluster CLI
- Save the google-bigquery-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration.
- Run the sink by using the following command:

  ```shell
  kubectl apply -f google-bigquery-sink-binding.yaml
  ```
Procedure for using the Kamel CLI
Configure and run the sink by using the following command:
```shell
kamel bind google-bigquery-sink -p "sink.credentialsFileLocation=The Google Cloud Platform Credential File" -p "sink.dataset=The Big Query Dataset Id" -p "sink.projectId=The Google Cloud Project Id" -p "sink.table=The Big Query Table Id" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic
```
This command creates the KameletBinding in the current namespace on the cluster.