• Angelo Cesaro

Apache Kafka and BigQuery connector with batch loads

Updated: Jan 8

BigQuery has two ways to ingest data:

1. Streaming, via the streaming insert API.

2. Batch loading, via BigQuery load jobs that read data from a GCS bucket and load it into BigQuery.

The difference between the two approaches is cost. Streaming data directly into BigQuery is billed for the volume of data streamed and is capped at 100k records per second. Conversely, BigQuery batch loads are free (you only pay for the data stored in the GCS bucket).

The Kafka BigQuery sink connector uses the streaming API by default, but a beta feature (enableBatchLoad) allows the other route: loading data through BigQuery batch jobs.
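As a sketch, a sink connector configuration that routes a topic through GCS batch loads could look like the following. The property names follow the WePay/Confluent BigQuery sink connector, but exact names can differ between connector versions; the topic, project, dataset, bucket, and keyfile values are placeholders:

```json
{
  "name": "bigquery-sink-batch",
  "config": {
    "connector.class": "com.wepay.kafka.connect.bigquery.BigQuerySinkConnector",
    "topics": "pageviews",
    "project": "my-gcp-project",
    "defaultDataset": "kafka_data",
    "keyfile": "/etc/credentials/bigquery.json",

    "enableBatchLoad": "pageviews",
    "gcsBucketName": "my-batch-load-bucket",
    "batchLoadIntervalSec": "120"
  }
}
```

With a configuration along these lines, records for the listed topic are first written to the GCS bucket and then periodically loaded into BigQuery by a batch job (here, roughly every 120 seconds), instead of being streamed row by row.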

Confluent Documentation:

  • Google BigQuery Quotas and Limits

  • Google BigQuery Sink Connector for Confluent

Get in touch to find out more!

We'll be happy to discuss your needs

Send us an email at contact@cesaro.io

