Heroku Streaming Data Connectors

Last updated March 10, 2025

Table of Contents

  • Heroku App Setup
  • Heroku Add-ons Setup
  • Heroku’s Streaming Data Connector Setup
  • Managing a Connector
  • Upgrading the Version of a Heroku Postgres Database
  • Destroying a Connector

This article describes how to configure Change Data Capture (CDC) for Heroku Postgres events and stream them to your Apache Kafka on Heroku add-on provisioned in a Private Space or a Shield Private Space. This process involves three high-level steps:

  1. Creating an app in a Private Space or a Shield Private Space.
  2. Provisioning a Private or Shield Heroku Postgres add-on and a Private or Shield Apache Kafka on Heroku add-on on your new app.
  3. Creating a streaming data connector to enable CDC events from your Postgres to your Kafka.

Streaming data connectors only work if your Private or Shield Apache Kafka on Heroku add-on and Private or Shield Heroku Postgres add-on are in the same Private or Shield Private Space.

For more information about how to best configure a streaming data connector, see Best Practices for Heroku’s Streaming Data Connectors.

Heroku App Setup

To begin, create a Private or Shield Private Space. When your Space is available, you can create an app in your Space.

$ heroku spaces:create --region virginia --team my-team-name --space myspace
$ heroku spaces:wait --space myspace
$ heroku apps:create --space myspace my-cdc-app

Heroku Add-ons Setup

Next, you need two Private or Shield data add-ons attached to your app.

$ heroku addons:create heroku-postgresql:private-7 --as DATABASE --app my-cdc-app
$ heroku addons:create heroku-kafka:private-extended-2 --as KAFKA --app my-cdc-app

You can monitor the add-on provisioning progress:

$ heroku addons:wait --app my-cdc-app

When your add-ons are available, import your schema and your data into your Postgres database.
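
For example, a minimal sketch that loads a plain-SQL dump with pg:psql (seed.sql is a hypothetical file name):

# Load schema and data from a local SQL dump into the new database
$ heroku pg:psql --app my-cdc-app < seed.sql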

Heroku’s Streaming Data Connector Setup

When you have an app in a Private or Shield Private Space with Heroku Postgres and Apache Kafka on Heroku add-ons configured, you can provision a connector.

First, install the CLI plugin:

$ heroku plugins:install data

To create a connector, you must gather several pieces of information.

  1. The name of the Kafka add-on
  2. The name of the Postgres add-on
  3. The names of the Postgres tables from which you want to capture events
  4. Optionally, the names of the columns you want to exclude from capture events
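
If you don't have the add-on names handy, you can list every add-on attached to the app (shown against the example app from the setup above):

$ heroku addons --app my-cdc-app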

To capture events in your Postgres database, a few requirements must be met (a quick way to spot-check them follows the list):

  • The database encoding must be UTF-8
  • The tables must currently exist
  • The tables must have a primary key
  • The tables must not be partitioned
  • The table names must only contain the characters [a-z,A-Z,0-9,_]
  • The Kafka Formation must have direct Zookeeper access disabled
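
You can spot-check several of these requirements before creating the connector. A minimal sketch using pg:psql (the queries are illustrative, not an official checklist):

# Confirm the database encoding is UTF-8
$ heroku pg:psql --app my-cdc-app -c "SHOW server_encoding;"

# List ordinary tables in the public schema that lack a primary key
$ heroku pg:psql --app my-cdc-app -c "
  SELECT c.relname
  FROM pg_class c
  WHERE c.relkind = 'r'
    AND c.relnamespace = 'public'::regnamespace
    AND NOT EXISTS (
      SELECT 1 FROM pg_constraint p
      WHERE p.conrelid = c.oid AND p.contype = 'p');"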

Take care in choosing which tables to capture: a single connector can't keep up with a high volume of events from many tables.

Next, you can create the connector. You need the names of your Postgres and Kafka add-ons, as well as a list of fully qualified tables you want to include in your database capture events:

$ heroku data:connectors:create \
    --source postgresql-neato-98765 \
    --store kafka-lovely-12345 \
    --table public.posts --table public.users
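
If you gathered column exclusions (item 4 above), you can apply them at creation time. A sketch, assuming data:connectors:create accepts the same --exclude-column flag shown for updates later in this article (the column name here is hypothetical):

$ heroku data:connectors:create \
    --source postgresql-neato-98765 \
    --store kafka-lovely-12345 \
    --table public.users \
    --exclude-column public.users.password_hash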

Provisioning can take approximately 15–20 minutes to complete. You can monitor the connector provisioning progress:

$ heroku data:connectors:wait gentle-connector-1234

Streaming Data Connectors creates a topic in your Kafka cluster for each table you've chosen to capture data changes from. When your connector is available, you can view its details, including the newly created Kafka topics:

$ heroku data:connectors:info gentle-connector-1234
=== Data Connector status for gentle_connector_1234
Name:   gentle_connector_1234
Status: available

=== Configuration
Table Name   Topic Name
public.posts gentle_connector_1234.public.posts
public.users gentle_connector_1234.public.users

These topics are configured with the following parameters:

  • partition_count: 32
  • replication_factor: 3
  • cleanup_policy: delete
  • retention_time_ms: 24 hours
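
You can confirm these settings on any of the created topics with the Kafka plugin (a sketch, assuming the kafka:topics:info command available with Apache Kafka on Heroku):

$ heroku kafka:topics:info gentle_connector_1234.public.posts --app my-cdc-app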

Your principal (Kafka user) has Read and Describe access on the table topics created by Streaming Data Connectors. Write, Delete, and Alter operations are denied.

This feature also creates a heartbeat topic for each connector. Your principal also has Read and Describe access to the heartbeat topic.

Managing a Connector

After you’ve created your connector, there are a few options available for managing it.

Pause or Resume

You can pause processing of new events. While a connector is paused, it stops polling for additional records until you resume it. Alternating between these two states is simple:

# to pause
$ heroku data:connectors:pause gentle-connector-1234

# to resume
$ heroku data:connectors:resume gentle-connector-1234

Under normal operation, the connector doesn’t lose change events that occur while a connector is paused. The connector uses a replication slot on the Postgres database to track progress, and picks up where it left off without losing data when resumed.

Don’t leave connectors in a “paused” state for more than a few hours. Paused connectors prevent WAL from being deleted, which can put the primary database at risk. It’s better to destroy the connector than to leave it paused for a long period.

Change events that occur while a connector is paused aren’t guaranteed to make it into Kafka. If a failover happens (due to a system failure or scheduled maintenance), change events after the connector was paused are lost.

If the connector is paused for a very long time on a busy database, the replication slot prevents Postgres from deleting unread write-ahead logs (WAL). As a result, the WAL drive can fill up, which causes the database to shut down. Our automation generally detects these situations ahead of time, but in a worst-case scenario, we must drop the replication slot to protect the database. In that rare case, change events wouldn’t make it to Kafka.

Update Configuration

You can modify certain properties associated with your connector via the CLI. These properties include:

property                  possible values                                  default value  details
decimal.handling.mode     precise, double, string                          precise        docs
hstore.handling.mode      map, json                                        map            docs
time.precision.mode       adaptive, adaptive_time_microseconds, connect    adaptive       docs
interval.handling.mode    numeric, string                                  numeric        docs
tombstones.on.delete      true, false                                      true           docs
binary.handling.mode      bytes, base64, hex                               bytes          docs

For example, you can update tombstones.on.delete to false:

$ heroku data:connectors:update gentle-connector-1234 \
  --setting tombstones.on.delete=false

We recommend familiarizing yourself with our Best Practices when working with connectors.

Configuration managed by Heroku

Most configuration properties are entirely managed by Heroku and are modified as needed.

property                  managed value  details
heartbeat.interval.ms     60 seconds     docs

Update Tables and Excluded Columns

You can also modify the connector’s Postgres tables, as well as excluded columns.

For example, you can add the table public.parcels and remove the table public.posts:

$ heroku data:connectors:update gentle-connector-1234 \
  --add-table public.parcels \
  --remove-table public.posts

New tables must adhere to the same requirements outlined in Heroku’s Streaming Data Connector Setup.

Likewise, you can add and remove excluded columns:

$ heroku data:connectors:update gentle-connector-1234 \
  --exclude-column public.parcels.address \
  --remove-excluded-column public.posts.keys

Upgrading the Version of a Heroku Postgres Database

Upgrading a Heroku Postgres database with Streaming Data Connectors requires you to recreate the connectors on the newly upgraded database. There are extra steps to reconfigure your systems so that the connectors stream events to the new database. The upgrade itself is done using a follower database and pg:upgrade. These instructions show how streaming data connectors fit into the upgrade process.

The steps to upgrade a database with streaming data connectors are:

  1. Get information about the current connectors and their settings
  2. Prepare a new follower database to upgrade and promote
  3. Disable writes on the old database and pause connectors
  4. Run the upgrade
  5. Promote or attach the newly upgraded database
  6. Recreate the connectors on the newly upgraded database
  7. Exit maintenance mode
  8. Deprovision the old primary database

1. Get Information About Connectors

Get a list of your connectors and the configuration details for each one. You need this information in step 6 when you recreate the connectors on the upgraded database.

To get a list of connectors, run the command:

$ heroku data:connectors --app example-app

=== Data Connector info for example-app
Connector Name: inventive-connector-83577
Kafka Add-On: example-app-kafka
Postgres Add-On: example-app-postgres
Tables: public.posts
public.comments

To get info and configuration details on a specific connector, run the command with a connector name:

$ heroku data:connectors:info inventive-connector-83577

=== Data Connector status for inventive-connector-83577
Lag: 15 MB
Service Name: 10d5e5cb-0343-4166-80d8-f03fcc4d1e21
Status: available

=== Configuration
Table Name Topic Name
public.posts inventive_connector_83577.public.posts
public.comments inventive_connector_83577.public.comments

Your Data Connector is now available.

2. Provision a Follower Database

Next, create your follower database and wait for it to get mostly caught up with your leader database. Creating a follower minimizes the amount of downtime required for the upgrade. We recommend creating your follower at least 24 hours before upgrading.
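
A minimal sketch, assuming the private-7 plan from the setup above and a primary attached as DATABASE_URL (substitute your own plan and config var):

# Provision a follower of the current primary database
$ heroku addons:create heroku-postgresql:private-7 --follow DATABASE_URL --app example-app

# Wait for the follower to finish provisioning
$ heroku pg:wait --app example-app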

3. Enter Maintenance Mode and Pause Connectors

Next, put your app in maintenance mode and scale down your dynos to prevent writes to your database during the upgrade.

$ heroku maintenance:on --app example-app
$ heroku ps:scale consumer=0 generator=0 web=0 --app example-app

Then, pause your data connectors:

$ heroku data:connectors:pause inventive-connector-83577

Pausing Data Connector inventive-connector-83577... done

4. Upgrade the Follower Database

Before upgrading, check that your follower is caught up by running pg:info. The Behind By field shows 0 commits when your follower database is in sync.
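
For example:

# Look for "Behind By: 0 commits" under the follower's entry
$ heroku pg:info --app example-app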

Next, upgrade the follower database using pg:upgrade. This command has the follower unfollow the leader and perform the Postgres version upgrade. You can monitor the progress of the upgrade with pg:wait.
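
A sketch, assuming the follower's config var is HEROKU_POSTGRESQL_WHITE_URL (a hypothetical name; substitute your own):

# Unfollow the leader and upgrade the follower's Postgres version
$ heroku pg:upgrade HEROKU_POSTGRESQL_WHITE_URL --app example-app

# Monitor the upgrade
$ heroku pg:wait --app example-app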

5. Promote or Attach the New Database

If DATABASE_URL was the config var for your previous primary database, use pg:promote to promote the newly upgraded database as the new DATABASE_URL. If the database you’re upgrading uses a config var other than the default DATABASE_URL, use heroku addons:attach to attach your newly upgraded database under the required alias or attachment name.
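
For example (the config var, add-on name, and alias here are illustrative):

# Default case: promote the upgraded database to DATABASE_URL
$ heroku pg:promote HEROKU_POSTGRESQL_WHITE_URL --app example-app

# Non-default case: attach it under the alias your app expects
$ heroku addons:attach postgresql-sunny-12345 --as ANALYTICS --app example-app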

6. Replace Your Connectors

Next, you must destroy the connectors on your old database and recreate them for your newly upgraded database. Make sure you have the information from step 1 before destroying. Destroying connectors usually takes around 10 minutes to complete.

Destroying connectors doesn’t destroy associated Kafka topics.

$ heroku data:connectors:destroy inventive-connector-83577

To proceed, type inventive-connector-83577 or re-run this command with --confirm inventive-connector-83577: inventive-connector-83577
Destroying Data Connector... done
Note: We do not delete your Kafka topics automatically, because they could still contain messages which you haven't consumed. Please delete the topics manually. See heroku kafka:topics:destroy --help
Data Connector inventive-connector-83577 deleted successfully.

After the connectors are destroyed, recreate them in your new database.

You must create the connectors with the same name and configuration as the old connectors.
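
A sketch of recreating one connector on the upgraded database, reusing the Kafka add-on and tables recorded in step 1 (the new Postgres add-on name is hypothetical):

$ heroku data:connectors:create \
    --source postgresql-sunny-12345 \
    --store example-app-kafka \
    --table public.posts --table public.comments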

7. Exit Maintenance Mode

To resume normal application operation, scale your dynos back to their original levels and turn off maintenance mode:

$ heroku ps:scale consumer=1:Private-M generator=1:Private-M web=1:Private-M --app example-app

Scaling dynos... done, now running consumer at 1:Private-M, web at 1:Private-M, generator at 1:Private-M, console at 0:Private-M, rake at 0:Private-M

$ heroku maintenance:off --app example-app

We recommend verifying that your application is working properly after the upgrade and that events are streaming from your new connectors. Run kafka:tail to see the latest messages in a connector topic.
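
For example, assuming the Kafka plugin's kafka:topics:tail command and the topic names from step 1:

$ heroku kafka:topics:tail inventive_connector_83577.public.posts --app example-app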

8. Deprovision the Old Primary Database

After you upgrade your database, be sure to deprovision your old primary database:

$ heroku addons:destroy HEROKU_POSTGRESQL_LAVENDER --app example-app

Destroying a Connector

You can destroy a connector via the CLI.

This command does not destroy the Kafka topics used to produce events. You must manage their lifecycle independently.

$ heroku data:connectors:destroy gentle-connector-1234
