Supabase

Integrating Supabase as a Source with GlassFlow Using Webhook Connector

In this guide, you will learn how to integrate Supabase as a data source with GlassFlow using the Webhook connector. This integration lets you detect and stream data changes from your Supabase database directly into a GlassFlow pipeline in real time. Read more about the use case with Supabase and GlassFlow.

Connect Supabase to GlassFlow

Prerequisites

Before you begin, make sure you have the following:

  • A GlassFlow account and access to the GlassFlow WebApp.

  • A Supabase project with at least one table whose changes you want to stream.

Step 1: Log in to GlassFlow and Create a New Pipeline

  1. Log in to the GlassFlow WebApp.

  2. Create a New Pipeline.

    • Go to the "Pipelines" section and click on "Create New Pipeline."

    • Provide a name for your pipeline, for example, Supabase-Webhook-Pipeline.

    • Select the "Space" you want the pipeline to reside in.

Step 2: Configure the Webhook as a Data Source

  1. Choose "Webhook" as the Data Source.

    • During the pipeline creation process, select "Webhook" as your data source connector.

    • GlassFlow will provide you with a unique Webhook URL. This is where Supabase will send data whenever a specified event occurs.

Step 3: Add a Transformation Stage

  1. Configure the Transformation Stage.

    • In the pipeline setup, you will see an option to add a transformation function. This is where you can define how the data should be processed or transformed before it reaches the final destination.

    • You can upload a Python script (transform.py) or write your transformation logic directly in the GlassFlow WebApp.

    • For example, if you want to enrich the incoming data or filter out specific fields, you can implement that logic here (see the sketch after this list).

  2. Choose Dependencies (if needed).

    • If your transformation requires external libraries (e.g., pandas, openai), you can select them from the dependency menu in GlassFlow.
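
As an illustration, here is a minimal transform.py sketch. It assumes GlassFlow's handler convention of a handler(data, log) function that receives one event and returns the (optionally modified) event; the internal_notes column is a hypothetical example, and the payload fields match the Supabase webhook shape shown in Step 6.

    def handler(data, log):
        """Enrich and filter each Supabase change event."""
        # Supabase sends the change type ("INSERT", "UPDATE", "DELETE"),
        # the table name, and the affected row (see Step 6).
        log.info("Received %s on table %s", data.get("type"), data.get("table"))

        # Example enrichment: tag the event before it reaches the sink.
        data["processed_by"] = "glassflow-transform"

        # Example filtering: drop a field you do not want downstream.
        record = data.get("record") or {}
        record.pop("internal_notes", None)  # hypothetical column

        return data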

Step 4: Choose a Data Sink

  1. Select Your Data Sink.

    • After setting up the transformation stage, you need to choose where the transformed data will be sent.

    • You can choose a built-in integration such as AWS S3 or Google Pub/Sub, or use the SDK to connect to a custom endpoint or another service.

  2. Configure the Data Sink.

    • If using a built-in integration, follow the prompts to enter the required details (e.g., bucket name for S3, topic for Pub/Sub).

    • If using the SDK, you will configure the sink in your code to send the data to a specific API or service.
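
For the custom case, the sketch below shows one hypothetical approach: pull transformed events from the pipeline with the GlassFlow Python SDK and forward them to your own HTTP endpoint. The client calls and the https://api.example.com/ingest URL are assumptions; the exact consumer snippet for your pipeline is available on its Pipeline Details page.

    import glassflow
    import requests

    # Assumed SDK usage; copy the exact snippet from your Pipeline Details page.
    client = glassflow.GlassFlowClient()
    pipeline = client.pipeline_client(
        pipeline_id="<your-pipeline-id>",
        pipeline_access_token="<your-access-token>",
    )

    # Simple polling loop; add error handling and backoff for real use.
    while True:
        response = pipeline.consume()
        if response.status_code == 200:
            # Forward each transformed event to a hypothetical custom service.
            requests.post("https://api.example.com/ingest", json=response.json())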

Step 5: Finalize and Deploy the Pipeline

  • Confirm Pipeline Settings.

    • Review your pipeline configuration in GlassFlow and click "Create Pipeline."

    • Your pipeline is now active and will start receiving data as soon as you configure the Supabase webhook in the next step.

  • Copy the Webhook URL.

    • Copy the Webhook URL and pipeline Access Token provided by GlassFlow. You will need both in the next step to configure Supabase.
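
Before wiring up Supabase, you can optionally verify the pipeline end to end with a quick test request. The sketch below only assumes the Webhook URL and Access Token you just copied; it sends the same kind of JSON POST that Supabase will later send.

    import requests

    WEBHOOK_URL = "<your-glassflow-webhook-url>"
    ACCESS_TOKEN = "<your-pipeline-access-token>"

    # Mimic a Supabase INSERT event (see the payload shape in Step 6).
    test_event = {
        "type": "INSERT",
        "table": "users",  # hypothetical table
        "schema": "public",
        "record": {"id": 1, "email": "ada@example.com"},
        "old_record": None,
    }

    resp = requests.post(
        WEBHOOK_URL,
        json=test_event,  # also sets Content-Type: application/json
        headers={"X-Pipeline-Access-Token": ACCESS_TOKEN},
    )
    print(resp.status_code)  # a 2xx status means the pipeline accepted the event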

Step 6: Set Up the Webhook in Supabase

  1. Log in to Supabase.

  2. Navigate to the "Webhooks" Section.

    • In your Supabase project dashboard, open "Database" and select "Webhooks". This is where you create and manage Database Webhooks.

  3. Create a New Webhook.

    • Set up a new Webhook to monitor the database table(s) you want.

    • Paste the Webhook URL you copied from GlassFlow into the Webhook URL field in Supabase.

    • Add two headers: X-Pipeline-Access-Token, set to the Access Token you copied from GlassFlow, and Content-Type, set to application/json.

  4. Select Events to Monitor.

    • Choose the events you want to monitor (Insert, Update, Delete).

    • Optionally, you can filter the events based on certain conditions.

  5. Save the Webhook Configuration.

    • Save your webhook settings in Supabase. Now, whenever an event occurs in the specified table(s), Supabase will trigger the webhook, sending data to your GlassFlow pipeline.
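
For reference, a Supabase Database Webhook delivers a JSON body shaped like the following (shown here as a Python dict; the users table and its columns are hypothetical):

    # "type" is one of "INSERT", "UPDATE", or "DELETE".
    # "old_record" is null for INSERTs and carries the previous row
    # for UPDATEs and DELETEs.
    supabase_event = {
        "type": "INSERT",
        "table": "users",
        "schema": "public",
        "record": {"id": 1, "email": "ada@example.com"},
        "old_record": None,
    }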

Step 7: Monitor and Process Data

  1. Monitor the Data Stream.

    • Use the Logs section of your pipeline in the GlassFlow WebApp to monitor incoming events and confirm the pipeline is working correctly.

  2. View Transformed Data.

    • Depending on the chosen data sink, you can now view the transformed data in the configured destination (e.g., AWS S3, Google Pub/Sub, or a custom service consuming events via the Python SDK).

    • To get started with consuming events from your pipeline with the Python SDK, use the code snippet provided on the Pipeline Details page.
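
A minimal consumer might look like the sketch below. It assumes the same SDK client pattern as in Step 4; prefer the exact snippet from your Pipeline Details page, since the SDK surface may differ between versions.

    import glassflow

    client = glassflow.GlassFlowClient()
    pipeline = client.pipeline_client(
        pipeline_id="<your-pipeline-id>",
        pipeline_access_token="<your-access-token>",
    )

    # Poll for transformed events; a 200 response carries one event.
    response = pipeline.consume()
    if response.status_code == 200:
        print(response.json())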

Conclusion

You’ve successfully set up a real-time data pipeline from Supabase to GlassFlow using the Webhook connector. This setup is ideal for applications that require immediate data processing, such as real-time analytics, notifications, and more.
