Dremio

Concept Mapping

The table below shows how entities and concepts in Dremio map to corresponding entities in DataHub:

| Source Concept | DataHub Concept | Notes |
|----------------|-----------------|-------|
| Physical Dataset/Table | Dataset | Subtype: Table |
| Virtual Dataset/View | Dataset | Subtype: View |
| Space | Container | Mapped to DataHub's Container aspect. Subtype: Space |
| Folder | Container | Mapped as a Container in DataHub. Subtype: Folder |
| Source | Container | Represented as a Container in DataHub. Subtype: Source |
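
For illustration, a virtual dataset at MySpace.MyFolder.MyView would surface in DataHub as a Dataset entity with subtype View, addressed by a URN along these lines (the exact dataset-name format is an assumption for illustration, not the connector's guaranteed naming scheme):

```text
urn:li:dataset:(urn:li:dataPlatform:dremio,MySpace.MyFolder.MyView,PROD)
```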

Certified

Important Capabilities

| Capability | Notes |
|------------|-------|
| Asset Containers | Enabled by default |
| Data Profiling | Optionally enabled via configuration |
| Descriptions | Enabled by default |
| Detect Deleted Entities | Optionally enabled via stateful_ingestion.remove_stale_metadata |
| Domains | Supported via the domain config field |
| Extract Ownership | Enabled by default |
| Platform Instance | Enabled by default |
| Table-Level Lineage | Enabled by default |

This plugin integrates with Dremio to extract and ingest metadata into DataHub. The following types of metadata are extracted:

- Metadata for Spaces, Folders, Sources, and Datasets:
  - Includes physical and virtual datasets, with detailed information about each dataset.
  - Extracts metadata about Dremio's organizational hierarchy: Spaces (top-level), Folders (sub-level), and Sources (external data connections).
- Schema and Column Information:
  - Column types and schema metadata associated with each physical and virtual dataset.
  - Extracts column-level metadata such as names, data types, and descriptions, if available.
- Lineage Information:
  - Dataset-level and column-level lineage tracking:
    - Dataset-level lineage shows dependencies and relationships between physical and virtual datasets.
    - Column-level lineage tracks transformations applied to individual columns across datasets.
  - Lineage information helps trace the flow of data and transformations within Dremio.
- Ownership and Glossary Terms:
  - Metadata related to ownership of datasets, extracted from Dremio's ownership model.
  - Glossary terms and business metadata associated with datasets, providing additional context to the data.
- Optional SQL Profiling (if enabled):
  - Table, row, and column statistics can be profiled and ingested via optional SQL queries.
  - Extracts statistics about tables and columns, such as row counts and data distribution, for better insight into the dataset structure.

Setup

This integration pulls metadata directly from the Dremio APIs.

You'll need a running Dremio instance with access to the relevant datasets, and API access must be enabled with a valid token. The token should have sufficient permissions to read metadata and retrieve lineage; a quick way to sanity-check this is sketched after the steps below.

Steps to Get the Required Information

1. Generate an API Token:
   - Log in to your Dremio instance.
   - Navigate to your user profile in the top-right corner.
   - Select Generate API Token to create an API token for programmatic access.
2. Permissions:
   - The token should have read-only or admin permissions that allow it to:
     - View all datasets (physical and virtual).
     - Access all spaces, folders, and sources.
     - Retrieve dataset- and column-level lineage information.
3. Verify External Data Source Permissions:
   - If Dremio is connected to external data sources (e.g., AWS S3, relational databases), ensure that Dremio has access to the credentials required for querying those sources.
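
Before running ingestion, it can be worth verifying that the token can actually list the catalog. Below is a minimal sketch, assuming a Dremio Software instance that accepts the Personal Access Token as a Bearer token on the v3 REST API (Dremio Cloud uses a different base URL); the hostname, port, and token are placeholders:

```python
# Minimal token sanity check -- a sketch, not part of the connector.
# Assumes the PAT is accepted as a Bearer token on Dremio Software's
# v3 REST API; Dremio Cloud uses a different base URL.
import requests

DREMIO_URL = "https://localhost:9047"  # hostname/port from your recipe
TOKEN = "<your_api_token>"             # placeholder

resp = requests.get(
    f"{DREMIO_URL}/api/v3/catalog",                # lists top-level spaces and sources
    headers={"Authorization": f"Bearer {TOKEN}"},
    verify=False,  # only for self-signed certificates; prefer verify=True
)
resp.raise_for_status()
# Print the top-level containers the token can see.
for entry in resp.json().get("data", []):
    print(entry.get("path"), entry.get("containerType"))
```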

CLI based Ingestion

Install the Plugin

The dremio source works out of the box with acryl-datahub.
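
Since the source ships with the base package, installing acryl-datahub is sufficient:

```shell
pip install acryl-datahub
```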

Starter Recipe

Check out the following recipe to get started with ingestion! See below for full configuration options.

For general pointers on writing and running a recipe, see our main recipe guide.

```yaml
source:
  type: dremio
  config:
    # Coordinates
    hostname: localhost
    port: 9047
    tls: true

    # Credentials with personal access token (recommended)
    authentication_method: PAT
    password: <your_personal_access_token>
    # OR credentials with basic auth
    # authentication_method: password
    # username: user
    # password: pass

    # For a Dremio Cloud instance
    # is_dremio_cloud: True
    # dremio_cloud_project_id: <project_id>

    include_query_lineage: True

    # Optional
    source_mappings:
      - platform: s3
        source_name: samples

    # Optional
    schema_pattern:
      allow:
        - "<source_name>.<table_name>"

sink:
  # sink configs
```
Config Details

Note that a . is used to denote nested fields in the YAML recipe.
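
For example, the profiling.enabled entry in the reference below corresponds to this nested YAML:

```yaml
source:
  type: dremio
  config:
    profiling:
      enabled: true
```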

authentication_method
string
Authentication method: 'password' or 'PAT' (Personal Access Token)
Default: PAT
disable_certificate_verification
boolean
Disable TLS certificate verification
Default: False
domain
string
Domain for all source objects.
dremio_cloud_project_id
string
ID of Dremio Cloud Project. Found in Project Settings in the Dremio Cloud UI
dremio_cloud_region
Enum
One of: "US", "EU"
Default: US
hostname
string
Hostname or IP Address of the Dremio server
include_query_lineage
boolean
Whether to include query-based lineage information.
Default: False
is_dremio_cloud
boolean
Whether this is a Dremio Cloud instance
Default: False
max_workers
integer
Number of worker threads to use for parallel processing
Default: 20
password
string
Dremio password or Personal Access Token
path_to_certificates
string
Path to SSL certificates
Default: /vercel/path0/metadata-ingestion/venv/lib/python3....
platform_instance
string
The instance of the platform that all assets produced by this recipe belong to. This should be unique within the platform. See https://datahubproject.io/docs/platform-instances/ for more details.
port
integer
Port of the Dremio REST API
Default: 9047
tls
boolean
Whether the Dremio REST API port is encrypted
Default: True
username
string
Dremio username
env
string
The environment that all assets produced by this connector belong to
Default: PROD
dataset_pattern
AllowDenyPattern
Regex patterns for tables and views to filter in ingestion. The regex must match the entire table name in dremio.schema.table format. For example, to match all tables whose names start with 'customer' in the 'public' schema, use the regex 'dremio.public.customer.*'.
Default: {'allow': ['.*'], 'deny': [], 'ignoreCase': True}
dataset_pattern.ignoreCase
boolean
Whether to ignore case sensitivity during pattern matching.
Default: True
dataset_pattern.allow
array
List of regex patterns to include in ingestion
Default: ['.*']
dataset_pattern.allow.string
string
dataset_pattern.deny
array
List of regex patterns to exclude from ingestion.
Default: []
dataset_pattern.deny.string
string
profile_pattern
AllowDenyPattern
Regex patterns for tables to profile
Default: {'allow': ['.*'], 'deny': [], 'ignoreCase': True}
profile_pattern.ignoreCase
boolean
Whether to ignore case sensitivity during pattern matching.
Default: True
profile_pattern.allow
array
List of regex patterns to include in ingestion
Default: ['.*']
profile_pattern.allow.string
string
profile_pattern.deny
array
List of regex patterns to exclude from ingestion.
Default: []
profile_pattern.deny.string
string
schema_pattern
AllowDenyPattern
Regex patterns for schemas to filter
Default: {'allow': ['.*'], 'deny': [], 'ignoreCase': True}
schema_pattern.ignoreCase
boolean
Whether to ignore case sensitivity during pattern matching.
Default: True
schema_pattern.allow
array
List of regex patterns to include in ingestion
Default: ['.*']
schema_pattern.allow.string
string
schema_pattern.deny
array
List of regex patterns to exclude from ingestion.
Default: []
schema_pattern.deny.string
string
source_mappings
array
Mappings from Dremio sources to DataHub platforms and datasets.
source_mappings.DremioSourceMapping
DremioSourceMapping
Any source that produces dataset urns in a single environment should inherit this class
source_mappings.DremioSourceMapping.platform 
string
Source connection made by Dremio (e.g. S3, Snowflake)
source_mappings.DremioSourceMapping.source_name 
string
Alias of platform in Dremio connection
source_mappings.DremioSourceMapping.platform_instance
string
The instance of the platform that all assets produced by this recipe belong to. This should be unique within the platform. See https://datahubproject.io/docs/platform-instances/ for more details.
source_mappings.DremioSourceMapping.env
string
The environment that all assets produced by this connector belong to
Default: PROD
usage
BaseUsageConfig
The usage config to use when generating usage statistics
Default: {'bucket_duration': 'DAY', 'end_time': '2024-11-26...
usage.bucket_duration
Enum
Size of the time window to aggregate usage stats.
Default: DAY
usage.end_time
string(date-time)
Latest date of lineage/usage to consider. Default: Current time in UTC
usage.format_sql_queries
boolean
Whether to format sql queries
Default: False
usage.include_operational_stats
boolean
Whether to display operational stats.
Default: True
usage.include_read_operational_stats
boolean
Whether to report read operational stats. Experimental.
Default: False
usage.include_top_n_queries
boolean
Whether to ingest the top_n_queries.
Default: True
usage.start_time
string(date-time)
Earliest date of lineage/usage to consider. Default: Last full day in UTC (or hour, depending on bucket_duration). You can also specify relative time with respect to end_time such as '-7 days' Or '-7d'.
usage.top_n_queries
integer
Number of top queries to save to each table.
Default: 10
usage.user_email_pattern
AllowDenyPattern
Regex patterns for user emails to filter in usage.
Default: {'allow': ['.*'], 'deny': [], 'ignoreCase': True}
usage.user_email_pattern.ignoreCase
boolean
Whether to ignore case sensitivity during pattern matching.
Default: True
usage.user_email_pattern.allow
array
List of regex patterns to include in ingestion
Default: ['.*']
usage.user_email_pattern.allow.string
string
usage.user_email_pattern.deny
array
List of regex patterns to exclude from ingestion.
Default: []
usage.user_email_pattern.deny.string
string
profiling
ProfileConfig
Configuration for profiling
Default: {'enabled': False, 'operation_config': {'lower_fre...
profiling.enabled
boolean
Whether profiling should be done.
Default: False
profiling.include_field_distinct_count
boolean
Whether to profile for the number of distinct values for each column.
Default: True
profiling.include_field_distinct_value_frequencies
boolean
Whether to profile for distinct value frequencies.
Default: False
profiling.include_field_histogram
boolean
Whether to profile for the histogram for numeric fields.
Default: False
profiling.include_field_max_value
boolean
Whether to profile for the max value of numeric columns.
Default: True
profiling.include_field_mean_value
boolean
Whether to profile for the mean value of numeric columns.
Default: True
profiling.include_field_min_value
boolean
Whether to profile for the min value of numeric columns.
Default: True
profiling.include_field_null_count
boolean
Whether to profile for the number of nulls for each column.
Default: True
profiling.include_field_quantiles
boolean
Whether to profile for the quantiles of numeric columns.
Default: False
profiling.include_field_sample_values
boolean
Whether to profile for the sample values for all columns.
Default: True
profiling.include_field_stddev_value
boolean
Whether to profile for the standard deviation of numeric columns.
Default: True
profiling.limit
integer
Max number of documents to profile. By default, profiles all documents.
profiling.max_workers
integer
Number of worker threads to use for profiling. Set to 1 to disable.
Default: 20
profiling.offset
integer
Offset in documents to profile. By default, uses no offset.
profiling.profile_table_level_only
boolean
Whether to perform profiling at table-level only, or include column-level profiling as well.
Default: False
profiling.query_timeout
integer
Time before cancelling Dremio profiling query
Default: 300
profiling.operation_config
OperationConfig
Experimental feature. To specify operation configs.
profiling.operation_config.lower_freq_profile_enabled
boolean
Whether to profile at a lower frequency. This does not do any scheduling; it only adds additional checks that determine when profiling should be skipped.
Default: False
profiling.operation_config.profile_date_of_month
integer
Number between 1 and 31 for the day of the month (both inclusive). If not specified, this field has no effect.
profiling.operation_config.profile_day_of_week
integer
Number between 0 and 6 for the day of the week (both inclusive), where 0 is Monday and 6 is Sunday. If not specified, this field has no effect.
stateful_ingestion
StatefulStaleMetadataRemovalConfig
Base specialized config for Stateful Ingestion with stale metadata removal capability.
stateful_ingestion.enabled
boolean
Whether or not to enable stateful ingest. Default: True if a pipeline_name is set and either a datahub-rest sink or datahub_api is specified, otherwise False
Default: False
stateful_ingestion.remove_stale_metadata
boolean
Soft-deletes the entities present in the last successful run but missing in the current run with stateful_ingestion enabled.
Default: True
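
Putting a few of the optional settings above together: a recipe fragment that enables profiling, restricts it to a subset of tables, and turns on stale-metadata removal could look like the sketch below (the profile_pattern value is a hypothetical example):

```yaml
source:
  type: dremio
  config:
    profiling:
      enabled: true
      profile_table_level_only: false   # also collect column-level statistics
    profile_pattern:
      allow:
        - "dremio.public.customer.*"    # hypothetical: profile only these tables
    stateful_ingestion:
      enabled: true
      remove_stale_metadata: true       # soft-delete entities missing from this run
```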

Starter Recipe for a Dremio Cloud Instance

```yaml
source:
  type: dremio
  config:
    # Authentication details
    authentication_method: PAT              # Use a Personal Access Token for authentication
    password: <your_api_token>              # Replace <your_api_token> with your Dremio Cloud API token
    is_dremio_cloud: True                   # Set to True for Dremio Cloud instances
    dremio_cloud_project_id: <project_id>   # Provide the Project ID for Dremio Cloud

    # Enable query lineage tracking
    include_query_lineage: True

    # Optional
    source_mappings:
      - platform: s3
        source_name: samples

    # Optional
    schema_pattern:
      allow:
        - "<source_name>.<table_name>"

sink:
  # Define your sink configuration here
```

Code Coordinates

  • Class Name: datahub.ingestion.source.dremio.dremio_source.DremioSource
  • Browse on GitHub

Questions

If you've got any questions on configuring ingestion for Dremio, feel free to ping us on our Slack.