About the Snowflake Connector for MySQL¶
Note
The Snowflake Connector for MySQL is subject to the Connector Terms.
The Snowflake Connector for MySQL allows you to:
Load data into Snowflake from a MySQL database.
Configure replication so that changes in your MySQL database are replicated to Snowflake.
To handle connections between Snowflake and MySQL, the connector uses an agent. The agent is distributed as a Docker image. The agent is run within your network and is used to push data into your Snowflake account.
Note
The Snowflake Connector for MySQL requires exactly one instance of the agent application to be running at all times.
The ongoing incremental updates use the Change Data Capture (CDC) technique that captures changes performed on the source database. The changes include INSERT, UPDATE, and DELETE operations, which are automatically replicated on the destination database in Snowflake.
Multiple application instances¶
You can install multiple instances of the same connector application on your Snowflake account.
Private links¶
The Snowflake Connector for MySQL supports private links.
Agent and connector app compatibility¶
The Snowflake Connector for MySQL is released against a specific version, described as an x.y.z version, where x is the major, y the minor, and z the patch version. Agents on Docker Hub are also released with X.Y.Z versions. Each x.y.z version of the Snowflake Connector for MySQL supports all agents with the same major version (X = x) and a minor version no greater than the connector's (Y ≤ y). Additionally, each x.0.0 version of the Snowflake Connector for MySQL supports all (x-1).Y.Z versions of the agent, for all Y and Z.
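The compatibility rule above can be expressed as a small version check. The following is an illustrative sketch, not part of the connector or its API; the function name and version strings are hypothetical:

```python
def agent_supported(connector: str, agent: str) -> bool:
    """Return True if an agent version is supported by a connector version,
    per the compatibility rule described above (illustrative sketch)."""
    cx, cy, cz = (int(p) for p in connector.split("."))
    ax, ay, az = (int(p) for p in agent.split("."))
    # Same major version: the agent's minor version must not exceed
    # the connector's minor version.
    if ax == cx and ay <= cy:
        return True
    # An x.0.0 connector additionally supports every (x-1).Y.Z agent.
    if (cy, cz) == (0, 0) and ax == cx - 1:
        return True
    return False

print(agent_supported("2.1.3", "2.0.5"))  # same major, lower minor -> True
print(agent_supported("2.1.3", "2.2.0"))  # agent minor too high -> False
print(agent_supported("3.0.0", "2.7.9"))  # x.0.0 supports previous major -> True
```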
Known limitations¶
The following sections describe the known limitations for the connector.
Maximum number of tables¶
The connector works well with up to 200 source tables added to the replication. Adding more tables may cause the connector to become unstable.
Transaction size¶
The connector is subject to the same limitations as MySQL's group replication. This means that a single transaction must fit into a binary log message of no more than 4 GB. Transactions exceeding this size cause the source table to be marked as permanently failed and require a full snapshot reload of the affected table.
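One common way to stay under this limit is to split large bulk operations on the source into several smaller transactions. The following batching helper is a hypothetical sketch, not part of the connector; batch sizes must be tuned to your row sizes so each transaction's binlog footprint stays well below 4 GB:

```python
def batched(ids, batch_size):
    """Yield successive slices of primary-key values, so that each
    bulk DELETE/UPDATE can run in its own, smaller transaction
    (hypothetical helper, not part of the connector)."""
    for start in range(0, len(ids), batch_size):
        yield ids[start:start + batch_size]

# Each batch would then be issued on the source as its own transaction,
# e.g. DELETE FROM t WHERE id IN (...); COMMIT;
for batch in batched(list(range(10)), 4):
    print(batch)  # [0, 1, 2, 3], then [4, 5, 6, 7], then [8, 9]
```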
Connector availability¶
When installing the connector, note the following limitations:
Accounts in government regions are not supported.
To install and configure the connector, you must be logged in as a user with the ACCOUNTADMIN role. Other roles are not supported at this time.
Types compatibility¶
Due to differences between source database column types and Snowflake column types, some values cannot be converted and written into Snowflake because of maximum column capacities or allowed ranges. For example:

- The Snowflake BINARY type has a maximum length of 8 MB (8,388,608 bytes).
- Snowflake date types, such as DATE, DATETIME, and TIMESTAMP, have a maximum year of 9999.
- The Snowflake VARCHAR type has a maximum length of 16 MB (16,777,216 bytes).

If such an incompatibility occurs, replication of the affected table stops with a failure.
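These limits can be checked with a simple pre-flight validation before adding a table to replication. The following is an illustrative sketch, not part of the connector; the function name and the assumption that values are inspected in Python are hypothetical:

```python
from datetime import date

# Snowflake limits cited above (used here for an illustrative check only).
MAX_BINARY_BYTES = 8 * 1024 * 1024    # 8,388,608 bytes
MAX_VARCHAR_BYTES = 16 * 1024 * 1024  # 16,777,216 bytes
MAX_YEAR = 9999

def fits_snowflake(value) -> bool:
    """Return True if the value fits within the Snowflake limits above."""
    if isinstance(value, bytes):
        return len(value) <= MAX_BINARY_BYTES
    if isinstance(value, str):
        return len(value.encode("utf-8")) <= MAX_VARCHAR_BYTES
    if isinstance(value, date):
        return value.year <= MAX_YEAR
    return True

print(fits_snowflake(b"\x00" * MAX_BINARY_BYTES))     # exactly at the limit -> True
print(fits_snowflake("x" * (MAX_VARCHAR_BYTES + 1)))  # one byte over -> False
print(fits_snowflake(date(9999, 12, 31)))             # max year -> True
```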
Source table schema changes¶
The following table shows different types of changes to the source table schema and whether they are supported, and if so how.
New column names are subject to the same limitations as described in the Identifiers limitations section.
| Type of schema change | Supported? | Notes |
|---|---|---|
| Adding a new column | Yes | The new column will be visible in the destination table just like any other column that existed at the start of the replication. It is not possible to add a new column with the same name as a previously deleted or renamed column. |
| Deleting an existing column | Yes | If a column is deleted in the source table, it is not deleted in the destination table. Instead, a soft-delete approach is followed: the column is renamed with the __SNOWFLAKE_DELETED suffix. |
| Renaming a column | Yes | Renaming a column is equivalent to deleting the column and creating a new one with the new name. The deletion follows the soft-delete approach described above. It is not possible to rename a column to the same name as a previously deleted or renamed column. |
| Changing the type of a column | Partially | Changing the type of a source table column is only possible if both the previous and the new type map to the same type in Snowflake. In any other case, replication fails permanently. |
| Changing the precision of a numeric column | No | Changing the precision of a source table column causes replication to fail permanently. |
| Changing the scale of a numeric column | No | Changing the scale of a source table column causes replication to fail permanently. |
| Changing the primary key definition | No | Changing the primary key definition of the source table causes replication to fail permanently. |
High-capacity columns¶
An active agent continuously reads all events from the binary log, even if some events refer to source tables that were not added for replication. If the binary log contains very large events, such as updates of BLOB-like columns, the agent might crash due to lack of available memory.
Primary keys¶
Tables without primary keys are not supported.
Identifiers limitations¶
Currently, the connector does not support the " character in replicated schema, table, or column names.
Additionally, the following keywords are not supported:
- For schema names:
  - INFORMATION_SCHEMA
- For column names:
  - _SNOWFLAKE_INSERTED_AT
  - _SNOWFLAKE_UPDATED_AT
  - _SNOWFLAKE_DELETED
  - names with the __SNOWFLAKE_DELETED suffix
  - column names marked as "Cannot be used as column name" in Snowflake Reserved and Limited Keywords
MySQL version >= 8.0.0¶
Currently, the connector depends on the binlog_row_metadata = FULL configuration property, which was introduced in MySQL 8.0.
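On MySQL 8.0, this property can be set at runtime. The following is a sketch, assuming you have the privileges required to change global system variables on your server:

```sql
-- Persist the setting across server restarts
-- (requires the SYSTEM_VARIABLES_ADMIN or SUPER privilege).
SET PERSIST binlog_row_metadata = 'FULL';

-- Verify the current value.
SHOW VARIABLES LIKE 'binlog_row_metadata';
```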