Bobsled data types
- Updated on 18 Jul 2024
Overview: Schema Inference and Mapping
Data types are not identical across file formats and databases. When transferring structured data from a source to a destination, Bobsled reads the columns and data types of the source data and maps them accurately to the destination.
When Bobsled reads data from a source, the schema is inferred based on the file format and data source.
For self-describing file formats such as Parquet, Bobsled reads the schema directly from the files. The same is true of data from a data-warehouse source.
For formats such as CSV and JSON, Bobsled auto-infers the schema.
This schema is represented using Bobsled's internal data types. These data types provide an interchange layer between the various sources and destinations and help providers understand how data will be delivered to downstream destinations.
Providers should therefore familiarize themselves with the Bobsled data types and use this documentation to understand how they drive destination data types.
![Schema inference](http://cdn.document360.io/87a415ff-4d6a-4c0c-adbb-26c563a2d9a8/Images/Documentation/sschema%20inferences(1).png)
Example of Bobsled schema inference and internal mapping.
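For CSV data, auto-inference amounts to sampling each column's values and choosing the narrowest type that fits, widening to a string type when nothing else matches. The sketch below is a hypothetical illustration of that idea, not Bobsled's actual implementation; the `infer_type` and `infer_schema` helpers are ours.

```python
# Hypothetical sketch of CSV schema inference: sample each column's values
# and pick the narrowest type that fits, falling back to STRING.
import csv
import io

def infer_type(values):
    """Return an inferred type name for a column's sampled values."""
    def fits(parse):
        try:
            for v in values:
                if v != "":  # empty cells are treated as NULL
                    parse(v)
            return True
        except ValueError:
            return False

    if all(v.upper() in ("TRUE", "FALSE", "") for v in values):
        return "BOOLEAN"
    if fits(int):
        return "INTEGER"
    if fits(float):
        return "DOUBLE"
    return "STRING"

def infer_schema(csv_text):
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, data = rows[0], rows[1:]
    return {
        name: infer_type([r[i] for r in data])
        for i, name in enumerate(header)
    }

schema = infer_schema("id,price,active,city\n1,9.99,true,Oslo\n2,12,false,Bergen\n")
# schema == {'id': 'INTEGER', 'price': 'DOUBLE', 'active': 'BOOLEAN', 'city': 'STRING'}
```

A real inference pass would also sample only a bounded number of rows and recognize dates and timestamps; the fallback-to-string behavior is the essential idea.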
Bobsled Data Types
Bobsled data types include all the well-known SQL data types.
Primitive Data Types
Data Type | Explanation |
---|---|
|  | Variable-length binary data. |
|  | TRUE, FALSE, or NULL. |
|  | Fixed-point decimal number. The precision and scale are limited by the source and destination systems or file formats. See the sections on mappings for details. |
|  | Double-precision (64-bit) floating-point number. |
|  | Parameterized floating point. Can represent either a 32-bit or 64-bit floating-point number. Currently supported only when transferring data from Parquet to Databricks. |
|  | INTEGER data type. The range of values depends on the source and destination systems or file formats. |
|  | Parameterized integer data type. Represents 1-, 2-, 4-, and 8-byte integers. Currently supported only when transferring data from Parquet to Databricks. |
|  | UTF-8 encoded string of varying length. |
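The precision-and-scale constraint on fixed-point decimals can be made concrete with Python's `decimal` module. This is an illustrative sketch of what it means for a value to fit a `DECIMAL(p, s)` column; `fits_decimal` is our own helper, not a Bobsled API.

```python
# Illustration of fixed-point decimal semantics: a DECIMAL(p, s) value has
# at most p total digits, s of them after the decimal point. Destinations
# with a smaller maximum precision constrain what can be transferred.
from decimal import Decimal

def fits_decimal(value, precision, scale):
    """Check whether a Decimal fits in DECIMAL(precision, scale) without rounding."""
    sign, digits, exponent = value.as_tuple()
    frac_digits = max(0, -exponent)               # digits after the decimal point
    int_digits = max(0, len(digits) + exponent)   # digits before it
    return frac_digits <= scale and int_digits <= precision - scale

print(fits_decimal(Decimal("123.45"), 5, 2))   # True
print(fits_decimal(Decimal("123.456"), 5, 2))  # False: needs scale 3
print(fits_decimal(Decimal("1234.5"), 5, 2))   # False: 4 integer digits, only 3 allowed
```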
Date and Time
Data Type | Explanation |
---|---|
|  | SQL DATE in the Gregorian calendar. |
|  | A wall-clock TIME value (e.g. 10:23) irrespective of any time zone. |
|  | A wall-clock TIME value (e.g. 10:23 CET) in a specific time zone. |
|  | A wall-clock date-time value (e.g. 2024-04-01 10:23) irrespective of any time zone. Equivalent to DATETIME in some SQL dialects. |
|  | A specific point in time (e.g. 2024-04-01 10:23 UTC). |
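The distinction between the zone-less and zoned variants mirrors naive vs. aware datetimes in Python. The short example below is our own comparison, not part of Bobsled's API:

```python
# A wall-clock value carries no time zone; a point in time does.
from datetime import datetime, timezone

wall_clock = datetime(2024, 4, 1, 10, 23)                    # zone-less, like a *_NTZ type
instant = datetime(2024, 4, 1, 10, 23, tzinfo=timezone.utc)  # zoned, like a *_TZ type

print(wall_clock.tzinfo)    # None: no zone attached
print(instant.isoformat())  # 2024-04-01T10:23:00+00:00
```

The practical consequence: two zone-less values can only be compared as wall-clock readings, while zoned values identify an absolute instant.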
Geospatial Data Types
Data Type | Explanation |
---|---|
|  | GEOGRAPHY SQL data type as per the WGS 84 standard. |
|  | GEOMETRY SQL data type as per the OpenGIS Simple Features Specification. |
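Both types are commonly written as well-known text (WKT) literals, the textual form defined by the Simple Features specification, with coordinates in x y (longitude latitude) order. A minimal illustration; the helper name is ours, not a Bobsled API:

```python
# Build a WKT (well-known text) literal for a point, the textual form
# defined by the OpenGIS Simple Features specification.
def to_wkt_point(longitude, latitude):
    return f"POINT({longitude} {latitude})"

print(to_wkt_point(10.75, 59.91))  # POINT(10.75 59.91)
```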
Complex Data Types
Data Type | Explanation |
---|---|
|  | Variable-length list of elements of simple data types. All elements must be of the same data type. |
|  | COMPLEX is a notional data type indicating that the contained element is an array, struct, or map. |
|  | JSON-formatted string. Also used for nested types such as structs and maps. |
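The two rules above (homogeneous arrays, and nested structs or maps delivered as JSON-formatted strings) can be illustrated in a few lines of Python; the `is_valid_array` helper is ours, not a Bobsled API:

```python
# Homogeneous arrays, and nested data serialized as a JSON string.
import json

def is_valid_array(values):
    """True if all elements share one type, as a homogeneous ARRAY requires."""
    return len({type(v) for v in values}) <= 1

print(is_valid_array([1, 2, 3]))   # True
print(is_valid_array([1, "two"]))  # False: mixed element types

# A nested struct delivered as a JSON-formatted string:
record = {"name": "sensor-1", "readings": [0.1, 0.2]}
payload = json.dumps(record)
print(payload)  # {"name": "sensor-1", "readings": [0.1, 0.2]}
```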
Mapping into Bobsled Data Types
Snowflake → Bobsled
Snowflake | Bobsled Data Type |
---|---|
BigQuery → Bobsled
BigQuery | Bobsled Type |
---|---|
Parquet/Delta-Lake → Bobsled
Parquet | Bobsled Type |
---|---|
|  | Not Supported |
CSV/JSON → Bobsled
CSV | Bobsled Type |
---|---|
Bobsled Types to Destinations Mapping
Bobsled → Snowflake
Bobsled Type | Snowflake |
---|---|
Bobsled → BigQuery
Bobsled Type | BigQuery |
---|---|
Bobsled → Databricks
Bobsled Type | Databricks |
---|---|
|  | Cannot be mapped. A source Parquet file containing TIME_NTZ cannot be loaded to Databricks. |
Bobsled → Parquet
Parquet mappings depend on the source of the data, as Bobsled relies on the Parquet writers of the source systems.
Bobsled Type | Snowflake → Parquet | BigQuery → Parquet |
---|---|---|