docs/integration-services/load-data-to-sql-data-warehouse.md
17 additions & 19 deletions
@@ -1,6 +1,6 @@
---
-title: Load data into Azure Synapse Analytics with SQL Server Integration Services (SSIS) | Microsoft Docs
-description: Shows you how to create a SQL Server Integration Services (SSIS) package to move data from a wide variety of data sources to Azure Synapse Analytics.
+title: Load data into Azure Synapse Analytics with SQL Server Integration Services (SSIS)
+description: Shows you how to create a SQL Server Integration Services (SSIS) package to move data from a wide variety of data sources into a dedicated SQL pool in Azure Synapse Analytics.
documentationcenter: NA
ms.prod: sql
ms.prod_service: "integration-services"
@@ -11,13 +11,11 @@ ms.date: 08/09/2018
ms.author: chugu
author: chugugrace
---
-# Load data into Azure Synapse Analytics with SQL Server Integration Services (SSIS)
+# Load data into a dedicated SQL pool in Azure Synapse Analytics with SQL Server Integration Services (SSIS)
-Create a SQL Server Integration Services (SSIS) package to load data into [Azure Synapse Analytics](/azure/sql-data-warehouse/index). You can optionally restructure, transform, and cleanse the data as it passes through the SSIS data flow.
+Create a SQL Server Integration Services (SSIS) package to load data into a dedicated SQL pool in [Azure Synapse Analytics](/azure/sql-data-warehouse/index). You can optionally restructure, transform, and cleanse the data as it passes through the SSIS data flow.
This article shows you how to do the following things:
@@ -35,8 +33,8 @@ A detailed introduction to SSIS is beyond the scope of this article. To learn mo
-[How to Create an ETL Package](ssis-how-to-create-an-etl-package.md)
-## Options for loading data into SQL Data Warehouse with SSIS
-SQL Server Integration Services (SSIS) is a flexible set of tools that provides a variety of options for connecting to, and loading data into, SQL Data Warehouse.
+## Options for loading data into Azure Synapse Analytics with SSIS
+SQL Server Integration Services (SSIS) is a flexible set of tools that provides a variety of options for connecting to, and loading data into, Azure Synapse Analytics.
1. The preferred method, which provides the best performance, is to create a package that uses the [Azure SQL DW Upload Task](control-flow/azure-sql-dw-upload-task.md) to load the data. This task encapsulates both source and destination information. It assumes that your source data is stored locally in delimited text files.
@@ -48,7 +46,7 @@ To step through this tutorial, you need the following things:
1. **SQL Server Integration Services (SSIS)**. SSIS is a component of SQL Server and requires a licensed version, or the developer or evaluation version, of SQL Server. To get an evaluation version of SQL Server, see [Evaluate SQL Server](https://www.microsoft.com/evalcenter/evaluate-sql-server-2017-rtm).
2. **Visual Studio** (optional). To get the free Visual Studio Community Edition, see [Visual Studio Community][Visual Studio Community]. If you don't want to install Visual Studio, you can install SQL Server Data Tools (SSDT) only. SSDT installs a version of Visual Studio with limited functionality.
3. **SQL Server Data Tools for Visual Studio (SSDT)**. To get SQL Server Data Tools for Visual Studio, see [Download SQL Server Data Tools (SSDT)][Download SQL Server Data Tools (SSDT)].
-4. **An Azure Synapse Analytics database and permissions**. This tutorial connects to a SQL Data Warehouse instance and loads data into it. You have to have permission to connect, to create a table, and to load data.
+4. **An Azure Synapse Analytics database and permissions**. This tutorial connects to a dedicated SQL pool in Azure Synapse Analytics and loads data into it. You have to have permission to connect, to create a table, and to load data.
## Create a new Integration Services project
1. Launch Visual Studio.
@@ -74,7 +72,7 @@ To continue the tutorial with this option, you need the following things:
- The [Microsoft SQL Server Integration Services Feature Pack for Azure][Microsoft SQL Server 2017 Integration Services Feature Pack for Azure]. The SQL DW Upload task is a component of the Feature Pack.
-- An [Azure Blob Storage](/azure/storage/) account. The SQL DW Upload task loads data from Azure Blob Storage into Azure Synapse Analytics. You can load files that are already in Blob Storage, or you can load files from your computer. If you select files on your computer, the SQL DW Upload task uploads them to Blob Storage first for staging, and then loads them into SQL Data Warehouse.
+- An [Azure Blob Storage](https://docs.microsoft.com/azure/storage/) account. The SQL DW Upload task loads data from Azure Blob Storage into Azure Synapse Analytics. You can load files that are already in Blob Storage, or you can load files from your computer. If you select files on your computer, the SQL DW Upload task uploads them to Blob Storage first for staging, and then loads them into your dedicated SQL pool.
### Add and configure the SQL DW Upload Task
@@ -92,25 +90,25 @@ For more control, you can manually create a package that emulates the work done
1. Use the Azure Blob Upload Task to stage the data in Azure Blob Storage. To get the Azure Blob Upload task, download the [Microsoft SQL Server Integration Services Feature Pack for Azure][Microsoft SQL Server 2017 Integration Services Feature Pack for Azure].
-2. Then use the SSIS Execute SQL task to launch a PolyBase script that loads the data into SQL Data Warehouse. For an example that loads data from Azure Blob Storage into SQL Data Warehouse (but not with SSIS), see [Tutorial: Load data to Azure Synapse Analytics](/azure/sql-data-warehouse/load-data-wideworldimportersdw).
+2. Then use the SSIS Execute SQL task to launch a PolyBase script that loads the data into your dedicated SQL pool. For an example that loads data from Azure Blob Storage into a dedicated SQL pool (but not with SSIS), see [Tutorial: Load data to Azure Synapse Analytics](/azure/sql-data-warehouse/load-data-wideworldimportersdw).
## Option 2 - Use a source and destination
The second approach is a typical package which uses a Data Flow task that contains a source and a destination. This approach supports a wide range of data sources, including SQL Server and Azure Synapse Analytics.
This tutorial uses SQL Server as the data source. SQL Server runs on premises or on an Azure virtual machine.
-To connect to SQL Server and to SQL Data Warehouse, you can use an ADO.NET connection manager and source and destination, or an OLE DB connection manager and source and destination. This tutorial uses ADO.NET because it has the fewest configuration options. OLE DB may provide slightly better performance than ADO.NET.
+To connect to SQL Server and to a dedicated SQL pool, you can use an ADO.NET connection manager and source and destination, or an OLE DB connection manager and source and destination. This tutorial uses ADO.NET because it has the fewest configuration options. OLE DB may provide slightly better performance than ADO.NET.
As a shortcut, you can use the SQL Server Import and Export Wizard to create the basic package. Then, save the package, and open it in Visual Studio or SSDT to view and customize it. For more info, see [Import and Export Data with the SQL Server Import and Export Wizard](import-export-data/import-and-export-data-with-the-sql-server-import-and-export-wizard.md).
### Prerequisites for Option 2
To continue the tutorial with this option, you need the following things:
-1. **Sample data**. This tutorial uses sample data stored in SQL Server in the AdventureWorks sample database as the source data to be loaded into SQL Data Warehouse. To get the AdventureWorks sample database, see [AdventureWorks Sample Databases][AdventureWorks 2014 Sample Databases].
+1. **Sample data**. This tutorial uses sample data stored in SQL Server in the AdventureWorks sample database as the source data to be loaded into a dedicated SQL pool. To get the AdventureWorks sample database, see [AdventureWorks Sample Databases][AdventureWorks 2014 Sample Databases].
-2. **A firewall rule**. You have to create a firewall rule on SQL Data Warehouse with the IP address of your local computer before you can upload data to the SQL Data Warehouse.
+2. **A firewall rule**. You have to create a firewall rule on your dedicated SQL pool with the IP address of your local computer before you can upload data to the dedicated SQL pool.
### Create the basic data flow
1. Drag a Data Flow Task from the Toolbox to the center of the design surface (on the **Control Flow** tab).
@@ -169,9 +167,9 @@ To continue the tutorial with this option, you need the following things:
3. In the **Configure ADO.NET Connection Manager** dialog box, click the **New** button to open the **Connection Manager** dialog box and create a new data connection.
4. In the **Connection Manager** dialog box, do the following things.
1. For **Provider**, select the SqlClient Data Provider.
-2. For **Server name**, enter the SQL Data Warehouse name.
+2. For **Server name**, enter the dedicated SQL pool name.
3. In the **Log on to the server** section, select **Use SQL Server authentication** and enter authentication information.
-4. In the **Connect to a database** section, select an existing SQL Data Warehouse database.
+4. In the **Connect to a database** section, select an existing dedicated SQL pool database.
5. Click **Test Connection**.
6. In the dialog box that reports the results of the connection test, click **OK** to return to the **Connection Manager** dialog box.
7. In the **Connection Manager** dialog box, click **OK** to return to the **Configure ADO.NET Connection Manager** dialog box.
@@ -182,8 +180,8 @@ To continue the tutorial with this option, you need the following things:
7. In the **Create Table** dialog box, do the following things.
1. Change the name of the destination table to **SalesOrderDetail**.
-2. Remove the **rowguid** column. The **uniqueidentifier** data type is not supported in SQL Data Warehouse.
-3. Change the data type of the **LineTotal** column to **money**. The **decimal** data type is not supported in SQL Data Warehouse. For info about supported data types, see [CREATE TABLE (Azure Synapse Analytics, Parallel Data Warehouse)][CREATE TABLE (Azure Synapse Analytics, Parallel Data Warehouse)].
+2. Remove the **rowguid** column. The **uniqueidentifier** data type is not supported in dedicated SQL pool.
+3. Change the data type of the **LineTotal** column to **money**. The **decimal** data type is not supported in dedicated SQL pool. For info about supported data types, see [CREATE TABLE (Azure Synapse Analytics, Parallel Data Warehouse)][CREATE TABLE (Azure Synapse Analytics, Parallel Data Warehouse)].
![Screenshot of the Create Table dialog box, with code to create a table named SalesOrderDetail with LineTotal as a money column and no rowguid column.][12b]
4. Click **OK** to create the table and return to the **ADO.NET Destination Editor**.
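The DDL that the **Create Table** dialog generates (and that the screenshot above illustrates) isn't reproduced as text in the article. A trimmed-down sketch of the adjusted destination table (only representative columns are shown, and the distribution clause is an assumption rather than part of the dialog's output) looks like this:

```sql
-- Destination table in the dedicated SQL pool: no rowguid column, LineTotal stored as money.
CREATE TABLE dbo.SalesOrderDetail
(
    SalesOrderID       int      NOT NULL,
    SalesOrderDetailID int      NOT NULL,
    OrderQty           smallint NOT NULL,
    ProductID          int      NOT NULL,
    UnitPrice          money    NOT NULL,
    UnitPriceDiscount  money    NOT NULL,
    LineTotal          money    NOT NULL,   -- changed from decimal per step 3
    ModifiedDate       datetime NOT NULL
)
WITH (DISTRIBUTION = ROUND_ROBIN);           -- illustrative; ROUND_ROBIN is the default distribution
```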
docs/integration-services/what-s-new-in-integration-services-in-sql-server-2016.md
1 addition & 1 deletion
@@ -305,7 +305,7 @@ The latest version of the Azure Feature Pack includes a connection manager, sour
#### <a name="sqldwupload"></a> Support for Azure Synapse Analytics released
-The latest version of the Azure Feature Pack includes the Azure SQL DW Upload task for populating SQL Data Warehouse with data. For more info, see [Azure Feature Pack for Integration Services (SSIS)](../integration-services/azure-feature-pack-for-integration-services-ssis.md)
+The latest version of the Azure Feature Pack includes the Azure SQL DW Upload task for populating Azure Synapse Analytics with data. For more info, see [Azure Feature Pack for Integration Services (SSIS)](../integration-services/azure-feature-pack-for-integration-services-ssis.md)
|database_id|**int**|Identifier of database used by explicit context (for example, USE DB_X).|See ID in [sys.databases (Transact-SQL)](../../relational-databases/system-catalog-views/sys-databases-transact-sql.md).|
|command|**nvarchar(4000)**|Holds the full text of the request as submitted by the user.|Any valid query or request text. Queries that are longer than 4000 bytes are truncated.|
|resource_class|**nvarchar(20)**|The workload group used for this request. |Static Resource Classes</br>staticrc10</br>staticrc20</br>staticrc30</br>staticrc40</br>staticrc50</br>staticrc60</br>staticrc70</br>staticrc80</br> </br>Dynamic Resource Classes</br>SmallRC</br>MediumRC</br>LargeRC</br>XLargeRC|
-|importance|**nvarchar(128)**|The Importance setting the request executed at. This is the relative importance of a request in this workload group and across workload groups for shared resources. Importance specified in the classifier overrides the workload group importance setting.</br>Applies to: Azure SQL Data Warehouse|NULL</br>low</br>below_normal</br>normal (default)</br>above_normal</br>high|
-|group_name|**sysname**|For requests utilizing resources, group_name is the name of the workload group the request is running under. If the request does not utilize resources, group_name is null.</br>Applies to: Azure SQL Data Warehouse|
+|importance|**nvarchar(128)**|The Importance setting the request executed at. This is the relative importance of a request in this workload group and across workload groups for shared resources. Importance specified in the classifier overrides the workload group importance setting.</br>Applies to: Azure Synapse Analytics|NULL</br>low</br>below_normal</br>normal (default)</br>above_normal</br>high|
+|group_name|**sysname**|For requests utilizing resources, group_name is the name of the workload group the request is running under. If the request does not utilize resources, group_name is null.</br>Applies to: Azure Synapse Analytics|
|classifier_name|**sysname**|For requests utilizing resources, The name of the classifier used for assigning resources and importance.||
-|resource_allocation_percentage|**decimal(5,2)**|The percentage amount of resources allocated to the request.</br>Applies to: Azure SQL Data Warehouse|
-|result_cache_hit|**int**|Details whether a completed query used result set cache. </br>Applies to: Azure SQL Data Warehouse| 1 = Result set cache hit </br> 0 = Result set cache miss </br> Negative integer values = Reasons why result set caching was not used. See remarks section for details.|
+|resource_allocation_percentage|**decimal(5,2)**|The percentage amount of resources allocated to the request.</br>Applies to: Azure Synapse Analytics|
+|result_cache_hit|**int**|Details whether a completed query used result set cache. </br>Applies to: Azure Synapse Analytics| 1 = Result set cache hit </br> 0 = Result set cache miss </br> Negative integer values = Reasons why result set caching was not used. See remarks section for details.|
|client_correlation_id|**nvarchar(255)**|Optional user-defined name for a client session. To set for a session, call sp_set_session_context 'client_correlation_id', '<CorrelationIDName>'. Run `SELECT SESSION_CONTEXT(N'client_correlation_id')` to retrieve its value.|
||||
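Taken together, these columns make it straightforward to tag a session and then find its requests. A short sketch, assuming a connection to a dedicated SQL pool (the correlation value is a placeholder):

```sql
-- Tag the current session so its requests are easy to find later.
EXEC sp_set_session_context 'client_correlation_id', 'nightly-load-001';

-- Confirm the tag for the current session.
SELECT SESSION_CONTEXT(N'client_correlation_id') AS client_correlation_id;

-- Inspect recent requests: workload group, importance, and result set cache usage.
SELECT request_id, command, resource_class, group_name, importance,
       result_cache_hit, client_correlation_id
FROM sys.dm_pdw_exec_requests
ORDER BY submit_time DESC;
```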
@@ -75,4 +75,4 @@ The negative integer value in the result_cache_hit column is a bitmap value of a
## See Also
-[SQL Data Warehouse and Parallel Data Warehouse Dynamic Management Views (Transact-SQL)](../../relational-databases/system-dynamic-management-views/sql-and-parallel-data-warehouse-dynamic-management-views.md)
+[Azure Synapse Analytics and Parallel Data Warehouse Dynamic Management Views (Transact-SQL)](../../relational-databases/system-dynamic-management-views/sql-and-parallel-data-warehouse-dynamic-management-views.md)
|**software_major_version**|**tinyint**|[!INCLUDE[msCoName](../../includes/msconame-md.md)][!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] major version number. Can be NULL.|
|**software_minor_version**|**tinyint**|[!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] minor version number. Can be NULL.|
|**software_build_version**|**smallint**|[!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] build number. Can be NULL.|
-|**time_zone**|**smallint**|Difference between local time (where the backup operation is taking place) and Coordinated Universal Time (UTC) in 15-minute intervals at the time the backup operation started. Values can be -48 through +48, inclusive. A value of 127 indicates unknown. For example, -20 is Eastern Standard Time (EST) or five hours after UTC. Can be NULL.|
+|**time_zone**|**smallint**|Difference between local time (where the backup operation is taking place) and Coordinated Universal Time (UTC) in 15-minute intervals using the time zone information at the time the backup operation started. Values can be -48 through +48, inclusive. A value of 127 indicates unknown. For example, -20 is Eastern Standard Time (EST) or five hours after UTC. Can be NULL.|
|**mtf_minor_version**|**tinyint**|[!INCLUDE[msCoName](../../includes/msconame-md.md)] Tape Format minor version number. Can be NULL.|
|**first_lsn**|**numeric(25,0)**|Log sequence number of the first or oldest log record in the backup set. Can be NULL.|
|**last_lsn**|**numeric(25,0)**|Log sequence number of the next log record after the backup set. Can be NULL.|
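Since **time_zone** is stored in 15-minute units, multiplying it by 15 gives the offset from UTC in minutes at the time the backup started; -20, for example, works out to -300 minutes, the UTC-5 offset of Eastern Standard Time. A small sketch, assuming these columns belong to **msdb.dbo.backupset**:

```sql
-- Convert the stored 15-minute units into an offset from UTC.
SELECT  database_name,
        backup_start_date,
        time_zone,
        time_zone * 15        AS offset_minutes,   -- e.g. -20 -> -300 minutes
        time_zone * 15 / 60.0 AS offset_hours      -- e.g. -20 -> -5.0 hours
FROM msdb.dbo.backupset
WHERE time_zone IS NOT NULL
  AND time_zone <> 127;                            -- 127 means the offset is unknown
```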