Bulk insert

Lock Configurations with SQL Bulk Insert

May 11, 2020 by Timothy Smith

One challenge we may face when using SQL bulk insert is deciding whether to allow access during the operation or to prevent it, and how to coordinate that choice with any transactions that follow. We’ll look at working with a few configurations of this tool and how we may apply them in OLAP, OLTP, and mixed environments where we want to use the tool’s flexibility for our data import needs.
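
For a sense of the lock configurations in play, here is a minimal T-SQL sketch (the table name and file path are hypothetical placeholders):

    -- TABLOCK takes a bulk update lock on the whole table: other sessions
    -- wait until the load finishes, but the load itself runs faster and
    -- can qualify for minimal logging under the right recovery model.
    BULK INSERT dbo.EtlImport
    FROM 'C:\Imports\etlimport.csv'
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR   = '\n',
        TABLOCK
    );

Omitting TABLOCK keeps the table more available to concurrent sessions, at the cost of load speed and extra coordination with any follow-on transactions.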

Read more »

Dynamic column mapping in SSIS: SqlBulkCopy class vs Data Flow

February 14, 2020 by Sifiso Ndlovu

The Data Flow Task is an essential component in SQL Server Integration Services (SSIS), as it gives SSIS ETL developers the ability to conveniently extract data from various data sources; perform basic, fuzzy, and advanced data transformations; and migrate data into all kinds of data repository systems. Yet, for all its popularity and convenience, there are instances in which the Data Flow Task is simply not good enough, and I recently experienced such limitations firsthand. To demonstrate some of the limitations of SSIS’s Data Flow Task, I have put together a random list of the Premier League’s leading goal scorers for the 2019-2020 season.

Read more »

SQL Bulk Insert Concurrency and Performance Considerations

January 18, 2019 by Timothy Smith

One of the challenges we face when using SQL bulk insert from flat files is concurrency and performance, especially if the load involves a multi-step data flow where we can’t execute a later step until we finish an earlier one. We see similar optimization challenges with constraints: fewer steps to complete a data flow saves time, but possibly at the cost of less accurate data.
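
To make the constraint trade-off concrete, a hedged sketch (the table name and file path are hypothetical):

    -- By default, BULK INSERT skips CHECK and FOREIGN KEY constraints;
    -- adding CHECK_CONSTRAINTS validates rows during the load instead,
    -- trading speed for accuracy in a single step.
    BULK INSERT dbo.EtlImport
    FROM 'C:\Imports\etlimport.csv'
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR   = '\n',
        CHECK_CONSTRAINTS
    );

    -- If constraints were skipped for speed, revalidate them afterward
    -- so they are marked trusted again:
    ALTER TABLE dbo.EtlImport WITH CHECK CHECK CONSTRAINT ALL;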

Read more »

Working With Line Numbers and Errors Using Bulk Insert

January 10, 2019 by Timothy Smith

In the first part of reviewing the basics of bulk insert, we looked at importing entire files, specifying delimiters for rows and columns, and bypassing error messages. Sometimes we’ll want to skip the first and last lines, log errors and bad records for review after inserting the data, and work with data types directly rather than importing everything as varchar and converting it later. In this part, we look at these techniques using T-SQL’s native bulk insert.
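
As a hedged preview of those options (the table name, path, and row numbers are illustrative):

    -- FIRSTROW and LASTROW skip header and trailer lines; MAXERRORS
    -- tolerates a handful of bad records; ERRORFILE writes the rejected
    -- rows out for review after the load.
    BULK INSERT dbo.EtlImport
    FROM 'C:\Imports\etlimport.csv'
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR   = '\n',
        FIRSTROW  = 2,
        LASTROW   = 100000,
        MAXERRORS = 10,
        ERRORFILE = 'C:\Imports\etlimport_errors.log'
    );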

T-SQL’s Native Bulk Insert Basics

January 8, 2019 by Timothy Smith

From troubleshooting many data flow applications designed by others, I’ve seen a common pattern of over-complexity in many designs. Putting aside the risks introduced by too much complexity, troubleshooting these designs often involves opening many different applications – from a notepad file, to SSIS, to SQL Server Management Studio, to a script tool, and so on. It may sound like these applications are performing a hundred steps, yet many times they’re simply importing data from a file, or calling five stored procedures and then moving a file. This complexity is often unnecessary, as is opening many different tools when we can use a few and solve issues faster.
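
To illustrate the simplicity argument, the basic native import is a single statement (table name and path are hypothetical):

    -- One statement, one tool: no SSIS package or script host required
    -- when the requirement is simply to load a delimited file.
    BULK INSERT dbo.EtlImport
    FROM 'C:\Imports\etlimport.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');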

Read more »

The BCP (Bulk Copy Program) command in action

August 13, 2018 by Prashanth Jayaram

There are various methods available for bulk data operations:

  1. BCP utility
  2. BULK INSERT
  3. Using OPENROWSET
  4. Import/Export wizard

The BCP (Bulk Copy Program) utility is a command-line program that bulk-copies data between a SQL instance and a data file using a special format file. The BCP utility can be used to import large numbers of rows into SQL Server or to export SQL Server data into files. Because BCP data files don’t include any schema details or format information, it is recommended to create a format file to record the data format; in case of any failures, you can refer to the format file to better understand the data format and determine what may have gone wrong.
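
Format files also pay off outside the bcp command itself. As a sketch (both paths are hypothetical), the same format file can drive a T-SQL import through OPENROWSET, one of the methods listed above:

    -- Reuse a format file (for example, one generated with bcp's
    -- "format nul -f" option) to describe the layout of the data file.
    SELECT *
    FROM OPENROWSET(
        BULK 'C:\Imports\etlimport.dat',
        FORMATFILE = 'C:\Imports\etlimport.fmt'
    ) AS source_rows;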

We’ve been using the BCP tool for a long time because it has very low overhead and works great for bulk exporting and importing of data; it remains one of the most efficient ways to handle bulk import and export.

Read more »

6 methods to write PowerShell output to a SQL Server table

December 25, 2017 by Prashanth Jayaram

PowerShell has become the tool of choice for many database administrators because it handles automation in a simple, quick, and efficient way. It’s built on the .NET Framework and uses object models such as COM, ADSI, ADO, and WMI. PowerShell has replaced many of the legacy scripting practices traditionally used to monitor SQL instances.

I’ve been asked on several occasions how to store the output of PowerShell WMI queries in a SQL table. The question comes up so frequently that I decided to write this article.

When sending data within a system (such as a PowerShell object to a cmdlet), the process is straightforward. However, with non-native data interchange (for instance, WMI to SQL), the process can potentially get complicated. Due to this, many purists suggest sticking to simple interchange formats, such as CSV, JSON or in some cases, XML.
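
In that spirit, a hedged sketch of the interchange approach on the T-SQL side (the table and JSON payload are invented for illustration; OPENJSON requires SQL Server 2016+):

    -- PowerShell can serialize WMI output with ConvertTo-Json; T-SQL can
    -- then shred that JSON into rows and insert it into a table.
    CREATE TABLE dbo.DiskSpace (DeviceID nvarchar(10), FreeSpace bigint);

    DECLARE @wmi nvarchar(max) = N'[
      {"DeviceID":"C:","FreeSpace":105226698752},
      {"DeviceID":"D:","FreeSpace":524288000000}
    ]';

    INSERT INTO dbo.DiskSpace (DeviceID, FreeSpace)
    SELECT DeviceID, FreeSpace
    FROM OPENJSON(@wmi)
    WITH (DeviceID nvarchar(10), FreeSpace bigint);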

Read more »

How to use parallel insert in SQL Server 2016 to improve query performance

December 8, 2017 by Esat Erkec

Introduction

In the first part of this article, we will discuss parallelism in the SQL Server Engine. Parallel processing is, simply put, dividing a big task among multiple processors so as to reduce processing time.

  • SQL Server can execute queries in parallel
  • SQL Server creates a path for every query; this path is the execution plan
  • The SQL Server query optimizer creates execution plans
  • The query optimizer decides the most efficient way to build each execution plan

Execution plans are the equivalent of the highways and traffic signs of T-SQL queries: they tell us how a query will be executed.
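
As a hedged preview of the technique the article builds toward (the tables belong to a hypothetical schema):

    -- In SQL Server 2016 (database compatibility level 130), an
    -- INSERT ... SELECT into a heap with the TABLOCK hint lets the
    -- optimizer parallelize the insert operator itself, not just the SELECT.
    INSERT INTO dbo.SalesArchive WITH (TABLOCK)
    SELECT SalesOrderID, OrderDate, TotalDue
    FROM dbo.SalesOrderHeader
    WHERE OrderDate < '2015-01-01';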

Read more »

How to use BULK INSERT to import data locally and in Azure

November 16, 2017 by Daniel Calbimonte

Introduction

BULK INSERT is a popular method to import data from a local file into SQL Server. At the moment, this feature is supported in SQL Server on-premises.

However, there is a newer capability, supported only in SQL Server 2017 on-premises, that allows importing data into SQL Server from a file stored in an Azure storage account using BULK INSERT. This feature will be supported in Azure SQL versions in the future.

In this article, we will show two examples. The first will show how to use the traditional BULK INSERT statement with a local CSV file, and the second will show how to import data from a CSV file stored in Azure into SQL Server on-premises.
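
For orientation, a hedged sketch of the Azure import pattern (the credential name, storage account URL, and SAS token are placeholders):

    -- A database scoped credential and an external data source of type
    -- BLOB_STORAGE let BULK INSERT read a file from Azure Blob Storage.
    CREATE DATABASE SCOPED CREDENTIAL AzureBlobCredential
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET   = '<SAS token>';

    CREATE EXTERNAL DATA SOURCE AzureBlobImports
    WITH (TYPE = BLOB_STORAGE,
          LOCATION = 'https://myaccount.blob.core.windows.net/imports',
          CREDENTIAL = AzureBlobCredential);

    BULK INSERT dbo.EtlImport
    FROM 'etlimport.csv'
    WITH (DATA_SOURCE = 'AzureBlobImports',
          FIELDTERMINATOR = ',',
          ROWTERMINATOR   = '\n');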

Read more »

Reading the SQL Server Transaction Log

February 9, 2015 by Minette Steynberg

Introduction

There has always been some debate as to whether or not there are real benefits to be gained from accessing the information in the transaction log. This article will endeavor to answer that question by looking at the following:

  • What is the SQL Server Transaction Log?
  • What information is stored in the transaction log?
  • What can be gained by accessing the information in the transaction log?
  • How does the transaction log work?
  • What tools are available for reading the transaction log?
  • And ultimately, is this something we should be doing at all?
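
For a sense of what such tooling exposes, a hedged sketch using the undocumented fn_dblog function, one commonly cited option (unsupported, so use with care):

    -- fn_dblog(NULL, NULL) reads the active portion of the transaction log;
    -- the two NULLs mean no starting or ending LSN filter is applied.
    SELECT TOP (100)
           [Current LSN],
           Operation,
           Context,
           [Transaction ID]
    FROM sys.fn_dblog(NULL, NULL)
    ORDER BY [Current LSN] DESC;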

Read more »