PowerShell ETL, Part 2

Max Trinidad has part 2 of his PowerShell ETL series:

If you notice, in the above cmdlet's where-clause I'm selecting on the Column1 property instead of a reasonable label. In my scenario, the data in the CSV file contains variable columns for its different data types, such as Info, Error, and System. So it was easy to identify the total number of columns as 15.
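The cmdlet he references isn't reproduced in this excerpt, but a minimal sketch of that kind of filter, with a hypothetical file path and generic header names, might look like this:

# Sketch only: the path and the 'Error' value are illustrative.
# With generic header names, the filter has to lean on a label like Column1.
$generic = 1..15 | ForEach-Object { "Column$_" }    # Column1 .. Column15
$Logdata = Import-Csv -Path 'C:\Temp\applog.csv' -Header $generic
$errors  = $Logdata | Where-Object { $_.Column1 -eq 'Error' }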

Now, by calling the “Import-Csv” cmdlet with the “-Header” parameter, you can define a list of columns when you build the $Logdata object. We create the $header variable with the column names separated by commas.
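As a sketch of that approach (the column names below are hypothetical stand-ins, since the real fifteen aren't listed in this excerpt):

# Sketch only: friendly, comma-separated column names replace Column1 in the filter.
$header  = 'EntryType','TimeStamp','Source','Message'   # ...extended to all 15 names
$Logdata = Import-Csv -Path 'C:\Temp\applog.csv' -Header $header
$Logdata | Where-Object { $_.EntryType -eq 'Error' }

One caveat worth keeping in mind: when -Header is supplied, Import-Csv treats any existing header row in the file as a row of data, so that row should be skipped or stripped first.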

Keep an eye out for part 3. In the meantime, check out part 1 if you haven't already.

