
Month: September 2019

TensorFlow Changes

Ajit Jaokar summarizes key changes in TensorFlow from 1.x to 2.0:

The data pipeline, simplified: TensorFlow 2.0 has a separate module, TensorFlow Datasets, that can be used to work with the model in a more elegant way. Not only does it have a large range of existing datasets, making your job of experimenting with a new architecture easier, it also has a well-defined way to add your own data to it.
 
In TensorFlow 1.x, to build a model we would first need to declare placeholders. These were dummy variables which would later (in the session) be used to feed data to the model. There were many built-in APIs for building the layers, like tf.contrib, tf.layers, and tf.keras; one could also build layers by defining the actual mathematical operations.
In TensorFlow 2.0, you can build your model by defining your own mathematical operations; as before, you can use the math module (tf.math) and the linear algebra module (tf.linalg). However, you can also take advantage of the high-level Keras API and the tf.layers module. The important part is that we do not need to define placeholders any more.

These look like some nice improvements.
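
As a quick illustration of what that no-placeholder workflow looks like in practice, here is a minimal sketch (my own, not from Ajit's post) of a TensorFlow 2.x model fed from a tf.data pipeline; the toy data and layer sizes are made up purely for the example.

# Minimal TensorFlow 2.x sketch: no placeholders, no sessions.
# Assumes TensorFlow 2.x is installed; data here is synthetic.
import numpy as np
import tensorflow as tf

# In 1.x we would have declared tf.compat.v1.placeholder(...) and fed values
# through a session; in 2.0 we hand the model a dataset directly.
features = np.random.rand(256, 4).astype("float32")
labels = (features.sum(axis=1) > 2.0).astype("float32")

dataset = tf.data.Dataset.from_tensor_slices((features, labels)).shuffle(256).batch(32)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(dataset, epochs=3)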


Get Lock Details Against a Database

David Fowler has a new procedure:

Have you ever wanted a quick and easy way to see who was holding (and waiting on) locks on a particular database? Perhaps you’ve got some blocking issues going on and you want to see exactly which rows the row level locks were taken out on?

sp_LockDetails will return some handy information about the locks held on a specific database, including SPID, login, database, lock type and resource.

David includes the script in the post as well.
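
If you just want a quick look at the same kind of information without the procedure, the underlying DMV is queryable directly. Below is a rough sketch (not David's sp_LockDetails) of pulling similar columns from sys.dm_tran_locks with Python and pyodbc; the connection string and database name are placeholders to replace.

# Hypothetical sketch: list lock details for one database via sys.dm_tran_locks.
# Assumes pyodbc and the "ODBC Driver 17 for SQL Server" driver are installed.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};Server=localhost;Trusted_Connection=yes;"
)

query = """
SELECT  l.request_session_id            AS spid,
        s.login_name                    AS login_name,
        DB_NAME(l.resource_database_id) AS database_name,
        l.resource_type                 AS lock_type,
        l.resource_description         AS resource,
        l.request_mode                  AS mode,
        l.request_status                AS status
FROM    sys.dm_tran_locks AS l
        LEFT JOIN sys.dm_exec_sessions AS s
            ON s.session_id = l.request_session_id
WHERE   DB_NAME(l.resource_database_id) = ?;
"""

# "YourDatabase" is a placeholder database name.
for row in conn.cursor().execute(query, "YourDatabase"):
    print(row.spid, row.login_name, row.lock_type, row.mode, row.status)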


Azure Data Factory Pipeline Hierarchies

Paul Andrew explains the idea of pipeline hierarchies with respect to Azure Data Factory:

Next, even if the concept isn’t new, I’d like to call out two big differences in my approach to orchestration with ADF that come from working within Microsoft Azure. The highly scalable cloud platform presents some new challenges that SSIS simply didn’t. For me these are:

– Needing to consider our wider solution and what things now cost. I’m fairly sure I’ve said it before. When working with ‘Pay-as-you-go’ services we need to think about designing for cost/consumption as well as all our other data transformation and output requirements. In Azure it is so easy to just leave resources running night and day, when only a short window of compute is needed.
– We need to consider the scale-out capabilities of the other services that ADF is going to invoke. Or, to put it another way, how much parallel activity execution do we want ADF to achieve? As you may know, the ADF ForEach activity by default allows us to execute inner activities in parallel, but is that enough?

It’s a very interesting idea; read the whole thing.


Listing Windows Users with PowerShell

Jack Vamvas shows us how we can use PowerShell to list Windows users in an Active Directory group:

Question: How can I use PowerShell to list out Windows users? Are there PowerShell cmdlets which can report on Windows users?
Answer: There are “out of the box” PowerShell cmdlets which will support the requirement. How you apply the PowerShell cmdlets will depend on how much detail is required.

Jack has a few examples here as well.


Troubles with Dropping Logins

Pamela Mooney takes us through a scenario involving dropping a user and login, and some of the difficulties which might arise:

I had to obscure a lot, but the bottom query results correlate to the top results. The first line of the bottom query results shows the grantor of the permissions, and the bottom line is the grantee. In this case, a login was explicitly denied impersonation on a server role. I’m using this example because it is really quirky to fix. Most often, you’ll just reverse the permissions, using pretty standard syntax. Even easier, right-click on the login, go to the “Securables” tab, and remove the permissions. However, if you are a fan of the T-SQL approach, this one is not so straightforward, so it’s a good one to show.

Click through for a demonstration.


K-Means Clustering with Python

Abhinav Choudhary walks us through k-means clustering using scikit-learn:

K-means clustering tries to group your data into clusters based on similarity. In this algorithm, we have to specify the number of clusters (which is a hyperparameter) we want the data to be grouped into. Hyperparameters are variables whose values need to be set before fitting the model to the dataset; they are the adjustable parameters you choose that govern the training process itself.

Read on for a demo.
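
For a sense of what that looks like in code, here is a minimal scikit-learn sketch (mine, not Abhinav's demo) where n_clusters is the hyperparameter set before fitting; the blob data is synthetic.

# Minimal k-means sketch with scikit-learn on synthetic data.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Generate toy data with three natural groupings.
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

# n_clusters is the hyperparameter we must choose before training.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)

print(kmeans.cluster_centers_)  # one centroid per cluster
print(labels[:10])              # cluster assignment for the first ten points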


Drawing Spatial Lines with SQL Server

Hasan Savran takes us through spatial data types in SQL Server:

In this post, I want to show you how easy it is to draw a spatial line by using spatial points. To make the following demo work, you must have SQL Server 2017 or later. The reason is that I will use the new system functions STRING_AGG and CONCAT_WS. These are not spatial functions, but they make the process of drawing a spatial line from spatial points easy. You can read about these new functions in my older post here.

I downloaded hurricane data from NOAA for free. The dataset has the location of the hurricane eyes in latitude and longitude. Knowing the location, it’s pretty easy to display these points as spatial data (geography). I wanted to connect these points to each other and create a line; by doing that, I could add a buffer around the line, make a spatial range search, and find whether I have any customers under this line.

I think spatial data types are probably one of the lesser-utilized data types with respect to how useful they can be.
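
The T-SQL in the post builds a WKT LINESTRING out of the individual points with STRING_AGG and CONCAT_WS. As a language-agnostic illustration of the same idea, here is a tiny Python sketch that assembles the WKT string from a list of made-up longitude/latitude pairs (not the NOAA data).

# Build a WKT LINESTRING from ordered points, analogous to what the
# STRING_AGG/CONCAT_WS approach does on the SQL Server side.
points = [
    (-81.5, 24.5),   # (longitude, latitude) placeholders, not real hurricane data
    (-80.9, 25.1),
    (-80.2, 25.8),
]

# WKT expects "longitude latitude" pairs separated by commas.
linestring = "LINESTRING(" + ", ".join(f"{lon} {lat}" for lon, lat in points) + ")"
print(linestring)
# LINESTRING(-81.5 24.5, -80.9 25.1, -80.2 25.8)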


Azure Data Studio Auto-Save

Dave Bland takes us through one nice feature of Azure Data Studio:

Azure Data Studio has many great features, and even more if you add all the extensions that are available.  Many extensions are very useful now, even though they are still in preview.  These features are naturally compared to SQL Server Management Studio.  One feature I like that sort of exists in SSMS is the auto-save feature.  This feature will automatically save your files when you close Azure Data Studio, and they will be there the next time you use ADS.  SSMS has the auto recovery option, but it works a bit differently, so it isn’t quite the same. ADS has a setting named “Files: Hot Exit”.

Read on to see how it works.


Instant Transaction Rollback in SQL Server 2019

Matthew McGiffen explains that Accelerated Database Recovery in SQL Server 2019 works for more than just startup times:

If you’ve read about the Accelerated Database Recovery feature in SQL Server 2019 you could be forgiven for thinking it’s just about speeding up database recovery time in case of a server failure.

In fact, enabling it also means that where you have a long-running transaction that fails or is cancelled, the rollback is almost instantaneous. This is great news for DBAs who sometimes have to kill a long-running blocking transaction but worry that it may take a long time to roll back – continuing to block all that time.

Read on for an example. I hadn’t thought about this, but it’s pretty cool.
