Press "Enter" to skip to content

Month: July 2019

Oracle Data Guard on Azure

Kellyn Pot’vin-Gorman’s worlds continue to collide:

So, as most people know, I'm not a big fan of Oracle RAC (Real Application Clusters). My opinion has been that it's often sold for use cases it doesn't serve (such as HA), and that the resource demands between the nodes, along with what a node eviction does to the nodes that remain, are not in the best interest of most use cases. On the other hand, I LOVE Oracle Data Guard, active or standard, it doesn't matter: the product is great, and it's an awesome option for those migrating their Oracle databases to Azure VMs.

Read on to see what Oracle Data Guard is and where you might use it.

Comments closed

Refreshing Views After DDL Changes

Eduardo Pivaral shows how you can refresh the metadata for a view in SQL Server after one of its underlying tables or functions changes:

So we proceed to execute an ALTER VIEW against the first view:

ALTER VIEW dbo.[vi_invoices_received_by]
AS
SELECT
    ConfirmedReceivedBy AS [Received by],
    COUNT(InvoiceID) AS [# of Invoices],
    CustomerID
FROM Sales.Invoices
GROUP BY ConfirmedReceivedBy, CustomerID;
GO

So we should now be able to see the updated column names on the second view, right?

But if you execute the view again, you will obtain the same results as before.

Read on to see what’s gone wrong and how you can fix it.
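
One common remedy for stale view metadata, sketched here under the assumption that the dependent view is not schema-bound, is to rebuild its metadata with sp_refreshview. The view name below is a placeholder, since the excerpt doesn't show the second view's name:

-- Rebuild the persisted metadata for a non-schema-bound view so it reflects
-- the current definitions of the objects it references.
-- dbo.vi_invoices_by_customer is a hypothetical name standing in for the
-- dependent (second) view from the post.
EXEC sys.sp_refreshview N'dbo.vi_invoices_by_customer';
GO

Creating the dependent view WITH SCHEMABINDING avoids the problem entirely, at the cost of blocking changes to the underlying objects.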

Comments closed

AMD and Server CPUs

Glenn Berry has an interesting post on why he’s seriously considering recommending AMD CPUs to people:

AMD claims a 15% Instructions Per Clock (IPC) increase between the desktop Zen+ and Zen 2 generations, and we are likely to see a similar increase between the previous AMD EPYC 7001 “Naples” and the AMD EPYC 7002 series processors.

So far, we don’t know the official base and turbo clock speeds, but there was a recent leak of partial specifications and pricing by a European retailer that listed max boost clock speeds of up to 3.4 GHz. We won’t know the actual single-threaded performance of these processors until they have been released and benchmarked by neutral third-party testers. I am optimistic that they will have higher single-threaded CPU performance than Intel Cascade Lake-SP processors.

I’ve always had a soft spot in my heart for AMD, so I’d love to see them come through with a serious competitor to Intel in the server space: partly for nostalgic reasons, but also to make pricing more competitive and to push Intel to get back on its game.

Comments closed

Troubleshooting DSC in Push Refresh Mode

Jess Pomfret continues a series on Powershell Desired State Configuration:

One of the biggest obstacles people face when using DSC is the troubleshooting and reporting pieces. There are options here to integrate with third-party tools to create a more polished enterprise solution, but if you’re going with just straight DSC, you might feel it is somewhat lacking in this area.

We do however have several tools available to troubleshoot issues with configurations or to monitor our nodes to determine whether they are still in the desired state. I’m specifically going to look at the options available if you’re using DSC in the Push refresh mode.

Click through for discussion of the PSDesiredStateConfiguration module as well as DSC-related event log messages.

Comments closed

Notebooks in Azure Databricks

Brad Llewellyn takes us through Azure Databricks notebooks:

Azure Databricks Notebooks support four programming languages: Python, Scala, SQL, and R. However, selecting a language in this drop-down doesn’t limit us to only using that language. Instead, it sets the default language of the notebook. Every code block in the notebook is run independently, and we can manually specify the language for each code block.

Before we get to the actual coding, we need to attach our new notebook to an existing cluster. As we said, Notebooks are nothing more than an interface for interactive code. The processing is all done on the underlying cluster.

Read on to see how heavily Databricks leans on the notebook metaphor in how you interact with it.
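
As a quick illustration of the per-cell language override the excerpt describes, here is a hypothetical cell from a notebook whose default language is Python; the %sql magic command makes just this one cell run as Spark SQL (the table name is only a placeholder):

%sql
-- The %sql magic at the top of the cell overrides the notebook's default
-- language (Python, in this assumed setup) for this one cell only.
-- my_demo_table is a placeholder; substitute a table that exists in your workspace.
SELECT COUNT(*) AS row_count
FROM my_demo_table;

Equivalent magic commands exist for the other supported languages (%python, %scala, %r), plus %md for documentation cells.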

Comments closed

Logging in Azure

Rolf Tesmer has a detailed post covering how and what to log when using Azure for a modern data warehouse:

In my view, what often doesn’t get enough attention up front are the critical aspects of monitoring, auditing, and availability. Thankfully, these are generally not too difficult to plug in at any point in the delivery cycle, but, as with most things in the cloud, there are just so many different options to consider!

So the purpose of this blog is to focus on the key areas of Azure Services Monitoring and Auditing for the Azure Modern Data Platform architecture.

Click through for examples from a number of different Azure services.

Comments closed

Drawing SSIS Packages as SVGs

Bartosz Ratajczyk continues a series on taking SSIS packages and generating SVGs from their control flows:

To make things harder, the layout of the sequences and tasks is not some nested XML structure. All of the elements have the same parent – <GraphLayout> – meaning all of them are at the same tree level. Also, there is no attribute showing where a particular object belongs. Almost. In the example with the sequences, I see two regularities:
– the outer container is placed later in the XML than the inner container
– the @Id attributes show the nesting of the objects

I’m not sure how often I’d use this in practice, but if you want to understand some of the internals of SSIS, this is an interesting series to follow.

Comments closed

Getting to DevOps

Grant Fritchey takes us through some of the baby steps in getting started with DevOps:

However, easy by comparison doesn’t mean simply easy. There’s a lot of work involved, and making mistakes early in the process has repercussions for every later step.

For example, where do you put your code?

Yeah, yeah, I know. Source control. I mean, where in source control do you put the code? What do you call the project and solution? Is it in Git, GitHub, Azure, or somewhere else? Choose wisely, because every single step of automation you set up after this will be completely dependent on that first choice. Further, putting it in GitHub, for example, has repercussions for how you implement automation in Azure DevOps Pipelines.

But hey, no pressure.

Comments closed