
Day: October 17, 2017

Reasons For Using Docker With R

Jeroen Ooms gives us a few reasons why we might want to containerize our R-based products:

The flagship of the OpenCPU system is the OpenCPU server: a mature and powerful Linux stack for embedding R in systems and applications. Because OpenCPU is completely open source, we can build and ship on DockerHub. A ready-to-go Linux server with both OpenCPU and RStudio can be started using the following (use port 8004 or 80):

docker run -t -p 8004:8004 opencpu/rstudio

Now simply open http://localhost:8004/ocpu/ and http://localhost:8004/rstudio/ in your browser! Log in to RStudio with user: opencpu (passwd: opencpu) to build or install apps. See the readme for more info.

This is in the context of one particular product, but the reasons fit other scenarios too.  H/T R-Bloggers


Temporal Table Permissions

Kenneth Fisher shows us the permissions needed to create temporal tables:

Msg 13538, Level 16, State 3, Line 6
You do not have the required permissions to complete the operation.

Well, that’s not good. What permissions do I need exactly? Well, again, according to BOL I need CONTROL on the table and its history table. For those who don’t know, CONTROL is the top-level permission for any object. You can do anything at all with it.

Read the whole thing.  I gather the reason for requiring this level of access is that you don’t want people to go monkeying with data collected for auditing purposes.
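To make the requirement concrete, here is a minimal sketch (the table, history table, and user names are hypothetical, not Kenneth’s) of granting that level of access:

-- Hypothetical names: CONTROL is required on both the temporal table and its history table.
GRANT CONTROL ON OBJECT::dbo.Employee TO TemporalUser;
GRANT CONTROL ON OBJECT::dbo.EmployeeHistory TO TemporalUser;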


Anomaly Detection With Kafka Streams

Ajmal Karuthakantakath shows us an application which performs fairly simple anomaly detection using Kafka Streams:

The problem is in the banking loan payment domain, where customers have taken a loan and they need to make monthly payments to repay the loan amount.

Assume there are millions of customers in the system and all these customers need to make monthly payments to their account. Each customer may have a different monthly due date depending on their monthly loan due date.

Each customer payment will appear as a PaymentScheduleEvent event. Customers can make more than one PaymentScheduleEvent per month. Each monthly due date for a customer will appear as a PaymentDueEvent.

An arbitrarily chosen anomaly condition for this example is an amount due of more than $150 for any customer at any point in time.

Click through for instructions, the application, and further resources.  If you want to learn Kafka Streams, this should keep you busy for a little while.


Supplementary Characters In SQL Server 2017

Solomon Rutzky tests SQL Server 2017 collations to determine whether they support supplementary characters:

As you scroll through the list, does anything appear to be missing? Go ahead, take another look. I’ll wait 😉. Figure it out? Yep, that’s right: none of those Collations end in “_SC“. Collations ending in “_SC” were added in SQL Server 2012 and support the full UTF-16 character set. Without the “_SC” ending, built-in functions and comparisons/sorting only fully support the base UCS-2 character set (i.e. the first 65,536 Unicode Code Points). You can, of course, store and view all Unicode Code Points, even Supplementary Characters, in non-“_SC” Collations, but they will be interpreted as being two separate “unknown” characters instead of as a single character.

To illustrate this point, the following query shows: the two surrogate Code Points (not actual characters by themselves, but when used in pairs of any of the assigned combinations, produce a single character), the resulting Supplementary Character, and how the LEN function interprets that sequence in both non-“_SC” and “_SC” Collations.

Read on for Solomon’s testing, which includes a cat face emoji.  I know that I, for one, thought there was insufficient cat face emoji representation in SQL Server prior to 2017.
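As a rough sketch of that sort of test (not Solomon’s exact query; the collation names are my choice), you can build the cat face emoji (U+1F431) from its surrogate pair and compare what LEN reports under each kind of collation:

-- Sketch only: U+1F431 (cat face) built from its surrogate pair 0xD83D 0xDC31 (55357, 56369).
DECLARE @CatFace NVARCHAR(10) = NCHAR(55357) + NCHAR(56369);
SELECT LEN(@CatFace COLLATE Latin1_General_100_CI_AS)    AS [LEN, non-SC collation],  -- reports 2
       LEN(@CatFace COLLATE Latin1_General_100_CI_AS_SC) AS [LEN, SC collation];      -- reports 1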


Performance Tuning TVFs With Optional Parameters

Arvind Shyamsundar walks us through a scenario with user-defined functions with optional parameters:

If you look carefully, the above query is an example of ‘optional parameters’, wherein the same query caters to situations where there are specific values for the parameters as well as cases where there are none. Due to the implementation of the query (specifically the usage of ISNULL(@paramname, ColName)), the query plan thus generated will not leverage any indexes on the table. While this query can be refactored into separate versions for cases where the parameter values are supplied and where they are not, another viable option is to use OPTION (RECOMPILE) at the statement level. This is an acceptable solution in most cases because the cost of scanning the table is often far higher than the cost of recompiling this query. So here is how we used OPTION (RECOMPILE) in this case:

Arvind walks us through three separate solutions.  My fourth solution is: don’t use user-defined table-valued functions.
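To give a feel for the pattern under discussion (the table, column, and parameter names below are illustrative, not from Arvind’s post), the optional-parameter shape plus the statement-level hint looks roughly like this:

-- Illustrative only: ISNULL(@param, ColName) makes each parameter optional,
-- and OPTION (RECOMPILE) lets the optimizer build a plan for the values actually supplied.
DECLARE @CustomerID INT  = NULL,
        @OrderDate  DATE = '2017-10-17';

SELECT OrderID, CustomerID, OrderDate, TotalDue
FROM dbo.Orders
WHERE CustomerID = ISNULL(@CustomerID, CustomerID)
  AND OrderDate  = ISNULL(@OrderDate, OrderDate)
OPTION (RECOMPILE);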


Biml Metadata, With And Without BimlStudio

Ben Weissman has a pair of posts regarding metadata models in Biml.  First up, he gives us the high-roller solution:

If you’re lucky enough to be a BimlStudio user, you have access to the Biml Metadata feature! This feature allows you to build a Metadata model that fits your exact needs, which can then be browsed and used through a Metadata Instance using a dynamic object model.

As you probably still want to maintain your metadata outside of BimlStudio, we’ve built this little piece of code. It will read your meta-schema from a given SQL database and build a Biml Metadata model from it. In a second step, it will also import the contents of your model into an instance:

If your company doesn’t want to shell out the cash to buy a license for BimlStudio, Ben also has a version for people using the free BimlExpress tool:

So maybe you’ve noticed our blog post on deriving metadata from SQL Server into BimlStudio, but you are using BimlExpress and therefore don’t have access to the feature described there? While it’s true that BimlExpress doesn’t support the Metadata features of the Biml language, there are similar ways of achieving a flexible metadata model in Biml.

This post shows you how you can build a model in SQL Server, import it into Biml (including derived relationships, etc.), and use it in a very flexible way.

To get started, we need to set up a sample model in SQL Server first. You can either download the scripts from https://solisyon.de/files/SetupDerivedMetadata.sql or scroll to the very end of that page. Although your individual model can and will differ from this one, we suggest you follow along using our example and start tweaking it afterwards!

Once you really get how Biml converts metadata to packages, life gets so much easier.


Using dbatools To Determine SQL Server Versions

Simone Bizzotto walks us through a new dbatools feature:

You get back in a jiffy:
– the Build
– the Major Release
– the Service Pack
– the Cumulative Update
– the KB related to that version
– when the support for that version ends
– if all of the above are matching a verified build
– if a warning is shown, you passed a bad build or the JSON must be updated

Getting the build is easy; getting some of this other information is where they add a lot of value.
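For context, getting the raw build yourself is a one-liner in T-SQL; the KB, support dates, and build verification in the list above are the reference data dbatools layers on top:

-- The build and service pack level are exposed directly by SQL Server itself.
SELECT SERVERPROPERTY('ProductVersion') AS BuildNumber,
       SERVERPROPERTY('ProductLevel')   AS ServicePackLevel;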


Master Data In Azure

Matt How explains why Master Data Services isn’t a great cloud-based master data management solution and offers up an alternative:

Excel is easy to use, but not user friendly

Excel is on nearly every desktop in any Windows-based organisation and, with the Master Data Services Add-in, it puts the data well within the reach of the users. Whilst it is simple, it is in no way user friendly when compared to other applications that your users may be using. Not to mention that, for most, this will be the only part of the solution they see! Wouldn’t it be great if there was a way to supply the same data but with an intuitive, mobile-ready front end that people enjoy using?

Developers are tightly constrained

Developers like to develop, not choose options from drop-down menus in a web-based portal. With MDS, not only can devs not make use of Visual Studio and the like, but they are also very tightly constrained by the business rules engine. At this point we should be able to make use of our preferred IDE so that we can benefit from source control, frameworks, and customised business logic.

Not scalable according to modern expectations

Finally, MDS cannot scale to handle any kind of “big data”. It’s a bit of a buzzword, but as businesses collect more and more data, we need a data management option that can grow with that data. Because MDS must be deployed from a server, there is no easy way to meet those big data requirements.

There are a few pieces to Matt’s solution, making for an interesting read.
