Press "Enter" to skip to content

Month: May 2020

Setting Up Your Own R Package Repository

Steve Belcher explains how to configure a custom package repository in your environment:

One of the strengths of the R language is the thousands of third-party packages that have been made publicly available via CRAN, the Comprehensive R Archive Network. R includes several functions that make it easy to download and install these packages. However, in many enterprise environments, access to the Internet is limited or non-existent. In such environments, it is useful to create a local package repository that users can access from within the corporate firewall.

Your local repository may contain source packages, binary packages, or both. If at least some of your users will be working on Windows systems, you should include Windows binaries in your repository. Windows binaries are R-version-specific; if you are running R 3.3.3, you need Windows binaries built under R 3.3. These versioned binaries are available from CRAN and other public repositories. If at least some of your users will be working on Linux systems, you must include source packages in your repository.

There are some tools which help out with this, so read the whole thing.
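
To make the layout concrete, here is a rough sketch (in TypeScript/Node) of the directory structure a CRAN-style repository expects. The repository root and the "3.3" folder are illustrative assumptions; the PACKAGES index files are normally generated from R itself with tools::write_PACKAGES().

```typescript
// Sketch: scaffold the directory layout a CRAN-style repository expects.
// The repository root path and the "3.3" version folder are illustrative
// assumptions; the PACKAGES index files are normally generated from R
// with tools::write_PACKAGES().
import { mkdirSync, writeFileSync } from "fs";
import { join } from "path";

const repoRoot = "/srv/r-repo"; // hypothetical share behind the firewall

// Source packages (needed if any of your users are on Linux).
const srcContrib = join(repoRoot, "src", "contrib");

// Windows binaries are R-version-specific: binaries built under R 3.3
// live under a 3.3 folder for users running R 3.3.x.
const winContrib = join(repoRoot, "bin", "windows", "contrib", "3.3");

for (const dir of [srcContrib, winContrib]) {
  mkdirSync(dir, { recursive: true });
  // Empty placeholder index; regenerate after copying in .tar.gz (source)
  // or .zip (Windows binary) package files.
  writeFileSync(join(dir, "PACKAGES"), "");
}
```

Users inside the firewall could then point at the repository with something like install.packages("somepackage", repos = "file:///srv/r-repo"), or an internal HTTP URL serving the same tree.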


C# Notebooks with Cosmos DB

Hasan Savran takes us through Jupyter notebooks in Cosmos DB:

Jupyter Notebooks are everywhere these days. Being able to write a chunk of code and run it in a web application without worrying about a compiler is a great feeling. C# has been a little bit late to the party, but we have started to see C# notebooks lately too. Azure Cosmos DB announced its version of C# notebooks this week.
You can reach all notebook functionality under the Data Explorer link, and there are a bunch of sample notebooks under the Notebook link.

There are some limitations here, like needing to use the SQL API, but it’s an interesting approach to data access in Cosmos DB.
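
For a sense of what that SQL API access looks like, here is a minimal sketch using the JavaScript SDK (@azure/cosmos); the notebook cells themselves use the C# SDK, and the endpoint, key, database, container, and query below are all placeholder assumptions.

```typescript
// Sketch: querying Cosmos DB over the SQL API with the JavaScript SDK.
// Endpoint, key, database, container, and query are placeholder assumptions.
import { CosmosClient } from "@azure/cosmos";

const client = new CosmosClient({
  endpoint: "https://my-account.documents.azure.com:443/",
  key: process.env.COSMOS_KEY ?? "",
});

async function topCustomers(): Promise<void> {
  const container = client.database("SalesDb").container("Customers");

  // The same flavor of SQL API query a notebook cell would run.
  const { resources } = await container.items
    .query("SELECT TOP 10 c.id, c.name FROM c ORDER BY c._ts DESC")
    .fetchAll();

  console.log(resources);
}

topCustomers().catch(console.error);
```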


The Pain of Nested PowerShell Modules

Fred Weinmann explains why you probably don’t want to build a nested module in PowerShell:

Yay, but … that is not enough for some people:

– What if somebody copies and pastes it to another machine and forgets the dependencies?
– What if another module uses the same dependency, but at a different version?
– What if I don’t want to confront the user with those dependencies?
– What if a later version of my dependency breaks things? I just tested this version!

And the answer to all four of those is the same: Ship dependencies within your own module, hidden from prying eyes. With the NestedModule feature of PowerShell modules we natively support that as well. Great! Feels good, feels stable, feels reliable, feels … solid.
It’s an illusion.

Click through to understand why this benefit is illusory.


When to Have Multiple Azure Data Factories

Paul Andrew explains how to become a factory mogul:

The obvious and easy reason for having multiple Data Factories could be that you simply want to separate your business processes. Maybe they all have separate data delivery requirements and it just makes management of data flows easier to handle. For example:

– Sales
– Finance
– HR

They could have different data delivery deadlines, process on different schedules, and share no underlying connections.

You may also have multiple projects underway that mean you want to keep teams isolated.

But that’s not the only reason, so click through to learn several other reasons why you might have multiple Azure Data Factory instances running.


Reviewing SSMS Client Statistics

Reitse Eskens learns about SQL Server Management Studio’s client statistics:

In my case, I was looking for the amount of bytes received from the server to determine the network speed. The number of rows is one thing, but I can’t easily tell if a row is 1 or 1000 kilobytes. By checking out the bytes received, I can get some feel for the data size. If there’s a huge amount of data coming towards me, that explains why I’ve got to wait for minutes. If there’s only a few kilobytes in the end, maybe something else is going wrong.

Reitse also takes some time to figure out how the client statistics tool works.
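
To make that reasoning concrete, here is a trivial sketch of the arithmetic, with made-up numbers:

```typescript
// Back-of-the-envelope math from the client statistics numbers: bytes
// received divided by rows returned gives a rough average row size.
// All figures below are made up for illustration.
function avgRowSizeKb(bytesReceived: number, rowsReturned: number): number {
  return bytesReceived / rowsReturned / 1024;
}

// 2 GB received for 500,000 rows is about 4 KB per row, which points at
// wide rows (or large columns) rather than a slow network.
console.log(avgRowSizeKb(2 * 1024 ** 3, 500_000).toFixed(1)); // "4.2"
```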


Blocking Inbound Connections to SQL Server

John Morehouse shows one quick way of preventing anybody else from connecting to your SQL Server instance:

We even tried to restart the instance into single user mode; however, every time that happened, something else would take the connection before we could get into the instance. We eventually restarted the SQL Server instance to normal operation so that we could investigate why we could not get a connection when in single user mode.

Turns out that with the production nature of the instance, the client’s large farm of application servers was connecting to it faster than we could. This was discovered by using sp_who2; however, you could use the DMV sys.dm_exec_connections to see what is connecting to the instance if you desired. So, we needed a way to block incoming connections while not being invasive, like shutting down the application servers or making a large network change.

This is where the brilliance comes in.

Click through for the idea. This is the type of thing you keep in your back pocket in a real pinch, but hope never to need to use.


Using D3 to Visualize Data in Cube.js

Artyom Keydunov takes us through integrating D3.js within Cube.js:

You can check the online demo of this dashboard here, and the complete source code of the example app is available on GitHub.

We are going to use Postgres to store our data. Cube.js will connect to it and act as middleware between the database and the client, providing an API, abstraction, caching, and a lot more. On the frontend, we’ll have React with Material UI and D3 for chart rendering. Below, you can find a schema of the whole architecture of the example app.

D3 is a powerful visualization library in JavaScript, though I’ve found it to be a complex one.
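
To give a feel for the pipeline the post describes, here is a minimal sketch of fetching aggregated data through the Cube.js client and handing it to D3; the cube name, measure, and dimension are placeholder assumptions, and the apiUrl points at the Cube.js development default.

```typescript
// Sketch: fetch aggregated data through the Cube.js client and render it
// with D3. The cube ("Orders"), measure, and dimension are placeholder
// assumptions.
import cubejs from "@cubejs-client/core";
import * as d3 from "d3";

const cubejsApi = cubejs("CUBEJS_API_TOKEN", {
  apiUrl: "http://localhost:4000/cubejs-api/v1", // Cube.js dev default
});

async function renderChart(): Promise<void> {
  const resultSet = await cubejsApi.load({
    measures: ["Orders.count"],
    dimensions: ["Orders.status"],
  });

  // chartPivot() flattens the result into rows D3 can scale over,
  // e.g. [{ x: "shipped", "Orders.count": 42 }, ...].
  const data = resultSet.chartPivot();

  const width = 400, height = 200;
  const x = d3.scaleBand<string>()
    .domain(data.map((d) => d.x))
    .range([0, width])
    .padding(0.1);
  const y = d3.scaleLinear()
    .domain([0, d3.max(data, (d) => Number(d["Orders.count"])) ?? 0])
    .range([height, 0]);

  // Draw a simple bar chart into an existing #chart element.
  d3.select("#chart")
    .append("svg")
    .attr("width", width)
    .attr("height", height)
    .selectAll("rect")
    .data(data)
    .join("rect")
    .attr("x", (d) => x(d.x) ?? 0)
    .attr("y", (d) => y(Number(d["Orders.count"])))
    .attr("width", x.bandwidth())
    .attr("height", (d) => height - y(Number(d["Orders.count"])));
}

renderChart().catch(console.error);
```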


Neural Networks and a Reproducibility Problem

William Vorhies looks at a recent paper on attempts at reproducing results from various types of neural networks:

Your trusted lead on recommenders rushes up with a new paper in hand. Just back from the RecSys conference where the paper was presented, he shows you the results. It appears your top-N recommender could be made several percentage points better using the new technique. The downside is that it would require you to adopt one of the new DNN collaborative filtering models, which would be much more compute-intensive and mean a deep reskilling dive for some of your team.

Would you be surprised to find out that the results in that paper are not reproducible? Or, even more, that the baseline techniques it was compared against to show improvements were not properly optimized? And that, if they had been, the much simpler techniques would have been shown to be superior?

In a recent paper, “Are We Really Making Much Progress?”, researchers Maurizio Ferrari Dacrema, Paolo Cremonesi, and Dietmar Jannach raise a major red flag. Houston, we have a reproducibility problem.

Having worked through some of these papers for a different algorithm, I’m not that surprised. Sometimes it seems like improvements are limited solely to the data set and scenario the authors came up with, though that may just be the cynic in me.

This article is a good reason for looking at several types of models during the research phase, and even trying to keep several models up to date. It’s also a reminder that if you’re looking at papers and hot algorithms, make sure they include a way to get the data used in testing (and source code, if you can).
