Andrew Loree has made a Powershell script available:
Quick and easy backup for VisualSVN. Wraps the svnadmin.exe and performs a hotcopy of all repositories in the $source_path, dumping them to the $backup_path
Read on for the script.
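The script itself is behind the link; as a rough illustration of the approach it describes, here is a hypothetical Python sketch (the real script is PowerShell, and names like build_hotcopy_commands are made up): enumerate every repository directory under the source path and run svnadmin hotcopy on each one.

```python
from pathlib import Path
import subprocess

def build_hotcopy_commands(source_path, backup_path):
    """One 'svnadmin hotcopy <repo> <dest>' command per repository directory."""
    return [
        ["svnadmin", "hotcopy", str(repo), str(Path(backup_path) / repo.name)]
        for repo in Path(source_path).iterdir()
        if repo.is_dir()
    ]

def run_backup(source_path, backup_path):
    # Hotcopy is safe against live repositories, which is the point of the approach.
    for cmd in build_hotcopy_commands(source_path, backup_path):
        subprocess.run(cmd, check=True)
```

This is only a sketch of the shape of the task; the linked PowerShell script is the working version.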
A Fine Slice Of SQL Server
Gaurav Gupta shows how to use Spring-Kafka to implement a request-reply pattern:
The behavior of request-reply is consistent even if you were to create, say, three partitions of the request topic and set the concurrency of three in consumer factory. The replies from all three consumers still go to the single reply topic. The container at the listening end is able to do the heavy lifting of matching the correlation IDs.
Kafka’s real advantage still comes from distributed, asynchronous processing, but if you have a use case where you absolutely need synchronous processing, you can do that in Kafka as well.
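The correlation-ID matching described above can be illustrated with a toy, library-free sketch (plain Python, not Spring-Kafka; the class and method names are hypothetical): every reply lands on one shared listener, and the correlation ID routes it back to whichever request is waiting for it.

```python
from concurrent.futures import Future

class ReplyingClient:
    """Toy stand-in for the reply-topic listener: pending requests are keyed
    by correlation ID, and each incoming reply completes the matching one."""

    def __init__(self):
        self._pending = {}

    def send_request(self, correlation_id, payload):
        # "Send" a request: register a future under its correlation ID.
        fut = Future()
        self._pending[correlation_id] = fut
        return fut

    def on_reply(self, correlation_id, payload):
        # Shared reply listener: complete whichever request the ID belongs to.
        fut = self._pending.pop(correlation_id, None)
        if fut is not None:
            fut.set_result(payload)
```

Replies can arrive in any order, from any of the three consumers; the IDs are what route each one to the right caller, which is the heavy lifting the container does for you.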
Erik Darling notes that scalar functions can cause multi-table blocking:
Someone had tried to be clever. If you’ve been practicing SQL Server for a while, looking at the running code usually means one thing.
A Scalar Valued Function was running!
In this case, here’s what it looked like:
CREATE OR ALTER FUNCTION dbo.BadIdea ( @uid INT )
RETURNS BIGINT
WITH RETURNS NULL ON NULL INPUT, SCHEMABINDING
AS
BEGIN
    DECLARE @BCount BIGINT;
    SELECT @BCount = COUNT_BIG(*)
    FROM dbo.Badges AS b
    WHERE b.UserId = @uid
    GROUP BY b.UserId;
    RETURN @BCount;
END;

Someone had added that function as a computed column to the Users table.
Spoilers: this was a bad idea.
Now moving on to our FRM (Functional Relational Mapping) and repository setup, the following import will be used for the MS SQL Server Slick driver’s API:
import slick.jdbc.SQLServerProfile.api._

Thereafter, the FRM will look the same as the rest of the FRMs delineated in the official Slick documentation. For the example on this blog, let’s use the following table structure:
CREATE TABLE user_profiles (
    id INT IDENTITY (1, 1) PRIMARY KEY,
    first_name VARCHAR(100) NOT NULL,
    last_name VARCHAR(100) NOT NULL
)

whose functional relational mapping will look like this:
class UserProfiles(tag: Tag) extends Table[UserProfile](tag, "user_profiles") {
  def id: Rep[Int] = column[Int]("id", O.PrimaryKey, O.AutoInc)
  def firstName: Rep[String] = column[String]("first_name")
  def lastName: Rep[String] = column[String]("last_name")
  def * : ProvenShape[UserProfile] =
    (id, firstName, lastName) <> (UserProfile.tupled, UserProfile.unapply) // scalastyle:ignore
}
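The <> projection in that mapping pairs a function that builds a UserProfile from a row tuple (UserProfile.tupled) with one that takes it apart again (UserProfile.unapply); the case class itself isn’t shown in the excerpt. As a rough, purely illustrative analogy of that pack/unpack pair in Python:

```python
from dataclasses import dataclass, astuple

# Hypothetical analogue of the Scala UserProfile case class. Slick's <>
# bidirectional mapping needs exactly this pair: tuple -> object, object -> tuple.
@dataclass
class UserProfile:
    id: int
    first_name: str
    last_name: str

def from_row(row):
    return UserProfile(*row)   # like UserProfile.tupled

def to_row(profile):
    return astuple(profile)    # like UserProfile.unapply
```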
I’m definitely going to need to learn more about this.
Mike Robbins shows how to split out validation from your primary function within Powershell:
They responded by asking if it was possible to move the custom message that Throw returns to the private function. At first, I didn’t think this would be possible, but decided to try the code to make an accurate determination instead of just assuming it wasn’t possible.
I’ve now learned something else that makes the whole process of moving the validation from the ValidateScript block to a private function much more user-friendly, which is what I think the person who asked the question was trying to accomplish.
If you have several parameters with somewhat complex validation logic, this makes maintenance a lot easier.
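As a language-neutral illustration of the same idea (a hypothetical Python sketch, not Mike’s PowerShell code): the validation logic and its custom error message live in one private helper, so every public function that needs the check reuses it instead of repeating an inline validation block.

```python
def _require_valid_name(name):
    """Private helper: the check and its custom message live in one place."""
    if not name or not all(c.isalnum() or c == "_" for c in name):
        raise ValueError(f"'{name}' is not a valid repository name")
    return name

def backup(repo):
    # The public function just delegates validation to the helper.
    return f"Backing up {_require_valid_name(repo)}"
```

Change the rule or the message once, and every caller picks it up.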
Comments closedCaludio Silva shows how you can run multiple instances of dbachecks concurrently:
Imagine that I want to check for databases in the Full Recovery Model on the production environment, and I want to start (in parallel) a new check for the development environment, where I want to check for the Simple Recovery Model. If this setting is not changed in the correct time frame, we can end up checking for the Full Recovery Model on the development environment, where we want the Simple Recovery Model.
The first time I tried to run tests in parallel for some environments that needed config changes, I didn’t realise this detail, so I ended up with many more failed tests than expected! The bell rang when the majority of the failed tests came from one specific test…the one whose value I had changed.
Read the whole thing before you start running Task.Parallel or even running multiple copies of dbachecks in separate Powershell windows.
Holden Ackerman has an interesting analysis of Qubole customers’ adoption of Hadoop 2:
In Qubole’s 2018 Data Activation Report, we did a deep-dive analysis of how companies are adopting and using different big data engines. As part of this research, we found some fascinating details about Hadoop that we will detail in the rest of this blog.
A common misconception in the market is that Hadoop is dying. However, when you hear people refer to this, they often mean “MapReduce” as a standalone resource manager and “HDFS” as being the primary storage component that is dying. Beyond this, Hadoop as a framework is a core base for the entire big data ecosystem (Apache Airflow, Apache Oozie, Apache Hbase, Apache Spark, Apache Storm, Apache Flink, Apache Pig, Apache Hive, Apache NiFi, Apache Kafka, Apache Sqoop…the list goes on).
I clipped this portion rather than the direct analysis because I think it’s an important point: the Hadoop ecosystem is thriving as the matter of primary importance switches from what was important a decade ago (batch processing of large amounts of data on servers with direct attached storage) to what is important today (a combination of batch and streaming processing of large amounts of data on virtualized and often cloud-based servers with network-attached flash storage).
Comments closedWhen we do a transformation on any RDD, it gives us a new RDD. But it does not start the execution of those transformations. The execution is performed only when an action is performed on the new RDD and gives us a final result.
So once you perform an action on an RDD, the Spark context hands your program over to the driver.
The driver creates the DAG (directed acyclic graph) or execution plan (job) for your program. Once the DAG is created, the driver divides this DAG into a number of stages. These stages are then divided into smaller tasks and all the tasks are given to the executors for execution.
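The lazy behavior described above can be shown with a stdlib-only analogy (plain Python generators standing in for RDDs; this is not Spark itself): the map step is merely recorded, and nothing executes until an action forces a final result.

```python
calls = []

def double(x):
    # Record each invocation so we can see when work actually happens.
    calls.append(x)
    return x * 2

doubled = (double(x) for x in range(1, 6))  # "transformation": nothing runs yet
assert calls == []                          # no work done before an action
total = sum(doubled)                        # "action": triggers execution
```

Until sum() pulls values through the pipeline, double() is never called — the same shape as transformations building a DAG that only an action executes.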
Click through for more details.
Thomas Rushton investigates what’s taking so long with an xp_cmdshell call:
I wanted to know what he was up to, but the sql_text field only gives “xp_cmdshell”, not anything useful that might help to identify what went wrong.
So we have to go to Task Manager on the server. On the “Process Details” page, you can select which detail columns you want to see. We want to see the Command Line, as that’ll tell us if it’s some manually-launched batch job that’s failed or something else going wrong.
An alternative to using the Task Manager is to open ProcMon, part of the Sysinternals toolset. It takes a bit of getting used to, but is quite powerful once you know its ins and outs.
Abdul Majed Raja shows how to call Python from R and build plots using the Seaborn Python package:
The reticulate package provides a comprehensive set of tools for interoperability between Python and R. The package includes facilities for:
- Calling Python from R in a variety of ways including R Markdown, sourcing Python scripts, importing Python modules, and using Python interactively within an R session.
- Translation between R and Python objects (for example, between R and Pandas data frames, or between R matrices and NumPy arrays).
- Flexible binding to different versions of Python including virtual environments and Conda environments.
Reticulate embeds a Python session within your R session, enabling seamless, high-performance interoperability.
The more common use of reticulate I’ve seen is running TensorFlow neural networks from R.