JupyterLab uses a web-based UI that’s akin to the tab-and-panel interface used in IDEs like Visual Studio or Eclipse. Notebooks, command-line consoles, code editors, language references, and many more items can be arranged in various combinations, powered by the PhosphorJS framework.
“The entire JupyterLab [project] is built as a collection of plugins that talk to kernels for code execution and that can communicate with one another,” the developers wrote. “We hope the community will develop many more plugins for new use cases that go far beyond the basic system.”
It looks like they’re making major changes to keep up with Zeppelin on the back end. The biggest advantage Jupyter had for me over Zeppelin was its installation simplicity, so I hope they keep it just as easy as installing Anaconda and then loading JupyterLab.
First, you have to read the file you want to copy into a SQL Server database, which means choosing a database to perform that action. It can be a test database, a new database you create for the purpose, or even TempDB. There are only two requirements for the database:
– It must not be a production database;
– It should have enough space to accommodate the file you want to copy.
The idea is that if the database engine’s service account has rights to a file that you don’t have permission to access, you can bulk copy the contents in as a binary blob and then use bcp to write the results back out to your local system. Sure, it becomes your company’s most expensive file copy tool, but I love the mad ingeniousness behind it.
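A minimal sketch of the trick. The file path, server name, and scratch database (Test) are all hypothetical placeholders, and the bcp step assumes a format file that declares the column as binary with no length prefix:

```sql
-- Hedged sketch, not the linked author's exact script.
USE Test;
GO
-- 1. Bulk copy the file in as a single binary blob. OPENROWSET(BULK ...)
--    runs under the database engine's service account, not your login.
CREATE TABLE dbo.FileCopy (FileContents varbinary(max));
INSERT INTO dbo.FileCopy (FileContents)
SELECT BulkColumn
FROM OPENROWSET(BULK N'C:\Secured\TargetFile.bin', SINGLE_BLOB) AS b;
GO
-- 2. From a command prompt, write the blob out somewhere you can reach,
--    using bcp queryout (blob.fmt is a hypothetical format file marking
--    the column as SQLBINARY with a zero-length prefix):
-- bcp "SELECT FileContents FROM Test.dbo.FileCopy" queryout C:\Temp\Copied.bin -S YourServer -T -f blob.fmt
```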
The Aster Plot takes a category that divides the chart, plus up to two measures:
– The first measure controls the depth of each section
– The second measure controls the width of each section
I have to admit that I’m not a fan of the Aster Plot. It has all the disadvantages of pie and torus charts (specifically, that humans have a hard time discerning differences in angles) while adding complexity, as you now have to compare across a second dimension as well.
It gives you a map of how to manage your security as you move into the cloud. Note: one of the main points is that your on-premises security is just as important and has to be managed with, and as a part of, your cloud security.
Now, if you are like me and want more than just dry reading, they also provide a link to a Microsoft Virtual Academy training course called Security in a Cloud-Enabled World, which follows this roadmap and provides more detail and guidance.
Read the whole thing.
Before I go into detail, I want to give full kudos to Ola Hallengren (Website | @olahallengren). He has spent a lot of his time building a SQL Server Maintenance Solution that is completely free for everyone to use, and he did such an excellent job that a lot of companies (including some huge ones) use his solution to run maintenance tasks on their databases.
None of the scripts below were written by me; I have made only small changes in order to make things clearer when the solution is deployed to an environment. The original scripts can be downloaded via the download page on Ola’s website.
Most of the to-dos are the same between on-premises and Azure SQL DB, but some of the implementation steps are a bit different. This is worth checking out if you have any Azure SQL Database instances.
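To give a flavor of what Ola’s solution looks like in use, here is an illustrative call to his IndexOptimize procedure. The thresholds and action choices below are examples, not recommendations, and assume the solution is already installed in the database:

```sql
-- Illustrative call to Ola Hallengren's IndexOptimize procedure;
-- the fragmentation thresholds here are examples, not recommendations.
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,                 -- below level 1: do nothing
    @FragmentationMedium = 'INDEX_REORGANIZE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,                 -- percent
    @FragmentationLevel2 = 30,                -- percent
    @UpdateStatistics = 'ALL';
```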
I’ve reproduced Sharon’s code and charts below. I did make a couple of tweaks to the code, though. I added a call to checkpoint(“2016-08-22”) which, if you’ve saved the code to a file, will install all the necessary packages for you. (I also verified that the code runs with package versions as of today’s date, and if you’re trying out this code at a later time it will continue to do so, thanks to checkpoint.) I also modified the data download code to make it work more easily on Windows. Here are the charts and code.
It’s really easy to get basic visualizations within R, and these are better than basic visualizations.
This extension method will convert a DataTable into an AstTableNode.
The data type mapping is not complete, though.
The code itself is straightforward; it’s converting and mapping. Check it out.
I recently had the need to compare a “Gold” or “Master” copy of our database to the 300 client copies of that database that exist in our Production environment. I’m not alone…many fellow DBAs have had the same need. Google searches for this confirm it. This is for an upcoming upgrade to the application that will need post-upgrade comparison/verification.
There are third-party tools that do SQL comparisons…my particular favorite is aptly named SQL Compare, from Red Gate. I’ve been using it off and on for 10 years. I don’t know if it can be set up to hit more than one database at a time. The other issue is that I don’t have a copy here.
Microsoft’s SQL Server Data Tools will also do this within Visual Studio. Still one database at a time. I forget where, but someone pointed me to the fact that SSDT uses SQLPackage.exe under the hood to do the work. I figure if I can run it at a command line I can script out all of the databases. I’m not much of a DOS scripting guy, so everything that follows is just my hack version…but it works, and not just on my machine!
This is very useful if you work in an environment with multiple copies of databases—or even if you have dev, test, and prod versions of the same database.
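One common way to script this out without much DOS scripting is to let SQL Server generate the command lines for you: query sys.databases and build one SqlPackage.exe invocation per database. A sketch of that idea, where the dacpac path, server name, and output folder are hypothetical:

```sql
-- Sketch: generate one SqlPackage.exe DeployReport command per database,
-- comparing each against the "gold" dacpac. Paths and names are hypothetical.
SELECT 'SqlPackage.exe /Action:DeployReport'
     + ' /SourceFile:"C:\Gold\Master.dacpac"'
     + ' /TargetServerName:ProdServer'
     + ' /TargetDatabaseName:' + QUOTENAME(name)
     + ' /OutputPath:"C:\Reports\' + name + '.xml"'
FROM sys.databases
WHERE database_id > 4;  -- skip the system databases
```

Paste the result set into a batch file and you have a comparison report for every client database.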
In this blog post we describe how to tell a Kafka Streams application to reprocess its input data from scratch. This is actually a very common situation when you are implementing stream processing applications in practice, and it might be required for a number of reasons, including but not limited to: during development and testing, when addressing bugs in production, when doing A/B testing of algorithms and campaigns, when giving demos to customers or internal stakeholders, and so on.
The quick answer is that you can either do this manually (cumbersome and error-prone) or use the new application reset tool for Kafka Streams, which is an easy-to-use solution for the problem. The application reset tool is available starting with the upcoming Confluent Platform 3.0.1 and Apache Kafka 0.10.0.1.
In the first part of this post, we explain how to use the new application reset tool. In the second part, we discuss what is required for a proper (manual) reset of a Kafka Streams application. This part includes a deep dive into relevant Kafka Streams internals, namely internal topics, operator state, and offset commits. As you will see, these details make manually resetting an application a bit complex, hence the motivation to create an easy-to-use application reset tool.
Being able to reprocess streams is a critical part of the Kappa architecture, and this article is a nice overview of how to do that if you’re using Kafka Streams.
With CREATE permissions, this isn’t the case; there is a piece of the above template that isn’t needed, and it was quite easy to see why once I sat down and thought about it.
Specifically, it’s this bit: <On What>
I’m granting CREATE permissions; since I haven’t created anything, I can’t grant the permission on anything.
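In other words, the <On What> clause drops out entirely for CREATE-class permissions. A quick illustration (the user and table names are hypothetical):

```sql
-- Object-level permissions need a securable to attach to:
GRANT SELECT ON dbo.SomeTable TO SomeUser;  -- hypothetical names

-- CREATE permissions apply to the database as a whole, so there is
-- no ON clause -- the object doesn't exist yet:
GRANT CREATE TABLE TO SomeUser;
```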
I like this post for the direct reason (granting certain permissions doesn’t require specifying an object), but for the implicit point as well: we build up internal systems of rules and processes as we act on things. This inductive reasoning tends to work well for us in most scenarios, but at some point, our systems break down and we find out either that we need to incorporate edge cases into our system, or that we were actually focusing on an edge case the entire time.