While R’s base graphics library is almost limitlessly flexible when it comes to creating static graphics and data visualizations, Web-based technologies like D3 and WebGL open up new horizons in high-resolution, resizable, and interactive charts. Graphics built with these libraries can easily be embedded in a webpage, can be dynamically resized while maintaining readable fonts and clear lines, and can include interactive features like hover-over data tips or movable components. And thanks to htmlwidgets for R, you can easily create a variety of such charts using R data and functions, explore them in an interactive R session, and include them in Web-based applications for others to experience.
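As a small illustration of how little code this takes — a sketch assuming the dygraphs package (one of several htmlwidgets-based libraries) is installed; `ldeaths` is a built-in R time series:

```r
# Minimal htmlwidgets example using the dygraphs package (assumed installed).
library(dygraphs)

# Build an interactive time-series chart from a built-in dataset.
d <- dygraph(ldeaths, main = "Monthly UK Lung Disease Deaths")
d <- dyRangeSelector(d)  # adds an interactive zoom/pan control

# Printing the widget in an interactive session renders it in the viewer;
# saveWidget() writes it out as a standalone HTML page.
htmlwidgets::saveWidget(d, "deaths.html", selfcontained = TRUE)
```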
There are some nice widgets in this set.
Project structures often grow organically to suit people’s needs, leading to different structures within a single team. You can consider yourself lucky if, at some point, you or someone on your team finds an obscure blog post with a somewhat sane structure and enforces it across the team.
Many years ago I stumbled upon ProjectTemplate for R. Since then I’ve tried to get people to use a good project structure. More recently DrivenData (what’s in a name?) released their more generic Cookiecutter Data Science.
The main philosophies of those projects are:
A consistent and well-organized structure allows people to collaborate more easily.
Your analyses should be reproducible and your structure should enable that.
A project starts from raw data that should never be edited; consider raw data immutable and edit only derived sources.
This is a set of prescriptions and focuses on the phase before the project actually kicks off.
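As an illustration (not an exact copy of either template), the kind of layout these projects prescribe looks roughly like this:

```
project/
├── README.md          <- how to reproduce the analysis
├── data/
│   ├── raw/           <- original, immutable data dumps
│   └── processed/     <- derived data; safe to delete and regenerate
├── src/               <- analysis and munging code
├── reports/           <- generated output (figures, documents)
└── requirements.txt   <- pinned dependencies for reproducibility
```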
My results are as below. I have 1,000 records in the table. This tells me that I have 82 occurrences of the 0-5 age cohort; 8.2% of my dataset is from this bracket; 82 is also the cumulative frequency, since this is the first record, and 8.2 the cumulative percent. For the next bracket, 06-12, I have 175 occurrences (17.5%), 257 cumulative occurrences of ages 12 and below, and 25.7% of my data falls in these first two brackets. And so on.
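A sketch of how such a table could be produced in T-SQL with window functions (the table and column names here are assumptions, not the post’s own):

```sql
-- Hypothetical table dbo.People(AgeCohort); windowed SUMs over the grouped
-- counts give the running (cumulative) figures without a self-join.
SELECT
    AgeCohort,
    COUNT(*) AS Frequency,
    100.0 * COUNT(*) / SUM(COUNT(*)) OVER () AS Pct,
    SUM(COUNT(*)) OVER (ORDER BY AgeCohort) AS CumFrequency,
    100.0 * SUM(COUNT(*)) OVER (ORDER BY AgeCohort)
          / SUM(COUNT(*)) OVER () AS CumPct
FROM dbo.People
GROUP BY AgeCohort
ORDER BY AgeCohort;
```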
Click through for the T-SQL and R scripts.
You can mirror datasets with Falcon. Mirroring is a very useful option for enterprises and is well-documented. This is something that you may want to get validated by a third party. See the following resources:
Tim shows several recovery options, making this useful reading if you use Hadoop as a source system for anything (or if you can’t afford for it to be down for the two or three days it might take to recover data).
Trace flag 1118 addresses contention that can exist on a particular type of page in a database, the SGAM page. This trace flag typically benefits customers that make heavy use of the tempdb system database. In SQL Server 2016, you control this behavior using the MIXED_PAGE_ALLOCATION database option, and there is no need for TF 1118.
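The two approaches look like this (the database name is a placeholder):

```sql
-- SQL Server 2016+: control mixed-extent allocation per database
-- (tempdb already defaults to uniform extents in 2016).
ALTER DATABASE [MyDatabase] SET MIXED_PAGE_ALLOCATION OFF;

-- Earlier versions: enable TF 1118 instance-wide (also add -T1118 as a
-- startup parameter so the setting survives a restart).
DBCC TRACEON (1118, -1);
```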
Trace flag 3023 is used to enable the CHECKSUM option, by default, for all backups taken on an instance. With this option enabled, page checksums are validated during a backup, and a checksum for the entire backup is generated. Starting in SQL Server 2014, this option can be set instance-wide through sp_configure (‘backup checksum default’).
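For reference, the instance-wide setting and the per-backup option look like this (database name and path are placeholders):

```sql
-- Enable backup checksums instance-wide (SQL Server 2014 and later).
EXEC sp_configure 'backup checksum default', 1;
RECONFIGURE;

-- Or request the checksum on an individual backup.
BACKUP DATABASE [MyDatabase]
  TO DISK = N'D:\Backups\MyDatabase.bak'
  WITH CHECKSUM;
```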
The last trace flag, 3226, prevents the writing of successful backup messages to the SQL Server ERRORLOG. Information about successful backups is still written to msdb and can be queried using T-SQL. For servers with multiple databases and regular transaction log backups, enabling this option means the ERRORLOG is no longer bloated with BACKUP DATABASE and Database backed up messages. As a DBA, this is a good thing: when I look in my ERRORLOG, I really only want to see errors; I don’t want to scroll through hundreds or thousands of entries about successful backups.
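Enabling the flag and pulling the history back out of msdb is a two-liner:

```sql
-- Suppress successful-backup messages in the ERRORLOG, instance-wide.
DBCC TRACEON (3226, -1);

-- Backup history is still recorded in msdb and remains queryable.
SELECT database_name, backup_finish_date, type
FROM msdb.dbo.backupset
ORDER BY backup_finish_date DESC;
```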
Click through for more useful information, including a list of officially supported trace flags.
PFS pages occur every 8,088 pages in every data file, and each stores a byte of information about itself and each of the following 8,087 pages. The most important piece of information stored is whether a page is allocated (in use) or not. You can read more about PFS pages and the other per-database allocation bitmaps in this blog post.
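That fixed interval makes it easy to work out which PFS page covers any given page: integer-divide the page ID by 8,088 and multiply back. For example:

```sql
-- Which PFS page covers page 20000 of a file? Integer division in T-SQL
-- truncates, so (20000 / 8088) = 2 and the covering PFS page is 16176.
SELECT (20000 / 8088) * 8088 AS pfs_page_id;
```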
So why can’t they be repaired by DBCC CHECKDB, when all the other per-database allocation bitmaps can?
The answer is that the is-this-page-allocated-or-not information is not duplicated anywhere else in the database, and it’s impossible to reconstruct it in all cases.
In case you’re not particularly familiar with PFS pages, Paul has a blog post from 2006 describing GAM, SGAM, and PFS pages.
Meagan Longoria has a multi-part series on using Biml to script Azure Data Factory tasks to migrate data from an on-prem SQL Server instance to Azure Data Lake Store. Here’s part 1:
My Azure Data Factory is made up of the following components:
Gateway – allows ADF to retrieve data from an on-premises data source
Linked Services – define the connection string and other connection properties for each source and destination
Datasets – define a pointer to the data you want to process, sometimes defining the schema of the input and output data
Pipelines – combine the datasets and activities and define an execution schedule
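For context on what Biml is generating here, an ADF (v1) linked service for an on-premises SQL Server routed through a gateway is a small JSON document along these lines (names and connection details are placeholders):

```json
{
  "name": "OnPremSqlLinkedService",
  "properties": {
    "type": "OnPremisesSqlServer",
    "typeProperties": {
      "connectionString": "Data Source=MYSERVER;Initial Catalog=MyDb;Integrated Security=True",
      "gatewayName": "MyGateway"
    }
  }
}
```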
Click through for the Biml.
Of course, we can use any source for our users – a database, an Excel file, Active Directory, or even just typing them in.
We can use the Add-SQLLogin command from the sqlserver module to add our users as SQL Logins, but at present we cannot add them as database users and assign them to a role.
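That missing step can be covered with plain T-SQL (run via Invoke-Sqlcmd or similar); the login, database, and role names below are placeholders:

```sql
USE [MyDatabase];

-- Map the server login to a database user, then add it to a role.
CREATE USER [DOMAIN\SomeLogin] FOR LOGIN [DOMAIN\SomeLogin];
ALTER ROLE [db_datareader] ADD MEMBER [DOMAIN\SomeLogin];
```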
Rob includes a demo script as well, thereby making it even easier.
- Automation – Complete autopilot, no human interaction.
- Report job status – Accurate reporting in the event the job failed, the CHECKDB failed or the restore failed.
- Use dbatools cmdlets for restore and CHECKDB operations
- Use SQL Agent Job automation, logging and alerting
So let’s walk through this implementation together.
You won’t get a turnkey solution from this blog post, but you will get a good process to follow.
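A minimal sketch of the core restore-and-check step, assuming the dbatools module (instance names are placeholders; alerting and logging are left to the Agent job):

```powershell
# Test-DbaLastBackup restores the most recent backup of each database from
# the source instance onto a test instance, runs DBCC CHECKDB against the
# restored copy, and then drops it.
$results = Test-DbaLastBackup -SqlInstance 'PRODSQL01' -Destination 'TESTSQL01'

# Throw (failing the SQL Agent job step, so its alert fires) if either the
# restore or the CHECKDB did not succeed for any database.
$failed = $results | Where-Object {
    $_.RestoreResult -ne 'Success' -or $_.DbccResult -ne 'Success'
}
if ($failed) { throw "Restore/CHECKDB failures: $($failed.Database -join ', ')" }
```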