Today marks the last day of PASS Summit for 2016. Unfortunately, the curation staff became wrapped up in the excesses of this conference and will wake up in a couple hours wondering why the alarm clock is going off so early. Curation will continue this upcoming Monday.
Tim Radney walks us through steps to migrate an on-prem database to Azure SQL Database:
When planning to migrate on-premises databases to V12, the size of the database is a huge factor in how long the migration will take. The export of the database, the transfer of the data, and the import will all increase in proportion to the size of the database.
Another big factor in the restore/import time when moving your databases to V12 is the performance tier you are restoring to. The restore/import process requires a lot of horsepower, so to help expedite your migration, you should consider restoring to a higher performance tier. When the database is online, you can easily and quickly drop down to a lesser tier that meets your daily performance needs. Being able to change performance tiers with a few mouse clicks is one of the big benefits of Azure SQL Database.
There are some design considerations for moving to Azure SQL Database, and once those are covered, Tim’s article helps with the actual migration process.
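If you script your migrations, the tier change can be done in T-SQL as well as in the portal. Here is a minimal sketch, assuming an illustrative database name of MigratedDb and a scale-down from Premium to Standard; adjust the edition and service objective to your needs:

    -- Scale the database down once the import finishes; the change is asynchronous.
    ALTER DATABASE [MigratedDb]
    MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S2');

    -- Check the currently applied tier.
    SELECT DATABASEPROPERTYEX('MigratedDb', 'Edition') AS edition,
           DATABASEPROPERTYEX('MigratedDb', 'ServiceObjective') AS service_objective;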
Robert Sheldon shows how to import Excel data into Power BI:
We have two basic approaches for bringing Excel data into Power BI Desktop: the Get Data process and the import process. For the most part, we’ll use the Get Data process to bring in spreadsheets and use the import process to pull in the non-spreadsheet components.
Where things get a little tricky is if we have a spreadsheet table based on a query. (No doubt there are other tricky areas that I’ve yet to discover.) You can use the import process to bring in the query, in which case you have to take the extra steps of creating and populating your table, or you can use the Get Data process to bring in either the table or query. If the query exists without an associated spreadsheet table, your only option is to import the Excel file.
There’s a pretty good chance that you’ve got important Excel spreadsheets somewhere in the organization, making this a valuable article.
The reality is that not all workloads are a good fit for the cloud. If you are running highly sustained workloads, then the cloud probably isn’t the right solution for your environment. The systems which work best in the cloud are the ones which can be converted to use the various PaaS services, not just the IaaS services. The PaaS services will usually provide a more cost-effective hosting solution, especially for workloads which vary over time; for example, ramping up during the day and becoming almost idle overnight.
Even if running in a PaaS environment isn’t an option, running in an IaaS environment may still be cost-effective. It all depends on how bursty the workload you plan on moving to the cloud is.
There are some good points here; check it out.
David Smith discusses a new service to test packages on multiple platforms:
If you’re developing a package for R to share with others — on CRAN, say — you’ll want to make sure it works for others. That means testing it on various platforms (Windows, Mac, Linux, and all the versions thereof), and on various versions of R (current, past, and future). But it’s likely you only have access to one platform, and installing and managing multiple R versions can be a pain.
R-hub, the online package-building service now in public beta, aims to solve this problem by making it easy to build and test your package on a variety of platforms and R versions. Using the rhub R package, you can, with a single command, upload your package to the cloud-based R-hub service and build and test your package on the current, prior, and in-development versions of R, using any or all of these platforms.
This looks like an interesting service for package developers and companies with a broad distribution of R installations.
Koen Verbeeck looks at setting up MDS and conquers some configuration file permission issues:
The error seemed quite clear: Cannot read configuration file due to insufficient permissions. Just to be sure, I added the user MDSAppPool – created in the MDS Configuration Manager for the MDS Application Pool – to the Administrators group on the machine. A brute-force solution, but since it’s on my own machine for demo purposes, I didn’t really care. Of course it didn’t work. Then I assigned full control permissions for the MDSAppPool user on the folder C:\Program Files\Microsoft SQL Server\130\Master Data Services. Didn’t work. Used the browser in Administrator mode. Also didn’t work. Checked IIS settings and discovered that Windows Authentication was not enabled. So I enabled it, but the error persisted. This is the point where it all starts to get frustrating. Adding MDSAppPool to the IIS_IUSRS group doesn’t work. Neither does giving that group full control on the MDS directory.
Read on for the solution.
Arun Sirpal looks at how row versioning information gets stored:
I like row versioning; see this link for more details: https://technet.microsoft.com/en-us/library/ms189122(v=sql.105).aspx
If your database is enabled for one of the isolation levels that uses row versioning and a row is updated, it will have 14 bytes added to it.
Click through for a demo and explanation.
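If you want to see the overhead for yourself, here is a minimal sketch (table and database names are illustrative) that compares average record size before and after enabling snapshot isolation and updating a row:

    -- Illustrative demo: watch a row grow by 14 bytes once versioning kicks in.
    CREATE TABLE dbo.VersionDemo (id INT PRIMARY KEY, val CHAR(100) NOT NULL);
    INSERT INTO dbo.VersionDemo (id, val) VALUES (1, 'before');

    -- Baseline record size (DETAILED mode is needed to populate this column).
    SELECT avg_record_size_in_bytes
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.VersionDemo'), NULL, NULL, 'DETAILED');

    -- Enable a row-versioning-based isolation level, then update the row.
    ALTER DATABASE CURRENT SET ALLOW_SNAPSHOT_ISOLATION ON;
    UPDATE dbo.VersionDemo SET val = 'after' WHERE id = 1;

    -- The updated row now carries the 14-byte versioning tag.
    SELECT avg_record_size_in_bytes
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.VersionDemo'), NULL, NULL, 'DETAILED');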
I have a post on restoring a database in Azure SQL Database:
You will need to select your restore point as well. In this case, I decided to restore back to midnight UTC on a particular date. Note that the dates are UTC rather than your local timezone!
After selecting your restore point, you pick the target server and can decide a couple of things. First, you can put this database into an elastic database pool, which makes cross-database connections a lot easier. Second, you can choose a different pricing tier. Because I only needed this database for a few minutes, keeping it at P2 Premium was fine; the total restore time meant that we spent less than a dollar bringing the data back to its pristine condition.
Be aware of the time for restoration; it can be very slow.
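Since the restore points are expressed in UTC, it can help to translate your local incident time first. Here is a small sketch using AT TIME ZONE (available in SQL Server 2016 and Azure SQL Database; the timestamp and time zone name are illustrative):

    -- Convert a local timestamp to the UTC value the restore blade expects.
    SELECT CONVERT(datetime2, '2016-10-28 00:00')
           AT TIME ZONE 'Eastern Standard Time'  -- interpret as local time
           AT TIME ZONE 'UTC' AS restore_point_utc;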
Meet Bhagdev reports that a new version of the SQL Server ODBC driver for Linux is available:
What’s new:
- Native Linux install experience: The driver can now be installed with apt-get (Ubuntu), yum (RedHat/CentOS), and zypper (SUSE). Instructions on how to do this are below.
- AlwaysOn Availability Groups (AG): The driver now supports transparent connections to AlwaysOn Availability Groups. The driver quickly discovers the current AlwaysOn topology of your server infrastructure and connects to the current active server transparently.
- TLS 1.2 support: The driver now supports TLS 1.2 connections to SQL Server.
These are some nice additions. None of them are groundbreaking, but they add up to a nice release. Click through for instructions on how to install the driver; it got a lot easier for supported platforms.
Paul Turley discusses a brand new announcement:
What am I most excited about as I prepare for the PASS Summit here in Seattle this week? A lot of things. In preparing for my session, which will be on Thursday at 1:30, I’ve found that by far the most popular and interesting topics are integration and tool choice. Today’s public announcement on the SSRS product team blog about on-premises Power BI integration with Reporting Services is really big news. It’s great to see two of the technologies I love working together. Whether in the cloud or on-premises, Power BI and Reporting Services can be used together.
It’s hard to overstate how huge this is. Plenty of companies want the reporting that Power BI offers, but have security or software policies in place which prevent Power BI adoption. Having it render through Reporting Services means that end users don’t need Power BI Desktop and that the data and reports remain entirely on-prem.