The Connect item I have chosen to write about is an old one: getting IntelliSense for MDX in SQL Server Management Studio (SSMS). Despite being created back in 2009 by Jamie Thomson (b|l|t), it is still active, and the Analysis Services team publicly acknowledged at the time that they would consider the request for an upcoming release. 2009, still active… True story.
Read on for more details and be sure to join Jens's quixotic quest if you'd like to see MDX IntelliSense.
The sample script below shows how this is done. The sequence command is used to delete multiple partitions in a single transaction, similar to the batch command in XMLA. In this example we're only performing delete operations, but many different operations can be performed in sequence (and some in parallel).
Click through for a description of the process as well as a script to do the job.
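For a sense of the shape of the command, here is a minimal sketch of a TMSL sequence wrapping two partition deletes, sent with Invoke-ASCmd from the SqlServer PowerShell module. It is not the script from the post, and the server, database, table, and partition names are placeholders.

```powershell
# Sketch only: two TMSL delete operations wrapped in a single sequence,
# so both partitions are removed in one transaction. All names are made up.
Import-Module SqlServer

$tmsl = @'
{
  "sequence": {
    "operations": [
      { "delete": { "object": { "database": "SalesModel", "table": "FactSales", "partition": "FactSales 2014" } } },
      { "delete": { "object": { "database": "SalesModel", "table": "FactSales", "partition": "FactSales 2015" } } }
    ]
  }
}
'@

Invoke-ASCmd -Server "localhost\TABULAR" -Query $tmsl
```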
(2) Data Sources
From a single source such as a data warehouse. This is the most traditional path for BI development, and it still has a very valid place in many BI/analytics deployments. This scenario puts the data integration work in the ETL process that loads the data warehouse, which is the most appropriate place for it.
Directly from various systems. This can be done, but it works well only in specific cases – it definitely won't work well if there are a lot of highly normalized tables, or if there's not a straightforward way to relate the disparate data together. Trying to go directly to the source systems and skip an intermediary data warehouse puts the "integration" burden on the data source view in Analysis Services, so plan for plenty of testing time if you're going to try this route (i.e., it can be much harder, not easier). Note that this option only makes sense if the data is stored in Analysis Services, because it needs to be related together somehow (i.e., DirectQuery mode, discussed next in #3, won't work across more than one data source if a user tries to combine them, because the data is not inherently related).
If you’re thinking about Azure Analysis Services, this post is a good one.
Unfortunately this doesn't make hidden objects within the cube, such as measures or dimensions, visible again – it just makes the cube itself visible. However, if you're working on the Calculations tab of the Cube Editor in SSDT, it is possible to make all hidden objects visible, as I show here.
Read on for the command and watch out for that caveat.
I’ve just argued why Microsoft was obliged to include this functionality in SSAS v.next but in fact there are many positive reasons for doing this too. The most obvious one is to do with support for more data sources. At the moment SSAS Tabular supports a pretty good range of data sources, but the world of BI is getting more and more diverse and in order to stay relevant SSAS needs to support far more than it does today. By using Power Query/M as its data access mechanism, SSAS v.next will immediately support a much larger number of data sources and this number is going to keep on growing: any investment that Microsoft or third parties make for Power BI in this area will also benefit SSAS. Also, because Power Query/M can query and fold to more than just relational databases, I suspect that in the future this will allow for DirectQuery connections to many of these non-relational data sources too.
Read the whole thing.
I have not yet met a setup where applying compression was not an option. Obviously this comes with a CPU penalty while the backup executes, and it will affect the rest of the tasks running on the server (even if you have your data and backup directories on different drives). But in my experience, the impact is negligible.
This may not be the case with the encryption option, as it has a much larger footprint on the server. You should use it with some caution in production. Test on smaller subsets of the data if in doubt.
Another thing to keep in mind, as always when dealing with encryption: remember the password. There is no way of retrieving the data other than with the proper password.
My goal is to be able to rebuild any cube from the relational database, but even with that goal in mind, it is smart to have backups.
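If you script your backups with PowerShell, the compression and encryption options map to switches on the Backup-ASDatabase cmdlet. A minimal sketch, assuming the SqlServer module and placeholder server, database, and path names:

```powershell
# Compressed, password-protected SSAS backup; names and paths are placeholders.
Import-Module SqlServer

$password = Read-Host -Prompt "Backup password" -AsSecureString

Backup-ASDatabase -Server "localhost" -Name "SalesCube" `
    -BackupFile "D:\Backups\SalesCube.abf" `
    -ApplyCompression -AllowOverwrite `
    -FilePassword $password
```

Remember that the password supplied to -FilePassword is the only way back into that .abf file.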
I came across an interesting MDX challenge this week: within a cube's Date dimension, how do you show December twice, once where it belongs and again as the opening value for the following year? In other words, for each year I need to show Dec (prev yr), Jan, …, Nov, Dec.
Why? Well if you consider the following pivot chart, you can clearly see growth from Jan to Feb, Feb to Mar, etc., but it’s very difficult to see the growth between Dec and Jan.
The solution is easier than I would have expected.
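Not necessarily the approach from the post, but for illustration, one way to produce that layout for a single year in a plain MDX query (here wrapped in Invoke-ASCmd, and assuming the Adventure Works sample cube) is to union the prior December onto the year's own months:

```powershell
# Illustration only: Dec 2012 followed by Jan-Dec 2013 on rows.
# Server and database names are placeholders.
Import-Module SqlServer

$mdx = @'
SELECT
  [Measures].[Internet Sales Amount] ON COLUMNS,
  {
    [Date].[Calendar].[Month].&[2012]&[12],
    Descendants(
      [Date].[Calendar].[Calendar Year].&[2013],
      [Date].[Calendar].[Month]
    )
  } ON ROWS
FROM [Adventure Works]
'@

Invoke-ASCmd -Server "localhost" -Database "Adventure Works DW" -Query $mdx
```

Doing this for every year at once, inside the cube rather than in a one-off query, is the interesting part of the challenge.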
Frequently when developing updates to an SSAS cube I want to deploy my schema and process the dimensions. Sometimes several dimensions process successfully and then processing fails on one. At that point I go and correct the error, deploy the new schema, and then I want to process only the dimensions which did not process successfully the first time. Sometimes this is really easy, but if you have a large number of dimensions it can become cumbersome, since the only way to know which dimensions were processed successfully is to right-click each dimension one at a time and find out, or to have memorized which dimensions processed successfully on the earlier attempt. There are better ways, and of course, PowerShell is one of those options. The only problem is that, as things currently stand, PowerShell is not as easy as it could be; the Invoke-ProcessDimension cmdlet doesn't accept [direct] pipeline input. What is one to do when PowerShell isn't as easy as it could be? File a Connect item, of course!
Check out the Trello board. It’s been instrumental in helping Microsoft developers get the leverage they need to dedicate time to improving particular aspects of the product.
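In the meantime, the usual workaround is an explicit ForEach-Object loop rather than direct pipeline binding. A rough sketch, assuming the SqlServer module and made-up server, database, and dimension names:

```powershell
# Workaround sketch: loop over the dimensions that still need processing
# instead of piping them straight into Invoke-ProcessDimension.
Import-Module SqlServer

$remaining = "Dim Customer", "Dim Product", "Dim Geography"

$remaining | ForEach-Object {
    Invoke-ProcessDimension -Server "localhost" -Database "AdventureWorksDW" `
        -Name $_ -ProcessType ProcessFull
}
```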
In addition to being an “in-memory” technology, Analysis Services Tabular is also a “column-store” technology which means all the values in a table for a single column are stored together. As a result – and this is especially true for dimensional models – we are able to achieve very high compression ratios. On average, you can expect to see compression ratios anywhere from 10x-100x depending on the model, data types, and value distribution.
What this ultimately means is that your 2 TB data mart will likely require only between 20 GB (low end) and 200 GB (high end) of memory. That's pretty amazing – but it still leaves us with a fairly wide margin of uncertainty. To further reduce that uncertainty, you will want to take a representative sample from your source database, load it into a model on a server in your DEV environment, and calculate the compression factor.
Read the whole thing; Bill has several factors he considers when sizing a machine.
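The back-of-the-envelope math behind that estimate is simple enough to sketch; the compression factor below is a made-up stand-in for whatever your sampling exercise actually measures:

```powershell
# Rough sizing: divide source size by the measured compression factor.
# 2 TB at 10x-100x compression works out to roughly 20-200 GB of memory.
$sourceSizeGB      = 2048   # 2 TB data mart
$compressionFactor = 25     # placeholder: measure this from a representative sample
$estimatedMemoryGB = $sourceSizeGB / $compressionFactor
"Estimated model size: {0:N0} GB" -f $estimatedMemoryGB
```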
The Analysis Services product team explained to me that a user from a tenant which has never provisioned Azure Analysis Services cannot be added to another tenant's provisioned server. Put another way, because our Corporate tenant had never provisioned AAS, its users could not be added to the Development tenant's server via cross-tenant guest security.
One resolution for this is to provision an AAS server in a subscription associated with the Corporate tenant, and then immediately delete the service from the Corporate tenant. Doing that initial provisioning will do the magic behind the scenes and allow the tenant to be known to Azure Analysis Services. Then we can proceed to provision it for real in the Development tenant.
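If you would rather script that provision-then-delete step than click through the portal, something along these lines should work with the AzureRM.AnalysisServices cmdlets; the resource group, server name, location, SKU, and administrator account are all placeholders:

```powershell
# Sketch of the "provision once, then delete" workaround. Sign in with an
# account in the Corporate tenant; all names below are made up, and the
# resource group is assumed to exist already.
Login-AzureRmAccount

New-AzureRmAnalysisServicesServer -ResourceGroupName "rg-aas-temp" `
    -Name "aastempcorp" -Location "South Central US" -Sku "D1" `
    -Administrator "admin@corporate.example.com"

# Once provisioning completes, the tenant is known to Azure Analysis Services
# and the throwaway server can be removed.
Remove-AzureRmAnalysisServicesServer -ResourceGroupName "rg-aas-temp" -Name "aastempcorp"
```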
Read the whole thing.