The coordinate system can be manipulated by adding one of ggplot’s different coordinate systems. When you imagine a coordinate system, you are most likely thinking of a Cartesian one. The Cartesian coordinate system combines the x and y dimensions orthogonally and is ggplot’s default (coord_cartesian).
There are also several variations of the familiar Cartesian coordinate system in ggplot, namely coord_fixed, coord_flip and coord_trans. For all of them, the displayed section of the data can be specified by defining the limits of the x (xlim =) and y (ylim =) axes. This allows you to “zoom in” or “zoom out” of a plot. It is a great advantage that all manipulations of the coordinate system only alter the depiction of the data, not the data itself.
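As a minimal sketch of the variations above (using the built-in mtcars data set):

```r
library(ggplot2)

p <- ggplot(mtcars, aes(x = wt, y = mpg)) + geom_point()

p + coord_cartesian()                # the default Cartesian system
p + coord_fixed(ratio = 1)           # fixed aspect ratio between the axes
p + coord_flip()                     # swap the x and y axes
p + coord_trans(x = "log10")         # transform the axis after statistics are computed
p + coord_cartesian(xlim = c(2, 4))  # "zoom in" without touching the data
```

Note that zooming via coord_cartesian(xlim = …) keeps every observation; setting limits on the scale instead would drop out-of-range points before any statistics are computed, which is exactly the depiction-versus-data distinction the excerpt makes.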
I tend to avoid polar coordinates, but that’s mostly because I don’t work in a space which benefits from them.
This results in one row per ride and visualises pretty well in SSMS. If you are familiar with the geography of London, you can make out the River Thames toward the centre of the image and Regent’s Park towards the top left:
This could be overlaid on a shape file of London or a map from another provider such as Google Maps or Mapbox.
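To sketch the idea in T-SQL (the table and column names here are hypothetical, not from the original post), a geography column of points can be created and pulled apart like this:

```sql
-- Hypothetical table of cycle hire rides, each with a start point
CREATE TABLE dbo.Rides
(
    RideID     INT IDENTITY PRIMARY KEY,
    StartPoint GEOGRAPHY
);

-- geography::Point takes latitude, longitude, and an SRID (4326 = WGS 84)
INSERT INTO dbo.Rides (StartPoint)
VALUES (geography::Point(51.5074, -0.1278, 4326));

-- Selecting a geography column in SSMS lights up the Spatial Results tab;
-- the coordinates can also come back as plain numeric columns
SELECT RideID,
       StartPoint.Lat  AS Latitude,
       StartPoint.Long AS Longitude
FROM dbo.Rides;
```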
However, when you try to load the dataset into Power BI, you find that Power BI does not natively support Geography data types. There is an idea you can vote on here to get them supported: https://ideas.powerbi.com/forums/265200-power-bi-ideas/suggestions/12257955-support-sql-server-geometry-geography-data-types-i
Hit up that idea link if you want to see geography type support within Power BI.
In the current Azure SQL Database Managed Instance (MI) preview, when customers create a new instance, they can allocate a certain number of CPU vCores and a certain amount of disk storage space for the instance. However, there is no explicit configuration option for the amount of memory allocated to the instance, because on MI, memory allocation is proportional to the number of vCores used.
How can a customer determine the actual amount of memory their MI instance can use, in GB? The answer is less obvious than it may seem. Using the traditional SQL Server methods will not provide the right answer on MI. In this article, we will go over the technical details of CPU and memory allocation on MI, and describe the correct way to answer this question.
The information and behavior described in this article are as of the time of writing (April 2018). Some aspects of MI behavior, including the visibility of certain compute resource allocations, may be temporary and will likely change as MI progresses from the current preview to general availability and beyond. Nevertheless, customers using MI in preview will find that this article answers some of the common questions about MI resource allocation.
What you see in SQL Server Management Studio is true, but it is also not the whole picture.
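For reference, this is the traditional approach the article is warning about; on a regular SQL Server it answers the question, but on MI it reports the underlying machine rather than what the instance is actually allowed to use:

```sql
-- The usual way to check server memory on a regular SQL Server.
-- On Managed Instance this reflects the host, not the instance's
-- actual memory limit, so it will not give the right answer.
SELECT physical_memory_kb / 1024.0 / 1024.0 AS physical_memory_gb
FROM sys.dm_os_sys_info;
```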
Sometimes you want to connect to a report server instance using Management Studio, for example to create a new security role or modify an existing one. Recently I tried to log into our newly installed Power BI Report Server (March 2018 edition). I was greeted with the following error:
The Reporting Services instance could not be found.
Read on to see how to solve this problem.
Been working on monitoring. For some reason, when you tell Aurora to send error logs to CloudWatch, all it sends are the audit logs, which will tell you that code has changed, etc., but doesn’t (!?!?!??!!!) actually put your error logs in CloudWatch. I don’t understand it, so I built this process to look through the logs and return the data. The next step would be to format it and either upload it to CloudWatch manually, log it, or send email.
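I haven’t reproduced the linked script here, but a minimal sketch of the same idea with boto3 (the instance identifier you pass in is whatever your Aurora instance is called) might look like:

```python
def non_audit_logs(log_file_names):
    """Keep the actual error logs, dropping the audit logs."""
    return [name for name in log_file_names if "audit" not in name.lower()]


def fetch_error_logs(instance_id):
    """Pull error-log contents for an Aurora instance via the RDS API."""
    import boto3  # imported here so the filter above stays dependency-free

    rds = boto3.client("rds")
    described = rds.describe_db_log_files(DBInstanceIdentifier=instance_id)
    names = [f["LogFileName"] for f in described["DescribeDBLogFiles"]]
    return {
        name: rds.download_db_log_file_portion(
            DBInstanceIdentifier=instance_id, LogFileName=name
        ).get("LogFileData", "")
        for name in non_audit_logs(names)
    }
```

From here you could format the result and push it to CloudWatch Logs yourself, or log it, or send it by email, as the post suggests.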
Click through for the script.
The query plan shows us that we got a Trivial Plan, and that Simple Parameterization was attempted (as shown by the 100000 literal turning into the @1 parameter).
Simple Parameterization is only attempted in a Trivial Plan.
The key word here is attempted.
Normally, when you spot the change to your query string, you go into the properties, and if you see both (and it has to be both) a Parameter Compiled Value and a Parameter Runtime Value, you’ve got simple parameterization going on.
Or do you?
Notice the final property on the sheet, StatementParameterizationType. Honestly, I never really paid attention to that property. I knew what kind of parameterization I was seeing. I’m not running Forced Parameterization. This isn’t a parameterized query. It’s Simple Parameterization. Of course it is. All the keys are there. Change to the code. Parameter List values. Done.
Narrator voice: it wasn’t done.
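To reproduce the behavior described above (the table name here is hypothetical), a simple single-table query qualifies for a Trivial Plan, and the literal gets swapped for a parameter in the statement text:

```sql
-- A single-table query cheap enough to get a Trivial Plan;
-- the optimizer attempts Simple Parameterization on the literal
SELECT *
FROM dbo.Posts
WHERE Id < 100000;

-- In the plan, the query text then shows up along the lines of:
-- (@1 int) SELECT * FROM [dbo].[Posts] WHERE [Id] < @1
```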
What can you query from ADLS?
You can connect to the data stored in Azure Data Lake Store. What you *cannot* connect to currently is the data stored in the Catalog tables/views/stored procedures within Azure Data Lake Analytics (hopefully connectivity to the ADLA Catalog objects from tools other than U-SQL is available soon).
You’re not sending a U-SQL query here. Rather, you’re sending a web API request to an endpoint.
With an ADLS data source, you have to import the data into Power BI Desktop. There is no option for DirectQuery.
In other words, data that you’ve already prepped using U-SQL and want to display to the outside world. Click through for a demonstration as well as additional helpful information.
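In Power Query terms, the connection described above boils down to pointing the DataLake.Contents function at the store’s URL (the account name and file path below are made up for illustration):

```
let
    // Import only; DirectQuery is not available for this source
    Source = DataLake.Contents("adl://myadlsaccount.azuredatalakestore.net/output"),
    // Navigate to a file that U-SQL has already produced
    File = Source{[Name = "results.csv"]}[Content],
    Imported = Csv.Document(File)
in
    Imported
```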