Modern microservices are all about making systems event-driven: instead of making remote requests and waiting for responses (services and components calling each other and telling each other what to do), we can send notifications to related microservices when an event occurs.
These events are facts about the business: an ATM or online transaction, a new log entry, or a customer registering for a new mobile plan. They are the data points organizations collect to build their datasets. The good thing is, we can store these events in the very same infrastructure we use to broadcast them: Apache Kafka. Even better, we can process them in that same infrastructure with stream processing applications. This means our applications and systems are linked via a central data pipeline that is capable of real-time data broadcast and processing, with all data sources shared through it.
Read the whole thing.
Cross Validation explained in one simple picture. The method shown here is k-fold cross validation, where the data is split into k folds (in this example, 5 folds). Blue balls represent training data; 1/k (i.e., 1/5) of the balls are held back for model testing.
Monte Carlo cross validation works the same way, except that the balls would be chosen with replacement. In other words, it would be possible for a ball to appear in more than one sample.
You’ll have to click through for the picture.
The structure() function is a simple yet powerful function that returns a given object with the specified attributes set. It is part of base R, so there is no need to load any additional package. And since the function dates back to the S language, it has been in base R from the earliest versions, making it both backward and forward compatible.
Read on to see how you can create a matrix or data frame using this function, as well as the additional details you can save.
Like most tricks, this has a specific use case, but can be quite effective when you spot it.
I’m going to assume you have a vague understanding of parameter sniffing with stored procedures going into this. If you don’t, the post may not make a lot of sense.
Or, heck, maybe it’ll give you a vague understanding of parameter sniffing in stored procedures.
This was new to me.
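For the truly quick version: SQL Server compiles and caches a stored procedure's execution plan based on the parameter values of the first call it sees, and later calls reuse that cached plan. Here is a minimal, hypothetical sketch of where that bites; the object names and data skew are made up for illustration.

```sql
-- Hypothetical sketch of parameter sniffing; dbo.Orders and the data skew
-- are made up for illustration.
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID int
AS
BEGIN
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
END
GO

-- Suppose customer 1 has three orders and customer 2 has three million.
-- Whichever call runs first gets its plan compiled, cached, and reused by
-- the other call: great for one of them, potentially terrible for the other.
EXEC dbo.GetOrdersByCustomer @CustomerID = 1;
EXEC dbo.GetOrdersByCustomer @CustomerID = 2;
```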
The Power BI Performance Analyzer is a feature included in the May 2019 release of Power BI Desktop that simplifies the way you can collect the DAX queries generated by Power BI. You can use DAX Studio to capture them (as described in Capturing Power BI queries using DAX Studio), but the Performance Analyzer integrated in Power BI is simpler and provides a few insights into the time consumed by other activities, such as the rendering time of each visual.
You can enable the Power BI Performance Analyzer by clicking the Performance Analyzer checkbox in the View ribbon of Power BI Desktop.
Read the whole thing.
This has come up a few times recently. I find it rather fascinating, and I can never seem to remember how to do it properly... so, in other words, it's a perfect subject for a blog post.
Basically, you can use VALUES to create a table within a query. I've seen it done in a number of places, mostly when demoing something where you want some data but don't want to actually create a table. I've also seen it used to create a numbers table, generate test data, and so on. Really, it fits any case where you want a short list of values but don't want to create an actual (or even temporary) table to store them.
Click through for examples of how to construct and use this virtual table as a quick replacement for a temporary table or table variable.
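In the meantime, here is a minimal sketch of the shape this takes; the names and values are made up.

```sql
-- A minimal sketch: VALUES as an inline, virtual table you can alias and
-- query like any other table. Names and values are made up for illustration.
SELECT v.AnimalID, v.AnimalName
FROM (VALUES
        (1, N'Cat'),
        (2, N'Dog'),
        (3, N'Fish')
     ) AS v (AnimalID, AnimalName)
WHERE v.AnimalName <> N'Fish';
```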
There are much more sophisticated ways to get this done using Dockerfiles. However, this illustrates the point quite simply: you can customize your servers and then reuse those customizations rather than re-customizing every time. Again, this is just a small slice of why containers are so powerful.
This method is great when you want to build out a sample data set, like when you’re running through automated testing and want to start from the same known point each time.
You would logically think that the minimum setting would be zero seconds, and technically you would be correct. However, 0 does not mean 0 seconds: when this is set to 0, events stay in the buffer until the buffer becomes full. This is the same as setting the “Unlimited” option you see below. Given this, the true minimum is one second.
Read on to see what it does and why it can be important.
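From the description (events held in the buffer, an “Unlimited” option, a value measured in seconds), I'm assuming the setting in question is an Extended Events session's MAX_DISPATCH_LATENCY. On that assumption, here is a hedged sketch of setting it explicitly; the session name and event choice are placeholders.

```sql
-- A hedged sketch, assuming the setting under discussion is MAX_DISPATCH_LATENCY.
-- The session name and event are placeholders.
CREATE EVENT SESSION DispatchLatencyDemo ON SERVER
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.event_file (SET filename = N'DispatchLatencyDemo.xel')
WITH (MAX_DISPATCH_LATENCY = 1 SECONDS);  -- 0 SECONDS is treated as unlimited
GO
```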
As you slide the IOPS slider to the right, you will see the number of data disks, the storage size, and the throughput all increase. You will be limited to the maximum number of IOPS and disks supported by that instance size. You can see in the screenshot below that I am able to go as high as 80,000 IOPS when provisioning storage for a Standard E64-16s_v3 instance.
It sounds like they did a pretty good job of things there.