In this blog, I’m going to explain higher-order functions.
A higher-order function takes another function as a parameter or returns a function as a result.
This is possible because functions are first-class values in Scala. What does that mean?
It means that functions can be passed as arguments to other functions, and functions can return other functions.
The map function is a classic example of a higher order function.
Higher-order functions are one of the key components of functional programming and allow us to reason about programs in small chunks at a time.
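As a minimal sketch (names are my own, not from the post), here are both directions of "higher-order" in Scala — taking a function as a parameter and returning a function as a result — alongside the classic map example:

```scala
// HigherOrderDemo is a hypothetical name for illustration.
object HigherOrderDemo {
  // Takes a function as a parameter and applies it twice: higher-order.
  def applyTwice(f: Int => Int, x: Int): Int = f(f(x))

  // Returns a function as its result: also higher-order.
  def addN(n: Int): Int => Int = (x: Int) => x + n

  def main(args: Array[String]): Unit = {
    // map is itself a higher-order function: it takes the doubling function.
    println(List(1, 2, 3).map(_ * 2)) // List(2, 4, 6)
    println(applyTwice(_ + 1, 5))     // 7
    println(addN(10)(5))              // 15
  }
}
```

Each piece can be understood and tested in isolation, which is exactly the "reason in small chunks" benefit.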
Given the shortcomings of monitoring and testing, we should shift focus to building observable systems. This means treating observability of system behaviour as a primary feature of the system being built, and integrating this feature into how we design, build, test, and maintain our systems. This also means acknowledging that the ease with which we can debug our production environment will be a key indicator of system reliability, scalability, and ultimately customer experience. Designing a system to be observable requires effort from three disciplines of software development: development, testing, and operations. None of these disciplines is more important than the others, and the sum of them is greater than the value of the individual parts. Let’s take some time to look at each discipline in more detail, with a focus on observability.
My struggle has never been with the concept, but rather with getting the implementation details right. “Make everything observable” is great until you run out of disk space because you’re logging everything.
The Need to Vacuum
As Redshift does not reclaim free space automatically, update and delete operations can frequently lead to table growth. Equally, it’s important that, as new entries are added, the data is maintained in sorted sequence.
The VACUUM command re-sorts rows and reclaims disk space left behind by DELETE and UPDATE operations. Although it won’t block other processes, it can be a resource-intensive operation, especially for data stored using interleaved sort keys.
It should be run periodically to ensure consistent performance and to reduce disk usage.
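As a sketch of what such periodic maintenance might look like (the table name is hypothetical; the post does not prescribe specific commands):

```sql
-- Hypothetical table; run during a quiet window, one VACUUM at a time per cluster.
VACUUM FULL sales;        -- reclaim deleted space and re-sort rows
-- or, more targeted:
VACUUM DELETE ONLY sales; -- reclaim space without re-sorting
VACUUM REINDEX sales;     -- rebuild interleaved sort key information
ANALYZE sales;            -- refresh planner statistics afterwards
```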
Some of this is good Postgres advice; some of it is good MPP advice (and serves well, for example, when dealing with Azure SQL Data Warehouse); the rest is Redshift-specific.
Let’s look at the application domain in more detail. In the previous blog series on Kongo, a Kafka-focused IoT logistics application, we persisted business “violations” to Cassandra for future use using Kafka Connect. For example, we could have used the data in Cassandra to check and certify that a delivery was free of violations across its complete storage and transportation chain.
An appropriate scenario for a Platform application involving Kafka and Cassandra has the following characteristics:
Large volumes of streaming data are ingested into Kafka (at variable rates)
Data is sent to Cassandra for long-term persistence
Streams processing is triggered by the incoming events in real-time
Historic data is requested from Cassandra
Historic data is retrieved from Cassandra
Historic data is processed, and
A result is produced.
It looks like he’s focusing on changepoint detection, which is one of several good techniques for generalized anomaly detection. I’ll be interested in following this series.
I have a huge love for the DBCC CLONEDATABASE command – since its original release in SQL Server 2014, it has been backported to every SQL Server version starting with SQL Server 2012, and it has been constantly improved in the Service Packs and Cumulative Updates.
This blog post focuses on a Database Cloning improvement in SQL Server 2019 that is already available in the public CTP 2.0: the possibility of automated statistics extraction for Columnstore Indexes.
Well, there was quite a significant problem with Columnstore Indexes previously – their statistics were not extracted into the cloned database unless you had created the statistics in the most recent step before database cloning.
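For reference, the command itself is a one-liner (database names here are hypothetical):

```sql
-- Creates a schema- and statistics-only copy of the source database; no user data is transferred.
DBCC CLONEDATABASE (WideWorldImporters, WideWorldImporters_Clone);
```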
Click through for more details and a comparison between SQL Server versions.
The authors conducted a series of microbenchmarks as follows:
scans with single predicates with varying selectivity to study the trade-off between the range scan of a B+ tree vs a columnstore scan
sort and group-by queries to study the benefit of the sort order supported by B+ trees (columnstores in SQL Server are not sorted)
update statements with varying numbers of updated rows to analyze the cost of updating the different index types
mixed workloads with different combinations of reads and updates
It’s interesting to read an academic paper covering the topic, particularly when you can confirm that it works well in practice too.
This will show you
- the UniqueTag, which enables you to run only that check if you wish
- AllTags, which shows which tags will include that check
- Config, which shows which configuration items can be set for this check
The trace flag checks require the app.sqlinstance configuration item, which is the list of SQL instances that the checks will run against. You can also specify the instances as a parameter for Invoke-DbcCheck.
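As a sketch (the instance names and the specific check tag are illustrative), setting the configuration and running a trace flag check might look like:

```powershell
# Hypothetical instance names; Set-DbcConfig and Invoke-DbcCheck are dbachecks cmdlets.
Set-DbcConfig -Name app.sqlinstance -Value 'SQL01', 'SQL02'

# Run one check by its tag, overriding the instance list via the parameter instead.
Invoke-DbcCheck -Check TraceFlagsExpected -SqlInstance 'SQL01'
```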
Click through for an example.
SQL Server 2017 and Azure SQL Database introduced native graph database capabilities used to model many-to-many relationships. The first implementation of SQL Graph introduced support for nodes to represent entities, edges to represent relationships and a new MATCH predicate to support graph pattern matching and traversal.
We will be further expanding the graph database capabilities with several new features. In this blog we will discuss one of these features that is now available for public preview in SQL Server 2019, Edge Constraints on Graph Edge Tables.
In the first release of SQL Graph, an edge could connect any node to any other node in the database. With Edge Constraints, users can enforce specific semantics on edge tables; the constraints also help in maintaining data integrity. This post describes how you can create and use edge constraints in a graph database. We will use the following graph schema created in the WideWorldImporters database for the samples discussed here.
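A minimal sketch of the feature (the node and edge names below are hypothetical, not the WideWorldImporters schema from the post):

```sql
CREATE TABLE Person (PersonId INT PRIMARY KEY) AS NODE;
CREATE TABLE City   (CityId   INT PRIMARY KEY) AS NODE;

-- The CONNECTION constraint restricts livesIn edges to connect a Person node to a City node;
-- inserting an edge between any other pair of node types will fail.
CREATE TABLE livesIn
(
    CONSTRAINT EC_livesIn CONNECTION (Person TO City)
) AS EDGE;
```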
I know that SQL Server 2017 was a bit underwhelming for graph database work, so I will be interested in seeing how much of the gap they cover in this release.
Data Type Selection
The data type selected will impact the physical storage used, not the compression of the models in memory. It is important, whenever possible, to reduce the cardinality of the data in order to be able to sort the data effectively. When storing decimal numbers, unless you need many significant digits, store the data as Currency, as it will take less space in physical storage than Decimal.
Click through for additional tips.
Let’s start with what we are told about this new feature. According to the documentation, the new UTF-8 Collations:
can be used …
- as a database-level default Collation
- as a column-level Collation
- by appending “_UTF8” to the end of any Supplementary Character-Aware Collation (i.e. either having “_SC” in their name, or being of level 140)
- with only the CHAR and VARCHAR datatypes
- (implied) have no effect on NVARCHAR data (meaning: for these types, the UTF-8 Collations behave the same as their non-UTF-8 equivalents)
“This feature may provide significant storage savings, depending on the character set in use.” (emphasis mine)
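As a small sketch of the rules above (table and collation choice are illustrative), a UTF-8 collation is an SC-aware collation name with “_UTF8” appended, and it only changes the encoding of CHAR/VARCHAR columns:

```sql
CREATE TABLE dbo.Utf8Demo
(
    Name  VARCHAR(100)  COLLATE Latin1_General_100_CI_AS_SC_UTF8, -- stored as UTF-8
    NName NVARCHAR(100) COLLATE Latin1_General_100_CI_AS_SC_UTF8  -- unaffected: still UTF-16
);
```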
Solomon takes his normal, thorough approach to the problem and finds several issues.