As we know, nothing is perfect, and the Cassandra database is no exception. You cannot have a perfect package: if you want one brilliant feature, you may have to compromise on others. In today’s blog, we will go through some of the benefits of selecting Cassandra as your database, as well as the problems and drawbacks you might face if you choose Cassandra for your application.
I have also written some earlier blogs that you can refer to if you want to know what Cassandra is, how to set it up, and how it performs its reads and writes.
The only question is whether or not we should pick Cassandra over the other databases that are available. So let’s start with a quick look at when to use the Cassandra database. This should give a clear picture to anyone who is unsure about whether to give Cassandra a try.
This is a level-headed analysis of Cassandra, so check it out.
A pure function can be defined like this:
The output of a pure function depends only on (a) its input parameters and (b) its internal algorithm, which is unlike an OOP method, which can depend on other fields in the same class as the method.
A pure function has no side effects, i.e., it does not read anything from the outside world or write anything to the outside world. – For example, it does not read from a file, web service, UI, or database, and does not write anything either.
As a result of those first two statements, if a pure function is called with an input parameter x an infinite number of times, it will always return the same result y. – For instance, any time a “string length” function is called with the string “Ayush”, the result will always be 5.
If I got to add one more thing, it’d be the idea that functions are first-class data types. In other words, a function can be an input to another function, the same as any other data type like int, string, etc. It takes some time to get used to that concept, but once you do, these types of languages become quite powerful.
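The two ideas above can be sketched in a few lines of Scala. This is a minimal illustration of my own (the names `length` and `applyTwice` are not from the quoted post): a pure function whose result depends only on its input, and a function passed to another function as a first-class value.

```scala
object PureFunctionDemo {
  // Pure: the result depends only on the input string, and calling it
  // never reads from or writes to the outside world.
  def length(s: String): Int = s.length

  // Functions are first-class values: `f` is passed in like any other argument.
  def applyTwice(f: Int => Int, x: Int): Int = f(f(x))
}
```

Calling `PureFunctionDemo.length("Ayush")` returns 5 every single time, and `applyTwice(_ + 3, 10)` treats the anonymous function `_ + 3` as ordinary data.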
In the post about using MSTest framework to execute ssisUnit tests, I used parts of the ssisUnit API model. If you want, you can write all your tests using this model, and this post will guide you through the first steps. I will show you how to write one of the previously prepared XML tests using C# and (again) MSTest.
Why MSTest? Because I don’t want to write an application that contains all the tests I want to run and displays whether or not they pass. When I write MSTest tests, I can run them using the Test Explorer in VS, from the command line, or in TFS.
UIs are great for learning how to do things and for one-off actions, but writing code scales much better in terms of time.
Here is the signature:
def sortBy[B](f: A => B)(implicit ord: Ordering[B]): Repr
The sortBy function is used to sort a collection by one or more attributes. Here is a small example: sorting based on a single attribute of the case class.
Click through for several examples.
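As a quick sketch of the idea (the Employee case class and its values here are my own, not from the linked post): sortBy takes a function from each element to a sort key, and the compiler supplies the implicit Ordering for that key type.

```scala
// Hypothetical case class to illustrate sortBy on a collection.
case class Employee(name: String, age: Int)

object SortByDemo {
  val employees: List[Employee] =
    List(Employee("Arya", 30), Employee("Bran", 25), Employee("Cersei", 35))

  // Sort by a single attribute; the implicit Ordering[Int] is found by the compiler.
  val byAge: List[Employee] = employees.sortBy(_.age)

  // Sort by more than one attribute by returning a tuple as the key.
  val byAgeThenName: List[Employee] = employees.sortBy(e => (e.age, e.name))
}
```

Returning a tuple from the key function is the idiomatic way to get a multi-attribute sort without writing a custom Ordering.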
The problem is that in our original query we’re not getting data from the LinkedPosts entity, just data from Posts and PostTags. Entity Framework knows that it doesn’t have the data for the LinkedPosts entity, so it very kindly gets the data from the database for each row in the query results.
Obviously, making multiple calls to the database instead of one call for the same data is slower. This is a perfect example of RBAR (row by agonizing row) processing.
Read the comments for more answers on top of Richie’s. My answer (only 70% tongue in cheek)? Functional programming languages don’t require ORMs.
Let’s look at a concrete example with the Click-Through Rate Prediction dataset of ad impressions and clicks from the data science website Kaggle. The goal of this workflow is to create a machine learning model that, given a new ad impression, predicts whether or not there will be a click.
To build our advanced analytics workflow, let’s focus on the three main steps:
Data Exploration, for example, using SQL
Advanced Analytics / Machine Learning
The Databricks blog has a couple other examples, but this was the most interesting one for me.
The decorator design pattern is a structural design pattern. Structural design patterns focus on class and object composition, and the decorator pattern is about adding responsibilities to objects dynamically. The decorator pattern gives some additional responsibility to our base class. It works by creating a decorator class that wraps the original class and provides additional functionality while keeping the class’s method signatures intact.
I don’t use the Decorator pattern as often as I probably should, but it can be quite useful.
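A minimal sketch of the pattern in Scala, with an invented Coffee example (none of these names come from the quoted post): the decorator holds a reference to the wrapped component, exposes the same interface, and adds its own behaviour on top.

```scala
// The common interface that both the concrete class and decorators share.
trait Coffee {
  def cost: Double
  def description: String
}

// The base class being decorated.
class SimpleCoffee extends Coffee {
  def cost: Double = 2.0
  def description: String = "coffee"
}

// The decorator wraps any Coffee and delegates to it,
// adding responsibility without changing the method signatures.
class WithMilk(inner: Coffee) extends Coffee {
  def cost: Double = inner.cost + 0.5
  def description: String = inner.description + ", milk"
}
```

Because decorators accept any `Coffee`, they compose: `new WithMilk(new WithMilk(new SimpleCoffee))` stacks the added behaviour at runtime rather than through inheritance.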
So we branch the code in source control, and start writing a helper class to manage the data for us closer to the application. We throw in a SqlDataAdapter, use the Fill() method to bring back all the rows from the query in one go, and then use a caching layer to keep it in memory in case we need it again. SQL Server’s part in this story has now faded into the background. This narrow table consumes a tiny 8 MB of RAM, and having two or more copies in memory isn’t the end of the world for our testing. So far, so good again.
We run the new code, first stepping through to make sure that it still does what it used to, massaging here and there so that in the end, a grid is populated on the application with the results of the query. Success! We then compile it in Release mode, and run it without any breakpoints to do some performance testing.
And then we find that it runs at exactly the same speed to produce exactly the same report, using our caching and SqlDataAdapter, and we’ve wasted another hour of our time waiting for the grid and report. Where did we go wrong?
As people get better at tuning, we start to make assumptions based on prior experience. That, on net, is a good thing, but as Randolph shows, those assumptions can still be wrong.
Then I could use the extension like this:
if (mySeries.In(Enum.Series.ProMazda, Enum.Series.Usf2000)) myChassis = "Tatuus";
As for the other two methods, well… When is a null not a null? When it’s a System.DBNull.Value, of course! SQL Server pros who have spent any time in the .NET Framework will recognize this awkwardness:
var p = new System.Data.SqlClient.SqlParameter("@myParam", System.Data.SqlDbType.Int);
p.Value = (object)myVar ?? System.DBNull.Value;
With the extension, the second line becomes:
p.Value = mVar.ToDbNull();
I like that Jay ended up going with a language other than T-SQL. It’s no F#, but it’ll do.
An extractor is an object that has an unapply method. It takes an object as an input and gives back arguments. Custom extractors are created using the unapply method. The unapply method is called an extractor because it takes an element of the same set and extracts some of its parts; the apply method, also called an injection, acts as a constructor: it takes some arguments and yields an element of a given set.
Click through for explanatory examples.
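As a small sketch of the apply/unapply pairing (the Email object here is my own illustration, not taken from the linked post): apply injects parts into a whole, and unapply extracts the parts back out, which is what lets the object appear in a pattern match.

```scala
// A custom extractor: apply constructs (injection), unapply deconstructs (extraction).
object Email {
  // apply: build an address string from its parts.
  def apply(user: String, domain: String): String = s"$user@$domain"

  // unapply: take an element and, if possible, give back its parts.
  def unapply(address: String): Option[(String, String)] =
    address.split("@") match {
      case Array(user, domain) => Some((user, domain))
      case _                   => None
    }
}

object ExtractorDemo {
  // The extractor is what makes `case Email(user, domain)` legal in a match.
  def describe(s: String): String = s match {
    case Email(user, domain) => s"user=$user, domain=$domain"
    case _                   => "not an email"
  }
}
```

Here `Email("ayush", "example.com")` builds `"ayush@example.com"`, and matching that string against `Email(user, domain)` runs unapply to pull the two parts back out.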