Press "Enter" to skip to content

Day: September 25, 2018

Writing To Elasticsearch With Spark Streaming

Anuj Saxena has an example of writing data from a Spark Structured Streaming pipeline out to Elasticsearch:

We have been working with streaming data for quite some time, and Apache Spark makes that much more convenient. Spark provides two APIs for streaming data: Spark Streaming, which is a separate library provided by Spark, and Structured Streaming, which is built upon the Spark SQL library. We will discuss the trade-offs and differences between these two libraries in another blog. But today we’ll focus on saving streaming data to Elasticsearch using Spark Structured Streaming. Elasticsearch added support for Spark Structured Streaming 2.2.0 onwards in version 6.0.0 of the “Elasticsearch For Apache Hadoop” dependency. We will be using these versions or higher to build our sbt-scala project.

Click through for an example.
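
For a rough sense of what the write side looks like, here is a minimal sketch in PySpark rather than the post’s Scala; the Elasticsearch host, index name, and checkpoint path are placeholders, and the job assumes the elasticsearch-hadoop Spark connector is available:

```python
# Minimal sketch: stream JSON records into Elasticsearch with Structured Streaming.
# Assumes the elasticsearch-hadoop connector is on the classpath, e.g.:
#   spark-submit --packages org.elasticsearch:elasticsearch-spark-20_2.11:6.4.0 es_sink.py
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import IntegerType, StringType, StructField, StructType

spark = (SparkSession.builder
         .appName("structured-streaming-to-es")
         .config("es.nodes", "localhost")   # hypothetical Elasticsearch host
         .config("es.port", "9200")
         .getOrCreate())

schema = StructType([
    StructField("user", StringType()),
    StructField("score", IntegerType()),
])

# Any streaming source works; a socket source keeps the sketch self-contained.
raw = (spark.readStream
       .format("socket")
       .option("host", "localhost")
       .option("port", 9999)
       .load())

events = raw.select(from_json(col("value"), schema).alias("e")).select("e.*")

# The connector registers an "es" sink for Structured Streaming; the argument to
# start() is the target index (index/type in the 6.x era).
query = (events.writeStream
         .outputMode("append")
         .format("es")
         .option("checkpointLocation", "/tmp/es-checkpoint")  # made-up path
         .start("streaming_demo/docs"))

query.awaitTermination()
```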


Wasting Money With Data Science

Giovanni Lanzani has a post with the controversial title above:

Some data is gathered, given to data scientists, and — after two weeks — the first demo takes place. The results are promising, but they need a bit more time.

Fine. After all, the data was messy: they had to clean it up and go back to the source a couple of times.

Two weeks pass and the new results are even nicer. With 70% accuracy, they can predict if a patient will go home after their visit to the emergency room.

This is much better than random (50%)! A full-fledged pilot starts.

They are faced with a couple of challenges to go from model to data product:

  • How to send the source data to the model is unclear;

  • Where the model should run is undecided;

  • Hospital operations need to change, as intake happens with pen and paper;

  • They realize that without knowing which department the patient will go to, they won’t add any value;

  • To predict the department, the model needs the diagnosis. But once the diagnosis gets typed into the computer, the patient has already reached their destination: the model is useless!

I think it’s a fair point: it’s easy for internal researchers to look for things which they can do but which don’t have much business value. The risk on the other side is that you’ll start diving into a high-potential-value problem and then realize that the data isn’t there to draw conclusions, or that the relationships you expected simply aren’t there.


Be Careful Of P-Hacking

Vincent Granville discusses the problem of p-hacking:

I read an article this morning about a top Cornell food researcher having 13 studies retracted (see here). It prompted me to write this blog. It is about data science charlatans and unethical researchers in academia destroying the value of p-values, using a well-known trick called p-hacking to get published in top journals and win grant money or tenure. The issue is widespread, not just in academic circles, and it makes people question the validity of scientific methods. It fuels the fake “theories” of those who have lost faith in science.

The trick consists of repeating an experiment sufficiently many times until the conclusions fit your agenda, or of cherry-picking the data you use, or even discarding observations deemed to have a negative impact on the conclusions. Sometimes causation and correlation are mixed up on purpose, or misleading charts are displayed. Sometimes the author simply lacks statistical acumen.

Usually, these experiments are not reproducible. Even top journals sometimes accept these articles, due to

  • Poor peer-review process

  • Incentives to publish sensational material

Wansink is a charlatan.  But beyond p-hacking is Andrew Gelman and Eric Loken’s Garden of Forking Paths.  Gelman’s blog, incidentally (example), is where I originally learned about Wansink’s shady behaviors.  Gelman also warns us not to focus on the procedural, but instead on a deeper problem.
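
To make the repeat-until-it-fits trick concrete, here is a small simulation of my own (not from either post): both samples come from the same distribution, so any significant difference is a false positive, yet giving yourself twenty attempts makes one almost inevitable.

```python
# Both groups come from the same distribution, so every "significant" result
# below is a false positive by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_runs = 10_000        # simulated "research programs"
n_experiments = 20     # experiments each program is willing to rerun
lucky_runs = 0

for _ in range(n_runs):
    p_values = []
    for _ in range(n_experiments):
        a = rng.normal(loc=0.0, scale=1.0, size=30)
        b = rng.normal(loc=0.0, scale=1.0, size=30)
        _, p = stats.ttest_ind(a, b)
        p_values.append(p)
    if min(p_values) < 0.05:
        lucky_runs += 1

# Expect roughly 1 - 0.95**20, i.e. about 64% of runs, to "find" an effect.
print(f"Runs with at least one p < 0.05: {lucky_runs / n_runs:.1%}")
```

The nominal 5% threshold only means something if you run the experiment once and report it regardless of the outcome.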


Databricks Delta Now Available On Azure

Cihan Biyikoglu and Singh Garewal announce the availability of Databricks Delta on Azure Databricks:

Using an innovative new table design, Delta supports both batch and streaming use cases with high query performance and strong data reliability while requiring a simpler data pipeline architecture:

Increased query performance – Able to deliver 10 to 100 times faster performance than Apache Spark™ on Parquet through the use of key enablers such as compaction, flexible indexing, multi-dimensional clustering and data caching.

Improved data reliability – By employing ACID (“all or nothing”) transactions, schema validation / enforcement, exactly once semantics, snapshot isolation and support for UPSERTS and DELETES.

Reduced system complexity – Through the unification of batch and streaming in a common pipeline architecture – being able to operate on the same table also means a shorter time from data ingest to query result. Schema evolution provides the ability to infer schema from input data making it easier to deal with changing business needs.

The Azure version of Databricks is quickly reaching parity with the classic AWS-hosted version.
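
As a rough sketch of the unified batch-and-streaming idea described above, assuming a Databricks cluster where the delta format is available and with made-up storage paths:

```python
# Sketch: one Delta table serving both a batch write and a streaming append.
# Assumes a Databricks cluster where the "delta" format is registered; the
# storage paths are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-batch-and-stream").getOrCreate()

delta_path = "/mnt/demo/events_delta"

# Batch load: write an initial snapshot as a Delta table (ACID, schema-enforced).
batch_df = spark.range(0, 1000).withColumnRenamed("id", "event_id")
batch_df.write.format("delta").mode("overwrite").save(delta_path)

# Streaming append into the same table; readers always see consistent snapshots.
stream_df = (spark.readStream
             .format("rate")              # built-in test source: N rows per second
             .option("rowsPerSecond", 10)
             .load()
             .selectExpr("value AS event_id"))

(stream_df.writeStream
 .format("delta")
 .outputMode("append")
 .option("checkpointLocation", "/mnt/demo/_checkpoints/events_delta")
 .start(delta_path))

# Batch reads against the same path keep working while the stream runs.
print(spark.read.format("delta").load(delta_path).count())
```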


Other Ignite Announcements

Denny Cherry gives us a quick roundup of Ignite announcements:

On the Azure Data Platform side of the world, we have the announcement that Azure SQL DB now supports databases up to 100 TB in size using the Hyperscale feature of Azure SQL DB, which you’ll see coming on October 1st, 2018.  Hyperscale is an excellent move for customers, as many customers were blocked from moving their database to Azure SQL DB simply because of size, and this limit is going away in just a few short days.

Along with the legacy database platform, we have Managed Instance, which was in public preview.  That preview status is no more: Managed Instance is being released to General Availability starting on October 1st, 2018.  Managed Instance will make migrations to Azure much more accessible for many clients that need support for a full SQL Server instance because of features that aren’t available in Azure SQL DB. Managed Instance will bridge this gap, giving customers basically full SQL Server functionality within a PaaS service.

In the Azure SQL DB space, we see new features for query performance optimization getting released to General Availability.  These include row mode memory grant feedback, approximate query processing, and table variable deferred compilation. With minimal effort, these features can collectively optimize your memory usage and improve overall query performance.

They’re throwing a lot of stuff our way, including a less expensive version of Azure SQL Data Warehouse.


New Use Hint In SQL Server 2017 CU10

Pedro Lopes shows us a new use hint introduced in SQL Server 2017 CU10:

In this scenario, you only have this one query that apparently does better in SQL Server 2014 than 2017. That’s all “New CE” – there’s no CE 70 vs. CE 120+ at issue here. Using any known trace flag, the FORCE_LEGACY_CARDINALITY_ESTIMATION hint, or the FORCE_DEFAULT_CARDINALITY_ESTIMATION hint doesn’t help. Rewriting the query is an option, but in the interim, I need a quick fix. How?

In SQL Server 2017 CU10, we have introduced a few new USE HINTs: QUERY_OPTIMIZER_COMPATIBILITY_LEVEL_n, where n is a supported database compatibility level. This forces the query optimizer behavior at the query level, as if the query were compiled with that database compatibility level. You can refer to sys.dm_exec_valid_use_hints for a list of currently supported values for n.

So to be clear, the new hint is not forcing only a specific CE model, it’s forcing the equivalent of the specific database compatibility level’s query optimizer behavior, including any query optimizer fixes that are enabled by default in that database compatibility level.

Something to keep in mind, though ideally not something you’d want to use regularly.
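
Here is a minimal sketch of trying the hint, driven from Python with pyodbc; the connection string and the dbo.Orders query are placeholders, while the hint name and the DMV come straight from the quoted post:

```python
# Sketch: forcing SQL Server 2014 (compatibility level 120) optimizer behavior
# for a single query. The connection string and dbo.Orders are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=Sales;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# List the USE HINT values this build supports; CU10 adds the
# QUERY_OPTIMIZER_COMPATIBILITY_LEVEL_n family.
for (hint,) in cursor.execute("SELECT name FROM sys.dm_exec_valid_use_hints;"):
    print(hint)

# Run the problem query with 2014-era optimizer behavior while the rest of the
# database stays at its current compatibility level.
query = """
SELECT o.CustomerID, COUNT(*) AS OrderCount
FROM dbo.Orders AS o
GROUP BY o.CustomerID
OPTION (USE HINT ('QUERY_OPTIMIZER_COMPATIBILITY_LEVEL_120'));
"""
for row in cursor.execute(query):
    print(row.CustomerID, row.OrderCount)
```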


Batch Mode On Rowstore

Kevin Farlee announces another query processing improvement in SQL Server 2019:

In the SQL Sever 2019 preview, we are further expanding query processing capabilities with several new features under the Intelligent Query Processing (QP) feature family.  In this blog post we’ll discuss one of these Intelligent QP features that is now available in public preview, batch mode on rowstore. This feature unlocks the advantages of batch mode execution in cases where there is no columnstore participating in the query.

Batch mode is a different execution mode primarily targeted at analytics queries, which are characterized by scanning many rows and doing significant aggregations, sorts, and group-by operations across those rows.  Until now, batch mode has been reserved for queries involving columnstore indexes.

Performing scans and calculations using batches of ~ 900 rows at a time rather than row by row is much more efficient for analytic-type queries.  For queries that can take advantage of it, batch mode can easily make queries execute many times faster than the same query against the same data in row mode.

Until now, the workaround was to create an empty filtered columnstore index on the rowstore table.  This solution is more architecturally pleasing and means one less hack to explain.
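
A hedged sketch of both approaches, again via pyodbc; the AnalyticsDemo database and dbo.FactSales table are made up, and the contradictory-filter index is the community workaround mentioned above rather than an official feature:

```python
# Sketch: the pre-2019 workaround versus the 2019 preview behavior.
# The AnalyticsDemo database and dbo.FactSales table are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=AnalyticsDemo;Trusted_Connection=yes;",
    autocommit=True,
)
cursor = conn.cursor()

# Workaround: a filtered nonclustered columnstore index whose predicate can never
# be true, so it stays empty but lets the optimizer consider batch mode.
cursor.execute("""
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_BatchModeTrick
ON dbo.FactSales (SaleID)
WHERE SaleID = -1 AND SaleID = -2;
""")

# SQL Server 2019 preview: under compatibility level 150, the same aggregation
# can get batch mode on the rowstore data with no columnstore index at all.
cursor.execute("ALTER DATABASE AnalyticsDemo SET COMPATIBILITY_LEVEL = 150;")
rows = cursor.execute("""
SELECT ProductID, SUM(Quantity) AS TotalQuantity
FROM dbo.FactSales
GROUP BY ProductID;
""").fetchall()
print(rows[:5])
```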


The Evolution Of Polybase

Asad Khan gets into improvements in SQL Server 2019:

  • Break down data silos and deliver one view across all of your data using data virtualization. Starting in SQL Server 2016, PolyBase has enabled you to run a T-SQL query inside SQL Server to pull data from your data lake and return it in a structured format—all without moving or copying the data. Now in SQL Server 2019, we’re expanding that concept of data virtualization to additional data sources, including Oracle, Teradata, MongoDB, PostgreSQL, and others. Using the new PolyBase, you can break down data silos and easily combine data from many sources using virtualization to avoid the time, effort, security risks and duplicate data created by data movement and replication. New elastically scalable “data pools” and “compute pools” make querying virtualized data lightning fast by caching data and distributing query execution across many instances of SQL Server.

Just in time for me to scramble to update Polybase slides for Conference Season…
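
To give a flavor of what the expanded virtualization looks like, here is a sketch of exposing an Oracle table through PolyBase in the SQL Server 2019 preview, driven from Python with pyodbc; every server name, credential, and table is a placeholder, and the DDL is the CTP-era form, which may change:

```python
# Sketch: exposing an Oracle table through PolyBase data virtualization in the
# SQL Server 2019 preview. All names and secrets are placeholders, and the DDL
# reflects CTP-era syntax that may change before release.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=Virtualized;Trusted_Connection=yes;",
    autocommit=True,
)
cursor = conn.cursor()

ddl_statements = [
    # A database master key protects the credential secret.
    "IF NOT EXISTS (SELECT 1 FROM sys.symmetric_keys "
    "               WHERE name = '##MS_DatabaseMasterKey##') "
    "CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword1!>';",

    "CREATE DATABASE SCOPED CREDENTIAL OracleCredential "
    "WITH IDENTITY = 'oracle_user', SECRET = '<oracle_password>';",

    # Point PolyBase at the Oracle instance.
    "CREATE EXTERNAL DATA SOURCE OracleSales "
    "WITH (LOCATION = 'oracle://oraclehost:1521', CREDENTIAL = OracleCredential);",

    # Expose a remote table; the data stays in Oracle and is read at query time.
    "CREATE EXTERNAL TABLE dbo.OracleOrders "
    "(OrderID INT, CustomerID INT, OrderTotal DECIMAL(18, 2)) "
    "WITH (LOCATION = '[XE].[SALES].[ORDERS]', DATA_SOURCE = OracleSales);",
]
for ddl in ddl_statements:
    cursor.execute(ddl)

# Query the virtualized table with ordinary T-SQL, joins to local tables included.
for row in cursor.execute("SELECT TOP (10) * FROM dbo.OracleOrders;"):
    print(row)
```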


What’s In SQL Server 2019 CTP 2.0?

Aaron Bertrand gives us the highlights:

  • Certificate Management in Config Manager – View and validate all of your certificates from a single interface, and manage and deploy certificate changes across all of the replicas in an Availability Group or all of the nodes in a Failover Cluster Instance.

  • Built-in data classification – A new ADD SENSITIVITY CLASSIFICATION statement helps you identify and automatically audit sensitive data, a huge step up from the previous SSMS wizard (which just used extended properties).
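
As a quick sketch of that classification statement in action, driven from Python with pyodbc; the HumanResources database, the dbo.Employees.Salary column, and the label values are hypothetical:

```python
# Sketch: classifying a column with ADD SENSITIVITY CLASSIFICATION and reading
# the classification back. The database, table, column, and labels are made up.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=HumanResources;Trusted_Connection=yes;",
    autocommit=True,
)
cursor = conn.cursor()

cursor.execute("""
ADD SENSITIVITY CLASSIFICATION TO dbo.Employees.Salary
WITH (LABEL = 'Highly Confidential', INFORMATION_TYPE = 'Financial');
""")

# Classifications live in a catalog view rather than in extended properties.
for row in cursor.execute("""
SELECT SCHEMA_NAME(o.schema_id) AS schema_name,
       o.name  AS table_name,
       c.name  AS column_name,
       sc.label,
       sc.information_type
FROM sys.sensitivity_classifications AS sc
JOIN sys.objects AS o ON o.object_id = sc.major_id
JOIN sys.columns AS c ON c.object_id = sc.major_id AND c.column_id = sc.minor_id;
"""):
    print(tuple(row))
```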

Aaron also digs into the engine a bit:

APPROX_COUNT_DISTINCT

This new aggregate function is designed for data warehouse scenarios and is an equivalent of COUNT(DISTINCT()). Instead of performing expensive distinct sort operations to determine actual counts, it relies on statistics to get something relatively accurate. You should find that the margin of error is within 2% of the precise count, 97% of the time, which is usually fine for high-level analytics, values that populate a dashboard, or quick estimates.

On my system I created a table with integer columns ranging from 100 to 1,000,000 unique values, and string columns ranging from 100 to 100,000 unique values. There were no indexes other than a clustered primary key on the leading integer column. Here are the results of COUNT(DISTINCT()) vs. APPROX_COUNT_DISTINCT() against those columns, so you can see where it is off by a bit (but always well within 2%):

By the way, APPROX_COUNT_DISTINCT() is a really good idea, and I’m glad it’s here.
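
If you want to eyeball the trade-off on your own data, here is a small sketch along the lines of Aaron’s test, again via pyodbc; the Warehouse database, dbo.FactSales table, and CustomerKey column are placeholders:

```python
# Sketch: comparing exact and approximate distinct counts on one column.
# The connection string, table, and column are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=Warehouse;Trusted_Connection=yes;"
)
cursor = conn.cursor()

row = cursor.execute("""
SELECT COUNT(DISTINCT CustomerKey)        AS exact_count,
       APPROX_COUNT_DISTINCT(CustomerKey) AS approx_count
FROM dbo.FactSales;
""").fetchone()

error_pct = abs(row.exact_count - row.approx_count) / row.exact_count * 100
print(f"exact = {row.exact_count}, approx = {row.approx_count}, error = {error_pct:.2f}%")
```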


SQL Operations Studio Is Now Azure Data Studio

David Hiltenbrand notes a name change:

Will SQL Operations Studio upgrade automatically to Azure Data Studio? 

NO! Although they’re effectively the same thing currently, you do need to install Azure Data Studio separately from your existing sqlops install. You can install the new Azure Data Studio after downloading it from here: https://aka.ms/getazuredatastudio. The docs also include a helpful section, Move User Settings, that will help you migrate any custom settings you don’t want to lose from your sqlops configuration.

Personally, I’m not a big fan of the name change.  But Grant Fritchey clues us in on the reason behind it:

The core concept here is to have a development tool that gives you a common framework for working with data, not just SQL data, but CosmosDB and others. Further, a tool that you can run where you work. Do you have a Mac? Cool. Use Azure Data Studio. Running Linux? Cool. Use Azure Data Studio. Still on Windows with me? We also get Azure Data Studio.

I do get the benefit of a tool which can hit different data sources, including something which is not SQL-based.  But the “Azure” in the name throws me.  I’ll still connect to my on-prem and AWS-based SQL Servers with it though.
