Press "Enter" to skip to content

Category: Versions

Hortonworks DataFlow 3.1 Released

George Vetticaden and Haimo Liu announce Hortonworks DataFlow version 3.1:

Apache Kafka 1.0 support with full integration with HDF Services – Kafka 1.0 provides important new features, including more stringent message processing semantics with support for message headers and transactions, performance improvements, and advanced security options.

  • Apache Ambari support for Kafka 1.0 – Install, configure, manage, upgrade, monitor, and secure Kafka 1.0 clusters with Ambari.

  • Apache Ranger support for Kafka 1.0 – Manage access control policies (ACLs) using resource or tag-based security for Kafka 1.0 clusters.

  • New NiFi and SAM processors for Kafka 1.0 – New processors in NiFi and Hortonworks Streaming Analytics Manager (SAM) support Kafka 1.0 features including message headers and transactions.
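
If you want to see what message headers and transactions look like at the client level, here is a minimal sketch using the confluent-kafka Python client. The broker address, topic name, and transactional id are placeholder assumptions, and the NiFi/SAM processors above expose equivalent functionality through HDF rather than this raw API.

    # Minimal sketch: produce a message with headers inside a transaction.
    # The broker address, topic, and transactional.id are placeholders.
    from confluent_kafka import Producer

    producer = Producer({
        'bootstrap.servers': 'localhost:9092',     # assumed local broker
        'transactional.id': 'example-producer-1',  # enables transactional writes
    })

    producer.init_transactions()   # register the transactional id with the broker
    producer.begin_transaction()
    producer.produce(
        'orders',                             # hypothetical topic
        key='order-42',
        value=b'{"status": "shipped"}',
        headers=[('trace-id', b'abc123')],    # Kafka message headers
    )
    producer.commit_transaction()             # atomically commit the produced records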

Click through for the list of top changes.


Microsoft R Open 3.4.3

David Smith announces Microsoft R Open 3.4.3:

Microsoft R Open (MRO), Microsoft’s enhanced distribution of open source R, has been upgraded to version 3.4.3 and is now available for download for Windows, Mac, and Linux. This update upgrades the R language engine to the latest R (version 3.4.3) and updates the bundled packages (specifically: checkpoint, curl, doParallel, foreach, and iterators) to new versions.

MRO is 100% compatible with all R packages. MRO 3.4.3 points to a fixed CRAN snapshot taken on January 1, 2018, and you can see some highlights of new packages released since the prior version of MRO on the Spotlights page. As always, you can use the built-in checkpoint package to access packages from an earlier date (for reproducibility) or a later date (to access new and updated packages).

That brings Microsoft up to speed with base R.


Getting dbatools To Version 1.0

Simone Bizzotto explains what it’s going to take to get dbatools up to version 1.0:

We’re looking for contributors to help us finally reach version 1.0. Currently, we are on par with Gmail’s beta schedule: a whopping 4 years. But, we’re almost there and need your help finalizing our changes. If you’re interested in helping us bring 1.0 alive, we identified four areas with 5 primary contacts on the SQL Server Community Slack:

  • Standardize param names (@wsmelton)
  • Create tests for existing functions (@cl and @niphlod)
  • Review existing function documentation (@alevyinroc or @gbargsley)
  • Prepare for 1.0 with “code style” (Bill of Health, more on that later)

As you can see, a few of us are the main reference (on GitHub and Slack, mostly) for each area.

Read the whole thing and, if you’ve found dbatools to be helpful in the past, see if there’s anything you can do to help them out a little in return.


Hadoop 3.0 Ships

Alex Woodie reports that Hadoop 3.0 is officially out there, and looks at what’s forthcoming in 3.1 and 3.2:

As we told you about last week, Hadoop 3.0 brings two big new features that are compelling in their own right. That includes support for erasure coding, which should boost storage efficiency by 50% thanks to more efficient data replication; and YARN Federation, which should allow Hadoop clusters to scale up to 40,000 nodes.
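
As a rough check on that 50% figure: with classic triple replication, HDFS stores three full copies of every block (200% overhead), while a Reed-Solomon 6+3 erasure coding layout stores six data blocks plus three parity blocks (50% overhead). The sketch below just walks through that arithmetic; the specific RS(6,3) policy is an assumption about a typical configuration, not a statement of Hadoop 3.0's defaults.

    # Back-of-the-envelope storage comparison: 3x replication vs. RS(6,3) erasure coding.
    data_tb = 100                     # hypothetical logical data size, in TB

    replication_factor = 3
    replicated_tb = data_tb * replication_factor                    # 300 TB on disk

    rs_data, rs_parity = 6, 3                                       # assumed RS(6,3) policy
    erasure_coded_tb = data_tb * (rs_data + rs_parity) / rs_data    # 150 TB on disk

    saving = 1 - erasure_coded_tb / replicated_tb                   # 0.5, i.e. 50% less raw storage
    print(replicated_tb, erasure_coded_tb, f"{saving:.0%}")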

The delivery of Hadoop 3.0 shows that the open source community is responding to the demands of industry, said Doug Cutting, original co-creator of Apache Hadoop and the chief architect at Cloudera.

“It’s tremendous to see this significant progress, from the raw tool of eleven years ago, to the mature software in today’s release,” he said in a press release. “With this milestone, Hadoop better meets the requirements of its growing role in enterprise data systems.”

But some of the new features in Hadoop 3.0 weren’t designed to bring immediate rewards to users. Instead, they pave the way for the Apache Hadoop community to deliver more compelling features with versions 3.1 and 3.2, according to Hortonworks director of engineering Vinod Kumar Vavilapalli, who’s also a committer on the Apache Hadoop project.

“Hadoop 3.0 is actually a building block, a foundation, for more exciting things to come in 3.1 and 3.2,” he said.

Click through to see some of those exciting things.


Upgrading That Expired Evaluation Copy Of SQL Server

Cody Konior finds a way to extricate the poor souls who need to upgrade expired evaluation copies of SQL Server from their mess:

Common advice here is to set the clock backwards. My problem with that is that you’re probably doing this on an unsupported unknown black-box flaming garbage can of a system set up by someone who wasn’t meant to do it – because otherwise they wouldn’t be using the evaluation edition. So what are the repercussions of setting the clock backwards? Perhaps their application spawning silently in the background and trashing this or other databases with bad date information? Perhaps you’ll lose your RDP connection and then be unable to connect back in because of the SSPI error generated by a clock mismatch?

No thanks. Instead you need to do some detective work.

Read the whole thing.


Upgrading A Cluster To Windows Server 2016

Ryan Adams shows how to upgrade a failover cluster running Windows Server 2012 R2 to Windows Server 2016 without having to start from scratch:

Starting in Windows Server 2012 R2 you now have a way to upgrade a cluster to Windows 2016.  The best part is it’s not an OS upgrade, but a rebuild.  The magic is that you can join a Windows 2016 server to a Windows 2012 R2 cluster.  You can upgrade your cluster with as little as one failover and thus very little down time.  Everything stays in compatibility mode until all nodes are upgraded to Windows 2016 and then you upgrade the cluster functional level.  This is great news for those of us running FCIs or AGs.

Click through for a listing of steps and a video.


Using dbatools To Determine SQL Server Versions

Simone Bizzotto walks us through a new dbatools feature:

You get back in a jiffy:
– the Build
– the Major Release
– the Service Pack
– the Cumulative Update
– the KB related to that version
– when support for that version ends
– whether all of the above match a verified build
– a warning if you passed a bad build or the JSON needs to be updated

Getting the build is easy; getting some of this other information is where they add a lot of value.
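
To illustrate the idea rather than dbatools’ actual implementation (which is PowerShell backed by a maintained JSON reference file), here is a hedged Python sketch of the same kind of lookup: map a reported build string to release, service pack/CU, and KB metadata, and warn when the build is unknown. The reference entries below are illustrative placeholders, not a verified build list.

    # Illustrative sketch of a build-to-metadata lookup, in the spirit of the
    # dbatools feature described above; the reference table is a placeholder.
    BUILD_REFERENCE = {
        "13.0.1601": {"release": "SQL Server 2016", "level": "RTM"},                     # illustrative
        "13.0.4001": {"release": "SQL Server 2016", "level": "SP1", "kb": "KB3182545"},  # illustrative
    }

    def lookup_build(build: str) -> dict:
        """Trim a build string to major.minor.build and look it up."""
        key = ".".join(build.split(".")[:3])
        info = BUILD_REFERENCE.get(key)
        if info is None:
            # Mirrors the warning case: either a bad build was passed in,
            # or the reference data needs updating.
            return {"build": key, "warning": "build not found in reference data"}
        return {"build": key, **info}

    print(lookup_build("13.0.4001.0"))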


Installing Zeppelin With Spark2 Support On HDP

Paul Hernandez shows how to install Apache Zeppelin 0.7.3 on Hortonworks Data Platform 2.5 in order to gain Spark2 support:

As a recent client requirement, I needed to propose a solution to add spark2 as an interpreter to zeppelin in HDP (Hortonworks Data Platform) 2.5.3.
The first hurdle: HDP 2.5.3 ships with zeppelin 0.6.0, which does not support spark2 (itself included in HDP 2.5.3 only as a technical preview). Upgrading the HDP version was not an option due to the effort and platform availability. In the end I found a solution in the HCC (Hortonworks Community Connection), which involves installing a standalone zeppelin that does not affect the Ambari-managed zeppelin delivered with HDP 2.5.3.
I want to share how I did it with you.

Read on to see how Paul did it.  It’s not trivial but Paul lays out the process step-by-step.


What’s New In Analysis Services

Christian Wade explains what’s new in SQL Server Analysis Services 2017:

SSAS 2017 introduces the 1400 compatibility level. Here are just some highlights of the new features:

  • New infrastructure for data connectivity and ingestion into tabular models with support for TOM APIs and TMSL scripting. This enables support for a range of additional data sources, and data transformation and mashup capabilities.

  • Support for BI tools such as Microsoft Excel enables drill-down to detailed data from an aggregated report. For example, when end-users view total sales for a region and month, they can view the associated order details.

  • Object-level security to secure table and column names in addition to the data within them.

  • Enhanced support for ragged hierarchies such as organizational charts and chart of accounts.

  • Various other improvements for performance, monitoring, and consistency with the Power BI modeling experience.

There’s plenty more where that came from (unless you’re a Multidimensional fan…), so click through for the details.


Changes To SQL Server’s Servicing Model

Pedro Lopes announces changes to SQL Server’s servicing model:

Starting with SQL Server 2017, we are adopting a simplified, predictable mainstream servicing lifecycle:

  • SPs will no longer be made available. Only CUs, and GDRs when needed.
  • CUs will now accommodate localized content, allowing new feature completeness and supportability enhancements to be delivered faster.
  • CUs will be delivered more often at first and then less frequently: every month for the first 12 months, and every quarter for the remaining 4 years of the full 5-year mainstream lifecycle.
  • CUs will be delivered in the same week of the month: the week of the 3rd Tuesday.

Note: the Modern Servicing Model (MSM) only applies to SQL Server 2017 and future versions.

If you’re the type who waits for SP1 to drop, you’ll be waiting for Godot.  Who should be here any minute now.
