How old is really old?
The Azure Data Lake Store binaries have been broadly certified for Hadoop distributions 3.0 and above. We are really in uncharted territory for lower versions, so the farther away you go from 3.0, the higher the likelihood of them not working. My personal recommendation is to go no lower than 2.6. After that, your mileage may really vary.
This is a good article, and do check it out. A small mini-rant follows: Hadoop version 2.6 is not old. Nor is 2.7. 2.7 is the most recent production-worthy branch, and 3.0 isn’t expected to go GA until August.
This service is the hidden gem of HPC (high performance computing) within the Azure Compute service family. As the name implies, Azure Batch is designed to run large-scale and high-performance computing applications efficiently in the cloud. When you’re faced with large workloads, all you have to do is use Azure Batch to define compute resources to execute your applications in parallel and at the desired scale. A good use case for Azure Batch would be financial risk modelling, climate data analysis or stress testing. What makes Batch so useful is that you don’t need to manually manage the node cluster, virtual networks or scheduling, because all of this is handled by the service. You need only define a job, any associated data and the number of nodes you want to utilise. It makes no difference if you need to run on one, a hundred or even thousands of nodes; the service is designed to scale according to the workload’s needs.
The cheapest server may very well be no server, and we’re at the point where relatively simple services could just run as Azure Functions or AWS Lambda functions.
Lots of things have been reported to kill the DBA over the years:
SQL Server 2005 was said to be “self-tuning”! Who needs a DBA when the instance tunes itself? (Apparently everyone.)
Outsourcing: All the DBA jobs are going to X location, then Y location, then Z location. Then back to X. DBA jobs have become more global, but “outsourcing” hasn’t gotten rid of DBA jobs in the United States. It has been part of the trend to make working remotely more normal and easy, which is generally good for DBAs.
DevOps! All the developers will manage everything. And somehow know how to do so. I love DevOps, and I have seen it wipe out some QA departments, but I haven’t seen it wipe out DBAs. I think it’s fun to be a DBA working with a DevOps team.
Consider this in contrast to Dave Mason’s concern. My perspective is a lot closer to Kendra’s, but both posts make the good point that IT roles are ever-shifting.
In this post, we will show you a visualization and build a predictive model of US flights with sparklyr. Flight visualization code is based on this article.
This post assumes you already have the following tables:
- Airlines data as airlines_bi_pq. It is assumed to be on S3, but you can put it into HDFS. See also the Ibis project.
- Airports data converted into Parquet format as airports_new_pq. See also 2009 ASA Data Expo.
You should make these tables available through Apache Hive or Apache Impala (incubating) with Hue.
There’s some setup work to get this going, but getting a handle on sparklyr looks to be a good idea if you’re in the analytics space.
Microsoft emphasized “choice” when it originally introduced Azure Container Service. Although it launched without Kubernetes, Azure initially supported Mesosphere DC/OS and Docker Swarm because the majority of Microsoft’s customers used them and the company believed they would be well served by the support.
Since then, Kubernetes has emerged as a clear leader among container orchestration solutions. It is used as an underpinning for deep learning frameworks and the basis for an open source serverless/“lambda” app framework, and is even offered as a managed on-premises service by one company.
Kubernetes on Azure is strictly focused on running Kubernetes within Azure, not providing it as a service elsewhere. But the GA release includes additions meant to appeal to a broad audience of both Linux and Windows Server users, such as support for the latest version of DC/OS (1.8.8).
It’s an interesting world out there.
Azure SQL Database enables you to directly load files stored on Azure Blob Storage using the BULK INSERT T-SQL command and OPENROWSET function.
Loading the content of files from an Azure Blob Storage account into a table in SQL Database is now a single command.
Click through for the details.
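For a rough idea of what that single-command load looks like, here is a minimal sketch; the storage account, container, SAS token, file, and table names are all placeholders, and it assumes the database already has a master key so the scoped credential can be created:

```sql
-- Credential holding a SAS token for the storage account (placeholder secret).
CREATE DATABASE SCOPED CREDENTIAL BlobCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = 'sv=2016-05-31&ss=b&srt=sco&sp=rl&sig=...';  -- SAS token, without the leading '?'

-- External data source pointing at the blob container.
CREATE EXTERNAL DATA SOURCE MyBlobStorage
WITH (
    TYPE = BLOB_STORAGE,
    LOCATION = 'https://mystorageaccount.blob.core.windows.net/mycontainer',
    CREDENTIAL = BlobCredential
);

-- Load a CSV file straight into an existing table.
BULK INSERT dbo.Flights
FROM 'flights.csv'
WITH (DATA_SOURCE = 'MyBlobStorage', FORMAT = 'CSV', FIRSTROW = 2);

-- Or read the file ad hoc with OPENROWSET.
SELECT BulkColumn
FROM OPENROWSET(BULK 'flights.csv',
                DATA_SOURCE = 'MyBlobStorage',
                SINGLE_CLOB) AS f;
```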
Each execution of the application on any input file will generate 3 text output files with the results of the assessment. The application runs at a rate of about 1-2 calls per second (the max send rate cannot exceed 100/min as this is the API limit).
File 1 [AzureTextAPI_SentimentText_YYYYMMDDHHMMSS.txt] – the sentiment score between 0 and 1 for each individual line in the Source Text File. The entire line in the file is graded as a single data point. 0 is negative, 1 is positive.
File 2 [AzureTextAPI_SentenceText_YYYYMMDDHHMMSS.txt] – if the “Split Document into Sentences” option was selected then this contains each individual sentence in each individual line with the sentiment score of that sentence between 0 and 1. 0 is negative, 1 is positive. RegEx is used to split the line into sentences.
File 3 [AzureTextAPI_KeyPhrasesText_YYYYMMDDHHMMSS.txt] – the key phrases identified within the text on each individual line in the Source Text File.
Rolf has also put his code on GitHub, so read on and check out his repo.
So to give a concrete example, if the default file system was adl://amitadls.azuredatalakestore.net, then /user/filename.txt would resolve to adl://amitadls.azuredatalakestore.net/user/filename.txt.
Why does the default file system matter? The first answer to this is purely convenience. It is a heck of a lot easier to simply say /user/filename.txt than adl://amitadls.azuredatalakestore.net/user/filename.txt in code and configurations. Secondly, many components in Hadoop use relative paths by default. For instance, there are a fixed set of places, specified by relative paths, where various applications generate their log files. Finally, many ISV applications running on Hadoop specify important locations by relative paths.
Read on to see how.
Hadoop was developed for deployment over Linux running on bare metal. Cloud deployment implies virtual machines, and for Hadoop it’s a huge difference.
As detailed in other articles (for instance, Your Cluster Is an Appliance or Understanding Hadoop Hardware Requirements), bare-metal deployments have an inherent advantage over virtual machine deployments. The biggest of these is that they can use direct attached storage, i.e., local disks.
Not every Hadoop workload is storage I/O bound, but most are, and even when Hadoop seems to be CPU bound, much of the CPU activity is often either directly in service of I/O (marshaling, unmarshaling, compression, etc.) or in service of avoiding I/O (for example, building in-memory tables for map-side joins).
Read the whole thing.
When you create a new Azure SQL Database, you usually need to open the firewall to remotely administer or query that database with SSMS. One option is to create rules from the Azure Portal. That’s certainly a convenient way to do it when you create the database, but I prefer to keep my toolset to a minimum, and when the Azure Portal is not already open, I’d rather not have to open it just to define a few firewall rules.
Opening the firewall with SSMS is a bit of a chicken-and-egg problem: to connect to your database/server, you need to open the firewall. Fortunately, SSMS has a nice set of dialogs that call the underlying Azure API and open the firewall for the computer running SSMS.
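If you’d rather script the rules than click through the dialogs, the same firewall entries can be managed in T-SQL. Here is a minimal sketch; the rule name and IP addresses are placeholders you would replace with your own:

```sql
-- Server-level rule: run against the master database of the logical server.
EXECUTE sp_set_firewall_rule
    @name = N'HomeOffice',
    @start_ip_address = '203.0.113.42',
    @end_ip_address = '203.0.113.42';

-- Database-level rule: run in the user database itself.
EXECUTE sp_set_database_firewall_rule
    @name = N'HomeOffice',
    @start_ip_address = '203.0.113.42',
    @end_ip_address = '203.0.113.42';

-- Review existing rules.
SELECT * FROM sys.firewall_rules;            -- server-level (in master)
SELECT * FROM sys.database_firewall_rules;   -- database-level
```

Of course, you have to be able to connect before you can run any of that, which is exactly the chicken-and-egg problem the SSMS dialogs solve by going through the Azure API rather than the database connection itself.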