Last week in Part Two I went through how to create named volumes and map them to containers in order to persist data.
However, there is another option available in the form of data volume containers.
The idea here is that we create a volume in a container and then mount that volume into another container. Confused? Yeah, me too, but it’s best to learn by doing, so let’s run through how to use these things now.
Read through for the step-by-step description.
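As a teaser, the underlying pattern looks something like this (the image names and paths here are placeholders, not the post’s):

```
# 1. Create a container whose only job is to own a volume.
docker create -v /dbdata --name dbstore postgres /bin/true

# 2. Mount that container's volume into the containers doing real work.
docker run -d --name db1 --volumes-from dbstore postgres

# 3. The data outlives db1; for example, back it up from a throwaway container.
docker run --rm --volumes-from dbstore -v "$(pwd)":/backup ubuntu \
  tar cvf /backup/dbdata.tar /dbdata
```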
But as you can see from the marketplace page there, by default this plugin doesn’t know PowerShell. In my user settings file `settings.json`, I added PowerShell to the `cSpell.enabledLanguageIds` section so it’s always recognized.
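A minimal sketch of what that entry might look like (the other ids in the array are my assumption, not the post’s):

```
{
  // "powershell" is VSCode's stock language id for PowerShell files.
  "cSpell.enabledLanguageIds": [
    "markdown",
    "plaintext",
    "powershell"
  ]
}
```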
And with that, VSCode was giving me green squiggles under lots of words – both misspelled and not. Code Spellchecker doesn’t understand PowerShell in its default setup; it doesn’t have a dictionary for it. Just to get things started, I added a `cSpell.userWords` section to my `settings.json` and the squiggles started disappearing.
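A `cSpell.userWords` entry is just an array of strings; a hypothetical example (these particular words are mine, not the post’s):

```
{
  // Hypothetical words; add whatever the checker flags in your own scripts.
  "cSpell.userWords": [
    "hashtable",
    "pscustomobject",
    "runspaces"
  ]
}
```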
It’s an interesting post, so read the whole thing.
The following solutions were tested:
- Graphing with Grafana and Graphite
- Collection with collectd and Telegraf
- Storage with Graphite/Whisper and InfluxDB
We landed on a solution which uses InfluxDB, collectd, and Grafana. InfluxDB gave us the performance and flexibility we needed, collectd is a lightweight tool for collecting system performance information, and Grafana is a rich, interactive tool for visualizing the data.
In the sections below, we will provide all the steps necessary to set up this same solution in your environment quickly and easily. Details include step-by-step setup and configuration instructions, along with a pointer to the complete GitHub project.
I’ve been a big fan of Grafana since Hortonworks introduced it as the primary monitoring tool in HDP 2.5. We use Grafana extensively for monitoring SQL on Windows and SQL on Linux.
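To give a flavor of the collection side, a minimal collectd.conf fragment shipping metrics to InfluxDB’s collectd listener might look like this (the hostname is a placeholder; 25826 is the listener’s usual default port):

```
LoadPlugin cpu
LoadPlugin memory
LoadPlugin network

<Plugin network>
  # Send metrics to the InfluxDB host running the collectd input plugin.
  Server "influxdb.example.com" "25826"
</Plugin>
```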
Windows has the same setting. It’s in the Power Options under Control Panel, and for all servers, no matter what, it should be set to High Performance.
Here’s a free T-SQL script I wrote that will check the power settings for you. We don’t always have desktop access to a server when we’re checking diagnostics, but it’s good to know if performance problems can be addressed by a really simple fix that doesn’t require the vehicle to be at rest.
(The script also respects your settings, so if you had `xp_cmdshell` disabled, it’ll turn it off again when it’s done.)
Click through for the script.
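The linked script handles the edge cases; the kernel of the idea is roughly this (a sketch, not the author’s actual code):

```
-- Temporarily enable xp_cmdshell so we can shell out to powercfg.
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1; RECONFIGURE;

-- Report the active Windows power plan; anything other than
-- "High performance" on a server is worth fixing.
EXEC xp_cmdshell 'powercfg /getactivescheme';

-- Turn xp_cmdshell back off (the real script only does this
-- if it was disabled to begin with).
EXEC sp_configure 'xp_cmdshell', 0; RECONFIGURE;
```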
In the past few months I’ve had several clients reach out about migrating to In-Memory OLTP solutions. When the second or third request rolled in, I remember thinking, “Finally!” As in, I’ve been wondering why businesses haven’t been looking to implement In-Memory sooner. In fact, I added a section on In-Memory to our IEPTO2 course because, with the availability of the feature in SQL Server 2016 SP1 (yes, there are memory limitations, but it’s a good place to start), I figured we would see an uptick in interest. But here we are, halfway through 2017 and more than six months since the release of SQL Server 2016 SP1, and I still see a lot of hesitation around it.
I wrote a post over on SQLPerformance last week, Testing DML Statements for In-Memory OLTP, and that generated some discussions on Twitter. So I figured it was time for a post to find out what’s holding companies back. This isn’t a true poll – it’s a fill-in-the-blank. As in: post a comment. If you have considered migrating to In-Memory and then didn’t, I want to understand why. I recognize there are limitations – the product is still new and it’s evolving. But perhaps if we understand the largest inhibitors to migration we can help move them up on Microsoft’s list via Connect and other pathways. Understand I am asking for specifics here, for example: we can’t use In-Memory tables because they don’t support spatial data types. Whatever the reason, share it via a comment.
If I were to take a wild guess, the most common answers will be something like: …
I like using memory-optimized tables where I can, but have had much more success with memory-optimized table-valued parameters.
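For reference, a memory-optimized table-valued parameter is just a table type created WITH (MEMORY_OPTIMIZED = ON); a minimal sketch (names and columns are illustrative, and the database needs a MEMORY_OPTIMIZED_DATA filegroup):

```
-- Memory-optimized table types must declare at least one index.
CREATE TYPE dbo.OrderLineTVP AS TABLE
(
    OrderID   INT NOT NULL,
    ProductID INT NOT NULL,
    Quantity  INT NOT NULL,
    INDEX ix_OrderLineTVP NONCLUSTERED (OrderID)
)
WITH (MEMORY_OPTIMIZED = ON);
```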
As I travel to events focused on the numerous platforms the database is crucial to, I’m faced with peers frustrated with DevOps and considerable conversation dedicated to how it’s the end of the database administrator. It may be my imagination, but I’ve been hearing this same story, with the blame assigned elsewhere: either it’s Agile, DevOps, the cloud, or even the latest release of the database platform itself. The story’s the same: the end of the database administrator.
The most alarming and obvious pain point is that, in each of these scenarios, the database administrator ended up more of a focal point than when it all began. When it comes to DevOps, the specific challenges of the goal needed the DBA more than any of these storylines would suggest. As development hurtled at top speed to deliver what the business required, the DBA, and operations as a whole, delivered the security, the stability, and the methodologies to build automation at a level that the other groups simply never needed previously.
This is a useful rejoinder to fears of imminent job loss.
In a distributed system, the computers that make up the system can always fail independently of one another. In the case of Kafka, an individual broker can crash, or a network failure can happen while the producer is sending a message to a topic. Depending on the action the producer takes to handle such a failure, you can get different semantics:
At least once semantics: if the producer receives an acknowledgement (ack) from the Kafka broker and acks=all, it means that the message has been written exactly once to the Kafka topic. However, if a producer ack times out or receives an error, it might retry sending the message assuming that the message was not written to the Kafka topic. If the broker had failed right before it sent the ack but after the message was successfully written to the Kafka topic, this retry leads to the message being written twice and hence delivered more than once to the end consumer. And everybody loves a cheerful giver, but this approach can lead to duplicated work and incorrect results.
At most once semantics: if the producer does not retry when an ack times out or returns an error, then the message might end up not being written to the Kafka topic, and hence not delivered to the consumer. In most cases it will be, but in order to avoid the possibility of duplication, we accept that sometimes messages will not get through.
Exactly once semantics: even if a producer retries sending a message, the message is delivered exactly once to the end consumer. Exactly-once semantics is the most desirable guarantee, but also a poorly understood one, because it requires cooperation between the messaging system itself and the application producing and consuming the messages. For instance, if after consuming a message successfully you rewind your Kafka consumer to a previous offset, you will receive all the messages from that offset to the latest one all over again. This shows why the messaging system and the client application must cooperate to make exactly-once semantics happen.
Read on for a discussion of technical details. I appreciate how Neha linked to a 60+ page design document as well, for those wanting to dig into the details.
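For context, those guarantees map onto a handful of producer settings; a sketch of a 0.11-era producer config (the broker address and transactional id are placeholders):

```
bootstrap.servers=broker1:9092
# Wait for acknowledgement from all in-sync replicas.
acks=all
# Retry on transient failures; idempotence keeps retries from
# producing duplicates on the broker.
retries=2147483647
enable.idempotence=true
# Setting a transactional.id opts in to transactions, the basis
# for exactly-once processing across topics and partitions.
transactional.id=my-app-txn-1
```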
This section could probably have far fewer workarounds if I knew more about LDAP.
But I’m a data scientist at heart and I want to get things done.
If you ever dealt with Hadoop, you know that there are a bunch of non-interactive users, i.e., users who are not supposed to log in, such as `hadoop`, etc. These users are important to have. However, the groups with the same name are also important to have. For example, when using `airflow` and launching a `spark` job, the log folders will be created under the `airflow` user, in the `airflow` group.
LDAP, however, doesn’t allow you, to my knowledge, to have overlapping users/groups as Unix does. The way I solved it was to create, in LDAP, the `hdfs_user` (or …) to work around this limitation.
Also, Giovanni apparently lives in an interesting neighborhood.
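If I read the workaround correctly, the resulting entries look something like this LDIF sketch (the DNs, numeric IDs, and the hdfs/hdfs_user naming split are all my guesses at the shape, not the actual directory):

```
# The group keeps the service's name...
dn: cn=hdfs,ou=groups,dc=example,dc=com
objectClass: posixGroup
cn: hdfs
gidNumber: 10001

# ...while the non-interactive account takes a suffixed name,
# sidestepping the user/group name collision.
dn: uid=hdfs_user,ou=people,dc=example,dc=com
objectClass: account
objectClass: posixAccount
uid: hdfs_user
cn: hdfs_user
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/hdfs_user
loginShell: /sbin/nologin
```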
We are going to predict concrete strength using a neural network. A neural network can be used to predict a value or a class, or to predict multiple outputs; in this example, we are going to predict a single value, the concrete strength.
I have loaded the data into Power BI first, and in the Query Editor I am going to write some R code. First we need to do some data transformation. As you can see in the picture below (numbers 2, 3, and 4), the data is not all on the same scale, so we need to normalize it before applying any machine learning. I am going to write code for that (I already explained normalization in the KNN post). To write the R code, I just click on the R transformation component (number 5).
There’s a lot going on in this demo; check it out.
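For reference, the min-max normalization step is a one-liner in R; inside Power BI’s R transformation the current query arrives as a data frame named `dataset` (the sketch below assumes every column is numeric):

```
# Scale each numeric column to [0, 1]; Power BI exposes the
# query's rows to the R script as `dataset`.
normalize <- function(x) (x - min(x)) / (max(x) - min(x))
output <- as.data.frame(lapply(dataset, normalize))
```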
For more information, read this blog: End to End Loan ChargeOff Prediction Built Using Azure HDInsight Spark Clusters and SQL Server 2016 R Service
We have published two solution templates, deployable using two technology stacks, for the above chargeoff scenario:
- Loan Chargeoff Prediction using SQL Server 2016 R Services – Using a DSVM with SQL Server 2016 and Microsoft ML, this solution template walks through how to create and clean up a set of simulated data, train five different models, select the best-performing one, perform scoring with it, and save the prediction results back to SQL Server. A Power BI report connects to the prediction table and shows interactive reports to the user on the chargeoff prediction.
- Loan Chargeoff Prediction using HDInsight Spark Clusters – This solution demonstrates how to develop machine learning models for predicting loan chargeoff (including data processing, feature engineering, and training and evaluating models), deploy the models as a web service (on the edge node), and consume the web service remotely with Microsoft R Server on Azure HDInsight Spark clusters. The final predictions are saved to a Hive table, which can be visualized in Power BI.
These tend to be nice because they show you how the different pieces of the Azure stack tie together.