Press "Enter" to skip to content

Category: Elasticsearch

Debugging an Unresponsive Elasticsearch Cluster

Derric Gilling troubleshoots an Elasticsearch cluster:

Because of this sharding, a read or write request to an Elasticsearch cluster requires coordination between multiple nodes, as there is no “global view” of your data on a single server. While this makes Elasticsearch highly scalable, it also makes it much more complex to set up and tune than other popular databases like MongoDB or PostgreSQL, which can run on a single server.

When reliability issues come up, firefighting can be stressful if your Elasticsearch setup is buggy or unstable. Your incident could be impacting customers, which could negatively impact revenue and your business reputation. Fast remediation steps are important, yet spending a large amount of time researching solutions online during an incident or outage is not a luxury most engineers have. This guide is intended to be a cheat sheet for common issues that engineers running Elasticsearch can run into and what to look for.

Read on for several helpful tips.
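
When a cluster goes quiet, the first thing worth checking is usually cluster health and shard allocation. Here is a minimal sketch using Python's requests library; the cluster URL is a placeholder, and the fields come from the standard _cluster APIs.

    import requests

    ES_URL = "http://localhost:9200"  # placeholder: your cluster endpoint

    # Overall health: green, yellow (missing replicas), or red (missing primaries)
    health = requests.get(f"{ES_URL}/_cluster/health", timeout=10).json()
    print(health["status"], health["unassigned_shards"], health["number_of_pending_tasks"])

    # If the cluster is yellow or red, ask Elasticsearch to explain an unassigned shard
    # (this call returns an error when every shard is already assigned)
    explain = requests.get(f"{ES_URL}/_cluster/allocation/explain", timeout=10)
    print(explain.json())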


Removing a Node from Elasticsearch

The Big Data in Real World team spams the delete button:

Shutting down a node abruptly is not the right way to decommission or remove a node from the Elasticsearch cluster. Doing so will leave your replicated shards under-replicated, and it could cause disruption to the clients who are currently consuming data from Elasticsearch.

The proper way to decommission or remove a node from Elasticsearch is to add the host to the exclusion list.

Click through to learn how to do this.
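
For reference, the exclusion list lives in the cluster settings API. A minimal sketch with Python's requests, where the cluster URL and node IP are placeholders:

    import requests

    ES_URL = "http://localhost:9200"  # placeholder: your cluster endpoint
    NODE_IP = "10.0.0.5"              # placeholder: IP of the node being decommissioned

    # Excluding the node tells Elasticsearch to drain its shards onto the remaining nodes
    # (persistent settings work here as well)
    requests.put(
        f"{ES_URL}/_cluster/settings",
        json={"transient": {"cluster.routing.allocation.exclude._ip": NODE_IP}},
        timeout=10,
    )

    # Wait for relocating_shards to reach 0 before actually shutting the node down
    health = requests.get(f"{ES_URL}/_cluster/health", timeout=10).json()
    print(health["relocating_shards"])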


Creating an Alias in Elasticsearch

The Big Data in Real World team needs an alias:

An alias, as the name suggests, is another name for an index in Elasticsearch. It is quite useful when you want to refer to an index by a different name. So instead of performing a reindex to rename an index, or cloning it, you can create an alias to the index.

Click through for the script to create an alias, how you might use one, and the right way to delete one without removing the underlying index.
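
As a rough sketch of the mechanics, aliases are managed through the _aliases endpoint; the index and alias names below are made up for illustration:

    import requests

    ES_URL = "http://localhost:9200"  # placeholder: your cluster endpoint

    # Point the alias "accounts" at the physical index "accounts_v2"
    requests.post(
        f"{ES_URL}/_aliases",
        json={"actions": [{"add": {"index": "accounts_v2", "alias": "accounts"}}]},
        timeout=10,
    )

    # Queries can now target the alias instead of the physical index name
    hits = requests.get(f"{ES_URL}/accounts/_search", timeout=10).json()

    # Removing the alias leaves the underlying index untouched
    requests.post(
        f"{ES_URL}/_aliases",
        json={"actions": [{"remove": {"index": "accounts_v2", "alias": "accounts"}}]},
        timeout=10,
    )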


Migrating from Elasticsearch to Azure Data Explorer

Bhaskar Kakaraparthy does a logging switcharoo:

This article is an extension to an existing article on migrating data from Elasticsearch to Azure Data Explorer (ADX) using a Logstash pipeline, written as a step-by-step guide. In this article, we will explore the process involved in migrating data from one source (ELK) to another (ADX) and discuss some of the best practices and tools available to make the process as smooth as possible.

Using Logstash for data migration from Elasticsearch to Azure Data Explorer (ADX) was a smooth and efficient process. With the help of the ADX output plugin and Logstash, I was able to migrate approximately 30 TB of data in a timely manner. The configuration was straightforward, and the data transfer with the ADX output plugin was quick and reliable. Overall, the experience of using the ADX output plugin with Logstash for data migration was positive, and I would definitely use it again for similar projects in the future.

Read on to see how.
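
For a sense of the shape of such a pipeline, here is a rough sketch of a Logstash configuration with an elasticsearch input and the kusto output plugin. Every endpoint, credential, and object name below is a placeholder, and the exact option names should be checked against the plugin documentation rather than taken from this sketch.

    input {
      elasticsearch {
        hosts => ["http://source-es:9200"]            # placeholder: source cluster
        index => "logs-*"
        query => '{ "query": { "match_all": {} } }'
      }
    }

    output {
      kusto {
        path         => "/tmp/kusto/%{+YYYY-MM-dd-HH-mm}"   # local staging files
        ingest_url   => "https://ingest-<cluster>.<region>.kusto.windows.net"
        app_id       => "<service principal app id>"
        app_key      => "<service principal key>"
        app_tenant   => "<tenant id>"
        database     => "<adx database>"
        table        => "<adx table>"
        json_mapping => "<adx ingestion mapping>"
      }
    }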


Shipping Kafka Logs to Kibana via Filebeat

Shivani Sarthi uses Filebeat to perform log shipping:

To ship the Kafka logs, we will be using the Filebeat agent. A Filebeat agent is a lightweight shipper whose purpose is to forward and centralize log data.

For Filebeat to work, you need to install it as an agent on the desired servers. Filebeat then monitors the log files, collects the log events, and forwards them to Elasticsearch or Logstash for indexing.

Click through for an Ansible script to install Filebeat, integrate with Kafka, and communicate with Logstash for eventual querying via Kibana.
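
For context, the heart of such a setup is a small filebeat.yml that watches the Kafka log directory and forwards events to Logstash. This is a minimal sketch; the log path and Logstash host are placeholders.

    filebeat.inputs:
      - type: log
        paths:
          - /var/log/kafka/*.log        # placeholder: wherever the brokers write their logs

    output.logstash:
      hosts: ["logstash-host:5044"]     # placeholder: your Logstash endpoint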


Selective Document Copy in Elasticsearch

The Hadoop in Real World team shows how to migrate specific documents when building a new index:

As shown in the other post, we still use a reindex by specifying the source and destination, but this time we also specify a query in the source along with a term clause, which indicates that documents with state = ‘ny’ will be filtered from the source.

So only documents with state ny will be copied to the new index account_v3 with this reindex operation.

Click through for an example of how this works.
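
The underlying call is a _reindex with a query inside the source block. A minimal sketch with Python's requests, reusing the state value and destination index from the excerpt; the source index name is an assumption:

    import requests

    ES_URL = "http://localhost:9200"  # placeholder: your cluster endpoint

    # Copy only documents whose state field is "ny" into the new index
    body = {
        "source": {
            "index": "account_v2",                # assumption: name of the source index
            "query": {"term": {"state": "ny"}},
        },
        "dest": {"index": "account_v3"},
    }

    # For very large indices, add ?wait_for_completion=false and poll the returned task
    resp = requests.post(f"{ES_URL}/_reindex", json=body, timeout=300)
    print(resp.json())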


OpenSearch 1.0 Released

Andrew Hopp, et al, announce version 1.0 of OpenSearch:

OpenSearch is a community-driven, open source search and analytics suite derived from Apache 2.0 licensed Elasticsearch 7.10.2 & Kibana 7.10.2. It consists of a search engine daemon (OpenSearch), a visualization and user interface (OpenSearch Dashboards), and advanced features from Open Distro for Elasticsearch like security, alerting, anomaly detection and more.

Click through for the full rundown.
