
A Basic CRUD Operation Demo On Elasticsearch Using Kibana

In this article, we will walk through basic Elasticsearch CRUD operations using the Kibana Dev Tools console.

Elasticsearch:

Elasticsearch is a distributed, free, and open search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured. It provides simple REST APIs, distributed nature, speed, and scalability.

Elasticsearch stores data as JSON, so each record in Elasticsearch is called a document. Documents are queried or searched through an index, which holds the references to its documents.
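For example, a stored document together with the index metadata Elasticsearch returns for it looks roughly like this (a sketch; the field values are illustrative):

```json
{
  "_index": "user_info",
  "_id": "1",
  "_source": {
    "name": "naveen",
    "age": "28"
  }
}
```

The original JSON payload lives under '_source', while '_index' and '_id' identify where the document is stored.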

Elasticsearch use cases:
  • Application Search
  • Website Search
  • Enterprise search
  • Logging and log analytics
  • Infrastructure metrics and container monitoring
  • Application performance monitoring

Kibana:

Kibana is a free and open frontend application that sits on top of the Elastic Stack, providing search and data visualization capabilities for data indexed in Elasticsearch.

Run Elasticsearch And Kibana Docker Containers:

First, create a Docker network so that the 'Elasticsearch' and 'Kibana' containers can communicate with each other.
Command to create a network:
docker network create your_network_name_any_name

Let's pull and create the Elasticsearch docker container.
Command To Create Elasticsearch Docker Container:

docker run -d --name your_container_name_any_name --net network_name_just_created -p 9200:9200 -e "discovery.type=single-node" elasticsearch:7.16.3
  • [ -d ] runs the container in detached mode, i.e., as a background service.
  • [ --name your_container_name_any_name] defines the name of the docker container.
  • [ --net network_name_just_created] specifies the network the container should join.
  • [ -p 9200:9200] maps a host port (left-hand side, any free port of our choice) to the container port (right-hand side, fixed at 9200, Elasticsearch's default HTTP port).
  • [ -e "discovery.type=single-node"] sets the environment variable that runs Elasticsearch as a single node; this should be changed for production clusters.
  • [ elasticsearch:7.16.3] the image name and its version tag.
Let's pull and create the Kibana docker container.
Command To Create Kibana Docker Container:

docker run -d --name your_container_name_any_name --net network_name_just_created -p 5601:5601 kibana:7.16.3
  • [ -d ] runs the container in detached mode, i.e., as a background service.
  • [ --name your_container_name_any_name] defines the name of the docker container.
  • [ --net network_name_just_created] specifies the network the container should join.
  • [ -p 5601:5601] maps a host port (left-hand side, any free port of our choice) to the container port (right-hand side, fixed at 5601, Kibana's default port).
  • [ kibana:7.16.3] the image name and its version tag.
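As an alternative to the two docker run commands above, the same setup can be described in a docker-compose file (a sketch; Compose creates a shared network for the services automatically, and the service name 'elasticsearch' matches the host that the Kibana image looks for by default):

```yaml
version: "3"
services:
  elasticsearch:
    image: elasticsearch:7.16.3
    environment:
      # run Elasticsearch as a single node (not for production)
      - discovery.type=single-node
    ports:
      - "9200:9200"
  kibana:
    image: kibana:7.16.3
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
```

Start both services with 'docker-compose up -d'.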
After starting the Kibana container, wait a couple of minutes and then open the Kibana UI at the URL http://localhost:5601. We will run all our Elasticsearch CRUD operations from this Kibana UI (Dev Tools console). Note: the Kibana Docker image looks for Elasticsearch at http://elasticsearch:9200 by default, so either name the Elasticsearch container 'elasticsearch' or pass -e ELASTICSEARCH_HOSTS=http://your_es_container_name:9200 when starting the Kibana container.


Cluster Health Check:

Query to check the Elasticsearch cluster health.
GET _cluster/health

Nodes In A Cluster:

Query to get the information about the nodes in the cluster.
GET _nodes/stats

Create An Index:

In Elasticsearch, an 'Index' holds the references to its documents. Compared with SQL, an 'Index' is roughly equivalent to a 'Table'.

Syntax to create an index.
PUT Name_of_Your_Index

Sample query to create an index.
PUT user_info
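An index can also be created with explicit settings and mappings in the same request (a sketch; the shard/replica values and field types are illustrative, chosen to match the documents used below):

```
PUT user_info
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  },
  "mappings": {
    "properties": {
      "name": { "type": "text" },
      "age": { "type": "keyword" }
    }
  }
}
```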

Document Create Operation With HTTP Verbs POST Or PUT:

(1) Creating a document using HTTP POST makes Elasticsearch auto-generate the document's unique identifier.

The syntax for creating a document using the HTTP POST
POST Name_Of_Your_Index/_doc
{
 "field": "value"
}

Sample Query to create a document using HTTP POST
POST user_info/_doc
{
 "name":"naveen",
 "age": "28"
}
  • 'POST' - HTTP verb.
  • 'user_info' - index name.
  • '_doc' - the document endpoint.
  • 'name' & 'age' - payload properties to save as a new document.

(2) HTTP PUT can be used to create a document when we want to specify the id (the unique identifier of the document) explicitly. This will either create a new document or replace the existing document with the specified id.

The syntax for creating a document using HTTP PUT
PUT Name_of_your_Index/_doc/id_your_document
{
 "field": "value"
}

Sample query for creating a document using HTTP PUT
PUT user_info/_doc/1
{
 "name":"hemanth",
 "age": "28"
}
  • 'PUT' - HTTP verb
  • 'user_info' - index name.
  • '_doc' - keyword represents document endpoint.
  • '1' - identifier value specified by us for the document while creating it.
  • 'name', 'age' - payload properties to save as a document.

Document Create Operation With '_create' Endpoint:

The '_create' endpoint prevents overwriting: it throws an error if a document with the given id already exists, whereas HTTP PUT in the step above overrides (replaces) the existing document.

The syntax for creating a document using the '_create' endpoint.
PUT Name_of_your_index/_create/id
{
 "field": "value"
}

A Sample query for creating the document using the '_create' endpoint.
PUT user_info/_create/2
{
 "name":"Kumar",
 "age": "28"
}
  • 'PUT' - HTTP verb.
  • 'user_info' - index name.
  • '_create' - keyword that represents the create endpoint.
  • '2' - id specified to create a document.
  • 'name', 'age' - payload to save as a document.
(1) Creating a new record succeeds.
(2) Trying to create a record with the id of an already existing document fails with a conflict error.
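When the second request hits an existing id, Elasticsearch typically responds with a 409 conflict (an abridged sketch of the response body):

```json
{
  "error": {
    "type": "version_conflict_engine_exception"
  },
  "status": 409
}
```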

Document Read Operation:

Let's try to fetch a document from Elasticsearch.

The syntax for the document read operation
GET Name_of_your_Index/_doc/id

Sample query for the document read operation
GET user_info/_doc/1
  • 'GET' - HTTP verb.
  • 'user_info' - index name.
  • '_doc' - keyword that represents the document endpoint.
  • '1' - id of the document.
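Besides reading a single document by id, all documents in an index can be fetched with the '_search' endpoint (a minimal match_all sketch):

```
GET user_info/_search
{
  "query": {
    "match_all": {}
  }
}
```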

Document Update Operation:

Using the '_update' endpoint, we can update individual properties of a document.

The syntax for updating the document properties with '_update' endpoint.
POST Name_of_your_Index/_update/id
{
 "doc": {
  "field1": "value",
  "field2": "value"
 }
}

The sample query for updating the document properties with '_update' endpoint.
POST user_info/_update/1
{
 "doc":{
 "name":"hemanth kumar"
 }
}
  • 'POST' - HTTP verb.
  • 'user_info' - index name.
  • '_update' - keyword that represents the update endpoint.
  • '1' - id of the document.
  • 'doc' - the object whose properties are the fields to update in the document.
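Besides the 'doc' form, the '_update' endpoint also accepts a script for computed changes (a sketch; the 'city' field and its value are hypothetical additions, not part of the earlier documents):

```
POST user_info/_update/1
{
  "script": {
    "source": "ctx._source.city = params.city",
    "params": {
      "city": "Bengaluru"
    }
  }
}
```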

Document Delete Operation:

Let's try to delete a document from Elasticsearch.

The syntax for deleting the document.
DELETE Name_of_your_Index/_doc/id

The sample query for deleting the document.
DELETE user_info/_doc/1
  • 'DELETE' - HTTP verb.
  • 'user_info' - index name.
  • '_doc' - keyword that represents the document endpoint.
  • '1' - id of the document.
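Two related delete operations are worth knowing (sketches): '_delete_by_query' removes every document matching a query, and DELETE on the index name removes the whole index along with all its documents:

```
POST user_info/_delete_by_query
{
  "query": {
    "match": { "name": "naveen" }
  }
}

DELETE user_info
```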


Wrapping Up:

I hope this article delivered some useful information on basic CRUD operations in Elasticsearch using Kibana. I would love to have your feedback, suggestions, and better techniques in the comments section below.
