In Elasticsearch, an ingest pipeline lets you transform or modify your data before indexing; for example, you can remove or rename a field. A pipeline can contain multiple processors, each of which performs a specific change to the data, and the processors run in sequential order. After this processing, Elasticsearch adds the data to the index.
In this example, we create two set processors (one sets the value of the field 'section', the other the field 'default'), followed by a lowercase processor on the field 'email-id'.
PUT _ingest/pipeline/pipeline-example
{
  "description": "Description of my pipeline",
  "processors": [
    {
      "set": {
        "description": "Set section field value",
        "field": "section",
        "value": 10
      }
    },
    {
      "set": {
        "description": "Set default field to true",
        "field": "default",
        "value": true
      }
    },
    {
      "lowercase": {
        "field": "email-id"
      }
    }
  ]
}
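Before using the pipeline on real documents, you can dry-run it with the simulate API. A minimal sketch (the sample field values below are made up for illustration):

```
POST _ingest/pipeline/pipeline-example/_simulate
{
  "docs": [
    {
      "_source": {
        "section": "9",
        "default": "false",
        "email-id": "[email protected]"
      }
    }
  ]
}
```

The response shows the transformed _source for each test document, so you can verify the processors behave as expected without indexing anything.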
After execution, you will get a response like this:
{ "acknowledged" : true }
An "acknowledged" : true response confirms that the pipeline was created; if the pipeline definition is invalid, Elasticsearch instead returns an error describing what failed.
The next step is to use this pipeline when indexing a document. For this, we pass the pipeline query parameter with the pipeline's name as its value.
PUT /logs/_doc/101?pipeline=pipeline-example
{
  "section": "9",
  "default": "false",
  "email-id": "[email protected]"
}
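If you don't want to pass the pipeline parameter on every request, you can attach the pipeline to the index itself through the index.default_pipeline setting; a sketch, assuming the logs index already exists:

```
PUT /logs/_settings
{
  "index.default_pipeline": "pipeline-example"
}
```

With this setting in place, every document indexed into logs runs through pipeline-example unless a request overrides it with its own pipeline parameter.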
This request returns a standard index response ("result" : "created"). If you then fetch the document with GET /logs/_doc/101, you'll get a response like:
{
  "_index" : "logs",
  "_type" : "_doc",
  "_id" : "101",
  "_version" : 1,
  "_seq_no" : 0,
  "_primary_term" : 1,
  "found" : true,
  "_source" : {
    "section" : 10,
    "default" : true,
    "email-id" : "[email protected]"
  }
}
As you can see, the data has been transformed by the pipeline processors before being stored.
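Note that the pipeline only runs at index time, so documents indexed before it existed are unchanged. To run it over documents already in the index, you can use _update_by_query with the same pipeline parameter; a sketch:

```
POST /logs/_update_by_query?pipeline=pipeline-example
```

This re-processes every document in logs through the pipeline and reindexes the result in place.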
You can also create and manage pipelines from Kibana. Open the main menu, go to Stack Management, and select Ingest Node Pipelines.
From there you can create, edit, and delete pipelines without writing the JSON by hand.
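For completeness, pipelines can also be inspected and removed through the same REST API used to create them:

```
GET _ingest/pipeline/pipeline-example

DELETE _ingest/pipeline/pipeline-example
```

The GET request returns the pipeline's definition, and the DELETE request removes it; documents already processed by the pipeline are not affected.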