Fluent Bit (deployed with `helm install helm-charts-fluent-bit-0.19.19`) tails Kubernetes container logs and ships them to Elasticsearch, but the engine keeps retrying chunks with "failed to flush chunk" warnings. The forwarder logs were collected with:

```
$ sudo kubectl logs -n rtf -l app=external-log-forwarder
```

Environment: Elasticsearch 7.6.2, fluent/fluent-bit 1.8.12, CentOS 7.9, kernel 5.4 LTS.

The recurring error in the Elasticsearch bulk responses is:

```
Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text].
```
Symptoms from the debug log: the bulk requests succeed at the HTTP level, yet the chunks still fail to flush, because the per-record errors are carried inside the 200 response body. The `cannot increase buffer` warning means the 512000-byte response buffer overflowed, which is likely why the response can no longer be validated:

```
[2022/03/25 07:08:51] [ warn] [engine] failed to flush chunk '1-1648192110.850147571.flb', retry in 37 seconds: task_id=9, input=tail.0 > output=es.0 (out_id=0)
[2022/03/25 07:08:30] [debug] [outputes.0] HTTP Status=200 URI=/_bulk
[2022/03/24 04:21:08] [error] [outputes.0] could not pack/validate JSON response
[2022/03/24 04:19:24] [ warn] [http_client] cannot increase buffer: current=512000 requested=544768 max=512000
[2022/03/24 04:19:20] [debug] [input chunk] tail.0 is paused, cannot append records
```
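The `tail.0 is paused, cannot append records` lines are backpressure: the tail input hit its memory buffer limit while the output was stuck retrying. A minimal sketch of filesystem buffering so the input can keep ingesting during long retries (the path and limits are illustrative assumptions, not from the original config):

```ini
[SERVICE]
    storage.path           /var/log/flb-storage/   # assumed writable path on the node
    storage.max_chunks_up  128

[INPUT]
    Name           tail
    Tag            kube.*
    Path           /var/log/containers/*.log
    Mem_Buf_Limit  50MB          # assumed limit; tune to available memory
    storage.type   filesystem    # spill chunks to disk instead of pausing the input
```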
The es output configuration, as far as it appears in the report:

```ini
[OUTPUT]
    Name              es
    Match             kube.*
    Host              10.3.4.84
    Port              9200
    Logstash_Format   On
    Logstash_Prefix   node
    Retry_Limit       False
    #Write_Operation  upsert
```

Despite the warnings, specific application logs can still be searched in Elasticsearch: data is loaded, but some records never show up in Kibana. The output always starts working again after a restart, and sending the CONT signal to Fluent Bit shows it still holds the stuck chunks.
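Two settings make the failure visible (the option names come from the Fluent Bit es output plugin; treat this as a diagnostic sketch rather than a confirmed fix): `Buffer_Size False` lifts the 512000-byte HTTP client cap behind the `cannot increase buffer` warning, and `Trace_Error On` prints the per-record errors from the bulk response:

```ini
[OUTPUT]
    Name          es
    Match         kube.*
    Host          10.3.4.84
    Port          9200
    Buffer_Size   False   # unlimited response buffer, avoids "cannot increase buffer"
    Trace_Error   On      # dump the per-record bulk errors for failed chunks
```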
With `Retry_Limit False` the retry intervals keep growing while the same chunks are re-queued:

```
[2022/03/22 03:57:48] [ warn] [engine] failed to flush chunk '1-1647920426.171646994.flb', retry in 632 seconds: task_id=233, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:49] [ warn] [engine] failed to flush chunk '1-1647920934.181870214.flb', retry in 786 seconds: task_id=739, input=tail.0 > output=es.0 (out_id=0)
```

Similar reports elsewhere:
- chunks are getting stuck (fluent/fluent-bit #3014)
- Fluentbit gets stuck [multiple issues] (fluent/fluent-bit #3581)
- Chunk cannot be retried, failed to flush chunk (fluent/fluent-bit #5916)
- Failed to Flush user, file too large (#4497)
- fluent-bit crashes when using azure blob output (#2839)
- failed to flush the buffer (uken/fluent-plugin #600)
- Error failed to flush user | timeout (grafana/loki #2143)
After setting `Trace_Error On`, the bulk responses show the rejected records:

```
{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"eeMnun8BI6SaBP9l3vN-","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}}
```

Every failing record carries the same status 400 mapper_parsing_exception.
Root cause: Elasticsearch interprets dots in field names as object separators. The day's index already maps `kubernetes.labels.app` as `text` (from pods carrying a plain `app` label), so a record with the label `app.kubernetes.io/instance` would force `kubernetes.labels.app` to become an `object`, and Elasticsearch rejects the document with status 400. The conflict is permanent for that index, so the affected chunks can never flush, and with `Retry_Limit False` Fluent Bit retries them forever.
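The collision can be sketched outside Elasticsearch. A minimal Python model (hypothetical helper names; a simplification of ES dynamic mapping) that expands dotted keys into nested objects the way dynamic mapping does, plus the rename that Fluent Bit's `Replace_Dots On` option performs:

```python
def expand_dots(labels: dict) -> dict:
    """Expand dotted keys into nested objects, as ES dynamic mapping does."""
    out: dict = {}
    for key, value in labels.items():
        parts = key.split(".")
        node = out
        for part in parts[:-1]:
            node = node.setdefault(part, {})
            if not isinstance(node, dict):
                # mirrors: "Existing mapping for [...] must be of type object"
                raise TypeError(f"field [{key}]: [{part}] is already a non-object value")
        node[parts[-1]] = value
    return out


def replace_dots(labels: dict) -> dict:
    """What Replace_Dots On does: rewrite dots so no object expansion happens."""
    return {k.replace(".", "_"): v for k, v in labels.items()}


# One pod's labels map kubernetes.labels.app to a plain string ("text" in ES terms):
expand_dots({"app": "hello-world"})  # fine: {"app": "hello-world"}
# Another pod's dotted label needs labels.app to be an object instead -> conflict:
# expand_dots({"app": "x", "app.kubernetes.io/instance": "traefik"}) raises TypeError
# After the rename, the two labels no longer share a path:
replace_dots({"app": "x", "app.kubernetes.io/instance": "traefik"})
# -> {"app": "x", "app_kubernetes_io/instance": "traefik"}
```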
Replies from the thread:

- "Are you still receiving some of the records on the ES side, or has it stopped receiving records altogether?" Here Elasticsearch keeps receiving a small amount of data while the rejected bulk events pile up in retries.
- "You're sending more data than the cluster can index" is the other common cause of endless retries, but a throughput problem tends to reject whole chunks rather than return the same per-record 400.
- To inspect what actually goes over the wire: `sudo tcpdump -i eth0 tcp port 24224 -X -s 0 -nn` (24224 is the Fluentd forward port; use 9200 for the es output).
Once the mapping conflict is gone, retried chunks flush normally; a healthy pipeline logs lines like:

```
[2022/05/26 06:04:45] [ info] [engine] flush chunk '1-1653545056.585580723.flb' succeeded at retry 1: task_id=90, input=tail.0 > output=forward.0 (out_id=0)
[2022/05/26 06:04:46] [ info] [engine] flush chunk '1-1653545061.402631314.flb' succeeded at retry 1: task_id=102, input=tail.0 > output=forward.0 (out_id=0)
```
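If renaming labels is not an option, the conflict can also be avoided on the Elasticsearch side before an index is created. A hedged sketch using the ES 7.x legacy template API with the `flattened` field type (the template name and pattern are assumptions; `flattened` stores all labels as a single field without per-key mappings, so dotted and plain label names can coexist):

```
PUT _template/logstash-k8s-labels
{
  "index_patterns": ["logstash-*"],
  "mappings": {
    "properties": {
      "kubernetes": {
        "properties": {
          "labels": { "type": "flattened" }
        }
      }
    }
  }
}
```

Note the template only applies to indices created after it; the already-conflicting `logstash-2022.03.24` index keeps its mapping.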