Error in visualization [esaggs] Cannot read property ‘4’ of undefined

When we open any dashboard, no data is displayed. In addition, an error appears in the lower right corner of the screen: Error in visualization [esaggs] Cannot read property ‘4’ of undefined. This error occurs because the Elasticsearch data disk usage on the analytics node (which runs Elasticsearch) has exceeded the security threshold that Elasticsearch has set (70%). In the Kibana log file we can repeatedly observe the following lines:

{"type":"response","@timestamp":"2020-01-30T13:59:01Z","tags":["api"],"pid":5261,"method":"get","statusCode":200,"req":{"url":"/api/status","method":"get","headers":{"user-agent":"curl/7.29.0","host":"localhost:5601","accept":"*/*"},"remoteAddress":"127.0.0.1","userAgent":"127.0.0.1"},"res":{"statusCode":200,"responseTime":11,"contentLength":9},"message":"GET /api/status 200 11ms - 9.0B"}
{"type":"log","@timestamp":"2020-01-30T13:59:01Z","tags":["error","task_manager"],"pid":5261,"message":"Failed to poll for work: [cluster_block_exception] index [.kibana_task_manager] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)]; :: {\"path\":\"/.kibana_task_manager/_update/Maps-maps_telemetry\",\"query\":{\"if_seq_no\":113,\"if_primary_term\":12,\"refresh\":\"true\"},\"body\":\"{\\\"doc\\\":{\\\"type\\\":\\\"task\\\",\\\"task\\\":{\\\"taskType\\\":\\\"maps_telemetry\\\",\\\"state\\\":\\\"{\\\\\\\"runs\\\\\\\":1,\\\\\\\"stats\\\\\\\":{\\\\\\\"mapsTotalCount\\\\\\\":0,\\\\\\\"timeCaptured\\\\\\\":\\\\\\\"2020-01-18T23:00:01.612Z\\\\\\\",\\\\\\\"attributesPerMap\\\\\\\":{\\\\\\\"dataSourcesCount\\\\\\\":{\\\\\\\"min\\\\\\\":0,\\\\\\\"max\\\\\\\":0,\\\\\\\"avg\\\\\\\":0},\\\\\\\"layersCount\\\\\\\":{\\\\\\\"min\\\\\\\":0,\\\\\\\"max\\\\\\\":0,\\\\\\\"avg\\\\\\\":0},\\\\\\\"layerTypesCount\\\\\\\":{},\\\\\\\"emsVectorLayersCount\\\\\\\":{}}}}\\\",\\\"params\\\":\\\"{}\\\",\\\"attempts\\\":0,\\\"scheduledAt\\\":\\\"2020-01-14T11:50:35.793Z\\\",\\\"runAt\\\":\\\"2020-01-30T14:00:01.762Z\\\",\\\"status\\\":\\\"running\\\"},\\\"kibana\\\":{\\\"uuid\\\":\\\"44a2c02e-e349-4951-b306-5551bc636eb5\\\",\\\"version\\\":7020099,\\\"apiVersion\\\":1}}}\",\"statusCode\":403,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"cluster_block_exception\\\",\\\"reason\\\":\\\"index [.kibana_task_manager] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];\\\"}],\\\"type\\\":\\\"cluster_block_exception\\\",\\\"reason\\\":\\\"index [.kibana_task_manager] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];\\\"},\\\"status\\\":403}\"}"}

Verify the high disk space usage with the following command:

[root@opennac-analytics ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             7.8G     0    7.8G   0% /dev
tmpfs                7.8G     0    7.8G   0% /dev/shm
tmpfs                7.8G  177M    7.6G   3% /run
tmpfs                7.8G     0    7.8G   0% /sys/fs/cgroup
/dev/mapper/cl-root   50G  4.5G    46G   9% /
/dev/sda1           1014M  314M    701M  31% /boot
/dev/mapper/cl-home   20G   33M    20G   1% /home
/dev/mapper/cl-var   122G   90.2G  32.2G  74% /var
tmpfs                1.6G     0    1.6G   0% /run/user/0

As you can see, the /var file system has high disk usage. Elasticsearch’s indices are stored on this file system.
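
To see which directories under /var are actually consuming the space (the Elasticsearch data directory is typically /var/lib/elasticsearch, though the exact layout depends on the deployment), you can use du:

[root@opennac-analytics ~]# du -xh --max-depth=1 /var | sort -rh | head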

Check Elasticsearch’s indices and identify the largest ones:

[root@opennac-analytics scripts]# curl -XGET http://localhost:9200/_cat/indices
green open radius-2020.01.30    yPkjd_w6RKKEInEyKSxaZA 1 0     1440    0 396.7kb 396.7kb
green open opennac-2020.02.03   A1FSPD_tQxq2ej4Pj9xvIw 1 0    25509    0  44.5mb  44.5mb
green open opennac-2020.02.01   LxZ3SNDgTDCS2QEeoplzdg 1 0    16871    0  19.4mb  19.4mb
green open radius-2020.01.31    mcrewf3wR66r3kXOP2TmkQ 1 0     1440    0 385.6kb 385.6kb
green open opennac-2020.01.30   DMjGA941QKSV2SK9tsctaQ 1 0    11002    0    36mb    36mb
green open misc-2020.01.29      uSU5lSGgQU-zBkS3V0sZcA 1 0        1    0  13.5kb  13.5kb
green open radius-2020.02.02    7uDF9_LCQJaJpsQqFXHlYg 1 0     1440    0   462kb   462kb
green open bro-2020.02.04       4GpTfcnRQ9qzFqir4Vszbw 1 0   578099    0   2.4gb   2.4gb
green open misc-2020.01.30      JSiF98e6QyyFgqL47rUZ7Q 1 0        1    0  13.5kb  13.5kb
green open radius-2020.01.27    a8O3B7qhQECcMLwdonhZjw 1 0     1440    0 394.6kb 394.6kb
green open radius-2020.01.29    RHtfd_noQ0SXNXCdZvJ52A 1 0     1440    0   431kb   431kb
green open radius-2020.02.07    YMsT-gvsT-2J4_Wmsyahrw 1 0     1394    0 621.1kb 621.1kb
green open bro-2020.02.01       WPdAF_s1R6u4XW6wT3ZTag 1 0  6144969    0  20.6gb  20.6gb
green open opennac-2020.01.31   CX6cmm8KS4mPbsrmsCr9PQ 1 0    22458    0  63.3mb  63.3mb
green open opennac_ud           dHB-XoF5TVG-Fw55h-dPow 1 0    10406 4342  33.3mb  33.3mb
green open bro-2020.02.06       DKI7O6nvTwiO45V_-92FUg 1 0        2    0 249.7kb 249.7kb
green open radius-2020.01.24    WneZcEaFRlGqZi5SH3WkGQ 1 0     1440    0 408.1kb 408.1kb
green open radius-2020.02.03    jHQqI2HqT7qImqtx8W8vDQ 1 0     1436    0 532.7kb 532.7kb
green open opennac-2020.02.07   nNN1NYzJRYO7PsDahwGkLg 1 0     3104    0   4.3mb   4.3mb
green open radius-2020.01.28    42A4FSwdQnCZFOqzOrHj2Q 1 0     1440    0 417.8kb 417.8kb
green open radius-2020.01.26    9xwTTDefQdaQ51lwgViJoA 1 0     1440    0 397.1kb 397.1kb
green open opennac-2020.01.25   zmlq1oCPTXWxtFeqRpzCow 1 0       54    0  68.6kb  68.6kb
green open bro-2020.02.02       1kMV8LuzTNOasjgO1RSiRQ 1 0  6019000    0    23gb    23gb
green open radius-2020.02.04    qG-CVEd2SA-d0UPaTzocRw 1 0      634    0 208.2kb 208.2kb
green open .kibana_1            wqKmCrM_QfWpQ4om8m4iwQ 1 0     1742   14 757.8kb 757.8kb
green open .kibana_task_manager DfGYkrrZTU2bEh4CT-x-Fw 1 0        2    0  31.8kb  31.8kb
green open opennac-2020.02.04   qGtiVwXUQ2ul6Yp4HDehlg 1 0     3463    0   4.3mb   4.3mb
green open bro-2020.02.05       hlZQfRL2RK2wvsZM9POFHA 1 0        1    0  37.9kb  37.9kb
green open bro-2020.02.03       BkYjo5e8QjKeuuPMnBASOw 1 0 10171817    0    37gb    37gb
green open radius-2020.02.01    D-4lehdqRHuSDG2vghp1Qg 1 0     1440    0 393.1kb 393.1kb
green open bro-2020.02.07       NOT37UrXTWGCCvWrMM3-Tw 1 0   244342    0   1.3gb   1.3gb
green open opennac-2020.02.02   teRp8YDXTRauDMoaFs_AvA 1 0    16278    0  20.8mb  20.8mb
green open opennac-2020.01.29   cooKP-tdSVKrAj2GtVB-0Q 1 0      268    0 195.1kb 195.1kb
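
To make the largest indices easier to spot, the _cat/indices API can also sort its output by size:

[root@opennac-analytics scripts]# curl -XGET 'http://localhost:9200/_cat/indices?v&s=store.size:desc'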

As we can observe, the Bro indices are by far the largest: the Sensor (Bro) is very active and consumes a lot of disk space. So, we can calculate how much disk space Bro uses per day:

[root@opennac-analytics scripts]# curl -XGET http://localhost:9200/_cat/indices/bro*
green open bro-2020.02.04       4GpTfcnRQ9qzFqir4Vszbw 1 0   578099    0   2.4gb   2.4gb
green open bro-2020.02.01       WPdAF_s1R6u4XW6wT3ZTag 1 0  6144969    0  20.6gb  20.6gb
green open bro-2020.02.06       DKI7O6nvTwiO45V_-92FUg 1 0        2    0 249.7kb 249.7kb
green open bro-2020.02.02       1kMV8LuzTNOasjgO1RSiRQ 1 0  6019000    0    23gb    23gb
green open bro-2020.02.05       hlZQfRL2RK2wvsZM9POFHA 1 0        1    0  37.9kb  37.9kb
green open bro-2020.02.03       BkYjo5e8QjKeuuPMnBASOw 1 0 10171817    0    37gb    37gb
green open bro-2020.02.07       NOT37UrXTWGCCvWrMM3-Tw 1 0   244342    0   1.3gb   1.3gb

In one week we have stored about 84 GB of Sensor data, with a maximum of 37 GB in a single day. We can now work out how many days of Sensor data we can keep before exceeding the security threshold: 70% of 122 GB (/var) = 85 GB. Remember to also account for the other Elasticsearch indices and any other data stored on the /var file system, such as log files.
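
Rather than adding up the index sizes by hand, you can ask Elasticsearch for the sizes in bytes and sum them; a quick sketch:

[root@opennac-analytics scripts]# curl -s 'http://localhost:9200/_cat/indices/bro-*?bytes=b&h=store.size' | awk '{sum+=$1} END {printf "bro-* total: %.1f GB\n", sum/1024/1024/1024}'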

To resolve this we need to change the ElastCurator configuration, lowering the number of days of Bro index history that are kept. Edit /etc/elastCurator/action.yaml:

actions:
  1:
    action: delete_indices
    description: Delete bro-* > 7 days.
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: bro-
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 7
      exclude:

Lower the unit_count value (and update the description field to match). Save the configuration file and execute ElastCurator:

/usr/share/opennac/analytics/scripts/elasticsearch_purge_index_curator.sh
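
To preview which indices would be deleted before purging anything for real, Curator also supports a dry run. A sketch, assuming the Curator client configuration file lives next to the action file (the curator.yml path is an assumption; adjust it to your deployment):

[root@opennac-analytics scripts]# curator --dry-run --config /etc/elastCurator/curator.yml /etc/elastCurator/action.yaml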

The next step is to set the indices back to read-write (RW) by executing the read_only.sh script:

/usr/share/opennac/analytics/scripts/read_only.sh
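
If you prefer to clear the block manually, the equivalent Elasticsearch call (assuming the script applies the standard unblock, i.e. removing the read_only_allow_delete flag from all indices) is:

[root@opennac-analytics scripts]# curl -XPUT "http://localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d '{"index.blocks.read_only_allow_delete": null}'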

You can now verify the disk space usage and the indices again with the commands shown earlier in this troubleshooting.
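
For example, to confirm that the read-only block has been removed from the Kibana task manager index, inspect its settings; the index.blocks.read_only_allow_delete flag should no longer be present:

[root@opennac-analytics scripts]# curl -XGET "http://localhost:9200/.kibana_task_manager/_settings?pretty"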