StandardSSLContextService configuration

OK, that was an ownership issue: create_es_native_certs.sh creates files owned by root. Running chown to hand them to the container user allows ES to start correctly. I still see the same shard error when running create_es_native_credentials.sh.
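For reference, a minimal sketch of the ownership fix (the path and uid are assumptions based on this setup; the real cert directory may differ). The demo runs in a temp dir so it's safe to execute:

```shell
# Demo of the ownership fix; in the real deployment the command is roughly:
#   sudo chown -R 1000:1000 security/es_certificates
# (1000 being the uid the elasticsearch container runs as by default).
tmp=$(mktemp -d)
touch "$tmp/elasticsearch-1.pem"                    # stand-in for a root-owned cert file
chown "$(id -u):$(id -g)" "$tmp/elasticsearch-1.pem"
[ -r "$tmp/elasticsearch-1.pem" ] && echo "cert readable"
rm -rf "$tmp"
```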

The permission error included a message to check logs at /usr/share/elasticsearch/logs/elasticsearch-cogstack-cluster.log, but that file doesn’t exist; the folder contains only gc.log files.

When I run make start-elastic after removing -d from the up command, I see the following repeated many times in the console. I must have missed a step somewhere.

elasticsearch-1               | {"@timestamp":"2022-10-17T11:23:20.786Z", "log.level":"ERROR", "message":"security index is unavailable. short circuiting retrieval of user [kibanaserver]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][transport_worker][T#9]","log.logger":"org.elasticsearch.xpack.security.authc.esnative.NativeUsersStore","trace.id":"bf4c558e3668dcb5acd61f26f261e2ea","elasticsearch.cluster.uuid":"_UQQKW_jSQG95EelJsEafA","elasticsearch.node.id":"TqezORaURL-2ZICF-246KQ","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"elasticsearch-cogstack-cluster"}

This is slightly odd; I can’t replicate the issue, but it has been mentioned in some cases: Security index is unavailable. short circuiting retrieval of user with helm chart and letsencrypt - Elasticsearch - Discuss the Elastic Stack. I’ve pushed an update to the main docker-compose file that changes ELASTIC_USER to “elastic”; perhaps this will fix the issue. Make sure to remove the elasticsearch-1, elasticsearch-2 and cogstack-kibana containers and then delete their volumes.
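For anyone following along, the clean-up step looks something like the sketch below. The container names are the ones from this thread; the volume names are deployment-specific, so check `docker volume ls` first. The destructive docker commands are left commented so the sketch is a dry run:

```shell
# Dry-run sketch of the clean-up; uncomment the docker lines to actually remove things.
for c in elasticsearch-1 elasticsearch-2 cogstack-kibana; do
  echo "would remove container: $c"
  # docker rm -f "$c"
done
echo "then: docker volume ls                  # find the ES/Kibana data volumes"
echo "then: docker volume rm <volume-name>    # for each data volume found"
```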

Still no luck: the original error remains.

I’ve discovered the following error from Kibana:


cogstack-kibana               | [2022-10-17T22:56:58.021+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. connect ECONNREFUSED 172.21.0.2:9200


cogstack-kibana               | [2022-10-17T22:57:03.073+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. security_exception: [security_exception] Reason: unable to authenticate user [kibanaserver] for REST request [/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip]

And from Elasticsearch:

elasticsearch-2               | {"@timestamp":"2022-10-17T23:27:12.577Z", "log.level":"ERROR", "message":"exception during geoip databases update", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es02][generic][T#4]","log.logger":"org.elasticsearch.ingest.geoip.GeoIpDownloader","elasticsearch.cluster.uuid":"GyvR-lvsRCuLjF9AOPi9xg","elasticsearch.node.id":"az4qTCDsQPKiGOQRakDsJQ","elasticsearch.node.name":"es02","elasticsearch.cluster.name":"elasticsearch-cogstack-cluster","error.type":"org.elasticsearch.ElasticsearchException","error.message":"not all primary shards of [.geoip_databases] index are active","error.stack_trace":"org.elasticsearch.ElasticsearchException: not all primary shards of [.geoip_databases] index are active\n\tat org.elasticsearch.ingest.geoip.GeoIpDownloader.updateDatabases(GeoIpDownloader.java:134)\n\tat org.elasticsearch.ingest.geoip.GeoIpDownloader.runDownloader(GeoIpDownloader.java:274)\n\tat org.elasticsearch.ingest.geoip.GeoIpDownloaderTaskExecutor.nodeOperation(GeoIpDownloaderTaskExecutor.java:102)\n\tat org.elasticsearch.ingest.geoip.GeoIpDownloaderTaskExecutor.nodeOperation(GeoIpDownloaderTaskExecutor.java:48)\n\tat org.elasticsearch.server@8.3.3/org.elasticsearch.persistent.NodePersistentTasksExecutor$1.doRun(NodePersistentTasksExecutor.java:42)\n\tat org.elasticsearch.server@8.3.3/org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:769)\n\tat org.elasticsearch.server@8.3.3/org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.base/java.lang.Thread.run(Thread.java:833)\n"}
elasticsearch-1               | {"@timestamp":"2022-10-17T23:27:13.361Z", "log.level":"ERROR", "message":"security index is unavailable. short circuiting retrieval of user [kibanaserver]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es01][transport_worker][T#8]","log.logger":"org.elasticsearch.xpack.security.authc.esnative.NativeUsersStore","trace.id":"b168bb34f76607db01ea7724a62ccffe","elasticsearch.cluster.uuid":"GyvR-lvsRCuLjF9AOPi9xg","elasticsearch.node.id":"S-Rcbdn0SLmiECfd7nVVlQ","elasticsearch.node.name":"es01","elasticsearch.cluster.name":"elasticsearch-cogstack-cluster"}

I’ve run another test where I start the elasticsearch-? containers without Kibana.

I must be missing something simple. Should we arrange a walkthrough?

The errors are:

elasticsearch-2               | {"@timestamp":"2022-10-17T23:47:07.103Z", "log.level":"ERROR", "message":"error downloading geoip database [GeoLite2-ASN.mmdb]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[es02][generic][T#3]","log.logger":"org.elasticsearch.ingest.geoip.GeoIpDownloader","elasticsearch.cluster.uuid":"I8OLmKzIR0-zFhk4DYAq0A","elasticsearch.node.id":"bixZpHrqTaWAVnr9KyJiLA","elasticsearch.node.name":"es02","elasticsearch.cluster.name":"elasticsearch-cogstack-cluster","error.type":"org.elasticsearch.action.UnavailableShardsException","error.message":"[.geoip_databases][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.geoip_databases][0]] containing [index {[.geoip_databases][GeoLite2-ASN.mmdb_0_1666050336214], source[n/a, actual length: [1mb], max length: 2kb]}]]","error.stack_trace":"org.elasticsearch.action.UnavailableShardsException: [.geoip_databases][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.geoip_databases][0]] containing [index {[.geoip_databases][GeoLite2-ASN.mmdb_0_1666050336214], source[n/a, actual length: [1mb], max length: 2kb]}]]\n\tat org.elasticsearch.server@8.3.3/org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retryBecauseUnavailable(TransportReplicationAction.java:1074)\n\tat org.elasticsearch.server@8.3.3/org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:874)\n\tat org.elasticsearch.server@8.3.3/org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)\n\tat org.elasticsearch.server@8.3.3/org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onTimeout(TransportReplicationAction.java:1033)\n\tat 
org.elasticsearch.server@8.3.3/org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:345)\n\tat org.elasticsearch.server@8.3.3/org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:263)\n\tat org.elasticsearch.server@8.3.3/org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:649)\n\tat org.elasticsearch.server@8.3.3/org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:710)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n\tat java.base/java.lang.Thread.run(Thread.java:833)\n"}

That error repeats.

Then I fire up Kibana and see the following errors:

cogstack-kibana               | [2022-10-17T23:49:35.342+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. security_exception: [security_exception] Reason: unable to authenticate user [kibanaserver] for REST request [/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip]

Credential setting gives the same error:

ELASTIC_HOST not set, defaulting to ELASTIC_HOST=localhost
ELASTIC_PASSWORD not set, defaulting to ELASTIC_PASSWORD=kibanaserver
ELASTIC_USER not set, defaulting to ELASTIC_USER=elastic
KIBANA_USER not set, defaulting to KIBANA_USER=kibanaserver
KIBANA_PASSWORD not set, defaulting to KIBANA_PASSWORD=kibanaserver
INGEST_SERVICE_USER not set, defaulting to INGEST_SERVICE_USER=ingest_service
INGEST_SERVICE_PASSWORD not set, defaulting to INGEST_SERVICE_PASSWORD=ingest_service
Waiting for Elasticsearch availability
curl: (6) Could not resolve host: 
{
  "name" : "es01",
  "cluster_name" : "elasticsearch-cogstack-cluster",
  "cluster_uuid" : "I8OLmKzIR0-zFhk4DYAq0A",
  "version" : {
    "number" : "8.3.3",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "801fed82df74dbe537f89b71b098ccaff88d2c56",
    "build_date" : "2022-07-23T19:30:09.227964828Z",
    "build_snapshot" : false,
    "lucene_version" : "9.2.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
Setting kibana_system password
{"error":{"root_cause":[{"type":"unavailable_shards_exception","reason":"[.security-7][0] [1] shardIt, [0] active : Timeout waiting for [1m], request: indices:data/write/update"}],"type":"unavailable_shards_exception","reason":"[.security-7][0] [1] shardIt, [0] active : Timeout waiting for [1m], request: indices:data/write/update"},"status":503}Creating users
{
  "error" : {
    "root_cause" : [
      {
        "type" : "unavailable_shards_exception",
        "reason" : "[.security-7][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.security-7][0]] containing [index {[.security][user-kibanaserver], source[{\"username\":\"kibanaserver\",\"password\":\"$2a$10$4midskHtrgJm6jr8YAEd/.SFmhRDTXpOrHWTXsN/LYpKeaG71G1hW\",\"roles\":[\"kibana_system\",\"kibana_admin\",\"ingest_admin\"],\"full_name\":\"kibanaserver\",\"email\":\"cogstack@admin.net\",\"metadata\":null,\"enabled\":true,\"type\":\"user\"}]}] and a refresh]"
      }
    ],
    "type" : "unavailable_shards_exception",
    "reason" : "[.security-7][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.security-7][0]] containing [index {[.security][user-kibanaserver], source[{\"username\":\"kibanaserver\",\"password\":\"$2a$10$4midskHtrgJm6jr8YAEd/.SFmhRDTXpOrHWTXsN/LYpKeaG71G1hW\",\"roles\":[\"kibana_system\",\"kibana_admin\",\"ingest_admin\"],\"full_name\":\"kibanaserver\",\"email\":\"cogstack@admin.net\",\"metadata\":null,\"enabled\":true,\"type\":\"user\"}]}] and a refresh]"
  },
  "status" : 503
}
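Both failures above look like the credential script writing to .security-7 before its primary shard is allocated. One hedged workaround (the endpoint, auth, and variable names are my assumptions, not the script’s actual contents) is to poll cluster health until it reaches at least yellow before creating users. In the sketch below the real curl call is mocked out so the loop runs standalone:

```shell
# check_health stands in for the real call, which would be roughly:
#   curl -sk -u "$ELASTIC_USER:$ELASTIC_PASSWORD" \
#     "https://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=10s"
attempt=0
status=red
check_health() {
  attempt=$((attempt + 1))
  # simulate a cluster whose .security index becomes available on the third poll
  if [ "$attempt" -ge 3 ]; then status=yellow; else status=red; fi
}
check_health
while [ "$status" != yellow ] && [ "$status" != green ]; do
  # a real script would sleep a few seconds between polls
  check_health
done
echo "cluster ready after $attempt polls"
```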

Well, I tried installation on a fresh cloud VM. The default user has id 1000. The OS is Ubuntu 20.04 LTS, and Docker Compose is now a plugin, requiring edits here and there. In this case it appears to have worked up to the point of setting Elasticsearch passwords/certificates. Note that I haven’t touched NiFi at all. I’m going to retest with 18.04.

ELASTIC_PASSWORD not set, defaulting to ELASTIC_PASSWORD=kibanaserver
ELASTIC_USER not set, defaulting to ELASTIC_USER=elastic
KIBANA_USER not set, defaulting to KIBANA_USER=kibanaserver
KIBANA_PASSWORD not set, defaulting to KIBANA_PASSWORD=kibanaserver
INGEST_SERVICE_USER not set, defaulting to INGEST_SERVICE_USER=ingest_service
INGEST_SERVICE_PASSWORD not set, defaulting to INGEST_SERVICE_PASSWORD=ingest_service
Waiting for Elasticsearch availability
curl: (6) Could not resolve host: .
{
  "name" : "es01",
  "cluster_name" : "elasticsearch-cogstack-cluster",
  "cluster_uuid" : "aFztEOi3QPKRYJWdp5z0xg",
  "version" : {
    "number" : "8.3.3",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "801fed82df74dbe537f89b71b098ccaff88d2c56",
    "build_date" : "2022-07-23T19:30:09.227964828Z",
    "build_snapshot" : false,
    "lucene_version" : "9.2.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
Setting kibana_system password
{}Creating users
{
  "created" : true
}
{
  "created" : true
}

This is becoming stranger and stranger. A second cloud install, on an older Ubuntu, has also been successful (at least as far as being able to set credentials).

I have no clue what is going wrong with my local install, which I do need to fix as I use it as a demo system. Is there somewhere I can send logs for you to take a peek?

I’ve cleaned out a bunch of other VM-related packages, such as virt and qemu, and tested again.

The only errors I can see are in the kibana log:

cogstack-kibana  | [2022-10-18T10:59:14.722+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. connect ECONNREFUSED 172.21.0.2:9200
cogstack-kibana  | [2022-10-18T10:59:22.208+00:00][ERROR][elasticsearch-service] Unable to retrieve version information from Elasticsearch nodes. security_exception: [security_exception] Reason: unable to authenticate user [kibanaserver] for REST request [/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip]

Progress! I purged Docker, reinstalled, started with a fresh git clone, and eventually it all worked. The other difference was that I’m on a wired network in the office.

Purging Docker removed some containers relating to other projects that weren’t running, so I can’t tell what caused the problem. Onto the rest of the workflow.

ELASTIC_HOST not set, defaulting to ELASTIC_HOST=localhost
ELASTIC_PASSWORD not set, defaulting to ELASTIC_PASSWORD=kibanaserver
ELASTIC_USER not set, defaulting to ELASTIC_USER=elastic
KIBANA_USER not set, defaulting to KIBANA_USER=kibanaserver
KIBANA_PASSWORD not set, defaulting to KIBANA_PASSWORD=kibanaserver
INGEST_SERVICE_USER not set, defaulting to INGEST_SERVICE_USER=ingest_service
INGEST_SERVICE_PASSWORD not set, defaulting to INGEST_SERVICE_PASSWORD=ingest_service
Waiting for Elasticsearch availability
curl: (6) Could not resolve host: 
{
  "name" : "es01",
  "cluster_name" : "elasticsearch-cogstack-cluster",
  "cluster_uuid" : "HteKwC6YTKqHsV_JcGlOkQ",
  "version" : {
    "number" : "8.3.3",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "801fed82df74dbe537f89b71b098ccaff88d2c56",
    "build_date" : "2022-07-23T19:30:09.227964828Z",
    "build_snapshot" : false,
    "lucene_version" : "9.2.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
Setting kibana_system password
{}Creating users
{
  "created" : true
}
{
  "created" : true
}

@vladd

I thought I was home free after our discussion. I now have the MedCAT service up and running (verified by sending a query via curl). However, I’m unable to pull data out of Elasticsearch. I can see the index in Kibana, but I get the following bulletins:


02:32:56 UTC
DEBUG
69bcd20f-3b64-3558-5fe7-05c9d28b1ce2

ScrollElasticsearchHttp[id=69bcd20f-3b64-3558-5fe7-05c9d28b1ce2] No previous state found

02:32:56 UTC
DEBUG
69bcd20f-3b64-3558-5fe7-05c9d28b1ce2

ScrollElasticsearchHttp[id=69bcd20f-3b64-3558-5fe7-05c9d28b1ce2] Querying medical_reports_text/null from Elasticsearch: document:cancer

02:32:56 UTC
DEBUG
69bcd20f-3b64-3558-5fe7-05c9d28b1ce2

ScrollElasticsearchHttp[id=69bcd20f-3b64-3558-5fe7-05c9d28b1ce2] Sending Elasticsearch request to https://elasticsearch-1:9200/medical_reports_text/_search?q=document%3Acancer&size=20&scroll=1m

02:32:56 UTC
DEBUG
69bcd20f-3b64-3558-5fe7-05c9d28b1ce2

ScrollElasticsearchHttp[id=69bcd20f-3b64-3558-5fe7-05c9d28b1ce2] Received response from Elasticsearch with status code 200

02:32:56 UTC
ERROR
69bcd20f-3b64-3558-5fe7-05c9d28b1ce2

ScrollElasticsearchHttp[id=69bcd20f-3b64-3558-5fe7-05c9d28b1ce2] Failed to read FlowFile[filename=8bb447cc-c913-4118-beee-0567464604b2] from Elasticsearch due to null: java.lang.NullPointerException

Also, this works (note the quotes: without them the shell treats the & characters as background operators and drops the size and scroll parameters):

curl -k -X GET -u elastic:kibanaserver "https://localhost:9200/medical_reports_text/_search?q=document%3Acancer&size=20&scroll=1m"

I can punch document:cancer into the Kibana search box and get 49 results. Any clues to the problem? Another version-related change?
Thanks

@vladd

I’m now attempting to get OpenSearch working, as it will probably be what we use most.

I’m running into trouble with the Kibana certificates. I have the following volume mounts:


      - ../security/root-ca.pem:/usr/share/opensearch/config/root-ca.pem:ro
      - ../security/es_certificates/opensearch/admin.pem:/usr/share/opensearch/config/admin.pem:ro
      - ../security/es_certificates/opensearch/admin-key.pem:/usr/share/opensearch/config/admin-key.pem:ro
      - ../security/es_certificates/opensearch/elasticsearch/elasticsearch-1/elasticsearch-1-pkcs12.key:/usr/share/kibana/config/esnode1-pcks12.key:ro
      - ../security/es_certificates/opensearch/elasticsearch/elasticsearch-1/elasticsearch-1.pem:/usr/share/kibana/config/esnode1.pem:ro
      - ../security/es_certificates/opensearch/elasticsearch/elasticsearch-1/elasticsearch-1.key:/usr/share/kibana/config/esnode1.key:ro
      - ../security/es_certificates/opensearch/elasticsearch/elasticsearch-2/elasticsearch-2-pkcs12.key:/usr/share/kibana/config/esnode2-pcks12.key:ro
      - ../security/es_certificates/opensearch/elasticsearch/elasticsearch-2/elasticsearch-2.pem:/usr/share/kibana/config/esnode2.pem:ro
      - ../security/es_certificates/opensearch/elasticsearch/elasticsearch-2/elasticsearch-2.key:/usr/share/kibana/config/esnode2.key:ro

but I get errors from Kibana:

cogstack-kibana  |  FATAL  Error: ENOENT: no such file or directory, open 'config/esnode1.crt'

I can’t see anything that creates OpenSearch .crt files.
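For what it’s worth, the .pem files the cert script produces are already PEM-encoded X.509, so one hedged workaround (an assumption on my part, not the repo’s official fix) is to expose a .pem under the .crt name Kibana expects, either via the compose mount target or a plain copy. Demo in a temp dir:

```shell
# A PEM certificate copied to a .crt name is byte-identical; Kibana cares
# about the contents, not the extension. In compose this could instead be:
#   - ../security/.../elasticsearch-1.pem:/usr/share/kibana/config/esnode1.crt:ro
tmp=$(mktemp -d)
printf '%s\n' '-----BEGIN CERTIFICATE-----' 'placeholder-body' '-----END CERTIFICATE-----' \
  > "$tmp/elasticsearch-1.pem"
cp "$tmp/elasticsearch-1.pem" "$tmp/esnode1.crt"
cmp -s "$tmp/elasticsearch-1.pem" "$tmp/esnode1.crt" && echo "crt matches pem"
rm -rf "$tmp"
```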

Hello,

Just an update on this so that users can see the solution.

Certificates within the repo are now consistent for both ES versions, provided that users change the variables within “./deploy/elasticsearch.env” to match their ES distribution; by default it is OpenSearch.
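As a hedged illustration only (the variable name below is hypothetical; check the actual contents of ./deploy/elasticsearch.env in the repo for the real names and values), the switch is an env-file edit along these lines:

```shell
# Hypothetical ./deploy/elasticsearch.env fragment -- verify names against the repo.
ELASTICSEARCH_DISTRIBUTION=opensearch   # default; change to the Elastic value for native ES
```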

This would have caused headaches before, as people had to manually fiddle with “services.yml”, which was not ideal.