StandardSSLContextService configuration

Hi,
I’m attempting to run the workflows in the “ingest_raw_text_into_ES_and_annotate…” template. I’m stuck configuring the StandardSSLContextService required by the PutElasticsearchHttpRecord processor. Specifically, which truststore and keystore should be used, and what are the default passwords?

Thanks

Hello,

Yep, the default password is cogstackNifi; see the Workflows section of the CogStack-NiFi documentation.

The truststore and keystore paths should be filled in by default. If somehow they aren’t, the paths are /opt/nifi/nifi-current/es_certificates/opensearch/elasticsearch/elasticsearch-1/elasticsearch-1-keystore.jks and /opt/nifi/nifi-current/es_certificates/opensearch/elasticsearch/elasticsearch-1/elasticsearch-1-truststore.jks

If you are using Elasticsearch native, change the “opensearch” part of the path to “es_native”.

Thanks - that seems to work.

One problem I should document for anyone else searching: even though the file suffix is .jks, the “type” field needs to be set to PKCS12, which might be obvious to certificate experts.
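A quick way to check what format a store file actually is (regardless of its .jks suffix) is to ask keytool; a sketch, assuming the default path and password from the reply above:

```shell
# keytool prints "Keystore type: PKCS12" (or JKS) in its listing header.
# Path and storepass are the defaults mentioned earlier; adjust for your deployment.
keytool -list \
  -keystore /opt/nifi/nifi-current/es_certificates/opensearch/elasticsearch/elasticsearch-1/elasticsearch-1-keystore.jks \
  -storepass cogstackNifi | head -n 5
```

Whatever type the header reports is what the “type” field in the SSL context service should be set to.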

Spoke too soon.
The test of the “basic ingestion DB to ES” workflow gave the following error:

Any suggestions?

10:55:46 UTC

ERROR

f3aa6c49-015b-3007-a79a-813fca8d53ee

PutElasticsearchHttpRecord[id=f3aa6c49-015b-3007-a79a-813fca8d53ee] Routing to failure due to exception: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target - Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target - Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

Which version of Elasticsearch are you using?

I realised I made a mistake: the original es_native certs (currently the default) are at /opt/nifi/nifi-current/es_certificates/es_native/elasticsearch/elasticsearch-1/elasticsearch-1.p12. Use the same file for both the keystore and the truststore, and, as you mentioned, set the type to PKCS12.
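To sanity-check that the .p12 bundle is readable with the expected password before wiring it into NiFi, openssl can dump its contents; a sketch, assuming the default cogstackNifi password applies to this store:

```shell
# Show the certificates in the PKCS12 bundle without printing private keys.
# The password is an assumption (the cogstackNifi default mentioned above).
openssl pkcs12 -info -nokeys \
  -in /opt/nifi/nifi-current/es_certificates/es_native/elasticsearch/elasticsearch-1/elasticsearch-1.p12 \
  -passin pass:cogstackNifi | head -n 20
```

A "Mac verify error" here usually means the password is wrong, which would also make the NiFi controller service fail to enable.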

I’m using the default from the master branch, which the services.yml file says is 8.3.3.

I’ve set both to:

/opt/nifi/nifi-current/es_certificates/opensearch/elasticsearch/elasticsearch-1/elasticsearch-1.p12

I’m getting the following error:

21:15:57 UTC
ERROR
f3aa6c49-015b-3007-a79a-813fca8d53ee

PutElasticsearchHttpRecord[id=f3aa6c49-015b-3007-a79a-813fca8d53ee] Routing to failure due to exception: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
- Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
- Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

I’ve edited the path in the previous answer after seeing that it was still pointing to the opensearch version; it is: /opt/nifi/nifi-current/es_certificates/es_native/elasticsearch/elasticsearch-1/elasticsearch-1.p12

Just for reference, the certs are all mounted from the ./security/es_certificates/ folder and follow the same path convention.

I did try that before (I noticed the same thing and was going to ask) and got a different error. I’ll test again later.

A different problem this time, although perhaps I can ignore it, as it is a warning (see below). However, I can’t seem to use a curl command to check Elasticsearch.

06:46:24 UTC

WARNING

f3aa6c49-015b-3007-a79a-813fca8d53ee

PutElasticsearchHttpRecord[id=f3aa6c49-015b-3007-a79a-813fca8d53ee] Elasticsearch returned code 401 with message Unauthorized, transferring flow file to failure

The processor is set up to send to the default elasticsearch-1. However, I can’t get any diagnostics out of it at all using commands like the one below. Do curl commands need to change now that there are certificates everywhere?

curl -s -XGET http://admin:admin@localhost:9200/medical_reports_text/_count

Further: I managed to get a response via curl, but there are still problems. I’m guessing that the authentication is more complex now?

 curl -s -k  -XGET https://admin:admin@localhost:9200/|jq
{
  "error": {
    "root_cause": [
      {
        "type": "security_exception",
        "reason": "unable to authenticate user [admin] for REST request [/]",
        "header": {
          "WWW-Authenticate": [
            "Basic realm=\"security\" charset=\"UTF-8\"",
            "Bearer realm=\"security\"",
            "ApiKey"
          ]
        }
      }
    ],
    "type": "security_exception",
    "reason": "unable to authenticate user [admin] for REST request [/]",
    "header": {
      "WWW-Authenticate": [
        "Basic realm=\"security\" charset=\"UTF-8\"",
        "Bearer realm=\"security\"",
        "ApiKey"
      ]
    }
  },
  "status": 401
}

I note the following in services.yml:

 ELASTIC_USER=kibanaserver
 ELASTIC_PASSWORD=kibanaserver 

but I can’t authenticate if I replace admin with kibanaserver.

Although services.yml contains that user/pass combo, the default superuser for Elasticsearch is “elastic” (this is unchangeable), so the account that should work by default is “elastic:kibanaserver” for the ES clusters (only the password is set by the default env var). To generate the rest of the users, including the actual “kibanaserver” user, you should use /security/create_es_native_credentials.sh (if you inspect the file you will see additional env vars that can be set for the user/pass combos). I know this is only mentioned in the documentation as an optional step, but it is now mandatory; I will be amending the docs to reflect this.

If it still doesn’t work, I recommend deleting the containers and their respective volumes, then starting the containers and waiting 30 seconds before attempting to create the accounts.
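Put together, the credential setup and a follow-up check might look like this; a sketch, using env var names taken from the script’s own defaults shown later in this thread:

```shell
# Optional overrides before running the script (these are the script's defaults):
export ELASTIC_HOST=localhost
export ELASTIC_PASSWORD=kibanaserver   # password of the built-in "elastic" superuser
export KIBANA_USER=kibanaserver
export KIBANA_PASSWORD=kibanaserver

# Create the kibanaserver and ingest users (run from the security/ folder):
bash ./create_es_native_credentials.sh

# Afterwards the built-in superuser should authenticate;
# -k skips certificate verification for a quick self-signed-cert check:
curl -s -k -u "elastic:${ELASTIC_PASSWORD}" https://localhost:9200/
```

Note that only the elastic password is set by the env var; the elastic username itself is fixed by Elasticsearch.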

I think we’re making progress here, but still work to do.

The current problem is with create_es_node_cert.sh (note the underscores in the name, which are missing from the docs).

There are problems adding entries to the keystore if the existing entries use the default alias (mykey); see here.

This causes the last command of create_keystore.sh to fail. I’m sure there’s a simple fix by deleting the appropriate files and starting from scratch.

echo "Creating truststore key"
keytool -import -file $1.pem -keystore $1-"truststore.key" -storepass $KEYSTORE_PASSWORD -noprompt
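One way the duplicate-alias failure can be avoided is to give each import an explicit -alias instead of relying on keytool’s default mykey; a sketch of a possible fix, not the script’s actual code:

```shell
echo "Creating truststore key"
# Assumption: $1 is the node name passed to create_keystore.sh, as in the original.
# Naming the entry after the node avoids the "alias <mykey> already exists" error
# when multiple certificates are imported into the same store.
keytool -import -file "$1.pem" -alias "$1" \
  -keystore "$1-truststore.key" -storepass "$KEYSTORE_PASSWORD" -noprompt
```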

Since you are using ES native, you only need to execute create_es_native_certs.sh; it will generate all the appropriate certificates.

From the security folder, the scripts for ES native all have “es_native” in their names; always look to these when you need something. We had to add these as a separate set of scripts because, unfortunately, the original ES needs its own tool for the certs.

Thanks - testing this now. I have questions about es native vs opensearch too, but I’ll post a separate topic.

No luck; now Kibana is causing problems:

 bash ./create_es_native_credentials.sh
ELASTIC_HOST not set, defaulting to ELASTIC_HOST=localhost
ELASTIC_PASSWORD not set, defaulting to ELASTIC_PASSWORD=kibanaserver
ELASTIC_USER not set, defaulting to ELASTIC_USER=elastic
KIBANA_USER not set, defaulting to KIBANA_USER=kibanaserver
KIBANA_PASSWORD not set, defaulting to KIBANA_PASSWORD=kibanaserver
INGEST_SERVICE_USER not set, defaulting to INGEST_SERVICE_USER=ingest_service
INGEST_SERVICE_PASSWORD not set, defaulting to INGEST_SERVICE_PASSWORD=ingest_service
Waiting for Elasticsearch availability
curl: (6) Could not resolve host: 
{
  "name" : "es01",
  "cluster_name" : "elasticsearch-cogstack-cluster",
  "cluster_uuid" : "nNx6zk33TOOipJTlPjB_ig",
  "version" : {
    "number" : "8.3.3",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "801fed82df74dbe537f89b71b098ccaff88d2c56",
    "build_date" : "2022-07-23T19:30:09.227964828Z",
    "build_snapshot" : false,
    "lucene_version" : "9.2.0",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}

Setting kibana_system password
{"error":{"root_cause":[{"type":"unavailable_shards_exception","reason":"[.security-7][0] [1] shardIt, [0] active : Timeout waiting for [1m], request: indices:data/write/update"}],"type":"unavailable_shards_exception","reason":"[.security-7][0] [1] shardIt, [0] active : Timeout waiting for [1m], request: indices:data/write/update"},"status":503}Creating users
{
  "error" : {
    "root_cause" : [
      {
        "type" : "unavailable_shards_exception",
        "reason" : "[.security-7][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.security-7][0]] containing [index {[.security][user-kibanaserver], source[{\"username\":\"kibanaserver\",\"password\":\"$2a$10$9ixjIyIdAYRo0Dlcq5tVLOJXhDtPrzcyd6jXZfygKL2OLAkqT9bGi\",\"roles\":[\"kibana_system\",\"kibana_admin\",\"ingest_admin\"],\"full_name\":\"kibanaserver\",\"email\":\"cogstack@admin.net\",\"metadata\":null,\"enabled\":true,\"type\":\"user\"}]}] and a refresh]"
      }
    ],
    "type" : "unavailable_shards_exception",
    "reason" : "[.security-7][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.security-7][0]] containing [index {[.security][user-kibanaserver], source[{\"username\":\"kibanaserver\",\"password\":\"$2a$10$9ixjIyIdAYRo0Dlcq5tVLOJXhDtPrzcyd6jXZfygKL2OLAkqT9bGi\",\"roles\":[\"kibana_system\",\"kibana_admin\",\"ingest_admin\"],\"full_name\":\"kibanaserver\",\"email\":\"cogstack@admin.net\",\"metadata\":null,\"enabled\":true,\"type\":\"user\"}]}] and a refresh]"
  },
  "status" : 503
}

Hmm, are both ES instances online? You need to start both elasticsearch-1 and elasticsearch-2; the shard error is on the ES side, and they both need to sync. Kibana should work fine afterwards, and you should have no problems generating the accounts.
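The shard state can also be checked directly against the cluster; a sketch, assuming the elastic:kibanaserver default credentials discussed above and self-signed certs (hence -k):

```shell
# Overall cluster health; "status" should be green once both nodes have synced:
curl -s -k -u elastic:kibanaserver "https://localhost:9200/_cluster/health?pretty"

# Per-shard view of the security index that the 503 error complains about:
curl -s -k -u elastic:kibanaserver "https://localhost:9200/_cat/shards/.security*?v"
```

If the .security-7 primary shows UNASSIGNED here, the unavailable_shards_exception from the credentials script is expected.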

Docker ps does list both of them as alive. Is there any other way I should check?

af8f5dccdb60   docker.elastic.co/kibana/kibana:8.3.3                 "/bin/tini -- /usr/l…"   10 hours ago   Up 10 hours   0.0.0.0:5601->5601/tcp, :::5601->5601/tcp                                                                                         cogstack-kibana
8ada1aaa9b1d   docker.elastic.co/elasticsearch/elasticsearch:8.3.3   "/bin/tini -- /usr/l…"   10 hours ago   Up 10 hours   0.0.0.0:9201->9200/tcp, :::9201->9200/tcp, 0.0.0.0:9301->9300/tcp, :::9301->9300/tcp, 0.0.0.0:9601->9600/tcp, :::9601->9600/tcp   elasticsearch-2
a6eaf2ece6b1   docker.elastic.co/elasticsearch/elasticsearch:8.3.3   "/bin/tini -- /usr/l…"   10 hours ago   Up 10 hours   0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 0.0.0.0:9300->9300/tcp, :::9300->9300/tcp, 0.0.0.0:9600->9600/tcp, :::9600->9600/tcp   elasticsearch-1
d086b162d0ea   postgres:14.5-alpine                                  "docker-entrypoint.s…"   3 days ago     Up 10 hours   0.0.0.0:5554->5432/tcp, :::5554->5432/tcp                                                                                         cogstack-samples-db
bd898788ccaf   cogstacksystems/tika-service:0.5.1                    "/bin/bash /app/run.…"   3 days ago     Up 10 hours   0.0.0.0:8090->8090/tcp, :::8090->8090/tcp                                                                                         cogstack-tika-service
e9f09170a565   deploy_nifi-nginx                                     "/docker-entrypoint.…"   3 days ago     Up 10 hours   80/tcp, 0.0.0.0:8443->8443/tcp, :::8443->8443/tcp                                                                                 cogstack-nifi-nginx
9225e080176c   cogstacksystems/cogstack-nifi:latest                  "../scripts/start.sh"    3 days ago     Up 10 hours   8000/tcp, 8080/tcp, 10000/tcp, 10443/tcp, 0.0.0.0:8082->8443/tcp, :::8082->8443/tcp                                               cogstack-nifi

Nope, that was it; the output looks good! Could you check the elasticsearch-1 or elasticsearch-2 logs? If they seem fine and there are no access or connection issues, delete and re-create the Kibana container. The shard errors only appear if the setup somehow had more than the two nodes active at some point (which was never the case here).
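Checking the logs and recreating Kibana can be done along these lines; a sketch using the container names from the docker ps output above (the compose service name for Kibana is an assumption, so check your docker-compose file):

```shell
# The official ES images log to stdout, so docker captures them:
docker logs --tail 200 elasticsearch-1
docker logs --tail 200 elasticsearch-2

# Remove and re-create only the Kibana container:
docker rm -f cogstack-kibana
docker compose up -d kibana
```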

Are there any specific logs I should be checking? /var/log doesn’t appear to have anything specific to Elasticsearch:

docker exec -it 8ada1aaa9b1d /bin/bash

elasticsearch@8ada1aaa9b1d:~$ ls /var/log/
alternatives.log  apt  bootstrap.log  btmp  dpkg.log  faillog  lastlog  wtmp

The shell’s working directory is /usr/share/elasticsearch, there’s a log/gc.log, and I can’t see any errors there. Anywhere else?

I’m testing a fresh install using a new user with ID=1000, as a sanity check.

One thing I’ve noticed is that the test db has changed structure: there’s no medical_reports_text table anymore:

 psql -U test  -d db_samples 
psql (14.5)
Type "help" for help.

db_samples=# \dt
             List of relations
 Schema |       Name       | Type  | Owner 
--------+------------------+-------+-------
 public | annotations      | table | test
 public | documents        | table | test
 public | meta_annotations | table | test
 public | nlp_models       | table | test

Discovered the problem: the db dumps are managed by git lfs and aren’t necessarily retrieved by a default clone. A git lfs pull after the fact fixed the issue.
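For anyone else hitting this, the fix looks roughly like the following (run inside the cloned repo; requires git-lfs to be installed):

```shell
git lfs install   # one-time setup per machine
git lfs pull      # replace pointer stubs with the real LFS objects

# In the listing below, '*' next to an entry means the object is present
# locally, while '-' means it is still just a pointer file:
git lfs ls-files
```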

Still struggling to get the Elasticsearch containers configured. What is the order of the configuration scripts I should be running, and which ones should be run while the containers are up?

I just tried running bash create_es_native_certs.sh and then restarted the containers, but found that they are continually restarting, so something isn’t right.