Category Archives: Docker

Orient Me, Elasticsearch and Disk space

With IBM Connections 6 you can deploy the additional component Orient Me, which provides the first microservices of the new IBM Connections Pink. Orient Me is installed on top of IBM Spectrum Conductor for Containers (CFC), a new product that helps with clustering and orchestrating the Docker containers.

Klaus Bild showed in a blog post some weeks ago how to add a Kibana container that uses the deployed Elasticsearch to visualize the environment.

I found two issues with the deployed Elasticsearch container, but let me explain from the beginning.

On Monday I checked my demo server and the disk was full. After a little searching I found that Elasticsearch was using around 50 GB of disk space for its indices. On my server the data path for Elasticsearch is /var/lib/elasticsearch/data. With du -hs /var/lib/* you can check the used space.

You will see something like this; I would recommend creating a separate mount point for /var/lib (or two: /var/lib/docker and /var/lib/elasticsearch) on your CFC/Orient Me server:

du -hs /var/lib/*
15G /var/lib/docker
0   /var/lib/docker.20170425072316
6,8G    /var/lib/elasticsearch
451M    /var/lib/etcd

So I searched for a way to show and delete Elasticsearch indices.

On your CFC host run:

curl localhost:9200/_aliases


[root@cfc ~]# curl http://localhost:9200/_aliases?pretty=1
{
  "logstash-2017.06.01" : {
    "aliases" : { }
  },
  "logstash-2017.05.30" : {
    "aliases" : { }
  },
  "logstash-2017.05.31" : {
    "aliases" : { }
  },
  ".kibana" : {
    "aliases" : { }
  },
  "heapster-2017.06.01" : {
    "aliases" : {
      "heapster-cpu-2017.06.01" : { },
      "heapster-filesystem-2017.06.01" : { },
      "heapster-general-2017.06.01" : { },
      "heapster-memory-2017.06.01" : { },
      "heapster-network-2017.06.01" : { }
    }
  }
}

On my first try the list was “a little bit” longer. Since it is just a test server, I simply deleted the indices with:

curl -XDELETE 'http://localhost:9200/logstash-*'
curl -XDELETE 'http://localhost:9200/heapster-*'

For this post I checked these commands from my local machine, and curl -XDELETE ... with IP or hostname works too! Elasticsearch provides no real security for index handling, so a common practice is to put an Nginx server in front and only allow GET and POST on the URL. In a production environment you should therefore think about securing port 9200 (Nginx, iptables), or anybody could delete the indices. It is only logs and performance data, but I still don’t want to allow this.
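As a sketch of that idea, the following Nginx server block only lets GET and POST requests through to Elasticsearch. This is a minimal, illustrative example; the listen port and upstream address are assumptions, not part of the CFC setup:

```nginx
# Minimal sketch: proxy Elasticsearch and reject anything but GET/POST.
server {
    listen 9201;

    location / {
        # DELETE and PUT (e.g. deleting indices) are blocked here
        limit_except GET POST {
            deny all;
        }
        proxy_pass http://127.0.0.1:9200;
    }
}
```

You would then firewall port 9200 itself (iptables) and point clients such as Kibana at the proxy port instead.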

Now the server was running again and I dug a little deeper. I found that there is a container named indices-cleaner running on the server:

[root@cfc ~]# docker ps | grep clean 
6c1a52fe0e0e ibmcom/indices-cleaner:0.1 "cron && tail -f /..." 51 minutes ago Up 51 minutes k8s_indices-cleaner.a3303a57_k8s-elasticsearch-

So I checked this container:

docker logs 6c1a52fe0e0e

shows nothing, although it should normally show us the curator log. The container command is not chosen in the best way.

cron && tail -f /var/log/curator-cron.log

is supposed to show the log file of curator (a tool to delete Elasticsearch indices), but with && tail only starts after cron has exited with a true (zero) status. That is the reason why docker logs shows nothing.
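The difference between && and a plain command separator is easy to see in any shell (a tiny demo, unrelated to the actual container):

```shell
# 'a && b' runs b only if a exits with status 0 (true).
true && echo "runs after true"     # prints: runs after true
false && echo "never printed"      # prints nothing

# 'a ; b' runs b regardless of a's exit status.
false ; echo "runs anyway"         # prints: runs anyway
```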

I started a bash in the container with docker exec -it 6c1a52fe0e0e bash and checked the settings there.

cat /etc/cron.d/curator-cron 
59 23 * * * root /bin/bash / 
# An empty line is required at the end of this file for a valid cron file.

There is a cron job that runs every day at 23:59. The script it starts runs:

/usr/local/bin/curator --config /etc/curator.yml /action.yml

Within /action.yml the config shows that logstash-* indices should be deleted after 5 days and heapster-* indices after 1 day.
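For reference, a curator action file of that kind looks roughly like this. This is a sketch following the curator documentation, not the exact file shipped in the container:

```yaml
actions:
  1:
    action: delete_indices
    description: Delete logstash- prefixed indices older than 5 days.
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 5
```

A second action block with value `heapster-` and `unit_count: 1` would cover the heapster indices.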

I checked /var/log/curator-cron.log, but it was empty! So the cron job had never run. To test whether the script works as expected, I just started / and the log file showed:

cat /var/log/curator-cron.log
2017-05-31 08:17:01,654 INFO      Preparing Action ID: 1, "delete_indices"
2017-05-31 08:17:01,663 INFO      Trying Action ID: 1, "delete_indices": Delete logstash- prefixed indices. Ignore the error if the filter does not result in an actionable list of indices (ignore_empty_list) and exit cleanly.
2017-05-31 08:17:01,797 INFO      Deleting selected indices: [u'logstash-2017.05.08', u'logstash-2017.05.09', u'logstash-2017.05.03', u'logstash-2017.04.28', u'logstash-2017.04.27', u'logstash-2017.04.26', u'logstash-2017.05.18', u'logstash-2017.05.15', u'logstash-2017.05.12', u'logstash-2017.05.11']
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.05.08
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.05.09
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.05.03
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.04.28
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.04.27
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.04.26
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.05.18
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.05.15
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.05.12
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.05.11
2017-05-31 08:17:02,130 INFO      Action ID: 1, "delete_indices" completed.
2017-05-31 08:17:02,130 INFO      Preparing Action ID: 2, "delete_indices"
2017-05-31 08:17:02,133 INFO      Trying Action ID: 2, "delete_indices": Delete heapster prefixed indices. Ignore the error if the filter does not result in an actionable list of indices (ignore_empty_list) and exit cleanly.
2017-05-31 08:17:02,161 INFO      Deleting selected indices: [u'heapster-2017.04.26', u'heapster-2017.04.27', u'heapster-2017.04.28', u'heapster-2017.05.03', u'heapster-2017.05.15', u'heapster-2017.05.12', u'heapster-2017.05.11', u'heapster-2017.05.09', u'heapster-2017.05.08']
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.04.26
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.04.27
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.04.28
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.05.03
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.05.15
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.05.12
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.05.11
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.05.09
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.05.08
2017-05-31 08:17:02,366 INFO      Action ID: 2, "delete_indices" completed.
2017-05-31 08:17:02,367 INFO      Job completed.

I checked the log file daily after this research; since running the task manually, the cron job has been working as expected and curator does its job. No full disk since last week.

CFC uses Kubernetes, so stopping the indices-cleaner container immediately creates a new one! All changes are gone then and the cron job stops working. I didn’t want to wait until IBM provides a container update, so I searched for a way to run curator on a regular basis even when a new container appears.

I created a script:

#!/bin/bash
id=$(docker ps | grep indices-cleaner | awk '{print $1}')
docker exec -t $id / 
docker exec -t $id tail /var/log/curator-cron.log

and added it to my crontab on the CFC server.

crontab -e

59 23 * * * script >> /var/log/curator.log

When you use Kibana to analyse the logs, you may want to keep more indices available. docker inspect containerid shows us:

"Mounts": [
    {
        "Type": "bind",
        "Source": "/etc/cfc/conf/curator-action.yml",
        "Destination": "/action.yml",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
]

So you can edit /etc/cfc/conf/curator-action.yml on the CFC host instead of the file inside the container, and your changes will be persistent.

IBM Connect 2017 – slides, news and so on

This year I attended IBM Connect in San Francisco. In my eyes it was a great event and I enjoyed it very much.

Some announcements are very important for the future and evolution of the IBM portfolio:

  • IBM Connections Pink – Jason Gary and the IBM development team showed the future of IBM Connections. The basis will be Docker and a lot of other open-source products. I look forward to working with a completely new stack and am very curious about deployment, migration and scaling. It is a complete rewrite and will no longer need DB2 or WebSphere. A good summary was written by Glenn Kline.
  • panagenda ApplicationInsights – all IBM Domino customers with valid maintenance will get ApplicationInsights to analyze the code and usage of their Domino databases
  • IBM Domino will be updated through Feature Packs; we will get Java 8 and other long-awaited functionality
  • IBM announced a new lifetime IBM Champion: Julian Robichaux. Big congrats to him, well deserved!

Only a few session slides are available through the official conference page (we provided ours, but they are still not available there), so we uploaded them to SlideShare:

Best and Worst Practices for Deploying IBM Connections

IBM Connections Adminblast

All other session slides from my panagenda colleagues can be found in the panagenda SlideShare account.


During the 11-hour flight to San Francisco I used the time to update the XPages and Generic HTML widgets (OpenNTF) for IBM Connections 5.5 CR2. Frank van der Linden uploaded the changes today.

Better logstash filter to analyze SystemOut.log and some more

Last week I wrote a post about Using Docker and ELK to Analyze WebSphere Application Server SystemOut.log, but I wasn’t happy with my date filter and with how the WebSphere response code was analyzed. The main problem was that the WAS response code is not always at the beginning of a log message and does not always end with “:”.

I replaced the old filter (formerly 4 lines with match) with the following code:

grok {
        # was_shortname needs to be a regex, because numbers and $ can be in the word
        match => ["message", "\[%{DATA:wastimestamp} %{WORD:tz}\] %{BASE16NUM:was_threadID} (?<was_shortname>\b[A-Za-z0-9\$]{2,}\b) %{SPACE}%{WORD:was_loglevel}%{SPACE} %{GREEDYDATA:message}"]
        overwrite => [ "message" ]
        #tag_on_failure => [ ]
}
grok {
        # Extract the WebSphere response code
        match => ["message", "(?<was_responsecode>[A-Z0-9]{9,10})[:,\s\s]"]
        tag_on_failure => [ ]
}
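To sanity-check the response-code pattern outside of logstash, you can run an equivalent regular expression with grep. The sample log line here is made up for illustration:

```shell
# Extract a 9-10 character WebSphere message code, e.g. ADMN1022I
echo "ADMN1022I: The application started." | grep -oE '[A-Z0-9]{9,10}' | head -n1
# prints: ADMN1022I
```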


Using Docker and ELK to Analyze WebSphere Application Server SystemOut.log

I often get SystemOut.log files from customers or friends and help them analyze a problem. It is often complicated to find the right server and application that generates the real error, because most WebSphere applications (like IBM Connections or Sametime) are installed on multiple application servers and nodes. So you need to open several large files in your editor, scroll each to the needed timestamps and check the preceding lines for possible error messages.

Klaus Bild showed at several conferences and in his blog the functionality of ELK, so I thought about using ELK too. I started by building a virtual machine with the ELK stack (Elasticsearch, Logstash & Kibana) and imported my local logs and logs I got mailed. This is a good way to analyze your own environment, but adding some SystemOut.logs from outside is not ideal: it’s hard to remove this stuff from an ELK instance after analyzing it.

Then I found a tutorial to install ELK with Docker; there is even a great part 2 of this post, which helps with installing an ELK cluster. Just follow these blog posts and install Docker and docker-compose. It’s deployed really fast. In my case I don’t use Flocker; it’s enough to use a local data container for the Elasticsearch data.

Why do I use Docker instead of a virtual machine? Because it’s super easy to just drop the data container and start again with an empty database.
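As an illustration of that point, a compose file with a named volume makes the index data disposable. This is only a rough sketch; the image tags and paths are assumptions, not the exact setup from the linked tutorial:

```yaml
# docker-compose.yml sketch: ELK with a named volume for the index data
version: '2'
services:
  elasticsearch:
    image: elasticsearch:2.4
    volumes:
      - esdata:/usr/share/elasticsearch/data
  logstash:
    image: logstash:2.4
    links:
      - elasticsearch
  kibana:
    image: kibana:4.6
    ports:
      - "5601:5601"
    links:
      - elasticsearch
volumes:
  esdata: {}
```

docker-compose down -v removes the esdata volume, so the next docker-compose up starts with an empty database.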

My setup

I created a folder on my Mac to put all the stuff needed for the Docker images into it:

mkdir elk
cd elk