Social Connections 12 in Vienna

This time Social Connections is hosted in Vienna. The Austrian capital is a great place to meet new and old friends. I love the city and its awesome dialect; to listen and practise it a little bit, start with the lovely video about the only word you really need to survive in Vienna.

The next two days are packed with information about IBM Connections, and I hope to get in touch with the Pink development team to hear about the next planned features. The agenda promises a good mix for everyone interested in IBM Connections: developers, administrators and adoption experts can attend and discuss with speakers, IBM developers, product managers and IBM Business Partners.

My sessions

I will do one session with my colleague and mate Nico Meisenzahl: “IBM Connections Admin Blast”. The updated session was born for IBM Connect in February this year and will guide you through around 55 different tips and tasks you should be aware of while deploying and administering IBM Connections and the available add-ons like Forms/Surveys, Docs and CCM. The session takes place on Monday, 11:20-12:20, in Breakout 2 (lunch starts around 12:30, so we will take the chance to show all slides without rushing through the last ones and still let you get food in time).

My other session is a little bit shorter and completely new: “Automate IBM Connections deployments” on Tuesday, 9:40-10:10, in Breakout 1. Klaus Bild and I showed some scripts doing the same during Social Connections 7 in Stockholm, but for me these were complicated, and keeping up with updated requirements and new packages was a lot of work.
Over the last months I have worked with Ansible, which is a perfect match for automated installations and configurations. Ansible just needs an SSH connection to the servers you want to configure; no special client software is required. It is agentless on Linux, and you can do the same on Microsoft Windows through remote PowerShell.


All important tasks are already built in, so you can install software through the package manager of the Linux distribution in use (apt, dnf, yum…), change security settings, and edit ulimits or service configurations. Installing all Connections prerequisites (Installation Manager, WebSphere Application Server, DB2, IBM HTTP Server, TDI and all needed OS packages) takes about 20 minutes on 4 minimally installed CentOS VMs on my notebook. So you can directly start creating databases, updating peopledb and finishing the last tasks on WebSphere.
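To give an impression of what such automation looks like, here is a minimal playbook sketch. The host group name, package list and ulimit value are made-up examples for illustration, not the actual playbooks from the session:

```yaml
# Hypothetical example: the group name "cnx" and all values are illustrative.
- hosts: cnx
  become: true
  tasks:
    - name: Install OS packages needed by the Connections prerequisites
      yum:
        name: "{{ item }}"
        state: present
      with_items:
        - unzip
        - lsof

    - name: Raise the open files ulimit for all users
      pam_limits:
        domain: '*'
        limit_type: '-'
        limit_item: nofile
        value: '65536'
```

Run it with something like ansible-playbook -i hosts prereqs.yml; Ansible connects over SSH and applies the tasks idempotently, so you can rerun the playbook at any time.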

I think adding more classic scripts would only cost a lot of scripting time and would still need a lot of customization from environment to environment.

Working with Ansible is fun and not really complicated. Ansible has extensions for container management, and the Orient Me installer is based on it too. So every admin and developer should have a look at it.

Orient Me, Elasticsearch and Disk space

With IBM Connections 6 you can deploy the additional component Orient Me, which provides the first microservices of the new IBM Connections Pink. Orient Me is installed on top of IBM Spectrum Conductor for Containers (CFC), a new product that helps with clustering and orchestrating the Docker containers.

Klaus Bild showed in a blog post some weeks ago how to add a Kibana container and use the deployed Elasticsearch for visualizing the environment.

I found two issues with the deployed Elasticsearch container, but let me explain from the beginning.

On Monday I checked my demo server and the disk was full. I searched a little bit and found that Elasticsearch was using around 50 GB of disk space for the indices. On my server the data path for Elasticsearch is /var/lib/elasticsearch/data. With du -hs /var/lib/* you can check the used space.

You will see something like this, and I would recommend creating a separate mount point for /var/lib, or two (/var/lib/docker and /var/lib/elasticsearch), for your CFC/Orient Me server:

du -hs /var/lib/*
15G /var/lib/docker
0   /var/lib/docker.20170425072316
6,8G    /var/lib/elasticsearch
451M    /var/lib/etcd

So I searched how to show and delete Elasticsearch indices.

On your CFC host run:

curl localhost:9200/_aliases


[root@cfc ~]# curl http://localhost:9200/_aliases?pretty=1
{
  "logstash-2017.06.01" : {
    "aliases" : { }
  },
  "logstash-2017.05.30" : {
    "aliases" : { }
  },
  "logstash-2017.05.31" : {
    "aliases" : { }
  },
  ".kibana" : {
    "aliases" : { }
  },
  "heapster-2017.06.01" : {
    "aliases" : {
      "heapster-cpu-2017.06.01" : { },
      "heapster-filesystem-2017.06.01" : { },
      "heapster-general-2017.06.01" : { },
      "heapster-memory-2017.06.01" : { },
      "heapster-network-2017.06.01" : { }
    }
  }
}

On my first try, the list was “a little bit” longer. Since it is a test server, I just deleted the indices with:

curl -XDELETE 'http://localhost:9200/logstash-*'
curl -XDELETE 'http://localhost:9200/heapster-*'

For this post, I checked these commands from my local machine, and curl -XDELETE ... with an IP or hostname works too! Elasticsearch provides no real security for index handling, so a best practice is to put an Nginx server in front and only allow GET and POST on the URL. In a production environment you should think about securing port 9200 (Nginx, iptables), or anybody could delete the indices. It is only logs and performance data, but I don’t want to allow this.
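Such an Nginx front end could look like this minimal sketch (the listen port and the restriction to read/search requests are assumptions for illustration; adapt them to your setup, and make sure port 9200 itself is then only reachable locally):

```nginx
# Hypothetical reverse proxy in front of Elasticsearch.
server {
    listen 9201;

    location / {
        # Allow only GET and POST (HEAD is implied); DELETE, PUT etc. are denied.
        limit_except GET POST {
            deny all;
        }
        proxy_pass http://localhost:9200;
    }
}
```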

Now the server was running again, and I dug a little bit deeper. I found that there is an indices-cleaner container running on the server:

[root@cfc ~]# docker ps | grep clean 
6c1a52fe0e0e ibmcom/indices-cleaner:0.1 "cron && tail -f /..." 51 minutes ago Up 51 minutes k8s_indices-cleaner.a3303a57_k8s-elasticsearch-

So I checked this container:

docker logs 6c1a52fe0e0e

shows nothing. Normally it should show us the curator log. The container command is not chosen in the best way.

cron && tail -f /var/log/curator-cron.log

is supposed to show the log file of curator (a tool to delete Elasticsearch indices), but with && the tail only starts when cron has exited with status true. So that’s the reason docker logs shows nothing.
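A more robust container command would start cron and follow the log independently of cron’s result, for example (a sketch of such an entrypoint, not the actual image fix):

```shell
# Create the log file so tail does not fail before the first cron run,
# start the cron daemon, then follow the log regardless of cron's result.
touch /var/log/curator-cron.log
cron
exec tail -f /var/log/curator-cron.log
```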

I started a bash in the container with docker exec -it 6c1a52fe0e0e bash and checked the settings there.

cat /etc/cron.d/curator-cron 
59 23 * * * root /bin/bash / 
# An empty line is required at the end of this file for a valid cron file.

There is a cron job which runs every day at 23:59. The script that it starts runs:

/usr/local/bin/curator --config /etc/curator.yml /action.yml

Within /action.yml, the configuration says that logstash-* indices should be deleted after 5 days and heapster-* indices after 1 day.
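For reference, a curator action file with these two rules looks roughly like this. This is a sketch based on curator’s action file format; only the retention periods of 5 days and 1 day are taken from the deployed file, the rest may differ:

```yaml
# Sketch of an /action.yml; the exact deployed file may differ.
actions:
  1:
    action: delete_indices
    options:
      ignore_empty_list: True
    filters:
      - filtertype: pattern
        kind: prefix
        value: logstash-
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'
        unit: days
        unit_count: 5
  2:
    action: delete_indices
    options:
      ignore_empty_list: True
    filters:
      - filtertype: pattern
        kind: prefix
        value: heapster-
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'
        unit: days
        unit_count: 1
```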

I checked /var/log/curator-cron.log, but it was empty! So the cron job had never run. To test if the script works as expected, I just started / and the log file shows:

cat /var/log/curator-cron.log
2017-05-31 08:17:01,654 INFO      Preparing Action ID: 1, "delete_indices"
2017-05-31 08:17:01,663 INFO      Trying Action ID: 1, "delete_indices": Delete logstash- prefixed indices. Ignore the error if the filter does not result in an actionable list of indices (ignore_empty_list) and exit cleanly.
2017-05-31 08:17:01,797 INFO      Deleting selected indices: [u'logstash-2017.05.08', u'logstash-2017.05.09', u'logstash-2017.05.03', u'logstash-2017.04.28', u'logstash-2017.04.27', u'logstash-2017.04.26', u'logstash-2017.05.18', u'logstash-2017.05.15', u'logstash-2017.05.12', u'logstash-2017.05.11']
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.05.08
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.05.09
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.05.03
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.04.28
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.04.27
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.04.26
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.05.18
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.05.15
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.05.12
2017-05-31 08:17:01,797 INFO      ---deleting index logstash-2017.05.11
2017-05-31 08:17:02,130 INFO      Action ID: 1, "delete_indices" completed.
2017-05-31 08:17:02,130 INFO      Preparing Action ID: 2, "delete_indices"
2017-05-31 08:17:02,133 INFO      Trying Action ID: 2, "delete_indices": Delete heapster prefixed indices. Ignore the error if the filter does not result in an actionable list of indices (ignore_empty_list) and exit cleanly.
2017-05-31 08:17:02,161 INFO      Deleting selected indices: [u'heapster-2017.04.26', u'heapster-2017.04.27', u'heapster-2017.04.28', u'heapster-2017.05.03', u'heapster-2017.05.15', u'heapster-2017.05.12', u'heapster-2017.05.11', u'heapster-2017.05.09', u'heapster-2017.05.08']
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.04.26
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.04.27
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.04.28
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.05.03
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.05.15
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.05.12
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.05.11
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.05.09
2017-05-31 08:17:02,161 INFO      ---deleting index heapster-2017.05.08
2017-05-31 08:17:02,366 INFO      Action ID: 2, "delete_indices" completed.
2017-05-31 08:17:02,367 INFO      Job completed.

I checked the log file daily after this research, and since running the task manually, the cron job is working as expected and curator does its job. No full disk since last week.

CFC uses Kubernetes, so stopping the indices-cleaner container immediately creates a new one! All changes disappear then and the cron job stops working. I didn’t want to wait until IBM provides a container update, so I searched for a way to run curator on a regular basis even with a new container.

I created a script:

id=`docker ps | grep indices-cleaner | awk '{print $1}'` 
docker exec -t $id / 
docker exec -t $id tail /var/log/curator-cron.log

and added it to my crontab on the CFC server.

crontab -e
59 23 * * * script >> /var/log/curator.log

When you use Kibana to analyse the logs, you may want to have more indices available. docker inspect containerid shows us:

"Mounts": [
    {
        "Type": "bind",
        "Source": "/etc/cfc/conf/curator-action.yml",
        "Destination": "/action.yml",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
]

So you can edit the file /etc/cfc/conf/curator-action.yml on the CFC host instead of the file inside the container, and your changes will be persistent.

Deleting temp and wstemp on Microsoft Windows Server

Since some versions of IBM Connections, it is mandatory to delete the temp and wstemp directories of your Connections node after deployments or updates, or you end up with an old layout/design of the Connections GUI.

On a Windows Server system this can be a pain, because within temp/wstemp WebSphere Application Server creates a folder structure with node name, application server name and so on. In most cases the delete ends with the message “path too long”.

So you can start renaming folders and trying to delete them over and over again, a time-consuming activity which you need to repeat several times during an update.

There are several tips around, but most of them need an extra tool installed. I had searched for a solution for a long time, but never blogged about the way I normally use to avoid this. I remembered it during a Skype discussion with other Connections guys some days ago, so here is the easiest and fastest way to get rid of long paths:

Path too long? Use Robocopy (thanks Bert van Langen)

Robocopy is a great tool that has been installed by default since Windows Server 2008. I use it during migrations to move the IBM Connections shared data to another place, but it is just as easy to create an empty folder and mirror it over the temp folder of the WebSphere Application Server node.

Here as an example:

mkdir d:\empty 
robocopy d:\empty D:\IBM\WebSphere\AppServer\profiles\AppSrv01\temp /purge

But be careful: Robocopy does not use the recycle bin, so when you type the wrong path or forget the \temp, you end up searching the backup tapes.

IBM Connect 2017 – slides, news and more

This year I attended IBM Connect in San Francisco. In my eyes it was a great event and I enjoyed it very much.

Some announcements are very important for the future and evolution of the IBM portfolio:

  • IBM Connections Pink – Jason Gary and the IBM development team showed the future of IBM Connections. The basis will be Docker and a lot of other open source products. I look forward to working with a completely new stack and am very curious about deployment, migration and scaling. It is a complete rewrite and will no longer need DB2 or WebSphere. A good summary was written by Glenn Kline.
  • panagenda ApplicationInsights – all IBM Domino customers with valid maintenance will get ApplicationInsights to analyze the code and usage of their Domino databases
  • IBM Domino will be updated through feature packs; we will get Java 8 and other long-awaited functionality
  • IBM announced a new lifetime IBM Champion: Julian Robichaux, big congrats to him and well deserved

Only a few session slides are available through the official conference page (we provided ours, but they are still not available), so we uploaded them to SlideShare:

Best and Worst Practices for Deploying IBM Connections

IBM Connections Adminblast

All other session slides of my panagenda colleagues can be found in the panagenda SlideShare account.


During the 11 hour flight to San Francisco I used the time to update the XPages and Generic HTML Widgets (OpenNTF) for IBM Connections 5.5 CR2. Frank van der Linden uploaded the changes today.

Internet Explorer – Edge Mode without SPNEGO SSO

Last week I had an issue where some Domino servers no longer provided SSO through SPNEGO (the environment had worked for over 2 years). The environment uses the customized domcfg.nsf template of Andreas Artner; maybe that is related, but I don’t think so. Clients run Windows 7 with the latest Internet Explorer 11, and the Domino servers are 9.0.1 with the latest fix pack.

So what happened? The Domino servers have been placed in the “Local Intranet” zone of IE through Group Policy from the beginning. The Windows administrators started to enable “Enterprise Mode” for better handling of the compatibility mode, and one of the steps is to deactivate the “Display intranet sites in Compatibility View” option.

After this, all sites which are not explicitly configured in “Enterprise Mode” load in “Edge Mode” and no longer in quirks mode.

Nearly everything worked fine: XPages load every HTML5 element, the sites seem to deliver content faster, and so on.


But the configured SPNEGO authentication no longer kicks in; domcfg.nsf directly loads the fallback login form. I analyzed the traffic with Fiddler 4, but there was nothing suspicious in the trace. So we configured one Domino URL to load in Quirks Mode (IE level 5), and Desktop SSO worked immediately. We then played with the different levels, and it turned out that only “Edge Mode” in IE11 caused problems; when we went one step back to the IE 10 compatibility mode, everything worked: XPages, HTML5 and Desktop Single Sign-On.

I hope this saves you some time during troubleshooting. I think Enterprise Mode is a trending thing, and removing Quirks Mode is an important step.