Docker "Can not set cookie" Error – Check Semaphores

Seen one of these? I stopped a container, it would not restart, and I got this "Can not set cookie dm_task_set_cookie failed" error. After trying to manually remove the /dev/mapper/docker-* devices, I got a semaphore error.

So check your semaphores and see if you need to increase the limits.

ipcs -u

------ Messages Status --------
allocated queues = 0
used headers = 0
used space = 0 bytes

------ Shared Memory Status --------
segments allocated 0
pages allocated 0
pages resident 0
pages swapped 0
Swap performance: 0 attempts 0 successes

------ Semaphore Status --------
used arrays = 128
allocated semaphores = 128

And that was it: I had used all 128 arrays. Increasing all the values didn't really hurt, I don't think, but I could have just increased the number of arrays.
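To see which semaphore arrays are allocated (and, if it comes to that, clean up leaked ones), the ipcs and ipcrm tools from util-linux can be used; the semaphore id below is left as a placeholder:

```shell
# List all semaphore arrays with their keys, ids, and owners
ipcs -s || echo "ipcs not available"

# A leaked array can be removed by id - only when you are sure
# nothing is still using it:
# ipcrm -s <semid>
```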

ipcs -ls

------ Semaphore Limits --------
max number of arrays = 256
max semaphores per array = 500
max semaphores system wide = 64000
max ops per semop call = 200
semaphore max value = 32767

Increased in /etc/sysctl.conf:

# kernel.sem = 250 32000 100 128 - old values; the only number I really needed to increase was the last one, 128 to 256
kernel.sem = 500 64000 200 256
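The change can be applied at runtime without a reboot. A quick sketch (the four kernel.sem fields are SEMMSL, SEMMNS, SEMOPM, and SEMMNI: max semaphores per array, max semaphores system-wide, max ops per semop call, and max number of arrays):

```shell
# Current limits, in the order SEMMSL SEMMNS SEMOPM SEMMNI
cat /proc/sys/kernel/sem

# Apply the new limits immediately (needs root)
sysctl -w kernel.sem="500 64000 200 256" || echo "need root to change kernel.sem"

# Or reload everything from /etc/sysctl.conf
sysctl -p || echo "need root to reload sysctl.conf"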

This solved the issue for me. Of course, you could do the Windows thing and reboot… Here is the partial journalctl -xe output.

-- Subject: Unit docker.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel

-- Unit docker.service has failed.

-- The result is failed.
Jun 27 22:04:29 xxxxx.redhawk.org systemd[1]: Dependency failed for Run docker-cleanup every hour.
-- Subject: Unit docker-cleanup.timer has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel

-- Unit docker-cleanup.timer has failed.

-- The result is dependency.
Jun 27 22:04:29 xxxxx.redhawk.org systemd[1]: Job docker-cleanup.timer/start failed with result 'd
Jun 27 22:04:29 xxxxx.redhawk.org systemd[1]: Unit docker.service entered failed state.
Jun 27 22:04:29 xxxxx.redhawk.org systemd[1]: docker.service failed.
Jun 27 22:04:29 xxxxx.redhawk.org polkitd[639]: Unregistered Authentication Agent for unix-process
[user@xxxxx ~]#

:: Linux Tuning :: Large HPC System Tuning

We run this on our HPC nodes, which have 256GB of memory, but we also use the same tunables on systems with 16GB and up. These systems are 10Gb-connected and do a great deal of NFS traffic.

Again, turn off the TCP offload functions on your Ethernet cards.
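For example, with ethtool; "eth0" here is a placeholder for your actual NFS-facing interface:

```shell
#!/bin/sh
# Disable the common TCP offload features on the NIC (run as root).
# "eth0" is a placeholder interface name - substitute your own.
IFACE=${IFACE:-eth0}
for feat in tso gso gro lro; do
    # ethtool needs root, and not every NIC supports every feature
    ethtool -K "$IFACE" "$feat" off 2>/dev/null || echo "skipped $feat on $IFACE"
done
```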

# Tuning from HPC nodes
net.ipv4.ipfrag_low_thresh = 262144
net.ipv4.ipfrag_high_thresh = 393216
sunrpc.tcp_slot_table_entries = 128

kernel.shmall = 20971520
kernel.sem = 250 32000 100 128
fs.aio-max-nr=3145728
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 4194304
net.core.rmem_max = 16777216
net.core.wmem_default = 4194304
net.core.wmem_max = 16777216
net.core.netdev_max_backlog=3000
net.ipv4.tcp_window_scaling=1
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

vm.page-cluster = 4000
vm.pagecache = 90
vm.min_free_kbytes = 200000
vm.swappiness = 0
vm.dirty_background_ratio = 10
vm.dirty_expire_centisecs = 4000
vm.dirty_ratio = 30
vm.dirty_writeback_centisecs = 1500
vm.vfs_cache_pressure = 10000
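One caveat worth noting on the sunrpc line above: sunrpc.tcp_slot_table_entries only exists once the sunrpc module is loaded, so a value in /etc/sysctl.conf can be silently skipped at boot. On RHEL-era systems, setting it through modprobe.d is the more reliable route (a sketch; the file name is my choice):

```
# /etc/modprobe.d/sunrpc.conf
options sunrpc tcp_slot_table_entries=128
```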

:: Linux Tuning :: Large Systems Tuning

While I don't think this is at all finished, this is running on a Dell R920 with 1.5TB of memory. I think the cache dirty ratio might be too high, and I am still tweaking the net tunables. Remember that, in general, TCP offload should be turned off, as most Ethernet cards are too slow to support what the system and Linux can do nowadays.
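To see why the dirty ratio worries me, here is a rough back-of-the-envelope for this box, treating vm.dirty_ratio as a simple percentage of RAM (which is an approximation):

```shell
# Roughly how much dirty data vm.dirty_ratio = 30 allows on 1536 GB of RAM
mem_gb=1536
dirty_ratio=30
echo "$((mem_gb * dirty_ratio / 100)) GB of dirty pages before writers block"
```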

This system is 10Gb-connected and does heavy NFS traffic.

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

net.core.rmem_default = 16777216
net.core.rmem_max = 67108864
net.core.wmem_default = 16777216
net.core.wmem_max = 67108864
sunrpc.tcp_slot_table_entries = 1024

net.ipv4.ipfrag_low_thresh = 262144
net.ipv4.ipfrag_high_thresh = 393216

# Net tuning
net.ipv4.tcp_rmem = 393216 1024000 67108864
net.ipv4.tcp_wmem = 393216 1024000 67108864
##
net.ipv4.tcp_mem = 393216 1024000 67108864
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_sack = 0
net.ipv4.tcp_window_scaling = 1
fs.file-max=327679
fs.aio-max-nr=3145728
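Worth flagging on the net block above: tcp_mem is measured in pages (typically 4KiB), while tcp_rmem and tcp_wmem are in bytes, so reusing the same three numbers means something very different. A quick conversion sketch:

```shell
# Convert the tcp_mem values from pages to MiB (assuming 4 KiB pages)
page=4096
for pages in 393216 1024000 67108864; do
    echo "$pages pages = $((pages * page / 1048576)) MiB"
done
```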

vm.page-cluster = 4000
vm.min_free_kbytes = 200000
vm.swappiness = 0
vm.dirty_background_ratio = 10
vm.dirty_expire_centisecs = 4000
vm.dirty_ratio = 30
vm.dirty_writeback_centisecs = 1500
vm.vfs_cache_pressure = 10000

vm.lowmem_reserve_ratio = 512 512 64

:: Linux Tuning :: Tuning the Desktop

I am going to start posting tuning parameters for different systems configurations.

This is for a small-memory (< 4GB) desktop, specifically a 3GB laptop.

This applies to RHEL 5 and 6; some tunables change in 6, but most are ok.

Don't use this for a server; it would be really bad. Obviously, there are network tuning parameters here, and I presume NFS access/needs. These parameters are well documented on the internet; the why of my particular values is really my secret sauce.

At a later date, I will describe the why and how of these configurations, but for now, play with them if you wish.

Of course these drop in /etc/sysctl.conf.
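Once they're in /etc/sysctl.conf, sysctl -p (as root) loads them, and the current values can be read back from /proc at any time:

```shell
# Reload /etc/sysctl.conf (needs root)
sysctl -p || echo "need root to reload sysctl.conf"

# Spot-check the vm tunables by reading them back from /proc
for t in swappiness dirty_ratio dirty_background_ratio vfs_cache_pressure; do
    printf 'vm.%s = %s\n' "$t" "$(cat /proc/sys/vm/$t)"
done
```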

net.ipv4.tcp_syncookies = 1
net.core.rmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_default = 262144
net.core.wmem_max = 16777216
net.core.optmem_max = 1048576
net.core.somaxconn = 512
sunrpc.tcp_slot_table_entries = 128
net.ipv4.ipfrag_low_thresh = 262144
net.ipv4.ipfrag_high_thresh = 393216
net.ipv4.tcp_mem = 786432  1048576 16777216
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.tcp_wmem = 8192 65536 16777216
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0

vm.page-cluster = 20
vm.min_free_kbytes = 200000
vm.swappiness = 0
vm.dirty_background_ratio = 1
vm.dirty_expire_centisecs = 1000
vm.dirty_ratio = 2
vm.dirty_writeback_centisecs = 250
vm.vfs_cache_pressure = 10000
vm.zone_reclaim_mode = 1
vm.laptop_mode = 0