kdb+ on Non-Uniform Memory Access (NUMA) Hardware
This section only applies to Linux.
For the majority of use cases, running kdb+ on a NUMA system can cause a number of operational issues, including high system-process usage and poor performance. For these systems you should therefore disable NUMA and set an interleaved memory policy. kdb+ itself is unaware of whether NUMA is enabled.
If possible, disable NUMA in the BIOS. Otherwise, use the technique below.
To fully disable NUMA and set an interleave memory policy, start kdb+ with the numactl command as follows
numactl --interleave=all q
AND disable zone reclaim in the proc settings as follows:
echo 0 > /proc/sys/vm/zone_reclaim_mode
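As a quick sanity check that both settings are in effect, the values can be read back (a sketch using the paths above; numactl may not be installed on every system):

```shell
# zone_reclaim_mode should read 0 once zone reclaim is disabled
zrm=$(cat /proc/sys/vm/zone_reclaim_mode 2>/dev/null || echo "not available")
echo "zone_reclaim_mode: $zrm"
# numactl --show prints the memory policy of the current process
numactl --show 2>/dev/null || true
```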
A relevant post on this topic is The MySQL "swap insanity" problem and the effects of NUMA. Although the post is about the impact on MySQL, the issues are the same for other databases such as kdb+.
To find out whether NUMA is enabled in your BIOS, use:
dmesg | grep -i numa
To see whether NUMA is in effect for an individual process, check that process's NUMA information under /proc.
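For example (an illustration, not from the original text: /proc/&lt;pid&gt;/numa_maps exists only on NUMA-enabled kernels, and `$$` below is just the current shell's PID):

```shell
# Show per-node memory placement for a given PID (here: this shell, $$).
# Replace $$ with the PID of your q process.
if [ -r "/proc/$$/numa_maps" ]; then
  numa_status=$(head -n 5 "/proc/$$/numa_maps")
else
  numa_status="numa_maps not available (kernel built without NUMA support)"
fi
echo "$numa_status"
```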
Huge Pages and Transparent Huge Pages (THP)
A number of customers have been impacted by bugs in the Linux kernel with respect to Transparent Huge Pages (THP). These issues manifest as process crashes, stalls at 100% CPU usage, and sporadic performance degradation. Until further notice, we strongly recommend that Linux systems running kdb+ have THP disabled. Other database vendors report similar issues with THP, e.g. Oracle.
Note that disabling transparent huge pages isn’t possible via sysctl(8). Rather, it requires manually echoing settings into /sys/kernel at or after boot. Add the following to /etc/rc.local, or run it by hand:
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
  echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
  echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
Red Hat users may need a slightly different path:
echo never >/sys/kernel/mm/redhat_transparent_hugepage/enabled
Alternatively, THP can be disabled at boot by adding transparent_hugepage=never to the kernel command line via GRUB.
n.b. kdb+ must be restarted to pick up the new setting.
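To confirm the setting has taken effect after a restart, the same sysfs file can be read back; the active mode is shown in square brackets (a sketch covering both path variants mentioned above):

```shell
# Expect output like "always madvise [never]" once THP is disabled
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null \
  || cat /sys/kernel/mm/redhat_transparent_hugepage/enabled 2>/dev/null \
  || echo "THP interface not present")
echo "THP mode: $thp"
```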
Monitoring free disk space
In addition to monitoring free disk space on the partitions you usually write to, ensure that you also monitor free space on /tmp on Unix systems, since kdb+ uses this area to capture the output of system commands such as system"ls".
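A minimal check along these lines, suitable for running from cron (the 1GB threshold is an arbitrary example, not a recommendation):

```shell
# Warn when /tmp has less than 1GB (1048576 kB) free
avail_kb=$(df -Pk /tmp | awk 'NR==2 {print $4}')
if [ "$avail_kb" -lt 1048576 ]; then
  echo "WARNING: /tmp has only ${avail_kb}kB free"
else
  echo "/tmp OK: ${avail_kb}kB free"
fi
```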
Compressed files and vm.max_map_count
This section only applies to Linux.
If you find that kdb+ is segfaulting (crashing) when accessing compressed files, try increasing the Linux kernel parameter vm.max_map_count:
As root:
$ sysctl vm.max_map_count=16777216
and/or make the change permanent via /etc/sysctl.conf:
As root:
$ echo "vm.max_map_count = 16777216" | tee -a /etc/sysctl.conf
$ sysctl -p
You can check the current value with:
$ more /proc/sys/vm/max_map_count
Assuming you are using 128kB logical-size blocks for your compressed files, a general guide is to set max_map_count, at a minimum, to one map per 128kB of memory, or 65530, whichever is higher.
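That arithmetic can be sketched as follows (128kB per map and the 65530 floor come from the guideline above; the script itself is only illustrative):

```shell
# One map per 128kB of physical memory, with a floor of 65530
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
maps=$(( mem_kb / 128 ))
if [ "$maps" -lt 65530 ]; then maps=65530; fi
echo "suggested vm.max_map_count: $maps"
```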
If you encounter a SIGBUS error, check that /dev/shm is large enough to accommodate the decompressed data. Typically, /dev/shm should be at least as large as a fully decompressed HDB partition.
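To check the current size of /dev/shm (and, since it is normally a tmpfs mount, to grow it without a reboot — the 32G figure below is just an example value, not a recommendation):

```shell
# Current size and usage of /dev/shm
shm=$(df -h /dev/shm 2>/dev/null || echo "/dev/shm not mounted")
echo "$shm"
# tmpfs can usually be resized online, as root (example size):
# mount -o remount,size=32G /dev/shm
```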
Timekeeping
Timekeeping on production servers is a complicated topic; the notes below may help.
If you use any of the local-time functions (.z.T, .z.P, .z.N, .z.Z, .z.D), kdb+ calls the localtime(3) system function to determine the time offset from GMT. With some setups (GNU libc) this can cause excessive system calls against /etc/localtime. Setting the TZ environment variable avoids this:
$ export TZ=America/New_York
or, from within kdb+:
setenv[`TZ;"Europe/London"]
Another source of excessive system calls when using .z.p, .z.t, etc. is a slow clock source configured in the OS. Modern Linux distributions provide very low-overhead functionality for getting the current time. Use the tsc clock source to activate this:
$ echo tsc >/sys/devices/system/clocksource/clocksource0/current_clocksource
# list available clock sources on the system
$ cat /sys/devices/system/clocksource/clocksource*/available_clocksource
If you use PTP for timekeeping, your PTP hardware vendor may provide their own implementation of system time. Check that it uses the vDSO mechanism to expose time to user space.