Log Scraping

Log Scraping is a SysMon collector which tails log files looking for specific messages.

Configuration

Log scraping is configured on the Monitor_Config dashboard, in the File Size/Growth Configuration tab, under the DM_LOG_SCRAPING parameter. Specify Host, Directory, File and Scrape Pattern.

Screenshot Log Scraper configuration

Advanced configuration of log scraping, allowing complex regex patterns, is available in the File Config dashboard of the Flex Dashboards, found within your deployment at https://<host>:<port>/delta.

Screenshot Log Scraper configuration using the flex DM_FileMonitorConfig dashboard

Clicking the plus symbol in the lower left-hand corner of the screen displays the Log Scraper Entry screen. Specify the Log Scraper Entry details as described in the following table:

Screenshot
Log Scraper Entry

| control | effect |
| --- | --- |
| ID | ID of the log scraper entry |
| Host | host the logfile is located on (wildcards allowed) |
| Directory | directory the log files are located in; when files are added to this directory a new listener is started and the new file is scraped if a pattern match is detected |
| File Name | log file name |
| Scrape Pattern | opens the Scrape Pattern Editor |
| Level | ERROR, INFO or WARN |
| Edit Alert Options | opens the Alert Options Editor |
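
For orientation, a single entry could be sketched as a q dictionary whose keys mirror the controls above; this is a hypothetical representation for illustration only, not Kx Control's actual storage format.

```q
/ hypothetical representation of one log scraper entry
entry:`id`host`directory`file`pattern`level!
  (`scraper1;"apphost*";"/var/log/app";"app.log";"*ERROR*";`ERROR)
entry`host      / "apphost*" : wildcards are allowed in the host field
```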

Scrape Pattern Editor

Screenshot
Scrape Pattern Editor

A scrape pattern can be either a literal expression or a regular expression. A literal expression is a string of characters to be matched.

A regular expression (regex) allows more complicated text patterns to be matched, and can allow one search to do the work of many. For example, search for the word separate and its common misspellings with regex s[ae]p[ae]r[ae]te. The Scrape Pattern Editor rejects invalid regexes.
Regex 101, Regexpal
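
As a minimal illustration in q (the Kx platform language), the built-in like operator supports character classes similar to the regex above; this is only a sketch of the matching idea, not the Scrape Pattern Editor's own regex engine, and the log lines are invented.

```q
/ toy log lines, invented for illustration
lines:("could not seperate fields";"parse ok";"separate thread died")

/ literal expression: a fixed string sought anywhere in the line
lines where lines like "*separate*"

/ character-class pattern: one search catches the word and its common misspellings
lines where lines like "*s[ae]p[ae]r[ae]te*"
```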

The log scraper checks for each string specified. When the Scrape Pattern Editor opens, it displays the entered string surrounded by quote marks. This is useful for highlighting any trailing spaces: for example, if the search pattern is XXXYYY with a space at the end, it is displayed as "XXXYYY " in the Scrape Pattern Editor.
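
A one-line q sketch shows why the quoting helps, using the example pattern above:

```q
p:"XXXYYY "          / search pattern with a trailing space
-1"\"",p,"\"";       / prints "XXXYYY " : the quotes expose the trailing space
```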

Alert Options Editor

Use the Alert Options Editor to configure context-specific alerts and negative alerts.

Context-specific alerts

In the Alert Options Editor, open Context Specific Alerting. Specify Content Analytic and Content Config Parameter.

Screenshot Setting a context-specific alert

| parameter | effect |
| --- | --- |
| Content Analytic | analytic to look up the context of the alert |
| Content Config Parameter | associated context config parameter |

Click the Select/edit config parameter button to the right of the Context Config Parameter dropdown to open the Config Parameter dialog. This allows you to update the configuration parameter for this alert.

Screenshot
Config parameter dialog

Example: suppose the context is DM_GC_Context, the garbage-collector value of memory in use as a percentage of total memory size, the analytic means exceeds, and colValue is set to 95. An alert is written to the log if the observed value exceeds 95%.
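
A hedged q sketch of that rule follows; the metric, threshold variable and message are assumptions for illustration, not the actual DM_GC_Context implementation.

```q
/ illustrative only: metric and threshold are assumed
colValue:95f                        / configured threshold (%)
w:.Q.w[]                            / q memory stats: `used and `heap in bytes
pctUsed:100f*w[`used]%w[`heap]      / memory in use as a percentage of heap
if[pctUsed>colValue;
  -1"ALERT: memory use ",string[pctUsed],"% exceeds ",string[colValue],"%"];
```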

Example: in Kx Control, the maxScrapersPerHost attribute of the configuration parameter DM_SYSMON_CONFIG_DEFAULT specifies the maximum number of log scrapers per host. If the number of scrapers defined for a host exceeds this maximum, an error message is written to the log file and a warning is displayed on the Log Scraper dashboard.
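
The check might look like the following q sketch; the table, limit value and output are invented for illustration and do not reflect the actual DM_SYSMON_CONFIG_DEFAULT structure.

```q
/ hypothetical: flag hosts defining more scrapers than the configured maximum
maxScrapersPerHost:10                      / assumed value of the attribute
scrapers:([]host:`h1`h1`h2;id:`a`b`c)      / toy table of scraper entries
perHost:select n:count i by host from scrapers
show select from perHost where n>maxScrapersPerHost   / hosts over the limit
```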

Negative alerts

Screenshot Negative alerting settings

Screenshot A negative alert reported on the File Monitor Config dashboard

Negative alerts are triggered by the absence of an expected message in the log. Enable this by checking Negative Alert in the Alert Options Editor.

| parameter | effect |
| --- | --- |
| Execution Time | time of day to make the check |
| TTL (mins) | Time To Live: the period (in minutes before the Execution Time) in which the scrape pattern is sought in the logs |
| Negative ID | identifier: group alerts together by giving them the same ID |
| Early Notifications | raise the alert if the scraped value appears before the TTL period |
| Schedule | frequency of the check: ONCE, HIGHFREQ, DAILY, WEEKLY or MONTHLY; the option chosen determines further details, e.g. DAILY requires a start date and an option to exclude weekends |
| Date | an exact date for the check |
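
Reading the table above, the rule at Execution Time is roughly: alert if the scrape pattern was not seen during the preceding TTL minutes. A minimal q sketch of that rule, with invented data and an assumed pattern:

```q
execTime:09:30:00.000                 / Execution Time
ttl:30                                / TTL in minutes
start:execTime-`time$60000*ttl        / window opens TTL minutes before the check
logs:([]time:08:40:00.000 09:10:00.000;msg:("startup";"heartbeat ok"))
seen:exec any msg like "*heartbeat ok*" from logs where time within (start;execTime)
if[not seen;-1"NEGATIVE ALERT: expected pattern absent in TTL window"];
```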

Once a negative alert is set, it is displayed in the Alert Options column of the Log Scraper Configuration dashboard, which lists all log scrapers.

Log Scraper dashboard

Screenshot

The Kx Monitoring - Log Scraper tab provides a view of all scraped logging information.

Filter the display based on Start Date, End Date, Host, File, Level (INFO, WARN, ERROR, ALL) and Time Bucket. On Submit, the Filtered Logs table is updated.
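
Conceptually, Submit applies a query like the following q sketch; the table shape and filter values are assumptions for illustration.

```q
/ toy table standing in for the scraped-log store
logs:([]time:2024.01.01D09:05:00 2024.01.01D09:20:00 2024.01.01D11:00:00;
  host:`h1`h1`h2;file:`app.log`app.log`db.log;
  level:`ERROR`WARN`INFO;msg:("disk full";"slow io";"started"))

/ filter on date range, host and level, as the dashboard controls do
select from logs where time within 2024.01.01D09:00:00 2024.01.01D10:00:00,
  host=`h1,level in `ERROR`WARN

/ Time Bucket: count matching rows in five-minute buckets
select n:count i by bucket:5 xbar time.minute from logs
```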

Double-click on an item in the Filtered Logs table to populate the Filtered Log Details table.