The login dialog. New sites are created with the user omdadmin and the password omd.
|The Main Dashboard|
Dashboards combine several elements into one view. The dynamic layout algorithm automatically adapts the dashboard to all possible screen sizes and formats.
The default problems view displays either all problems or just the unhandled ones, grouped by their state. Views can be customized.
All details about one service
Views offer a flexible filtering mechanism for hosts and services.
|List of Services|
This screenshot shows the list of services of a host. Performance values are graphically visualized.
|Example: Search for CPU services|
A simple search for CPU in the Quicksearch snapin shows all relevant CPU services. This allows an easy comparison of the current performance of various hosts.
With the integrated NagVis, network dependencies are visualized automatically.
Views can make use of large screens by displaying multiple columns of data instead of just one.
All monitoring commands can be issued not only for a single object, but also for several at once.
All views can be customized. You can define your own filters, columns, grouping, layout and other aspects.
The BI Module allows you to compute the state of an application or a business process by combining the states of hosts and services, while taking redundancies into account.
All performance values are kept for several years. This allows not only convenient error diagnosis but also capacity planning.
Detailed statistics of all packets, bytes and errors are stored separately for each switch port and each network device of a server.
This built-in view shows a report of the most frequent problems among all hosts and services. The time frame is selectable by the user.
Each user can have his or her individual sidebar. Many snapins are available.
|History of Events|
This example shows the global history of events.
From the historic states you can compute an availability for each host, service or BI aggregation.
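The idea behind such an availability computation can be sketched in a few lines of Python. This is a simplified illustration, not Check_MK's actual code; the function name, the transition format and the state names are assumptions:

```python
def availability(transitions, period_start, period_end, ok_states=("OK",)):
    """Compute the fraction of a time period spent in an OK state.

    transitions: list of (timestamp, state) tuples sorted by time,
    describing when the object entered each state.
    """
    ok_seconds = 0.0
    for i, (start, state) in enumerate(transitions):
        # Each state lasts until the next transition, or the period end
        end = transitions[i + 1][0] if i + 1 < len(transitions) else period_end
        # Clamp the interval to the requested period
        lo, hi = max(start, period_start), min(end, period_end)
        if hi > lo and state in ok_states:
            ok_seconds += hi - lo
    return ok_seconds / (period_end - period_start)

# A service that was OK except for a one-hour outage during a 24-hour day:
timeline = [(0, "OK"), (36000, "CRIT"), (39600, "OK")]
print(availability(timeline, 0, 86400))  # ~0.958, i.e. 95.8% availability
```

The same timeline data drives both the availability percentage and the graphical timeline views.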
This shows the timeline of a service with respect to its historic states and availability.
The visualization tool NagVis is seamlessly integrated into the Check_MK GUI. A dedicated snapin gives quick access to all maps.
The Event Console processes messages from syslog, SNMP traps, Windows event logs or text files. Via flexible rules you can specify how to handle each event.
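First-match rule processing of this kind can be illustrated with a minimal sketch. The rule format, field names and actions below are illustrative assumptions, not the Event Console's actual rule syntax:

```python
import re

# Each rule: a pattern matched against the message, plus the resulting
# monitoring state and action. Rules are evaluated top to bottom.
rules = [
    {"match": re.compile(r"link down", re.I), "state": 2, "action": "open_event"},
    {"match": re.compile(r"disk.*full", re.I), "state": 1, "action": "open_event"},
    {"match": re.compile(r""), "state": 0, "action": "drop"},  # catch-all
]

def handle_message(message):
    """Return the action and state of the first matching rule."""
    for rule in rules:
        if rule["match"].search(message):
            return rule["action"], rule["state"]

print(handle_message("eth0: Link Down on switch23"))  # ('open_event', 2)
```

Messages that no specific rule matches fall through to the catch-all rule and are silently dropped.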
|Monitoring of VMware ESX|
Check_MK has its own plugin for monitoring VMware ESX via vSphere. You can connect either to the host systems or to the vCenter.
|WATO - Main Menu|
WATO is the Web Administration Tool for Check_MK. The complete monitoring system can be configured via the web. For experienced users, administration via config files is also possible.
|Hosts and Folders|
The monitored hosts are organized in folders. Attributes can be configured at folder level and are inherited by the hosts.
|Details of one Host|
Each attribute can either be set explicitly or is inherited from one of its parent folders. That way most settings are chosen correctly just by adding a host to the correct folder.
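The inheritance logic can be sketched as a lookup along the folder chain, from the host outward. This is a hypothetical illustration; WATO's real data structures and attribute names differ:

```python
def effective_attribute(host_attrs, folder_chain, name, default=None):
    """Resolve an attribute: an explicit host setting wins, otherwise the
    nearest parent folder that defines it, otherwise the default."""
    if name in host_attrs:
        return host_attrs[name]
    for folder in folder_chain:  # ordered from innermost to outermost
        if name in folder:
            return folder[name]
    return default

# Hypothetical folders and host:
linux_folder = {"tag_os": "linux", "snmp_community": "public"}
oracle_folder = {"contact_group": "dba"}
host = {"ipaddress": "10.1.2.3"}

# The host sets no tag_os itself, so it inherits it from the Linux folder:
print(effective_attribute(host, [oracle_folder, linux_folder], "tag_os"))  # linux
```

An attribute set directly on the host always overrides any folder value, which is why placing a host in the right folder usually makes explicit settings unnecessary.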
This screen shows some of the global settings.
|Rule based configuration|
The configuration of host and service parameters is based on rules like "Use a warning level of 90% on all filesystems on productive Linux systems that begin with /oracle."
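The matching logic behind such a rule can be sketched in plain Python. This illustrates the principle only and is not Check_MK's configuration syntax; the tag names and rule tuple layout are assumptions:

```python
def warning_level(host_tags, mountpoint, rules, default=80.0):
    """Return the warning level of the first rule whose conditions match."""
    for level, required_tags, path_prefix in rules:
        if required_tags.issubset(host_tags) and mountpoint.startswith(path_prefix):
            return level
    return default

# "Use a warning level of 90% on all filesystems on productive Linux
#  systems that begin with /oracle."
rules = [(90.0, {"prod", "linux"}, "/oracle")]

print(warning_level({"prod", "linux"}, "/oracle/data", rules))  # 90.0
print(warning_level({"test", "linux"}, "/oracle/data", rules))  # 80.0 (default)
```

Because rules match on host properties rather than on individual hosts, one rule can configure thousands of services at once.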
The check plugins that come with Check_MK can be configured with full GUI support. Every parameter is validated and a help text is available.
|Users, Contact Groups and Roles|
All users and permissions are configured via WATO.
Check_MK can connect to an existing LDAP user database (such as Active Directory).
|Roles and Permissions|
Via configurable roles you can specify which user should be able to perform which actions.
Custom time period definitions allow you to restrict checks or notifications to certain times.
In the current version, BI aggregations can also be configured via WATO.
The pattern editor for the integrated logfile monitoring.
Check_MK lets you build one large, centrally administered monitoring system by connecting several remote Check_MK instances. No monitoring data needs to be centralized, which saves network bandwidth and scales well.