10.4. using openMosixview

10.4.1. main application

Here is a picture of the main application window. Its functionality is explained below.

openMosixview displays a row with a lamp, a button, a slider, an lcd-number, two progress bars and some labels for each cluster member. The lights at the left display the openMosix-id and the status of each cluster node: red if the node is down, green if it is available.

If you click the button displaying the IP address of a node, a configuration dialog pops up. It provides buttons to execute the most commonly used "mosctl" commands (described later in this HOWTO). With the "speed-sliders" you can set the openMosix speed for each host; the current speed is displayed by the lcd-number.

You can influence the load-balancing of the whole cluster by changing these values. Processes in an openMosix cluster migrate more readily to nodes with a higher openMosix speed than to nodes with a lower one. This is not the physical speed of the machine but the speed openMosix "thinks" a node has. For example, a cpu-intensive job on a cluster node whose speed is set to the lowest value in the whole cluster will search for a faster processor to run on and migrate away easily.
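
What the speed-slider does behind the scenes can also be done by hand with the "mosctl" command described later in this HOWTO; a rough sketch (the value 7000 is only an example):

    # ask openMosix which speed it currently assumes for this node
    mosctl getspeed

    # lower the assumed speed so cpu-intensive jobs prefer to migrate away
    mosctl setspeed 7000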

The progress bars in the middle give an overview of the load on each cluster member. They display the load in percent, so they do not show exactly the value openMosix writes to the file /proc/hpc/nodes/x/load, but they should give a good overview.
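
If you want to compare a bar with the raw value, you can read it directly from the /proc interface (node-id 1 is only an example):

    # raw load value openMosix writes for node 1
    cat /proc/hpc/nodes/1/load

    # the same for every configured node
    for n in /proc/hpc/nodes/*; do echo "$n: `cat $n/load`"; done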

The next progress bar shows the memory usage of the nodes. It displays the currently used memory as a percentage of the available memory on each host (the label to the right shows the available memory). The box to the right shows how many CPUs your cluster has. The first line of the main window contains a configuration button for "all-nodes"; with this option you can configure all nodes in your cluster in the same way.
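
The raw values behind the memory bars and the cpu-box can be queried by hand as well; a sketch assuming the "mosctl" options and /proc files described later in this HOWTO (node-id 1 is only an example):

    mosctl getmem 1               # logical free memory of node 1
    mosctl getfree 1              # physical free memory of node 1
    cat /proc/hpc/nodes/1/cpus    # number of cpus of node 1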

How well the load-balancing works is displayed by the progress bar in the top left. 100% is very good and means that all nodes have nearly the same load.

Use the collector- and analyzer-menus to manage the openMosixcollector and to open the openMosixanalyzer. These two parts of the openMosixview application suite are useful for getting an overview of your cluster over a longer period.

10.4.2. the configuration-window

This dialog pops up when a "cluster-node" button is clicked.

The openMosix configuration of each host can now be changed easily. All commands are executed via "rsh" or "ssh" on the remote hosts (even on the local node), so "root" must be able to "rsh" (or "ssh") to each host in the cluster without being prompted for a password (how to configure this is well described in the Beowulf documentation and earlier in this HOWTO).
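
A minimal way to set up such a password-less login with "ssh" (hostnames are placeholders; adjust key type and paths to your setup):

    # on the node running openMosixview, as root: create a key without passphrase
    ssh-keygen -t dsa
    # append the public key to root's authorized_keys on every cluster-node
    cat ~/.ssh/id_dsa.pub | ssh root@node1 "mkdir -p ~/.ssh; cat >> ~/.ssh/authorized_keys"
    # test it - this must work without a password prompt
    ssh root@node1 uptime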

The commands are (a rough command-line equivalent is sketched after the list):


automigration on/off 
quiet yes/no 
bring/lstay yes/no 
expel yes/no 
openMosix start/stop 
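
These buttons roughly correspond to "mosctl" commands described later in this HOWTO; run by hand on a node they would look something like this (the mapping is an approximation, check the mosctl section for the exact options):

    mosctl nostay                 # automigration on ("mosctl stay" turns it off)
    mosctl quiet                  # quiet yes ("mosctl noquiet" for no)
    mosctl bring                  # bring home all migrated processes
    mosctl lstay                  # local processes should stay
    mosctl expel                  # send away guest processes
    /etc/init.d/openmosix stop    # stop openMosix on this node (path may differ)
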
If openMosixprocs is properly installed on the remote cluster nodes, click the "remote proc-box" button to open openMosixprocs (proc-box) on the remote node. "xhost +hostname" will be set and the display will point to your localhost. The client is also executed on the remote node via "rsh" or "ssh" (the openmosixprocs binary must be copied to e.g. /usr/bin on each host of the cluster). openMosixprocs is a process-box for managing your programs. It is useful for managing programs started and running locally on the remote nodes and is described later in this HOWTO.

If you are logged in to your cluster from a remote workstation, insert your local hostname in the edit-box below the "remote proc-box". Then openMosixprocs will be displayed on your workstation and not on the cluster member you are logged in to (you may have to set "xhost +clusternode" on your workstation). The combo-box keeps a history, so you only have to type the hostname once.
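
What happens behind the "remote proc-box" button is roughly the following (hostnames are placeholders; whether "rsh" or "ssh" is used depends on your setup):

    # on your workstation: allow the cluster-node to open windows on your display
    xhost +clusternode
    # what openMosixview then executes on the cluster-node via rsh/ssh
    ssh root@clusternode "DISPLAY=yourworkstation:0 openmosixprocs"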

10.4.3. advanced-execution

If you want to start jobs on your cluster, the "advanced execution" dialog may help you.

Choose a program to start with the "run-prog" button (file-open icon); how and where the job is started can be specified in this execution dialog. There are several options, explained below.

10.4.4. the command-line

You can specify additional command-line arguments in the lineedit widget at the top of the window.

Table 10-1. how to start

-no migration    start a local job which won't migrate
-run home        start a local job
-run on          start a job on the node you choose with the "host-chooser"
-cpu job         start a computation-intensive job on a node (host-chooser)
-io job          start an I/O-intensive job on a node (host-chooser)
-no decay        start a job with no decay (host-chooser)
-slow decay      start a job with slow decay (host-chooser)
-fast decay      start a job with fast decay (host-chooser)
-parallel        start a job in parallel on some or all nodes (special host-chooser)
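
These options map to the "mosrun" command and its preconfigured run-scripts (nomig, runhome, runon, cpujob, iojob, nodecay, slowdecay, fastdecay) described later in this HOWTO. Started by hand it would look roughly like this (node-id 3 and the program name are only examples):

    runhome ./my_program     # like "-run home": start the job locally
    runon 3 ./my_program     # like "-run on": start the job on the node with openMosix-id 3
    cpujob ./my_program      # like "-cpu job": mark the job as computation-intensive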

10.4.5. the host-chooser

For all jobs you start non-locally, simply choose a host with the dial widget. The openMosix-id of the node is also displayed by an lcd-number. Then click "execute" to start the job.

10.4.6. the parallel host-chooser

You can set the first and the last node with the two spinboxes. The command will then be executed on all nodes from the first to the last node. You can also invert this option.
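
What the parallel host-chooser does can be sketched as a simple shell loop (the node range, the "runon" script from the userspace-tools and the program name are only examples):

    # start the same program on nodes 1 to 4, one instance per node
    for id in 1 2 3 4
    do
        runon $id ./my_program &
    done
    wait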