

= Installation Guide =

This guide describes the installation procedure for ''ProCKSI''.

  • '''Release''': procksi_8-2
  • '''Environment''': Cluster with one head node and two compute nodes

All installations must be done with ''root'' access.

= System Design and Requirements =

== Cluster Design ==
The cluster is assumed to have a ''head node'' and several ''compute nodes''.

  • The head node runs the webserver, the file server, the central database server, the email server, and the server for the queuing system. It can be used as a compute node itself.
  • The compute nodes run the calculations.

== Software Requirements ==
We assume that the following software components are already installed on the head node:

  • Operating system: ''Centos4'' (''RHEL4'')
  • Webserver: Apache2
  • Database: MySQL
  • Email server: Postfix (SMTP)
  • Queuing system: PBS torque + maui

The compute nodes only require the following components:

  • Operating system: ''Centos4'' (''RHEL4'')
  • Queuing system: PBS torque

The configuration for these components will be described later in this installation guide.
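As a quick sanity check, the base components can be queried from the RPM database; a sketch, assuming standard CentOS 4 package names, which may differ on your system:

{{{
# Verify that the base components are installed (head node)
rpm -q httpd mysql-server postfix
}}}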

= URL and Email Forwarding =
ProCKSI uses URL and email forwarding in order to provide a stable internet address and corresponding email addresses.

== Provider ==

||Provider ||''www.planetdomain.com/ukcheap/home.jsp'' ||
||Login ||nkrasnogor ||
||Password ||[BBSRC GRANT NUMBER] ||

== Domains ==
ProCKSI's main domain name is ''www.procksi.net'', which is redirected to ''procksi.cs.nott.ac.uk'', an alias for ''procksi0.cs.nott.ac.uk''. All other domain names are redirected to the main domain name.

||'''Domain Name''' ||'''Redirected to''' ||'''Expires at''' ||
||www.procksi.net ||procksi.cs.nott.ac.uk ||11-01-2011 ||
||www.procksi.org ||[http://www.procksi.net/ www.procksi.net] ||11-01-2011 ||
||www.procksi.com ||[http://www.procksi.net/ www.procksi.net] ||11-01-2011 ||
||www.procksi.info ||[http://www.procksi.net/ www.procksi.net] ||11-01-2008 ||

== DNS Settings ==
The primary and secondary DNS servers must be set as follows:

||Primary ||ns1.iprimus.com.au ||
||Secondary ||ns2.iprimus.com.au ||

The following changes must be made manually in ''Advanced DNS settings'':

||CNAME    ||*.procksi.net    ||procksi.cs.nott.ac.uk.
||CNAME    ||*.procksi.org    ||www.procksi.net.
||CNAME    ||*.procksi.com    ||www.procksi.net.
||CNAME    ||*.procksi.info    ||www.procksi.net.
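To verify that the records are active, they can be queried with ''dig'' (from the ''bind-utils'' package); an optional check, not part of the provider setup:

{{{
# Check the CNAME records from any host with internet access
dig +short www.procksi.net CNAME
dig +short www.procksi.org CNAME
}}}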
== Email Settings ==
The following email addresses must be created and redirected to '''', which must be available:

|| ||redirected to ||

The following changes must be made manually in ''Advanced DNS settings'':

||MX ||@.procksi.net ||mailhost.planetdomain.com ||10 ||

== Domain Usage Monitoring ==
The usage of ProCKSI's domains is monitored.

||Provider ||''www.sitemeter.org'' ||
||Login ||s18procksi ||
||Password ||FAKUIL ||

All HTML documents must contain the following code in order to be tracked correctly.

{{{
<!-- Site Meter -->
<script type="text/javascript" src="http://s18.sitemeter.com/js/counter.js?site=s18procksi">
</script>
<noscript>
<a href="http://s18.sitemeter.com/stats.asp?site=s18procksi" target="_top">
<img src="http://s18.sitemeter.com/meter.asp?site=s18procksi"
alt="Site Meter" border="0"/>
</a>
</noscript>

<!-- Copyright (c)2006 Site Meter -->
}}}

= Data Management and Exchange =
The head node and all compute nodes must be able to communicate with each other and exchange data. This requires common user management and a shared file system.

== User Management ==
Make the following changes on the head node and on each compute node:

  • Add a new user to ''/etc/passwd'': ''procksi:x:510:510:ProCKSI-Server:/home/procksi:/bin/bash''
  • Add an entry for the new user to ''/etc/shadow'' if desired: ''procksi:[ENCRYPTED_PASSWORD]:13483:0:99999:7:::''
  • Add a new group to ''/etc/group'', and add all users who should have access: ''procksi:x:510:dxb''[[BR]]The members of group ''procksi'' are now: ''procksi'', ''dxb''
  • Generate the home directory for ''procksi''
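Instead of editing the files by hand, the same result can be achieved with the standard account tools; a minimal sketch, assuming UID/GID 510 as above:

{{{
# Create group and user 'procksi' (UID/GID 510) and the home directory
/usr/sbin/groupadd -g 510 procksi
/usr/sbin/useradd -u 510 -g procksi -c "ProCKSI-Server" -d /home/procksi -m procksi
# Add user 'dxb' to group 'procksi'
/usr/bin/gpasswd -a dxb procksi
}}}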
== Firewall ==
All network traffic on the internal (private) network is trusted and considered secure, so no firewall is needed on the internal network interface (eth1).

  • Modify ''/etc/sysconfig/iptables'' on the head node and on each compute node. If eth1 is on the private network, add[[BR]]''-A RH-Firewall-1-INPUT -i eth1 -j ACCEPT''[[BR]]directly after[[BR]]''-A RH-Firewall-1-INPUT -i lo -j ACCEPT''
  • Restart the firewall on the head node and on each compute node: ''/sbin/service iptables restart''

Changes in the firewall settings regarding the external network interface (eth0) will be described in other sections where necessary.
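To confirm that the new rule is placed before any REJECT rule, the chain can be listed; an optional check:

{{{
# Show the RH-Firewall-1-INPUT chain with rule positions
/sbin/iptables -L RH-Firewall-1-INPUT -n --line-numbers
}}}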
== Host Name Resolution ==
As each node has two network interfaces (a multihomed host), host name resolution must be configured correctly in order to prioritise the internal, trusted network for communication between nodes.

  • The official hostname of each node must be set to the ''internal'' name of the machine in ''/etc/sysconfig/network''. This is an example for the head node:[[BR]]''HOSTNAME=procksi0-priv.cs.nott.ac.uk''[[BR]]The compute nodes must be named and configured accordingly.
  • Add the following to ''/etc/hosts'' on the head node:[[BR]]''127.0.0.1 procksi0-priv.cs.nott.ac.uk procksi0-priv localhost.localdomain localhost''[[BR]]and alter the line for each compute node (procksi1, procksi2) accordingly.
  • Add the following to ''/etc/hosts'' on the head node and on each compute node:[[BR]]''192.168.199.10 procksi0-priv.cs.nott.ac.uk procksi0-priv''[[BR]]''192.168.199.11 procksi1-priv.cs.nott.ac.uk procksi1-priv''[[BR]]''192.168.199.12 procksi2-priv.cs.nott.ac.uk procksi2-priv''
  • Edit ''/etc/host.conf'' so that local settings in ''/etc/hosts'' take precedence over DNS:[[BR]]''order hosts,bind''
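A quick way to confirm that the private addresses take precedence (optional check):

{{{
# Should print the 192.168.199.x address from /etc/hosts, not a DNS result
getent hosts procksi1-priv
ping -c 1 procksi2-priv.cs.nott.ac.uk
}}}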
== Data Access ==
The head node hosts a RAID system of hard disks that stores all data generated by ProCKSI on the head node and on all compute nodes. This partition must be accessible by all nodes and is therefore exported as a network file system (NFS). Executables used by ProCKSI must be installed locally on each compute node for better performance.

  • Add the following to the end of ''/etc/exports'' on the head node (''procksi0''):[[BR]]''/home/procksi procksi?-priv.cs.nott.ac.uk(sync,rw,no_root_squash)''
  • Add the following to the end of ''/etc/fstab'' on each compute node (''procksi1'', ''procksi2''):[[BR]]''procksi0:/home/procksi /home/procksi nfs bg,hard,intr 0 0''[[BR]]''procksi0:/usr/local/procksi /usr/local/procksi nfs bg,hard,intr 0 0''
  • Make the NFS daemons start at bootup. Enter at the command line on the head node and on each compute node:[[BR]]''/sbin/chkconfig nfs on''
  • Start the NFS daemons. Enter at the command line on the head node and on each compute node:[[BR]]''/sbin/service nfs start''
  • Generate the following temp directory on the head node and on each compute node, at best on a separate partition:[[BR]]''/scratch''
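On a compute node, the exports can be checked and mounted without a reboot; a sketch using the fstab entries defined above:

{{{
# List the file systems exported by the head node
showmount -e procksi0-priv.cs.nott.ac.uk
# Mount via the /etc/fstab entries and verify
mount /home/procksi
df -h /home/procksi
}}}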
== Time Synchronisation ==
The system time on all nodes must be synchronised because a) data is written to and read from a common, shared file system, and may even expire after a certain period of time and have to be deleted, and b) system logs are maintained independently, but their entries must be attributable to each other.

  • Add your own time server to ''/etc/ntp/ntpservers'':[[BR]]''marian.cs.nott.ac.uk''
  • Modify ''/etc/ntp.conf'' (at the end of the file):[[BR]]''server marian.cs.nott.ac.uk''[[BR]]''restrict marian.cs.nott.ac.uk mask 255.255.255.255 nomodify notrap noquery''
  • Make the NTP daemon start at bootup. Enter at the command line:[[BR]]''/sbin/chkconfig ntpd on''
  • Start the NTP daemon. Enter at the command line:[[BR]]''/sbin/service ntpd start''
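Whether a node actually synchronises against the time server can be checked with ''ntpq'' (optional):

{{{
# Show the peers the NTP daemon is using; marian.cs.nott.ac.uk should appear
/usr/sbin/ntpq -p
}}}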

= Queuing System =
The queueing system (resource manager) is the heart of distributed computing on a cluster. It consists of three parts: the server, the scheduler, and the machine-oriented mini-server (MOM), which executes the jobs.

We are assuming the following configuration:
||PBS TORQUE ||version 2.1.6 ||server, basic scheduler, mom ||
||MAUI ||version 3.2.6.p18 ||scheduler ||
The sources can be obtained from:
||PBS TORQUE ||http://www.clusterresources.com/pages/products/torque-resource-manager.php ||
||MAUI ||http://www.clusterresources.com/pages/products/maui-cluster-scheduler.php ||
The install directories for ''TORQUE'' and ''MAUI'' will be:
||PBS TORQUE ||''/usr/local/torque'' ||
||MAUI ||''/usr/local/maui'' ||

== TORQUE ==

=== Register new services ===
Edit ''/etc/services'' and add at the end:

{{{
# PBS/Torque services
pbs        15001/tcp    # pbs_server
pbs        15001/udp    # pbs_server
pbs_mom    15002/tcp    # pbs_mom <-> pbs_server
pbs_mom    15002/udp    # pbs_mom <-> pbs_server
pbs_resmom 15003/tcp    # pbs_mom resource management
pbs_resmom 15003/udp    # pbs_mom resource management
pbs_sched  15004/tcp    # pbs scheduler (pbs_sched)
pbs_sched  15004/udp    # pbs scheduler (pbs_sched)
}}}
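A simple check that the entries were added correctly:

{{{
# Should list the eight pbs* entries added above
grep -E '^pbs' /etc/services
}}}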

=== Setup and Configuration on the Head Node ===
Extract and build the TORQUE distribution on the head node. Configure server, monitor, and clients to use secure file transfer (scp):

 export TORQUECFG=/usr/local/torque
 tar -xzvf TORQUE.tar.gz
 cd TORQUE

Configure for a 64-bit machine with the following compiler options:

FFLAGS   = "-m64 -march=nocona -O3 -fPIC"
CFLAGS   = "-m64 -march=nocona -O3 -fPIC"
CXXFLAGS = "-m64 -march=nocona -O3 -fPIC"
LDFLAGS  = "-L/usr/local/lib -L/usr/local/lib64"

./configure --enable-server --enable-monitor --enable-clients \
    --with-server-home=$TORQUECFG --with-rcp=scp --disable-filesync --with-server-name
make
make install

If not configured otherwise, binaries are installed in ''/usr/local/bin'' and ''/usr/local/sbin''. You should have these directories included in your path. Alternatively, you can configure TORQUE to place the binaries in the default system directories with:

 ./configure --bindir=/usr/bin --sbindir=/usr/sbin

Add the following settings (presumably in ''$TORQUECFG/torque.cfg''):

{{{
SERVERHOST procksi0-priv.cs.nott.ac.uk
ALLOWCOMPUTEHOSTSUBMIT true
}}}

Configure the main queue ''batch'':

 > create queue batch queue_type=execution
 > set queue batch started=true
 > set queue batch enabled=true
 > set queue batch resources_default.nodes=1

Configure queue ''test'' accordingly.

Configure the default node to be used (see below):

 > set server default_node = slave
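The ''>'' lines above are entered at the ''qmgr'' prompt. Equivalently, the same setup can be scripted non-interactively with qmgr's -c option; a sketch:

{{{
# Non-interactive queue setup via qmgr -c
qmgr -c "create queue batch queue_type=execution"
qmgr -c "set queue batch started=true"
qmgr -c "set queue batch enabled=true"
qmgr -c "set queue batch resources_default.nodes=1"
qmgr -c "set server default_node = slave"
}}}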
Specify all compute nodes to be used by creating/editing ''$TORQUECFG/server_priv/nodes''. This may include the same machine where pbs_server will run. If a compute node has more than one processor, just add np=X after its name, with X being the number of processors. Add node attributes so that a subset of nodes can be requested during the submission stage.

{{{
procksi0-priv.cs.nott.ac.uk np=1 procksi head
procksi1-priv.cs.nott.ac.uk np=2 procksi slave slave1
procksi2-priv.cs.nott.ac.uk np=2 procksi slave slave2
}}}

Although the head node (procksi0) has two processors as well, we only allow one processor to be used for the queueing system as the other processor will be used for handling all frontend communication and I/O. (Make sure that hyperthreading technology is disabled on the head node and all compute nodes!)
Build packages for the compute nodes and copy them to each compute node:

 cd $TORQUE
 make packages
 scp torque-package-mom-linux-i686.sh     procksi1|procksi2
 scp torque-package-clients-linux-i686.sh procksi1|procksi2

=== Setup and Configuration on the Compute Nodes ===
Install the prepared packages. A directory similar to ''$TORQUECFG'' will be created automatically.

 psh compute torque-package-mom-linux-i686.sh -install
 psh compute torque-package-clients-linux-i686.sh -install

Check that the nodes know the head node; ''$TORQUECFG/server_name'' must contain:

 procksi0-priv.cs.nott.ac.uk

Configure the compute nodes by creating/editing ''$TORQUECFG/mom_priv/config''. The ''$pbsserver'' line specifies the PBS server, the ''$restricted'' line specifies hosts that can be trusted to access mom services as non-root, and the last line allows data to be copied via NFS without using SCP.

{{{
$pbsserver procksi0-priv.cs.nott.ac.uk
$loglevel 255
$restricted procksi?-priv.cs.nott.ac.uk
$usecp procksi0-priv.cs.nott.ac.uk:/home/procksi /home/procksi
}}}

Start the queueing system (manually) in the correct order:
  • Start the mom: ''/usr/local/sbin/pbs_mom''
  • Kill the server: ''/usr/local/sbin/qterm -t quick''
  • Start the server: ''/usr/local/sbin/pbs_server''
  • Start the scheduler: ''/usr/local/sbin/pbs_sched''
    If you want to use MAUI as the final scheduler, keep in mind to kill pbs_sched after testing the TORQUE installation.
Check that all nodes are properly configured and reporting correctly:

 qstat -q
 pbsnodes -a

=== Prologue and Epilogue Scripts ===
The ''prologue'' script is executed just before the submitted job starts. Here, it generates a unique temp directory for each job in ''/scratch''. It must be installed on each node:

cp $PROCKSI/install/prologue $TORQUECFG/mom_priv
chmod 500 $TORQUECFG/mom_priv/prologue
The ''epilogue'' script is executed right after the submitted job has ended. Here, it deletes the job's temp directory from ''/scratch''. It must be installed on each node:
cp $PROCKSI/install/epilogue $TORQUECFG/mom_priv
chmod 500 $TORQUECFG/mom_priv/epilogue
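The actual scripts ship with the ProCKSI release. For reference, a minimal sketch of what they typically look like, assuming TORQUE's standard argument order ($1 = job id, $2 = user name); the real scripts may differ:

{{{
#!/bin/sh
# prologue (sketch): create a per-job temp directory in /scratch
jobid=$1
user=$2
mkdir -p "/scratch/$jobid" && chown "$user" "/scratch/$jobid"
exit 0
}}}

{{{
#!/bin/sh
# epilogue (sketch): remove the per-job temp directory again
jobid=$1
rm -rf "/scratch/$jobid"
exit 0
}}}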
== MAUI ==

=== Register new services ===
Edit ''/etc/services'' and add at the end:

{{{
# PBS/MAUI services
pbs_maui 42559/tcp    # pbs scheduler (maui)
pbs_maui 42559/udp    # pbs scheduler (maui)
}}}


=== Setup and Configuration on the Head Node ===
Extract and build the MAUI distribution. Edit the scheduler configuration (presumably ''maui.cfg''):

{{{
SERVERHOST procksi0-priv.cs.nott.ac.uk

# primary admin must be first in list
ADMIN1 procksi
ADMIN1 root

# Resource Manager Definition
RMCFG[PROCKSI0-PRIV.CS.NOTT.AC.UK] TYPE=PBS HOST=PROCKSI0-PRIV.CS.NOTT.AC.UK PORT=15001 EPORT=15004
# (EPORT can alternatively be 15017 - try!)

SERVERPORT 42559
SERVERMODE NORMAL

# Node Allocation
NODEALLOCATIONPOLICY PRIORITY
NODECFG[DEFAULT] PRIORITY='-JOBCOUNT'
}}}
Configure attributes of the compute nodes:

 qmgr
 > set node procksi0.cs.nott.ac.uk properties = "procksi, head"
 > set node procksi1.cs.nott.ac.uk properties = "procksi, slave"
 > set node procksi2.cs.nott.ac.uk properties = "procksi, slave"
Request job to be run on specific nodes (on submission):
  • Run on any compute node: qsub -q batch -l nodes=1:procksi
  • Run on any slave node: qsub -q batch -l nodes=1:slave
  • Run on head node: qsub -q batch -l nodes=1:head
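A hypothetical smoke test that exercises these node attributes (the job script is passed on stdin):

{{{
# Submit a trivial job to a slave node and watch the queue
echo "hostname; sleep 10" | qsub -q batch -l nodes=1:slave
qstat -a
}}}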
Start the MAUI scheduler manually. Make sure that pbs_sched is not running any longer.
  • Start the scheduler: ''/usr/local/sbin/maui''
Make the entire queueing system start at bootup:

 cp /home/procksi/latest/install/pbs_head-node /etc/init.d/pbs
 /sbin/chkconfig --add pbs
 /sbin/chkconfig pbs on

=== Setup and Configuration on the Compute Nodes ===
Attention: If the head node is a compute node itself, do NOT proceed with the following steps as the head node was configured in the previous step!

Make the entire queueing system start at bootup:

 cp /home/procksi/latest/install/pbs_compute-node /etc/init.d/pbs
 /sbin/chkconfig --add pbs
 /sbin/chkconfig pbs on

= Cluster Monitoring =

== Ganglia ==
"Ganglia is a scalable distributed monitoring system for high-performance computing systems such as clusters and Grids." See [http://ganglia.sourceforge.net/ http://ganglia.sourceforge.net] for information on requirements, configuration, and installation.

== !JobMonarch ==
!JobMonarch is an add-on to Ganglia which provides PBS job monitoring through the web browser. See [http://subtrac.rc.sara.nl/oss/jobmonarch/wiki/Documentation http://subtrac.rc.sara.nl/oss/jobmonarch/wiki/Documentation] for information on requirements, configuration and installation.

= Additional Software =

== PERL Libraries ==
Please make sure that the following libraries are installed in the official library directory, and install all dependent libraries if necessary. For ''Image::Magick'', use the corresponding libraries that come with the main installation.

||'''Module''' ||'''Version''' ||
||Error ||0.17008 ||
||Image::Magick ||6.0.7 ||
||Digest::MD5 ||2.36 ||
||Config::Simple ||4.58 ||
||DBI ||1.53 ||
||CGI ||3.25 ||
||CGI::Session ||4.13 ||
||Data::!FormValidator ||4.40 ||
||HTML::Template ||2.8 ||
||HTML::Template::Pro ||0.64 ||
||MIME::Lite ||3.01 ||
||!FreezeThaw ||0.43 ||
||Storable ||2.12 ||
||Time::Format ||1.02 ||
||IMAP::Client ||2.2.9 ||
||File::Copy ||2.08 ||
||WWW::Mechanize ||1.20 ||
||Time::Local ||1.13 ||
||Clone ||0.22 ||
||SOAP::Lite ||0.69 ||

PERL modules are best installed with the CPAN shell:

 perl -MCPAN -e shell
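If CPAN has already been configured, modules can also be installed non-interactively; a sketch with two of the modules listed above:

{{{
# Non-interactive installation via the CPAN module
perl -MCPAN -e 'install("MIME::Lite")'
perl -MCPAN -e 'install("HTML::Template")'
}}}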
== Third Party Executables for ProCKSI ==
Generate the following directories on each compute node to contain the important executables:

  • ''/usr/local/procksi/Cluster''
  • ''/usr/local/procksi/!DaliLite''
  • ''/usr/local/procksi/MaxCMO''
  • ''/usr/local/procksi/!MolScript''
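A sketch for creating them in one go (the leading ! above is only wiki escaping; the actual directory names carry none):

{{{
# Create the third-party executable directories on a compute node
mkdir -p /usr/local/procksi/Cluster \
         /usr/local/procksi/DaliLite \
         /usr/local/procksi/MaxCMO \
         /usr/local/procksi/MolScript
}}}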

For the subsequent installation of the ProCKSI server components, the following executables must be present:

  • ''/usr/local/procksi/Cluster/qclust''
  • ''/usr/local/procksi/!DaliLite/!DaliLite''
  • ''/usr/local/procksi/MaxCMO/ProtCompVNS''
  • ''/usr/local/procksi/molauto''
  • ''/usr/local/procksi/molscript''
== Image Software ==

=== Installation ===
  • Install ''!ImageMagick'' from www.imagemagick.org if not already installed.
  • Install ''!MolScript'' from www.avatar.se/molscript. Please link the MesaGL libraries instead of the OpenGL libraries; a modified makefile can be found under ''/www/PROCKSI/install/Makefile.molscript''.

=== Virtual Display ===
!MolScript needs an X display in order to generate images (jpg, gif, …). It is possible to use the console X display for the OpenGL bits even if no one is logged in. Therefore, ''procksi'' must be authenticated and allowed to use this X display virtually.

On each node, copy the magic cookie file for X authentication:

cp /home/procksi/latest/install/:0.Xauth /var/gdm/:0.Xauth

On each node, copy the scripts for automatic X authentication:

cp /home/procksi/latest/install/procksixauth /usr/local/sbin/procksixauth
cp /home/procksi/latest/install/:0 /etc/X11/gdm/Init/:0

Restart the X display manager for the changes to take effect:

/usr/sbin/gdm-restart

The virtual X display can be used with unix socket :0, e.g.:

molauto protein.pdb | DISPLAY=unix:0.0 molscript -jpeg -out protein.jpeg

= ProCKSI Server Component =

== Installation and Basic Configuration ==
This section describes the installation and configuration of the ProCKSI server component. This includes the configuration of the web server and the database.

The server component will be installed into the home directory of the user ''procksi''. Therefore, make sure that it is on a separate partition or hard disk with plenty of space. Ideally, this will be a RAID system.

Get the latest release of the server component, referred to in the following as ''RELEASE'', and extract it into ''/home/procksi/RELEASE'':

 tar -xvzf RELEASE.tgz

Create a softlink from ''RELEASE'' to a generic directory ''/home/procksi/latest''. This will be accessed by the web server:

ln -s /home/procksi/RELEASE /home/procksi/latest

In order to test new versions, referred to in the following as ''TEST'', before taking them officially online, create a softlink from ''TEST'' to a generic directory ''/home/procksi/test''. This will be accessed by the web server:

ln -s /home/procksi/TEST /home/procksi/test

If you want to bring the test version online, just delete the softlinks and repeat the previous steps for the new release. Please make sure that both softlinks exist at all times!

Change into the administrative directory and run the installation script. Change the server settings, database settings and directory settings if necessary.

cd /home/procksi/latest/admin
./configure.pl
== Database Configuration ==
Make sure that the MySQL daemon is running, and that it will start at boot time:

 /sbin/service mysqld start
 /sbin/chkconfig --add mysqld
 /sbin/chkconfig mysqld on

Make sure that you have access to the MySQL database management as ''root'', and log in as user ''root'' with the corresponding password:

mysql -u root -p

Create new mysql users ''procksi_user'' and ''procksi_admin'':

 USE mysql;
 INSERT INTO user SET host='localhost', user='procksi_user',
        password=PASSWORD('password_procksi_user');
 INSERT INTO user SET host='localhost', user='procksi_admin',
        password=PASSWORD('password_procksi_admin');
 FLUSH PRIVILEGES;

Repeat these steps analogously for ''procksi0-priv, procksi1-priv, procksi2-priv.''
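For illustration, a sketch for ''procksi1-priv'' (repeat analogously for the other hosts; the passwords are placeholders):

{{{
# Create the same users for host procksi1-priv
mysql -u root -p mysql -e "INSERT INTO user SET host='procksi1-priv', \
  user='procksi_user', password=PASSWORD('password_procksi_user'); \
  INSERT INTO user SET host='procksi1-priv', user='procksi_admin', \
  password=PASSWORD('password_procksi_admin'); FLUSH PRIVILEGES;"
}}}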

Create a new database:

CREATE DATABASE procksi_latest;

Give privileges to users ''procksi_user'' and ''procksi_admin'' for all compute nodes:

 GRANT ALL ON procksi_latest.* TO procksi_admin@localhost
        WITH GRANT OPTION;
 GRANT SELECT, UPDATE, INSERT, DELETE ON procksi_latest.*
        TO procksi_user@localhost;
 GRANT ALL ON procksi_latest.* TO
        WITH GRANT OPTION;
 GRANT SELECT, UPDATE, INSERT, DELETE ON procksi_latest.*
        TO ;
 GRANT SELECT, UPDATE, INSERT, DELETE ON procksi_latest.*
        TO ;
 GRANT SELECT, UPDATE, INSERT, DELETE ON procksi_latest.*
        TO ;
 FLUSH PRIVILEGES;

If you change the password for ''procksi_user'', please make sure that you also change it in ''/home/procksi/latest/config/main.ini''

Import the main database ''procksi_latest'' from the backup given in ''/home/procksi/RELEASE/admin'':

 mysql -u procksi_admin -p procksi_latest < procksi_latest.sql

In order to create a database ''procksi_test'' for the test version, repeat the previous steps and set the privileges accordingly.

== Web Server Configuration ==
Make the following changes to the Apache configuration file (''/etc/httpd/conf/httpd.conf''):

{{{
User procksi
Group procksi
ServerAdmin
ServerName procksi.cs.nott.ac.uk
DocumentRoot /home/procksi/latest/html

<Directory "/home/procksi/latest/html">
    AllowOverride AuthConfig
</Directory>

LogFormat "%t %h %l %u \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%t %h %l %u \"%r\" %>s %b" common
LogFormat "%t %{Referer}i -> %U" referer
LogFormat "%t %{User-agent}i" agent

# Exclude logging of Ganglia requests
SetEnvIf Request_URI "ganglia" ganglia

#
# The location and format of the access logfile (Common Logfile Format).
# If you do not define any access logfiles within a <VirtualHost>
# container, they will be logged here. Contrariwise, if you do
# define per-<VirtualHost> access logfiles, transactions will be
# logged therein and not in this file.
#
CustomLog /home/procksi/latest/logs/access.log common env=!ganglia

#
# If you would like to have agent and referer logfiles, uncomment the
# following directives.
#
CustomLog /home/procksi/latest/logs/referer.log referer env=!ganglia
CustomLog /home/procksi/latest/logs/agent.log agent env=!ganglia

#
# For a single logfile with access, agent, and referer information
# (Combined Logfile Format), use the following directive:
#
#CustomLog logs/access_log combined env=!ganglia

ScriptAlias /cgi-bin/ /home/procksi/latest/cgi-bin/
<Directory "/home/procksi/latest/cgi-bin">
    AllowOverride None
    Options None
    Order allow,deny
    Allow from all
</Directory>

Alias /data/    /home/procksi/latest/data/
Alias /images/  /home/procksi/latest/images/
Alias /styles/  /home/procksi/latest/styles/
Alias /applets/ /home/procksi/latest/applets/
Alias /scripts/ /home/procksi/latest/scripts/

AddLanguage de .de
AddLanguage en .en
AddLanguage es .es
AddLanguage fr .fr
LanguagePriority en es de fr

Alias /errordocs/ "/home/procksi/errordocs"
<IfModule mod_negotiation.c>
<IfModule mod_include.c>
    <Directory /home/procksi/errordocs>
        AllowOverride None
        Options IncludesNoExec FollowSymLinks
        AddType text/html .shtml
        <FilesMatch "\.shtml[.$]">
            SetOutputFilter INCLUDES
        </FilesMatch>
    </Directory>
    ErrorDocument 400 /errordocs/400_BAD_REQUEST
    ErrorDocument 401 /errordocs/401_UNAUTHORIZED
    ErrorDocument 403 /errordocs/403_FORBIDDEN
    ErrorDocument 404 /errordocs/404_NOT_FOUND
    ErrorDocument 405 /errordocs/405_METHOD_NOT_ALLOWED
    ErrorDocument 406 /errordocs/406_NOT_ACCEPTABLE
    ErrorDocument 408 /errordocs/408_REQUEST_TIMEOUT
    ErrorDocument 410 /errordocs/410_GONE
    ErrorDocument 411 /errordocs/411_LENGTH_REQUIRED
    ErrorDocument 412 /errordocs/412_PRECONDITION_FAILED
    ErrorDocument 413 /errordocs/413_REQUEST_ENTITY_TOO_LARGE
    ErrorDocument 414 /errordocs/414_REQUEST_URI_TOO_LARGE
    ErrorDocument 415 /errordocs/415_UNSUPPORTED_MEDIA_TYPE
    ErrorDocument 500 /errordocs/500_INTERNAL_SERVER_ERROR
    ErrorDocument 501 /errordocs/501_NOT_IMPLEMENTED
    ErrorDocument 502 /errordocs/502_BAD_GATEWAY
    ErrorDocument 503 /errordocs/503_SERVICE_UNAVAILABLE
    ErrorDocument 506 /errordocs/506_VARIANT_ALSO_VARIES
</IfModule>
</IfModule>

<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from .cs.nott.ac.uk
</Location>
<Location /server-info>
    SetHandler server-info
    Order deny,allow
    Deny from all
    Allow from .cs.nott.ac.uk
</Location>
}}}

Make sure that the server accepts connections on port 80. Check the firewall settings in ''/etc/sysconfig/iptables'' for the following entry:

 -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT

Make sure that the Apache daemon is running, and that it will start at boot time:

 /sbin/service httpd start
 /sbin/chkconfig --add httpd
 /sbin/chkconfig httpd on
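Before restarting Apache after configuration changes, the syntax can be checked first (optional):

{{{
# Validate httpd.conf, then restart
/usr/sbin/apachectl configtest
/sbin/service httpd restart
}}}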
== Email Configuration ==
The ProCKSI server component sends emails to the user on several occasions. In order to make sure that they are delivered correctly even when the internet connection is temporarily unavailable, a local SMTP server (''postfix'') is set up. It will accept emails from the private network only, store them temporarily (if necessary), and forward them to an email relay server.

Make sure that ''postfix'' is the default mailing software (and not ''sendmail''!).

system-switch-mail -activate postfix

Make the following changes to the ''postfix'' configuration file (''/etc/postfix/main.cf''):

{{{
myhostname = procksi0.cs.nott.ac.uk
mydomain = cs.nott.ac.uk
myorigin = $mydomain
inet_interfaces = all
mydestination = $myhostname, localhost.$mydomain, localhost
mynetworks_style = subnet
virtual_alias_maps = hash:/etc/postfix/virtual
relayhost = marian.cs.nott.ac.uk
}}}

Create or modify /etc/postfix/virtual:

{{{
root root@localhost
postmaster postmaster@localhost
adm root@localhost
}}}

Generate the corresponding database file (''virtual.db''):

postmap /etc/postfix/virtual

Make sure that the postfix daemon is running, and that it will start at boot time:

 /sbin/service postfix start
 /sbin/chkconfig --add postfix
 /sbin/chkconfig postfix on

Make sure that the firewall does not open port 25 or port 28!

Check that the SMTP server in ''/home/procksi/latest/conf/main.ini'' is correctly set to ''procksi0.cs.nott.ac.uk''.

== Garbage Cleanup Scheduling ==
After a certain period of time, given in ''/home/procksi/latest/conf/main.ini'', sessions and requests expire and must be deleted.

Edit ''procksi'''s crontab file, taking effect for the ''latest'' and ''test'' versions:

{{{
0-59/1 * * * * /home/procksi/latest/cron/check_sessions.sh
1-59/1 * * * * /home/procksi/latest/cron/check_tasks.sh
2-59/1 * * * * /home/procksi/latest/cron/check_requests.sh
}}}

 crontab -e

Proceed analogously for ''/home/procksi/test''.

== Linking External Software ==
Make sure that all links in ''/home/procksi/latest/bin'' point to the correct files of the operating system: sh, compress, bzip2, gzip, zip, ppmz, qsub

Make sure that all further executable links in ''/home/procksi/latest/bin'' point to the correct files on the file system:

exec_cluster, exec_!DaliLite, exec_MaxCMO, exec_molauto, exec_molscript
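A quick way to spot links that do not resolve (optional check):

{{{
# Print any broken symlinks in the bin directory
find /home/procksi/latest/bin -type l ! -exec test -e {} \; -print
}}}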