-\section{Installation}
+\section{Installation and Configuration}
\TODO{}
\subsection{Complete RPMs description}
-\subsection{Daemons description}
-\subsection{CLI tools description: purge/dump/load}
+%\subsection{Daemons description}
+%\subsection{CLI tools description: purge/dump/load}
+\subsection{\LB server}
+
+\subsubsection{Standard installation}
+
+\TODO{salvet}
+
+\subsubsection{Migration from previous versions}
+
+\TODO{salvet}
+
+\subsubsection{Index configuration}
+
+\TODO{Initial YAIM way only, rest in Sect.~\ref{maintain:index}}
+
+\subsubsection{Notification delivery}
+
+\subsubsection{Exploiting parallelism}
+
+\subsubsection{Multiple server instances}
+
+\subsubsection{Tuning database engine}
+
+\TODO{Salvet}
+
+\subsubsection{Purging old data}
+
+\TODO{Setup cron job, refer to Sect.~\ref{maintain:purge}}
+
+\subsubsection{Export to Job Provenance}
+
+
+\subsection{\LB proxy}
+
+\subsection{\LB logger}
+
+
+\subsection{Smoke tests}
+
+\TODO{ljocha: get something from the old testing docs}
\section{Introduction}
\TODO{This document should contain:}
+\subsection{Service overview}
+
+A~fairly complete overview of the \LB service is given in the \LB User's Guide~\cite{lbug}.
+This section is a~brief excerpt only, providing minimal information necessary for
+understanding the rest of this document.
+
+The task of \LB is gathering \emph{\LB events} from various grid middleware components,
+and delivering the events to \LB servers where users can query for them.
+Figure~\ref{f:gather} shows all principal components involved in the event gathering.
+
+\begin{figure}
+\centering
+\includegraphics[width=.67\hsize]{LB-components-gather}
+\caption{Components involved in gathering and transferring \LB events}
+\label{f:gather}
+\end{figure}
+
+\begin{figure}
+\centering
+\includegraphics[width=.67\hsize]{LB-components-query}
+\caption{\LB queries and notifications}
+\end{figure}
+
+\TODO{review suitability of this text wrt.\ inclusion in both AG and UG}
+\input components
+
+
\subsection{Deployment scenarios}
+\subsubsection{Standalone \LB server}
+
+\subsubsection{Hybrid \LB server-proxy}
+
+\subsubsection{\LB server on WMS node}
+Highly obsolete and inefficient\dots
-\section{Running and stopping the services}
-\TODO{}
+\section{Maintenance}
-\subsection{Tests if everything works properly}
+\subsection{\LB server}
+This section deals with several typical but less straightforward tasks
+that require a~more verbose description.
+It is complemented by the full command reference provided
+as standard manual pages installed with the \LB packages.
+
+\subsubsection{Changing index configuration}
+
+\TODO{ljocha}
+
+\subsubsection{Multiple server instances}
+
+\subsubsection{Backup dumps}
+
+\subsubsection{Purging old data}
+
+\TODO{salvet}
+
+\subsubsection{Export to Job Provenance}
+
+
+\subsection{\LB proxy}
+
+\subsection{\LB logger}
\newpage
\input{LBAG-Installation}
-\newpage
-\input{LBAG-Configuration}
+%\newpage
+%\input{LBAG-Configuration}
\newpage
\input{LBAG-Running}
-\newpage
-\input{LBAG-Troubleshooting}
+%\newpage
+%\input{LBAG-Troubleshooting}
\nocite{jgc}
\bibliographystyle{unsrt}
\label{f:comp-gather}
\end{figure}
-\subsubsection{\LB API and library}
-Both logging events and querying the service are implemented via
-calls to a~public \LB API.
-The complete API (both logging and queries)
-is available in ANSI~C binding, most of the querying capabilities also in C++.
-These APIs are provided as sets of C/C++ header files and shared libraries.
-The library implements communication protocol with other \LB components
-(logger and server), including encryption, authentication etc.
-
-We do not describe the API here in detail; it is documented in~\LB User's
-Guide\footnote{\url{https://edms.cern.ch/file/571273/1/LB-guide.pdf}},
-including complete reference and both simple and complex usage examples.
-
-Events can be also logged with a~standalone program (using the C~API in turn),
-intended for usage in scripts.
-
-The query interface is also available as a~web-service provided by the
-\LB server (Sect.~\ref{server}).
-
-\subsubsection{Logger}
-The task of the \emph{logger} component is taking over the events from
-the logging library, storing them reliably, and forwarding to the destination
-server.
-The component should be deployed very close to each source of events---on the
-same machine ideally, or, in the case of computing elements with many
-worker nodes, on the head node of the cluster%
-\footnote{In this setup logger also serves as an application proxy,
-overcoming networking issues like private address space of the worker nodes,
-blocked outbound connectivity etc.}.
-
-Technically the functionality is realized with two daemons:
-\begin{itemize}
-\item \emph{Local-logger} accepts incoming events,
-appends them in a~plain disk file (one file per Grid job),
-and forwards to inter-logger.
-It is kept as simple as possible in order to achieve
-maximal reliability.
-\item \emph{Inter-logger} accepts the events from the local-logger,
-implements the event routing (currently trivial as the destination
-address is a~part of the jobid), and manages
-delivery queues (one per destination server).
-It is also responsible for crash recovery---on startup, the queues are
-populated with undelivered events read from the local-logger files.
-Finally, the inter-logger purges the files when the events are delivered to
-their final destination.
-\end{itemize}
-
-\subsubsection{Server}
-\label{server}
-\emph{\LB server} is the destination component where the events are delivered,
-stored and processed to be made available for user queries.
-The server storage backend is implemented using MySQL database.
-
-Incoming events are parsed, checked for correctness, authorized (only the job
-owner can store events belonging to a~particular job), and stored into the
-database.
-In addition, the current state of the job is retrieved from the database,
-the event is fed
-into the state machine (Sect.~\ref{evprocess}), and the job state updated
-accordingly.
-
-On the other hand, the server exposes querying interface (Fig.~\ref{f:comp-query}, Sect.~\ref{retrieve}).
-The incoming user queries are transformed into SQL queries on the underlying
-database engine.
-The query result is filtered, authorization rules applied, and the result
-sent back to the user.
-
-While using the SQL database, its full query power is not made available
-to end users.
-In order to avoid either intentional or unintentional denial-of-service
-attacks, the queries are restricted in such a~way that the transformed SQL
-query must hit a~highly selective index on the database.
-Otherwise the query is refused, as full database scan would yield unacceptable
-load.
-The set of indices is configurable, and it may involve both \LB system
-attributes (\eg job owner, computing element,
-timestamps of entering particular state,~\dots) and user defined ones.
-
-The server also maintains the active notification handles
-(Sect.~\ref{retrieve}), providing the subscription interface to the user.
-Whenever an event arrives and the updated job state is computed,
-it is matched against the active handles%
-\footnote{The current implementation enforces specifying an~actual jobid
-in the subscription hence the matching has minimal performance impact.}.
-Each match generates a~notification message, an extended \LB event
-containing the job state data, notification handle,
-and the current user's listener location.
-The event is passed to the \emph{notification inter-logger}
-via persistent disk file and directly (see Fig.~\ref{f:comp-query}).
-The daemon delivers events in the standard way, using the specified
-listener as destination.
-In addition, the server generates control messages when the user re-subscribes,
-changing the listener location.
-Inter-logger recognizes these messages, and changes its routing of all
-pending events belonging to this handle accordingly.
-
-
-% asi nepotrebujeme \subsubsection{Clients}
-
-\subsubsection{Proxy}
-\TODO{Proxy is now "integrated" into the server executable, update the text:}
-\emph{\LB proxy} is the implementation of the local view concept
-(Sect.~\ref{local}).
-When deployed (on the Resource Broker node in the current gLite middleware)
-it takes over the role of the local-logger daemon---it accepts the incoming
-events, stores them in files, and forwards them to the inter-logger.
-
-In addition, the proxy provides the basic principal functionality of \LB server,
-\ie processing events into job state and providing a~query interface,
-with the following differences:
-\begin{itemize}
-\item only events coming from sources on this node are considered; hence
-the job state may be incomplete,
-\item proxy is accessed through local UNIX-domain socket instead of network
-interface,
-\item no authorization checks are performed---proxy is intended for
-privileged access only (enforced by the file permissions on the socket),
-\item aggressive purge strategy is applied---whenever a~job reaches
-a~known terminal state (which means that no further events are expected), it is purged
-from the local database immediately,
-\item no index checks are applied---we both trust the privileged parties
-and do not expect the database to grow due to the purge strategy.
-\end{itemize}
+\input components
\subsubsection{Sequence codes for event ordering}%
\label{seqcode}
--- /dev/null
+\subsubsection{\LB API and library}
+Both logging events and querying the service are implemented via
+calls to a~public \LB API.
+The complete API (both logging and queries)
+is available in ANSI~C binding, most of the querying capabilities also in C++.
+These APIs are provided as sets of C/C++ header files and shared libraries.
+The library implements communication protocol with other \LB components
+(logger and server), including encryption, authentication etc.
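+
+For illustration, a~minimal status query might look as follows.
+This is a~sketch only, assuming the usual \texttt{edg\_wll\_} C~binding;
+the User's Guide remains the authoritative reference.
+\begin{verbatim}
+#include <stdio.h>
+
+#include "glite/lb/context.h"
+#include "glite/lb/consumer.h"
+
+int main(int argc, char *argv[])
+{
+    edg_wll_Context ctx;
+    edg_wlc_JobId   job;
+    edg_wll_JobStat status;
+
+    if (argc != 2) {
+        fprintf(stderr, "usage: %s <jobid>\n", argv[0]);
+        return 1;
+    }
+
+    edg_wll_InitContext(&ctx);          /* per-process library context */
+    edg_wlc_JobIdParse(argv[1], &job);  /* jobid carries the server address */
+
+    /* ask the LB server (taken from the jobid) for the current job state */
+    if (edg_wll_JobStatus(ctx, job, 0, &status)) {
+        char *etxt, *edsc;
+        edg_wll_Error(ctx, &etxt, &edsc);
+        fprintf(stderr, "edg_wll_JobStatus: %s (%s)\n", etxt, edsc);
+        return 1;
+    }
+    printf("state: %s\n", edg_wll_StatToString(status.state));
+
+    edg_wll_FreeStatus(&status);
+    edg_wll_FreeContext(ctx);
+    return 0;
+}
+\end{verbatim}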
+
+We do not describe the API here in detail; it is documented in~\LB User's
+Guide\footnote{\url{https://edms.cern.ch/file/571273/1/LB-guide.pdf}},
+including complete reference and both simple and complex usage examples.
+
+Events can also be logged with a~standalone program (built on the C~API),
+intended for use in scripts.
+
+The query interface is also available as a~web-service provided by the
+\LB server (Sect.~\ref{server}).
+
+\subsubsection{Logger}
+The task of the \emph{logger} component is to take over events from
+the logging library, store them reliably, and forward them to the destination
+server.
+The component should be deployed very close to each source of events---on the
+same machine ideally, or, in the case of computing elements with many
+worker nodes, on the head node of the cluster%
+\footnote{In this setup the logger also serves as an application proxy,
+overcoming networking issues like private address space of the worker nodes,
+blocked outbound connectivity etc.}.
+
+Technically the functionality is realized with two daemons:
+\begin{itemize}
+\item \emph{Local-logger} accepts incoming events,
+appends them to a~plain disk file (one file per Grid job),
+and forwards them to the inter-logger.
+It is kept as simple as possible in order to achieve
+maximal reliability (see the sketch after this list).
+\item \emph{Inter-logger} accepts the events from the local-logger,
+implements the event routing (currently trivial as the destination
+address is a~part of the jobid), and manages
+delivery queues (one per destination server).
+It is also responsible for crash recovery---on startup, the queues are
+populated with undelivered events read from the local-logger files.
+Finally, the inter-logger purges the files when the events are delivered to
+their final destination.
+\end{itemize}
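+
+The reliability discipline of the local-logger can be sketched as follows.
+This is hypothetical code for illustration only, not the actual
+implementation; \texttt{forward\_to\_interlogger} is a~made-up name.
+\begin{verbatim}
+#include <unistd.h>
+#include <sys/types.h>
+
+/* hypothetical hand-over to the inter-logger */
+extern int forward_to_interlogger(const char *line, size_t len);
+
+/* Append-before-forward: the event is flushed to the per-job file
+ * first, so a crash between the two steps loses nothing; the
+ * inter-logger re-reads undelivered events from the files on startup. */
+int store_and_forward(int event_fd, const char *line, size_t len)
+{
+    if (write(event_fd, line, len) != (ssize_t) len) return -1;
+    if (fsync(event_fd)) return -1;   /* the event is now safe on disk */
+    return forward_to_interlogger(line, len);
+}
+\end{verbatim}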
+
+\subsubsection{Server}
+\label{server}
+\emph{\LB server} is the destination component where the events are delivered,
+stored and processed to be made available for user queries.
+The server storage backend is implemented using a~MySQL database.
+
+Incoming events are parsed, checked for correctness, authorized (only the job
+owner can store events belonging to a~particular job), and stored into the
+database.
+In addition, the current state of the job is retrieved from the database,
+the event is fed
+into the state machine (Sect.~\ref{evprocess}), and the job state is updated
+accordingly.
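+
+Schematically, this processing path looks as follows; all names below are
+illustrative only, not the actual \LB server internals.
+\begin{verbatim}
+#include <errno.h>
+
+/* hypothetical types and helpers standing in for the LB internals */
+typedef struct db_handle db_handle;
+typedef struct { const char *jobid; const char *owner; } event_t;
+typedef struct { int code; } job_state;
+
+extern int  authorized(const event_t *ev);
+extern void db_store_event(db_handle *db, const event_t *ev);
+extern void db_load_state(db_handle *db, const char *jobid, job_state *st);
+extern int  db_store_state(db_handle *db, const char *jobid,
+                           const job_state *st);
+extern void state_machine_step(job_state *st, const event_t *ev);
+
+int process_event(db_handle *db, const event_t *ev)
+{
+    job_state st;
+
+    if (!authorized(ev)) return EPERM;  /* only the job owner may store */
+    db_store_event(db, ev);             /* parsed, checked event -> DB  */
+    db_load_state(db, ev->jobid, &st);  /* fetch the current job state  */
+    state_machine_step(&st, ev);        /* feed the event to the state
+                                           machine (Sect. evprocess)    */
+    return db_store_state(db, ev->jobid, &st);
+}
+\end{verbatim}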
+
+On the other hand, the server exposes a~querying interface (Fig.~\ref{f:comp-query}, Sect.~\ref{retrieve}).
+The incoming user queries are transformed into SQL queries on the underlying
+database engine.
+The query result is filtered, authorization rules applied, and the result
+sent back to the user.
+
+Although an~SQL database is used, its full query power is not made available
+to end users.
+In order to avoid either intentional or unintentional denial-of-service
+attacks, the queries are restricted in such a~way that the transformed SQL
+query must hit a~highly selective index on the database.
+Otherwise the query is refused, as a~full database scan would yield
+unacceptable load.
+The set of indices is configurable, and it may involve both \LB system
+attributes (\eg job owner, computing element,
+timestamps of entering a~particular state,~\dots) and user-defined ones.
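+
+For instance, a~query restricted by the job owner is accepted only when
+the server maintains an index on the owner attribute.
+A~sketch using the C~binding (names assumed as above):
+\begin{verbatim}
+#include "glite/lb/context.h"
+#include "glite/lb/consumer.h"
+
+/* Query all jobs of a given owner.  If no index covers the owner
+ * attribute, the server refuses the query instead of scanning
+ * the whole database. */
+int query_by_owner(edg_wll_Context ctx, const char *owner,
+                   edg_wlc_JobId **jobs, edg_wll_JobStat **states)
+{
+    edg_wll_QueryRec cond[2];
+
+    cond[0].attr    = EDG_WLL_QUERY_ATTR_OWNER;
+    cond[0].op      = EDG_WLL_QUERY_OP_EQUAL;
+    cond[0].value.c = (char *) owner;
+    cond[1].attr    = EDG_WLL_QUERY_ATTR_UNDEF;  /* terminates the list */
+
+    return edg_wll_QueryJobs(ctx, cond, 0, jobs, states);
+}
+\end{verbatim}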
+
+The server also maintains the active notification handles
+(Sect.~\ref{retrieve}), providing the subscription interface to the user.
+Whenever an event arrives and the updated job state is computed,
+it is matched against the active handles%
+\footnote{The current implementation enforces specifying an~actual jobid
+in the subscription, hence the matching has minimal performance impact.}.
+Each match generates a~notification message, an extended \LB event
+containing the job state data, the notification handle,
+and the current user's listener location.
+The event is passed to the \emph{notification inter-logger}
+both via a~persistent disk file and directly (see Fig.~\ref{f:comp-query}).
+The daemon delivers events in the standard way, using the specified
+listener as destination.
+In addition, the server generates control messages when the user re-subscribes,
+changing the listener location.
+Inter-logger recognizes these messages, and changes its routing of all
+pending events belonging to this handle accordingly.
+
+
+% asi nepotrebujeme \subsubsection{Clients}
+
+\subsubsection{Proxy}
+\TODO{Proxy is now ``integrated'' into the server executable, update the text:}
+\emph{\LB proxy} is the implementation of the local view concept
+(Sect.~\ref{local}).
+When deployed (on the Resource Broker node in the current gLite middleware)
+it takes over the role of the local-logger daemon---it accepts the incoming
+events, stores them in files, and forwards them to the inter-logger.
+
+In addition, the proxy provides the principal functionality of the \LB server,
+\ie processing events into job state and providing a~query interface,
+with the following differences:
+\begin{itemize}
+\item only events coming from sources on this node are considered; hence
+the job state may be incomplete,
+\item the proxy is accessed through a~local UNIX-domain socket instead of
+a~network interface (see the sketch after this list),
+\item no authorization checks are performed---the proxy is intended for
+privileged access only (enforced by the file permissions on the socket),
+\item an~aggressive purge strategy is applied---whenever a~job reaches
+a~known terminal state (\ie no further events are expected), it is purged
+from the local database immediately,
+\item no index checks are applied---we both trust the privileged parties
+and do not expect the database to grow due to the purge strategy.
+\end{itemize}
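+
+A~client is pointed at the proxy socket through a~context parameter.
+The parameter name below is an assumption; check the \texttt{context.h}
+shipped with your \LB version.
+\begin{verbatim}
+#include "glite/lb/context.h"
+
+/* Sketch: direct the logging calls of a context to the proxy's local
+ * UNIX-domain socket instead of the network interface.  The name
+ * EDG_WLL_PARAM_LBPROXY_STORE_SOCK is an assumption. */
+int use_proxy_socket(edg_wll_Context ctx, const char *sock_path)
+{
+    return edg_wll_SetParam(ctx, EDG_WLL_PARAM_LBPROXY_STORE_SOCK,
+                            sock_path);
+}
+\end{verbatim}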
+