Communication

Active messages

There are two types of active messages: small and large. With small messages, the message is received into a temporary buffer, then the handler function is called with a pointer to the temporary buffer passed as a parameter. The data in the buffer is "raw" packed data, so it's up to the handler function to unpack it. Utility functions for unpacking the data are provided by the infrastructure.

With large messages, the active message handler is called before the data is received. The data can either be received into a user-specified buffer or forwarded to another destination. To receive the data into a buffer, the handler calls a function to specify the buffer and an optional completion callback. The completion callback is called when the infrastructure has received the data into the buffer. To forward the data, the handler function calls a function to specify the destination along with the handler function to be called at the destination.

The threshold for the size of small messages is set at initialization time (of the stream, or infrastructure). We can consider allowing it to be set dynamically, but we have to make sure all parties (pairs of agents/junctions or all agents/junctions in a stream) do it collectively.

The handler functions are "registered" at the receiver. Registration associates a user-provided handler id with the handler function. This association is local to the receiver (i.e., the same handler id can potentially correspond to different functions at different receivers). A different set of handlers can be associated with each stream. Handlers are registered at init time; this avoids the need for bootstrap synchronization (you can't send an AM message before its handler is registered, but how do you know that it's already been registered at the receiver?). This requirement may be relaxed. The handler functions provide a mechanism for multiple stream operations to be defined per stream, one per function registered.

It has been suggested that we combine small and large active message sends into a single active message send function that would send both a small buffer, delivered immediately, and a large buffer, delivered after the handler exits. To emulate a small-message send, the user would specify NULL for the large buffer; to emulate a large-message send, the user would specify NULL for the small buffer.

Here are some references for other active-message implementations. There are probably others, but these are the ones I generally refer to.

Non-Blocking Sends

Non-blocking versions of sends are also provided. Non-blocking send functions pass back a request handle. The request handle is used to test for completion of the send. Functions for testing and waiting on one or several requests are provided.

We should provide a mechanism where the user can wait on AM request handles and Unix file descriptors (and whatever the equivalent would be for Windows). E.g., we can provide an STCI_sys_select() function (and STCI_sys_poll()) which would take fds and request handles and return when something completes. Calling STCI_sys_select() would also allow the comm library to make progress while in a system call.

Flow Control

If handler functions are called asynchronously, the receiver could be overwhelmed by handler function calls and not be able to get any work done. This doesn't happen with traditional send-receive type messaging because if the receiver isn't ready to receive messages, it won't post a receive.

There needs to be a way for the user to control the invocation of handlers. There may be more than one way to do this, and the user should be able to choose which mechanism. One way to do this would be to provide a mode where handlers are only called when the user calls a "poll" function, rather than asynchronously. A flow control mechanism also needs to be provided for the asynchronous mode. This can be in the form of stop_receive() and start_receive() functions, which can be used to keep handlers from being called for certain periods.

There can also be a mode where messages are dropped rather than delayed. If packets are dropped there should be a way for the app to find out how many messages were dropped (or maybe even how many bytes were dropped).

We may want to provide a way to send a 'stop_sending' event to the app at the sender in response to a call to stop_receive(). This could be a high-priority message that would be delivered even if the sender can't receive messages (e.g., the sender is blocked in a receive, or the sending node is congested). We may also want to limit the amount of memory used for buffering sends and receives, so that we don't use up all of memory for messages. Send functions should also be able to return an out-of-resources error.

Message Ordering and Reliability

Message ordering and reliability can be set for each stream.

On an in-order stream, between any two processes, message handlers are called at the receiver in the order that the corresponding sends were called by the sender. The data for large messages need not arrive in the same order (a smaller message sent after a larger message may be fully received before the larger one). Point-to-point messages are not ordered with regard to stream messages. Messages on two different streams are not ordered with regard to each other.

On an unordered stream, message handlers can be called in any order.

On a reliable stream, the communication library will detect lost or corrupted messages and resend them transparent to the application. How long does the comm lib keep retrying? Should this be user selectable? What happens in that case? Is the stream "closed"? How is an error returned to the user?

On an unreliable stream messages may be dropped by the underlying network layer, or by the communication library (e.g., out of buffers). No notification is made to the user of dropped messages (maybe the user can query for the status of a connection which may report %message loss, etc.). Corrupted messages detected by the communication library are also dropped, so the app can be sure that if it gets a message, it's correct and complete.

Handler functions

Handler functions are called in response to an incoming message. Depending on the implementation, a handler function may be called within an interrupt context, from a separate thread, or from within a "poll" function called by the main (only) thread.

Stream synchronization — It may be desirable, for a junction or front-end with many children, to have the handler called only when a particular message is received from every child (or some number of children), rather than once for each message. Synchronization is specified by how many children to wait for, plus an optional timeout. The timeout specifies how long to wait, after the first message is received, for messages from the specified number of children to arrive. If the required number of messages has not been received by then, the handler is called anyway. The timeout value can be infinite, so that the handler will never be called without receiving the required number of messages. To reduce the buffering requirements, only small messages can be synchronized. Also, for large messages, the data is only transferred once the receive buffer is specified by the user in the handler function. So if large messages were synchronized, the transfer of the data from all of the messages would be delayed until all of the messages arrived. This is probably not desirable. Only messages in the DOWN direction (i.e., messages from children) can be synchronized; it is invalid to have a message from a parent specify a synchronizing handler (this requirement is to avoid deadlocks).

On an in-order stream, if a synchronized message from a child is being delayed waiting for messages from other children, any subsequent messages received from that child will also be delayed to preserve order. This requirement does not apply to out-of-order streams.

We need to address buffering issues for delayed messages. The application should be able to specify the number of buffers (or number of bytes for buffered messages). Flow control is needed to ensure we don't run out of buffer space.

Synchronization is specified on a per-handler basis when the handler is registered. The prototype of the handler function for a synchronizing handler differs from the normal small-message handlers in that the synchronizing handler is passed an array of source ids, an array of buffer pointers, an array of sizes, and an array of parameters, where the ith element in each of the arrays corresponds to one of the messages received. Note that the order of messages reported in the arrays is not specified.

Allowed operations from within a handler — This still needs more thought. What operations are allowed in a handler? Can you send? Can you just enqueue a send? No communication operations? Possibly, for stream communication, one can send at most one message back in the direction of the source, but any number of messages in the opposite direction. What about acquiring a lock?

If we allow sends from within a handler, we can define a new set of send functions (e.g., am_inhandler_send()) to enforce the restrictions on sends.

Progress

Progress on sending and receiving messages can be made either from within STCI communication functions, or asynchronously, e.g., in a separate progress thread or in an interrupt context. Different architectures have varying support for threads and interrupts, so there must be a way for the tool to query and control this. The infrastructure needs to provide a STCI_Progress_level(requested, provided) function. The tool calls this function specifying the level it requires, e.g., STCI_POLLING, STCI_THREAD or STCI_INTERRUPT. The function returns (in the provided parameter) the level actually provided. If the infrastructure implementation provides the level that the user requested, then the infrastructure will use that method to make progress; otherwise, it will provide the STCI_POLLING level. The STCI_POLLING level means that communication progress is made, and handler functions called, only from within STCI communication functions (e.g., am_small_send()) or within the am_poll() function.

Function descriptions

Notation

Identifying the source or destination of a message
For stream communication, this consists of a stream handle and a direction (UP/DOWN). For point-to-point communication, this consists of a stream handle and a rank (rank needs to be defined). Group communication is just like stream communication, so the source or destination is the stream handle and a direction.
buffer_info
This is an opaque object that would be returned by a memory allocation or memory registration function and is passed in to a send or receive function. The purpose of this object is to allow the infrastructure's communication library to associate the allocation or registration of a buffer with communication operations performed with it. E.g., some network libraries use keys to control remote access to local memory. The buffer_info object could be used to keep track of the key associated with a particular region of memory. When the user makes a call to the large-message receive function, the user passes the buffer_info along with the buffer pointer. The communications library can then pass the key to the sender to directly transfer the message data into the receiver's buffer.
Note: Other communication libraries keep track of keys and memory registration using a registration cache or a hash table. Such methods require the library to be informed when a previously registered region is freed, which cannot, in general, be done reliably. Using buffer_info makes the association between a registered memory region and a buffer passed to a communication function explicit, allowing the user to manage such associations correctly.
parameter block
Parameter blocks are used to pass parameters to handler functions. A parameter block is simply a contiguous buffer (not greater than STCI_MAX_PARAM_SIZE) where the parameters are written. This buffer is then sent to the receiver and passed to the handler function (see the function descriptions below). Because the individual parameters may have various types, the values may need to be translated on heterogeneous systems before being passed to the handler. For this reason the user must describe the datatype layout of the parameters. E.g., if an int, a double and a short are to be passed as parameters to a handler, the user would define the parameter block as struct param_s { int a; double b; short c; } p;. The user would also define a datatype description corresponding to struct param_s. The parameter block can also be thought of as a user-defined header for the message.
Note: Some active message libraries allow the user to specify the parameters to be passed to the handler function as parameters to the send function, rather than using parameter blocks (e.g., GASNet). While this may look more intuitive and be more convenient to use, in these libraries all parameters must be cast to the same datatype, usually an int or something large enough to store a pointer. So if one wants to pass two chars as parameters, one must cast them into 32-bit or 64-bit values, which increases the amount of data to be sent. Also, there would be issues with passing in floating-point values, and with translating the values in heterogeneous environments where floating-point formats aren't the same.
datatype
A description of the layout and types of the elements in a buffer. This is needed so the library knows how to convert a buffer when communicating in a heterogeneous environment. It would be desirable to be able to create datatypes on-the-fly, rather than having to pre-register them.

Registering handler functions

am_register_handlers( num_handlers, handler_desc[] )

Associates handler ids with handler functions. Association is local.

num_handlers
number of handlers being registered
handler_desc
array of { handler_id, handler_fn_pointer }
handler_id
user provided id (e.g., we could use an int). If a handler is already registered with this ID an error is returned
handler_fn_pointer
function pointer to handler function

Small messages (up to size AM_MAX_SMALL_MSG)

am_small_send( dest_id, stream_id, buffer, buffer_info, size, datatype, handler_id, param_size, param_datatype, ptr_to_param )

dest_id
identifies destination (e.g., pt-to-pt node id, or stream direction)
stream_id
stream with which the data is associated
buffer
pointer to buffer containing data to send
buffer_info
buffer attributes (such as "memory is registered for access by x")
size
length of buffer (or count of datatype)
datatype
description of data layout, for non-contiguous or heterogeneous
handler_id
id for active message handler function on remote node
param_size
size (in bytes) of parameter block to be sent with the data. (may be 0 bytes in length)
param_datatype
parameter datatype description
ptr_to_param
pointer to sender defined parameter block

am_small_handler_fn( source_id, stream_id, buffer, size, ptr_to_param, param_size )

source_id
identifies source of message (e.g., pt-to-pt node id or stream handle)
stream_id
stream with which the data is associated
buffer
pointer to temporary, packed buffer containing message data. Handler will have to copy/unpack data from buffer before returning.
size
length of data
ptr_to_param
pointer to sender defined parameter block
param_size
length of parameter block

Large messages (can be any size. Intended for "zero-copy" transfer.)

am_large_send( dest_id, stream_id, buffer, buffer_info, size, datatype, handler_id, param_size, param_datatype, ptr_to_param )

same parameters as for small messages

am_large_isend( dest_id, stream_id, buffer, buffer_info, size, datatype, handler_id, param_size, param_datatype, ptr_to_param, completion_fn, completion_param(s) )

same parameters as for small messages, plus the following:

completion_fn
function to be called when the user can modify the send buffer (e.g., the send has completed, or the infrastructure has buffered the data to be sent). The user may not modify the buffer after am_large_isend() has been called until the completion function is called. This parameter may not be NULL.
completion_param(s)
parameters to be passed to completion_fn

am_large_handler_fn( source_id, stream_id, size, info, ptr_to_param, param_size )

source_id
identifies source of message (e.g., pt-to-pt node id or stream handle)
stream_id
stream with which the data is associated
size
length of data sent by sender
info
a handle used to identify the received message. This is used for specifying a buffer to receive the data into or for forwarding the data.
ptr_to_param
pointer to sender defined parameter block
param_size
length of parameter block

am_receive_message( info, buffer, buffer_info, size, datatype, completion_fn, parameter(s) )

Called from inside a handler function. Receives message data into user-specified buffer. This function can only be called once inside the handler function. Note that by also calling am_forward_message() the data for the message can be forwarded to other destinations as well as being received locally.

info
info handle passed into handler function
buffer
pointer to buffer into which message should be received
buffer_info
buffer attributes (such as "memory is registered for access by x")
size
max length of data to be received
datatype
description of layout of data to be received (for non-contiguous or heterogeneous)
completion_fn
function to be called when infrastructure has received data into user specified buffer. May be NULL.
parameter(s)
parameters to be passed to completion_fn

am_completion_fn( parameter(s) )

Called when data from large AM has been received into user buffer

parameter(s)
parameter passed in from am_large_handler_fn

am_forward_message( info, dest_id, stream_id, handler_id, param_size, param_datatype, ptr_to_param)

Called from inside a handler function. Forwards message to another destination. This function can be called multiple times within the same handler function to specify that the same message should be forwarded to more than one destination, however, a message cannot be forwarded to a particular destination more than once. Note, if am_receive_message() has also been called in the same handler function, the data will be received locally as well as being forwarded.

info
info handle passed into the handler function
dest_id
identifies destination (e.g., pt-to-pt node id, or stream handle)
stream_id
stream with which the data is associated
handler_id
id for active message handler function on remote node
param_size
size of parameter block to be sent with the data. (may be 0 bytes in length)
param_datatype
parameter datatype description
ptr_to_param
pointer to sender defined parameter block

Implementation notes — The idea behind the am_receive_message() operation is to allow the communication library to transfer the data directly into the user's buffer in the most efficient way possible, e.g., using RDMA get. The idea behind the am_forward_message() operation is to allow the communication library to pipeline the forwarding of data through internal buffers, rather than receiving the entire data into the user's buffer and then sending it out again. Multiple destinations can be specified for forwarding the data, which may allow the implementation to receive a chunk of the data once and send it out to each destination from the same internal buffer. If the user requests the data to be received into local memory as well as being forwarded, the infrastructure can do a memcpy from the internal buffer to the user's buffer, or possibly RDMA the data to the user's buffer in chunks which are then forwarded, directly from the user's buffer, to the destinations.

Functions for sending multi-rooted communications

am_sendrecv( dest_id, stream_id, src_buffer, src_buffer_info, src_size, src_datatype, dest_buffer, dest_buffer_info, dest_size, dest_datatype, handler_id, param_size, param_datatype, ptr_to_param )

Used for group communications where data is sent and received. The handler specified in this function is a small-message type handler.

dest_id
identifies destination (e.g., pt-to-pt node id, or stream direction)
stream_id
stream with which the data is associated
src_buffer
pointer to buffer containing data to send
src_buffer_info
buffer attributes (such as "memory is registered for access by x")
src_size
length of buffer (or count of datatype)
src_datatype
description of data layout, for non-contiguous or heterogeneous
dest_buffer
pointer to buffer into which the result is received
dest_buffer_info
buffer attributes (such as "memory is registered for access by x")
dest_size
length of buffer (or count of datatype)
dest_datatype
description of data layout, for non-contiguous or heterogeneous
handler_id
id for active message handler function on remote node
param_size
size of parameter block to be sent with the data. (may be 0 bytes in length)
param_datatype
parameter datatype description
ptr_to_param
pointer to sender defined param

Group Communication

Group communication (e.g., barrier, broadcast, etc.) is performed over streams. A group-communication stream is created over a set of agents and possibly the front-end. Active message operations are used to perform the group communications. The type of group communication (e.g., barrier vs. broadcast) is chosen by specifying a predefined handler_id. For a particular participant of a group communication, the group communication operations fall into three categories: (1) The participant neither sends nor receives any data; (2) The participant either sends or receives data, but not both; (3) The participant both sends and receives data. The example below shows a barrier operation.

/* global variable */
int barrier_complete;

/* Handler registered with STCI_BARRIER handler_id */
my_barrier_handler( src, stream, buf, size, hdr, hdr_len ) 
    { barrier_complete = TRUE; };

...
/* in main() */
barrier_complete = FALSE;
am_small_send( UP, grp_stream, NULL, NULL, 0, NULL, STCI_BARRIER, 0, NULL, NULL );
while (! barrier_complete)
    /* NOOP */;

In this example, before performing a barrier, the process registers the handler function my_barrier_handler with the predefined handler id STCI_BARRIER. This handler function simply sets a global variable barrier_complete to TRUE. The process is also part of a group communication stream, grp_stream. To perform the barrier, the process sets barrier_complete to FALSE, then sends a small active message on grp_stream with the predefined handler id STCI_BARRIER. No data is sent, so those parameters are set to NULL. After sending the active message, the process waits in a busy loop for barrier_complete to become TRUE. The barrier operation is performed by the stream. When all processes on the stream have reached the barrier, the stream sends an active message back to each process with the STCI_BARRIER handler id and the same handler parameters that the process specified in the am_small_send() function used to initiate the barrier. The handler sets barrier_complete to TRUE, and the process exits the while loop.

This is a simple example; a more sophisticated example could pass in a pointer to a local variable for the handler function to set (rather than a global variable), so that different threads can perform barriers on different streams at the same time. One could also use a semaphore, rather than a variable, so the process would avoid busy waiting and sleep while waiting for the barrier to complete.

"Multi-rooted" group communications, such as allgather, need to specify a buffer for the result up front, when the collective operation is initiated. This is so that the result can be gathered into the buffer as the operation is being performed. The example below shows an allgather operation.

int allgather_complete;
my_allgather_handler( src, stream, buf, size, hdr, hdr_len ) 
    { allgather_complete = TRUE; };

...

int result[N];
int my_value;

my_value = rank;
allgather_complete = FALSE;
am_sendrecv( UP, grp_stream, &my_value, NULL, 1, STCI_INT, result, NULL, N, STCI_INT,
    STCI_ALLGATHER, 0, NULL, NULL );
while (! allgather_complete)
    /* NOOP */;

/* result[] now contains the ranks of each process */

In this example, we used a flag allgather_complete to indicate when the operation completes, just like we did in the barrier example. The process sets its value, my_value, to its rank, then initiates the allgather by calling the am_sendrecv() function to send my_value in an active message on the stream with the STCI_ALLGATHER handler id. The am_sendrecv() function also allows the process to specify the buffer in which to receive the result. The stream performs the allgather, gathering the values from each process, then sends an active message back to each process with the handler_id STCI_ALLGATHER and with the same handler parameters that the process specified in the am_sendrecv() function.

An alternative interface considered for multi-rooted group communication operations was, rather than defining a new am_sendrecv function, to have the user use a regular am_small_send() or am_large_send() function to initiate the operation, and have the result sent back to the process in the payload of an active message, so that after the operation is complete, each process would receive the result into its result buffer. However, this does not allow the STCI communication library to assemble the intermediate result directly into the buffer. E.g., as the operation proceeds, a process receives the values from other processes and forwards these along to other processes in a recursive-doubling fashion. If the communication library knows ahead of time where the application wants the result to be placed, it can use this buffer to store the intermediate values, so that when the operation is complete, all of the values are in the result buffer. If the communication library does not know the destination buffer ahead of time, it must allocate a temporary buffer to gather the intermediate results, then copy the result from the temporary buffer to the destination buffer when the application specifies it in the handler.

Block diagrams for active message communication on a stream

Below are block diagrams of a front-end sending to agents on a stream. This assumes there are no junctions (AKA plugins) on the stream. Diagrams for communication between Agents and/or Junctions and/or Front-ends are identical.

The first two diagrams show blocking and nonblocking communication for small messages. The third and fourth diagrams show blocking large message communication. Non-blocking large message communication is not shown. The fourth diagram shows the receiver forwarding the message data to another destination, rather than receiving it locally.

stci-communicate-small-b-20071112.PNG
stci-communicate-small-nb-20071112.PNG
stci-communicate-large-b-20071112.PNG
stci-communicate-large-b-fwd-20071112.PNG

Junction Communication

There are two models of communication used by a junction that we need to consider. One is a simplified model (simple model), similar to an MRNet filter, that has a simpler, but less powerful, interface. This can be used by tool developers who do not need the full generality of the other model. The other model is the active-message model (AM model), which provides the same active-message communication operations used by agents and front-ends as described above.

In the AM model, when a junction is started by the infrastructure, the infrastructure will call a junction_init() callback function implemented by the tool developer. In this function, the junction initializes global state, registers active message handlers, etc. The infrastructure passes a handle to the stream to this callback. When the junction is terminated, the infrastructure calls a junction_finalize() callback, to allow the junction to clean up. Communication proceeds as expected, with active-message handlers being called as messages arrive. Note that junction activity is strictly in response to incoming messages, i.e., the junction does not have a main() function or idle loop, and does nothing while waiting for messages to arrive. This is in contrast to front-ends and agents, which can do processing outside of the message handlers. (Perhaps this restriction can be relaxed, or an "idle handler" function could be allowed which is called periodically.)

In the simple model, the junction implementor implements a function (filter function) that takes as input a buffer with data (in a format specified by the implementor) and passes back an output buffer. When a message arrives at the junction, the filter function is called with the message data passed in as the input buffer parameter. When the filter function returns, the output buffer is sent in a message in the same direction on the stream as the received message. Because messages can be sent on the stream with different handler IDs, the junction implementor must also specify which handler ID to associate with this filter function. So multiple filter functions can be specified for the same junction, each with a different handler ID. Note, however, that while filter functions can declare static local variables to maintain state between invocations, there is no global state which can be shared between different filter functions. Filter functions can also be synchronizing, so that they are called only when a message is received from every child; in which case the input parameter to the function is an array of buffers rather than a single buffer.

In order to implement a more complex junction, such as for file staging as described in the use cases, the AM model would be needed. This is because different types of messages (e.g., request for file, file data) with different handler IDs are sent, and global information must be maintained between the handlers, e.g., which file was requested by which child, where the file is cached, etc.

The simple model can be implemented using the AM model. It is likely that a junction implementor wishing to use the simple model would just need to link with an additional .o file, provided by the STCI implementation, which implements the AM-model-to-simple-model glue code.

Junction Composition

With the simple model, filter functions can be composed. E.g., if a tool implementor implements an encryption filter function and a compression filter function, the implementor can create a compression junction, an encryption junction and a compress-then-encrypt junction without having to reimplement the same functionality.

Wait and Test Operations

It is often desirable to associate a request with a response, and to wait for or test that a response has been received. This can be implemented with the STCI active-message interface like this:

/* message handler for response message */
my_response_handler( source, stream, buffer, size, param_ptr, param_size ) {
    my_params_t *my_params = (my_params_t *)param_ptr;
    ... /* process response message */
    (*my_params->r)--;
}

/* in main() */
int r;
my_params_t params;

r = 1;
params.r = &r;
am_small_send( dest, stream, buffer, NULL, buf_sz, buf_dt, REQ_HANDLER_ID, 
                        1, MY_PARAMS_DT, &params );
while ( r )
    ; /* NOOP */

In this example, the client passes a pointer to a variable, r, in the request message, then waits for the variable to become zero. The receiver of this message, the server, is required to return this same value back to the client in the response (this part is not shown in the example). The response handler at the client will decrement the variable referenced by the pointer after it handles the response message, allowing the client to exit the while loop.

STCI can provide utilities to help the tool developer implement this functionality. For example:

request_t
type for request object
REQUESTP_DT
stci datatype description for a pointer to request_t
REQUEST_INIT0
static initializer for request object (e.g., request_t req = REQUEST_INIT0;). Initializes the request for zero pending operations.
REQUEST_INIT1
static initializer for request object. Initializes the request for one pending operation.
request_init( request_t *req, int num_pending )
non-static initializer
request_add( request_t *req )
adds a pending operation to req
request_complete( request_t *req )
used to indicate that an operation pending on req has completed
request_wait( request_t *req )
returns only when all operations pending on req have completed
request_test( request_t *req )
returns TRUE if all operations pending on req have completed

The example below does the same thing as the one above, except it uses the STCI request utilities.

/* message handler for response message */
my_response_handler( source, stream, buffer, size, param, param_size ) {
    ... /* process response message */
    request_complete( *(request_t **)param );
}

/* in main() */
request_t req = REQUEST_INIT1;
request_t *reqp = &req;

am_small_send( dest, stream, buffer, NULL, buf_sz, buf_dt, REQ_HANDLER_ID, 
                        1, REQUESTP_DT, &reqp );
request_wait( &req );

Request objects can be implemented to do busy waiting or can block while waiting. In fact the request utilities can be implemented so that the type of request (busy waiting or block while waiting) can be chosen by the user by specifying a different initializer (e.g., request_t req = REQUEST_BUSY_WAIT_INIT0;).

We should consider a case where the handler is called only when X messages are received with the same request value.

Datatypes

TODO

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License