The following example demonstrates two important concepts: a server's ability to manage multiple outstanding connect indications, and the ability to write event-driven software using the Transport Interface and the STREAMS system call interface.
The server example in ``Connection-mode service'' is capable of supporting only one outstanding connect indication, but the Transport Interface supports the ability to manage multiple outstanding connect indications. One reason a server might wish to receive several simultaneous connect indications is to impose a priority scheme on each client. A server may retrieve several connect indications, and then accept them in an order based on a priority associated with each client. A second reason for handling several outstanding connect indications is that the single-threaded scheme has some limitations. Depending on the implementation of the transport provider, it is possible that while the server is processing the current connect indication, other clients will find it busy. If, however, multiple connect indications can be processed simultaneously, the server will be found busy only if more than the maximum allowed number of clients attempt to call it at the same time.
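The priority scheme itself is outside the scope of the example that follows, but a minimal sketch of one approach is shown here. It assumes a hypothetical client_priority() routine that ranks a caller by the protocol address carried in its t_call structure; the calls array and MAX_CONN_IND constant are the ones defined in the example below.

    /*
     * Sketch only: pick the highest-priority saved connect indication
     * for one endpoint (slot).  client_priority() is a hypothetical,
     * protocol-specific ranking of the caller's address.
     */
    int
    pick_call(slot)
    int slot;
    {
        int i, best = -1;

        for (i = 0; i < MAX_CONN_IND; i++) {
            if (calls[slot][i] == NULL)
                continue;
            if (best == -1 ||
                client_priority(&calls[slot][i]->addr) >
                client_priority(&calls[slot][best]->addr))
                best = i;
        }
        return best;    /* index to accept first, or -1 if none saved */
    }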
The server example is event-driven: the process polls a transport endpoint for incoming Transport Interface events, and then takes the appropriate actions for the current event. The example demonstrates the ability to poll multiple transport endpoints for incoming events.
The definitions and local management functions needed by this example are similar to those of the server example in ``Connection-mode service''.
    #include <xti.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <poll.h>
    #include <stropts.h>
    #include <signal.h>

    #define NUM_FDS       1
    #define MAX_CONN_IND  4
    #define SRV_ADDR      1          /* server's well known address */

    int conn_fd;                     /* server connection here */

    /* holds connect indications */
    struct t_call *calls[NUM_FDS][MAX_CONN_IND];

    main()
    {
        struct pollfd pollfds[NUM_FDS];
        struct t_bind *bind;
        int i;

        /*
         * Only opening and binding one transport endpoint,
         * but more could be supported
         */
        if ((pollfds[0].fd = t_open("/dev/ticots", O_RDWR, NULL)) < 0) {
            t_error("t_open failed");
            exit(1);
        }

        if ((bind = (struct t_bind *)t_alloc(pollfds[0].fd,
                T_BIND, T_ALL)) == NULL) {
            t_error("t_alloc of t_bind structure failed");
            exit(2);
        }
        bind->qlen = MAX_CONN_IND;
        bind->addr.len = sizeof(int);
        *(int *)bind->addr.buf = SRV_ADDR;

        if (t_bind(pollfds[0].fd, bind, NULL) < 0) {
            t_error("t_bind failed");
            exit(3);
        }

The file descriptor returned by t_open is stored in a pollfd structure (see poll(S)) that is used to poll the transport endpoint for incoming data. Notice that only one transport endpoint is established in this example; however, several endpoints could be supported with minor changes to the above code. The remainder of this example assumes that several transport endpoints have been established.
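Those minor changes amount to repeating the open-and-bind sequence for each entry of pollfds. A minimal sketch, assuming NUM_FDS is raised and that each endpoint is given its own well-known address (the per-endpoint address SRV_ADDR + i is an assumption of this sketch, not part of the example):

        /*
         * Sketch only: open and bind NUM_FDS endpoints instead of one.
         * SRV_ADDR + i is an assumed per-endpoint well-known address.
         */
        for (i = 0; i < NUM_FDS; i++) {
            if ((pollfds[i].fd = t_open("/dev/ticots", O_RDWR, NULL)) < 0) {
                t_error("t_open failed");
                exit(1);
            }
            if ((bind = (struct t_bind *)t_alloc(pollfds[i].fd,
                    T_BIND, T_ALL)) == NULL) {
                t_error("t_alloc of t_bind structure failed");
                exit(2);
            }
            bind->qlen = MAX_CONN_IND;
            bind->addr.len = sizeof(int);
            *(int *)bind->addr.buf = SRV_ADDR + i;
            if (t_bind(pollfds[i].fd, bind, NULL) < 0) {
                t_error("t_bind failed");
                exit(3);
            }
            t_free(bind, T_BIND);    /* fresh structure per endpoint */
        }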
An important aspect of this server is that it sets qlen to a value greater than 1 for t_bind. This specifies that the server is willing to handle multiple outstanding connect indications. Remember that the earlier examples single-threaded the connect indications and responses. The server would have to accept the current connect indication before retrieving additional connect indications. This example, however, can retrieve up to MAX_CONN_IND connect indications at one time before responding to any of them. The transport provider may negotiate the value of qlen downward if it cannot support MAX_CONN_IND outstanding connect indications.
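Because the example passes NULL as the third argument to t_bind, it never sees the value the provider actually accepted. A minimal sketch of checking the negotiated qlen, assuming a second t_bind structure is allocated to receive the return values:

        struct t_bind *bound;

        /* Sketch only: ask t_bind to return the negotiated binding */
        if ((bound = (struct t_bind *)t_alloc(pollfds[0].fd,
                T_BIND, T_ALL)) == NULL) {
            t_error("t_alloc of return t_bind structure failed");
            exit(2);
        }
        if (t_bind(pollfds[0].fd, bind, bound) < 0) {
            t_error("t_bind failed");
            exit(3);
        }
        if (bound->qlen < MAX_CONN_IND)
            fprintf(stderr, "qlen negotiated down to %u\n", bound->qlen);
        t_free(bound, T_BIND);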
Once the server has bound its address and is ready to process incoming connect requests, it does the following:
        pollfds[0].events = POLLIN;

        while (1) {
            if (poll(pollfds, NUM_FDS, -1) < 0) {
                perror("poll failed");
                exit(5);
            }
            for (i = 0; i < NUM_FDS; i++) {
                switch (pollfds[i].revents) {
                default:
                    perror("poll returned error event");
                    exit(6);
                case 0:
                    continue;
                case POLLIN:
                    do_event(i, pollfds[i].fd);
                    service_conn_ind(i, pollfds[i].fd);
                }
            }
        }
    }

The events member of the pollfd structure is set to POLLIN, which will ask the provider to notify the server of any incoming Transport Interface events. The server then enters an infinite loop, in which it polls the transport endpoint(s) for events, and then processes those events as they occur.
The poll call will block indefinitely, waiting for an incoming event. On return, each entry (corresponding to each transport endpoint) is checked for an existing event. If revents is set to 0, no event has occurred on that endpoint. In this case, the server continues to the next transport endpoint. If revents is set to POLLIN, an event does exist on the endpoint. In this case, do_event is called to process the event. If revents contains any other value, an error must have occurred on the transport endpoint, and the server will exit.
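Note that revents is a bitmask, so the switch above depends on the provider setting exactly POLLIN and no other bits. A more defensive sketch (an alternative to, not part of, the original example) tests the bits individually:

        /* Sketch only: treat revents as a bitmask */
        if (pollfds[i].revents == 0)
            continue;                /* no event on this endpoint */
        if (pollfds[i].revents & (POLLERR | POLLHUP | POLLNVAL)) {
            fprintf(stderr, "poll returned error event\n");
            exit(6);
        }
        if (pollfds[i].revents & POLLIN) {
            do_event(i, pollfds[i].fd);
            service_conn_ind(i, pollfds[i].fd);
        }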
For each iteration of the loop, if any event is found on the transport endpoint, service_conn_ind is called to process any outstanding connect indications. However, if another connect indication is pending, service_conn_ind will save the current connect indication and respond to it later. This routine will be explained shortly.
If an incoming event is discovered, the following routine is called to process it:
    do_event(slot, fd)
    {
        struct t_discon *discon;
        int i;

        switch (t_look(fd)) {
        default:
            fprintf(stderr, "t_look: unexpected event\n");
            exit(7);
        case T_ERROR:
            fprintf(stderr, "t_look returned T_ERROR event\n");
            exit(8);
        case -1:
            t_error("t_look failed");
            exit(9);
        case 0:
            /* since POLLIN returned, this should not happen */
            fprintf(stderr, "t_look returned no event\n");
            exit(10);
        case T_LISTEN:
            /*
             * find free element in calls array
             */
            for (i = 0; i < MAX_CONN_IND; i++) {
                if (calls[slot][i] == NULL)
                    break;
            }
            if ((calls[slot][i] = (struct t_call *)t_alloc(fd,
                    T_CALL, T_ALL)) == NULL) {
                t_error("t_alloc of t_call structure failed");
                exit(11);
            }
            if (t_listen(fd, calls[slot][i]) < 0) {
                t_error("t_listen failed");
                exit(12);
            }
            break;
        case T_DISCONNECT:
            if ((discon = (struct t_discon *)t_alloc(fd,
                    T_DIS, T_ALL)) == NULL) {
                t_error("t_alloc of t_discon structure failed");
                exit(17);
            }
            if (t_rcvdis(fd, discon) < 0) {
                t_error("t_rcvdis failed");
                exit(13);
            }
            /*
             * find call ind in array and delete it;
             * empty entries are skipped to avoid dereferencing NULL
             */
            for (i = 0; i < MAX_CONN_IND; i++) {
                if (calls[slot][i] != NULL &&
                    discon->sequence == calls[slot][i]->sequence) {
                    t_free(calls[slot][i], T_CALL);
                    calls[slot][i] = NULL;
                }
            }
            t_free(discon, T_DIS);
            break;
        }
    }

This routine takes a number, slot, and a file descriptor, fd, as arguments. slot is used as an index into the global array calls. This array contains an entry for each polled transport endpoint, where each entry consists of an array of t_call structures that hold incoming connect indications for that transport endpoint; the value of slot thus identifies the transport endpoint.
do_event calls t_look to determine the Transport Interface event that has occurred on the transport endpoint specified by fd. If a connect indication (T_LISTEN event) or disconnect indication (T_DISCONNECT event) has arrived, the event is processed. Otherwise, the server prints an appropriate error message and exits.
For connect indications, do_event scans the array of outstanding connect indications looking for the first free entry. A t_call structure is then allocated for that entry, and the connect indication is retrieved using t_listen. There must always be at least one free entry in the connect indication array, because the array is large enough to hold the maximum number of outstanding connect indications as negotiated by t_bind. The processing of the connect indication is deferred until later.
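Should that invariant ever fail (for example, if a provider delivered more connect indications than the negotiated qlen), i would run past the end of the array. A defensive sketch, not in the original example, would verify it before the t_alloc call:

        /* Sketch only: confirm a free entry was actually found */
        if (i == MAX_CONN_IND) {
            fprintf(stderr, "no free entry for connect indication\n");
            exit(18);
        }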
If a disconnect indication arrives, it must correspond to a previously received connect indication. This occurs if a client attempts to undo a previous connect request. In this case, do_event allocates a t_discon structure to retrieve the relevant disconnect information. This structure has the following members:
    struct t_discon {
        struct netbuf udata;
        int reason;
        int sequence;
    };

where udata identifies any user data that might have been sent with the disconnect indication, reason contains a protocol-specific disconnect reason code, and sequence identifies the outstanding connect indication that matches this disconnect indication.
Next, t_rcvdis is called to retrieve the disconnect indication. The array of connect indications for slot is then scanned for one that contains a sequence number that matches the sequence number in the disconnect indication. When the connect indication is found, it is freed and the corresponding entry is set to NULL.
As mentioned earlier, if any event is found on a transport endpoint, service_conn_ind is called to process all currently outstanding connect indications associated with that endpoint as follows:
    service_conn_ind(slot, fd)
    {
        int i;

        for (i = 0; i < MAX_CONN_IND; i++) {
            if (calls[slot][i] == NULL)
                continue;

            if ((conn_fd = t_open("/dev/ticots", O_RDWR, NULL)) < 0) {
                t_error("t_open failed");
                exit(14);
            }
            if (t_bind(conn_fd, NULL, NULL) < 0) {
                t_error("t_bind failed");
                exit(15);
            }
            if (t_accept(fd, conn_fd, calls[slot][i]) < 0) {
                if (t_errno == TLOOK) {
                    t_close(conn_fd);
                    return;
                }
                t_error("t_accept failed");
                exit(16);
            }
            t_free(calls[slot][i], T_CALL);
            calls[slot][i] = NULL;

            run_server(fd);
        }
    }

For the given slot (the transport endpoint), the array of outstanding connect indications is scanned. For each indication, the server opens a responding transport endpoint, binds an address to the endpoint, and then accepts the connection on that endpoint. If another event (connect indication or disconnect indication) arrives before the current indication is accepted, t_accept will fail and set t_errno to TLOOK.
If this error occurs, the responding transport endpoint is closed and service_conn_ind returns immediately, saving the current connect indication for later processing. Control then falls back to the server's main processing loop, where the new event is discovered by the next call to poll. In this way, multiple connect indications may be queued by the user.
Eventually, all events will be processed, and service_conn_ind will be able to accept each connect indication in turn. Once the connection has been established, the run_server routine used by the server in ``Connection-mode service'' is called to manage the data transfer.
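run_server is defined in ``Connection-mode service''. For completeness, a minimal sketch of its shape is shown here, assuming, as in that example, that it forks a child to service the new connection on the global conn_fd while the parent returns to the polling loop:

    /*
     * Sketch only: outline of run_server as used in the
     * connection-mode example.  The child services conn_fd;
     * the parent keeps listening.
     */
    run_server(listen_fd)
    int listen_fd;
    {
        switch (fork()) {
        case -1:
            perror("fork failed");
            exit(19);
        default:                /* parent */
            /* the child services conn_fd */
            if (t_close(conn_fd) < 0) {
                t_error("t_close failed for conn_fd");
                exit(20);
            }
            return;
        case 0:                 /* child */
            if (t_close(listen_fd) < 0) {
                t_error("t_close failed for listen_fd");
                exit(21);
            }
            /* ... transfer data on conn_fd with t_snd/t_rcv ... */
            exit(0);
        }
    }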