...

Clients append to the log by making requests through the leader. The leader adds the new entry to its log, then sends a request containing that log entry to each of the other servers. Each server appends the entry to its log and also writes the data to durable secondary storage; once this is done, the server is said to have accepted the log entry. Once a majority of the cluster has accepted the new log entry, the leader can respond to the client. At this point the entry is called guaranteed because its durability is assured; the only event that could cause it to be lost is a simultaneous catastrophic failure of more than half the servers in the cluster (i.e., they permanently lose their secondary storage).
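The majority rule above can be sketched as follows (a minimal illustration, not from the ALPO spec; the function and variable names are assumptions):

```python
# Sketch: an entry becomes "guaranteed" once a majority of the cluster
# (leader included) has accepted it, i.e. persisted it to secondary storage.

def is_guaranteed(acceptances: set, cluster: set) -> bool:
    """True once a strict majority of the cluster has accepted the entry."""
    return len(acceptances & cluster) > len(cluster) // 2

cluster = {"s1", "s2", "s3", "s4", "s5"}
accepted = {"s1"}            # the leader accepts its own entry first
assert not is_guaranteed(accepted, cluster)
accepted |= {"s2", "s3"}     # two more servers persist the entry
assert is_guaranteed(accepted, cluster)   # 3 of 5: guaranteed
```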

If a passive server crashes then it will not be able to accept new data from the leader. The leader need not wait for the crashed server to restart before responding to client requests: as long as a majority of the cluster is responsive, the cluster can continue operation. When a server restarts after a crash, it enters passive mode (it does not attempt to contact the leader). If the leader does not receive an acceptance from a server when it sends a new log entry, it continues trying at regular intervals; eventually the server will restart, at which point the leader will "catch it up" with the log entries it has not yet accepted. This mechanism ensures that all servers will eventually mirror all log entries (in the absence of leader failures).
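The leader's per-server retry behavior might look like this (a hypothetical sketch; `send`, the interval, and the bounded try count are assumptions for illustration):

```python
# Sketch: the leader keeps resending an entry at regular intervals until
# the follower accepts it (e.g. after the follower restarts).
import time

def replicate_until_accepted(send, entry, retry_interval=0.001, max_tries=100):
    """Retry sending `entry` until `send` reports acceptance."""
    for _ in range(max_tries):
        if send(entry):          # follower accepted and persisted the entry
            return True
        time.sleep(retry_interval)
    return False

# A follower that is "down" for the first two attempts:
attempts = {"n": 0}
def flaky_send(entry):
    attempts["n"] += 1
    return attempts["n"] > 2

assert replicate_until_accepted(flaky_send, "entry-7") is True
assert attempts["n"] == 3    # accepted on the third try
```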

Leader failures are more interesting. At the time of a leader failure, there may be one or more log entries that have been partially accepted by the cluster (i.e., the leader has not yet responded to the requesting client). There may also be any number of log entries that have been guaranteed by the cluster, but are not yet fully replicated on all servers. For the partially accepted entries, the new leader must guarantee that these entries are either fully replicated and accepted, or completely expunged from all logs. Any entry that is expunged must never be returned to a client in a read operation: the system must behave as if the client never made its original request. For entries that have been guaranteed but not fully replicated, the new leader must make sure that these entries are eventually fully replicated.

First, let's handle the case of guaranteed but not fully replicated entries. ALPO makes sure that the new leader is chosen from among those servers whose log is complete (i.e. it includes all of the guaranteed entries). It does this by modifying the notion of rank during leader election. When candidates request votes they include in the request the term for the election, their server id, and the log id of the most recent entry they have accepted. One candidate automatically outranks another if its "last accepted id" is higher than that of the other candidate. If they both have the same "last accepted id", then rank is determined by server id. Furthermore, a server will automatically reject a request for its vote if its "last accepted id" is higher than that of the requesting candidate, even if its vote is still available; when a candidate receives this form of rejection it immediately drops out of the election and returns to passive state.
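The modified rank can be sketched with tuple comparison (an illustrative assumption: I take the higher server id to win ties, since the text only says ties are broken by server id):

```python
# Sketch: candidates are ranked first by the id of their most recently
# accepted log entry, then by server id.

def outranks(a, b):
    """a, b are (last_accepted_id, server_id) pairs; Python compares
    tuples element by element, so last_accepted_id dominates."""
    return a > b

def grant_vote(voter_last_accepted, candidate):
    """A server rejects any candidate whose log is behind its own."""
    return candidate[0] >= voter_last_accepted

assert outranks((12, 1), (11, 9))     # a more complete log beats a higher id
assert outranks((12, 5), (12, 3))     # same log: server id breaks the tie
assert not grant_vote(12, (11, 9))    # voter's log is ahead: reject the vote
```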

Since a candidate requires votes from a majority of the cluster to become leader, and since it has accepted all of the log entries that were accepted by any of the other servers that voted for it, and since any guaranteed log entry must have been accepted by a majority of the servers in the cluster, the new leader is certain to store all of the guaranteed log entries. As it communicates with the other servers in the cluster it can update any that are running behind. In particular, when the leader sends a new log entry to a passive server, the passive server will reject the request unless it already stores all of the log entries with smaller ids. When this happens, the passive server indicates to the leader the highest id that it stores, so the leader can then send it all of the missing entries.
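The catch-up handshake described above might be sketched like this (a minimal model, assuming 1-based contiguous entry ids; names are not from the spec):

```python
# Sketch: a passive server rejects an entry unless it already stores every
# entry with a smaller id, and reports the highest id it does store so the
# leader knows which entries to resend.

def try_append(log: list, entry_id: int, data):
    """Return (accepted, highest_stored_id). log[i] holds entry id i+1."""
    highest = len(log)
    if entry_id != highest + 1:      # gap in the log: reject and report
        return False, highest
    log.append(data)
    return True, entry_id

log = ["a", "b"]                          # stores ids 1 and 2
assert try_append(log, 5, "e") == (False, 2)   # leader learns to resend 3 and 4
assert try_append(log, 3, "c") == (True, 3)
```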

The second problem is the case of partially accepted (but not yet guaranteed) log entries. The algorithm described in the previous paragraph makes it likely that the new leader will store these entries, in which case they will get fully replicated by the new leader. However, if an entry has only been accepted by a small number of servers, it is possible that a new leader can be elected without storing the entry. In this case the new leader must make sure that the entry is expunged by all other servers. It does this by appending a new log entry indicating leadership change. The id for this entry will be the next one in order on the leader. When this entry arrives at a passive server, the passive server deletes any existing entries with this id or higher before it accepts the entry. This is the only situation where a passive server receives a log entry whose id it has already accepted.

Problem: passive server P stores a not-yet-guaranteed entry E when the leader goes down. The new leader creates the entry indicating leadership change (C) and starts replicating it; the entry gets replicated enough to be guaranteed, but the new leader crashes before C is stored on P. The next leader will now create a new leadership change entry, but at log position C+1; it's not clear how entry E will get overwritten with C.
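The truncate-then-append behavior of the leadership-change entry can be sketched as follows (an illustration under the same 1-based-id assumption as above, not the spec's wording):

```python
# Sketch: when a passive server receives the leadership-change entry, it
# deletes any stored entries with that id or higher before accepting it,
# expunging partially accepted entries from the old leader.

def accept_leadership_change(log: list, change_id: int, data) -> list:
    """Truncate the log at change_id (ids are 1-based), then append."""
    del log[change_id - 1:]      # drop entries with id >= change_id
    log.append(data)
    return log

# A server stores a not-yet-guaranteed entry with id 4 when the old leader
# dies; the new leader's log ends at id 3, so its change entry gets id 4:
log = ["a", "b", "c", "orphan"]
assert accept_leadership_change(log, 4, "CHANGE") == ["a", "b", "c", "CHANGE"]
```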

Leader failures also introduce the potential for zombie leaders. A zombie leader is a leader that has been replaced but does not yet know it. ALPO must make sure that zombie leaders cannot modify the log. To do this, each request issued by the leader includes the leader's term number. If a passive server receives a request whose term is lower than the server's current term, then it rejects the request. If a leader receives such a rejection then it knows it has been deposed, so it returns to passive state. Before a server gives its vote to a candidate it must increase its term number to that of the new term. This guarantees that by the time the new leader knows it has been elected, it is impossible for the previous leader to communicate with a majority of the cluster, so it cannot create guaranteed log entries. Furthermore, if a leader receives any election-related requests from candidates with a higher term number, this also indicates that the leader has been deposed, so it returns to passive state.
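The stale-term check is small enough to sketch directly (an assumption-laden model; the return shape is mine, not the spec's):

```python
# Sketch: every request carries the sender's term; a recipient with a
# higher current term rejects the request, which tells a zombie leader it
# has been deposed.

def handle_request(current_term: int, sender_term: int):
    """Return (new_current_term, accepted)."""
    if sender_term < current_term:
        return current_term, False   # stale sender: reject; it will passivate
    return max(current_term, sender_term), True

assert handle_request(7, 6) == (7, False)   # deposed leader's request refused
assert handle_request(7, 8) == (8, True)    # newer term: adopt it and proceed
```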

One final issue related to log management is log cleaning. ALPO allows each server to perform cleaning (or any other form of log truncation) on its log independently of the other servers. However, there is one restriction on log cleaning: a server must not delete a log entry until it has been fully replicated. Otherwise the server could become leader and need that entry to update a lagging passive server. To ensure this property, the leader keeps track of the highest log id that has been fully replicated and includes this value in any requests that it makes to other servers. The other servers use this information to restrict cleaning; in most cases the fully-replicated id will be at or near the head of the log, so this will not impose much of a restriction on cleaning.
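The cleaning restriction reduces to a simple filter (an illustrative sketch; names are assumptions):

```python
# Sketch: a server may only clean entries at or below the fully-replicated
# id advertised by the leader; anything above it must be kept, since this
# server could become leader and need those entries to catch up laggards.

def cleanable_ids(stored_ids, fully_replicated_id):
    """Ids this server may safely delete during log cleaning."""
    return [i for i in stored_ids if i <= fully_replicated_id]

assert cleanable_ids([1, 2, 3, 4, 5], 3) == [1, 2, 3]   # 4 and 5 must be kept
```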

...

ALPO can provide exactly-once semantics to clients, meaning that if the leader fails while processing a log append request from a client, the client library package can automatically retry the request once a new leader has been elected, and the new entry will be guaranteed to be appended to the log exactly once (regardless of whether the original request completed before the leader crashed). In order to implement exactly-once semantics, clients must provide a unique serial number in each request; this serial number, along with the client identifier, must be included in every log entry so that it is seen by every server. Using this information, each server can keep track of the most recent serial number that has been successfully completed for each client. When a client retries a request because of a leader failure, the new leader can use this information to skip the request if it has already been successfully completed.
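The duplicate-detection bookkeeping might look like this (a minimal sketch assuming serial numbers increase monotonically per client; names are mine):

```python
# Sketch: each entry carries (client_id, serial); a leader skips a request
# whose serial it has already completed for that client, so a retry after
# a leader failure appends the entry at most once.

completed = {}   # client id -> highest completed serial number

def apply_request(client_id: str, serial: int) -> bool:
    """Return True if applied, False if recognized as a duplicate."""
    if completed.get(client_id, 0) >= serial:
        return False             # already done: skip, but still reply OK
    completed[client_id] = serial
    return True

assert apply_request("c1", 1) is True
assert apply_request("c1", 1) is False   # retry after leader failure: skipped
assert apply_request("c1", 2) is True
```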

...

This section contains more detailed information on managing terms. Terms are used to distinguish votes from different election cycles, and also to help servers detect when they are out of date with respect to the rest of the cluster. In general, if a candidate or leader finds itself out of date it immediately passivates, under the assumption that someone else is leader or will become leader soon.

  • Each server stores a term number called currentTerm. This indicates the most recent term that has been seen by this server or received in a message from another server.
  • When a server starts, currentTerm is set to 0.
  • Every message from server to server contains the term of the sender, plus an indication whether the sender is a candidate or leader (passive servers never issue requests). This value (call it senderTerm) is used to update currentTerm and to detect out-of-date servers:
    • senderTerm < currentTerm: reject the message with an indication that the sender is out of date; the rejection also includes currentTerm. When the sender receives the rejection it updates its currentTerm to match the one in the response, then passivates. This makes sense for leaders because it means their term of leadership is over. It also makes sense for candidates because it means there is already a new election cycle with other active candidates; there is no need for the sender to participate.
    • senderTerm == currentTerm: process the request normally.
    • senderTerm > currentTerm: set currentTerm to senderTerm. If the recipient is currently a leader or candidate, then it passivates. Finally, it processes the request.
  • When a server switches from passive to candidate, it increments currentTerm to force a new election cycle.
  • I do not believe that currentTerm needs to be saved on disk. When a server restarts, it sets currentTerm to 0. Servers start out in passive state; most likely the leader will contact them (which will update currentTerm) before the new server times out and becomes a candidate. If all servers crash simultaneously, it should be OK for the term numbers to restart at 1. If all servers but one crash and restart, and for some reason the remaining server is disconnected from the others, it's possible that the new servers will choose a leader with term 1, without contacting the remaining server. At some point they will eventually contact it, which will cause the currentTerms to update. The existing leader will passivate, and a new election cycle will eventually occur; at this point everyone will be caught up to the term number of the server that didn't crash.
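The three senderTerm cases above can be sketched in one function (state names and the return shape are assumptions for illustration):

```python
# Sketch: a server compares the incoming term with currentTerm, possibly
# steps down to passive state, and decides whether to process the request.

def on_message(state: str, current_term: int, sender_term: int):
    """Return (state, current_term, process_request)."""
    if sender_term < current_term:
        return state, current_term, False        # reject; sender will passivate
    if sender_term > current_term and state in ("leader", "candidate"):
        state = "passive"                        # out of date: step down
    return state, max(current_term, sender_term), True

assert on_message("passive", 4, 3) == ("passive", 4, False)
assert on_message("leader", 4, 4) == ("leader", 4, True)
assert on_message("leader", 4, 5) == ("passive", 5, True)   # deposed leader
```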

...

  • currentTerm: number of the most recent term seen by this server.
  • Vote: the server id and log length of the candidate who has received this server's vote for currentTerm (if any).
  • Server id of this server.
  • Log entries that have been accepted by the server.
  • Id of the most recent log entry that has been fully replicated.
  • Time of receipt of the last request from the leader.
  • Cluster map: id and location of each server in the cluster, whether dead or alive.
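The per-server state listed above can be collected into a single record (a sketch with assumed field names; the cluster map is simplified to a set of server ids):

```python
# Sketch: the state each ALPO server maintains, per the list above.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ServerState:
    server_id: int
    current_term: int = 0                  # most recent term seen
    vote: Optional[tuple] = None           # (server_id, log_length) voted for
    log: list = field(default_factory=list)       # accepted log entries
    fully_replicated_id: int = 0           # highest id accepted by all servers
    last_leader_contact: float = 0.0       # time of last request from the leader
    cluster: set = field(default_factory=set)     # all server ids, dead or alive

s = ServerState(server_id=3, cluster={1, 2, 3})
assert s.current_term == 0 and s.vote is None
```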

...