// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
/*
Package raft provides an implementation of the raft consensus algorithm.

Usage

The primary object in raft is a Node. You either start a Node from scratch
using raft.StartNode or start a Node from some initial state using raft.RestartNode.

To start a node from scratch:

  storage := raft.NewMemoryStorage()
  c := &Config{
    ID:              0x01,
    ElectionTick:    10,
    HeartbeatTick:   1,
    Storage:         storage,
    MaxSizePerMsg:   4096,
    MaxInflightMsgs: 256,
  }
  n := raft.StartNode(c, []raft.Peer{{ID: 0x02}, {ID: 0x03}})

Now that you are holding onto a Node you have a few responsibilities:

First, you must read from the Node.Ready() channel and process the updates
it contains. These steps may be performed in parallel, except as noted in step
2.

1. Write HardState, Entries, and Snapshot to persistent storage if they are
not empty. Note that when writing an Entry with Index i, any
previously-persisted entries with Index >= i must be discarded. (A sketch of
this step follows the list.)

2. Send all Messages to the nodes named in the To field. It is important that
no messages be sent until after the latest HardState has been persisted to disk,
and all Entries written by any previous Ready batch (Messages may be sent while
entries from the same batch are being persisted). If any Message has type
MsgSnap, call Node.ReportSnapshot() after it has been sent (these messages may
be large).

3. Apply Snapshot (if any) and CommittedEntries to the state machine.
If any committed Entry has Type EntryConfChange, call Node.ApplyConfChange()
to apply it to the node. The configuration change may be cancelled at this point
by setting the NodeID field to zero before calling ApplyConfChange
(but ApplyConfChange must be called one way or the other, and the decision to cancel
must be based solely on the state machine and not external information such as
the observed health of the node).

4. Call Node.Advance() to signal readiness for the next batch of updates.
This may be done at any time after step 1, although all updates must be processed
in the order they were returned by Ready.
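
For example, step 1 might be handled with the MemoryStorage created above.
This is only a sketch (errors are ignored, and MemoryStorage keeps everything
in memory, so a real application would also write to stable storage before
sending any messages); it is the saveToStorage helper used in the loop below:

  func saveToStorage(hardState raftpb.HardState, entries []raftpb.Entry, snapshot raftpb.Snapshot) {
    // storage is the *raft.MemoryStorage given to Config above.
    if !raft.IsEmptySnap(snapshot) {
      storage.ApplySnapshot(snapshot)
    }
    if !raft.IsEmptyHardState(hardState) {
      storage.SetHardState(hardState)
    }
    storage.Append(entries)
  }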

Second, all persisted log entries must be made available via an
implementation of the Storage interface. The provided MemoryStorage
type can be used for this (if you repopulate its state upon a
restart), or you can supply your own disk-backed implementation.
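
For example, a restart might rebuild the MemoryStorage from previously
persisted state before handing it to raft.RestartNode (a sketch; snapshot,
state and entries stand for whatever your own storage layer recovered):

  storage := raft.NewMemoryStorage()

  // Recover the in-memory storage from persistent
  // snapshot, hard state and entries.
  storage.ApplySnapshot(snapshot)
  storage.SetHardState(state)
  storage.Append(entries)

  c := &Config{
    ID:              0x01,
    ElectionTick:    10,
    HeartbeatTick:   1,
    Storage:         storage,
    MaxSizePerMsg:   4096,
    MaxInflightMsgs: 256,
  }

  // Restart raft without peer information; peer information is
  // already included in the storage.
  n := raft.RestartNode(c)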

Third, when you receive a message from another node, pass it to Node.Step:

  func recvRaftRPC(ctx context.Context, m raftpb.Message) {
    n.Step(ctx, m)
  }

Finally, you need to call Node.Tick() at regular intervals (probably
via a time.Ticker). Raft has two important timeouts: heartbeat and the
election timeout. However, internally to the raft package time is
represented by an abstract "tick".
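
For example (the interval is an application choice, and s is the same
application-defined struct used in the handling loop below, so s.Ticker
would be ticker.C):

  // Each firing advances raft's logical clock by one tick;
  // ElectionTick and HeartbeatTick are measured in these ticks.
  ticker := time.NewTicker(100 * time.Millisecond)
  defer ticker.Stop()
  s.Ticker = ticker.C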

The total state machine handling loop will look something like this:

  for {
    select {
    case <-s.Ticker:
      n.Tick()
    case rd := <-s.Node.Ready():
      saveToStorage(rd.HardState, rd.Entries, rd.Snapshot)
      send(rd.Messages)
      if !raft.IsEmptySnap(rd.Snapshot) {
        processSnapshot(rd.Snapshot)
      }
      for _, entry := range rd.CommittedEntries {
        process(entry)
        if entry.Type == raftpb.EntryConfChange {
          var cc raftpb.ConfChange
          cc.Unmarshal(entry.Data)
          s.Node.ApplyConfChange(cc)
        }
      }
      s.Node.Advance()
    case <-s.done:
      return
    }
  }

To propose changes to the state machine from your node take your application
data, serialize it into a byte slice and call:

  n.Propose(ctx, data)

If the proposal is committed, data will appear in committed entries with type
raftpb.EntryNormal. There is no guarantee that a proposed command will be
committed; you may have to re-propose after a timeout.
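
For example, each attempt might be bounded by a context deadline (a sketch;
the one-second timeout is an arbitrary application choice):

  ctx, cancel := context.WithTimeout(context.Background(), time.Second)
  defer cancel()
  if err := n.Propose(ctx, data); err != nil {
    // The proposal was not accepted for processing (e.g. the context
    // expired); retry or report the error.
  }
  // A nil error only means raft accepted the proposal. Whether it committed
  // is observed by watching for data in CommittedEntries in the loop above;
  // if it does not appear within a deadline, re-propose.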

To add or remove a node in a cluster, build a ConfChange struct 'cc' and call:

  n.ProposeConfChange(ctx, cc)

After the config change is committed, some committed entry with type
raftpb.EntryConfChange will be returned. You must apply it to the node through:

  var cc raftpb.ConfChange
  cc.Unmarshal(data)
  n.ApplyConfChange(cc)

Note: An ID represents a unique node in a cluster for all time. A
given ID MUST be used only once even if the old node has been removed.
This means that for example IP addresses make poor node IDs since they
may be reused. Node IDs must be non-zero.
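
For example, to add a new member you might build and propose a ConfChange
like this (a sketch; the ID 0x04, chosen to be unique and never reused, and
the address carried in Context are placeholders, and Context is opaque to
raft itself):

  cc := raftpb.ConfChange{
    Type:    raftpb.ConfChangeAddNode,
    NodeID:  0x04,
    Context: []byte("http://10.0.0.4:2380"),
  }
  n.ProposeConfChange(ctx, cc)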

Implementation notes

This implementation is up to date with the final Raft thesis
(https://ramcloud.stanford.edu/~ongaro/thesis.pdf), although our
implementation of the membership change protocol differs somewhat from
that described in chapter 4. The key invariant that membership changes
happen one node at a time is preserved, but in our implementation the
membership change takes effect when its entry is applied, not when it
is added to the log (so the entry is committed under the old
membership instead of the new). This is equivalent in terms of safety,
since the old and new configurations are guaranteed to overlap.

To ensure that we do not attempt to commit two membership changes at
once by matching log positions (which would be unsafe since they
should have different quorum requirements), we simply disallow any
proposed membership change while any uncommitted change appears in
the leader's log.

This approach introduces a problem when you try to remove a member
from a two-member cluster: if one of the members dies before the
other one receives the commit of the confchange entry, then the member
cannot be removed any more since the cluster cannot make progress.
For this reason it is highly recommended to use three or more nodes in
every cluster.
*/
package raft