// Copyright 2014 CoreOS Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
/*
Package raft provides an implementation of the raft consensus algorithm.

The primary object in raft is a Node. You either start a Node from scratch
using raft.StartNode or start a Node from some initial state using
raft.RestartNode:

	n := raft.StartNode(0x01, []int64{0x02, 0x03}, 3, 1)

Now that you are holding onto a Node you have a few responsibilities.

First, you need to push messages that you receive from other machines into the
Node with n.Step():

	func recvRaftRPC(ctx context.Context, m raftpb.Message) {
		n.Step(ctx, m)
	}

Second, you need to save log entries to storage, process committed log entries
through your application, and then send pending messages to peers by reading
the channel returned by n.Ready(). It is important that you persist any
entries that require stable storage before sending messages to other peers, to
ensure fault tolerance.

Finally, you need to service timeouts with Tick(). Raft has two important
timeouts: heartbeat and election. Internally, however, the raft package
represents time as an abstract "tick", so you are responsible for calling
Tick() on your raft.Node at a regular interval in order to service these
timeouts.

The overall state machine handling loop will look something like this:

	for {
		select {
		case <-s.Ticker:
			n.Tick()
		case rd := <-s.Node.Ready():
			saveToStable(rd.State, rd.Entries)
			process(rd.CommittedEntries)
			send(rd.Messages)
		case <-s.done:
			return
		}
	}
To propose changes to the state machine from your node, take your application
data, serialize it into a byte slice, and call:

	n.Propose(ctx, data)
To add or remove a node in a cluster, build a Config struct and call:

	n.Configure(ctx, conf)

After the configuration change is committed, apply it to the node:

	var conf raftpb.Config
	conf.Unmarshal(data)
	switch conf.Type {
	case raftpb.ConfigAddNode:
		n.AddNode(conf.ID)
	case raftpb.ConfigRemoveNode:
		n.RemoveNode(conf.ID)
	}
*/
package raft