
Merge pull request #5212 from gyuho/doc_fix

v2 documentation link fix
Gyu-Ho Lee 9 years ago
parent commit c25c8573ac

+ 0 - 31
Documentation/04_to_2_snapshot_migration.md

@@ -1,31 +0,0 @@
-# Snapshot Migration
-
-You can migrate a snapshot of your data from a v0.4.9+ cluster into a new etcd 2.2 cluster using a snapshot migration. After snapshot migration, the etcd indexes of your data will change. Many etcd applications rely on these indexes to behave correctly. This operation should only be done while all etcd applications are stopped.
-
-To get started, get the newest data snapshot from the 0.4.9+ cluster:
-
-```
-curl http://cluster.example.com:4001/v2/migration/snapshot > backup.snap
-```
-
-Now, import the snapshot into your new cluster:
-
-```
-etcdctl --endpoint new_cluster.example.com import --snap backup.snap
-```
-
-If you have a large amount of data, you can specify more concurrent workers to copy data in parallel by using the `-c` flag.
-If you have hidden keys to copy, you can use the `--hidden` flag to specify them. For example, fleet uses `/_coreos.com/fleet`, so to import those keys use `--hidden /_coreos.com`.
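-
-For example, a sketch combining both flags (the `-c 10` worker count is arbitrary):
-
-```
-etcdctl --endpoint new_cluster.example.com import --snap backup.snap -c 10 --hidden /_coreos.com
-```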
-
-The data will then quickly copy into the new cluster:
-
-```
-entering dir: /
-entering dir: /foo
-entering dir: /foo/bar
-copying key: /foo/bar/1 1
-entering dir: /
-entering dir: /foo2
-entering dir: /foo2/bar2
-copying key: /foo2/bar2/2 2
-```

+ 0 - 1130
Documentation/api.md

@@ -1,1130 +0,0 @@
-# etcd API
-
-## Running a Single Machine Cluster
-
-These examples will use a single member cluster to show you the basics of the etcd REST API.
-Let's start etcd:
-
-```sh
-./bin/etcd
-```
-
-This will bring up etcd listening on the IANA assigned ports on localhost.
-The IANA assigned ports for etcd are 2379 for client communication and 2380 for server-to-server communication.
-
-## Getting the etcd version
-
-The etcd version of a specific instance can be obtained from the `/version` endpoint.
-
-```sh
-curl -L http://127.0.0.1:2379/version
-```
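-
-The response is a short plain-text string containing the member's version; on an etcd 2.0 member, for example, it looks something like:
-
-```
-etcd 2.0.12
-```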
-
-## Key Space Operations
-
-The primary API of etcd is a hierarchical key space.
-The key space consists of directories and keys which are generically referred to as "nodes".
-
-### Setting the value of a key
-
-Let's set the first key-value pair in the datastore.
-In this case the key is `/message` and the value is `Hello world`.
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/message -XPUT -d value="Hello world"
-```
-
-```json
-{
-    "action": "set",
-    "node": {
-        "createdIndex": 2,
-        "key": "/message",
-        "modifiedIndex": 2,
-        "value": "Hello world"
-    }
-}
-```
-
-The response object contains several attributes:
-
-1. `action`: the action of the request that was just made.
-The request attempted to modify `node.value` via a `PUT` HTTP request, thus the value of action is `set`.
-
-2. `node.key`: the HTTP path to which the request was made.
-We set `/message` to `Hello world`, so the key field is `/message`.
-etcd uses a file-system-like structure to represent the key-value pairs, therefore all keys start with `/`.
-
-3. `node.value`: the value of the key after resolving the request.
-In this case, a successful request was made that attempted to change the node's value to `Hello world`.
-
-4. `node.createdIndex`: an index is a unique, monotonically-incrementing integer created for each change to etcd.
-This specific index reflects the point in the etcd state machine at which a given key was created.
-You may notice that in this example the index is `2` even though it is the first request you sent to the server.
-This is because there are internal commands that also change the state behind the scenes, like adding and syncing servers.
-
-5. `node.modifiedIndex`: like `node.createdIndex`, this attribute is also an etcd index.
-Actions that cause the value to change include `set`, `delete`, `update`, `create`, `compareAndSwap` and `compareAndDelete`.
-Since the `get` and `watch` commands do not change state in the store, they do not change the value of `node.modifiedIndex`.
-
-
-### Response Headers
-
-etcd includes a few HTTP headers in responses that provide global information about the etcd cluster that serviced a request:
-
-```
-X-Etcd-Index: 35
-X-Raft-Index: 5398
-X-Raft-Term: 1
-```
-
-* `X-Etcd-Index` is the current etcd index as explained above. When the request is a watch on the key space, `X-Etcd-Index` is the current etcd index when the watch starts, which means that the watched event may happen after `X-Etcd-Index`.
-* `X-Raft-Index` is similar to the etcd index but is for the underlying raft protocol.
-* `X-Raft-Term` is an integer that will increase whenever an etcd leader election happens in the cluster. If this number is increasing rapidly, you may need to tune the election timeout. See the [tuning][tuning] section for details.
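-
-These headers can be inspected by asking curl to print the response headers, for example:
-
-```sh
-curl -v http://127.0.0.1:2379/v2/keys/message
-```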
-
-### Get the value of a key
-
-We can get the value that we just set in `/message` by issuing a `GET` request:
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/message
-```
-
-```json
-{
-    "action": "get",
-    "node": {
-        "createdIndex": 2,
-        "key": "/message",
-        "modifiedIndex": 2,
-        "value": "Hello world"
-    }
-}
-```
-
-
-### Changing the value of a key
-
-You can change the value of `/message` from `Hello world` to `Hello etcd` with another `PUT` request to the key:
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/message -XPUT -d value="Hello etcd"
-```
-
-```json
-{
-    "action": "set",
-    "node": {
-        "createdIndex": 3,
-        "key": "/message",
-        "modifiedIndex": 3,
-        "value": "Hello etcd"
-    },
-    "prevNode": {
-    	"createdIndex": 2,
-    	"key": "/message",
-    	"value": "Hello world",
-    	"modifiedIndex": 2
-    }
-}
-```
-
-Here we introduce a new field: `prevNode`. The `prevNode` field represents what the state of a given node was before resolving the request at hand. The `prevNode` field follows the same format as the `node`, and is omitted in the event that there was no previous state for a given node.
-
-### Deleting a key
-
-You can remove the `/message` key with a `DELETE` request:
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/message -XDELETE
-```
-
-```json
-{
-    "action": "delete",
-    "node": {
-        "createdIndex": 3,
-        "key": "/message",
-        "modifiedIndex": 4
-    },
-    "prevNode": {
-    	"key": "/message",
-    	"value": "Hello etcd",
-    	"modifiedIndex": 3,
-    	"createdIndex": 3
-    }
-}
-```
-
-
-### Using key TTL
-
-Keys in etcd can be set to expire after a specified number of seconds.
-You can do this by setting a TTL (time to live) on the key when sending a `PUT` request:
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/foo -XPUT -d value=bar -d ttl=5
-```
-
-```json
-{
-    "action": "set",
-    "node": {
-        "createdIndex": 5,
-        "expiration": "2013-12-04T12:01:21.874888581-08:00",
-        "key": "/foo",
-        "modifiedIndex": 5,
-        "ttl": 5,
-        "value": "bar"
-    }
-}
-```
-
-Note the two new fields in the response:
-
-1. The `expiration` is the time at which this key will expire and be deleted.
-
-2. The `ttl` is the specified time to live for the key, in seconds.
-
-_NOTE_: Keys can only be expired by a cluster leader, so if a member gets disconnected from the cluster, its keys will not expire until it rejoins.
-
-Now you can try to get the key by sending a `GET` request:
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/foo
-```
-
-If the TTL has expired, the key will have been deleted, and a 100 "Key not found" error will be returned.
-
-```json
-{
-    "cause": "/foo",
-    "errorCode": 100,
-    "index": 6,
-    "message": "Key not found"
-}
-```
-
-The TTL can be unset to avoid expiration through an update operation:
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/foo -XPUT -d value=bar -d ttl= -d prevExist=true
-```
-
-```json
-{
-    "action": "update",
-    "node": {
-        "createdIndex": 5,
-        "key": "/foo",
-        "modifiedIndex": 6,
-        "value": "bar"
-    },
-    "prevNode": {
-        "createdIndex": 5,
-        "expiration": "2013-12-04T12:01:21.874888581-08:00",
-        "key": "/foo",
-        "modifiedIndex": 5,
-        "ttl": 3,
-        "value": "bar"
-    }
-}
-```
-
-### Refreshing key TTL
-
-Keys in etcd can be refreshed without notifying watchers.
-This can be achieved by setting `refresh` to true when updating a TTL.
-
-Note that you cannot update the value of a key when refreshing it.
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/foo -XPUT -d value=bar -d ttl=5
-curl http://127.0.0.1:2379/v2/keys/foo -XPUT -d ttl=5 -d refresh=true -d prevExist=true
-```
-
-```json
-{
-    "action": "set",
-    "node": {
-        "createdIndex": 5,
-        "expiration": "2013-12-04T12:01:21.874888581-08:00",
-        "key": "/foo",
-        "modifiedIndex": 5,
-        "ttl": 5,
-        "value": "bar"
-    }
-}
-{
-   "action":"update",
-   "node":{
-       "key":"/foo",
-       "value":"bar",
-       "expiration": "2013-12-04T12:01:26.874888581-08:00",
-       "ttl":5,
-       "modifiedIndex":6,
-       "createdIndex":5
-    },
-   "prevNode":{
-       "key":"/foo",
-       "value":"bar",
-       "expiration":"2013-12-04T12:01:21.874888581-08:00",
-       "ttl":3,
-       "modifiedIndex":5,
-       "createdIndex":5
-     }
-}
-```
-
-### Waiting for a change
-
-We can watch for a change on a key and receive a notification by using long polling.
-This also works for child keys by passing `recursive=true` in curl.
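-
-For example, this recursive watch will fire when any key under `/foo` changes:
-
-```sh
-curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&recursive=true'
-```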
-
-In one terminal, we send a `GET` with `wait=true`:
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/foo?wait=true
-```
-
-Now we are waiting for any changes at path `/foo`.
-
-In another terminal, we set a key `/foo` with value `bar`:
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/foo -XPUT -d value=bar
-```
-
-The first terminal should get the notification and return with the same response as the set request:
-
-```json
-{
-    "action": "set",
-    "node": {
-        "createdIndex": 7,
-        "key": "/foo",
-        "modifiedIndex": 7,
-        "value": "bar"
-    },
-    "prevNode": {
-        "createdIndex": 6,
-        "key": "/foo",
-        "modifiedIndex": 6,
-        "value": "bar"
-    }
-}
-```
-
-However, the watch command can do more than this.
-Using the index, we can watch for commands that have happened in the past.
-This is useful for ensuring you don't miss events between watch commands. 
-Typically, we watch again from the `modifiedIndex` + 1 of the node we got.
-
-Let's try to watch for the set command of index 7 again:
-
-```sh
-curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=7'
-```
-
-The watch command returns immediately with the same response as previously.
-
-If we were to restart the watch from index 8 with:
-
-```sh
-curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=8'
-```
-
-Then even if etcd is on index 9 or 800, the first event to occur to the `/foo`
-key between 8 and the current index will be returned.
-
-**Note**: etcd only keeps the responses of the most recent 1000 events across all etcd keys. 
-It is recommended to send the response to another thread to process immediately
-instead of blocking the watch while processing the result. 
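-
-A minimal sketch of such a watch loop in shell; it assumes `jq` is available, and `process-event` stands in for a hypothetical handler run in the background so the next watch can be re-established immediately:
-
-```sh
-INDEX=8
-while true; do
-    # Block until the next event at or after INDEX occurs.
-    RESP=$(curl -s "http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=${INDEX}")
-    # Hand the event to a (hypothetical) handler in the background,
-    # so the next watch starts without delay.
-    echo "$RESP" | process-event &
-    # Resume from modifiedIndex + 1 so no event is missed.
-    INDEX=$(($(echo "$RESP" | jq '.node.modifiedIndex') + 1))
-done
-```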
-
-#### Watch from cleared event index
-
-If we miss all of the 1000 events, we need to recover the current state of the
-watched key space through a get and then start to watch from the
-`X-Etcd-Index` + 1.
-
-For example, we set `/other="bar"` 2000 times and then try to wait from index 8.
-
-```sh
-curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=8'
-```
-
-We get an "index is outdated" response, since we missed the 1000 events kept in etcd.
-
-```
-{"errorCode":401,"message":"The event in requested index is outdated and cleared","cause":"the requested history has been cleared [1008/8]","index":2007}
-```
-
-To start watching again, we first need to fetch the current state of key `/foo`:
-
-```sh
-curl 'http://127.0.0.1:2379/v2/keys/foo' -vv
-```
-
-``` 
-< HTTP/1.1 200 OK
-< Content-Type: application/json
-< X-Etcd-Cluster-Id: 7e27652122e8b2ae
-< X-Etcd-Index: 2007
-< X-Raft-Index: 2615
-< X-Raft-Term: 2
-< Date: Mon, 05 Jan 2015 18:54:43 GMT
-< Transfer-Encoding: chunked
-< 
-{"action":"get","node":{"key":"/foo","value":"bar","modifiedIndex":7,"createdIndex":7}}
-```
-
-Unlike with the watches above, we use the `X-Etcd-Index` + 1 of the response as the `waitIndex`
-instead of the node's `modifiedIndex` + 1, for two reasons:
-
-1. The `X-Etcd-Index` is always greater than or equal to the `modifiedIndex` when
-   getting a key because `X-Etcd-Index` is the current etcd index, and the `modifiedIndex`
-   is the index of an event already stored in etcd.
-2. None of the events represented by indexes between `modifiedIndex` and
-   `X-Etcd-Index` will be related to the key being fetched.
-
-Using the `modifiedIndex` + 1 is functionally equivalent for subsequent
-watches, but since it is smaller than the `X-Etcd-Index` + 1, we may receive a
-`401 EventIndexCleared` error immediately.
-
-So the first watch after the get should be:
-
-```sh
-curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=2008'
-```
-
-#### Connection being closed prematurely
-
-The server may close a long polling connection before emitting any events.
-This can happen due to a timeout or the server being shut down.
-Since the HTTP header is sent immediately upon accepting the connection, the response will be seen as empty: `200 OK` and empty body.
-Clients should be prepared to deal with this scenario and retry the watch.
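-
-A minimal retry sketch along those lines, treating an empty body as a prematurely closed connection:
-
-```sh
-while true; do
-    RESP=$(curl -s 'http://127.0.0.1:2379/v2/keys/foo?wait=true')
-    # An empty 200 response means the long poll was closed; simply watch again.
-    [ -z "$RESP" ] && continue
-    echo "$RESP"
-done
-```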
-
-### Atomically Creating In-Order Keys
-
-Using `POST` on a directory, you can create keys with key names that are created in-order.
-This can be used in a variety of useful patterns, like implementing queues of keys which need to be processed in strict order.
-An example use case would be ensuring clients get fair access to a mutex.
-
-Creating an in-order key is easy:
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/queue -XPOST -d value=Job1
-```
-
-```json
-{
-    "action": "create",
-    "node": {
-        "createdIndex": 6,
-        "key": "/queue/00000000000000000006",
-        "modifiedIndex": 6,
-        "value": "Job1"
-    }
-}
-```
-
-If you create another entry some time later, it is guaranteed to have a key name that is greater than the previous key.
-Also note the key names use the global etcd index, so the next key can be more than `previous + 1`.
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/queue -XPOST -d value=Job2
-```
-
-```json
-{
-    "action": "create",
-    "node": {
-        "createdIndex": 29,
-        "key": "/queue/00000000000000000029",
-        "modifiedIndex": 29,
-        "value": "Job2"
-    }
-}
-```
-
-To enumerate the in-order keys as a sorted list, use the "sorted" parameter.
-
-```sh
-curl -s 'http://127.0.0.1:2379/v2/keys/queue?recursive=true&sorted=true'
-```
-
-```json
-{
-    "action": "get",
-    "node": {
-        "createdIndex": 2,
-        "dir": true,
-        "key": "/queue",
-        "modifiedIndex": 2,
-        "nodes": [
-            {
-                "createdIndex": 2,
-                "key": "/queue/00000000000000000002",
-                "modifiedIndex": 2,
-                "value": "Job1"
-            },
-            {
-                "createdIndex": 3,
-                "key": "/queue/00000000000000000003",
-                "modifiedIndex": 3,
-                "value": "Job2"
-            }
-        ]
-    }
-}
-```
-
-
-### Using a directory TTL
-
-Like keys, directories in etcd can be set to expire after a specified number of seconds.
-You can do this by setting a TTL (time to live) on a directory when it is created with a `PUT`:
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/dir -XPUT -d ttl=30 -d dir=true
-```
-
-```json
-{
-    "action": "set",
-    "node": {
-        "createdIndex": 17,
-        "dir": true,
-        "expiration": "2013-12-11T10:37:33.689275857-08:00",
-        "key": "/dir",
-        "modifiedIndex": 17,
-        "ttl": 30
-    }
-}
-```
-
-The directory's TTL can be refreshed by making an update.
-You can do this by making a PUT with `prevExist=true` and a new TTL.
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/dir -XPUT -d ttl=30 -d dir=true -d prevExist=true
-```
-
-Keys that are under this directory work as usual, but when the directory expires, a watcher on a key under the directory will get an expire event:
-
-```sh
-curl 'http://127.0.0.1:2379/v2/keys/dir?wait=true'
-```
-
-```json
-{
-    "action": "expire",
-    "node": {
-        "createdIndex": 8,
-        "key": "/dir",
-        "modifiedIndex": 15
-    },
-    "prevNode": {
-        "createdIndex": 8,
-        "key": "/dir",
-        "dir":true,
-        "modifiedIndex": 17,
-        "expiration": "2013-12-11T10:39:35.689275857-08:00"
-    }
-}
-```
-
-
-### Atomic Compare-and-Swap
-
-etcd can be used as a centralized coordination service in a cluster, and `CompareAndSwap` (CAS) is the most basic operation used to build a distributed lock service.
-
-This command will set the value of a key only if the client-provided conditions are equal to the current conditions.
-
-*Note that `CompareAndSwap` does not work with [directories][directories]. If an attempt is made to `CompareAndSwap` a directory, a 102 "Not a file" error will be returned.*
-
-The current comparable conditions are:
-
-1. `prevValue` - checks the previous value of the key.
-
-2. `prevIndex` - checks the previous modifiedIndex of the key.
-
-3. `prevExist` - checks existence of the key: if `prevExist` is true, it is an `update` request; if `prevExist` is `false`, it is a `create` request.
-
-Here is a simple example.
-Let's create a key-value pair first: `foo=one`.
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/foo -XPUT -d value=one
-```
-
-Now let's try some invalid `CompareAndSwap` commands.
-
-Trying to set this existing key with `prevExist=false` fails as expected:
-```sh
-curl http://127.0.0.1:2379/v2/keys/foo?prevExist=false -XPUT -d value=three
-```
-
-The error code explains the problem:
-
-```json
-{
-    "cause": "/foo",
-    "errorCode": 105,
-    "index": 39776,
-    "message": "Key already exists"
-}
-```
-
-Now let's provide a `prevValue` parameter:
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/foo?prevValue=two -XPUT -d value=three
-```
-
-This will try to compare the previous value of the key and the previous value we provided. If they are equal, the value of the key will change to three.
-
-```json
-{
-    "cause": "[two != one]",
-    "errorCode": 101,
-    "index": 8,
-    "message": "Compare failed"
-}
-```
-
-which means `CompareAndSwap` failed. `cause` explains why the test failed.
-Note: the condition `prevIndex=0` always passes.
-
-Let's try a valid condition:
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/foo?prevValue=one -XPUT -d value=two
-```
-
-The response should be:
-
-```json
-{
-    "action": "compareAndSwap",
-    "node": {
-        "createdIndex": 8,
-        "key": "/foo",
-        "modifiedIndex": 9,
-        "value": "two"
-    },
-    "prevNode": {
-    	"createdIndex": 8,
-    	"key": "/foo",
-    	"modifiedIndex": 8,
-    	"value": "one"
-    }
-}
-```
-
-We successfully changed the value from "one" to "two" since we gave the correct previous value.
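-
-A `prevIndex` comparison works the same way. Assuming the key's `modifiedIndex` is now 9 after the swap above, a request like this would succeed:
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/foo?prevIndex=9 -XPUT -d value=three
-```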
-
-### Atomic Compare-and-Delete
-
-This command will delete a key only if the client-provided conditions are equal to the current conditions.
-
-*Note that `CompareAndDelete` does not work with [directories]. If an attempt is made to `CompareAndDelete` a directory, a 102 "Not a file" error will be returned.*
-
-The current comparable conditions are:
-
-1. `prevValue` - checks the previous value of the key.
-
-2. `prevIndex` - checks the previous modifiedIndex of the key.
-
-Here is a simple example. Let's first create a key: `foo=one`.
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/foo -XPUT -d value=one
-```
-
-Now let's try some `CompareAndDelete` commands.
-
-Trying to delete the key with `prevValue=two` fails as expected:
-```sh
-curl http://127.0.0.1:2379/v2/keys/foo?prevValue=two -XDELETE
-```
-
-The error code explains the problem:
-
-```json
-{
-	"errorCode": 101,
-	"message": "Compare failed",
-	"cause": "[two != one]",
-	"index": 8
-}
-```
-
-As does a `CompareAndDelete` with a mismatched `prevIndex`:
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/foo?prevIndex=1 -XDELETE
-```
-
-```json
-{
-	"errorCode": 101,
-	"message": "Compare failed",
-	"cause": "[1 != 8]",
-	"index": 8
-}
-```
-
-And now a valid `prevValue` condition:
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/foo?prevValue=one -XDELETE
-```
-
-The successful response will look something like:
-
-```json
-{
-	"action": "compareAndDelete",
-	"node": {
-		"key": "/foo",
-		"modifiedIndex": 9,
-		"createdIndex": 8
-	},
-	"prevNode": {
-		"key": "/foo",
-		"value": "one",
-		"modifiedIndex": 8,
-		"createdIndex": 8
-	}
-}
-```
-
-### Creating Directories
-
-In most cases, directories for a key are automatically created.
-But there are cases where you will want to create a directory or remove one.
-
-Creating a directory is just like a key except you cannot provide a value and must add the `dir=true` parameter.
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/dir -XPUT -d dir=true
-```
-```json
-{
-    "action": "set",
-    "node": {
-        "createdIndex": 30,
-        "dir": true,
-        "key": "/dir",
-        "modifiedIndex": 30
-    }
-}
-```
-
-
-### Listing a directory
-
-In etcd we can store two types of things: keys and directories.
-Keys store a single string value.
-Directories store a set of keys and/or other directories.
-
-In this example, let's first create some keys:
-
-We already have `/foo=two` so now we'll create another one called `/foo_dir/foo` with the value of `bar`:
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/foo_dir/foo -XPUT -d value=bar
-```
-
-```json
-{
-    "action": "set",
-    "node": {
-        "createdIndex": 2,
-        "key": "/foo_dir/foo",
-        "modifiedIndex": 2,
-        "value": "bar"
-    }
-}
-```
-
-Now we can list the keys under root `/`:
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/
-```
-
-We should see the response as an array of items:
-
-```json
-{
-    "action": "get",
-    "node": {
-        "key": "/",
-        "dir": true,
-        "nodes": [
-            {
-                "key": "/foo_dir",
-                "dir": true,
-                "modifiedIndex": 2,
-                "createdIndex": 2
-            },
-            {
-                "key": "/foo",
-                "value": "two",
-                "modifiedIndex": 1,
-                "createdIndex": 1
-            }
-        ]
-    }
-}
-```
-
-Here we can see `/foo` is a key-value pair under `/` and `/foo_dir` is a directory.
-We can also recursively get all the contents under a directory by adding `recursive=true`.
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/?recursive=true
-```
-
-```json
-{
-    "action": "get",
-    "node": {
-        "key": "/",
-        "dir": true,
-        "nodes": [
-            {
-                "key": "/foo_dir",
-                "dir": true,
-                "nodes": [
-                    {
-                        "key": "/foo_dir/foo",
-                        "value": "bar",
-                        "modifiedIndex": 2,
-                        "createdIndex": 2
-                    }
-                ],
-                "modifiedIndex": 2,
-                "createdIndex": 2
-            },
-            {
-                "key": "/foo",
-                "value": "two",
-                "modifiedIndex": 1,
-                "createdIndex": 1
-            }
-        ]
-    }
-}
-```
-
-
-### Deleting a Directory
-
-Now let's try to delete the directory `/foo_dir`.
-
-You can remove an empty directory using the `DELETE` verb and the `dir=true` parameter.
-
-```sh
-curl 'http://127.0.0.1:2379/v2/keys/foo_dir?dir=true' -XDELETE
-```
-```json
-{
-    "action": "delete",
-    "node": {
-        "createdIndex": 30,
-        "dir": true,
-        "key": "/foo_dir",
-        "modifiedIndex": 31
-    },
-    "prevNode": {
-    	"createdIndex": 30,
-    	"key": "/foo_dir",
-    	"dir": true,
-    	"modifiedIndex": 30
-    }
-}
-```
-
-To delete a directory that holds keys, you must add `recursive=true`.
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/dir?recursive=true -XDELETE
-```
-
-```json
-{
-    "action": "delete",
-    "node": {
-        "createdIndex": 10,
-        "dir": true,
-        "key": "/dir",
-        "modifiedIndex": 11
-    },
-    "prevNode": {
-    	"createdIndex": 10,
-    	"dir": true,
-    	"key": "/dir",
-    	"modifiedIndex": 10
-    }
-}
-```
-
-
-### Creating a hidden node
-
-We can create a hidden key-value pair or directory by adding a `_` prefix.
-The hidden item will not be listed when sending a `GET` request for a directory.
-
-First we'll add a hidden key named `/_message`:
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/_message -XPUT -d value="Hello hidden world"
-```
-
-```json
-{
-    "action": "set",
-    "node": {
-        "createdIndex": 3,
-        "key": "/_message",
-        "modifiedIndex": 3,
-        "value": "Hello hidden world"
-    }
-}
-```
-
-Next we'll add a regular key named `/message`:
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/message -XPUT -d value="Hello world"
-```
-
-```json
-{
-    "action": "set",
-    "node": {
-        "createdIndex": 4,
-        "key": "/message",
-        "modifiedIndex": 4,
-        "value": "Hello world"
-    }
-}
-```
-
-Now let's try to get a listing of keys under the root directory, `/`:
-
-```sh
-curl http://127.0.0.1:2379/v2/keys/
-```
-
-```json
-{
-    "action": "get",
-    "node": {
-        "dir": true,
-        "key": "/",
-        "nodes": [
-            {
-                "createdIndex": 2,
-                "dir": true,
-                "key": "/foo_dir",
-                "modifiedIndex": 2
-            },
-            {
-                "createdIndex": 4,
-                "key": "/message",
-                "modifiedIndex": 4,
-                "value": "Hello world"
-            }
-        ]
-    }
-}
-```
-
-Here we see the `/message` key but our hidden `/_message` key is not returned.
-
-### Setting a key from a file
-
-You can also use etcd to store small configuration files, JSON documents, XML documents, etc. directly.
-For example, you can use curl to upload a simple text file and encode it:
-
-```
-echo "Hello\nWorld" > afile.txt
-curl http://127.0.0.1:2379/v2/keys/afile -XPUT --data-urlencode value@afile.txt
-```
-
-```json
-{
-    "action": "get",
-    "node": {
-        "createdIndex": 2,
-        "key": "/afile",
-        "modifiedIndex": 2,
-        "value": "Hello\nWorld\n"
-    }
-}
-```
-
-### Read Linearization
-
-If you want a read that is fully linearized, you can use a `quorum=true` GET.
-The read will take a very similar path to a write and will have a similar
-speed. If you are unsure whether you need this feature, feel free to email etcd-dev
-for advice.
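-
-For example, a linearized read of the `/message` key:
-
-```sh
-curl 'http://127.0.0.1:2379/v2/keys/message?quorum=true'
-```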
-
-## Statistics
-
-An etcd cluster keeps track of a number of statistics including latency, bandwidth and uptime.
-These are exposed via the statistics endpoint to understand the internal health of a cluster.
-
-### Leader Statistics
-
-The leader has a view of the entire cluster and keeps track of two interesting statistics: latency to each peer in the cluster, and the number of failed and successful Raft RPC requests.
-You can grab these statistics from the `/v2/stats/leader` endpoint:
-
-```sh
-curl http://127.0.0.1:2379/v2/stats/leader
-```
-
-```json
-{
-    "followers": {
-        "6e3bd23ae5f1eae0": {
-            "counts": {
-                "fail": 0,
-                "success": 745
-            },
-            "latency": {
-                "average": 0.017039507382550306,
-                "current": 0.000138,
-                "maximum": 1.007649,
-                "minimum": 0,
-                "standardDeviation": 0.05289178277920594
-            }
-        },
-        "a8266ecf031671f3": {
-            "counts": {
-                "fail": 0,
-                "success": 735
-            },
-            "latency": {
-                "average": 0.012124141496598642,
-                "current": 0.000559,
-                "maximum": 0.791547,
-                "minimum": 0,
-                "standardDeviation": 0.04187900156583733
-            }
-        }
-    },
-    "leader": "924e2e83e93f2560"
-}
-```
-
-
-### Self Statistics
-
-Each node keeps a number of internal statistics:
-
-- `id`: the unique identifier for the member
-- `leaderInfo.leader`: id of the current leader member
-- `leaderInfo.uptime`: amount of time the leader has been leader
-- `name`: this member's name
-- `recvAppendRequestCnt`: number of append requests this node has processed
-- `recvBandwidthRate`: number of bytes per second this node is receiving (follower only)
-- `recvPkgRate`: number of requests per second this node is receiving (follower only)
-- `sendAppendRequestCnt`: number of requests that this node has sent
-- `sendBandwidthRate`: number of bytes per second this node is sending (leader only). This value is undefined on single member clusters.
-- `sendPkgRate`: number of requests per second this node is sending (leader only). This value is undefined on single member clusters.
-- `state`: either leader or follower
-- `startTime`: the time when this node was started
-
-This is an example response from a follower member:
-
-```sh
-curl http://127.0.0.1:2379/v2/stats/self
-```
-
-```json
-{
-    "id": "eca0338f4ea31566",
-    "leaderInfo": {
-        "leader": "8a69d5f6b7814500",
-        "startTime": "2014-10-24T13:15:51.186620747-07:00",
-        "uptime": "10m59.322358947s"
-    },
-    "name": "node3",
-    "recvAppendRequestCnt": 5944,
-    "recvBandwidthRate": 570.6254930219969,
-    "recvPkgRate": 9.00892789741075,
-    "sendAppendRequestCnt": 0,
-    "startTime": "2014-10-24T13:15:50.072007085-07:00",
-    "state": "StateFollower"
-}
-```
-
-And this is an example response from a leader member:
-
-```sh
-curl http://127.0.0.1:2379/v2/stats/self
-```
-
-```json
-{
-    "id": "924e2e83e93f2560",
-    "leaderInfo": {
-        "leader": "924e2e83e93f2560",
-        "startTime": "2015-02-09T11:38:30.177534688-08:00",
-        "uptime": "9m33.891343412s"
-    },
-    "name": "infra3",
-    "recvAppendRequestCnt": 0,
-    "sendAppendRequestCnt": 6535,
-    "sendBandwidthRate": 824.1758351191694,
-    "sendPkgRate": 11.111234716807138,
-    "startTime": "2015-02-09T11:38:28.972034204-08:00",
-    "state": "StateLeader"
-}
-```
-
-
-### Store Statistics
-
-The store statistics include information about the operations that this node has handled.
-Note that the v2 store statistics are kept in-memory. When a member stops, its store statistics will be reset on restart.
-
-Operations that modify the store's state like create, delete, set and update are seen by the entire cluster and the number will increase on all nodes.
-Operations like get and watch are node local and will only be seen on this node.
-
-```sh
-curl http://127.0.0.1:2379/v2/stats/store
-```
-
-```json
-{
-    "compareAndSwapFail": 0,
-    "compareAndSwapSuccess": 0,
-    "createFail": 0,
-    "createSuccess": 2,
-    "deleteFail": 0,
-    "deleteSuccess": 0,
-    "expireCount": 0,
-    "getsFail": 4,
-    "getsSuccess": 75,
-    "setsFail": 2,
-    "setsSuccess": 4,
-    "updateFail": 0,
-    "updateSuccess": 0,
-    "watchers": 0
-}
-```
-
-## Cluster Config
-
-See the [members API][members-api] for details on cluster management.
-
-[directories]: #listing-a-directory
-[members-api]: members_api.md
-[tuning]: tuning.md

+ 0 - 180
Documentation/authentication.md

@@ -1,180 +0,0 @@
-# Authentication Guide
-
-## Overview
-
-Authentication -- having users and roles in etcd -- was added in etcd 2.1. This guide will help you set up basic authentication in etcd.
-
-etcd before 2.1 was a completely open system; anyone with access to the API could change keys. In order to preserve backward compatibility and upgradability, this feature is off by default.
-
-For a full discussion of the RESTful API, see [the authentication API documentation][auth-api].
-
-## Special Users and Roles
-
-There is one special user, `root`, and there are two special roles, `root` and `guest`.
-
-### User `root`
-
-User `root` must be created before security can be activated. It has the `root` role and allows for the changing of anything inside etcd. The idea behind the `root` user is for recovery purposes -- a password is generated and stored somewhere -- and the root role is granted to the administrator accounts on the system. In the future, for troubleshooting and recovery, we will need to assume some access to the system, and future documentation will assume this root user (though anyone with the role will suffice). 
-
-### Role `root`
-
-Role `root` cannot be modified, but it may be granted to any user. Having access via the root role not only allows global read-write access (as was the case before 2.1) but allows modification of the authentication policy and all administrative things, like modifying the cluster membership.
-
-### Role `guest`
-
-The `guest` role defines the permissions granted to any request that does not provide authentication. This will be created on security activation (if it doesn't already exist) to have full access to all keys, as was true in etcd 2.0. It may be modified at any time, and cannot be removed.
-
-## Working with users
-
-The `user` subcommand for `etcdctl` handles all things having to do with user accounts.
-
-A listing of users can be found with
-
-```
-$ etcdctl user list
-```
-
-Creating a user is as easy as
-
-```
-$ etcdctl user add myusername
-```
-
-And there will be a prompt for a new password.
-
-Roles can be granted and revoked for a user with
-
-```
-$ etcdctl user grant myusername --roles foo,bar,baz
-$ etcdctl user revoke myusername --roles bar,baz
-```
-
-We can look at this user with
-
-```
-$ etcdctl user get myusername
-```
-
-And the password for a user can be changed with
-
-```
-$ etcdctl user passwd myusername
-```
-
-This will prompt again for a new password.
-
-To delete an account, there's always
-```
-$ etcdctl user remove myusername
-```
-
-
-## Working with roles
-
-The `role` subcommand for `etcdctl` handles all things having to do with access controls for particular roles, which are then granted to individual users.
-
-A listing of roles can be found with
-
-```
-$ etcdctl role list
-```
-
-A new role can be created with
-
-```
-$ etcdctl role add myrolename
-```
-
-A role has no password; we are merely defining a new set of access rights.
-
-Roles are granted access to various parts of the keyspace, a single path at a time.
-
-Reading a path is simple; if the path ends in `*`, that key **and all keys prefixed with it** are granted to holders of this role. If it does not end in `*`, only that key and that key alone is granted.
-
-Access can be granted as either read, write, or both, as in the following examples:
-
-```
-# Give read access to keys under the /foo directory
-$ etcdctl role grant myrolename --path '/foo/*' -read
-
-# Give write-only access to the key at /foo/bar
-$ etcdctl role grant myrolename --path '/foo/bar' -write
-
-# Give full access to keys under /pub
-$ etcdctl role grant myrolename --path '/pub/*' -readwrite
-```
-
-Beware that
-
-```
-# Give full access to keys under /pub??
-$ etcdctl role grant myrolename --path '/pub*' -readwrite
-```
-
-without the slash may include keys under `/publishing`, for example. To cover both, grant `/pub` and `/pub/*`.
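-
-For example, to grant full access both to `/pub` itself and to everything under it:
-
-```
-$ etcdctl role grant myrolename --path '/pub' -readwrite
-$ etcdctl role grant myrolename --path '/pub/*' -readwrite
-```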
-
-To see what's granted, we can look at the role at any time:
-
-```
-$ etcdctl role get myrolename
-```
-
-Revocation of permissions is done the same logical way:
-
-```
-$ etcdctl role revoke myrolename --path '/foo/bar' -write
-```
-
-As is removing a role entirely
-
-```
-$ etcdctl role remove myrolename
-```
-
-## Enabling authentication
-
-The minimal steps to enabling auth are as follows. The administrator can set up users and roles before or after enabling authentication, as a matter of preference. 
-
-Make sure the root user is created:
-
-```
-$ etcdctl user add root 
-New password:
-```
-
-And enable authentication
-
-```
-$ etcdctl auth enable
-```
-
-After this, etcd is running with authentication enabled. To disable it for any reason, use the reciprocal command:
-
-```
-$ etcdctl -u root:rootpw auth disable
-```
-
-It would also be good to check what guests (unauthenticated users) are allowed to do:
-```
-$ etcdctl -u root:rootpw role get guest
-```
-
-And modify this role appropriately, depending on your policies.
-
-## Using `etcdctl` to authenticate
-
-`etcdctl` supports a similar flag to `curl` for authentication.
-
-```
-$ etcdctl -u user:password get foo
-```
-
-or if you prefer to be prompted:
-
-```
-$ etcdctl -u user get foo
-```
-
-Otherwise, all `etcdctl` commands remain the same. Users and roles can still be created and modified, but require authentication by a user with the root role.
-
-[auth-api]: auth_api.md

+ 0 - 71
Documentation/backward_compatibility.md

@@ -1,71 +0,0 @@
-# Backward Compatibility
-
-The main goal of the etcd 2.0 release is to improve cluster safety around bootstrapping and dynamic reconfiguration. To do this, we deprecated the old error-prone APIs and provided a new set of APIs.
-
-The other main focus of this release was a more reliable Raft implementation, but as this change is internal it should not have any notable effects on users.
-
-## Command Line Flags Changes
-
-The major flag changes are mostly related to bootstrapping. The `initial-*` flags provide an improved way to specify the required criteria to start the cluster. The advertised URLs now support a list of values instead of a single value, which allows etcd users to gracefully migrate to the new set of IANA-assigned ports (2379/client and 2380/peers) while maintaining backward compatibility with the old ports.
-
- - `addr` is replaced by `advertise-client-urls`.
- - `bind-addr` is replaced by `listen-client-urls`.
- - `peer-addr` is replaced by `initial-advertise-peer-urls`.
- - `peer-bind-addr` is replaced by `listen-peer-urls`.
- - `peers` is replaced by `initial-cluster`.
- - `peers-file` is replaced by `initial-cluster`.
- - `peer-heartbeat-interval` is replaced by `heartbeat-interval`.
- - `peer-election-timeout` is replaced by `election-timeout`.
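-
-As an illustrative sketch (host address and member name are hypothetical), an etcd 0.4-style invocation and its 2.0 equivalent:
-
-```
-# etcd 0.4 (deprecated flags)
-etcd -name infra0 -addr 10.0.1.10:4001 -peer-addr 10.0.1.10:7001
-
-# etcd 2.0 equivalent
-etcd --name infra0 \
-  --advertise-client-urls http://10.0.1.10:2379 \
-  --listen-client-urls http://10.0.1.10:2379 \
-  --initial-advertise-peer-urls http://10.0.1.10:2380 \
-  --listen-peer-urls http://10.0.1.10:2380
-```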
-
-The documentation of new command line flags can be found at
-https://github.com/coreos/etcd/blob/master/Documentation/configuration.md.
-
-## Data Directory Naming
-
-The default data dir location has changed from {$hostname}.etcd to {name}.etcd.
-
-## Key-Value API
-
-### Read consistency flag
-
-The consistent flag for read operations is removed in etcd 2.0.0. The normal read operations now provide the same consistency guarantees as the 0.4.6 read operations with the consistent flag set.
-
-The read consistency guarantees are:
-
-The consistent read guarantees sequential consistency within one client that talks to one etcd server. Reads and writes from one client to one etcd member should be observed in order. If one client writes a value to an etcd server successfully, it should be able to read the value back from the server immediately.
-
-Each etcd member will proxy the request to the leader and only return the result to the user after the result is applied on the local member. Thus after the write succeeds, the user is guaranteed to see the value on the member it sent the request to.
-
-Reads do not provide linearizability. If you want linearizable reads, you need to set the quorum option to true.
-
-**Previous behavior**
-
-We added an option for a consistent read in the old version of etcd since etcd 0.x redirects the write request to the leader. When the user gets back the result from the leader, the member it originally sent the request to might not have applied the write request yet. With the consistent flag set to true, the client will always send read requests to the leader. So one client should be able to see its last write when consistent=true is enabled. There are no ordering guarantees among different clients.
-
-
-## Standby
-
-etcd 0.4’s standby mode has been deprecated. [Proxy mode][proxymode] is introduced to solve a subset of problems standby was solving.
-
-Standby mode was intended for large clusters that had a subset of the members acting in the consensus process. Overall this process was too magical and allowed operators to back themselves into a corner.
-
-Proxy mode in 2.0 will provide similar functionality, and with improved control over which machines act as proxies due to the operator specifically configuring them. Proxies also support read only or read/write modes for increased security and durability.
-
-[proxymode]: proxy.md
-
-## Discovery Service
-
-A size key needs to be provided inside a [discovery token][discoverytoken].
-[discoverytoken]: clustering.md#custom-etcd-discovery-service
-
-## HTTP Admin API
-
-`v2/admin` on peer url and `v2/keys/_etcd` are unified under the new [v2/members API][members-api] to better explain which machines are part of an etcd cluster, and to simplify the keyspace for all your use cases.
-
-[members-api]: members_api.md
-
-## HTTP Key Value API
-- The follower can now transparently proxy write requests to the leader. Clients will no longer see 307 redirections to the leader from etcd.
-
-- Expiration time is in UTC instead of local time.
-

+ 1 - 1
Documentation/dev-internal/discovery_protocol.md

@@ -108,7 +108,7 @@ You can check the status for this discovery token, including the machines that h
 
 The repository is located at https://github.com/coreos/discovery.etcd.io. You could use it to build your own public discovery service.
 
-[api]: api.md#waiting-for-a-change
+[api]: ../v2/api.md#waiting-for-a-change
 [cluster-size]: admin_guide.md#optimal-cluster-size
 [expected-cluster-size]: #specifying-the-expected-cluster-size
 [new-discovery-token]: #creating-a-new-discovery-token

+ 0 - 42
Documentation/errorcode.md

@@ -1,42 +0,0 @@
-# Error Code
-
-This document describes the error codes used in the key space '/v2/keys'. Feel free to import 'github.com/coreos/etcd/error' to use them.
-
-They are categorized into four groups:
-
-- Command Related Error
-
-| name                 | code | strerror              |
-|----------------------|------|-----------------------|
-| EcodeKeyNotFound     | 100  | "Key not found"       |
-| EcodeTestFailed      | 101  | "Compare failed"      |
-| EcodeNotFile         | 102  | "Not a file"          |
-| EcodeNotDir          | 104  | "Not a directory"     |
-| EcodeNodeExist       | 105  | "Key already exists"  |
-| EcodeRootROnly       | 107  | "Root is read only"   |
-| EcodeDirNotEmpty     | 108  | "Directory not empty" |
-
-- Post Form Related Error
-
-| name                     | code | strerror |
-|--------------------------|------|------------------------------------------------|
-| EcodePrevValueRequired   | 201  | "PrevValue is Required in POST form"           |
-| EcodeTTLNaN              | 202  | "The given TTL in POST form is not a number"   |
-| EcodeIndexNaN            | 203  | "The given index in POST form is not a number" |
-| EcodeInvalidField        | 209  | "Invalid field"                                |
-| EcodeInvalidForm         | 210  | "Invalid POST form"                            |
-
-- Raft Related Error
-
-| name              | code | strerror                 |
-|-------------------|------|--------------------------|
-| EcodeRaftInternal | 300  | "Raft Internal Error"    |
-| EcodeLeaderElect  | 301  | "During Leader Election" |
-
-- Etcd Related Error
-
-| name                    | code | strerror                                               |
-|-------------------------|------|--------------------------------------------------------|
-| EcodeWatcherCleared     | 400  | "watcher is cleared due to etcd recovery"              |
-| EcodeEventIndexCleared  | 401  | "The event in requested index is outdated and cleared" |

+ 0 - 120
Documentation/members_api.md

@@ -1,120 +0,0 @@
-# Members API
-
-* [List members](#list-members)
-* [Add a member](#add-a-member)
-* [Delete a member](#delete-a-member)
-* [Change the peer urls of a member](#change-the-peer-urls-of-a-member)
-
-## List members
-
-Returns an HTTP 200 OK response code and a representation of all members in the etcd cluster.
-
-### Request
-
-```
-GET /v2/members HTTP/1.1
-```
-
-### Example
-
-```sh
-curl http://10.0.0.10:2379/v2/members
-```
-
-```json
-{
-    "members": [
-        {
-            "id": "272e204152",
-            "name": "infra1",
-            "peerURLs": [
-                "http://10.0.0.10:2380"
-            ],
-            "clientURLs": [
-                "http://10.0.0.10:2379"
-            ]
-        },
-        {
-            "id": "2225373f43",
-            "name": "infra2",
-            "peerURLs": [
-                "http://10.0.0.11:2380"
-            ],
-            "clientURLs": [
-                "http://10.0.0.11:2379"
-            ]
-        }
-    ]
-}
-```
-
-## Add a member
-
-Returns an HTTP 201 response code and the representation of the added member, along with a newly generated memberID, when successful. Returns a string describing the failure condition when unsuccessful.
-
-If the POST body is malformed, an HTTP 400 will be returned. If the member exists in the cluster or existed in the cluster at some point in the past, an HTTP 409 will be returned. If any of the given peerURLs exists in the cluster, an HTTP 409 will be returned. If the cluster fails to process the request within the timeout, an HTTP 500 will be returned, though the request may be processed later.
-
-### Request
-
-```
-POST /v2/members HTTP/1.1
-
-{"peerURLs": ["http://10.0.0.10:2380"]}
-```
-
-### Example
-
-```sh
-curl http://10.0.0.10:2379/v2/members -XPOST \
--H "Content-Type: application/json" -d '{"peerURLs":["http://10.0.0.10:2380"]}'
-```
-
-```json
-{
-    "id": "3777296169",
-    "peerURLs": [
-        "http://10.0.0.10:2380"
-    ]
-}
-```
-
-## Delete a member
-
-Remove a member from the cluster. The member ID must be a hex-encoded uint64.
-Returns 204 with empty content when successful. Returns a string describing the failure condition when unsuccessful.
-
-If the member does not exist in the cluster an HTTP 500(TODO: fix this) will be returned. If the cluster fails to process the request within timeout an HTTP 500 will be returned, though the request may be processed later.
-
-### Request
-
-```
-DELETE /v2/members/<id> HTTP/1.1
-```
-
-### Example
-
-```sh
-curl http://10.0.0.10:2379/v2/members/272e204152 -XDELETE
-```
-
-## Change the peer urls of a member
-
-Change the peer urls of a given member. The member ID must be a hex-encoded uint64. Returns 204 with empty content when successful. Returns a string describing the failure condition when unsuccessful.
-
-If the POST body is malformed, an HTTP 400 will be returned. If the member does not exist in the cluster, an HTTP 404 will be returned. If any of the given peerURLs exists in the cluster, an HTTP 409 will be returned. If the cluster fails to process the request within the timeout, an HTTP 500 will be returned, though the request may be processed later.
-
-### Request
-
-```
-PUT /v2/members/<id> HTTP/1.1
-
-{"peerURLs": ["http://10.0.0.10:2380"]}
-```
-
-### Example
-
-```sh
-curl http://10.0.0.10:2379/v2/members/272e204152 -XPUT \
--H "Content-Type: application/json" -d '{"peerURLs":["http://10.0.0.10:2380"]}'
-```
-

+ 1 - 1
Documentation/op-guide/configuration.md

@@ -266,7 +266,7 @@ Follow the instructions when using these flags.
 [reconfig]: runtime-configuration.md
 [discovery]: clustering.md#discovery
 [iana-ports]: https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=etcd
-[proxy]: proxy.md
+[proxy]: ../v2/proxy.md
 [reconfig]: runtime-configuration.md
 [restore]: admin_guide.md#restoring-a-backup
 [rfc-v3]: rfc/v3api.md

+ 3 - 1
Documentation/op-guide/runtime-configuration.md

@@ -50,6 +50,8 @@ All changes to the cluster are done one at a time:
 All of these examples will use the `etcdctl` command line tool that ships with etcd.
 If you want to use the members API directly you can find the documentation [here][member-api].
 
+TODO: v3 member API documentation
+
 ### Update a Member
 
 #### Update advertise client URLs
@@ -178,7 +180,7 @@ It is recommended to enable this option. However, it is disabled by default beca
 [disaster recovery]: admin_guide.md#disaster-recovery
 [fault tolerance table]: admin_guide.md#fault-tolerance-table
 [majority failure]: #restart-cluster-from-majority-failure
-[member-api]: members_api.md
+[member-api]: ../v2/members_api.md
 [member migration]: admin_guide.md#member-migration
 [remove member]: #remove-a-member
 [runtime-reconf]: runtime-reconf-design.md

+ 0 - 28
Documentation/other_apis.md

@@ -1,28 +0,0 @@
-# Miscellaneous APIs
-
-* [Getting the etcd version](#getting-the-etcd-version)
-* [Checking health of an etcd member node](#checking-health-of-an-etcd-member-node)
-
-## Getting the etcd version
-
-The etcd version of a specific instance can be obtained from the `/version` endpoint.
-
-```sh
-curl -L http://127.0.0.1:2379/version
-```
-
-```
-etcd 2.0.12
-```
-
-## Checking health of an etcd member node
-
-etcd provides a `/health` endpoint to verify the health of a particular member.
-
-```sh
-curl http://10.0.0.10:2379/health
-```
-
-```json
-{"health": "true"}
-```

+ 0 - 153
Documentation/proxy.md

@@ -1,153 +0,0 @@
-# Proxy
-
-etcd can run as a transparent proxy. Doing so allows for easy discovery of etcd within your infrastructure, since it can run on each machine as a local service. In this mode, etcd acts as a reverse proxy and forwards client requests to an active etcd cluster. The etcd proxy does not participate in the consensus replication of the etcd cluster, thus it neither increases the resilience nor decreases the write performance of the etcd cluster.
-
-etcd currently supports two proxy modes: `readwrite` and `readonly`. The default mode is `readwrite`, which forwards both read and write requests to the etcd cluster. A `readonly` etcd proxy only forwards read requests to the etcd cluster, and returns `HTTP 501` to all write requests.
-
-The proxy will shuffle the list of cluster members periodically to avoid sending all connections to a single member.
-
-The member list used by an etcd proxy consists of all client URLs advertised in the cluster. These client URLs are specified in each etcd cluster member's `advertise-client-urls` option.
-
-An etcd proxy examines several command-line options to discover its peer URLs. In order of precedence, these options are `discovery`, `discovery-srv`, and `initial-cluster`. The `initial-cluster` option is set to a comma-separated list of one or more etcd peer URLs used temporarily in order to discover the permanent cluster.
-
-After establishing a list of peer URLs in this manner, the proxy retrieves the list of client URLs from the first reachable peer. These client URLs are specified by the `advertise-client-urls` option to etcd peers. The proxy then continues to connect to the first reachable etcd cluster member every thirty seconds to refresh the list of client URLs.
-
-While etcd proxies therefore do not need to be given the `advertise-client-urls` option, as they retrieve this configuration from the cluster, this implies that `initial-cluster` must be set correctly for every proxy, and the `advertise-client-urls` option must be set correctly for every non-proxy, first-order cluster peer. Otherwise, requests to any etcd proxy would be forwarded improperly. Take special care not to set the `advertise-client-urls` option to URLs that point to the proxy itself, as such a configuration will cause the proxy to enter a loop, forwarding requests to itself until resources are exhausted. To correct either case, stop etcd and restart it with the correct URLs.
-
-[This example Procfile][procfile] illustrates the difference in the etcd peer and proxy command lines used to configure and start a cluster with one proxy under the [goreman process management utility][goreman].
-
-To summarize etcd proxy startup and peer discovery:
-
-1. etcd proxies execute the following steps in order until the cluster *peer-urls* are known:
-	1. If `discovery` is set for the proxy, ask the given discovery service for
-	   the *peer-urls*. The *peer-urls* will be the combined
-	   `initial-advertise-peer-urls` of all first-order, non-proxy cluster
-	   members.
-	2. If `discovery-srv` is set for the proxy, the *peer-urls* are discovered
-	   from DNS.
-	3. If `initial-cluster` is set for the proxy, that will become the value of
-	   *peer-urls*.
-	4. Otherwise use the default value of
-	   `http://localhost:2380,http://localhost:7001`.
-2. These *peer-urls* are used to contact the (non-proxy) members of the cluster
-   to find their *client-urls*. The *client-urls* will thus be the combined
-   `advertise-client-urls` of all cluster members (i.e. non-proxies).
-3. Requests from clients of the proxy will be forwarded (proxied) to these
-   *client-urls*.
-
-Always start the first-order etcd cluster members first, then any proxies. A proxy must be able to reach the cluster members to retrieve its configuration, and will attempt connections somewhat aggressively in the absence of such a channel. Starting the members before any proxy ensures the proxy can discover the client URLs when it later starts.
-
-## Using an etcd proxy
-To start etcd in proxy mode, you need to provide three flags: `proxy`, `listen-client-urls`, and `initial-cluster` (or `discovery`).
-
-To start a readwrite proxy, set `-proxy on`; to start a readonly proxy, set `-proxy readonly`.
-
-The proxy will listen on `listen-client-urls` and forward requests to the etcd cluster discovered from the `initial-cluster` flag or the `discovery` url.
-
-### Start an etcd proxy with a static configuration
-To start a proxy that will connect to a statically defined etcd cluster, specify the `initial-cluster` flag:
-
-```
-etcd --proxy on \
---listen-client-urls http://127.0.0.1:8080 \
---initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380
-```
-
-### Start an etcd proxy with the discovery service
-If you bootstrap an etcd cluster using the [discovery service][discovery-service], you can also start the proxy with the same `discovery`.
-
-To start a proxy using the discovery service, specify the `discovery` flag. The proxy will wait until the etcd cluster defined at the `discovery` url finishes bootstrapping, and then start to forward the requests.
-
-```
-etcd --proxy on \
---listen-client-urls http://127.0.0.1:8080 \
---discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
-```
-
-## Fallback to proxy mode with discovery service
-
-If you bootstrap an etcd cluster using [discovery service][discovery-service] with more than the expected number of etcd members, the extra etcd processes will fall back to being `readwrite` proxies by default. They will forward the requests to the cluster as described above. For example, if you create a discovery url with `size=5`, and start ten etcd processes using that same discovery url, the result will be a cluster with five etcd members and five proxies. Note that this behaviour can be disabled with the `discovery-fallback='exit'` flag.
-
-## Promote a proxy to a member of etcd cluster
-
-A proxy is a part of the etcd cluster that does not participate in consensus. A proxy will never automatically promote itself to an etcd member that participates in consensus.
-
-If you want to promote a proxy to an etcd member, there are four steps you need to follow:
-
-- use etcdctl to add the proxy node as an etcd member into the existing cluster
-- stop the etcd proxy process or service
-- remove the existing proxy data directory
-- restart the etcd process with new member configuration
-
-## Example
-
-We assume you have a one member etcd cluster with one proxy. The cluster information is listed below:
-
-|Name|Address|
-|------|---------|
-|infra0|10.0.1.10|
-|proxy0|10.0.1.11|
-
-This example walks you through promoting one proxy to an etcd member. The cluster will become a two-member cluster after finishing the four steps.
-
-### Add a new member into the existing cluster
-
-First, use etcdctl to add the member to the cluster, which will output the environment variables needed to correctly configure the new member:
-
-``` bash
-$ etcdctl -endpoint http://10.0.1.10:2379 member add infra1 http://10.0.1.11:2380
-added member 9bf1b35fc7761a23 to cluster
-
-ETCD_NAME="infra1"
-ETCD_INITIAL_CLUSTER="infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380"
-ETCD_INITIAL_CLUSTER_STATE=existing
-```
-
-### Stop the proxy process
-
-Stop the existing proxy so we can wipe its state on disk and reload it with the new configuration:
-
-``` bash
-ps aux | grep etcd
-kill %etcd_proxy_pid%
-```
-
-or, if you are running the etcd proxy as a systemd service:
-
-``` bash
-sudo systemctl stop etcd
-```
-
-### Remove the existing proxy data dir
-
-``` bash
-rm -rf %data_dir%/proxy
-```
-
-### Start etcd as a new member
-
-Finally, start the reconfigured member and make sure it joins the cluster correctly:
-
-``` bash
-$ export ETCD_NAME="infra1"
-$ export ETCD_INITIAL_CLUSTER="infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380"
-$ export ETCD_INITIAL_CLUSTER_STATE=existing
-$ etcd --listen-client-urls http://10.0.1.11:2379 \
---advertise-client-urls http://10.0.1.11:2379 \
---listen-peer-urls http://10.0.1.11:2380 \
---initial-advertise-peer-urls http://10.0.1.11:2380 \
---data-dir %data_dir%
-```
-
-If you are running etcd under systemd, modify the service file with the new member configuration, then reload and restart the service:
-
-``` bash
-sudo systemctl daemon-reload
-sudo systemctl restart etcd
-```
-
-If an error occurs, check the [add member troubleshooting doc][runtime-configuration].
-
-[discovery-service]: clustering.md#discovery
-[goreman]: https://github.com/mattn/goreman
-[procfile]: /Procfile
-[runtime-configuration]: runtime-configuration.md#error-cases-when-adding-members

+ 0 - 116
Documentation/upgrade_2_1.md

@@ -1,116 +0,0 @@
-# Upgrade etcd to 2.1
-
-In the general case, upgrading from etcd 2.0 to 2.1 can be a zero-downtime, rolling upgrade:
- - one by one, stop the etcd v2.0 processes and replace them with etcd v2.1 processes
- - after you are running all v2.1 processes, new features in v2.1 are available to the cluster
-
-Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.
-
-## Upgrade Checklists
-
-### Upgrade Requirements
-
-To upgrade an existing etcd deployment to 2.1, you must be running 2.0. If you’re running a version of etcd before 2.0, you must upgrade to [2.0][v2.0] before upgrading to 2.1.
-
-Also, to ensure a smooth rolling upgrade, your running cluster must be healthy. You can check the health of the cluster by using the `etcdctl cluster-health` command.
-
-### Preparedness 
-
-Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment. 
-
-You might also want to [backup your data directory][backup-datastore] for a potential [downgrade](#downgrade).
-
-etcd 2.1 introduces a new [authentication][auth] feature, which is disabled by default. If your deployment will depend on authentication, you may want to test the auth features before enabling them in production.
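-
-For instance, a staging run could exercise the auth feature end-to-end using the v2 etcdctl auth commands (a sketch; the endpoint is illustrative, and `user add root` and the `-u` flag prompt for passwords):
-
-``` bash
-etcdctl -endpoint http://127.0.0.1:2379 user add root
-etcdctl -endpoint http://127.0.0.1:2379 auth enable
-etcdctl -endpoint http://127.0.0.1:2379 -u root user list
-etcdctl -endpoint http://127.0.0.1:2379 -u root auth disable
-```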
-
-### Mixed Versions
-
-While upgrading, an etcd cluster supports mixed versions of etcd members. The cluster is only considered upgraded once all its members are upgraded to 2.1.
-
-Internally, etcd members negotiate with each other to determine the overall etcd cluster version, which controls the reported cluster version and the supported features. For example, if you are mid-upgrade, any 2.1 features (such as the authentication feature mentioned above) won’t be available.
-
-### Limitations
-
-If you encounter any issues during the upgrade, you can attempt to restart the troubled etcd process with a newer v2.1 binary to solve the problem. One known issue is that etcd v2.0.0 and v2.0.2 may panic during rolling upgrades due to a bug that has been fixed since etcd v2.0.3.
-
-It might take up to 2 minutes for the newly upgraded member to catch up with the existing cluster when the total data size is larger than 50MB (check the size of the existing snapshot to estimate the total data size; see the sketch below). In other words, it is safest to wait 2 minutes before upgrading the next member.
-
-If you have even more data, this might take more time. If your data size is larger than 100MB, you should contact us before upgrading, so we can make sure the upgrade works smoothly.
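-
-A quick way to make that estimate, assuming the default v2.0 on-disk layout (a sketch; v2.1 and later place these files under a `member` subdirectory, and the data directory path is illustrative):
-
-``` bash
-# the newest .snap file approximates the total data size
-ls -lh /var/lib/etcd/snap/
-```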
-
-### Downgrade
-
-If all members have been upgraded to v2.1, the cluster will be upgraded to v2.1, and downgrade is **not possible**. If any member is still v2.0, the cluster will remain in v2.0, and you can go back to using the v2.0 binary.
-
-Please [back up the data directory][backup-datastore] of all etcd members to make downgrading the cluster possible even after it has been upgraded.
-
-### Upgrade Procedure
-
-#### 1. Check upgrade requirements.
-
-```
-$ etcdctl cluster-health
-cluster is healthy
-member 6e3bd23ae5f1eae0 is healthy
-member 924e2e83e93f2560 is healthy
-member a8266ecf031671f3 is healthy
-
-$ curl http://127.0.0.1:4001/version
-etcd 2.0.x
-```
-
-#### 2. Stop the existing etcd process
-
-You will see error logging like the following from the other etcd processes in your cluster. This is normal, since you just shut down a member.
-
-```
-2015/06/23 15:45:09 sender: error posting to 6e3bd23ae5f1eae0: dial tcp 127.0.0.1:7002: connection refused
-2015/06/23 15:45:09 sender: the connection with 6e3bd23ae5f1eae0 became inactive
-2015/06/23 15:45:11 rafthttp: encountered error writing to server log stream: write tcp 127.0.0.1:53783: broken pipe
-2015/06/23 15:45:11 rafthttp: server streaming to 6e3bd23ae5f1eae0 at term 2 has been stopped
-2015/06/23 15:45:11 stream: error sending message: stopped
-2015/06/23 15:45:11 stream: stopping the stream server...
-```
-
-You may want to [back up your data directory][backup-datastore] for data safety:
-
-```
-$ etcdctl backup \
-      --data-dir /var/lib/etcd \
-      --backup-dir /tmp/etcd_backup
-```
-
-#### 3. Drop in the etcd v2.1 binary and start the new etcd process
-
-You will see the new etcd process publish its information to the cluster:
-
-```
-2015/06/23 15:45:39 etcdserver: published {Name:infra2 ClientURLs:[http://localhost:4002]} to cluster e9c7614f68f35fb2
-```
-
-Verify that the cluster becomes healthy:
-
-```
-$ etcdctl cluster-health
-cluster is healthy
-member 6e3bd23ae5f1eae0 is healthy
-member 924e2e83e93f2560 is healthy
-member a8266ecf031671f3 is healthy
-```
-
-#### 4. Repeat steps 2 and 3 for all other members
-
-#### 5. Finish
-
-When all members are upgraded, you will see that the cluster has been upgraded to 2.1 successfully:
-
-```
-2015/06/23 15:46:35 etcdserver: updated the cluster version from 2.0.0 to 2.1.0
-```
-
-```
-$ curl http://127.0.0.1:4001/version
-{"etcdserver":"2.1.x","etcdcluster":"2.1.0"}
-```
-
-[auth]: auth_api.md
-[backup-datastore]: admin_guide.md#backing-up-the-datastore
-[v2.0]: https://github.com/coreos/etcd/releases/tag/v2.0.13

+ 0 - 132
Documentation/upgrade_2_2.md

@@ -1,132 +0,0 @@
-# Upgrade etcd from 2.1 to 2.2
-
-In the general case, upgrading from etcd 2.1 to 2.2 can be a zero-downtime, rolling upgrade:
-
- - one by one, stop the etcd v2.1 processes and replace them with etcd v2.2 processes
- - after you are running all v2.2 processes, new features in v2.2 are available to the cluster
-
-Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.
-
-## Upgrade Checklists
-
-### Upgrade Requirements
-
-To upgrade an existing etcd deployment to 2.2, you must be running 2.1. If you’re running a version of etcd before 2.1, you must upgrade to [2.1][v2.1] before upgrading to 2.2.
-
-Also, to ensure a smooth rolling upgrade, your running cluster must be healthy. You can check the health of the cluster by using the `etcdctl cluster-health` command.
-
-### Preparedness 
-
-Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment. 
-
-You might also want to [backup the data directory][backup-datastore] for a potential [downgrade].
-
-### Mixed Versions
-
-While upgrading, an etcd cluster supports mixed versions of etcd members. The cluster is only considered upgraded once all its members are upgraded to 2.2.
-
-Internally, etcd members negotiate with each other to determine the overall etcd cluster version, which controls the reported cluster version and the supported features.
-
-### Limitations
-
-If you have a data size larger than 100MB you should contact us before upgrading, so we can make sure the upgrades work smoothly.
-
-Every etcd 2.2 member will periodically health-check the other members across the cluster. etcd 2.1 members do not support health checking, so during the upgrade, etcd 2.2 members will log warnings about the unhealthy state of etcd 2.1 members. You can ignore these warnings.
-
-### Downgrade
-
-If all members have been upgraded to v2.2, the cluster will be upgraded to v2.2, and downgrade is **not possible**. If any member is still v2.1, the cluster will remain in v2.1, and you can go back to using the v2.1 binary.
-
-Please [back up the data directory][backup-datastore] of all etcd members to make downgrading the cluster possible even after it has been upgraded.
-
-### Upgrade Procedure
-
-In this example, we upgrade a three-member v2.1 cluster running on a local machine.
-
-#### 1. Check upgrade requirements.
-
-```
-$ etcdctl cluster-health
-member 6e3bd23ae5f1eae0 is healthy: got healthy result from http://localhost:22379
-member 924e2e83e93f2560 is healthy: got healthy result from http://localhost:32379
-member a8266ecf031671f3 is healthy: got healthy result from http://localhost:12379
-cluster is healthy
-
-$ curl http://localhost:4001/version
-{"etcdserver":"2.1.x","etcdcluster":"2.1.0"}
-```
-
-#### 2. Stop the existing etcd process
-
-You will see error logging like the following from the other etcd processes in your cluster. This is normal, since you just shut down a member and the connection is broken.
-
-```
-2015/09/2 09:48:35 etcdserver: failed to reach the peerURL(http://localhost:12380) of member a8266ecf031671f3 (Get http://localhost:12380/version: dial tcp [::1]:12380: getsockopt: connection refused)
-2015/09/2 09:48:35 etcdserver: cannot get the version of member a8266ecf031671f3 (Get http://localhost:12380/version: dial tcp [::1]:12380: getsockopt: connection refused)
-2015/09/2 09:48:35 rafthttp: failed to write a8266ecf031671f3 on stream Message (write tcp 127.0.0.1:32380->127.0.0.1:64394: write: broken pipe)
-2015/09/2 09:48:35 rafthttp: failed to write a8266ecf031671f3 on pipeline (dial tcp [::1]:12380: getsockopt: connection refused)
-2015/09/2 09:48:40 etcdserver: failed to reach the peerURL(http://localhost:7001) of member a8266ecf031671f3 (Get http://localhost:7001/version: dial tcp [::1]:12380: getsockopt: connection refused)
-2015/09/2 09:48:40 etcdserver: cannot get the version of member a8266ecf031671f3 (Get http://localhost:12380/version: dial tcp [::1]:12380: getsockopt: connection refused)
-2015/09/2 09:48:40 rafthttp: failed to heartbeat a8266ecf031671f3 on stream MsgApp v2 (write tcp 127.0.0.1:32380->127.0.0.1:64393: write: broken pipe)
-```
-
-You will see logging output like this from members that have not yet been upgraded, due to the mixed-version cluster. You can ignore this while upgrading.
-
-```
-2015/09/2 09:48:45 etcdserver: the etcd version 2.1.2+git is not up-to-date
-2015/09/2 09:48:45 etcdserver: member a8266ecf031671f3 has a higher version &{2.2.0-rc.0+git 2.1.0}
-```
-
-You will also see logging output like this from the newly upgraded member, since etcd 2.1 members do not support health checking. You can ignore this while upgrading.
-
-```
-2015-09-02 09:55:42.691384 W | rafthttp: the connection to peer 6e3bd23ae5f1eae0 is unhealthy
-2015-09-02 09:55:42.705626 W | rafthttp: the connection to peer 924e2e83e93f2560 is unhealthy
-```
-
-[Back up your data directory][backup-datastore] for data safety:
-
-```
-$ etcdctl backup \
-      --data-dir /var/lib/etcd \
-      --backup-dir /tmp/etcd_backup
-```
-
-#### 3. Drop in the etcd v2.2 binary and start the new etcd process
-
-Now, start the etcd v2.2 binary with the previous configuration.
-You will see etcd start and publish its information to the cluster:
-
-```
-2015-09-02 09:56:46.117609 I | etcdserver: published {Name:infra2 ClientURLs:[http://localhost:22380]} to cluster e9c7614f68f35fb2
-```
-
-Verify that the cluster becomes healthy:
-
-```
-$ etcdctl cluster-health
-member 6e3bd23ae5f1eae0 is healthy: got healthy result from http://localhost:22379
-member 924e2e83e93f2560 is healthy: got healthy result from http://localhost:32379
-member a8266ecf031671f3 is healthy: got healthy result from http://localhost:12379
-cluster is healthy
-```
-
-#### 4. Repeat steps 2 and 3 for all other members
-
-#### 5. Finish
-
-When all members are upgraded, you will see that the cluster has been upgraded to 2.2 successfully:
-
-```
-2015-09-02 09:56:54.896848 N | etcdserver: updated the cluster version from 2.1 to 2.2
-```
-
-```
-$ curl http://127.0.0.1:4001/version
-{"etcdserver":"2.2.x","etcdcluster":"2.2.0"}
-```
-
-[backup-datastore]: admin_guide.md#backing-up-the-datastore
-[downgrade]: #downgrade
-[v2.1]: https://github.com/coreos/etcd/releases/tag/v2.1.2

+ 0 - 121
Documentation/upgrade_2_3.md

@@ -1,121 +0,0 @@
-## Upgrade etcd from 2.2 to 2.3
-
-In the general case, upgrading from etcd 2.2 to 2.3 can be a zero-downtime, rolling upgrade:
- - one by one, stop the etcd v2.2 processes and replace them with etcd v2.3 processes
- - after running all v2.3 processes, new features in v2.3 are available to the cluster
-
-Before [starting an upgrade](#upgrade-procedure), read through the rest of this guide to prepare.
-
-### Upgrade Checklists
-
-#### Upgrade Requirements
-
-To upgrade an existing etcd deployment to 2.3, the running cluster must be 2.2 or greater. If it's before 2.2, please upgrade to [2.2](https://github.com/coreos/etcd/releases/tag/v2.2.0) before upgrading to 2.3.
-
-Also, to ensure a smooth rolling upgrade, the running cluster must be healthy. You can check the health of the cluster by using the `etcdctl cluster-health` command.
-
-#### Preparation
-
-Before upgrading etcd, always test the services relying on etcd in a staging environment before deploying the upgrade to the production environment.
-
-Before beginning, [backup the etcd data directory](admin_guide.md#backing-up-the-datastore). Should something go wrong with the upgrade, it is possible to use this backup to [downgrade](#downgrade) back to the existing etcd version.
-
-#### Mixed Versions
-
-While upgrading, an etcd cluster supports mixed versions of etcd members, and operates with the protocol of the lowest common version. The cluster is only considered upgraded once all of its members are upgraded to version 2.3. Internally, etcd members negotiate with each other to determine the overall cluster version, which controls the reported version and the supported features.
-
-#### Limitations
-
-It might take up to 2 minutes for the newly upgraded member to catch up with the existing cluster when the total data size is larger than 50MB. Check the size of a recent snapshot to estimate the total data size. In other words, it is safest to wait 2 minutes between upgrading each member.
-
-For a much larger total data size, 100MB or more, this one-time process might take even more time. Administrators of very large etcd clusters of this magnitude can feel free to contact the [etcd team][etcd-contact] before upgrading, and we’ll be happy to provide advice on the procedure.
-
-#### Downgrade
-
-If all members have been upgraded to v2.3, the cluster will be upgraded to v2.3, and downgrade from this completed state is **not possible**. If any single member is still v2.2, however, the cluster and its operations remain “v2.2”, and it is possible from this mixed cluster state to return to using a v2.2 etcd binary on all members.
-
-Please [backup the data directory](admin_guide.md#backing-up-the-datastore) of all etcd members to make downgrading the cluster possible even after it has been completely upgraded.
-
-### Upgrade Procedure
-
-This example details the upgrade of a three-member v2.2 etcd cluster running on a local machine.
-
-#### 1. Check upgrade requirements.
-
-Is the cluster healthy and running v2.2.x?
-
-```
-$ etcdctl cluster-health
-member 6e3bd23ae5f1eae0 is healthy: got healthy result from http://localhost:22379
-member 924e2e83e93f2560 is healthy: got healthy result from http://localhost:32379
-member a8266ecf031671f3 is healthy: got healthy result from http://localhost:12379
-cluster is healthy
-
-$ curl http://localhost:4001/version
-{"etcdserver":"2.2.x","etcdcluster":"2.2.0"}
-```
-
-#### 2. Stop the existing etcd process
-
-When each etcd process is stopped, expected errors will be logged by other cluster members. This is normal since a cluster member connection has been (temporarily) broken:
-
-```
-2016-03-11 09:50:49.860319 E | rafthttp: failed to read 8211f1d0f64f3269 on stream Message (unexpected EOF)
-2016-03-11 09:50:49.860335 I | rafthttp: the connection with 8211f1d0f64f3269 became inactive
-2016-03-11 09:50:51.023804 W | etcdserver: failed to reach the peerURL(http://127.0.0.1:12380) of member 8211f1d0f64f3269 (Get http://127.0.0.1:12380/version: dial tcp 127.0.0.1:12380: getsockopt: connection refused)
-2016-03-11 09:50:51.023821 W | etcdserver: cannot get the version of member 8211f1d0f64f3269 (Get http://127.0.0.1:12380/version: dial tcp 127.0.0.1:12380: getsockopt: connection refused)
-```
-
-It’s a good idea at this point to [back up the etcd data directory](https://github.com/coreos/etcd/blob/7f7e2cc79d9c5c342a6eb1e48c386b0223cf934e/Documentation/admin_guide.md#backing-up-the-datastore) to provide a downgrade path should any problems occur:
-
-```
-$ etcdctl backup \
-      --data-dir /var/lib/etcd \
-      --backup-dir /tmp/etcd_backup
-```
-
-#### 3. Drop in the etcd v2.3 binary and start the new etcd process
-
-The new v2.3 etcd will publish its information to the cluster:
-
-```
-09:58:25.938673 I | etcdserver: published {Name:infra1 ClientURLs:[http://localhost:12379]} to cluster 524400597fb1d5f6
-```
-
-Verify that each member, and then the entire cluster, becomes healthy with the new v2.3 etcd binary:
-
-```
-$ etcdctl cluster-health
-member 6e3bd23ae5f1eae0 is healthy: got healthy result from http://localhost:22379
-member 924e2e83e93f2560 is healthy: got healthy result from http://localhost:32379
-member a8266ecf031671f3 is healthy: got healthy result from http://localhost:12379
-cluster is healthy
-```
-
-Upgraded members will log warnings like the following until the entire cluster is upgraded. This is expected and will cease after all etcd cluster members are upgraded to v2.3:
-
-```
-2016-03-11 09:58:26.851837 W | etcdserver: the local etcd version 2.2.0 is not up-to-date
-2016-03-11 09:58:26.851854 W | etcdserver: member c02c70ede158499f has a higher version 2.3.0
-```
-
-#### 4. Repeat steps 2 and 3 for all other members
-
-#### 5. Finish
-
-When all members are upgraded, the cluster will report that it has been upgraded to 2.3 successfully:
-
-```
-2016-03-11 10:03:01.583392 N | etcdserver: updated the cluster version from 2.2 to 2.3
-```
-
-```
-$ curl http://127.0.0.1:4001/version
-{"etcdserver":"2.3.x","etcdcluster":"2.3.0"}
-```
-
-[etcd-contact]: https://coreos.com/etcd/?
-

+ 1 - 1
error/error.go

@@ -13,7 +13,7 @@
 // limitations under the License.
 
 // Package error describes errors in etcd project. When any change happens,
-// Documentation/errorcode.md needs to be updated correspondingly.
+// Documentation/v2/errorcode.md needs to be updated correspondingly.
 package error
 
 import (

+ 1 - 1
etcdctl/README.md

@@ -333,7 +333,7 @@ Releases will follow lockstep with the etcd release cycle.
 
 etcdctl is under the Apache 2.0 license. See the [LICENSE][license] file for details.
 
-[authentication]: ../Documentation/authentication.md
+[authentication]: ../Documentation/v2/authentication.md
 [etcd]: https://github.com/coreos/etcd
 [github-release]: https://github.com/coreos/etcd/releases/
 [license]: https://github.com/coreos/etcdctl/blob/master/LICENSE