Merge pull request #5215 from gyuho/finish_doc

Finish v2 documentation cleaning
Gyu-Ho Lee 9 years ago
parent
commit
e50df7c19b

+ 0 - 303
Documentation/admin_guide.md

@@ -1,303 +0,0 @@
-# Administration
-
-## Data Directory
-
-### Lifecycle
-
-When first started, etcd stores its configuration into a data directory specified by the data-dir configuration parameter.
-Configuration is stored in the write ahead log and includes: the local member ID, cluster ID, and initial cluster configuration.
-The write ahead log and snapshot files are used during member operation and to recover after a restart.
-
-Having a dedicated disk to store wal files can improve the throughput and stabilize the cluster. 
-It is highly recommended to dedicate a wal disk and set `--wal-dir` to point to a directory on that device for a production cluster deployment.
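-
-A minimal sketch of pointing the WAL at a dedicated device (the mount point and directory names here are assumptions):
-
-```sh
-# assume a dedicated disk is mounted at /mnt/wal-disk
-etcd --name infra0 \
-  --data-dir /var/lib/etcd \
-  --wal-dir /mnt/wal-disk/etcd-wal \
-  ...
-```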
-
-If a member’s data directory is ever lost or corrupted, the user should [remove][remove-a-member] the etcd member from the cluster using the `etcdctl` tool.
-
-A user should avoid restarting an etcd member with a data directory from an out-of-date backup.
-Using an out-of-date data directory can lead to inconsistency, because the member previously agreed via raft to store certain information, and re-joining with stale data makes it claim to need that information again.
-For maximum safety, if an etcd member suffers any sort of data corruption or loss, it must be removed from the cluster.
-Once removed the member can be re-added with an empty data directory.
-
-### Contents
-
-The data directory has two sub-directories in it:
-
-1. wal: write ahead log files are stored here. For details see the [wal package documentation][wal-pkg]
-2. snap: log snapshots are stored here. For details see the [snap package documentation][snap-pkg]
-
-If the `--wal-dir` flag is set, etcd will write the write ahead log files to the specified directory instead of the data directory.
-
-## Cluster Management
-
-### Lifecycle
-
-If you are spinning up multiple clusters for testing, it is recommended to specify a unique initial-cluster-token for each cluster.
-This can protect you from cluster corruption in case of misconfiguration, because members started with different cluster tokens will refuse to join each other.
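-
-For example, two throwaway test clusters might be started with distinct tokens (the names and URLs below are placeholders):
-
-```sh
-# test cluster A
-etcd --name a0 --initial-cluster-token test-cluster-a \
-  --initial-cluster a0=http://10.0.0.10:2380 \
-  ...
-# test cluster B
-etcd --name b0 --initial-cluster-token test-cluster-b \
-  --initial-cluster b0=http://10.0.0.20:2380 \
-  ...
-```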
-
-### Monitoring
-
-It is important to monitor your production etcd cluster for health information and runtime metrics.
-
-#### Health Monitoring
-
-At the lowest level, etcd exposes health information via HTTP at `/health` in JSON format. If it returns `{"health": "true"}`, then the cluster is healthy. Note that the `/health` endpoint is still experimental as of etcd 2.2.
-
-```
-$ curl -L http://127.0.0.1:2379/health
-
-{"health": "true"}
-```
-
-You can also use etcdctl to check the cluster-wide health information. It will contact all the members of the cluster and collect the health information for you.
-
-```
-$ ./etcdctl cluster-health
-member 8211f1d0f64f3269 is healthy: got healthy result from http://127.0.0.1:12379
-member 91bc3c398fb3c146 is healthy: got healthy result from http://127.0.0.1:22379
-member fd422379fda50e48 is healthy: got healthy result from http://127.0.0.1:32379
-cluster is healthy
-```
-
-#### Runtime Metrics
-
-etcd uses [Prometheus][prometheus] for metrics reporting in the server. You can read more through the runtime metrics [doc][metrics].
-
-### Debugging
-
-Debugging a distributed system can be difficult. etcd provides several ways to make debugging easier.
-
-#### Enabling Debug Logging
-
-When you want to debug etcd without stopping it, you can enable debug logging at runtime.
-etcd exposes logging configuration at `/config/local/log`.
-
-```
-$ curl http://127.0.0.1:2379/config/local/log -XPUT -d '{"Level":"DEBUG"}'
-$ # debug logging enabled
-$
-$ curl http://127.0.0.1:2379/config/local/log -XPUT -d '{"Level":"INFO"}'
-$ # debug logging disabled
-```
-
-#### Debugging Variables
-
-Debug variables are exposed for real-time debugging purposes. Developers who are familiar with etcd can utilize these variables to debug unexpected behavior. etcd exposes debug variables via HTTP at `/debug/vars` in JSON format. The debug variables contain
-`cmdline`, `file_descriptor_limit`, `memstats` and `raft.status`.
-
-`cmdline` is the command line arguments passed into etcd.
-
-`file_descriptor_limit` is the max number of file descriptors etcd can utilize.
-
-`memstats` is explained in detail in the [Go runtime documentation][golang-memstats].
-
-`raft.status` is useful when you want to debug low level raft issues if you are familiar with raft internals. In most cases, you do not need to check `raft.status`.
-
-```json
-{
-"cmdline": ["./etcd"],
-"file_descriptor_limit": 0,
-"memstats": {"Alloc":4105744,"TotalAlloc":42337320,"Sys":12560632,"...":"..."},
-"raft.status": {"id":"ce2a822cea30bfca","term":5,"vote":"ce2a822cea30bfca","commit":23509,"lead":"ce2a822cea30bfca","raftState":"StateLeader","progress":{"ce2a822cea30bfca":{"match":23509,"next":23510,"state":"ProgressStateProbe"}}}
-}
-```
-
-### Optimal Cluster Size
-
-The recommended etcd cluster size is 3, 5 or 7, which is determined by the fault tolerance requirement. A 7-member cluster can provide enough fault tolerance in most cases. While a larger cluster provides better fault tolerance, write performance decreases because data must be replicated to more machines.
-
-#### Fault Tolerance Table
-
-It is recommended to have an odd number of members in a cluster. Having an odd cluster size doesn't change the number needed for majority, but you gain a higher tolerance for failure by adding the extra member. You can see this in practice when comparing even and odd sized clusters:
-
-| Cluster Size | Majority   | Failure Tolerance |
-|--------------|------------|-------------------|
-| 1 | 1 | 0 |
-| 3 | 2 | 1 |
-| 4 | 3 | 1 |
-| 5 | 3 | **2** |
-| 6 | 4 | 2 |
-| 7 | 4 | **3** |
-| 8 | 5 | 3 |
-| 9 | 5 | **4** |
-
-As you can see, adding another member to bring the size of cluster up to an odd size is always worth it. During a network partition, an odd number of members also guarantees that there will almost always be a majority of the cluster that can continue to operate and be the source of truth when the partition ends.
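-
-The majority and failure tolerance columns follow from integer arithmetic: the majority (quorum) is `N/2 + 1` using integer division, and the failure tolerance is `N` minus that quorum. A quick shell check of the formula (a sketch):
-
-```sh
-N=5
-QUORUM=$(( N / 2 + 1 ))   # 3
-echo $(( N - QUORUM ))    # failure tolerance: 2
-```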
-
-#### Changing Cluster Size
-
-After your cluster is up and running, adding or removing members is done via [runtime reconfiguration][runtime-reconfig], which allows the cluster to be modified without downtime. The `etcdctl` tool has `member list`, `member add` and `member remove` commands to complete this process.
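-
-A sketch of those commands (the member name, peer URL and ID below are placeholders):
-
-```sh
-etcdctl member list
-etcdctl member add infra3 http://10.0.1.14:2380
-etcdctl member remove 8211f1d0f64f3269
-```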
-
-### Member Migration
-
-When there is a scheduled machine maintenance or retirement, you might want to migrate an etcd member to another machine without losing the data or changing the member ID.
-
-The data directory contains all the data to recover a member to its point-in-time state. To migrate a member:
-
-* Stop the member process.
-* Copy the data directory of the now-idle member to the new machine.
-* Update the peer URLs for the replaced member to reflect the new machine according to the [runtime reconfiguration instructions][update-member].
-* Start etcd on the new machine, using the same configuration and the copy of the data directory.
-
-This example will walk you through the process of migrating the infra1 member to a new machine:
-
-|Name|Peer URL|
-|------|--------------|
-|infra0|10.0.1.10:2380|
-|infra1|10.0.1.11:2380|
-|infra2|10.0.1.12:2380|
-
-```sh
-$ export ETCDCTL_ENDPOINT=http://10.0.1.10:2379,http://10.0.1.11:2379,http://10.0.1.12:2379
-```
-
-```sh
-$ etcdctl member list
-84194f7c5edd8b37: name=infra0 peerURLs=http://10.0.1.10:2380 clientURLs=http://127.0.0.1:2379,http://10.0.1.10:2379
-b4db3bf5e495e255: name=infra1 peerURLs=http://10.0.1.11:2380 clientURLs=http://127.0.0.1:2379,http://10.0.1.11:2379
-bc1083c870280d44: name=infra2 peerURLs=http://10.0.1.12:2380 clientURLs=http://127.0.0.1:2379,http://10.0.1.12:2379
-```
-
-#### Stop the member etcd process
-
-```sh
-$ ssh 10.0.1.11
-```
-
-```sh
-$ kill `pgrep etcd`
-```
-
-#### Copy the data directory of the now-idle member to the new machine
-
-```sh
-$ tar -cvzf infra1.etcd.tar.gz %data_dir%
-```
-
-```sh
-$ scp infra1.etcd.tar.gz 10.0.1.13:~/
-```
-
-#### Update the peer URLs for that member to reflect the new machine
-
-```sh
-$ curl http://10.0.1.10:2379/v2/members/b4db3bf5e495e255 -XPUT \
--H "Content-Type: application/json" -d '{"peerURLs":["http://10.0.1.13:2380"]}'
-```
-
-Or use the `etcdctl member update` command:
-
-```sh
-$ etcdctl member update b4db3bf5e495e255 http://10.0.1.13:2380
-```
-
-#### Start etcd on the new machine, using the same configuration and the copy of the data directory
-
-```sh
-$ ssh 10.0.1.13
-```
-
-```sh
-$ tar -xzvf infra1.etcd.tar.gz -C %data_dir%
-```
-
-```sh
-etcd --name infra1 \
---listen-peer-urls http://10.0.1.13:2380 \
---listen-client-urls http://10.0.1.13:2379,http://127.0.0.1:2379 \
---advertise-client-urls http://10.0.1.13:2379,http://127.0.0.1:2379
-```
-
-### Disaster Recovery
-
-etcd is designed to be resilient to machine failures. An etcd cluster can automatically recover from any number of temporary failures (for example, machine reboots), and a cluster of N members can tolerate up to _(N-1)/2_ permanent failures (where a member can no longer access the cluster, due to hardware failure or disk corruption). However, in extreme circumstances, a cluster might permanently lose enough members such that quorum is irrevocably lost. For example, if a three-node cluster suffered two simultaneous and unrecoverable machine failures, it would normally be impossible for the cluster to restore quorum and continue functioning.
-
-To recover from such scenarios, etcd provides functionality to back up and restore the datastore and recreate the cluster without data loss.
-
-#### Backing up the datastore
-
-**NB:** Windows users must stop etcd before running the backup command.
-
-The first step of the recovery is to back up the data directory on a functioning etcd node. To do this, use the `etcdctl backup` command, passing in the original data directory used by etcd. For example:
-
-```sh
-etcdctl backup \
-  --data-dir %data_dir% \
-  --backup-dir %backup_data_dir%
-```
-
-This command will rewrite some of the metadata contained in the backup (specifically, the node ID and cluster ID), which means that the node will lose its former identity. In order to recreate a cluster from the backup, you will need to start a new, single-node cluster. The metadata is rewritten to prevent the new node from inadvertently being joined onto an existing cluster.
-
-#### Restoring a backup
-
-To restore a backup created using the procedure above, start etcd with the `--force-new-cluster` option, pointing it to the backup directory. This will initialize a new, single-member cluster with the default advertised peer URLs, but preserve the entire contents of the etcd data store. Continuing from the previous example:
-
-```sh
-etcd \
-  --data-dir=%backup_data_dir% \
-  --force-new-cluster \
-  ...
-```
-
-Now etcd should be available on this node and serving the original datastore.
-
-Once you have verified that etcd has started successfully, shut it down and move the data back to the previous location (you may wish to make another copy as well to be safe):
-
-```sh
-pkill etcd
-rm -fr %data_dir%
-mv %backup_data_dir% %data_dir%
-etcd \
-  --data-dir=%data_dir% \
-  ...
-```
-
-#### Restoring the cluster
-
-Now that the node is running successfully, [change its advertised peer URLs][update-member], as the `--force-new-cluster` option has set the peer URL to the default of listening on localhost.
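-
-For example, the peer URL can be updated with `etcdctl member update`, exactly as in the member-migration section above (the member ID and URL below are placeholders):
-
-```sh
-etcdctl member update b4db3bf5e495e255 http://10.0.1.10:2380
-```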
-
-You can then add more nodes to the cluster and restore resiliency. See the [add a new member][add-a-member] guide for more details. **NB:** If you are trying to restore your cluster using old failed etcd nodes, please make sure you have stopped old etcd instances and removed their old data directories specified by the data-dir configuration parameter.
-
-### Client Request Timeout
-
-etcd sets different timeouts for various types of client requests. The timeout value is not currently tunable; this will be improved (see https://github.com/coreos/etcd/issues/2038).
-
-#### Get requests
-
-Timeout is not set for get requests, because etcd serves the result locally in a non-blocking way.
-
-**Note**: QuorumGet request is a different type, which is mentioned in the following sections.
-
-#### Watch requests
-
-Timeout is not set for watch requests. etcd will not stop a watch request until the client cancels it or the connection is broken.
-
-#### Delete, Put, Post, QuorumGet requests
-
-The default timeout is 5 seconds. It should be large enough to allow all key modifications if the majority of the cluster is functioning.
-
-If the request times out, it indicates two possibilities:
-
-1. the server the request was sent to was not functioning at that time.
-2. the majority of the cluster is not functioning.
-
-If timeouts happen repeatedly, administrators should check the status of the cluster and resolve the issue as soon as possible.
-
-### Best Practices
-
-#### Maximum OS threads
-
-By default, etcd uses the default configuration of the Go 1.4 runtime, which means that at most one operating system thread will be used to execute code simultaneously. (Note that this default behavior [has changed in Go 1.5][golang1.5-runtime]).
-
-When using etcd in heavy-load scenarios on machines with multiple cores it will usually be desirable to increase the number of threads that etcd can utilize. To do this, simply set the environment variable GOMAXPROCS to the desired number when starting etcd. For more information on this variable, see the [Go runtime documentation][golang-runtime].
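-
-For example, to let etcd use up to four OS threads (a sketch; pick a value appropriate for the machine):
-
-```sh
-GOMAXPROCS=4 etcd --name infra0 ...
-```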
-
-[add-a-member]: runtime-configuration.md#add-a-new-member
-[golang1.5-runtime]: https://golang.org/doc/go1.5#runtime
-[golang-memstats]: https://golang.org/pkg/runtime/#MemStats
-[golang-runtime]: https://golang.org/pkg/runtime
-[metrics]: metrics.md
-[prometheus]: http://prometheus.io/
-[remove-a-member]: runtime-configuration.md#remove-a-member
-[runtime-reconfig]: runtime-configuration.md#cluster-reconfiguration-operations
-[snap-pkg]: http://godoc.org/github.com/coreos/etcd/snap
-[update-member]: runtime-configuration.md#update-a-member
-[wal-pkg]: http://godoc.org/github.com/coreos/etcd/wal

+ 0 - 511
Documentation/auth_api.md

@@ -1,511 +0,0 @@
-# v2 Auth and Security
-
-## etcd Resources
-There are three types of resources in etcd:
-
-1. permission resources: users and roles in the user store
-2. key-value resources: key-value pairs in the key-value store
-3. settings resources: security settings, auth settings, and dynamic etcd cluster settings (election/heartbeat)
-
-### Permission Resources 
-
-#### Users
-A user is an identity to be authenticated. Each user can have multiple roles. The user has a capability (such as reading or writing) on the resource if one of the roles has that capability.
-
-A user named `root` is required before authentication can be enabled, and it always has the ROOT role. The ROOT role can be granted to multiple users, but `root` is required for recovery purposes.
-
-#### Roles
-Each role has exactly one associated Permission List. A permission list exists for each permission on key-value resources.
-
-The special static ROOT role (named `root`) has full permissions on all key-value resources, as well as the permission to manage user resources and settings resources. Only the ROOT role has the permission to manage user resources and modify settings resources. The ROOT role is built-in and does not need to be created.
-
-There is also a special GUEST role, named `guest`. These are the permissions given to unauthenticated requests to etcd. This role is created automatically and, for backward compatibility, allows access to the full keyspace by default (etcd did not previously authenticate any actions). The role can be modified by a ROOT role holder at any time to reduce the capabilities of unauthenticated users.
-
-#### Permissions
-
-There are two types of permissions, `read` and `write`. All management and settings require the ROOT role.
-
-A Permission List is a list of allowed patterns for that particular permission (read or write). Only ALLOW prefixes are supported. DENY becomes more complicated and is TBD.
-
-### Key-Value Resources
-A key-value resource is a key-value pair in the store. Given a list of matching patterns, permission for any given key in a request is granted if any of the patterns in the list match.
-
-Only prefixes or exact keys are supported. A prefix permission string ends in `*`.
-A permission on `/foo` covers that exact key or directory only, not its children. `/foo*` is a prefix that matches `/foo`, all keys under it, and any key with that prefix (e.g. `/foobar`; contrast with the prefix `/foo/*`). `*` alone is permission on the full keyspace.
-
-### Settings Resources
-
-Specific settings for the cluster as a whole. This can include adding and removing cluster members, enabling or disabling authentication, replacing certificates, and any other dynamic configuration by the administrator (holder of the ROOT role).
-
-## v2 Auth
-
-### Basic Auth
-Only [Basic Auth][basic-auth] is supported for the first version. Clients need to attach the basic auth credentials to the HTTP Authorization header.
-
-### Authorization field for operations
-
-* The `Authorization: Basic {encoded string}` header is added to requests to `/v2/keys` and `/v2/auth`.
-* The status code 401 Unauthorized is added to the set of responses from the v2 API.
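-
-For example, a client such as curl can attach these credentials with `-u` (a sketch; the user, password and key match the example workflow further below):
-
-```
-curl -u root:betterRootPW! http://127.0.0.1:2379/v2/keys/foo
-```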
-
-### Future Work
-Other types of auth can be considered in the future (e.g. signed certs, public keys), but the `Authorization:` header allows for such additional types.
-
-### Things out of Scope for etcd Permissions
-
-* Pluggable AUTH backends like LDAP (other Authorization tokens generated by LDAP et al may be a possibility)
-* Very fine-grained access controls (eg: users modifying keys outside work hours)
-
-
-
-## API endpoints
-
-An Error JSON corresponds to:
-
-    {
-      "name": "ErrErrorName",
-      "description" : "The longer helpful description of the error."
-    }
-
-#### Enable and Disable Authentication
-        
-**Get auth status**
-
-GET  /v2/auth/enable
-
-    Sent Headers:
-    Possible Status Codes:
-        200 OK
-    200 Body:
-        {
-          "enabled": true
-        }
-
-
-**Enable auth**
-
-PUT  /v2/auth/enable
-
-    Sent Headers:
-    Put Body: (empty)
-    Possible Status Codes:
-        200 OK
-        400 Bad Request (if root user has not been created)
-        409 Conflict (already enabled)
-    200 Body: (empty)
-
-**Disable auth**
-
-DELETE  /v2/auth/enable
-
-    Sent Headers:
-        Authorization: Basic <RootAuthString>
-    Possible Status Codes:
-        200 OK
-        401 Unauthorized (if not a root user)
-        409 Conflict (already disabled)
-    200 Body: (empty)
-
-
-#### Users
-
-The User JSON object is formed as follows:
-
-```
-{
-  "user": "userName",
-  "password": "password",
-  "roles": [
-    "role1",
-    "role2"
-  ],
-  "grant": [],
-  "revoke": []
-}
-```
-
-Password is only passed when necessary.
-
-**Get a List of Users**
-
-GET/HEAD  /v2/auth/users
-
-    Sent Headers:
-        Authorization: Basic <BasicAuthString>
-    Possible Status Codes:
-        200 OK
-        401 Unauthorized
-    200 Headers:
-        Content-type: application/json
-    200 Body:
-        {
-          "users": [
-            {
-              "user": "alice",
-              "roles": [
-                {
-                  "role": "root",
-                  "permissions": {
-                    "kv": {
-                      "read": ["*"],
-                      "write": ["*"]
-                    }
-                  }
-                }
-              ]
-            },
-            {
-              "user": "bob",
-              "roles": [
-                {
-                  "role": "guest",
-                  "permissions": {
-                    "kv": {
-                      "read": ["*"],
-                      "write": ["*"]
-                    }
-                  }
-                }
-              ]
-            }
-          ]
-        }
-
-**Get User Details**
-
-GET/HEAD  /v2/auth/users/alice
-
-    Sent Headers:
-        Authorization: Basic <BasicAuthString>
-    Possible Status Codes:
-        200 OK
-        401 Unauthorized
-        404 Not Found
-    200 Headers:
-        Content-type: application/json
-    200 Body:
-        {
-          "user" : "alice",
-          "roles" : [
-            {
-              "role": "fleet",
-              "permissions" : {
-                "kv" : {
-                  "read": [ "/fleet/" ],
-                  "write": [ "/fleet/" ]
-                }
-              }
-            },
-            {
-              "role": "etcd",
-              "permissions" : {
-                "kv" : {
-                  "read": [ "*" ],
-                  "write": [ "*" ]
-                }
-              }
-            }
-          ]
-        }
-
-**Create Or Update A User**
-
-A user can be created with initial roles, if filled in. However, no roles are required; only the username and password fields are needed.
-
-PUT  /v2/auth/users/charlie
-
-    Sent Headers:
-        Authorization: Basic <BasicAuthString>
-    Put Body:
-        JSON struct, above, matching the appropriate name 
-          * Starting password and roles when creating. 
-          * Grant/Revoke/Password filled in when updating (to grant roles, revoke roles, or change the password).
-    Possible Status Codes:
-        200 OK
-        201 Created
-        400 Bad Request
-        401 Unauthorized
-        404 Not Found (update non-existent users)
-        409 Conflict (when granting duplicated roles or revoking non-existent roles)
-    200 Headers:
-        Content-type: application/json
-    200 Body:
-        JSON state of the user
-
-**Remove A User**
-
-DELETE  /v2/auth/users/charlie
-
-    Sent Headers:
-        Authorization: Basic <BasicAuthString>
-    Possible Status Codes:
-        200 OK
-        401 Unauthorized
-        403 Forbidden (remove root user when auth is enabled)
-        404 Not Found
-    200 Headers:
-    200 Body: (empty)
-
-#### Roles
-
-A full role structure may look like this. A Permission List structure is used for the "permissions", "grant", and "revoke" keys.
-```
-{
-  "role" : "fleet",
-  "permissions" : {
-    "kv" : {
-      "read" : [ "/fleet/" ],
-      "write": [ "/fleet/" ]
-    }
-  },
-  "grant" : {"kv": {...}},
-  "revoke": {"kv": {...}}
-}
-```
-
-**Get Role Details**
-
-GET/HEAD  /v2/auth/roles/fleet
-
-    Sent Headers:
-        Authorization: Basic <BasicAuthString>
-    Possible Status Codes:
-        200 OK
-        401 Unauthorized
-        404 Not Found
-    200 Headers:
-        Content-type: application/json
-    200 Body:
-        {
-          "role" : "fleet",
-          "permissions" : {
-            "kv" : {
-              "read": [ "/fleet/" ],
-              "write": [ "/fleet/" ]
-            }
-          }
-        }
-
-**Get a list of Roles**
-
-GET/HEAD  /v2/auth/roles
-
-    Sent Headers:
-        Authorization: Basic <BasicAuthString>
-    Possible Status Codes:
-        200 OK
-        401 Unauthorized
-    200 Headers:
-        Content-type: application/json
-    200 Body:
-        {
-          "roles": [
-            {
-              "role": "fleet",
-              "permissions": {
-                "kv": {
-                  "read": ["/fleet/"],
-                  "write": ["/fleet/"]
-                }
-              }
-            },
-            {
-              "role": "etcd",
-              "permissions": {
-                "kv": {
-                  "read": ["*"],
-                  "write": ["*"]
-                }
-              }
-            },
-            {
-              "role": "quay",
-              "permissions": {
-                "kv": {
-                  "read": ["*"],
-                  "write": ["*"]
-                }
-              }
-            }
-          ]
-        }
-
-**Create Or Update A Role**
-
-PUT  /v2/auth/roles/rkt
-
-    Sent Headers:
-        Authorization: Basic <BasicAuthString>
-    Put Body:
-        Initial desired JSON state, including the role name for verification and:
-          * Starting permission set if creating
-          * Granted/Revoked permission set if updating
-    Possible Status Codes:
-        200 OK
-        201 Created
-        400 Bad Request
-        401 Unauthorized
-        404 Not Found (update non-existent roles)
-        409 Conflict (when granting duplicated permission or revoking non-existent permission)
-    200 Body: 
-        JSON state of the role
-
-**Remove A Role**
-
-DELETE  /v2/auth/roles/rkt
-
-    Sent Headers:
-        Authorization: Basic <BasicAuthString>
-    Possible Status Codes:
-        200 OK
-        401 Unauthorized
-        403 Forbidden (remove root)
-        404 Not Found
-    200 Headers:
-    200 Body: (empty)
-
-
-## Example Workflow
-
-Let's walk through an example to show two tenants (applications, in our case) using etcd permissions.
-
-### Create root role
-
-```
-PUT  /v2/auth/users/root
-    Put Body:
-        {"user" : "root", "password": "betterRootPW!"}
-```
-
-### Enable auth
-
-```
-PUT  /v2/auth/enable
-```
-
-### Modify guest role (revoke write permission)
-
-```
-PUT  /v2/auth/roles/guest
-    Headers:
-        Authorization: Basic <root:betterRootPW!>
-    Put Body:
-        {
-          "role" : "guest",
-          "revoke" : {
-            "kv" : {
-              "write": [
-                "*"
-              ]
-            }
-          }
-        }
-```
-
-
-### Create Roles for the Applications
-
-Create the rkt role fully specified:
-
-```
-PUT /v2/auth/roles/rkt
-    Headers:
-        Authorization: Basic <root:betterRootPW!>
-    Body:
-        {
-          "role" : "rkt",
-          "permissions" : {
-            "kv": {
-              "read": [
-                "/rkt/*"
-              ],
-              "write": [
-                "/rkt/*"
-              ]
-            }
-          }
-        }
-```
-
-But let's make fleet just a basic role for now:
-
-```
-PUT /v2/auth/roles/fleet
-    Headers:
-      Authorization: Basic <root:betterRootPW!>
-    Body:
-        {
-          "role" : "fleet"
-        }
-```
-
-### Optional: Grant some permissions to the roles
-
-Well, we finally figured out where we want fleet to live. Let's fix it.
-(Note that we already specified this up front in the rkt case, so this step is optional.)
-
-
-```
-PUT /v2/auth/roles/fleet
-    Headers:
-        Authorization: Basic <root:betterRootPW!>
-    Put Body:
-        {
-          "role" : "fleet",
-          "grant" : {
-            "kv" : {
-              "read": [
-                "/rkt/fleet",
-                "/fleet/*"
-              ]
-            }
-          }
-        }
-```
-
-### Create Users
-
-Same as before, let's set up rkt all at once and fleet separately.
-
-```
-PUT /v2/auth/users/rktuser
-    Headers:
-        Authorization: Basic <root:betterRootPW!>
-    Body:
-        {"user" : "rktuser", "password" : "rktpw", "roles" : ["rkt"]}
-```
-
-```
-PUT /v2/auth/users/fleetuser
-    Headers:
-        Authorization: Basic <root:betterRootPW!>
-    Body:
-        {"user" : "fleetuser", "password" : "fleetpw"}
-```
-
-### Optional: Grant Roles to Users
-
-Likewise, let's explicitly grant fleetuser access.
-
-```
-PUT /v2/auth/users/fleetuser
-    Headers:
-        Authorization: Basic <root:betterRootPW!>
-    Body:
-        {"user": "fleetuser", "grant": ["fleet"]}
-```
-
-#### Start to use fleetuser and rktuser
-
-
-For example:
-
-```
-PUT /v2/keys/rkt/RktData
-    Headers:
-        Authorization: Basic <rktuser:rktpw>
-    Body:
-        value=launch
-```
-
-Reads and writes outside the prefixes granted will fail with a 401 Unauthorized.
-
-[basic-auth]: https://en.wikipedia.org/wiki/Basic_access_authentication

+ 1 - 1
Documentation/dev-internal/discovery_protocol.md

@@ -109,6 +109,6 @@ You can check the status for this discovery token, including the machines that h
 The repository is located at https://github.com/coreos/discovery.etcd.io. You could use it to build your own public discovery service.
 
 [api]: ../v2/api.md#waiting-for-a-change
-[cluster-size]: admin_guide.md#optimal-cluster-size
+[cluster-size]: ../v2/admin_guide.md#optimal-cluster-size
 [expected-cluster-size]: #specifying-the-expected-cluster-size
 [new-discovery-token]: #creating-a-new-discovery-token

+ 0 - 94
Documentation/docker_guide.md

@@ -1,94 +0,0 @@
-# Running etcd under Docker
-
-The following guide will show you how to run etcd under Docker using the [static bootstrap process](clustering.md#static).
-
-## Running etcd in standalone mode
-
-In order to expose the etcd API to clients outside of the Docker host, you'll need to use the host IP address when configuring etcd.
-
-```
-export HostIP="192.168.12.50"
-```
-
-The following `docker run` command will expose the etcd client API over ports 4001 and 2379, and expose the peer API over port 2380.
-
-This will run the latest release version of etcd. You can specify a version if needed (e.g. `quay.io/coreos/etcd:v2.2.0`).
-
-```
-docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
- --name etcd quay.io/coreos/etcd \
- --name etcd0 \
- --advertise-client-urls http://${HostIP}:2379,http://${HostIP}:4001 \
- --listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
- --initial-advertise-peer-urls http://${HostIP}:2380 \
- --listen-peer-urls http://0.0.0.0:2380 \
- --initial-cluster-token etcd-cluster-1 \
- --initial-cluster etcd0=http://${HostIP}:2380 \
- --initial-cluster-state new
-```
-
-Configure etcd clients to use the Docker host IP and one of the listening ports from above.
-
-```
-etcdctl -C http://192.168.12.50:2379 member list
-```
-
-```
-etcdctl -C http://192.168.12.50:4001 member list
-```
-
-## Running a 3 node etcd cluster
-
-Using Docker to set up a multi-node cluster is very similar to the standalone mode configuration.
-The main difference is the value used for the `--initial-cluster` flag, which must contain the peer URLs for each etcd member in the cluster.
-
-### etcd0
-
-```
-docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
- --name etcd quay.io/coreos/etcd \
- --name etcd0 \
- --advertise-client-urls http://192.168.12.50:2379,http://192.168.12.50:4001 \
- --listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
- --initial-advertise-peer-urls http://192.168.12.50:2380 \
- --listen-peer-urls http://0.0.0.0:2380 \
- --initial-cluster-token etcd-cluster-1 \
- --initial-cluster etcd0=http://192.168.12.50:2380,etcd1=http://192.168.12.51:2380,etcd2=http://192.168.12.52:2380 \
- --initial-cluster-state new
-```
-
-### etcd1
-
-```
-docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
- --name etcd quay.io/coreos/etcd \
- --name etcd1 \
- --advertise-client-urls http://192.168.12.51:2379,http://192.168.12.51:4001 \
- --listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
- --initial-advertise-peer-urls http://192.168.12.51:2380 \
- --listen-peer-urls http://0.0.0.0:2380 \
- --initial-cluster-token etcd-cluster-1 \
- --initial-cluster etcd0=http://192.168.12.50:2380,etcd1=http://192.168.12.51:2380,etcd2=http://192.168.12.52:2380 \
- --initial-cluster-state new
-```
-
-### etcd2
-
-```
-docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
- --name etcd quay.io/coreos/etcd \
- --name etcd2 \
- --advertise-client-urls http://192.168.12.52:2379,http://192.168.12.52:4001 \
- --listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
- --initial-advertise-peer-urls http://192.168.12.52:2380 \
- --listen-peer-urls http://0.0.0.0:2380 \
- --initial-cluster-token etcd-cluster-1 \
- --initial-cluster etcd0=http://192.168.12.50:2380,etcd1=http://192.168.12.51:2380,etcd2=http://192.168.12.52:2380 \
- --initial-cluster-state new
-```
-
-Once the cluster has been bootstrapped, etcd clients can be configured with a list of etcd members:
-
-```
-etcdctl -C http://192.168.12.50:2379,http://192.168.12.51:2379,http://192.168.12.52:2379 member list
-```

+ 0 - 84
Documentation/faq.md

@@ -1,84 +0,0 @@
-# FAQ
-
-## 1) How come I can read an old version of the data when a majority of the members are down?  
-
-In situations where a client connects to a minority, etcd by default favors
-availability over consistency. This means that even though data might be “out
-of date”, it is still better to return something than nothing.
-
-In order to confirm that a read is up to date with a majority of the cluster,
-the client can use the `quorum=true` parameter on reads of keys. This means
-that a majority of the cluster is checked on reads before returning the data,
-otherwise the read will time out and fail.
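-
-For example, a quorum read is issued by adding `quorum=true` to a GET (a sketch against a local member; the key name is a placeholder):
-
-```sh
-curl 'http://127.0.0.1:2379/v2/keys/foo?quorum=true'
-```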
-
-## 2) With quorum=false, doesn’t this mean that if my client switched the member it was connected to, it could experience a logical ordering where the cluster goes backwards in time?
-
-Yes, but this could be handled at the etcd client implementation via
-remembering the last seen index. The “index” is the cluster's single
-irrevocable sequence of the entire modification history. The client could
-remember the last seen index, and determine via comparing the index returned on
-the GET whether or not the state of the key-value pair is before or after its
-last seen state. 
-
-## 3) What happens if a watch is registered on a minority member? 
-
-The watch will stay untriggered, even as modifications are occurring in the
-majority quorum. This is an open issue, and is being addressed in v3. There are
-multiple ways to work around the watch trigger not firing. 
-
-1) build a signaling mechanism independent of etcd. This could be as simple as
-a “pulse” to the client to reissue a GET with quorum=true for the most recent
-version of the data. 
-   
-2) poll on the `/v2/keys` endpoint and check that the raft index is increasing within each
-timeout period (see the sketch below).
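-
-A sketch of option 2, assuming the client inspects the `X-Raft-Index` response header (the key name is a placeholder):
-
-```sh
-curl -si http://127.0.0.1:2379/v2/keys/foo | grep -i 'X-Raft-Index'
-```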
-
-## 4) What is a proxy used for? 
-
-A proxy is a redirection server to the etcd cluster. The proxy handles the
-redirection of a client to the current configuration of the etcd cluster. A
-typical use case is to start a proxy on a machine, and on first boot up of the
-proxy specify both the `--proxy` flag and the `--initial-cluster` flag. 
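-
-A sketch of starting such a proxy (the URLs are placeholders):
-
-```sh
-etcd --proxy on \
-  --listen-client-urls http://127.0.0.1:2379 \
-  --initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380
-```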
-
-From there, any etcdctl client that starts up automatically speaks to the local
-proxy and the proxy redirects operations to the current configuration of the
-cluster it was originally paired with. 
-
-In the v2 spec of etcd, proxies cannot be promoted to members of the cluster.
-They also cannot be promoted to followers or at any point become part of the
-replication of the etcd cluster itself. 
-
-## 5) How is cluster membership and health handled in etcd v2? 
-
-The design goal of etcd is that reconfiguration is simply an API, and health
-monitoring and addition/removal of members is up to the individual application
-and their integration with the reconfiguration API. 
-
-Thus, a member that is down, even indefinitely, will never be automatically
-removed from the etcd cluster member list. 
-
-This makes sense because it's usually an application level / administrative
-action to determine whether a reconfiguration should happen based on health. 
-
-For more information, refer to the [runtime reconfiguration design document][runtime-reconf-design].
-
-## 6) How does --endpoint work with etcdctl?
-
-The `--endpoint` flag can specify any number of etcd cluster members in a comma
-separated list. This list might be a subset, equal to, or more than the actual
-etcd cluster member list itself. 
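-
-For example (a sketch; the URLs are placeholders):
-
-```sh
-etcdctl --endpoint http://10.0.1.10:2379,http://10.0.1.11:2379 cluster-health
-```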
-
-If only one peer is specified via the `--endpoint` flag, etcdctl discovers the
-rest of the cluster via the member list of that one peer, and then it randomly
-chooses a member to use.  Again, the client can use the `quorum=true` flag on
-reads, which will always fail when using a member in the minority. 
-
-If peers from multiple clusters are specified via the `--endpoint` flag, etcdctl
-will randomly choose a peer, and the request will simply get routed to one of
-the clusters. This is probably not what you want. 
-
-Note: The `--peers` flag is now deprecated and `--endpoint` should be used instead,
-as the old name might confuse users into giving etcdctl a peer URL.
-
-[runtime-reconf-design]: runtime-reconf-design.md

+ 0 - 65
Documentation/implementation-faq.md

@@ -1,65 +0,0 @@
-# FAQ
-
-## Initial Bootstrapping UX
-
-etcd initial bootstrapping is done via command line flags such as
-`--initial-cluster` or `--discovery`. These flags can safely be left on the
-command line after your cluster is running but they will be ignored if you have
-a non-empty data dir. So, why did we decide to have this sort of odd UX?
-
-One of the design goals of etcd is easy bringup of clusters using a one-shot
-static configuration like AWS Cloud Formation, PXE booting, etc. Essentially we
-want to describe several virtual machines and bring them all up at once into an
-etcd cluster.
-
-To achieve this sort of hands-free cluster bootstrap we had two other options:
-
-**API to bootstrap**
-
-This is problematic because it cannot be coordinated from a single service file
-and we didn't want to have the etcd socket listening but unresponsive to
-clients for an unbounded period of time.
-
-It would look something like this:
-
-```
-ExecStart=/usr/bin/etcd
-ExecStartPost=/usr/bin/etcd init localhost:2379 --cluster=
-```
-
-**etcd init subcommand**
-
-```
-etcd init --cluster='default=http://localhost:2380,default=http://localhost:7001'...
-etcd init --discovery https://discovery-example.etcd.io/193e4
-```
-
-Then after running an init step you would execute `etcd`. This however
-introduced problems: we now have to define a hand-off protocol between the etcd
-init process and the etcd binary itself. This is hard to coordinate in a single
-service file such as:
-
-```
-ExecStartPre=/usr/bin/etcd init --cluster=....
-ExecStart=/usr/bin/etcd
-```
-
-There are several error cases:
-
-0) Init has already run and the data directory is already configured
-1) Discovery fails because of network timeout, etc
-2) Discovery fails because the cluster is already full and etcd needs to fall back to proxy
-3) Static cluster configuration fails because of conflict, misconfiguration or timeout
-
-In hindsight we could have made this work by doing:
-
-```
-rc	status
-0	Init already ran
-1	Discovery fails on network timeout, etc
-0	Discovery fails for cluster full, coordinate via proxy state file
-1	Static cluster configuration failed
-```
-
-Perhaps we can add the init command in a future version and deprecate if the UX
-continues to confuse people.

+ 0 - 61
Documentation/internal-protocol-versioning.md

@@ -1,61 +0,0 @@
-# Versioning
-
-Goal: We want to be able to upgrade an individual peer in an etcd cluster to a newer version of etcd.
-The process will take the form of individual followers upgrading to the latest version until the entire cluster is on the new version.
-
-Immediate need: etcd is moving too fast to version the internal API right now.
-But, we need to keep mixed version clusters from being started by a rolling upgrade process (e.g. the CoreOS developer alpha).
-
-Longer term need: Having a mixed version cluster where all peers are not running the exact same version of etcd itself but are able to speak one version of the internal protocol.
-
-Solution: The internal protocol needs to be versioned just as the client protocol is.
-Initially during the 0.\*.\* series of etcd releases we won't allow mixed versions at all.
-
-## Join Control
-
-We will add a version field to the join command.
-But, who decides whether a newly upgraded follower should be able to join a cluster?
-
-### Leader Controlled
-
-If the leader controls the version of followers joining the cluster, then it compares its version to the version number presented by the follower in the JoinCommand and rejects the join if that number is less than the leader's version number.
-
-Advantages
-
-- Leader controls all cluster decisions still
-
-Disadvantages
-
-- Follower knows better what versions of the internal protocol it can talk than the leader
-
-
-### Follower Controlled
-
-A newly upgraded follower should be able to figure out the leader's internal version from a defined internal backwards compatible API endpoint and figure out if it can join the cluster.
-If it cannot join the cluster then it simply exits.
-
-Advantages
-
-- The follower is running newer code and knows better if it can talk older protocols
-
-Disadvantages
-
-- This cluster decision isn't made by the leader
-
-## Recommendation
-
-To solve the immediate need and to plan for the future, let's do the following:
-
-- Add Version field to JoinCommand
-- Have a joining follower read the Version field of the leader; if its own version doesn't match the leader's, sleep for some random interval and retry later to see if the leader has upgraded.
-
-# Research
-
-## Zookeeper versioning
-
-Zookeeper very recently added versioning into the protocol and it doesn't seem to have seen any use yet.
-https://issues.apache.org/jira/browse/ZOOKEEPER-1633
-
-## doozerd
-
-doozerd stores the version number of the peers in the datastore for other clients to check; no decisions are currently made based on this number.

+ 1 - 1
Documentation/op-guide/configuration.md

@@ -268,7 +268,7 @@ Follow the instructions when using these flags.
 [iana-ports]: https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=etcd
 [proxy]: ../v2/proxy.md
 [reconfig]: runtime-configuration.md
-[restore]: admin_guide.md#restoring-a-backup
+[restore]: ../v2/admin_guide.md#restoring-a-backup
 [rfc-v3]: rfc/v3api.md
 [security]: security.md
 [systemd-intro]: http://freedesktop.org/wiki/Software/systemd/

+ 3 - 3
Documentation/op-guide/runtime-configuration.md

@@ -177,10 +177,10 @@ It is recommended to enable this option. However, it is disabled by default beca
 [cluster-reconf]: #cluster-reconfiguration-operations
 [conf-adv-peer]: configuration.md#-initial-advertise-peer-urls
 [conf-name]: configuration.md#-name
-[disaster recovery]: admin_guide.md#disaster-recovery
-[fault tolerance table]: admin_guide.md#fault-tolerance-table
+[disaster recovery]: recovery.md
+[fault tolerance table]: ../v2/admin_guide.md#fault-tolerance-table
 [majority failure]: #restart-cluster-from-majority-failure
 [member-api]: ../v2/members_api.md
-[member migration]: admin_guide.md#member-migration
+[member migration]: ../v2/admin_guide.md#member-migration
 [remove member]: #remove-a-member
 [runtime-reconf]: runtime-reconf-design.md

+ 1 - 1
Documentation/op-guide/runtime-reconf-design.md

@@ -47,4 +47,4 @@ It seems that using public discovery service is a convenient way to do runtime r
 If you want to have a discovery service that supports runtime reconfiguration, the best choice is to build your private one.
 
 [add-member]: runtime-configuration.md#add-a-new-member
-[disaster-recovery]: admin_guide.md#disaster-recovery
+[disaster-recovery]: recovery.md

+ 1 - 1
etcdserver/membership/cluster.go

@@ -381,7 +381,7 @@ func (c *RaftCluster) IsReadyToAddNewMember() bool {
 
 	if nstarted == 1 && nmembers == 2 {
 		// a case of adding a new node to 1-member cluster for restoring cluster data
-		// https://github.com/coreos/etcd/blob/master/Documentation/admin_guide.md#restoring-the-cluster
+		// https://github.com/coreos/etcd/blob/master/Documentation/v2/admin_guide.md#restoring-the-cluster
 
 		plog.Debugf("The number of started member is 1. This cluster can accept add member request.")
 		return true