yubin.byb 10 years ago
parent
commit
0c45d5c830
14 changed files with 1853 additions and 957 deletions
  1. 44 244
      doc/上传文件.md
  2. 65 107
      doc/下载文件.md
  3. 1 1
      oss/bucket_test.go
  4. 395 0
      oss/download.go
  5. 360 0
      oss/download_test.go
  6. 0 88
      oss/multipart.go
  7. 53 21
      oss/option.go
  8. 433 0
      oss/upload.go
  9. 455 0
      oss/upload_test.go
  10. 0 3
      sample.go
  11. 25 26
      sample/get_object.go
  12. 0 223
      sample/multipart_copy.go
  13. 0 230
      sample/multipart_upload.go
  14. 22 14
      sample/put_object.go

+ 44 - 244
doc/上传文件.md

@@ -1,6 +1,9 @@
 # Upload Files
-In OSS, the basic data unit users operate on is the file (Object).
-The maximum allowed size of a single file depends on the upload method: at most 5GB with Put Object, and at most 48TB with multipart upload.
+The OSS Go SDK provides a rich set of file-upload interfaces. You can upload files to OSS in the following ways:
+
+- Simple upload with PutObject, suitable for small files
+- Resumable upload with UploadFile, suitable for large files
+- Append upload with AppendObject
 
 ## Simple Upload
 
@@ -12,7 +15,7 @@
 > 
 > - Sample code for simple upload is in `sample/put_object.go`.
 
-#### String (string) Upload
+#### String Upload
 ```go
     import "strings"
     import "github.com/aliyun/aliyun-oss-go-sdk/oss"
@@ -54,7 +57,7 @@
     }
 ```
 
-#### Local File Upload
+#### File Upload
 ```go
     import "os"
     import "github.com/aliyun/aliyun-oss-go-sdk/oss"
@@ -109,9 +112,9 @@
 > - The maximum file size with the methods above is 5GB. For larger files, use multipart upload.
 
 
-#### Specifying Metadata on Upload
+### Specifying Metadata on Upload
 
-When uploading a file from a data stream, you can specify one or more pieces of metadata for the file (Object). Metadata names are case-insensitive; for example, if you define a meta named "name" when uploading, reading it with Bucket.GetObjectDetailedMeta gives "X-Oss-Meta-Name", so ignore case when comparing/reading.
+When uploading a file from a data stream, you can specify one or more pieces of metadata for the file. Metadata names are case-insensitive; for example, if you define a meta named "name" when uploading, reading it with Bucket.GetObjectDetailedMeta gives "X-Oss-Meta-Name", so ignore case when comparing/reading.
 
 The metadata that can be specified is as follows:
 
@@ -155,10 +158,10 @@
 ```
 
 > Tip:
-> - Bucket.PutObject and Bucket.PutObjectFromFile both support specifying metadata on upload.
+> - Bucket.PutObject, Bucket.PutObjectFromFile, and Bucket.UploadFile all support specifying metadata on upload.
 
 
-## Creating a Simulated Folder
+### Creating a Simulated Folder
 
 OSS has no concept of folders; all elements are stored as files. It does, however, provide a way to create simulated folders, as in the following code:
 ```go
@@ -246,266 +249,63 @@ OSS supports appendable file types; use `Bucket.AppendObject` to upload appendable
 ```
 
 ## Multipart Upload
-Besides uploading files to OSS through the PutObject interface, OSS provides another upload mode: Multipart Upload.
-You can use the Multipart Upload mode in application scenarios such as (but not limited to) the following:
-
-- Resumable upload is needed.
-- The file to upload is larger than 100MB.
-- The network is poor and the connection to the OSS server drops frequently.
-- The size of the file is unknown before the upload starts.
-
-### Wrapped Multipart Upload
-It works by splitting the file into parts and uploading them; once all parts have succeeded, the whole file's upload completes.
-So you need to specify the part size, in bytes. The minimum part size is 100KB and the maximum is 5GB; choose a size appropriate for your network conditions.
-```go
-    import "github.com/aliyun/aliyun-oss-go-sdk/oss"
-    
-    client, err := oss.New("Endpoint", "AccessKeyId", "AccessKeySecret")
-    if err != nil {
-        // HandleError(err)
-    }
-
-    bucket, err := client.Bucket("my-bucket")
-    if err != nil {
-        // HandleError(err)
-    }
-    
-    // Part size is 1MB
-    err = bucket.UploadFile("my-object", "LocalFile", 1024 * 1024)
-    if err != nil {
-        // HandleError(err)
-    }
-```
-
-> Tip:
-> 
-> - When doing multipart upload with Bucket.UploadFile, you can specify the file's (Object's) metadata.
->
-```go
-    imur, err := bucket.UploadFile("my-object", "LocalFile", 1024 * 1024, oss.Meta("MyProp", "MyPropVal"))
-    if err != nil {
-            // HandleError(err)
-    }
-```
->
-
-### Step-by-Step MultipartUpload
-
-The general flow of a multipart upload (MultipartUpload) is:
-
-1. Initiate a multipart upload task (InitiateMultipartUpload)
-2. Upload the parts one by one or in parallel (UploadPart)
-3. Complete the upload (CompleteMultipartUpload)
 
-> Tip:
-> 
-> - Sample code for multipart upload is in `samples/multipart_upload.go`
-
-#### Multipart Upload by Part Count/Part Size
-There is a large local file, bigfile.zip; split it into 10 parts and upload it to OSS.
-```go
-    import "github.com/aliyun/aliyun-oss-go-sdk/oss"
-    
-    client, err := oss.New("Endpoint", "AccessKeyId", "AccessKeySecret")
-    if err != nil {
-        // HandleError(err)
-    }
-
-    bucket, err := client.Bucket("my-bucket")
-    if err != nil {
-        // HandleError(err)
-    }
-    
-    // Split the large file into the specified number of parts
-    chunks, err := oss.SplitFileByPartNum("bigfile.zip", 10)
-    if err != nil {
-        // HandleError(err)
-    }
+When uploading a large file, if the network is unstable or the program
+crashes, the whole upload fails and has to be restarted from scratch. This
+wastes resources, and on an unstable network the upload often cannot complete
+even after many retries.
+Resumable upload is implemented through the `Bucket.UploadFile` interface. It takes the following parameters:
+
+- objectKey  name of the Object to upload to OSS
+- filePath   path of the local file to upload
+- partSize   part size for the multipart upload, from 100KB to 5GB, in bytes
+- options    optional items, mainly:
+  * Routines   number of concurrent part uploads; the default is 1, i.e. no concurrent upload
+  * Checkpoint whether to enable resumable upload, and the path of the checkpoint file. Resumable upload is off by default. The checkpoint file path may be left empty, in which case it defaults to `file.cp` in the same
+    directory as the local file, where `file` is the local file's name
+  * other metadata; see `Specifying Metadata on Upload`
+
+It works by splitting the file into parts and uploading them separately; once
+all parts have been uploaded, the whole file's upload completes. During the
+upload, the current progress is recorded (in the checkpoint file); if a part
+fails, the next upload resumes from the point recorded in the checkpoint
+file. This requires passing the same checkpoint file as last time. When the
+upload completes, the checkpoint file is deleted.
 
-    // Initiate the multipart upload task
-    imur, err := bucket.InitiateMultipartUpload("my-object")
-    if err != nil {
-        // HandleError(err)
-    }
-    
-    // Upload the parts
-    parts := []oss.UploadPart{}
-    for _, chunk := range chunks {
-        part, err := bucket.UploadPartFromFile(imur, "bigfile.zip", chunk.Offset,
-            chunk.Size, chunk.Number)
-        if err != nil {
-            // HandleError(err)
-        }
-        parts = append(parts, part)
-    }
-
-    // Complete the upload
-    _, err = bucket.CompleteMultipartUpload(imur, parts)
-    if err != nil {
-        // HandleError(err)
-    }
-```
-
-> Note:
-> 
-> - The core of the program above is calling UploadPart to upload each part; keep the following in mind:
-> - UploadPart requires every part except the last to be larger than 100KB. However, the Upload Part interface does not validate the uploaded
part's size immediately (it cannot know whether a part is the last one); validation happens only at Complete Multipart Upload.
-> - OSS puts the MD5 of the received part data in the ETag header returned to the user.
-> - To guard against errors during network transfer, the SDK sets Content-MD5 automatically; OSS compares the MD5 of the uploaded data with the SDK's value and,
if they differ, returns the InvalidDigest error code.
-> - Part numbers range from 1 to 10000; outside this range, OSS returns the InvalidArgument error code.
-> - Before uploading each part, seek the stream to the position where that part begins.
-> - Each part upload returns a PartETag object, the combination of the part's ETag and its part number (PartNumber);
-> - it is needed in the later step that completes the multipart upload, so save it. Typically these PartETag objects are kept in a list.
-    
-    
 > Tip:
-> 
-> - You can also call oss.SplitFileByPartSize to split the file by size and then upload the parts. E.g., bigfile.zip in 1MB parts:
->
-```go
-    chunks, err := oss.SplitFileByPartSize("bigfile.zip", 1024*1024)
-    if err != nil {
-        // HandleError(err)
-    }
-```
->
-> - When initiating the upload task, you can specify the file's (Object's) metadata.
->
-```go
-    imur, err := bucket.InitiateMultipartUpload("my-object", oss.Meta("MyProp", "MyPropVal"))
-    if err != nil {
-        // HandleError(err)
-    }
-```
->
-> - After initiating the upload task, or after uploading some of the parts, you can cancel the multipart upload task with AbortMultipartUpload if needed; its parameter imur is the return value of InitiateMultipartUpload.
->
-```go
-    err = bucket.AbortMultipartUpload(imur)
-    if err != nil {
-        // HandleError(err)
-    }
-```
+> - Sample code for multipart upload is in `sample/put_object.go`.
 >
 
-#### Uploading Parts in Groups
-The multipart upload above requires knowing the file size in advance, but in some cases the size cannot be known ahead of time; then you can upload parts one at a time by specifying each part's offset and size.
 ```go
-    import "os"
     import "github.com/aliyun/aliyun-oss-go-sdk/oss"
     
     client, err := oss.New("Endpoint", "AccessKeyId", "AccessKeySecret")
     if err != nil {
         // HandleError(err)
     }
-
-    bucket, err := client.Bucket("my-bucket")
-    if err != nil {
-        // HandleError(err)
-    }
-    
-    // Upload the first 3 parts of the file; part size is 1MB
-    chunks := []oss.FileChunk {
-        {Number: 1, Offset: 0 * 1024 * 1024 , Size: 1024 * 1024},
-        {Number: 2, Offset: 1 * 1024 * 1024 , Size: 1024 * 1024},
-        {Number: 3, Offset: 2 * 1024 * 1024 , Size: 1024 * 1024},
-    }
-    
-    fd, err := os.Open("bigfile.zip")
-    if err != nil {
-        // HandleError(err)
-    }
-    defer fd.Close()
-    
-    // Initiate the multipart upload task
-    imur, err := bucket.InitiateMultipartUpload("my-object")
-    if err != nil {
-        // HandleError(err)
-    }
-    
-    // Upload the first 3 parts
-    parts := []oss.UploadPart{}
-    for _, chunk := range chunks {
-        fd.Seek(chunk.Offset, os.SEEK_SET)
-        part, err := bucket.UploadPart(imur, fd, chunk.Size, chunk.Number)
-        if err != nil {
-            // HandleError(err)
-        }
-        parts = append(parts, part)
-    }
-    
-    // Upload the later parts after the file has grown
-    // ... ...
-    
-    // Once all parts are uploaded, complete the upload
-    _, err = bucket.CompleteMultipartUpload(imur, parts)
-    if err != nil {
-        // HandleError(err)
-    }
-```   
-
-#### Uploading Parts Concurrently
-Parts of a multipart upload can be uploaded concurrently, or by different processes or even different machines.
-```go
-    import (
-        "sync"
-        "github.com/aliyun/aliyun-oss-go-sdk/oss"
-    )
     
-	client, err := oss.New("Endpoint", "AccessKeyId", "AccessKeySecret")
-    if err != nil {
-        // HandleError(err)
-    }
-
     bucket, err := client.Bucket("my-bucket")
     if err != nil {
         // HandleError(err)
     }
     
-    // Split the file into 10 parts
-    partNum := 10
-	chunks, err := oss.SplitFileByPartNum("LocalFile", partNum)
-	if err != nil {
-		// HandleError(err)
-	}
-
-    // Initiate the upload task
-	imur, err := bucket.InitiateMultipartUpload("my-object")
+    // Part size 100KB, 3 goroutines uploading parts concurrently, resumable upload enabled
+	err = bucket.UploadFile("my-object", "LocalFile", 100*1024, oss.Routines(3), oss.Checkpoint(true, ""))
 	if err != nil {
 		// HandleError(err)
 	}
+```
 
-	// Upload parts concurrently
-	var waitgroup sync.WaitGroup
-	var parts = make([]oss.UploadPart, partNum)
-	for _, chunk := range chunks {
-		waitgroup.Add(1)
-		go func(chunk oss.FileChunk) {
-			part, err := bucket.UploadPartFromFile(imur, "LocalFile", chunk.Offset,
-				chunk.Size, chunk.Number)
-			if err != nil {
-				// HandleError(err)
-			}
-			parts[chunk.Number - 1] = part
-			waitgroup.Done()
-		}(chunk)
-	}
-	
-	// Wait for the part uploads to finish
-	waitgroup.Wait()
+> Note:
+> - The SDK records the upload's intermediate state in the checkpoint file, so make sure the user has write permission to it
+> - The checkpoint file records the intermediate state and carries its own checksum; do not edit it. If
+>   the checkpoint file is corrupted, all parts are re-uploaded. The checkpoint file is deleted once the whole upload completes.
+> - If the local file changes during the upload, all parts are re-uploaded
 
-	// Commit the upload task
-	_, err = bucket.CompleteMultipartUpload(imur, parts)
-	if err != nil {
-		// HandleError(err)
-	}
-```
+> Tip:
+> - To specify the checkpoint file path for resumable upload, use `oss.Checkpoint(true, "your-cp-file.cp")`
+> - With `bucket.UploadFile(objectKey, localFile, 100*1024)`, the defaults apply: no concurrent part upload and no resumable upload
 
 
-### Getting All Uploaded Part Info
-You can use Bucket.ListUploadedParts to get all parts already uploaded in an upload event.
+### Getting Information About All Uploaded Parts
+You can use Bucket.ListUploadedParts to get the parts already uploaded in a multipart upload.
 ```go 
     import "fmt"
     import "github.com/aliyun/aliyun-oss-go-sdk/oss"

+ 65 - 107
doc/下载文件.md

@@ -1,19 +1,19 @@
 # Download Files (Object)
 
-The OSS Go SDK provides a rich set of file-download interfaces. You can download files (Objects) from OSS in the following ways
+The OSS Go SDK provides a rich set of file-download interfaces. You can download files from OSS in the following ways:
 
 - Download to a data stream io.ReadCloser
 - Download to a local file
-- Segmented download
+- Multipart download
 
-## Simple Download
+## Download to a Data Stream io.ReadCloser
 
 > Tip:
 > 
-> - Sample code for simple download is in `sample/get_object.go`.
+> - Sample code for download is in `sample/get_object.go`.
 > 
 
-### Downloading a File to a Data Stream io.ReadCloser
+### Downloading a File to a Data Stream
 ```go
     import (
         "fmt"
@@ -108,7 +108,7 @@ The OSS Go SDK provides a rich set of file-download interfaces. You can download files from OSS in the following ways
 	io.Copy(fd, body)
 ```
 
-### Downloading a File (Object) to a Local File
+## Download to a Local File
 ```go
     import "github.com/aliyun/aliyun-oss-go-sdk/oss"
     
@@ -128,7 +128,62 @@ The OSS Go SDK provides a rich set of file-download interfaces. You can download files from OSS in the following ways
     }
 ```
 
-### Specifying Conditions When Downloading
+## Multipart Download
+When downloading a large file, if the network is unstable or the program
+crashes, the whole download fails and has to be restarted from scratch. This
+wastes resources, and on an unstable network the download often cannot
+complete even after many retries.
+Resumable download is implemented through the `Bucket.DownloadFile` interface. It takes the following parameters:
+
+- objectKey  name of the Object to download
+- filePath   path of the local file to download to
+- partSize   download part size, from 1B to 5GB, in bytes
+- options    optional items, mainly:
+  * Routines   number of concurrent part downloads; the default is 1, i.e. no concurrent download
+  * Checkpoint whether to enable resumable download, and the path of the checkpoint file. Resumable download is off by default. The checkpoint file path may be left empty, in which case it defaults to `file.cpt` in the same
+    directory as the local file, where `file` is the local file's name
+  * conditions on the download; see `Downloading with Specified Conditions`
+
+It works by splitting the Object into parts and downloading them separately;
+once all parts have been downloaded, the whole file's download completes.
+During the download, the current progress and the parts already downloaded
+are recorded (in the checkpoint file); if a part fails, the next download
+resumes from the point recorded in the checkpoint file. This requires passing
+the same checkpoint file as last time. When the download completes, the
+checkpoint file is deleted.
+
+```go
+    import "github.com/aliyun/aliyun-oss-go-sdk/oss"
+    
+    client, err := oss.New("Endpoint", "AccessKeyId", "AccessKeySecret")
+    if err != nil {
+        // HandleError(err)
+    }
+
+    bucket, err := client.Bucket("my-bucket")
+    if err != nil {
+        // HandleError(err)
+    }
+    
+    err = bucket.DownloadFile("my-object", "LocalFile", 100*1024, oss.Routines(3), oss.Checkpoint(true, ""))
+    if err != nil {
+        // HandleError(err)
+    }
+```
+
+> Note:
+> - The SDK records the download's intermediate state in the checkpoint file, so make sure the user has write
+>   permission to the checkpoint file
+> - The checkpoint file records the intermediate state and carries its own checksum; do not edit it. If
+>   the checkpoint file is corrupted, the whole file is downloaded again
+> - If the Object being downloaded changes during the download (its ETag changes), or a part file is lost
+>   or modified, the whole file is downloaded again
+
+
+> Tip:
+> - To specify the checkpoint file path for resumable download, use `oss.Checkpoint(true, "your-cp-file.cp")`
+> - With `bucket.DownloadFile(objectKey, localFile, 100*1024)`, the defaults apply: no concurrent part download and no resumable download
+>
+
+
+## Downloading with Specified Conditions
 
 When downloading a file, you can specify one or more conditions; the file is downloaded only when all conditions are met, otherwise an error is returned and the file is not downloaded.
 The conditions that can be used are as follows:
@@ -172,11 +227,11 @@ The OSS Go SDK provides a rich set of file-download interfaces. You can download files from OSS in the following ways
 > Tip:
 > 
 > - The value of ETag can be obtained with Bucket.GetObjectDetailedMeta.
-> - Bucket.GetObject and Bucket.GetObjectToFile support conditions.
+> - Bucket.GetObject, Bucket.GetObjectToFile, and Bucket.DownloadFile support conditions.
 >
 
-### Compressed Download
-Files can be downloaded compressed; currently GZIP compression is supported.
+## Compressed Download
+Files can be downloaded compressed; currently GZIP compression is supported. Bucket.GetObject and Bucket.GetObjectToFile support compression.
 ```go
     import "github.com/aliyun/aliyun-oss-go-sdk/oss"
     
@@ -195,100 +250,3 @@ The OSS Go SDK provides a rich set of file-download interfaces. You can download files from OSS in the following ways
         // HandleError(err)
     }
 ```
-
-## Segmented Download
-When downloading a large file, if the network is unstable or the program crashes, the whole
-download fails and has to be restarted from scratch. This wastes resources, and on an unstable
-network the download often cannot complete even after many retries. Segmented download splits the file into many small segments and downloads them one by one.
-
-### Wrapped Segmented Download
-Call Bucket.DownloadFile to perform a segmented download. It splits a large file into segments and downloads each one. It takes the following parameters:
-
-- "my-object" name of the Object to download.
-- filePath path of the local file to download to.
-- partSize segment size, in bytes; minimum 1B, maximum 5GB.
-
-```go
-    import "github.com/aliyun/aliyun-oss-go-sdk/oss"
-    
-    client, err := oss.New("Endpoint", "AccessKeyId", "AccessKeySecret")
-    if err != nil {
-        // HandleError(err)
-    }
-    
-    bucket, err := client.Bucket("my-bucket")
-    if err != nil {
-        // HandleError(err)
-    }
-    
-    // Segment size is 1MB
-    err = bucket.DownloadFile("my-object", "LocalFile", 1024 * 1024)
-    if err != nil {
-        // HandleError(err)
-    }
-```
-
-> Tip:
-> 
-> - Bucket.DownloadFile supports the IfModifiedSince, IfUnmodifiedSince, IfMatch, and IfNoneMatch conditions.
->
-
-### Segmented Download with GetObject
-Bucket.GetObject/Bucket.GetObjectToFile support the Range option to specify the range of the file (Object) to download. For example, setting Range to bytes=0-9 transfers the 10 bytes from position 0 through 9; if the range does not conform to the specification, the entire content is transferred. The Range parameter thus enables segmented download.
-
-> Tip:
-> 
-> - Sample code for segmented download is in `sample/get_object.go`.
-
-```go
-    import (
-        "io"
-        "os"
-        "strconv"
-        "github.com/aliyun/aliyun-oss-go-sdk/oss"
-    )
-    
-    client, err := oss.New("Endpoint", "AccessKeyId", "AccessKeySecret")
-    if err != nil {
-        // HandleError(err)
-    }
-    
-    bucket, err := client.Bucket("my-bucket")
-    if err != nil {
-        // HandleError(err)
-    }
-    
-    // Get the file (Object) length
-    meta, err := bucket.GetObjectDetailedMeta("my-object")
-    if err != nil {
-        // HandleError(err)  
-    }
-    
-    // Segment size is 1MB
-    var partSize int64 = 1024 * 1024
-    objectSize, err := strconv.ParseInt(meta.Get(oss.HTTPHeaderContentLength), 10, 0)
-    
-    fd, err := os.OpenFile("LocalFile", os.O_WRONLY|os.O_CREATE, 0660)
-    if err != nil {
-        // HandleError(err)  
-    }
-    defer fd.Close()
-    
-    // Segmented download
-    for i := int64(0); i < objectSize; i += partSize {
-        option := oss.Range(i, oss.GetPartEnd(i, objectSize, partSize))
-		body, err := bucket.GetObject("my-object", option)
-		if err != nil {
-			// HandleError(err)
-		}
-		io.Copy(fd, body)
-		body.Close()
-    }
-```
-
-> Tip:
-> 
-> - With the Range parameter, you can download a large file concurrently.
-> - Segment size is in bytes; minimum 1B, maximum 5GB.
-> - When downloading with GetObject, LocalFile must be created and opened by the user.
-> - GetObjectToFile appends on every write, so it supports segmented download.

+ 1 - 1
oss/bucket_test.go

@@ -196,6 +196,7 @@ func (s *OssBucketSuite) TestPutObjectType(c *C) {
 	c.Assert(err, IsNil)
 
 	// Check
+	time.Sleep(time.Second)
 	body, err := s.bucket.GetObject(objectName)
 	c.Assert(err, IsNil)
 	str, err := readBody(body)
@@ -926,7 +927,6 @@ func (s *OssBucketSuite) TestGetObjectDetailedMeta(c *C) {
 	c.Assert(len(meta.Get("Last-Modified")) > 0, Equals, true)
 
 	// IfModifiedSince/IfUnmodifiedSince
-	time.Sleep(time.Second * 3)
 	_, err = s.bucket.GetObjectDetailedMeta(objectName, IfModifiedSince(futureDate))
 	c.Assert(err, NotNil)
 

+ 395 - 0
oss/download.go

@@ -0,0 +1,395 @@
+package oss
+
+import (
+	"crypto/md5"
+	"encoding/base64"
+	"encoding/json"
+	"errors"
+	"io"
+	"io/ioutil"
+	"os"
+	"strconv"
+	"sync"
+	"time"
+)
+
+//
+// DownloadFile downloads a file in parts, suitable for large Objects.
+//
+// objectKey  object key.
+// filePath   local file; objectKey is downloaded to this file.
+// partSize   size of each download part, in bytes; e.g. 100 * 1024 means 100KB per part.
+// options    constraints on the Object's attributes; see GetObject.
+//
+// error      nil on success; otherwise the error information.
+//
+func (bucket Bucket) DownloadFile(objectKey, filePath string, partSize int64, options ...Option) error {
+	if partSize < 1 || partSize > MaxPartSize {
+		return errors.New("oss: part size invalid range (1, 5GB]")
+	}
+
+	cpConf, err := getCpConfig(options, filePath)
+	if err != nil {
+		return err
+	}
+
+	routines := getRoutines(options)
+
+	if cpConf.IsEnable {
+		return bucket.downloadFileWithCp(objectKey, filePath, partSize, options, cpConf.FilePath, routines)
+	}
+
+	return bucket.downloadFile(objectKey, filePath, partSize, options, routines)
+}
+
+// ----- Concurrent download without checkpoint -----
+
+// Worker goroutine arguments
+type downloadWorkerArg struct {
+	bucket   *Bucket
+	key      string
+	filePath string
+	options  []Option
+	hook     downloadPartHook
+}
+
+// Hook for testing
+type downloadPartHook func(part downloadPart) error
+
+var downloadPartHooker downloadPartHook = defaultDownloadPartHook
+
+func defaultDownloadPartHook(part downloadPart) error {
+	return nil
+}
+
+// Worker goroutine
+func downloadWorker(id int, arg downloadWorkerArg, jobs <-chan downloadPart, results chan<- downloadPart, failed chan<- error) {
+	for part := range jobs {
+		if err := arg.hook(part); err != nil {
+			failed <- err
+			break
+		}
+
+		opt := Range(part.Start, part.End)
+		opts := append(arg.options, opt)
+		rd, err := arg.bucket.GetObject(arg.key, opts...)
+		if err != nil {
+			failed <- err
+			break
+		}
+		defer rd.Close()
+
+		fd, err := os.OpenFile(arg.filePath, os.O_WRONLY, 0660)
+		if err != nil {
+			failed <- err
+			break
+		}
+		defer fd.Close()
+
+		_, err = fd.Seek(part.Start, os.SEEK_SET)
+		if err != nil {
+			failed <- err
+			break
+		}
+
+		_, err = io.Copy(fd, rd)
+		if err != nil {
+			failed <- err
+			break
+		}
+
+		results <- part
+	}
+}
+
+// Scheduler goroutine
+func downloadScheduler(jobs chan downloadPart, parts []downloadPart) {
+	for _, part := range parts {
+		jobs <- part
+	}
+	close(jobs)
+}
+
+// Download part
+type downloadPart struct {
+	Index int   // part index, numbered from 0
+	Start int64 // start position of the part
+	End   int64 // end position of the part
+}
+
+// Split the file into parts
+func getDownloadPart(bucket *Bucket, objectKey string, partSize int64) ([]downloadPart, error) {
+	meta, err := bucket.GetObjectDetailedMeta(objectKey)
+	if err != nil {
+		return nil, err
+	}
+
+	parts := []downloadPart{}
+	objectSize, err := strconv.ParseInt(meta.Get(HTTPHeaderContentLength), 10, 0)
+	if err != nil {
+		return nil, err
+	}
+
+	part := downloadPart{}
+	i := 0
+	for offset := int64(0); offset < objectSize; offset += partSize {
+		part.Index = i
+		part.Start = offset
+		part.End = GetPartEnd(offset, objectSize, partSize)
+		parts = append(parts, part)
+		i++
+	}
+	return parts, nil
+}
+
+// Concurrent download without resumable checkpoint
+func (bucket Bucket) downloadFile(objectKey, filePath string, partSize int64, options []Option, routines int) error {
+	// Create the file if it does not exist
+	fd, err := os.OpenFile(filePath, os.O_WRONLY|os.O_CREATE, 0660)
+	if err != nil {
+		return err
+	}
+	fd.Close()
+
+	// Split the file
+	parts, err := getDownloadPart(&bucket, objectKey, partSize)
+	if err != nil {
+		return err
+	}
+
+	jobs := make(chan downloadPart, len(parts))
+	results := make(chan downloadPart, len(parts))
+	failed := make(chan error)
+
+	// Start the worker goroutines
+	arg := downloadWorkerArg{&bucket, objectKey, filePath, options, downloadPartHooker}
+	for w := 1; w <= routines; w++ {
+		go downloadWorker(w, arg, jobs, results, failed)
+	}
+
+	// Download the parts concurrently
+	go downloadScheduler(jobs, parts)
+
+	// Wait for the part downloads to finish
+	completed := 0
+	ps := make([]downloadPart, len(parts))
+	for {
+		select {
+		case part := <-results:
+			completed++
+			ps[part.Index] = part
+		case err := <-failed:
+			return err
+		default:
+			time.Sleep(time.Second)
+		}
+
+		if completed >= len(parts) {
+			break
+		}
+	}
+
+	return nil
+}
+
+// ----- Concurrent download with checkpoint -----
+
+const downloadCpMagic = "92611BED-89E2-46B6-89E5-72F273D4B0A3"
+
+type downloadCheckpoint struct {
+	Magic    string         // magic
+	MD5      string         // MD5 of the cp content
+	FilePath string         // local file
+	Object   string         // key
+	ObjStat  objectStat     // object status
+	Parts    []downloadPart // all parts
+	PartStat []bool         // whether each part's download is complete
+	mutex    sync.Mutex     // Lock
+}
+
+type objectStat struct {
+	Size         int64  // size
+	LastModified string // last modified time
+	Etag         string // etag
+}
+
+// Whether the CP data is valid: it is valid when the CP checks out and the Object has not been updated
+func (cp downloadCheckpoint) isValid(bucket *Bucket, objectKey string) (bool, error) {
+	// Compare the CP's Magic and MD5
+	cpb := cp
+	cpb.MD5 = ""
+	js, _ := json.Marshal(cpb)
+	sum := md5.Sum(js)
+	b64 := base64.StdEncoding.EncodeToString(sum[:])
+
+	if cp.Magic != downloadCpMagic || b64 != cp.MD5 {
+		return false, nil
+	}
+
+	// Confirm the object has not been updated
+	meta, err := bucket.GetObjectDetailedMeta(objectKey)
+	if err != nil {
+		return false, err
+	}
+
+	objectSize, err := strconv.ParseInt(meta.Get(HTTPHeaderContentLength), 10, 0)
+	if err != nil {
+		return false, err
+	}
+
+	// Compare the Object's size/last-modified time/etag
+	if cp.ObjStat.Size != objectSize ||
+		cp.ObjStat.LastModified != meta.Get(HTTPHeaderLastModified) ||
+		cp.ObjStat.Etag != meta.Get(HTTPHeaderEtag) {
+		return false, nil
+	}
+
+	return true, nil
+}
+
+// Load from file
+func (cp *downloadCheckpoint) load(filePath string) error {
+	contents, err := ioutil.ReadFile(filePath)
+	if err != nil {
+		return err
+	}
+
+	err = json.Unmarshal(contents, cp)
+	return err
+}
+
+// Dump to file
+func (cp *downloadCheckpoint) dump(filePath string) error {
+	bcp := *cp
+
+	// Compute the MD5
+	bcp.MD5 = ""
+	js, err := json.Marshal(bcp)
+	if err != nil {
+		return err
+	}
+	sum := md5.Sum(js)
+	b64 := base64.StdEncoding.EncodeToString(sum[:])
+	bcp.MD5 = b64
+
+	// Serialize
+	js, err = json.Marshal(bcp)
+	if err != nil {
+		return err
+	}
+
+	// dump
+	return ioutil.WriteFile(filePath, js, 0644)
+}
+
+// Parts not yet completed
+func (cp downloadCheckpoint) todoParts() []downloadPart {
+	dps := []downloadPart{}
+	for i, ps := range cp.PartStat {
+		if !ps {
+			dps = append(dps, cp.Parts[i])
+		}
+	}
+	return dps
+}
+
+// Initialize the download task
+func (cp *downloadCheckpoint) prepare(bucket *Bucket, objectKey, filePath string, partSize int64) error {
+	// cp
+	cp.Magic = downloadCpMagic
+	cp.FilePath = filePath
+	cp.Object = objectKey
+
+	// object
+	meta, err := bucket.GetObjectDetailedMeta(objectKey)
+	if err != nil {
+		return err
+	}
+
+	objectSize, err := strconv.ParseInt(meta.Get(HTTPHeaderContentLength), 10, 0)
+	if err != nil {
+		return err
+	}
+
+	cp.ObjStat.Size = objectSize
+	cp.ObjStat.LastModified = meta.Get(HTTPHeaderLastModified)
+	cp.ObjStat.Etag = meta.Get(HTTPHeaderEtag)
+
+	// parts
+	cp.Parts, err = getDownloadPart(bucket, objectKey, partSize)
+	if err != nil {
+		return err
+	}
+	cp.PartStat = make([]bool, len(cp.Parts))
+	for i := range cp.PartStat {
+		cp.PartStat[i] = false
+	}
+
+	return nil
+}
+
+func (cp *downloadCheckpoint) complete(cpFilePath string) error {
+	return os.Remove(cpFilePath)
+}
+
+// Concurrent download with checkpoint
+func (bucket Bucket) downloadFileWithCp(objectKey, filePath string, partSize int64, options []Option, cpFilePath string, routines int) error {
+	// Load the CP data
+	dcp := downloadCheckpoint{}
+	err := dcp.load(cpFilePath)
+	if err != nil {
+		os.Remove(cpFilePath)
+	}
+
+	// If loading failed or the data is invalid, reinitialize the download
+	valid, err := dcp.isValid(&bucket, objectKey)
+	if err != nil || !valid {
+		if err = dcp.prepare(&bucket, objectKey, filePath, partSize); err != nil {
+			return err
+		}
+		os.Remove(cpFilePath)
+	}
+
+	// Create the file if it does not exist
+	fd, err := os.OpenFile(filePath, os.O_WRONLY|os.O_CREATE, 0660)
+	if err != nil {
+		return err
+	}
+	fd.Close()
+
+	// Remaining parts
+	parts := dcp.todoParts()
+	jobs := make(chan downloadPart, len(parts))
+	results := make(chan downloadPart, len(parts))
+	failed := make(chan error)
+
+	// Start the worker goroutines
+	arg := downloadWorkerArg{&bucket, objectKey, filePath, options, downloadPartHooker}
+	for w := 1; w <= routines; w++ {
+		go downloadWorker(w, arg, jobs, results, failed)
+	}
+
+	// Download the parts concurrently
+	go downloadScheduler(jobs, parts)
+
+	// Wait for the part downloads to finish
+	completed := 0
+	for {
+		select {
+		case part := <-results:
+			completed++
+			dcp.PartStat[part.Index] = true
+			dcp.dump(cpFilePath)
+		case err := <-failed:
+			return err
+		default:
+			time.Sleep(time.Second)
+		}
+
+		if completed >= len(parts) {
+			break
+		}
+	}
+
+	return dcp.complete(cpFilePath)
+}

+ 360 - 0
oss/download_test.go

@@ -0,0 +1,360 @@
+package oss
+
+import (
+	"fmt"
+	"os"
+	"time"
+
+	. "gopkg.in/check.v1"
+)
+
+type OssDownloadSuite struct {
+	client *Client
+	bucket *Bucket
+}
+
+var _ = Suite(&OssDownloadSuite{})
+
+// Run once when the suite starts running
+func (s *OssDownloadSuite) SetUpSuite(c *C) {
+	client, err := New(endpoint, accessID, accessKey)
+	c.Assert(err, IsNil)
+	s.client = client
+
+	err = s.client.CreateBucket(bucketName)
+	c.Assert(err, IsNil)
+
+	bucket, err := s.client.Bucket(bucketName)
+	c.Assert(err, IsNil)
+	s.bucket = bucket
+
+	fmt.Println("SetUpSuite")
+}
+
+// Run once after all tests or benchmarks have finished running
+func (s *OssDownloadSuite) TearDownSuite(c *C) {
+	// Delete Part
+	lmur, err := s.bucket.ListMultipartUploads()
+	c.Assert(err, IsNil)
+
+	for _, upload := range lmur.Uploads {
+		var imur = InitiateMultipartUploadResult{Bucket: s.bucket.BucketName,
+			Key: upload.Key, UploadID: upload.UploadID}
+		err = s.bucket.AbortMultipartUpload(imur)
+		c.Assert(err, IsNil)
+	}
+
+	// Delete Objects
+	lor, err := s.bucket.ListObjects()
+	c.Assert(err, IsNil)
+
+	for _, object := range lor.Objects {
+		err = s.bucket.DeleteObject(object.Key)
+		c.Assert(err, IsNil)
+	}
+
+	// delete bucket
+	err = s.client.DeleteBucket(bucketName)
+	c.Assert(err, IsNil)
+
+	fmt.Println("TearDownSuite")
+}
+
+// Run before each test or benchmark starts running
+func (s *OssDownloadSuite) SetUpTest(c *C) {
+	err := removeTempFiles("../oss", ".jpg")
+	c.Assert(err, IsNil)
+
+	fmt.Println("SetUpTest")
+}
+
+// Run after each test or benchmark runs
+func (s *OssDownloadSuite) TearDownTest(c *C) {
+	err := removeTempFiles("../oss", ".jpg")
+	c.Assert(err, IsNil)
+
+	fmt.Println("TearDownTest")
+}
+
+// TestDownloadRoutineWithoutRecovery concurrent download without checkpoint recovery
+func (s *OssDownloadSuite) TestDownloadRoutineWithoutRecovery(c *C) {
+	objectName := objectNamePrefix + "tdrwr"
+	fileName := "../sample/BingWallpaper-2015-11-07.jpg"
+	newFile := "down-new-file.jpg"
+
+	// Upload the file
+	err := s.bucket.UploadFile(objectName, fileName, 100*1024, Routines(3))
+	c.Assert(err, IsNil)
+
+	// Download with the default options
+	err = s.bucket.DownloadFile(objectName, newFile, 100*1024)
+	c.Assert(err, IsNil)
+
+	// check
+	eq, err := compareFiles(fileName, newFile)
+	c.Assert(err, IsNil)
+	c.Assert(eq, Equals, true)
+
+	// Download with 2 goroutines, fewer than the total of 5 parts
+	os.Remove(newFile)
+	err = s.bucket.DownloadFile(objectName, newFile, 100*1024, Routines(2))
+	c.Assert(err, IsNil)
+
+	// check
+	eq, err = compareFiles(fileName, newFile)
+	c.Assert(err, IsNil)
+	c.Assert(eq, Equals, true)
+
+	// Download with 5 goroutines, equal to the total of 5 parts
+	os.Remove(newFile)
+	err = s.bucket.DownloadFile(objectName, newFile, 100*1024, Routines(5))
+	c.Assert(err, IsNil)
+
+	// check
+	eq, err = compareFiles(fileName, newFile)
+	c.Assert(err, IsNil)
+	c.Assert(eq, Equals, true)
+
+	// Download with 10 goroutines, more than the total of 5 parts
+	os.Remove(newFile)
+	err = s.bucket.DownloadFile(objectName, newFile, 100*1024, Routines(10))
+	c.Assert(err, IsNil)
+
+	// check
+	eq, err = compareFiles(fileName, newFile)
+	c.Assert(err, IsNil)
+	c.Assert(eq, Equals, true)
+
+	err = s.bucket.DeleteObject(objectName)
+	c.Assert(err, IsNil)
+}
+
+// DownErrorHooker hook for DownloadPart requests
+func DownErrorHooker(part downloadPart) error {
+	if part.Index == 4 {
+		time.Sleep(time.Second)
+		return fmt.Errorf("ErrorHooker")
+	}
+	return nil
+}
+
+// TestDownloadRoutineWithRecovery concurrent download with checkpoint recovery
+func (s *OssDownloadSuite) TestDownloadRoutineWithRecovery(c *C) {
+	objectName := objectNamePrefix + "tdrtr"
+	fileName := "../sample/BingWallpaper-2015-11-07.jpg"
+	newFile := "down-new-file-2.jpg"
+
+	// Upload the file
+	err := s.bucket.UploadFile(objectName, fileName, 100*1024, Routines(3))
+	c.Assert(err, IsNil)
+
+	// Download; use the default CP
+	downloadPartHooker = DownErrorHooker
+	err = s.bucket.DownloadFile(objectName, newFile, 100*1024, Checkpoint(true, ""))
+	c.Assert(err, NotNil)
+	c.Assert(err.Error(), Equals, "ErrorHooker")
+	downloadPartHooker = defaultDownloadPartHook
+
+	// check
+	dcp := downloadCheckpoint{}
+	err = dcp.load(newFile + ".cp")
+	c.Assert(err, IsNil)
+	c.Assert(dcp.Magic, Equals, downloadCpMagic)
+	c.Assert(len(dcp.MD5), Equals, len("LC34jZU5xK4hlxi3Qn3XGQ=="))
+	c.Assert(dcp.FilePath, Equals, newFile)
+	c.Assert(dcp.ObjStat.Size, Equals, int64(482048))
+	c.Assert(len(dcp.ObjStat.LastModified), Equals, len("2015-12-17 18:43:03 +0800 CST"))
+	c.Assert(dcp.ObjStat.Etag, Equals, "\"2351E662233817A7AE974D8C5B0876DD-5\"")
+	c.Assert(dcp.Object, Equals, objectName)
+	c.Assert(len(dcp.Parts), Equals, 5)
+	c.Assert(len(dcp.todoParts()), Equals, 1)
+
+	err = s.bucket.DownloadFile(objectName, newFile, 100*1024, Checkpoint(true, ""))
+	c.Assert(err, IsNil)
+
+	err = dcp.load(newFile + ".cp")
+	c.Assert(err, NotNil)
+
+	eq, err := compareFiles(fileName, newFile)
+	c.Assert(err, IsNil)
+	c.Assert(eq, Equals, true)
+
+	// Download with a specified CP
+	os.Remove(newFile)
+	downloadPartHooker = DownErrorHooker
+	err = s.bucket.DownloadFile(objectName, newFile, 100*1024, Checkpoint(true, objectName+".cp"))
+	c.Assert(err, NotNil)
+	c.Assert(err.Error(), Equals, "ErrorHooker")
+	downloadPartHooker = defaultDownloadPartHook
+
+	// check
+	dcp = downloadCheckpoint{}
+	err = dcp.load(objectName + ".cp")
+	c.Assert(err, IsNil)
+	c.Assert(dcp.Magic, Equals, downloadCpMagic)
+	c.Assert(len(dcp.MD5), Equals, len("LC34jZU5xK4hlxi3Qn3XGQ=="))
+	c.Assert(dcp.FilePath, Equals, newFile)
+	c.Assert(dcp.ObjStat.Size, Equals, int64(482048))
+	c.Assert(len(dcp.ObjStat.LastModified), Equals, len("2015-12-17 18:43:03 +0800 CST"))
+	c.Assert(dcp.ObjStat.Etag, Equals, "\"2351E662233817A7AE974D8C5B0876DD-5\"")
+	c.Assert(dcp.Object, Equals, objectName)
+	c.Assert(len(dcp.Parts), Equals, 5)
+	c.Assert(len(dcp.todoParts()), Equals, 1)
+
+	err = s.bucket.DownloadFile(objectName, newFile, 100*1024, Checkpoint(true, objectName+".cp"))
+	c.Assert(err, IsNil)
+
+	err = dcp.load(objectName + ".cp")
+	c.Assert(err, NotNil)
+
+	eq, err = compareFiles(fileName, newFile)
+	c.Assert(err, IsNil)
+	c.Assert(eq, Equals, true)
+
+	// Download completes in one pass, no errors in between
+	os.Remove(newFile)
+	err = s.bucket.DownloadFile(objectName, newFile, 100*1024, Checkpoint(true, ""))
+	c.Assert(err, IsNil)
+
+	err = dcp.load(newFile + ".cp")
+	c.Assert(err, NotNil)
+
+	eq, err = compareFiles(fileName, newFile)
+	c.Assert(err, IsNil)
+	c.Assert(eq, Equals, true)
+
+	// Download completes in one pass, no errors in between
+	os.Remove(newFile)
+	err = s.bucket.DownloadFile(objectName, newFile, 100*1024, Routines(10), Checkpoint(true, ""))
+	c.Assert(err, IsNil)
+
+	err = dcp.load(newFile + ".cp")
+	c.Assert(err, NotNil)
+
+	eq, err = compareFiles(fileName, newFile)
+	c.Assert(err, IsNil)
+	c.Assert(eq, Equals, true)
+
+	err = s.bucket.DeleteObject(objectName)
+	c.Assert(err, IsNil)
+}
+
+// TestDownloadOption options
+func (s *OssDownloadSuite) TestDownloadOption(c *C) {
+	objectName := objectNamePrefix + "tdmo"
+	fileName := "../sample/BingWallpaper-2015-11-07.jpg"
+	newFile := "down-new-file-3.jpg"
+
+	// Upload the file
+	err := s.bucket.UploadFile(objectName, fileName, 100*1024, Routines(3))
+	c.Assert(err, IsNil)
+
+	meta, err := s.bucket.GetObjectDetailedMeta(objectName)
+	c.Assert(err, IsNil)
+
+	// IfMatch
+	os.Remove(newFile)
+	err = s.bucket.DownloadFile(objectName, newFile, 100*1024, Routines(3), IfMatch(meta.Get("Etag")))
+	c.Assert(err, IsNil)
+
+	eq, err := compareFiles(fileName, newFile)
+	c.Assert(err, IsNil)
+	c.Assert(eq, Equals, true)
+
+	// IfNoneMatch
+	os.Remove(newFile)
+	err = s.bucket.DownloadFile(objectName, newFile, 100*1024, Routines(3), IfNoneMatch(meta.Get("Etag")))
+	c.Assert(err, NotNil)
+
+	// IfMatch
+	err = s.bucket.DownloadFile(objectName, newFile, 100*1024, Routines(3), Checkpoint(true, ""), IfMatch(meta.Get("Etag")))
+	c.Assert(err, IsNil)
+
+	eq, err = compareFiles(fileName, newFile)
+	c.Assert(err, IsNil)
+	c.Assert(eq, Equals, true)
+
+	// IfNoneMatch
+	err = s.bucket.DownloadFile(objectName, newFile, 100*1024, Routines(3), Checkpoint(true, ""), IfNoneMatch(meta.Get("Etag")))
+	c.Assert(err, NotNil)
+}
+
+// TestDownloadObjectChange the Object changes during the download
+func (s *OssDownloadSuite) TestDownloadObjectChange(c *C) {
+	objectName := objectNamePrefix + "tdloc"
+	fileName := "../sample/BingWallpaper-2015-11-07.jpg"
+	newFile := "down-new-file-4.jpg"
+
+	// Upload the file
+	err := s.bucket.UploadFile(objectName, fileName, 100*1024, Routines(3))
+	c.Assert(err, IsNil)
+
+	// Download with the default checkpoint settings
+	downloadPartHooker = DownErrorHooker
+	err = s.bucket.DownloadFile(objectName, newFile, 100*1024, Checkpoint(true, ""))
+	c.Assert(err, NotNil)
+	c.Assert(err.Error(), Equals, "ErrorHooker")
+	downloadPartHooker = defaultDownloadPartHook
+
+	err = s.bucket.UploadFile(objectName, fileName, 100*1024, Routines(3))
+	c.Assert(err, IsNil)
+
+	err = s.bucket.DownloadFile(objectName, newFile, 100*1024, Checkpoint(true, ""))
+	c.Assert(err, IsNil)
+
+	eq, err := compareFiles(fileName, newFile)
+	c.Assert(err, IsNil)
+	c.Assert(eq, Equals, true)
+}
+
+// TestDownloadNegative negative cases for download
+func (s *OssDownloadSuite) TestDownloadNegative(c *C) {
+	objectName := objectNamePrefix + "tdn"
+	fileName := "../sample/BingWallpaper-2015-11-07.jpg"
+	newFile := "down-new-file-3.jpg"
+
+	// Upload the file
+	err := s.bucket.UploadFile(objectName, fileName, 100*1024, Routines(3))
+	c.Assert(err, IsNil)
+
+	// worker goroutine error
+	downloadPartHooker = DownErrorHooker
+	err = s.bucket.DownloadFile(objectName, newFile, 100*1024, Routines(2))
+	c.Assert(err, NotNil)
+	c.Assert(err.Error(), Equals, "ErrorHooker")
+	downloadPartHooker = defaultDownloadPartHook
+
+	// Local file does not exist
+	err = s.bucket.DownloadFile(objectName, "/tmp/", 100*1024, Routines(2))
+	c.Assert(err, NotNil)
+
+	// The specified part size is invalid
+	err = s.bucket.DownloadFile(objectName, newFile, 0, Routines(2))
+	c.Assert(err, NotNil)
+
+	err = s.bucket.DownloadFile(objectName, newFile, 1024*1024*1024*100, Routines(2))
+	c.Assert(err, NotNil)
+
+	err = s.bucket.DeleteObject(objectName)
+	c.Assert(err, IsNil)
+
+	// Local file does not exist
+	err = s.bucket.DownloadFile(objectName, "/tmp/", 100*1024, Checkpoint(true, ""))
+	c.Assert(err, NotNil)
+
+	err = s.bucket.DownloadFile(objectName, "/tmp/", 100*1024, Routines(2), Checkpoint(true, ""))
+	c.Assert(err, NotNil)
+
+	// The specified part size is invalid
+	err = s.bucket.DownloadFile(objectName, newFile, -1, Checkpoint(true, ""))
+	c.Assert(err, NotNil)
+
+	err = s.bucket.DownloadFile(objectName, newFile, 0, Routines(2), Checkpoint(true, ""))
+	c.Assert(err, NotNil)
+
+	err = s.bucket.DownloadFile(objectName, newFile, 1024*1024*1024*100, Checkpoint(true, ""))
+	c.Assert(err, NotNil)
+
+	err = s.bucket.DownloadFile(objectName, newFile, 1024*1024*1024*100, Routines(2), Checkpoint(true, ""))
+	c.Assert(err, NotNil)
+}
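The tests above drive DownloadFile with a part size of 100*1024 against a 482048-byte sample image, which yields five ranged requests. The part-range arithmetic they depend on can be sketched as a small standalone helper (`getPartEnd` here is an illustrative stand-in, not the SDK's exported function):

```go
package main

import "fmt"

// getPartEnd returns the inclusive end offset of the part starting at
// begin, for an object of total bytes split into per-byte parts.
func getPartEnd(begin, total, per int64) int64 {
	if begin+per > total {
		return total - 1 // last, possibly short, part
	}
	return begin + per - 1
}

func main() {
	total := int64(482048) // size of the sample image used in the tests
	per := int64(100 * 1024)
	for i := int64(0); i < total; i += per {
		fmt.Printf("bytes=%d-%d\n", i, getPartEnd(i, total, per))
	}
}
```

Running this prints five ranges, the last one shorter than the rest, matching the five parts the checkpoint tests assert on.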

+ 0 - 88
oss/multipart.go

@@ -3,7 +3,6 @@ package oss
 import (
 	"bytes"
 	"encoding/xml"
-	"errors"
 	"io"
 	"net/http"
 	"os"
@@ -248,90 +247,3 @@ func (bucket Bucket) ListMultipartUploads(options ...Option) (ListMultipartUploa
 	err = decodeListMultipartUploadResult(&out)
 	return out, err
 }
-
-//
-// UploadFile uploads a file in parts, suitable for large files.
-//
-// objectKey  object name.
-// filePath   path of the local file to upload.
-// partSize   size in bytes of each upload part, e.g. 100 * 1024 for 100KB parts.
-// options    object properties to set on upload. See InitiateMultipartUpload.
-//
-// error is nil on success, otherwise the error information.
-//
-func (bucket Bucket) UploadFile(objectKey, filePath string, partSize int64, options ...Option) error {
-	if partSize < MinPartSize || partSize > MaxPartSize {
-		return errors.New("oss: part size invalid range (1024KB, 5GB]")
-	}
-
-	chunks, err := SplitFileByPartSize(filePath, partSize)
-	if err != nil {
-		return err
-	}
-
-	imur, err := bucket.InitiateMultipartUpload(objectKey, options...)
-	if err != nil {
-		return err
-	}
-
-	parts := []UploadPart{}
-	for _, chunk := range chunks {
-		part, err := bucket.UploadPartFromFile(imur, filePath, chunk.Offset, chunk.Size,
-			chunk.Number)
-		if err != nil {
-			bucket.AbortMultipartUpload(imur)
-			return err
-		}
-		parts = append(parts, part)
-	}
-
-	_, err = bucket.CompleteMultipartUpload(imur, parts)
-	if err != nil {
-		bucket.AbortMultipartUpload(imur)
-		return err
-	}
-	return nil
-}
-
-//
-// DownloadFile downloads an object in parts, suitable for large objects.
-//
-// objectKey  object key.
-// filePath   local file the object is downloaded to.
-// partSize   size in bytes of each download part, e.g. 100 * 1024 for 100KB parts.
-// options    constraints on the object. See GetObject.
-//
-// error is nil on success, otherwise the error information.
-//
-func (bucket Bucket) DownloadFile(objectKey, filePath string, partSize int64, options ...Option) error {
-	if partSize < 1 || partSize > MaxPartSize {
-		return errors.New("oss: part size invalid range (1, 5GB]")
-	}
-
-	meta, err := bucket.GetObjectDetailedMeta(objectKey)
-	if err != nil {
-		return err
-	}
-
-	fd, err := os.OpenFile(filePath, os.O_WRONLY|os.O_CREATE, 0660)
-	if err != nil {
-		return err
-	}
-	defer fd.Close()
-
-	objectSize, err := strconv.ParseInt(meta.Get(HTTPHeaderContentLength), 10, 0)
-	for i := int64(0); i < objectSize; i += partSize {
-		option := Range(i, GetPartEnd(i, objectSize, partSize))
-		options = append(options, option)
-		r, err := bucket.GetObject(objectKey, options...)
-		if err != nil {
-			return err
-		}
-		defer r.Close()
-		_, err = io.Copy(fd, r)
-		if err != nil {
-			return err
-		}
-	}
-	return nil
-}

+ 53 - 21
oss/option.go

@@ -2,6 +2,7 @@ package oss
 
 import (
 	"bytes"
+	"encoding/json"
 	"fmt"
 	"net/http"
 	"net/url"
@@ -13,8 +14,15 @@ import (
 type optionType string
 
 const (
-	optionParam optionType = "parameter" // parameter in the URL
-	optionHTTP  optionType = "http"      // HTTP header
+	optionParam optionType = "HTTPParameter" // URL parameter
+	optionHTTP  optionType = "HTTPHeader"    // HTTP header
+	optionArg   optionType = "FuncArgument"  // function argument
+)
+
+const (
+	deleteObjectsQuiet = "delete-objects-quiet"
+	routineNum         = "x-routine-num"
+	checkpointConfig   = "x-cp-config"
 )
 
 type (
@@ -189,44 +197,68 @@ func UploadIDMarker(value string) Option {
 	return addParam("upload-id-marker", value)
 }
 
-const deleteObjectsQuiet = "delete-objects-quiet"
-
-// DeleteObjectsQuiet sets verbose or quiet mode for DeleteObjects; the default is the verbose mode.
+// DeleteObjectsQuiet sets verbose or quiet mode for DeleteObjects; the default is verbose.
 func DeleteObjectsQuiet(isQuiet bool) Option {
-	return addParam(deleteObjectsQuiet, strconv.FormatBool(isQuiet))
+	return addArg(deleteObjectsQuiet, strconv.FormatBool(isQuiet))
+}
+
+type cpConfig struct {
+	IsEnable bool
+	FilePath string
+}
+
+// Checkpoint enables/disables checkpointing for DownloadFile/UploadFile and sets the checkpoint file path
+func Checkpoint(isEnable bool, filePath string) Option {
+	res, _ := json.Marshal(cpConfig{isEnable, filePath})
+	return addArg(checkpointConfig, string(res))
+}
+
+// Routines sets the concurrency for DownloadFile/UploadFile
+func Routines(n int) Option {
+	return addArg(routineNum, strconv.Itoa(n))
 }
 
 func setHeader(key, value string) Option {
-	return func(args map[string]optionValue) error {
+	return func(params map[string]optionValue) error {
 		if value == "" {
 			return nil
 		}
-		args[key] = optionValue{value, optionHTTP}
+		params[key] = optionValue{value, optionHTTP}
 		return nil
 	}
 }
 
 func addParam(key, value string) Option {
-	return func(args map[string]optionValue) error {
+	return func(params map[string]optionValue) error {
+		if value == "" {
+			return nil
+		}
+		params[key] = optionValue{value, optionParam}
+		return nil
+	}
+}
+
+func addArg(key, value string) Option {
+	return func(params map[string]optionValue) error {
 		if value == "" {
 			return nil
 		}
-		args[key] = optionValue{value, optionParam}
+		params[key] = optionValue{value, optionArg}
 		return nil
 	}
 }
 
 func handleOptions(headers map[string]string, options []Option) error {
-	args := map[string]optionValue{}
+	params := map[string]optionValue{}
 	for _, option := range options {
 		if option != nil {
-			if err := option(args); err != nil {
+			if err := option(params); err != nil {
 				return err
 			}
 		}
 	}
 
-	for k, v := range args {
+	for k, v := range params {
 		if v.Type == optionHTTP {
 			headers[k] = v.Value
 		}
@@ -236,10 +268,10 @@ func handleOptions(headers map[string]string, options []Option) error {
 
 func handleParams(options []Option) (string, error) {
 	// option
-	args := map[string]optionValue{}
+	params := map[string]optionValue{}
 	for _, option := range options {
 		if option != nil {
-			if err := option(args); err != nil {
+			if err := option(params); err != nil {
 				return "", err
 			}
 		}
@@ -247,8 +279,8 @@ func handleParams(options []Option) (string, error) {
 
 	// sort
 	var buf bytes.Buffer
-	keys := make([]string, 0, len(args))
-	for k, v := range args {
+	keys := make([]string, 0, len(params))
+	for k, v := range params {
 		if v.Type == optionParam {
 			keys = append(keys, k)
 		}
@@ -257,7 +289,7 @@ func handleParams(options []Option) (string, error) {
 
 	// serialize
 	for _, k := range keys {
-		vs := args[k]
+		vs := params[k]
 		prefix := url.QueryEscape(k) + "="
 
 		if buf.Len() > 0 {
@@ -271,16 +303,16 @@ func handleParams(options []Option) (string, error) {
 }
 
 func findOption(options []Option, param, defaultVal string) (string, error) {
-	args := map[string]optionValue{}
+	params := map[string]optionValue{}
 	for _, option := range options {
 		if option != nil {
-			if err := option(args); err != nil {
+			if err := option(params); err != nil {
 				return "", err
 			}
 		}
 	}
 
-	if val, ok := args[param]; ok {
+	if val, ok := params[param]; ok {
 		return val.Value, nil
 	}
 	return defaultVal, nil
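The option plumbing above tags each value with an optionType so that headers, URL parameters, and plain function arguments can travel through one variadic `...Option` list, and findOption later pulls a value out with a default. A minimal self-contained replica of this functional-options pattern (all names here are illustrative, not the SDK's exported API):

```go
package main

import (
	"fmt"
	"strconv"
)

type optType string

const (
	typeHeader optType = "HTTPHeader"
	typeArg    optType = "FuncArgument"
)

type optValue struct {
	Value string
	Type  optType
}

// Opt mutates a shared option map, mirroring the SDK's Option closures.
type Opt func(map[string]optValue) error

// withRoutines records a concurrency setting as a function argument.
func withRoutines(n int) Opt {
	return func(m map[string]optValue) error {
		m["x-routine-num"] = optValue{strconv.Itoa(n), typeArg}
		return nil
	}
}

// findOpt applies every option, then returns the value for key,
// or def when no option set it.
func findOpt(opts []Opt, key, def string) (string, error) {
	m := map[string]optValue{}
	for _, o := range opts {
		if o != nil {
			if err := o(m); err != nil {
				return "", err
			}
		}
	}
	if v, ok := m[key]; ok {
		return v.Value, nil
	}
	return def, nil
}

func main() {
	v, _ := findOpt([]Opt{withRoutines(3)}, "x-routine-num", "1")
	fmt.Println(v) // 3
	v, _ = findOpt(nil, "x-routine-num", "1")
	fmt.Println(v) // 1
}
```

The design lets new settings such as Routines and Checkpoint be added without changing any function signatures.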

+ 433 - 0
oss/upload.go

@@ -0,0 +1,433 @@
+package oss
+
+import (
+	"crypto/md5"
+	"encoding/base64"
+	"encoding/json"
+	"errors"
+	"io/ioutil"
+	"os"
+	"strconv"
+	"time"
+)
+
+//
+// UploadFile uploads a file in parts, suitable for large files.
+//
+// objectKey  object name.
+// filePath   path of the local file to upload.
+// partSize   size in bytes of each upload part, e.g. 100 * 1024 for 100KB parts.
+// options    object properties to set on upload. See InitiateMultipartUpload.
+//
+// error is nil on success, otherwise the error information.
+//
+func (bucket Bucket) UploadFile(objectKey, filePath string, partSize int64, options ...Option) error {
+	if partSize < MinPartSize || partSize > MaxPartSize {
+		return errors.New("oss: part size invalid range (1024KB, 5GB]")
+	}
+
+	cpConf, err := getCpConfig(options, filePath)
+	if err != nil {
+		return err
+	}
+
+	routines := getRoutines(options)
+
+	if cpConf.IsEnable {
+		return bucket.uploadFileWithCp(objectKey, filePath, partSize, options, cpConf.FilePath, routines)
+	}
+
+	return bucket.uploadFile(objectKey, filePath, partSize, options, routines)
+}
+
+// ----- Concurrent upload without checkpoint -----
+
+// Get the checkpoint configuration
+func getCpConfig(options []Option, filePath string) (*cpConfig, error) {
+	cpc := cpConfig{}
+	cpStr, err := findOption(options, checkpointConfig, "")
+	if err != nil {
+		return nil, err
+	}
+
+	if cpStr != "" {
+		if err = json.Unmarshal([]byte(cpStr), &cpc); err != nil {
+			return nil, err
+		}
+	}
+
+	if cpc.IsEnable && cpc.FilePath == "" {
+		cpc.FilePath = filePath + ".cp"
+	}
+
+	return &cpc, nil
+}
+
+// Get the number of concurrent goroutines; defaults to 1
+func getRoutines(options []Option) int {
+	rStr, err := findOption(options, routineNum, "")
+	if err != nil || rStr == "" {
+		return 1
+	}
+
+	rs, err := strconv.Atoi(rStr)
+	if err != nil {
+		return 1
+	}
+
+	if rs < 1 {
+		rs = 1
+	} else if rs > 100 {
+		rs = 100
+	}
+
+	return rs
+}
+
+// For testing
+type uploadPartHook func(id int, chunk FileChunk) error
+
+var uploadPartHooker uploadPartHook = defaultUploadPart
+
+func defaultUploadPart(id int, chunk FileChunk) error {
+	return nil
+}
+
+// Worker goroutine arguments
+type workerArg struct {
+	bucket   *Bucket
+	filePath string
+	imur     InitiateMultipartUploadResult
+	hook     uploadPartHook
+}
+
+// Worker goroutine
+func worker(id int, arg workerArg, jobs <-chan FileChunk, results chan<- UploadPart, failed chan<- error) {
+	for chunk := range jobs {
+		if err := arg.hook(id, chunk); err != nil {
+			failed <- err
+			break
+		}
+		part, err := arg.bucket.UploadPartFromFile(arg.imur, arg.filePath, chunk.Offset, chunk.Size, chunk.Number)
+		if err != nil {
+			failed <- err
+			break
+		}
+		results <- part
+	}
+}
+
+// Scheduler goroutine
+func scheduler(jobs chan FileChunk, chunks []FileChunk) {
+	for _, chunk := range chunks {
+		jobs <- chunk
+	}
+	close(jobs)
+}
+
+// Concurrent upload without checkpoint/resume support
+func (bucket Bucket) uploadFile(objectKey, filePath string, partSize int64, options []Option, routines int) error {
+	chunks, err := SplitFileByPartSize(filePath, partSize)
+	if err != nil {
+		return err
+	}
+
+	// Initiate the multipart upload
+	imur, err := bucket.InitiateMultipartUpload(objectKey, options...)
+	if err != nil {
+		return err
+	}
+
+	jobs := make(chan FileChunk, len(chunks))
+	results := make(chan UploadPart, len(chunks))
+	failed := make(chan error)
+
+	// Start worker goroutines
+	arg := workerArg{&bucket, filePath, imur, uploadPartHooker}
+	for w := 1; w <= routines; w++ {
+		go worker(w, arg, jobs, results, failed)
+	}
+
+	// Schedule the parts for concurrent upload
+	go scheduler(jobs, chunks)
+
+	// Wait until all parts have been uploaded
+	completed := 0
+	parts := make([]UploadPart, len(chunks))
+	for {
+		select {
+		case part := <-results:
+			completed++
+			parts[part.PartNumber-1] = part
+		case err := <-failed:
+			bucket.AbortMultipartUpload(imur)
+			return err
+		default:
+			time.Sleep(time.Second)
+		}
+
+		if completed >= len(chunks) {
+			break
+		}
+	}
+
+	// Complete the upload
+	_, err = bucket.CompleteMultipartUpload(imur, parts)
+	if err != nil {
+		bucket.AbortMultipartUpload(imur)
+		return err
+	}
+	return nil
+}
+
+// ----- Concurrent upload with checkpoint -----
+const uploadCpMagic = "FE8BB4EA-B593-4FAC-AD7A-2459A36E2E62"
+
+type uploadCheckpoint struct {
+	Magic     string   // magic
+	MD5       string   // MD5 of the checkpoint content
+	FilePath  string   // local file path
+	FileStat  cpStat   // file state
+	ObjectKey string   // key
+	UploadID  string   // upload id
+	Parts     []cpPart // all parts of the local file
+}
+
+type cpStat struct {
+	Size         int64     // file size
+	LastModified time.Time // last modified time of the local file
+	MD5          string    // MD5 of the local file
+}
+
+type cpPart struct {
+	Chunk       FileChunk  // file chunk
+	Part        UploadPart // the uploaded part
+	IsCompleted bool       // whether the part upload has completed
+}
+
+// Whether the checkpoint data is valid; it is valid when the checkpoint matches and the file has not been updated
+func (cp uploadCheckpoint) isValid(filePath string) (bool, error) {
+	// Compare the checkpoint's magic and MD5
+	cpb := cp
+	cpb.MD5 = ""
+	js, _ := json.Marshal(cpb)
+	sum := md5.Sum(js)
+	b64 := base64.StdEncoding.EncodeToString(sum[:])
+
+	if cp.Magic != uploadCpMagic || b64 != cp.MD5 {
+		return false, nil
+	}
+
+	// Check whether the local file has been updated
+	fd, err := os.Open(filePath)
+	if err != nil {
+		return false, err
+	}
+	defer fd.Close()
+
+	st, err := fd.Stat()
+	if err != nil {
+		return false, err
+	}
+
+	md, err := calcFileMD5(filePath)
+	if err != nil {
+		return false, err
+	}
+
+	// Compare file size / last modified time / file MD5
+	if cp.FileStat.Size != st.Size() ||
+		cp.FileStat.LastModified != st.ModTime() ||
+		cp.FileStat.MD5 != md {
+		return false, nil
+	}
+
+	return true, nil
+}
+
+// Load from the checkpoint file
+func (cp *uploadCheckpoint) load(filePath string) error {
+	contents, err := ioutil.ReadFile(filePath)
+	if err != nil {
+		return err
+	}
+
+	err = json.Unmarshal(contents, cp)
+	return err
+}
+
+// Dump to the checkpoint file
+func (cp *uploadCheckpoint) dump(filePath string) error {
+	bcp := *cp
+
+	// Calculate the MD5
+	bcp.MD5 = ""
+	js, err := json.Marshal(bcp)
+	if err != nil {
+		return err
+	}
+	sum := md5.Sum(js)
+	b64 := base64.StdEncoding.EncodeToString(sum[:])
+	bcp.MD5 = b64
+
+	// Serialize
+	js, err = json.Marshal(bcp)
+	if err != nil {
+		return err
+	}
+
+	// dump
+	return ioutil.WriteFile(filePath, js, 0644)
+}
+
+// Update the state of an uploaded part
+func (cp *uploadCheckpoint) updatePart(part UploadPart) {
+	cp.Parts[part.PartNumber-1].Part = part
+	cp.Parts[part.PartNumber-1].IsCompleted = true
+}
+
+// Parts not yet completed
+func (cp *uploadCheckpoint) todoParts() []FileChunk {
+	fcs := []FileChunk{}
+	for _, part := range cp.Parts {
+		if !part.IsCompleted {
+			fcs = append(fcs, part.Chunk)
+		}
+	}
+	return fcs
+}
+
+// All parts
+func (cp *uploadCheckpoint) allParts() []UploadPart {
+	ps := []UploadPart{}
+	for _, part := range cp.Parts {
+		ps = append(ps, part.Part)
+	}
+	return ps
+}
+
+// Calculate the file's MD5
+func calcFileMD5(filePath string) (string, error) {
+	return "", nil
+}
+
+// Initialize the multipart upload
+func prepare(cp *uploadCheckpoint, objectKey, filePath string, partSize int64, bucket *Bucket, options []Option) error {
+	// cp
+	cp.Magic = uploadCpMagic
+	cp.FilePath = filePath
+	cp.ObjectKey = objectKey
+
+	// localfile
+	fd, err := os.Open(filePath)
+	if err != nil {
+		return err
+	}
+	defer fd.Close()
+
+	st, err := fd.Stat()
+	if err != nil {
+		return err
+	}
+	cp.FileStat.Size = st.Size()
+	cp.FileStat.LastModified = st.ModTime()
+	md, err := calcFileMD5(filePath)
+	if err != nil {
+		return err
+	}
+	cp.FileStat.MD5 = md
+
+	// chunks
+	parts, err := SplitFileByPartSize(filePath, partSize)
+	if err != nil {
+		return err
+	}
+
+	cp.Parts = make([]cpPart, len(parts))
+	for i, part := range parts {
+		cp.Parts[i].Chunk = part
+		cp.Parts[i].IsCompleted = false
+	}
+
+	// init load
+	imur, err := bucket.InitiateMultipartUpload(objectKey, options...)
+	if err != nil {
+		return err
+	}
+	cp.UploadID = imur.UploadID
+
+	return nil
+}
+
+// Complete the multipart upload and remove the checkpoint file
+func complete(cp *uploadCheckpoint, bucket *Bucket, parts []UploadPart, cpFilePath string) error {
+	imur := InitiateMultipartUploadResult{Bucket: bucket.BucketName,
+		Key: cp.ObjectKey, UploadID: cp.UploadID}
+	_, err := bucket.CompleteMultipartUpload(imur, parts)
+	if err != nil {
+		return err
+	}
+	err = os.Remove(cpFilePath)
+	return err
+}
+
+// Concurrent upload with checkpoint support
+func (bucket Bucket) uploadFileWithCp(objectKey, filePath string, partSize int64, options []Option, cpFilePath string, routines int) error {
+	// Load the checkpoint data
+	ucp := uploadCheckpoint{}
+	err := ucp.load(cpFilePath)
+	if err != nil {
+		os.Remove(cpFilePath)
+	}
+
+	// Reinitialize the upload if loading failed or the data is invalid
+	valid, err := ucp.isValid(filePath)
+	if err != nil || !valid {
+		if err = prepare(&ucp, objectKey, filePath, partSize, &bucket, options); err != nil {
+			return err
+		}
+		os.Remove(cpFilePath)
+	}
+
+	chunks := ucp.todoParts()
+	imur := InitiateMultipartUploadResult{
+		Bucket:   bucket.BucketName,
+		Key:      objectKey,
+		UploadID: ucp.UploadID}
+
+	jobs := make(chan FileChunk, len(chunks))
+	results := make(chan UploadPart, len(chunks))
+	failed := make(chan error)
+
+	// Start worker goroutines
+	arg := workerArg{&bucket, filePath, imur, uploadPartHooker}
+	for w := 1; w <= routines; w++ {
+		go worker(w, arg, jobs, results, failed)
+	}
+
+	// Schedule the parts for concurrent upload
+	go scheduler(jobs, chunks)
+
+	// Wait until all parts have been uploaded
+	completed := 0
+	for {
+		select {
+		case part := <-results:
+			completed++
+			ucp.updatePart(part)
+			ucp.dump(cpFilePath)
+		case err := <-failed:
+			return err
+		default:
+			time.Sleep(time.Second)
+		}
+
+		if completed >= len(chunks) {
+			break
+		}
+	}
+
+	// Complete the multipart upload
+	err = complete(&ucp, &bucket, ucp.allParts(), cpFilePath)
+	return err
+}

+ 455 - 0
oss/upload_test.go

@@ -0,0 +1,455 @@
+package oss
+
+import (
+	"fmt"
+	"io"
+	"os"
+	"time"
+
+	. "gopkg.in/check.v1"
+)
+
+type OssUploadSuite struct {
+	client *Client
+	bucket *Bucket
+}
+
+var _ = Suite(&OssUploadSuite{})
+
+// Run once when the suite starts running
+func (s *OssUploadSuite) SetUpSuite(c *C) {
+	client, err := New(endpoint, accessID, accessKey)
+	c.Assert(err, IsNil)
+	s.client = client
+
+	err = s.client.CreateBucket(bucketName)
+	c.Assert(err, IsNil)
+
+	bucket, err := s.client.Bucket(bucketName)
+	c.Assert(err, IsNil)
+	s.bucket = bucket
+
+	fmt.Println("SetUpSuite")
+}
+
+// Run once after all tests or benchmarks have finished running
+func (s *OssUploadSuite) TearDownSuite(c *C) {
+	// Delete Part
+	lmur, err := s.bucket.ListMultipartUploads()
+	c.Assert(err, IsNil)
+
+	for _, upload := range lmur.Uploads {
+		var imur = InitiateMultipartUploadResult{Bucket: s.bucket.BucketName,
+			Key: upload.Key, UploadID: upload.UploadID}
+		err = s.bucket.AbortMultipartUpload(imur)
+		c.Assert(err, IsNil)
+	}
+
+	// Delete Objects
+	lor, err := s.bucket.ListObjects()
+	c.Assert(err, IsNil)
+
+	for _, object := range lor.Objects {
+		err = s.bucket.DeleteObject(object.Key)
+		c.Assert(err, IsNil)
+	}
+
+	// delete bucket
+	err = s.client.DeleteBucket(bucketName)
+	c.Assert(err, IsNil)
+
+	fmt.Println("TearDownSuite")
+}
+
+// Run before each test or benchmark starts running
+func (s *OssUploadSuite) SetUpTest(c *C) {
+	err := removeTempFiles("../oss", ".jpg")
+	c.Assert(err, IsNil)
+
+	fmt.Println("SetUpTest")
+}
+
+// Run after each test or benchmark runs
+func (s *OssUploadSuite) TearDownTest(c *C) {
+	err := removeTempFiles("../oss", ".jpg")
+	c.Assert(err, IsNil)
+
+	fmt.Println("TearDownTest")
+}
+
+// TestUploadRoutineWithoutRecovery concurrent upload without checkpoint recovery
+func (s *OssUploadSuite) TestUploadRoutineWithoutRecovery(c *C) {
+	objectName := objectNamePrefix + "turwr"
+	fileName := "../sample/BingWallpaper-2015-11-07.jpg"
+	newFile := "upload-new-file.jpg"
+
+	// Routines not specified; defaults to a single goroutine
+	err := s.bucket.UploadFile(objectName, fileName, 100*1024)
+	c.Assert(err, IsNil)
+
+	os.Remove(newFile)
+	err = s.bucket.GetObjectToFile(objectName, newFile)
+	c.Assert(err, IsNil)
+
+	eq, err := compareFiles(fileName, newFile)
+	c.Assert(err, IsNil)
+	c.Assert(eq, Equals, true)
+
+	err = s.bucket.DeleteObject(objectName)
+	c.Assert(err, IsNil)
+
+	// One goroutine specified
+	err = s.bucket.UploadFile(objectName, fileName, 100*1024, Routines(1))
+	c.Assert(err, IsNil)
+
+	os.Remove(newFile)
+	err = s.bucket.GetObjectToFile(objectName, newFile)
+	c.Assert(err, IsNil)
+
+	eq, err = compareFiles(fileName, newFile)
+	c.Assert(err, IsNil)
+	c.Assert(eq, Equals, true)
+
+	err = s.bucket.DeleteObject(objectName)
+	c.Assert(err, IsNil)
+
+	// Three goroutines specified, fewer than the 5 parts
+	err = s.bucket.UploadFile(objectName, fileName, 100*1024, Routines(3))
+	c.Assert(err, IsNil)
+
+	os.Remove(newFile)
+	err = s.bucket.GetObjectToFile(objectName, newFile)
+	c.Assert(err, IsNil)
+
+	eq, err = compareFiles(fileName, newFile)
+	c.Assert(err, IsNil)
+	c.Assert(eq, Equals, true)
+
+	err = s.bucket.DeleteObject(objectName)
+	c.Assert(err, IsNil)
+
+	// Five goroutines specified, equal to the number of parts
+	err = s.bucket.UploadFile(objectName, fileName, 100*1024, Routines(5))
+	c.Assert(err, IsNil)
+
+	os.Remove(newFile)
+	err = s.bucket.GetObjectToFile(objectName, newFile)
+	c.Assert(err, IsNil)
+
+	eq, err = compareFiles(fileName, newFile)
+	c.Assert(err, IsNil)
+	c.Assert(eq, Equals, true)
+
+	err = s.bucket.DeleteObject(objectName)
+	c.Assert(err, IsNil)
+
+	// Ten goroutines specified, more than the 5 parts
+	err = s.bucket.UploadFile(objectName, fileName, 100*1024, Routines(10))
+	c.Assert(err, IsNil)
+
+	os.Remove(newFile)
+	err = s.bucket.GetObjectToFile(objectName, newFile)
+	c.Assert(err, IsNil)
+
+	eq, err = compareFiles(fileName, newFile)
+	c.Assert(err, IsNil)
+	c.Assert(eq, Equals, true)
+
+	err = s.bucket.DeleteObject(objectName)
+	c.Assert(err, IsNil)
+
+	// An invalid goroutine count falls back to 1
+	err = s.bucket.UploadFile(objectName, fileName, 100*1024, Routines(0))
+	os.Remove(newFile)
+	err = s.bucket.GetObjectToFile(objectName, newFile)
+	c.Assert(err, IsNil)
+
+	eq, err = compareFiles(fileName, newFile)
+	c.Assert(err, IsNil)
+	c.Assert(eq, Equals, true)
+
+	err = s.bucket.DeleteObject(objectName)
+	c.Assert(err, IsNil)
+
+	// An invalid goroutine count falls back to 1
+	err = s.bucket.UploadFile(objectName, fileName, 100*1024, Routines(-1))
+	os.Remove(newFile)
+	err = s.bucket.GetObjectToFile(objectName, newFile)
+	c.Assert(err, IsNil)
+
+	eq, err = compareFiles(fileName, newFile)
+	c.Assert(err, IsNil)
+	c.Assert(eq, Equals, true)
+
+	err = s.bucket.DeleteObject(objectName)
+	c.Assert(err, IsNil)
+
+	// option
+	err = s.bucket.UploadFile(objectName, fileName, 100*1024, Routines(3), Meta("myprop", "mypropval"))
+
+	meta, err := s.bucket.GetObjectDetailedMeta(objectName)
+	c.Assert(err, IsNil)
+	c.Assert(meta.Get("X-Oss-Meta-Myprop"), Equals, "mypropval")
+
+	os.Remove(newFile)
+	err = s.bucket.GetObjectToFile(objectName, newFile)
+	c.Assert(err, IsNil)
+
+	eq, err = compareFiles(fileName, newFile)
+	c.Assert(err, IsNil)
+	c.Assert(eq, Equals, true)
+
+	err = s.bucket.DeleteObject(objectName)
+	c.Assert(err, IsNil)
+}
+
+// ErrorHooker a hook for UploadPart requests
+func ErrorHooker(id int, chunk FileChunk) error {
+	if chunk.Number == 5 {
+		time.Sleep(time.Second)
+		return fmt.Errorf("ErrorHooker")
+	}
+	return nil
+}
+
+// TestUploadRoutineWithoutRecoveryNegative negative cases for concurrent upload without checkpoint recovery
+func (s *OssUploadSuite) TestUploadRoutineWithoutRecoveryNegative(c *C) {
+	objectName := objectNamePrefix + "turwrn"
+	fileName := "../sample/BingWallpaper-2015-11-07.jpg"
+
+	uploadPartHooker = ErrorHooker
+	// worker goroutine error
+	err := s.bucket.UploadFile(objectName, fileName, 100*1024, Routines(2))
+	c.Assert(err, NotNil)
+	c.Assert(err.Error(), Equals, "ErrorHooker")
+	uploadPartHooker = defaultUploadPart
+
+	// Local file does not exist
+	err = s.bucket.UploadFile(objectName, "NotExist", 100*1024, Routines(2))
+	c.Assert(err, NotNil)
+
+	// The specified part size is invalid
+	err = s.bucket.UploadFile(objectName, fileName, 1024, Routines(2))
+	c.Assert(err, NotNil)
+
+	err = s.bucket.UploadFile(objectName, fileName, 1024*1024*1024*100, Routines(2))
+	c.Assert(err, NotNil)
+}
+
+// TestUploadRoutineWithRecovery concurrent upload with checkpoint recovery
+func (s *OssUploadSuite) TestUploadRoutineWithRecovery(c *C) {
+	objectName := objectNamePrefix + "turtr"
+	fileName := "../sample/BingWallpaper-2015-11-07.jpg"
+	newFile := "upload-new-file-2.jpg"
+
+	// Default Routines; checkpoint enabled with the default path fileName + ".cp"
+	// First upload: 4 of the parts are uploaded
+	uploadPartHooker = ErrorHooker
+	err := s.bucket.UploadFile(objectName, fileName, 100*1024, Checkpoint(true, ""))
+	c.Assert(err, NotNil)
+	c.Assert(err.Error(), Equals, "ErrorHooker")
+	uploadPartHooker = defaultUploadPart
+
+	// check cp
+	ucp := uploadCheckpoint{}
+	err = ucp.load(fileName + ".cp")
+	c.Assert(err, IsNil)
+	c.Assert(ucp.Magic, Equals, uploadCpMagic)
+	c.Assert(len(ucp.MD5), Equals, len("LC34jZU5xK4hlxi3Qn3XGQ=="))
+	c.Assert(ucp.FilePath, Equals, fileName)
+	c.Assert(ucp.FileStat.Size, Equals, int64(482048))
+	c.Assert(ucp.FileStat.LastModified.String(), Equals, "2015-12-17 18:43:03 +0800 CST")
+	c.Assert(ucp.FileStat.MD5, Equals, "")
+	c.Assert(ucp.ObjectKey, Equals, objectName)
+	c.Assert(len(ucp.UploadID), Equals, len("3F79722737D1469980DACEDCA325BB52"))
+	c.Assert(len(ucp.Parts), Equals, 5)
+	c.Assert(len(ucp.todoParts()), Equals, 1)
+	c.Assert(len(ucp.allParts()), Equals, 5)
+
+	// Second upload completes the remaining part
+	err = s.bucket.UploadFile(objectName, fileName, 100*1024, Checkpoint(true, ""))
+	c.Assert(err, IsNil)
+
+	os.Remove(newFile)
+	err = s.bucket.GetObjectToFile(objectName, newFile)
+	c.Assert(err, IsNil)
+
+	eq, err := compareFiles(fileName, newFile)
+	c.Assert(err, IsNil)
+	c.Assert(eq, Equals, true)
+
+	err = s.bucket.DeleteObject(objectName)
+	c.Assert(err, IsNil)
+
+	err = ucp.load(fileName + ".cp")
+	c.Assert(err, NotNil)
+
+	// Routines specified, checkpoint file specified
+	uploadPartHooker = ErrorHooker
+	err = s.bucket.UploadFile(objectName, fileName, 100*1024, Routines(2), Checkpoint(true, objectName+".cp"))
+	c.Assert(err, NotNil)
+	c.Assert(err.Error(), Equals, "ErrorHooker")
+	uploadPartHooker = defaultUploadPart
+
+	// check cp
+	ucp = uploadCheckpoint{}
+	err = ucp.load(objectName + ".cp")
+	c.Assert(err, IsNil)
+	c.Assert(ucp.Magic, Equals, uploadCpMagic)
+	c.Assert(len(ucp.MD5), Equals, len("LC34jZU5xK4hlxi3Qn3XGQ=="))
+	c.Assert(ucp.FilePath, Equals, fileName)
+	c.Assert(ucp.FileStat.Size, Equals, int64(482048))
+	c.Assert(ucp.FileStat.LastModified.String(), Equals, "2015-12-17 18:43:03 +0800 CST")
+	c.Assert(ucp.FileStat.MD5, Equals, "")
+	c.Assert(ucp.ObjectKey, Equals, objectName)
+	c.Assert(len(ucp.UploadID), Equals, len("3F79722737D1469980DACEDCA325BB52"))
+	c.Assert(len(ucp.Parts), Equals, 5)
+	c.Assert(len(ucp.todoParts()), Equals, 1)
+	c.Assert(len(ucp.allParts()), Equals, 5)
+
+	err = s.bucket.UploadFile(objectName, fileName, 100*1024, Routines(3), Checkpoint(true, objectName+".cp"))
+	c.Assert(err, IsNil)
+
+	os.Remove(newFile)
+	err = s.bucket.GetObjectToFile(objectName, newFile)
+	c.Assert(err, IsNil)
+
+	eq, err = compareFiles(fileName, newFile)
+	c.Assert(err, IsNil)
+	c.Assert(eq, Equals, true)
+
+	err = s.bucket.DeleteObject(objectName)
+	c.Assert(err, IsNil)
+
+	err = ucp.load(objectName + ".cp")
+	c.Assert(err, NotNil)
+
+	// Upload completes in one pass, with no error in between
+	err = s.bucket.UploadFile(objectName, fileName, 100*1024, Routines(3), Checkpoint(true, ""))
+	c.Assert(err, IsNil)
+
+	os.Remove(newFile)
+	err = s.bucket.GetObjectToFile(objectName, newFile)
+	c.Assert(err, IsNil)
+
+	eq, err = compareFiles(fileName, newFile)
+	c.Assert(err, IsNil)
+	c.Assert(eq, Equals, true)
+
+	err = s.bucket.DeleteObject(objectName)
+	c.Assert(err, IsNil)
+
+	// Upload with many goroutines, with no error in between
+	err = s.bucket.UploadFile(objectName, fileName, 100*1024, Routines(10), Checkpoint(true, ""))
+	c.Assert(err, IsNil)
+
+	os.Remove(newFile)
+	err = s.bucket.GetObjectToFile(objectName, newFile)
+	c.Assert(err, IsNil)
+
+	eq, err = compareFiles(fileName, newFile)
+	c.Assert(err, IsNil)
+	c.Assert(eq, Equals, true)
+
+	err = s.bucket.DeleteObject(objectName)
+	c.Assert(err, IsNil)
+
+	// option
+	err = s.bucket.UploadFile(objectName, fileName, 100*1024, Routines(3), Checkpoint(true, ""), Meta("myprop", "mypropval"))
+
+	meta, err := s.bucket.GetObjectDetailedMeta(objectName)
+	c.Assert(err, IsNil)
+	c.Assert(meta.Get("X-Oss-Meta-Myprop"), Equals, "mypropval")
+
+	os.Remove(newFile)
+	err = s.bucket.GetObjectToFile(objectName, newFile)
+	c.Assert(err, IsNil)
+
+	eq, err = compareFiles(fileName, newFile)
+	c.Assert(err, IsNil)
+	c.Assert(eq, Equals, true)
+
+	err = s.bucket.DeleteObject(objectName)
+	c.Assert(err, IsNil)
+}
+
+// TestUploadRoutineWithRecoveryNegative negative cases for concurrent upload with checkpoint recovery
+func (s *OssUploadSuite) TestUploadRoutineWithRecoveryNegative(c *C) {
+	objectName := objectNamePrefix + "turrn"
+	fileName := "../sample/BingWallpaper-2015-11-07.jpg"
+
+	// Local file does not exist
+	err := s.bucket.UploadFile(objectName, "NotExist", 100*1024, Checkpoint(true, ""))
+	c.Assert(err, NotNil)
+
+	err = s.bucket.UploadFile(objectName, "NotExist", 100*1024, Routines(2), Checkpoint(true, ""))
+	c.Assert(err, NotNil)
+
+	// The specified part size is invalid
+	err = s.bucket.UploadFile(objectName, fileName, 1024, Checkpoint(true, ""))
+	c.Assert(err, NotNil)
+
+	err = s.bucket.UploadFile(objectName, fileName, 1024, Routines(2), Checkpoint(true, ""))
+	c.Assert(err, NotNil)
+
+	err = s.bucket.UploadFile(objectName, fileName, 1024*1024*1024*100, Checkpoint(true, ""))
+	c.Assert(err, NotNil)
+
+	err = s.bucket.UploadFile(objectName, fileName, 1024*1024*1024*100, Routines(2), Checkpoint(true, ""))
+	c.Assert(err, NotNil)
+}
+
+// TestUploadLocalFileChange the local file is modified during upload
+func (s *OssUploadSuite) TestUploadLocalFileChange(c *C) {
+	objectName := objectNamePrefix + "tulfc"
+	fileName := "../sample/BingWallpaper-2015-11-07.jpg"
+	localFile := "../sample/BingWallpaper-2015-11-07-2.jpg"
+	newFile := "upload-new-file-3.jpg"
+
+	os.Remove(localFile)
+	err := copyFile(fileName, localFile)
+	c.Assert(err, IsNil)
+
+	// First upload: 4 of the parts are uploaded
+	uploadPartHooker = ErrorHooker
+	err = s.bucket.UploadFile(objectName, localFile, 100*1024, Checkpoint(true, ""))
+	c.Assert(err, NotNil)
+	c.Assert(err.Error(), Equals, "ErrorHooker")
+	uploadPartHooker = defaultUploadPart
+
+	os.Remove(localFile)
+	err = copyFile(fileName, localFile)
+	c.Assert(err, IsNil)
+
+	// The file changed, so the second upload re-uploads all parts
+	err = s.bucket.UploadFile(objectName, localFile, 100*1024, Checkpoint(true, ""))
+	c.Assert(err, IsNil)
+
+	os.Remove(newFile)
+	err = s.bucket.GetObjectToFile(objectName, newFile)
+	c.Assert(err, IsNil)
+
+	eq, err := compareFiles(fileName, newFile)
+	c.Assert(err, IsNil)
+	c.Assert(eq, Equals, true)
+
+	err = s.bucket.DeleteObject(objectName)
+	c.Assert(err, IsNil)
+}
+
+func copyFile(src, dst string) error {
+	srcFile, err := os.Open(src)
+	if err != nil {
+		return err
+	}
+	defer srcFile.Close()
+
+	dstFile, err := os.Create(dst)
+	if err != nil {
+		return err
+	}
+	defer dstFile.Close()
+
+	_, err = io.Copy(dstFile, srcFile)
+	return err
+}

+ 0 - 3
sample.go

@@ -26,9 +26,6 @@ func main() {
 	sample.PutObjectSample()
 	sample.GetObjectSample()
 
-	sample.MultipartUploadSample()
-	sample.MultipartCopySample()
-
 	sample.CnameSample()
 
 	fmt.Println("All samples completed")

+ 25 - 26
sample/get_object.go

@@ -6,7 +6,6 @@ import (
 	"io"
 	"io/ioutil"
 	"os"
-	"strconv"
 
 	"github.com/aliyun/aliyun-oss-go-sdk/oss"
 )
@@ -25,7 +24,7 @@ func GetObjectSample() {
 		HandleError(err)
 	}
 
-	// Scenario 1: download the object into a ReadCloser; remember to Close it
+	// Scenario 1: download the object into a ReadCloser; remember to Close it
 	body, err := bucket.GetObject(objectKey)
 	if err != nil {
 		HandleError(err)
@@ -37,7 +36,7 @@ func GetObjectSample() {
 	}
 	data = data // use data
 
-	// Scenario 2: download the object into a byte buffer; suitable for small objects
+	// Scenario 2: download the object into a byte buffer; suitable for small objects
 	buf := new(bytes.Buffer)
 	body, err = bucket.GetObject(objectKey)
 	if err != nil {
@@ -46,7 +45,7 @@ func GetObjectSample() {
 	io.Copy(buf, body)
 	body.Close()
 
-	// Scenario 3: download the object into a local file; the user opens the file
+	// Scenario 3: download the object into a local file; the user opens the file and passes in the handle.
 	fd, err := os.OpenFile("mynewfile-1.jpg", os.O_WRONLY|os.O_CREATE, 0660)
 	if err != nil {
 		HandleError(err)
@@ -60,20 +59,20 @@ func GetObjectSample() {
 	io.Copy(fd, body)
 	body.Close()
 
-	// Scenario 4: download the object into a local file
+	// Scenario 4: download the object into a local file
 	err = bucket.GetObjectToFile(objectKey, "mynewfile-2.jpg")
 	if err != nil {
 		HandleError(err)
 	}
 
-	// Scenario 5: download only if the constraints are met, otherwise return an error. GetObjectToFile offers the same feature.
-	// Last-modified-time constraint met: the download proceeds
+	// Scenario 5: download only if the constraints are met, otherwise return an error. GetObject/GetObjectToFile/DownloadFile all support this feature.
+	// Last-modified-time constraint met: the download proceeds
 	body, err = bucket.GetObject(objectKey, oss.IfModifiedSince(pastDate))
 	if err != nil {
 		HandleError(err)
 	}
 	body.Close()
-	// Last-modified-time constraint not met: the download does not proceed
+	// Last-modified-time constraint not met: the download does not proceed
 	_, err = bucket.GetObject(objectKey, oss.IfUnmodifiedSince(pastDate))
 	if err == nil {
 		HandleError(err)
@@ -84,45 +83,45 @@ func GetObjectSample() {
 		HandleError(err)
 	}
 	md5 := meta.Get(oss.HTTPHeaderEtag)
-	// Content (ETag) constraint met: the download proceeds
+	// Content (ETag) constraint met: the download proceeds
 	body, err = bucket.GetObject(objectKey, oss.IfMatch(md5))
 	if err != nil {
 		HandleError(err)
 	}
 	body.Close()
 
-	// Content (ETag) constraint not met: the download does not proceed
+	// Content (ETag) constraint not met: the download does not proceed
 	body, err = bucket.GetObject(objectKey, oss.IfNoneMatch(md5))
 	if err == nil {
 		HandleError(err)
 	}
 
-	// Scenario 6: download the object by start/end byte ranges; this enables resumable download. GetObjectToFile offers the same feature.
-	meta, err = bucket.GetObjectDetailedMeta(objectKey)
+	// Scenario 6: multipart download of large files, with support for concurrent download and checkpoint/resume.
+	// Multipart download with a 100KB part size. By default there is no concurrent download and no checkpoint/resume.
+	err = bucket.DownloadFile(objectKey, "mynewfile-3.jpg", 100*1024)
 	if err != nil {
 		HandleError(err)
 	}
-	fmt.Println("Object Meta:", meta[oss.HTTPHeaderContentLength])
 
-	var partSize int64 = 100 * 1024
-	objectSize, err := strconv.ParseInt(meta.Get(oss.HTTPHeaderContentLength), 10, 0)
-	fd, err = os.OpenFile("myfile.jpg", os.O_WRONLY|os.O_CREATE, 0660)
+	// 100KB part size, downloaded concurrently by 3 goroutines.
+	err = bucket.DownloadFile(objectKey, "mynewfile-3.jpg", 100*1024, oss.Routines(3))
 	if err != nil {
 		HandleError(err)
 	}
-	defer fd.Close()
 
-	for i := int64(0); i < objectSize; i += partSize {
-		option := oss.Range(i, oss.GetPartEnd(i, objectSize, partSize))
-		body, err := bucket.GetObject(objectKey, option)
-		if err != nil {
-			HandleError(err)
-		}
-		io.Copy(fd, body)
-		body.Close()
+	// 100KB part size, 3 goroutines downloading concurrently, with checkpoint/resume enabled.
+	err = bucket.DownloadFile(objectKey, "mynewfile-3.jpg", 100*1024, oss.Routines(3), oss.Checkpoint(true, ""))
+	if err != nil {
+		HandleError(err)
+	}
+
+	// Resumable download keeps a local checkpoint file recording which parts are already downloaded. Its path can be given as the second argument of Checkpoint; if empty, it defaults to the directory of the downloaded file.
+	err = bucket.DownloadFile(objectKey, "mynewfile-3.jpg", 100*1024, oss.Checkpoint(true, "mynewfile.cp"))
+	if err != nil {
+		HandleError(err)
 	}
 
-	// Scenario 7: for content transferred with GZIP compression. GetObject/GetObjectToWriter offer the same feature.
+	// Scenario 7: for content transferred with GZIP compression. GetObject/GetObjectToFile offer the same feature.
 	err = bucket.PutObjectFromFile(objectKey, htmlLocalFile)
 	if err != nil {
 		HandleError(err)

+ 0 - 223
sample/multipart_copy.go

@@ -1,223 +0,0 @@
-package sample
-
-import (
-	"fmt"
-	"sync"
-
-	"github.com/aliyun/aliyun-oss-go-sdk/oss"
-)
-
-// MultipartCopySample Multipart Copy Sample
-func MultipartCopySample() {
-	var objectSrc = "my-object-src"
-	var objectDesc = "my-object-desc"
-
-	// Create the bucket
-	bucket, err := GetTestBucket(bucketName)
-	if err != nil {
-		HandleError(err)
-	}
-
-	err = bucket.PutObjectFromFile(objectSrc, localFile)
-	if err != nil {
-		HandleError(err)
-	}
-
-	// Scenario 1: multipart copy of a large file, split by part count
-	chunks, err := oss.SplitFileByPartNum(localFile, 3)
-	if err != nil {
-		HandleError(err)
-	}
-
-	imur, err := bucket.InitiateMultipartUpload(objectDesc)
-	if err != nil {
-		HandleError(err)
-	}
-
-	parts := []oss.UploadPart{}
-	for _, chunk := range chunks {
-		part, err := bucket.UploadPartCopy(imur, objectSrc, chunk.Offset, chunk.Size,
-			chunk.Number)
-		if err != nil {
-			HandleError(err)
-		}
-		parts = append(parts, part)
-	}
-
-	_, err = bucket.CompleteMultipartUpload(imur, parts)
-	if err != nil {
-		HandleError(err)
-	}
-
-	err = bucket.DeleteObject(objectDesc)
-	if err != nil {
-		HandleError(err)
-	}
-
-	// Scenario 2: multipart copy of a large file, split by part size
-	chunks, err = oss.SplitFileByPartSize(localFile, 1024*100)
-	if err != nil {
-		HandleError(err)
-	}
-
-	imur, err = bucket.InitiateMultipartUpload(objectDesc)
-	if err != nil {
-		HandleError(err)
-	}
-
-	parts = []oss.UploadPart{}
-	for _, chunk := range chunks {
-		part, err := bucket.UploadPartCopy(imur, objectSrc, chunk.Offset, chunk.Size,
-			chunk.Number)
-		if err != nil {
-			HandleError(err)
-		}
-		parts = append(parts, part)
-	}
-
-	_, err = bucket.CompleteMultipartUpload(imur, parts)
-	if err != nil {
-		HandleError(err)
-	}
-
-	err = bucket.DeleteObject(objectDesc)
-	if err != nil {
-		HandleError(err)
-	}
-
-	// Scenario 3: multipart copy of a large file, specifying object properties at initialization
-	chunks, err = oss.SplitFileByPartNum(localFile, 3)
-	if err != nil {
-		HandleError(err)
-	}
-
-	imur, err = bucket.InitiateMultipartUpload(objectDesc, oss.Meta("myprop", "mypropval"))
-	if err != nil {
-		HandleError(err)
-	}
-
-	parts = []oss.UploadPart{}
-	for _, chunk := range chunks {
-		part, err := bucket.UploadPartCopy(imur, objectSrc, chunk.Offset, chunk.Size,
-			chunk.Number)
-		if err != nil {
-			HandleError(err)
-		}
-		parts = append(parts, part)
-	}
-
-	_, err = bucket.CompleteMultipartUpload(imur, parts)
-	if err != nil {
-		HandleError(err)
-	}
-
-	err = bucket.DeleteObject(objectDesc)
-	if err != nil {
-		HandleError(err)
-	}
-
-	// Scenario 4: multipart copy of a large file; each part can be copied by an independent thread/process/machine. Below, each goroutine copies one part.
-	partNum := 4
-	chunks, err = oss.SplitFileByPartNum(localFile, partNum)
-	if err != nil {
-		HandleError(err)
-	}
-
-	imur, err = bucket.InitiateMultipartUpload(objectDesc)
-	if err != nil {
-		HandleError(err)
-	}
-
-	// Copy and upload the parts concurrently
-	var waitgroup sync.WaitGroup
-	var ps = make([]oss.UploadPart, partNum)
-	for _, chunk := range chunks {
-		waitgroup.Add(1)
-		go func(chunk oss.FileChunk) {
-			part, err := bucket.UploadPartCopy(imur, objectSrc, chunk.Offset, chunk.Size,
-				chunk.Number)
-			if err != nil {
-				HandleError(err)
-			}
-			ps[chunk.Number-1] = part
-			waitgroup.Done()
-		}(chunk)
-	}
-
-	// Wait for the copies to finish
-	waitgroup.Wait()
-
-	// Signal completion
-	_, err = bucket.CompleteMultipartUpload(imur, ps)
-	if err != nil {
-		HandleError(err)
-	}
-
-	err = bucket.DeleteObject(objectDesc)
-	if err != nil {
-		HandleError(err)
-	}
-
-	// Scenario 5: multipart copy with constraints; the copy runs when they are met, otherwise an error is returned
-	chunks, err = oss.SplitFileByPartNum(localFile, 3)
-	if err != nil {
-		HandleError(err)
-	}
-
-	imur, err = bucket.InitiateMultipartUpload(objectDesc)
-	if err != nil {
-		HandleError(err)
-	}
-
-	parts = []oss.UploadPart{}
-	for _, chunk := range chunks {
-		constraint := oss.CopySourceIfMatch("InvalidETag")
-		_, err := bucket.UploadPartCopy(imur, objectSrc, chunk.Offset, chunk.Size,
-			chunk.Number, constraint)
-		fmt.Println(err)
-	}
-
-	err = bucket.AbortMultipartUpload(imur)
-	if err != nil {
-		HandleError(err)
-	}
-
-	err = bucket.DeleteObject(objectDesc)
-	if err != nil {
-		HandleError(err)
-	}
-
-	// Scenario 6: after copying some parts of a large file, abort the upload; the uploaded data is discarded and the UploadId becomes invalid
-	chunks, err = oss.SplitFileByPartNum(localFile, 3)
-	if err != nil {
-		HandleError(err)
-	}
-
-	imur, err = bucket.InitiateMultipartUpload(objectDesc)
-	if err != nil {
-		HandleError(err)
-	}
-
-	parts = []oss.UploadPart{}
-	for _, chunk := range chunks {
-		part, err := bucket.UploadPartCopy(imur, objectSrc, chunk.Offset, chunk.Size,
-			chunk.Number)
-		if err != nil {
-			HandleError(err)
-		}
-		parts = append(parts, part)
-	}
-
-	err = bucket.AbortMultipartUpload(imur)
-	if err != nil {
-		HandleError(err)
-	}
-
-	// Delete the object and the bucket
-	err = DeleteTestBucketAndObject(bucketName)
-	if err != nil {
-		HandleError(err)
-	}
-
-	fmt.Println("MultipartCopySample completed")
-}

+ 0 - 230
sample/multipart_upload.go

@@ -1,230 +0,0 @@
-package sample
-
-import (
-	"fmt"
-	"os"
-	"sync"
-
-	"github.com/aliyun/aliyun-oss-go-sdk/oss"
-)
-
-// MultipartUploadSample Multipart Upload Sample
-func MultipartUploadSample() {
-	// Create the bucket
-	bucket, err := GetTestBucket(bucketName)
-	if err != nil {
-		HandleError(err)
-	}
-
-	// Scenario 1: multipart upload of a large file, split by part count
-	chunks, err := oss.SplitFileByPartNum(localFile, 3)
-	if err != nil {
-		HandleError(err)
-	}
-
-	imur, err := bucket.InitiateMultipartUpload(objectKey)
-	if err != nil {
-		HandleError(err)
-	}
-
-	parts := []oss.UploadPart{}
-	for _, chunk := range chunks {
-		part, err := bucket.UploadPartFromFile(imur, localFile, chunk.Offset,
-			chunk.Size, chunk.Number)
-		if err != nil {
-			HandleError(err)
-		}
-		parts = append(parts, part)
-	}
-
-	_, err = bucket.CompleteMultipartUpload(imur, parts)
-	if err != nil {
-		HandleError(err)
-	}
-
-	err = bucket.DeleteObject(objectKey)
-	if err != nil {
-		HandleError(err)
-	}
-
-	// Scenario 2: multipart upload of a large file, split by part size
-	chunks, err = oss.SplitFileByPartSize(localFile, 1024*100)
-	if err != nil {
-		HandleError(err)
-	}
-
-	imur, err = bucket.InitiateMultipartUpload(objectKey)
-	if err != nil {
-		HandleError(err)
-	}
-
-	parts = []oss.UploadPart{}
-	for _, chunk := range chunks {
-		part, err := bucket.UploadPartFromFile(imur, localFile, chunk.Offset,
-			chunk.Size, chunk.Number)
-		if err != nil {
-			HandleError(err)
-		}
-		parts = append(parts, part)
-	}
-
-	_, err = bucket.CompleteMultipartUpload(imur, parts)
-	if err != nil {
-		HandleError(err)
-	}
-
-	err = bucket.DeleteObject(objectKey)
-	if err != nil {
-		HandleError(err)
-	}
-
-	chunks = []oss.FileChunk{
-		{Number: 1, Offset: 0 * 1024 * 1024, Size: 1024 * 1024},
-		{Number: 2, Offset: 1 * 1024 * 1024, Size: 1024 * 1024},
-		{Number: 3, Offset: 2 * 1024 * 1024, Size: 1024 * 1024},
-	}
-
-	// Scenario 3: large-file upload; you open the file yourself and pass in the handle
-	chunks, err = oss.SplitFileByPartNum(localFile, 3)
-	if err != nil {
-		HandleError(err)
-	}
-
-	fd, err := os.Open(localFile)
-	if err != nil {
-		HandleError(err)
-	}
-	defer fd.Close()
-
-	imur, err = bucket.InitiateMultipartUpload(objectKey)
-	if err != nil {
-		HandleError(err)
-	}
-
-	parts = []oss.UploadPart{}
-	for _, chunk := range chunks {
-		fd.Seek(chunk.Offset, os.SEEK_SET)
-		part, err := bucket.UploadPart(imur, fd, chunk.Size, chunk.Number)
-		if err != nil {
-			HandleError(err)
-		}
-		parts = append(parts, part)
-	}
-
-	_, err = bucket.CompleteMultipartUpload(imur, parts)
-	if err != nil {
-		HandleError(err)
-	}
-
-	err = bucket.DeleteObject(objectKey)
-	if err != nil {
-		HandleError(err)
-	}
-
-	// Scenario 4: multipart upload of a large file, specifying object properties at initialization
-	chunks, err = oss.SplitFileByPartNum(localFile, 3)
-	if err != nil {
-		HandleError(err)
-	}
-
-	imur, err = bucket.InitiateMultipartUpload(objectKey, oss.Meta("myprop", "mypropval"))
-	if err != nil {
-		HandleError(err)
-	}
-
-	parts = []oss.UploadPart{}
-	for _, chunk := range chunks {
-		part, err := bucket.UploadPartFromFile(imur, localFile, chunk.Offset,
-			chunk.Size, chunk.Number)
-		if err != nil {
-			HandleError(err)
-		}
-		parts = append(parts, part)
-	}
-
-	_, err = bucket.CompleteMultipartUpload(imur, parts)
-	if err != nil {
-		HandleError(err)
-	}
-
-	err = bucket.DeleteObject(objectKey)
-	if err != nil {
-		HandleError(err)
-	}
-
-	// Scenario 5: multipart upload of a large file; each part can be uploaded by an independent thread/process/machine. Below, each goroutine uploads one part.
-	partNum := 4
-	chunks, err = oss.SplitFileByPartNum(localFile, partNum)
-	if err != nil {
-		HandleError(err)
-	}
-
-	imur, err = bucket.InitiateMultipartUpload(objectKey)
-	if err != nil {
-		HandleError(err)
-	}
-
-	// Upload the parts concurrently
-	var waitgroup sync.WaitGroup
-	var ps = make([]oss.UploadPart, partNum)
-	for _, chunk := range chunks {
-		waitgroup.Add(1)
-		go func(chunk oss.FileChunk) {
-			part, err := bucket.UploadPartFromFile(imur, localFile, chunk.Offset,
-				chunk.Size, chunk.Number)
-			if err != nil {
-				HandleError(err)
-			}
-			ps[chunk.Number-1] = part
-			waitgroup.Done()
-		}(chunk)
-	}
-
-	// Wait for the uploads to finish
-	waitgroup.Wait()
-
-	// Signal completion
-	_, err = bucket.CompleteMultipartUpload(imur, ps)
-	if err != nil {
-		HandleError(err)
-	}
-
-	err = bucket.DeleteObject(objectKey)
-	if err != nil {
-		HandleError(err)
-	}
-
-	// Scenario 6: after uploading some parts of a large file, abort the upload; the uploaded data is discarded and the UploadId becomes invalid
-	chunks, err = oss.SplitFileByPartNum(localFile, 3)
-	if err != nil {
-		HandleError(err)
-	}
-
-	imur, err = bucket.InitiateMultipartUpload(objectKey)
-	if err != nil {
-		HandleError(err)
-	}
-
-	parts = []oss.UploadPart{}
-	for _, chunk := range chunks {
-		part, err := bucket.UploadPartFromFile(imur, localFile, chunk.Offset,
-			chunk.Size, chunk.Number)
-		if err != nil {
-			HandleError(err)
-		}
-		parts = append(parts, part)
-	}
-
-	err = bucket.AbortMultipartUpload(imur)
-	if err != nil {
-		HandleError(err)
-	}
-
-	// Delete the object and the bucket
-	err = DeleteTestBucketAndObject(bucketName)
-	if err != nil {
-		HandleError(err)
-	}
-
-	fmt.Println("MultipartUploadSample completed")
-}
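The removed scenario 5 above upstreams into the new `oss.Routines(n)` option, which implies the same bounded-worker pattern: n goroutines draining a queue of part numbers and recording each result at its part index. A minimal standalone sketch (illustrative only, not the SDK's internals; squaring the part number stands in for the actual upload):

```go
package main

import (
	"fmt"
	"sync"
)

// uploadAll mimics the worker pattern oss.Routines(n) implies: n goroutines
// drain a queue of part numbers; each result lands at its part's index, so
// no two workers ever write the same slice element.
func uploadAll(parts []int, routines int) []int {
	jobs := make(chan int, len(parts))
	results := make([]int, len(parts))
	var wg sync.WaitGroup
	for w := 0; w < routines; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for p := range jobs {
				results[p-1] = p * p // stand-in for uploading part p
			}
		}()
	}
	for _, p := range parts {
		jobs <- p
	}
	close(jobs)
	wg.Wait()
	return results
}

func main() {
	fmt.Println(uploadAll([]int{1, 2, 3, 4, 5}, 3))
}
```

Results stay ordered by part number regardless of which worker finishes first, which is what CompleteMultipartUpload needs.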

+ 22 - 14
sample/put_object.go

@@ -19,19 +19,19 @@ func PutObjectSample() {
 
 	var val = "花间一壶酒,独酌无相亲。 举杯邀明月,对影成三人。"
 
-	// Scenario 1: upload an object whose value is a string
+	// Scenario 1: upload an object whose value is a string
 	err = bucket.PutObject(objectKey, strings.NewReader(val))
 	if err != nil {
 		HandleError(err)
 	}
 
-	// Scenario 2: upload an object whose value is a []byte
+	// Scenario 2: upload an object whose value is a []byte
 	err = bucket.PutObject(objectKey, bytes.NewReader([]byte(val)))
 	if err != nil {
 		HandleError(err)
 	}
 
-	// Scenario 3: upload a local file; you open the file yourself and pass in the handle
+	// Scenario 3: upload a local file; the user opens the file and passes in the handle.
 	fd, err := os.Open(localFile)
 	if err != nil {
 		HandleError(err)
@@ -43,7 +43,13 @@ func PutObjectSample() {
 		HandleError(err)
 	}
 
-	// Scenario 4: upload an object, specifying object properties
+	// Scenario 4: upload a local file without opening it yourself.
+	err = bucket.PutObjectFromFile(objectKey, localFile)
+	if err != nil {
+		HandleError(err)
+	}
+
+	// Scenario 5: upload an object, specifying object properties. PutObject/PutObjectFromFile/UploadFile all support this feature.
 	options := []oss.Option{
 		oss.Expires(futureDate),
 		oss.ObjectACL(oss.ACLPublicRead),
@@ -60,28 +66,30 @@ func PutObjectSample() {
 	}
 	fmt.Println("Object Meta:", props)
 
-	// Scenario 5: upload a local file
-	err = bucket.PutObjectFromFile(objectKey, localFile)
+	// Scenario 6: multipart upload of large files, with support for concurrent upload and checkpoint/resume.
+	// Multipart upload with a 100KB part size. By default there is no concurrent upload and no checkpoint/resume.
+	err = bucket.UploadFile(objectKey, localFile, 100*1024)
 	if err != nil {
 		HandleError(err)
 	}
 
-	// Scenario 6: upload a local file, specifying object properties
-	options = []oss.Option{
-		oss.Expires(futureDate),
-		oss.ObjectACL(oss.ACLPublicRead),
-		oss.Meta("myprop", "mypropval"),
+	// 100KB part size, uploaded concurrently by 3 goroutines.
+	err = bucket.UploadFile(objectKey, localFile, 100*1024, oss.Routines(3))
+	if err != nil {
+		HandleError(err)
 	}
-	err = bucket.PutObjectFromFile(objectKey, localFile, options...)
+
+	// 100KB part size, 3 goroutines uploading concurrently, with checkpoint/resume enabled.
+	err = bucket.UploadFile(objectKey, localFile, 100*1024, oss.Routines(3), oss.Checkpoint(true, ""))
 	if err != nil {
 		HandleError(err)
 	}
 
-	props, err = bucket.GetObjectDetailedMeta(objectKey)
+	// Resumable upload keeps a local checkpoint file recording which parts are already uploaded. Its path can be given as the second argument of Checkpoint; if empty, it defaults to the directory of the uploaded file.
+	err = bucket.UploadFile(objectKey, localFile, 100*1024, oss.Checkpoint(true, localFile+".cp"))
 	if err != nil {
 		HandleError(err)
 	}
-	fmt.Println("Object Meta:", props)
 
 	// Delete the object and the bucket
 	err = DeleteTestBucketAndObject(bucketName)
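The checkpoint comments above say resumable transfers persist per-part progress in a local `.cp` file. The SDK's real on-disk format is an implementation detail and may differ; a hypothetical JSON-shaped record illustrates the idea of what a resume must be able to read back:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// uploadCheckpoint is a hypothetical shape for the local .cp file;
// the SDK's actual format is an implementation detail.
type uploadCheckpoint struct {
	FilePath  string `json:"file_path"`
	PartSize  int64  `json:"part_size"`
	DoneParts []int  `json:"done_parts"` // part numbers already uploaded
}

// roundTrip serializes and re-reads a checkpoint, as a resume would.
func roundTrip(cp uploadCheckpoint) (uploadCheckpoint, error) {
	data, err := json.Marshal(cp)
	if err != nil {
		return uploadCheckpoint{}, err
	}
	var back uploadCheckpoint
	err = json.Unmarshal(data, &back)
	return back, err
}

func main() {
	cp := uploadCheckpoint{FilePath: "local.jpg", PartSize: 100 * 1024, DoneParts: []int{1, 2}}
	back, err := roundTrip(cp)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(back.DoneParts), "parts can be skipped on resume")
}
```

On resume, any part number found in `DoneParts` is skipped and only the remaining parts are re-uploaded, which is why a modified source file (as in the test at the top of this diff) forces all parts to be uploaded again.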