
codec: Refactored for better APIs and improved performance (> 20% faster).

1. Allow setting AsSymbols to specify how/when to encode strings as symbols.

Symbols incur a performance cost, due to string comparisons, whenever a struct or map[string]XXX
is encoded. Users can now configure when values are encoded as symbols, trading off
encoding speed against encoded size.
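
For illustration, a minimal sketch of configuring this (it assumes, based on the bench_test.go diff further below, that AsSymbols is a field on the handle and that AsSymbolAll and AsSymbolNone are among its predefined values; the ugorji.net/codec import path is taken from the benchmark output):

    package main

    import "ugorji.net/codec"

    func main() {
        var bh codec.BincHandle
        // Encode eligible strings (struct field names, map string keys) as symbols.
        bh.AsSymbols = codec.AsSymbolAll
        // Or disable symbols, trading a larger encoded size for faster encoding:
        // bh.AsSymbols = codec.AsSymbolNone

        var b []byte
        v := map[string]interface{}{"name": "codec", "count": 1}
        if err := codec.NewEncoderBytes(&b, &bh).Encode(v); err != nil {
            panic(err)
        }
    }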

1. Remove msgpack time extensions.

There is not yet agreement on the format of the msgpack time extension,
so we removed ours to avoid giving the wrong impression.
With Go 1.2, we support encoding.BinaryMarshaler/BinaryUnmarshaler, and
time.Time's MarshalBinary/UnmarshalBinary methods are used by default.
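
If an application still needs a msgpack time mapping, it can register its own extension. The sketch below is only an illustration of one private (non spec-defined) mapping: it follows the AddExt signature used in the codecs_test.go diff further below, uses time.Time's Go 1.2 MarshalBinary/UnmarshalBinary format, and picks tag 1 arbitrarily:

    package main

    import (
        "reflect"
        "time"

        "ugorji.net/codec"
    )

    func main() {
        var mh codec.MsgpackHandle
        mh.AddExt(reflect.TypeOf(time.Time{}), 1,
            func(rv reflect.Value) ([]byte, error) {
                // Encode via time.Time.MarshalBinary (available since Go 1.2).
                return rv.Interface().(time.Time).MarshalBinary()
            },
            func(rv reflect.Value, bs []byte) error {
                // Decode into the settable time.Time value.
                var t time.Time
                if err := t.UnmarshalBinary(bs); err != nil {
                    return err
                }
                rv.Set(reflect.ValueOf(t))
                return nil
            },
        )

        var b []byte
        if err := codec.NewEncoderBytes(&b, &mh).Encode(time.Now()); err != nil {
            panic(err)
        }
    }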

1. Support fast paths for common slices and maps.

Reflection has overhead and incurs extra allocation costs.

We mitigate this for some common slices and maps by bypassing reflection
and encoding/decoding them directly.
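
The fast paths require no configuration; they apply automatically to the container types listed in the 0doc.go and README diffs further below (e.g. []int, []string, map[string]interface{}). A minimal round-trip sketch using only the public API from the usage docs:

    package main

    import "ugorji.net/codec"

    func main() {
        var mh codec.MsgpackHandle

        in := map[string]interface{}{"a": 1, "b": "two"} // a fast-path container type
        var b []byte
        if err := codec.NewEncoderBytes(&b, &mh).Encode(in); err != nil {
            panic(err)
        }

        out := make(map[string]interface{})
        if err := codec.NewDecoderBytes(b, &mh).Decode(&out); err != nil {
            panic(err)
        }
    }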

1. Support MapBySlice to allow encoding/decoding a map into a slice.

For some use cases, a map should be encoded as a stream array,
i.e. as a sequence of alternating key-value pairs.
Conversely, a map can be decoded from such a slice.
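
The diffs below do not show MapBySlice itself, so the following is only a hypothetical sketch. It assumes MapBySlice is a marker interface (a single MapBySlice() method) that a slice type implements to signal that its alternating key-value elements should be encoded as a map:

    package main

    import "ugorji.net/codec"

    // pairs holds k1, v1, k2, v2, ... and (under the assumption above) encodes
    // as a map because it implements the assumed MapBySlice marker interface.
    type pairs []interface{}

    func (pairs) MapBySlice() {}

    func main() {
        p := pairs{"name", "codec", "version", 1}
        var mh codec.MsgpackHandle
        var b []byte
        if err := codec.NewEncoderBytes(&b, &mh).Encode(p); err != nil {
            panic(err)
        }
    }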

1. Many other optimizations:

- Better use of pointers, with care taken when dereferencing vs copying values.
- ...

1. Re-organized code

- Code flows as you read it. Much better.
- Moved decode wasNilIntf handling to kInterface
- go fmt
- ...
Ugorji Nwoke committed 12 years ago · commit 3119e5cd3f
12 changed files with 1640 additions and 1175 deletions
  1. codec/0doc.go  +48 -71
  2. codec/README.md  +59 -49
  3. codec/bench_test.go  +65 -35
  4. codec/binc.go  +108 -104
  5. codec/codecs_test.go  +60 -39
  6. codec/decode.go  +457 -262
  7. codec/encode.go  +390 -222
  8. codec/ext_dep_test.go  +12 -12
  9. codec/helper.go  +193 -140
  10. codec/msgpack.go  +166 -196
  11. codec/rpc.go  +31 -36
  12. codec/time.go  +51 -9

+ 48 - 71
codec/0doc.go

@@ -19,46 +19,53 @@ the standard library (ie json, xml, gob, etc).
 Rich Feature Set includes:
 
   - Simple but extremely powerful and feature-rich API
-  - Very High Performance.   
+  - Very High Performance.
     Our extensive benchmarks show us outperforming Gob, Json and Bson by 2-4X.
     This was achieved by taking extreme care on:
       - managing allocation
-      - stack frame size (important due to Go's use of split stacks), 
-      - reflection use
+      - function frame size (important due to Go's use of split stacks),
+      - reflection use (and by-passing reflection for common types)
       - recursion implications
       - zero-copy mode (encoding/decoding to byte slice without using temp buffers)
-  - Correct.  
-    Care was taken to precisely handle corner cases like: 
+  - Correct.
+    Care was taken to precisely handle corner cases like:
       overflows, nil maps and slices, nil value in stream, etc.
-  - Efficient zero-copying into temporary byte buffers  
+  - Efficient zero-copying into temporary byte buffers
     when encoding into or decoding from a byte slice.
   - Standard field renaming via tags
-  - Encoding from any value  
+  - Encoding from any value
     (struct, slice, map, primitives, pointers, interface{}, etc)
-  - Decoding into pointer to any non-nil typed value  
+  - Decoding into pointer to any non-nil typed value
     (struct, slice, map, int, float32, bool, string, reflect.Value, etc)
   - Supports extension functions to handle the encode/decode of custom types
   - Support Go 1.2 encoding.BinaryMarshaler/BinaryUnmarshaler
-  - Schema-less decoding  
-    (decode into a pointer to a nil interface{} as opposed to a typed non-nil value).  
-    Includes Options to configure what specific map or slice type to use 
+  - Schema-less decoding
+    (decode into a pointer to a nil interface{} as opposed to a typed non-nil value).
+    Includes Options to configure what specific map or slice type to use
     when decoding an encoded list or map into a nil interface{}
   - Provides a RPC Server and Client Codec for net/rpc communication protocol.
   - Msgpack Specific:
       - Provides extension functions to handle spec-defined extensions (binary, timestamp)
-      - Options to resolve ambiguities in handling raw bytes (as string or []byte)  
+      - Options to resolve ambiguities in handling raw bytes (as string or []byte)
         during schema-less decoding (decoding into a nil interface{})
-      - RPC Server/Client Codec for msgpack-rpc protocol defined at: 
-        http://wiki.msgpack.org/display/MSGPACK/RPC+specification
+      - RPC Server/Client Codec for msgpack-rpc protocol defined at:
+        https://github.com/msgpack-rpc/msgpack-rpc/blob/master/spec.md
+  - Fast Paths for some container types:
+    For some container types, we circumvent reflection and its associated overhead
+    and allocation costs, and encode/decode directly. These types are:
+	    []interface{}
+	    []int
+	    []string
+	    map[interface{}]interface{}
+	    map[int]interface{}
+	    map[string]interface{}
 
 Extension Support
 
 Users can register a function to handle the encoding or decoding of
-their custom types. 
+their custom types.
 
-There are no restrictions on what the custom type can be. Extensions can
-be any type: pointers, structs, custom types off arrays/slices, strings,
-etc. Some examples:
+There are no restrictions on what the custom type can be. Some examples:
 
     type BisSet   []int
     type BitSet64 uint64
@@ -66,37 +73,31 @@ etc. Some examples:
     type MyStructWithUnexportedFields struct { a int; b bool; c []int; }
     type GifImage struct { ... }
 
-Typically, MyStructWithUnexportedFields is encoded as an empty map because
-it has no exported fields, while UUID will be encoded as a string,
-etc. However, with extension support, you can encode any of these
-however you like.
+As an illustration, MyStructWithUnexportedFields would normally be
+encoded as an empty map because it has no exported fields, while UUID
+would be encoded as a string. However, with extension support, you can
+encode any of these however you like.
 
-We provide implementations of these functions where the spec has defined
-an inter-operable format. For msgpack, these are Binary and
-time.Time. Library users will have to explicitly configure these as seen
-in the usage below.
+RPC
+
+RPC Client and Server Codecs are implemented, so the codecs can be used
+with the standard net/rpc package.
 
 Usage
 
 Typical usage model:
 
-    var (
-      mapStrIntfTyp = reflect.TypeOf(map[string]interface{}(nil))
-      sliceByteTyp = reflect.TypeOf([]byte(nil))
-      timeTyp = reflect.TypeOf(time.Time{})
-    )
-    
     // create and configure Handle
     var (
       bh codec.BincHandle
       mh codec.MsgpackHandle
     )
 
-    mh.MapType = mapStrIntfTyp
-    
-    // configure extensions for msgpack, to enable Binary and Time support for tags 0 and 1
-    mh.AddExt(sliceByteTyp, 0, mh.BinaryEncodeExt, mh.BinaryDecodeExt)
-    mh.AddExt(timeTyp, 1, mh.TimeEncodeExt, mh.TimeDecodeExt)
+    mh.MapType = reflect.TypeOf(map[string]interface{}(nil))
+
+    // configure extensions
+    // e.g. for msgpack, define functions and enable Time support for tag 1
+    // mh.AddExt(reflect.TypeOf(time.Time{}), 1, myMsgpackTimeEncodeExtFn, myMsgpackTimeDecodeExtFn)
 
     // create and use decoder/encoder
     var (
@@ -105,15 +106,15 @@ Typical usage model:
       b []byte
       h = &bh // or mh to use msgpack
     )
-    
+
     dec = codec.NewDecoder(r, h)
     dec = codec.NewDecoderBytes(b, h)
-    err = dec.Decode(&v) 
-    
+    err = dec.Decode(&v)
+
     enc = codec.NewEncoder(w, h)
     enc = codec.NewEncoderBytes(&b, h)
     err = enc.Encode(v)
-    
+
     //RPC Server
     go func() {
         for {
@@ -123,43 +124,19 @@ Typical usage model:
             rpc.ServeCodec(rpcCodec)
         }
     }()
-    
+
     //RPC Communication (client side)
-    conn, err = net.Dial("tcp", "localhost:5555")  
+    conn, err = net.Dial("tcp", "localhost:5555")
     rpcCodec := codec.GoRpc.ClientCodec(conn, h)
     //OR rpcCodec := codec.MsgpackSpecRpc.ClientCodec(conn, h)
     client := rpc.NewClientWithCodec(rpcCodec)
 
 Representative Benchmark Results
 
-A sample run of benchmark using "go test -bi -bench=.":
-
-   ..............................................
-   BENCHMARK INIT: 2013-10-04 14:36:50.381959842 -0400 EDT
-   To run full benchmark comparing encodings (MsgPack, Binc, JSON, GOB, etc), use: "go test -bench=."
-   Benchmark: 
-      	Struct recursive Depth:             1
-      	ApproxDeepSize Of benchmark Struct: 4694 bytes
-   Benchmark One-Pass Run:
-      	      bson: len: 3025 bytes
-      	   msgpack: len: 1560 bytes
-      	      binc: len: 1187 bytes
-      	       gob: len: 1972 bytes
-      	      json: len: 2538 bytes
-   ..............................................
-   PASS
-   Benchmark__Msgpack__Encode	   50000	     61683 ns/op	   15395 B/op	      91 allocs/op
-   Benchmark__Msgpack__Decode	   10000	    111090 ns/op	   15591 B/op	     437 allocs/op
-   Benchmark__Binc_____Encode	   50000	     73577 ns/op	   17572 B/op	      95 allocs/op
-   Benchmark__Binc_____Decode	   10000	    112656 ns/op	   16474 B/op	     318 allocs/op
-   Benchmark__Gob______Encode	   10000	    138140 ns/op	   21164 B/op	     237 allocs/op
-   Benchmark__Gob______Decode	    5000	    408067 ns/op	   83202 B/op	    1840 allocs/op
-   Benchmark__Json_____Encode	   20000	     80442 ns/op	   13861 B/op	     102 allocs/op
-   Benchmark__Json_____Decode	   10000	    249169 ns/op	   14169 B/op	     493 allocs/op
-   Benchmark__Bson_____Encode	   10000	    121970 ns/op	   27717 B/op	     514 allocs/op
-   Benchmark__Bson_____Decode	   10000	    161103 ns/op	   16444 B/op	     788 allocs/op
-
-To run full benchmark suite (including against vmsgpack and bson), 
+Run the benchmark suite using:
+   go test -bi -bench=. -benchmem
+
+To run full benchmark suite (including against vmsgpack and bson),
 see notes in ext_dep_test.go
 
 */

+ 59 - 49
codec/README.md

@@ -5,7 +5,7 @@ encode/decode support for different serialization formats.
 
 Supported Serialization formats are:
 
-  - msgpack: [http://wiki.msgpack.org/display/MSGPACK/Format+specification]
+  - msgpack: [https://github.com/msgpack/msgpack]
   - binc: [http://github.com/ugorji/binc]
 
 To install:
@@ -24,8 +24,8 @@ Rich Feature Set includes:
     Our extensive benchmarks show us outperforming Gob, Json and Bson by 2-4X.
     This was achieved by taking extreme care on:
       - managing allocation
-      - stack frame size (important due to Go's use of split stacks), 
-      - reflection use
+      - function frame size (important due to Go's use of split stacks),
+      - reflection use (and by-passing reflection for common types)
       - recursion implications
       - zero-copy mode (encoding/decoding to byte slice without using temp buffers)
   - Correct.  
@@ -39,6 +39,7 @@ Rich Feature Set includes:
   - Decoding into pointer to any non-nil typed value  
     (struct, slice, map, int, float32, bool, string, reflect.Value, etc)
   - Supports extension functions to handle the encode/decode of custom types
+  - Support Go 1.2 encoding.BinaryMarshaler/BinaryUnmarshaler
   - Schema-less decoding  
     (decode into a pointer to a nil interface{} as opposed to a typed non-nil value).  
     Includes Options to configure what specific map or slice type to use 
@@ -49,16 +50,23 @@ Rich Feature Set includes:
       - Options to resolve ambiguities in handling raw bytes (as string or []byte)  
         during schema-less decoding (decoding into a nil interface{})
       - RPC Server/Client Codec for msgpack-rpc protocol defined at: 
-        http://wiki.msgpack.org/display/MSGPACK/RPC+specification
+        https://github.com/msgpack-rpc/msgpack-rpc/blob/master/spec.md
+  - Fast Paths for some container types:  
+    For some container types, we circumvent reflection and its associated overhead
+    and allocation costs, and encode/decode directly. These types are:  
+	    []interface{}
+	    []int
+	    []string
+	    map[interface{}]interface{}
+	    map[int]interface{}
+	    map[string]interface{}
 
 ## Extension Support
 
 Users can register a function to handle the encoding or decoding of
-their custom types. 
+their custom types.
 
-There are no restrictions on what the custom type can be. Extensions can
-be any type: pointers, structs, custom types off arrays/slices, strings,
-etc. Some examples:
+There are no restrictions on what the custom type can be. Some examples:
 
     type BisSet   []int
     type BitSet64 uint64
@@ -66,37 +74,31 @@ etc. Some examples:
     type MyStructWithUnexportedFields struct { a int; b bool; c []int; }
     type GifImage struct { ... }
 
-Typically, MyStructWithUnexportedFields is encoded as an empty map because
-it has no exported fields, while UUID will be encoded as a string,
-etc. However, with extension support, you can encode any of these
-however you like.
+As an illustration, MyStructWithUnexportedFields would normally be
+encoded as an empty map because it has no exported fields, while UUID
+would be encoded as a string. However, with extension support, you can
+encode any of these however you like.
 
-We provide implementations of these functions where the spec has defined
-an inter-operable format. For msgpack, these are Binary and
-time.Time. Library users will have to explicitly configure these as seen
-in the usage below.
+## RPC
+
+RPC Client and Server Codecs are implemented, so the codecs can be used
+with the standard net/rpc package.
 
 ## Usage
 
 Typical usage model:
 
-    var (
-      mapStrIntfTyp = reflect.TypeOf(map[string]interface{}(nil))
-      sliceByteTyp = reflect.TypeOf([]byte(nil))
-      timeTyp = reflect.TypeOf(time.Time{})
-    )
-    
     // create and configure Handle
     var (
       bh codec.BincHandle
       mh codec.MsgpackHandle
     )
 
-    mh.MapType = mapStrIntfTyp
+    mh.MapType = reflect.TypeOf(map[string]interface{}(nil))
     
-    // configure extensions for msgpack, to enable Binary and Time support for tags 0 and 1
-    mh.AddExt(sliceByteTyp, 0, mh.BinaryEncodeExt, mh.BinaryDecodeExt)
-    mh.AddExt(timeTyp, 1, mh.TimeEncodeExt, mh.TimeDecodeExt)
+    // configure extensions
+    // e.g. for msgpack, define functions and enable Time support for tag 1
+    // mh.AddExt(reflect.TypeOf(time.Time{}), 1, myMsgpackTimeEncodeExtFn, myMsgpackTimeDecodeExtFn)
 
     // create and use decoder/encoder
     var (
@@ -123,41 +125,49 @@ Typical usage model:
             rpc.ServeCodec(rpcCodec)
         }
     }()
-    
+
     //RPC Communication (client side)
-    conn, err = net.Dial("tcp", "localhost:5555")  
-    rpcCodec := rpcH.ClientCodec(conn, h)  
+    conn, err = net.Dial("tcp", "localhost:5555")
+    rpcCodec := codec.GoRpc.ClientCodec(conn, h)
+    //OR rpcCodec := codec.MsgpackSpecRpc.ClientCodec(conn, h)
     client := rpc.NewClientWithCodec(rpcCodec)
 
 ## Representative Benchmark Results
 
-A sample run of benchmark using "go test -bi -bench=.":
+A sample run of benchmark using "go test -bi -bench=. -benchmem":
 
+    /proc/cpuinfo: Intel(R) Core(TM) i7-2630QM CPU @ 2.00GHz (HT)
+    
     ..............................................
+    BENCHMARK INIT: 2013-10-16 11:02:50.345970786 -0400 EDT
+    To run full benchmark comparing encodings (MsgPack, Binc, JSON, GOB, etc), use: "go test -bench=."
     Benchmark: 
     	Struct recursive Depth:             1
-    	ApproxDeepSize Of benchmark Struct: 4786
+    	ApproxDeepSize Of benchmark Struct: 4694 bytes
     Benchmark One-Pass Run:
-    	   msgpack: len: 1564
-    	      binc: len: 1191
-    	       gob: len: 1972
-    	      json: len: 2538
-    	 v-msgpack: len: 1600
-    	      bson: len: 3025
+    	 v-msgpack: len: 1600 bytes
+    	      bson: len: 3025 bytes
+    	   msgpack: len: 1560 bytes
+    	      binc: len: 1187 bytes
+    	       gob: len: 1972 bytes
+    	      json: len: 2538 bytes
     ..............................................
     PASS
-    Benchmark__Msgpack__Encode	   50000	     61731 ns/op
-    Benchmark__Msgpack__Decode	   10000	    115947 ns/op
-    Benchmark__Binc_____Encode	   50000	     64568 ns/op
-    Benchmark__Binc_____Decode	   10000	    113843 ns/op
-    Benchmark__Gob______Encode	   10000	    143956 ns/op
-    Benchmark__Gob______Decode	    5000	    431889 ns/op
-    Benchmark__Json_____Encode	   10000	    158662 ns/op
-    Benchmark__Json_____Decode	    5000	    310744 ns/op
-    Benchmark__Bson_____Encode	   10000	    172905 ns/op
-    Benchmark__Bson_____Decode	   10000	    228564 ns/op
-    Benchmark__VMsgpack_Encode	   20000	     81752 ns/op
-    Benchmark__VMsgpack_Decode	   10000	    160050 ns/op
+    Benchmark__Msgpack____Encode	   50000	     54359 ns/op	   14953 B/op	      83 allocs/op
+    Benchmark__Msgpack____Decode	   10000	    106531 ns/op	   14990 B/op	     410 allocs/op
+    Benchmark__Binc_NoSym_Encode	   50000	     53956 ns/op	   14966 B/op	      83 allocs/op
+    Benchmark__Binc_NoSym_Decode	   10000	    103751 ns/op	   14529 B/op	     386 allocs/op
+    Benchmark__Binc_Sym___Encode	   50000	     65961 ns/op	   17130 B/op	      88 allocs/op
+    Benchmark__Binc_Sym___Decode	   10000	    106310 ns/op	   15857 B/op	     287 allocs/op
+    Benchmark__Gob________Encode	   10000	    135944 ns/op	   21189 B/op	     237 allocs/op
+    Benchmark__Gob________Decode	    5000	    405390 ns/op	   83460 B/op	    1841 allocs/op
+    Benchmark__Json_______Encode	   20000	     79412 ns/op	   13874 B/op	     102 allocs/op
+    Benchmark__Json_______Decode	   10000	    247979 ns/op	   14202 B/op	     493 allocs/op
+    Benchmark__Bson_______Encode	   10000	    121762 ns/op	   27814 B/op	     514 allocs/op
+    Benchmark__Bson_______Decode	   10000	    162126 ns/op	   16514 B/op	     789 allocs/op
+    Benchmark__VMsgpack___Encode	   50000	     69155 ns/op	   12370 B/op	     344 allocs/op
+    Benchmark__VMsgpack___Decode	   10000	    151609 ns/op	   20307 B/op	     571 allocs/op
+    ok  	ugorji.net/codec	30.827s
 
 To run full benchmark suite (including against vmsgpack and bson), 
 see notes in ext\_dep\_test.go

+ 65 - 35
codec/bench_test.go

@@ -16,7 +16,7 @@ import (
 )
 
 // Sample way to run:
-// go test -bi -bv -bd=1 -benchmem -bench Msgpack__Encode
+// go test -bi -bv -bd=1 -benchmem -bench=.
 
 var (
 	_       = fmt.Printf
@@ -34,8 +34,10 @@ var (
 	benchCheckers  []benchChecker
 )
 
-type benchEncFn func(*TestStruc) ([]byte, error)
-type benchDecFn func([]byte, *TestStruc) error
+type benchEncFn func(interface{}) ([]byte, error)
+type benchDecFn func([]byte, interface{}) error
+type benchIntfFn func() interface{}
+
 type benchChecker struct {
 	name     string
 	encodefn benchEncFn
@@ -43,14 +45,14 @@ type benchChecker struct {
 }
 
 func benchInitFlags() {
-	flag.BoolVar(&benchInitDebug, "bdbg", false, "Bench Debug")
+	flag.BoolVar(&benchInitDebug, "bg", false, "Bench Debug")
 	flag.IntVar(&benchDepth, "bd", 1, "Bench Depth: If >1, potential unreliable results due to stack growth")
 	flag.BoolVar(&benchDoInitBench, "bi", false, "Run Bench Init")
 	flag.BoolVar(&benchVerify, "bv", false, "Verify Decoded Value during Benchmark")
 	flag.BoolVar(&benchUnscientificRes, "bu", false, "Show Unscientific Results during Benchmark")
 }
 
-func benchInit() {	
+func benchInit() {
 	benchTs = newTestStruc(benchDepth, true)
 	approxSize = approxDataSize(reflect.ValueOf(benchTs))
 	bytesLen := 1024 * 4 * (benchDepth + 1) * (benchDepth + 1)
@@ -93,6 +95,10 @@ func runBenchInit() {
 	}
 }
 
+func fnBenchNewTs() interface{} {
+	return new(TestStruc)
+}
+
 func doBenchCheck(name string, encfn benchEncFn, decfn benchDecFn) {
 	runtime.GC()
 	tnow := time.Now()
@@ -115,11 +121,11 @@ func doBenchCheck(name string, encfn benchEncFn, decfn benchDecFn) {
 	logT(nil, "\t%10s: len: %d bytes, encode: %v, decode: %v\n", name, encLen, encDur, decDur)
 }
 
-func fnBenchmarkEncode(b *testing.B, encName string, encfn benchEncFn) {
+func fnBenchmarkEncode(b *testing.B, encName string, ts interface{}, encfn benchEncFn) {
 	runtime.GC()
 	b.ResetTimer()
 	for i := 0; i < b.N; i++ {
-		_, err := encfn(benchTs)
+		_, err := encfn(ts)
 		if err != nil {
 			logT(b, "Error encoding benchTs: %s: %v", encName, err)
 			b.FailNow()
@@ -127,8 +133,10 @@ func fnBenchmarkEncode(b *testing.B, encName string, encfn benchEncFn) {
 	}
 }
 
-func fnBenchmarkDecode(b *testing.B, encName string, encfn benchEncFn, decfn benchDecFn) {
-	buf, err := encfn(benchTs)
+func fnBenchmarkDecode(b *testing.B, encName string, ts interface{},
+	encfn benchEncFn, decfn benchDecFn, newfn benchIntfFn,
+) {
+	buf, err := encfn(ts)
 	if err != nil {
 		logT(b, "Error encoding benchTs: %s: %v", encName, err)
 		b.FailNow()
@@ -136,13 +144,15 @@ func fnBenchmarkDecode(b *testing.B, encName string, encfn benchEncFn, decfn ben
 	runtime.GC()
 	b.ResetTimer()
 	for i := 0; i < b.N; i++ {
-		ts := new(TestStruc)
+		ts = newfn()
 		if err = decfn(buf, ts); err != nil {
 			logT(b, "Error decoding into new TestStruc: %s: %v", encName, err)
 			b.FailNow()
 		}
 		if benchVerify {
-			verifyTsTree(b, ts)
+			if vts, vok := ts.(*TestStruc); vok {
+				verifyTsTree(b, vts)
+			}
 		}
 	}
 }
@@ -190,70 +200,90 @@ func verifyOneOne(b *testing.B, ts *TestStruc) {
 	}
 }
 
-func fnMsgpackEncodeFn(ts *TestStruc) (bs []byte, err error) {
+func fnMsgpackEncodeFn(ts interface{}) (bs []byte, err error) {
 	err = NewEncoderBytes(&bs, testMsgpackH).Encode(ts)
 	return
 }
 
-func fnMsgpackDecodeFn(buf []byte, ts *TestStruc) error {
+func fnMsgpackDecodeFn(buf []byte, ts interface{}) error {
 	return NewDecoderBytes(buf, testMsgpackH).Decode(ts)
 }
 
-func fnBincEncodeFn(ts *TestStruc) (bs []byte, err error) {
+func fnBincEncodeFn(ts interface{}) (bs []byte, err error) {
 	err = NewEncoderBytes(&bs, testBincH).Encode(ts)
 	return
 }
 
-func fnBincDecodeFn(buf []byte, ts *TestStruc) error {
+func fnBincDecodeFn(buf []byte, ts interface{}) error {
 	return NewDecoderBytes(buf, testBincH).Decode(ts)
 }
 
-func fnGobEncodeFn(ts *TestStruc) ([]byte, error) {
+func fnGobEncodeFn(ts interface{}) ([]byte, error) {
 	bbuf := new(bytes.Buffer)
 	err := gob.NewEncoder(bbuf).Encode(ts)
 	return bbuf.Bytes(), err
 }
 
-func fnGobDecodeFn(buf []byte, ts *TestStruc) error {
+func fnGobDecodeFn(buf []byte, ts interface{}) error {
 	return gob.NewDecoder(bytes.NewBuffer(buf)).Decode(ts)
 }
 
-func fnJsonEncodeFn(ts *TestStruc) ([]byte, error) {
+func fnJsonEncodeFn(ts interface{}) ([]byte, error) {
 	return json.Marshal(ts)
 }
 
-func fnJsonDecodeFn(buf []byte, ts *TestStruc) error {
+func fnJsonDecodeFn(buf []byte, ts interface{}) error {
 	return json.Unmarshal(buf, ts)
 }
 
-func Benchmark__Msgpack__Encode(b *testing.B) {
-	fnBenchmarkEncode(b, "msgpack", fnMsgpackEncodeFn)
+func Benchmark__Msgpack____Encode(b *testing.B) {
+	fnBenchmarkEncode(b, "msgpack", benchTs, fnMsgpackEncodeFn)
+}
+
+func Benchmark__Msgpack____Decode(b *testing.B) {
+	fnBenchmarkDecode(b, "msgpack", benchTs, fnMsgpackEncodeFn, fnMsgpackDecodeFn, fnBenchNewTs)
+}
+
+func Benchmark__Binc_NoSym_Encode(b *testing.B) {
+	tSym := testBincH.AsSymbols
+	testBincH.AsSymbols = AsSymbolNone
+	fnBenchmarkEncode(b, "binc", benchTs, fnBincEncodeFn)
+	testBincH.AsSymbols = tSym
 }
 
-func Benchmark__Msgpack__Decode(b *testing.B) {
-	fnBenchmarkDecode(b, "msgpack", fnMsgpackEncodeFn, fnMsgpackDecodeFn)
+func Benchmark__Binc_NoSym_Decode(b *testing.B) {
+	tSym := testBincH.AsSymbols
+	testBincH.AsSymbols = AsSymbolNone
+	fnBenchmarkDecode(b, "binc", benchTs, fnBincEncodeFn, fnBincDecodeFn, fnBenchNewTs)
+	testBincH.AsSymbols = tSym
 }
 
-func Benchmark__Binc_____Encode(b *testing.B) {
-	fnBenchmarkEncode(b, "binc", fnBincEncodeFn)
+func Benchmark__Binc_Sym___Encode(b *testing.B) {
+	tSym := testBincH.AsSymbols
+	testBincH.AsSymbols = AsSymbolAll
+	fnBenchmarkEncode(b, "binc", benchTs, fnBincEncodeFn)
+	testBincH.AsSymbols = tSym
 }
 
-func Benchmark__Binc_____Decode(b *testing.B) {
-	fnBenchmarkDecode(b, "binc", fnBincEncodeFn, fnBincDecodeFn)
+func Benchmark__Binc_Sym___Decode(b *testing.B) {
+	tSym := testBincH.AsSymbols
+	testBincH.AsSymbols = AsSymbolAll
+	fnBenchmarkDecode(b, "binc", benchTs, fnBincEncodeFn, fnBincDecodeFn, fnBenchNewTs)
+	testBincH.AsSymbols = tSym
 }
 
-func Benchmark__Gob______Encode(b *testing.B) {
-	fnBenchmarkEncode(b, "gob", fnGobEncodeFn)
+func Benchmark__Gob________Encode(b *testing.B) {
+	fnBenchmarkEncode(b, "gob", benchTs, fnGobEncodeFn)
 }
 
-func Benchmark__Gob______Decode(b *testing.B) {
-	fnBenchmarkDecode(b, "gob", fnGobEncodeFn, fnGobDecodeFn)
+func Benchmark__Gob________Decode(b *testing.B) {
+	fnBenchmarkDecode(b, "gob", benchTs, fnGobEncodeFn, fnGobDecodeFn, fnBenchNewTs)
 }
 
-func Benchmark__Json_____Encode(b *testing.B) {
-	fnBenchmarkEncode(b, "json", fnJsonEncodeFn)
+func Benchmark__Json_______Encode(b *testing.B) {
+	fnBenchmarkEncode(b, "json", benchTs, fnJsonEncodeFn)
 }
 
-func Benchmark__Json_____Decode(b *testing.B) {
-	fnBenchmarkDecode(b, "json", fnJsonEncodeFn, fnJsonDecodeFn)
+func Benchmark__Json_______Decode(b *testing.B) {
+	fnBenchmarkDecode(b, "json", benchTs, fnJsonEncodeFn, fnJsonDecodeFn, fnBenchNewTs)
 }

+ 108 - 104
codec/binc.go

@@ -5,31 +5,14 @@ package codec
 
 import (
 	"math"
-	"reflect"
-	"sync/atomic"
+	// "reflect"
+	// "sync/atomic"
 	"time"
 	//"fmt"
 )
 
 //var _ = fmt.Printf
 
-//BincHandle is a Handle for the Binc Schema-Free Encoding Format
-//defined at https://github.com/ugorji/binc .
-//
-//BincHandle currently supports all Binc features with the following EXCEPTIONS:
-//  - only integers up to 64 bits of precision are supported.
-//    big integers are unsupported.
-//  - Only IEEE 754 binary32 and binary64 floats are supported (ie Go float32 and float64 types).
-//    extended precision and decimal IEEE 754 floats are unsupported.
-//  - Only UTF-8 strings supported.
-//    Unicode_Other Binc types (UTF16, UTF32) are currently unsupported.
-//Note that these EXCEPTIONS are temporary and full support is possible and may happen soon.
-type BincHandle struct {
-	extHandle
-	EncodeOptions
-	DecodeOptions
-}
-
 // vd as low 4 bits (there are 16 slots)
 const (
 	bincVdSpecial byte = iota
@@ -81,38 +64,14 @@ type bincEncDriver struct {
 	b [8]byte
 }
 
-type bincDecDriver struct {
-	r      decReader
-	h      *BincHandle
-	bdRead bool
-	bdType decodeEncodedType
-	bd     byte
-	vd     byte
-	vs     byte
-	b      [8]byte
-	m      map[uint32]string // symbols (use uint32 as key, as map optimizes for it)
-}
-
-func (_ *BincHandle) newEncDriver(w encWriter) encDriver {
-	return &bincEncDriver{w: w}
-}
-
-func (h *BincHandle) newDecDriver(r decReader) decDriver {
-	return &bincDecDriver{r: r, h: h}
-}
-
-func (_ *BincHandle) writeExt() bool {
-	return true
-}
-
 func (e *bincEncDriver) isBuiltinType(rt uintptr) bool {
 	return rt == timeTypId
 }
 
-func (e *bincEncDriver) encodeBuiltinType(rt uintptr, rv reflect.Value) {
+func (e *bincEncDriver) encodeBuiltin(rt uintptr, v interface{}) {
 	switch rt {
 	case timeTypId:
-		bs := encodeTime(rv.Interface().(time.Time))
+		bs := encodeTime(v.(time.Time))
 		e.w.writen1(bincVdTimestamp<<4 | uint8(len(bs)))
 		e.w.writeb(bs)
 	}
@@ -140,7 +99,6 @@ func (e *bincEncDriver) encodeFloat32(f float32) {
 }
 
 func (e *bincEncDriver) encodeFloat64(f float64) {
-	//if true { e.w.writen1(bincVdFloat << 4 | bincFlBin64); e.w.writeUint64(math.Float64bits(f)); return; }
 	if f == 0 {
 		e.w.writen1(bincVdSpecial<<4 | bincSpZeroFloat)
 		return
@@ -236,7 +194,10 @@ func (e *bincEncDriver) encodeString(c charEncoding, v string) {
 }
 
 func (e *bincEncDriver) encodeSymbol(v string) {
-	//if true { e.encodeString(c_UTF8, v); return; }
+	// if WriteSymbolsNoRefs {
+	// 	e.encodeString(c_UTF8, v)
+	// 	return
+	// }
 
 	//symbols only offer benefit when string length > 1.
 	//This is because strings with length 1 take only 2 bytes to store
@@ -264,9 +225,9 @@ func (e *bincEncDriver) encodeSymbol(v string) {
 			e.w.writeUint16(ui)
 		}
 	} else {
-		//e.s++
-		//ui = uint16(e.s)
-		ui = uint16(atomic.AddUint32(&e.s, 1))
+		e.s++
+		ui = uint16(e.s)
+		//ui = uint16(atomic.AddUint32(&e.s, 1))
 		e.m[v] = ui
 		var lenprec uint8
 		switch {
@@ -342,6 +303,17 @@ func (e *bincEncDriver) encLenNumber(bd byte, v uint64) {
 
 //------------------------------------
 
+type bincDecDriver struct {
+	r      decReader
+	bdRead bool
+	bdType valueType
+	bd     byte
+	vd     byte
+	vs     byte
+	b      [8]byte
+	m      map[uint32]string // symbols (use uint32 as key, as map optimizes for it)
+}
+
 func (d *bincDecDriver) initReadNext() {
 	if d.bdRead {
 		return
@@ -350,48 +322,50 @@ func (d *bincDecDriver) initReadNext() {
 	d.vd = d.bd >> 4
 	d.vs = d.bd & 0x0f
 	d.bdRead = true
-	d.bdType = detUnset
+	d.bdType = valueTypeUnset
 }
 
-func (d *bincDecDriver) currentEncodedType() decodeEncodedType {
-	if d.bdType == detUnset {
+func (d *bincDecDriver) currentEncodedType() valueType {
+	if d.bdType == valueTypeUnset {
 		switch d.vd {
 		case bincVdSpecial:
 			switch d.vs {
 			case bincSpNil:
-				d.bdType = detNil
+				d.bdType = valueTypeNil
 			case bincSpFalse, bincSpTrue:
-				d.bdType = detBool
+				d.bdType = valueTypeBool
 			case bincSpNan, bincSpNegInf, bincSpPosInf, bincSpZeroFloat:
-				d.bdType = detFloat
+				d.bdType = valueTypeFloat
 			case bincSpZero, bincSpNegOne:
-				d.bdType = detInt
+				d.bdType = valueTypeInt
 			default:
 				decErr("currentEncodedType: Unrecognized special value 0x%x", d.vs)
 			}
 		case bincVdSmallInt:
-			d.bdType = detInt
+			d.bdType = valueTypeInt
 		case bincVdUint:
-			d.bdType = detUint
+			d.bdType = valueTypeUint
 		case bincVdInt:
-			d.bdType = detInt
+			d.bdType = valueTypeInt
 		case bincVdFloat:
-			d.bdType = detFloat
-		case bincVdSymbol, bincVdString:
-			d.bdType = detString
+			d.bdType = valueTypeFloat
+		case bincVdString:
+			d.bdType = valueTypeString
+		case bincVdSymbol:
+			d.bdType = valueTypeSymbol
 		case bincVdByteArray:
-			d.bdType = detBytes
+			d.bdType = valueTypeBytes
 		case bincVdTimestamp:
-			d.bdType = detTimestamp
+			d.bdType = valueTypeTimestamp
 		case bincVdCustomExt:
-			d.bdType = detExt
+			d.bdType = valueTypeExt
 		case bincVdArray:
-			d.bdType = detArray
+			d.bdType = valueTypeArray
 		case bincVdMap:
-			d.bdType = detMap
+			d.bdType = valueTypeMap
 		default:
 			decErr("currentEncodedType: Unrecognized d.vd: 0x%x", d.vd)
-		}		
+		}
 	}
 	return d.bdType
 }
@@ -408,7 +382,7 @@ func (d *bincDecDriver) isBuiltinType(rt uintptr) bool {
 	return rt == timeTypId
 }
 
-func (d *bincDecDriver) decodeBuiltinType(rt uintptr, rv reflect.Value) {
+func (d *bincDecDriver) decodeBuiltin(rt uintptr, v interface{}) {
 	switch rt {
 	case timeTypId:
 		if d.vd != bincVdTimestamp {
@@ -418,7 +392,8 @@ func (d *bincDecDriver) decodeBuiltinType(rt uintptr, rv reflect.Value) {
 		if err != nil {
 			panic(err)
 		}
-		rv.Set(reflect.ValueOf(tt))
+		var vt *time.Time = v.(*time.Time)
+		*vt = tt
 		d.bdRead = false
 	}
 }
@@ -597,7 +572,7 @@ func (d *bincDecDriver) decodeFloat(chkOverflow32 bool) (f float64) {
 			return math.Inf(-1)
 		default:
 			decErr("Invalid d.vs decoding float where d.vd=bincVdSpecial: %v", d.vs)
-		}		
+		}
 	case bincVdFloat:
 		f = d.decFloat()
 	case bincVdUint:
@@ -751,93 +726,122 @@ func (d *bincDecDriver) decodeExt(verifyTag bool, tag byte) (xtag byte, xbs []by
 	return
 }
 
-func (d *bincDecDriver) decodeNaked() (rv reflect.Value, ctx decodeNakedContext) {
+func (d *bincDecDriver) decodeNaked() (v interface{}, vt valueType, decodeFurther bool) {
 	d.initReadNext()
-	var v interface{}
 
 	switch d.vd {
 	case bincVdSpecial:
 		switch d.vs {
 		case bincSpNil:
-			ctx = dncNil
-			d.bdRead = false
+			vt = valueTypeNil
 		case bincSpFalse:
+			vt = valueTypeBool
 			v = false
 		case bincSpTrue:
+			vt = valueTypeBool
 			v = true
 		case bincSpNan:
+			vt = valueTypeFloat
 			v = math.NaN()
 		case bincSpPosInf:
+			vt = valueTypeFloat
 			v = math.Inf(1)
 		case bincSpNegInf:
+			vt = valueTypeFloat
 			v = math.Inf(-1)
 		case bincSpZeroFloat:
+			vt = valueTypeFloat
 			v = float64(0)
 		case bincSpZero:
+			vt = valueTypeInt
 			v = int64(0) // int8(0)
 		case bincSpNegOne:
+			vt = valueTypeInt
 			v = int64(-1) // int8(-1)
 		default:
 			decErr("decodeNaked: Unrecognized special value 0x%x", d.vs)
 		}
 	case bincVdSmallInt:
+		vt = valueTypeInt
 		v = int64(int8(d.vs)) + 1 // int8(d.vs) + 1
 	case bincVdUint:
+		vt = valueTypeUint
 		v = d.decUint()
 	case bincVdInt:
+		vt = valueTypeInt
 		v = d.decInt()
 	case bincVdFloat:
+		vt = valueTypeFloat
 		v = d.decFloat()
 	case bincVdSymbol:
+		vt = valueTypeSymbol
 		v = d.decodeString()
 	case bincVdString:
+		vt = valueTypeString
 		v = d.decodeString()
 	case bincVdByteArray:
+		vt = valueTypeBytes
 		v, _ = d.decodeBytes(nil)
 	case bincVdTimestamp:
+		vt = valueTypeTimestamp
 		tt, err := decodeTime(d.r.readn(int(d.vs)))
 		if err != nil {
 			panic(err)
 		}
 		v = tt
 	case bincVdCustomExt:
-		//ctx = dncExt
+		vt = valueTypeExt
 		l := d.decLen()
-		xtag := d.r.readn1()
-		xbs := d.r.readn(l)
-		var bfn func(reflect.Value, []byte) error
-		rv, bfn = d.h.getDecodeExtForTag(xtag)
-		if bfn == nil {
-			// decErr("decodeNaked: Unable to find type mapped to extension tag: %v", xtag)
-			re := RawExt { xtag, xbs }
-			rv = reflect.ValueOf(&re).Elem()
-		} else if fnerr := bfn(rv, xbs); fnerr != nil {
-			panic(fnerr)
-		}
+		var re RawExt
+		re.Tag = d.r.readn1()
+		re.Data = d.r.readn(l)
+		v = &re
+		vt = valueTypeExt
 	case bincVdArray:
-		ctx = dncContainer
-		if d.h.SliceType == nil {
-			rv = reflect.New(intfSliceTyp).Elem()
-		} else {
-			rv = reflect.New(d.h.SliceType).Elem()
-		}
+		vt = valueTypeArray
+		decodeFurther = true
 	case bincVdMap:
-		ctx = dncContainer
-		if d.h.MapType == nil {
-			rv = reflect.MakeMap(mapIntfIntfTyp)
-		} else {
-			rv = reflect.MakeMap(d.h.MapType)
-		}
+		vt = valueTypeMap
+		decodeFurther = true
 	default:
 		decErr("decodeNaked: Unrecognized d.vd: 0x%x", d.vd)
 	}
 
-	if ctx == dncHandled {
+	if !decodeFurther {
 		d.bdRead = false
-		if v != nil {
-			rv = reflect.ValueOf(v)
-		}
 	}
 	return
 }
 
+//------------------------------------
+
+//BincHandle is a Handle for the Binc Schema-Free Encoding Format
+//defined at https://github.com/ugorji/binc .
+//
+//BincHandle currently supports all Binc features with the following EXCEPTIONS:
+//  - only integers up to 64 bits of precision are supported.
+//    big integers are unsupported.
+//  - Only IEEE 754 binary32 and binary64 floats are supported (ie Go float32 and float64 types).
+//    extended precision and decimal IEEE 754 floats are unsupported.
+//  - Only UTF-8 strings supported.
+//    Unicode_Other Binc types (UTF16, UTF32) are currently unsupported.
+//Note that these EXCEPTIONS are temporary and full support is possible and may happen soon.
+type BincHandle struct {
+	BasicHandle
+}
+
+func (h *BincHandle) newEncDriver(w encWriter) encDriver {
+	return &bincEncDriver{w: w}
+}
+
+func (h *BincHandle) newDecDriver(r decReader) decDriver {
+	return &bincDecDriver{r: r}
+}
+
+func (_ *BincHandle) writeExt() bool {
+	return true
+}
+
+func (h *BincHandle) getBasicHandle() *BasicHandle {
+	return &h.BasicHandle
+}

+ 60 - 39
codec/codecs_test.go

@@ -33,11 +33,11 @@ import (
 	"os/exec"
 	"path/filepath"
 	"reflect"
+	"runtime"
 	"strconv"
+	"sync/atomic"
 	"testing"
 	"time"
-	"sync/atomic"
-	"runtime"
 )
 
 type testVerifyArg int
@@ -51,20 +51,22 @@ const (
 )
 
 var (
-	testInitDebug     bool
-	testUseIoEncDec   bool
-	testStructToArray bool
-	_                           = fmt.Printf
-	skipVerifyVal   interface{} = &(struct{}{})
+	testInitDebug      bool
+	testUseIoEncDec    bool
+	testStructToArray  bool
+	testWriteNoSymbols bool
+
+	_                         = fmt.Printf
+	skipVerifyVal interface{} = &(struct{}{})
 
-	// For Go Time, do not use a descriptive timezone. 
+	// For Go Time, do not use a descriptive timezone.
 	// It's unnecessary, and makes it harder to do a reflect.DeepEqual.
 	// The Offset already tells what the offset should be, if not on UTC and unknown zone name.
-	timeLoc                     = time.FixedZone("", -8*60*60)         // UTC-08:00 //time.UTC-8
-	timeToCompare1              = time.Date(2012, 2, 2, 2, 2, 2, 2000, timeLoc) 
-	timeToCompare2              = time.Date(1900, 2, 2, 2, 2, 2, 2000, timeLoc) 
-	timeToCompare3              = time.Unix(0, 0).UTC()
-	timeToCompare4              = time.Time{}.UTC()
+	timeLoc        = time.FixedZone("", -8*60*60) // UTC-08:00 //time.UTC-8
+	timeToCompare1 = time.Date(2012, 2, 2, 2, 2, 2, 2000, timeLoc)
+	timeToCompare2 = time.Date(1900, 2, 2, 2, 2, 2, 2000, timeLoc)
+	timeToCompare3 = time.Unix(0, 0).UTC()
+	timeToCompare4 = time.Time{}.UTC()
 
 	table              []interface{} // main items we encode
 	tableVerify        []interface{} // we verify encoded things against this after decode
@@ -78,10 +80,11 @@ var (
 
 func testInitFlags() {
 	// delete(testDecOpts.ExtFuncs, timeTyp)
-	flag.BoolVar(&testInitDebug, "tdbg", false, "Test Debug")
-	flag.BoolVar(&testUseIoEncDec, "tio", false, "Use IO Reader/Writer for Marshal/Unmarshal")
-	flag.BoolVar(&testStructToArray, "ts2a", false, "Set StructToArray option")
-}	
+	flag.BoolVar(&testInitDebug, "tg", false, "Test Debug")
+	flag.BoolVar(&testUseIoEncDec, "ti", false, "Use IO Reader/Writer for Marshal/Unmarshal")
+	flag.BoolVar(&testStructToArray, "ts", false, "Set StructToArray option")
+	flag.BoolVar(&testWriteNoSymbols, "tn", false, "Set NoSymbols option")
+}
 
 type AnonInTestStruc struct {
 	AS        string
@@ -140,7 +143,7 @@ type TestRpcInt struct {
 func (r *TestRpcInt) Update(n int, res *int) error      { r.i = n; *res = r.i; return nil }
 func (r *TestRpcInt) Square(ignore int, res *int) error { *res = r.i * r.i; return nil }
 func (r *TestRpcInt) Mult(n int, res *int) error        { *res = r.i * n; return nil }
-func (r *TestRpcInt) EchoStruct(arg TestABC, res *string) error { 
+func (r *TestRpcInt) EchoStruct(arg TestABC, res *string) error {
 	*res = fmt.Sprintf("%#v", arg)
 	return nil
 }
@@ -150,7 +153,7 @@ func (r *TestRpcInt) Echo123(args []string, res *string) error {
 }
 
 func testVerifyVal(v interface{}, arg testVerifyArg) (v2 interface{}) {
-	//for python msgpack, 
+	//for python msgpack,
 	//  - all positive integers are unsigned 64-bit ints
 	//  - all floats are float64
 	switch iv := v.(type) {
@@ -261,7 +264,7 @@ func testVerifyVal(v interface{}, arg testVerifyArg) (v2 interface{}) {
 	return
 }
 
-func testInit() {	
+func testInit() {
 	gob.Register(new(TestStruc))
 	if testInitDebug {
 		ts0 := newTestStruc(2, false)
@@ -269,10 +272,28 @@ func testInit() {
 	}
 
 	testBincH.StructToArray = testStructToArray
+	if testWriteNoSymbols {
+		testBincH.AsSymbols = AsSymbolNone
+	} else {
+		testBincH.AsSymbols = AsSymbolAll
+	}
 	testMsgpackH.StructToArray = testStructToArray
-	testMsgpackH.RawToString = true 
-	//testMsgpackH.AddExt(byteSliceTyp, 0, testMsgpackH.BinaryEncodeExt, testMsgpackH.BinaryDecodeExt)
-	testMsgpackH.AddExt(timeTyp, 1, testMsgpackH.TimeEncodeExt, testMsgpackH.TimeDecodeExt)
+	testMsgpackH.RawToString = true
+	// testMsgpackH.AddExt(byteSliceTyp, 0, testMsgpackH.BinaryEncodeExt, testMsgpackH.BinaryDecodeExt)
+	// testMsgpackH.AddExt(timeTyp, 1, testMsgpackH.TimeEncodeExt, testMsgpackH.TimeDecodeExt)
+	testMsgpackH.AddExt(timeTyp, 1,
+		func(rv reflect.Value) ([]byte, error) {
+			return encodeTime(rv.Interface().(time.Time)), nil
+		},
+		func(rv reflect.Value, bs []byte) error {
+			tt, err := decodeTime(bs)
+			if err == nil {
+				rv.Set(reflect.ValueOf(tt))
+			}
+			return err
+		},
+	)
+
 	primitives := []interface{}{
 		int8(-8),
 		int16(-1616),
@@ -517,7 +538,6 @@ func doTestCodecTableOne(t *testing.T, testNil bool, h Handle,
 			continue
 		}
 
-		// debugf("=============>>>> %#v", v0check)
 		if err = deepEqual(v0check, v1); err == nil {
 			logT(t, "++++++++ Before and After marshal matched\n")
 		} else {
@@ -534,8 +554,8 @@ func testCodecTableOne(t *testing.T, h Handle) {
 	var oldWriteExt, oldRawToString bool
 	switch v := h.(type) {
 	case *MsgpackHandle:
-		oldWriteExt, v.WriteExt = v.WriteExt, true 
-		oldRawToString, v.RawToString = v.RawToString, true 
+		oldWriteExt, v.WriteExt = v.WriteExt, true
+		oldRawToString, v.RawToString = v.RawToString, true
 	}
 	doTestCodecTableOne(t, false, h, table, tableVerify)
 	//if true { panic("") }
@@ -544,9 +564,9 @@ func testCodecTableOne(t *testing.T, h Handle) {
 		v.WriteExt, v.RawToString = oldWriteExt, oldRawToString
 	}
 	// func TestMsgpackAll(t *testing.T) {
-	
+
 	idxTime, numPrim, numMap := 19, 23, 4
-	
+
 	//skip []interface{} containing time.Time
 	doTestCodecTableOne(t, false, h, table[:numPrim], tableVerify[:numPrim])
 	doTestCodecTableOne(t, false, h, table[numPrim+1:], tableVerify[numPrim+1:])
@@ -570,10 +590,10 @@ func testCodecTableOne(t *testing.T, h Handle) {
 	}
 
 	// func TestMsgpackNilIntf(t *testing.T) {
-	
+
 	//do newTestStruc and last element of map
 	doTestCodecTableOne(t, true, h, table[numPrim+numMap:], tableTestNilVerify[numPrim+numMap:])
-	//TODO? What is this one? 
+	//TODO? What is this one?
 	//doTestCodecTableOne(t, true, h, table[17:18], tableTestNilVerify[17:18])
 }
 
@@ -624,6 +644,7 @@ func testCodecMiscOne(t *testing.T, h Handle) {
 		logT(t, "Error marshalling p: %v, Err: %v", p, err)
 		t.FailNow()
 	}
+
 	m2 := map[string]int{}
 	p2 := []interface{}{m2}
 	err = testUnmarshal(&p2, bs, h)
@@ -748,7 +769,7 @@ func doTestRpcOne(t *testing.T, rr Rpc, h Handle, doRequest bool, exitSleepMs ti
 	if exitSleepMs == 0 {
 		defer ln.Close()
 		defer exitFn()
-	} 
+	}
 	if doRequest {
 		bs := connFn()
 		cc := rr.ClientCodec(bs, h)
@@ -854,8 +875,8 @@ func doTestMsgpackPythonGenStreams(t *testing.T) {
 //    - Go Client to Go RPC Service (contained within TestMsgpackRpcSpec)
 //    - Go client to Python RPC Service (contained within doTestMsgpackRpcSpecGoClientToPythonSvc)
 //    - Python Client to Go RPC Service (contained within doTestMsgpackRpcSpecPythonClientToGoSvc)
-// 
-// This allows us test the different calling conventions 
+//
+// This allows us test the different calling conventions
 //    - Go Service requires only one argument
 //    - Python Service allows multiple arguments
 
@@ -864,7 +885,7 @@ func doTestMsgpackRpcSpecGoClientToPythonSvc(t *testing.T) {
 	cmd := exec.Command("python", "msgpack_test.py", "rpc-server", openPort, "2")
 	checkErrT(t, cmd.Start())
 	time.Sleep(100 * time.Millisecond) // time for python rpc server to start
-	bs, err2 := net.Dial("tcp", ":" + openPort)
+	bs, err2 := net.Dial("tcp", ":"+openPort)
 	checkErrT(t, err2)
 	cc := MsgpackSpecRpc.ClientCodec(bs, testMsgpackH)
 	cl := rpc.NewClientWithCodec(cc)
@@ -888,11 +909,10 @@ func doTestMsgpackRpcSpecPythonClientToGoSvc(t *testing.T) {
 		logT(t, "         %v", string(cmdout))
 		t.FailNow()
 	}
-	checkEqualT(t, string(cmdout), 
+	checkEqualT(t, string(cmdout),
 		fmt.Sprintf("%#v\n%#v\n", []string{"A1", "B2", "C3"}, TestABC{"Aa", "Bb", "Cc"}))
 }
 
-
 func TestMsgpackCodecsTable(t *testing.T) {
 	testCodecTableOne(t, testMsgpackH)
 }
@@ -921,6 +941,7 @@ func TestBincRpcGo(t *testing.T) {
 	doTestRpcOne(t, GoRpc, testBincH, true, 0)
 }
 
-//TODO: 
-//  - Add test for decoding empty list/map in stream into a nil slice/map
-
+// TODO:
+//   Add Tests for:
+//   - decoding empty list/map in stream into a nil slice/map
+//   - binary(M|Unm)arsher support for time.Time

+ 457 - 262
codec/decode.go

@@ -6,6 +6,7 @@ package codec
 import (
 	"io"
 	"reflect"
+	// "runtime/debug"
 )
 
 // Some tagging information for error messages.
@@ -14,35 +15,6 @@ var (
 	msgBadDesc = "Unrecognized descriptor byte"
 )
 
-// when decoding without schema, the nakedContext tells us what 
-// we decoded into, or if decoding has been handled.
-type decodeNakedContext uint8
-
-const (
-	dncHandled decodeNakedContext = iota
-	dncNil
-	// dncExt
-	dncContainer
-)
-
-// decodeEncodedType is the current type in the encoded stream
-type decodeEncodedType uint8
-
-const (
-	detUnset decodeEncodedType = iota
-	detNil
-	detInt
-	detUint
-	detFloat
-	detBool
-	detString
-	detBytes
-	detMap
-	detArray
-	detTimestamp
-	detExt
-)
-
 // decReader abstracts the reading source, allowing implementations that can
 // read from an io.Reader or directly off a byte slice with zero-copying.
 type decReader interface {
@@ -57,12 +29,11 @@ type decReader interface {
 type decDriver interface {
 	initReadNext()
 	tryDecodeAsNil() bool
-	currentEncodedType() decodeEncodedType
+	currentEncodedType() valueType
 	isBuiltinType(rt uintptr) bool
-	decodeBuiltinType(rt uintptr, rv reflect.Value)
-	//decodeNaked should completely handle extensions, builtins, primitives, etc.
-	//Numbers are decoded as int64, uint64, float64 only (no smaller sized number types).
-	decodeNaked() (rv reflect.Value, ctx decodeNakedContext)
+	decodeBuiltin(rt uintptr, v interface{})
+	//decodeNaked: Numbers are decoded as int64, uint64, float64 only (no smaller sized number types).
+	decodeNaked() (v interface{}, vt valueType, decodeFurther bool)
 	decodeInt(bitsize uint8) (i int64)
 	decodeUint(bitsize uint8) (ui uint64)
 	decodeFloat(chkOverflow32 bool) (f float64)
@@ -75,33 +46,139 @@ type decDriver interface {
 	readArrayLen() int
 }
 
-// decFnInfo has methods for registering handling decoding of a specific type
-// based on some characteristics (builtin, extension, reflect Kind, etc)
-type decFnInfo struct {
-	ti *typeInfo
-	d   *Decoder
-	dd  decDriver
-	xfFn  func(reflect.Value, []byte) error
-	xfTag byte 
+type DecodeOptions struct {
+	// An instance of MapType is used during schema-less decoding of a map in the stream.
+	// If nil, we use map[interface{}]interface{}
+	MapType reflect.Type
+	// An instance of SliceType is used during schema-less decoding of an array in the stream.
+	// If nil, we use []interface{}
+	SliceType reflect.Type
+	// ErrorIfNoField controls whether an error is returned when decoding a map
+	// from a codec stream into a struct, and no matching struct field is found.
+	ErrorIfNoField bool
 }
 
-type decFn struct {
-	i *decFnInfo
-	f func(*decFnInfo, reflect.Value) 
+// ------------------------------------
+
+// ioDecReader is a decReader that reads off an io.Reader
+type ioDecReader struct {
+	r  io.Reader
+	br io.ByteReader
+	x  [8]byte //temp byte array re-used internally for efficiency
 }
 
-// A Decoder reads and decodes an object from an input stream in the codec format.
-type Decoder struct {
-	r decReader
-	d decDriver
-	h decodeHandleI
-	f map[uintptr]decFn
-	x []uintptr
-	s []decFn
+func (z *ioDecReader) readn(n int) (bs []byte) {
+	bs = make([]byte, n)
+	if _, err := io.ReadAtLeast(z.r, bs, n); err != nil {
+		panic(err)
+	}
+	return
+}
+
+func (z *ioDecReader) readb(bs []byte) {
+	if _, err := io.ReadAtLeast(z.r, bs, len(bs)); err != nil {
+		panic(err)
+	}
+}
+
+func (z *ioDecReader) readn1() uint8 {
+	if z.br != nil {
+		b, err := z.br.ReadByte()
+		if err != nil {
+			panic(err)
+		}
+		return b
+	}
+	z.readb(z.x[:1])
+	return z.x[0]
+}
+
+func (z *ioDecReader) readUint16() uint16 {
+	z.readb(z.x[:2])
+	return bigen.Uint16(z.x[:2])
+}
+
+func (z *ioDecReader) readUint32() uint32 {
+	z.readb(z.x[:4])
+	return bigen.Uint32(z.x[:4])
+}
+
+func (z *ioDecReader) readUint64() uint64 {
+	z.readb(z.x[:8])
+	return bigen.Uint64(z.x[:8])
+}
+
+// ------------------------------------
+
+// bytesDecReader is a decReader that reads off a byte slice with zero copying
+type bytesDecReader struct {
+	b []byte // data
+	c int    // cursor
+	a int    // available
+}
+
+func (z *bytesDecReader) consume(n int) (oldcursor int) {
+	if z.a == 0 {
+		panic(io.EOF)
+	}
+	if n > z.a {
+		decErr("Trying to read %v bytes. Only %v available", n, z.a)
+	}
+	// z.checkAvailable(n)
+	oldcursor = z.c
+	z.c = oldcursor + n
+	z.a = z.a - n
+	return
+}
+
+func (z *bytesDecReader) readn(n int) (bs []byte) {
+	c0 := z.consume(n)
+	bs = z.b[c0:z.c]
+	return
+}
+
+func (z *bytesDecReader) readb(bs []byte) {
+	copy(bs, z.readn(len(bs)))
+}
+
+func (z *bytesDecReader) readn1() uint8 {
+	c0 := z.consume(1)
+	return z.b[c0]
+}
+
+// Use binaryEncoding helper for 4 and 8 bits, but inline it for 2 bits
+// creating temp slice variable and copying it to helper function is expensive
+// for just 2 bits.
+
+func (z *bytesDecReader) readUint16() uint16 {
+	c0 := z.consume(2)
+	return uint16(z.b[c0+1]) | uint16(z.b[c0])<<8
+}
+
+func (z *bytesDecReader) readUint32() uint32 {
+	c0 := z.consume(4)
+	return bigen.Uint32(z.b[c0:z.c])
+}
+
+func (z *bytesDecReader) readUint64() uint64 {
+	c0 := z.consume(8)
+	return bigen.Uint64(z.b[c0:z.c])
+}
+
+// ------------------------------------
+
+// decFnInfo has methods for registering handling decoding of a specific type
+// based on some characteristics (builtin, extension, reflect Kind, etc)
+type decFnInfo struct {
+	ti    *typeInfo
+	d     *Decoder
+	dd    decDriver
+	xfFn  func(reflect.Value, []byte) error
+	xfTag byte
 }
 
 func (f *decFnInfo) builtin(rv reflect.Value) {
-	f.dd.decodeBuiltinType(f.ti.rtid, rv)
+	f.dd.decodeBuiltin(f.ti.rtid, rv.Addr().Interface())
 }
 
 func (f *decFnInfo) rawExt(rv reflect.Value) {
@@ -207,38 +284,80 @@ func (f *decFnInfo) kUint16(rv reflect.Value) {
 // }
 
 func (f *decFnInfo) kInterface(rv reflect.Value) {
-	if rv.IsNil() {
-		// if nil interface, use some hieristics to set the nil interface to an
-		// appropriate value based on the first byte read (byte descriptor bd)
-		rv2, ndesc := f.dd.decodeNaked()
-		if ndesc == dncNil {
-			return
+	// debugf("\t===> kInterface")
+	if !rv.IsNil() {
+		f.d.decodeValue(rv.Elem())
+		return
+	}
+	// nil interface:
+	// use some hieristics to set the nil interface to an
+	// appropriate value based on the first byte read (byte descriptor bd)
+	v, vt, decodeFurther := f.dd.decodeNaked()
+	if vt == valueTypeNil {
+		return
+	}
+	// Cannot decode into nil interface with methods (e.g. error, io.Reader, etc)
+	// if non-nil value in stream.
+	if num := f.ti.rt.NumMethod(); num > 0 {
+		decErr("decodeValue: Cannot decode non-nil codec value into nil %v (%v methods)",
+			f.ti.rt, num)
+	}
+	var rvn reflect.Value
+	var useRvn bool
+	switch vt {
+	case valueTypeMap:
+		if f.d.h.MapType == nil {
+			var m2 map[interface{}]interface{}
+			v = &m2
+		} else {
+			rvn = reflect.New(f.d.h.MapType).Elem()
+			useRvn = true
 		}
-		// Cannot decode into nil interface with methods (e.g. error, io.Reader, etc) 
-		// if non-nil value in stream.
-		if num := f.ti.rt.NumMethod(); num > 0 {
-			decErr("decodeValue: Cannot decode non-nil codec value into nil %v (%v methods)", 
-				f.ti.rt , num)
-		} 
-		if ndesc == dncHandled {
-			rv.Set(rv2)
-			return
+	case valueTypeArray:
+		if f.d.h.SliceType == nil {
+			var m2 []interface{}
+			v = &m2
+		} else {
+			rvn = reflect.New(f.d.h.SliceType).Elem()
+			useRvn = true
 		}
-		f.d.decodeValue(rv2)
-		rv.Set(rv2)
-	} else {
-		f.d.decodeValue(rv.Elem())
+	case valueTypeExt:
+		re := v.(*RawExt)
+		var bfn func(reflect.Value, []byte) error
+		rvn, bfn = f.d.h.getDecodeExtForTag(re.Tag)
+		if bfn == nil {
+			rvn = reflect.ValueOf(*re)
+		} else if fnerr := bfn(rvn, re.Data); fnerr != nil {
+			panic(fnerr)
+		}
+		rv.Set(rvn)
+		return
+	}
+	if decodeFurther {
+		if useRvn {
+			f.d.decodeValue(rvn)
+		} else if v != nil {
+			// this v is a pointer, so we need to dereference it when done
+			f.d.decode(v)
+			rvn = reflect.ValueOf(v).Elem()
+			useRvn = true
+		}
+	}
+	if useRvn {
+		rv.Set(rvn)
+	} else if v != nil {
+		rv.Set(reflect.ValueOf(v))
 	}
 }
 
 func (f *decFnInfo) kStruct(rv reflect.Value) {
 	fti := f.ti
-	if currEncodedType := f.dd.currentEncodedType(); currEncodedType == detMap {
+	if currEncodedType := f.dd.currentEncodedType(); currEncodedType == valueTypeMap {
 		containerLen := f.dd.readMapLen()
 		if containerLen == 0 {
 			return
 		}
-		tisfi := fti.sfi 
+		tisfi := fti.sfi
 		for j := 0; j < containerLen; j++ {
 			// var rvkencname string
 			// ddecode(&rvkencname)
@@ -254,8 +373,8 @@ func (f *decFnInfo) kStruct(rv reflect.Value) {
 				}
 				// f.d.decodeValue(ti.field(k, rv))
 			} else {
-				if f.d.h.errorIfNoField() {
-					decErr("No matching struct field found when decoding stream map with key: %v", 
+				if f.d.h.ErrorIfNoField {
+					decErr("No matching struct field found when decoding stream map with key: %v",
 						rvkencname)
 				} else {
 					var nilintf0 interface{}
@@ -263,7 +382,7 @@ func (f *decFnInfo) kStruct(rv reflect.Value) {
 				}
 			}
 		}
-	} else if currEncodedType == detArray {
+	} else if currEncodedType == valueTypeArray {
 		containerLen := f.dd.readArrayLen()
 		if containerLen == 0 {
 			return
@@ -286,7 +405,7 @@ func (f *decFnInfo) kStruct(rv reflect.Value) {
 			}
 		}
 	} else {
-		decErr("Only encoded map or array can be decoded into a struct. (decodeEncodedType: %x)", 
+		decErr("Only encoded map or array can be decoded into a struct. (valueType: %x)",
 			currEncodedType)
 	}
 }
@@ -296,6 +415,24 @@ func (f *decFnInfo) kSlice(rv reflect.Value) {
 	// may have come in here (which may not be settable).
 	// In places where the slice got from an array could be, we should guard with CanSet() calls.
 
+	// A slice can be set from a map or array in stream.
+
+	if shortCircuitReflectToFastPath {
+		if rv.CanAddr() {
+			switch f.ti.rtid {
+			case intfSliceTypId:
+				f.d.decSliceIntf(rv.Addr().Interface().(*[]interface{}))
+				return
+			case intSliceTypId:
+				f.d.decSliceInt(rv.Addr().Interface().(*[]int))
+				return
+			case strSliceTypId:
+				f.d.decSliceStr(rv.Addr().Interface().(*[]string))
+				return
+			}
+		}
+	}
+
 	if f.ti.rtid == byteSliceTypId { // rawbytes
 		if bs2, changed2 := f.dd.decodeBytes(rv.Bytes()); changed2 {
 			rv.SetBytes(bs2)
@@ -303,57 +440,83 @@ func (f *decFnInfo) kSlice(rv reflect.Value) {
 		return
 	}
 
-	containerLen := f.dd.readArrayLen()
+	containerLen, containerLenS := decContLens(f.dd)
 
 	if rv.IsNil() {
-		rv.Set(reflect.MakeSlice(f.ti.rt, containerLen, containerLen))
-	} 
+		rv.Set(reflect.MakeSlice(f.ti.rt, containerLenS, containerLenS))
+	}
 	if containerLen == 0 {
 		return
 	}
 
 	// if we need to reset rv but it cannot be set, we should err out.
 	// for example, if slice is got from unaddressable array, CanSet = false
-	if rvcap, rvlen := rv.Len(), rv.Cap(); containerLen > rvcap {
+	if rvcap, rvlen := rv.Len(), rv.Cap(); containerLenS > rvcap {
 		if rv.CanSet() {
-			rvn := reflect.MakeSlice(f.ti.rt, containerLen, containerLen)
+			rvn := reflect.MakeSlice(f.ti.rt, containerLenS, containerLenS)
 			if rvlen > 0 {
 				reflect.Copy(rvn, rv)
 			}
 			rv.Set(rvn)
 		} else {
-			decErr("Cannot reset slice with less cap: %v than stream contents: %v", 
-				rvcap, containerLen)
+			decErr("Cannot reset slice with less cap: %v than stream contents: %v",
+				rvcap, containerLenS)
 		}
-	} else if containerLen > rvlen {
-		rv.SetLen(containerLen)
+	} else if containerLenS > rvlen {
+		rv.SetLen(containerLenS)
 	}
-	for j := 0; j < containerLen; j++ {
+
+	for j := 0; j < containerLenS; j++ {
 		f.d.decodeValue(rv.Index(j))
 	}
 }
 
 func (f *decFnInfo) kArray(rv reflect.Value) {
-	f.d.decodeValue(rv.Slice(0, rv.Len()))
+	// f.d.decodeValue(rv.Slice(0, rv.Len()))
+	f.kSlice(rv.Slice(0, rv.Len()))
 }
 
 func (f *decFnInfo) kMap(rv reflect.Value) {
+	// debugf("\t=> kMap: rv: %v", rv)
+	if shortCircuitReflectToFastPath {
+		if rv.CanAddr() {
+			switch f.ti.rtid {
+			case mapStringIntfTypId:
+				f.d.decMapStrIntf(rv.Addr().Interface().(*map[string]interface{}))
+				return
+			case mapIntfIntfTypId:
+				f.d.decMapIntfIntf(rv.Addr().Interface().(*map[interface{}]interface{}))
+				return
+			case mapIntIntfTypId:
+				f.d.decMapIntIntf(rv.Addr().Interface().(*map[int]interface{}))
+				return
+			}
+		}
+	}
+
 	containerLen := f.dd.readMapLen()
+	// defer func() {
+	// 	if rv.CanInterface() {
+	// 		debugf("\t=> kMap: containerLen: %v, rv.I: %v", containerLen, rv.Interface())
+	// 	}
+	// }()
 
 	if rv.IsNil() {
 		rv.Set(reflect.MakeMap(f.ti.rt))
 	}
-	
+
 	if containerLen == 0 {
 		return
 	}
 
 	ktype, vtype := f.ti.rt.Key(), f.ti.rt.Elem()
+	ktypeId := reflect.ValueOf(ktype).Pointer()
 	for j := 0; j < containerLen; j++ {
 		rvk := reflect.New(ktype).Elem()
 		f.d.decodeValue(rvk)
 
-		if ktype == intfTyp {
+		// if ktype == intfTyp {
+		if ktypeId == intfTypId {
 			rvk = rvk.Elem()
 			if rvk.Type() == byteSliceTyp {
 				rvk = reflect.ValueOf(string(rvk.Bytes()))
@@ -369,51 +532,33 @@ func (f *decFnInfo) kMap(rv reflect.Value) {
 	}
 }
 
-// ioDecReader is a decReader that reads off an io.Reader
-type ioDecReader struct {
-	r io.Reader
-	br io.ByteReader
-	x [8]byte //temp byte array re-used internally for efficiency
-}
-
-// bytesDecReader is a decReader that reads off a byte slice with zero copying
-type bytesDecReader struct {
-	b []byte // data
-	c int    // cursor
-	a int    // available
-}
-
-type decodeHandleI interface {
-	getDecodeExt(rt uintptr) (tag byte, fn func(reflect.Value, []byte) error)
-	errorIfNoField() bool
-}
+// ----------------------------------------
 
-type DecodeOptions struct {
-	// An instance of MapType is used during schema-less decoding of a map in the stream.
-	// If nil, we use map[interface{}]interface{}
-	MapType reflect.Type
-	// An instance of SliceType is used during schema-less decoding of an array in the stream.
-	// If nil, we use []interface{}
-	SliceType reflect.Type
-	// ErrorIfNoField controls whether an error is returned when decoding a map
-	// from a codec stream into a struct, and no matching struct field is found.
-	ErrorIfNoField bool
+type decFn struct {
+	i *decFnInfo
+	f func(*decFnInfo, reflect.Value)
 }
 
-func (o *DecodeOptions) errorIfNoField() bool {
-	return o.ErrorIfNoField
+// A Decoder reads and decodes an object from an input stream in the codec format.
+type Decoder struct {
+	r decReader
+	d decDriver
+	h *BasicHandle
+	f map[uintptr]decFn
+	x []uintptr
+	s []decFn
 }
 
 // NewDecoder returns a Decoder for decoding a stream of bytes from an io.Reader.
-// 
+//
 // For efficiency, users are encouraged to pass in a memory buffered reader
-// (eg bufio.Reader, bytes.Buffer). 
+// (eg bufio.Reader, bytes.Buffer).
 func NewDecoder(r io.Reader, h Handle) *Decoder {
 	z := ioDecReader{
 		r: r,
 	}
 	z.br, _ = r.(io.ByteReader)
-	return &Decoder{r: &z, d: h.newDecDriver(&z), h: h}
+	return &Decoder{r: &z, d: h.newDecDriver(&z), h: h.getBasicHandle()}
 }
 
 // NewDecoderBytes returns a Decoder which efficiently decodes directly
@@ -423,7 +568,7 @@ func NewDecoderBytes(in []byte, h Handle) *Decoder {
 		b: in,
 		a: len(in),
 	}
-	return &Decoder{r: &z, d: h.newDecDriver(&z), h: h}
+	return &Decoder{r: &z, d: h.newDecDriver(&z), h: h.getBasicHandle()}
 }
 
 // Decode decodes the stream from reader and stores the result in the
@@ -433,7 +578,7 @@ func NewDecoderBytes(in []byte, h Handle) *Decoder {
 // Note that a pointer to a nil interface is not a nil pointer.
 // If you do not know what type of stream it is, pass in a pointer to a nil interface.
 // We will decode and store a value in that nil interface.
-// 
+//
 // Sample usages:
 //   // Decoding into a non-nil typed value
 //   var f float32
@@ -443,32 +588,39 @@ func NewDecoderBytes(in []byte, h Handle) *Decoder {
 //   var v interface{}
 //   dec := codec.NewDecoder(r, handle)
 //   err = dec.Decode(&v)
-// 
+//
 // When decoding into a nil interface{}, we will decode into an appropriate value based
 // on the contents of the stream:
-//   - Numbers are decoded as float64, int64 or uint64. 
-//   - Other values are decoded appropriately depending on the encoding: 
+//   - Numbers are decoded as float64, int64 or uint64.
+//   - Other values are decoded appropriately depending on the type:
 //     bool, string, []byte, time.Time, etc
 //   - Extensions are decoded as RawExt (if no ext function registered for the tag)
-// Configurations exist on the Handle to override defaults 
+// Configurations exist on the Handle to override defaults
 // (e.g. for MapType, SliceType and how to decode raw bytes).
-// 
-// When decoding into a non-nil interface{} value, the mode of encoding is based on the 
+//
+// When decoding into a non-nil interface{} value, the mode of decoding is based on the
 // type of the value. When a value is seen:
 //   - If an extension is registered for it, call that extension function
 //   - If it implements BinaryUnmarshaler, call its UnmarshalBinary(data []byte) error
 //   - Else decode it based on its reflect.Kind
-// 
+//
 // There are some special rules when decoding into containers (slice/array/map/struct).
-// Decode will typically use the stream contents to UPDATE the container. 
-//   - This means that for a struct or map, we just update matching fields or keys.
-//   - For a slice/array, we just update the first n elements, where n is length of the stream.
-//   - However, if decoding into a nil map/slice and the length of the stream is 0,
-//     we reset the destination map/slice to be a zero-length non-nil map/slice.
-//   - Also, if the encoded value is Nil in the stream, then we try to set
-//     the container to its "zero" value (e.g. nil for slice/map).
-//   - Note that a struct can be decoded from an array in the stream,
-//     by updating fields as they occur in the struct.
+// Decode will typically use the stream contents to UPDATE the container.
+//   - A map can be decoded from a stream map, by updating matching keys.
+//   - A slice can be decoded from a stream array,
+//     by updating the first n elements, where n is the length of the stream array.
+//   - A slice can be decoded from a stream map, by decoding as if
+//     it contains a sequence of key-value pairs.
+//   - A struct can be decoded from a stream map, by updating matching fields.
+//   - A struct can be decoded from a stream array,
+//     by updating fields as they occur in the struct (by index).
+//
+// When decoding a stream map or array with length of 0 into a nil map or slice,
+// we reset the destination map or slice to a zero-length value.
+//
+// However, when decoding a stream nil, we reset the destination container
+// to its "zero" value (e.g. nil for slice/map, etc).
+//
 func (d *Decoder) Decode(v interface{}) (err error) {
 	defer panicToErr(&err)
 	d.decode(v)
@@ -478,13 +630,14 @@ func (d *Decoder) Decode(v interface{}) (err error) {
 func (d *Decoder) decode(iv interface{}) {
 	d.d.initReadNext()
 
-	// Fast path included for various pointer types which cannot be registered as extensions
 	switch v := iv.(type) {
 	case nil:
 		decErr("Cannot decode into nil.")
+
 	case reflect.Value:
 		d.chkPtrValue(v)
 		d.decodeValue(v)
+
 	case *string:
 		*v = d.d.decodeString()
 	case *bool:
@@ -498,7 +651,7 @@ func (d *Decoder) decode(iv interface{}) {
 	case *int32:
 		*v = int32(d.d.decodeInt(32))
 	case *int64:
-		*v = int64(d.d.decodeInt(64))
+		*v = d.d.decodeInt(64)
 	case *uint:
 		*v = uint(d.d.decodeUint(uintBitsize))
 	case *uint8:
@@ -508,13 +661,31 @@ func (d *Decoder) decode(iv interface{}) {
 	case *uint32:
 		*v = uint32(d.d.decodeUint(32))
 	case *uint64:
-		*v = uint64(d.d.decodeUint(64))
+		*v = d.d.decodeUint(64)
 	case *float32:
 		*v = float32(d.d.decodeFloat(true))
 	case *float64:
 		*v = d.d.decodeFloat(false)
+	case *[]byte:
+		v2 := *v
+		*v, _ = d.d.decodeBytes(v2)
+
+	case *[]interface{}:
+		d.decSliceIntf(v)
+	case *[]int:
+		d.decSliceInt(v)
+	case *[]string:
+		d.decSliceStr(v)
+	case *map[string]interface{}:
+		d.decMapStrIntf(v)
+	case *map[interface{}]interface{}:
+		d.decMapIntfIntf(v)
+	case *map[int]interface{}:
+		d.decMapIntIntf(v)
+
 	case *interface{}:
 		d.decodeValue(reflect.ValueOf(iv).Elem())
+
 	default:
 		rv := reflect.ValueOf(iv)
 		d.chkPtrValue(rv)
@@ -535,7 +706,7 @@ func (d *Decoder) decodeValue(rv reflect.Value) {
 		}
 		return
 	}
-	
+
 	// If stream is not containing a nil value, then we can deref to the base
 	// non-pointer value, and decode into that.
 	for rv.Kind() == reflect.Ptr {
@@ -544,17 +715,15 @@ func (d *Decoder) decodeValue(rv reflect.Value) {
 		}
 		rv = rv.Elem()
 	}
-	
+
 	rt := rv.Type()
 	rtid := reflect.ValueOf(rt).Pointer()
-	
+
 	// retrieve or register a focus'ed function for this type
 	// to eliminate need to do the retrieval multiple times
-	
-	// if d.f == nil && d.s == nil {
-	// 	// debugf("---->Creating new dec f map for type: %v\n", rt)
-	// }
-	var fn decFn 
+
+	// if d.f == nil && d.s == nil { debugf("---->Creating new dec f map for type: %v\n", rt) }
+	var fn decFn
 	var ok bool
 	if useMapForCodecCache {
 		fn, ok = d.f[rtid]
@@ -568,8 +737,8 @@ func (d *Decoder) decodeValue(rv reflect.Value) {
 	}
 	if !ok {
 		// debugf("\tCreating new dec fn for type: %v\n", rt)
-		fi := decFnInfo { ti:getTypeInfo(rtid, rt), d:d, dd:d.d }
-		fn.i = &fi 
+		fi := decFnInfo{ti: getTypeInfo(rtid, rt), d: d, dd: d.d}
+		fn.i = &fi
 		// An extension can be registered for any type, regardless of the Kind
 		// (e.g. type BitSet int64, type MyStruct { / * unexported fields * / }, type X []int, etc.)
 		//
@@ -580,58 +749,58 @@ func (d *Decoder) decodeValue(rv reflect.Value) {
 		// NOTE: if decoding into a nil interface{}, we return a non-nil
 		// value even if the container registers a length of 0.
 		if rtid == rawExtTypId {
-			fn.f = (*decFnInfo).rawExt 
+			fn.f = (*decFnInfo).rawExt
 		} else if d.d.isBuiltinType(rtid) {
-			fn.f = (*decFnInfo).builtin 
+			fn.f = (*decFnInfo).builtin
 		} else if xfTag, xfFn := d.h.getDecodeExt(rtid); xfFn != nil {
 			fi.xfTag, fi.xfFn = xfTag, xfFn
-			fn.f = (*decFnInfo).ext 
+			fn.f = (*decFnInfo).ext
 		} else if supportBinaryMarshal && fi.ti.unm {
-			fn.f = (*decFnInfo).binaryMarshal 
+			fn.f = (*decFnInfo).binaryMarshal
 		} else {
 			switch rk := rt.Kind(); rk {
 			case reflect.String:
-				fn.f = (*decFnInfo).kString 
+				fn.f = (*decFnInfo).kString
 			case reflect.Bool:
-				fn.f = (*decFnInfo).kBool 
+				fn.f = (*decFnInfo).kBool
 			case reflect.Int:
-				fn.f = (*decFnInfo).kInt 
+				fn.f = (*decFnInfo).kInt
 			case reflect.Int64:
-				fn.f = (*decFnInfo).kInt64 
+				fn.f = (*decFnInfo).kInt64
 			case reflect.Int32:
-				fn.f = (*decFnInfo).kInt32 
+				fn.f = (*decFnInfo).kInt32
 			case reflect.Int8:
-				fn.f = (*decFnInfo).kInt8 
+				fn.f = (*decFnInfo).kInt8
 			case reflect.Int16:
-				fn.f = (*decFnInfo).kInt16 
+				fn.f = (*decFnInfo).kInt16
 			case reflect.Float32:
-				fn.f = (*decFnInfo).kFloat32 
+				fn.f = (*decFnInfo).kFloat32
 			case reflect.Float64:
-				fn.f = (*decFnInfo).kFloat64 
+				fn.f = (*decFnInfo).kFloat64
 			case reflect.Uint8:
-				fn.f = (*decFnInfo).kUint8 
+				fn.f = (*decFnInfo).kUint8
 			case reflect.Uint64:
-				fn.f = (*decFnInfo).kUint64 
+				fn.f = (*decFnInfo).kUint64
 			case reflect.Uint:
-				fn.f = (*decFnInfo).kUint 
+				fn.f = (*decFnInfo).kUint
 			case reflect.Uint32:
-				fn.f = (*decFnInfo).kUint32 
+				fn.f = (*decFnInfo).kUint32
 			case reflect.Uint16:
-				fn.f = (*decFnInfo).kUint16 
+				fn.f = (*decFnInfo).kUint16
 			// case reflect.Ptr:
-			// 	fn.f = (*decFnInfo).kPtr 
+			// 	fn.f = (*decFnInfo).kPtr
 			case reflect.Interface:
-				fn.f = (*decFnInfo).kInterface 
+				fn.f = (*decFnInfo).kInterface
 			case reflect.Struct:
-				fn.f = (*decFnInfo).kStruct 
+				fn.f = (*decFnInfo).kStruct
 			case reflect.Slice:
-				fn.f = (*decFnInfo).kSlice 
+				fn.f = (*decFnInfo).kSlice
 			case reflect.Array:
-				fn.f = (*decFnInfo).kArray 
+				fn.f = (*decFnInfo).kArray
 			case reflect.Map:
-				fn.f = (*decFnInfo).kMap 
+				fn.f = (*decFnInfo).kMap
 			default:
-				fn.f = (*decFnInfo).kErr 
+				fn.f = (*decFnInfo).kErr
 			}
 		}
 		if useMapForCodecCache {
@@ -644,9 +813,9 @@ func (d *Decoder) decodeValue(rv reflect.Value) {
 			d.x = append(d.x, rtid)
 		}
 	}
-	
+
 	fn.f(fn.i, rv)
-	
+
 	return
 }
 
@@ -663,102 +832,128 @@ func (d *Decoder) chkPtrValue(rv reflect.Value) {
 	}
 }
 
-// ------------------------------------
+// --------------------------------------------------
 
-func (z *ioDecReader) readn(n int) (bs []byte) {
-	bs = make([]byte, n)
-	if _, err := io.ReadAtLeast(z.r, bs, n); err != nil {
-		panic(err)
-	}
-	return
-}
+// short circuit functions for common maps and slices
 
-func (z *ioDecReader) readb(bs []byte) {	
-	if _, err := io.ReadAtLeast(z.r, bs, len(bs)); err != nil {
-		panic(err)
+func (d *Decoder) decSliceIntf(v *[]interface{}) {
+	_, containerLenS := decContLens(d.d)
+	s := *v
+	if s == nil {
+		s = make([]interface{}, containerLenS, containerLenS)
+	} else if containerLenS > cap(s) {
+		s = make([]interface{}, containerLenS, containerLenS)
+		copy(s, *v)
+	} else if containerLenS > len(s) {
+		s = s[:containerLenS]
 	}
-}
-
-func (z *ioDecReader) readn1() uint8 {
-	if z.br != nil {
-		b, err := z.br.ReadByte()
-		if err != nil {
-			panic(err)
-		}
-		return b
+	// debugf("\t=> decSliceIntf: containerLenS: %v", containerLenS)
+	for j := 0; j < containerLenS; j++ {
+		// debugf("\t=> decSliceIntf: j: %v", j)
+		d.decode(&s[j])
 	}
-	z.readb(z.x[:1])
-	return z.x[0]
-}
-
-func (z *ioDecReader) readUint16() uint16 {
-	z.readb(z.x[:2])
-	return bigen.Uint16(z.x[:2])
-}
-
-func (z *ioDecReader) readUint32() uint32 {
-	z.readb(z.x[:4])
-	return bigen.Uint32(z.x[:4])
-}
-
-func (z *ioDecReader) readUint64() uint64 {
-	z.readb(z.x[:8])
-	return bigen.Uint64(z.x[:8])
-}
-
-// ------------------------------------
-
-func (z *bytesDecReader) consume(n int) (oldcursor int) {
-	if z.a == 0 {
-		panic(io.EOF)
+	*v = s
+}
+
+func (d *Decoder) decSliceInt(v *[]int) {
+	_, containerLenS := decContLens(d.d)
+	s := *v
+	if s == nil {
+		s = make([]int, containerLenS, containerLenS)
+	} else if containerLenS > cap(s) {
+		s = make([]int, containerLenS, containerLenS)
+		copy(s, *v)
+	} else if containerLenS > len(s) {
+		s = s[:containerLenS]
 	}
-	if n > z.a {
-		decErr("Trying to read %v bytes. Only %v available", n, z.a)
+	for j := 0; j < containerLenS; j++ {
+		d.decode(&s[j])
 	}
-	// z.checkAvailable(n)
-	oldcursor = z.c
-	z.c = oldcursor + n
-	z.a = z.a - n
-	return
-}
-
-func (z *bytesDecReader) readn(n int) (bs []byte) {
-	c0 := z.consume(n)
-	bs = z.b[c0:z.c]
-	return
+	*v = s
+}
+
+func (d *Decoder) decSliceStr(v *[]string) {
+	_, containerLenS := decContLens(d.d)
+	s := *v
+	if s == nil {
+		s = make([]string, containerLenS, containerLenS)
+	} else if containerLenS > cap(s) {
+		s = make([]string, containerLenS, containerLenS)
+		copy(s, *v)
+	} else if containerLenS > len(s) {
+		s = s[:containerLenS]
+	}
+	for j := 0; j < containerLenS; j++ {
+		d.decode(&s[j])
+	}
+	*v = s
 }
 
-func (z *bytesDecReader) readb(bs []byte) {
-	copy(bs, z.readn(len(bs)))
+func (d *Decoder) decMapIntfIntf(v *map[interface{}]interface{}) {
+	containerLen := d.d.readMapLen()
+	m := *v
+	if m == nil {
+		m = make(map[interface{}]interface{}, containerLen)
+		*v = m
+	}
+	for j := 0; j < containerLen; j++ {
+		var mk interface{}
+		d.decode(&mk)
+		mv := m[mk]
+		d.decode(&mv)
+		m[mk] = mv
+	}
 }
 
-func (z *bytesDecReader) readn1() uint8 {
-	c0 := z.consume(1)
-	return z.b[c0]
+func (d *Decoder) decMapIntIntf(v *map[int]interface{}) {
+	containerLen := d.d.readMapLen()
+	m := *v
+	if m == nil {
+		m = make(map[int]interface{}, containerLen)
+		*v = m
+	}
+	for j := 0; j < containerLen; j++ {
+		d.d.initReadNext()
+		mk := int(d.d.decodeInt(intBitsize))
+		mv := m[mk]
+		d.decode(&mv)
+		m[mk] = mv
+	}
 }
 
-// Use binaryEncoding helper for 4 and 8 bits, but inline it for 2 bits
-// creating temp slice variable and copying it to helper function is expensive
-// for just 2 bits.
-
-func (z *bytesDecReader) readUint16() uint16 {
-	c0 := z.consume(2)
-	return uint16(z.b[c0+1]) | uint16(z.b[c0])<<8
+func (d *Decoder) decMapStrIntf(v *map[string]interface{}) {
+	containerLen := d.d.readMapLen()
+	m := *v
+	if m == nil {
+		m = make(map[string]interface{}, containerLen)
+		*v = m
+	}
+	for j := 0; j < containerLen; j++ {
+		d.d.initReadNext()
+		mk := d.d.decodeString()
+		mv := m[mk]
+		d.decode(&mv)
+		m[mk] = mv
+	}
 }
 
-func (z *bytesDecReader) readUint32() uint32 {
-	c0 := z.consume(4)
-	return bigen.Uint32(z.b[c0:z.c])
-}
+// ----------------------------------------
 
-func (z *bytesDecReader) readUint64() uint64 {
-	c0 := z.consume(8)
-	return bigen.Uint64(z.b[c0:z.c])
+func decContLens(dd decDriver) (containerLen, containerLenS int) {
+	switch currEncodedType := dd.currentEncodedType(); currEncodedType {
+	case valueTypeArray:
+		containerLen = dd.readArrayLen()
+		containerLenS = containerLen
+	case valueTypeMap:
+		containerLen = dd.readMapLen()
+		containerLenS = containerLen * 2
+	default:
+		decErr("Only an encoded map or array can be decoded into a slice. (valueType: %x)",
+			currEncodedType)
+	}
+	return
 }
 
-// ----------------------------------------
-
 func decErr(format string, params ...interface{}) {
 	doPanic(msgTagDec, format, params...)
 }
-
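A minimal usage sketch of the decode fast path above (not from the diff; it assumes this repository's codec package is imported as "codec", and the function and variable names are illustrative). Decoding into one of the short-circuited container types, e.g. *map[string]interface{} or *[]int, now skips the reflection machinery for the top-level container:

	func decodeIntoMap(data []byte) (map[string]interface{}, error) {
		var (
			mh codec.MsgpackHandle    // any supported Handle works; msgpack shown here
			v  map[string]interface{} // one of the short-circuited container types
		)
		// *map[string]interface{} is matched directly in Decoder.decode,
		// so no reflection is used for the outer container.
		err := codec.NewDecoderBytes(data, &mh).Decode(&v)
		return v, err
	}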

+ 390 - 222
codec/encode.go

@@ -4,13 +4,10 @@
 package codec
 
 import (
-	//"bufio"
 	"io"
 	"reflect"
-	//"fmt"
 )
 
-//var _ = fmt.Printf
 const (
 	// Some tagging information for error messages.
 	msgTagEnc         = "codec.encoder"
@@ -18,6 +15,28 @@ const (
 	// maxTimeSecs32 = math.MaxInt32 / 60 / 24 / 366
 )
 
+// AsSymbolFlag defines what should be encoded as symbols.
+type AsSymbolFlag uint8
+
+const (
+	// AsSymbolDefault is the default.
+	// Currently, this means only encode struct field names as symbols.
+	// The default is subject to change.
+	AsSymbolDefault AsSymbolFlag = iota
+
+	// AsSymbolAll means encode anything which could be a symbol as a symbol.
+	AsSymbolAll = 0xfe
+
+	// AsSymbolNone means do not encode anything as a symbol.
+	AsSymbolNone = 1 << iota
+
+	// AsSymbolMapStringKeysFlag means encode keys in map[string]XXX as symbols.
+	AsSymbolMapStringKeysFlag
+
+	// AsSymbolStructFieldNameFlag means encode struct field names as symbols.
+	AsSymbolStructFieldNameFlag
+)
+
 // encWriter abstracting writing to a byte array or to an io.Writer.
 type encWriter interface {
 	writeUint16(uint16)
@@ -33,7 +52,7 @@ type encWriter interface {
 // encDriver abstracts the actual codec (binc vs msgpack, etc)
 type encDriver interface {
 	isBuiltinType(rt uintptr) bool
-	encodeBuiltinType(rt uintptr, rv reflect.Value)
+	encodeBuiltin(rt uintptr, v interface{})
 	encodeNil()
 	encodeInt(i int64)
 	encodeUint(i uint64)
@@ -51,40 +70,6 @@ type encDriver interface {
 	//encStringRunes(c charEncoding, v []rune)
 }
 
-// encodeHandleI is the interface that the encode functions need.
-type encodeHandleI interface {
-	getEncodeExt(rt uintptr) (tag byte, fn func(reflect.Value) ([]byte, error))
-	writeExt() bool
-	structToArray() bool
-}
-
-type encFnInfo struct {
-	ti   *typeInfo
-	e     *Encoder
-	ee    encDriver
-	xfFn  func(reflect.Value) ([]byte, error)
-	xfTag byte 
-}
-
-// encFn encapsulates the captured variables and the encode function.
-// This way, we only do some calculations one times, and pass to the 
-// code block that should be called (encapsulated in a function) 
-// instead of executing the checks every time.
-type encFn struct {
-	i *encFnInfo
-	f func(*encFnInfo, reflect.Value)
-}
-
-// An Encoder writes an object to an output stream in the codec format.
-type Encoder struct {
-	w encWriter
-	e encDriver
-	h encodeHandleI
-	f map[uintptr]encFn
-	x []uintptr
-	s []encFn
-}
-
 type ioEncWriterWriter interface {
 	WriteByte(c byte) error
 	WriteString(s string) (n int, err error)
@@ -95,18 +80,111 @@ type ioEncStringWriter interface {
 	WriteString(s string) (n int, err error)
 }
 
+type EncodeOptions struct {
+	// Encode a struct as an array, and not as a map.
+	StructToArray bool
+
+	// AsSymbols defines what should be encoded as symbols.
+	//
+	// Encoding as symbols can reduce the encoded size significantly.
+	//
+	// However, during encoding, each string to be encoded as a symbol must
+	// be checked to see if it has been seen before. Consequently, encoding time
+	// will increase when using symbols, because string comparisons have a clear cost.
+	//
+	// Sample values:
+	//   AsSymbolNone
+	//   AsSymbolAll
+	//   AsSymbolMapStringKeysFlag
+	//   AsSymbolMapStringKeysFlag | AsSymbolStructFieldNameFlag
+	AsSymbols AsSymbolFlag
+}
+
+// ---------------------------------------------
+
 type simpleIoEncWriterWriter struct {
-	w io.Writer 
+	w  io.Writer
 	bw io.ByteWriter
 	sw ioEncStringWriter
 }
 
+func (o *simpleIoEncWriterWriter) WriteByte(c byte) (err error) {
+	if o.bw != nil {
+		return o.bw.WriteByte(c)
+	}
+	_, err = o.w.Write([]byte{c})
+	return
+}
+
+func (o *simpleIoEncWriterWriter) WriteString(s string) (n int, err error) {
+	if o.sw != nil {
+		return o.sw.WriteString(s)
+	}
+	return o.w.Write([]byte(s))
+}
+
+func (o *simpleIoEncWriterWriter) Write(p []byte) (n int, err error) {
+	return o.w.Write(p)
+}
+
+// ----------------------------------------
+
 // ioEncWriter implements encWriter and can write to an io.Writer implementation
 type ioEncWriter struct {
 	w ioEncWriterWriter
 	x [8]byte // temp byte array re-used internally for efficiency
 }
 
+func (z *ioEncWriter) writeUint16(v uint16) {
+	bigen.PutUint16(z.x[:2], v)
+	z.writeb(z.x[:2])
+}
+
+func (z *ioEncWriter) writeUint32(v uint32) {
+	bigen.PutUint32(z.x[:4], v)
+	z.writeb(z.x[:4])
+}
+
+func (z *ioEncWriter) writeUint64(v uint64) {
+	bigen.PutUint64(z.x[:8], v)
+	z.writeb(z.x[:8])
+}
+
+func (z *ioEncWriter) writeb(bs []byte) {
+	n, err := z.w.Write(bs)
+	if err != nil {
+		panic(err)
+	}
+	if n != len(bs) {
+		encErr("write: Incorrect num bytes written. Expecting: %v, Wrote: %v", len(bs), n)
+	}
+}
+
+func (z *ioEncWriter) writestr(s string) {
+	n, err := z.w.WriteString(s)
+	if err != nil {
+		panic(err)
+	}
+	if n != len(s) {
+		encErr("write: Incorrect num bytes written. Expecting: %v, Wrote: %v", len(s), n)
+	}
+}
+
+func (z *ioEncWriter) writen1(b byte) {
+	if err := z.w.WriteByte(b); err != nil {
+		panic(err)
+	}
+}
+
+func (z *ioEncWriter) writen2(b1 byte, b2 byte) {
+	z.writen1(b1)
+	z.writen1(b2)
+}
+
+func (z *ioEncWriter) atEndOfEncode() {}
+
+// ----------------------------------------
+
 // bytesEncWriter implements encWriter and can write to an byte slice.
 // It is used by Marshal function.
 type bytesEncWriter struct {
@@ -115,51 +193,89 @@ type bytesEncWriter struct {
 	out *[]byte // write out on atEndOfEncode
 }
 
-type EncodeOptions struct {
-	// Encode a struct as an array, and not as a map.
-	StructToArray bool
+func (z *bytesEncWriter) writeUint16(v uint16) {
+	c := z.grow(2)
+	z.b[c] = byte(v >> 8)
+	z.b[c+1] = byte(v)
 }
 
-func (o *simpleIoEncWriterWriter) WriteByte(c byte) (err error) {
-	if o.bw != nil {
-		return o.bw.WriteByte(c)
-	}
-	_, err = o.w.Write([]byte{c})
-	return
+func (z *bytesEncWriter) writeUint32(v uint32) {
+	c := z.grow(4)
+	z.b[c] = byte(v >> 24)
+	z.b[c+1] = byte(v >> 16)
+	z.b[c+2] = byte(v >> 8)
+	z.b[c+3] = byte(v)
 }
 
-func (o *simpleIoEncWriterWriter) WriteString(s string) (n int, err error) {
-	if o.sw != nil {
-		return o.sw.WriteString(s)
-	}
-	return o.w.Write([]byte(s))
+func (z *bytesEncWriter) writeUint64(v uint64) {
+	c := z.grow(8)
+	z.b[c] = byte(v >> 56)
+	z.b[c+1] = byte(v >> 48)
+	z.b[c+2] = byte(v >> 40)
+	z.b[c+3] = byte(v >> 32)
+	z.b[c+4] = byte(v >> 24)
+	z.b[c+5] = byte(v >> 16)
+	z.b[c+6] = byte(v >> 8)
+	z.b[c+7] = byte(v)
 }
 
-func (o *simpleIoEncWriterWriter) Write(p []byte) (n int, err error) {
-	return o.w.Write(p)
+func (z *bytesEncWriter) writeb(s []byte) {
+	c := z.grow(len(s))
+	copy(z.b[c:], s)
 }
 
+func (z *bytesEncWriter) writestr(s string) {
+	c := z.grow(len(s))
+	copy(z.b[c:], s)
+}
 
-func (o *EncodeOptions) structToArray() bool {
-	return o.StructToArray
+func (z *bytesEncWriter) writen1(b1 byte) {
+	c := z.grow(1)
+	z.b[c] = b1
+}
+
+func (z *bytesEncWriter) writen2(b1 byte, b2 byte) {
+	c := z.grow(2)
+	z.b[c] = b1
+	z.b[c+1] = b2
+}
+
+func (z *bytesEncWriter) atEndOfEncode() {
+	*(z.out) = z.b[:z.c]
+}
+
+func (z *bytesEncWriter) grow(n int) (oldcursor int) {
+	oldcursor = z.c
+	z.c = oldcursor + n
+	if z.c > cap(z.b) {
+		// Tried using appendslice logic: (if cap < 1024, *2, else *1.25).
+		// However, it was too expensive, causing too many iterations of copy.
+		// Using bytes.Buffer model was much better (2*cap + n)
+		bs := make([]byte, 2*cap(z.b)+n)
+		copy(bs, z.b[:oldcursor])
+		z.b = bs
+	} else if z.c > len(z.b) {
+		z.b = z.b[:cap(z.b)]
+	}
+	return
+}
+
+// ---------------------------------------------
+
+type encFnInfo struct {
+	ti    *typeInfo
+	e     *Encoder
+	ee    encDriver
+	xfFn  func(reflect.Value) ([]byte, error)
+	xfTag byte
 }
 
 func (f *encFnInfo) builtin(rv reflect.Value) {
-	f.ee.encodeBuiltinType(f.ti.rtid, rv)
+	f.ee.encodeBuiltin(f.ti.rtid, rv.Interface())
 }
 
 func (f *encFnInfo) rawExt(rv reflect.Value) {
-	re := rv.Interface().(RawExt)
-	if re.Data == nil {
-		f.ee.encodeNil()
-		return
-	}
-	if f.e.h.writeExt() {
-		f.ee.encodeExtPreamble(re.Tag, len(re.Data))
-		f.e.w.writeb(re.Data)
-	} else {
-		f.ee.encodeStringBytes(c_RAW, re.Data)
-	}
+	f.e.encRawExt(rv.Interface().(RawExt))
 }
 
 func (f *encFnInfo) ext(rv reflect.Value) {
@@ -171,13 +287,13 @@ func (f *encFnInfo) ext(rv reflect.Value) {
 		f.ee.encodeNil()
 		return
 	}
-	if f.e.h.writeExt() {
+	if f.e.hh.writeExt() {
 		f.ee.encodeExtPreamble(f.xfTag, len(bs))
 		f.e.w.writeb(bs)
 	} else {
 		f.ee.encodeStringBytes(c_RAW, bs)
 	}
-	
+
 }
 
 func (f *encFnInfo) binaryMarshal(rv reflect.Value) {
@@ -245,33 +361,56 @@ func (f *encFnInfo) kSlice(rv reflect.Value) {
 		f.ee.encodeNil()
 		return
 	}
+
+	if shortCircuitReflectToFastPath {
+		switch f.ti.rtid {
+		case intfSliceTypId:
+			f.e.encSliceIntf(rv.Interface().([]interface{}))
+			return
+		case intSliceTypId:
+			f.e.encSliceInt(rv.Interface().([]int))
+			return
+		case strSliceTypId:
+			f.e.encSliceStr(rv.Interface().([]string))
+			return
+		}
+	}
+
 	if f.ti.rtid == byteSliceTypId {
 		f.ee.encodeStringBytes(c_RAW, rv.Bytes())
 		return
 	}
 	l := rv.Len()
-	f.ee.encodeArrayPreamble(l)
+	if f.ti.mbs {
+		if l%2 == 1 {
+			encErr("mapBySlice: invalid length (must be divisible by 2): %v", l)
+		}
+		f.ee.encodeMapPreamble(l / 2)
+	} else {
+		f.ee.encodeArrayPreamble(l)
+	}
 	if l == 0 {
 		return
 	}
 	for j := 0; j < l; j++ {
+		// TODO: Consider perf implication of encoding odd index values as symbols if type is string
 		f.e.encodeValue(rv.Index(j))
 	}
 }
 
 func (f *encFnInfo) kArray(rv reflect.Value) {
-	f.e.encodeValue(rv.Slice(0, rv.Len()))
+	// f.e.encodeValue(rv.Slice(0, rv.Len()))
+	f.kSlice(rv.Slice(0, rv.Len()))
 }
 
 func (f *encFnInfo) kStruct(rv reflect.Value) {
-	// debugf(">>>> CALLING kStruct: %T, %v", rv.Interface(), rv.Interface())
 	fti := f.ti
 	newlen := len(fti.sfi)
 	rvals := make([]reflect.Value, newlen)
 	var encnames []string
 	e := f.e
 	tisfi := fti.sfip
-	toMap := !(fti.toArray || e.h.structToArray())
+	toMap := !(fti.toArray || e.h.StructToArray)
 	// if toMap, use the sorted array. If toArray, use unsorted array (to match sequence in struct)
 	if toMap {
 		tisfi = fti.sfi
@@ -301,8 +440,14 @@ func (f *encFnInfo) kStruct(rv reflect.Value) {
 	if toMap {
 		ee := f.ee //don't dereference everytime
 		ee.encodeMapPreamble(newlen)
+		// asSymbols := e.h.AsSymbols&AsSymbolStructFieldNameFlag != 0
+		asSymbols := e.h.AsSymbols == AsSymbolDefault || e.h.AsSymbols&AsSymbolStructFieldNameFlag != 0
 		for j := 0; j < newlen; j++ {
-			ee.encodeSymbol(encnames[j])
+			if asSymbols {
+				ee.encodeSymbol(encnames[j])
+			} else {
+				ee.encodeString(c_UTF8, encnames[j])
+			}
 			e.encodeValue(rvals[j])
 		}
 	} else {
@@ -335,17 +480,41 @@ func (f *encFnInfo) kMap(rv reflect.Value) {
 		f.ee.encodeNil()
 		return
 	}
+
+	if shortCircuitReflectToFastPath {
+		switch f.ti.rtid {
+		case mapStringIntfTypId:
+			f.e.encMapStrIntf(rv.Interface().(map[string]interface{}))
+			return
+		case mapIntfIntfTypId:
+			f.e.encMapIntfIntf(rv.Interface().(map[interface{}]interface{}))
+			return
+		case mapIntIntfTypId:
+			f.e.encMapIntIntf(rv.Interface().(map[int]interface{}))
+			return
+		}
+	}
+
 	l := rv.Len()
 	f.ee.encodeMapPreamble(l)
 	if l == 0 {
 		return
 	}
-	keyTypeIsString := f.ti.rt.Key().Kind() == reflect.String
+	// keyTypeIsString := f.ti.rt.Key().Kind() == reflect.String
+	keyTypeIsString := f.ti.rt.Key() == stringTyp
+	var asSymbols bool
+	if keyTypeIsString {
+		asSymbols = f.e.h.AsSymbols&AsSymbolMapStringKeysFlag != 0
+	}
 	mks := rv.MapKeys()
 	// for j, lmks := 0, len(mks); j < lmks; j++ {
 	for j := range mks {
 		if keyTypeIsString {
-			f.ee.encodeSymbol(mks[j].String())
+			if asSymbols {
+				f.ee.encodeSymbol(mks[j].String())
+			} else {
+				f.ee.encodeString(c_UTF8, mks[j].String())
+			}
 		} else {
 			f.e.encodeValue(mks[j])
 		}
@@ -354,12 +523,34 @@ func (f *encFnInfo) kMap(rv reflect.Value) {
 
 }
 
+// --------------------------------------------------
+
+// encFn encapsulates the captured variables and the encode function.
+// This way, we only do some calculations once, and pass control to the
+// code block that should be called (encapsulated in a function)
+// instead of executing the checks every time.
+type encFn struct {
+	i *encFnInfo
+	f func(*encFnInfo, reflect.Value)
+}
+
+// --------------------------------------------------
 
+// An Encoder writes an object to an output stream in the codec format.
+type Encoder struct {
+	w  encWriter
+	e  encDriver
+	h  *BasicHandle
+	hh Handle
+	f  map[uintptr]encFn
+	x  []uintptr
+	s  []encFn
+}
 
 // NewEncoder returns an Encoder for encoding into an io.Writer.
-// 
+//
 // For efficiency, users are encouraged to pass in a memory buffered writer
-// (eg bufio.Writer, bytes.Buffer). 
+// (eg bufio.Writer, bytes.Buffer).
 func NewEncoder(w io.Writer, h Handle) *Encoder {
 	ww, ok := w.(ioEncWriterWriter)
 	if !ok {
@@ -372,7 +563,7 @@ func NewEncoder(w io.Writer, h Handle) *Encoder {
 	z := ioEncWriter{
 		w: ww,
 	}
-	return &Encoder{w: &z, h: h, e: h.newEncDriver(&z) }
+	return &Encoder{w: &z, hh: h, h: h.getBasicHandle(), e: h.newEncDriver(&z)}
 }
 
 // NewEncoderBytes returns an encoder for encoding directly and efficiently
@@ -389,36 +580,38 @@ func NewEncoderBytes(out *[]byte, h Handle) *Encoder {
 		b:   in,
 		out: out,
 	}
-	return &Encoder{w: &z, h: h, e: h.newEncDriver(&z) }
+	return &Encoder{w: &z, hh: h, h: h.getBasicHandle(), e: h.newEncDriver(&z)}
 }
 
 // Encode writes an object into a stream in the codec format.
 //
 // Encoding can be configured via the "codec" struct tag for the fields.
-// 
+//
 // The "codec" key in struct field's tag value is the key name,
 // followed by an optional comma and options.
-// 
+//
 // To set an option on all fields (e.g. omitempty on all fields), you
-// can create a field called _struct, and set flags on it. 
-// 
+// can create a field called _struct, and set flags on it.
+//
 // Struct values "usually" encode as maps. Each exported struct field is encoded unless:
 //    - the field's codec tag is "-", OR
 //    - the field is empty and its codec tag specifies the "omitempty" option.
-// 
+//
 // When encoding as a map, the first string in the tag (before the comma)
 // is the map key string to use when encoding.
-// 
+//
 // However, struct values may encode as arrays. This happens when:
 //    - StructToArray Encode option is set, OR
 //    - the codec tag on the _struct field sets the "toarray" option
-// 
-// The empty values (for omitempty option) are false, 0, any nil pointer 
+//
+// Values with types that implement MapBySlice are encoded as stream maps.
+//
+// The empty values (for omitempty option) are false, 0, any nil pointer
 // or interface value, and any array, slice, map, or string of length zero.
 //
 // Anonymous fields are encoded inline if no struct tag is present.
 // Else they are encoded as regular fields.
-// 
+//
 // Examples:
 //
 //      type MyStruct struct {
@@ -429,20 +622,20 @@ func NewEncoderBytes(out *[]byte, h Handle) *Encoder {
 //          Field4 bool     `codec:"f4,omitempty"` //use key "f4". Omit if empty.
 //          ...
 //      }
-//      
+//
 //      type MyStruct struct {
 //          _struct bool    `codec:",omitempty,toarray"`   //set omitempty for every field
 //                                                         //and encode struct as an array
-//      }   
+//      }
 //
 // The mode of encoding is based on the type of the value. When a value is seen:
 //   - If an extension is registered for it, call that extension function
 //   - If it implements BinaryMarshaler, call its MarshalBinary() (data []byte, err error)
 //   - Else encode it based on its reflect.Kind
-// 
+//
 // Note that struct field names and keys in map[string]XXX may be treated as symbols (see the AsSymbols option).
 // Some formats support symbols (e.g. binc) and will properly encode the string
-// only once in the stream, and use a tag to refer to it thereafter. 
+// only once in the stream, and use a tag to refer to it thereafter.
 func (e *Encoder) Encode(v interface{}) (err error) {
 	defer panicToErr(&err)
 	e.encode(v)
@@ -486,6 +679,21 @@ func (e *Encoder) encode(iv interface{}) {
 		e.e.encodeFloat32(v)
 	case float64:
 		e.e.encodeFloat64(v)
+	case []byte:
+		e.e.encodeStringBytes(c_RAW, v)
+
+	case []interface{}:
+		e.encSliceIntf(v)
+	case []string:
+		e.encSliceStr(v)
+	case []int:
+		e.encSliceInt(v)
+	case map[int]interface{}:
+		e.encMapIntIntf(v)
+	case map[string]interface{}:
+		e.encMapStrIntf(v)
+	case map[interface{}]interface{}:
+		e.encMapIntfIntf(v)
 
 	case *string:
 		e.e.encodeString(c_UTF8, *v)
@@ -515,11 +723,25 @@ func (e *Encoder) encode(iv interface{}) {
 		e.e.encodeFloat32(*v)
 	case *float64:
 		e.e.encodeFloat64(*v)
+	case *[]byte:
+		e.e.encodeStringBytes(c_RAW, *v)
+
+	case *[]interface{}:
+		e.encSliceIntf(*v)
+	case *[]string:
+		e.encSliceStr(*v)
+	case *[]int:
+		e.encSliceInt(*v)
+	case *map[int]interface{}:
+		e.encMapIntIntf(*v)
+	case *map[string]interface{}:
+		e.encMapStrIntf(*v)
+	case *map[interface{}]interface{}:
+		e.encMapIntfIntf(*v)
 
 	default:
 		e.encodeValue(reflect.ValueOf(iv))
 	}
-
 }
 
 func (e *Encoder) encodeValue(rv reflect.Value) {
@@ -530,12 +752,12 @@ func (e *Encoder) encodeValue(rv reflect.Value) {
 		}
 		rv = rv.Elem()
 	}
-	
+
 	rt := rv.Type()
 	rtid := reflect.ValueOf(rt).Pointer()
-		
+
 	// if e.f == nil && e.s == nil { debugf("---->Creating new enc f map for type: %v\n", rt) }
-	var fn encFn 
+	var fn encFn
 	var ok bool
 	if useMapForCodecCache {
 		fn, ok = e.f[rtid]
@@ -549,47 +771,47 @@ func (e *Encoder) encodeValue(rv reflect.Value) {
 	}
 	if !ok {
 		// debugf("\tCreating new enc fn for type: %v\n", rt)
-		fi := encFnInfo { ti:getTypeInfo(rtid, rt), e:e, ee:e.e }
-		fn.i = &fi 
+		fi := encFnInfo{ti: getTypeInfo(rtid, rt), e: e, ee: e.e}
+		fn.i = &fi
 		if rtid == rawExtTypId {
-			fn.f = (*encFnInfo).rawExt 
+			fn.f = (*encFnInfo).rawExt
 		} else if e.e.isBuiltinType(rtid) {
-			fn.f = (*encFnInfo).builtin 
+			fn.f = (*encFnInfo).builtin
 		} else if xfTag, xfFn := e.h.getEncodeExt(rtid); xfFn != nil {
 			fi.xfTag, fi.xfFn = xfTag, xfFn
-			fn.f = (*encFnInfo).ext 
+			fn.f = (*encFnInfo).ext
 		} else if supportBinaryMarshal && fi.ti.m {
-			fn.f = (*encFnInfo).binaryMarshal 
+			fn.f = (*encFnInfo).binaryMarshal
 		} else {
 			switch rk := rt.Kind(); rk {
 			case reflect.Bool:
-				fn.f = (*encFnInfo).kBool 
+				fn.f = (*encFnInfo).kBool
 			case reflect.String:
-				fn.f = (*encFnInfo).kString 
+				fn.f = (*encFnInfo).kString
 			case reflect.Float64:
-				fn.f = (*encFnInfo).kFloat64 
+				fn.f = (*encFnInfo).kFloat64
 			case reflect.Float32:
-				fn.f = (*encFnInfo).kFloat32 
+				fn.f = (*encFnInfo).kFloat32
 			case reflect.Int, reflect.Int8, reflect.Int64, reflect.Int32, reflect.Int16:
-				fn.f = (*encFnInfo).kInt 
+				fn.f = (*encFnInfo).kInt
 			case reflect.Uint8, reflect.Uint64, reflect.Uint, reflect.Uint32, reflect.Uint16:
-				fn.f = (*encFnInfo).kUint 
+				fn.f = (*encFnInfo).kUint
 			case reflect.Invalid:
-				fn.f = (*encFnInfo).kInvalid 
+				fn.f = (*encFnInfo).kInvalid
 			case reflect.Slice:
-				fn.f = (*encFnInfo).kSlice 
+				fn.f = (*encFnInfo).kSlice
 			case reflect.Array:
-				fn.f = (*encFnInfo).kArray 
+				fn.f = (*encFnInfo).kArray
 			case reflect.Struct:
-				fn.f = (*encFnInfo).kStruct 
+				fn.f = (*encFnInfo).kStruct
 			// case reflect.Ptr:
-			// 	fn.f = (*encFnInfo).kPtr 
+			// 	fn.f = (*encFnInfo).kPtr
 			case reflect.Interface:
-				fn.f = (*encFnInfo).kInterface 
+				fn.f = (*encFnInfo).kInterface
 			case reflect.Map:
-				fn.f = (*encFnInfo).kMap 
+				fn.f = (*encFnInfo).kMap
 			default:
-				fn.f = (*encFnInfo).kErr 
+				fn.f = (*encFnInfo).kErr
 			}
 		}
 		if useMapForCodecCache {
@@ -598,132 +820,79 @@ func (e *Encoder) encodeValue(rv reflect.Value) {
 			}
 			e.f[rtid] = fn
 		} else {
-			e.s = append(e.s, fn )
+			e.s = append(e.s, fn)
 			e.x = append(e.x, rtid)
 		}
 	}
-	
-	fn.f(fn.i, rv)
-
-}
 
-// ----------------------------------------
-
-func (z *ioEncWriter) writeUint16(v uint16) {
-	bigen.PutUint16(z.x[:2], v)
-	z.writeb(z.x[:2])
-}
-
-func (z *ioEncWriter) writeUint32(v uint32) {
-	bigen.PutUint32(z.x[:4], v)
-	z.writeb(z.x[:4])
-}
+	fn.f(fn.i, rv)
 
-func (z *ioEncWriter) writeUint64(v uint64) {
-	bigen.PutUint64(z.x[:8], v)
-	z.writeb(z.x[:8])
 }
 
-func (z *ioEncWriter) writeb(bs []byte) {
-	n, err := z.w.Write(bs)
-	if err != nil {
-		panic(err)
+func (e *Encoder) encRawExt(re RawExt) {
+	if re.Data == nil {
+		e.e.encodeNil()
+		return
 	}
-	if n != len(bs) {
-		encErr("write: Incorrect num bytes written. Expecting: %v, Wrote: %v", len(bs), n)
+	if e.hh.writeExt() {
+		e.e.encodeExtPreamble(re.Tag, len(re.Data))
+		e.w.writeb(re.Data)
+	} else {
+		e.e.encodeStringBytes(c_RAW, re.Data)
 	}
 }
 
-func (z *ioEncWriter) writestr(s string) {
-	n, err := z.w.WriteString(s)
-	if err != nil {
-		panic(err)
-	}
-	if n != len(s) {
-		encErr("write: Incorrect num bytes written. Expecting: %v, Wrote: %v", len(s), n)
-	}
-}
+// ---------------------------------------------
+// short circuit functions for common maps and slices
 
-func (z *ioEncWriter) writen1(b byte) {
-	if err := z.w.WriteByte(b); err != nil {
-		panic(err)
+func (e *Encoder) encSliceIntf(v []interface{}) {
+	e.e.encodeArrayPreamble(len(v))
+	for _, v2 := range v {
+		e.encode(v2)
 	}
 }
 
-func (z *ioEncWriter) writen2(b1 byte, b2 byte) {
-	z.writen1(b1)
-	z.writen1(b2)
-}
-
-func (z *ioEncWriter) atEndOfEncode() { }
-
-// ----------------------------------------
-
-func (z *bytesEncWriter) writeUint16(v uint16) {
-	c := z.grow(2)
-	z.b[c] = byte(v >> 8)
-	z.b[c+1] = byte(v)
-}
-
-func (z *bytesEncWriter) writeUint32(v uint32) {
-	c := z.grow(4)
-	z.b[c] = byte(v >> 24)
-	z.b[c+1] = byte(v >> 16)
-	z.b[c+2] = byte(v >> 8)
-	z.b[c+3] = byte(v)
-}
-
-func (z *bytesEncWriter) writeUint64(v uint64) {
-	c := z.grow(8)
-	z.b[c] = byte(v >> 56)
-	z.b[c+1] = byte(v >> 48)
-	z.b[c+2] = byte(v >> 40)
-	z.b[c+3] = byte(v >> 32)
-	z.b[c+4] = byte(v >> 24)
-	z.b[c+5] = byte(v >> 16)
-	z.b[c+6] = byte(v >> 8)
-	z.b[c+7] = byte(v)
-}
-
-func (z *bytesEncWriter) writeb(s []byte) {
-	c := z.grow(len(s))
-	copy(z.b[c:], s)
-}
-
-func (z *bytesEncWriter) writestr(s string) {
-	c := z.grow(len(s))
-	copy(z.b[c:], s)
+func (e *Encoder) encSliceStr(v []string) {
+	e.e.encodeArrayPreamble(len(v))
+	for _, v2 := range v {
+		e.e.encodeString(c_UTF8, v2)
+	}
 }
 
-func (z *bytesEncWriter) writen1(b1 byte) {
-	c := z.grow(1)
-	z.b[c] = b1
+func (e *Encoder) encSliceInt(v []int) {
+	e.e.encodeArrayPreamble(len(v))
+	for _, v2 := range v {
+		e.e.encodeInt(int64(v2))
+	}
 }
 
-func (z *bytesEncWriter) writen2(b1 byte, b2 byte) {
-	c := z.grow(2)
-	z.b[c] = b1
-	z.b[c+1] = b2
+func (e *Encoder) encMapStrIntf(v map[string]interface{}) {
+	e.e.encodeMapPreamble(len(v))
+	asSymbols := e.h.AsSymbols&AsSymbolMapStringKeysFlag != 0
+	for k2, v2 := range v {
+		if asSymbols {
+			e.e.encodeSymbol(k2)
+		} else {
+			e.e.encodeString(c_UTF8, k2)
+		}
+		e.encode(v2)
+	}
 }
 
-func (z *bytesEncWriter) atEndOfEncode() {
-	*(z.out) = z.b[:z.c]
+func (e *Encoder) encMapIntIntf(v map[int]interface{}) {
+	e.e.encodeMapPreamble(len(v))
+	for k2, v2 := range v {
+		e.e.encodeInt(int64(k2))
+		e.encode(v2)
+	}
 }
 
-func (z *bytesEncWriter) grow(n int) (oldcursor int) {
-	oldcursor = z.c
-	z.c = oldcursor + n
-	if z.c > cap(z.b) {
-		// It tried using appendslice logic: (if cap < 1024, *2, else *1.25).
-		// However, it was too expensive, causing too many iterations of copy.
-		// Using bytes.Buffer model was much better (2*cap + n)
-		bs := make([]byte, 2*cap(z.b)+n)
-		copy(bs, z.b[:oldcursor])
-		z.b = bs
-	} else if z.c > len(z.b) {
-		z.b = z.b[:cap(z.b)]
+func (e *Encoder) encMapIntfIntf(v map[interface{}]interface{}) {
+	e.e.encodeMapPreamble(len(v))
+	for k2, v2 := range v {
+		e.encode(k2)
+		e.encode(v2)
 	}
-	return
 }
 
 // ----------------------------------------
@@ -731,4 +900,3 @@ func (z *bytesEncWriter) grow(n int) (oldcursor int) {
 func encErr(format string, params ...interface{}) {
 	doPanic(msgTagEnc, format, params...)
 }
-
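A hedged sketch of the new AsSymbols option (not part of the diff; it assumes the concrete handles embed BasicHandle so the EncodeOptions fields are promoted, and the function name is illustrative):

	func encodeWithoutSymbols(v interface{}) ([]byte, error) {
		var bh codec.BincHandle           // binc is a format that supports symbols
		bh.AsSymbols = codec.AsSymbolNone // trade encoded size for encoding speed
		var out []byte
		err := codec.NewEncoderBytes(&out, &bh).Encode(v)
		return out, err
	}

AsSymbolMapStringKeysFlag | AsSymbolStructFieldNameFlag can be combined the same way when only some classes of strings should be encoded as symbols.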

+ 12 - 12
codec/ext_dep_test.go

@@ -29,36 +29,36 @@ func init() {
 	)
 }
 
-func fnVMsgpackEncodeFn(ts *TestStruc) ([]byte, error) {
+func fnVMsgpackEncodeFn(ts interface{}) ([]byte, error) {
 	return vmsgpack.Marshal(ts)
 }
 
-func fnVMsgpackDecodeFn(buf []byte, ts *TestStruc) error {
+func fnVMsgpackDecodeFn(buf []byte, ts interface{}) error {
 	return vmsgpack.Unmarshal(buf, ts)
 }
 
-func fnBsonEncodeFn(ts *TestStruc) ([]byte, error) {
+func fnBsonEncodeFn(ts interface{}) ([]byte, error) {
 	return bson.Marshal(ts)
 }
 
-func fnBsonDecodeFn(buf []byte, ts *TestStruc) error {
+func fnBsonDecodeFn(buf []byte, ts interface{}) error {
 	return bson.Unmarshal(buf, ts)
 }
 
-func Benchmark__Bson_____Encode(b *testing.B) {
-	fnBenchmarkEncode(b, "bson", fnBsonEncodeFn)
+func Benchmark__Bson_______Encode(b *testing.B) {
+	fnBenchmarkEncode(b, "bson", benchTs, fnBsonEncodeFn)
 }
 
-func Benchmark__Bson_____Decode(b *testing.B) {
-	fnBenchmarkDecode(b, "bson", fnBsonEncodeFn, fnBsonDecodeFn)
+func Benchmark__Bson_______Decode(b *testing.B) {
+	fnBenchmarkDecode(b, "bson", benchTs, fnBsonEncodeFn, fnBsonDecodeFn, fnBenchNewTs)
 }
 
-func Benchmark__VMsgpack_Encode(b *testing.B) {
-	fnBenchmarkEncode(b, "v-msgpack", fnVMsgpackEncodeFn)
+func Benchmark__VMsgpack___Encode(b *testing.B) {
+	fnBenchmarkEncode(b, "v-msgpack", benchTs, fnVMsgpackEncodeFn)
 }
 
-func Benchmark__VMsgpack_Decode(b *testing.B) {
-	fnBenchmarkDecode(b, "v-msgpack", fnVMsgpackEncodeFn, fnVMsgpackDecodeFn)
+func Benchmark__VMsgpack___Decode(b *testing.B) {
+	fnBenchmarkDecode(b, "v-msgpack", benchTs, fnVMsgpackEncodeFn, fnVMsgpackDecodeFn, fnBenchNewTs)
 }
 
 func TestMsgpackPythonGenStreams(t *testing.T) {

+ 193 - 140
codec/helper.go

@@ -18,36 +18,33 @@ import (
 )
 
 const (
-	// For >= mapAccessThreshold elements, map outways cost of linear search 
-	//   - this was critical for reflect.Type, whose equality cost is pretty high (set to 4)
-	//   - for integers, equality cost is cheap (set to 16, 32 of 64)
-	// mapAccessThreshold    = 16 // 4
-	
-	binarySearchThreshold = 16
-	structTagName         = "codec"
-	
-	// Support 
+	structTagName = "codec"
+
+	// Support
 	//    encoding.BinaryMarshaler: MarshalBinary() (data []byte, err error)
 	//    encoding.BinaryUnmarshaler: UnmarshalBinary(data []byte) error
 	// This constant flag will enable or disable it.
-	// 
-	// Supporting this feature required a map access each time the en/decodeValue 
-	// method is called to get the typeInfo and look at baseId. This caused a 
-	// clear performance degradation. Some refactoring helps a portion of the loss.
-	// 
-	// All the band-aids we can put try to mitigate performance loss due to stack splitting:
-	//    - using smaller functions to reduce func framesize
-	// 
-	// TODO: Look into this again later.
-	supportBinaryMarshal  = true
+	supportBinaryMarshal = true
 
 	// Each Encoder or Decoder uses a cache of functions based on conditionals,
 	// so that the conditionals are not run every time.
-	// 
+	//
 	// Either a map or a slice is used to keep track of the functions.
 	// The map is more natural, but has a higher cost than a slice/array.
 	// This flag (useMapForCodecCache) controls which is used.
 	useMapForCodecCache = false
+
+	// For some common container types, we can short-circuit an elaborate
+	// reflection dance and call encode/decode directly.
+	// Currently supported for:
+	//    []interface{}
+	//    []int
+	//    []string
+	//
+	//    map[interface{}]interface{}
+	//    map[int]interface{}
+	//    map[string]interface{}
+	shortCircuitReflectToFastPath = true
 )
 
 type charEncoding uint8
@@ -61,13 +58,24 @@ const (
 	c_UTF32BE
 )
 
-type binaryUnmarshaler interface {
-	UnmarshalBinary(data []byte) error
-}
+// valueType is the stream type
+type valueType uint8
 
-type binaryMarshaler interface {
-	MarshalBinary() (data []byte, err error)
-}
+const (
+	valueTypeUnset valueType = iota
+	valueTypeNil
+	valueTypeInt
+	valueTypeUint
+	valueTypeFloat
+	valueTypeBool
+	valueTypeString
+	valueTypeSymbol
+	valueTypeBytes
+	valueTypeMap
+	valueTypeArray
+	valueTypeTimestamp
+	valueTypeExt
+)
 
 var (
 	bigen               = binary.BigEndian
@@ -79,24 +87,39 @@ var (
 	nilIntfSlice     = []interface{}(nil)
 	intfSliceTyp     = reflect.TypeOf(nilIntfSlice)
 	intfTyp          = intfSliceTyp.Elem()
+	intSliceTyp      = reflect.TypeOf([]int(nil))
+	strSliceTyp      = reflect.TypeOf([]string(nil))
 	byteSliceTyp     = reflect.TypeOf([]byte(nil))
 	mapStringIntfTyp = reflect.TypeOf(map[string]interface{}(nil))
 	mapIntfIntfTyp   = reflect.TypeOf(map[interface{}]interface{}(nil))
-	
-	timeTyp          = reflect.TypeOf(time.Time{})
-	int64SliceTyp    = reflect.TypeOf([]int64(nil))
-	rawExtTyp        = reflect.TypeOf(RawExt{})
-	
-	timeTypId        = reflect.ValueOf(timeTyp).Pointer()
-	byteSliceTypId   = reflect.ValueOf(byteSliceTyp).Pointer()
-	rawExtTypId      = reflect.ValueOf(rawExtTyp).Pointer()
-	
-	binaryMarshalerTyp = reflect.TypeOf((*binaryMarshaler)(nil)).Elem()
+	mapIntIntfTyp    = reflect.TypeOf(map[int]interface{}(nil))
+
+	stringTyp     = reflect.TypeOf("")
+	timeTyp       = reflect.TypeOf(time.Time{})
+	int64SliceTyp = reflect.TypeOf([]int64(nil))
+	rawExtTyp     = reflect.TypeOf(RawExt{})
+
+	mapBySliceTyp        = reflect.TypeOf((*MapBySlice)(nil)).Elem()
+	binaryMarshalerTyp   = reflect.TypeOf((*binaryMarshaler)(nil)).Elem()
 	binaryUnmarshalerTyp = reflect.TypeOf((*binaryUnmarshaler)(nil)).Elem()
-	
-	binaryMarshalerTypId = reflect.ValueOf(binaryMarshalerTyp).Pointer()
+
+	intfTypId      = reflect.ValueOf(intfTyp).Pointer()
+	timeTypId      = reflect.ValueOf(timeTyp).Pointer()
+	intfSliceTypId = reflect.ValueOf(intfSliceTyp).Pointer()
+	intSliceTypId  = reflect.ValueOf(intSliceTyp).Pointer()
+	strSliceTypId  = reflect.ValueOf(strSliceTyp).Pointer()
+	byteSliceTypId = reflect.ValueOf(byteSliceTyp).Pointer()
+	rawExtTypId    = reflect.ValueOf(rawExtTyp).Pointer()
+
+	mapStringIntfTypId = reflect.ValueOf(mapStringIntfTyp).Pointer()
+	mapIntfIntfTypId   = reflect.ValueOf(mapIntfIntfTyp).Pointer()
+	mapIntIntfTypId    = reflect.ValueOf(mapIntIntfTyp).Pointer()
+
+	// mapBySliceTypId  = reflect.ValueOf(mapBySliceTyp).Pointer()
+
+	binaryMarshalerTypId   = reflect.ValueOf(binaryMarshalerTyp).Pointer()
 	binaryUnmarshalerTypId = reflect.ValueOf(binaryUnmarshalerTyp).Pointer()
-	
+
 	intBitsize  uint8 = uint8(reflect.TypeOf(int(0)).Bits())
 	uintBitsize uint8 = uint8(reflect.TypeOf(uint(0)).Bits())
 
@@ -104,28 +127,51 @@ var (
 	bsAll0xff = []byte{0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff}
 )
 
-// The RawExt type represents raw unprocessed extension data. 
-type RawExt struct {
-	Tag byte
-	Data []byte
+type binaryUnmarshaler interface {
+	UnmarshalBinary(data []byte) error
+}
+
+type binaryMarshaler interface {
+	MarshalBinary() (data []byte, err error)
+}
+
+// MapBySlice represents a slice which should be encoded as a map in the stream.
+// The slice contains a sequence of key-value pairs.
+type MapBySlice interface {
+	MapBySlice()
+}
+
+// WARNING: DO NOT USE DIRECTLY. EXPORTED FOR GODOC BENEFIT. WILL BE REMOVED.
+//
+// BasicHandle encapsulates the common options and extension functions.
+type BasicHandle struct {
+	extHandle
+	EncodeOptions
+	DecodeOptions
 }
 
 // Handle is the interface for a specific encoding format.
-// 
+//
 // Typically, a Handle is pre-configured before first time use,
 // and not modified while in use. Such a pre-configured Handle
 // is safe for concurrent access.
 type Handle interface {
-	encodeHandleI
-	decodeHandleI
+	writeExt() bool
+	getBasicHandle() *BasicHandle
 	newEncDriver(w encWriter) encDriver
 	newDecDriver(r decReader) decDriver
 }
 
+// RawExt represents raw unprocessed extension data.
+type RawExt struct {
+	Tag  byte
+	Data []byte
+}
+
 type extTypeTagFn struct {
-	rtid uintptr
-	rt reflect.Type
-	tag byte
+	rtid  uintptr
+	rt    reflect.Type
+	tag   byte
 	encFn func(reflect.Value) ([]byte, error)
 	decFn func(reflect.Value, []byte) error
 }
@@ -133,9 +179,9 @@ type extTypeTagFn struct {
 type extHandle []*extTypeTagFn
 
 // AddExt registers an encode and decode function for a reflect.Type.
-// Note that the type must be a named type, and specifically not 
+// Note that the type must be a named type, and specifically not
 // a pointer or Interface. An error is returned if that is not honored.
-// 
+//
 // To Deregister an ext, call AddExt with 0 tag, nil encfn and nil decfn.
 func (o *extHandle) AddExt(
 	rt reflect.Type,
@@ -145,18 +191,18 @@ func (o *extHandle) AddExt(
 ) (err error) {
 	// o is a pointer, because we may need to initialize it
 	if rt.PkgPath() == "" || rt.Kind() == reflect.Interface {
-		err = fmt.Errorf("codec.Handle.AddExt: Takes named type, especially not a pointer or interface: %T", 
+		err = fmt.Errorf("codec.Handle.AddExt: Takes named type, specifically not a pointer or interface: %T",
 			reflect.Zero(rt).Interface())
 		return
 	}
-	
-	// o cannot be nil, since it is always embedded in a Handle. 
+
+	// o cannot be nil, since it is always embedded in a Handle.
 	// if nil, let it panic.
 	// if o == nil {
 	// 	err = errors.New("codec.Handle.AddExt: extHandle cannot be a nil pointer.")
 	// 	return
 	// }
-	
+
 	rtid := reflect.ValueOf(rt).Pointer()
 	for _, v := range *o {
 		if v.rtid == rtid {
@@ -164,8 +210,8 @@ func (o *extHandle) AddExt(
 			return
 		}
 	}
-	
-	*o = append(*o, &extTypeTagFn { rtid, rt, tag, encfn, decfn })
+
+	*o = append(*o, &extTypeTagFn{rtid, rt, tag, encfn, decfn})
 	return
 }
 
@@ -213,64 +259,100 @@ func (o extHandle) getEncodeExt(rtid uintptr) (tag byte, fn func(reflect.Value)
 	return
 }
 
+type structFieldInfo struct {
+	encName string // encode name
+
+	// only one of 'i' or 'is' can be set. If 'i' is -1, then 'is' has been set.
+
+	is        []int // (recursive/embedded) field index in struct
+	i         int16 // field index in struct
+	omitEmpty bool
+	toArray   bool // if field is _struct, is the toArray set?
+
+	// tag       string   // tag
+	// name      string   // field name
+	// encNameBs []byte   // encoded name as byte stream
+	// ikind     int      // kind of the field as an int i.e. int(reflect.Kind)
+}
+
+func parseStructFieldInfo(fname string, stag string) *structFieldInfo {
+	if fname == "" {
+		panic("parseStructFieldInfo: No Field Name")
+	}
+	si := structFieldInfo{
+		// name: fname,
+		encName: fname,
+		// tag: stag,
+	}
+
+	if stag != "" {
+		for i, s := range strings.Split(stag, ",") {
+			if i == 0 {
+				if s != "" {
+					si.encName = s
+				}
+			} else {
+				switch s {
+				case "omitempty":
+					si.omitEmpty = true
+				case "toarray":
+					si.toArray = true
+				}
+			}
+		}
+	}
+	// si.encNameBs = []byte(si.encName)
+	return &si
+}
+
+type sfiSortedByEncName []*structFieldInfo
+
+func (p sfiSortedByEncName) Len() int {
+	return len(p)
+}
+
+func (p sfiSortedByEncName) Less(i, j int) bool {
+	return p[i].encName < p[j].encName
+}
+
+func (p sfiSortedByEncName) Swap(i, j int) {
+	p[i], p[j] = p[j], p[i]
+}
+
 // typeInfo keeps information about each type referenced in the encode/decode sequence.
-// 
+//
 // During an encode/decode sequence, we work as below:
 //   - If base is a built in type, en/decode base value
 //   - If base is registered as an extension, en/decode base value
 //   - If type is binary(M/Unm)arshaler, call Binary(M/Unm)arshal method
 //   - Else decode appropriately based on the reflect.Kind
 type typeInfo struct {
-	sfi       []*structFieldInfo // sorted. Used when enc/dec struct to map.
-	sfip      []*structFieldInfo // unsorted. Used when enc/dec struct to array.
-	
-	rt        reflect.Type
-	rtid      uintptr
-	
+	sfi  []*structFieldInfo // sorted. Used when enc/dec struct to map.
+	sfip []*structFieldInfo // unsorted. Used when enc/dec struct to array.
+
+	rt   reflect.Type
+	rtid uintptr
+
 	// baseId gives a pointer to the base reflect.Type, after dereferencing
 	// the pointers. E.g. base type of ***time.Time is time.Time.
 	base      reflect.Type
 	baseId    uintptr
 	baseIndir int8 // number of indirections to get to base
-	
-	m         bool // base type (T or *T) is a binaryMarshaler
-	unm       bool // base type (T or *T) is a binaryUnmarshaler
-	mIndir    int8 // number of indirections to get to binaryMarshaler type
-	unmIndir  int8 // number of indirections to get to binaryUnmarshaler type
-	toArray   bool // whether this (struct) type should be encoded as an array
-}
-
-type structFieldInfo struct {
-	encName   string // encode name
-	
-	// only one of 'i' or 'is' can be set. If 'i' is -1, then 'is' has been set.
-	
-	is        []int // (recursive/embedded) field index in struct
-	i         int16 // field index in struct
-	omitEmpty bool  
-	toArray   bool  // if field is _struct, is the toArray set?
-	
-	// tag       string   // tag
-	// name      string   // field name
-	// encNameBs []byte   // encoded name as byte stream
-	// ikind     int      // kind of the field as an int i.e. int(reflect.Kind)
-}
 
-type sfiSortedByEncName []*structFieldInfo
+	mbs bool // base type (T or *T) is a MapBySlice
 
-func (p sfiSortedByEncName) Len() int           { return len(p) }
-func (p sfiSortedByEncName) Less(i, j int) bool { return p[i].encName < p[j].encName }
-func (p sfiSortedByEncName) Swap(i, j int)      { p[i], p[j] = p[j], p[i] }
+	m        bool // base type (T or *T) is a binaryMarshaler
+	unm      bool // base type (T or *T) is a binaryUnmarshaler
+	mIndir   int8 // number of indirections to get to binaryMarshaler type
+	unmIndir int8 // number of indirections to get to binaryUnmarshaler type
+	toArray  bool // whether this (struct) type should be encoded as an array
+}
 
 func (ti *typeInfo) indexForEncName(name string) int {
-	//tisfi := ti.sfi 
+	//tisfi := ti.sfi
+	const binarySearchThreshold = 16
 	if sfilen := len(ti.sfi); sfilen < binarySearchThreshold {
 		// linear search. faster than binary search in my testing up to 16-field structs.
-		// for i := 0; i < sfilen; i++ {
-		// 	if ti.sfi[i].encName == name {
-		// 		return i
-		// 	}
-		// }
 		for i, si := range ti.sfi {
 			if si.encName == name {
 				return i
@@ -281,11 +363,10 @@ func (ti *typeInfo) indexForEncName(name string) int {
 		h, i, j := 0, 0, sfilen
 		for i < j {
 			h = i + (j-i)/2
-			// i ≤ h < j
 			if ti.sfi[h].encName < name {
-				i = h + 1 // preserves f(i-1) == false
+				i = h + 1
 			} else {
-				j = h // preserves f(j) == true
+				j = h
 			}
 		}
 		if i < sfilen && ti.sfi[i].encName == name {
@@ -310,9 +391,9 @@ func getTypeInfo(rtid uintptr, rt reflect.Type) (pti *typeInfo) {
 		return
 	}
 
-	ti := typeInfo { rt: rt, rtid: rtid }
+	ti := typeInfo{rt: rt, rtid: rtid}
 	pti = &ti
-	
+
 	var indir int8
 	if ok, indir = implementsIntf(rt, binaryMarshalerTyp); ok {
 		ti.m, ti.mIndir = true, indir
@@ -320,23 +401,26 @@ func getTypeInfo(rtid uintptr, rt reflect.Type) (pti *typeInfo) {
 	if ok, indir = implementsIntf(rt, binaryUnmarshalerTyp); ok {
 		ti.unm, ti.unmIndir = true, indir
 	}
-	
+	if ok, _ = implementsIntf(rt, mapBySliceTyp); ok {
+		ti.mbs = true
+	}
+
 	pt := rt
-	var ptIndir int8 
+	var ptIndir int8
 	// for ; pt.Kind() == reflect.Ptr; pt, ptIndir = pt.Elem(), ptIndir+1 { }
 	for pt.Kind() == reflect.Ptr {
 		pt = pt.Elem()
 		ptIndir++
 	}
 	if ptIndir == 0 {
-		ti.base = rt 
+		ti.base = rt
 		ti.baseId = rtid
 	} else {
-		ti.base = pt 
+		ti.base = pt
 		ti.baseId = reflect.ValueOf(pt).Pointer()
 		ti.baseIndir = ptIndir
 	}
-	
+
 	if rt.Kind() == reflect.Struct {
 		var siInfo *structFieldInfo
 		if f, ok := rt.FieldByName(structInfoFieldName); ok {
@@ -357,7 +441,7 @@ func getTypeInfo(rtid uintptr, rt reflect.Type) (pti *typeInfo) {
 		// 		sfip[i] = &sfip2[i]
 		// 	}
 		// }
-		
+
 		ti.sfip = make([]*structFieldInfo, len(sfip))
 		ti.sfi = make([]*structFieldInfo, len(sfip))
 		copy(ti.sfip, sfip)
@@ -390,7 +474,7 @@ func rgetTypeInfo(rt reflect.Type, indexstack []int, fnameToHastag map[string]bo
 			}
 		}
 		// do not let fields with same name in embedded structs override field at higher level.
-		// this must be done after anonymous check, to allow anonymous field 
+		// this must be done after anonymous check, to allow anonymous field
 		// still include their child fields
 		if _, ok := fnameToHastag[f.Name]; ok {
 			continue
@@ -414,36 +498,6 @@ func rgetTypeInfo(rt reflect.Type, indexstack []int, fnameToHastag map[string]bo
 	}
 }
 
-func parseStructFieldInfo(fname string, stag string) *structFieldInfo {
-	if fname == "" {
-		panic("parseStructFieldInfo: No Field Name")
-	}
-	si := structFieldInfo{
-		// name: fname,
-		encName: fname,
-		// tag: stag,
-	}
-
-	if stag != "" {
-		for i, s := range strings.Split(stag, ",") {
-			if i == 0 {
-				if s != "" {
-					si.encName = s
-				}
-			} else {
-				switch s {
-				case "omitempty":
-					si.omitEmpty = true
-				case "toarray":
-					si.toArray = true
-				}
-			}
-		}
-	}
-	// si.encNameBs = []byte(si.encName)
-	return &si
-}
-
 func panicToErr(err *error) {
 	if x := recover(); x != nil {
 		//debug.PrintStack()
@@ -457,4 +511,3 @@ func doPanic(tag string, format string, params ...interface{}) {
 	copy(params2[1:], params)
 	panic(fmt.Errorf("%s: "+format, params2...))
 }
-
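
For readers skimming the helper.go changes: indexForEncName now switches from a linear scan to a binary search once a struct has 16 or more encoded fields. The standalone sketch below (not part of the package; names stands in for the sorted encName list) reproduces that strategy:

    // indexSorted mirrors indexForEncName above: a linear scan for small
    // sorted sets, and a lower-bound binary search beyond a threshold of 16.
    func indexSorted(names []string, name string) int {
    	const binarySearchThreshold = 16
    	if len(names) < binarySearchThreshold {
    		for i, s := range names {
    			if s == name {
    				return i
    			}
    		}
    		return -1
    	}
    	i, j := 0, len(names)
    	for i < j {
    		h := i + (j-i)/2
    		if names[h] < name {
    			i = h + 1
    		} else {
    			j = h
    		}
    	}
    	if i < len(names) && names[i] == name {
    		return i
    	}
    	return -1
    }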

+ 166 - 196
codec/msgpack.go

@@ -8,8 +8,6 @@ import (
 	"io"
 	"math"
 	"net/rpc"
-	"reflect"
-	"time"
 )
 
 const (
@@ -35,53 +33,47 @@ const (
 	mpInt16             = 0xd1
 	mpInt32             = 0xd2
 	mpInt64             = 0xd3
-	
+
 	// extensions below
-	mpBin8 = 0xc4
-	mpBin16 = 0xc5
-	mpBin32 = 0xc6
-	mpExt8 = 0xc7
-	mpExt16 = 0xc8
-	mpExt32 = 0xc9
-	mpFixExt1 = 0xd4
-	mpFixExt2 = 0xd5
-	mpFixExt4 = 0xd6
-	mpFixExt8 = 0xd7
+	mpBin8     = 0xc4
+	mpBin16    = 0xc5
+	mpBin32    = 0xc6
+	mpExt8     = 0xc7
+	mpExt16    = 0xc8
+	mpExt32    = 0xc9
+	mpFixExt1  = 0xd4
+	mpFixExt2  = 0xd5
+	mpFixExt4  = 0xd6
+	mpFixExt8  = 0xd7
 	mpFixExt16 = 0xd8
-	
-	mpStr8              = 0xd9 // new
-	mpStr16             = 0xda
-	mpStr32             = 0xdb
-	
-	mpArray16           = 0xdc
-	mpArray32           = 0xdd
-	
-	mpMap16             = 0xde
-	mpMap32             = 0xdf
-	
-	mpNegFixNumMin      = 0xe0
-	mpNegFixNumMax      = 0xff
 
-)
+	mpStr8  = 0xd9 // new
+	mpStr16 = 0xda
+	mpStr32 = 0xdb
 
-// MsgpackSpecRpc implements Rpc using the communication protocol defined in
-// the msgpack spec at http://wiki.msgpack.org/display/MSGPACK/RPC+specification .
-// It's methods (ServerCodec and ClientCodec) return values that implement RpcCodecBuffered.
-var MsgpackSpecRpc msgpackSpecRpc
+	mpArray16 = 0xdc
+	mpArray32 = 0xdd
+
+	mpMap16 = 0xde
+	mpMap32 = 0xdf
+
+	mpNegFixNumMin = 0xe0
+	mpNegFixNumMax = 0xff
+)
 
 // MsgpackSpecRpcMultiArgs is a special type which signifies to the MsgpackSpecRpcCodec
-// that the backend RPC service takes multiple arguments, which have been arranged 
-// in sequence in the slice. 
-// 
-// The Codec then passes it AS-IS to the rpc service (without wrapping it in an 
+// that the backend RPC service takes multiple arguments, which have been arranged
+// in sequence in the slice.
+//
+// The Codec then passes it AS-IS to the rpc service (without wrapping it in an
 // array of 1 element).
 type MsgpackSpecRpcMultiArgs []interface{}
 
 // A MsgpackContainer type specifies the different types of msgpackContainers.
 type msgpackContainerType struct {
-	fixCutoff             int
-	bFixMin, b8, b16, b32 byte
-	hasFixMin, has8, has8Always  bool
+	fixCutoff                   int
+	bFixMin, b8, b16, b32       byte
+	hasFixMin, has8, has8Always bool
 }
 
 var (
@@ -91,54 +83,19 @@ var (
 	msgpackContainerMap  = msgpackContainerType{16, mpFixMapMin, 0, mpMap16, mpMap32, true, false, false}
 )
 
-// msgpackSpecRpc is the implementation of Rpc that uses custom communication protocol
-// as defined in the msgpack spec at http://wiki.msgpack.org/display/MSGPACK/RPC+specification
-type msgpackSpecRpc struct{}
-
-type msgpackSpecRpcCodec struct {
-	rpcCodec
-}
-
-//MsgpackHandle is a Handle for the Msgpack Schema-Free Encoding Format.
-type MsgpackHandle struct {
-	// RawToString controls how raw bytes are decoded into a nil interface{}.
-	RawToString bool
-	// WriteExt flag supports encoding configured extensions with extension tags.
-	// It also controls whether other elements of the new spec are encoded (ie Str8).
-	// 
-	// With WriteExt=false, configured extensions are serialized as raw bytes 
-	// and Str8 is not encoded.
-	//
-	// A stream can still be decoded into a typed value, provided an appropriate value
-	// is provided, but the type cannot be inferred from the stream. If no appropriate
-	// type is provided (e.g. decoding into a nil interface{}), you get back
-	// a []byte or string based on the setting of RawToString.
-	WriteExt bool
-
-	extHandle
-	EncodeOptions
-	DecodeOptions
-}
+//---------------------------------------------
 
 type msgpackEncDriver struct {
 	w encWriter
 	h *MsgpackHandle
 }
 
-type msgpackDecDriver struct {
-	r      decReader
-	h      *MsgpackHandle
-	bd     byte
-	bdRead bool
-	bdType decodeEncodedType
-}
-
 func (e *msgpackEncDriver) isBuiltinType(rt uintptr) bool {
 	//no builtin types. All encodings are based on kinds. Types supported as extensions.
 	return false
 }
-	
-func (e *msgpackEncDriver) encodeBuiltinType(rt uintptr, rv reflect.Value) {}
+
+func (e *msgpackEncDriver) encodeBuiltin(rt uintptr, v interface{}) {}
 
 func (e *msgpackEncDriver) encodeNil() {
 	e.w.writen1(mpNil)
@@ -278,119 +235,119 @@ func (e *msgpackEncDriver) writeContainerLen(ct msgpackContainerType, l int) {
 
 //---------------------------------------------
 
+type msgpackDecDriver struct {
+	r      decReader
+	h      *MsgpackHandle
+	bd     byte
+	bdRead bool
+	bdType valueType
+}
+
 func (d *msgpackDecDriver) isBuiltinType(rt uintptr) bool {
 	//no builtin types. All encodings are based on kinds. Types supported as extensions.
 	return false
 }
-	
-func (d *msgpackDecDriver) decodeBuiltinType(rt uintptr, rv reflect.Value) {}
+
+func (d *msgpackDecDriver) decodeBuiltin(rt uintptr, v interface{}) {}
 
 // Note: This returns either a primitive (int, bool, etc) for non-containers,
 // or a containerType, or a specific type denoting nil or extension.
 // It is called when a nil interface{} is passed, leaving it up to the DecDriver
 // to introspect the stream and decide how best to decode.
 // It deciphers the value by looking at the stream first.
-func (d *msgpackDecDriver) decodeNaked() (rv reflect.Value, ctx decodeNakedContext) {
+func (d *msgpackDecDriver) decodeNaked() (v interface{}, vt valueType, decodeFurther bool) {
 	d.initReadNext()
 	bd := d.bd
 
-	var v interface{}
-
 	switch bd {
 	case mpNil:
-		ctx = dncNil
+		vt = valueTypeNil
 		d.bdRead = false
 	case mpFalse:
+		vt = valueTypeBool
 		v = false
 	case mpTrue:
+		vt = valueTypeBool
 		v = true
 
 	case mpFloat:
+		vt = valueTypeFloat
 		v = float64(math.Float32frombits(d.r.readUint32()))
 	case mpDouble:
+		vt = valueTypeFloat
 		v = math.Float64frombits(d.r.readUint64())
 
 	case mpUint8:
+		vt = valueTypeUint
 		v = uint64(d.r.readn1())
 	case mpUint16:
+		vt = valueTypeUint
 		v = uint64(d.r.readUint16())
 	case mpUint32:
+		vt = valueTypeUint
 		v = uint64(d.r.readUint32())
 	case mpUint64:
+		vt = valueTypeUint
 		v = uint64(d.r.readUint64())
 
 	case mpInt8:
+		vt = valueTypeInt
 		v = int64(int8(d.r.readn1()))
 	case mpInt16:
+		vt = valueTypeInt
 		v = int64(int16(d.r.readUint16()))
 	case mpInt32:
+		vt = valueTypeInt
 		v = int64(int32(d.r.readUint32()))
 	case mpInt64:
+		vt = valueTypeInt
 		v = int64(int64(d.r.readUint64()))
 
 	default:
 		switch {
 		case bd >= mpPosFixNumMin && bd <= mpPosFixNumMax:
 			// positive fixnum (always signed)
+			vt = valueTypeInt
 			v = int64(int8(bd))
 		case bd >= mpNegFixNumMin && bd <= mpNegFixNumMax:
 			// negative fixnum
+			vt = valueTypeInt
 			v = int64(int8(bd))
 		case bd == mpStr8, bd == mpStr16, bd == mpStr32, bd >= mpFixStrMin && bd <= mpFixStrMax:
-			ctx = dncContainer
-			// v = containerRaw
 			if d.h.RawToString {
 				var rvm string
-				rv = reflect.ValueOf(&rvm).Elem()
+				vt = valueTypeString
+				v = &rvm
 			} else {
-				rvm := []byte{}
-				rv = reflect.ValueOf(&rvm).Elem()
-				//rv = reflect.New(byteSliceTyp).Elem() // Use New, not Zero, so it's settable
+				var rvm = []byte{}
+				vt = valueTypeBytes
+				v = &rvm
 			}
+			decodeFurther = true
 		case bd == mpBin8, bd == mpBin16, bd == mpBin32:
-			ctx = dncContainer
-			rvm := []byte{}
-			rv = reflect.ValueOf(&rvm).Elem()
-			// rv = reflect.New(byteSliceTyp).Elem()
+			var rvm = []byte{}
+			vt = valueTypeBytes
+			v = &rvm
+			decodeFurther = true
 		case bd == mpArray16, bd == mpArray32, bd >= mpFixArrayMin && bd <= mpFixArrayMax:
-			ctx = dncContainer
-			// v = containerList
-			if d.h.SliceType == nil {
-				rv = reflect.New(intfSliceTyp).Elem()
-			} else {
-				rv = reflect.New(d.h.SliceType).Elem()
-			}
+			vt = valueTypeArray
+			decodeFurther = true
 		case bd == mpMap16, bd == mpMap32, bd >= mpFixMapMin && bd <= mpFixMapMax:
-			ctx = dncContainer
-			// v = containerMap
-			if d.h.MapType == nil {
-				rv = reflect.MakeMap(mapIntfIntfTyp)
-			} else {
-				rv = reflect.MakeMap(d.h.MapType)
-			}
+			vt = valueTypeMap
+			decodeFurther = true
 		case bd >= mpFixExt1 && bd <= mpFixExt16, bd >= mpExt8 && bd <= mpExt32:
-			//ctx = dncExt
 			clen := d.readExtLen()
-			xtag := d.r.readn1()
-			xbs := d.r.readn(clen)
-			var bfn func(reflect.Value, []byte) error
-			rv, bfn = d.h.getDecodeExtForTag(xtag)
-			if bfn == nil {
-				// decErr("Unable to find type mapped to extension tag: %v", xtag)
-				re := RawExt { xtag, xbs }
-				rv = reflect.ValueOf(&re).Elem()
-			} else if fnerr := bfn(rv, xbs); fnerr != nil {
-				panic(fnerr)
-			}
+			var re RawExt
+			re.Tag = d.r.readn1()
+			re.Data = d.r.readn(clen)
+			v = &re
+			vt = valueTypeExt
 		default:
 			decErr("Nil-Deciphered DecodeValue: %s: hex: %x, dec: %d", msgBadDesc, bd, bd)
 		}
 	}
-	if ctx == dncHandled {
+	if !decodeFurther {
 		d.bdRead = false
-		if v != nil {
-			rv = reflect.ValueOf(v)
-		}
 	}
 	return
 }
@@ -575,52 +532,51 @@ func (d *msgpackDecDriver) initReadNext() {
 	}
 	d.bd = d.r.readn1()
 	d.bdRead = true
-	d.bdType = detUnset
-}
-
-func (d *msgpackDecDriver) currentEncodedType() decodeEncodedType {
-	if d.bdType == detUnset {
-	bd := d.bd
-	switch bd {
-	case mpNil:
-		d.bdType = detNil
-	case mpFalse, mpTrue:
-		d.bdType = detBool
-	case mpFloat, mpDouble:
-		d.bdType = detFloat
-	case mpUint8, mpUint16, mpUint32, mpUint64:
-		d.bdType = detUint
-	case mpInt8, mpInt16, mpInt32, mpInt64:
-		d.bdType = detInt
-	default:
-		switch {
-		case bd >= mpPosFixNumMin && bd <= mpPosFixNumMax:
-			d.bdType = detInt
-		case bd >= mpNegFixNumMin && bd <= mpNegFixNumMax:
-			d.bdType = detInt
-		case bd == mpStr8, bd == mpStr16, bd == mpStr32, bd >= mpFixStrMin && bd <= mpFixStrMax:
-			if d.h.RawToString {
-				d.bdType = detString
-			} else {
-				d.bdType = detBytes
-			}
-		case bd == mpBin8, bd == mpBin16, bd == mpBin32:
-			d.bdType = detBytes
-		case bd == mpArray16, bd == mpArray32, bd >= mpFixArrayMin && bd <= mpFixArrayMax:
-			d.bdType = detArray
-		case bd == mpMap16, bd == mpMap32, bd >= mpFixMapMin && bd <= mpFixMapMax:
-			d.bdType = detMap
-		case bd >= mpFixExt1 && bd <= mpFixExt16, bd >= mpExt8 && bd <= mpExt32:
-			d.bdType = detExt
+	d.bdType = valueTypeUnset
+}
+
+func (d *msgpackDecDriver) currentEncodedType() valueType {
+	if d.bdType == valueTypeUnset {
+		bd := d.bd
+		switch bd {
+		case mpNil:
+			d.bdType = valueTypeNil
+		case mpFalse, mpTrue:
+			d.bdType = valueTypeBool
+		case mpFloat, mpDouble:
+			d.bdType = valueTypeFloat
+		case mpUint8, mpUint16, mpUint32, mpUint64:
+			d.bdType = valueTypeUint
+		case mpInt8, mpInt16, mpInt32, mpInt64:
+			d.bdType = valueTypeInt
 		default:
-			decErr("currentEncodedType: Undeciphered descriptor: %s: hex: %x, dec: %d", msgBadDesc, bd, bd)
+			switch {
+			case bd >= mpPosFixNumMin && bd <= mpPosFixNumMax:
+				d.bdType = valueTypeInt
+			case bd >= mpNegFixNumMin && bd <= mpNegFixNumMax:
+				d.bdType = valueTypeInt
+			case bd == mpStr8, bd == mpStr16, bd == mpStr32, bd >= mpFixStrMin && bd <= mpFixStrMax:
+				if d.h.RawToString {
+					d.bdType = valueTypeString
+				} else {
+					d.bdType = valueTypeBytes
+				}
+			case bd == mpBin8, bd == mpBin16, bd == mpBin32:
+				d.bdType = valueTypeBytes
+			case bd == mpArray16, bd == mpArray32, bd >= mpFixArrayMin && bd <= mpFixArrayMax:
+				d.bdType = valueTypeArray
+			case bd == mpMap16, bd == mpMap32, bd >= mpFixMapMin && bd <= mpFixMapMax:
+				d.bdType = valueTypeMap
+			case bd >= mpFixExt1 && bd <= mpFixExt16, bd >= mpExt8 && bd <= mpExt32:
+				d.bdType = valueTypeExt
+			default:
+				decErr("currentEncodedType: Undeciphered descriptor: %s: hex: %x, dec: %d", msgBadDesc, bd, bd)
+			}
 		}
 	}
-	}
 	return d.bdType
 }
 
-
 func (d *msgpackDecDriver) tryDecodeAsNil() bool {
 	if d.bd == mpNil {
 		d.bdRead = false
@@ -630,7 +586,7 @@ func (d *msgpackDecDriver) tryDecodeAsNil() bool {
 }
 
 func (d *msgpackDecDriver) readContainerLen(ct msgpackContainerType) (clen int) {
-	bd := d.bd 
+	bd := d.bd
 	switch {
 	case bd == mpNil:
 		clen = -1 // to represent nil
@@ -686,9 +642,9 @@ func (d *msgpackDecDriver) readExtLen() (clen int) {
 func (d *msgpackDecDriver) decodeExt(verifyTag bool, tag byte) (xtag byte, xbs []byte) {
 	xbd := d.bd
 	switch {
-	case xbd == mpBin8, xbd == mpBin16, xbd == mpBin32: 
-		xbs, _ = d.decodeBytes(nil) 
-	case xbd == mpStr8, xbd == mpStr16, xbd == mpStr32, 
+	case xbd == mpBin8, xbd == mpBin16, xbd == mpBin32:
+		xbs, _ = d.decodeBytes(nil)
+	case xbd == mpStr8, xbd == mpStr16, xbd == mpStr32,
 		xbd >= mpFixStrMin && xbd <= mpFixStrMax:
 		xbs = []byte(d.decodeString())
 	default:
@@ -705,27 +661,23 @@ func (d *msgpackDecDriver) decodeExt(verifyTag bool, tag byte) (xtag byte, xbs [
 
 //--------------------------------------------------
 
-// TimeEncodeExt encodes a time.Time as a byte slice.
-// Configure this to support the Time Extension, e.g. using tag 1.
-func (_ *MsgpackHandle) TimeEncodeExt(rv reflect.Value) (bs []byte, err error) {
-	rvi := rv.Interface()
-	switch iv := rvi.(type) {
-	case time.Time:
-		bs = encodeTime(iv)
-	default:
-		err = fmt.Errorf("codec/msgpack: TimeEncodeExt expects a time.Time. Received %T", rvi)
-	}
-	return
-}
+//MsgpackHandle is a Handle for the Msgpack Schema-Free Encoding Format.
+type MsgpackHandle struct {
+	BasicHandle
 
-// TimeDecodeExt decodes a time.Time from the byte slice parameter, and sets it into the reflect value.
-// Configure this to support the Time Extension, e.g. using tag 1.
-func (_ *MsgpackHandle) TimeDecodeExt(rv reflect.Value, bs []byte) (err error) {
-	tt, err := decodeTime(bs)
-	if err == nil {
-		rv.Set(reflect.ValueOf(tt))
-	}
-	return
+	// RawToString controls how raw bytes are decoded into a nil interface{}.
+	RawToString bool
+	// WriteExt flag supports encoding configured extensions with extension tags.
+	// It also controls whether other elements of the new spec are encoded (ie Str8).
+	//
+	// With WriteExt=false, configured extensions are serialized as raw bytes
+	// and Str8 is not encoded.
+	//
+	// A stream can still be decoded into a typed value if an appropriate value
+	// is provided, but the type cannot be inferred from the stream. If no appropriate
+	// type is provided (e.g. decoding into a nil interface{}), you get back
+	// a []byte or string based on the setting of RawToString.
+	WriteExt bool
 }
 
 func (h *MsgpackHandle) newEncDriver(w encWriter) encDriver {
@@ -740,20 +692,20 @@ func (h *MsgpackHandle) writeExt() bool {
 	return h.WriteExt
 }
 
-//--------------------------------------------------
-
-func (x msgpackSpecRpc) ServerCodec(conn io.ReadWriteCloser, h Handle) rpc.ServerCodec {
-	return &msgpackSpecRpcCodec{newRPCCodec(conn, h)}
+func (h *MsgpackHandle) getBasicHandle() *BasicHandle {
+	return &h.BasicHandle
 }
 
-func (x msgpackSpecRpc) ClientCodec(conn io.ReadWriteCloser, h Handle) rpc.ClientCodec {
-	return &msgpackSpecRpcCodec{newRPCCodec(conn, h)}
+//--------------------------------------------------
+
+type msgpackSpecRpcCodec struct {
+	rpcCodec
 }
 
 // /////////////// Spec RPC Codec ///////////////////
 func (c *msgpackSpecRpcCodec) WriteRequest(r *rpc.Request, body interface{}) error {
-	// WriteRequest can write to both a Go service, and other services that do 
-	// not abide by the 1 argument rule of a Go service. 
+	// WriteRequest can write to both a Go service and other services that do
+	// not abide by the one-argument rule of a Go service.
 	// We discriminate based on if the body is a MsgpackSpecRpcMultiArgs
 	var bodyArr []interface{}
 	if m, ok := body.(MsgpackSpecRpcMultiArgs); ok {
@@ -761,7 +713,7 @@ func (c *msgpackSpecRpcCodec) WriteRequest(r *rpc.Request, body interface{}) err
 	} else {
 		bodyArr = []interface{}{body}
 	}
-	r2 := []interface{}{ 0, uint32(r.Seq), r.ServiceMethod, bodyArr }
+	r2 := []interface{}{0, uint32(r.Seq), r.ServiceMethod, bodyArr}
 	return c.write(r2, nil, false, true)
 }
 
@@ -773,7 +725,7 @@ func (c *msgpackSpecRpcCodec) WriteResponse(r *rpc.Response, body interface{}) e
 	if moe != nil && body != nil {
 		body = nil
 	}
-	r2 := []interface{}{ 1, uint32(r.Seq), moe, body }
+	r2 := []interface{}{1, uint32(r.Seq), moe, body}
 	return c.write(r2, nil, false, true)
 }
 
@@ -829,3 +781,21 @@ func (c *msgpackSpecRpcCodec) parseCustomHeader(expectTypeByte byte, msgid *uint
 	return
 }
 
+//--------------------------------------------------
+
+// msgpackSpecRpc is the implementation of Rpc that uses a custom communication protocol
+// as defined in the msgpack spec at https://github.com/msgpack-rpc/msgpack-rpc/blob/master/spec.md
+type msgpackSpecRpc struct{}
+
+// MsgpackSpecRpc implements Rpc using the communication protocol defined in
+// the msgpack spec at https://github.com/msgpack-rpc/msgpack-rpc/blob/master/spec.md .
+// Its methods (ServerCodec and ClientCodec) return values that implement RpcCodecBuffered.
+var MsgpackSpecRpc msgpackSpecRpc
+
+func (x msgpackSpecRpc) ServerCodec(conn io.ReadWriteCloser, h Handle) rpc.ServerCodec {
+	return &msgpackSpecRpcCodec{newRPCCodec(conn, h)}
+}
+
+func (x msgpackSpecRpc) ClientCodec(conn io.ReadWriteCloser, h Handle) rpc.ClientCodec {
+	return &msgpackSpecRpcCodec{newRPCCodec(conn, h)}
+}
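
A quick usage note for the relocated MsgpackHandle. The sketch below is not taken from this commit: the Encode/Decode entry points and the import path are assumptions based on the package's public API (NewEncoder and NewDecoder do appear in rpc.go below). It shows RawToString deciding whether raw/str data decodes into a string or a []byte during a schema-less decode into a nil interface{}:

    package main

    import (
    	"bytes"
    	"fmt"

    	"github.com/ugorji/go/codec" // import path assumed
    )

    func main() {
    	var mh codec.MsgpackHandle
    	mh.RawToString = true // raw/str data decodes into string instead of []byte
    	// mh.WriteExt = true // would also emit new-spec elements (Str8) and tagged extensions

    	var buf bytes.Buffer
    	if err := codec.NewEncoder(&buf, &mh).Encode(map[string]int{"a": 1}); err != nil {
    		panic(err)
    	}

    	var v interface{} // schema-less decode: the type is inferred from the stream
    	if err := codec.NewDecoder(&buf, &mh).Decode(&v); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%#v\n", v) // the "a" key comes back as a string because RawToString is set
    }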

+ 31 - 36
codec/rpc.go

@@ -1,24 +1,14 @@
 // Copyright (c) 2012, 2013 Ugorji Nwoke. All rights reserved.
 // Use of this source code is governed by a BSD-style license found in the LICENSE file.
 
-/*
-RPC
-
-RPC Client and Server Codecs are implemented, so the codecs can be used
-with the standard net/rpc package.
-*/
 package codec
 
 import (
-	"io"
 	"bufio"
+	"io"
 	"net/rpc"
 )
 
-// GoRpc implements Rpc using the communication protocol defined in net/rpc package.
-// It's methods (ServerCodec and ClientCodec) return values that implement RpcCodecBuffered.
-var GoRpc goRpc
-
 // Rpc provides a rpc Server or Client Codec for rpc communication.
 type Rpc interface {
 	ServerCodec(conn io.ReadWriteCloser, h Handle) rpc.ServerCodec
@@ -34,29 +24,15 @@ type RpcCodecBuffered interface {
 	BufferedWriter() *bufio.Writer
 }
 
+// -------------------------------------
+
 // rpcCodec defines the struct members and common methods.
 type rpcCodec struct {
 	rwc io.ReadWriteCloser
 	dec *Decoder
 	enc *Encoder
-	bw *bufio.Writer
-	br *bufio.Reader
-}
-
-type goRpcCodec struct {
-	rpcCodec
-}
-
-// goRpc is the implementation of Rpc that uses the communication protocol
-// as defined in net/rpc package.
-type goRpc struct{}
-
-func (x goRpc) ServerCodec(conn io.ReadWriteCloser, h Handle) rpc.ServerCodec {
-	return &goRpcCodec{newRPCCodec(conn, h)}
-}
-
-func (x goRpc) ClientCodec(conn io.ReadWriteCloser, h Handle) rpc.ClientCodec {
-	return &goRpcCodec{newRPCCodec(conn, h)}
+	bw  *bufio.Writer
+	br  *bufio.Reader
 }
 
 func newRPCCodec(conn io.ReadWriteCloser, h Handle) rpcCodec {
@@ -64,14 +40,13 @@ func newRPCCodec(conn io.ReadWriteCloser, h Handle) rpcCodec {
 	br := bufio.NewReader(conn)
 	return rpcCodec{
 		rwc: conn,
-		bw: bw,
-		br: br,
+		bw:  bw,
+		br:  br,
 		enc: NewEncoder(bw, h),
 		dec: NewDecoder(br, h),
 	}
 }
 
-// /////////////// RPC Codec Shared Methods ///////////////////
 func (c *rpcCodec) BufferedReader() *bufio.Reader {
 	return c.br
 }
@@ -90,13 +65,11 @@ func (c *rpcCodec) write(obj1, obj2 interface{}, writeObj2, doFlush bool) (err e
 		}
 	}
 	if doFlush && c.bw != nil {
-		//println("rpc flushing")
 		return c.bw.Flush()
 	}
 	return
 }
 
-
 func (c *rpcCodec) read(obj interface{}) (err error) {
 	//If nil is passed in, we should still attempt to read content to nowhere.
 	if obj == nil {
@@ -114,7 +87,12 @@ func (c *rpcCodec) ReadResponseBody(body interface{}) error {
 	return c.read(body)
 }
 
-// /////////////// Go RPC Codec ///////////////////
+// -------------------------------------
+
+type goRpcCodec struct {
+	rpcCodec
+}
+
 func (c *goRpcCodec) WriteRequest(r *rpc.Request, body interface{}) error {
 	return c.write(r, body, true, true)
 }
@@ -135,5 +113,22 @@ func (c *goRpcCodec) ReadRequestBody(body interface{}) error {
 	return c.read(body)
 }
 
-var _ RpcCodecBuffered = (*rpcCodec)(nil) // ensure *rpcCodec implements RpcCodecBuffered
+// -------------------------------------
+
+// goRpc is the implementation of Rpc that uses the communication protocol
+// as defined in the net/rpc package.
+type goRpc struct{}
 
+// GoRpc implements Rpc using the communication protocol defined in the net/rpc package.
+// Its methods (ServerCodec and ClientCodec) return values that implement RpcCodecBuffered.
+var GoRpc goRpc
+
+func (x goRpc) ServerCodec(conn io.ReadWriteCloser, h Handle) rpc.ServerCodec {
+	return &goRpcCodec{newRPCCodec(conn, h)}
+}
+
+func (x goRpc) ClientCodec(conn io.ReadWriteCloser, h Handle) rpc.ClientCodec {
+	return &goRpcCodec{newRPCCodec(conn, h)}
+}
+
+var _ RpcCodecBuffered = (*rpcCodec)(nil) // ensure *rpcCodec implements RpcCodecBuffered
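
For context, here is a hedged sketch of wiring these codecs into net/rpc (the import path is an assumption; the ServerCodec/ClientCodec signatures are as shown above):

    package example

    import (
    	"net"
    	"net/rpc"

    	"github.com/ugorji/go/codec" // import path assumed
    )

    // serveConn runs the standard net/rpc server loop over one connection,
    // using this package's GoRpc codec for serialization.
    func serveConn(conn net.Conn, h codec.Handle) {
    	rpc.ServeCodec(codec.GoRpc.ServerCodec(conn, h))
    }

    // dialClient returns an rpc.Client speaking the same protocol.
    func dialClient(addr string, h codec.Handle) (*rpc.Client, error) {
    	conn, err := net.Dial("tcp", addr)
    	if err != nil {
    		return nil, err
    	}
    	return rpc.NewClientWithCodec(codec.GoRpc.ClientCodec(conn, h)), nil
    }

Swapping GoRpc for MsgpackSpecRpc on both ends switches to the msgpack-RPC wire protocol; there, passing a MsgpackSpecRpcMultiArgs as the call body makes the codec send its elements as separate positional parameters instead of wrapping the body in a one-element array.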

+ 51 - 9
codec/time.go

@@ -11,8 +11,55 @@ var (
 	timeDigits = [...]byte{'0', '1', '2', '3', '4', '5', '6', '7', '8', '9'}
 )
 
-// encodeTime encodes a time.Time as a []byte, including
+// encodeTime encodes a time.Time as a []byte, including
 // information on the instant in time and UTC offset.
+//
+// Format Description
+//
+//   A timestamp is composed of 3 components:
+//
+//   - secs: signed integer representing seconds since unix epoch
+//   - nsecs: unsigned integer representing fractional seconds as a
+//     nanosecond offset within secs, in the range 0 <= nsecs < 1e9
+//   - tz: signed integer representing timezone offset in minutes east of UTC,
+//     and a dst (daylight savings time) flag
+//
+//   When encoding a timestamp, the first byte is the descriptor, which
+//   defines which components are encoded and how many bytes are used to
+//   encode secs and nsecs components. *If secs/nsecs is 0 or tz is UTC, it
+//   is not encoded in the byte array explicitly*.
+//
+//       Descriptor 8 bits are of the form `A B C DDD EE`:
+//           A:   Is secs component encoded? 1 = true
+//           B:   Is nsecs component encoded? 1 = true
+//           C:   Is tz component encoded? 1 = true
+//           DDD: Number of extra bytes for secs (range 0-7).
+//                If A = 1, secs encoded in DDD+1 bytes.
+//                    If A = 0, secs is not encoded, and is assumed to be 0.
+//                    If A = 1, then we need at least 1 byte to encode secs.
+//                    DDD says the number of extra bytes beyond that 1.
+//                    E.g. if DDD=0, then secs is represented in 1 byte.
+//                         if DDD=2, then secs is represented in 3 bytes.
+//           EE:  Number of extra bytes for nsecs (range 0-3).
+//                If B = 1, nsecs encoded in EE+1 bytes (similar to secs/DDD above)
+//
+//   Following the descriptor byte, the subsequent bytes are:
+//
+//       secs component encoded in `DDD + 1` bytes (if A == 1)
+//       nsecs component encoded in `EE + 1` bytes (if B == 1)
+//       tz component encoded in 2 bytes (if C == 1)
+//
+//   secs and nsecs components are integers encoded in a BigEndian
+//   2-complement encoding format.
+//
+//   tz component is encoded in 2 bytes (16 bits), with bit 15 (most significant)
+//   down to bit 0 (least significant) described below:
+//
+//       Timezone offset has a range of -12:00 to +14:00 (ie -720 to +840 minutes).
+//       Bit 15 = have_dst: set to 1 if the dst flag is set.
+//       Bit 14 = dst_on: set to 1 if dst is in effect at the time, or 0 if not.
+//       Bits 13..0 = timezone offset in minutes. It is a signed integer in Big Endian format.
+//
 func encodeTime(t time.Time) []byte {
 	//t := rv.Interface().(time.Time)
 	tsecs, tnsecs := t.Unix(), t.Nanosecond()
@@ -59,8 +106,7 @@ func encodeTime(t time.Time) []byte {
 	return bs[0:i]
 }
 
-// decodeTime decodes a []byte into a time.Time,
-// and sets into passed reflectValue.
+// decodeTime decodes a []byte into a time.Time.
 func decodeTime(bs []byte) (tt time.Time, err error) {
 	bd := bs[0]
 	var (
@@ -77,7 +123,7 @@ func decodeTime(bs []byte) (tt time.Time, err error) {
 		i2 = i + n
 		copy(btmp[8-n:], bs[i:i2])
 		//if first bit of bs[i] is set, then fill btmp[0..8-n] with 0xff (ie sign extend it)
-		if bs[i] & (1 << 7) != 0 {
+		if bs[i]&(1<<7) != 0 {
 			copy(btmp[0:8-n], bsAll0xff)
 			//for j,k := byte(0), 8-n; j < k; j++ {	btmp[j] = 0xff }
 		}
@@ -115,7 +161,7 @@ func decodeTime(bs []byte) (tt time.Time, err error) {
 	if tzint == 0 {
 		tt = time.Unix(tsec, int64(tnsec)).UTC()
 	} else {
-		// For Go Time, do not use a descriptive timezone. 
+		// For Go Time, do not use a descriptive timezone.
 		// It's unnecessary, and makes it harder to do a reflect.DeepEqual.
 		// The Offset already tells what the offset should be, if not on UTC and unknown zone name.
 		// var zoneName = timeLocUTCName(tzint)
@@ -145,7 +191,3 @@ func timeLocUTCName(tzint int16) string {
 	return string(tzname)
 	//return time.FixedZone(string(tzname), int(tzint)*60)
 }
-
-
-
-
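
As a reading aid for the timestamp format documented above, this sketch (not part of the package) splits the leading descriptor byte into the A/B/C/DDD/EE fields, assuming they are laid out most-significant-bit first as the `A B C DDD EE` ordering suggests:

    // describeTimeDesc reports which timestamp components follow the
    // descriptor byte bd, and how many bytes each one occupies.
    func describeTimeDesc(bd byte) (hasSecs, hasNsecs, hasTz bool, secsBytes, nsecsBytes int) {
    	hasSecs = bd&(1<<7) != 0  // A: secs component present
    	hasNsecs = bd&(1<<6) != 0 // B: nsecs component present
    	hasTz = bd&(1<<5) != 0    // C: tz component present (always 2 bytes)
    	if hasSecs {
    		secsBytes = int((bd>>2)&0x7) + 1 // DDD + 1, i.e. 1-8 bytes
    	}
    	if hasNsecs {
    		nsecsBytes = int(bd&0x3) + 1 // EE + 1, i.e. 1-4 bytes
    	}
    	return
    }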