These define some in-stream descriptors for manual encoding, e.g. when doing explicit indefinite-length encoding.
const (
	CborStreamBytes  byte = 0x5f
	CborStreamString byte = 0x7f
	CborStreamArray  byte = 0x9f
	CborStreamMap    byte = 0xbf
	CborStreamBreak  byte = 0xff
)
GenVersion is the current version of codecgen.
const GenVersion = 28
GoRpc implements Rpc using the communication protocol defined in the net/rpc package.
Note: network connection (from net.Dial, of type io.ReadWriteCloser) is not buffered.
For performance, you should configure WriterBufferSize and ReaderBufferSize on the handle. This ensures we use an adequate buffer during reading and writing. If not configured, we will internally initialize and use a buffer during reads and writes. This can be turned off via the RPCNoBuffer option on the Handle.
var handle codec.JsonHandle
handle.RPCNoBuffer = true // turns off attempt by rpc module to initialize a buffer
Example 1: one way of configuring buffering explicitly:
var handle codec.JsonHandle // codec handle
handle.ReaderBufferSize = 1024
handle.WriterBufferSize = 1024

var conn io.ReadWriteCloser // connection got from a socket

var serverCodec = GoRpc.ServerCodec(conn, &handle)
var clientCodec = GoRpc.ClientCodec(conn, &handle)
Example 2: you can also explicitly create a buffered connection yourself, and not worry about configuring the buffer sizes in the Handle.
var handle codec.Handle     // codec handle
var conn io.ReadWriteCloser // connection got from a socket
var bufconn = struct {      // bufconn here is a buffered io.ReadWriteCloser
	io.Closer
	*bufio.Reader
	*bufio.Writer
}{conn, bufio.NewReader(conn), bufio.NewWriter(conn)}

var serverCodec = GoRpc.ServerCodec(bufconn, handle)
var clientCodec = GoRpc.ClientCodec(bufconn, handle)
var GoRpc goRpc
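As a further sketch (assuming a connection, a registered rpc service and a "Service.Method" name set up elsewhere), the codecs plug directly into the standard library's net/rpc package:

var handle codec.Handle = new(codec.MsgpackHandle) // any pre-configured codec Handle

// server side: serve each accepted connection with its own server codec
var conn io.ReadWriteCloser // connection accepted from a net.Listener
go rpc.ServeCodec(GoRpc.ServerCodec(conn, handle))

// client side: wrap a dialed connection in a client codec
var conn2 io.ReadWriteCloser // connection obtained from net.Dial
client := rpc.NewClientWithCodec(GoRpc.ClientCodec(conn2, handle))
var args, reply interface{} // service-specific request and response values
err := client.Call("Service.Method", args, &reply)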
MsgpackSpecRpc implements Rpc using the communication protocol defined in the msgpack spec at https://github.com/msgpack-rpc/msgpack-rpc/blob/master/spec.md .
See the GoRpc documentation for information on buffering for better performance.
var MsgpackSpecRpc msgpackSpecRpc
SelfExt is a sentinel extension signifying that types registered with it SHOULD be encoded and decoded based on the native mode of the format.
This allows users to define a tag for an extension, but signify that the types should be encoded/decoded as the native encoding. This way, users need not also define how to encode or decode the extension.
var SelfExt = &extFailWrapper{}
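For illustration, a sketch of registering SelfExt for a hypothetical named type (the tag value 42 is arbitrary), so that values are tagged in the stream while their content is encoded natively:

type Vector []float64 // hypothetical named type

var ch codec.CborHandle
// Vector values are written as a cbor tagged item (tag 42), whose content
// is encoded in cbor's native mode (an array of floats).
err := ch.SetInterfaceExt(reflect.TypeOf(Vector(nil)), 42, codec.SelfExt)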
func GenHelper() (g genHelper)
GenHelperEncoder is exported so that it can be used externally by codecgen.
Library users: DO NOT USE IT DIRECTLY or INDIRECTLY. IT WILL CHANGE CONTINUOUSLY WITHOUT NOTICE.
BasicHandle encapsulates the common options and extension functions.
Deprecated: DO NOT USE DIRECTLY. EXPORTED FOR GODOC BENEFIT. WILL BE REMOVED.
type BasicHandle struct {
	// TypeInfos is used to get the type info for any type.
	//
	// If not configured, the default TypeInfos is used, which uses struct tag keys: codec, json
	TypeInfos *TypeInfos

	DecodeOptions
	EncodeOptions
	RPCOptions

	// TimeNotBuiltin configures whether time.Time should be treated as a builtin type.
	//
	// All Handlers should know how to encode/decode time.Time as part of the core
	// format specification, or as a standard extension defined by the format.
	//
	// However, users can elect to handle time.Time as a custom extension, or via the
	// standard library's encoding.Binary(M|Unm)arshaler or Text(M|Unm)arshaler interface.
	// To elect this behavior, users can set TimeNotBuiltin=true.
	//
	// Note: Setting TimeNotBuiltin=true can be used to enable the legacy behavior
	// (for Cbor and Msgpack), where time.Time was not a builtin supported type.
	//
	// Note: DO NOT CHANGE AFTER FIRST USE.
	//
	// Once a Handle has been initialized (used), do not modify this option. It will be ignored.
	TimeNotBuiltin bool

	// ExplicitRelease is ignored and has no effect.
	//
	// Deprecated: Pools are only used for long-lived objects shared across goroutines.
	// It is maintained for backward compatibility.
	ExplicitRelease bool
	// contains filtered or unexported fields
}
func (x *BasicHandle) AddExt(rt reflect.Type, tag byte, encfn func(reflect.Value) ([]byte, error), decfn func(reflect.Value, []byte) error) (err error)
AddExt registers an encode and decode function for a reflect.Type. To deregister an Ext, call AddExt with nil encfn and/or nil decfn.
Deprecated: Use SetBytesExt or SetInterfaceExt on the Handle instead.
func (x *BasicHandle) SetExt(rt reflect.Type, tag uint64, ext Ext) (err error)
SetExt will set the extension for a tag and reflect.Type. Note that the type must be a named type, and specifically not a pointer or interface. An error is returned if that is not honored. To deregister an ext, call SetExt with nil Ext.
Deprecated: Use SetBytesExt or SetInterfaceExt on the Handle instead.
func (x BasicHandle) TimeBuiltin() bool
TimeBuiltin returns whether time.Time OOTB support is used, based on the initial configuration of TimeNotBuiltin
BincHandle is a Handle for the Binc Schema-Free Encoding Format defined at https://github.com/ugorji/binc .
BincHandle currently supports all Binc features with the following EXCEPTIONS:
Note that these EXCEPTIONS are temporary and full support is possible and may happen soon.
type BincHandle struct {
	BasicHandle

	// AsSymbols defines what should be encoded as symbols.
	//
	// Encoding as symbols can reduce the encoded size significantly.
	//
	// However, during decoding, each string to be encoded as a symbol must
	// be checked to see if it has been seen before. Consequently, encoding time
	// will increase if using symbols, because string comparisons have a clear cost.
	//
	// Values:
	// - 0: default: library uses best judgement
	// - 1: use symbols
	// - 2: do not use symbols
	AsSymbols uint8
	// contains filtered or unexported fields
}
func (h *BincHandle) Name() string
Name returns the name of the handle: binc
func (h *BincHandle) SetBytesExt(rt reflect.Type, tag uint64, ext BytesExt) (err error)
SetBytesExt sets an extension
func (x BincHandle) TimeBuiltin() bool
TimeBuiltin returns whether time.Time OOTB support is used, based on the initial configuration of TimeNotBuiltin
BytesExt handles custom (de)serialization of types to/from []byte. It is used by codecs (e.g. binc, msgpack, simple) which do custom serialization of the types.
type BytesExt interface {
	// WriteExt converts a value to a []byte.
	//
	// Note: v is a pointer iff the registered extension type is a struct or array kind.
	WriteExt(v interface{}) []byte

	// ReadExt updates a value from a []byte.
	//
	// Note: dst is always a pointer kind to the registered extension type.
	ReadExt(dst interface{}, src []byte)
}
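A sketch of a BytesExt implementation for a hypothetical fixed-size ID type (names and the tag value are illustrative), suitable for binary formats like binc, msgpack or simple:

type ID [8]byte // hypothetical named type

type idExt struct{}

// WriteExt receives *ID (a pointer, because ID is an array kind) and returns its raw bytes.
func (idExt) WriteExt(v interface{}) []byte {
	id := v.(*ID)
	return id[:]
}

// ReadExt receives *ID and fills it from the raw bytes read from the stream.
func (idExt) ReadExt(dst interface{}, src []byte) {
	id := dst.(*ID)
	copy(id[:], src)
}

// registration:
//   var mh codec.MsgpackHandle
//   err := mh.SetBytesExt(reflect.TypeOf(ID{}), 1, idExt{})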
CborHandle is a Handle for the CBOR encoding format, defined at http://tools.ietf.org/html/rfc7049 and documented further at http://cbor.io .
CBOR is comprehensively supported, including support for:
None of the optional extensions (with tags) defined in the spec are supported out-of-the-box. Users can implement them as needed (using SetExt), including spec-documented ones:
type CborHandle struct {
	// noElemSeparators
	BasicHandle

	// IndefiniteLength=true means that we encode using indefinite length.
	IndefiniteLength bool

	// TimeRFC3339 says to encode time.Time using RFC3339 format.
	// If unset, we encode time.Time using seconds past epoch.
	TimeRFC3339 bool

	// SkipUnexpectedTags says to skip over any tags for which extensions are
	// not defined. This is in keeping with the cbor spec on "Optional Tagging of Items".
	//
	// Furthermore, this allows the skipping over of the Self Describing Tag 0xd9d9f7.
	SkipUnexpectedTags bool
	// contains filtered or unexported fields
}
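A minimal configuration sketch (field values are illustrative):

var ch codec.CborHandle
ch.TimeRFC3339 = true        // encode time.Time as RFC3339 text instead of seconds past epoch
ch.SkipUnexpectedTags = true // skip tags with no registered extension, incl. the self-describe tag 0xd9d9f7

var buf bytes.Buffer
err := codec.NewEncoder(&buf, &ch).Encode(map[string]interface{}{"when": time.Now()})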
func (h *CborHandle) Name() string
Name returns the name of the handle: cbor
func (h *CborHandle) SetInterfaceExt(rt reflect.Type, tag uint64, ext InterfaceExt) (err error)
SetInterfaceExt sets an extension
func (x CborHandle) TimeBuiltin() bool
TimeBuiltin returns whether time.Time OOTB support is used, based on the initial configuration of TimeNotBuiltin
DecodeOptions captures configuration options during decode.
type DecodeOptions struct {
	// MapType specifies type to use during schema-less decoding of a map in the stream.
	// If nil (unset), we default to map[string]interface{} iff json handle and MapKeyAsString=true,
	// else map[interface{}]interface{}.
	MapType reflect.Type

	// SliceType specifies type to use during schema-less decoding of an array in the stream.
	// If nil (unset), we default to []interface{} for all formats.
	SliceType reflect.Type

	// MaxInitLen defines the maximum initial length that we "make" a collection
	// (string, slice, map, chan). If 0 or negative, we default to a sensible value
	// based on the size of an element in the collection.
	//
	// For example, when decoding, a stream may say that it has 2^64 elements.
	// We should not automatically provision a slice of that size, to prevent an Out-Of-Memory crash.
	// Instead, we provision up to MaxInitLen, fill that up, and start appending after that.
	MaxInitLen int

	// ReaderBufferSize is the size of the buffer used when reading.
	//
	// if > 0, we use a smart buffer internally for performance purposes.
	ReaderBufferSize int

	// MaxDepth defines the maximum depth when decoding nested
	// maps and slices. If 0 or negative, we default to a suitably large number (currently 1024).
	MaxDepth int16

	// If ErrorIfNoField, return an error when decoding a map
	// from a codec stream into a struct, and no matching struct field is found.
	ErrorIfNoField bool

	// If ErrorIfNoArrayExpand, return an error when decoding a slice/array that cannot be expanded.
	// For example, the stream contains an array of 8 items, but you are decoding into a [4]T array,
	// or you are decoding into a slice of length 4 which is non-addressable (and so cannot be set).
	ErrorIfNoArrayExpand bool

	// If SignedInteger, use int64 during schema-less decoding of unsigned values (not uint64).
	SignedInteger bool

	// MapValueReset controls how we decode into a map value.
	//
	// By default, we MAY retrieve the mapping for a key, and then decode into that.
	// However, especially with big maps, that retrieval may be expensive and unnecessary
	// if the stream already contains all that is necessary to recreate the value.
	//
	// If true, we will never retrieve the previous mapping,
	// but rather decode into a new value and set that in the map.
	//
	// If false, we will retrieve the previous mapping if necessary e.g.
	// the previous mapping is a pointer, or is a struct or array with pre-set state,
	// or is an interface.
	MapValueReset bool

	// SliceElementReset: on decoding a slice, reset the element to a zero value first.
	//
	// concern: if the slice already contained some garbage, we will decode into that garbage.
	SliceElementReset bool

	// InterfaceReset controls how we decode into an interface.
	//
	// By default, when we see a field that is an interface{...},
	// or a map with interface{...} value, we will attempt decoding into the
	// "contained" value.
	//
	// However, this prevents us from reading a string into an interface{}
	// that formerly contained a number.
	//
	// If true, we will decode into a new "blank" value, and set that in the interface.
	// If false, we will decode into whatever is contained in the interface.
	InterfaceReset bool

	// InternString controls interning of strings during decoding.
	//
	// Some handles, e.g. json, typically will read map keys as strings.
	// If the set of keys are finite, it may help reduce allocation to
	// look them up from a map (than to allocate them afresh).
	//
	// Note: Handles will be smart when using the intern functionality.
	// Not every string should be interned.
	// An excellent use-case for interning is struct field names,
	// or map keys where the key type is string.
	InternString bool

	// PreferArrayOverSlice controls whether to decode to an array or a slice.
	//
	// This only impacts decoding into a nil interface{}.
	//
	// Consequently, it has no effect on codecgen.
	//
	// *Note*: This only applies if using go1.5 and above,
	// as it requires reflect.ArrayOf support which was absent before go1.5.
	PreferArrayOverSlice bool

	// DeleteOnNilMapValue controls how to decode a nil value in the stream.
	//
	// If true, we will delete the mapping of the key.
	// Else, just set the mapping to the zero value of the type.
	//
	// Deprecated: This does NOTHING and is left behind for compiling compatibility.
	// This change is necessitated because 'nil' in a stream now consistently
	// means the zero value (ie reset the value to its zero state).
	DeleteOnNilMapValue bool

	// RawToString controls how raw bytes in a stream are decoded into a nil interface{}.
	// By default, they are decoded as []byte, but can be decoded as string (if configured).
	RawToString bool

	// ZeroCopy controls whether decoded values of []byte or string type
	// point into the input []byte parameter passed to a NewDecoderBytes/ResetBytes(...) call.
	//
	// To illustrate, if ZeroCopy and decoding from a []byte (not io.Writer),
	// then a []byte or string in the output result may just be a slice of (point into)
	// the input bytes.
	//
	// This optimization prevents unnecessary copying.
	//
	// However, it is made optional, as the caller MUST ensure that the input parameter []byte is
	// not modified after the Decode() happens, as any changes are mirrored in the decoded result.
	ZeroCopy bool

	// PreferPointerForStructOrArray controls whether a struct or array
	// is stored in a nil interface{}, or a pointer to it.
	//
	// This mostly impacts when we decode registered extensions.
	PreferPointerForStructOrArray bool

	// ValidateUnicode will cause decoding to fail if an expected unicode
	// string is well-formed but includes invalid codepoints.
	//
	// This could have a performance impact.
	ValidateUnicode bool
}
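A sketch of configuring some of these options on a handle before first use (values are illustrative):

var jh codec.JsonHandle
jh.MapType = reflect.TypeOf(map[string]interface{}(nil)) // schema-less maps decode as map[string]interface{}
jh.MaxInitLen = 1024                                     // cap initial allocations driven by lengths in the stream
jh.MaxDepth = 64                                         // fail on excessively nested input
jh.ErrorIfNoField = true                                 // error when a stream key has no matching struct field

var in []byte // encoded input obtained elsewhere
var v interface{}
err := codec.NewDecoderBytes(in, &jh).Decode(&v)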
Decoder reads and decodes an object from an input stream in a supported format.
Decoder is NOT safe for concurrent use i.e. a Decoder cannot be used concurrently in multiple goroutines.
However, as Decoder could be allocation heavy to initialize, a Reset method is provided so its state can be reused to decode new input streams repeatedly. This is the idiomatic way to use.
type Decoder struct {
// contains filtered or unexported fields
}
func NewDecoder(r io.Reader, h Handle) *Decoder
NewDecoder returns a Decoder for decoding a stream of bytes from an io.Reader.
For efficiency, users are encouraged to configure ReaderBufferSize on the handle OR pass in a memory buffered reader (eg bufio.Reader, bytes.Buffer).
func NewDecoderBytes(in []byte, h Handle) *Decoder
NewDecoderBytes returns a Decoder which efficiently decodes directly from a byte slice with zero copying.
func NewDecoderString(s string, h Handle) *Decoder
NewDecoderString returns a Decoder which efficiently decodes directly from a string with zero copying.
It is a convenience function that calls NewDecoderBytes with a []byte view into the string.
This can be an efficient zero-copy if using default mode i.e. without codec.safe tag.
func (d *Decoder) Decode(v interface{}) (err error)
Decode decodes the stream from reader and stores the result in the value pointed to by v. v cannot be a nil pointer. v can also be a reflect.Value of a pointer.
Note that a pointer to a nil interface is not a nil pointer. If you do not know what type of stream it is, pass in a pointer to a nil interface. We will decode and store a value in that nil interface.
Sample usages:
// Decoding into a non-nil typed value
var f float32
err = codec.NewDecoder(r, handle).Decode(&f)

// Decoding into nil interface
var v interface{}
dec := codec.NewDecoder(r, handle)
err = dec.Decode(&v)
When decoding into a nil interface{}, we will decode into an appropriate value based on the contents of the stream:
Configurations exist on the Handle to override defaults (e.g. for MapType, SliceType and how to decode raw bytes).
When decoding into a non-nil interface{} value, the mode of encoding is based on the type of the value. When a value is seen:
There are some special rules when decoding into containers (slice/array/map/struct). Decode will typically use the stream contents to UPDATE the container i.e. the values in these containers will not be zero'ed before decoding.
This in-place update maintains consistency in the decoding philosophy (i.e. we ALWAYS update in place by default). However, the consequence of this is that values in slices or maps which are not zero'ed beforehand will retain part of the prior values after decode if the stream doesn't contain an update for those parts.
This in-place update can be disabled by configuring the MapValueReset and SliceElementReset decode options available on every handle.
Furthermore, when decoding a stream map or array with length of 0 into a nil map or slice, we reset the destination map or slice to a zero-length value.
However, when decoding a stream nil, we reset the destination container to its "zero" value (e.g. nil for slice/map, etc).
Note: we allow nil values in the stream anywhere except for map keys. A nil value in the encoded stream where a map key is expected is treated as an error.
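To illustrate the in-place update, a sketch with a hypothetical Config type: fields absent from the stream keep their prior values.

type Config struct { // hypothetical type
	Host string
	Port int
}

var jh codec.JsonHandle
cfg := Config{Host: "localhost", Port: 8080}

// the stream only mentions Host, so Port keeps its prior value (8080)
err := codec.NewDecoderBytes([]byte(`{"Host":"example.com"}`), &jh).Decode(&cfg)

// to avoid carrying prior state into map values or slice elements, configure:
//   jh.MapValueReset = true
//   jh.SliceElementReset = true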
func (d *Decoder) HandleName() string
func (d *Decoder) MustDecode(v interface{})
MustDecode is like Decode, but panics if unable to Decode.
Note: This provides insight into the code location that triggered the error.
func (d *Decoder) NumBytesRead() int
NumBytesRead returns the number of bytes read
func (d *Decoder) Release()
Release is a no-op.
Deprecated: Pooled resources are not used with a Decoder. This method is kept for compatibility reasons only.
func (d *Decoder) Reset(r io.Reader)
Reset the Decoder with a new Reader to decode from, clearing all state from last run(s).
func (d *Decoder) ResetBytes(in []byte)
ResetBytes resets the Decoder with a new []byte to decode from, clearing all state from last run(s).
func (d *Decoder) ResetString(s string)
ResetString resets the Decoder with a new string to decode from, clearing all state from last run(s).
It is a convenience function that calls ResetBytes with a []byte view into the string.
This can be an efficient zero-copy if using default mode i.e. without codec.safe tag.
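A sketch of the idiomatic reuse pattern: allocate one Decoder, then Reset/ResetBytes it for each new input (the payloads below are illustrative).

var jh codec.JsonHandle
dec := codec.NewDecoderBytes(nil, &jh) // input is supplied per iteration via ResetBytes

var payloads [][]byte // encoded inputs obtained elsewhere
for _, b := range payloads {
	var v interface{}
	dec.ResetBytes(b)
	if err := dec.Decode(&v); err != nil {
		// handle error
	}
}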
EncodeOptions captures configuration options during encode.
type EncodeOptions struct {
	// WriterBufferSize is the size of the buffer used when writing.
	//
	// if > 0, we use a smart buffer internally for performance purposes.
	WriterBufferSize int

	// ChanRecvTimeout is the timeout used when selecting from a chan.
	//
	// Configuring this controls how we receive from a chan during the encoding process.
	// - If ==0, we only consume the elements currently available in the chan.
	// - If <0, we consume until the chan is closed.
	// - If >0, we consume until this timeout.
	ChanRecvTimeout time.Duration

	// StructToArray specifies to encode a struct as an array, and not as a map.
	StructToArray bool

	// Canonical representation means that encoding a value will always result in the same
	// sequence of bytes.
	//
	// This only affects maps, as the iteration order for maps is random.
	//
	// The implementation MAY use the natural sort order for the map keys if possible:
	//
	// - If there is a natural sort order (ie for number, bool, string or []byte keys),
	//   then the map keys are first sorted in natural order and then written
	//   with corresponding map values to the stream.
	// - If there is no natural sort order, then the map keys will first be
	//   encoded into []byte, and then sorted,
	//   before writing the sorted keys and the corresponding map values to the stream.
	Canonical bool

	// CheckCircularRef controls whether we check for circular references
	// and error fast during an encode.
	//
	// If enabled, an error is returned if a pointer to a struct
	// references itself either directly or through one of its fields (iteratively).
	//
	// This is opt-in, as there may be a performance hit to checking circular references.
	CheckCircularRef bool

	// RecursiveEmptyCheck controls how we determine whether a value is empty.
	//
	// If true, we descend into interfaces and pointers to recursively check if value is empty.
	//
	// We *might* check struct fields one by one to see if empty
	// (if we cannot directly check if a struct value is equal to its zero value).
	// If so, we honor IsZero, Comparable, IsCodecEmpty(), etc.
	// Note: This *may* make OmitEmpty more expensive due to the large number of reflect calls.
	//
	// If false, we check if the value is equal to its zero value (newly allocated state).
	RecursiveEmptyCheck bool

	// Raw controls whether we encode Raw values.
	// This is a "dangerous" option and must be explicitly set.
	// If set, we blindly encode Raw values as-is, without checking
	// if they are a correct representation of a value in that format.
	// If unset, we error out.
	Raw bool

	// StringToRaw controls how strings are encoded.
	//
	// As a go string is just an (immutable) sequence of bytes,
	// it can be encoded either as raw bytes or as a UTF string.
	//
	// By default, strings are encoded as UTF-8, but can be treated as []byte during an encode.
	//
	// Note that things which we know (by definition) to be UTF-8
	// are ALWAYS encoded as UTF-8 strings.
	// These include encoding.TextMarshaler, time.Format calls, struct field names, etc.
	StringToRaw bool

	// OptimumSize controls whether we optimize for the smallest size.
	//
	// Some formats will use this flag to determine whether to encode
	// in the smallest size possible, even if it takes slightly longer.
	//
	// For example, some formats that support half-floats might check if it is possible
	// to store a float64 as a half float. Doing this check has a small performance cost,
	// but the benefit is that the encoded message will be smaller.
	OptimumSize bool

	// NoAddressableReadonly controls whether we try to force a non-addressable value
	// to be addressable so we can call a pointer method on it e.g. for types
	// that support Selfer, json.Marshaler, etc.
	//
	// Use it in the very rare occurrence that your types modify a pointer value when calling
	// an encode callback function e.g. JsonMarshal, TextMarshal, BinaryMarshal or CodecEncodeSelf.
	NoAddressableReadonly bool
}
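A sketch of configuring some of these options on a handle before first use (values are illustrative):

var ch codec.CborHandle
ch.Canonical = true     // deterministic output: map keys are sorted before writing
ch.StructToArray = true // encode structs as arrays instead of maps
ch.OptimumSize = true   // prefer the smallest encoding the format allows

var buf bytes.Buffer
err := codec.NewEncoder(&buf, &ch).Encode(map[string]int{"a": 1, "b": 2})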
Encoder writes an object to an output stream in a supported format.
Encoder is NOT safe for concurrent use i.e. an Encoder cannot be used concurrently in multiple goroutines.
However, as Encoder could be allocation heavy to initialize, a Reset method is provided so its state can be reused to encode new output streams repeatedly. This is the idiomatic way to use.
type Encoder struct {
// contains filtered or unexported fields
}
func NewEncoder(w io.Writer, h Handle) *Encoder
NewEncoder returns an Encoder for encoding into an io.Writer.
For efficiency, users are encouraged to configure WriterBufferSize on the handle OR pass in a memory buffered writer (eg bufio.Writer, bytes.Buffer).
func NewEncoderBytes(out *[]byte, h Handle) *Encoder
NewEncoderBytes returns an encoder for encoding directly and efficiently into a byte slice, using zero-copying to temporary slices.
It will potentially replace the output byte slice pointed to. After encoding, the out parameter contains the encoded contents.
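For illustration, a short sketch of encoding into a byte slice and reusing the Encoder for another destination:

var mh codec.MsgpackHandle

var out []byte
enc := codec.NewEncoderBytes(&out, &mh)
if err := enc.Encode([]string{"a", "b", "c"}); err != nil {
	// handle error
}
// out now contains the msgpack encoding of the slice

var out2 []byte
enc.ResetBytes(&out2) // reuse the same Encoder for a new destination
err := enc.Encode(42)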
func (e *Encoder) Encode(v interface{}) (err error)
Encode writes an object into a stream.
Encoding can be configured via the struct tag for the fields. The key (in the struct tags) that we look at is configurable.
By default, we look up the "codec" key in the struct field's tags, and fall back to the "json" key if "codec" is absent. The value of that tag key is the key name to use, followed by an optional comma and options.
To set an option on all fields (e.g. omitempty on all fields), you can create a field called _struct, and set flags on it. The options which can be set on _struct are:
More details on these below.
Struct values "usually" encode as maps. Each exported struct field is encoded unless:
When encoding as a map, the first string in the tag (before the comma) is the map key string to use when encoding.
...
This key is typically encoded as a string. However, there are instances where the encoded stream has mapping keys encoded as numbers. For example, some cbor streams have keys as integer codes in the stream, but they should map to fields in a structured object. Consequently, a struct is the natural representation in code. For these, configure the struct to encode/decode the keys as numbers (instead of string). This is done with the int, uint or float option on the _struct field (see above).
However, struct values may encode as arrays. This happens when:
Note that omitempty is ignored when encoding struct values as arrays, as an entry must be encoded for each field, to maintain its position.
Values with types that implement MapBySlice are encoded as stream maps.
The empty values (for omitempty option) are false, 0, any nil pointer or interface value, and any array, slice, map, or string of length zero.
Anonymous fields are encoded inline except:
Examples:
// NOTE: 'json:' can be used as struct tag key, in place of 'codec:' below.
type MyStruct struct {
	_struct bool    `codec:",omitempty"`   //set omitempty for every field
	Field1 string   `codec:"-"`            //skip this field
	Field2 int      `codec:"myName"`       //Use key "myName" in encode stream
	Field3 int32    `codec:",omitempty"`   //use key "Field3". Omit if empty.
	Field4 bool     `codec:"f4,omitempty"` //use key "f4". Omit if empty.
	io.Reader                              //use key "Reader".
	MyStruct        `codec:"my1"`          //use key "my1".
	MyStruct                               //inline it
	...
}

type MyStruct struct {
	_struct bool `codec:",toarray"` //encode struct as an array
}

type MyStruct struct {
	_struct bool  `codec:",uint"` //encode struct with "unsigned integer" keys
	Field1 string `codec:"1"`     //encode Field1 key using: EncodeInt(1)
	Field2 string `codec:"2"`     //encode Field2 key using: EncodeInt(2)
}
The mode of encoding is based on the type of the value. When a value is seen:
Note that struct field names and keys in map[string]XXX will be treated as symbols. Some formats support symbols (e.g. binc) and will properly encode the string only once in the stream, and use a tag to refer to it thereafter.
func (e *Encoder) HandleName() string
func (e *Encoder) MustEncode(v interface{})
MustEncode is like Encode, but panics if unable to Encode.
Note: This provides insight into the code location that triggered the error.
func (e *Encoder) Release()
Release is a no-op.
Deprecated: Pooled resources are not used with an Encoder. This method is kept for compatibility reasons only.
func (e *Encoder) Reset(w io.Writer)
Reset resets the Encoder with a new output stream.
This accommodates using the state of the Encoder, where it has "cached" information about sub-engines.
func (e *Encoder) ResetBytes(out *[]byte)
ResetBytes resets the Encoder with a new destination output []byte.
func (z *Encoder) WriteStr(s string)
Ext handles custom (de)serialization of custom types / extensions.
type Ext interface {
	BytesExt
	InterfaceExt
}
Handle defines a specific encoding format. It also stores any runtime state used during an Encoding or Decoding session e.g. stored state about Types, etc.
Once a handle is configured, it can be shared across multiple Encoders and Decoders.
Note that a Handle is NOT safe for concurrent modification.
A Handle also should not be modified after it is configured and has been used at least once. This is because stored state may be out of sync with the new configuration, and a data race can occur when multiple goroutines access it, i.e. via multiple Encoders or Decoders in different goroutines.
Consequently, the typical usage model is that a Handle is pre-configured before first time use, and not modified while in use. Such a pre-configured Handle is safe for concurrent access.
type Handle interface {
	Name() string
	// contains filtered or unexported methods
}
InterfaceExt handles custom (de)serialization of types to/from another interface{} value. The Encoder or Decoder will then handle the further (de)serialization of that known type.
It is used by codecs (e.g. cbor, json) which use the format to do custom serialization of types.
type InterfaceExt interface {
	// ConvertExt converts a value into a simpler interface for easy encoding
	// e.g. convert time.Time to int64.
	//
	// Note: v is a pointer iff the registered extension type is a struct or array kind.
	ConvertExt(v interface{}) interface{}

	// UpdateExt updates a value from a simpler interface for easy decoding
	// e.g. convert int64 to time.Time.
	//
	// Note: dst is always a pointer kind to the registered extension type.
	UpdateExt(dst interface{}, src interface{})
}
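A sketch of an InterfaceExt implementation for a hypothetical Celsius type, converting it to/from a plain number that json/cbor already handle natively (the tag value in the registration comment is arbitrary):

type Celsius float64 // hypothetical named type

type celsiusExt struct{}

// ConvertExt turns the value into a simpler type for encoding.
// v is not a pointer here, because Celsius is not a struct or array kind.
func (celsiusExt) ConvertExt(v interface{}) interface{} {
	return float64(v.(Celsius))
}

// UpdateExt fills the destination from the simpler decoded value.
// The concrete numeric type of src depends on the handle and the stream.
func (celsiusExt) UpdateExt(dst interface{}, src interface{}) {
	c := dst.(*Celsius)
	switch n := src.(type) {
	case float64:
		*c = Celsius(n)
	case int64:
		*c = Celsius(n)
	case uint64:
		*c = Celsius(n)
	}
}

// registration:
//   var jh codec.JsonHandle
//   err := jh.SetInterfaceExt(reflect.TypeOf(Celsius(0)), 2, celsiusExt{})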
JsonHandle is a handle for JSON encoding format.
Json is comprehensively supported:
It has better performance than the standard library's encoding/json package, by leveraging the performance improvements of the codec library.
In addition, it doesn't read more bytes than necessary during a decode, which allows reading multiple values from a stream containing json and non-json content. For example, a user can read a json value, then a cbor value, then a msgpack value, all from the same stream in sequence.
Note that, when decoding quoted strings, invalid UTF-8 or invalid UTF-16 surrogate pairs are not treated as an error. Instead, they are replaced by the Unicode replacement character U+FFFD.
Note also that the float values for NaN, +Inf or -Inf are encoded as null, as suggested by NOTE 4 of the ECMA-262 ECMAScript Language Specification 5.1 edition; see http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-262.pdf .
Note the following behaviour differences vs std-library encoding/json package:
type JsonHandle struct {
	BasicHandle

	// Indent indicates how a value is encoded.
	// - If positive, indent by that number of spaces.
	// - If negative, indent by that number of tabs.
	Indent int8

	// IntegerAsString controls how integers (signed and unsigned) are encoded.
	//
	// Per the JSON Spec, JSON numbers are 64-bit floating point numbers.
	// Consequently, integers > 2^53 cannot be represented as a JSON number without losing precision.
	// This can be mitigated by configuring how to encode integers.
	//
	// IntegerAsString interprets the following values:
	// - if 'L', then encode integers > 2^53 as a json string.
	// - if 'A', then encode all integers as a json string
	//   containing the exact integer representation as a decimal.
	// - else encode all integers as a json number (default)
	IntegerAsString byte

	// HTMLCharsAsIs controls how to encode some special characters to html: < > &
	//
	// By default, we encode them as \uXXXX
	// to prevent security holes when served from some browsers.
	HTMLCharsAsIs bool

	// PreferFloat says that we will default to decoding a number as a float.
	// If not set, we will examine the characters of the number and decode as an
	// integer type if it doesn't have any of the characters [.eE].
	PreferFloat bool

	// TermWhitespace says that we add a whitespace character
	// at the end of an encoding.
	//
	// The whitespace is important, especially if using numbers in a context
	// where multiple items are written to a stream.
	TermWhitespace bool

	// MapKeyAsString says to encode all map keys as strings.
	//
	// Use this to enforce strict json output.
	// The only caveat is that nil value is ALWAYS written as null (never as "null")
	MapKeyAsString bool

	// RawBytesExt, if configured, is used to encode and decode raw bytes in a custom way.
	// If not configured, raw bytes are encoded to/from base64 text.
	RawBytesExt InterfaceExt
	// contains filtered or unexported fields
}
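A sketch of a strict, human-readable configuration (values are illustrative):

var jh codec.JsonHandle
jh.Indent = 2            // indent with 2 spaces (a negative value would mean tabs)
jh.IntegerAsString = 'L' // quote integers > 2^53 so consumers do not lose precision
jh.MapKeyAsString = true // force all map keys to be encoded as strings
jh.HTMLCharsAsIs = true  // write < > & literally instead of as \uXXXX escapes

var buf bytes.Buffer
err := codec.NewEncoder(&buf, &jh).Encode(map[string]uint64{"big": 1 << 60})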
func (h *JsonHandle) Name() string
Name returns the name of the handle: json
func (h *JsonHandle) SetInterfaceExt(rt reflect.Type, tag uint64, ext InterfaceExt) (err error)
SetInterfaceExt sets an extension
func (x JsonHandle) TimeBuiltin() bool
TimeBuiltin returns whether time.Time OOTB support is used, based on the initial configuration of TimeNotBuiltin
MapBySlice is a tag interface that denotes the slice or array value should encode as a map in the stream, and can be decoded from a map in the stream.
The slice or array must contain a sequence of key-value pairs. The length of the slice or array must be even (fully divisible by 2).
This affords storing a map in a specific sequence in the stream.
Example usage:
type T1 []string // or []int or []Point or any other "slice" type

func (T1) MapBySlice() {} // T1 now implements MapBySlice, and will be encoded as a map

type T2 struct {
	KeyValues T1
}

var kvs = []string{"one", "1", "two", "2", "three", "3"}
var v2 = T2{KeyValues: T1(kvs)}
// v2 will be encoded like the map: {"KeyValues": {"one": "1", "two": "2", "three": "3"}}
The support of MapBySlice affords the following:
type MapBySlice interface {
	MapBySlice()
}
MissingFielder defines the interface allowing structs to internally decode or encode values which do not map to struct fields.
We expect that this interface is bound to a pointer type (so the mutation function works).
A use-case is if a version of a type unexports a field, but you want compatibility between both versions during encoding and decoding.
Note that the interface is completely ignored during codecgen.
type MissingFielder interface {
	// CodecMissingField is called to set a missing field and value pair.
	//
	// It returns true if the missing field was set on the struct.
	CodecMissingField(field []byte, value interface{}) bool

	// CodecMissingFields returns the set of fields which are not struct fields.
	//
	// Note that the returned map may be mutated by the caller.
	CodecMissingFields() map[string]interface{}
}
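A sketch of a MissingFielder implementation (type and field names are illustrative): unknown stream keys are kept in a side map so they round-trip through decode and encode.

type Record struct { // hypothetical type
	Name string

	extra map[string]interface{} // unexported: holds keys with no matching field
}

// CodecMissingField stores a key/value pair that did not match any struct field.
func (r *Record) CodecMissingField(field []byte, value interface{}) bool {
	if r.extra == nil {
		r.extra = make(map[string]interface{})
	}
	r.extra[string(field)] = value
	return true // the missing field was handled
}

// CodecMissingFields returns the extra pairs so they are written out during encode.
func (r *Record) CodecMissingFields() map[string]interface{} {
	return r.extra
}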
MsgpackHandle is a Handle for the Msgpack Schema-Free Encoding Format.
type MsgpackHandle struct {
	BasicHandle

	// NoFixedNum says to output all signed integers as 2-bytes, never as 1-byte fixednum.
	NoFixedNum bool

	// WriteExt controls whether the new spec is honored.
	//
	// With WriteExt=true, we can encode configured extensions with extension tags
	// and encode string/[]byte/extensions in a way compatible with the new spec
	// but incompatible with the old spec.
	//
	// For compatibility with the old spec, set WriteExt=false.
	//
	// With WriteExt=false:
	//   configured extensions are serialized as raw bytes (not msgpack extensions).
	//   reserved byte descriptors like Str8 and those enabling the new msgpack Binary type
	//   are not encoded.
	WriteExt bool

	// PositiveIntUnsigned says to encode positive integers as unsigned.
	PositiveIntUnsigned bool
	// contains filtered or unexported fields
}
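A minimal configuration sketch (values are illustrative; set WriteExt=false when talking to old-spec peers):

var mh codec.MsgpackHandle
mh.WriteExt = true            // honor the new spec: str8/bin types and real extensions
mh.PositiveIntUnsigned = true // encode positive integers as unsigned
mh.RawToString = true         // promoted from DecodeOptions: decode raw bytes into string for nil interface{}

var out []byte
err := codec.NewEncoderBytes(&out, &mh).Encode(map[string]interface{}{"n": 1})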
func (h *MsgpackHandle) Name() string
Name returns the name of the handle: msgpack
func (h *MsgpackHandle) SetBytesExt(rt reflect.Type, tag uint64, ext BytesExt) (err error)
SetBytesExt sets an extension
func (x MsgpackHandle) TimeBuiltin() bool
TimeBuiltin returns whether time.Time OOTB support is used, based on the initial configuration of TimeNotBuiltin
MsgpackSpecRpcMultiArgs is a special type which signifies to the MsgpackSpecRpcCodec that the backend RPC service takes multiple arguments, which have been arranged in sequence in the slice.
The Codec then passes it AS-IS to the rpc service (without wrapping it in an array of 1 element).
type MsgpackSpecRpcMultiArgs []interface{}
RPCOptions holds options specific to rpc functionality
type RPCOptions struct {
	// RPCNoBuffer configures whether we attempt to buffer reads and writes during RPC calls.
	//
	// Set RPCNoBuffer=true to turn buffering off.
	// Buffering can still be done if buffered connections are passed in, or
	// buffering is configured on the handle.
	RPCNoBuffer bool
}
Raw represents raw formatted bytes. We "blindly" store it during encode and retrieve the raw bytes during decode. Note: it is dangerous during encode, so we may gate the behaviour behind an Encode flag which must be explicitly set.
type Raw []byte
RawExt represents raw unprocessed extension data. Some codecs will decode extension data as a *RawExt if there is no registered extension for the tag.
Only one of Data or Value is nil. If Data is nil, then the content of the RawExt is in the Value.
type RawExt struct {
	Tag uint64

	// Data is the []byte which represents the raw ext. If nil, ext is exposed in Value.
	// Data is used by codecs (e.g. binc, msgpack, simple) which do custom serialization of types.
	Data []byte

	// Value represents the extension, if Data is nil.
	// Value is used by codecs (e.g. cbor, json) which leverage the format to do
	// custom serialization of the types.
	Value interface{}
}
Rpc provides an rpc Server or Client Codec for rpc communication.
type Rpc interface {
	ServerCodec(conn io.ReadWriteCloser, h Handle) rpc.ServerCodec
	ClientCodec(conn io.ReadWriteCloser, h Handle) rpc.ClientCodec
}
Selfer defines methods by which a value can encode or decode itself.
Any type which implements Selfer will be able to encode or decode itself. Consequently, during (en|de)code, this takes precedence over (text|binary)(M|Unm)arshal or extension support.
By definition, it is not allowed for a Selfer to directly call Encode or Decode on itself. If that is done, Encode/Decode will rightfully fail with a Stack Overflow style error. For example, the snippet below will cause such an error.
type testSelferRecur struct{}

func (s *testSelferRecur) CodecEncodeSelf(e *Encoder) {
	e.MustEncode(s)
}

func (s *testSelferRecur) CodecDecodeSelf(d *Decoder) {
	d.MustDecode(s)
}
Note: *the first set of bytes of any value MUST NOT represent nil in the format*. This is because, during each decode, we first check whether the next set of bytes represents nil, and if so, we just set the value to nil.
type Selfer interface {
	CodecEncodeSelf(*Encoder)
	CodecDecodeSelf(*Decoder)
}
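For contrast, a sketch of a valid Selfer for a hypothetical Point type: it encodes/decodes its members (here via a temporary array), never the receiver itself.

type Point struct { // hypothetical type
	X, Y int
}

func (p *Point) CodecEncodeSelf(e *Encoder) {
	e.MustEncode([2]int{p.X, p.Y}) // any representation works, as long as decode mirrors it
}

func (p *Point) CodecDecodeSelf(d *Decoder) {
	var a [2]int
	d.MustDecode(&a)
	p.X, p.Y = a[0], a[1]
}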
SimpleHandle is a Handle for a very simple encoding format.
simple is a simplistic codec similar to binc, but not as compact.
The full spec will be published soon.
type SimpleHandle struct {
	BasicHandle

	// EncZeroValuesAsNil says to encode zero values for numbers, bool, string, etc as nil.
	EncZeroValuesAsNil bool
	// contains filtered or unexported fields
}
func (h *SimpleHandle) Name() string
Name returns the name of the handle: simple
func (h *SimpleHandle) SetBytesExt(rt reflect.Type, tag uint64, ext BytesExt) (err error)
SetBytesExt sets an extension
func (x SimpleHandle) TimeBuiltin() bool
TimeBuiltin returns whether time.Time OOTB support is used, based on the initial configuration of TimeNotBuiltin
TypeInfos caches typeInfo for each type on first inspection.
It is configured with a set of tag keys, which are used to get configuration for the type.
type TypeInfos struct {
// contains filtered or unexported fields
}
func NewTypeInfos(tags []string) *TypeInfos
NewTypeInfos creates a TypeInfos given a set of struct tag keys.
This allows users to customize the struct tag keys which contain configuration for their types.
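A sketch of using a custom tag key (the "db" key and the types below are illustrative):

var jh codec.JsonHandle
jh.TypeInfos = codec.NewTypeInfos([]string{"db", "codec", "json"}) // tag keys to consult

type User struct { // hypothetical type
	Name string `db:"name"`
	Age  int    `db:"age,omitempty"`
}

var buf bytes.Buffer
err := codec.NewEncoder(&buf, &jh).Encode(User{Name: "ada"})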