vendor: Update golang.org/cznic/...

Jakob Borg 2016-09-13 21:49:35 +02:00
parent 9bf6917ae8
commit 58cbd19742
31 changed files with 1104 additions and 8369 deletions

vendor/github.com/cznic/b/doc.go (generated, vendored; 44 changed lines)

@@ -6,10 +6,30 @@
 //
 // Changelog
 //
+// 2016-07-16: Update benchmark results to newer Go version. Add a note on
+// concurrency.
+//
 // 2014-06-26: Lower GC pressure by recycling things.
 //
 // 2014-04-18: Added new method Put.
 //
+// Concurrency considerations
+//
+// Tree.{Clear,Delete,Put,Set} mutate the tree. One can use eg. a
+// sync.Mutex.Lock/Unlock (or sync.RWMutex.Lock/Unlock) to wrap those calls if
+// they are to be invoked concurrently.
+//
+// Tree.{First,Get,Last,Len,Seek,SeekFirst,SeekLast} read but do not mutate the
+// tree. One can use eg. a sync.RWMutex.RLock/RUnlock to wrap those calls if
+// they are to be invoked concurrently with any of the tree mutating methods.
+//
+// Enumerator.{Next,Prev} mutate the enumerator and read but do not mutate the
+// tree. One can use eg. a sync.RWMutex.RLock/RUnlock to wrap those calls if
+// they are to be invoked concurrently with any of the tree mutating methods. A
+// separate mutex for the enumerator, or for the whole tree in a simplified
+// variant, is necessary if the enumerator's Next/Prev methods per se are to
+// be invoked concurrently.
+//
 // Generic types
 //
 // Keys and their associated values are interface{} typed, similar to all of
@@ -34,20 +54,20 @@
 // No other changes to int.go are necessary, it compiles just fine.
 //
 // Running the benchmarks for 1000 keys on a machine with Intel i5-4670 CPU @
-// 3.4GHz, Go release 1.4.2.
+// 3.4GHz, Go 1.7rc1.
 //
 // $ go test -bench 1e3 example/all_test.go example/int.go
-// BenchmarkSetSeq1e3      10000    151620 ns/op
-// BenchmarkGetSeq1e3      10000    115354 ns/op
-// BenchmarkSetRnd1e3       5000    255865 ns/op
-// BenchmarkGetRnd1e3      10000    140466 ns/op
-// BenchmarkDelSeq1e3      10000    143860 ns/op
-// BenchmarkDelRnd1e3      10000    188228 ns/op
-// BenchmarkSeekSeq1e3     10000    156448 ns/op
-// BenchmarkSeekRnd1e3     10000    190587 ns/op
-// BenchmarkNext1e3       200000      9407 ns/op
-// BenchmarkPrev1e3       200000      9306 ns/op
+// BenchmarkSetSeq1e3-4      20000     78265 ns/op
+// BenchmarkGetSeq1e3-4      20000     67980 ns/op
+// BenchmarkSetRnd1e3-4      10000    172720 ns/op
+// BenchmarkGetRnd1e3-4      20000     89539 ns/op
+// BenchmarkDelSeq1e3-4      20000     87863 ns/op
+// BenchmarkDelRnd1e3-4      10000    130891 ns/op
+// BenchmarkSeekSeq1e3-4     10000    100118 ns/op
+// BenchmarkSeekRnd1e3-4     10000    121684 ns/op
+// BenchmarkNext1e3-4       200000      6330 ns/op
+// BenchmarkPrev1e3-4       200000      9066 ns/op
 // PASS
-// ok	command-line-arguments	26.369s
+// ok	command-line-arguments	42.531s
 // $
 package b
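As a concrete sketch of the locking advice in the new concurrency section (an editor's illustration, not part of the vendored file; the int comparator and the key/value pair are assumptions, with "sync" and "github.com/cznic/b" imported):

	var mu sync.RWMutex
	t := b.TreeNew(func(a, b interface{}) int { return a.(int) - b.(int) })

	mu.Lock() // Clear, Delete, Put and Set mutate the tree
	t.Set(42, "answer")
	mu.Unlock()

	mu.RLock() // First, Get, Last, Len, Seek, SeekFirst and SeekLast only read
	v, ok := t.Get(42)
	mu.RUnlock()
	_, _ = v, ok

An Enumerator returned by Seek would need the same read lock held around each Next/Prev call, per the note above.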


@@ -500,7 +500,7 @@ http://en.wikipedia.org/wiki/Miller-Rabin_primality_test#Algorithm_and_running_t
 	return composite
 	return probably prime
-... this function behaves like passing 1 for 'k' and additionaly a
+... this function behaves like passing 1 for 'k' and additionally a
 fixed/non-random 'a'. Otherwise it's the same algorithm.
 See also: http://mathworld.wolfram.com/Rabin-MillerStrongPseudoprimeTest.html

@@ -221,7 +221,7 @@ record handle} and the B+Tree value is not used.
 +------+-----------------+  +--------------+
 If the indexed values are not all NULL then the keys of the B+Tree are the indexed
 values and the B+Tree value is the record handle.

 B+Tree key          B+Tree value
 +----------------+  +---------------+
@@ -262,7 +262,7 @@ out are stripped off and "resupplied" on decoding transparently. See also
 blob.go. If the length of the resulting slice is <= shortBlob, the first and
 only chunk is the scalar encoding of

 	[]interface{}{typeTag, slice}. // initial (and last) chunk

 The length of slice can be zero (for blob("")). If the resulting slice is long
@@ -285,9 +285,9 @@ Links
 Referenced from above:
-[0]: http://godoc.org/github.com/cznic/exp/lldb#hdr-Block_handles
-[1]: http://godoc.org/github.com/cznic/exp/lldb#EncodeScalars
-[2]: http://godoc.org/github.com/cznic/exp/lldb#BTree
+[0]: http://godoc.org/github.com/cznic/lldb#hdr-Block_handles
+[1]: http://godoc.org/github.com/cznic/lldb#EncodeScalars
+[2]: http://godoc.org/github.com/cznic/lldb#BTree
 Rationale

vendor/github.com/cznic/ql/doc.go (generated, vendored; 20 changed lines)

@@ -14,6 +14,20 @@
 //
 // Change list
 //
+// 2016-07-29: Release v1.0.6 enables alternatively using = instead of == for
+// equality operation.
+//
+// https://github.com/cznic/ql/issues/131
+//
+// 2016-07-11: Release v1.0.5 undoes vendoring of lldb. QL now uses stable lldb
+// (github.com/cznic/lldb).
+//
+// https://github.com/cznic/ql/issues/128
+//
+// 2016-07-06: Release v1.0.4 fixes a panic when closing the WAL file.
+//
+// https://github.com/cznic/ql/pull/127
+//
 // 2016-04-03: Release v1.0.3 fixes a data race.
 //
 // https://github.com/cznic/ql/issues/126
@@ -299,7 +313,7 @@
 // andnot = "&^" .
 // lsh = "<<" .
 // le = "<=" .
-// eq = "==" .
+// eq = "==" | "=" .
 // ge = ">=" .
 // neq = "!=" .
 // oror = "||" .
@@ -800,7 +814,7 @@
 //
 // expr1 LIKE expr2
 //
-// yeild a boolean value true if expr2, a regular expression, matches expr1
+// yield a boolean value true if expr2, a regular expression, matches expr1
 // (see also [6]). Both expressions must be of type string. If any one of the
 // expressions is NULL the result is NULL.
 //
@@ -887,7 +901,7 @@
 //
 // expr IS NOT NULL // case B
 //
-// yeild a boolean value true if expr does not have a specific type (case A) or
+// yield a boolean value true if expr does not have a specific type (case A) or
 // if expr has a specific type (case B). In other cases the result is a boolean
 // value false.
 //
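A hedged usage sketch of the v1.0.6 equality change (editor's illustration; ql.OpenMem, ql.NewRWCtx and DB.Run are the package's public API, while the table, columns and data are made up):

	db, err := ql.OpenMem()
	if err != nil {
		panic(err)
	}
	ctx := ql.NewRWCtx()
	if _, _, err = db.Run(ctx, `
		BEGIN TRANSACTION;
			CREATE TABLE t (name string, n int);
			INSERT INTO t VALUES ("foo", 1);
		COMMIT;
	`); err != nil {
		panic(err)
	}
	// Per the eq production above, "=" and "==" now parse identically.
	rs, _, err := db.Run(nil, `SELECT name FROM t WHERE n = 1;`) // same as n == 1
	_, _ = rs, err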

vendor/github.com/cznic/ql/etc.go (generated, vendored; 33 changed lines)

@@ -10,7 +10,6 @@ import (
 	"io"
 	"math"
 	"math/big"
-	"strings"
 	"time"
 )
@@ -2764,38 +2763,6 @@ var isSystemName = map[string]bool{
 	"__Table": true,
 }
 
-func qualifier(s string) string {
-	if pos := strings.IndexByte(s, '.'); pos >= 0 {
-		s = s[:pos]
-	}
-	return s
-}
-
-func mustQualifier(s string) string {
-	q := qualifier(s)
-	if q == s {
-		panic("internal error 068")
-	}
-	return q
-}
-
-func selector(s string) string {
-	if pos := strings.IndexByte(s, '.'); pos >= 0 {
-		s = s[pos+1:]
-	}
-	return s
-}
-
-func mustSelector(s string) string {
-	q := selector(s)
-	if q == s {
-		panic("internal error 053")
-	}
-	return q
-}
-
 func qnames(l []string) []string {
 	r := make([]string, len(l))
 	for i, v := range l {

vendor/github.com/cznic/ql/expr.go (generated, vendored; 25 changed lines)

@@ -135,12 +135,6 @@ func mentionedColumns(e expression) map[string]struct{} {
 	return m
 }
 
-func mentionedQColumns(e expression) map[string]struct{} {
-	m := map[string]struct{}{}
-	mentionedColumns0(e, true, false, m)
-	return m
-}
-
 func staticExpr(e expression) (expression, error) {
 	if e.isStatic() {
 		v, err := e.eval(nil, nil)
@@ -166,11 +160,6 @@ type (
 	idealUint uint64
 )
 
-type exprTab struct {
-	expr  expression
-	table string
-}
-
 type pexpr struct {
 	expr expression
 }
@@ -3397,20 +3386,6 @@ func (u *unaryOperation) String() string {
 	}
 }
 
-// !ident
-func (u *unaryOperation) isNotQIdent() (bool, string, expression) {
-	if u.op != '!' {
-		return false, "", nil
-	}
-
-	id, ok := u.v.(*ident)
-	if ok && id.isQualified() {
-		return true, mustQualifier(id.s), &unaryOperation{'!', &ident{mustSelector(id.s)}}
-	}
-
-	return false, "", nil
-}
-
 func (u *unaryOperation) eval(execCtx *execCtx, ctx map[interface{}]interface{}) (r interface{}, err error) {
 	defer func() {
 		if e := recover(); e != nil {

vendor/github.com/cznic/ql/file.go (generated, vendored; 4 changed lines)

@@ -19,9 +19,9 @@ import (
 	"sync"
 	"time"
 
+	"github.com/cznic/lldb"
 	"github.com/cznic/mathutil"
 	"github.com/cznic/ql/vendored/github.com/camlistore/go4/lock"
-	"github.com/cznic/ql/vendored/github.com/cznic/exp/lldb"
 )
const ( const (
@@ -409,7 +409,7 @@ func newFileFromOSFile(f lldb.OSFile) (fi *file, err error) {
 		w, err = os.OpenFile(wn, os.O_CREATE|os.O_EXCL|os.O_RDWR, 0666)
 		closew = true
 		defer func() {
-			if closew {
+			if w != nil && closew {
 				nm := w.Name()
 				w.Close()
 				os.Remove(nm)


@@ -42,7 +42,6 @@ type HTTPFile struct {
 	isFile bool
 	name   string
 	off    int
-	sz     int
 }
// Close implements http.File. // Close implements http.File.
@@ -212,7 +211,7 @@ func (db *DB) NewHTTPFS(query string) (*HTTPFS, error) {
 // The elements in a file path are separated by slash ('/', U+002F) characters,
 // regardless of host operating system convention.
 func (f *HTTPFS) Open(name string) (http.File, error) {
-	if filepath.Separator != '/' && strings.IndexRune(name, filepath.Separator) >= 0 ||
+	if filepath.Separator != '/' && strings.Contains(name, string(filepath.Separator)) ||
 		strings.Contains(name, "\x00") {
 		return nil, fmt.Errorf("invalid character in file path: %q", name)
 	}
@@ -264,7 +263,7 @@ func (f *HTTPFS) Open(name string) (http.File, error) {
 		n++
 		switch name := data[0].(type) {
 		case string:
-			if filepath.Separator != '/' && strings.IndexRune(name, filepath.Separator) >= 0 ||
+			if filepath.Separator != '/' && strings.Contains(name, string(filepath.Separator)) ||
 				strings.Contains(name, "\x00") {
 				return false, fmt.Errorf("invalid character in file path: %q", name)
 			}
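For orientation, a hedged sketch of mounting this file system over HTTP (editor's illustration; db is assumed to be an open *ql.DB, the query and table are made up, and "net/http" is imported; NewHTTPFS and Open are the methods shown above):

	fs, err := db.NewHTTPFS(`SELECT name, content FROM files`)
	if err != nil {
		panic(err)
	}
	// *HTTPFS satisfies http.FileSystem through the Open method above.
	http.Handle("/", http.FileServer(fs))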

vendor/github.com/cznic/ql/parser.go (generated, vendored; 2021 changed lines): diff suppressed because it is too large

vendor/github.com/cznic/ql/ql.go (generated, vendored; 22 changed lines)

@@ -144,8 +144,8 @@ func (l List) String() string {
 	return b.String()
 }
 
-// IsExplainStmt reports whether l is a single EXPLAIN statment or a single EXPLAIN
-// statment enclosed in a transaction.
+// IsExplainStmt reports whether l is a single EXPLAIN statement or a single EXPLAIN
+// statement enclosed in a transaction.
 func (l List) IsExplainStmt() bool {
 	switch len(l.l) {
 	case 1:
@@ -209,10 +209,10 @@ type TCtx struct {
 
 // NewRWCtx returns a new read/write transaction context. NewRWCtx is safe for
 // concurrent use by multiple goroutines, every one of them will get a new,
-// unique conext.
+// unique context.
 func NewRWCtx() *TCtx { return &TCtx{} }
 
-// Recordset is a result of a select statment. It can call a user function for
+// Recordset is a result of a select statement. It can call a user function for
 // every row (record) in the set using the Do method.
 //
 // Recordsets can be safely reused. Evaluation of the rows is performed lazily.
@@ -672,16 +672,6 @@ func (r tableRset) plan(ctx *execCtx) (plan, error) {
 	return rs, nil
 }
 
-func findFldIndex(fields []*fld, name string) int {
-	for i, f := range fields {
-		if f.name == name {
-			return i
-		}
-	}
-
-	return -1
-}
-
 func findFld(fields []*fld, name string) (f *fld) {
 	for _, f = range fields {
 		if f.name == name {
@@ -1276,7 +1266,7 @@ func (db *DB) run1(pc *TCtx, s stmt, arg ...interface{}) (rs Recordset, tnla, tn
 	}
 
 	if pc != db.cc {
-		for db.rw == true {
+		for db.rw {
 			db.mu.Unlock() // Transaction isolation
 			db.mu.Lock()
 		}
@@ -1501,7 +1491,7 @@ type IndexInfo struct {
 	Name           string   // Index name
 	Table          string   // Table name.
 	Column         string   // Column name.
-	Unique         bool     // Wheter the index is unique.
+	Unique         bool     // Whether the index is unique.
 	ExpressionList []string // Index expression list.
 }

vendor/github.com/cznic/ql/stmt.go (generated, vendored; 2 changed lines)

@@ -8,7 +8,6 @@ import (
 	"bytes"
 	"fmt"
 	"strings"
-	"sync"
 
 	"github.com/cznic/strutil"
 )
@@ -716,7 +715,6 @@ type selectStmt struct {
 	group         *groupByRset
 	hasAggregates bool
 	limit         *limitRset
-	mu            sync.Mutex
 	offset        *offsetRset
 	order         *orderByRset
 	where         *whereRset


@@ -137,8 +137,7 @@ type table struct {
 	defaults []expression
 }
 
 func (t *table) hasIndices() bool { return len(t.indices) != 0 || len(t.indices2) != 0 }
-func (t *table) hasIndices2() bool { return len(t.indices2) != 0 }
 
 func (t *table) constraintsAndDefaults(ctx *execCtx) error {
 	if isSystemName[t.name] {
@@ -747,14 +746,6 @@ func (t *table) addRecord(execCtx *execCtx, r []interface{}) (id int64, err erro
 	return
 }
 
-func (t *table) flds() (r []*fld) {
-	r = make([]*fld, len(t.cols))
-	for i, v := range t.cols {
-		r[i] = &fld{expr: &ident{v.name}, name: v.name}
-	}
-	return
-}
-
 func (t *table) fieldNames() []string {
 	r := make([]string, len(t.cols))
 	for i, v := range t.cols {
@@ -802,10 +793,10 @@ type root struct {
 	head         int64 // Single linked table list
 	lastInsertID int64
 	parent       *root
-	rowsAffected int64 //LATER implement
+	//rowsAffected int64 //LATER implement
 	store        storage
 	tables       map[string]*table
 	thead        *table
 }
 
 func newRoot(store storage) (r *root, err error) {


@@ -1,324 +0,0 @@
// Copyright 2014 The lldb Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Two Phase Commit & Structural ACID
package lldb
import (
"bufio"
"encoding/binary"
"fmt"
"io"
"os"
"github.com/cznic/fileutil"
"github.com/cznic/mathutil"
)
var _ Filer = &ACIDFiler0{} // Ensure ACIDFiler0 is a Filer
type acidWrite struct {
b []byte
off int64
}
type acidWriter0 ACIDFiler0
func (a *acidWriter0) WriteAt(b []byte, off int64) (n int, err error) {
f := (*ACIDFiler0)(a)
if f.bwal == nil { // new epoch
f.data = f.data[:0]
f.bwal = bufio.NewWriter(f.wal)
if err = a.writePacket([]interface{}{wpt00Header, walTypeACIDFiler0, ""}); err != nil {
return
}
}
if err = a.writePacket([]interface{}{wpt00WriteData, b, off}); err != nil {
return
}
f.data = append(f.data, acidWrite{b, off})
return len(b), nil
}
func (a *acidWriter0) writePacket(items []interface{}) (err error) {
f := (*ACIDFiler0)(a)
b, err := EncodeScalars(items...)
if err != nil {
return
}
var b4 [4]byte
binary.BigEndian.PutUint32(b4[:], uint32(len(b)))
if _, err = f.bwal.Write(b4[:]); err != nil {
return
}
if _, err = f.bwal.Write(b); err != nil {
return
}
if m := (4 + len(b)) % 16; m != 0 {
var pad [15]byte
_, err = f.bwal.Write(pad[:16-m])
}
return
}
// WAL Packet Tags
const (
wpt00Header = iota
wpt00WriteData
wpt00Checkpoint
)
const (
walTypeACIDFiler0 = iota
)
// ACIDFiler0 is a very simple, synchronous implementation of 2PC. It uses a
// single write ahead log file to provide the structural atomicity
// (BeginUpdate/EndUpdate/Rollback) and durability (DB can be recovered from
// WAL if a crash occurred).
//
// ACIDFiler0 is a Filer.
//
// NOTE: Durable synchronous 2PC involves three fsyncs in this implementation
// (WAL, DB, zero truncated WAL). Where possible, it's recommended to collect
// transactions for, say, one second before performing the two phase commit as
// the typical performance for rotational hard disks is about a few tens of
// fsyncs per second at most. For an example of such a collective transaction
// approach please see the collecting FSM STT in Dbm's documentation[1].
//
// [1]: http://godoc.org/github.com/cznic/exp/dbm
type ACIDFiler0 struct {
*RollbackFiler
wal *os.File
bwal *bufio.Writer
data []acidWrite
testHook bool // keeps WAL untruncated (once)
peakWal int64 // tracks WAL maximum used size
peakBitFilerPages int // track maximum transaction memory
}
// NewACIDFiler returns a newly created ACIDFiler0 with WAL in wal.
//
// If the WAL is zero sized then a previous clean shutdown of db is taken for
// granted and no recovery procedure is performed.
//
// If the WAL is of non-zero size then it is checked for a committed/fully
// finished transaction that has not yet been reflected in db. If such a
// transaction exists it's committed to db. If the recovery process finishes
// successfully, the WAL is truncated to zero size and fsync'ed prior to return
// from NewACIDFiler.
func NewACIDFiler(db Filer, wal *os.File) (r *ACIDFiler0, err error) {
fi, err := wal.Stat()
if err != nil {
return
}
r = &ACIDFiler0{wal: wal}
if fi.Size() != 0 {
if err = r.recoverDb(db); err != nil {
return
}
}
acidWriter := (*acidWriter0)(r)
if r.RollbackFiler, err = NewRollbackFiler(
db,
func(sz int64) (err error) {
// Checkpoint
if err = acidWriter.writePacket([]interface{}{wpt00Checkpoint, sz}); err != nil {
return
}
if err = r.bwal.Flush(); err != nil {
return
}
r.bwal = nil
if err = r.wal.Sync(); err != nil {
return
}
wfi, err := r.wal.Stat()
switch err != nil {
case true:
// unexpected, but ignored
case false:
r.peakWal = mathutil.MaxInt64(wfi.Size(), r.peakWal)
}
// Phase 1 commit complete
for _, v := range r.data {
if _, err := db.WriteAt(v.b, v.off); err != nil {
return err
}
}
if err = db.Truncate(sz); err != nil {
return
}
if err = db.Sync(); err != nil {
return
}
// Phase 2 commit complete
if !r.testHook {
if err = r.wal.Truncate(0); err != nil {
return
}
if _, err = r.wal.Seek(0, 0); err != nil {
return
}
}
r.testHook = false
return r.wal.Sync()
},
acidWriter,
); err != nil {
return
}
return r, nil
}
// PeakWALSize reports the maximum size WAL has ever used.
func (a ACIDFiler0) PeakWALSize() int64 {
return a.peakWal
}
func (a *ACIDFiler0) readPacket(f *bufio.Reader) (items []interface{}, err error) {
var b4 [4]byte
n, err := io.ReadAtLeast(f, b4[:], 4)
if n != 4 {
return
}
ln := int(binary.BigEndian.Uint32(b4[:]))
m := (4 + ln) % 16
padd := (16 - m) % 16
b := make([]byte, ln+padd)
if n, err = io.ReadAtLeast(f, b, len(b)); n != len(b) {
return
}
return DecodeScalars(b[:ln])
}
func (a *ACIDFiler0) recoverDb(db Filer) (err error) {
fi, err := a.wal.Stat()
if err != nil {
return &ErrILSEQ{Type: ErrInvalidWAL, Name: a.wal.Name(), More: err}
}
if sz := fi.Size(); sz%16 != 0 {
return &ErrILSEQ{Type: ErrFileSize, Name: a.wal.Name(), Arg: sz}
}
f := bufio.NewReader(a.wal)
items, err := a.readPacket(f)
if err != nil {
return
}
if len(items) != 3 || items[0] != int64(wpt00Header) || items[1] != int64(walTypeACIDFiler0) {
return &ErrILSEQ{Type: ErrInvalidWAL, Name: a.wal.Name(), More: fmt.Sprintf("invalid packet items %#v", items)}
}
tr := NewBTree(nil)
for {
items, err = a.readPacket(f)
if err != nil {
return
}
if len(items) < 2 {
return &ErrILSEQ{Type: ErrInvalidWAL, Name: a.wal.Name(), More: fmt.Sprintf("too few packet items %#v", items)}
}
switch items[0] {
case int64(wpt00WriteData):
if len(items) != 3 {
return &ErrILSEQ{Type: ErrInvalidWAL, Name: a.wal.Name(), More: fmt.Sprintf("invalid data packet items %#v", items)}
}
b, off := items[1].([]byte), items[2].(int64)
var key [8]byte
binary.BigEndian.PutUint64(key[:], uint64(off))
if err = tr.Set(key[:], b); err != nil {
return
}
case int64(wpt00Checkpoint):
var b1 [1]byte
if n, err := f.Read(b1[:]); n != 0 || err == nil {
return &ErrILSEQ{Type: ErrInvalidWAL, Name: a.wal.Name(), More: fmt.Sprintf("checkpoint n %d, err %v", n, err)}
}
if len(items) != 2 {
return &ErrILSEQ{Type: ErrInvalidWAL, Name: a.wal.Name(), More: fmt.Sprintf("checkpoint packet invalid items %#v", items)}
}
sz := items[1].(int64)
enum, err := tr.seekFirst()
if err != nil {
return err
}
for {
k, v, err := enum.current()
if err != nil {
if fileutil.IsEOF(err) {
break
}
return err
}
if _, err = db.WriteAt(v, int64(binary.BigEndian.Uint64(k))); err != nil {
return err
}
if err = enum.next(); err != nil {
if fileutil.IsEOF(err) {
break
}
return err
}
}
if err = db.Truncate(sz); err != nil {
return err
}
if err = db.Sync(); err != nil {
return err
}
// Recovery complete
if err = a.wal.Truncate(0); err != nil {
return err
}
return a.wal.Sync()
default:
return &ErrILSEQ{Type: ErrInvalidWAL, Name: a.wal.Name(), More: fmt.Sprintf("packet tag %v", items[0])}
}
}
}
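To put the pieces above in context, a hedged usage sketch (editor's illustration; the WAL file name is made up and MemFiler is the in-memory Filer from this package's memfiler.go):

	wal, err := os.OpenFile("example.wal", os.O_CREATE|os.O_RDWR, 0666)
	if err != nil {
		panic(err)
	}
	// If example.wal is non-empty, recoverDb replays any committed
	// transaction into the database before NewACIDFiler returns.
	filer, err := NewACIDFiler(NewMemFiler(), wal)
	if err != nil {
		panic(err)
	}
	if err = filer.BeginUpdate(); err != nil {
		panic(err)
	}
	if _, err = filer.WriteAt([]byte("hello"), 0); err != nil {
		panic(err)
	}
	// Phase 1 fsyncs the WAL; phase 2 applies it to the DB and truncates it.
	if err = filer.EndUpdate(); err != nil {
		panic(err)
	}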


@@ -1,44 +0,0 @@
// Copyright 2014 The lldb Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
/*
Anatomy of a WAL file
WAL file
A sequence of packets
WAL packet, parts in slice notation
[0:4], 4 bytes: N uint32 // network byte order
[4:4+N], N bytes: payload []byte // gb encoded scalars
Packets, including the 4 byte 'size' prefix, MUST BE padded to size == 0 (mod
16). The values of the padding bytes MUST BE zero.
Encoded scalars first item is a packet type number (packet tag). The meaning of
any other item(s) of the payload depends on the packet tag.
Packet definitions
{wpt00Header int, typ int, s string}
typ: Must be zero (ACIDFiler0 file).
s: Any comment string, empty string is okay.
This packet must be present only once - as the first packet of
a WAL file.
{wpt00WriteData int, b []byte, off int64}
Write data (WriteAt(b, off)).
{wpt00Checkpoint int, sz int64}
Checkpoint (Truncate(sz)).
This packet must be present only once - as the last packet of
a WAL file.
*/
package lldb
//TODO optimize bitfiler/wal/2pc data above final size
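A worked example of the padding rule above (editor's illustration): a packet whose encoded payload is N = 25 bytes is written as

	4 + 25 = 29 bytes          // size prefix plus payload
	29 % 16 = 13               // not yet 0 (mod 16)
	16 - 13 = 3 zero bytes     // padding appended by writePacket
	29 + 3 = 32 bytes total    // 32 % 16 == 0, as required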

File diff suppressed because it is too large


@@ -1,170 +0,0 @@
// Copyright 2014 The lldb Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Some errors returned by this package.
//
// Note that this package can return more errors than declared here, for
// example io.EOF from Filer.ReadAt().
package lldb
import (
"fmt"
)
// ErrDecodeScalars is possibly returned from DecodeScalars
type ErrDecodeScalars struct {
B []byte // Data being decoded
I int // offending offset
}
// Error implements the built in error type.
func (e *ErrDecodeScalars) Error() string {
return fmt.Sprintf("DecodeScalars: corrupted data @ %d/%d", e.I, len(e.B))
}
// ErrINVAL reports invalid values passed as parameters, for example negative
// offsets where only non-negative ones are allowed or read from the DB.
type ErrINVAL struct {
Src string
Val interface{}
}
// Error implements the built in error type.
func (e *ErrINVAL) Error() string {
return fmt.Sprintf("%s: %+v", e.Src, e.Val)
}
// ErrPERM is for example reported when a Filer is closed while BeginUpdate(s)
// are not balanced with EndUpdate(s)/Rollback(s) or when EndUpdate or Rollback
// is invoked which is not paired with a BeginUpdate.
type ErrPERM struct {
Src string
}
// Error implements the built in error type.
func (e *ErrPERM) Error() string {
return fmt.Sprintf("%s: Operation not permitted", string(e.Src))
}
// ErrType represents an ErrILSEQ kind.
type ErrType int
// ErrILSEQ types
const (
ErrOther ErrType = iota
ErrAdjacentFree // Adjacent free blocks (.Off and .Arg)
ErrDecompress // Used compressed block: corrupted compression
ErrExpFreeTag // Expected a free block tag, got .Arg
ErrExpUsedTag // Expected a used block tag, got .Arg
ErrFLT // Free block is invalid or referenced multiple times
ErrFLTLoad // FLT truncated to .Off, need size >= .Arg
ErrFLTSize // Free block size (.Arg) doesn't belong to its list min size: .Arg2
ErrFileSize // File .Name size (.Arg) != 0 (mod 16)
ErrFreeChaining // Free block, .prev.next doesn't point back to this block
ErrFreeTailBlock // Last block is free
ErrHead // Head of a free block list has non zero Prev (.Arg)
ErrInvalidRelocTarget // Reloc doesn't target (.Arg) a short or long used block
ErrInvalidWAL // Corrupted write ahead log. .Name: file name, .More: more
ErrLongFreeBlkTooLong // Long free block spans beyond EOF, size .Arg
ErrLongFreeBlkTooShort // Long free block must have at least 2 atoms, got only .Arg
ErrLongFreeNextBeyondEOF // Long free block .Next (.Arg) spans beyond EOF
ErrLongFreePrevBeyondEOF // Long free block .Prev (.Arg) spans beyond EOF
ErrLongFreeTailTag // Expected a long free block tail tag, got .Arg
ErrLostFreeBlock // Free block is not in any FLT list
ErrNullReloc // Used reloc block with nil target
ErrRelocBeyondEOF // Used reloc points (.Arg) beyond EOF
ErrShortFreeTailTag // Expected a short free block tail tag, got .Arg
ErrSmall // Request for a free block (.Arg) returned a too small one (.Arg2) at .Off
ErrTailTag // Block at .Off has invalid tail CC (compression code) tag, got .Arg
ErrUnexpReloc // Unexpected reloc block referred to from reloc block .Arg
ErrVerifyPadding // Used block has nonzero padding
ErrVerifyTailSize // Long free block size .Arg but tail size .Arg2
ErrVerifyUsedSpan // Used block size (.Arg) spans beyond EOF
)
// ErrILSEQ reports a corrupted file format. Details in fields according to Type.
type ErrILSEQ struct {
Type ErrType
Off int64
Arg int64
Arg2 int64
Arg3 int64
Name string
More interface{}
}
// Error implements the built in error type.
func (e *ErrILSEQ) Error() string {
switch e.Type {
case ErrAdjacentFree:
return fmt.Sprintf("Adjacent free blocks at offset %#x and %#x", e.Off, e.Arg)
case ErrDecompress:
return fmt.Sprintf("Compressed block at offset %#x: Corrupted compressed content", e.Off)
case ErrExpFreeTag:
return fmt.Sprintf("Block at offset %#x: Expected a free block tag, got %#2x", e.Off, e.Arg)
case ErrExpUsedTag:
return fmt.Sprintf("Block at ofset %#x: Expected a used block tag, got %#2x", e.Off, e.Arg)
case ErrFLT:
return fmt.Sprintf("Free block at offset %#x is invalid or referenced multiple times", e.Off)
case ErrFLTLoad:
return fmt.Sprintf("FLT truncated to size %d, expected at least %d", e.Off, e.Arg)
case ErrFLTSize:
return fmt.Sprintf("Free block at offset %#x has size (%#x) should be at least (%#x)", e.Off, e.Arg, e.Arg2)
case ErrFileSize:
return fmt.Sprintf("File %q size (%#x) != 0 (mod 16)", e.Name, e.Arg)
case ErrFreeChaining:
return fmt.Sprintf("Free block at offset %#x: .prev.next doesn point back here.", e.Off)
case ErrFreeTailBlock:
return fmt.Sprintf("Free block at offset %#x: Cannot be last file block", e.Off)
case ErrHead:
return fmt.Sprintf("Block at offset %#x: Head of free block list has non zero .prev %#x", e.Off, e.Arg)
case ErrInvalidRelocTarget:
return fmt.Sprintf("Used reloc block at offset %#x: Target (%#x) is not a short or long used block", e.Off, e.Arg)
case ErrInvalidWAL:
return fmt.Sprintf("Corrupted write ahead log file: %q %v", e.Name, e.More)
case ErrLongFreeBlkTooLong:
return fmt.Sprintf("Long free block at offset %#x: Size (%#x) beyond EOF", e.Off, e.Arg)
case ErrLongFreeBlkTooShort:
return fmt.Sprintf("Long free block at offset %#x: Size (%#x) too small", e.Off, e.Arg)
case ErrLongFreeNextBeyondEOF:
return fmt.Sprintf("Long free block at offset %#x: Next (%#x) points beyond EOF", e.Off, e.Arg)
case ErrLongFreePrevBeyondEOF:
return fmt.Sprintf("Long free block at offset %#x: Prev (%#x) points beyond EOF", e.Off, e.Arg)
case ErrLongFreeTailTag:
return fmt.Sprintf("Block at offset %#x: Expected long free tail tag, got %#2x", e.Off, e.Arg)
case ErrLostFreeBlock:
return fmt.Sprintf("Free block at offset %#x: not in any FLT list", e.Off)
case ErrNullReloc:
return fmt.Sprintf("Used reloc block at offset %#x: Nil target", e.Off)
case ErrRelocBeyondEOF:
return fmt.Sprintf("Used reloc block at offset %#x: Link (%#x) points beyond EOF", e.Off, e.Arg)
case ErrShortFreeTailTag:
return fmt.Sprintf("Block at offset %#x: Expected short free tail tag, got %#2x", e.Off, e.Arg)
case ErrSmall:
return fmt.Sprintf("Request for of free block of size %d returned a too small (%d) one at offset %#x", e.Arg, e.Arg2, e.Off)
case ErrTailTag:
return fmt.Sprintf("Block at offset %#x: Invalid tail CC tag, got %#2x", e.Off, e.Arg)
case ErrUnexpReloc:
return fmt.Sprintf("Block at offset %#x: Unexpected reloc block. Referred to from reloc block at offset %#x", e.Off, e.Arg)
case ErrVerifyPadding:
return fmt.Sprintf("Used block at offset %#x: Nonzero padding", e.Off)
case ErrVerifyTailSize:
return fmt.Sprintf("Long free block at offset %#x: Size %#x, but tail size %#x", e.Off, e.Arg, e.Arg2)
case ErrVerifyUsedSpan:
return fmt.Sprintf("Used block at offset %#x: Size %#x spans beyond EOF", e.Off, e.Arg)
}
more := ""
if e.More != nil {
more = fmt.Sprintf(", %v", e.More)
}
off := ""
if e.Off != 0 {
off = fmt.Sprintf(", off: %#x", e.Off)
}
return fmt.Sprintf("Error%s%s", off, more)
}
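A short sketch of consuming these error types (editor's illustration; err is assumed to come from some lldb operation and the "log" package to be imported):

	if e, ok := err.(*ErrILSEQ); ok {
		// Corruption: e.Type selects one of the kinds above, and
		// e.Off/e.Arg/e.Name carry the details formatted by Error().
		log.Fatalf("corrupted database file: %v", e)
	}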

File diff suppressed because it is too large


@@ -1,192 +0,0 @@
// Copyright 2014 The lldb Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// An abstraction of file like (persistent) storage with optional (abstracted)
// support for structural integrity.
package lldb
import (
"fmt"
"github.com/cznic/mathutil"
)
func doubleTrouble(first, second error) error {
return fmt.Errorf("%q. Additionally, while attempting to recover (rollback): %q", first, second)
}
// A Filer is a []byte-like model of a file or similar entity. It may
// optionally implement support for structural transaction safety. In contrast
// to a file stream, a Filer is not sequentially accessible. ReadAt and WriteAt
// are always "addressed" by an offset and are assumed to perform atomically.
// A Filer is not safe for concurrent access; it's designed for consumption by
// the other objects in the package, which should use a Filer from one goroutine
// only or via a mutex. BeginUpdate, EndUpdate and Rollback must either all be
// implemented by a Filer for structural integrity, or they should all be
// no-ops, where/if that requirement is relaxed.
//
// If a Filer wraps another Filer implementation, it usually invokes the same
// methods on the "inner" one, after some possible argument translations etc.
// If a Filer implements the structural transactions handling methods
// (BeginUpdate, EndUpdate and Rollback) as no-ops _and_ wraps another Filer:
// it then still MUST invoke those methods on the inner Filer. This is
// important for the case where a RollbackFiler exists somewhere down the
// chain. It's also important for an Allocator - to know when it must
// invalidate its FLT cache.
type Filer interface {
// BeginUpdate increments the "nesting" counter (initially zero). Every
// call to BeginUpdate must be eventually "balanced" by exactly one of
// EndUpdate or Rollback. Calls to BeginUpdate may nest.
BeginUpdate() error
// Analogous to os.File.Close().
Close() error
// EndUpdate decrements the "nesting" counter. If it's zero after that
// then assume the "storage" has reached structural integrity (after a
// batch of partial updates). If a Filer implements some support for
// that (write ahead log, journal, etc.) then the appropriate actions
// are to be taken for nesting == 0. Invocation of an unbalanced
// EndUpdate is an error.
EndUpdate() error
// Analogous to os.File.Name().
Name() string
// PunchHole deallocates space inside a "file" in the byte range
// starting at off and continuing for size bytes. The actual hole
// created by PunchHole may be smaller than requested. The Filer size
// (as reported by `Size()`) does not change when hole punching, even
// when punching the end of a file off. In contrast to the Linux
// implementation of FALLOC_FL_PUNCH_HOLE in `fallocate`(2); a Filer is
// free not only to ignore `PunchHole()` (implement it as a nop), but
// additionally no guarantees about the content of the hole, when
// eventually read back, are required, i.e. any data, not only zeros,
// can be read from the "hole", including just anything what was left
// there - with all of the possible security problems.
PunchHole(off, size int64) error
// As os.File.ReadAt. Note: `off` is an absolute "file pointer"
// address and cannot be negative even when a Filer is an InnerFiler.
ReadAt(b []byte, off int64) (n int, err error)
// Rollback cancels and undoes the innermost pending update level.
// Rollback decrements the "nesting" counter. If a Filer implements
// some support for keeping structural integrity (write ahead log,
// journal, etc.) then the appropriate actions are to be taken.
// Invocation of an unbalanced Rollback is an error.
Rollback() error
// Analogous to os.File.FileInfo().Size().
Size() (int64, error)
// Analogous to os.Sync().
Sync() (err error)
// Analogous to os.File.Truncate().
Truncate(size int64) error
// Analogous to os.File.WriteAt(). Note: `off` is an absolute "file
// pointer" address and cannot be negative even when a Filer is a
// InnerFiler.
WriteAt(b []byte, off int64) (n int, err error)
}
var _ Filer = &InnerFiler{} // Ensure InnerFiler is a Filer.
// An InnerFiler is a Filer with added addressing/size translation.
type InnerFiler struct {
outer Filer
off int64
}
// NewInnerFiler returns a new InnerFiler wrapped by `outer` in a way which
// adds `off` to every access.
//
// For example, considering:
//
// inner := NewInnerFiler(outer, 10)
//
// then
//
// inner.WriteAt([]byte{42}, 4)
//
// translates to
//
// outer.WriteAt([]byte{42}, 14)
//
// But an attempt to emulate
//
// outer.WriteAt([]byte{17}, 9)
//
// by
//
// inner.WriteAt([]byte{17}, -1)
//
// will fail as the `off` parameter can never be < 0. Also note that
//
// inner.Size() == outer.Size() - off,
//
// i.e. `inner` pretends no `outer` exists. Finally, after e.g.
//
// inner.Truncate(7)
// outer.Size() == 17
//
// will be true.
func NewInnerFiler(outer Filer, off int64) *InnerFiler { return &InnerFiler{outer, off} }
// BeginUpdate implements Filer.
func (f *InnerFiler) BeginUpdate() error { return f.outer.BeginUpdate() }
// Close implements Filer.
func (f *InnerFiler) Close() (err error) { return f.outer.Close() }
// EndUpdate implements Filer.
func (f *InnerFiler) EndUpdate() error { return f.outer.EndUpdate() }
// Name implements Filer.
func (f *InnerFiler) Name() string { return f.outer.Name() }
// PunchHole implements Filer. `off`, `size` must be >= 0.
func (f *InnerFiler) PunchHole(off, size int64) error { return f.outer.PunchHole(f.off+off, size) }
// ReadAt implements Filer. `off` must be >= 0.
func (f *InnerFiler) ReadAt(b []byte, off int64) (n int, err error) {
if off < 0 {
return 0, &ErrINVAL{f.outer.Name() + ":ReadAt invalid off", off}
}
return f.outer.ReadAt(b, f.off+off)
}
// Rollback implements Filer.
func (f *InnerFiler) Rollback() error { return f.outer.Rollback() }
// Size implements Filer.
func (f *InnerFiler) Size() (int64, error) {
sz, err := f.outer.Size()
if err != nil {
return 0, err
}
return mathutil.MaxInt64(sz-f.off, 0), nil
}
// Sync() implements Filer.
func (f *InnerFiler) Sync() (err error) {
return f.outer.Sync()
}
// Truncate implements Filer.
func (f *InnerFiler) Truncate(size int64) error { return f.outer.Truncate(size + f.off) }
// WriteAt implements Filer. `off` must be >= 0.
func (f *InnerFiler) WriteAt(b []byte, off int64) (n int, err error) {
if off < 0 {
return 0, &ErrINVAL{f.outer.Name() + ":WriteAt invalid off", off}
}
return f.outer.WriteAt(b, f.off+off)
}
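A minimal sketch of the update discipline the Filer contract above prescribes (editor's illustration; f may be any Filer implementation):

	func update(f Filer, b []byte, off int64) (err error) {
		if err = f.BeginUpdate(); err != nil {
			return err
		}
		if _, err = f.WriteAt(b, off); err != nil {
			_ = f.Rollback() // undo the innermost pending update level
			return err
		}
		return f.EndUpdate() // nesting back to zero: structural integrity
	}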


@@ -1,812 +0,0 @@
// Copyright 2014 The lldb Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Utilities to encode/decode and collate Go predeclared scalar types (and the
// typeless nil and []byte). The encoding format is a variation of the one
// used by the "encoding/gob" package.
package lldb
import (
"bytes"
"fmt"
"math"
"github.com/cznic/mathutil"
)
const (
gbNull = iota // 0x00
gbFalse // 0x01
gbTrue // 0x02
gbFloat0 // 0x03
gbFloat1 // 0x04
gbFloat2 // 0x05
gbFloat3 // 0x06
gbFloat4 // 0x07
gbFloat5 // 0x08
gbFloat6 // 0x09
gbFloat7 // 0x0a
gbFloat8 // 0x0b
gbComplex0 // 0x0c
gbComplex1 // 0x0d
gbComplex2 // 0x0e
gbComplex3 // 0x0f
gbComplex4 // 0x10
gbComplex5 // 0x11
gbComplex6 // 0x12
gbComplex7 // 0x13
gbComplex8 // 0x14
gbBytes00 // 0x15
gbBytes01 // 0x16
gbBytes02 // 0x17
gbBytes03 // 0x18
gbBytes04 // 0x19
gbBytes05 // 0x1a
gbBytes06 // 0x1b
gbBytes07 // 0x1c
gbBytes08 // 0x1d
gbBytes09 // 0x1e
gbBytes10 // 0x1f
gbBytes11 // 0x20
gbBytes12 // 0x21
gbBytes13 // 0x22
gbBytes14 // 0x23
gbBytes15 // 0x24
gbBytes16 // 0x25
	gbBytes17  // 0x26
gbBytes1 // 0x27
gbBytes2 // 0x28: Offset by one to allow 64kB sized []byte.
gbString00 // 0x29
gbString01 // 0x2a
gbString02 // 0x2b
gbString03 // 0x2c
gbString04 // 0x2d
gbString05 // 0x2e
gbString06 // 0x2f
gbString07 // 0x30
gbString08 // 0x31
gbString09 // 0x32
gbString10 // 0x33
gbString11 // 0x34
gbString12 // 0x35
gbString13 // 0x36
gbString14 // 0x37
gbString15 // 0x38
gbString16 // 0x39
gbString17 // 0x3a
gbString1 // 0x3b
gbString2 // 0x3c
gbUintP1 // 0x3d
gbUintP2 // 0x3e
gbUintP3 // 0x3f
gbUintP4 // 0x40
gbUintP5 // 0x41
gbUintP6 // 0x42
gbUintP7 // 0x43
gbUintP8 // 0x44
gbIntM8 // 0x45
gbIntM7 // 0x46
gbIntM6 // 0x47
gbIntM5 // 0x48
gbIntM4 // 0x49
gbIntM3 // 0x4a
gbIntM2 // 0x4b
gbIntM1 // 0x4c
gbIntP1 // 0x4d
gbIntP2 // 0x4e
gbIntP3 // 0x4f
gbIntP4 // 0x50
gbIntP5 // 0x51
gbIntP6 // 0x52
gbIntP7 // 0x53
gbIntP8 // 0x54
gbInt0 // 0x55
gbIntMax = 255 - gbInt0 // 0xff == 170
)
// EncodeScalars encodes a vector of predeclared scalar type values to a
// []byte, making it suitable to store it as a "record" in a DB or to use it as
// a key of a BTree.
func EncodeScalars(scalars ...interface{}) (b []byte, err error) {
for _, scalar := range scalars {
switch x := scalar.(type) {
default:
return nil, &ErrINVAL{"EncodeScalars: unsupported type", fmt.Sprintf("%T in `%#v`", x, scalars)}
case nil:
b = append(b, gbNull)
case bool:
switch x {
case false:
b = append(b, gbFalse)
case true:
b = append(b, gbTrue)
}
case float32:
encFloat(float64(x), &b)
case float64:
encFloat(x, &b)
case complex64:
encComplex(complex128(x), &b)
case complex128:
encComplex(x, &b)
case string:
n := len(x)
if n <= 17 {
b = append(b, byte(gbString00+n))
b = append(b, []byte(x)...)
break
}
if n > 65535 {
return nil, fmt.Errorf("EncodeScalars: cannot encode string of length %d (limit 65536)", n)
}
pref := byte(gbString1)
if n > 255 {
pref++
}
b = append(b, pref)
encUint0(uint64(n), &b)
b = append(b, []byte(x)...)
case int8:
encInt(int64(x), &b)
case int16:
encInt(int64(x), &b)
case int32:
encInt(int64(x), &b)
case int64:
encInt(x, &b)
case int:
encInt(int64(x), &b)
case uint8:
encUint(uint64(x), &b)
case uint16:
encUint(uint64(x), &b)
case uint32:
encUint(uint64(x), &b)
case uint64:
encUint(x, &b)
case uint:
encUint(uint64(x), &b)
case []byte:
n := len(x)
if n <= 17 {
b = append(b, byte(gbBytes00+n))
b = append(b, []byte(x)...)
break
}
if n > 65536 {
return nil, fmt.Errorf("EncodeScalars: cannot encode []byte of length %d (limit 65536)", n)
}
pref := byte(gbBytes1)
if n > 255 {
pref++
}
b = append(b, pref)
if n <= 255 {
b = append(b, byte(n))
} else {
n--
b = append(b, byte(n>>8), byte(n))
}
b = append(b, x...)
}
}
return
}
func encComplex(f complex128, b *[]byte) {
encFloatPrefix(gbComplex0, real(f), b)
encFloatPrefix(gbComplex0, imag(f), b)
}
func encFloatPrefix(prefix byte, f float64, b *[]byte) {
u := math.Float64bits(f)
var n uint64
for i := 0; i < 8; i++ {
n <<= 8
n |= u & 0xFF
u >>= 8
}
bits := mathutil.BitLenUint64(n)
if bits == 0 {
*b = append(*b, prefix)
return
}
// 0 1 2 3 4 5 6 7 8 9
// . 1 1 1 1 1 1 1 1 2
encUintPrefix(prefix+1+byte((bits-1)>>3), n, b)
}
func encFloat(f float64, b *[]byte) {
encFloatPrefix(gbFloat0, f, b)
}
func encUint0(n uint64, b *[]byte) {
switch {
case n <= 0xff:
*b = append(*b, byte(n))
case n <= 0xffff:
*b = append(*b, byte(n>>8), byte(n))
case n <= 0xffffff:
*b = append(*b, byte(n>>16), byte(n>>8), byte(n))
case n <= 0xffffffff:
*b = append(*b, byte(n>>24), byte(n>>16), byte(n>>8), byte(n))
case n <= 0xffffffffff:
*b = append(*b, byte(n>>32), byte(n>>24), byte(n>>16), byte(n>>8), byte(n))
case n <= 0xffffffffffff:
*b = append(*b, byte(n>>40), byte(n>>32), byte(n>>24), byte(n>>16), byte(n>>8), byte(n))
case n <= 0xffffffffffffff:
*b = append(*b, byte(n>>48), byte(n>>40), byte(n>>32), byte(n>>24), byte(n>>16), byte(n>>8), byte(n))
case n <= math.MaxUint64:
*b = append(*b, byte(n>>56), byte(n>>48), byte(n>>40), byte(n>>32), byte(n>>24), byte(n>>16), byte(n>>8), byte(n))
}
}
func encUintPrefix(prefix byte, n uint64, b *[]byte) {
*b = append(*b, prefix)
encUint0(n, b)
}
func encUint(n uint64, b *[]byte) {
bits := mathutil.Max(1, mathutil.BitLenUint64(n))
encUintPrefix(gbUintP1+byte((bits-1)>>3), n, b)
}
func encInt(n int64, b *[]byte) {
switch {
case n < -0x100000000000000:
*b = append(*b, byte(gbIntM8), byte(n>>56), byte(n>>48), byte(n>>40), byte(n>>32), byte(n>>24), byte(n>>16), byte(n>>8), byte(n))
case n < -0x1000000000000:
*b = append(*b, byte(gbIntM7), byte(n>>48), byte(n>>40), byte(n>>32), byte(n>>24), byte(n>>16), byte(n>>8), byte(n))
case n < -0x10000000000:
*b = append(*b, byte(gbIntM6), byte(n>>40), byte(n>>32), byte(n>>24), byte(n>>16), byte(n>>8), byte(n))
case n < -0x100000000:
*b = append(*b, byte(gbIntM5), byte(n>>32), byte(n>>24), byte(n>>16), byte(n>>8), byte(n))
case n < -0x1000000:
*b = append(*b, byte(gbIntM4), byte(n>>24), byte(n>>16), byte(n>>8), byte(n))
case n < -0x10000:
*b = append(*b, byte(gbIntM3), byte(n>>16), byte(n>>8), byte(n))
case n < -0x100:
*b = append(*b, byte(gbIntM2), byte(n>>8), byte(n))
case n < 0:
*b = append(*b, byte(gbIntM1), byte(n))
case n <= gbIntMax:
*b = append(*b, byte(gbInt0+n))
case n <= 0xff:
*b = append(*b, gbIntP1, byte(n))
case n <= 0xffff:
*b = append(*b, gbIntP2, byte(n>>8), byte(n))
case n <= 0xffffff:
*b = append(*b, gbIntP3, byte(n>>16), byte(n>>8), byte(n))
case n <= 0xffffffff:
*b = append(*b, gbIntP4, byte(n>>24), byte(n>>16), byte(n>>8), byte(n))
case n <= 0xffffffffff:
*b = append(*b, gbIntP5, byte(n>>32), byte(n>>24), byte(n>>16), byte(n>>8), byte(n))
case n <= 0xffffffffffff:
*b = append(*b, gbIntP6, byte(n>>40), byte(n>>32), byte(n>>24), byte(n>>16), byte(n>>8), byte(n))
case n <= 0xffffffffffffff:
*b = append(*b, gbIntP7, byte(n>>48), byte(n>>40), byte(n>>32), byte(n>>24), byte(n>>16), byte(n>>8), byte(n))
case n <= 0x7fffffffffffffff:
*b = append(*b, gbIntP8, byte(n>>56), byte(n>>48), byte(n>>40), byte(n>>32), byte(n>>24), byte(n>>16), byte(n>>8), byte(n))
}
}
func decodeFloat(b []byte) float64 {
var u uint64
for i, v := range b {
u |= uint64(v) << uint((i+8-len(b))*8)
}
return math.Float64frombits(u)
}
// DecodeScalars decodes a []byte produced by EncodeScalars.
func DecodeScalars(b []byte) (scalars []interface{}, err error) {
b0 := b
for len(b) != 0 {
switch tag := b[0]; tag {
//default:
//return nil, fmt.Errorf("tag %d(%#x) not supported", b[0], b[0])
case gbNull:
scalars = append(scalars, nil)
b = b[1:]
case gbFalse:
scalars = append(scalars, false)
b = b[1:]
case gbTrue:
scalars = append(scalars, true)
b = b[1:]
case gbFloat0:
scalars = append(scalars, 0.0)
b = b[1:]
case gbFloat1, gbFloat2, gbFloat3, gbFloat4, gbFloat5, gbFloat6, gbFloat7, gbFloat8:
n := 1 + int(tag) - gbFloat0
if len(b) < n-1 {
goto corrupted
}
scalars = append(scalars, decodeFloat(b[1:n]))
b = b[n:]
case gbComplex0, gbComplex1, gbComplex2, gbComplex3, gbComplex4, gbComplex5, gbComplex6, gbComplex7, gbComplex8:
n := 1 + int(tag) - gbComplex0
if len(b) < n-1 {
goto corrupted
}
re := decodeFloat(b[1:n])
b = b[n:]
if len(b) == 0 {
goto corrupted
}
tag = b[0]
if tag < gbComplex0 || tag > gbComplex8 {
goto corrupted
}
n = 1 + int(tag) - gbComplex0
if len(b) < n-1 {
goto corrupted
}
scalars = append(scalars, complex(re, decodeFloat(b[1:n])))
b = b[n:]
case gbBytes00, gbBytes01, gbBytes02, gbBytes03, gbBytes04,
gbBytes05, gbBytes06, gbBytes07, gbBytes08, gbBytes09,
gbBytes10, gbBytes11, gbBytes12, gbBytes13, gbBytes14,
gbBytes15, gbBytes16, gbBytes17:
n := int(tag - gbBytes00)
if len(b) < n+1 {
goto corrupted
}
scalars = append(scalars, append([]byte(nil), b[1:n+1]...))
b = b[n+1:]
case gbBytes1:
if len(b) < 2 {
goto corrupted
}
n := int(b[1])
b = b[2:]
if len(b) < n {
goto corrupted
}
scalars = append(scalars, append([]byte(nil), b[:n]...))
b = b[n:]
case gbBytes2:
if len(b) < 3 {
goto corrupted
}
n := int(b[1])<<8 | int(b[2]) + 1
b = b[3:]
if len(b) < n {
goto corrupted
}
scalars = append(scalars, append([]byte(nil), b[:n]...))
b = b[n:]
case gbString00, gbString01, gbString02, gbString03, gbString04,
gbString05, gbString06, gbString07, gbString08, gbString09,
gbString10, gbString11, gbString12, gbString13, gbString14,
gbString15, gbString16, gbString17:
n := int(tag - gbString00)
if len(b) < n+1 {
goto corrupted
}
scalars = append(scalars, string(b[1:n+1]))
b = b[n+1:]
case gbString1:
if len(b) < 2 {
goto corrupted
}
n := int(b[1])
b = b[2:]
if len(b) < n {
goto corrupted
}
scalars = append(scalars, string(b[:n]))
b = b[n:]
case gbString2:
if len(b) < 3 {
goto corrupted
}
n := int(b[1])<<8 | int(b[2])
b = b[3:]
if len(b) < n {
goto corrupted
}
scalars = append(scalars, string(b[:n]))
b = b[n:]
case gbUintP1, gbUintP2, gbUintP3, gbUintP4, gbUintP5, gbUintP6, gbUintP7, gbUintP8:
b = b[1:]
n := 1 + int(tag) - gbUintP1
if len(b) < n {
goto corrupted
}
var u uint64
for _, v := range b[:n] {
u = u<<8 | uint64(v)
}
scalars = append(scalars, u)
b = b[n:]
case gbIntM8, gbIntM7, gbIntM6, gbIntM5, gbIntM4, gbIntM3, gbIntM2, gbIntM1:
b = b[1:]
n := 8 - (int(tag) - gbIntM8)
if len(b) < n {
goto corrupted
}
u := uint64(math.MaxUint64)
for _, v := range b[:n] {
u = u<<8 | uint64(v)
}
scalars = append(scalars, int64(u))
b = b[n:]
case gbIntP1, gbIntP2, gbIntP3, gbIntP4, gbIntP5, gbIntP6, gbIntP7, gbIntP8:
b = b[1:]
n := 1 + int(tag) - gbIntP1
if len(b) < n {
goto corrupted
}
i := int64(0)
for _, v := range b[:n] {
i = i<<8 | int64(v)
}
scalars = append(scalars, i)
b = b[n:]
default:
scalars = append(scalars, int64(b[0])-gbInt0)
b = b[1:]
}
}
return append([]interface{}(nil), scalars...), nil
corrupted:
return nil, &ErrDecodeScalars{append([]byte(nil), b0...), len(b0) - len(b)}
}
func collateComplex(x, y complex128) int {
switch rx, ry := real(x), real(y); {
case rx < ry:
return -1
case rx == ry:
switch ix, iy := imag(x), imag(y); {
case ix < iy:
return -1
case ix == iy:
return 0
case ix > iy:
return 1
}
}
//case rx > ry:
return 1
}
func collateFloat(x, y float64) int {
switch {
case x < y:
return -1
case x == y:
return 0
}
//case x > y:
return 1
}
func collateInt(x, y int64) int {
switch {
case x < y:
return -1
case x == y:
return 0
}
//case x > y:
return 1
}
func collateUint(x, y uint64) int {
switch {
case x < y:
return -1
case x == y:
return 0
}
//case x > y:
return 1
}
func collateIntUint(x int64, y uint64) int {
if y > math.MaxInt64 {
return -1
}
return collateInt(x, int64(y))
}
func collateUintInt(x uint64, y int64) int {
return -collateIntUint(y, x)
}
func collateType(i interface{}) (r interface{}, err error) {
switch x := i.(type) {
default:
return nil, fmt.Errorf("invalid collate type %T", x)
case nil:
return i, nil
case bool:
return i, nil
case int8:
return int64(x), nil
case int16:
return int64(x), nil
case int32:
return int64(x), nil
case int64:
return i, nil
case int:
return int64(x), nil
case uint8:
return uint64(x), nil
case uint16:
return uint64(x), nil
case uint32:
return uint64(x), nil
case uint64:
return i, nil
case uint:
return uint64(x), nil
case float32:
return float64(x), nil
case float64:
return i, nil
case complex64:
return complex128(x), nil
case complex128:
return i, nil
case []byte:
return i, nil
case string:
return i, nil
}
}
// Collate collates two arrays of Go predeclared scalar types (and the typeless
// nil or []byte). If any other type appears in x or y, Collate will return a
// non nil error. String items are collated using strCollate or lexically
// byte-wise (as when using Go comparison operators) when strCollate is nil.
// []byte items are collated using bytes.Compare.
//
// Collate returns:
//
// -1 if x < y
// 0 if x == y
// +1 if x > y
//
// The same value as defined above must be returned from strCollate.
//
// The "outer" ordering is: nil, bool, number, []byte, string. IOW, nil is
// "smaller" than anything else except other nil, numbers collate before
// []byte, []byte collate before strings, etc.
//
// Integers and real numbers collate as expected in math. However, complex
// numbers are not ordered in Go. Here the ordering is defined: Complex numbers
// are in comparison considered first only by their real part. Iff the result
// is equality then the imaginary part is used to determine the ordering. In
// this "second order" comparing, integers and real numbers are considered as
// complex numbers with a zero imaginary part.
func Collate(x, y []interface{}, strCollate func(string, string) int) (r int, err error) {
nx, ny := len(x), len(y)
switch {
case nx == 0 && ny != 0:
return -1, nil
case nx == 0 && ny == 0:
return 0, nil
case nx != 0 && ny == 0:
return 1, nil
}
r = 1
if nx > ny {
x, y, r = y, x, -r
}
var c int
for i, xi0 := range x {
yi0 := y[i]
xi, err := collateType(xi0)
if err != nil {
return 0, err
}
yi, err := collateType(yi0)
if err != nil {
return 0, err
}
switch x := xi.(type) {
default:
panic(fmt.Errorf("internal error: %T", x))
case nil:
switch yi.(type) {
case nil:
// nop
default:
return -r, nil
}
case bool:
switch y := yi.(type) {
case nil:
return r, nil
case bool:
switch {
case !x && y:
return -r, nil
case x == y:
// nop
case x && !y:
return r, nil
}
default:
return -r, nil
}
case int64:
switch y := yi.(type) {
case nil, bool:
return r, nil
case int64:
c = collateInt(x, y)
case uint64:
c = collateIntUint(x, y)
case float64:
c = collateFloat(float64(x), y)
case complex128:
c = collateComplex(complex(float64(x), 0), y)
case []byte:
return -r, nil
case string:
return -r, nil
}
if c != 0 {
return c * r, nil
}
case uint64:
switch y := yi.(type) {
case nil, bool:
return r, nil
case int64:
c = collateUintInt(x, y)
case uint64:
c = collateUint(x, y)
case float64:
c = collateFloat(float64(x), y)
case complex128:
c = collateComplex(complex(float64(x), 0), y)
case []byte:
return -r, nil
case string:
return -r, nil
}
if c != 0 {
return c * r, nil
}
case float64:
switch y := yi.(type) {
case nil, bool:
return r, nil
case int64:
c = collateFloat(x, float64(y))
case uint64:
c = collateFloat(x, float64(y))
case float64:
c = collateFloat(x, y)
case complex128:
c = collateComplex(complex(x, 0), y)
case []byte:
return -r, nil
case string:
return -r, nil
}
if c != 0 {
return c * r, nil
}
case complex128:
switch y := yi.(type) {
case nil, bool:
return r, nil
case int64:
c = collateComplex(x, complex(float64(y), 0))
case uint64:
c = collateComplex(x, complex(float64(y), 0))
case float64:
c = collateComplex(x, complex(y, 0))
case complex128:
c = collateComplex(x, y)
case []byte:
return -r, nil
case string:
return -r, nil
}
if c != 0 {
return c * r, nil
}
case []byte:
switch y := yi.(type) {
case nil, bool, int64, uint64, float64, complex128:
return r, nil
case []byte:
c = bytes.Compare(x, y)
case string:
return -r, nil
}
if c != 0 {
return c * r, nil
}
case string:
switch y := yi.(type) {
case nil, bool, int64, uint64, float64, complex128:
return r, nil
case []byte:
return r, nil
case string:
switch {
case strCollate != nil:
c = strCollate(x, y)
case x < y:
return -r, nil
case x == y:
c = 0
case x > y:
return r, nil
}
}
if c != 0 {
return c * r, nil
}
}
}
if nx == ny {
return 0, nil
}
return -r, nil
}
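A short round-trip sketch of the three public entry points above (editor's illustration; the values are arbitrary and the byte-level comments follow the tag table at the top of the file):

	b, err := EncodeScalars(int64(1), "foo")
	if err != nil {
		panic(err)
	}
	// b == []byte{0x56, 0x2c, 'f', 'o', 'o'}: gbInt0+1, gbString00+3, "foo".
	scalars, err := DecodeScalars(b) // []interface{}{int64(1), "foo"}
	if err != nil {
		panic(err)
	}
	c, err := Collate(scalars, []interface{}{int64(2)}, nil)
	fmt.Println(c, err) // -1 <nil>: int64(1) < int64(2) decides the ordering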


@@ -1,155 +0,0 @@
// Copyright 2014 The lldb Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package lldb (WIP) implements a low level database engine. The database
// model used could be considered a specific implementation of some small(est)
// intersection of models listed in [1]. As a settled term is lacking, it'll be
// called here a 'Virtual memory model' (VMM).
//
// Experimental release notes
//
// This is an experimental release. Don't open a DB from two applications or
// two instances of an application - it will get corrupted (no file locking is
// implemented and this task is delegated to lldb's clients).
//
// WARNING: THE LLDB API IS SUBJECT TO CHANGE.
//
// Filers
//
// A Filer is an abstraction of storage. A Filer may be a part of some process'
// virtual address space, an OS file, a networked, remote file etc. Persistence
// of the storage is optional, opaque to VMM and it is specific to a concrete
// Filer implementation.
//
// Space management
//
// Mechanism to allocate, reallocate (resize), deallocate (and later reclaim
// the unused) contiguous parts of a Filer, called blocks. Blocks are
// identified and referred to by a handle, an int64.
//
// BTrees
//
// In addition to the VMM like services, lldb provides volatile and
// non-volatile BTrees. Keys and values of a BTree are limited in size to 64kB
// each (a bit more actually). Support for larger keys/values, if desired, can
// be built atop a BTree to certain limits.
//
// Handles vs pointers
//
// A handle is the abstracted storage counterpart of a memory address. There
// is one fundamental difference, though. Resizing a block never results in a
// change to the handle which refers to the resized block, so a handle is more
// akin to a unique numeric id/key. Yet it shares one property of pointers -
// handles can be associated again with blocks after the original handle block
// was deallocated. In other words, a handle uniqueness domain is the state of
// the database and is not something comparable to e.g. an ever growing
// numbering sequence.
//
// Also, as with memory pointers, dangling handles can be created and blocks
// overwritten when such handles are used. Using a zero handle to refer to a
// block will not panic; however, the resulting error is effectively the same
// exceptional situation as dereferencing a nil pointer.
//
// Blocks
//
// Allocated/used blocks are limited in size to only a little bit more than
// 64kB. Bigger semantic entities/structures must be built in lldb's client
// code. The content of a block has no semantics attached, it's only a fully
// opaque `[]byte`.
//
// Scalars
//
// Use of "scalars" applies to EncodeScalars, DecodeScalars and Collate. Those
// first two "to bytes" and "from bytes" functions are suggested for handling
// multi-valued Allocator content items and/or keys/values of BTrees (using
// Collate for keys). Types called "scalar" are:
//
// nil (the typeless one)
// bool
// all integral types: [u]int8, [u]int16, [u]int32, [u]int, [u]int64
// all floating point types: float32, float64
// all complex types: complex64, complex128
// []byte (64kB max)
//	string (64kB max)
//
// Specific implementations
//
// Included are concrete implementations of some of the VMM interfaces, meant
// to ease serving simple client code, to support testing, and possibly to
// serve as examples. More details in the documentation of such implementations.
//
// [1]: http://en.wikipedia.org/wiki/Database_model
package lldb
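For illustration only (not part of this commit): a hedged sketch of the EncodeScalars/DecodeScalars round trip described under Scalars above. The import path is an assumption, and decoded integral values are assumed to normalize to int64.

package main

import (
	"fmt"

	"github.com/cznic/lldb" // assumed import path
)

func main() {
	b, err := lldb.EncodeScalars(nil, true, int64(-42), "hello", []byte{0xde, 0xad})
	if err != nil {
		panic(err)
	}
	scalars, err := lldb.DecodeScalars(b)
	if err != nil {
		panic(err)
	}
	fmt.Println(scalars) // [<nil> true -42 hello [222 173]]
}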
const (
fltSz = 0x70 // size of the FLT
maxShort = 251
maxRq = 65787
maxFLTRq = 4112
maxHandle = 1<<56 - 1
atomLen = 16
tagUsedLong = 0xfc
tagUsedRelocated = 0xfd
tagFreeShort = 0xfe
tagFreeLong = 0xff
tagNotCompressed = 0
tagCompressed = 1
)
// Content size n -> blocksize in atoms.
func n2atoms(n int) int {
if n > maxShort {
n += 2
}
return (n+1)/16 + 1
}
// Content size n -> number of padding zeros.
func n2padding(n int) int {
if n > maxShort {
n += 2
}
return 15 - (n+1)&15
}
// Handle <-> offset
func h2off(h int64) int64 { return (h + 6) * 16 }
func off2h(off int64) int64 { return off/16 - 6 }
// Get a 7B int64 from b
func b2h(b []byte) (h int64) {
for _, v := range b[:7] {
h = h<<8 | int64(v)
}
return
}
// Put a 7B int64 into b
func h2b(b []byte, h int64) []byte {
for i := range b[:7] {
b[i], h = byte(h>>48), h<<8
}
return b
}
// Content length N (must be in [252, 65787]) to long used block M field.
func n2m(n int) (m int) {
return n % 0x10000
}
// Long used block M (must be in [0, 65535]) field to content length N.
func m2n(m int) (n int) {
if m <= maxShort {
m += 0x10000
}
return m
}
func bpack(a []byte) []byte {
if cap(a) > len(a) {
return append([]byte(nil), a...)
}
return a
}
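To make the block arithmetic above concrete, a standalone sketch reproducing the formulas (self-contained, so the unexported helpers are copied rather than imported): content sizes above maxShort (251) pay 2 extra length bytes, and blocks round up to whole 16-byte atoms.

package main

import "fmt"

const maxShort = 251

// Copies of the helpers above, for a runnable demonstration.
func n2atoms(n int) int {
	if n > maxShort {
		n += 2
	}
	return (n+1)/16 + 1
}

func n2padding(n int) int {
	if n > maxShort {
		n += 2
	}
	return 15 - (n+1)&15
}

func main() {
	// Note n=65787 (maxRq) yields 4112 atoms, matching maxFLTRq above.
	for _, n := range []int{0, 15, 251, 252, 65787} {
		fmt.Printf("n=%5d atoms=%4d padding=%2d\n", n, n2atoms(n), n2padding(n))
	}
}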


@ -1,344 +0,0 @@
// Copyright 2014 The lldb Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// A memory-only implementation of Filer.
/*
pgBits: 8
BenchmarkMemFilerWrSeq 100000 19430 ns/op 1646.93 MB/s
BenchmarkMemFilerRdSeq 100000 17390 ns/op 1840.13 MB/s
BenchmarkMemFilerWrRand 1000000 1903 ns/op 133.94 MB/s
BenchmarkMemFilerRdRand 1000000 1153 ns/op 221.16 MB/s
pgBits: 9
BenchmarkMemFilerWrSeq 100000 16195 ns/op 1975.80 MB/s
BenchmarkMemFilerRdSeq 200000 13011 ns/op 2459.39 MB/s
BenchmarkMemFilerWrRand 1000000 2248 ns/op 227.28 MB/s
BenchmarkMemFilerRdRand 1000000 1177 ns/op 433.94 MB/s
pgBits: 10
BenchmarkMemFilerWrSeq 100000 16169 ns/op 1979.04 MB/s
BenchmarkMemFilerRdSeq 200000 12673 ns/op 2524.91 MB/s
BenchmarkMemFilerWrRand 1000000 5550 ns/op 184.30 MB/s
BenchmarkMemFilerRdRand 1000000 1699 ns/op 601.79 MB/s
pgBits: 11
BenchmarkMemFilerWrSeq 100000 13449 ns/op 2379.31 MB/s
BenchmarkMemFilerRdSeq 200000 12058 ns/op 2653.80 MB/s
BenchmarkMemFilerWrRand 500000 4335 ns/op 471.47 MB/s
BenchmarkMemFilerRdRand 1000000 2843 ns/op 719.47 MB/s
pgBits: 12
BenchmarkMemFilerWrSeq 200000 11976 ns/op 2672.00 MB/s
BenchmarkMemFilerRdSeq 200000 12255 ns/op 2611.06 MB/s
BenchmarkMemFilerWrRand 200000 8058 ns/op 507.14 MB/s
BenchmarkMemFilerRdRand 500000 4365 ns/op 936.15 MB/s
pgBits: 13
BenchmarkMemFilerWrSeq 200000 10852 ns/op 2948.69 MB/s
BenchmarkMemFilerRdSeq 200000 11561 ns/op 2767.77 MB/s
BenchmarkMemFilerWrRand 200000 9748 ns/op 840.15 MB/s
BenchmarkMemFilerRdRand 500000 7236 ns/op 1131.59 MB/s
pgBits: 14
BenchmarkMemFilerWrSeq 200000 10328 ns/op 3098.12 MB/s
BenchmarkMemFilerRdSeq 200000 11292 ns/op 2833.66 MB/s
BenchmarkMemFilerWrRand 100000 16768 ns/op 978.75 MB/s
BenchmarkMemFilerRdRand 200000 13033 ns/op 1258.43 MB/s
pgBits: 15
BenchmarkMemFilerWrSeq 200000 10309 ns/op 3103.93 MB/s
BenchmarkMemFilerRdSeq 200000 11126 ns/op 2876.12 MB/s
BenchmarkMemFilerWrRand 50000 31985 ns/op 1021.74 MB/s
BenchmarkMemFilerRdRand 100000 25217 ns/op 1297.65 MB/s
pgBits: 16
BenchmarkMemFilerWrSeq 200000 10324 ns/op 3099.45 MB/s
BenchmarkMemFilerRdSeq 200000 11201 ns/op 2856.80 MB/s
BenchmarkMemFilerWrRand 20000 55226 ns/op 1184.76 MB/s
BenchmarkMemFilerRdRand 50000 48316 ns/op 1355.16 MB/s
pgBits: 17
BenchmarkMemFilerWrSeq 200000 10377 ns/op 3083.53 MB/s
BenchmarkMemFilerRdSeq 200000 11018 ns/op 2904.18 MB/s
BenchmarkMemFilerWrRand 10000 143425 ns/op 913.12 MB/s
BenchmarkMemFilerRdRand 20000 95267 ns/op 1376.99 MB/s
pgBits: 18
BenchmarkMemFilerWrSeq 200000 10312 ns/op 3102.96 MB/s
BenchmarkMemFilerRdSeq 200000 11069 ns/op 2890.84 MB/s
BenchmarkMemFilerWrRand 5000 280910 ns/op 934.14 MB/s
BenchmarkMemFilerRdRand 10000 188500 ns/op 1388.17 MB/s
*/
package lldb
import (
"bytes"
"fmt"
"io"
"github.com/cznic/fileutil"
"github.com/cznic/mathutil"
)
const (
pgBits = 16
pgSize = 1 << pgBits
pgMask = pgSize - 1
)
var _ Filer = &MemFiler{} // Ensure MemFiler is a Filer.
type memFilerMap map[int64]*[pgSize]byte
// MemFiler is a memory backed Filer. It implements BeginUpdate, EndUpdate and
// Rollback as no-ops. MemFiler is not automatically persistent, but it has
// ReadFrom and WriteTo methods.
type MemFiler struct {
m memFilerMap
nest int
size int64
}
// NewMemFiler returns a new MemFiler.
func NewMemFiler() *MemFiler {
return &MemFiler{m: memFilerMap{}}
}
// BeginUpdate implements Filer.
func (f *MemFiler) BeginUpdate() error {
f.nest++
return nil
}
// Close implements Filer.
func (f *MemFiler) Close() (err error) {
if f.nest != 0 {
return &ErrPERM{(f.Name() + ":Close")}
}
return
}
// EndUpdate implements Filer.
func (f *MemFiler) EndUpdate() (err error) {
if f.nest == 0 {
return &ErrPERM{(f.Name() + ": EndUpdate")}
}
f.nest--
return
}
// Name implements Filer.
func (f *MemFiler) Name() string {
return fmt.Sprintf("%p.memfiler", f)
}
// PunchHole implements Filer.
func (f *MemFiler) PunchHole(off, size int64) (err error) {
if off < 0 {
return &ErrINVAL{f.Name() + ": PunchHole off", off}
}
if size < 0 || off+size > f.size {
return &ErrINVAL{f.Name() + ": PunchHole size", size}
}
first := off >> pgBits
if off&pgMask != 0 {
first++
}
off += size - 1
last := off >> pgBits
if off&pgMask != 0 {
last--
}
if limit := f.size >> pgBits; last > limit {
last = limit
}
for pg := first; pg <= last; pg++ {
delete(f.m, pg)
}
return
}
var zeroPage [pgSize]byte
// ReadAt implements Filer.
func (f *MemFiler) ReadAt(b []byte, off int64) (n int, err error) {
avail := f.size - off
pgI := off >> pgBits
pgO := int(off & pgMask)
rem := len(b)
if int64(rem) >= avail {
rem = int(avail)
err = io.EOF
}
for rem != 0 && avail > 0 {
pg := f.m[pgI]
if pg == nil {
pg = &zeroPage
}
nc := copy(b[:mathutil.Min(rem, pgSize)], pg[pgO:])
pgI++
pgO = 0
rem -= nc
n += nc
b = b[nc:]
}
return
}
// ReadFrom is a helper to populate MemFiler's content from r. 'n' reports the
// number of bytes read from 'r'.
func (f *MemFiler) ReadFrom(r io.Reader) (n int64, err error) {
if err = f.Truncate(0); err != nil {
return
}
var (
b [pgSize]byte
rn int
off int64
)
var rerr error
for rerr == nil {
if rn, rerr = r.Read(b[:]); rn != 0 {
f.WriteAt(b[:rn], off)
off += int64(rn)
n += int64(rn)
}
}
if !fileutil.IsEOF(rerr) {
err = rerr
}
return
}
// Rollback implements Filer.
func (f *MemFiler) Rollback() (err error) { return }
// Size implements Filer.
func (f *MemFiler) Size() (int64, error) {
return f.size, nil
}
// Sync implements Filer.
func (f *MemFiler) Sync() error {
return nil
}
// Truncate implements Filer.
func (f *MemFiler) Truncate(size int64) (err error) {
switch {
case size < 0:
return &ErrINVAL{"Truncate size", size}
case size == 0:
f.m = memFilerMap{}
f.size = 0
return
}
first := size >> pgBits
if size&pgMask != 0 {
first++
}
last := f.size >> pgBits
if f.size&pgMask != 0 {
last++
}
for ; first < last; first++ {
delete(f.m, first)
}
f.size = size
return
}
// WriteAt implements Filer.
func (f *MemFiler) WriteAt(b []byte, off int64) (n int, err error) {
pgI := off >> pgBits
pgO := int(off & pgMask)
n = len(b)
rem := n
var nc int
for rem != 0 {
if pgO == 0 && rem >= pgSize && bytes.Equal(b[:pgSize], zeroPage[:]) {
delete(f.m, pgI)
nc = pgSize
} else {
pg := f.m[pgI]
if pg == nil {
pg = new([pgSize]byte)
f.m[pgI] = pg
}
nc = copy((*pg)[pgO:], b)
}
pgI++
pgO = 0
rem -= nc
b = b[nc:]
}
f.size = mathutil.MaxInt64(f.size, off+int64(n))
return
}
// WriteTo is a helper to copy/persist MemFiler's content to w. If w is also
// an io.WriterAt then WriteTo may attempt to _not_ write any big (for some
// value of big) runs of zeros, i.e. it will attempt to punch holes, where
// possible, in `w` if that happens to be a freshly created or zero-length
// truncated OS file. 'n' reports the number of bytes written to 'w'.
func (f *MemFiler) WriteTo(w io.Writer) (n int64, err error) {
var (
b [pgSize]byte
wn, rn int
off int64
rerr error
)
if wa, ok := w.(io.WriterAt); ok {
lastPgI := f.size >> pgBits
for pgI := int64(0); pgI <= lastPgI; pgI++ {
sz := pgSize
if pgI == lastPgI {
sz = int(f.size & pgMask)
}
pg := f.m[pgI]
if pg != nil {
wn, err = wa.WriteAt(pg[:sz], off)
if err != nil {
return
}
n += int64(wn)
off += int64(sz)
if wn != sz {
return n, io.ErrShortWrite
}
}
}
return
}
var werr error
for rerr == nil {
if rn, rerr = f.ReadAt(b[:], off); rn != 0 {
off += int64(rn)
if wn, werr = w.Write(b[:rn]); werr != nil {
return n, werr
}
n += int64(wn)
}
}
if !fileutil.IsEOF(rerr) {
err = rerr
}
return
}
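For illustration only (not part of this commit): a MemFiler usage sketch, assuming the package imports as github.com/cznic/lldb. It shows a sparse write, a read-back (ReadAt returns io.EOF when it consumes the final bytes), and a full dump via WriteTo.

package main

import (
	"bytes"
	"fmt"
	"io"

	"github.com/cznic/lldb" // assumed import path
)

func main() {
	f := lldb.NewMemFiler()
	if _, err := f.WriteAt([]byte("hello"), 1<<20); err != nil { // sparse write
		panic(err)
	}
	b := make([]byte, 5)
	if _, err := f.ReadAt(b, 1<<20); err != nil && err != io.EOF {
		panic(err)
	}
	fmt.Printf("%s\n", b) // hello

	var buf bytes.Buffer // not an io.WriterAt, so WriteTo takes the generic path
	if _, err := f.WriteTo(&buf); err != nil {
		panic(err)
	}
	fmt.Println(buf.Len()) // 1<<20 + 5
}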


@ -1,130 +0,0 @@
// Copyright 2014 The lldb Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package lldb
import (
"io"
"os"
"github.com/cznic/mathutil"
)
var _ Filer = (*OSFiler)(nil)
// OSFile is an os.File-like minimal set of methods allowing construction of a
// Filer.
type OSFile interface {
Name() string
Stat() (fi os.FileInfo, err error)
Sync() (err error)
Truncate(size int64) (err error)
io.Closer
io.Reader
io.ReaderAt
io.Seeker
io.Writer
io.WriterAt
}
// OSFiler is like a SimpleFileFiler but based on an OSFile.
type OSFiler struct {
f OSFile
nest int
size int64 // not set if < 0
}
// NewOSFiler returns a Filer from an OSFile. Like SimpleFileFiler, this Filer
// does not implement the transaction-related methods.
func NewOSFiler(f OSFile) (r *OSFiler) {
return &OSFiler{
f: f,
size: -1,
}
}
// BeginUpdate implements Filer.
func (f *OSFiler) BeginUpdate() (err error) {
f.nest++
return nil
}
// Close implements Filer.
func (f *OSFiler) Close() (err error) {
if f.nest != 0 {
return &ErrPERM{(f.Name() + ":Close")}
}
return f.f.Close()
}
// EndUpdate implements Filer.
func (f *OSFiler) EndUpdate() (err error) {
if f.nest == 0 {
return &ErrPERM{(f.Name() + ":EndUpdate")}
}
f.nest--
return
}
// Name implements Filer.
func (f *OSFiler) Name() string {
return f.f.Name()
}
// PunchHole implements Filer.
func (f *OSFiler) PunchHole(off, size int64) (err error) {
return
}
// ReadAt implements Filer.
func (f *OSFiler) ReadAt(b []byte, off int64) (n int, err error) {
return f.f.ReadAt(b, off)
}
// Rollback implements Filer.
func (f *OSFiler) Rollback() (err error) { return }
// Size implements Filer.
func (f *OSFiler) Size() (n int64, err error) {
if f.size < 0 { // boot
fi, err := f.f.Stat()
if err != nil {
return 0, err
}
f.size = fi.Size()
}
return f.size, nil
}
// Sync implements Filer.
func (f *OSFiler) Sync() (err error) {
return f.f.Sync()
}
// Truncate implements Filer.
func (f *OSFiler) Truncate(size int64) (err error) {
if size < 0 {
return &ErrINVAL{"Truncate size", size}
}
f.size = size
return f.f.Truncate(size)
}
// WriteAt implements Filer.
func (f *OSFiler) WriteAt(b []byte, off int64) (n int, err error) {
if f.size < 0 { // boot
fi, err := os.Stat(f.f.Name())
if err != nil {
return 0, err
}
f.size = fi.Size()
}
f.size = mathutil.MaxInt64(f.size, int64(len(b))+off)
return f.f.WriteAt(b, off)
}
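For illustration only (not part of this commit): wrapping a temporary os.File in an OSFiler; *os.File satisfies the OSFile interface above. Import path assumed.

package main

import (
	"fmt"
	"io/ioutil"
	"os"

	"github.com/cznic/lldb" // assumed import path
)

func main() {
	tmp, err := ioutil.TempFile("", "osfiler-")
	if err != nil {
		panic(err)
	}
	defer os.Remove(tmp.Name())

	f := lldb.NewOSFiler(tmp) // *os.File satisfies OSFile
	if _, err := f.WriteAt([]byte("data"), 0); err != nil {
		panic(err)
	}
	sz, err := f.Size()
	if err != nil {
		panic(err)
	}
	fmt.Println(sz) // 4
	if err := f.Close(); err != nil {
		panic(err)
	}
}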


@ -1,123 +0,0 @@
// Copyright 2014 The lldb Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// A basic os.File backed Filer.
package lldb
import (
"os"
"github.com/cznic/fileutil"
"github.com/cznic/mathutil"
)
var _ Filer = &SimpleFileFiler{} // Ensure SimpleFileFiler is a Filer.
// SimpleFileFiler is an os.File backed Filer intended for use where structural
// consistency can be reached by other means (SimpleFileFiler is, for example,
// wrapped in a RollbackFiler or ACIDFiler0) or where persistence is not
// required (temporary/working data sets).
//
// SimpleFileFiler is the simplest os.File backed Filer implementation as it
// does not really implement BeginUpdate and EndUpdate/Rollback in any way
// which would protect the structural integrity of data. If misused, e.g. as
// real database storage without other measures, it can easily cause data loss
// when, for example, a power outage occurs or the updating process terminates
// abruptly.
type SimpleFileFiler struct {
file *os.File
nest int
size int64 // not set if < 0
}
// NewSimpleFileFiler returns a new SimpleFileFiler.
func NewSimpleFileFiler(f *os.File) *SimpleFileFiler {
return &SimpleFileFiler{file: f, size: -1}
}
// BeginUpdate implements Filer.
func (f *SimpleFileFiler) BeginUpdate() error {
f.nest++
return nil
}
// Close implements Filer.
func (f *SimpleFileFiler) Close() (err error) {
if f.nest != 0 {
return &ErrPERM{(f.Name() + ":Close")}
}
return f.file.Close()
}
// EndUpdate implements Filer.
func (f *SimpleFileFiler) EndUpdate() (err error) {
if f.nest == 0 {
return &ErrPERM{(f.Name() + ":EndUpdate")}
}
f.nest--
return
}
// Name implements Filer.
func (f *SimpleFileFiler) Name() string {
return f.file.Name()
}
// PunchHole implements Filer.
func (f *SimpleFileFiler) PunchHole(off, size int64) (err error) {
return fileutil.PunchHole(f.file, off, size)
}
// ReadAt implements Filer.
func (f *SimpleFileFiler) ReadAt(b []byte, off int64) (n int, err error) {
return f.file.ReadAt(b, off)
}
// Rollback implements Filer.
func (f *SimpleFileFiler) Rollback() (err error) { return }
// Size implements Filer.
func (f *SimpleFileFiler) Size() (int64, error) {
if f.size < 0 { // boot
fi, err := os.Stat(f.file.Name())
if err != nil {
return 0, err
}
f.size = fi.Size()
}
return f.size, nil
}
// Sync implements Filer.
func (f *SimpleFileFiler) Sync() error {
return f.file.Sync()
}
// Truncate implements Filer.
func (f *SimpleFileFiler) Truncate(size int64) (err error) {
if size < 0 {
return &ErrINVAL{"Truncate size", size}
}
f.size = size
return f.file.Truncate(size)
}
// WriteAt implements Filer.
func (f *SimpleFileFiler) WriteAt(b []byte, off int64) (n int, err error) {
if f.size < 0 { // boot
fi, err := os.Stat(f.file.Name())
if err != nil {
return 0, err
}
f.size = fi.Size()
}
f.size = mathutil.MaxInt64(f.size, int64(len(b))+off)
return f.file.WriteAt(b, off)
}
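For illustration only (not part of this commit): a SimpleFileFiler sketch (import path assumed). As the comment above warns, there is no rollback protection here; wrap it in a RollbackFiler when structural integrity matters.

package main

import (
	"fmt"
	"io/ioutil"
	"os"

	"github.com/cznic/lldb" // assumed import path
)

func main() {
	tmp, err := ioutil.TempFile("", "sff-")
	if err != nil {
		panic(err)
	}
	defer os.Remove(tmp.Name())

	f := lldb.NewSimpleFileFiler(tmp)
	if _, err := f.WriteAt(make([]byte, 1024), 0); err != nil {
		panic(err)
	}
	if err := f.Truncate(16); err != nil { // shrink; Size() tracks the new length
		panic(err)
	}
	sz, _ := f.Size()
	fmt.Println(sz) // 16
	if err := f.Close(); err != nil {
		panic(err)
	}
}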


@ -1,642 +0,0 @@
// Copyright 2014 The lldb Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Structural transactions.
package lldb
//DONE+ TransactionalMemoryFiler
// ----
// Use NewRollbackFiler(myMemFiler, ...)
/*
bfBits: 3
BenchmarkRollbackFiler 20000000 102 ns/op 9.73 MB/s
bfBits: 4
BenchmarkRollbackFiler 50000000 55.7 ns/op 17.95 MB/s
bfBits: 5
BenchmarkRollbackFiler 100000000 32.2 ns/op 31.06 MB/s
bfBits: 6
BenchmarkRollbackFiler 100000000 20.6 ns/op 48.46 MB/s
bfBits: 7
BenchmarkRollbackFiler 100000000 15.1 ns/op 66.12 MB/s
bfBits: 8
BenchmarkRollbackFiler 100000000 10.5 ns/op 95.66 MB/s
bfBits: 9
BenchmarkRollbackFiler 200000000 8.02 ns/op 124.74 MB/s
bfBits: 10
BenchmarkRollbackFiler 200000000 9.25 ns/op 108.09 MB/s
bfBits: 11
BenchmarkRollbackFiler 100000000 11.7 ns/op 85.47 MB/s
bfBits: 12
BenchmarkRollbackFiler 100000000 17.2 ns/op 57.99 MB/s
bfBits: 13
BenchmarkRollbackFiler 100000000 32.7 ns/op 30.58 MB/s
bfBits: 14
BenchmarkRollbackFiler 50000000 39.6 ns/op 25.27 MB/s
*/
import (
"fmt"
"io"
"sync"
"github.com/cznic/fileutil"
"github.com/cznic/mathutil"
)
var (
_ Filer = &bitFiler{} // Ensure bitFiler is a Filer.
_ Filer = &RollbackFiler{} // ditto
)
const (
bfBits = 9
bfSize = 1 << bfBits
bfMask = bfSize - 1
)
var (
bitmask = [8]byte{1, 2, 4, 8, 16, 32, 64, 128}
bitZeroPage bitPage
allDirtyFlags [bfSize >> 3]byte
)
func init() {
for i := range allDirtyFlags {
allDirtyFlags[i] = 0xff
}
}
type (
bitPage struct {
prev, next *bitPage
data [bfSize]byte
flags [bfSize >> 3]byte
dirty bool
}
bitFilerMap map[int64]*bitPage
bitFiler struct {
parent Filer
m bitFilerMap
size int64
sync.Mutex
}
)
func newBitFiler(parent Filer) (f *bitFiler, err error) {
sz, err := parent.Size()
if err != nil {
return
}
return &bitFiler{parent: parent, m: bitFilerMap{}, size: sz}, nil
}
func (f *bitFiler) BeginUpdate() error { panic("internal error") }
func (f *bitFiler) EndUpdate() error { panic("internal error") }
func (f *bitFiler) Rollback() error { panic("internal error") }
func (f *bitFiler) Sync() error { panic("internal error") }
func (f *bitFiler) Close() (err error) { return }
func (f *bitFiler) Name() string { return fmt.Sprintf("%p.bitfiler", f) }
func (f *bitFiler) Size() (int64, error) { return f.size, nil }
func (f *bitFiler) PunchHole(off, size int64) (err error) {
first := off >> bfBits
if off&bfMask != 0 {
first++
}
off += size - 1
last := off >> bfBits
if off&bfMask != 0 {
last--
}
if limit := f.size >> bfBits; last > limit {
last = limit
}
f.Lock()
for pgI := first; pgI <= last; pgI++ {
pg := &bitPage{}
pg.flags = allDirtyFlags
f.m[pgI] = pg
}
f.Unlock()
return
}
func (f *bitFiler) ReadAt(b []byte, off int64) (n int, err error) {
avail := f.size - off
pgI := off >> bfBits
pgO := int(off & bfMask)
rem := len(b)
if int64(rem) >= avail {
rem = int(avail)
err = io.EOF
}
for rem != 0 && avail > 0 {
f.Lock()
pg := f.m[pgI]
if pg == nil {
pg = &bitPage{}
if f.parent != nil {
_, err = f.parent.ReadAt(pg.data[:], off&^bfMask)
if err != nil && !fileutil.IsEOF(err) {
f.Unlock()
return
}
err = nil
}
f.m[pgI] = pg
}
f.Unlock()
nc := copy(b[:mathutil.Min(rem, bfSize)], pg.data[pgO:])
pgI++
pgO = 0
rem -= nc
n += nc
b = b[nc:]
off += int64(nc)
}
return
}
func (f *bitFiler) Truncate(size int64) (err error) {
f.Lock()
defer f.Unlock()
switch {
case size < 0:
return &ErrINVAL{"Truncate size", size}
case size == 0:
f.m = bitFilerMap{}
f.size = 0
return
}
first := size >> bfBits
if size&bfMask != 0 {
first++
}
last := f.size >> bfBits
if f.size&bfMask != 0 {
last++
}
for ; first < last; first++ {
delete(f.m, first)
}
f.size = size
return
}
func (f *bitFiler) WriteAt(b []byte, off int64) (n int, err error) {
off0 := off
pgI := off >> bfBits
pgO := int(off & bfMask)
n = len(b)
rem := n
var nc int
for rem != 0 {
f.Lock()
pg := f.m[pgI]
if pg == nil {
pg = &bitPage{}
if f.parent != nil {
_, err = f.parent.ReadAt(pg.data[:], off&^bfMask)
if err != nil && !fileutil.IsEOF(err) {
f.Unlock()
return
}
err = nil
}
f.m[pgI] = pg
}
f.Unlock()
nc = copy(pg.data[pgO:], b)
pgI++
pg.dirty = true
for i := pgO; i < pgO+nc; i++ {
pg.flags[i>>3] |= bitmask[i&7]
}
pgO = 0
rem -= nc
b = b[nc:]
off += int64(nc)
}
f.size = mathutil.MaxInt64(f.size, off0+int64(n))
return
}
func (f *bitFiler) link() {
for pgI, pg := range f.m {
nx, ok := f.m[pgI+1]
if !ok || !nx.dirty {
continue
}
nx.prev, pg.next = pg, nx
}
}
func (f *bitFiler) dumpDirty(w io.WriterAt) (nwr int, err error) {
f.Lock()
defer f.Unlock()
f.link()
for pgI, pg := range f.m {
if !pg.dirty {
continue
}
for pg.prev != nil && pg.prev.dirty {
pg = pg.prev
pgI--
}
for pg != nil && pg.dirty {
last := false
var off int64
first := -1
for i := 0; i < bfSize; i++ {
flag := pg.flags[i>>3]&bitmask[i&7] != 0
switch {
case flag && !last: // Leading edge detected
off = pgI<<bfBits + int64(i)
first = i
case !flag && last: // Trailing edge detected
n, err := w.WriteAt(pg.data[first:i], off)
if n != i-first {
return 0, err
}
first = -1
nwr++
}
last = flag
}
if first >= 0 {
i := bfSize
n, err := w.WriteAt(pg.data[first:i], off)
if n != i-first {
return 0, err
}
nwr++
}
pg.dirty = false
pg = pg.next
pgI++
}
}
return
}
// RollbackFiler is a Filer implementing structural transaction handling.
// Structural transactions should be small and short-lived because all
// non-committed data are held in memory until committed or discarded by a
// Rollback.
//
// While using RollbackFiler, every intended update of the wrapped Filer, by
// WriteAt, Truncate or PunchHole, _must_ be made within a transaction.
// Attempts to do it outside of a transaction will return ErrPERM. OTOH,
// invoking ReadAt outside of a transaction is not a problem.
//
// No nested transactions: All updates within a transaction are held in memory.
// On a matching EndUpdate the updates held in memory are actually written to
// the wrapped Filer.
//
// Nested transactions: Correct data will be seen from RollbackFiler when any
// level of a nested transaction is rolled back. The actual writing to the
// wrapped Filer happens only when the outermost transaction nesting level is
// closed.
//
// Invoking Rollback is an alternative to EndUpdate. It discards all changes
// made at the current transaction level and returns the "state" (possibly not
// yet persisted) of the Filer to what it was before the corresponding
// BeginUpdate.
//
// During an open transaction, all reads (using ReadAt) are "dirty" reads,
// seeing the uncommitted changes made to the Filer's data.
//
// Lldb databases should be based upon a RollbackFiler.
//
// With a wrapped MemFiler one gets transactional memory. With, for example, a
// wrapped disk-based SimpleFileFiler it protects against at least some HW
// errors - if Rollback is properly invoked on such failures and/or if there's
// some WAL or 2PC or whatever other safe mechanism based recovery procedure
// used by the client.
//
// The "real" writes to the wrapped Filer (or WAL instead) go through the
// writerAt supplied to NewRollbackFiler.
//
// List of functions/methods which are recommended to be wrapped in a
// BeginUpdate/EndUpdate structural transaction:
//
// Allocator.Alloc
// Allocator.Free
// Allocator.Realloc
//
// CreateBTree
// RemoveBTree
// BTree.Clear
// BTree.Delete
// BTree.DeleteAny
// BTree.Extract
// BTree.Get (it can mutate the DB)
// BTree.Put
// BTree.Set
//
// NOTE: RollbackFiler is a generic solution intended to wrap Filers provided
// by this package which do not implement any of the transactional methods.
// RollbackFiler thus _does not_ invoke any of the transactional methods of its
// wrapped Filer.
//
// RollbackFiler is safe for concurrent use by multiple goroutines.
type RollbackFiler struct {
mu sync.RWMutex
inCallback bool
inCallbackMu sync.RWMutex
bitFiler *bitFiler
checkpoint func(int64) error
closed bool
f Filer
parent Filer
tlevel int // transaction nesting level, 0 == not in transaction
writerAt io.WriterAt
// afterRollback, if not nil, is called after performing Rollback
// without errors.
afterRollback func() error
}
// NewRollbackFiler returns a RollbackFiler wrapping f.
//
// The checkpoint parameter
//
// The checkpoint function is called after closing (by EndUpdate) the
// uppermost-level open transaction if all calls of writerAt were successful
// and the DB (or e.g. a WAL) is thus now in a consistent state (virtually, in
// the ideal world with no write caches, no HW failures, no process crashes, ...).
//
// NOTE: In, for example, a 2PC it is also necessary to reflect the sz
// parameter as the new file size (as in the parameter to Truncate). All
// changes were successfully written already by writerAt before invoking
// checkpoint.
//
// The writerAt parameter
//
// The writerAt interface is used to commit the updates of the wrapped Filer.
// If any invocation of writerAt fails then a non-nil error will be returned
// from EndUpdate and checkpoint will _not_ be called. Nor is it necessary to
// call Rollback. The rule of thumb: The [structural] transaction [level] is
// closed by invoking exactly once one of EndUpdate _or_ Rollback.
//
// It is presumed that writerAt uses WAL or 2PC or whatever other safe
// mechanism to physically commit the updates.
//
// Updates performed by invocations of writerAt are byte-precise, but not
// necessarily of maximum possible length. IOW, for example, an update
// crossing page boundaries may be performed by more than one writerAt
// invocation. No offset sorting is performed. This may change if it proves
// to be a problem. Such change would be considered backward compatible.
//
// NOTE: Using RollbackFiler, but failing to ever invoke a matching "closing"
// EndUpdate after an "opening" BeginUpdate means neither writerAt nor
// checkpoint will ever get called - with all the possible data loss
// consequences.
func NewRollbackFiler(f Filer, checkpoint func(sz int64) error, writerAt io.WriterAt) (r *RollbackFiler, err error) {
if f == nil || checkpoint == nil || writerAt == nil {
return nil, &ErrINVAL{Src: "lldb.NewRollbackFiler, nil argument"}
}
return &RollbackFiler{
checkpoint: checkpoint,
f: f,
writerAt: writerAt,
}, nil
}
// Implements Filer.
func (r *RollbackFiler) BeginUpdate() (err error) {
r.mu.Lock()
defer r.mu.Unlock()
parent := r.f
if r.tlevel != 0 {
parent = r.bitFiler
}
r.bitFiler, err = newBitFiler(parent)
if err != nil {
return
}
r.tlevel++
return
}
// Implements Filer.
//
// Close will return an error if not invoked at nesting level 0. However, to
// allow emergency closing from e.g. a signal handler, if Close is invoked
// within open transaction(s), it rolls back any non-committed open
// transactions and performs the Close operation.
//
// IOW: Regardless of the transaction nesting level the Close is always
// performed but any uncommitted transaction data are lost.
func (r *RollbackFiler) Close() (err error) {
r.mu.Lock()
defer r.mu.Unlock()
if r.closed {
return &ErrPERM{r.f.Name() + ": Already closed"}
}
r.closed = true
if err = r.f.Close(); err != nil {
return
}
if r.tlevel != 0 {
err = &ErrPERM{r.f.Name() + ": Close inside an open transaction"}
}
return
}
// Implements Filer.
func (r *RollbackFiler) EndUpdate() (err error) {
r.mu.Lock()
defer r.mu.Unlock()
if r.tlevel == 0 {
return &ErrPERM{r.f.Name() + " : EndUpdate outside of a transaction"}
}
sz, err := r.size() // Cannot call .Size() -> deadlock
if err != nil {
return
}
r.tlevel--
bf := r.bitFiler
parent := bf.parent
w := r.writerAt
if r.tlevel != 0 {
w = parent
}
nwr, err := bf.dumpDirty(w)
if err != nil {
return
}
switch {
case r.tlevel == 0:
r.bitFiler = nil
if nwr == 0 {
return
}
return r.checkpoint(sz)
default:
r.bitFiler = parent.(*bitFiler)
sz, _ := bf.Size() // bitFiler.Size() never returns err != nil
return parent.Truncate(sz)
}
}
// Implements Filer.
func (r *RollbackFiler) Name() string {
r.mu.RLock()
defer r.mu.RUnlock()
return r.f.Name()
}
// Implements Filer.
func (r *RollbackFiler) PunchHole(off, size int64) error {
r.mu.Lock()
defer r.mu.Unlock()
if r.tlevel == 0 {
return &ErrPERM{r.f.Name() + ": PunchHole outside of a transaction"}
}
if off < 0 {
return &ErrINVAL{r.f.Name() + ": PunchHole off", off}
}
if size < 0 || off+size > r.bitFiler.size {
return &ErrINVAL{r.f.Name() + ": PunchHole size", size}
}
return r.bitFiler.PunchHole(off, size)
}
// Implements Filer.
func (r *RollbackFiler) ReadAt(b []byte, off int64) (n int, err error) {
r.inCallbackMu.RLock()
defer r.inCallbackMu.RUnlock()
if !r.inCallback {
r.mu.RLock()
defer r.mu.RUnlock()
}
if r.tlevel == 0 {
return r.f.ReadAt(b, off)
}
return r.bitFiler.ReadAt(b, off)
}
// Implements Filer.
func (r *RollbackFiler) Rollback() (err error) {
r.mu.Lock()
defer r.mu.Unlock()
if r.tlevel == 0 {
return &ErrPERM{r.f.Name() + ": Rollback outside of a transaction"}
}
if r.tlevel > 1 {
r.bitFiler = r.bitFiler.parent.(*bitFiler)
}
r.tlevel--
if f := r.afterRollback; f != nil {
r.inCallbackMu.Lock()
r.inCallback = true
r.inCallbackMu.Unlock()
defer func() {
r.inCallbackMu.Lock()
r.inCallback = false
r.inCallbackMu.Unlock()
}()
return f()
}
return
}
func (r *RollbackFiler) size() (sz int64, err error) {
if r.tlevel == 0 {
return r.f.Size()
}
return r.bitFiler.Size()
}
// Implements Filer.
func (r *RollbackFiler) Size() (sz int64, err error) {
r.mu.Lock()
defer r.mu.Unlock()
return r.size()
}
// Implements Filer.
func (r *RollbackFiler) Sync() error {
r.mu.Lock()
defer r.mu.Unlock()
return r.f.Sync()
}
// Implements Filer.
func (r *RollbackFiler) Truncate(size int64) error {
r.mu.Lock()
defer r.mu.Unlock()
if r.tlevel == 0 {
return &ErrPERM{r.f.Name() + ": Truncate outside of a transaction"}
}
return r.bitFiler.Truncate(size)
}
// Implements Filer.
func (r *RollbackFiler) WriteAt(b []byte, off int64) (n int, err error) {
r.mu.Lock()
defer r.mu.Unlock()
if r.tlevel == 0 {
return 0, &ErrPERM{r.f.Name() + ": WriteAt outside of a transaction"}
}
return r.bitFiler.WriteAt(b, off)
}
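For illustration only (not part of this commit): transactional memory as noted in the RollbackFiler comment, i.e. a RollbackFiler wrapping a MemFiler. The MemFiler doubles as the writerAt receiving committed pages; using mem.Truncate as the checkpoint is a simplification assumed here, not a prescribed pattern. Import path assumed.

package main

import (
	"fmt"
	"io"

	"github.com/cznic/lldb" // assumed import path
)

func main() {
	mem := lldb.NewMemFiler()
	r, err := lldb.NewRollbackFiler(
		mem,
		func(sz int64) error { return mem.Truncate(sz) }, // checkpoint: reflect the new size
		mem, // writerAt: dirty pages land here on the final EndUpdate
	)
	if err != nil {
		panic(err)
	}

	if err := r.BeginUpdate(); err != nil { // open the transaction
		panic(err)
	}
	if _, err := r.WriteAt([]byte("committed"), 0); err != nil { // held in memory
		panic(err)
	}
	if err := r.EndUpdate(); err != nil { // flush dirty pages, then checkpoint
		panic(err)
	}

	b := make([]byte, 9)
	if _, err := mem.ReadAt(b, 0); err != nil && err != io.EOF {
		panic(err)
	}
	fmt.Printf("%s\n", b) // committed
}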


@ -10,6 +10,10 @@
package zappy
import (
"github.com/cznic/internal/buffer"
)
/*
#include <stdint.h>
@ -109,7 +113,7 @@ func Decode(buf, src []byte) ([]byte, error) {
}
if len(buf) < dLen {
buf = make([]byte, dLen)
buf = *buffer.Get(dLen)
}
d := int(C.decode(C.int(s), C.int(len(src)), (*C.uint8_t)(&src[0]), C.int(len(buf)), (*C.uint8_t)(&buf[0])))


@ -12,6 +12,8 @@ package zappy
import (
"encoding/binary"
"github.com/cznic/internal/buffer"
)
func puregoDecode() bool { return true }
@ -35,7 +37,7 @@ func Decode(buf, src []byte) ([]byte, error) {
}
if len(buf) < dLen {
buf = make([]byte, dLen)
buf = *buffer.Get(dLen)
}
var d, offset, length int


@ -33,5 +33,5 @@ func emitLiteral(dst, lit []byte) (n int) {
// MaxEncodedLen returns the maximum length of a zappy block, given its
// uncompressed length.
func MaxEncodedLen(srcLen int) int {
return 10 + srcLen
return 10 + srcLen + (srcLen+1)/2
}
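To put numbers on the changed bound: for srcLen = 100 the old formula allowed 10 + 100 = 110 bytes, while the new one allows 10 + 100 + (100+1)/2 = 160, presumably to give incompressible input room to expand beyond its source length.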


@ -107,6 +107,8 @@ import (
"encoding/binary" "encoding/binary"
"fmt" "fmt"
"math" "math"
"github.com/cznic/internal/buffer"
)
func puregoEncode() bool { return false }
@ -117,7 +119,7 @@ func puregoEncode() bool { return false }
// It is valid to pass a nil buf.
func Encode(buf, src []byte) ([]byte, error) {
if n := MaxEncodedLen(len(src)); len(buf) < n {
buf = make([]byte, n)
buf = *buffer.Get(n)
}
if len(src) > math.MaxInt32 {


@ -14,6 +14,8 @@ import (
"encoding/binary" "encoding/binary"
"fmt" "fmt"
"math" "math"
"github.com/cznic/internal/buffer"
)
func puregoEncode() bool { return true }
@ -24,7 +26,7 @@ func puregoEncode() bool { return true }
// It is valid to pass a nil buf.
func Encode(buf, src []byte) ([]byte, error) {
if n := MaxEncodedLen(len(src)); len(buf) < n {
buf = make([]byte, n)
buf = *buffer.Get(n)
}
if len(src) > math.MaxInt32 {


@ -124,7 +124,6 @@ Old=Go snappy, new=zappy:
The package builds with CGO_ENABLED=0 as well, but the performance is worse.
$ CGO_ENABLED=0 go test -test.run=NONE -test.bench=. > old.benchcmp
$ CGO_ENABLED=1 go test -test.run=NONE -test.bench=. > new.benchcmp
$ benchcmp old.benchcmp new.benchcmp
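For illustration only (not part of this commit): a round trip through the Encode/Decode API touched above. Passing a nil buf is documented as valid for Encode; the same is assumed for Decode here.

package main

import (
	"bytes"
	"fmt"

	"github.com/cznic/zappy"
)

func main() {
	src := bytes.Repeat([]byte("abcd"), 1000) // highly compressible input

	enc, err := zappy.Encode(nil, src) // nil buf: Encode allocates
	if err != nil {
		panic(err)
	}
	dec, err := zappy.Decode(nil, enc)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(src), len(enc), bytes.Equal(src, dec)) // 4000 <much smaller> true
}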

vendor/manifest vendored

@ -34,7 +34,7 @@
{
"importpath": "github.com/cznic/b",
"repository": "https://github.com/cznic/b",
"revision": "47184dd8c1d2c7e7f87dae8448ee2007cdf0c6c4",
"revision": "bcff30a622dbdcb425aba904792de1df606dab7c",
"branch": "master",
"notests": true
},
@ -86,14 +86,14 @@
{
"importpath": "github.com/cznic/mathutil",
"repository": "https://github.com/cznic/mathutil",
"revision": "38a5fe05cd94d69433fd1c928417834c604f281d",
"revision": "78ad7f262603437f0ecfebc835d80094f89c8f54",
"branch": "master",
"notests": true
},
{
"importpath": "github.com/cznic/ql",
"repository": "https://github.com/cznic/ql",
"revision": "f5e72a6fe84f25e7539555bdfcee8dc440d93894",
"revision": "c81467d34c630800dd4ba81033e234a8159ff2e3",
"branch": "master",
"notests": true
},
@ -114,7 +114,7 @@
{
"importpath": "github.com/cznic/zappy",
"repository": "https://github.com/cznic/zappy",
"revision": "4f5e6ef19fd692f1ef9b01206de4f1161a314e9a",
"revision": "2533cb5b45cc6c07421468ce262899ddc9d53fb7",
"branch": "master",
"notests": true
},