Scenarios

In many backend services, configuration files or dictionary data need to be loaded and refreshed dynamically. Access to this data therefore has to be protected so that concurrent reads and writes remain safe, and a read/write lock is the usual choice. Here is an example using a read/write lock.

Read/Write Locks to Load Data

Using a read/write lock ensures that concurrent access to the data does not cause a data race on the shared state.

import "sync"

type Config struct {
        sync.RWMutex
        data map[string]interface{}
}

// Load replaces the data under the write lock.
func (c *Config) Load() {
        c.Lock()
        defer c.Unlock()

        c.data = c.load()
}

func (c *Config) load() map[string]interface{} {
        // Loading of data
        return make(map[string]interface{})
}

// Get returns the current data under the read lock.
func (c *Config) Get() map[string]interface{} {
        c.RLock()
        defer c.RUnlock()
        return c.data
}
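
For completeness, here is a minimal usage sketch (not from the original article) in which one goroutine refreshes the configuration periodically while several others read it concurrently:

import "time"

func main() {
        config := &Config{}
        config.Load()

        // Writer: reload the data once per second.
        go func() {
                for range time.Tick(time.Second) {
                        config.Load()
                }
        }()

        // Readers: fetch the current data concurrently.
        for i := 0; i < 4; i++ {
                go func() {
                        for {
                                _ = config.Get()
                        }
                }()
        }

        select {} // keep the program running
}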

Dynamic Data Replacement Using Atomic Operations

One characteristic of this kind of workload is that reads are very frequent while updates are relatively rare. In that case, the read/write lock can be replaced with the following approach.

import "sync/atomic"
import "unsafe"

type Config struct {
        data unsafe.Pointer
}

func (c *Config) Load() {
        v := c.load()
        atomic.StorePointer(&c.data, unsafe.Pointer(&v))
}

func (c *Config) load() map[string]interface{} {
        // Loading of data
        return make(map[string]interface{})
}

func (c *Config) Get() map[string]interface{} {
        v := atomic.LoadPointer(&c.data)
        return *(*map[string]interface{})(v)
}

Using atomic operations ensures that when the data is updated, readers that already obtained the old map keep using it, while new reads observe the new pointer. The pointer swap is atomic, so concurrent reads and writes remain safe.
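
A similar effect can be achieved without unsafe.Pointer by storing the map in sync/atomic's Value type (or, since Go 1.19, the generic atomic.Pointer[T]). The following ConfigV3 is an illustrative sketch of that alternative, not code from the original article:

import "sync/atomic"

// ConfigV3 is an illustrative alternative that stores the map in an
// atomic.Value, avoiding unsafe.Pointer entirely.
type ConfigV3 struct {
        data atomic.Value // holds a map[string]interface{}
}

func (c *ConfigV3) Load() {
        // Loading of data; Store publishes the new map atomically.
        c.data.Store(make(map[string]interface{}))
}

func (c *ConfigV3) Get() map[string]interface{} {
        v, _ := c.data.Load().(map[string]interface{})
        return v
}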

Performance Test

Here is a benchmark comparing the two implementations, where ConfigV2 is the version that replaces the map through atomic pointer operations.

import (
        "testing"
        "time"
)

func BenchmarkConfig(b *testing.B) {
        config := &Config{}
        // Reload the data once per second in the background to simulate writes.
        go func() {
                for range time.Tick(time.Second) {
                        config.Load()
                }
        }()

        config.Load()
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
                _ = config.Get()
        }
}

func BenchmarkConfigV2(b *testing.B) {
        config := &ConfigV2{}
        // Reload the data once per second in the background to simulate writes.
        go func() {
                for range time.Tick(time.Second) {
                        config.Load()
                }
        }()

        config.Load()
        b.ResetTimer()
        for i := 0; i < b.N; i++ {
                _ = config.Get()
        }
}

The difference between the two is roughly 40x; the results are as follows:

goos: linux
goarch: amd64
pkg: lpflpf/loaddata
cpu: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz
BenchmarkConfig-32              551491118               21.79 ns/op            0 B/op          0 allocs/op
BenchmarkConfigV2-32            1000000000               0.5858 ns/op          0 B/op          0 allocs/op
PASS
ok      lpflpf/loaddata 14.870s
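
For reference, benchmark results like these can be reproduced with the standard Go benchmark tooling:

go test -bench=. -benchmem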

Summary

  1. For workloads with many reads and few writes, atomic operations can be used instead of read/write locks to ensure concurrency safety.
  2. The higher performance of atomic operations is likely because acquiring and releasing a read lock itself requires additional atomic operations (to be verified; see the note below).
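
As a rough explanation of point 2: in the Go standard library, RLock and RUnlock each perform an atomic add on an internal reader counter, so every read under the read lock costs at least two atomic read-modify-write operations (plus the deferred unlock call), while the atomic version performs only a single atomic pointer load. This is consistent with the measured gap, though it has not been verified here with profiling.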