In complex distributed systems, it is often necessary to uniquely identify large volumes of data and messages. Once data has been sharded across multiple databases and tables, the database's auto-increment ID can obviously no longer identify a record on its own, so a system that can generate globally unique IDs becomes necessary. To summarize, what does a business system require of its ID numbers?
- Global uniqueness: No duplicate ID numbers can appear.
- Incremental trend: MySQL's InnoDB engine uses a clustered index, and since most RDBMSs store index data in a B+-tree, ordered primary keys should be chosen whenever possible to preserve write performance.
- Monotonically increasing: guarantee that the next ID is always greater than the previous one; needed by special scenarios such as transaction version numbers, IM incremental messages, and sorting.
- Information security: if IDs are consecutive, malicious crawling becomes trivial: an attacker just downloads the target URLs in order. Consecutive order numbers are even more dangerous, since a competitor can read off the daily order volume directly. Some application scenarios therefore require IDs to be irregular and unpredictable.
Take the business-side requirements for order numbers as an example.
- Order number cannot be duplicated
- Order numbers follow no discernible rule: the encoding must not embed any data related to the company's operations, outsiders must not be able to infer order volume from an order ID, and the IDs must not be enumerable.
- Order number length is fixed and not too long
- Easy to read and communicate, with no confusable mixes of digits and letters (e.g. 0 vs O, 1 vs l)
- Can be generated quickly
Common globally unique ID schemes are surveyed below.
Database auto-increment ID
- No coding required
- Large tables cannot be split horizontally, or inserts and deletes will run into ID conflicts
- Need to add transaction mechanism for inserting data under high concurrency
- When inserting parent and child tables (related tables) in business operations, the parent table must be inserted first, and then the child table
Redis auto-increment IDs
Using Redis to generate distributed IDs is much like using MySQL auto-increment: the Redis INCR command atomically increments a counter and returns the new value.
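A minimal sketch of this INCR pattern (the key name `order:id` is illustrative, not from the original text). In production this would be redis-py's `r.incr(...)`; here an in-memory stand-in mimics INCR's atomic increment-and-return so the flow can be shown without a running Redis server:

```python
import threading


class FakeRedis:
    """In-memory stand-in mimicking Redis INCR's atomic semantics."""

    def __init__(self):
        self._lock = threading.Lock()
        self._counters = {}

    def incr(self, key: str) -> int:
        # Like Redis INCR: atomically add 1 and return the new value.
        with self._lock:
            self._counters[key] = self._counters.get(key, 0) + 1
            return self._counters[key]


r = FakeRedis()  # in production: redis.Redis(host="...", port=6379)


def next_order_id() -> int:
    return r.incr("order:id")  # atomic increment-and-return
```

Because INCR is atomic on the server side, concurrent callers can never receive the same value.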
Redis supports both RDB and AOF persistence.
- RDB persistence takes a snapshot at intervals. If Redis crashes after the counter has been incremented several times but before the next snapshot, IDs will be duplicated after Redis restarts.
- AOF persistence logs every write command, so a crash causes no ID duplication, but restart and recovery take a long time because every INCR command must be replayed.
- Ordered incremental, readable.
- Bandwidth-hungry: every ID requires a round trip to Redis.
Timestamp + Random Number
- Simple coding
- Random numbers can collide within the same timestamp, so each insert into the database must first check whether the value already exists.
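For illustration, a sketch of this scheme (the digit counts are assumptions, not from the text): a millisecond timestamp concatenated with a 6-digit random suffix. The suffix can collide within the same millisecond, which is exactly the duplication problem described above:

```python
import random
import time


def ts_random_id() -> str:
    # 13-digit millisecond timestamp + 6-digit random suffix (illustrative sizes).
    # Two calls within the same millisecond can still collide on the suffix,
    # so a uniqueness check is required before inserting into the database.
    return f"{int(time.time() * 1000)}{random.randint(0, 999999):06d}"
```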
Timestamp + Member ID
- Relies on the assumption that no user places two orders at the same instant
- Member IDs also leak operational data, and a chicken-and-egg problem arises: the member ID itself needs a unique-ID scheme to be generated
A standard UUID (Universally Unique Identifier) contains 32 hexadecimal digits, divided by hyphens into five groups in the 8-4-4-4-12 form, 36 characters in total, e.g. 550e8400-e29b-41d4-a716-446655440000. The industry has so far defined five ways to generate UUIDs; see the IETF specification, A Universally Unique IDentifier (UUID) URN Namespace (RFC 4122), for details.
- Very high performance: locally generated, no network consumption.
- Not user friendly
- Not easy to store: a UUID is 16 bytes, 128 bits, usually represented as a 36-character string, which is too long for many scenarios.
- Information insecurity: the algorithm that generates UUIDs from the MAC address can leak the MAC address, a property that was famously used to locate the creator of the Melissa virus.
- As a primary key, UUIDs are problematic in certain environments; as a DB primary key, for example, a UUID is a poor fit:
- MySQL's official recommendation is explicit: keep the primary key as short as possible, and a 36-character UUID does not qualify.
- Bad for MySQL indexes: used as a database primary key, the unordered nature of UUIDs causes frequent data-page relocations under the InnoDB engine, which seriously hurts performance.
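The trade-offs above can be seen directly with Python's standard `uuid` module: version 1 embeds time and MAC address, version 4 is fully random:

```python
import uuid

u1 = uuid.uuid1()  # version 1: timestamp + MAC address (may leak the MAC)
u4 = uuid.uuid4()  # version 4: fully random, hence unordered as a primary key

# Both render in the 8-4-4-4-12 form: 36 characters, 128 bits.
assert len(str(u1)) == len(str(u4)) == 36
```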
A popular MySQL-based UUID variant rearranges the UUID into timestamp + machine number + random, as described in [GUID/UUID Performance Breakthrough](http://mysql.rjweb.org/doc.php/uuid).
- Lower development cost
- Poor performance based on MySQL stored procedures
Also, with the arrival of MySQL's UUID_TO_BIN(str, swap_flag) function in 8.0, the above hand-rolled implementations are no longer as relevant.
Snowflake is a globally unique ID generation scheme open-sourced by Twitter. Its background is simple: to keep up with the tens of thousands of messages requested per second, each Twitter message needed a unique ID. The core of the Snowflake algorithm combines a timestamp, a worker machine id, and a sequence number: 41 bits of millisecond time + 10 bits of machine ID + 12 bits of sequence within the millisecond.
In this 64-bit layout, the first bit is unused (it can serve as the sign bit of a signed long), the next 41 bits hold the millisecond timestamp, then come 5 bits of datacenter ID and 5 bits of machine ID (which in practice may identify a process or thread rather than a physical machine), and finally 12 bits counting within the current millisecond, adding up to exactly 64 bits for a Long.
Except for the highest bit, which is reserved, the sizes of the remaining three bit groups can be adjusted to fit specific business requirements. With the default layout, the 41-bit timestamp supports the algorithm for about 69 years past its custom epoch, the 10-bit worker id supports 1024 machines, and the 12-bit sequence supports 4096 IDs generated within one millisecond.
Strictly speaking, the worker-id bit segment can operate at the process level: at the machine level a MAC address can uniquely mark a machine, while at the process level IP + path can distinguish worker processes. With few worker machines, setting this id in a configuration file is a fine choice; with many machines, maintaining those configuration files becomes a disaster.
The solution is a dedicated worker-id assignment service; even a simple self-written process that records assigned ids will do.
A worker process talks to the worker-id allocator only once, at startup; it can then write the allocated id to a local file and read it back directly on the next start. The worker-id bit segment can also be split further, for example using the first 5 bits for a process id and the last 5 bits for a thread id.
The sequence number is a self-incrementing counter (an atomic counter is recommended under multi-threading) that assigns ids to multiple messages within the same millisecond; if the sequence is exhausted within a millisecond, the generator waits until the next millisecond.
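The layout and behavior described above can be sketched as follows. This is a simplified, single-process illustration: the epoch constant is Twitter's published value, and the datacenter/worker split is merged into one 10-bit worker id:

```python
import threading
import time


class Snowflake:
    """Sketch of the Snowflake layout: 41-bit ms timestamp,
    10-bit worker id, 12-bit per-millisecond sequence."""

    EPOCH = 1288834974657  # Twitter's custom epoch (Nov 2010), in ms

    def __init__(self, worker_id: int):
        assert 0 <= worker_id < 1024  # must fit in 10 bits
        self.worker_id = worker_id
        self.sequence = 0
        self.last_ts = -1
        self.lock = threading.Lock()

    def _now_ms(self) -> int:
        return int(time.time() * 1000)

    def next_id(self) -> int:
        with self.lock:
            ts = self._now_ms()
            if ts < self.last_ts:
                # Clock moved backwards: refuse rather than issue duplicates.
                raise RuntimeError("clock moved backwards; refusing to issue ids")
            if ts == self.last_ts:
                self.sequence = (self.sequence + 1) & 0xFFF  # 12-bit mask
                if self.sequence == 0:
                    # All 4096 ids in this millisecond used: spin to the next ms.
                    while ts <= self.last_ts:
                        ts = self._now_ms()
            else:
                self.sequence = 0
            self.last_ts = ts
            return ((ts - self.EPOCH) << 22) | (self.worker_id << 12) | self.sequence
```

Because the millisecond timestamp sits in the high bits and the sequence in the low bits, successive ids from one worker are strictly increasing.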
Snowflake generates IDs by partitioning a namespace (UUID schemes count as this too; they were analyzed separately because they are so common): the 64 bits are divided into segments of timestamp + worker number + sequence number, each indicating the time, the machine, and so on.
- The milliseconds are at the high level, the self-incrementing sequence is at the low level, and the entire ID is trending incrementally.
- It does not depend on third-party systems such as databases, and is deployed as a service, which is more stable and generates IDs with very high performance.
- It is very flexible as you can assign bit bits according to your business characteristics.
- Strong dependence on the machine clock: if the clock is dialed back, duplicate IDs may be issued or the service becomes unavailable.
- Requires introducing ZooKeeper and an independent, dedicated Snowflake server
Besides the official Scala version, concrete implementations include:
- Java version: https://github.com/sumory/uc/blob/master/src/com/sumory/uc/id/IdWorker.java
- Go language version: https://github.com/sumory/idgen
- Python version: https://github.com/erans/pysnowflake
Snowflake's algorithm is good and not complicated in itself; the complexity lies in how clients use it. The problems encountered are as follows.
- The code is deployed on different servers, how to set the machine ID in the middle, is there a more convenient way to get the machine ID?
- The whole algorithm relies on the clock moving forward continuously, but in real environments online servers all run NTP, and NTP corrections can cause the clock to move backwards.
The process by which Snowflake generates a unique ID:
- 10 bits of machine number, obtained from a Zookeeper cluster when the ID-assigning worker starts (to ensure that all workers do not have duplicate machine numbers)
- 41 bits of Timestamp: Each time a new ID is generated, the current Timestamp is fetched, and the sequence number is generated in two cases:
- If the current Timestamp equals the Timestamp of the previously generated ID (same millisecond), the previous ID's sequence number + 1 becomes the new sequence number (12 bits); if all IDs in this millisecond are used up, wait until the next millisecond to continue (during this wait, no new IDs can be assigned)
- If the current Timestamp is larger than the previous ID’s Timestamp, a random initial sequence number (12 bits) is generated as the first sequence number in this millisecond
The whole process is decentralized: apart from the external dependency at worker startup (fetching a worker number from Zookeeper), each worker then operates independently.
- What if, when fetching the current timestamp, the value is smaller than the timestamp of the previously generated ID? Snowflake's approach is to keep reading the machine's clock until a larger Timestamp is obtained (while waiting, no new IDs can be assigned).
As this exception shows, if the clocks of the machines running Snowflake drift apart significantly, the whole system cannot work properly (the larger the deviation, the longer the wait before a new ID can be assigned). Snowflake's official documentation (https://github.com/twitter/snowflake/#system-clock-dependency) explicitly requires: “You should use NTP to keep your system clock accurate.” Ideally, NTP should be configured in a mode that never steps the clock backwards, i.e., NTP slews rather than steps the time when correcting it.
Solving time synchronization problems
Options for addressing the time problems described above.
Using this approach requires attention to the following:
- In order to keep the growing trend, avoid the time of some servers early and some servers late, you need to control the time of all servers, and avoid the NTP time server to call back the time of the server.
- When crossing into a new millisecond, the sequence number always resets to 0, which over-represents IDs with sequence 0 and makes the generated IDs unevenly distributed after taking a modulus; so instead of resetting to 0 each time, reset the sequence to a random number between 0 and 9.
- It is best to run this CustomUUID class as a single instance within the system.
Solving distributed deployments
There are some variants of Snowflake, and individual applications have made some changes to Snowflake to suit their actual scenarios.
- ID length extended to 128 bits
- Up to 64 bits timestamp
- Then 48 bits of Worker number (the same length as a MAC address)
- Finally, a 16 bits Seq Number
- Since it uses 48 bits as the Worker ID, the same length as the Mac address, there is no need to communicate with Zookeeper to get the Worker ID at startup.
- It is based on Erlang.
The purpose is to lower the collision probability with more bits, so that more workers can run at the same time and more IDs can be allocated per millisecond.
The idea of Simpleflake is to eliminate the worker number, keep the 41 bits timestamp, and extend the sequence number to 22 bits.
Features of Simpleflake.
- sequence number is generated completely at random (this also leads to duplicate IDs)
- There is no worker number, so there is no need to communicate with Zookeeper, which makes it completely decentralized.
- Timestamp remains the same as Snowflake, so you can seamlessly upgrade to Snowflake in the future
The problem with Simpleflake is that the completely random sequence number can produce duplicate IDs, and the probability of duplicates grows with the number of IDs generated per second.
So Simpleflake's limitation is that it cannot generate many IDs per second (preferably fewer than 100/sec; above that rate Simpleflake is not applicable and switching back to Snowflake is recommended).
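Simpleflake's layout is small enough to sketch in a couple of lines: the 41-bit millisecond timestamp goes in the high bits and 22 random bits fill the rest:

```python
import random
import time


def simpleflake() -> int:
    # 41-bit ms timestamp (high bits) | 22 random bits (low bits).
    # No worker number: fully decentralized, but two IDs generated in the
    # same millisecond collide whenever their random 22 bits match.
    return (int(time.time() * 1000) << 22) | random.getrandbits(22)
```

Because the timestamp occupies the same high bits as in Snowflake, ids from this scheme remain roughly time-ordered and a later upgrade to Snowflake keeps them comparable.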
MongoDB's official ObjectID can be regarded as a snowflake-like method: “time + machine code + pid + increment” totals 12 bytes, laid out as 4 + 3 + 2 + 3, and finally rendered as a 24-character hexadecimal string. Compared with snowflake it is worse in both length and readability.
uid-generator uses snowflake but differs in how the machine id, also called workId, is produced: uid-generator generates the workId automatically, with Docker deployments in mind. The default policy assigns the workId from a database when the application starts: at startup the application inserts a row (uid-generator requires an extra WORKER_NODE table) consisting of host and port, and the unique id returned by the successful insert becomes that machine's workId. In uid-generator the workId occupies 22 bits, the timestamp 28 bits, and the sequence 13 bits. Note that, unlike original snowflake, the time unit is seconds rather than milliseconds, and the workId differs across restarts, since the same application consumes a new workId on every restart.
Leaf, by Meituan, is also a distributed ID generation framework. It is quite comprehensive, supporting both the number-segment mode and the snowflake mode.
The number-segment mode can be understood as bulk acquisition: if the DistributIdService can fetch many IDs from the database at once and cache them locally, the efficiency with which business applications obtain IDs improves greatly. For example, each time the DistributIdService hits the database it obtains a range such as (1, 1000], representing 1000 IDs; while serving requests it simply increments locally from 1 instead of querying the database every time, and only goes back to the database for the next range once the local counter reaches 1000, i.e., when the current segment is used up.
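A sketch of this segment mode (the class name and the range size of 1000 are illustrative; a real implementation would fetch ranges from a database table inside a transaction):

```python
class SegmentIdAllocator:
    """Fetch a range like (0, 1000] from the database once,
    then serve ids locally until the segment runs out."""

    def __init__(self, fetch_segment):
        self._fetch = fetch_segment  # returns (start_exclusive, end_inclusive)
        self._cur = 0
        self._max = 0

    def next_id(self) -> int:
        if self._cur >= self._max:  # segment exhausted: go back to the DB
            self._cur, self._max = self._fetch()
        self._cur += 1
        return self._cur


# Stand-in for the database step: hands out consecutive ranges of 1000.
_state = {"max": 0}


def fake_db_fetch():
    start = _state["max"]
    _state["max"] += 1000
    return start, _state["max"]
```

Used as `SegmentIdAllocator(fake_db_fetch)`, the allocator touches the "database" only once per thousand ids; everything in between is a local increment.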
The main difference between Leaf's snowflake mode and the original snowflake algorithm is in the generation of the workId, which Leaf derives from ZooKeeper sequential ids.
Flickr's database auto-increment
Flickr is using something called Ticket Servers. It is implemented using pure MySQL.
Insert a record first, then use REPLACE INTO to obtain the id
- Very simple, using the existing database system functions to achieve, small cost, with DBA professional maintenance.
- ID number is monotonically self-increasing, which can realize some business with special requirements for ID.
- Strong dependence on the DB: when the DB is down the whole system is unavailable, which is fatal. Configuring master-slave replication increases availability as much as possible, but data consistency is hard to guarantee in edge cases; an inconsistent master-slave switchover may lead to duplicate ID issuance.
- The ID issuance performance bottleneck is limited to the read and write performance of a single MySQL.
To relieve the MySQL performance bottleneck, the following scheme works: deploy several machines, each with a different initial value and a step size equal to the number of machines. For example, with two machines, TicketServer1 starts at 1 (issuing 1, 3, 5, 7, 9, 11…) and TicketServer2 starts at 2 (issuing 2, 4, 6, 8, 10…).
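The step-size scheme is easy to sketch: with N servers, server k starts at k and always adds N, so the value streams never overlap. (In real MySQL this is configured with the auto_increment_offset and auto_increment_increment variables; the generators below just stand in for each server's auto-increment column.)

```python
def ticket_stream(initial: int, step: int):
    # Stand-in for one ticket server's auto-increment column:
    # start at `initial`, always add `step`.
    value = initial
    while True:
        yield value
        value += step


server1 = ticket_stream(1, 2)  # issues 1, 3, 5, 7, ...
server2 = ticket_stream(2, 2)  # issues 2, 4, 6, 8, ...
```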
Instagram Stored Procedures
The Instagram ID generation is simply described as 41b ts + 13b shard id + 10b increment seq.
- 41 bits: Timestamp (milliseconds)
- 13 bits: code number of each logic shard (maximum 8 * 1024 logic shards supported)
- 10 bits: sequence number, each Shard can generate up to 1024 IDs per millisecond
The implementation is as follows.
- first divide each Table into logical shards, the number of logical shards can be very large, for example, 2000 logical shards
- Then fix a rule specifying which database instance each logical shard lives on. There need not be many instances: with 2 PostgreSQL instances (Instagram uses PostgreSQL), the rule can be that odd-numbered logical shards go to the first instance and even-numbered ones to the second.
- Specify one field per Table as the slice field (e.g. for user tables you can specify uid as the slice field)
- When inserting a new data, first decide which logical shard the data is assigned to based on the value of the shard field.
- Then, based on the correspondence between the logic shard and the PostgreSQL instance, determine which PostgreSQL instance the data should be stored on
The 41-bit Timestamp is generated much as in Snowflake; this section mainly describes how the 13-bit logic shard code and the 10-bit sequence number are produced.
Logic shard code:
- Suppose a new user record is inserted, and when inserting, the uid is used to determine which logic shard the record should be inserted into.
- Assume that the current record to be inserted will be inserted into logic shard number 1341 (assuming that there are 2000 logic shards in this Table)
- The 13 bits of the newly generated ID will be filled with the number 1341
The sequence number is generated using the auto-increment sequence on each PostgreSQL Table.
- If the current table already has 5000 rows, the next auto-increment sequence value for this table is 5001 (obtainable directly via the sequence functions provided in PL/pgSQL)
- Then this 5001 is modulo 1024 to get a sequence number of 10 bits
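Putting the three pieces together (the epoch constant below is the one used in Instagram's published PL/pgSQL example; the function name is illustrative):

```python
import time

EPOCH_MS = 1314220021721  # epoch used in Instagram's published example


def instagram_id(shard_id: int, table_seq: int, now_ms=None) -> int:
    # 41 bits timestamp | 13 bits logic-shard id | 10 bits (table seq mod 1024)
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    return ((now_ms - EPOCH_MS) << 23) | (shard_id << 10) | (table_seq % 1024)
```

For the example in the text, logic shard 1341 with next sequence value 5001 fills the low 23 bits with `1341 << 10 | (5001 % 1024)`.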
The advantages of the Instagram solution:
- Using the logic Shard number to replace the Worker number used by Snowflake, there is no need to go to the central node to get the Worker number, so it is completely decentralized.
- You can directly know which logic shard the record is stored on by its ID
- When you do data migration in the future, you can also do data migration by logic shard, which will not affect the future data migration.
For more details, see: Sharding & IDs at Instagram
- Low development costs
- Based on PostgreSQL stored procedures, so generality is poor