Redis’ slow query logging feature logs command requests whose execution time exceeds a given threshold, which can then be used to analyze and optimize query performance. In this article, we will analyze how Redis’ slow query log is implemented. Redis provides two configuration options for slow logging. slowlog-log-slower-than: specifies the execution-time threshold, in microseconds, beyond which a command request is logged; the default is 10,000 microseconds.
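As a sketch, these options can be set in redis.conf (or at runtime via CONFIG SET); the other of the two options, slowlog-max-len, caps how many entries the in-memory slow log retains:

```conf
# redis.conf — log any command slower than 10 ms (10,000 microseconds)
slowlog-log-slower-than 10000

# keep at most 128 entries in the in-memory slow log
slowlog-max-len 128
```

Logged entries can then be inspected with `SLOWLOG GET` from redis-cli.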
Redis is an in-memory database that keeps its data in memory in exchange for fast read and write speeds. However, because memory is volatile, data stored in Redis can be lost once the process exits or the hardware fails. To solve the persistence problem, Redis provides two solutions, RDB snapshots and AOF write-operation logging, and this article analyzes the implementation of both. Persistence means backing up the server’s data at a certain point in time to a disk file, so that when the program exits or the server goes down, the data can be recovered on the next restart from the previously persisted file.
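As a minimal sketch, both persistence modes are enabled through redis.conf directives (the threshold values here are illustrative, not recommendations):

```conf
# RDB: snapshot to disk if at least 1 key changed within 900 seconds
save 900 1

# AOF: append every write operation to the log, fsync once per second
appendonly yes
appendfsync everysec
```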
Redis has long been known for its high performance, yet Redis runs as a single thread, which often runs contrary to intuition. So what mechanisms does Redis use to keep up with the huge volume of requests it must process? How “single-threaded” Redis achieves high performance is the main question explored in this article. The word “single-threaded” is in quotes in the title because Redis is single-threaded in the sense
A hash function, also known as a hash algorithm, is a method for creating small numerical “fingerprints” from arbitrary data (files, strings, etc.). A hash algorithm need only map its input into a fixed output range; depending on the usage scenario, hash functions can be divided into cryptographic and non-cryptographic hashes. Cryptographic hashes are considered one-way functions, meaning that it is extremely difficult to work backwards from the output of the hash function to the input data.
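A minimal sketch of the two families in Python, using SHA-256 as the cryptographic hash and CRC-32 as a fast non-cryptographic fingerprint:

```python
import hashlib
import zlib

data = b"hello"

# Cryptographic hash: one-way and collision-resistant (SHA-256),
# suitable for integrity checks and password derivation schemes.
digest = hashlib.sha256(data).hexdigest()

# Non-cryptographic hash: a fast 32-bit fingerprint (CRC-32),
# suitable for checksums and hash tables, not for security.
checksum = zlib.crc32(data)

print(digest)
print(checksum)
```

Both are deterministic: the same input always yields the same fingerprint, but only the cryptographic hash is designed to resist deliberate collision attacks.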
Raft is a consensus protocol based on a message-passing communication model for managing log replication; it allows a group of machines to work as a whole and continue providing service even if some of them fail. Before Raft was proposed, Paxos was the first proven consensus algorithm, but the principles of Paxos were difficult to understand and to engineer. Raft aims to be an easier-to-understand alternative to Paxos and has been shown to provide the same fault tolerance and performance.
We know that the Go team redefined its release cadence in 2015, setting the frequency of major Go releases to twice a year, with release windows in February and August. Go 1.5, the release that implemented the compiler bootstrap, was the first release under this cadence. Generally, the Go team ships releases in the middle of these two windows, but there have been exceptions in the past few years, for example, Go 1.
Argo Rollouts is a Kubernetes Operator implementation that provides more advanced deployment capabilities for Kubernetes, such as blue-green, canary, canary analysis, experimentation, and progressive delivery. It enables automated, GitOps-based incremental delivery for cloud-native applications and services. The following features are supported:
- Blue-green update strategy
- Canary update strategy
- More fine-grained, weighted traffic splitting
- Automatic rollback
- Manual judgment
- Customizable metric queries and business KPI analysis
- Ingress controller integration: NGINX, ALB
- Service mesh
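As a minimal sketch of what a canary policy looks like (the resource name and image are hypothetical; fields follow the rollouts.argoproj.io/v1alpha1 API):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo
spec:
  replicas: 5
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: example/demo:v2
  strategy:
    canary:
      steps:
      - setWeight: 20        # shift 20% of traffic to the new version
      - pause: {duration: 1m}
      - setWeight: 60
      - pause: {}            # wait for manual judgment before full rollout
```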
Since version 1.12, Docker has provided a native health check implementation. The simplest container health check is the process-level check, which verifies whether the process is alive: the Docker daemon automatically monitors the PID 1 process in the container and, if a restart policy is specified in the docker run command, restarts a container that has exited. In many practical scenarios, however, the process-level health check mechanism is not enough.
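An application-level check can be declared in the image itself via the HEALTHCHECK instruction; a minimal sketch, assuming a hypothetical web service that answers on port 80:

```dockerfile
FROM nginx:alpine

# Mark the container unhealthy if the root URL stops responding:
# probed every 30s, each probe times out after 3s, and three
# consecutive failures flip the status from healthy to unhealthy.
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD wget -q -O /dev/null http://localhost/ || exit 1
```

The resulting status (starting, healthy, unhealthy) is visible in `docker ps` and `docker inspect`.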
BoltDB is an embedded key/value database implemented in Go, with the goal of providing a simple, fast, and reliable embedded database for projects that do not require a full database server such as Postgres or MySQL. BoltDB is used as the underlying database in projects such as etcd, Bitcoin, etc. This article provides a brief analysis of the design principles of BoltDB. BoltDB has been archived by its original author, so the version analyzed in this article is the fork maintained by the etcd team: etcd-io/bbolt.
Use PromQL to query the error budget consumed over the past month, and then display the current SLI; the effect is shown in the figure below. The difficulty of this query is that PromQL operates on entire time series of values. For example, the query memory > 0.6 returns, for every time series satisfying the condition, the matching timestamps and values.
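A sketch of such a query, assuming a hypothetical http_requests_total counter and a 99.9% availability SLO over 30 days (the metric name, label, and SLO target are assumptions, not from the article):

```promql
# Fraction of the 30-day error budget already consumed:
# (observed error ratio) / (allowed error ratio)
(
  sum(rate(http_requests_total{code=~"5.."}[30d]))
  /
  sum(rate(http_requests_total[30d]))
)
/ (1 - 0.999)
```

A result of 1.0 means the entire monthly budget is spent; the corresponding SLI is simply the inner success ratio, 1 minus the error ratio.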
Linkerd is a fully open source service mesh implementation for Kubernetes. It makes running services easier and safer by providing runtime debugging, observability, reliability, and security, all without requiring any changes to your code. Linkerd works by installing a set of ultralight, transparent proxies next to each service instance. These proxies automatically handle all traffic to and from the service. Because they are transparent, they act as highly instrumented out-of-process network stacks, sending telemetry data to, and receiving control signals from, the control plane.
MySQL is a widely used relational database, and understanding MySQL’s internal mechanisms and architecture helps us better solve the problems we encounter when using it. I have therefore read a number of books and materials on the MySQL InnoDB storage engine and summarize them in this article. MySQL architecture: two terms that are easily confused in the database world are database and instance. As common database
Inappropriate indexes are the most common cause of poor performance in relational database systems. Common situations include having too few indexes, SELECT statements with no usable index, and index columns in the wrong order. Some developers believe that if a SQL statement uses an index, its query performance will necessarily improve greatly, and that professional index design should be left to the DBA. In fact, we can design efficient indexes ourselves as long as we understand how the database handles the work internally.
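To illustrate the column-order point, a hypothetical example (the table and column names are assumptions): a composite index serves queries that filter on its leftmost columns, so the order in which columns are listed matters:

```sql
-- Serves: WHERE customer_id = ? AND order_date > ?
-- Does NOT serve a query filtering on order_date alone,
-- because order_date is not the leftmost column of the index.
CREATE INDEX idx_customer_date ON orders (customer_id, order_date);
```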
CHAR and VARCHAR are the data types commonly used to store strings in MySQL. The official documentation gives the maximum length of CHAR as 255 and the maximum length of VARCHAR as 65,535. In practice, however, the actual maximum length with which a VARCHAR column can be created turns out to vary. This article will analyze this issue. Suppose we enter the table-creation statement CREATE TABLE test (a VARCHAR(65535) NOT NULL) CHARSET=latin1;
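As a sketch of the documented behavior (the 65,535-byte limit applies to the whole row, and exact error text varies by version): the statement above fails because the 2-byte length prefix of the VARCHAR must also fit within the row, so with the single-byte latin1 charset the largest length that succeeds for this lone NOT NULL column is 65,533:

```sql
-- Fails: 65,535 data bytes + 2-byte length prefix exceeds the
-- 65,535-byte maximum row size (typically ER_TOO_BIG_ROWSIZE).
CREATE TABLE test (a VARCHAR(65535) NOT NULL) CHARSET=latin1;

-- Succeeds: 65,533 + 2 = exactly 65,535 bytes.
CREATE TABLE test (a VARCHAR(65533) NOT NULL) CHARSET=latin1;
```

With multi-byte charsets such as utf8mb4, or with additional columns in the table, the attainable maximum shrinks further.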
The IEEE Standard for Binary Floating-Point Arithmetic (IEEE 754) has been the most widely used standard for floating-point arithmetic since the 1980s and is implemented by many CPUs and floating-point units. However, this floating-point representation also introduces certain accuracy problems, which we will discuss. IEEE 754 defines four precision formats, of which single-precision and double-precision floating point are the most commonly used; most programming languages today, such as C,
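The classic demonstration of this accuracy problem, as a minimal sketch in Python (whose float type is an IEEE 754 double):

```python
import math

# IEEE 754 double precision cannot represent 0.1 or 0.2 exactly
# in binary, so their sum is not exactly 0.3.
total = 0.1 + 0.2
print(total)         # 0.30000000000000004
print(total == 0.3)  # False

# Compare floats with a tolerance instead of exact equality.
print(math.isclose(total, 0.3))  # True
```

This is why equality comparisons on floating-point results should use a tolerance rather than ==.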
I’ve been running a five-day Kubernetes training at an enterprise for the past few weeks, with a lab environment based entirely on the lightweight MicroK8s. But the course didn’t start well: there were about 20 people in the class, all on the company network and all sharing the same outbound IP, so they promptly ran into the annoying Docker Hub rate limit problem.
The Python language is designed to make complex tasks simple, so it iterates relatively quickly, and we need to keep up with it! Each new release brings new features and functions, and the first thing to do before upgrading is to understand these changes, so that we can use them flexibly in our programming later. Can’t wait? Then let’s start now!
ZooKeeper is a classic distributed data consistency solution, dedicated to providing a high-performance, highly available distributed coordination service with strict sequential access control. In the previous article, Principles and Implementation of etcd, a Distributed Key-Value Store, we learned the implementation principles of the key modules of the distributed coordination service etcd; in this article we look at how ZooKeeper is implemented. ZooKeeper was created at Yahoo and uses the Zab protocol, a consensus algorithm designed specifically for the service.
etcd is an open source project initiated by CoreOS to build a highly available distributed key-value storage system. etcd can be used to store critical data and to implement distributed coordination and configuration services, and it plays a key role in modern cluster operation. etcd is a distributed key-value storage service based on the Raft consensus algorithm. Its structure is modular: the Raft module handles distributed consensus, the WAL module handles data persistence, and the MVCC module handles state-machine storage.