Building flat container networks in Docker

There are many articles on the web about building flat container networks for k8s, but they are mostly aimed at large-scale clusters; nowadays many people also use Docker to deploy services on a NAS or home server. This article focuses on how to use Docker to build a flat network that gives containers cross-host connectivity. With the release of Docker in 2013, container technology began to spread through major Internet companies. Containers not only serve these companies' online business, but also make it much easier for developers to spin up test environments and third-party dependency services.

Rapid deployment of Cilium BGP environments using Containerlab + Kind

1 Prerequisite knowledge 1.1 Introduction to Cilium. Cilium is a Kubernetes CNI plugin built on eBPF; its official website positions it as dedicated to providing a range of eBPF-based networking, observability, and security solutions for container workloads. Cilium implements these networking, observability, and security features by using eBPF to dynamically insert control logic into the Linux kernel, logic that can be applied and updated without modifying application code or container configuration.

Kubernetes - Container Orchestration Engine (Resource Management)

What is a resource? Everything in K8s is abstracted as a resource, and an instantiated resource is called an object. List of resource types: Workloads: Pod, ReplicaSet, Deployment, StatefulSet, DaemonSet, Job, CronJob (ReplicationController is deprecated since v1.11); Service discovery and load balancing: Service, Ingress, …; Storage: Volume, CSI; Configuration: ConfigMap, Secret, DownwardAPI; Cluster: Namespace, Node, Role, ClusterRole, RoleBinding, ClusterRoleBinding; Metadata: HPA, PodTemplate, LimitRange. What is a resource manifest?

Streams in gRPC

HTTP/2 has two core concepts, the stream and the frame. A frame is the smallest transmission unit in HTTP/2; a request or response is usually split into one or more frames for transmission. A stream is a virtual channel within an established connection that can carry multiple requests or responses, and each frame carries a Stream Identifier marking the stream it belongs to. HTTP/2 achieves multiplexing through streams and frames: multiple streams can be interleaved concurrently over a single TCP connection.
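gRPC's streaming API sits directly on top of these HTTP/2 streams. As a minimal sketch, assuming a hypothetical proto service with rpc ListItems(ListRequest) returns (stream Item) compiled into a pb package, a Go server-streaming handler could look like this:

// Each Send marshals one message into HTTP/2 DATA frames carried on the
// stream opened for this RPC; returning from the handler ends the RPC and
// closes the stream.
func (s *catalogServer) ListItems(req *pb.ListRequest, stream pb.Catalog_ListItemsServer) error {
	for i := 0; i < 5; i++ {
		if err := stream.Send(&pb.Item{Id: int32(i)}); err != nil {
			return err
		}
	}
	return nil
}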

gRPC's interceptor

In web services, in addition to the actual business code, you often need to implement unified request logging, permission management, exception handling, and similar features. In web frameworks such as Gin or Django these are implemented through middleware, while gRPC uses interceptors to intercept and process RPC requests and responses. Both the gRPC server and the client can implement their own interceptors, one flavor for each of the two kinds of RPC: unary and streaming.
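As a hedged sketch rather than the article's own example, a unary server interceptor in Go that logs every request could look like this (imports: context, log, time, and google.golang.org/grpc):

// loggingInterceptor runs around every unary RPC: it times the call to the
// real handler and logs the method name and outcome.
func loggingInterceptor(
	ctx context.Context,
	req interface{},
	info *grpc.UnaryServerInfo,
	handler grpc.UnaryHandler,
) (interface{}, error) {
	start := time.Now()
	resp, err := handler(ctx, req) // invoke the actual RPC handler
	log.Printf("method=%s duration=%s err=%v", info.FullMethod, time.Since(start), err)
	return resp, err
}

// Registered once when constructing the server:
//   s := grpc.NewServer(grpc.UnaryInterceptor(loggingInterceptor))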

What is the difference between println and fmt.Println in Golang

1. The output location is different. Looking at the source code of fmt.Println, it is obvious from both the comments and the code that it writes to standard output, i.e. the content goes to os.Stdout:

// Println formats using the default formats for its operands and writes to standard output.
// Spaces are always added between operands and a newline is appended.
// It returns the number of bytes written and any write error encountered.
func Println(a ...interface{}) (n int, err error) {
	return Fprintln(os.Stdout, a...)
}
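The builtin println, by contrast, writes to standard error. A quick way to see the difference (my minimal example, not the article's):

package main

import "fmt"

func main() {
	fmt.Println("to stdout") // goes to os.Stdout
	println("to stderr")     // the builtin writes to standard error
}

Running go run main.go 2>/dev/null prints only the first line.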

Duff's device and Go language zero values

I had heard of Duff's device, a loop-unrolling optimization for copying data, for a long time. Recently, I found that Go also uses the idea of Duff's device when initializing zero values, so I took a closer look and am sharing it with you today. If you need to copy data of a certain length from one address to another, the simplest code is as follows:

send(short *to, short *from, int count)
{
	do {
		*to++ = *from++;
	} while (--count > 0);
}

Because only one element is copied per loop iteration, a total of count conditional branch checks are performed.
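Duff's device removes most of those branch checks by unrolling the loop and using a switch to jump into the middle of the unrolled body. Go has no fall-through into a loop, but the core idea, fewer branch checks per element copied, can be sketched like this (my illustration, not the article's code; the remainder is handled after the main loop rather than before it, as in Duff's original):

// copyUnrolled copies 4 elements per iteration, so the loop condition is
// evaluated roughly count/4 times instead of count times.
func copyUnrolled(dst, src []int16) {
	n := len(src)
	i := 0
	for ; i+4 <= n; i += 4 {
		dst[i] = src[i]
		dst[i+1] = src[i+1]
		dst[i+2] = src[i+2]
		dst[i+3] = src[i+3]
	}
	for ; i < n; i++ { // copy the remaining 0-3 elements
		dst[i] = src[i]
	}
}

In the Go runtime itself, the analogous duffzero and duffcopy routines are unrolled assembly that the compiler jumps into at a computed offset.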

Evolution of WaitGroup in golang

WaitGroup is a concurrency primitive often used in Go concurrent programming to coordinate tasks. It looks like it has only a few simple methods and is relatively easy to use. In fact, the internal implementation of WaitGroup has been changed several times, mainly to optimize the atomic operations on its fields. The earliest implementation of WaitGroup is as follows.
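The listing itself is cut off in this excerpt. For reference, the early struct (roughly Go 1.4 and before) looked something like the sketch below; treat it as a reconstruction from the old sync package rather than the article's own listing:

// Early sync.WaitGroup: a mutex-protected counter plus a semaphore that
// blocked waiters sleep on. Later versions packed the counter and waiter
// count into a single 64-bit word updated atomically.
type WaitGroup struct {
	m       Mutex
	counter int32
	waiters int32
	sema    *uint32
}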

Using mTLS in Linkerd to protect application communications

Security is a top priority for cloud-native applications, and while security is a very broad topic, Linkerd has an important role to play: its mutual TLS (mTLS) feature is designed to enable a zero-trust approach to security in Kubernetes. Zero-trust security is an IT security model that requires strict identity verification for every person and every device attempting to access resources on a private network, whether located inside or outside the network perimeter.

Implementation of retry mechanism

When a service requests a resource and the request fails due to a network exception or some other condition, a retry mechanism is needed to attempt the request again. A common practice is to retry 3 times, sleeping for a random few seconds between attempts. In business development scaffolding, the HTTP client usually encapsulates a retry method and automatically retries failed requests according to configuration. Let's look at a common HTTP client and see how it implements request retry.
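As a baseline, a hand-rolled version of the retry-3-times-with-a-random-sleep pattern might look like this (my sketch, not the client discussed in the article; imports: fmt, math/rand, net/http, time):

// getWithRetry retries on network errors and 5xx responses, sleeping a
// random 1-3 seconds between attempts.
func getWithRetry(url string, attempts int) (*http.Response, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil
		}
		if err != nil {
			lastErr = err
		} else {
			resp.Body.Close() // discard the failed response before retrying
			lastErr = fmt.Errorf("server error: %s", resp.Status)
		}
		time.Sleep(time.Duration(rand.Intn(3)+1) * time.Second)
	}
	return nil, lastErr
}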

Temporal: Microservice workflow orchestration using a familiar programming language

Problem background. Suppose this business scenario: after a user posts an article, if the article contains a video link, we need to download and transcode the video, and some business logic runs after the transcoding is done.

@RestController
class ArticleController {
    @PostMapping("/article")
    fun createArticle(article: Article) {
        if (article.videos.isNotEmpty()) {
            val urls = videoTranscodeService.transcode(article.videos)
            article.setVideoUrls(urls)
        }
        // Continue other business logic after video transcoding is complete
        processArticle(article)
    }
}

If this code lives in the article-publishing service, and videoTranscodeService is another remote service that only provides an asynchronous interface, then we can't simply write a synchronous method, wait for it to return, and continue executing the rest of the business logic.
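This is exactly the shape of problem Temporal targets: the flow is written as straight-line workflow code even though the steps behind it are asynchronous and long-running. A rough sketch with the Temporal Go SDK, using hypothetical types and activity names (Article, TranscodeVideos, ProcessArticle) rather than anything from the article:

package workflows

import (
	"time"

	"go.temporal.io/sdk/workflow"
)

// ArticleWorkflow reads like synchronous code; Temporal persists each step,
// so waiting on the transcoding activity can span minutes or days without
// holding a thread anywhere.
func ArticleWorkflow(ctx workflow.Context, article Article) error {
	ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		StartToCloseTimeout: time.Hour,
	})
	if len(article.Videos) > 0 {
		var urls []string
		err := workflow.ExecuteActivity(ctx, TranscodeVideos, article.Videos).Get(ctx, &urls)
		if err != nil {
			return err
		}
		article.VideoURLs = urls
	}
	return workflow.ExecuteActivity(ctx, ProcessArticle, article).Get(ctx, nil)
}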

Knowledge about Envoy

Development environment build. Development is based on Ubuntu 18.04. Basic tools: download and install Bazel:

sudo wget -O /usr/local/bin/bazel https://github.com/bazelbuild/bazelisk/releases/latest/download/bazelisk-linux-amd64
sudo chmod +x /usr/local/bin/bazel

Install base dependencies:

sudo apt-get install libtool cmake automake autoconf make ninja-build curl unzip virtualenv

Clang build environment (optional): download and install from llvm; version 9.0 is the most compatible.

bazel/setup_clang.sh <PATH_TO_EXTRACTED_CLANG_LLVM>
echo "build --config=clang" >> user.bazelrc

Building debug versions with bazel: bazel …

Istio Virtual Machine Health Check

It is well known that Istio health-checks the VMs it onboards; for services on k8s, the health check section lives in the Pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: goproxy
  labels:
    app: goproxy
spec:
  containers:
  - name: goproxy
    image: k8s.gcr.io/goproxy:0.1
    ports:
    - containerPort: 8080
    readinessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20

For workloads running on VMs, Istio provides a similar capability.

Linkerd implements timeouts and retries via ServiceProfile

One of the most important problems that Linkerd solves is observability: providing a detailed view of service behavior. Linkerd's value proposition for observability is that it can provide golden metrics for your HTTP and gRPC services automatically, without code changes or developer involvement. Out of the box, Linkerd provides these metrics on a per-service basis, covering all requests across services regardless of what those requests are.

Property Reflection in C++

1 Preface. Java and Go have built-in reflection mechanisms that support getting information about classes, methods, and properties at runtime. Reflection has many application scenarios, the most common being data serialization: without reflection, you either rely on code generation, as protobuf does, or hand-write the serialization code line by line, which is pure manual labor. C++ has no built-in reflection mechanism, and implementing one is …
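For contrast, here is the kind of built-in runtime introspection the preface refers to, shown in Go (my own minimal example):

package main

import (
	"fmt"
	"reflect"
)

type User struct {
	Name string
	Age  int
}

func main() {
	// Enumerate struct fields at runtime; a serializer can walk these
	// instead of relying on generated or hand-written per-type code.
	t := reflect.TypeOf(User{})
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		fmt.Printf("field %s has type %s\n", f.Name, f.Type)
	}
}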

Kubernetes Resource Quotas

Resource quotas are defined through the ResourceQuota object, which provides an overall resource limit for each namespace: it can cap the total number of objects of a given type in the namespace, and it can also cap the total amount of compute resources that Pods in the namespace may use. When using resource quotas, there are two things to keep in mind. If the total available resources in the cluster are less than the sum of the resource quotas across namespaces, resource contention may result.
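Both kinds of limit can live in one object. A minimal sketch using client-go, with a hypothetical namespace and quota name (imports: context, corev1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1", "k8s.io/apimachinery/pkg/api/resource", "k8s.io/client-go/kubernetes"):

// createQuota caps both the object count (at most 10 Pods) and the total
// compute (2 CPUs requested, 4 CPUs in limits) for namespace "dev".
func createQuota(ctx context.Context, cs *kubernetes.Clientset) error {
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "dev-quota", Namespace: "dev"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods:        resource.MustParse("10"),
				corev1.ResourceRequestsCPU: resource.MustParse("2"),
				corev1.ResourceLimitsCPU:   resource.MustParse("4"),
			},
		},
	}
	_, err := cs.CoreV1().ResourceQuotas("dev").Create(ctx, quota, metav1.CreateOptions{})
	return err
}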

Kubernetes ResourceQuota and LimitRange Practices

The cluster configuration is adjusted according to the number of cluster users, so as to control the amount of resources used in a specific namespace and ultimately achieve fair usage and cost control of the cluster. The functions to be implemented are as follows: limit the amount of compute resources used by Pods in the running state; limit the number of persistent volumes to control access to storage; limit the number of load balancers to control costs.

Multiple containers in a Pod share the process namespace

To enable process namespace sharing, simply set shareProcessNamespace: true in the Pod definition. The following example shows the effect of two containers sharing a process namespace within one Pod. The share-process-namespace.yaml configuration file reads:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  shareProcessNamespace: true
  containers:
  - name: nginx
    image: nginx
  - name: shell
    image: busybox
    securityContext:
      capabilities:
        add:
        - SYS_PTRACE
    stdin: true
    tty: true

The main container serves nginx; the other container, named "shell", is a busybox-based troubleshooting tool.

Simulating the K8s scheduler environment with kube-scheduler-simulator

Since the default Kubernetes scheduler is highly configurable, in many cases we don't need to write any code to customize scheduling behavior. However, people who want to understand how the scheduler works, or who need custom features, may try to develop their own scheduler. In this article, I describe how to build a scheduler development environment with the help of kube-scheduler-simulator, a scheduler simulator. Installing the simulator: first, clone the simulator's code.

Try using go generate

In Go, there is an unwritten convention: many people like to use generated code. For example, a project's directory structure and gRPC stub code are generated by tools, and smaller things, such as static files embedded in code or automatically generated String methods for enum types, are common too; the generated-code pattern shows up everywhere. This is probably related to the fact that the Go team promotes this approach as well; for example, starting with Go version 1.…
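A typical instance of the pattern is generating String methods for an enum type (a minimal sketch, assuming the stringer tool from golang.org/x/tools/cmd/stringer is installed):

package main

//go:generate stringer -type=Color

// Running `go generate ./...` invokes stringer on this file and emits
// color_string.go containing a String() method for Color.
type Color int

const (
	Red Color = iota
	Green
	Blue
)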