ping is a network tool widely used to test the quality and stability of network connections. When we want to know whether our computer can communicate with another device or server, ping is our best friend. It is also often used to measure connectivity and network quality between networks, as it is a small but powerful network diagnostic tool that usually ships with the operating system.
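As a quick illustration, the most common invocation simply sends a few echo requests and reports round-trip times; the target below is the local loopback address, chosen only so the command works anywhere:

```shell
# Send 4 ICMP echo requests and print per-packet latency plus a summary.
ping -c 4 127.0.0.1
```

The summary's packet-loss percentage and min/avg/max round-trip times are what make ping a rough quality gauge rather than just a reachability check.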
Starting with ASP.NET Core 7.0, .NET SDK 7 has built-in support for the dotnet user-jwts command, which helps you manage the signing keys and JWT tokens you need during development.

Create the example project

Pin the .NET SDK version to 7.0.203 (and subsequent feature releases):

dotnet new globaljson --sdk-version 7.0.203 --roll-forward latestFeature

Create the ASP.NET Core Web API project:

dotnet new webapi -n AspNetCoreUserJwtsDemo
cd AspNetCoreUserJwtsDemo
dotnet new gitignore
git init
git add .
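Once the project exists, the tool itself can be exercised with a few subcommands. This is a sketch, assuming the .NET 7 SDK is on PATH and the commands are run from the project directory; check the current SDK docs for the full option list:

```shell
# Create a development JWT for the current project
# (this also wires up a signing key in user secrets).
dotnet user-jwts create

# List the tokens that have been issued for this project.
dotnet user-jwts list

# Print the full details of one token by its ID
# (<token-id> is a placeholder for an ID from the list above).
dotnet user-jwts print <token-id>
```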
We can usually use one or more servers as Docker hosts and run some open source tool services in containers. Often, though, we do not know when such an application has released a new version. I recently discovered an open source tool that checks whether the images of the containers running on a host have been updated, and that can send update notifications through multiple integrated channels: DIUN (Docker Image Update Notifier).
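A minimal way to run DIUN is as a container alongside the others, watching the local Docker socket. The sketch below is an assumption: the crazymax/diun image name, the DIUN_* environment variables, and the cron schedule follow DIUN's documented conventions and should be checked against the current docs:

```shell
# Run DIUN, let it watch the local Docker daemon, and check for image
# updates every 6 hours; notification channels (mail, Telegram,
# webhooks, ...) are configured via further DIUN_NOTIF_* variables.
docker run -d --name diun \
  -e TZ=UTC \
  -e DIUN_WATCH_SCHEDULE="0 */6 * * *" \
  -e DIUN_PROVIDERS_DOCKER=true \
  -v /var/run/docker.sock:/var/run/docker.sock \
  crazymax/diun:latest
```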
From the kube-scheduler’s perspective, it calculates the best node to run a Pod through a series of algorithms; when a new Pod arrives for scheduling, the scheduler makes the best decision it can based on its description of the Kubernetes cluster's resources at that moment. But Kubernetes clusters are very dynamic, with cluster-wide changes happening all the time. For example, if we drain a node for maintenance, all Pods on that node are evicted to other nodes; when the maintenance is finished, however, the evicted Pods do not automatically come back to that node.
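The maintenance scenario above is typically carried out with kubectl; a minimal sketch, where the node name is a placeholder:

```shell
# Evict all Pods from the node and mark it unschedulable.
kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data

# ... perform maintenance on the node ...

# Mark the node schedulable again. Note that the previously evicted
# Pods are NOT moved back automatically; only Pods scheduled from
# now on may land there.
kubectl uncordon node-1
```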
1. Background

In a previous article, I wonder if you noticed that all the example code is executed with the environment variable ASSUME_NO_MOVING_GC_UNSAFE_RISK_IT_WITH added in front, like the following:

$ ASSUME_NO_MOVING_GC_UNSAFE_RISK_IT_WITH=go1.20 go run tensor.go

What is going on here? What happens if you don't add this environment variable? Let's try:

// https://github.com/bigwhite/experiments/blob/master/go-and-nn/tensor-operations/tensor.go
$ go run tensor.go
panic: Something in this program imports go4.org/unsafe/assume-no-moving-gc to declare that it assumes a non-moving garbage collector, but your version of go4.
The Linux networking stack is not lacking in features, and it performs well enough for most purposes. With high-speed networks, however, the overhead of traditional network programming becomes too large a share of the total cost. In the previous article on syscall.Socket, we introduced the AF_PACKET socket type, whose performance is really mediocre: all data has to be copied between user space and kernel space, and under high concurrency there are a lot of interrupts.
I recently received a question about Go from a reader, which reads as follows: In a directory, I wrote a.go and a_test.go; after running go mod init main and executing go test, it reports the error could not import main (can not import "main"). I know it can be solved by changing the package name. My questions are: is it impossible to run package tests on the main package, and what is the underlying reason for the error reported here?
This article is part of the “Programming with gopacket library” series, and mainly focuses on manually constructing data link layer, network layer and transport layer packets to scan the IPv4 addresses of an entire network (in the example, mainland China) to see whether the corresponding hosts are reachable. First, we need to know the IP addresses of the entire network. We can use fping to detect whether these IPs are reachable; then we can quickly scan these IPs ourselves based on ICMP to find out which addresses are active across the whole network; and finally we can run a TCP scan over the entire network, and even scan for Redis instances exposed on the public Internet.
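For the first step, fping can sweep a whole address range in one invocation; a small sketch, where the /24 subnet is a placeholder and the flags follow fping's common usage:

```shell
# Probe every address generated from the CIDR and print only the ones
# that answer: -a shows alive hosts, -g generates the target list from
# the subnet, -q suppresses the per-probe output.
fping -a -q -g 192.168.1.0/24
```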
Submariner is a completely open source project that helps us establish network communication between different Kubernetes clusters, whether on-premises or in the cloud. Submariner has the following features:

- L3 connectivity across clusters
- Service discovery across clusters
- Globalnet support for overlapping CIDRs
- Compatibility with various CNIs

Submariner provides a command line tool, subctl, to simplify deployment and management.

1. Submariner architecture

Submariner consists of several main parts:

Broker: essentially two CRDs (Endpoint and Cluster) used to exchange cluster information. We need to select one cluster as the Broker cluster; the other clusters connect to the API Server of the Broker cluster to exchange cluster information:

Endpoint: contains the information the Gateway Engine needs to establish inter-cluster connections, such as the private IP and public IP, NAT ports, etc.
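The subctl workflow that ties these parts together can be sketched as follows; kubeconfig paths and the cluster ID are placeholders, and flags should be verified against the subctl version in use:

```shell
# Deploy the Broker into the cluster pointed to by this kubeconfig;
# this produces a broker-info.subm file describing how to connect.
subctl deploy-broker --kubeconfig broker-cluster.kubeconfig

# Join a workload cluster to the Broker using that file.
subctl join broker-info.subm --kubeconfig cluster1.kubeconfig --clusterid cluster1

# Inspect the inter-cluster connections from a joined cluster.
subctl show connections --kubeconfig cluster1.kubeconfig
```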
pyenv is a Python version manager, but it doesn’t manage virtual environments, so you can’t manage them out of the box like conda. pyenv, however, has the ability to manage virtual environments with the pyenv-virtualenv plugin. This article provides a brief introduction to how Python’s virtual environments work under pyenv-virtualenv. Here, I have installed Python version 3.10.9 via pyenv and created a virtual environment called play with the pyenv-virtualenv plugin. See the pyenv homepage and the pyenv-virtualenv homepage for more information on how to install pyenv and pyenv-virtualenv and create an environment.
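The environment described above can be reproduced with a few commands once pyenv and the pyenv-virtualenv plugin are installed:

```shell
# Install the interpreter, then create a virtualenv named "play" on top of it.
pyenv install 3.10.9
pyenv virtualenv 3.10.9 play

# Activate it in the current shell, or pin it to a project directory
# (pyenv local writes a .python-version file that pyenv then picks up).
pyenv activate play
pyenv local play
```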
I recently needed to make the group’s cluster computing environment available to a third party that was not fully trusted. This required some isolation (most notably of user data under /home) while still providing a normal operating environment (including GPU, Infiniband, SLURM job submission, toolchain management, etc.). After thinking about it, some of the existing options are not very suitable: POSIX permission control: changing the permissions of the individual folders under /home to 0750 would be a simple and quick way to prevent other users from reading and writing.
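Mechanically, the POSIX-permission option is just a chmod per home directory. The sketch below uses a scratch directory under /tmp as a stand-in for a real /home/&lt;user&gt; so it can be run anywhere:

```shell
# 0750: the owner has full access, group members may enter and list,
# and everyone else is locked out entirely.
mkdir -p /tmp/home-demo/alice
chmod 0750 /tmp/home-demo/alice

# Verify the resulting mode bits (GNU coreutils stat).
stat -c '%a %n' /tmp/home-demo/alice   # prints: 750 /tmp/home-demo/alice
```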
Docker started out leading the container runtime space, and then fell behind Kubernetes in the container orchestration dimension. To retain its container runtime dominance, Docker introduced OCI and supports more WebAssembly runtimes via containerd-wasm-shim. To address the limitations of Docker in terms of security, stability, performance, and portability, the Kubernetes community has developed other container runtimes with different implementations and features, and has defined the Container Runtime Interface (CRI) specification for them.
The socket function is a system call that creates a new network socket in the operating system kernel. A socket is an abstract object for communication over a network. Through sockets, applications can communicate using different network protocols, such as TCP and UDP, and we can even implement custom protocols.

syscall.Socket

There are many articles introducing the epoll approach of the Go standard library, but there are not too many articles
Redis Cluster and Docker

Currently, Redis Cluster does not support NATted environments, or in general environments where IP addresses or TCP ports are remapped. Docker uses a technique called port mapping: programs running inside Docker containers may be exposed on a different port than the one the program believes it is using. This is useful for running multiple containers that use the same ports, at the same time, on the same server.
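Because of this, Cluster deployments under Docker typically use host networking so the container sees the host's real IP and ports with no remapping. A hypothetical sketch (image tag, container name, and port are placeholders):

```shell
# Run a cluster-enabled Redis node with --net=host, so the port the
# server binds to is exactly the port other cluster nodes will see.
docker run -d --name redis-node-1 --net=host \
  redis:7 redis-server --cluster-enabled yes --port 7000
```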
In Kubernetes, a Pod is a container-group concept: a Pod can contain multiple containers, and these containers share resources such as network and storage, providing a more flexible environment for applications to run in. A container is essentially a special process, created with namespaces to isolate its runtime environment, cgroups to control its resource overhead, and some Linux network virtualisation techniques to solve network communication. What a Pod does is allow multiple containers to join the same namespaces for resource sharing.
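The namespace sharing a Pod performs can be imitated by hand with plain Docker; the sketch below (image names are just examples) starts one container and attaches a second one to its network namespace, so both see the same localhost:

```shell
# Start a web server in its own network namespace.
docker run -d --name web nginx:alpine

# Join a second container to the SAME network namespace;
# "localhost" inside it now reaches the nginx process.
docker run --rm --net=container:web curlimages/curl -s http://localhost/
```

This is essentially what the Pod's infrastructure ("pause") container does: it holds the shared namespaces that the application containers join.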
Helm is a highly recommended tool for managing K8S clusters, as it provides a clearer and more efficient way to manage K8S resources. Here is a summary of how Helm has been used in real projects. Before you start using Helm, install the tool and configure the environment in advance. Helm is very easy to install, and if the local kubectl can access the K8S cluster properly, then Helm is ready to use.
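With kubectl already talking to the cluster, basic Helm usage looks like this; the repository, chart, and release names are placeholders chosen for illustration:

```shell
# Register a chart repository and refresh the local chart index.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install a chart as a named release, then inspect what is deployed.
helm install my-nginx bitnami/nginx
helm list
```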
Basic processes and methods

Query the status of the pod, for pod Pending scenarios:

kubectl describe pod <pod-name> -n <namespace>

Get exception events in the cluster as a supplement when troubleshooting the cause of a Pending pod:

kubectl get events -n <namespace> --sort-by .lastTimestamp [-w]

Get the pod's logs, for pod Error or CrashLoopBackOff scenarios:

kubectl logs <pod-name> -n <namespace> [name-of-container, if multiple] [-f]

If the pod is already running and the existing logs do not directly indicate the problem, it is necessary to go into the pod's container for further testing, for example to verify the status of a running process or its configuration, or to check the container's network connection.
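Getting into the container for that last step is done with kubectl exec; a minimal sketch:

```shell
# Open an interactive shell inside the (first) container of the pod;
# use sh when bash is not present in the image.
kubectl exec -it <pod-name> -n <namespace> -- sh

# Or run a single diagnostic command without an interactive session.
kubectl exec <pod-name> -n <namespace> -- ps aux
```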
OpenKruise (https://openkruise.io) is a suite of Kubernetes-based extensions focused on automating cloud-native applications, covering deployment, publishing, operations and availability protection. The majority of the capabilities OpenKruise provides are defined as CRD extensions; they have no external dependencies and can run on any pure Kubernetes cluster. Kubernetes itself provides some application deployment management capabilities, but these are not sufficient for large-scale application and cluster scenarios, and OpenKruise bridges the gap in the areas of application deployment, upgrades, protection, and operations and maintenance.
1. Background

The team was short of testers and had no choice but to do it ourselves. Automating tests for the systems we develop saves manpower, improves efficiency and increases confidence in the quality assurance of the system. Our goal is to have automated tests covering three environments, as follows:

- Automated testing in the CI/CD pipeline.
- Post-release automated smoke/acceptance testing in the various staging environments.
- Post-release automated smoke/acceptance testing in production environments.
The author has recently been writing some Rust code, and this article focuses on a view of Rust and some of the differences between Rust and C++.

Background

S2 studied the C++ Core Guidelines while advancing the team's code specification, which led to learning about clang-tidy and Google Chrome's explorations of security. C++ is a very powerful language, but with great power comes great responsibility, and it has been