In some production environments, only a native K8s cluster is needed, without a graphical management console such as KubeSphere. In our existing technology stack, we are used to deploying KubeSphere and K8s clusters with KubeKey. Today, I will demonstrate how to use KubeKey to deploy a pure K8s cluster on openEuler 22.03 LTS SP3.
Actual server configuration (a 1:1 architectural replica of a small-scale production environment; the configuration differs slightly):
| Host Name | IP | CPU (cores) | Memory (GB) | System Disk (GB) | Data Disk (GB) | Role |
|---|---|---|---|---|---|---|
| ksp-master-1 | 192.168.9.131 | 8 | 16 | 40 | 100 | k8s-master |
| ksp-master-2 | 192.168.9.132 | 8 | 16 | 40 | 100 | k8s-master |
| ksp-master-3 | 192.168.9.133 | 8 | 16 | 40 | 100 | k8s-master |
| Total | 3 | 24 | 48 | 120 | 300 | |
Software versions used in this environment:
- Operating system: openEuler 22.03 LTS SP3 x64
- K8s: v1.28.8
- Containerd: 1.7.13
- KubeKey: v3.1.1
1. Basic configuration of operating system
Please note: unless otherwise specified, the following operations must be performed on all servers. This article demonstrates them only on the Master-1 node and assumes the other servers have been configured the same way.
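If you prefer not to repeat every command by hand on each machine, the same commands can be pushed to all nodes over SSH. Below is a minimal sketch, assuming passwordless root SSH access and the three node IPs used in this article; adjust the host list and command to your environment.
# Illustrative helper: run one setup command on all three nodes
for host in 192.168.9.131 192.168.9.132 192.168.9.133; do
  ssh root@"${host}" 'timedatectl set-timezone Asia/Shanghai'
done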
1.1 Configure host name
hostnamectl hostname ksp-master-1
1.2 Configure DNS
echo "nameserver 114.114.114.114" > /etc/resolv.conf
1.3 Configure server time zone
- Configure the server time zone to Asia/Shanghai.
timedatectl set-timezone Asia/Shanghai
1.4 Configure time synchronization
- Install chrony as time synchronization software
yum install chrony
- Edit the configuration file /etc/chrony.conf and modify the NTP server configuration
vi /etc/chrony.conf
# Delete all existing pool entries
pool pool.ntp.org iburst
# Add a China-based NTP server, or specify another commonly used time server
pool cn.pool.ntp.org iburst
# Instead of editing by hand, the replacement can also be done automatically with sed
sed -i 's/^pool pool.*/pool cn.pool.ntp.org iburst/g' /etc/chrony.conf
- Start the chrony service and set it to start automatically at boot
systemctl enable chronyd --now
- Verify chrony synchronization status
# Run the check command
chronyc sourcestats -v
# Normal output looks like the following
[root@ksp-master-1 ~]# chronyc sourcestats -v
                             .- Number of sample points in measurement set.
                            /    .- Number of residual runs with same sign.
                           |    /    .- Length of measurement set (time).
                           |   |    /     .- Est. clock freq error (ppm).
                           |   |   |     /           .- Est. error in freq.
                           |   |   |    |           /         .- Est. offset.
                           |   |   |    |          |          |   On the -.
                           |   |   |    |          |          |   samples. \
                           |   |   |    |          |          |             |
Name/IP Address            NP  NR  Span  Frequency  Freq Skew  Offset  Std Dev
==============================================================================
111.230.189.174            18  11   977     -0.693      6.795  -1201us  2207us
electrode.felixc.at        18  10   917     +2.884      8.258    -31ms  2532us
tick.ntp.infomaniak.ch     14   7   720     +2.538     23.906  +6176us  4711us
time.cloudflare.com        18   7   913     +0.633      9.026  -2543us  3142us
1.5 Turn off the system firewall
systemctl stop firewalld && systemctl disable firewalld
1.6 Disable SELinux
A minimal installation of openEuler 22.03 SP3 enables SELinux by default. To avoid complications, we disable SELinux on all nodes.
# Modify the config file with sed to disable SELinux permanently
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# Disable it temporarily with a command; this step is optional, as KubeKey configures it automatically
setenforce 0
1.7 Install system dependencies
On all nodes, execute the following command to install the basic system dependency packages for Kubernetes.
# Install Kubernetes system dependency packages
yum install curl socat conntrack ebtables ipset ipvsadm
# Install tar, or later steps will fail. Oddly, after this many releases, openEuler still does not install tar by default
yum install tar
2. Operating system disk configuration
Each server has a new data disk /dev/sdb for persistent storage of Containerd and K8s Pod data.
To allow dynamic expansion when disk capacity runs short after the cluster goes into production, this article uses LVM to configure the disk (in practice, the production environments I maintain rarely use LVM).
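To show why LVM was chosen, the following sketch illustrates how /data could be expanded online later, assuming a hypothetical new disk /dev/sdc and the data / lvdata VG and LV names created in the steps below.
# Illustrative only: expand /data after going into production
pvcreate /dev/sdc                              # initialize the new disk as a PV
vgextend data /dev/sdc                         # add it to the existing data VG
lvextend -l +100%FREE /dev/mapper/data-lvdata  # grow the LV into the new free space
xfs_growfs /data                               # grow the mounted XFS filesystem online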
Please note that the following operations must be performed on all nodes in the cluster unless otherwise specified. This article only selects the Master-1 node for demonstration, and assumes that the other servers have been configured and set up in the same way.
2.1 Use LVM to configure disks
- Create PV
pvcreate /dev/sdb
- Create VG
vgcreate data /dev/sdb
- Create LV
# Use all available space; the VG name is data, the LV name is lvdata
lvcreate -l 100%VG data -n lvdata
2.2 Format disk
mkfs.xfs /dev/mapper/data-lvdata
2.3 Disk mounting
- Manual mounting
mkdir /data
mount /dev/mapper/data-lvdata /data/
- Automatically mount on boot
tail -1 /etc/mtab >> /etc/fstab
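Because this one-liner appends whatever /etc/mtab recorded last, it is worth validating the new /etc/fstab entry before the next reboot. A quick check (assuming /data is currently mounted):
# Validate the fstab entry without rebooting
umount /data
mount -a       # re-mounts everything in /etc/fstab; an error here means a bad entry
df -h /data    # confirm the LV is mounted at /data again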
2.4 Create data directory
- Create OpenEBS local data root directory
mkdir -p /data/openebs/local
- Create Containerd data directory
mkdir -p /data/containerd
- Create a symlink for the Containerd data directory
ln -s /data/containerd /var/lib/containerd
Note: As of v3.1.1, KubeKey does not support customizing Containerd's data directory during deployment, so this directory symlink is the only workaround to gain storage space (alternatively, Containerd can be installed manually in advance).
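Before deploying, you can quickly confirm that the symlink workaround is in effect:
ls -ld /var/lib/containerd    # should show a symlink pointing to /data/containerd
df -h /data                   # confirm the data disk backs the directory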
3. Install and deploy K8s
3.1 Download KubeKey
This article uses the master-1 node as the deployment node and downloads the latest version (v3.1.1) of the KubeKey binary to the server. The specific KubeKey version number can be found on the KubeKey release page.
- Download the latest version of KubeKey
mkdir ~/kubekey
cd ~/kubekey/
# Use the China download zone (for when access to GitHub is restricted)
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | sh -
- The correct execution result is as follows
[root@ksp-master-1 ~]# mkdir ~/kubekey
[root@ksp-master-1 ~]# cd ~/kubekey/
[root@ksp-master-1 kubekey]# export KKZONE=cn
[root@ksp-master-1 kubekey]# curl -sfL https://get-kk.kubesphere.io | sh -
Downloading kubekey v3.1.1 from https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v3.1.1/kubekey-v3.1.1-linux-amd64.tar.gz ...
Kubekey v3.1.1 Download Complete!
[root@ksp-master-1 kubekey]# ll -h
total 114M
-rwxr-xr-x. 1 root root 79M Apr 16 12:30 kk
-rw-r--r--. 1 root root 36M Apr 25 09:37 kubekey-v3.1.1-linux-amd64.tar.gz
- View the list of Kubernetes versions supported by KubeKey
./kk version --show-supported-k8s
[root@ksp-master-1 kubekey]# ./kk version --show-supported-k8s
v1.19.0
......(the middle of the list is omitted for brevity; run the command yourself to see the full output)
v1.28.0
v1.28.1
v1.28.2
v1.28.3
v1.28.4
v1.28.5
v1.28.6
v1.28.7
v1.28.8
v1.29.0
v1.29.1
v1.29.2
v1.29.3
Note: The output lists the versions supported by KubeKey; it does not mean that KubeSphere and other components can also support them perfectly. This article only uses KubeKey to deploy K8s, so version compatibility with KubeSphere need not be considered.
The K8s versions supported by KubeKey are quite recent. This article chooses v1.28.8. For production environments, you can choose v1.26.15 or another version with an even minor version number and more than 5 patch releases. It is not recommended to choose a version that is too old; after all, v1.30 has already been released.
3.2 Create K8s cluster deployment configuration file
- Create cluster configuration file
This article chose K8s v1.28.8, so the specified configuration file name is k8s-v1288.yaml. If not specified, the default file name is config-sample.yaml.
./kk create config -f k8s-v1288.yaml --with-kubernetes v1.28.8
Note: The generated default configuration file has a lot of content, so I won’t show it in detail here. For more detailed configuration parameters, please refer to the official configuration example .
- Modify the configuration file
The example in this article uses three nodes that serve as control-plane, etcd, and worker nodes at the same time.
Edit the configuration file k8s-v1288.yaml, mainly modifying the hosts and roleGroups information in the kind: Cluster section. The changes are described below:
- hosts: specify the node's IP, SSH user, SSH password, and SSH port
- roleGroups: specify 3 etcd and control-plane nodes, and reuse the same machines as 3 worker nodes
- internalLoadbalancer: enable the built-in HAProxy load balancer
- domain: a custom domain name, lb.opsxlab.cn; if there are no special requirements, you can use the default value lb.kubesphere.local
- clusterName: customized as opsxlab.cn; if there are no special requirements, you can use the default value cluster.local
- autoRenewCerts: automatically renews certificates on expiration; the default is true
- containerManager: use containerd
The complete modified example is as follows:
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: ksp-master-1, address: 192.168.9.131, internalAddress: 192.168.9.131, user: root, password: "OpsXlab@2024"}
  - {name: ksp-master-2, address: 192.168.9.132, internalAddress: 192.168.9.132, user: root, password: "OpsXlab@2024"}
  - {name: ksp-master-3, address: 192.168.9.133, internalAddress: 192.168.9.133, user: root, password: "OpsXlab@2024"}
  roleGroups:
    etcd:
    - ksp-master-1
    - ksp-master-2
    - ksp-master-3
    control-plane:
    - ksp-master-1
    - ksp-master-2
    - ksp-master-3
    worker:
    - ksp-master-1
    - ksp-master-2
    - ksp-master-3
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy
    domain: lb.opsxlab.cn
    address: ""
    port: 6443
  kubernetes:
    version: v1.28.8
    clusterName: opsxlab.cn
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
3.3 Deploy K8s
Next, we execute the following command to deploy K8s using the configuration file generated above.
export KKZONE=cn
./kk create cluster -f k8s-v1288.yaml
After the above command is executed, KubeKey first checks the dependencies and other requirements for deploying K8s. Once the check passes, you will be prompted to confirm the installation. Type yes and press ENTER to continue the deployment.
[root@ksp-master-1 kubekey]# ./kk create cluster -f k8s-v1288.yaml
 _   __      _          _   __
| | / /     | |        | | / /
| |/ / _   _| |__   ___| |/ /  ___ _   _
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/
10:45:28 CST [GreetingsModule] Greetings
10:45:28 CST message: [ksp-master-3]
Greetings, KubeKey!
10:45:28 CST message: [ksp-master-1]
Greetings, KubeKey!
10:45:28 CST message: [ksp-master-2]
Greetings, KubeKey!
10:45:28 CST success: [ksp-master-3]
10:45:28 CST success: [ksp-master-1]
10:45:28 CST success: [ksp-master-2]
10:45:28 CST [NodePreCheckModule] A pre-check on nodes
10:45:31 CST success: [ksp-master-3]
10:45:31 CST success: [ksp-master-1]
10:45:31 CST success: [ksp-master-2]
10:45:31 CST [ConfirmModule] Display confirmation form
+--------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time |
+--------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| ksp-master-1 | y | y | y | y | y | y | y | y | y | | | | | | CST 10:45:31 |
| ksp-master-2 | y | y | y | y | y | y | y | y | y | | | | | | CST 10:45:31 |
| ksp-master-3 | y | y | y | y | y | y | y | y | y | | | | | | CST 10:45:31 |
+--------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]:
Notice:
- The three storage-related clients (nfs client, ceph client, and glusterfs client) are not installed; we will install them separately later when integrating storage.
- docker and containerd are installed automatically, according to the containerManager type selected in the configuration file.
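If you already know which storage backend you will integrate, the clients can be pre-installed now. The package names below are assumptions and may differ on openEuler; verify them against your repositories.
# Optional pre-install of storage clients (package names are assumptions)
yum install nfs-utils                    # NFS client
# yum install ceph-common                # Ceph client
# yum install glusterfs glusterfs-fuse   # GlusterFS client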
Deployment takes about 10-20 minutes, depending on network speed and machine configuration; this deployment took 20 minutes to complete.
Once the deployment is complete, you should see output similar to the following on your terminal.
10:59:25 CST [ConfigureKubernetesModule] Configure kubernetes
10:59:25 CST success: [ksp-master-1]
10:59:25 CST skipped: [ksp-master-2]
10:59:25 CST skipped: [ksp-master-3]
10:59:25 CST [ChownModule] Chown user $HOME/.kube dir
10:59:26 CST success: [ksp-master-3]
10:59:26 CST success: [ksp-master-2]
10:59:26 CST success: [ksp-master-1]
10:59:26 CST [AutoRenewCertsModule] Generate k8s certs renew script
10:59:27 CST success: [ksp-master-2]
10:59:27 CST success: [ksp-master-3]
10:59:27 CST success: [ksp-master-1]
10:59:27 CST [AutoRenewCertsModule] Generate k8s certs renew service
10:59:28 CST success: [ksp-master-3]
10:59:28 CST success: [ksp-master-2]
10:59:28 CST success: [ksp-master-1]
10:59:28 CST [AutoRenewCertsModule] Generate k8s certs renew timer
10:59:29 CST success: [ksp-master-2]
10:59:29 CST success: [ksp-master-3]
10:59:29 CST success: [ksp-master-1]
10:59:29 CST [AutoRenewCertsModule] Enable k8s certs renew service
10:59:29 CST success: [ksp-master-3]
10:59:29 CST success: [ksp-master-2]
10:59:29 CST success: [ksp-master-1]
10:59:29 CST [SaveKubeConfigModule] Save kube config as a configmap
10:59:29 CST success: [LocalHost]
10:59:29 CST [AddonsModule] Install addons
10:59:29 CST success: [LocalHost]
10:59:29 CST Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.
Please check the result using the command:
kubectl get pod -A
4. Verify K8s cluster
4.1 kubectl command line to verify cluster status
This section only takes a brief look at the basic status; it is not comprehensive. You can explore more details on your own.
- View cluster node information
Run the kubectl command on the master-1 node to obtain the list of available nodes on the K8s cluster.
kubectl get nodes -o wide
As shown in the output, the current K8s cluster has three available nodes, and the output also lists each node's internal IP, role, K8s version number, container runtime and version, OS type, kernel version, and other information.
[root@ksp-master-1 kubekey]# kubectl get nodes -o wide
NAME           STATUS   ROLES                  AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                    KERNEL-VERSION                       CONTAINER-RUNTIME
ksp-master-1   Ready    control-plane,worker   9m43s   v1.28.8   192.168.9.131   <none>        openEuler 22.03 (LTS-SP3)   5.10.0-182.0.0.95.oe2203sp3.x86_64   containerd://1.7.13
ksp-master-2   Ready    control-plane,worker   8m8s    v1.28.8   192.168.9.132   <none>        openEuler 22.03 (LTS-SP3)   5.10.0-182.0.0.95.oe2203sp3.x86_64   containerd://1.7.13
ksp-master-3   Ready    control-plane,worker   8m9s    v1.28.8   192.168.9.133   <none>        openEuler 22.03 (LTS-SP3)   5.10.0-182.0.0.95.oe2203sp3.x86_64   containerd://1.7.13
- View Pod list
Enter the following command to get the list of Pods running on the K8s cluster.
kubectl get pods -o wide -A
As you can see in the output, all pods are running.
[root@ksp-master-1 kubekey]# kubectl get pod -A -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP              NODE
kube-system   calico-kube-controllers-64f6cb8db5-fsgnq   1/1     Running   0          4m59s   10.233.84.2     ksp-master-1
kube-system   calico-node-5hkm4                          1/1     Running   0          4m59s   192.168.9.133   ksp-master-3
kube-system   calico-node-wqz9s                          1/1     Running   0          4m59s   192.168.9.132   ksp-master-2
kube-system   calico-node-zzr5n                          1/1     Running   0          4m59s   192.168.9.131   ksp-master-1
kube-system   coredns-76dd97cd74-66k8z                   1/1     Running   0          6m22s   10.233.84.1     ksp-master-1
kube-system   coredns-76dd97cd74-94kvl                   1/1     Running   0          6m22s   10.233.84.3     ksp-master-1
kube-system   kube-apiserver-ksp-master-1                1/1     Running   0          6m39s   192.168.9.131   ksp-master-1
kube-system   kube-apiserver-ksp-master-2                1/1     Running   0          4m52s   192.168.9.132   ksp-master-2
kube-system   kube-apiserver-ksp-master-3                1/1     Running   0          5m9s    192.168.9.133   ksp-master-3
kube-system   kube-controller-manager-ksp-master-1       1/1     Running   0          6m39s   192.168.9.131   ksp-master-1
kube-system   kube-controller-manager-ksp-master-2       1/1     Running   0          4m58s   192.168.9.132   ksp-master-2
kube-system   kube-controller-manager-ksp-master-3       1/1     Running   0          5m5s    192.168.9.133   ksp-master-3
kube-system   kube-proxy-2xpq4                           1/1     Running   0          5m3s    192.168.9.131   ksp-master-1
kube-system   kube-proxy-9frmd                           1/1     Running   0          5m3s    192.168.9.133   ksp-master-3
kube-system   kube-proxy-bhg2k                           1/1     Running   0          5m3s    192.168.9.132   ksp-master-2
kube-system   kube-scheduler-ksp-master-1                1/1     Running   0          6m39s   192.168.9.131   ksp-master-1
kube-system   kube-scheduler-ksp-master-2                1/1     Running   0          4m59s   192.168.9.132   ksp-master-2
kube-system   kube-scheduler-ksp-master-3                1/1     Running   0          5m5s    192.168.9.133   ksp-master-3
kube-system   nodelocaldns-gl6dc                         1/1     Running   0          6m22s   192.168.9.131   ksp-master-1
kube-system   nodelocaldns-q45jf                         1/1     Running   0          5m9s    192.168.9.133   ksp-master-3
kube-system   nodelocaldns-rskk5                         1/1     Running   0          5m8s    192.168.9.132   ksp-master-2
- View Image List
Enter the following command to obtain the list of images that have been downloaded on the K8s cluster node.
[root@ksp-master-1 kubekey]# crictl images
IMAGE                                                                    TAG       IMAGE ID        SIZE
registry.cn-beijing.aliyuncs.com/kubesphereio/cni                        v3.27.3   6527a35581401   88.4MB
registry.cn-beijing.aliyuncs.com/kubesphereio/coredns                    1.9.3     5185b96f0becf   14.8MB
registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache         1.22.20   ff71cd4ea5ae5   30.5MB
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver             v1.28.8   e70a71eaa5605   34.7MB
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager    v1.28.8   e5ae3e4dc6566   33.5MB
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers           v3.27.3   3e4fd05c0c1c0   33.4MB
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy                 v1.28.8   5ce97277076c6   28.1MB
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler             v1.28.8   ad3260645145d   18.7MB
registry.cn-beijing.aliyuncs.com/kubesphereio/node                       v3.27.3   5c6ffd2b2a1d0   116MB
registry.cn-beijing.aliyuncs.com/kubesphereio/pause                      3.9       e6f1816883972   321kB
So far, we have completed the deployment of a minimal K8s cluster with three Master nodes that also serve as Worker nodes.
Next, we will deploy a simple Nginx web server on the K8s cluster to test and verify whether the K8s cluster is normal.
5. Deploy test resources
This example uses command line tools to deploy an Nginx web server on a K8s cluster.
5.1 Create Nginx Deployment
Run the following command to create a Deployment for the Nginx web server. In this example, we create a Deployment with two replicas based on the nginx:alpine image.
kubectl create deployment nginx --image=nginx:alpine --replicas=2
5.2 Create Nginx Service
Create a new K8s Service named nginx, with service type NodePort and external service port 80.
kubectl create service nodeport nginx --tcp=80:80
5.3 Verify Nginx Deployment and Pod
- Run the following commands to view the created Deployment and Pod resources.
kubectl get deployment -o wide
kubectl get pods -o wide
- View the results as follows:
[root@ksp-master-1 kubekey]# kubectl get deployment -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
nginx   2/2     2            2           20s   nginx        nginx:alpine   app=nginx
[root@ksp-master-1 kubekey]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP               NODE           NOMINATED NODE   READINESS GATES
nginx-6c557cc74d-tbw9c   1/1     Running   0          23s   10.233.102.187   ksp-master-2   <none>           <none>
nginx-6c557cc74d-xzzss   1/1     Running   0          23s   10.233.103.148   ksp-master-1   <none>           <none>
5.4 Verify Nginx Service
Run the following command to view the list of available services. In the list, we can see that the nginx service is of type NodePort and opens port 30619 on the Kubernetes hosts.
kubectl get svc -o wide
View the results as follows:
[root@ksp-master-1 kubekey]# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE     SELECTOR
kubernetes   ClusterIP   10.233.0.1     <none>        443/TCP        4d22h   <none>
nginx        NodePort    10.233.14.48   <none>        80:30619/TCP   5s      app=nginx
5.5 Verify the service
Run the following command to access the deployed Nginx service and verify whether the service is deployed successfully.
- Verify direct access to Pod
curl 10.233.102.187
# The result is as follows
[root@ks-master-1 ~]# curl 10.233.102.187
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
- Verify access to Service
curl 10.233.14.48
# Same output as above, omitted
- Verify access to the NodePort
curl 192.168.9.131:30619
# Same output as above, omitted
6. Automated Shell Scripts
All the steps in this article have been compiled into automated scripts, which are not shown here in full due to space limitations; a condensed sketch follows.
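As an illustration only (this is a sketch, not the author's actual script), a per-node initialization script covering sections 1 and 2 might look like the following; the host name argument and the /dev/sdb data disk follow the assumptions made earlier in this article.
#!/usr/bin/env bash
# init-node.sh -- illustrative per-node initialization sketch, not the author's actual script
set -euo pipefail

NODE_NAME="$1"    # e.g. ksp-master-1

# Section 1: basic OS configuration
hostnamectl hostname "${NODE_NAME}"
echo "nameserver 114.114.114.114" > /etc/resolv.conf
timedatectl set-timezone Asia/Shanghai
yum install -y chrony
sed -i 's/^pool pool.*/pool cn.pool.ntp.org iburst/g' /etc/chrony.conf
systemctl enable chronyd --now
systemctl stop firewalld && systemctl disable firewalld
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
setenforce 0 || true
yum install -y curl socat conntrack ebtables ipset ipvsadm tar

# Section 2: data disk layout (assumes /dev/sdb)
pvcreate /dev/sdb
vgcreate data /dev/sdb
lvcreate -l 100%VG data -n lvdata
mkfs.xfs /dev/mapper/data-lvdata
mkdir -p /data
mount /dev/mapper/data-lvdata /data/
tail -1 /etc/mtab >> /etc/fstab
mkdir -p /data/openebs/local /data/containerd
ln -s /data/containerd /var/lib/containerd
Run it once on each node before using KubeKey on the deployment node, for example: bash init-node.sh ksp-master-1.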
7. Summary
This article shared the detailed process and precautions for deploying a K8s v1.28.8 cluster on the openEuler 22.03 LTS SP3 operating system using KubeKey, a tool developed by the KubeSphere team.
The main contents are summarized as follows:
- openEuler 22.03 LTS SP3 operating system basic configuration
- LVM disk creation configuration on openEuler 22.03 LTS SP3 operating system
- Use KubeKey to deploy K8s high-availability cluster
- Verification test after K8s cluster deployment is completed
Disclaimer:
- The author's knowledge is limited. Although the content has been verified and checked many times and every effort has been made to ensure its accuracy, there may still be omissions. Advice from industry experts is welcome.
- The content described in this article has only been verified and tested in a hands-on lab environment. Readers may learn from and reference it, but it is strictly prohibited to use it directly in production environments. The author is not responsible for any problems caused by doing so!