May 22, 2021 - Docker: From Entry to Practice
No good project can do without a sound architecture and blueprint, and in this section we will look at how Kubernetes plans its architecture. To understand and use Kubernetes, we first need to understand its basic concepts and functions.
In early versions, the cluster is managed through the kubecfg command.
In Kubernetes, nodes are where the actual work runs; they were formerly called Minions. A node can be a virtual machine or a physical machine, depending on the cluster environment. Each node runs the services necessary to host container groups, and nodes are managed through the primary (master) node. The necessary services include docker, kubelet, and a network proxy.
Node status describes the current state of a node. It currently contains three pieces of information: host IP, node phase (life cycle), and node condition.
The host IP must be queried from the cloud platform, and Kubernetes saves it as part of the node status. If Kubernetes is not running on a cloud platform, a node ID is required instead. IP addresses can vary and come in many types, such as public IP, private IP, dynamic IP, IPv6, and so on.
Typically, a node's life cycle goes through the phases Pending, Running, and Terminated. If Kubernetes discovers a node and it is available, Kubernetes marks it Pending; then, at some point, Kubernetes marks it Running. The final phase of a node's cycle is Terminated. A node that has been terminated no longer accepts or schedules any requests, and container groups already running on it are deleted.
Node condition is primarily used to describe a node in the Running phase. The current conditions are NodeReachable and NodeReady; additional conditions may be added in the future. NodeReachable means the node can be reached from the cluster. NodeReady means that the kubelet returns StatusOK, i.e. its HTTP status check reports healthy.
Nodes are not created by Kubernetes but by the cloud platform, or as physical or virtual machines. In Kubernetes, a node is just a record; after the node is created, Kubernetes checks whether it is available. In Kubernetes, a node is saved in the following structure:
{
  "id": "10.1.2.3",
  "kind": "Minion",
  "apiVersion": "v1beta1",
  "resources": {
    "capacity": {
      "cpu": 1000,
      "memory": 1073741824
    }
  },
  "labels": {
    "name": "my-first-k8s-node"
  }
}
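As a quick sanity check, the node record above can be built and serialized in a few lines of Python. This is an illustrative sketch; make_node is a hypothetical helper, not part of Kubernetes:

```python
import json

def make_node(node_id, cpu_millis, memory_bytes, name):
    """Build a v1beta1-style node (Minion) record like the one above.
    Hypothetical helper for illustration only."""
    return {
        "id": node_id,
        "kind": "Minion",
        "apiVersion": "v1beta1",
        "resources": {"capacity": {"cpu": cpu_millis, "memory": memory_bytes}},
        "labels": {"name": name},
    }

# 1 << 30 bytes == 1073741824, the 1 GiB capacity shown above
node = make_node("10.1.2.3", 1000, 1 << 30, "my-first-k8s-node")
print(json.dumps(node, indent=2))
```

Note that the record, unlike the hand-written version many tutorials show, must be valid JSON: no trailing commas after the last member of an object.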
Kubernetes uses the id to check whether a node is available. In the current version, there are two interfaces for managing nodes: the node controller and manual management through kubectl.
In the Kubernetes primary node, the node controller is the component that manages nodes. Its main work is as follows:
The node controller runs a synchronization loop that polls the cloud platform for virtual instances and creates or deletes node records based on their state. The polling interval can be set with the --node_sync_period flag. If an instance has been created, the node controller creates a record for it; similarly, when a node is deleted, the node controller removes its record. The initial set of nodes can be specified with the --machines flag when Kubernetes starts, or nodes can be added one by one with kubectl; both have the same effect. You can also set --sync_nodes=false to prevent node synchronization in the cluster and add or remove nodes manually.
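The synchronization loop described above can be sketched roughly as follows. This is an illustrative model only; cloud_instances and node_store are stand-ins, not real Kubernetes APIs:

```python
def sync_nodes(cloud_instances, node_store):
    """One pass of a node-controller-style synchronization loop:
    create a record for each newly seen instance and delete records
    for instances that have vanished. (Sketch, not Kubernetes source.)"""
    current = set(cloud_instances)
    for node_id in current - set(node_store):
        node_store[node_id] = {"id": node_id, "kind": "Minion"}  # new instance: create record
    for node_id in set(node_store) - current:
        del node_store[node_id]  # instance is gone: delete record
    return node_store

store = {"10.1.2.3": {"id": "10.1.2.3", "kind": "Minion"}}
sync_nodes(["10.1.2.3", "10.1.2.4"], store)  # a record for 10.1.2.4 is created
```

In the real controller this pass would repeat every --node_sync_period; here it is a single call so the diff-and-reconcile idea stays visible.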
In Kubernetes, the basic working unit is the container group (pod): the smallest unit of creation, scheduling, and management.
A container group is a packaged collection for a specific application: it contains one or more Docker containers that share volumes (mount points).
Like a running container, a container group is considered to have only a short lifetime. Container groups are scheduled onto nodes and run there until the containers' life cycle ends or the group is deleted. If a node dies, the container groups running on it are deleted rather than rescheduled. (A way to move container groups may be added in a future release.)
Container groups exist primarily for data sharing and communication between their containers.
Within a container group, containers share the same network address and port space and can communicate with each other over the local network. Each container group has its own IP, which other physical hosts or containers can use to communicate with it over the network.
Container groups have a set of storage volumes (mount points), so that containers can restart without losing data.
A container group is a high-level abstraction for management and deployment, and it is also the interface to a group of containers. Container groups are the smallest unit of deployment and horizontal scaling.
Container groups can be combined to build complex applications. Part of their original motivation is the question: why not run multiple programs in a single container?
This section will briefly describe container state types, the container group life cycle, events, restart policies, and the replication controller.
Pending: the container group has been accepted by a node, but one or more containers have not yet started. This includes the time a node spends downloading images, which depends on the network.
Running: the container group has been scheduled to a node and all containers have been created. At least one container is running (or restarting).
Succeeded: all containers exited normally.
Failed: all containers in the container group terminated unexpectedly.
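The four states above can be summarized as a small decision function. This is a sketch of the stated rules, not actual Kubernetes logic; the container representation is an assumption made for illustration:

```python
def pod_phase(containers):
    """Derive a container group's phase from its containers' states.
    Each container is assumed to be a dict like
    {"state": "waiting" | "running" | "exited", "exit_code": int}.
    Sketch of the rules above, not real Kubernetes code."""
    states = [c["state"] for c in containers]
    if any(s == "waiting" for s in states):
        return "Pending"    # e.g. an image is still downloading
    if any(s == "running" for s in states):
        return "Running"    # at least one container is running
    if all(c.get("exit_code", 1) == 0 for c in containers):
        return "Succeeded"  # all containers exited normally
    return "Failed"         # all exited, at least one abnormally
```

For example, a group with one running container and one that exited cleanly is still Running, while a group whose only container exited with a nonzero code is Failed.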
In general, a container group is not destroyed automatically once created; it is only removed by some action, triggered either by a person or by the replication controller. The exceptions are when the container group exits successfully into the Succeeded state, or fails too many retries within a period of time.
If a node dies or becomes unreachable, the node controller marks the container groups on that node as Failed.
The following examples show how the restart policy (Always, OnFailure, Never) affects a container group's state after an event:

Running; 1 container, the container exits normally (a completion event is logged):
  Always → Running; OnFailure → Succeeded; Never → Succeeded

Running; 1 container, the container exits abnormally (a failure event is logged):
  Always → Running; OnFailure → Running; Never → Failed

Running; 2 containers, 1 container exits abnormally (a failure event is logged):
  Always → Running; OnFailure → Running; Never → Running (the group keeps running)

Running; 2 containers, both containers have exited (failure events are logged):
  Always → Running; OnFailure → Running; Never → Failed

Running; a container runs out of memory (the container is marked as failed with an error interrupt):
  Always → Running; OnFailure → Running; Never → an error event is logged and the group becomes Failed

Running; a disk dies (all containers are killed and the event is logged): the group becomes Failed; if the group runs under a controller, it is recreated elsewhere.

Running; the node is partitioned from the cluster (the node controller waits until a timeout): the group is marked Failed; if the group runs under a controller, it is recreated elsewhere.
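The single-container rows above can be expressed as a small decision function. This is a sketch of the table under the stated policies, not Kubernetes source code:

```python
def after_container_exit(policy, exit_ok):
    """Outcome for a one-container group when its container exits,
    per the table above. Returns (restart_container, resulting_phase).
    Sketch for illustration; policy is "Always", "OnFailure", or "Never"."""
    if policy == "Always":
        return True, "Running"              # always restart the container
    if policy == "OnFailure":
        if exit_ok:
            return False, "Succeeded"       # clean exit: the group is done
        return True, "Running"              # failure: restart the container
    if policy == "Never":
        return False, ("Succeeded" if exit_ok else "Failed")
    raise ValueError("unknown restart policy: %r" % policy)
```

So under OnFailure a crashing container is restarted and the group stays Running, while under Never the same crash moves the whole group to Failed.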