Jun 01, 2021 Article blog
Reproduced from the WeChat public account: Little Sister Taste (xjjdog)
Today's services — whether they speak the HTTP protocol, the WebSocket protocol, or even the MQTT protocol — almost inevitably sit behind Nginx.
As the saying goes, illness comes in through the mouth, and disaster comes out of it. As the entry point, Nginx carries a heavy responsibility. If it stops working at some point, it's a disaster.
How do I keep Nginx highly available? That's a problem. No matter what scheme you use, the entry point always seems to end up as a single point, which is very distressing.
So-called high availability comes in no more than two flavors. One is to build it into the component itself; the other is to add a middle tier. And we usually want load balancing along with the high availability — wanting both the cat and the dog, greedy indeed.
Whenever we can't solve a problem, we add a middle tier and pin our hopes on this new component.
If that middle tier doesn't solve the problem, we add another one. Layer upon layer, the system's high-availability architecture ends up very complex.
The first way, of course, is DNS. By binding multiple Nginx IP addresses to a single DNS name, you get high availability — and load balancing into the bargain.
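As a sketch, this is just multiple A records for the same name in a BIND-style zone file (the domain, TTL, and IPs below are illustrative, not from the original article):

```
; Hypothetical zone fragment: two A records for one name.
; Resolvers receive both addresses and typically rotate them,
; giving crude round-robin load balancing across two Nginx hosts.
www.example.com.  60  IN  A  203.0.113.10
www.example.com.  60  IN  A  203.0.113.11
```

A low TTL (60 seconds here) shortens the window during which clients keep using a stale address, but as explained below, caching means it never fully disappears.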
But there's a fatal problem with this approach: the time it takes to perceive a fault.
Before our browser can reach the real Nginx, it must resolve the domain name into a real IP address, and that DNS resolution (解析) costs 20-120ms per lookup.
To speed up resolution, there are generally multiple levels of cache. Browsers have DNS caches, your PC's operating system has one, ISP resolvers have caches, and some enterprises add yet more cache layers in between to speed up access to their own DNS servers.
A DNS query only reaches the authoritative server when every cache misses. So if one of the Nginx machines dies, fault perception through DNS is particularly slow: until the caches expire, some portion of user requests will keep landing on the dead machine.
As we said earlier: if you can't solve the problem, add an intermediate layer — even a hardware one, like F5. Ordinary enterprises can't afford this kind of setup; only companies with deep pockets (and generous procurement kickbacks) go for it. Internet companies rarely use it, so I won't dwell on it.
Of course, the F5 also has the single-point problem. Hardware is certainly a bit more stable than software, but it remains a hidden danger. Just as Oracle, no matter how good it is, can still fail — you have to be prepared.
Some vendors, when selling the hardware, recommend you buy three at a time! Why is that? There's a reason.
One unit is in service and two are backups. When the serving machine has a problem, one of the backups becomes the new primary, the other is still a spare, and the cluster remains highly available.
What an intoxicating argument. By that logic, meet a fool and you could sell him a hundred units!
If hardware won't do, go soft. Use an active-standby (master-backup) setup and let software handle the failover.
As shown in the figure, the simplest high-availability configuration can be completed with the keepalived component, via the VRRP protocol.
We bind the DNS record to a VIP (virtual IP). When the Nginx currently in service has a problem, the VIP drifts (漂移) over to the other Nginx.
As you can see, the backup Nginx normally serves no traffic at all — it is also called a shadow node (影子节点) — and only becomes useful when the primary Nginx fails.
If you have many nodes, this mode wastes a lot of capacity.
Besides the waste, there is an even bigger problem: a single Nginx always has an upper limit, no matter how powerful it is. When network card traffic peaks, where do we go next?
This model alone is certainly not sufficient.
At this point, we can combine DNS resolution with the active-standby mode. As shown below, DNS resolves to two VIPs, and each VIP is itself highly available. This reduces downtime and ensures the high availability of each component.
The architecture is very clear, but the shadow nodes are still wasted.
LVS is short for Linux Virtual Server. LVS is part of the standard Linux kernel: its functional modules are fully built into the kernel, so you can use the features LVS provides directly, with no kernel patching required.
LVS works at Layer 4 of the OSI model — the transport layer, i.e. TCP/UDP — so it cannot recognize 7-layer protocols such as HTTP.
That is, we can't use the contents of the HTTP protocol to control routing; LVS routes at a lower level.
As shown below, a server cluster built with LVS consists of three parts: the load balancer (the Director), the pool of real servers, and shared storage.
Of LVS's forwarding modes, DR (Direct Routing) mode returns response packets directly to the user's browser, which keeps the load balancer's network card bandwidth from becoming a bottleneck. It is currently the most widely used (exact figures unknown — FULLNAT mode is also widely used).
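As a sketch, setting up a DR-mode virtual service with the standard ipvsadm tool might look like this (the VIP and real-server IPs are illustrative; requires root and the ip_vs kernel module):

```
# Create a virtual service on the VIP, round-robin scheduling (-s rr)
ipvsadm -A -t 192.168.1.100:80 -s rr

# Add the two Nginx hosts as real servers in DR mode
# (-g = gatewaying, i.e. direct routing)
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.11:80 -g
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.12:80 -g
```

Note that in DR mode each real server must also hold the VIP on a loopback interface and suppress ARP replies for it, so that responses can go straight back to the client without passing through the Director.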
Therefore, combining DNS load balancing with LVS load balancing gives double load balancing and high availability.
As shown, DNS binds requests to the VIPs. Because LVS DR mode is very efficient — only inbound traffic passes through LVS — the network card needs a very large volume of requests to hit a bottleneck, so LVS alone is generally enough to load balance the Nginx tier. If LVS itself becomes the bottleneck, you can spread load again at the DNS level.
In fact, most of the schemes above live within a single data center. Across multiple data centers, letting users pick the fastest node while still keeping the load balanced is a big problem. In addition, as you can see, packets are forwarded and coordinated across several layers, a variety of load-balancing algorithms are involved, and session persistence is a challenge in its own right. In general, layer-4 session persistence keys on the client's IP address, while layer-7 persistence uses cookies or header information.
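In Nginx configuration, for example, IP-based stickiness is a single directive in the upstream block (open-source Nginx has no built-in cookie stickiness; that requires NGINX Plus or a third-party module). The names and addresses below are illustrative:

```
upstream backend {
    ip_hash;                 # same client IP -> same backend server
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```

Note that `ip_hash` breaks down when many clients arrive from behind one NAT or proxy IP — they all hash to the same backend.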
Developers generally don't get to touch entry-level infrastructure like this — but if you do, you can easily get caught up in the mess.
This article is based on some of xjjdog's experience, which I hope will help when your company needs a high-availability plan (方案). What is a plan? All you have to do is cajole your leader into feeling that he agrees. Whether it actually gets done, and how, comes later.
Don't you see — after all this rambling, many enterprises can in fact conquer the world with a single nginx.
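Indeed, for many workloads a complete, minimal load-balancing nginx.conf is all it takes — something as small as this sketch (ports and upstream name are hypothetical):

```
# Minimal nginx.conf: one entry point, round-robin across two app servers
events {}

http {
    upstream app {
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;   # round-robin by default
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app;
        }
    }
}
```

Everything else in this article — keepalived, VIPs, LVS, DNS tricks — is what you bolt on once this single box is no longer enough.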
The above is W3Cschool编程狮's look at HA (high availability) — like a set of nesting dolls, you peel off one layer after another. I hope it helps you.