Blockbuster! The Nginx all-in-one notes that went viral on Huawei's internal forum are finally out, with over 10,000 bookmarks

Hello everyone! Today I will share Nginx with all of you, so please take out a notebook and write this down!

1. Introduction to Nginx

1.1 Nginx overview

Nginx ("engine x") is a high-performance HTTP and reverse proxy server, which is characterized by less memory and strong concurrency. In fact, the concurrency of nginx does perform better in the same type of web server. It is used in mainland China Users of nginx website include: Baidu, Jingdong, Sina, Netease, Tencent, Taobao, etc.

1.2 Nginx as a web server

Nginx can serve as a web server for static pages, and it also supports dynamic languages that use the CGI protocol, such as perl and php. It does not run Java directly, however; Java programs can only be served in cooperation with Tomcat. Nginx was developed specifically with performance optimization in mind: performance is its most important consideration, the implementation focuses on efficiency, and it withstands high load well. Reports indicate that it can support up to 50,000 concurrent connections.

Official website: nginx.org

1.3 Forward proxy

Nginx can act not only as a reverse proxy and load balancer, but also as a forward proxy, for example to give clients access to the Internet. Forward proxy: if you think of the Internet outside the local area network as a huge resource library, clients inside the LAN have to reach the Internet through a proxy server; this kind of proxying is called a forward proxy.
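
A minimal sketch of a forward-proxy configuration (nginx has no built-in CONNECT support, so this forwards plain HTTP only; the listen port and resolver address are placeholders chosen for illustration):

server {
    listen 8888;                             # clients point their HTTP proxy setting at this port
    resolver 8.8.8.8;                        # a resolver is required because $host is resolved per request

    location / {
        proxy_pass http://$host$request_uri; # forward to whichever host the client requested
    }
}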

1.4 Reverse proxy

Reverse proxy: here the client is unaware of the proxy, because it can access the service without any configuration. We only need to send requests to the reverse proxy server; the reverse proxy server selects a target server, obtains the data, and returns it to the client. To the outside, the reverse proxy server and the target server appear as a single server: the proxy server's address is exposed, while the real server's IP address is hidden.

1.5 Load balancing

The client sends multiple requests to the server; the server processes them, some of which may require interacting with the database, and returns the results to the client.

This architecture suits early-stage systems that are relatively simple and handle few concurrent requests, and it is cheap. However, as the amount of information keeps growing, traffic and data volumes increase rapidly, and business logic becomes more complex, this architecture makes the server respond to client requests ever more slowly, and under particularly heavy concurrency the server can easily crash outright. This is clearly a problem caused by a server performance bottleneck, so how do we solve it?

The first thing that comes to mind may be to upgrade the server's configuration: raise the CPU clock rate, add memory, and so on, improving the machine's physical performance. But we know that Moore's law is losing its validity, and hardware performance gains can no longer keep up with ever-growing demand. The most obvious example is Tmall's Double Eleven: the instantaneous traffic to a hot-selling product is enormous, and with an architecture like the one above, even adding top-end physical hardware to the existing machine cannot meet the demand. So what do we do?

The analysis above rules out increasing the server's physical configuration, so scaling vertically does not work. How about increasing the number of servers horizontally? This is where the concept of a cluster comes in: when a single server cannot cope, we add servers and distribute requests among them, so that load which used to concentrate on one server is instead spread across multiple servers. Distributing load across different servers is what we call load balancing.

1.6 Dynamic and static separation

To speed up a website's response, dynamic pages and static pages can be parsed by different servers, which accelerates parsing and reduces the pressure on what used to be a single server.

2. Commonly used commands and configuration files of nginx

2.1 Commonly used commands in nginx:

Start command

Execute ./nginx in the /usr/local/nginx/sbin directory.

Close command

Execute ./nginx -s stop in the /usr/local/nginx/sbin directory.

Reload command

Execute ./nginx -s reload in the /usr/local/nginx/sbin directory.
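
For convenience, the same commands in a single shell session (the -s quit and -v flags are standard nginx options not mentioned above; the path assumes the default installation prefix used throughout this note):

cd /usr/local/nginx/sbin
./nginx             # start
./nginx -s stop     # stop immediately
./nginx -s quit     # shut down gracefully once in-flight requests finish
./nginx -s reload   # re-read nginx.conf without dropping connections
./nginx -v          # print the nginx version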

2.2 nginx.conf configuration file

In the nginx installation directory, the default configuration files are placed in the conf subdirectory, and the main configuration file nginx.conf is among them. Subsequent use of nginx mostly comes down to modifying this configuration file.

The configuration file contains many lines beginning with #, which marks comments. Removing everything that starts with # leaves the following simplified content:

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    sendfile on;
    keepalive_timeout 65;

    server {
        listen 80;
        server_name localhost;

        location / {
            root html;
            index index.html index.htm;
        }
    }
}

Based on the file above, we can clearly divide the nginx.conf configuration file into three parts:

2.2.1 Part 1: Global Block

The global block runs from the start of the configuration file to the events block. It sets directives that affect the overall operation of the nginx server, mainly including the user (and group) that runs the Nginx server, the number of worker processes allowed, the storage path of the process PID, the storage path and type of the logs, and the inclusion of other configuration files.

For example, the configuration in the first line above:

worker_processes 1;

This is the key directive for the Nginx server's concurrent processing: the larger the worker_processes value, the more concurrent processing it can support, though the value is constrained by the hardware, the software, and other factors.

2.2.2 Part 2: events block

The directives in the events block mainly affect the network connections between the Nginx server and its users. Common settings include whether to serialize the accepting of network connections across multiple worker processes, whether a worker may accept multiple connections at the same time, which event-driven model handles connection requests, and the maximum number of connections each worker process may hold simultaneously.

events {
    worker_connections 1024;
}

The example above states that the maximum number of connections supported by each worker process is 1024.

This part of the configuration has a considerable impact on Nginx's performance and should be tuned flexibly in practice.

2.2.3 Part 3: http block

This is the most frequently modified part of the Nginx server configuration. Most features, such as proxying, caching, and log definition, as well as the configuration of third-party modules, are found here.

Note that the http block can in turn be divided into an http global block and server blocks.

http global block:

The directives of the http global block cover file inclusion, MIME-TYPE definitions, log customization, connection timeouts, the upper limit of requests per connection, and so on.

server block:

This part is closely related to virtual hosts. From the user's point of view, a virtual host is exactly the same as an independent hardware host; the technology was created to save on Internet server hardware costs.

  • Each http block can contain multiple server blocks, and each server block is equivalent to one virtual host.

  • Each server block in turn consists of a global server block and can contain multiple location blocks at the same time.

1. Global server block

The most common configuration here is the virtual host's listening configuration and the virtual host's name or IP configuration.

2. location block

A server block can contain multiple location blocks.

The main function of this block is to match the request string received by the Nginx server (such as server_name/uri-string): the part other than the virtual host name (or IP alias), i.e. the /uri-string above, is matched against location patterns in order to process specific requests. Address redirection, data caching, and response control, as well as the configuration of many third-party modules, are also carried out here.
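
A minimal sketch of a server block containing one location block (the domain name and paths are made up for illustration):

server {
    listen 80;                    # global server block: the port to listen on
    server_name example.com;      # global server block: the virtual host name

    location / {                  # matches every URI under /
        root /data/www;           # serve files from this directory
        index index.html;
    }
}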

3. Nginx configuration example-reverse proxy

3.1 Reverse proxy example one

Effect to achieve: using an nginx reverse proxy, visiting www.fanxiangdaili.com lands directly on 127.0.0.1:8080.

Map www.fanxiangdaili.com to 127.0.0.1 by modifying the local hosts file.

After that configuration, we can reach the initial Tomcat page through www.fanxiangdaili.com . But how does simply typing www.fanxiangdaili.com bring up the initial Tomcat page? Through nginx's reverse proxy.

Add the following configuration to the nginx.conf configuration file. Note which configuration file nginx starts with by default: if nginx is started with the default configuration file, that default file is the one you need to modify.
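
A minimal sketch of that configuration, based on the description that follows:

server {
    listen 80;
    server_name www.fanxiangdaili.com;

    location / {
        proxy_pass http://127.0.0.1:8080;   # forward every request to the local Tomcat
    }
}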

As configured above, we listen on port 80 and the access domain name is www.fanxiangdaili.com. Since no port number is appended, the default port 80 is used, so accessing the domain name forwards to 127.0.0.1:8080. Enter www.fanxiangdaili.com in the browser and you will see the Tomcat page.

3.2 Reverse proxy example two

Effect to achieve: using an nginx reverse proxy, forward to different port services depending on the access path, with nginx listening on port 9001.

Visit http://127.0.0.1:9001/edu/ to jump directly to 127.0.0.1:8081

Visit http://127.0.0.1:9001/vod/ to jump directly to 127.0.0.1:8082

Implementation steps:

The first step is to prepare two tomcats, one on port 8081 and one on port 8082, and to prepare the test pages.

The second step is to modify the nginx configuration file, adding a server{} block in the http block.
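
A minimal sketch of that server block, based on the goal above (the ~ modifier used here is explained in the list that follows):

server {
    listen 9001;
    server_name localhost;

    location ~ /edu/ {
        proxy_pass http://127.0.0.1:8081;   # paths containing /edu/ go to the first Tomcat
    }

    location ~ /vod/ {
        proxy_pass http://127.0.0.1:8082;   # paths containing /vod/ go to the second Tomcat
    }
}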

The location directive is used to match URLs. Its general form is location [ = | ~ | ~* | ^~ ] uri { ... }, and the optional modifier works as follows:

  1. =: used before a uri without regular expressions; the request string must match the uri exactly. If the match succeeds, the search stops and the request is processed immediately.
  2. ~: indicates that the uri contains a regular expression and matching is case sensitive.
  3. ~*: indicates that the uri contains a regular expression and matching is case insensitive.
  4. ^~: used before a uri without regular expressions; it asks the Nginx server to find the location whose uri most closely matches the request string and use that location to process the request immediately, instead of then matching the request string against the regular-expression uris in the location blocks.

Note: If the uri contains a regular expression, it must be marked with ~ or ~*.

4. Nginx configuration example-load balancing

Implementation steps:

1) First prepare two Tomcats that are started at the same time

2) Configure in nginx.conf
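
A minimal sketch of the load-balancing configuration (the upstream name myserver and the server addresses are assumptions for illustration; the strategies listed below all plug into the upstream block):

upstream myserver {
    server 192.168.5.21:8080;
    server 192.168.5.22:8080;
}

server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://myserver;   # distribute requests across the upstream group
    }
}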

With the explosive growth of Internet information, load balancing is no longer an unfamiliar topic. As the name suggests, load balancing distributes load across different service units, guaranteeing both the availability of the service and a response fast enough to give users a good experience. The rapid growth of traffic and data volume has spawned a variety of load-balancing products: many professional load-balancing appliances work well but are expensive, which has made load-balancing software very popular, and nginx is one of them. Under Linux, services such as Nginx, LVS, and Haproxy can provide load balancing, and Nginx offers several distribution methods (strategies):

1. Round robin (the default)

Each request is assigned to a different backend server in turn, in chronological order; if a backend server goes down, it is automatically removed.

2. weight

weight stands for the server's weight; the default is 1. The weight specifies the polling probability and is proportional to the share of requests a server receives, which is useful when backend server performance is uneven: the higher the weight, the more clients are directed to that server. For example:

upstream server_pool {
    server 192.168.5.21 weight=10;
    server 192.168.5.22 weight=10;
}

3. ip_hash: each request is assigned according to the hash of the client's access IP, so that each visitor consistently reaches the same backend server, which can solve the session-sharing problem. For example:

upstream server_pool {
    ip_hash;
    server 192.168.5.21:80;
    server 192.168.5.22:80;
}

4. fair (third-party): requests are allocated according to the backend servers' response times; servers with shorter response times are given priority. For example:

upstream server_pool {
    server 192.168.5.21:80;
    server 192.168.5.22:80;
    fair;
}

5. Nginx configuration example-dynamic and static separation

Nginx dynamic and static separation simply means handling dynamic and static requests separately; it should not be understood as merely physically separating dynamic pages from static pages. Strictly speaking, it separates dynamic requests from static ones, which can be understood as letting Nginx serve static pages while Tomcat handles dynamic pages. In terms of current practice there are roughly two approaches: one is to put static files under a separate domain name on a dedicated server, which is today's mainstream recommendation; the other is to publish dynamic and static files mixed together and separate them with nginx.

Different request forwarding is achieved by matching different suffixes with location. With the expires parameter you can set a browser cache expiry time, reducing requests and traffic to the server. Concretely, Expires gives a resource an expiry date, meaning the browser does not need to verify with the server within that period; it can check expiry by itself, so no extra traffic is generated. This method is very suitable for resources that change infrequently (if a file is updated often, Expires caching is not recommended). I set 3d here, meaning: within these 3 days, a request for this URL is sent and the file's last modification time is compared with the server's. If the file is unchanged, the server returns status code 304; if it was modified, the file is downloaded from the server again with status code 200.

Find the nginx installation directory and open the conf/nginx.conf configuration file.

The key point is to add location blocks.
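
A sketch of those location blocks (the /data/www and /data/image directories are assumptions for illustration; expires 3d matches the setting described above):

location /www/ {
    root /data/;                  # static pages are served from /data/www/
    index index.html index.htm;
}

location /image/ {
    root /data/;                  # images are served from /data/image/
    autoindex on;                 # list the directory contents in the browser
    expires 3d;                   # let the browser cache images for three days
}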

Finally, check that the Nginx configuration is correct, then test whether the dynamic and static separation succeeded: delete a static file on the backend tomcat server and see whether it is still accessible. If it is, the static resource was returned directly by nginx rather than by the backend tomcat server.

6. Nginx principles and optimized parameter configuration

master-workers mechanism:

Benefits of the master-workers mechanism

First, for each worker process, being an independent process means no locking is needed, so the overhead locks would bring is saved, and programming and troubleshooting become much easier.

Second, because the processes are independent, they do not affect one another: after one process exits, the other processes keep working and the service is not interrupted, while the master process quickly starts a new worker process. Of course, a worker exiting abnormally means there is a bug in the program. An abnormal exit fails all requests on that worker, but not all requests overall, so the risk is reduced.

How many workers should be configured?

Like redis, nginx uses an I/O multiplexing mechanism. Each worker is an independent process, but each process has only one main thread, which handles requests in an asynchronous, non-blocking way, so even many thousands of requests are no problem. Each worker can push one CPU to its full performance. Therefore, setting the number of workers equal to the number of CPUs on the server is most appropriate: setting fewer wastes CPU, while setting more incurs the cost of the CPU frequently switching contexts.

Number of connections: worker_connections

This value is the maximum number of connections each worker process can establish, so the maximum number of connections an nginx server can establish is worker_connections * worker_processes. That is the limit on connections. For HTTP requests for local resources, the maximum supported concurrency is worker_connections * worker_processes. If the browser uses HTTP/1.1, each visit occupies two connections, so for ordinary static access the maximum concurrency is worker_connections * worker_processes / 2. And if HTTP is used as a reverse proxy, the maximum concurrency is worker_connections * worker_processes / 4, because as a reverse proxy server each concurrent request establishes a connection with the client and a connection with the backend service, occupying two connections.
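
A short worked example, assuming a 4-core server and the formulas above:

worker_processes 4;              # one worker per CPU core
events {
    worker_connections 1024;     # per-worker connection limit
}
# Total connections:            4 * 1024     = 4096
# Static access over HTTP/1.1:  4 * 1024 / 2 = 2048 concurrent clients
# As a reverse proxy:           4 * 1024 / 4 = 1024 concurrent clients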

Okay, that's all for today's article. I hope it can help those of you sitting confused in front of the screen.