Explain in detail the "microservice" architecture system-SpringCloud Alibaba!

01 Introduction

The term "microservices" comes from Martin Fowler's blog post Microservices, which can be found on his official blog at http://martinfowler.com/articles/microservices.html. Simply put, microservices are a design style at the system-architecture level whose main purpose is to split an originally monolithic system into multiple small services. These small services run in separate processes and communicate and collaborate through HTTP-based RESTful APIs. Common microservice frameworks include Spring Cloud, Alibaba Dubbo, Huawei ServiceComb, Tencent Tars, Facebook Thrift, and Sina Weibo Motan. In this chapter we get to know the various components that make up a complete system; the next chapter will use these components to build a complete distributed system.

1.1 Spring Cloud

Needless to say, the official website has a detailed introduction.

1.2 Spring Cloud Alibaba

Spring Cloud Alibaba is committed to providing a one-stop solution for microservice development. The project contains the components necessary for developing distributed microservice applications, so that developers can easily use them to build distributed services through the Spring Cloud programming model.

Its main components:

  • Sentinel: takes traffic as an entry point and protects service stability from multiple dimensions such as flow control, circuit breaking/degradation, and system load protection.
  • Nacos: a dynamic service discovery, configuration management, and service management platform that makes it easier to build cloud-native applications.
  • RocketMQ: an open-source distributed messaging system that, based on highly available distributed cluster technology, provides low-latency, highly reliable message publishing and subscription services.
  • Dubbo: Apache Dubbo is a high-performance Java RPC framework.
  • Seata: an open-source Alibaba product; an easy-to-use, high-performance distributed transaction solution for microservices.

02 Service Registration and Discovery

  • Eureka: 2.x has officially been announced as closed source, and earlier versions have stopped being updated; in other words, Eureka will see no further technical improvements. If you want a registry with more powerful features, you need to look elsewhere.
  • Zookeeper: in enterprise registries, Zookeeper is most often combined with Dubbo, and Kafka uses it as well. With Eureka shelved, we can use the spring-cloud-starter-zookeeper-discovery starter to plug Zookeeper in as the Spring Cloud registry.
  • Consul: an excellent service registry written in Go, also widely used.
  • Nacos: from Spring Cloud Alibaba; it has withstood the test of millions of registrations in production. It can not only replace Eureka perfectly but also replace other components, so Nacos is strongly recommended.

2.1 Introduce Nacos as a registration center

(1) Introduction to Nacos

Nacos (Dynamic Naming and Configuration Service) was open-sourced by Alibaba in July 2018 and is dedicated to the discovery, configuration, and management of microservices.

(2) Nacos installation

Single node

```shell
# Download the image
docker pull nacos/nacos-server:1.3.1
# Start the container
docker run --name nacos --env MODE=standalone --privileged=true -p 8848:8848 --restart=always -d nacos/nacos-server:1.3.1
```

access:

http://127.0.0.1:8848/nacos — the account and password are both nacos.

Cluster

Installation prerequisites

  • 64-bit OS: Linux/Unix/Mac; Linux is recommended.
  • The cluster depends on MySQL; a single node does not.
  • Three or more Nacos nodes form a cluster.

(3) Steps to build a Nacos high-availability cluster:

1. Clone the Nacos cluster project nacos-docker from the Nacos official site.
2. nacos-docker uses Docker Compose to orchestrate containers, so Docker Compose must be installed first. For more information, see the Nacos official documentation: https://nacos.io/zh-cn/docs/quick-start-docker.html

1) Install Docker Compose

What is Docker Compose? Compose is an official open-source Docker project responsible for the rapid orchestration of Docker container clusters.

```shell
# Download under Linux (to /usr/local/bin)
curl -L https://github.com/docker/compose/releases/download/1.25.0/run.sh > /usr/local/bin/docker-compose
# Make the file executable
chmod +x /usr/local/bin/docker-compose
# Check the version
docker-compose --version
```

2) Clone the Nacos-docker project

```shell
# Switch to a custom directory
cd /usr/local/nacos
# Clone
git clone https://github.com/nacos-group/nacos-docker.git
```

3) Run the nacos-docker script

```shell
# Run the orchestration
docker-compose -f /usr/local/nacos/nacos-docker/example/cluster-hostname.yaml up
```

The orchestration command above mainly downloads the MySQL and Nacos images and automatically completes image download and container startup: it pulls nacos/nacos-mysql (version 5.7, which executes the initialization script) and nacos/nacos-server:latest (e.g. the nacos3 node).

4) Stop and start

```shell
# Start
docker-compose -f /usr/local/nacos/nacos-docker/example/cluster-hostname.yaml start
# Stop
docker-compose -f /usr/local/nacos/nacos-docker/example/cluster-hostname.yaml stop
```

5) Visit Nacos

```
http://192.168.1.1:8848/nacos
http://192.168.1.1:8849/nacos
http://192.168.1.1:8850/nacos
```

(4) Nacos quick start

Configure service provider

Service providers can register their services with the Nacos server through Nacos's service registration and discovery feature.

Add nacos dependency

```xml
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
    <version>${latest.version}</version>
</dependency>
```

Add configuration

```properties
server.port=8070
spring.application.name=nacos-demo
spring.cloud.nacos.discovery.server-addr=127.0.0.1:8848
```

Start class

```java
@SpringBootApplication
@EnableDiscoveryClient
public class NacosProviderApplication {

    public static void main(String[] args) {
        SpringApplication.run(NacosProviderApplication.class, args);
    }

    @RestController
    class EchoController {
        @RequestMapping(value = "/echo/{string}", method = RequestMethod.GET)
        public String echo(@PathVariable String string) {
            return "Hello Nacos Discovery " + string;
        }
    }
}
```

After starting, the console output indicates that registration succeeded; the service can also be seen in the Nacos console.


Below we use Spring Cloud to integrate Nacos and implement a service call.

Configure service consumers

Add configuration

```properties
server.port=8080
spring.application.name=service-consumer
spring.cloud.nacos.discovery.server-addr=127.0.0.1:8848
```

Add a startup class: the service consumer uses a @LoadBalanced RestTemplate to call the service.

```java
@SpringBootApplication
@EnableDiscoveryClient
public class NacosConsumerApplication {

    @LoadBalanced
    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }

    public static void main(String[] args) {
        SpringApplication.run(NacosConsumerApplication.class, args);
    }

    @RestController
    public class TestController {

        private final RestTemplate restTemplate;

        @Autowired
        public TestController(RestTemplate restTemplate) {
            this.restTemplate = restTemplate;
        }

        @RequestMapping(value = "/echo/{str}", method = RequestMethod.GET)
        public String echo(@PathVariable String str) {
            return restTemplate.getForObject("http://service-provider/echo/" + str, String.class);
        }
    }
}
```

(5) Test

Start ProviderApplication and ConsumerApplication, call http://localhost:8080/echo/2018, and the return content is Hello Nacos Discovery 2018.

03 Solution and Application of Distributed Configuration Center

Currently there are quite a few configuration centers on the market (in chronological order):

  • Disconf: Baidu's configuration management center, open-sourced in July 2014. It has configuration management capabilities, but it is no longer maintained; the most recent commit was 4-5 years ago.
  • Spring Cloud Config: open-sourced in September 2014, a Spring Cloud ecosystem component that integrates seamlessly with the Spring Cloud system.
  • Apollo: Ctrip's configuration management center, open-sourced in May 2016, with features such as standardized permissions and process governance.
  • Nacos: Alibaba's configuration center, open-sourced in June 2018, which can also do DNS and RPC service discovery.

3.1 Introduce Nacos as a distributed configuration center

After starting the Nacos server, you can refer to the following sample code to start the Nacos configuration management service for your Spring Cloud application. For complete sample code, please refer to: nacos-spring-cloud-config-example

  1. Add dependency:

```xml
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
    <version>${latest.version}</version>
</dependency>
```

Note: Version 2.1.x.RELEASE corresponds to Spring Boot 2.1.x version. Version 2.0.x.RELEASE corresponds to Spring Boot 2.0.x version, and version 1.5.x.RELEASE corresponds to Spring Boot 1.5.x version.

For more version correspondences, please refer to: Release Notes Wiki

  2. Configure the address and application name of the Nacos server in bootstrap.properties

```properties
spring.cloud.nacos.config.server-addr=127.0.0.1:8848
spring.application.name=example
```

Note: The reason why spring.application.name needs to be configured is because it forms part of the Nacos configuration management dataId field.

In Nacos Spring Cloud, the complete format of dataId is as follows:

${prefix}-${spring.profiles.active}.${file-extension}
  • The prefix defaults to the value of spring.application.name, which can also be configured through the configuration item spring.cloud.nacos.config.prefix.

  • spring.profiles.active is the profile for the current environment. For details, see the Spring Boot documentation. Note: when spring.profiles.active is empty, the corresponding connector - does not exist either, and the dataId splicing format becomes ${prefix}.${file-extension}.

  • file-extension is the data format of the configuration content and can be configured through the configuration item spring.cloud.nacos.config.file-extension. Currently only the properties and yaml types are supported.
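As a quick illustration, the dataId splicing rules above can be sketched in plain Java (a hypothetical DataIdBuilder helper written for this article; the class and method names are ours, not part of Nacos):

```java
public class DataIdBuilder {

    // Splice a Nacos dataId: ${prefix}-${spring.profiles.active}.${file-extension};
    // when the active profile is empty, the connector '-' is dropped as well.
    public static String dataId(String prefix, String activeProfile, String fileExtension) {
        if (activeProfile == null || activeProfile.isEmpty()) {
            return prefix + "." + fileExtension;
        }
        return prefix + "-" + activeProfile + "." + fileExtension;
    }

    public static void main(String[] args) {
        System.out.println(dataId("example", "dev", "properties")); // example-dev.properties
        System.out.println(dataId("example", "", "properties"));    // example.properties
    }
}
```

So with spring.application.name=example and no active profile, the dataId resolved by the client is example.properties — which is exactly the dataId published in the steps below.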

  3. Automatic configuration update is achieved through the Spring Cloud native annotation @RefreshScope:

```java
@RestController
@RequestMapping("/config")
@RefreshScope
public class ConfigController {

    @Value("${useLocalCache:false}")
    private boolean useLocalCache;

    @RequestMapping("/get")
    public boolean get() {
        return useLocalCache;
    }
}
```

  4. First publish the configuration to the Nacos server by calling the Nacos Open API: dataId is example.properties, and the content is useLocalCache=true

    curl -X POST "http://127.0.0.1:8848/nacos/v1/cs/configs?dataId=example.properties&group=DEFAULT_GROUP&content=useLocalCache=true"

  5. Run NacosConfigApplication and call curl http://localhost:8080/config/get; the returned content is true.

  6. Call the Nacos Open API again to publish configuration to the Nacos server: dataId is example.properties, and the content is useLocalCache=false

    curl -X POST "http://127.0.0.1:8848/nacos/v1/cs/configs?dataId=example.properties&group=DEFAULT_GROUP&content=useLocalCache=false"

  7. Visit http://localhost:8080/config/get again; the returned content is now false, indicating that the useLocalCache value in the program has been dynamically updated.

04 Distributed service call

4.1 Overview of RPC

The main functional goal of RPC is to make it easier to build distributed computing (applications) without losing the semantic simplicity of local calls while providing powerful remote call capabilities. In order to achieve this goal, the RPC framework needs to provide a transparent calling mechanism, so that users do not have to explicitly distinguish between local calls and remote calls.

The advantages of RPC: distributed design, flexible deployment, decoupling services, and strong scalability.
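The "transparent calling mechanism" can be illustrated with a JDK dynamic proxy, which is the core trick most Java RPC frameworks build on. This is a minimal sketch under our own assumptions: the EchoService interface and the fake invokeRemote transport are invented for illustration, and a real framework would serialize the call and send it over the network instead.

```java
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class RpcProxyDemo {

    // Hypothetical service interface shared by provider and consumer (illustrative only).
    public interface EchoService {
        String echo(String msg);
    }

    // Stand-in "transport": a real RPC framework would serialize the call and send
    // it over the network to the provider; here we dispatch locally for demonstration.
    public static Object invokeRemote(String serviceName, String methodName, Object[] args) {
        return "Hello Nacos Discovery " + args[0];
    }

    // The consumer sees an ordinary interface; the proxy hides the remote call.
    @SuppressWarnings("unchecked")
    public static <T> T proxy(Class<T> iface) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(),
                new Class<?>[]{iface},
                (Object p, Method m, Object[] args) ->
                        invokeRemote(iface.getName(), m.getName(), args));
    }

    public static void main(String[] args) {
        EchoService svc = proxy(EchoService.class); // looks like a local object
        System.out.println(svc.echo("2018"));       // actually routed through the "transport"
    }
}
```

The caller writes svc.echo("2018") exactly as for a local object, which is what is meant by not having to distinguish local calls from remote calls.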

4.2 RPC framework

  • Dubbo: the earliest open-source RPC framework in China, developed by Alibaba and open-sourced at the end of 2011; supports only Java.
  • Motan: the RPC framework used internally by Weibo, open-sourced in 2016; supports only Java.
  • Tars: the RPC framework used internally by Tencent, open-sourced in 2017; supports only C++.
  • Spring Cloud: open-sourced by Pivotal in 2014, providing a rich set of ecosystem components.
  • gRPC: Google's cross-language RPC framework, open-sourced in 2015; supports multiple languages.
  • Thrift: originally a cross-language RPC framework for Facebook's internal systems; contributed to the Apache Foundation in 2007, becoming one of its open-source projects; supports multiple languages.

4.3 Advantages of RPC framework

RPC frameworks generally use long-lived connections, avoiding a three-way handshake for every call and reducing network overhead. They usually come with a registry and rich monitoring and management features, such as publishing interfaces, taking them offline, and dynamic extension; for the caller this is all transparent. A unified internal protocol also brings better privacy and higher security. The RPC protocol itself is simpler and its payload smaller, making it more efficient. For a service-oriented architecture and service governance, an RPC framework is a strong supporting pillar.

4.4 RPC framework application: use Spring Cloud Alibaba to integrate Dubbo implementation

Since Dubbo Spring Cloud is built on native Spring Cloud, its service governance capability can be considered "Spring Cloud Plus": it not only fully covers the native Spring Cloud features but also provides a more stable and mature implementation. The feature comparison is shown in the table below:

4.5 Dubbo is called as a Spring Cloud service

By default, Spring Cloud OpenFeign and the @LoadBalanced RestTemplate are the two service invocation methods of Spring Cloud. Dubbo Spring Cloud provides a third option: Dubbo services appear as first-class citizens of Spring Cloud service invocation. Applications can expose and reference Dubbo services through the Apache Dubbo annotations @Service and @Reference, achieving multi-protocol communication between services. You can also use Dubbo's generalized interface to easily implement a service gateway.

4.6 Quick start

In the traditional Dubbo development model, the first step before building a service provider is to define the Dubbo service interface shared by the service provider and service consumer. To keep the contract consistent, the recommended approach is to package the Dubbo service interfaces into a second-party or third-party artifact (jar); the artifact does not even need to add any dependencies. The service provider introduces the Dubbo service interfaces by depending on that artifact and implements them; the service consumer also depends on the artifact and invokes the remote methods as plain interface calls. The next step is to create that artifact.

4.7 Create Service API

Create an api module to write various interfaces:

```java
/**
 * @author original
 * @date 2020/12/8
 * @since 1.0
 **/
public interface TestService {
    String getMsg();
}
```

4.8 Create a service provider

Import dependencies

```xml
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>com.alibaba.cloud</groupId>
        <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-openfeign</artifactId>
        <version>2.2.2.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>com.alibaba.cloud</groupId>
        <artifactId>spring-cloud-starter-dubbo</artifactId>
    </dependency>
    <!-- Dependency for the api interface -->
    <dependency>
        <groupId>com.dubbo.demo</groupId>
        <artifactId>dubbo-demo-api</artifactId>
        <version>0.0.1-SNAPSHOT</version>
    </dependency>
</dependencies>
```

Write configuration

```yaml
dubbo:
  scan:
    # Base packages for Dubbo service scanning
    base-packages: org.springframework.cloud.alibaba.dubbo.bootstrap
  protocol:
    # Dubbo protocol name
    name: dubbo
    # Dubbo protocol port (-1 means auto-increment, starting from 20880)
    port: -1
spring:
  cloud:
    nacos:
      # Nacos service discovery and registration configuration
      discovery:
        server-addr: 127.0.0.1:8848
```

Write the implementation class:

```java
/**
 * @author original
 * @date 2021/1/28
 * @since 1.0
 **/
@Service // Dubbo's @Service annotation
public class TestServiceImpl implements TestService {
    @Override
    public String getMsg() {
        return "123123";
    }
}
```

Start class

```java
/**
 * @author original
 * @date 2021/1/28
 * @since 1.0
 **/
@SpringBootApplication
@EnableDiscoveryClient
public class DubboProviderApplication {
    public static void main(String[] args) {
        SpringApplication.run(DubboProviderApplication.class, args);
    }
}
```

4.9 Create Service Consumer

Apart from the implementation class of the api, the consumer reuses the provider's code.

Write test class

```java
/**
 * @author original
 * @date 2021/1/28
 * @since 1.0
 **/
@RestController
public class TestController {

    @Reference
    TestService testService;

    @GetMapping("/dubbo/test")
    public String getMsg() {
        return testService.getMsg();
    }
}
```

Access /dubbo/test; it returns 123123.

05 Service Flow Management

5.1 Why do we need flow control and degradation?

Traffic is random and unpredictable: one second everything may be calm, and the next there may be a traffic peak (for example, the midnight scene on Double Eleven). However, the capacity of our system is always limited; if sudden traffic exceeds it, requests pile up and are processed slowly, CPU usage and load soar, and finally the system collapses. Therefore, we need to limit such burst traffic and process as many requests as possible while ensuring that the service is not overwhelmed. This is flow control.
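To make the idea concrete, here is a minimal fixed-window QPS limiter sketch. This is our own toy code, not Sentinel's implementation (Sentinel uses a more sophisticated sliding-window statistic); the class name and time handling are assumptions made for illustration.

```java
public class SimpleQpsLimiter {

    private final int threshold; // max requests allowed per one-second window
    private long windowStart;    // start of the current window, in milliseconds
    private int count;           // requests seen in the current window

    public SimpleQpsLimiter(int threshold, long nowMillis) {
        this.threshold = threshold;
        this.windowStart = nowMillis;
    }

    /** Returns true if the request arriving at nowMillis may pass. */
    public synchronized boolean tryPass(long nowMillis) {
        if (nowMillis - windowStart >= 1000) { // a new one-second window begins
            windowStart = nowMillis;
            count = 0;
        }
        return ++count <= threshold;
    }

    public static void main(String[] args) {
        SimpleQpsLimiter limiter = new SimpleQpsLimiter(2, 0);
        System.out.println(limiter.tryPass(0));    // true
        System.out.println(limiter.tryPass(100));  // true
        System.out.println(limiter.tryPass(200));  // false: this window's budget is spent
        System.out.println(limiter.tryPass(1000)); // true: a new window has started
    }
}
```

Requests beyond the per-second budget are rejected fast instead of being queued until the system collapses, which is exactly the trade-off flow control makes.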

A service often calls other modules: another remote service, a database, or a third-party API. For example, a payment may require remotely calling the API provided by UnionPay, and querying the price of a product may require a database query. However, the stability of these dependencies cannot be guaranteed. If a dependency is unstable and its response times grow, the response time of the method calling that service grows too, threads pile up, the service's own thread pool may eventually be exhausted, and the service itself becomes unavailable.

Modern microservice architectures are distributed and consist of many services. Services call each other and form complex invocation links. The problems above are amplified along such links: if one link in a complex chain is unstable, the effect may cascade layer by layer and eventually make the whole chain unavailable. Therefore, we need to circuit-break and degrade unstable, weakly dependent services, temporarily cutting off unstable calls to prevent local instability from causing an overall avalanche.

On the discontinuation/upgrade/replacement of fault-tolerance components:

Service degradation:

  • Hystrix: no longer recommended by its maintainers, but still used at scale in Chinese companies. Rate limiting and circuit breaking/degradation are supported in the stable 1.5 line, but the project now officially recommends Resilience4j instead.
  • Resilience4j: officially recommended, but rarely used in China.
  • Sentinel: from Spring Cloud Alibaba; it replaces Hystrix in Chinese companies and is strongly recommended in China.

Sentinel is mainly introduced here.

5.2 Introduction to Sentinel

Sentinel is an Alibaba open-source project that guarantees service stability along multiple dimensions such as flow control, circuit breaking/degradation, and system load protection. Sentinel's flow control is very simple to operate: you can see the effect as soon as you configure it in the console. What you see is what you get.

Sentinel has the following characteristics:

  • Rich application scenarios: Sentinel has handled the core scenarios of Alibaba's Double Eleven traffic peaks over the past 10 years, such as flash sales (controlling burst traffic within the range of system capacity), message peak shaving and valley filling, cluster flow control, and real-time circuit breaking of unavailable downstream applications.

  • Complete real-time monitoring: Sentinel also provides real-time monitoring. In the console you can see second-level data for a single connected machine, and even the aggregated state of clusters of fewer than 500 nodes.

  • Extensive open source ecology: Sentinel provides out-of-the-box integration modules with other open source frameworks/libraries, such as integration with Spring Cloud, Dubbo, and gRPC. You only need to introduce the corresponding dependencies and perform a simple configuration to quickly access Sentinel.

  • Complete SPI extension point: Sentinel provides a simple, easy-to-use and complete SPI extension interface. You can quickly customize the logic by implementing an extended interface. For example, custom rule management, adaptation of dynamic data sources, etc.

Official website

```
https://github.com/alibaba/Sentinel
https://github.com/alibaba/Sentinel/wiki/%E4%BB%8B%E7%BB%8D (Chinese)
https://sentinelguard.io/zh-cn/docs/introduction.html
```

Using Sentinel involves two parts: the core library (Java client), which does not depend on any framework or library, runs on Java 7 and above, and has good support for Dubbo, Spring Cloud, and other frameworks; and the Dashboard (console), mainly responsible for rule management and push, monitoring, cluster rate-limit allocation management, machine discovery, and so on.

Usage scenarios

In the service provider scenario, we need to protect the provider itself from being overwhelmed by traffic peaks. Here flow control is usually applied according to the provider's service capability, or limits are imposed on specific callers. We can evaluate the capacity of the core interfaces with prior stress testing and configure QPS-mode rate limiting: when the number of requests per second exceeds the threshold, the excess requests are automatically rejected.

To avoid being dragged down by the unstable services it calls, the service consumer side needs to isolate and circuit-break unstable service dependencies. Means include semaphore isolation, exception-ratio-based degradation, and response-time (RT) based degradation.

When the system is at a low water level for a long time, when the flow suddenly increases, directly pulling the system to a high water level may instantly crush the system. At this time, we can use Sentinel's WarmUp flow control mode to control the passing traffic to slowly increase, and gradually increase to the upper threshold within a certain period of time, instead of letting it go all at once. This can give the cold system a warm-up time to prevent the cold system from being overwhelmed.

Use Sentinel's uniform-speed queuing mode to "cut peaks and fill valleys", spread the request spikes evenly over a period of time, keep the system load within the request processing level, and process as many requests as possible.
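The peak-shaving idea above can be sketched as a tiny pacer in the style of the leaky bucket. This is our own illustrative code, not Sentinel's internal implementation: instead of rejecting a burst outright, each request is assigned a pass time so that consecutive requests are spaced evenly.

```java
public class UniformRatePacer {

    private final long intervalMillis; // desired gap between two passed requests
    private long latestPassedTime = -1;

    public UniformRatePacer(int qps) {
        this.intervalMillis = 1000L / qps;
    }

    /**
     * Returns the time (>= nowMillis) at which the request is allowed to pass.
     * A burst arriving at the same instant is spread evenly over time.
     */
    public synchronized long schedule(long nowMillis) {
        long passTime = (latestPassedTime < 0)
                ? nowMillis
                : Math.max(nowMillis, latestPassedTime + intervalMillis);
        latestPassedTime = passTime;
        return passTime;
    }

    public static void main(String[] args) {
        UniformRatePacer pacer = new UniformRatePacer(10); // 10 QPS -> one request every 100 ms
        System.out.println(pacer.schedule(0));   // 0
        System.out.println(pacer.schedule(0));   // 100
        System.out.println(pacer.schedule(0));   // 200
        System.out.println(pacer.schedule(500)); // 500: backlog drained, pass immediately
    }
}
```

A spike of requests at the same millisecond is thus smeared over a window, keeping the load within the processing rate rather than rejecting everything over the threshold.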

Use Sentinel's gateway flow control feature to protect traffic at the gateway entrance or limit the frequency of API calls.

5.3 Sentinel installation

1. Download the jar package github.com/alibaba/Sen...

2. Start

```shell
java -Dserver.port=8787 -Dcsp.sentinel.dashboard.server=127.0.0.1:8787 -Dproject.name=sentinel-dashboard -jar /home/sentinel/sentinel-dashboard-1.8.0.jar
```

3. Visit

```
http://127.0.0.1:8787/#/login
```

The initial account and password are both sentinel.

You can see that Sentinel monitors itself.

5.4 Quick Start for Sentinel

1. Import dependencies

```xml
<dependency>
    <groupId>com.alibaba.csp</groupId>
    <artifactId>sentinel-core</artifactId>
    <version>1.8.0</version>
</dependency>
```

2. Testing

```java
import java.util.ArrayList;
import java.util.List;

import com.alibaba.csp.sentinel.Entry;
import com.alibaba.csp.sentinel.SphU;
import com.alibaba.csp.sentinel.slots.block.BlockException;
import com.alibaba.csp.sentinel.slots.block.RuleConstant;
import com.alibaba.csp.sentinel.slots.block.flow.FlowRule;
import com.alibaba.csp.sentinel.slots.block.flow.FlowRuleManager;

public class TestService {

    public static void main(String[] args) {
        initFlowRules();
        while (true) {
            Entry entry = null;
            try {
                entry = SphU.entry("HelloWorld");
                /* Your business logic - start */
                System.out.println("hello world");
                /* Your business logic - end */
            } catch (BlockException e1) {
                /* Flow control handling - start */
                System.out.println("block!");
                /* Flow control handling - end */
            } finally {
                if (entry != null) {
                    entry.exit();
                }
            }
        }
    }

    // Flow control rule: limit the resource to a QPS of 20
    // (beyond that, a BlockException is thrown and can be handled)
    private static void initFlowRules() {
        List<FlowRule> rules = new ArrayList<>();
        FlowRule rule = new FlowRule();
        rule.setResource("HelloWorld");
        rule.setGrade(RuleConstant.FLOW_GRADE_QPS);
        // Limit QPS to 20
        rule.setCount(20);
        rules.add(rule);
        FlowRuleManager.loadRules(rules);
    }
}
```

Execution result:

You can see that the program stably outputs "hello world" 20 times per second, matching the threshold preset in the rule. "block!" indicates a rejected request.

Official documentation: github.com/alibaba/Sen...

5.5 Sentinel integrates SpringCloud to achieve service current limit/fuse

Import dependencies

```xml
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>com.alibaba.cloud</groupId>
        <artifactId>spring-cloud-starter-alibaba-sentinel</artifactId>
    </dependency>
</dependencies>
```

Configuration

```properties
server.port=8082
spring.application.name=sentinel-demo
spring.cloud.nacos.discovery.server-addr=127.0.0.1:8848
spring.cloud.sentinel.transport.dashboard=127.0.0.1:8787
```

Add @SentinelResource to methods that require flow control:

```java
@RestController
public class TestController {

    @GetMapping("/sentinel")
    @SentinelResource
    public String getMsg() {
        return "11";
    }
}
```

Start the application, visit http://127.0.0.1:8082/sentinel and go to the sentinel background

Let's configure the simplest flow control rule: a limiting rule for the service call sentinel_spring_web_context/sentinel (it only shows up once there has been traffic). We configure a flow control rule with a QPS of 1, which means the service method can be called at most once per second; anything beyond that is rejected directly.

Now quickly visit: http://localhost:8082/sentinel

View the real-time monitoring page:

For the use of other functions, you can refer to the official documents to explore on your own.

5.6 How to choose flow control downgrade components?

The following is a comparison between Sentinel and other fault-tolerance components:

06 Distributed Transaction

6.1 Why do we need distributed transactions?

A distributed transaction is one in which the participants, the transaction-supporting servers, the resource servers, and the transaction manager are located on different nodes of a distributed system. Simply put, the units that make up one transaction sit on different database servers. How, then, do we ensure consistency across services? When a microservice node in the middle of a long invocation chain fails, how do we guarantee data consistency for the whole chain? Introducing distributed consistency inevitably brings performance problems, so solving the distributed consistency problem efficiently has always been the key starting point.

6.2 Classification of Distributed Transaction Solutions

Rigid transactions

Rigid transactions are strongly consistent transactions that follow the four properties of local transactions (ACID). They are characterized by strong consistency: all units that make up a transaction must commit or roll back immediately and synchronously, with no flexibility in timing. They are usually found in monolithic projects, typically enterprise (or LAN) applications: for example, writing a log when a contract is generated, or generating a voucher after a successful payment. In today's popular Internet projects, however, solving distributed transactions with rigid transactions has many drawbacks, the most obvious and fatal being performance.

Because a participant cannot commit its transaction alone and must wait until all participants have executed successfully to commit together, the lock time of the transaction becomes very long, resulting in very low performance. From this we conclude: the more phases, the worse the performance.

Flexible transactions

Flexible transactions are defined relative to rigid transactions. As just analyzed, rigid transactions have two characteristics: strong consistency and near-real-time (NRT) execution. Flexible transactions, by contrast, need not execute immediately (synchronously) and do not require strong consistency; they only need to be basically available and eventually consistent. To really understand this, we need to start from the BASE and CAP theories.

6.3 CAP theory and BASE theory

1) CAP theory

The CAP theory, also known as the CAP principle, is described consistently across sources: CAP refers to Consistency, Availability, and Partition tolerance in a distributed system.

  • Consistency (C): whether all data replicas in a distributed system have the same value at the same moment (equivalently, all nodes access the same latest copy of the data).
  • Availability (A): whether the cluster as a whole can still respond to clients' read and write requests after some nodes fail (high availability for data updates).
  • Partition tolerance (P): in practical terms, a partition is equivalent to a time limit on communication. If the system cannot achieve data consistency within that limit, a partition has occurred, and a choice between C and A must be made for the current operation.

The CAP principle says that at most two of these three can be achieved at the same time; all three cannot be satisfied together, so designing a distributed architecture requires trade-offs. For distributed data systems, partition tolerance is a basic requirement, otherwise the system loses its value. Designing a distributed data system is therefore about striking a balance between consistency and availability. Most web applications do not actually need strong consistency, so sacrificing consistency in exchange for high availability is the current direction of most distributed database products.

2) BASE theory

BASE stands for Basically Available, Soft state, and Eventual consistency.

1. Basically Available (BA): when a distributed system fails, it is allowed to lose part of its availability in order to keep the core functions available; this is not the same as being unavailable. For example: a search engine that normally returns results in 0.5 seconds responds in 2 seconds because of a failure, or some users are served a degraded version when page traffic is too high. Simply put, the system remains basically usable.

2. Soft State (S): the system is allowed to have intermediate states, and those intermediate states do not affect its overall availability; that is, replicas on different nodes are allowed to lag while synchronizing. Simply put, state may be out of sync for a period of time.

3. Eventually Consistent (E): after a certain period of time, all data replicas in the system can finally reach a consistent state; there is no need to guarantee strong consistency of the data in real time. Eventual consistency is a special case of weak consistency. Simply put, the data is allowed to converge within some time window.

BASE theory is oriented toward large-scale, highly available, scalable distributed systems, which gain availability by sacrificing strong consistency; ACID, by contrast, is the traditional database design concept, which pursues a strong-consistency model.

The BASE theory evolved based on the CAP principle, and is the result of a balance between consistency and availability in CAP.

Core idea: even when strong consistency cannot be achieved, each business chooses an approach appropriate to its own characteristics to bring the system to eventual consistency.

6.4 Distributed Transaction Solution

(1) Two-phase commit: 2PC (and 3PC)

For a more detailed introduction to 2PC and 3PC, refer to the blog post "Principles of 2PC and 3PC"; they are not the focus of our discussion today.
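Although 2PC is not our focus here, its prepare/commit flow can be sketched in a few lines. The following is a minimal in-memory illustration of the happy path and the abort path; the `Participant` and `TwoPhaseCoordinator` names are hypothetical, not from any real transaction manager, and a production coordinator would also need timeouts, retries, and a persistent log.

```java
import java.util.List;

// Minimal two-phase commit (2PC) sketch: the coordinator first asks every
// participant to prepare; only if ALL vote yes does it broadcast commit,
// otherwise it broadcasts rollback.
interface Participant {
    boolean prepare();   // phase 1: the participant votes yes (true) or no (false)
    void commit();       // phase 2a: make the prepared changes durable
    void rollback();     // phase 2b: undo the prepared changes
}

class TwoPhaseCoordinator {
    // Returns true if the global transaction committed, false if it aborted.
    static boolean run(List<Participant> participants) {
        for (Participant p : participants) {
            if (!p.prepare()) {
                // Any "no" vote aborts the whole transaction; rollback is
                // assumed idempotent, so it is safe to send to everyone.
                participants.forEach(Participant::rollback);
                return false;
            }
        }
        // All voted yes: tell every participant to commit.
        participants.forEach(Participant::commit);
        return true;
    }
}
```

Note how the blocking nature of 2PC is already visible here: every participant holds its prepared state (and its locks) until the coordinator's second-phase decision arrives.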

(2) XA

In the more than 20 years since the XA standard was proposed, it has not evolved much. Academia has researched protocol optimization and collaborative log processing, but relatively few XA solutions have landed in industry, mainly in application-server scenarios. The XA solution requires vendors to provide implementations of the protocol. Most relational databases now support the XA protocol, though to different degrees; MySQL, for example, did not fully support the semantics of xa_prepare until version 5.7. XA is often criticized for its performance, but a more serious problem is that it holds connection resources: under high concurrency there are not enough connections left to serve requests, and this becomes the system's bottleneck. Under a microservice architecture, the XA transaction scheme becomes an anti-scalability pattern: as microservice call chains grow, the resource occupation intensifies further. In addition, XA requires every resource on the transaction chain to implement the XA protocol; if even one resource does not, data consistency across the whole chain cannot be guaranteed.

(3) TCC compensation plan

TCC stands for Try, Confirm, and Cancel. It is a compensating distributed transaction solution. What is compensation? We can understand it clearly from the three parts of TCC. First, let's look at their main functions:

The Try phase mainly checks the business system and reserves resources. The Confirm phase mainly confirms and commits on the business system; once Try has succeeded and Confirm begins, Confirm is assumed not to fail. That is: as long as Try succeeds, Confirm must succeed. The Cancel phase mainly cancels business operations that hit an error during execution and need to be rolled back, and releases the reserved resources.

From this we can conclude that the Try phase is an attempt to commit the transaction. When Try succeeds, Confirm is executed and assumed to succeed by default. When Try fails, Cancel handles the rollback and the release of reserved resources.

Conceptually, TCC can be regarded as a general-purpose framework, but its difficulty lies in the business having to implement these three interfaces. The development cost is relatively high; for many businesses it is hard to express resource-reservation logic, and it is questionable whether isolation should have to be guaranteed at the business level while reserving resources. This model is therefore best suited to deduction-style operations in financial scenarios, where resource reservation is easy to implement.
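To make the Try/Confirm/Cancel division of work concrete, here is a minimal in-memory sketch of an account-deduction branch. All class and method names (`TccAccountService`, `tryDeduct`, and so on) are illustrative assumptions, not part of any real TCC framework; a real implementation would persist the frozen amounts and make every operation idempotent.

```java
import java.util.HashMap;
import java.util.Map;

// In-memory TCC sketch for one branch: Try freezes (reserves) funds,
// Confirm finalizes the deduction, Cancel returns the frozen funds.
class TccAccountService {
    private final Map<String, Integer> balance = new HashMap<>(); // available balance
    private final Map<String, Integer> frozen = new HashMap<>();  // reserved amount per tx

    TccAccountService(String userId, int initial) { balance.put(userId, initial); }

    // Try: check the balance and reserve (freeze) the amount.
    boolean tryDeduct(String txId, String userId, int amount) {
        int avail = balance.getOrDefault(userId, 0);
        if (avail < amount) return false;       // business check failed
        balance.put(userId, avail - amount);    // resource reservation
        frozen.put(txId, amount);
        return true;
    }

    // Confirm: Try succeeded, so finalize by discarding the reservation record.
    void confirm(String txId) { frozen.remove(txId); }

    // Cancel: roll back by returning the frozen amount to the balance.
    void cancel(String txId, String userId) {
        Integer amount = frozen.remove(txId);
        if (amount != null) balance.merge(userId, amount, Integer::sum);
    }

    int balanceOf(String userId) { return balance.getOrDefault(userId, 0); }
}
```

Note that because Try already moved the money out of the available balance, concurrent transactions cannot double-spend it; that reservation is exactly the business-level isolation the paragraph above refers to.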

(4) Saga

Saga is a concept from a database paper published about 30 years ago. In the paper, a Saga transaction is a long-running transaction composed of multiple local transactions, each with a corresponding execution module and compensation module. When any local transaction in the Saga goes wrong, the system recovers by calling the compensation methods of the related transactions, thereby achieving eventual consistency.

With TCC available, why do we also need the Saga solution? As mentioned above, the cost of business transformation for TCC is relatively high. Internal systems can be pushed to adopt it top-down, but when calling a third-party interface it is usually difficult to get the third party to make TCC-specific changes: asking them to modify their system for your TCC solution, when their other users have no such need, is clearly unreasonable. It is reasonable, however, to require the third party's business interface to provide paired positive and negative operations, such as deduction and refund, along with the necessary data reconciliation for abnormal scenarios. In addition, the Saga solution is well suited to workflow-style long transactions and can be executed asynchronously.
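The execute-then-compensate idea can be sketched as follows: each step pairs a forward action with a compensation, and when any step fails, the steps already completed are compensated in reverse order. The `SagaStep` and `Saga` names here are hypothetical; real Saga engines (state-machine or annotation based) add persistence, retries, and idempotence on top of this core loop.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// One Saga step: a forward action plus the compensation that undoes it.
class SagaStep {
    final Runnable action;
    final Runnable compensation;
    SagaStep(Runnable action, Runnable compensation) {
        this.action = action;
        this.compensation = compensation;
    }
}

class Saga {
    // Runs steps in order. On failure, compensates completed steps in
    // reverse order. Returns true if all steps committed, false if the
    // saga was compensated.
    static boolean run(List<SagaStep> steps) {
        Deque<SagaStep> done = new ArrayDeque<>();
        for (SagaStep step : steps) {
            try {
                step.action.run();
                done.push(step);                 // remember for possible rollback
            } catch (RuntimeException e) {
                while (!done.isEmpty()) done.pop().compensation.run();
                return false;
            }
        }
        return true;
    }
}
```

Compared with TCC, a third party only has to expose the forward operation (e.g. debit) and its inverse (e.g. refund), which is exactly the "positive and negative interfaces" requirement mentioned above.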

(5) Eventually consistent plan

Local message table

This is probably the most widely used implementation in the industry. Its core idea, which originated at eBay, is to split a distributed transaction into local transactions. It shares the idea behind MQ transaction messages: both use MQ to notify other services to carry out their part of the transaction. The difference lies in how much the message queue is trusted, which leads to two different implementations. The local message table approach does not trust the stability of the message queue: assuming that messages may be lost or the queue's network may be blocked, it creates an independent table in the database to store the execution status of transaction messages, and uses that table to drive transaction control.
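A minimal sketch of the pattern, with an in-memory list standing in for the database table (all names here are illustrative, not from any framework): the message row is written together with the business change in the same local transaction, and a relay job later publishes any unsent rows. Because the relay may republish a row whose send status failed to update, delivery is at-least-once and consumers must be idempotent.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// In-memory sketch of the local message table pattern. In a real system
// the "table" is a database table written in the SAME local transaction
// as the business change; here a list stands in for it.
class LocalMessageTable {
    static class Msg {
        final String payload;
        boolean sent;                 // 0 = NEW, here modeled as a flag
        Msg(String payload) { this.payload = payload; }
    }

    private final List<Msg> table = new ArrayList<>();

    // Step 1: the business change and the message insert happen atomically
    // (conceptually: one local database transaction).
    void executeWithMessage(Runnable businessChange, String payload) {
        businessChange.run();          // e.g. UPDATE orders SET ...
        table.add(new Msg(payload));   // e.g. INSERT INTO local_message ...
    }

    // Step 2: a background relay job polls unsent rows, publishes them to
    // the MQ, and marks them sent. Returns the number of rows relayed.
    int relayPending(Consumer<String> publisher) {
        int relayed = 0;
        for (Msg m : table) {
            if (!m.sent) {
                publisher.accept(m.payload);
                m.sent = true;
                relayed++;
            }
        }
        return relayed;
    }
}
```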

MQ transaction message

Some third-party MQs support transaction messages, such as RocketMQ and ActiveMQ; the way they support them is similar to two-phase commit. Some commonly used MQs, such as RabbitMQ and Kafka, do not support transaction messages. Take Alibaba's RocketMQ middleware as an example. The idea is roughly as follows: in the first stage, a Prepared (half) message is sent and a handle to it is obtained; in the second stage, the local transaction is executed; in the third stage, the handle obtained in the first stage is used to access the message and modify its state. In other words, the business method submits two requests to the message queue: one to send the message and one to confirm it. If the confirmation fails to arrive, RocketMQ periodically scans the transaction messages in the cluster; when it finds a Prepared message, it checks back with the message sender. The producer therefore needs to implement a check interface, and RocketMQ decides, according to the strategy set by the sender, whether to roll back the message or continue sending the confirmation. This ensures that sending the message and executing the local transaction either both succeed or both fail.
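The three-stage flow above can be modeled in memory. Note this is an illustration of the half-message idea, not the real RocketMQ API; `TransactionalBroker` and its methods are invented for the sketch. The key properties it demonstrates: a half message stays invisible to consumers until the local transaction's outcome is confirmed, and the check-back path resolves half messages whose confirmation was lost.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

// In-memory sketch of a RocketMQ-style transactional message broker.
class TransactionalBroker {
    enum State { HALF, COMMITTED, ROLLED_BACK }

    private final Map<String, State> messages = new HashMap<>();

    // Stage 1: store the prepared (half) message; consumers cannot see it yet.
    void sendHalf(String msgId) { messages.put(msgId, State.HALF); }

    // Stage 3: the producer confirms the outcome of its local transaction.
    void confirm(String msgId, boolean committed) {
        messages.put(msgId, committed ? State.COMMITTED : State.ROLLED_BACK);
    }

    // Check-back: for half messages whose confirmation never arrived, the
    // broker asks the producer's check interface whether the local
    // transaction succeeded, then commits or rolls the message back.
    void checkBack(Predicate<String> localTxSucceeded) {
        messages.replaceAll((id, s) ->
            s == State.HALF
                ? (localTxSucceeded.test(id) ? State.COMMITTED : State.ROLLED_BACK)
                : s);
    }

    // A message is deliverable to consumers only once committed.
    boolean deliverable(String msgId) { return messages.get(msgId) == State.COMMITTED; }
}
```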

The flow can be represented with an order-business flowchart:

Comparison of the two

6.5 Distributed Transaction Framework Seata

FESCAR is Alibaba's open-source distributed transaction middleware, which solves the distributed transaction problems of microservice scenarios in an efficient way that is non-intrusive to business code. Seata is the upgraded version of FESCAR; the project was renamed Seata in April 2019.

(1) Typical case analysis of seata

The Seata official website provides a detailed business process walkthrough; we will use it as our example.

First, the transaction scenario under a monolithic architecture: the three business modules in e-commerce shopping (inventory, order, and account) all operate on the same local database. Because they share a single data source, local transactions can guarantee consistency, as shown in the following figure:

In a microservice architecture, however, each business module becomes an independent service, and each service connects to its own database. What used to be a single database with one data source becomes multiple databases with different data sources. Although each service can still control its own database operations with local transactions, a distributed transaction is needed to keep transactions consistent across services.

At this point Seata is introduced, which provides a "perfect" solution, as shown below:

The three Seata-related components in the figure were introduced along with FESCAR; they are also Seata's basic components:

1) Transaction Coordinator (TC): maintains the status of global and branch transactions, and drives global commit or rollback.

2) Transaction Manager (TM): defines the scope of a global transaction: begins, commits, or rolls back the global transaction.

3) Resource Manager (RM): manages the resources of branch transactions, communicates with the TC to register branch transactions and report their status, and drives branch transactions to commit or roll back.

How Seata manages the life cycle of a distributed transaction:

  1. The TM asks the TC to begin a new global transaction, and the TC generates an XID that represents this global transaction.

  2. The XID is propagated through the microservice call chain.

  3. The RM registers its local transaction with the TC as a branch of the global transaction corresponding to the XID.

  4. The TM asks the TC to commit or roll back the global transaction corresponding to the XID.

  5. The TC drives all branch transactions under the global transaction corresponding to the XID to complete their commit or rollback.

Inventory Service

```java
public interface StorageService {

    /**
     * deduct storage count
     */
    void deduct(String commodityCode, int count);
}
```

Order service

```java
public interface OrderService {

    /**
     * create order
     */
    Order create(String userId, String commodityCode, int orderCount);
}
```

Account service

```java
public interface AccountService {

    /**
     * debit balance of user's account
     */
    void debit(String userId, int money);
}
```

Main business logic

```java
public class BusinessServiceImpl implements BusinessService {

    private StorageService storageService;

    private OrderService orderService;

    /**
     * purchase
     */
    public void purchase(String userId, String commodityCode, int orderCount) {
        storageService.deduct(commodityCode, orderCount);
        orderService.create(userId, commodityCode, orderCount);
    }
}

public class OrderServiceImpl implements OrderService {

    private OrderDAO orderDAO;

    private AccountService accountService;

    public Order create(String userId, String commodityCode, int orderCount) {
        int orderMoney = calculate(commodityCode, orderCount);
        accountService.debit(userId, orderMoney);
        Order order = new Order();
        order.userId = userId;
        order.commodityCode = commodityCode;
        order.count = orderCount;
        order.money = orderMoney;
        // INSERT INTO orders ...
        return orderDAO.insert(order);
    }
}
```

We only need to add a @GlobalTransactional annotation on the business method:

```java
@GlobalTransactional
public void purchase(String userId, String commodityCode, int orderCount) {
    ......
}
```

6.6 Examples supported by Dubbo + SEATA

(1) Step 1: Create a database

  • Requirements: MySQL with InnoDB engine.

Note: In the example use case, these 3 services should really have 3 separate databases. For the sake of simplicity, however, we create only one database and configure 3 data sources.

Modify the Spring XML files with the database URL/username/password you just created:

  • dubbo-account-service.xml
  • dubbo-order-service.xml
  • dubbo-storage-service.xml

```xml
<property name="url" value="jdbc:mysql://xxxx:3306/xxx"/>
<property name="username" value="xxx"/>
<property name="password" value="xxx"/>
```

(2) Step 2: Create UNDO_LOG table

SEATA's AT mode requires the UNDO_LOG table:

```sql
-- Note that 0.3.0+ adds the unique index ux_undo_log here
CREATE TABLE `undo_log` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `branch_id` bigint(20) NOT NULL,
  `xid` varchar(100) NOT NULL,
  `context` varchar(128) NOT NULL,
  `rollback_info` longblob NOT NULL,
  `log_status` int(11) NOT NULL,
  `log_created` datetime NOT NULL,
  `log_modified` datetime NOT NULL,
  `ext` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `ux_undo_log` (`xid`,`branch_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
```

(3) Step 3: Create the tables for the example business

```sql
DROP TABLE IF EXISTS `storage_tbl`;
CREATE TABLE `storage_tbl` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `commodity_code` varchar(255) DEFAULT NULL,
  `count` int(11) DEFAULT 0,
  PRIMARY KEY (`id`),
  UNIQUE KEY (`commodity_code`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

DROP TABLE IF EXISTS `order_tbl`;
CREATE TABLE `order_tbl` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `user_id` varchar(255) DEFAULT NULL,
  `commodity_code` varchar(255) DEFAULT NULL,
  `count` int(11) DEFAULT 0,
  `money` int(11) DEFAULT 0,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

DROP TABLE IF EXISTS `account_tbl`;
CREATE TABLE `account_tbl` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `user_id` varchar(255) DEFAULT NULL,
  `money` int(11) DEFAULT 0,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
```

(4) Step 4: Start the server

  • From github.com/seata/seata...

    Usage: sh seata-server.sh (for linux and mac) or cmd seata-server.bat (for windows) [options]
    Options:
      --host, -h       The host to bind. Default: 0.0.0.0
      --port, -p       The port to listen. Default: 8091
      --storeMode, -m  log store mode: file, db. Default: file
      --help

    e.g.

    sh seata-server.sh -p 8091 -h 127.0.0.1 -m file

(5) Step 5: Run the example

Go to the samples repository: seata-samples

  • Start DubboAccountServiceStarter

  • Start DubboStorageServiceStarter

  • Start DubboOrderServiceStarter

  • Run DubboBusinessTester for demo test

TBD: Script used to run the demo application