
Spring Boot: Beginner to Master (Super Detailed Documentation)

1. What is Spring Boot

We know that Spring has grown rapidly since 2002 and has become the de facto standard for Java EE (Java Enterprise Edition) development. However, as technology evolved, Java EE development with Spring gradually became bulky: projects were littered with XML files, configuration was cumbersome, and integrating third-party frameworks required yet more configuration, all of which reduced development and deployment efficiency.

In October 2012, Mike Youngstrom created a feature request in the Spring JIRA asking for support of a containerless web application architecture in the Spring Framework. He talked about configuring web container services inside a Spring container bootstrapped from a main method. Here is an excerpt from that JIRA request:

I think Spring's web application architecture could be greatly simplified if it provided the tools and reference architecture to utilize Spring components and configuration models from top to bottom, embedding and unifying the configuration of these common web container services within a Spring container bootstrapped from a simple main() method.

This request led to the development of the Spring Boot project, which began in early 2013; today Spring Boot is at version 2.0.3.RELEASE. Spring Boot is not intended to replace Spring, but rather to be a tool that is tightly integrated with the Spring framework to improve the Spring developer experience.

It integrates the configuration of a large number of commonly used third-party libraries, so a Spring Boot application can use them almost out of the box with zero configuration. Most Spring Boot applications need only a very small amount of configuration code (Java-based configuration), allowing developers to focus on the business logic.

2. Why learn Spring Boot

2.1 From the official Spring view

If we open Spring's official website, we can see the figure below:

[Figure: Spring official website home page]

In the figure we can see the official positioning of Spring Boot: Build Anything. Spring Boot is designed to get you up and running as fast as possible, with minimal up-front Spring configuration. Let's also look at the official positioning of the other two:

Spring Cloud: Coordinate Anything;
Spring Cloud Data Flow: Connect Everything.

If you read them carefully, the wording Spring's official website uses to position Spring Boot, Spring Cloud, and Spring Cloud Data Flow is quite deliberate. It also shows that Spring attaches great importance to these three technologies, which are the focus of learning now and in the future (a Spring Cloud course will also be available later).

2.2 Looking at the benefits of Spring Boot

What are the advantages of Spring Boot? What are the main problems it solves for us? Let's take a look at the following diagram:

[Figure: Advantages of Spring Boot]

2.2.1 Good genes

Spring Boot was born alongside Spring 4.0. Literally, "Boot" means to bootstrap, so Spring Boot is designed to help developers quickly bootstrap a Spring application. Spring Boot inherits the excellent genes of the original Spring framework and makes Spring more convenient and faster to use.

[Figure: Spring Boot and Spring]

2.2.2 Simplified coding

For example, if we want to create a web project with Spring alone, we need to add several dependencies to the pom file; with Spring Boot, which helps developers start a web container quickly, a single spring-boot-starter-web dependency is sufficient:

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>

If we drill into this dependency, we can see that spring-boot-starter-web already contains a number of dependencies, including the ones we would otherwise have to import for a plain Spring project. Here are some of them:

<!-- ..... other dependencies omitted -->
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
    <version>5.0.7.RELEASE</version>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <version>5.0.7.RELEASE</version>
    <scope>compile</scope>
</dependency>

As you can see, Spring Boot greatly simplifies our coding: instead of importing dependencies one by one, a single starter dependency is enough.

2.2.3 Simplified configuration

Although Spring is a lightweight Java EE framework, its cumbersome configuration once earned it the reputation of "configuration hell". The mix of XML and annotation configuration can be dazzling, and the more configuration there is, the harder it becomes to find the cause when something goes wrong. Spring Boot favors the Java Config style for configuring Spring. An example:

Suppose I create a new class without the @Service annotation, i.e., a plain class. How do we turn it into a bean managed by Spring? The @Configuration and @Bean annotations are all we need, as follows:

public class TestService {
    public String sayHello () {
        return "Hello Spring Boot!";
    }
}


import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JavaConfig {
    @Bean
    public TestService getTestService() {
        return new TestService();
    }
}

@Configuration indicates that the class is a configuration class, and @Bean means the method returns a bean. In this way TestService is managed by Spring as a bean, and wherever we need it we can simply inject it with the @Resource annotation and use it:

@Resource
private TestService testService;

As for deployment configuration, the original Spring required multiple xml and properties files, whereas Spring Boot only needs a single application.yml (or application.properties) file.

2.2.4 Simplified deployment

With plain Spring, deploying a project requires installing Tomcat on the server. With Spring Boot, we no longer need to deploy Tomcat, because Spring Boot embeds it: we just package the project as a jar and start it with a single java -jar command.

In addition, it lowers the basic requirements on the runtime environment: having the JDK available is enough.

2.2.5 Simplified monitoring

We can introduce the spring-boot-starter-actuator dependency and use its REST endpoints to retrieve runtime metrics of the process, which makes monitoring much more convenient. However, Spring Boot is only a micro-framework: it does not provide service discovery and registration, nor peripheral monitoring integration or security management, so in a microservices architecture it needs to be used together with Spring Cloud.

2.3 In terms of future development trends

Microservices are the trend of future development: projects will gradually shift from traditional architectures to microservices architectures, because microservices let different teams focus on a smaller set of responsibilities, use independent technologies, and deploy more safely and more frequently. Spring Boot inherits the good characteristics of Spring, comes from the Spring lineage, and supports various ways of implementing REST APIs. It is also strongly recommended by the Spring team, so it is clearly a major trend in future development.

3. What can be learned from this course

This course uses the latest version, Spring Boot 2.0.3.RELEASE. The course articles are scenarios and demos extracted from the author's real projects, with the goal of helping learners get started with Spring Boot quickly and apply its technology points to microservice projects. The course is divided into two parts: a basic part and an advanced part.

The basic part (lessons 01-10) introduces the Spring Boot features most frequently used in projects, aiming to help learners quickly master the knowledge needed for Spring Boot development and apply it to real project architectures. With the Spring Boot framework as the main line, it covers Json data encapsulation, logging, property configuration, MVC support, online documentation, template engines, exception handling, AOP, persistence-layer integration, and so on.

The advanced part (lessons 11-17) introduces the more advanced technology points used in projects, including the integration of various components, so that learners can quickly integrate them when they encounter the corresponding scenarios. With the Spring Boot framework as the main line, it covers interceptors, listeners, caching, security authentication, word-segmentation plug-ins, message queues, and so on.

After carefully reading this series of articles, learners will quickly understand and master the Spring Boot techniques most commonly used in projects. Based on the course content, the author will also build an empty Spring Boot project skeleton, likewise extracted from a real project, which learners can use in their own projects to develop real applications with Spring Boot.

All source code for the course is available for free download: download address

5. Development environment and plug-ins for this course

The development environment for this course:

  • Development tools: IDEA 2017
  • JDK version: JDK 1.8
  • Spring Boot version: 2.0.3 RELEASE
  • Maven version: 3.5.2

Plugins involved:

  • FastJson
  • Swagger2
  • Thymeleaf
  • MyBatis
  • Redis
  • ActiveMQ
  • Shiro
  • Lucene

Lesson 01: Spring Boot development environment setup and project startup

The previous section introduced Spring Boot's features. This section focuses on JDK configuration, building and starting a Spring Boot project, and gives a brief explanation and analysis of the Spring Boot project structure.

1. jdk configuration

This course is developed with IDEA, and configuring the JDK in IDEA is very simple: open File->Project Structure, as shown below:

[Figure: Configuring the JDK in IDEA]

  1. Select SDKs
  2. Select the local jdk installation directory in the JDK home path.
  3. Customize the name for the jdk in Name

If you are using STS or eclipse, you can add it in two steps:

  • Window->Preferences->Java->Installed JREs: add the local JDK.
  • Window->Preferences->Java->Compiler: select a compiler level consistent with the JDK.

2. Spring Boot project construction

2.1 IDEA Quick Build

In IDEA you can quickly build a Spring Boot project via File->New->Project. As shown below, select Spring Initializr, choose the JDK we just imported as the Project SDK, and click Next to reach the project configuration page.

  • Group: fill in your organization's (reverse) domain name.
  • Artifact: fill in the project name. In this course each lesson's project is named course + lesson number, so here we use course01.
  • Dependencies: add the dependencies the project needs according to your actual situation; for this course only Web needs to be selected.

2.2 Official construction

The second way is to build the project through the official website, with the following steps:

  • Visit https://start.spring.io/.
  • Enter the appropriate Spring Boot version, Group and Artifact information, and project dependencies on the page and create the project.
  • [Figure: Creating a Spring Boot project]
  • After unpacking the download, import the Maven project with IDEA via File->New->Module from Existing Sources, then select the unzipped project folder. If you use eclipse, use Import->Existing Maven Projects->Next and select the unzipped project folder.

2.3 maven configuration

After creating the Spring Boot project, you need to configure Maven. Open File->Settings, search for maven, and configure your local Maven information, as shown below:

[Figure: Maven configuration]

In Maven home directory, select the local Maven installation path; in User settings file, select the path of the local Maven settings file. In that settings file, configure the Aliyun mirror (hosted in China) so that Maven dependencies download much faster:

<mirror>
	<id>nexus-aliyun</id>
	<mirrorOf>*</mirrorOf>
	<name>Nexus aliyun</name>
	<url>http://maven.aliyun.com/nexus/content/groups/public</url>
</mirror>

If you use eclipse, configure it in the same way via Window->Preferences->Maven->User Settings.

2.4 Coding Configuration

Similarly, after creating a new project we usually need to configure the file encoding. This is very important, and many beginners forget this step, so it is worth developing the habit.

In IDEA, again open File->Settings, search for encoding, and configure the local encoding, as shown below:

[Figure: Encoding configuration]

For those of you using eclipse, there are two places where you need to set the encoding:

  • Window->Preferences->General->Workspace: set Text file encoding to UTF-8.
  • Window->Preferences->General->Content Types: select Text and fill in UTF-8 as the Default encoding.

OK, the encoding is set up and you can start the project.

3. Spring Boot project engineering structure

The Spring Boot project contains three main source directories, as shown in the following figure:

[Figure: Spring Boot project structure]

  • src/main/java: where the business code is written
  • src/main/resources: static resources and configuration files
  • src/test/java: where the test code is written

By default a startup class, Course01Application, is created as shown above, annotated with @SpringBootApplication. The startup class has a main method; starting Spring Boot is as simple as running that main method, which is very convenient. In addition, Spring Boot embeds Tomcat internally, so we do not need to configure Tomcat manually; developers only need to focus on the business logic.
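
For reference, here is a minimal sketch of what such a generated startup class usually looks like (the package name below is only an illustration; use the one generated for your own project):

package com.example.course01; // hypothetical package name

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Course01Application {

    public static void main(String[] args) {
        // Starts the embedded Tomcat and initializes the Spring context
        SpringApplication.run(Course01Application.class, args);
    }
}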

At this point Spring Boot starts successfully. To see the effect more clearly, let's write a Controller to test it:

package com.example.course01.controller; // hypothetical package name

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/start")
public class StartController {

    @RequestMapping("/springboot")
    public String startSpringBoot() {
        return "Welcome to the world of Spring Boot!";
    }
}

Re-run the main method to start the project, and type localhost:8080/start/springboot in your browser. If you see "Welcome to the world of Spring Boot!", it works. Spring Boot really is that simple and convenient! The port is 8080 by default; if you want to change it, specify the port in application.yml, for example port 8001:

server:
  port: 8001

4. Summary

In this section we quickly learned how to import the JDK into IDEA, how to configure Maven and file encoding in IDEA, and how to quickly create and start a Spring Boot project. IDEA's support for Spring Boot is very good, so we recommend using IDEA for Spring Boot development. From the next lesson onward we really begin learning Spring Boot.
Course source code download address:Poke me to download

Lesson 02: Spring Boot returns Json data and data encapsulation

In project development, data is transferred between interfaces and between the front end and back end in Json format. In Spring Boot it is very simple for an interface to return Json data: just use the @RestController annotation. @RestController is a composite annotation introduced in Spring 4.0; let's click into it to see what it contains.

@Target({ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Controller
@ResponseBody
public @interface RestController {
    String value() default "";
}

As you can see, the @RestController annotation combines the original @Controller and @ResponseBody annotations. Those of you who have used Spring are already very familiar with @Controller, so it will not be repeated here. @ResponseBody converts the returned data structure into Json format. So by default, data returned from a @RestController is converted to Json, and the Json parsing framework Spring Boot uses by default is Jackson. If we drill into the spring-boot-starter-web dependency, we can see a spring-boot-starter-json dependency:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-json</artifactId>
    <version>2.0.3.RELEASE</version>
    <scope>compile</scope>
</dependency>

Spring Boot encapsulates dependencies very well; you can see many of these spring-boot-starter-xxx series dependencies, which is one of Spring Boot's features: there is no need to introduce many dependencies individually, because a starter-xxx directly contains the necessary ones. If we drill further into spring-boot-starter-json, we can see:

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.9.6</version>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.datatype</groupId>
    <artifactId>jackson-datatype-jdk8</artifactId>
    <version>2.9.6</version>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.datatype</groupId>
    <artifactId>jackson-datatype-jsr310</artifactId>
    <version>2.9.6</version>
    <scope>compile</scope>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.module</groupId>
    <artifactId>jackson-module-parameter-names</artifactId>
    <version>2.9.6</version>
    <scope>compile</scope>
</dependency>

At this point, we know that the default json parsing framework used in Spring Boot is jackson, so let's take a look at how the default jackson framework handles the conversion of common data types to Json.

1. Spring Boot's default handling of Json

In real projects, the commonly used data structures are plain objects, Lists, and Maps. Let's see what format the default Jackson framework produces for each of these three data structures after converting them to Json.

1.1 Creating the User entity class

To test this, we need to create an entity class, which we'll demonstrate here with User.

public class User {
    private Long id;
    private String username;
    private String password;
    /* getters, setters, and the parameterized constructor are omitted */
}

1.2 Creating the Controller class

Then we create a Controller that returns a User object, a List, and a Map:

import com.example.course02.entity.User; // hypothetical package name

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

@RestController
@RequestMapping("/json")
public class JsonController {

    @RequestMapping("/user")
    public User getUser() {
        return new User(1L, "Ni Shengwu", "123456");
    }

    @RequestMapping("/list")
    public List<User> getUserList() {
        List<User> userList = new ArrayList<>();
        User user1 = new User(1L, "Ni Shengwu", "123456");
        User user2 = new User(2L, "Daren Class", "123456");
        userList.add(user1);
        userList.add(user2);
        return userList;
    }

    @RequestMapping("/map")
    public Map<String, Object> getMap() {
        Map<String, Object> map = new HashMap<>(3);
        User user = new User(1L, "Ni Shengwu", "123456");
        map.put("authorInfo", user);
        map.put("blogUrl", "");
        map.put("csdnUrl", "/eson_15");
        map.put("fansCount", 4153);
        return map;
    }
}

1.3 Testing json returned by different data types

OK, the interfaces are written: they return a User object, a List, and a Map, where the Map stores values of different data types. Next, let's test the results in turn.

Enter localhost:8080/json/user in your browser; the returned json is as follows:

{"id":1, "username": "Ni Shengwu", "password": "123456"}

Enter localhost:8080/json/list in your browser; the returned json is as follows:

[{"id":1, "username": "Ni Shengwu", "password": "123456"},{"id":2, "username": "Daren Class", "password": "123456"}]]

Enter localhost:8080/json/map in your browser; the returned json is as follows:

{"authorInfo":{"id":1, "username": "ni shengwu", "password": "123456"}, "csdn_address":"/eson_15", "number_of_fans":4153, "blog_address":""}

As you can see, whatever data type is in the map can be converted to the corresponding json format, which is very convenient.

1.4 Handling of null in jackson

In real projects we inevitably run into null values. When converting to json we may not want these nulls; for example, we might expect every null to become an empty string "" in the resulting json. How do we do that? In Spring Boot we just need a little configuration: create a new Jackson configuration class:

import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.JsonSerializer;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializerProvider;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.http.converter.json.Jackson2ObjectMapperBuilder;

import java.io.IOException;

@Configuration
public class JacksonConfig {
    @Bean
    @Primary
    @ConditionalOnMissingBean(ObjectMapper.class)
    public ObjectMapper jacksonObjectMapper(Jackson2ObjectMapperBuilder builder) {
        ObjectMapper objectMapper = builder.createXmlMapper(false).build();
        objectMapper.getSerializerProvider().setNullValueSerializer(new JsonSerializer<Object>() {
            @Override
            public void serialize(Object o, JsonGenerator jsonGenerator, SerializerProvider serializerProvider) throws IOException {
                // Write an empty string wherever the value is null
                jsonGenerator.writeString("");
            }
        });
        return objectMapper;
    }
}

Then we modify the interface that returns the map above and test it by changing a couple of values to null:

@RequestMapping("/map")
public Map<String, Object> getMap() {
    Map<String, Object> map = new HashMap<>(3);
    User user = new User(1, "Ni Shengwu (1919-1952), novelist", null);
    ("Author Information", user);
    ("Blog Address", "");
    ("CSDNaddress", null);
    ("Number of fans", 4153);
    return map;
}

Restart the project and enter localhost:8080/json/map again. You can see that Jackson has converted all null fields to empty strings:

{"authorInfo":{"id":1, "username": "Ni Shengwu", "password":""}, "CSDN Address":"", "numberoffans":4153, "blogaddress":""}

2. Using Alibaba FastJson settings

2.1 Comparison of jackson and fastJson

Many developers are used to doing json conversion with Alibaba's fastJson, and our own project currently uses it. So what are the differences between Jackson and fastJson? Based on publicly available information, the comparison is summarized in the following table.

Criterion                            | fastJson        | Jackson
-------------------------------------|-----------------|---------
Ease of use                          | easy            | moderate
Advanced feature support             | moderate        | rich
Official documentation and examples  | Chinese         | English
Json processing speed                | slightly faster | fast

There is plenty of material online comparing fastJson and Jackson; choose the framework that fits your actual project. In terms of extensibility, fastJson is not as flexible as Jackson; in terms of speed and ease of getting started, fastJson is worth considering. Our project currently uses Ali's fastJson, and it is quite convenient.
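
For readers who have not used fastJson before, here is a small sketch of its most basic API, the JSON facade class for serialization and deserialization. It reuses the User entity from above (which has getters, setters, and a parameterized constructor) and is only an illustration, not part of the course code:

import com.alibaba.fastjson.JSON;

public class FastJsonDemo {
    public static void main(String[] args) {
        User user = new User(1L, "Ni Shengwu", "123456");
        // Object -> json string (fastJson orders fields alphabetically by default)
        String json = JSON.toJSONString(user);
        System.out.println(json); // e.g. {"id":1,"password":"123456","username":"Ni Shengwu"}
        // json string -> object
        User parsed = JSON.parseObject(json, User.class);
        System.out.println(parsed.getUsername());
    }
}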

2.2 fastJson dependency import

To use fastJson, you need to import dependencies. This course uses version 1.2.35 with the following dependencies:

<dependency>
	<groupId>com.alibaba</groupId>
	<artifactId>fastjson</artifactId>
	<version>1.2.35</version>
</dependency>

2.3 Handling null with fastJson

With fastJson, handling null is a little different from Jackson: we extend the WebMvcConfigurationSupport class and override the configureMessageConverters method. In that method we can choose which kinds of null conversion to apply and configure them, as follows:

import com.alibaba.fastjson.serializer.SerializerFeature;
import com.alibaba.fastjson.support.config.FastJsonConfig;
import com.alibaba.fastjson.support.spring.FastJsonHttpMessageConverter;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.MediaType;
import org.springframework.http.converter.HttpMessageConverter;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurationSupport;

import java.nio.charset.Charset;
import java.util.ArrayList;
import java.util.List;

@Configuration
public class FastJsonConfiguration extends WebMvcConfigurationSupport {

    /**
     * Use Ali fastJson as the JSON MessageConverter
     * @param converters
     */
    @Override
    public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
        FastJsonHttpMessageConverter converter = new FastJsonHttpMessageConverter();
        FastJsonConfig config = new FastJsonConfig();
        config.setSerializerFeatures(
                // keep null fields of a map
                SerializerFeature.WriteMapNullValue,
                // convert a null String to ""
                SerializerFeature.WriteNullStringAsEmpty,
                // convert a null Number to 0
                SerializerFeature.WriteNullNumberAsZero,
                // convert a null List to []
                SerializerFeature.WriteNullListAsEmpty,
                // convert a null Boolean to false
                SerializerFeature.WriteNullBooleanAsFalse,
                // avoid circular references
                SerializerFeature.DisableCircularReferenceDetect);

        converter.setFastJsonConfig(config);
        converter.setDefaultCharset(Charset.forName("UTF-8"));
        List<MediaType> mediaTypeList = new ArrayList<>();
        // Solve garbled Chinese; equivalent to adding produces = "application/json" to @RequestMapping on the Controller
        mediaTypeList.add(MediaType.APPLICATION_JSON);
        converter.setSupportedMediaTypes(mediaTypeList);
        converters.add(converter);
    }
}

3. Encapsulating a unified return structure

The above are a few representative examples of returning json from Spring Boot. In a real project, however, besides encapsulating the data we often need to add other information to the returned json, such as a status code and a msg for the caller, so that the caller can make logical decisions based on the code or msg. Therefore, in real projects we need to encapsulate a unified json return structure to hold this information.

3.1 Defining a Unified json Structure

Since the type of the encapsulated data is not fixed, we need generics when defining the unified json structure. Its attributes include the data, a status code, and a prompt message; further attributes can be added according to actual business needs. In general there should be a default return structure as well as constructors that let the caller specify the values:

public class JsonResult<T> {
    private T data;
    private String code;
    private String msg;

    /**
     * If no data is returned, the default status code is 0 and the message is: Operation successful!
     */
    public JsonResult() {
        this.code = "0";
        this.msg = "Operation successful!";
    }

    /**
     * If no data is returned, the status code and message can be specified explicitly
     * @param code
     * @param msg
     */
    public JsonResult(String code, String msg) {
        this.code = code;
        this.msg = msg;
    }

    /**
     * When data is returned, the status code is 0 and the default message is: Operation successful!
     * @param data
     */
    public JsonResult(T data) {
        this.data = data;
        this.code = "0";
        this.msg = "Operation successful!";
    }

    /**
     * When data is returned, the status code is 0 and the message is specified explicitly
     * @param data
     * @param msg
     */
    public JsonResult(T data, String msg) {
        this.data = data;
        this.code = "0";
        this.msg = msg;
    }
    // getters and setters omitted
}

3.2 Modifying the Return Value Type and Testing in the Controller

Since JsonResult uses a generic, all return types can use this unified structure; in a specific scenario the generic is simply replaced with the concrete data type, which is convenient and easy to maintain. In real projects you can encapsulate further, for example by defining the status codes and messages as an enumeration type, so that only that enumeration needs to be maintained (this is not expanded in this course; a possible shape is sketched at the end of this section). Based on the JsonResult above, we rewrite the Controller as follows:

@RestController
@RequestMapping("/jsonresult")
public class JsonResultController {

    @RequestMapping("/user")
    public JsonResult<User> getUser() {
        User user = new User(1L, "Ni Shengwu", "123456");
        return new JsonResult<>(user);
    }

    @RequestMapping("/list")
    public JsonResult<List> getUserList() {
        List<User> userList = new ArrayList<>();
        User user1 = new User(1L, "Ni Shengwu", "123456");
        User user2 = new User(2L, "Daren Class", "123456");
        userList.add(user1);
        userList.add(user2);
        return new JsonResult<>(userList, "Got the user list successfully");
    }

    @RequestMapping("/map")
    public JsonResult<Map> getMap() {
        Map<String, Object> map = new HashMap<>(3);
        User user = new User(1L, "Ni Shengwu", null);
        map.put("authorInfo", user);
        map.put("blogUrl", "");
        map.put("csdnUrl", null);
        map.put("fansCount", 4153);
        return new JsonResult<>(map);
    }
}

Let's enter localhost:8080/jsonresult/user in the browser again; the returned json is as follows:

{"code": "0", "data":{"id":1, "password": "123456", "username": "Ni Shengwu"}, "msg": "Successful operation!}

Enter localhost:8080/jsonresult/list; the returned json is as follows:

{"code": "0", "data":[{"id":1, "password": "123456", "username": "Ni Shengwu"},{"id":2, "password": "123456", "username": "Daren Class"}], "msg": "Successful in obtaining the user list" }

Enter localhost:8080/jsonresult/map; the returned json is as follows:

{"code": "0", "data":{"authorInfo":{"id":1, "password":"", "username": "Ni Shengwu (1919-1952), novelist"}, "CSDNaddress":null, "fans":4153, "博客address":""}, "msg": "OperationSuccessful!"}

Through this encapsulation, the json we pass to the front end or to other interfaces carries not only the data but also a status code and message, which is widely used in real project scenarios.
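
As mentioned in section 3.2, the status code and message can also be pulled out into an enumeration so that they are maintained in one place. The following is only a rough sketch of what such an enumeration might look like; the names and codes are assumptions for illustration and are not part of the course code:

public enum ResultCode {

    SUCCESS("0", "Operation successful!"),
    PARAM_ERROR("1001", "Invalid parameters"),
    SERVER_ERROR("5000", "Internal server error");

    private final String code;
    private final String msg;

    ResultCode(String code, String msg) {
        this.code = code;
        this.msg = msg;
    }

    public String getCode() { return code; }
    public String getMsg() { return msg; }
}

JsonResult could then, for example, gain a constructor that accepts a ResultCode instead of raw code and msg strings, so callers write something like new JsonResult<>(data, ResultCode.SUCCESS).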

4. Summary

This section analyzed returning json data in Spring Boot, from the default Jackson framework to Alibaba's fastJson, and explained how to configure both. In addition, drawing on real project experience, it summarized a json wrapper structure used in actual projects that adds a status code and message, making the returned json information more complete.
Course source code download address:Poke me to download

Lesson 03: Spring Boot logging with slf4j

In development we often use System.out.println() to print information, but this is not a good practice: heavy use of System.out increases resource consumption. Our real projects use slf4j with logback to output logs, which is very efficient; among the logging systems Spring Boot supports, logback is the best choice.

1. slf4j Introduction

To quote from the Baidu encyclopedia:

SLF4J, the Simple Logging Facade for Java, is not a specific logging solution; it only serves as a facade for a wide variety of logging systems. According to the official description, SLF4J is a simple facade for logging systems that allows end users to choose their desired logging system when deploying their applications.

The general idea is that you only need to write the logging code in a uniform way, without caring which logging system the logs go through or in what style they are output, because that depends on the logging system bound to the project at deployment time. For example, if you use slf4j in your project and bind log4j (i.e., import the corresponding dependency), logs are output in log4j's style; if you later need to switch to logback's style, you only replace log4j with logback, without modifying any code in the project. This also means almost zero learning cost when third-party components introduce different logging systems, and slf4j has further advantages such as concise placeholders and log level checks.

Because slf4j has so many advantages, Alibaba adopted it as its logging framework. In the Alibaba Java Development Manual (official edition), the very first rule of the logging conventions makes slf4j mandatory:

1. [Mandatory] Applications must not use the API of a concrete logging system (Log4j, Logback) directly, but should use the API of the logging facade SLF4J; using a facade-pattern logging framework helps keep log handling consistent and easy to maintain.

The "mandatory" two fonts now show the advantages of slf4j, so it is recommended in the actual project, use slf4j as their own logging framework. Use slf4j logging is very simple, directly using LoggerFactory to create can be.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Test {
    private static final Logger logger = LoggerFactory.getLogger(Test.class);
    // ……
}

2. Logging configuration in application.yml

Spring Boot's support for slf4j is very good; slf4j is already integrated internally, and we usually only do a little configuration when using it. The application configuration file is the only file Spring Boot needs: when the project is created it contains an application.properties file, but I personally prefer the yml format because its hierarchy is clearer and more intuitive. Note, however, that yml is stricter about formatting; for example, a colon must be followed by a space, otherwise the project may fail to start without reporting an error. Whether you use properties or yml is a matter of personal habit; this course uses yml.

Let's look at the logging configuration in application.yml:

logging:
  config: classpath:logback.xml
  level:
    # hypothetical dao/mapper package name; replace with your project's package
    com.example.course03.dao: trace

logging.config specifies which logging configuration file to read when the project starts; here it points to a logback.xml file under the resources root, and all logging-related configuration is placed in that file. logging.level specifies the log output level for a specific package; the configuration above says that all mapper logs under that package are output at trace level, which prints the SQL of database operations. Set it to trace to locate problems during development, and set the level to error in the production environment (the mapper layer is not discussed in this lesson; it is covered in detail later when integrating MyBatis with Spring Boot).

The commonly used log levels are, in descending order: ERROR, WARN, INFO, DEBUG.

3. Configuration file parsing

In application.yml above we specified logback.xml as the logging configuration file, which is dedicated to logging-related configuration. In it we can define the log output format, the file path, the console output format, the file size, the retention time, and so on. Let's analyze it section by section:

3.1 Defining Log Output Formats and Storage Paths

<configuration>
	<property name="LOG_PATTERN" value="%date{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n" />
	<property name="FILE_PATH" value="D:/logs/course03/demo.%d{yyyy-MM-dd}.%i.log" />
</configuration>

Let's look at what this definition means. First we define an output format named LOG_PATTERN: %date is the date/time, %thread is the thread name, %-5level shows the level left-aligned in a 5-character column, %logger{36} limits the logger name to at most 36 characters, %msg is the log message, and %n is a line break.
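
As a rough illustration (an assumed example, not actual course output), a single info-level line produced by this pattern from a controller class might look like:

14:05:23.328 [http-nio-8080-exec-1] INFO  c.e.course03.controller.TestController - ====== test log info level print =====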

Then we define FILE_PATH, the path where log files are stored. %i denotes the i-th file: when a log file reaches the specified size, logging rolls over to a new file, with i as the file index. The allowed size of a single log file can be configured, as explained below. Note that on both Windows and Linux the log storage path must be an absolute path.

3.2 Defining Console Output

<configuration>
	<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
		<encoder>
            <!-- print logs using the LOG_PATTERN configured above -->
			<pattern>${LOG_PATTERN}</pattern>
		</encoder>
	</appender>
</configuration>

Here the <appender> node defines a console output configuration (class ch.qos.logback.core.ConsoleAppender) named CONSOLE. It reuses the output format defined above by referencing LOG_PATTERN with the ${} syntax.

3.3 Defining Log File Related Parameters

<configuration>
	<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
		<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
			<!-- Save logs according to the FILE_PATH configured above -->
			<fileNamePattern>${FILE_PATH}</fileNamePattern>
			<!-- Keep logs for 15 days -->
			<maxHistory>15</maxHistory>
			<timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
				<!-- Maximum size of a single log file; beyond this a new log file is started -->
				<maxFileSize>10MB</maxFileSize>
			</timeBasedFileNamingAndTriggeringPolicy>
		</rollingPolicy>

		<encoder>
			<!-- Print logs using the LOG_PATTERN configured above -->
			<pattern>${LOG_PATTERN}</pattern>
		</encoder>
	</appender>
</configuration>

This <appender> defines a file output configuration named FILE, which mainly configures how long log files are kept, the maximum size of a single log file, the path where files are saved, and the log output format.

3.4 Defining Log Output Levels

<configuration>
	<!-- hypothetical package name; replace with your project's package -->
	<logger name="com.example.course03" level="INFO" />
	<root level="INFO">
		<appender-ref ref="CONSOLE" />
		<appender-ref ref="FILE" />
	</root>
</configuration>

With the above definitions in place, we finally use <root> to set the project's default log output level, here INFO, and reference the console appender and file appender defined above for logs at that level. With that, the configuration in logback.xml is complete.

4. Using Logger to print logs in the project

In code we generally use a Logger object to print log information; you can choose the level at which to log, and placeholders are supported, which is very convenient.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/test")
public class TestController {

    private final static Logger logger = LoggerFactory.getLogger(TestController.class);

    @RequestMapping("/log")
    public String testLog() {
        logger.debug("===== test log debug level print =====");
        logger.info("====== test log info level print =====");
        logger.error("===== test log error level print ====");
        logger.warn("====== test log warn level print =====");

        // Placeholders can be used to print parameter values
        String str1 = "";
        String str2 = "/eson_15";
        logger.info("====== Ni Shengwu's personal blog: {}; Ni Shengwu's CSDN blog: {}", str1, str2);

        return "success";
    }
}

Start the project and type localhost:8080/test/log in your browser; you can then see the log output in the console:

====== test log info level printing =====
===== test log error level print ====
====== test log warn level print =====
====== Ni Shengwu's personal blog:; Ni Shengwu's CSDN blog: /eson_15

Because the INFO level is higher than DEBUG, the debug line is not printed. If you set the log level to DEBUG, all four statements will be printed; you can test that yourself. You can also open the D:/logs/course03 directory, which contains all the logs generated since the project started. Once a project is deployed, we mostly locate problems by looking at these log files.

5. Summary

This lesson gave a brief introduction to slf4j and explained how to output logs with slf4j in Spring Boot, focusing on the logging-related configuration in logback.xml, including the different log levels. Finally, we used a Logger in code to print some logs against this configuration for testing. In real projects these logs are very important when troubleshooting.
Course source code download address:Poke me to download

Lesson 04: Configuring Project Properties in Spring Boot

We know that projects often need configuration information that may differ between the test and production environments and may also be modified later as business requirements change. In such cases we cannot hard-code these values; it is best to put them in a configuration file, for example in application.yml.

1. Situations where little configuration information is available

For example, in a microservices architecture it is very common for one service to call other services for the information they provide, so the address of the called service needs to be configured in the calling service's configuration file. Suppose the current service needs to call the order microservice to get order-related information, and the order service listens on port 8002; we can then configure:

server:
  port: 8001

# Configure the address of the microservice
url:
  # Address of the order microservice
  orderUrl: http://localhost:8002

How do we get this configured order service address in business code? The @Value annotation solves this: add a field to the corresponding class and annotate it with @Value to read the value from the configuration file, as follows:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/test")
public class ConfigController {

    private static final Logger LOGGER = LoggerFactory.getLogger(ConfigController.class);

    @Value("${url.orderUrl}")
    private String orderUrl;
    
    @RequestMapping("/config")
    public String testConfig() {
        LOGGER.info("===== The order service address obtained is: {}", orderUrl);
        return "success";
    }
}

The @Value annotation uses ${key} to fetch the value for that key from the configuration file. Start the project, type localhost:8080/test/config to hit the interface, and you can see the console print the order service address:

===== The order service address obtained is: http://localhost:8002

This means that we have successfully obtained the address of the order microservice in the configuration file, which is also used in the actual project. Later, if you need to modify the address of a service because of server deployment, then just modify it in the configuration file.

2. Multiple configuration information scenarios

Here is another issue: as business complexity grows, a project may have more and more microservices, and one module may need to call several of them for different information, so multiple microservice addresses must be configured. However, injecting each address one by one with @Value in every piece of calling code would be cumbersome and inelegant.

Therefore, in real projects with heavier business logic, consider encapsulating one or more configuration classes. For example, suppose a piece of business logic in the current service needs to call the order, user, and shopping cart microservices to get order, user, and cart information respectively and then process it. In the configuration file we configure the addresses of these microservices:

# Configure addresses for multiple microservices
url:
  # Address of the order microservice
  orderUrl: http://localhost:8002
  # Address of the user microservice
  userUrl: http://localhost:8003
  # Address of the shopping cart microservice
  shoppingUrl: http://localhost:8004

In real business there may be far more than these three microservices, possibly a dozen or more. In that case we can first define a MicroServiceUrl class dedicated to holding the microservice urls, as follows:

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

@Component
@ConfigurationProperties(prefix = "url")
public class MicroServiceUrl {

    private String orderUrl;
    private String userUrl;
    private String shoppingUrl;
    // getters and setters omitted
}

Careful readers will notice that @ConfigurationProperties is used with a prefix: the field names in the class correspond one-to-one to the keys in the configuration with the prefix removed, i.e., prefix + field name is the key defined in the configuration file. The class also needs the @Component annotation so that it is registered as a component in the Spring container and managed by Spring; we can then simply inject it wherever it is needed.

Note that using the @ConfigurationProperties annotation requires importing the following dependency:

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-configuration-processor</artifactId>
	<optional>true</optional>
</dependency>

OK, the configuration is written; now let's write a Controller to test it. This time we don't need to pull in the microservice urls one by one; we simply inject the configuration class we just wrote with the @Resource annotation and use it, which is very convenient:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import javax.annotation.Resource;

@RestController
@RequestMapping("/test")
public class TestController {

    private static final Logger LOGGER = LoggerFactory.getLogger(TestController.class);

    @Resource
    private MicroServiceUrl microServiceUrl;

    @RequestMapping("/config")
    public String testConfig() {
        LOGGER.info("===== The order service address obtained is: {}", microServiceUrl.getOrderUrl());
        LOGGER.info("===== The user service address obtained is: {}", microServiceUrl.getUserUrl());
        LOGGER.info("===== The shopping cart service address obtained is: {}", microServiceUrl.getShoppingUrl());

        return "success";
    }
}

Start the project again and make the request; the console prints the following, which shows the configuration class works and the values are read correctly from the configuration file:

===== The order service address obtained is: http://localhost:8002
===== The user service address obtained is: http://localhost:8003
===== The shopping cart service address obtained is: http://localhost:8004

3. Designation of project profiles

As we know, real projects generally have two environments: development and production. Their configurations often differ, such as ports, databases, and related addresses. We cannot debug in the development environment and then, when deploying to production, manually change every configuration value to the production settings; that is far too troublesome and error-prone.

The best solution is to keep a separate set of configuration for each environment: during development we tell the project to read the development configuration, and when deploying to the server we tell it to read the production configuration.

We create two new configuration files, for example application-dev.yml and application-pro.yml, to configure the development and production environments respectively. For convenience we only configure the server port: 8001 for development and 8002 for production.

# application-dev.yml: development environment configuration
server:
  port: 8001


# application-pro.yml: production environment configuration
server:
  port: 8002

Then, in application.yml, we specify which configuration file to read. For example, during development we specify the dev configuration in application.yml, as follows:

spring:
  profiles:
    active:
    - dev

This way, during development the application-dev.yml file is read and the service is accessed on port 8001; when deploying to the server, you only need to change the active profile in application.yml to pro so that application-pro.yml is read and the service is accessed on port 8002, which is very convenient.

4. Summary

This lesson mainly explained how to read configuration in business code with Spring Boot, covering both single configuration items and groups of items. In microservices this situation is very common, since there are often many other microservices to call, so encapsulating a configuration class to receive these values is a good approach. Other similar cases, such as database connection parameters, can also be put into a configuration class and handled the same way. Finally, we introduced a quick way to switch between development and production configurations, which saves changing many configuration values at deployment time.
Course source code download address:Poke me to download

Lesson 05: MVC Support in Spring Boot

Spring Boot's MVC support centers on a few annotations that are used most often in real projects, including @RestController, @RequestMapping, @PathVariable, @RequestParam, and @RequestBody. This lesson introduces the common usage and features of these annotations.

1. @RestController

@RestController is a composite annotation introduced in Spring 4.0; let's take a look at what it contains.

@Target({ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Controller
@ResponseBody
public @interface RestController {
    String value() default "";
}

As you can see, @RestController combines the original @Controller and @ResponseBody annotations. Those familiar with Spring already know @Controller well, so it is not repeated here; @ResponseBody converts the returned data structure to Json. So @RestController can be seen as a convenient combination of @Controller and @ResponseBody: once we use @RestController we no longer need @Controller. One thing to note, however: if the front end and back end are separated and no server-side template (such as Thymeleaf) is used for rendering, @RestController can be used directly to pass data to the front end as json for the front end to parse; but if they are not separated and a template is used for rendering, the Controller usually returns a specific page, and then @RestController cannot be used. For example:

public String getUser() {
	return "user";
}

This method is actually meant to return the user page; if we used @RestController, it would simply return the string "user" as the response body, so here we need the @Controller annotation instead. This is described in the later lesson on integrating Spring Boot with the Thymeleaf template engine.

2. @RequestMapping

@RequestMapping is an annotation for mapping request addresses; it can be used on a class or on a method. On a class it maps a request path prefix to a controller, meaning that all request-handling methods in the class use that address as the parent path; on a method it further narrows the mapping down to a specific handler method.

The annotation has six attributes; the three most commonly used in projects are value, method, and produces.

  • value attribute: specify the actual address of the request, value can be omitted.
  • method attribute: specifies the type of request, mainly GET, PUT, POST, DELETE, the default is GET.
  • produces attribute: specifies the type of content to be returned, e.g. produces = "application/json; charset=UTF-8".

@RequestMappingThe annotations are relatively simple, as an example:

@RestController
@RequestMapping(value = "/test", produces = "application/json; charset=UTF-8")
public class TestController {

    @RequestMapping(value = "/get", method = RequestMethod.GET)
    public String testGet() {
        return "success";
    }
}

This is easy: start the project and type localhost:8080/test/get in the browser to test it.

There are dedicated annotations for the four request methods, so you do not have to set the method attribute on @RequestMapping every time: the GET request above can be written directly with @GetMapping("/get"), with the same effect. Correspondingly, the annotations for PUT, POST, and DELETE are @PutMapping, @PostMapping, and @DeleteMapping.
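
For instance, the testGet method above can be written equivalently with the shorthand annotation (shown here only as a small illustration):

@GetMapping("/get")
public String testGet() {
    // Equivalent to @RequestMapping(value = "/get", method = RequestMethod.GET)
    return "success";
}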

3. @PathVariable

The @PathVariable annotation is mainly used to obtain url path parameters; Spring Boot supports RESTful-style urls. For example, if a GET request carries an id in the path and we want to receive it as a method parameter, we can use @PathVariable, as below:

@GetMapping("/user/{id}")
public String testPathVariable(@PathVariable Integer id) {
	("acquiredidbecause of:" + id);
	return "success";
}

One thing to watch out for: if you want the value of the placeholder id in the url to be assigned directly to the parameter id, the name in the url must match the name of the method parameter, otherwise it cannot be received. If they do not match, you can still solve it by specifying the correspondence with the value attribute of @PathVariable, as follows:

@RequestMapping("/user/{idd}")
public String testPathVariable(@PathVariable(value = "idd") Integer id) {
	("acquiredidbecause of:" + id);
	return "success";
}

The placeholder can appear anywhere in the URL, not necessarily at the end; for example, /xxx/{id}/user is also fine. A URL can also contain multiple placeholders, and the method receives them with the same number of parameters, following the same principle as a single parameter, for example:

@GetMapping("/user/{idd}/{name}")
    public String testPathVariable(@PathVariable(value = "idd") Integer id, @PathVariable String name) {
        ("acquiredidbecause of:" + id);
        ("acquirednamebecause of:" + name);
        return "success";
    }

Run the project and request localhost:8080/test/user/2/zhangsan in the browser; the console outputs the following:

Obtained id: 2
Obtained name: zhangsan

So it supports receiving multiple parameters. Similarly, if the name of the parameter in the url is different from the name of the parameter in the method, you need to use the value attribute to bind the two parameters.

4. @RequestParam

The @RequestParam annotation, as the name suggests, also obtains request parameters. We described above that @PathVariable also obtains request parameters, so what is the difference between @RequestParam and @PathVariable? The main difference is that @PathVariable takes the value from the URL template, i.e. a URL of the style http://localhost:8080/user/{id}, whereas @RequestParam takes the value from the request itself, i.e. a URL of the style http://localhost:8080/user?id=1. Let's test the following code using this URL with the parameter id:

@GetMapping("/user")
public String testRequestParam(@RequestParam Integer id) {
	("acquiredidbecause of:" + id);
	return "success";
}

The id is printed normally on the console. Likewise, the parameter name in the URL needs to match the method parameter name; if they differ, specify the correspondence with the value attribute. For example, if the URL is http://localhost:8080/user?idd=1:

@RequestMapping("/user")
public String testRequestParam(@RequestParam(value = "idd", required = false) Integer id) {
	("acquiredidbecause of:" + id);
	return "success";
}

In addition to the value attribute, there are two other attributes that are more commonly used:

  • required attribute: true means the parameter must be passed, otherwise a 400 error is returned; false means it is optional.
  • defaultValue attribute: the default value used when the request does not carry a parameter with that name (a short example follows below).
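As a minimal sketch (the path and parameter names are illustrative), required and defaultValue can be combined like this:

// if "page" is absent from the request, it defaults to 1 instead of causing an error
@GetMapping("/users")
public String listUsers(@RequestParam(value = "page", required = false, defaultValue = "1") Integer page) {
    System.out.println("Obtained page: " + page);
    return "success";
}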

As you can see from the URL, @RequestParam is used in GET requests to receive parameters appended to the URL. The annotation can also be used in POST requests to receive parameters submitted by a front-end form. If the front end submits username and password via a form, we can receive them with @RequestParam in the same way:

@PostMapping("/form1")
    public String testForm(@RequestParam String username, @RequestParam String password) {
        ("acquiredusernamebecause of:" + username);
        ("acquiredpasswordbecause of:" + password);
        return "success";
    }

We'll use postman to simulate a form submission and test the interface:

[Figure: testing the form submission with Postman]

So here is the question: if the form has a lot of fields, we cannot write that many parameters in the back-end method, each with its own @RequestParam annotation. In this case we encapsulate an entity class to receive the parameters; the property names in the entity must match the parameter names in the form.

public class User {
    private String username;
    private String password;
    // getters and setters omitted
}

When receiving with an entity, the @RequestParam annotation is not needed; just declare the entity as the method parameter:

@PostMapping("/form2")
    public String testForm(User user) {
        ("acquiredusernamebecause of:" + ());
        ("acquiredpasswordbecause of:" + ());
        return "success";
    }

Use Postman to test the form submission again and observe the return value and the console log. In real projects, form data is usually wrapped in an entity class, because actual forms generally carry many fields.

5. @RequestBody

The @RequestBody annotation is used to receive an entity passed by the front end; the receiving parameter is the corresponding entity. For example, if the front end submits username and password as JSON, we need to encapsulate an entity on the back end to receive it. When many parameters are passed, receiving them with @RequestBody is very convenient. Example:

public class User {
    private String username;
    private String password;
    // getters and setters omitted
}


@PostMapping("/user")
public String testRequestBody(@RequestBody User user) {
    System.out.println("Obtained username: " + user.getUsername());
    System.out.println("Obtained password: " + user.getPassword());
    return "success";
}

We use Postman to test the effect: open Postman, enter the request address and the parameters (simulated as JSON), as shown in the figure below; the call returns success.

[Figure: testing @RequestBody with Postman]

Also look at the log output from the backend console:

Obtained username: Ni Shengwu
Obtained password: 123456

It can be seen that @RequestBody is used on POST requests to receive JSON entity parameters. It is similar to the form submission introduced above; only the parameter format differs, one being a JSON entity and the other a form submission. In actual projects, use the corresponding annotation according to the specific scenario.

6. Summary

This lesson explained the MVC support in Spring Boot and analyzed how the @RestController, @RequestMapping, @PathVariable, @RequestParam and @RequestBody annotations are used. Since @RestController already integrates @ResponseBody, returning JSON is not repeated. These annotations are used with very high frequency and will appear in basically every real project, so master them.

Course source code download address:Poke me to download

Lesson 06: Spring Boot Integration with Swagger2 Showing Online Interface Documentation

1. Introduction to Swagger

1.1 Problems addressed

With the development of Internet technology, site architecture has largely shifted from back-end rendering to a front-end/back-end separated form, and front-end and back-end technologies have drifted further and further apart on their respective paths. The API interface has become the only link between the two, so API documentation has become the bond between front-end and back-end developers and is becoming more and more important.

The problem is that as the code keeps changing, developers who are busy building new interfaces or updating old ones often find it hard to keep the documentation up to date. Swagger is an important tool for solving this: developers no longer need to hand the consumers of an interface a separate document; they only need to give them a Swagger address, which displays the online API documentation. In addition, interface consumers can test the interface data online, and developers themselves can use the online documentation to test their interfaces during development, which is very convenient.

1.2 Swagger Official

Let's open the Swagger official website. The official slogan of Swagger is:

The Best APIs are Built with Swagger Tools

The translation is: "The best APIs are built using Swagger tools." This shows how confident Swagger is in its functionality and positioning, and since it is indeed very easy to use, that positioning is reasonable. You can see this in the image below:

[Figure: the official positioning of Swagger]

This article explains how to import the Swagger2 tool into Spring Boot to display the project's interface documentation. The version of Swagger used in this lesson is 2.2.2. Let's get started on the Swagger2 journey.

2. maven dependencies for Swagger2

To use the Swagger2 tool, you must import its Maven dependencies. The highest official version at the moment is 2.8.0, but when I tried it I personally felt the page display was not great and not compact enough, which is not convenient to work with. In addition, the latest version is not necessarily the most stable; the version we currently use in real projects is 2.2.2, which is stable and has a friendly interface, so this lesson focuses on version 2.2.2. The dependencies are as follows:

<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger2</artifactId>
	<version>2.2.2</version>
</dependency>
<dependency>
	<groupId>io.springfox</groupId>
	<artifactId>springfox-swagger-ui</artifactId>
	<version>2.2.2</version>
</dependency>

3. Swagger2 configuration

To use Swagger2, it needs to be configured. Configuring Swagger2 in Spring Boot is very easy: create a new configuration class and, in addition to the @Configuration annotation, add the @EnableSwagger2 annotation.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import springfox.documentation.builders.ApiInfoBuilder;
import springfox.documentation.builders.PathSelectors;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.service.ApiInfo;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

/**
 * @author shengwu ni
 */
@Configuration
@EnableSwagger2
public class SwaggerConfig {

    @Bean
    public Docket createRestApi() {
        return new Docket(DocumentationType.SWAGGER_2)
                // Specify the method that builds the details of the api document: apiInfo()
                .apiInfo(apiInfo())
                .select()
                // Specify the package whose interfaces are generated; here the controller package is used so
                // that all interfaces in the controllers are included (replace with your own controller package)
                .apis(RequestHandlerSelectors.basePackage("com.example.course06.controller"))
                .paths(PathSelectors.any())
                .build();
    }

    /**
     * Build the details of the api documentation
     * @return ApiInfo
     */
    private ApiInfo apiInfo() {
        return new ApiInfoBuilder()
                // Set the page title
                .title("Spring Boot Integration with Swagger2 Interface Overview")
                // Set the interface description
                .description("Learning Spring Boot with Wu, Lesson 06")
                // Set the contact information
                .contact("Ni Shengwu, " + "CSDN: /eson_15")
                // Set the version
                .version("1.0")
                // Build
                .build();
    }
}

In the configuration class, the role of each method is explained in the comments, so I won't repeat it here. At this point Swagger2 is configured. To test whether the configuration works, start the project and enter localhost:8080/swagger-ui.html in the browser; if you can see the Swagger2 page, as shown in the figure below, the integration of Swagger2 has succeeded.

[Figure: the Swagger2 page]

Combining this figure with the Swagger2 configuration above, you can clearly see what each method in the configuration class does, which makes the configuration easy to understand and shows that Swagger2 configuration is actually very simple.

[Friendly reminder] Many people may run into the following situation when configuring Swagger and be unable to close it; this is caused by the browser cache, and clearing the browser cache solves the problem.


4. Use of Swagger2

Now that Swagger2 is configured and the launch test works, let's start using it. This mainly covers a few common Swagger2 annotations, used on entity classes, Controller classes and Controller methods respectively. Finally, we will see how Swagger2 renders the online interface documentation on the page and use the Controller methods to test interface data.

4.1 Entity class annotations

In this section we create a User entity class, focusing on Swagger2's @ApiModel and @ApiModelProperty annotations, and at the same time prepare for the later tests.

import io.swagger.annotations.ApiModel;
import io.swagger.annotations.ApiModelProperty;

@ApiModel(value = "user entity class")
public class User {

    @ApiModelProperty(value = "user unique identifier")
    private Long id;

    @ApiModelProperty(value = "user name")
    private String username;

    @ApiModelProperty(value = "user password")
    private String password;

    // getters and setters omitted
}

An explanation of the @ApiModel and @ApiModelProperty annotations:

@ApiModel is used on an entity class to describe the class when parameters are received with that entity.
@ApiModelProperty is used on properties in the class to describe the corresponding model property.

The specific effect of this annotation in the online API documentation is described below.

4.2 Controller Class Annotations

Let's write a TestController, write a few more interfaces, and then learn about the annotations in the Controller that are relevant to Swagger2.

import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import io.swagger.annotations.ApiParam;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
// imports for the project's JsonResult and User classes omitted

@RestController
@RequestMapping("/swagger")
@Api(value = "Swagger2 Online Interface Documentation")
public class TestController {

    @GetMapping("/get/{id}")
    @ApiOperation(value = "Get user information based on the user's unique identifier")
    public JsonResult<User> getUserInfo(@PathVariable @ApiParam(value = "user unique identifier") Long id) {
        // simulate fetching the User from the database by id
        User user = new User(id, "Ni Shengwu", "123456");
        return new JsonResult(user);
    }
}

Let's look at the @Api, @ApiOperation and @ApiParam annotations.

@Api is used on a class to mark it as a Swagger resource.
@ApiOperation is used on a method to describe an HTTP request operation.
@ApiParam is used on a parameter to describe it.

The result returned here is JsonResult, the entity we encapsulated in Lesson 02 when learning how to return JSON data. The above are the five most commonly used Swagger annotations. Next, run the project and enter localhost:8080/swagger-ui.html in the browser to see the interfaces on the Swagger page.

[Figure: the interfaces displayed in Swagger]

As you can see, the Swagger page displays the interface information very comprehensively; the role of each annotation and where it shows up are marked in the figure above. From the page you can see all the information about the interface, so let's test the data it returns directly online: enter an id of 1 and look at the returned data:

[Figure: testing the returned data]

As you can see, the data is returned in JSON format directly on the page, and developers can use this online interface to test whether the data is correct, which is very convenient. The above is for a single input parameter; what does Swagger look like when the input parameter is an object? Let's write another interface:

@PostMapping("/insert")
    @ApiOperation(value = "Add user information")
    public JsonResult<Void> insertUser(@RequestBody @ApiParam(value = "user information") User user) {
        // Handling of add-in logic
        return new JsonResult<>();
    }

Restart the project and enter localhost:8080/swagger-ui.html in the browser to see the effect:

[Figure: the new interface displayed in Swagger]

5. Summary

OK, this lesson analyzed in detail the advantages of Swagger and how to integrate Swagger2 into Spring Boot, including the configuration, the relevant annotations on the entity and interface classes involved, and how to use them. Finally, the page tests showed the power of Swagger; it is basically a necessary tool for every project team and is not difficult to master.

Course source code download address:Poke me to download

Lesson 07: Spring Boot Integration with Thymeleaf Template Engine

1. Introduction to Thymeleaf

Thymeleaf is a modern server-side Java template engine for Web and standalone environments.
Thymeleaf's main goal is to bring elegant, natural templates to your development workflow - HTML that displays correctly in the browser and can also be used as static prototypes, enabling stronger collaboration among development teams.

The above is translated from the official Thymeleaf website. The traditional JSP + JSTL combination is a thing of the past. Thymeleaf is a modern server-side template engine; unlike traditional JSP, a Thymeleaf page can be opened directly in the browser, because the browser ignores the unrecognized extra attributes, which is equivalent to opening the native page and brings a certain convenience to front-end staff.

What does that mean? It means Thymeleaf can run either as a local static page or in an environment backed by a server. Since Thymeleaf supports HTML prototypes, extra attributes can be added to HTML tags to achieve a "template + data" presentation, so designers can view the page effect directly in the browser, and once the service is started, back-end developers can view the dynamic page populated with data. For example:

<div class="ui right aligned basic segment">
      <div class="ui orange basic label" th:text="${}">Static original message</div>
</div>
<h2 class="ui center aligned header" th:text="${}">This is the static title</h2>

As above, static text is shown when the page is opened statically, and dynamic data is shown once the service is started and data is fetched from the database. The th:text attribute dynamically replaces the text, as explained below. This example illustrates that browsers ignore undefined tag attributes in HTML (such as th:text), so Thymeleaf templates can be opened statically; when data is returned to the page, the Thymeleaf attributes dynamically replace the static content so that the page displays the data.

2. Dependency import

To use thymeleaf templates in Spring Boot, you need to introduce dependencies. You can check Thymeleaf when you create your project, or you can import it manually after you create your project, as follows:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-thymeleaf</artifactId>
</dependency>

Also, if you want to use thymeleaf templates on html pages, you need to introduce them in the page tags:

<html xmlns:th="">

3. Thymeleaf-related configuration

Because Thymeleaf already has sensible defaults, we do not need to configure much. One thing to pay attention to: Thymeleaf enables page caching by default, so during development you need to turn this cache off, configured as follows:

spring:
  thymeleaf:
    cache: false # disable template caching

Otherwise the cache will prevent the page from reflecting updates in time: for example, you modify a file and redeploy it to Tomcat, but the page looks the same when you refresh it; that is the cache at work.

4. Use of Thymeleaf

4.1 Accessing static pages

Strictly speaking this has nothing to do with Thymeleaf; the reason I cover it here is that when building a site we usually provide a 404 page and a 500 page, so that users get a friendly display when something goes wrong instead of a pile of exception stack traces. Spring Boot automatically resolves error pages such as 404.html and 500.html placed in an error/ folder under the templates/ directory, so we create a new error folder under templates/ to hold the error pages, each printing some information. For example, the 404 page:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Title</title>
</head>
<body>
    This is the 404 page.
</body>
</html>

Let's write another controller to test the 404 and 500 pages:

@Controller
@RequestMapping("/thymeleaf")
public class ThymeleafController {

    @RequestMapping("/test404")
    public String test404() {
        return "index";
    }

    @RequestMapping("/test500")
    public String test500() {
        int i = 1 / 0;
        return "index";
    }
}
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15

When we typelocalhost:8080/thymeleaf/test400When you intentionally input the wrong method and cannot find the corresponding method, it will jump to the display.
When we typelocalhost:8088/thymeleaf/test505When it does, it throws an exception and then automatically jumps to the display.

Note: there is an issue worth pointing out. In the earlier lessons we said that with front-end/back-end separation in microservices we use the @RestController annotation, which automatically converts the returned data to JSON. However, when a template engine is used, the Controller cannot use @RestController, because with Thymeleaf the method returns the name of a view file; in the Controller above, "index" is returned to locate the page, and with @RestController the string "index" would simply be returned to the caller as the response body instead of the view being resolved. You can try it yourself. So when using templates, use the @Controller annotation.
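If a single method inside a @Controller does need to return JSON while the rest return views, one option (a minimal sketch; the class name and paths are illustrative) is to add @ResponseBody to just that method:

@Controller
@RequestMapping("/mixed")
public class MixedController {

    // resolved as a view: templates/index.html
    @GetMapping("/page")
    public String page() {
        return "index";
    }

    // @ResponseBody makes only this method return the string as the response body
    @GetMapping("/data")
    @ResponseBody
    public String data() {
        return "{\"status\":\"ok\"}";
    }
}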

4.2 Handling Objects in Thymeleaf

Let's look at how object information is handled in Thymeleaf templates. Suppose we are building a personal blog and need to pass the blogger's information to the front end for display; we encapsulate it in a Blogger object, for example:

public class Blogger {
    private Long id;
    private String name;
    private String pass;
    // constructor, getters and setters omitted
}

Then initialize it in the controller layer:

@GetMapping("/getBlogger")
public String getBlogger(Model model) {
	Blogger blogger = new Blogger(1L, "Ni Shengwu (1919-1952), novelist", "123456");
	("blogger", blogger);
	return "blogger";
}

We initialize a Blogger object, put it into the Model and return the blogger view for rendering. Next we write a blogger.html to render the blogger information:

<!DOCTYPE html>
<html lang="en" xmlns:th="http://www.thymeleaf.org">
<head>
    <meta charset="UTF-8">
    <title>Blogger Information</title>
</head>
<body>
<form action="" th:object="${blogger}" >
    User id:<input name="id" th:value="${blogger.id}"/><br>
    User name:<input type="text" name="username" th:value="${blogger.getName()}" /><br>
    Login password:<input type="text" name="password" th:value="*{pass}" />
</form>
</body>
</html>

As you can see, in Thymeleaf templates th:object="${blogger}" is used to obtain the object, and then there are three ways to get the object's properties inside the form, as follows:

Use th:value="*{propertyName}"
Use th:value="${object.propertyName}", where the object is the one obtained above with th:object
Use th:value="${object.getXxx()}", where the object is the one obtained above with th:object

As you can see, writing expressions in Thymeleaf feels much like writing Java. Enter localhost:8080/thymeleaf/getBlogger to test the data:

[Figure: handling an object in Thymeleaf]

4.3 Handling Lists in Thymeleaf

Working with a List is similar to working with the object described above, except that it needs to be traversed in Thymeleaf. We first build a List in the Controller:

@GetMapping("/getList")
public String getList(Model model) {
    Blogger blogger1 = new Blogger(1L, "Ni Shengwu", "123456");
    Blogger blogger2 = new Blogger(2L, "Daren's Class", "123456");
    List<Blogger> list = new ArrayList<>();
    (blogger1);;
    (blogger2);
    (blogger1); (blogger2); ("list", list);
    return "list";
}

Next we write a list.html to receive the list and iterate over it:

<!DOCTYPE html>
<html lang="en" xmlns:th="http://www.thymeleaf.org">
<head>
    <meta charset="UTF-8">
    <title>Blogger Information</title>
</head>
<body>
<form action="" th:each="blogger : ${list}" >
    User id:<input name="id" th:value="${blogger.id}"/><br>
    User name:<input type="text" name="name" th:value="${blogger.name}"/><br>
    Login password:<input type="text" name="pass" th:value="${blogger.getPass()}"/>
</form>
</body>
</html>

As you can see, it is much the same as handling a single object. Thymeleaf uses th:each to traverse, ${list} takes the attribute passed in the Model, and each element of the list is bound to a custom variable, here named blogger, which can be used directly in the form. The value of a property can be obtained with ${object.propertyName} or with ${object.getXxx()}, just as for a single object above; however, you cannot use *{propertyName} here, because the Thymeleaf template cannot resolve it in this context.

4.4 Other common thymeleaf operations

Let's summarize some common tag operations in thymeleaf as follows:

  • th:value: assigns a value to an attribute
  • th:style: sets the style, e.g. th:style="'display:' + @{(${isTrue} ? 'none' : 'inline-block')}"
  • th:onclick: click event, e.g. th:onclick="'getInfo()'"
  • th:if: conditional rendering
  • th:href: hyperlink address
  • th:unless: conditional rendering, the opposite of th:if
  • th:switch: used together with th:case
  • th:case: used together with th:switch
  • th:src: resource address (e.g. images, scripts)
  • th:action: form submission address

There are many other uses of Thymeleaf which I won't summarize here; you can refer to the official Thymeleaf documentation (v3.0). The main point is to learn how to use Thymeleaf within Spring Boot; when you come across a particular tag or method, just consult the official documentation.

5. Summary

Thymeleaf is widely used in Spring Boot. This lesson mainly analyzed the advantages of Thymeleaf and how to integrate and use it in Spring Boot, including dependencies, configuration, access to data, and some precautions, and finally listed some commonly used Thymeleaf attributes. Use it more in real projects and you will become familiar with it; there is no need to memorize the tags or methods, just look them up as needed. The key is being able to integrate it into Spring Boot.

Course source code download address:Poke me to download

Lesson 08: Global Exception Handling in Spring Boot

During project development, whether in the underlying database layer, the business layer or the control layer, we inevitably run into all kinds of predictable and unpredictable exceptions. If every layer handles its exceptions separately, the coupling of the system becomes very high, the development workload increases, handling is not unified, and maintenance costs rise.
Given this, we need to decouple exception handling from the individual processing steps, which both keeps each step focused on a single responsibility and allows exception information to be handled and maintained in a unified way. At the same time, we do not want to throw raw exceptions at the user; we should handle the exception, wrap the error message, and return a friendly message. This lesson summarizes how to intercept and handle global exceptions in a Spring Boot project.

1. Define a uniform json structure for returns

When the front end or another service calls this service's interface, the interface needs to return JSON data. Usually the service only needs to return the data requested, but in a real project we wrap more information, such as a status code code and a message msg. On the one hand this gives the project a unified return structure that the whole team can use; on the other hand it makes it easy to combine with global exception handling, because exception handling generally needs to feed a status code and the exception content back to the caller.
This unified JSON structure can draw on Lesson 02: Spring Boot returns JSON data and data encapsulation. Here we simplify it, keeping only the status code code and the exception message msg, as follows:

public class JsonResult {
    /**
     * status code
     */
    protected String code;

    /**
     * message
     */
    protected String msg;

    public JsonResult() {
        this.code = "200";
        this.msg = "The operation was successful.";
    }

    public JsonResult(String code, String msg) {
        this.code = code;
        this.msg = msg;
    }
    // getters and setters omitted
}

2. Handling system exceptions

Create a new GlobalExceptionHandler global exception handling class and add the @ControllerAdvice annotation so that it can intercept exceptions thrown in the project, as follows:

@ControllerAdvice
@ResponseBody
public class GlobalExceptionHandler {
    // logger for printing logs
    private static final Logger logger = LoggerFactory.getLogger(GlobalExceptionHandler.class);
    // ……
}

If we click into @ControllerAdvice we can see that it is itself annotated with @Component, which means the class is handed to Spring as a component when Spring Boot starts. The annotation also has a basePackages attribute, used to restrict which packages' exceptions are intercepted; generally we do not specify it, so exceptions from the whole project are intercepted. The @ResponseBody annotation means that, after the exception is handled, a JSON body is returned to the caller.
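For instance, if you only wanted to intercept exceptions thrown from controllers under one package, a minimal sketch (the package name is illustrative) would be:

// only exceptions originating from controllers in this package are handled by this advice
@ControllerAdvice(basePackages = "com.example.web.controller")
@ResponseBody
public class WebExceptionHandler {
    // @ExceptionHandler methods go here
}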
Using it in a project is simple in Spring Boot: mark a method with the @ExceptionHandler annotation to specify a particular exception, handle the exception information in that method, and return the result to the caller in the unified JSON structure. Here are a few examples.

2.1 Dealing with missing parameter anomalies

In a front-end/back-end separated architecture, the front end calls back-end interfaces in REST style. Sometimes a POST request needs to carry parameters, but parameters are often missed. The same can happen in a microservice architecture when interfaces are called between services. In that case we need a method that handles the missing-parameter exception, so that the front end or the caller gets a friendly message.

When a required parameter is missing, Spring throws a MissingServletRequestParameterException; we can intercept that exception and handle it in a friendly way, as follows:

/**
 * Missing request parameter exception
 * @param ex MissingServletRequestParameterException
 * @return JsonResult
 */
@ExceptionHandler(MissingServletRequestParameterException.class)
@ResponseStatus(value = HttpStatus.BAD_REQUEST)
public JsonResult handleMissingServletRequestParameterException(
    MissingServletRequestParameterException ex) {
    logger.error("Missing request parameter: {}", ex.getMessage());
    return new JsonResult("400", "Missing required request parameters");
}

Let's write a simple Controller to test the exception, receiving two parameters via a POST request: name and password.

@RestController
@RequestMapping("/exception")
public class ExceptionController {

    private static final Logger logger = LoggerFactory.getLogger(ExceptionController.class);

    @PostMapping("/test")
    public JsonResult test(@RequestParam("name") String name,
                           @RequestParam("pass") String pass) {
        logger.info("name: {}", name);
        logger.info("pass: {}", pass);
        return new JsonResult();
    }
}
Then use Postman to make a call to the interface. When you make the call, if you pass only the name and not the password, a missing parameter exception will be thrown, and after the exception is caught, it will go into the logic that we have written to return a friendly message to the caller, as follows:

[Figure: the missing-parameter exception response]

2.2 Handling Null Pointer Exceptions

Null pointer exceptions are commonplace in development. Where do they typically occur?
First, in microservices we often call other services to fetch data, mostly in JSON format; while parsing that JSON, values may be null, so after obtaining a jsonObject and before reading fields from it, we should first do a non-null check.
Another very common place is querying data from the database: whether a single record is wrapped in an object or multiple records in a List, nobody can guarantee that what comes back is not null, so a non-null check should always be done before using the data.
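As a small defensive sketch (userService and its getById method are hypothetical names used only for illustration):

// assume userService.getById(id) may return null when no record exists
User user = userService.getById(id);
if (user == null) {
    // handle the absence explicitly instead of risking a NullPointerException later
    return new JsonResult("404", "User not found");
}
// safe to use user from here on
logger.info("username: {}", user.getUsername());
return new JsonResult();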
Handling the null pointer exception itself is simple: the same logic as above, just with a different exception and message, as follows:

@ControllerAdvice
@ResponseBody
public class GlobalExceptionHandler {

    private static final Logger logger = LoggerFactory.getLogger(GlobalExceptionHandler.class);

    /**
     * Null pointer exception
     * @param ex NullPointerException
     * @return JsonResult
     */
    @ExceptionHandler(NullPointerException.class)
    @ResponseStatus(value = HttpStatus.INTERNAL_SERVER_ERROR)
    public JsonResult handleNullPointerException(NullPointerException ex) {
        logger.error("Null pointer exception: {}", ex.getMessage());
        return new JsonResult("500", "A null pointer exception occurred");
    }
}

I won't test this one here. The accompanying code contains an ExceptionController.testNullPointException method that simulates a null pointer exception; requesting the corresponding URL in the browser returns the following information:

{"code": "500", "msg": "Null Pointer Exception" }

2.3 Once and for all?

Of course there are many more exceptions, such as RuntimeException, database query or operation exceptions, and so on. Since Exception is the parent class from which all exceptions inherit, we can intercept Exception directly and be done once and for all:

@ControllerAdvice
@ResponseBody
public class GlobalExceptionHandler {

    private static final Logger logger = LoggerFactory.getLogger(GlobalExceptionHandler.class);

    /**
     * System exception: unexpected exceptions
     * @param ex Exception
     * @return JsonResult
     */
    @ExceptionHandler(Exception.class)
    @ResponseStatus(value = HttpStatus.INTERNAL_SERVER_ERROR)
    public JsonResult handleUnexpectedServer(Exception ex) {
        logger.error("System exception:", ex);
        return new JsonResult("500", "An exception occurred in the system, please contact the administrator");
    }
}

However, in a project we usually still intercept common exceptions individually. Although intercepting Exception solves everything once and for all, it is not helpful for troubleshooting or locating problems. In practice, intercept specific exceptions in detail in the GlobalExceptionHandler, and keep an Exception handler as the final fallback so that anything not matched elsewhere still produces a friendly message. A sketch of this layering follows below.
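A minimal sketch of that layering (handler names are illustrative); Spring always invokes the most specific matching @ExceptionHandler, so the Exception handler only fires when nothing else matches:

@ControllerAdvice
@ResponseBody
public class GlobalExceptionHandler {

    private static final Logger logger = LoggerFactory.getLogger(GlobalExceptionHandler.class);

    // specific handler: wins whenever a NullPointerException is thrown
    @ExceptionHandler(NullPointerException.class)
    public JsonResult handleNullPointerException(NullPointerException ex) {
        logger.error("Null pointer exception: {}", ex.getMessage());
        return new JsonResult("500", "A null pointer exception occurred");
    }

    // fallback handler: only reached when no more specific handler matches
    @ExceptionHandler(Exception.class)
    public JsonResult handleUnexpectedServer(Exception ex) {
        logger.error("System exception:", ex);
        return new JsonResult("500", "An exception occurred in the system, please contact the administrator");
    }
}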

3. Intercepting custom exceptions

In a real project, besides intercepting system exceptions, some business logic requires custom exceptions. For example, in microservices, services constantly call each other, and a call may fail or time out; in that case we define a custom exception and throw it when the call fails, to be caught by the GlobalExceptionHandler.

3.1 Defining Exception Information

Because there are many business exceptions and different businesses may give different prompts, to make exception messages easy to manage we generally define an enumeration class of exception messages. For example:

/**
 * Business exception message enumeration class
 * @author shengwu ni
 */
public enum BusinessMsgEnum {
    /** Parameter exception */
    PARMETER_EXCEPTION("102", "Parameter exception!"),
    /** Wait timeout */
    SERVICE_TIME_OUT("103", "Service call timeout!"),
    /** Parameter too large */
    PARMETER_BIG_EXCEPTION("102", "The number of images entered cannot exceed 50!"),
    /** 500 : the once-and-for-all prompt can also be defined here */
    UNEXPECTED_EXCEPTION("500", "An exception occurred in the system, please contact the administrator!");
    // more business exception messages can be defined here

    /**
     * Message code
     */
    private String code;
    /**
     * Message content
     */
    private String msg;

    private BusinessMsgEnum(String code, String msg) {
        this.code = code;
        this.msg = msg;
    }
    // getters and setters omitted
}

3.2 Intercepting custom exceptions

Then we define a business exception; when a business exception occurs, we simply throw this custom exception. For example, we define a BusinessErrorException as follows:

/**
 * Custom business exception
 * @author shengwu ni
 */
public class BusinessErrorException extends RuntimeException {

    private static final long serialVersionUID = -7480022450501760611L;

    /**
     * exception code
     */
    private String code;
    /**
     * exception message
     */
    private String message;

    public BusinessErrorException(BusinessMsgEnum businessMsgEnum) {
        this.code = businessMsgEnum.getCode();
        this.message = businessMsgEnum.getMsg();
    }
    // getters and setters omitted
}

The constructor takes the custom exception enumeration defined above, so whenever a new exception message needs to be added to the project we just add it to the enumeration, which makes unified maintenance very convenient. Then we intercept the exception:

@ControllerAdvice
@ResponseBody
public class GlobalExceptionHandler {

    private static final Logger logger = LoggerFactory.getLogger(GlobalExceptionHandler.class);

    /**
     * Intercept business exceptions and return the business exception information
     * @param ex BusinessErrorException
     * @return JsonResult
     */
    @ExceptionHandler(BusinessErrorException.class)
    @ResponseStatus(value = HttpStatus.INTERNAL_SERVER_ERROR)
    public JsonResult handleBusinessError(BusinessErrorException ex) {
        String code = ex.getCode();
        String message = ex.getMessage();
        return new JsonResult(code, message);
    }
}

In the business code, we can just simulate throwing a business exception and test it:

@RestController
@RequestMapping("/exception")
public class ExceptionController {

    private static final Logger logger = LoggerFactory.getLogger(ExceptionController.class);

    @GetMapping("/business")
    public JsonResult testException() {
        try {
            int i = 1 / 0;
        } catch (Exception e) {
            throw new BusinessErrorException(BusinessMsgEnum.UNEXPECTED_EXCEPTION);
        }
        return new JsonResult();
    }
}

Run the project, test it, and return the json as follows, indicating that our customized business exception catching was successful:

{"code": "500", "msg": "An exception has occurred in the system, please contact an administrator!"}

4. Summary

This lesson mainly covered global exception handling in Spring Boot, including wrapping the exception information, catching and handling exceptions, and the custom exception enumeration class and business exception handling used in real projects. It is very widely used; basically every project needs global exception handling.

Course source code download address:Poke me to download

Lesson 09: Cutting AOP Handling in Spring Boot

1. What is AOP

AOP is the abbreviation of Aspect Oriented Programming. Its goal is the separation of concerns. What is a concern? It is the thing you actually want to do. Suppose you are a pampered young master with no goals in life: clothes are handed to you, food is brought to your mouth, and all day you care about only one thing: playing (that is your concern, the only thing you do). But before you go out to play you still need to get up, get dressed, put on shoes, fold the quilt, make breakfast, and so on; you don't want to pay attention to any of that, you only want to think about playing. So what do you do?

Right! All of these things are left to the servants. You have a special servant A to help you get dressed, servant B to help you put on your shoes, servant C to help you fold up your quilt, servant D to help you cook your meals, and then you start to eat and go play (that's your business for the day), and then you come back when you've finished your business, and then a whole series of servants start to help you with this and that, and then the day is over!

This is AOP. The beauty of AOP is that you only do your own thing and someone else does the rest for you. Maybe one day you want to streak and don't want to wear clothes, so you fire servant A; maybe one day you want to take some money with you before going out, so you hire another servant E to fetch the money for you. This is AOP: everyone performs their own duty and can be flexibly combined, achieving a configurable, pluggable program structure.

2. AOP processing in Spring Boot

2.1 AOP Dependencies

To use AOP, you first need to introduce AOP dependencies.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-aop</artifactId>
</dependency>

2.2 Implementing an AOP aspect

Using AOP in Spring Boot is very simple. Suppose we want to print some logs in the project: after introducing the dependency above, we create a new class LogAspectHandler to define the aspect and its handling logic. Just add the @Aspect annotation to it. @Aspect marks the class as an aspect class, and the class also needs the @Component annotation so that it is managed by Spring.

@Aspect
@Component
public class LogAspectHandler {

}
  • 1
  • 2
  • 3
  • 4
  • 5

Here are a few common annotations and their use:

1. @Pointcut: defines a pointcut, i.e. the entry point to the thing of concern described above.
2. @Before: something done before the target method runs.
3. @After: something done after the target method runs.
4. @AfterReturning: runs after the target method returns and can enhance the return value.
5. @AfterThrowing: runs when the target method throws an exception.

2.2.1 @Pointcut annotation

The @Pointcut annotation is used to define a pointcut, i.e. the entry point to the thing of concern described above. The pointcut determines what the join points are, allowing us to control when the advice runs.

@Aspect
@Component
public class LogAspectHandler {

    /**
     * Define a pointcut that intercepts all methods under the controller package and its sub-packages
     * (replace the package below with your own controller package)
     */
    @Pointcut("execution(* com.example.controller..*.*(..))")
    public void pointCut() {}
}

The @Pointcut annotation specifies a pointcut, defining what needs to be intercepted. Two common expression styles are described here: one using execution(), the other using annotation().
Taking the expression execution(* com.example.controller..*.*(..)) as an example (broken down in the sketch after this list), the syntax is as follows:

execution() is the body of the expression
The first * indicates the return type; * means any type
The package name indicates the package to intercept; the two dots .. after it mean the current package and all of its sub-packages
The second * indicates the class name; * means all classes
*(..): the final * is the method name, * means all methods, and the .. in the parentheses means any parameters
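As a concrete reading of one such expression (the package name is illustrative):

// A pointcut matching any method, with any return type and any arguments,
// declared in any class under com.example.controller or its sub-packages:
//   *                          -> any return type
//   com.example.controller..   -> the package and all of its sub-packages
//   *                          -> any class
//   *(..)                      -> any method, with any parameters
@Pointcut("execution(* com.example.controller..*.*(..))")
public void controllerMethods() {}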

The annotation() style defines a pointcut against a particular annotation. For example, to cut into all methods annotated with @GetMapping, the pointcut can be defined as follows:

@Pointcut("@annotation()")
public void annotationCut() {}

Advice that uses this pointcut then only cuts into methods annotated with @GetMapping. In real projects there may be different handling logic for different annotations, such as @GetMapping, @PostMapping and @DeleteMapping, so this annotation-based way of cutting in is also commonly used.
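For example, advice bound to this annotation pointcut (using the @Before annotation explained in the next subsection) might look like the following sketch, placed inside the same aspect class:

// runs before every method annotated with @GetMapping
@Before("annotationCut()")
public void beforeGetMapping(JoinPoint joinPoint) {
    logger.info("A @GetMapping handler is about to run: {}", joinPoint.getSignature().getName());
}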

2.2.2 The @Before annotation

A method annotated with @Before runs before the pointcut's target method executes. It can do some logging, or collect statistics such as the user's request URL and IP address, which is commonly done on a personal site. For example:

@Aspect
@Component
public class LogAspectHandler {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    /**
     * Runs before the pointcut method defined above
     * @param joinPoint joinPoint
     */
    @Before("pointCut()")
    public void doBefore(JoinPoint joinPoint) {
        logger.info("==== doBefore method entered ====");

        // Get the signature of the intercepted method
        Signature signature = joinPoint.getSignature();
        // Get the name of the type (package + class) the method belongs to
        String declaringTypeName = signature.getDeclaringTypeName();
        // Get the name of the method about to be executed
        String funcName = signature.getName();
        logger.info("The method about to be executed is: {}, belonging to: {}", funcName, declaringTypeName);

        // Information such as the request url and ip can also be recorded
        ServletRequestAttributes attributes = (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
        HttpServletRequest request = attributes.getRequest();
        // Get the request url
        String url = request.getRequestURL().toString();
        // Get the request ip
        String ip = request.getRemoteAddr();
        logger.info("User request url: {}, ip address: {}", url, ip);
    }
}

The JoinPoint object is very useful: from it you can get the signature, and from the signature the package and class name, the method name, and even the method arguments (via joinPoint.getArgs()).
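For instance, printing the arguments inside the same doBefore advice is a small sketch:

// joinPoint.getArgs() returns the arguments of the intercepted method as an Object[]
Object[] args = joinPoint.getArgs();
logger.info("The intercepted method was called with arguments: {}", Arrays.toString(args));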

2.2.3 @After annotation

The @After annotation corresponds to @Before: the annotated method runs after the target method has executed, and can also do some logging after a method completes.

@Aspect
@Component
public class LogAspectHandler {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    /**
     * Define a pointcut that intercepts all methods under the controller package and its sub-packages
     * (replace the package below with your own controller package)
     */
    @Pointcut("execution(* com.example.controller..*.*(..))")
    public void pointCut() {}

    /**
     * Runs after the pointcut method defined above
     * @param joinPoint joinPoint
     */
    @After("pointCut()")
    public void doAfter(JoinPoint joinPoint) {

        logger.info("==== doAfter method entered ====");
        Signature signature = joinPoint.getSignature();
        String method = signature.getName();
        logger.info("Method {} has finished executing", method);
    }
}

Here, let's write a Controller to test the execution results, create a new AopController as follows:

@RestController
@RequestMapping("/aop")
public class AopController {

    @GetMapping("/{name}")
    public String testAop(@PathVariable String name) {
        return "Hello " + name;
    }
}

Start the project, enter localhost:8080/aop/CSDN in the browser, and observe the console output:

==== doBefore method entered ====
The method about to be executed is: testAop, belonging to: com.example.controller.AopController
User request url: http://localhost:8080/aop/CSDN, ip address: 0:0:0:0:0:0:0:1
==== doAfter method entered ====
Method testAop has finished executing

The logic and order of execution can be seen from the printed logs, which gives an intuitive sense of what the @Before and @After annotations do.

2.2.4 @AfterReturning annotation

The @AfterReturning annotation is somewhat similar to @After; the difference is that @AfterReturning can capture the return value after the target method finishes and enhance the return value with business logic, for example:

@Aspect
@Component
public class LogAspectHandler {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    /**
     * Runs after the pointcut method defined above returns; the returned object can be captured and enhanced here
     * @param joinPoint joinPoint
     * @param result result
     */
    @AfterReturning(pointcut = "pointCut()", returning = "result")
    public void doAfterReturning(JoinPoint joinPoint, Object result) {

        Signature signature = joinPoint.getSignature();
        String classMethod = signature.getName();
        logger.info("Method {} has finished executing, return value: {}", classMethod, result);
        // In real projects the return value can be enhanced according to the business
        logger.info("Business-specific enhancement of the return value: {}", result + " (enhanced)");
    }
}

Note that in the @AfterReturning annotation the value of the returning attribute must match the name of the method parameter, otherwise the return value cannot be captured. The second parameter of the method is the return value of the intercepted method; inside doAfterReturning the return value can be enhanced and wrapped according to business needs. Restart the service and test again (omitting the unrelated logs):

Method testAop has finished executing, return value: Hello CSDN
Business-specific enhancement of the return value: Hello CSDN (enhanced)

2.2.5 @AfterThrowing annotation

As the name suggests, a method annotated with @AfterThrowing runs when the intercepted method throws an exception, and exception-handling logic can be placed there. Note that the value of the throwing attribute must match the name of the method parameter, otherwise an error is reported. The second parameter of the method is the thrown exception.

/**
 * Handle logging with AOP
 * @author shengwu ni
 * @date 2018/05/04 20:24
 */
@Aspect
@Component
public class LogAspectHandler {

    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    /**
     * Runs when the pointcut method defined above throws an exception
     * @param joinPoint joinPoint
     * @param ex ex
     */
    @AfterThrowing(pointcut = "pointCut()", throwing = "ex")
    public void afterThrowing(JoinPoint joinPoint, Throwable ex) {
        Signature signature = joinPoint.getSignature();
        String method = signature.getName();
        // logic for handling the exception
        logger.error("An error occurred while executing method {}, exception: {}", method, ex);
    }
}

I won't test this method here; you can try it yourself.

3. Summary

This lesson gave a detailed explanation of AOP in Spring Boot: introducing AOP, the commonly used annotations and their parameters, and the commonly used APIs. AOP is very useful in real projects: before and after the intercepted methods you can do appropriate pre-processing or enhancement according to the specific business, and it can also be used for exception capture and handling, so use AOP sensibly according to the scenario.

Course source code download address:Poke me to download

Lesson 10: Spring Boot Integration with MyBatis

1. Introduction to MyBatis

As we all know, MyBatis is a persistence-layer framework and a top-level Apache project. MyBatis lets developers focus on the SQL itself and, through the mappings it provides, flexibly generate the SQL statements the requirements call for. It uses simple XML or annotations to configure and map native information, mapping interfaces and Java POJOs to database records, and it can be said to hold half the market. This lesson covers integrating MyBatis with Spring Boot in two ways, focusing on the annotation-based approach, because real projects use annotations a little more: they are more concise and remove a lot of XML configuration (this is not absolute, and some teams still use the XML approach).

2. MyBatis configuration

2.1 Dependency Import

Integrating MyBatis with Spring Boot requires importing the mybatis-spring-boot-starter and MySQL dependencies; here we use version 1.3.2 of the starter, as follows:

<dependency>
	<groupId>org.mybatis.spring.boot</groupId>
	<artifactId>mybatis-spring-boot-starter</artifactId>
	<version>1.3.2</version>
</dependency>
<dependency>
	<groupId>mysql</groupId>
	<artifactId>mysql-connector-java</artifactId>
	<scope>runtime</scope>
</dependency>

If we click into the mybatis-spring-boot-starter dependency, we can see the familiar MyBatis and Spring dependencies we used before. As I said at the beginning of the course, Spring Boot is committed to simplifying coding: the starter series pulls in the related dependencies for us, so developers don't need to worry about tedious configuration, which is very convenient.

<!-- other dependencies omitted -->
<dependency>
    <groupId>org.mybatis</groupId>
    <artifactId>mybatis</artifactId>
</dependency>
<dependency>
    <groupId>org.mybatis</groupId>
    <artifactId>mybatis-spring</artifactId>
</dependency>

2.2 Configuration

Let's take a look at what basic configuration you need to do in your configuration file to integrate with MyBatis.

# service port
server:
  port: 8080

# database address
datasource:
  url: localhost:3306/blog_test

spring:
  datasource: # database configuration
    driver-class-name:
    url: jdbc:mysql://${datasource.url}?useSSL=false&useUnicode=true&characterEncoding=utf-8&allowMultiQueries=true&autoReconnect=true&failOverReadOnly=false&maxReconnects=10
    username: root
    password: 123456
    hikari:
      maximum-pool-size: 10 # maximum connection pool size
      max-lifetime: 1770000

mybatis:
  # package whose classes get type aliases (the project's entity package)
  type-aliases-package: .
  configuration:
    map-underscore-to-camel-case: true # camel-case mapping
  mapper-locations: # mapper XML file locations
    - classpath:mapper/*.xml

A brief word on the configuration above: I won't explain the database settings in detail, since configuring a username, password and connection URL should be very familiar by now. The connection pool used here is HikariCP, which ships with Spring Boot; interested readers can search Baidu or Google to learn more about it.

One thing worth noting is map-underscore-to-camel-case: true, which turns on camel-case mapping. It is very handy: if a column in the database is named user_name, the corresponding entity attribute can be defined as userName (it can even be written as username). Without this setting, columns whose names differ from the attribute names would not be mapped.
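As a quick illustration (a hypothetical minimal entity, not the course source), the mapping would line up like this:

// Hypothetical entity for a table with columns id, user_name, password.
// With map-underscore-to-camel-case enabled, the user_name column maps to the userName field automatically.
public class User {

    private Long id;
    private String userName; // mapped from the user_name column
    private String password; // mapped from the password column

    // getters and setters omitted
}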

3. xml-based integration

To use the original XML approach, you need to create a new mapper XML file. In the configuration above we already defined the path for the XML files: classpath:mapper/*.xml, so create a mapper folder under the resources directory and create the mapper file in it.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace=".">
  <resultMap id="BaseResultMap" type=".">

    <id column="id" jdbcType="BIGINT" property="id" />
    <result column="user_name" jdbcType="VARCHAR" property="username" />
    <result column="password" jdbcType="VARCHAR" property="password" />
  </resultMap>

   <select id="getUserByName" resultType="User" parameterType="String">
       select * from user where user_name = #{username}
  </select>
</mapper>

This is the same as when integrating with Spring: the namespace attribute points to the corresponding Mapper interface, the resultMap's type points to the corresponding entity class (User), and inside it each table column is mapped to an entity attribute. Here we write a SQL statement that queries a user by username.

The entity class has id, username and password; I won't post its code here, you can download the source and check it out. Now just declare an interface method in the Mapper interface:

User getUserByName(String username);

Omitting the service code in the middle, let's write a Controller to test it:

@RestController
public class TestController {

    @Resource
    private UserService userService;

    @RequestMapping("/getUserByName/{name}")
    public User getUserByName(@PathVariable String name) {
        return userService.getUserByName(name);
    }
}

Launch the project and type http://localhost:8080/getUserByName/CSDN in your browser to query the user whose username is CSDN from the database table (just insert a couple of rows beforehand):

{"id":2,"username":"CSDN","password":"123456"}

One thing to note here: how does Spring Boot find this Mapper? One way is to add the @Mapper annotation on the Mapper interface, but this has a drawback: when there are many mappers, every interface needs the @Mapper annotation. An easier way is to add the @MapperScan annotation on the startup class to scan all mappers under a package, as follows:

@SpringBootApplication
@MapperScan(".")
public class Course10Application {

	public static void main(String[] args) {
		SpringApplication.run(Course10Application.class, args);
	}
}

With that, all mappers under the specified package will be scanned.

4. Annotation-based integration

Annotation-based integration eliminates the XML mapper file; MyBatis mainly provides the @Select, @Insert, @Update and @Delete annotations. These four annotations are used a lot and are very simple: just follow the annotation with the corresponding SQL statement. For example:

@Select("select * from user where id = #{id}")
User getUser(Long id);

This is equivalent to writing the SQL statement in an XML file, which removes the need for the XML file. But there is a question: what if there are two parameters? In that case we need the @Param annotation to specify how each parameter maps, as follows:

@Select("select * from user where id = #{id} and user_name=#{name}")
User getUserByIdAndName(@Param("id") Long id, @Param("name") String username);

As you can see, the parameter names specified by @Param must be the same as the ones used in #{} in the SQL; if they differ, the mapping won't work. You can test this through the controller yourself; the interface is in the source code, so I won't post the test code and result here.

There is one more problem to pay attention to. Usually our entity classes are generated from the table definitions by a code-generation tool, so the entity attributes basically correspond to the table columns, at least in camel-case form, and since we turned on camel-case mapping in the configuration above, the columns map correctly. But what if they don't match? One solution is the @Results annotation.

@Select("select * from user where id = #{id}")
@Results({
        @Result(property = "username", column = "user_name"),
        @Result(property = "password", column = "password")
})
User getUser(Long id);

The @Result annotations inside @Results specify the mapping between each attribute and its column, which solves the problem described above.

Of course, we can also combine XML with annotations; we currently use this mixed approach in real projects, because sometimes XML is more convenient and sometimes annotations are. For example, for the problem above, if the resultMap is already defined in XML, we can use the @ResultMap annotation instead of @Results, as follows:

@Select("select * from user where id = #{id}")
@ResultMap("BaseResultMap")
User getUser(Long id);

Where does the value in the @ResultMap annotation come from? It corresponds to the id of the resultMap defined in the XML file:

<resultMap id="BaseResultMap" type=".">

This combination of XML and annotations is also very common, and it reduces a lot of code, because the XML file can be generated with an automatic generation tool instead of being typed out by hand.

5. Summary

This lesson systematically walked through integrating MyBatis with Spring Boot, both XML-based and annotation-based, using hands-on configuration, and for the annotation approach it covered the common problems and their solutions, which has strong practical value. In real projects it is recommended to decide which approach to use based on the actual situation; generally XML and annotations are used together.

Course source code download address:Poke me to download

Lesson 11: Spring Boot Transaction Configuration Management

1. Transaction-related

Scenario: when we develop enterprise applications, data operations are executed in sequence, and all sorts of unpredictable problems can occur online; any step may throw an exception, which prevents the subsequent operations from completing. At that point, because the business logic did not finish correctly, the earlier database operations are no longer reliable, and the data needs to be rolled back.

The role of a transaction is to make sure every user operation is reliable: every step in the transaction must succeed, and as soon as an exception occurs everything rolls back to the state before the transaction started. This is easy to understand with transfers or ticket purchases: the whole process must complete for the operation to count; it cannot stop halfway with the money deducted from one account but never arriving in the other.

Transaction management is one of the most commonly used features of the Spring Boot framework. In real development we basically add a transaction whenever the service layer handles business logic; of course, sometimes the scenario doesn't need one (for example, when inserting rows into a table that don't affect each other, we insert as many as we can, and we don't roll back everything already inserted just because one row fails).

2. Spring Boot transaction configuration

2.1 Dependency Import

To use transactions in Spring Boot, import the same MyBatis starter and MySQL-related dependencies as before:

<dependency>
	<groupId>org.mybatis.spring.boot</groupId>
	<artifactId>mybatis-spring-boot-starter</artifactId>
	<version>1.3.2</version>
</dependency>

After importing this dependency, Spring Boot automatically injects a DataSourceTransactionManager; no other configuration is needed, and we can use the @Transactional annotation directly. The MyBatis configuration was explained in the previous lesson, so reuse the same configuration here.

2.2 Testing of transactions

We start by inserting a piece of data into the database table:

id | user_name  | password
1  | Ni Shengwu | 123456

Then we write an insertion mapper:

public interface UserMapper {

    @Insert("insert into user (user_name, password) values (#{username}, #{password})")
    Integer insertUser(User user);
}

OK, let's test the transaction processing in Spring Boot, in the service layer, we manually throw an exception to simulate the actual exception, and then observe whether the transaction has been rolled back, if there is no new record in the database, then it means that the transaction is rolled back successfully.

@Service
public class UserServiceImpl implements UserService {

    @Resource
    private UserMapper userMapper;

    @Override
    @Transactional
    public void insertUser(User user) {
        // Insert the user information
        userMapper.insertUser(user);
        // Manually throw an exception
        throw new RuntimeException();
    }
}

Let's test it:

@RestController
public class TestController {

    @Resource
    private UserService userService;

    @PostMapping("/adduser")
    public String addUser(@RequestBody User user) throws Exception {
        if (null != user) {
            userService.insertUser(user);
            return "success";
        } else {
            return "false";
        }
    }
}

Call the interface with Postman: because the exception is thrown inside the method, the transaction rolls back. Refresh the database and you will see no record was added, which shows the transaction took effect. Transactions are that simple, and usually there isn't much trouble when using them, but it doesn't end there......

3. Summary of frequently asked questions

As shown above, using transactions in Spring Boot is very simple: the @Transactional annotation solves the problem. That said, in real projects there are a lot of small pitfalls waiting for us, ones we don't notice while writing the code. Under normal circumstances they are hard to spot, and when the project has grown and suddenly breaks one day, troubleshooting takes a lot of effort.

In this subsection I summarize the transaction-related details that often come up in real projects; I hope readers can apply them to their own projects and benefit from them.

3.1 The exception isn't "caught"

The first case is that the exception is not "caught", so the transaction does not roll back. We may have considered exceptions in the business code, or the IDE prompted us to throw them, but here is the thing to note: throwing an exception does not automatically mean the transaction will roll back. Let's look at an example:

@Service
public class UserServiceImpl implements UserService {

    @Resource
    private UserMapper userMapper;

    @Override
    @Transactional
    public void insertUser2(User user) throws Exception {
        // Insert the user information
        userMapper.insertUser(user);
        // Manually throw an exception
        throw new SQLException("Database exception");
    }
}

Looking at the code above there is nothing obviously wrong: we manually throw a SQLException to simulate a database error. Since an exception is thrown, the transaction should roll back, but in fact it does not. You can call the controller interface in my source code with Postman and you will find that the user record is still inserted.

So what's the problem? Spring's default transaction rule is to roll back only on RuntimeException and Error. The RuntimeException thrown in the earlier example is fine, but the SQLException thrown here does not trigger a rollback. For non-runtime (checked) exceptions, if you want the transaction to roll back, specify the exception with the rollbackFor attribute of @Transactional, for example @Transactional(rollbackFor = Exception.class). So in real projects, be sure to specify rollbackFor.
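For completeness, a minimal sketch of the corrected version of the method above (same userMapper assumed): with rollbackFor specified, the checked SQLException now rolls the transaction back.

@Transactional(rollbackFor = Exception.class)
public void insertUser2(User user) throws Exception {
    userMapper.insertUser(user);
    // Exception.class covers checked exceptions too, so this SQLException now triggers a rollback
    throw new SQLException("Database exception");
}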

3.2 The exception gets "eaten"

The title sounds funny: how can an exception be eaten? Back to real projects: when handling exceptions we either throw them up for the caller to handle, or try...catch them and deal with them on the spot. Because of such a try...catch, the exception gets "eaten" and the transaction cannot roll back. Look at the example above again, with a small change to the code:

@Service
public class UserServiceImpl implements UserService {

    @Resource
    private UserMapper userMapper;

    @Override
    @Transactional(rollbackFor = Exception.class)
    public void insertUser3(User user) {
        try {
            // Insert the user information
            userMapper.insertUser(user);
            // Manually throw an exception
            throw new SQLException("Database exception");
        } catch (Exception e) {
            // Exception-handling logic
        }
    }
}

Readers can call the controller interface in my source code with Postman: the user record is still inserted, meaning the transaction did not roll back when the exception was thrown. This detail is harder to spot than the previous pitfall, because our instinct easily leads us to try...catch code; once something breaks, troubleshooting is laborious. So when writing code, think carefully about such details and avoid digging a hole for yourself.

So how do you resolve it? Throw the exception up to the caller and don't "eat" it inside the transactional method.
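If you still want the try...catch for logging, here is a hedged sketch of two ways to keep the rollback (not the course source code): rethrow after handling, or mark the current transaction for rollback with Spring's TransactionAspectSupport.

@Transactional(rollbackFor = Exception.class)
public void insertUser3(User user) throws Exception {
    try {
        userMapper.insertUser(user);
        throw new SQLException("Database exception");
    } catch (Exception e) {
        // Option 1: log the exception, then rethrow so the transaction aspect still sees it
        // Option 2: swallow it but explicitly mark the transaction for rollback:
        // TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
        throw e;
    }
}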

3.3 The scope of the transaction

The transaction's scope is a deeper pitfall than the two above! I write about it because I ran into it in a real project. I won't simulate the full scenario in this course, but I'll write a demo so you remember the pitfall; when you meet concurrency problems in your own code, you will know to watch out for it, and then this lesson has been worth it.

I'll write a demo:

@Service
public class UserServiceImpl implements UserService {

    @Resource
    private UserMapper userMapper;

    @Override
    @Transactional(rollbackFor = Exception.class)
    public synchronized void insertUser4(User user) {
        // The actual operations in practice……
        userMapper.insertUser(user);
    }
}

As you can see, because of concurrency concerns I added the synchronized keyword to the business-layer method. Here is a practical scenario: the database should hold only one record per user. When an insert request arrives, it first checks whether that user already exists; if it does, update, otherwise insert. In theory the database should always hold a single record per user, and two identical records should never appear.

However, during load testing the problem above did occur: there were indeed two records for the same user in the database, and the analysis showed the cause lies in the relation between the transaction's scope and the lock's scope.

As the method above shows, the transaction is declared on the method, which means it starts when the method begins and is committed when it finishes. synchronized doesn't help, because the scope of the transaction is larger than the scope of the lock: after the locked section finishes, the lock is released while the transaction has not yet committed, and at that moment another thread comes in and sees the same database state as the first thread did. Also, because MySQL InnoDB's default isolation level is repeatable read (within one transaction, SELECT reflects the state at the start of the transaction), the second thread's transaction starts before the first thread has committed, so it reads data that hasn't been updated yet, performs the insert as well, and produces the duplicate (dirty) data.

This problem can be avoided in two ways: first, remove the transaction (not recommended); second, add a lock around the call to the service method, making sure the lock's scope is larger than the transaction's scope.
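A minimal sketch of the second option (names are illustrative, not the course source): synchronize at the caller, so the lock is released only after the transactional method has returned and committed, and drop synchronized from the service method itself.

@RestController
public class UserLockController {

    @Resource
    private UserService userService;

    private final Object lock = new Object();

    @PostMapping("/adduser4")
    public String addUser4(@RequestBody User user) {
        // The transaction inside insertUser4 starts and commits within this synchronized block,
        // so the lock scope is now larger than the transaction scope
        synchronized (lock) {
            userService.insertUser4(user);
        }
        return "success";
    }
}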

4. Summary

This lesson summarized how to use transactions in Spring Boot: just add the @Transactional annotation, which is simple and convenient. More importantly, it summarized three pitfalls you may run into in real projects. Transactions are fine until something goes wrong, and then the problem is hard to troubleshoot, so I hope these three points of attention help you in development.

Course source code download address:Poke me to download

Lesson 12: Using Listeners in Spring Boot

1. Introduction to the listener

What is a web listener? A web listener is a special kind of class in the Servlet specification that helps developers listen for specific events in a web application, such as the creation and destruction of the ServletContext, HttpSession and ServletRequest, or the creation, modification and removal of attributes on them, so that processing can be added before and after those events.

2. Spring Boot listener use

Web listeners are useful in many scenarios, for example listening to the servlet context to initialize some data, listening to the HTTP session to count how many users are online, and listening to the client's servlet request to collect access information. In this section we learn how to use listeners in Spring Boot through these three practical scenarios.

2.1 Listening to Servlet Context Objects

Listening to the servlet context object can be used to initialize cached data. What does that mean? Take a very common scenario: when a user opens the home page of a site, the page usually shows some information that stays mostly unchanged, yet that information comes from the database. If every click has to hit the database, that's acceptable with few users, but with a very large number of users it puts a huge load on the database.

Since most of this home-page data is rarely updated, we can cache it and serve every click from the cache, which speeds up the home page and reduces the load on the server. To be a bit more flexible, you can also add a timer that refreshes the cache periodically, similar to how the ranking on a CSDN personal blog home page changes; see the sketch right after this paragraph.
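As a rough sketch of that timer idea (assuming the UserService shown below and scheduling enabled with @EnableScheduling on the application class; not part of the course source), a scheduled task could refresh the cached value periodically:

// Hypothetical scheduled refresher for the application-scope cache
@Component
public class HomePageCacheRefresher {

    @Resource
    private UserService userService;

    @Resource
    private ServletContext servletContext;

    // Re-query the data every 10 minutes and overwrite the copy cached in the application scope
    @Scheduled(fixedRate = 10 * 60 * 1000)
    public void refreshHomePageCache() {
        servletContext.setAttribute("user", userService.getUser());
    }
}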

Let's write a demo for this feature; in practice you can adapt the code to the relevant logic of your own project. First, write a Service to simulate querying data from the database:

@Service
public class UserService {

    /**
     * Get user information
     * @return
     */
    public User getUser() {
        // In practice, the information will be queried from the database according to the specific business scenario.
        return new User(1L, "Ni Shengwu", "123456");
    }
}

Then write a listener that implements the ApplicationListener<ContextRefreshedEvent> interface and overrides the onApplicationEvent method, which receives a ContextRefreshedEvent. If we want our preloaded resources to be refreshed whenever the application context is loaded or refreshed, we can listen for ContextRefreshedEvent. Here's how:

/**
 * Use ApplicationListener to initialize some data into the application scope
 * @author shengwu ni
 * @date 2018/07/05
 */
@Component
public class MyServletContextListener implements ApplicationListener<ContextRefreshedEvent> {

    @Override
    public void onApplicationEvent(ContextRefreshedEvent contextRefreshedEvent) {
        // First get the application context
        ApplicationContext applicationContext = contextRefreshedEvent.getApplicationContext();
        // Get the corresponding service
        UserService userService = applicationContext.getBean(UserService.class);
        User user = userService.getUser();
        // Get the ServletContext (application scope) and put the queried information into it
        ServletContext application = ((WebApplicationContext) applicationContext).getServletContext();
        application.setAttribute("user", user);
    }
}

As the comments describe: first get the application context from the ContextRefreshedEvent, then get the UserService bean from it (in a real project you could also get other beans and call your own business code to fetch data), and finally store the result in the application scope. When the front end requests this data, we take it straight from the application scope instead of hitting the database. Below is a Controller that reads the user information directly from the application scope for testing.

@RestController
@RequestMapping("/listener")
public class TestController {

    @GetMapping("/user")
    public User getUser(HttpServletRequest request) {
        ServletContext application = request.getServletContext();
        return (User) application.getAttribute("user");
    }
}

Start the project and type http://localhost:8080/listener/user in your browser; if the user information comes back normally, the data has been cached successfully. Note, however, that the application scope lives in memory, which is memory-intensive; I'll go into more detail about caching when we get to Redis later in this course.

2.2 Listening to the HttpSession object

Another common use of listeners is to listen to the session object in order to count the number of online users. Many developers have their own websites, and listening to the session to get the current online count is a very common scenario. Here is how to do it.

/**
 * Use an HttpSessionListener to count the number of online users
 * @author shengwu ni
 * @date 2018/07/05
 */
@Component
public class MyHttpSessionListener implements HttpSessionListener {

    private static final Logger logger = LoggerFactory.getLogger(MyHttpSessionListener.class);

    /**
     * Number of users online
     */
    public Integer count = 0;

    @Override
    public synchronized void sessionCreated(HttpSessionEvent httpSessionEvent) {
        logger.info("A new user is online!");
        count++;
        httpSessionEvent.getSession().getServletContext().setAttribute("count", count);
    }

    @Override
    public synchronized void sessionDestroyed(HttpSessionEvent httpSessionEvent) {
        logger.info("A user went offline.");
        count--;
        httpSessionEvent.getSession().getServletContext().setAttribute("count", count);
    }
}

As you can see, the listener implements the HttpSessionListener interface and overrides the sessionCreated and sessionDestroyed methods. sessionCreated receives an HttpSessionEvent and adds 1 to the current user count; sessionDestroyed does the opposite, so I won't repeat it. Then we write a Controller to test it.

@RestController
@RequestMapping("/listener")
public class TestController {

    /**
     * Get the current number of online users (this method has a bug)
     * @param request
     * @return
     */
    @GetMapping("/total")
    public String getTotalUser(HttpServletRequest request) {
        Integer count = (Integer) request.getSession().getServletContext().getAttribute("count");
        return "Currently online: " + count;
    }
}

This controller simply reads the current user count. Start the server and type localhost:8080/listener/total in a browser: the result is 1. Open another browser and request the same address: the count is 2, which is fine. But if you close one browser and open it again, it should in theory still be 2, while the actual result is 3. The reason is that the session destruction method was not executed (watch the log output in the console); when the browser is reopened, the server cannot find the user's original session and creates a new one. So how do we fix it? We can modify the controller method above:

@GetMapping("/total2")
public String getTotalUser(HttpServletRequest request, HttpServletResponse response) {
    Cookie cookie;
    try {
        // particle marking the following noun as a direct objectsessionIdRecording in the browser
        cookie = new Cookie("JSESSIONID", (().getId(), "utf-8"));
        ("/");
        //set upcookiesell-by date2sky,set up长一点
        ( 48*60 * 60);
        (cookie);
    } catch (UnsupportedEncodingException e) {
        ();
    }
    Integer count = (Integer) ().getServletContext().getAttribute("count");
    return "Currently online:" + count;
}
  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • 7
  • 8
  • 9
  • 10
  • 11
  • 12
  • 13
  • 14
  • 15
  • 16

The logic here is to make the server remember the original session: the original sessionId is recorded in the browser, and the next time the browser is opened it passes this sessionId along, so the server does not create a new session. Restart the server and test again in the browser; the problem above is avoided.

2.3 Listening to the client's ServletRequest object

Using a listener to collect the user's access information is relatively simple: implement the ServletRequestListener interface and then read the information from the request object. Here's an example:

/**
 * Use a ServletRequestListener to get access information
 * @author shengwu ni
 * @date 2018/07/05
 */
@Component
public class MyServletRequestListener implements ServletRequestListener {

    private static final Logger logger = LoggerFactory.getLogger(MyServletRequestListener.class);

    @Override
    public void requestInitialized(ServletRequestEvent servletRequestEvent) {
        HttpServletRequest request = (HttpServletRequest) servletRequestEvent.getServletRequest();
        logger.info("session id: {}", request.getRequestedSessionId());
        logger.info("request url: {}", request.getRequestURL());

        request.setAttribute("name", "Ni Shengwu");
    }

    @Override
    public void requestDestroyed(ServletRequestEvent servletRequestEvent) {

        logger.info("request ends");
        HttpServletRequest request = (HttpServletRequest) servletRequestEvent.getServletRequest();
        logger.info("The value of name saved in the request scope: {}", request.getAttribute("name"));

    }

}

This is relatively simple, so I won't go into it again. Next, you can write a Controller to test it.

@GetMapping("/request")
public String getRequestInfo(HttpServletRequest request) {
    ("requestListenerThe initialization of thenamedigital:" + ("name"));
    return "success";
}
  • 1
  • 2
  • 3
  • 4
  • 5

3. Spring Boot custom event listener

In real projects we often need to define our own events and listeners to fit the business scenario. For instance, with microservices there is a common case: when microservice A finishes some logic, it needs to notify microservice B to process something else, or to sync data over to B. We can define a custom event and a listener for it; once the event is picked up in microservice A, we notify B to handle the corresponding logic.

3.1 Custom events

A custom event needs to extend ApplicationEvent. In the event we define a User object to simulate the data, and the constructor takes the User object to initialize it. Here is the custom event:

/**
 * Custom event
 * @author shengwu ni
 * @date 2018/07/05
 */
public class MyEvent extends ApplicationEvent {

    private User user;

    public MyEvent(Object source, User user) {
        super(source);
        this.user = user;
    }

    // getters and setters omitted
}

3.2 Customizing Listeners

Next, define a listener for the MyEvent event defined above; a custom listener only needs to implement the ApplicationListener interface, as follows:

/**
 * Custom listener that listens for MyEvent events
 * @author shengwu ni
 * @date 2018/07/05
 */
@Component
public class MyEventListener implements ApplicationListener<MyEvent> {
    @Override
    public void onApplicationEvent(MyEvent myEvent) {
        // Get the information from the event
        User user = myEvent.getUser();
        // Handle the event; in a real project this could notify another microservice or run other logic
        System.out.println("Username: " + user.getUsername());
        System.out.println("Password: " + user.getPassword());

    }
}

Then override the onApplicationEvent method with the custom MyEvent as its parameter; because the event carries the User object (which stands in for the real data to process, as modeled below), we can use the information in that object.

OK, with the event and listener defined, the event has to be published manually so the listener can pick it up. When to publish depends on the actual business scenario; for this article I write a trigger like this:

/**
 * UserService
 * @author shengwu ni
 */
@Service
public class UserService {

    @Resource
    private ApplicationContext applicationContext;

    /**
     * Publish the event
     * @return
     */
    public User getUser2() {
        User user = new User(1L, "Ni Shengwu", "123456");
        // Publish the event
        MyEvent event = new MyEvent(this, user);
        applicationContext.publishEvent(event);
        return user;
    }
}

Inject the ApplicationContext into the service; after the business code has run, publish the MyEvent event manually through the ApplicationContext so our custom listener can catch it and run the logic written in the listener.

Finally, write an interface in the Controller to test it:

@GetMapping("/request")
public String getRequestInfo(HttpServletRequest request) {
    ("requestListenerThe initialization of thenamedigital:" + ("name"));
    return "success";
}
  • 1
  • 2
  • 3
  • 4
  • 5

Type http://localhost:8080/listener/publish in your browser and watch the console: if the username and password are printed, the custom listener has taken effect.

4. Summary

This lesson systematically introduced how listeners work and how to use them in Spring Boot, walking through three common cases with real practical value. Finally it explained how to define custom events and listeners in a project and, combined with common microservice scenarios, gave concrete code that can be applied to real projects; I hope readers digest it carefully.

Course source code download address:Poke me to download

Lesson 13: Using Interceptors in Spring Boot

The principle behind interceptors is simple: they are an application of AOP that intercepts requests for dynamic resources in the backend, i.e. requests to the controller layer. The most common use is to decide whether a user has permission to access the backend; there are also more advanced uses, such as combining an interceptor with WebSocket to intercept WebSocket requests and handle them accordingly. Interceptors do not intercept static resources: Spring Boot's default static directory is resources/static, and the static pages, js, css, images and so on under it are not intercepted (this also depends on how the configuration is implemented; in some cases they do get intercepted, as I point out below).

1. Quick use of the interceptor

Using an interceptor is simple and takes only two steps: define the interceptor and register it. When registering, Spring Boot 2.0 and later differs from earlier versions, and I'll focus on the pitfall that can appear here.

1.1 Defining Interceptors

To define an interceptor, simply implement the HandlerInterceptor interface, which is the ancestor of all custom interceptors and of the interceptors Spring Boot provides, so let's first look at it. The interface has three methods: preHandle(……), postHandle(……) and afterCompletion(……).

preHandle(……): executed after a URL has been matched to a controller method but before that method runs. Its boolean return value decides whether the request continues: true lets it through, false stops further execution.
postHandle(……): executed after the controller method has run but before the DispatcherServlet renders the view, so the ModelAndView parameter of this method can still be modified here.
afterCompletion(……): as the name suggests, executed after the whole request has been processed (including view rendering); it is the place for resource cleanup, and it only runs if preHandle executed successfully and returned true.

Knowing the interface, next customize an interceptor.

/**
 * Custom interceptor
 * @author shengwu ni
 * @date 2018/08/03
 */
public class MyInterceptor implements HandlerInterceptor {

    private static final Logger logger = LoggerFactory.getLogger(MyInterceptor.class);

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {

        HandlerMethod handlerMethod = (HandlerMethod) handler;
        Method method = handlerMethod.getMethod();
        String methodName = method.getName();
        logger.info("==== Intercepted method: {}, executed before the method runs ====", methodName);
        // Return true to continue; return false to cancel the current request
        return true;
    }

    @Override
    public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler, ModelAndView modelAndView) throws Exception {
        logger.info("Executed after the controller method returns, but the view has not been rendered yet");
    }

    @Override
    public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) throws Exception {
        logger.info("The whole request has been processed and the DispatcherServlet has rendered the view; cleanup can be done here");
    }
}

OK, the interceptor is now defined; the next step is to register (configure) it.

1.2 Configuring Interceptors

Before Spring Boot 2.0 we would extend the WebMvcConfigurerAdapter class and override its addInterceptors method to register interceptors. Since Spring Boot 2.0 that class is deprecated (it can still be used); one replacement is to extend WebMvcConfigurationSupport, as follows:

@Configuration
public class MyInterceptorConfig extends WebMvcConfigurationSupport {

    @Override
    protected void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(new MyInterceptor()).addPathPatterns("/**");
        super.addInterceptors(registry);
    }
}

We override the addInterceptors method, add our custom interceptor, and use addPathPatterns to specify which requests to intercept; here we intercept everything. That completes the registration. Next write a Controller to test it:

@Controller
@RequestMapping("/interceptor")
public class InterceptorController {

    @RequestMapping("/test")
    public String test() {
        return "hello";
    }
}

To keep it simple, the hello view just outputs "hello interceptor". Start the project, type localhost:8080/interceptor/test in the browser and look at the console log:

==== Intercepted method: test, executed before the method runs ====
Executed after the controller method returns, but the view has not been rendered yet
The whole request has been processed and the DispatcherServlet has rendered the view; cleanup can be done here

You can see that the interceptor is in effect and you can see the order in which it is executed.

1.3 Addressing the interception of static resources

The definition and registration of the interceptor are done, but is there really no problem? In fact, with the configuration above you will find a defect: static resources are intercepted too. Put an image or an html file in the resources/static/ directory, start the project and try to access it directly; you will see it cannot be reached.

That is, although Spring Boot 2.0 deprecated WebMvcConfigurerAdapter, extending WebMvcConfigurationSupport causes the default static resource handling to be lost again, so we have to expose the static resources manually.

How do we expose them? In the MyInterceptorConfig class, besides overriding addInterceptors, also override addResourceHandlers to expose the static resources:

/**
 * Specify that static resources are not intercepted; otherwise extending WebMvcConfigurationSupport makes them inaccessible
 * @param registry
 */
@Override
protected void addResourceHandlers(ResourceHandlerRegistry registry) {
    registry.addResourceHandler("/**").addResourceLocations("classpath:/static/");
    super.addResourceHandlers(registry);
}

With this in place, restart the project and the static resources are accessible again. If you like to dig deeper, it doesn't stop here: the approach above does solve the problem, but there is a more convenient configuration.

Instead of extending WebMvcConfigurationSupport, implement the WebMvcConfigurer interface and override addInterceptors, adding the custom interceptor as follows:

@Configuration
public class MyInterceptorConfig implements WebMvcConfigurer {
    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        // Implementing WebMvcConfigurer does not cause static resources to be intercepted
        registry.addInterceptor(new MyInterceptor()).addPathPatterns("/**");
    }
}

This is very convenient: implementing the WebMvcConfigurer interface does not intercept Spring Boot's default static resources.

Both ways work; readers interested in the finer differences can dig further. Roughly: extending WebMvcConfigurationSupport suits projects with separated front and back ends, where the backend does not need to serve static resources (so there is no need to expose them); implementing WebMvcConfigurer suits projects that are not split, because the backend needs to serve images, css, js files and so on.

2. Examples of the use of interceptors

2.1 Determine whether the user has logged in or not

We can all build a basic login feature: either write the user into the session, or generate a token for each user; the second is a bit better. With the token approach, after a successful login every request carries the user's token, while requests without a token are not logged in, so the server can check for the token parameter to decide whether the user is logged in and thereby implement the interception. Let's modify the preHandle method as follows:

@Override
public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {

    HandlerMethod handlerMethod = (HandlerMethod) handler;
    Method method = handlerMethod.getMethod();
    String methodName = method.getName();
    logger.info("==== Intercepted method: {}, executed before the method runs ====", methodName);

    // Determine whether the user is logged in; a logged-in user normally carries a token
    String token = request.getParameter("token");
    if (null == token || "".equals(token)) {
        logger.info("The user is not logged in and does not have permission to proceed. Please log in.");
        return false;
    }

    // Return true to continue; return false to cancel the current request
    return true;
}

Restart the project and type localhost:8080/interceptor/test in the browser: the console log shows the request is blocked. If you type localhost:8080/interceptor/test?token=123 instead, it goes through normally.

2.2 Cancel Intercept Operation

Building on the above, suppose I want to intercept all URL requests that start with /admin; that prefix is added in the interceptor registration. But in real projects there may be requests that also start with /admin yet must not be intercepted, such as /admin/login, which would require extra configuration (see the sketch below). So, can we have something like a switch: wherever interception is not needed, flip the switch and get a flexible, pluggable effect?
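For the /admin/login case specifically, the registration API itself can already exclude paths; a minimal sketch (the paths are illustrative, not from the course source):

@Override
public void addInterceptors(InterceptorRegistry registry) {
    registry.addInterceptor(new MyInterceptor())
            // intercept everything under /admin ...
            .addPathPatterns("/admin/**")
            // ... except the login endpoint
            .excludePathPatterns("/admin/login");
}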

Yes: we can define an annotation dedicated to turning off interception. If a controller method should not be intercepted, just add our custom annotation to that method. First define the annotation:

/**
 * This annotation specifies that a method should not be intercepted
 */
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface UnInterception {
}

Then add this annotation to the target controller method, and add the opt-out logic to the interceptor's preHandle method as follows:

@Override
public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {

    HandlerMethod handlerMethod = (HandlerMethod) handler;
    Method method = handlerMethod.getMethod();
    String methodName = method.getName();
    logger.info("==== Intercepted method: {}, executed before the method runs ====", methodName);

    // Through the Method we can read the custom annotation on the method and use it to decide whether to intercept
    // @UnInterception is our custom annotation
    UnInterception unInterception = method.getAnnotation(UnInterception.class);
    if (null != unInterception) {
        return true;
    }
    // Return true to continue; return false to cancel the current request
    return true;
}

The controller method itself can be found in the source code. Restart the project and type http://localhost:8080/interceptor/test2?token=123 in the browser to verify that methods carrying this annotation are not intercepted.
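For reference, a hypothetical sketch of what such a controller method might look like (the real one is in the course source):

@UnInterception
@RequestMapping("/test2")
public String test2() {
    // Carries @UnInterception, so the interceptor lets the request through without further checks
    return "hello";
}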

3. Summary

This section covered interceptors in Spring Boot: creating one, registering it, and the effect of the registration style on static resources. After Spring Boot 2.0 there are two ways to register interceptors; choose according to your situation. Finally, two common usage scenarios were given; I hope readers digest them carefully and master interceptors.

Course source code download address:Poke me to download

Lesson 14: Integrating Redis in Spring Boot

1. Introduction to Redis

Redis is a non-relational database (NoSQL). NoSQL stores data as key-value pairs and, unlike traditional relational databases, does not necessarily follow some of their basic requirements, such as the SQL standard, ACID properties and table structure. Databases of this kind are mainly non-relational, distributed, open source and horizontally scalable.
Typical NoSQL scenarios include highly concurrent reads and writes, efficient storage and access of massive data, and high scalability and availability.
In Redis, keys are strings, while values can be strings, hashes, lists, sets and sorted sets (zset). All of these types support push/pop, add/remove, intersection and union and many richer operations, and Redis also supports several kinds of sorting. For efficiency the data is cached in memory, and Redis can also periodically write updates to disk or append the modifications to a log file. What are the benefits of Redis? As a simple example, look at the following diagram:

Redis使用场景

The Redis cluster stays in sync with MySQL: data is fetched from Redis first, and only when Redis does not have it (or is down) does the request fall back to MySQL, so the site does not go down with it. For more about Redis and its usage scenarios, search Google or Baidu; I won't go into detail here.

2. Redis Installation

In this course Redis is installed on a CentOS 7 virtual machine in VMware; if you have your own Aliyun server you can install Redis there instead. As long as you can ping the IP of the cloud host or virtual machine and open the corresponding port (or turn off the firewall), you can reach Redis. The installation steps are as follows:

  • Install the gcc compiler

Redis needs to be compiled during installation, so gcc must be installed first. Aliyun hosts usually have gcc installed by default, but on a fresh virtual machine you need to install it yourself:

yum install gcc-c++
  • Download redis

There are two ways to get the installation package: download it from the official website (https://redis.io) and copy it onto the CentOS machine, or download it directly with wget:

wget http://download.redis.io/releases/redis-3.2.8.tar.gz

If you have not installed wget before, you can install it with the following command:

yum install wget
  • unzip and install

Extract the installation package:

tar -vzxf redis-3.2.8.tar.gz

Then move the extracted redis-3.2.8 folder to /usr/local/ (software is usually installed under /usr/local). Enter the /usr/local/redis-3.2.8/ folder and run the make command to complete the installation.
If make fails, try the following commands:

make MALLOC=libc
make install
  • Modify the configuration file

After the installation succeeds, the configuration file needs some changes, including the IPs allowed to connect, running in the background, setting a password and so on.
Open the redis configuration file: vi redis.conf
In command mode type /bind to find the bind setting (press n for the next match), and set bind to 0.0.0.0 so that any server can access Redis, i.e.:

bind 0.0.0.0

In the same way, change daemonize to yes (the default is no) so Redis runs in the background.
Uncomment the requirepass line and set the password to 123456 (choose your own password).

  • Starting redis

In the redis-3.2.8 directory, specify the configuration file you just modified to start redis:

redis-server ./redis.conf

Start the redis client again:

redis-cli

Since we set a password, after starting the client type auth 123456 to authenticate and access the client.
Then test it by inserting a value into Redis:

set name CSDN

Then get the name.

get name

If CSDN is fetched normally, there is no problem.

3. Spring Boot integration Redis

3.1 Dependency Import

Integrating Redis with Spring Boot is very easy: just import the Redis starter dependency. Here's how:

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<!-- Alibaba fastjson -->
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>1.2.35</version>
</dependency>

Alibaba's fastjson is imported here because we will store an entity later, and it is easier to convert the entity to a JSON string before storing it.

3.2 Redis Configuration

After importing the dependencies, configure Redis in the configuration file:

server:
  port: 8080
spring:
  # redis configuration
  redis:
    database: 5
    # redis host address; change it to your own
    host: 192.168.48.190
    port: 6379
    password: 123456
    timeout: 5000
    jedis:
      pool:
        # maximum number of idle connections in the pool, default 8
        max-idle: 500
        # minimum number of idle connections in the pool, default 0
        min-idle: 50
        # -1 means no limit; when the pool has allocated maxActive jedis instances, it is exhausted
        max-active: 1000
        # maximum wait time for an available connection in milliseconds; default -1 means never time out; beyond this a JedisConnectionException is thrown
        max-wait: 2000

3.3 Introduction to common api

Spring Boot's support for Redis is already very good, and the rich API is enough for daily development. Here I introduce a few of the most commonly used operations; I hope you explore and research the other APIs on your own.

There are two Redis templates: RedisTemplate and StringRedisTemplate. We don't use RedisTemplate here: it lets us operate on objects, and those objects are usually stored in a JSON-like form, but it uses Redis's default internal (JDK) serializer, which leaves hard-to-read content in Redis. We could define our own serialization, but that is a bit of a hassle, so we use StringRedisTemplate, which works with strings: we convert entity classes to JSON strings before storing and convert them back when reading. That is why I imported Alibaba's fastjson above.
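If you do want the RedisTemplate route instead, here is a hedged sketch of a custom serializer configuration (class and bean names are illustrative; it uses the Jackson-based serializer that ships with spring-data-redis rather than fastjson):

@Configuration
public class RedisConfig {

    // Store keys as plain strings and values as JSON instead of the default JDK serialization
    @Bean
    public RedisTemplate<String, Object> jsonRedisTemplate(RedisConnectionFactory factory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new GenericJackson2JsonRedisSerializer());
        return template;
    }
}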

3.3.1 redis:string type

Create a new RedisService and inject the StringRedisTemplate. Calling stringRedisTemplate.opsForValue() returns a ValueOperations object, through which you can read and write string values in the redis database. The following is an example:

@Service
public class RedisService {

    @Resource
    private StringRedisTemplate stringRedisTemplate;

    /**
     * set redis: string type
     * @param key key
     * @param value value
     */
    public void setString(String key, String value){
        ValueOperations<String, String> valueOperations = stringRedisTemplate.opsForValue();
        valueOperations.set(key, value);
    }

    /**
     * get redis: string type
     * @param key key
     * @return
     */
    public String getString(String key){
        return stringRedisTemplate.opsForValue().get(key);
    }
}

This object operates on strings. We can also store entity classes; we just need to convert the entity class to a JSON string first. Here's a test:

@RunWith(SpringRunner.class)
@SpringBootTest
public class Course14ApplicationTests {

    private static final Logger logger = LoggerFactory.getLogger(Course14ApplicationTests.class);

    @Resource
    private RedisService redisService;

    @Test
    public void contextLoads() {
        // Test the string type of redis
        redisService.setString("weichat", "Programmer's Private Room");
        logger.info("My WeChat public account is: {}", redisService.getString("weichat"));

        // If it's an entity, we can convert it to a JSON string with the fastjson tool
        User user = new User("CSDN", "123456");
        redisService.setString("userInfo", JSON.toJSONString(user));
        logger.info("User info: {}", redisService.getString("userInfo"));
    }
}

Start redis first, then run this test case and observe the log printed on the console as follows:

My WeChat public account is: Programmer's Private Room
User info: {"password":"123456","username":"CSDN"}

3.3.2 redis:hash type

The hash type is actually the same as string, except that it has two keys; calling stringRedisTemplate.opsForHash() returns a HashOperations object. For example, if we want to store order information, all orders can be placed under one "order" key, and the order entities of different users can be distinguished by the user's id, which acts as the second key.

@Service
public class RedisService {

    @Resource
    private StringRedisTemplate stringRedisTemplate;

    /**
     * set redis: hash type
     * @param key key
     * @param filedKey field key
     * @param value value
     */
    public void setHash(String key, String filedKey, String value){
        HashOperations<String, Object, Object> hashOperations = stringRedisTemplate.opsForHash();
        hashOperations.put(key, filedKey, value);
    }

    /**
     * get redis: hash type
     * @param key key
     * @param filedkey field key
     * @return
     */
    public String getHash(String key, String filedkey){
        return (String) stringRedisTemplate.opsForHash().get(key, filedkey);
    }
}

As you can see, hash is no different from string, it just takes one more parameter; operating redis from Spring Boot is very simple and convenient. Let's test it out:

@RunWith(SpringRunner.class)
@SpringBootTest
public class Course14ApplicationTests {

    private static final Logger logger = LoggerFactory.getLogger(Course14ApplicationTests.class);

    @Resource
    private RedisService redisService;

    @Test
    public void contextLoads() {
        // Test the hash type of redis (same User entity as in the previous test)
        User user = new User("CSDN", "123456");
        redisService.setHash("user", "name", JSON.toJSONString(user));
        logger.info("user name: {}", redisService.getHash("user", "name"));
    }
}
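To connect this back to the order scenario described at the start of this subsection, a usage sketch might look like the following; the Order class and the user id "1001" are purely illustrative and not part of the course code:

// All orders live under the single "order" key; each user's order is a field keyed by the user id
Order order = new Order();   // illustrative entity
redisService.setHash("order", "1001", JSON.toJSONString(order));

// Read it back and turn the JSON string into an entity again
String json = redisService.getHash("order", "1001");
Order restored = JSON.parseObject(json, Order.class);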

3.3.3 redis:list type

Calling stringRedisTemplate.opsForList() returns a ListOperations object for the redis list type. A list is a simple list of strings that supports pushing elements from the left or from the right, and a single list can hold at most 2^32 - 1 elements.

@Service
public class RedisService {

    @Resource
    private StringRedisTemplate stringRedisTemplate;

    /**
     * set redis: list type
     * @param key key
     * @param value value
     * @return
     */
    public long setList(String key, String value){
        ListOperations<String, String> listOperations = stringRedisTemplate.opsForList();
        return listOperations.leftPush(key, value);
    }

    /**
     * get redis: list type
     * @param key key
     * @param start start
     * @param end end
     * @return
     */
    public List<String> getList(String key, long start, long end){
        return stringRedisTemplate.opsForList().range(key, start, end);
    }
}

As you can see, these api all follow the same pattern, which makes them easy to remember and use. I won't expand on the details of every api; you can look them up in the api documentation yourself, and in most cases the parameters and return values already tell you what an api does. To test:

@RunWith(SpringRunner.class)
@SpringBootTest
public class Course14ApplicationTests {

    private static final Logger logger = LoggerFactory.getLogger(Course14ApplicationTests.class);

    @Resource
    private RedisService redisService;

    @Test
    public void contextLoads() {
        // Test the list type of redis
        redisService.setList("list", "football");
        redisService.setList("list", "basketball");
        List<String> valList = redisService.getList("list", 0, -1);
        for(String value : valList){
            logger.info("The list contains: {}", value);
        }
    }
}

4. Summary

This section described the scenarios where redis is used, its installation, and the detailed steps for integrating redis in Spring Boot. In real projects, redis is usually used as a cache: when querying the database, we first look in redis; if the data is there, we take it from redis, and if not, we query the database and write the result back into redis so that it is available next time. Updates and deletes also need to be synchronized to redis. Redis is used a lot in high-concurrency scenarios.
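As a rough sketch of the cache-aside pattern described above (the UserMapper and its selectById method are assumptions for illustration, not part of the course code):

@Service
public class UserCacheService {

    @Resource
    private RedisService redisService;
    @Resource
    private UserMapper userMapper;   // hypothetical mapper used only for this sketch

    public User getUserById(String id) {
        // 1. Look in redis first
        String cached = redisService.getString("user:" + id);
        if (cached != null) {
            return JSON.parseObject(cached, User.class);
        }
        // 2. Cache miss: query the database
        User user = userMapper.selectById(id);   // hypothetical query method
        if (user != null) {
            // 3. Write it back to redis so the next read hits the cache
            redisService.setString("user:" + id, JSON.toJSONString(user));
        }
        return user;
    }
}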

Course source code download address: Poke me to download

Lesson 15: Integrating ActiveMQ in Spring Boot

1. Introduction to JMS and ActiveMQ

1.1 What is JMS

Baidu's explanation:

JMS, the Java Message Service application program interface, is a Java platform API for Message Oriented Middleware (MOM), which is used to send messages for asynchronous communication between two applications or in a distributed system. The Java Message Service is a platform-independent API, and most MOM providers support JMS.

JMS is only a set of interfaces; different vendors and open source organizations provide their own implementations, and ActiveMQ, produced by Apache, is one of them. There are several object models in JMS:

Connection Factory: ConnectionFactory
JMS connection: Connection
JMS session: Session
JMS Purpose: Destination
JMS Producer: Producer
JMS Consumer: Consumer
Two types of JMS messages: peer-to-peer and publish/subscribe.

It can be seen that JMS is actually somewhat similar to JDBC, in that JDBC is an API that can be used to access a number of different relational databases, whereas JMS provides the same vendor-independent access methods to message sending and receiving services. In this article, we will mainly use ActiveMQ.
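To make the object model above concrete, here is a minimal sketch using the plain JMS 1.1 API with ActiveMQ's connection factory; the broker URL and queue name are assumptions for illustration:

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PlainJmsDemo {
    public static void main(String[] args) throws JMSException {
        // Connection factory -> connection -> session, as in the object model above
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Destination, producer and consumer
        Destination destination = session.createQueue("demo.queue");
        MessageProducer producer = session.createProducer(destination);
        producer.send(session.createTextMessage("hello jms"));

        MessageConsumer consumer = session.createConsumer(destination);
        TextMessage message = (TextMessage) consumer.receive(1000);
        System.out.println("Received: " + message.getText());

        connection.close();
    }
}

Spring Boot hides most of this plumbing behind JmsMessagingTemplate and @JmsListener, which is what the rest of this lesson uses.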

1.2 ActiveMQ

ActiveMQ is Apache's powerful open source message bus. ActiveMQ fully supports the JMS 1.1 and J2EE 1.4 specifications, and although the JMS specification has been around for a long time, it still plays a special role in today's Java EE applications. ActiveMQ is used for asynchronous message processing: the sender of a message does not have to wait for the receiver to process it and reply, and does not even have to care whether the message was sent successfully.

Asynchronous messages have two main destinations, queues (queue) and topics (topic), queues are used for peer-to-peer message communication and topics are used for publish/subscribe message communication. This section focuses on learning how to use these two forms of messages in Spring Boot.

2. ActiveMQ installation

To use ActiveMQ, you first need to download it from the ActiveMQ official website.
The version used in this course is apache-activemq-5.15.3. After downloading and unpacking it you get a folder named apache-activemq-5.15.3, and that's it, it is installed, very simple, out of the box. Inside the folder you will see an activemq-all-5.15.3.jar; this jar can be added to a project directly, but we don't need it since we use maven.

Before using ActiveMQ, you have to start it first. The unzipped directory contains a bin directory with two subdirectories, win32 and win64; pick the one that matches your machine and run the startup script inside it to start ActiveMQ.
After startup is complete, type http://127.0.0.1:8161/admin/ in your browser to access the ActiveMQ console, with the username and password admin/admin, as follows:

activemq

We can see that there are two options, Queues and Topics, which are the view windows for peer-to-peer messages and publish/subscribe messages respectively. What are peer-to-peer messages and publish/subscribe messages?

Peer-to-peer messaging: a message producer produces a message and publishes it to a queue, and a message consumer then takes it out of the queue and consumes it. Note that once a message has been consumed, it is no longer stored in the queue, so a consumer cannot consume a message that has already been consumed. A queue supports multiple consumers, but each message can only be consumed by one of them.

Publish/Subscribe Messages: A message producer (publish) publishes a message to a topic and multiple message consumers (subscribe) consume the message. Unlike the peer-to-peer approach, a message published to a topic will be consumed by all subscribers. The implementation is analyzed below.

3. ActiveMQ integration

3.1 Dependency Import and Configuration

To integrate ActiveMQ in Spring Boot, you need to import the following starter dependencies:

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-activemq</artifactId>
</dependency>

Then in the configuration file, do a little configuration on activemq:

spring:
  activemq:
    # activemq url
    broker-url: tcp://localhost:61616
    in-memory: true
    pool:
      # If this is set to true, the activemq-pool dependency needs to be added, otherwise auto-configuration fails and JmsMessagingTemplate cannot be injected
      enabled: false

3.2 Queue and Topic Creation

First of all, we need to create the two kinds of message destinations, Queue and Topic. We put their creation into an ActiveMqConfig configuration class, as follows:

/**
 * activemq configuration
 * @author shengwu ni
 */
@Configuration
public class ActiveMqConfig {
    /**
     * Publish/subscribe mode topic name
     */
    public static final String TOPIC_NAME = "activemq.topic";   // placeholder name; the original value was lost in the source
    /**
     * Peer-to-peer mode queue name
     */
    public static final String QUEUE_NAME = "activemq.queue";   // placeholder name; the original value was lost in the source

    @Bean
    public Destination topic() {
        return new ActiveMQTopic(TOPIC_NAME);
    }

    @Bean
    public Destination queue() {
        return new ActiveMQQueue(QUEUE_NAME);
    }
}

You can see that the Queue and Topic destinations are created with new ActiveMQQueue and new ActiveMQTopic respectively, each given the name of the corresponding destination. These two beans can then be injected directly as components elsewhere.

3.3 Message Sending Interface

In Spring Boot, we just need to inject the JmsMessagingTemplate template to send messages quickly, as follows:

/**
 * Message producer
 * @author shengwu ni
 */
@Service
public class MsgProducer {

    @Resource
    private JmsMessagingTemplate jmsMessagingTemplate;

    public void sendMessage(Destination destination, String msg) {
        jmsMessagingTemplate.convertAndSend(destination, msg);
    }
}

In the convertAndSend method, the first parameter is the destination the message is sent to, and the second parameter is the message content.

3.4 Peer-to-peer message production and consumption

3.4.1 Peer-to-peer message production

The production of the message is done in the Controller. Since we have already declared the Queue bean above, we can inject it directly into the Controller and then call the sendMessage method of the message producer above to produce a message.

/**
 * ActiveMQ controller
 * @author shengwu ni
 */
@RestController
@RequestMapping("/activemq")
public class ActiveMqController {

    private static final Logger logger = LoggerFactory.getLogger(ActiveMqController.class);

    @Resource
    private MsgProducer producer;
    @Resource
    private Destination queue;

    @GetMapping("/send/queue")
    public String sendQueueMessage() {

        logger.info("===Start sending peer-to-peer message===");
        producer.sendMessage(queue, "Queue: hello activemq!");
        return "success";
    }
}

3.4.2 Consumption of peer-to-peer messages

Consuming peer-to-peer messages is simple: as long as we specify the destination, the jms listener keeps listening for incoming messages and consumes them as soon as they arrive.

/**
 * Message consumer
 * @author shengwu ni
 */
@Service
public class QueueConsumer {

    /**
     * Receive peer-to-peer messages
     * @param msg
     */
    @JmsListener(destination = ActiveMqConfig.QUEUE_NAME)
    public void receiveQueueMsg(String msg) {
        System.out.println("The message received is: " + msg);
    }
}

As you can see, the @JmsListener annotation specifies the destination to listen on; inside the receiving method we can then process the message according to our business needs.

3.4.3 Put it to the test

Launch the project and type http://localhost:8081/activemq/send/queue in your browser. If the following log appears, the message was sent and consumed successfully.

The message received is: Queue: hello activemq!

3.5 Production and consumption of publish/subscribe messages

3.5.1 Publish/Subscribe Message Production

As with peer-to-peer messages, we inject the topic and call the producer's sendMessage method to send a subscription message, as follows, without further elaboration:

@RestController
@RequestMapping("/activemq")
public class ActiveMqController {

    private static final Logger logger = LoggerFactory.getLogger(ActiveMqController.class);

    @Resource
    private MsgProducer producer;
    @Resource
    private Destination topic;

    @GetMapping("/send/topic")
    public String sendTopicMessage() {

        logger.info("===Start sending subscription message===");
        producer.sendMessage(topic, "Topic: hello activemq!");
        return "success";
    }
}

3.5.2 Consumption of publish/subscribe messages

Consuming publish/subscribe messages differs from peer-to-peer: subscription messages support multiple consumers. Also, Spring Boot defaults to peer-to-peer messaging, so topics don't work out of the box and we need to add a configuration entry in the configuration file:

spring:
  jms:
    pub-sub-domain: true

If this configuration is false, it is peer-to-peer messaging, which is Spring Boot's default. The setting above solves the problem, but with it the peer-to-peer messages discussed earlier can no longer be consumed properly. You can't have both, so this is not a good solution.

The better solution is to define a container factory ourselves. The @JmsListener annotation only receives queue messages by default; to receive topic messages we need to set its containerFactory, which we also add in the ActiveMqConfig configuration class above:

/**
 * activemq configuration
 *
 * @author shengwu ni
 */
@Configuration
public class ActiveMqConfig {
    // Omit the rest

    /**
     * By default the @JmsListener annotation only receives queue messages; to receive topic messages, the containerFactory must be set
     */
    @Bean
    public JmsListenerContainerFactory topicListenerContainer(ConnectionFactory connectionFactory) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        // Equivalent to configuring spring.jms.pub-sub-domain=true in the configuration file
        factory.setPubSubDomain(true);
        return factory;
    }
}

With this configuration in place, we specify this container factory in the @JmsListener annotation when consuming, and topic messages can then be consumed. The following is an example:

/**
 * Topic message consumer
 * @author shengwu ni
 */
@Service
public class TopicConsumer1 {

    /**
     * Receive subscription messages
     * @param msg
     */
    @JmsListener(destination = ActiveMqConfig.TOPIC_NAME, containerFactory = "topicListenerContainer")
    public void receiveTopicMsg(String msg) {
        System.out.println("The message received is: " + msg);
    }

}

The containerFactory attribute is set to the topicListenerContainer configured above. Since a topic message can be consumed by more than one consumer, you can copy this consumer class a few times and test them together; I won't post that code here, you can refer to my source code for testing.
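For reference, a second subscriber could simply be a copy of the class above; a minimal sketch (TopicConsumer2 is just an illustrative name):

@Service
public class TopicConsumer2 {

    /**
     * A second subscriber to the same topic; every published message is delivered to both consumers
     */
    @JmsListener(destination = ActiveMqConfig.TOPIC_NAME, containerFactory = "topicListenerContainer")
    public void receiveTopicMsg(String msg) {
        System.out.println("The message received is: " + msg);
    }
}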

3.5.3 Put it to the test

Start the project and type http://localhost:8081/activemq/send/topic in your browser. If the following log appears, the message was sent and consumed successfully.

The message received is: Topic: hello activemq!
The message received is: Topic: hello activemq!

4. Summary

This lesson introduced the concepts of JMS and ActiveMQ, the installation and startup of ActiveMQ, and analyzed in detail the configuration, message production and message consumption of peer-to-peer and publish/subscribe messaging in Spring Boot. ActiveMQ is a powerful open source message bus that is very useful for asynchronous message processing, so I hope you digest it well.

Course source code download address: Poke me to download

Lesson 16: Integrating Shiro in Spring Boot

Shiro is a powerful, easy-to-use Java security framework, mainly used for easier authentication, authorization, encryption, session management and so on, can provide security for any application. This course introduces the authentication and authorization features of Shiro.

1. The three core components of Shiro

Shiro has three core components: Subject, SecurityManager and Realm. Let's first look at how they relate to each other.

(Figure: the relationship between the three core components)

  1. Subject: the authentication subject. It contains two pieces of information, Principals and Credentials. Let's see what these two are.

Principals: identities. This can be a username, an email address, a cell phone number, etc., used to identify the subject that is logging in;
Credentials: credentials. Common examples are passwords, digital certificates, and so on.

To put it plainly, this is the thing that needs to be authenticated. The most common case is the username and password: when a user logs in, Shiro needs to authenticate that identity, and that requires the Subject authentication subject.

  2. SecurityManager: the security manager. This is the core of the Shiro architecture and acts as an umbrella for all the other components inside Shiro. We usually configure a SecurityManager in our projects, while most of the developer's effort is spent on the Subject. When we interact with the Subject, it is actually the SecurityManager that does the security work behind the scenes.

  3. Realm: a Realm is a domain that bridges Shiro and the application's security data. When Shiro needs to interact with security data such as user accounts and access control rules, it looks it up in one or more Realms. We generally customize our own Realm, as described in more detail below.
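To give a feel for how these components are used from application code, here is a minimal sketch built only on Shiro's public API; the username, role and permission strings are illustrative:

import org.apache.shiro.SecurityUtils;
import org.apache.shiro.authc.UsernamePasswordToken;
import org.apache.shiro.subject.Subject;

public class SubjectDemo {

    public void demo() {
        // The Subject represents the current user; the SecurityManager works behind it
        Subject subject = SecurityUtils.getSubject();

        // Authentication: hand a token to the Subject, Shiro delegates to the configured Realm(s)
        subject.login(new UsernamePasswordToken("csdn1", "123456"));

        // Authorization checks after a successful login
        boolean isAdmin = subject.hasRole("admin");
        boolean canCreate = subject.isPermitted("user:create");
        System.out.println("admin? " + isAdmin + ", user:create? " + canCreate);

        // Log the current user out
        subject.logout();
    }
}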

2. Shiro Identity and Privilege Authentication

2.1 Shiro Identity Authentication

Let's analyze the Shiro authentication process and look at one of the official authentication diagrams:

(Figure: the authentication process)

Step 1: The application code calls the subject.login(token) method, passing in an AuthenticationToken instance that represents the end user's identity and credentials.

Step 2: The Subject instance delegates to the application's SecurityManager, which starts the actual authentication work. This is where the real authentication begins.

Steps 3, 4, 5: The SecurityManager then performs the security authentication against the concrete realm(s). As the figure shows, the realm can be customized (Custom Realm).

2.2 Shiro Privilege Authentication

Privilege authentication, also known as access control, is the control of who can access which resources in an application. The three most central elements in privilege authentication are: permissions, roles, and users.

Permission: the right to operate on a resource, such as accessing a page, or adding, modifying, deleting and viewing the data of a module;
Role: the role a user plays; one role can hold multiple permissions;
User: the user who accesses the system, i.e. the Subject authentication subject mentioned above.

The relationship between them can be represented by the following diagram:

(Figure: the relationship between users, roles and permissions)

A user can have multiple roles, and different roles can have different permissions or share the same permissions. For example, suppose there are three roles: role 1 and role 2 are ordinary roles, and role 3 is the administrator. Role 1 can only view information, role 2 can only add information, while the administrator can do both and can also delete information, and so on.

3. Spring Boot Integration with Shiro

3.1 Dependency Import

Integrating Shiro with Spring Boot 2.0.3 requires importing the following dependency:

<dependency>
    <groupId>org.apache.shiro</groupId>
    <artifactId>shiro-spring</artifactId>
    <version>1.4.0</version>
</dependency>

3.2 Initialization of Database Table Data

Three tables are involved here: a user table, a role table and a permission table. For a demo we could simply mock this data without creating tables at all, but to stay closer to a real project we still add mybatis and operate on the database. Below is the script for the database tables.

CREATE TABLE `t_role` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
  `rolename` varchar(20) DEFAULT NULL COMMENT 'role name',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8;

CREATE TABLE `t_user` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'user primary key',
  `username` varchar(20) NOT NULL COMMENT 'username',
  `password` varchar(20) NOT NULL COMMENT 'password',
  `role_id` int(11) DEFAULT NULL COMMENT 'foreign key to the role table',
  PRIMARY KEY (`id`),
  KEY `role_id` (`role_id`),
  CONSTRAINT `t_user_ibfk_1` FOREIGN KEY (`role_id`) REFERENCES `t_role` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8;

CREATE TABLE `t_permission` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
  `permissionname` varchar(50) NOT NULL COMMENT 'permission name',
  `role_id` int(11) DEFAULT NULL COMMENT 'foreign key to the role table',
  PRIMARY KEY (`id`),
  KEY `role_id` (`role_id`),
  CONSTRAINT `t_permission_ibfk_1` FOREIGN KEY (`role_id`) REFERENCES `t_role` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8;

Among them, t_user, t_role and t_permission store user information, role information and permission information, respectively. After the table is built, we insert some test data into the table.
t_user table:

| id | username | password | role_id |
| -- | -------- | -------- | ------- |
| 1  | csdn1    | 123456   | 1       |
| 2  | csdn2    | 123456   | 2       |
| 3  | csdn3    | 123456   | 3       |

t_role table:

| id | rolename |
| -- | -------- |
| 1  | admin    |
| 2  | teacher  |
| 3  | student  |

t_permission table:

| id | permissionname | role_id |
| -- | -------------- | ------- |
| 1  | user:*         | 1       |
| 2  | student:*      | 2       |

A word about the permissions here: user:* means any permission starting with user:, such as user:create. The * is a wildcard placeholder that we can define ourselves, as described in the Shiro configuration section below.
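As a small sketch of how such wildcard permissions behave at runtime (the permission strings mirror the table above; the check itself uses Shiro's standard Subject API):

import org.apache.shiro.SecurityUtils;
import org.apache.shiro.subject.Subject;

public class PermissionCheckDemo {

    public void demo() {
        Subject subject = SecurityUtils.getSubject();

        // For a user whose role carries the "user:*" permission,
        // any specific permission in the "user" namespace is implied:
        subject.isPermitted("user:create");   // true
        subject.isPermitted("user:delete");   // true

        // A permission from another namespace is not implied:
        subject.isPermitted("student:create");   // false for role 1 in the table above
    }
}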

3.3 Customizing the Realm

Once the database tables and data are in place, we start customizing our realm. A custom realm needs to extend the AuthorizingRealm class, which already encapsulates a lot of functionality (it ultimately derives from the Realm interface). After extending AuthorizingRealm, we need to override two methods:

doGetAuthenticationInfo() method: used to authenticate the currently logging-in user and obtain the authentication information
doGetAuthorizationInfo() method: used to grant roles and permissions to the successfully logged-in user

Specific implementation of the following, the relevant explanation I put in the comments of the code, which is more convenient and intuitive:

/**
 * Custom realm
 * @author shengwu ni
 */
public class MyRealm extends AuthorizingRealm {

    @Resource
    private UserService userService;

    @Override
    protected AuthorizationInfo doGetAuthorizationInfo(PrincipalCollection principalCollection) {
        // Get the username
        String username = (String) principalCollection.getPrimaryPrincipal();
        SimpleAuthorizationInfo authorizationInfo = new SimpleAuthorizationInfo();
        // Set the roles for this user; role information is stored in the t_role table
        // (the userService method names below are reconstructed, the originals were lost in the source)
        authorizationInfo.setRoles(userService.getRoles(username));
        // Set the permissions for this user; permission information is stored in the t_permission table
        authorizationInfo.setStringPermissions(userService.getPermissions(username));
        return authorizationInfo;
    }

    @Override
    protected AuthenticationInfo doGetAuthenticationInfo(AuthenticationToken authenticationToken) throws AuthenticationException {
        // Get the username from the token; if you don't know where the token comes from, leave it for now, it is explained below
        String username = (String) authenticationToken.getPrincipal();
        // Query the user from the database by username
        User user = userService.getByUsername(username);
        if (user != null) {
            // Store the current user in the session
            SecurityUtils.getSubject().getSession().setAttribute("user", user);
            // Pass the username and password in and return the authentication info
            AuthenticationInfo authcInfo = new SimpleAuthenticationInfo(user.getUsername(), user.getPassword(), "myRealm");
            return authcInfo;
        } else {
            return null;
        }
    }
}

As the two methods above show, authentication first looks up the user in the database by the username the user entered; the password is not involved in this step. In other words, even if the user typed a wrong password, the user record can still be found, and the user's real information is packaged into authcInfo and returned to Shiro. Shiro then compares that real information against the username and password submitted from the front end; only now is the password verified. If the verification passes, the user is logged in; otherwise Shiro jumps to the configured page. Similarly, authorization fetches the roles and permissions associated with the username from the database, packages them into authorizationInfo and returns them to Shiro.

3.4 Shiro Configuration

Now that the custom realm is written, it's time to configure Shiro. There are three main things to configure: the custom realm, the security manager SecurityManager, and the Shiro filter. These are listed below:

Configure a custom realm:

@Configuration
public class ShiroConfig {

    private static final Logger logger = LoggerFactory.getLogger(ShiroConfig.class);

    /**
     * Inject the custom realm
     * @return MyRealm
     */
    @Bean
    public MyRealm myAuthRealm() {
        MyRealm myRealm = new MyRealm();
        logger.info("====myRealm registration completed====");
        return myRealm;
    }
}

Configure the SecurityManager:

@Configuration
public class ShiroConfig {

    private static final Logger logger = LoggerFactory.getLogger(ShiroConfig.class);

    /**
     * Inject the security manager
     * @return SecurityManager
     */
    @Bean
    public SecurityManager securityManager() {
        // Add the custom realm
        DefaultWebSecurityManager securityManager = new DefaultWebSecurityManager(myAuthRealm());
        logger.info("====securityManager registration completed====");
        return securityManager;
    }
}

When configuring SecurityManager, you need to add the custom realm from above so that Shiro will walk to the custom realm.

Configure Shiro filters:

@Configuration
public class ShiroConfig {

    private static final Logger logger = LoggerFactory.getLogger(ShiroConfig.class);

    /**
     * Inject the Shiro filter
     * @param securityManager securityManager
     * @return ShiroFilterFactoryBean
     */
    @Bean
    public ShiroFilterFactoryBean shiroFilter(SecurityManager securityManager) {
        // Define the shiroFilterFactoryBean
        ShiroFilterFactoryBean shiroFilterFactoryBean = new ShiroFilterFactoryBean();

        // Set the custom securityManager
        shiroFilterFactoryBean.setSecurityManager(securityManager);

        // Set the default login url; it is visited when authentication fails
        shiroFilterFactoryBean.setLoginUrl("/login");
        // Set the url to jump to after a successful login
        shiroFilterFactoryBean.setSuccessUrl("/success");
        // Set the url for unauthorized access; it is visited when authorization fails
        shiroFilterFactoryBean.setUnauthorizedUrl("/unauthorized");

        // LinkedHashMap is ordered, so the interceptors are applied in the order they are configured
        Map<String, String> filterChainMap = new LinkedHashMap<>();

        // Urls that can be accessed anonymously; add your own as needed to release static resources etc. "anon" means no authentication required
        filterChainMap.put("/css/**", "anon");
        filterChainMap.put("/imgs/**", "anon");
        filterChainMap.put("/js/**", "anon");
        filterChainMap.put("/swagger-*/**", "anon");
        // The login url is released as well
        filterChainMap.put("/login", "anon");

        // "/user/admin" requires authentication; "authc" means authenticated
        filterChainMap.put("/user/admin*", "authc");
        // Urls starting with "/user/student" require the "admin" role
        filterChainMap.put("/user/student*/**", "roles[admin]");
        // Urls starting with "/user/teacher" require the "user:create" permission
        filterChainMap.put("/user/teacher*/**", "perms[\"user:create\"]");

        // Configure the logout filter
        filterChainMap.put("/logout", "logout");

        // Set the filterChainDefinitionMap on the shiroFilterFactoryBean
        shiroFilterFactoryBean.setFilterChainDefinitionMap(filterChainMap);
        logger.info("====shiroFilterFactoryBean registration completed====");
        return shiroFilterFactoryBean;
    }
}

Configuring the Shiro filter requires passing in the security manager, and as you can see it is one ring inside another: realm -> SecurityManager -> filter. In the filter we define a shiroFilterFactoryBean and set the SecurityManager on it. Combined with the code above, the main things to configure are:

The default login url: visited when authentication fails
The url to jump to after successful authentication
The url visited when authorization fails (unauthorized access)
The urls to be intercepted or released: these are placed in a map

From the code above we can see that different urls in the map have different permission requirements. Here is a summary of the commonly used filters:

| Filter       | Description |
| ------------ | ----------- |
| anon         | Open access; anonymous users or visitors can access directly |
| authc        | Authentication required |
| logout       | Logout; when executed, it jumps straight to the url set by setLoginUrl(), i.e. the login page |
| roles[admin] | One or more roles are required; with multiple parameters, write roles["admin,user"], and every listed role must be held for the check to pass |
| perms[user]  | One or more permissions are required; with multiple parameters, write perms["user,admin"], and every listed permission must be held for the check to pass |
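For example, a chain definition that requires several roles or permissions at once could be added to the filterChainMap above like this; the urls are illustrative:

// Both the "admin" and "user" roles are required for these urls (illustrative mapping)
filterChainMap.put("/user/manage*/**", "roles[admin,user]");
// Several permissions can be combined in the same way
filterChainMap.put("/user/report*/**", "perms[user:create,user:delete]");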

3.5 Authentication with Shiro

Now that Shiro is fully configured, let's start using it for authentication. We first design a few interfaces:

Interface 1: use http://localhost:8080/user/admin to verify identity authentication
Interface 2: use http://localhost:8080/user/student to verify role authentication
Interface 3: use http://localhost:8080/user/teacher to verify permission authentication
Interface 4: use http://localhost:8080/user/login to implement user login

Then comes the authentication process:

Process 1: access interface 1 directly (not yet logged in); authentication fails and we are redirected to the login page. Logging in calls interface 4, and at that point Shiro has saved the user information.
Process 2: Access interface 1 again (at this time the user has logged in), successful authentication, jump to the page, display user information.
Process 3: Access interface two and test whether the role authentication is successful.
Process 4: Access interface three and test whether the permission authentication is successful.

3.5.1 Identity, Role, and Privilege Authentication Interfaces

@Controller
@RequestMapping("/user")
public class UserController {

    /**
     * Authentication Test Interface
     * @param request
     * @return
     */
    @RequestMapping("/admin")
    public String admin(HttpServletRequest request) {
        Object user = request.getSession().getAttribute("user");
        return "success";
    }

    /**
     * Role Authentication Test Interface
     * @param request
     * @return
     */
    @RequestMapping("/student")
    public String student(HttpServletRequest request) {
        return "success";
    }

    /**
     * Privilege Authentication Test Interface
     * @param request
     * @return
     */
    @RequestMapping("/teacher")
    public String teacher(HttpServletRequest request) {
        return "success";
    }
}

These three interfaces are very simple: they just return the specified page. If authentication succeeds, the jump happens normally; if it fails, the request is redirected to the page configured in ShiroConfig above.

3.5.2 User Login Interface

@Controller
@RequestMapping("/user")
public class UserController {

    /**
     * User login interface
     * @param user user
     * @param request request
     * @return string
     */
    @PostMapping("/login")
    public String login(User user, HttpServletRequest request) {

        // Create a token from the username and password
        UsernamePasswordToken token = new UsernamePasswordToken(user.getUsername(), user.getPassword());
        // Get the subject (the authentication subject)
        Subject subject = SecurityUtils.getSubject();
        try{
            // Start the authentication; this step jumps into our custom realm
            subject.login(token);
            request.getSession().setAttribute("user", user);
            return "success";
        }catch(Exception e){
            e.printStackTrace();
            request.getSession().setAttribute("user", user);
            request.setAttribute("error", "Incorrect username or password!");
            return "login";
        }
    }
}

Let's look at this login interface. It first creates a token from the username and password passed in from the front end, then gets the authentication subject via SecurityUtils, and then calls subject.login(token) to start the authentication process. Note that the token we just created is passed in here; as the comment says, this step jumps into our custom realm and enters the doGetAuthenticationInfo method, so now you understand where that method's token parameter comes from. From there the authentication proceeds as analyzed above.

3.5.3 Put It to the Test

Finally, start the project and test it:
Request http://localhost:8080/user/admin in the browser to trigger identity authentication. Since we are not logged in at this point, we are redirected to the IndexController's /login interface and then to the login page. Log in with the username and password csdn1/123456, then request http://localhost:8080/user/student: role authentication is performed, and because the csdn1 user's role in the database is admin, it matches the configuration and the check passes. Next request http://localhost:8080/user/teacher: permission authentication is performed, and because csdn1's permission in the database is user:*, which satisfies the configured user:create, the check passes as well.

Next, click logout; the system logs us out and asks us to log in again. This time log in as csdn2 and repeat the steps above. Both the role check and the permission check now fail, because the role and permissions stored for csdn2 in the database do not match the configuration.

4. Summary

This section introduces the integration of Shiro security framework with Spring Boot. First, it introduces the three core components of Shiro and their roles; then it introduces Shiro's authentication, role authentication, and privilege authentication; finally, it combines the code and describes in detail how to integrate Shiro with Spring Boot and designs a set of test processes to analyze the workflow and principles of Shiro step by step, so that readers can more intuitively appreciate the whole workflow of Shiro. Shiro is widely used, and we hope that readers will master it and be able to apply it to real projects.

Course source code download address: Poke me to download

Lesson 17: Integrating Lucene in Spring Boot

1. Lucene and Full-Text Search

What is Lucene? Take a look at the Baidu encyclopedia:

Lucene is an open source library for full-text indexing and search, supported and provided by the Apache Software Foundation. Lucene provides a simple yet powerful application programming interface that can do full-text indexing and searching. Lucene is a mature, free open source tool in the Java environment and is currently, and has been in recent years, the most popular free Java information retrieval library. --Baidu Encyclopedia

1.1 Full text search

Since the concept of full-text search came up, let's first analyze what full-text search is; once you understand it, the principle behind Lucene is very simple.

What is full-text search? For example, to find a string in a file, the most direct idea is to scan from the beginning until it is found. For a file with little data that is perfectly practical, but for very large files it becomes a struggle. The same goes for finding which files contain a certain string: searching dozens of GB on a hard disk that way is obviously very inefficient.

The data in documents is unstructured data, that is, it has no structure to speak of. To solve the efficiency problem above, we first extract part of the information from the unstructured data and reorganize it so that it has some structure, and then search this structured data, which makes the search relatively fast. This is full-text search: the process of first building an index and then searching the index.

1.2 How Lucene builds indexes

So how is indexing done in Lucene? Let's say there are now two articles that read as follows:

Article 1 reads: Tom lives in Guangzhou, I live in Guangzhou too.
Article 2 reads: He once lived in Shanghai.

The first step is to pass the documents to a tokenizer, which splits the documents into words and removes punctuation and stop words. Stop words are words with no special meaning, such as a, the, too, etc. in English. After tokenization we get tokens, as follows:

The result of article 1 after tokenization: [Tom] [lives] [Guangzhou] [I] [live] [Guangzhou]
The result of article 2 after tokenization: [He] [lives] [Shanghai]

The tokens are then passed to the Linguistic Processor which, for English, typically changes letters to lowercase and reduces words to their root form, for example "lives" to "live" and "drove" to "drive". The result is terms, as follows:

Article 1 results after processing: [tom] [live] [guangzhou] [i] [live] [guangzhou]
Article 2 results after processing: [he] [live] [shanghai]
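To see roughly what this analysis stage produces in code, here is a minimal sketch using Lucene's StandardAnalyzer; note that StandardAnalyzer tokenizes, lowercases and drops stop words, but does not stem words to their root form (that would need a stemming analyzer such as EnglishAnalyzer):

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class AnalyzerDemo {
    public static void main(String[] args) throws Exception {
        Analyzer analyzer = new StandardAnalyzer();
        try (TokenStream ts = analyzer.tokenStream("contents",
                "Tom lives in Guangzhou, I live in Guangzhou too.")) {
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                // Prints the lowercased tokens with stop words removed
                System.out.println(term.toString());
            }
            ts.end();
        }
    }
}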

Finally the terms are passed to the indexing component (Indexer), which processes them into the following index structure:

| Keyword   | Article number [frequency] | Position |
| --------- | -------------------------- | -------- |
| guangzhou | 1[2]                       | 3,6      |
| he        | 2[1]                       | 1        |
| i         | 1[1]                       | 4        |
| live      | 1[2],2[1]                  | 2,5,2    |
| shanghai  | 2[1]                       | 3        |
| tom       | 1[1]                       | 1        |

The above is the core part of the Lucene index structure. Its keywords are arranged in character order, so Lucene can use binary search algorithms to quickly locate keywords. Lucene implements the above three columns as Term Dictionary, frequencies and positions respectively. The dictionary file not only stores each keyword, but also retains pointers to the frequency file and location file, through which you can find the frequency and location information of the keyword.
The search process first does a binary lookup in the dictionary to find the term, reads all the article numbers through the pointer into the frequency file, and returns the results; the position information then tells us where the term appears in a specific article. So building the index for the first time may be relatively slow, but afterwards searches are fast because the index does not have to be rebuilt every time.

Having understood how Lucene analyzes and indexes text, let's next integrate Lucene into Spring Boot and implement the indexing and searching functions.

2. Integration of Lucene in Spring Boot

2.1 Dependency Import

First you need to import Lucene's dependencies, which are several, as follows:

<!-- Lucene core package -->
<dependency>
	<groupId>org.apache.lucene</groupId>
	<artifactId>lucene-core</artifactId>
	<version>5.3.1</version>
</dependency>

<!-- Lucene query parsing package -->
<dependency>
	<groupId>org.apache.lucene</groupId>
	<artifactId>lucene-queryparser</artifactId>
	<version>5.3.1</version>
</dependency>

<!-- Standard analyzer (English) -->
<dependency>
	<groupId>org.apache.lucene</groupId>
	<artifactId>lucene-analyzers-common</artifactId>
	<version>5.3.1</version>
</dependency>

<!-- Highlighting support -->
<dependency>
	<groupId>org.apache.lucene</groupId>
	<artifactId>lucene-highlighter</artifactId>
	<version>5.3.1</version>
</dependency>

<!-- Chinese word segmentation support -->
<dependency>
	<groupId>org.apache.lucene</groupId>
	<artifactId>lucene-analyzers-smartcn</artifactId>
	<version>5.3.1</version>
</dependency>

The last dependency adds support for Chinese word segmentation, since English is supported by default. The highlighting dependency is there because at the end I want to perform a search and highlight the matched content, mimicking what current websites do, so you can apply it to real projects.

2.2 Quick Start

According to the analysis above, full-text search has two steps: first build the index, then search it. To walk through this process I create two new Java classes, one for indexing and one for searching.

2.2.1 Indexing

Let's prepare a couple of files and put them in the D:\lucene\data directory, then create a new Indexer class to implement the indexing function. First, initialize the standard analyzer and the index writer instance in the constructor.

public class Indexer {

    /**
     * The index writer instance
     */
    private IndexWriter writer;

    /**
     * Constructor: instantiate the IndexWriter
     * @param indexDir
     * @throws Exception
     */
    public Indexer(String indexDir) throws Exception {
        Directory dir = FSDirectory.open(Paths.get(indexDir));
        // Standard analyzer; it automatically removes stop words such as "a", "the", etc.
        Analyzer analyzer = new StandardAnalyzer();
        // Put the standard analyzer into the index writer configuration
        IndexWriterConfig config = new IndexWriterConfig(analyzer);
        // Instantiate the index writer
        writer = new IndexWriter(dir, config);
    }
}

The constructor receives the path of the folder where the index will be stored, constructs the standard analyzer (for English), and then uses the standard analyzer to instantiate the index writer object. The next step is to build the index; the explanations are in the code comments to make it easier to follow.

/**
 * Index all files in the specified directory
 * @param dataDir
 * @return
 * @throws Exception
 */
public int indexAll(String dataDir) throws Exception {
    // Get all the files under the path
    File[] files = new File(dataDir).listFiles();
    if (null != files) {
        for (File file : files) {
            // Call the indexFile method below to index each file
            indexFile(file);
        }
    }
    // Return the number of indexed files
    return writer.numDocs();
}

/**
 * Index the specified file
 * @param file
 * @throws Exception
 */
private void indexFile(File file) throws Exception {
    System.out.println("Path to index file: " + file.getCanonicalPath());
    // Call the getDocument method below to get the file's document
    Document doc = getDocument(file);
    // Add the doc to the index
    writer.addDocument(doc);
}

/**
 * Get the document and set each of its fields, similar to a row of records in a database
 * @param file
 * @return
 * @throws Exception
 */
private Document getDocument(File file) throws Exception {
    Document doc = new Document();
    // Start adding fields
    // Add the content
    doc.add(new TextField("contents", new FileReader(file)));
    // Add the file name and store this field in the index
    doc.add(new TextField("fileName", file.getName(), Field.Store.YES));
    // Add the file path
    doc.add(new TextField("fullPath", file.getCanonicalPath(), Field.Store.YES));
    return doc;
}

Now that the indexing code is written, let's write a main method in the class to test it:

public static void main(String[] args) {
    // Path where the index is stored
    String indexDir = "D:\\lucene";
    // Directory containing the files to be indexed
    String dataDir = "D:\\lucene\\data";
    Indexer indexer = null;
    int indexedNum = 0;
    // Record the indexing start time
    long startTime = System.currentTimeMillis();
    try {
        // Start building the index
        indexer = new Indexer(indexDir);
        indexedNum = indexer.indexAll(dataDir);
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            if (null != indexer) {
                // close() closes the IndexWriter (the method is not shown in the snippet above)
                indexer.close();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    // Record the indexing end time
    long endTime = System.currentTimeMillis();
    System.out.println("Indexing took " + (endTime - startTime) + " milliseconds");
    System.out.println("Total " + indexedNum + " files indexed");
}

I've put two tomcat-related files into D:\lucene\data. After running it, the console output is:

Path to index file: D:
Path to index file: D:
Indexing took 882 milliseconds
Total 2 files indexed

Then, in the D:\lucene directory, you can see some index files. These files must not be deleted; if they are deleted, the index has to be rebuilt, because without the index there is nothing to search against.

2.2.2 Retrieving content

Having indexed these two files above, we can next write the search program to look for specific words in these two files.

public class Searcher {

    public static void search(String indexDir, String q) throws Exception {

        // Open the path to search against, i.e. where the index is stored
        Directory dir = FSDirectory.open(Paths.get(indexDir));
        IndexReader reader = DirectoryReader.open(dir);
        // Build the IndexSearcher
        IndexSearcher searcher = new IndexSearcher(reader);
        // Standard analyzer; it automatically removes stop words such as "a", "the", etc.
        Analyzer analyzer = new StandardAnalyzer();
        // Query parser
        QueryParser parser = new QueryParser("contents", analyzer);
        // Parse the query string q into a Query object
        Query query = parser.parse(q);

        // Record the search start time
        long startTime = System.currentTimeMillis();
        // Run the query and keep the first 10 hits in docs
        TopDocs docs = searcher.search(query, 10);
        // Record the search end time
        long endTime = System.currentTimeMillis();
        System.out.println("Matching " + q + " took a total of " + (endTime - startTime) + " milliseconds");
        System.out.println("Query found " + docs.totalHits + " records");

        // Go through each hit
        for(ScoreDoc scoreDoc : docs.scoreDocs) {
            // scoreDoc.doc is the document id; fetch the document by it
            Document doc = searcher.doc(scoreDoc.doc);
            // fullPath is a field we defined when building the index; any stored field defined there can be read here
            System.out.println(doc.get("fullPath"));
        }
        reader.close();
    }
}

OK, the retrieval code is now written; the explanation of each step is in the code comments. Let's write a main method to test it:

public static void main(String[] args) {
    String indexDir = "D:\\lucene";
    // The string to query for
    String q = "security";
    try {
        search(indexDir, q);
    } catch (Exception e) {
        e.printStackTrace();
    }
}

We search for the string security. Run it and see what the console prints:

Matching security took a total of 23 milliseconds
Query found 1 record
D.

As you can see, it took 23 milliseconds to find the security string in the two files, and the path of the matching file is printed. The code above is commented in great detail and is complete enough to serve as a basis for real use.

2.3 Chinese Word Segmentation and Search Highlighting in Practice

We have now written the code to build the index and to search it, but in real projects we usually combine this with a page to display the query results: for example, when a keyword is searched, the relevant content is shown and the keyword itself is highlighted. This kind of requirement is very common in real projects, and most websites have this effect, so in this subsection we use Lucene to implement it.

2.3.1 Chinese Segmentation

We create a new ChineseIndexer class to build a Chinese index. The process is the same as for the English index; the difference is that it uses the Chinese analyzer. In addition, we don't read files here; we index a simulated string instead, because in real projects we mostly get text strings and then query them by keyword. The code is as follows:

public class ChineseIndexer {

    /**
     * The location of the index
     */
    private Directory dir;

    // Prepare the data used for testing
    // ids identify the documents
    private Integer ids[] = {1, 2, 3};
    private String citys[] = {"Shanghai", "Nanjing", "Qingdao"};
    private String descs[] = {
            "Shanghai is a bustling city.",
            "Nanjing is a city of culture. Nanjing, abbreviated as Ning, is the capital of Jiangsu Province, located in eastern China on the lower reaches of the Yangtze River, near the river and the sea. The city has 11 districts under its jurisdiction, with a total area of 6,597 square kilometers; in 2013 the built-up area was 752.83 square kilometers, with a resident population of 8,187,800, of which 6,591,000 are urban. 'Jiangnan is a beautiful place, Jinling an imperial state': Nanjing has a civilization history of more than 6,000 years, a city history of nearly 2,600 years and a capital history of nearly 500 years. It is one of the four great ancient capitals of China, known as the 'Ancient Capital of the Six Dynasties' and the 'Capital of the Ten Dynasties', an important birthplace of Chinese civilization, and has long been the political, economic and cultural center of southern China, with a deep cultural heritage and rich historical remains. Nanjing is also an important national center of science and education, known since ancient times as a city of culture and education, with the reputation of being the 'cultural hub of the world' and 'the first school in the Southeast'. As of 2013, Nanjing had 75 institutions of higher education, including 8 '211' universities, second only to Beijing and Shanghai; 25 national key laboratories, 169 national key disciplines, and 83 academicians of the two academies, all ranking third in China.",
            "Qingdao is a beautiful city."
    };

    /**
     * Generate the index
     * @param indexDir
     * @throws Exception
     */
    public void index(String indexDir) throws Exception {
        dir = FSDirectory.open(Paths.get(indexDir));
        // Call getWriter first to get the IndexWriter object
        IndexWriter writer = getWriter();
        for (int i = 0; i < ids.length; i++) {
            Document doc = new Document();
            // Index all the data above, identified by id, city and desc respectively
            doc.add(new IntField("id", ids[i], Field.Store.YES));
            doc.add(new StringField("city", citys[i], Field.Store.YES));
            doc.add(new TextField("desc", descs[i], Field.Store.YES));
            // Add the document
            writer.addDocument(doc);
        }
        // The documents are actually written when the writer is closed
        writer.close();
    }

    /**
     * Get an instance of IndexWriter
     * @return
     * @throws Exception
     */
    private IndexWriter getWriter() throws Exception {
        // Use the Chinese analyzer
        SmartChineseAnalyzer analyzer = new SmartChineseAnalyzer();
        // Assign the Chinese analyzer to the configuration used for writing the index
        IndexWriterConfig config = new IndexWriterConfig(analyzer);
        // Instantiate the index writer
        IndexWriter writer = new IndexWriter(dir, config);
        return writer;
    }

    public static void main(String[] args) throws Exception {
        new ChineseIndexer().index("D:\\lucene2");
    }
}

Here id, city and desc represent the id, the city name and the description of the city, and we use them as fields to build the index; later, when we fetch the content, we mainly fetch the city description. I deliberately made Nanjing's description a bit longer, because in the searches below different keywords will hit different parts of the text, which involves the notion of scoring.
Then run the main method, which saves the index into the D:\lucene2 directory.

2.3.2 Chinese Segmentation Query

The logic of the Chinese-segmented query is almost the same as the default query. The differences are that we need to mark the queried keywords in red bold, and we need to compute a scored fragment. What does that mean? For example, if I search for "Nanjing culture" and then for "Nanjing civilization", the two searches should return different snippets depending on where the keywords appear; we will test this below. Let's look at the code and its comments:

public class ChineseSearch {

    private static final Logger logger = LoggerFactory.getLogger(ChineseSearch.class);

    public static List<String> search(String indexDir, String q) throws Exception {

        // Get the path to be searched, that is, where the index is located
        Directory dir = FSDirectory.open(Paths.get(indexDir));
        IndexReader reader = DirectoryReader.open(dir);
        IndexSearcher searcher = new IndexSearcher(reader);
        // Use the Chinese analyzer
        SmartChineseAnalyzer analyzer = new SmartChineseAnalyzer();
        // Initialize the query parser with the Chinese analyzer
        QueryParser parser = new QueryParser("desc", analyzer);
        // Get the query object by parsing the String to be queried
        Query query = parser.parse(q);

        // Record the search start time
        long startTime = System.currentTimeMillis();
        // Start the query; fetch the first 10 hits, which are saved in docs
        TopDocs docs = searcher.search(query, 10);
        // Record the search end time
        long endTime = System.currentTimeMillis();
        logger.info("Matching {} took {} milliseconds in total", q, (endTime - startTime));
        logger.info("The query found {} records", docs.totalHits);

        // If no parameters are specified, the default is bold, i.e. <b></b>
        SimpleHTMLFormatter simpleHTMLFormatter = new SimpleHTMLFormatter("<b><font color=red>", "</font></b>");
        // Build a scorer from the query object; it scores the query results
        QueryScorer scorer = new QueryScorer(query);
        // Compute a fragment based on this scorer
        Fragmenter fragmenter = new SimpleSpanFragmenter(scorer);
        // Highlight the keywords in this fragment with the format initialized above
        Highlighter highlighter = new Highlighter(simpleHTMLFormatter, scorer);
        // Set the fragmenter used to produce the displayed fragment
        highlighter.setTextFragmenter(fragmenter);

        // Take out each query result
        List<String> list = new ArrayList<>();
        for (ScoreDoc scoreDoc : docs.scoreDocs) {
            // scoreDoc.doc is the docID; fetch the document by this docID
            Document doc = searcher.doc(scoreDoc.doc);
            logger.info("city:{}", doc.get("city"));
            logger.info("desc:{}", doc.get("desc"));
            String desc = doc.get("desc");

            // Highlight
            if (desc != null) {
                TokenStream tokenStream = analyzer.tokenStream("desc", new StringReader(desc));
                String summary = highlighter.getBestFragment(tokenStream, desc);
                logger.info("Highlighted desc:{}", summary);
                list.add(summary);
            }
        }
        reader.close();
        return list;
    }
}

Detailed comments are given for each step, so I won't repeat them here. Next, let's test the effect.

2.3.3 Put it to the test

Here we use thymeleaf to write a simple page to display the retrieved data and highlight it. In the controller we specify the indexed directory and the string to be queried, as follows:

@Controller
@RequestMapping("/lucene")
public class IndexController {

    @GetMapping("/test")
    public String test(Model model) {
        // The directory where the index is located
        String indexDir = "D:\\lucene2";
        // The string to query for
//        String q = "Nanjing civilization";
        String q = "Nanjing culture";
        try {
            List<String> list = ChineseSearch.search(indexDir, q);
            model.addAttribute("list", list);
        } catch (Exception e) {
            e.printStackTrace();
        }
        return "result";
    }
}

The controller returns the result page directly, which is used to display the data in the model:

<!DOCTYPE html>
<html lang="en" xmlns:th="http://www.thymeleaf.org">
<head>
    <meta charset="UTF-8">
    <title>Title</title>
</head>
<body>
<div th:each="desc : ${list}">
    <div th:utext="${desc}"></div>
</div>
</body>
</html>

Note that you cannot use th:text here, otherwise the html tags in the string are escaped instead of being rendered on the page. Now start the service, type http://localhost:8080/lucene/test in a browser and look at the result; first we search for "Nanjing culture":

南京文化

Then change the search keyword in the controller to "Nanjing civilization" and look at what is hit:

南京文明

As you can see, a scored fragment is computed for each keyword, which means that different keywords hit content in different places, and the keywords are then highlighted in the format we configured. The results also show that Lucene is quite smart about segmenting the keywords it matches, which is very useful in real projects.

3. Summary

This lesson first analyzed the theory behind full-text search in detail and then, combined with Lucene, systematically described the steps for integrating it into Spring Boot: first a quick, intuitive look at how Lucene builds an index and performs retrieval, followed by a concrete example of Chinese search, showing how widely Lucene can be applied to full-text search. Lucene is not difficult; it just involves quite a few steps. There is no need to memorize the code; take the project and adapt it to your actual situation.

Course source code download address: Poke me to download

Lesson 18: Building a Spring Boot Architecture for Real-World Project Development

In the previous lessons I mainly explained the technical points commonly used with Spring Boot. Not all of them will be used in every real project, because different projects use different technologies, but I hope you can master how to use them and can extend them according to the actual needs of your project.

You may be familiar with microcontrollers: a microcontroller has a minimum system, and once that minimum system is built, it can be extended on top of it. What we want to do in this lesson is build a "Spring Boot minimum system architecture". With this architecture in hand, you can extend it according to your actual needs.

When building an environment from scratch, the main points to consider are: a unified encapsulated data structure, an interface that can be debugged online, json handling, the use of a template engine (this article does not build pages, because most projects now separate the front end from the back end, but considering that some projects do not, thymeleaf is also included in the source code), integration of the persistence layer, an interceptor (also optional) and global exception handling. In general, once these things are included, a Spring Boot project environment is basically complete, and it can then be extended according to the specific situation.

Combining the previous lessons with the points above, this lesson walks you step by step through building a Spring Boot architecture that can be used in real project development. The structure of the whole project is shown in the figure below. While learning, it is best to follow along with my source code.

工程架构

1. Unified data encapsulation

Since the type of the encapsulated json data is not fixed, we need to use generics when defining the unified json structure. Its attributes include the data, a status code and a prompt message; further attributes can be added according to the actual business needs. In general, there should be a default return structure as well as one whose values the caller can specify:

/**
 * Uniform return object
 * @author shengwu ni
 * @param <T>
 */
public class JsonResult<T> {

    private T data;
    private String code;
    private String msg;

    /**
     * If no data is returned, the default status code is 0 and the message is: operation successful!
     */
    public JsonResult() {
        this.code = "0";
        this.msg = "Operation successful!";
    }

    /**
     * If no data is returned, you can manually specify a status code and prompt message
     * @param code
     * @param msg
     */
    public JsonResult(String code, String msg) {
        this.code = code;
        this.msg = msg;
    }

    /**
     * When there is data returned, the status code is 0 and the default message is: operation successful!
     * @param data
     */
    public JsonResult(T data) {
        this.data = data;
        this.code = "0";
        this.msg = "Operation successful!";
    }

    /**
     * When there is data returned, the status code is 0 and a manually specified prompt message is given
     * @param data
     * @param msg
     */
    public JsonResult(T data, String msg) {
        this.data = data;
        this.code = "0";
        this.msg = msg;
    }

    /**
     * Use the custom message enum as a parameter to pass the status code and prompt message
     * @param msgEnum
     */
    public JsonResult(BusinessMsgEnum msgEnum) {
        this.code = msgEnum.code();
        this.msg = msgEnum.msg();
    }

    // Omit the get and set methods
}

You can adjust the fields of this unified structure according to what your project needs.

2. json processing

There are several json processing tools, such as Alibaba's fastjson. However, fastjson cannot convert null values of some unknown types into empty strings; this may be a shortcoming of fastjson itself, and its extensibility is not great, although it is easy to use and widely adopted. In this lesson we mainly use the jackson that comes with Spring Boot and simply configure how it handles null, after which it can be used in the project.

/**
 * Jackson configuration
 * @author shengwu ni
 */
@Configuration
public class JacksonConfig {
    @Bean
    @Primary
    @ConditionalOnMissingBean(ObjectMapper.class)
    public ObjectMapper jacksonObjectMapper(Jackson2ObjectMapperBuilder builder) {
        ObjectMapper objectMapper = builder.createXmlMapper(false).build();
        // Serialize null values as empty strings
        objectMapper.getSerializerProvider().setNullValueSerializer(new JsonSerializer<Object>() {
            @Override
            public void serialize(Object o, JsonGenerator jsonGenerator, SerializerProvider serializerProvider) throws IOException {
                jsonGenerator.writeString("");
            }
        });
        return objectMapper;
    }
}

We won't test it separately here; we'll test it together once Swagger2 is configured below.
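That said, if you want a quick standalone check of the null handling outside the project, the following minimal sketch (my own addition, not part of the course source; the Dto class is just a throwaway example) applies the same null-to-empty-string trick directly to an ObjectMapper:

import java.io.IOException;

import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.JsonSerializer;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializerProvider;

public class NullToEmptyStringDemo {

    // Throwaway example DTO; "name" is deliberately left null
    public static class Dto {
        public String name;
        public Integer age = 26;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // Same null-value serializer as in JacksonConfig above
        mapper.getSerializerProvider().setNullValueSerializer(new JsonSerializer<Object>() {
            @Override
            public void serialize(Object o, JsonGenerator gen, SerializerProvider provider) throws IOException {
                gen.writeString("");
            }
        });
        // Should print {"name":"","age":26} instead of {"name":null,"age":26}
        System.out.println(mapper.writeValueAsString(new Dto()));
    }
}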

3. Swagger2 online debuggable interfaces

With swagger, developers no longer need to hand interface documentation to others; they just give them a swagger address and the API documentation is shown online. In addition, callers of the interfaces can test the interface data online, and likewise developers can use the swagger documentation to test their own interfaces while developing them, which is very convenient. To use swagger, we need to configure it:

/**
 * Swagger configuration
 * @author shengwu ni
 */
@Configuration
@EnableSwagger2
public class SwaggerConfig {

    @Bean
    public Docket createRestApi() {
        return new Docket(DocumentationType.SWAGGER_2)
                // Specify the method that builds the details of the api documentation: apiInfo()
                .apiInfo(apiInfo())
                .select()
                // Specify the package for which api interfaces are generated; here the controller package
                // is used so that all interfaces in the controllers are documented
                // ("com.example.controller" is a placeholder for your own controller package)
                .apis(RequestHandlerSelectors.basePackage("com.example.controller"))
                .paths(PathSelectors.any())
                .build();
    }

    /**
     * Build the details of the api documentation
     * @return
     */
    private ApiInfo apiInfo() {
        return new ApiInfoBuilder()
                // Set the page title
                .title("Spring Boot builds the architecture used in real project development")
                // Set the interface description
                .description("Learning Spring Boot with Wu, lesson 18")
                // Set the contact information
                .contact("Ni Shengwu, " + "WeChat public account: programmer's private kitchen")
                // Set the version
                .version("1.0")
                // Build
                .build();
    }
}

At this point we can already test: write a Controller with a static interface to verify the integration above.

@RestController
@Api(value = "User information interface")
public class UserController {

    @Resource
    private UserService userService;

    @GetMapping("/getUser/{id}")
    @ApiOperation(value = "Get user information by the user's unique identifier")
    public JsonResult<User> getUserInfo(@PathVariable @ApiParam(value = "user unique identifier") Long id) {
        User user = new User(id, "Ni Shengwu", "123456");
        return new JsonResult<>(user);
    }
}

Then start the project, type localhost:8080/swagger-ui.html into your browser to open the swagger documentation page, call the interface above, and you can see the returned json data.
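The User entity used by this controller (and by the mapper below) is not printed in the article; a minimal sketch consistent with how it is used there might look like the following (the field names are my assumptions based on the user_name and password columns):

public class User {

    private Long id;
    private String username;
    private String password;

    public User() {
    }

    public User(Long id, String username, String password) {
        this.id = id;
        this.username = username;
        this.password = password;
    }

    // Omit the get and set methods
}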

4. Persistence layer integration

Every project needs a persistence layer to interact with the database. Here we mainly integrate mybatis; the first step is to add the relevant settings to the configuration file:

# service port number
server:
  port: 8080

# Database address
datasource:
  url: localhost:3306/blog_test

spring:
  datasource: # database configuration
    driver-class-name: com.mysql.jdbc.Driver
    url: jdbc:mysql://${datasource.url}?useSSL=false&useUnicode=true&characterEncoding=utf-8&allowMultiQueries=true&autoReconnect=true&failOverReadOnly=false&maxReconnects=10
    username: root
    password: 123456
    hikari:
      maximum-pool-size: 10 # maximum connection pool size
      max-lifetime: 1770000

mybatis:
  # Package used for type aliases: all entity classes (set this to your own entity package)
  type-aliases-package: com.example.entity
  configuration:
    map-underscore-to-camel-case: true # map underscores to camel case
  mapper-locations: # location of the mapper xml files
    - classpath:mapper/*.xml

With the configuration done, we write the dao layer. In practice annotations are used more often because they are more convenient, but you can also use the xml approach, or even both at the same time. Here we mainly use the annotation approach; for the xml approach you can check the earlier lessons, and in practice decide according to the project:

public interface UserMapper {

    @Select("select * from user where id = #{id}")
    @Results({
            @Result(property = "username", column = "user_name"),
            @Result(property = "password", column = "password")
    })
    User getUser(Long id);

    @Select("select * from user where id = #{id} and user_name=#{name}")
    User getUserByIdAndName(@Param("id") Long id, @Param("name") String username);

    @Select("select * from user")
    List<User> getAll();
}

I won't write the service layer code in this article; you can learn it from my source code, since this section is about building an empty Spring Boot architecture. Finally, don't forget to add the mapper scanning annotation @MapperScan on the startup class, pointing it at the package where your mapper interfaces live; a hedged sketch follows.
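For example, a minimal sketch of the startup class (the class name and the mapper package are placeholders, not taken from the course source):

import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
@MapperScan("com.example.mapper") // placeholder: your own mapper package
public class Course18Application {

    public static void main(String[] args) {
        SpringApplication.run(Course18Application.class, args);
    }
}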

5. Interceptors

Interceptors are used a lot in projects (though not in every one), for example to intercept certain urls and perform some checks or processing. In addition, commonly used static pages and the swagger pages should be let through, so these static resources must not be intercepted. First, customize an interceptor:

public class MyInterceptor implements HandlerInterceptor {

    private static final Logger logger = LoggerFactory.getLogger(MyInterceptor.class);

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        logger.info("Executed before the Controller method is called");
        return true;
    }

    @Override
    public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler, ModelAndView modelAndView) throws Exception {
        logger.info("Executed after the Controller method is called, but before the view is rendered");
    }

    @Override
    public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) throws Exception {
        logger.info("The whole request has been handled and the DispatcherServlet has rendered the view; cleanup can be done here");
    }
}

The custom interceptor is then added to the interceptor configuration.

@Configuration
public class MyInterceptorConfig implements WebMvcConfigurer {
    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        // Implementing WebMvcConfigurer does not cause static resources to be intercepted
        registry.addInterceptor(new MyInterceptor())
                // Intercept all urls
                .addPathPatterns("/**")
                // Let the swagger resources through
                .excludePathPatterns("/swagger-resources/**");
    }
}

In Spring Boot, we usually store some static resources in the following directory:

classpath:/static
classpath:/public
classpath:/resources
classpath:/META-INF/resources

The /** configured above does intercept all urls, but because we implement the WebMvcConfigurer interface, Spring Boot does not intercept the static resources in these directories. The swagger page lives under swagger-resources, so we simply exclude everything in that directory.

Then open the swagger page in the browser; if swagger is displayed normally, the exclusion works. You can also check the execution order of the interceptor methods from the logs printed in the background.

6. Global exception handling

Global exception handling is something every project must have. For specific exceptions we may do specific handling, but for unhandled exceptions there is usually a unified global handler. Before handling exceptions, it is best to maintain an enum class dedicated to holding the exception prompt messages, as follows:

public enum BusinessMsgEnum {
    /** Parameter exception */
    PARMETER_EXCEPTION("102", "Parameter exception!"),
    /** Wait timeout */
    SERVICE_TIME_OUT("103", "Service call timed out!"),
    /** Parameter too large */
    PARMETER_BIG_EXCEPTION("102", "The number of uploaded images cannot exceed 50!"),
    /** 500: unexpected exception */
    UNEXPECTED_EXCEPTION("500", "An exception occurred in the system, please contact the administrator!");

    /**
     * Message code
     */
    private String code;
    /**
     * Message content
     */
    private String msg;

    private BusinessMsgEnum(String code, String msg) {
        this.code = code;
        this.msg = msg;
    }

    public String code() {
        return code;
    }

    public String msg() {
        return msg;
    }

}

In the global unified exception handling class, we generally handle custom business exceptions first, then some common system exceptions, and finally a catch-all for Exception:

@ControllerAdvice
@ResponseBody
public class GlobalExceptionHandler {

    private static final Logger logger = LoggerFactory.getLogger(GlobalExceptionHandler.class);

    /**
     * Intercept business exceptions and return the business exception information
     * @param ex
     * @return
     */
    @ExceptionHandler(BusinessErrorException.class)
    @ResponseStatus(value = HttpStatus.INTERNAL_SERVER_ERROR)
    public JsonResult handleBusinessError(BusinessErrorException ex) {
        String code = ex.getCode();
        String message = ex.getMessage();
        return new JsonResult(code, message);
    }

    /**
     * Null pointer exception
     * @param ex NullPointerException
     * @return
     */
    @ExceptionHandler(NullPointerException.class)
    @ResponseStatus(value = HttpStatus.INTERNAL_SERVER_ERROR)
    public JsonResult handleTypeMismatchException(NullPointerException ex) {
        logger.error("Null pointer exception: {}", ex.getMessage());
        return new JsonResult("500", "A null pointer exception occurred");
    }

    /**
     * System exception: unexpected exceptions
     * @param ex
     * @return
     */
    @ExceptionHandler(Exception.class)
    @ResponseStatus(value = HttpStatus.INTERNAL_SERVER_ERROR)
    public JsonResult handleUnexpectedServer(Exception ex) {
        logger.error("System exception:", ex);
        return new JsonResult(BusinessMsgEnum.UNEXPECTED_EXCEPTION);
    }

}

Here, BusinessErrorException is a custom business exception that inherits from RuntimeException; the article does not post its code, but you can find it in my source code.
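For reference, a hedged sketch consistent with the ex.getCode() and ex.getMessage() calls in the handler above (the actual class in the course source may differ in detail):

public class BusinessErrorException extends RuntimeException {

    private static final long serialVersionUID = 1L;

    /** Business error code */
    private String code;

    public BusinessErrorException(String code, String message) {
        super(message);
        this.code = code;
    }

    public BusinessErrorException(BusinessMsgEnum msgEnum) {
        super(msgEnum.msg());
        this.code = msgEnum.code();
    }

    public String getCode() {
        return code;
    }
}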
There is a testException method in UserController that is used to test the global exception handling. Open the swagger page and call that interface, and you can see that it returns the prompt message: "An exception occurred in the system, please contact the administrator!". Of course, in practice you should return different messages for different business scenarios.
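The testException method itself is not printed in the article; as a hedged sketch, such a test endpoint could look like the following (written here as a separate controller, deliberately triggering a runtime exception so that the catch-all Exception handler answers with the unified message):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ExceptionTestController {

    @GetMapping("/testException")
    public JsonResult<String> testException() {
        // Deliberately trigger an ArithmeticException; it is neither a NullPointerException nor a
        // BusinessErrorException, so it falls through to the catch-all Exception handler
        int i = 1 / 0;
        return new JsonResult<>(String.valueOf(i));
    }
}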

7. Summary

This article mainly walked you through quickly building an empty Spring Boot architecture that can be used in real projects, covering a unified encapsulated data structure, an online debuggable interface, json handling, the use of a template engine (reflected in the source code), persistence layer integration, interceptors and global exception handling. Once these things are in place, a Spring Boot project environment is basically complete, and it can then be extended according to the specific situation.

Course source code download address: Poke me to download