Comparisons of List and Set and their respective subclasses
Comparison 1: Comparison between ArrayList and LinkedList
1. ArrayList is based on a dynamic array. Because its elements are stored at contiguous addresses, lookups are efficient once the data is in place (it sits contiguously in memory).
2. For the same reason, insertion and deletion are relatively inefficient: ArrayList has to shift the elements after the affected position.
3. LinkedList is based on a linked list, whose nodes can live at arbitrary addresses, so no contiguous block of memory has to be found when allocating. For additions and deletions (add and remove), LinkedList has the advantage.
4. Because LinkedList must follow node pointers to reach an element, lookup performance is relatively low.
Applicable scenario analysis:
Use ArrayList when the data is mostly accessed by index; use LinkedList when the data is frequently inserted and deleted.
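A minimal sketch of the two access patterns (the class name ListComparison is just for illustration):

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class ListComparison {
    public static void main(String[] args) {
        // ArrayList: a dynamic array, so index lookups are O(1).
        List<String> arrayList = new ArrayList<>();
        arrayList.add("a");
        arrayList.add("b");
        System.out.println(arrayList.get(1)); // fast positional access

        // LinkedList: doubly linked nodes, so inserting or removing
        // at the ends requires no element shifting.
        LinkedList<String> linkedList = new LinkedList<>();
        linkedList.addFirst("head");
        linkedList.addLast("tail");
        linkedList.removeFirst();
        System.out.println(linkedList); // [tail]
    }
}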
Comparison 2: Comparison between ArrayList and Vector
1. Vector's methods are synchronized and thread-safe, while ArrayList's methods are not. Because thread synchronization inevitably costs performance, ArrayList performs better than Vector.
2. When the number of elements exceeds the current capacity, Vector doubles its capacity, while ArrayList grows by only about 50%, so ArrayList conserves memory better.
3. Vector is rarely used in practice because of its poor performance, but it does provide thread synchronization: only one thread can write to a Vector at a time, which avoids the inconsistencies caused by concurrent writes.
4. Vector lets you set a growth factor (the capacityIncrement); ArrayList does not.
Applicable scenario analysis:
1. Vector is synchronized and therefore thread-safe, while ArrayList is unsynchronized and not thread-safe. If thread safety is not a concern, ArrayList is generally the more efficient choice.
2. If the number of elements will keep growing past the current array length, Vector has a certain advantage for large data sets, because doubling the capacity means fewer re-allocations.
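A small sketch of the growth difference; Vector's two-argument constructor exposes the growth factor mentioned in point 4, while ArrayList has no such parameter (the class name GrowthDemo is illustrative):

import java.util.ArrayList;
import java.util.Vector;

public class GrowthDemo {
    public static void main(String[] args) {
        // Grow by 5 slots whenever capacity is exhausted.
        Vector<Integer> stepped = new Vector<>(10, 5);
        // With no capacityIncrement, Vector doubles instead.
        Vector<Integer> doubling = new Vector<>(10);
        // ArrayList only takes an initial capacity; internally it grows by ~50%.
        ArrayList<Integer> arrayList = new ArrayList<>(10);

        System.out.println(stepped.capacity()); // 10 now; 15 after the 11th add
        // ArrayList deliberately does not expose a capacity() method.
    }
}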
Comparison 3: Comparison between HashSet and TreeSet
TreeSet is implemented on top of a binary tree (a red-black tree). Its data is sorted automatically, and null values are not allowed.
HashSet is implemented on top of a hash table. Its data is unordered; it can hold null, but only one null. In both sets values cannot repeat, like a unique constraint in a database.
Objects placed in a HashSet must implement the hashCode() method. Each stored object is identified by its hash code, and objects with the same content produce the same hash code, so duplicate content cannot be stored. Different instances of the same class with different content can, however, all be stored.
Applicable scenario analysis:
HashSet is based on hashing, so its performance is usually better than TreeSet's. Use HashSet by default, and reach for TreeSet only when you need sorted iteration.
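A minimal sketch of these behavioral differences (the class name SetComparison is illustrative):

import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class SetComparison {
    public static void main(String[] args) {
        Set<String> hashSet = new HashSet<>();
        hashSet.add("banana");
        hashSet.add("apple");
        hashSet.add(null);           // a single null is allowed
        hashSet.add("apple");        // duplicate content is silently ignored
        System.out.println(hashSet); // iteration order is unspecified

        Set<String> treeSet = new TreeSet<>();
        treeSet.add("banana");
        treeSet.add("apple");
        System.out.println(treeSet); // [apple, banana] -- sorted automatically
        // treeSet.add(null);        // would throw NullPointerException
    }
}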
The difference between HashMap and ConcurrentHashMap
1. HashMap is not thread-safe, while ConcurrentHashMap is thread-safe.
2. ConcurrentHashMap uses lock segmentation (the design used through JDK 1.7): the whole hash table is split into several small Segments, each guarded by its own lock. To insert an element, the map first locates the Segment the key belongs to, acquires that Segment's lock, and then performs the insert within that Segment.
3. This finer lock granularity gives ConcurrentHashMap better concurrent performance.
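A minimal sketch of the thread-safety difference (the class and key names are illustrative):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConcurrentMapDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentMap<String, Integer> counts = new ConcurrentHashMap<>();

        // Two threads update the same key; each update only contends for
        // the lock guarding that key's slot, not the whole table.
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                counts.merge("hits", 1, Integer::sum); // atomic read-modify-write
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // Always 2000; with a plain HashMap, updates could be lost
        // under the same workload.
        System.out.println(counts.get("hits"));
    }
}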
The memory structure of JVM
According to the JVM specification, JVM memory is divided into five parts: the virtual machine stack, the heap, the method area, the program counter, and the native method stack.
1. Java virtual machine stack:
Thread-private. Each method call creates a stack frame, which stores the local variable table, operand stack, dynamic linking, method return address, and so on. From invocation to completion, each method corresponds to one stack frame being pushed onto and popped off the virtual machine stack.
2. Heap:
Thread-shared. A memory area shared by all threads, created when the virtual machine starts and used to store object instances.
3. Method area:
Thread-shared. A memory area shared by all threads, used to store class information, constants, static variables, and other data loaded by the virtual machine.
4. Program counter:
Thread-private. It is the line-number indicator of the bytecode the current thread is executing. Every thread needs its own independent program counter, which is why this kind of memory is described as "thread-private".
5. Native method stack:
Thread-private. It mainly serves the native methods used by the virtual machine.
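An illustrative sketch mapping the five areas onto a trivial program, following the descriptions above (the class name MemoryAreas is made up for the example):

public class MemoryAreas {
    // Class information and this static variable belong to the method area.
    static int counter = 0;

    public static void main(String[] args) {
        // 'sb' is a local variable: the reference lives in this stack
        // frame's local variable table on the Java virtual machine stack.
        // The object it points to is allocated on the shared heap.
        StringBuilder sb = new StringBuilder();
        append(sb); // the call pushes a new stack frame
    }

    private static void append(StringBuilder sb) {
        // The thread's program counter tracks which bytecode
        // instruction of this method is currently executing.
        sb.append("hello");
    } // returning pops this method's stack frame
}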
The difference between strong references, soft references and weak references
Strong reference:
The object is released only after the reference itself is released. As long as a strong reference exists, the garbage collector will never reclaim the object. This is the most common case: an ordinary new object.
Soft reference:
A reference whose object the collector reclaims before memory would overflow. Soft references are mainly used to implement cache-like functionality: when memory is sufficient, values are fetched directly through the soft reference without querying the busy real data source, which improves speed; when memory is insufficient, the collector automatically discards this cached data, and it is queried again from the real source.
Weak reference:
A reference whose object is reclaimed the next time garbage collection runs. For a short time, before that collection happens, the object can still be retrieved through the weak reference; once the collection has run, null is returned. Weak references are mainly used to monitor whether an object has been marked as garbage by the collector: the reference's isEnqueued() method reports whether the collector has marked the object.
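A minimal sketch of the three strengths (the class name is illustrative; whether weak.get() actually returns null depends on the collector really running after System.gc(), which is only a request):

import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceDemo {
    public static void main(String[] args) {
        // Strong reference: never collected while 'strong' is reachable.
        Object strong = new Object();

        // Soft reference: kept until memory runs low, so it suits caches.
        SoftReference<byte[]> cache = new SoftReference<>(new byte[1024]);
        byte[] cached = cache.get(); // non-null while memory is sufficient
        if (cached == null) {
            // The collector reclaimed the entry; reload from the real source.
        }

        // Weak reference: reclaimed at the next garbage collection.
        WeakReference<Object> weak = new WeakReference<>(new Object());
        System.gc();
        System.out.println(weak.get()); // very likely null after a collection
    }
}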
What is the core of Spring MVC, how does it handle a request, and how is inversion of control implemented?
Core:
Inversion of control (IoC) and aspect-oriented programming (AOP)
Request processing flow:
1. First, the user sends a request to the front controller (the DispatcherServlet). Based on the request information (such as the URL), the front controller chooses a page controller to handle it and delegates the request to that controller; this is the control-logic part of a traditional controller.
2. After receiving the request, the page controller performs the functional processing. It first collects the request parameters, binds them to a command object, and validates it, then delegates the command object to the business object for processing. When processing finishes, it returns a ModelAndView (model data plus a logical view name).
3. The front controller takes back control, selects the matching view according to the returned logical view name, and passes the model data in for view rendering.
4. The front controller takes back control once more and returns the response to the user.
How inversion of control is implemented:
Whenever we use the Spring framework, we configure an XML file that records each bean's id and class.
A Spring bean is a singleton by default, and its instance can be created by reflection from the class named in the bean definition.
So the Spring framework creates the instances for us through reflection and maintains them for us.
If class A needs a reference to class B, the Spring framework assigns the B instance to A's member variable as configured in the XML.
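A toy sketch of this principle, not Spring's actual implementation (all names here are invented for illustration, and the XML parsing is omitted):

import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

public class ToyContainer {
    private final Map<String, Object> singletons = new HashMap<>();

    // Corresponds to <bean id="..." class="..."/>: create the singleton by reflection.
    public void register(String id, String className) throws Exception {
        singletons.put(id, Class.forName(className).getDeclaredConstructor().newInstance());
    }

    // Corresponds to <property name="..." ref="..."/>: assign bean 'refId'
    // to the named field of bean 'id'.
    public void inject(String id, String fieldName, String refId) throws Exception {
        Object bean = singletons.get(id);
        Field field = bean.getClass().getDeclaredField(fieldName);
        field.setAccessible(true);
        field.set(bean, singletons.get(refId));
    }

    public Object getBean(String id) {
        return singletons.get(id);
    }
}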
The difference between BIO, NIO and AIO
Java BIO: synchronous and blocking. The server model is one connection, one thread: whenever a client connects, the server must start a thread to handle it. If that connection then does nothing, the thread is pure overhead, although this can be improved with a thread pool (a sketch follows this list).
Java NIO: synchronous and non-blocking. The server model is one request, one thread: connection requests from clients are registered with a multiplexer, which polls the connections and starts a thread only when a connection has an actual I/O request.
Java AIO: asynchronous and non-blocking. The server model is one valid request, one thread: the client's I/O requests are first completed by the OS, which then notifies the server application to start a thread to process the result.
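A minimal sketch of the BIO one-connection-one-thread model, improved with the pooling mentioned above (the class name, port, and pool size are arbitrary choices for the example):

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BioEchoServer {
    public static void main(String[] args) throws IOException {
        ExecutorService pool = Executors.newFixedThreadPool(100);
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                Socket socket = server.accept(); // blocks until a client connects
                pool.execute(() -> {             // one worker per connection
                    try (Socket s = socket) {
                        // Blocking echo: read bytes and write them straight back.
                        s.getInputStream().transferTo(s.getOutputStream());
                    } catch (IOException ignored) {
                    }
                });
            }
        }
    }
}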
Applicable scenario analysis:
The BIO approach suits architectures with a relatively small and fixed number of connections. It demands a lot of server resources and concurrency is limited by the application; it was the only choice before JDK 1.4, but the code is intuitive, simple, and easy to understand, and it was used in Apache.
The NIO approach suits architectures with many short-lived connections (light operations), such as chat servers. Concurrency is limited by the application and the programming is relatively complex; JDK 1.4 introduced support for it, and it is used in Nginx and Netty.
The AIO approach suits architectures with many long-lived connections (heavy operations), such as photo-album servers. It fully enlists the OS in the concurrent work, and the programming is relatively complicated; JDK 7 introduced support for it. Netty used it during its development but later gave it up.
Why use a thread pool
First, understand what a thread pool is:
A thread pool is a collection of threads created when a multi-threaded application initializes; those threads are then reused for new tasks instead of a new thread being created each time.
Benefits of using a thread pool
1. A thread pool improves the application's response time. Because the pool's threads are already created and waiting to be assigned, the application can use them directly rather than creating a new thread first.
2. A thread pool saves the overhead of creating a complete thread for every short-lived task, and resources can be reclaimed once a task completes.
3. The thread pool can optimize thread time slices according to the processes currently running in the system.
4. A thread pool lets us launch multiple tasks without setting the properties of each thread individually.
5. A thread pool lets us pass an object reference carrying state information as a parameter to the task being executed.
6. A thread pool can be used to cap the maximum number of threads handling a particular kind of request. A usage sketch follows.
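A minimal usage sketch tying these points together (the class name and pool size are arbitrary):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // A fixed pool of 4 threads also caps the number of concurrent workers.
        ExecutorService pool = Executors.newFixedThreadPool(4);

        for (int i = 0; i < 10; i++) {
            final int taskId = i; // state handed to the task as a captured value
            pool.execute(() -> System.out.println(
                    Thread.currentThread().getName() + " runs task " + taskId));
        }

        pool.shutdown();                            // stop accepting new tasks
        pool.awaitTermination(5, TimeUnit.SECONDS); // wait for the queue to drain
    }
}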
The difference between pessimistic locking and optimistic locking, and how to implement them
Pessimistic locking: a piece of execution logic is guarded by a pessimistic lock. When different threads run it at the same time, only one thread can execute; the others wait at the entrance until the lock is released.
Optimistic locking: a piece of execution logic is guarded by an optimistic lock. Different threads may all enter and execute at the same time; when the data is finally updated, each thread checks whether the data was modified by another thread in the meantime (whether the version still matches the one read at the start). If not, it updates; otherwise it abandons the operation.
Implementation of pessimistic locks:
begin; / begin work; / start transaction; (pick any one of the three)
// 1. Query the product information, taking a row lock
select status from t_goods where id=1 for update;
// 2. Generate the order based on the product information
insert into t_orders (id, goods_id) values (null, 1);
// 3. Change the product status to 2
update t_goods set status=2 where id=1;
// 4. Commit the transaction
commit; / commit work;
Implementation of optimistic locks:
1. Query the product information, including its current version
select status, version from t_goods where id=#{id};
2. Generate the order based on the product information
3. Change the product status to 2, but only if the version has not changed
update t_goods set status=2, version=version+1
where id=#{id} and version=#{version};
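The same version-check idea can be sketched in plain Java with a compare-and-set, where the atomic value plays the role of the version column (the class and method names are illustrative):

import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticCounter {
    private final AtomicInteger status = new AtomicInteger(1);

    public boolean markSold() {
        while (true) {
            int current = status.get(); // 1. read the current value ("query")
            if (current != 1) {
                return false;           // already changed by someone else; give up
            }
            // 2. compareAndSet succeeds only if nobody modified the value
            //    since we read it, just like 'and version=#{version}'.
            if (status.compareAndSet(current, 2)) {
                return true;
            }
            // Another thread won the race: loop and retry.
        }
    }
}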
What is thread deadlock? How to generate a deadlock? How to avoid thread deadlocks?
Deadlock introduction:
Thread deadlock means that two or more threads each hold a resource the others need, leaving all of them waiting and unable to proceed. When a thread enters an object's synchronized block, it occupies that resource and does not release it until it exits the block or calls the wait method; during that time no other thread can enter the block. When threads each cling to a resource the others need, they wait for one another to release it, and if none of them releases voluntarily, a deadlock occurs.
The specific conditions for a deadlock to occur:
1. Mutual exclusion: a process has exclusive use of the resources allocated to it; a resource can be occupied by only one process until that process releases it.
2. Request and hold: when a process blocks because a resource it is requesting is occupied, it keeps holding the resources it has already acquired.
3. No preemption: a resource cannot be taken away from a process by any other process; it is released only by the process that holds it.
4. Circular wait: when deadlock occurs, the waiting processes necessarily form a cycle (similar to an infinite loop), causing permanent blockage.
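A classic sketch of how such a deadlock is produced: two threads take the same two locks in opposite orders (the class name is illustrative; the sleep just widens the race window):

public class DeadlockDemo {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    public static void main(String[] args) {
        // Thread 1 holds A and waits for B; thread 2 holds B and waits for A.
        // All four conditions above are satisfied, so both wait forever.
        new Thread(() -> {
            synchronized (LOCK_A) {
                pause();
                synchronized (LOCK_B) { System.out.println("thread 1 got both"); }
            }
        }).start();
        new Thread(() -> {
            synchronized (LOCK_B) {
                pause();
                synchronized (LOCK_A) { System.out.println("thread 2 got both"); }
            }
        }).start();
    }

    private static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException ignored) { }
    }
}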
How to avoid:
1. Locking order:
Deadlocks arise easily when multiple threads need the same locks but acquire them in different orders. If you can ensure that all threads acquire the locks in the same order, deadlock cannot occur. Of course, this requires knowing in advance all the locks you might use, which is sometimes unpredictable.
2. Locking time limit:
Add a timeout: if a thread fails to acquire all the locks it needs within the given time limit, it backs off, releases every lock it has already acquired, and waits a random interval before trying again (see the sketch after this list). However, if very many threads compete for the same batch of resources at once, even with a timeout-and-back-off mechanism they may keep retrying without ever getting the locks.
3. Deadlock detection:
Deadlock detection records, whenever a thread acquires a lock, that fact in a data structure relating threads and locks (a map, a graph, etc.); whenever a thread requests a lock, that too is recorded in the structure, so waiting cycles can be found. Deadlock detection is a better prevention mechanism for scenarios where ordered locking cannot be applied and lock timeouts are not feasible.
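A sketch of the timed-lock back-off from point 2, using ReentrantLock.tryLock with a timeout (the class and method names are illustrative):

import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TimedLocking {
    private final ReentrantLock lockA = new ReentrantLock();
    private final ReentrantLock lockB = new ReentrantLock();

    public void doBoth() throws InterruptedException {
        while (true) {
            // Try to take both locks within a time limit instead of blocking forever.
            if (lockA.tryLock(1, TimeUnit.SECONDS)) {
                try {
                    if (lockB.tryLock(1, TimeUnit.SECONDS)) {
                        try {
                            return; // both locks held: do the real work here
                        } finally {
                            lockB.unlock();
                        }
                    }
                } finally {
                    lockA.unlock(); // back off: release everything acquired so far
                }
            }
            // Wait a random interval before retrying, to break repeated collisions.
            Thread.sleep(ThreadLocalRandom.current().nextInt(50, 200));
        }
    }
}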