
2024 Embedded Interview: OPPO Embedded Interview Questions and Reference Answers

Table of contents

What is the difference between TCP and UDP?

Please briefly describe the three-way handshake process of TCP.

How does the HTTP protocol work?

What new features have been introduced in C++11?

What is a smart pointer? How to solve its memory leak problem?

What are the communication methods between processes?

What are the scheduling strategies of the CPU?

How to ensure thread safety? What issues should be paid attention to in multithreaded programming?

What is SPI? How many lines does it have? How many modes are supported?

Have you used bit-banged (GPIO-emulated) SPI? Please describe it.

What is the difference between heap and stack in memory management?

What is pushed onto the stack when a function is called?

Please briefly describe the startup process of uboot.

What preparations are needed before uboot starts?

Is uboot using a physical or virtual address when starting? Do I need to turn on MMU?

What are the differences between x86 assembly and Arm assembly?

Please introduce a driver you are familiar with.

Have you learned the operating system? What is the difference between spin lock and semaphore?

What is the startup process of the Linux system?

What professional courses have you learned? Which courses did you learn best?

What drivers have you written in Linux?

Do you know about Linux epoll?

Please tell me about the LCD driver and input subsystem.

How should the driver interrupt function be written?

Do you understand the underlying implementation of key_report?

How to write a character device driver?

How to write a key driver and implement its interrupt function?

Please tell me the differences between arrays and linked lists.

What do you understand about SPI and interrupts?

What do you understand about Linux interrupts?

What do you know about multithreading programming?

What do you know about memory management?

What are zombie processes, orphan processes, and daemons?

What are the dangers of zombie processes?

What are the communication methods between threads?

What is a friend? How is it used in C++?

Can the constructor and destructor of the base class be inherited by the derived class?

Which functions cannot be declared as virtual functions?

What is the underlying implementation of vector?

What is a wild pointer? How did it happen? How to avoid it?

What is the role of the stack in C?

How is C++ memory management done?

What is a memory leak? How to judge and reduce memory leaks?

How does the byte alignment problem affect the program?

In what order are C function parameters pushed onto the stack?

How are return values handled in C++?

What is the maximum size of the stack? Can malloc(1.2 GB) succeed on a machine with 1 GB of memory? Why?

Under what circumstances can functions such as strcat, strncat, strcmp, and strcpy cause memory overflows? How can this be avoided?

What are the differences and usage scenarios of memory application functions such as malloc, calloc, realloc, etc.?


What is the difference between TCP and UDP?

TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) are two different transport-layer protocols, and they differ in several respects:

  1. Connectivity

    • TCP is a connection-oriented protocol. Before communication, it is necessary to establish a connection to ensure the reliability and sequence of data transmission.
    • UDP is a connectionless protocol: datagrams are sent directly without establishing a connection, so data loss, reordering, and so on may occur.
  2. Reliability

    • TCP provides reliable data transmission, ensuring the integrity and accuracy of data through mechanisms such as confirmation, retransmission, and congestion control.
    • UDP does not guarantee the reliable delivery of data, and the data received by the recipient may be incomplete or incorrect.
  3. Sequence

    • TCP ensures that the data arrives at the receiver in the order it is sent.
    • UDP does not guarantee the order of data, and data may arrive out of order.
  4. Header overhead

    • The TCP header is larger and contains more control information, such as the sequence number, acknowledgement number, and window size.
    • The UDP header is small, with only a few fields such as source port, destination port, length and checksum.
  5. Application scenarios

    • TCP is suitable for applications with high requirements for data accuracy and sequence, such as file transfer, email, web browsing, etc.
    • UDP is suitable for applications that require high real-time and low data accuracy, such as live video broadcasts, voice calls, online games, etc.

For example, for file downloads we want the data to arrive intact and in order, so TCP is used; in real-time video streaming, occasional packet loss or reordering barely affects the viewing experience, but the data must be delivered quickly, so UDP is usually used.

Please briefly describe the three-way handshake process of TCP.

The TCP three-way handshake is the key step in establishing a reliable connection, as follows:

First handshake: The client sends a SYN (synchronize) packet to the server containing the client's chosen initial sequence number (SEQ). The client then enters the SYN_SENT state.

Second handshake: After receiving the client's SYN packet, the server replies with a SYN/ACK (synchronize/acknowledge) packet whose acknowledgement number is the client's SEQ + 1, and which also carries the server's own initial sequence number. The server then enters the SYN_RECV state.

Third handshake: After receiving the server's SYN/ACK packet, the client sends an ACK (acknowledge) packet whose acknowledgement number is the server's SEQ + 1. The client then enters the ESTABLISHED state; after receiving this ACK packet the server also enters the ESTABLISHED state, and the connection is established.

Through three handshakes, the client and the server can confirm each other's reception and transmission capabilities, preparing for subsequent data transmission.

For example, suppose the client's initial sequence number is 1000 and the server's is 2000. In the first handshake, the client sends a SYN packet with SEQ = 1000; in the second handshake, the server responds with a SYN/ACK packet with ACK = 1001 and SEQ = 2000; in the third handshake, the client sends an ACK packet with ACK = 2001.

How does the HTTP protocol work?

HTTP (HyperText Transfer Protocol) is an application-layer protocol used to transfer data on the Web. Its working principle includes the following main steps:

  1. Client initiates a request

    • The client (usually a browser) sends an HTTP request to the server by entering the URL. Requests include request methods (such as GET, POST, etc.), request headers (including client information and expected response format, etc.), and request bodies (if there is data to be sent).
  2. Server handles requests

    • After the server receives the request, it performs corresponding processing based on the requested URL and method. For example, if it is a GET request, the server will obtain the corresponding resource from the database or file system.
  3. Server response

    • After the server processes the request, it sends an HTTP response to the client. The response includes a status code (indicating the result of the request, e.g. 200 means success, 404 means the resource was not found), response headers (carrying the server's information and parameters of the response), and a response body (the requested resource content).
  4. Client receives and processes responses

    • After the client receives the response, it performs corresponding processing based on the response status code and the response header. If the status code indicates success, the client parses the response body based on the information in the response header and displays it on the page.

For example, when a user enters a URL into the browser, the browser will send a GET request to the server to obtain the HTML content of the web page. After the server processes the request, it returns a response containing HTML code, and the browser parses and displays the web page.

What new features have been introduced in C++11?

C++11 introduced many new features that greatly enhance the functionality and programming convenience of C++:

  1. Automatic type deduction

    • Using the `auto` keyword lets the compiler deduce a variable's type from its initializer.
  2. Range-based `for` loop

    • Makes traversing containers and arrays more concise and intuitive.
  3. Initializer lists

    • Make initializing objects easier.
  4. Rvalue references and move semantics

    • Improve resource management and performance optimization.
  5. Lambda expressions

    • Allow anonymous functions to be defined where they are needed.
  6. Smart pointers

    • Such as `unique_ptr` and `shared_ptr`, which manage dynamic memory better and reduce the risk of memory leaks.
  7. Concurrency support

    • Includes the thread library, mutexes, condition variables, etc., making multithreaded programming easier.

For example, automatic type deduction can be written as `auto num = 42;` without explicitly specifying the type. A range-based `for` loop can traverse an array: `int arr[] = {1, 2, 3}; for (auto& x : arr) { ... }`.

What is a smart pointer? How to solve its memory leak problem?

Smart pointers are tools in C++ that are used to automatically manage dynamic allocation of memory.

The main types of smart pointers include `unique_ptr` (exclusive ownership), `shared_ptr` (shared ownership), and `weak_ptr` (weak reference).

Smart pointers solve the memory leak problem by automatically managing the lifetime of memory: when a smart pointer goes out of scope or is no longer used, it automatically releases the memory it manages.

Taking `unique_ptr` as an example, it guarantees that only one pointer owns the pointed-to resource at any time; when the `unique_ptr` is destroyed, the memory it points to is automatically released.

`shared_ptr` manages memory via reference counting. Multiple `shared_ptr`s can share ownership of the same memory; when the last `shared_ptr` is destroyed and the reference count drops to 0, the memory is freed.

In order to correctly use smart pointers to avoid memory leaks, the following points need to be paid attention to:

  1. Avoid circular references, especially when using `shared_ptr`.
  2. Ensure the ownership transfer and sharing logic of smart pointers is clear.

For example, if two objects hold `shared_ptr`s to each other, the memory can never be released, forming a circular reference.

What are the communication methods between processes?

Inter-process communication methods mainly include the following:

  1. Pipe

    • Divided into anonymous pipes and named pipes. Anonymous pipes can only be used between related processes (such as parent and child), while named pipes allow communication between unrelated processes.
  2. Message Queue

    • Messages are stored in memory in the form of linked lists, and processes can send messages to or receive messages from the queue.
  3. Shared Memory

    • Multiple processes can share the same memory area, read and write data directly, and realize fast data exchange, but a synchronization mechanism is required to ensure data consistency.
  4. Semaphore

    • Used to implement synchronization and mutual exclusion between processes.
  5. Sockets

    • It can not only be used for inter-process communication on the same machine, but also for process communication on different machines.

For example, in a producer-consumer problem, a message queue can be used to deliver the produced items; when multiple processes access a shared resource, a semaphore can be used to achieve mutual exclusion.

What are the scheduling strategies of the CPU?

There are mainly the following CPU scheduling strategies:

  1. First Come First Served (FCFS)

    • Schedules in the order in which processes arrive.
  2. Shortest Job First (SJF)

    • Priority is given to scheduling processes with short execution time.
  3. Round Robin (RR)

    • Assign each process a time slice and switch to the next process after the time slice is used up.
  4. Priority Scheduling

    • Assign a priority to each process; processes with higher priority obtain CPU resources first.
  5. Multi-level feedback queue scheduling

    • Combining the characteristics of multiple scheduling strategies, multiple queues with different priorities are set up.

For example, a system with high interactivity requirements may use round-robin scheduling, while a real-time system with strict response-time requirements may use priority scheduling.

How to ensure thread safety? What issues should be paid attention to in multithreaded programming?

To ensure thread safety, the following common methods can be taken:

  1. Mutex

    • Locking and unlocking ensure that only one thread accesses a shared resource at a time.
  2. Condition Variable

    • Used for synchronization and waiting between threads.
  3. Atomic operation

    • Ensure operations are atomic and cannot be interrupted by other threads.

Issues to pay attention to in multithreaded programming:

  1. Data competition

    • When multiple threads access and modify shared data at the same time, the results can become inconsistent.
  2. Deadlock

    • Threads are waiting for each other's resources, which makes the system unable to continue execution.
  3. Performance degradation caused by too many threads

    • Frequent thread switching consumes system resources.

For example, in multithreaded bank account operations, a mutex is used to protect modifications to the account balance and avoid data inconsistency. In complex resource-contention scenarios, careful design is needed to avoid deadlocks.

What is SPI? How many lines does it have? How many modes are supported?

SPI (Serial Peripheral Interface) is a high-speed, full-duplex, synchronous communication bus.

SPI usually consists of four lines:

  1. SCLK (Serial Clock): used to synchronize data transmission.
  2. MOSI (Master Out Slave In): the master sends data to the slave on this line.
  3. MISO (Master In Slave Out): the slave sends data to the master on this line.
  4. SS/CS (Slave Select/Chip Select): used to select the slave device to communicate with.

SPI supports four operating modes, the main difference is the combination of clock polarity (CPOL) and clock phase (CPHA):

  1. Mode 0: CPOL = 0, CPHA = 0. The clock idles low, and data is sampled on the rising edge of the clock.
  2. Mode 1: CPOL = 0, CPHA = 1. The clock idles low, and data is sampled on the falling edge of the clock.
  3. Mode 2: CPOL = 1, CPHA = 0. The clock idles high, and data is sampled on the falling edge of the clock.
  4. Mode 3: CPOL = 1, CPHA = 1. The clock idles high, and data is sampled on the rising edge of the clock.

For example, in communication between some sensor modules and microcontrollers, the appropriate SPI mode is selected according to the sensor specification requirements to ensure accurate data transmission.

Have you used bit-banged (GPIO-emulated) SPI? Please describe it.

Yes, I have used bit-banged (GPIO-emulated) SPI for communication.

Bit-banged SPI implements SPI communication by controlling ordinary GPIO pins in software when no hardware SPI peripheral is available.

First, the relevant GPIO pin needs to be set to output or input mode. For clock pins, clock signals of specific frequency and polarity are generated as required by the SPI protocol. When sending data, the level state of the MOSI pin is changed on the rising or falling edge of the clock according to the data bit to be sent. Meanwhile, the level of the MISO pin is read on the corresponding clock edge to receive data.

For example, in a simple project, GPIO pins were used to emulate SPI and communicate with an EEPROM that has an SPI interface, implementing data read and write operations. In the process, pin level changes and timing intervals must be controlled precisely to comply with the SPI protocol's timing requirements.

What is the difference between heap and stack in memory management?

Heap and stack are two important concepts in memory management. They have the following significant differences:

  1. Memory allocation method

    • The stack is automatically allocated and released by the compiler, storing local variables, function parameters, etc. When the function call ends, the space on the stack will be automatically recycled.
    • The heap is manually allocated and released by the programmer, allocated with `malloc` or `new` and released with `free` or `delete`.
  2. Memory allocation efficiency

    • The stack is allocated and released quickly because it is simple to operate and is automatically managed by the compiler.
    • The allocation and release of the heap is relatively slow because it involves the system's memory management mechanism and complex algorithms.
  3. Memory space size

    • The stack space is usually small, typically a few megabytes.
    • The heap space is much larger, limited mainly by the system's physical and virtual memory.
  4. Store content

    • The stack mainly stores the calling information of functions, local variables, etc., which are deterministic and temporary.
    • The heap can store larger data structures, dynamically allocated objects, etc., with greater flexibility.

For example, local variables defined in a function are stored on the stack, while dynamically created large arrays or objects are usually allocated on the heap.

What is pushed onto the stack when a function is called?

When a function is called, the following content usually needs to be pushed onto the stack:

  1. The return address of the function

    • Used to return correctly to the call site and continue execution after the function finishes.
  2. Function parameters

    • Pushed onto the stack in the order in which they are passed.
  3. Caller's stack frame pointer

    • Used to restore the caller's stack frame.
  4. The values of some registers

    • For example, certain general-purpose registers are saved to ensure that the caller's register state is not corrupted during function execution.

For example, when the function `func(int a, int b)` is called, the parameters `a` and `b`, the return address, and the values of the relevant registers are pushed onto the stack. Inside the function, more data may be pushed onto the stack to store local variables and other information.

Please briefly describe the startup process of uboot.

The startup process of uboot (Universal Boot Loader, universal boot loader) is roughly as follows:

  1. Hardware initialization

    • Initialize basic hardware such as processors, clocks, memory controllers, etc.
  2. Environment variable initialization

    • Read and set some key environment variables, such as startup parameters, network configuration, etc.
  3. Loading the kernel image

    • Read kernel images to memory from storage devices (such as Flash, SD cards, etc.).
  4. Verify kernel image

    • Verify the integrity and correctness of the loaded kernel image.
  5. Pass parameters to the kernel

    • Pass some necessary parameters to the kernel so that the kernel starts correctly.
  6. Jump to kernel startup

    • After completing the preparation work, jump to the kernel entry point to start the kernel.

For example, in an embedded system, uboot first completes the basic initialization of the hardware, then loads the compressed kernel image from a specific storage location, passes the relevant parameters to the kernel and starts the kernel after verification.

What preparations are needed before uboot starts?

Before uboot starts, the following preparations are required:

  1. Hardware initialization

    • Including initializing the processor core, setting the clock frequency, and initializing the memory controller to ensure memory is available.
  2. Configure storage devices

    • Identify and initialize storage media used to store uboot, kernel images and file systems, such as Flash, SD cards, etc.
  3. Initialize the serial port

    • Used to output debugging information and interact with users.
  4. Loading boot configuration

    • Read boot configuration information from a specific storage location, such as startup mode, default parameters, etc.

For example, in an embedded system based on a specific chip, the chip's pin function needs to be configured before uboot starts to ensure proper connection to the storage device.

Is uboot using a physical or virtual address when starting? Do I need to turn on MMU?

Uboot uses a physical address when starting, and does not need to enable MMU (Memory Management Unit).

In the uboot stage, the system is in the early stage of initialization and has not yet established a complete memory management and virtual address mapping mechanism. At this time, directly operate the physical address to access hardware resources and perform memory reading and writing.

Only after the subsequent kernel is started, the MMU will be enabled to map virtual addresses to physical addresses, realizing more complex memory management and protection mechanisms.

For example, in some embedded systems, uboot directly uses physical addresses to read and write data in Flash, and after the kernel starts and configures the MMU, the application accesses through the virtual address.

What are the differences between x86 assembly and Arm assembly?

There are some differences between x86 assembly and Arm assembly:

  1. Instruction set architecture

    • x86 is a complex instruction set (CISC) architecture with a variety of instruction lengths and formats.
    • Arm is a reduced instruction set (RISC) architecture, with relatively simple and regular instructions.
  2. Registers

    • x86 (especially 32-bit) has relatively few general-purpose registers, many of which carry special historical roles, making their usage more complex.
    • Arm has more general-purpose registers with clearer, more uniform usage.
  3. Memory access method

    • x86 memory access is more flexible but more complex: many instructions can operate directly on memory operands.
    • Arm uses a load/store architecture, accessing memory only through dedicated load and store instructions.
  4. Instruction encoding

    • The x86 instruction encoding is relatively complex and has variable lengths.
    • Arm instruction encoding is usually simple and has a fixed length.
  5. Application scenarios

    • x86 is commonly found on personal computers and servers.
    • Arm is widely used in mobile devices, embedded systems, etc.

For example, when performing some simple calculations, the instructions for Arm assembly may be more concise and intuitive, while when dealing with complex operating systems and large applications, x86 assembly may have more specific instructions to optimize performance.

Please introduce a driver you are familiar with.

The driver I am familiar with is the USB driver.

The USB (Universal Serial Bus) driver is responsible for communication between the host and USB devices. It handles connection, disconnection, data transfer, and power management for a variety of USB devices.

When implementing USB drivers, you first need to understand the specifications of the USB protocol and the characteristics of various device types. For example, common USB devices include storage devices (such as USB drives), input devices (such as mouse, keyboard), audio devices, etc. Each device has its own specific protocol requirements and data format.

For device connection and disconnection detection, the driver must respond to hardware interrupts or poll device status in real time to determine the device's presence and availability. For data transfer, sending and receiving must be handled according to the transfer type (control, bulk, interrupt, or isochronous transfers). Error handling and recovery mechanisms are also needed to ensure reliable data transfer.

Take a USB mouse driver as an example. When the mouse is connected to the host, the driver will detect the device's insertion and obtain the device's descriptor to understand its characteristics, such as resolution, number of keys, etc. During data transmission, the driver will continuously receive the position and key status information sent by the mouse and pass it to the upper-level application of the operating system for processing.

In addition, in order to improve the power efficiency of the system, the USB driver also needs to participate in power management, such as reducing power consumption when the device is idle or entering power saving mode when the device is not in use for a long time.

Have you learned the operating system? What is the difference between spin lock and semaphore?

I've learned operating systems.

Spinlock and semaphore are two different mechanisms used in operating systems to achieve synchronization and mutual exclusion. They have the following differences:

  1. Waiting mechanism

    • Spinlock: When the acquisition of the lock fails, the thread will "spin" in place and continue to try to acquire the lock, and will not enter a sleep state.
    • Semaphore: when the semaphore cannot be acquired, the thread goes to sleep and waits to be woken up.
  2. Applicable scenarios

    • Spinlock: It is suitable for situations where locks can be obtained in a short period of time, because spin will not cause thread switching and the overhead is small. But if the lock cannot be acquired for a long time, CPU resources will be wasted.
    • Semaphore: suitable for cases where the lock may be held for a longer time, since sleeping threads avoid CPU idling.
  3. CPU usage

    • Spinlock: While waiting for the lock, the thread will always occupy the CPU.
    • Semaphore: The thread does not occupy the CPU during waiting.

For example, in a multi-core environment, if multiple cores may compete for a spin lock, and the time to acquire the lock is short, it is more appropriate to use a spin lock. In a single-core system, or when the time to acquire the lock is uncertain and may be long, it is more appropriate to use the semaphore to avoid wasting CPU resources.

What is the startup process of the Linux system?

The Linux system startup process is roughly as follows:

  1. BIOS self-test

    • After the computer is turned on, the hardware self-test is first performed by the BIOS (Basic Input/Output System), including checking whether the memory, hard disk, graphics card and other devices are normal.
  2. Bootloader

    • After the BIOS completes the self-test, select the boot device (such as hard disk, USB disk, etc.) according to the settings, and load the boot loader on the device. The common one is GRUB (Grand Unified Bootloader).
  3. Loading the kernel

    • The bootloader reads the Linux kernel image from the specified location and loads it into memory.
  4. Kernel initialization

    • The kernel begins to initialize, including detecting hardware devices, establishing memory management mechanisms, initializing various kernel data structures, etc.
  5. Start the first process

    • The kernel starts the first user-space process, usually the `init` process.
  6. Run the `init` process

    • The `init` process determines the system's runlevel based on its configuration file (e.g. `/etc/inittab`) and starts the corresponding services and processes.
  7. Start the system service

    • According to the operation level, various system services are started, such as network services, file system services, etc.
  8. Login interface

    • After the system service is started, the login interface is displayed and the user is waiting for the user to log in.

For example, in a server system, the kernel initialization phase detects multiple network interfaces and storage devices, and the `init` process starts specific network and database services according to the server's configuration.

What professional courses have you learned? Which courses did you learn best?

The professional courses I have studied include: computer composition principles, operating systems, data structures and algorithms, computer networks, embedded system principles, etc.

Of these courses, I learned operating systems and data structures and algorithms best.

In the operating system course, I gained a deep understanding of core concepts and mechanisms such as process management, memory management, file systems, and device management. Through hands-on programming and experiments, I mastered the implementation of process scheduling algorithms, the application of memory allocation strategies, and file system operations. This helps me better understand how computer systems manage and optimize resources.

In the data structures and algorithms course, I mastered common data structures, such as linked lists, stacks, queues, trees, and graphs, as well as various algorithms, such as sorting, searching, and dynamic programming. I can select appropriate data structures and algorithms for specific problems to improve program efficiency and performance.

For example, when solving a complex path planning problem, I can use the data structure of the graph and related algorithms to find the optimal solution.

What drivers have you written in Linux?

Under Linux, I've written character device drivers.

Character device drivers are a common type of Linux device drivers that enable operation and control of character devices.

When writing a character device driver, a series of interface functions needs to be implemented, such as `open`, `close`, `read`, and `write`, to handle opening, closing, reading from, and writing to the device. The driver must also handle device registration and unregistration, and interaction with the kernel.

For example, I wrote a simple virtual character device driver that simulates a data acquisition device. In the driver, the `read` function supplies the collected data, and user-space applications can read this data through system calls for processing and analysis.

In addition, device interrupts must be handled: when new data is generated or the device state changes, the kernel and user-space applications are notified via interrupts.

Do you know about Linux epoll?

I understand Linux's epoll.

epoll is an efficient I/O multiplexing mechanism provided by Linux.

Compared with the traditional select and poll mechanisms, epoll has significant advantages. It solves the inefficiency of select and poll when dealing with large numbers of file descriptors.

epoll's working principle is event-driven: an epoll instance is created, the file descriptors to watch are registered with it, and the event types of interest (such as readable or writable) are specified. epoll maintains the state of these file descriptors in the kernel and notifies the application when events occur.

epoll supports two operating modes: level-triggered (LT) and edge-triggered (ET). In level-triggered mode, notifications keep firing as long as the file descriptor has data to read or room to write. In edge-triggered mode, a notification fires only when the state changes from not-ready to ready.

For example, in a highly concurrent network server, using epoll can efficiently handle a large number of network connections, respond to client requests in a timely manner, and improve server performance and responsiveness.

Please tell me about the LCD driver and input subsystem.

LCD driver:

LCD (Liquid Crystal Display) driver is a program used to control the operation of the LCD screen. It is responsible for interacting with the hardware, setting display parameters, such as resolution, color depth, refresh rate, etc., and sending image data to the display screen for display.

When implementing LCD drivers, you need to understand the hardware interface and control protocol of the display, such as common SPI, I2C and other interfaces. It also needs to handle the management of display buffers and interfaces with graphics libraries or applications.

For example, in an embedded system, LCD drivers need to reasonably allocate display buffers according to the system's resource and performance requirements to ensure smooth display of images.

input subsystem:

The input subsystem is a framework in the Linux kernel for processing input devices. It provides a unified interface to facilitate driver developers to realize the driver of various input devices, such as keyboard, mouse, touch screen, etc.

The input subsystem abstracts the hardware operations of the input device into events, such as key presses, mouse movements, touch operations, etc., and passes these events to the upper layer application.

When writing input device drivers, you need to register the device with the input subsystem and implement related event handling functions.

For example, for touch screen drivers, it is necessary to report information such as touch coordinates and operation types through the input subsystem when a touch operation is detected.

How should the driver interrupt function be written?

Writing a driver's interrupt function requires the following steps and precautions:

First, you need to determine the type of interrupt and the trigger condition. An interrupt can be a hardware interrupt, generated by an external device, or a software interrupt, triggered by a program actively.

Inside the function, the following key operations are to be done:

  1. Save the context: save the key register values of the current process, etc., so that the execution environment can be restored correctly after interrupt handling completes.
  2. Handle the interrupt: perform the corresponding data processing, status updates, and other operations based on the source and purpose of the interrupt.
  3. Clear the interrupt flag: make sure the relevant interrupt flag is cleared after handling, so that the next interrupt can be triggered correctly.
  4. Restore the context: restore the previously saved register values, etc., so that the interrupted process can continue to execute.

When writing interrupt functions, you need to pay attention to the following points:

  1. Try to be short and concise: interrupt handling functions should be executed quickly to avoid consuming CPU resources for a long time and affecting the real-time nature of the system.
  2. Avoid blocking operations: Do not perform operations that may cause blocking in interrupt functions, such as waiting for resources, sleeping, etc.
  3. Pay attention to concurrency: consider the situation where multiple interrupts occur simultaneously to ensure data consistency and operational accuracy.

For example, a network driver's interrupt function might simply receive the packet, put it in a buffer, and notify the upper layer, while the actual parsing and processing of the data is done in a non-interrupt context.

Do you understand the underlying implementation of key_report?

key_report is usually used to report key events, and its underlying implementation involves the input subsystem and related hardware interfaces of the operating system.

At the bottom, the hardware generates an electrical signal when the key is pressed or released. This electrical signal is connected to the input interface of the computer system, such as through the GPIO pin or a dedicated keyboard interface.

The operating system monitors the state changes of these interfaces through drivers. The driver converts the hardware's original signal into meaningful key event information, including key code, pressed or released status, and possibly other related attributes.

These key event information will be passed to the input subsystem, which is responsible for further processing and distribution of these events. It may pass key events to the corresponding process for processing based on the current focus window or application.

For example, in an embedded system, when a user presses a key, the signal generated by the hardware circuit is captured by the pin of the microcontroller, the driver reads the state change of the pin and converts it into a specific key code, and then passes it through the input subsystem to the running application, thereby achieving a response to the key operation.

How to write a character device driver?

Writing a character device driver usually involves the following main steps:

  1. Define the device structure

    • Contains the device's private data and related operation function pointers.
  2. Implement device operation functions

    • Such as open, close, read, and write. These functions handle the logic of opening, closing, reading from, and writing to the device.
  3. Register a device

    • Register a device with the kernel so that it can be recognized and managed by the system.
  4. Handle interrupts (if required)

    • For devices that may cause interrupts, an interrupt handling function is implemented.
  5. Implement the file_operations structure

    • Associate the defined operation function with the kernel's interface.
  6. Module loading and unloading functions

    • Do the necessary initialization work when the module is loaded and frees up resources when unloading.

Taking a simple character device driver as an example, suppose we want to implement a counting device. The open function initializes the counter, the read function returns the current count value, and the write function performs the corresponding operation based on the written data (such as resetting the counter). The device is registered in the module's load function; the unload function deregisters it and releases the relevant resources.

For example, in an embedded system, when writing a character device driver for a temperature sensor, the read function reads the current temperature value from the sensor and returns it to user space, while the write function can set some configuration parameters of the sensor.

How to write a key driver and implement its interrupt function?

Writing a key driver and implementing its interrupt function usually requires the following steps:

  1. Hardware connection and initialization

    • Understand how the keys are connected to the microcontroller and configure the relevant GPIO pins to input mode.
  2. Register the interrupt

    • Register the interrupt corresponding to the key to the kernel.
  3. Implement interrupt handling function

    • In the interrupt handling function, key transactions are quickly processed, such as recording key status, setting flags, etc.
  4. Poll or wait flag

    • In the main program, the change in the key state is obtained by polling the flag or waiting for events, and the corresponding processing is performed.

For example, suppose we use an STM32 microcontroller. First configure the GPIO pin as a pull-up input. Then register the interrupt with the corresponding interrupt controller and, in the interrupt handling function, set a global flag indicating that the key was pressed. In the main loop, check this flag and perform the corresponding operation, such as sending the key value to an upper-layer application.

Please tell me the similarities and differences between arrays and linked lists.

Arrays and linked lists are two common data structures, which have the following similarities and differences:

Similarities:

  1. They are all structures used to store a set of data.

Differences:

  1. Memory allocation

    • Array: is allocated continuously in memory, and once created, the size is fixed.
    • Linked list: Memory allocation is discontinuous, and each node is linked through a pointer.
  2. Random access

    • Array: You can directly and quickly access any element through index.
    • Linked list: Random access is not supported. To access specific elements, you need to traverse one by one from the beginning node.
  3. Insert and delete operations

    • Array: Inserting and deleting elements may require moving a large number of elements, which is less efficient.
    • Linked list: Just modify the pointer, the operation is simple and efficient.
  4. Memory utilization

    • Array: There may be memory waste if pre-allocated space is too large.
    • Linked list: Allocate memory as needed, but each node requires additional pointer space.

For example, when frequent random access is required and the data size is fixed, it is more appropriate to use arrays, such as storing a fixed-size matrix. In scenarios where elements are frequently inserted and deleted, such as implementing a dynamic task queue, the linked list is more advantageous.

What do you understand about SPI and interrupts?

SPI (Serial Peripheral Interface, serial peripheral interface):

SPI is a synchronous serial communication interface standard, which is often used for communication between microcontrollers and external devices (such as sensors, EEPROMs, displays, etc.).

SPI has the following characteristics:

  1. Full-duplex communication: can send and receive data at the same time.
  2. High-speed transmission: It can achieve relatively high data transmission rates.
  3. Master-slave mode: There is usually one master to control communication and multiple slave devices respond.

Interrupt:

Interrupts are an important mechanism in computer systems that handle asynchronous events.

When an event occurs (such as external device requests, timer timeouts, etc.), an interrupt will be triggered, causing the CPU to suspend the currently executing task and instead handle the interrupt service program. After the interrupt processing is completed, return to the original task and continue execution.

Advantages of interrupts include:

  1. Real-time response: Can handle emergency events in a timely manner and improve the real-time nature of the system.
  2. Improve efficiency: Avoid the CPU polling all the time to wait for events to occur, saving CPU resources.

For example, in an embedded system, analog data is obtained through the SPI interface to communicate with an external ADC chip. When the ADC conversion is completed, the CPU is notified to read the conversion result by interrupts, thereby achieving efficient data acquisition.

What do you understand about Linux interrupts?

In Linux, interrupts are an important mechanism for handling external events and asynchronous operations.

Linux interrupts are divided into hardware interrupts and software interrupts. Hardware interrupts are generated by external hardware devices, such as keyboard keys, network packet arrivals, etc. Software interrupts are usually triggered actively by the kernel or application.

When an interrupt occurs, the CPU suspends the currently executing process and executes the interrupt handler instead. The interrupt handler needs to perform critical operations quickly and then return as quickly as possible to reduce the impact on system performance.

The Linux kernel uses "bottom half" mechanisms, such as softirqs, tasklets, and work queues, to defer the time-consuming part of interrupt processing, so that a long-running interrupt handler does not hurt the system's responsiveness.

For example, in network communication, a hardware interrupt occurs when the network card receives a data packet, and the interrupt handler quickly places the data packet into the receiving buffer, and then processes the analysis and distribution of the data packets in subsequent processing through soft interrupts.

What do you know about multithreading programming?

Multithreaded programming refers to the programming method of running multiple threads simultaneously in a program to implement concurrent execution of tasks.

Advantages of multithreaded programming include:

  1. Improve resource utilization: You can switch to other threads to execute while waiting for certain operations (such as I/O operations) to make full use of CPU resources.
  2. Enhanced responsiveness: It can handle multiple concurrent requests in a timely manner to improve the response speed of the program.
  3. Simplify program structure: For some tasks that can be processed in parallel, using multithreading can make the program logic clearer.

However, multi-threaded programming also presents some challenges:

  1. Thread synchronization: Multiple threads may access shared resources at the same time, and synchronization mechanisms (such as mutexes, semaphores, etc.) are needed to ensure the consistency and correctness of the data.
  2. Deadlock problem: If the thread acquires resources in the wrong order, it may cause deadlock, making the program unable to continue execution.
  3. Thread safety: It is necessary to ensure that the operation of shared data in a multi-threaded environment is safe.

For example, in a file download program, one thread can be used to download the file and another thread can be used to update the download progress display, thereby improving the efficiency and user experience of the program.

What do you know about memory management?

Memory management is an important part of the operating system that is responsible for allocating, recycling and managing memory resources.

The main functions of memory management include:

  1. Memory allocation: Allocate the required memory space for a process or program.
  2. Memory Recycling: When the process ends or no longer requires certain memory, recycles this memory for use again.
  3. Address translation: converts logical addresses into physical addresses, implements virtual memory mechanisms, so that programs can use a larger address space than actual physical memory.

Common memory management algorithms and strategies are:

  1. First-fit algorithm: search from the start of memory and allocate the first free partition that is large enough.
  2. Best-fit algorithm: allocate the free partition whose size is closest to the request.
  3. Paged memory management: divide memory into fixed-size pages and translate addresses through page tables.

Memory management also needs to consider the problem of memory fragmentation, that is, the memory is divided into many small pieces, resulting in the inability to meet larger memory requests.

For example, in a multitasking operating system, different processes obtain their own independent memory space through the memory management mechanism, without interfering with each other, and effectively utilize limited physical memory resources.

What are zombie processes, orphan processes, and daemons?

Zombie Process:

A zombie process is a child process that has terminated but whose parent has not yet reaped it (obtained its termination status). After the child exits, the kernel retains a small amount of information (such as the process ID, termination status, and resource usage) for the parent to collect via wait() or waitpid(). If the parent never collects this information, the child remains a zombie process.

Orphan process:

An orphan process refers to the parent process ending before the child process. At this time, the child process will be adopted by the init process of the operating system, and the init process will be responsible for subsequent resource recycling and other processing.

Daemon:

A daemon is a process that runs in the background and is not controlled by the terminal. It usually starts automatically when the system starts and runs until the system is shut down. They are independent of the control terminal and provide various system services, such as printing services, log services, etc.

What are the dangers of zombie processes?

The zombie process will cause the following harms:

  1. Resource occupation: although a single zombie process occupies few resources, a large number of accumulated zombies still consumes system resources, such as process table entries.

  2. Process number waste: Each process has a unique process number, and the process number occupied by the zombie process cannot be used by the new process, which may lead to a decrease in available process numbers.

  3. Impact system performance: Too many zombie processes may affect the performance and stability of the system, especially when resources are tight.

For example, in a highly concurrency server environment, if there are a large number of short-life cycle child processes, if the parent process is not handled properly, many zombie processes are easily generated, which affects the overall performance and responsiveness of the server.

What are the communication methods between threads?

There are mainly the following communication methods between threads:

  1. Shared variables

    • Multiple threads can access and modify the same shared variable to enable communication. However, synchronization mechanisms (such as mutex locks, condition variables, etc.) need to be used to ensure thread safety.
  2. Message Queue

    • A message queue can be created where threads communicate by sending messages to and receiving messages from the queue.
  3. Pipes

    • Similar to pipe communication between processes, pipes can also be used to pass data between threads.
  4. Conditional variables

    • Used in conjunction with mutex locks for waiting and notification between threads.
  5. Semaphore

    • Used to control access to shared resources by multiple threads.

For example, in a multithreaded producer-consumer model, the producer thread and the consumer thread communicate through shared buffers (shared variables) and conditional variables. When the producer produces data, if the buffer is full, he will wait for the condition variable; when the consumer consumes data, if the buffer is empty, he will also wait for the condition variable.

What is a friend? How to use it in C++?

Friends are a special mechanism in C++ that allows a class to declare other functions or classes as friends.

A friend function or friend class can access private and protected members of the class as if they were public members.

The main steps to use friends are as follows:

Inside the class definition, use the friend keyword to declare the friend.

For example, suppose there is a class ClassA and we want a function func to be able to access its private members; it can be declared like this:

class ClassA {
private:
    int privateMember;
public:
    friend void func(ClassA& obj);
};

void func(ClassA& obj) {
    // the private members of obj can be accessed here
}

Friends can increase the flexibility of a program, but they also weaken the encapsulation of the class and should be used with caution.

Can the constructor and destructor of the base class be inherited by the derived class?

The constructor and destructor of a base class cannot be inherited by a derived class.

When creating a derived class object, the base class constructor will first be called to initialize members inherited from the base class, and then the derived class's own constructor will be called to complete the initialization work unique to the derived class.

When the object is destroyed, the destructors are called in the opposite order: the derived class's destructor is called first, then the base class's destructor.

For example, with a base class Base and a derived class Derived, creating a Derived object first calls Base's constructor and then Derived's constructor; when the object is destroyed, Derived's destructor is called first, then Base's.

Which functions cannot be declared as virtual functions?

Functions of the following types cannot usually be declared as virtual functions:

  1. Constructor

    • The constructor is used for the initialization of the object, and the virtual mechanism has not yet taken effect until the object is fully constructed.
  2. Static member functions

    • Static member functions are not associated with specific object instances, but are related to the entire class, and do not conform to the characteristics of virtual functions based on object polymorphism.
  3. Inline functions

    • Inline functions are usually expanded at compile time, while virtual calls are resolved at run time; declaring a function both inline and virtual means the inline expansion generally cannot be applied to polymorphic calls.

For example, a static member function of a class is used to calculate the total number of all objects in a class. It does not need to be based on object polymorphism, so it cannot be declared as a virtual function.

What is the underlying implementation of vector?

vector is the dynamic array container in the C++ standard library.

Its underlying implementation usually stores elements through continuous memory space.

When a vector needs to grow, it allocates a larger contiguous block of memory and copies (or moves) the existing elements into the new space.

To improve efficiency, a vector usually reserves extra capacity to reduce the performance overhead of frequent reallocation.

For example, a newly created vector may allocate space for only a few elements. When the number of elements exceeds the current capacity, the capacity is expanded by a growth factor, for example doubled.

What is a wild pointer? How did it happen? How to avoid it?

A wild pointer is a pointer to an invalid memory address.

There are usually several reasons for the generation of wild pointers:

  1. Pointer not initialized

    • If a pointer is used without initialization, it may point to an arbitrary memory address.
  2. The memory pointed to by the pointer has been released

    • After the memory pointed to by the pointer is released with delete or free, continuing to use the pointer makes it a wild (dangling) pointer.
  3. Pointers exceed their scope

    • For example, if a pointer to a local variable defined inside a function is kept after the function returns, the memory it points to no longer exists, but the pointer itself still holds the old address.

Methods to avoid wild pointers include:

  1. Initialize pointer

    • Initialize the pointer to NULL (or nullptr) or to a valid memory address.
  2. After freeing memory, set the pointer to NULL

    • Then subsequent code can check whether the pointer is NULL before using it, avoiding errors.
  3. Pay attention to the scope of the pointer

    • Do not continue to use this pointer after the scope is over.

For example, after freeing dynamically allocated memory through a pointer, set the pointer to NULL, and check for NULL before each subsequent use.

What is the role of stack in C?

In C language, the stack plays the following important roles:

  1. Store local variables

    • Non-static variables defined inside the function will allocate space on the stack and will be automatically released after the function is executed, saving memory management overhead.
  2. Save function call information

    • Including the return address of the function, parameters, the caller's stack frame pointer, etc., enables the function to correctly return and restore the execution context.
  3. Temporary data storage

    • For example, intermediate results, temporary variables, etc. generated during function execution.

For example, when a function is called, its parameters will be pushed into the stack, and local variables defined inside the function also allocate space on the stack. After the function is executed, these stack spaces will be automatically released, avoiding the complexity and errors of manually managing memory.

How is C++ memory management done?

C++ memory management mainly uses the following methods:

  1. Automatic storage

    • Local variables are usually allocated on the stack and are automatically released at the end of the function.
  2. Dynamic storage

    • Use the new operator to allocate memory on the heap and delete to release it.
  3. Static storage

    • Global variables and static variables exist throughout the life of the program and are stored in static storage areas.

When using dynamic memory allocation, programmers need to be responsible for properly managing memory and avoiding memory leaks and illegal access.

For example, create a dynamically allocated object with MyClass* ptr = new MyClass(); after use, the memory must be released with delete ptr;.

What is a memory leak? How to judge and reduce memory leaks?

Memory leak refers to the program that cannot be recycled and reused due to dynamically allocating memory but not being properly released after it is no longer used.

How to judge memory leaks:

  1. Use memory detection tools such as Valgrind, etc.
  2. Monitor the trend of memory usage. If the memory continues to grow and is not released during the program operation, there may be a leak.

How to reduce memory leaks:

  1. Develop good programming habits: memory allocated with new must be released with delete, and memory allocated with malloc must be released with free.
  2. Use smart pointers and other mechanisms to automatically manage memory.
  3. Establish complete memory management policies and specifications in complex programs.

For example, in a long-running server program, if there is a memory leak, it will cause the server performance to gradually decline or even crash. By regularly using memory detection tools for inspection, leakage problems can be discovered and fixed in a timely manner.

How does the byte alignment problem affect the program?

Byte alignment is to improve memory access efficiency.

The impact of byte alignment problems on the program mainly includes:

  1. Performance impact

    • Unaligned access may lead to multiple memory reads or writes, reducing the program's running efficiency.
  2. Portability issues

    • Different hardware platforms may have different alignment requirements, and if the program does not take into account, errors may occur on some platforms.
  3. Structural space waste

    • This may result in filling bytes inside the structure, increasing the consumption of storage space.

For example, in real-time systems with strict performance requirements, misaligned memory access can increase the latency of critical tasks.

What is the order of stacking of C function parameters?

In C language, the stacking order of function parameters is usually from right to left.

That is, first push the rightmost parameter on the stack, and then push it to the left in turn.

This push order mainly supports variadic functions: the leftmost parameters sit at a known offset relative to the stack pointer, so they can be accessed correctly even when the number and types of the remaining parameters are unknown.

For example, for the function func(int a, int b), b is pushed first, then a.

How to handle return values ​​in C++?

In C++, the way the return value is processed depends on the type and size of the return value.

For small objects, the value is usually returned in a register. If the return value is large, the caller typically allocates space (for example on its stack) and the called function copies or constructs the return value into that space.

For reference return, you can directly return the object's reference to avoid copying.

For example, returning a simple integer can be passed through a register, while returning a large object may be handled by allocating space on the caller's stack.

What is the maximum space value of the stack? Can malloc (1.2G) be found in a computer with 1G memory? Why?

The maximum space value of the stack depends on the operating system and compiler settings, usually between several megabytes and dozens of megabytes.

On a computer with 1 GB of physical memory and no swap, malloc(1.2G) will generally fail: malloc allocates from the heap, whose size is limited by physical memory plus virtual memory (swap).

Even with 1 GB of physical memory, the system must reserve memory for itself and other processes and cannot provide 1.2 GB of contiguous free space. Note, however, that malloc reserves virtual address space: on a system with enough swap, or with memory overcommit enabled, the call itself may still succeed.

For example, in a resource-intensive system, excessive memory allocation requests can cause memory allocation to fail or even cause system crashes.

Under what circumstances will functions such as strcat, strncat, strcmp, and strcpy cause a buffer overflow? How can this be improved?

strcat and strcpy cause a buffer overflow when the length of the source string exceeds the remaining space in the destination buffer.

strncat can also overflow if the specified length is too large for the destination.

Improvement method:

  1. When using strncpy and strncat, make sure the specified length fits the destination buffer, and add the null terminator manually where necessary.
  2. Implement safer string operation functions by yourself, check the string length before operation.

For example, in a program that processes user input, using strcpy to copy the user's string directly into a fixed-length buffer can overflow the buffer when the input is too long.

What are the differences and usage scenarios of memory application functions such as malloc, calloc, realloc, etc.?

The malloc function allocates a memory block of the specified size, without initializing it.

The calloc function allocates memory for a specified number of elements of a specified size and initializes it to zero.

The realloc function resizes an already allocated memory block.

Use scenarios:

malloc is suitable when only a block of a given size is needed and the initial contents do not matter.

calloc is suitable when the allocated memory should be zero-initialized, such as when creating an array.

realloc is suitable when an already allocated block needs to be resized.

For example, when creating a dynamic array, you can start with calloc to get zero-initialized storage and later use realloc when it needs to grow.