Dataset preview (1.96k rows). Column schema:

| Column | Type | Stats |
|---|---|---|
| ID | int64 | |
| Split | string | 1 class |
| Domain | string | 4 classes |
| SubDomain | string | 24 classes |
| Format | string | 1 class |
| Tag | string | 2 classes |
| Language | string | 1 class |
| Question | string | 15–717 chars |
| A | string | 1–292 chars |
| B | string | 1–232 chars |
| C | string | 1–217 chars |
| D | string | 1–192 chars |
| Answer | string | 4 classes |
| Explanation | string | 21–1.43k chars, nullable (⌀) |
1,720 |
Test
|
Operating System
|
Overview
|
Multiple-choice
|
Reasoning
|
English
|
The following statement about operating systems is incorrect ().
|
The operating system is a program that manages resources.
|
The operating system is the program that manages the execution of user programs.
|
The operating system is a program that can enhance the efficiency of system resources.
|
The operating system is a program used for programming.
|
D
|
An operating system is a program used to manage resources, and user programs are also executed under the management of the operating system. Compared to a bare-metal machine, the resource utilization is greatly improved with an operating system installed. The operating system cannot be used directly for programming, so option D is incorrect.
|
1,721 |
Test
|
Operating System
|
Overview
|
Multiple-choice
|
Reasoning
|
English
|
It is best to use a real-time operating system platform for the following () application work: Ⅰ. Airline reservation Ⅱ. Office automation Ⅲ. Machine tool control Ⅳ. Stock trading system
|
I, II, and III
|
I, III, and IV
|
I, V, and IV
|
I, III, and IV
|
D
|
Real-time operating systems are primarily used in situations where immediate response to external inputs is required, without delays, as delays can lead to serious consequences. Among the options in this question, an airline reservation system needs real-time processing of ticketing because the number of tickets in the database directly reflects the available booking seats on flights. Machine tool control must also be real-time, otherwise errors will occur. Stock market quotes are constantly changing, and if trading is not real-time, time lags can occur, leading to deviations in transactions.
|
1,722 |
Test
|
Operating System
|
Overview
|
Multiple-choice
|
Reasoning
|
English
|
Among the following statements about operating systems, the incorrect one is (). Ⅰ. Running a program on a computer under the management of a general-purpose operating system requires reserving running time with the operating system. Ⅱ. Running a program on a computer under the management of a general-purpose operating system requires determining a starting address and executing from that address. Ⅲ. The operating system needs to provide compilers for high-level programming languages. Ⅳ. Managing computer system resources is the main concern of the operating system.
|
I, III
|
II, III
|
I, II, III, IV
|
All of the above answers are correct.
|
A
|
I: General-purpose operating systems use a round-robin scheduling algorithm, and users do not need to reserve running time in advance, so option I is incorrect; II: When executing a program, the operating system must start from the beginning address, so option II is correct; III: The compiler is a higher-level software than the operating system and is not a function that the operating system needs to provide, so option III is incorrect; IV: The operating system is the manager of computer resources, and managing computer system resources is the main concern of the operating system, so option IV is correct. Upon comprehensive analysis, options I and III are incorrect, therefore the answer is A.
|
1,723 |
Test
|
Operating System
|
Overview
|
Multiple-choice
|
Reasoning
|
English
|
Among the following statements, the correct one is (). Ⅰ. The main disadvantage of batch processing is the need for a large amount of memory Ⅱ. When a computer provides kernel mode and user mode, input/output instructions must be executed in kernel mode Ⅲ. The primary reason for adopting multiprogramming technology in operating systems is to improve the reliability of the CPU and external devices Ⅳ. In operating systems, channel technology is a type of hardware technology
|
I, II
|
I, III
|
II, IV
|
II, III, IV
|
C
|
Ⅰ Incorrect: the main disadvantage of batch processing is its lack of interactivity, a common examination point that readers should be alert to. Ⅱ Correct: input/output instructions are privileged instructions and must be executed in kernel mode. Ⅲ Incorrect: multiprogramming was introduced to improve system utilization and throughput, not the reliability of the CPU and external devices. Ⅳ Correct: an I/O channel is in effect a special-purpose processor; it can execute I/O instructions and controls I/O operations by executing channel programs. In summary, Ⅱ and Ⅳ are correct.
|
1,724 |
Test
|
Operating System
|
Overview
|
Multiple-choice
|
Reasoning
|
English
|
Among the following statements about system calls, the correct one(s) is (are) (). Ⅰ. When designing user programs, system call commands are used, which after compilation, form several parameters and a trap instruction. Ⅱ. When designing user programs, system call commands are used, which after compilation, form several parameters and an interrupt mask instruction. Ⅲ. The functionality of system calls is an interface provided by the operating system to user programs. Ⅳ. Users and their applications and application systems utilize system resources to complete their operations through the support and services provided by system calls.
|
I, III
|
II, IV
|
I, III, IV
|
II, III, IV
|
C
|
Ⅰ Correct: System calls require triggering a trap instruction, such as int 0x80 or sysenter on x86-based Linux systems. Ⅱ is a distractor; program design cannot form instructions to mask interrupts. Ⅲ Correct: The concept of system calls. Ⅳ Correct: The operating system is an interface layer that provides services to the upper layer and abstraction to the lower layer. It offers the use of system resources to its upper-layer users, applications, and application systems through system calls.
|
1,725 |
Test
|
Operating System
|
Overview
|
Multiple-choice
|
Reasoning
|
English
|
Among the following options, the instruction that must be executed in kernel mode is ().
|
Fetch data from memory
|
Load the computation result into memory.
|
Arithmetic Operations
|
Input/Output
|
D
|
Input/output instructions involve interrupt operations, and interrupt handling is the responsibility of the system kernel, operating in kernel mode. Options A, B, and C can all be implemented using assembly language programming, hence they can be executed in user mode. Of course, readers can also use the previously mentioned "cup" example to think about how to rule out options A, B, and C. When the operating system manages memory, it deals with issues such as where data in memory is placed, where data can be placed, where data cannot be placed (memory protection), and where there is free space. However, what the data in memory is, and how to read and write it, are not concerns of kernel mode. It's like the operating system manages where cups are placed, which cups' water can be drunk, and which cups' water cannot be drunk, but whether the cup contains water or a beverage, and whether you pick up the cup to drink or insert a straw to sip, are not concerns of the operating system. The "cup" example can help us accurately understand the tasks of the operating system, and many issues in subsequent chapters will become very clear when compared using this example.
|
1,726 |
Test
|
Operating System
|
Overview
|
Multiple-choice
|
Reasoning
|
English
|
When the CPU is in kernel mode, the instructions it can execute are ().
|
Only privileged instructions
|
Only non-privileged instructions
|
Only the trap ("supervisor call") instruction
|
All instructions except the trap ("supervisor call") instruction
|
D
|
The trap (supervisor call) instruction is used in user mode as the means by which a user program voluntarily enters kernel mode; privileged instructions cannot be executed in user mode. In kernel mode, the CPU can execute any instruction in the instruction set.
|
1,727 |
Test
|
Operating System
|
Overview
|
Multiple-choice
|
Reasoning
|
English
|
Among the following statements about hierarchical structured operating systems, () is incorrect.
|
Dependencies or invocations between layers must be unidirectional only.
|
Easily implement the addition or replacement of a layer in the system without affecting other layers.
|
Dependencies between layers are highly flexible.
|
The system efficiency is low.
|
C
|
Unidirectional dependency is a characteristic of layered operating systems. In a layered OS, adding or replacing a module or an entire layer does not affect other layers as long as the interfaces between the corresponding layers remain unchanged, making it easy to expand and maintain. Once the hierarchy is defined, the dependencies between the layers are also set, which often results in a lack of flexibility, hence option C is incorrect. Executing a function typically requires traversing through multiple layers from top to bottom, which adds extra overhead and leads to reduced system efficiency.
|
1,728 |
Test
|
Operating System
|
Overview
|
Multiple-choice
|
Reasoning
|
English
|
Compared to traditional operating system architectures, designing and implementing an operating system with a microkernel structure has many advantages. The following () are characteristics of the microkernel architecture. Ⅰ. Makes the system more efficient Ⅱ. Adding system services without modifying the kernel Ⅲ. Microkernel architecture lacks a single stable kernel Ⅳ. Makes the system more reliable
|
I, III, IV
|
I, II, IV
|
II, IV
|
I, IV
|
C
|
Microkernel architecture requires frequent switching between kernel mode and user mode, leading to relatively high execution overhead for the operating system. The operating system code moved out of the kernel is divided into several service programs according to the principle of layering; their execution is mutually independent, and they interact through messages passed via the microkernel, which affects the system's efficiency. Therefore Ⅰ is not an advantage. Since the number of services provided by the kernel is reduced, and generally speaking the fewer services the kernel provides the more stable it is, Ⅲ is incorrect. Ⅱ and Ⅳ, however, are indeed advantages of the microkernel architecture.
|
1,729 |
Test
|
Operating System
|
Overview
|
Multiple-choice
|
Reasoning
|
English
|
Among the following statements about operating system structures, the correct one is (). Ⅰ. The widely used Windows XP operating system adopts a layered OS structure. Ⅱ. The basic principle of modular OS structure design is that each layer only uses the functions and services provided by the layer below it, which makes the system's debugging and verification easier. Ⅲ. Because the microkernel structure can effectively support multiprocessor operation, it is very suitable for distributed system environments. Ⅳ. Adopting a microkernel structure to design and implement an operating system has many advantages, such as not needing to modify the kernel when adding system services, making the system more efficient.
|
Ⅰ and Ⅱ
|
Ⅰ and Ⅲ
|
III
|
III and IV
|
C
|
Windows is a monolithic-kernel operating system, so Ⅰ is incorrect. Ⅱ describes the design principle of a layered architecture, not a modular one, so Ⅱ is incorrect. In a microkernel architecture, communication between clients and servers, as well as between servers, is achieved through a message-passing mechanism, which enables microkernel systems to effectively support distributed systems, so Ⅲ is correct. Adding system services without modifying the kernel does enhance the scalability and flexibility of the microkernel architecture, but the main issue with the microkernel architecture is performance, so "making the system more efficient" is clearly incorrect and Ⅳ is wrong.
|
1,730 |
Test
|
Operating System
|
Overview
|
Multiple-choice
|
Reasoning
|
English
|
The incorrect description of the computer operating system boot process is ().
|
The computer's boot program resides in the ROM and is automatically executed upon startup.
|
The bootloader first performs a self-test of critical components and identifies connected peripherals.
|
The bootloader will load the entire operating system stored on the hard drive into memory.
|
If a dual-boot system is installed on a computer, the bootloader will interact with the user to load the relevant system.
|
C
|
The bootloader loads only the operating system kernel into memory; the kernel resides permanently in memory, and the other parts of the operating system are loaded on demand. Option C, which claims the entire operating system is loaded into memory, is therefore incorrect.
|
1,731 |
Test
|
Operating System
|
Overview
|
Multiple-choice
|
Reasoning
|
English
|
The following statement about VMware Workstation virtual machines is incorrect ().
|
Real hardware does not directly execute sensitive instructions in a virtual machine.
|
Only one type of operating system can be installed in a virtual machine.
|
A virtual machine is an application that runs within a computer.
|
The virtual machine files are encapsulated in a folder and stored in the data storage.
|
B
|
VMware Workstation virtual machines belong to the Type 2 hypervisor category. If the real hardware were to directly execute sensitive instructions issued inside the virtual machine, illegal instructions could crash the host operating system, so this is not allowed: the Type 2 hypervisor intercepts such instructions and simulates the real hardware environment instead. A virtual machine appears no different from a real physical computer, so it is certainly possible to install many kinds of operating systems in it, which makes B incorrect. VMware Workstation is a program installed on a computer; when a virtual machine is created, it creates a set of files for it, and these virtual machine files are all stored on the host's disk.
|
1,732 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
A process image is ().
|
A program executed by a coprocessor
|
An independent program + dataset
|
The combination of PCB structure with programs and data.
|
An independent program
|
C
| null |
1,733 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
After the completion of an I/O operation requested by a system process, the process state will transition from ().
|
Transition from Running State to Ready State
|
Transition from running state to blocked state.
|
Transition from ready state to running state
|
Transition from blocked state to ready state
|
D
| null |
1,734 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
System threads in a system dynamic link library (DLL), when called by different processes, are () threads.
|
Different
|
Identical
|
May differ or may be the same.
|
cannot be invoked
|
B
| null |
1,735 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
A () priority is determined when the process is created and does not change during the entire runtime.
|
First Come, First Served
|
Dynamic
|
Shortest Job (Process) First Algorithm
|
Static
|
D
| null |
1,736 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
In process scheduling algorithms, the one that is disadvantageous to short processes is ().
|
Shortest Job First Scheduling Algorithm
|
First-Come, First-Served (FCFS) scheduling algorithm
|
High Response Ratio Priority Scheduling Algorithm
|
Multilevel Feedback Queue Scheduling Algorithm
|
B
| null |
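A quick way to see why FCFS is hard on short processes is to compare average waiting times. Below is a minimal sketch with an invented job mix (burst times 100, 2, 3, all arriving at time 0; not taken from the question itself):

```python
# Hypothetical job mix: one long job arrives first, two short jobs follow.
# Burst times are in arbitrary units; all jobs arrive at time 0.
bursts = [100, 2, 3]  # FCFS runs them in arrival order

def avg_waiting_time(order):
    """Average waiting time when jobs run back-to-back in the given order."""
    waiting, elapsed = [], 0
    for burst in order:
        waiting.append(elapsed)   # a job waits for everything scheduled before it
        elapsed += burst
    return sum(waiting) / len(waiting)

fcfs = avg_waiting_time(bursts)          # arrival order: short jobs stuck behind the long one
sjf = avg_waiting_time(sorted(bursts))   # shortest job first, for contrast
```

Here FCFS yields an average wait of 202/3 time units while SJF yields 7/3, because under FCFS both short jobs sit behind the 100-unit job.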
1,737 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
The functionality that can be implemented without semaphores is ().
|
Process Synchronization
|
Process Mutual Exclusion
|
Precedence relationship of execution
|
Concurrent execution of processes
|
D
| null |
1,738 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
The resource preemption method can be used to resolve deadlocks, and the () method can also be used to resolve deadlocks.
|
Execute parallel operations
|
Termination of Process
|
Refuse to allocate new resources
|
Modify semaphore
|
B
| null |
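To see why terminating a process resolves a deadlock, one can model the deadlock as a cycle in a wait-for graph; removing one process breaks the cycle. A minimal sketch with invented process names:

```python
# Sketch: a deadlock is a cycle in the wait-for graph; terminating one
# process on the cycle (the resolution method named in the answer) breaks it.

def has_cycle(graph):
    """DFS cycle check over a wait-for graph {process: [processes it waits on]}."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {p: WHITE for p in graph}

    def visit(p):
        color[p] = GRAY
        for q in graph.get(p, []):
            if color.get(q) == GRAY or (color.get(q) == WHITE and visit(q)):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in list(graph))

wait_for = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}   # P1 -> P2 -> P3 -> P1: deadlock
deadlocked = has_cycle(wait_for)

# Terminate P3: remove it and every edge pointing at it.
wait_for.pop("P3")
wait_for = {p: [q for q in qs if q != "P3"] for p, qs in wait_for.items()}
resolved = not has_cycle(wait_for)
```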
1,739 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
One of the prerequisites for introducing multiprogramming technology is that the system has ().
|
Multiple CPUs
|
Multiple terminals
|
Interrupt function
|
Time-sharing function
|
C
| null |
1,740 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
Inter-process data exchange cannot be conducted through ().
|
Shared file
|
Message Passing
|
Accessing the process address space
|
Access shared storage area
|
C
| null |
1,741 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
The fundamental difference between a process and a program is ().
|
Static and dynamic characteristics
|
Is it swapped into memory?
|
Does it have three states: ready, running, and waiting?
|
Does it occupy the processor?
|
A
| null |
1,742 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
The operating system controls and manages concurrently executing processes based on ().
|
Basic States of a Process
|
Process Control Block (PCB)
|
Multiprogramming
|
Process Priority
|
B
| null |
1,743 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
The basic state that a process can reach from both of the other two basic states must be ().
|
Execution state
|
Blocked state
|
Ready state
|
Completion status
|
C
| null |
1,744 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
The communication mechanism for inter-process information exchange using mailboxes requires two communication primitives, which are ().
|
Send primitive and execute primitive
|
Ready primitive and execute primitive
|
Send primitive and receive primitive
|
Ready primitive and Receive primitive
|
C
| null |
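The two mailbox primitives can be sketched with a FIFO queue standing in for the mailbox; the `send`/`receive` wrappers and the message texts are illustrative, not a real OS API:

```python
# A mailbox reduces to a message queue plus exactly two primitives:
# send (deposit a message) and receive (remove the oldest message).
import queue

mailbox = queue.Queue()

def send(mbox, message):
    mbox.put(message)        # deposit the message in the mailbox

def receive(mbox):
    return mbox.get()        # remove and return the oldest message

send(mailbox, "hello from P1")
send(mailbox, "second message")
first = receive(mailbox)     # messages come out in FIFO order
```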
1,745 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
The same program, after being created multiple times and run on different datasets, forms () processes.
|
Different
|
Identical
|
Synchronous
|
Mutually exclusive
|
A
| null |
1,746 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
After a process is created, it enters a queue, which is called the ().
|
Blocking Queue
|
Suspended queue
|
Ready Queue
|
Run Queue
|
C
| null |
1,747 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
Process management and control are performed using ().
|
Instruction
|
Primitives
|
Semaphore
|
Mailbox
|
B
| null |
1,748 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
During process transition, the following () transition is not possible.
|
Ready State → Running State
|
Running state → Ready state
|
Running state → Blocked state
|
Blocked state → Running state
|
D
| null |
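The four legal transitions of the three-state model can be encoded as a small table; `blocked → running` is absent because a woken process must first become ready and be scheduled. A minimal sketch:

```python
# The three-state model allows only these four transitions.
ALLOWED = {
    ("ready", "running"),    # dispatched by the scheduler
    ("running", "ready"),    # time slice expires or the process is preempted
    ("running", "blocked"),  # waits for an event (e.g. I/O completion)
    ("blocked", "ready"),    # the awaited event occurs
}

def can_transition(src, dst):
    return (src, dst) in ALLOWED

ok = can_transition("blocked", "ready")
impossible = can_transition("blocked", "running")   # must pass through ready first
```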
1,749 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
() will inevitably cause a process switch.
|
After a process is created, it enters the ready state.
|
A process transitions from the running state to the ready state.
|
A process transitions from the blocked state to the ready state.
|
None of the above answers are correct.
|
B
| null |
1,750 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
When employing the round-robin scheduling algorithm to allocate CPU time, once the process in the running state exhausts a time slice, its state is the () state.
|
Blocking
|
Execution
|
Ready
|
Extinction
|
C
| null |
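Round-robin behavior, in which a process whose time slice expires returns to the tail of the ready queue, can be sketched in a few lines (the process names and burst times are invented):

```python
# Minimal round-robin sketch: a process that exhausts its time slice
# goes back to the tail of the ready queue, i.e. re-enters the ready state.
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: total CPU time needed}. Returns the completion order."""
    ready = deque(bursts)             # all processes start in the ready queue
    remaining = dict(bursts)
    finished = []
    while ready:
        name = ready.popleft()        # ready -> running
        remaining[name] -= quantum
        if remaining[name] > 0:
            ready.append(name)        # slice used up: running -> ready
        else:
            finished.append(name)     # done: leaves the system
    return finished

order = round_robin({"P1": 3, "P2": 1, "P3": 2}, quantum=1)
```

With these numbers the shortest process finishes first and the longest last, since each pass through the queue charges every surviving process one quantum.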
1,751 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
When two travel agencies, A and B, book plane tickets with the same airline for their passengers, the mutually exclusive resource is ().
|
Travel Agency
|
Airline
|
Airplane Ticket
|
Travel Agency and Airline
|
C
| null |
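That the ticket pool is the critical resource can be illustrated with two threads standing in for the agencies, serializing their updates with a lock (the agency names and ticket counts are invented):

```python
# The shared ticket pool is the critical resource: both agencies must
# update it under mutual exclusion or seats could be oversold.
import threading

tickets = {"count": 10}   # the shared critical resource
lock = threading.Lock()
sold = []

def agency(name, attempts):
    for _ in range(attempts):
        with lock:                    # mutual exclusion on the ticket pool
            if tickets["count"] > 0:
                tickets["count"] -= 1
                sold.append(name)     # record a successful sale

a = threading.Thread(target=agency, args=("A", 6))
b = threading.Thread(target=agency, args=("B", 6))
a.start(); b.start(); a.join(); b.join()
```

Twelve attempts are made against ten tickets; with the lock in place exactly ten sales succeed and the count never goes negative.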
1,752 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
The critical section refers to the () segment in a concurrent process that accesses shared variables.
|
Management Information
|
Information Storage
|
Data
|
Program code
|
D
| null |
1,753 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
The following () are not considered critical resources.
|
Printer
|
Non-shared data
|
Shared variable
|
Shared Buffer
|
B
| null |
1,754 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
The following () are considered critical resources.
|
Magnetic disk storage medium
|
Common Queue
|
Private Data
|
Reentrant program code
|
B
| null |
1,755 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
A primitive is ().
|
Process running in user mode
|
The kernel of an operating system
|
Interruptible instruction sequence
|
Indivisible instruction sequence
|
D
| null |
1,756 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
When P and V operations are used to implement process synchronization, the initial value of the semaphore should be set to ().
|
-1
|
0
|
1
|
Determined by the user
|
D
| null |
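That the initial value is chosen by the designer for the particular synchronization problem can be seen in a simple ordering case: forcing B to run only after A requires the semaphore to start at 0, not a fixed 1 or -1. A sketch using Python's `threading.Semaphore`:

```python
# Synchronization via P/V: thread B must run only after thread A,
# so the semaphore's initial value is chosen as 0 by the designer.
import threading

sem = threading.Semaphore(0)   # initial value depends on the problem
trace = []

def first():
    trace.append("A")
    sem.release()              # V operation: signal that A is done

def second():
    sem.acquire()              # P operation: block until A has signalled
    trace.append("B")

b = threading.Thread(target=second)
a = threading.Thread(target=first)
b.start(); a.start(); a.join(); b.join()
```

Even though B is started first, it blocks on the P operation until A's V operation, so the trace is always A then B.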
1,757 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
Code that can be shared by multiple processes at any given time must be reentrant code, which refers to ().
|
Sequential code
|
Machine language code
|
Code that must not be modified
|
Non-transfer instruction code
|
C
| null |
1,758 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
When a process wakes up another process by performing the V(mutex) operation on the mutual exclusion semaphore mutex, the value of mutex after the V operation is ().
|
greater than 0
|
Less than 0
|
greater than or equal to 0
|
Less than or equal to 0
|
D
| null |
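The claimed bound can be checked with pure bookkeeping: in the integer-semaphore convention, a negative value counts the blocked processes, so if a V operation wakes someone, the value before V was at most -1 and the value after V is at most 0. A minimal simulation with invented process names (arithmetic only, no real threads):

```python
# Integer-semaphore bookkeeping: a negative value records how many
# processes are blocked on the semaphore.
class Sem:
    def __init__(self, value):
        self.value = value
        self.blocked = []               # FIFO queue of blocked process names

    def P(self, proc):
        self.value -= 1
        if self.value < 0:
            self.blocked.append(proc)   # the caller blocks

    def V(self):
        self.value += 1
        if self.value <= 0:             # someone was waiting
            return self.blocked.pop(0)  # wake the first blocked process
        return None                     # nobody to wake

mutex = Sem(1)
mutex.P("P1")        # value 0: P1 enters the critical section
mutex.P("P2")        # value -1: P2 blocks
woken = mutex.V()    # P1 leaves; the V operation wakes P2
value_after_V = mutex.value
```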
1,759 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
The following () option is not a component of a monitor (in concurrent programming).
|
Shared data structures confined to a scope
|
A set of procedures that operate on the data structures within a monitor.
|
Description of the external process calling the internal data structures of the monitor.
|
Statements that initialize data structures confined to a monitor.
|
C
| null |
1,760 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
The relationship between concurrent processes is ().
|
Unrelated
|
Related
|
Potentially related
|
It may be irrelevant, or there may be interactions.
|
D
| null |
1,761 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Knowledge
|
English
|
The possible causes of system deadlock are ().
|
Improper allocation of exclusive resources
|
Insufficient system resources
|
The process is running too fast.
|
Too many CPU cores
|
A
| null |
1,762 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
The following statement about threads is correct ().
|
A thread contains CPU context and can execute programs independently.
|
Each thread has its own independent address space.
|
A process can only contain one thread.
|
Inter-thread communication must use system call functions.
|
A
|
A thread is the basic unit of CPU scheduling and can certainly execute programs independently, A correct; threads do not have their own independent address space, they share the address space of the process they belong to, B incorrect; a process can create multiple threads, C incorrect; communication between threads within a process can be done directly through the shared memory space, D incorrect.
|
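That threads share their process's address space can be demonstrated directly: several threads update one ordinary shared object with no inter-process machinery at all (the lock below only serializes the increments; the counts are invented):

```python
# Threads of one process share its address space, so they communicate
# through a plain shared object rather than through system-call-based IPC.
import threading

shared = {"counter": 0}    # lives in the process's single address space
lock = threading.Lock()

def worker():
    for _ in range(1000):
        with lock:                     # serialize the read-modify-write
            shared["counter"] += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

All four threads observe and modify the same dictionary, which is exactly what a second process could not do without shared memory or message passing.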
1,763 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
The correct statement below is ().
|
The process obtains the CPU for execution through scheduling.
|
Priority is an important basis for process scheduling and cannot be changed once it is determined.
|
In a single-processor system, at any given moment, only one process is in the running state.
|
When a process requests the CPU and is not satisfied, its state becomes blocked.
|
A
|
Option B is incorrect because it divides priority into static and dynamic types, where dynamic priority is adjusted according to the running conditions. Option C is incorrect because when the system encounters a deadlock, it is possible that all processes are in a blocked state, or there are no process tasks, leaving the CPU idle. Option D is incorrect because when a process's request for the processor is not satisfied, it is in the ready state, waiting for the processor's scheduling.
|
1,764 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
Concurrent processes losing closure refers to ().
|
Multiple relatively independent processes advance at their own pace.
|
The execution result of concurrent processes is independent of their speed.
|
Concurrent processes may experience errors at different moments in time.
|
Concurrent processes share variables, and their execution results depend on the speed.
|
D
|
Program closure refers to the property that the result of a process execution depends solely on the process itself and is not affected by external factors. In other words, whether the process executes continuously or intermittently, the speed of execution will not alter the outcome. Without closure, the results of execution at different speeds would vary.
|
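Loss of closure can be shown deterministically by replaying two interleavings of the same pair of updates to a shared variable; the final value depends on which "process" runs first (the updates and starting value are invented):

```python
# Two "processes" both update a shared variable x. Replaying the two
# possible orders shows the result depends on their relative speed,
# i.e. the computation has lost closure.
def run(interleaving, start=10):
    x = start                       # the shared variable
    for step in interleaving:
        x = step(x)
    return x

double = lambda x: x * 2            # process P1's update
add_five = lambda x: x + 5          # process P2's update

fast_p1 = run([double, add_five])   # P1 finishes first: (10*2)+5
fast_p2 = run([add_five, double])   # P2 finishes first: (10+5)*2
```

The two orders give 25 and 30 respectively: same program, same data, different speeds, different results.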
1,765 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
When a process is executed on the processor, ().
|
Processes are independent and possess encapsulation characteristics.
|
Processes interact with each other, exhibiting interdependence and mutual restraint, and possess concurrency.
|
Possesses concurrency, that is, the characteristic of being executed simultaneously.
|
Processes may be unrelated, but they can also be interactive.
|
D
|
A and B are both too absolute; processes may be related to one another or completely independent. C's mistake lies in "simultaneously": on a single processor, concurrent processes alternate on the CPU rather than executing at the same instant.
|
1,766 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
In a many-to-one threading model, when a thread in a multi-threaded process is blocked, ().
|
Other threads of the process can still continue to run.
|
The entire process will be blocked.
|
The blocking thread will be revoked.
|
The blocked thread will never be able to execute again.
|
B
|
In the many-to-one threading model, since there is only one kernel-level thread, the "many" user-level threads are transparent to the operating system, which means the OS kernel can only perceive the existence of a single scheduling unit. Consequently, if one thread of the process is blocked, the entire process is blocked, and naturally, all other threads of the process are also blocked. Note: In contrast, in the one-to-one model, each user-level thread is mapped to a kernel-level thread, so when a particular thread is blocked, it does not cause the entire process to be blocked.
|
1,767 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
Among the following descriptions of processes, () is the least consistent with the operating system's understanding of processes.
|
A process is a complete program in a multi-programming environment.
|
A process can be described by the program, data, and PCB (Process Control Block).
|
A thread is a special kind of process.
|
A process is the execution of a program on a set of data; it is an independent unit for system resource allocation and scheduling.
|
A
|
A process is an independent running unit and the basic unit for resource allocation and scheduling by the operating system. It includes the Process Control Block (PCB), program and data, as well as the execution stack. It is not appropriate to simply say that a process is a complete program in a multiprogramming environment, because a program is static, stored on the computer's hard drive in the form of files, whereas a process is dynamic.
|
1,768 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
Operating systems that support multiprogramming continuously select new processes to run during operation to achieve CPU sharing, but () is not the direct cause for the operating system to select a new process.
|
The time slice for the running process has expired.
|
Error occurred during process execution.
|
The running process must wait for a certain event to occur.
|
A new process has been created and entered the ready state.
|
D
|
It is necessary to understand the situations where process scheduling and switching cannot be performed (during interrupt handling, accessing critical sections, atomic operations), as well as the situations where process scheduling and switching should be carried out. A running process can activate the scheduler for rescheduling due to reasons such as the exhaustion of its time slice, completion of execution, waiting for an event to occur (such as waiting for keyboard input), errors, or self-blocking. This leads to the selection of a new ready process to be put into execution. The addition of a new process to the ready queue is not a direct cause for scheduling; when the CPU is running other processes, the new process still needs to wait. Even in systems that use high-priority scheduling algorithms, when a highest-priority process enters the ready queue, it is necessary to consider whether preemption is allowed. If preemption is not allowed, the process must still wait.
|
1,769 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
The PCB is the unique identifier of a process. Among the following, () is not part of the PCB.
|
Process ID
|
CPU status
|
Stack pointer
|
Global variable
|
D
|
A process entity mainly consists of code, data, and PCB. Therefore, it is important to understand the data structure contained within the PCB, which primarily includes four categories: process identification information, process control information, process resource information, and CPU context information. From the above, it is clear that global variables are unrelated to the PCB; they are only associated with the user code.
|
1,770 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
In a multiprogramming system, if the ready queue is not empty, then as more processes become ready, processor efficiency becomes ().
|
Higher
|
Lower
|
Unchanged
|
Uncertain
|
C
|
The process state diagram shows that the more processes are ready, the more competition there is for the CPU. However, as long as the ready queue is not empty, the CPU can always schedule processes to run and remain busy. This is independent of the number of ready processes, unless the ready queue is empty, at which point the CPU enters a wait state, resulting in reduced CPU efficiency.
|
1,771 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
The correct statement below is ().
|
After introducing threads, the processor can only switch between threads.
|
After introducing threads, the processor still switches between processes.
|
Thread switching does not cause process switching.
|
Thread switching may lead to process switching.
|
D
|
Within the same process, switching between threads does not cause a process switch. A process switch only occurs when switching from a thread in one process to a thread in another process, therefore A, B, C are incorrect.
|
1,772 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
The correct statement below is ().
|
Threads within the same process can execute concurrently, while threads from different processes can only execute serially.
|
Threads within the same process can only be executed serially, while threads from different processes can be executed concurrently.
|
Threads within the same process or across different processes can only execute serially.
|
Threads within the same process or across different processes can execute concurrently.
|
D
|
In systems without threads, processes are the basic units for resource scheduling and concurrent execution. In systems with threads, processes become the basic units for resource allocation, while threads replace processes to be scheduled by the operating system, allowing for concurrent execution.
|
1,773 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
Which of the following descriptions does not highlight the strengths of a multithreaded system? ().
|
Utilize threads to execute matrix multiplication operations in parallel.
|
Web servers use threads to respond to HTTP requests.
|
The keyboard driver equips each running application with a thread to respond to keyboard input for that application.
|
A GUI-based debugger processes user input, computation, and tracking operations on separate threads.
|
C
|
The entire system has only one keyboard, and since keyboard input is a human operation which is relatively slow, it is entirely feasible to use a single thread to handle the keyboard input for the entire system.
|
1,774 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
Two cooperating processes cannot exchange data using ().
|
File system
|
Shared Memory
|
Global Variables in High-Level Language Programming
|
Message Passing System
|
C
|
Different processes have different code segments and data segments. Global variables are specific to the same process and are different variables in different processes, with no connection between them, so they cannot be used for data exchange. This question can also be solved by the process of elimination, as options A, B, and D are all discussed in the textbook. A pipe is a type of file.
|
1,775 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
The events that may cause a process to transition from the running state to the ready state are ().
|
An I/O operation is completed.
|
The running process needs to perform I/O operations.
|
Process termination
|
A process with a higher priority than the current one has appeared.
|
D
|
When a process is in the running state, it must have acquired the necessary resources and will be terminated after the execution is complete. It only transitions to the ready state when the time slice expires or a process with a higher priority than the current one appears. Option A causes the process to move from the blocked state to the ready state, option B causes the process to move from the running state to the blocked state, and option C terminates the process.
|
1,776 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
When a process is in (), it is in a non-blocking state.
|
Waiting for data input from the keyboard.
|
Waiting for a signal from a cooperating process
|
Waiting for the operating system to allocate CPU time.
|
Waiting for network data to enter memory
|
C
|
Processes have three basic states. A process in the blocked state is waiting for some event that has not yet occurred. Such events are generally I/O operations, such as keyboard input or network data arriving in memory, or waits caused by mutual exclusion or synchronization of data, such as waiting for a signal from a cooperating process or waiting to enter a critical section. A process waiting for the operating system to allocate CPU time, by contrast, lacks nothing but the processor, so it is in the ready state, which is a non-blocked state.
|
1,777 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
A process being awakened means ().
|
The process can re-contest for the CPU.
|
Increased priority
|
The PCB is moved to the front of the ready queue.
|
The process transitions to the running state.
|
A
|
When a process is awakened, it enters the ready state, waiting for process scheduling to take over the CPU for execution. Upon awakening, the priority of a process can increase under certain circumstances, but it generally does not become the highest. Instead, it is calculated by a fixed algorithm. The process will not be placed at the head of the ready queue after being awakened; its position in the ready queue is assigned according to certain rules, such as first-come-first-served, priority-based, or shortest-job-first, etc. It cannot directly take control of the processor to run.
|
1,778 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
The process creation does not require ().
|
Fill in a process table entry for this process.
|
Allocate appropriate memory for the process.
|
Insert the process into the ready queue.
|
Allocate CPU to the process
|
D
|
The work completed by the process creation primitive is: to apply to the system for a free PCB, allocate the necessary resources for the process being created, then initialize its PCB, insert this PCB into the ready queue, and finally return a process identifier. When the scheduler allocates CPU to the process, the process begins to run. Therefore, the process of creating a process does not include the allocation of CPU, as this is not the job of the process creator, but the job of the scheduler.
|
1,779 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
Among the following statements, the incorrect one is ().
|
A process can create one or more threads.
|
A thread can create one or more threads.
|
A thread can create one or more processes.
|
A process can create one or more processes.
|
C
|
A process can create processes or threads, and a thread can also create threads, but a thread cannot create a process.
|
1,780 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
Among the following descriptions of the advantages of kernel-level threads over user-level threads, the incorrect one is ().
|
Thread switching within the same process incurs low system overhead.
|
When a kernel thread is blocked, the CPU will schedule other kernel threads within the same process to execute.
|
The program entity of kernel-level threads can run in kernel mode.
|
For multiprocessor systems, cores can simultaneously schedule multiple threads of the same process to run in parallel.
|
A
|
In kernel-level threading, thread switching within the same process requires transitioning from user mode to kernel mode, which incurs significant system overhead, A is incorrect. CPU scheduling is performed in the kernel, and in kernel-level threading, scheduling is done at the thread level, allowing the kernel to schedule multiple threads from the same process to run in parallel on multiple CPUs (which is not possible with user-level threads), B is correct, D is correct. When a kernel-level thread within a process is running in kernel mode, it indicates that the process is also running in kernel mode, C is correct.
|
1,781 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
In the following description of the advantages of user-level threads over kernel-level threads, the incorrect one is ().
|
The blocking of one thread does not affect the execution of another thread.
|
Thread scheduling does not require direct kernel involvement, allowing for simple control.
|
Thread switching has a low overhead.
|
Allow each process to customize its own scheduling algorithm; thread management is relatively flexible.
|
A
|
If a user-level thread in a process blocks, the entire process blocks, meaning the other user-level threads in that process cannot run either, so A is incorrect. The scheduling of user-level threads is done in user space, saving the overhead of mode switching, and different processes can choose different scheduling algorithms for their threads according to their own needs; therefore B, C, and D are all correct.
|
1,782 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
The round-robin scheduling algorithm is designed for ().
|
Multiple users can intervene in the system promptly.
|
Make the system efficient
|
Processes with higher priority receive timely responses.
|
The process requiring the least CPU time is executed first.
|
A
|
The primary purpose of round-robin time slicing is to ensure that multiple interactive users receive timely responses, creating the illusion that they have "exclusive" use of the computer. Therefore, it does not show preference and does not provide special services for specific processes. Round-robin time slicing increases system overhead, so it does not enhance system efficiency, and both throughput and turnaround time are not as good as batch processing. However, its relatively quick response time allows users to interact with the computer, improving the human-computer environment and meeting user needs.
|
1,783 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
In a single-processor multiprogramming system, when a process gets to occupy the processor and how long it occupies it are determined by ().
|
The corresponding code length of the process
|
The total time required for the process to run
|
Process Characteristics and Process Scheduling Policies
|
What functions does the process complete?
|
C
|
The timing of process scheduling is related to the characteristics of the process, such as whether the process is CPU-bound or I/O-bound, its own priority, etc. However, these characteristics alone are not sufficient. Whether a process can be scheduled also depends on the process scheduling policy. If a priority scheduling algorithm is used, then the process's priority becomes effective. As for the duration of processor occupation, it depends on the process itself. If the process is I/O-bound, it will frequently access I/O ports during execution, which means it may often relinquish the CPU, so it will not occupy the CPU for a long time. Once it gives up the CPU, it must wait for the next scheduling. If the process is CPU-bound, once it occupies the CPU, it may run for a long time. However, the running time also depends on the process scheduling policy. In most cases, interactive systems, to improve user response time, mostly adopt the round-robin algorithm with time slices. This algorithm forces a process to be switched out after it has occupied the CPU for a certain amount of time, to ensure the CPU usage rights of other processes. Therefore, choose C.
|
1,784 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
() is beneficial for CPU-intensive jobs, but not for I/O-intensive jobs.
|
Round Robin Scheduling Algorithm
|
First-Come, First-Served (FCFS) scheduling algorithm
|
Shortest Job (Process) First Algorithm
|
Priority Scheduling Algorithm
|
B
|
The FCFS (First-Come, First-Served) scheduling algorithm is more favorable to long jobs and less favorable to short jobs. CPU-bound jobs refer to those that require a significant amount of CPU time for computation and seldom request I/O operations; thus, using FCFS allows them to complete computations with ease. I/O-bound jobs, on the other hand, frequently request I/O operations during CPU processing, resulting in the need to re-queue and wait for scheduling after operations are completed. Therefore, CPU-bound jobs are closer to long jobs, and if FCFS is used, the waiting time can be excessively long. The Round-Robin scheduling method assigns the same time slice to both short and long jobs, so their status is almost equal. Priority scheduling favors processes with higher priorities, and priority is not necessarily related to the length of the job time. Therefore, option B is selected.
|
1,785 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
The following criteria for selecting a process scheduling algorithm is incorrect ().
|
Respond promptly to interactive user requests.
|
Maximize processor utilization.
|
Maximize system throughput as much as possible.
|
Appropriately increase the waiting time of the process ready queue.
|
D
|
When selecting a process scheduling algorithm, the following criteria should be considered: ① Fairness: Ensure that each process gets a reasonable share of the CPU; ② Efficiency: Keep the CPU as busy as possible; ③ Response Time: Minimize the response time for interactive users; ④ Turnaround Time: Minimize the waiting time for batch processing users to receive output; ⑤ Throughput: Maximize the number of processes handled per unit of time. Therefore, D is incorrect.
|
1,786 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
The timing of process (thread) scheduling includes (). Ⅰ. The running process (thread) has finished execution Ⅱ. The resources required by the running process (thread) are not ready Ⅲ. The time slice of the running process (thread) has expired Ⅳ. The running process (thread) blocks itself Ⅴ. The running process (thread) encounters an error.
|
II, III, IV, and V
|
I and III
|
II, IV, and V
|
All of the above
|
D
|
The timing of process (thread) scheduling includes: the running process (thread) has completed, the running process (thread) blocks itself, the time slice of the running process (thread) has expired, the resources required by the running process (thread) are not ready, and the running process (thread) encounters an error. All these timings cause process (thread) scheduling when the CPU mode is non-preemptive. In a preemptive CPU mode, process (thread) scheduling also occurs when the priority of a process (thread) in the ready queue is higher than that of the currently running process (thread). Therefore, I, II, III, IV, and V are all correct.
|
1,787 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
There are three jobs J_1, J_2, and J_3 arriving at the same time, with execution times T_1, T_2, and T_3 respectively, and T_1 < T_2 < T_3. The system operates in single-task mode and uses the Shortest Job First scheduling algorithm, then the average turnaround time is ().
|
T_1 + T_2 + T_3
|
(3T_1 + 2T_2 + T_3) / 3
|
(T_1+T_2+T_3)/3
|
(T_1 + 2T_2 + 3T_3) / 3
|
B
|
The system adopts the Shortest Job First scheduling algorithm, with the job execution order being J_1, J_2, J_3. The turnaround time for J_1 is T_1, for J_2 is T_1+T_2, and for J_3 is T_1+T_2+T_3. Therefore, the average turnaround time is (T_1+T_1+T_2+T_1+T_2+T_3)/3 = (3T_1+2T_2+T_3)/3.
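The turnaround arithmetic above can be checked with a minimal sketch. The symbolic times T_1 < T_2 < T_3 are replaced with illustrative values; the helper name is hypothetical.

```python
def avg_turnaround_sjf(times):
    """Average turnaround time when jobs with the given execution times
    all arrive at t=0 and run in shortest-job-first order."""
    times = sorted(times)
    finish, total = 0, 0
    for t in times:
        finish += t          # completion time of this job
        total += finish      # turnaround = completion time - arrival time (0)
    return total / len(times)

t1, t2, t3 = 2, 3, 5         # any values with t1 < t2 < t3
assert avg_turnaround_sjf([t1, t2, t3]) == (3*t1 + 2*t2 + t3) / 3
```

The shortest job's execution time appears in all three turnaround times, the middle job's in two, and the longest job's in one, which is exactly the weighting in the formula.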
|
1,788 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
The correct statement regarding priorities is ().
|
The priority of computational tasks should be higher than that of I/O-bound tasks.
|
The priority of user processes should be higher than that of system processes.
|
In dynamic priority, as the waiting time for a job increases, its priority will correspondingly decrease.
|
In dynamic priority, as the process execution time increases, its priority decreases.
|
D
|
In priority scheduling algorithms, I/O-bound jobs have higher priority over CPU-bound jobs, and system processes should have higher priority than user processes. The priority of a job is not necessarily related to whether it is a long or short job, or the amount of system resources it requires. In dynamic priority scheduling, a process's priority decreases as its execution time increases, while its priority increases as the job's waiting time increases.
|
1,789 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
The process scheduling algorithm uses a fixed time slice round-robin scheduling algorithm. When the time slice is too large, it will cause the round-robin scheduling algorithm to transform into a () scheduling algorithm.
|
Highest Response Ratio Next (HRRN)
|
First Come, First Served
|
Shortest Job First
|
None of the above options are correct.
|
B
|
In round-robin scheduling, each process in turn runs for at most one time slice. When the time slice is so large that it exceeds the running time any process needs, every process completes within its first time slice, so the algorithm effectively degenerates into first-come, first-served scheduling.
|
1,790 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
Assuming all processes in the system arrive at the same time, the scheduling algorithm that minimizes the average turnaround time for processes is ().
|
First Come, First Served
|
Shortest Job First
|
Round Robin Scheduling
|
Priority
|
B
|
The Shortest Job First scheduling algorithm has the shortest average turnaround time. Average turnaround time = sum of each process's turnaround time / number of processes. Since the execution time for each process is fixed, the variable is the waiting time, and only the Shortest Job First algorithm can minimize the waiting time.
|
1,791 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
Among the following statements, the correct one(s) is (are) (). Ⅰ. The time slice of a time-sharing system is fixed, so the more users there are, the longer the response time. Ⅱ. UNIX is a powerful multi-user, multitasking operating system that supports multiple processor architectures and is classified as a time-sharing operating system according to the categorization of operating systems. Ⅲ. The interrupt vector address is the entry address of the interrupt service routine. Ⅳ. When an interrupt occurs, the program counter (PC) is protected and updated by hardware, not by software, mainly to improve processing speed.
|
I, II
|
II, III
|
III, IV
|
Only IV
|
A
|
Option Ⅰ is correct. In a time-sharing system, the response time is directly proportional to the time slice and the number of users. Option Ⅱ is correct. Option Ⅲ is incorrect; the interrupt vector itself is used to store the entry address of the interrupt service routine, hence the address of the interrupt vector should be the address of that entry. Option Ⅳ is incorrect; interrupts are protected and completed by hardware, mainly to ensure the system operates reliably and correctly. Improving processing speed is also a benefit, but it is not the primary purpose. In summary, options Ⅲ and Ⅳ are incorrect.
|
1,792 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
The correct statement about the critical section is ().
|
The critical section refers to the segment of code in a process that is used to achieve mutual exclusion among processes.
|
The critical section refers to the segment of code within a process that is used to achieve process synchronization.
|
The critical section refers to the segment of code within a process that is used to achieve inter-process communication.
|
The critical section refers to the segment of code in a process that is used to access critical resources.
|
D
|
Multiple processes can share resources in the system, and a resource that only allows one process to use at a time is called a critical resource. The section of code that accesses the critical resource is known as the critical section.
|
1,793 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
Which of the following is not a guideline that a synchronization mechanism should follow? ().
|
Yield while waiting
|
Let in when idle
|
Wait when busy
|
Unbounded waiting
|
D
|
The four criteria a synchronization mechanism should follow are: let in when idle, wait when busy, bounded waiting, and yield the processor while waiting. Option D, unbounded (infinite) waiting, violates the bounded-waiting criterion rather than being one of the criteria.
|
1,794 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
Processes A and B collaborate to complete data processing through a shared buffer, with Process A responsible for generating data and placing it into the buffer, and Process B reading data from the buffer and outputting it. The constraining relationship between Process A and Process B is ().
|
Mutual exclusion relationship
|
Synchronous relationship
|
Mutual exclusion and synchronization relationship
|
Unconstrained relationship
|
C
|
Concurrent processes create interdependencies due to shared resources, which can be divided into two categories: ① Mutual exclusion, which refers to the restrictive relationship that arises when processes compete for exclusive access to mutually exclusive resources; ② Synchronization, which refers to the restrictive relationship that arises when processes need to exchange information and wait for each other in order to coordinate their work. In this question, the restrictive relationship between the two processes is synchronization, where process B can only read data from the buffer after process A has placed the data into the buffer. Additionally, the shared buffer must be accessed mutually exclusively, so they also have a mutual exclusion relationship.
|
1,795 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
In operating systems, P and V operations are a type of ().
|
Machine instruction
|
System call command
|
Job Control Commands
|
Low-level interprocess communication primitives
|
D
|
The P and V operations are low-level process communication primitives that are uninterruptible.
|
1,796 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
The PV operations used to achieve process synchronization and mutual exclusion are actually composed of () processes.
|
An interruptible
|
An uninterruptible
|
Two interruptible
|
Two non-interruptible
|
D
|
The P operation and V operation are both primitive operations and cannot be interrupted.
|
1,797 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
For two concurrent processes, let the mutual exclusion semaphore be mutex (initial value is 1). If mutex=0, it indicates that ( ).
|
No process enters the critical section.
|
A process enters the critical section.
|
One process enters the critical section, while another process waits to enter.
|
A process is waiting to enter.
|
B
|
The initial value of mutex is 1, indicating that one process is allowed to enter the critical section. When a process enters the critical section and no other process is waiting to enter, mutex is decremented by 1, becoming 0. When mutex is negative, |mutex| gives the number of processes waiting to enter. Therefore, option B is selected.
|
1,798 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
For two concurrent processes, given a mutual exclusion semaphore named mutex (initial value is 1), if mutex = -1, then ().
|
Indicates that no process has entered the critical section.
|
Indicates that a process has entered the critical section.
|
Indicates that one process has entered the critical section, while another process is waiting to enter.
|
Indicates that two processes have entered the critical section.
|
C
|
When one process enters the critical section and another process is waiting to enter the critical section, mutex = -1. When mutex is less than 0, its absolute value is equal to the number of processes waiting to enter the critical section.
|
1,799 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
The following statement about monitors is incorrect ().
|
Monitors are process synchronization tools that address the issue of semaphore mechanisms having a large number of scattered synchronization operations.
|
The monitor allows only one process to enter at a time.
|
The function of the signal operation in a monitor is equivalent to the V operation in semaphore mechanisms.
|
Monitors are invoked by processes; they are a syntactic scope and cannot be created or destroyed.
|
C
|
The signal operation in monitors is different from the V operation in semaphore mechanisms; in semaphore mechanisms, the V operation always changes the value of the semaphore S=S+1. However, in monitors, the signal operation is targeted at a specific condition variable, and if there are no processes blocked due to that condition, the signal operation will have no effect.
|
1,800 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
Among the following statements about PV operations, the correct one(s) is (are) (). Ⅰ. PV operation is a system call command Ⅱ. PV operation is a low-level process communication primitive Ⅲ. PV operation consists of an uninterruptible process Ⅳ. PV operation consists of two uninterruptible processes
|
I, III
|
II, IV
|
I, II, IV
|
I, IV
|
B
|
PV operations are low-level process communication primitives, not system calls, so Ⅱ is correct; both P and V operations are atomic operations, hence PV operations consist of two uninterruptible processes, therefore Ⅳ is correct.
|
1,801 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
Among the following statements about critical sections and critical resources, the correct one(s) is (are) (). Ⅰ. The Banker's algorithm can be used to solve the Critical Section problem Ⅱ. A critical section refers to the code segment in a process that is used to achieve mutual exclusion among processes Ⅲ. A shared queue is a critical resource Ⅳ. Private data is a critical resource
|
I, II
|
I, IV
|
Only Ⅲ
|
All of the above answers are incorrect.
|
C
|
A critical resource is a resource that only one process may access at a time, and the section of code within each process that accesses the critical resource is called the critical section. Ⅰ is incorrect: the Banker's algorithm is a deadlock-avoidance algorithm, not a solution to the critical-section problem. Ⅱ is incorrect: the critical section is the code that accesses a critical resource, not the code that implements mutual exclusion itself. Ⅲ is correct: a shared queue can be used by multiple processes, but only one process may operate on it at a time. Ⅳ is incorrect: private data is used by a single process only, so no critical-section issue arises. Based on this analysis, the correct answer is C.
|
1,802 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
There is a counting semaphore S. 1) If several processes perform 28 P operations and 18 V operations on S, the value of S becomes 0. 2) If instead, starting from the same initial value, several processes perform 15 P operations and 2 V operations on S, how many processes are waiting in the queue of semaphore S? ()
|
2
|
3
|
5
|
7
|
B
|
After performing 28 P operations and 18 V operations on S, we have S-28+18=0, which indicates that the initial value of the semaphore S is 10. Subsequently, 15 P operations and 2 V operations are performed on semaphore S, resulting in S-15+2=10-15+2=-3. The absolute value of the negative value of semaphore S represents the number of processes in the waiting queue. Therefore, there are 3 processes waiting in the queue of semaphore S.
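The semaphore arithmetic can be sketched directly; each P decrements S, each V increments it, and a negative S means |S| processes sit in the waiting queue. The helper name is illustrative.

```python
def final_value(initial, p_ops, v_ops):
    """Value of a counting semaphore after p_ops P operations
    and v_ops V operations, starting from `initial`."""
    return initial - p_ops + v_ops

# 1) 28 P's and 18 V's leave S at 0, so the initial value was 10.
initial = 0 + 28 - 18
assert initial == 10

# 2) From the same initial value, 15 P's and 2 V's give S = -3,
#    i.e. 3 processes are blocked in the waiting queue.
s = final_value(initial, 15, 2)
assert s == -3
waiting = abs(s) if s < 0 else 0
assert waiting == 3
```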
|
1,803 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
If there are 4 processes sharing the same program segment, and each time 3 processes are allowed to enter the program segment, and if P and V operations are used as the synchronization mechanism, then the range of values for the semaphore is ().
|
4,3,2,1,-1
|
2, 1, 0, -1, -2
|
3, 2, 1, 0, -1
|
2,1,0,-2,-3
|
C
|
Since the program segment allows at most three processes in at a time, the semaphore's initial value is 3. With four processes, the possible scenarios are: no process inside, one inside, two inside, three inside, and three inside with one waiting to enter. The semaphore values corresponding to these five situations are 3, 2, 1, 0, -1.
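A minimal sketch enumerating the semaphore's possible values, assuming 4 processes and an initial value of 3 (so at most one process can ever be left waiting):

```python
initial = 3   # at most 3 processes inside the segment at once
n_procs = 4   # total processes sharing the segment

# After k of the 4 processes have executed P (and none has yet executed V),
# the semaphore value is initial - k, for k = 0..4.
values = {initial - k for k in range(n_procs + 1)}
assert values == {3, 2, 1, 0, -1}
```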
|
1,804 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
In the producer-consumer problem with 9 producers, 6 consumers, and a shared buffer of capacity 8, the initial value of the semaphore for mutual exclusion of the buffer is ().
|
1
|
6
|
8
|
9
|
A
|
The so-called mutual exclusion of a critical resource means that only one process is allowed to use this resource at the same time. Therefore, the initial value of a mutual exclusion semaphore is always 1.
|
1,805 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
There are two concurrent programs with the same priority, P_1 and P_2, whose execution processes are as follows. Assume the semaphores are initialized to s_1=0, s_2=0, and the current value of z is 2. After both processes finish running, the values of x, y, and z are respectively (). Process P1 \n y:=1; \n y:=y+2; \n z:=y+1; \n V(s1); \n P(s2); \n y:=z+y; \n … Process P2 \n x:=1; \n x:=x+1; \n P(s1); \n x:=x+y; \n z:=x+z; \n V(s2); \n …
|
5,9,9
|
5,9,4
|
5,12,9
|
5,12,4
|
C
|
Due to concurrent processes, the execution of processes is uncertain. Before P_1 and P_2 reach their first P and V operations, they should be independent of each other. Now consider the first P and V operations on s_1. Since process P_2 performs the P(s_1) operation, it must wait until P_1 completes the V(s_1) operation before it can continue. At this point, the values of x, y, and z are 2, 3, and 4, respectively. After process P_1 completes the V(s_1), it becomes blocked on P(s_2), allowing P_2 to run until V(s_2). At this point, the values of x, y, and z are 5, 3, and 9, respectively. Process P_1 continues to run until the end, with the final values of x, y, and z being 5, 12, and 9, respectively.
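The interleaving forced by the semaphores can be replayed as a minimal sketch: plain assignments stand in for the Pascal-style pseudocode, and the P/V operations appear only as the ordering they enforce (P2's first two assignments could interleave earlier without changing the result, since they touch only x).

```python
z = 2
# P1 runs up to V(s1):
y = 1
y = y + 2          # y = 3
y_after_p1 = y
z = y + 1          # z = 4
# P1 blocks at P(s2); P2 proceeds past P(s1):
x = 1
x = x + 1          # x = 2
x = x + y          # x = 2 + 3 = 5
z = x + z          # z = 5 + 4 = 9
# P2 executes V(s2); P1 resumes:
y = z + y          # y = 9 + 3 = 12
assert (x, y, z) == (5, 12, 9)
```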
|
1,806 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
Among the following situations, the one that may lead to a deadlock is ().
|
Process Releases Resources
|
A process enters an infinite loop.
|
Multiple processes enter a circular wait due to competition for resources.
|
Multiple processes compete for the use of shared devices.
|
C
|
The four necessary conditions for a deadlock are: mutual exclusion, hold and wait, no preemption, and circular wait. In this question, the phenomenon of circular wait has occurred, which means it could lead to a deadlock. The release of resources by a process will not cause a deadlock, and a process entering an infinite loop on its own can only cause "starvation," which does not involve other processes. Shared devices allow multiple processes to request usage, so they do not lead to deadlocks. A reminder again, a deadlock must involve two or more processes to occur, while starvation can be caused by a single process.
|
1,807 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
Allocating all resources at once can prevent the occurrence of deadlocks, as it breaks one of the four necessary conditions for deadlocks ().
|
Mutually exclusive
|
Hold and wait
|
Non-preemptive
|
Cyclic waiting
|
B
|
The four necessary conditions for a deadlock to occur are: mutual exclusion, hold and wait, no preemption, and circular wait. With one-time allocation, a process makes all of its resource requests at once: if every requested resource is available, all are allocated; if even one is not, none are allocated and the process blocks until its entire demand can be satisfied at once. Because a process never holds some resources while waiting for others, this breaks the hold-and-wait condition and thus prevents deadlock. However, since all resources must be acquired together, resource utilization may be low when a process needs many resources, and process "starvation" may even occur.
|
1,808 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
Deadlock avoidance is implemented by taking measures based on ().
|
Allocate sufficient system resources
|
Ensure a logical progression of processes.
|
One of the four necessary conditions to break a deadlock.
|
Prevent the system from entering an unsafe state.
|
D
|
Deadlock avoidance refers to the use of certain algorithms to impose restrictions during the dynamic allocation of resources, preventing the system from entering an unsafe state and thereby avoiding the occurrence of deadlocks. Option B is the outcome after avoiding deadlock, not the principle of the measure.
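The "restrict allocation so the system never enters an unsafe state" idea can be illustrated with the safety-check half of the Banker's algorithm (which later questions name as the standard avoidance algorithm). This is a minimal sketch, not code from the source; the matrices are the classic textbook instance and are assumptions for illustration only.

```python
def is_safe(available, max_need, allocation):
    """Banker's safety check: True iff some order exists in which
    every process can obtain its maximum need and run to completion."""
    n = len(max_need)
    work = available[:]                 # resources currently free
    finish = [False] * n
    # need[i][j] = max_need[i][j] - allocation[i][j]
    need = [[m - a for m, a in zip(max_need[i], allocation[i])] for i in range(n)]
    while True:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # Process i can finish; it returns everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                progressed = True
        if not progressed:
            return all(finish)          # safe iff everyone could finish

# Hypothetical example state (the classic 5-process, 3-resource instance):
AVAILABLE = [3, 3, 2]
MAX_NEED = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
ALLOCATION = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(AVAILABLE, MAX_NEED, ALLOCATION))  # -> True (state is safe)
```

An avoidance scheme would run this check on the state that *would* result from granting a request, and grant only if the resulting state is safe.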
|
1,809 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
Deadlock prevention is a static strategy to ensure that the system does not enter a deadlock state, which involves breaking one of the four necessary conditions for a deadlock to occur. Among the following methods, the one that breaks the "circular wait" condition is ().
|
Banker's Algorithm
|
One-time allocation strategy
|
Deprivation of Resources Method
|
Orderly Resource Allocation Strategy
|
D
|
An ordered resource allocation strategy imposes a global ordering on resources and requires processes to request them in that order, which rules out circular wait. Option A determines whether the system is in an unsafe state; option B breaks the hold-and-wait condition; option C breaks the no-preemption condition.
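The ordered allocation idea can be sketched in a few lines: give every resource a global rank and force each process to acquire in ascending rank order, so no circular chain of waits can form. This is a minimal sketch, not from the source; the resource ids are hypothetical.

```python
import threading

# Each resource gets a fixed global rank (here, its dict key).
resources = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}
acquired_order = []                      # records the order locks are taken

def acquire_in_order(needed_ids):
    """Take the locks for needed_ids in ascending rank order only."""
    for rid in sorted(needed_ids):
        resources[rid].acquire()
        acquired_order.append(rid)

def release_all(held_ids):
    """Release in reverse rank order."""
    for rid in sorted(held_ids, reverse=True):
        resources[rid].release()

# Even if a process *asks* for [3, 1], it actually takes 1 before 3,
# so two processes can never each hold the other's next resource.
acquire_in_order([3, 1])
release_all([3, 1])
print(acquired_order)  # -> [1, 3]
```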
|
1,810 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
In a system with 11 tape drives, X processes share these tape drive devices, with each process requesting at most 3 drives. The maximum value of X that guarantees the system will not experience deadlock is ().
|
4
|
5
|
6
|
7
|
B
|
Consider the worst case: each process has already been allocated two tape drives. If one spare drive remains, some process can be given a third drive, meet its maximum demand, run to completion, and return its drives to the system for reallocation to the other processes. Therefore the system is guaranteed deadlock-free as long as 2X + 1 ≤ 11. Solving gives X ≤ 5, so the system can concurrently run at most 5 such processes without risking deadlock.
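The worst-case argument generalizes: with `total` identical devices and a per-process maximum of `max_need`, deadlock is impossible as long as n·(max_need − 1) + 1 ≤ total. A small sketch (the function name is my own, not from the source):

```python
def max_deadlock_free_processes(total, max_need):
    """Largest n satisfying n * (max_need - 1) + 1 <= total,
    i.e. even in the worst case one process can always finish."""
    return (total - 1) // (max_need - 1)

print(max_deadlock_free_processes(11, 3))  # -> 5 (this question)
print(max_deadlock_free_processes(4, 2))   # -> 3 (cf. the 3-process, 4-resource question below)
```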
|
1,811 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
A method not typically used to resolve deadlock is ().
|
Terminate a deadlocked process
|
Terminate all deadlocked processes.
|
Preempt resources from a deadlocked process.
|
Preempt resources from non-deadlocked processes.
|
D
|
Methods to resolve deadlock include: ① Preemption: Suspend some of the deadlocked processes and preempt their resources to allocate to other deadlocked processes; ② Termination: Forcibly terminate some or all of the deadlocked processes and reclaim their resources.
|
1,812 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
Among the following solutions to deadlock, the one that is a deadlock prevention strategy is ().
|
Banker's Algorithm
|
Resource ordered allocation algorithm
|
Deadlock Detection Algorithm
|
Resource Allocation Graph Simplification Method
|
B
|
The Banker's algorithm is a deadlock avoidance algorithm, and the deadlock detection algorithm and resource allocation graph reduction are used for deadlock detection. By elimination, the resource ordered allocation algorithm is the deadlock prevention strategy.
|
1,813 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
Three processes share four identical resources, with allocation and release of these resources occurring one at a time. It is known that each process requires at most two of such resources, then the system ().
|
Some processes may never obtain such resources.
|
There must be a deadlock.
|
The process will inevitably obtain the requested type of resource.
|
A deadlock may occur.
|
C
|
Deadlock will not occur. This is because when each process is allocated one resource, there is still one resource available that can satisfy any one process, allowing it to run to completion and subsequently release its resources.
|
1,814 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
The correct description of the resource allocation graph is ().
|
Directed edges include two types: allocation edges from processes to resource classes and request edges from resource classes to processes.
|
Rectangular boxes represent processes, where dots within indicate the various processes applying for the same type of resource.
|
The circular node represents a resource class.
|
The resource allocation graph is a directed graph used to represent the state of system resources and processes at a given moment.
|
D
|
The directed edge from a process to a resource is called a request edge, and the directed edge from a resource to a process is called an allocation edge. Option A is incorrect; a rectangular box represents a resource, with dots inside indicating the number of resources, so option B is incorrect; a circular node represents a process, which means option C is incorrect; the statement in option D is correct.
|
1,815 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
The one condition among the four necessary conditions for deadlock that cannot be broken is ().
|
Circular wait for resources
|
Mutual exclusion of resources
|
Hold and wait for resources
|
Non-preemptive allocation
|
B
|
Breaking the mutual exclusion condition would mean allowing multiple processes to access a resource simultaneously. However, some resources, such as printers, can only be used exclusively, so preventing deadlock by breaking mutual exclusion is not feasible; in some cases this exclusivity must even be protected. The other three conditions can all be broken.
|
1,816 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
The relationship between deadlock and safe state is ().
|
The deadlock state can be a safe state.
|
It is possible for a safe state to become a deadlock state.
|
An unsafe state is a deadlock state.
|
A deadlock state is definitely an unsafe state.
|
D
|
Not all unsafe states are deadlock states, but once the system enters an unsafe state, it may progress into a deadlock state; conversely, as long as the system is in a safe state, it can avoid entering a deadlock state; a deadlock state is necessarily an unsafe state.
|
1,817 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
The system adopts the following resource allocation strategy: If a process's resource request cannot be satisfied and there are no processes currently blocked due to waiting for resources, then it will block itself. However, if there are already processes blocked due to waiting for resources, it will check all processes that are blocked due to waiting for resources. If they have the resources needed by the requesting process, these resources will be taken and allocated to the requesting process. This allocation strategy can lead to ().
|
Deadlock
|
Thrashing
|
Rollback
|
Starvation
|
D
|
A process giving up its resources will not lead to deadlock, because the hold-and-wait condition is broken, so option A is incorrect. Thrashing, also known as jitter, is a phenomenon caused by improper page scheduling in a demand paging system, which will be discussed in the next chapter; for now, we can conclude that option B is incorrect. Rollback means restoring a process to an earlier state, resources included; here the blocked processes merely have their resources taken away, which does not restore any earlier state (a process that held resource X earlier may since have released it), so option C is incorrect. Under this strategy a process is too "generous": the resources it has acquired keep being taken away and given to others, so it may be unable to complete its task for a long time. That is starvation, so option D is correct.
|
1,818 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
The system's resource allocation graph cannot determine whether it is in a deadlock state under the following conditions (). Ⅰ. A cycle is present Ⅱ. No cycle is present Ⅲ. There is only one instance of each resource, and a cycle is present Ⅳ. Each process node has at least one request edge.
|
I, II, III, IV
|
I, III, IV
|
I, IV
|
None of the above answers are correct.
|
C
|
The presence of a cycle merely satisfies the necessary condition of circular wait, but meeting a necessary condition does not necessarily lead to a deadlock, I is correct; without a cycle, the condition for circular wait is broken, and a deadlock will definitely not occur, II is incorrect; if there is only one instance of each resource and a cycle is present, this is a sufficient condition for deadlock, and it can be determined whether a deadlock exists, III is incorrect; even if each process has at least one request edge, if resources are sufficient, a deadlock will not occur, but if resources are insufficient, there is a possibility of deadlock, IV is correct. In summary, the answer is C.
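The cycle conditions discussed here can be checked mechanically by treating the resource allocation graph as a plain directed graph (request edges process→resource, allocation edges resource→process). A cycle is necessary for deadlock and, per case III, sufficient only when every resource has a single instance. This is a minimal sketch of my own; the node names are hypothetical.

```python
def has_cycle(graph):
    """DFS three-color cycle detection on a directed graph given as
    {node: [successor, ...]}. A gray->gray edge is a back edge, i.e. a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def dfs(u):
        color[u] = GRAY
        for v in graph.get(u, []):
            if color.get(v, WHITE) == GRAY:
                return True              # back edge: cycle found
            if color.get(v, WHITE) == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# P1 holds R2 and requests R1; P2 holds R1 and requests R2 -> cycle.
cyclic = {'P1': ['R1'], 'R1': ['P2'], 'P2': ['R2'], 'R2': ['P1']}
acyclic = {'P1': ['R1'], 'R1': ['P2'], 'P2': []}
print(has_cycle(cyclic), has_cycle(acyclic))  # -> True False
```

With single-instance resources, `has_cycle(...) == True` already implies deadlock; with multi-instance resources it only flags a candidate that a full detection algorithm (graph reduction) must confirm.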
|
1,819 |
Test
|
Operating System
|
Processes and Threads
|
Multiple-choice
|
Reasoning
|
English
|
Among the following statements about deadlock, the correct ones are (). Ⅰ. A deadlock state is definitely an unsafe state Ⅱ. The fundamental cause of deadlock is insufficient system resource allocation and illegal process advancement order Ⅲ. An ordered resource allocation strategy can break the deadlock's circular wait condition Ⅳ. Deadlock can be resolved by using resource preemption, and the process termination method can also be used to resolve deadlock.
|
I, III
|
II
|
IV
|
All four statements are correct.
|
D
|
I correct. II correct: These are the two main reasons for deadlock occurrence. III correct: When resources are allocated in an ordered manner, there is no possibility of a circular chain among processes, meaning no circular wait can occur. IV correct: The resource preemption method allows a process to forcibly take over the system resources held by other processes. Similarly, aborting a process forcibly releases the system resources held by a process. Both approaches resolve deadlock by breaking the "hold and wait" condition, therefore the answer is D.
|