Disk Scheduling
* As mentioned earlier, disk transfer speeds are limited primarily by seek times and rotational latency. When many requests must be processed there is also some inherent delay in waiting for other requests to finish.
* Bandwidth is calculated as the total amount of data transferred divided by the total elapsed time from the first request being made to the last transfer being completed (for a series of disk requests).
* Both bandwidth and access time can be improved by processing requests in a good order.
* Disk requests include the disk address, the memory address, the number of sectors to be transferred, and whether the request is for reading or writing.
FCFS Scheduling
First-Come First-Serve is simple and essentially fair, but not very efficient. Consider the wild swing from cylinder 122 to 14 and then back to 124 that occurs when requests are serviced strictly in arrival order:
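The FCFS cost is just the sum of head movements taken in arrival order. The sketch below assumes the classic textbook example implied by the 122 → 14 → 124 swing mentioned above: a request queue of 98, 183, 37, 122, 14, 124, 65, 67 with the head starting at cylinder 53.

```python
def fcfs_total_movement(start, requests):
    """Total head movement (in cylinders) when servicing requests
    strictly in the order they arrived."""
    total, head = 0, start
    for cyl in requests:
        total += abs(cyl - head)  # seek distance for this request
        head = cyl                # head is now parked at this cylinder
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs_total_movement(53, queue))  # 640 cylinders
```

Note the 122 → 14 → 124 subsequence alone accounts for 218 cylinders of movement, which is what better orderings avoid.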
SSTF Scheduling
* Shortest Seek Time First scheduling is more efficient, but may lead to starvation if a constant stream of requests arrives for the same general area of the disk.
* SSTF reduces the total head movement to 236 cylinders, down from the 640 required for the same set of requests under FCFS. Note, however, that the distance could be reduced still further to 208 by servicing 37 and then 14 first, before processing the rest of the requests.
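SSTF is a greedy choice: at each step, service the pending request closest to the current head position. A minimal sketch, again assuming the textbook queue (head at 53; requests 98, 183, 37, 122, 14, 124, 65, 67):

```python
def sstf_schedule(start, requests):
    """Greedily service the nearest pending request first.
    Returns (service order, total head movement in cylinders)."""
    pending, head = list(requests), start
    order, total = [], 0
    while pending:
        nearest = min(pending, key=lambda c: abs(c - head))
        pending.remove(nearest)
        order.append(nearest)
        total += abs(nearest - head)
        head = nearest
    return order, total

order, total = sstf_schedule(53, [98, 183, 37, 122, 14, 124, 65, 67])
print(order)  # [65, 67, 37, 14, 98, 122, 124, 183]
print(total)  # 236 cylinders
```

The greedy choice is not globally optimal: forcing 37 and then 14 first (16 + 23 cylinders) before sweeping up through 65, 67, 98, 122, 124, 183 yields the 208-cylinder total mentioned above.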
SCAN Scheduling
The SCAN algorithm, a.k.a. the elevator algorithm, moves back and forth from one end of the disk to the other, similarly to an elevator processing requests in a tall building.
* Under the SCAN algorithm, if a request arrives just ahead of the moving head then it will be processed right away, but if it arrives just after the head has passed, then it must wait for the head to come back the other way on the return trip. This leads to a fairly wide variation in access times, which can be improved upon.
* Consider, for example, when the head reaches the high end of the disk: requests with high cylinder numbers just missed the passing head, which means they are all fairly recent requests, whereas requests with low cylinder numbers may have been waiting for a much longer time. Making the return scan from high to low then ends up servicing recent requests first and making older requests wait that much longer.
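The total SCAN movement can be computed without simulating every stop, since the head simply travels to one edge and then back to the farthest remaining request. This sketch assumes a 200-cylinder disk (cylinders 0 to 199), the same textbook queue, and a head initially moving toward cylinder 0; counting conventions for the edge travel vary between texts.

```python
def scan_total(start, requests, low=0, high=199, direction="down"):
    """Total head movement for SCAN (the elevator algorithm):
    sweep to the disk edge, then reverse and service the rest."""
    below = [c for c in requests if c < start]
    above = [c for c in requests if c > start]
    if direction == "down":
        # travel to cylinder `low`, then back up to the highest request
        return (start - low) + ((max(above) - low) if above else 0)
    # travel to cylinder `high`, then back down to the lowest request
    return (high - start) + ((high - min(below)) if below else 0)

print(scan_total(53, [98, 183, 37, 122, 14, 124, 65, 67]))  # 236 cylinders
```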
C-SCAN Scheduling
The Circular-SCAN algorithm improves upon SCAN by treating all requests in a circular queue fashion: once the head reaches the end of the disk, it returns to the other end without processing any requests, and then begins its scan again from the beginning of the disk:
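A sketch of the C-SCAN movement total, under the same assumptions as above (200-cylinder disk, textbook queue, head sweeping upward) and counting the fly-back jump as full head travel, a convention that varies between texts:

```python
def cscan_total(start, requests, low=0, high=199):
    """Total head movement for C-SCAN: sweep up to the high edge,
    jump back to the low edge (servicing nothing), then sweep up again."""
    below = [c for c in requests if c < start]
    total = high - start              # finish the upward sweep
    if below:
        total += high - low           # fly back to the low edge
        total += max(below) - low     # climb to the last request below start
    return total

print(cscan_total(53, [98, 183, 37, 122, 14, 124, 65, 67]))  # 382 cylinders
```

The payoff is not a shorter total but a more uniform wait time: a request just behind the head waits roughly one full sweep rather than up to two.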
LOOK Scheduling
LOOK scheduling improves upon SCAN by looking ahead at the queue of pending requests, and not moving the head any farther toward the end of the disk than is necessary. The following diagram illustrates the circular form of LOOK:
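The circular form (C-LOOK) can be sketched the same way: sweep only as far as the highest pending request, then jump directly to the lowest pending request. Same assumptions as the earlier sketches, with the jump counted as head travel:

```python
def clook_total(start, requests):
    """Total head movement for C-LOOK: sweep up only to the highest
    request, then jump to the lowest request and continue upward."""
    below = [c for c in requests if c < start]
    above = [c for c in requests if c > start]
    total = (max(above) - start) if above else 0   # upward sweep, no edge travel
    if below and above:
        total += max(above) - min(below)           # jump back to the lowest request
    if below:
        total += max(below) - min(below)           # finish the low group
    return total

print(clook_total(53, [98, 183, 37, 122, 14, 124, 65, 67]))  # 322 cylinders
```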
Selection of a Disk-Scheduling Algorithm
* With very low loads all algorithms perform the same, since there will normally only be one request to process at a time.
* For moderately larger loads, SSTF offers better performance than FCFS, but may lead to starvation when loads become heavy enough.
* For busier systems, the SCAN and LOOK algorithms eliminate starvation problems.
* The actual optimal algorithm may be something even more complex than those discussed here, but the incremental improvements are generally not worth the additional overhead.
* Some improvement to overall file system access times can be made by intelligent placement of directory and/or inode information. If those structures are placed in the middle of the disk instead of at the beginning of the disk, then the maximum distance from those structures to data blocks is reduced to only half of the disk size. If those structures can be further distributed, and furthermore have their data blocks stored as close as possible to the corresponding directory structures, then that reduces still further the overall time to find the disk block numbers and then access the corresponding data blocks.
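The halving claim above is easy to verify with a little arithmetic. The snippet assumes a hypothetical 200-cylinder disk and compares the worst-case seek distance from the metadata region to any data cylinder for the two placements:

```python
# Hypothetical 200-cylinder disk: worst-case distance from the
# directory/inode region to any data cylinder, for two placements.
CYLS = 200
at_start  = max(abs(c - 0) for c in range(CYLS))          # metadata at cylinder 0
at_middle = max(abs(c - CYLS // 2) for c in range(CYLS))  # metadata mid-disk
print(at_start, at_middle)  # 199 100
```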
* On modern disks the rotational latency can be almost as significant as the seek time; however, it is not within the OS's control to account for that, because modern disks do not reveal their internal sector mapping schemes (particularly when bad blocks have been remapped to spare sectors).
* Some disk manufacturers provide for disk scheduling algorithms directly on their disk controllers (which do know the actual geometry of the disk as well as any remapping), so that if a series of requests is sent from the computer to the controller, those requests can be processed in an optimal order.
* Unfortunately there are some considerations that the OS must take into account that are beyond the abilities of the on-board disk-scheduling algorithms, such as the priority of some requests over others, or the need to process certain requests in a particular order. For this reason OSes may elect to spoon-feed requests to the disk controller one at a time in certain situations.