Thrashing
* If a process cannot be allocated the minimum number of frames it requires, then it must be swapped out, freeing up frames for other processes. This is the intermediate level of CPU scheduling.
* But what about a process that can get its minimum, but cannot get all of the frames it is currently using on a regular basis? In this case it is forced to page out pages that it will need again in the very near future, leading to a large number of page faults.
* A process that is spending more time paging than executing is said to be thrashing.
Cause of Thrashing
* Early process scheduling schemes would control the level of multiprogramming allowed based on CPU utilization, adding more processes when CPU utilization was low.
* The problem is that when memory filled up and processes started spending lots of time waiting for their pages to page in, CPU utilization would drop, causing the scheduler to add even more processes and exacerbate the problem! Eventually the system would essentially grind to a halt.
* Local page replacement policies can prevent one thrashing process from taking pages away from other processes, but the thrashing process still tends to clog up the I/O queue, thereby slowing down any other process that needs to do even a little bit of paging (or any other I/O, for that matter).
* To prevent thrashing we must give each process as many frames as it really needs "right now", but how do we know what that is?
* The locality model notes that processes typically make memory references within a given locality, making lots of references to the same general area of memory before periodically moving to a new locality, as shown in the Figure below. If we could just keep as many frames as are involved in the current locality, then page faulting would occur primarily on switches from one locality to another (e.g. when one function exits and another is called).
Working-Set Model
The working-set model is based on the concept of locality, and defines a working-set window of length delta. Whatever pages are included in the most recent delta page references are said to be in the process's working-set window, and constitute its current working set, as illustrated in the Figure below:
* The selection of delta is critical to the success of the working-set model - if it is too small then it does not encompass all of the pages of the current locality, and if it is too large, then it encompasses pages that are no longer being frequently accessed.
* The total demand, D, is the sum of the sizes of the working sets of all processes. If D exceeds the total number of available frames, then at least one process is thrashing, because there are not enough frames available to satisfy its minimum working set. If D is well below the number of currently available frames, then additional processes can be launched.
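The window, working sets, and total demand D described above can be sketched in a few lines. This is a minimal illustration, not an OS implementation; the reference strings and frame count are hypothetical:

```python
def working_set(refs, delta):
    """Pages referenced in the most recent `delta` page references."""
    return set(refs[-delta:])

# Hypothetical reference strings for two processes.
p1 = [1, 2, 1, 3, 1, 2, 1, 2]
p2 = [7, 7, 8, 9, 8, 9, 9, 8]

delta = 4
ws1 = working_set(p1, delta)   # P1's current locality: {1, 2}
ws2 = working_set(p2, delta)   # P2's current locality: {8, 9}

# Total demand D is the sum of the working-set sizes.
D = len(ws1) + len(ws2)
available_frames = 8
if D > available_frames:
    print("D exceeds available frames: at least one process will thrash")
else:
    print(f"D = {D} <= {available_frames} frames: room to admit more processes")
```

Note how a small delta here (e.g. 2) would miss page 3 from P1's recent history, while a very large delta would drag in pages from old localities, mirroring the trade-off described above.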
* The hard part of the working-set model is keeping track of which pages are in the current working set, since every reference adds one page to the set and removes an older one. An approximation can be made using reference bits and a timer that goes off after a fixed interval of memory references:
• For example, suppose that we set the timer to go off after every 5,000 references (by any process), and that we can store two additional historical reference bits in addition to the current reference bit.
• Every time the timer goes off, the current reference bit is copied to one of the two historical bits, and then cleared.
• If any one of the three bits is set, then that page was referenced within the last 15,000 references, and is considered to be in that process's working set.
• Finer resolution can be achieved with more historical bits and a more frequent timer, at the expense of greater overhead.
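The reference-bit approximation above can be simulated to show how a page ages out of the working set. The `Page` class, `access`, and `timer_interrupt` names are invented for this sketch; real hardware sets the reference bit, and the OS handles the timer interrupt:

```python
class Page:
    def __init__(self):
        self.ref = 0        # current reference bit, set on each access
        self.hist = [0, 0]  # two historical reference bits

def access(page):
    page.ref = 1            # "hardware" sets the bit on a memory reference

def timer_interrupt(pages):
    """Every N references: shift the current bit into history, then clear it."""
    for p in pages:
        p.hist = [p.ref, p.hist[0]]
        p.ref = 0

def in_working_set(page):
    # Referenced within the last three intervals (~3N references).
    return page.ref == 1 or any(page.hist)

# Demo with a hypothetical 3-page process.
pages = [Page() for _ in range(3)]
access(pages[0])
timer_interrupt(pages)   # interval 1 ends: page 0's bit moves to history
access(pages[1])
timer_interrupt(pages)   # interval 2 ends
timer_interrupt(pages)   # interval 3 ends: page 0 has aged out
print([in_working_set(p) for p in pages])  # page 1 remains; 0 and 2 are out
```

With N = 5,000 as in the example above, a page stays in the estimated working set for up to three intervals, i.e. 15,000 references, after its last use.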
Page-Fault Frequency
* A more direct approach is to recognize that what we really want to control is the page-fault rate, and to allocate frames based on this directly measurable value. If the page-fault rate exceeds a certain upper bound then that process needs more frames, and if it is below a given lower bound, then it can afford to give up some of its frames to other processes.
* ( I suppose a page-replacement strategy could be devised that would choose victim frames from the process with the lowest current page-fault frequency. )
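The page-fault-frequency policy above amounts to a simple control loop. A minimal sketch follows; the specific thresholds and the step size of one frame are hypothetical choices, not values from the text:

```python
def adjust_frames(frames, fault_rate, lower=0.02, upper=0.10, step=1):
    """Page-fault-frequency control (hypothetical thresholds).

    fault_rate is page faults per memory reference. Above the upper
    bound, grant more frames; below the lower bound, reclaim some.
    """
    if fault_rate > upper:
        return frames + step              # process needs more memory
    if fault_rate < lower:
        return max(1, frames - step)      # can afford to give frames up
    return frames                         # within the acceptable band

print(adjust_frames(10, 0.15))  # faulting too often -> grow to 11 frames
print(adjust_frames(10, 0.01))  # barely faulting   -> shrink to 9 frames
print(adjust_frames(10, 0.05))  # acceptable band   -> stay at 10 frames
```

If no free frames exist when a process needs to grow, the system must suspend (swap out) some process, which is exactly the thrashing remedy described at the start of this section.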