
DEMAND PAGING

Demand Paging
The fundamental idea behind demand paging is that when a process is swapped in, its pages are not swapped in all at once. Rather, they are swapped in only when the process requires them (on demand). This is termed a lazy swapper, although a pager is a more accurate term.
Basic Concepts
* The basic idea behind demand paging is that when a process is swapped in, the pager loads into memory only those pages that it expects the process to need right away.
* Pages that are not loaded into memory are marked as invalid in the page table, using the invalid bit. (The rest of the page table entry may either be empty or contain the information needed to find the swapped-out page on the hard drive.)
* If the process only ever accesses pages that are loaded in memory (memory-resident pages), then the process runs exactly as if all of its pages were loaded into memory.
* On the other hand, if a page is needed that was not originally loaded, then a page fault trap is generated, which must be handled in a series of steps (a sketch in C follows this list):
1. The memory address requested is first checked, to make sure it was a valid memory reference.
2. If the reference was invalid, the process is terminated. Otherwise, the page must be paged in.
3. A free frame is located, possibly from a free-frame list.
4. A disk operation is scheduled to bring in the required page from disk. (This will usually block the process on an I/O wait, allowing some other process to use the CPU in the meantime.)
5. When the I/O operation is complete, the process's page table is updated with the new frame number, and the invalid bit is cleared to indicate that this is now a valid page reference.
6. The instruction that caused the page fault must now be restarted from the beginning (as soon as this process gets another turn on the CPU).
* In the extreme case, NO pages are swapped in for a process until they are requested by page faults. This is known as pure demand paging.
* In theory each instruction could generate multiple page faults. In practice this is very rare, due to locality of reference.
* The hardware required to support virtual memory is the same as for paging and swapping: a page table and secondary memory.
* A crucial part of the process is that the instruction must be restarted from scratch once the desired page has been made available in memory. For most simple instructions this is not a major difficulty. However, some architectures allow a single instruction to modify a fairly large block of data (which may span a page boundary), and if some of the data gets modified before the page fault occurs, this could cause problems. One solution is to access both ends of the block before executing the instruction, guaranteeing that the necessary pages get paged in before the instruction begins.
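The six-step sequence above can be made concrete with a small simulation. The C sketch below is purely illustrative: the structure and function names (pte, access_page, read_page_from_disk) and the frame and disk-block numbers are hypothetical, and the real hardware trap, scheduling, and page replacement are omitted; it only mirrors the steps listed above.

/* Illustrative simulation of the page-fault handling steps above.
 * All names and numbers here are hypothetical, not a real kernel API. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_PAGES  8      /* pages in the process's logical address space */
#define NUM_FRAMES 4      /* physical frames available (assumed) */

struct pte {              /* one page-table entry */
    bool valid;           /* invalid bit: is the page memory resident? */
    int  frame;           /* frame number when valid */
    long disk_block;      /* where to find the page on the backing store */
};

static struct pte page_table[NUM_PAGES];
static int free_frames[NUM_FRAMES] = { 0, 1, 2, 3 };
static int free_count = NUM_FRAMES;

/* Step 4: schedule a disk read (simulated here by a message). */
static void read_page_from_disk(long disk_block, int frame)
{
    printf("  disk: reading block %ld into frame %d\n", disk_block, frame);
}

/* Handle an access to logical page `page`. */
static bool access_page(int page)
{
    /* Step 1: check that the reference is valid. */
    if (page < 0 || page >= NUM_PAGES) {
        fprintf(stderr, "invalid reference: terminate process\n");
        return false;                       /* Step 2: abort */
    }
    if (page_table[page].valid) {
        printf("page %d resident in frame %d\n", page, page_table[page].frame);
        return true;                        /* no fault */
    }

    printf("page fault on page %d\n", page);

    /* Step 3: find a free frame (page replacement omitted). */
    if (free_count == 0) {
        fprintf(stderr, "no free frame: would need page replacement\n");
        return false;
    }
    int frame = free_frames[--free_count];

    /* Step 4: bring the page in from disk. */
    read_page_from_disk(page_table[page].disk_block, frame);

    /* Step 5: update the page table and clear the invalid bit. */
    page_table[page].frame = frame;
    page_table[page].valid = true;

    /* Step 6: the faulting instruction would now be restarted. */
    printf("page %d now resident in frame %d; restart instruction\n",
           page, frame);
    return true;
}

int main(void)
{
    for (int i = 0; i < NUM_PAGES; i++)
        page_table[i] = (struct pte){ .valid = false, .disk_block = 100 + i };

    access_page(2);   /* faults, gets paged in */
    access_page(2);   /* now memory resident, no fault */
    access_page(9);   /* invalid reference */
    return 0;
}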

Performance of Demand Paging
* Obviously there is some slowdown and performance hit whenever a page fault occurs and the system has to go get the page from disk, but just how big a hit is it exactly?
* There are many steps that occur when servicing a page fault (see the book for full details), and some of the steps are optional or variable. But just for the sake of discussion, suppose that a normal memory access takes 200 nanoseconds, and that servicing a page fault takes 8 milliseconds (8,000,000 nanoseconds, or 40,000 times a normal memory access). With a page fault rate of p (on a scale from 0 to 1), the effective access time is now:
( 1 - p ) * 200 + p * 8,000,000
= 200 + 7,999,800 * p
which clearly depends heavily on p! Even if only one access in 1,000 causes a page fault, the effective access time rises from 200 nanoseconds to 8.2 microseconds, a slowdown of a factor of about 40. In order to keep the slowdown below 10%, the page fault rate must be less than 0.0000025, or one in 399,990 accesses. (A small program reproducing this arithmetic follows this list.)
* A subtlety is that swap space is faster to access than the regular file system, because it does not have to go through the whole directory structure. For this reason some systems will copy an entire process from the file system to swap space before starting up the process, so that future paging all occurs from the (relatively) faster swap space.
* Some systems do demand paging directly from the file system for binary code (which never changes and hence never has to be written back out on a page-out operation), and save the swap space for data segments that must be stored. This approach is used by both Solaris and BSD Unix.
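The following C snippet evaluates the effective-access-time formula above using the 200 ns and 8 ms figures from the discussion; the variable names are chosen here purely for illustration.

/* Effective access time = (1 - p) * 200 ns + p * 8,000,000 ns,
 * using the illustrative figures from the text. */
#include <stdio.h>

int main(void)
{
    const double mem_ns   = 200.0;        /* normal memory access */
    const double fault_ns = 8000000.0;    /* 8 ms to service a page fault */

    double rates[] = { 0.001, 0.0000025 };          /* the two rates discussed */
    for (int i = 0; i < 2; i++) {
        double p   = rates[i];
        double eat = (1.0 - p) * mem_ns + p * fault_ns;  /* = 200 + 7,999,800 p */
        printf("p = %.7f  ->  effective access time = %.1f ns (slowdown %.1fx)\n",
               p, eat, eat / mem_ns);
    }
    return 0;   /* prints ~8199.8 ns (41x) and ~220.0 ns (1.1x) */
}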

Copy-on-Write
* The idea behind a copy-on-write fork is that the pages of the parent process do not have to be actually copied for the child until one or the other of the processes modifies a page. They can simply be shared between the two processes in the meantime, with a bit set to indicate that the page needs to be copied if it ever gets written to. This is a reasonable approach, since the child process usually issues an exec( ) system call immediately after the fork (see the sketch after this list).
* Obviously only pages that can be modified need to be labeled as copy-on-write. Code segments can simply be shared.
* Pages used to satisfy copy-on-write duplications are typically allocated using zero-fill-on-demand, meaning that their previous contents are zeroed out before the copy proceeds.
* Several systems provide an alternative to the fork( ) system call called a virtual memory fork, vfork( ). In this case the parent is suspended, and the child uses the parent's memory pages. This is very fast for process creation, but requires that the child not modify any of the shared memory pages before performing the exec( ) system call. (In essence this addresses the question of which process executes first after a call to fork, the parent or the child. With vfork, the parent is suspended, allowing the child to execute first until it calls exec( ), sharing pages with the parent in the meantime.)
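As an illustration of why copy-on-write pays off, the small POSIX C program below follows the fork-then-exec pattern described above: the child calls exec( ) almost immediately, so virtually none of the shared pages are ever written to and therefore never need to be copied. The command run by the child ("ls -l") is just an arbitrary example.

/* fork-then-exec: the pattern that makes copy-on-write effective. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();          /* parent and child share pages copy-on-write */

    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {
        /* Child: exec immediately, so the shared pages are never written
         * (and therefore never copied) before the new image replaces them. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");        /* only reached if exec fails */
        _exit(EXIT_FAILURE);
    }

    /* Parent: wait for the child; its own pages were never duplicated. */
    waitpid(pid, NULL, 0);
    return EXIT_SUCCESS;
}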
