

OPERATING SYSTEM STRUCTURE

       Figure: MS-DOS layer structure
MS-DOS was written to provide the most functionality in the least space, so it was not divided into modules carefully. Its interfaces and levels of functionality are not well separated. For instance, application programs are able to access the basic I/O routines to write directly to the display and disk drives. Such freedom leaves MS-DOS vulnerable to errant (or malicious) programs: when a user program fails, it can crash the entire system. Of course, MS-DOS was also limited by the hardware of its era.
Another example of limited structuring is the original UNIX operating system, which was likewise constrained at first by hardware functionality. It consists of two separable parts: the kernel and the system programs. The kernel is further separated into a series of interfaces and device drivers, which have been added and expanded over the years as UNIX has evolved.
Figure: UNIX system structure
Layered Approach:
With a carefully structured design, the operating system can retain much greater control over the computer and over the applications that make use of that computer, and implementers have more freedom in changing the inner workings of the system and in creating modular operating systems. Under the top-down approach, the overall functionality and features are determined and are separated into components. Information hiding is also important, because it leaves programmers free to implement the low-level routines as they see fit, provided that the external interface of the routine stays unchanged and that the routine itself performs the advertised task.
A system can be made modular in many ways. One method is the layered approach, in which the operating system is broken up into a number of layers (levels). The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface.
Figure: Layered operating system
An operating-system layer is an implementation of an abstract object made up of data and the operations that can manipulate those data. A typical operating-system layer, say, layer M, consists of data structures and a set of routines that can be invoked by higher-level layers. Layer M, in turn, can invoke operations on lower-level layers.
The main advantage of the layered approach is simplicity of construction and debugging. The layers are selected so that each uses functions (operations) and services of only lower-level layers. This approach simplifies debugging and system verification. The first layer can be debugged without any concern for the rest of the system, because, by definition, it uses only the basic hardware (which is assumed correct) to implement its functions. Once the first layer is debugged, its correct functioning can be assumed while the second layer is debugged, and so on. If an error is found during the debugging of a particular layer, the error must be on that layer, because the layers below it are already debugged. Thus, the design and implementation of the system are simplified.
Each layer is implemented with only those operations provided by lower-level layers. A layer does not need to know how these operations are implemented; it needs to know only what these operations do. Hence, each layer hides the existence of certain data structures, operations, and hardware from higher-level layers.
The major difficulty with the layered approach involves appropriately defining the various layers, since a layer may use only the layers below it. For example, the backing-store driver must sit below the memory-management routines, which need to use the backing store; yet it would normally be above the CPU scheduler, because the driver may need to wait for I/O and the CPU can be rescheduled during this time. A final problem with layered implementations is that they tend to be less efficient than other types. For instance, when a user program executes an I/O operation, it executes a system call that is trapped to the I/O layer, which calls the memory-management layer, which in turn calls the CPU-scheduling layer, and the request is then passed to the hardware.
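The discipline that each layer uses only the layer beneath it can be sketched in C. Everything below is a hypothetical illustration (the functions hw_read_sector, blk_read, and fs_read_superblock are ours, and the "hardware" is a simulated array), but it shows the shape: each routine calls only one level down and hides that level's details behind its own interface.

#include <stdio.h>
#include <string.h>

/* Layer 0: hardware. Only this layer touches the (simulated) device. */
static char disk[16][512];                   /* pretend disk: 16 sectors */

static int hw_read_sector(int sector, char *buf) {
    if (sector < 0 || sector >= 16) return -1;
    memcpy(buf, disk[sector], 512);
    return 0;
}

/* Layer 1: block I/O. Calls ONLY layer 0; callers never see sectors. */
static int blk_read(int block, char *buf) {
    return hw_read_sector(block, buf);       /* one block per sector here */
}

/* Layer 2: file system. Calls ONLY layer 1; callers never see blocks. */
static int fs_read_superblock(char *buf) {
    return blk_read(0, buf);                 /* block 0 holds the superblock */
}

int main(void) {
    char buf[512];
    strcpy(disk[0], "superblock contents");  /* seed the pretend disk */
    if (fs_read_superblock(buf) == 0)
        printf("read: %s\n", buf);           /* request crossed every layer */
    return 0;
}

The sketch also makes the efficiency complaint visible: the single read in main() crosses every interface on its way down to the "hardware".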
Microkernels:
As UNIX expanded, the kernel became large and difficult to manage. In the mid-1980s, researchers at Carnegie Mellon University developed an operating system called Mach that modularized the kernel using the microkernel approach. This method structures the operating system by removing all nonessential components from the kernel and implementing them as system-level and user-level programs. The result is a smaller kernel. Microkernels typically provide minimal process and memory management, in addition to a communication facility. The main function of the microkernel is to provide a communication facility, usually message passing, between the client program and the various services that are also running in user space.
One benefit of the microkernel approach is ease of extending the operating system. All new services are added to user space and consequently do not require modification of the kernel. When the kernel does have to be modified, the changes tend to be fewer, because the microkernel is a smaller kernel. The resulting operating system is easier to port from one hardware design to another. The microkernel also provides more security and reliability, since most services run as user rather than kernel processes. If a service fails, the rest of the operating system remains untouched.
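The message-passing structure can be sketched in a few lines of C. The names below (msg_t, ipc_send, ipc_receive, FS_PORT) are hypothetical, and the one-slot mailboxes merely simulate kernel message queues; real Mach primitives differ. The point is the shape: a client never calls a service directly; it sends a message through the kernel to a server that is itself an ordinary user-space process.

#include <stdio.h>
#include <string.h>

/* Hypothetical message format: requested operation plus a small payload. */
typedef struct {
    int  op;               /* requested operation, e.g. OP_READ */
    char payload[64];      /* operation-specific data */
} msg_t;

enum { FS_PORT = 0, NUM_PORTS = 4, OP_READ = 1 };

/* One-slot mailboxes standing in for kernel message queues. A real
 * microkernel would block the caller and copy between address spaces. */
static msg_t mailbox[NUM_PORTS];
static int   full[NUM_PORTS];

static int ipc_send(int port, const msg_t *m) {   /* "trap" to kernel */
    if (full[port]) return -1;                     /* queue full */
    mailbox[port] = *m;
    full[port] = 1;
    return 0;
}

static int ipc_receive(int port, msg_t *m) {
    if (!full[port]) return -1;                    /* nothing waiting */
    *m = mailbox[port];
    full[port] = 0;
    return 0;
}

int main(void) {
    /* Client: ask the (user-space) file server to read a file. */
    msg_t req = { .op = OP_READ };
    strcpy(req.payload, "/etc/motd");
    ipc_send(FS_PORT, &req);

    /* File server: pick up the request from its port. */
    msg_t in;
    if (ipc_receive(FS_PORT, &in) == 0)
        printf("server got op %d for %s\n", in.op, in.payload);
    return 0;
}

Because the file server here is just another user process, its failure would leave the kernel and the other services untouched, which is the reliability argument made above.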
Modules:
The best current methodology for operating-system design involves using object-oriented programming techniques to create a modular kernel. Here, the kernel has a set of core components and dynamically links in additional services either during boot time or during run time. Such a strategy uses dynamically loadable modules and is common in modern implementations of UNIX, such as Solaris, Linux, and Mac OS X. Solaris, for example, is organized around a core kernel with seven types of loadable kernel modules:
1. Scheduling classes
2. File systems
3. Loadable system calls
4. Executable formats
5. STREAMS modules
6. Miscellaneous
7. Device and bus drivers
Such a design allows the kernel to provide core services yet also allows certain features to be implemented dynamically. The overall result resembles a layered system in that each kernel section has defined, protected interfaces; but it is more flexible than a layered system in that any module can call any other module. The approach is like the microkernel approach in that the primary module has only core functions and knowledge of how to load and communicate with other modules; but it is more efficient, because modules do not need to invoke message passing in order to communicate.
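Linux makes this concrete with loadable kernel modules. Below is a minimal module sketch (the name hello_mod is ours); built against the kernel headers, it can be inserted and removed at run time, without rebooting or rebuilding the kernel.

/* hello_mod.c - minimal loadable kernel module sketch */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

static int __init hello_init(void)
{
    printk(KERN_INFO "hello_mod: loaded\n");   /* runs at insertion */
    return 0;
}

static void __exit hello_exit(void)
{
    printk(KERN_INFO "hello_mod: unloaded\n"); /* runs at removal */
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal loadable kernel module sketch");

A typical session would be sudo insmod hello_mod.ko followed later by sudo rmmod hello_mod, with the printk output visible via dmesg.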
The Apple Macintosh Mac OS X operating system uses a hybrid structure. Mac OS X (also known as Darwin) structures the operating system using a layered technique in which one layer consists of the Mach microkernel. The top layers include application environments and a set of services providing a graphical interface to applications. Below these layers is the kernel environment, which consists primarily of the Mach microkernel and the BSD kernel. Mach provides memory management; support for remote procedure calls (RPCs) and interprocess communication (IPC) facilities, including message passing; and thread scheduling. The BSD component provides a BSD command-line interface, support for networking and file systems, and an implementation of POSIX APIs, including Pthreads.
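Since the BSD layer supplies the POSIX APIs, standard Pthreads code runs unchanged on Mac OS X. The short program below uses the real pthread_create/pthread_join interface (the worker function and its message are our own illustration):

#include <pthread.h>
#include <stdio.h>

/* Thread body: runs concurrently with main(). */
static void *worker(void *arg)
{
    const char *name = arg;
    printf("hello from %s\n", name);
    return NULL;
}

int main(void)
{
    pthread_t tid;

    /* Create a thread; on Mac OS X this is serviced by the
     * BSD/POSIX layer described above. */
    if (pthread_create(&tid, NULL, worker, "a POSIX thread") != 0) {
        perror("pthread_create");
        return 1;
    }
    pthread_join(tid, NULL);   /* wait for the thread to finish */
    return 0;
}

On most POSIX systems this builds with cc prog.c -pthread.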
