
OPERATING SYSTEM STRUCTURE

       Figure: MS-DOS layer structure
          MS-DOS was written to provide the most functionality in the least space, so it was not divided into modules carefully. In MS-DOS, the interfaces and levels of functionality are not well separated. For instance, application programs are able to access the basic I/O routines to write directly to the display and disk drives. Such freedom leaves MS-DOS vulnerable to errant (or malicious) programs, so the entire system can crash when a user program fails. Of course, MS-DOS was also limited by the hardware of its era.
          Another example of limited structuring is the original UNIX operating system, which was likewise constrained initially by hardware functionality. It consists of two separable parts: the kernel and the system programs. The kernel is further separated into a series of interfaces and device drivers, which have been added and expanded over the years as UNIX has evolved.
        Figure: Unix system structure
Layered Approach:
          With a well-defined structure, the operating system can retain much greater control over the computer and over the applications that make use of that computer. Implementers also have more freedom to change the inner workings of the system and to create modular operating systems. Under the top-down approach, the overall functionality and features are determined first and then separated into components. Information hiding is also important, because it leaves programmers free to implement low-level routines as they see fit, provided that the external interface of each routine stays unchanged and that the routine performs its advertised task.
          A system can be made modular in many ways. One method is the layered approach, in which the operating system is broken up into a number of layers (levels). The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface.
Figure: Layered operating system
          An operating-system layer is an implementation of an abstract object made up of data and the operations that can manipulate those data. A typical operating-system layer, say layer M, consists of data structures and a set of routines that can be invoked by higher-level layers. Layer M, in turn, can invoke operations on lower-level layers.
          The main advantage of the layered approach is simplicity of construction and debugging. The layers are selected so that each uses the functions (operations) and services of only lower-level layers. This approach simplifies debugging and system verification. The first layer can be debugged without any concern for the rest of the system, because, by definition, it uses only the basic hardware (which is assumed correct) to implement its functions. Once the first layer is debugged, its correct functioning can be assumed while the second layer is debugged, and so on. If an error is found during the debugging of a particular layer, the error must be on that layer, because the layers below it are already debugged. Thus, the design and implementation of the system are simplified.
          Each layer is implemented using only the operations provided by lower-level layers. A layer does not need to know how these operations are implemented; it needs to know only what they do. Hence, each layer hides the existence of certain data structures, operations, and hardware from higher-level layers.
          The major difficulty with the layered approach involves appropriately defining the various layers. For example, the backing-store driver would normally be above the CPU scheduler, because the driver may need to wait for I/O and the CPU can be rescheduled during this time. On a large system, however, the CPU scheduler may keep more information about active processes than can fit in memory, so that information must be swapped in and out of backing store, which would require the backing-store driver to sit below the CPU scheduler.
          A final problem with layered implementations is that they tend to be less efficient than other types. For instance, when a user program executes an I/O operation, it executes a system call that is trapped to the I/O layer, which calls the memory-management layer, which in turn calls the CPU-scheduling layer, which then passes the request to the hardware. At each layer the parameters may be modified and data may be passed along, so each layer adds overhead to the system call.
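          As a rough sketch (not part of the original text), the following C program mimics this strict layering: every layer exposes one routine and calls only the layer directly below it, so a single "system call" issued at the top passes through each intermediate layer before reaching the simulated hardware. The layer names and functions are purely illustrative.

/* A toy illustration of strict layering: each layer calls only the layer
 * directly below it.  Layer 0 = "hardware", layer 3 = "I/O system".
 * All names here are hypothetical; they do not correspond to a real OS. */
#include <stdio.h>

/* Layer 0: hardware (simulated) */
static void hw_write_block(int block, const char *data) {
    printf("[layer 0: hardware] writing block %d: %s\n", block, data);
}

/* Layer 1: CPU scheduling / low-level services (uses layer 0 only) */
static void sched_submit_io(int block, const char *data) {
    printf("[layer 1: scheduler] queueing I/O, rescheduling CPU\n");
    hw_write_block(block, data);
}

/* Layer 2: memory management (uses layer 1 only) */
static void mm_flush_buffer(int block, const char *data) {
    printf("[layer 2: memory mgmt] locating buffer for block %d\n", block);
    sched_submit_io(block, data);
}

/* Layer 3: I/O system (uses layer 2 only) */
static void io_write_file(const char *data) {
    printf("[layer 3: I/O system] write() system call trapped\n");
    mm_flush_buffer(42, data);
}

int main(void) {
    /* A single "system call" traverses every layer, illustrating the
     * per-layer overhead described above. */
    io_write_file("hello, layered world");
    return 0;
}

          Notice that layer 3 never calls layer 0 or layer 1 directly; every request pays the cost of crossing each intermediate layer, which is exactly the efficiency drawback noted above.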
Microkernels:
As UNIX expanded, the kernel became large and difficult to manage. In the mid-1980s, researchers at Carnegie Mellon University developed an operating system called Mach that modularized the kernel using the microkernel approach. This method structures the operating system by removing all nonessential components from the kernel and implementing them as system-level and user-level programs. The result is a smaller kernel. Microkernels typically provide minimal process and memory management, in addition to a communication facility. The main function of the microkernel is to provide a communication facility between the client program and the various services that are also running in user space.
One benefit of the microkernel approach is ease of extending the operating system. All new services are added to user space and consequently do not require modification of the kernel. When the kernel does have to be modified, the changes tend to be fewer, because the microkernel is a smaller kernel. The resulting operating system is also easier to port from one hardware design to another. The microkernel provides more security and reliability as well, since most services run as user processes rather than kernel processes. If a service fails, the rest of the operating system remains untouched.
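          The client-service message passing described above can be sketched as follows. This is a hypothetical, single-process simulation in C; the "kernel", "client", and "file service" names are illustrative and do not belong to any real microkernel API. The client never calls the file service directly; it addresses a message to the service and lets the kernel's communication facility deliver it and return the reply.

/* A minimal, self-contained sketch (hypothetical, not a real microkernel):
 * a client and a "file service" communicate only through messages routed
 * by a tiny "kernel" mailbox, never by calling into each other's code. */
#include <stdio.h>
#include <string.h>

struct message {
    int  dest;        /* service id the message is addressed to */
    char body[64];    /* request or reply payload */
};

#define FILE_SERVICE 1

/* A user-space service: handles requests it receives as messages. */
static struct message file_service_handle(struct message req) {
    struct message reply = { .dest = 0 };
    printf("[file service] got request: %s\n", req.body);
    snprintf(reply.body, sizeof reply.body, "contents of %s", req.body);
    return reply;
}

/* The "microkernel": its only job here is to deliver a message to the
 * service it is addressed to and hand back the reply. */
static struct message kernel_send(struct message req) {
    printf("[kernel] routing message to service %d\n", req.dest);
    if (req.dest == FILE_SERVICE)
        return file_service_handle(req);
    struct message err = { .dest = 0 };
    strcpy(err.body, "no such service");
    return err;
}

int main(void) {
    /* The client only sends a message through the kernel's
     * communication facility; it has no direct link to the service. */
    struct message req = { .dest = FILE_SERVICE };
    strcpy(req.body, "readme.txt");
    struct message reply = kernel_send(req);
    printf("[client] reply: %s\n", reply.body);
    return 0;
}

          A real microkernel would deliver such messages between separate address spaces, but the shape of the interaction is the same: services are ordinary user-level programs reached only through message passing.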
Modules:
The best current methodology for operating-system design involves using object-oriented programming techniques to create a modular kernel. Here, the kernel has a set of core components and links in additional services dynamically, either at boot time or at run time. Such a strategy uses dynamically loadable modules and is common in modern implementations of UNIX, such as Solaris, Linux, and Mac OS X. Solaris, for example, is organized around a core kernel with seven types of loadable kernel modules:
1. Scheduling classes
2. File systems
3. Loadable system calls
4. Executable formats
5. STREAMS modules
6. Miscellaneous
7. Device and bus drivers
          Such a design allows the kernel to provide core services yet also allows certain features to be implemented dynamically. The overall result resembles a layered system in that each kernel section has defined, protected interfaces; but it is more flexible than a layered system in that any module can call any other module. The approach is also like the microkernel approach in that the primary module has only core functions and knowledge of how to load and communicate with other modules; but it is more efficient, because modules do not need to invoke message passing in order to communicate.
          The Apple Mac OS X operating system uses a hybrid structure. Mac OS X (also known as Darwin) structures the operating system using a layered technique in which one layer consists of the Mach microkernel. The top layers include application environments and a set of services providing a graphical interface to applications. Below these layers is the kernel environment, which consists primarily of the Mach microkernel and the BSD kernel. Mach provides memory management; support for remote procedure calls (RPCs) and interprocess communication (IPC) facilities, including message passing; and thread scheduling. The BSD component provides a BSD command-line interface, support for networking and file systems, and an implementation of POSIX APIs, including Pthreads.
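          To illustrate the dynamically loadable modules mentioned above for Linux, here is a minimal "hello world" kernel-module sketch. It builds against the kernel headers using the kernel's module build system rather than as a standalone program, and its messages appear in the kernel log when the module is inserted with insmod and removed with rmmod; the module name and messages are placeholders.

/* hello_module.c - a minimal loadable kernel module sketch for Linux.
 * Built against kernel headers; loaded with insmod, unloaded with rmmod. */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example of a dynamically loadable kernel module");

/* Called when the module is linked into the running kernel (insmod). */
static int __init hello_init(void)
{
    printk(KERN_INFO "hello_module: loaded into the kernel\n");
    return 0;                /* 0 means successful initialization */
}

/* Called when the module is removed from the kernel (rmmod). */
static void __exit hello_exit(void)
{
    printk(KERN_INFO "hello_module: removed from the kernel\n");
}

module_init(hello_init);
module_exit(hello_exit);

          Because the module is linked into the running kernel at load time, a new service can be added without recompiling or rebooting the kernel, which is precisely the flexibility the modular approach is meant to provide.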
