
Posts

Showing posts from March, 2021

Enhancing Indoor Air Quality: A Guide to Better Health and Comfort

In today's world, where we spend a significant amount of our time indoors, the quality of the air we breathe inside our homes and workplaces is crucial for our health and well-being. Poor indoor air quality (IAQ) can lead to various health issues, including allergies, respiratory problems, and even long-term conditions. This blog post explores effective strategies for managing and improving indoor air quality.

Understanding Indoor Air Pollutants

Indoor air pollutants can originate from various sources:
* Biological Pollutants: Mold, dust mites, and pet dander.
* Chemical Pollutants: Volatile organic compounds (VOCs) from paints, cleaners, and furnishings.
* Particulate Matter: Dust, pollen, and smoke particles.

Strategies for Improving Indoor Air Quality

* Ventilation:
  * Natural Ventilation: Open windows and doors regularly to allow fresh air circulation.
  * Mechanical Ventilation: Use exhaust fans in kitchens and bathrooms to remove pollutants directly at the source.
* Air Purifiers: HEPA Filt

RAID Structure

RAID Structure

* The basic idea behind RAID is to employ a group of hard drives together with some form of duplication, either to increase reliability or to speed up operations (or sometimes both).
* RAID originally stood for Redundant Array of Inexpensive Disks, and was planned to use a number of small, cheap disks in place of one or two larger, more expensive ones. Today RAID systems employ large, possibly expensive disks as their components, changing the meaning of the acronym to Independent disks.

Improvement in Performance via Parallelism

* There is also a performance benefit to mirroring, particularly with respect to reads. Since every block of data is duplicated on multiple disks, read operations can be satisfied from any available copy, and multiple disks can be reading different data blocks at the same time in parallel. (Writes could conceivably be sped up as well through careful scheduling algorithms, but this is difficult in practice.)
* Another way of improving disk access time is with striping, whic
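As a minimal sketch of the striping idea, the mapping below distributes consecutive logical blocks across disks round-robin (a RAID 0-style layout; the function name and four-disk example are illustrative, not from the notes):

```python
def stripe_location(logical_block: int, disk_count: int) -> tuple[int, int]:
    """Map a logical block to (disk index, block offset on that disk)
    under simple round-robin (RAID 0-style) striping."""
    return logical_block % disk_count, logical_block // disk_count

# With 4 disks, consecutive logical blocks land on different disks,
# so they can be read in parallel:
assert stripe_location(0, 4) == (0, 0)
assert stripe_location(1, 4) == (1, 0)
assert stripe_location(5, 4) == (1, 1)
```

Because adjacent blocks live on different spindles, a sequential read of N blocks can keep up to `disk_count` disks busy at once.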

SWAP Space Management

Swap-Space Management

* Modern systems typically swap out pages as needed, rather than swapping out entire processes. Hence the swapping system is part of the virtual memory management system.
* Managing swap space is clearly an important task for modern OSes.

Swap-Space Use

* The amount of swap space required by an OS varies greatly according to how it is used. Some systems require an amount equal to physical RAM; some want a multiple of that; some want an amount equal to the amount by which virtual memory exceeds physical RAM; and some systems use little or none at all!
* Some systems support multiple swap spaces on separate disks in order to speed up the virtual memory system.

Swap-Space Location

Swap space can be physically located in one of two places:
* As a large file which is part of the regular file system. This is easy to implement, but inefficient. Not only must the swap space be accessed through the directory system, the file is also subject to fragmentation issues. Caching the bl

Disk scheduling

Disk Scheduling

* As mentioned earlier, disk transfer speeds are limited primarily by seek times and rotational latency. When multiple requests are to be processed, there is also some inherent delay in waiting for other requests to be processed.
* Bandwidth is measured as the amount of data transferred divided by the total amount of time from the first request being made to the last transfer being completed (for a series of disk requests).
* Both bandwidth and access time can be improved by processing requests in a good order.
* Disk requests include the disk address, memory address, number of sectors to transfer, and whether the request is for reading or writing.

FCFS Scheduling

First-Come First-Serve is simple and intrinsically fair, but not very efficient. Consider, in the following sequence, the wild swing from cylinder 122 to 14 and then back to 124:

SSTF Scheduling

* Shortest Seek Time First scheduling is more efficient, but may lead to starvation if a constant stream of requests arri
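A small simulation makes the contrast concrete. This sketch computes total head movement under FCFS and the service order under SSTF; the starting cylinder (50) and the extra requests (65, 67) are made-up values around the 122/14/124 swing mentioned above:

```python
def fcfs_distance(start, requests):
    """Total head movement when requests are serviced in arrival order."""
    total, pos = 0, start
    for cyl in requests:
        total += abs(cyl - pos)
        pos = cyl
    return total

def sstf_order(start, requests):
    """Service the closest pending request first.
    Greedy and efficient, but distant requests can starve."""
    pending, order, pos = list(requests), [], start
    while pending:
        nxt = min(pending, key=lambda c: abs(c - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

queue = [122, 14, 124, 65, 67]
print(fcfs_distance(50, queue))  # FCFS swings wildly: 122 -> 14 -> 124 ...
print(sstf_order(50, queue))     # SSTF visits nearby cylinders first
```

For this queue, FCFS moves the head 351 cylinders, while SSTF services the requests as [65, 67, 14, 122, 124], covering far less distance.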

Disk management

Disk Formatting

* Before a disk can be used, it has to be low-level formatted, which means laying down all of the headers and trailers marking the beginnings and ends of each sector. Included in the header and trailer are the linear sector numbers and error-correcting codes, ECC, which allow damaged sectors to not only be detected, but in many cases for the damaged data to be recovered (depending on the extent of the damage). Sector sizes are traditionally 512 bytes, but may be larger, particularly in larger drives.
* ECC calculation is performed with every disk read or write, and if damage is detected but the data is recoverable, then a soft error has occurred. Soft errors are normally handled by the on-board disk controller, and never seen by the OS. (See below.)
* Once the disk is low-level formatted, the next step is to partition the drive into one or more separate partitions. This step must be completed even if the disk is to be used as a single large partition, so that the partition
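To illustrate the detection half of the ECC idea, here is a toy per-sector checksum (a deliberate simplification: real ECC codes can also correct a limited number of bit errors, which a plain checksum cannot):

```python
def checksum(data: bytes) -> int:
    """Toy per-sector checksum, an illustrative stand-in for real ECC."""
    return sum(data) % 256

sector = bytes(512)                  # a freshly formatted 512-byte sector
stored = checksum(sector)            # value written in the sector's trailer

corrupted = bytes([1]) + sector[1:]  # flip one byte on the platter
assert checksum(sector) == stored
assert checksum(corrupted) != stored  # mismatch on read: damage detected
```

On a real drive this comparison happens in the controller on every read; only if the ECC cannot recover the data does the OS ever see an error.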

Disk Attachment

Disk Attachment

Disk drives can be attached either directly to a particular host (a local disk) or to a network.

Host-Attached Storage

* Local disks are accessed through I/O ports as described earlier.
* The most common interfaces are IDE or ATA, each of which allows up to two drives per host controller.
* SATA is similar with simpler cabling.
* High-end workstations or other systems that need larger numbers of disks typically use SCSI disks:
  • The SCSI standard supports up to 16 targets on each SCSI bus, one of which is normally the host adapter and the other 15 of which can be disk or tape drives.
  • A SCSI target is usually a single drive, but the standard also supports up to 8 logical units within each target. These would typically be used for accessing individual disks within a RAID array. (See below.)
  • The SCSI standard also supports multiple host adapters in a single computer, i.e. multiple SCSI busses.
  • Modern advancements in SCSI include "fast" and "wide" versions, as w

Disk structure

Disk Structure

* The traditional head-sector-cylinder, HSC, numbers are mapped to linear block addresses by numbering the first sector on the first head on the outermost track as sector 0. Numbering proceeds with the rest of the sectors on that same track, then the rest of the tracks on the same cylinder, before proceeding through the rest of the cylinders toward the center of the disk. In modern practice these linear block addresses are used in place of the HSC numbers for a variety of reasons:
  1. The linear length of tracks near the outer edge of the disk is much longer than for those tracks located near the center, and therefore it is possible to squeeze many more sectors onto outer tracks than onto inner ones.
  2. All disks have some bad sectors, and therefore disks maintain a few extra sectors that can be used in place of the bad ones. The mapping of spare sectors to bad sectors is managed internally by the disk controller.
  3. Modern hard drives have thousands of cylinders, and hundr
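The numbering scheme described above can be written as a one-line formula, assuming a uniform geometry (which, per reason 1 above, real drives do not actually have; the 16-head, 63-sector geometry below is just a classic illustrative choice):

```python
def hsc_to_lba(cylinder, head, sector, heads_per_cyl, sectors_per_track):
    """Classic head-sector-cylinder to linear-block mapping, assuming a
    uniform geometry (real drives vary sectors per track, as noted above)."""
    return (cylinder * heads_per_cyl + head) * sectors_per_track + sector

# Sector 0 on head 0 of cylinder 0 is linear block 0:
assert hsc_to_lba(0, 0, 0, 16, 63) == 0
# The next cylinder starts only after every track of the previous one:
assert hsc_to_lba(1, 0, 0, 16, 63) == 16 * 63
```

Exposing only the linear address lets the controller hide zoned recording and spare-sector remapping behind this simple interface.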

MASS STORAGE STRUCTURE

Overview of Mass-Storage Structure

Magnetic Disks

Traditional magnetic disks have the following basic structure:
* One or more platters in the form of disks covered with magnetic media. Hard disk platters are made of rigid metal, while "floppy" disks are made of more flexible plastic.
* Each platter has two working surfaces. Older hard disk drives would sometimes not use the very top or bottom surface of a stack of platters, as these surfaces were more susceptible to potential damage.
* Each working surface is divided into a number of concentric rings called tracks. The collection of all tracks that are the same distance from the edge of the platters (i.e. all tracks immediately above one another in the following diagram) is called a cylinder.
* Each track is further divided into sectors, traditionally containing 512 bytes of data each, although some modern disks occasionally use larger sector sizes. (Sectors also include a header and a trailer, including checksum information among other th
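The geometry above gives a simple capacity formula: cylinders × working surfaces × sectors per track × bytes per sector. A back-of-the-envelope check with entirely made-up (but plausible) numbers:

```python
# Hypothetical geometry, chosen purely for illustration:
cylinders = 10_000
heads = 8             # working surfaces: 2 per platter, 4 platters
sectors_per_track = 500
sector_size = 512     # bytes, the traditional size mentioned above

capacity_bytes = cylinders * heads * sectors_per_track * sector_size
print(capacity_bytes / 10**9, "GB")  # ~20.48 GB for these made-up numbers
```

Real drives complicate this with zoned recording (more sectors on outer tracks), so manufacturers quote capacity directly rather than geometry.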

DARK PSYCHOLOGY

In recent years psychology has sought to uplift the human spirit with lots of popular psychology terms such as "Positive Psychology," or the number of books released to tell the masses how to act in order to lead a fulfilled, lucky life: talk of parachutes, ten steps to something, the myriad of "how to" titles and much more. Most are nothing but misguided pop psych or a fad of the moment. Can life be as simple as reading the right book and following some basic concepts, and everything is going to be OK for you and me? This is different; we shall survey the "Dark" side of the human mind, that part that sees disengagement, destruction, and deviant acts as part of the everyday human psyche that emerges in us all from time to time, that part that finds excitement, joy and pleasure in the dysfunctional side of our existence. How can society reconcile with its dark side? I use the word crazy to refer to those in society who oppose the social norm.

Dark Psychol

Deadlock detection

Deadlock Detection

i. If deadlocks are not avoided, then another approach is to detect when they have occurred and recover somehow.
ii. In addition to the performance hit of frequently checking for deadlocks, a policy/algorithm must be in place for recovering from deadlocks, and there is potential for lost work when processes must be aborted or have their resources preempted.

Single Instance of Each Resource Type

i. If each resource category has a single instance, then we can use a variation of the resource-allocation graph known as a wait-for graph.
ii. A wait-for graph can be constructed from a resource-allocation graph by eliminating the resources and collapsing the associated edges, as shown in the figure below.
iii. An arc from Pi to Pj in a wait-for graph indicates that process Pi is waiting for a resource that process Pj is currently holding. As before, cycles in the wait-for graph indicate deadlocks. This algorithm must maintain the wait-for graph, and periodically search it for cycles. Se
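The cycle search mentioned above can be sketched as a depth-first traversal of the wait-for graph. This is a minimal illustration, with the graph represented as an adjacency dict (process names and the three-process example are hypothetical):

```python
def has_cycle(wait_for: dict) -> bool:
    """Detect a cycle (i.e., a deadlock) in a wait-for graph via DFS.
    An edge Pi -> Pj means Pi is waiting on a resource held by Pj."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on DFS stack / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GRAY:   # back edge: cycle found
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 waits on P2, P2 waits on P3, P3 waits on P1: a deadlock.
assert has_cycle({"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1"}})
# Break one edge and the cycle (and the deadlock) disappears.
assert not has_cycle({"P1": {"P2"}, "P2": {"P3"}, "P3": set()})
```

In a real OS this check would run periodically (or when a request cannot be granted), trading detection latency against the cost of the search.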

Recovery from deadlock

Recovery From Deadlock

There are three basic approaches to recovering from deadlock:
i) Inform the system operator, and allow him/her to intervene manually.
ii) Terminate one or more processes involved in the deadlock.
iii) Preempt resources.

Process Termination

1) Two basic approaches, both of which recover resources allocated to terminated processes:
i) Terminate all processes involved in the deadlock. This definitely solves the deadlock, but at the expense of terminating more processes than would be absolutely necessary.
ii) Terminate processes one by one until the deadlock is broken. This is more conservative, but requires running deadlock detection after each step.
2) In the latter case there are many factors that can go into deciding which processes to terminate next:
i) Process priorities.
ii) How long the process has been running, and how close it is to completion.
iii) How many and what type of resources the process holds. (Are they easy to preempt and restore?)
iv) How many more resources does the pro

Deadlock Prevention

Deadlock Prevention

Deadlocks can be prevented by preventing at least one of the four necessary conditions:

Mutual Exclusion

Shared resources such as read-only files do not lead to deadlocks. Unfortunately some resources, such as printers and tape drives, require exclusive access by a single process.

Hold and Wait

To prevent this condition, processes must be prevented from holding one or more resources while simultaneously waiting for one or more others. There are several possibilities for this:
i) Require that all processes request all resources at once. This can be wasteful of system resources if a process needs one resource early in its execution and doesn't need some other resource until much later.
ii) Require that processes holding resources must release them before requesting new resources, and then re-acquire the released resources along with the new ones in a single new request. This can be a problem if a process has partially completed an operation using a resource and then fails to get it ret

Deadlock Characterisation

Deadlock Characterization

Necessary Conditions: There are four conditions that are necessary to achieve deadlock:

Mutual Exclusion - At least one resource must be held in a non-sharable mode; if any other process requests this resource, then that process must wait for the resource to be released.

Hold and Wait - A process must be simultaneously holding at least one resource and waiting for at least one resource that is currently being held by some other process.

No Preemption - Once a process is holding a resource (i.e. once its request has been granted), then that resource cannot be taken away from that process until the process voluntarily releases it.

Circular Wait - A set of processes { P0, P1, P2, . . ., PN } must exist such that every P[ i ] is waiting for P[ ( i + 1 ) % ( N + 1 ) ]. (Note that this condition implies the hold-and-wait condition, but it is easier to deal with the conditions if the four are considered individually.)

Resource-Allocation Graph

In some cases deadloc

DEADLOCK

DEADLOCKS

System Model

● For the purposes of deadlock discussion, a system can be modeled as a collection of limited resources, which can be divided into different categories, to be allocated to a number of processes, each having different needs.
● Resource categories may include memory, printers, CPUs, open files, tape drives, CD-ROMs, etc.
● By definition, all the resources within a category are equivalent, and a request for a resource of that category can be satisfied equally by any one of the resources in that category. If this is not the case (i.e. if there is some difference between the resources within a category), then that category needs to be further divided into separate categories. For example, "printers" may need to be separated into "laser printers" and "color inkjet printers".
● Some categories may have a single resource.
● In normal operation a process must request a resource before using it, and release it when it is done, in the following s
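The request/use/release protocol described above maps directly onto lock usage in any threaded program. A minimal sketch, using Python's standard `threading.Lock` to stand in for a single-instance resource (the `printer` name and the empty "use" step are illustrative):

```python
import threading

printer = threading.Lock()   # a single-instance resource category

def job():
    printer.acquire()        # 1. request: block until the resource is granted
    try:
        pass                 # 2. use: the process now holds the printer
    finally:
        printer.release()    # 3. release: hand the resource back

t = threading.Thread(target=job)
t.start()
t.join()                     # the resource is free again afterward
```

The `try`/`finally` pattern guarantees the release step runs even if the "use" step fails, which matters because a resource held forever by a crashed holder blocks every later requester.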

ALLOCATION OF FRAMES

Allocation of Frames

We said earlier that there were two important tasks in virtual memory management: a page-replacement strategy and a frame-allocation strategy. This section covers the second part of that pair.

Minimum Number of Frames

* The absolute minimum number of frames that a process must be allocated is dependent on system architecture, and corresponds to the worst-case scenario of the number of pages that could be touched by a single (machine) instruction.
* If an instruction (and its operands) spans a page boundary, then multiple pages could be needed just for the instruction fetch.
* Memory references in an instruction touch more pages, and if those memory locations can span page boundaries, then multiple pages could be needed for operand access also.
* The worst case involves indirect addressing, particularly where multiple levels of indirect addressing are allowed. Left unchecked, a pointer to a pointer to a pointer to a pointer to a . . . could theoretically touch every page i

THRASHING

Thrashing

* If a process cannot maintain its minimum required number of frames, then it must be swapped out, freeing up frames for other processes. This is an intermediate level of CPU scheduling.
* But what about a process that can maintain its minimum, but cannot keep all of the frames that it is currently using on a regular basis? In this case it is forced to page out pages that it will need again in the very near future, leading to large numbers of page faults.
* A process that is spending more time paging than executing is said to be thrashing.

Cause of Thrashing

* Early process scheduling schemes would control the level of multiprogramming allowed based on CPU utilization, adding in more processes when CPU utilization was low.
* The problem is that when memory filled up and processes started spending lots of time waiting for their pages to page in, CPU utilization would drop, causing the scheduler to add in even more processes and exacerbate the problem! Eventually the system would basi