
Allocation Methods
There are three main methods of storing files on disks: 
* contiguous, 
* linked, and 
* indexed.

Contiguous Allocation
* Contiguous Allocation requires that all blocks of a file be kept together contiguously.
* Performance is very fast, because reading successive blocks of the same file generally requires no movement of the disk heads, or at most one small step to the next adjacent cylinder.
* Storage allocation involves the same issues discussed earlier for the allocation of contiguous blocks of memory ( first fit, best fit, fragmentation problems, etc. ) The distinction is that the high time penalty required for moving the disk heads from spot to spot may now justify the benefits of keeping files contiguous when possible.
* ( Even file systems that do not by default store files contiguously can benefit from certain utilities that compact the disk and make all files contiguous in the process. )
* Problems can arise when files grow, or if the exact size of a file is unknown at creation time:
• Over-estimation of the file's final size increases external fragmentation and wastes disk space.
• Under-estimation may require that a file be moved or a process aborted if the file grows beyond its originally allocated space.
• If a file grows slowly over a long time period and the total final space must be allocated initially, then a lot of space becomes unusable before the file fills the space.
* A variation is to allocate file space in large contiguous chunks, called extents. When a file outgrows its original extent, an additional one is allocated. ( For example, an extent may be the size of a complete track or even cylinder, aligned on an appropriate track or cylinder boundary. ) The high-performance Veritas file system uses extents to optimize performance.
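
To make the bookkeeping concrete, here is a minimal C sketch of contiguous allocation, assuming a hypothetical directory entry that records only a file's starting block and length ( the entry layout and names are illustrative, not any particular file system's on-disk format ). Mapping a file-relative block number to a disk block is then a single addition:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical directory entry for contiguous allocation:
 * the file is fully described by its first block and its length. */
struct dir_entry {
    char     name[32];
    uint32_t start_block;   /* first disk block of the file  */
    uint32_t block_count;   /* number of contiguous blocks   */
};

/* Map a file-relative block number to an absolute disk block.
 * Returns -1 if the request lies beyond the allocated region. */
int64_t contiguous_lookup(const struct dir_entry *e, uint32_t file_block)
{
    if (file_block >= e->block_count)
        return -1;                      /* would require relocation or an extent */
    return (int64_t)e->start_block + file_block;
}

int main(void)
{
    struct dir_entry e = { "report.txt", 1000, 8 };
    printf("block 3 of file -> disk block %lld\n",
           (long long)contiguous_lookup(&e, 3));   /* prints 1003 */
    return 0;
}

Growing the file past block_count forces a relocation or an additional extent, which is exactly the weakness described in the bullets above.
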
Linked Allocation
* Disk files can be stored as linked lists, at the expense of the storage space consumed by each link. ( E.g. a block may hold 508 bytes of data instead of 512, with the remaining 4 bytes used for the pointer to the next block. )
* Linked allocation involves no external fragmentation, does not require pre-known filesizes, and allows files to grow dynamically at any time.
* Unfortunately linked allocation is efficient only for sequential access files, as random access requires starting at the beginning of the list and traversing the chain for each new location accessed.
* Allocating clusters of blocks decreases the space wasted by pointers, at the cost of internal fragmentation.
* Another big problem with linked allocation is reliability: if a pointer is lost or damaged, the rest of the file becomes unreachable. Doubly linked lists provide some protection, at the cost of additional overhead and wasted space.
* The File Allocation Table, FAT, used by DOS is a variation of linked allocation, where all the links are stored in a separate table at the beginning of the disk. The benefit of this approach is that the FAT table can be cached in memory, greatly improving random access speeds.
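
The following is a minimal C sketch of a FAT lookup, assuming an in-memory copy of the table in which each entry holds the number of the file's next block and a sentinel value marks end-of-file; the names and the sentinel are illustrative, not the actual on-disk FAT format:

#include <stdint.h>
#include <stdio.h>

#define FAT_EOF 0xFFFFFFFFu   /* illustrative sentinel marking a file's last block */

/* Follow the FAT chain to find the disk block holding the n-th block
 * of a file, given the file's starting block. The traversal touches
 * only the in-memory table, not the disk. Returns -1 if the file is
 * shorter than the requested offset. */
int64_t fat_lookup(const uint32_t *fat, uint32_t start, uint32_t n)
{
    uint32_t block = start;
    while (n-- > 0) {
        if (block == FAT_EOF)
            return -1;
        block = fat[block];
    }
    return (block == FAT_EOF) ? -1 : (int64_t)block;
}

int main(void)
{
    /* Toy FAT: the file starts at block 4, chain is 4 -> 7 -> 2 -> EOF. */
    uint32_t fat[8] = {0};
    fat[4] = 7; fat[7] = 2; fat[2] = FAT_EOF;

    printf("block 2 of file -> disk block %lld\n",
           (long long)fat_lookup(fat, 4, 2));   /* prints 2 */
    return 0;
}

Because the chain is followed entirely in the cached table, locating the n-th block of a file costs no disk seeks until the data block itself is read, which is why FAT handles random access so much better than plain linked allocation.
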
Indexed Allocation
* Indexed Allocation groups all of the indexes for accessing each file into a common block ( for that file ), as opposed to spreading them all over the disk or storing them in a FAT table.
* Some disk space is wasted ( relative to linked lists or FAT tables ) because an entire index block must be allocated for each file, regardless of how many data blocks the file contains. This leads to questions of how big the index block should be, and how it should be implemented. There are several approaches:
Linked Scheme - An index block is one disk block, which can be read and written in a single disk operation. The first index block contains some header information, the first N block addresses, and if necessary a pointer to additional linked index blocks.
Multi-Level Index - The first index block contains a set of pointers to secondary index blocks, which in turn contain pointers to the actual data blocks.
Combined Scheme - This is the scheme used in UNIX inodes, in which the first 12 or so data block pointers are stored directly in the inode, and then singly, doubly, and triply indirect pointers provide access to more data blocks as needed. ( See the sketch below. ) The advantage of this scheme is that for small files ( which many are ), the data blocks are readily accessible ( up to 48K with 4K block sizes ); files up to about 4144K ( using 4K blocks ) are accessible with only a single indirect block ( which can be cached ); and huge files are still accessible using a relatively small number of disk accesses ( larger in theory than can be addressed by a 32-bit address, which is why some systems have moved to 64-bit file pointers. )
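
Here is a minimal C sketch of the combined scheme's block mapping, assuming 4K blocks, 4-byte block pointers, and 12 direct pointers; the structure and function names are illustrative, not the actual UNIX kernel definitions:

#include <stdint.h>
#include <stdio.h>

#define BLOCK_SIZE   4096
#define N_DIRECT     12
#define PTRS_PER_BLK (BLOCK_SIZE / sizeof(uint32_t))   /* 1024 pointers per block */

/* Illustrative inode layout: direct pointers plus one single-indirect
 * pointer. ( A real inode also carries doubly and triply indirect
 * pointers, omitted here for brevity. ) */
struct inode {
    uint32_t direct[N_DIRECT];
    uint32_t single_indirect;   /* points to a block full of further block pointers */
};

/* Classify which pointer covers the n-th data block of a file. */
void locate_block(uint32_t n)
{
    if (n < N_DIRECT)
        printf("block %u: reached via a direct pointer\n", n);
    else if (n < N_DIRECT + PTRS_PER_BLK)
        printf("block %u: slot %u of the single-indirect block\n",
               n, (unsigned)(n - N_DIRECT));
    else
        printf("block %u: needs doubly/triply indirect blocks\n", n);
}

int main(void)
{
    locate_block(5);      /* direct: within the first 48K              */
    locate_block(100);    /* single indirect: file between 48K and 4144K */
    locate_block(2000);   /* beyond the single indirect's 1036-block reach */
    return 0;
}

With these numbers the direct pointers cover 12 x 4K = 48K, and the single indirect block adds 1024 more pointers covering another 4096K, for 48K + 4096K = 4144K, matching the figures quoted above.
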
Performance
* The optimal allocation method is different for sequential access files than for random access files, and is also different for small files than for large files.
* Some systems support more than one allocation method, which may require specifying how the file is to be used (sequential or random access ) at the time it is allocated. Such systems also provide conversion utilities.
* Some systems have been known to use contiguous allocation for small files, and automatically switch to an indexed scheme when file sizes surpass a certain threshold.
* And of course some systems adjust their allocation schemes ( e.g. block sizes ) to best match the characteristics of the hardware for optimum performance.
