

Time and space complexity

Analysing an algorithm means determining the amount of resources (such as time and memory) needed to execute it. Algorithms are generally designed to work with an arbitrary number of inputs, so the efficiency or complexity of an algorithm is stated in terms of time and space complexity.
The time complexity of an algorithm is the running time of a program expressed as a function of the input size. Similarly, the space complexity of an algorithm is the amount of computer memory required during program execution, also expressed as a function of the input size.
In other words, the number of machine instructions which a program executes is called its time complexity. This number is primarily dependent on the size of the program’s input and the algorithm used.
Generally, the space needed by a program depends on the following two parts:
* Fixed part: This part is independent of the input size. It includes the space needed for storing instructions, constants, simple variables, and structured variables of fixed size (like arrays and structures).
* Variable part: This part depends on the particular problem instance being solved. It includes the space needed for the recursion stack and for structured variables that are allocated space dynamically during the runtime of a program. A rough sketch of these two parts is given below.
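In the following hypothetical C fragment (the function name and sizes are ours, not the text's), the statically declared variables belong to the fixed part, while the dynamically allocated buffer grows with the input size n:

#include <stdio.h>
#include <stdlib.h>

void process(int n)
{
    int count = 0;                       /* fixed part: one int, independent of n  */
    int table[100];                      /* fixed part: size known at compile time */
    int *data = malloc(n * sizeof(int)); /* variable part: grows with n            */
    if (data == NULL)
        return;
    for (int i = 0; i < n; i++) {
        data[i] = i;
        count++;
    }
    table[0] = count;
    printf("%d\n", table[0]);
    free(data);                          /* variable-part space released */
}

The recursion stack mentioned above would likewise count towards the variable part, since its depth typically depends on the input.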
In practice, however, running time requirements are usually more critical than memory requirements. Therefore, in this section, we will concentrate on the running time efficiency of algorithms.

Worst-case, Average-case, Best-case, and Amortized Time Complexity
Worst-case running time This denotes the behaviour of an algorithm with respect to the worst-possible case of the input instance. The worst-case running time of an algorithm is an upper bound on the running time for any input. Therefore, having the knowledge of worst-case running time gives us an assurance that the algorithm will never go beyond this time limit.
Average-case running time The average-case running time of an algorithm is an estimate of the running time for an ‘average’ input. It specifies the expected behaviour of the algorithm when the input is randomly drawn from a given distribution. In the simplest analysis, all inputs of a given size are assumed to be equally likely.
Best-case running time The term ‘best-case performance’ is used to analyse an algorithm under optimal conditions. For example, the best case for a simple linear search on an array occurs when the desired element is the first in the list. However, while developing and choosing an algorithm to solve a problem, we hardly base our decision on the best-case performance. It is always recommended to improve the average performance and the worst-case performance of an algorithm.
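To make the three cases concrete, here is a minimal linear search in C (our sketch; the function name linear_search is not from the text), with comments marking where each case arises:

/* Returns the index of key in arr[0..n-1], or -1 if it is absent. */
int linear_search(const int arr[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (arr[i] == key)
            return i;  /* best case: key at arr[0], only 1 comparison */
    return -1;         /* worst case: key absent, all n comparisons   */
}

/* Average case: if the key is equally likely to be at any of the
   n positions, about (n + 1)/2 comparisons are made. */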
Amortized running time Amortized running time refers to the time required to perform a sequence of (related) operations averaged over all the operations performed. Amortized analysis guarantees the average performance of each operation in the worst case.
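A standard textbook example of amortized analysis (not discussed in this text) is appending to a dynamic array that doubles its capacity when full. A single append may cost O(n) because of copying, but any sequence of n appends costs O(n) in total, so each append is O(1) amortized. A minimal sketch in C:

#include <stdlib.h>

typedef struct {
    int *data;
    int size;      /* elements currently stored */
    int capacity;  /* slots currently allocated */
} DynArray;

/* Appends x to the array; doubling keeps the amortized cost O(1). */
int append(DynArray *a, int x)
{
    if (a->size == a->capacity) {  /* occasional expensive O(n) step */
        int newcap = (a->capacity == 0) ? 1 : 2 * a->capacity;
        int *p = realloc(a->data, newcap * sizeof(int));
        if (p == NULL)
            return -1;
        a->data = p;
        a->capacity = newcap;
    }
    a->data[a->size++] = x;        /* usual cheap O(1) step */
    return 0;
}

Initializing a DynArray to all zeros ({NULL, 0, 0}) makes the first append allocate a one-element array.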

Time–Space Trade-off
The best algorithm to solve a particular problem is, no doubt, the one that requires the least memory space and takes the least time to execute. But in practice, designing such an ideal algorithm is not a trivial task. There can be more than one algorithm to solve a particular problem: one may require less memory space, while another may require less CPU time to execute. Thus, it is not uncommon to sacrifice one thing for the other. Hence, there exists a time–space trade-off among algorithms.
So, if space is a big constraint, then one might choose a program that takes less space at the cost of more CPU time. On the contrary, if time is a major constraint, then one might choose a program that takes minimum time to execute at the cost of more space.
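As a simple illustration of this trade-off (our example, not the text's), counting the 1-bits in a byte can either use a precomputed 256-entry table (more space, constant time per query) or recompute the count on every call (no extra space, more time):

static unsigned char bit_table[256];   /* trade-off: 256 extra bytes of space */

void build_table(void)                 /* fill the table once, up front */
{
    for (int b = 0; b < 256; b++) {
        int c = 0;
        for (int v = b; v != 0; v >>= 1)
            c += v & 1;
        bit_table[b] = (unsigned char)c;
    }
}

int popcount_fast(unsigned char b)     /* O(1) per call, uses the table */
{
    return bit_table[b];
}

int popcount_slow(unsigned char b)     /* no table, loops on every call */
{
    int c = 0;
    for (; b != 0; b >>= 1)
        c += b & 1;
    return c;
}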

Expressing Time and Space Complexity
The time and space complexity can be expressed using a function f(n) where n is the input size for a given instance of the problem being solved. Expressing the complexity is required when
* We want to predict the rate of growth of complexity as the input size of the problem increases.
* There are multiple algorithms that find a solution to a given problem and we need to find the algorithm that is most efficient.
The most widely used notation to express this function f(n) is the Big O notation. It provides an upper bound on the complexity.
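Formally, f(n) = O(g(n)) means that there exist positive constants c and n₀ such that f(n) ≤ c·g(n) for all n ≥ n₀. For example, f(n) = 3n + 2 is O(n), because 3n + 2 ≤ 4n for every n ≥ 2 (take c = 4 and n₀ = 2).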

Algorithm Efficiency
If an algorithm is linear (that is, it contains no loops or recursive calls), its efficiency or running time can be given simply as the number of instructions it contains. However, if an algorithm contains loops, then its efficiency may vary depending on the number of loops and the running time of each loop.
Let us consider different cases in which loops determine the efficiency of an algorithm.
Linear Loops
To calculate the efficiency of an algorithm that has a single loop, we first determine the number of times the statements in the loop will be executed, because the number of iterations is directly proportional to the loop factor: the greater the loop factor, the greater the number of iterations. For example, consider the loop given below:
for(i=0;i<100;i++)
    statement block;
Here, 100 is the loop factor. We have already said that efficiency is directly proportional to the number of iterations. Hence, the general formula in the case of linear loops may be given as
f(n) = n
However, calculating efficiency is not always as simple as shown in the above example. Consider the loop given below:
for(i=0;i<100;i+=2)
 statement block;
Here, the number of iterations is half the number of the loop factor. So, here the efficiency can be given as
f(n) = n/2
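A quick way to verify such counts is to let the program count its own iterations. A small self-checking sketch (loop bounds as in the examples above):

#include <stdio.h>

int main(void)
{
    int count = 0;
    for (int i = 0; i < 100; i++)    /* loop factor 100, step 1 */
        count++;
    printf("i++  : %d iterations\n", count);  /* prints 100: f(n) = n  */

    count = 0;
    for (int i = 0; i < 100; i += 2) /* loop factor 100, step 2 */
        count++;
    printf("i+=2 : %d iterations\n", count);  /* prints 50: f(n) = n/2 */
    return 0;
}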
Logarithmic Loops
We have seen that in linear loops, the loop updation statement either adds or subtracts the loop-controlling variable. However, in logarithmic loops, the loop-controlling variable is either multiplied or divided during each iteration of the loop. For example, look at the loops given below:
for(i=1;i<1000;i*=2)
    statement block;

for(i=1000;i>=1;i/=2)
    statement block;
Consider the first for loop, in which the loop-controlling variable i is multiplied by 2. The loop will be executed only 10 times and not 1000 times, because in each iteration the value of i doubles. Now, consider the second loop, in which the loop-controlling variable i is divided by 2. In this case also, the loop will be executed 10 times. Thus, the number of iterations is a function of the number by which the loop-controlling variable is divided or multiplied (in the examples discussed, it is 2). That is, when n = 1000, the number of iterations can be given by log₂ 1000, which is approximately equal to 10.
Therefore, putting this analysis in general terms, we can conclude that the efficiency of loops in which iterations divide or multiply the loop-controlling variables can be given as
f(n) = log n
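The same counting trick confirms the logarithmic behaviour: both loops from the example run exactly 10 times for n = 1000, matching log₂ 1000 ≈ 9.97:

#include <stdio.h>

int main(void)
{
    int count = 0;
    for (int i = 1; i < 1000; i *= 2)  /* i doubles: 1, 2, 4, ..., 512 */
        count++;
    printf("multiply: %d iterations\n", count);  /* prints 10 */

    count = 0;
    for (int i = 1000; i >= 1; i /= 2) /* i halves: 1000, 500, ..., 1  */
        count++;
    printf("divide  : %d iterations\n", count);  /* prints 10 */
    return 0;
}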
Nested Loops
Loops that contain loops are known as nested loops. In order to analyse nested loops, we need to determine the number of iterations each loop completes. The total is then obtained as the product of the number of iterations in the inner loop and the number of iterations in the outer loop.
In this case, we analyse the efficiency of the algorithm based on whether it is a linear logarithmic, quadratic, or dependent quadratic nested loop.
Linear logarithmic loop Consider the following code, in which the loop-controlling variable of the inner loop is multiplied after each iteration. The number of iterations in the inner loop is log₂ 10, i.e., 4 when rounded up. This inner loop is controlled by an outer loop which iterates 10 times. Therefore, according to the formula, the number of iterations for this code can be given as 10 log₂ 10.
for(i=0;i<10;i++)
    for(j=1;j<10;j*=2)
        statement block;
In more general terms, the efficiency of such loops can be given as f(n) = n log n.
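Counting the iterations of the code above shows the pattern: the inner body runs for j = 1, 2, 4, 8, i.e., 4 times per outer iteration, so the total is 10 × 4 = 40, which grows as n log₂ n:

#include <stdio.h>

int main(void)
{
    int count = 0;
    for (int i = 0; i < 10; i++)         /* outer: n = 10 iterations     */
        for (int j = 1; j < 10; j *= 2)  /* inner: j = 1, 2, 4, 8        */
            count++;
    printf("%d iterations\n", count);    /* prints 40 = n * ceil(log2 n) */
    return 0;
}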
Quadratic loop In a quadratic loop, the number of iterations in the inner loop is equal to the number of iterations in the outer loop. Consider the following code in which the outer loop executes 10 times and for each iteration of the outer loop, the inner loop also executes 10 times. Therefore, the efficiency here is 100.
for(i=0;i<10;i++)
    for(j=0;j<10;j++)
        statement block;
The generalized formula for a quadratic loop can be given as f(n) = n².
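A typical concrete use of a quadratic loop (our illustration) is visiting every element of an n × n matrix; for n = 10 the loop body below executes exactly 100 times:

#include <stdio.h>

#define N 10

int main(void)
{
    int m[N][N] = {0};
    int sum = 0;
    for (int i = 0; i < N; i++)      /* outer loop: n iterations      */
        for (int j = 0; j < N; j++)  /* inner loop: n iterations each */
            sum += m[i][j];          /* executed n * n = 100 times    */
    printf("%d\n", sum);
    return 0;
}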
Dependent quadratic loop In a dependent quadratic loop, the number of iterations in the inner loop is dependent on the outer loop. Consider the code given below:
for(i=0;i<10;i++)
    for(j=0;j<=i;j++)
        statement block;
In this code, the inner loop will execute just once in the first iteration of the outer loop, twice in the second, three times in the third, and so on. In this way, the number of iterations can be calculated as
1 + 2 + 3 + ... + 9 + 10 = 55
If we calculate the average number of inner-loop iterations (55/10 = 5.5), we observe that it is equal to the number of iterations in the outer loop (10), plus 1, divided by 2: (10 + 1)/2 = 5.5. In general terms, the inner loop iterates (n + 1)/2 times on average. Therefore, the efficiency of such a code can be given as
f(n) = n (n + 1)/2
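Counting the iterations of the dependent loop confirms both the sum and the formula; for n = 10 the sketch below prints 55, which equals n(n + 1)/2:

#include <stdio.h>

int main(void)
{
    int n = 10, count = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j <= i; j++)  /* i + 1 iterations on pass i  */
            count++;
    printf("%d iterations\n", count); /* prints 55 = n * (n + 1) / 2 */
    return 0;
}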
