1) Base of the hexadecimal number system? Answer: 16
2) Universal gate in digital logic? Answer: NAND
3) Memory type that is non-volatile? Answer: ROM
4) Basic building block of digital circuits? Answer: Gate
5) Device used for data storage in sequential circuits? Answer: Flip-flop
6) Architecture with shared memory for instructions and data? Answer: von Neumann
7) The smallest unit of data in computing? Answer: Bit
8) Unit that performs arithmetic operations in a CPU? Answer: ALU
9) Memory faster than main memory but smaller in size? Answer: Cache
10) System cycle that includes fetch, decode, and execute? Answer: Instruction
11) Type of circuit whose output depends on present input only? Answer: Combinational
12) The binary equivalent of decimal 10? Answer: 1010
13) Memory used for high-speed temporary storage in a CPU? Answer: Register
14) Method of representing negative numbers in binary? Answer: Two's complement
15) Gate that inverts its input signal? Answer: NOT
BIG O NOTATION
In today’s era of massive advancement in computer technology, we are hardly concerned with the exact running time of an algorithm. Rather, we are more interested in knowing the general order of magnitude of its running time. If we have two different algorithms to solve the same problem, where one executes in 10 iterations and the other in 20, the difference between the two is not significant. However, if the first algorithm executes in 10 iterations and the other in 1000, then it is a matter of concern.
We have seen that the number of statements executed in a program for n elements of data is a function of the number of elements, expressed as f(n). Even if the expression derived for a function is complex, its dominant term is sufficient to determine the order of magnitude of the result and, hence, the efficiency of the algorithm. This dominant term gives the Big O, expressed, for example, as O(n).
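To make the idea of a dominant term concrete, here is a small sketch. The cost function f(n) = 3n^2 + 10n + 7 is invented purely for illustration; the point is that as n grows, the 3n^2 term accounts for almost all of f(n), so f(n) is O(n^2):

```python
# Illustrative only: f(n) = 3n^2 + 10n + 7 is an invented cost function.
# As n grows, the dominant term 3n^2 accounts for almost all of f(n).

def f(n):
    return 3 * n**2 + 10 * n + 7

for n in [10, 100, 1000, 10000]:
    dominant = 3 * n**2
    # The ratio dominant / f(n) approaches 1 as n grows.
    print(n, f(n), dominant, round(dominant / f(n), 5))
```

The lower-order terms 10n + 7 become negligible, which is why only the dominant term matters when classifying the algorithm.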
The Big O notation, where O stands for ‘order of’, is concerned with what happens for very large values of n. For example, if a sorting algorithm performs n^2 operations to sort n elements, then that algorithm is described as an O(n^2) algorithm.
When expressing complexity using the Big O notation, constant multipliers are ignored. So an O(4n) algorithm is equivalent to an O(n) algorithm, and is written as O(n).
If f(n) and g(n) are functions defined on the positive integers, then
f(n) = O(g(n))
That is, f of n is Big O of g of n, if and only if there exist positive constants c and n0 such that f(n) ≤ c·g(n) for all n ≥ n0. It means that for large amounts of data, f(n) grows by no more than a constant factor of g(n). Hence, g provides an upper bound. Note that here c is a constant which depends on the
following factors:
* the programming language used,
* the quality of the compiler or interpreter,
* the CPU speed,
* the size of the main memory and the access time to it,
* the knowledge of the programmer, and
* the algorithm itself, which may require simple but also time-consuming machine instructions.
We have seen that the Big O notation provides an upper bound for f(n). This means that the function f(n) can do better but not worse than the specified value. Big O membership is written as f(n) ∈ O(g(n)) or, by a common abuse of notation, as f(n) = O(g(n)).
Here, n is the problem size and O(g(n)) = {h(n): ∃ positive constants c, n0 such that 0 ≤ h(n) ≤ c·g(n), ∀ n ≥ n0}. Hence, we can say that O(g(n)) comprises the set of all functions h(n) that are less than or equal to c·g(n) for all values of n ≥ n0.
If f(n) ≤ c·g(n), c > 0, ∀ n ≥ n0, then f(n) = O(g(n)) and g(n) is an asymptotic upper bound for f(n).
Examples of functions in O(n^3) include: n^2.9, n^3, n^3 + n, 540n^3 + 10, and also slower-growing functions such as n^2, n^2 + n, and 540n + 10 (since Big O is an upper bound, anything bounded by c·n^3 qualifies).
Examples of functions not in O(n^3) include: n^3.2, n^4, 2^n.
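To see why n^3.2 is not in O(n^3), note that the ratio n^3.2 / n^3 = n^0.2 grows without bound, so no constant c can ever cap it. The sketch below (the sample values of n are chosen arbitrarily) makes this visible:

```python
# n^3.2 / n^3 = n^0.2, which grows without bound as n increases,
# so no constant c satisfies n^3.2 <= c * n^3 for all large n.
for n in [10, 10**3, 10**6, 10**9]:
    ratio = n ** 3.2 / n ** 3
    print(n, ratio)  # ratio equals n ** 0.2 and keeps growing
```

By contrast, for any function genuinely in O(n^3), such a ratio would eventually settle below some fixed constant.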
To summarize,
• Best-case O describes the running time for the most favourable input. It is never greater than the worst case. For example, when sorting an array, the best case arises when the array is already sorted.
• Worst-case O describes an upper bound for the least favourable input. It is never smaller than the best case. For example, when sorting an array, the worst case typically arises when the array is sorted in reverse order.
• If we simply write O, it means the worst-case O.
Now let us look at some examples of g(n) and f(n). The table below shows the relationship between g(n) and f(n).
Note that the constant values will be ignored because the main purpose of the Big O notation is to analyse the algorithm in a general fashion, so the anomalies that appear for small input sizes are simply ignored.
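To make the best-case/worst-case distinction concrete, the sketch below instruments a simple insertion sort to count key comparisons (the counter is added here purely for illustration). A sorted array triggers the best case, about n comparisons; a reverse-sorted array triggers the worst case, about n^2/2 comparisons:

```python
# Insertion sort instrumented to count key comparisons.
def insertion_sort_comparisons(arr):
    a = list(arr)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # one comparison of key against a[j]
            if a[j] > key:
                a[j + 1] = a[j]       # shift the larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

n = 100
best = insertion_sort_comparisons(range(n))          # already sorted
worst = insertion_sort_comparisons(range(n, 0, -1))  # reverse sorted
print(best, worst)  # 99 comparisons vs 4950 comparisons
```

For n = 100 the best case makes n − 1 = 99 comparisons, while the worst case makes n(n − 1)/2 = 4950, which is why the same algorithm is described as O(n) in the best case but O(n^2) in the worst case.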
Categories of Algorithms
According to the Big O notation, we have five different categories of algorithms:
* Constant time algorithm: running time complexity given as O(1)
* Linear time algorithm: running time complexity given as O(n)
* Logarithmic time algorithm: running time complexity given as O(log n)
* Polynomial time algorithm: running time complexity given as O(n^k) where k > 1
* Exponential time algorithm: running time complexity given as O(2^n)
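As one concrete instance of a logarithmic-time algorithm, the sketch below counts the probes made by a standard binary search (the probe counter is instrumentation added for this example). On a sorted list of n elements it needs at most about log2(n) probes:

```python
# Binary search instrumented to count probes; probe count grows as log2(n).
def binary_search_probes(sorted_list, target):
    lo, hi, probes = 0, len(sorted_list) - 1, 0
    while lo <= hi:
        probes += 1
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid, probes
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, probes

data = list(range(1_000_000))
index, probes = binary_search_probes(data, 999_999)
print(index, probes)  # about log2(1,000,000) ~ 20 probes
```

Doubling the size of the input adds only one extra probe, which is the defining behaviour of an O(log n) algorithm.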
The table below shows the number of operations that would be performed for various values of n.
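A short sketch can tabulate such operation counts directly (the specific values of n chosen here are arbitrary), one column per complexity category:

```python
import math

# Tabulate approximate operation counts for each complexity class.
print(f"{'n':>6} {'log n':>8} {'n':>8} {'n^2':>12} {'2^n':>22}")
for n in [1, 2, 4, 8, 16, 32, 64]:
    print(f"{n:>6} {math.log2(n):>8.0f} {n:>8} {n**2:>12} {2**n:>22}")
```

Even at n = 64, the exponential column reaches 2^64 ≈ 1.8 × 10^19 operations, while the logarithmic column is still just 6, which is the whole motivation for classifying algorithms by growth rate.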
Limitations of Big O Notation
There are certain limitations with the Big O notation of expressing the complexity of algorithms. These limitations are as follows:
* Many algorithms are simply too hard to analyse mathematically.
* There may not be sufficient information to calculate the behaviour of the algorithm in the average case.
* Big O analysis only tells us how the algorithm grows with the size of the problem, not how efficient it is, as it does not consider the programming effort.
* It ignores important constants. For example, if one algorithm takes O(n^2) time to execute and the other takes O(100000 n^2) time to execute, then as per Big O, both algorithms have equal time complexity. In real-time systems, this may be a serious consideration.
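The point about hidden constants can be illustrated with operation counts. The two cost functions below are invented for this example; both are O(n^2) once constant multipliers are dropped, yet one performs 100000 times as many operations:

```python
# Two invented cost functions in the same Big O class but with very
# different constants: both are O(n^2) once multipliers are dropped.
def cost_a(n):
    return n**2            # e.g. 1 operation per element pair

def cost_b(n):
    return 100000 * n**2   # e.g. 100000 operations per element pair

n = 1000
print(cost_a(n), cost_b(n), cost_b(n) // cost_a(n))  # ratio is 100000
```

Big O treats these as identical, but on real hardware the second would take five orders of magnitude longer, which is why constants cannot always be ignored in practice.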