When we look at input sizes large enough that only the order of growth of the running time matters, we are studying the asymptotic efficiency of the algorithm. In this blog, I'll give a short description of asymptotic notation.
Asymptotic notation is used to describe the running time of an algorithm. It expresses the order of growth of a function, and it describes an algorithm's efficiency and performance in a meaningful way. It also describes the behaviour of time and space complexity for large input sizes.
The time complexity of a program measures how fast the time taken by the program grows as the input size increases.
The space complexity of a program measures the amount of memory the program requires.
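To make these two ideas concrete, here is a small illustrative sketch (the function `list_sum` is my own example, not from the original post): summing a list takes time proportional to its length, while using only a constant amount of extra space.

```python
def list_sum(numbers):
    """Sum a list of numbers."""
    total = 0            # O(1) extra space: one accumulator, regardless of input size
    for x in numbers:    # the loop body runs len(numbers) times -> O(n) time
        total += x
    return total

print(list_sum([1, 2, 3, 4]))  # prints 10
```

Doubling the length of `numbers` roughly doubles the running time (linear time), but the extra memory used stays the same (constant space).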
The asymptotic running time of an algorithm is defined in terms of the following notations:
- Big-oh Notation
- Big-omega notation
- Theta notation
Big-oh Notation (O)
- It is a formal method of expressing the upper bound of an algorithm's running time.
- It measures the largest amount of time the algorithm could possibly take to complete.
- It is denoted by O.
f(n) = O(g(n)) if and only if there exist positive constants c and n0 such that
f(n) <= c*g(n) for all n >= n0.
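The definition above can be hand-checked numerically. The choice of f, g, c, and n0 below is my own illustrative example: f(n) = 3n + 10 is O(n), because 3n + 10 <= 4n whenever n >= 10.

```python
def f(n):
    """A running-time function: f(n) = 3n + 10."""
    return 3 * n + 10

def g(n):
    """The bounding function: g(n) = n."""
    return n

# Witness constants for the Big-O definition: c = 4, n0 = 10.
c, n0 = 4, 10

# Verify f(n) <= c*g(n) over a range of n >= n0 (3n + 10 <= 4n iff n >= 10).
ok = all(f(n) <= c * g(n) for n in range(n0, 1000))
print(ok)  # prints True
```

No finite check proves the inequality for all n, but it is easy to see algebraically here: 3n + 10 <= 4n rearranges to n >= 10.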
Big-omega Notation (Ω)
- It provides an asymptotic lower bound.
- It describes the best that can happen for a given input size.
- It is denoted by Ω.
For non-negative functions f(n) and g(n), if there exist an integer n0 and a constant c > 0 such that f(n) >= c*g(n) for all integers n > n0, then f(n) is big omega of g(n), denoted f(n) = Ω(g(n)).
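The lower-bound definition can be checked the same way. Again the functions and constants are my own example: f(n) = n²/2 is Ω(n), since n²/2 >= 1·n whenever n >= 2.

```python
def f(n):
    """A running-time function: f(n) = n**2 / 2."""
    return n * n / 2

def g(n):
    """The bounding function: g(n) = n."""
    return n

# Witness constants for the Big-Omega definition: c = 1, n0 = 2.
c, n0 = 1, 2

# Verify f(n) >= c*g(n) over a range of n > n0 (n^2/2 >= n iff n >= 2).
ok = all(f(n) >= c * g(n) for n in range(n0, 1000))
print(ok)  # prints True
```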
Theta Notation (Θ)
- It provides both a lower and an upper bound for the function f.
- It is denoted by Θ.
For non-negative functions f(n) and g(n), if there exist an integer n0 and positive constants c1 > 0 and c2 > 0 such that
c1*g(n) <= f(n) <= c2*g(n) for all integers n > n0, then f(n) is Θ of g(n), denoted f(n) = Θ(g(n)).
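A tight bound combines both checks. Reusing my earlier illustrative example, f(n) = 3n + 10 is Θ(n): it is squeezed between 3n and 4n once n >= 10.

```python
def f(n):
    """A running-time function: f(n) = 3n + 10."""
    return 3 * n + 10

def g(n):
    """The bounding function: g(n) = n."""
    return n

# Witness constants for the Theta definition: c1 = 3, c2 = 4, n0 = 10.
c1, c2, n0 = 3, 4, 10

# Verify c1*g(n) <= f(n) <= c2*g(n) over a range of n >= n0:
# 3n <= 3n + 10 holds always, and 3n + 10 <= 4n holds iff n >= 10.
ok = all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 1000))
print(ok)  # prints True
```

Notice that Θ is exactly the combination of the O and Ω definitions applied with the same g(n).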
So, this is the end of my blog. I hope this was helpful for you all. Do like and share, and let others get this information too. 🙂