Complexity or Asymptotic Behavior


Describes the effort required given an input of size N.

  • This is used to analyze an algorithm's efficiency as the input grows toward ∞ (the growth rate).

  • How drastically the time grows as the input grows.

In the real world

Memory growth is not computationally free. In languages like Go or JavaScript you pay even heavier penalties, because memory can be kept around longer, grow faster, and cause complete halts of the program for garbage-collection cleanup.

But these costs are not considered in complexity calculations.

Rules:

  • Ignore constants:

    • Constants are any values that do not scale with the input; they are ignored.

    • Ex.: O(2N), the 2 is ignored, leaving O(N).

  • Ignore lower-order terms:

    • Ex.: O(N) + 10, the 10 is ignored.

    • Ex.: O(N²) + 10N, the 10N is also ignored (see the sketch after this list).
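A minimal Go sketch of both rules; the function and its step counts are illustrative, not from the original notes:

```go
package main

import "fmt"

// twoScansAndPairs is a made-up function that performs 2N + N² "steps"
// for an input of size N, to show how the simplification rules apply.
func twoScansAndPairs(xs []int) int {
	steps := 0
	for range xs { // N steps
		steps++
	}
	for range xs { // another N steps: 2N so far (the constant 2 is ignored)
		steps++
	}
	for range xs {
		for range xs { // N² steps (dominates the lower-order 2N term)
			steps++
		}
	}
	return steps
}

func main() {
	// 2N + N² = 120 steps for N = 10, but the complexity is simply O(N²).
	fmt.Println(twoScansAndPairs(make([]int, 10)))
}
```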

Big-O (O)

Describes the upper bound, or the worst case, of a given function: Big-O states that a given function f() will have a complexity of NO MORE THAN some asymptotic notation.

When evaluating code, you must always consider the costliest path.

The actual cost can be less, but never more.
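For example, a plain linear search (an illustrative sketch, not from the original notes) is O(N), because Big-O is judged by the costliest path even though a lucky input returns immediately:

```go
package main

import "fmt"

// contains is O(N): in the worst case (target missing or in last place)
// the loop touches all N elements, even though the best case is 1 step.
func contains(xs []int, target int) bool {
	for _, x := range xs {
		if x == target {
			return true // the cheap path; Big-O is judged by the costly one
		}
	}
	return false
}

func main() {
	xs := []int{4, 8, 15, 16, 23, 42}
	fmt.Println(contains(xs, 4))  // best case: 1 comparison
	fmt.Println(contains(xs, 99)) // worst case: N comparisons
}
```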

O(1)

Constant - Excellent

Given an input N, the cost is always constant (1), no matter how big N is.
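A minimal sketch (the helper is illustrative): indexing a slice costs the same single step however large the input is:

```go
package main

import "fmt"

// first performs the same single step whether xs holds ten elements or
// ten million: slice indexing does not depend on N, so it is O(1).
func first(xs []int) int {
	return xs[0]
}

func main() {
	fmt.Println(first([]int{7, 1, 3})) // 7, in constant time
}
```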

O(log N)

Logarithmic - Excellent

It occurs when the algorithm behaves like "divide and conquer": you keep dividing the input until there is just 1 element left, without linearly scanning the divided parts.

You divide the data and analyze only the part that you need.

It is much closer to constant time than to linear: the larger the input, the more constant-like the growth gets.

Ex.:

Searching for a name in a list: you don't have to look at the entire list; you can jump straight to the names whose first letter matches the one you want. Ex.: Binary Search.
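A sketch of Binary Search in Go, assuming an already-sorted int slice:

```go
package main

import "fmt"

// binarySearch discards half of the remaining range on every iteration,
// so a sorted slice of N elements needs at most log₂(N) steps: O(log N).
func binarySearch(xs []int, target int) int {
	lo, hi := 0, len(xs)-1
	for lo <= hi {
		mid := lo + (hi-lo)/2
		switch {
		case xs[mid] == target:
			return mid
		case xs[mid] < target:
			lo = mid + 1 // target can only be in the upper half
		default:
			hi = mid - 1 // target can only be in the lower half
		}
	}
	return -1 // not found
}

func main() {
	fmt.Println(binarySearch([]int{1, 3, 5, 7, 9, 11}, 9)) // 4
}
```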

O(N)

Linear - Good

The cost is linearly proportional to the input.

Recursions end up being just like regular loops.
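A quick sketch of both forms (illustrative helpers, not from the original notes); the loop and the recursion do the same N units of work:

```go
package main

import "fmt"

// sumLoop touches each element exactly once: N steps, O(N).
func sumLoop(xs []int) int {
	total := 0
	for _, x := range xs {
		total += x
	}
	return total
}

// sumRec makes one call per element, so the recursion does the same
// N units of work as the loop above: still O(N).
func sumRec(xs []int) int {
	if len(xs) == 0 {
		return 0
	}
	return xs[0] + sumRec(xs[1:])
}

func main() {
	xs := []int{1, 2, 3, 4}
	fmt.Println(sumLoop(xs), sumRec(xs)) // 10 10
}
```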

O(N log N)

Linearithmic or Quasilinear - Bad

As if log N were executed N times: you keep dividing the input at each step, like in O(log N), but you also scan each part every time.

Ex.:

Some sorting algorithms, like Merge Sort.
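A compact Merge Sort sketch in Go (one possible implementation, not taken from the original notes):

```go
package main

import "fmt"

// mergeSort halves the input log N times, and every level of the recursion
// re-scans all N elements while merging: N work × log N levels = O(N log N).
func mergeSort(xs []int) []int {
	if len(xs) <= 1 {
		return xs
	}
	mid := len(xs) / 2
	return merge(mergeSort(xs[:mid]), mergeSort(xs[mid:]))
}

// merge combines two already-sorted slices in one linear pass.
func merge(a, b []int) []int {
	out := make([]int, 0, len(a)+len(b))
	i, j := 0, 0
	for i < len(a) && j < len(b) {
		if a[i] <= b[j] {
			out = append(out, a[i])
			i++
		} else {
			out = append(out, b[j])
			j++
		}
	}
	out = append(out, a[i:]...)
	return append(out, b[j:]...)
}

func main() {
	fmt.Println(mergeSort([]int{5, 2, 4, 6, 1, 3})) // [1 2 3 4 5 6]
}
```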

O(N²)

Quadratic - Horrible

The cost rises quadratically with N.

Ex.:

Nested loops
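For instance, a brute-force duplicate check (an illustrative sketch): two nested loops over the same input make N × N comparisons:

```go
package main

import "fmt"

// hasDuplicate compares every element against every other one with two
// nested loops over the same input: N × N comparisons, O(N²).
func hasDuplicate(xs []int) bool {
	for i := range xs {
		for j := range xs {
			if i != j && xs[i] == xs[j] {
				return true
			}
		}
	}
	return false
}

func main() {
	fmt.Println(hasDuplicate([]int{3, 1, 4, 1, 5})) // true
}
```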

O(2^N)

Exponential - Horrible

The cost rises exponentially with N.

Ex.:

Tower of Hanoi.
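A minimal Tower of Hanoi sketch in Go; each disk added doubles the number of moves:

```go
package main

import "fmt"

// hanoi moves n disks from peg a to peg c, using peg b as a spare.
// Each extra disk doubles the work: T(n) = 2·T(n-1) + 1, so O(2^N) moves.
func hanoi(n int, a, b, c string) {
	if n == 0 {
		return
	}
	hanoi(n-1, a, c, b) // clear n-1 disks out of the way
	fmt.Printf("disk %d: %s -> %s\n", n, a, c)
	hanoi(n-1, b, a, c) // stack them back on top of the largest disk
}

func main() {
	hanoi(3, "A", "B", "C") // prints 2³ - 1 = 7 moves
}
```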

O(N!)

Factorial - Horrible

Ex.:

Travelling Salesman Problem (solved by brute force).
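A brute-force TSP sketch in Go; shortestTour, permute, and the distance matrix are illustrative assumptions, not from the original notes. It tries every ordering of the cities, which is what makes it factorial:

```go
package main

import (
	"fmt"
	"math"
)

// shortestTour tries every possible ordering of the cities, so the work
// grows as N! with the number of cities. dist[i][j] is the (made-up)
// travel cost from city i to city j.
func shortestTour(dist [][]float64) float64 {
	n := len(dist)
	order := make([]int, n)
	for i := range order {
		order[i] = i
	}
	best := math.Inf(1)
	permute(order, 1, func(tour []int) { // city 0 is pinned as the start
		total := 0.0
		for i := 0; i < n; i++ {
			total += dist[tour[i]][tour[(i+1)%n]] // close the loop back home
		}
		if total < best {
			best = total
		}
	})
	return best
}

// permute visits every permutation of xs[k:] by recursive swapping.
func permute(xs []int, k int, visit func([]int)) {
	if k == len(xs) {
		visit(xs)
		return
	}
	for i := k; i < len(xs); i++ {
		xs[k], xs[i] = xs[i], xs[k]
		permute(xs, k+1, visit)
		xs[k], xs[i] = xs[i], xs[k] // undo the swap (backtrack)
	}
}

func main() {
	dist := [][]float64{
		{0, 2, 9, 10},
		{1, 0, 6, 4},
		{15, 7, 0, 8},
		{6, 3, 12, 0},
	}
	fmt.Println(shortestTour(dist)) // checks (4-1)! = 6 possible tours
}
```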


Big-Omega (Ω)

Describes the lower bound, or the best-case scenario, of a given function: Omega states that a given function f() will have a complexity of AT LEAST some asymptotic notation.

The actual cost can be more, but never less.

Theta (Θ)

Denotes the asymptotically tight bound on the growth rate of the runtime of an algorithm. This means it takes exactly the asymptotic time:

No more than O(), and no less than Ω().

Ω() = Θ() = O()
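For reference, these three bounds have standard formal definitions (textbook notation, not taken from the original notes):

```latex
\begin{aligned}
f(N) = O(g(N))      &\iff \exists\, c > 0,\ N_0 : f(N) \le c \cdot g(N) \ \text{for all}\ N \ge N_0 \\
f(N) = \Omega(g(N)) &\iff \exists\, c > 0,\ N_0 : f(N) \ge c \cdot g(N) \ \text{for all}\ N \ge N_0 \\
f(N) = \Theta(g(N)) &\iff f(N) = O(g(N)) \ \text{and}\ f(N) = \Omega(g(N))
\end{aligned}
```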