# Time Complexity vs. Space Complexity

## Introduction

In the field of computer programming, one encounters many different kinds of problems. The best way to approach a problem is to first understand it through proper analysis. The solution may vary depending on that analysis: if the same computational problem is given to three individuals, it is very likely that each will produce a different solution. Factors such as the knowledge and experience of the individual, the computational power of the individual's system, and the interpretations and conclusions each person draws all influence the result. It is therefore very important to check the efficiency of the solutions generated, and to study the approach taken to solve the problem, in order to decide the best possible solution for the given problem.

The solution to a computational problem is provided in the form of an algorithm. An algorithm is a set of instructions that describes the procedure for solving the problem; the computer then follows these instructions to know what to do, when, and how in order to solve it.

## What are Algorithms?

Algorithms, a set of well-defined sequenced computational instructions, are used to carry out a computation, perform defined actions or solve required problems. The input and output provided by these procedures are values. Algorithms can be used for various purposes: for calculations, processing of data, sequencing of data, and many more.

Let us now discuss why algorithms are needed. Algorithms help us understand the core of a problem and provide a structured approach to solving it. Listed below are a few reasons why algorithms are used:

- Existing procedures can be expressed as algorithms so that they can be optimized for greater efficiency.
- When several procedures are available for a specific problem, algorithms can be used to compare their performance and determine the best one for the respective solution.
- Developers and designers face many complex problems during development life cycles; algorithms can be used to structure the description of these problems and to provide a clear view of their requirements and goals.
- Determining the flow of a program, especially a complex one, can be difficult, time-consuming, and sometimes costly. Algorithms can be used to understand the basic flow of the program.
- Analyzing the performance of the methods used to solve a particular problem (across the best, worst, and average cases) is one of the most important steps in ensuring the effectiveness of a procedure. Algorithms make it possible to analyze and record this performance.
- Expressing a procedure as an algorithm helps identify the resources it consumes, such as input/output operations and memory usage.
- Two factors play a very important role when implementing a solution: time complexity and space complexity. Both are measured and analyzed at the algorithm level.
- Algorithms reduce the cost of the development life cycle, as they are one of the most effective and productive ways to design a solution to a problem.

## The complexity of an Algorithm

There are often many algorithms available for a given problem, but determining the best one is important. The efficiency of these algorithms relative to one another can be classified using the complexity of the algorithm: a function describing the execution time (or other resources) required to process data sets. The computational complexity of an algorithm determines the various resources required to run or execute it.

Algorithms can be classified by analyzing the relative amount of time or space each one uses during execution, expressed as the growth of the time or space requirement as a function of the input size.

Types of Algorithmic complexity

The two main complexity measures are as follows:

- Time complexity
- Space complexity

### Time Complexity

Time complexity describes the amount of time an algorithm takes as a function of the size of its input. Here, "time" can refer to the number of memory accesses, the number of comparisons made between integers, the number of executions of some inner loop, or any other natural unit of work performed by the algorithm.

The notion of time used for measuring complexity is different from clock time, because clock time is affected by many factors: the language used for coding, the computational hardware processing the algorithm, the skill of the developers and designers, the efficiency of the compiler, etc. It is very important to select units of measurement that are independent of these factors, so that the results reflect the algorithm itself.

### Space Complexity

The algorithm of a particular computational problem has a defined life cycle. The algorithm executes the set of instructions contained within it during the algorithm’s life cycle. For proper execution of the algorithm, memory is required for carrying out various operations. Depending upon the amount of memory required by the algorithm for its execution, we can determine the efficiency of the respective algorithm. The memory space required by an algorithm (in its entire life cycle) is the space complexity of the respective algorithm.

What is the memory space or the space required by the algorithm? It is basically the summation of the fixed part and the variable part of the memory space required by the algorithm.

- What is the fixed part of the memory space of an algorithm? An algorithm uses various variables, constants, etc., and requires memory to store data for carrying out its operations. The memory required for the fixed part depends only on the algorithm itself, not on the size of the problem.
- What is the variable part of the memory space of an algorithm? This is the memory required by variables whose size depends on the size of the problem, such as dynamic memory allocations.

The equational representation of the space complexity of an algorithm p is

S(p) = A + Sp(I),

where,

S(p) = Space complexity

A = Fixed part

Sp(I) = Variable part, which depends on the instance characteristic I

Let us consider the following example to understand the space complexity of an algorithm.

Algorithm

```
SUM(A, B)
Step 1 - START
Step 2 - C ← A - B + 50
Step 3 - STOP
```

- The above-mentioned algorithm is a basic mathematical algorithm; let us analyze it and identify the fixed and variable parts.
- The number of variables required by the algorithm is three: A, B, and C.
- The number of constants required by the algorithm is one: the literal 50.

So, according to the space complexity equation, we have:

- A = Fixed part, A = 1
- Sp(I) = Variable part, Sp(I) = 3

Therefore, S(p) = A + Sp(I) = 1 + 3 = 4 units of memory.
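The fixed and variable parts can also be seen in ordinary code. Below is a minimal Java sketch (the class and method names are illustrative, not from the original text): the loop counter and accumulator form the fixed part, while the caller-supplied array is the variable part whose size depends on the problem.

```java
// Illustrative sketch: fixed vs. variable parts of an algorithm's memory use.
public class SpaceComplexityDemo {

    // Fixed part A: 'total' and the loop counter 'i' occupy constant space
    // regardless of input size, so A is O(1) here.
    // Variable part Sp(I): the array 'arr' occupies space proportional to
    // the problem size n, so Sp(I) is O(n).
    public static int sum(int[] arr) {
        int total = 0;                          // fixed: one int, independent of n
        for (int i = 0; i < arr.length; i++) {  // fixed: one loop counter
            total += arr[i];
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[]{1, 2, 3, 4, 5})); // prints 15
    }
}
```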

There are various IDEs available on the market, many different programming languages, and many kinds of machines (from low-end processors to high-end systems). Do these also affect the space complexity of an algorithm? Yes; we should not ignore the fact that the space complexity of an algorithm also depends on such factors.

These are the following cases for which the complexity of algorithms is analyzed:

- Worst case f(n): For any instance of size n, the maximum number of steps taken by the algorithm for execution.
- Best case f(n): For any instance of size n, the minimum number of steps taken by the algorithm for execution.
- Average case f(n): For any instance of size n, the average number of steps taken by the algorithm for execution.
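To make these cases concrete, consider a simple linear search. In the sketch below (the class and method names are illustrative), the best case finds the key at index 0 in one step, while the worst case inspects all n elements before finding the key or reporting that it is absent.

```java
// Linear search illustrates the three analysis cases: best case O(1) when
// the key sits at index 0, worst case O(n) when it is last or absent, and
// average case O(n) when it is equally likely to be anywhere.
public class LinearSearchCases {

    public static int linearSearch(int[] arr, int key) {
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == key) return i; // best case: hit on the first step
        }
        return -1; // worst case: all n elements were inspected
    }

    public static void main(String[] args) {
        int[] data = {7, 3, 9, 1, 5};
        System.out.println(linearSearch(data, 7));  // best case: index 0
        System.out.println(linearSearch(data, 5));  // worst case: index 4
        System.out.println(linearSearch(data, 8));  // worst case: absent, -1
    }
}
```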

## Role of sorting algorithms

Sorting is one of the most important tasks in development life cycles. Developers and designers can use sorting to increase the efficiency of searching algorithms. Selection, Insertion, and Bubble sort are important for understanding the fundamentals of sorting algorithms.

- Selection sort: repeatedly picks the smallest remaining element and appends it to the sorted result.
- Insertion sort: repeatedly inserts the next element into its correct position within the sorted result.
- Bubble sort: repeatedly compares neighbouring pairs and swaps them if necessary.

### Bubble sort:

The main function of bubble sort is to repeatedly compare adjacent elements in every pass and swap them if they are out of order. When sorting in ascending order, by the time the algorithm begins its i-th pass, the last (i-1) elements have already been placed in their final positions. The i-th pass then moves the i-th largest element to position (N-i), i.e., the i-th position from the end.

Optimizing the algorithm for Bubble sort

During each pass, we check whether any swap was executed in the inner loop. Once the array is fully sorted, no more swaps occur; therefore, if a pass completes without a single swap, the sorting operation can stop. This optimization reduces the number of passes whenever the array becomes sorted before all passes have completed.
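The optimized pass-checking described above can be sketched in Java as follows (the class and method names are illustrative): a `swapped` flag ends the outer loop as soon as a pass completes without any swap, which gives the O(N) best case on already-sorted input.

```java
import java.util.Arrays;

public class BubbleSort {

    // Optimized bubble sort: a 'swapped' flag stops the outer loop early
    // once a full pass makes no swaps.
    public static void bubbleSort(int[] arr) {
        int n = arr.length;
        for (int pass = 0; pass < n - 1; pass++) {
            boolean swapped = false;
            // After 'pass' passes, the last 'pass' elements are in place.
            for (int j = 0; j < n - 1 - pass; j++) {
                if (arr[j] > arr[j + 1]) {
                    int temp = arr[j]; // single temp variable: O(1) auxiliary space
                    arr[j] = arr[j + 1];
                    arr[j + 1] = temp;
                    swapped = true;
                }
            }
            if (!swapped) break; // no swaps in this pass: array is sorted
        }
    }

    public static void main(String[] args) {
        int[] arr = {5, 1, 4, 2, 8};
        bubbleSort(arr);
        System.out.println(Arrays.toString(arr)); // [1, 2, 4, 5, 8]
    }
}
```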

Time Complexity of the algorithm:

- Best Case: the input is an already-sorted array. Function: O(N), with O(1) swaps.
- Worst Case: the input is a reverse-sorted array. Function: O(N^2), with O(N^2) swaps.
- Average Case: Function: O(N^2), with O(N^2) swaps.

Space Complexity of the algorithm:

- In-Place sort, because for swapping a temporary variable is used [ auxiliary, O(1) ].

Advantages of Bubble sort:

- One of the easiest sorting methods.
- O(N) best-case complexity: when the array is already sorted, the optimized version finishes after a single pass.
- The optimized version detects an already-sorted array during its first pass, performing no swaps at all (O(1) swaps).
- Stable sort: the relative order of elements with equal keys is preserved.
- In-place sort.

The disadvantage of Bubble sort:

- The execution time of bubble sort is slow compared to other sorting algorithms.

### Selection sort

The main function of selection sort is to identify the smallest element of the unsorted subarray and move it to the first position of that subarray (for ascending order). The process repeats: the algorithm selects the next smallest element from the unsorted subarray and places it immediately after the previously placed element in the sorted subarray. In this way, the i-th element selected is placed at the i-th position. The method thus divides the array into two subarrays: the sorted subarray and the unsorted subarray.
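The selection step described above can be sketched in Java as follows (the class and method names are illustrative): each pass finds the index of the smallest remaining element and performs at most one swap, which is why the total number of swaps is O(N).

```java
import java.util.Arrays;

public class SelectionSort {

    // Selection sort: in each pass, find the smallest element of the
    // unsorted suffix arr[i..n-1] and swap it into position i.
    public static void selectionSort(int[] arr) {
        for (int i = 0; i < arr.length - 1; i++) {
            int minIndex = i;
            for (int j = i + 1; j < arr.length; j++) {
                if (arr[j] < arr[minIndex]) minIndex = j; // track smallest
            }
            int temp = arr[i];          // at most one swap per pass
            arr[i] = arr[minIndex];
            arr[minIndex] = temp;
        }
    }

    public static void main(String[] args) {
        int[] arr = {29, 10, 14, 37, 13};
        selectionSort(arr);
        System.out.println(Arrays.toString(arr)); // [10, 13, 14, 29, 37]
    }
}
```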

Time Complexity of the algorithm:

- Best case: O(N^2) comparisons are made even on sorted input, with O(1) swaps.
- Worst case: the array is reverse-sorted, and the inner loop makes the maximum number of comparisons. Function: O(N^2), with O(N) swaps.
- Average case: Function: O(N^2), with O(N) swaps.

Space Complexity of the algorithm:

- In-Place sort, [auxiliary, O(1)]

Advantages of Selection sort:

- Selection sort works well for linked lists and other structures where adding and removing elements is cheap: the smallest element is removed from the unsorted part and appended to the end of the sorted part.
- There is an overall decrease in the number of swaps: O(N) swaps in all cases.
- In-Place sort.

The disadvantage of Selection sort:

- The time complexity is O(N^2) in all cases; there is no faster best-case scenario.

### Insertion Sort

Insertion sort is a simple comparison-based sorting algorithm whose main function is to insert each element at its proper position. During the i-th iteration, it inserts the element Arr[i] into the sorted subarray Arr[1:(i-1)], which already contains the previous (i-1) elements in order. The process repeats until every element has been properly inserted.
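The insertion step described above can be sketched in Java as follows (the class and method names are illustrative): elements of the sorted prefix are shifted one slot to the right until the correct position for the current element is found.

```java
import java.util.Arrays;

public class InsertionSort {

    // Insertion sort: in iteration i, shift elements of the sorted prefix
    // arr[0..i-1] right until the correct slot for arr[i] is found.
    public static void insertionSort(int[] arr) {
        for (int i = 1; i < arr.length; i++) {
            int key = arr[i];
            int j = i - 1;
            while (j >= 0 && arr[j] > key) {
                arr[j + 1] = arr[j]; // shift the larger element right
                j--;
            }
            arr[j + 1] = key;        // drop the key into its slot
        }
    }

    public static void main(String[] args) {
        int[] arr = {12, 11, 13, 5, 6};
        insertionSort(arr);
        System.out.println(Arrays.toString(arr)); // [5, 6, 11, 12, 13]
    }
}
```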

Time Complexity of the algorithm:

- Best case: the input is an already-sorted array. Function: O(N), with O(1) swaps.
- Worst case: the input is reverse-sorted, and the inner loop makes the maximum number of comparisons. Function: O(N^2), with O(N^2) swaps.
- Average case: Function: O(N^2), with O(N^2) swaps.

Space Complexity of the algorithm:

- In-Place sort, [auxiliary, O(1)]

Advantages of Insertion sort:

- Easy for developers and designers to implement and understand.
- O(N) best-case complexity: when the array is already sorted, this is the most effective case.
- There is a significant decrease in the number of swaps.
- The algorithm works efficiently for small values of N, like other quadratic sorting algorithms.
- Adaptive: if the input array is partially sorted, the number of steps executed is reduced.
- Stable sort.
- In-Place sort.

Disadvantages of Insertion sort:

- As the value of N increases, the algorithm becomes ineffective due to a significant increase in inner-loop executions.

## Notation and its types

The following are the different types of notations used for the time complexity of algorithms:

- Big Oh: It is used for indicating “fewer than or the same as” <expression> iterations.
- Big Omega: It is used for indicating “more than or same as” <expression> iterations.
- Big Theta: It is used for indicating “the same as”<expression> iterations.
- Little Oh: It is used for indicating “fewer than” <expression> iterations.
- Little Omega: It is used for indicating “more than” <expression> iterations.

Let us now understand different notations used for time complexity for algorithms with the help of cases:

### Big O Notation (O):

The Big O notation is used to describe the worst case of time complexity for an algorithm. It evaluates the maximum time the algorithm can take to execute successfully, considering all possible inputs. With respect to the expression, the function grows no faster than the expression: slower, or at the same rate.

Let us consider the following cases:

#### O(1)

Big O notation O(1) is used to represent the time complexity of an algorithm that executes in the same amount of time (and uses the same space) regardless of the input data.

Implementation of Constant time O(1)

Median of sorted array

```java
public class MedianOfSortedArray {

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        int[] arr = {1, 2, 3, 4, 5};
        int median;
        if (arr.length % 2 == 0) {
            median = arr.length / 2;
        } else {
            median = (arr.length + 1) / 2;
        }
        System.out.println("Median of array is: " + arr[median - 1]);
        long end = System.currentTimeMillis();
        System.out.println("The time taken for " + arr.length + " elements: " + (end - start) + " ms");

        start = System.currentTimeMillis();
        int[] arr1 = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
                      16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30};
        if (arr1.length % 2 == 0) {
            median = arr1.length / 2;
        } else {
            median = (arr1.length + 1) / 2;
        }
        System.out.println("Median of array is: " + arr1[median - 1]);
        end = System.currentTimeMillis();
        System.out.println("The time taken for " + arr1.length + " elements: " + (end - start) + " ms");
    }
}
```

#### O(n)

Big O notation O(N) is used to represent the time complexity of an algorithm, where the performance of the algorithm is directly proportional to the size of the input data, meaning the time used by the respective algorithm increases linearly with an increase in the size of input data.

Implementation of Linear time O(n)

Sum Of elements

```java
public class SumOfElements {

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        int[] arr = {1, 2, 3, 4, 5};
        int sum = 0;
        for (int i = 0; i < arr.length; i++) {
            sum = sum + arr[i];
        }
        System.out.println("The sum of elements: " + sum);
        long end = System.currentTimeMillis();
        System.out.println("The time taken for " + arr.length + " elements: " + (end - start) + " ms");

        start = System.currentTimeMillis();
        int[] arr1 = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
                      16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30};
        sum = 0;
        for (int i = 0; i < arr1.length; i++) {
            sum = sum + arr1[i];
        }
        System.out.println("The sum of elements: " + sum);
        end = System.currentTimeMillis();
        System.out.println("The time taken for " + arr1.length + " elements: " + (end - start) + " ms");
    }
}
```

#### O(n^2)

Big O notation O(n^2) is used to represent the time complexity of an algorithm whose performance is proportional to the square of the size of the input data, meaning the time used by the algorithm increases quadratically as the input grows.

Implementation of Quadratic time O(n^2)

Addition of square matrix

```java
public class AdditionOfSquareMatrix {

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        int rows = 2, columns = 2;
        int[][] MatrixA = { {1, 1}, {2, 3} };
        int[][] MatrixB = { {2, 3}, {2, 2} };
        int[][] sum = new int[rows][columns];
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < columns; j++) {
                sum[i][j] = MatrixA[i][j] + MatrixB[i][j];
            }
        }
        System.out.println("Sum of the given " + rows + "x" + columns + " matrices is: ");
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < columns; j++) {
                System.out.print(sum[i][j] + " ");
            }
            System.out.println();
        }
        long end = System.currentTimeMillis();
        System.out.println("The time taken for addition of square matrix of order " + rows + " is: " + (end - start) + " ms");

        start = System.currentTimeMillis();
        rows = 10;
        columns = 10;
        int[][] matA = {
            {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, {1, 2, 3, 4, 5, 6, 7, 8, 9, 10},
            {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, {1, 2, 3, 4, 5, 6, 7, 8, 9, 10},
            {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, {1, 2, 3, 4, 5, 6, 7, 8, 9, 10},
            {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, {1, 2, 3, 4, 5, 6, 7, 8, 9, 10},
            {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
        };
        int[][] matB = {
            {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, {1, 2, 3, 4, 5, 6, 7, 8, 9, 10},
            {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, {1, 2, 3, 4, 5, 6, 7, 8, 9, 10},
            {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, {1, 2, 3, 4, 5, 6, 7, 8, 9, 10},
            {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, {1, 2, 3, 4, 5, 6, 7, 8, 9, 10},
            {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
        };
        int[][] sum1 = new int[rows][columns];
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < columns; j++) {
                sum1[i][j] = matA[i][j] + matB[i][j];
            }
        }
        System.out.println("Sum of the given " + rows + "x" + columns + " matrices is: ");
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < columns; j++) {
                System.out.print(sum1[i][j] + " ");
            }
            System.out.println();
        }
        end = System.currentTimeMillis();
        System.out.println("The time taken for addition of square matrix of order " + rows + " is: " + (end - start) + " ms");
    }
}
```

#### O(log n)

Big O notation O(log n) is used to represent the time complexity of an algorithm whose running time grows logarithmically with the size of the input data: each step reduces the remaining work by a constant factor, as binary search does by halving the search range.

Implementation of Logarithmic time O(log n)

Binary Search

```java
class BinarySearch {

    static int binarySearch(int[] arr, int x) {
        int l = 0, r = arr.length - 1;
        while (l <= r) {
            int m = l + (r - l) / 2;
            if (arr[m] == x)
                return m;
            if (arr[m] < x)
                l = m + 1;
            else
                r = m - 1;
        }
        return -1;
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        int[] arr = {2, 3, 4, 10, 40, 50, 60, 70, 80}; // sorted array. Add elements here
        int elementToBeSearched = 40;
        int result = BinarySearch.binarySearch(arr, elementToBeSearched);
        if (result == -1)
            System.out.println("Element not present");
        else
            System.out.println("Element found at index " + result);
        long end = System.currentTimeMillis();
        System.out.println("Time to find element in array of size: " + arr.length + " in ms is: " + (end - start));

        start = System.currentTimeMillis();
        int[] arr1 = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
                      16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30};
        elementToBeSearched = 23;
        result = BinarySearch.binarySearch(arr1, elementToBeSearched);
        if (result == -1)
            System.out.println("Element not present");
        else
            System.out.println("Element found at index " + result);
        end = System.currentTimeMillis();
        System.out.println("Time to find element in array of size: " + arr1.length + " in ms is: " + (end - start));
    }
}
```

There are also other Big O growth rates, such as log-linear growth O(n log n), exponential growth O(2^n), and factorial growth O(n!).

### Omega Notation (Ω):

The Omega notation is used to describe the best case of time complexity for an algorithm. It evaluates the minimum time the algorithm requires for successful execution, considering all possible inputs; the algorithm cannot execute in less time than this. With respect to the expression, the function grows faster than or at the same rate as the expression.

### Theta Notation (θ):

The Theta notation is used to describe a tight bound on the time complexity of an algorithm: it bounds the running time both from above (like Big O) and from below (like Omega). The algorithm's execution time always lies within this range. With respect to the expression, the function grows at the same rate, staying between the upper and lower limits of the execution of the algorithm.
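The three bounds above can also be stated formally. One standard formulation, in terms of positive constants c and n₀, is:

```latex
\begin{aligned}
f(n) = O(g(n)) &\iff \exists\, c > 0,\ n_0 \ge 0 \ \text{such that}\ 0 \le f(n) \le c\,g(n) \ \text{for all } n \ge n_0 \\
f(n) = \Omega(g(n)) &\iff \exists\, c > 0,\ n_0 \ge 0 \ \text{such that}\ 0 \le c\,g(n) \le f(n) \ \text{for all } n \ge n_0 \\
f(n) = \Theta(g(n)) &\iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n))
\end{aligned}
```

In words: Θ(g(n)) holds exactly when both the O(g(n)) upper bound and the Ω(g(n)) lower bound hold for the same function.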

## Conclusion

This article explained what algorithms are and how the efficiency of an algorithm is determined using measures such as time complexity and space complexity. It also provided a detailed explanation of the various notations used to express complexity. Understanding the efficiency of an algorithm, and the role efficiency plays in the field of data structures and algorithms, is essential.