- **If-else statements**
- **A Loop**
- **Nested Loops**
- **Consecutive Statements**
- Why do we get **Logarithmic Complexity** in a program?

**Note** : We always consider the worst-case scenario while determining the complexity of an algorithm, as it gives us the maximum running time of the algorithm for large input values.

Let's say we have a program with two statements, one taking `O(1)` (constant time) and the other `O(n)` (linear time). We consider `O(n)` as the complexity of the whole program, because we know the program will not take more than `O(n)` time; it is the **Upper Bound** of the program.

For an introduction to Asymptotic Notations visit here.

We first have to check the complexity of the **condition** inside the if statement, plus we consider the complexity of whichever branch is more expensive, the *then* statement or the *else* statement.

```
// condition statement : constant time O(1)
if( n % 2 == 0 ) // (even number)
    return true; // then statement : constant time O(1)
else
    for(int i = 1; i <= n; i++) // else statement : linear time O(n)
        sum = sum + i; // computes the sum of the first n numbers when n is odd
```

`Total time = O(n)`, as the else statement is the most time-consuming part, so the whole program will not take more than O(n) time to finish executing.

The running time of a loop is determined by the *number of times the loop is executed multiplied by the running time of the statements inside the loop.*

```
// loop will execute n times
for(int i = 0; i < n; i++){
    m = m + 2; // takes constant time, c, O(1)
}
```

`Total time = c * n = cn = O(n)`

**Q1: Will it affect the time complexity of the program if we take `i *= 2` rather than `i++` in the update expression of the for loop?**

*A1: We will soon find the answer. Keep reading ;)*

Here *total running time will be the product of sizes of all the loops*.

We have to analyse the program inside out: first check the innermost loop and its statements, then move one level up each time the complexity of the current loop has been determined.

```
// outer loop executes n times
for(int i = 0; i < n; i++){
    // inner loop executes n times
    for(int j = 0; j < n; j++){
        k++; // constant time, c, O(1)
    }
}
```

Here the inner loop executes `n` times for every `i`, so the total time is:

`Total time = c * n * n = cn^{2} = O(n^{2})`

**Q2: Will it affect the time complexity of the program if `j` (the inner loop initialisation) starts from `i` or `i + 1`?**

*A2: It will not! The total number of iterations becomes n + (n - 1) + ... + 1 = n(n + 1) / 2, and for large values of n the constant factor 1/2 is ignored, so the complexity is still O(n^{2}).*

For more information about why are we taking large values of n: visit here

We just have to add the time complexities of the statements. :)

```
x = x + 1; // constant time c0
// loop executes n times
for(int j = 1; j < n; j++){
    m = m + 2; // constant time c1
}
// outer loop executes n times
for(int i = 0; i < n; i++){
    // inner loop executes n times
    for(int j = 0; j < n; j++){
        k++; // constant time c2
    }
}
```

`Total Time : c_{0} + c_{1}n + c_{2}n^{2} = O(n^{2})`

An algorithm is `O(logn)` if it takes constant time to cut the problem size by a fraction (usually by 1/2). Let's see an example.

```
for(int i = 1; i < n; )
i = i * 2;
```

If we observe carefully, the value of `i` is doubling every iteration: initially `i = 1`, in the next step `i = 2`, and in the following steps `i = 4, 8` and so on. Let us assume that the loop executes some k times. At the k^{th} step 2^{k} = n and we come out of the loop. Taking the logarithm of both sides gives k = log_{2}n.

`Total Time : O(logn)`

Referring to **Q1**, the answer is YES: changing the update expression from `i++` to `i = i * 2` changes the time complexity of the loop to `O(logn)`.

Let's also see the case below, where the time complexity is also `O(logn)`.

```
for(int i = n; i >= 1;)
i = i / 2;
```

From the above case, we can say that logarithmic complexity comes up for a decreasing sequence as well.

Another example: *binary search*

- Look at the centre element of the (sorted) array.
- Is the element towards the left or the right of the centre?
- Repeat the process on the left or right half of the array until the element is found.

Some useful logarithm formulas:

- *log* x^{y} = y *log* x
- *log* n = *log*_{10} n
- *log* (xy) = *log* x + *log* y
- *log*^{k} n = (*log* n)^{k}
- *log* *log* n = *log* (*log* n)
- *log* (x / y) = *log* x - *log* y

Thank You!

The idea of *analysis of algorithms* is to compare algorithms mainly in terms of running time and space consumed.

For example, to go from city 'M' to city 'N', there can be many ways to carry out this task: by *flight*, by *bus*, by *train* and also by *bicycle*. Depending on accessibility and convenience, we choose the one that suits us. Similarly, in computer science we have various algorithms available for solving the same problem; for example, we can solve the sorting problem with many algorithms like `selection-sort`, `bubble-sort`, `merge-sort` and *many more*, but we will choose the one with the lowest complexity. (A popular general-purpose choice is `quick-sort`, with average complexity `O(nlog(n))`.)

This shows that analysis of algorithms helps us determine which method is more efficient, in terms of both time and space consumed.

Prior to learning asymptotic notations, let's see what the **rate of growth** of an algorithm is.

**Rate of growth** is nothing but the rate at which the run time complexity of the algorithm increases as a function of the input.

Let's suppose we have to purchase a laptop and a mouse. If someone asks you what are you purchasing, then in general you will say buying a laptop and ignore buying the mouse part. This is because, cost of laptop is too big compared to cost of mouse. So, we can say

`Total Cost = cost_of_laptop + cost_of_mouse`

`Total Cost ≈ cost_of_laptop (approximately)`

For the above example, we can represent the cost of the laptop and the cost of the mouse as a function, and for a given function we can ignore the lower-order terms (which are relatively insignificant for large values of the input size, n). Let us consider another example in terms of algebra: let n^{4}, n^{2}, 100n and 5 be the individual costs of some function; we can approximate this function by n^{4}, i.e. the term with the highest rate of growth.

*n ^{4} + n^{2} + 100n + 5 ≈ n^{4}*

Here's a list of the most common rates of growth.

| Time Complexity | Name | Example |
| --- | --- | --- |
| 1 | Constant | accessing an array element |
| log(n) | Logarithmic | finding an element in a sorted array |
| n | Linear | finding an element in an unsorted array |
| nlog(n) | Linear Logarithmic | sorting n items with divide-and-conquer (Merge Sort) |
| n^{2} | Quadratic | shortest path between two nodes in a graph |
| n^{3} | Cubic | matrix multiplication |
| 2^{n} | Exponential | Tower of Hanoi problem |

Before going to Asymptotic Notations, We also need to know about Types Of Analysis.

If we have to analyse an algorithm, we need to know on what inputs the algorithm takes less time and on what inputs it takes more time. That means we can represent an algorithm with multiple expressions: one for the case where it takes *less time* and another for the case where it takes *more time*. In general, when the algorithm takes the least time it is called the **best case**, and when it takes the most time it is called the **worst case**.

**Worst Case**

- defines the input for which the algorithm takes the longest time to execute.
- the algorithm runs slowest.
- executes the maximum number of steps on input data of size n.

**Best Case**

- defines the input for which the algorithm takes the least time to execute.
- the algorithm runs fastest.
- executes the minimum number of steps on input data of size n.

**Average Case**

- provides a prediction about the running time of the algorithm.
- assumes that the input is random.
- executes an average number of steps on input data of size n.

`Lower Bound <= Average Time <= Upper Bound`

Asymptotic notation gives us expressions for the best, average and worst cases; for all three cases we need to identify the upper and lower bounds. To represent these bounds we need some syntax, which is the subject of the following discussion. Let us assume that a given algorithm can be represented in the form of a function *f(n)*.

This notation gives the **tight upper bound** of the given algorithm / function: `f(n) = O(g(n))`.

It means, for larger values of n, the *upper bound* of function *f(n)* is a function *g(n)*.

Here *upper bound* means, value of f(n) cannot exceed g(n) after a particular value of n. (represented as n_{0} in the graphical approach).

Let's see this with a graphical approach.

After *n = n _{0}*, value of g(n) is always greater than or equal to the given algorithm's rate of growth f(n).

Also, for example, if *f(n)* = n^{4} + 100n + 50 is the given algorithm, then n^{4} is *g(n)*. That means *g(n) = n^{4}* gives the maximum rate of growth for *f(n)* at larger values of n.

O-Notation can also be defined as **O(g(n))** = { **f(n)** : there exist positive constants c and n_{0} such that *0 ≤ f(n) ≤ cg(n)* for all n ≥ n_{0} }.

Our objective is to get smallest rate of growth g(n) which is greater than or equal to f(n)'s rate of growth.

Generally we discard lower values of n. That means the rate of growth at lower values of n is not important. In the graph, n_{0} is the point from which we need to consider the rate of growths for a given algorithm. Below n_{0} the rate of growths could be different.

**f(n) = n^{2} + 1**

n^{2} + 1 ≤ 2n^{2}, for all n ≥ 1. Therefore, n^{2} + 1 = O(n^{2}), with c = 2 and n_{0} = 1.

**f(n) = 2n^{3} - 2n^{2}**

2n^{3} - 2n^{2} ≤ 2n^{3}, for all n ≥ 1. Therefore, 2n^{3} - 2n^{2} = O(n^{3}), with c = 2 and n_{0} = 1.

There is one more thing related to values of n_{0} and c, that is, there is no isolated set of values for n_{0} and c in finding the asymptotic bounds. Let us see an example,
100n + 5 = O(n). For this function, there can be multiple values for n_{0} and c, giving us an asymptotic solution / bound.

**Solution 1 **: 100n + 5 ≤ 100n + n = 101n, for all n ≥ 5, n_{0} = 5 and c = 101.

**Solution 2 **: 100n + 5 ≤ 100n + 5n = 105n, for all n ≥ 1, n_{0} = 1 and c = 105.

Both possibilities are correct.

**Note :** Most of the time we are interested in the Big-O notation, i.e. the tight upper bound of an algorithm, because it allows us to estimate whether an algorithm is feasible for our application by telling us that it will not take more than such-and-such an amount of memory or time when run on an input of size n.

This notation gives the **tight lower bound** of the given algorithm / function: `f(n) = Ω(g(n))`.

It means, for larger values of n, g(n) is the *lower bound* of function f(n).

Here *lower bound* means, rate of growth of *f(n)* is always greater than or equal to the rate of growth of function *g(n)* after a particular value of n i.e. n_{0}.

Let's see this with a graphical approach.

After n = n_{0}, value of g(n) is always smaller than or equal to the given algorithm's rate of growth f(n).

Ω Notation can also be defined as **Ω(g(n))** = { **f(n)** : there exist positive constants n_{0} and c such that *0 ≤ cg(n) ≤ f(n)* for all n ≥ n_{0} }.

Here, our objective is to get largest rate of growth g(n) which is less than or equal to f(n)'s rate of growth. g(n) is asymptotic lower bound for the given algorithm's rate of growth f(n).

**f(n) = 5n^{2}**

∃ (there exist) c, n_{0} such that 0 ≤ cn^{2} ≤ 5n^{2} ⇒ c = 1 and n_{0} = 1. Therefore, 5n^{2} = Ω(n^{2}).

Similarly, 2n = Ω(n), n^{3} = Ω(n^{3}), and logn = Ω(logn).

**Note :** Lower bounds are of great use as well. They tell us whether our algorithm can still be improved, and if our lower bound equals our upper bound we know that our algorithm is optimal.

This notation gives a range consisting of an upper bound and a lower bound, and determines whether the two bounds of the given algorithm are the same. The average running time of an algorithm is always between the **lower bound** (*Omega - Ω*) and the **upper bound** (*Big - O*) of the function. If the upper bound and lower bound of a function give the same result (rate of growth), then Theta - Θ will also have that same rate of growth.

For example, assume f(n) = 10n + n; its tight upper bound is O(n) and its lower bound is Ω(n). In this case, the rates of growth in the best case and worst case are the same, so the average case will be the same as well. If for a given algorithm the rates of growth for O and Ω are not the same, then the rate of growth for Θ may not be the same either; in that case we have to consider all possible running times and take their average (for example, *quick sort*'s average case gives Θ(nlogn) complexity).

Let us also see this in a graphical approach.

After n = n_{0}, the value of f(n) is always between c_{2}g(n) and c_{1}g(n).

Θ Notation can also be defined as **Θ(g(n))** = { **f(n)** : there exists positive constants c_{1}, c_{2}, and n_{0} such that* 0 ≤ c_{1}g(n) ≤ f(n) ≤ c_{2}g(n) , for all n ≥ n_{0} }. g(n) is an asymptotic tight bound for f(n).

Θ(g(n)) is a set of functions with same order of growth as g(n).

**Prove n ≠ Θ(n^{2})**

c_{1}n^{2} ≤ n ≤ c_{2}n^{2} only holds for n ≤ 1 / c_{1}. Therefore, n ≠ Θ(n^{2}).

**Prove 6n^{3} ≠ Θ(n^{2})**

c_{1}n^{2} ≤ 6n^{3} ≤ c_{2}n^{2} only holds for n ≤ c_{2} / 6. Therefore, 6n^{3} ≠ Θ(n^{2}).

For all the three notations, O, Ω, Θ, in every case for a given function f(n) we are trying to find other function g(n) which approximates f(n) at large values of n. That means, g(n) is also a *curve* which approximates f(n) at large values of n.

In mathematics, we call such curves **asymptotic curves**. In other words, g(n) is the asymptotic curve for f(n). For this reason, we call algorithm analysis **asymptotic analysis** and the notations **asymptotic notations**.

- **Transitivity**: if f(n) = Θ(g(n)) and g(n) = Θ(h(n)), then f(n) = Θ(h(n)). Valid for O and Ω as well.
- **Reflexivity**: f(n) = Θ(f(n)). Valid for O and Ω as well.
- **Symmetry**: f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n)).
- **Transpose symmetry**: f(n) = O(g(n)) if and only if g(n) = Ω(f(n)).

All divide and conquer algorithms work by dividing the problem into sub-problems, each of which is part of the original problem, and then performing some additional work to compute the final answer. As an example, the *merge sort* algorithm operates on two sub-problems, each of which is half the size of the original, and then performs O(n) additional work to merge them.

This gives the running-time recurrence:

`T(n) = 2T(n / 2) + O(n)`

Master theorem can be used to determine the running time of divide and conquer algorithms. For a given algorithm, first we try to find the recurrence relation of the problem. If the recurrence relation is of the below form then we can directly give the answer without fully solving it.

If the recurrence relation is of the form, ** T(n) = aT(n / b) + Θ(n ^{k}log^{p}n)**, where a ≥ 1, b > 1, k ≥ 0 and p is a real number, then:

1. If a > b^{k}, then T(n) = Θ(n^{log_{b}a}).

2. If a = b^{k}:

   a. If p > -1, then T(n) = Θ(n^{log_{b}a} log^{p+1}n).

   b. If p = -1, then T(n) = Θ(n^{log_{b}a} loglogn).

   c. If p < -1, then T(n) = Θ(n^{log_{b}a}).

3. If a < b^{k}:

   a. If p ≥ 0, then T(n) = Θ(n^{k}log^{p}n).

   b. If p < 0, then T(n) = O(n^{k}).
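As a worked check, apply the theorem to the merge sort recurrence above, T(n) = 2T(n / 2) + Θ(n): here a = 2, b = 2, k = 1 and p = 0. Since a = b^{k} (2 = 2^{1}) and p = 0 > -1, case 2a applies:

T(n) = Θ(n^{log_{2}2} log^{0+1}n) = Θ(nlogn)

Likewise, binary search gives T(n) = T(n / 2) + Θ(1), so a = 1, b = 2, k = 0 and p = 0; again a = b^{k} and p > -1, so T(n) = Θ(n^{0} log^{1}n) = Θ(logn).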

```
vector<int> LET_S;
pair<string, int> GET;
list<int> STARTED;
array<int, 3> WITH;
map<int, string> STL;
```

Still don't know what these statements do?

Let's read this node to kick-start your journey in STL.

**Standard Template Libraries** are a powerful set of C++ template classes. They contain numerous pre-defined classes and are used to make programming easier.

Consider the Standard Template Libraries a helping hand: instead of writing code to implement data structures like `Linked Lists`, `Stacks`, `Queues`, `Trees` etc. in every program that requires them (mostly in competitive programming questions), we can use these data structures by just including a library and using a pre-defined syntax.

Also, classes in Standard Template Libraries are made through template classes so these are generic in nature.

At the core of the C++ Standard Template Libraries are the following three well-structured components.

- **Containers**
- **Algorithms**
- **Iterators**

- Containers are used to manage and store a `collection of objects` of a certain kind.
- The Container Library in STL provides containers that are used to create data structures like arrays, linked lists, trees, etc.
- These containers are generic in nature: they can hold elements of any data type.

- `vector` : replicates arrays
- `queue` : replicates queues
- `stack` : replicates stacks
- `priority_queue` : replicates heaps
- `list` : replicates linked lists
- `set` : replicates trees
- `map` : associative arrays

- **Sequence Containers**: these containers hold linear storage of data, like arrays, linked lists etc.
- **Associative Containers**: ordered containers like map, multimap, set and multiset. (Ordered means the values are kept sorted by key inside the container, regardless of the order in which the user inserted them.)
- **Unordered Associative Containers**: similar to associative containers but with different constraints; the elements are stored unordered because hash tables are used to implement these containers.

When we use containers to implement data structures, we just have to include the respective header file (or a single umbrella header) to get the container we need.

```
#include<bits/stdc++.h>
```

or you can just include a specific container header file,

```
#include<iostream>
// including the vector header file to implement vectors
#include<vector>
using namespace std;
int main(){
    vector<int> first_vector;
    return 0;
}
// vector can be used for creating dynamic arrays of `char`, `int`, `float` and other types.
```

```
#include<iostream>
// including the list header file to implement lists
#include<list>
using namespace std;
int main(){
    list<int> my_list;
    return 0;
}
```

- Algorithms act on containers. They provide the means by which you perform sorting, searching and transforming of the contents of a container.
- The Algorithms Library contains built-in functions that perform complex algorithms on the data structures.

Example:

We can reverse a range with the reverse() function, sort a range with the sort() function, search a sorted range with binary_search(), and so on.

The Algorithms Library provides abstraction, i.e. you don't necessarily need to know how the algorithm works.

*Sorting of a vector*

```
#include<bits/stdc++.h>
using namespace std;
int main(){
    vector<int> second_vector {44,22,66,33,99};
    // begin() and end() return iterators into the vector.
    sort(second_vector.begin(), second_vector.end());
    return 0;
}
/*
Above code sorts the vector to {22,33,44,66,99}.
More about vector will be covered in a separate node.
*/
```

- Iterators are used to step through the elements of collections of objects. These collections can be containers or subsets of containers.
- Iterators in the Standard Template Libraries are used to point into the containers.
- Iterators act as a bridge between containers and algorithms.

Let's look at the code in the algorithms section above for a reference to iterators. The sort() function takes two parameters: a starting iterator `vector.begin()` and an ending iterator `vector.end()`, pointing to the first element and one past the last element of the vector respectively. sort() compares the elements these iterators point to and arranges them in sorted order. Because it works entirely through iterators, the type of the container does not matter, and the same sort can be used on different types of containers.

*Syntax*

```
// This is an iterator for a vector of int
vector<int>::iterator itr;
// itr is the iterator
```

Example use of iterator:

*Printing elements of a vector*

```
#include<bits/stdc++.h>
using namespace std;
int main(){
    vector<int> third_vector {10,20,30,40,50};
    vector<int>::iterator itr;
    for(itr = third_vector.begin(); itr != third_vector.end(); itr++){
        cout<<*itr<<" ";
    }
    return 0;
}
/*
OUTPUT : 10 20 30 40 50
*/
```

Visit this to view some Standard Template Libraries programs.

- We use a *null-terminated* (`\0`) character array to represent strings, although it is not technically a string data type.
- So operators cannot be applied to character arrays, like the assignment and comparison operators `=, <, >, <=, >=`.

```
// Declaration and initialisation of character arrays
char s1[10] = "Doge";
char s2[10];
```

```
// Erroneous code (don't use at home/work)
s2 = s1;      // arrays cannot be assigned
s2 > s1;      // compares pointers, not contents
s3 = s1 + s2; // arrays cannot be concatenated with +
```

This code results in invalid array operations.

- The `string` class is a specialisation of a more general template called `basic_string`.
- Since defining a class in C++ creates a new data type, `string` is a derived data type.
- This means operators can be overloaded for this class.

```
class string{
    // variables
    // functions
    // operators
};
string s1;
s1.function();
s1.operator(arguments);
```

String operations are safe but time-consuming, so the 'char array' approach (fast, with fewer operations) is **not** deprecated.

```
if (speed matters) {
    use a character array
} else if (safety and easy manipulation matter) {
    use the string class
}
```

Here's why `string` is safer than a character array:

- Careless programmers can overrun the end of an array that holds a null-terminated (`\0`) string.
- For example, *see below*.
- The string class handles such issues.

```
char s3[10];
strcpy(s3, "Hello careful programmers."); // overruns s3: the literal needs 27 bytes
```

- string is another container class.
- To use the string class, we have to include the string header (**not** string.h).
  - `#include<string>` (for the C++ string class)
  - `#include<string.h>` (in C, for string functions applied to character arrays)

Constructors:

- `string()`
- `string(const char *str)`
- `string(const string &str)`

Overloaded operators:

- `=` (assignment)
- `+` (concatenation)
- `+=` (concatenation assignment)
- `==` (equality)
- `!=` (inequality)
- `<` (less than)
- `<=` (less than or equal to)
- `>` (greater than)
- `>=` (greater than or equal to)
- `[]` (subscripting)
- `<<` (insertion)
- `>>` (extraction)

- We can mix string objects with other string objects or C-style strings.
- A C++ string can also be concatenated with character constants.

- `assign()`
- `append()`
- `insert()`
- `replace()`
- `erase()`
- `find()`
- `rfind()`
- `compare()`
- `c_str()`
- `size()`

Off we go! :)

- **STL** is the Standard Template Library.
- It is a powerful set of C++ template classes.
- At the core of the C++ Standard Template Library are the following three well-structured components:
  - Containers
  - Algorithms
  - Iterators

Ohh! Have I gotten too far ahead of the topic? My apologies!

Coming straight back to the topic, here is all about templates in C++.

The keyword `template` is used to define function templates and class templates in our C++ programs. It introduces generic programming, which gives programmers a way to write more reusable code.

So, now there are two ways in which we can use template,

### Function template

A function template is also known as a *generic function*.

Syntax for creating a function template:

```
template<class type>
type func_name(type arg1, type arg2, ...){
    ...
}
// here 'type' is a placeholder for the generalisation of a data type.
```

It creates a function template named `func_name` that can take any number of arguments.

**Let's understand this with an example**,

Program to find the smaller number from the given two numbers* without the use of template*:

```
#include<iostream>
using namespace std;
// without the use of the template keyword
int small(int a, int b){
    if(a > b){
        return(b);
    }else{
        return(a);
    }
}
double small(double a, double b){ // function overloading
    if(a > b){
        return(b);
    }else{
        return(a);
    }
}
int main(){
    cout<<small(2,6)<<endl;
    cout<<small(2.4,1.9);
    return 0;
}
/*
OUTPUT
2
1.9
*/
```

(32 Lines of Code)

To avoid using different data types in a function, *Use template*.

```
#include<iostream>
using namespace std;
// program using the template keyword
template<class X>
X small(X a, X b){ // here X is a generic data type, making 'small' a generic function
    if(a > b){
        return(b);
    }else{
        return(a);
    }
}
int main(){
    cout<<small(3,4)<<endl;
    cout<<small(3.4,1.6); // no function overloading reduces the lines of code in a program
    return 0;
}
/*
OUTPUT
3
1.6
*/
```

(25 Lines of Code)

Example 2:

```
#include<iostream>
using namespace std;
template<class X, class Y>
X big(X a, Y b){ // different data types
    if(a > b){
        return a;
    }else{
        return b;
    }
}
int main(){
    cout<<big(4,3.4)<<endl;
    cout<<big(1.99,-5);
    return 0;
}
/*
OUTPUT:
4
1.99
*/
```

(20 Lines of Code)

### Class template

A class template is also known as a *generic class*.

```
template<class type>
class class_name{
    ...
};
// here 'type' is also a placeholder used to generalise a class.
```

It creates a class template named `class_name`.

**Let's also learn this with an example. Here we are making an arrayList class without the use of a template.**

```
#include<iostream>
using namespace std;
class arrayList{
private:
    struct controlBlock{
        int capacity;
        int *arr_ptr;
    };
    controlBlock *s;
public:
    arrayList(int capacity){ // constructor of arrayList class
        s = new controlBlock;
        s->capacity = capacity;
        s->arr_ptr = new int[s->capacity](); // value-initialised, so unset elements are 0
    }
    void addElement(int index, int data){ // method to add elements to the list
        if(index >= 0 && index <= s->capacity - 1){
            s->arr_ptr[index] = data;
        }else{
            cout<<"Array index not valid";
        }
    }
    void viewElement(int index, int &data){ // method to read one element
        if(index >= 0 && index <= s->capacity - 1){
            data = s->arr_ptr[index];
        }else{
            cout<<"Array index not valid";
        }
    }
    void viewList(){ // method to view the list
        for(int i = 0; i < s->capacity; i++){
            cout<<" "<<s->arr_ptr[i];
        }
    }
};
int main(){
    int data;
    arrayList list1(4);
    list1.addElement(0,2);
    list1.addElement(1,12);
    list1.addElement(2,22);
    // list1.addElement(3,32); left unassigned, so it keeps its initial value 0
    // list1.viewElement(0,data);
    // cout<<data;
    list1.viewList();
    return 0;
}
/*
OUTPUT:
 2 12 22 0
*/
```

(61 Lines of Code)

*Image explanation of code.*

Now, templating the above program will increase the possible input types with the same number of lines of code.

visit : arrayList_with_template.cpp

So, here we come again, starting with the STL, the **Standard Template Library**.

- STL is Standard Template Library.
- It is a powerful set of C++ template classes.
- At the core of C++ Standard Template Library are following three well-structured components.
- Containers
- Algorithms
- Iterators

STL will be covered in upcoming nodes. :)

**stay tuned...**