
10 Most Important Algorithms For Coding Interviews

An algorithm is a set of rules to be followed in calculations or other problem-solving operations. Algorithms are one of the most important subjects in programming, and also one of the most complex yet interesting ones. From the interview perspective, if you want to crack a coding interview, you must have a strong command of algorithms and data structures. In this article, we'll look at some of the most important algorithms that will help you crack coding interviews.

Algorithms For Interviews

There are many important algorithms; a few of them are listed below:

  1. Sorting Algorithms
  2. Searching Algorithms
  3. String Algorithms
  4. Divide and Conquer
  5. Backtracking
  6. Greedy Algorithms
  7. Dynamic Programming
  8. Tree Traversal Algorithms
  9. Graph Algorithms
  10. Other Important Algorithms

1. Sorting Algorithms

Sorting algorithms are used to arrange data in a specific order so that it can be processed or searched efficiently. Here are some of the most commonly asked sorting algorithms, along with the time they take to sort the data.

A. Bubble Sort

Bubble sort is the most basic swapping-based sorting algorithm. It keeps swapping adjacent pairs that are not in the correct order. Because it compares every adjacent pair, bubble sort takes O(N²) time.

Bubble sort is a stable sorting algorithm and needs only O(1) extra space. In its plain form, its time complexity is O(N²) in the best, average, and worst cases; with an early-exit check for a pass with no swaps, the best case improves to O(N). Bubble sort is not an efficient algorithm for big data sets.
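
A minimal bubble sort sketch in Python (the swapped flag gives the early-exit best case mentioned above):

def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):                      # the last i elements are already in place
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]  # swap the adjacent pair
                swapped = True
        if not swapped:                                  # no swaps means the array is already sorted
            break
    return arr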

B. Insertion Sort

As the name suggests, It’s an insertion algorithm. An element is chosen and inserted in its correct position in an array. It’s as simple as sorting playing cards. Insertion sort is efficient for small data sets. It generally takes O(N2) time. But when the items are sorted, it takes O(N) time

C. Selection Sort 

In selection sort, we maintain two parts of the array: a sorted part and an unsorted part. We select the smallest element (for ascending order) from the unsorted part and swap it to the front of the unsorted part, extending the sorted part by one element; repeating this gives the sorted array. The time complexity of selection sort is O(N²).
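
A selection sort sketch in Python, assuming ascending order:

def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):     # find the smallest element in the unsorted part
            if arr[j] < arr[min_idx]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]   # move it to the end of the sorted part
    return arr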

D. Merge Sort

Merge sort is a divide-and-conquer-based sorting algorithm. It keeps dividing the array into two halves until every element stands alone, and then merges the pieces back together in sorted order. The whole process takes O(n log n) time: the array is split across O(log n) levels, and the merging at each level takes O(n) time.

Merge sort is a stable sorting algorithm. It takes O(n) extra space for merging. In all cases (best, average, and worst), its time complexity is O(n log n). Merge sort is very efficient for huge data sets, but for smaller data sets it is a bit slower than insertion sort.
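
A compact merge sort sketch in Python (this version returns a new sorted list rather than sorting in place):

def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])                # sort the left half
    right = merge_sort(arr[mid:])               # sort the right half
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):     # merge the two sorted halves
        if left[i] <= right[j]:                 # <= keeps the sort stable
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])                     # append whatever is left over
    merged.extend(right[j:])
    return merged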

E. Quick Sort

Just like merge sort, quick sort is based on the divide-and-conquer strategy. In quick sort, we choose a pivot element and partition the array into two parts around it, with smaller elements on one side of the pivot and larger elements on the other.

The time complexity of quick sort is O(n log n) on average, but the worst case can be as bad as O(n²), for example when a fixed pivot is used on an already-sorted array. To make the worst case extremely unlikely, we use the randomized quick sort algorithm, in which the pivot is chosen at a random index.
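
A simple randomized quick sort sketch in Python; the list-based partitioning trades the usual in-place swaps for readability:

import random

def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)        # a random pivot makes the O(n²) worst case very unlikely
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return quick_sort(less) + equal + quick_sort(greater)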

2. Searching Algorithms

A. Linear Search

Linear search is the naive method of searching. It starts from the very beginning and keeps checking elements until it reaches the end, which takes O(n) time. It is the standard way to search for something in unsorted data.
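
A minimal linear search sketch in Python:

def linear_search(arr, target):
    for i, value in enumerate(arr):   # scan from the beginning to the end
        if value == target:
            return i                  # index of the first match
    return -1                         # not found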

B. Binary Search

Binary search is one of the most efficient search algorithms, but it works on sorted data only. It runs in O(log n) time: it repeatedly divides the search range into two halves and continues in whichever half can still contain the target.

Binary search can be implemented using both the iterative method and the recursive method. 

Iterative approach:

def binary_search(arr, x, low, high):
    # returns the index of x in the sorted array arr[low..high], or -1 if absent
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == x:
            return mid                  # found x
        elif arr[mid] < x:
            low = mid + 1               # x is on the right side
        else:
            high = mid - 1              # x is on the left side
    return -1                           # x is not present

Recursive approach:

def binary_search(arr, x, low, high):
    # returns the index of x in the sorted array arr[low..high], or -1 if absent
    if low > high:
        return -1                       # x is not present
    mid = (low + high) // 2
    if arr[mid] == x:
        return mid
    elif arr[mid] < x:
        return binary_search(arr, x, mid + 1, high)   # recurse on the right side only
    else:
        return binary_search(arr, x, low, mid - 1)    # recurse on the left side only

3. String Algorithms

A. Rabin Karp Algorithm

The Rabin-Karp algorithm is one of the most frequently asked string algorithms in coding interviews. It efficiently finds the occurrences of a substring in a string: given a string S and a substring S1, it counts the occurrences of S1 in S by comparing rolling hash values instead of comparing every window character by character. Its average time complexity is O(n + m) and its worst-case complexity is O(n·m), where n is the length of S and m is the length of S1.
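
A Rabin-Karp sketch in Python; the base and modulus below are illustrative choices, and the explicit substring comparison guards against hash collisions:

def rabin_karp_count(S, S1, base=256, mod=10**9 + 7):
    # counts occurrences of pattern S1 in text S using a rolling hash
    n, m = len(S), len(S1)
    if m == 0 or m > n:
        return 0
    high = pow(base, m - 1, mod)                    # weight of the leading character
    p_hash = t_hash = 0
    for i in range(m):                              # hash the pattern and the first window
        p_hash = (p_hash * base + ord(S1[i])) % mod
        t_hash = (t_hash * base + ord(S[i])) % mod
    count = 0
    for i in range(n - m + 1):
        if p_hash == t_hash and S[i:i + m] == S1:   # verify to rule out hash collisions
            count += 1
        if i < n - m:                               # roll the hash to the next window
            t_hash = ((t_hash - ord(S[i]) * high) * base + ord(S[i + m])) % mod
    return count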

B. Z Algorithm

The Z algorithm improves on Rabin-Karp: it also finds the number of occurrences of a substring in a given string, but in linear time O(n + m) in all cases (best, average, and worst). In this algorithm, we construct a Z array that stores, for every position of the string, the length of the longest substring starting there that matches a prefix of the string. Its space complexity is also O(n + m), where n is the length of S and m is the length of S1.
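
A sketch of the Z-array approach in Python; it concatenates the pattern, a separator, and the text, and counts positions whose Z value equals the pattern length:

def count_occurrences_z(S, S1):
    # '#' is assumed not to appear in either string
    combined = S1 + "#" + S
    n = len(combined)
    z = [0] * n
    left = right = 0
    for i in range(1, n):
        if i < right:
            z[i] = min(right - i, z[i - left])    # reuse information from the current Z-box
        while i + z[i] < n and combined[z[i]] == combined[i + z[i]]:
            z[i] += 1                             # extend the match character by character
        if i + z[i] > right:
            left, right = i, i + z[i]             # update the Z-box
    return sum(1 for value in z if value == len(S1))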

4. Divide and Conquer

As the name suggests, the problem is first divided into smaller sub-problems, these sub-problems are solved, and their solutions are then combined to get the final solution. Many important algorithms work on the divide-and-conquer strategy.

Some examples of divide-and-conquer algorithms are merge sort, quick sort, and binary search, all covered above.

5. Backtracking

Backtracking is a variation of recursion. In backtracking, we build a solution one choice at a time and undo (backtrack from) a choice as soon as it cannot lead to a valid solution, before trying the next option. In this way, it explores every possible combination of choices in order to solve the problem.

Some standard questions on backtracking are the N-Queens problem, Rat in a Maze, Sudoku solving, and the Knight's Tour.
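
A small N-Queens sketch in Python, shown as one illustration of backtracking (place a queen per row, and undo the placement when a row cannot be completed):

def solve_n_queens(n):
    # returns the number of ways to place n non-attacking queens on an n x n board
    solutions = []

    def place(row, cols, diag1, diag2, current):
        if row == n:                          # all queens placed: record the solution
            solutions.append(current[:])
            return
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue                      # this square is attacked
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            current.append(col)
            place(row + 1, cols, diag1, diag2, current)   # try the next row
            current.pop()                     # undo the choice (backtrack)
            cols.remove(col); diag1.remove(row - col); diag2.remove(row + col)

    place(0, set(), set(), set(), [])
    return len(solutions)

print(solve_n_queens(4))   # 2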

6. Greedy Algorithm

A greedy algorithm solves a problem by always taking the best option available at each step. It is used in situations where optimization is required, i.e. where something has to be maximized or minimized.

Some of the most common greedy problems are Activity Selection, Fractional Knapsack, Job Sequencing with Deadlines, and Huffman Coding.
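
A sketch of Activity Selection in Python, as one illustration of the greedy approach (always pick the activity that finishes earliest):

def max_activities(activities):
    # activities is a list of (start, finish) pairs; returns how many
    # non-overlapping activities can be scheduled
    activities.sort(key=lambda a: a[1])   # greedy choice: earliest finish time first
    count, last_finish = 0, float("-inf")
    for start, finish in activities:
        if start >= last_finish:          # compatible with the last chosen activity
            count += 1
            last_finish = finish
    return count

print(max_activities([(1, 3), (2, 5), (4, 7), (8, 9)]))   # 3 -> (1,3), (4,7), (8,9)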

7. Dynamic Programming

Dynamic programming is one of the most important topics asked in coding interviews. It is an optimization of recursion: it can be applied to problems whose final solution is derived from the solutions of smaller sub-problems. It stores the solution of each sub-problem and simply reuses the stored result wherever it is required, instead of calculating the same thing again and again.

Some of the very important questions based on dynamic programming are the 0/1 Knapsack problem, Longest Common Subsequence, Longest Increasing Subsequence, Coin Change, and Edit Distance.
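
A minimal memoization sketch in Python using Fibonacci numbers as an illustration of how stored sub-problem results avoid recomputation:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # fib(n) is built from the already-solved sub-problems fib(n-1) and fib(n-2);
    # lru_cache stores each result so no sub-problem is computed twice
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))   # 12586269025, computed in linear rather than exponential time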

8. Tree Traversal Algorithms

There are three main types of tree traversal algorithms:

A. In-Order Traversal

  • Traverse the left subtree, then
  • Traverse the root node, then
  • Traverse the right subtree

B. Pre-Order Traversal

  • Traverse the root node, then
  • Traverse the left subtree, then
  • Traverse the right subtree

C. Post-Order Traversal

  • Traverse the left subtree, then
  • Traverse the right subtree, then
  • Traverse the root node
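
The three traversals can be sketched in Python as follows, assuming a simple Node class:

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def inorder(node):                 # left subtree, root, right subtree
    if node:
        inorder(node.left)
        print(node.value, end=" ")
        inorder(node.right)

def preorder(node):                # root, left subtree, right subtree
    if node:
        print(node.value, end=" ")
        preorder(node.left)
        preorder(node.right)

def postorder(node):               # left subtree, right subtree, root
    if node:
        postorder(node.left)
        postorder(node.right)
        print(node.value, end=" ")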

9. Algorithms Based on Graphs

A. Breadth First Search (BFS)

Breadth-first search (BFS) is used to traverse graphs. It starts from a node (the root node in trees, or any chosen node in graphs) and traverses level by level: it visits all the nodes at the current level before moving on to the nodes at the next level. This is why it is also called level-order traversal.

The implementation of the approach is mentioned below:

  • We create a queue and push the starting node of the graph into it.
  • We also keep a visited array (or set), which keeps track of all the nodes visited so far.
  • While the queue is not empty, we keep doing the following:
  • Pop the first element of the queue, visit it, and push all of its adjacent elements that have not been visited yet into the queue.
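
A BFS sketch in Python, assuming the graph is given as an adjacency list (a dict mapping each node to its neighbours):

from collections import deque

def bfs(graph, start):
    visited = {start}
    queue = deque([start])
    order = []
    while queue:                         # while the queue is not empty
        node = queue.popleft()           # pop the first element
        order.append(node)               # visit it
        for neighbour in graph[node]:    # push all unvisited adjacent nodes
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order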

B. Depth First Search (DFS)

Depth-first search (DFS) is another method to traverse a graph. Starting from a vertex (the root node in trees, or any chosen node in graphs), it traverses depth-wise: it explores as far as possible along each branch before backtracking.

The approach is to recursively visit all the unvisited nodes until every node has been visited. The implementation of the approach is mentioned below:

  • We make a recursive function that calls itself with the current vertex and the visited array.
  • Visit the current node and push it into the answer.
  • Now, traverse all its adjacent nodes and call the function for each one that has not been visited yet.
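
A recursive DFS sketch in Python under the same adjacency-list assumption:

def dfs(graph, node, visited=None, order=None):
    if visited is None:
        visited, order = set(), []
    visited.add(node)
    order.append(node)                   # visit the current node
    for neighbour in graph[node]:        # explore each unvisited neighbour fully before moving on
        if neighbour not in visited:
            dfs(graph, neighbour, visited, order)
    return order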

C. Dijkstra Algorithm

Dijkstra's algorithm is used to find the shortest paths from a source node to all other vertices in a graph whose edge weights are all non-negative. The approach of the algorithm is mentioned below:

  • First of all, keep a distance array of the size of the total number of nodes, set every distance to infinity except the source (which is 0), and mark all nodes as unvisited.
  • Now, repeatedly pick the unvisited node with the smallest known distance and visit it.
  • For each neighbour of that node, if the path length through it is smaller than the previously known length, update the neighbour's distance; otherwise continue.
  • Repeat the process till all the nodes are visited.
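
A sketch of Dijkstra's algorithm in Python; this variant uses a priority queue (min-heap) instead of scanning an unvisited array, but the idea of always expanding the closest node is the same:

import heapq

def dijkstra(graph, source):
    # graph is an adjacency list of (neighbour, weight) pairs with non-negative weights,
    # e.g. {0: [(1, 4), (2, 1)], 1: [], 2: [(1, 2)]}
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)         # closest node not yet finalized
        if d > dist[node]:
            continue                          # stale queue entry, skip it
        for neighbour, weight in graph[node]:
            if d + weight < dist[neighbour]:  # found a shorter path, so update
                dist[neighbour] = d + weight
                heapq.heappush(heap, (dist[neighbour], neighbour))
    return dist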

D. Floyd Warshall Algorithm

The Floyd-Warshall algorithm is used to calculate the shortest path between every pair of vertices in a weighted graph; it also works with negative edge weights, as long as the graph has no negative cycle. The algorithm uses a dynamic programming solution: it keeps relaxing each pair of vertices while allowing one more intermediate vertex at a time. Its time complexity is O(V³).
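
A Floyd-Warshall sketch in Python, assuming the graph is given as an n x n distance matrix:

def floyd_warshall(dist):
    # dist[i][j] is the edge weight from i to j (float("inf") if there is no edge,
    # 0 on the diagonal); the matrix is updated in place
    n = len(dist)
    for k in range(n):                  # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist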

E. Bellman-Ford Algorithm

The Bellman-Ford algorithm is used for finding the shortest paths from a source vertex to all other nodes. This can also be done with Dijkstra's algorithm, but Dijkstra's algorithm does not work on graphs with negative edge weights. So, for graphs with negative weights, the Bellman-Ford algorithm is used to find the shortest paths from a source node, and it can also detect negative-weight cycles. Its time complexity is O(V·E).
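
A Bellman-Ford sketch in Python, assuming the graph is given as an edge list:

def bellman_ford(n, edges, source):
    # edges is a list of (u, v, weight) tuples; negative weights are allowed
    dist = [float("inf")] * n
    dist[source] = 0
    for _ in range(n - 1):                    # relax every edge n - 1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                     # one more pass detects negative cycles
        if dist[u] + w < dist[v]:
            raise ValueError("graph contains a negative-weight cycle")
    return dist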

10. Other Important Algorithms

A. Bitwise Algorithms 

These algorithms perform operations directly on the bits of a number and are very fast. The common bitwise operations are AND (&), OR (|), XOR (^), the left shift operator (<<), and the right shift operator (>>). A left shift by one multiplies a number by 2, and a right shift by one divides it by 2. Here are some of the standard problems that are frequently asked in coding interviews –

  1. Swapping bits in numbers
  2. Next greater element with the same number of set bits
  3. Karatsuba algorithm for multiplication
  4. Bitmasking with Dynamic Programming 

and many more…..
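
As a quick illustration of the basic operators, here is a small Python snippet:

x = 6                      # binary 110
print(x & 3)               # 2  -> AND of 110 and 011
print(x | 3)               # 7  -> OR
print(x ^ 3)               # 5  -> XOR
print(x << 1)              # 12 -> left shift multiplies by 2
print(x >> 1)              # 3  -> right shift divides by 2
print(bin(x).count("1"))   # 2  -> number of set bits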

B. The Tortoise and the Hare

The tortoise and the hare algorithm is one of the most frequently used linked-list algorithms. It is also known as Floyd's cycle-finding algorithm. This algorithm is used to –

  • Find the Middle of the Linked List
  • Detect a Cycle in the Linked List

In this algorithm, we move two pointers along the linked list, one of them (the hare) twice as fast as the other (the tortoise). If the two pointers ever meet, the linked list contains a cycle; if the hare reaches the end, there is no cycle. The same fast/slow pair finds the middle: when the hare reaches the end of the list, the tortoise is standing at the middle node.
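
A cycle-detection sketch in Python, assuming a minimal ListNode class:

class ListNode:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

def has_cycle(head):
    slow = fast = head
    while fast and fast.next:
        slow = slow.next            # tortoise moves one step
        fast = fast.next.next       # hare moves two steps
        if slow is fast:            # they met, so the list has a cycle
            return True
    return False                    # the hare reached the end, so there is no cycle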

C. Kadane Algorithm

Kadane's algorithm is used to find the maximum sum of a contiguous subarray in an array that may contain both positive and negative numbers.

Intuition:

  • Keep updating a running sum by adding the elements of the array one by one.
  • Whenever the running sum becomes negative, reset it to zero.
  • Keep track of the largest running sum seen so far in a variable called max_sum.
  • In the end, max_sum will be the answer.
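
A Kadane's algorithm sketch in Python following the intuition above (taking the maximum before resetting also handles all-negative arrays):

def max_subarray_sum(arr):
    max_sum = float("-inf")
    current = 0
    for x in arr:
        current += x                    # keep adding elements to the running sum
        max_sum = max(max_sum, current) # keep maximizing max_sum
        if current < 0:
            current = 0                 # a negative running sum can never help, so reset it
    return max_sum

print(max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4]))   # 6 -> subarray [4, -1, 2, 1]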
