FY.BSc.CS Sem-2, Based on Mumbai University
Design & Analysis of Algorithms SEM 2 Unit-2 Question Bank Answers:-
1.What is recursion? Explain with example. (Class notes)
solution:-
Recursion is a programming technique where a function calls itself in order to solve a problem. It involves breaking down a complex problem into smaller, more manageable subproblems that are similar in structure to the original problem. Recursion continues this process of breaking down the problem until a base case is reached, at which point the solution is calculated and the recursion stops.
One classic example of recursion is the calculation of the factorial of a number. The factorial of a non-negative integer n, denoted as n!, is the product of all positive integers less than or equal to n.
Here is an example of calculating the factorial of a number using a recursive function in Python:
def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)

# Example: Calculate the factorial of 5
result = factorial(5)
print(result)  # Output: 120
In this example, the factorial function calls itself recursively to calculate the factorial of a number. The base case is when n is equal to 0, in which case the function returns 1. Otherwise, it calculates the factorial by multiplying n with the factorial of n - 1.
2.Write difference between recursion and iteration. (class notes)
solution:-
Memory Usage:
- Recursion tends to use more memory than iteration because every recursive call adds a frame to the call stack.
- Iteration typically uses less memory, since no stack of function calls needs to be maintained.
Overhead:
- Recursion carries the overhead of creating and removing a stack frame for every function call.
- Iteration has no such per-call overhead, making it cheaper in terms of time and memory management.
Basic Operation:
- Recursion solves a problem by having a function call itself on smaller instances of the same problem.
- Iteration repeatedly executes a set of instructions inside a loop until a specific condition is met.
Termination Condition:
- In recursion, termination is controlled by a base case specified within the recursive function itself.
- In iteration, termination is defined by the loop structure, typically through initialization, a loop condition, and an increment/decrement of a control variable.
Code Size:
- Recursion often results in smaller, more compact code.
- Iteration may lead to slightly larger code because of the explicit loop structure.
Infinite Loops:
- A recursive function whose base case is never reached causes infinite recursion, which eventually overflows the call stack and crashes the program.
- A loop whose control condition never becomes false results in an infinite loop, consuming CPU cycles indefinitely.
Speed:
- Recursion is generally slower than iteration because of the overhead of function calls and stack management.
- Iteration is faster, as it does not involve that overhead.
Usage:
- Recursion is preferred when code clarity and compactness matter more than raw performance.
- Iteration is preferred when performance and memory usage are the main concerns.
Time Complexity:
- A recursive solution can have higher time complexity than its iterative counterpart when it recomputes the same subproblems (for example, the naive recursive Fibonacci).
- The time complexity of an iterative solution is usually found by counting how many times the loop body executes.
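As a small illustration of the difference (a sketch not taken from the class notes; the function names and the example value are only illustrative), here are a recursive and an iterative version of a function that sums the first n natural numbers:

# Recursive version: the function calls itself on a smaller input (n - 1)
# until the base case n == 0 is reached; each call adds a frame to the call stack.
def sum_recursive(n):
    if n == 0:
        return 0
    return n + sum_recursive(n - 1)

# Iterative version: a loop accumulates the result with no extra stack frames.
def sum_iterative(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

print(sum_recursive(5))  # Output: 15
print(sum_iterative(5))  # Output: 15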
3.Write a python code for a factorial of a number. (Class notes)
Solution:-
def factorial(n):
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

# Example: Calculate the factorial of 5
number = 5
result = factorial(number)
print(f"The factorial of {number} is: {result}")
In this code:
- The factorial function takes an integer n as input and calculates the factorial of n using a for loop.
- The loop iterates from 1 to n, multiplying each number into the running result.
- The result is then returned and printed for the given input number (in this case, 5).
4.Explain Tower of Hanoi with an example. (class notes)
Solution:-
The Tower of Hanoi is a classic problem in computer science and mathematics that involves moving a stack of disks from one rod to another, following specific rules. The problem consists of three rods and a number of disks of different sizes that can slide onto any rod. The rules for the Tower of Hanoi are as follows:
- Only one disk can be moved at a time.
- Each move consists of taking the top disk from one of the rods and placing it on another rod.
- No disk may be placed on top of a smaller disk.
The minimum number of moves required to solve the Tower of Hanoi problem with n disks is 2^n - 1; for 3 disks, this is 2^3 - 1 = 7 moves.
Here is an example of solving the Tower of Hanoi problem with 3 disks:
Let's label the rods as A, B, and C. Initially, all 3 disks are on rod A in decreasing order of size from bottom to top (largest at the bottom, smallest at the top).
- Move disk 1 from rod A to rod C.
- Move disk 2 from rod A to rod B.
- Move disk 1 from rod C to rod B.
- Move disk 3 from rod A to rod C.
- Move disk 1 from rod B to rod A.
- Move disk 2 from rod B to rod C.
- Move disk 1 from rod A to rod C.
After following these steps, all 3 disks are successfully moved from rod A to rod C, satisfying the rules of the Tower of Hanoi problem.
The Tower of Hanoi problem is often solved using recursion, where a recursive function is used to move the disks. Each recursive call represents moving a stack of disks from one rod to another, following the rules of the problem.
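As an illustration, here is a minimal recursive Python sketch of this idea (the function name and rod labels are only illustrative; the same steps are written as pseudocode in the short-answer section below):

def tower_of_hanoi(n, source, auxiliary, destination):
    # Base case: a single disk is moved directly to the destination rod.
    if n == 1:
        print(f"Move disk 1 from {source} to {destination}")
        return
    # Move the top n-1 disks out of the way, onto the auxiliary rod.
    tower_of_hanoi(n - 1, source, destination, auxiliary)
    # Move the largest remaining disk to the destination rod.
    print(f"Move disk {n} from {source} to {destination}")
    # Move the n-1 disks from the auxiliary rod onto the destination rod.
    tower_of_hanoi(n - 1, auxiliary, source, destination)

tower_of_hanoi(3, 'A', 'B', 'C')  # prints the 7 moves listed above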
5.Explain Selection sort. Write the algorithm for selection sort. (class notes)
Solution:-
Selection Sort is a simple and intuitive sorting algorithm that works by repeatedly selecting the smallest (or largest) element from the unsorted portion of the list and swapping it with the first element of the unsorted part. This process continues for the remaining unsorted portion until the entire list is sorted.
Here is the algorithm for Selection Sort:
- Set MIN to location 0 (the start of the unsorted portion).
- Search for the minimum element in the unsorted portion of the list, starting from location MIN.
- Swap the minimum element with the value at location MIN.
- Increment MIN to point to the next element.
- Repeat steps 2-4 until the entire list is sorted.
Selection Sort has a time complexity of O(N^2) in the worst case, making it less efficient for large datasets but suitable for small arrays or for situations where minimizing the number of swaps matters, since it performs at most N-1 swaps. Because it always scans the entire remaining unsorted portion, it performs the same number of comparisons regardless of how the input is ordered.
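The steps above can be written as a short Python function (a sketch for illustration, not from the class notes; the function name and test values are only illustrative):

def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # Find the index of the smallest element in the unsorted portion arr[i:].
        min_index = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_index]:
                min_index = j
        # Swap it into position i, extending the sorted portion by one element.
        arr[i], arr[min_index] = arr[min_index], arr[i]
    return arr

print(selection_sort([64, 25, 12, 22, 11]))  # Output: [11, 12, 22, 25, 64]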
6.Explain Bubble sort with an example. (class note)
Solution:-
Bubble Sort is a simple sorting algorithm that repeatedly compares adjacent elements in the list and swaps them if they are in the wrong order. This process is repeated until the entire list is sorted.
Here is an example of how Bubble Sort works with the array arr[] = {6, 3, 0, 5}:
First Pass:
- Compare 6 and 3: Swap as 3 < 6. Array becomes {3, 6, 0, 5}.
- Compare 6 and 0: Swap as 0 < 6. Array becomes {3, 0, 6, 5}.
- Compare 6 and 5: Swap as 5 < 6. Array becomes {3, 0, 5, 6}.
- The largest element (6) is now at the correct position (end of the array).
Second Pass:
- Compare 3 and 0: Swap as 0 < 3. Array becomes {0, 3, 5, 6}.
- Compare 3 and 5: No swap needed.
- Compare 5 and 6: No swap needed.
Third Pass:
- Compare 0 and 3: No swap needed.
- Compare 3 and 5: No swap needed.
- Compare 5 and 6: No swap needed.
After the third pass, the array is sorted in ascending order: {0, 3, 5, 6}.
Bubble Sort has an average and worst-case time complexity of O(N^2), making it less efficient for large datasets. It is suitable for educational purposes or small datasets where simplicity is preferred over efficiency.
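For reference, a minimal Python sketch of Bubble Sort applied to the array from the example (not from the class notes; names are illustrative):

def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # After each pass the largest remaining element settles at the end,
        # so the inner loop can stop i positions earlier each time.
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr

print(bubble_sort([6, 3, 0, 5]))  # Output: [0, 3, 5, 6]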
7.Explain Insertion sort with an example. (class notes)
Solution:-
Insertion Sort builds the sorted array one element at a time: each element is compared with the elements in the already-sorted portion to its left and inserted into its correct position. Here is an example of how Insertion Sort works with the array arr[] = {12, 11, 13, 5, 6}:
Initial Array: {12, 11, 13, 5, 6}
First Pass:
- Start with the second element (11) and compare it with the first element (12).
- Since 11 < 12, swap them: {11, 12, 13, 5, 6}
- Now, the sub-array {11, 12} is sorted.
Second Pass:
- Move to the third element (13) and compare it with the elements in the sorted sub-array {11, 12}.
- Since 13 > 12, no swap is needed: {11, 12, 13, 5, 6}
- The sub-array {11, 12, 13} is now sorted.
Third Pass:
- Move to the fourth element (5) and compare it with the sorted sub-array {11, 12, 13}.
- Insert 5 into its correct position: {5, 11, 12, 13, 6}
- The sub-array {5, 11, 12, 13} is now sorted.
Fourth Pass:
- Insert the last element 6 into its correct position: {5, 6, 11, 12, 13}
- The array is now fully sorted in ascending order.
Insertion Sort is efficient for small datasets, easy to implement, and adaptive to partially sorted arrays.
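A minimal Python sketch of Insertion Sort applied to the array from the example (not from the class notes; names are illustrative):

def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]  # element to insert into the sorted portion arr[0:i]
        j = i - 1
        # Shift elements of the sorted portion that are greater than key
        # one position to the right to make room for key.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr

print(insertion_sort([12, 11, 13, 5, 6]))  # Output: [5, 6, 11, 12, 13]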
8.Sort the numbers 5, 3, 4, 1, 6, 2 using Insertion sort. (class notes)
Solution:-
To sort the numbers 5, 3, 4, 1, 6, 2 using Insertion Sort, we follow the steps of the algorithm:
Initial Array: 5, 3, 4, 1, 6, 2
Start with the second element (3) and compare it with the first element (5).
- Since 3 < 5, swap them: 3, 5, 4, 1, 6, 2
Move to the third element (4) and compare it with the elements in the sorted sub-array {3, 5}.
- Insert 4 in the correct position: 3, 4, 5, 1, 6, 2
Continue this process for the remaining elements:
- Insert 1 in the correct position: 1, 3, 4, 5, 6, 2
- Insert 6 in the correct position: 1, 3, 4, 5, 6, 2
- Insert 2 in the correct position: 1, 2, 3, 4, 5, 6
Sorted Array: 1, 2, 3, 4, 5, 6
The numbers 5, 3, 4, 1, 6, 2 have been successfully sorted in ascending order using the Insertion Sort algorithm.
9.Explain binary search. Write an algorithm for binary search.(class notes)
Solution:-
Binary Search Explanation:
Binary search is an efficient algorithm for finding a target value within a sorted array. It works by repeatedly dividing the search interval in half until the target value is found or the interval is empty. The array must be sorted for binary search to work correctly.
Binary search has a time complexity of O(log n) as it divides the search interval in half at each step, making it significantly faster than linear search for large arrays.
Algorithm for Binary Search:
Algorithm BinarySearch(A, target):
Input: A (sorted array), target (value to search for)
Output: index of target in A or -1 if not found
1. Set low = 0 and high = length of A - 1.
2. While low <= high:
a. Set mid = floor((low + high) / 2).
b. If A[mid] equals target, return mid.
c. If A[mid] is less than target, set low = mid + 1.
d. If A[mid] is greater than target, set high = mid - 1.
3. If the target is not found, return -1.
In the binary search algorithm:
- A is the sorted array in which we are searching for the target value.
- low and high represent the boundaries of the current search interval.
- mid is the index of the middle element of the current interval.
- The algorithm repeatedly updates low and high based on the comparison of the target value with the middle element, until the target is found or the interval is empty.
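The same algorithm in Python (a sketch for illustration, not from the class notes; the function name and test values are only illustrative):

def binary_search(A, target):
    low, high = 0, len(A) - 1
    while low <= high:
        mid = (low + high) // 2  # integer middle index of the current interval
        if A[mid] == target:
            return mid           # target found
        elif A[mid] < target:
            low = mid + 1        # discard the left half
        else:
            high = mid - 1       # discard the right half
    return -1                    # target not present

print(binary_search([2, 5, 8, 12, 16, 23], 16))  # Output: 4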
10.Explain linear search. Write an algorithm for linear search.(class notes)
Solution:-
Linear Search Explanation:
Linear search is a simple searching algorithm that sequentially checks each element in a collection (such as an array) until the target value is found or all elements have been checked. It is also known as a sequential search and is suitable for unsorted arrays or lists.
Linear search has a time complexity of O(n), where n is the number of elements in the array: in the worst case it compares the target with every element of the array.
Algorithm for Linear Search:
Algorithm LinearSearch(A, target):
Input: A (array), target (value to search for)
Output: index of target in A or -1 if not found
1. For i = 0 to length of A - 1:
a. If A[i] equals target, return i.
2. If the target is not found, return -1.
In the linear search algorithm:
- A is the array in which we are searching for the target value.
- The algorithm iterates through each element of the array sequentially.
- If the current element matches the target value, the algorithm returns the index of that element.
- If the end of the array is reached without finding the target value, the algorithm returns -1 to indicate that the target is not present in the array.
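The same algorithm in Python (a sketch for illustration, not from the class notes; names are illustrative):

def linear_search(A, target):
    # Check each element in order until the target is found.
    for i in range(len(A)):
        if A[i] == target:
            return i  # index of the first match
    return -1         # target not present

print(linear_search([7, 3, 9, 1], 9))  # Output: 2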
11.Apply the brute force method of string matching where T = "PLANNING AND ANALYSIS" and P = "AND". (class notes)
Solution:-
To apply the brute force method of string matching where T = "PLANNING AND ANALYSIS" and P = "AND", we will iterate through each character in the text T and compare it with the pattern P. Here is the step-by-step process:
Text T: "PLANNING AND ANALYSIS" Pattern P: "AND"
Start comparing the pattern P with the text T from the beginning:
- Compare "AND" with "PLA" (no match)
- Shift the pattern one position to the right: Compare "AND" with "LAN" (no match)
- Continue shifting the pattern until a match is found.
Match found at position 9 (counting positions from 0):
- Compare "AND" with "AND" (match)
- Pattern P is found in Text T starting at position 9.
Therefore, using the brute force method, the pattern "AND" is found in the text "PLANNING AND ANALYSIS" starting at position 9.
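A direct Python version of this brute force matching (a sketch for illustration, not from the class notes; it returns the 0-based index of the first match, or -1 if there is none):

def brute_force_match(T, P):
    n, m = len(T), len(P)
    # Try every possible starting position of P inside T.
    for i in range(n - m + 1):
        j = 0
        # Compare character by character until a mismatch or the end of P.
        while j < m and T[i + j] == P[j]:
            j += 1
        if j == m:
            return i  # full match found at position i
    return -1         # no match

print(brute_force_match("PLANNING AND ANALYSIS", "AND"))  # Output: 9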
12.Explain rabin karp algorithm with example. (class notes)
Solution:-
The Rabin-Karp algorithm is a string searching algorithm that uses hashing to find any one of a set of pattern strings in a text. It works by calculating the hash value of the pattern and then sliding the pattern over the text, calculating the hash value of each text substring and comparing it with the hash value of the pattern. If the hash values match, it performs a full character-by-character comparison of the pattern with the text substring to confirm the match.
Here is an example of how the Rabin-Karp algorithm works:
Text T: "ABCCDDAEFG"
Pattern P: "CDD"
1. Calculate the hash value of the pattern P:
- Hash("CDD") = ASCII('C') + ASCII('D') + ASCII('D') = 67 + 68 + 68 = 203
2. Calculate the hash value of the first window of text T with the same length as the pattern:
- Hash("ABC") = ASCII('A') + ASCII('B') + ASCII('C') = 65 + 66 + 67 = 198 (no match)
3. Slide the pattern over the text and compare hash values:
- Hash("BCC") = ASCII('B') + ASCII('C') + ASCII('C') = 66 + 67 + 67 = 200 (no match)
- Hash("CCD") = ASCII('C') + ASCII('C') + ASCII('D') = 67 + 67 + 68 = 202 (no match)
- Hash("CDD") = ASCII('C') + ASCII('D') + ASCII('D') = 67 + 68 + 68 = 203 (match found)
4. Perform a full comparison of the pattern with the text substring to confirm the match:
- "CDD" matches "CDD" in the text T.
By using hashing, the Rabin-Karp algorithm reduces the number of character comparisons needed to find a match, making it efficient for large texts and patterns. It has an average-case time complexity of O(n + m) and a worst-case time complexity of O(nm), where n is the length of the text and m is the length of the pattern.
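A minimal Python sketch of this idea, using the same simple additive hash as the example above (the standard Rabin-Karp algorithm uses a rolling polynomial hash, but the structure is the same; names are illustrative):

def rabin_karp(T, P):
    n, m = len(T), len(P)
    if m > n:
        return -1
    pattern_hash = sum(ord(c) for c in P)     # hash of the pattern
    window_hash = sum(ord(c) for c in T[:m])  # hash of the first text window
    for i in range(n - m + 1):
        # Only compare characters when the hashes agree (a possible match).
        if window_hash == pattern_hash and T[i:i + m] == P:
            return i
        # Roll the hash: drop the leftmost character, add the next one.
        if i + m < n:
            window_hash += ord(T[i + m]) - ord(T[i])
    return -1

print(rabin_karp("ABCCDDAEFG", "CDD"))  # Output: 3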
Short Answer Questions:-
1.Write short note on recursion. (class notes)
Solution:-
Definition: Recursion is a programming technique where a function solves a problem by calling itself with smaller instances of the same problem. It involves breaking down a complex problem into simpler subproblems that are similar in structure to the original problem.
Base Case: Every recursive function has a base case, which is the simplest instance of the problem that can be solved directly without further recursion. The base case is essential to prevent infinite recursion.
Recursive Step: The recursive step is where the function calls itself with a smaller input, making progress towards the base case. This step breaks down the original problem into smaller subproblems until the base case is reached.
Memory Usage: Recursion uses the call stack to store information about each recursive call. As a result, recursive functions can consume more memory compared to iterative solutions. It's important to consider the memory implications when using recursion.
Examples: Recursion is commonly used to solve problems with a recursive structure, such as calculating factorials, traversing tree structures, and sorting algorithms like quicksort and mergesort. The Fibonacci sequence and Tower of Hanoi are classic examples that are often solved using recursion.
Advantages: Recursion can make code more concise, elegant, and easier to understand for problems that exhibit recursive properties. It can simplify the implementation of certain algorithms and reduce the length of code compared to iterative solutions.
Termination: To avoid infinite recursion, it is crucial to ensure that the recursive function eventually reaches the base case. Failing to define a base case or properly manage recursion can lead to stack overflow errors.
2.Write short note on brute force method. (class notes)
Solution:-
The brute force method, also known as the naive method, is a straightforward approach to problem-solving that involves trying all possibilities exhaustively to find the solution. In the context of algorithms, the brute force method involves checking every possible solution to a problem to determine the correct one.
Key points about the brute force method:
Simple and Easy to Implement: The brute force method is easy to understand and implement as it involves systematically checking all possible solutions.
Exhaustive Search: It involves checking every possible combination or permutation of elements to find the solution.
Time Complexity: The brute force method typically has a time complexity of O(n^k), where n is the size of the input and k is the number of nested loops or recursive calls.
Applicability: The brute force method is suitable for small input sizes or when the problem space is limited and can be explored within reasonable time limits.
Drawbacks:
- Inefficient for large problem sizes: The brute force method can be inefficient for large input sizes as it involves checking all possibilities.
- Not always optimal: It may not always find the most optimized solution, especially in complex problems where a more sophisticated algorithm is needed.
Examples: Brute force methods are commonly used in simple search algorithms like linear search, string matching algorithms, and some sorting algorithms like bubble sort.
While the brute force method may not always be the most efficient approach, it is simple to apply and often serves as a starting point or baseline when designing and comparing more sophisticated algorithms.
3.Write the algorithm for Tower of Hanoi. (class notes)
Solution:-
Algorithm: Tower of Hanoi
Inputs:
- n: number of disks to move
- source: rod from which to move the disks
- auxiliary: rod to use as auxiliary for moving disks
- destination: rod to which to move the disks
Function move_disk(n, source, auxiliary, destination):
1. If n == 1:
Move the top disk from source to destination
2. Else:
a. Move n-1 disks from source to auxiliary using destination as the auxiliary rod
by calling move_disk(n-1, source, destination, auxiliary)
b. Move the nth disk from source to destination
c. Move the n-1 disks from auxiliary to destination using source as the auxiliary rod
by calling move_disk(n-1, auxiliary, source, destination)
To solve the Tower of Hanoi problem for n disks:
- Call move_disk(n, 'A', 'B', 'C') where 'A' is the source rod, 'B' is the auxiliary rod, and 'C' is the destination rod.
This algorithm uses recursion to move the disks from the source rod to the destination rod following the rules of the Tower of Hanoi problem. The base case is when there is only one disk to move, in which case it is moved directly to the destination rod. Otherwise, the algorithm recursively moves n-1 disks to the auxiliary rod, then moves the nth disk to the destination rod, and finally moves the n-1 disks from the auxiliary rod to the destination rod.