Monday, August 18, 2025

Master Algorithms & Problem Solving with Python: Module 10 - 20+ Core Algorithms with Real-World Applications

Welcome to Module 10, the final module of our comprehensive Python course, designed to transform you from a beginner into an advanced Python programmer!


In Module 9, we explored web application development with Flask and Django, building dynamic websites and APIs. Now, we dive into algorithms and problem solving with Python, covering over 20 core algorithms across searching, sorting, recursion, number theory, dynamic programming, and graph algorithms. These algorithms are the foundation of efficient programming, used in applications like search engines, data processing, and route optimization.

This blog is beginner-friendly yet detailed enough for intermediate and advanced learners, offering real-world scenarios, multiple code examples, pros and cons, best practices, and alternatives. Whether you're optimizing search functionality, sorting datasets, or solving complex graph problems, this guide will equip you with the skills to tackle algorithmic challenges. Let’s dive in!
Table of Contents
  1. Searching Algorithms
    • Linear Search
    • Binary Search
    • Pros, Cons, and Alternatives
    • Best Practices
    • Example: Finding Products in an Inventory
  2. Sorting Algorithms
    • Bubble Sort
    • Insertion Sort
    • Selection Sort
    • Merge Sort
    • Quick Sort
    • Heap Sort
    • Counting Sort
    • Radix Sort
    • Pros, Cons, and Alternatives
    • Best Practices
    • Example: Sorting Customer Orders
  3. Recursion-Based Algorithms
    • Fibonacci Numbers
    • Factorial
    • Tower of Hanoi
    • Pros, Cons, and Alternatives
    • Best Practices
    • Example: Calculating Loan Interest Recursively
  4. Number Theory Algorithms
    • Prime Numbers
    • GCD & LCM
    • Sieve of Eratosthenes
    • Pros, Cons, and Alternatives
    • Best Practices
    • Example: Generating Encryption Keys
  5. Dynamic Programming
    • Longest Common Subsequence
    • Knapsack Problem
    • Coin Change Problem
    • Pros, Cons, and Alternatives
    • Best Practices
    • Example: Optimizing Delivery Routes
  6. Graph Algorithms
    • Breadth-First Search (BFS)
    • Depth-First Search (DFS)
    • Dijkstra’s Algorithm
    • Minimum Spanning Tree (Kruskal/Prim)
    • Pros, Cons, and Alternatives
    • Best Practices
    • Example: Social Network Analysis
  7. Conclusion & Next Steps

1. Searching Algorithms

Linear Search

Linear search checks each element in a list until the target is found or the list ends.

Code:
python
def linear_search(arr, target):
    for i, item in enumerate(arr):
        if item == target:
            return i
    return -1

# Example
inventory = ["apple", "banana", "orange"]
print(linear_search(inventory, "banana"))  # Output: 1
print(linear_search(inventory, "grape"))   # Output: -1
Binary Search

Binary search works on sorted lists, dividing the search space in half at each step.

Code:
python
def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1

# Example
sorted_inventory = ["apple", "banana", "orange"]
print(binary_search(sorted_inventory, "banana"))  # Output: 1
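In practice you rarely need to hand-roll binary search: the standard library’s bisect module performs the same halving search on a sorted sequence. A minimal sketch:
python
import bisect

sorted_inventory = ["apple", "banana", "orange"]
idx = bisect.bisect_left(sorted_inventory, "banana")
if idx < len(sorted_inventory) and sorted_inventory[idx] == "banana":
    print(idx)  # Output: 1
else:
    print(-1)   # Target not present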
Pros, Cons, and Alternatives

Pros:
  • Linear Search: Simple, works on unsorted data.
  • Binary Search: Fast (O(log n)) for sorted data.
Cons:
  • Linear Search: Slow (O(n)) for large lists.
  • Binary Search: Requires sorted data, not adaptive for frequent updates.
Alternatives:
  • Hash Tables: O(1) average-case lookup with dictionaries (see the sketch below).
  • Ternary Search: For unimodal functions.
  • Jump Search: For sorted lists with fewer comparisons.
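As a quick sketch of the hash-table alternative, a dictionary keyed by product name gives average O(1) lookups:
python
# Build a name -> index mapping once; lookups are then O(1) on average
inventory = ["apple", "banana", "orange"]
index_by_name = {name: i for i, name in enumerate(inventory)}

print(index_by_name.get("banana", -1))  # Output: 1
print(index_by_name.get("grape", -1))   # Output: -1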
Best Practices:
  • Use linear search for small or unsorted lists.
  • Use binary search for large, sorted lists.
  • Ensure input validation to handle empty lists.
  • Sort data before binary search if needed.
Example: Finding Products in an Inventory

Let’s create a product search system for an online store.
python
def search_product(products, target, method="linear"):
    if method == "linear":
        return linear_search(products, target)
    elif method == "binary":
        sorted_products = sorted(products)
        return binary_search(sorted_products, target)
    return -1

# Test
products = ["laptop", "phone", "tablet", "mouse"]
print(search_product(products, "phone", "linear"))  # Output: 1
print(search_product(products, "phone", "binary"))  # Output: 2 (sorted: laptop, mouse, phone, tablet)
This example demonstrates searching for products, comparing linear and binary search.
2. Sorting Algorithms

Bubble Sort

Bubble sort repeatedly swaps adjacent elements if they are in the wrong order.

Code:
python
def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr

# Example
print(bubble_sort([64, 34, 25, 12, 22]))  # Output: [12, 22, 25, 34, 64]
Insertion Sort

Insertion sort builds a sorted array by inserting each element into its correct position.

Code:
python
def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr

# Example
print(insertion_sort([64, 34, 25, 12, 22]))  # Output: [12, 22, 25, 34, 64]
Selection Sort

Selection sort repeatedly selects the smallest element from the unsorted portion and moves it to the front.

Code:
python
def selection_sort(arr):
    for i in range(len(arr)):
        min_idx = i
        for j in range(i + 1, len(arr)):
            if arr[j] < arr[min_idx]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr

# Example
print(selection_sort([64, 34, 25, 12, 22]))  # Output: [12, 22, 25, 34, 64]
Merge Sort

Merge sort divides the list into halves, sorts them, and merges them back.

Code:
python
def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    return merge(left, right)

def merge(left, right):
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result

# Example
print(merge_sort([64, 34, 25, 12, 22]))  # Output: [12, 22, 25, 34, 64]
Quick Sort

Quick sort picks a pivot, partitions the list, and recursively sorts sublists.

Code:
python
def quick_sort(arr, low, high):
    if low < high:
        pi = partition(arr, low, high)
        quick_sort(arr, low, pi - 1)
        quick_sort(arr, pi + 1, high)
    return arr

def partition(arr, low, high):
    pivot = arr[high]  # Lomuto scheme: last element is the pivot
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

# Example
arr = [64, 34, 25, 12, 22]
print(quick_sort(arr, 0, len(arr) - 1))  # Output: [12, 22, 25, 34, 64]
Heap Sort

Heap sort uses a max-heap to sort elements.

Code:
python
def heap_sort(arr):
    def heapify(arr, n, i):
        # Sift arr[i] down so the subtree rooted at i is a max-heap
        largest = i
        left = 2 * i + 1
        right = 2 * i + 2
        if left < n and arr[left] > arr[largest]:
            largest = left
        if right < n and arr[right] > arr[largest]:
            largest = right
        if largest != i:
            arr[i], arr[largest] = arr[largest], arr[i]
            heapify(arr, n, largest)
    
    n = len(arr)
    for i in range(n // 2 - 1, -1, -1):
        heapify(arr, n, i)
    for i in range(n - 1, 0, -1):
        arr[0], arr[i] = arr[i], arr[0]
        heapify(arr, i, 0)
    return arr

# Example
print(heap_sort([64, 34, 25, 12, 22]))  # Output: [12, 22, 25, 34, 64]
Counting Sort

Counting sort counts occurrences of elements to sort non-negative integers.

Code:
python
def counting_sort(arr):
    max_val = max(arr)
    count = [0] * (max_val + 1)
    for num in arr:
        count[num] += 1
    result = []
    for i, freq in enumerate(count):
        result.extend([i] * freq)
    return result

# Example
print(counting_sort([4, 2, 2, 8, 3]))  # Output: [2, 2, 3, 4, 8]
Radix Sort

Radix sort sorts numbers by processing digits place by place.

Code:
python
def counting_sort_for_radix(arr, exp):
    # Stable counting sort on the digit at place value exp
    n = len(arr)
    output = [0] * n
    count = [0] * 10
    for i in range(n):
        index = arr[i] // exp
        count[index % 10] += 1
    for i in range(1, 10):
        count[i] += count[i - 1]
    i = n - 1
    while i >= 0:
        index = arr[i] // exp
        output[count[index % 10] - 1] = arr[i]
        count[index % 10] -= 1
        i -= 1
    return output

def radix_sort(arr):
    max_val = max(arr)
    exp = 1
    while max_val // exp > 0:
        arr = counting_sort_for_radix(arr, exp)
        exp *= 10
    return arr

# Example
print(radix_sort([170, 45, 75, 90, 802]))  # Output: [45, 75, 90, 170, 802]
Pros, Cons, and Alternatives

Pros:
  • Bubble/Insertion/Selection: Simple, good for small lists.
  • Merge/Quick/Heap: Efficient for large datasets (O(n log n)).
  • Counting/Radix: Fast for specific data (O(n)).
Cons:
  • Bubble/Insertion/Selection: Slow (O(n²)) for large lists.
  • Merge: Requires extra memory.
  • Quick: Worst-case O(n²) with poor pivot choice.
  • Counting/Radix: Limited to non-negative integers.
Alternatives:
  • Python’s sorted(): Uses Timsort, hybrid of merge and insertion sort.
  • NumPy sort: For numerical arrays.
  • External Sorting: For massive datasets.
Best Practices:
  • Use sorted() for general-purpose sorting.
  • Choose algorithms based on data size and type.
  • Avoid bubble sort in production due to inefficiency.
  • Test sorting stability for complex objects (see the sketch below).
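To illustrate the sorted() and stability points together, here is a small sketch using hypothetical (order_id, price) records; Timsort is stable, so orders with equal prices keep their original relative order:
python
# Hypothetical order records: (order_id, price)
orders = [("A101", 100), ("A102", 50), ("A103", 100), ("A104", 75)]

# Stable sort by price: A101 stays ahead of A103 (both cost 100)
by_price = sorted(orders, key=lambda order: order[1])
print(by_price)
# Output: [('A102', 50), ('A104', 75), ('A101', 100), ('A103', 100)]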
Example: Sorting Customer Orders

Let’s sort customer orders by price using multiple algorithms.
python
def sort_orders(orders, algorithm="bubble"):
    if algorithm == "bubble":
        return bubble_sort(orders)
    elif algorithm == "merge":
        return merge_sort(orders)
    return orders

# Test
orders = [100, 50, 200, 75]
print(sort_orders(orders, "bubble"))  # Output: [50, 75, 100, 200]
print(sort_orders(orders, "merge"))   # Output: [50, 75, 100, 200]
This example demonstrates sorting customer orders, comparing bubble and merge sort.
3. Recursion-Based Algorithms

Fibonacci Numbers

Each Fibonacci number is the sum of the two preceding ones: 0, 1, 1, 2, 3, 5, ...

Code:
python
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

# Example
print(fibonacci(6))  # Output: 8
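The naive version above recomputes the same subproblems exponentially. A memoized sketch using functools.lru_cache (recommended again in the best practices below) brings this down to linear time:
python
from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci_memo(n):
    if n <= 1:
        return n
    return fibonacci_memo(n - 1) + fibonacci_memo(n - 2)

# Example
print(fibonacci_memo(50))  # Output: 12586269025 (near-instant)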
Factorial

The factorial of n is the product of all positive integers from 1 to n.

Code:
python
def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n - 1)

# Example
print(factorial(5))  # Output: 120
Tower of Hanoi

Move a stack of disks from a source peg to a target peg using an auxiliary peg, moving one disk at a time and never placing a larger disk on a smaller one.

Code:
python
def tower_of_hanoi(n, source, auxiliary, target):
    if n > 0:
        tower_of_hanoi(n - 1, source, target, auxiliary)
        print(f"Move disk {n} from {source} to {target}")
        tower_of_hanoi(n - 1, auxiliary, source, target)

# Example
tower_of_hanoi(3, 'A', 'B', 'C')
Pros, Cons, and Alternatives

Pros:
  • Elegant solutions for recursive problems.
  • Simplifies complex problems like tree traversals.
Cons:
  • High memory usage for deep recursion.
  • Risk of stack overflow for large inputs.
  • Slower than iterative solutions in some cases.
Alternatives:
  • Iterative Solutions: For memory efficiency (see the sketch below).
  • Memoization: To cache recursive results.
  • Tail Recursion: Not optimized in Python.
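As a sketch of the iterative alternative, Fibonacci can be computed in constant space with a simple loop:
python
def fibonacci_iterative(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b  # Slide the pair (F(i), F(i+1)) forward one step
    return a

# Example
print(fibonacci_iterative(6))  # Output: 8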
Best Practices:
  • Use recursion for naturally recursive problems.
  • Implement memoization for performance (e.g., @lru_cache).
  • Limit recursion depth for large inputs.
  • Test base cases thoroughly.
Example: Calculating Loan Interest Recursively

Let’s calculate compound interest recursively.
python
from functools import lru_cache

@lru_cache(maxsize=None)
def compound_interest(principal, rate, time):
    if time == 0:
        return principal
    return compound_interest(principal * (1 + rate), rate, time - 1)

# Test
print(compound_interest(1000, 0.05, 3))  # Output: 1157.625
This example uses recursion with memoization for financial calculations.
4. Number Theory Algorithms

Prime Numbers

Check if a number is prime by testing divisibility up to its square root.

Code:
python
def is_prime(n):
    if n < 2:
        return False
    for i in range(2, int(n ** 0.5) + 1):
        if n % i == 0:
            return False
    return True

# Example
print(is_prime(17))  # Output: True
GCD & LCM

The Greatest Common Divisor (GCD) uses Euclid’s algorithm; the LCM is derived from the GCD.

Code:
python
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

def lcm(a, b):
    return abs(a * b) // gcd(a, b)

# Example
print(gcd(48, 18))  # Output: 6
print(lcm(48, 18))  # Output: 144
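Note that the standard library already covers both: math.gcd has long been available, and math.lcm was added in Python 3.9, so production code can simply use them:
python
import math

print(math.gcd(48, 18))  # Output: 6
print(math.lcm(48, 18))  # Output: 144 (Python 3.9+)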
Sieve of Eratosthenes

Generate all primes up to a number.

Code:
python
def sieve(n):
    primes = [True] * (n + 1)
    primes[0] = primes[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if primes[i]:
            for j in range(i * i, n + 1, i):
                primes[j] = False
    return [i for i in range(n + 1) if primes[i]]

# Example
print(sieve(30))  # Output: [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
Pros, Cons, and Alternatives

Pros:
  • Efficient for cryptographic and mathematical applications.
  • Sieve is fast for generating multiple primes.
Cons:
  • Sieve requires significant memory for large n.
  • GCD/LCM are specialized algorithms.
  • Prime checking is slow for very large numbers.
Alternatives:
  • Miller-Rabin: For probabilistic primality testing.
  • Math Libraries: Like sympy for advanced number theory.
  • Brute Force: For small inputs, but inefficient.
Best Practices:
  • Use optimized algorithms (e.g., Euclid’s for GCD).
  • Cache results for repeated calculations.
  • Validate inputs for negative numbers.
  • Use Sieve for generating primes up to a limit.
Example: Generating Encryption Keys

Let’s generate prime numbers for RSA encryption.
python
def generate_primes(limit):
    return sieve(limit)

# Test
primes = generate_primes(100)
print(primes[:5])  # Output: [2, 3, 5, 7, 11]
This example uses the Sieve of Eratosthenes for cryptographic applications.
5. Dynamic Programming

Longest Common Subsequence (LCS)

Find the longest subsequence common to two strings.

Code:
python
def lcs(str1, str2):
    m, n = len(str1), len(str2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if str1[i - 1] == str2[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

# Example
print(lcs("ABCD", "ACDF"))  # Output: 3 (ACD)
Knapsack Problem

Maximize the total value of selected items within a weight capacity (the 0/1 knapsack).

Code:
python
def knapsack(values, weights, capacity):
    n = len(values)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            if weights[i - 1] <= w:
                dp[i][w] = max(dp[i - 1][w], dp[i - 1][w - weights[i - 1]] + values[i - 1])
            else:
                dp[i][w] = dp[i - 1][w]
    return dp[n][capacity]

# Example
values = [60, 100, 120]
weights = [10, 20, 30]
capacity = 50
print(knapsack(values, weights, capacity))  # Output: 220
Coin Change Problem

Find the minimum number of coins to make a target amount.

Code:
python
def coin_change(coins, amount):
    dp = [float('inf')] * (amount + 1)
    dp[0] = 0
    for i in range(1, amount + 1):
        for coin in coins:
            if coin <= i:
                dp[i] = min(dp[i], dp[i - coin] + 1)
    return dp[amount] if dp[amount] != float('inf') else -1

# Example
print(coin_change([1, 5, 10], 11))  # Output: 2 (10 + 1)
Pros, Cons, and Alternatives

Pros:
  • Solves complex problems efficiently by breaking them down.
  • Avoids redundant calculations with memoization.
Cons:
  • Requires significant memory for DP tables.
  • Can be complex to design and debug.
  • Not always intuitive for beginners.
Alternatives:
  • Greedy Algorithms: For simpler problems, but may not be optimal (see the counterexample below).
  • Brute Force: For small inputs, but inefficient.
  • Divide and Conquer: Without memoization, less efficient.
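To see why a greedy strategy can fail where DP succeeds, consider making 6 with hypothetical coins [1, 3, 4]: always grabbing the largest coin yields 4 + 1 + 1 (three coins), while the DP coin_change above finds 3 + 3 (two coins):
python
def greedy_coin_change(coins, amount):
    count = 0
    for coin in sorted(coins, reverse=True):  # Always take the largest coin that fits
        count += amount // coin
        amount %= coin
    return count if amount == 0 else -1

# Example
print(greedy_coin_change([1, 3, 4], 6))  # Output: 3 (4 + 1 + 1)
print(coin_change([1, 3, 4], 6))         # Output: 2 (3 + 3)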
Best Practices:
  • Use bottom-up DP for memory efficiency.
  • Initialize DP tables with appropriate base cases.
  • Test edge cases (e.g., empty inputs, zero capacity).
  • Optimize space with rolling arrays when possible (see the sketch below).
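As a sketch of the rolling-array idea, the 0/1 knapsack table can be collapsed into a single row by iterating weights from high to low, cutting memory from O(n * capacity) to O(capacity):
python
def knapsack_1d(values, weights, capacity):
    dp = [0] * (capacity + 1)  # One row replaces the full 2-D table
    for value, weight in zip(values, weights):
        # Iterate downward so each item is used at most once
        for w in range(capacity, weight - 1, -1):
            dp[w] = max(dp[w], dp[w - weight] + value)
    return dp[capacity]

# Example
print(knapsack_1d([60, 100, 120], [10, 20, 30], 50))  # Output: 220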
Example: Optimizing Delivery Routes

Let’s use the knapsack problem to optimize delivery item selection.
python
def optimize_delivery(items, capacity):
    values = [item['value'] for item in items]
    weights = [item['weight'] for item in items]
    return knapsack(values, weights, capacity)

# Test
items = [{'value': 60, 'weight': 10}, {'value': 100, 'weight': 20}, {'value': 120, 'weight': 30}]
print(optimize_delivery(items, 50))  # Output: 220
This example optimizes delivery item selection based on value and weight constraints.
6. Graph Algorithms

Breadth-First Search (BFS)

BFS explores nodes level by level, using a queue.

Code:
python
from collections import deque

def bfs(graph, start):
    visited = set()
    queue = deque([start])
    visited.add(start)
    while queue:
        vertex = queue.popleft()
        print(vertex, end=' ')
        for neighbor in graph[vertex]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)

# Example
graph = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
bfs(graph, 0)  # Output: 0 1 2 3 4
Depth-First Search (DFS)

DFS explores as far as possible along each branch before backtracking.

Code:
python
def dfs(graph, vertex, visited=None):
    if visited is None:
        visited = set()
    visited.add(vertex)
    print(vertex, end=' ')
    for neighbor in graph[vertex]:
        if neighbor not in visited:
            dfs(graph, neighbor, visited)

# Example
dfs(graph, 0)  # Output: 0 1 3 4 2
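Because Python does not optimize recursion, deep graphs can exhaust the call stack; an iterative sketch with an explicit stack avoids that risk:
python
def dfs_iterative(graph, start):
    visited = set()
    stack = [start]  # Explicit stack replaces the call stack
    while stack:
        vertex = stack.pop()
        if vertex not in visited:
            visited.add(vertex)
            print(vertex, end=' ')
            # Push neighbors in reverse so they pop in original order
            for neighbor in reversed(graph[vertex]):
                if neighbor not in visited:
                    stack.append(neighbor)

# Example
dfs_iterative(graph, 0)  # Output: 0 1 3 4 2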
Dijkstra’s Algorithm

Finds the shortest paths from a start node in a weighted graph with non-negative edge weights.

Code:
python
import heapq

def dijkstra(graph, start):
    distances = {node: float('inf') for node in graph}
    distances[start] = 0
    pq = [(0, start)]
    while pq:
        current_dist, current = heapq.heappop(pq)
        if current_dist > distances[current]:
            continue
        for neighbor, weight in graph[current]:
            distance = current_dist + weight
            if distance < distances[neighbor]:
                distances[neighbor] = distance
                heapq.heappush(pq, (distance, neighbor))
    return distances

# Example
graph = {0: [(1, 4), (2, 8)], 1: [(0, 4), (3, 5)], 2: [(0, 8), (3, 2)], 3: [(1, 5), (2, 2)]}
print(dijkstra(graph, 0))  # Output: {0: 0, 1: 4, 2: 8, 3: 9}
Minimum Spanning Tree (Kruskal/Prim)

Finds a spanning tree that connects all vertices with minimum total edge weight.

Kruskal’s Code:
python
def kruskal(graph, vertices):
    parent = {v: v for v in vertices}
    rank = {v: 0 for v in vertices}
    
    def find(v):
        if parent[v] != v:
            parent[v] = find(parent[v])
        return parent[v]
    
    def union(u, v):
        root1, root2 = find(u), find(v)
        if root1 != root2:
            if rank[root1] < rank[root2]:
                parent[root1] = root2
            elif rank[root1] > rank[root2]:
                parent[root2] = root1
            else:
                parent[root2] = root1
                rank[root1] += 1
    
    edges = []
    for u in graph:
        for v, weight in graph[u]:
            edges.append((weight, u, v))
    edges.sort()
    
    mst = []
    for weight, u, v in edges:
        if find(u) != find(v):
            union(u, v)
            mst.append((u, v, weight))
    return mst

# Example
graph = {0: [(1, 4), (2, 8)], 1: [(0, 4), (3, 5)], 2: [(0, 8), (3, 2)], 3: [(1, 5), (2, 2)]}
vertices = [0, 1, 2, 3]
print(kruskal(graph, vertices))  # Output: [(2, 3, 2), (0, 1, 4), (1, 3, 5)]
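For completeness, here is a minimal heap-based sketch of Prim’s algorithm, which grows the tree outward from a start vertex; on the same graph it finds a spanning tree with the same total weight (11):
python
import heapq

def prim(graph, start):
    visited = {start}
    # Candidate edges leaving the tree, ordered by weight
    edges = [(weight, start, v) for v, weight in graph[start]]
    heapq.heapify(edges)
    mst = []
    while edges:
        weight, u, v = heapq.heappop(edges)
        if v not in visited:
            visited.add(v)
            mst.append((u, v, weight))
            for neighbor, w in graph[v]:
                if neighbor not in visited:
                    heapq.heappush(edges, (w, v, neighbor))
    return mst

# Example
print(prim(graph, 0))  # Output: [(0, 1, 4), (1, 3, 5), (3, 2, 2)]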
Pros, Cons, and Alternatives

Pros:
  • BFS/DFS: Simple, versatile for graph traversal.
  • Dijkstra’s: Optimal for shortest paths in weighted graphs.
  • Kruskal/Prim: Efficient for network optimization.
Cons:
  • BFS/DFS: Not optimal for weighted graphs.
  • Dijkstra’s: Doesn’t handle negative weights.
  • Kruskal/Prim: Complex for dense graphs.
Alternatives:
  • A*: For heuristic-based shortest paths.
  • Bellman-Ford: For graphs with negative weights (see the sketch below).
  • Floyd-Warshall: For all-pairs shortest paths.
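As a compact sketch of the Bellman-Ford alternative: it relaxes every edge V - 1 times, so it tolerates negative edge weights (at O(V * E) cost) and can detect negative cycles:
python
def bellman_ford(graph, start):
    distances = {node: float('inf') for node in graph}
    distances[start] = 0
    for _ in range(len(graph) - 1):  # Relax all edges V - 1 times
        for u in graph:
            for v, weight in graph[u]:
                if distances[u] + weight < distances[v]:
                    distances[v] = distances[u] + weight
    # One more pass: any further improvement means a negative cycle
    for u in graph:
        for v, weight in graph[u]:
            if distances[u] + weight < distances[v]:
                raise ValueError("Negative-weight cycle detected")
    return distances

# Example (same weighted graph as above)
print(bellman_ford(graph, 0))  # Output: {0: 0, 1: 4, 2: 8, 3: 9}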
Best Practices:
  • Use adjacency lists for sparse graphs.
  • Validate graph inputs for cycles or disconnected nodes.
  • Optimize Dijkstra’s with a priority queue.
  • Test edge cases (e.g., empty graphs, single nodes).
Example: Social Network Analysis

Let’s use BFS to find connections in a social network.
python
def find_friends(graph, user):
    visited = set()
    queue = deque([user])
    visited.add(user)
    friends = []
    while queue:
        person = queue.popleft()
        friends.append(person)
        for friend in graph[person]:
            if friend not in visited:
                visited.add(friend)
                queue.append(friend)
    return friends[1:]  # Exclude the user

# Test
network = {'Alice': ['Bob', 'Charlie'], 'Bob': ['Alice', 'David'], 'Charlie': ['Alice'], 'David': ['Bob']}
print(find_friends(network, 'Alice'))  # Output: ['Bob', 'Charlie', 'David']
This example uses BFS to find all friends of a user in a social network.
7. Conclusion & Next Steps

Congratulations on mastering Module 10! You’ve learned over 20 core algorithms, including searching, sorting, recursion, number theory, dynamic programming, and graph algorithms. These skills enable you to solve complex problems like inventory searches, order sorting, financial calculations, and network analysis.
