Welcome to Decision Mathematics 1: Algorithms!

Hello! If you're studying Decision Mathematics, algorithms are the bedrock of the entire subject. Don't worry if the term sounds complicated; an algorithm is really just a super-precise set of instructions. Think of it as the ultimate recipe for solving a specific problem!

In this chapter, we will learn how to:

  • Understand what makes a good algorithm.
  • Trace algorithms step-by-step (a key exam skill!).
  • Compare different sorting and searching methods (like Bubble Sort vs. Quick Sort).
  • Assess how efficient an algorithm is.

Understanding these concepts will help you approach all future D1 topics, from graphs to networks, with confidence!

Section 1: Defining and Understanding Algorithms

What is an Algorithm?

An algorithm is a finite, well-defined sequence of instructions designed to perform a specific task or solve a particular type of problem. It is the 'how-to' guide for computers and decision-makers alike.

Essential Characteristics of a Valid Algorithm

For something to be a mathematical algorithm, it must possess five key features:

  1. Input: It must take zero or more external quantities. (e.g., A list of numbers to sort.)
  2. Output: It must produce at least one quantity. (e.g., The sorted list.)
  3. Definiteness: Each instruction must be clear and unambiguous. (No room for guesswork!)
  4. Finiteness: The algorithm must terminate after a finite number of steps, regardless of the input. (It can't run forever.)
  5. Effectiveness: Every step must be basic enough to actually be carried out, and in a finite amount of time.

Analogy: Think of baking a cake. If the recipe (the algorithm) doesn't tell you exactly how much flour to use (Definiteness), or asks you to bake it for an infinite time (Finiteness), it's a poor algorithm!

Tracing and Flow Charts

To understand an algorithm, especially in an exam setting, you must be able to trace it. Tracing (or a dry run) involves following the instructions step-by-step, recording the variables and decisions made at each stage. This is often done using a Trace Table.
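As a sketch of what a dry run looks like, here is a small (invented) algorithm — "let \(a = 1\), \(b = 1\); repeat: set \(c = a + b\), then \(a = b\), \(b = c\), until \(b > 20\)" — traced in Python, where each row returned corresponds to one line of a trace table:

```python
def trace(limit=20):
    """Dry-run a simple example algorithm:
    a = 1, b = 1; repeat c = a + b, a = b, b = c until b > limit.
    Returns the trace table as a list of (a, b, c) rows."""
    a, b = 1, 1
    rows = []
    while True:
        c = a + b          # Step 1: compute c
        a, b = b, c        # Step 2: shift a and b along
        rows.append((a, b, c))
        if b > limit:      # Step 3: termination condition
            break
    return rows

for row in trace():
    print(row)
```

On paper you would record exactly the same columns (\(a\), \(b\), \(c\)) by hand, one row per loop iteration.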

Key Symbols in Flow Charts

Algorithms are often represented visually using flow charts:

  • Oval: Start or End of the process.
  • Parallelogram: Input or Output (getting data or displaying results).
  • Rectangle: Process or Instruction (an action, like setting \(x = 5\) or calculating a total).
  • Diamond: Decision or Condition (a question with two paths, usually Yes/No or True/False).

Quick Key Takeaway: An algorithm is a precise, structured, and finite method for problem-solving. Tracing is how we prove it works.

Section 2: Measuring Algorithm Performance (Efficiency)

If you have two algorithms that both solve the same problem, how do you decide which one is better? You compare their efficiency.

Efficiency measures how the resources (usually time or memory) required by the algorithm change as the size of the input data (\(n\)) increases.

Qualitative Analysis of Time Complexity

In D1, you need to understand how the number of operations required by an algorithm scales relative to \(n\), the number of items being processed.

The efficiency (or Order) is determined by the term that dominates the run time as \(n\) gets very large.

We classify performance into general categories (best to worst):

  1. Logarithmic Time (\(\log n\)): The time required grows very slowly. Every step halves the amount of work remaining. (Excellent!)
  2. Linear Time (\(n\)): The time required grows in direct proportion to the number of items. If you double the input, you double the time. (Good.)
  3. Quadratic Time (\(n^2\)): The time required grows rapidly (quadratically) as the input grows. If you double the input, the time increases fourfold (\(2^2 = 4\)). (Poor, terrible for large datasets.)

Why does this matter? An algorithm that is \(n^2\) might be fast for a list of 10 items, but an algorithm that is \(n\) or \(n \log n\) will be vastly superior for a list of 1 million items.

Memory Aid: Always aim for a lower exponent or rate of growth. \(\log n\) is much smaller than \(n\), which is much smaller than \(n^2\).

Quick Review: Efficiency Hierarchy

\(\log n\) (Fastest) < \(n\) (Moderate) < \(n^2\) (Slowest)

Quick Key Takeaway: Efficiency is about how quickly the algorithm slows down as the list size increases. We want algorithms that scale well.

Section 3: Sorting Algorithms

Sorting algorithms arrange a list of items into a specific order (ascending or descending).

1. Bubble Sort

Bubble Sort is the simplest sorting algorithm to understand, but also one of the slowest. It works by repeatedly stepping through the list, comparing adjacent elements, and swapping them if they are in the wrong order.

The Bubble Sort Process (Ascending Order)
  1. Pass 1: Start at the beginning of the list. Compare the first two items. If the second is smaller than the first, swap them.
  2. Move one position down and compare the next two items (the second and third). Swap if needed.
  3. Continue until you reach the end of the list. The largest item will have "bubbled up" to its correct final position at the end of the list.
  4. Pass 2: Repeat the process, but stop one element earlier (since the last item is now correctly placed).
  5. Repeat this until a full pass occurs with no swaps.

Mnemonic: Imagine bubbles in a fizzy drink. With each pass, the largest remaining number rises ("bubbles up") to its place at the end of the list.

Performance: Bubble Sort is an \(n^2\) algorithm (quadratic). This makes it very inefficient for large datasets.

Common Mistake: Students often forget to stop the pass one element earlier in subsequent passes. Always reduce the unsorted portion of the list by one element per pass.
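The passes described above can be sketched in Python as follows. This is a minimal illustration (the function name is my own); note how the unsorted portion shrinks by one each pass, and how a pass with no swaps ends the algorithm early:

```python
def bubble_sort(items):
    """Bubble Sort into ascending order.
    Each pass shrinks the unsorted portion by one element;
    a full pass with no swaps means the list is sorted."""
    items = list(items)                  # work on a copy
    unsorted_len = len(items)
    while unsorted_len > 1:
        swapped = False
        for i in range(unsorted_len - 1):
            if items[i] > items[i + 1]:  # adjacent pair in wrong order
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        unsorted_len -= 1                # last item is now correctly placed
        if not swapped:                  # no swaps: stop early
            break
    return items

print(bubble_sort([5, 2, 9, 1, 7]))  # → [1, 2, 5, 7, 9]
```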

2. Quick Sort

Quick Sort is generally the fastest sorting algorithm taught at this level. It uses a "divide and conquer" approach.

The Quick Sort Process
  1. Choose a Pivot: Select one element from the list, usually the first item, to be the pivot.
  2. Partitioning: Re-arrange the rest of the elements so that all elements smaller than the pivot are placed to its left, and all elements larger than the pivot are placed to its right. The pivot is now in its final, correct position.
  3. Recursion: The process is repeated (recursively) on the sub-list to the left of the pivot and the sub-list to the right of the pivot until all sub-lists contain zero or one element (meaning they are sorted).
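The three steps above can be sketched in Python like so. This is a simple illustrative version (choosing the first item as pivot, as described; in exams the partitioning is usually done by hand rather than with list comprehensions):

```python
def quick_sort(items):
    """Quick Sort using the first item as the pivot."""
    if len(items) <= 1:                            # 0 or 1 items: already sorted
        return list(items)
    pivot = items[0]                               # Step 1: choose a pivot
    left = [x for x in items[1:] if x < pivot]     # Step 2: smaller items go left
    right = [x for x in items[1:] if x >= pivot]   #         larger items go right
    # Step 3: recurse on each sub-list; the pivot sits in its final position
    return quick_sort(left) + [pivot] + quick_sort(right)

print(quick_sort([5, 2, 9, 1, 7]))  # → [1, 2, 5, 7, 9]
```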

Performance: Quick Sort is generally \(n \log n\), which is significantly faster than Bubble Sort. However, if the pivot selection is consistently poor (e.g., if you choose the largest or smallest item every time), its worst-case scenario degrades to \(n^2\).

Did you know? In practical computer science, Quick Sort is often used because, despite its theoretical worst-case, its average performance is excellent.

Quick Key Takeaway: Quick Sort is fast (\(n \log n\)) because it divides the problem. Bubble Sort is slow (\(n^2\)) because it checks adjacent pairs repeatedly.

Section 4: Searching Algorithms

Searching algorithms are used to locate a specific target item within a list.

1. Linear Search (or Sequential Search)

This is the most straightforward search method. It simply checks every item in the list, one after the other, starting from the beginning, until the target is found or the list runs out.

Requirement: None. The list does not need to be sorted.

Process:

  1. Start at the first element.
  2. Compare the element to the target.
  3. If they match, stop and report success.
  4. If they do not match, move to the next element.
  5. Repeat until the list is exhausted.

Performance: Linear Search is an \(n\) algorithm (linear). In the worst case (the item is at the end or not present), you must check every item, making the time directly proportional to \(n\).

Analogy: Checking every aisle in a huge supermarket for one specific brand of cereal.
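The process above is only a few lines of Python (a minimal sketch; returning `None` for "not found" is my own convention here):

```python
def linear_search(items, target):
    """Linear Search: check each item in turn, starting from the beginning.
    Returns the position (0-based) of the target, or None if absent.
    The list does NOT need to be sorted."""
    for index, item in enumerate(items):
        if item == target:      # match found: stop and report success
            return index
    return None                 # list exhausted without a match

print(linear_search([7, 3, 9, 1], 9))  # → 2
print(linear_search([7, 3, 9, 1], 4))  # → None
```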

2. Binary Search

Binary Search is much faster than Linear Search, but it has a crucial prerequisite.

Crucial Requirement: The list MUST be sorted.

The Binary Search Process
  1. Find the Middle: Identify the middle item in the list.
  2. Compare: Compare the middle item with the target value.
  3. Decision:
    • If the middle item is the target, stop. Success!
    • If the target is less than the middle item, ignore the entire upper half of the list.
    • If the target is greater than the middle item, ignore the entire lower half of the list.
  4. Repeat: Treat the remaining half of the list as the new list and repeat the process (find the new middle, compare, eliminate half).

Binary Search is incredibly effective because it eliminates half of the remaining data with every single comparison.

Performance: Binary Search is a \(\log n\) algorithm (logarithmic). This is extremely fast. If you have a list of 1,000 items, you can find the target in about 10 steps (\(\log_2 1000 \approx 10\)).

Common Mistake: Trying to apply Binary Search to an unsorted list. If the list isn't sorted, Binary Search will fail spectacularly.
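The find-the-middle, compare, discard-half loop can be sketched in Python as follows (an illustrative version; `low` and `high` mark the ends of the remaining unsearched portion):

```python
def binary_search(sorted_items, target):
    """Binary Search on a SORTED list.
    Repeatedly compare the target to the middle item and discard half
    the remaining list. Returns the index, or None if not found."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2              # Step 1: find the middle
        if sorted_items[mid] == target:      # Step 2: compare
            return mid                       # Success!
        elif target < sorted_items[mid]:
            high = mid - 1                   # ignore the upper half
        else:
            low = mid + 1                    # ignore the lower half
    return None                              # nothing left to search

print(binary_search([1, 2, 5, 7, 9], 7))  # → 3
print(binary_search([1, 2, 5, 7, 9], 4))  # → None
```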

SUMMARY TABLE: Sorting & Searching Comparison
Sorting Algorithms
  • Bubble Sort: Simple. Compares adjacent pairs. Performance: \(n^2\) (Slow).
  • Quick Sort: Complex. Uses Pivots and Recursion. Performance: \(n \log n\) (Fast average).
Searching Algorithms
  • Linear Search: Simple. Checks everything sequentially. Performance: \(n\).
  • Binary Search: Efficient. Halves the list repeatedly. REQUIRES SORTED LIST. Performance: \(\log n\).

Conclusion and Final Advice

You've mastered the fundamentals of algorithms! Remember, Decision Mathematics is all about finding the best way to solve a problem. Efficiency is key!

The best way to understand these concepts is through practice:

  • Draw out the steps for a Bubble Sort trace table until it becomes second nature.
  • Practice selecting pivots and partitioning lists for Quick Sort.
  • Ensure you check the "sorted list" condition whenever you are asked to perform a search.

Keep up the great work!