Last p0st… okay I’ll stop.

Today, I am writing my last post in this SLOG, and it will be about sorting algorithms. The algorithms that I will be talking about are selection sort, quick sort, and merge sort. First, however, I will be talking about efficiency.

The efficiency of an algorithm is determined by how long it can be expected to run before completing, given the size of its input. We usually count the number of steps an algorithm takes to complete in the best case, the average case, and the worst case, and we classify algorithms into groups whose runtimes grow at different speeds as the input size n increases. For a sorting algorithm, the best case is usually a list that is already sorted, so that almost nothing needs to be done. The worst case is the input that forces the algorithm through the most steps possible; for many sorting algorithms this is a list sorted in reverse order, while a randomly ordered list is closer to the average case. The complexity of each group is denoted by O(f(n)), where f(n) is a class of functions. When we determine whether the runtime of an algorithm is within a certain class, we only look at the fastest-growing part of the function. For example, given that the function f(n) = n^4 + 100n^3 + 200n^2 + 30n + 984 describes the worst case runtime of an algorithm, f(n)\in O(n^4), because n^4 is the fastest-growing term. More generally, saying that a function g(n) is in O(f(n)) is to say that g(n) grows no faster than cf(n), for some constant c, as the input size n increases.
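To see why only the fastest-growing term matters, here is a quick check (my own snippet) using the f(n) from the paragraph above: the ratio f(n) / n^4 gets closer and closer to 1 as n grows, so the lower-order terms become irrelevant.

```python
def f(n):
    # The example worst-case runtime function from above.
    return n**4 + 100*n**3 + 200*n**2 + 30*n + 984

# The ratio shrinks toward 1 as n grows.
for n in (10, 100, 1000, 10000):
    print(n, f(n) / n**4)
```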

In this article I will be mostly looking at the worst case runtime of algorithms.

Now you may think that an algorithm with a worst case runtime of O(n^4) will always be slower than an algorithm with, say, a complexity of O(1) (which is constant time). However, this is not always the case. Let f(n) = n^4 (and therefore f(n)\in O(n^4)), and let g(n) = 100 (and therefore g(n)\in O(1)). If we look at these on a graph, we get the following:


Until the algorithm with complexity O(n^4) reaches the breakpoint, that is, when n^4 = 100, the algorithm with a constant worst case runtime of 100 steps will have worse performance than the algorithm with complexity n^4. Thus, we must choose which algorithm to use for the job at hand. The algorithm that does poorly on large inputs may do the best on small inputs, and the algorithm that performs poorly on small inputs may scale better than other algorithms for large inputs.
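The breakpoint can also be checked numerically (this little table is my own illustration, using the same f(n) = n^4 and g(n) = 100 as above):

```python
# Compare step counts: the O(n^4) algorithm wins only while n^4 < 100,
# i.e. for n below the breakpoint at n = 100 ** 0.25 (about 3.16).
for n in range(1, 6):
    winner = "n^4 is faster" if n**4 < 100 else "constant is faster"
    print(n, n**4, winner)
```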

Now, I will move on to the sorting algorithms.

Selection Sort
Selection sort consists of scanning the list to find the smallest element and swapping it into the first position. Then, we scan the rest of the list for the next smallest element, and so on until the list is sorted. Because the scan at step k looks at n - k elements, the total number of comparisons is \sum\limits_{k = 0}^n (n - k) = 0 + 1 + 2 + ... + (n - 1) + n = \frac{n}{2}(n + 1). Since it takes the same number of steps for any input, because it performs the same scans whether the list is sorted or reversed, its best, average, and worst case complexities are all O(n^2).
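A straightforward Python version of the procedure described above:

```python
def selection_sort(lst):
    """In-place selection sort: repeatedly find the smallest remaining
    element and swap it to the front of the unsorted portion."""
    for i in range(len(lst)):
        smallest = i
        for j in range(i + 1, len(lst)):
            if lst[j] < lst[smallest]:
                smallest = j
        lst[i], lst[smallest] = lst[smallest], lst[i]
    return lst

print(selection_sort([5, 3, 1, 4, 2]))  # [1, 2, 3, 4, 5]
```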


Quick Sort
A quick sort algorithm chooses a pivot (any element can serve, though the choice affects performance), and reorders the list based on each element's relation to the pivot. If an element is smaller than the pivot, it is placed before the pivot. If an element is larger than the pivot, it is placed after the pivot. Then, the algorithm is applied recursively on the two partitions that result from this operation. After each partitioning operation is performed, the pivot is in its final position.

This algorithm is called a quick sort because it is surprisingly quick. Chances are that the pivot you pick each time will fall somewhere in the middle of the list, which means that each partitioning step divides the list roughly in two. Its worst case complexity is O(n^2), but that case is rare: the pivot would need to be the smallest or largest element after every partitioning operation, which is vanishingly unlikely when pivots are chosen at random. Its average case and best case complexities are both O(n\log n).
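Here is a simple Python sketch of the idea. For clarity it builds new lists for the partitions rather than partitioning in place, and it just uses the first element as the pivot:

```python
def quick_sort(lst):
    """Quick sort: partition around a pivot, then recurse on each side.
    After partitioning, the pivot sits in its final position."""
    if len(lst) <= 1:
        return lst
    pivot = lst[0]
    smaller = [x for x in lst[1:] if x < pivot]
    larger = [x for x in lst[1:] if x >= pivot]
    return quick_sort(smaller) + [pivot] + quick_sort(larger)

print(quick_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```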

Merge Sort
Merge sort is an algorithm in which the list is split in half repeatedly until each piece is as small as possible (a single element). Adjacent pieces are then merged in sorted order, the resulting sorted groups are merged with their neighbouring groups, and so on. Finally, the whole list is rejoined, and the sort is complete.

This algorithm is actually one of the first sorts that I looked into. This sorting algorithm has a worst case, average case, and best case complexity of O(n\log n). Merge sort works better than quick sort when data can only be accessed sequentially (that is, one element at a time, in order). It also does better than quick sort in the worst case: merge sort stays in O(n\log n), while quick sort degrades to O(n^2).
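The splitting-and-merging procedure can be sketched in Python as follows (a simple top-down version):

```python
def merge(left, right):
    """Merge two already-sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # One of the two slices is empty at this point.
    return result + left[i:] + right[j:]

def merge_sort(lst):
    """Split in half, sort each half recursively, then merge."""
    if len(lst) <= 1:
        return lst
    mid = len(lst) // 2
    return merge(merge_sort(lst[:mid]), merge_sort(lst[mid:]))

print(merge_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```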


The Future
Next week, the finals start. I hope that I’ll be ready for my exams, even though I certainly do not feel ready at the moment. CSC148 has been interesting and enjoyable, and I am looking forward to my next courses in the Computer Science program.

I got the data on how the algorithms worked (and some of the analysis), the animations to illustrate how the algorithms worked, and the Big-O complexities for each of the algorithms from the following pages on Wikipedia. These links were accessed on December 6, 2013 at around 11pm EST. The links are all permalinks to the revision that I used as a reference.

Selection Sort
Quick Sort
Merge Sort

Test #2 Results

And so, I finally got my test back during one of Professor Heap’s office hours. Overall, I didn’t do too badly; I made only two small mistakes.

Question 1
Questions 1 and 2 had to do with binary trees. In question 1, we had to write a function that took in a BTNode as a parameter, and returned a tuple containing the number of interior nodes as the first element and the number of leaves as the second element, or (0, 0) if the node is None.

This question was straightforward; however, looking at the solutions (which can be found here: Solutions to Test #2), tuple unpacking would have been a good technique to use here. I ended up doing something rather complicated (I referenced each element of the tuple directly by its index). You may find the code from the exam here: Test2Q1
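For reference, here is a sketch of what such a solution might look like using tuple unpacking. The BTNode attribute names (value, left, right) are my assumption, not the actual course code:

```python
class BTNode:
    """My guess at the shape of the test's binary tree node."""
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def interior_and_leaves(n):
    """Return (number of interior nodes, number of leaves) for the tree
    rooted at n, or (0, 0) if n is None."""
    if n is None:
        return (0, 0)
    if n.left is None and n.right is None:
        return (0, 1)
    left_int, left_leaves = interior_and_leaves(n.left)    # tuple unpacking
    right_int, right_leaves = interior_and_leaves(n.right)
    return (1 + left_int + right_int, left_leaves + right_leaves)
```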

This is one of the questions in which I made a mistake. See if you can spot it. When you’ve finished, see the answer after the break.

Finally a way to make Python instance variables private?

This week, we talked about the property function. This function lets us take an instance variable that users previously assigned to directly and make assignment call a method instead. This allows us to control exactly how instance variables are managed. For example, you could force the user to assign only valid data to a variable, avoiding errors caused by invalid values. The following snippet of code gives an example of the problem.

class Test:
    def __init__(self):
        self.number = 0

if __name__ == "__main__":
    a = Test()
    a.number = ["What a marvelous day to", ("break this program")]

If another function were to access Test.number expecting an integer, and instead found a strange list containing a string and a tuple containing a string, it would not know what to do with this unless explicitly told otherwise. This could cause major problems in applications that need to be reliable. However, if many people have already built the Test class into their programs with direct assignment to Test.number, telling everyone to change their direct assignments into method calls would be a logistical nightmare. With property, there is a way to do this without breaking everyone else's programs: define methods to be called when Test.number is read from or written to, and store the actual value in a private(-ish) variable instead. Here is a snippet of code that redefines the Test class with the property function.

class Test:
    def __init__(self):
        self._number = 0

    def set_number(self, n: int):
        if isinstance(n, int):
            self._number = n
        else:
            raise TypeError

    def get_number(self):
        return self._number

    number = property(get_number, set_number, None, None)

Here, in the property function, the first argument is the method for getting the value of the variable, the second argument is the method for setting the value of the variable, the third argument is the method that is called when the variable is deleted, and the fourth (and final) argument is the docstring for the property. If any of these operations is to be left undefined, the corresponding argument can simply be set to None. For example, to make Test.number read-only, all one needs to do is pass None as the second argument.
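Here is a self-contained demonstration of the same pattern in action (the class name Guarded and the error message are my own, not from the course):

```python
class Guarded:
    """Minimal illustration of property(): reads and writes to .number
    are routed through get_number and set_number."""
    def __init__(self):
        self._number = 0

    def get_number(self):
        return self._number

    def set_number(self, n):
        if not isinstance(n, int):
            raise TypeError("number must be an int")
        self._number = n

    number = property(get_number, set_number, None, "A validated integer.")

g = Guarded()
g.number = 42                      # goes through set_number
print(g.number)                    # goes through get_number: 42
try:
    g.number = ["not", "an", "int"]
except TypeError:
    print("assignment rejected")
```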

The existence of the property function allows us to re-implement code without changing how the user accesses it. This allows for a more transparent transition from a possibly more insecure way of accessing variables to a more secure one with checks. This could prevent some exploits of the code that rely on being able to submit malformed input into variables.

Reference for the property() function: Entry in the Python 3.3 Documentation

Test #2: Recursion, recursion, and more recursion…

This morning, I had the second term test of the course. Last night, I made another aid sheet (handwritten, 8.5″x11″, double sided). This time, I didn’t even touch the aid sheet. The first question gave us a Binary Tree Node (BTNode) class, and the declaration for a module-level function that would count the number of nodes in the binary tree rooted at a node, and return a tuple containing information about the node (the exact details of which I cannot remember very well). This question was the stuff that we have been doing in the labs lately. I’ll give more information about the question when I get the test back next week.

The second question was about the same class, but we had to create a function called count_even() that would take in a BTNode, and return the number of nodes whose value is even. This one was easy: return 1 plus the number of even-valued nodes in the root’s subtrees if the root’s value is even, and just the number of even-valued nodes in the subtrees if the root’s value is odd. I’m pretty sure that I got this question right, though I must say that when checking if a number was even, I used

if n // 2 != (n / 2.0): pass # check if odd

instead of

if n % 2 != 0: pass # check if odd

or even

if not n % 2: pass # check if odd

This is code that I used before I was aware of the modulo operation. The reason that I compare the integer division of n by 2 with n divided by 2.0 is that I started programming in Python at version 2, where the division operator performs integer division when both the dividend and the divisor are integers. Thus, an easy fix if you want floating-point division is to make either the dividend or the divisor a floating-point number. One way is to write 2.0 instead of 2, but another way would of course be to cast either the dividend or the divisor to a float (using the float() function in Python).
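A quick check of the behaviour described above (my own snippet; note that in Python 3, / always performs float division):

```python
n = 7
print(n // 2)             # integer division: 3
print(n / 2.0)            # float division: 3.5
print(n // 2 != n / 2.0)  # True exactly when n is odd
```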

However, I realized that a more concise (and easier-to-understand) way of checking whether a number is odd or even is simply to check whether n mod 2 is zero, or in Python:

if n % 2:
    print("n is odd")
else:
    print("n is even")
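Putting it together, here is a sketch of count_even() as I described it above. The BTNode class is my guess at the shape of the one on the test:

```python
class BTNode:
    """My sketch of the course's binary tree node."""
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def count_even(node):
    """Count the nodes in the tree rooted at node whose value is even."""
    if node is None:
        return 0
    here = 1 if node.value % 2 == 0 else 0
    return here + count_even(node.left) + count_even(node.right)
```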

The final question was about Binary Search Trees. What we had to do was to build a linked list of the shortest path from the root to the smallest element. This wasn’t too difficult either, and if I remember correctly, the solution was no more than 5 lines. It seems that recursion can really save a lot of code.
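Since I don't remember my exact answer, here is a sketch of how short such a solution can be. The node and list classes are my own minimal stand-ins; the key observation is that the smallest element of a BST is always the leftmost node:

```python
class BTNode:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

class LLNode:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def path_to_smallest(node):
    """Linked list of the values from the root down to the smallest
    element of the BST (i.e. keep following left children)."""
    if node is None:
        return None
    return LLNode(node.value, path_to_smallest(node.left))
```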

All in all, the test wasn’t too difficult. I think I should have been able to get a higher mark than last time. Anyhow, on to the Final Exam………

Runtime Analysis

This week in lecture, we discussed Binary Search Trees and Runtime Analysis. I will explain them in more detail below.

Binary Search Trees (BST)
A binary search tree is a binary tree in which the left branch contains elements that are less than the root, and the right branch contains elements that are greater than the root. Here is a picture showing a Binary Search Tree from its Wikipedia page:

A Binary Search Tree

As you can see, the left branch of each node contains elements less than that node, and the right branch contains elements greater than it. This allows us to find an element in a tree very quickly. All we have to do is check the value that we are looking for against the value of the root. If its value is what we are looking for, then we’re done. If the value of the root is less than what we are looking for, then we try again with its right child. If the value of the root is greater than what we are looking for, then we try again with its left child. Below are links to examples of a binary search tree class and a binary tree node class, written by Professor Danny Heap: (Binary Search Tree class) (Binary Tree class)
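The search procedure just described can be sketched like this (my own minimal node class, not Professor Heap's code):

```python
class BSTNode:
    """Minimal stand-in for a binary search tree node."""
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def bst_search(node, target):
    """Return True iff target is in the BST rooted at node, following
    the comparison rule described above."""
    if node is None:
        return False
    if node.value == target:
        return True
    if target < node.value:
        return bst_search(node.left, target)   # smaller values live on the left
    return bst_search(node.right, target)      # larger values live on the right
```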

Runtime Analysis
Runtime Analysis is the process of estimating how many steps an algorithm will take to complete on large inputs. In this course, we mainly deal with the worst-case runtime, which is expressed in Big-O notation. Since Big-O notation is an estimate of how a function behaves for very large inputs, we keep only the part of the function describing the runtime that grows fastest. Here is an example:

f(n) = n^2 + 2n + 1

This function f is a quadratic function. Since the n^2 term grows faster than the other terms, we drop the rest and say that this function is in O(n^2).

Or symbolically:

f(n)\in O(n^2)

The reason for this becomes apparent when you graph the function.

Graphs of x^2 + 2x + 1 and x^2

Here, I have graphed the functions x^2 + 2x + 1 \mathrm{\ and\ } x^2 on the same graph. As you can (probably) see, for very large inputs, x^2 + 2x + 1 is approximately the same as x^2. By dropping unnecessary information like this, we can simplify our expression of the runtime of our algorithms.

Here are some common classes of runtime (with their Big-O representations, from best to worst):

Constant: c\in O(1)

Logarithmic: c\log n \in O(\log n)

Linear: cn \in O(n)

Log-linear (usually the best achievable worst case runtime for sorting algorithms): cn\log n \in O(n\log n)

Quadratic: cn^2 \in O(n^2)

Cubic: cn^3\in O(n^3)

Exponential: c2^n\in O(2^n)

Horrible: cn^n\in O(n^n),\ cn!\in O(n!)

(For more information, and the source of this information, please refer to these lecture slides.)

Now, runtime analysis is important because it allows computer scientists in fields that run long simulations or complex calculations to estimate how long an algorithm will take to run. If there are time constraints, this means a shorter or more important simulation or calculation can be chosen to run first. It also allows for more efficient time sharing on a supercomputer, as calculations can be planned in advance and scheduled with the least possible delay. Runtime analysis matters as well where there is little processing power available, such as on microcontrollers, since it lets the computer scientist streamline and optimize their code before running it in such an environment.

Linked Lists, and Test Scores.

This week’s lectures were about linked data structures. When I was in high school, I had heard about linked lists from a friend. Since I wasn’t very confident in my programming skill, I asked him to show me an example of an implementation of one. He programmed it in C++, which I was using at the time, but I don’t remember the details of it. Now, I understand what a linked list is.

A linked list is a chain composed of nodes, which have two elements: the value, and a reference to the next node. In C/C++, I think it would look something like this (assuming we’re dealing with integers):

typedef struct intnode {
    int value;
    struct intnode *next;
} intnode;

In Python, since every object is passed by reference into a function, our node class would look like the following:

class LLNode:
    def __init__(self, value, next):
        self.value = value
        self.next = next

Linked lists are pretty convenient to implement, especially when a list class (or equivalent) does not exist (C comes to mind). They are also easy to traverse and modify, because all of that can be done recursively. Here is an example of an LLNode with a few essential methods:

class LLNode:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

    def __getitem__(self, index):
        # Walk down the chain until the index runs out.
        if index == 0:
            return self.value
        return self.next[index - 1]

    def append(self, value):
        # Attach a new node at the end of the chain.
        if self.next is None:
            self.next = LLNode(value)
        else:
            self.next.append(value)

    def delete(self, value):
        # Unlink the first node after this one whose value matches.
        if self.next is not None:
            if self.next.value == value:
                self.next = self.next.next
            else:
                self.next.delete(value)

    def remove(self, index):
        # Unlink the node at the given position. A node cannot remove
        # itself (it holds no reference to its predecessor), so index
        # must be at least 1.
        if index == 1:
            self.next = self.next.next
        else:
            self.next.remove(index - 1)

A side note: If anyone knows how to tab in HTML without having to use a bunch of   characters, please let me know in the comments.

Also, today I was browsing the list of other SLOGs on the CSC148 website (the list may be found here). I selected one at random: slogFIVE. In one of its articles, the student commented on the any() function, noting that one of the list comprehensions listed returned a value while the other two didn’t. The student went on to comment on how there was a -1 in the list, and how the -1 was handled.

When I started programming, I always explicitly checked for true or false in my if and while statements. However, I later learned that there was no need to do so: since an if or while statement already checks whether its condition is true, I only needed to use the boolean variable itself as the condition. I also learned that there is no boolean type in C, and that the int type is used as a boolean instead. When using an integer as a boolean, any value except zero counts as true, while zero counts as false. This allows statements that check whether an integer’s value is zero to be written more simply, such as:

i = 100
while i:
    print(i)
    i -= 1

This would print all of the numbers from 100 to 1 inclusive, with each number on its own line. Here is how I would have done this before:

i = 100
while i > 0:
    print(i)
    i -= 1

Of course, in this example, I think that the second while statement is easier to read and makes more sense, but in more complex examples it may be more concise to use the former example.

This week, I also got Test #1 back. I didn’t do as well as I had hoped, but our class did get scaled, so it might not be too bad. Indeed, I lost marks for my misinterpretation of the min function in question 1. I also lost a mark in question 2 for checking for a wrong base case; it turns out that my general case was sufficient to concatenate the strings properly. In question 3 part A, I lost marks for forgetting that self referred to a list subclass. If I had remembered this, I would have implemented the class differently and more correctly. In part B, I lost a mark for forgetting to check for the empty case (i.e. FL = FunctionalList([])) >.< I'll have to remember that.

Test #1

And so today I had my first test. Last night, I spent about 20 minutes creating a handwritten reference sheet containing references to Python builtins (copied from the output of help()). However, during the test, I didn’t even touch the sheet. The first question was about going through a recursive function and figuring out the output for different function calls. The biggest problem I had with this was that I forgot how the min function worked–even though I had written down roughly what max and min did. However, I neglected to write down what would happen if max or min were used on a list of tuples. Testing it now, I got the following output:

>>> min([(1,10), (5, 2)])
(1, 10)

What I assumed the min function did was compare the first elements of the tuples, and that is essentially how it works: tuples are compared lexicographically, element by element from the left, with later elements used only to break ties between earlier ones. (At the time I had convinced myself that it compared the magnitudes of the vectors formed by the tuples, but that can’t be right, since it would have returned (5, 2) here.) I remember it well now. Oops.
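A quick check of the element-by-element comparison (my own snippet):

```python
# min/max compare tuples lexicographically: the first elements decide,
# and later elements only break ties among earlier ones.
print(min([(1, 10), (5, 2)]))  # (1, 10): 1 < 5 decides
print(min([(1, 10), (1, 2)]))  # (1, 2): tie on 1, so 2 < 10 decides
```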

The second question was on recursion. We had to join elements in nested lists into a string separated by a given string (much like the str.join(iterable) method, except that instead of raising an exception when encountering a list, it would read the contents of the list and concatenate them as if they were at the top level of the list). This wasn’t too difficult to write a recursive function for.
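Since I don't have my exact answer, here is my reconstruction of the kind of function the question asked for (the name join_nested is hypothetical):

```python
def join_nested(sep, lst):
    """Join strings in arbitrarily nested lists with sep, flattening
    sublists as if their contents were at the top level."""
    parts = []
    for item in lst:
        if isinstance(item, list):
            parts.append(join_nested(sep, item))  # recurse into sublists
        else:
            parts.append(item)
    return sep.join(parts)

print(join_nested("-", ["a", ["b", ["c"]], "d"]))  # a-b-c-d
```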

The last question had 2 parts. In part A, we had to create a FunctionalList class derived from the list class, with two new methods: functional versions of sort and append. This meant that we had to return the modified list instead of modifying it in place (in contrast to the behaviour of the corresponding methods built into the list class). Again, I did not find this too difficult, but I forgot how the list class worked. Talking to my friends later, one way to make a copy of the contents of the list was to use self[:]. I forgot that self referred to the list class as well, and so I could treat it as a reference to a list. I’m pretty sure that I lost marks because of that.
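Here is my after-the-fact sketch of such a class (the exact method names and behaviour on the test may have differed), using the self[:] copying trick mentioned above:

```python
class FunctionalList(list):
    """A list whose sort and append return a new list instead of
    mutating in place."""
    def sort(self, *args, **kwargs):
        new = FunctionalList(self[:])   # self[:] copies the contents
        list.sort(new, *args, **kwargs)
        return new

    def append(self, value):
        new = FunctionalList(self[:])
        list.append(new, value)
        return new
```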

Part B was about making test cases for the FunctionalList class. I had not had very much practice at this before, but I tried to make test cases for every case that I could think of.

All in all, this test wasn’t too difficult. I hope that I don’t lose too many marks on my errors, and I’ll be sure to know how the various built-in classes work better for the next test.