Friday, 6 December 2013

Last Blog Entry

This is the last blog entry. This week, we had just one class. No new topic was taken up, as it was a review session. We discussed what we did throughout the semester and were given some tips and a list of possible topics that could appear on the final exam. We were advised to refer to the assignments, exercises and labs for possible questions. Other than that, this has been a good semester. I have learnt some new things and strengthened some old topics. I had fun studying this course under Professor Danny; the only thing I hated was coming in at 9 in the morning. I also made some friends and enjoyed the labs with Amir Ali sir. Hoping that the final exam goes well and I finish the course with an A or A+.


The mandatory part was to include selection sort, quick sort, merge sort and efficiency.

1) Selection sort: The selection sort selects the minimum element from a particular index to the end of an iterable and swaps it into that index. It then moves on to the next index. The total number of comparisons is (n-1) + (n-2) + ... + 1, which gives us n(n-1)/2, so the efficiency is O(n²). It gets much slower as the input size increases.
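A minimal sketch of the idea in Python (the function name and sample list are my own, not necessarily the lecture code):

```python
def selection_sort(lst):
    """Sort lst in place by repeatedly selecting the minimum element."""
    for i in range(len(lst)):
        # Find the index of the smallest element from position i onwards.
        min_index = i
        for j in range(i + 1, len(lst)):
            if lst[j] < lst[min_index]:
                min_index = j
        # Swap it into position i.
        lst[i], lst[min_index] = lst[min_index], lst[i]

data = [5, 2, 4, 1, 3]
selection_sort(data)
print(data)  # [1, 2, 3, 4, 5]
```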

2) Quick sort: The quick sort involves choosing a pivot element. All other values are compared with it and divided into two partitions, those larger than it and those smaller. The pivot then sits at its correct position. This is applied recursively to the two partitions, and we finally get a sorted iterable. The efficiency of this sort is O(n log n) on average, although it can degrade to O(n²) if the pivot choices are consistently bad.
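A short sketch of this, assuming the first element is used as the pivot (the function name is my own, and the lecture version may partition differently):

```python
def quick_sort(lst):
    """Return a new sorted list built around a pivot element."""
    if len(lst) <= 1:
        return lst
    pivot = lst[0]
    # Split the remaining items into the two partitions described above.
    smaller = [x for x in lst[1:] if x < pivot]
    larger = [x for x in lst[1:] if x >= pivot]
    # The pivot lands at its correct position between the sorted partitions.
    return quick_sort(smaller) + [pivot] + quick_sort(larger)

print(quick_sort([3, 1, 4, 1, 5, 9, 2]))  # [1, 1, 2, 3, 4, 5, 9]
```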

3) Merge sort: The merge sort involves dividing the iterable into two halves, and doing this recursively until we reach iterables that are trivially sorted (iterables with at most one element). We then merge these halves. While merging two halves, we compare the front elements of each and combine them so as to produce a sorted, larger iterable. We do this until we have just one iterable, fully sorted. The efficiency of this sort is O(n log n).
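The divide-and-merge steps can be sketched like this (helper names are my own):

```python
def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    result = []
    i = j = 0
    # Repeatedly take the smaller of the two front elements.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # One of the halves may still have leftover elements.
    return result + left[i:] + right[j:]

def merge_sort(lst):
    """Return a new sorted list by splitting, sorting, and merging."""
    if len(lst) <= 1:
        return lst  # a single element is already sorted
    mid = len(lst) // 2
    return merge(merge_sort(lst[:mid]), merge_sort(lst[mid:]))

print(merge_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```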

Having seen these algorithms and their efficiencies, we can clearly see that merge and quick sorts take much less time than selection sort as the size of the input increases. Hence, the use of selection sort is avoided as the input size grows.

Thursday, 28 November 2013

Namespaces and memoizing

This week, we learnt about namespaces and the scope of variables. We learnt that a local variable cannot be accessed outside its function definition; a global variable lives in the module's global namespace and can be read from anywhere in the module; and a nonlocal variable belongs to an enclosing function and can be accessed from the functions nested inside it. Another thing we learnt is methods, and that methods of a subclass override those of the superclass. Another interesting thing was tracing, where we saw the flow of control in a recursive function with the help of a visualizer. We saw how functions were called during each recursion cycle, and how values were returned, leading to the final answer. According to me, the most useful thing we saw in the last lecture was code efficiency. We used the memoizing technique, where we stored previously calculated values in a table to avoid repeating identical function calls. We applied it to the Fibonacci series and saw that we could calculate fib(400) with this technique, compared to only about fib(40) without it. This idea was then generalized to automate the memoizing of functions.
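The memoized Fibonacci can be sketched like this (the cache name and base cases are my own choices; the lecture code may differ):

```python
fib_cache = {0: 0, 1: 1}  # base cases: fib(0) = 0, fib(1) = 1

def fib(n):
    """Return the n-th Fibonacci number, storing results in a table."""
    if n not in fib_cache:
        # Each value is computed at most once; later calls just look it up.
        fib_cache[n] = fib(n - 1) + fib(n - 2)
    return fib_cache[n]

print(fib(400))  # finishes instantly, unlike the naive recursive version
```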

Friday, 22 November 2013

Reduce and information hiding

This week, we learnt an entirely different topic from what we were doing in the past weeks. The first concept was information hiding. We prevented the user from directly manipulating the values of various variables used in the code. Instead, we made the variable private and used getters and setters to set and use its value. This helps avoid assigning invalid inputs to the variables, as the input is validated in the setter. Other than that, it also allows us to make some variables read-only, by not providing a setter. Once we have all the required methods, we create a property that wraps the private variable, and access the getter and setter through it. I was aware of the idea of using getters and setters from my Java knowledge and its recent use in CSC 207, but I learnt some innovative ways of using them this week.
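A small sketch of the getter/setter/property pattern described above (the Student class, its attribute, and the validation rule are all hypothetical examples of my own):

```python
class Student:
    """A hypothetical class whose grade is hidden behind a property."""

    def __init__(self, grade):
        self.grade = grade  # goes through the setter below

    def _get_grade(self):
        return self._grade

    def _set_grade(self, value):
        # Validate the input before assigning, as described above.
        if not 0 <= value <= 100:
            raise ValueError('grade must be between 0 and 100')
        self._grade = value

    # The property ties the getter and setter to the attribute name.
    grade = property(_get_grade, _set_grade)

s = Student(90)
print(s.grade)  # 90
# s.grade = 150 would raise ValueError, so invalid values never get stored.
```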

The other concept introduced was reduce, which allows us to break down any iterable to a single value by applying an operation to pairs of items in the iterable. This is a very nice technique to extend some of the built-in operations like sum, multiplication, etc. and use them in places where they cannot be used directly. We can also define our own operations. This week, examples like multiplying the items in a list, summing a list containing lists, etc. were shown. It is a very nice way of reducing iterables when different operations need to be done on the items.
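Both examples from this week can be reproduced with `functools.reduce` (the variable names are mine):

```python
from functools import reduce

# Multiplying the items in a list: reduce folds the list pairwise.
nums = [1, 2, 3, 4]
product = reduce(lambda a, b: a * b, nums)
print(product)  # 24

# Summing a list containing lists: '+' here is list concatenation,
# a place where the plain built-in sum() cannot be used directly.
nested = [[1, 2], [3], [4, 5]]
flat = reduce(lambda a, b: a + b, nested)
print(flat)  # [1, 2, 3, 4, 5]
```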

Thursday, 14 November 2013

Post term test

This week, we did not get the weekly dosage of CSC 148H1. We lost our Monday lecture to the fall break and the Wednesday one to term test 2. As mentioned in the previous post, the fall break was an ideal time to finish up incomplete work, as well as to relax and procrastinate. Well, procrastination was undoubtedly there, as I did not have much work to do. I did some assignments for other subjects, and then studied for the test. The second term test was manageable, but it was definitely harder than the previous one. What I felt about it was that binary trees were given all the weightage, and sorting algorithms were not tested at all. The first two questions were pretty much the same, and mainly tested the ability to traverse a binary tree. I was expecting the sorting algorithms to be tested, as I feel they have an important role to play in the future. Anyway, the test was fun, and I am expecting good marks on it.

Friday, 8 November 2013

Sorting algorithms

This week, we continued the topic started last time: the big-oh notation. We studied the application of the notation in calculating the orders of various sorting algorithms. The sorting algorithms were explained to us using different sized coffee cups, which was a very nice technique to demonstrate the sorts in a practical way, allowing us to understand them better, rather than just looking at the code and trying to get it. Another advantage of doing it this way was that we could compare the times taken by each sort and discuss their orders easily, thereby coming to a conclusion on the better sort. A few sorting algorithms that we discussed were selection sort, quick sort, merge sort and count sort. Each of them used a different method and took a different time to complete. Then, we compared their running times through a program, with varying sizes of input. Selection sort led initially, but as the size got larger, the factor of n² came into play and it started taking much longer. Count sort and the built-in sorts were the long-term leaders, but for extremely large inputs, count sort was better. The only reason it is not used widely is that it needs the values being sorted to lie in a limited range; if the values are much larger than the input size, the counting table becomes impractically big.
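A count sort for non-negative integers can be sketched like this (my own sketch, not the lecture code); note how the counting table's size depends on the largest value, which is exactly the limitation mentioned above:

```python
def count_sort(lst):
    """Sort a non-empty list of non-negative integers by counting occurrences."""
    counts = [0] * (max(lst) + 1)  # table as big as the largest value
    for x in lst:
        counts[x] += 1
    # Rebuild the list by emitting each value as many times as it occurred.
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)
    return result

print(count_sort([4, 1, 3, 1, 0]))  # [0, 1, 1, 3, 4]
```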

Other things going on in the curriculum are the upcoming fall break and the second term test. The fall break is an ideal time to study, as well as for procrastinating. The term test on November 13 will encourage some studying though. But, it’s a nice time to finish up all the incomplete work, and also relax a bit by not having to attend the university.

Thursday, 31 October 2013

Binary Search Trees

This week, we were introduced to binary search trees. We were already familiar with trees, especially binary trees. The extra feature of a binary search tree over a normal binary tree is that it is, in a way, sorted. That is, the left sub-tree has lower and the right sub-tree has higher node values than the root node value.
Apart from that, we have started to learn the O( ) notation. It is a way of representing the time taken by the code, as proportional to some function of the input size. A binary search tree is helpful to a great extent when we are searching for the presence of given items in trees. It reduces the time taken by our code to search for an element to O(log n) (for a reasonably balanced tree), compared to O(n) for a normal binary tree, where 'n' is the number of nodes. This happens because as soon as the item is compared to the root node value, we are able to figure out which side to look in. Hence, after each comparison, nearly half of the tree is eliminated as a possible location of the given item.
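The halving idea can be sketched like this (the node class and function names are my own, not the course's Node class):

```python
class BSTNode:
    """A minimal binary search tree node."""
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def contains(node, item):
    """Return True iff item occurs in the BST rooted at node."""
    if node is None:
        return False
    if item == node.value:
        return True
    # One comparison tells us which half of the tree to discard.
    if item < node.value:
        return contains(node.left, item)
    return contains(node.right, item)

#     5
#    / \
#   3   8
root = BSTNode(5, BSTNode(3), BSTNode(8))
print(contains(root, 8))  # True
print(contains(root, 4))  # False
```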
Other things going on are the weekly exercises and assignment 2. The exercises have been easy to handle and were finished pretty quickly. The assignment is a bit tougher than the exercises, but a lot simpler than the previous one. Hope that things will fall in place before its submission.
One more week and then we have the fall break. Term test 2 coming up after that. Will have to gear up for it!

Wednesday, 23 October 2013

Linked lists

This week, we were given our marks for the mid-term examination. Well, the exam was pretty easy, and I did receive a good grade on it. Happy, but not satisfied. I made a trivial error while passing the arguments in the second question. Will try to be more careful next time and not lose marks on things I know. The new topic introduced this week was linked lists. They have some characteristics of lists as well as some of trees. Like lists, they store a data value, and like trees, they lead us to the next data member. Not that difficult to learn, once we understand the implementation of trees and the Node class from exercise 5. I attempted exercise 5 and got the (a) part on the first try. While implementing the (b) part, I forgot to consider the case when the elements are None, as I wrote it in a hurry at its preliminary stage. Don't know when I'll stop making these kinds of errors. But I corrected it as soon as I got some time to revisit it. Overall, learning is on track and I'm enjoying it!!!
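The "stores a value, leads to the next member" idea can be sketched with a small node class (my own minimal version, not necessarily the exercise 5 code):

```python
class Node:
    """One link in a linked list: a value plus a reference to the next node."""
    def __init__(self, value, nxt=None):
        self.value = value    # like a list, it stores a data value
        self.next = nxt       # like a tree, it leads to the next member

# Build the list 1 -> 2 -> 3 and walk it to the end.
head = Node(1, Node(2, Node(3)))
values = []
cur = head
while cur is not None:
    values.append(cur.value)
    cur = cur.next
print(values)  # [1, 2, 3]
```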