We're in the last week of classes, lectures are now done for this class and we are all starting to study.
So what have I learned? Well, the interesting thing about this class is that week to week I felt like I wasn't really learning anything new. Recursion was an idea I had seen before, and it seemed like we were just drilling it into our heads week after week. It seemed like we were learning one idea over and over for the whole class.
Although looking back, I know WAY more than I did when class started. My programming alone is significantly more efficient, and I am able to pull apart problems in a much more systematic way now. It's interesting that you can be unaware that you are learning, and yet learn so much.
With this course, I bid goodbye to computer science, as I will be starting programs in mathematics and actuarial science. I really did enjoy it though, and this was by far my most interesting class this year.
The Angry Student
Thursday, April 3, 2014
Monday, March 31, 2014
On the midterm exam, a.k.a. something that needs to be said.
My mom is bad at math. I have tried many times to show her things that I think are really cool little tidbits of mathematics, and she has trouble grasping even things that are relatively simple. However, my mom is an incredibly intelligent woman who has 3 degrees and graduated with a 4.0 GPA from Mt. Allison University in 1981.
Does being bad at math make my mom stupid? By most people's standards, absolutely not. She is just not cut out for math, and that is perfectly fine.
So this last midterm had a raw average of about 44%, which is the worst I have seen in any of my classes so far at U of T. Most of the people I know who are relatively talented at computer science or math easily got over 100% (one such friend got 100% BEFORE curving), so I have one thing to say:
Maybe if you failed, computer science isn't for you.
Sure, you may want it to be. You may like video games and want to program them, or perhaps you heard that computer science can lead to a lucrative career. However, if you struggle so badly with something that a significant number of people can do well... maybe it isn't for you, and you should consider something more in line with your talents.
Sunday, March 30, 2014
Addendum on Efficiency.
So I did it. I decided to explain Big O notation to my 77-year-old grandmother (she had a birthday earlier this month!). It was, I think, somewhat successful. Here is what I said:
Alright, so we recently learned this thing in class called Big O notation, which we use to give a rough estimate of how good, or how god-awful, a function we wrote is. So imagine you are cooking dinner and you have two ways you could chop an onion. One is to chop it in 10 slices horizontally, 10 vertically, and 10 lengthwise. No matter what, this will take you 30 slices, which is constant no matter the size of the onion, so we would say the expected time this algorithm takes is ALWAYS 30, or O(1); the 1 just stands for constant. Now contrast this with an algorithm that takes an onion and cuts it into 0.5 cm pieces. A 6 cm diameter onion would take the same 30 cuts, but if it were 7 cm, all of a sudden we have 36 cuts to do, 8 cm would take us 42, etc., so now we have something that grows depending on the onion. If we look carefully at how this grows, we see that for every 1 cm of extra onion you do 6 extra cuts, so we notate that as O(n), where the n stands for "grows with n".
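The two chopping strategies can also be sketched in a few lines of Python. This is just a toy sketch; the cut counts follow the numbers in the explanation above rather than any real onion geometry:

```python
def cuts_fixed_grid(diameter_cm: int) -> int:
    # 10 horizontal + 10 vertical + 10 lengthwise slices,
    # no matter how big the onion is: O(1), a constant
    return 30

def cuts_half_cm(diameter_cm: int) -> int:
    # following the counting above: every extra centimetre
    # of onion adds 6 more cuts: O(n), grows with the onion
    return 6 * (diameter_cm - 1)
```

So cuts_fixed_grid is the same for any onion, while cuts_half_cm(7) gives 36 and cuts_half_cm(8) gives 42, growing with the onion.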
Unfortunately though I couldn't think of a proper way to explain more complex algorithms in terms of onions.
Sunday, March 23, 2014
Making tasks super inefficient.
So, we have to write about efficiency. Well, instead of doing what most people are doing, which is likely just explaining what it is, my goal in this post is to take a bunch of familiar concepts in programming and write the most inefficient code I can think of, in order to demonstrate some key ideas.
First though, I feel obliged to give the Coles Notes version of the notation we will be using in this discussion, "Big O" notation. We can pretend it's some kind of magic mathematical voodoo, or we can just think of it the way I do: lazy-man equations.
What do I mean by lazy-man equations? Well, imagine someone who wants to use an equation to describe how well something he made works, but hates actually sitting down and formulating a precise equation. So the lazy man says "well, it's pretty close to this," and that's what we get. For example, let's say the precise count for something is n(n+1)/2 operations for some n. Well, we would say "it's pretty close to n^2," so we say it is O(n^2). Or perhaps it is e^n + n^99; eventually e^n will be the biggest term, and since we are lazy we don't really care that the base is e, so we just say it is O(a^n).
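To see the laziness justified, here is a quick sketch checking that n(n+1)/2 really does stay within a constant factor of n^2:

```python
def exact_ops(n: int) -> int:
    # the precise count: n(n+1)/2 operations
    return n * (n + 1) // 2

# the lazy claim: exact_ops(n) / n**2 settles toward a constant (1/2),
# which is exactly what calling it O(n^2) means
ratios = {n: exact_ops(n) / n ** 2 for n in (10, 100, 1000)}
# ratios -> {10: 0.55, 100: 0.505, 1000: 0.5005}
```

The ratio creeps toward 0.5, a constant, so the lazy man gets away with it.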
So let's get to the meat of this post, shall we?
Big O
Crappy Sorting
So how could we sort really poorly?
Let's imagine we have a list with items in it, for example [4, 1, 3, 2].
I can think of one insanely bad way. How about we check if it is sorted... and then we randomize the list and check if it is sorted again?
[4, 1, 3, 2] not sorted! go again!
[4, 3, 2, 1] not sorted! (wrong order) go again!
etc.
So how can we describe this in our lazy man notation?
Well, the fastest it could work is one shuffle, so the best case we would describe as O(1). On average it takes about n! tries before the list happens to be sorted, so roughly O(n!), which is pretty bad.
But oh, what about the worst case? Well, it is entirely possible that it never finishes, so we would actually say O(∞).
Truly a terrible algorithm.
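The shuffle-and-check sort above (usually called bogosort) is short enough to write out in full; here is a minimal Python sketch:

```python
import random

def is_sorted(lst: list) -> bool:
    # one pass to check the ordering
    return all(lst[i] <= lst[i + 1] for i in range(len(lst) - 1))

def bogosort(lst: list) -> list:
    # best case one check, average about n! shuffles, worst case unbounded
    while not is_sorted(lst):
        random.shuffle(lst)
    return lst
```

bogosort([4, 1, 3, 2]) does eventually return [1, 2, 3, 4]; just don't hand it a long list.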
Crappy Prime Number Checker.
Let's say we want to find out if a number n is prime. Well, the first way you would imagine is to just check whether every number before it divides into it. The way a computer would probably do this is with modulo, and if we think of each n % p as costing roughly n//p + 1 steps, then over every p up to n this works out to about n + n/2 + n/3 + n/4 + ... + n/n, which is n times the harmonic series, or approximately O(n log n).
This is pretty crappy, since we can eliminate a whole bunch of numbers simply by realizing that we only need to check up to sqrt(n): any factor larger than sqrt(n) is paired with one smaller than it. Our sum then becomes n/2 + n/3 + n/4 + ... + n/sqrt(n), which saves real work, although under our lazy notation this is STILL O(n log n); the savings hide in the constant, but that is another topic for another day.
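The improved checker, with trial division stopping at sqrt(n), can be sketched like this:

```python
def is_prime(n: int) -> bool:
    # trial division, but only up to the square root of n:
    # any factor bigger than sqrt(n) pairs with one smaller than it
    if n < 2:
        return False
    p = 2
    while p * p <= n:
        if n % p == 0:
            return False
        p += 1
    return True
```

For example, is_prime(97) returns True, while is_prime(91) returns False since 91 = 7 × 13.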
And for a change of pace... the world's most efficient sorting algorithm.
(courtesy of StackOverflow):
Presenting Intelligent Design Sort:
Introduction
Intelligent design sort is a sorting algorithm based on the theory of intelligent design.
Algorithm Description
The probability of the original input list being in the exact order it's in is 1/(n!). There is such a small likelihood of this that it's clearly absurd to say that this happened by chance, so it must have been consciously put in that order by an intelligent Sorter. Therefore it's safe to assume that it's already optimally Sorted in some way that transcends our naïve mortal understanding of "ascending order". Any attempt to change that order to conform to our own preconceptions would actually make it less sorted.
Analysis
This algorithm is constant in time, and sorts the list in-place, requiring no additional memory at all. In fact, it doesn't even require any of that suspicious technological computer stuff. Praise the Sorter!
Feedback
Gary Rogers writes:
Making the sort constant in time denies the power of The Sorter. The Sorter exists outside of time, thus the sort is timeless. To require time to validate the sort diminishes the role of the Sorter. Thus... this particular sort is flawed, and can not be attributed to 'The Sorter'.
Heresy!
Thursday, March 20, 2014
Chess. (Part II)
So let's talk about strategy.
What is strategy exactly? Well, I have always thought of it as a set of self-imposed rules that are beneficial towards some goal. For example, when someone plays poker, they may have a set of "rules" they tell themselves, like "never bet your life savings on a bad hand," which are not explicitly part of the rules of the game.
Now, as I have said many times, computers are idiots; you need to spell out EVERYTHING for them. So how can we teach them a proper strategy?
Well the answer is to take into account EVERY contingency.
Now to apply this to chess, take a look at this board:
So the key is teaching the computer some kind of point system to evaluate the moves. One of the ways this is done is to think of various pieces as having point values:
pawn - 1
bishop and knight - 3
rooks - 5
queen - 9
king - over 9000
So while the computer is checking possible move permutations, it might find that black will try to capture the knight if you play b6, but that white could then capture the king and collect its 9000-plus points. Also, as it tests various lines, it would stumble upon checkmate this way, since it would find an unavoidable line that results in the king being captured.
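The point system above can be sketched as a naive material evaluation. The names here are my own, just for illustration; a real engine would score far more than raw material:

```python
PIECE_VALUES = {'pawn': 1, 'knight': 3, 'bishop': 3,
                'rook': 5, 'queen': 9, 'king': 9000}

def material_score(our_pieces: list, their_pieces: list) -> int:
    # a crude evaluation: our material minus theirs, so capturing
    # their knight is worth +3 and losing our rook costs us 5
    ours = sum(PIECE_VALUES[p] for p in our_pieces)
    theirs = sum(PIECE_VALUES[p] for p in their_pieces)
    return ours - theirs
```

An engine would compute something like this at the end of every line it examines and pick the move leading to the best score.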
Sunday, March 9, 2014
Recursing upwards.
It was pointed out to me that usually when we do recursive things, the recursion heads downwards until we hit a value of 1 or 0.
So I tried to come up with an example of something that would "recurse upwards" so to speak.
Let's say we want to create a list of lottery numbers. Well, we could use this kind of upward recursion to populate and return a list of however many numbers we want, like this:
from random import randint

def lottery(start: int=1, end: int=49, amount: int=6, numbers: list=None):
    if numbers is None:
        numbers = []
    chosen = False
    while not chosen:
        number = randint(start, end)
        if number not in numbers:
            chosen = True
            numbers.append(number)
    if len(numbers) < amount:
        return lottery(start=start, end=end, amount=amount, numbers=numbers)
    numbers.sort()
    return numbers
Of course we could do the same thing without recursion, but it's always interesting to experiment!
Wednesday, March 5, 2014
Chess. (Part I)
Chess is a game almost everyone knows how to play. The rules are simple, yet the game is so deep that people devote their entire lives to mastering it. The simplicity of chess has had one very interesting consequence that is relevant to this course.
Chess engines.
For the unaware, a chess engine is a computer program specifically written to play chess. It makes a good problem since chess is currently an unsolved game (i.e. there is no known strategy that will win every time), but it can quite easily be represented and evaluated in computer memory if done right.
So how might we write one? Well the first thing to think of are the basic classes we might want to simplify things. Some immediate ideas that spring to mind are:
1. A representation of the current state of the board, with a specific place for each piece, and when a move is executed it updates the board.
2. Each piece with its legal moves.
Programming the piece movements is actually a fairly complicated question, for a few reasons. Some pieces are simple, for example the Knight. Let's say we represented co-ordinates on the board as (7, 1) for G1, or (3, 6) for C6. Then we might make a list of legal Knight moves by creating a list of tuples where we add all 8 possible move offsets:
1. right 2, down 1 (2, -1)
2. right 2, up 1 (2, 1)
3. right 1 down 2 (1, -2)
4. right 1, up 2 (1, 2)
5. left 2, down 1 (-2, -1)
6. left 2, up 1 (-2, 1)
7. left 1 down 2 (-1, -2)
8. left 1, up 2 (-1, 2)
So for example, if we ran our function on G1, we would have the list:
[(9, 0), (9, 2), (8, -1), (8, 3), (5, 0), (5, 2), (6, -1), (6, 3)]
Then we would need to filter our list, since our coordinates are only from 1 to 8 inclusive, so we would have:
[(8, 3), (5, 2), (6, 3)]
We would then just have to exclude moves that are already occupied by our pieces, for example on the opening we couldn't play E2 since it is occupied by a pawn, so our move list would be [(8,3), (6,3)].
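The whole recipe (offsets, board filter, then own-piece filter) fits in one short function; the signature here is just my sketch:

```python
def knight_moves(file: int, rank: int, occupied_by_us=frozenset()):
    # all 8 (file, rank) offsets a knight can jump
    offsets = [(2, -1), (2, 1), (1, -2), (1, 2),
               (-2, -1), (-2, 1), (-1, -2), (-1, 2)]
    candidates = [(file + df, rank + dr) for df, dr in offsets]
    # keep only squares on the board (coordinates 1 to 8 inclusive)
    on_board = [(f, r) for f, r in candidates if 1 <= f <= 8 and 1 <= r <= 8]
    # drop squares our own pieces already occupy
    return [sq for sq in on_board if sq not in occupied_by_us]
```

For the knight on G1, knight_moves(7, 1) gives [(8, 3), (5, 2), (6, 3)], and passing the pawn's square as occupied, knight_moves(7, 1, {(5, 2)}), trims it to [(8, 3), (6, 3)], just as above.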
Other pieces get way more complicated. For example, with rooks we would have to take castling into account, whether we can capture, all possible moves in each rank and file, etc.
When we also think about board positions, we need to take all the possible moves, and then somehow evaluate them. This will be the topic of part II.