My weekend was fantastic!
Unfortunately for the tech side of me, I didn't get any time to work on my turret program (check out my other blog to see how that's going), but I did end up enjoying the company of a young woman. She and I hung out the other weekend and had fun, and decided to hang out again. It was wonderful, even if the day before that was super crappy. My friend was roaring drunk all day and stumbled into my room when I got home from work and was trying to relax, which was unfortunate for the whole relaxing thing I was trying to do. But, oh well, such is life. I cut my hand open on something in the back room at work, which kinda sucked, not to mention dealing with power outages and such at the gym. Phew, but after all that, I still had a wonderful night and Sunday morning.
And I did get some reading done in my book, so things are going well. Unfortunately, I don't think I'm going to be able to finish all the books I got from the library, considering I'm only a third of the way through the first of the four of them. Sigh, and I really wanted to read Natural and Artificial Intelligence by Armand M de Callatay...
Originally Musings of a College Student, which were the rantings and ideas of a bored college student, including information about the various programs I create while bored, and the occasional video game suggestion when I stumble upon a good gem. Now, it's Observations of an Intellectual Moron: the location of thoughts and whimsies I want to say but don't have any context to bring up in conversation, and a place for me to vent about my life so I can keep my day-to-day free of my troubles.
Monday, April 18, 2011
Thursday, April 7, 2011
Reading
I finished my book, Gödel, Escher, Bach by Douglas Hofstadter, yesterday, and started in on The Essential Turing, which I picked up from the library a few days ago. I also have Natural and Artificial Intelligence by de Callatay, as well as a book on non-Euclidean geometry by Coxeter, and Knuth's first volume of The Art of Computer Programming. And I have a few other books that I got a while back that I need to read still.
So much reading so little time!
Monday, April 4, 2011
The halting problem and AI
Today I thought about what I would do if I were diagnosed with an incurable disease that would waste my body away to nothing, such as Lou Gehrig's disease (the one Stephen Hawking has). I thought about it, and decided I would dedicate myself to my studies and try to accomplish as much as I could in my time, and hope that people would say: if he had lived longer, he would have accomplished many more great works. In all honesty, I hope people say that, disease or not.
I would like to become famous, not famous in the sense that everyone knows me, but famous in my field. That I might become someone whom people look back on as a pioneer. Like Turing. I mean, besides computer scientists and engineers, most people don't know who he is. But what he did was amazing, not to mention his work with Church. There was new ground to be covered then, and there is still now, but it seems more elusive. And yet, somehow I find myself with my motto on my sleeve, ready to tackle the very problems that my predecessors have struggled with. I want to be a host of all their knowledge, a sponge that absorbs what they know, but that when wrung out gives forth new information, the data in a different form, like foam from water. But more so than a sponge: I want to be an established member of society, and perhaps help usher in a new era.
Lofty goals, I know. In case you're wondering what I meant by my motto in that last paragraph, this is it: "It's the questions you ask that are important." I don't remember where or when I picked it up, but I know that it's helped define how I approach problems in my life. Sometimes the most elegant solution to a problem isn't the direct approach, but a more obscure question. I find myself pondering AI often, and I wonder if perhaps we aren't coming at it all wrong. We define intelligence as human intelligence. Obviously, if a computer or robot could perform all the things we could do, then we may dub it intelligent (or not really, but more on that later), but who's to say that we couldn't define intelligence by other measures? For example, artists and mathematicians are two separate breeds (most of the time), but we might call either intelligent. Perhaps we would more likely attribute this feature to a person of math before one of art, but no doubt you will agree with me that a gifted artist is equally as intelligent as a gifted mathematician? Just in different ways; it's not the way they perform that is intelligent, in fact, it's how they approach their fields that gives them intelligence.
But I digress. At the core of that is them building off what they've learned, and learning is what we humans do, whether it be through trial and error, lecture, or self-analysis. I might say that this is what makes us intelligent as a species: that we can learn to approach numerous problems from equally numerous angles, use systematic ways to derive solutions to problems, but at the same time throw the rule book to hell and just try things randomly. It's a bit hard imagining a computer being able to do all these things. A computer can be made to 'learn', but most of the time the environment of whatever algorithm or code is running is a precise one, and probably not as complex as a real system.
I was just thinking about this stuff today is all. It's an interesting thought, I suppose. In random other news, I've been trying to figure out if I'm even attracted to people anymore. I'm having a bit of a love affair with computers and semantics and symbol manipulation. It's all so fascinating and I can't get enough of it. I'm also wondering if I can't get a hold of Turing's original essay on computable numbers, where he proves the halting problem is unsolvable. Which I was thinking about the other day as well. I mean, if a program was known to take x steps, then couldn't it count down the steps as it went, and when it got to the second-to-last step it would say: I will terminate next step? At first, this appealed to me. But then I realized, as you added in keeping track of those steps, you would end up increasing the number of steps taken. I suppose a pre-count that takes that into account could fix it, but still. Consider this:
I will stop in 2 steps
I will stop in 1 step
I will stop in 0 steps
Stop
Looks good, right? Not really: the stop is the next line, which is another step. You could do this:
I will stop in 2 steps
I will stop in 1 step
Stop
and it would be correct. But how exactly are you going to generalize this?
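Turing's answer, as it happens, is that you can't, ever, in full generality. His argument can be sketched in a few lines of Python, assuming for the sake of contradiction a magic halts function (which is exactly the thing that can't exist):

```python
def halts(program, data):
    """Hypothetical oracle: True if program(data) eventually stops,
    False if it runs forever. Turing showed no such function can exist."""
    raise NotImplementedError("no such oracle")

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts about
    # this program run on its own source.
    if halts(program, program):
        while True:      # oracle says we halt, so loop forever
            pass
    else:
        return           # oracle says we loop, so halt immediately

# Ask troublemaker about itself: if halts(troublemaker, troublemaker)
# returned True it would loop forever, and if it returned False it
# would halt -- so the oracle is wrong either way.
```

The names here are my own invention; the contradiction is the whole proof.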
A program P takes S steps. If P is given code to keep track of this and update itself, then the number of steps is increased; the amount of the increase is given by the complexity of the added code itself. However, by adding in the counting, did we not make the code more complex? To take this new complexity into account, we must make P more complex... I'm sure you can sense where this is going. For simple programs this is easy; take for example this python snippet:
for i in range(10, -1, -1):
    print("I will stop running after the character after the colon is 0 :", i)
At first, this looks good. After all, it told you when it would stop, right? And when the number after the colon is 0, it really does stop. Or it looks like it in the output. But as you see by the code, this is not the case. Let me switch to another language to make this clearer:
for (int i = 10; i >= 0; i--)
    std::cout << "I will stop running after the character after the colon is 0 :" << i << std::endl;
As opposed to Python's range, this for loop is a little more explicit, and I can tell you more just by looking at this lower-level C++ code. First off, the output is the same, but: the loop runs, and after a few runs i is equal to 0. The loop runs one more time, printing out its output. But then it tells the output stream it's ending a line, hops back to the for loop condition after decreasing i by 1, checks the condition i >= 0, finds it to be false, and hops out.
I'd say that the program is lying if it tells me it stops running after it's printed out 0. It might not have been as obvious in the Python code as in the C++, because Python is a bit more abstract than lower-level code. Those two snippets make my point, though: as we increased the complexity, the work to count the steps and decrement increased as well. Even a one-line program of "I will stop now" still must call the I/O routines to show this to the user, return from the main function of the program, release control back to the host computer, and do numerous other small tasks such as garbage collection. What about program lines?
After all, when we as coders boast "I wrote 10,000 lines of code yesterday," we are quite proud of our achievement most of the time. But say we try to do what we did before?
In Java,
public class T {
    public static void main(String[] args) {
        System.out.println("this program will terminate in 2 lines of code after this one");
    }
}
Indeed, we may argue this to be true. But at the same time we can argue against it. You might think this absurd, but it's not too hard to ruin an argument built on an ambiguous term like "line." When we say line in this context, we assume it to mean lines of the text we have written. But how many machine instructions are in each line of a program? And if we placed those onto lines, how many would there be?
I suppose you can say I cheated, as I took the code out of its original bindings. But after all, when we say the program terminates, do we mean we have run out of code to run, or we have run out of things to do? Or that we have told the computer we are running our program on that this program is done? It's all about the questions we ask, isn't it?
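Python can actually show us this gap between written lines and machine-level steps. The standard dis module disassembles a function into its bytecode instructions; even a single source line turns into a handful of them (the function below is just a made-up one-liner):

```python
import dis

def one_liner(x):
    return (x + 1) * 2   # one written line of code

# Count the bytecode instructions hiding behind that single line.
instructions = list(dis.get_instructions(one_liner))
print(len(instructions))   # more instructions than source lines
```

The exact count varies between Python versions, which rather proves the point: "how many lines is this program" has no one answer.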
One more example!
Say we want to know when our program ends. We might run it a few times, looking at how long it takes to execute. This won't work, because the speed of the program depends on more than the program itself: it also depends on the context it's run in, that is, the architecture and specifications of the computer the program is housed in. No?
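You can watch this happen with Python's timeit module: timing the same code over and over on one machine already gives a different number each run, before we've even changed computers (the function being timed is an arbitrary stand-in):

```python
import timeit

def busywork():
    return sum(range(10_000))

# Time the same function five times; the readings differ from run
# to run on one machine, and far more between different machines.
samples = [timeit.timeit(busywork, number=200) for _ in range(5)]
print(min(samples), max(samples))
```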
We can try to work around this using Big-O notation, a common tool we use to measure running-time complexity and the upper and lower bounds of algorithms. For example, say we have an external hashing program that uses disk page I/O. Each disk page read/write operation takes 5 milliseconds or so, every time. We say an external hashing algorithm that is based off keys is O(1), because it always takes constant time for the algorithm to do its work on a single page. This is nice to know; in fact, given that you must do 5 read/write operations, you would instantly know that it takes 25 milliseconds to execute those instructions. Halting problem solved, right? Nope. How would you tell a program that it's going to stop after that time? You can't.
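To make that arithmetic concrete, here's a toy sketch of the prediction (the 5 ms page cost and the file size are just the figures assumed above, not measurements):

```python
PAGE_IO_MS = 5     # assumed cost of one disk page read/write
NUM_PAGES = 128    # size of the imaginary hash file

def page_for(key):
    # External hashing: the key alone names the page to touch, so a
    # lookup always costs exactly one page I/O -- that's the O(1).
    return hash(key) % NUM_PAGES

def predicted_ms(num_operations):
    return num_operations * PAGE_IO_MS

print(predicted_ms(5))   # -> 25, the estimate above
```

Knowing the total from the outside, though, still gives the running program itself no way of announcing it.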
First off, you can't really tell a program anything that has semantic meaning to you anyway. Why? Because computers are mainly designed to work on syntax and symbols. On a high level these symbols might have semantics, and might even dictate some behavior of the machine itself. But still, at its core, the computer is just plugging and chugging machine code. It's just following instructions without thinking, without knowing that it might be able to do something a different way. It's programmed to do one thing, and one thing only.
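That "plugging and chugging" can be made literal. Here's a made-up three-rule machine in Python, a crude cousin of a Turing machine: it flips every bit on its tape and halts, and at no point does it understand anything; it only matches symbols against a rule table:

```python
# A minimal Turing-style machine. Each rule maps (state, symbol) to
# (symbol to write, head movement, next state). "_" marks a blank cell.
RULES = {
    ("scan", "0"): ("1", 1, "scan"),   # write 1, move right, keep scanning
    ("scan", "1"): ("0", 1, "scan"),   # write 0, move right, keep scanning
    ("scan", "_"): ("_", 0, "halt"),   # hit the blank: stop
}

def run(tape):
    tape = list(tape) + ["_"]
    state, head = "scan", 0
    while state != "halt":
        symbol, move, state = RULES[(state, tape[head])]
        tape[head] = symbol
        head += move
    return "".join(tape).rstrip("_")

print(run("1011"))   # -> 0100
```

The machine does exactly what ribosomes do with DNA in my analogy: follow a table, symbol by symbol, with no idea what any of it means.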
So, if we can't get semantics involved in our computing, how could we ever get AI working? I haven't fully thought out all my ideas, but when you think about it, all the semantics we have, all the meaning and thoughts we have, are based off of an intensely complex set of instructions: our DNA and the ribosomes that operate on it. Ribosomes are like little Turing machines! It's great. Anyway though, it's time for bed I believe, so I hope that helped stimulate some thought in you all.