Thursday, 15 May 2014

Exploring Artificial Intelligence - Can Machines Think?

Artificial Intelligence is a topic of both Computer Science and Philosophy, and it leads us to ask what really makes us human, and whether we are just complex biological machines. Alan Turing first posed the question in his 1950 paper, Computing Machinery and Intelligence. The paper is much more accessible than his earlier work on Computability Theory and computable numbers, which are the real numbers that can be calculated by a terminating algorithm; the set of computable numbers is also countable.

This blog post won't be long, and I'll probably conclude with a link to my OneDrive account, which has a copy of Alan Turing's paper. I'm mostly going to be talking about the philosophy of Artificial Intelligence.

There is one concept which has been of great interest to philosophers since the beginning of civilization, and that is consciousness: the ability to be aware of our own existence. This concept of consciousness is what separates us from being a very complex biological machine. In essence, our behavior is typically determined through learning and our own past experiences. Yes, we can incorporate this concept of learning into machines and measure how well they learn, but a machine will never be aware of its own existence naturally in the way a human is, unless we find some method of enabling it to be aware of its own existence. However, that would be cheating, since the machine would still be relying on predetermined behaviors created by us.

A machine is usually regarded as being intelligent if it is able to imitate a human successfully (that is, it is able to play the imitation game from Turing's paper); in other words, a human is unable to distinguish the machine from another person. Turing outlines in his paper the similarities between a person and a computer. However, can a machine think? What is thinking? Computer programs are based upon a defined set of rules which can't be broken, unless the rules weren't correctly defined to begin with. So is the machine thinking about a problem, or just following a defined set of rules?

Behaviorists will suggest that all thoughts are based on a defined set of rules, and that these rules give the false impression of free will and thought. This doesn't explain the idea of consciousness and what exactly creates it. I believe machines will be able to solve mathematical problems and, in some sense, think, but they won't ever possess that strange and rather beautiful consciousness. After all, most robots aren't even able to move around objects intelligently yet.

At the moment, most machines can only give Boolean-valued answers to decision problems; they aren't able to consider another kind of answer, just a simple "Yes" or "No" produced by some algorithm. There are learning algorithms, though I don't believe you can consider that true thought. There is a popular idea that a machine can only do what it is told to do, and this seems to be very much true for almost all machines as of now.
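As a toy illustration of that point (my own sketch in Python, not something from Turing's paper), here is a decision procedure for the question "is n prime?". It can only ever answer Yes or No by mechanically following its rules:

```python
def is_prime(n: int) -> bool:
    """Decision procedure: answers the yes/no question "is n prime?"."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False  # "No": a divisor was found
        d += 1
    return True  # "Yes": no divisor up to sqrt(n) exists

print(is_prime(97))  # True - the machine answers Yes or No, nothing in between
```

However clever the rule set, the procedure never reflects on what it is doing; it just executes the rules.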

I would suggest reading the paper Computing Machinery and Intelligence, which is available here.

It would be nice to hear some of my readers' thoughts on the concept, and any ideas they may have about Machine Learning.

Friday, 9 May 2014

Deterministic Polynomial Time (P) and Non-Deterministic Polynomial Time (NP)

This is going to be a short description of the difference between P and NP time complexity classes, and what the time complexity classes are based upon. These are used for Computational Complexity and Algorithm Analysis. Computability Theory is mainly concerned with what is computable, rather than how feasible that computation is.

The P and NP time complexity classes are commonly based upon Deterministic and Non-Deterministic Turing Machines. These machines use similar mathematical notation to other models of computation, but are slightly more complex and have a larger application to other areas of Computer Science, and that is one of the reasons why I prefer to look at Turing Machines rather than Finite-State Automata.

The P complexity class is the class of decision problems (problems with a Yes or No answer on some input) which are solvable in Polynomial Time on a Deterministic Turing Machine. It is given by P(n), or just P, where n is the size of the input. On the other hand, NP is the complexity class of decision problems which are solvable in Polynomial Time on a Non-Deterministic Turing Machine, and is therefore given by the NP(n) or simply NP notation.
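One way to make the NP definition concrete: an equivalent characterisation says a problem is in NP if a "Yes" answer can be checked in polynomial time, given a suitable certificate (intuitively, the Non-Deterministic machine's guess). Here is a minimal Python sketch of such a verifier for the SUBSET-SUM problem; the function name and certificate format are just my own choices for illustration:

```python
def verify_subset_sum(numbers, target, indices):
    """Polynomial-time verifier for SUBSET-SUM.

    `indices` is the certificate: it claims which positions of `numbers`
    form a subset summing to `target`. Checking the claim takes linear
    time, even though finding such a subset appears to be hard.
    """
    if len(set(indices)) != len(indices):
        return False  # no index may be used twice
    if any(i < 0 or i >= len(numbers) for i in indices):
        return False  # every index must be valid
    return sum(numbers[i] for i in indices) == target

print(verify_subset_sum([3, 7, 1, 8], 11, [0, 3]))  # 3 + 8 = 11 -> True
```

The easy-to-verify / hard-to-solve gap is exactly what the P versus NP question is about.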

In general terms, P is considered the feasible complexity class for decision problems. With both classes, the running time of the algorithm is bounded by a polynomial in the size of the input, on the respective machine model.

[Graph: polynomial growth curves; the Y-Axis is Time, and the X-Axis is the Input Size.]

NP-Completeness and NP-Hard

Given a decision problem A, the problem A is considered NP-Hard if every problem B belonging to NP is reducible to A in polynomial time. If A is NP-Hard and also belongs to NP, then A is considered NP-Complete. No polynomial-time algorithm is known for any NP-Complete problem; if one were found, it would imply P = NP. NP-Complete problems may be considered the hardest problems within NP.
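To give a feel for why these problems are regarded as hard, here is a brute-force Python sketch (my own example, using SUBSET-SUM as a well-known NP-Complete problem) that decides the problem by trying every subset. It is correct but takes exponential time, since there are 2^n subsets to try in the worst case:

```python
from itertools import combinations

def subset_sum(numbers, target):
    """Decide SUBSET-SUM by exhaustive search over all 2^n subsets."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return True   # found a witness -> "Yes"
    return False              # no subset works -> "No"

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # 4 + 5 = 9 -> True
print(subset_sum([3, 34, 4], 6))            # no subset sums to 6 -> False
```

Checking a single candidate subset is quick; it is the sheer number of candidates that makes the search blow up.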
