Saturday, February 12, 2011

AI and the UAV


The UAV
Modern warfare has become a totally different ball game, and the word 'automation' has gained a whole new level of contextual relevance in it. It is no longer about who fights better, who fights longer, or who has a slight incremental advantage in weaponry and personnel; it is about who fights smarter. The side with the technological edge has dominance and control on the war front.
The UAV, short for Unmanned Aerial Vehicle, is a clear-cut example of how better technology ensures military superiority. Broadly defined, a UAV is an aircraft with autonomous control over its flight systems, whose other functions can either be controlled by an operator at a ground station or run autonomously as well.
The UAV was created with the main intention of simulating the actions of a pilot in the cockpit without the pilot actually being present. It was a creation that satisfied a real need of the military: covert operations involving air strikes sometimes get too dangerous, and sending in a piloted aircraft could put the pilot's life at risk. So inventors asked: instead of sending pilots into the war zone, why not simulate what a real pilot does, and somehow transfer the pilot's real-time decision-making ability and tactical intelligence into a computer?
 
The UAV as an AI
This is how the UAV, a machine possessing artificial intelligence, originated. The aim of this article is to compare the UAV, the machine with the AI, against what it simulates, the actual pilot in the cockpit, and to ask where to draw the line between simulative technology and the real deal.
UAVs basically mimic the behavior of human pilots in combat situations. Though their lethal functions (missiles and guns) are sometimes controlled by an operator on the ground, flight-related decisions are made by the computers on the aircraft in real time. This means they fly on their own, judging their surroundings and making decisions without human intervention; this is the artificial intelligence embedded in them. In present-day warfare these systems have been immensely successful. In the war against terrorism especially, UAVs have reportedly destroyed numerous Al-Qaeda bases and eliminated many members of the organization's top brass. The main reason for their success is that these machines are designed to mimic the actions of top-notch pilots. But there is an important difference: however brilliant and professional a human pilot sent into the same situation may be, there is a constant life-risk factor playing at the back of the pilot's mind, and this fear affects the quality of the decisions the pilot makes. In the case of UAVs this risk factor is absent, so the combat decisions made can be of much better quality, and there is no need to stay on the back foot, since no lives are at risk. This becomes especially relevant when the enemy, as in the case of terrorists, is made up of fanatics ready to go to any extent, even sacrificing their lives, to substantiate their cause.

DRAWING THE LINE
So the question to be asked here is: how much of the pilot's tactical function can UAVs actually replicate, and where do we draw the line and say that only so much of modern warfare can be mechanized, that at some junction or other human intervention has to take place? After all, war isn't some sort of video game; there are lives hanging in the balance. For the last 5,000 years humans have had a complete monopoly over warfare, in the sense that it was humans themselves who went into the war zone and fought for what they believed in. With the introduction of these autonomous 'warbots', the present generation is witnessing a complete change in the art of war. Until now, military superiority was about incremental changes: who had higher firepower, who could build a tank with slightly better guns. Now the situation has completely changed, because we are changing 'who' exactly is fighting the war. We are at an absolute turning point in human history as far as warfare is concerned, because we humans are losing control over the one thing thought to be entirely in our control.

THE ANTITHESIS OF THE MECHANIST APPROACH 
The points above matter because, at the end of the day, there is a reason human command is needed in war. War is never black and white, so conscious judgement calls have to be made, and the bottom line is that present-day AI, and that of the near future, simply cannot replicate this. All these machines want to do is seek and destroy their targets, period. They are very unlikely to call off a strike because the civilian casualties and collateral damage involved are too high; indeed, it has been reported that in Afghanistan, for every terrorist eliminated by drones (as they are known in the military), some ten civilians lose their lives. Another argument is that since these machines have no conscience, they do not see surrendering as an option. This is speculative, but imagine what warfare could become if both sides had such machines: lacking a conscience, machines from both sides would battle it out to the end, and the collateral damage would be mind-numbing.
So, in conclusion, there are many similarities between AI systems and the human functions they replicate, and it is very tempting, at least in the case of war, to outsource all the important missions to UAVs. But there are also many dissimilarities, and these should be kept in mind, because after all, war isn't a video game.



Thursday, February 10, 2011

The Twin Face of AI Research – An analysis

Fascination with life, its defining characteristics, and attempts to imitate and reproduce them have been among the more scientific pursuits of human civilization in recent times. Since the clash of the mechanist and Cartesian views of life in the eighteenth century, there has existed a contradictory pair of expectations of automata, and specifically of Artificial Intelligence. On one hand, it is hoped that the processes that define life and intelligence in natural organisms, and more enthusiastically in humans, can be simulated through mechanical means; the underlying belief is that these processes are in themselves mechanical, and hence it is reasonable to assume they would lend themselves to simulation. On the other hand stands the opposing viewpoint: that the simulation of intelligent processes such as speech, logic and the various everyday activities that require intelligence can never be exact enough to be considered intelligent or genuine, and hence cannot be considered a surgical exposition of how these processes actually work.

It Feels!!
Recent advances in Artificial Intelligence serve as examples that highlight these ongoing questions. I would like to discuss the example of Kismet, a sociable robot, to illustrate these issues.
Kismet is an expressive robot that communicates through methods considered natural to human beings (i.e. via speech, vocal modulation and facial expression) and can also take cues and input through these same channels. Kismet was developed by the AI group at MIT and showcases the advances made in robotic understanding of subtle human expressions, or what has more recently come to be known as 'affective computing', whose goal, in brief, is to make the machine interpret the emotional state of humans and adapt its behavior to them, giving an appropriate response to those emotions.

Watch this Video:

We can observe that the robot responds not only to what is being said (in this case, scolding) but also to the tone of voice, the speaker's facial expression and other factors, giving weightage to each before arriving at an appropriate response: drooping its face and eyes, the typical response we would expect from a human child on being chided. Thus, it can readily be seen that this robot is sensing and reacting to natural communication from a human. Can this be taken as a sign of intelligence? Is it right to say that Kismet is capable of 'intelligent' emotional response? Well, consider this: Kismet makes use of all the components (facial expression, tone, and the words) to make sense of how to respond, but what if one of these component inputs were missing, or worse yet, counter-indicative of a different emotion? How would the robot interpret it then? What about body language and the context of the conversation? (We have all been in situations where we were deliberately thrown off by misleading emotional cues from the other person but were still able to figure out the actual underlying emotion from previous interactions and various other clues not directly related to the conversation.)
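
To make the idea concrete, here is a minimal Python sketch of how weighted fusion of emotional cues might look. This is not Kismet's actual architecture; the cue names, weights and response table are all illustrative assumptions.

```python
# Illustrative sketch only -- not Kismet's actual architecture.
# Each perceptual channel votes for an emotional reading; a
# hand-picked weight says how much we trust that channel.

CUE_WEIGHTS = {"words": 0.3, "tone": 0.5, "face": 0.2}  # assumed weights

def interpret(cues):
    """cues maps each channel to its reading, e.g. {'tone': 'scolding'}."""
    scores = {}
    for channel, reading in cues.items():
        weight = CUE_WEIGHTS.get(channel, 0.0)
        scores[reading] = scores.get(reading, 0.0) + weight
    # Pick the reading with the most weighted support.
    return max(scores, key=scores.get)

def respond(emotion):
    # Map the interpreted emotion to a bodily response, as Kismet
    # droops its face and eyes when it senses it is being chided.
    responses = {"scolding": "droop face and eyes", "praise": "perk up"}
    return responses.get(emotion, "neutral gaze")

print(respond(interpret({"words": "scolding", "tone": "scolding", "face": "praise"})))
# -> 'droop face and eyes': tone and words outvote the conflicting face cue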

Two sides to the Tale
One answer is to argue that research in this area is nascent, and that such subtle exceptions, the rules within rules of emotional reading, will unknot themselves in time, which seems reasonable. Another way to look at this is to say that emotion is not by itself a standalone quality of the mind but is refined by the experiences and stimuli we face over time. On this view, efforts such as Kismet's to simulate emotional response as a singleton only portray the topical emotional response (a mimic, if you will) and do not show the true processing of the emotion: its impact on the course of the communication, on the style of response to future conversations and other interactions in the long run, and broadly on the personality and psychology of the being (in this case, the robot) as such.

The complications of Simulation
We have seen that Kismet is capable of responding in an 'emotional' manner to human interaction, but this raises other questions about the simulation of life through artificial machinery: is it possible to do these things as efficiently as organisms do? To begin to answer this, consider the amount of circuitry required to construct this emoting curiosity: two dedicated computers, four cameras and four lip actuators, to say the very least. This much circuitry and complexity, to only partially achieve a task as simple as reading faces and tone, something that comes naturally to humans, seems to show that mimicking aspects of life and human intelligence through mechanical means is at best an expensive hobby few can choose to enjoy. But taking that view forgets that the workings of the human mind are not fully understood, and that experiments of this sort can help shed light on how the actual mind works, test current hypotheses, and may even help in turn to create better designs that come closer to approximating human emotional reality and, in general, the mechanisms of intelligence.

Wrapping it up
In conclusion, even applications that seem to have come a long way since the Defecating Duck and other eighteenth-century amusements, rather than demonstrating how close we are to achieving AI, seem to show how far off we still are from simulating intelligence and the other complex aspects of life's machinery. This may give the impression of a unified march by scientists down the road to failure, but what one needs to see is that every simulation opens up questions hitherto unasked, bringing new aspects and defining details into view, while leading to new connections being discovered between far-flung problems and consequently helping to uncover a unifying theory of organic intelligence.

Barath A

References:
Kismet at the MIT AI Group
The Defecating Duck, by Jessica Riskin
Wikipedia entry on affective computing

Intelligence and the Chatterbots

Introduction
The question of whether machines can accurately simulate human life has fascinated man for the past two centuries. Beginning with simple automatons in the latter half of the eighteenth century, such as Vaucanson's duck and his flute player, people have been creating ever more ingenious machines and artifacts, attempting not just to demonstrate the inner workings and processes of animals, but also to gain a deeper understanding of what exactly it is that separates man from machine. A common answer to this question is 'intelligence', a quality associated with learning, reasoning and an aptitude for grasping truths. Computers over the last fifty years have exhibited several of these qualities, perhaps most famously in the loss of chess grandmaster Garry Kasparov to the supercomputer Deep Blue in May 1997. But wasn't the computer just following a sequence of step-by-step instructions? Could this truly be considered 'intelligence'? In 1950, a scientist came up with a definitive criterion for intelligence (no, it wasn't the IIT-JEE). It is called the Turing Test.

Turing Test
In his seminal paper titled 'Computing Machinery and Intelligence', Alan Turing considered the question of whether machines could think. Straying from the usual approach to such a question, which involves formally defining 'machine' and 'intelligence', he instead asked: 'Can machines do what we (thinking entities) can do?'. The test he proposed to decide this proceeds as follows (taken from Wikipedia):

"A human judge engages in a natural language conversation with one human and one machine, each of which tries to appear human. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. In order to test the machine's intelligence rather than its ability to render words into audio, the conversation is limited to a text-only channel such as a computer keyboard and screen."

The above test has proven to be influential and has also been widely criticized, but is an essential concept in the philosophy of artificial intelligence.
Turing predicted that by the year 2000, computers with up to 120 megabytes of memory (we now have 3000 times that much) would be able to fool thirty percent of human judges in a five-minute test. To date, no computer has passed the test; in fact, there is a standing cash prize (the Loebner Prize) for the first computer to do so. This goes to show how effective a metric Turing's test is for defining intelligence, and just how far computers still have to go to be considered smart (let alone take over the world).

The First of Many

Partly out of an attempt to pass Turing's test, and partly just for the fun of it, there arose from the 1960s onward several programs that tried to cross this first human-computer barrier: language. These programs were usually simplistic in design, relying on large databases, string-matching algorithms and the formal rules of English grammar to interact convincingly with humans. While most were woefully inadequate, some grew tremendously popular. Perhaps the most famous such program was Joseph Weizenbaum's ELIZA, a simulation of a psychotherapist. Using almost no information about human thought or emotion, ELIZA sometimes provided a startlingly human-like interaction. It was as unnerving to many people as Vaucanson's duck or the Turk had been in their eras, and it was the first of many chatterbots. Over the years, better and cleverer chatterbots such as PARRY and Jabberwacky have emerged, and ELBOT, for one, missed Alan Turing's 30% threshold by a whisker in October 2008.
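
To see how little machinery such a chatterbot needs, here is a minimal ELIZA-style sketch in Python. The rules and pronoun reflections are invented for illustration; Weizenbaum's actual script was far larger and used ranked keywords.

```python
import re
import random

# A toy ELIZA-style rule: (pattern, list of response templates).
# These rules are made up for illustration.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)",   ["Why do you say you are {0}?"]),
    (r"my (.*)",     ["Tell me more about your {0}."]),
]

# Pronoun 'reflection' so that "my exams" comes back as "your exams".
REFLECT = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(text):
    return " ".join(REFLECT.get(w, w) for w in text.split())

def eliza(utterance):
    for pattern, templates in RULES:
        match = re.match(pattern, utterance.lower())
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return "Please go on."  # default when no rule fires

print(eliza("I feel nervous about my exams"))
# e.g. -> "Why do you feel nervous about your exams?"
```

The program has no model of anxiety or exams at all; it pattern-matches and echoes, which is precisely why its occasional human-like turns were so unnerving.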

Chat with ELBOT here (it's quite fun actually):
http://elbot_e.csoica.artificial-solutions.com/cgi-bin/elbot.cgi?START=normal



Turing Tests in daily life
CAPTCHAs are a kind of reverse Turing Test, one where a human has to convince a computer that he is indeed a human. In fact, CAPTCHA is an acronym for Completely Automated Public Turing Test to tell Computers and Humans Apart.



The Turing Test has become commonplace in computer jargon, and frequently features in popular webcomics, such as xkcd.


Is this really intelligence?
Over the years, there have been several criticisms of the Turing Test claiming that it does not actually measure intelligence. Perhaps the most famous is John Searle's Chinese Room argument, in which he asserts that understanding is essentially different from symbol manipulation: a machine passing the Turing Test in Chinese is merely simulating the ability to understand Chinese and does not literally understand it (the actual argument is more rigorous and formal than that). Others reason that human behaviour and intelligent behaviour are not exactly the same thing, since some human behaviour is unintelligent and some intelligent behaviour is inhuman. For these reasons, the critics argue, the Turing Test is not really relevant.

Final take
In my opinion, intelligence is too complex and abstract a concept to be defined in a single line or using a set of symbols. We can merely attempt to learn more about its nature through experimentation with the different aspects of intelligence that we perceive around us. Jessica Riskin is right - from Vaucanson's Duck to Elbot, our best tool to understand human life has remained the same - simulation.

References:
[1] The Defecating Duck, or the Ambiguous Origins of Artificial Life - Jessica Riskin
[2] http://en.wikipedia.org/wiki/Artificial_intelligence
[3] http://en.wikipedia.org/wiki/Turing_test
[4] http://en.wikipedia.org/wiki/ELIZA
[5] http://www.elbot.com/
[6] http://en.wikipedia.org/wiki/Chinese_room

Fun facts:
Alan Turing, the famous cryptanalyst, logician and computer scientist, had an Indian connection: his father served in the Indian Civil Service in Orissa, where Alan was conceived, though he was born in London.
He cracked the complex Enigma code in World War 2, helping the British intercept and decode secret German communications.
In 1936, he solved a famous problem of computer science by proving that there exist 'undecidable problems': problems that computers cannot solve, even given infinite time and memory.
His powerful abstraction of the Turing Machine is central to theoretical computer science.
He committed suicide (find out why) 2 weeks before his 42nd birthday, and one of the greatest scientists of all time was lost to us prematurely.

Life and Artificial Intelligence - with reference to Sixth Sense

Jessica Riskin’s article, “The Defecating Duck, or, the Ambiguous Origins of Artificial Life”, has left us questioning our understanding of life and machinery. Purists would say that it is impossible to compare life and machinery as anything manmade simply cannot match the inherent intelligence of biological life. But some examples of the so-called “Artificial Intelligence” in today’s world boldly claim that the gap between life and machines can indeed be bridged. This article provides a sociologist’s insight into the world of Artificial Intelligence.

Sixth Sense – an introduction

Sixth Sense is a versatile and intuitive wearable gestural interface developed mainly by Pranav Mistry, a PhD student at the Media Lab of the Massachusetts Institute of Technology (MIT). This critically acclaimed marvel of engineering is one of the most compelling examples of Artificial Intelligence in the modern world. Sixth Sense consists of a camera, a portable media projector, a small portable computer and coloured finger caps that let the camera track hand gestures. The device is portable and can be worn around the neck. It functions as a 'digital assistant', providing its user with information on essentially whatever the user sees or hears. Please watch this video to see for yourself.



So, it seems that this device can really 'sense' its surroundings and 'guess' what information the user is looking for. On seeing a book cover, it shows reviews of the book. On seeing a plane ticket, it shows information about the flight. It can even look at a roll of toilet paper, quickly search the web, and tell you whether it is eco-friendly. It can look at a map and give weather statistics for various places. And it can read hand gestures to zoom in and out of a map, take pictures, or draw on a wall.
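
For the curious, here is a rough Python sketch (using the OpenCV library) of one low-level ingredient of such an interface: locating a coloured finger cap in a camera frame by colour thresholding. The HSV range below is an assumed value for a red cap, not the one Sixth Sense actually uses.

```python
import cv2
import numpy as np

# Minimal sketch of one ingredient of a Sixth-Sense-style interface:
# finding a coloured finger cap in a camera frame by colour threshold.
# The HSV range is an assumed range for a red marker, not the values
# Pranav Mistry's device uses.

def find_finger_cap(frame_bgr, lo=(0, 120, 120), hi=(10, 255, 255)):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo), np.array(hi))  # pixels in range
    moments = cv2.moments(mask)
    if moments["m00"] == 0:          # no cap-coloured pixels found
        return None
    # Centroid of the coloured blob = the fingertip's (x, y) position.
    return (int(moments["m10"] / moments["m00"]),
            int(moments["m01"] / moments["m00"]))

cap = cv2.VideoCapture(0)            # default webcam
ok, frame = cap.read()
if ok:
    print("fingertip at:", find_finger_cap(frame))
cap.release()
```

Tracking these centroids across frames, one per coloured cap, would give the gesture trails that the real system maps onto commands such as zooming a map or taking a picture.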

Is this Artificial Intelligence?

This kind of behaviour seems convincingly intelligent. I mean, who would have thought some 'machine' you can wear around your neck could actually understand what you want to know without you telling it or typing it in somewhere? It is like having a (human) personal assistant who knows what you need and when, and arranges for it, only more efficient. One can always be sceptical and argue that this is not intelligence but rather a big, complicated program, with a lot of image and sound processing, at work. But the level of intuitiveness, versatility and ingenuity that Sixth Sense presents is way above that of most other machines we know of. This forces us to believe that this machine has a certain amount of 'artificial intelligence'.

Artificial Intelligence – how intelligent?

Having accepted the existence of AI, our next question is: can AI, after much improvement, development and sophistication, eventually equal human intelligence? Or is human intelligence irreplaceable? There are valid arguments on both sides of this debate. Sixth Sense has shown that AI can reach heights never imagined before. Yesterday's science fiction is today's reality, and what we speculate about today may one day become real. We may be able to create machines with intelligence levels never thought of. Future automata may be able to 'live' independently, without human command or interaction, and may be designed to make decisions themselves, like humans.

But as with everything, machines and AI also have their limits. The greatest factor separating life from machines is consciousness: in this context, the ability of a being to sense and observe its surroundings, make decisions based on these observations, and act accordingly. It is consciousness that enables us to learn from our past and make predictions about our immediate future. Throughout history, man has tried to simulate lifelike consciousness in machines, but has never been completely successful. A machine may be able to make calculations of great complexity, but it lacks subjective thinking. This is why machines are unable to show emotions or form opinions.
Thus, it can be concluded that machinery can be made highly intelligent and can imitate living creatures in various respects, but only up to a limit, because life is indeed irreplaceable.

By Pranav R Kamat


Pure intelligence:- Bots that learn

Above is a video that takes artificial intelligence into a whole new realm. Until now, machines have been programmed only to do a specific set of tasks. But with the advent of Hod Lipson's models, it is now possible to create machines that, through trial and error, learn about themselves and learn to perform tasks that meet a specified goal. Here, I shall discuss how far this technology can simulate the living world, what its limits are, and how it could change the industry's view of which processes can be mechanized and which cannot.

Basic necessities for simulating a bot that learns and evolves:-

These robots work on a 'reward' system. A bot is equipped with some basic motor/output functions, along with some sensors for input and feedback. The bot feels rewarded when it achieves a certain task, which it perceives via its sensors. Beyond that, it is not told what it looks like or what each motor function does. In addition, a specific probability is coded into the bot, which decides how often it will try out a new set of moves.

To understand this better, let's take the example of Hod Lipson's "spider". It has eight motors and two tilt sensors to start with, and it feels rewarded when it moves forward: the greater the speed, the greater the reward. Based on the probability coded into it, the spider will either perform the specific set of movements (say 'A') that it knows yields the best reward (tried and tested), or try out a random set of movements. If the new set of movements (say 'B') turns out to be more rewarding, then 'B' becomes the new 'A'. This cycle continues until the bot is switched off; a minimal sketch of the loop is given below.
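
Here is a toy Python version of that loop. The reward function is a hypothetical stand-in (a real bot would measure forward motion through its tilt sensors), and the exploration probability plays the role of the probability coded into the bot.

```python
import random

# Toy version of the 'spider' loop described above: with a fixed
# exploration probability the bot tries a random movement sequence,
# otherwise it replays its best-known one; better-rewarded sequences
# replace the old champion.

EXPLORE_PROB = 0.2        # the probability "coded into the bot"
N_MOTORS, SEQ_LEN = 8, 6  # eight motors, as in Lipson's spider

def random_moves():
    return [random.randrange(N_MOTORS) for _ in range(SEQ_LEN)]

def reward(moves):
    # Hypothetical stand-in: pretend even-numbered motors push forward.
    return sum(1 for m in moves if m % 2 == 0)

best = random_moves()     # the current 'A'
for step in range(1000):
    if random.random() < EXPLORE_PROB:
        trial = random_moves()            # try something new ('B')
        if reward(trial) > reward(best):  # 'B' becomes the new 'A'
            best = trial
    # else: replay `best` -- tried and tested.

print("best sequence:", best, "reward:", reward(best))
```

Over many iterations the champion sequence drifts towards ever more rewarding behaviour, with no one ever telling the bot what its motors actually do.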

Hod Lipson has also observed that when no specific reward has been coded for, the intrinsic reward of a bot in a population is to self-replicate.


Analogies with the living world:-

Just like human beings, these bots are equipped with sensory and motor functions and are not programmed for any one particular task. And just like human beings, these bots try random actions to attain the ultimate reward. To see that human beings also work on the basis of a reward system, one can simply think of the AI bot's reward as an "artificial orgasm". If we look at the big picture, it is easy to see that all our actions are pointed towards attaining an ultimate reward. It is also possible to argue that we are built with no specific reward encoded in us; but then, mechanists can draw analogies with Lipson's cubes (refer to the video), in which no specific reward is encoded and the intrinsic reward is to sustain their population as a whole. This makes reproduction, multiplying and sustaining the populace, the intrinsic reward for all living organisms, including humans. In this sense, these AI bots can simulate any living organism.

One can also argue that these bots simulate the ultimate designer, hired by Mother Nature herself: evolution. Evolution is the process by which traits in organisms get honed as they pass down the generations. This happens through mutation, in which a slight variation is induced in one of the organisms of a species: if the variation is beneficial, it endures and the trait is passed down the generations; otherwise it disappears. The same idea is built into these AI bots, where the coded probability deciding how often the bot deviates from its normal course of action to a random set of movements simulates mutation in living organisms. A toy version of this mutation-and-selection cycle is sketched below.
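
As a sketch of that analogy, here is a toy mutation-and-selection loop in Python. Everything in it (the bit-string genomes, the fitness function, the mutation rate) is an illustrative assumption, not Lipson's code.

```python
import random

# Toy mutation-and-selection loop. Each 'organism' is a bit string;
# its 'fitness' is an arbitrary stand-in trait. The small mutation
# rate plays the role of the bot's probability of deviating from its
# usual moves.

MUTATION_RATE = 0.05

def fitness(genome):
    return sum(genome)                 # stand-in: more 1-bits = fitter

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(100):
    # Beneficial variations endure: keep the fitter half, then refill
    # the population with mutated copies of the survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print("best fitness after 100 generations:", fitness(population[0]))
```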

Limitations of AI bots:-

It would be difficult to simulate an organism to its fullest, as the sensory and motor apparatus of living organisms is very vast. Consider human beings: each cell of the retina, each taste bud, each receptor in the nose, inner ear and skin is a separate sensor. Building all of this into an artificial organism is very difficult. Moreover, even if we did create such an organism, the processing power of the chip driving the bot would be much slower than the rate at which a living organism learns and adapts. That would mean that if it takes years for a human being to develop and groom itself, the same kind of development in an artificial human could take millennia.

The statement "Lipson's model of AI bots simulates evolution in itself" is not entirely correct. The code of a living organism (its DNA) also includes the description of its sensory and motor modules and their functions; mutation and evolution have caused living organisms to sprout new sensory and motor functions. But these artificial machines cannot alter their own sensory and motor modules.

How it could change the industry's ability to differentiate the mechanizable from the not:-

First, we must understand why automation is used in industry to replace human labour wherever possible. The human brain receives a lot of data from its senses in order to comprehend and perform tasks quickly; in automation, only the data necessary for the task at hand is fed in, and the machine can perform that task much faster. These evolutionary bots have the scope to raise the bar even higher, given time and effort. "Management is an uncertain activity and is thus non-mechanizable" could become a statement of the past with the advent of these AI bots. But industry has no room for mistakes, and mistakes are the very basis of these bots' intelligence.

--Amit M. Warrier
EE09B004

References:-
1) The Video shown above
2) The Defecating Duck, by Jessica Riskin (for an understanding of the mechanist and realist views on automata)

Content aware images: Yes they exist!!

What is AI?
Turing Award winner Raj Reddy's lecture, presented at the ACM Computer Science Conference, pretty much sums up what AI is, and makes us realize that it is not something of the future but something that has already become a part of our lives. The lecture goes like this...
                "Human and other forms of intelligence - Can a computer exhibit real intelligence? [Herbert] Simon provides an incisive answer: 'I know of only one operational meaning for "intelligence." A (mental) act or series of acts is intelligent if it accomplishes something that, if accomplished by a human being, would be called intelligent."
He cites the example of the Logic Theorist, written in 1956, which found a mathematical proof more elegant than the one found by the mathematician. This certainly raises questions about the difference between rote procedure and intelligence, and brings us back to the fundamental questions: can a machine simulate life? Can machines be intelligent? And what do they have to do for us to accept them as intelligent? Mr. Reddy, in his lecture, goes on to stress the point that YES, THEY ARE INTELLIGENT!! He continues...
            "The trouble with those people who think that computer intelligence is in the future is that they have never done serious research on human intelligence. Shall we write a book on "What Humans Can't Do?"Computer intelligence has been a fact at least since the engineers at Westinghouse wrote a program that designed electric motors automatically. Let's stop using the future tense when talking about computer intelligence.' Can Artificial Intelligence equal human intelligence? - Some Philosophers and Physicists have made successful lifetime careers out of attempting to answer this question. The answer is AI can be both more and less than human intelligence. It doesn't take large tomes to prove that they cannot be 100% equivalent. There will be properties of human intelligence that may not be exhibited in an AI system (sometimes because we have no particular reason for doing so or because we have not yet gotten around to it). Conversely, there will be capabilities of an AI system that will be beyond the reach of human intelligence. Ultimately what AI will accomplish will depend more on what society needs and where AI may have a 'comparative advantage' rather than by philosophical considerations."

Content aware images: A fascinating example of AI
Content-aware image editing attempts to do something that we humans love to do and consider one of the qualities that make us different from machines: IMAGINE! Imagine you resize an image and, to your surprise, it does not get visibly distorted. Imagine you delete some object in the image, and the area behind it is reconstructed and rendered by the program. Content-aware image manipulation software tries to do exactly that, and much more. The following video gives a flavour of what it has in store for us.
Technicalities aside, the capability to imagine or predict what lies behind an object in a given scene seems a highly "human" activity. The following picture shows how the program can predict parts of the image that were never captured at all, that is, create a large part of the image on its own!

Behind the scenes:
Although the algorithm Photoshop uses is proprietary and undisclosed, a similar feature exists in the open-source software GIMP. The GIMP approach uses two concepts called seam carving and patch matching. Seam carving finds out which parts of the image are less important (and can be resized away) and which are not (and should retain their shape). It does this by finding the edge profile of the picture and creating an energy map: low-energy areas can be distorted, while high-energy areas should retain their shape. Patch matching fills regions with plausible content using nearest-neighbour calculations over image patches, combined with randomized sampling. A bare-bones sketch of the seam-carving side appears below.
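
To make the seam-carving half concrete, here is a small Python sketch (using NumPy) of a gradient-magnitude energy map and the dynamic-programming search for the cheapest vertical seam. It is an illustration of the technique, not GIMP's or Photoshop's actual implementation.

```python
import numpy as np

# The energy map is the gradient magnitude: high energy marks strong
# edges that should keep their shape. Dynamic programming then finds
# the vertical seam (one pixel per row) of least total energy -- the
# pixels cheapest to remove when the image shrinks by one column.

def energy_map(gray):
    gy, gx = np.gradient(gray.astype(float))
    return np.abs(gx) + np.abs(gy)

def cheapest_seam(energy):
    h, w = energy.shape
    cost = energy.copy()
    for row in range(1, h):               # accumulate cheapest path so far
        for col in range(w):
            lo, hi = max(col - 1, 0), min(col + 2, w)
            cost[row, col] += cost[row - 1, lo:hi].min()
    # Walk back up from the cheapest bottom pixel to recover the seam.
    seam = [int(np.argmin(cost[-1]))]
    for row in range(h - 2, -1, -1):
        col = seam[-1]
        lo = max(col - 1, 0)
        seam.append(lo + int(np.argmin(cost[row, lo:min(col + 2, w)])))
    return seam[::-1]                     # column index for each row

gray = np.random.rand(6, 8)               # stand-in for a grayscale image
print(cheapest_seam(energy_map(gray)))
```

Removing that seam shrinks the image by one column while leaving the high-energy edges untouched; repeating the process gives content-aware resizing.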


Is it really intelligent then?
We can see that a lot of computation goes into making images content-aware. Compare this with our own ability to imagine the missing part of an image: we do it seemingly effortlessly, and we obviously don't employ the methods mentioned above. So here comes the question: if the program does it totally differently from how we humans do it, should we call it intelligent? Is it fakery?
It is obviously not a simulation of the process (it does not mimic the process of imagination, but rather the result of the process), yet the end result is that it does what was thought possible only for the animate. In the end, how it does it is not as important as the fact that it does it. As with any AI technology, content-aware images play a major role in redefining the boundary between the machine and the animate. And in a sense, this is a program that can imagine!


References:
  1. To Dream The Possible Dream - Turing Award Lecture Presented at ACM CS Conference, by Raj Reddy
  2. Logic Theorist, The program which proves theorems
  3. PatchMatch, Content aware editing algorithm
  4. Seam carving, Content aware resize algorithm

AARON the 'Cybernetic Artist'


From the mid-eighteenth century, starting with Vaucanson's Defecating Duck, there has been a series of attempts to simulate life and living processes. Every attempt, coming closer to real life than the previous ones, shifted the meaning of simulation from 'mere fakery' towards an actual reproduction of the real-life process. From external movements and activities like playing the flute and piano, to internal physical processes like digestion, to mental processes like calculation, and finally to intelligence and thinking, we have come a long way in the simulation of life. We have defined and redefined the extent to which we can simulate life, pushing it closer and closer to the real thing and leaving a very thin line between machines and animals. It is perhaps interesting that most modern machines are built along the same lines as we understand our own bodies: the better we understand ourselves, the better we simulate.

About AARON

In 1973, Harold Cohen, a reputed English painter, took up an ambitious new project: creating a machine that could paint original art. AARON started as a robot that drew black-and-white pictures. Over the thirty years that Cohen worked on it, it went through many changes: though it started as a mere drawing machine, it grew in complexity, eventually drawing human figures with an increasingly sophisticated understanding of their positioning in space. Cohen hard-coded a theory of colours, of different brush strokes, and of composition. Though this is all procedural programming, at a higher level AARON chooses what to do, when, and what it should look like; no two AARON paintings are the same. It mixes its own colours and cleans its own brushes. Today AARON builds its images through colour the way Cézanne or Matisse once did. The colours are dazzling and deeply satisfying, which surprised everyone, including its tutor Cohen, himself well known as a gifted colourist; he says he has learned from AARON even as AARON learned about colour from him. It can paint real-life objects as well as abstract ones. A toy illustration of this split between hard-coded rules and the program's own choices appears below.
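
The following Python toy is nothing like Cohen's program in sophistication, but it shows the division of labour in miniature: the palette and composition rule are hard-coded (the "theory"), while the individual choices are left to the program, so no two runs are alike. Every colour and rule here is an invented placeholder.

```python
import random

# A toy in the spirit of AARON, not Cohen's code: the hard-coded
# palette and composition rule stand in for his colour theory, while
# the program makes the higher-level choices itself.

PALETTE = ["#c0392b", "#2980b9", "#f1c40f", "#27ae60"]  # assumed colours

def compose(width=400, height=300):
    shapes = []
    for _ in range(random.randint(4, 9)):      # the program decides how many
        shapes.append({
            "x": random.randint(0, width),
            "y": random.randint(height // 3, height),  # rule: keep weight low
            "r": random.randint(15, 60),
            "colour": random.choice(PALETTE),
        })
    return shapes

def to_svg(shapes, width=400, height=300):
    body = "".join(
        f'<circle cx="{s["x"]}" cy="{s["y"]}" r="{s["r"]}" fill="{s["colour"]}"/>'
        for s in shapes)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">{body}</svg>')

print(to_svg(compose()))   # paste the output into an .svg file to view
```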


Hard copies of AARON's paintings have hung in museums around the world, including London's Tate Modern, Amsterdam's Stedelijk Museum, the San Francisco Museum of Modern Art, the Brooklyn Museum and Washington's Capital Children's Museum.

Today the AARON software is licensed by Kurzweil CyberArt Technologies, Inc. It uses the same program as the AARON robot, but instead of producing a hard copy it displays each painting on your computer screen, from where it can be printed on a colour printer.


Is the computer being creative?

While AARON continues to make those lifelike pictures, it raises a few questions about autonomy, creativity, learning and intelligence. The automata of the eighteenth century created a distinction between creative, intelligent work and physical, unintelligent work: it was thought that only the physical and unintelligent could be simulated. As the years passed we simulated ever more complex human activities; are we now simulating creativity? This question is hard to answer because creativity is very difficult, perhaps even impossible, to define in objective terms, and impossible to measure. Human creativity itself is not well understood, nor is it well understood why we are drawn towards certain visuals (art), certain sounds (music), and so on.

When it comes to AARON, a machine composing art worth thousands of dollars is difficult for a lot of people to comprehend. What does it say about originality? Creativity? Learning? AARON's works are unique, so they are original.
 
On learning and creativity, Pamela McCorduck, the author of “Aaron's Code”, says:
“AARON has learned what Cohen has taught it, but like all good students, AARON surprises its teacher with its own work—in a human we would call that creativity.”
Ray Kurzweil, who bought the rights to AARON, said:
“Harold's AI-based program actually creates original paintings on your computer's screen, each one completely different. If a human created paintings like AARON, we would regard him or her as an acclaimed artist.”

On the contrary, if one considers Robin Baker's criteria for a program to be recognized as creative:
1) The conceptual space of the programmer is extended or broken by a creative program. [In other words, it creates something beyond the boundaries of what was originally programmed into it.] 
2) It should have judgment and be able to recognize its own work
Though AARON clearly seems to satisfy the first criterion, it cannot satisfy the second. It does not judge between a good picture and a bad one, as an artist generally does; AARON may please many people, but it cannot please itself. Moreover, art is considered a medium through which artists communicate their moods and feelings. This gives art some life, and that life is completely lacking in AARON's paintings: AARON does not even know what it is painting.

To some extent, the way AARON works is similar to the way humans work: it learns what its teacher has taught it and makes paintings with its own intellect. But it is nothing close to a human in the judgement of its own paintings. Though there are many different opinions on computer art, AARON's paintings have been sold for thousands of dollars, and it can surely be seen as another step in narrowing the gap between man and machine. It has simulated creativity partially, though not completely.

-G Sujan Kumar

References

Case study: Harold Cohen and AARON