- A 30+ year old MSc graduate in Computing Science, currently working in the financial technology sector.
Wednesday, August 4, 2010
12:22 PM | Posted by Tobias
Here is a personal favorite of mine. L-system is short for Lindenmayer system, named after Aristid Lindenmayer, the Hungarian biologist who introduced the technique in 1968. An L-system is a kind of formal grammar, best known for describing the growth processes of plants. The growth algorithm itself is recursive and often pretty simple, but the result is far more complex. The very nature of the process makes L-systems self-similar, meaning that even a small part of the plant resembles the whole structure.
The plant has a point of origin which never changes, but how the plant grows is formalized recursively through a number of iterations, where every iteration applies one or several rules that describe the growth process. An important difference from an ordinary formal grammar is that in an L-system all applicable rules are applied in parallel at each iteration, rather than one rule at a time.
There are many offshoots of Lindenmayer's original L-system, but for now we'll focus on just one: the fractal plant. I'm going to use the wiki example because I think it's self-explanatory.
variables : X F
constants : + −
start : X
rules : (X → F-[[X]+X]+F[+FX]-X), (F → FF)
angle : 25°
Here, F means "draw forward", - means "turn left 25°", and + means "turn right 25°". X does not correspond to any drawing action and is used to control the evolution of the curve. [ corresponds to saving the current values for position and angle, which are restored when the corresponding ] is executed.
Pretty life-like, wouldn't you say? This is just one example of how an L-system may be used; as I said earlier, there are many variations. The emergent part is the simplicity: two rewriting rules for drawing lines produce all of this. If you look closely you can see the self-similarity in every branch. So what can we do next? Well, the video below is one thing.
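To make the rewriting concrete, here is a minimal Python sketch of the expansion step, using exactly the rule set listed above; the iteration counts in the prints are arbitrary.

```python
# Every symbol is rewritten in parallel on each iteration; symbols without
# a rule (the constants + - [ ]) are copied through unchanged.
RULES = {"X": "F-[[X]+X]+F[+FX]-X", "F": "FF"}

def expand(axiom, iterations):
    for _ in range(iterations):
        axiom = "".join(RULES.get(symbol, symbol) for symbol in axiom)
    return axiom

print(expand("X", 1))        # F-[[X]+X]+F[+FX]-X
print(len(expand("X", 6)))   # the string grows explosively
```

A turtle-graphics interpreter would then walk the final string, drawing on F, turning 25° on + and -, and pushing/popping state on [ and ], to produce the plant image.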
I will return to L-systems later for some deeper analysis. For now ... as always ... I leave it to you to find out more for yourself.
11:46 AM | Posted by Tobias
We come back to ant algorithms and now focus on where they can be applied. One of the most classic problems in computational mathematics is the traveling salesman problem. The problem is simple to state, yet solving it is not.
Given a list of cities and their pairwise distances, the task is to find a shortest possible tour that visits each city exactly once.
In mathematics it's described as an NP-complete problem (strictly speaking, its decision version is), but I'll leave it up to you whether you want to go down that road and find out exactly what that means. Anyone can connect every city with a single line, but how do you know if it's the shortest route? If you read the first article about ant algorithms you should get an "aha!" experience right about now. Ant algorithms, or ant colony optimization as the technique is also called, are perfect for this kind of problem.
Given a set of cities and some number of ants with one task, to visit each city once, you achieve your goal simply through the same stigmergy the ants use when collecting food for their colony. The trail pattern goes from (2) to (3) and ends up as (4) as the pheromone decreases in strength on the longer trails. No real calculation, just nature finding its way.
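Here is a toy ant-colony sketch for a tiny TSP instance, in the spirit of what's described above. The city coordinates, ant count, iteration count and 10% evaporation rate are all made-up illustration values, not tuned parameters.

```python
import math
import random

CITIES = [(0, 0), (1, 5), (5, 4), (6, 1), (3, 2)]
N = len(CITIES)

def dist(a, b):
    return math.dist(CITIES[a], CITIES[b])

def tour_length(tour):
    return sum(dist(tour[i], tour[(i + 1) % N]) for i in range(N))

pheromone = [[1.0] * N for _ in range(N)]
rng = random.Random(1)

def build_tour():
    """One ant visits every city once, preferring strong, short trails."""
    tour = [rng.randrange(N)]
    while len(tour) < N:
        here, left = tour[-1], [c for c in range(N) if c not in tour]
        weights = [pheromone[here][c] / dist(here, c) for c in left]
        tour.append(rng.choices(left, weights)[0])
    return tour

best = None
for _ in range(50):
    tours = [build_tour() for _ in range(10)]    # 10 ants per round
    for row in pheromone:                        # evaporation: trails fade
        for j in range(N):
            row[j] *= 0.9
    for t in tours:
        deposit = 1.0 / tour_length(t)           # shorter tour => stronger trail
        for i in range(N):
            a, b = t[i], t[(i + 1) % N]
            pheromone[a][b] += deposit
            pheromone[b][a] += deposit
        if best is None or tour_length(t) < tour_length(best):
            best = t
print(best, round(tour_length(best), 2))
```

The evaporation step is what makes the shorter tours win out over time, just as with the "hot" and "cold" trails in the earlier ant post.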
If you are more interested in this technique and what it can do, I recommend taking a look here for further studies. It's all about optimization =)
Monday, August 2, 2010
2:28 PM | Posted by Tobias
2:10 PM | Posted by Tobias
Here is what I think is an interesting chapter. Yes, I know, we are focusing on emergent behavior, but I can't help mentioning genetic programming.
As the name reveals, we are talking about genetics and evolutionary algorithms. I'm no expert in the area, but I will give a fairly understandable explanation of the subject. It does not follow strictly emergent behavior, but as you mess around you may accomplish something you did not expect, and genetic research is to this day far too complex an area for me to explain fully. We still don't know what will happen if we tamper with our own genetic code. So we can tamper with our digital genetic code instead =)
As stated before, genetic programming grew out of the evolutionary algorithms that began in the 1950s. Since it is computationally intensive it was mainly used for solving fairly small problems, but as we got more and more computerized it has become possible to use it in a wider range of fields, e.g. quantum computing. But the most fun part of genetic or evolutionary programming must be when it's applied to creature evolution. Basically you have a creature with some form of limbs, and as genetic programming is applied the creature learns, through generations of evolution, to "walk" or at least transport itself from point A to point B. Watch the video below and compare the creature's movements and limbs as it evolves through generations of mutations.
As you can see, from generation 1 to generation 1000 it has evolved into a more efficient creature. This is done by mutating the creature's movements and limbs as it passes from one generation to another, as from father to son. But how? Well, I said I'm going to explain it as simply as possible, which can basically be done by this illustration of crossover mutation.
Let's say A is the father and B is the mother. They have a child which consists of genes from both parents, but sometimes a mutation occurs, contributing a new gene that comes from neither the mother nor the father. This is part of one unpredictable aspect of "human" evolution. If we look at the video clip above, the creature goes through a similar process between every generation, programmed into it before it is "reborn" again.
- But wait!? you say ... how does it know which mutations are good and which are bad? Well, for the simulation in the video clip there must be something to compare each evolution against. It can be as simple as we stated earlier: getting from point A to point B as fast as possible. If a generation of creatures is faster than its predecessors, the mutation was an improvement and therefore better to spawn new generations from. The mutation can be applied to the limbs' movements, shape, size, you name it. Mutation and inheritance between generations can be done in many ways, but for now I only illustrate the crossover mutation.
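The crossover-plus-mutation loop above can be sketched in a few lines of Python on bit-string "genes". The gene length, population size and 5% mutation rate are arbitrary illustration values, and the fitness function just counts ones as a stand-in for "how fast the creature got from A to B".

```python
import random

rng = random.Random(42)

def crossover(father, mother):
    """The child takes a prefix from one parent and a suffix from the other."""
    cut = rng.randrange(1, len(father))
    return father[:cut] + mother[cut:]

def mutate(genes, rate=0.05):
    """Occasionally flip a bit: a gene that came from neither parent."""
    return [g ^ 1 if rng.random() < rate else g for g in genes]

def fitness(genes):
    return sum(genes)

population = [[rng.randint(0, 1) for _ in range(10)] for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)   # survival of the fittest
    parents = population[:10]                    # the weak perish
    children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                for _ in range(10)]
    population = parents + children
print(fitness(max(population, key=fitness)))     # climbs toward 10
```

The comparison against predecessors happens in the sort: only the fittest half get to spawn the next generation.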
As Herbert Spencer phrased it after reading Charles Darwin's On the Origin of Species: survival of the fittest. The strong survive while the weak perish, and that is what genetic programming has adopted in this case. I can imagine that hunters thousands of years ago were fast and agile, simply because the short and slow ones couldn't catch anything, or became dinner themselves in the hunting process =)
11:04 AM | Posted by Tobias
Yes, you read that right. Ants are a good example of emergent behavior.
Have you ever been to the woods? If so, have you ever stumbled on an ant trail? How can these little creatures with almost no brain find their way from the colony, get food and find their way back again? Well, it's simple really ... if you can't think it or see it, then smell it: pheromones. Ants use the environment to communicate with each other and leave local pheromone tracks when wandering about. This is called stigmergy. Notice "local" pheromone, because a pheromone does not say anything about where the food or the colony is. In mathematical terms, it's just a variable with a value.
We can embody a simple set of rules which is genetically imprinted in a "farmer" ant. When it finds food it releases a pheromone while trying to find its way back to the colony. It follows pheromone released by searching ants; the stronger the pheromone, the closer the colony (more ants). The pheromone it releases itself (1) attracts other ants, which leads to more ants finding the food source. They then also try to find their way back to the colony, creating a trail in the process. When it finally finds the colony and sets off to find food again, it has a pheromone trail to follow back to the food, consisting of pheromones left by other ants.
Hence, the ants do not know where the colony or the food is. They are simply following the pheromones. The stigmergy happens to solve the task "given" to the ants. But, and this is a big but, the pheromone left by each ant decreases in strength over time. This makes it possible for ants to determine if a trail is "hot" or "cold". Otherwise there would not be just one trail, there would be many, since the ants couldn't determine which way is the shortest between the food source and the colony. The trail would look like (2) in the picture, somewhat simplified anyway. Since the pheromone decreases over time, the longer trails get lower pheromone strength, and the ants will more and more follow the path with the shorter distance between the food source and the colony (3).
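The evaporation effect can be shown with a toy calculation: two trails, where ants on the short route return more often, so that trail is topped up every step, while the long route only gets a deposit every third step. The 10% decay rate and the deposit schedule are made-up numbers for illustration.

```python
short_trail, long_trail = 1.0, 1.0
for step in range(50):
    short_trail = short_trail * 0.9 + 1.0                            # frequent deposits
    long_trail = long_trail * 0.9 + (1.0 if step % 3 == 0 else 0.0)  # rare deposits
print(round(short_trail, 2), round(long_trail, 2))  # the short trail dominates
```

No ant ever compares distances; the decay alone makes the "hot" trail several times stronger than the "cold" one.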
So, simply by following pheromones the ants produce a more complex situation than anticipated. Not strictly emergent like Boids, but still an algorithm engineered by nature. This may also explain why ant colonies all of a sudden are deserted. Maybe it's simply because the food started to be so far away that all the ants scattered off to god knows where and started a new colony. How exactly would be fun to speculate about, but some other time ...
Tuesday, June 22, 2010
9:48 AM | Posted by Tobias
Most of us are familiar with the term artificial intelligence, or AI, which has been around since the 1940s. It's not too uncommon in Hollywood when it comes to doomsday films such as The Terminator, A.I. (the movie), I, Robot or Eagle Eye. So is AI going to doom us all? Well, no, I don't think so ... the human race can handle that all by itself. The area of artificial intelligence is not strictly emergent, but emergent behavior underlies AI, so I'm going to have it as a topic anyway, because it's so widespread and basically tries to emulate nature and the human brain.
Some techniques are strictly brute force: as many calculations as possible within an estimated time. For example Deep Blue, the first computer to beat the reigning world chess champion, Garry Kasparov, in 1997. The "AI" used for this task was mainly to foresee moves to a certain depth and make the most logical choice. This is not the kind of AI we will be focusing on; however, emergent behavior underlies computational intelligence in a way.
As I've stated before, this blog, if you may, is for exploring thoughts around life and the emergent behavior that surrounds it.
However, AI is about making the choices which best fit the situation. Whether it's playing chess, walking down stairs, or using abstract objects, the AI makes choices based on what it "knows" about its environment. How it gathers that information can be done through numerous methods, often in combination.
Hold on to your hats (or briefs), the following will contain many field buzzwords that may be confusing. I'm going to cheat a little and use the wiki approach, dividing AI into three descriptive areas, because that gives us a good preview of the subject without diving too deep into it.
Yes, every area has its set of problems, and AI has many. Why? Well, for starters, we don't know how the object we are trying to simulate actually works. I'm talking about the world's most advanced supercomputer, replicated into about 6 billion copies. You guessed it: the brain.
Early AI researchers developed algorithms that imitated the step-by-step reasoning that humans were often assumed to use when they solve puzzles, play board games or make logical deductions. But we humans are far more complex, and use our emotions and intuitive judgment as variables in the equation when faced with a problem or task. Our brain is not a state machine which iterates through possible answers; it's more like a quantum machine which weighs millions of options simultaneously, and into that mix it also applies our personal experience and emotions. There is no chance of simulating that (yet). AI, however, has several fields to cope with these issues, and tries to handle the same input that we humans get from our surroundings and adjust to it.
The Planning methodology is about setting a goal and reaching it, as we humans do when we need to get from point A to point B. Here we can apply emergent behavior as a model through evolutionary algorithms and swarm intelligence, much like the Boids approach we've been discussing. The natural next step is Learning, which has been central to AI research from the very beginning. Without going into detail, here we apply unsupervised learning, supervised learning and reinforcement learning, which are different approaches to solving the same problem in different situations. We have Natural language processing to deal with communication between digital systems and humans through ordinary language. We've actually come quite far on this one! The last thing I'm going to list as an AI problem is Perception: input from our surroundings through our five senses. This is a big one in robotics, where of course visual perception gets the most focus. We are talking facial recognition and object recognition, used to build an internal representation of the world.
There are several different approaches, and not all of them play well with each other. AI is not a well-defined research area that relies on theorems the way mathematics does. What "AI" stands for can't be explained in a single sentence. First of all it depends on the angle you are looking at it from: is it from a psychological view, or something else? Is the term AI, and the result it produces, of the same value by all standards? Well, the quick answer is no. Here are some examples of how to approach AI. You will notice that most of the research dates to the 1970s or earlier, so this is something that's been around since before we got computerized.
In the 1940s and 50s, research explored the possibility of reducing human intelligence to symbolic representation. There was Cognitive simulation, where psychological experiments were used to develop programs that simulated human problem-solving techniques. This laid the foundation for cognitive science, which later formalized the research. Others claimed that simulating the human brain wasn't enough, and that abstract reasoning and problem solving were more fitting: logic-based intelligence. Formal logic later became the backbone of logic programming, which evolved into programming languages such as Prolog. In later years, when computers were more advanced, a different type of AI emerged, now known as knowledge-based intelligence. Having more system memory, researchers were able to construct programs that held more information, and were therefore able to access more data than just split-second input. From here the first "real" successful AI programs were deployed; they are formally known as expert systems. For example, a doctor could use a patient's symptoms as input and get a list of possible diagnoses as a result. An expert system should never be an absolute factor when making decisions, though; it should rather be a support system taken under advisement. A good example scenario is prescribing medication for a patient: an expert system could instantly prevent a doctor from prescribing drugs which should not be mixed with each other, or stop an obviously wrong dosage.
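At its core, the drug-interaction scenario is just a lookup against a knowledge base of rules. A minimal sketch, with an entirely invented table of conflicting drug pairs (illustration only, not medical knowledge):

```python
# Hypothetical knowledge base: pairs of drugs that must not be combined.
BAD_COMBINATIONS = {frozenset(["drug_a", "drug_b"]),
                    frozenset(["drug_a", "drug_c"])}

def check_prescription(current_drugs, new_drug):
    """Return the already-prescribed drugs that conflict with the new one."""
    return [d for d in current_drugs
            if frozenset([d, new_drug]) in BAD_COMBINATIONS]

print(check_prescription(["drug_b", "drug_d"], "drug_a"))  # ['drug_b']
```

Real expert systems chain hundreds of such rules through an inference engine, but the principle of stored knowledge plus mechanical lookup is the same.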
The sub-symbolic approach became of greater interest in the 1980s, when little faith was shown in regular symbolic AI for serious perception, pattern recognition and robotics. We now touch the area this site focuses on: computational intelligence. Using non-symbolic methods such as neural networks, fuzzy systems and evolutionary computing, researchers were able to embody the AI more according to the embodied mind thesis, a related field in cognitive science. Roughly speaking, it says that intelligence is determined by the form of its body. This makes sense looking at the Boid system, an abstract method with a set of rules which is embodied by an agent or avatar.
Now it's time to stop ... you should now have a good idea of the foundations of AI in general terms. AI is what we do here on this site, but we only focus on selected parts ... the interesting ones =) But nevertheless, one should know the whole picture rather than a small corner.
Wednesday, June 16, 2010
2:41 PM | Posted by Tobias
I think this one deserves its own topic. Cellular automata, a model one might say is strongly championed by Stephen Wolfram (author of A New Kind of Science). The technique first saw the light of day in the 1940s at the hands of Stanisław Ulam, who was studying the growth of crystals while collaborating with John von Neumann, who was working on a problem of self-replicating systems; and so the cellular automaton was born.
A cellular automaton is a collection of "colored" cells on a grid of specified shape that evolves through a number of discrete time steps according to a set of rules based on the states of neighboring cells. The rules are then applied iteratively for as many time steps as desired.
Well, that says it all, doesn't it? No? I thought so. Cellular automata do not have their foundation in mathematics or physics; it's more of a model based on rules and movements. A visual representation of calculations, if you may, and the result is really not just an answer to, let's say, an equation; it's more like a display of colors, patterns and states for the eye to judge.
The most famous cellular automaton is probably the Game of Life, created by John H. Conway. Every cell in the grid applies the three rules below, where "the count" is the number of live cells among its eight neighbours.
- Death: if the count is less than 2 or greater than 3, the current cell is switched off
- Survival: if (a) the count is exactly 2, or (b) the count is exactly 3 and the current cell is on, the current cell is left unchanged
- Birth: if the current cell is off and the count is exactly 3, the current cell is switched on
Every cell only knows its closest neighbours; no predefined paths or patterns are followed. The result is totally emergent, same as for the Boids model.
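The three rules above fit in one small Python function; here is a minimal sketch of a single Game of Life step on a grid of 0/1 cells. The grid wraps around at the edges, which is one common convention, not the only one.

```python
def step(grid):
    rows, cols = len(grid), len(grid[0])
    def count(r, c):   # live cells among the eight neighbours
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
    # birth on exactly 3; survival on 2 or 3; death otherwise
    return [[1 if count(r, c) == 3 or (count(r, c) == 2 and grid[r][c]) else 0
             for c in range(cols)]
            for r in range(rows)]

# A "blinker": three cells in a row oscillate between horizontal and vertical.
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 1, 1, 1, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0]]
print(step(blinker))   # the row flips to a column; two steps return the original
```

No cell is told about the blinker pattern; each one only counts its neighbours, yet the oscillation emerges.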
Amazingly, Life is a universal cellular automaton, in the sense that it is effectively capable of emulating any cellular automaton, Turing machine, or any other system that can be translated into a system known to be universal. This basically means that a cellular automaton can reproduce any system which by definition is universal. This is pretty powerful stuff, and I recall reading about cellular automata being used to reproduce fungus growth on a microscopic level. I've failed to find a reference for this, but I will add it as soon as I do.
If we take a step into the 21st century, a demonstration of the cellular automaton looks like this.
If you ask me, this looks like life under a microscope. Well, parts of it, anyway. I will return to cellular automata, but for now I leave it to you for further reading.