Tuesday, 29 November 2016

Is Intelligence an Algorithm? Part 5: The Intelligence of Emotions

Bacterial colonies with an uncanny resemblance to neuronal structures

I first published this essay on Steemit: https://steemit.com/ai/@technovedanta/is-intelligence-an-algorithm-part-5-the-intelligence-of-emotions

In this essay I will discuss how emotions are part of the natural intelligence algorithm. I will also indicate how an artificial equivalent of emotions could be of benefit to an artificial intelligence. The topic of how to deal in an intelligent way with your emotions will be the topic of the next essay.


In the previous posts in this series of “Is intelligence an algorithm” I have exposed my hypothesis that intelligence functions as a kind of algorithm (see part 1 of this series). Part 2 related to cognition, (pattern) recognition and understanding. Part 3 explored the process of reasoning, which is necessary to come to identifications and conclusions and is also a tool in the problem-solving toolkit. In part 4 I discussed how we identify and formulate a problem; how we plan a so-called heuristic to solve it, how we carry out the solution and check if it fulfils our requirements.
This article is written for those who wish to understand more about the innate intelligence associated with emotions (this essay), ultimately in order to be able to deal better with emotions and not be overwhelmed by them (next essay).
Many psychologists have written about emotional intelligence or have described what emotions are. It is not my purpose to summarise what they have said. Rather, I would like to show how emotions fit in the intelligence algorithm I have been describing. Note that nothing of what I assert is necessarily so. I am formulating a hypothesis and trying to convince you of its plausibility.
In human interactions decisions sometimes seem to be based on irrational considerations, which appear to deviate from the rational process of the intelligence algorithm as set out in the previous posts. In this post I will try to argue that emotions are an integral and vital part of the intelligence algorithm. What appears to be irrational is merely a different aspect of intelligence in function. So let’s try to decipher what kind of mechanism or algorithm is involved in emotions.

Where and when 

Emotions are formed and stored in the part of the brain called the limbic system, more specifically in a structure called the amygdala, which is closely linked to the basal ganglia. It is also in the basal ganglia that fixed action patterns of immediate response to emotions are stored and released whenever necessary.

image from http://people.howstuffworks.com/swearing.htm/printable
When we make an observation, the first percept is essentially neutral. It is only when we process the thought associated with the observation that we colour it with an interpretation. This thought-based identification, called apperception, then leads to the triggering of emotions.
Imagine you see a wolf. First your brain identifies that it is a wolf. That part of the apperception is still neutral. It is only when we interpret the context that an emotion can be released. If it is a wolf in a book or in a cage, the first emotion triggered will not be fear. If it is a wolf in nature, walking freely at a visible distance from us, our brain will interpret this specific combination as relating to imminent danger. This goes so fast that these thoughts are not even articulated in words, but the non-verbal interpretation must happen before the basal ganglia can release a so-called fixed action pattern, which we call an emotion, in this example fear.
So the questions of why we have emotions and what their evolutionary advantage is can be answered quite simply: because we cannot always afford to go through the rational steps of ontologising, weighing and finding an appropriate heuristic in the normal way. The emotion is like a flickering light on our dashboard, a monitor warning us that something is wrong, that action is imminent and that we cannot reflect on an action but need to follow a fixed action pattern.

However, sometimes emotions give false signals, and how to recognise these false signals will be the topic of the next essay. The next essay will also give tips on how to control emotions and use them to our advantage. To understand emotions we have to try to build a more complete ontology of them and abstract generalised patterns therefrom. This is the topic of the present essay.

What are emotions?   

As I just identified, emotions are the fixed action patterns that are released as a result of internal and external monitoring: a dashboard indicator that lights up to make us aware of an imminent situation which may have repercussions for the system we are.

Emotions show whether a desired state has been reached or not in comparison to the preceding (status quo) state. They also show whether it is likely that a desired state will be reached or can be maintained, which we call anticipation. Let me illustrate this with regard to a few basic emotions: fear, sadness, anger, joy, excitement, jealousy and awe.
  • Fear is the anticipation that a desired state may not be reached or cannot be maintained. 
  • Sadness is the finding that a desired state has not been and/or cannot be reached or maintained. 
  • Anger arises when the action of a person or thing forms an obstacle to reaching or maintaining a desired state.
  • Joy is the finding that a desired state has been reached or maintained. 
  • Excitement is anticipation that a desired state is likely to be reached.
  • Jealousy arises when we compare our status quo with that of someone else and start to desire that state with the uncomfortable anticipation that we might be incapable of reaching that state. It is also an indicator that there may be a better state than the one we have reached. 
  • Awe arises when we admire the better state or achievement of someone else without desiring to reach that state, because we are convinced that we cannot reach that state anyway.   
Thus we have classified the emotions above in terms of positive and negative (possible) outcomes.
Emotions can therefore be defined as indicators that notify us of our state in comparison to a different (possible) state. In this sense they are useful as elements of an algorithm. Algorithms also need to monitor and assess whether a given state has been reached, and inform the user and themselves by feedback, so as to be able to progress to the next step. In designing artificial intelligence we also need such indicators. We may not call them emotions, but in fact these internal checks are artificial emotions.
Emotions are therefore useful elements which can help a system find its way in a complex environment to achieve a complex goal, which is exactly Ben Goertzel's definition of intelligence. Emotions protect us; they show us when to advance and when to retreat.
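To make the idea of "artificial emotions" concrete, here is a minimal sketch of the classification above as a state-comparison function. The function name and argument names are my own invention; the mapping simply mirrors the ontology of fear, sadness, anger, joy and excitement given earlier.

```python
def artificial_emotion(desired, current, anticipated=None, obstacle=False):
    """Map a comparison of states onto the basic emotions classified above.

    desired:     the state the system wants to reach or maintain
    current:     the state the system is in now
    anticipated: the state the system expects next (None if unknown)
    obstacle:    True if some agent or thing blocks the desired state
    """
    if obstacle:
        return "anger"       # something obstructs reaching/maintaining the state
    if current == desired:
        return "joy"         # desired state reached or maintained
    if anticipated is None:
        return "sadness"     # desired state not reached, no prospect in sight
    if anticipated == desired:
        return "excitement"  # desired state likely to be reached
    return "fear"            # anticipation that it may not be reached
```

For example, `artificial_emotion("safe", "in danger", anticipated="safe")` yields `"excitement"`, while the same call with `anticipated="in danger"` yields `"fear"`: the indicator reports the relation between status quo, anticipation and goal, exactly the role the essay assigns to emotions.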

Personality type ontology   

Most people are capable of having the same types of emotions, but not necessarily to the same degree or under the same circumstances. Men and women in particular often differ in this respect. Then there are stratifications of people into different types, such as those based on their level of accomplishment (in analogy to Maslow's pyramid), R. A. Wilson's typing of viscerotonic, musculotonic and cerebrotonic types, and the DiSC profile by Marston.

Maslow's pyramid of needs. Source: http://needsforlifestagessmhs.weebly.com/maslows-pyramid4.html
Since emotions are linked to needs and desires, a correspondence map with the hierarchy of Maslow's pyramid can be made. Fear relates to the level of physical needs such as shelter, food etc. At the next level of economic and social safety, anger is the more prevailing emotion. The next level of love and belonging corresponds to joy. The level of esteem corresponds to excitement, and the level of self-actualisation to serenity and equanimity. Feelings of superiority and inferiority, dominance and submission are associated with the levels of safety and esteem.
Another useful scheme for stereotyping people is the DiSC scheme used to type managers. Real people, however, often form a mix of the stereotypes I am going to describe, so don't take these stereotypes too seriously. Don't think I judge people in this way either; these are just traditional stereotypes described in various sources. It is by no means my intent to be condescending toward anyone.
The DiSC model stratifies people along two axes: the introvert-extrovert axis and the people- versus task-oriented axis. DiSC stands for Dominance, Influence, Steadiness and Conscientiousness.

DiSC model image source:   https://nl.pinterest.com/Stevros/disc-%2B/
Dominant personalities are extroverted and task-oriented: the typical military dictator. They correspond to Wilson's musculotonic type and the noblemen in the ancient caste system. Their behaviour is intimidating, based on feelings of power and superiority. They may use force to achieve their goals, not always taking into account the effects on other people.
Influence-type personalities are also extroverted but are more people-oriented. They are said to have better communication skills and to inspire people. Often they strive for progress. They correspond to the class of businessmen, the citadins or bourgeoisie in the caste system. They are excited, full of energy, seeking enjoyment in life. Their physical constitution in Wilson's terms would be quite normal. Their emotional approach is warm, welcoming and not hostile.
Steadiness represents introverted, friendly, naive people. Without wishing to denigrate, there is said to be a certain correspondence to the working class, including farmers, here. They are said to be fearful and suspicious and would rather keep things as they are. Some of them suffer from an inferiority complex. Their body constitution can be viscerotonic in Wilson's terms (i.e. a bit stout). They are more people- than task-oriented. In the ancient caste system they corresponded to the servants.
Conscientious personalities are the academic people, the clergy, scientists and geeks. Their body type is cerebrotonic in Wilson's terms, i.e. a thin, long body with a high head. Typically introverted and task-oriented, they like to do things alone. They don't understand or show emotions easily. They can exhibit somewhat autistic behaviour and show frustration if they can't get their message across. They will only show excitement when their mental creations are successful. They are often perplexed by the behaviour of other types, which they consider irrational, and are suspicious and reserved towards emotive personalities.
As said before, real people are a mix of these stereotypes and, depending on their sociocultural position in the pecking order, may have a certain proclivity towards attitudes and emotions that have been imposed on them by their cultural heritage and education.
Each of these groups has figured out a survival strategy belonging to its niche in the social patchwork. Their specific emotive sub-category patterns warrant them a certain degree of success in their niche.
Natural intelligence has selected certain fixed emotive release action patterns, which each group believes will yield the desired result. Their different desires, as regards degree of extraversion and people- versus task-orientation, result in different, sometimes colliding strategies. When people of different groups or classes meet in a complex context, many misunderstandings can arise due to their different sociocultural phenotypical programming.
Each group has a different set of filters to classify and interpret observations. Through the intergroup tournament, i.e. the competition and interaction between the groups, a natural pecking order can arise.

How emotions manifest   

Emotions often manifest physically in the form of increased muscular tension, pain in the belly, crying, numbing, wetting one’s pants etc.

Why we have emotions   

The reason we have emotions, as already said, is to warn us and make us aware that we have to take action. Without emotions there is no monitoring of internal states, and without such monitoring imminent danger would not be recognised. If there is no care for others, as expressed in the emotion of love, no societal complexity can arise. This is why mammals are more successful than reptiles: mammals have a limbic system capable of emotions. Emotions exist because there is not always time for a deep heuristic search to evaluate and consider a situation. The slower rational cortical process must sometimes be overridden by fixed action patterns in the amygdala. Hormones also play a role in bypassing the cortical neuronal information flux.

Process ontology   

I’d also like to ontologically map emotions with regard to an eight-phase cycle of perception, thoughts, action, and feedback corresponding to the intelligence algorithm described in my previous posts. The emotional classification by Plutchik (see figure) can be quite useful for this purpose.

Image source: http://puix.org/research/user-experience/plutchiks-wheel-of-emotion/

Phase 1: Perception / scan
A phenomenon, a stimulus (e.g. an object, a sound, a light) enters the brain via the sensory organs. This I'll call a perception stimulus. Emotion: interest-vigilance.

Phase 2: Identification / search
This phase first involves a descriptive subphase: a feature analysis and interpretation. The brain (lower thought processes) will then start searching, using association laws, for:
1) similarities, analogies and resemblances to mental objects stored in the memory (compare)
2) opposites, contrasts, differences (distinguish)
3) spatial associations (locate)
4) temporal associations (time)
Thus the brain will be able to organise, classify, categorise and ontologise the phenomenon.
Emotion: anticipation-optimism

Phase 3: Cognition / Conclusion / Judgement / Apperception (spirit)
On the basis of the most probable result of phase 2 (less probable pathways, unlikely to rapidly yield success, will be discarded), a conclusion will be reached and the phenomenon will be recognised or, if it turns out to be new, it will at least be associated with the phenomenon it resembles most. The conclusion is followed by a judgement about the phenomenon, which finalises the conclusive phase. By judgement I do not primarily mean defining whether something is morally good or bad, but rather the following: an emotion is observed, pleasant or unpleasant; a threat is observed; a problem is noticed; or indifference is concluded. The judgement phase can also involve an abstraction phase (higher thought processes) wherein the essence is distilled. This can be an archetypical form, but also an underlying principle, an algorithm. The emotions that follow, when positive, vary from serenity to joy or even ecstasy; when negative, from fear to terror, from pensiveness via sadness to grief, or from annoyance to anger.

Phase 4: Planning phase: Problem definition / Design solution (solve)
The outcome of phase 3 will lead either to the need to take action, if a problem must be solved, or to non-action, if the result is indifference or a negative judgement. Even if there are not sufficient resources to solve the problem, the brain will necessarily go through these phases of problem/solution evaluation, even if the outcome is that the resources (energy and/or capabilities) are insufficient to provide the solution. The brain can only acquiesce to that outcome if it has been able to evaluate that the resources are insufficient.
The problem must first be properly defined; then a solution-designing phase can take place, in which a search analogous to phase 2, using association laws, is performed. If a proper or seemingly analogous solution is already available, it will be used or at least taken as a starting point. If no good results are available, the evaluation may result in the assessment that the resources are insufficient (disappointment). Once a solution pathway is chosen, the details of its future implementation will be planned.
This phase also goes through an evaluation of the degree of necessity/urgency and the availability of resources, the product of which must exceed a certain threshold in order for action to be taken. Furthermore, this phase involves a moral assessment as to the desirability of attaining the solution to the problem or the fulfilment of the need/desire. Here the modal principles as described by Kant come into play. Accompanying emotions: acceptance-trust (positive); disappointment: disgust-loathing-annoyance-anger (negative).

Phase 5: implementation phase (score) / behaviour
Firstly, a decision will be taken as to whether to use the willpower (Ego) to implement the envisaged solution. Note that the details of the solution may not be known to the brain; it will rely on intuition, a hunch, a "Fingerspitzengefühl", that a right solution has potentially been chosen, or at least one worth trying. Via the organs of action, the solution, or at least its first stages, will be implemented in the material world. This is where the mind makes itself known to the material world and where an action materialises. Accompanying emotion: submission (of the material world to its needs)-apprehension-fear (of failing).

Phase six: perceive reaction / observe result / effect
The feedback loop starts here. The mind will perceive the results of what has been materialised in action: the reaction of an interlocutor, the stroke of a pencil etc. This will strike the brain positively or negatively. Emotion: surprise-amazement-awe.

Phase seven: evaluation / sense
The result will now be evaluated, pondered and meditated upon. A search as in phases 2 and 4 will classify the result, which will then be estimated/measured by the discriminating faculty of the intellect. The outcome can be disappointment or satisfaction. Emotion: negative: pensiveness-sadness-grief; positive: serenity-joy-ecstasy.

Phase eight: re-evaluate / reject or reinforce (serve)
On the basis of the judgement by the intellect, the emotions will react negatively, by rejection of the action one undertook, or positively, by admiration of what has been achieved, leading to a reinforcement of the chosen solution. Negatively, the disgust and loathing of the failure may even turn into contempt of oneself and lead to annoyance, anger or rage. Annoyance will lead to abandonment of the search for a solution to the problem; anger or rage may be an incentive to look for a totally different solution. Positively, a virtuous cycle is entered and the solution may become a more permanent tool.
Note that this eight-phase scheme encompasses the well-known five-step scheme of stimulus - cognition - feeling - behaviour - effect, but I do not see the emotions as separate from the cognition; rather they are part of it, and furthermore, I figure that each phase has its inherent emotions.
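The eight phases above can be condensed into a small data structure that can be walked per stimulus. The phase and emotion labels are shorthand for the descriptions in the text; this is a schematic illustration of the cycle, not a cognitive model.

```python
# Condensed (phase, accompanying emotion) pairs for the eight-phase cycle.
PHASES = [
    ("perception / scan",          "interest-vigilance"),
    ("identification / search",    "anticipation-optimism"),
    ("cognition / judgement",      "serenity-joy / fear / sadness / anger"),
    ("planning / solve",           "acceptance-trust / disappointment"),
    ("implementation / score",     "submission-apprehension-fear"),
    ("perceive reaction / effect", "surprise-amazement-awe"),
    ("evaluation / sense",         "pensiveness-sadness / serenity-joy"),
    ("re-evaluate / serve",        "rejection / reinforcement"),
]

def cycle(stimulus):
    """One pass through the cycle: yields (stimulus, phase, emotion) triples."""
    for phase, emotion in PHASES:
        yield stimulus, phase, emotion
```

Walking `cycle("a wolf at a distance")` yields eight triples, one per phase, making explicit the claim that every phase, not just a separate "feeling" step, carries its own inherent emotions.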


We have seen how the natural intelligence algorithm uses emotions as indicators of the (potential) achievement of desired states; how emotions depend on circumstances and sociocultural programming; and how emotions result in necessary fixed action patterns, which may seem irrational but follow a pre-programmed sequence directed at reacting to imminent situations. That emotions can sometimes become long-term problems, as in long-term grief, points to the fact that emotions can derail beyond what they are naturally intended for. How to deal with such undesired derailed emotions is inter alia the topic of the next post, which deals with how we should navigate the emotional quagmire of false signals.


The ontology of emotions and the notion of emotions as indicators of internal states should be further explored for the development of artificial intelligence. If we can create emotional machines capable of compassion, perhaps we need not be so afraid of the dystopian doom scenarios that are predicted upon the advent of strong AI. The machines may then not be inclined to wipe us out. Moreover, I predict it is even vital to the design of artificial consciousness that there be internal monitoring feedback mechanisms. For that is what consciousness actually is: a feedback mechanism that integrates information to update the status of the entity in question.

Image source of the bacterial colony on top: https://en.wikipedia.org/wiki/Social_IQ_score_of_bacteria       

Monday, 21 November 2016

Is Intelligence an Algorithm? Part 4: Problem Solving

Dendritic structures on a Petri dish show an uncanny similarity to neuronal dendritic structures.

I first published this article on Steemit: https://steemit.com/ai/@technovedanta/is-intelligence-an-algorithm-part-4-problem-solving

We have now arrived at the most important part of this series on intelligence as an algorithm: problem solving. Thus far I have exposed my hypothesis that intelligence functions as a kind of algorithm in part 1 of this series. Part 2 related to cognition, (pattern) recognition and understanding. Part 3 explored the process of reasoning, which is necessary to come to identifications and conclusions and is also a tool in the problem-solving toolkit. In this part I will discuss how we identify and formulate a problem, how we plan a so-called heuristic to solve it, how we carry out the solution and check if it fulfils our requirements.

Problems arise due to a discrepancy between the status quo of a system or living being, which has a deficiency of a certain kind, and a desired future state without that deficiency. This discrepancy can be internal or external.

For instance, if a part is broken in a system (e.g. a motor) or if a living being is diseased, the system doesn't function as it is supposed to; it functions in a suboptimal manner. There is an internal functional deficiency, which is most often caused by a defect at the structural level. A cogwheel may be broken, a fuse burnt or a gene deregulated, to name a few.
External problems arise due to stimuli from the environment (or the absence of stimuli where there should be some) and the relations of the system or living being thereto. A road may be bad, hampering the movement of a vehicle; there may be a lack of resources in the environment for the system to be able to function; or there may be a hostile opponent endangering survival, to name a few. The problem may also be of a psychological nature, if we see that someone else is more successful than we are and we start to envy what the other has achieved. In that case too there is a deficiency of the status quo compared to the desired future state.

If we notice a system has a deficiency, we have to identify what exactly the deficiency is. This may be more difficult than these words suggest: The presence of a problem is often only apparent from the symptoms of a dysfunctional behaviour of the system. But the symptoms do not always reveal the structural cause of the problem.

So the first part of problem solving relates to the identification of the problem. From the symptoms we have to analyse the underlying causes to come to a correct diagnosis. We have to search for and explore which phenomena can cause the observed deficiency and check whether something is indeed wrong with one or more of these phenomena. If so, that may be the cause of our problem and we have identified it. Correct diagnosis can be a daunting task which requires a strategy of its own. In fact, the diagnosis of a problem is a problem in itself and requires the same strategic considerations as the stages of the problem-solving process which follow the diagnosis.

To assess the problem we have to map everything we know about it: symptoms, potential causes and a good description of the goal we wish to achieve. In fact we have to build an ontology (a descriptive list of features and relations describing an item) of the different aspects of the problem, which can be considered the internal part of the problem description.

Once we have correctly assessed the problem we can start to devise a strategy to solve the problem. Fortunately, I don’t have to develop the toolkit for solving problems from scratch. I can stand on the shoulders of a giant who has thoroughly mapped and described the process of problem solving: George Pólya.

In his book “How to solve it” published as early as 1945, Pólya proposes a universal system of thinking, which can help you to solve any problem. Pólya describes this as a four step process, the instructions of which can be called an algorithm in line with my thesis that intelligence functions as an algorithm.
The four steps are:
1) understand the problem
2) make a plan
3) implement the plan
4) verify the result and see if it can be improved.
I will discuss these, enriched with ideas from my own insights.
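Pólya's four steps can be sketched as a control loop, in line with the thesis that intelligence functions as an algorithm. The function names below are placeholders the reader would supply; only the control flow reflects Pólya's scheme, including the loop back from step 4 when the result needs improvement.

```python
def polya_solve(problem, understand, plan, implement, verify, max_rounds=3):
    """Skeleton of Pólya's four-step process as an algorithm.

    understand, plan, implement, verify are caller-supplied callables;
    verify returns (ok, updated_model) so a failed check refines the model.
    """
    model = understand(problem)            # step 1: understand the problem
    for _ in range(max_rounds):
        strategy = plan(model)             # step 2: make a plan (heuristic)
        result = implement(strategy)       # step 3: carry out the plan
        ok, model = verify(result, model)  # step 4: look back, improve
        if ok:
            return result
    return None                            # resources/rounds exhausted
```

Even this toy skeleton makes one point of the essay visible: verification feeds back into the model of the problem, so "looking back" is not an afterthought but part of the loop.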

As to understanding observations, I have written in the second part of this series about analysis, consideration and building an ontology of the observation. These principles also apply to the understanding of a problem. In the analysis we must identify the 6 W's (Who, What, Why, Where, When and hoW), of which the identification of the goal of the problem to be solved is the most important aspect. A good trick to see whether you have understood the problem is to restate it in your own words. Ideally, you can abstract the essence of the problem into a simplified pictorial representation: an image or glyph.
You will not only profit from ontologising the problem in the form of a list of features, relations, laws, equations, conditions and restrictions, but you will improve your understanding even more if you can visualise the ontology in a diagram, a map.
The best way to map the ontology of the problem is to put the more relevant elements more centrally in the image, in bigger characters and/or in a bigger frame. This generates a cloud of terminologies. Now you connect the different items with lines that represent their relations. The thickness of the lines can be indicative of the relative importance of the relation. This can result in a mapped ontology which can look like this:

Other types of graphical representations or schemes can also be useful, like a dendrogram, a grid etc.

If you want the distances between the different items to represent their relative importance (less important elements further from the centre, and the closer two items are to each other, the more relevant they are to each other), this can be a difficult task. In the annex to this essay I will provide more details on how you can do this. Ideally your ontology map shows all 6 W's.

This ontology map of the problem can enable you to see whether you have enough information to solve the problem and if not prompt you to data mine for further essential information.
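The mapped ontology described above can also be held in a simple data structure: nodes carrying a relevance weight and edges carrying the importance of each relation (the thickness of the lines). The example data and function name below are invented for illustration.

```python
# A weighted ontology map: node relevance and relation importance as numbers.
ontology = {
    "nodes": {"problem": 1.0, "symptom": 0.8, "cause": 0.9, "goal": 0.7},
    "edges": {("problem", "symptom"): 0.9,
              ("symptom", "cause"): 0.7,
              ("problem", "goal"): 0.5},
}

def strongest_relations(onto, n=2):
    """Return the n most important relations, mirroring the thickest lines."""
    return sorted(onto["edges"], key=onto["edges"].get, reverse=True)[:n]
```

Querying the structure, e.g. for the strongest relations, is one concrete way such a map can "prompt you to data mine": relations with very low weights, or nodes with no edges at all, flag where information is missing.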

Vital to understanding a problem is that you understand all of the concepts and terminologies involved and, if you don't, that you update your information set with the missing meanings. You can also make a list of the questions which are still unanswered. Once we have understood the problem, in line with Buckminster Fuller's recipe we can now articulate our understanding, i.e. formulate the problem in a manner as detailed and precise as possible.

To Devise a Plan and Heuristics

We are now close to searching for a solution to the problem. It is important that we list the differences between the status quo and the desired solved state, ideally in terms of structures and functions and their associated effects. We can then start to devise the strategy to find a solution.

If it is a known problem, we can adopt a conservative approach and simply consult books or databases that explain how to solve it. However, most problems are unknown to us, and we need a practical method to advance stepwise towards the solution.

This is the topic of "heuristics" (from the Greek for "to find, to discover"). A heuristic is a practical method to solve a problem; it is not guaranteed to be perfect or optimal, but it is good enough to get us started, simplify the overload of data we have to deal with and allow us to progress.
You could call a heuristic a way of making an "educated guess". A heuristic is a way of exploring a potential solution space. Computer algorithms often use structured heuristics to solve a problem. But we do too, even if we may not always be aware of it. For instance, in board games such as "Battleship", you can use a strategy of launching your missiles in a more or less homogeneously distributed way over the 2D grid to explore the enemy's space. In the board game "Go" it is advisable first to conquer the corners and edges of the board, whereas in chess it is a general strategy first to strengthen your position in the centre. All these strategies of topological sequential advancement are types of heuristics.
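The Battleship example can be made concrete. One classic homogeneous-distribution heuristic is to probe the grid in a checkerboard pattern: every ship of length two or more must cover at least one cell of one parity, so half the cells suffice for the search phase. A sketch of the probe list, not a full player:

```python
def checkerboard_probes(size):
    """Cells (row, col) with even row+col: a homogeneously spread probe
    pattern guaranteed to hit any horizontally or vertically placed ship
    of length >= 2 on a size x size grid."""
    return [(r, c) for r in range(size) for c in range(size)
            if (r + c) % 2 == 0]
```

On a 10 x 10 board this heuristic halves the worst-case number of shots needed to find a first hit, which is exactly the point of a heuristic: not optimal play, but a structured exploration of the solution space.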

Your intelligence and problem-solving skills may significantly improve once you realise that you have to make a plan with different stages to solve your problem, and that choosing the best heuristic will give you the highest chance of success in the shortest possible time.

First we should try to find an analogous problem in a related technical field and see what type of solution was used to solve it. Here we can use the ontology of the problem and apply a search involving pattern recognition to find a problem which is identical or most similar to our problem.
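The search for the most similar known problem can be sketched as pattern recognition over ontologies: represent each problem as a set of features and rank known problems by overlap. The Jaccard similarity used here is a standard overlap measure; the feature sets in the test are invented examples.

```python
def jaccard(a, b):
    """Similarity of two feature sets: size of overlap over size of union."""
    return len(a & b) / len(a | b)

def most_similar(problem_features, library):
    """library maps problem names to feature sets; return the closest
    known problem to the one described by problem_features."""
    return max(library, key=lambda name: jaccard(problem_features, library[name]))
```

This is of course a crude stand-in for real analogy finding, but it captures the step described above: ontologise the problem, then search the library of known problems for the best pattern match before inventing anything new.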

Alternatively, we can try to see if we can find a similar result or effect in a neighbouring technical field even if the problem we try to solve was not explicitly mentioned there and see which structural features or configurations thereof yield the desired result or effect. We can then try to apply such a solution to solve our problem.

We can also try to find either a more general or a more specific known problem in the prior art.

If possible, we should split our problem into smaller sub-problems which we can solve independently and later integrate into an overall solution.

We can again ontologise and map the problem vis-à-vis such alternative solutions, giving a potential solution-scape (in imitation of the word landscape).

Some tools/considerations in Pólya’s toolkit regarding heuristics are the following:
Guess and check (this is a very simple random trial and error heuristic),
Eliminating possibilities which are likely to fail (for this you need to use reasoning as explained in part 3 of this series),
Use symmetries and complementarities. Pólya instructs us to look for patterns, draw pictures or make models.
Most importantly, Pólya suggests considering special cases of the problem: it is easier to solve the problem for a concrete instance thereof and then abstract generalised rules for the general problem than to start solving the general problem from scratch. Pólya calls this "solve a simpler problem first".
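Two of the toolkit items above, "guess and check" and "eliminating possibilities which are likely to fail", combine naturally into one small routine. The function and its arguments are my own illustrative names.

```python
def guess_and_check(candidates, cheap_filter, check):
    """Guess-and-check with elimination: skip candidates a cheap test
    already rules out, run the full check only on the survivors."""
    for guess in candidates:
        if not cheap_filter(guess):  # eliminate likely failures early
            continue
        if check(guess):             # full (possibly expensive) verification
            return guess
    return None
```

For example, searching for a divisor of 91: since 91 is odd, a cheap filter can discard all even candidates before the division is ever tried, which is the elimination heuristic in miniature.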

Another heuristic he proposes is to "work your way back" from the desired solved state. A specific example thereof is "reverse engineering". Reverse engineering applies when you find something which solves your problem but which is a black box to you: you don't know how it functions and you cannot make it yourself. For instance, you find an alien spaceship with great technology and you try to figure out how it is made and how it functions by disassembling, dissecting and analysing it, to give you clues how to put it back together and make it work. Biotechnology often works that way: we have to dissect complex systems into their simpler parts and figure out how they synergistically function together. In such cases "reverse engineering" is helpful.

However often we don’t have the luxury of finding such a working solution and we only have a mental conceptualisation of the desired result. Even then you can try to work your way back. In chemistry scientists use the technique of “retrosynthesis”, which is the best heuristic to figure out how to make a complex desired molecule by figuring out which simpler molecule with only one or two differences could precede the desired molecule and do this repeatedly at each stage for each identified simpler molecule until you arrive at two known starting materials, which you can purchase.

To solve a problem we can essentially only perform three types of actions: we can add something, we can take something away or we can modify something. A modification, i.e. the substitution of an item for another, basically involves taking the item away and replacing it with a similar one. Advancing in a territory means taking your pawns away from one location and adding them to a different position. Substitution essentially is successive subtraction and addition.
Certain problems can only be solved by omitting a constituent, a simplification which yields better results. Other problems require the addition or substitution of an element. It is important to know that such additions or substitutions can have unforeseen synergistic effects, which is called strong emergence.

Many inventions and scientific advances involve the observation of such a surprising effect. In fact, scientists often make an observation they were not looking for at all, but which is even more interesting than what they were looking for. This is called “serendipity”. The resulting scientific publication is nevertheless presented as if this was exactly what they were looking for, but don’t let yourself be fooled: they often reverse engineer the hypothesis and scientific process by which they should have arrived at the serendipitous result, as if it were a planned problem-solving process relating to an existing hypothesis or problem.

The exploration of a solution space of potential solutions, of alternatives to the status quo, is a “screening” process. The way you strategically plan your screening protocol, by prioritising certain types of alternative solutions over others, is your heuristic. It often involves eliminating directions which are likely to be unsuccessful, a process called “pruning”. Since prioritisation is a daunting task in itself, I will give you a strategy for setting priorities in the appendix to this essay.
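The screening-plus-pruning idea can be sketched as follows. This is a toy illustration: the `promising` filter and the `score` function stand in for whatever cheap and expensive evaluations a real screening campaign would use:

```python
def screen(candidates, promising, score):
    """Screening with pruning: discard ('prune') candidates that a cheap
    heuristic judges unlikely to succeed, then rank the survivors by a
    more expensive score, best first."""
    survivors = [c for c in candidates if promising(c)]   # pruning
    return sorted(survivors, key=score, reverse=True)     # prioritising

# Toy example: screen the integers 0..19, pruning odd ones,
# and scoring the survivors by closeness to 10.
ranked = screen(range(20), lambda n: n % 2 == 0, lambda n: -abs(n - 10))
```

The point of pruning is economic: the cheap filter runs on every candidate so that the expensive evaluation only runs on the survivors.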

To follow a conservative, known heuristic can be the start of a problem-solving strategy. However, if this fails, more creative strategies may be needed: out-of-the-box thinking, in which the exploring problem solver ventures into unknown territories, for instance by looking for a similar problem, effect or ontology in unrelated technical fields, or by checking whether a different ontology nonetheless has the same map structure, which is indicative of similar functional behaviour despite the differences in content.

Later in this essay I will show in what way evolutionary living systems explore creative solutions.

Once a promising potential solution has been identified, the remainder of the problem-solving process, the actual implementation, must be planned time-wise and subsequently carried out. It is important to set milestones for what should be achieved at a given stage in the process of implementing the solution. This allows for monitoring whether one is progressing in the right direction towards the solution, and for adjusting the process, steering away or even returning to an earlier stage of the development if it is drifting away from the desired solution. In other words, we need an operational feedback-check mechanism involving intermediate measurements and checks for fulfilment of conditions, restrictions, equations and laws as intermediate results.
Before implementing the solution in a real situation it can be advisable to first simulate it in a computer if possible and if a mathematical model is available or can be made.

There are quite a few mathematical computer heuristics, like random-walk Monte Carlo methods. Treating these falls outside the scope of this essay and would be of no interest to the educated scholar, who has a better understanding thereof than I do. This essay is for the layman using everyday calculation tools such as Excel. There are two numerical methods in Excel for solving complex problems which many people don’t know about and which I would like to share with you: “Goal Seek” and “Solver”. They are very easy to use and useful, for instance, in recalculating mortgage changes. Numerical methods such as Goal Seek use successive increments of a value. Solver is a more versatile version of Goal Seek, which can be used to optimise when facing multiple constraints.
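Under the hood, Goal Seek performs a one-dimensional numerical search of this kind. Here is a rough sketch in Python of what such a search does, using bisection rather than Excel’s actual algorithm, applied to a mortgage-style annuity question (the loan figures are made up for illustration):

```python
def goal_seek(f, target, lo, hi, tol=1e-9):
    """A Goal-Seek-style numerical search: adjust the input between lo and hi
    (here by bisection) until f(input) is within tol of the target.
    Assumes f is monotonically increasing on [lo, hi]."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if abs(f(mid) - target) < tol:
            return mid
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical example: which monthly interest rate r makes a 200,000 loan
# cost 1,200 per month over 360 months? (standard annuity formula)
payment = lambda r: 200_000 * r / (1 - (1 + r) ** -360)
rate = goal_seek(payment, 1_200, 1e-6, 0.05)
```

Solver generalises this picture: instead of one input and one target it adjusts several inputs at once, subject to constraints.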

Once a solution has been reached, another check is carried out to see if the solution is satisfactory under all thinkable conditions and whether a better, more optimal solution can be envisaged. You can also store this problem and its solution in a problem-solving database, to accelerate future problem solving.

The different possibilities I have suggested for problem solving can be implemented in an organised and prioritised way by the intelligence algorithm: conservative heuristics first, followed by ever more daring, creative ones if success does not ensue.

Now I wish to return to the exploration of creative solutions in biological societies such as beehives, anthills or bacterial colonies. In these societies there is a group, mostly of workers, which implements a conservative strategy to maintain the status quo. These are called the “conformity enforcers”. The system also has “inner judges” that monitor whether the group standard or morality is maintained. However, in times of difficulty, such as a lack of resources, individualistic explorers are needed who venture into unknown territories and seek alternative resources: these are the “diversity generators”. If they do find greener pastures and hit the proverbial jackpot, their success is celebrated beyond measure. The system can start to boom again, and the group of “resource shifters” will exploit the new resources or educate the conformity enforcers to do so. Exhaustion of resources gives rise to “depressions” in the system, putting the system on a low-metabolism regime or even shutting it down entirely in a hibernation mode, safeguarding the information in well-protected spores, which can bloom again in the future under more fruitful conditions.

Biological and physical systems are thus part of what Howard Bloom calls the “evolutionary search engine”. This evolutionary search engine is a natural intelligence that solves problems by a screening and pruning process and thus implements a solution. Intelligent systems mimic each other, absorb each other, niche or arrive at a symbiosis. They can also eliminate contenders in an intergroup tournament or mutate into something different. These are so-called “fission-fusion” strategies. The result of such an intergroup tournament need not be dominance or extermination. Sometimes they arrive at a kind of exchange of features, a biological commerce as a form of symbiosis, which shows that even natural intelligence is capable of finding the so-called “Nash equilibrium”.

The mathematician John Nash realised that the overall result of cooperation between parties can be better, or have a higher probability of success, than competition between those parties. This resulted in his theory about bargaining as a part of “game theory” and in the so-called “Nash equilibrium”, the equilibrium reflecting the best overall result. He realised this when he and his male fellow students met a group of female students. If all the boys had competed for the most beautiful girl, probably none of them would have been successful, or at best one. If instead they agreed to ignore the most beautiful girl in the group and not to poach each other’s territory, chances were that more of them would succeed in picking up a girl.
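For small games, such equilibria can be found by brute force: a cell of the payoff table is a pure-strategy Nash equilibrium if neither player can improve by unilaterally switching. A sketch, with hypothetical payoff numbers loosely inspired by the anecdote (two boys each pick one of two girls; going for the same girl pays off badly, splitting up pays off well):

```python
def pure_nash(payoff_a, payoff_b):
    """Return the pure-strategy Nash equilibria of a two-player game:
    cells where neither player gains by changing only their own choice."""
    rows, cols = len(payoff_a), len(payoff_a[0])
    equilibria = []
    for i in range(rows):
        for j in range(cols):
            best_row = all(payoff_a[i][j] >= payoff_a[k][j] for k in range(rows))
            best_col = all(payoff_b[i][j] >= payoff_b[i][k] for k in range(cols))
            if best_row and best_col:
                equilibria.append((i, j))
    return equilibria

# Hypothetical payoffs: choice 0 = most beautiful girl, choice 1 = her friend.
a = [[0, 3], [2, 1]]   # row player's payoffs
b = [[0, 2], [3, 1]]   # column player's payoffs
eqs = pure_nash(a, b)  # the two 'split up' outcomes are the equilibria
```

With these numbers, both “split up” outcomes are stable, while both boys chasing the same girl is not, which is the essence of the story.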

Cooperation can not only lead to bargains with an overall better outcome for the participants together, avoiding that someone is left out; it can also yield synergistic effects. It is only not worthwhile if the cooperation would slow down the process, which can happen if you have too many participants for the work to be done, a situation known as the law of diminishing returns.

Intelligence in human social, psychological and emotional settings, however, involves a different type of intelligence, an additional degree of complexity and hidden motivations I haven’t touched upon in this essay. These apparently more irrational considerations at stake in humans will be the topic of a further essay in this series.

I hope you have enjoyed the different suggestions for problem solving as a part of an intelligence algorithm. If you liked it, please follow me. Comments and suggestions are very welcome.



The easiest way to prioritise a number of actions to be taken is to assess their relative priorities pairwise in a grid.
If the pairwise priorities are like this (the symbol > here means “having priority over”): A>D, B>A, B>C, B>D, C>A, C>D, then you can represent this in the following grid:

Equal relevance scores a 0. If a horizontal action is more important than a vertical one (e.g. B has priority over A), a 1 is assigned. If a vertical action is more important than a horizontal one (e.g. C has priority over A), a 0 is assigned. The higher the horizontal sum of the values in a row, the higher the priority of the item. Here the result is B>C>A>D. This we could have seen without a grid for four items, but if you have 15 items to put in order, this is a very fast way to prioritise.
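The same grid calculation is easy to automate, which matters once you have 15 items instead of 4. A sketch in Python using the pairwise priorities from the example:

```python
def prioritise(items, beats):
    """Rank items from pairwise comparisons: each item scores 1 for every
    other item it has priority over (the row sum of the grid);
    a higher total score means a higher priority."""
    scores = {x: sum(1 for y in items if (x, y) in beats) for x in items}
    return sorted(items, key=lambda x: scores[x], reverse=True)

# The example from the text: A>D, B>A, B>C, B>D, C>A, C>D
beats = {("A", "D"), ("B", "A"), ("B", "C"), ("B", "D"), ("C", "A"), ("C", "D")}
order = prioritise(["A", "B", "C", "D"], beats)
```

This reproduces the result B>C>A>D from the grid.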

Topological relevance mapping

Now, in addition to the priority of the items, we need values for the importance of the pairwise relations, e.g. AB=2, AC=3, AD=5, BC=4, BD=5, CD=3. In this example the priority order is given to be A>B>C>D. How do we put this in a 2D map, where high-priority items are closer to the origin and lower-priority items lie further away? We first draw a series of concentric circles at equal distances of 1. We place our most important item A at the centre. Then we take a horizontal rod of length 4, which is the length of BC, and shift it over the screen until its respective ends touch the second and the third circle simultaneously (which represent AB=2 and AC=3 respectively). Then, with rods of length 3 and 5, corresponding to CD and AD (or BD) respectively, we position D at the right position to give the following result:

Topological Relevance mapping

Thus we have been able to map items according to their priorities and with pairwise distances representing their relative relevance.

Friday, 18 November 2016

Is intelligence an algorithm? Part 3: Reasoning

Dendritic structures on a Petri dish show an uncanny similarity to neuronal dendritic structures.

(This essay has for the first time been published on steemit: https://steemit.com/ai/@technovedanta/is-intelligence-an-algorithm-part-3-reasoning)

One of the most important tools of the Intelligence algorithm is “Reasoning”. Before we can explore the more challenging topics of “Problem-Solving” and “Heuristics”, which I will discuss in part 4 of this series, we must first get a thorough understanding of the process of reasoning, as we won’t be able to devise Problem-Solving strategies without it.

In the seminal post in this series I have discussed the possibility that natural intelligence might be a kind of algorithm and what this can mean for the design of artificial general intelligence. This first post you can find here:
In the second post of this series I have discussed cognition, (pattern) recognition, memory, abstraction, analysis, understanding and information retrieval as essential parts of the intelligence algorithm. You can find this post here:

In this essay I will first summarise what “reasoning” is and how it functions. Then I will try to show that it has an algorithmic nature and is in fact one of the integrated tools of intelligence. As part of this argumentation I will also touch upon the topics of rhetoric, causation, reality and truth.

Reasoning is often defined as thinking in a logical way to come to a judgment or conclusion. Logic itself is the set of rules we apply in this thinking process; it is the instrument used, but it is not identical to the thought process called “reasoning”. The steps in reasoning that take us from a premise (an assumption, also called a proposition, which is believed to be true) to a conclusion we call “inference”.

Traditionally there are three types of inferences:
Deduction, induction and abduction.

Deduction is the process in which a specific instance is compared with a general rule for a class of items assumed to be true. If the specific instance belongs to this class, it will also follow the rule.
All men are mortal (general rule for class)
Socrates is a man (specific instance of class)
Hence Socrates is mortal (conclusion)

Induction is the process in which multiple instances appear to follow a general pattern, from which it is then predicted that a further such (future) instance will follow the same pattern.
Until now the Sun has always risen to start the day (general pattern)
It is likely there will be a tomorrow (specific future instance)
I predict that tomorrow the Sun will rise to start the day (predicted conclusion)

Abduction is the process in which for a specific instance (e.g. an observation, an effect) a reason or cause is speculated, which is known to give that result, whilst there can also be other factors yielding that result. It is another word for guessing.
The lawn is wet (specific instance which might or might not belong to a class)
When it rains, the lawn is wet (general rule for class)
Hence it has rained (conclusion)
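For what it is worth, the three inference patterns can be caricatured in a few lines of code. This is purely a toy illustration with hypothetical data, not a serious model of reasoning:

```python
# Deduction: general rule + specific instance -> certain conclusion.
humans = {"Socrates", "Plato"}
def is_mortal(x):                    # "all humans are mortal" as a rule
    return x in humans
socrates_mortal = is_mortal("Socrates")

# Induction: many past observations -> a predicted general pattern.
sunrises = [True] * 10_000           # "the sun rose every day so far"
predict_sunrise = all(sunrises)      # likely, but never certain

# Abduction: observed effect -> a guessed cause among several candidates.
known_causes = {"wet lawn": ["rain", "sprinklers"]}
guessed_cause = known_causes["wet lawn"][0]   # "rain" is only one candidate
```

The code makes the asymmetry visible: deduction follows mechanically from the rule, while induction and abduction both smuggle in an unverified assumption.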

Abduction is also called a logical fallacy, because the conclusion you arrive at need not be true; in the example given the lawn could also have been wetted by sprinklers.
Both induction and abduction are uncertain ways to come to knowledge. Deduction is said to be the only certain process. But there is a snag here: The very premises of deduction (when they relate to physical phenomena) have been obtained by induction. In fact, we only assume that all men are mortal because until now we haven’t seen an immortal one. Deduction works if the premises themselves are mental constructs the truth of which is asserted by definition, but that does not give us any certainty about the physical truth of such statements.
Induction and deduction however give us a strong probability that our conclusions will correspond to what can be observed.

There is also reasoning by analogy, which is likewise a logical fallacy: because a specific instance belongs to a general class, it is concluded that this specific instance has all the features of another instance of that class. This can lead to aberrant nonsense, as illustrated hereunder:
A man is a human
Beyoncé is human
Beyoncé is a man

A complete list with logical fallacies can be found on Wikipedia, https://en.wikipedia.org/wiki/List_of_fallacies

It is not my purpose in this article to treat each one of these in detail. But if you wish to increase your intelligence I recommend you to have a look at these. It will improve your understanding of the world and people around you and you will be able to interact therewith in a more intelligent manner. You will be able to avoid wrong conclusions that do not get you near the purpose of your intelligence, which -as I said before- is to achieve complex goals.

Logical fallacies often do not respect the principle that general rules or patterns must be grounded in a sufficient number of independent observations (i.e. by different people). In order to arrive at a plausible conclusion, it must be probable. In order to be probable, it must be statistically relevant and have a sufficiently large base of grounding observations.
Logical fallacies also arise due to insufficient knowledge about the general and the specific, and about classification schemes.

One of the logical fallacies I do want to mention, because of its significance in science, is the correlative fallacy described by the Latin proverb “post hoc, ergo propter hoc” (after this, therefore caused by this): if B comes after A, you conclude that A caused B. But we have plenty of examples in which correlation does not imply causation.

For instance, when people are eating more ice-creams, there are more shark attacks. If you then conclude that eating ice-creams causes shark attacks, you commit the above mentioned fallacy.
Often correlated phenomena have a common underlying cause. In the case above, it was a warm day, which makes more people swim in the sea, which attracts more sharks. The warm day also makes people eat more ice-creams.

How can we then assess what the cause of a phenomenon is? Scientists change one parameter, while keeping the others constant. If a change in parameter A, systematically results in a more or less proportional change in effect B, they usually conclude that A causes B. This may be the case, and for the sake of being practical it is useful to assume it is, but it is not necessarily so.

When we reason on the basis of cause and effect, we are still stuck with a mechanistic type of thinking which belongs to the seventeenth century: the universe of Newton, Huygens, Copernicus etc., in which the celestial bodies move with clockwork precision, everything moving as if triggered by a plethora of cogwheels.
Quantum mechanics shows us that at the quantum levels many of the deterministic premises do not hold. Since we are nothing but aggregates of quantum processes, how can we be so sure that cause and effect as we believe exist, really do exist? I will come back to this issue later in this essay.

The branch of artificial intelligence (AI) that tried to work with parsing, specific rules and logical inferences has not been the most successful branch. It only leads to algorithms that can be applied in very specific contexts. Certainly useful in a specific context, but this will not of itself evolve towards artificial general intelligence (AGI), which can operate independently of context, let alone towards a human level of intelligence. The more successful branch of AI, which works with Bayesian networks and probabilistic inference, is based much more on correlations than on cogwheel-type cause-effect relations.

A good example thereof is latent semantic analysis, which is used in the IBM DeepQA engine Watson, known from the popular program “Jeopardy!”, in which people compete with a computer in answering questions. Latent semantic analysis is based on Bayesian proximity co-occurrence of terminologies: if terminologies occur together in a statistically relevant way, they belong together, and together they provide context and meaning.
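The raw signal behind latent semantic analysis is simply co-occurrence counting; the real method then factorises the resulting term matrix, which I omit here. A toy sketch of the counting step, with made-up sentences:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(sentences):
    """Count how often pairs of words occur in the same sentence — the raw
    signal from which latent-semantic-style methods build meaning."""
    counts = Counter()
    for sentence in sentences:
        words = sorted(set(sentence.lower().split()))
        counts.update(combinations(words, 2))   # every unordered word pair
    return counts

docs = ["the sun rises", "the sun sets", "sharks attack swimmers"]
pairs = cooccurrence(docs)
```

Words that keep appearing together accumulate high counts and so end up “belonging together”, exactly the proximity co-occurrence described above.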

In fact, this is possibly also the way the brain builds ontologies when it abstracts patterns of features and relations. Every ontology is said to be at least a didensity: you need at least two terminologies to arrive at a relation which provides meaning; a concept may even require three terminologies. Interestingly, when items are connected in the brain, when there is a neuronal association between stored concepts, thinking of one of them will automatically trigger thinking of the associated concept, according to the well-known neuroscience adage: neurons that wire together, fire together.

Could it be that our brains function much more like a Latent semantic analysis based program? That the logical inferences are only made after an association is detected, as a kind of proof-reading mechanism, which verifies whether the correlation is useful and in what way?

I already mentioned that we have no certainty that cause and effect, as we believe they exist, really do exist. Quantum mechanics seems to reveal that the arrow of time can work in two directions, and not necessarily only in the one direction we observe, as has become evident from Wheeler’s delayed-choice variant of the double-slit experiment. This poses questions about the possibility of retrocausation: present events being caused by events in the future. But there is also another explanation, which puts causality at a deeper level of reality. Modern physics is venturing more and more into the field of digital physics, led among others by the Dutch physicist Verlinde, which considers the universe as a kind of quantum computational substrate. In this model everything is information. Gravity and entropy are not the direct consequences of the movements of corpuscular bodies but rather the consequence of information-processing laws, algorithms at the deeper level of the computational substrate. Entropic gravitation results in proximity co-occurrence of corpuscular bodies such as planets, which maximises the ability to dissipate energy and to maximise entropy.
If this is true, then perhaps there is no direct causation at the macroscopic level we see; the correlations we observe as causally linked are instead the consequence of causation by algorithms functioning in the quantum computational substrate of the quantum vacuum. In fact this would imply that the whole universe is some kind of computer, which uses a principle similar to the tendency to proximity co-occurrence of latent semantic analysis. The universe could then perhaps also be some kind of mind. I am aware that this is thin ice and pure speculation, full of the very logical fallacies I warned about, but I merely ask you to consider this possibility as an alternative explanation of causation.

The Latin word for reason is “ratio” which is probably not by coincidence linked to the English word ratio, which refers to quantitative relation between two amounts. The Greek word “logos”, from which the word logic is derived, also means “reckoning”. These etymological sources also point to a link between reasoning and (numerical or informational) reckoning.

Nevertheless, for the purpose of reasoning and dealing with the world around us, assuming the principle of causation at the macroscopic level is vital. We can only bring complexity about if our actions follow predictable patterns. So let’s keep the metaphysical notions about causation only in the back of our minds and, for practical reasons, put causal inference to our advantage.

The title of this series has been “Is intelligence an algorithm?” If reasoning is part of intelligence, it must also be part of this algorithm. But I have shown that reasoning in AI seems unsuitable for context-independent approaches. How can our natural intelligence then use reasoning as an algorithmic, context-independent process?

To understand this we must abstract the common features of the different modes of logic. What they have in common is that they all compare specific instances with generalised rules. So, upon encountering a specific instance of an item, the intelligence algorithm will seek in its database whether there is an ontological class it belongs to. It will compare the specific with the general using the rules of one of the modes of inference (deduction, induction etc.) and, if a fit is achieved, recognise the item. As the item resonates with the general ontological structure of links and categories of the class, an association will be formed, not only metaphorically in the mind, but also literally at the level of the neurons, which will wire together and hence fire together.

If the item is new it will look for similar items and build an ontology on the basis thereof and on the basis of the relations the new item has with existing ontologies: thus it will cognise.
This part of intelligence is essentially a comparison engine, which can discriminate between items, classify them and draw conclusions if certain conditions are fulfilled (reminiscent of the often used “if...then...else” statements in computer algorithms). I’m telling you nothing new; in ancient India these aspects were already described in great detail, for instance in the “Yoga Sutras” of Patanjali, which is an excellent treatise not only on the mental aspects of yoga but also on the workings of the mind. The intellect was called “Buddhi”, the ability to discriminate between items was called “Viveka” and inference was called “Anumana”.
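Such a comparison engine can be caricatured in a few lines: an item is recognised when its observed features match the stored requirements of a class. The ontology here is a hypothetical toy; a real system would of course be vastly richer:

```python
def cognise(item_features, ontology):
    """A toy 'comparison engine': match an item's observed features against
    stored classes and return the first class it fits, if any."""
    for cls, required in ontology.items():
        if required <= item_features:   # all required features are present
            return cls                   # recognise the item
    return None                          # new item: would trigger learning

ontology = {"bird": {"feathers", "beak"}, "fish": {"fins", "gills"}}
label = cognise({"feathers", "beak", "wings"}, ontology)
unknown = cognise({"scales"}, ontology)
```

The `None` branch is where the essay’s second option kicks in: a new item prompts the building of a new ontology from similar ones.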
The rules of logic when comparing specific with general are typically used to predict the outcome of future events and are thus mostly essential for planning, heuristics and problem solving. The specific strategies thereof will be the topic of a further essay in this series.

Reasoning is also the most important part of rhetoric, the art of discourse in which you try to convince, to persuade your audience of your point of view. In a rhetorical argument you can often start by giving specific examples of a general principle you wish to illustrate, or conversely you start by making a generalised assertion, which you then give a foundation by exemplifying it with specific instances. It is clear that here you are using logic, most often induction and deduction. You are grounding an observation, like Ben Goertzel tries to do with his OpenCog and Novamente projects for the development of AGI (artificial general intelligence).

But there is more to rhetoric than logic alone: rhetoric also appeals to psychological aspects; it appeals to your beliefs and morality or to your emotional propensity. Here we enter a more difficult area, which I will treat in more detail in a further essay in this series. This area is more difficult because it diverges from traditional intelligence, which is based on pattern recognition and logic. It touches upon intuition, which, as I will try to illustrate, is possibly a kind of hidden heuristic (a practical approach to problem-solving without a guarantee of success; e.g. an educated guess).

In rhetoric based on emotions, the persuading orator can make an appeal to fear, which may block your more objective, logical way of making sense.
The orator will try to seduce you into falling into the trap of logical fallacies, and will come with evidence and facts which may be true in a given situation, but not in all situations. On the basis of cherry-picked, statistically insufficient information, he will try to make you apply your logic. Taken off guard by an emotional distraction, you may not apply your usual standard. And you may do the same when you try to convince someone else of your point of view. Perhaps with this article I am doing this with you. But at least, by unveiling my mask, I now give you the opportunity to seek truth for yourself.

This brings us to the issue of “truth”. Truth as we experience it is a relative concept. For each event, different beholders have a different narrative, which is often blurred by interpretations and coloured by beliefs and emotions. The same event can be told from very different perspectives, which at first glance appear contradictory and even mutually exclusive, but which in the end, from a higher perspective, can be transcended as relating to different parts of the same entity or process.
This is perhaps best illustrated by the Indian parable of the elephant: several blind men touch an elephant to learn what it is. One touches the tail and concludes that it’s a broom, another touches a leg and concludes it’s a pillar, a third touches the tusk and concludes it’s a horn, etc. Whereas from their own perspective none of them is really wrong, from the higher, all-inclusive perspective they are all wrong to a certain extent and right to another extent. The dichotomies are resolved by a higher-dimensional entity and perspective, the elephant, which transcends but does not exclude the partial perspectives.

Hence the famous quote by Nagarjuna:
“Anything is either true,
Or not true,
Or both true and not true,
Or neither true nor not true;
This is the Buddha's teaching”.

As R.A. Wilson stated: “What the thinker thinks, the prover will prove”. In other words, you will always find proof and evidence to support your beliefs. If you really go to the bottom of this rabbit hole, this means you cannot believe anything, because nothing is really certain. Hence Terence McKenna’s famous quote: “Belief is a toxic and dangerous attitude toward reality. After all, if it's there it doesn't require your belief- and if it's not there why should you believe in it?”

In addition we must realise that what we believe to be “reality” is a virtual representation of reality cooked up by our brains. Since our brains filter out a massive amount of information, and since different people have different filtering capacities, how can we conclude that there is a common truth? There may be a kind of “consensus reality” which certain people agree upon, because their observations correspond. But the results of quantum mechanics have made clear that there is no “objective reality” out there. Your observation already changes the nature of existence, which is summarised in the famous adage: “When you change the way you look at things, the things you look at change.” So when it comes to truth, there are subjective truths and at best a consensus truth shared by a group of people. Add to this that we have all kinds of observational biases due to our personal and emotional background, that we have cultural and linguistic biases, and that certain words are ambiguous homonyms, and it is no wonder we often fall prey to misunderstanding each other.

We already saw that logic cannot give us a solid foundation for our beliefs, which does not mean we should discard reasoning: it is usually the only way we have to make sense of the world around us. But we must be vigilant not to jump too quickly to conclusions, or to cast away someone else’s perspective; we probably haven’t seen the whole picture. So we must adopt a cautious, pragmatic approach and replace our beliefs with probabilities and likelihoods. The more your intelligence increases, the less convinced you are of one specific point of view. Instead you will try to acquire the bird’s-eye view which puts different perspectives in context. You will try to find a meta-perspective.

So if someone is really certain about his or her case, beware! You may not be talking to a very intelligent person.

I hope you will also read my next essays on the topics of problem-solving, planning and creativity.

Monday, 14 November 2016

Is intelligence an algorithm? (part 2)

Dendritic structures on a Petri dish show an uncanny similarity to neuronal dendritic structures.

From observation to articulation.

 (This essay has for the first time been published on steemit: https://steemit.com/ai/@technovedanta/is-intelligence-an-algorithm-part-2)

Many people are interested in improving their intelligence, and many tools, schemes and tricks have been proposed for this purpose. Each of these tools (mnemotechniques, schemes to organise information, planning schemes, heuristics etc.) addresses only a small aspect of the complete process we call “intelligence”, because most scholars dealing with the topic of intelligence do not have a good overview of the total picture of intelligence: what it is and how it functions.

If we could analyse intelligence and arrive at understanding its mechanism we might be able to use it to our advantage. This is the purpose of this series of essays I am writing:
To collect a more complete understanding of what Intelligence is and how it functions and to provide tools for improving our human intelligence as well as artificial intelligence.

In a previous article on Steemit (https://steemit.com/ai/@technovedanta/is-intelligence-an-algorithm) I already indicated that intelligence is a kind of algorithm, which I will only very briefly summarise:
When a living system encounters a problem such as a lack of resources, it gets a stimulus to start probing for a variety of alternatives or other solutions.
From its observations and testing it abstracts patterns. From these, successful alternative strategies can be selected.
When encountering contending groups, so-called “intergroup tournaments” can lead to a mutual probing of the distinctions between the groups. This can result in niching, a symbiosis, or an exchange of those features which differ between the groups. Thus the system adapts itself to its environment.

In this and coming essays I will try to discuss different aspects of intelligence, such as cognition, (pattern) recognition, memory, abstraction, analysis, understanding, information retrieval, reasoning and problem-solving (including planning, heuristics and creativity), in more detail. I don’t claim to present you with novel knowledge on this topic, but it is useful to give an overview of the teachings of the various pundits in a simplified manner.

Ben Goertzel, the godfather of Artificial (General) Intelligence, defines intelligence as follows:
“The ability to achieve complex goals”.
This tells us what intelligence is about, its purpose, but it does not tell us how it functions. Since intelligence appears to function in a certain predictable and repeatable way, we could say it is a type of (natural) algorithm: a set of instructions defined in a very general way, aiming to arrive at a goal, in this case a complex goal.

A. N. Whitehead told us that “understanding is the apperception of pattern per se”. Pattern recognition and understanding are certainly part of the way intelligence functions, but this does not tell us how understanding comes about and what to do with it once it is attained.

Another clue about the mechanics of intelligence is provided by Buckminster Fuller, who described cognitive processes as comprising four parts:
  • observation,
  • consideration (analysis),
  • understanding and
  • articulation.
This is a good starting point for analysing at least the first stages of the intelligence algorithm, which will be the topic of this essay. In the next essays I shall address the topics of reasoning, problem-solving, planning and creativity.

When we observe an object or concept, we wish to know what it is, we wish to “cognize” it, so we analyse its form, its material, its constituent parts and the relations between those parts, its function, its purpose and how it relates to its environment.

This analysis is what B. Fuller calls “consideration”: we build a network of relations, which gives us a framework for understanding, a constellation of facts or, as B. Fuller calls it, a “consideration”. Stella and sidera are Latin words both meaning “star”. A constellation or a consideration is a configuration of facts, metaphorical stars, which together describe a total form if you connect the “star dots”.
From connecting the dots a framework arises or emerges that helps us to understand the object of analysis. The consideration framework metaphorically “stands under” the topic to be understood.
Geometrically speaking, such a framework must have at least three descriptive facts forming three relations, since with only two facts you have only one relation, which does not build a plane for understanding. You need a third piece of information to identify what something is. Imagine the only information you get describing an object is “black dots”. This tells us nothing yet. If we get the information that there are only two of them, at least we can start to speculate about the object’s nature: perhaps they are the plugholes of a socket, the nostrils of a pig, or the colon symbol “:”. Sometimes you need more than three descriptors, if more than one thing fits the information. An additional piece of information, such as a colour or a material substance, may suddenly tip the balance towards identification.
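The narrowing-down described above can be sketched in a few lines of code. This is a toy illustration only: the candidate objects and their feature sets are invented for the example, not part of any real knowledge base.

```python
# Toy candidate objects, each described by an invented set of features.
CANDIDATES = {
    "power socket": {"black dots", "two dots", "plastic"},
    "pig's nose":   {"black dots", "two dots", "skin"},
    "colon symbol": {"black dots", "two dots", "ink"},
    "die face":     {"black dots", "six dots", "plastic"},
}

def identify(observed_facts):
    """Return every candidate whose feature set contains all observed facts."""
    return [name for name, feats in CANDIDATES.items()
            if observed_facts <= feats]  # <= tests "is a subset of"

print(identify({"black dots"}))                     # all four: still ambiguous
print(identify({"black dots", "two dots", "ink"}))  # ['colon symbol']
```

Each additional observed fact shrinks the candidate list; identification is reached when only one candidate remains.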

Once we have identified our observation by having a sufficient framework for understanding, its “ontological configuration”, we can articulate what we think it is.
We have now completed a cycle of the process of cognition or re-cognition.

When we re-cognise something we label the specific thing as an object belonging to a certain general pattern, a class or category. So we classify the object.
Every category has a certain group of features, which are typical for that category, and the presence of which in an object or concept forms the requirements for belonging to that category.
To describe an object or concept as completely as possible with regard to its features in terms of material (substance), form, internal relations of its parts, external relations with the environment, function, purpose, restrictions, rules etc. is the topic of “ontology”, the study of being.
Such a list characterising an object, phenomenon or concept is also called “an ontology”.
Building a hierarchical classification of ontologies amounts to building a kind of taxonomy, a classification scheme.
If we acquire a clear taxonomy in our mind, our recognition of objects, phenomena and concepts will dramatically improve. It allows for rapid retrieval of information, namely of the type of pattern to which the object under consideration belongs: our pattern recognition skills will improve.
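A minimal sketch of such a taxonomy with feature-based classification might look as follows. The classes, parent links and features here are illustrative assumptions, not a standard vocabulary.

```python
# A tiny invented taxonomy: each class has a parent and its own typical features.
TAXONOMY = {
    "plant": {"parent": None,    "features": {"roots", "photosynthesis"}},
    "tree":  {"parent": "plant", "features": {"single stem", "branches", "leaves"}},
    "bush":  {"parent": "plant", "features": {"multiple stems", "branches", "leaves"}},
}

def all_features(cls):
    """Collect a class's features plus everything inherited from its ancestors."""
    feats = set()
    while cls is not None:
        feats |= TAXONOMY[cls]["features"]
        cls = TAXONOMY[cls]["parent"]
    return feats

def classify(observed):
    """Return the most specific class whose full feature set is present."""
    matches = [c for c in TAXONOMY if all_features(c) <= observed]
    return max(matches, key=lambda c: len(all_features(c)), default=None)

print(classify({"roots", "photosynthesis", "single stem", "branches", "leaves"}))
# most specific match: "tree"
```

The hierarchy does the work of rapid retrieval: an observation is matched against inherited feature sets, and the deepest (most idiosyncratic) class that fits wins.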

We will have a clearer overview of, and distinction between, which aspects are universal and extend across all classes, which are general and belong to multiple classes, and which are specific to a class and thereby characterise it as a so-called “idiosyncrasy”.

In fact our minds can form an ontology only if they have observed multiple instances of the same object. If you observe a new object for the first time, you can only try to make an approximation of what it is like. You can try to classify it in a higher-ranking, more general category if it fits in; if it doesn’t, try to see which type of object is most similar to it, i.e. which has a similar use and/or most features in common.

Ben Goertzel stated that “one is an instance, two a coincidence and three is a pattern”. If we have observed three instances of a new object, which share the same features, it’s worthwhile building an ontology for it.
When we build an ontology and recognise shared features in it, it means that from our memory we have been able to abstract aspects which the different instances of the phenomenon have in common.

Abstraction is a form of making a simplified, generalised representation of something. For instance, if we are able to abstract a tree to the structural features single stem, branches, roots and leaves, it means that from all the different types of stems we have been able to abstract the quality of “stemness”, from all the different types of leaves the quality of “leafness”, etc. This gives a simplified representation in words, which allows us to recognise new trees as trees and to distinguish them, for instance, from a bush.
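Goertzel’s “one is an instance, two a coincidence and three is a pattern” together with this kind of abstraction can be sketched as a simple set intersection over observed instances. The three feature sets below are invented examples of individual trees.

```python
# Three invented observations of individual trees, each as a feature set.
instances = [
    {"single stem", "branches", "roots", "needle leaves", "tall"},
    {"single stem", "branches", "roots", "broad leaves", "short"},
    {"single stem", "branches", "roots", "broad leaves", "tall"},
]

def abstract(observations, threshold=3):
    """Abstract the shared features, but only once enough instances are seen."""
    if len(observations) < threshold:
        return None  # one is an instance, two a coincidence
    return set.intersection(*observations)

print(sorted(abstract(instances)))  # the abstracted "tree" pattern
```

The idiosyncratic details (needle vs broad leaves, tall vs short) drop out, and only the generalised structural features survive as the ontology of “tree”.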

Artificial general intelligence is concerned with designing universal pattern recognition protocols, which inherently involve the process of abstraction. Abstraction always follows the pattern of going from multiple specific branches to a single generalised concept. In that way the process of abstraction could be represented by a tree-form. Interestingly enough our neuronal structures also follow that pattern.
The ontology must also be linked to other ontologies, representing other objects, phenomena or concepts with which the one under investigation has a relation. The internal relations between the parts of the ontology must also be described as part of the ontology building. 

You may have noticed that I make a distinction between objects, phenomena and concepts. Without wishing to define these fully at this moment, please note that with an object I mean a physical, tangible object, with a phenomenon I also wish to cover non-tangible physical manifestations (e.g. light, sound) and with a concept I wish to refer to mental representations or compounded ideas and schemes, which necessarily involve a degree of abstraction.

Thus far the first part of our intelligence protocol involves: observation; consideration, including pattern recognition and feature abstraction; relation (web) building, which gives a framework for understanding and thereby completes the building of an ontology; and finally articulation of a (mental) abstract representation of the observation, allowing for future recognition of further instances of the item of observation.

From the specific we have progressed to the general, which makes it easier to store the item in our memory for future information retrieval and re-cognition.
This process can be improved even further by abstracting the object to a simple “glyph”. Chinese characters were formed as such simple glyphs; the Egyptians had their hieroglyphs; alchemy used glyphs. Such simplified images representing the most essential features are easier to retain mentally than complex lists of features in words.

In computers, data are often compressed. When we have to learn long lists of information, we likewise have certain “mnemotechniques” at our disposal to compress this information.
For instance, we can form words or phrases built from the first letter of the most crucial term of each item in the list. These words we can then try to capture in an image, if possible, or, failing that, in an image of a similar-sounding word that does have a visual representation. Ideally you form a set of glyphs, which you can mentally visualise arranged in a street through which you walk: each house has a statue of the glyph in its garden, at the house number matching the item’s position in the list.
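The first-letter compression step is trivial to express in code. As an arbitrary example, the word list here is Fuller’s four cognitive stages from earlier in this essay.

```python
# Example list to compress: Fuller's four cognitive stages.
terms = ["observation", "consideration", "understanding", "articulation"]

def acronym(words):
    """Compress a list into a pseudo-word built from first letters."""
    return "".join(w[0].upper() for w in words)

print(acronym(terms))  # OCUA
```

The resulting pseudo-word (here “OCUA”) is the compressed handle; the visualisation tricks described above are then used to make that handle memorable.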

These are great ways of compressing information and making it easily retrievable. Information retrieval can be further improved by building webs of relations between the terms to be memorised, so that together they form a single whole, a single configuration that can be glyphed or assembled into a kind of fairy tale.
Other mnemotechniques include numerical coding using “Gematria”-type techniques, in which words are coded as numbers or, vice versa, numbers are coded as words, depending on which mnemotechnique suits you better. If you are a musician, it can help to translate the number sequence into a melody you can remember, as each digit from 1 to 10 can represent a note, e.g. from c to e one octave higher. If you are artistically gifted, you can try to put multiple visual representations into a single picture that makes sense as a whole.
With these coding techniques we can improve the use of our storage space, storage speed, information retrieval speed and recognition speed.
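The digit-to-melody coding mentioned above can be sketched as follows. The particular scale mapping (digits 1–9 and 0 onto the ten notes from c up to e an octave higher) is one arbitrary choice for illustration.

```python
# One possible mapping of the ten digits onto notes c .. e' (apostrophe = octave up).
NOTES = ["c", "d", "e", "f", "g", "a", "b", "c'", "d'", "e'"]

def to_melody(number):
    """Encode a number's digits as note names; 0 is treated as the tenth step."""
    return [NOTES[(int(d) - 1) % 10] for d in str(number)]

print(to_melody(31415))  # ['e', 'c', 'f', 'c', 'g']
```

A number sequence thus becomes a short tune, which for a musically inclined person is far easier to retain than the raw digits.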

These techniques can improve our learning abilities significantly. When it comes to memorising complex and huge quantities of information, it is worthwhile to distil the most characteristic word (or two) from each phrase and to reformulate the phrase in such a way that the characteristic word is the answer to a question. Alternatively, you make a list of the characteristic words and apply the first-letter word-building approach mentioned above.
It is very useful to identify, for each question you make, which type of question it is in terms of the so-called 6 W’s (Who, What, Why, Where, When and hoW). These questions are also typical for ontology building and can help you to remember an item more easily.

In this essay I have described the first stages of the intelligence algorithm concerning observation, consideration, understanding and articulation, in conjunction with mnemotechniques and ontology-building tools to improve our abilities at these stages. I hope you will also read my next essays on the topics of reasoning, problem-solving, planning and creativity. If you don’t want to miss them, you can follow me.