Symbolic-Connectionist Representational Model for Optimizing Decision Making Behavior in Intelligent Systems

ABSTRACT


INTRODUCTION
Optimizing agent-based decision-making systems is a central concern of intelligent systems research. Agent-based decision making is fundamentally different from human decision making: agents make decisions according to programmed procedures. In cognitive modelling, decision making is viewed as a high-level mental process that involves judging multiple options in order to choose one, so as to fulfil the objective of the decision-making agent [1], [2]. This paper describes the effort of moulding a connectionist model for optimization by incorporating the principles of the LIDA (Learning Intelligent Distribution Agent) cognitive architecture [3]. LIDA facilitates the cognitive process of decision making through its "action selection" module [4], [5].
The LIDA action selection module is adopted to evolve a connectionist neural architecture and enhance its computational ability. The resulting cognitively motivated connectionist model can be used for various optimization problems, with considerable impact on its decision-making agents. Even though cognitive modelling (symbolic) and neural modelling (connectionist) use two different approaches to building intelligent agents, there are recent developments in combining both approaches (symbolic-connectionist) to reinforce the field of cognitive agent building [6][7][8].
The following sections of this paper describe the concepts and principles behind the computational methods for bringing out an enhanced Connectionist Cognitive Network (CCN) model for optimizing real-time decision-making problems. This will motivate prospective researchers to approach this new dimension of cognitive model building.

ACTION SELECTION MODULE OF LIDA
The LIDA cognitive architecture extensively defines the higher-order cognitive process of decision making with its action selection module. The action selection phase is basically a learning phase in which several processes operate in parallel within the complete cognitive cycle of LIDA (Figure 1), producing the decision that leads to the next cycle. As the conscious broadcast reaches Perceptual Associative Memory, various associations form among the entities of the architecture by reinforcing old selections. New events from the conscious broadcast are encoded as knowledge patterns in Transient Episodic Memory. Potential action patterns, together with their contexts and expected results, are learned into Procedural Memory from the conscious broadcast. This closely resembles the training pattern of feed-forward learning networks. Alongside this learning, possible schemas for the action behavior are evolved from Procedural Memory using the conscious contents. Each such action pattern is sent to Action Selection, where it competes to be the behavior selected for this cognitive cycle. The selected behavior triggers Sensory-Motor Memory to produce a suitable motor plan for the behavior pattern to be carried out [9]. This part of the cognitive cycle is the motivation for the proposed connectionist approach to decision making.

SYMBOLIC REPRESENTATIONAL MODEL OF DECISION MAKING
Modeling human decision making is quite challenging, especially in a complex and uncertain environment. Ethical decisions are among the more complex decisions that agents face. Ethical decision making can be understood as action selection under conditions where constraints, principles and values play a central role in determining which behavioral attitudes and responses are acceptable [10].
Many decisions require selecting an action when information is unclear, incomplete, confusing, or even false, when the possible results of an action cannot be predicted with any significant degree of certainty, and when conflicting values inform the decision-making process. To make the process of action selection straightforward for an agent, a bottom-up approach has been derived from the action selection phase of the LIDA cognitive architecture [11]. Figure 2 is an expansion of the action selection module described in the previous section, viewed as a symbolic representational model. This model has three layers for optimizing the decision, which is derived as a new schema. The first layer is the schemanet, which receives input from Procedural Memory and is very similar to the input layer of a feed-forward connectionist network. The hidden layer of the feed-forward net is associated with the slipnet of the cognitive model, which is used to evaluate the adequacy of alternatives for the action selection process. The output layer of the neural net is moulded with the third part of the cognitive model, i.e., the behaviornet. The new selection (new schema) is fed back to Sensory-Motor Memory for learning. This conceptual framework is the motivating factor for the emergence of the enhanced symbolic-connectionist network model for optimizing decisions in intelligent systems.
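The layer correspondence described above can be summarized in a small sketch. The names schemanet, slipnet and behaviornet come from the model; the dictionary structure itself is purely illustrative and not part of the architecture:

```python
# Hypothetical summary of how the symbolic layers map onto
# feed-forward network roles (illustrative only).
CCN_LAYER_MAP = {
    "schemanet":   "input layer: schemas received from Procedural Memory",
    "slipnet":     "hidden layer: evaluates adequacy of alternatives",
    "behaviornet": "output layer: selected behavior, i.e. the new schema",
}

for symbolic, role in CCN_LAYER_MAP.items():
    print(f"{symbolic} -> {role}")
```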

BACK PROPAGATION NETWORK AS A CONNECTIONIST MODEL
In connectionist neural networks, the most popular learning method is back propagation (BPN) [12]. Learning in feed-forward networks is usually supervised, meaning that the network is trained by providing it with input patterns and matching output patterns. The computational ability of BPN in optimizing diversified decision-making scenarios has also paved the way for implementing cognitive models of decision making [13][14][15].

Figure 2. Symbolic representational model of action-selection
The standard form of the back propagation algorithm is illustrated in Figure 3; its computational phases are presented here for clarity. The topology consists of three layers of units, namely the input, hidden and output layers. Each layer feeds its outputs to the next layer through weighted connections. All the weights are adjustable, so that the network can be trained to reach the desired goal. The network learns by adjusting its weights until it finds a set of weights that produces the correct output for every training input.
Consider a feed-forward network with n input units (i), p hidden units (j) and m output units (k). Back propagation training is performed in two phases, namely a forward pass and a backward pass.

Forward Pass
One of the n training input patterns is applied to the input layer. Each hidden unit j computes the weighted sum of its inputs and passes it through an activation function (typically the sigmoid) to produce its activation o_j; each output unit k does the same with the hidden activations to produce the actual output o_k of the network.
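A minimal sketch of this forward pass, assuming sigmoid activations and NumPy arrays (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def sigmoid(z):
    """Standard logistic activation function."""
    return 1.0 / (1.0 + np.exp(-z))

def forward_pass(x, w_ih, w_ho):
    """Propagate one input pattern through hidden and output layers.

    x:    input pattern, shape (n,)
    w_ih: input-to-hidden weights, shape (n, p)
    w_ho: hidden-to-output weights, shape (p, m)
    Returns the hidden activations o_j and the output activations o_k.
    """
    o_j = sigmoid(x @ w_ih)    # activations of the p hidden units
    o_k = sigmoid(o_j @ w_ho)  # activations of the m output units
    return o_j, o_k
```

Because the sigmoid squashes its argument, every activation lies strictly between 0 and 1, matching the target encoding assumed later in the backward pass.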

Backward Pass
The error signal for each output unit is calculated using the difference between the actual activation of that output unit (o_k) and its desired target activation (d_k); with a sigmoid activation, the delta of each output unit is δ_k = (d_k − o_k) o_k (1 − o_k). The delta of each hidden unit j is obtained by propagating these error signals backwards: δ_j = o_j (1 − o_j) Σ_k δ_k w_jk. The weight error derivatives for each weight between input unit i and hidden unit j are calculated by taking the delta of each hidden unit and multiplying it by the activation of the input unit it connects to (i.e., the input pattern value x_i). These weight error derivatives are then used to change the weights between the input and hidden layers.
To change the actual weights themselves, a learning rate coefficient eta (η) is used, which controls the amount by which the weights are updated during each back propagation cycle. The weights at time (t + 1) between the hidden and output layers are set using the weights at time t and the weight error derivatives between the hidden and output layers, using the following equation:

w_jk(t + 1) = w_jk(t) + η δ_k o_j

In BPN, each unit in the network receives an error signal that describes its relative contribution to the total error between the actual output and the target output. Based on the error signal received, the weights connecting the units in the different layers are updated. The two passes are repeated several times for the distinct input patterns and their targets, until the error between the actual output of the network and its target output is convincingly small for all units over the set of training inputs. Feed-forward connectionist networks tend to develop internal relationships between units so as to organize the training data; as a result, they develop an internal representation that enables them to generate the desired outputs when given the training inputs. The same internal representation can be applied to inputs that were not used during training; the new inputs are classified by the network according to the features they share with the training inputs.

CONNECTIONIST COGNITIVE NETWORK (CCN) MODEL
The Connectionist Cognitive Network (CCN) is a conceptual framework that combines the functionalities of the LIDA-based action selection model and the feed-forward neural network model. The architectural similarities of both models are illustrated in the CCN architecture (Figure 4). Similar frameworks have been built around the cognitive model of action selection proposed by LIDA, and there are research outcomes that show the importance of bringing cognitive and connectionist approaches together to produce better-performing intelligent agents [16].
The authors hope that this framework will eliminate the limitations of a BPN through the adoption of the cognitive model by reducing the number of learning steps, so that the computing time will also be less than that of a pure BPN. The major improvement is achieved by enhancing the momentum factor used in updating the weights.
During the action selection phase of the LIDA cognitive cycle, each cycle produces a new schema based on the schemas generated in previous cycles. The previous schemas are referred to from Perceptual Associative Memory during the current cycle, as illustrated in Figure 1 and subsequently in Figure 2.
The weights from one or more previous training steps must be stored in order to use momentum. Here, the new weights for training step i + 2 are based on steps i and i + 1, so the current weight adjustment carries a fraction of the most recent weight adjustment. The weight-updating formula that distinguishes the model from a normal feed-forward network is

w_jk(i + 2) = w_jk(i + 1) + η δ_k o_j + μ [w_jk(i + 1) − w_jk(i)]

where μ is the momentum factor. A slight enhancement of this algorithm, which guarantees convergence, is to limit the momentum factor to the range 0 < μ < 0.5 instead of 0 < μ < 1. This is very similar to fine-tuning the convergence with an improved momentum factor.

Figure 4. Connectionist cognitive network model
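The momentum-limited update can be sketched as below. This is an illustrative implementation under the stated assumptions: the previous weight change is cached between steps, and the momentum factor mu is restricted to the (0, 0.5) range proposed above (the function name and signature are hypothetical):

```python
import numpy as np

def momentum_update(w, grad, prev_dw, eta=0.01, mu=0.4):
    """Weight update with a range-limited momentum term.

    w:       weights at step i + 1
    grad:    weight error derivatives (delta times source activation)
    prev_dw: weight change applied at the previous step, w(i+1) - w(i)
    mu:      momentum factor, restricted to 0 < mu < 0.5 for convergence
    Returns the weights at step i + 2 and the change just applied.
    """
    if not (0.0 < mu < 0.5):
        raise ValueError("momentum factor must satisfy 0 < mu < 0.5")
    # Current adjustment = learning-rate term + fraction of recent adjustment.
    dw = eta * grad + mu * prev_dw
    return w + dw, dw
```

Caching `dw` and feeding it back as `prev_dw` on the next call reproduces the i, i + 1, i + 2 dependence described above.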

RESULTS AND DISCUSSIONS
The performance of the proposed algorithm is verified by simulating an agent for fingerprint identification as a benchmark problem. The convergence behavior of the proposed algorithm (CCN) is compared with well-known BPN training algorithms, namely gradient descent (BPN-GD) and gradient descent with an adaptive learning rate (BPN-GDA) [17]. Epoch counts, mean squared errors (MSE) and execution times are considered in evaluating the convergence performance of the algorithms. For an effective comparison, the parameter values and the termination condition are held constant across the algorithms. To get the best results, the initial weights are generated randomly and uniformly in [-4, +4]. The learning rate coefficient is assigned the value 0.01.
Fingerprint identification is a well-known problem in pattern recognition. Three classes of data, each with 30 instances, giving 90 patterns in total, are used. Of the 90 patterns, 70 are used for training and the remaining 20 for testing. The input values are normalized before being applied to the network. Table 1 shows the convergence results for the respective algorithms. The proposed CCN algorithm converges at the 139th iteration, while the two BPN algorithms converge at 395 and 358 epochs respectively. It also takes the minimum time of 67 msecs to converge, whereas BPN-GD and BPN-GDA take 143 msecs and 129 msecs respectively to reach the termination condition MSE = 0.0003. The most notable feature of the proposed CCN is its testing MSE of 0.692, which is the minimum among all the algorithms. The learning curve of epochs against MSE for the proposed algorithm is shown in Figure 5.

CONCLUSION
The strategy of building a unique Connectionist Cognitive Network (CCN) model by associating the BPN and the LIDA cognitive model of action selection is computationally validated by its enhanced network performance compared with two other variations of BPN. The convergence speed of the proposed algorithm, in terms of time and epochs, is shown by simulating the benchmark problem of fingerprint verification. The highlight of the proposed algorithm is the minimum MSE observed during testing, which can be improved further by training with more input patterns. The proposed CCN model should also be compared with other leading architectures, such as SVMs, to identify its limitations.
It is evident that this initiative will eliminate the limitations of both traditional symbolic and connectionist approaches and will lead to a new dimension of higher-order cognitive agent building.