
Do I Know Ω? An Axiomatic Model of Awareness and Knowledge

  • Cesaltina Pacheco Pires
Published/Copyright: May 26, 2020

Abstract

In modeling game and decision theory situations, it has been usual to start by considering Ω, the set of conceivable states of the world. I wish to propose a more fundamental view. I do not assume that the agent knows Ω. Instead the agent is assumed to derive for herself a representation of the universe. Given her knowledge and her ability to reason about it the agent deduces a set of conceivable states of the world and a set of possible states of the world. The epistemic model considered in this paper uses a propositional framework. The model distinguishes between the knowledge of the existence of a proposition, which I call awareness, and the knowledge of the truth or the falsity of the proposition. Depending upon whether one assumes that the agent is aware or not of all the propositions, she will or will not have a “complete model” of the world. When the agent is not aware of all the propositions, the states of the world and the possibility correspondence imaginable by her are coarser than the modeler's. The agent has an incomplete knowledge of both the states of the world and the information structure. In addition, I extend the model with “incompleteness” to a dynamic setting. Under the assumption that the agent's knowledge is non-decreasing over time, I show that the set of states of the world conceivable by the agent and her possibility correspondence get finer over time.

1 Introduction

In game and decision theory the set of conceivable states of the world is always assumed to be known. Given the information an agent has, there exists a subset of states that the agent considers possible. Learning involves being able to reduce the set of possible states. Nothing more is learned, however, about how the world is, i.e. about the set of conceivable states.

However there are many circumstances where the assumption that the set of states of the world is known to the agent is inadequate. In fact, the role of unforeseen contingencies has been recognized in many areas of the economic literature (e. g. decision theory, contracting). Under these circumstances the agent has “an incomplete picture of the world”, which may get more complete over time.

In this paper I develop a model where the agent's representation of the world may be incomplete. I do not assume that the agent knows Ω. Instead the agent is assumed to derive for herself a representation of the universe. Given her knowledge and her ability to reason about it, the agent constructs her own view of the world.

I want to look at the world through the eyes of the economic agent. What does she know about the world? Which states of the world are imaginable by her? Which states of the world does she consider possible?

I analyze these questions using an axiomatic model of knowledge. There is an important distinction between the axiomatic model of knowledge in this paper and other models which have been used in economic theory to analyze issues like common knowledge.[1] Frequently a probability space (Ω, Σ, μ) is considered, where Ω is the space of states, Σ is a σ-field of events and μ is a probability measure on Σ. For each individual, there is a given possibility correspondence P which describes the information available to the individual in each state of the world. Then it can be said that the individual knows event E at ω if P(ω) ⊆ E. Hence the knowledge of events is a derived notion in these models. The properties of the knowledge operator can be derived from the characteristics of the possibility correspondence.[2]

The idea that I present is, in a certain sense, the opposite of the above. The primitives of the model are the properties of knowledge and the knowledge the agent has. The corresponding states of the world and properties of the possibility correspondence depend upon the assumptions about knowledge. The states of the world which are defined here are “epistemic states of the world”, they incorporate the knowledge the agent has. Two states of the world may differ only because the agent has different knowledge in each of them. One advantage of the explicit introduction of knowledge is that it enables us to reveal the underlying assumptions about knowledge which give rise to specific properties of the possibility correspondence. In this respect this work is closer to Bacharach (1985) and Samet (1990).

Like Samet (1990), I assume a propositional world. The environment of interest is described by propositions, including propositions about the agent's knowledge. The states of the world are assignments to the propositions of true and false values, which are consistent with the axioms of the model.

The set of epistemic axioms which has been most frequently used includes three axioms. The first axiom which defines knowledge, says that if the agent knows a proposition, then that proposition is true. The second axiom (positive introspection) says that when the agent knows something, she knows that she knows it. The third axiom (negative introspection) states that when the agent doesn't know something, she knows that she doesn't. However a closer examination of the third axiom reveals that it entails two implicit assumptions: an assumption about the agent's introspection capabilities and an assumption about the agent's awareness of all the existing propositions.

Traditional epistemic models have been unable to separate the two assumptions because there is no reference to the agent's knowledge of the existence of a proposition. In this paper I distinguish between knowledge of the existence of a proposition (which I call awareness) and knowledge of the logical value of a proposition (which I call knowledge). Consequently it becomes easy to separate the assumption of complete awareness from the assumption of introspection.

In my work the rationality assumption which is implicit in the axiom is maintained while the complete awareness assumption is given up. I assume that if the agent is aware of a proposition and she doesn't know the proposition, then she knows that she doesn't. In other words, the agent is capable of negative introspection, provided she is aware of the proposition.

Samet (1990) proves that under the three epistemic axioms mentioned above the possibility correspondence is a partition and that this is not the case when only the first two axioms hold.[3] I show that the partition result holds under the weaker assumption that negative introspection holds if and only if the agent is aware of the proposition.

Another important feature of my model is that the agent can reason.[4] The agent is able to reason about her knowledge and the properties of her knowledge.

The analysis can proceed at two levels. One is at the modeler's level, which has a complete description of the world.[5] The other is at the agent's level. Her description of the world depends on the set of propositions she is aware of. Depending upon whether one assumes that the agent is aware or not of all the propositions, she will or will not have a “complete model” of the world.

In a world where the agent is not necessarily aware of all propositions, the states of the world imaginable by the agent are not always a complete description of the world. The notion of a state of the world à la Savage, as resolving all uncertainty, does not fit in this setup. In this case, the agent may be able to learn more about the set of conceivable states of the world, about how the world can be. I present a dynamic model of knowledge, where it is clear that the agent improves her description of the world over time.

The paper is organized as follows: the notation as well as some definitions are presented in Section 2. This section reviews the axiomatic model of knowledge of Samet (1990). Section 3 describes the axiomatic model of awareness and knowledge which I developed. The following section explores some consequences of the axioms; in particular I show that under the axioms described in Section 3, the possibility correspondence is a partition. In Section 5, I consider a particular state of the world and ask what the agent's view of the world is in that state. The answer is that the agent's view of the world depends upon the set of propositions she is aware of in that state of the world. In the following section I study the relationship between the agent's view of the world and the modeler's universe. I show that when the agent is aware of all the propositions she has a "complete" model of the world. However, if she is not aware of all propositions her view of the world is coarser than the modeler's. Section 7 extends the axiomatic model with incomplete awareness to a dynamic framework. Assuming that the agent doesn't forget (if she knows something at time t she also knows it at time t + 1), I show that the agent's view of the world may get finer and finer as she becomes aware of more propositions over time.

2 A Simple Epistemic Model

In this section I present a simple axiomatic model of knowledge. The objective is to introduce the notation and concepts used in these types of models and to provide a bridge to my model. I illustrate the concepts with examples which show the distinctive characteristics of epistemic models.

The concepts presented in this section are from Samet (1990). The notation I use is slightly different from the one used by Samet. The examples and interpretation of his model are my own.

2.1 Concepts, Examples and a Result

Following Samet (1990), I consider a propositional world. Let Φ be a countable set of propositions describing a certain environment of interest. There exists a mapping ¬ : Φ → Φ, where ¬ϕ is interpreted as "not ϕ". In addition, there is a mapping K : Φ → Φ, where the meaning of Kϕ is "the agent knows ϕ".

Φ includes propositions about the environment, as well as propositions about the agent's knowledge about the environment, about the agent's knowledge about her knowledge, and so on. Let us now consider an example which illustrates the propositions included in Φ. Suppose that the universe of interest is only whether it is sunny or not. The propositions in Φ are then: “it is sunny”, its negation “it is not sunny”, the propositions about the agent's knowledge of these propositions, such as “the agent knows it is sunny”, the negation of this “the agent does not know it is sunny”, the propositions about the agent's knowledge about her knowledge: “the agent knows that she knows it is sunny”, and so on. Even with a simple environment like this, the dimension of Φ is infinite because of the infinite hierarchy of knowledge.

We notice that propositions about knowledge have the same status as any other proposition. In Gilboa's (1986) words, there is no distinction between information and meta-information.

Each proposition can be true or false (1 or 0). Hence the set Σ = {0, 1}^Φ contains all the possible assignments of true and false values to the elements of Φ. An element ω of Σ is called a state of the world if for each ϕ ∈ Φ, ω(ϕ) + ω(¬ϕ) = 1, meaning that in a state of the world, if a proposition is true its negation has to be false. Each state of the world can be described (and identified) by the set of propositions which are true in that state. It is equivalent to say "ϕ is true in ω", "ω(ϕ) = 1", or "ϕ ∈ ω". Let us denote by Ω0 the set of all states of the world.
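The consistency condition ω(ϕ) + ω(¬ϕ) = 1 can be sketched as a small executable check over a finite fragment of Φ. The encoding ("~p" for the negation of p) and the function name are mine, for illustration only:

```python
# A minimal sketch (encoding mine): an assignment over a finite set of
# propositions is a state of the world only if each proposition and its
# negation receive opposite truth values, i.e. w(phi) + w(not phi) = 1.

def is_state(assignment):
    """assignment maps proposition names to 0/1; '~p' encodes 'not p'."""
    return all(assignment[p] + assignment["~" + p] == 1
               for p in assignment if not p.startswith("~"))

print(is_state({"p": 1, "~p": 0}))   # → True
print(is_state({"p": 1, "~p": 1}))   # → False
```

Only elements of Σ passing this check qualify as elements of Ω0.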

In the previous example, possible assignments of true and false values for some of the propositions in Φ, where ϕ0 is the proposition "it is sunny" and each row describes a state of the world, are described in Table 1:

Table 1:

Possible assignments of true and false values, where ϕ0 is "it is sunny."

ϕ0   ¬ϕ0   Kϕ0   ¬Kϕ0   KKϕ0   ¬KKϕ0   K¬Kϕ0
1    0     1     0      1      0       0
0    1     1     0      1      0       0
1    0     1     0      0      1       0
1    0     0     1      0      1       0

As it is defined here, each state of the world includes propositions about the agent's knowledge as well as propositions about other (non-informational) aspects of the environment. Without additional restrictions the number of possible states of the world is infinite.

A remark about the proposition ¬Kϕ0: this proposition is simply the logical negation of Kϕ0; it does not necessarily imply ignorance in the common sense of the word. It says "the agent does not know that it is sunny". When this proposition holds, either "the agent knows that it is not sunny" or "the agent does not know that it is not sunny" may be true. Only when the latter combination happens can we say that the agent is ignorant of whether it is sunny or not.

At each state ω, one can define the set of propositions known by the agent in that state.

Definition 1

The epistemic content of the state ω, K̄(ω), is:

K̄(ω) = {ϕ : Kϕ ∈ ω} = {ϕ : ω(Kϕ) = 1}.

We say that the state of the world ω′ is possible for the agent at ω if and only if all that the agent knows at ω is true in ω′ or, in other words, ω′ is compatible with the knowledge the agent has at ω. Let us define formally the possibility relation and the possibility correspondence:

Definition 2

The possibility relation p is a binary relation on Ω such that:

ω p ω′   iff   K̄(ω) ⊆ ω′   or, equivalently,   ω(Kϕ) = 1 ⇒ ω′(ϕ) = 1 for all ϕ ∈ Φ.

The set of states which are possible at ω is:

P(ω) = {ω′ : ω p ω′}.

The possibility correspondence P : Ω → 2^Ω specifies the possibility sets of the agent for each state of the world.
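Definitions 1 and 2 can be sketched in a few lines of code. A state is represented as the set of propositions true in it, with the prefix "K:" encoding the knowledge operator; the representation is mine, for illustration only:

```python
# Illustrative sketch of Definitions 1-2. A state is the set of
# propositions true in it; "K:p" encodes "the agent knows p".

def epistemic_content(state):
    """K-bar(w): the propositions phi such that K phi is true at w."""
    return {p[2:] for p in state if p.startswith("K:")}

def possible(w, w_prime):
    """w p w': everything the agent knows at w is true at w'."""
    return epistemic_content(w).issubset(w_prime)

def possibility_set(w, states):
    """P(w) = the states w' such that w p w'."""
    return [s for s in states if possible(w, s)]

# With introspective knowledge included (as A2 requires), a state where
# the agent knows "sunny" excludes states where she does not know it.
w1 = {"sunny", "K:sunny", "K:K:sunny"}   # sunny, and the agent knows it
w2 = {"sunny"}                           # sunny, agent knows nothing
print([possible(w1, s) for s in (w1, w2)])   # → [True, False]
print([possible(w2, s) for s in (w1, w2)])   # → [True, True]
```

Note that w2, where nothing is known, considers both states possible: the empty epistemic content is compatible with everything.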

The properties of the binary relation p are associated with the properties of the knowledge of the agent. Since I am interested in a model based on the knowledge the agent has, I shall consider the properties of knowledge as the basis.

There are three properties which have been frequently used in axiomatic models of knowledge:

A1: If ω(Kϕ) = 1 then ω(ϕ) = 1, ∀ϕ ∈ Φ;
A2: If ω(Kϕ) = 1 then ω(KKϕ) = 1, ∀ϕ ∈ Φ;
A3: If ω(¬Kϕ) = 1 then ω(K¬Kϕ) = 1, ∀ϕ ∈ Φ.

The first property states that if the agent knows a proposition, that proposition is true. This is the property that defines knowledge; if it is not satisfied, then we would be speaking of beliefs, not knowledge.

The second and third properties are introspective. The second axiom refers to positive introspection. Whenever the agent knows something, she also knows that she knows it. If it holds, the agent is able to tell what she knows.

The third property, often called the negative introspection axiom, is particularly objectionable. In my opinion, this property makes sense in cases where the agent is aware of the existence of the proposition. In such a case, by introspection the agent should be able to recognize her ignorance. But I can imagine situations where the agent does not even know of the existence of the proposition. In such cases, introspection would not be enough for the agent to recognize her ignorance.[6] In summary, I believe that this axiom implicitly assumes more than just introspection: it assumes that the agent is aware of all propositions.

Let us denote the set of states of the world which satisfy A1 by Ω1, the set of states which satisfy A1 and A2 by Ω2, and the set of states which satisfy the three properties by Ω3.

The number of states of the world when the three axioms are satisfied is much smaller than in Ω0. In the previous example, the second, third, and fourth assignments in Table 1 are not permitted, because they do not obey axioms A1, A2 and A3, respectively. Under these axioms, the states of the world in the example would be reduced to the four states shown in Table 2.
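The row-by-row elimination from Table 1 can be checked mechanically. A sketch, with column names of my own, testing each displayed row against the instances of A1, A2 and A3 that the table shows:

```python
# Hedged sketch: checking the four rows of Table 1 against A1-A3 for the
# finitely many instances the table displays. Column names are mine.
rows = [
    # (p, not_p, Kp, not_Kp, KKp, not_KKp, K_not_Kp)
    (1, 0, 1, 0, 1, 0, 0),
    (0, 1, 1, 0, 1, 0, 0),
    (1, 0, 1, 0, 0, 1, 0),
    (1, 0, 0, 1, 0, 1, 0),
]

def violations(p, not_p, Kp, not_Kp, KKp, not_KKp, K_not_Kp):
    bad = []
    if Kp == 1 and p == 0:
        bad.append("A1")            # known propositions must be true
    if Kp == 1 and KKp == 0:
        bad.append("A2")            # positive introspection fails
    if not_Kp == 1 and K_not_Kp == 0:
        bad.append("A3")            # negative introspection fails
    return bad

for i, r in enumerate(rows, 1):
    print(i, violations(*r))   # rows 2, 3, 4 violate A1, A2, A3 respectively
```

Only the first row survives all three axioms, matching the eliminations described above.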

Table 2:

Set Ω3 in the example where ϕ0 is "it is sunny."

state   ϕ0   ¬ϕ0   Kϕ0   K¬ϕ0   ¬Kϕ0   ¬K¬ϕ0
1       1    0     1     0      0      1
2       1    0     0     0      1      1
3       0    1     0     1      1      0
4       0    1     0     0      1      1

The assignments of true and false values to all the other propositions in Φ are determined by using axioms A2 and A3. Given the assignment rules for the negation and for the epistemic propositions, it is enough to assign values to the propositions ϕ0, Kϕ0 and K¬ϕ0 to generate the logical value of all the other propositions. In other words, there are two elements of uncertainty: is the proposition ϕ0 true or false? And does the agent know the proposition's logical value or not?

It is easy to see that the possibility correspondence partitions the set of states of the world. In fact P(1) = {1}, P(2) = {2, 4}, P(3) = {3} and P(4) = {2, 4}. We notice again that the knowledge the agent has is part of the definition of a state of the world. This characteristic distinguishes this framework from the one normally used, where the information structure is not an object of uncertainty, e.g. in Geanakoplos (1989). In the example above one could say that there are two possible information structures in the traditional sense, i.e. when the information structure is not subject to uncertainty. Under one information structure the agent knows whether it is sunny or not (first and third rows), under the other the agent does not know whether it is sunny or not (second and fourth rows).
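The possibility sets P(1) through P(4) can be reproduced computationally from Definition 2. In this sketch a state is a pair (facts, known); the encoding ("K:" and "~" prefixes) is mine, and the known sets are written out already closed under introspection to the depth needed here:

```python
# Sketch reproducing the possibility sets for the four states of Table 2.
# A state is (facts, known): the true atomic facts and the propositions
# the agent knows. "~x" encodes negation, "K:x" encodes knowledge.

def holds(prop, state):
    facts, known = state
    if prop.startswith("~"):
        return not holds(prop[1:], state)
    if prop.startswith("K:"):
        return prop[2:] in known      # K phi is true iff phi is known
    return prop in facts

def P(w, states):
    """P(w): the states where everything known at w holds."""
    return [i for i, s in enumerate(states, 1)
            if all(holds(p, s) for p in w[1])]

states = [
    ({"p"}, {"p", "K:p"}),        # 1: sunny, agent knows it
    ({"p"}, {"~K:p", "~K:~p"}),   # 2: sunny, agent knows neither
    (set(), {"~p", "K:~p"}),      # 3: not sunny, agent knows it
    (set(), {"~K:p", "~K:~p"}),   # 4: not sunny, agent knows neither
]
print([P(w, states) for w in states])   # → [[1], [2, 4], [3], [2, 4]]
```

States 2 and 4 share the same possibility set, exactly as in the text, and the four sets form a partition.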

Samet (1990) proves several results about the relationship between the mentioned properties and the possibility relation and possibility correspondences. I summarize the results in the following proposition:

Proposition 1

Samet (1990)

  1. What p implies in terms of the epistemic content:

    1. For ω, ω′ ∈ Ω and Ω ⊆ Ω2, ω p ω′ iff K̄(ω) ⊆ K̄(ω′);

    2. For ω, ω′ ∈ Ω and Ω ⊆ Ω3, ω p ω′ iff K̄(ω) = K̄(ω′).

  2. Properties of the binary relation p:

    1. If Ω ⊆ Ω1, p is reflexive;

    2. If Ω ⊆ Ω2, p is reflexive and transitive;

    3. If Ω ⊆ Ω3, p is reflexive, transitive and symmetric.

  3. Properties of the possibility correspondence.

    1. If ω ∈ Ω and Ω ⊆ Ω1, then ω ∈ P(ω);

    2. If ω, ω′ ∈ Ω and Ω ⊆ Ω2, then ω ∈ P(ω) and if ω′ ∈ P(ω) then P(ω′) ⊆ P(ω);

    3. If ω, ω′ ∈ Ω and Ω ⊆ Ω3, then ω ∈ P(ω) and if ω′ ∈ P(ω) then P(ω′) = P(ω).

Proof. The complete proof can be found in Samet (1990). Here I prove the first part of the proposition. Consider the case where ω ∈ Ω2. Suppose that ϕ ∈ K̄(ω). That is equivalent to ω(Kϕ) = 1, which implies ω(KKϕ) = 1. Hence Kϕ ∈ K̄(ω). Since ω′ is possible at ω iff K̄(ω) ⊆ ω′, the last result implies ω′(Kϕ) = 1 or, in other words, ϕ ∈ K̄(ω′). The implication in the other direction is a consequence of A1. In fact, if ω′(Kϕ) = 1 it is also true that ω′(ϕ) = 1, hence K̄(ω) ⊆ ω′.

When A3 also holds, one can prove that the inclusion cannot be strict. The proof is by contradiction. Suppose there is some proposition ϕ such that ω′(Kϕ) = 1 but ω(Kϕ) = 0. By A3, ω(¬Kϕ) = 1 implies ω(K¬Kϕ) = 1, which implies, by the definition of p, that ω′(¬Kϕ) = 1, a contradiction.

Hence, under the axioms A1–A3, the agent only considers possible states which are epistemologically equivalent and the possibility correspondence partitions the set of states of the world.

However, when only A1–A2 hold, the agent may consider possible states of the world such that, if she were in those states, she would know more propositions than the ones she actually knows. In terms of the possibility correspondence, this result says that the agent may consider possible states such that, if she were in those states, she would consider as possible only a subset of the states that she actually considers possible.

2.2 Some Comments

Some observations on Samet's model are appropriate here. The model is written from the viewpoint of the modeler, who is describing a certain physical environment as well as the agent's knowledge about it. The model does not say whether the agent knows or doesn't know the information structure and the set of states of the world.

The concepts introduced are defined by the modeler. They would have a different meaning, or no meaning at all, from the agent's perspective. For example, what would the agent answer if asked, at the state of the world ω, which propositions she knows? The answer coincides with K̄(ω) if ω ∈ Ω2; in this case the agent knows exactly the propositions that she knows, she knows K̄(ω). Otherwise it is only the modeler who knows that the agent knows the propositions in K̄(ω).

More important, would it make sense (within the context of the model) to ask the agent, in the state of the world ω, which states of the world does she consider possible? Generally it would not. The question may not be comprehensible to the agent (she may not recognize the expression “states of the world”). In order for the agent to be able to answer the question one has to assume that the set of propositions in Φ includes propositions referring to the possibility sets and that the agent knows these propositions.

The model I present in the next section can be analyzed from the agent's perspective. The issues of whether the agent knows or doesn't know the universe will be explicitly addressed within the model.

Let us assume for the moment that in the previous model the agent knows, in each state of the world, the set of conceivable states of the world (consider this an informal assumption of the model). Then the model without assumption A3 implies that the agent is partially irrational. Although the agent knows the complete description of the states of the world, and as such should be aware of all the propositions which describe them, she does not always recognize her ignorance, which may lead her to consider possible states where she knows more propositions than the ones she actually knows!

In a different setup, which considers physical states of the world, Geanakoplos (1989) gives another example of how assuming that the agent knows the universe while knowledge doesn't obey property A3 implies partial irrationality. Suppose there are only two states of the world, Ω = {a, b}, where in a the ozone layer is disintegrating and in b it is not. Suppose P(a) = {a} and P(b) = {a, b}: the disintegration of the ozone layer can be identified because, if it happens, there is an emission of gamma rays, while otherwise nothing happens. Assume also that the agent knows the information structure (otherwise the states of the world could not be defined as they are). If b occurs the agent doesn't know whether state a or b occurred. But if the agent knows the information structure she should reason: "I cannot possibly be in state a, because if I were in state a I would detect gamma rays; since I do not detect them, I have to be in state b". Hence if the agent uses all the information she has, including her knowledge of the possibility correspondence, the final information structure is a partition.
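The gamma-ray reasoning can be mimicked by an iterative deletion: an agent who knows the correspondence can remove from P(ω) any state at which her possibility set would have differed from the one she actually has. The deletion rule below is my own formalization of this two-state story, not a construction from the paper:

```python
# Hedged sketch of the ozone example: the agent deletes from P(w) any
# state w2 whose possibility set differs from her actual one, iterating
# to a fixed point. The rule is my illustration of this example only,
# not a general result.
P = {"a": {"a"}, "b": {"a", "b"}}   # a: ozone disintegrating, b: not

changed = True
while changed:
    changed = False
    for w in P:
        keep = {w2 for w2 in P[w] if P[w2] == P[w]}
        if keep != P[w]:
            P[w], changed = keep, True

print(P == {"a": {"a"}, "b": {"b"}})   # → True: the refined P is a partition
```

At b the agent discards a, because at a her possibility set would have been {a} rather than {a, b}; the fixed point is the partition described in the text.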

2.3 A Reinterpretation of the Model

Samet's model does not distinguish the agent's knowledge of non-epistemic propositions from her knowledge of epistemic propositions. However, under the axioms A1–A3, if the agent knows something, she can derive, by introspection, that she knows that she knows it, and so on. So if the agent knows "it is sunny", she knows "I know that it is sunny", she knows "I know that I know that it is sunny", and so on. This type of knowledge can be called inferred knowledge. Thus the knowledge of the proposition "it is sunny" is enough to generate the whole sequence above.

A reinterpretation of the model can be made by assuming that at different states the agent has some non-inferred knowledge together with certain deduction rules. Through reasoning and introspection the agent can derive other things, increasing the set of propositions she knows; the end result of this iterative process is what the axiomatic models of knowledge refer to. In fact, it is usual in axiomatic models of knowledge to include both inferred and non-inferred knowledge in the structure of the model.[7]

Instead of defining the states of the world as I did before, using all the propositions in Φ, one could consider a subset of propositions such that once the true and false values are assigned to these propositions all the assignments to the other propositions in Φ can be derived using the axioms.

The set of “basic propositions” one needs to consider to generate Ω depends on the axioms of the model. For example, under axioms A1–A3, by considering assignments to the propositions about the environment and the propositions about the agent's knowledge of the environment, one can “generate” all the other assignments.

I am interested in a model where the agent's non-inferred knowledge includes, in any state of the world, knowing that a proposition can either be true or false, knowing the assignment rules of logic and knowing the notion of a state of the world. Through introspection and reasoning the agent can then derive a set of conceivable states of the world, which means that, at the end of the agent's reasoning process, she will have a certain view about the set of states of the world.

3 The Description of My Model

3.1 Features of the Model

I start this section by discussing the features of my model.

  1. The distinction between being aware of the existence of a proposition and knowing or not knowing its logical value is introduced in my model. Obviously an agent can only know the logical value of a proposition if she is aware of the proposition. One can imagine asking the agent, as a first step, to enumerate the propositions she can think about.

  2. I want the agent to be able to think logically. In order to achieve this goal I introduce logic in the model, in any state of the world the rules of propositional calculus hold. Thus the model considered here has high consistency requirements when compared with the model in the previous section. Moreover the agent is assumed to be able to make deductions and to know the axioms of logic and therefore to know the theorems of logic, too.

  3. I continue to speak of knowledge, so axiom A1 continues to be valid.

  4. Positive introspection is maintained. I also preserve negative introspection whenever that makes sense, i. e. if the agent is aware of a proposition ϕ and she doesn't know ϕ, then she is able to conclude that she doesn't know it.

  5. The agent is not assumed to know a priori the set of states of the world. Instead the agent knows the concepts of a state of the world and of the possibility correspondence. Since she can think, she is able to deduce a set of conceivable states of the world and define the possibility correspondence on such universe.

Introducing logic. Besides the negation and knowledge operators, all other operators of logic are defined in the model. The set of propositions Φ is closed under the logical operators. For example, if propositions ϕ1 and ϕ2 are elements of Φ, so is the proposition ϕ1 ∧ ϕ2. Hence the structure of Φ is substantially more complex than the one used by Samet (1990). I will return to the issue of the structure of Φ later.

I assume that the assignment rules of propositional calculus hold in any state of the world. Thus once the logical values of the "basic propositions" are known, one knows the logical values of all the other propositions. In the example, if both ϕ1 and ϕ2 are true, the proposition ϕ1 ∧ ϕ2 is also true; otherwise it is false. While in the model of the previous section the only logical consistency requirement was that if a proposition is true its negation is false, here in any state of the world the assignments of true and false values have to satisfy all the rules of logic.
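The conjunction rule just stated can be written as a trivial executable check (the encoding and names are mine):

```python
# The assignment rule for conjunction: w(phi1 and phi2) = 1 iff both
# conjuncts are true in w. Encoding and names are illustrative.
def val_conj(w, p1, p2):
    return 1 if w[p1] == 1 and w[p2] == 1 else 0

print(val_conj({"phi1": 1, "phi2": 1}, "phi1", "phi2"))   # → 1
print(val_conj({"phi1": 1, "phi2": 0}, "phi1", "phi2"))   # → 0
```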

The symbols ¬, ∧, ∨, →, ↔ are part of the alphabet of the model, and can be used in propositions.

The awareness operator. As noted above, it is important to distinguish the agent's knowledge of the existence of a given proposition from her knowledge about the logical value of the proposition. I will call the first kind of knowledge awareness.[8]

For each proposition ϕ ∈ Φ there exists a proposition Aϕ, which means "the agent is aware of proposition ϕ". Now each state of the world is also characterized by whether or not the agent is aware of each given proposition.

Definition 3

The awareness set of the agent at state ω, Φa(ω), is the set of propositions the agent is aware of in the state ω:

Φa(ω) = {ϕ ∈ Φ : ω(Aϕ) = 1}.

I assume the agent always knows that if a proposition exists its negation also exists as well as propositions about the agent's knowledge of the proposition. The agent also knows that if two propositions exist then one can define their conjunction, disjunction, and so on. As a consequence, the set of propositions the agent is aware of has to be closed under the logical operators and the knowledge operator.

The structure of Φ. The set of propositions in my model is substantially more complex than the one in the model of the previous section. This is so partly because of the inclusion of the logical operators and the awareness operator in the model.

There is however a more fundamental difference. Since I am interested in a model where it makes sense to ask the agent her view about the set of states of the world and which states she considers possible, a complete model includes propositions which refer to the conceivable states of the world, the epistemic content of these states and the set of propositions the agent is aware of. In other words, the set of propositions Φ includes not only propositions about the environment and the agent's awareness and knowledge, but also propositions about the "model": about the set of states of the world and the information structure.

To illustrate this idea, consider the example of the previous section where the basic proposition about the environment was "it is sunny". One can define the proposition "there exists a conceivable state of the world where 'it is sunny' holds and the agent knows it". In order for us to say that the agent knows the universe of states of the world, she has to know propositions like this one.

This leads to an element of circularity in the definition of states of the world, since I am defining states of the world based on the assignments of true and false values to propositions, where some propositions refer to states of the world. The element of circularity is not surprising. Circularity also exists in models where a probability space (Ω, P, μ) is considered and where it is assumed that the agent knows the information structure. This is so because, in order to incorporate the idea that the agent knows the information structure, one has to include the possibility correspondence in the full description of the state of the world ω.[9] Since my model is designed to answer questions related to the agent's knowledge of the set of states of the world, one should expect the same type of circularity.

3.2 The Axioms

Let Φ be the set of all propositions. In order to simplify the list of axioms, let g(ϕ) either be a proposition that combines ϕ with one or several of the operators A, K, ¬ (such as Kϕ, ¬ϕ) or the proposition "there exists a state of the world where ϕ holds".

The states of the world defined in this model are “awareness-epistemic states of the world”.

Definition 4

Consider Σ = { 0 , 1 } Φ , the set of all the assignments of true and false values to the propositions in Φ. A state of the world ω is an element of Σ such that the assignment of true and false values satisfies the assignment rules of propositional calculus, the awareness and the epistemic rules stated below:

A0: Axioms of propositional calculus;
A1: If ω(Kϕ) = 1 then ω(ϕ) = 1, ∀ϕ ∈ Φ;
A2: If ω(Kϕ) = 1 then ω(KKϕ) = 1, ∀ϕ ∈ Φ;
A3: If ω(K(ϕ1 → ϕ2)) = 1, then ω(Kϕ1) = 1 ⇒ ω(Kϕ2) = 1, ∀ϕ1, ϕ2 ∈ Φ;
A4: ω(Aϕ) = ω(Ag(ϕ)), ∀ϕ ∈ Φ;
A5: ω(Aϕ1) = 1 and ω(Aϕ2) = 1 ⇒ ω(A(ϕ1 ∧ ϕ2)) = 1, ∀ϕ1, ϕ2 ∈ Φ;
A6: If ω(Aϕ) = 0 then ω(¬Kϕ) = 1 and ω(¬K¬ϕ) = 1, ∀ϕ ∈ Φ;
A7: If ω(Aϕ) = 1 and ω(¬Kϕ) = 1 then ω(K¬Kϕ) = 1, ∀ϕ ∈ Φ;
A8: The agent knows Definitions 1, 2, 3, 4 and axioms A0–A7.

The first axiom says that the universe of states is logically consistent. It includes the requirement that in any state of the world either a proposition is true or its negation is true, but extends that to all other propositional calculus rules.[10]

The next three axioms refer to the assignment rules of epistemic propositions. A 1 guarantees that one is speaking of knowledge. A 2 says that the agent is able to do positive introspection and whenever she knows something she knows that she knows it. A 3 says that the agent's knowledge has to be closed under implication. So if she knows “if it rains then there are clouds in the sky” and additionally she knows “it is raining”, then she knows that “there are clouds in the sky”. This is obviously a very strong assumption. It imposes a very high level of rationality. For example, if the agent knows the axioms of logic, she knows all the theorems too.

Axioms A 4– A 5 say that in any state of the world the set of propositions the agent is aware of is closed under the logical and the knowledge operator. The idea is that the agent knows that one can always define the negation of a proposition, propositions about the knowledge of the proposition, and so on. In addition if the agent knows that a proposition exists she knows that she can use the proposition in her description of the states of the world and vice-versa.

A 6 says that if the agent is not aware of a proposition then she cannot know if the proposition is true or false. Knowledge of the existence of the proposition is a necessary condition for the knowledge of its logical value.

A 7 says that negative introspection holds if the agent is aware of the proposition. This axiom is weaker than the axiom of negative introspection in the model of Section 2. In that model, whenever the agent doesn't know a proposition she knows that she doesn't; here this property holds only if the agent is aware of the proposition.

If A 8 holds the agent knows the properties of her knowledge. In combination with A 0 this means that the agent knows logic, and with A 3 that she can derive the logical consequences of the properties of her knowledge. One remark about the axiom: the agent's knowing definition 4 doesn't mean she knows Φ. It means that the agent knows that a state of the world is an assignment of true and false values to all the propositions which is consistent with the axioms.
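As an illustration of how these assignment rules can be checked mechanically, here is a minimal Python sketch. The encoding is my own and purely illustrative (not part of the paper's formalism): a state is a dictionary assigning 0/1 to string labels, with "K:" and "A:" marking the knowledge and awareness operators.

```python
# States are dictionaries assigning 0/1 to proposition labels; "K:" and
# "A:" are illustrative markers for the knowledge and awareness operators.

def satisfies_A1(w, phi):
    # A1: if w(K phi) = 1 then w(phi) = 1
    return not w.get("K:" + phi, 0) or bool(w.get(phi, 0))

def satisfies_A2(w, phi):
    # A2: if w(K phi) = 1 then w(KK phi) = 1
    return not w.get("K:" + phi, 0) or bool(w.get("K:K:" + phi, 0))

def satisfies_A6(w, phi):
    # A6: if w(A phi) = 0, the agent knows neither phi nor its negation
    if w.get("A:" + phi, 0):
        return True
    return not w.get("K:" + phi, 0) and not w.get("K:~" + phi, 0)

w = {"p": 1, "A:p": 1, "K:p": 1, "K:K:p": 1}
print(all(f(w, "p") for f in (satisfies_A1, satisfies_A2, satisfies_A6)))  # True
```

An assignment with K:p = 1 but p = 0 would fail the A 1 check, mirroring the requirement that the universe of states be logically consistent.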

Later I will add one more axiom which is necessary for coherence between the modeler's model and the agent's view of the model.

Axioms are by their own nature assumed to be always true. So, in a sense one can interpret them as propositions which are always valid. For these types of propositions, as well as any logical consequence of them, only the true value can hold in any state of the world.

Let Ω be the set of all states of the world. I will also refer to Ω as the set of states of the world conceivable by the modeler.

An Alternative Approach Instead of considering the full set of propositions to define the states of the world one could consider “basic propositions” which are sufficient to generate the set of states of the world, given the set of axioms. Let us call the set of “basic propositions” Φ^b. Let Σ^b = {0,1}^{Φ^b} be the set of assignments of true and false values to these propositions and let Ω^b be the subset of Σ^b which satisfies the axioms of the model.[11] Let us define a correspondence from Ω^b to Ω as follows:

f(ω^b) = {ω ∈ Ω : ω^b(ϕ) = ω(ϕ), ∀ϕ ∈ Φ^b}.

We can say that Φ^b generates the set of states of the world Ω if and only if f is a one-to-one mapping. There are, of course, many sets of propositions which generate Ω; the set of “basic propositions” is the minimal set of simple propositions which generates Ω.
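The generation condition can be illustrated with a toy sketch. The encoding below is hypothetical: complete states are dictionaries over ("p", "Kp"), and the toy closure rule (the agent knows p exactly when p holds) stands in for the full set of axioms.

```python
# Toy universe: complete states assign 0/1 to ("p", "Kp"), under the
# illustrative rule that the agent knows p exactly when p holds.
omega = [{"p": 0, "Kp": 0}, {"p": 1, "Kp": 1}]

phi_b = ["p"]                      # candidate set of basic propositions
omega_b = [{"p": 0}, {"p": 1}]     # basic states over phi_b

def f(wb):
    # Complete states that agree with the basic state on phi_b.
    return [w for w in omega if all(w[q] == wb[q] for q in phi_b)]

# f is one-to-one (every image is a singleton), so here {"p"} generates omega.
print([len(f(wb)) for wb in omega_b])  # [1, 1]
```

If the toy rule were dropped, f({"p": 1}) would contain two complete states and {"p"} would no longer generate the universe.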

The set of “basic propositions” in my model includes propositions about the environment, propositions about the agent's awareness of these propositions and propositions about the agent's knowledge of the logical value of the propositions describing the environment. The other propositions (including propositions referring to the agent's knowledge about her knowledge, what the agent knows about the true state of the world, and which states the agent considers possible) all have assignments which can be derived from the assignments to the basic propositions and the axioms.

Whenever I speak about a basic state of the world I use the superscript b. One can define the concepts of epistemic content and awareness set of a basic state of the world, just substituting Φ by Φ b in the previous definitions.

In the paper I often use the basic states of the world, since they are easier to characterize and encode all the information we need. I call the states of the world ω ∈ Ω “complete” states of the world.

A reinterpretation of the notation What does it mean to say that ω ( K ϕ ) = 1 ? It says literally that the proposition “the agent knows ϕ” holds in the state of the world ω. But what is the meaning of the agent knowing ϕ? It means that the agent knows that ϕ = 1 , or to be more precise it means that the agent knows that the state of the world is such that ϕ = 1 (such statement makes sense in this framework because the agent knows the meaning of a state of the world).

Given A 1, the agent knowing ϕ, means she knows something about the actual state of the world. Let us denote by ω * the true state of the world. One can express ω ( K ϕ ) = 1 in the following way: ω ( K ( ω * ( ϕ ) = 1 ) ) = 1 , the agent knows at ω that in the true state of the world ϕ = 1 (the true state of the world is ω, but the agent may not know that). This gives a correspondence between the agent's knowledge of propositions and the agent's knowledge about the true state of the world. This interpretation is implicit in many steps which follow. However I will keep the previous notation for simplicity.

In addition one can interpret ω(Aϕ) = 1 as ω(K(ϕ ∈ Φ)) = 1.

4 Consequences of the Axioms

The (complete) states of the world are full descriptions. They include all deductions made by the agent. In particular, her knowledge about the information structure and about the conceivable states of the world is part of their definition. In this section I use the axioms of the model to prove some properties which have to hold in the complete states of the world.

Since the agent knows the axioms of the model and knows logic, she can derive the consequences of the axioms. We notice that Φ includes the axioms of the model as well as their consequences, all these are objects that the agent can speak about.

Lemma 1

  1. ω(Kϕ) = 1 ⟺ ω(KKϕ) = 1.

  2. ω(Kϕ) = 1 ⟹ ω(Aϕ) = 1.

  3. ω(K g(ϕ)) = 1 ⟹ ω(Aϕ) = 1 or, equivalently, ω(Aϕ) = 0 ⟹ ω(∼K g(ϕ)) = 1.

  4. If ω(Aϕ) = 0 then ω((∼K)^n ϕ) = 1, for any n.

  5. ω(Kϕ) = 1 ⟹ ω(K∼K∼ϕ) = 1.

Proof.

  1. By A 2, ω(Kϕ) = 1 ⟹ ω(KKϕ) = 1. Applying A 1 to the proposition Kϕ one gets: ω(KKϕ) = 1 ⟹ ω(Kϕ) = 1. Hence the agent knows that a proposition holds iff she knows that she knows it.

  2. This is an immediate consequence of A 6, since ω(Kϕ) = 1 ⟹ ω(∼Kϕ) = 0, thus ω(Aϕ) = 1.

  3. If ω(K g(ϕ)) = 1, by property (ii) ω(A g(ϕ)) = 1, which is equivalent to ω(Aϕ) = 1 by axiom A 4.

  4. This property is a particular case of (iii) (for g(ϕ) = (∼K)^{n−1} ϕ). I present a direct proof by induction. The property holds for n = 1, by axiom A 6. Suppose that it holds for n; I prove that it has to hold for n + 1. Suppose not; then we would have ω(K(∼K)^n ϕ) = 1. Hence, by property (ii), ω(A(∼K)^n ϕ) = 1. By A 4 this implies that ω(Aϕ) = 1, which contradicts the initial assumption.

  5. If ω(Kϕ) = 1 then by A 1 ω(ϕ) = 1. Hence ω(∼ϕ) = 0. But ω(∼ϕ) = 0 implies by A 1 that ω(K∼ϕ) = 0 or, equivalently, ω(∼K∼ϕ) = 1. Thus one concludes that ω(Kϕ) = 1 ⟹ ω(∼K∼ϕ) = 1. Since the agent knows the axioms and knows logic she knows that this implication holds. Moreover, by A 2, ω(KKϕ) = 1. Hence by A 3, ω(K∼K∼ϕ) = 1.[12]

These properties are immediate consequences of the axioms. One way of interpreting them is as propositions which have to hold in any state of the world. Notice that they are propositions about properties of states of the world. However there is something else about these consequences which is important. Suppose that the state of the world is such that the agent knows that the antecedent of a given condition holds; then she knows the conclusion has to hold as well. In other words, the knowledge of these properties, combined with the specific knowledge the agent has about the true state of the world, leads the agent to derive further things about the true state of the world.

The next proposition refers to an extremely important consequence of the axioms.

Proposition 2

ω(Aϕ) = 1 iff either ω(KKϕ) = 1 or ω(K∼Kϕ) = 1.

Proof. Assume ω(Aϕ) = 1. In any state of the world either ω(Kϕ) = 1 or ω(∼Kϕ) = 1. When ω(Kϕ) = 1, by axiom A 2, ω(KKϕ) = 1, and when ω(∼Kϕ) = 1, by axiom A 7, ω(K∼Kϕ) = 1. The implication in the other direction is a consequence of property (iii).

Under the axioms of the model the agent being aware of a proposition can be interpreted as the agent being able to recognize whether she knows or doesn't know the logical value of that proposition.

A corollary of proposition 2 is that the agent being aware of a proposition is a “self-evident” event. If the agent is aware of a proposition she knows that she is aware of it:

Corollary 5

ω(Aϕ) = 1 ⟺ ω(K(Aϕ)) = 1.

Proof. Suppose ω(Aϕ) = 1. Applying axiom A 2 to the RHS of the equivalence in proposition 2 one gets ω(KKKϕ) = 1 or ω(KK∼Kϕ) = 1. Since the agent knows that proposition 2 holds, axiom A 3 implies ω(K(Aϕ)) = 1. The implication in the other direction is a consequence of axiom A 1.

The agent knows the consequences of the axioms, so she knows that this equivalence holds. Thus the agent knows that there is no state of the world and no ϕ Φ such that she is aware of ϕ but she doesn't know so.

At any state of the world the agent knows exactly the set of propositions she is aware of. The set {ϕ : ω(Aϕ) = 1} = Φ_a(ω) is the same as the set {ϕ : ω(K(Aϕ)) = 1}. Hence the proposition [{ϕ : ω*(Aϕ) = 1} = Φ_a(ω)] belongs to K̄(ω).

One may ask an interesting question: what are the properties of the possibility correspondence under this set of axioms? The rationality assumptions guarantee that the possibility correspondence forms a partition:

Proposition 3

Under the axioms A 0 A 8 the possibility relation is an equivalence relation.

Proof. Since A 1 and A 2 hold one knows, by proposition 1, that if ω′ p ω then K̄(ω′) ⊇ K̄(ω). So, at ω′ the agent has to know at least as much as she knows at ω. Suppose there exists some proposition ϕ ∈ Φ such that ω(∼Kϕ) = 1 and ω′(Kϕ) = 1. Consider first the case where ϕ ∈ Φ_a(ω). In this case ω(∼Kϕ) = 1 implies, by negative introspection, that ω(K∼Kϕ) = 1. Since the agent has to know at ω′ all that she knows at ω, this implies ω′(K∼Kϕ) = 1 and, by A 1, ω′(∼Kϕ) = 1, which contradicts the initial assumption. Therefore ϕ cannot belong to Φ_a(ω). But corollary 5 tells us that at ω the agent knows the set of propositions she is aware of and she knows that she is not aware of any other proposition. Hence the axioms of the model, which are known to the agent, imply that at state ω the agent doesn't consider possible any epistemic state where she is aware of more propositions than the ones she is aware of.

The argument in the proof gives a characterization of the states that the agent considers possible at ω: they have to be states where the set of propositions the agent is aware of is the same as at ω and where the set of propositions the agent knows is also the same. This characterization is true independently of whether the agent is aware of all the existent propositions. An important conclusion is that the result that the possibility correspondence partitions the set of states of the world depends upon the rationality assumption implicit in the axiom that if the agent doesn't know something (and is aware of it) she knows that she doesn't know. Hence by giving up complete awareness one does not lose the partitions result.

Considering the one-to-one correspondence between complete states of the world and basic states of the world, one can translate the possibility correspondence into the set of basic states of the world. The possibility correspondence partitions the set of basic states according to the awareness set and the epistemic content of the basic states of the world, i. e.:

P(ω^b) = {ω′^b ∈ Ω^b : Φ_a(ω′^b) = Φ_a(ω^b) and K̄(ω′^b) = K̄(ω^b)}.
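A small computational sketch of this partition, with a single atom "p" and illustrative labels of my own; the consistency check uses only toy versions of A 1 and A 6:

```python
from itertools import product

def consistent(w):
    # Toy consistency: A1 (knowledge implies truth) and A6 (knowledge
    # requires awareness) for the single atom "p".
    return not (w["K:p"] and not w["p"]) and not (w["K:p"] and not w["A:p"])

# All 0/1 assignments to ("p", "A:p", "K:p") that pass the toy axioms.
states = [dict(zip(("p", "A:p", "K:p"), bits))
          for bits in product((0, 1), repeat=3)]
states = [w for w in states if consistent(w)]

def cell_key(w):
    # Awareness set and epistemic content of the basic state.
    awareness = frozenset(q for q in ("p",) if w["A:" + q])
    knowledge = frozenset(q for q in ("p",) if w["K:" + q])
    return awareness, knowledge

cells = {}
for w in states:
    cells.setdefault(cell_key(w), []).append(w)

# The cells are disjoint and exhaustive: a partition of the state space.
print(len(states), len(cells))  # 5 3
```

Grouping by the pair (awareness set, epistemic content) automatically yields disjoint, exhaustive cells, which is exactly the partition property the proposition establishes.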

5 The Agent's View of the Universe at ω

In this section I consider a given basic state of the world, ω b , and ask: what is the agent's view of the world in the corresponding complete state of the world ω? Which states of the world can she conceive? Which states of the world does she consider possible?

5.1 Agent's View about the Model

The properties of the model guarantee that the agent knows at ω which propositions belong to the set Φ_a(ω^b). Moreover she knows that she is within a model where there is an abstract set of basic propositions Φ^b, which is the maximal set of basic propositions she can be aware of. If the agent doesn't know anything more about Φ^b she will be uncertain whether Φ^b ⊋ Φ_a(ω^b) or Φ^b = Φ_a(ω^b). But then a complete epistemic model should include this uncertainty within the model. In a sense what is happening is that the agent doesn't know in which “model” she is. She knows that the model has to satisfy the properties A 0 to A 7, but she doesn't know which model in this class she is in.

One way to solve this internal consistency problem is to extend the model in order to include all the models in the class of models satisfying the axioms above. However there are three interesting cases where one doesn't need to do so.

One case is when the agent is aware, in any state of the world, of all the existing propositions. Let us define the “completeness” axiom as follows:

AC: ω(Aϕ) = 1, ∀ϕ ∈ Φ^b;

ω(K(AC)) = 1.

Adding this “completeness” axiom to the previous set of axioms, leads to a model where the agent knows the set of states of the world.

The other interesting case is when the set Φ^b is infinite and in any state of the world the set of propositions the agent is aware of is a strict subset of the set of existing propositions. This can be described by the following “incompleteness axiom”:

AI: ∃ϕ ∈ Φ^b : ω(Aϕ) = 0;

ω(K(AI)) = 1.

A combination of these two cases which fits this setup is when completeness/incompleteness is self-evident, i. e. if the agent is not aware of all propositions she knows it, and if she is aware of all propositions she also knows it. Below I concentrate on the case where the agent is never aware of all the basic propositions. I also refer briefly to the opposite case where the agent is always aware of all propositions.

The next two subsections answer the question of what is the agent's description of the set of states of the world and of the information structure. The concepts of states of the world conceivable by the agent and agent's possible states of the world are introduced.

5.2 Agent's Description of the World

The agent is not assumed to know a priori Ω, the set of states of the world. However the agent knows the concept of state of the world, she knows that a state of the world is an assignment of true and false values to the basic propositions which is consistent with the axioms of the model.

The agent can construct her own description of the set of states of the world, using her knowledge, at ω, about Φ b and the concept of state of the world.

Definition 6

The set of basic states of the world conceivable by the agent at state ω, is the set of states of the world that the agent can construct using the concept of state of the world and her knowledge about the set of basic propositions.

The question is how much the agent knows about Φ^b. At ω the agent knows which are the elements of Φ_a(ω^b). If the completeness axiom holds then the agent knows this is exactly the set Φ^b.

If the incompleteness axiom holds then the agent knows that Φ^b \ Φ_a(ω^b) ≠ ∅. This implies that she knows that there exist states of the world conceivable by the modeler where she is aware of propositions not in Φ_a(ω^b). She also knows that the set Φ^b \ Φ_a(ω^b) includes propositions about the environment, and propositions about awareness and knowledge of the propositions about the environment. These are consequences of the agent's knowledge about the structure of the model.

Although the agent has a generic knowledge about the existence of states of the world conceivable by the modeler where she would be aware of propositions she is not aware of, she cannot know which specific propositions not in Φ_a(ω^b) she would be aware of in such states. Suppose the contrary: assume that the state ω′^b is such that ω′^b(Aϕ*) = 1, while ω^b(Aϕ*) = 0. The agent cannot be aware of the proposition “there exists a state such that Aϕ* = 1”, because if she were aware of such a proposition then by A 4 she would be aware of the proposition ϕ* at the state ω, a contradiction.

Obviously the agent is restricted in her description of the states of the world since she cannot use any proposition not in Φ_a(ω^b). However, the knowledge that Φ^b \ Φ_a(ω^b) ≠ ∅ and that this set includes propositions about the environment, awareness and knowledge of the propositions about the environment should be used in some manner by the agent in her description of the world.

In order to formalize this idea let us define the propositions: ϕ ¯ ω = “propositions about the environment not included in Φ a ( ω b ) hold”, ϕ ¯ ω a = “the agent is aware of some proposition about the environment not in Φ a ( ω b ) ” and ϕ ¯ ω k = “the agent knows some proposition about the environment not in Φ a ( ω b ) ”.

The logical value of the propositions ϕ̄_ω^a and ϕ̄_ω^k can be determined for any basic state of the world ω^b, as follows: ω^b(ϕ̄_ω^a) = 1 iff ∃ϕ ∈ Φ^b \ Φ_a(ω^b) such that ω^b(Aϕ) = 1, and similarly for the proposition ϕ̄_ω^k. Once the basic states of the world are defined there is no ambiguity about the logical value of these propositions.
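This determination can be sketched in a few lines. The labels are hypothetical and `phi_a` stands for the environment propositions in Φ_a(ω^b):

```python
def generic_values(state, phi_a):
    # state maps labels like "A:phi2" / "K:phi2" to 0/1; phi_a is the set
    # of environment propositions the agent is aware of at omega.
    env = {k.split(":")[1] for k in state}
    missing = env - set(phi_a)
    aware_bar = int(any(state.get("A:" + p, 0) for p in missing))
    know_bar = int(any(state.get("K:" + p, 0) for p in missing))
    return aware_bar, know_bar

# A modeler's state where the agent is aware of phi2 (which she misses
# at omega) but knows nothing about it:
state = {"A:phi1": 1, "K:phi1": 0, "A:phi2": 1, "K:phi2": 0,
         "A:phi3": 0, "K:phi3": 0}
print(generic_values(state, {"phi1"}))  # (1, 0)
```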

Let Φ̂_a(ω^b) be the union of Φ_a(ω^b) with these three generic propositions. By adding these propositions to Φ_a(ω^b), all the agent's knowledge at ω about Φ^b is captured: the specific knowledge about Φ_a(ω^b) and the generic knowledge that Φ^b \ Φ_a(ω^b) is nonempty and includes propositions about the environment, awareness and knowledge of the propositions about the environment.

Lemma 2

Let us consider the elements of {0,1}^{Φ̂_a(ω^b)} which satisfy the axioms of the model. Under the incompleteness axiom, the agent can conceive this set of basic states of the world at state ω.

Proof. The agent is aware of the set of propositions Φ_a(ω^b) and knows the axioms of the model. Since the agent knows the incompleteness axiom, she knows that Φ^b \ Φ_a(ω^b) ≠ ∅. In addition she knows the structure of the model and thus she knows there are states where she is aware of, or knows, propositions not included in Φ_a(ω^b); this is summarized in the propositions ϕ̄_ω, ϕ̄_ω^a and ϕ̄_ω^k. Since she knows the axioms of the model she can imagine all the assignments of true and false values to the propositions in Φ̂_a(ω^b) which are consistent with the axioms.

Let us denote a basic state of the world conceivable by the agent at state ω by s_ω^b. The subscript ω is a reminder that this is a state conceived by the agent at state ω. Let Ω^b(Φ̂_a(ω)) be the set of all basic states of the world conceivable by the agent at ω. One can also define a conceivable complete state of the world, s_ω, as the closure under the axioms of a basic state of the world s_ω^b. Let Ω(Φ̂_a(ω)) be the set of complete states imaginable by the agent.

The states of the world that the agent conceives are a function of the state of the world ω, because the set of propositions the agent is aware of and hence can use in her description of the states of the world changes with the state of the world.

5.3 Agent's Possibility Correspondence

In the previous subsection we argued that the agent can construct her own description of the set of states of the world, using the concept of state of the world and her knowledge about the set of basic propositions. Similarly, the agent can use the concept of possibility correspondence to derive her own description of the information structure.

I start this subsection with a word about the interpretation of the possibility set P ( ω ) , when one does not assume a priori that the agent knows Ω. Then I define the agent's possibility correspondence.

5.3.1 About the Possibility Set P ( ω )

As I said above, the agent's knowing a proposition in a certain state of the world can be understood as the agent knowing that the state of the world is such that the proposition is true. Hence the state of the world has to be such that the propositions that she knows are true. In addition, as I proved, the agent knows the set of propositions she is aware of and she knows that she is not aware of any other proposition.

The knowledge about the true state of the world need not be complete. If the agent doesn't know the logical value of all the propositions then the agent cannot exactly determine the true state of the world. Moreover, the fact that the agent knows some of the properties of the true state of the world doesn't mean that the agent can enumerate the elements of Ω which satisfy these properties. Since the agent is not assumed a priori to know Ω, it may be impossible for her to do such an enumeration. Hence when one thinks about the possibility set P ( ω ) , from the agent's perspective, the set should be defined descriptively and not by enumeration. P ( ω ) can then be interpreted as the set of states of the world in Ω, which are consistent with all that the agent knows at ω.

In order for the agent to enumerate all the states which can possibly be the true state of the world she has to be aware of all the propositions she doesn't know. However it is not always true that the agent is aware of all the propositions she doesn't know. Hence the agent may fall short of a complete enumeration of all the possible states of the world.

5.3.2 The Agent's Possibility Correspondence

Although the agent can imagine all the states in Ω^b(Φ̂_a(ω)), she only considers possible a certain subset of these states. Given what she knows, some of the imaginable states cannot be the true state of the world. They are not possible states of the world.

Given the properties of the model, the subset of Ω^b(Φ̂_a(ω)) that the agent considers possible at ω consists of the states where she is aware of the same propositions as at ω^b and knows the same basic propositions as in ω^b. Let us call this set Γ*(ω^b). Formally:

Definition 7

The agent’s set of possible states of the world is:

Γ*(ω^b) = {s_ω^b ∈ Ω^b(Φ̂_a(ω)) : Φ_a(s_ω^b) = Φ_a(ω^b), K̄(s_ω^b) = K̄(ω^b)}.

The difference between Γ*(ω^b) and P(ω^b) is that the former is defined in the space of states of the world conceivable by the agent at ω, while the latter is defined in the set of states of the world conceivable by the modeler. The agent is able to enumerate the states which belong to Γ*(ω^b) but she might not be able to enumerate the elements of P(ω^b).

Since the agent knows the concepts of possibility relation and possibility correspondence she can apply these notions to her set of conceivable states. Let γ_ω be the agent's possibility relation on Ω^b(Φ̂_a(ω)) and let Γ_ω : Ω^b(Φ̂_a(ω)) → 2^{Ω^b(Φ̂_a(ω))} be the possibility correspondence in Ω^b(Φ̂_a(ω)). Both γ_ω and Γ_ω depend on the state of the world ω since Ω^b(Φ̂_a(ω)) also depends on ω.

Given the result in proposition 3 one can describe γ ω and Γ ω as follows:

s′_ω^b γ_ω s_ω^b iff Φ̂_a(s′_ω^b) = Φ̂_a(s_ω^b) and K̄(s′_ω^b) = K̄(s_ω^b).

Γ_ω(s_ω^b) = {s′_ω^b ∈ Ω^b(Φ̂_a(ω)) : s′_ω^b γ_ω s_ω^b}.

The possibility correspondence Γ ω tells us the information structure in the set of states of the world imaginable by the agent.

5.4 Conceivable and Possible States – An Example

The concepts of states of the world conceivable by the agent and the agent's possible states of the world are relevant both under the completeness/incompleteness axioms as well as under the case where the completeness/incompleteness is “self-evident”. In order to consider a tractable example, keeping the spirit of incomplete awareness, I present an example where completeness/incompleteness are “self-evident”.

Assume that there are only three unrelated propositions about the environment ϕ 1 = “it is sunny in Boston”, ϕ 2 = “it is sunny in Lisbon” and ϕ 3 = “it is sunny in Paris”.

Let us assume that in the actual state of the world the agent is only aware of the proposition “it is sunny in Boston”, and she knows that she is not aware of all the propositions. Assume in addition that ϕ1, ϕ2 and ϕ3 hold and that the agent doesn't know whether proposition ϕ1 is true or not.

Given her knowledge, the states of the world that the agent can conceive in the actual state of the world are described in Table 3.

Table 3:

Set of conceivable states when the agent is only aware of ϕ 1.

state | ϕ1 | Aϕ1 | Kϕ1 | K∼ϕ1 | ϕ̄_ω | ϕ̄_ω^a | ϕ̄_ω^k
s1 | 1 | 1 | 0 | 0 | 1 | 0 | 0
s2 | 0 | 1 | 0 | 0 | 1 | 0 | 0
s3 | 1 | 1 | 1 | 0 | 1 | 0 | 0
s4 | 0 | 1 | 0 | 1 | 1 | 0 | 0
s5 | 1 | 1 | 0 | 0 | 1 | 1 | 0
s6 | 0 | 1 | 0 | 0 | 1 | 1 | 0
s7 | 1 | 1 | 1 | 0 | 1 | 1 | 0

We notice that the set of states of the world conceivable by the agent would be as above in any other state of the world where the agent is only aware of the proposition ϕ1. Missing from the agent's description of the states of the world are the propositions ϕ2, ϕ3, Aϕ2, Aϕ3, Kϕ2, Kϕ3.

Given her knowledge, the states of the world the agent considers possible are s1 and s2. Notice that for the purpose of distinguishing between the possible states of the world what matters is the set of propositions that the agent doesn't know (but knows that she doesn't).
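This selection can be checked mechanically. The code below is an illustrative transcription of Table 3 (my own tuple encoding); `possible` keeps the states with the same awareness and knowledge fields as the actual state:

```python
# Illustrative transcription of Table 3: each conceivable state is a tuple
# (phi1, A_phi1, K_phi1, K_not_phi1, env_bar, aware_bar, know_bar).
table = {
    "s1": (1, 1, 0, 0, 1, 0, 0),
    "s2": (0, 1, 0, 0, 1, 0, 0),
    "s3": (1, 1, 1, 0, 1, 0, 0),
    "s4": (0, 1, 0, 1, 1, 0, 0),
    "s5": (1, 1, 0, 0, 1, 1, 0),
    "s6": (0, 1, 0, 0, 1, 1, 0),
    "s7": (1, 1, 1, 0, 1, 1, 0),
}

def possible(s):
    # Same awareness as the actual state (aware of phi1 only) and the same
    # knowledge (she knows nothing).
    _, a1, k1, kn1, _, abar, kbar = s
    return a1 == 1 and abar == 0 and k1 == 0 and kn1 == 0 and kbar == 0

print(sorted(name for name, s in table.items() if possible(s)))  # ['s1', 's2']
```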

It may seem that there is no reason for the agent to care about those conceivable states which are not possible. However there are many circumstances when the ability to think about what would happen if other conceivable states of the world were the true state of the world is crucial.[13]

6 The Relationship between the Agent's and the Modeler's Universe

In this section I analyze the relationship between the agent's and the modeler's universe under the completeness and incompleteness axioms.

6.1 The Model with the Completeness Axiom

Adding the “completeness axiom” has several implications. Axioms A 4– A 5 are redundant since Φ is assumed to be closed under the logical and knowledge operators. In addition A 6 is irrelevant because it is never the case that ω(Aϕ) = 0. Finally, A 7 becomes equivalent to having negative introspection over all the propositions the agent does not know.

It is trivial to show the following:

Proposition 4

When the completeness axiom is satisfied, Ω^b(Φ̂_a(ω)) = Ω^b, ∀ω. In addition Γ_ω = P.

Proof. By the “completeness axiom” Φ_a(ω^b) = Φ^b in any state of the world, and the agent knows that she is aware of all propositions. In addition she knows the definition of a state of the world and all the axioms of the model. Therefore the set of conceivable basic states of the world coincides with Ω^b. The fact that Γ_ω coincides with P is obvious from their definitions.

The most important difference between the model in Section 2 with axioms A 1– A 3 and this model is that here one can interpret the model from the agent's perspective, which is a result of the agent being able to reason and to know the properties of her knowledge. Here it makes sense to ask the agent which set of states she conceives, which set of states she considers possible given what she knows, and even which set of states she would consider possible if she were in some other state of the world. In any complete state of the world the agent knows what the set of states of the world is and what the information structure is.

The model with the completeness axiom has the same consequences as assuming informally that the agent knows the information structure, a common assumption in many economic models. However in this model the agent's knowledge about the information structure is not assumed. Instead it is “inferred knowledge”, in the sense that the agent derives for herself the set of states of the world and the information structure.

6.2 The Model with the Incompleteness Axiom

My interpretation of this case is that Φ is not finitely generated. It doesn't matter how many (basic) propositions the agent is aware of, there are always more propositions that she could be aware of.

Before analyzing the relationship between the agent's model and the global model let me introduce here the concept of refinement.

Definition 8

A set Ω′ is a refinement of another set Ω if there is a mapping, Π̂, from the set of subsets of Ω to the set of subsets of Ω′ (Π̂ : 2^Ω → 2^{Ω′}), such that:

(i) Π̂({ω}) ≠ ∅, for all ω ∈ Ω;

(ii) Π̂({ω}) ∩ Π̂({ω′}) = ∅, if ω ≠ ω′;

(iii) ∪{Π̂({ω}) : ω ∈ Ω} = Ω′;

(iv) Π̂(A) = ∪{Π̂({ω}) : ω ∈ A}.

Since (iv) implies that the image set of a subset of Ω is the union of the image sets of each of its elements, one can define refinement in a slightly different and more familiar way: Ω′ is a refinement of Ω if there is a correspondence from Ω onto Ω′ such that the image sets corresponding to any two different elements of Ω do not intersect.

Definition 9

The mapping Π : 2^{Ω′} → 2^Ω defined as:

Π(A) = {ω ∈ Ω : Π̂({ω}) ∩ A ≠ ∅}

is called the outer reduction induced by Π̂.
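For finite sets, Definitions 8 and 9 can be sketched directly. The code is illustrative: `pi_hat` maps each element of the coarse set to its image set in the finer one.

```python
def is_refinement(pi_hat, omega, omega_prime):
    # (i) every image set is nonempty
    if any(not pi_hat[w] for w in omega):
        return False
    # (ii) image sets of distinct elements are disjoint
    ws = list(omega)
    for i, w in enumerate(ws):
        for w2 in ws[i + 1:]:
            if pi_hat[w] & pi_hat[w2]:
                return False
    # (iii) the images cover omega_prime
    return set().union(*(pi_hat[w] for w in omega)) == set(omega_prime)

def outer_reduction(pi_hat, subset):
    # Definition 9: elements of omega whose image set meets the subset.
    return {w for w in pi_hat if pi_hat[w] & subset}

# Each coarse state splits into finer ones:
omega = ["a", "b"]
omega_prime = ["a1", "a2", "b1"]
pi_hat = {"a": {"a1", "a2"}, "b": {"b1"}}
print(is_refinement(pi_hat, omega, omega_prime))  # True
print(sorted(outer_reduction(pi_hat, {"a1"})))    # ['a']
```

Condition (iv) is built in here, since `outer_reduction` and the cover check both work element by element over the singleton images.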

The relationship between the agent's model and the true model is the following:

Proposition 5

Ω b is a refinement of Ω b ( Φ ^ a ( ω ) ) .

Proof. Define the refinement map:[14]

Π̂({s_ω^b}) = {ω^b ∈ Ω^b : ω^b(ϕ) = s_ω^b(ϕ), ∀ϕ ∈ Φ̂_a(ω)}

Π̂(A) = ∪{Π̂({s_ω^b}) : s_ω^b ∈ A}.

One needs to prove that Π ^ satisfies the requirements (i) to (iv).

  1. The only way this property could fail is if a consistent assignment over Φ̂_a(ω^b) implied something incompatible with every assignment over Φ^b \ Φ_a(ω^b). However, for any given assignment over Φ̂_a(ω^b), the only implications over the set of propositions that the agent is not aware of are the ones implied by the propositions ϕ̄_ω^a and ϕ̄_ω^k.

  2. If s_ω^b ≠ s′_ω^b then there exists some ϕ ∈ Φ̂_a(ω^b) such that s_ω^b(ϕ) ≠ s′_ω^b(ϕ); call it ϕ*. By definition of the refinement map, for any ω^b ∈ Π̂({s_ω^b}) and any ω′^b ∈ Π̂({s′_ω^b}) we have ω^b(ϕ*) = s_ω^b(ϕ*) ≠ s′_ω^b(ϕ*) = ω′^b(ϕ*). Hence the requirement holds.

  3. One needs to prove that there is no ω^b ∈ Ω^b for which there is no s_ω^b ∈ Ω^b(Φ̂_a(ω)) such that ω^b ∈ Π̂({s_ω^b}). This can be proven by contradiction. Suppose there exists such a state of the world; call it ω̄^b. By construction ω̄^b is an element of {0,1}^{Φ^b} which is consistent with the axioms of the model. This can also be written as ω̄^b ∈ {0,1}^{Φ_a(ω^b)} × {0,1}^{Φ^b \ Φ_a(ω^b)}. The non-existence of a state of the world conceivable by the agent whose image contains ω̄^b is equivalent to saying that if one projects ω̄^b on the space Φ̂_a(ω), the assignment obtained on that subset of propositions is not consistent with the axioms. This means that either the assignment over Φ_a(ω^b) is not consistent with the axioms or the assignments to the propositions ϕ̄_ω^a and ϕ̄_ω^k are not consistent with the axioms. But then ω̄^b cannot be a consistent assignment, which is a contradiction.

  4. Holds by definition.

What this proposition means is that there is a one-to-many mapping from the set of states of the world conceivable by the agent to the set of states of the world of the modeler. The agent can only give an incomplete description of each state of the world. Hence if we take a state of the world as described by the agent, say s ω b , and ask which states of the world (in the modeler's model) satisfy the agent's description, there will be several states that do (precisely Π ^ ( s ω b ) ). In other words, the states of the world imaginable by the agent in the state ω are coarser than the modeler's states of the world.

The example in Section 5.4 illustrates this relationship. Consider, for example, the agent's state s 1 . There are four states of the world in the modeler's model which correspond to s 1 . These states have in common the propositions that “it is sunny in Boston” and “the agent doesn't know that it is sunny in Boston”, but in one of them “it is sunny in Paris and it is sunny in Lisbon”, in a second one “it is sunny in Paris and it is not sunny in Lisbon”, in the third “it is not sunny in Paris and it is sunny in Lisbon” and in the fourth “it is not sunny in Paris and it is not sunny in Lisbon”.

In the case of s 1 , since the agent is not aware of propositions ϕ 2 and ϕ 3 she cannot know their logical values. Hence the four states above are the only states which correspond to s 1 . The same reasoning holds for s 2 , s 3 and s 4 . However, not all the states of the world conceivable by the agent correspond to only four states in the modeler's universe. For example, in the case of s 5 , s 6 and s 7 there are 12 states corresponding to each of them in the modeler's universe ( ϕ 2 and ϕ 3 can take true or false values as above, and the agent can be aware of ϕ 2 , be aware of ϕ 3 or be aware of both).[15]
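To make the one-to-many relationship concrete, here is a minimal Python sketch of the idea behind the refinement map, restricted to the environment propositions of the Section 5.4 example. The function name and the dictionary representation are my own illustration, not the paper's formal apparatus: given the agent's partial description, it enumerates every full truth assignment the modeler would regard as consistent with it.

```python
from itertools import product

# Toy environment from the example in Section 5.4 (representation is
# illustrative): phi1 = "sunny in Boston", phi2 = "sunny in Paris",
# phi3 = "sunny in Lisbon".
ALL_PROPS = ["phi1", "phi2", "phi3"]

def refinement(coarse_state, all_props=ALL_PROPS):
    """All full truth assignments consistent with the agent's partial
    description (a sketch of the refinement map on environment propositions)."""
    unaware = [p for p in all_props if p not in coarse_state]
    full_states = []
    for values in product([0, 1], repeat=len(unaware)):
        full = dict(coarse_state)        # keep the agent's assignments
        full.update(zip(unaware, values))  # fill in the propositions she misses
        full_states.append(full)
    return full_states

# The agent's state s1: she is aware only of phi1 and assigns it "true".
s1 = {"phi1": 1}
print(len(refinement(s1)))  # 4 modeler states match the agent's description
```

Running this recovers the four modeler states corresponding to s 1 described in the text; a description covering two of the three propositions would instead leave two consistent states.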

The definition of states of the world conceivable by the agent only involves the set of propositions the agent is aware of and the knowledge the agent has of the axioms of the model. Since in any state of the world the agent knows the axioms, the fact that the agent may have different views of the world in alternative states of the world is driven by the differences in the set of propositions the agent is aware of in these alternative states. Two states of the world belonging to the same “awareness equivalence class” will have the same set of states of the world conceivable by the agent.

Furthermore, one can prove, using the same type of argument as in the proof of Proposition 5, that if two states of the world are such that the set of propositions the agent is aware of in the first state is a superset of the set of propositions she is aware of in the second state, then the set of states conceivable by the agent in the first state is a refinement of the set of states conceivable by her in the second state.

The next question one may ask is: how does Γ ω relate to P? Proposition 6 gives an answer to this question.

The proposition compares the possibility set in a given state, say ω ′ b , with the agent's view at ω of that possibility set. What do I mean by this? Since at ω there exists some state imaginable by the agent which corresponds to ω ′ b , s ω ′ b = Π ( ω ′ b ) , one can ask which are the states of the world, conceivable by the agent at ω, which she thinks at ω that she would consider possible if she were in that state, Γ ω ( s ω ′ b ) . This is the agent's view at ω of the possibility set in state ω ′ b .

However the agent's and the modeler's possibility correspondences are defined on different spaces. As a consequence one needs to express both correspondences in the same space in order to make such a comparison. I do this by using the refinement mapping on the agent's possibility sets, hence expressing both correspondences in Ω b . By applying the refinement mapping on the agent's possibility sets one identifies the states of the world (in the modeler's model) which are consistent with the agent's description of each possibility set. The comparison of these sets with the modeler's possibility sets is basically a comparison of the agent's description at ω of the information structure with the true information structure.

The result of this comparison is the following:

Proposition 6

  1. If Φ a ( ω ′ b ) ⊆ Φ a ( ω b ) then Π ^ ( Γ ω ( Π ( ω ′ b ) ) ) = P ( ω ′ b ) .

  2. If ∃ ϕ ∈ Φ a ( ω ′ b ) such that ϕ ∉ Φ a ( ω b ) then Π ^ ( Γ ω ( Π ( ω ′ b ) ) ) ⊃ P ( ω ′ b ) .

Proof.

  1. Let us consider a state ω ′ b such that Φ a ( ω ′ b ) ⊆ Φ a ( ω b ) . I construct Π ^ ( Γ ω ( Π ( ω ′ b ) ) ) and show that it is equal to P ( ω ′ b ) . Let us find the projection of ω ′ b on Ω b ( Φ a ( ω b ) ) , Π ( ω ′ b ) ; say it is the state s ω ′ b . By construction s ω ′ b has the same assignments as ω ′ b over the propositions in Φ ^ a ( ω b ) . Thus K ¯ ( s ω ′ b ) = K ¯ ( ω ′ b ) and Φ a ( s ω ′ b ) = Φ a ( ω ′ b ) ⊆ Φ a ( ω b ) . Moreover s ω ′ b ( ϕ ¯ ω a ) = 0 and s ω ′ b ( ϕ ¯ ω k ) = 0 . Any state in Γ ω ( s ω ′ b ) has the same awareness and epistemic states as s ω ′ b , and Γ ω ( s ω ′ b ) includes all the states conceivable by the agent which satisfy this property. Hence for any s ω ′ ′ b ∈ Γ ω ( s ω ′ b ) , K ¯ ( s ω ′ ′ b ) = K ¯ ( ω ′ b ) , Φ a ( s ω ′ ′ b ) = Φ a ( ω ′ b ) , s ω ′ ′ b ( ϕ ¯ ω a ) = 0 , and s ω ′ ′ b ( ϕ ¯ ω k ) = 0 . Applying the refinement mapping to Γ ω ( s ω ′ b ) one gets:

Π ^ ( Γ ω ( s ω ′ b ) ) = { ω ′ ′ b ∈ Ω b : ω ′ ′ b ( ϕ ) = s ω ′ ′ b ( ϕ ) ∀ ϕ ∈ Φ ^ a ( ω ) , where s ω ′ ′ b ∈ Γ ω ( s ω ′ b ) }

Obviously any ω ′ ′ b which belongs to this set has to satisfy K ¯ ( ω ′ ′ b ) = K ¯ ( ω ′ b ) and Φ a ( ω ′ ′ b ) = Φ a ( ω ′ b ) and hence it belongs to P ( ω ′ b ) . In addition this set has to include all the elements of Ω b which satisfy these properties, because otherwise Π ^ would not be a refinement mapping.

  2. Let us consider a state ω ′ b where the condition of part 2 holds; this implies that ω ′ b ( ϕ ¯ ω a ) = 1 . One can divide the set of propositions the agent is aware of at ω ′ b into two disjoint sets: A = Φ a ( ω ′ b ) ∩ Φ a ( ω b ) and B = Φ a ( ω ′ b ) \ A , where by assumption B is nonempty. Similarly one can partition the set K ¯ ( ω ′ b ) into the sets C and D: C = K ¯ ( ω ′ b ) ∩ Φ a ( ω b ) and D = K ¯ ( ω ′ b ) \ C ; D may or may not be empty. Let us assume it is not empty; the proof can easily be adjusted otherwise.

Applying the outer reduction mapping to ω ′ b one gets a state, s ω ′ b , such that s ω ′ b ( ϕ ¯ ω a ) = 1 and all the assignments of true and false values to the propositions in Φ a ( ω b ) coincide with those of ω ′ b . Hence Φ a ( s ω ′ b ) = A ∪ { ϕ ¯ ω a } and K ¯ ( s ω ′ b ) = C ∪ { ϕ ¯ ω k } .

The definition of Γ ω ( s ω ′ b ) implies that any state s ω ′ ′ b in the agent's possibility set satisfies Φ a ( s ω ′ ′ b ) = A ∪ { ϕ ¯ ω a } and K ¯ ( s ω ′ ′ b ) = C ∪ { ϕ ¯ ω k } . I only need to prove that applying the outer reduction mapping to any state ω ′ ′ b ∈ P ( ω ′ b ) one gets an element of Γ ω ( s ω ′ b ) . Let us assume that Π ( ω ′ ′ b ) = s ω ′ ′ b . Since Φ a ( ω ′ ′ b ) = A ∪ B and K ¯ ( ω ′ ′ b ) = C ∪ D , it is clear that Φ a ( s ω ′ ′ b ) = A ∪ { ϕ ¯ ω a } and K ¯ ( s ω ′ ′ b ) = C ∪ { ϕ ¯ ω k } . Hence s ω ′ ′ b ∈ Γ ω ( s ω ′ b ) .

It is obvious that in general P ( ω ′ b ) is strictly included in Π ^ ( Γ ω ( s ω ′ b ) ) , because any state ω * b such that Φ a ( ω * b ) ∩ Φ a ( ω b ) = A and K ¯ ( ω * b ) ∩ Φ a ( ω b ) = C and such that Φ a ( ω * b ) \ A and K ¯ ( ω * b ) \ C are nonempty is included in Π ^ ( Γ ω ( s ω ′ b ) ) , while P ( ω ′ b ) only includes the states ω ′ ′ b for which Φ a ( ω ′ ′ b ) = A ∪ B and K ¯ ( ω ′ ′ b ) = C ∪ D .

The first part of the proposition says literally that the set of states in the modeler's model which are consistent with the agent's description at ω of her possibility set in the conceivable state of the world corresponding to ω ′ b is exactly P ( ω ′ b ) . The comparison is between the sets of states of the world which are consistent with the agent's description of the possibility set, not a comparison of the descriptions of the possibility sets per se (these would differ whenever the awareness set at ω differs from the awareness set at ω ′ b ).

Roughly speaking, one can say that the agent's description at ω of the possibility set in a state of the world ω ′ b , in which she is not aware of any proposition she is unaware of at ω, is as good as the description she would make if that state of the world had been the true state of the world. This result is not surprising. Since the agent is aware at ω of all the propositions she would be aware of in ω ′ b , she can replicate what her view of the world would have been in that state. In particular, she can carry out the counterfactual reasoning of describing the set of states that she would consider possible if she were in that state.

However, when imagining a state ω ′ b in which she would be aware of propositions she is not aware of in the true state of the world, the agent cannot describe the possibility set as well as she would if she were in that state. The reason is that if the agent were at ω ′ b she would know the specific propositions she was aware of, and the states she would consider possible would be those with exactly the same awareness and epistemic sets. At ω, on the other hand, the agent doesn't know the specific propositions she would be aware of if she were in state s ω ′ b , so she cannot perform this counterfactual reasoning.

To summarize, in the model with the incompleteness axiom, the states of the world and the possibility correspondence imaginable by the agent are coarser than the modeler's. The agent has an incomplete knowledge of both the set of states of the world and the information structure. I would like to stress that the agent knows this property, and that saying that the agent's possibility correspondence is coarser than the modeler's does not contradict the result that P is a partition.

The agent's description is the best one she can do, given her awareness and her knowledge at ω. The relationship of the agent's possibility correspondence and the modeler's possibility correspondence is fully consistent with the fact that the agent has an incomplete view of the states of the world. If the agent had a complete knowledge of the possibility correspondence how could one argue that she can be rational and have an incomplete description of the states of the world?

7 Dynamic Model of Knowledge

7.1 Description

In this section I extend the model where the agent is never aware of all propositions to more than one period. As usual a state of the world is to be understood as a complete path, a complete history of the world. Awareness and knowledge are now time dependent. The proposition A t ϕ means that the agent is aware of proposition ϕ at time t, and the epistemic proposition K t ϕ means that the agent knows ϕ at period t. The type of propositions to which the knowledge operator is applied is quite general; in particular it can apply to propositions about past or future knowledge.

Let us rewrite the axioms using the time dependent awareness and knowledge operators and add one more axiom, saying that the agent doesn't forget, hence knowledge is non-decreasing:

A 9 If ω ( K t ϕ ) = 1 then ω ( K t + 1 ϕ ) = 1 .

A basic state of the world ω b now includes propositions about the environment, which can refer to different points in time; it says which propositions about the environment the agent is aware of at each point in time and which propositions about the environment she knows at each point in time.

The axioms of the model imply that the knowledge of the agent about her past knowledge is well defined: if the agent knows a proposition at time t, then at t + 1 she knows that she knew that proposition at time t (by A 2 and A 9); if the agent was aware of a proposition at time t but did not know its logical value, then she knows at t + 1 that she did not know the proposition at time t (by A 7 and A 9); and if the agent is not aware of a proposition at time t and becomes aware of it at time t + 1, then she knows at t + 1 that she did not know the proposition at time t.

However, the knowledge that the agent has about her future knowledge is not completely specified once one knows which propositions about the environment the agent is aware of and which propositions she knows at each point in time. Therefore a basic state of the world has to say what the agent knows about the structure of revelation of information. One can have cases where the agent knows at which point in time she will learn the logical value of a certain proposition, and cases where she doesn't know when she will learn the logical value of certain propositions.

The states of the world conceivable by the agent are also descriptions, possibly incomplete, of a full history of the world. As the agent becomes aware of more and more propositions, her description of the world will change. Let us denote by s ω , t a state of the world as described by the agent at period t, when the state of the world is ω. The index ω , t is only a reminder that this is a state of the world which can be conceived by the agent at period t, in state ω. One can have statements like s ω , t ( K t ϕ ) = 1 , which means that at state s ω , t , imaginable by the agent at period t, the agent knows at period t that ϕ is true. Or like s ω , t [ K t ( K t + 1 ϕ  or  K t + 1 ¬ ϕ ) ] = 1 , which means that the agent knows at period t that in the next period she will know whether ϕ is true or false.

7.2 Law of Iterated Knowledge

One can prove a law of iterated knowledge, saying that the agent cannot have at time t knowledge about future knowledge that exceeds what she knows at period t.

Proposition 7

(LIK). If ω ( ¬ K t ϕ ) = 1 then it cannot be the case that ω ( K t K t + 1 ϕ ) = 1 .

Proof. The proof is by contradiction. If ω ( K t K t + 1 ϕ ) = 1 , then by A 1, ω ( K t + 1 ϕ ) = 1 and, once again by A 1, ω ( ϕ ) = 1 . Since the agent knows the properties of her knowledge she knows that ω ( K t + 1 ϕ ) = 1 ⟹ ω ( ϕ ) = 1 , but then by A 3, ω ( K t ϕ ) = 1 , which contradicts the assumption ω ( ¬ K t ϕ ) = 1 .
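The LIK also holds in standard possibility-correspondence semantics, which gives a quick sanity check on the proposition. The sketch below is a toy model of my own (the four states, the time-indexed partitions and the event phi are all invented for illustration): knowledge of an event at a date means the possibility set at that date is contained in the event, partitions refine over time as A 9 requires, and wherever the agent does not know phi at t she indeed cannot know at t that she will know it at t + 1.

```python
# Toy states and time-indexed possibility partitions (time 0 = t, time 1 = t+1).
# The partition at time 1 refines the one at time 0, mirroring axiom A9.
OMEGA = [0, 1, 2, 3]
P = {
    0: {0: {0, 1}, 1: {0, 1}, 2: {2, 3}, 3: {2, 3}},
    1: {0: {0}, 1: {1}, 2: {2, 3}, 3: {2, 3}},
}

def knows(event, omega, t):
    """K_t holds at omega iff every state considered possible lies in the event."""
    return P[t][omega] <= event

phi = {0, 2}                                          # event where phi is true
K_next_phi = {w for w in OMEGA if knows(phi, w, 1)}   # the event [K_{t+1} phi]

# Law of iterated knowledge: not K_t phi rules out K_t K_{t+1} phi.
for w in OMEGA:
    if not knows(phi, w, 0):
        assert not knows(K_next_phi, w, 0)
print("LIK holds in this toy model")
```

The check works because each state belongs to its own possibility set, so knowing at t that one will know phi at t + 1 already forces knowing phi at t, exactly the contradiction used in the proof.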

It is easy to prove that the set of propositions the agent is aware of is non-decreasing over time:

Lemma 3

If ϕ ∈ Φ t a ( ω ) then ϕ ∈ Φ t + 1 a ( ω ) .

Proof. If ϕ ∈ Φ t a ( ω ) then either ω ( K t ϕ ) = 1 or ω ( K t ¬ K t ϕ ) = 1 . In the first case, by A 9, ω ( K t + 1 ϕ ) = 1 , and thus ϕ ∈ Φ t + 1 a ( ω ) . In the second case, again by A 9, one gets ω ( K t + 1 ¬ K t ϕ ) = 1 . Hence ω ( A t + 1 ¬ K t ϕ ) = 1 , but the closure axiom ( A 4) guarantees that ω ( A t + 1 ϕ ) = 1 , and thus ϕ ∈ Φ t + 1 a ( ω ) .

It is interesting that one doesn't need any axiom relating the awareness operator at different points of time to get the non-decreasing awareness result. The reason goes back to the equivalence of the agent's awareness of a proposition and her knowledge about knowing or not the logical value of a proposition. Since knowledge is non-decreasing the mentioned equivalence is all we need to guarantee that awareness is non-decreasing.
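The argument above can be sketched directly: if awareness is defined as knowing phi or knowing that one does not know phi, and the stock of known formulas only grows, then awareness is monotone for free. The representation below (formulas as plain strings, hypothetical sets `K_t` and `K_t_plus_1`) is a deliberately crude illustration of mine, not the paper's formal apparatus.

```python
def aware(known, phi):
    """Agent is aware of phi iff she knows phi or knows she does not know phi."""
    return phi in known or f"not K {phi}" in known

# A9 ("the agent doesn't forget"): the set of known formulas only grows.
K_t = {"phi1", "not K phi2"}     # knows phi1; knows she doesn't know phi2
K_t_plus_1 = K_t | {"phi2"}      # later she also learns phi2 itself

for phi in ["phi1", "phi2"]:
    if aware(K_t, phi):
        assert aware(K_t_plus_1, phi)  # awareness is non-decreasing
print("awareness is monotone in this sketch")
```

Since every formula known at t is still known at t + 1, whichever disjunct made the agent aware of phi at t keeps her aware at t + 1, with no separate axiom on awareness needed.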

A corollary of the LIK is that Φ t a ( ω ) is the best forecast the agent has of Φ t + 1 a ( ω ) .

Corollary 10

If K t ( ϕ ∈ Φ t + 1 a ( ω ) ) then ϕ ∈ Φ t a ( ω ) .

Proof. Suppose not: suppose ϕ ∉ Φ t a ( ω ) and K t ( ϕ ∈ Φ t + 1 a ( ω ) ) . By axiom A 6, ω ( ¬ K t ϕ ) = 1 and ω ( ¬ K t ¬ K t ϕ ) = 1 . By LIK this implies that it cannot be the case that:

ω ( K t K t + 1 ϕ ) = 1   or  ω ( K t K t + 1 ¬ K t ϕ ) = 1 .

Or equivalently:

ω ( K t [ ω ( K t + 1 ϕ ) = 1  or  ω ( K t + 1 ¬ K t ϕ ) = 1 ] ) = 1 ⟺ ω ( K t A t + 1 ϕ ) = 1 .

Hence the agent cannot know at period t that some proposition belongs to Φ t + 1 a , unless she is aware of that proposition in period t.

7.3 Relationship between Ω ( Φ t a ( ω ) ) and Ω ( Φ t + 1 a ( ω ) )

In this section I analyze how the agent's view about the world changes over time. I show that the set of states of the world conceivable by the agent and her possibility correspondence get finer as time passes.

The set of basic states of the world conceivable by the agent at state ω at time t depends on ω and t. As before, the knowledge the agent has about the structure of the model should be used to get a comprehensive picture of Φ b . Let us extend the set Φ ω , t a with propositions like ϕ ¯ ω , t , ϕ ¯ ω , t a t and ϕ ¯ ω , t k t . These propositions are defined as before, except that, because the awareness and knowledge operators are now time dependent, there will be propositions referring to the awareness or knowledge, at each point in time, of propositions which do not belong to Φ ω , t a . Let Φ ^ ω , t a be the union of these propositions with Φ ω , t a .

Consider a state of the world ω. I showed before that under the axioms the set of propositions the agent is aware of, in state ω, is non-decreasing over time. Hence we should expect the agent's description of the set of states of the world to improve over time. One can prove the following:

Proposition 8

Ω b ( Φ t + 1 a ( ω ) ) is a refinement of Ω b ( Φ t a ( ω ) ) .

Proof. Define the refinement mapping:

Π ^ ( { s ω , t b } ) = { s ω , t + 1 b ∈ Ω b ( Φ t + 1 a ( ω ) ) : s ω , t + 1 b ( ϕ ) = s ω , t b ( ϕ ) , ∀ ϕ ∈ Φ ^ t a ( ω b ) }

Π ^ ( A ) = ⋃ { Π ^ ( { s ω , t b } ) : s ω , t b ∈ A } .

The rest of the proof follows the same reasoning as the proof of Proposition 5.

In Section 6.2, I mentioned that if one compares the sets of states of the world conceivable by the agent in two states of the world such that the set of propositions the agent is aware of in the first state is a superset of the set of propositions she is aware of in the second state, one concludes that the set of conceivable states of the world in the first state is a refinement of the set of conceivable states of the world in the second state. The result in the previous proposition embodies exactly the same idea, the difference being that here we compare the sets of conceivable states of the world at two different points in time instead of in two different states of the world.

Since Φ t a ( ω ) ⊆ Φ t + 1 a ( ω ) , it is obvious that the set of conceivable states in t + 1 is finer than the set of conceivable states in t. In other words, the agent learns more about the set of conceivable states of the world over time.
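A quick enumeration illustrates the refinement. Under the simplifying assumption (mine, for illustration) that a conceivable basic state is just a truth assignment over the propositions the agent is currently aware of, awareness of one extra proposition splits each time-t description into two time-(t + 1) descriptions:

```python
from itertools import product

def conceivable(aware_props):
    """Truth assignments over the propositions the agent is aware of
    (a simplified stand-in for the agent's conceivable basic states)."""
    return [dict(zip(aware_props, v))
            for v in product([0, 1], repeat=len(aware_props))]

states_t = conceivable(["phi1"])            # aware of phi1 only: 2 descriptions
states_t1 = conceivable(["phi1", "phi2"])   # now also aware of phi2: 4 descriptions

# Each coarse description at t is refined by exactly the finer ones
# that agree with it on phi1.
for s in states_t:
    refinements = [f for f in states_t1 if f["phi1"] == s["phi1"]]
    assert len(refinements) == 2
print(len(states_t), "->", len(states_t1))  # 2 -> 4
```

The refinement map of Proposition 8 does exactly this bookkeeping, while also carrying along the propositions about awareness and knowledge at each date.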

Before I describe how the possibility correspondence evolves over time let me mention some important aspects about the possibility correspondence in the dynamic model.

I mentioned before that the knowledge and awareness operators are time dependent. As such, the possibility correspondence is also time dependent: P t ( ω ) is the set of states of the world which are compatible with what the agent knows at ω at time t. Using the definition of the possibility set and axiom A 9 one can prove that P t + 1 ( ω ) ⊆ P t ( ω ) , ∀ ω ∈ Ω . In other words, the possibility correspondence gets finer and finer over time. This phenomenon is common in traditional models of knowledge. The only difference is that in our model the structure of revelation of information may not be known: although the agent learns more over time about the set of possible states of the world, she may not know the sequence of learning.

By the same reasoning as before, the agent's possibility correspondence is also time dependent. Let us call Γ ω , t , τ the agent's possibility correspondence at state ω and time t, corresponding to time τ. Γ ω , t , τ specifies, for each state of the world conceivable by the agent at time t in state ω, which conceivable states of the world the agent considers possible when she is in a given conceivable state at time τ. In summary, Γ ω , t , τ is a correspondence from the set of states conceivable in state ω at time t to the set of subsets of such states.

Since the states of the world conceivable by the agent satisfy the axioms of the model, Γ ω , t , τ is coarser than Γ ω , t , τ + 1 . Notice that in this statement we fix the point in time at which the agent is conceiving the set of states of the world (time t) and compare the possibility sets, in some conceivable state, at time τ and at time τ + 1 . The reason why this relationship holds is the same as the reason why P t + 1 ( ω ) ⊆ P t ( ω ) .

The question of how the possibility correspondence of the agent changes over time can be phrased as follows: What is the relationship between Γ ω , t , τ and Γ ω , t + 1 , τ ? In words, how does the possibility correspondence corresponding to time τ differ if the agent conceives it at time t or at time t + 1 ?

In order to make the comparison one needs to express both correspondences in the same space. Since Ω b ( Φ t + 1 a ( ω ) ) is a refinement of Ω b ( Φ t a ( ω ) ) , if we projected Γ ω , t + 1 , τ on Ω b ( Φ t a ( ω ) ) we would lose information. Therefore the right approach is to apply the refinement mapping defined in the proof of Proposition 8 to Γ ω , t , τ and compare its image with Γ ω , t + 1 , τ . Let us call Γ ^ ω , t , τ the image in Ω b ( Φ t + 1 a ( ω ) ) of the possibility correspondence as conceived at period t. One gets the following:

Proposition 9

  1. Γ ^ ω , t , τ is coarser than Γ ω , t + 1 , τ , i. e. for any s ω , t + 1 b ∈ Ω b ( Φ t + 1 a ( ω ) ) one has Γ ω , t + 1 , τ ( s ω , t + 1 b ) ⊆ Γ ^ ω , t , τ [ Π ( { s ω , t + 1 b } ) ] .

  2. If Φ τ a ( s ω , t + 1 b ) ⊆ Φ t a ( ω ) then Γ ω , t + 1 , τ ( s ω , t + 1 b ) = Γ ^ ω , t , τ [ Π ( { s ω , t + 1 b } ) ] .

Proof. The argument is the same as in Proposition 6.

One should stress that this result is not about the agent learning more about the possible states of the world over time. This result is about how the agent's view of the information structure changes over time. What the proposition says is that the agent learns more about the information structure as time passes. Hence this is a result about learning about how the world is.

The second part of the proposition states that if the conceivable state is such that its awareness set at time τ is contained in Φ t a ( ω ) (which means that the agent can replicate at time t what her reasoning would be if that conceivable state had occurred and she were at time τ), then the agent's description at time t of the possibility set in that conceivable state at time τ is consistent with the description she would make at time t + 1 .

In summary, the set of states of the world conceivable by the agent and her possibility correspondence get finer over time. The agent learns about how the world is as time passes.

8 Final Comments

The new element in the epistemic model presented in this paper is the distinction between knowing the existence of a proposition and knowing the logical value of a proposition.

A conclusion from this paper is that the states of the world as viewed by the agent, if she is not aware of all the existing propositions, are not a complete description of all that can happen. In a sense they are “events”, instead of states of nature.

Although my model says nothing about probabilities, I would like to note an implication of the idea that the agent's model is incomplete: the agent cannot have a prior probability on the states of nature as viewed by the modeler. Even if the agent is Bayesian and has a probability distribution on Ω ( Φ t a ) , when translated into the modeler's universe what one has is a probability measure only on the “events” the agent can express. The two kinds of learning, learning more about which state of the world can possibly be true and learning more about how the world can be, imply that conditionalization will be important, but besides that one also has to consider the specialization of the probability measure as the agent's view of the world becomes finer.
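The point about priors can be made concrete with a hedged sketch (the numbers and proposition names are invented): a Bayesian prior over the agent's coarse states pins down, in the modeler's universe, only the probabilities of the events the agent can express, never the probabilities of the individual fine states.

```python
from itertools import product

# Modeler's fine states: truth values of (phi1, phi2); the agent is
# aware of phi1 only, so her states record only phi1's value.
fine_states = list(product([0, 1], repeat=2))

# Agent's (hypothetical) prior over her coarse states, indexed by phi1's value.
coarse_prior = {1: 0.7, 0: 0.3}

# Pushed into the modeler's universe, the prior determines only the measure
# of the agent-expressible events {phi1 = v}, not of individual fine states.
expressible = {v: {s for s in fine_states if s[0] == v} for v in (0, 1)}
measure = {v: coarse_prior[v] for v in (0, 1)}

assert abs(sum(measure.values()) - 1.0) < 1e-9
print(measure[1], "is the mass of the event", sorted(expressible[1]))
```

How the mass 0.7 splits between the two fine states in the event is simply undetermined by the agent's prior; that is the sense in which she has a measure on “events” rather than on states of nature.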

This work doesn't address the issue of decision making when the agent has an incomplete view of the world. The study of choice in the presence of unforeseen contingencies is a natural extension of the current paper and I hope to pursue it in future work. An obvious implication of the agent having an incomplete view of the world is that she cannot have a complete contingent plan of action; her plan cannot be contingent on anything she is not aware of. Furthermore, one expects the agent's plan of action to be adapted as she becomes aware of more things. Her plan of action will become more and more complete over time.


Corresponding author: Cesaltina Pacheco Pires, CEFAGE-UE, Departamento de Gestão, Universidade de Évora, Évora, Portugal, E-mail:

Article Note: This paper is the first chapter of my Ph.D. thesis, except for minor corrections.


Award Identifier / Grant number: BD/693/90-RO

Award Identifier / Grant number: UID/ECO/04007/2019

Acknowledgments

I wish to thank Drew Fudenberg, Jean Tirole, António Pacheco Pires, Soumodip Sarkar and participants in the M.I.T. Theory lunch seminar for helpful comments.

Funding: This research was supported by the grant BD/693/90-RO from Junta Nacional de Investigação Científica. I am also grateful for the support of FCT – the Portuguese Foundation for Science and Technology within the project UID/ECO/04007/2019.


Received: 2018-12-11
Accepted: 2020-03-27
Published Online: 2020-05-26

© 2020 Walter de Gruyter GmbH, Berlin/Boston
