Chapter 2. The Interpretation of Probability
The object of the present chapter is to present the different definitions and interpretations of probability that fit the axioms and propositions stated in the previous chapter. Let us begin with the definitions.
(A) Fundamental Definition
If 'a' stands for the number of instances expected to occur under certain conditions p, such instances being incompatible and having equal chances, and if x is an event that happens in a number b of those instances, then the probability of the occurrence of x in relation to p is b/a. It is observed that such a definition of probability presupposes another definition, i.e., that the instances consistent with x, among the sum of expected instances, have equal chances.
But such a presupposition is not explicitly stated in the original definition. Thus the definition is vague and incomplete. In other words, probabilities are of two levels. On the first level we have the probability of each of the different ways in which an event may occur, taken in isolation; when we determine the value of every case and suppose that all cases have equal chances, we move to the second level, i.e. the probability of the events related to some of those equally probable cases. The original definition applies to the second level only; hence its incompleteness.
In order to make our objection clear, we may look more deeply into the meaning of equal chances. We get two interpretations. First, we may explain the equality of the probable cases by the equality of their probability values. Secondly, we may explain the equal chances in reference to the conditions under which events could occur, so that p includes all probabilities; all possible cases in reference to p then represent one probability. If we take the latter interpretation (all probabilities included in p), we get rid of the objection to the original definition. But we then confront two further problems.
The First Problem
The first definition of probability stated above faces two problems, the first of which is that its presuppositions are themselves insufficient to justify the assertion that the degree of probability of x's occurrence, in the example mentioned above, is b/a. For why should all the forms of an event's occurrence have equal chances? This problem could be overcome in two ways. First, we may add another presupposition, namely, that if the possible forms of the occurrence of an event are equal, then their probability values are equal. Secondly, we may remove any doubt as to the probability of happening, and then we get objectivity; we then say that the value of the occurrence of x among all probable cases is b/a, and this is regarded as an objective judgment.
Now, we have two sorts of probability: real probability, involving credibility, and mathematical probability, involving the proportion of favourable cases to all cases. But these sorts of probability are different, for the first concerns one single case, while the second concerns a hypothesis. For example, if we throw a particular piece of coin, the probability of getting its head is 1/2; but we can say, on the other hand, that the probability of getting the head of a coin in any throw is 1/2. The first sort is real probability, while the second is mathematical.
In this connection we oppose a certain view in symbolic logic concerning the distinction between a proposition and a propositional function. The latter includes a variable, such as 'x is human'; 'x' has no meaning, and hence truth or falsehood cannot apply to it. A propositional function becomes a proposition, and is true or false, when we give the variable a value, as in 'Socrates is a man'. Further, when we have a class included in another class we have a propositional function, not a proposition, e.g. 'Iraqis are intelligent'; that is, it is a hypothetical statement meaning: if x is Iraqi, x is intelligent. Now, we oppose the view that mathematical probability expresses a propositional function. Mathematical probability, in our view, expresses a proposition, for it is not of the same logical type as the inclusion of one class in another. Indeed, mathematical probability considers two classes of events but involves a relation between them, and such a relation is definite. Thus, it can be true or false.
The Second Problem
The second problem for the first definition of probability, stated above, concerns the equal chances, one of which is supposed to occur. We want to determine the meaning of this equality supposed between the different occurrences and p. This equality presupposes some relation between each probable occurrence and p, that this relation has degrees, and that those occurrences have equal chances in relation to p if their relations to p are all of one degree, no more or less. Now, what is such a relation? It may be a relation of probability, e.g. the relation of the appearance of a coin on its head in such-and-such a degree of probability; and since the degree of the probability of each occurrence is indeterminate, it may be equal to the probability of any other occurrence supposed to be probable, but it may be larger or smaller.
But this explanation repeats the first problem, that is, the first definition already presupposes probability. Therefore, we must try to explain the relation which connects p with each probable occurrence without supposing probability in the content of that relation. This means that the relation must be constant and independent of probability and certainty; it must hold between two propositions, namely, between p and the occurrence related to it, that is, between the statement that a piece of coin is thrown and the statement that it falls on its head. The relation between these two propositions may be that of necessity, or contradiction, or else mere possibility. The relation of possibility is not here the same as probability, because possibility, if taken as probability, is not an objective relation independent of perception. Whereas we mean here by possibility the negation of both necessity and contradiction; and since these latter two are objective, so is the former. Thus, the objective relation standing between p and each probable occurrence is that of possibility, in the sense that it is neither necessary nor contradictory. But it is clear that possibility in this sense cannot explain the equal chances between the probable occurrences in relation to p, for possibility has no degrees of equality or largeness or smallness. So we turn to another definition of probability.
(B) Probability in the Finite Frequency Theory
We now turn to another definition of probability, on the ground of which the Finite Frequency Theory of probability was established, in order to see whether it avoids the problems facing the first definition considered in the previous section. The new definition does not speak of the probable occurrences of p; neither does it explain mathematical probability in terms of a definite domain of those occurrences. The new definition rather considers two classes of things or events, all members of which really exist, say the class of Iraqis and that of intelligent individuals. Now, what is the degree of probability that some individual, randomly taken, is both Iraqi and intelligent? This degree would be the number of intelligent Iraqis out of all Iraqis. The definition of probability according to this theory may be stated thus: if B and A are two finite classes, then the probability that s, taken at random from B, is also an A is the number of B's that are also A's divided by the number of all B's.
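This finite frequency definition is a simple counting rule over two finite classes. It may be sketched as follows (a minimal illustration in Python; the classes are represented as sets, and the membership figures are assumed purely for the example):

```python
from fractions import Fraction

def finite_frequency(B, A):
    """Finite frequency probability that a member of the finite class B,
    taken at random, is also a member of A: the number of B's that are
    also A's, divided by the number of all B's."""
    B, A = set(B), set(A)
    return Fraction(len(B & A), len(B))

# Hypothetical illustration: ten individuals in B, seven of whom are in A.
B = set(range(10))
A = {0, 1, 2, 3, 4, 5, 6}  # an assumed subset, for the example only
print(finite_frequency(B, A))  # 7/10
```

The rule needs only the two class sizes and the size of their overlap; no notion of equal chances enters, which is exactly the advantage claimed for this definition below.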
Such definition satisfies, in our view, the presuppositions and avoids the problems already discussed. For this definition avoids mention of all probable cases of p, which may, or may not be equal; it considers the number of individuals or particulars belonging to one class, and determines the probability that some member of B is a member of A according to the proportion of frequency of A in B, without supposing the idea of equal or unequal chances. But such definition faces a new objection, namely, that it does not exhaust all the cases included in mathematical probability. Before stating the objection in some detail, we may give some preliminary remarks.
Real and Hypothetical Probabilities
To say that there is a certain degree of probability that an Iraqi is intelligent is not the same as to say that if such a man is Iraqi he may be intelligent in some degree. These are different statements.
The former talks about a real probability and it is possible to turn it into certainty provided we get sufficient data about that individual. Whereas the latter considers a hypothesis, the degree of intelligence in the class of Iraqis, and this involves the certainly true statement that there are intelligent Iraqis.
What is expressed in the first statement may be called real probability, in the second statement hypothetical probability. Now we may add that mathematical probability which the definition aims to explain is a hypothetical, not real, probability, because mathematical probability, being deduced from mathematical axioms, is a necessary statement, while real probability is not, because the latter refers to cases about which we are ignorant.
On the other hand, real probability includes two statements. When it is probable in the degree 1/2 that some individual Iraqi is intelligent, we actually make two assertions: first, that it is probable in the degree 1/2 that such an individual is intelligent; second, that if the degree of our knowledge or ignorance of the circumstances related to intelligence among Iraqis is the same as the degree related to the intelligence of some individual, then such a degree is 1/2. The first statement asserts some probable judgment, while the second is hypothetical only, in that it asserts a relation between two terms; thus it is certain, not probable.
Does this definition exhaust all probabilities?
We obtain probability if one of the three following conditions is fulfilled. First, if we have two classes of things B and A and there exist members common to both, then it is probable that some member of A belongs also to B. Secondly, if we have two classes B and A, each of which has members, but we do not know whether there exist common members, then it is probable that some members of A belong also to B. Finally, suppose we are told that there exists someone called Zoroaster who claimed to be a prophet and lived between the tenth and sixth centuries B.C. To say that it is probable that he really existed is to say that he belongs either to a real or to a null class of prophets. Let us now examine these cases.
Take up the first case, in which we have two sorts of probability: hypothetical and real. The former is expressed by saying that it is probable that some x, being a member of A, is also a member of B. And this sort is determined, provided we know the number of common members. Real probability means that it is probable that x, being a member of A, is really a member of B. This sort of probability can be determined if two conditions are satisfied. First, there must be a definite number of the members of the class B that also belong to the class A including the member x; if we assume that B has ten members, one of whom is x, we must know the number of the members of A that are also B's. Secondly, we must include in our definition the axiom that there must be consistency between the proportion of common members in relation to the class B and the degree of probability that x belongs to A. If these two conditions are fulfilled, the definition applies to real as well as hypothetical probabilities.
The above consideration may involve a contradiction, because when we speak of the real probability of x, we mean that we have not examined whether x is a member of both A and B. Thus, when we stipulate our knowledge of the number of members belonging to both A and B, we assume an examination of the status of x, and we fall into contradiction; that is, in order to determine the degree of probability that x belongs to A, we must be sure whether it belongs to A or not. But the contradiction disappears provided we can know the number of the members of B that belong also to A without determining any individual in particular. Further, there may be common members of two classes without a determination of their definite number. We have in this case a hypothetical probability, in the sense that it is probable that there may be a member of A that is also a B; we have here also real probability, in the sense that x is probably a member of A.
Let us turn to the other two conditions of probability statements. In them there is no hypothetical probability, for this requires our knowledge of the number of common members in relation to all members of A, and we have no such number. Again, our present definition does not apply to real probability, because such a definition connects the degree of probability with the degree of frequency, but no frequency is assumed in the latter two conditions. We may conclude that the definition of probability in the Finite Frequency Theory is insufficient, since it does not exhaust all sorts of probability.
Russell attempted to defend this definition and its relevance to all sorts of probability on the basis of the principle of induction which justifies generalisation applying to unobserved instances. 'Suppose I say, for example: there is a high probability that Zoroaster existed. To substantiate this statement, I shall have to consider, first, what is the alleged evidence in his case, and then to look out for similar evidence which is known to be either veridical or misleading... We shall have to proceed as follows: there is, in the case of Zoroaster, evidence belonging to a certain class A; of all the evidences that belong to this class and can be tested, we find that a proportion p is veridical; we therefore infer by induction that there is a probability p in favour of the similar evidence in the case of Zoroaster. Thus frequency plus induction covers this use of probability.'
We may observe the following points on what Russell said. First, the probability of Zoroaster's existence is real and could not be determined on the frequency theory basis, for frequency and induction lead to a definite ratio of truth, and this is what we called mathematical probability, which alone is insufficient to infer the probability of Zoroaster's real existence. In order to obtain such a probability we have to add the axiom that the degree of real probability of an event must conform to the frequency of the various events belonging to the class of which that event is a member. Such an axiom is not presupposed in frequency theory and induction, so Russell's attempt is unsuccessful.
Secondly, the explanation of such real probability as that of Zoroaster's existence on the basis of frequency theory is also unsuccessful unless there is evidence that the probable event is a member of a suitable class. But in such a class a member may fail to occur, though not necessarily; thus the required evidence cannot necessarily be assumed.
Finally, the principle of induction itself depends on probability. For induction which justifies the general conclusion does not rest on probability in the sense of finite frequency, but probability in another sense to which we shall turn.
New Definition of Probability
We offer here a third definition which overcomes the difficulties involved in the two previous definitions.
But it may be well first to introduce the concept of indefinite knowledge, that is, knowledge of something not completely determined or defined. When I say I know that the sun rose, or that John is coming now to pay you a visit, I have a determinate piece of knowledge; such knowledge is not subject to doubt or probability. But suppose I told you that one of your three intimate friends is coming to visit you now; then I give you an indefinite piece of information, which involves vagueness and a probability belonging to someone yet unknown. Indefinite knowledge is of two kinds: that which includes incompatible items (two of them cannot simultaneously occur), and that which includes compatible items (when two of them can). And we use indefinite knowledge here in the first sense.
Now we have before us four things: (1) knowing something indefinite in content; (2) the collection of the items any of which may be the object of knowledge; (3) the number of probabilities, which conforms to the number of the items; (4) the incompatibility of the items. We notice that the sum of the values of these probabilities is equal to the value of the knowledge itself; since the knowledge is certain, i.e. 1, the probabilities sum to 1. Consequently, the probability of each item is a fraction.
Now we come to our new definition of probability: a probability capable of a determined value is always one of a class of probabilities represented in an indefinite knowledge, and its value is always equal to the value of certainty divided by the number of items of that indefinite knowledge.
If x stands for any such item, a for certainty, and b for the number of items, then the value of x is a/b. Probability here is neither an objective relation between two events nor merely a frequency of one class in another, but an incomplete degree of credibility. This credibility is considered a sort of mathematical probability, by which is meant a deduction from certain axioms. In the example of my knowledge that one of my three intimate friends is coming to see me, if we want to determine the value of x's coming, we find it 1/3.
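The definition amounts to a counting rule: each of the b incompatible items of the indefinite knowledge receives the value a/b, with a taken as 1 for certainty. A minimal sketch (the friends' names are those of the example above):

```python
from fractions import Fraction

def item_probabilities(items):
    """On the new definition: certainty (a = 1) divided by the number of
    incompatible items (b) gives each item the value a/b."""
    b = len(items)
    return {item: Fraction(1, b) for item in items}

# Indefinite knowledge that exactly one of three friends is coming.
values = item_probabilities(["John", "Smith", "Johnson"])
print(values["John"])  # 1/3
```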
To examine this definition we discuss the following five points: (a) whether it satisfies the axioms of probability; (b) how to overcome any difficulty which it involves; (c) the agreement of the definition with the mathematical side of probability; (d) whether our definition explains such cases as the finite frequency theory could not; (e) the additional axioms.
There are two formulae for the probability a/b. (i) it is the happening of x in the context of the other items of our indefinite knowledge, (ii) it is the various degrees of credibility of the happening of x. If we take a/b according to the first formula, we find it consistent with the six axioms of probability.
The first axiom says that there is only one true value of a/b. The second axiom tells us that all the possible values of a/b are the numbers between zero and 1, and our definition satisfies this axiom because if x does not occur the value is zero, if it alone occurs the value is one, and if it occurs together with others the value lies between zero and one. The third axiom says that if b entails a, then a/b = 1. The fourth axiom states that if b entails not-a, then a/b = 0. Both these axioms are true because when every item of the collection involves the member whose probability we want to determine, we find a/b = 1, and when such a member is absent from every item, the probability is zero. The fifth axiom (the conjunctive axiom) tells us that the probability of a and c occurring simultaneously in relation to b is the probability of a in relation to b multiplied by that of c in relation to a and b; and the value of this probability is consistent with our new definition, not an added assumption. For example, suppose it is probable that some student is excellent in the subject of logic or in that of mathematics or in both. We face here three probabilities, each of which is an instance of the probabilities in an indefinite knowledge. On the basis of induction we may suppose two reasons a and b for excellence in logic and two other reasons c and d for weakness in it; and likewise with his status in mathematics. Now we have an indefinite knowledge in both cases: in the first, such knowledge includes a, b, c or d; in the second, likewise a, b, c or d. The student's excellence in logic is represented in the two items a and b, and in mathematics likewise in a and b; then the degree of probability of excellence in each subject is 2/4 or 1/2.
Whereas his excellence in both subjects is one of the probabilities in a third domain of indefinite knowledge, which can be represented in one of the following sixteen cases: a and a, a and b, a and c, a and d, b and a, b and b, b and c, b and d, c and a, c and b, c and c, c and d, d and a, d and b, d and c, d and d. We now notice that the probability of the student's excellence in both subjects is 4/16 or 1/4.
The sixth axiom (the disjunctive axiom) states that the probability of a or c in relation to b is the probability of a in relation to b plus that of c in relation to b, minus the probability of both a and c. And this axiom is consistent with our new definition. In our previous example, we found that the value of each of the two probabilities (excellence in logic and in mathematics) is 1/2; but the probability of excellence in at least one of the subjects covers twelve cases; thus the value of the probability of his excellence in at least one of those subjects is 12/16 or 3/4.
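Both computations of the student example, the conjunction giving 1/4 and the disjunction giving 3/4, can be checked by enumerating the sixteen equally probable cases directly (a small sketch in Python; the letters follow the example, with a and b standing for the reasons of excellence in each subject):

```python
from fractions import Fraction
from itertools import product

logic_reasons = ["a", "b", "c", "d"]  # a, b: excellence in logic; c, d: weakness
maths_reasons = ["a", "b", "c", "d"]  # likewise for mathematics
excellent = {"a", "b"}

# The third domain of indefinite knowledge: sixteen equally probable pairs.
pairs = list(product(logic_reasons, maths_reasons))

both = sum(1 for l, m in pairs if l in excellent and m in excellent)
at_least_one = sum(1 for l, m in pairs if l in excellent or m in excellent)

print(Fraction(both, len(pairs)))          # 1/4  (conjunction: 4 of 16 cases)
print(Fraction(at_least_one, len(pairs)))  # 3/4  (disjunction: 12 of 16 cases)
```

The disjunction count also agrees with the sixth axiom computed as a sum: 1/2 + 1/2 minus the conjunction 1/4 gives 3/4.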
We may now conclude that the first formula of our definition is consistent with all the axioms of probability, without assuming any of them a priori.
We now turn to the second formula of the definition, which takes a/b as probability in the sense of degrees of credibility. The second, third and fourth axioms aforementioned cannot come to terms with this formula of our definition. For the possible values of a/b do not lie between 0 and 1; rather, these latter are among the values of a or b; and this is inconsistent with the second axiom. The third axiom says that if b entails a then a/b = 1, but there is no ground for speaking of entailment here. Yet it must be noted that the acceptance of the axioms of probability is to some extent arbitrary, for some of them may be needed while it is not necessary to accept them all.
Difficulties of our definition
The main difficulty facing our definition lies in determining a definite member among the members of a certain class. Let us introduce the following example. Suppose we have an indefinite knowledge that only one of my three friends will pay me a visit today (John or Smith or Johnson); then how can I determine the visitor? Here we supposed that the class had three members, but there are alternatives: our class may include Smith and those whose names start with J, or may include Johnson and one of Peter's sons (assuming that John and Smith are his sons). If we take the second alternative, the probability of Smith's coming is 1/2; if we take the third, we find that the probability of Johnson's coming is 1/2.
The difficulties become enormous even if we take the first alternative. For if we know that John has four costumes (a, b, c and d), we can then say that we have six members; consequently, the probability of John's coming is 4/6. We fall into absurdities if we suppose that the probability of a person's coming increases for the one who has more suits.
We can offer two ways of overcoming this difficulty, namely: (1) when one of the members of a class is divisible, the other members must be divided likewise, or else we must ignore divisibility in all members; (2) if one member is divisible but the rest are not, then we should neglect this division in the former.
The new definition and the calculus
We may well notice that our new definition completely explains the mathematical side of probability. It has already been shown that the axioms of conjunction and disjunction are consistent with our definition. And since the sums and products of probabilities rest on these two axioms, we conclude that the new definition explains all processes of addition and multiplication. In what follows, we take three cases of mathematical probability and see whether they are consistent with the new definition.
The new definition and inverse probability
We want first to discuss the principle of inverse probability in the light of the new definition of probability.
Suppose we draw a straight line and divide it into two parts a and b; suppose also that we wish to fire a bullet at a certain point on the line, but we do not know whether the point is on a or on b, and suppose we find that we fired successfully. Now what is the degree of probability that the point is on a? It will be 9/10 according to inverse probability, and the cases to which this principle applies involve an indefinite knowledge. In saying that the probability of firing successfully at the intended point on a is 3/4, we mean that, by induction, we succeed in three throws out of four. Now when we fire at the intended point we have sixteen probabilities, six of which are excluded on the assumption that we have already succeeded; and of the remaining ten cases, nine place the point on a. The result is that the degree of probability is 9/10.
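The figures of this passage can be reproduced by an ordinary inverse probability (Bayes) computation. The passage does not state the prior probabilities explicitly, so the assignment below is an assumption chosen for illustration because it is consistent with the stated numbers: a prior of 3/4 for the point lying on a, with success probabilities 3/4 on a and 1/4 on b. This yields exactly sixteen equally probable joint cases, six failures excluded, and the posterior 9/10:

```python
from fractions import Fraction

# Assumed priors and likelihoods (not stated in the text; chosen so that
# the sixteen-case count and the result 9/10 are both reproduced).
prior = {"a": Fraction(3, 4), "b": Fraction(1, 4)}
success = {"a": Fraction(3, 4), "b": Fraction(1, 4)}

# Joint cases out of 16: 9 hit-on-a, 3 miss-on-a, 1 hit-on-b, 3 miss-on-b.
# Given a hit, the 6 failure cases are excluded.
joint_hit = {part: prior[part] * success[part] for part in prior}
posterior_a = joint_hit["a"] / sum(joint_hit.values())
print(posterior_a)  # 9/10
```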
The definition and the Bags example
In the Bags example, it is supposed that we have three bags: the first contains three white balls out of five, the second contains four white balls and a black one, and the third contains five white balls. Suppose we took one of the bags at random, drew from it three balls, and found all of them white; then what is the probability that such a bag is the third one? Here we have an indefinite knowledge and need to determine it: the three white balls are drawn either from the first or the second or the third bag. We have only one chance if what we drew is from the first bag, four chances if from the second, and ten chances if from the third. Thus there is an indefinite knowledge involving fifteen chances, each of which is considered a case of such knowledge. If the three white balls are drawn from the third bag, the degree of probability is 10/15 or 2/3; and this is exactly what Laplace calculated in the Bags example, for he determined this probability by (m+1)/(n+1), where m stands for the number of balls drawn and n for the number of all the balls in a bag, and this would be (3+1)/(5+1) = 2/3. Now, we may ask, what is the probability that the next ball to be drawn is also white? In each bag two balls are left; thus each of our fifteen cases splits into two, and we get thirty cases in our indefinite knowledge.
On the other hand, the probability that the next ball to be drawn is white covers 24 of these cases; thus the probability is 24/30 or 4/5. And this is what Laplace found by his rule of succession, (m+1)/(n+2), that is, (3+1)/(3+2).
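Both results, 2/3 for the third bag and 4/5 for a white ball on the next draw, can be verified by counting the fifteen and thirty cases directly (a small sketch in Python):

```python
from fractions import Fraction
from math import comb

# Bag contents as (white, black).
bags = {"first": (3, 2), "second": (4, 1), "third": (5, 0)}

# Ways of drawing three white balls from each bag: 1, 4 and 10.
chances = {name: comb(w, 3) for name, (w, b) in bags.items()}
total = sum(chances.values())  # 15 equally probable cases

print(Fraction(chances["third"], total))  # 2/3 (i.e. 10/15)

# Next draw: each case splits into two, since two balls remain in the bag;
# count the cases in which the remaining ball drawn is white.
white_next = sum(chances[name] * (w - 3) for name, (w, b) in bags.items())
print(Fraction(white_next, 2 * total))    # 4/5 (i.e. 24/30)
```

The second count makes visible where the 24 cases come from: none from the first bag (its remaining balls are black), four from the second, and twenty from the third.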
Our definition and Bernoulli's law
Bernoulli's law of large numbers states that if, on each of a number of occasions, the chance of a certain event occurring is p, then, given any two numbers a and b, however small, the chance that, from a certain number of occasions onward, the proportion of occasions on which the event occurs will ever differ from p by more than b, is less than a. Let us illustrate this by two examples.
The first example
It is that of tossing a coin. We suppose that heads and tails are equally probable, i.e., each 1/2. Suppose we tossed the coin four times; we would have indefinite knowledge of the following occasions: (1) we get heads all four times, (2) heads does not occur even once, (3) we get heads once, (4) we get heads twice, (5) we get heads three times.
The first occasion has only one chance, the second also one chance, the third four chances, the fourth six chances, and the fifth four chances. Consequently, we have an indefinite knowledge of sixteen cases, one of which may occur. We can determine the degree of probability that heads occurs on any single toss, independently of its occurrence or non-occurrence on another; such a degree is 1/2, because if we randomly choose any of the four tosses and count the cases in which heads occurs on it, we find the ratio to be 8/16 or 1/2. We can also determine which occasion, among the five we have, gains the highest probability; we shall find that the fourth occasion is such, namely, that heads appears twice, that is, in half the tosses, with probability 6/16. Bernoulli's law of large numbers shows that, as the number of tosses increases, the probability of the occasion whose ratio of occurrence corresponds to the event's probability increases.
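The sixteen cases and their distribution over the five occasions can be computed from the binomial coefficients (a small sketch in Python):

```python
from fractions import Fraction
from math import comb

tosses = 4
# Number of equally probable cases in which heads appears k times.
cases = {k: comb(tosses, k) for k in range(tosses + 1)}
total = sum(cases.values())  # 16

print(cases)                      # {0: 1, 1: 4, 2: 6, 3: 4, 4: 1}
print(Fraction(cases[2], total))  # 3/8: the most probable occasion, two heads

# Probability of heads on any single toss, counted over all sixteen cases.
print(Fraction(sum(k * c for k, c in cases.items()), tosses * total))  # 1/2
```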
The second example
Bernoulli's law shows that, provided the probability of an event is 2/3, then over many occasions we may be almost certain that the ratio of occurrence of such an event is close to 2/3. It may be asked whether this law could be explained in terms of indefinite knowledge, and we claim that it can.
For since we talk about probability whose degree can be determined, if we suppose that the probability of an event is 2/3, this means that this degree is determined according to an indefinite knowledge. Thus in the example of tossing a coin many times, we have two sorts of indefinite knowledge: (a) the indefinite knowledge which determines that the probability of the coin's falling on its head is 2/3; (b) the indefinite knowledge which includes all the alternative cases in which the event may appear. When we combine these two sorts we get a third, in which all alternatives are equally probable.
Completeness of our definition
The definition of probability on the Finite Frequency theory is incomplete and involves gaps. For suppose we look into statistical results about the frequency of cancer among smokers, and we are not sure whether it is 1/4 or 1/5 owing to a difficulty in reading; then the probability here concerns the frequency itself, not the number of smokers, and such a probability is not included in the account of that theory.
But such a case is covered by our definition in terms of indefinite knowledge. That is, the frequency is either 1/4 or 1/5, and thus the probability of each ratio is 1/2. There is one exception to our application, namely, complete doubt as to the major principles and axioms, such as non-contradiction. This sort of doubt our definition cannot include.
New axioms of our new definition of probability may be introduced. If we have two kinds of indefinite knowledge, each of which contains many probability values, and there can be no incompatibility between them, then we can determine the probability values of one knowledge independently of the other. But if the values of the two kinds of knowledge are incompatible, we multiply the items of both kinds and obtain a greater indefinite knowledge. By virtue of multiplication, the value of an item in such greater knowledge differs from its value in its special kind. Suppose we have a coin and another piece having six faces numbered from one to six, and we toss both; we have two kinds of indefinite knowledge: first, knowledge that the coin may fall on its head or its tail; secondly, knowledge that the six-faced piece may fall on a certain face. This means that the probability of the coin's appearance on its head is 1/2, and that of each face of the other piece is 1/6.
Now, if we knew that, for certain reasons, the head is concomitant with a certain number in the other piece, say the number 6, then the degree of probability of the coin's appearance on its head will be less than 1/2. For we have to multiply the probabilities of the items of one indefinite knowledge by the items of the other; then we get a new indefinite knowledge consisting of seven probabilities: (1) tail with number 1, (2) tail with number 2, (3) tail with number 3, (4) tail with number 4, (5) tail with number 5, (6) tail with number 6, (7) head with number 6. Consequently, by multiplication, the value of the appearance of the coin's head is 1/7, and the value of the other piece's appearance on number 6 is 2/7. When we have two sorts of indefinite knowledge which can constitute a third sort by virtue of multiplication, the probability values in the third sort will differ from those in the former two; this we may call the multiplication axiom in indefinite knowledge.
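The seven-case knowledge and the values 1/7 and 2/7 can be checked by enumerating the joint outcomes under the constraint that heads can occur only together with one particular number (we take 6 here, as an illustrative assumption; the text leaves the number unspecified):

```python
from fractions import Fraction
from itertools import product

# Unconstrained multiplication: 2 x 6 = 12 equally probable joint items.
outcomes = list(product(["head", "tail"], range(1, 7)))

# Assumed constraint: heads is concomitant with the number 6 only.
constrained = [(c, n) for c, n in outcomes if c != "head" or n == 6]
b = len(constrained)
print(b)  # 7 items: tail with 1..6, and head with 6

print(Fraction(sum(1 for c, n in constrained if c == "head"), b))  # 1/7
print(Fraction(sum(1 for c, n in constrained if n == 6), b))       # 2/7
```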
But we need another new axiom. For in many cases in which we have two sorts of indefinite knowledge, and some items in the one are incompatible with some items in the other, we notice that the value probabilities are determined within one sort without the other. In such cases we have no need of the multiplication axiom, but another axiom which we may call dominance axiom. Let us make this axiom clear first by example.
Suppose we have an indefinite knowledge that some person in the hospital (c) is dead, and we know also that there are ten sick persons in (c); thus the probability that any one of them is the dead person is 1/10. But take the following case. Suppose there is a sick man besides the ten persons we know of, but we do not know whether he went to the hospital (c) or to another (b) in which nobody died; and suppose that his entry into either hospital is equally probable. This means that there is a second indefinite knowledge that the eleventh person is in (c) or in (b), and that the probability of his being in either hospital is 1/2. In this case, the eleventh person falls within the domain of the first indefinite knowledge, because, since it is probable that he is one of the patients in (c), it is probable that he is the one whose death we know of.
Hence, the probability that the dead man is any one of those in (c) is 1/10. We notice that the probability that the eleventh person is in (b), involved in the second indefinite knowledge, and the probability that he is in (c), involved in the first indefinite knowledge, cannot both be true. We now come to state the second new axiom presupposed in our definition of probability, namely, the dominance axiom: if there are two probability values derived from two kinds of indefinite knowledge, and if one of these values affirms while the other denies some event, and the one includes the other, we call the former dominant over the latter.
Grounds of the Dominance Axiom
There are two grounds which justify the dominance axiom. The first ground is that we may acquire knowledge that the object of the first indefinite knowledge possesses a quality which necessarily belongs to one item in it but does not necessarily belong to the other items in the same piece of knowledge; in this case, any probability incompatible with those other items dominates the probability compatible with the first item. For example, we might know in an indefinite manner that John or Smith is in the room, and we know by testimony that the person there is white; we know that Smith is white, but we do not know John's colour. Then whiteness is the quality we know of the object of our indefinite knowledge; it is necessarily possessed by Smith and has no known connection with John's colour. Now any factor that weakens the probability that John is white dominates the probability that John is in the room. What is known is the presence of a white man in the room; when we become sure that a given person is white, we get a higher probability that he is the one present.
In other words, on the first ground, if the attribution of a certain quality to any item of the group is equally probable, there can be no dominance. In the previous example, if we knew that Smith alone is white, then we should be certain that it is he who is present in the room, and thus we should no longer have indefinite knowledge.
The second ground of the dominance axiom is that when we have indefinite knowledge about something, and the object of knowledge may have some quality not necessarily possessed by any item, then any probability that such a quality is, or is not, attributed to an item dominates the previous probability we have. For example, suppose we know that there is a white man in the room and we are told that he is either John or Smith, and we have no clear idea of the colour of either. Whiteness here is a quality not necessarily possessed by either of them. If whiteness is equally probable for both, then the probability that either of them is in the room is 1/2. Now if we acquire knowledge that decreases the probability that John is white, such knowledge dominates our previous probability.
Categorical and Hypothetical Indefinite Knowledge
Statements are of two sorts: categorical and hypothetical; the former attributes a predicate to a subject and expresses a fact, while the latter expresses a relation between two facts by virtue of the fulfilment of a certain condition. We may apply this classification to indefinite knowledge and say that the latter may be categorical or hypothetical. An example of the former is the knowledge that your brother will visit you; an example of the latter is the knowledge that your brother will visit you within the next ten days if he is not ill. As any categorical indefinite knowledge includes a number of items as members, so hypothetical indefinite knowledge includes a number of hypothetical statements, each of which may be considered a member of the original statement; and each is probable. In the previous example we have ten hypothetical probable statements:
1) X will visit his brother tomorrow if he is not ill.
2) X will visit his brother the day after tomorrow if he is not ill.
...
10) X will visit his brother on the tenth day if he is not ill.
The probability of each of these statements equals 1/10. Hence we reach an important point, namely, that if the condition is itself a probable fact, and the statements turning on it have ten chances, then whenever some of their consequents prove false we obtain a value probability inconsistent with the occurrence of that fact. Let us make this clear by the example. The condition is the probable occurrence that a certain person is not ill, and we have ten conditional statements turning on this occurrence.
Suppose we know that the person in question did not visit his brother during the first nine days, and we know nothing about the tenth day; then nine of the probable conditional statements are inconsistent with the condition, and it therefore becomes probable that the person is ill.
Now, when the consequent of a conditional statement is false, its antecedent or condition is absent. Thus, since we know that the consequents of the first nine statements failed, while we know nothing of his visit on the tenth day, our conditional indefinite knowledge gives the probability of his illness as 9/10. In consequence, we may formulate our axiom thus: every conditional indefinite knowledge includes a number of probable conditional statements, all having one condition in common but differing in consequent; whenever the consequent of any of these statements is known to be false, this knowledge denies the existence of the condition with a probability equal to the probability of that statement.
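The counting behind the 9/10 figure can be reproduced mechanically. The sketch below follows the chapter's own procedure: ten conditional statements, each with value probability 1/10, of which nine have consequents known to be false, each denying the common condition with its own probability:

```python
from fractions import Fraction

# The chapter's counting: ten conditional statements, "X will visit his
# brother on day k if he is not ill", each carrying probability 1/10.
days = range(1, 11)
p_statement = Fraction(1, 10)

# We learn that no visit took place on days 1-9, so the consequent of
# the first nine statements is false; each falsified statement denies
# the common condition ("he is not ill") with its own probability.
falsified = [k for k in days if k <= 9]
p_ill = sum(p_statement for _ in falsified)

print(p_ill)  # 9/10
```

If, instead, all ten consequents were known to fail, the condition would be denied with probability 1, which is the limiting case of the same axiom.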
Real and Unreal Conditional Knowledge
Conditional indefinite knowledge is of two kinds. (a) Knowledge whose consequent may be real; being ignorant of it, we formulate a conditional statement that gives us alternatives, one of which is realized in reality while the rest are merely probable. For example, I may indefinitely know that if I take a certain drug I shall suffer one of three sorts of pain; in this case I can consult an expert to tell me which sort will in fact happen. (b) Conditional knowledge which involves a number of alternatives, none of which is real. For example, given a bag containing a number of balls whose colours are unknown, we may ask: if one of them is black, which one is it? We then have indefinite knowledge that if one of the balls is black, it is this one or that one.
These two kinds of conditional indefinite knowledge are substantially different. The kind which involves the non-existence of the consequent in reality implies only that it is not contradictory for the consequent to exist, while there is no empirical ground for its real existence; whereas the kind of knowledge which gives empirical information implies only some sort of doubt, such that if I had enough knowledge I could have obtained definite knowledge free of any doubt. This main difference suggests that conditional indefinite knowledge which involves no empirical information cannot be taken as a ground for the determination of any probability, whereas the other kind of conditional knowledge can be taken as such a ground. Accordingly, we can discover a mistake in applying the theory of probability in certain cases.
For example, suppose there is a bag containing ten balls numbered from 1 to 10, and we know nothing of their colours; suppose we draw the balls numbered 1 to 9 and notice that they are all white. Can we apply the theory of probability and say that there is a probability that the tenth ball is white, on the ground of our indefinite knowledge that if the bag contains a black ball it would be the first or the second ... or the tenth? Such conditional indefinite knowledge includes ten probable statements, all of which have a condition in common, namely that the bag contains one black ball. We know that the consequent in the first nine conditional statements is not empirically verified, since we know that those nine balls are white. This means that those statements would deny the existence of the antecedent.
Such an application of probability theory, we argue, is false, because it determines the probability of the tenth ball's being white on the basis of conditional indefinite knowledge which involves the unreality of the consequent; and this basis does not justify the determination of any real probability.
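For concreteness, the computation that this argument rejects can be written out. The sketch below reproduces the rejected inference, in which nine refuted consequents are summed into a 9/10 probability against the antecedent, and hence for the tenth ball's whiteness:

```python
from fractions import Fraction

# The inference the chapter rejects, written out. Ten conditional
# statements share the condition "the bag contains one black ball";
# statement k says the black ball would be ball k. Each gets 1/10.
p_statement = Fraction(1, 10)

# Balls 1-9 were drawn and found white, so nine consequents are refuted.
refuted = 9
p_no_black = refuted * p_statement  # alleged probability that the
                                    # condition (a black ball) fails
p_tenth_white = p_no_black          # hence that ball 10 is white

print(p_tenth_white)  # 9/10
```

The arithmetic itself is the same as in the visiting example; what the chapter denies is that this kind of conditional knowledge, whose alternatives correspond to nothing real, can supply the ground for it.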
We have hitherto studied and discussed the theories of probability, and offered a new definition of probability from which the following results may be drawn. First, probability always depends on indefinite knowledge, and the value probability of any statement is determined by the ratio of the number of cases involved in this statement to the total number of cases concerned. Second, a theory of probability based on our new definition has, for its ground, five postulates: (a) the objects of indefinite knowledge have equal chances; (b) if some items of indefinite knowledge may be classified further while others may not, then the division involved in the former is either original or peripheral; if it is original, then each sub-item is an item of our indefinite knowledge, while if it is peripheral, then the item is the only member of such knowledge; (c) if we have two kinds of indefinite knowledge having distinct probabilities, one of which is consistent with a certain statement while the other is inconsistent with it, such that one of those probabilities denies the statement while the other does not, then the former dominates (or exhausts) the latter; (d) when conditional indefinite knowledge involves the unreality of the consequent, it cannot be taken as a ground for a probability of the consequent; (e) if we have two kinds of indefinite knowledge, and the value probability in the one is inconsistent with that in the other, we must multiply the items of the first kind by the items of the other kind, and thereby obtain a wider knowledge.