
Part 2, Chapter: 1|2|3|4|5

Chapter 3. The Deductive Phase Of Induction

Introduction:

    Induction has two phases, deductive and subjective, and we are here concerned with the former. By the deductive phase is meant that inductive inference aims at generalisations, and, in order that these be effected, induction finds help in the study of probability. We obtain the highest degree of probability for our generalisation in a deductive manner, that is, by deduction from certain axioms and postulates; therefore the degree of probability of an inductive inference depends on such axioms and postulates. In this chapter we shall consider induction as raising the probability of generalisations on the basis of the postulates of the theory of probability alone, without adding any extra postulates for induction itself.

    We shall explain our new approach to induction and probability in relation to a certain form of the causal principle. When we say that motion is the cause of heat, or that heating a metal is the cause of its expansion, we intend to confirm such a generalisation by induction.

Causality

    Causality is a relation between two terms such that, on the rationalistic theory, if one of them occurs the other necessarily follows, while according to empiricism the relation expresses constant or uniform conjunction involving no necessity. The former definition designates a necessary relation between two meanings or classes of events: when we say that motion is the cause of heat, we mean that any particular occurrence of motion is necessarily succeeded by a particular occurrence of heat. On empiricist lines, by contrast, causality is a relation of uniform conjunction involving no necessity.

    But the denial of necessity involves complete chance. For instance, that heat precedes boiling is as much a chance occurrence as that falling rain precedes my visiting a friend, with the difference that the former chance is uniform while the latter is rare. Thus, when a succession of events is a uniform chance, it is a relation between two particular events, not between two types of events.

    We may distinguish positive and negative causality: positive causality means that when an event occurs, another follows; negative causality means that an event does not occur because its preceding condition does not. It is to be remarked that, for rationalism, negative causality involves the impossibility of complete chance, that is, the occurrence of some event is impossible without the occurrence of its cause. But positive causality does not involve the impossibility of complete chance, for that (a) is succeeded by (b) is not inconsistent with the occurrence of (b) without (a). We may now conclude: (1) that positive causality does not deny complete chance from the rationalistic point of view; (2) that positive causality involves complete chance from the empiricist point of view; (3) that negative causality, for rationalism, implies that complete chance is impossible.

    Now, in what follows, we shall give four applications of our conception of a priori causality, so as to clarify the deductive phase of induction. In the first, we claim that there is no a priori ground to deny positive causality on rationalistic lines, and that absolute chance is impossible. In the second, we claim that there is no a priori ground for believing or disbelieving in negative causality, that is, for throwing doubt on absolute chance. Thirdly, we shall defend the view that the belief in absolute chance is consistent with the belief in positive causality. Finally, we claim that there is an a priori ground for denying positive causality on rationalistic lines, but that causality may at the same time stand as uniform conjunction.

First Application

    Following the rationalist claim, we assume that there is no a priori ground for denying a necessary connection between cause and effect, and that there is ground for denying absolute chance. Take the inductive statement "all A is succeeded by B", and you find before you three probable formulae:

    (1) the generalisation: all A is succeeded by B.

    (2) A is the cause of B in view of empirical data.

    (3) A is the cause of B independently of experience; this last is also probable so long as we have no ground for denying it.

    We notice that the first two formulae are one, while the third is distinct. Inductive inference, on our interpretation, proves the causal principle and thus confirms the generalisation in the specified way, as we shall presently see.

    In order to deal with inductive inference, we have recourse to the concept of indefinite knowledge. Since we assumed a priori the impossibility of absolute chance, b must have a cause. Suppose it is probable that the cause of b is either a or c, and by experiment we find that a is concomitant with b. We now have two cases: either c could not have occurred, or c could have occurred. In the former, we conclude that a is the cause of b, and we need not use indefinite knowledge, because we reached the causal relation between (a) and (b) a priori and deductively, not through induction. But in the case where (c) could have occurred in conjunction with (b) (but actually did not), we may say that (a) is not definitely the cause of (b), and their conjunction could then be explained in terms of relative chance. Hence we need to introduce indefinite knowledge to judge the probability that (a) is the cause of (b). Supposing two experiments have been made, there are four cases:

    (1) (c) occurred in neither experiment;

    (2) (c) occurred in the first experiment only;

    (3) (c) occurred in the second experiment only;

    (4) (c) occurred in both experiments.

    It is to be remarked that the first three cases show that (a) is the cause of (b), while the last is indifferent between confirming and denying such causality. This means that we have three whole probability values, plus half the value of the fourth case, in favour of affirming that (a) causes (b); therefore the probability that (a) is the cause of (b) after two experiments is 3.5/4 = 7/8, after three experiments 15/16, and the probability increases as we make more experiments. Such indefinite knowledge may be called a posteriori, since it enlarges the causal principle through induction.
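    The values 7/8 and 15/16 can be checked by enumerating the equally probable cases. The following is an illustrative Python sketch, not part of the original text; the function name and the counting convention (a full value when (c) is absent from some experiment, half a value for the indifferent case) are our own rendering of the argument above.

```python
from itertools import product

def prob_a_causes_b(n_experiments):
    """Probability that a causes b after n successful experiments,
    on the a posteriori indefinite knowledge described in the text.

    Each experiment admits two equally likely cases: the rival
    candidate cause c occurred in it, or it did not.  If c was
    absent from at least one experiment, a must be the cause of b
    (b occurred and must have a cause); if c occurred in every
    experiment, the case is indifferent and contributes 1/2."""
    total = 0.0
    cases = list(product([True, False], repeat=n_experiments))
    for case in cases:
        if not all(case):   # c absent somewhere: a must be the cause
            total += 1.0
        else:               # c present everywhere: indifferent, count 1/2
            total += 0.5
    return total / len(cases)

print(prob_a_causes_b(2))  # 0.875  = 7/8
print(prob_a_causes_b(3))  # 0.9375 = 15/16
```

As the text says, the value grows with each further experiment: in general it is 1 - 1/2^(n+1).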

Rule of multiplication

    In addition to a posteriori indefinite knowledge, there is, we suggest, an a priori indefinite knowledge, conceived before the inductive process. If we suppose that (b) has either (a) or (c) as its cause, such knowledge includes two items only, (a) and (c), and it determines the probability that (a) is the cause of (b): the value would here be 1/2, and so would the value of its denial. Thus, after two successful experiments, we have two pieces of indefinite knowledge, the a priori and the a posteriori; the former gives the value 1/2, while the latter gives 7/8, as a determination of the causal relation.

    Now we can apply the rule of multiplication to those two pieces of indefinite knowledge, and a third indefinite knowledge issues. After two successful experiments, we have eight probabilities: the four cases within the a posteriori knowledge multiplied by the two within the a priori knowledge.

    (1) that (a) is the cause of (b), and (c) occurs in both experiments;

    (2) that (a) is the cause of (b), and (c) occurs only in the first experiment;

    (3) that (a) is the cause of (b), and (c) occurs only in the second experiment;

    (4) that (a) is the cause of (b), and (c) occurs in neither experiment;

    (5) that (c) is the cause of (b), and (c) occurs in both experiments;

    (6) that (c) is the cause of (b), and (c) occurs only in the first experiment;

    (7) that (c) is the cause of (b), and (c) occurs only in the second experiment;

    (8) that (c) is the cause of (b), and (c) occurs in neither experiment.

    It is noticed that the last three cases can never occur, since they involve (b) occurring without any cause. There remain five cases, which constitute the new indefinite knowledge; and since four out of these five cases involve that (a) is the cause of (b), the probability value here is 4/5 instead of 7/8. Suppose we have made three successful experiments, and that the a priori indefinite knowledge has two items only; then the probability that a is the cause of b would be 15/16 according to the a posteriori knowledge, and 8/9 according to the multiplied knowledge.
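    The elimination of the impossible combinations, and the resulting values 4/5 and 8/9, can likewise be checked by direct enumeration. This is an illustrative sketch of the multiplication just described, with names of our own choosing:

```python
from itertools import product

def multiplied_prob(n_experiments):
    """Combine the a priori knowledge (the cause of b is either a or c)
    with the a posteriori occurrence patterns of c, discard the
    impossible combinations (c is the cause yet absent from some
    experiment in which b occurred), and return the count of cases
    favouring a as cause together with the total count."""
    combos = []
    for cause in ("a", "c"):
        for pattern in product([True, False], repeat=n_experiments):
            # Impossible: c is the cause but did not occur in every experiment.
            if cause == "c" and not all(pattern):
                continue
            combos.append(cause)
    return combos.count("a"), len(combos)

print(multiplied_prob(2))  # (4, 5): probability 4/5
print(multiplied_prob(3))  # (8, 9): probability 8/9
```

In general the multiplied value is 2^n / (2^n + 1) after n experiments, slightly lower than the a posteriori value 1 - 1/2^(n+1).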

Application of Dominance Axiom

    The rule of multiplication, discussed above, applies only to probabilities of equivalent value; it does not apply where some values dominate others. And it is observed that the probability values in the a posteriori indefinite knowledge dominate those in the a priori knowledge. Let us clarify this statement.

    The object of a priori indefinite knowledge is universal: something, (b), must have a cause, and that cause is still indeterminate; it may be (a) or (c). In case two successful experiments were made, this universal knowledge denotes that the cause of (b), whatever it is, occurred in both experiments; the a posteriori knowledge, on the other hand, denies with high probability that (c) occurred in both experiments. Now any value denying the occurrence of (c) in both experiments thereby denies that (c) is the cause; hence this denial dominates the a priori value of the probability that (c) is the cause.

    What has just been said shows clearly that the value of the probability that (a) is the cause of (b), after any number of successful experiments, is determined by the a posteriori knowledge only, not by the third indefinite knowledge produced by multiplication. And the value of the probability that (a) is the cause of (b), on the ground of the dominance axiom, is larger than its value on the ground of the multiplication supposed by the principle of inverse probability.

Dominance and the problem of a priori probability

    By means of the dominance axiom we can solve one of the problems which face the application of the theory of probability to inductive inference. The problem arises from applying the rule of multiplication and the principle of inverse probability, for this application involves an incompatibility between the a priori and the a posteriori indefinite knowledge in the determination of causality.

    A posteriori knowledge may determine that (a) is the cause of (b), while a priori knowledge does not. Such incompatibility implies that the probability that a causes b decreases under multiplication. And it is clear that a priori knowledge, being prior to induction, does not give determinate causes but must suppose a great number of them; the number of causes suggested a priori would then exceed what a posteriori knowledge shows.

    This problem is solved by the dominance axiom, which shows that the probability that a is not the cause of b (which a priori knowledge suggests) is dominated by, and not incompatible with, the probability value which a posteriori knowledge gives us.

Second Application

    We shall now assume that there is no a priori basis for denying a causal relation between events, nor for the impossibility of absolute chance. That is, it is probable that b has a cause, and it is equally probable that b has no cause at all. Thus, for the sake of argument, we are not permitted to conclude that a is the cause of b merely from their concomitance, for b may have occurred by absolute chance.

    Now we suppose that a is probably the cause of b, in order to reject the probability of absolute chance and hence to suggest the causal relation between them. We shall understand inductive inference in such a way that the theory of probability can be applied against absolute chance: as a consequence of many successful experiments, we obtain a high degree of credibility that absolute chance is impossible.

    By absolute chance we mean the absence of causality; its denial means that the absence of the cause is a cause of the absence of the effect. We can obtain a hypothetical indefinite knowledge as a result of observing that in all cases where a is absent, b is absent too. For if the absence of causality is supposed, it is not constant and regular that the absence of the effect follows the absence of the cause. Suppose we observe two cases in which the absence of the effect is concomitant with the absence of the cause: if the absence of the cause is necessarily connected with the absence of the effect, this concomitance must hold; if not, it would not be known whether the absence of the effect follows the absence of the cause. On this supposition we have four probabilities:

    (1) The absence of the effect does not occur in either case;

    (2) the absence of the effect does not occur in the first case only;

    (3) the absence of the effect does not occur in the second case only;

    (4) the absence of the effect occurs in both cases.

    These four probabilities express four hypothetical probable statements, all of which have one condition in common, namely, the supposition of absolute chance, that is, the denial of the causal principle. The consequent in each of the first three statements is false; the only way to make those statements true is to suppose their antecedent false. Thus the absence of the effect as a result of the absence of the cause, which is equivalent to the impossibility of absolute chance, is affirmed.

    The impossibility of absolute chance absorbs all the probability values involved in this hypothetical indefinite knowledge, except the value of the statement "unless the absence of the cause is a cause of the absence of the effect, the effect may still be absent in all cases"; for all the other values deny the condition or antecedent, and thus support the impossibility of absolute chance. When we compare this hypothetical knowledge with the a priori knowledge concerning the impossibility of chance, we find no dominance of one over the other, so we may apply the rule of multiplication to both sorts of knowledge. Since there is no a priori ground for preferring the possibility of absolute chance to its impossibility, the a priori probability of the impossibility of chance may be put at 1/2. Multiplying both sorts of knowledge, we find that the probability value of the impossibility of chance is less than the value determined by the hypothetical knowledge alone.

    Now, if it becomes probable to a greater degree that absolute chance is impossible, it becomes more probable that a is the cause of b, for the impossibility of chance implies that b has a cause. And if we suppose that b may probably have a cause other than a, such as c or d, then, in order to argue against such a probability, we may take the same way of explaining inductive inference stated in the previous application. We have now argued that it is possible to use the theory of probability against absolute chance, on the ground of a hypothetical indefinite knowledge.

Third Application

    We shall here suppose that there is no a priori ground for denying a causal relation between two given events a and b, and also that there is a priori ground for the possibility of absolute chance. Such a supposition does not enable us to strengthen the impossibility of absolute chance, as we saw in the second application. In the third application our problem is not that the cause of b may be c or d rather than a; our problem is the probability that b has occurred by mere chance. Now suppose that, the only probable cause of b being a, we observe their concomitance in varied experiments. If a is a cause of b, it is necessarily conjoined with b in all relevant experiments; whereas if a is not a cause of b, it need not be conjoined with it. In such a case we have four probabilities, expressed in four hypothetical statements:

    (1) Assuming the denial of causality between a and b, it is probable that b does not occur in either experiment; or (2) that b does not occur in the first experiment only; or (3) that b does not occur in the second experiment only; or (4) that b occurs in both experiments. All these statements are probable; but since b in fact occurred in both experiments, the consequent is false in the first three of them, and thus these three statements affirm the causal relation between a and b.

Multiplication or dominance

    We may remark that, besides the hypothetical indefinite knowledge, there is an a priori indefinite knowledge on the ground of which we can determine the a priori probability that a is the cause of b, supposing that if b has a cause at all, this cause is no other than a. There is thus an integral collection of two cases, the presence of the causal relation between a and b and its absence, and the a priori value of each of the two probabilities is 1/2.

    When we compare this a priori indefinite knowledge with the a posteriori knowledge on the ground of which we determine the probability that a is the cause of b, we find that the a posteriori value of the probability that a causes b does not dominate the a priori determination of this causality, because the a posteriori hypothetical knowledge here does not refute anything involved in the a priori knowledge; on the contrary, the former confirms one horn of the latter with higher probability.

    In consequence, these two kinds of knowledge have to be multiplied, and the multiplication affects the probability value of causality given a posteriori. By multiplication we get, after making two relevant experiments, five cases; for the fourth hypothetical statement is consistent both with the supposition that a causes b and with its denial, whereas each of the other three statements expresses only one case, being consistent only with the supposition of causality. Thus the probability value that a causes b is 4/5 instead of 7/8.
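    The five-case count, and the resulting value 4/5, can be written out directly. This is an illustrative sketch of the counting in the paragraph above, not part of the original text:

```python
# Enumerate the five equally probable cases obtained by multiplying the
# a priori knowledge (a causes b, or nothing does) with the four
# hypothetical statements, after two experiments in which b occurred.
cases = []

# Statements (1)-(3): the consequent (b absent somewhere) is false,
# so the antecedent (no causality) must be false; each yields a single
# case in which a causes b.
cases += ["causal"] * 3

# Statement (4): b occurs in both experiments, which is consistent
# both with causality and with its denial: two cases.
cases += ["causal", "chance"]

p = cases.count("causal") / len(cases)
print(p)  # 0.8, i.e. 4/5
```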

    What has been proposed is based on the supposition that a is the only event that can cause b. But if there are many candidate events besides a, we may modify our hypothetical knowledge in the following way: "If none of the things concomitant with b in successful experiments is a cause of b, then either ... or ....". Such a modification helps to give a higher probability that some one of the events regularly concomitant with b is its cause.

    Secondly, the probability of empirical causality does not exceed 1/2 through successful experiments, because an experiment involving the conjunction of a and b does not alter that probability except by removing one of the factors of multiplication. Suppose the kind (a) has ten individuals; then the a priori probability that a causes b results from multiplying together the ten probabilities of the conjunctions of the ten individuals of a with b. When we observe the conjunction of the first individual of a with b, we may dispense with one of those factors; and this means that, had we observed the conjunction of nine individuals of a with b, the value would still be only 1/2.
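    The 1/2 ceiling follows directly if each of the ten conjunctions is treated as an independent relation with prior probability 1/2, as the text supposes. A minimal Python sketch (the function name and the factor-removal reading are our own):

```python
def empirical_law_prob(n_individuals, n_observed):
    """A priori probability of the empirical causal law 'every
    individual of kind a is conjoined with b', treated as a
    conjunction of n_individuals independent relations of
    probability 1/2 each, after n_observed of the conjunctions
    have been verified by experiment (each verified conjunction
    removes one factor from the multiplication)."""
    remaining = n_individuals - n_observed
    return 0.5 ** remaining

print(empirical_law_prob(10, 0))  # 1/1024: prior of the full law
print(empirical_law_prob(10, 9))  # 0.5: nine verified conjunctions leave one factor
```

However many conjunctions are verified, the one unverified factor keeps the law's probability at or below 1/2, which is the point of the paragraph.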

    Thirdly, a posteriori knowledge cannot be a ground for increasing the probability that a causes b if we deny rational a priori causality from the start, for such a denial is equivalent to admitting absolute chance.

Hypothetical Knowledge And Empirical Causality

    There may be a hypothetical knowledge which helps to increase the probability of a causal law, even if we refuse rational causality. Take the example of a bag k containing ten balls, all of which are in fact white. We may suppose that the bag has at least one black ball, and then ask which of the ten balls it is. On that supposition, it is probable that the black ball is ball (1), or ball (2), ... or ball (10). That is, when we produce a hypothetical statement whose antecedent supposes there to be a black ball, we face ten probabilities in the consequent, although as a matter of fact there is no black ball.

    Similarly, suppose there is a causal relation between two kinds a and b, and that the kind a contains ten individuals; then to say that a is uniformly and regularly conjoined with b is to assert ten conjunctions. Now suppose we observed a1, a2, ..., a5 and found that b is conjoined with them all. We conclude that b is conjoined with a1, ..., a5, but we doubt whether b is conjoined with the other five cases.

    But when we doubt the causal law, that is, when we suppose that at least one individual of a is not conjoined with b, we may ask which one it is. We have no way of knowing that this supposed individual is one of the five unobserved cases, for it may be that all ten cases are in fact conjoined with b. Thus we obtain a hypothetical indefinite knowledge including ten hypothetical probable statements: if there is in the kind a at least one individual not conjoined with b, then it is either a1, or a2, ... or a10. The antecedent of each statement is the supposition that at least one individual of a is not concomitant with b. We know that the consequent in five of those statements is false, and this means that those five statements work by modus tollens: the falsity of the consequent denies the antecedent, i.e., denies that there is an a not conjoined with b, and this affirms the causal law. Thus, in our conditional knowledge, we have five hypothetical statements which favour the causal law, while the other five are indifferent to it. And the more conjunctions between a and b we observe, the more hypothetical statements we have which affirm this law.

    But the role performed by conditional knowledge is no ground for increasing the probability of the causal relation to a reasonable degree, for two reasons. First, we have already distinguished, within conditional indefinite knowledge, between knowledge whose antecedent is factually determined and knowledge whose antecedent is not; and the role performed by conditional knowledge in increasing the probability of causality lies only in the latter. Secondly, even if we ignore this distinction, the increase of probability which conditional indefinite knowledge provides remains too small to serve as a ground for causal laws.

Fourth Application

    Whereas the previous applications started from supposing no a priori ground against a causal relation between a and b, the present application assumes an a priori ground for refusing such a relation on rationalistic lines. That is, inductive inference here involves causality on empirical lines, not on rationalistic lines, which means mere uniform conjunction.

    Causality, empirically considered, involves not one relation between two events but various relations among many things: the relation of particular (1) of the kind (a) to particular (1) of the kind (b), the relation of particular (2) of the kind (a) to particular (2) of the kind (b), and so on. Therefore the causal relation between a and b is a multiple relation between the particulars of a and those of b, and each such relation expresses relative chance.

    Now, the main difference between causality in the sense of uniform chance and causality in the sense of rational necessity is that the former involves a collection of independent relations, while the latter is one single relation between the individuals of one kind and those of the other. In consequence, we may conclude the following points. First, since causality empirically considered expresses a collection of independent relations equal in number to the particulars of the kinds a and b, the value of its a priori probability is the value of the probability that one particular of a is conjoined with one particular of b, multiplied by the value that another particular of a is conjoined with another of b, and so on; so the value comes to a fraction very near zero. Whereas causality rationally considered, being one single relation between two events, has the a priori probability 1/2. Secondly, conditional indefinite knowledge is still not to be taken as a ground of causal laws.
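    The contrast between the two a priori values can be made concrete. In this illustrative sketch (names and the choice of ten particulars are ours, not the text's), the empirical construal multiplies one 1/2 factor per particular, while the rational construal is a single relation with value 1/2:

```python
from fractions import Fraction

def a_priori_prob(kind_size, rational=False):
    """A priori probability of the causal law on the two construals.
    Rational causality is one single necessary relation, with prior
    1/2; empirical causality is a conjunction of kind_size independent
    relations, each given prior 1/2 for lack of any ground of
    preference, so its prior is (1/2)**kind_size."""
    if rational:
        return Fraction(1, 2)
    return Fraction(1, 2) ** kind_size

print(a_priori_prob(10, rational=True))  # 1/2
print(a_priori_prob(10))                 # 1/1024, a fraction very near zero
```

As the kind grows, the empirical prior shrinks geometrically toward zero, which is the first point concluded above.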

    That is to say, any conditional knowledge contains a number of statements equal to the number of individuals included in the kind concerned. If we suppose an individual of the kind (a) not conjoined with b, then this individual is either a1, or a2, ... or an; the number of consequents is thus as great as the number of individuals in the kind. On the other hand, the conditional statements which affirm the causal law are only as many as the individuals actually examined in experiment. It follows that the more individuals a kind contains beyond those examined, the smaller the value we get for the probability of the causal law.

    As concerns the deductive phase of inductive inference and its justification, we have reached the following important points. First, the deductive phase is the first step of inductive inference, and it is a reasonable application of the theory of probability in the sense given in the course of this chapter; thus induction presupposes no postulates except the postulates of probability itself.

    Secondly, the deductive phase does not assume any a priori justification for denying causal relations on rationalistic lines; indeed, the denial of such causality cannot explain inductive inference. Finally, induction is consistent with the impossibility of absolute chance.
