When the probability of one event is affected by another event, these events are?

The probability of an event occurring is intuitively understood to be the likelihood or chance of it occurring. In the very simplest cases, the probability of a particular event A occurring from an experiment is obtained from the number of ways that A can occur divided by the total number of possible outcomes. For example, the probability of obtaining a particular face, say ⚅, from a single roll of a fair dice is obtained from the following observations:

• the possible outcomes are ⚀, ⚁, ⚂, ⚃, ⚄, and ⚅; that is, there are six possible outcomes,

• the number of ways of obtaining ⚅ from a single roll is 1.

Therefore, the probability of obtaining ⚅ from a single roll is 1/6.

Example 9.1

Calculate the probability of the following outcomes from a single roll of a fair dice.

a. ⚂

b. Either ⚂ or ⚃

c. Any one of ⚀, ⚁, ⚂, ⚃, ⚄, or ⚅

Solution

In each case, the total number of possible outcomes is six.

a. The total number of ways of obtaining ⚂ is one. The required probability is then 1/6.

b. The total number of ways of obtaining either ⚂ or ⚃ is two. The required probability is then 2/6 = 1/3.

c. The total number of ways of obtaining any one of ⚀, ⚁, ⚂, ⚃, ⚄, or ⚅ is six. The required probability is therefore 6/6 = 1.

Example 9.2

Calculate the probability of obtaining the following total scores from simultaneously rolling two fair dice.

a. 1

b. 12

c. 6

d. 7

Solution

We need to determine the set of all possible outcomes. These are listed in Table 9.1 and we see that there are 36.

Table 9.1. Possible outcomes of simultaneously rolling two fair dice (each outcome written as (first die, second die), followed by its total)

(1,1) 2    (2,1) 3    (3,1) 4    (4,1) 5    (5,1) 6    (6,1) 7
(1,2) 3    (2,2) 4    (3,2) 5    (4,2) 6    (5,2) 7    (6,2) 8
(1,3) 4    (2,3) 5    (3,3) 6    (4,3) 7    (5,3) 8    (6,3) 9
(1,4) 5    (2,4) 6    (3,4) 7    (4,4) 8    (5,4) 9    (6,4) 10
(1,5) 6    (2,5) 7    (3,5) 8    (4,5) 9    (5,5) 10   (6,5) 11
(1,6) 7    (2,6) 8    (3,6) 9    (4,6) 10   (5,6) 11   (6,6) 12

a.

A total of 1 cannot be obtained. That is, the number of ways of obtaining a total of 1 is 0. The required probability is therefore 0.

b.

A total of 12 can be obtained only from (6,6). That is, there is only one way. The required probability is therefore 1/36.

c.

A total of 6 can be obtained from any of (1,5), (2,4), (3,3), (4,2), or (5,1). That is, there are five ways. The required probability is therefore 5/36.

d.

A total of 7 can be obtained from any of (1,6), (2,5), (3,4), (4,3), (5,2), or (6,1). That is, there are six ways. The required probability is therefore 6/36 = 1/6.
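The counting in Example 9.2 can be verified by brute-force enumeration. A minimal Python sketch (the helper `p_total` is our name, not from the text); `Fraction` keeps the answers as exact fractions:

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of rolling two fair dice.
outcomes = list(product(range(1, 7), repeat=2))

def p_total(t):
    """Probability that the two dice show a total of t."""
    ways = sum(1 for a, b in outcomes if a + b == t)
    return Fraction(ways, len(outcomes))

print(p_total(1), p_total(12), p_total(6), p_total(7))
# -> 0 1/36 5/36 1/6
```

Summing `p_total(t)` over all totals from 2 to 12 gives exactly 1, as it must for a complete set of mutually exclusive outcomes.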

Example 9.3

Consider a bag containing a number of balls colored either red, blue, green, or yellow, denoted Ⓡ, Ⓑ, Ⓖ, or Ⓨ, respectively. In particular, there are 2 × Ⓡ, 1 × Ⓑ, 1 × Ⓖ, and 1 × Ⓨ. Calculate the probability that the draw of a single ball will be the following.

a. Ⓨ

b. Ⓡ

c. either Ⓡ or Ⓑ

d. a black ball

Solution

In each case, the total number of possible outcomes is five. That is, one can draw either Ⓡ, Ⓡ, Ⓑ, Ⓖ, or Ⓨ.

a.

The total number of ways of drawing Ⓨ is one. The required probability is then 1/5.

b.

The total number of ways of drawing Ⓡ is two. The required probability is then 2/5.

c.

The total number of ways of drawing either Ⓡ or Ⓑ is three. The required probability is then 3/5.

d.

The total number of ways of drawing a black ball is zero. The required probability is then 0.

It should be clear from the above examples that the probability of an outcome is 1 if that outcome is certain, and 0 if that outcome is impossible. For example, in Example 9.1 (c), the probability of obtaining one of ⚀, ⚁, ⚂, ⚃, ⚄, or ⚅ from a single roll of a dice is 1; that is, one is certain to obtain one of those results. Furthermore, the probability of drawing a black ball in Example 9.3 (d) is 0 because the bag only contains red, blue, green, and yellow balls.

We intuitively see that probabilities are real numbers on the interval [0,1]. Probabilities are typically stated as either fractions, as in the previous examples, or decimals.

Now let us consider multiple experiments. For example, rolling a dice twice and asking about the probability that we obtain the same face on each roll, or pulling two balls from a bag and asking about the probability that we obtain Ⓑ and Ⓨ. In order to properly consider multiple experiments, it is necessary to distinguish between independent and dependent events. Rolling dice and pulling balls from a bag are good examples for illustrating the difference.

First, let us consider rolling a fair dice. The probability that we obtain a particular face, say ⚅, on the first roll is 1/6. It should be intuitively clear that the outcome of the first roll has no bearing on the outcome of the second roll, and the probability of again obtaining ⚅ is 1/6. The two rolls are said to be independent. Note that identical reasoning applies to simultaneously rolling two dice and asking about the probability that they both show ⚅.

Now consider drawing two balls from the bag described in Example 9.3. There are two variations of this experiment,

1.

draw a ball, note its color and return it to the bag before drawing another,

2.

draw a ball, note its color but do not return it to the bag before drawing another.

Since variation 1 involves replacement of the first ball, the outcome of the second draw has no bearing on the result of the first. The two draws in variation 1 are therefore independent.

Variation 2, however, is different. Since the first ball is not replaced, the outcome of the second draw is dependent on the outcome of the first. For example, if Ⓨ is drawn first from a bag containing ⓇⓇⒼⓎⒷ, it is impossible to draw it a second time. This is explored in the following example.

Example 9.4

Using the bag of balls in Example 9.3, calculate the probability of obtaining the following outcomes from two draws. State whether the draws are independent or dependent.

a. 1st: Ⓖ is drawn then replaced,

2nd: Ⓨ is drawn.

b. 1st: Ⓖ is drawn, but not replaced,

2nd: Ⓨ is drawn.

Solution

a. 1st: There are five possible outcomes, only one is Ⓖ. The required probability is then 1/5.

2nd: There are again five possible outcomes, only one is Ⓨ. The required probability is then 1/5.

The two draws are independent.

b. 1st: There are five possible outcomes, only one is Ⓖ. The required probability is then 1/5.

2nd: There are now four possible outcomes, only one is Ⓨ. The required probability is then 1/4.

The two draws are dependent.
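The two variations can be checked with a short Python sketch (the helper `p_draw` and the variable names are ours). Multiplying the per-draw probabilities to get the chance of each two-draw sequence anticipates the ideas of Section 9.4:

```python
from fractions import Fraction

bag = ["R", "R", "B", "G", "Y"]          # the bag from Example 9.3

def p_draw(color, contents):
    """Probability of drawing the given color from the listed contents."""
    return Fraction(contents.count(color), len(contents))

# Variation (a): the first ball is replaced, so the draws are independent.
p_a = p_draw("G", bag) * p_draw("Y", bag)        # (1/5)(1/5) = 1/25

# Variation (b): no replacement, so the second draw depends on the first.
remaining = [b for b in bag if b != "G"]         # the single G is removed
p_b = p_draw("G", bag) * p_draw("Y", remaining)  # (1/5)(1/4) = 1/20

print(p_a, p_b)  # -> 1/25 1/20
```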

The independence or otherwise of multiple draws is a crucial consideration when calculating the "overall" probability of drawing a particular set of sequential outcomes. We return to these ideas in Section 9.4.


URL: //www.sciencedirect.com/science/article/pii/B9780128001561000091

Special Types of Regression

Rudolf J. Freund, ... Donna L. Mohr, in Statistical Methods (Third Edition), 2010

Concept Questions

1.

The probability of an event is a value between __ and __, the odds of the event are between __ and __, and the ln(odds) are between __ and __.

2.

In one situation, Poisson regression and logistic regression can substitute for each other. Describe that situation.

3.

Your professor comments, “what appears as an interaction when a profile plot is made for the probabilities may not appear as an interaction when the ln(odds) are plotted.” Use an example with some probabilities you make up to illustrate the professor's meaning.

4.

Neither logistic regression nor Poisson regression produces an estimate of the error variance. Why?

5.

Suppose that in Example 13.4 the number of workers had been expressed in millions, that is, 2.850803 rather than 2,850,803. How would the estimated regression coefficients change?


URL: //www.sciencedirect.com/science/article/pii/B9780123749703000135

Special Types of Regression

Donna L. Mohr, ... Rudolf J. Freund, in Statistical Methods (Fourth Edition), 2022

Concept Questions

1.

The probability of an event is a value between __ and __, the odds of the event are between __ and __, and the ln(odds) are between __ and __.

2.

For each situation below, say whether the choice of a regression-like method is most likely logistic, Poisson, nonlinear with an S curve, or nonlinear with a unimodal curve.

(a)

y is the height of children followed from ages 6 to 18 years.

(b)

y is whether or not a child receives a measles vaccine by age 6.

(c)

y is the number of times a person is hospitalized between the ages of 18 and 50.

(d)

y is the concentration in the blood of an antibiotic, followed from time of injection and for many hours thereafter.

(e)

y is the intensity of sunlight falling on a solar panel, plotted against time of day.

3.

Your professor comments, “what appears as an interaction when a profile plot is made for the probabilities may not appear as an interaction when the ln(odds) are plotted.” Use an example with some probabilities you make up to illustrate the professor’s meaning.

4.

Neither logistic regression nor Poisson regression produces an estimate of the error variance. Why?

5.

Suppose that in Example 13.4 the number of workers had been expressed in millions, that is, 2.850803 rather than 2,850,803. How would the estimated regression coefficients change?


URL: //www.sciencedirect.com/science/article/pii/B9780128230435000138

Probability and Sampling Distributions

Donna L. Mohr, ... Rudolf J. Freund, in Statistical Methods (Fourth Edition), 2022

2.2.1 Definitions and Concepts

Definition 2.4

An experiment is any process that yields an observation.

For example, the toss of a fair coin (gambling activities are popular examples for studying probability) is an experiment.

Definition 2.5

An outcome is a specific result of an experiment.

In the toss of a coin, a head would be one outcome, a tail the other. In the measles study, one outcome would be “yes,” the other “no.”

In Example 2.2, determining whether an individual has had measles is an experiment. The information on outcomes for this experiment may be obtained in a variety of ways, including the use of health certificates, medical records, a questionnaire, or perhaps a blood test.

Definition 2.6

An event is a combination of outcomes having some special characteristic of interest.

In the measles study, an event may be defined as “one member of the couple has had measles.” This event could occur if the husband has and the wife has not had measles, or if the husband has not and the wife has. An event may also be the result of more than one replicate of an experiment. For example, asking the couple may be considered as a combination of two replicates: (1) asking if the wife has had measles and (2) asking if the husband has had measles.

Definition 2.7

The probability of an event is the proportion (relative frequency) of times that the event is expected to occur when an experiment is repeated a large number of times under identical conditions.

We will represent outcomes and events by capital letters. Letting A be the outcome “an individual of childbearing age has had measles,” then, based on the national health study, we write the probability of A occurring as

P(A)=0.20.

Note that any probability has the property

0≤P(A)≤1.

This is, of course, a result of the definition of probability as a relative frequency.

Definition 2.8

If two events cannot occur simultaneously, that is, one “excludes” the other, then the two events are said to be mutually exclusive.

Note that two individual observations are mutually exclusive. The sum of the probabilities of all the mutually exclusive events in an experiment must be one. This is apparent because the sum of all the relative frequencies in a problem must be one.

Definition 2.9

The complement of an outcome or event A is the occurrence of any event or outcome that precludes A from happening.

Thus, not having had measles is the complement of having had measles. The complement of outcome A is represented by A′. Because A and A′ are mutually exclusive, and because A and A′ are all the events that can occur in any experiment, the probabilities of A and A′ sum to one:

P(A′)=1−P(A).

Thus the probability of an individual not having had measles is

P(no measles)=1−0.2=0.8.

Definition 2.10

Two events A and B are said to be independent if the probability of A occurring is in no way affected by event B having occurred or vice versa.

Rules for Probabilities Involving More Than One Event

Consider an experiment with events A and B, where P(A) and P(B) are the respective probabilities of these events. We may be interested in the probability of the event "both A and B occur." If the two events are independent, then

P(A and B) = P(A) ⋅ P(B).

If two events are not independent, more complex methods must be used (see, for example, Wackerly et al., 2008).

Suppose that we define an experiment to be two tosses of a fair coin. If we define A to be a head on the first toss and B to be a head on the second toss, these two events would be independent. This is because the outcome of the second toss would not be affected in any way by the outcome of the first toss.

Using this rule, the probability of two heads in a row, P(A and B), is (0.5)(0.5) = 0.25. In Example 2.2, any incidence of measles would have occurred prior to the couple getting together, so it is reasonable to assume the occurrence of childhood measles in either individual is independent of the occurrence in the other. Therefore, the probability that both have had measles is

(0.2)(0.2) = 0.04.

Likewise, the probability that neither has had measles is

(0.8)(0.8) = 0.64.

We are also interested in the probability of the event "either A or B occurs." If two events are mutually exclusive, then

P(A or B) = P(A) + P(B).

Note that if A and B are mutually exclusive then they both cannot occur at the same time; that is, P(A and B) = 0.

If two events are not mutually exclusive, then

P(A or B) = P(A) + P(B) − P(A and B).

We can now use these rules to find the probability of the event “exactly one member of the couple has had measles.” This event consists of two mutually exclusive outcomes:

A: husband has and wife has not had measles.

B: husband has not and wife has had measles.

The probabilities of events A and B are

P(A) = (0.2)(0.8) = 0.16,
P(B) = (0.8)(0.2) = 0.16.

The event "one has" means either of the above occurred, hence

P(one has) = P(A or B) = 0.16 + 0.16 = 0.32.

In the experiment of tossing two fair coins, event A (a head on the first toss) and event B (a head on the second) are not mutually exclusive events. The probability of getting at least one head in two tosses of a fair coin would be

P(A or B) = 0.5 + 0.5 − 0.25 = 0.75.
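The measles and coin-toss calculations above can be reproduced in a few lines of Python (a sketch; variable names are ours, and `Fraction` keeps the arithmetic exact):

```python
from fractions import Fraction

p = Fraction(1, 5)          # P(A) = P(an individual has had measles) = 0.2

# Independent events: P(A and B) = P(A) * P(B).
p_both = p * p                              # both have had measles
p_neither = (1 - p) * (1 - p)               # neither has

# Mutually exclusive events: P(A or B) = P(A) + P(B).
p_exactly_one = p * (1 - p) + (1 - p) * p   # exactly one has

print(float(p_both), float(p_neither), float(p_exactly_one))  # 0.04 0.64 0.32

# Not mutually exclusive: P(A or B) = P(A) + P(B) - P(A and B).
h = Fraction(1, 2)                          # P(head) on one fair toss
p_at_least_one_head = h + h - h * h
print(float(p_at_least_one_head))           # 0.75
```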

Example 2.3

Screening Tests

One practical application of probability is in the analysis of screening tests in the medical profession. A study of the use of steroid hormone receptors using a fluorescent staining technique in detecting breast cancer was conducted by the Pathology Department of Shands Hospital in Jacksonville, Florida (Masood and Johnson 1987). The results of the staining technique were then compared with the commonly performed biochemical assay. The staining technique is quick, inexpensive, and, as the analysis indicates, accurate. Table 2.2 shows the results of 42 cases studied. The probabilities of interest are as follows:

1.

The probability of detecting cancer, that is, the probability of a true positive test result. This is referred to as the sensitivity of the test.

2.

The probability of a true negative, that is, a negative on the test for a patient without cancer. This is known as the specificity of the test.

Table 2.2. Staining technique results.

                           Staining Technique Results
Biochemical Assay Result   Positive   Negative   Total
Positive                      23          2        25
Negative                       2         15        17
Total                         25         17        42

Solution

To determine the sensitivity of the test, we notice that the test did identify 23 out of the 25 cases; this probability is 23/25 = 0.92, or 92%. To determine the specificity of the test, we observe that 15 of the 17 negative biochemical results were classified negative by the staining technique. Thus the probability is 15/17 ≈ 0.88, or 88%. Since the biochemical assay itself is almost 100% accurate, these probabilities indicate that the staining technique is both sensitive and specific to breast cancer. However, the test is not completely infallible.
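The two probabilities can be computed directly from the counts in Table 2.2 (a sketch; variable names are ours):

```python
# Counts from Table 2.2 (staining technique result vs. biochemical assay).
true_pos, false_neg = 23, 2    # assay-positive cases: 25 in total
false_pos, true_neg = 2, 15    # assay-negative cases: 17 in total

sensitivity = true_pos / (true_pos + false_neg)   # P(test + | cancer)
specificity = true_neg / (true_neg + false_pos)   # P(test - | no cancer)

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
# -> sensitivity = 0.92, specificity = 0.88
```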


URL: //www.sciencedirect.com/science/article/pii/B9780128230435000023

Probability

Andrew F. Siegel, Michael R. Wagner, in Practical Business Statistics (Eighth Edition), 2022

One Event Given Another: Reflecting Current Information

When you revise the probability of an event to reflect information that another event has occurred, the result is the conditional probability of the first event given the other event. (All of the ordinary probabilities you have learned about so far can be called unconditional probabilities, if necessary, to avoid confusion.) Here are some examples of conditional probabilities:

1.

Suppose the home team has a 70% chance of winning the big game. Now introduce new information in terms of the event “the team is ahead at halftime.” Depending on how this event turns out, the probability of winning should be revised. The probability of winning given that we are ahead at halftime would be larger—say, 85%. This 85% is the conditional probability of the event “win” given the event “ahead at halftime.” The probability of winning given that we are behind at halftime would be less than the 70% overall chance of winning—say, 35%. This 35% is the conditional probability of “win” given “behind at halftime.”

2.

Success of a new business project is influenced by many factors. To describe their effects, you could discuss the conditional probability of success given various factors, such as a favorable or unfavorable economic climate and actions by competitors. An expanding economy would increase the chances of success; that is, the conditional probability of success given an expanding economy would be larger than the (unconditional) probability of success.
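A conditional probability is a ratio of probabilities, P(A | B) = P(A and B)/P(B). The halftime example can be sketched with made-up joint probabilities, chosen here only so that the numbers work out to the 70%, 85%, and 35% quoted above (all inputs are hypothetical):

```python
from fractions import Fraction

# Hypothetical joint probabilities for the big-game example -- assumed values.
p_ahead = Fraction(7, 10)               # P(ahead at halftime)
p_win_and_ahead = Fraction(595, 1000)   # P(win and ahead)
p_win_and_behind = Fraction(105, 1000)  # P(win and behind)

p_win = p_win_and_ahead + p_win_and_behind             # unconditional: 0.70
p_win_given_ahead = p_win_and_ahead / p_ahead          # conditional: 0.85
p_win_given_behind = p_win_and_behind / (1 - p_ahead)  # conditional: 0.35

print(float(p_win), float(p_win_given_ahead), float(p_win_given_behind))
# -> 0.7 0.85 0.35
```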


URL: //www.sciencedirect.com/science/article/pii/B9780128200254000063

Random Walk

Oliver C. Ibe, in Markov Processes for Stochastic Modeling (Second Edition), 2013

8.9.2 First Passage Time via the Reflection Principle

The computation of the probability of an event associated with a random walk is essentially the counting of the number of paths that define that event. These probabilities can often be derived from the reflection principle, which states as follows:

Reflection principle: Let k, v > 0. Any path from A = (a,k) to N = (n,v) that touches or crosses the x-axis in between corresponds to a path from A′ = (a,−k) to N = (n,v).

Thus, the x-axis can be thought of as a mirror that casts a "shadow path" of the original path by reflecting it on this mirror until it hits the x-axis for the first time. After the first time the original path hits the x-axis at B = (b,0), the shadow path is exactly the same as the original path. In Figure 8.5, the segment A′B is the shadow path of the original segment AB. After B the two segments converge and continue as one path to N. Thus, the number of paths from A to N that touch or cross the x-axis is the same as the number of paths from A′ to N.

Figure 8.5. Illustration of the reflection principle.

Let Na,n(k,v) denote the number of possible paths between the points (a,k) and (n,v). (A guide to understanding this parameter is that the subscript indicates the starting and ending times while the argument indicates the initial and final positions.) Then Na,n(k,v) can be computed as follows. Let a path consist of m steps to the right and l steps to the left. Thus, the total number of steps is m + l = n − a, and the difference between the number of rightward steps and the leftward steps is m − l = v − k. From this we obtain

(8.20)  m = (1/2){(n − a) + (v − k)}

Because Na,n(k,v) can be defined as the number of "successes," m, in m + l = n − a binomial trials, we have that

(8.21)  Na,n(k,v) = C(n − a, m)

where m is as defined in Eq. (8.20) and C(s, m) denotes the binomial coefficient "s choose m."

Consider the event {Yn = x | Y0 = 0}; that is, the position of the walker after n steps is x, given that he started at the origin. In this case we have that a = 0, k = 0, v = x. Thus, m = (n + x)/2 and

N0,n(0,x) = C(n, (n + x)/2) = C(n, (n − x)/2)

where (n + x)/2 is an integer. Thus, if p is the probability of a step of +1 and q is the probability of a step of −1,

P[Yn = x] = N0,n(0,x) p^((n+x)/2) q^((n−x)/2) = C(n, (n + x)/2) p^((n+x)/2) q^((n−x)/2)

as we derived earlier in the chapter.
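Equations (8.20) and (8.21) translate directly into code. A minimal sketch (function names are ours), with a small symmetric-walk sanity check:

```python
from math import comb

def num_paths(a, n, k, v):
    """N_{a,n}(k,v): number of random-walk paths from (a, k) to (n, v),
    per Eqs. (8.20) and (8.21)."""
    steps = n - a                       # total number of steps, m + l
    m2 = steps + (v - k)                # 2m = (n - a) + (v - k)
    if m2 % 2 or not 0 <= m2 // 2 <= steps:
        return 0                        # parity or range makes it impossible
    return comb(steps, m2 // 2)

def p_position(n, x, p):
    """P[Y_n = x | Y_0 = 0]: steps are +1 with probability p, -1 with 1 - p."""
    if (n + x) % 2:
        return 0.0
    return num_paths(0, n, 0, x) * p ** ((n + x) // 2) * (1 - p) ** ((n - x) // 2)

# Two steps of a symmetric walk: P[Y_2 = 0] = 1/2 and P[Y_2 = 2] = 1/4.
print(p_position(2, 0, 0.5), p_position(2, 2, 0.5))  # -> 0.5 0.25
```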

According to the reflection principle, the number of paths from A to N that touch or cross the time axis is equal to the number of paths from A′ to N; that is, if we denote the number of paths that touch or cross the time axis in Figure 8.5 by Na,n1(k,v), then

Na,n1(k,v) = Na,n(−k,v)

If we assume that k and v are positive numbers as shown in Figure 8.5, then the number of paths from (a,k) to (n,v) that do not touch or intersect the time axis, denoted by Na,n0(k,v), is the complement of the number of paths, Na,n1(k,v), that touch or intersect the time axis. Thus,

Na,n0(k,v) = Na,n(k,v) − Na,n1(k,v) = Na,n(k,v) − Na,n(−k,v)

To apply this principle to the first passage time problem we proceed as follows. Consider a random walk that starts at A = (0,0), crosses or touches a line y > v, and then ends at D = (n,v). This is illustrated in Figure 8.6. From our earlier discussion, the total number of paths between (0,0) and (n,v) is N0,n(0,v).

Figure 8.6. Reflection principle illustrated for first passage time.

Consider a reflection on the line Y = y, as shown in Figure 8.6. Assume that the last point at which the path from (0,0) to (n,v) intersects this line is the point B = (k,y). Then the reflection of the path from (0,0) to (n,v) on this line from the point B is shown as a dotted line. The terminal point of this reflected path is C = (n, 2y − v). According to the reflection principle, the number of paths from A to D that intersect or touch the line Y = y is

N0,n1(0,v) = N0,n(0, 2y − v)

To compute the first passage time, we note that for the walker to be at the point v at time n, he must be either at the point v − 1 at time n − 1 or at the point v + 1 at time n − 1. Since his first time of reaching the point v is n, we conclude that he must be at v − 1 at time n − 1. Thus, the number of paths from A to D that do not touch or cross Y = v before time n, N0,n0(0,v), is

N0,n0(0,v) = N0,n(0,v) − N0,n1(0,v)
= N0,n−1(0, v − 1) − N0,n−1(0, 2v − (v − 1))
= N0,n−1(0, v − 1) − N0,n−1(0, v + 1)
= C(n − 1, (n + v)/2 − 1) − C(n − 1, (n + v)/2)
= (n − 1)! / [((n + v)/2 − 1)! ((n − v)/2)!] − (n − 1)! / [((n + v)/2)! ((n − v)/2 − 1)!]
= {(n − 1)! / [((n + v)/2)! ((n − v)/2)!]} {(n + v)/2 − (n − v)/2}
= (v/n) n! / [((n + v)/2)! ((n − v)/2)!]
= (v/n) C(n, (n + v)/2)
= (v/n) N0,n(0,v)

where N0,n(0,v)is the total number of paths from A to D. Thus, the probability that the first passage time from A to D occurs in n steps is

pTv(n) = P[Tv = n] = (v/n) C(n, (n + v)/2) p^((n+v)/2) (1 − p)^((n−v)/2)

Because a similar result can be obtained when v<0, we have that

pTv(n) = P[Tv = n] = (|v|/n) C(n, (n + v)/2) p^((n+v)/2) (1 − p)^((n−v)/2)

Note that n + v must be an even number. Also, recall that the probability that the walker is at location v after n steps is given by

P[Yn = v] = C(n, (n + v)/2) p^((n+v)/2) (1 − p)^((n−v)/2)

Thus, the PMF for the first passage time can be written as follows:

pTv(n) = P[Tv = n] = (|v|/n) P[Yn = v]
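The first passage time PMF above is straightforward to evaluate numerically. A sketch (`p_first_passage` is our name), with small cases that can be checked by hand:

```python
from math import comb

def p_first_passage(v, n, p):
    """P[T_v = n]: probability that a walk starting at 0 first reaches
    level v at step n, with steps +1 (prob. p) and -1 (prob. 1 - p)."""
    if n <= 0 or (n + v) % 2 or abs(v) > n:
        return 0.0
    m = (n + v) // 2                     # number of +1 steps
    return abs(v) / n * comb(n, m) * p ** m * (1 - p) ** (n - m)

# Symmetric walk reaching +1: P[T_1 = 1] = 1/2, and P[T_1 = 3] = 1/8
# (the only length-3 first-passage path is down-up-up).
print(p_first_passage(1, 1, 0.5), p_first_passage(1, 3, 0.5))  # -> 0.5 0.125
```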


URL: //www.sciencedirect.com/science/article/pii/B9780124077959000086

Simulation Techniques

Scott L. Miller, Donald Childers, in Probability and Random Processes (Second Edition), 2012

12.3.1 Monte Carlo Simulations

In general, suppose we have the ability to recreate (simulate) the experiment an arbitrary number of times and define a sequence of Bernoulli random variables, Xi, that are defined according to

(12.15)  Xi = { 1, if A occurs during the ith experiment; 0, otherwise.

Hence, Xi is simply an indicator function for the event A. If the experiments are independent, then the probability of the event A, pA, can be estimated according to

(12.16)  p̂A = (1/n) ∑_{i=1}^{n} Xi.

This is nothing more than estimating the mean of an IID sequence of random variables. From the development of Chapter 7, we know that this estimator is unbiased and that as n → ∞ the estimate converges (almost everywhere via the strong law of large numbers) to the true probability.

In practice, we do not have the patience to run our simulation an infinite number of times nor do we need to. At some point, the accuracy of our estimate should be “good enough,” but how many trials is enough? Some very concrete answers to this question can be obtained using the theory developed in Chapter 7. If the event A is fairly probable, then it will not take too many trials to get a good estimate of the probability, in which case runtime of the simulation is not really too much of an issue. However, if the event A is rare, then we will need to run many trials to get a good estimate of pA. In the case when n gets large, we want to be sure not to make it any larger than necessary so that our simulation runtimes do not get excessive. Thus, the question of how many trials to run becomes important when simulating rare events.

Assuming n is large, the random variable p̂A can be approximated as a Gaussian random variable via the central limit theorem. The mean and variance are E[p̂A] = pA and σ²p̂A = pA(1 − pA)/n, respectively. One can then set up a confidence interval based on the desired accuracy. For example, suppose we wish to obtain an estimate that is within 1% of the true value with 90% probability. That is, we want to run enough trials to ensure that

(12.17)  Pr(|p̂A − pA| < 0.01 pA) = 0.9 = 1 − α.

From the results of Chapter 7, Section 7.5, we get

(12.18)  ε0.1 = 0.01 pA = (σX/√n) c0.1 = √(pA(1 − pA)/n) c0.1,

where the value of c0.1 is taken from Table 7.1 as c0.1 = 1.64. Solving for n gives us an answer for how long to run the simulation:

(12.19)  n = (100 c0.1)² (1 − pA)/pA = (164)² (1 − pA)/pA ≈ (164)²/pA.

Or in general, if we want the estimate, p̂A, to be within β percent of the true value (i.e., |p̂A − pA| < β pA/100) with probability α, then the number of trials in the simulation should be chosen according to

(12.20)  n = (100 cα/β)² (1 − pA)/pA ≈ (100 cα/β)²/pA.

This result is somewhat unfortunate because in order to know how long to run the simulation, we have to know the value of the probability we are trying to estimate in the first place. In practice, we may have a crude idea of what to expect for pA which we could then use to guide us in selecting the number of trials in the simulation. However, we can use this result to give us very specific guidance in how to choose the number of trials to run, even when pA is completely unknown to us. Define the random variable NA to be the number of occurrences of the event A in n trials, that is,

(12.21)  NA = ∑_{i=1}^{n} Xi.

Note that E[NA] = npA. That is, the quantity npA can be interpreted as the expected number of occurrences of the event A in n trials. Multiplying both sides of Equation (12.20) by pA then produces

(12.22)  E[NA] = (100 cα/β)² (1 − pA) ≈ (100 cα/β)².

Hence, one possible procedure to determine how many trials to run is to repeat the experiment for a random number of trials until the event A occurs some fixed number of times as specified by Equation (12.22). Let Mk be the random variable which represents the trial number of the kth occurrence of the event A. Then, one could form an estimate of pA according to

(12.23)  p̂A = k/Mk.

It turns out that this produces a biased estimate; however, a slightly modified form,

(12.24)  p̂A = (k − 1)/(Mk − 1),

produces an unbiased estimate (see Exercise 7.12).
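This stopping rule can be sketched in a few lines of Python (function name, seed, and parameters are ours). Using a rare event whose probability we know lets us check that the estimate lands near the truth:

```python
import random

def estimate_until_k_hits(is_hit, k):
    """Repeat independent trials until the event occurs k times; return the
    unbiased estimate (k - 1) / (M_k - 1) of Equation (12.24), where M_k is
    the trial number of the k-th occurrence."""
    hits = 0
    trials = 0
    while hits < k:
        trials += 1
        if is_hit():
            hits += 1
    return (k - 1) / (trials - 1)

rng = random.Random(12345)
p_true = 0.01                    # a known "rare" event probability
estimate = estimate_until_k_hits(lambda: rng.random() < p_true, k=2000)
print(round(estimate, 3))        # should be close to 0.01
```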

Example 12.7

Suppose we wish to estimate the probability of an event that we expect to be roughly on the order of p ~ 10⁻⁴. Assuming we want 1% accuracy with a 90% confidence level, the number of trials needed will be

n = (1/p)(100 cα/β)² = 10⁴ (100 × 1.64/1)² = 268,960,000.

Alternatively, we need to repeat the simulation experiment until we observe the event

NA = (100 cα/β)² = 26,896

times. Assuming we do not have enough time available to repeat our simulation over 1/4 of a billion times, we would have to accept less accuracy. Suppose that due to time limitations we decide that we can only repeat our experiment 1 million times, then we can be sure that with 90% confidence, the estimate will be within the interval (p − ɛ, p + ɛ), if ɛ is chosen according to

ε = √(p(1 − p)/n) cα ≈ √(p/n) cα = 1.64 × 10⁻²/10³ = 1.64 × 10⁻⁵ = 0.164p.

With 1 million trials we can only be 90% sure that the estimate is within 16.4% of the true value.

The preceding example demonstrates that using the Monte Carlo approach to simulating rare events can be very time consuming in that we may need to repeat our simulation experiments many times to get a high degree of accuracy. If our simulations are complicated, this may put a serious strain on our computational resources. The next subsection presents a novel technique, which when applied intelligently can substantially reduce the number of trials we may need to run.

When the probability of one event is affected by another event, these events are?

Dependent events in probability are events where the occurrence of one affects the probability of occurrence of the other.

When the outcome of one event affects the outcome of another event?

When the first outcome affects the second outcome, the events are called dependent. Dependent events: Two events are dependent when the outcome of the first event influences the outcome of the second event.

What is dependent events in probability?

Two events are dependent if the outcome of the first event affects the outcome of the second event, so that the probability is changed.

What are independent and dependent events in probability?

Independent Events: the probability of one event DOES NOT affect the probability of a 2nd event. P(A and B) = P(A) P(B). Dependent Events: the probability of one event DOES affect the probability of a 2nd event.
