**Terminology in Probability and Statistics**

**Probability and statistics** are concerned with events which occur by chance. Examples include the occurrence of accidents and various games of chance, such as flipping a coin or throwing a symmetrical six-sided die. In each case we may have some knowledge of the likelihood of various possible results, but we cannot predict with any certainty the outcome of any particular trial. Some common terms are:


**Probability** is an area of study which involves predicting the relative likelihood of various outcomes.

**Descriptive statistics** is the branch in which items are counted or measured and the results are combined in various ways to give useful summaries.

**Chance** is a necessary part of any process to be described by probability or statistics. Sometimes that element of chance is due partly, or perhaps entirely, to our lack of knowledge of the details of the process.

**A random sample** is one in which every member of the population has an equal likelihood of being selected.


**Basic Rules of Combining Probabilities**

The basic rules or laws of combining probabilities must be consistent with the fundamental concepts.

**Addition Rule**

This can be divided into two parts, depending upon whether there is overlap between the events being combined.

**(a).** If the events are mutually exclusive, there is no overlap: if one event occurs, other events cannot occur. In that case the probability of occurrence of one or another of more than one event is the sum of the probabilities of the separate events.

**For example,** if I throw a fair six-sided die, the probability of any one face coming up is the same as the probability of any other face, or one-sixth. There is no overlap among these six possibilities. Then Pr [6] = 1/6 and Pr [4] = 1/6, so Pr [6 or 4] = 1/6 + 1/6 = 1/3. The Addition Rule corresponds to a logical **or** and gives a **sum** of separate probabilities.
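The die example above can be checked numerically by enumerating the sample space (a minimal sketch; the classical "favourable outcomes over total outcomes" ratio is computed with exact fractions):

```python
from fractions import Fraction

# Sample space for one throw of a fair six-sided die
sample_space = [1, 2, 3, 4, 5, 6]

def pr(event):
    """Classical probability: favourable outcomes / total outcomes."""
    favourable = sum(1 for x in sample_space if x in event)
    return Fraction(favourable, len(sample_space))

pr({6})     # Fraction(1, 6)
pr({4})     # Fraction(1, 6)
pr({6, 4})  # mutually exclusive, so 1/6 + 1/6 = Fraction(1, 3)
```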

**(b)**. If the events are not mutually exclusive, there can be overlap between them. This can be visualized using a Venn diagram. **The probability** of overlap must be subtracted from the sum of probabilities of the separate events.

In such a diagram, the **circle marked A** represents the probability (or frequency) of event A, the circle marked B represents the probability (or frequency) of event B, and the whole rectangle represents all possibilities, so a probability of one or the total frequency. The set consisting of all possible outcomes of a particular experiment is called the sample space of that experiment. Thus, the rectangle on the Venn diagram corresponds to the sample space. An event, such as A or B, is any subset of a sample space.

Set notation is useful:

**Pr [A ∪ B]** = Pr [occurrence of A **or** B **or** both], the union of the two events A and B.

**Pr [A ∩ B]** = Pr [occurrence of both A **and** B], the intersection of events A and B.

If the events being considered are not mutually exclusive, so that there may be overlap between them, the Addition Rule becomes:

**Pr [A ∪ B] = Pr [A] + Pr [B] – Pr [A ∩ B]**

In words, the probability of A or B or both is the sum of the probabilities of A and of B, less the probability of the overlap between A and B. The overlap is the intersection between A and B.
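The general Addition Rule can be illustrated with a hypothetical example (not from the original text): drawing one card from a standard deck, where A = "heart" and B = "king" overlap in exactly one card, the king of hearts:

```python
from fractions import Fraction

# Standard 52-card deck: ranks 1..13, suits Spades/Hearts/Diamonds/Clubs
deck = [(rank, suit) for rank in range(1, 14) for suit in "SHDC"]

A = {card for card in deck if card[1] == "H"}   # 13 hearts
B = {card for card in deck if card[0] == 13}    # 4 kings
n = len(deck)

pr_union = Fraction(len(A | B), n)                                  # direct count of A or B or both
pr_by_rule = Fraction(len(A), n) + Fraction(len(B), n) - Fraction(len(A & B), n)

# 13/52 + 4/52 - 1/52 = 16/52 = 4/13
```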

**Multiplication Rule**

**(a).** The basic idea for calculating the number of choices can be described as follows: Say there are n1 possible results from one operation. For each one of these, there are n2 possible results from a second operation. Then there are (n1 × n2) possible outcomes of the two operations together. In general, the numbers of possible results are given by products of the number of choices at each step. Probabilities can be found by taking ratios of possible results.

**The simplest form of the Multiplication Rule** for probabilities is as follows: If the events are independent, then the occurrence of one event does not affect the probability of occurrence of another event. In that case the probability of occurrence of more than one event together is the product of the probabilities of the separate events.

If A and B are two separate events that are independent of one another, the probability of occurrence of both A and B together is given by:

**Pr [A ∩ B] = Pr [A] × Pr [B]**
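As a quick check of the independence rule, consider two independent throws of a fair die (an illustrative example, enumerating all 36 equally likely pairs):

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of two independent die throws
pairs = list(product(range(1, 7), repeat=2))

# Direct count of "six on the first throw AND six on the second"
pr_both_six = Fraction(sum(1 for a, b in pairs if a == 6 and b == 6), len(pairs))

# Matches the product of the separate probabilities: 1/6 × 1/6 = 1/36
```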

**(b)**. If the events are not independent, one event affects the probability for the other event. In this case conditional probability must be used. The conditional probability of B given that A occurs, or on condition that A occurs, is written **Pr [B | A]**. This is read as the probability of B given A, or the probability of B on condition that A occurs.

Conditional probability can be found by considering only those events which meet the condition, which in this case is that A occurs. Among these events, the probability that B occurs is given by the conditional probability, Pr [B | A]. In the reduced sample space consisting of outcomes for which A occurs, the probability of event B is Pr [B | A].

The **multiplication rule** for the occurrence of both A and B together when they are not independent is the product of the probability of one event and the conditional probability of the other:

**Pr [A ∩ B] = Pr [A] × Pr [B | A] = Pr [B] × Pr [A | B]**

This implies that conditional probability can be obtained by

**Pr [B | A] = Pr [A ∩ B] / Pr [A]**

or

**Pr [A | B] = Pr [A ∩ B] / Pr [B]**
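A small die-based illustration of conditional probability (events chosen for this sketch, not taken from the original): with A = "even" and B = "greater than 3", the reduced sample space {2, 4, 6} contains two outcomes of B, so Pr [B | A] = 2/3:

```python
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}   # one throw of a fair die
A = {2, 4, 6}                        # even
B = {4, 5, 6}                        # greater than 3

def pr(event):
    return Fraction(len(event & sample_space), len(sample_space))

# Pr[B | A] = Pr[A and B] / Pr[A] = (1/3) / (1/2) = 2/3
pr_B_given_A = pr(A & B) / pr(A)
```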

**Permutations and Combinations**

Permutations and combinations give us quick, algebraic methods of counting. They are used in probability problems for two purposes: to count the number of equally likely possible results for the classical approach to probability, and to count the number of different arrangements of the same items to give a multiplying factor.

**(a)**. Each separate arrangement of all or part of a set of items is called a permutation. The number of permutations is the number of different arrangements in which items can be placed. Notice that if the order of the items is changed, the arrangement is different, so we have a different permutation.

Suppose we have a total of n items to be arranged, and we can choose r of those items at a time, where r ≤ n. The number of permutations of n items chosen r at a time is written nPr. For permutations we consider both the identity of the items and their order.

**nPr = n! / (n–r)! = [n(n–1)(n–2)…(2)(1)] / [(n–r)(n–r–1)…(2)(1)]**

By definition, 0! = 1. Then the number of choices of n items taken n at a time is nPn = n!.
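Python's standard library provides these counts directly via `math.perm` and `math.factorial`, which lets us verify the formula for a small case (n = 5, r = 3 chosen arbitrarily here):

```python
import math

n, r = 5, 3

# nPr = n! / (n - r)!  -> 5!/2! = 120/2 = 60 ordered arrangements
assert math.perm(n, r) == math.factorial(n) // math.factorial(n - r) == 60

# Taking all n items at a time: nPn = n! (using 0! = 1)
assert math.perm(n, n) == math.factorial(n) == 120
```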

**(b).** The calculation of permutations is modified if some of the items cannot be distinguished from one another. We speak of this as calculation of the number of permutations into classes.

If n items are all different, the number of permutations taken n at a time is n!. However, if some of them are indistinguishable from one another, the number of possible permutations is reduced. If n1 items are the same, and the remaining (n–n1) items are the same of a different class, the number of permutations can be shown to be n! / [n1! (n–n1)!].

The numerator, n!, would be the number of permutations of n distinguishable items taken n at a time. But n1 of these items are indistinguishable, which reduces the number of permutations by a factor of 1/n1!, and the other (n–n1) items are indistinguishable from one another, which reduces it by a further factor of 1/(n–n1)!.

If we have a total of n items, of which n1 are the same of one class, n2 are the same of a second class, and n3 are the same of a third class, **such that n1 + n2 + n3 = n,** the number of permutations is n! / (n1! n2! n3!). This could be extended to further classes.
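A classic illustration of permutations into classes (the word is my choice, not from the original) is counting the distinct arrangements of the letters of "MISSISSIPPI": 11 letters with 1 M, 4 I's, 4 S's, and 2 P's give 11! / (1! 4! 4! 2!) = 34,650:

```python
import math
from collections import Counter

word = "MISSISSIPPI"
counts = Counter(word)              # {'I': 4, 'S': 4, 'P': 2, 'M': 1}

# n! divided by the factorial of each class size
n_perms = math.factorial(len(word))
for c in counts.values():
    n_perms //= math.factorial(c)

# 39916800 / (24 * 24 * 2) = 34650 distinct arrangements
```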

**(c).** Combinations are similar to permutations, but with the important difference that combinations take no account of order. Thus, AB and BA are different permutations but the same combination of letters. Then the number of permutations must be larger than the number of combinations, and the ratio between them must be the number of ways the chosen items can be arranged.

**nCr = nPr / r! = n! / [(n–r)! r!]**

nCr gives the number of equally likely ways of choosing r items from a group of n distinguishable items. That can be used with the classical approach to probability.
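The combination count and its use in classical probability can be sketched with `math.comb` (the two-ace hand below is an illustrative example, not from the original text):

```python
import math
from fractions import Fraction

# nCr = nPr / r!  -> choosing 3 of 5 items when order does not matter
assert math.comb(5, 3) == math.perm(5, 3) // math.factorial(3) == 10

# Classical probability via combinations: probability that a 2-card
# hand dealt from a 52-card deck is a pair of aces
pr_two_aces = Fraction(math.comb(4, 2), math.comb(52, 2))   # 6/1326 = 1/221
```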
