Introduction to Data Science: A Comp-Math-Stat Approach

1MS041, 2021

©2021 Raazesh Sainudiin, Benny Avelin. Attribution 4.0 International (CC BY 4.0)

04. Conditional Probability, Random Variables, Loops and Conditionals

Topics:

What have we done?

Where are we going?

Probability

Recap on probability

An experiment is an activity or procedure that produces distinct or well-defined outcomes. The set of such outcomes is called the sample space of the experiment. The sample space is usually denoted with the symbol $\Omega$.

An event is a subset of the sample space.

Probability is a function that associates each event in a set of events (denoted by $\{ \text{ events }\}$) with a real number in the range 0 to 1 (denoted by $[0,1]$):

$$P : \{ \text{ events } \} \rightarrow [0,1]$$

while satisfying the following axioms:

  1. For any event $A$, $0 \le P(A) \le 1$.
  2. $P(\Omega) = 1$.
  3. If $A_1, A_2, A_3, \ldots$ are pairwise disjoint events (i.e., $A_i \cap A_j = \emptyset$ whenever $i \neq j$), then $P\left( \bigcup_{i=1}^\infty A_i \right) = \sum_{i=1}^\infty P(A_i)$.

Property 1

$P(A) = 1 - P(A^c)$, where $A^c = \Omega \setminus A$

Property 2

For any two events $A$, $B$,

$P(A \cup B) = P(A) + P(B) - P(A \cap B)$

The idea in Property 2 generalises to the inclusion-exclusion formula as follows:

Let $A_1, A_2, \ldots, A_n$ be any $n$ events. Then,

$$ \begin{array}{lcl} P\left(\bigcup_{i=1}^n A_i \right) &=& \sum_{i=1}^nP(A_i) \, - \, \sum_{i<j}P(A_i \cap A_j) \\ &\,& \quad + \, \sum_{i<j<k}P(A_i \cap A_j \cap A_k) + \ldots + (-1)^{n+1}P(A_1 \cap A_2 \cap \ldots \cap A_n) \end{array} $$

In words, we take all the possible intersections of one, two, three, $\ldots$, $n$ events and let the signs alternate so that outcomes lying in several of the $A_i$ are not counted more than once.
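As a sanity check, we can verify Property 2 (the $n=2$ case of inclusion-exclusion) by brute-force enumeration over the 36 equally likely outcomes of two fair dice. The events $A$ and $B$ below are chosen purely for illustration:

```python
from fractions import Fraction

# Sample space: all 36 ordered pairs from rolling two dice
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def prob(event):
    """Probability of an event (a set of outcomes) under the uniform measure."""
    return Fraction(len(event), len(omega))

A = {w for w in omega if w[0] + w[1] == 6}   # sum of the dice is 6
B = {w for w in omega if w[0] == 2}          # first die shows 2

# Property 2: P(A u B) = P(A) + P(B) - P(A n B)
lhs = prob(A | B)
rhs = prob(A) + prob(B) - prob(A & B)
print(lhs == rhs)  # True
```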

Question

Does the inclusion-exclusion formula agree with the extended Axiom 3: If $A_1, A_2, \ldots, A_n$ are pair-wise disjoint events then $P\left( \bigcup_{i=1}^n A_i \right) = \sum_{i=1}^nP(A_i)$?

The domain of the probability function

What exactly is stipulated by the axioms about the domain of the probability function?

The domain should be a sigma-field ($\sigma$-field) or sigma-algebra ($\sigma$-algebra), denoted $\sigma(\Omega)$ or $\mathcal{F}$, i.e., a collection of subsets of $\Omega$ such that:

  1. $\Omega \in \mathcal{F}$,
  2. if $A \in \mathcal{F}$ then $A^c \in \mathcal{F}$ (closed under complementation), and
  3. if $A_1, A_2, \ldots \in \mathcal{F}$ then $\bigcup_{i} A_i \in \mathcal{F}$ (closed under countable union).

We will not use the full machinery of sigma-algebras in this course, as it requires more formal training in mathematics, but it is important to know what's under the hood of the domain of our probability function, in case we need to dive deeper into sigma-algebras for more complicated probabilistic models of our data.

Probability Space

Thus the domain of the probability is not just any old set of events (recall events are subsets of $\Omega$), but rather a set of events that form a $\sigma$-field that contains the sample space $\Omega$, is closed under complementation and countable union.

$\left(\Omega, \mathcal{F}(\Omega), P\right)$ is called a probability space or probability triple.

Example

Let $\Omega = \{H, T\}$. What $\sigma$-fields could we have?

$\mathcal{F}\left({\Omega}\right) = \{\{H, T\}, \emptyset, \{H\}, \{T\}\}$ is the finest $\sigma$-field.

$\mathcal{F}'\left({\Omega}\right) = \{ \{H, T\}, \emptyset \}$ is a trivial $\sigma$-field.

Example

Let $\Omega = \{\omega_1, \ldots, \omega_n\}$.

$\mathcal{F}\left({\Omega}\right) = 2^\Omega$, the set of all subsets of $\Omega$, also known as the power set of $\Omega$.

$\vert 2^\Omega \vert = 2^n$.

We have finally defined probability space as quickly as possible from first principles. As your mathematical background matures you can dive deeper into more subtle aspects of probability space, in its generality, as needed.

These examples are great for becoming more familiar with probability spaces.

Independence

Two events $A$ and $B$ are independent if $P(A \cap B) = P(A)P(B)$.

Intuitively, $A$ and $B$ are independent if the occurrence of $A$ has no influence on the probability of the occurrence of $B$ (and vice versa).

Example

Flip a fair coin twice. Event $A$ is the event "The first flip is 'H'"; event $B$ is the event "The second flip is 'H'".

$P(A) = \frac{1}{2}$, $P(B) = \frac{1}{2}$

Because the flips are independent (what happens on the first flip does not influence the second flip),

$P(A \cap B) = \frac{1}{2} \times \frac{1}{2} = \frac{1}{4}$.

Example

We can generalise this by saying that we will flip a coin with an unknown probability parameter $\theta \in [0,1]$. We flip this coin twice and the coin is made so that for any flip, $P(\mbox{'H'}) = \theta$, $P(\mbox{'T'}) = 1-\theta$.

Take the same events as before: event $A$ is the event "The first flip is 'H'"; event $B$ is the event "The second flip is 'H'".

Because the flips are independent,

$P(A \cap B) = \theta \times \theta = \theta^2$.

If we take event $C$ as the event "The second flip is 'T'", then

$P(A \cap C) = \theta \times (1-\theta)$.

Example

Roll a fair die twice. The faces of the die are enumerated 1, 2, 3, 4, 5, 6.

Event $A$ is the event "The first roll is 5"; event $B$ is the event "The second roll is 1".

$P(A) = \frac{1}{6}$, $P(B) = \frac{1}{6}$

If the two rolls are independent,

$P(A \cap B) = \frac{1}{6} \times \frac{1}{6} = \frac{1}{36}$

You try at home

For those who are rusty on probability models.

Suppose you roll two fair dice independently. What is the probability of getting the sum of the outcomes to be seven?

Solution: Watch the Khan Academy movie about probability and two dice.

Conditional probability

Suppose that we are told that the event $A$ with $P(A) > 0$ occurs and we are interested in whether another event $B$ will now occur. The sample space has shrunk from $\Omega$ to $A$. The probability that event $B$ will occur given that $A$ has already occurred is defined by

$$P(B|A) = \frac{P(B \cap A)}{P(A)}$$

[Figure: ConditionalProbabilityProgression — the sample space shrinking from $\Omega$ to $A$ when we condition on $A$]

We can understand this by noting that:

  1. Only the outcomes in $B$ that also belong to $A$ can possibly now occur, and
  2. since the new sample space is $A$, we have to divide by $P(A)$ to make $$P(A|A) = \frac{P(A \cap A)}{P(A)} = \frac{P(A)}{P(A)} = 1$$

If the two events $A$ and $B$ are independent then

$$P(B|A) = \frac{P(B \cap A)}{P(A)} = \frac{P(B)P(A)}{P(A)} = P(B)$$

which makes sense - we said that if two events are independent, then the occurrence of $A$ has no influence on the probability of the occurrence of $B$.

Example

Roll two fair dice.

Event $A$ is the event "The sum of the dice is 6"; event $B$ is the event "The first die is 2".

How many ways can we get a 6 out of two dice?

[Figure: TwoDiceOutcomes — the 36 ordered-pair outcomes of rolling two dice]

$A = \{(1,5), (2,4), (3,3), (4,2), (5,1)\}$

$P(A) = \frac{1}{36} + \frac{1}{36} + \frac{1}{36} + \frac{1}{36} + \frac{1}{36} = \frac{5}{36}$

[Figure: TwoDiceOutcomesB — the outcomes in event $B$ (first die is 2)]

$B = \{(2,1), (2,2), (2,3), (2,4), (2,5), (2,6)\}$

[Figure: TwoDiceOutcomesAndB — the outcomes in $B \cap A$]

$B \cap A = \{ (2,4)\}$

$P(B \cap A) = \frac{1}{36}$

$P(B|A) = \frac{P(B \cap A)}{P(A)} = \frac{\frac{1}{36}}{\frac{5}{36}} = \frac{1}{5}$

Look at this result in terms of what we said about the sample space shrinking to $A$.
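The same shrinking-sample-space computation can be sketched in code by enumerating outcomes:

```python
from fractions import Fraction

# all 36 equally likely ordered pairs from two dice
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]

A = [w for w in omega if w[0] + w[1] == 6]   # sum of the dice is 6
B_and_A = [w for w in A if w[0] == 2]        # ...and the first die is 2

# P(B|A): count within the shrunken sample space A
p_B_given_A = Fraction(len(B_and_A), len(A))
print(p_B_given_A)  # 1/5
```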

[Figure: TwoDiceOutcomesBgivenA — event $B$ within the shrunken sample space $A$]

Bayes Theorem

We just saw that $P(B|A)$, the conditional probability that event $B$ will occur given that $A$ has already occurred is defined by $P(B \cap A)/P(A)$. By using the fact that $B \cap A = A \cap B$ and reapplying the definition of conditional probability to $P(A|B)=P(A \cap B)/P(B)$, we get the so-called Bayes theorem.

$$\boxed{P(B|A) = \frac{P(B \cap A)}{P(A)} = \frac{P(A \cap B)}{P(A)} = \frac{P(A|B) P(B)}{P(A)}}$$
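We can numerically confirm the boxed identity on the two-dice events from the previous example, where every probability can be computed by counting (a brute-force check, not a proof):

```python
from fractions import Fraction

omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]
A = {w for w in omega if w[0] + w[1] == 6}   # sum of the dice is 6
B = {w for w in omega if w[0] == 2}          # first die is 2

P = lambda E: Fraction(len(E), len(omega))

lhs = P(B & A) / P(A)                    # P(B|A) from the definition
rhs = (P(A & B) / P(B)) * P(B) / P(A)    # P(A|B) P(B) / P(A), Bayes theorem
print(lhs, lhs == rhs)
```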

You try at home

Suppose we have a bag of ten coins. Nine of the ten coins are fair but one of the coins has heads on both sides. What is the probability of getting five heads in a row if I picked a coin at random from the bag and flipped it five times? If I obtained five heads in a row by choosing a coin out of the bag at random and flipping it five times, then what is the probability that I have picked the two-headed coin?

Solution: Watch the Khan Academy movies about applications of conditional probability and Bayes theorem to this bag of 10 coins.

A foretaste of simulation

The next cell uses a function called randint which we will talk about more later in the course. For this week we'll just use randint as a computerised way of rolling a die: every time we call randint(1,6) we will get some integer number from 1 to 6, we won't be able to predict in advance what we will get, and the probability of each of the numbers 1, 2, 3, 4, 5, 6 is equal. Here we use randint to simulate the experiment of tossing two dice. The sample space $\Omega$ is all 36 possible ordered pairs $(1,1), \ldots (6,6)$. We print out the results for each die. Try evaluating the cell several times and notice how the numbers you get differ each time.
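The stripped code cell presumably looked something like the following. In Sage, randint is available directly; in pure Python we import it from the random module:

```python
from random import randint  # in Sage, randint is built in

# simulate tossing two dice: one outcome (an ordered pair) from the 36 in Omega
die1 = randint(1, 6)  # randint(1, 6) returns an integer from 1 to 6 inclusive
die2 = randint(1, 6)
print("First die:", die1)
print("Second die:", die2)
```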

Random Variables

A random variable is a mapping from the sample space $\Omega$ to the set of real numbers $\mathbb{R}$. In other words, it is a numerical value determined by the outcome of the experiment. (Actually, this is a real-valued random variable and one can have random variables taking values in other sets).

This is not as complicated as it sounds: let's look at a simple example:

Example

Roll two fair dice.

The sample space is the set of 36 ordered pairs $\Omega = \{(1,1), (1,2), \ldots, (1,6), (2,1), (2,2), \ldots, (6,6)\}$

Let random variable $X$ be the sum of the two numbers that appear, $X : \Omega \rightarrow \mathbb{R}$.

For example, $X\left(\{(6,6)\}\right) = 12$

$P(X=12) = P\left(\{(6,6)\}\right)$

And, $X\left( \{ (3,2) \}\right) = 5$

Formal definition of a random variable

Let $\left(\Omega, \mathcal{F}, P \right)$ be some probability triple. Then a random variable, say $X$, is a function from the sample space $\Omega$ to the set of real numbers $\mathbb{R}$

$$X: \Omega \rightarrow \mathbb{R}$$

such that for every number $x$ in $\mathbb{R}$, the inverse image of the half-open interval $(-\infty, x]$ is an element of the collection of events $\mathcal{F}$, i.e.,

for every number $x$ in $\mathbb{R}$, $$X^{[-1]}\left( (-\infty, x] \right) := \{\omega: X(\omega) \le x\} \in \mathcal{F}$$

Discrete random variable

A random variable is said to be discrete when it can take at most countably many values (any finite set is countable). The three examples below are discrete random variables.

Probability of a random variable

Finally, we assign probability to a random variable $X$ as follows:

$$P(X \le x) = P \left( X^{[-1]}\left( (-\infty, x] \right) \right) := P\left( \{ \omega: X(\omega) \le x \} \right)$$

Distribution Function

The distribution function (DF) or cumulative distribution function (CDF) of any RV $X$, denoted by $F$ is:

$$F(x) := P(X \leq x) = P\left( \{ \omega: X(\omega) \leq x \} \right) \mbox{, for any } x \in \mathbb{R}$$

Example - Sum of Two Dice

In our example above (tossing two dice and taking $X$ as the sum of the numbers shown) we said that $X\left((3,2)\right) = 5$, but $(3,2)$ is not the only outcome that $X$ maps to 5: $X^{[-1]}\left(5\right) = \{(1,4), (2,3), (3,2), (4,1)\}$

$$ \begin{array}{lcl} P(X=5) & = & P\left(\{\omega: X(\omega) = 5\}\right)\\ & = & P\left(X^{[-1]}\left(5\right)\right)\\ & = & P(\{(1,4), (2,3), (3,2), (4,1)\}) \end{array} $$
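Enumerating the inverse image in code (a sketch):

```python
from fractions import Fraction

# sample space for two dice
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]

# X^[-1](5): all outcomes whose coordinates sum to 5
inverse_image_5 = [w for w in omega if sum(w) == 5]
print(inverse_image_5)                     # [(1, 4), (2, 3), (3, 2), (4, 1)]
print(Fraction(len(inverse_image_5), 36))  # P(X = 5) = 1/9
```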

Example - Pick a Fruit at Random

Remember our "well-mixed" fruit bowl containing 3 apples, 2 oranges, 1 lemon? If our experiment is to take a piece of fruit from the bowl and the outcome is the kind of fruit we take, then we saw that $\Omega = \{\mbox{apple}, \mbox{orange}, \mbox{lemon}\}$.

Define a random variable $Y$ to give each kind of fruit a numerical value: $Y(\mbox{apple}) = 1$, $Y(\mbox{orange}) = 0$, $Y(\mbox{lemon}) = 0$.

Example - Flip Until Heads

Flip a fair coin until a 'H' appears. Let $X$ be the number of 'T's we see before the first 'H'.

$\Omega = \{\mbox{H}, \mbox{TH}, \mbox{TTH}, \ldots, \mbox{TTTTTTTTTH}, \ldots \}$

$X(\mbox{H}) = 0$, $X(\mbox{TH}) = 1$, $X(\mbox{TTH}) = 2$, $\ldots$

You try at home

Consider the example above of 'Pick a Fruit at Random'. We defined a random variable $Y$ there as $Y(\mbox{apple}) = 1$, $Y(\mbox{orange}) = 0$, $Y(\mbox{lemon}) = 0$. Using step by step arguments as done in the example of 'Sum of Two Dice' above, find the following probabilities for our random variable $Y$: $$ \begin{array}{lcl} P(Y=0) & = & P\left(\{\omega: Y(\omega) = \quad \}\right)\\ & = & P\left(Y^{[-1]} \left( \quad \right)\right)\\ &= & P(\{\quad , \quad \}) \end{array} $$

Watch the Khan Academy movie about random variables

When we introduced the subject of probability, we said that many famous people had become interested in it from the point of view of gambling. Games of dice are one of the earliest forms of gambling (probably deriving from an even earlier game which involved throwing animal 'ankle' bones or astragali). Galileo was one of those who wrote about dice, including an important piece which explained what many experienced gamblers had sensed but had not been able to formalise - the difference between the probability of throwing a 9 and the probability of throwing a 10 with two dice. You should be able to see why this is from our map above. If you are interested you can read a translation (Galileo wrote in Latin) of Galileo's Sopra le Scoperte dei Dadi. This is also printed in a nice book, Games, Gods and Gambling by F.N. David (originally published 1962, newer editions now available).

Implementing a Random Variable

We have made our own random variable map object in Sage called RV. As with the Sage probability maps we looked at last week, it is based on a map or dictionary. We specify the samplespace and probabilities, and the random variable map, MapX, from the samplespace to the values taken by the random variable $X$.

Example 1: fruit bowl experiments

We are going to use our class 'RV' above and the fruit bowl example, and the random variable $X$ to give each kind of fruit a numerical value: $X(\mbox{apple}) = 1$, $X(\mbox{orange}) = X(\mbox{lemon}) = 0$. This is a discrete random variable because it can only take a finite number of discrete values (in this case, either 1 or 0).

(You don't have to worry about how RV works: it is our 'home-made' class for you to try out.)

To make an RV, we can specify the lists for the sample space, the probabilities, and the random variable values associated with each outcome. Since there are three different lists here, we can make things clearer by actually saying what each list is. The RV we create in the cell below is going to be called X.

We can get probabilities using the syntax X.P(x) to find $P(X=x)$.
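The code for the course's RV class is not shown here, but a minimal sketch of what such a class could look like follows. The names samplespace, probabilities, MapX and the method P follow the description above; the actual course implementation may differ:

```python
from fractions import Fraction

class RV:
    """A finite random variable: a map from outcomes to real values,
    together with a probability for each outcome."""
    def __init__(self, samplespace, probabilities, values):
        self.probs = dict(zip(samplespace, probabilities))
        self.MapX = dict(zip(samplespace, values))  # the map omega -> X(omega)

    def P(self, x):
        """P(X = x): sum the probabilities of all outcomes mapped to x."""
        return sum(p for w, p in self.probs.items() if self.MapX[w] == x)

# the fruit bowl example: 3 apples, 2 oranges, 1 lemon
X = RV(samplespace=['apple', 'orange', 'lemon'],
       probabilities=[Fraction(3, 6), Fraction(2, 6), Fraction(1, 6)],
       values=[1, 0, 0])
print(X.P(0))  # P(X = 0) = 2/6 + 1/6 = 1/2
```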

You have seen that different random variables can be defined on the same probability space (i.e., the same sample space and associated probability map), depending on how the outcomes are mapped to the real values taken by the random variable. Usually there is some good experimental or mathematical reason for the particular random variable (i.e., outcome-to-value mapping) that we use. In the experiment we just did, we could have been an experimenter particularly interested in citrus fruit but not concerned with what particular kind of citrus it is.

On the other hand, what if we want to differentiate between each fruit in the sample space? Then we could give each fruit-outcome a different value.

Example 2: Coin toss experiments

An experiment that is used a lot in examples is the coin toss experiment. If we toss a fair coin once, the outcome is either head (denoted here as H) or tail (T), so the sample space is $\{H, T\}$. The probability of a head is a half and the probability of a tail is a half.

We can do the probability map for this as one of our ProbyMap objects.

Let's have a random variable called oneHead that takes the value 1 if we get one head, 0 otherwise and simulate this with an RV object.

One toss is not very interesting.
What if we have a sample space that is the possible outcomes of two independent tosses of the same coin?

Tossing the coin twice and looking at the results of each toss in order is a different experiment to tossing the coin once.

We have a different set of possible outcomes, $\{HH, HT, TH, TT\}$. Note that the order matters: getting a head in the first toss and a tail in the second ($HT$) is a different event to getting a tail in the first toss and a head in the second ($TH$).

We can define a different random variable on this set of outcomes. Let's take an experimenter who is particularly interested in the number of heads we get in two tosses.

As you can see, two different events in the sample space, a head and then a tail ($HT$) and a tail and then a head ($TH$) both give this random variable a value of 1. The event $HH$ gives it a value 2 (2 heads) and the event $TT$ gives it a value 0 (no heads).

Now we can try the probabilities.
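Enumerating the four equally likely outcomes gives the probabilities directly (a sketch, assuming a fair coin):

```python
from fractions import Fraction

outcomes = ['HH', 'HT', 'TH', 'TT']             # equally likely for a fair coin
numHeads = {w: w.count('H') for w in outcomes}  # the random variable as a map

def P(x):
    """P(number of heads = x)."""
    return Fraction(sum(1 for w in outcomes if numHeads[w] == x), 4)

print(P(0), P(1), P(2))  # 1/4 1/2 1/4
```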

The indicator function

The indicator function of an event $A \in \mathcal{F}$, denoted $\mathbf{1}_A$, is defined as follows:

\begin{equation} \mathbf{1}_A(\omega) := \begin{cases} 1 & \qquad \text{if} \quad \omega \in A \\ 0 & \qquad \text{if} \quad \omega \notin A \end{cases} \end{equation}

The indicator function $\mathbf{1}_A$ is really an RV.

Example

"Will it rain tomorrow in the Southern Alps?" can be formulated as the RV given by the indicator function of the event "rain drops fall on the Southern Alps tomorrow". Can you imagine what the $\omega$'s in the sample space $\Omega$ can be?

Probability Mass Function

Recall that a discrete RV $X$ takes on at most countably many values in $\mathbb{R}$.

The probability mass function (PMF) $f$ of a discrete RV $X$ is :

$$f(x) := P(X=x) = P\left(\{\omega: X(\omega) = x \}\right)$$

Bernoulli random variable

The Bernoulli RV is a $\theta$-parameterised family of $\mathbf{1}_A$.

Take an event $A$. The parameter $\theta$ (pronounced 'theta') denotes the probability that "$A$ occurs", i.e., $P(A) = \theta$.

The indicator function $\mathbf{1}_A$ of "$A$ occurs" is the $Bernoulli(\theta)$ RV.

Given a parameter $\theta \in [0,1]$, the probability mass function (PMF) for the $Bernoulli(\theta)$ RV $X$ is:

\begin{equation} f(x;\theta)= \theta^x (1-\theta)^{1-x} \mathbf{1}_{\{0,1\}}(x) = \begin{cases} \theta & \text{if $x=1$,}\\ 1-\theta & \text{if $x=0$,}\\ 0 & \text{otherwise} \end{cases} \end{equation}

and its DF is:

\begin{equation} F(x;\theta) = \begin{cases} 1 & \text{if }1 \le x \text{,}\\ 1-\theta & \text{if } 0 \le x < 1\text{,}\\ 0 & \text{otherwise} \end{cases} \end{equation}

We emphasise the dependence of the probabilities on the parameter $\theta$ by specifying it following the semicolon in the argument for $f$ and $F$ and by subscripting the probabilities, i.e. $P_{\theta}(X=1)=\theta$ and $P_{\theta}(X=0)=1-\theta$.
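To check your hand-drawn plots, the PMF and DF can be coded directly from the case-by-case definitions above (a sketch; the value $\theta = 0.25$ is arbitrary):

```python
def bernoulli_pmf(x, theta):
    """f(x; theta) for the Bernoulli(theta) RV."""
    if x == 1:
        return theta
    if x == 0:
        return 1 - theta
    return 0          # zero everywhere outside {0, 1}

def bernoulli_df(x, theta):
    """F(x; theta) = P(X <= x) for the Bernoulli(theta) RV."""
    if x >= 1:
        return 1
    if x >= 0:
        return 1 - theta
    return 0          # no mass to the left of 0

print(bernoulli_pmf(1, 0.25), bernoulli_df(0.5, 0.25))  # 0.25 0.75
```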

Draw the PMF $f(x;\theta)$ for $Bernoulli(\theta)$ RV by hand now!











Draw the DF $F(x;\theta)$ for $Bernoulli(\theta)$ RV by hand now!











Control structures in python and relation to random variables

For loops

For loops are a very useful way that most computer programming languages provide to allow us to repeat the same action. In Sage, you can use a for loop on a list to just go through the elements in the list, doing something with each element in turn.

The SageMath/Python syntax for a for loop is:

  1. Start with the keyword for
  2. followed by a variable that will take each value in the list in turn,
  3. then the keyword in,
  4. then the list itself (or an expression that gives a list),
  5. then a colon :,
  6. and finally the indented body of the loop on the following line(s).

The cell below gives a very simple example using a list.
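The stripped cell presumably contained something as simple as this (the list itself is just an illustration):

```python
myList = ['apple', 'orange', 'lemon']
for item in myList:   # item takes each value in the list in turn
    print(item)
```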

If we wanted to do this for any list, we could write a simple function that takes any list we give it and puts it through a for loop, like the function below.
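A sketch of such a function (the name printListElements is made up for this illustration):

```python
def printListElements(anyList):
    """Print each element of anyList on its own line."""
    for element in anyList:
        print(element)

printListElements([1, 2, 3])
```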

Notice that we start indenting with 4 spaces when we write the function body. Then, when we have the for loop inside the function body, we just indent the body of the for loop again.

Sage needs the indentation to help it to know what is in a function, or a for loop, and what is outside, but indentation also helps us as programmers to be able to look at a piece of code and easily see what is going on.

Let's try our function on another list.

We have just programmed a basic for loop with a list. We can do much more than this with for loops, but this covers the essentials that we need to know. The important thing to remember is that whenever you have a list, you can easily write a for loop to go through each element in turn and do something with it.

You try

Example 3: For loops

Try first assigning the value 0 to a variable named mySum and then making yourself a list (you pick what it is called and what values it contains) and then looping through the list, adding each element in the list to mySum. When the loop has finished, the value of mySum will be the accumulated value of all the elements in the list.

What about defining a function to accumulate the values in a list? Remember to give your function a good name, and include the docstring to tell everyone what it does.

Try out your function!

A for loop can be used on more than just a list. For example, try a loop with the set S we make below - you can try making a loop to print the elements in the set one by one, as we did above, or to accumulate them, or do anything else you think is sensible ....

Loop over the set and do something.

You can even use a for loop on a string like "thisisastring", but this is not as useful for us as being able to use a for loop on a list or set.

We can use the range function we met last week to make a for loop that will do something a specified number of times. Remind yourself about the range function:

Now let's use the counter idea to do a specified number of rolls of the die that we can simulate with randint(1,6). Notice that here the actual value of the elements in the list is not being used directly: the list is being used like a counter to make sure that we do something a specified number of times.
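A sketch of such a counter-style loop (using random.randint in pure Python; in Sage, randint is available directly):

```python
from random import randint

numberOfRolls = 10
results = []
for i in range(numberOfRolls):     # i is just a counter; its value is unused
    results.append(randint(1, 6))  # simulate one roll of the die
print(results)
```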

Conditional statements

A conditional statement is also known as an if statement or if-else statement, and it is basically as simple as that: if [some condition] is true, then do [something]. To make it more powerful, we can also say else (i.e., if not), then do [a different something].

The if statement syntax is a way of putting into a programming language a way of thinking that we use every day: E.g. "if it is raining, I'll take the bus to uni; else I'll walk".

You'll notice that when we have an if statement, what we do depends on whether the condition we specify is true or false (i.e. is conditional on this expression). This is why if statements are called conditional statements.

When we say "depends on whether the condition we specify is true or false", in SageMath terms we are talking about whether the expression we specify as our condition evaluates to the Boolean True or the Boolean False. Remember those Boolean values, True and False, that we talked about in Lab 1?

The SageMath syntax for a conditional statement including if and else clauses is explained below:

  1. Start with the keyword if
  2. followed by the condition (an expression that evaluates to True or False),
  3. then a colon :,
  4. then the indented if-block on the following line(s),
  5. optionally followed by the keyword else and a colon,
  6. and the indented else-block on the following line(s).

Note that SageMath will execute either the code in the if-block or the code in the else-block. Which block gets executed depends on whether the condition evaluates to True or False.

Let's set up a nice simple conditional statement and see what happens. We want to take two variables, named x and y, and print something out only if x is greater than y. The condition is x > y, and you can see that this evaluates to either True or False, depending on the values of x and y.
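The cell might have looked like this (the values of x and y are chosen arbitrarily):

```python
x = 5
y = 3
if x > y:   # the condition evaluates to True or False
    print("x is greater than y")
```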

We can nest conditional statements so that one whole conditional statement is in the if-block (or else-block) of another one.

You try

Example 4: Conditional statements

The cell above only did something if the condition was true. Now let's try if and else. This is a more complicated example and you might want to come back to it to make sure you understand what is going on.

Try assigning different values to the variable myAge and see what happens when you evaluate the cell.

We could also define a function which uses if and else. Let us define a function called myMaxFunc which takes two variables x and y as parameters. Note how we indent once for the body of the function, and again when we want to indicate what is in the if-block and else-block.
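A sketch of myMaxFunc as described:

```python
def myMaxFunc(x, y):
    """Return the larger of x and y."""
    if x > y:
        return x
    else:
        return y

firstNumber = 7
secondNumber = 10
print(myMaxFunc(firstNumber, secondNumber))  # 10
```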

There is of course a perfectly good max() function in SageMath/Python which does the same thing - this is just a convenient example of the use of if and else.

Now we try our function out with some variables. Try using different values for firstNumber and secondNumber to test it.

For loops and conditionals

Finally, let's look at something to bring together for loops and conditional statements. We have seen how we can use a for loop to simulate throwing a die a specified number of times. We could add a conditional statement to count how many times we get a certain number on the die. Try altering the values of resultOfInterest and numberOfThrows and see what happens. Note that being able to find and alter values in your code easily is part of the benefits of using variable names.
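A sketch of the loop-plus-conditional described above (the variable names resultOfInterest and numberOfThrows come from the text; the specific values are arbitrary):

```python
from random import randint

resultOfInterest = 5
numberOfThrows = 100
count = 0
for i in range(numberOfThrows):
    throw = randint(1, 6)            # simulate one throw of the die
    if throw == resultOfInterest:    # conditional inside the loop body
        count += 1
print(count, "out of", numberOfThrows, "throws gave", resultOfInterest)
```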

To get even fancier, we could set up a map and count the occurrences of every number 1, 2, ..., 6. Make sure that you understand what we are doing here. We are using a dictionary with (key, value) pairs to associate a count with each number that could come up on the die (number on die, count of number on die). Notice that we do not have to use a conditional statement (if ...) because we can access the value by using the key that is a particular result of a roll of the die.
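A sketch of this dictionary-based tally (variable names are illustrative):

```python
from random import randint

numberOfThrows = 1000
countsDict = {i: 0 for i in range(1, 7)}  # (number on die, count of number)
for i in range(numberOfThrows):
    countsDict[randint(1, 6)] += 1        # key lookup replaces the if-chain
print(countsDict)
```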

Try altering the numberOfThrows and see what happens.

You Try!

Earlier, we looked at the probability of the event $B|A$ when we toss two dice and $A$ is the event that the sum of the two numbers on the dice is 6 and $B$ is the event that the first die is a 2. We worked out that $P(B|A) = \frac{1}{5}$. See if you can do another version of the code we have above to simulate throwing two dice a specified number of times, and use two nested conditional statements to accumulate a count of how many times the sum of the two numbers is 6 and how many out of those times the first die is a 2. When the loop has finished, you could print out or disclose the proportion $\frac{\text{number of times sum is 6 and first die is 2}}{\text{number of times sum is 6}}$

Example 5: More coin toss experiments

If you have time, try the three-coin-toss and four-coin-toss examples below. These continue from Example 2 above.

Another new experiment: the outcomes of three tosses of the coin.

The random variable we define is the number of heads we get in three tosses.

If you are not bored yet, try a four-tosses experiment where we define a random variable which takes value 1 when the outcome includes exactly two heads, and 0 otherwise.

List Comprehension

This is a very powerful feature of SageMath/Python. We can create or comprehend new lists from existing lists.

From https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions:

A list comprehension consists of brackets containing an expression followed by a for clause, then zero or more for or if clauses. The result will be a new list resulting from evaluating the expression in the context of the for and if clauses which follow it.

The following list comprehension combines the elements of two lists if they are not equal and creates a list of tuples.
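A sketch of such a comprehension (the two lists are hypothetical examples):

```python
first = [1, 2, 3]
second = [3, 1, 4]

# pair every x with every y, keeping only unequal pairs
pairs = [(x, y) for x in first for y in second if x != y]
print(pairs)  # [(1, 3), (1, 4), (2, 3), (2, 1), (2, 4), (3, 1), (3, 4)]
```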

The above list comprehension is equivalent to the following:
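The equivalent explicit nested loops (with the same hypothetical lists) look like this; note how the for and if clauses appear in the same order in both forms:

```python
first = [1, 2, 3]
second = [3, 1, 4]

pairs = []
for x in first:
    for y in second:
        if x != y:
            pairs.append((x, y))

# the loop builds exactly the same list as the comprehension
assert pairs == [(x, y) for x in first for y in second if x != y]
print(pairs)
```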

Conditional List Comprehension

We can filter what is being comprehended using Boolean expressions as follows:
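For example (myList and the even-number filter are illustrative choices):

```python
myList = [1, 2, 3, 4, 5, 6]

# keep only the even elements, then square each one
squaresOfEvens = [x**2 for x in myList if x % 2 == 0]
print(squaresOfEvens)  # [4, 16, 36]
```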

You Try!

Modify the next cell to output values of x^3 for each x in myList that is odd.

Nested List Comprehensions

Let's declare myMatrix as a list of lists and then use list comprehensions to find its transpose.
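A sketch of this transpose (the entries of myMatrix are arbitrary):

```python
myMatrix = [[1, 2, 3],
            [4, 5, 6]]   # 2 rows, 3 columns

# the outer comprehension runs over column indices,
# the inner one collects that column's entry from each row
transpose = [[row[i] for row in myMatrix] for i in range(len(myMatrix[0]))]
print(transpose)  # [[1, 4], [2, 5], [3, 6]]
```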

Anonymous Functions - Lambda Expressions

Anonymous functions are functions not bound to a name that can be used immediately in expressions. We can use lambda expressions for anonymous functions as described in section Lambdas.

Lambda expressions (sometimes called lambda forms) have the same syntactic position as expressions. They are a shorthand to create anonymous functions; the expression lambda arguments: expression yields a function object. The unnamed object behaves like a function object defined with

def name(arguments):
    return expression

Note that the lambda expression is merely a shorthand for a simplified function definition; a function defined in a def statement can be passed around or assigned to another name just like a function defined by a lambda expression. The def form is actually more powerful since it allows the execution of multiple statements.

The map object is lazy, so we need to evaluate it (for example, by converting it to a list) to get the result.
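For example:

```python
squarer = lambda x: x**2        # same as: def squarer(x): return x**2

mapped = map(squarer, [1, 2, 3, 4])
print(mapped)                   # a lazy map object, not yet a list
print(list(mapped))             # [1, 4, 9, 16] -- forcing the evaluation
```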