Uniform Random Variable

The uniform random variable is defined by the density function (see Fig. 1-2a)

(1.4-1) P(x) = \begin{cases} \frac{1}{b - a}, & \text{if } a \le x \le b \\ 0, & \text{otherwise} \end{cases}

From: Markov Processes, 1992

Random Variables

Sheldon M. Ross, in Introduction to Probability Models (Tenth Edition), 2010

Example 2.36

(Sum of Two Independent Uniform Random Variables) If X and Y are independent random variables, both uniformly distributed on (0, 1), calculate the probability density of X + Y.

Solution: From Equation (2.18), since

f(a) = g(a) = \begin{cases} 1, & 0 < a < 1 \\ 0, & \text{otherwise} \end{cases}

we obtain

f_{X+Y}(a) = \int_0^1 f(a - y) \, dy

For 0≤a≤1, this yields

f_{X+Y}(a) = \int_0^a dy = a

For 1<a<2, we get

f_{X+Y}(a) = \int_{a-1}^{1} dy = 2 - a

Hence,

f_{X+Y}(a) = \begin{cases} a, & 0 \le a \le 1 \\ 2 - a, & 1 < a < 2 \\ 0, & \text{otherwise} \end{cases}
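A quick Monte Carlo check of this triangular density (a sketch, not part of Ross's text; the sample size, seed, and bin count are arbitrary choices):

```python
# Monte Carlo check of the density of X + Y for independent Uniform(0, 1)
# random variables: compare a histogram against a on [0, 1] and 2 - a on (1, 2).
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
s = rng.uniform(0, 1, n) + rng.uniform(0, 1, n)

hist, edges = np.histogram(s, bins=40, range=(0, 2), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
f = np.where(centers <= 1, centers, 2 - centers)
print(np.max(np.abs(hist - f)))  # small, on the order of 1e-2
```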

Rather than deriving a general expression for the distribution of X+Y in the discrete case, we shall consider an example.

URL: https://www.sciencedirect.com/science/article/pii/B9780123756862000108

Random Variables

Sheldon M. Ross, in Introduction to Probability Models (Twelfth Edition), 2019

2.4.2 The Continuous Case

We may also define the expected value of a continuous random variable. This is done as follows. If X is a continuous random variable having a probability density function f ( x ) , then the expected value of X is defined by

E[X] = \int_{-\infty}^{\infty} x f(x) \, dx

Example 2.20 Expectation of a Uniform Random Variable

Calculate the expectation of a random variable uniformly distributed over ( α , β ) .

Solution:  From Eq. (2.8) we have

E[X] = \int_\alpha^\beta \frac{x}{\beta - \alpha} \, dx = \frac{\beta^2 - \alpha^2}{2(\beta - \alpha)} = \frac{\beta + \alpha}{2}

In other words, the expected value of a random variable uniformly distributed over the interval ( α , β ) is just the midpoint of the interval.  

Example 2.21 Expectation of an Exponential Random Variable

Let X be exponentially distributed with parameter λ. Calculate E [ X ] .

Solution:

E[X] = \int_0^\infty x \lambda e^{-\lambda x} \, dx

Integrating by parts (dv = λe^{−λx} dx, u = x) yields

E[X] = -x e^{-\lambda x} \Big|_0^\infty + \int_0^\infty e^{-\lambda x} \, dx = 0 - \frac{e^{-\lambda x}}{\lambda} \Big|_0^\infty = \frac{1}{\lambda}

Example 2.22 Expectation of a Normal Random Variable

Calculate E[X] when X is normally distributed with parameters μ and σ².

Solution:

E[X] = \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} x \, e^{-(x - \mu)^2 / 2\sigma^2} \, dx

Writing x as (x − μ) + μ yields

E[X] = \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} (x - \mu) \, e^{-(x - \mu)^2 / 2\sigma^2} \, dx + \mu \, \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} e^{-(x - \mu)^2 / 2\sigma^2} \, dx

Letting y = x − μ leads to

E[X] = \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} y \, e^{-y^2 / 2\sigma^2} \, dy + \mu \int_{-\infty}^{\infty} f(x) \, dx

where f ( x ) is the normal density. By symmetry, the first integral must be 0, and so

E[X] = \mu \int_{-\infty}^{\infty} f(x) \, dx = \mu
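The three expectations just derived are easy to confirm by simulation. The following sketch (not from the text; parameter values and sample size are arbitrary) compares sample means against (α + β)/2, 1/λ, and μ:

```python
# Sample-mean checks of E[X] for uniform, exponential, and normal variables.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
alpha, beta, lam, mu, sigma = 2.0, 5.0, 0.5, 1.0, 2.0

print(rng.uniform(alpha, beta, n).mean(), (alpha + beta) / 2)  # midpoint of (alpha, beta)
print(rng.exponential(1 / lam, n).mean(), 1 / lam)             # numpy uses scale = 1/lambda
print(rng.normal(mu, sigma, n).mean(), mu)                     # mean mu
```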

URL: https://www.sciencedirect.com/science/article/pii/B978012814346900007X

Simulation Techniques

Scott L. Miller, Donald Childers, in Probability and Random Processes (Second Edition), 2012

12.1.3 Generation of Random Numbers from a Specified Distribution

Quite often, we are interested in generating random variables that obey some distribution other than a uniform distribution. In this case, it is generally a fairly simple task to transform a uniform random number generator into one that follows some other distribution. Consider forming a monotonic increasing transformation g(·) on a random variable X to form a new random variable Y. From the results of Chapter 4, the PDFs of the random variables involved are related by

f_Y(y) = \frac{f_X(x)}{dg/dx} \bigg|_{x = g^{-1}(y)}

Given an arbitrary PDF, f_X(x), the transformation Y = g(X) will produce a uniform random variable Y if dg/dx = f_X(x), or equivalently g(x) = F_X(x). Viewing this result in reverse, if X is uniformly distributed over (0, 1) and we want to create a new random variable Y with a specified distribution, F_Y(y), the transformation Y = F_Y^{-1}(X) will do the job.

Example 12.3

Suppose we want to transform a uniform random variable into an exponential random variable with a PDF of the form

f_Y(y) = a \exp(-ay) u(y).

The corresponding CDF is

F_Y(y) = [1 - \exp(-ay)] u(y).

Therefore, to transform a uniform random variable into an exponential random variable, we can use the transformation

Y = F_Y^{-1}(X) = -\frac{\ln(1 - X)}{a}.

Note that if X is uniformly distributed over (0, 1), then 1 − X will be uniformly distributed as well so that the slightly simpler transformation

Y = -\frac{\ln(X)}{a}

will also work.
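A minimal Python sketch of this transformation (not from the text; the rate a, seed, and sample size are arbitrary):

```python
# Inverse-CDF generation of an exponential random variable from a uniform one,
# following Example 12.3: Y = -ln(X)/a.
import numpy as np

a = 2.0
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 1_000_000)   # X ~ Uniform(0, 1)
y = -np.log(x) / a                 # Y ~ exponential with rate a

print(y.mean(), 1 / a)             # sample mean should be close to 1/a
```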

This approach for generation of random variables works well provided that the CDF of the desired distribution is invertible. One notable exception where this approach will be difficult is the Gaussian random variable. Suppose, for example, we wanted to transform a uniform random variable, X, into a standard normal random variable, Y. The CDF in this case is the complement of a Q-function, F_Y(y) = 1 − Q(y). The inverse of this function would then provide the appropriate transformation, y = Q^{-1}(1 − x), or, as with the previous example, we could simplify this to y = Q^{-1}(x). The problem here lies with the inverse Q-function, which cannot be expressed in closed form. One could devise efficient numerical routines to compute the inverse Q-function, but fortunately there is an easier approach.

An efficient method to generate Gaussian random variables from uniform random variables is based on the following 2 × 2 transformation. Let X_1 and X_2 be two independent uniform random variables (over the interval (0, 1)). Then if two new random variables, Y_1 and Y_2, are created according to

(12.5a) Y_1 = \sqrt{-2\ln(X_1)} \, \cos(2\pi X_2),

(12.5b) Y_2 = \sqrt{-2\ln(X_1)} \, \sin(2\pi X_2),

then Y_1 and Y_2 will be independent standard normal random variables (see Example 5.24). This famous result is known as the Box–Muller transformation and is commonly used to generate Gaussian random variables. If a pair of Gaussian random variables is not needed, one of the two can be discarded. This method is particularly convenient for generating complex Gaussian random variables since it naturally generates pairs of independent Gaussian random variables. Note that if Gaussian random variables are needed with different means or variances, this can easily be accomplished through an appropriate linear transformation. That is, if Y ∼ N(0, 1), then Z = σY + μ will produce Z ∼ N(μ, σ²).
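A minimal sketch of the Box–Muller transformation of Equations (12.5a) and (12.5b), including the linear shift and scale mentioned at the end (not from the text; parameter values are arbitrary):

```python
# Box-Muller: map two independent Uniform(0, 1) variables to two independent
# standard normal variables, then shift/scale one of them to N(mu, sigma^2).
import numpy as np

rng = np.random.default_rng(0)
n = 500_000
x1 = rng.uniform(0, 1, n)
x2 = rng.uniform(0, 1, n)

y1 = np.sqrt(-2 * np.log(x1)) * np.cos(2 * np.pi * x2)
y2 = np.sqrt(-2 * np.log(x1)) * np.sin(2 * np.pi * x2)

mu, sigma = 3.0, 2.0
z = sigma * y1 + mu                            # Z ~ N(mu, sigma^2)
print(y1.mean(), y1.var(), z.mean(), z.var())  # approx 0, 1, 3, 4
```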

URL: https://www.sciencedirect.com/science/article/pii/B9780123869814500151

Monte Carlo Integration

Matt Pharr, ... Greg Humphreys, in Physically Based Rendering (Third Edition), 2017

13.1.1 Continuous random variables

In rendering, discrete random variables are less common than continuous random variables, which take on values over ranges of continuous domains (e.g., the real numbers, directions on the unit sphere, or the surfaces of shapes in the scene).

A particularly important random variable is the canonical uniform random variable, which we will write as ξ. This variable takes on all values in its domain [0, 1) with equal probability. This particular variable is important for two reasons. First, it is easy to generate a variable with this distribution in software—most run-time libraries have a pseudo-random number generator that does just that. Second, as we will show later, it is possible to generate samples from arbitrary distributions by first starting with canonical uniform random variables and applying an appropriate transformation. The technique described previously for mapping from ξ to the six faces of a die gives a flavor of this technique in the discrete case.

Another example of a continuous random variable is one that ranges over the real numbers between 0 and 2, where the probability of its taking on any particular value x is proportional to the value 2 − x: it is twice as likely for this random variable to take on a value around 0 as it is to take on a value around 1, and so forth. The probability density function (PDF) formalizes this idea: it describes the relative probability of a random variable taking on a particular value. The PDF p(x) is the derivative of the random variable's CDF,

p(x) = \frac{dP(x)}{dx}.

For uniform random variables, p(x) is a constant; this is a direct consequence of uniformity. For ξ we have

p(x) = \begin{cases} 1, & x \in [0, 1) \\ 0, & \text{otherwise.} \end{cases}

PDFs are necessarily nonnegative and always integrate to 1 over their domains. Given an arbitrary interval [a, b] in the domain, integrating the PDF gives the probability that a random variable lies inside the interval:

P(x \in [a, b]) = \int_a^b p(x) \, dx.

This follows directly from the first fundamental theorem of calculus and the definition of the PDF.
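As a concrete illustration (not from the book, and anticipating the inversion technique the text refers to), the linear density above, normalized to p(x) = (2 − x)/2 on [0, 2), has CDF P(x) = x − x²/4, which inverts to x = 2 − 2√(1 − ξ). The sketch below samples this way and checks the histogram against the density:

```python
# Sample the linear density p(x) = (2 - x)/2 on [0, 2) by inverting its CDF
# with the canonical uniform variable xi.
import numpy as np

rng = np.random.default_rng(0)
xi = rng.uniform(0, 1, 1_000_000)
x = 2 - 2 * np.sqrt(1 - xi)                      # x = P^{-1}(xi)

hist, edges = np.histogram(x, bins=20, range=(0, 2), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - (2 - centers) / 2)))  # small discrepancy
```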

URL: https://www.sciencedirect.com/science/article/pii/B9780128006450500130

Random Variables

Sheldon Ross, in Introduction to Probability Models (Eleventh Edition), 2014

2.3.1 The Uniform Random Variable

A random variable is said to be uniformly distributed over the interval ( 0 , 1 ) if its probability density function is given by

f(x) = \begin{cases} 1, & 0 < x < 1 \\ 0, & \text{otherwise} \end{cases}

Note that the preceding is a density function since f(x) ≥ 0 and

\int_{-\infty}^{\infty} f(x) \, dx = \int_0^1 dx = 1

Since f(x) > 0 only when x ∈ (0, 1), it follows that X must assume a value in (0, 1). Also, since f(x) is constant for x ∈ (0, 1), X is just as likely to be "near" any value in (0, 1) as any other value. To check this, note that, for any 0 < a < b < 1,

P\{a \le X \le b\} = \int_a^b f(x) \, dx = b - a

In other words, the probability that X is in any particular subinterval of ( 0 , 1 ) equals the length of that subinterval.

In general, we say that X is a uniform random variable on the interval ( α , β ) if its probability density function is given by

(2.8) f(x) = \begin{cases} \frac{1}{\beta - \alpha}, & \text{if } \alpha < x < \beta \\ 0, & \text{otherwise} \end{cases}

Example 2.13

Calculate the cumulative distribution function of a random variable uniformly distributed over ( α , β ) .

Solution: Since F(a) = \int_{-\infty}^{a} f(x) \, dx, we obtain from Equation (2.8) that

F(a) = \begin{cases} 0, & a \le \alpha \\ \frac{a - \alpha}{\beta - \alpha}, & \alpha < a < \beta \\ 1, & a \ge \beta \end{cases}

Example 2.14

If X is uniformly distributed over ( 0 , 10 ) , calculate the probability that (a) X < 3 , (b) X > 7 , (c) 1 < X < 6 .

Solution:

P\{X < 3\} = \int_0^3 \frac{dx}{10} = \frac{3}{10}, \qquad P\{X > 7\} = \int_7^{10} \frac{dx}{10} = \frac{3}{10}, \qquad P\{1 < X < 6\} = \int_1^6 \frac{dx}{10} = \frac{1}{2}
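A quick check of these values with SciPy (not part of the text; scipy.stats.uniform parameterizes the interval by loc and scale):

```python
# Probabilities of Example 2.14 for X ~ Uniform(0, 10).
from scipy.stats import uniform

X = uniform(loc=0, scale=10)
print(X.cdf(3))             # P{X < 3} = 0.3
print(1 - X.cdf(7))         # P{X > 7} = 0.3
print(X.cdf(6) - X.cdf(1))  # P{1 < X < 6} = 0.5
```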

URL: https://www.sciencedirect.com/science/article/pii/B9780124079489000025

SPECIAL RANDOM VARIABLES

Sheldon M. Ross, in Introduction to Probability and Statistics for Engineers and Scientists (Fourth Edition), 2009

SOLUTION

Let X denote the time in minutes past 7 a.m. that the passenger arrives at the stop. Since X is a uniform random variable over the interval (0, 30), it follows that the passenger will have to wait less than 5 minutes if he arrives between 7:10 and 7:15 or between 7:25 and 7:30. Hence, the desired probability for (a) is

P\{10 < X < 15\} + P\{25 < X < 30\} = \frac{5}{30} + \frac{5}{30} = \frac{1}{3}

Similarly, he would have to wait at least 12 minutes if he arrives between 7 and 7:03 or between 7:15 and 7:18, and so the probability for (b) is

P\{0 < X < 3\} + P\{15 < X < 18\} = \frac{3}{30} + \frac{3}{30} = \frac{1}{5}

The mean of a uniform [α,β] random variable is

E[X] = \int_\alpha^\beta \frac{x}{\beta - \alpha} \, dx = \frac{\beta^2 - \alpha^2}{2(\beta - \alpha)} = \frac{(\beta - \alpha)(\beta + \alpha)}{2(\beta - \alpha)}

or

E[X] = \frac{\alpha + \beta}{2}

Or, in other words, the expected value of a uniform [α,β] random variable is equal to the midpoint of the interval [α,β], which is clearly what one would expect. (Why?)

The variance is computed as follows.

E[X^2] = \frac{1}{\beta - \alpha} \int_\alpha^\beta x^2 \, dx = \frac{\beta^3 - \alpha^3}{3(\beta - \alpha)} = \frac{\beta^2 + \alpha\beta + \alpha^2}{3}

and so

\operatorname{Var}(X) = \frac{\beta^2 + \alpha\beta + \alpha^2}{3} - \left(\frac{\alpha + \beta}{2}\right)^2 = \frac{\alpha^2 + \beta^2 - 2\alpha\beta}{12} = \frac{(\beta - \alpha)^2}{12}
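A numerical sanity check of the mean and variance formulas (not from the text; the interval endpoints are arbitrary):

```python
# Check E[X] = (alpha + beta)/2 and Var(X) = (beta - alpha)^2 / 12 by simulation.
import numpy as np

alpha, beta = 2.0, 7.0
rng = np.random.default_rng(0)
x = rng.uniform(alpha, beta, 1_000_000)

print(x.mean(), (alpha + beta) / 2)       # both close to 4.5
print(x.var(), (beta - alpha) ** 2 / 12)  # both close to 25/12 = 2.083...
```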

URL: https://www.sciencedirect.com/science/article/pii/B9780123704832000102

Beyond Wavelets

Bertrand Bénichou, Naoki Saito, in Studies in Computational Mathematics, 2003

9.4 Two-Dimensional Counterexample

Let us consider a simple process X = (X_1, X_2)^T where X_1 and X_2 are independently and identically distributed as the uniform random variable on the interval [−1, 1]. Thus, the realizations of this process are distributed as the right-hand side of Figure 9.1. Let us consider all possible rotations around the origin as a basis dictionary, i.e., D = SO(2, R) ⊂ O(2). Then, the sparsity and independence criteria select completely different bases, as shown in Figure 9.1. Note that the data points under the BSB coordinates (45-degree rotation) concentrate more around the origin than under the LSDB coordinates (no rotation), and this rotation makes the data representation sparser. This example clearly demonstrates that the BSB and the LSDB are different in general. One can also generalize this example to higher dimensions.

Figure 9.1. Sparsity and statistical independence prefer different coordinates
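A rough numerical illustration of this effect (not from the chapter; the mean ℓ1 norm is used only as a crude sparsity proxy):

```python
# Samples uniform on [-1, 1]^2 have a smaller mean l1 norm after a 45-degree
# rotation, consistent with the sparser BSB coordinates described above.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(1_000_000, 2))

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
y = x @ R.T

print(np.abs(x).sum(axis=1).mean())  # ~ 1.00 (no rotation)
print(np.abs(y).sum(axis=1).mean())  # ~ 0.94 (45-degree rotation)
```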

URL: https://www.sciencedirect.com/science/article/pii/S1570579X0380037X

Preliminaries

Jaroslav Hájek, ... Pranab K. Sen, in Theory of Rank Tests (Second Edition), 1999

Section 2.3

15.

Let the random variable L = l(X) be the level actually attained. Then, under H, generally L is stochastically at least as large as a uniform random variable on (0, 1). Hence the size of the test which rejects H if and only if L ≤ α is bounded by α; in other words, P(L ≤ α) ≤ α. [Theorem 8.3.1.3.] If X has a continuous distribution under H, then the distribution of L = l(X) is, under H, exactly uniform on (0, 1) (see the simulation sketch after these notes).

Definition.

For randomized tests, L may depend additionally on a uniformly distributed random variable U independent of X, and we assume P(l(X, U) ≤ α) ≤ α for α ∈ (0, 1), P ∈ H.

16.

Prove that E_P(L) ≥ ½ for P ∈ H and E_q(L) ≥ ∫_0^1 [1 − β(α, H, q)] dα. Furthermore, E_P(L) = ½ if and only if P(L ≤ α) = α, α ∈ (0, 1), and E_q(L) = ∫_0^1 [1 − β(α, H, q)] dα if and only if {L ≤ α} is the critical region of the most powerful level-α test for each α ∈ (0, 1). (Within the framework mentioned in Subsection 2.3.3, a test based on L may be called most powerful if E_q(L) = minimum.)

17.

Let L_0 be a random variable depending on X and U and such that 0 ≤ L_0 ≤ 1 and that E_P(L_0) ≥ ½ for all P ∈ H. Then E_q(L_0) ≥ 1 − β(½, H, q).

18.

1 - \beta(\tfrac{1}{2}, H, q) \le \int_0^1 [1 - \beta(\alpha, H, q)] \, d\alpha.
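The exact uniformity of the attained level under a continuous null hypothesis (noted in 15 above) is easy to see by simulation. The following sketch (not from the book) uses a two-sided z-test on N(0, 1) data; the number of replications and the sample size are arbitrary:

```python
# Under H (data truly N(0, 1)), the attained level L of a z-test is uniform
# on (0, 1), so P(L <= alpha) is approximately alpha for every alpha.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
m, n = 100_000, 25
x = rng.normal(0.0, 1.0, size=(m, n))  # m replications of n observations under H
z = x.mean(axis=1) * np.sqrt(n)        # z-statistic (variance known, equal to 1)
L = 2 * norm.sf(np.abs(z))             # two-sided attained level

for alpha in (0.01, 0.05, 0.10):
    print(alpha, np.mean(L <= alpha))  # close to alpha
```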

URL: https://www.sciencedirect.com/science/article/pii/B9780126423501500205

Power Spectral Density

Scott L. Miller, Donald Childers, in Probability and Random Processes (Second Edition), 2012

Section 10.2 Wiener–Khintchine–Einstein Theorem

10.4

Consider a random process of the form

X ( t ) = b cos ( 2 π Ψ t + Θ ) ,

where b is a constant, Θ is a uniform random variable over [0, 2π), and Ψ is a random variable which is independent of Θ and has a PDF, f Ψ(ψ). Find the PSD, SXX (f) in terms of f Ψ(ψ). In so doing, prove that for any S(f) which is a valid PSD function, we can always construct a random process with PSD equal to S(f).
10.5

Let X(t) = A cos (ωt) + B sin (ωt) where A and B are independent, zero-mean, identically distributed, non-Gaussian random variables.

(a)

Show that X(t) is WSS, but not strict sense stationary. Hint: For the latter case consider E[X^3(t)]. Note: If A and B are Gaussian, then X(t) is also stationary in the strict sense.

(b)

Find the PSD of this process.

10.6

Let X(t) = \sum_{n=1}^{N} a_n \cos(\omega_n t + \theta_n), where all of the ω_n are non-zero constants, the a_n are constants, and the θ_n are IID random variables, each uniformly distributed over [0, 2π).

(a)

Determine the autocorrelation function of X(t).

(b)

Determine the PSD of X(t).

10.7

Let X(t) = \sum_{n=1}^{\infty} [A_n \cos(n\omega t) + B_n \sin(n\omega t)] be a random process, where A_n and B_n are random variables such that E[A_n] = E[B_n] = 0, E[A_n B_m] = 0, E[A_n A_m] = δ_{n,m} E[A_n^2], and E[B_n B_m] = δ_{n,m} E[B_n^2] for all m and n, where δ_{n,m} is the Kronecker delta function. This process is sometimes used as a model for random noise.

(a)

Find the time-varying autocorrelation function Rxx (t, t + τ).

(b)

If E[B_n^2] = E[A_n^2], is this process WSS?

(c)

Find the PSD of this process.

10.8

Find the PSD for a process for which RXX (τ) = 1 for all τ.

10.9

Suppose X(t) is a stationary zero-mean Gaussian random process with PSD, Sxx (f).

(a)

Find the PSD of Y(t) = X^2(t) in terms of S_{XX}(f).

(b)

Sketch the resulting PSD if S_{XX}(f) = \operatorname{rect}\!\left(\frac{f}{2B}\right).

(c)

Is Y(t) WSS?

10.10

Consider a random sinusoidal process of the form X(t) = b cos (2πft + Θ), where Θ has an arbitrary PDF, f Θ(θ). Analytically determine how the PSD of X(t) depends on f Θ(θ). Give an intuitive explanation for your result.

10.11

Let s(t) be a deterministic periodic waveform with period t_o. A random process is constructed according to X(t) = s(t − T), where T is a random variable uniformly distributed over [0, t_o). Show that the random process X(t) has a line spectrum and write the PSD of X(t) in terms of the Fourier series coefficients of the periodic signal s(t).

10.12

A sinusoidal signal of the form X(t) = b cos(2πf_o t + Θ) is transmitted from a fixed platform. The signal is received by an antenna which is on a mobile platform that is in motion relative to the transmitter, with a velocity of V relative to the direction of signal propagation between the transmitter and receiver. Therefore, the received signal experiences a Doppler shift and (ignoring noise in the receiver) is of the form

Y(t) = b \cos\!\left(2\pi f_o \left(1 + \frac{V}{c}\right) t + \Theta\right),

where c is the speed of light. Find the PSD of the received signal if V is uniformly distributed over (−v o, v o). Qualitatively, what does the Doppler effect do to the PSD of the sinusoidal signal?
10.13

Two zero-mean discrete random processes, X[n] and Y[n], are statistically independent and have autocorrelation functions given by R_{XX}[k] = (1/2)^{|k|} and R_{YY}[k] = (1/3)^{|k|}. Let a new random process be Z[n] = X[n] + Y[n].

(a)

Find RZZ [k]. Plot all three autocorrelation functions.

(b)

Determine all three PSD functions analytically and plot the PSDs.

10.14

Let Sxx (f) be the PSD function of a WSS discrete-time process X[n]. Recall that one way to obtain this PSD function is to compute RXX [n] = E[X[k]X[k + n]] and then take the DFT of the resulting autocorrelation function. Determine how to find the average power in a discrete-time random process directly from the PSD function, SXX (f).

10.15

A binary phase shift keying signal is defined according to

X(t) = \cos\!\left(2\pi f_c t + B[n]\frac{\pi}{2}\right) \quad \text{for } nT \le t < (n+1)T,

for all n, and B[n] is a discrete-time Bernoulli random process that has values of +1 or −1.
(a)

Determine the autocorrelation function for the random process X(t). Is the process WSS?

(b)

Determine the PSD of X(t).

10.16

Let X(t) be a random process whose PSD is shown in the accompanying figure. A new process is formed by multiplying X(t) by a carrier to produce

Y ( t ) = X ( t ) cos ( ω o t + Θ ) ,

where Θ is uniform over [0, 2π) and independent of X(t). Find and sketch the PSD of the process Y(t).

10.17

Consider a random process Z(t) = X(t) + Y(t).

(a)

Find an expression for SZZ (f) in terms of SXX (f), SYY (f), and SXY (f).

(b)

Under what conditions does SZZ (f) = SXX (f) + S YY (f)?

URL: https://www.sciencedirect.com/science/article/pii/B9780123869814500138

Perturbation Methods for Protecting Numerical Data: Evolution and Evaluation

Rathindra Sarathy, Krish Muralidhar, in Handbook of Statistics, 2012

4.2.2 Sullivan's model

Another type of nonlinear perturbation model was proposed by Sullivan (1989) for cases where the marginal distributions of the variables are not normal. Sullivan's approach tries to keep the marginal distribution of the masked variables the same as that of the original variables, regardless of whether they are numerical or categorical. This approach transforms each observation into a uniform random variable using its empirical cumulative distribution function (cdf), which is then retransformed to a standard normal random variable.

Let x_i^* represent the transformed value of observation x_i, where

(15) x_i^* = Φ^{-1}(F_i(x_i)), i = 1, …, n.

An appropriate level of noise is then added to the standard normal variable to obtain y_i^* as follows:

(16) y_i^* = x_i^* + ε_i,

where ε_i represents the independent noise term. Once noise addition is completed, the entire process is reversed to yield perturbed values y_i = F_i^{-1}(Φ(y_i^*)) that have the same empirical distribution as the original confidential values. The empirical nature of Sullivan's approach makes it difficult to predict its data utility and disclosure risk characteristics.
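A minimal sketch of this masking procedure for a single numerical column (not from the chapter; the noise standard deviation, the rank-based empirical cdf, and the quantile-based inverse transform are illustrative choices):

```python
# Sullivan-style masking: empirical cdf -> normal scores -> add noise ->
# map back through the empirical quantile function.
import numpy as np
from scipy.stats import norm, rankdata

def mask_column(x, noise_sd=0.3, seed=None):
    rng = np.random.default_rng(seed)
    n = len(x)
    u = rankdata(x) / (n + 1)                    # empirical cdf F_i(x_i), kept inside (0, 1)
    z = norm.ppf(u)                              # x*_i = Phi^{-1}(F_i(x_i))
    z_noisy = z + rng.normal(0.0, noise_sd, n)   # y*_i = x*_i + eps_i
    return np.quantile(x, norm.cdf(z_noisy))     # y_i = F_i^{-1}(Phi(y*_i))

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=1000)        # a skewed confidential column
y = mask_column(x, noise_sd=0.3, seed=1)
print(x.mean(), y.mean())                        # similar; marginal shape preserved
```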

URL: https://www.sciencedirect.com/science/article/pii/B9780444518750000191