If You Have Multiple Independent Continuous Uniform Random Variables, Can You Add Them?
Uniform Random Variable
The uniform random variable is defined by the density function (see Fig. 1-2a)

$$P(x) = \begin{cases} \dfrac{1}{b-a}, & a \le x \le b \\ 0, & \text{otherwise} \end{cases} \tag{1.4-1}$$
From: Markov Processes, 1992
Random Variables
Sheldon M. Ross, in Introduction to Probability Models (Tenth Edition), 2010
Example 2.36
(Sum of Two Independent Uniform Random Variables) If X and Y are independent random variables, both uniformly distributed on (0, 1), calculate the probability density of X + Y.
Solution: From Equation (2.18), since

$$f_X(a) = f_Y(a) = \begin{cases} 1, & 0 < a < 1 \\ 0, & \text{otherwise} \end{cases}$$

we obtain

$$f_{X+Y}(a) = \int_0^1 f_X(a-y)\,dy$$

For $0 \le a \le 1$, this yields

$$f_{X+Y}(a) = \int_0^a dy = a$$

For $1 < a < 2$, we get

$$f_{X+Y}(a) = \int_{a-1}^1 dy = 2 - a$$

Hence,

$$f_{X+Y}(a) = \begin{cases} a, & 0 \le a \le 1 \\ 2 - a, & 1 < a < 2 \\ 0, & \text{otherwise} \end{cases}$$

Because of the shape of its density function, X + Y is said to have a triangular distribution.
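This triangular density is easy to check numerically. The following Python sketch (ours, not part of Ross's text) compares an empirical histogram of X + Y against f(a):

```python
import numpy as np

# Monte Carlo check of the triangular density of X + Y for
# X, Y independent Uniform(0, 1).
rng = np.random.default_rng(0)
n = 1_000_000
s = rng.uniform(0, 1, n) + rng.uniform(0, 1, n)

# Empirical density vs. f(a) = a for a <= 1 and f(a) = 2 - a for a > 1.
hist, edges = np.histogram(s, bins=20, range=(0, 2), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
theory = np.where(centers <= 1, centers, 2 - centers)
print(np.max(np.abs(hist - theory)))  # small; only Monte Carlo noise remains
```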
Rather than deriving a general expression for the distribution of X+Y in the discrete case, we shall consider an example.
URL: https://www.sciencedirect.com/science/article/pii/B9780123756862000108
Random Variables
Sheldon M. Ross, in Introduction to Probability Models (Twelfth Edition), 2019
2.4.2 The Continuous Case
We may also define the expected value of a continuous random variable. This is done as follows. If X is a continuous random variable having a probability density function $f(x)$, then the expected value of X is defined by

$$E[X] = \int_{-\infty}^{\infty} x f(x)\,dx$$

Example 2.20 Expectation of a Uniform Random Variable

Calculate the expectation of a random variable uniformly distributed over $(\alpha, \beta)$.

Solution: From Eq. (2.8) we have

$$E[X] = \int_{\alpha}^{\beta} \frac{x}{\beta - \alpha}\,dx = \frac{\beta^2 - \alpha^2}{2(\beta - \alpha)} = \frac{\alpha + \beta}{2}$$

In other words, the expected value of a random variable uniformly distributed over $(\alpha, \beta)$ is just the midpoint of the interval.

Example 2.21 Expectation of an Exponential Random Variable

Let X be exponentially distributed with parameter λ. Calculate $E[X]$.

Solution:

$$E[X] = \int_0^{\infty} x \lambda e^{-\lambda x}\,dx = \left[-x e^{-\lambda x}\right]_0^{\infty} + \int_0^{\infty} e^{-\lambda x}\,dx = 0 + \frac{1}{\lambda} = \frac{1}{\lambda}$$

where the first equality follows from integration by parts.

Example 2.22 Expectation of a Normal Random Variable

Calculate $E[X]$ when X is normally distributed with parameters μ and $\sigma^2$.

Solution:

$$E[X] = \frac{1}{\sqrt{2\pi}\,\sigma}\int_{-\infty}^{\infty} x\, e^{-(x-\mu)^2/2\sigma^2}\,dx$$

Writing $x = (x - \mu) + \mu$ splits the integral into two terms: the first vanishes by symmetry, and the second equals μ. Hence $E[X] = \mu$.
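As a quick sanity check on these three results, here is a short Python sketch (ours; the parameter values are arbitrary) comparing sample means against the closed-form expectations:

```python
import numpy as np

# Sample-mean checks of the uniform, exponential, and normal expectations.
rng = np.random.default_rng(1)
n = 1_000_000
alpha, beta, lam, mu, sigma = 2.0, 5.0, 0.5, 1.0, 3.0

print(rng.uniform(alpha, beta, n).mean(), (alpha + beta) / 2)  # ~3.5 vs 3.5
print(rng.exponential(1 / lam, n).mean(), 1 / lam)  # numpy takes scale = 1/lambda
print(rng.normal(mu, sigma, n).mean(), mu)          # ~1.0 vs 1.0
```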
URL: https://www.sciencedirect.com/science/article/pii/B978012814346900007X
Simulation Techniques
Scott L. Miller, Donald Childers, in Probability and Random Processes (Second Edition), 2012
12.1.3 Generation of Random Numbers from a Specified Distribution
Quite often, we are interested in generating random variables that obey some distribution other than a uniform distribution. In this case, it is generally a fairly simple task to transform a uniform random number generator into one that follows some other distribution. Consider forming a monotonic increasing transformation g(·) on a random variable X to form a new random variable Y. From the results of Chapter 4, the PDFs of the random variables involved are related by

$$f_Y(y) = \frac{f_X(x)}{dg/dx}\bigg|_{x = g^{-1}(y)}$$

Given an arbitrary PDF $f_X(x)$, the transformation $Y = g(X)$ will produce a uniform random variable Y if $dg/dx = f_X(x)$, or equivalently $g(x) = F_X(x)$. Viewing this result in reverse, if X is uniformly distributed over (0, 1) and we want to create a new random variable Y with a specified distribution $F_Y(y)$, the transformation $Y = F_Y^{-1}(X)$ will do the job.
Example 12.3
Suppose we want to transform a uniform random variable into an exponential random variable with a PDF of the form

$$f_Y(y) = \frac{1}{b}\exp\!\left(-\frac{y}{b}\right)u(y)$$

The corresponding CDF is

$$F_Y(y) = \left(1 - \exp\!\left(-\frac{y}{b}\right)\right)u(y)$$

Therefore, to transform a uniform random variable into an exponential random variable, we can use the transformation

$$Y = F_Y^{-1}(X) = -b\,\ln(1 - X)$$

Note that if X is uniformly distributed over (0, 1), then 1 − X will be uniformly distributed as well, so that the slightly simpler transformation

$$Y = -b\,\ln(X)$$

will also work.
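A minimal Python sketch of this generator (ours, not the book's; the scale b = 2.0 is an arbitrary choice) confirms that uniform draws pushed through Y = −b ln(X) behave like exponential samples:

```python
import numpy as np

# Inverse-transform sampling of an exponential from Uniform(0, 1) draws.
rng = np.random.default_rng(2)
b = 2.0
x = rng.uniform(0, 1, 1_000_000)
y = -b * np.log(x)   # uses the fact that X and 1 - X are both uniform

print(y.mean(), b)   # sample mean ~ b, the mean of the exponential
print((y > 3 * b).mean(), np.exp(-3.0))  # tail check: P(Y > 3b) = e^{-3}
```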
This approach to generating random variables works well provided that the CDF of the desired distribution is invertible. One notable exception where this approach will be difficult is the Gaussian random variable. Suppose, for example, we wanted to transform a uniform random variable X into a standard normal random variable Y. The CDF in this case is the complement of a Q-function, $F_Y(y) = 1 - Q(y)$. The inverse of this function would then provide the appropriate transformation, $y = Q^{-1}(1 - x)$, or, as with the previous example, we could simplify this to $y = Q^{-1}(x)$. The problem here lies with the inverse Q-function, which cannot be expressed in closed form. One could devise efficient numerical routines to compute the inverse Q-function, but fortunately there is an easier approach.
An efficient method to generate Gaussian random variables from uniform random variables is based on the following 2 × 2 transformation. Let $X_1$ and $X_2$ be two independent uniform random variables (over the interval (0, 1)). Then if two new random variables, $Y_1$ and $Y_2$, are created according to

$$Y_1 = \sqrt{-2\ln(X_1)}\,\cos(2\pi X_2) \tag{12.5a}$$

$$Y_2 = \sqrt{-2\ln(X_1)}\,\sin(2\pi X_2) \tag{12.5b}$$

then $Y_1$ and $Y_2$ will be independent standard normal random variables (see Example 5.24). This famous result is known as the Box–Muller transformation and is commonly used to generate Gaussian random variables. If a pair of Gaussian random variables is not needed, one of the two can be discarded. This method is particularly convenient for generating complex Gaussian random variables since it naturally generates pairs of independent Gaussian random variables. Note that if Gaussian random variables are needed with different means or variances, this can easily be accomplished through an appropriate linear transformation. That is, if $Y \sim N(0, 1)$, then $Z = \sigma Y + \mu$ will produce $Z \sim N(\mu, \sigma^2)$.
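A short Python rendering of Equations (12.5a) and (12.5b), with the shift-and-scale step for general N(μ, σ²) noted in a comment (a sketch, not the book's code):

```python
import numpy as np

# Box-Muller: two independent Uniform(0,1) streams -> two independent N(0,1) streams.
rng = np.random.default_rng(3)
x1 = rng.uniform(0, 1, 500_000)
x2 = rng.uniform(0, 1, 500_000)

r = np.sqrt(-2.0 * np.log(x1))     # radius term
y1 = r * np.cos(2.0 * np.pi * x2)  # Eq. (12.5a)
y2 = r * np.sin(2.0 * np.pi * x2)  # Eq. (12.5b)

# For N(mu, sigma^2), apply the linear transformation z = sigma * y + mu.
print(y1.mean(), y1.std(), np.corrcoef(y1, y2)[0, 1])  # ~0, ~1, ~0
```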
URL: https://www.sciencedirect.com/science/article/pii/B9780123869814500151
Monte Carlo Integration
Matt Pharr, ... Greg Humphreys, in Physically Based Rendering (Third Edition), 2017
13.1.1 Continuous random variables
In rendering, discrete random variables are less common than continuous random variables, which take on values over ranges of continuous domains (e.g., the real numbers, directions on the unit sphere, or the surfaces of shapes in the scene).
A particularly important random variable is the canonical uniform random variable, which we will write as ξ. This variable takes on all values in its domain [0, 1) with equal probability. This particular variable is important for two reasons. First, it is easy to generate a variable with this distribution in software—most run-time libraries have a pseudo-random number generator that does just that. Second, as we will show later, it is possible to generate samples from arbitrary distributions by first starting with canonical uniform random variables and applying an appropriate transformation. The technique described previously for mapping from ξ to the six faces of a die gives a flavor of this technique in the discrete case.
Another example of a continuous random variable is one that ranges over the real numbers between 0 and 2, where the probability of its taking on any particular value x is proportional to the value 2 − x: it is twice as likely for this random variable to take on a value around 0 as it is to take on one around 1, and so forth. The probability density function (PDF) formalizes this idea: it describes the relative probability of a random variable taking on a particular value. The PDF p(x) is the derivative of the random variable's CDF,

$$p(x) = \frac{dP(x)}{dx}$$

For uniform random variables, p(x) is a constant; this is a direct consequence of uniformity. For ξ we have

$$p(x) = \begin{cases} 1, & x \in [0, 1) \\ 0, & \text{otherwise} \end{cases}$$
PDFs are necessarily nonnegative and always integrate to 1 over their domains. Given an arbitrary interval [a, b] in the domain, integrating the PDF gives the probability that a random variable lies inside the interval:

$$P(x \in [a, b]) = \int_a^b p(x)\,dx$$
This follows directly from the first fundamental theorem of calculus and the definition of the PDF.
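To make the "2 − x" example above concrete, here is a hypothetical inverse-transform sampler for it (ours, not the book's). Normalizing gives p(x) = (2 − x)/2 on [0, 2], so the CDF is P(x) = x − x²/4, and solving P(x) = ξ gives x = 2 − 2√(1 − ξ):

```python
import numpy as np

# Inverse-transform sampling for p(x) = (2 - x)/2 on [0, 2].
rng = np.random.default_rng(4)
xi = rng.uniform(0, 1, 1_000_000)     # canonical uniform draws
x = 2.0 - 2.0 * np.sqrt(1.0 - xi)     # inverse CDF

# Values near 0 should be about twice as likely as values near 1.
hist, _ = np.histogram(x, bins=[0.0, 0.1, 1.0, 1.1], density=True)
print(hist[0], hist[2])               # ~0.975 vs ~0.475, roughly a 2:1 ratio
```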
URL: https://www.sciencedirect.com/science/article/pii/B9780128006450500130
Random Variables
Sheldon Ross, in Introduction to Probability Models (Eleventh Edition), 2014
2.3.1 The Uniform Random Variable
A random variable X is said to be uniformly distributed over the interval (0, 1) if its probability density function is given by

$$f(x) = \begin{cases} 1, & 0 < x < 1 \\ 0, & \text{otherwise} \end{cases}$$

Note that the preceding is a density function since $f(x) \ge 0$ and

$$\int_{-\infty}^{\infty} f(x)\,dx = \int_0^1 dx = 1$$

Since $f(x) > 0$ only when $x \in (0, 1)$, it follows that X must assume a value in (0, 1). Also, since $f(x)$ is constant for $x \in (0, 1)$, X is just as likely to be "near" any value in (0, 1) as any other value. To check this, note that, for any $0 < a < b < 1$,

$$P\{a \le X \le b\} = \int_a^b f(x)\,dx = b - a$$

In other words, the probability that X is in any particular subinterval of (0, 1) equals the length of that subinterval.

In general, we say that X is a uniform random variable on the interval (α, β) if its probability density function is given by

$$f(x) = \begin{cases} \dfrac{1}{\beta - \alpha}, & \alpha < x < \beta \\ 0, & \text{otherwise} \end{cases} \tag{2.8}$$
Example 2.13
Calculate the cumulative distribution function of a random variable uniformly distributed over (α, β).

Solution: Since $F(a) = \int_{-\infty}^{a} f(x)\,dx$, we obtain from Equation (2.8) that

$$F(a) = \begin{cases} 0, & a \le \alpha \\ \dfrac{a - \alpha}{\beta - \alpha}, & \alpha < a < \beta \\ 1, & a \ge \beta \end{cases}$$
Example 2.14
If X is uniformly distributed over (0, 10), calculate the probability that (a) X < 3, (b) X > 6, (c) 3 < X < 8.

Solution:

$$P\{X < 3\} = \int_0^3 \frac{dx}{10} = \frac{3}{10}, \qquad P\{X > 6\} = \int_6^{10} \frac{dx}{10} = \frac{4}{10}, \qquad P\{3 < X < 8\} = \int_3^8 \frac{dx}{10} = \frac{5}{10}$$
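A quick check of these three probabilities with scipy, assuming the interval (0, 10) used above (a sketch):

```python
from scipy.stats import uniform

# scipy parameterizes the uniform by loc and scale: Uniform(loc, loc + scale).
X = uniform(loc=0, scale=10)

print(X.cdf(3))             # P{X < 3}     = 0.3
print(1 - X.cdf(6))         # P{X > 6}     = 0.4
print(X.cdf(8) - X.cdf(3))  # P{3 < X < 8} = 0.5
```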
URL: https://www.sciencedirect.com/science/article/pii/B9780124079489000025
SPECIAL RANDOM VARIABLES
Sheldon M. Ross, in Introduction to Probability and Statistics for Engineers and Scientists (Fourth Edition), 2009
SOLUTION
Let X denote the time in minutes past 7 a.m. that the passenger arrives at the stop. Since X is a uniform random variable over the interval (0, 30), it follows that the passenger will have to wait less than 5 minutes if he arrives between 7:10 and 7:15 or between 7:25 and 7:30. Hence, the desired probability for (a) is

$$P\{10 < X < 15\} + P\{25 < X < 30\} = \frac{5}{30} + \frac{5}{30} = \frac{1}{3}$$

Similarly, he would have to wait at least 12 minutes if he arrives between 7 and 7:03 or between 7:15 and 7:18, and so the probability for (b) is

$$P\{0 < X < 3\} + P\{15 < X < 18\} = \frac{3}{30} + \frac{3}{30} = \frac{1}{5}$$
The mean of a uniform [α, β] random variable is

$$E[X] = \int_{\alpha}^{\beta} \frac{x}{\beta - \alpha}\,dx = \frac{\beta^2 - \alpha^2}{2(\beta - \alpha)}$$

or

$$E[X] = \frac{\alpha + \beta}{2}$$

Or, in other words, the expected value of a uniform [α, β] random variable is equal to the midpoint of the interval [α, β], which is clearly what one would expect. (Why?)

The variance is computed as follows:

$$E[X^2] = \int_{\alpha}^{\beta} \frac{x^2}{\beta - \alpha}\,dx = \frac{\beta^3 - \alpha^3}{3(\beta - \alpha)} = \frac{\alpha^2 + \alpha\beta + \beta^2}{3}$$

and so

$$\mathrm{Var}(X) = \frac{\alpha^2 + \alpha\beta + \beta^2}{3} - \left(\frac{\alpha + \beta}{2}\right)^2 = \frac{(\beta - \alpha)^2}{12}$$
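Returning to the waiting-time example, both probabilities can be checked by simulating the passenger's wait directly. This Python sketch (ours, not the book's) encodes buses at 7:00, 7:15, and 7:30:

```python
import numpy as np

# Arrival time X ~ Uniform(0, 30) minutes past 7 a.m.; buses every 15 minutes.
rng = np.random.default_rng(5)
x = rng.uniform(0, 30, 1_000_000)
wait = np.where(x < 15, 15 - x, 30 - x)  # minutes until the next bus

print((wait < 5).mean())    # ~1/3, part (a)
print((wait >= 12).mean())  # ~1/5, part (b)
```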
URL: https://www.sciencedirect.com/science/article/pii/B9780123704832000102
Beyond Wavelets
Bertrand Bénichou, Naoki Saito, in Studies in Computational Mathematics, 2003
9.4 Two-Dimensional Counterexample
Let us consider a simple process X = (X 1, X 2)T where X 1 and X2 are independently and identically distributed as the uniform random variable on the interval [-1,1]. Thus, the realizations of this process are distributed as the right-hand side of Figure 9.1. Let us consider all possible rotations around the origin as a basis dictionary, i.e., D = SO(2,R) ⊂ O(2). Then, the sparsity and independence criteria select completely different bases as shown in Figure 9.1. Note that the data points under the BSB coordinates (45 degree rotation) concentrate more around the origin than the LSDB coordinates (with no rotation) and this rotation makes the data representation sparser. This example clearly demonstrates that the BSB and the LSDB are different in general. One can also generalize this example to higher dimensions.
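A small numerical experiment (ours, not from the chapter) makes the effect visible. Each 45-degree-rotated coordinate is, up to scale, a sum of two independent uniforms, and hence triangular, so the rotation preserves the ℓ2 energy while shrinking the average ℓ1 size of the coordinates:

```python
import numpy as np

# Rotate i.i.d. Uniform[-1, 1] pairs by 45 degrees and compare sparsity proxies.
rng = np.random.default_rng(6)
x = rng.uniform(-1, 1, size=(1_000_000, 2))

c = np.cos(np.pi / 4)
R = np.array([[c, c], [-c, c]])  # 45-degree rotation matrix
y = x @ R.T

print(np.abs(x).mean(), np.abs(y).mean())  # ~0.500 vs ~0.471: rotated is sparser
print((x**2).mean(), (y**2).mean())        # both ~1/3: rotations preserve energy
```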
URL: https://www.sciencedirect.com/science/article/pii/S1570579X0380037X
Preliminaries
Jaroslav Hájek, ... Pranab K. Sen, in Theory of Rank Tests (Second Edition), 1999
Section 2.3
- 15. Let the random variable L = l(X) be the level actually attained. Then, under H, generally L is stochastically at least as large as a uniform random variable on (0, 1). Hence the size of the test which rejects H if and only if L ≤ α is bounded by α; in other words, P(L ≤ α) ≤ α. [Theorem 8.3.1.3.] If X has a continuous distribution under H, then the distribution of L = l(X) is, under H, exactly uniform on (0, 1) (see the simulation sketch below).
Definition. For randomized tests, L may depend additionally on a uniformly distributed random variable U independent of X, and we assume P(l(X, U) ≤ α) ≤ α for α ∈ (0, 1), P ∈ H.
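The exact-uniformity claim in item 15 is easy to see by simulation. In this hypothetical Python sketch, the attained levels (p-values) of a one-sample t-test on null data satisfy P(L ≤ α) ≈ α:

```python
import numpy as np
from scipy import stats

# Under H (data truly N(0, 1), testing mu = 0), the t statistic has a
# continuous distribution, so the attained level L is Uniform(0, 1).
rng = np.random.default_rng(7)
pvals = np.array([
    stats.ttest_1samp(rng.normal(0.0, 1.0, 20), 0.0).pvalue
    for _ in range(20_000)
])
for alpha in (0.01, 0.05, 0.10):
    print(alpha, (pvals <= alpha).mean())  # each close to alpha
```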
- 16. Prove that $E_p(L) \ge \tfrac{1}{2}$ for p ∈ H and $E_q(L) \ge \int_0^1 [1 - \beta(\alpha, H, q)]\,d\alpha$. Furthermore, $E_p(L) = \tfrac{1}{2}$ if and only if P(L ≤ α) = α, α ∈ (0, 1), and $E_q(L) = \int_0^1 [1 - \beta(\alpha, H, q)]\,d\alpha$ if and only if {L ≤ α} is the critical region of the most powerful level-α test for each α ∈ (0, 1). (Within the framework mentioned in Subsection 2.3.3, a test based on L may be called most powerful if $E_q(L)$ = minimum.)
- 17. Let $L_0$ be a random variable depending on X and U and such that $0 \le L_0 \le 1$ and that $E_p(L_0) \ge \tfrac{1}{2}$ for all p ∈ H. Then $E_q(L_0) \ge 1 - \beta(\tfrac{1}{2}, H, q)$.
URL: https://www.sciencedirect.com/science/article/pii/B9780126423501500205
Power Spectral Density
Scott L. Miller, Donald Childers, in Probability and Random Processes (Second Edition), 2012
Section 10.2 Wiener–Khintchine–Einstein Theorem
- 10.4 Consider a random process of the form
- 10.5 Let $X(t) = A\cos(\omega t) + B\sin(\omega t)$, where A and B are independent, zero-mean, identically distributed, non-Gaussian random variables.
  - (a) Show that X(t) is WSS, but not strict sense stationary. Hint: for the latter case, consider $E[X^3(t)]$. Note: if A and B are Gaussian, then X(t) is also stationary in the strict sense.
  - (b) Find the PSD of this process.
- 10.6 Let $X(t) = \sum_{n} a_n \cos(\omega_n t + \theta_n)$, where all of the $\omega_n$ are nonzero constants, the $a_n$ are constants, and the $\theta_n$ are IID random variables, each uniformly distributed over $[0, 2\pi)$.
  - (a) Determine the autocorrelation function of X(t).
  - (b) Determine the PSD of X(t).
- 10.7 Let $X(t) = \sum_{n} \left[A_n \cos(\omega_n t) + B_n \sin(\omega_n t)\right]$ be a random process, where $A_n$ and $B_n$ are random variables such that $E[A_n] = E[B_n] = 0$, $E[A_n B_m] = 0$, $E[A_n A_m] = \delta_{n,m} E[A_n^2]$, and $E[B_n B_m] = \delta_{n,m} E[B_n^2]$ for all m and n, where $\delta_{n,m}$ is the Kronecker delta function. This process is sometimes used as a model for random noise.
  - (a) Find the time-varying autocorrelation function $R_{XX}(t, t+\tau)$.
  - (b) If $E[B_n^2] = E[A_n^2]$, is this process WSS?
  - (c) Find the PSD of this process.
- 10.8 Find the PSD for a process for which $R_{XX}(\tau) = 1$ for all τ.
- 10.9 Suppose X(t) is a stationary zero-mean Gaussian random process with PSD $S_{XX}(f)$.
  - (a) Find the PSD of $Y(t) = X^2(t)$ in terms of $S_{XX}(f)$.
  - (b) Sketch the resulting PSD if
  - (c) Is Y(t) WSS?
- 10.10 Consider a random sinusoidal process of the form $X(t) = b\cos(2\pi f t + \Theta)$, where Θ has an arbitrary PDF $f_{\Theta}(\theta)$. Analytically determine how the PSD of X(t) depends on $f_{\Theta}(\theta)$. Give an intuitive explanation for your result.
- 10.11 Let s(t) be a deterministic periodic waveform with period $t_o$. A random process is constructed according to $X(t) = s(t - T)$, where T is a random variable uniformly distributed over $[0, t_o)$. Show that the random process X(t) has a line spectrum and write the PSD of X(t) in terms of the Fourier series coefficients of the periodic signal s(t).
- 10.12 A sinusoidal signal of the form $X(t) = b\cos(2\pi f_o t + \Theta)$ is transmitted from a fixed platform. The signal is received by an antenna on a mobile platform that is in motion relative to the transmitter, with a velocity of V relative to the direction of signal propagation between the transmitter and receiver. Therefore, the received signal experiences a Doppler shift and (ignoring noise in the receiver) is of the form
- 10.13 Two zero-mean discrete random processes, X[n] and Y[n], are statistically independent and have autocorrelation functions given by $R_{XX}[k] = (1/2)^{|k|}$ and $R_{YY}[k] = (1/3)^{|k|}$. Let a new random process be $Z[n] = X[n] + Y[n]$.
  - (a) Find $R_{ZZ}[k]$. Plot all three autocorrelation functions.
  - (b) Determine all three PSD functions analytically and plot the PSDs.
- 10.14 Let $S_{XX}(f)$ be the PSD function of a WSS discrete-time process X[n]. Recall that one way to obtain this PSD function is to compute $R_{XX}[n] = E[X[k]X[k+n]]$ and then take the DFT of the resulting autocorrelation function. Determine how to find the average power in a discrete-time random process directly from the PSD function $S_{XX}(f)$.
- 10.15 A binary phase shift keying signal is defined according to
  - (a) Determine the autocorrelation function for the random process X(t). Is the process WSS?
  - (b) Determine the PSD of X(t).
- 10.16 Let X(t) be a random process whose PSD is shown in the accompanying figure. A new process is formed by multiplying X(t) by a carrier to produce
- 10.17 Consider a random process $Z(t) = X(t) + Y(t)$.
  - (a) Find an expression for $S_{ZZ}(f)$ in terms of $S_{XX}(f)$, $S_{YY}(f)$, and $S_{XY}(f)$.
  - (b) Under what conditions does $S_{ZZ}(f) = S_{XX}(f) + S_{YY}(f)$?
URL: https://www.sciencedirect.com/science/article/pii/B9780123869814500138
Perturbation Methods for Protecting Numerical Data: Evolution and Evaluation
Rathindra Sarathy, Krish Muralidhar, in Handbook of Statistics, 2012
4.2.2 Sullivan's model
Another type of nonlinear perturbation model was proposed by Sullivan (1989) for cases where the marginal distributions of the variables are not normal. Sullivan's approach aims to keep the marginal distribution of each masked variable the same as that of the corresponding original variable, whether numerical or categorical. This approach transforms each observation into a uniform random variable using its empirical cumulative distribution function (cdf), which is then retransformed to a standard normal random variable.
Let $Z$ represent the transformed variable, where

$$Z = \Phi^{-1}\!\left(\hat{F}(X)\right) \tag{15}$$

with $\hat{F}$ the empirical cdf of the confidential variable X and Φ the standard normal cdf. An appropriate level of noise is then added to the standard normal variable to result in the perturbed score $Z^{*}$ as follows:

$$Z^{*} = Z + \epsilon \tag{16}$$

where ε represents the independent noise term. Once noise addition is completed, the entire process is reversed to yield perturbed values that have the same empirical distribution as the original confidential values, as $Y = \hat{F}^{-1}(\Phi(Z^{*}))$. The empirical nature of Sullivan's approach makes it difficult to predict its data utility and disclosure risk characteristics.
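A minimal sketch of this pipeline in Python (the function name, noise level, and rank-based reversal below are our own assumptions, not Sullivan's exact formulation):

```python
import numpy as np
from scipy.stats import norm, rankdata

def perturb(x, noise_sd=0.2, seed=8):
    """Empirical cdf -> uniform -> normal scores -> add noise -> reverse."""
    rng = np.random.default_rng(seed)
    n = len(x)
    u = rankdata(x) / (n + 1)        # empirical cdf values in (0, 1)
    z = norm.ppf(u)                  # retransform to standard normal scores
    z_noisy = z + rng.normal(0.0, noise_sd, n)
    # Reverse: map the noisy scores back through the sorted original values,
    # which preserves the original empirical (marginal) distribution exactly.
    return np.sort(x)[rankdata(z_noisy).astype(int) - 1]

data = np.random.default_rng(9).lognormal(size=1000)  # a non-normal variable
masked = perturb(data)
print(np.allclose(np.sort(masked), np.sort(data)))    # True: marginal preserved
```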
URL: https://www.sciencedirect.com/science/article/pii/B9780444518750000191
Source: https://www.sciencedirect.com/topics/mathematics/uniform-random-variable