We can't define what a random process is, only what it isn't. Outcomes which lack discernible patterns are assumed to be random. If there is no way to predict an event, we say it is random.
The mathematical conceptions of randomness involve deviations from idealized, infinite distributions. No empirical process can be tested against this idealized notion of randomness because we can't collect an infinite number of data points. We can't even judge something as non-random from a single odd pattern, because random processes will sometimes produce short sequences which appear to be non-random.
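That last point is easy to demonstrate. The sketch below (a minimal simulation, with an arbitrary seed chosen for reproducibility) flips a fair coin 100 times per trial and counts how often an "obviously non-random" streak of eight identical flips in a row shows up purely by chance:

```python
import random

random.seed(1)  # arbitrary seed, chosen only so the run is reproducible

def has_long_run(flips, run_len=8):
    """Return True if the sequence contains run_len identical flips in a row."""
    count = 1
    for prev, cur in zip(flips, flips[1:]):
        count = count + 1 if cur == prev else 1
        if count >= run_len:
            return True
    return False

# Simulate many short experiments of 100 fair coin flips each.
trials = 10_000
hits = sum(
    has_long_run([random.randint(0, 1) for _ in range(100)])
    for _ in range(trials)
)
print(f"{hits / trials:.0%} of 100-flip sequences contain a run of 8 or more")
```

Streaks that long turn up in a substantial fraction of trials, so a striking local pattern is weak evidence against randomness.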
We can't create a random number generator, only generators whose output reveals no systematic pattern. Thus, there is no definition of what randomness is, only definitions of what it isn't. [But see the Level 3 exception below.]
Level 3: There is the Kolmogorov–Chaitin definition: a sequence is random when the shortest computer program which can reproduce it is as long as the sequence itself. In computer lingo, the sequence can't be compressed. For example, this definition excludes the digits of pi (3.1416...) because a very short program can reproduce a billion of them: pi is fully specified by a formula of just three symbols, a circle's circumference divided by its diameter (C/d). The problem with this definition is that failing to find a program which compresses a given sequence doesn't mean no such program exists. Maybe next year a new "Pythagoras" will find the formula which explains the given sequence.
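A practical stand-in for this idea is an off-the-shelf compressor: if a general-purpose algorithm can shrink a sequence, the sequence is certainly not Kolmogorov-random. The sketch below (using Python's standard zlib module; the specific inputs are just illustrative) compares a highly repetitive byte string with bytes drawn from the operating system's entropy source:

```python
import os
import zlib

def compressed_ratio(data: bytes) -> float:
    """Length after zlib compression, relative to the original length."""
    return len(zlib.compress(data, 9)) / len(data)

patterned = b"3.14159265358979" * 1000   # highly repetitive: very compressible
random_ish = os.urandom(16_000)          # OS entropy: practically incompressible

print(f"patterned : {compressed_ratio(patterned):.2f}")
print(f"random-ish: {compressed_ratio(random_ish):.2f}")
```

Note the asymmetry, which mirrors the text's caveat: a ratio well below 1 proves compressibility, but a ratio near 1 only shows that this particular compressor found no pattern, not that none exists.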
Most state lotteries use computer-generated numbers, and some algorithms are better than others. A Canadian mathematician, for example, deduced the algorithm used in his province's lottery and proceeded to win the next two drawings. The authorities arrested him for fraud, but the judge ruled that the mathematician, although clever, was not a criminal, and let him keep the money. The next lottery used a different (and harder-to-deduce) algorithm!
The closest we can come to a random process is radioactive decay, which is assumed to be random according to current theory in physics. [Link to radioactive decay random number generators.] That site goes into elaborate detail about how radioactive decay is converted, electronically, into a string of hexadecimal numbers.
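One common scheme for such converters compares the lengths of successive waiting times between decay events: since the times between decays are independent, "first interval shorter" and "first interval longer" are equally likely, giving unbiased bits. The sketch below simulates the detector timings with exponential random variates (a real device would read them from hardware; everything here is an illustrative assumption, not the circuit the linked site describes) and packs the bits into hex digits:

```python
import random

random.seed(42)  # simulated decay times; a real generator reads detector hardware

def decay_intervals(n, rate=1.0):
    """Simulate exponentially distributed waiting times between decay events."""
    return [random.expovariate(rate) for _ in range(n)]

def intervals_to_bits(intervals):
    """Compare successive pairs of intervals: shorter-first -> 0, longer-first -> 1.
    Ties (vanishingly rare with floats) are discarded, which avoids bias."""
    bits = []
    for a, b in zip(intervals[::2], intervals[1::2]):
        if a != b:
            bits.append(0 if a < b else 1)
    return bits

def bits_to_hex(bits):
    """Pack bits into hexadecimal digits, 4 bits per digit."""
    digits = []
    for i in range(0, len(bits) - 3, 4):
        nibble = bits[i] * 8 + bits[i + 1] * 4 + bits[i + 2] * 2 + bits[i + 3]
        digits.append(f"{nibble:x}")
    return "".join(digits)

hex_string = bits_to_hex(intervals_to_bits(decay_intervals(8000)))
print(hex_string[:32])
```

The pairwise comparison is the interesting design choice: it extracts fair bits even if the detector's decay rate is unknown or drifts slowly.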
Chaos theory has shown that deterministic systems can produce results which are chaotic and appear to be random. But they are not technically random, because the events can be modelled by a (non-linear) formula. The classic example of such a system is the pseudo-random number generator used by computers.
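A linear congruential generator, one of the oldest pseudo-random designs, makes the point in a few lines. The output looks random, yet it is produced by a single fixed formula, and re-seeding replays the exact same "random" stream (the constants below are the widely published Numerical Recipes parameters; the seed is arbitrary):

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """A linear congruential generator: state -> (a*state + c) mod m.
    Completely deterministic, yet the output passes casual inspection as random."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m  # scale into [0, 1)

gen = lcg(seed=2024)
first = [round(next(gen), 3) for _ in range(5)]
print(first)

# Re-seeding reproduces the identical stream -- the giveaway that nothing
# random is actually happening.
gen2 = lcg(seed=2024)
assert [round(next(gen2), 3) for _ in range(5)] == first
```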
Other systems are stable, linear, or non-chaotic under some conditions, but under other conditions dissolve into unpredictability at some level of description. Even then, at a different level or scale, patterns can still be found. A dripping faucet is an example of such a system. At some flow rates there is a steady, predictable drip; at other (lower) rates the drips appear irregular, though a higher level of analysis can still detect patterns, suggesting chaos rather than randomness; and at still other flow rates, no pattern can be discerned at the levels of analysis that worked for the previous drip rates.
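The standard toy model for this progression (a stand-in for the faucet, not a model of water itself) is the logistic map, x → r·x·(1−x). As the parameter r increases, the same one-line formula moves from a steady state, to a repeating cycle, to apparent irregularity:

```python
def logistic_orbit(r, x0=0.5, skip=500, keep=8):
    """Iterate x -> r*x*(1-x); discard transients, return the values visited."""
    x = x0
    for _ in range(skip):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

# r = 2.8: settles to one value (steady drip); r = 3.2: alternates between
# two values (periodic drip); r = 3.9: chaotic, no value repeats.
for r in (2.8, 3.2, 3.9):
    print(f"r={r}: {sorted(set(logistic_orbit(r)))}")
```

As with the faucet, the chaotic regime is unpredictable in practice but not random: the whole sequence follows from one seed and one formula.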
So the notion and practice of randomness are not as simple or cut-and-dried as people think.