Nature of Irreducibility and Information

With all the abstraction our minds generate, a question keeps reappearing: are the second-order effects more powerful than the substrate itself?

Fig: Trying to understand the universe

Existence of Irreducibility:

I stumbled upon this weird notion of irreducibility that seems to persist across the universe. A certain set of steps needs to be taken to reach a baffling conclusion, one that seemed very unobvious before you took those steps. To probe this idea further, two measures can be used: 

"Precision": How formal something is. Derivation from first principles should be possible with shared assumptions. 

"Correctness":  Should be formally or physically accurate.

Using these two measures, irreducibility can be classified into three categories: 

a) Mathematics

Math is the implicit candidate that comes to mind when we think of irreducibility. Say you have a partial differential equation in some bivariate function f(x, y): 

Fig: Partial differential equation

The only way to solve this equation is to carry out all of the steps necessary to reach the answer. The answer you arrive at is not something you could have figured out without solving the equation. This is where irreducibility in math lies. Let's take a visual example:

Fig: Irreducible paths in mathematics

We have a question "Q", say a partial differential equation. There may exist many paths to solve the equation to reach the answer "A". Such paths could be
P1 = R1
P2 = R2
P3 = C1 + R1'
P4 = C2 + R2' 
P5 = C1+C2 (Shortest path) 
and more. 

We can see that P5 is the shortest path, which represents the minimal steps of operation that must be completed to go from "Q" to "A". Thus, P5 is the irreducible path/step. There can be no other path shorter than P5, nor is it possible to shorten P5.

Does irreducibility of P5 hold everywhere? 

Of course not. It is trivial to see that P5 is only irreducible in Euclidean space; the irreducible path in non-Euclidean space depends on the choice of geometry and the geodesic. But the choice of space is not the concern here. The main focus is on the nature of irreducibility. Irreducible steps are part of the inherent structure of the object itself, regardless of the choice of space. 

But wait, after I've solved the equation once, do I violate irreducibility the next time? 

The answer is no. Once you have solved the equation, say with path P5 = C1 + C2, your learned space now contains C1, C2, and A. While it is true that you no longer have to follow the same steps the next time you see the same problem, you do need to follow the irreducible steps at least once. Otherwise, your learned space will be empty. We also have to assume infinitely persistent memory for this to hold. So no, irreducibility is not violated, but it can be bypassed by reusing previously learned irreducible paths. 

If we look at this from a theorem perspective, with a theorem "T1" and a more generalized claim "T", we need verifiable, irreducible proofs showing that T is a conclusion from T1 or from a collection of theorems "Ti". This is why mathematics has very high precision and correctness.

b) Physical Observation:

Physical observations are less precise because observation does not reveal all the physical processes that are occurring. Still, these observations have high correctness, as they simply evolve according to natural physical laws. Example: an apple falling from a tree. Observing the apple tells you, correctly, that things fall down due to some force, but it does not reveal precisely why. The physical observation itself is the irreducible part. These observations can be used as a basis for generalization using logic. Even though the observation itself is correct, there is no guarantee that the generalization is correct, due to the lack of precision. So there is a trade-off: lower precision allows easy generalization "G" from an observation "A", but the correctness of G cannot be guaranteed; in fact, it usually is not. When G is correct, it leads to quite unusual insights. I do want to note that this idea of a trade-off becomes quite important later on. 

c) Thoughts:

Unlike mathematics and physical observation, pure thoughts are the least precise and have the least correctness. Varying degrees of error arise from assumptions, which may very well be subjective. You can improve precision using logic or by clearing out the assumptions, but natural language is inherently ambiguous to a large extent. A random thought "T2" is itself the irreducible part. Naturally, to improve precision you can lengthen the chain of thoughts leading to T2, say from T0 and T1. For example: 
T2: "Earth pulls everything towards it" 
T1: "Apple falls towards the earth"
T0: "People fall towards the earth"

Direct thought: T2 (Least precise)
Indirect chain of thought: T0 + T1 to T2 (Improved precision) 

Correctness here depends only upon the assumptions implicit in the thought, and not necessarily on the length of the chain of thought. 

Ideally, these would be all of the categories. Surprisingly, this is not the case. Remember the trade-off I noted? There exists a morphed category where some form of trade-off in the measures allows you to do unusual things. Rather than calling it a morphed category, it is better to view these changes in the measures of precision and correctness not as a discrete set but as a continuum of varying degrees. 

A big claim. What are examples of elements in such categories? 

At the moment I can think of three such cases. The first is the one and only legendary mathematician himself, "Srinivasa Ramanujan", aka "The man who knew infinity".

Fig: Srinivasa Ramanujan

Ramanujan is the biggest anomaly in the history of mankind. There has been no other mind with such a lack of formal education and yet such heavenly intuition for mathematics. Due to the lack of formal education, he did not know how to write mathematical proofs. Despite that, he kept discovering new formulas and mathematical insights that seemed unreal. They just came to him in his dreams. Ramanujan attributed his insight to his family Goddess, Namagiri Devi. He used to say: "An equation for me has no meaning, unless it expresses a thought of God". Let's take a simple formula that Ramanujan came up with for approximating pi, with **no proof** whatsoever.
Fig: Ramanujan PI approximation

There is no reason for this to be true. And yet it is true.
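To make this concrete, here is a small sketch of one of Ramanujan's well-known series for 1/π, which the figure above presumably illustrates; a single term of the sum already agrees with π to about seven decimal places (only standard-library math is used).

```python
from math import factorial, sqrt, pi

def ramanujan_pi(terms: int = 2) -> float:
    # Ramanujan's 1914 series: 1/pi = (2*sqrt(2)/9801) * sum_k (4k)!(1103 + 26390k) / ((k!)^4 * 396^(4k))
    s = sum(
        factorial(4 * k) * (1103 + 26390 * k) / (factorial(k) ** 4 * 396 ** (4 * k))
        for k in range(terms)
    )
    return 1 / ((2 * sqrt(2) / 9801) * s)

print(ramanujan_pi(1), pi)   # one term: ~3.14159273 vs 3.14159265...; each extra term adds ~8 digits
```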

But why? How did he do it? How is this possible? 

Well, remember the morphed category? This is exactly an instance of it. The base of discovery here is mathematics, which is precise, but the thought used to generalize and explore new formulas without formal proof should be full of errors; yet in this case the formulas are highly accurate. Therefore, the measures here are high precision and high correctness. As to how this came to be, many people speculate a very rare and gifted mind. While that may be true, I believe neuroplasticity also had a huge role to play. See, Ramanujan studied most of his math by himself from a book he stumbled upon, named "A Synopsis of Elementary Results in Pure and Applied Mathematics". This book contained a list of theorems with little to no proof, designed to help cram for exams. Yet, from these theorems alone, Ramanujan kept on discovering new, advanced math. This unconstrained, informal setting helped his gifted mind figure out unintuitive ways of looking at a mathematical object. 

The beauty of this universe is that there are many different ways to see the same thing. Take the simple example of factors. Traditionally, the factors of a number "m" are the numbers which divide m perfectly. E.g., the factors of 6 are 1, 2, 3. But there is a more fun way to see the same thing. Let's take the modular cyclic group Z mod 6, or (Z6, +), with addition as the operation.

Z6 = {0,1,2,3,4,5} 

Z6 can be visualized like points on a circle from 0 to 5, just like a clock. 

Fig: Factors on (Z6,+)

It is trivial to see that factors are those elements that do not overshoot the identity element when repeatedly composed with themselves. Therefore, the factors of "m" are those elements "a" of (Zm, +) whose composition with themselves reaches the identity, without overshooting it, on the first rotation. It is also worth noting that a factor a1 closer to "m" (or to the identity "e") has a smaller order n1, with a1^n1 = e, than a factor a2 farther from the modular bound, with a2^n2 = e; that is, n1 < n2. With this intuition we can also find factors for some group "H" which is isomorphic to (Zm, +), i.e., H ≅ (Zm, +), even when the elements of "H" are not numeric labels.
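As a minimal sketch of that intuition (the function name and the brute-force rotation are my own, purely for illustration): walk each element around the clock by composing it with itself, and keep those that land exactly on the identity without overshooting.

```python
def factors_via_cyclic_group(m: int) -> list[int]:
    # Factors of m, read off (Z_m, +): elements whose repeated self-composition
    # hits the identity (m ≡ 0) exactly on the first rotation, with no overshoot.
    factors = []
    for a in range(1, m + 1):
        total = a
        while total < m:      # keep rotating around the clock
            total += a
        if total == m:        # landed exactly on the identity
            factors.append(a)
    return factors

print(factors_via_cyclic_group(6))   # [1, 2, 3, 6] -- 6 ≡ 0 is the identity itself, so it appears trivially
```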

Okay, what's the point of this? 

The point is that there are a huge number of similar intuitions that could have helped Ramanujan look at math in a completely different way. He was forced to figure out such intuitions, or at least his brain was forced to figure them out without his knowing it. Neurons formed such connections while he tinkered with math on his own, without any proofs. 

What about the other two instances? 

The second instance is proof systems like Probabilistically Checkable Proofs (PCPs), Interactive Oracle Proofs (IOPs), etc., which are implemented using cryptography. We call them validity proofs, or zero-knowledge proofs if the zero-knowledge property is satisfied. These proof systems are argument systems where one party tries to prove that a certain property holds using far less information or computation than the other party would need to verify it by recomputation. 

Why is this even relevant? 

These proof systems are relevant because the proofs they produce are not absolute. There is a negligible chance that a fake proof is accepted by the verifier. So here we again find a trade-off in correctness that enables such a system to be created. 

The last instance is quite similar: a class of classical algorithms called "randomized algorithms" achieves speedups or efficiency by utilizing randomness. Such algorithms go head to head with equivalent quantum algorithms, and in some cases achieve enough of a speedup that the quantum algorithm becomes irrelevant, a phenomenon we call dequantization. We find a trade-off in precision in this case. 
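As a concrete, hedged illustration of this trade-off, here is a sketch of Freivalds' algorithm, a classic randomized check of whether A·B = C: it accepts a wrong C with probability at most 1/2 per trial (so at most 2^-k after k trials), and in exchange each trial costs O(n²) instead of the O(n³) needed to recompute the product.

```python
import random

def freivalds(A, B, C, trials: int = 20) -> bool:
    # Randomized check that A @ B == C using random 0/1 vectors r: compare A(Br) with Cr.
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False      # definitely wrong
    return True               # correct with probability >= 1 - 2**(-trials)

A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
print(freivalds(A, B, [[19, 22], [43, 50]]))   # True  (the real product)
print(freivalds(A, B, [[19, 22], [43, 51]]))   # almost certainly False
```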

"It is quite fascinating that moving the lever of the measures of precision and correctness enables bizarre instances in the morphed categories"

Information, or is it precision? 

Before we ponder what information even is, I want to explore an assumption of information theory. Claude Shannon invented information theory to explore the theoretical limits of communication through a noisy channel between a source and a receiver. The theory he worked out was remarkable enough that it could be generalized across multiple disciplines like cryptography, physics, AI, coding theory, etc. However, there is an implicit assumption in the theory: that the encoding space and the decoding space are the same. This may be true for communication, where the language we use to send information is shared by both the sender and the receiver. It is simply not true, however, when we think about the real world, where the encoding and decoding spaces depend on the learned priors of the encoder and the decoder, aka the learned space.

Let's look at a scenario in the traditional information theory: 

Fig: Unique decoding

The sender encodes a message 'x' of dimension 'k' with some parity bits to get a codeword of block length 'n'. A code is a collection of codewords over some alphabet 'Σ': C = {c1, c2, c3, ...}, where each ci ∈ 𝔽^n for a field 𝔽 of size 'q'. Let 'd' be the minimum distance over all pairs of codewords in the code. Let 'Δ' denote the distance between any two codewords, a measure of how different they are from each other. Let 'R' be the rate of the code, given by R = k/n. The rate represents the portion of the codeword that carries the message. The higher the rate, the better the code. But the improvement is not free: you end up trading off against the minimum distance 'd', which determines how many errors you are able to detect and correct. 
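To make these definitions concrete, here is a toy sketch using a hypothetical example of my own, the 3-fold repetition code over 𝔽₂ (k = 1, n = 3): the rate and minimum distance fall straight out of the definitions above.

```python
from itertools import combinations

code = ["000", "111"]     # 3-fold repetition code: message 0 -> 000, message 1 -> 111
k, n = 1, 3

def hamming(u: str, v: str) -> int:
    # Δ(u, v): number of positions where the two words differ
    return sum(a != b for a, b in zip(u, v))

d = min(hamming(u, v) for u, v in combinations(code, 2))   # minimum distance
R = k / n                                                  # rate
print(f"R = {R:.2f}, d = {d}")   # R = 0.33, d = 3 -> can correct (d-1)//2 = 1 error
```

The low rate is exactly the price paid for that error-correcting distance.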

The important thing to notice here is the rate R. In this setting you can't achieve a rate R = 1, because that would mean the entire codeword is the message, with no room for parity. It would then be impossible to recover the message over a noisy channel, or with an adversary between the sender and the receiver. 

Are we done? These definitions are a mouthful.

Not yet. I want to introduce the ideas of unique decoding and list decoding, which are a fundamental part of the theory. Unique decoding is what we talked about just above: a situation where we allow the decoder to output only the single closest codeword to the received word y', which may or may not be corrupted. Whenever we normally think about decoding an encoded codeword, we usually mean unique decoding. This can be visualized as follows:

Fig: Decoding

Let y' be the received word. If the codeword was corrupted by a noisy channel or an adversary, the decoder will error-correct it by finding the closest codeword. Let's take a Hamming ball of radius d/2 around y'. We have placed the constraint that the decoding should be unique, meaning the decoder should return only the single closest codeword, if it exists within the Hamming ball. In the figure above, there are two codewords inside the Hamming ball such that Δ(c1, c2) < d, which violates unique decoding, because the minimum distance of the code is d, i.e., any two codewords must be separated by a distance of at least d. If we remove c2 from the Hamming ball, the decoder Dec(y') will return c1 in case of errors. If there are no errors, then the codeword c1 will coincide exactly with y'.

The problem with unique decoding is that you quickly get capped on the rate. While this is not a "problem" so much as a property, can we do better? 

Yes, we certainly can. We can ease the unique decoding constraint and allow the decoder to return a list "L" of codewords instead of a single codeword. This type of decoding is called list decoding. By returning a list of codewords, you get a better rate-distance trade-off. The theoretical upper bound is already known; it is called the list decoding capacity. It is crucial to note that the list size has to grow at most polynomially in the codeword length 'n', because it is pointless to have a list of very large or infinite size. If we take the illustration above again, the decoder will return the list L = {c1, c2}. This does look a lot like prediction in LLMs, but the difference is that even though we get a list, the correct codeword is still one of the entries in the list. 
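Continuing the toy repetition-code example from above (redefined here so the sketch stands alone), unique decoding returns at most one codeword, the one within radius d/2, while list decoding simply relaxes the radius and returns everything inside it.

```python
from itertools import combinations

code = ["000", "111"]
def hamming(u, v): return sum(a != b for a, b in zip(u, v))
d = min(hamming(u, v) for u, v in combinations(code, 2))   # d = 3

def unique_decode(y, code, d):
    # Return the single closest codeword if it lies strictly within radius d/2, else fail.
    best = min(code, key=lambda c: hamming(c, y))
    return best if hamming(best, y) < d / 2 else None

def list_decode(y, code, radius):
    # Relaxed constraint: return every codeword inside the chosen radius.
    return [c for c in code if hamming(c, y) <= radius]

y = "110"                                  # received word with noise
print(unique_decode(y, code, d))           # '111' -- the one flipped bit is corrected
print(list_decode(y, code, radius=2))      # ['000', '111'] -- a list, not a single answer
```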

Okay, but what is even the point of all of this? Why even talk about it? 

Remember the assumption I mentioned above? Let me remind you again. The implicit assumption is that the encoding space and the decoding space are the same. All of the encoding and decoding is done within the space of codewords, the code C = {c1, c2, c3, ...}, over the same alphabet Σ and field 𝔽.

The thing is, this is not the case for humans. The encoding and decoding spaces are subjective to the learning of the encoder and the decoder respectively.

Remember when I told you that a rate R = 1 is not possible because there is no parity message? 

Well, in human communication we can indeed achieve this. We do it subconsciously, again and again. Let's say Alice sends Bob the message "Happd", which contains a typo by Alice. What do you think Alice's message was? Yes, it is indeed "Happy". Wait, where's the parity message? 

See, there is no parity message, so k = n, and therefore the rate R = 1.

What? Does this mean it violates the theoretical impossibility that I mentioned above? 

No. For you to be able to decode the word "Happd" to "Happy", you needed to have learned it beforehand. Meaning, your learned space must have contained "Happy" before decoding. In other words, the information needed to decode was spent earlier, on expanding the encoding space and the decoding space. On top of that, both the encoding space and the decoding space must contain the word "Happy". However, this is not always the case. The above was a classic example of unique decoding. 

Let's look at a list decoding version. Alice sends Bob the word "Balt". That word could literally mean anything. Bob, after decoding, would return a list L = {"Ball", "Bat", "Bob", "Bait", "Batsman"}. We can notice some properties emerging: 

a) The size of learned space directly affects the size of the decoding list L. 

b) The distance measure is multidimensional. The encoded message and the decoded message no longer have to be of the same dimension. E.g., "Balt" to "Bat" (smaller dimension) or "Batsman" (larger dimension). This makes the notions of distance and minimum distance more complex.
 
c) The rate is still 1. 

d) The list size can be equal to the size of the entire alphabet Σ, which, if true, makes list decoding useless.

The properties "b" and "d" are quite concerning. Can we figure out some way to reduce the size of the list L which would inward reduce the complexity of the distance? 

Yes, it turns out you can greatly reduce the list size, and in many cases completely convert list decoding into unique decoding. We achieve this by adding context words or sentences before the message, after it, or both. For example: instead of sending "Balt" and list decoding, we can send: "I went to a cave today and saw a " + "Balt". 

Context: "I went to a cave today and saw a "
Message: "Balt"

Codeword = Context + Message. 

Now, the decoded list would definitely be L = {"Bat"}, which is unique decoding (list size = 1).

Everything looks perfect. Does this come for free?

Sadly, no. There is still the prior learning required to decode the message. On top of that, the context acts as a parity message. As a result, the codeword is no longer purely message, so the rate R != 1. We end up trading off rate to shrink the search space for the list. To further the point: the more context we add to the message (i.e., the lower the rate), the higher the chance of unique decoding. The optimal context size depends on the learned spaces of the encoder and decoder. Therefore, this framework loosens things up and offers more flexibility than the prior notion of communication. 

There is another interesting observation that I have not yet mentioned. With the same context and message above, imagine the channel is so noisy that the entire message part gets lost. Or Alice is lazy enough not to send the message at all and only sends the context to Bob. This gets us:

Context: "I went to a cave today and saw a "
Message: " "

Codeword = "I went to a cave today and saw a "+ " ". 

Even though Bob does not receive the message, only the context, Bob is still able to list decode "Bat". 
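Here is a toy sketch of this learned-space decoding. The vocabulary and the "cave" prior are hypothetical, and difflib's fuzzy matching merely stands in for whatever distance measure the brain actually uses.

```python
import difflib

# Hypothetical shared learned space of Alice and Bob
learned_space = ["Happy", "Ball", "Bat", "Bait", "Batsman", "Bob", "Cave", "Lamp"]

# Unique decoding: "Happd" has a single close neighbour in the learned space.
print(difflib.get_close_matches("Happd", learned_space, n=5, cutoff=0.6))   # ['Happy']

# List decoding: "Balt" is close to several words, so a whole list comes back.
candidates = difflib.get_close_matches("Balt", learned_space, n=5, cutoff=0.5)
print(candidates)                                    # e.g. ['Bat', 'Ball', 'Bait', 'Batsman']

# A toy context prior: how plausible each word is after "I went to a cave today and saw a ...".
# The context plays the role of parity and shrinks the list back to a single codeword.
cave_prior = {"Bat": 0.9, "Lamp": 0.1}
print(max(candidates, key=lambda w: cave_prior.get(w, 0.0)))        # 'Bat'

# With no message at all, the context alone still ranks the learned space and picks a best guess.
print(max(learned_space, key=lambda w: cave_prior.get(w, 0.0)))     # 'Bat'
```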

Does this seem familiar to you? 

It probably does, because it is the next-token prediction that we talk about so much in LLMs. In this loosened framework, next-token prediction simply pops out. 

I do believe that this framework can be formalized to create a dynamic information theory. It may or may not be useful for interpreting more complicated dynamical systems like neural nets. I cannot claim so, because I would need to perform the irreducible step of formalizing the theory to figure out where it would lead. 

Finally, we're done right? 

Again, the answer would be no. Although we have discussed a lot of ideas, there is a very crucial question that has not been answered yet. One that disturbs me quite often. A question so simple, yet carrying profound implications for how we see the world. 

"What is information?" 

Fig: Pondering about information

If we look deeper, information is a description of the state of some object. The object could be a mathematical or a physical object. To be precise, let's take the fundamental states that exist in our universe. The states are space and time; at least, our "model" of the universe is spacetime. You could also view this as a mathematical object, but the physical lens is easier, since we experience space and time in our daily lives. Okay, so for any physical object, what fundamental questions can I ask to get a description of its state? Let's take an electron as the representative physical object. 

What space or position in space does the electron lie in? 
At what time is the electron in the state X or position Y? 

The answers to these questions do give you the minimal state of the electron, and thus the information about the electron, or any physical object for that matter. Any other "classical" properties are derivatives of space and time. Classical, because the intrinsic 1/2 spin of fermions is not a spin, at least not in the classical sense. We'll keep this quantum-free to make everything digestible, so no spinors. Okay, moving on: what would the answers to the two questions above be? 

Hmm, well, it depends. We don't know if space is finite or not. If we imagine an empty universe, how do we know there is space at all? The problem lies even deeper. All of our measures, like position, rely on where we place our measuring scale, such as a Cartesian coordinate system. 

In our empty universe, where exactly do we place our measure scales?
What would be the origin point of the space? 
What about a universe with one electron? Do we place the origin point at the position of the electron?
What about a universe with two electrons? Or, for that matter, a universe with an unfathomably large number of electrons and other bosons and fermions, like ours? 

Thus, as we can see, the problem is in our measuring scales, on which our entire mathematics is built. Surprisingly, the issue runs even deeper due to the nature of numbers. Let's imagine a 2D coordinate system, placed at some random point in space.

Fig: Measuring scale

There is an important axiom I am about to stand on. I assume that the fundamental particles are point particles. In quantum field theory, bosons and fermions are excitations of some field at a point. String theory, meanwhile, describes them as strings, not points. So I stand on this idea of point particles from QFT, nothing more than that. Still, it is a big enough axiom, or assumption, that it needs to be pointed out explicitly.  

It's just points. How big of a deal will it be? 

Actually, all of mathematics builds upon this notion of points: entire coordinate systems, algebra, and so on. As a matter of fact, Euclid built on this notion of a point in the five postulates from which he derived his geometry, in the writing now known as "Euclid's Elements". 

Fig: Euclid's Elements
Before using the point in his postulates, he gives the definition of a point as: "A point is that which has no part". Then he goes on to give the following postulates, where he uses points to construct lines.

Fig: Postulates of Euclid

What did he even mean by thing that has no parts? 

I don't know. We can see how shaky the grounds of mathematics are. He most likely meant that a point is something with no dimension. 

What is that even supposed to mean? 
How can such a thing even physically exist? 
What exactly are we modeling, or trying to model, with mathematics? 
What is even a point? 

As a matter of fact, what is mathematics? 

Moving on: now imagine I randomly place a point particle, an electron, somewhere in space, just like the measuring scale. If I were to look at only one of the axes (for simplicity), say the x axis, where would I say the electron is? 

Well, to give you an answer, I would have to ask what precision we are using in our measuring device. I would be lucky if I placed the electron right at the origin, or at an integer coordinate like 1 or 2, or even at an exact dyadic fraction like 0.5 or 0.25, and so on. We see that each time we take half of the value, the precision increases. This continues to infinity. This means that the position of an electron at a random place, say between 0 and 1, needs to be described with infinite precision (if we are not lucky). So to describe even a simple state of an object, we need infinite information. 
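A small sketch of what that infinite precision means in practice: place the electron at a random spot on the x axis and locate it by repeated halving. Every halving buys exactly one more bit, and unless the position happens to be a dyadic fraction like 0.5 or 0.25, the bit string never terminates.

```python
import random

x = random.random()            # the electron's "true" position between 0 and 1
lo, hi, bits = 0.0, 1.0, ""
for _ in range(20):            # 20 halvings ≈ 6 decimal digits of precision
    mid = (lo + hi) / 2
    if x < mid:
        bits, hi = bits + "0", mid
    else:
        bits, lo = bits + "1", mid

print(f"x = {x:.10f}, pinned to within {hi - lo:.1e} after {len(bits)} bits: 0.{bits}...")
```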

How can we even measure information if it takes infinite information to describe it? 

This makes you question what information is, if not the precision of our measuring devices?

The same is the case for time as well, if we assume time is something that exists regardless of whether the state of a physical object changes, i.e., absolute time (not in the relativistic sense). Then the duration between the start of time and one second later has infinite precision. But there is a way to define time such that this infinite precision goes away. We could define time as the minimum duration over which the state of a physical object changes: the object goes from state A to B over a period known as time. Here, time is bounded by the state change of an object. Sure, the definition is circular, but that won't be a problem, as we have all experienced it firsthand. 

"The mystery continues."

Let's imagine that, instead of a point, I place a rectangular object in the 2D coordinate system. The catch is that I place the box exactly at 1 on the right side and between 0 and 1 on the left side. This is shown in the illustration below:

Fig: Box on 2d coordinate system

We have a box with points B and C placed right at 1 on the x axis, and the others between 0 and 1. 

Everything looks okay, what is the issue here? 

To explain the issue, we have to take a minor detour into the infinite world of Cantor. See, Cantor proved that infinities come in different sizes: some infinities are larger than others. Using a simple yet elegant diagonalization proof, he showed that the infinity of the reals between 0 and 1, i.e., (0, 1), is larger than the infinity of the natural numbers. For this he was called a "corrupter of youth" by his critics. Since the infinities of the natural numbers and the integers are equal in size, the argument we are about to make remains valid. 

Fig: Georg Cantor
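For the curious, here is a toy, finite snapshot of the diagonal argument. The listed digit strings are arbitrary placeholders; whatever enumeration of (0, 1) someone claims to have, the diagonally built number differs from the i-th entry at its i-th digit and so cannot be on the list.

```python
# A claimed (hypothetical) enumeration of reals in (0, 1), written as decimal digit strings
claimed_enumeration = [
    "1415926535",
    "7182818284",
    "4142135623",
    "5772156649",
]

# Build a number whose i-th digit differs from the i-th digit of the i-th entry.
diagonal = [row[i] for i, row in enumerate(claimed_enumeration)]
new_number = "0." + "".join(str((int(d) + 1) % 10) for d in diagonal) + "..."
print(new_number)   # differs from entry i at digit i, so it cannot appear anywhere in the list
```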

Now that we have been corrupted by Cantor's idea, we can see that the precision needed to describe a random point between 0 and 1 is higher than that needed for integers. Therefore, we need more information to describe the positions of points A and D, and of almost all of the body of the rectangle, than for the points lying exactly on 1, namely B and C.

How is this even possible? Why do we need different amounts of information to describe different parts of the same object? 

Things get even weirder. Imagine applying a transformation that flips the rectangle about its center, along the y axis. Then the points change: A to B, B to A, C to D, and D to C. Now you need more information to describe points B and C than A and D. Each transformation may change the precision needed to describe the state, or parts, of the same object, unless the measuring scale sticks to the object through the transformation, which would be nonsense and outright crazy.

The idea above can be generalized to fun notions like something I call a morphed group: a group whose elements lie in different spaces. As above, assume G = {A, B, C, D} is a group. Then A and D would belong to the open set S = (0, 1), while B and C would belong to the integers. What would the geometry of such a morphed group look like? 

If we think about the physicality of infinite precision, we are left with even deeper questions. 

Does infinity physically exist? Or is it a mathematical fiction? 
Is the measuring scale even valid? Or is it just a useful model? 

This is a point where we can draw a line between mathematical and physical objects. To do this, let's imagine that the infinite precision of space truly exists. Remember that we assumed all bosons and fermions are point particles. All of the infinitely precise locations in space are, in the end, just points, no matter how small. This allows me to take all of the bosons and fermions and place them at the infinitely precise points between 0 and 1. Bizarrely enough, I could fit the entire universe between 0 and 1. Of course, that would result in a black hole, which we ignore for the sake of the thought experiment. 

Surprisingly, this tells us how the universe itself could be continuous or discrete. If the infinite precision of space is real, then the universe is continuous, and the same holds for time. If there is no infinite precision, and precision is capped and finite, then the universe itself would be discrete. A point particle at the smallest allowable precision in space would move to the next position by simply appearing there. We wouldn't be able to see it in between, because, well, it's discrete. And boy, is that weird to think about. To be honest, it wouldn't be too surprising, because we do see this in quantum mechanics, where we know that energy is quantized: the jumps in energy level are discrete, meaning we can never see anything between the discrete energy levels. Discreteness would make a ton of sense for time, which depends on the smallest state change of a physical object. Otherwise, even in an empty, stateless universe, time would move continuously, which it still could, but that is quite a lonely thing to think about. 

You know what would be even weirder? The universe could be both continuous and discrete. A "wave function" is continuous, whereas after "measurement" we get discrete outcomes. But never mind, such ideas are highly speculative.

To finish it off, I was wondering where entropy fits into all of this. I want to point out that the entropy here is informational, i.e., a measure of uncertainty. Luckily, it fits right in and suggests that informational and thermodynamic entropy are the same. The informational entropy is the uncertainty which arises due to infinite precision, which cannot be measured.

To explore this, let's imagine we start out with a continuous universe with infinite precision, but with one nice property: the entire infinite precision of any region of space can be measured given enough time and memory. This gives a universe starting with zero informational entropy. As we know, the thermodynamic entropy of the universe must always increase. As such, the precision of information that we are allowed to measure in this imaginary universe must decrease, as a result of the increase in informational entropy. This means that the precision of space that we are allowed to measure decreases over time, or state evolution. But such a decrease is a very slow and gradual process that plays out over billions of years. Scarily, that would mean there will come a time when we won't be able to measure even the position of tiny subatomic particles, and, even worse, of molecules. 

Very disturbing. What kind of universe do we live in, that it gives rise to such an unsettling future?

There's more to it. The way we do a "measurement" is via interaction with some sort of light. The smaller the wavelength, the more precisely we can measure. However, the smaller the wavelength, the higher the frequency of the probing light. 

We have E = hf, where h is Planck's constant and f is the frequency. 
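As a rough numerical sketch of that cost (standard constants; the probe wavelengths are just illustrative):

```python
h = 6.626e-34     # Planck's constant, J*s
c = 2.998e8       # speed of light, m/s

for wavelength in (5e-7, 1e-10, 1e-15):          # visible light, X-ray, roughly nuclear scale
    f = c / wavelength                           # frequency of the probing light
    E = h * f                                    # E = hf
    print(f"wavelength {wavelength:.0e} m  ->  photon energy {E:.2e} J")
# Every order of magnitude of extra spatial precision costs an order of magnitude more energy.
```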

Therefore, the energy of the probing light increases. In fact, it gets quite costly; otherwise we would have discovered all of the predicted bosons and fermions by now. Now, the issue with the above equation is that, for our universe to have increasing informational entropy, the allowable precision has to decrease over state evolution. But I don't see anything stopping us from measuring whatever we want with arbitrarily high energy, assuming it becomes possible in the future with technological advancement, like harvesting a star or something. 

Do you see some kind of mystical entity stopping us from doing this? 

No, right? So how can this be? How is it that when entropy increases, the allowable precision decreases, yet there is nothing to restrict our measurement of precision?  

"Unless, Planck's constant is not a constant at all"

Remember, Max Planck discovered the constant empirically, through countless experiments. But no one claimed that the constant cannot change over a large enough time. If the allowable precision has to be restricted, it has to be done via a restriction on energy. Now, it would be nonsense to make the frequency/wavelength of the probing light itself changeable; that would break all of our physics. But we can make it so that Planck's constant increases as entropy increases, which amounts to a decrease in the allowable precision that can be measured, since the energy required to measure precision beyond the restricted boundary would be infinite. 

Planck's constant is the allowable precision boundary of the universe that is subject to increase with large enough time.

Here, I claim all these things, but what do I know, for all I am is an insect trying to comprehend the universe.
