Positioning for the next verticals of human society

Human society is at the cusp of a new technological renaissance. We've all seen a glimpse of what the future may look like thanks to the explosion of LLMs like GPT. Some fear it, some embrace it. The best move is to position yourself at some, if not all, of the upcoming revolutions. Don't fear it, don't blindly embrace it; rather, play it like an infinite sum game. The general consensus is usually wrong about the future. We're currently experiencing indeterminate optimism in society, at least about technology, which usually doesn't end well. Maybe it's just human nature: if you had asked someone in the 1980s what the future would look like, they would surely have mentioned flying cars and thinking machines all around us. We're not there yet; flying cars may simply be suboptimal in most places. Still, if you ask the people around you, you'll get similar or exactly the same optimism about technology. Maybe, just maybe, we're at the brink of exponential change. Or maybe we just have more time to think and dream about the future than the majority of humanity before us. Regardless, we are surely living through the most exciting time of human civilization.

What verticals do I believe are important? 

What are most people missing out on?

Starting off, with the most popular candidate:

1) Artificial Intelligence (AI):

You're probably used to hearing AI this, AI that every single day. Everyone is super optimistic about AI, as if it's magically going to change people's lives. Some want to make AI as useful as possible; some want to build AGI and ascend humanity to god status. Regardless of which side you're on, we can all agree AI will probably improve human society. To position ourselves better in this field, we need to understand what the different camps are optimizing for.

The most popular camp is the AGI believers. This camp believes reasoning and understanding are emergent properties: LLMs will eventually match or even beat humans at most or all tasks, gain sound reasoning, and become "sentient", which is debatable. Pretty debatable, in fact: we're talking about an incomplete, highly abstracted view of the brain replicated into what we call neural networks. Yeah, we stole the name too. This camp is the hardest to position in, because the goal is vague or outright hard; we don't even properly understand how the human brain works. Do we have free will? What is consciousness, even? Some believe it is large-scale computation, and if that's true, it's not a question of if but when we'll reach human-level sentient AI. Others, like Donald Hoffman, believe consciousness and the sense of self are a quantum mechanical phenomenon. I like to believe this is true, because it's the most fun outcome. The main argument against it is that there isn't enough time for quantum phenomena to persist in the warm, noisy brain. If you try to position yourself in this AGI camp, you'll likely need to think a bit longer term. And I mean really, really long term.

The other camp views AI as a parrot machine without logical reasoning or understanding, and believes LLMs are only regurgitating whatever they've seen. A proper name for AI here would be data-driven systems: still powerful and useful for daily tasks, but not leading to AGI. Sounds doom and gloom, right? Well, this camp actually has a better chance of getting that VC money everyone yearns for. There are three things to be really excited about in this camp: AI agents, AI open-world simulation, and AI-driven science.

Building a general-purpose AI agent is a very hard task, but not impossible. What would those agents look like? Imagine your device being controlled completely through voice: "Hey Jarvis, open up my Gmail and send a business proposal to XYZ@gmail.com. Then tally up some information from the internet about VC acquisitions this year, edit my video shoot, and upload it to YouTube." Sounds like a sci-fi movie, right?

AI open-world simulation is getting better day by day as we speak. AI-generated open worlds have the potential to serve as a foundation for an open-world metaverse. "Imagine you're playing an open-world game. You stumble upon a village of NPCs and find rare materials, available only to you, generated on the fly by AI. You proceed to build a kingdom over months of effort; then another player raids the kingdom, destroys everything, and loots your valuable artifacts. What do you do? You start forming guilds and associations to protect yourself, and thus a large-scale virtual economy emerges." Something like Sword Art Online (SAO) would be close to reality. What we end up with is a truly open world with experiences unique to each user.

AI-driven science is pretty thought-provoking. If AI starts automating science, we'd have no purpose left; it's probably the last line of separation between humans and machines. We're still very far from completely automating science, because AI cannot reason well, especially with deep mathematical and physical concepts. Most examples of LLMs doing surface-level science or math are emergent behaviors picked up from the training data, and LLMs still fail pretty badly when we interrogate them with harder, deeper ideas. But the fact that Google DeepMind found a more efficient matrix multiplication algorithm after 50 years is extremely fascinating. For now, AI-driven science will be more about AI making predictions in hard problems, like protein folding or designing new materials and drugs, which are still pretty compute-heavy. The combination of neural networks with neuroscience to build neural interfaces is another fascinating area of research, e.g., AlterEgo from MIT or the hand neural interface from Meta.

"Can AI innovation scale at the same pace? "

Maybe, or maybe not. There are debates that AI is hitting a wall and the law of diminishing returns is starting to take over due to a lack of real-world data to train on. Would using synthetic data improve the models? What about hallucinations? Is test-time compute actually improving reasoning in the models, or are the benchmarks obfuscated and manipulated to keep the hype going? To be honest, the burden of proof falls on the major players of the industry, and we'll find out soon enough.

"At the end of the day all we are doing is large scale multiplication over multidimensional vector spaces"

2) Quantum Computing (QC):

Hmm, the mysterious quantum computing that everyone believes will eventually take over the world and break our encryption systems, but which somehow looks a decade away every time. Here's how I think quantum computing will become a "real thing": they'll always be a decade away until one day they're just here somehow. This is somewhat similar to how AI popped out of the AI winter. So, where are we in terms of quantum computing? How fast are quantum computers? How do they even work?

To keep it fairly non-technical and simple (which can sometimes be misleading, especially in QC): quantum computers are built on the properties of quantum mechanics, whereas our classical computers work on the properties of classical physics. That's not really clear, right? Maybe you have a vague idea about quantum mechanics, or no idea at all. Let me give you a somewhat close but misleading analogy:
Let's put a cat inside a box with poison. The poison can break down at any moment and kill the cat, or it may not break down at all. Say there's a 50% probability for each outcome. Until you open the box, the cat is in what we call a superposition of both "alive" and "dead". Let's represent this in simple mathematical notation. We'll write the state as Ψ (psi), with alive as |1> and dead as |0> in Dirac notation.

So, we have the superposed state of the cat as  Ψ = alpha |0> + beta |1>   where alpha = beta = 1/√2,
and 1/√2 is the amplitude corresponding to the "50%" probability, since probability = |amplitude|^2 and |1/√2|^2 = 1/2 = 0.5 => 50%.
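If you like code more than notation, here's a tiny sketch of that superposed state in plain Python (no quantum library, just the amplitude arithmetic and a simulated measurement):

```python
import random

# Amplitudes for the cat's state: psi = alpha|0> + beta|1>.
# For a 50/50 outcome each amplitude must be 1/sqrt(2),
# because probability = |amplitude|^2.
alpha = beta = 1 / (2 ** 0.5)

prob_dead = abs(alpha) ** 2   # 0.5
prob_alive = abs(beta) ** 2   # 0.5
assert abs(prob_dead + prob_alive - 1.0) < 1e-9  # amplitudes must normalize

# "Opening the box" is a measurement: the superposition collapses
# to one classical outcome with these probabilities.
outcome = random.choices(["dead", "alive"], weights=[prob_dead, prob_alive])[0]
print(outcome)
```

Note that setting alpha = beta = 1/2 instead would give probabilities of only 25% each and the state wouldn't normalize, which is why the amplitude has to be 1/√2.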

This analogy is actually incorrect, because when an object is large enough, classical physics takes over and quantum behavior breaks down. But when the object is small enough, say an isolated atom, quantum physics takes over. Isolation means cooling the system down very close to absolute zero, so that almost no radiation leaks out; any information leaking from the system breaks the quantum state, which is related to what we call the observer effect. If we can do computation over these quantum states while staying within the boundary of quantum physics, we get quantum computers. Why does quantum physics prevent us from learning what's happening behind the curtain? Well, no one knows; maybe we're just insects playing dice, trying to comprehend the marvel of the universe we live in. But now you know why the quantum computers you see online have what looks like an inverted refrigerator: they're trying to cool things down.

Looks like a sand worm from Dune

It would be incorrect to look at quantum computers as super-fast versions of classical computers that do every calculation at light speed. But why? Didn't Google recently announce their quantum chip Willow, which took 5 minutes to solve a random circuit sampling (RCS) task estimated to take the fastest supercomputer septillions of years? Yes, but here's how we should look at quantum computers:



~ "Quantum computing is a superset of classical computing"

Just as classical physics is merely the convergence of quantum probabilities through decoherence of the quantum state, quantum computing is a larger paradigm that includes classical computing. Quantum computers can simulate classical computers, but not the other way around: to represent a superposition on a classical system, we would need to track amplitudes over a state space that grows exponentially with the number of qubits. Simulation is possible, but prohibitively expensive. For many day-to-day problems, quantum computers perform as well as or even worse than classical systems; but for some problems, they exponentially outperform them. Where does this speedup come from? The simplest analogy thrown around these days is that quantum systems try all possibilities at once across parallel universes and interfere to give us the result. How I like to think about it: these quantum systems can access a huge space of possibilities at once for parallelizable problems, but the universe prevents us from measuring all of those values at once. There's a trick, though: while we cannot read every probable value, we are allowed to make relative comparisons and compute over relative properties of all these possible values behind the curtain, without looking at them. It's like the universe telling us, "I'm not going to make you omnipotent, but I'll allow you to be semi-omnipotent."
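To see why classical simulation doesn't scale, here's a back-of-the-envelope sketch in Python of how the state-vector memory blows up with qubit count (16 bytes per complex amplitude is an assumption, the usual size of a double-precision complex number):

```python
# An exact classical simulation of n qubits must track 2**n complex
# amplitudes. At 16 bytes each, memory grows exponentially -- this is
# why classical computers cannot simulate large quantum systems.
for n in (10, 30, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30  # convert bytes to GiB
    print(f"{n} qubits -> {amplitudes:,} amplitudes (~{gib:,.0f} GiB)")
```

Thirty qubits already need 16 GiB; fifty need millions of GiB, more than any data center holds.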

We could define the two major problems in building quantum computers as:

"How are we going to scale quantum computers with high fidelity?"

"How to make them small and portable?"

Well, for the first one, Google gave us a small direction towards the answer. The significance of Google's Willow chip was not speed but error correction and high fidelity. Ideally, we want to increase the number of qubits while minimizing errors. Google combined multiple physical qubits into a logical qubit that handles errors quite well as we scale: the more physical qubits per logical qubit, the lower the error rate, instead of errors growing with size. The quantum state still survives only for a very short time, but such a breakthrough in error correction is a massive step in the right direction.
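Willow's actual codes are far more sophisticated, but a classical repetition code with majority voting gives the flavor of why redundancy suppresses errors below a threshold. A rough Monte Carlo sketch in Python (hypothetical error rates, not Google's numbers):

```python
import random

def logical_error_rate(p, n_phys, trials=20000):
    """Estimate the error rate of a majority-vote 'logical bit' built
    from n_phys noisy physical bits, each flipping independently with
    probability p (a classical stand-in for a quantum code)."""
    errors = 0
    for _ in range(trials):
        flips = sum(random.random() < p for _ in range(n_phys))
        if flips > n_phys // 2:   # majority corrupted -> logical error
            errors += 1
    return errors / trials

# Below the threshold (p well under 0.5), adding redundancy suppresses
# the logical error rate instead of growing it.
for n in (1, 3, 5, 9):
    print(n, logical_error_rate(0.05, n))
```

With a 5% physical error rate, nine bits voting together fail orders of magnitude less often than one bit alone; that "more qubits, fewer errors" trend is the shape of the Willow result.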

As for portability, it depends on what type of quantum computer you're building. The one we commonly see is the superconducting quantum computer, like Google's, with its refrigerators. Yeah, the one that looks like Shai-Hulud from Dune. Another one that shows promise in scaling down in size is the trapped-ion quantum system, where lasers cool ions suspended in electromagnetic fields. We're still in the experimentation phase as to which one will be more successful. Portability is not that important right now, at least not until we have a scalable and practical quantum system.

What benefit would we get from a fault-tolerant quantum system?

It actually depends on what we use the compute for. See, quantum computers don't make breakthroughs by themselves; they are more of a precursor tool for massive breakthroughs in many fields. The most likely combination we'll see is simulating reality with quantum systems, letting us understand how the reality around us actually works. With prediction systems powered by quantum speedups and simulation, we might accurately detect cancerous cells and kill them off using nanobots, discover new drugs, elements, and chemicals, or find a way to reach stable nuclear fusion; the possibilities are endless. One of the first practical applications would likely be improving optimization algorithms, which are integral to AI.

Along with the optimism there's a massive fear of quantum computers breaking all the encryption around us. That fear is valid yet misguided, for two reasons:

- We would need a significantly larger number of qubits, with high stability and fidelity, to begin to break any standard encryption or hashing algorithm.

- We already have quantum-resistant cryptography, like lattice-based cryptography. Transitioning the world's infrastructure to a new cryptosystem is still a hassle, though.

The good news is that with that fear also comes post-quantum cryptography, which is designed to stay secure even against quantum attackers; breaking such a system would take more than just building a quantum computer. Sounds pretty secure to me.

That's all good, but when are we going to be able to teleport? Well, it's a bit tricky: teleporting a minute particle is feasible, but a large system is a different game. And with it comes a fundamental question about our reality. If we were to teleport a human being, we would need to destroy the person at one end and reconstruct them at the other; we can't just clone the human being, assuming they are in a quantum state, because the no-cloning theorem tells us we can never accurately copy a quantum state. Then doesn't that mean that if we were able to teleport humans, we would also be able to construct humans with high accuracy during reconstruction? Which would mean we'd have the ability to construct any form of matter at the will of our science and technology. I'll let you ponder upon it.

"We finally have a way of flirting with the fundamental nature of reality itself"

3) Blockchains:

We'd all agree that we don't really trust strangers on the internet, definitely even less than a stranger offering candy to a child. So, if we think about it, we have the largest network of people humankind has ever made, called the internet, yet we can't coordinate with each other. Sounds a bit problematic, right? Wouldn't it be awesome if we were able to coordinate even if we didn't trust each other? But is that even possible? Sounds like a utopian idea, doesn't it? But why are we even talking about social coordination over the internet, weren't we going to talk about blockchains?

Well, you see, blockchains at their core are actually a trustless coordination layer, meaning they allow us to coordinate even when we don't trust the counterparty and vice versa.

How are blockchains able to remove trust from the equation?  

What types of coordination can we even do?

Actually, trust is not completely removed from the equation; it is simply transferred to a secure, decentralized, Byzantine fault tolerant network that is responsible for coordinating among its nodes and ensuring the integrity of the network. The trust usually derives from one of two sources, depending upon the type of consensus mechanism: proof-of-work consensus derives trust from the compute power of the network (aka mining with devices or rigs), whereas proof-of-stake consensus derives trust from capital bonded on the network. Let's call these compute trust and economic trust. The larger the network and the more spread out its geographical distribution, the larger the derived trust and the more fault tolerant the network is against malicious actors. So, we don't really remove trust; we simply offload it to a more secure counterparty. We're putting faith in a particular consensus protocol being resilient enough to withstand any sort of bad actor on the network. Usually, bad actors would need a 51% or 2/3 majority, depending upon the design of the consensus protocol, and when the number of network participants is large enough, a hostile takeover is extremely hard and expensive. Even if, in an extreme case, someone were able to take over the network, social fork choice would fork it to a new chain where the hostile party has no power. Therefore, blockchains are resilient against any form of malicious force, be it a government or a group of governments. Blockchains at their core reflect the cypherpunk movement, along with game theory on steroids.
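To make "compute trust" concrete, here's a toy proof-of-work sketch in Python. Real chains use different hash targets and block structures, but the asymmetry is the same: finding a valid nonce costs many hashes, while checking one costs a single hash.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Find a nonce whose SHA-256 hash starts with `difficulty` zero
    hex digits -- the expensive work that backs 'compute trust'."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("block #1: alice pays bob 5", difficulty=4)
# Anyone can verify in one hash what took many hashes to find:
check = hashlib.sha256(f"block #1: alice pays bob 5{nonce}".encode()).hexdigest()
assert check.startswith("0000")
```

Rewriting history would mean redoing this work for every block faster than the honest majority, which is exactly why a 51% attack is the threshold that matters.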

We can pretty much perform any type of coordination acceptable to both parties. It could be financial, like peer-to-peer payments on the internet. It could be information exchange, like peer-to-peer file sharing à la torrents. You could also coordinate on offloading compute, like a cloud network but peer-to-peer, where the integrity of the computation is verified by the network. The possibilities are pretty much endless.

But can't we do the same things over the existing internet? 

Yes, we can. But remember that on the existing internet we're displacing trust onto the companies that provide a specific implementation of the solution, rather than onto an underlying network as with blockchains. This results in huge counterparty risk. It's not ideal to trust anyone on the internet, let alone companies claiming they'll be gentle with our data or money.

Blockchains are falsely known for being anonymous. Some privacy blockchains, like Zcash and Monero, are fully anonymous, but most blockchains are pseudonymous, meaning they're only anonymous until someone finds out that a particular address belongs to you. It's like a transparent ledger where you can see each and every transaction, but not whom particular addresses belong to. Despite being pseudonymous, blockchains are a natural platform for building truly private applications: private payments, uncensorable social media, or a form of WikiLeaks without centralized counterparty risk.
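A sketch of why this is pseudonymity rather than anonymity: an address is essentially just a hash of a public key. (Simplified and hypothetical; real chains add their own encodings, checksums, and hash choices on top.)

```python
import hashlib

# Nothing in an address names you -- it's derived from a key, not an
# identity. That's pseudonymity: the ledger shows every transaction
# this address ever made, and the moment someone links the address to
# you, your whole history is linked too.
public_key = b"\x04" + b"\x11" * 64          # hypothetical uncompressed key bytes
address = hashlib.sha256(public_key).hexdigest()[:40]
print(address)
```

The same key always yields the same address, which is what makes the transaction graph linkable in the first place.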

The most double-edged ability of blockchains is the ability to undermine any state or government. Any sanction becomes useless if the code is on the blockchain; a social network of people can undermine the authority of even the largest governments in the world. That said, blockchains as of yet are extremely constrained by their limited storage capacity. The main goal of general-purpose blockchains like Ethereum is to serve as a world computer, where all of the world's critical infrastructure runs onchain. This might only be possible if, in the future, we had unlimited storage space at little to no cost. Another approach is making blockchains a verification layer for computation performed off-chain, where only the cryptographic proof of the computation is verified onchain. This technique is commonly used to scale layer-1 blockchains by building a layer-2 chain on top, which commits cryptographic proofs of its state transitions to the underlying layer 1. We could also scale the underlying blockchain itself by sharding, which has its own complexities.

What kinds of applications do I think are suitable for blockchains?  

I believe blockchains serve as a fail-safe mechanism to opt out of the current system. They are the only platform where we can build applications targeting everyone in the world. It's pretty straightforward to replace centralized banks with what we commonly call decentralized finance: decentralized lending, borrowing, escrow services, market making, onchain derivatives and securities, etc., all accessible from a smartphone and an internet connection anywhere in the world. But wait, why are there so many financial applications? Weren't blockchains supposed to be the coordination layer for any type of social coordination? Well, yes; the reason could be that it is comparatively faster and easier to disrupt financial services, as they are the most common point of centralization we have in our world. It may not be optimal to bring everything onchain, and other things that could migrate to blockchains, e.g., decentralized storage, decentralized cloud, or decentralized social media, usually incur heavy storage demands on the network. Yeah, the thing we're still waiting on a hardware breakthrough to solve. Actually, I've hidden a fraction of the truth: it's not that we can't bring these things onchain at all; we can, in a trust-minimized way. There's a huge movement called restaking going on in the space, where you can inherit trust from an underlying network to run your own network. It's similar to the 0-to-1 moment when Vitalik introduced general-purpose blockchains, back when we had to build a separate blockchain for each specific use case. Now we have AVSs (Actively Validated Services), trust-minimized networks that inherit security from an underlying PoS network like Ethereum. So, technically, we can now build any sort of network, be it storage, file sharing, social media, cloud, or AI training and inference, in a trust-minimized manner.

An important idea that can specifically be built on top of blockchains is the Decentralized Autonomous Organization (DAO). As the name suggests, DAOs are decentralized organizations formed by groups of strangers on the internet with the aim of achieving a common goal. Surprisingly, Steve Jobs foreshadowed the DAO in an early interview about the internet, with his analogy of "virtual organizations that are created to achieve a goal and disappear after achieving it". DAOs can operate without a leader, through coordination among community members: formulating and voting on proposals and governing their treasuries. What types of DAO can be formed? Technically any kind. Large protocols commonly use DAOs to let token holders govern the direction of the protocol. There have been many task-specific DAOs, like ConstitutionDAO, which set out to acquire an original copy of the US Constitution at auction for millions of dollars, only to lose the auction because its open treasury made its bidding power transparent. There have also been DAOs to crowdfund ignored but important science and research, typically known as decentralized science. DAOs are a prime example of digital democracy and digital governance through coordination. You could also form data DAOs, which provide token-gated access to specific enterprise-grade information and research.
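A minimal sketch of how token-weighted DAO voting might tally (hypothetical balances and a simple majority-of-supply rule; real DAOs encode this logic in smart contracts, with quorums, delegation, and timelocks):

```python
# Hypothetical governance-token balances for three members.
balances = {"alice": 400, "bob": 250, "carol": 350}

def tally(votes: dict, balances: dict) -> bool:
    """Pass the proposal if the 'yes' stake exceeds half the total supply.
    Voting power is proportional to tokens held, not one-person-one-vote."""
    total = sum(balances.values())
    yes = sum(balances[voter] for voter, choice in votes.items() if choice)
    return yes * 2 > total

votes = {"alice": True, "carol": True, "bob": False}
print(tally(votes, balances))  # alice + carol hold 750 of 1000 -> True
```

The same tally logic, plus a treasury and proposal queue, is essentially what protocol DAOs run onchain.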

Besides DAOs, blockchains are commonly known for universal identity systems and the tokenization of real-world and virtual assets. It's pretty common for us to create a new identity for every application on the internet. What if we could create one universal identity and use it to log into any application without creating a new one each time? In short, we can define decentralized identities onchain which cannot be censored or taken down and which can be used universally to access all onchain applications. Google's SSO may be the closest existing thing to this, but it is more limited and centralized. Tokenization is a powerful tool for bringing real-world assets like land, documents, and securities, as well as virtual assets like in-game items, onchain, which makes market making extremely efficient: blockchains are the fastest settlement layer we have for transactions, taking seconds to minutes, compared to the multi-day settlement periods of centralized infrastructure.

"The beauty of blockchains lie in their consensus mechanism"

4) Zero knowledge proofs (ZKP):

Zero knowledge proofs deserve a section of their own, because they're a magical tool we can use to prove the integrity of a satisfiability claim. Let's start with the same old problem of trust. Alice claims she knows a value x that satisfies the equation (x + x^2 + 1 = 0), but Alice doesn't want to share the value x with us. How do we verify that Alice's claim is true? Is it even possible to verify the correctness of a solution under these constraints? Turns out yes: we can verify that the x claimed by Alice in fact satisfies the given polynomial. Hmm, but how? Well, I'm not going to tell you exactly how, because it would be too long and technical. You'll have to take my word that it's possible by formulating cryptographic arguments, or be satisfied with a simple example that gives a high-level intuition for a similar problem.

Remember Waldo? We're going to play Where's Waldo. What if I were to tell you that I know where Waldo is, but I don't want to reveal where he actually is?

Where's Waldo?


You wouldn't believe me without a proof, would you? Here's my proof: 

Waldo

I didn't reveal exactly where Waldo was, but I provided a feasible proof that I knew where he was without revealing his position. Even with the proof, you have zero knowledge of where Waldo is, but you are satisfied enough to believe the claim is true. This is the underlying principle of zero knowledge proofs, where the prover proves the integrity of a claim without revealing the underlying value, known as the witness or private input. We can easily extrapolate this idea to formulate proofs about the integrity of mathematical claims or computations. The underlying machinery of zero knowledge proofs, privacy aside, is actually "verifiable computation": we can verify that any computation was done correctly.

"How could we actually verify these computation"

The most naive way to verify the correctness of a computation is recomputation. E.g., Alice claims that f(10) = 20, where f(x) = 2x. To verify this claim, we could recompute the value 10 over the function f(x) and check whether the output matches Alice's claim. That works for a small example like this one, but recomputation is infeasible and costly, especially when you're recomputing on blockchains or verifying the integrity of the training process of machine learning models, which are compute-heavy. We'd like to avoid recomputation as much as possible. What we want instead is a ZK (or in this case, validity) proof that can be verified in time linear or sublinear in the size of the input.
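Alice's claim and the naive recomputation check look like this in Python; the point is that the verifier pays the full cost of f, which is exactly what succinct proofs let us avoid:

```python
def f(x: int) -> int:
    """The agreed-upon computation: f(x) = 2x."""
    return 2 * x

def verify_by_recomputation(claimed_input: int, claimed_output: int) -> bool:
    """Naive verification: redo the whole computation and compare.
    Correct, but the verifier's cost equals the prover's cost -- fine
    for f(x) = 2x, hopeless for a 100-epoch training run."""
    return f(claimed_input) == claimed_output

print(verify_by_recomputation(10, 20))  # Alice's honest claim -> True
print(verify_by_recomputation(10, 25))  # a false claim -> False
```

A validity proof replaces the `f(claimed_input)` call with checking a short certificate, so verification stays cheap no matter how expensive f is.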

ZKP overview

"What benefit does this even give us?"

Being able to verify any arbitrary computation is a superpower; in fact, it's a new paradigm for building a truly open and verifiable internet. Just as blockchains remove trust between parties backed by economic security, verifiable computation removes trust between parties backed by cryptographic, that is mathematical, security. We're using the language of the universe to create a verifiable argument in an untrusted environment. This opens up a variety of new verifiable applications on the internet, with the ability to prove any arbitrary computation.

Let's take a scenario where we want to train an ML model on a particular dataset for 100 epochs, but we don't have the necessary resources for training. So, we offload the training to a service provider that charges $1 per epoch. We can't trust that the provider actually trains for all 100 epochs: the difference in model performance between epoch 80 and epoch 100 could be negligible, so the provider could stop at 80 and pocket a free $20. But if we ask the provider for a proof of computation (training, in this case) along with the trained model, we can easily verify that the computation was done as agreed. Similarly, we could create a marketplace for ML models where the validity of a model's accuracy is proven using a zero knowledge proof that hides the model parameters, enabling useful models to be easily bought and sold. If we combine ZKPs with blockchains, we can scale blockchains just as mentioned before, with state transition proofs committed and verified onchain; heavy computations that are not feasible on blockchains can be performed off-chain and verified onchain.

Besides verifiable computation, ZKPs also give us zero knowledge for free: the private witness is hidden from the verifier, so we can create truly private applications. A prime example is Tornado Cash, a ZK-backed privacy layer that can be added to any programmable blockchain and instantly hides the source of transactions done through the protocol. Tornado Cash was so powerful that the US government sanctioned the code itself and a founder was arrested for writing open-source code; the sanctions were overturned in court after about two years. ZKPs could enable a truly private social medium where we'd have no idea who a post was from, yet each of us could prove we weren't the author. Imagine something like Reddit but truly private, revealing humanity's true opinions and feelings. Another powerful application of ZKPs would be replacing existing identity infrastructure: imagine logging into an application like Twitter by saying, "Hey Twitter, here's my proof that I'm a user, but I'm not going to share my details with you."

Privacy is a double-edged sword: either you have privacy, or you don't. Along with the benefits of privacy comes the eventual risk of its misuse. Governments waged a war on end-to-end encryption in the 1990s, portraying it as a hotbed of criminal activity. The cypherpunk movement protected people's right to privacy, without which we would all be living in a total surveillance state.

It's important to note a critical property that AI and ZKPs share. What would that be? In one word: compression. A good way to think about it is that both AI and ZKPs are compression engines. AI is an example of lossy compression, whereas ZKPs are, to a great extent, lossless: AI generalizes a large body of information into a model, whereas a ZKP generates a succinct proof of a computational claim. ZKPs are usually implemented using two paradigms: a circuit-based approach and a VM-based approach. Circuits are harder to implement, as we have to design a circuit specific to the computation we're about to prove, but they are highly efficient and auditable; a zkVM, on the other hand, is a virtual machine that can prove arbitrary computation but is resource-intensive to run. We are yet to see a language-agnostic, efficient zkVM dominate the proving market. Zero knowledge proofs are a solved technology in terms of the main mission; they're now going through a scaling renaissance where proof size, generation time, and verification time are drastically shrinking.

ZK proofs are still probabilistic proofs that rely on the security of the underlying cryptographic protocol they're built upon. This means there is a minute, negligible chance that a fake proof might be accepted, but we could argue the same is true for all cryptographic systems.

Fully Homomorphic Encryption (FHE) is another tool with the capability to completely change how we compute. FHE is known as the holy grail of cryptography. Why, you ask? Using FHE, we can perform arbitrary computation directly on encrypted data. Yes, you read that right. We're used to encrypting data for privacy at rest and in transit, but with FHE we can compute blindly over the encrypted data and obtain the result by decryption. FHE is still heavily a research field, but it can be very powerful when combined with other technologies. E.g., AI assistants and agents pose systemic risk because they need access to confidential information: credit card details, medical reports, private videos, photos, and conversations. If agents utilized FHE and ZKPs, they could perform various interactions with the world without risking our private information, because even the agent wouldn't know what it's actually computing on. An interesting idea for AI utilizing FHE is detecting malicious conversations that threaten national security without actually reading the conversations, by performing the inference in encrypted form. FHE remains a research domain carrying a huge promise.
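We don't need full FHE to see the core idea of computing on ciphertexts. Textbook RSA is multiplicatively homomorphic, a partial homomorphism that FHE generalizes to arbitrary computation. A toy sketch in Python (tiny demo parameters; never use unpadded RSA in practice):

```python
# Textbook RSA: multiplying two ciphertexts multiplies the underlying
# plaintexts, without ever decrypting. FHE extends this from "just
# multiplication" to arbitrary programs over encrypted data.
p, q, e = 61, 53, 17
n = p * q                      # public modulus, 3233
phi = (p - 1) * (q - 1)        # 3120
d = pow(e, -1, phi)            # private exponent (modular inverse of e)

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

c = (enc(6) * enc(7)) % n      # compute on ciphertexts only...
assert dec(c) == 42            # ...yet the product of plaintexts comes out
```

The party doing the multiplication never learns 6, 7, or 42; only the key holder does, which is exactly the property an FHE-powered agent would rely on.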

"Good technologies are usually great compression engines"

5) AR, VR:

AR and VR are probably among the technologies the general public resists hardest. The hate is mostly driven by an inherent distrust of Zuckerberg. This mass hysteria may well mean the future is very bright. Every argument against AR and VR is mostly about social life and carries a doom-and-gloom narrative. Many people fearfully imagine a future where humans spend their days wearing bulky headsets, and fear it may blur the line between what's real and what's not. Despite these prevalent arguments, why do I believe the tech has a bright future? The reason is pretty simple: these technologies give us a better form of virtual communication. Before the internet, letters were the dominant form of communication, especially over long distances. Then we moved to text-based messaging on the internet. Physical, face-to-face communication was still better over short distances, but what the internet disrupted was long-distance communication with friends and family thousands of kilometers apart. Despite some resistance, we moved on to video conferencing, where instead of texting we could see each other and speak with our voices. Similarly, the next evolution of communication technology eventually leads us to AR and VR, where we can do much more. Combined with neural interfaces, we will be able to not just see a 3D avatar but also touch and feel them. The ability to feel other people as if they were right beside us brings humans closer despite the distance. Beyond neural interfaces, one could also share experiences, like watching a movie together or playing games that feel more natural and interactive. Any 2D digital experience could be elevated with AR and VR. Imagine you're writing code at a remote job and hit a bug you don't know how to solve. Why not ask your seniors? Today, they'd have to join a video call while you share your screen, or you'd have to grant them remote access.
But imagine you could simply share your virtual monitor to the other side, which makes things far more intuitive. The pass-through modes from Meta and Apple are already capable of such convenience.

Besides communication, these can be great educational tools, especially for escaping our boring, often confusing textbook representations. You could just put on your headset and visually interact with information represented in 3D in virtual space. You're a surgeon and want to practice heart surgery? Toggle on a simulation with your headset. You'd probably worry about the inconvenience of handheld controllers. These will be replaced by neural-interface bands that analyze your nerve impulses and, using AI, accurately reconstruct your hand movements in 3D. Most people believe VR will be the most widely adopted, but that might not be true: AR is more practical in day-to-day life. Instead of reaching for your phone for directions, you could just speak to your glasses, which would overlay the directions on the lenses. VR, meanwhile, will be used more for gaming, world simulation leading to virtual economies, and so on.

"What major issues are holding these technologies back?"

The hardest part about these technologies is packing enough computational ability into small glasses or a headset without making them look bulky and odd. Understanding the world around us is very computation-heavy; unless we can make devices smaller without compromising compute power, we won't be able to harness the benefits mixed reality can provide. It's pretty common for Gen Alpha to have used smartphones from childhood; the next generation will start using AR and VR technologies from theirs. Soon enough, the line between the real and the virtual will fade away.

"The line between real and virtual is starting to fade away"

6) Miscellaneous:

There are a few things that sound interesting but that I have very little knowledge of. These are: nuclear power (especially driven by the need for greater compute to drive AI), longevity, genetic engineering, and programmable biology, like NNC2215, an insulin that dynamically regulates its activity according to glucose levels in the body, xenobots, and nanobots designed by simulation on large supercomputers. It's pretty clear biology is going to have an enormous impact on our lives.
