
Not Entirely Fooling Around
What happens if you take four of today's most popular buzzwords and string them together? Does the result mean anything? Given that today is April 1 (as well as being Easter Sunday), I thought it'd be fun to explore this. Think of it as an Easter egg… from which something interesting just might hatch. And to make it clear: while I'm fooling around in stringing the buzzwords together, the details of what I'll say here are perfectly real.
But before we can really launch into talking about the whole string of buzzwords, let's discuss some of the background to each of the buzzwords on their own.
"Quantum"
Saying something is "quantum" sounds very modern. But actually, quantum mechanics is a century old. And over the course of the past century, it's been central to understanding and calculating lots of things in the physical sciences. But even after a century, "truly quantum" technology hasn't arrived. Yes, there are things like lasers and MRIs and atomic force microscopes that rely on quantum phenomena, and needed quantum mechanics in order to be invented. But when it comes to the practice of engineering, what's done is still basically all firmly classical, with nothing quantum about it.
Today, though, there's a lot of talk about quantum computing, and how it might change everything. I actually worked on quantum computing back in the early 1980s (so, yes, it's not that recent an idea). And I have to say, I was always a bit skeptical about whether it could ever really work, or whether any "quantum gains" one might get would be counterbalanced by inefficiencies in measuring what was going on.
But in any case, in the past 20 years or so there's been all sorts of nice theoretical work on formulating the idea of quantum circuits and quantum computing. Lots of things have been done with the Wolfram Language, including an ongoing project of ours to produce a definitive symbolic way of representing quantum computations. But so far, all we can ever do is calculate about quantum computations, because the Wolfram Language itself just runs on ordinary, classical computers.
There are companies that have built what they say are (small) true quantum computers. And actually, we've been hoping to hook the Wolfram Language up to them, so we can implement a QuantumEvaluate function. But so far, this hasn't happened. So I can't really vouch for what QuantumEvaluate will (or will not) do.
But the big idea is basically this. In ordinary classical physics, one can pretty much say that definite things happen in the world. A billiard ball goes in this direction, or that. But in any particular case, it's a definite direction. In quantum mechanics, though, the idea is that an electron, say, doesn't intrinsically go in a particular, definite direction. Instead, it essentially goes in all possible directions, each with a particular amplitude. And it's only when you insist on measuring where it went that you'll get a definite answer. And if you do many measurements, you'll just see probabilities for it to go in each direction.
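As a minimal illustration of how amplitudes relate to measured probabilities, here's a sketch in the Wolfram Language (the two-component state is just a made-up example):

(* Hypothetical amplitudes for an electron to go in each of two directions *)
amplitudes = {1/Sqrt[2], I/Sqrt[2]};

(* Measurement probabilities are the squared magnitudes of the amplitudes *)
probabilities = Abs[amplitudes]^2
(* {1/2, 1/2}: each direction is seen half the time *)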
Well, what quantum computing is trying to do is somehow to make use of the "all possible directions" idea in order to in effect get lots of computations done in parallel. It's a tricky business, and there are only a few types of problems where the theory's been worked out, the most famous being integer factoring. And, yes, according to the theory, a big quantum computer should be able to factor a big integer fast enough to make today's cryptography infrastructure implode. But the only thing anyone so far even claims to have built along these lines is a tiny quantum computer, one that definitely can't yet do anything terribly interesting.
But, OK, so one critical aspect of quantum mechanics is that there can be interference between different paths that, say, an electron can take. This is mathematically similar to the interference that happens in light, or even in water waves, just in classical physics. In quantum mechanics, though, there's supposed to be something much more intrinsic about the interference, leading to the phenomenon of entanglement, in which one basically can't ever "see the wave that's interfering"; one sees only the effect.
In computing, though, we're not making use of any kind of interference yet. Because (at least in modern times) we're always trying to deal with discrete bits, while the typical phenomenon of interference (say in light) basically involves continuous numbers. And my personal guess is that optical computing, which will surely come, will succeed in delivering some spectacular speedups. It won't be truly "quantum", though it might well be marketed that way. (For the technically minded, it's a complicated question how computation-theoretic results apply to continuous processes like interference-based computing.)
"Neural"
A decade ago computers didn't have any systematic way to tell whether a picture was of an elephant or a teacup. But in the past five years, thanks to neural networks, this has basically become easy. (Interestingly, the image identifier we made three years ago remains basically state of the art.)
So what's the big idea? Well, back in the 1940s people started thinking seriously about the brain being like an electrical machine. And this led to mathematical models of "neural networks", which were proved to be equivalent in computational power to mathematical models of digital computers. Over the years that followed, billions of actual digital electronic computers were built. And along the way, people (including me) experimented with neural networks, but nobody could get them to do anything terribly interesting. (Though for years they were quietly used for things like optical character recognition.)
But then, starting in 2012, a lot of people suddenly got very excited, because it seemed like neural nets were finally able to do some very interesting things, at first especially in connection with images.
So what happened? Well, a neural net basically corresponds to a big mathematical function, formed by connecting together lots of smaller functions, each involving a certain number of parameters ("weights"). At the outset, the big function basically just gives random outputs. But the way the function is set up, it's possible to "train the neural net" by tuning the parameters inside it so that the function will give the outputs one wants.
It's not like ordinary programming where one explicitly defines the steps a computer should follow. Instead, the idea is just to give examples of what one wants the neural net to do, and then to expect it to interpolate between them to work out what to do for any particular input. In practice one might show a bunch of images of elephants, and a bunch of images of teacups, and then do millions of little updates to the parameters to get the network to output "elephant" when it's fed an elephant, and "teacup" when it's fed a teacup.
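In the Wolfram Language this whole setup can be written down symbolically. Here's a minimal sketch, where the four-number "images" and the layer sizes are just made-up stand-ins for real data:

(* Made-up numeric "images", each labeled with the class it should map to *)
data = {{0.9, 0.8, 0.1, 0.2} -> "elephant", {0.2, 0.1, 0.9, 0.8} -> "teacup",
   {0.8, 0.9, 0.2, 0.1} -> "elephant", {0.1, 0.2, 0.8, 0.9} -> "teacup"};

(* A tiny neural net: a composition of simple parameterized functions *)
net = NetChain[{LinearLayer[8], Ramp, LinearLayer[2], SoftmaxLayer[]},
   "Input" -> 4, "Output" -> NetDecoder[{"Class", {"elephant", "teacup"}}]];

(* Training tunes the weights so the outputs match the examples *)
trained = NetTrain[net, data];

trained[{0.85, 0.75, 0.15, 0.25}]  (* hopefully -> "elephant" *)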
But here's the crucial idea: the neural net is somehow supposed to generalize from the specific examples it's shown, and to say that anything that's "like" an elephant example is an elephant, even if its particular pixels are quite different. Or, said another way, there are lots of images that might be fed to the network that are in the "basin of attraction" for "elephant" as opposed to "teacup". In a mechanical analogy, one might say that there are lots of places water might fall on a landscape, while still ending up flowing to one lake rather than another.
At some level, any sufficiently complicated neural net can in principle be trained to do anything. But what's become clear is that for lots of practical tasks (which turn out to overlap rather well with some of what our brains seem to do easily) it's realistic, with feasible amounts of GPU time, to actually train neural networks with a few million elements to do useful things. And, yes, in the Wolfram Language we've now got a rather sophisticated symbolic framework for training and using neural networks, with a lot of automation (that itself uses neural nets) for everything.
"Blockchain"
The word "blockchain" was first used in connection with the invention of Bitcoin in 2008. But of course the idea of a blockchain had precursors. In its simplest form, a blockchain is like a ledger, in which successive entries are coded in a way that depends on all previous entries.
Crucial to making this work is the concept of hashing. Hashing has always been one of my favorite practical computation ideas (and I even independently came up with it when I was about 13 years old, in 1973). What hashing does is to take some piece of data, like a text string, and make a number (say between 1 and a million) out of it. It does this by "grinding up the data" using some complicated function that always gives the same result for the same input, but will almost always give different results for different inputs. There's a function called Hash in the Wolfram Language, and for example applying it to the previous paragraph of text gives 8643827914633641131.
OK, but so how does this relate to blockchain? Well, back in the 1980s people invented "cryptographic hashes" (and actually they're very related to things I've done on computational irreducibility). A cryptographic hash has the feature that while it's easy to work out the hash for a particular piece of data, it's very hard to find a piece of data that will generate a given hash.
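Here's how that looks in practice (the strings are arbitrary; Hash is the built-in function mentioned above, and it also supports cryptographic methods like "SHA256"):

Hash["some piece of data"]       (* the same input always gives the same number *)
Hash["some piece of data!"]      (* a tiny change gives a completely different one *)
Hash["some piece of data", "SHA256"]   (* cryptographic: easy to compute, hard to invert *)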
So let's say you want to prove that you created a particular document at a particular time. Well, you could compute a hash of that document, and publish it in a newspaper (and I believe Bell Labs actually used to do this every week back in the 1980s). And then if anyone ever says "no, you didn't have that document yet" on a certain date, you can just say "but look, its hash was already in every copy of the newspaper!".
The idea of a blockchain is that one has a series of blocks, with each containing certain content, together with a hash. And then the point is that the data from which that hash is computed is a combination of the content of the block, and the hash of the preceding block. So this means that each block in effect confirms everything that came before it on the blockchain.
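Here's a toy sketch of that structure in the Wolfram Language (makeBlock is a hypothetical helper, and the ordinary Hash function stands in for a real cryptographic hash):

(* Each block's hash covers its own content plus the previous block's hash *)
makeBlock[prevHash_, content_] :=
  <|"Content" -> content, "PreviousHash" -> prevHash,
    "Hash" -> Hash[{content, prevHash}]|>

(* Build a three-block chain; tampering with any block changes every later hash *)
chain = FoldList[makeBlock[#1["Hash"], #2] &,
   makeBlock[0, "genesis"], {"alice pays bob", "bob pays carol"}];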
In cryptocurrencies like Bitcoin the big idea is to be able to validate transactions, and, for example, be able to guarantee just by looking at the blockchain that nobody has spent the same bitcoin twice.
How does one know that the blocks are added correctly, with all their hashes computed, etc.? Well, the point is that there's a whole decentralized network of thousands of computers around the world that store the blockchain, and there are lots of people (well, actually not so many in practice these days) competing to be the one to add each new block (and include transactions people have submitted that they want in it).
The rules are (more or less) that the first person to add a block gets to keep the fees offered on the transactions in it. But each block gets "confirmed" by lots of people including this block in their copy of the blockchain, and then continuing to add to the blockchain with this block in it.
In the latest version of the Wolfram Language, BlockchainBlockData[-1, BlockchainBase -> "Bitcoin"] gives a symbolic representation of the latest block that we've seen be added to the Bitcoin blockchain. And by the time maybe 5 more blocks have been added, we can be pretty sure everyone's satisfied that the block is correct. (Yes, there's an analogy with measurement in quantum mechanics here, which I'll be talking about soon.)
Traditionally, when people keep ledgers, say of transactions, they'll have one central place where a master ledger is maintained. But with a blockchain the whole thing can be distributed, so you don't have to trust any single entity to keep the ledger correct.
And that's led to the idea that cryptocurrencies like Bitcoin can flourish without central control, governments or banks involved. And in the last couple of years there's been lots of excitement generated by people making large amounts of money speculating on cryptocurrencies.
But currencies aren't the only thing one can use blockchains for, and Ethereum pioneered the idea that in addition to transactions, one can run arbitrary computations at each node. Right now with Ethereum the results of each computation are confirmed by being run on every single computer in the network, which is incredibly inefficient. But the bigger point is just that computations can be running autonomously on the network. And the computations can interact with each other, defining "smart contracts" that run autonomously, and say what should happen in different circumstances.
Pretty much any nontrivial smart contract will eventually need to know about something in the world ("did it rain today?", "did the package arrive?", etc.), and that has to come from off the blockchain, from an "oracle". And it so happens (yes, as a result of a few decades of work) that our Wolfram Knowledgebase, which powers Wolfram|Alpha, etc., provides the only realistic foundation today for making such oracles.
"AI"
Back in the 1950s, people thought that pretty much anything human intelligence could do, it'd soon be possible to make artificial (machine) intelligence do better. Of course, this turned out to be much harder than people expected. And in fact the whole concept of "creating artificial intelligence" pretty much fell into disrepute, with almost nobody wanting to market their systems as "doing AI".
But about five years ago (particularly with the unexpected successes in neural networks) all that changed, and AI was back, and cooler than ever.
What is AI supposed to be, though? Well, in the big picture I see it as being the continuation of a long trend of automating things that humans previously had to do for themselves, and in particular doing that through computation. But what makes a computation an example of AI, and not just, well, a computation?
I've built a whole scientific and philosophical structure around something I call the Principle of Computational Equivalence, which basically says that the universe of possible computations, even those done by simple systems, is full of computations that are as sophisticated as anything can ever be, and certainly as sophisticated as what our brains can do.
In doing engineering, and in building programs, though, there's been a tremendous tendency to try to prevent anything too sophisticated from happening, and to set things up so that the systems we build just follow exactly the steps we can foresee. But there's much more to computation than that, and in fact I've spent much of my life building systems that make use of this.
Wolfram|Alpha is a great example. Its goal is to take as much knowledge about the world as possible and make it computable, and then to be able to answer questions about it as expertly as possible. Experientially, it "feels like AI", because you get to ask it questions in natural language, just as you would of a human, and then it computes answers, often with unexpected sophistication.
Most of what's inside Wolfram|Alpha doesn't work anything like brains probably do, not least because it's leveraging the last few hundred years of formalism that our civilization has developed, which allows us to be much more systematic than brains naturally are.
Some of the things modern neural nets do (and, for example, our machine learning system in the Wolfram Language does) perhaps work a little more like brains. But in practice what really seems to make things "seem like AI" is just that they're operating on the basis of sophisticated computations whose behavior we can't readily understand.
These days the way I see it is that out in the computational universe there's amazing computational power. And the issue is just to be able to harness that for useful human purposes. Yes, "an AI" can go off and do all sorts of computations that are just as sophisticated as our brains. But the issue is: can we align what it does with things we care about doing?
And, yes, I've spent a large part of my life building the Wolfram Language, whose purpose is to provide a computational communication language in which humans can express what they want in a form suitable for computation. There's lots of "AI power" out there in the computational universe; our challenge is to harness it in a way that's useful to us.
Oh, and we want to have some kind of computational smart contracts that define how we want the AIs to behave (e.g. "be nice to humans"). And, yes, I think the Wolfram Language is going to be the right way to express those things, and build up the "AI constitutions" we want.
Common Themes
At the outset, it might seem as if "quantum", "neural", "blockchain" and "AI" are all quite separate concepts, without a lot of commonality. But actually it turns out that there are some amazing common themes.
One of the strongest has to do with complexity generation. And in fact, in their different ways, all the things we're talking about rely on complexity generation.
What do I mean by complexity generation? One day I won't have to explain this. But for now I probably still do. And somehow I find myself always showing the same picture, of my all-time favorite science discovery, the rule 30 automaton.
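In the Wolfram Language, the picture is the output of a single line:

(* 200 steps of the rule 30 cellular automaton, grown from a single black cell *)
ArrayPlot[CellularAutomaton[30, {{1}, 0}, 200]]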
And the point here is that even though the rule (or program) is very simple, the behavior of the system just spontaneously generates complexity, and apparent randomness. And what happens is complicated enough that it shows what I call "computational irreducibility", so that you can't reduce the computational work needed to see how it will behave: you essentially just have to follow each step to find out what will happen.
There are all sorts of important phenomena that revolve around complexity generation and computational irreducibility. The most obvious is just the fact that sophisticated computation is easy to get, which is in a sense what makes something like AI possible.
But OK, how does this relate to blockchain? Well, complexity generation is what makes cryptographic hashing possible. It's what allows a simple algorithm to make enough apparent randomness to successfully be used as a cryptographic hash.
In the case of something like Bitcoin, there's another connection too: the protocol needs people to have to make some investment to be able to add blocks to the blockchain, and the way this is achieved is (bizarrely enough) by forcing them to do irreducible computations that effectively cost computer time.
What about neural nets? Well, the very simplest neural nets don't involve much complexity at all. If one drew out their "basins of attraction" for different inputs, they'd just be simple polygons. But in useful neural nets the basins of attraction are much more complicated.
It's most obvious when one gets to recurrent neural nets, but it happens in the training process for any neural net: there's a computational process that effectively generates complexity as a way to approximate things like the distinctions ("elephant" vs. "teacup") that get made in the world.
Alright, so what about quantum mechanics? Well, quantum mechanics is at some level full of randomness. It's essentially an axiom of the traditional mathematical formalism of quantum mechanics that one can only compute probabilities, and that there's no way to "see under the randomness".
I personally happen to think it's pretty likely that that's just an approximation, and that if one could get "underneath" things like space and time, we'd see how the randomness actually gets generated.
But even in the standard formalism of quantum mechanics, there's a kind of complementary place where randomness and complexity generation are important, and it's in the somewhat mysterious process of measurement.
Let's start off by talking about another phenomenon in physics: the Second Law of Thermodynamics, or Law of Entropy Increase. This law says that if you start, for example, a bunch of gas molecules in a very orderly configuration (say all in one corner of a box), then with overwhelming probability they'll soon randomize (and e.g. spread out randomly all over the box). And, yes, this kind of trend towards randomness is something we see all the time.
But here's the strange part: if we look at the laws for, say, the motion of individual gas molecules, they're completely reversible. Just as those laws say that the molecules can randomize themselves, they equally say that the molecules should be able to unrandomize themselves.
But why do we never see that happen? It's always been a bit mysterious, but I think there's a clear answer, and it's related to complexity generation and computational irreducibility. The point is that when the gas molecules randomize themselves, they're effectively encrypting the initial conditions they were given.
It's not impossible to place the gas molecules so they'll unrandomize rather than randomize; it's just that to work out how to do this effectively requires breaking the encryption, or in essence doing something very much like what's involved in Bitcoin mining.
OK, so how does this relate to quantum mechanics? Well, quantum mechanics itself is fundamentally based on probability amplitudes, and interference between different things that can happen. But our experience of the world is that definite things happen. And the bridge from quantum mechanics to this involves the rather "bolted-on" idea of quantum measurement.
The notion is that some little quantum effect ("the electron ends up with spin up, rather than down") needs to get amplified to the point where one can really be sure what happened. In other words, one's measuring device has to make sure that the little quantum effect associated with one electron cascades so that it's spread across lots and lots of electrons and other things.
And here's the tricky part: if one wants to avoid interference being possible (so we can really perceive something "definite" as having happened), then one needs to have enough randomness that things can't somehow equally well go backwards, just like in thermodynamics.
So even though pure quantum circuits as one imagines them for practical quantum computers typically have a sufficiently simple mathematical structure that they (presumably) don't intrinsically generate complexity, the process of measuring what they do inevitably must generate complexity. (And, yes, it's a reasonable question whether that's in some sense where the randomness one sees "really" comes from… but that's a different story.)
Reversibility, Irreversibility and More
Reversibility and irreversibility are a strangely common theme, at least between "quantum", "neural" and "blockchain". If one ignores measurement, a fundamental feature of quantum mechanics is that it's reversible. What this means is that if one takes a quantum system and lets it evolve in time, then one will always, at least in principle, be able to take whatever comes out and run it backwards, precisely reproducing where one started from.
Typical computation isn't reversible like that. Consider an OR gate, which might be a basic component in a computer. In p OR q, the result will be true if either p or q is true. But just knowing that the result is "true", you can't figure out which of p and q (or both) is true. In other words, the OR operation is irreversible: it doesn't preserve enough information for you to invert it.
In quantum circuits, one uses gates that, say, take two inputs (say p and q), and give two outputs (say p' and q'). And from those two outputs one can always uniquely reproduce the two inputs.
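Here's a little Wolfram Language sketch of the contrast (cnot is a hypothetical helper implementing the classical analog of a CNOT gate):

(* Irreversible: four input pairs collapse to just two possible outputs *)
BooleanTable[{p, q, Or[p, q]}, {p, q}]

(* Reversible: each output pair comes from exactly one input pair *)
cnot[{p_, q_}] := {p, Xor[p, q]}
cnot /@ Tuples[{True, False}, 2]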
OK, but now let's talk about neural nets. Neural nets as they're usually conceived are fundamentally irreversible. Here's why. Imagine (again) that you make a neural network to distinguish elephants and teacups. To make that work, a very large number of different possible input images all have to map, say, to "elephant". It's like the OR gate, but more so. Just knowing the result is "elephant", there's no unique way to invert the computation. And that's the whole point: one wants anything that's enough like the elephant pictures one showed to still come out as "elephant"; in other words, irreversibility is central to the whole operation of at least this kind of neural net.
So, OK, then how could one possibly make a quantum neural net? Maybe it's just not possible. But if so, then what's going on with brains? Because brains seem to work very much like neural nets. And yet brains are physical systems that presumably follow quantum mechanics. So then how are brains possible?
At some level the answer has to do with the fact that brains dissipate heat. Well, what is heat? Microscopically, heat is the random motion of things like molecules. And one way to state the Second Law of Thermodynamics (or the Law of Entropy Increase) is that under normal circumstances those random motions never spontaneously organize themselves into any kind of systematic motion. In principle all those molecules could start moving in just such a way as to turn a flywheel. But in practice nothing like that ever happens. The heat just stays as heat, and doesn't spontaneously turn into macroscopic mechanical motion.
OK, but so let's imagine that microscopic processes involving, say, collisions of molecules, are precisely reversible, as in fact they are according to quantum mechanics. Then the point is that when lots of molecules are involved, their motions can get so "encrypted" that they just seem random. If one could look at all the details, there'd still be enough information to reverse everything. But in practice one can't do that, and so it seems like whatever was going on in the system has just "turned into heat".
So then what about producing "neural net behavior"? Well, the point is that while one part of a system is, say, systematically "deciding to say elephant", the detailed information that would be needed to go back to the initial state is getting randomized, and turning into heat.
To be fair, though, this is glossing over quite a bit. And in fact I don't think anyone knows how one can actually set up a quantum system (say a quantum circuit) that behaves in this kind of way. It'd be pretty interesting to do so, because it'd potentially tell us a lot about the quantum measurement process.
To explain how one goes from quantum mechanics, in which everything is just an amplitude, to our experience of the world, in which definite things seem to happen, people sometimes end up trying to appeal to mystical features of consciousness. But the point about a quantum neural net is that it's quantum mechanical, yet it "comes to definite conclusions" (e.g. elephant vs. teacup).
Is there a good toy model for such a thing? I suspect one could create one from a quantum version of a cellular automaton that shows phase transition behavior, actually not unlike the detailed mechanics of a real quantum magnetic material. And what will be necessary is that the system has enough components (say spins) that the "heat" needed to compensate for its apparent irreversible behavior will stay away from the part where the irreversible behavior is observed.
Let me make a perhaps slightly confusing side remark. When people talk about "quantum computers", they are usually talking about quantum circuits that operate on qubits (the quantum analog of classical bits). But sometimes they actually mean something different: they mean quantum annealing devices.
Imagine you've got a bunch of dominoes and you're trying to arrange them on the plane so that some matching condition associated with the markings on them is always satisfied. It turns out this can be a very hard problem. It's related to computational irreducibility (and perhaps to problems like integer factoring). Because in the end, to find, say, the configuration that does best in satisfying the matching condition everywhere, one may effectively just have to try out all possible configurations, and see which one works best.
Well, OK, but let's imagine that the dominoes were actually molecules, and the matching condition corresponds to arranging molecules to minimize energy. Then the problem of finding the best overall configuration is like the problem of finding the minimum energy configuration for the molecules, which physically should correspond to the most stable solid structure that can be formed from the molecules.
And, OK, it might be hard to compute that. But what about an actual physical system? What will the molecules in it actually do when one cools it down? If it's easy for the molecules to get to the lowest energy configuration, they'll just do it, and one will have a nice crystalline solid.
People sometimes assume that "the physics will always figure it out", and that even if the problem is computationally hard, the molecules will always find the optimal solution. But I don't think this is actually true, and I think what instead will happen is that the material will turn mushy, not quite liquid and not quite solid, at least for a long time.
Still, there's the idea that if one sets up this energy minimization problem quantum mechanically, then the physical system will be successful at finding the lowest energy state. And, yes, in quantum mechanics it might be harder to get stuck in local minima, because there is tunneling, etc.
But here's the confusing part: when one trains a neural net, one ends up having to effectively solve minimization problems like the one I've described ("which values of weights make the network minimize the error in its output relative to what one wants?"). So people end up sometimes talking about "quantum neural nets", meaning domino-like arrays which are set up to have energy minimization problems that are mathematically equivalent to the ones for neural nets.
(Yet another connection is that convolutional neural nets, of the kind used for example in image recognition, are structured very much like cellular automata, or like dynamic spin systems. But in training neural nets to handle multiscale features in images, one seems to end up with scale invariance similar to what one sees at critical points in spin systems, or their quantum analogs, as analyzed by renormalization group methods.)
OK, but let's return to our whole buzzword string. What about blockchain? Well, one of the big points about a blockchain is in a sense to be as irreversible as possible. Once something has been added to a blockchain, one wants it to be inconceivable that it should ever be reversed out.
How is that achieved? Well, it's curiously similar to how it works in thermodynamics or in quantum measurement. Imagine someone adds a block to their copy of a blockchain. Well, then the idea is that lots of other people all over the world will make their own copies of that block on their own blockchain nodes, and then go on independently adding more blocks from there.
Bad things would happen if lots of the people maintaining blockchain nodes decided to collude to not add a block, or to modify it, etc. But it's a bit like with gas molecules (or degrees of freedom in quantum measurement). By the time everything is spread out among enough different components, it's extremely unlikely that it'll all concentrate together again to have some systematic effect.
Of course, people might not be quite like gas molecules (though, frankly, their observed aggregate behavior, e.g. jostling around in a crowd, is often strikingly similar). But all sorts of things in the world seem to depend on an assumption of randomness. And indeed, that's probably necessary to maintain stability and robustness in markets where trading is happening.
OK, so when a blockchain tries to ensure that there's a "definite history", it's doing something very similar to what a quantum measurement has to do. But just to close the loop a little more, let's ask what a quantum blockchain might be like.
Yes, one could imagine using quantum computing to somehow break the cryptography in a standard blockchain. But the more interesting (and in my view, realistic) possibility is to make the actual operation of the blockchain itself be quantum mechanical.
In a typical blockchain, there's a certain element of arbitrariness in how blocks get added, and who gets to do it. In a "proof of work" scheme (as used in Bitcoin and currently also Ethereum), to find out how to add a new block one searches for a "nonce": a number to throw in to make a hash come out in a certain way. There are always many possible nonces (though each one is hard to find), and the typical strategy is to search randomly for them, successively testing each candidate.
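Here's a toy version of that search in the Wolfram Language (findNonce is a hypothetical helper, ordinary Hash stands in for a real cryptographic hash, and the target is chosen arbitrarily):

(* Keep incrementing the nonce until the block's hash comes out below the target *)
findNonce[blockData_, target_] :=
  NestWhile[# + 1 &, 0, Hash[{blockData, #}] >= target &]

nonce = findNonce["block contents", 10^16];
Hash[{"block contents", nonce}]   (* a hash below the target *)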
But one could imagine a quantum version in which one is in effect searching in parallel for all possible nonces, and as a result producing many possible blockchains, each with a certain quantum amplitude. And to fill out the concept, imagine that, for example in the case of Ethereum, all computations done on the blockchain were reversible quantum ones (achieved, say, with a quantum version of the Ethereum Virtual Machine).
But what would one do with such a blockchain? Yes, it would be an interesting quantum system with all kinds of dynamics. But to actually connect it to the world, one has to get data on and off the blockchain, or, in other words, one has to do a measurement. And the act of that measurement would in effect force the blockchain to pick a definite history.
OK, so what about a "neural blockchain"? At least today, by far the most common strategy with neural nets is first to train them, then to put them to work. (One can train them "passively" by just feeding them a fixed set of examples, or one can train them "actively" by having them in effect "ask" for the examples they want.) But by analogy with people, neural nets can also have "lifelong learning", in which they're continually getting updated based on the "experiences" they're having.
So how do the neural nets record these experiences? Well, by changing various internal weights. And in some ways what happens is like what happens with blockchains.
Science fiction sometimes talks about direct brain-to-brain transfer of memories. And in a neural net context this might mean just taking a big block of weights from one neural net and putting it into another. And, yes, it can work well to transfer definite layers in one network to another (say to transfer information on what features of images are worth picking out). But if you try to insert a "memory" deep inside a network, it's a different story. Because the way a memory is represented in a network will depend on the whole history of the network.
It's like in a blockchain: you can't just replace one block and expect everything else to work. The whole thing has been knitted into the sequence of things that happen through time. And it's the same thing with memories in neural nets: once a memory has formed in a certain way, subsequent memories will be built on top of this one.
Bringing It Together
At the outset, one might have thought that "quantum", "neural" and "blockchain" (not to mention "AI") didn't have much in common (other than that they're current buzzwords), and that in fact they might in some sense be incompatible. But what we've seen is that actually there are all sorts of connections between them, and all sorts of fundamental phenomena that are shared between systems based on them.
So what might a "quantum neural blockchain AI" ("QNBAI") be like?
Let's look at the pieces again. A single blockchain node is a bit like a single brain, with a definite memory. But in a sense the whole blockchain network becomes robust through all the interactions between different blockchain nodes. It's a little like how human society and human knowledge develop.
Let's say we've got a "raw AI" that can do all sorts of computation. Well, the big issue is whether we can find a way to align what it can do with things that we humans think we want to do. And to make that alignment, we essentially have to communicate with the AI at a level of abstraction that transcends the details of how it works: in effect, we have to have some symbolic language that we both understand, and that, for example, the AI can translate into the details of how it operates.
Inside the AI it may end up using all kinds of "concepts" (say to distinguish one class of images from another). But the question is whether those concepts are ones that we humans in a sense "culturally understand". In other words, are those concepts (and, for example, the words for them) ones that there's a whole widely understood story about?
In a sense, concepts that we humans find useful for communication are ones that have been used in all sorts of interactions between different humans. The concepts become robust by being "knitted into" the thought patterns of many interacting brains, a bit like the data put on a blockchain becomes a robust part of "collective blockchain memory" through the interactions between blockchain nodes.
OK, so there's something strange here. At first it seemed like QNBAIs would have to be something completely exotic and unfamiliar (and perhaps impossible). But somehow as we go over their features they start to seem awfully familiar, and actually awfully like us.
Yup, according to the physics, we know we are "quantum". Neural nets capture many core features of how our brains seem to work. Blockchain, at least as a general concept, is somehow related to individual and societal memory. And AI, well, AI in effect tries to capture what's aligned with human goals and intelligence in the computational universe, which is also what we're doing.
OK, so what's the closest thing we know to a QNBAI? Well, it's probably all of us!
Maybe that sounds crazy. I mean, why should a string of buzzwords from 2018 connect like that? Well, at some level perhaps there's an obvious answer: we tend to create and study things that are relevant to us, and somehow revolve around us. And, more than that, the buzzwords of today are things that are somehow just within the scope that we can now think about with the concepts we've currently developed, and that are somehow connected through them.
I must say that when I chose these buzzwords I had no idea they'd connect at all. But as I've tried to work through things in writing this, it's been remarkable how much connection I've found. And, yes, in a fittingly bizarre end to a somewhat bizarre journey, it does seem to be the case that a string plucked from today's buzzword universe has landed very close to home. And maybe in the end, at least in some sense, we are our buzzwords!