Scaling

From our white paper:

page 3:
1. Lack of scalability. Decentralized networks and blockchain databases are inherently slow and low-capacity, compared with existing centralized clearing and storage solutions.

page 5:
(A good survey of development projects under way in support of Bitcoin scaling can be read here.) (21)

Second, it is now obvious that even a scalable cryptocurrency does not constitute an entire ecosystem by itself. Blockchain is a shiny new tool in the toolbox for building a better, more equitable economic structure for the future. But it is not, by itself, the entire system architecture, nor is it the solution to every extant problem in software or economics. Blockchain cryptocurrency needs to be utilized in an appropriate manner, to achieve appropriate objectives, as part of a larger crypto-economic ecosystem design.

Our wallet and payment clearing network, which is built using Voucher-Safe (22) technology, sits on top of the Ascension blockchain (and other blockchains), and provides transaction settlement that is fully scalable (solving issue #1 above), as untraceable as physical cash (issue #2), and practically instantaneous (issue #4). Note that this technology already exists (beta deployment occurred in 2011), and accomplishes essentially everything that is hoped for from Lightning networks (23), or Lumino (24), and more. While functionality is distributed between servers with distinct functions, software updates are always controllable (issue #11).

page 8:
II. Blockchain Challenges: achieving scalability, speed, and privacy, while maintaining censorship resistance
Most of the controversy in the Bitcoin arena to date can be traced directly to varying visions for achieving the scalability necessary for a cryptocurrency to play a role as a significant global currency, and not merely as a party favor for giddy speculators. Because this is such an important issue in the marketplace, and because our solution approach is quite different, we will delve into this matter at some length. We shall begin with a more general discussion on the role of centralization in clearing mechanisms, while keeping in mind our desired separation of the minting and clearinghouse functions.

Blockchains famously represent a decentralized architecture for clearing payment transactions. This is because transactions broadcast to the network are collected and “mined” into blocks, theoretically by whichever full node first solves the block. However, in general there exists an inverse relationship between the number of eligible settlement nodes and the clearing efficiency and capacity of the network. In this section we will examine some of the tradeoffs associated with greater or lesser degrees of centralization.

Consider the diagram in Illustration 1 below. This diagram reflects the possible degrees of centralization in a settlement architecture, from total centralization around a single node, to an arbitrarily large number of clearing nodes. While there are of course no actual architectures at the extreme of an infinite number of nodes, there do exist actual examples of total centralization around a single node. PayPal (27), MasterCard, Visa and other credit card networks, money transmitters such as Western Union and MoneyGram, ACH-based (28) networks like clearXchange (aka Zelle) (29), and inter-bank settlement systems such as CHIPS (30), SWIFT (31) and FedWire (32), all serve as large scale examples of centralized settlement. Naturally a single clearing node does not imply a single computer; but since a CPU and database cluster controlled by one company is involved, it can fairly be considered as a single logical node. These types of systems are not “p2p” (direct person-to-person) because they always clear and record on the books of a third party, with which both of the parties to the transaction typically have accounts.

[Illustration 1: Ascension diagram showing the degrees of centralization in a settlement architecture]

page 10:
The example of clearing checks, ACH payments, and other transfers between USA bank accounts presents a variable degree of centralization. If both the sending and receiving account are held in the same bank, this is a degenerate case in which a matching debit and credit to the two accounts on the bank’s ledger suffices to settle the payment. If however the two accounts are in different banks, then the transaction requires a clearinghouse (33) to settle between the respective banks. About $1.2 trillion per day of this activity is handled through the CHIPS system. CHIPS was organized in 1970 by eight New York banks that were members of the Federal Reserve System. CHIPS is thus both a competitor and a customer of the Federal Reserve. The clearing function is ultimately handled by the regional Fed banks (of which there are twelve). Clearinghouses like CHIPS and SWIFT act to “net out” (230) a large portion of the transactions for speed, so that the Fed banks only see the net flows between member banks. So-called “international” wire transactions also involve the correspondent accounts of the foreign banks at banks in the USA. (Technically, all US dollar-denominated retail accounts anywhere in the world are in fact held at one of 13 US domestic commercial banks. Thus there is really no such thing as a “foreign” bank account denominated in US dollars; an inconvenient fact that cryptocurrency exchanges have recently learned to their cost.) (34) Ironically, all of this legacy system architecture is not dissimilar to the current design for Lightning Networks!
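To make the netting mechanics concrete, here is a minimal sketch (in Python, with hypothetical banks and amounts; not CHIPS’s actual algorithm) of how a clearinghouse collapses gross interbank payments into net positions:

    from collections import defaultdict

    # Hypothetical gross interbank payments: (from_bank, to_bank, amount).
    payments = [
        ("BankA", "BankB", 500),
        ("BankB", "BankA", 300),
        ("BankA", "BankC", 200),
        ("BankC", "BankB", 100),
    ]

    # "Netting out": collapse all gross flows into one net position per bank.
    net = defaultdict(int)
    for sender, receiver, amount in payments:
        net[sender] -= amount
        net[receiver] += amount

    for bank, position in sorted(net.items()):
        print(f"{bank}: {position:+d}")
    # BankA: -400, BankB: +300, BankC: +100 (net positions sum to zero).
    # The settlement layer (e.g. the Fed) sees only these net movements,
    # not the four gross payments.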

An important observation is that the number of processors required to settle a US dollar bank-to-bank payment varies, depending upon the geographical locations of the parties to the transaction. Also, due to their being rooted in an accounting system called double-entry bookkeeping (used widely by banks since the end of the 15th century) (35), banks historically settled transactions on the basis of a “business day.” This may be visualized as analogous to a “block” (in the blockchain sense) spanning at least 24 hours (longer where weekends or holidays are involved), in which the final balances of each account after all of the transactions have been settled represent the set of unspent transaction outputs (UTXOs) for the next block. Prior to January 2001, CHIPS settled at the end of the day, but now provides intraday payment finality through a real-time system.

While it isn’t related to payments, the internet’s DNS (domain name service) lookup mechanism is also semi-centralized, much like the 12 regional FRBs. There are 13 root servers (A – M), any one of which can be contacted to initiate a domain-IP lookup. Changes to the database have to propagate before the result will become consistent across all 13 root servers, a process which can require several hours. This serves as an example of just enough decentralization to provide robust parallelism, without introducing excessive synchronization overhead. While it’s not perfect, the mechanism suffices because most domains don’t change their IP address blocks very often (and for those that do there’s dynamic DNS).

The clearing mechanism used by the cryptocurrencies DASH (36) and PIVX (37) presents an interesting middle case. Payments are submitted by ordinary client nodes, but aggregated and settled by “master nodes,” which are high-volume clients accorded special privileges. Although the motivation for this aggregation is privacy, via the obfuscation of individual transactions by mixing them with unrelated ones (using an algorithm known as “CoinJoin”) (38), the effect is to create a clearing layer (229) where the number of nodes involved with transaction settlement (~4600 for Dash) is much less than the total number of nodes in the system. The distributed ledger system Ripple similarly operates with a significant but fixed number of transaction validators. (39)
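As a rough illustration of the CoinJoin idea (a conceptual sketch only, with hypothetical names; not DASH’s actual implementation), several unrelated payments of equal denomination are merged into one transaction so that the input-output linkage is obscured:

    import random

    # Hypothetical pending payments: (payer_input, payee_output, amount).
    # Equal denominations matter: identical amounts leak no linkage.
    pending = [
        ("in_alice", "out_merchant1", 10),
        ("in_bob", "out_merchant2", 10),
        ("in_carol", "out_merchant3", 10),
    ]

    inputs = [(src, amt) for src, _, amt in pending]
    outputs = [(dst, amt) for _, dst, amt in pending]
    random.shuffle(outputs)  # break the positional pairing of the two lists

    # One joined transaction: an observer sees three inputs and three
    # outputs, but cannot tell which input funded which output.
    joined_tx = {"inputs": inputs, "outputs": outputs}
    print(joined_tx)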

Bitcoin itself, along with most altcoins, employs a highly decentralized clearing architecture where the number of settlement nodes is bounded only by the number of “full nodes” (with mining activated) that are participating in the network. However due to such nodes banding together into mining pools, which can sometimes be viewed as a single logical node controlled by only one operator, the degree of decentralization is actually much less than it appears. Most bitcoin users and merchants do not operate their own full nodes, and instead rely on wallets hosted by third parties. This results in further operational centralization. As a practical matter, exchanges on which bitcoin can be bought and sold for national fiat currencies are typically very centralized, and many of these operations hold large proportions of their clients’ coins in trust at addresses controlled by themselves. If the exchange is honest, this isn’t inherently bad (give or take hacking risks); (40) but it definitely isn’t congruent with the popular vision of users personally controlling their own coins.

Efficiency and Throughput
In general there exists a positive correlation between the centralization of a settlement architecture and both its efficiency and its throughput. PayPal is reputed to be able to process on the order of 400 transactions per second (tx/s), while major credit card networks can process on the order of 50K tx/s or more. (41) These networks typically process transactions in seconds, certainly in no longer than a minute. Moreover whenever greater capacity is required, additional computing resources should suffice to provide the expansion (although not necessarily in a linear fashion). By contrast Bitcoin, the most popular cryptocurrency, requires ten minutes or more to record a transaction, or up to an hour if 6 confirmations are wanted. By design, a single Bitcoin block can only hold enough transactions to support around 3.5 tx/s. Bitcoin’s own popularity is thus limiting its potential market penetration, through unacceptably slow clearing, grossly inadequate throughput, gradually increasing fees, and poor reliability (especially when low fees and smaller transactions are attempted). Faster clearance requires higher fees (42), since price is a rationing device for scarce block space. Inadequate fees lead to a large backlog of unconfirmed transactions. (43)
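The ~3.5 tx/s figure follows from simple arithmetic. The sketch below reproduces it, assuming an average transaction size of roughly 475 bytes (the true average varies with the transaction mix):

    # Back-of-envelope Bitcoin throughput (parameter values are assumptions).
    block_size_bytes = 1_000_000   # 1 MB block size cap
    block_interval_s = 600         # ~10 minutes between blocks
    avg_tx_bytes = 475             # assumed average transaction size

    tx_per_block = block_size_bytes / avg_tx_bytes   # ~2,100 transactions
    throughput = tx_per_block / block_interval_s     # ~3.5 tx/s
    print(f"~{throughput:.1f} tx/s")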

While the gospel of cryptocurrency avers that decentralization is always good and centralization is always bad, we should duly note that decentralized cryptocurrencies currently handle only a minuscule portion of global payment transactions, both from the perspective of aggregate value and also by transaction count. This is actually not surprising given the inherent correlation between increasing decentralization and increasing synchronization overhead. In a fully centralized system there is one node, and it is fully trusted. In a decentralized architecture there are many nodes, and for safety’s sake they must all be assumed to be bad actors, since an internet connection is the only requirement for entry into the network. (Being able to deal with this assumption about hostile actors is known as Byzantine fault tolerance (44), which is a common characteristic of blockchain systems.)

If we let N be the number of clearing nodes, then we can broadly (and simplistically) define network clearing efficiency E thus:

                                E = 1/N

This implies that efficiency is maximized with 1 clearing node, and approaches zero as N approaches infinity.

Similarly, we can define the effort (overhead) required for network synchronization S as:

                                S = N * (N – 1) / 2

Actually this is a worst case, since the network can relay blocks (45), making it unnecessary (46) for a node which clears a transaction (i.e., mines a block) to directly inform every other peer. The important inference is that this effort increases as a function of the number of peer nodes in the network, as well as with the volume of transactions. This factor generates an inherent and unavoidable tension in blockchain systems between decentralization and efficiency. Even blockchain systems developed after Bitcoin have a maximum “speed limit” which is far below that of centralized clearing mechanisms: Hyperledger Fabric can do about 400 tx/s (47), while Ethereum can do up to 10 tx/s. (48) (Which may prove insufficient to keep up with the crypto kitties.) (49) By contrast the logging system Apache Kafka (50) (which only needs to be crash fault tolerant) can do millions of tx/s.
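Tabulating these two toy measures side by side makes the tension explicit; a brief sketch:

    # E = 1/N shrinks toward zero as nodes are added, while S = N(N-1)/2
    # grows quadratically: the inherent tension described above.
    for n in (1, 10, 100, 1_000, 10_000):
        efficiency = 1 / n
        sync_overhead = n * (n - 1) // 2   # worst-case number of peer pairs
        print(f"N={n:>6}  E={efficiency:.4f}  S={sync_overhead:>12,}")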

The point is that there is always a maximum speed for any distributed system, inherent in the fact that it’s distributed, which will always be lower than that of a centralized system. In a distributed system, every new user/node has to track state for every other user/node, at a rate that is not sustainable. This is a long-standing, well-known problem in computer science. (51) Simply put, you cannot publish ever increasing quantities of data onto the network, and simultaneously reduce (or even maintain) the time interval it takes for the data to become globally consistent. This isn’t just “a hard problem”; it’s actually a logical fallacy. It ceases to be a fallacy only if one can assume the existence of both infinitely fast nodes and infinitely low (i.e. zero) latency across the network, neither of which can actually exist. We sincerely wish the brilliant Mr. Buterin good luck solving this conundrum with sharding. (52)

Why then is decentralization inherently “good” while centralization is ipso facto “bad”? This dictum is plainly not related to concerns about efficiency or throughput (which concerns by themselves would lead one toward the opposite conclusion), but is derived instead from exogenous non-technical aspects related to ownership, control, censorship resistance, economic theory, and even political ideology. For example, centrally regulated banks in the nation of Cyprus at one point told their depositors that they could not access their own money; and similar scenarios may yet unfold in other nations as well.

page 13:
Having many communal owners or stakeholders in a network is seen as democratizing, and therefore as a social good. Given the extraordinary abuses seen with national fiat currencies, and the central banks that issue and operate those currencies, this is quite understandable. There is definitely something to be said for a network that is naturally resistant to dictatorial control. Unfortunately the flip side of this characteristic is the impossibility of final decision-making, precisely because there exists no ultimately responsible party. Bitcoin is presently in the midst of a full-blown civil war about scalability solutions, which is becoming increasingly bitter, acrimonious (58), and indeed childish, while usability is plummeting and innovation is stalling out – or more accurately, moving into other technologies such as Ethereum, Hyperledger (59), and private permissioned blockchains. (60)

Having many decentralized owners is also seen as fostering censorship resistance, because it becomes much harder to shut down widely scattered nodes, and because no one party can be served with subpoenas, cease-and-desist orders, or the like. However this supposition may be naive. In fact, since Bitcoin (and practically every altcoin) utilizes a distinct protocol for all network communications, it wouldn’t be particularly difficult to program intelligent edge routers (say, at a national border) to drop all packets conforming to that protocol, by means of what is known as “deep packet inspection.” It should also be noted that the censorship resistance derived from having no single point for legal process can also be achieved by means of software design, coupled with sufficient jurisdictional arbitrage embedded within the operational business model. (More on this topic later.)

page 14:
The Bitcoin Block Size Debate
Which brings us to the ongoing civil war in Bitcoin about scalability. The basic problem is that Bitcoin has become too popular for its own good. There are too many transactions being posted in competition for limited space in blocks. The space is limited because blocks are currently capped at one megabyte (1 MB) in size. The block size is a parametric value established in the source code (known as a “hard coded” value). The negative result of the competition for scarce block space is twofold: 1) transaction fees have climbed sharply as price gets used as a rationing device; 2) wait times for block confirmations have also increased, to the point where it isn’t unusual for a posted transaction (particularly a small one) to require several days before it gets mined into a block. The result is that Bitcoin becomes ever more expensive (61) and/or ever more inconvenient to use. It is of course possible to “cut in line” by bribing miners with higher than average fees. But as a general rule of business, declining service coupled with rising price is not a recipe for increasing market share. Growth tends not to be a problem for the private sector, but is frequently an onerous burden for the public sector; and Bitcoin’s communitarian nature bestows upon it a number of public sector characteristics.

There are two main factions or camps (62) within the Bitcoin community on the subject of whether or not the block size should be increased. The first is the “NO2X” faction, indicating opposition to doubling the present blocksize to 2MB. This group, including most of the current and former principal development team (known as the “Core devs”), wants to keep the block size unchanged. Instead, they promoted the adoption of SegWit (63) (Segregated Witness), and the future use of Lightning hubs to move retail transactions off chain. The pro-blocksize increase faction is lately associated with the “B2X” version of Bitcoin, based on the “2X” part of the SegWit2X plan adopted in the New York Agreement. In the past this faction was also associated with the alternative client Bitcoin Unlimited, as well as with advocacy for the Bitcoin Classic and Bitcoin-XT clients. It is also involved with promoting Bitcoin Cash, which forked off of Bitcoin earlier in the year and implemented 8MB blocks without SegWit.

It should be understood that in order to effectuate a change to the underlying protocol, nodes representing a majority of the hashing power on the network must “signal” that they wish to adopt the proposed change. (For a general discussion on Bitcoin “governance,” see this.) (64) For example, SegWit was adopted on the Litecoin altcoin network when a majority of blocks (51+ out of the last 100) were mined by nodes signaling for SegWit. This has now also occurred on the Bitcoin network, after 80% of hashing power signaled support for it.

The “too long; didn’t read” (TL;DR) explanation of SegWit is that it modifies the protocol to shift some of the transaction detail data (mainly signatures) now stored in the blocks themselves into an adjunct data appendix (called an “extension block”), which is incorporated by reference inside the block transaction. The net result is about a 60% increase in the number of transactions which can fit inside a single block. Thus SegWit is about using space more efficiently, rather than increasing the amount of raw space.
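The ~60% figure can be reproduced with a simple (and admittedly simplified) calculation: if witness data makes up a fraction w of a typical transaction’s bytes and is moved out of the base block, capacity rises by 1/(1 - w) - 1; a witness share of about 37.5% yields the quoted gain. (SegWit’s actual accounting uses a “block weight” discount, so this is only an approximation.)

    # Simplified capacity arithmetic for SegWit (w is an assumption).
    w = 0.375                # assumed witness (signature) share of tx bytes
    gain = 1 / (1 - w) - 1   # fractional increase in transactions per block
    print(f"capacity gain ~ {gain:.0%}")   # ~60%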

SegWit is at best a stopgap which will buy only a limited amount of headroom for the Bitcoin blockchain — at the cost of increasing the complexity of the protocol going forward. However since its adoption, only a low double-digit percentage (65) of bitcoin transactions have availed themselves of the new SegWit functionality. (But it’s early yet.)

Lightning networks (66), previously known as “side chains,” are a mechanism for moving a block of bitcoins into a special reserve, or anchor address (typically one having multi-signature controls). The coins are then cloned onto an entirely separate blockchain, where they can circulate freely (potentially according to entirely different rules) without needing to post any additional data onto the origin blockchain. Each Lightning network effectively represents an independent local centralization, or at least a concentration, of settlement nodes. The Lightning networks taken as a group act to fragment the Bitcoin blockchain. (Indeed the same concept in Ethereum parlance is referred to as “sharding.”)

While this idea could potentially buy Bitcoin considerably more headroom than SegWit, even this concept doesn’t achieve arbitrary transaction throughput. (67) (See also this.) (68) One good reason is that there will always be synchronization overhead associated with moving coins in or out of Lightning networks, or from one Lightning network to another. At some point, such necessary synchronization of coins would itself potentially eat up 100% of the base Bitcoin network’s capacity.

An analogy can be drawn with the NUMA (69) (non-uniform memory access) computers popularized in the 2000s. At some point, adding more CPUs to a NUMA computer ceases to increase the total capacity, because the synchronization of inputs and outputs between each CPU’s individual memory cache saturates the main data bus, all by itself. How many Lightning networks could operate without similarly saturating the underlying Bitcoin blockchain with their synchronization traffic? This would depend upon the degree of economic independence between the user and business communities being served by each Lightning network. It can safely be concluded however that the answer, as with NUMA CPUs, is finite and probably not all that large.
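A crude model of this saturation argument (all numbers below are illustrative assumptions, not measurements): if the base chain clears roughly 3.5 tx/s, and each Lightning network requires some number of on-chain transactions per day to open, close, and rebalance its channels, then the base chain bounds how many networks it can host:

    # Toy saturation model; both parameters are illustrative assumptions.
    base_capacity_tx_day = 3.5 * 86_400   # ~302,000 tx/day at ~3.5 tx/s
    sync_tx_per_network_day = 1_000       # assumed on-chain syncs per network

    max_networks = base_capacity_tx_day / sync_tx_per_network_day
    print(f"max networks ~ {max_networks:,.0f}")   # ~302, and even that
    # assumes the base chain does nothing but synchronize Lightning networks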

The NO2X faction tends to view the “pain points” of long delays and high transaction fees as useful and necessary in order to impel the industry to adopt technologies like SegWit and Lightning. This amounts to believing that Bitcoin is not in fact a retail payment network, and should not be used as one. In this view, inevitable user dissatisfaction is a spur to innovation, which will take place mainly “off chain.” Thus the NO2X camp is remarkably insouciant about the growing pains of Bitcoin. NO2X adherents also tend to be “Bitcoin maximalists,” meaning that they see value in altcoins only insofar as they enable Bitcoin to improve, viewing them as an exegesis on the principal Bitcoin technology.

The Bitcoin 2X (B2X) faction, championed by Roger Ver and others, is in contrast much more alarmed by the growing unusability of Bitcoin for retail customers. While the Core devs tend to see Bitcoin users as nodes in a software system, B2X adherents view them as retail customers in the business sense – customers who are not at all well served (70) by the status quo. A summary of arguments for Bitcoin Unlimited, along with some remarkably blasé quotes from the Core dev team, can be found in this slideshow (71) prepared by Roger Ver. In particular, the graphs showing the growth of altcoin market cap at the same time that Bitcoin’s user performance falls are noteworthy. Certainly from a business perspective, as opposed to a software perspective, leaving the block size at 1MB is a non-starter, bordering on insane. Of course it can also be argued (72) that using Bitcoin as a network platform for retail consumer payments is a fundamental mistake to begin with.

There are however consequences to raising the block size. The amount of memory required to verify 1MB Bitcoin blocks (never mind mining them) is already substantial. Devices with only a few gigabytes of RAM available perform block verification several times more slowly than devices with significantly more RAM. If the block size increased substantially (note that the Bitcoin Unlimited plan called for gradual steps up to as high as 8GB!), it would very quickly require hardware well beyond the reach of casual retail users and small businesses. The result would of course be further centralization, not only of the mining function, but also of the verification of the blockchain by non-miners maintaining full nodes. This would have the eventual effect of centralizing all blockchain activity around a small number of participants that owned enormous computing resources. Ordinary users would be relegated to operating web-based clients hosted at those major vendors, probably no longer in sole control of their own private keys. Thus the B2X vision ultimately entails sacrificing the distributed, democratized, decentralized nature of Bitcoin on the altar of good customer service. To refer back to our pyramid diagram, B2X would shift Bitcoin from near the bottom of the pyramid upward to near the top, gaining efficiency while losing decentralization.

If Bitcoin were a business operated by a single company, very likely the B2X option would already have been embraced. However as matters stand this is not the case; and since so many oppose deviating from the original vision of a decentralized community-based currency issued by no one, even all the money which is getting left on the table by Bitcoin’s inability to expand, and all the R&D budgets now pouring into alternative technologies, haven’t been enough to steer the ship onto a different course. What we see here is a case where arguably needful business development is being restrained by ideology. But this is hardly unprecedented, after all: didn’t the Soviet Bloc and Maoist China constitute exactly such an example on a massive scale for most of the 20th century?

The Bitcoin community is stuck behind the reality that not to decide is in itself a decision, and this does not appear likely to change anytime soon. Because SegWit2X required agreement by a majority of hashing power (BIP 9 activation), which appeared most unlikely to occur, yet another hard fork of Bitcoin seemed inevitable, until the B2X side “blinked” and elected to save face by postponing the activation. (73) This B2X fork would have occurred in mid-November, following closely on the heels of the Bitcoin Cash (74) (BCH) fork (75) and the recent (friendlier) Bitcoin Gold (76) (BTG) fork (77) on 25th October. (Bgold is aimed at creating an ASIC-resistant Bitcoin.) So understandably some of the community felt that Bitcoin had just dodged a bullet. (78) But the irony is that even if they were all adopted on the main chain, ultimately such reforms are merely a stop-gap that would only kick the can down the road.

Which would have been the rosy scenario. The actual result is multiple permanent hard forks into (so far) four coins: original BTC, BCH, BTG, and BCD (Bitcoin Diamond (79), a version based on PoS mining), and possibly someday a fifth in B2X (80), not to mention various copycats. (81) Due to the lack of bidirectional replay protection, it’s also possible that one of the two chains, BTC or B2X, could subsequently replace its rival even after hundreds of blocks have elapsed (an event known as a “wipeout,” which is every bit as bad as it sounds). Miners may opportunistically flip-flop back and forth from mining on one chain to mining on another, depending upon prevailing conditions. (82) Mining pools may allow their members to pick which chain should be mined. Some may make a “none of the above” choice by opting to mine Bitcoin Cash, which already increased the block size to 8MB, four times the size of SegWit2X’s proposed 2MB. It’s interesting that BCH’s price nearly doubled immediately after B2X was suspended, especially given that some had argued that B2X was essentially mooted by BCH. It should be noted that we have the ability to bring any worthy Bitcoin forks into our ecosystem.

At this point the civil war is all about bitcoin businesses picking sides (83), and even deciding which coin (84) will be designated as “the true bitcoin.” Clearly the lack of a “decider” has multiplied risks, increased uncertainty, and generated much confusion. As a result, practical survival guides like this one (85) have begun to appear in the community. While this state of affairs should be enough to stand anyone’s hair on end, the anticipation of “fork dividends” (derived from receiving equal coins on the other fork as a windfall) has only driven the price of bitcoin higher. (86) (However, this fork dividend concept may yet prove to be a faulty expectation.) (87)

The remarkable fact though is that neither NO2X’s vision nor 2X’s vision is workable for Bitcoin in the long term. The NO2X path will lead to an oligarchy of Lightning hub operators, which will still have limits to scaling. (88) The 2X path will lead to an oligarchy of miners who clear all the transactions, plus an oligarchy of web wallet providers, and likely still won’t solve all the problems with latency. There simply isn’t a path forward that will allow Bitcoin to become the global cryptocurrency that obsoletes other cryptocurrencies and ultimately replaces fiat, as Bitcoin maximalists like to dream. (89) Or at least, if such a path exists, it hasn’t been discovered yet.

Back to the Future?
So where does this leave us, in late 2017 looking toward the future? It’s self-evident that wide global adoption is going to require serious scaling, coupled with low latency. It should also be clear by now that any solution providing adequate scaling is going to involve at least a certain amount of clearing centralization as a by-product, if not as a direct goal. This is true not only for the obvious reasons of efficiency, but also because of the growing requirements of regulation. Authorities will want to be able to reject, at the clearing stage, any transaction of which they disapprove. Such an outcome is highly likely to result from lobbying efforts such as this one (90), or from study committees like this. (91) In the past, regulators have been unable to exercise a veto over specific transactions, because the clearing mechanism was too decentralized. But once the need for clearing efficiency has leveraged enough centralization, such control may become much more feasible. Some national governments are mulling the creation of their very own cryptocurrencies (Estonia and Russia come to mind). In that case, censorship can be expected to be built in, and competition from other blockchains will not be welcomed and may be prohibited outright.

The other clear trend is the shift toward permissioned blockchains. (222) In any blockchain, there are three basic functional levels: the client can connect to the network and read data (it can download blocks); it can also submit transactions (meaning it has write access); and it can confirm transactions (by means of mining blocks). Put simply: read, write, and verify. In a permissioned blockchain, these functions can be accessed according to a node’s possession of a certain private key, or by virtue of connecting from a certain IP address, through some type of login protocol, or by some other means specified in software. For example, anyone might be able to download the client and browse completed transactions (similar to search functions at blockchain.info (92) for example). But only authorized terminals would be able to submit spends, and only a strictly pre-specified list of miners would be able to confirm them.
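These three access levels are straightforward to express in software. A minimal sketch (a hypothetical policy table, not any particular product’s API):

    from enum import Flag, auto

    class Permission(Flag):
        READ = auto()     # download blocks, browse transactions
        WRITE = auto()    # submit spend transactions
        VERIFY = auto()   # confirm transactions by mining blocks

    # Hypothetical policy, keyed by node credential (private key, IP, login).
    policy = {
        "public_browser": Permission.READ,
        "authorized_terminal": Permission.READ | Permission.WRITE,
        "designated_miner": Permission.READ | Permission.WRITE | Permission.VERIFY,
    }

    def allowed(credential: str, action: Permission) -> bool:
        # A node may act only if the policy grants it that permission.
        return action in policy.get(credential, Permission(0))

    print(allowed("public_browser", Permission.WRITE))     # False
    print(allowed("designated_miner", Permission.VERIFY))  # True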

This is fully consistent with the way existing fintech works. For example banks do not allow non-customers to initiate transactions across the banking system. A non-customer may be able to cash a check drawn on the bank (if they’re willing to be fingerprinted), but only account holders may write checks, use an ATM, send a wire transfer or ACH payment, etc. Similarly, while anyone can send money via Western Union or MoneyGram, the customer has to take their money to an official terminal at a franchise office to submit their request. Transactions to certain recipients or regions are sometimes rejected by these money transmitters, allowing censorship. If banks and money transmitters ever adopted cryptocurrency, it stands to reason they would do so in a manner consistent with their existing model by deploying a permissioned blockchain. In such a system banks and franchised storefronts would have nodes with read and write privileges. Account holders would have wallets linked to their verified identities, and could submit payments only via a branch office node.

Transaction verification would be done by mining pools controlled by the bank, or more likely by the banking cartel as a whole. (The Federal Reserve System is of course a banking cartel, not a government agency as such, no matter how much it may posture otherwise.) No blockchain-based fintech that allowed any device with an internet connection, run by any anonymous geek anywhere in the world, to connect to the network and start broadcasting transactions, let alone validating them, could ever be acceptable to the existing financial system or to its regulatory apparatus. Indeed there are reasons why even a privately owned network (as for a niche altcoin) might want to block that from happening – preventing DDoS attacks for instance. On a purely public permissionless blockchain, there is no way to prevent anything at all, from high velocity “dusting” spends of tiny amounts of coins simply to soak up block space maliciously, to frequent repetitive large spends back and forth between wallets controlled by the same owner, made in an effort to bump up the calculated average transaction fees, to a full-blown 51% attack. (Bitcoin has already experienced all these kinds of attacks, and more, with the exception of a 51% attack.) For these reasons it’s also fairly probable that many Lightning network operators will ultimately deploy permissioned blockchains, too.

So where does all of this get us, in the end? Regardless of what happens to “resolve” the Bitcoin scaling debate, it seems that we are headed for, at best, a federated network of mostly centralized clearinghouses operating in competition with each other. Ironically, this is not dissimilar to the status quo which existed before Satoshi ever published his paper, with one distinct difference: in the past there was no distributed ledger of all posted transactions which could be read by anyone with permission. Thus the net effect of introducing blockchain tech into the mix will have been to reduce or eliminate privacy. This is of course completely contrary to the vision which animated early Bitcoin adopters, but it was the obvious likely result all along, at least to those who read Satoshi’s paper and thought about how the concepts (93) it discussed would eventually integrate into the real world.

page 23:
Like many centralized clearing systems, the V-S component nodes scale linearly, and load testing with robot clients has demonstrated that the network can support a transaction volume at least equal to that of the global Bitcoin network, even if all of the various nodes are running on but a single modern server! Hardware can be added as required to provide multi-server clustering for each individual component (VP, OFS, etc.).

page 25:
The voucher network can scale as required simply through the addition of more hardware, and possibly also via future software changes to optimize clustering. In this characteristic it is similar to other systems with centralized clearing, such as PayPal or Visa.

page 26:
A Superior Hybrid Concept
Contrary to prevailing belief, completely decentralized cryptocurrencies lacking a proprietor to steer them and keep them competitive are not inherently superior to systems utilizing centralized payment clearing. Not only are decentralized currencies necessarily less efficient, inevitably exhibiting scaling problems which are extremely hard to solve, but their need for broad consensus as a prerequisite to make any changes is clearly inhibiting their flexibility, and hence their growth. (Except for growth in price, as institutional money (110) begins to chase “alpha” into the cryptocurrency space.) (111) Moreover, from a privacy perspective the blockchain spells the end of all financial and transactional privacy. It’s been fairly called the beginning of the cashless control grid. This is a leading reason why governments and big banks love blockchain tech, and today are seen starting up their own blockchain projects. Solutions from within blockchain tech are perhaps possible (112), but still years in the future.

Ascension offers a hybrid approach: issue the coins on a distributed private blockchain, but then add an application layer above the blockchain which provides the benefits of centralized clearing, along with strong privacy, plus the tools to support natively a community of users and merchants. This will allow the Ascension Foundation to construct the demand side for its currency, at the same time that the supply side is constructed, both via token sales and by other methods. Since the value of any currency – even the mighty US dollar – is predicated on achieving a balance between supply (money creation rate) and demand (level of economic activity using that currency), new OTO/Lyra released into circulation will be balanced by simultaneously providing markets and apps where it can be utilized by its holders. In this way a stable private ecosystem can be developed around a privately issued money, much as the brilliant Friedrich Hayek envisioned more than 40 years ago.

page 49:
While the Ascension blockchain is not yet launched, our voucher network that sits on top of it is fully functional today. Moreover this network is not going away once our blockchain is deployed; rather, it represents an integral part of our market solution for scalability and privacy.

page 54:
We do expect that our own ideas presented here will come in for considerable criticism from the community. In particular, we anticipate that we will be criticized both for utilizing a centralized payment clearing mechanism, and for refusing to specify a hard limit on the ultimate number of Lyra coins to be issued. Our reasons for making these decisions are explained above. Here in closing we’d like to observe that after the long blockchain scaling debate (still ongoing), and the various hard forks, we expect that the criticism about locally centralized clearing will be a lot less loud than it would have been several years ago. In view of the parabolic valuation growth of cryptocurrencies going on at present, especially in bitcoin, we likewise expect that the ability to increase supply as needed, which is today a radical notion, will gain growing acceptance in the months and years to come. Our proposal is to follow the guidance of the free market, rather than the hubris of designers – not excluding ourselves.