The Character of Our Technology

January 31, 2018

“We can’t influence the direction of technology, but we can influence its character,” says Kevin Kelly, founding executive editor of WIRED, in a recent interview with Krista Tippett.

A quarter-century ago, financial derivatives technology, enabled by the emergent technology of distributed computing, was about to transform finance and global capital markets forever.  The rise of derivatives technology made profound contributions to the global economy, from a revolution in genuine risk management for banks, businesses, and governments, to the mass efficiencies of linked global capital markets, which reduced the cost of capital for productive investment.  The technology also, in part, enabled and most certainly exacerbated the financial crash of 2008, with extreme adverse consequences still pulsing through the real economy—consequences that have destabilized the democratic liberal order of the Western world.

During the formative years of derivatives technology, we on Wall Street argued earnestly that responsible self-regulation of derivatives would enable innovation to unlock the potential efficiencies, and resulting economic prosperity, that were the promise.  Undue government regulation, no matter how well-meaning and thoughtfully crafted, would impede the innovation that this technology made inevitable.  Furthermore, there would be unintended consequences “worse than the disease.”  We were wrong.

It turns out that the character of Wall Street, to return to the insight Kevin Kelly shared in his Krista Tippett interview, was not up to the challenge.  Derivatives were hijacked by reckless speculators, manipulators, and, in certain cases, outright fraudsters.  “Derivatives,” as a result, is now a dirty word.

Far more than financial derivatives, social media brings the promise of immense social welfare through previously unimagined connectivity and the network effect, which makes each social platform dramatically more valuable as more people and organizations join it.  Like derivatives in their early years, social media as a technology is largely unregulated, in part because would-be regulators struggle to understand it, much less keep up with its rapid evolution and its many uses and abuses.  Will social media become the derivatives of tomorrow?

The unintended social consequences of this technology on our children, in a world dominated by intentionally addictive social media, are daunting enough.  Recent evidence that the technology has spawned an underground of millions of fake accounts—used fraudulently by many, including, it now seems clear, the Russians to undermine our democracy—is chilling.  The “LIBOR scandal” is trivial in comparison.

And we are now facing a future of artificial intelligence that some believe will fundamentally change what it means to be human.  What to do?

Kelly guides us to focus on the character of our technologies.  He uses the example of the Amish, generally (mis)understood to reject modern technology – they ride in horse-drawn buggies rather than cars, for example.

Kelly explains that the Amish make technology choices as a community, in contrast to the personal choices most of us in the West make when considering whether or not to adopt a technology.  For the Amish, their culture rests firmly on two values: family and community.  So when considering a new technology, they assess (through trial by early adopters) whether the new technology will strengthen the family, and whether it will strengthen the community.  Their decision to stick with horses, Kelly explains, means travel is limited to the fifteen-mile range of a horse.  This ensures that both social connection and commerce are concentrated within the local community, strengthening it in the process.  Of course, this value system comes with costs, but the Amish know what they value and are equally careful to ensure that the character of the technologies they choose supports those values.

If there is one value that America remains united around, even in these divisive times, it is democracy.  Like the Amish, we would be wise to be clear on what values we hold dear, and make our technology choices accordingly. 

In the rapidly evolving context of technological advance, there is truth to the argument that the unintended consequences of regulation can be worse than the disease.  But if we have learned nothing else in the past quarter-century, we should now understand that ideologically based calls for free markets, free speech, and the freedom to carry guns are insufficient.  Reality is more complex.  Rules and regulations (or the lack thereof) are choices we have no way to avoid making.

Equally impossible to avoid is technology’s exponentially growing impact on our lives, and even our humanity.  As such, it has never been more essential to get clear on the character of technology we want.  And for that we must be willing to make hard choices, just like the Amish do, grounded in the fierce defense of the values we hold dear.

Democracy demands, at the very least, fair elections not compromised by either foreign powers, hostile or otherwise, or ideologically driven billionaires, no matter their views. Last election, social media enabled manipulation—if not compromise—by both.

If our values were truly grounded in a fierce defense of a fair democratic process of governance, we the people, through our elected leaders, would, among other things, deliver Facebook and Twitter a blunt message, recognizing full well the consequences of such a choice:

Trying harder is not good enough.  Either prove you can protect your networks during elections from fraudulent and malicious bots, whether homegrown or from foreign hostile powers, or we will have no choice but to shut you down for the three months prior to our national elections.

If a similarly harsh message had been delivered to Wall Street, and taken seriously, regarding the abuse of derivatives to amplify the consequences of fraudulent lending (and the existence of fraudulent lending at all), the 2008 crash would not have happened.  The stakes with social media, and with artificial intelligence in the future, are far more profound.

Will society, and Silicon Valley, learn from the derivatives technology experience?  If not, “social media” and “artificial intelligence” are destined to become dirty words as well.  And society may not recover.

  • Excellent piece John, but from my perspective it does not explicitly address the three major areas of concern:
    1/ identification of core values;
    2/ understanding of systems and complexity;
    3/ identification and mitigation of risk.

    All three are essential and tightly coupled.

    1/ core values

    For me, core values are simple:
    Individual sapient life, and individual liberty, within responsible social and ecological contexts, applied universally.

    In this sense a sapient individual is any individual (human or non-human, biological or non-biological) capable of modeling itself as an actor within its model of reality, and of using language to express abstract ideas.

    Liberty in this sense is not a freedom to follow whim alone, but also involves a responsibility to assess the reasonably foreseeable consequences of actions and to avoid, remedy or mitigate any unreasonable risk to the life or liberty of any other entity – applied universally. And any such assessment of risk will be a matter of probabilities, and will involve a test of reasonableness.
    More on this and its relationship to the other two areas soon.

    2/ understanding systems and complexity.

This comes in three parts also: uncertainty, boundaries, and responses.
    A modern understanding of systems and complexity is very different from classical notions of knowledge.
The classical idea that truth may be known, and is the proper object of human endeavor, seems beyond any reasonable doubt to have been disproved.

We now understand many different sources of uncertainty, from simple error, to measurement uncertainty, to quantum uncertainty, to Gödel incompleteness, to maximal computational complexity, to irrational numbers, to chaos, to the fully random, to infinite classes of computational and algorithmic spaces.

Anyone who has seriously looked at the nature of complex systems, and at understanding itself, must admit of infinite classes of uncertainty and unknowability that reduce all understandings to matters of context-sensitive confidence.

So the first part is to admit of uncertainty, always – a touch of humility is required.

    Second part is to understand the necessity of boundaries.

    Any form requires boundaries or relationships for its survival (depending on how one looks at it, they are equivalent notions, just expressed differently).

    Without such boundaries or relationship, everything decays to a uniform randomness.

    Any form of complexity has a minimum set of boundaries (or expressed differently a minimum set of relationships) that are required for its existence.
    Maintenance of that set of boundaries, and no more, is a matter of existence.

    Freedom must include the notion of being responsible for the maintenance of those boundaries that are actually necessary – and that will involve occasional testing as contexts change.

    We human beings seem to be very complex evolved systems, embodying about 20 levels of very complex sets of cooperative systems.

    In this sense, evolution is very poorly understood generally.

It now seems clear that the evolution of new levels of complexity requires new levels of cooperation; and raw cooperation is always vulnerable to exploitation, thus requiring sets of attendant strategies to detect and remove cheating strategies – at all levels, recursively applied. This leads to something of a potentially eternal strategic arms race, through infinite levels of complexity. The price of liberty is eternal vigilance.

    The common notion that evolution is all about competition is false.

Competition and cooperation are both aspects of evolution, and broadly speaking, competition results in simplicity, while cooperation allows for the emergence of complexity. Which tends to dominate is a matter of where the dominant source of risk resides. If it is within the group, then competition, trending to simplicity, tends to result; if the dominant risk comes from outside the group, and cooperation can mitigate it, then cooperation tends to spawn new levels of complexity.

This leads into the third aspect of complexity, the sorts of management responses that are appropriate in different classes of complexity. This is an infinitely complex topic, and David Snowden’s Cynefin framework for the management of complexity is the best simplification of it that I am aware of.
    The more complex the systems, the more flexible our management responses need to be.

    3/ Identification and mitigation of risk.

    When faced with potentially infinite complexity, risk is always present, and the unknown always exceeds the known – eternally.

Being overconfident leads to operational simplicity right up to the point of overwhelm and extinction by a risk that was not detected and mitigated. This is the risk that the Amish ignore.

Our explorations of geology, cosmology, and biology have already identified sources of risk that cannot be mitigated with existing technologies, like comet and meteor strike, supervolcanoes, extreme classes of solar flares, some classes of viruses, etc. And in exploring technologies to create effective mitigation strategies for those known risks, we inevitably create new sources of risk we did not previously have to deal with.

    That too would appear to be a recursively consistent aspect of reality.

    So once again, nothing is certain, except that ignoring known sources of risk is not a mitigation strategy, and will most likely lead to extinction.

    So, considering all of that – where does that leave us with finance?

    Here, one needs to consider deeply the abstract relationships that are embodied in money.

    Money is an abstract measure of value.
    It works only because we believe it will work.
    It is based in trust and belief.

    Markets are reasonable tools for measuring the value of items that are genuinely scarce, but fail to assign any value to items that are universally abundant.

    When most things were genuinely scarce, that meant markets delivered a really useful measure of value.

Now that fully automated systems can deliver any information product, and many material products, in universal abundance, markets cannot measure their value.

    The response to date has been to create artificial barriers to such abundance, to create/maintain their value in markets.

    So in order to make money, we have laws that deny the majority access to that which could be available to them at close to zero marginal cost. All of our IP laws, copyright, patents, most of our health and safety and certification laws – are essentially present to create market value where none would otherwise exist. They are present to prop up a system that is past its “use by” date.

    Fully automated systems are absolutely required to mitigate many of the known existential level risks.

    Fully automated systems could deliver all the reasonable needs of life and liberty to every person on the planet, but don’t because of the “market incentives” present.

    Markets cannot deliver a positive value for universal abundance, yet abundance is a positive value for most humans in most contexts (sugar, and many stimulant drugs being obvious exceptions; air and water the obvious positive examples – and it is always possible to have too much of a good thing).

And markets have performed many other very useful functions, other than simply measuring value and mediating exchange: complex functions involving distributed information transfer, distributed information processing, distributed governance, distributed risk mitigation, the interlinking of distributed trust networks, etc.

    These are very complex, multi-level and essential functions.
    They can be done with other mechanisms, and those other mechanisms need to be actively developed, tested and deployed.

So it seems that markets have now moved out of the territory they once occupied, of being intimately linked to life and liberty, and, with the changing context of the exponential development of fully automated systems, are now the single greatest source of existential risk to humanity as a whole.

How we plan the safe transition away from market-based systems, to distributed systems of trust, governance, and risk management, is the great question of our age.
Universal Basic Income seems to be a useful part of an intermediary transition strategy.

Getting people to see that the things we have traditionally associated with markets – the things that have supported life and liberty – are not actually attributes of markets themselves, but only carry the attribute of traditional association, is not an easy task.

    Reality is far too complex for any human mind to deal with in its entirety.

    All of us have to make simplifying assumptions.

    The simplification that markets equal liberty worked in the past, but for all the reasons outlined above is failing now, and the rate of failure is exponentially increasing.

    That failure will be hard for many to see.
    Yet the benefits of that failure far outweigh the costs.

    The technology that creates the failure of markets allows us to address and mitigate existential level risks that markets cannot, ever.

    So we are in a time of profound change.
    All change has real risk associated with it, all levels.

    And ignorance of risk is not a risk mitigation strategy, though it is an anxiety mitigation strategy.

    Never confuse anxiety with risk!

The horses of the Amish will never offer an effective mitigation strategy against super-volcanoes, comet strike, ice age, or super flare. Fully automated systems can, for those and many others.

    • John Fullerton

Your elegant response transcends my simple blog on the need to choose which technologies we accept, rather than to accept all technologies that become possible as if that were inevitable, a law of nature! To clarify, I was not suggesting we all embrace the Amish choice. I was only using it as an example of a choice based on values.

As for your piece, it contains many important ideas, and many I agree with, as you know. Some, like “freedom” in the context of complexity, I have even written on. But the one I’d like to highlight here is this provocative statement you wrote:

      “So it seems that markets have now moved out of the territory they once occupied of being intimately linked to life and liberty, and with the changing context of the exponential development of fully automated systems, are now the single greatest source of existential risk to humanity as whole.”

While I don’t agree fully (I see it more as a both/and, given today’s context in which not everything is abundant, not yet at least – a longer discussion is warranted), you have nicely reminded us that markets themselves are a “technology” whose usefulness can evolve over time as contexts change. And they are technologies that can be corrupted, with both positive and negative consequences, just like derivatives and social media. Fascinating to ponder!

      • Hi John,

        As usual, we agree far more than not.

        We are certainly in a mixed mode at present.

        As stated, any information product could be delivered universally today, but isn’t, because of market incentives.
        The human cost of that is huge, and growing.

        Certainly, some things will always be scarce – originals, some heavy elements.
        And with a little creativity, we can create alternatives that are functionally indistinguishable in most contexts.

        A longer discussion is certainly warranted – and we have been in this discussion for a few years now.

And yes – markets, money, ideas, and sets of relationships can be thought of as technologies, which will have different impacts (risks, benefits) in different contexts. Choices of interpretive schema are as important as any physical context.

I spent last night in a little hut in a colony of Hutton’s Shearwaters, waiting for two birds with GPS and depth loggers attached to return to their chicks so I could recover the machinery and the information therein. As only two birds are left to recapture, I made up a little alarm circuit with trip wires across their burrows. The tech worked perfectly, waking me when a bird did enter the burrow – but it was not one of the birds I wanted; it was their mate. So I slept on my high-tech ultralight airbed, in some of the most amazing scenery on the planet.

        The technology available to us now is amazing.
        We can put gadgets on these little birds (the birds are only half a pound each) that tell us where they go, and how deep they dive.
        And we now know that they can travel 400 miles over 3 days at sea, diving over 100ft down, hundreds of times, to bring back 2 oz of food to their growing chick.

        We are starting to understand so much more about biology, about the complexity of the connections of different species.
        The phosphates these birds bring back to their mountain burrows are a major source of the productivity of the ecosystems in these mountains.

        We humans need to stop thinking mostly about money, and start looking very closely at the key factors that actually keep us alive – particularly the nutrient and energy flows through the systems (not just our human economic systems, but the wider systems within which we are embedded).
        No money in it, and our survival is at stake.

In order to maintain cooperation, we require abundance – that is game theory 101 in a very real sense.
Driving systems to their limits for short-term economic gain poses very real existential-level threats from a systems perspective (if those systems collapse, they destroy the fragile abundance that is keeping our society as peaceful as it is).
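        The link between abundance and cooperation can be sketched with a toy 2x2 game. This is my own illustrative model, not anything from the thread: the payoff numbers and the `abundance` parameter are invented for the example. The idea is simply that when a shared pool yields little, grabbing dominates sharing; when it yields a lot, sharing becomes the best reply to anything the other player does.

```python
# Toy 2x2 resource-sharing game (illustrative only; payoffs are invented).
# Each player either cooperates ("C", shares the pool) or defects ("D", grabs).
# The "abundance" parameter scales how much the shared pool yields.

def payoffs(abundance):
    """Payoff table: (row player's payoff, column player's payoff)."""
    share_gain = 2 * abundance   # each cooperator's return when both share
    grab_gain = 3                # a defector's fixed take from a cooperator
    exploited = abundance - 1    # a cooperator facing a defector
    return {
        ("C", "C"): (share_gain, share_gain),
        ("C", "D"): (exploited, grab_gain),
        ("D", "C"): (grab_gain, exploited),
        ("D", "D"): (1, 1),
    }

def best_reply(p, opponent_move):
    """Row player's best reply given the opponent's move."""
    return max(("C", "D"), key=lambda m: p[(m, opponent_move)][0])

for abundance, label in [(0.5, "scarcity"), (4.0, "abundance")]:
    p = payoffs(abundance)
    print(label, {opp: best_reply(p, opp) for opp in ("C", "D")})
```

        Under the scarce setting the game is a prisoner's dilemma: defection is the best reply to everything, so cooperation needs external enforcement. Under the abundant setting cooperation is the best reply to everything, so it sustains itself – which is the "games theory 101" point above.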

        We cannot afford another major conflict.

        We need a new level of global cooperation – universal, without cheats.

        We need global abundance.

        Any centralised system poses too much risk – so we must have decentralised and massively redundant systems (like biology does).

        If we don’t take the big picture view – the results are not going to be pretty.

        And I am confident that we can do it, and it is a 70/30 thing, not the 99.99999% thing that I would like it to be.

        • John Fullerton

          Nice. I’m delighted you see it as a 70/30 thing. That’s better than I might have guessed! Of course the avoidance of the financial collapse was a “99.5% thing” as you are reminding us! EF Schumacher liked to talk of an economics of permanence. How reckless we are with what is priceless.

  • As usual—great points here John! What I hear you describing with the Kelly reference, as well as the reference to democracy, is commons management. Social media is part of a digital (and ultimately, societal) commons, and we have failed to develop an adequate group process (or even a container to hold such a process) for determining group norms and boundaries.

A poignant example of this would be the notorious and tragic phenomenon of fake celebrity porn—the latest in the evolution of machine learning, fake news, and misogyny. The origination point of this material is a subreddit managed by “deepfakes.” In reading through the comments on this forum, I was inspired to see the number of people sharing deep criticism of this “innovation” (in the face of significant trolling). Conversations like these are on the front lines of our experimentation with the concepts of freedom, democracy, and management of a shared commons. Our ability to navigate the consequences of our technologies continues to be outpaced by the development of the technologies themselves. This is likely why people like you and I have been focused on culture instead.

    • John Fullerton

      Great clarification. Yes this is a piece of the commons that we don’t yet even see as a missing “institution” of sorts, with a governance philosophy and more.

What is “fake celebrity porn”? It sounds a dreadful combination of three dreadful words!

  • John Fullerton

Indeed! I think that’s what Kevin Kelly really means. But as the Schumacher quote in the newsletter suggests, our technology reflects the character of those who back it with investment and adoption. So in a sense the technology assumes a character itself which is inseparable from us humans.