Most of us reflexively think of technology as more of a “good” thing than a “bad” thing, and in some ways and contexts that is probably objectively true. If we look at fields such as medicine, transportation, and communications, for example, and compare how technology has shaped those arenas today with, say, 25 years ago, most of us would say that we are better off.
It is because of technology that someone in Abu Dhabi can buy an avocado grown in Kenya (and that farmers in Kenya know in real time where to get the best price for their products). It is because of technology that early detection of colon cancer has greatly increased survival rates and that surgeons can correct heart defects before babies are even born. It is because of communications technology that two people, regardless of location, can connect in real time and share virtually unlimited information of interest to both. Based on these examples, most people would say that technology represents a positive influence in our world and our lives.
On the other hand, technology has become so pervasive in virtually every aspect of our personal and professional lives that we are now fundamentally dependent on it to transact even the most basic tasks of our daily existence, to the point that most people mediate their primary human relationships through technology.
Technology has done big things in the very recent past. It has democratized access to information. It has effectively eliminated time and geography as barriers to communication. It has facilitated automation that has completely changed how entire industries function—and how the humans in those industries work and interact. It has made some products and services much cheaper (or even free).
It has also been incredibly disruptive, both in how it impacts our lives and in the increasingly rapid pace with which new technologies enter the mainstream of human life and enterprise, fundamentally changing how we communicate, work, behave, and interact with one another. It has wiped out entire labor markets, and it has shifted the risk of failure from single entities to entire systems, and this is a really, really big deal.
In the “old days,” when technology was electro-mechanical or purely mechanical, machines failed all the time. However, those failures were limited to individual machines (a car, a loom, a washing machine, etc.). Now, with almost all technology employing some sort of software, software that is connected to other things also running software, we are vulnerable to massive, even catastrophic failures, and it’s already happening. Entire airlines are grounded across the globe. Power grids go down. Vehicle fleets are pulled off the roads, all because a string of software code fails, is hacked, or was flawed to begin with. The current process for coding is deeply flawed as well, but that is another post…
In short, a new car, for example, typically requires millions of lines of code to operate multiple processors. There is no way for any programmer or programmers to anticipate the billions of potential combinations of scenarios that a driver and the car (and hundreds of thousands of other cars and drivers) will encounter over millions of miles in highly diverse environments. As a result, a piece of code that is supposed to stop a car from accelerating, for example, given enough time and scenarios, will inevitably fail to do so even when the driver takes her foot off the gas pedal. This has already happened, resulting in accidents and even deaths. And the problem affects every single car running the same code.
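To make the failure mode concrete, here is a deliberately simplified, hypothetical sketch (not real automotive code, and the function name, inputs, and values are all invented for illustration). It shows how a controller can behave correctly in every case its authors tested while mishandling one rare combination of inputs:

```python
# Hypothetical sketch: a toy throttle controller that is correct in the
# common cases but mishandles one rare combination of inputs.

def throttle_command(pedal_position, cruise_active, sensor_fault):
    """Return a throttle output (0.0-1.0) for a simplified controller."""
    if sensor_fault:
        # Fallback path: hold a fixed setpoint "to be safe".
        # Tested with cruise off, but never with cruise engaged AND the
        # driver releasing the pedal -- the latent bug.
        return 0.3 if cruise_active else 0.0
    if cruise_active and pedal_position == 0.0:
        return 0.2  # maintain cruising speed
    return pedal_position  # normal driving: throttle follows the pedal

# Common cases behave as expected:
assert throttle_command(0.0, False, False) == 0.0  # foot off, no cruise

# Rare combination: a sensor fault while cruise is engaged and the foot
# is off the pedal -- the throttle stays open at 0.3 instead of closing,
# so the car keeps accelerating when the driver expects it to coast.
assert throttle_command(0.0, True, True) == 0.3
```

The point is not this particular bug but the combinatorics: each input doubles or multiplies the state space, and the one untested combination can sit dormant in every car running the same code until real-world conditions finally produce it.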
It is becoming clear that there are at least a few areas in which our “bargain” with technology may have come at a very steep price: privacy, independence, vulnerability to failures, human development, mental health, and human relationships—and we have barely begun the next technology era of artificial intelligence and mixed/virtual reality! The reality is that we are all subjects in a very big experiment, and we frankly don’t know what the outcome will be, particularly for young people (tech natives), the first generation in human history to have lived their entire lives mediated through technology and tethered to smart devices.
This horse is way, way out of the barn. We are not going back to a pre-software world, and on balance, most of us are relatively satisfied with the technology we use every day (and depend on without even knowing it). But there are two realities we should be thoughtful about.

One is that there will continue to be massive, catastrophic systems failures, and it will get worse before it gets better. Part of this isn’t even related to technology failing directly—it comes from support systems failing. Ask the people of Puerto Rico about life without smart phones, ATMs, and internet, all of which need electricity and other infrastructure to function.

The second reality is that we can still individually carve out space in our lives that is mostly tech-free if we choose to, and we should do that on a regular basis. As human beings, our relationships probably need time with others that is not mediated through technology. We know that spending significant, unbroken hours engaged with laptops, tablets, smart phones, and video games has documented effects on our brains and bodies (think concentration, sleep, and eating patterns). We also know that social media without breaks can increase anxiety and decrease self-esteem, while also generating a great deal of stress. A growing body of research shows that human beings need and benefit from extended periods of time in social and natural environments that are not mediated in any way through technology. At least for now we still have some control over that, and we should regularly exercise that control for our own benefit and the benefit of others.