Technology has changed, but is it for the better?
Call me old, but I remember the days before the internet, the iPhone, and modern computers. I was a do-it-yourselfer, building several computers from components and throwing together programs, first on a graphing calculator, then in BASIC, before eventually learning to program in earnest. That background taught me to problem-solve and to focus on a single, difficult task until it was complete, and I feel it is responsible for my success in software development as a career.
The industry and culture have been moving away from this for some time, but the decline has been gradual enough to be hard to notice. As I raise my boys, I’ve been reflecting on how things have changed and questioning whether it is really for the best. Much like previous generations questioning the move from manual transmissions to automatic, or from mechanical components to computerized electronic ones in cars, I am not sure that less modular, more tightly controlled devices and software are actually a good thing. We’ve traded the opportunity to learn and understand for convenience. Is it worth it?
I was about 5 years old when we first got a computer and around 13 when we first got the internet. The timing was perfect: I was old enough to understand the power of both, but young enough to fully embrace them and fall in love with the promise they offered. My dad installed some DOS games on the computer, and I knew that one day I wanted to understand how it all worked. Through high school and college, I built several computers, often making mistakes that would haunt me, but pushing through the challenges. I once bent the pins on a CPU, which would then overheat and shut down the computer, seemingly always in the middle of an intense game. To counteract this, I added a water cooling system that ended up leaking and frying the whole machine. Through this journey of mistakes, I learned patience during component installation, as well as how to troubleshoot these kinds of issues.
Last month, I got a new computer at work because I was eligible for an upgrade after three years with my previous MacBook Pro. The new version, while significantly thinner and lighter, which is great for carrying around to meetings all day, perpetuates the trend away from customization and tinkering. Every component in it is sealed. That is nothing new for Apple, but the keyboard is now so finely integrated that, I’ve been told, if anything goes wrong with it, the entire machine essentially needs to be replaced. Failure seems inevitable given how little the keys travel. Yet somehow they are loud enough that as I type this, sitting in the jury duty waiting room, I keep receiving dirty looks.
Perhaps worse, the removal of every port except two USB-C ports means I have to carry cables and dongles around with me all day to connect to projectors, get power, charge my phone, and drive my monitors. Looking at my colleagues’ PCs, with all of their various ports, you would think they were carrying around tower PCs. I swear I even saw a modem port on one.
Meanwhile, my latest phone, the Google Pixel 2 XL, has no headphone jack, no SD card slot, a sealed battery, and uses USB-C. Sure, I can use the Bluetooth “standard,” but in reality every flagship phone now implements its own version of it, making many headphones either incompatible or a poorer experience than those made by the phone manufacturer.
Software is just as bad. The app stores on phones, Apple laptops, and even the Windows Store have further locked down what applications can do and raised the barrier to throwing together a simple, hacky program just to learn. Thankfully, we still have Unix and the command line, but with the way things are going, how much longer will those last? Cloud-based services like AWS and Azure have abstracted away so many of the details of infrastructure and coding that future generations may never learn about CPU multithreading or thread contention, or ever have to worry about resource constraints.
Yes, it’s great that we are moving up the abstraction layers, and technology has become far more accessible, usable, and convenient as a result. But this progress isn’t free, and I worry that along the way we’ve left behind the reasons to learn and deeply understand the technology itself.
My kids are going to grow up in a world with even more locked-down hardware and software. They will likely never need to build their own machines to get the best experience; top-end, ready-made hardware solves that problem. I will strongly encourage them to be curious and tinker, but what is going to drive them to explore on their own if there is no need to hack or play around? There is a lot of talk about how important coding is, and plenty of STEM-focused toys, but do any of them really provide a compelling reason to learn? If kids don’t see learning as necessary to achieve a larger goal, it is easy to give up at the first challenge. Without challenges that scale and enable continuous learning, I fear these skills will never be developed.
I recently picked up some older gadgets and electronics from my grandfather (it’s funny that 7-year-old technology is now considered old). Among them were some old cameras, laptops, and iPods. The cameras were especially illuminating: they didn’t even have a shutter button, just a manual lever you pulled to open and close the shutter. They even used – gasp – film! It brought me back to the days of taking pictures with no idea how they would turn out until the film was developed months later. The feedback loop was very long, but the cost of a mistake was so high that it forced learning and improvement. Today’s digital cameras make the process so easy that the vast majority of users never switch out of automatic mode.
The iPods reminded me of the days of downloading music of questionable quality and the huge amount of time spent managing it. I had to make careful choices about what music to get and keep, due to space limits on both the player and the computer; with 1 GB hard drives, careful curation of a music library was critical. To get the best experience, I also had to manually fill in metadata like correct titles, cover art, and album names. Doing so exposed me to new music and gave me a deeper appreciation for the music I had. Streaming services have removed all of this work. I discover more new artists and I listen far more, but I never develop the same deep relationship with the music because I only hear most songs once or twice.
With this blog, I can spin up an entire customized webpage in about 2 minutes using WordPress and all of the great plugins available. In the early days of the internet, I had a GeoCities website, and to customize it I actually had to learn to write HTML. The internet provided the means to learn it, through tutorials and an easily customizable sandbox, but it was the desire to make my page stand out among my friends’ and others’ that drove me to learn. To be honest, my HTML was terrible, and I had to learn far more when I started working, but that experience gave me the foundation to do so and to succeed.
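For flavor, here is a minimal sketch of the kind of page I would hand-write back then. The title, colors, and links are invented for illustration, but the tags are true to the era:

```html
<!-- A made-up GeoCities-style page: every name and value here is illustrative -->
<html>
  <head>
    <title>Welcome to My Corner of the Web</title>
  </head>
  <body bgcolor="#000000" text="#00FF00">
    <center>
      <h1>My Home Page</h1>
      <!-- <marquee> scrolled text across the screen; it is long deprecated -->
      <marquee>Under construction! Sign my guestbook!</marquee>
      <p>Check out my <a href="games.html">favorite games</a> and
         <a href="builds.html">computer builds</a>.</p>
    </center>
  </body>
</html>
```

Crude, sure, but every line was something you had to figure out yourself, and that was the point.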
There is no doubt that technology has rapidly advanced, becoming more accessible and improving the world. But sometimes the pace is so fast, and the pressure to move fast so overwhelming, that no one questions whether every decision is really worth the tradeoffs. The world is more connected and our lives are more convenient than at any point in history, which is worth celebrating, but we need to stop and consider what we’ve lost in the trade. Progress isn’t always a straight line; it often moves up and down, and we might need to move back a little in order to move forward.
I want my boys to grow up in a world that rewards curiosity and encourages dedication. I don’t want everything to be easy, convenient, and trivial for them. Some things in life need to be hard so that we learn to persevere and overcome them. That is what develops a growth mindset and the belief in one’s own ability to solve complex, difficult problems.