Software developers need to act more like parents with Artificial Intelligence
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful.” - Elon Musk
“HAL had a lot of information, could piece it together, could rationalize it. Hopefully it would never have a bug like HAL did where he killed the occupants of the spaceship. But that [level of artificial intelligence] is what we’re striving for, and I think we’ve made it a part of the way there.” - Sergey Brin
These two quotes from two of the most influential leaders in the tech industry demonstrate two commonly held beliefs about the state of AI and where it may go in the future. Most technologists share Google founder Sergey Brin’s view that AI is the gold standard for where technology should go, and that the relentless march toward perfect AI is the path to a perfect experience. Tesla’s Elon Musk holds a less common view: that AI is incredibly dangerous and could harm humanity if left unchecked. Other technologists with similar views believe progress on AI should be halted until we better understand its potential impact. Both are wrong; the best path forward lies in the middle.
Software developers are rarely encouraged to consider the ethical or humanitarian impact of the products and systems they build. The goal is most often to improve, scale, or otherwise build up software in a relentless drive to outpace competitors and acquire new customers faster. The questions asked during the initial requirements phase and during pauses in implementation are design questions meant to determine the fastest, most optimal solution to a problem: the how, rather than the why. Developers are tasked with figuring out exactly how they will build the product, never whether they should build it at all.
The problem starts in school. Most colleges require a bare minimum of non-technical courses for a computer science degree. In my personal experience, I only had to take an introductory English literature course and a history course, and I optionally took a few courses in cognitive science. Hardly any schools even offer, let alone require, any type of ethics course. A few, such as Holy Cross, Tufts, and Utah Valley University, are now bucking the trend with ethics courses for computer scientists. Still, most students who go on to become software developers building the next generation of technology will never have taken an ethics course or learned to question the reasons for building things. Without this context, they will never ask the right questions or challenge each other. Whether or not AI is the right goal or will lead to the end of humanity, it is important to ask deep questions about how things should be built responsibly.
This extends beyond questions about AI. One of the most pressing concerns with the technology industry and its company cultures is a lack of diversity and inclusion. Stories abound about bias and discrimination within all of the major tech companies, and within many smaller startups as well. This is only the tip of the iceberg, as much of this toxic culture goes unreported because employees are afraid to speak out or feel nothing will change. Worse, they may come to feel it comes with the territory of software development and begin to accept it.
We need to teach developers to fight these preconceived notions. The status quo can always be challenged and broken, but that won’t happen if the questions never get raised. Ethics classes can help, but this education really needs to begin earlier. Ethics and morality are concepts that need to be inculcated early in life to be effective. Endemic belief structures form very early and are difficult to change, so the sooner these concepts are developed, the more likely they are to be followed later in life.
Software developers need to act more like parents in AI development and research as well. To drive the right questions and really think deeply about the implications of work on AI, developers need to think about their creations the way parents think about their children. Developers should worry themselves to sleep at night about whether they are raising their progeny correctly, in a manner consistent with their values and beliefs. Parents mold their children in a combination of their own image and the desired image of what they see as important, including ethical, social, and humanitarian beliefs. Developers working on AI should weigh their decisions about how they build and train AI the same way parents worry about the values they impart to their children.
Strong parenting combines hands-on guidance and development with going hands-off and letting kids make mistakes and learn for themselves. However, no parent can go completely hands-off and succeed. Unfortunately, that is the approach most computer scientists are taking with AI. They build models, train, develop, and add capabilities, but don’t consider the implications or what values the AI is being imbued with in the process. Did the developers of Watson, the AI that won Jeopardy!, consider the implications of trouncing the other contestants? Did the developers of the AI that powers restaurant recommendations consider the impact on small businesses? These questions aren’t even noted during development and the endless quest for improvement and new features. We need to start asking them, especially as AI improves and begins performing well enough to make whole categories of human jobs obsolete. As the AI that powers self-driving cars improves, questions of safety and ethics, such as what to do when a collision is unavoidable, will become necessary to ponder. Without a background and experience in asking and answering these types of questions, AI will become a soulless box that causes irreparable harm.
Parents and developers treat mistakes very differently. Parents see mistakes as learning opportunities and provide feedback and guidance to improve the situation or behavior so that a different outcome is reached in the future. Developers see mistakes as things to be eradicated and prevented at all costs: not development opportunities, but negative side effects of less-than-perfect code to be stamped out. When a child does something bad, like biting another kid on the playground, parents take many actions, including talking to the child and explaining the problem, its impact, how it makes others feel, and what to do differently. When programmers see a mistake in their program, they modify it or remove the functionality. If a child screams the f-bomb in class, parents may discipline them, take a more laid-back approach that encourages emotional empathy, or talk through the problem. The point is that there are many options, and different parents may employ different techniques to encourage change. Over a child’s life, these varied experiences with parents, family, and teachers holding differing values and approaches will sum up to shape the adult who emerges. AI development needs a similar approach: imbuing diverse and differing viewpoints and values to build a well-rounded entity that can make well-reasoned, informed decisions.
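To make the contrast concrete, here is a minimal sketch of the two approaches to an AI mistake. This is purely illustrative Python; the `Model` and `Mistake` types and the handler names are hypothetical, not any real library’s API. One path patches the symptom away; the other records a correction so the mistake becomes a learning opportunity.

```python
# Hypothetical sketch: two ways a team might handle a model's mistake.
from dataclasses import dataclass, field

@dataclass
class Mistake:
    input_text: str        # what the model was asked
    bad_output: str        # what it answered
    corrected_output: str  # what a human says it should have answered

@dataclass
class Model:
    blocked_inputs: set = field(default_factory=set)
    training_examples: list = field(default_factory=list)

    def eradicate(self, mistake: Mistake) -> None:
        # The typical developer reflex: disable the offending behavior.
        # The symptom disappears, but the model learns nothing.
        self.blocked_inputs.add(mistake.input_text)

    def learn_from(self, mistake: Mistake) -> None:
        # The "parenting" approach: pair the mistake with a correction
        # and fold it back into the training data, so the next round of
        # training shapes better behavior instead of hiding the failure.
        self.training_examples.append(
            (mistake.input_text, mistake.corrected_output)
        )

model = Model()
m = Mistake("recommend a restaurant", "suggested a closed business",
            "suggest an open, local cafe")
model.learn_from(m)  # feedback, not deletion
```

In practice this is the difference between a hard-coded blocklist and a feedback pipeline that routes corrections back into training, which is exactly where those diverse viewpoints and values would enter the system.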
As a banner at the entrance to the daycare our kids attend states:
“Who children become is more important than what they know”
If we do not guide and develop the character and spirit of AI now, as it is being built, we will lose control and be unable to unwind the damage done. These “soft skills,” so often derided by those in the sciences, need to be taken seriously and integrated into the software development process and lifecycle if we are to succeed and actually achieve the stated goal of so many of these companies: making the world a better place for all, not just the privileged.