How Will AI Impact Our Perception of Free Will?

This morning I decided to work out. Or did I? 

If you were the neuroscientist Sam Harris, your answer would be, “No, he didn’t decide that.” Harris argues that free will is an illusion.

He suggests that thoughts and intentions emerge from background causes of which we are unaware and over which we exert no conscious control. According to him, our thoughts and actions are the result of prior causes we did not choose, which means we do not have the free will to choose them.

His views are largely informed by neuroscience. He cites studies showing that our brains make decisions before we are consciously aware of them. This indicates that our subjective experience of making a choice is not the cause of the choice, but rather a subsequent event. Still with me? I didn’t think so. 

But let’s pretend that we either believe the above or still believe we have complete control of our decisions and destiny. What happens when we hand over this agency, or illusion of agency, to something we believe to be better equipped to make these decisions for us? 

One example would be Tesla. Elon Musk argues that Full Self-Driving cars can make far better decisions than any single driver because they are trained on the experiences of hundreds of thousands of drivers. Not to mention, they will never make objectively poor decisions like texting while driving, having four cocktails with dinner before heading home, or commuting to work on two hours of sleep.

Or you could ask Bryan Johnson. 

Bryan Johnson is the tech mogul spending $2MM a year in an attempt to reverse biological aging. But it’s not all hair dye and skin treatments (although there is some of that as well). Johnson undergoes a litany of weekly tests that inform him of exactly how to eat, treat, train, and supplement his biology for the best chances of ultimate longevity. No opinions, no “well, this is how I feel today.” He obediently follows the data, trusting that the algorithm has his best interests built in.

Bryan Johnson collecting a little data

So, let’s consider that at scale: the scale of wide or narrow data models with our best interests waiting at the command prompt. Last year, I articulated my fitness goals to another human, and she gave me a workout prescription that, in her opinion, was best suited for me. I didn’t always agree with her opinions, especially given the very human limitation of juggling many other clients, each with their own histories, programs, and environments.

I created an AI model specifically for prescribing strength training workouts. It knows my history, the equipment I have, and my goals. I did something similar for nutrition, where it knows my caloric and protein requirements based on the training I’m doing, dietary preferences, and overarching goals. It then provides meal plans and a grocery list. On one hand, you might say that these are just conveniences. But on the other, I am surrendering a series of choices I was otherwise making myself. 
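If you’re curious what that looks like under the hood, here’s a minimal, simplified sketch of the idea (the names and values below are purely illustrative, not my actual model or prompt): a structured profile of goals, equipment, and history gets turned into instructions for a language model, which sends back the week’s programming.

```python
from dataclasses import dataclass, field

@dataclass
class TrainingProfile:
    """Illustrative stand-in for what the model is told about me."""
    goals: list[str] = field(default_factory=lambda: ["build strength", "stay injury-free"])
    equipment: list[str] = field(default_factory=lambda: ["barbell", "dumbbells", "pull-up bar"])
    history: str = "intermediate lifter, prior lower-back issues"
    sessions_per_week: int = 4

def build_prompt(profile: TrainingProfile) -> str:
    """Turn the profile into instructions a language model can act on."""
    return (
        "You are a strength coach. Prescribe this week's training.\n"
        f"Goals: {', '.join(profile.goals)}\n"
        f"Equipment: {', '.join(profile.equipment)}\n"
        f"History: {profile.history}\n"
        f"Sessions per week: {profile.sessions_per_week}\n"
        "Return one session per day with exercises, sets, reps, and target loads."
    )

# The resulting prompt would be handed to whichever model is doing the prescribing.
print(build_prompt(TrainingProfile()))
```

The nutrition version works the same way, just with caloric and protein targets in place of sets and reps.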

Taking this a step further, I’m working on an AI to analyze ongoing journal entries that date back over a decade. Like a therapist with perfect memory, it should be able to connect dots I’m unaware of and offer advice and insights to help me through current struggles. Theoretically. Like most AI tools right now, it’s rough around the edges, but it’s good enough to give more than a glimpse of where this is headed.

So, at what point do we give AI command over most of our daily decisions, knowing it is going to be acting in our best interest? Or did we ever really have agency over this to begin with? I guess it all depends on what you believe. If you believe you have free will, you might be more hesitant to let it go. If you believe, as Harris does, that it is an illusion, you may be more willing to acquiesce. But it may also be worth mentioning that Sam Harris is very concerned about AI.

If you believe that we’re all part of one big collective soup of consciousness, then maybe AI will be the data expression of all of that. But then again, I’ve been known to err toward optimism.
