How do we make dumb computers smart? It's about practice.
If you want to understand Artificial Intelligence, first you’ve got to understand how computers learn.
The first AI I ever encountered was not Artificial Intelligence but Allen Iverson. The initiated know AI as one of the best combo guards in the history of basketball, and even the uninitiated have probably seen his infamous press conference rant about “practice” from 2002.
Iverson was special because he thrived despite being unconventional. In his own words:
“Did I try to fit in? Hell NO I didn’t! I loved hip hop and I let everyone know it. I had these tats and I wasn’t about to cover them up. I could stay out late one morning, then drop 50 and 10 that next night, and I wasn’t even trying to hide it. I wasn’t some robot built to play basketball — you know what I’m saying? I was this real person, from this real place.”1
He’s right. If an engineer sat down and tried to build a robot to play basketball, they would have never created anything resembling Allen Iverson.
How would they do it? Computers are very, very dumb. They’re fast and efficient but they aren’t smart or intuitive. If you want a computer to do anything right, you need to give it a precise recipe to follow. It's not enough to say “season to taste” — a computer needs to know exactly how much salt to add and when.
Before Artificial Intelligence, the entire game of basketball would have to be defined in an excruciatingly detailed list of rules: bouncing a basketball consists of pushing the ball towards the court using your hand. Dribbling consists of using one hand to make a series of consecutive bounces without stopping. A pass is how you give the ball to your teammate, and there are two types: a chest pass and a bounce pass, which uses the “bouncing” function we defined earlier.
Exhausting, right? And we haven’t even mapped out how to actually compete against the other team. What should the robot do when it’s trying to make a bounce pass but someone is in the way? We need to go beyond rules and articulate instincts: how to anticipate whether your opponent will go right or left, how to move when you don’t have the ball, what fouls this ref will call as opposed to that one. It’s a nearly impossible feat.
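The rule-writing above can be sketched in code. This is a hypothetical toy, not a real basketball engine — every name and object here is invented for illustration — but it shows how each action has to be spelled out by hand, and how "bounce pass" has to reuse the "bounce" rule we defined earlier:

```python
# A toy rule-based robot. Every behavior must be written out explicitly;
# the computer contributes nothing on its own. All names are invented.

def bounce(ball):
    """Push the ball toward the court with one hand; it returns to the hand."""
    ball["height"] = 0          # ball hits the floor
    ball["height"] = 1          # ...and comes back up
    ball["bounces"] += 1

def dribble(ball, times):
    """One hand making a series of consecutive bounces without stopping."""
    for _ in range(times):
        bounce(ball)

def bounce_pass(ball, teammate):
    """A pass that reuses the 'bounce' rule defined earlier."""
    bounce(ball)
    ball["holder"] = teammate

ball = {"height": 1, "bounces": 0, "holder": "robot"}
dribble(ball, times=3)
bounce_pass(ball, "teammate")
```

And this covers only the mechanics — none of the instincts, anticipation, or judgment the next paragraphs describe.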
Until recent history, computers were limited by the human capacity to describe things using clear-cut rules. A computer has no internal sense of problem solving — it’s just a dumb machine waiting for you to tell it what to do.
By contrast, humans are smart! Even the dumbest among us are smarter than computers. Seven-year-olds can watch Steph Curry highlights on YouTube and then go to the park to shoot around and make one or two. They don’t need exhaustive lists and detailed explanations; if they see a few examples and practice for long enough, they’ll just get it.
Artificial Intelligence rests on the idea that we can make computers that are as smart as humans. If computers are going to become our benevolent overlords then we need to teach them how to think like us — how to reason, strategize, plan and most importantly learn like we do. The branch of AI that teaches computers how to learn is called Machine Learning, and it’s one of the most important disciplines in the field.
Machine Learning teaches computers how to start with an example and work backwards to come up with their own set of rules for achieving it. A program powered by machine learning starts at the end, not the beginning; it doesn’t need to know every step to take beforehand because it’s smart enough to figure it out along the way.
How do you teach a computer how to learn? It's about practice, of course.
No rules, no lists, just vibes ✨
Instead of a list of directions, the engineer would show the robot different examples of what makes a great NBA guard: highlight reels of Magic Johnson and Clyde Drexler, Jordan’s workout routine, and even data on which shoes to wear and diet to follow. The robot would use those examples to create a model — aka “recipe” — on how to become a world-class basketball player.
Creating this recipe is mostly math, and if you took Statistics 101 then you already know the foundational concepts of Machine Learning. Statistics teaches us how to find patterns in data and create formulas that capture the relationship between what we put in and what we can expect to get out. If we know X then we can predict Y.
Machine Learning is statistics on steroids. It takes the outcome you want (a made three pointer) and works backwards to figure out which variables matter to achieving this outcome. Then, it creates a neat little formula for how to get there: this launch angle + that arc + a flick of the wrist = a swish.
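Here is the Statistics 101 move in miniature. All the numbers are invented for illustration: given examples of X (say, hours of practice per week) and Y (points per game), ordinary least squares works backwards from the outcomes to a formula linking them — then, if we know X, we can predict Y:

```python
# A toy of "if we know X then we can predict Y", with made-up data.

def fit_line(xs, ys):
    """Ordinary least squares for one variable: y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

hours = [5, 10, 15, 20]        # X: what went in
points = [11, 21, 31, 41]      # Y: what came out

slope, intercept = fit_line(hours, points)
predicted = slope * 25 + intercept   # predict Y for an unseen X
```

Real Machine Learning does this with thousands of variables and far messier data, but the move is the same: start from the outcomes and solve backwards for the recipe.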
Machine Learning is trying to reproduce what our brains do as we transform the things we see, hear and feel into knowledge. The more we see, hear and feel something, the more we just know it because our mind has created our own little formula for how to get there.
But it’s not enough to just create a recipe on paper. Next, the robot has to go to the gym and put what it learned to work. Unlike Allen Iverson, Artificial Intelligence loves practice. It thrives on it. Engineers “train” computers just like coaches “train” athletes: show them how to do a skill and then have them practice it in a safe, experimental environment. Engineers use a technique called reinforcement learning to give the computer positive feedback when it does well and negative feedback when it misses the mark.
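That feedback loop can be sketched as a toy, too. This is a minimal bandit-style sketch of reinforcement-style feedback, with invented physics and numbers — not a full reinforcement learning system: the robot tries release angles, earns a reward of 1 when the shot drops and 0 when it misses, and keeps a running estimate of how good each angle is:

```python
# A toy practice gym. Reward is the "positive feedback when it does well,
# negative feedback when it misses the mark." Everything here is invented.

def shot_reward(angle):
    """Pretend physics: only a 45-degree release goes in."""
    return 1.0 if angle == 45 else 0.0

angles = [35, 45, 55]
value = {a: 0.0 for a in angles}   # estimated reward per angle
counts = {a: 0 for a in angles}

for rep in range(300):
    # Try every angle once early on, then exploit the best estimate.
    if rep < len(angles):
        angle = angles[rep]
    else:
        angle = max(value, key=value.get)
    reward = shot_reward(angle)        # the coach's feedback
    counts[angle] += 1
    # Incremental average: nudge the estimate toward what just happened.
    value[angle] += (reward - value[angle]) / counts[angle]

best_angle = max(value, key=value.get)
```

After enough reps in this safe, experimental environment, the robot's estimates converge on the release that works — practice, codified.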
With enough practice, the robot will become the most fine-tuned basketball playing machine the world has ever seen. But like Allen Iverson predicted, it would be nothing like him.
The robot would probably be around 6’5 like Oscar Robertson, not short like AI who barely measured 6’0 in shoes. The robot would probably have a tireless work ethic like Kobe, unlike AI who was known for partying the night before games and practicing only when the spirit moved him. For his shoe endorsement the robot would probably sign with Nike like Mike instead of Reebok like Iverson, whose prior roster included D-list stars like Nick Van Exel and Shawn Kemp.
There was no model that would have predicted AI’s success and yet he became one of the greatest of all time with the accolades to prove it: 4x scoring champion, 2001 league MVP and a first-ballot Hall of Famer. Iverson didn’t just win games, he defined the game for an entire generation of players. AI is your favorite scorer’s favorite scorer. He’s the forefather of handles, the blueprint from which guys like Kyrie Irving learned how to maneuver the ball with equal parts force and finesse. He’s the Mr. Miyagi of disrespect; he savaged Ty Lue with the step-back / step-over so that Steph could launch three-pointers from the logo and then shimmy with impunity.
AI gave guys permission to embrace the one thing the NBA was always trying to temper: their blackness. The league had been dominated by young black men since the mid-90s but that doesn’t mean that young black men felt seen by the league. To NBA commissioner David Stern “hood” was a four letter word, but to Allen Iverson it was a badge of honor. AI made sure you could see him, all six feet of him, in his trademark tall tee, baggy jeans and gold chains. AI had his mom braid his hair while sitting on the bench during a game. His strength was in his difference, and the game is so much better off because an outlier broke through and found success in his own unique way.
A few big flaws of AI
Now that you understand one of the most important principles of Artificial Intelligence — the way that computers learn — you’re ready to examine one of its biggest flaws. The data we use to train computers is limited by what we can observe and collect, and there are all sorts of intangible things we can’t represent in 1s and 0s. Whatever made Allen Iverson special — his perseverance, creativity or confidence — can’t be fully captured in the data.
An NBA GM would never trust statistics alone to decide his next draft pick, but some states trust Artificial Intelligence to weigh in on complex issues of justice, like deciding which incarcerated people should get parole based on who has committed crimes in the past. Unsurprisingly, more black defendants get incorrectly flagged as being likely to commit a crime again, which leads to a denial of parole. Machine Learning models look for the most powerful patterns within imperfect data to make predictions, and in doing so risk codifying past circumstance into future fact. In this version of the world prejudice is a feature, not a bug.
Faith is one of the defining factors of the human experience, but can AI believe in things that it cannot see? Can it help us imagine a better world or only reinforce the one we already have today?
Machine Learning models also breed homogeneity. Creating the future based on the average of everything in the past makes everything…pretty average. That’s why ChatGPT sounds like a vaguely soulless Wikipedia article – it’s the average of most of the words it was trained on.
Can AI be creative, not only generative? The former is about imagination and the latter is about productivity. Spawning millions of things similar to what we’ve seen before is much different than contributing a single net-new idea to the canon.
There are already a lot of smart people thinking about how to solve these problems and now, you’re one of them. That’s the point of Ain’t I — to not just explain the mechanics behind AI but also to give you some direction on how to interrogate what it means for our society and culture.
After reading this, you should understand that Machine Learning is the branch of AI that teaches computers how to learn. Instead of defining a list of rules for computers to follow, we give them lots of data around different tasks so that they can create their own unique models for how to achieve them. Because computers aren’t limited by how well we can describe things, they can solve much more complicated tasks with far less human engineering. But this approach is not without its downsides, the first of which is that AI can replicate suboptimal things that have happened in the past. The second is that it can lead to homogeneity and a regression to the mean rather than creating things that are truly novel or unexpected.