
Artificial intelligence: Expert or actor?

Diploma Programme (DP) graduate Johan Byttner looks into artificial intelligence, which seems to be everywhere nowadays, and why it’s not necessarily the expert we think it is. This is his third story in our graduate voices series.


By Johan Byttner

Everyone talks about AI; you think it’s in your phone, half the city seems to be working on it, and you are left feeling that the only way you will ever command it is to ask a speaker about the weather. This AI thing seems to have conquered the world and you are left wondering: what is it? What does it do? Can it read my emails? How do I stay in control? And why is it so good at making decisions?

Quite unlike any of your previous computers, AI now seems to be not just your private secretary but also your chauffeur, doctor and analyst. We are here because something fundamental has shifted, not only in how programs are written, but also in how humans understand information. Today we can build an artificial system that appears to be smart and creative. But appearances can be deceptive.

Heuristics and other life hacks

“AI systems think in terms of heuristics … they fail spectacularly when unexpected things happen”

It is said that intelligence is the ability to reason about abstract concepts. This is not the same as applying logic to a problem. AI can do the latter but, broadly, not the former. It is easy to confuse the two. What separates them is that reasoning requires abstract, dissociated thought, while logical problems can often be solved with heuristics.

A heuristic (rule of thumb, trial-and-error solution) is a working practice that, when applied to a problem, often gets the answer right. A useful heuristic is that there are no tigers in your bedroom. A more dangerous one is that cars will yield when you cross the street. AI systems think in terms of heuristics. They are therefore very good at dealing with the usual scenario, but they fail spectacularly when unexpected things happen, such as when someone parks a fire truck on a highway.
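
To make the failure mode concrete, here is a deliberately toy sketch in Python of the ‘cars will yield’ rule of thumb; the speeds, distances and thresholds are all invented for illustration and come from no real system.

```python
# A toy heuristic echoing the example in the text: "cars will yield when you
# cross the street". All numbers and thresholds here are invented.

def car_will_yield(speed_kmh: float, distance_m: float) -> bool:
    # Rule of thumb: if the car is far away or moving slowly, assume it yields.
    return distance_m > 30 or speed_kmh < 20

def safe_to_cross(cars) -> bool:
    # The heuristic verdict: cross if every approaching car is expected to yield.
    return all(car_will_yield(speed, dist) for speed, dist in cars)

print(safe_to_cross([]))          # True: the usual, empty street
print(safe_to_cross([(15, 10)]))  # True: a slow car creeping towards the crossing
print(safe_to_cross([(90, 35)]))  # True: a fast car "far enough" away;
                                  # right most days, catastrophic when it is not
```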

When you hear about AI threatening the survival of an industry, this is usually not the case. Rather, AI threatens the bits of an industry where straightforward planning gets you by. If you find yourself ‘disrupted’, you will notice most boring jobs being palmed off to diligent, dedicated automatons that conduct a sequence of actions more predictably than you do. The best AI systems can even observe you doing something, notice your typical pattern and copy it. They then refine it to perfection with some simple experiments. It is magic, until you realise it is just guesswork.

“Funnily enough, it seems like humans worry about being worse than robots in the fields where robots excel. Conversely, human strengths are often ignored.”

You will often hear data scientists talk about the dimensionality of a problem. It is a rough measure of how many variables a system has to juggle at once, and therefore of how difficult a task is for a robot. Playing chess has a low dimensionality, Go has a medium one, StarCraft has a high dimensionality and cycling has such a huge dimensionality that few robots can do it better than a three-year-old. AI has a weird understanding of our reality and it takes a while for a human to get used to it.
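
One rough proxy for dimensionality is the branching factor: how many moves are typically available at any given moment. The sketch below uses commonly quoted, approximate figures purely to show how quickly the possibilities multiply.

```python
# Very rough, commonly quoted estimates of the branching factor for each game.
# The figures are illustrative, not exact.
BRANCHING = {"chess": 35, "Go": 250, "StarCraft II": 10**26}

def lines_of_play(game: str, moves: int) -> float:
    # Assuming the branching factor stays constant (a big simplification),
    # the number of distinct futures grows exponentially with the move count.
    return float(BRANCHING[game]) ** moves

for game in BRANCHING:
    print(f"{game:12s}: roughly {lines_of_play(game, 5):.1e} futures after 5 moves")

# Cycling has no tidy list of moves at all: a continuous body balancing on
# continuous terrain, which is why its dimensionality dwarfs the games above.
```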

Funnily enough, it seems like humans worry about being worse than robots in the fields where robots excel. Conversely, human strengths are often ignored. When I speak with people in different industries, they universally worry about robots doing the easy bits. But true virtue is doing something that is hard.

Robots filling in spreadsheets

There are plenty of times when automating somewhat complex sequences of actions has proved valuable. The Payment Protection Insurance scandal in Great Britain started with the mis-selling of insurance in the 1990s. People who took out mortgages were offered insurance in case they fell ill or were otherwise unable to pay the money back. But only some 15% of the money was actually returned to claimants, making this a greatly overpriced scheme. It could add more than 20% to the monthly payment of a loan, making it a significant expense.

At its peak, there were 20 million outstanding policies, and many policyholders were unaware that they could claim. When the scandal broke, the Financial Conduct Authority (the UK’s financial regulator) decided that banks had to pay back most of the money. The sheer number of claimants meant that most banks had to hire thousands of workers. But some turned instead to AI.

“Menial jobs have long been eliminated on factory floors and replaced with more qualified roles”

In the end, millions of claims were handled by programs that had learned how a human would process one. One bank hired six thousand contractors to deal with claims. Another wrote tens of thousands of lines of code, to much the same effect.

One could have imagined that hiring thousands of people to do the repetitive, manual work of processing claims would provide a valuable societal service, and should therefore be encouraged. However, menial jobs have long been eliminated on factory floors and replaced with more qualified roles (to the point that there is a projected shortfall of 2.4 million U.S. manufacturing workers over the next ten years). If we take the trend in manufacturing as a guide, it seems likely that many administrative jobs will disappear, to be replaced by fewer but more skilled positions.

Markov Chaining through Mazes

There is a game called Pac-Man that consists of guiding a yellow pie through a maze. To confuse the pie, there are ghosts that will try to eat it. The game long seemed simple, yet it stayed just out of reach for AI. It is theoretically solvable by a reinforcement learning algorithm called Q-Learning. Imagine that the player can move one square up, down, right or left, if there are no walls in the way. The pie can also stand still. The configuration of the board after each move is called a state. Each ghost can also move and will choose a direction that brings it closer to the player, with some randomness thrown in. We observe the game until the player either wins or gets eaten.
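
For the curious, here is a minimal sketch of tabular Q-Learning on a toy one-dimensional corridor rather than the real Pac-Man maze; the layout, rewards and learning parameters are invented purely for illustration.

```python
import random

# A toy corridor: the player starts at square 0 and wins at square 4.
# Actions: 0 = stay, 1 = left, 2 = right. The real Pac-Man maze, with ghosts,
# is far richer than this.
N_SQUARES, MOVES, GOAL = 5, (0, -1, 1), 4
Q = {(s, a): 0.0 for s in range(N_SQUARES) for a in range(len(MOVES))}

alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

def step(state, action):
    # Apply the move, clamped to the corridor; small cost per step, reward at the goal.
    nxt = min(max(state + MOVES[action], 0), N_SQUARES - 1)
    return nxt, (1.0 if nxt == GOAL else -0.01)

for _ in range(2000):                    # training episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy: usually exploit the best known action, sometimes explore.
        a = (random.randrange(len(MOVES)) if random.random() < epsilon
             else max(range(len(MOVES)), key=lambda a: Q[(s, a)]))
        nxt, reward = step(s, a)
        # Q-Learning update: nudge the estimate towards reward + discounted best future value.
        best_next = max(Q[(nxt, a2)] for a2 in range(len(MOVES)))
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = nxt

# The learned policy should pick "right" (action index 2) from every non-goal square.
print([max(range(len(MOVES)), key=lambda a: Q[(s, a)]) for s in range(N_SQUARES)])
```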

The problem is that we quickly get far too many possible states to observe. For each ghost, we multiply the number of states by up to five. With three ghosts, there are up to 125 possible states after one move, 15,625 after two, 30,517,578,125 after five and so on. This is known as a state explosion and it long limited such algorithms’ potential. But people at a company called DeepMind found a workaround: a good heuristic. Instead of storing a value for every possible state, they trained a neural network to estimate how promising a state is, even one it has never seen before. The algorithm became known as the Deep Q-Network (DQN), since it married Q-Learning to a neural network. It was also the start of the era of driverless cars.
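
The arithmetic behind the explosion is easy to reproduce; the snippet below just raises the per-move count to the power of the number of moves.

```python
# Each of the three ghosts has up to five options per move, so the number of
# board configurations we would have to track multiplies by 5**3 = 125 per move.
GHOSTS, OPTIONS = 3, 5

for moves in (1, 2, 5, 10):
    states = (OPTIONS ** GHOSTS) ** moves
    print(f"after {moves:2d} moves: up to {states:,} states")

# after  1 moves: up to 125 states
# after  2 moves: up to 15,625 states
# after  5 moves: up to 30,517,578,125 states
# after 10 moves: up to 931,322,574,615,478,515,625 states
# A lookup table over all of these is hopeless; DQN instead trains a neural
# network to estimate how good a state is, even one it has never seen.
```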

This sequence of maze states, where the next move depends only on the current state and not on the history, is known as a Markov Chain. Driving a truck down a road looks much like a sequence of maze states, so algorithms similar to DQN can be quite good at it. But if there are too many different pedestrians to track, or they do unexpected things like practicing gymnastics on the road, the heuristics break down and the truck risks running someone over. This is why driverless trucks tend to stick to mines, where they are generally free from cartwheeling youngsters.
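
For illustration, here is a hypothetical two-state Markov Chain of road conditions; the states and probabilities are made up, but the defining property, that the next state depends only on the current one, is the real thing.

```python
import random

# A hypothetical two-state Markov chain of road conditions. The next state
# depends only on the current state, never on the history: that is the
# Markov property the planner leans on. The probabilities are invented.
TRANSITIONS = {
    "clear road":       {"clear road": 0.95, "pedestrian ahead": 0.05},
    "pedestrian ahead": {"clear road": 0.60, "pedestrian ahead": 0.40},
}

def next_state(state: str) -> str:
    outcomes, weights = zip(*TRANSITIONS[state].items())
    return random.choices(outcomes, weights=weights)[0]

state = "clear road"
for _ in range(10):
    state = next_state(state)
    print(state)

# A cartwheeling pedestrian is a state the table never listed, so the chain
# simply has no transition for it; that is where the heuristic breaks down.
```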

Talking the Turing Test

Ultimately, many robots that we perceive as smart are nothing of the sort. Instead, they are good actors. It turns out humans are surprisingly predictable most of the time, but they also have a great capacity to surprise. Rigorously enforced rules keep humans inside a maze small enough for AI to navigate. Excluding some social experiments in Beijing, it is therefore unlikely that we will see large-scale deployment of truly driverless cars in traffic anytime soon.

But AI will replace many jobs that were just that, jobs. As a former prime minister once said, “[The future] is replete with opportunities, but they only go to those swift to adapt, slow to complain, open, willing and able to change.” As was made abundantly clear by the death of the heartlands after the financial crisis, “most people are not like that”. Maybe the death of offices will be the end of work, but I fear it might bring along something more sinister.

The aftershocks of automating manufacturing still make themselves felt politically. If the same is to happen in offices, we need to reevaluate what it means to contribute to society. Shifting stacks of paper will no longer be enough. It will be a challenge to adapt and it will go against many vested interests. It is called disruptive innovation for a reason.

Does this unit have a soul?

As a computer programmer, I have always faced the risk that I will be automated. You never quite get used to the idea—this might be why programmers can be very opinionated. But this threat also brings along with it the freedom to automate everything around me. By now, I get work done faster by making a cup of tea or going for a run, rather than by staring at a screen. As someone on the frontlines, I remain irredeemably hopeful.

“Ultimately, many robots that we perceive as smart are nothing of the sort. Instead, they are good actors.”

The key takeaway is that if you stay on the more creative side of things, you are likely to remain in work for some time. There may be an upper limit to the number of CNC machinists and other skilled factory-floor workers we need, but it has not been spotted yet. There are, however, indications that we have spotted the limits of AI, at least for this cycle, and we will need an army of people to claim the ground that has now been yielded. Therefore, I hazard a guess that neither Skynet nor VIKI will take over the world anytime soon, since they would not be thinking, let alone conscious.

There are many intricacies to intelligence and I have found it incredibly interesting to study the reasoning and logic of computers. They often do things differently from you and will stay mindlessly focused on one objective. Leaving some thinking to a machine empowers humanity to be greater than it has ever been. AI will be the end of “Computer says no”, and I intend for it to be the start of compassionate, empowered human decision making.


Johan Byttner is a graduate of the IB from the time before ubiquitous smartphones. He studied Management at Warwick University, UK, and studies Mathematics at Linköping University, Sweden. In his spare time he rides horses and works to make an autonomous car not crash into a fence near you.

To hear more from Diploma Programme (DP) graduates check out these IB programme stories. If you are an IB grad and want to share your story, write to us at alumni.relations@ibo.org. We appreciate your support in sharing IB stories and invite you to connect with us on LinkedIn, Twitter and now Instagram!
