Of Love Lives, Fake News & Data Privacy: Cambridge Mathematician on How Algorithms Govern Our Lives
Cambridge Professor Piers Bursill-Hall explains how Maths has come to play a crucial part in our love lives, how it has unwittingly fostered fake news, sexist and racist biases, and why the battle for data privacy has already been lost.
Representative Image (AFP Relaxnews)
If you are one of those who spent a considerable part of your school years fretting over numbers and were delighted to finally drop Maths as a subject, there is some bad news in store: you may have left numbers behind, but they certainly haven't left you, or stopped governing your life.
As your virtual life becomes more important than your real one -- as you spend more time on social media, adopt e-banking, run your business or buy health insurance online, sign up on dating apps, use online cab services, buy books online and research on search engines -- each digital footprint you leave is processed by a set of complex algorithms, built from mathematical calculations, that influence your personal choices to a great extent and are slowly changing not just your own life but the fabric of our society. We all know this, yet we rarely pay heed to the extent of this technological encroachment, which has begun to define our lives.
What is worse is our limited understanding of the algorithms that drive social media sites like Facebook, Instagram and Twitter, communication apps like WhatsApp, dating apps like Tinder and Bumble, and multinational tech companies such as Amazon and Google, all of which are now an integral part of our everyday lives. The only silver lining is that those annoying kids who scored 99 or 100 in the yearly school maths exams have got nothing on you now, because they are in the same boat of oblivion, restricted choice sets and practically zero privacy: the algorithms that give us book recommendations on Amazon, choose the articles we read on Google, decide your choice set on Tinder or pick the Facebook ads you see are often too complicated for most people to understand.
In an interview with News18.com, Piers Bursill-Hall, a professor of the History of Mathematics in the Department of Pure Mathematics at the University of Cambridge, who was in India recently for the Tata Lit Fest, 2019, explained how mathematics plays a crucial part in our love lives, how it has unwittingly fostered fake news and sexist and racist biases, and, more importantly, why the battle for data privacy has already been lost.
A World Without Privacy
People talk about protecting personal data as if it were something they can and should do in the future. What they do not realise, Bursill-Hall pointed out, is that it is something they should have protected already, and that it may now be too late.
"You have already lost your privacy. Most governments, corporations and spy agencies already know everything there is to be known about you. The phrase we often use to describe this phenomenon is 'scraping the data off the internet'," said Bursill-Hall.
"You are being spied upon in the most unbelievably invasive way. In fact, it is now known that for some young women who are very active online, social media companies and American spy agencies know they are pregnant even before they discover this fact about themselves," he added. When women become pregnant, they begin to change various patterns of behaviour, and Bursill-Hall claims that these platforms now hold so much data on each individual that it is very easy for them to spot even a slight change in anyone's behavioural pattern.
Tracking behavioural patterns through the smallest of online clicks, such as a 'like' on Facebook, and building psychological profiles from them has been happening for some time now; this is how the British political consulting firm Cambridge Analytica managed to swing the 2016 Trump election and the 'Leave EU' referendum votes.
“By doing an extensive study of apparently unrelated data like people's jobs, their mortgages, car loans, and many such simple details, Cambridge Analytica managed to do a psychometric analysis of the voters,” explained Bursill-Hall.
"Cambridge Analytica began to identify specific groups. For instance, they identified a group of people who are insecure about their jobs, another group which is anti-immigration, and then they targeted adverts to individual people based on the groups they belonged to. They don't advertise to everyone on social media, but to specific individuals. These adverts may be false, or may be true with a twist, but they influenced people's thoughts and their votes, and that is how they swung the 2016 Trump election by several percentage points, and certainly the Brexit votes too. So they did not just change individual lives, but also the destinies of nations," added Bursill-Hall.
The more troubling issue here is that most governments lack the basic technological understanding to come up with effective data protection policies.
"In Europe, we have a new data privacy law, the GDPR (General Data Protection Regulation), which is a wonderful law. It actually forces international, non-European corporations to obey it if they do any kind of business with Europeans through the electronic route. Therefore, several international companies find themselves obliged to follow the GDPR even if there is one European in their database. But they started writing this law 20 years ago, and if you ask the lawmakers how many computer scientists or mathematicians worked on it, they would say none. And any mathematician who can code knows how to get around the GDPR," remarked the professor.
Our Diminishing Capacity to Identify Fake News
Bursill-hall predicted that despite the increasing awareness about fake news, it will continue to permeate our lives, and change the nature of virtual reality and democratic elections. The problem is likely to get much worse, as deepfakes make their way into the market.
"The problem of fake news will continue to get worse because the methods of producing fake news are getting better. At least at the moment, when I see a politician on the television saying something, I know that it is he who is saying it. But in a few years, as deepfakes take over, it will be hard to tell if he or she is actually saying the words we see them say on TV. So we are losing more handles on truth and reality. In the future, unless you were there and saw a politician with your own eyes, at less than 20 metres, you won't even know for sure what they said. So the fake news problem is going to get a lot worse," he said.
"And chances are, as fake news increases, readers' and viewers' ability to distinguish between fake and non-fake news will diminish. Mathematicians are not much help in trying to sort out what is fake news and what isn't, and this poses a danger to the democratic fabric of our lives," he added.
As we become defenceless against a huge range of technological invasions of privacy and deformations of knowledge, the professor suggests that the educated elites of society need to become aware of this and start to protest, or else democracy as we know it, and the process of democratic elections, may fail.
When Mathematics Acts as Cupid
Who could have ever thought that mathematics would be your cupid with a bow? Yet it is algorithms that ultimately get to say who can see your profile, and whose profile you can see, on dating apps like Tinder, based on certain criteria and the scores you receive on the site. For instance, Tinder previously used Elo, a rating system borrowed from chess that effectively ranked people by their attractiveness, so people who got right-swiped more often (indicating that more people found them attractive) were likely to be matched with others like them. Although that policy changed earlier this year, reports now speculate that Tinder is using the Gale-Shapley algorithm, which comes with its own set of problems and limited choice sets.
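For readers curious about the mechanics, the standard chess Elo update works as follows; note that Tinder never published its formula, so treating a right-swipe as a "win" for the swiped profile, and the K-factor of 32, are assumptions for illustration only.

```python
# A minimal sketch of the standard Elo rating update from chess.
# How exactly Tinder applied it to swipes was never made public.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A 'beats' B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating_a: float, rating_b: float,
               a_won: bool, k: float = 32) -> tuple:
    """Return updated (rating_a, rating_b) after one encounter."""
    exp_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (score_a - exp_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - exp_a))
    return new_a, new_b

# A profile that keeps getting right-swiped climbs in rating and is
# then shown alongside other high-rated profiles -- the "levels"
# behaviour described in the article.
a, b = elo_update(1200.0, 1400.0, a_won=True)
```

Because the update is zero-sum (one profile's gain is the other's loss), ratings naturally stratify users into tiers over many swipes, which is precisely why such systems sort people into matching pools of similar "attractiveness".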
Bursill-Hall argues that by forming matches through different 'levels' for different people, based on criteria that are not very transparent, these dating apps can change how the human race evolves in future.
“Dating apps, especially in the western world, are the origin of 20 or 30 per cent of dates. In fact, there is now a significant proportion of the American population who are married or partnered through dating apps. If looks are a criterion, then people who are deemed ugly will not get good mates, and they may not reproduce and have as many babies, while other people, whom the algorithm deems beautiful, will find partners, and the chances of them procreating will definitely increase," the Cambridge professor pointed out.
These dating app algorithms can do much worse. For instance, if an algorithm finds that red-headed people get fewer dates, it may end up offering redheads worse choices; if this continues for years and generations, redheads will find fewer matches, their chances of getting married will fall, and the redhead population could shrink considerably in future. What the apps are practising, the professor explained, is the most astonishing kind of eugenics.
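The feedback loop the professor describes can be sketched abstractly: if a matching system allocates exposure in proportion to past match rates, a group that starts with a small disadvantage falls further behind each round. The toy model below is purely illustrative; the numbers and the proportional-exposure rule are assumptions, not anything a dating app has disclosed.

```python
# Illustrative only: a toy model of an exposure feedback loop.
# Each round, a group's share of shown profiles is re-weighted by its
# relative swipe rate, so a small per-round bias compounds over time.

def run_rounds(share: float, relative_match_rate: float,
               rounds: int) -> float:
    """share: the group's initial fraction of exposure.
    relative_match_rate: how often its profiles are right-swiped
    relative to everyone else (values below 1.0 model a slight bias)."""
    for _ in range(rounds):
        weighted = share * relative_match_rate
        share = weighted / (weighted + (1.0 - share))  # renormalise
    return share

start = 0.10                          # group starts with 10% of exposure
after = run_rounds(start, 0.95, 50)   # a 5% swipe-rate penalty, 50 rounds
# after < start: visibility shrinks even though each round's bias is tiny.
```

With a neutral rate of 1.0 the share stays constant, which makes the point of the model clear: the disappearance of the group is driven entirely by the small compounding penalty, not by any single decision.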
The Sexist and Racist Biases of Algorithms
Unfortunately, all algorithms are built by humans, and so they carry the flaws of their makers. For instance, Apple Card was recently accused of being a ‘sexist program’ because it offered men credit lines 20 times higher than those offered to women. There have also been several instances of algorithmic racism, pointed out Bursill-Hall.
“There are famous cases where facial recognition algorithms could not recognise black or brown individuals, because the manufacturers tested them on everybody in their company, all of whom happened to be white,” claimed the professor.
"You cannot eliminate the human flaws from algorithms. This is the absolute and brutal truth. Interestingly, when this issue of human biases was being discussed in parliament in Britain, one of the parliamentarians suggested: why don't we just write an algorithm which checks the algorithms for their biases? But obviously that algorithm will be written by another human being too, and will therefore be quite fallible in catching biases itself. That's the thing: members of any government do not understand technology very well. When are we going to get someone who can code in C++ in the cabinet?” asked the Cambridge professor.
Bursill-Hall also pointed out that for any algorithm to even have a chance of being bias-free, it would have to be passed around a table full of people from different backgrounds and genders. There is no algorithm to test algorithms, so companies need to be transparent, and to subject their systems to democratic regulation and court proceedings.
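While no algorithm can certify another as bias-free, regulators and auditors can at least measure outcome disparities of the kind alleged in the Apple Card case. A minimal sketch of such a demographic-parity check, with entirely made-up data, assuming decisions are available as (group, approved) pairs:

```python
# A minimal sketch of a demographic-parity audit: compare approval
# rates across groups. This measures one narrow kind of disparity;
# it cannot prove an algorithm is unbiased. The data is invented.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

sample = ([("men", True)] * 80 + [("men", False)] * 20
          + [("women", True)] * 40 + [("women", False)] * 60)
rates = approval_rates(sample)
# rates["men"] is 0.8 and rates["women"] is 0.4 here:
# a 2x disparity that would merit further scrutiny.
```

The point of such a check is exactly the professor's: it flags a disparity for human and legal review, but it cannot say whether the disparity is justified, which is why transparency and court oversight remain necessary.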