Tom Fisher of Privacy International explains how financial technology companies are making assessments from the data that users create online, and how these judgments can be discriminatory and ultimately affect users’ credit ratings.
TOM FISHER: Fintech is the growing field of the use of technology in the financial services sector. It’s growing in areas like credit, banking, insurance; pretty much everything is seeing this change coming from new start-ups and other players in the space. Fundamentally, it’s about data, and so much of it is about getting more data from us and then using that data to make decisions and judgments about us. For instance, there are Kenyan credit-scoring apps which, on a daily basis, will upload the entire contents of your phone to their servers and then analyze it to decide whether to offer you credit. Things like how you organize your address book, whether you write in capital letters, even how often you call your mother, now affect your credit rating. The fintech sector is really about that expansion in the use of data, and in the amount of data companies are gathering about us to make these judgments. When we fill in an online form to apply for credit, for instance, how we fill in that form now becomes more important than what we write on it, because they’re analyzing information like what device you’re using. Are you using a high-end iPhone, which suggests you’re more likely to be wealthy, or a lower-end Android device? There are also things like your location: they can map this in quite detailed ways, to know whether you live in a place whose residents they consider less likely to repay their loans. We used to call this kind of thing redlining in mortgage lending, where certain populations and certain groups were discriminated against because of where they lived. Companies are building up these pictures of us and who we are based on how we behave on social media and what devices we use, and they are using that to build up a picture of us which isn’t always accurate, but is very difficult for us to challenge, because it’s based on this data about how we behave. 
This becomes problematic because it begins to limit what we see online, for instance through advertising, which isn’t just a question of us being annoyed by a few online ads for the thing we just bought on Amazon anyway. When we get into areas like what job adverts we might see, or what areas we’re offered housing in, because of who they think we are online, then it begins to affect our lives more deeply and more profoundly. In some important ways, we are losing control over our identity. We no longer have the option to present different personas in different contexts, because they are building up this one single picture of us based on the data they gather about us. We no longer have control over how we present ourselves and who we are, and that’s an extremely concerning development. We’re expected to trust these companies a huge deal with all this information we’re giving them, particularly in countries with weaker data protection rules, often with very little knowledge of what’s going to happen with that data, what they can do with it in the future, or who it is going to be sold to. But at the same time, they no longer trust us on what we tell them, because everything we tell them has to be backed up with data, with evidence: both the information you are inadvertently giving them, such as what device you used to fill in the form, and the data they are getting from credit reference agencies, from looking at your social media, and from all these other places. We’re putting a lot of faith in algorithms, and that faith is often misguided. We have to understand a great deal more about where these algorithms come from. Because there are a whole lot of numbers involved, it’s easy to believe that these algorithms are making decisions about our lives on the basis of objective truth, some objective measure of what is happening, when in fact it’s far more complicated than that. 
On the one hand, you’ve got the people who are writing and developing these algorithms, often far, far away from the groups being affected by things like alternative credit-scoring methods. These are often credit-scoring schemes aimed at the poorest and most vulnerable people, those without traditional credit files. There is a lot of growth in Africa in these kinds of projects, yet the algorithms are being written by computer scientists and data scientists in California. So we have to understand how these algorithms are developed and how they work, because it’s so easy for them to discriminate against certain groups. We see this in predictive policing, where algorithms predict crime rates based on historical data, but that historical data was gathered by police forces which themselves had certain ideas about race and were recording crimes differently for black people versus white people. We then see an algorithm reflecting those biases. The fintech field presents whole new sets of challenges for regulation. In most parts of the world, we obviously have quite a regulated sector, with various rules over things like what can be contained in a credit file, the need to correct that information if it’s wrong, and the ability to see it and know where it came from. Now, if, as the former Google chief information officer said, “All data is credit data,” then we have a whole new set of challenges for how we think about data in that sector and how it’s regulated. The information that can be used to make these judgments about us is no longer the limited set of things on our credit file; it is now more or less everything we do that is used to build up this picture of us. How can regulators begin to deal with this changing use of information, the challenge of new ways of analyzing data, and, moving into the future, the fact that this power can only increase? 
There is a rush towards the idea that the fintech sector is a new, disruptive, innovative force that governments and regulators need to embrace, potentially through a reduction in regulation: think of regulatory sandboxes, where rules are reduced or changed in order to foster the sector’s growth. Instead, our starting point needs to be to look at how fintechs are affecting issues like privacy and human rights, rather than assuming that gathering as much data as possible is a desirable goal in the financial sector. Sadly, this use of data in the fintech space is moving beyond the brash new start-ups towards more established financial institutions. There’s an increasing risk that, as this use of data becomes so prevalent throughout the entire financial industry, we end up with no choice but to give companies access to so much of our data, often without our knowledge. So it’s deeply concerning how this sector is going to develop in the coming years. Privacy International continues to work in this sector. We’ve got a report coming out soon on fintech and privacy, focusing on examples in Kenya and India.