AI for Lawyers May 15, 2019 Carole Piovesan



So, my name is Carole Piovesan and I am with INQ Data Law. We are a brand new law firm, about eight weeks old; if you try to find us online, despite the fact that I'm going to be speaking to you about AI and law, we actually don't have a website yet, but that's coming. Today I want to talk to you about this intersection that I've been talking about for nearly three years now, and I really want to applaud the Law Commission of Ontario for bringing this program to you, because this is a very good time to be talking about AI and law. Toronto, Canada, Montreal, our friends from Montreal: these are great places from which to be talking about artificial intelligence, because of the strength of our research community, the vibrancy of our startup ecosystem, and a lot of the development that's been happening at the government and policy level.

AI is going to shape all of what we do. That's not an understatement, and we're going to talk a little bit about why I say this. In fact, I didn't say it; the CEO of Microsoft did. But we're going to talk about why he says this, and why this is a topic you keep hearing about in the news day in and day out. We're going to talk about this from the perspective of law, because as a lawyer I'm always curious to know how on earth I'm going to practice in this area; we're going to talk about ethics; and we're going to talk a touch about explainability, though I think Richard did such a great job that I'm going to cut it off there.

So let's start by contextualizing AI in society: why it's so important and what it's going to touch. Again, I think Richard did a very good job in laying out the structure and the playing field of what's happening in the development of artificial intelligence, but fundamentally you're talking about data. It is no longer the widget that is going to make you billions of dollars; it's pieces of information about you and me that are going to make a company billions and billions of dollars, and how you analyze and process that data for the purposes of efficiencies and predictive insights. It is taking those pieces of data that we are giving up every day, and we're going to look at this in a minute, analyzing those pieces of data, and turning them into a predictive insight that makes prediction cheaper, more accurate, and faster.

It is power. It touches on all aspects of our society. It touches on our privacy, and it's raising some really fundamental and interesting debates about what privacy is and what we care about. What are we willing to give up, and why, and in exchange for what? What does the barter system look like when it comes to privacy? Is a terrific search engine good enough if you get to take all of my data? It touches on questions of ethics: not just what can I create, but what should I create, and we're going to look at examples of facial recognition in a little bit. Inclusion: Richard showed, I think, a very good diagram of how you can divide according to ethnic groups. If you're not represented in the data set, if you're not represented as a programmer, or if you're not part of making the assumptions, then are you going to be excluded as a group? Accountability and justice: the predictive policing example was a great illustration of the concerns we have around accountability in the use of sophisticated systems based on historical data. And the public good: there are some who predict that artificial intelligence could actually help us reverse climate change, or help us cure a huge number of diseases or conditions, if we were to mobilize the datasets and use the predictive power of artificial intelligence for those purposes. But it's a yin and yang: yes, there's a lot of good that can come from it, but there are also huge challenges associated with it if it's in the wrong hands.
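That pipeline, raw data points turned into a cheap, fast prediction, can be illustrated with a deliberately tiny sketch. Everything here is hypothetical: the `history` records and the one-nearest-neighbour rule are stand-ins for the vastly larger datasets and more sophisticated models that real systems use.

```python
# Minimal sketch: turning collected data points into a predictive insight.
# The toy records and the 1-nearest-neighbour rule are illustrative only.
import math

# Hypothetical records: (hour of day, pages viewed) -> did the user buy?
history = [
    ((9, 2), "no"),
    ((21, 8), "yes"),
    ((22, 7), "yes"),
    ((10, 1), "no"),
]

def predict(point):
    """Predict a label for `point` from the closest historical record."""
    _, label = min(history, key=lambda rec: math.dist(rec[0], point))
    return label

print(predict((20, 6)))  # prints "yes"
```

The point of the sketch is the shape of the exchange: individually trivial observations, accumulated at scale, become a prediction about a person.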
Economics: there are predictions, and again these are only predictions, that artificial intelligence will be responsible for an infusion of something like fifteen and a half trillion dollars into the global economy by 2030. So this is not just a game of knowledge; it's a game about economics, about financial gains. The labour market: speaking about the future of work in particular, are we situating ourselves such that augmentation won't be enough, that in fact there will be huge swathes of the labour market that are replaced? I think that's an open debate. There are lots who say yes, you might replace a number of workers, those jobs may go, but other jobs will be created. So be creative, keep your mind open, and think about your transition strategy.

So here's the deal: artificial intelligence is a general-use technology. You can see it in every sector, in every vertical. When people ask me where it is having an effect, I want to ask: where is society happening? It is, or can have, an effect in every sector of society. Currently it is arguably unaccountable under the law, meaning we don't have regulation that is specific to artificial intelligence, but I'm going to park that with a caveat, because this is changing, and changing much faster than I anticipated. It is changing because of concerns with, as Richard put it, the no-go zones: certain use cases that we are simply not going to allow. But let's go back to this question of geopolitical power: who is it a no-go zone for, and what happens when it's not a no-go zone for somebody else? I'll leave that as a question. It's one that is becoming increasingly fascinating to me as we look at technologies like facial recognition, who is developing them, who is not, and how that might play out geopolitically. I also have a poli-sci background, so I can't help but get pretty excited about these things.

There's a new generation of law that's going to be required because of artificial intelligence, and we're going to look at this in a minute. And finally, we are still grappling with ethical and legal principles as they relate to this really fascinating, evolutionary technology.

So why today? Well, there's a huge amount of data being created every single day as we walk around with our devices; I've got not one but two, so I am perfectly well tracked and people know exactly where I am. Computing power is cheaper and faster, storage is cheaper, we have more talent in this area, and increasingly we have more investment and more government and enterprise attention on the development of artificial intelligence.

Before I go into the three main topics I want to discuss, I wanted to touch on a case study just to make it concrete, and I chose one from financial services, though having seen how much was done on the FS sector in the prior presentations, I sort of wish I had chosen a different one. This is a case study out of the Bank of New York Mellon Corp. Around 2016 they decided to create 220 bots, focused on handling tasks that were rote, routine, and repetitive: for example, data requests from external auditors, fund-transfer bots, and bots that correct formatting and data mistakes. If you don't know what a bot is, it is a system with which you can interact, that can make sense of your language, whether oral or written, and give meaningful language back or take actions based on the command.

They rolled out this project of 220 bots, and what they found is actually pretty impressive: a hundred percent accuracy in account-closure validations across five systems, an 88 percent improvement in processing time, a 66 percent improvement in trade-entry turnaround, and, something I find striking, a quarter of a second for robotic reconciliation versus five to ten minutes. From an enterprise perspective it makes a lot of sense, and you can see why, as one example, there would be a lot of interest at the enterprise level from your clients, who are going to be thinking strategically about where their companies are going. And it's not just the bots; there are numerous use cases in financial services where artificial intelligence is being used. The last one, robotic process automation, I include with a caveat: I wouldn't generally call that AI, but I've got it there in reference to internal efficiencies.

You've got to think of it in two ways. There are internal efficiencies: what is rote and routine that I can effectively smart-automate? And at a higher level, you've got examples when it comes to, for instance, risk management or, taking the bot example again, personalized banking. I don't know how many of you love getting on the phone with your bank; I personally wish there were a smart bot on the other side, so that at 3:00 in the morning, when I'm very irritated by my account, I could quickly send an email, get an immediate response, and have whatever I asked for done. That doesn't necessarily happen quite yet. But then there's also the predictive value of AI: the fact that it can analyze massive amounts of data and spot trends that we can't see, and it will do so in an unconventional way, so we don't really understand how it got there, because the method is not one we would apply. From a trading perspective, as mentioned earlier, the value is tremendous, because in real time you are
analyzing massive amounts of data and making decisions that affect a price. This might explain why there are huge investments in this area. We're not talking millions of dollars; some of the largest banks in the US are investing billions of dollars in their tech infrastructure. Investment in these areas will differentiate those who have innovated, and who will survive into the next ten years, from those who won't.

So, my roadmap: I promised to talk about ethical AI from the perspective of law, then bias, then to touch on explainability. The reason I chose these three in particular is that I think it's important to talk about how the legal landscape is going to change; to talk about what I would almost call softer legal issues, such as ethics and bias, and to be aware of how these two integrated topics have a role in law; and then explainability, and some of the real questions we see coming up in the legal context.

So on the horizon, what do we see? We see that artificial intelligence, as a product and because of its creation, meaning the amount of data required to create and train AI systems, is driving a change in the legal and regulatory regime in Canada and around the world. I have categorized three changes, though I admit upfront that the second, the intermediate or medium-term change, is happening much faster than I give it credit for. First, the immediate change, what's happening right now: I think Phil did a very good job in telling you about some of the regulation we see changing; we talked about the GDPR and the policy initiatives, and we see multilateral engagement on AI standards, which is important. Medium-term, we see an identification of foreseeable harm: what do we really think is going to happen, and where do we think it could be problematic? Then, longer term, still more on the theoretical side, though we're going to talk about why it's not purely theoretical, there are real questions about whether these systems are going to shake some of the foundational principles of law. Will law have to change because these systems exist, or can we argue by analogy to existing legal paradigms?

So, regulating data. The pace of data accumulation is staggering. There are three point seven billion people on the internet now, and looking at Facebook alone, there are two billion active Facebook users, and that does not include those who joined Facebook and then decommissioned their accounts but still have all their data there. Google processes over 40,000 searches every second. I found a nice little picture that highlights how much is happening every minute, how much data we are giving up every minute. If you look at just text messages: 18 million per minute in 2018, and an estimated 18.1 million per minute in 2019. The amount of data being transferred, and the fluidity with which it's transferred, is really astounding.

And data, as I said, is power. The geopolitics of data is becoming more and more interesting as we watch countries like China push ahead, with privacy legislation significantly different from ours and from the GDPR; our European colleagues, or friends, or allies, struggling to create a regime, using their large geographic and population base to push back; and the US, which also has a very different regulatory regime in the privacy space, at the forefront of innovation in every sense, but trying to determine how to reconcile the true potential of innovation with our democratic values, of which privacy is fundamental. I don't want to spend too much time on the GDPR, because I think Phil did a good job describing it.

The next change is really looking at the foreseeable harms. I created this slide about five weeks ago for a presentation, and when I looked at it again I thought: oh my gosh, it's already out of date. It's not medium term; this is actually happening much faster than I anticipated. The last time I used this slide, Microsoft had said that we should regulate facial recognition. Today I can tell you that just yesterday, San Francisco banned the use of facial recognition by local authorities, including law enforcement. So the pace at which change is happening, even regulatory change, which we typically pride ourselves on as being glacial, is faster than we could ever expect. Our own government, within six months, created a directive on automated decision-making that helps government departments determine a risk level for a particular automated decision, so for an AI system that will be making decisions, identifying the particular risks for which we require a human in the loop. They did this at record speed; I think it was our Chief Information Officer who said it took about six months, done in an open context on a consultative basis. And this is influential not only for our government: other governments are looking to what Canada has done, and companies are trying to understand how to create more robust and transparent decision-making frameworks, how to build their own due diligence defence when it comes to the use of automated decisions in more sensitive contexts.

We also see that the FTC, the US trade commission, recently said something I thought was really interesting: as part of their privacy enforcement, they are thinking about personal liability for executives. And this is in the US, where they don't have robust privacy protection for the private sector, and yet you've got the FTC saying, you know what, we're going to be a little more directed about this. And then Dr. Hinton, our own father of deep learning, said we might need a Geneva-like convention for the use of AI in warfare. Well, the last report is that the Pentagon is putting something like 75 billion dollars into understanding artificial intelligence in the military, so maybe he's right.

When we think about re-evaluating foundational principles of law, why do I say this? I offer you my very inelegant definition of AI, that's how I refer to it, because it's very much from my own perspective, a way to think through the legal issues I need to be mindful of given the uniqueness of this technology. I say it is a computer system that analyzes massive amounts of data, learns from the analysis, and takes action in unconventional ways with remote human involvement. I say this because, as a computer system, it is inorganic; it is not you and me, and it's not a dog. It is evolutionary, not static; you heard Richard say this as well: this is an evolving, moving system that is constantly learning from the data it receives. It is action-oriented; it's not static in the sense of merely processing words. It's able to interact, to be influenced, to take a step: a self-driving car is able to turn right or stop at a red light. It does things in unconventional ways: we don't understand how the system analyzes the information, and we don't understand why it favours certain outcomes or outputs over others. We understand what the intended output is, what we have optimized it for; we just don't understand why it did what it did. And it has remote human involvement. Again, this is the more sophisticated system, not the everyday system you're using, but over time the human involvement becomes more and more remote, which raises some pretty interesting questions around liability. If there is a creator, and a harm is caused down the line, and the creator or trainer of the system has had very little to do with the system over a period of time, while the system has kept evolving, then to what extent can you really extend liability to that individual creator or trainer?

There's a great example of this in the news from just two weeks ago. A Hong Kong tycoon is suing because the AI system doing his trading really messed up, as far as he's concerned, and lost 20 million dollars in a day. But he can't sue the system, so he's going after the broker. I have no idea what's going to happen; I read the case, well, a news report, so you don't get much detail, and I read it as a misrepresentation case, which I think is why he's going after the broker. But you can see how this will evolve and become increasingly complex. It really does touch every area of law, but I don't want to go into too much detail, because I think Phil did a very good job outlining some of the different legal issues that AI touches on and will influence, including competition and human rights.

So let's now move to the question of ethics and bias. The question about predictive policing is a good one: what are we worried about, and what are the sources of bias that you find in
these AI systems, and where are we worried? There are different sources of bias, and I wouldn't say this is the most exhaustive list, but these are examples of where you will find it.

Selection bias: your data set under-represents a particular group. Amazon recently had this problem in their HR practices. They were looking to use AI to help them spot great candidates to bring in, and it turns out the system, which I don't think was ever rolled out publicly, they were constantly testing and training it, favoured white men. Why? Because, and I'm going to make an assumption here, because I don't actually know this for certain, their data set was skewed towards white men. It's not that they were favouring white men in particular; it's just that that was their population, and they were using their population, their high performers, to help project who their future high performers could be. Well, you've optimized for an incomplete data set.

Interaction bias: the great example here is Microsoft's chatbot Tay, which within 24 hours of being released to the public turned into an awful chatbot: misogynistic, Holocaust-denying, and very racist. The question was: why did this happen, how did this go so wrong? Part of the answer is that the people interacting with Tay either had a great laugh at Tay's expense and decided to test how awful the chatbot could become, or were sincerely saying terrible things to it. In turn, the chatbot learned through its interaction with humans, trying to understand: is this normal behaviour, is this appropriate, is this okay? No, it's not okay.

Emergent bias: this is something we are talking about a lot in the debate about democracy. It's the news feed that assumes, because I liked this one article, that I only want articles of that nature, and so feeds me only articles of that nature. Is that causing a certain other form of bias, one that sits more on the receiving end? You're only giving me information you think I want to see; it creates a bias bubble.

And then latent bias: these are the assumptions being made throughout the development and training process. Are you being mindful of the variables that can cause harm? The correlation of the label "doctor" with a white male, or "nurse" with a white female, is an example.

So where do we see opportunities for bias? Here you've got some examples. Does the data reflect inherent bias? Have we made the right assumptions? Have we turned our minds to the issues we think are going to be problematic, and have we addressed them before releasing this bot or system publicly? Have we chosen appropriate use cases? There are great examples of things we can build, but just because we can build them, the question is: should we, or should we choose not to, for ethical reasons, because they could cause harm in other ways? And if we do choose to build them, knowing they could cause ethical harm, what guardrails do we put around them to protect the public? Finally, what are the possible harms; have we thought those through? When I look at this diagram, I always think: how could I build this out into a due diligence defence? There is no regulation that can tell me exactly what I can and can't do, or there are multiple different forms of regulation that each give me different pieces of what I can and can't do. So how do I, as a company or as a lawyer, build out my own governance framework, to Richard's point, for a good due diligence defence?

Then I promised to touch a little on accountability and explainability. I agree with Richard that not every use case requires explainable AI, so you really need to think about the risk management framework around the deployment or creation of a particular system. But to get to the last two bullets, it is about accountability. You may not be able to explain exactly which layer of the neural net processed which piece of information to ultimately produce the output, and frankly, nobody cares about that; not one of us would want to hear that explanation, because it would go over all of our heads, at least I speak for myself. But we do want to know that there has been accountability. We want to know that, as a company, you are aware of the biases in your data sets and in your systems, that you have turned your mind to those issues, documented them, and thought through the use cases and the possible harms. We want to know that somebody has turned their mind to the higher-risk categories of use cases, the ones that can cause a legal harm, or a similarly situated type of harm, to somebody. And we want to know that there is somebody at the centre of all of this who understands the system, understands the data, was part of the governance structure, and can offer an explanation.

So here is some advice as we think about an AI-first world, which we are not in yet, but as we think of one, here is my lasting advice. Number one: as lawyers, we must be informed. We need to understand this technology and its complexity, because the pace of change is so quick that we can't learn about it just in time; we have to constantly follow how it is changing and how it impacts what we do, because that is fundamental to offering good-quality service to our clients. The other is that we have to get involved. To the extent that you're in a company, or a hospital, or a law firm, and there are discussions about what we're building, should we be doing some really cool stuff with artificial intelligence, have we really thought about our data strategy, have we thought about our AI strategy, I urge you to be in that conversation. Get to the strategy table and help shape the strategy, because if the plans come to you already fully baked, it's too late. You need to inform them, because the creation of artificial intelligence systems, and all the accoutrements that go with it, is fundamentally legal. And finally, I say have no fear, and I say that very broadly. These are exciting times; there's a lot of good out there, and as lawyers, at least from my perspective, this is some of the most exciting work we can be doing. So have no fear, get in the game, and enjoy the process. Thank you. [Applause]
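As a footnote to the selection-bias discussion earlier in the talk: the mechanism, a model trained on a skewed historical sample learning the skew rather than merit, can be shown in a few lines. The `training` data and the crude majority-vote screen are entirely hypothetical and do not reflect any real hiring system.

```python
# Sketch of selection bias: a "model" fit to a skewed historical sample
# reproduces the skew. Data and screening rule are hypothetical.
from collections import Counter

# Skewed history: 90% of past hires belong to group A, 10% to group B.
training = [("A", "hire")] * 90 + [("B", "hire")] * 10

def recommend(candidates):
    """Naive screen: keep only candidates from the dominant past-hire group."""
    counts = Counter(group for group, _ in training)
    favoured = counts.most_common(1)[0][0]
    return [c for c in candidates if c == favoured]

print(recommend(["A", "B", "B", "A"]))  # prints ['A', 'A'] -- group B never surfaces
```

Nothing in the rule mentions group B at all; the exclusion falls out of the incomplete data set, which is exactly why auditing the data, not just the code, matters.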
