Emily Gorcenski - The Ethics of the Internet of Things

EMILY: Thank you all very much. I'm going to talk about the ethics of the Internet of Things, and I promise I'm not going to lecture you too much. I'm Emily Gorcenski. I'm on Twitter; I say things there sometimes. I like the Internet of Things and the landscape it is creating.

So why am I talking about the Internet of Things at a JavaScript conference? Why am I talking about ethics at a JavaScript conference? It's because I've given this talk a few times, and I joke that every time I give it, I can give a brand new talk, because there are so many issues, so many failures, and bugs, and security problems that come up so frequently that if I just focus on case studies, it will be a brand new talk every time. This time, I've decided that I don't want to give this talk any more. I want you all to be able to give this talk, so I want to talk a little more generally about why ethics matters, why it should matter to you as a JavaScript developer, and how we can put technology into all sorts of devices and services where it doesn't normally belong.

It's almost impossible to do an ethics talk without getting into heavy stuff, so there are content warnings for this talk. We will talk frankly about some incidents that resulted in injuries and deaths, there is a discussion of a specific instance of sexual assault, and there is an image of raw meat. If that spooks you, that will come maybe about ten minutes into the talk.

Who am I? I have a little bit of a confession: I'm kind of an imposter here. I'm not a JavaScript developer. I'm a data scientist, and I'm trained as a mathematician but also as an engineer. I went to school for aeronautical and mechanical engineering, and over my career I have worked in aerospace, in biotech, and now in finance. What these industries have in common is that they're all heavily regulated, and most of the people working in them subscribe to a professional code of ethics, either through an independent society or some other organisation that guides what ethical conduct means. So here I am - I've worked in defence, health care, and banking, and I'm going to talk to you about ethics. Buckle in!

When I talk about the Internet of Things, what is it that I mean? It's kind of a wishy-washy definition. We might think about smart fridges or smart cars, that sort of thing. I like to think of it as putting the internet where it doesn't normally belong. So it could be smart appliances. I think of Uber as an IoT taxi. When we look at the ethics of that, we have to look at the entire scope of what we are doing with our technology and what we are connecting. The difference isn't that these devices, products, or services haven't been computerised before; it's that we're now letting the consumer have connectivity to what is going on. So, if you're a JavaScript developer, maybe you want an IoT bread machine so you can hack on your code while the bread bakes.

This is important because IoT products are the next level of convenience optimisation. We've spent the last 30 years optimising products for convenience, and there's not much more competitive advantage you can get in a refrigerator nowadays. So if you don't have a competitive advantage with a non-connected device, you have to go connected. This is also important for people whose lives are affected by disability.
You might be concerned about the surveillance capabilities of IoT devices, or the horrible things that Uber has been accused of doing, but if you're not able to get around, or you don't live in a place where there is an easy taxi service and you have other needs, something like Uber is a life-saver and changes your life. So we can't write it off as an absurdity and say IoT is frivolous. There's a whole Twitter account devoted to IoT misses, and there are a lot of misses out there, but there are a lot of good things that come out of IoT as well.

When I talk about ethics, what is it I mean when I say that word? You've probably seen this diagram. The framing goes: there is one Nobel Laureate tied to one set of tracks and five people tied to another, and somehow you have been put in the position of pulling the lever. This is a really popular problem on the internet right now because, one, it feels like something we can solve with category theory if we just abstract it enough, and, two, it really makes for some dank memes. The thing about the trolley problem is that the trolley problem wasn't a problem for trolleys, so why is it a problem for self-driving cars? We love to frame things as puzzles to solve - that is our nature as developers and engineers.

In tech, we don't actually face ethical dilemmas that often. Ethical dilemmas happen when there are two competing ethical frameworks, and any action that you take will violate at least one of them. What I think is fascinating about JavaScript is that the JavaScript community is responsible for what I consider to be the most fascinating true ethical dilemma in a decade of technology. I will get to that later. Some of you might know what I'm talking about already.

The issue in technology is that often we just don't act with ethics. I don't mean this as an indictment saying you're bad, unethical, immoral human beings - there are some companies out there that will get a side eye right now - but I mean that, in our industry, we don't have a professional code. There are some societies that you can join, but raise your hand if you're a member of, like, the ACM or the IEEE. There are a few hands out there, but it is not the majority.

In practice, ethics are about two things: the analysis of harm and the mitigation of risk. So, when we talk about acting ethically, especially in something like research ethics, we're not trying to eliminate the possibility of somebody getting hurt; we are trying to understand all of the ways that somebody might be hurt by our technology, and we're looking for what actions we can take to mitigate the chance of that happening, to mitigate the severity of it when it happens, and to provide remediation when it inevitably does. This is what we need to build up as our ethical framework when we are developing technology, particularly for IoT.

Harm can happen in three ways. The first is through malfeasance. This is the most common topic in IoT: this is security, this is people talking about hacking. When the Mirai botnet launched a DDoS attack last fall, it was the biggest DDoS attack ever witnessed, and it happened through IoT devices that were unsecured - and you know that IoT security is in a pretty abysmal state right now.
When this happened, the timing of it and the way that it was structured gave a lot of people concerns that it was a precursor to an attack on the US presidential election, an attempt to influence the outcome of that election. As it turns out, those fears were unfounded - we managed to screw that one up all by ourselves. But I don't want to talk about security in this talk. First, I can't cover everything; second, if we address the other ways that harm can happen, we address the security issues as well.

The second way is failures: bugs in the software. The third is edge cases, which happen when a device is operating under normal circumstances but gets put into a condition that we did not predict as developers. It is also worth mentioning that sometimes we like to treat failures and edge cases as separate things, but there is really no difference except for semantics.

A great example of this was on Twitter. This poor gentleman, Andrew, had an IoT water cooler, and his TLS certificate expired, which led to some blocking code, which meant that a hardware interlock failed, and he had water all over his house. This is a real issue, right? This is a problem. If a TLS certificate expires in a web service, in the web space, and we forgot to renew it and we have blocking code, we have ops people to deal with that. We can't treat IoT devices like cattle any more; we have to treat them like pets that live in people's homes and get very, very angry when they don't get fed. One day, if we're not careful, we are going to put JavaScript into, I don't know, an IoT kettle and light somebody's house on fire because "undefined" is not a function. I stole that joke, by the way, which is total payback because he didn't save me any npm socks!

This is something that I did a couple of years ago, and I'm very proud of it. Let me go full screen on this if I can, if I can figure out where my mouse is - there we go. This is a Microsoft Band - and I'm not picking on Microsoft at all here - and this is a piece of raw chicken. I didn't do anything horrible to a chicken - there's no zombie chicken out there - this is a piece of meat that I bought from the grocery store. It is reading a heart rate of 120 beats per minute! [Laughter].

In the real world, sensors are messy, they're noisy, they're imperfect. And so, when we are designing for IoT, we have to take this into consideration. It is absurd that you can read a heart rate off a piece of chicken breast, but this has deep, deep ramifications for a lot of things. For one, there are colleges out there that are mandating that students wear Fitbits. There are employers out there that have health insurance incentive programs for doing this. If you're following what is happening with American health care, we now have this issue where we have surveillance devices that are monitoring our health and can report on pre-existing conditions.
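To make the engineering point concrete - and this is an editorial sketch, not code from the talk or from any real wearable's firmware - here is roughly what "taking sensor noise into consideration" can look like in JavaScript. The sensor reading shape, the field names, and the thresholds are all hypothetical; the idea is only that a reading should not be recorded as fact unless the device's own signal-quality estimate and basic physiology both say it is plausible.

```js
// Hypothetical example only: the sample shape and thresholds are invented for illustration.
const MIN_BPM = 30;              // below this, a human heart rate is implausible
const MAX_BPM = 220;             // above this, likewise
const MIN_SIGNAL_QUALITY = 0.8;  // 0..1 confidence the sensor reports for its own reading

function recordHeartRate(sample) {
  const { bpm, signalQuality } = sample;

  const plausible =
    Number.isFinite(bpm) &&
    bpm >= MIN_BPM &&
    bpm <= MAX_BPM &&
    signalQuality >= MIN_SIGNAL_QUALITY;

  // Store "no reliable reading" rather than presenting noise as fact, so that
  // downstream consumers (apps, insurers, investigators) only ever see vetted data.
  return plausible
    ? { bpm, confidence: signalQuality, status: 'ok' }
    : { bpm: null, confidence: 0, status: 'unreliable' };
}

// Usage with some hypothetical sensor driver:
// const sample = readOpticalSensor();
// saveToDevice(recordHeartRate(sample));
```

Whether a given device exposes a usable quality signal at all is itself an assumption here; the point is that somebody has to decide, in code, what counts as a reading worth reporting, because the data does not stay on the wrist.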
And the harm is not hypothetical; this is something that has really happened. Let me get out of full-screen mode if I can. In 2015, a woman was visiting a co-worker. She called the police to report a sexual assault. When the police investigated, they found her Fitbit, and, with her permission, they analysed the data. When they analysed the data, not only did they drop the investigation into her claims, but they turned around and charged her with making false statements to the police, and, last year, she pleaded guilty to those charges, was convicted, and was put on probation. The prosecuting attorney said that the Fitbit data sealed the deal.

I can pull 120 beats per minute off a piece of raw chicken, and a woman's life is ruined because nobody at Fitbit stood up and said, "No, our devices are not this accurate. You can't do that." Our devices can bear false witness against us. The problem is that there is no regulation, no quality assurance, no standards for how we build them. We just ship code. We ship hardware. We innovate, fast, fast, fast. We don't ask ourselves: what kinds of harm can happen when this goes wrong?

And this is happening increasingly often. These devices are being used in criminal and civil investigations. Just last week, CNN reported that a man is being charged with the murder of his wife based on Fitbit data saying she had travelled a certain distance, and that distance didn't correlate with his story. Anybody who has knitted, for example, while wearing one of these will know that it will record steps while you're sitting on your couch. How can we let this happen? How can we let this information affect people's lives? Another incident: a smart water meter was used in a murder investigation last year. In the same investigation, they also filed a warrant for Amazon Echo data.

The question is: who's going to go to jail? Who's going to get put on probation when a device makes false statements to the police? Moreover, say something happens, say something breaks, and somebody gets hurt or somebody gets killed - who's going to be liable if that device causes an accident? Is it the owner? Is it the developer? The company that made it? This seems like it should be a settled question, but it is actually not. And it has already happened.

In this frame, the white vehicle on the right is a Google self-driving car, and this is a still image - a screen capture from a video taken from the dashcam of a municipal bus in Mountain View, California. That Google SUV is about to pull out in front of the bus and get into an accident. Thankfully, nobody was hurt; there were no injuries, just a fender-bender. This is the first time that a self-driving car has ever been found responsible for causing an accident. Google fessed up to it. They said, "You know what? Our bad. We will take care of the damages." And they investigated what happened, and they concluded that the car predicted that the bus would yield to it because it was ahead of the bus. Okay, Google is in a position right now where they want to ship self-driving cars, so of course they're going to assume liability for this, because they don't want to test it in court. But we can't rely on that as we go into the IoT future. We can't rely on benevolent corporations to assume liability once this goes out at scale. By the way, even if Google was right, this is still going to be a historical moment, because, if the bus had yielded to the vehicle, it would be the first time that a municipal bus has ever yielded!

A few years ago, a judge in San Francisco - as part of a research project, not part of a case - looked into the question of whether autonomous systems fall under existing theories of liability. And in looking into it, he found that vehicles that make their own decisions, that use things like neural nets and adaptive, self-adjusting control systems - smart systems, if you will, devising their own means to attain a task - may not be subject to liability under any existing theory of tort. And this has huge implications.
Because if you buy a normal refrigerator and it breaks, you can say, "Hey, manufacturer, you're responsible for that break." If you buy a coffee-maker and it burns down your house because of a defective unit, you can get out safely and then recover damages from the company, and your insurance company takes care of it. There's a whole ethical framework that's built up around this, and there's a legal structure there as well. Obviously, self-driving cars are going to be safer; they're going to save lives. That is a very important thing. We want to save lives. We want the roads to be better. But the number of lives saved is not the only term in our ethical calculus. We have to look at: what happens when people get injured? How are they taken care of? How are they able to pay the medical bills, or get back to work, or miss work because they are recovering but still be able to pay rent and afford food?

So the question is: what does this mean for us as developers? Like, does this give us a free pass? We're not liable for IoT devices, so that means we can ship, right? We can do whatever. Let's just innovate the hell out of everything until something breaks and there is a precedent, right? Is that really the legacy that we want to leave behind? Do we want to leave behind the legacy of "we did it because we could and we didn't give a damn about who we hurt"? Some companies are doing this. Some companies are still working in the space where they just want to innovate, and they just want to build things, and ship things, and they will deal with the consequences later. But you have to ask yourself: do I want to be responsible for that? That's what ethics is all about.

Now, I said that the JavaScript community had one of the most fascinating ethical incidents in technology in a while, and that's the left-pad incident. I don't know why it is pink, but whatever. When Ashley talked about left-pad last night, it was really fascinating, because she focused on a lot of the backlash. She said the internet blew up when left-pad happened. People were angry about a lot of things - the way the JavaScript community developed a small-module system, and maybe it is wrong or maybe it is right - and there was a lot of argument back and forth, and friendships were lost or damaged in this incident. What people didn't realise was that the reason for all of this anger and acrimony was that the left-pad incident actually exposed a true ethical dilemma, and we just didn't see the forest for the trees at the moment, because left-pad involved two competing ethical frameworks.

The first is the hacker-culture ethic: that openness is the most important thing, that openness is a virtue, and that the ability to control your code is tantamount to being a hacker, to being an open-source developer. Sure, other people can fork it, but you get to choose what happens to it. When the left-pad author unpublished all his modules and broke a bunch of stuff on the internet, npm had a competing framework: they have a responsibility to the people who use their product. They have a responsibility as engineers. They also value openness; they're an open-source community. So this was a very hard decision, and that's why there were so many heads being butted over the decision that was made.

Could you imagine what would have happened if this hadn't happened in 2016, when it mostly affected the web, but rather in, I don't know, 2018 or 2020, when npm is running on people's cars and people's refrigerators? Somebody pulls down a module, and now, all of a sudden, you're driving on the highway at 70 miles an hour and your car shuts off and something goes wrong.
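As an aside that is not from the talk: one minimal mitigation for exactly this scenario - assuming a Node.js project - is to pin dependencies to exact versions, commit the package-lock.json, and install with npm ci from a registry you control, so a device in the field never resolves packages from the public internet at deploy time. A rough sketch of a pre-release check along those lines (the file name and the policy are invented for illustration):

```js
// check-pinned.js - hypothetical pre-release gate: refuse to ship firmware whose
// package.json declares any dependency as a range rather than an exact version.
const { dependencies = {}, devDependencies = {} } = require('./package.json');

// An exact semver pin looks like "1.3.0" or "1.3.0-rc.1"; anything else
// ("^1.3.0", "latest", "1.x", a git URL) could resolve differently tomorrow - or not at all.
const exactPin = /^\d+\.\d+\.\d+(-[0-9A-Za-z.-]+)?$/;

const unpinned = Object.entries({ ...dependencies, ...devDependencies })
  .filter(([, version]) => !exactPin.test(version));

if (unpinned.length > 0) {
  console.error('Refusing to build: unpinned dependencies found:');
  for (const [name, version] of unpinned) {
    console.error(`  ${name}@${version}`);
  }
  process.exit(1);
}
console.log('All dependencies are pinned to exact versions.');
```

Pinning alone doesn't help if a package vanishes from the registry, which is why the lockfile, an offline cache, or an internal mirror matters just as much; none of this is exotic, it just has to be decided before the code is running in somebody's car.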
You think that could never happen - that nobody would ever actually do live deployments on an IoT device running in the field? Please, we're doing it in production systems right now. IoT security is a mess. We're moving at this rapid innovation pace. Of course there are going to be issues. And you don't want to be the person that's responsible for somebody's refrigerator going out so that they lose all their food, or maybe they lose important medicine that needs refrigeration. You don't want to be responsible for that - or maybe you do. Maybe you think that the virtue of openness is the more important ethic, and that is a true ethical dilemma.

So what do we do as engineers? What are the takeaways when we talk about ethics? This is kind of why I wanted to not give this talk and rather empower you to be able to give this talk, because there are actionable things that we can do as engineers, as developers, to make our workplaces better, to act with more ethics.

The first would be to set expectations with your boss. If you know what pressures your boss has, if they ask you to do something you don't feel comfortable with, you need to know: can I go to my boss and say I don't feel comfortable with this? Can you go to your manager and say, "I have concerns over this"? Do you know what process will happen if you do that? That is an important thing.

You also have to be prepared to say no. If somebody comes to you and says, "Hey, I need you to build in this method that sends tracking data on somebody's heart rate back to our server in real time," are you comfortable doing that? Maybe you're not. But do you know how to refuse an order? Are you willing to refuse an order, to put your career in jeopardy for doing so, if it goes against something that you believe in?

You also need to be able to hold frank discussions with your co-workers about what this means. I work in finance, and I'm a data scientist, so we have a vast amount of data on people and a vast amount of capability to do things with that data, and so my team talks often about the implications of what we are doing when we record customer data, when we record information about their finances. We talk all the time about: what are you not willing to do? What are we legally obligated to do? In finance, we have legal obligations in terms of reporting fraud and looking for money-laundering, for example. So we have to talk about these things with each other. And, as engineers, you should be able to talk frankly with your co-workers: "I don't like where this is going. How do we make sure it doesn't go there? How do we make sure it stays on the safe side and not the dangerous side?"

The most important thing is to know your limit - to know when you're willing to walk away. Because tech is really lucrative, and we have a lot of privilege. We have a lot of privilege in tech. Just look around the space that we are in: this is a remarkable conference in a remarkable space, and there are amenities and all sorts of decadence here. Not all industries are like this. What is your limit, where you would be willing to say, "I can no longer in good conscience continue to do this," and go do something different? If you don't know what that limit is, you're not going to discover that you're over it until it is too late. That's all I have. Thank you very much.
[Applause]. [Cheering]. >> Thank you, Emily. That was a spectacular talk. I think you touched on a lot of really important points about the ethical questions we should be asking right now. We're going to get set up with the next speaker. Hold tight, and we will be right with you.