Look mum, no hands!


Thanks everybody for being here this morning. I thought I would start by — it's not working any more. That's great. Hello. I will do without. That's fine. So, again, I thought I would start, and it's not working either. Wow. All right, let me try something. And then again. And maybe that, then. Cool. I want to share the link to the slides, in case you want to follow along, if I'm going too fast or it's too small — I tried to use the biggest font size I could fit and the right colour contrast, but I might have missed something. If you want to follow along, this is the short link. I tried to make it pretty simple. But let's move on to the actual topic. So I want to start with a pretty simple slide. I would assume that, if you're here, you know what these represent, but just in case: on the left, you have a laptop, and, on the right, you have a mobile phone. And the reason I started with that is because I want to talk about interaction. I find it very weird that we have the internet, this giant network where information can be accessed anywhere by almost anybody, and we decide to interact with it on a small screen like our phone or our laptop, and the way we interact with it is that we have to learn these new gestures, like swiping and tapping on a keyboard, and it became part of what we do, and we accepted it. Personally, I don't really believe in the keyboard being around for that much longer. I think there are probably other types of interaction that are a lot more natural that we should focus on. And this is what I like to do in my personal time: by day, I'm a software dev, so I probably work on the same type of stuff as you, like websites or apps, and, in my personal time, I like to explore alternative interactions. I like to play around with technology and see how I can interact with stuff outside the typical keyboard and phone. Let's talk about these interactive technologies.
Especially with the new stuff, you can start with wearables. Some of you are probably wearing a Fitbit. Yes, you have to interact with the dashboard later on on your phone or laptop, but the whole tracking of your steps is done quite freely. You can move around and it will count. You don't have to hold your phone and say one step, two steps, three steps. It is free. It kind of does it, and you don't have to think about it. Then you also have voice UI. Some of you probably have an Amazon Echo, or Google Home, or some other device that I don't know about. The same thing: it is a lot more free. You can move around your house, and you just have to say a certain command and access information without being in front of a laptop or having to take your phone out of your pocket. I think that's quite powerful. Maybe it doesn't work as well right now, but I think it is really important to be working on that kind of interaction. Motion: some of you might be working in offices where you walk around and the lights turn on. The same thing: if nobody is in the office after a certain time, the lights turn off, and you don't have to think about it. The interaction is all around us, but it's disappearing. This one is quite funny: facial recognition. In China, you can pay at KFC with your face. I find that quite interesting, because I can't even imagine the interaction. You go to the shop, and, "Would you like to pay by card, cash or face today?" I don't really want to pay with my face, but they're already doing it, so, maybe in a few years, we will all have to do it. I want to focus on biofeedback today. Biofeedback is getting data about physiological functions using devices that track the activity of different systems in the body. In particular, I'm working with the Emotiv Epoc, which is a brain sensor. Before talking about how the device works, I thought we should go back to how the brain actually works.
I'm not a neuroscientist, so I can't explain in depth how the brain actually works, and I don't think we really know, but I'm going to do a high-level and brief intro so we can understand what is going on afterwards. You start with a subject — it could be anybody — with an intent of doing something. It triggers a certain part of the brain. Different parts of the brain are in control of different activities. Let's say you want to walk. You don't think about walking, you just kind of do it. You have a trigger — I think actions are in the prefrontal cortex, but I don't remember exactly. Then this part of the brain is going to send a signal to a body part. Here that is a hand, because I was looking for an icon of legs, and it was either strong man legs or sexy woman legs, and I was like, forget this, I will get a hand, it's the same thing, and it's gender-neutral. [Applause]. And, yes, so then the part of the brain that is responsible for the action of walking sends a signal to your legs, and that ends up being an action. It's basically just signals from the brain to the rest of the body. And now, how does the Emotiv work? It has 14 different sensors. On the left, you have where these channels are placed around the head. In the middle, you have how a more high-definition brain sensor works, and, in green and orange, you have where the Emotiv sensors are placed, trying to cover a broad range of the face and head without having too many sensors. When I got this brain sensor, of course I wanted to play with it, but the only thing available was an SDK in C++ and Java. I learned Ruby and JavaScript, and I thought that was not going to work for me. I didn't want to give up. It's a device that can be a bit expensive — it's a lot cheaper than something you find in hospitals, but it's still a bit of money. I didn't want to give up.
So I decided to try and build something in JavaScript to allow other JavaScript devs not to go through the struggle I went through, but to use the device with JavaScript. So I built epoc.js, which allows you to interact with the brain sensor in JavaScript. As for the features: when you get this sensor, you can download what is called the composer, which is an emulator, so you don't have to actually set up the sensor all the time — you don't have to carry it around, and it can be fragile. You can send data from the emulator to your program and do what you want. It's sweet, and it works. You can have access to the live data. The device has a gyroscope, so you can get the head movements. You can get the performance metrics: your level of focus, excitement, stress. You can also get facial expressions with the sensors around the front of the face — for example, smiling, or looking right, looking left, up, down, and a few others. But the most exciting part is that you can get some mental commands. They have to be related to the thought of an action, and I would assume — but I'm not sure — that's because it is easier to recognise as a pattern than thinking about a chair or something like that. When you think about an action, the signals that come from the brain are easier to recognise across different people than thinking about the beach. In terms of technical stack, I had to use the C++ SDK in the background — probably badly written, but it works — and then created a Node.js add-on, and I used these three modules there. Now, I know there's a new way to create add-ons — I think it is called N-API — but, at the time, when I started, it wasn't there, so I haven't updated it. For now, it works, so, when I get time, I will probably try to move over. Okay. All right, so, with that, demo time. I always like to start my demo with a little reminder that it might not work. It's supposed to work.
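To make the features above a bit more concrete, here is a minimal sketch of how an app could react to events coming from such a module. Note that the function and property names here (handleEvent, blink, lookingLeft, lookingRight) are my assumptions for illustration, not the exact epoc.js API; the real event stream comes from the sensor or the composer emulator.

```javascript
// Hypothetical sketch: reacting to facial-expression events from a
// brain-sensor module. As described in the talk, each expression comes
// back as an integer (0 = not detected, 1 = detected) on an event object.
// Property names are assumptions, not the exact epoc.js API.
function handleEvent(event) {
  const actions = [];
  if (event.blink === 1) actions.push('select key');
  if (event.lookingLeft === 1) actions.push('move cursor left');
  if (event.lookingRight === 1) actions.push('move cursor right');
  return actions;
}

// Simulated events standing in for the live data stream:
console.log(handleEvent({ blink: 1, lookingLeft: 0, lookingRight: 0 }));
// → [ 'select key' ]
```

In a real app, a callback like this would run every time the module emits a new event object, which is what the brain-keyboard demo below does with blinks and eye movements.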
It was working a few days ago. [Applause]. But ... yeah. All right. So, I was already nervous. The first thing that I built is a brain keyboard. It's just an interface of, well, a keyboard. I wanted to be able to interact with it just by moving my eyes. This demo is not linked to thoughts but to facial actions. The slightly annoying thing with the sensor is that you have to put some kind of gel on all of the sensors so they conduct properly, so I did it before the talk to make sure that I would be fine, but I will just redo it quickly. It's not that much of a pain, but just a little bit. All right, I should be all right. The thing is, you do look stupid when you're wearing it, so just get used to it. As it is a demo, you might want to take a picture, which is fine. Just make sure my eyes are open, because I really look stupid if my eyes are closed on top of that. Don't take a picture when my eyes are closed, please! That's already quite hard. So I'm here, and then — let me check something first. I want to check if it's green everywhere. So I have to launch another of the programmes. I'm missing one. So I did turn it on? That is interesting. So that is not great. But it should be all right. Let's try. So, I have my server, and I just have — okay. So if I look right, and I blink, blink. It doesn't get the blink. So I look right. And oh, I did look left. Okay. Right. Blink. Left. Oh, damn ... Okay, let me try a bit more. Oh, well, I did blink. Yes, okay. So, right, okay. Blink, blink, blink. That's working really well, isn't it! [Applause]. Damn! I knew it. It's letting me down all the time. I know I'm moving — well, now you want to work. All right, it was supposed to be working fine, but it doesn't, so I'm going to move on to the other demo, then. Hopefully, this one will work better — probably not.
So the other demo that I built — the aim at the end is to have mind control in WebVR; I realise this is not WebVR but 3D in a browser. As I'm pretty sure it is going to let me down, I recorded something so I have proof that it does work. For now, I just trained the thoughts of thinking about the directions right and left, and pushing and pulling, so I will try and go back and forth. So, let's see. The thing is as well, you have to focus quite a lot, and — yes, we are still tracking. The state of mind I'm in right now doesn't help — you know what I mean, thinking about other stuff, nervous. I will start the server again, and then I will go to localhost. We're back. Man, I'm struggling. It doesn't want to go right. But that's fine. Let me try again. I will try again. [Applause]. Okay. Maybe I need to — ah, I'm missing some gel in the middle. I don't have time to redo it. All right. I really want to go right. Okay. Okay, I went right. Well, all right. So I'm going to take that off, because now I have gel all over my hair. This is great. Forget about having a cool haircut! All right, so what is next? Code samples. Just to show you very quickly how it works, this is a very, very short code sample. There are more lines than that in the real code; I tried to make it quite big. Just take it as pseudocode, because there is a lot of code missing — it's about understanding the process of writing the Node.js module. At the bottom, we declare a module that we're going to call "module", and the entry point is the init function, which is right here. We are going to create a function that we want to have access to in JavaScript, which I called connectToComposer. When we require the module in JavaScript, we will be able to call that to start the whole thing. What this method is going to do is that it is actually going to be connected to our connect function that is here.
What this one does — and this is where you use the C++ SDK — is create an EmoState handle, which holds data from the brain sensor, and you use that handle to check what expressions or thoughts you get. If you want to know whether you're blinking, it comes back as an integer, 0 or 1, and then you keep checking. I create an object called event, and I add a property "blink" on the event object, and its value will be the integer coming back from checking whether I'm blinking or not. So, very poorly written — well, I don't know if it is poorly written or not, because I don't do C++, so that is fine for me. This is the C++ file, and then you have the binding.gyp file. What it does is that you take your C++ as the source file and say the target name is "module" — I just wanted to call it module. You compile, and, in JavaScript, you can require that module, and then we have access to connectToComposer, and then we get our object with the data coming back. I also have a connectToLiveData function for when you record your brain patterns. Finally, when you run the program, you have access to this object with the properties: blinking, looking left or right, whatever you want. So, very quickly, it's not that bad. It took me a while to figure it out, because, not knowing C++, it was hard to know where I was going. In the end, I got there. It definitely needs to be refactored. I did refactor it a few times. I remember once I realised I had two files of a few hundred lines and they were doing exactly the same thing! So I could delete everything. I felt so productive, it was awesome. So, the limits: of course, you need training for each user — not for the facial expressions, but for the mental commands — which in a way is a good thing, because it means we are not all exactly the same. It means that, when a user tries it for the first time, you do have to train it before being able to use it. You can't track everything.
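As a rough sketch, the binding.gyp file described above could look something like this. The source file name is an assumption for illustration; the actual file in epoc.js may differ. The `include_dirs` line is the standard incantation for pulling in the nan headers, one of the three modules mentioned earlier.

```json
{
  "targets": [
    {
      "target_name": "module",
      "sources": ["src/module.cc"],
      "include_dirs": ["<!(node -e \"require('nan')\")"]
    }
  ]
}
```

After compiling with `node-gyp rebuild`, the add-on can then be loaded from JavaScript with something like `require('./build/Release/module')`, which gives access to connectToComposer and connectToLiveData.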
That kind of makes sense to me, but sometimes you see people complaining that it is not good enough, that it doesn't know what they want. It is a brain sensor that you can buy online. It has around 15 actions that you can track. What else do you want! And latency: as it has to keep checking the difference between the current brainwaves and the patterns that it knows, there is a delay between thinking and focusing and it detecting it. Depending on what you want to build with it, it may not be the right thing. If you want to build a mind-controlled car, you might not want to use that. There are limits in terms of user experience. I think the tech is actually quite cool, but, as users, we like to have a seamless interaction — to be able to use technology to do whatever we want without having to think about it. There is a bit of a limit in terms of how we are building technology right now. A lot of the time, we build innovation and think the innovation is amazing, but people don't want to use it. It won't work. We have trust issues with technology. When a new product comes in, we are super excited, and we use it, but then it fails once, and it's over, and we don't use it any more. Which is also interesting. If you want to develop products like that, you also have to think about that. As users, we should probably be a bit nicer to tech, because you have to remember that we build this; it's not a magical thing that you buy and it just works. Real value: if you want people to actually use new interactions and things like the brain sensor, you have to find a way to make it bring value. We have habits, and we like to use those habits so we don't have to think, so you have to make sure what you're building is good enough for people to want to switch. Even if this brain sensor was super powerful and it worked really well, I'm not sure I would walk around with it on. In terms of social acceptance, it doesn't really work!
Those are the three points that I could think about. I'm sure there are more. While I was researching interactions and technology, I found this talk that is cool: a student from MIT built a device where you have a camera and a projector and have things projected onto your environment rather than your phone. You could point at a newspaper and have videos, like if you were in Harry Potter, stuff like that. Or make a gesture like this and it takes a picture for you. The tech was cool and worked well, in the demo at least. Then I scrolled down and I saw that it was made ten years ago. I was surprised: ten years have passed, and we have nothing close to that. We are still using the exact same thing. I find that quite crazy. At the end of the talk, the speaker actually said, "We never know, maybe in ten years, we will come back and talk about the ultimate brain implant." We are now ten years later, and we're not there at all. So that's quite interesting. You have to figure out why exactly we are not there. And I think that we are working so hard on making the tech good that we forget to think about the user. I think we need to think more about people. Possibilities: accessibility, of course. My demo is small, but I think it could be useful to some people. You have people working on trying to control a wheelchair with that sensor. I think that's a uni project, but that's still pretty cool. Mental health: the Epoc is not the only brain sensor. There are other ones helping you deal with stress and attention a bit better. I think that's a cool space in which to have that as a useful thing. And art: my favourite. So I had to put it in there. I like mixing technology and art, because you can explore things that you don't get to do at work, and it might seem useless to some people, but I would like to remind everybody that useless is not worthless.
A lot of the things that I do are useless, but I learn so much from doing them, and I learn stuff that I can apply to other projects. When I started with the brain sensor, I didn't want to do a brain keyboard; I was thinking, wouldn't it be cool if I could control graphics with my brain? Then it ended up becoming something that could be useful, and I learned a lot. Yes, useless is definitely not worthless. I just wanted to quickly show something else that I didn't build but thought was incredible. I don't know if you've heard of this from the MIT Media Lab. This device can actually track internal speech and translate it into words. You know when you talk to yourself in your head, you create the words — I can hear myself talk in my head — well, they managed to sense the electrical signals that happen in your jaw when you think about speaking, and to translate them into words. There's an interview where this student could Google things and answer questions by just thinking about the words. Of course, I'm sure they polished that demo for the interview! But it's amazing. I'm just like, "You know, it's electrodes, I have them at home. Maybe I could try it." Maybe we will try it; we will see. I wanted to show quickly something funny: the first prototype was that, and it reminded me of an Epoc but upside-down. I thought I could use that, you know? I'm getting to the end of my talk, but I have a few links if you want to have a look. There's probably more that I can add in there. If you're interested in this technology, there is a global Slack channel where people talk about that. I wanted to finish on that point. You know, sometimes you look at a piece of art, and you're like, "I could have made this," and this is exactly how I want you to feel about this talk, because, when you think about the tech, all I did was write some bad C++ and then wrap it into a Node.js add-on.
The tech is the sensor, which I bought. If I can do that, I can assure you that anybody here can do it too. Maybe you need to take a step back and not think about making something useful straightaway. Have fun. Release your inner child. Don't be scared of just trying and building something. I can assure you that you're going to have a lot of fun doing this. So, this is the end of my talk. I am on Twitter. You can of course always come and talk to me if you want. I will be walking around. If you have any questions, no worries. Unfortunately, I have to leave the conference this afternoon because I have to take flights back to Australia — I have to speak at another conference, so I won't be there, to be honest, but I think maybe I will leave around three, so I will be walking around until then if you have any questions. Thank you very much for listening, and thanks so much JSConf EU for having me! [Cheering and Applause].