#143: Tiera, where am I?

Transcript
Speaker B:Welcome to Blind Guys Chat, where Óran O'Neill. Hello. Jan Blüm.
Speaker A:Hello.
Speaker B:And Mohammed Laachir.
Speaker C:Hi there.
Speaker B:Talk about the A to Z of life.
Speaker A:Well, hello, ladies and gentlemen, and you're very welcome to episode 143 of Blind Guys Chat. Now, just before we get talking to our guest for this show, Liam Gishvint, about the Tiera app, I just want to remind you that the Audio Description Association is holding a VI users' group on Tuesday 31 March, which I will be leading. If you want to register, just follow the show notes for this podcast and you'll get all the details there. And all you have to do, if you want, is watch a couple of episodes of, let's say, EastEnders or The Night Manager on the BBC or iPlayer, or, on Netflix, Bridgerton or a drama called Missing You. And if you have some views on them, stick them in your head, write them down, put them in your computer, and come along on Tuesday at 7 and we will discuss them all.
Speaker D:Email us on blindguyschat.com. Drop us a line and come along.
Speaker A:Now, ladies and gentlemen, it's time for our guest. Liam Gishvint is with us all the way from. You'd never believe it, folks, the Netherlands.
Speaker E:What?
Speaker A:Yeah, Liam is just back from the 1972 Olympics where, as a sumo wrestler, he won third place in the heavyweight competition. Hang on, there is only a heavyweight competition in sumo. Yeah, but you are very welcome to the podcast, Liam. And what part of the Netherlands are you in? And please don't say Den Haag, and please don't say Amsterdam, and please don't say Utrecht.
Speaker E:Thanks so much for the introduction and yeah, it's good to be back after the competition. But, yeah, luckily I'm not based in any of those cities. I'm in the much better part of the Netherlands, in the south, where the people are warmer and the weather's about the same. So, yeah, I'm based in the city of Eindhoven. High tech. We're very smart down here. We do lots of innovation and create cool things.
Speaker C:I like my dumb, clammy, cold people. They're good.
Speaker F:Yeah, no, you're highly welcome, Liam. Highly welcome.
Speaker C:Welcome, welcome.
Speaker A:So, Liam, tell us, what have you just launched? What are you.
Speaker E:What.
Speaker A:What does your company do?
Speaker E:Yeah, so this January we launched an app called Tiera. And basically, long story short, this started off as a university project out of the University of Technology here in Eindhoven, the TU/e, really with the question of how to take the kind of data and information that exists in the world and interpret it for people with blindness or low vision. Really with the ultimate goal of reconnecting that information to people: for example, if they're walking down a high street, there's typically signage, there are discounts at restaurants, there's information that in a sighted world is taken for granted, that people can see these things. We asked the question of how we could use AI to really change that and build a revolution around that. And that's where the company was born two years ago. So we've got this app live now, it's in beta, and we just had 1,000 monthly active users in the last 30 days. So moving fast and, yeah, excited.
Speaker F:But it's still in beta, isn't it, Liam?
Speaker E:So it really kicked off in January as a beta, and then we've just been getting the feedback to really improve. Now it's a good question of whether we just release something else later on. But we're kind of just continuing to improve it and improve it, and slowly it will get to a point where it's ready for the market.
Speaker C:So, of course there are apps already, Liam, called things like BlindSquare, that you can turn on and it will tell you what's around you in terms of, like, shops, and it will tell you where they are. For example, at 10 o'clock, 50 meters in front of you, there's a barbershop, something like that, and it will give you the name. But what I'm hearing you say is even more intriguing, where you will connect things like discounts as well to what you're being told as you're walking down the street. Is that right?
Speaker E:Yeah. I don't even have enough fingers to count how many navigation apps there are for people within this specific market. You know, there are just endless, endless solutions. I think it's important to clarify that what we're building is really a platform, and it's a platform really built on companionship. And what that means is that Tiera will, by the end of this year, be basically almost like a digital human being. The technologies that you experience with, for example, BlindSquare or VoiceVista, they're really like standalone products. But the mission with Tiera is that you can leave your house, the door of your apartment, and get to wherever you need to go, whether that's finding the gate at a terminal in the airport. Our mission is the entire user journey. And that requires a lot of different technologies, a lot of different challenges. And we have an AI assistant that basically encompasses all of that, really focused on improving the overall experience rather than a specific subset of the problem.
Speaker A:So are you saying that from the moment I leave my house, I could switch on, enable the app and it could take me from my house, walking to the local shop and tell me what obstacles are in the way, or lots of different information along the way, and then when I get to the shop, if there's a sandwich board or there's a discount or something like that, some kind of message outside the shop, it'll read that automatically. Or are these things you have to enable as you're going along with the app?
Speaker E:So the way I like to visualize it is sort of like a halo: as you kick off, there's the active state and there's the passive state. So say you want to walk in Den Haag, for example, you want to walk to the Grote Markt, and you're walking, let's say, from the central station. You would just exit the station and say, yeah, take me to the Grote Markt, and those instructions would be set. So you're on a route, you've got, of course, these sort of waypoints. We plan these as specifically as possible to connect them to actual walking paths. And that's the basics: okay, it's a navigation system, it's an A-to-B application. But as you're walking, there is this active state where you can say, hey, Tiera, and Tiera will come back to the foreground, and you can say, hey, Tiera, what's on my left-hand side? What's on my right-hand side? And then Tiera will be like, oh, on the left-hand side is a Starbucks, and actually, today there is a promotion. Or, actually, you know, you're walking past the library and tonight there is an event at 6 pm. So it's a combination: there's the functional part where we want to get from A to B, but we can only solve so many problems at that point. It's really about this: you can imagine that the entirety of the world and the physical space that we move in has all of these different data layers and information that we can access and interpret with AI, and communicate in a way that just feels natural and non-intrusive. One thing is we don't want cognitive overload, right? So that's why we've got this whole thing where, yeah, we set the route and you get your basic functional A to B and your important instructions, and we try to make them as accurate as possible. But how can we take it a level beyond that, where you can interact with the environment through voice and get that information?
But to answer your question more specifically about visual interpretation: we don't want people walking around with their phone in their hands, and not everyone has access to Meta glasses. So we try to do everything as much as possible without the camera. But of course, in the instance that people want that kind of direct interpretation, we're going to build that out as well.
Speaker F:But this is what I like. You have, for example, the additional flavor, the juicy part: you are walking and you ask, okay, hey, what is around me? And then also, what is the special in this restaurant, or whatever. That is so interesting, because then you don't need to stop by and take a picture of the menu, or find out where the menu even is, and then ask Meta to read it for you. If you can do that on the fly, it's very good.
Speaker E:Yeah. And it also engages those enterprises, it gets them involved in the accessibility discussion. And I always think that accessibility products a lot of the time end up being used by the general market and the general public as well, because they just improve access to information. You know, like the typical accessibility ramps, people use them for other purposes. And I think it would be the same with this: we'll find in two or three years that the use cases are so much broader than we initially planned. But yeah, really the primary focus is about that kind of reconnection.
Speaker F:I was already using it for a while at the weekend, and I used it in some stages. But can you also use it alongside a navigation app that you're used to working with, because then you have it set up? I did not try that. But is it possible?
Speaker E:We don't currently have the option for you to import favorites from other apps, but we are planning to allow the user to import GPX files into the app immediately. It's something that I've just written down now; with the development team that we have, I can just pitch this to them and explain why it's important, and then in a couple of weeks or a month it'll be in the app. But for the supermarket case, what we basically envisage is that the route planning will get you to probably the closest pin it could find to the entrance. Sometimes it's very difficult for us to know where the entrance actually is, because either the Google Maps data is incorrect or our own route planning engine is also not fully accurate. So we would allow the user to use visual AI to find the door, which of course is a massive challenge. But, as it were, you can imagine that as someone enters, for example, an Albert Heijn, for someone who's sighted, they can see all of this information. And it's so taken for granted that it's just not accessible for everyone. So there's the ability for you to basically ask the AI: hey, is there anything that I should be interested in today? And we want to get it so that, in a year, if you're vegetarian, the AI is not going to talk about a special on beef or something. You know, that's the kind of level that we want to get to.
Speaker F:Halal Guy Mo. Yeah, you are focusing on that one.
Speaker C:Don't tell me to buy pork.
Speaker E:Yeah,
Speaker F:yeah.
Speaker C:So this is the vision. But just to be clear, if I download the app now, that's not yet what I'm getting, right?
Speaker E:Yep. So that's always an important thing. Whenever I have these conversations, it's difficult for me to strike that balance where I'm like, well, I've got this idea for the end of the year, but then, you know, I'll talk to Jan or to you guys, and then, if you download it from the App Store, what are you actually going to get? So in a nutshell, if you download it now, what you'll experience is this standard navigation situation, which is very similar to Google Maps, and there are these checkpoints, which are basically like the waypoints, the dots on the map, that you can interact with using visual positioning. So especially for the start of the route, we heard from a lot of people that they had this massive difficulty knowing which initial orientation to take to actually start the route. So what you do is you basically take your phone and hold it in your hand for a couple of seconds, and it uses the visual cues of the environment. Especially somewhere like Het Spui in Den Haag, where we did a pilot, you've got tall buildings (Amsterdam's the same) and the GPS inaccuracy is massive. It can put you on the wrong side of the street, which just ruins the entire navigation. So with that visual positioning system, we're able to get centimeter accuracy to start the route, and it starts you off on a brilliant foundation. And you can do the same in reverse when you get to the end. So I was in London with my co-founder, who is a blind veteran, and we were just doing a route together, and we took a left when we shouldn't have, because it was one of those instructions where it's like a left for a meter and then a right, you know, and it should just be a straight, but it says left and right. So we ended up taking a left into a side street, and then it's like: you've gone off route, checkpoint system enabled. And then you'll get a kind of vibration.
You take the phone out, and if you turn 360 degrees it will play a sound or you'll feel a vibration, depending on what you prefer. And it puts you back with the right orientation. You can't imagine how nice it was to see that technology work in his hands, because he was like, oh, I took a left, and then he got right back onto the main road and went on his way. So that's primarily the user experience right now: this navigation system with the checkpoint system, as we call it, with the visual positioning. At these critical moments, you know, "I just went off route" or "where do I start?", those 10%-of-the-time issues, we've got that high-precision positioning to really keep you going on your way.
Speaker F:Does it work better when you have pre-planned routes, or does it also work the first time?
Speaker E:So we plan the routes. We have an automated system that plans the routes ourselves. What we're putting in there this month is the ability for the user to do kind of breadcrumbing, which is very similar to, I believe, VoiceVista, where you can set your own points. We'll basically offer the ability for people to create their own routes and also their own destinations, where a user can create a point in space. An example would be a dog bin at a dog bin. Sorry, a poo bin at the dog park. Yeah,
Speaker C:dog in a poo park.
Speaker E:Yeah, dog park. That's not good. That would be a typical example, or a bench. Or, you know, there's a story of a guy we always talk about. He's in the UK and he became visually impaired quite recently, and it's actually prevented him from going to his garden, his allotment. He has an allotment a bit further away, like half a kilometer from his house. But he feels so unconfident about actually getting to the entrance of the allotment, because it's also not an official destination. You know, it's not like a restaurant or something like this. So the ability for him to place a point in space that he can always come back to is something that we're looking at. So, yeah, custom routes and custom destinations are our current step right now.
Speaker A:What's the process of doing that? Because you're using AI, is it an easy process to just put in a quick waypoint, rather than having to go into the app, then maybe into the navigation system, then into Waypoint, then name the waypoint, and so on? What bugs me is the length of time that it actually takes: you're standing at the waypoint, and really all you want to do is just put in the waypoint and move on. Is that a fast process within this app?
Speaker E:So it depends on the kind of accuracy that you want to get, because you can just use GPS or you can use VPS. And, yeah, the difference is that if it's in a rural area, there's not much.
Speaker F:What is the difference? GPS? VPS?
Speaker E:Yeah. So visual positioning is VPS. That's where you use the camera to get centimeter accuracy. It's best for high-density urban areas. And GPS, of course, is best for rural areas, because visual positioning kind of needs the data from the buildings around you to make that positioning. So, yeah, to answer your question: if it's in a high-density area, if you live in New York City, I would never suggest using GPS. And parts of Den Haag, of course, are similar, right, in sort of the newly built areas. But for general usage it would be very simple to just press a button, and then that location is saved as a waypoint.
Speaker F:Is it also done by voice then? You say, hey, Tiera, this is a marking point? Or how do you do it?
Speaker E:Yes. I'll need to double-check with the development team on what the current user experience plan is, at least for what's launched this week. But regardless, whatever you're able to do in the app manually, the assistant is able to do as well, as a general function. Our idea is to slowly get people used to using the AI assistant as the primary mechanism rather than VoiceOver. There is a behavior change needed there, because AI assistants are sort of new, and in a couple of years the voice will be so good and the intonation will feel so natural that we want people to start getting used to it, because it is easier. But yeah, people have requested this, because in the past we just had a button you could press to call an AI, and that was it. And then people were like, I just want settings, and I want to be able to control things and move with VoiceOver in the app. And of course that makes a lot of sense. So we have kind of a duality, where there are parallel ways of operating the application.
Speaker F:And are you also able to change the voices used by the AI assistant?
Speaker E:So currently we've got the default one that you'll experience in the app, but I'm hearing this more and more, and this is really sort of a beta, prototype-style voice. The best of the best at the moment, as far as I know, is from a company called ElevenLabs. And there you can even create your own voices. So, you know, I always had this fantasy of navigating around to the voice of Arnold Schwarzenegger, or even David Attenborough, I think, would be cool. I don't know how copyright works on that. But yeah, so: custom voices, or just better, more natural-sounding ones.
Speaker A:I have to cut in there. I think really you need to start with Mr. T from The A-Team.
Speaker F:Oh, yeah.
Speaker A:A friend of mine has a GPS in their car which is Mr. T. And sometimes you hear: go, turn left, fool!
Speaker F:That's really fun.
Speaker E:Yeah, yeah, yeah.
Speaker A:But what happens when you go inside? If you make it to the shop or the café or whatever, what happens then? I want to find a seat, or I want to find the counter where I can order my coffee or whatever. Will it work inside?
Speaker E:Yeah. So the challenge indoors is positioning, right? Satellite is extremely difficult, and visual positioning only works at this point if you've got the building mapped, and you're not going to map every restaurant in The Hague. It's just not scalable or affordable for the users. So really what we're looking at is trying to understand: okay, let's say someone takes the phone out of their pocket and holds it in their hand, or they're using Meta glasses. From the visual feed, do we have enough information to help someone navigate indoors? Because there are so many indoor navigation solutions where they come in, they scan a bunch of things, they maybe put some Bluetooth beacons in there, but then that's going to be one station, or maybe it's only the big station in the city center, and the rest is just left alone, because it's an operational nightmare. So how can we build a system where the AI works through its sort of top-down vertical view? Especially Schiphol, for example: it has a map, right? It's got a 2D map where you can really see the terminal floor plan. If you combine that with the AI context from the vision it can see in front of you, can it use the reference points that it sees, combined with the relations it knows exist because of the map? Is that enough context? And, you guys, of course, tell me if I'm wrong, but the more time I spend in this market, the more I feel that a lot of the navigation solutions are built by people with vision, assuming that what people with low vision need is 100%, centimeter-by-centimeter accuracy. And my co-founder really pushed me on this, because I was all about mapping. You know, we've got a partner that can help us map, and we do the AI through sort of digital twins.
And he was like, I don't need to know the entire building centimeter by centimeter. I have my cane and I've been trained to use it for more than 20 years. I know how to navigate generally and get by. I just want to know which direction the shop is. If I'm at the shop, I can use vision AI to identify the exact product, but I don't necessarily need a perfectly planned path indoors. So yeah, it's actually more of a question for you guys here: what do you think of that, and what are your opinions?
Speaker C:One of the issues that we as blind people run into, especially if you use things like airport assistance or train station assistance, these are human assistants that bring you from point A to point B. One of the massive issues you run into is that those assistants only have clearance to bring you from point A to point B. So you don't generally get the travel experience. Well, if the assistant really wants to help you, you get to maybe grab a coffee, right? You can go to the kiosk and get a coffee, and they'll generally be fine with that. But tell them that you want to spend 20 minutes in a shop somewhere, or want to eat first before taking the train or going to the gate, and they'll say: no, sorry, I don't have time for that. That's not my job. I'm not allowed to do this for you. So you don't get the general "I'm traveling" experience, neither in the train station, unless you know the train station already, nor in the airport. And I think if you get those general directions right, you don't need pixel-perfect accuracy. If you get general directions combined with information about what's present in a shop, that would be a massive, massive improvement already. If you do want something, there is always someone you can ask, and for a small thing, most people are willing to help. If you say, oh, I want to find these cookies, can you help me quickly get them? Someone will go pick them up for you and give them to you, if you can't find them with vision AI. But you can't, you know, walk up to someone and say, I have 45 minutes to kill, can you stay with me and bring me here and there and everywhere? They won't do that, obviously. So yeah, that's something that would be huge to have.
Speaker F:Yeah, fully agree. Because here you're used to doing a walk with your cane, but also with a guide dog. You know, you navigate without any problem from A to B. But yeah, the experience, what Mo says, that is really good. And also when you do it hands-free, because then you can walk around and touch some objects in the meantime, for example. And then also when you're able to interact with this agent. And that's what I now experience: sometimes you want it to shut up. You know, you press the Control key with JAWS so that it will stop speaking.
Speaker C:Shut your mouth key.
Speaker F:Because when you're out and about, you know, you have an announcement from the railway station, or you have someone passing by, or you want to hear something, or whatever it is. And when the agent is talking, or VoiceOver, you know, that is sometimes a little bit annoying.
Speaker E:I've heard this so many times. Again, there's too much focus on the functionality and the technology and the digital twinning and the centimeter-by-centimeter. But it's just about the experience, right? And I think that's really our mission: just to make navigation an enjoyable experience.
Speaker C:So do you also have functionality like this, right: I get out of a train, and a train, of course, is a pretty long vehicle, so you never quite know where you end up on the platform. I want to be able to take out my phone and say, hey, find me the closest stairs.
Speaker E:Yeah.
Speaker C:And it says, you know, pan from left to right, and then it says, okay, well, the stairs are on your left, go left.
Speaker E:You know, it depends on the station. Right. So in the Netherlands, you've got some that are.
Speaker C:For example.
Speaker E:Yeah, Utrecht would be an interesting one because if you're on the platform, it's sort of open air. So if you were to use the visual positioning, that works perfectly because it would look out across the platform, it would see the buildings around, it would see the station, the main station building, and then it would know really well where your position is on a 2D plane. So then it would know the platform.
Speaker F:But is that already working now, Liam? How would I do that?
Speaker E:So it would be about navigating to a point of interest. If we're talking about a staircase or an elevator, that is coming. I'm literally looking right now at our development timeline, and that comes under search and navigate to nearby points of interest. So that's April 29th that we're looking to launch.
Speaker F:But what would be the user experience? What would you do? You know, Mo is stepping out of the train. What is he doing?
Speaker E:So let's say: yeah, hey Mo, you've just grabbed a cup of coffee at the kiosk. Where can I throw my coffee cup? And then Tiera will be like, yeah, okay, well, let's have a look for some afval, a little bin somewhere nearby. And then it would make kind of a radial search of your environment to find where a bin is. It works the same with a staircase or anything, really. Especially with a station like Utrecht, the data there is really, really strong and very clear. And then we'd be able to say, okay: because we've got that good position, and we have the position of your endpoint, the destination is now not a street address but the position of an object, and you can get to that position.
Speaker C:But you would say, for example, somewhere like Schiphol International Airport's station, which is underground, it'd be much harder in there, right?
Speaker E:Absolutely. Yeah. I mean, especially the Schiphol platforms, because they're even underneath the main terminal building. So that's a massive challenge. A hundred percent. Yeah.
Speaker A:Similar with Heathrow.
Speaker E:Yeah. Looking at all of the use cases, the airports and, as far as I understand, hospitals are areas where there is sort of a robust system for support and assistance. And I think it's more about, rather than the functional side of getting from Schiphol's Platform 1 to the gate, it's more about: okay, when I get to the gate, how can I enjoy that experience? How can I explore the commercial side of things? And then ultimately the plan is that the entirety of the journey, platform to gate, you'd be able to do with Tiera. But as a technical challenge, that's quite massive.
Speaker F:Are you speaking with airports, for example, or with the railways, ProRail for example, in the Netherlands?
Speaker E:So actually, one part of the application that we haven't talked too much about is the human-in-the-loop side. Of course I'm a massive fan of AI and technology as a whole; I just fundamentally believe that at this point in time it's too flawed to be the final and optimal solution. I think it's great for 85% of the problems, but there is a 15%, a 10%, which requires human support. And of course there are already existing solutions, whether it's Aira or Be My Eyes, and there's the WeAssist now coming out from WeWalk. But we are training Dutch operators, so operators that speak Dutch as well as English, primarily, to do 24/7 support, embedded in the navigation. So you can imagine you're calling Tiera. And, you know, while I was on the plane coming back this Saturday, I was writing out this story of someone using the application and what that would feel like. And the kind of example that came up in my mind was that this person, Alex, reaches the apartment of their friend and, using vision AI, is trying to identify where the button to press on the intercom is, and can't seem to find it. And Tiera is trying its best to find this button to press, but no way, no how is any technology going to solve that. And Tiera says: hey, look, I can't help you with this, but I can put you through to a human operator who's trained to help you with this. And then one of our operators picks up, sees the live video feed, can tell that it's an apartment building and that there's a branch with leaves covering sort of the intercom section, and then explains how one can find the number. So I still believe that there is this 5 to 10%, maybe even less, component of the user journey that cannot be solved with AI, and that you need this safety net, this support mechanism that covers the way. And, and I'm not going to sound critical here, but it should not be one that sacrifices user privacy.
And I think that the solutions that exist now for this, whether it's Be My Eyes or Aira, sell user data to large corporate companies, because the data is valuable. And I just don't think that in accessibility you should be sacrificing your data or your privacy just to complete a user journey from A to B.
Speaker F:And how will it be paid for then? Will there be a subscription?
Speaker E:The primary function of the app we're trying to keep as free as possible for as long as possible. And really our business model is more focused on the organizations, like we've talked about: airports, train stations, especially university campuses, just trying to make them completely accessible. But we know that there are users who will want to have tele-assistance in their local language outside of these partner locations, and to make that possible we can, of course, offer subscription models. At this point, based on our pricing projections, it would probably be, for say 30 minutes, about €15 per month, as an example. The subscriptions are not finalized, and that of course comes with unlimited access to the AI. And that's sort of where we're starting out: at about 50 cents per minute, and trying to reduce the cost as much as possible. And yeah, basically that's the roadmap. Currently we're testing it internally, but it'll go live around the first of May.
Speaker B:Okay.
Speaker C:So outside of Be My Eyes, which is really volunteer-driven, and I've used Be My Eyes, it's a fantastic offering, it's incredible. I also use Aira, which is of course in English, and WeAssist is also in English. So do you also think about offering this kind of support in multiple local languages, i.e. French, German, Italian, Spanish, where currently you just don't have any system like this?
Speaker E:Yeah, absolutely, that's exactly the plan. So in a nutshell, it's the inclusion of human-in-the-loop, GDPR-friendly, data-private support in local languages. So we're talking to some French organizations. I also talked, of course, while I was in Austria, to their organization, and with the French and the Germans, you know, they don't like English that much. So yeah, that's the opportunity that we're looking at. It's just something a little bit more European.
Speaker A:Two questions. One is about the Meta glasses, and whether the app is going to integrate with them. But the second one, really: the app is running all the time, therefore you're draining your phone's battery.
Speaker E:Yeah.
Speaker A:So how do you get around that?
Speaker E:So a lot of the battery draining comes from having a lot of the processing done in the app. The majority, almost all of it — and that's why the app is so small when you download it — basically all of the computation is happening on our backend server. But yeah, you may notice, especially using the visual positioning system, that it does drain the battery, and we're trying our best to see how we can minimize that as much as possible. And that will come as the navigation system improves, where you don't need as much of kind of the high-tech, high-battery-draining elements of the solution. But yeah, it's again sort of a technical constraint that we have to deal with, that we're trying our best to mitigate. Where I'm currently looking at Meta Ray-Ban integration for hands-free operation is July 23rd to August 5th. That's when we've got it planned in. But I'm going to bump this up several sprints, I think, to really make it a priority, because when we look at our users — it was about 1,000 monthly active users that we had — the utility users, so people who really use the app as a utility and find true value in it, are not at a high enough percentage yet. And I think that barriers like a lack of integration with the Ray-Bans might be a component we can address to make it more functional in daily life.
Speaker F:How big is your team?
Speaker E:So currently we're a team of 12, a combination of full-time employees and a bunch of interns. We've got a little intern army that, now that I understand how to manage people better, is actually really useful.
Speaker A:Mohammed, you'd like to ask the question?
Speaker C:Yeah, thank you, Oren. You see, if you're nice to people, they'll be nice back to you. Anyway, so my question is — and I find this a very compelling argument that I'm also simultaneously not convinced by — but the argument is: look, if we don't give data to these big AI model providers as blind people, they're never going to consider us in their training, because they don't have the data about us. If you are conscientious about what kind of data you share with them, actually sharing data with them — like Be My Eyes is doing, for example, with video calls; I think they share them with Microsoft and OpenAI — can teach their AI systems to actually be better at guiding in the future, and that will in the long run help our user base. Where do you stand on this? Because this is quite a dilemma. I have the same values you have in terms of, you know, I'm from Europe as well and data privacy is important here. But it is a compelling argument, isn't it?
Speaker E:It's a great argument. And I actually got into this argument with someone at the Zero Project, because I basically said this and I just felt like it wasn't accepted. And he was like, well, you know that these organizations are using this data to improve the lives of the community that you're talking about. And I wholeheartedly think this is false. I think there are levels to data, and if we're talking about using data to make technologies more inclusive, 100%, we need that kind of data. But that's different from what we have now. If you read that article from a couple of weeks ago about the Meta AI Ray-Ban glasses data — you shared it with me — basically people didn't realize that Meta was seeing them on the toilet, seeing them in bed with their lady, seeing them looking at bank statements, all of this stuff where there's a gray area in the data privacy. Let's be honest, right, we're talking about Meta, which, I'm sorry, is maybe the least ethical company that I've ever known when it comes to the way that the social media platforms are set up, just to turn people into — yeah, I mean, dolls, where they extract data to keep your attention and sell that to companies as advertising, where we have basically become products through social media, in my opinion. And I feel very strongly about this, and of course my opinion can change, but I think that the data we need for creating more accessible tools that are better designed for people with low vision is not knowing and seeing every pixel of their movements throughout the day. It's having the information to know, well, what were the wishes that Mo or Yan had that day, and what problems did they encounter when they were experiencing X. But that's very meta-level, right? That's a user story. That's kind of an analysis of a transcription between an operator and a user.
And that's something that we will do as well, because when we're offering tele-assistance, we want to know what that operator has solved for that person, what challenges they encountered, and how we can later solve that through a mechanism that's more affordable, which of course is AI. But that's a whole different thing from taking a live video feed and then using basically the operator's description, or the Be My Eyes volunteer's description, of that video feed to train a model that will then basically be used for other purposes.
Speaker C:I feel the same way. But then I also do acknowledge that, really, if we look at what Be My Eyes does, you should not be using Be My Eyes in very privacy-sensitive situations to begin with, because you're looking at a volunteer, and a volunteer can be very nice and is typically very nice, but I will not show a volunteer my bank statements. That just is not the way to go. With Meta, I think I agree with you: Meta has much more of an opportunity to just continuously record you when it really shouldn't, because it's the smart glasses. That is true. And I think that's different from Be My Eyes: it is a very intentional call for help, almost the same as your tele-assistant. So I think there are degrees to this, right? There are degrees to which data collection is okay and can help, and where it just goes over the line.
Speaker F:We will share in the show notes, of course, all the technical details, etc. But what will be, in short, the quickest way to be on the road with Tiera?
Speaker E:Yeah, so it's available on the app stores, both iOS and Android, under the name Tiera. So that's just with one R: T-I-E-R-A. And then, yeah, we've got like 16 languages in there. You can try it out. There's also — if you go to the website, touchpulse.nl, or if you just type in TouchPulse you'll find it — the ability for you to sign up. We don't make it mandatory, but you can put in your city, so where you're based, your phone number, your email, and then basically we try our best just to keep you informed of any updates that are happening around you, whether that's partner companies, locations, something like that. And then of course we've got a monthly newsletter that I can get you guys onto if you're curious.
Speaker F:Yeah, definitely, for sure.
Speaker A:Yeah, yeah.
Speaker E:There's also, actually — we've got a Dutch discussion group chat where we've got all the Dutch people in there. We've got an international one as well, so I can send over those links to you if you guys are curious to join.
Speaker A:Yes, please do.
Speaker F:Great.
Speaker A:Well, Liam, thank you very much for telling us about this product, and do keep in contact with us, because we'd love to know how you're progressing. And if we sign up to the newsletter, which we will, we'll tell people what's been updated month to month, if that's any use to people.
Speaker E:Brilliant. We've got an ambitious roadmap, and of course I can say lots of things on a podcast, but, you know, the proof is in the pudding. So we're going to do the delivering for the rest of this year, and then I'd be happy to hop on again at some point later in the year, and by then I'll have something to talk about again.
Speaker A:Great.
Speaker F:Wonderful. Well, many thanks.
Speaker C:Great.
Speaker E:Thank you so much for your time, guys.
Speaker C:Thank you.
Speaker F:Okay, bye. Bye.
Speaker E:Take care.
Speaker D:Yo, Clodagh's got the inbox, she's the email queen, reading out your messages, she's the go-between. Tips, tricks, complaints, suggestions — Blind Guys Chat answering your questions. Yan, Mo, Oren, they're bringing the facts, hit us with your wisdom or your wisecrack attack. BGC email, what you got to say? Blind Guys Chat at Gmail, send it our way. BGC email, we're ready to reply, Clodagh reads it out loud, no message too sly. So hit that keyboard, let your fingers tap, we're waiting on your voice in this funky rap.
Speaker B:Hello?
Speaker D:Hello. Where have you been?
Speaker B:I've been hiding. I've been hiding.
Speaker F:Not for us, I hope.
Speaker C:No. You are very scary. Yan, come on.
Speaker F:Sorry, that's.
Speaker C:No, I said you are very scary.
Speaker B:He couldn't be scary if he tried, I'd say. Maybe it's Halloween. Maybe it's Halloween.
Speaker A:Okay, so just got time for one email.
Speaker B:So I'll do it. And before I do it, I have to make a huge apology. This email is from Nora in Boston.
Speaker A:It's all right, good friend Nora. I forgive you.
Speaker B:Okay? I'm glad you forgive me, but it's not you I'm apologizing to. It's Nora. Because Nora sent this on the 15th of January, and somehow or other.
Speaker F:Oh, my God.
Speaker B:I know.
Speaker F:That's a long time.
Speaker B:I know.
Speaker F:Was it 2023 or 2024?
Speaker B:It was 2026, but it was — oh, I know, it's terrible. I don't know how I forgot about it. But I forgot. I'm so sorry, Nora. It came in, I think, when we had just recorded an episode, and I kind of thought, oh yeah, I'll do it in the next one. And then I just completely forgot. I am so sorry, Nora. So she goes: Hello, all. Still alive. You may not want to read this whole thing because it's very long, and you should likely skip the political business, or you can cut out the political business if you want. Okay.
Speaker F:We love political business.
Speaker B:Yes, we do. Really, don't we? Binge listening to some back episodes. Nora says Mo is a great addition. There you go.
Speaker C:Thank you, Nora.
Speaker B:I know. We think so too. I've been off all podcasts for many, many months, even yours, dealing with some medical issues, which I won't go into now. Sorry to hear that, Nora, but hopefully it will be resolved soon. We're sending our good, healthy vibes to you. Our son moved into an apartment with friends. Wow, that's a big move. Lots of life changes. Also, so many current events and news podcasts are so upsetting and stress-inducing. She's absolutely right there. We know what you're talking about, Nora.
Speaker A:All is good in the world.
Speaker B:The news literally derails and distracts me every day. You can't — it's not funny, Oren — you can't imagine that it can get worse, and yet it does, every day. And we know how lucky we have it. My heart breaks for so many. So what's it like here now? Things have gotten worse since my Trumpster fire remark. I remember that. That was funny. Minneapolis is on the brink of martial law. The number of ICE (Immigration and Customs Enforcement) agents is constantly growing. They are given a 10k signing bonus, a mask, guns, and minimal training, and let loose on the public. I think that most of them failed the psych test for the armed forces or the police. They are finally getting their chance and taking full advantage of it. They wear masks, wave guns around like they're toys, and even point them directly at people. They use their vehicles to force cars off the road, even on city streets. Now, this was the 15th of January. Renee Nicole Goode was shot and killed on the 7th of January, and the 24th of January was when Alex Pretty was shot and killed by US agents. So it was right between that. Anyway, Nora goes on to say: It is mayhem. Immigrants, some legally in the US, are being snatched off the streets, run off the road, or having their doors battered in. They're put on planes and sent to other faraway states where federal courts are more favorable to the current regime, or they're being flown out of the country to random countries that they're not from. Protesters and people taking videos of ICE actions are being called insurrectionists and terrorists. They're being assaulted, pepper sprayed, beaten and arrested. Outside of Boston, somebody videotaped multiple car carriers loaded with ICE's SUV of choice making a delivery to their headquarters. The speculation is that Boston is next. So that's a lot, in fairness. And I can understand why Nora's not watching the news and not listening to podcasts, because it is.
Speaker E:Yeah.
Speaker F:Terrible.
Speaker B:Yeah.
Speaker C:Yeah. And since then, a war has started
Speaker A:and also, you know, do very well in our. In Iran. Send them there. That'll sort them out.
Speaker C:Maybe that would be good.
Speaker B:I don't know.
Speaker F:Yeah.
Speaker A:Off you go, lads, and go guard this Strait of Hummus.
Speaker F:Yeah. Let them sail the Strait of Hormuz, you know, because Trump, he said, ah, we need to be a little bit more brave, you know, Then you can sail through it, you know, don't worry.
Speaker B:And now for something completely different. Some sad news here. Larry — no laughing — Larry passed on rather suddenly in late October. This is her guide dog, Larry. Yeah, he had been slowing down. Gorgeous, handsome fella, really lovely. Oren and myself met him here in Dublin, and he was most definitely retired, but it was still a shock. He was just shy of 12, which is a great age for a 43-kilo dog. He was fine the day before and had a stroke or something in the wee hours. He died at home. We were with him. He didn't suffer, and we didn't have to make any hard decisions. Oh God, I'm so sorry. And at least he wasn't suffering long. That's awful.
Speaker E:Yeah, yeah.
Speaker B:Anyway, his successor arrived back in June and they got on well. Did I tell you his name? She didn't, I don't think. I don't remember hearing his name. No. What did you say?
Speaker A:Or Rocky.
Speaker B:No. I hope. I hope that your new host is not offended, but his name is Mo.
Speaker E:Oh, that's good.
Speaker B:I don't. I think that's amazing. I think that's lovely. We are joking that my next successor will either be Curly or Shemp, because Larry, Moe, Curly and Shemp. Three stooges, that's who it is. Yeah. I'm binging back episodes. I'm up to autumn. I hope that your Larry is. Well, retiring a dog is tough. I kept my Larry here with us because he wasn't well, and the guide dog school and I both thought that he couldn't deal with separation. So in for a penny, in for a pound. He took great care of me, and we did our best to take great care of him. A couple of local restaurants who knew Larry and our situation would let us bring both of them in to dinner with us. Can you imagine the reactions from some people when we. When we came in with two German shepherds, one 63 kilos and the other one 39 kilos. Wow. Like Larry's.
Speaker F:So the new one is also a German shepherd?
Speaker B:Yes. Yeah, Yeah.
Speaker A:I think she only likes German shepherds — only wants German shepherds.
Speaker B:Yeah. Larry liked Mo. They got on. But when I would leave the house with Mo and Larry was left behind, he howled to break your heart.
Speaker E:Oh, God.
Speaker B:I know. That is heartbreaking, isn't it? Then he discovered that he could sit on the leather chair in our front room and look out the windows, and there was no one around to tell him to get off. In fact, the last picture that we took of him was on that chair, with the caption "It's good to be the king." And she has the photo there, and he's just gorgeous. He's such a beautiful boy sitting in the leather chair. Then she sent a photo of the two doggies snuggled up together on a doggy bed, lying like — like spooning, you know. Mo is the little spoon and Larry's the big spoon. And then there's one of Mo by himself. Mo is a very handsome boy. He's fully black, whereas Larry was kind of more typical German shepherd colouring, you know, that kind of browny-black. And he has beautiful big ears. He's gorgeous. So: I haven't forgotten about you, blind guys and Clodagh. Love the show. It has made me laugh out loud a few times today, which I needed. You're all the best. I'll write again as I get through the backlog. Sad that I missed the Christmas quiz, assuming that you had it. We didn't, Nora, and I'm heartbroken still about it. Yeah. Next year we have to do it again, I think. She says: I had a dismal performance last year, so maybe it was for the best. No, not at all. We'll all come back refreshed next year for it, I think. Anyway, Nora says: if you have any questions about the insanity over here, successor guide dogs, or anything else, let me know. All the best, Nora from Boston. Well, Nora, thank you so much for that. And I'm so, so sorry for leaving it languishing, gathering dust in my inbox since January. I'm really, really sorry. That's like two months, literally.
Speaker A:It's okay.
Speaker F:But we now read it. So we are up and running again.
Speaker C:Good right here.
Speaker B:Yeah, but listen, thanks for the email. I really appreciate it. And best of luck with Mo. And I'm so sorry about Larry.
Speaker E:Yeah.
Speaker A:Okay, so that's it for the show.
Speaker E:No, no, no, no, no.
Speaker F:I got one nice addition also. Chef was also howling, or crying a lot, you know. Oh, you know why?
Speaker B:No.
Speaker F:Rosalie was playing the mondharmonica — the harmonica, or how do you call that?
Speaker C:Oh, he was singing along. That's fantastic.
Speaker E:Yeah.
Speaker F:And then Rosalie was asking, does he like it or not?
Speaker D:You know, hard to tell.
Speaker C:Maybe he was singing along.
Speaker F:Instantly. Really nice.
Speaker A:Okay, folks, that is it. We'll see you in two weeks' time.
Speaker E:Bye.
Our guest this week is Liam Geschwindt from Touchpulse. Liam is here to talk to us about Tiera, which is a new AI-powered navigation app designed specifically for blind and low-vision users. Tiera offers real-time, highly precise audio guidance to help us move independently through complex environments. Built with input from the visually impaired community, it adapts to each user's needs and uses features like voice assistance and smart routing to make everyday travel safer and more accessible. And the Blind Guys were delighted to hear that the app can also find a bargain, cos we're too cheap to pay full price for a coffee!
Clodagh is back with emails and starts with an apology to our good friend Nora Nagle. It appears we didn't read out Nora's email when we got it in January. Sorry Nora, but it's not our fault; speak to the email boss aka our token sightie - we just do the yappin'!
So, stop eating those Easter eggs - it's not Easter yet! Cuddle up on your couch instead, and listen to the number 1 podcast as recently voted by Brazilian sumo wrestlers: Blind Guys Chat - 7 out of 10 sumo wrestlers prefer it to wedgies!
Links for this show:
· Touchpulse: https://www.touchpulse.nl/
· Tiera App: https://www.touchpulse.nl/tiera
· ADA VI User Group event registration page: https://us02web.zoom.us/meeting/register/2d-BX76kT0WFGChBjky0zQ
Support Blind Guys Chat by contributing to their tip jar: https://tips.pinecast.com/jar/blind-guys-chat