#117: Movie stars and A.I. assistants

Transcript
Speaker A:Welcome to Blind Guys Chat, where this guy, Oren O'Neill — hello! — and this guy, Jan Bloom.
Speaker B:Hello.
Speaker A:And Claudia O'Donovan.
Speaker C:Hello.
Speaker A:Talk about the A to Z of life. Well, hello ladies and gentlemen, and you are very, very welcome to episode 117. Now, I have trawled the Internet to find a car with the number 117 — and I could not find one. Well, I did find one.
Speaker C:Oh, no way.
Speaker A:I found a very old 1952 Czechoslovakian dumper truck.
Speaker B:Dumper truck.
Speaker C:I have an answer.
Speaker A:Made by Tonka and it was a child's toy. No, I'm only joking. There is no.
Speaker C:But that's actually not important, because Disney — you know Cars, the animated film Cars?
Speaker A:Oh yeah.
Speaker C:Well, I think it was Cars 3 that had a character called Ralph — he was a rally car, and his number was 117.
Speaker A:Oh really?
Speaker C:Oh, so now there you go. Ralph Carlo, I think is his name.
Speaker B:Yeah, yeah, yeah, yeah, yeah, yeah. We've watched also the Cars a couple of times.
Speaker A:Yeah, I think I've only seen the one with Paul Newman — he was kind of a cranky old car, wasn't he?
Speaker C:You love cranky old things.
Speaker A:I do, I love... Anyway, look, let's move on to more important issues, because on the last show we had a beautiful in-field piece with Mercedes from Holland.
Speaker B:I must say, we gave her a kind of practice run, to be honest, because the Rotterdam Film Festival is an international film festival, so she was well prepared. And the film was presented as well — the film "I Shall See". So that was really an English title. Yeah, it was quite something.
Speaker A:Tell me from the point of being picked up by the limo outside your house.
Speaker B:I will tell you every detail. Well, our limo was parked in front of the house. You know, it was... it was not on a charge.
Speaker A:It was charged.
Speaker B:Yeah, it was well charged. And I opened the door for Chef, and he entered as well. He was nicely groomed, you know — he was really feeling good, nicely black. And Rosalie was wearing a nice dress, and she did her hair a couple of hours before, so she was well prepared. And I also had a nice shirt and trousers. It was a good idea to wear trousers. We were matching, it was really good. Yeah. And then we...
Speaker A:We.
Speaker B:We went by car to Rotterdam, and yeah, it was really nice. And when we entered the cinema, Mercedes came straight up to us. "Oh, there you are!" Oh no, sorry — it was: "Hi Chef, how are you?"
Speaker C:He was the real star, was he?
Speaker B:Yeah, yeah. But he had some competition, because what I didn't know was that there was also a guide dog in the film, with the name Plato. Her official name was Sophie.
Speaker A:A girlfriend for Chef?
Speaker B:Yeah, there was a click between them, I would say. But Plato was not in harness — she was continuously on a leash. So it was not really an official guide dog.
Speaker C:So she was playing a pet dog.
Speaker B:Yeah. So we had a warm welcome. It was very busy — a full house. And oh my God, everyone was nervous, a lot of pictures were taken, et cetera. And Rosalie recognized some stars from TV who were also playing a role in it. So then we were sitting in the cinema, and it was with Earcatch, you know — the audio description — and it was really nicely done. Now, I didn't really know beforehand what the film was about, only what Mercedes had told us: it is about Lotta, who has an accident with fireworks and goes blind. It was really an exceptional situation. My God, I was scared to death when it happened, because there was a problem with a firework pot, and her sister was playing with it, and she wanted to help, and they were not wearing any firework glasses — no protection. Boom. That was the drama effect, of course. Before the accident she was a happy young girl, 17 years old, with a boyfriend, and they went scuba diving a lot — that was a big hobby of theirs. You saw them doing that, in the North Sea and other places, playing with each other. They were really in the bloom of life — they had fun, they had dreams about school and so on. And then this terrible accident happened. You saw the devastating moments with the eye doctors, and in the end no cure was possible, so it was a really hard feeling. In the end she managed to make the decision to go into rehab, and then came all the parts where I was playing a kind of role in the background — being in a cooking lesson, in a braille lesson, and so on. But due to the audio description, you don't pay much attention to the real audio in the cinema. Chantal and Rosalie could really hear me in all the lessons, though, so they were nudging me — you're focusing on the audio description, so you don't pay attention.
Speaker C:You were caught up in the story.
Speaker B:Yeah. So you don't really see me — or us — in the actual lessons, but you hear me in the background, talking in class. And you see us once, passing by in an alley. For a couple of seconds we are in view, but we are not really recognizable. Of course, when you know us, then you can recognize us.
Speaker C:Yeah.
Speaker B:And the main story of the film was the frustrations of the main actress, and that she was escaping the frustrations by dreaming — dreaming that she was meeting up with her boyfriend again to go scuba diving, to do this and do that. And at the rehab center she met two guys. One was really an older guy who also had this dog, and he played music, and then she tried to do some music too. And there was another guy who was smoking a lot — also a little marijuana — and she liked that a lot too, to come into the dream world, you know. So it was also a little bit stigmatizing, in a way, that in the rehab they are all drinking, smoking, et cetera, and they dream a lot — sometimes they escape. But yeah, Chantal was looking at me: hey, did you do that? No, no, sorry. I was really boring back then, maybe. I was really bored in those days.
Speaker A:But was it a well-told story, for what it was? Jan, do you think it was well told?
Speaker B:It was well told, but it was really about surviving, about acceptance, by going into dreams in a way. It was not really focusing so much on the fact that you can also live your life independently — they did not really showcase that in the film. They showed that they went to a party and that she lost herself a little bit in the drinking, that she wanted to go out and got lost on the beach, for example. But she managed to cope. So it was really strong, and it was a powerful presentation by this lady.
Speaker C:Very good. Yeah, that's interesting.
Speaker B:From April 3rd it will officially be in the cinemas in the Netherlands. And Mo, he already promised to go, so — yeah.
Speaker A:Girlfriend Barbie.
Speaker B:Yeah, yeah, I think so too. Sitting in the back row, you know, that's always good. No, but I think for us, you really recognize the feelings and the emotions they go through, and that is very recognizable.
Speaker B:And also Chantal, she recognized a lot. You know, it is sometimes also a confronting situation.
Speaker C:It sounds to me like it'd be very interesting for sighted people, to understand better what people go through — especially such a dramatic way to lose your sight, so suddenly, for it to be gone. That must be so... I mean, I think it can be.
Speaker B:Really a good hit also for. For people who. To warn about using firework, you know.
Speaker A:Yeah.
Speaker B:That is also an extremely good example. It shows that you need to be careful.
Speaker A:All right, shall we hear from our guest?
Speaker B:Yep.
Speaker C:Yes.
Speaker B:We have already discussed many times the nice technology of AI for the blind — and also the freebies that come out of this.
Speaker A:Sorry, sorry. Are we getting freebies? Are we getting freebies? Yeah, we're getting T shirts and mugs from Envision if we do this.
Speaker B:Oh, no, no. We like freebies, but you're messing around in my intro.
Speaker A:Oh, sorry, sorry, sorry, sorry, sorry.
Speaker B:We talk a lot about AI, especially for the blind, with apps like Seeing AI and Be My Eyes — but also, of course, Envision, the app. And we had Jesse Weinhold already talking about it, and Mohammed last year. But here in this show we have the CEO and co-founder of Envision, Karthik Mahadevan. And you are highly welcome, Karthik.
Speaker A:You're very welcome.
Speaker B:Yeah, you're highly welcome.
Speaker D:Thanks a lot. Excited to be on here.
Speaker B:Yeah, yeah.
Speaker A:Now do we get our freebies now?
Speaker B:Of course. You know what, Karthik — Oren was without power since last Sunday, due to the storm in Ireland. So I think he's suffering a little bit. He's glad to have some power now.
Speaker A:Yeah, I'm delighted to be talking to people, because we have power back in our house now.
Speaker B:Yeah, yeah, yeah.
Speaker A:So I'm very excited.
Speaker B:Yeah, it's amazing.
Speaker A:What part of the world are you in, Karthik?
Speaker D:I'm in the Netherlands. I'm based in Rotterdam.
Speaker A:Oh, I'm sorry.
Speaker B:Ah, that's better than the. Yeah, yeah, yeah, yeah. What does the weather look like today in Rotterdam?
Speaker D:Today? It's been pretty cold, and crisp, I would say. It's been about 3, 4 degrees.
Speaker B:Your accent suggests that you also have roots somewhere else. So what is the temperature now at your roots, Karthik?
Speaker D:Well, my family is back in India.
Speaker B:Yeah.
Speaker D:In Bangalore. And even though it's supposed to be winter there, it's still pretty warm. It's a good 25 degrees.
Speaker B:Oh, I like that. Oh my God. Please send something over.
Speaker A:Just as a matter of interest, is there an actual winter in Bangalore? Would it get really cold?
Speaker D:I think it goes down to like 15 or something.
Speaker A:Oh, I could live there. Yeah. Give us the address of your family there and I'll get the bags packed, and Larry, my dog, can go over, and Claudia as well.
Speaker B:Can you explain to the listeners about your product, and a little bit about the company — how you founded it, and how long you have been active?
Speaker D:Envision — we started, let's see, it's been about seven and a half years now since we began. It started out as a thesis of mine. I was a student here at the university in Delft.
Speaker B:Okay. At the TU? The TU?
Speaker D:The TU Delft, yeah.
Speaker B:Ah, okay. The university. Yeah.
Speaker D:And I was studying industrial design. So towards the end of my masters I had to pick a topic to do a thesis on. I happened to be in India for the winter holidays, and I was invited by a school for the blind to come and give a talk to the students about the job opportunities they could have in the future. And I was talking to the students there, and I was just explaining to them: hey, a designer is just somebody who solves a problem. So if you can build a solution to a problem that you face, all of you could become a designer tomorrow. And then towards the end of my talk, I asked all of them a question. I said, hey, if all of you were to become a designer tomorrow, what would be the problems that you would like to solve? And almost all the kids in the room that day said: I want to be more independent. I want to be able to go out with friends on my own. I want to be able to pick up and read a book by myself. This independence was such a strong emotion that all these kids wanted to experience. And for some reason, that whole thing really stuck with me, because I think deep down I felt: hey, I'm supposed to be a designer, somebody who solves a problem — why am I not doing something about it? I went back to the university, I spoke to a professor of mine, and I said, hey, I want to do this as a thesis. In the beginning it was purely a research endeavor for me. I was just going around and talking to as many blind and low vision users as I could in the Netherlands, and I was simply trying to understand what independence is for them — what do they mean when they say the word independence? And what I found out was that for a lot of them, independence almost always meant access to information. And because so much of the information around us happens to be in a visual form, the inability to access it is the thing that is causing a dependency in their life, right?
So when a blind user is walking into a train station, the information is up there. But because we decided that this information would be visual information, that's the reason why it becomes inaccessible.
Speaker B:Okay? Yeah.
Speaker D:But at the same time, I also understood that it's a bit impractical to expect all the information and all the infrastructure around us to change overnight. You cannot just go and put a Braille sticker on everything around you. That's when I started to take a look at how the technologies of today, like artificial intelligence and image recognition, can be used to understand this information without having to change the infrastructure itself. So I wanted to explore the possibility of building a tool that can actually help make inaccessible information accessible.
Speaker B:What did you do to create it, Karthik?
Speaker D:We started with a simple smartphone application. That's how we started. We built a very, very simple application to begin with. It was the AI of 2017, which is very archaic by the standards of the AI that we have today. But back then, the AI could say that, hey, this is a chair, this is a table, this is a cup. So that's the kind of AI that we built. But then we went out and started showing this app to a lot of users, asking for feedback, and we kept on iteratively improving the app. And by the end of my thesis, when I started showing it to people, everybody was like, hey, this app is amazing. I want to use it every day, I want this in my hands. So it was at that point we started to think: okay, this app that we have built — if we want it to be out in the hands of all the blind and low vision users in the world, we need to turn this thesis into a startup. We really wanted to make sure that it is sustainable, it is scalable. And that's how the journey of Envision as a startup began.
Speaker B:Yeah. So you started with two people, you and your co-founder, in a way. And now, talking 2025, how many people do you have?
Speaker D:We have about 26 people. Yeah. As the app began to blow up and a lot more people started to use it, one of the pieces of feedback we got was: hey, you know, the app is cool, but if I could do everything that the app can do without holding a phone in my hand, that would be amazing. Right? Like, if I'm out and about, I have a cane or a dog in my hand already — I don't want to be holding up my phone. So we started to take a look at whether it's possible for us to put Envision's software into some form of smart glasses or a wearable. We didn't want to make the hardware ourselves, because making your own hardware is a capital-intensive process — you need a lot of investment, and it's not really our expertise either. But then something amazing happened: we won the Google Play award for the best accessibility app in 2019.
Speaker B:2019, okay.
Speaker D:Yeah. And that was also when Google was about to launch their new smart glasses, the Google Glass Enterprise Edition 2. So when we met with their team in California, we said, hey, we want to build this application — can we do it on your glasses? Which was an idea that they really liked. So we struck a partnership with them where we take their hardware, we flash our software onto it, and we offer that as a product called Envision Glasses. Those are the glasses we've been selling for the last four years now, and they are what actually offers people a hands-free experience of accessing different kinds of visual information around them.
Speaker B:And can they then also be used in combination with sunglasses, or with prescription lenses? Is that possible?
Speaker D:Yeah, it is possible. These Google Glass come with different frames, so you can add your prescription lenses or sunglasses to them if you need to.
Speaker B:How do you use them?
Speaker D:Basically, there is a touchpad on the side of the glasses, and that is the primary way to operate them. You can do gestures on the touchpad, like swiping forward or backwards, doing a double tap, doing a single tap. These are gestures that are very similar to the gestures you are used to with VoiceOver or TalkBack on your phone, and that's the primary way to navigate through all the features on the glasses. But they also come with voice commands, so you can also just ask them to open a feature or do something for you.
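[Editor's sketch] The touchpad navigation Karthik describes can be sketched as a small state machine: swipes move through a feature list, a single tap announces the current feature, a double tap opens it. The feature names and gesture strings below are illustrative assumptions, not Envision's actual firmware.

```python
# Illustrative sketch of touchpad navigation on a pair of smart glasses.
# Swipe forward/back cycles a feature menu; single tap announces the
# current feature; double tap activates it (mirroring VoiceOver/TalkBack
# conventions). Feature names here are examples only.

FEATURES = ["Instant Text", "Scan Text", "Describe Scene", "Call"]

class GlassesMenu:
    def __init__(self):
        self.index = 0  # start at the first feature

    def handle(self, gesture: str) -> str:
        if gesture == "swipe_forward":
            self.index = (self.index + 1) % len(FEATURES)
        elif gesture == "swipe_back":
            self.index = (self.index - 1) % len(FEATURES)
        elif gesture == "double_tap":
            return f"Opening {FEATURES[self.index]}"
        # single tap (or after a swipe): announce the current feature
        return FEATURES[self.index]

menu = GlassesMenu()
print(menu.handle("swipe_forward"))  # Scan Text
print(menu.handle("double_tap"))     # Opening Scan Text
```

The wrap-around indexing means the menu behaves like a ring, which is how screen-reader-style rotors usually feel to the user.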
Speaker A:And the glasses — are they able to control apps within the phone as well, apps that are not Envision?
Speaker D:So the glasses are a standalone piece of hardware. There is a companion app — you can pair the Envision app to the glasses — but they do operate in a standalone way. So you don't really need to have the app around to use the glasses on a daily basis.
Speaker A:But I suppose what I'm getting at is: can you make, let's say, a WhatsApp call or a standard call? Or can you get messages read out through the speakers on the frame arms? Or no?
Speaker D:No, no. So they don't act as like a Bluetooth headset.
Speaker B:Okay, okay. What kind of features do you have, then, based on AI?
Speaker D:So currently on the glasses there are discrete features that are each specialized to do one thing. There's a feature called instant text that helps you read short pieces of text instantly. It works with the video feed: you open the glasses, and any piece of text you put in front of them, it just speaks out. This all happens offline, on device, so it's incredibly instantaneous.
Speaker B:So you don't need to take a picture? You don't need to take a picture. So it is like Seeing AI.
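[Editor's sketch] The instant-text behaviour — no picture taken, continuous recognition on the video feed — can be sketched as a loop over frames that speaks a result only when the recognized text changes. The frame source and the `recognise` function are stand-ins; on the real device this step is an on-device OCR model.

```python
# Illustrative sketch of continuous "instant text" reading: run OCR on
# every camera frame and announce new text, suppressing repeats while
# the same sign stays in view. The "frames" here are plain strings
# standing in for camera images.

def recognise(frame: str) -> str:
    # Stand-in for the on-device OCR step; here the frame IS the text.
    return frame.strip()

def instant_text(frames) -> list[str]:
    spoken, last = [], None
    for frame in frames:
        text = recognise(frame)
        if text and text != last:  # only announce when the text changes
            spoken.append(text)
            last = text
    return spoken

# Simulated feed: one sign held in view for a few frames, then another.
feed = ["Exit", "Exit", "", "Gate 4", "Gate 4"]
print(instant_text(feed))  # ['Exit', 'Gate 4']
```

The dedup-on-change rule is what makes a video-feed reader feel "instant" without chattering the same text over and over.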
Speaker D:Then there is scan text, which is meant for reading more complex documents and letters. As soon as you click on scan text, it offers you instructions on how to hold the document, and once the entirety of the document is in the frame, it automatically takes a picture and speaks out the content to you. But what you can also do, after a document has been captured and you have the text output, is simply ask a question of the document. So you can scan a document and ask: how much is the amount I need to pay, if it's an invoice you're scanning. Or if you're scanning a menu, you can be like, hey, can you tell me what the appetizers are? And the AI can just go and look for that specific information. Then there's describe scene, with which you take an image and it offers you a detailed description of the information around you. And then you can also ask a follow-up question. You can be like, hey, is there a trash can in this room? Or, do you see an exit? And it can go ahead and answer those things for you as well.
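[Editor's sketch] The scan-text flow — capture, OCR, then answer questions against the extracted text — can be sketched as below. The hard-coded invoice and the naive keyword matching are stand-ins: Envision hands the extracted text plus the question to a language model rather than matching lines.

```python
# Illustrative sketch of "scan a document, then ask a question of it".
# ocr() stands in for the capture + OCR step; ask_document() answers by
# crude keyword matching against the extracted lines.

def ocr(image) -> str:
    # Stand-in for OCR on the captured photo of an invoice.
    return "Invoice\nCoffee  3.50\nSandwich  6.00\nTotal due: 9.50"

def ask_document(image, question: str) -> str:
    text = ocr(image)
    # Keep only meaningful question words (longer than 3 letters).
    words = [w.strip("?.,").lower() for w in question.split()]
    for line in text.splitlines():
        if any(w in line.lower() for w in words if len(w) > 3):
            return line
    return "Not found in document"

print(ask_document(None, "How much is the total I need to pay?"))
# Total due: 9.50
```

The key design point the transcript describes is that the question is answered against the OCR output, not against the raw image, which keeps the answer grounded in the document's actual text.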
Speaker B:Does Envision use only the picture taken for all this information? So when you are not facing a door, it will also tell you there is no door. Or can you move your head along and then — hey, there is a door? Can you do that as well?
Speaker D:So it doesn't work with the video feed as of yet. You need to take discrete images, and it gives you feedback based on those discrete images. And then there is a whole area of AI features: to detect cash, to detect colors, to detect people — or you can teach it the faces of your people, you know, and it can detect them. So you do get this kind of Swiss-knife experience, where depending on the situation you can pick and choose which features you want to use. But instant text, scan text and describe scene are the three most used AI features that we have on the glasses. We also have two other features that are used a lot, which are video calling features. There's a companion-calling feature, where with these glasses you can make a video call to a friend or a family member. If you're ever in a situation where the AI can't help you, you can always just make a video call, and they can answer it on their phone like a normal video call — but then they get to see everything from your perspective. So you can be out and about and still have your hands free to hold things or operate something.
Speaker B:And what system do you use then? Karthik?
Speaker D:It's our own system. When you want to make a call to a friend or family member, we use our own system. We also have the option to call an Aira agent — so if you are somebody who is familiar with Aira and you want to make a call to an Aira agent, you can do that from the Envision Glasses as well. But now we are working on this whole new thing called Ally.
Speaker B:That's the new thing. Tell us more.
Speaker D:Ally is basically a conversational, personal AI assistant. And it sort of reimagines the way we have been building AI apps for accessibility for the longest time. The way Ally works is that you can simply start with a question. Instead of having to figure out which feature to use — is it instant text, or scan text, or describe scene? — you can simply ask your question, and Ally will figure out which tool it needs to use in order to give you the answer. So you can simply hold up a menu and be like: how much is the cappuccino? Ally will first understand the intent behind your question, it will open up the camera and take a picture, it will crop the document, it will do an OCR on it, and then within the text output it will look for the price of the cappuccino and speak that out to you. All of this happens in under two seconds. So you as a user will actually have the answer to your question in under two seconds, as opposed to the five to ten seconds it took the previous way. Right? You can also ask it: do I need to take my umbrella with me today? Ally understands that, hey, he's asking me about taking an umbrella, which means I need to do a weather check. So it'll open up a weather API, look up the weather in your local area, and come back to you with an answer on whether you need an umbrella or not. So that's the kind of power it has. Basically, think of Ally as a conversational agent which has access to all of these different tools — the camera, OCR — which makes it very easy to use. The only interface you have is a conversation interface; you don't have to jump through all the buttons and options and things like that.
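[Editor's sketch] The routing Karthik describes — classify the question's intent, then dispatch to the right tool — can be sketched as below. The keyword rules and stub tools are illustrative assumptions; Ally uses a fine-tuned language model for the intent step and real camera/OCR/weather tools.

```python
# Illustrative sketch of intent-based tool routing in a conversational
# assistant: classify the question, then dispatch to the one tool
# (OCR, scene description, weather) that can answer it.

def classify_intent(question: str) -> str:
    """Crude keyword matching standing in for a learned intent model."""
    q = question.lower()
    if any(w in q for w in ("how much", "price", "menu", "read")):
        return "read_text"
    if any(w in q for w in ("umbrella", "rain", "weather")):
        return "weather"
    return "describe_scene"

def route(question: str, tools: dict) -> str:
    """Pick the tool for the question's intent and run it."""
    return tools[classify_intent(question)](question)

# Stub tools standing in for camera + OCR, scene description, a weather API.
tools = {
    "read_text": lambda q: "OCR result: cappuccino 3.50",
    "weather": lambda q: "Rain expected: take an umbrella",
    "describe_scene": lambda q: "A room with a table and two chairs",
}

print(route("How much is the cappuccino?", tools))
print(route("Do I need my umbrella today?", tools))
```

The user-facing gain is exactly what the transcript claims: one conversational entry point, with feature selection pushed behind the intent classifier instead of onto the user.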
Speaker A:And so where is it storing that image, if it's taking a picture?
Speaker D:It's just processing it on the cloud and then discarding the image. So we never store any images on our servers.
Speaker A:How is it that you can take these pictures and ask the app to describe what's in an image, particularly if it's in a public setting? How are you getting over the GDPR and the Digital Services Act within Europe?
Speaker D:So basically what we're doing is GDPR compliant. Taking an image and explaining to you what's in that image is not an invasion of privacy — it's basically something you can do with any of the apps on your phone today. You can simply take an image and have it describe things to you. So it does fully comply with the GDPR.
Speaker B:And what AI are you using? Are you using Gemini or ChatGPT, or what kind of — or is it your own creation?
Speaker D:It's a combination of things. We don't rely solely on one model. For every aspect we use a different model, and these models are very interchangeable: depending on what's the best model to use at a particular point in time, we replace it with that. So the one that's detecting your intention, that is a fine-tuned version of Llama that we have trained — that's what we use. But if there needs to be a description of an image, then we use ChatGPT, because that's the one that is best at the moment. And for other questions we use other AI models.
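[Editor's sketch] The "interchangeable models per task" idea can be sketched as a task-to-backend registry, so swapping a model is a one-line change rather than a rewrite. The backend names below are placeholders, not Envision's actual model identifiers.

```python
# Illustrative sketch of per-task model routing with swappable backends.
# Each task name maps to whichever backend is currently best; callers
# only name the task, never the model.

MODEL_REGISTRY = {
    "intent": "llama-finetuned",   # fine-tuned Llama for intent detection
    "describe_image": "gpt-4",     # strongest image describer today
    "general_qa": "other-model",   # everything else
}

def call_model(task: str, payload: str) -> str:
    """Look up the current backend for a task and 'call' it (stubbed)."""
    model = MODEL_REGISTRY[task]
    return f"[{model}] response to: {payload}"

# Swapping the image-description backend is a registry update only —
# no call sites change:
MODEL_REGISTRY["describe_image"] = "newer-model"

print(call_model("describe_image", "photo of a kitchen"))
```

This indirection is what makes "we replace it with whatever is best at the moment" cheap in practice.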
Speaker A:So how is it learning? For example, I have used the Ally app, and I would say the responses I got were probably about 60% right. So what happens when the description of the image you're asking the app to describe isn't quite right? What happens then? How does it learn?
Speaker D:So we keep improving this on a few layers, right? The base model itself needs to keep on improving. These are models from third parties, and they keep on improving — for example, GPT-4 is a big improvement on GPT-3.5, which is a big improvement on GPT-3. So there are these foundational model improvements that happen. What we do on our end is improve the pre-processing steps before we use the models themselves. One example of that is exactly this: doing an OCR on an image before simply asking a question of the image. Then the AI has the context of exactly what the text is, and it can ground its answers in that. But the other aspect, the personal aspect, is also a big differentiating factor of Ally, where it is a personal assistant mainly in two ways. On the one hand, you can volunteer information about yourself to Ally — stuff you like, stuff you don't like, what kind of allergies you have, what kind of work you do — and Ally can use all of that information as context to offer you much more insightful answers than it otherwise would. So, for example, you can also ask Ally for a recommendation, and because it understands your preferences, it can actually go ahead and make meaningful recommendations to you.
Speaker B:Okay.
Speaker D:But it's also a personal assistant in the sense that it has a personality. So you get to define how you want Ally to talk to you. Do you want it to be professional and straight to the point, or do you want it to have a sense of humor? You can define the voice it has, the tone it has, and all of that can be personalized exactly to the way you like it. That's where it becomes a lot more of a personal assistant. And this is another way the AI directly learns and improves from you as well: over time, based on the interactions you have with Ally, there will be like a wiki about you that is built and stored with Ally, which will personalize its answers to you to a higher and higher degree on the basis of the interactions you have had with it in the past.
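[Editor's sketch] The "wiki about you" can be sketched as a per-account fact store whose contents are prepended as context before each question reaches the model. The class name, storage shape and prompt format are assumptions for illustration, not Ally's actual implementation.

```python
# Illustrative sketch of a personal-memory layer: facts the user
# volunteers (or that accumulate from past interactions) are stored per
# account and combined with each new question as model context.

class AssistantMemory:
    def __init__(self):
        self.facts: list[str] = []

    def remember(self, fact: str) -> None:
        """Add one fact to the user's profile."""
        self.facts.append(fact)

    def build_context(self, question: str) -> str:
        """Combine stored facts with the new question, as would be
        sent to the underlying language model."""
        profile = "; ".join(self.facts) or "no stored facts"
        return f"User profile: {profile}\nQuestion: {question}"

memory = AssistantMemory()
memory.remember("allergic to peanuts")
memory.remember("prefers short, to-the-point answers")

print(memory.build_context("Recommend a snack from this menu"))
```

Keeping the profile server-side per account (as the transcript says, in the cloud rather than on the phone) is what lets the same memory follow the user across the app, the web, and the glasses.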
Speaker A:So is that information being stored on the phone then, within the app?
Speaker D:No, it will be stored in the cloud, in your account. So it's basically information about you.
Speaker B:And this is all in the app? So you can use it without the glasses — you can also use it with the camera of your phone, Karthik?
Speaker D:It's ubiquitous — and what we mean by that is it's available on all the platforms you want to use it on. It's available on your phone, it's available on the web, it's available on the glasses. So basically Ally will be available on all of these platforms, and we are going to be working with a lot more hardware partners as well in the future. This is a very exciting chapter that we've been looking forward to: a future where there will be a lot of good, commercially available smart glasses that people can just go into a shop and purchase for themselves. And we see that happening this year, and Envision's endeavor is to be on as many of them as possible. Of course, we will also have some sort of combined solution that we'll offer as well. But yeah, I think we will be a software-focused company going forward.
Speaker B:So, for example, when you have the Meta glasses, can you also make use of the Envision Ally app? Is it possible to connect the two together?
Speaker A:Okay, but just not at the moment.
Speaker B:Oh, not at the moment. Oh, okay.
Speaker A:So you've obviously got to talk to Meta to see if that can become a reality.
Speaker D:Ah, yes, okay.
Speaker A:Right. Okay.
Speaker B:Because you are also partnering with Meta, for example, and Google — and Apple as well, for example?
Speaker D:Like the short answer is yes, we're talking to everyone.
Speaker B:Yeah, you talk to everyone. Ah, a good decision.
Speaker A:That's good, good.
Speaker B:That's always good.
Speaker D:But yeah, it's going to be an exciting future, where we believe two things will happen. One is that the reasoning capability of these AI systems will continue to improve, so they will be able to handle more and more complex kinds of tasks and questions. That reasoning capability is definitely something we see improving exponentially. And the second thing that will improve is the number of tools this AI assistant has access to. I think that will also increase, and it can get access to other apps, for example — it can book an Uber for you, it can order food for you.
Speaker A:Are you going to be introducing, do you think, a subscription model or are you planning to keep it for as long as possible as a free app.
Speaker D:We are sort of exploring that. We're playing around with ideas on how to build a business model around Ally. And right now there are two ideas that we are currently exploring. One is a freemium model where we can offer a set of functions of Ally for free to everybody. But then there are a few features that we deem are more like premium features, and that's what there will be a subscription option for. The other one is more a business-to-business kind of a partnership approach, where we are talking to a lot of enterprises and businesses who are interested in making their services and workplaces more accessible. So we are talking with them as well to see if they are the ones who can pay for actually having these services used by their users and customers.
Speaker B:Where can we find you on the Internet, Karthik?
Speaker D:Yeah, so I think the easiest way to find us is on our website. So for anything about Envision you can go to letsenvision.com. That's L-E-T-S-E-N-V-I-S-I-O-N dot com. And that's where you can get information about the Envision app and the glasses. If you want information about Ally, you should go to ally.me. That's A-L-L-Y dot me.
Speaker B:Okay.
Speaker D:And there you can sign up for our beta as well.
Speaker B:Okay.
Speaker A:Well done. Thank you for coming on the show. We really look forward to using the app and all its features and we wish you the very best with the future products.
Speaker B:We wish you a lot of success.
Speaker D:Thanks a lot and yeah, looking forward to this exciting year.
Speaker A:Thank you very much again, Karthik. And it's brilliant. We shall keep watching the app and look out for the updates.
Speaker B:Yeah, we wish him all the best and you know, it is all for the benefit of our community and yeah, we will see how it will. Yeah, will benefit us. It's really a good development. And they are also, yeah, partnering up with Meta and with Google and probably, he said also Apple.
Speaker A:Well, that's all we have time for, folks. Thank you very much for listening to Blind Guys Chat. And don't forget the email: [email protected]. Send us your voice notes or your text notes and we will read them out. So bye for now. See you in two weeks.
Speaker B:Okay, bye.
Hello our lovely movie stars, and welcome to another episode of “Jan, the movie star”. Yes, we kinda sorta pick up where we left Jan last time, standing at the end of his massive driveway, waiting on his limo to take him to the premiere of the film, ‘I Will See’.
...well, he made it, and brought all the family, including Sjeff, who was dressed up to the nines. Now the only question is, will there be a sequel, and will Jan be brought back for a reprise of his role? (That’s two questions!)
Our guest this week is Karthik Mahadevan from Envision. Karthik is here to tell us all about Envision smart glasses. He describes all the functions and features such as reading short text or longer documents. These glasses can even summarise menus and they can tell you what bank note you are holding in your hand. They can even describe colours. Karthik also talks about the upcoming release of the 'Ally' app which Envision are calling a conversational assistant. It even has attitude! You can find out more about Envision at letsenvision.com and if you want to know more about Ally, or sign up for the beta program go to ally.me.
So, forget about that idiot in America for a while, stick some popcorn in the microwave, and settle down to the most listened to podcast this side of the Super Bowl: Blind Guys Chat!
11 out of 15 American football fans prefer it to silly shoulder pads and ridiculously long breaks between play.
Links for this episode:
Film ‘I Will See’: https://iffr.com/nl/iffr/2025/films/ik-zal-zien
Envision glasses and the Ally app: https://www.letsenvision.com/
The Ally app beta programme: https://www.ally.me/
Support Blind Guys Chat by contributing to their tip jar: https://tips.pinecast.com/jar/blind-guys-chat