TiHS Episode 35: Meenakshi Das – on building human-centric AI

Welcome to episode 35 of the Technology in Human Services podcast. In this episode I speak with Meenakshi (Meena) Das.


Meena is working on a number of interesting areas around nonprofit technology, but what I wanted to focus on in this conversation is her work around Human-Centric AI. As you’ll hear her discuss, she’s written that “Human-Centric AI is a new learning area (seriously, you can Google it and find how little comes up), and our industry is undoubtedly still getting used to the idea.”

I agree that AI is still very much a new learning area for many of us in the nonprofit/charity world. So I asked her to give us a primer or overview of AI and then describe what Human-Centric AI is, why it’s important we understand it, and how we can impact the direction AI takes in our work and lives.

Some additional questions we discussed:

  • When AI comes up in conversation with front line workers, there is anxiety and fear of being replaced by algorithms. What’s your sense of where AI can fit as a complement to human service nonprofit work?
  • You work a lot at the intersection of data and justice, equity, diversity, and inclusion. We’ve heard a lot about machine learning and AI that simply reinforces and replicates social bias, racism, sexism, and other oppressions. What can the nonprofit sector do, or how should we be advocating for unbiased AI?
  • What’s happening outside the nonprofit sector related to Human-centric AI that is interesting and we can learn from?
  • What does tech leadership in the sector need to look like when it comes to AI?
  • What’s the best way for nonprofits to build practical skills like digital literacy and competency when it comes to AI within their organization, from front line to management? How should they have this conversation about Human-centric AI in their organizations?

Some useful resources:

Machine-Generated Transcript

What follows is an AI-generated transcript of our conversation using Otter.ai, lightly edited for readability. It may still contain errors and odd sentence breaks and is not a substitute for listening to the audio.

Note: In this recording, there were some small audio glitches that don’t impact understanding the audio when listening, but have an impact on the transcript. I’m writing that here in case you see some strange phrasing or words below that don’t make sense. 🙂

Marco Campana 0:00
Welcome to Episode 35 of the Technology in Human Services podcast. In this episode, I speak with Meena Das. Meena is working on a number of interesting areas around nonprofit technology, but what I wanted to focus on in this conversation is her work around human-centric AI. As you’ll hear her discuss, she’s written that human-centric AI is a new learning area and our industry is undoubtedly still getting used to the idea. I agree that AI is still very much a new learning area for many of us in the nonprofit and charity world. So I asked her to give us a primer, or overview, of AI and then describe what human-centric AI is, why it’s important we understand it, and how we can impact the direction AI takes in our working lives. I think you’ll find this an interesting conversation.

Marco Campana 0:41
Welcome to the Technology in Human Services podcast. Thank you so much for joining me. I want to start by letting you introduce yourself. Tell our listeners a little bit about your background and what you’re working on.

Meenakshi (Meena) Das 0:53
Thank you so much for having me, Marco. My pronouns are she, her, and hers. I have a consulting practice called Namaste Data, and a training school called Data Is For Everyone. I’m most active on LinkedIn, so if anyone wants to find me, they can find me via LinkedIn — my website is a different story, we can talk about that later. My work focuses on bringing social equity and justice through data. It’s, I would say, a pretty strong statement — bringing social equity through data — and most days I am exploring what that means myself. So on the consulting side, I work with nonprofits and social impact agencies, helping them on the fundraising side of things: how can you do your prospect research better, your predictive analytics better? The other side of my consulting is workshops: how can you bring equity into your data collection, how can you move towards human-centric AI? And between those two, I love to write. I have a LinkedIn-based newsletter called Data Uncollected, where I explore topics I’m still working through. So that’s fun. I am going to put in a plug for my upcoming workshop, Towards Human-Centric AI. Marco, you will have the links for the workshop, and you can share them when you release this episode.

Absolutely. And, you know, full disclosure, I’m part of your October cohort for that workshop, and that was part of the impetus for this conversation, actually. Because, as you’ve just outlined, you’re working in a lot of different interesting areas when it comes to nonprofit technology and nonprofit analytics and data. And we’re focusing this conversation today on your human-centric AI

Marco Campana 2:45
training as well as your approach. So I want to just jump in and quote you back to you. You’ve written that human-centric AI is a new learning area — you can Google it and find how little comes up — and the nonprofit industry is really still getting used to this idea. I would absolutely agree. I’m finding that this is one of those technologies that is exploding in some ways, and yet we’re not talking a lot about it, at least in the sector I work in, the immigrant and refugee-serving sector. It’s very much a new learning area. And so, given that this conversation — even the idea itself — may be quite new to people, can you give us initially a primer, an overview, of what AI is? And then how you define what human-centric AI is, and why it’s important that we understand it?

Meenakshi (Meena) Das 3:33
That’s a good question. I’ll start with the why, and once we understand that, the what. And I always bring examples from the kitchen — I’m a big foodie — so I’m going to use the kitchen for this example. Human-centric AI is important because — consider if you want to make a new chicken dish, right? Imagine you have to cook something very new, a chicken recipe that you came across in a magazine or on a TV show, and you want to make it, but you don’t know who in your family might be allergic to some new specific ingredient the cooking show told you to use along with the chicken. So you’d like to go ahead and recreate the recipe, but you need to learn more about it. That show gave you some tips and tricks, and you start to work on it. You start to talk about it with the people around you: okay, I need to make this — how can I do it? Have you made it? Can you help me with it? Imagine you’re asking that question of everybody around you, trying to figure out that dish, but all that comes back is: okay, use a steel spoon, use nonstick cookware, use the two flavors of thyme and rosemary — something like that. Everybody’s giving you the tools you can use to make that dish, everybody’s giving you the components that can go into the dish, but nobody tells you what exactly the dish is. It’s something like that with AI. If you Google it — and I literally say Google it, because we really do rely a lot on Google — everything that comes up about AI is the tools you can use to do things with AI, or the certifications you can do with AI. But nobody talks about what AI in itself is, and that’s what is missing. We need to learn about that ‘what’ component before we get to the how — the enablers of that AI, like Azure, or the other certifications that exist out there.
Like AWS — they are great, but they are telling you the how; none of them are telling you the what. So when I realized this, it started to bother me a lot. Because there is one group of people who really love the word data, really love the word AI, and want to jump on the train fast. And then there’s another group which never feels confident enough to leverage something that already exists, or that is coming and is inevitable. Seeing the gap between the two, and seeing that there is nothing out there on Google that talks about the what, I really wanted to create this space where we can dig more into what human-centered AI looks like. And to give you a minute’s overview of this human-centered AI work: I’m not going to be in this space talking about, oh, what is the difference between a regression output and an SVM outcome? I don’t want to be talking about what Azure is, or a DonorSearch product, or those big names that exist — they are great, perfectly good products. I’m trying to create a space that is product- and algorithm-agnostic, where we all just talk about what it means to build better AI. Like the question you posed a couple of days ago: when you do content creation using AI tools, is it really helpful for the immigrant community, for connecting with the immigrant community? How can it be helpful? What are the implications of using that technology? So I’m moving a hundred steps back from where we are right now, to the fundamentals and basics, and starting with: okay, why do we need AI? Just because we can, should we? And now that we are here, what are the implications from here?

Marco Campana 7:48
I think that’s really interesting, because that’s a really important distinction in how we hear about AI — that’s exactly what I did with those tools: oh, here are some interesting tools, what can I do with them? Not necessarily, what are they? And as I was using them, I was thinking, well, here are some use cases, here are some possibilities where this could come in handy. But we only hear about what’s possible, and we hear a lot about AI as a replacement for something, a way to streamline. So for example, one of the tools was creating an AI avatar that basically speaks your lines and does the video production — you’re no longer doing video production, you’re just taking some content, putting it in, and this avatar will speak those lines for you. And when AI does come up in conversation, there’s a lot of anxiety and a lot of fear about being replaced by the algorithms, replaced by the robots, if you will. So I wonder, if we understand the what, is there a sense of where AI could perhaps complement human service work, instead of feeling like it might replace us in some way?

Meenakshi (Meena) Das 9:07
I definitely agree there — we have this view that AI is kind of replacing us. So I took a couple of polls as I was designing this workshop that’s coming up, Towards Human-Centric AI, on LinkedIn and other social media, and something interesting came up. A lot of people responded to the polls with options like: I don’t want to learn about AI. I’m scared of it. I don’t want to use AI in my professional work. A lot of people chose those options, and I appreciate that people feel confident enough to be honest and say, I don’t want to learn about AI at all. But here’s the thing: AI is coming into our lives, right? It’s going to be more and more around us — in efficiency algorithms, in the applications we use, in our day-to-day lives. We are already using AI in our daily lives; maybe we just haven’t noticed enough that we’re already using it. So one of the things I have found — and I think I touched on this when we set up initially — is that I don’t want to assume an adversarial relationship with AI. I don’t want to be threatened by it; I don’t want to worship it. There has to be something symbiotic. And the first thing about a symbiotic relationship, in the language I use: I don’t want to say it’s just a tool that I teach what it needs to do. But I also don’t want to use words like, AI is magic, it knows everything, and it can solve any problem. It’s as good as a good colleague sitting next to me, with whom I can create something good on a project — I rely on their knowledge while they rely on mine, and there is power in that. So if I use Grammarly, for example, for every piece of writing, without ever testing what it has produced in its corrections of my grammar? That’s over-reliance.
I don’t want that. What I do usually is work with these copywriting AI tools and combine them with Grammarly, and when I sit down to write, we get to see: okay, how is the text flowing? It’s almost as if I have two more people on my team, called Grammarly and this other product, and the three of us together are creating something fantastic for an article I’m writing for some journal. I want that kind of symbiotic relationship, and that only happens when we acknowledge where AI is in all of this. So in Gmail, for example, suggested sentences come up as you type — if you write “yes” at the end of your email, it might automatically offer you a complete sentence, like “Sent from my mobile, excuse any typos,” that kind of thing. But I want to be able to know that that is happening, and to know that I can change that sentence. That kind of power — that is where the human-centricity lives

Marco Campana 12:29
and being able to change something. I mean, I hear a lot of the criticisms: we don’t know what’s in the AI black box, which is what you’re talking about with not knowing the what. And you mentioned power as well in your comments. In your introduction, you talked a lot about how you work at the intersection of data and justice, equity, diversity, and inclusion. We don’t know a lot about the what, but we’ve also seen a lot of examples where machine learning and AI have reinforced, replicated, and reintroduced social bias, racism, sexism, and other kinds of oppression. So I think there’s also a trepidation about the people creating the AI. Again, we sort of separate ourselves from having any control or influence over that: someone over here is creating something that we’re going to be expected to use, but we’re not sure we’re represented in their creation process. That’s a real concern for many things in society, but certainly in technology we’ve seen it time and time again — not just in AI, but in social media, portals, and things like that — that it reinforces rather than designs out those oppressions. So is there a role for us? What can we as a sector, as advocacy organizations — not necessarily as individuals, because that’s too daunting — what should we be advocating, so that we can build unbiased and human-centric AI amongst these technologists who are creating these tools, allegedly on our behalf, right?

Meenakshi (Meena) Das 14:05
That’s a good question, I would say. I’ll start with a kind of thought experiment that I do in my workshops. I know the listeners can’t see this, so I’m going to describe what I’m doing: I’m holding a pink post-it note in front of you, Marco, on the screen here. Imagine you have never seen or worked with a post-it note. This is something new, and we call it an “agay” — a new word, a-g-a-y, something like that. It doesn’t mean anything, but let’s say this thing is called an agay. Now, the question is: what kinds of questions can you ask about it? What kinds of observations can you make about it? Nobody knows anything about it yet. I’m showing it to you here, this plain pink agay — talk to me about it. Tell me what you see. Ask me questions. Bombard me with questions. Think about what you can do with this product. That is the kind of curiosity I want around AI. Maybe we don’t have to get into how this post-it note was made; we don’t have to get into what the exact algorithm of the AI is. There are different roles with which we play around AI, the same as the kinds of relationships we have around data: not everybody is going to be the person who visualizes charts, not everybody is going to be the person who designs those algorithms. But what we can do is try to understand: what is this product doing? Why was it made? What can we do about it when it is not matching its why? Who created it? What is underlying it? Data is the most fundamental thing going into an algorithm, so we can at least collectively learn about where that data is coming from. Because what machine learning and AI do is learn from what you are sharing with them. So whatever data you’re sharing, it’s always going to have an impact. I’m not talking about making our data perfect; I’m talking about acknowledging what we are feeding into those machine learning and AI algorithms.
So I would say what we can do collectively is be more curious, ask more “why” questions, so that transparency and accessibility start to get included. And one of my responses might be an unpopular one. A lot of places say: if you’re making AI products, bring diverse people in as designers into your product design. While that’s a great tip to start with, it’s only half true if you don’t have a culture, a space, where you can actually hear the diverse voices and do something about them. It’s no good hiring for racial and ethnic diversity, including all those people as team members, and then doing nothing about it in the design. So if we are truly inclined to bring community voice into the design process of these AI products, we need to start right from the day we decide, let’s take this problem and turn it into AI. That’s where we need community, all the way to the end, where we send the product back to the community and check and see, okay, whether it’s working or not. It cannot happen with us picking and choosing which spaces community is involved in. So those are some of the things we can do collectively. Individually, it’s going to be daunting, at least until we figure out a way to do it collectively. But I want to keep pushing for the individual realizations as well — so each of us feels the discomfort that something is missing, and each of us feels it enough that when we come together collectively, we can make a very big change together.

Marco Campana 18:09
Excellent. Yeah, I don’t want to diminish the individual; it just feels daunting so often, and I think the tech companies want us to feel that way — just use the product, and trust that we’re building it well. So I think what you’re saying about bringing people in is important: being inclusive and having diversity on staff matters. But in a case like AI, if the data isn’t diverse and inclusive, that’s still going to cause problems, as you say, in the design — it’s got to be from the ground up. And I love that idea, because so much of this is created and released into the wild. Going back to the communities, maybe that’s an area where we can do more advocacy: let’s look at what you’re creating, so we can give you that feedback, so that you can make this a better product — not just for a segment of society, but for everybody who might potentially be affected by it.

Meenakshi (Meena) Das 19:01
Right. And I think that’s why bringing the community in from the start is so much more important. I keep talking about this one single thing in my newsletter editions: the community has to be included from the start, otherwise it gets super extractive. For example, you hear that a marginalized community needs post-it notes, so you create tons of post-it notes, take them back to the community and say: hey, share your feedback on this product and tell me how I can improve it so I can sell it more. But that’s not what we want. Truly, that’s not what we should want if our next step is building a better world for ourselves. So I would say, definitely, being curious and including community right from the start can help us — the community can help us understand whether the what is matching the why or not, whether the what, why, and how are aligning or not. So instead of bringing them in for feedback at the end, we are involving them from the start.

Marco Campana 20:10
And I’m wondering — you mentioned, and again the quote I read earlier says, that there’s not a lot being talked about around human-centric AI. But I’m curious: outside of the nonprofit sector, is that a conversation? Who is talking about human-centric AI? Are any of the mainstream AI people having that conversation? Or is it still something that we need to advocate and build towards with

Meenakshi (Meena) Das 20:35
them? Well, I wouldn’t paint with broad brush strokes and say nobody’s talking. I would say, yes, people are talking; there is definitely more consciousness, more awareness about it. But it’s not where it should be. I feel like we are more into AI products than we are into being human-centric; that kind of consciousness is not there yet, although people are talking about it. Outside of the nonprofit industry, the kind of AI that exists is more mature, so people have at least played with the products for a longer time, in more ways, so that they understand a little — if not how to ask the right questions, at least to ask questions. In the nonprofit industry, the thing is, we don’t have enough mature AI products around. We have a few, which are doing a few things, answering a few questions, but we haven’t played with those products enough in the professional nonprofit world to start asking a lot of questions. So the workshop — going back to what you and I will be doing in November — is looking at some of these examples and combining them. Even if the nonprofit industry doesn’t have seventeen different ways to talk about AI, we will talk about AI from the broader professional world to start understanding: okay, what are the kinds of questions we should ask? We’ll look back and forth at the kinds of things to look out for when we are working with a new product, to see: okay, is this human-centric AI? There is no perfect definition saying that doing A, B, C, D, E is what makes something human-centric; it’s probably going to be our collective curiosity that translates into a definition.

Marco Campana 22:30
I think that having the conversation you’re facilitating in these workshops seems to be an important first step: just raising awareness that this is happening. We can’t avoid it, right? Even if we want to resist it, it’s already part of our lives, for better or for worse. So what role can we play to make sure that if we are implementing it — let’s say we don’t have a lot of impact on the creation of it, but we will be implementers and users of this AI — then, as with any technology, we learn how to become critical and ethical users of it? For a lot of nonprofit staff who might be listening, is that at least a starting point: okay, maybe I’m never going to understand what’s behind the black box, perhaps I’ll never be given access to it, but I can start making decisions about how I implement and use it, and apply an ethical framework to it?

Meenakshi (Meena) Das 23:24
Absolutely, absolutely. That’s a very good place to start. None of us needs to know exactly everything that is happening in the AI. I would say, as someone who has worked on AI products as an algorithm designer, a modeler, a programmer, and in different roles around AI — even for me there are many things I don’t understand, to be honest. But do I need to know everything exactly the same way a data scientist needs to know it? And does a data scientist need to know exactly everything the product does when it goes out into the world? That is where that symbiotic relationship comes back into play — what we were talking about: between these different roles, along with the AI, each of us does our own part in completing something good. So we don’t have to carry this burden that I need to know everything to be able to do good with it. I need to know enough to be able to share it with someone; I need to know enough to share the problems and challenges I feel around AI, so that someone can help me get to them. I want us to get to that part first, before we feel like: if I can’t do everything, let’s not do anything at all.

Marco Campana 24:42
So I feel like we need to be developing some literacy and some competency around AI amongst nonprofit staff, from frontline all the way up to leadership. And you’re working on building some of that with these workshops. What are some of the other ways that folks who are listening and curious can get started? What are some good resources or ideas? How can they start this conversation locally, in their own organization, with their peers? What’s a way for them to feel like they’re up to speed, or literate enough, to be able to move the conversation forward?

Meenakshi (Meena) Das 25:16
You know, I would suggest probably starting by acknowledging what kinds of AI products they are already using. Chances are, most of what we use in our work — let’s say an email program we use around the nonprofit industry, or a segmentation product — we usually do have one or two tools we are working with that have a component of AI built into them. So first, I would suggest acknowledging: what are one or two things you’re working with that have an AI component in them? Then let’s do a team exercise. Let’s talk within our teams about what we are doing with that product. Who is benefiting from it? What is going into that product — the kind of data that is going into it? And I’m not talking about product feature points — oh, this is so great, we save some time. That’s a great piece of conversation to start, but is it truly helping, to the extent it’s supposed to be helping? To give an example — it’s from one of the books, I’m forgetting which, but I’ll be happy to share it with you — the book talks about the future, and it’s pretty advanced. One of the future things with AI the book suggests is: what if our content becomes completely personalized? Every show on Netflix, for example, gives you an AI feature where you can personalize the ending of the story, depending on what you want. If you’re a person who likes happy endings, you can choose that and turn the story into a happy ending; if someone wants to change it to more action, they can turn it into action. That example really prompted a question I want to discuss with folks.
And what I want folks to do within their own teams is ask: just because we can do it, should we do it? Do we want to? There is something about art — whether it’s a theatre play or any other art — that an artist creates and wants to put forward as a vision, the vision that artist wants out in the world. Do we really need to change and personalize it to the extent that it alters the original vision of the artist? That’s the kind of discussion I’m hoping conversations like the one we are having today can start within our teams. Just because we can, should we? Who is being left out, just because a product exists? Is it serving everybody? It’s not just you and I as staff members using this product; there is also the audience being affected on the other end. Is everybody equitably able to engage and communicate with this product, or is somebody impacted a little less, a little differently, a little inequitably? Those are the kinds of questions teams can start having conversations about, after acknowledging the products they’re using. I think that would be a good start.

Marco Campana 28:43
That sounds like a really good practical start. And it starts with what they’re already using, which is also important — they don’t have to imagine something in a vacuum; they can start with what’s around them and what they already know. I love it. This is such a thought-provoking conversation, and it feels so important to at least start what can seem like a daunting conversation amongst people who don’t consider themselves technologists, but are users of technology — and sometimes also bridges to technology for communities who, ultimately, are going to be impacted by a lot of these kinds of things. I’ve found this a really useful and interesting conversation, with so much to think about, and I’ll share all of your notes and links for folks in the shownotes. But is there anything I haven’t asked you about that you think people should really be thinking about when it comes to a focus on human-centric AI and how they can build that future?

Meenakshi (Meena) Das 29:45
I would say — okay, I would want to use this platform to encourage everyone who is listening to think about why they need to learn this, because AI is inevitably going to be more and more involved in our lives. That is for sure, and now is the time to think about AI. Most of the time when I talk to people about AI, the first thing is: oh, is it really the time? Can it wait a bit more? There are more things to do, the to-do list never ends, there are other priorities. But it’s coming faster and faster — understand that it’s here. The sooner we start spending time just learning about it a little, just thinking about it a little with team members, the better handle we will have not just on the AI, but on the data underlying it. The sooner we understand that something is missing with the AI, as we come to talk about it, the sooner we understand: oh, how we collect data in our organization perhaps needs to change, or how we have conversations and discussions around our dashboards and metrics needs to change. So the talk about AI is not necessarily just a talk about AI; it’s also a talk about all the underlying things that go into it. The timing for it is absolutely right, right now. And I want to encourage all the listeners of this episode: just figure out one reason why you need to do this. I’m pretty sure that somewhere in that whole chain of your data, of bringing people together through data, you will find something that speaks to you — and use that why to start learning. It doesn’t have to be a full master’s degree; it can be just one conversation, one question: I want to understand this.

Marco Campana 31:43
No, that's great. And I think bringing it to the level of talking about data helps, because people do understand the importance of good data, or they're starting to. If we can help them understand that working on their data can improve or humanize the AI, then it's actually something they can have an impact on: how their organization, you know, collects data, asks for it in different ways, uses it, and analyzes it. I think that feels more practical and tangible for people, and it's so important for the evolution of the AI itself.

Meenakshi (Meena) Das 32:21
Absolutely. And it's especially important for the immigrant community, where, as you mentioned, Marco, you're working, right? People in the immigrant community are obviously coming from very diverse backgrounds; racial and ethnic diversity is obviously included under this word "immigrants." And there is this whole component that it's not just about what goes into AI, but how accessible it is; there are a lot of things involved. I'm an immigrant myself, a first-generation immigrant, and there are a lot of things involved when you move your home from one country to another, before you can even start realizing your feelings about AI. But you're already using those products; you're already surrounded by them. Ticketing, signup sheets, making sure you have access to services in a new community: these can be places where AI, in a small or a big way, is already involved. So talking about AI now is only going to be helpful in building trust within the immigrant community, who already need a little bit of support when you are bringing them into a new community. And so this is the time, going back to the point I was just making, to think about these questions, so you can make better use of this AI technology with the immigrant community, or the newer communities that are coming into our society.

Marco Campana 33:57
Fantastic, thank you. That's a great one to end on. I appreciate you taking the time to provide this overview and give people a sense that there are things they could, but also should, be doing now; that now is the time to be having these conversations so that we can try to have some impact and humanize AI moving forward. I will share all of your websites and your newsletter in the notes. If there are any other resources, please feel free to forward them to me. But thank you so much for taking the time to have this conversation, and probably, for a lot of people, introducing this idea and what they can do about it, perhaps for the first time.

Meenakshi (Meena) Das
Thank you so much for having me. I look forward to seeing more people become interested in AI.

Marco Campana
Awesome. Thanks again. Thanks so much for listening. I hope you found this episode interesting and useful for you and your work. You can find more podcast episodes wherever you listen to your podcasts, or on my site at marcopolis.org. I appreciate you listening, and if you have any tips, suggestions, or ideas, or want to be interviewed or know someone who wants to be interviewed, please drop me a line through my website or at marco@marcopolis.org. Thanks again.
