Small Talk with RainKraft

Hosted by Subha Chandrasekaran

Small Talk is for current and aspiring leaders who want to level up their career and professional lives in a hyper-growth world.

S3E12 – Getting Smart About Artificial Intelligence With Kashyap Kompella

Everywhere you look there is talk of Artificial Intelligence (AI) and how it will change all our lives. For the better or is there danger lurking behind the hype? And do we really understand what AI means in the many apps and tools we use? Are science-fiction movies defining our expectations or will the Metaverse really be our next holiday destination?

Discussion Topics: Getting Smart About Artificial Intelligence

  • Transitioning to Artificial Intelligence from a particular field
  • Metaverse – the concept and experiencing it
  • Harmful effects of AI
  • AI audit and its concept
  • Advice on starting a career in AI

Transcript: Getting Smart About Artificial Intelligence

Subha: Hi Kashyap, good morning. Welcome to Small Talk with RainKraft. Thank you for being with us.

Kashyap: Good morning Subha, thanks for having me.

Subha: Really looking forward to a very enlightening discussion today. So, Kashyap, I’m going to start with your career journey. And actually, even before that, your educational journey, because that itself is a very, very long list, right? You did your engineering at BITS, that’s how we know each other, you went to ISB, you were at the CFA Institute, you did a Master’s at the National Law School, and so much more. So how do you manage so much?

Kashyap: Well, that’s just embarrassing. See, there was no grand plan; it’s just a few happy-go-lucky decisions that sort of worked out well. I got into consulting after my MBA, and as I worked on different kinds of projects, I thought it was useful to have multidisciplinary knowledge, so I ended up getting a few other diplomas and degrees, as you mentioned. But career-wise, I think I was lucky to have graduated at a time when the software industry was taking off in India. So it was a matter of being in the right place at the right time.

Subha: No, it’s true, many of us were, like you, at the right place at the right time. But you caught on to the field of Artificial Intelligence pretty early, right? When did it all start?

Kashyap: So let me take a step back and give you some context. Right now I’m an Industry Analyst, which means I try to understand new or emerging technologies: how they’re likely to evolve, when and how to use them effectively, etc. I say it’s like being a management consultant who specialises in future technologies while being underpaid. So a big part of my job description is learning.

So coming to artificial intelligence: we used to have an elective course called Neural Networks, taught by Professor L. Behera. This was at BITS, 25 years ago, and that was my first introduction to the field. Then in 2010, about 12 years ago, there was my friend and colleague Prabhas Thakur, who’s from IIT Bombay, and who you also know, your classmate at XLRI, I think.

Subha: Yes, yes, small world.

Kashyap: So in 2010, Prabhas and I explored a startup idea in computer vision. At that time, the AI technology was not ready for prime time, or for what we had set out to build. In a sense, we were a bit early, but that was my reinitiation into the field of AI. We tinkered with it for a bit, and when we realised that the product-market fit was not going to be there, we moved on to other things.

And for the last 10 years, I’ve been a Technology Industry Analyst. And more specifically, as the current wave of AI advances, I’ve been focused on AI and automation for the last five years. That’s my AI stint or AI journey.

Subha: And if I’m not wrong, you also are in the space of teaching AI, making it more accessible to folks like me?

Kashyap: See, that sort of happened organically. As I started learning about different things, I thought, why don’t I try to share some of what I learned with others as well? So that led to a lot of writing, teaching, and even advising startups. I teach in a few engineering and business schools, and I write four or five columns in different Indian and foreign magazines and newspapers.

So one of the strengths we have as a firm is being able to explain technology in simpler terms, and we felt there was a need for it. So we kept doing it.

Subha: Oh no, there’s a definite need for it. And I’m going to lean on you for that now too, like an AI for Dummies, if you will, just to help a lot of us: we think we know it, but we know it only in certain small pieces. And we do end up hearing a lot about other companies, and, like you said, about friends and colleagues who are either building something or working in the space of AI.

And honestly, beyond a point, we do wonder what it really is all about. What is it that is so game-changing? And some of the things that we hear just feel like, kind of maybe glorified automation and I’m not saying that with any judgment, but just what is that artificial intelligence? Because the two words are so strong and powerful that maybe our expectations are also very different. So like, for lay people, what is AI? And is it very different from what we maybe understand a little more, which is machine learning that you kind of teach the machine by giving it data and letting it learn from that? So is this very different? And how do you see it?

Kashyap: No, not at all. I think you have the intuition down right. A lot of what you say is actually very expert-level takes, so you don’t need any AI for Dummies. See, let me ask you a simple question. So I’m going to give you a series, and you have to fill in the next thing in the series, say 5 10 15 20 25. So what comes after that?

Subha: 30.

Kashyap: 30, right. So that’s exactly what the current AI that we have does. Let me elaborate; it seems simple, but that’s the intuition. There is a set of data here, which happened to be a series of numbers, and you identified the decision rule fairly quickly: this series is increasing by five. Now extend the same concept to different data types and huge amounts of data. Here the decision rule is quite simple, but when the data has, say, thousands of variables, you cannot easily intuit the decision rule.
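Kashyap’s number-series intuition maps directly to code: learn the decision rule from the examples, then apply it to predict the next value. This is a minimal sketch assuming an arithmetic series; the function names are illustrative, not from any library.

```python
def learn_step_rule(series):
    """Infer the decision rule from the examples: assume a constant step."""
    steps = [b - a for a, b in zip(series, series[1:])]
    assert all(s == steps[0] for s in steps), "series is not arithmetic"
    return steps[0]

def predict_next(series):
    """Apply the learned rule to extend the series."""
    return series[-1] + learn_step_rule(series)

print(predict_next([5, 10, 15, 20, 25]))  # prints 30, just as Subha answered
```

Real machine learning does the same thing with thousands of variables, where the rule can no longer be eyeballed.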

So, I’ll explain this in a simpler way, but just go with it for now: once you’re able to intuit the decision rule for different business or other situations, you can make predictions and act on those predictions. So right now, when businesses operate, they also make all of these decisions.

But these decisions are currently made by human experts, people like us. As you gain confidence in the predictions, you can automate those decisions. So that’s the kind of automation you’re talking about, and in some cases it can lead to efficiency gains, better customer experience, the whole set of benefits you expect from regular software.

So you take this simple concept and apply it in different situations, just like you’re able to predict the next number in the series. Given a user’s profile and their shopping behaviour with us, can I recommend the product they’re most likely to buy next? That’s the whole set of shopping recommendations. Extend that to content: based on what a user has watched so far on Netflix, can I predict the next show they’re likely to be interested in?

Like on a music platform, can I predict the songs they are likely to listen to? You apply the same concept across different situations. If it’s a retail banking lending situation, can I assess your risk profile?

Right now, I think we do that using a credit scoring mechanism. Can I get better at arriving at that credit score using better data, more data? That drives a lot of decisions: should I give you a loan or not, for what amount, and at what interest rate? So it lends itself to very many applications when you’re able to extract patterns from data.

So the field of AI itself is probably about 50-60 years old, almost six decades. And it keeps coming and going in waves, called AI Springs and AI Winters; when the interest in AI wanes, that’s winter. So now there is a lot of confusion about what exactly AI is, because the AI that we see in movies is very different from the AI that we see in our day-to-day lives.

So science fiction movies project a view of AI as Artificial General Intelligence, where AI is all-powerful, has emotions and feelings, decides good and bad, etc. But what we’re actually building is Narrow Intelligence.

That means we take all this past data that we have with us, apply some fancy computer science techniques to it, and then predict the decision rules. So it’s good at, say, handwriting recognition, or at detecting patterns in speech, or at identifying objects or people in images, etc. It is very narrowly focused on doing one specific thing. We have been trying to do these things for a long time; it’s the techniques we use that have changed over the course of the six decades I talked about. So when we were in college, there was a lot of interest in something known as expert systems, something known as fuzzy logic, and other techniques.

I mean, we don’t need to go into the details of those. So the definition of machine learning is learning patterns, or learning by examples. The examples that we show the algorithm to learn from are called training data. So you use data to train these algorithms. We were always pretty good at doing this for tabular data, rows and columns, like a spreadsheet or a table in an Oracle database; we were able to do a lot of predictions out of that kind of tabular data.
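The idea of training data, learning a rule from labeled examples in a table, can be sketched with a toy threshold learner. The income figures and the single-cutoff rule below are invented purely for illustration; real credit models use many variables and far richer algorithms.

```python
# Toy tabular training data: (monthly_income, repaid_loan) rows.
training_data = [
    (20_000, False), (35_000, False), (52_000, True),
    (61_000, True), (48_000, True), (30_000, False),
]

def train_threshold(rows):
    """Learn the income cutoff that best separates repayers from defaulters."""
    candidates = sorted(income for income, _ in rows)
    def accuracy(cutoff):
        # Fraction of rows where "income >= cutoff" matches the actual outcome.
        return sum((income >= cutoff) == repaid for income, repaid in rows) / len(rows)
    return max(candidates, key=accuracy)

cutoff = train_threshold(training_data)
print(cutoff)  # the learned decision rule: approve if income >= cutoff
```

The "learning" here is just searching for the rule that best fits the examples, which is the same principle, scaled down, that modern algorithms apply to millions of rows.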

But in the last 10-15 years or so, we have gotten good at identifying patterns in unstructured or semi-structured data: images, written text, speech, videos, etc. With these kinds of capabilities, the current state is that we’re able to decode all of this unstructured data and are in a position, loosely speaking, to imbue the computer with a lot of senses. Computer vision is like giving computers or devices sight; speech recognition lets devices like smart speakers decode what we say. Based on these things, they can predict the decision rules and act on them.

So it opens up a lot of possibilities in terms of automating things that previously required manual intervention. At the same time, because we have so much computing power at our disposal, through the cloud or through more powerful servers, and because of increased digitization, we have a lot more data at our disposal. Combine these two facts, and on narrowly focused tasks the algorithms are getting better at the precision of these predictions: in image recognition, say, if you give the system an image, it is able to identify the objects it has been trained to recognize. So that lends itself to very many applications. In narrowly focused tasks, these algorithms are approaching the human level of performance, which is roughly about, say, 90 to 95%.

But the difficulty is that an image recognition algorithm is trained for a very specific thing, while humans have thousands of these kinds of skills. I can identify objects in pictures, but I can also do other things, or decode different languages. A speech recognition system trained in English, for example, wouldn’t be able to recognize speech in Malayalam, but humans can do all these things. But anyway, stopping here.

So I hope that gives you a sense of what AI is and where we’re at in this space. I covered a broad sweep, but to summarise: whenever you see AI in the news these days, they’re referring to a branch of machine learning called Deep Learning, which uses a technique called artificial neural networks. That is the dominant way we’re doing AI these days. What is AI? AI is simply a piece of software. What is a neural network? A neural network is a data processing structure: it takes certain inputs and arrives at a specific set of outputs, which gives us predictions.

So the problem is, it does not give humanly intuitive or understandable decisions at the end of it; it gives some sort of numerical outputs, which we need to translate, interpret, and apply to our business workflows and decisions. So AI equals software. It is not the thinking, all-knowing, all-pervasive, all-powerful being that you see in science fiction; a neural network is nothing but a data processing structure.
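That input-to-output view of a neural network can be shown as a minimal forward pass: numbers in, a raw number out, with the interpretation left to us. The weights below are made up for illustration; a real network learns them from training data.

```python
import math

def neural_net(inputs, w_hidden, w_out):
    """A tiny fixed-weight neural network: inputs -> hidden layer -> one output."""
    # Each hidden unit is a weighted sum of the inputs passed through tanh.
    hidden = [math.tanh(sum(w * x for w, x in zip(ws, inputs))) for ws in w_hidden]
    # The output is a weighted sum of the hidden activations.
    return sum(w * h for w, h in zip(w_out, hidden))

# Illustrative weights; training is the process of finding good values for these.
score = neural_net([0.5, -1.2], w_hidden=[[0.8, 0.1], [-0.4, 0.6]], w_out=[1.5, -2.0])
print(score)  # a raw numerical output that we still have to interpret
```

The output is just a number; deciding whether it means "approve the loan" or "flag the image" is the translation step Kashyap describes.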

Subha: Got it. And I think that really simplifies it and makes it more accessible to so many of us. My next question would really be: I liked the way you showed the thread of how it has evolved, from expert systems to fuzzy logic, then to ML, and now AI. What’s next? How do you see this evolving as an Industry Analyst, what do you see coming next? And is this whole hype around Meta a part of that?

Kashyap: Absolutely. That’s a great question. See, as an Industry Analyst, a big part of my job is actually keeping tabs on what’s coming down the road. But in reality, the things we talk about now are actually implemented by companies five or ten years down the road. So despite all the hype, it’s still very early days for AI. When I talk to organisations right now, be it in India or outside, the bulk of their money is going towards things like cloud: putting their technology estates, be it hardware or software, into the cloud. More than 80% of the budget for most organisations is going towards cloud, and a very small percentage is actually going into new things like AI.

So that’s one nuance that’s usually missed in the nonstop coverage of AI by the media and the press. The transition to that kind of automated future, the wider adoption of AI, is going to happen over the next 10 to 20 years. We’ve already made a lot of advances in deep learning, and now we have some of these ingredients; I liken them to giving five or six senses to a computer. Implementing that in our businesses and organisations is going to keep us busy for the next five to ten years, even without any more advances.

So now, this Metaverse is an interesting thing. It shot into popularity all of a sudden when Facebook renamed itself Meta. The context, I think, is that we never thought we would have trillion-dollar market capitalization companies, because a trillion dollars is a lot of money; India’s GDP right now is about $2.6 trillion. Now we have companies like Apple and Microsoft with trillion-dollar valuations, and Google too, I don’t remember their market capitalization, but I think it’s more than a trillion dollars. Apple became the most profitable and most valued company in the world because it put a phone, really a computer, in everybody’s hand. So where is the next trillion-dollar opportunity going to come from?

So the Tech Titans are betting that the next trillion-dollar opportunity is going to come from putting a VR headset, or some sort of headset, on your head. That’s the logic: where is the next trillion-dollar opportunity in the next 10 or 20 years? To give more context, the name Metaverse itself comes from a 1992 science fiction book called Snow Crash, which depicted the Metaverse as a virtual world projected onto headsets or goggles worn by users, who are free to choose their avatars or virtual identities.

So today, the Metaverse is curious because it’s already here, but it’s also the future. It’s tough to define, and everybody has their own definition. There is also a lot of opposition to big tech, which is Google, Facebook, and the like. Big tech embraces the Metaverse because it’s a trillion-dollar opportunity, but opponents of big tech also embrace it; I’ll explain all of this in just a bit. To give a working definition, the Metaverse includes several elements from Neal Stephenson’s conceptualization, where the physical and the digital blend seamlessly. It’s a shared experience, and it’s an immersive experience.

It’s also persistent: when you get out of the Metaverse, there are still people in the Metaverse. It combines augmented reality, virtual reality, video games, social networks, and, interestingly, even blockchains and cryptocurrencies. The basic idea is that the Metaverse provides rich, multi-sensory, multi-dimensional digital experiences. It’s not just about technology but also about culture and economics, as I mentioned with blockchains and cryptocurrencies. But if you think about it, and I’ve been looking at this space for a long time,

there are elements here, gamification, a rich online world with digital avatars and virtual goods, that are not exactly new. You may remember the buzz around something called Second Life, a virtual world launched in 2003, which still exists today. If you read about the Metaverse today, there is a lot of hype and excitement: people are buying land in the Metaverse, embassies are being opened, concerts are being held, universities are opening campuses, etc. And exactly these things happened in Second Life as well: people bought virtual real estate, musicians held concerts, brands opened retail outlets, and so on.

But despite the hype, Second Life continues without ever attaining mainstream adoption. So that naturally leads us to ask: what’s different now? For the first time, we have made tremendous advances in virtual reality headsets; 5G mobile networks and gigabit-speed internet are arriving; and we have powerful computers on every desktop on which these 3D worlds can be easily rendered. Next, this concept of a Metaverse is being championed by companies with deep pockets.

So Facebook is spending billions of dollars on virtual reality and augmented reality apps in 2022 alone. Microsoft has acquired the games publisher Activision Blizzard for about $69 billion, its largest acquisition so far. Apple is planning its own headset, etc. So people are willing to bet big money on making the Metaverse happen this time; we’ll see whether it does. Also, video games have moved from niche to mainstream, and because of the pandemic, people are spending a lot of time online and are more used to digital experiences.

So where the crypto guys come in is that they say: we are generating all the content for Facebook, Google, and the social media companies, but we’re not getting paid for it. In that sense, we’re working for them for free. So they say, let’s create a Metaverse, I call it the crypto Metaverse, with two key ideas: it introduces the notion of digital property rights, which are stored on the blockchain, and users get a share of the profits for whatever they contribute.

It’s not mainstream yet, and I think our audience would not be very interested. But one difference is that the kind of Metaverses that Apple and Facebook are envisioning require some sort of headset, while the crypto Metaverse really doesn’t need one. So that’s the Metaverse. As for AI, there is a whole bunch of technology required to make the Metaverse come to life, and AI is involved in each of those technology pieces in different ways. See, I could go on; I teach a course on the Metaverse, so you have to stop me.

Subha: No, there are a lot of interesting nuggets in everything you say, and every sentence could be a deep dive of its own. But coming back, I like how you explained the Metaverse. So is it that, let’s say in the foreseeable future, you might say, okay, I went to an IPL match, but you never really went, yet you had the full experience of sitting in a stadium with people you don’t know, probably feeling the sounds and the sights and the excitement of that live environment, because of this virtual headset? So you were there, but you were not really there. You went to Egypt and saw the pyramids, but you never physically went. So is a lot of our life going to be lived on our couch, while claiming that we’re everywhere else?

Kashyap: See, that’s the vision the tech overlords are pushing. But there are a few practical difficulties before it happens. For example, these headsets are not very comfortable for people who wear glasses; it’s getting there, but it’s not comfortable yet. In the real world, when you move your head in a particular direction, your brain expects something to change. But when you wear a virtual reality headset and you move, there is a dissonance between what your eye sees and what your brain perceives, and that causes dizziness. So there is a lot of physics behind it.

And so they are taking care of that, but this dizziness also depends on your centre of gravity, which is different for people of different heights. In general, women are a little shorter than men, yet it is mostly the tech bros who are building these virtual reality headsets. There is a thing called cybersickness: just like motion sickness, you can get cybersickness when you wear these headsets. That is not yet solved, so the VR headsets have a gender problem there. The same goes, as I said, for people with vision defects.

They are making accommodations, in the sense of making space in the virtual headset for, say, people who wear glasses. But what if you have bifocals? All these things need to be thought through to deliver the kinds of experiences you mentioned. So it is possible today to have, say, a music concert or a sports event where you’re hanging out with 50 people; because of the limits of technology, bandwidth, and device constraints, you can only stream so much video. But that will keep getting better, and you will be able to hang out with hundreds of thousands.

Probably the point will come when the devices are powerful enough that you can hang out with the entire crowd. That is an easier problem to solve; over a period of years, Moore’s law, etc., will help us get there. But there are those other issues I talked about, like cybersickness, and what happens if people bully you in cyberspace? They’re only now starting to realise the dangers of those things. All of that has to be solved. But yeah, like I said, that’s why it’s not going to happen tomorrow or next year; it’s an evolution over the next five to ten years.

Subha: Got it. And I think that brings us to another important aspect of all this, which is that it is data-, decision-, and algorithm-based. And I know you do a lot of work on ethics around AI. So two things come to mind. One is: who defines these rule sets, and who defines how this will work? The clear example you gave is that women and those with certain disabilities or challenges are already an afterthought, and someone will have to see how to accommodate them. If somebody is not thinking about that proactively, it could get missed.

Because it really depends on how diverse the group sitting in the room doing all this is, and what their thought processes are. And second, for all of this to really be useful for me as a consumer, and this is something we joke about in our groups: when I say something on WhatsApp, I see an ad on Instagram for the same product. We need to be willing to, or be okay with, putting out so much data about ourselves, or at least acknowledge that, whether we do it willingly or not, people have access to it. Are they using it in the right way, to serve me better with better recommendations, and how could they possibly misuse it?

Kashyap: Right. So that’s a very complex question, frankly, because the reality is very uneven. For example, we keep talking about personalisation: give us your data and we will serve you better, we won’t waste your time, you’ll have a better experience. But all of us are also familiar with getting calls from the banks where we already have accounts saying, hey, do you want a loan? So it makes me wonder. I keep reading about all these things, say predictive maintenance.

And I’m talking about the consumer experience here. Predictive maintenance says we’re going to equip all this equipment, say an elevator in your building, with sensors, and before it fails we’re going to come and repair it. But I have seen so many instances where a lift has gone unrepaired for weeks and months after it needed repair. And similar things keep happening. Say you’re interested in going on a holiday someplace and you check for flights; for the next month you keep getting advertisements for hotels, and this continues even after you finish the holiday.

Subha: No, you are right, the consumer experience honestly hasn’t lived up to the hype so far. There are some spaces where, as you said, you immediately feel the effect: you chatted about Goa on a WhatsApp group, you see hotel recommendations, and you say, okay, that is kind of useful for me, and I’m not going to crib about it too much. But sometimes you see things you really didn’t expect, because you spoke about something a little private to you.

And you didn’t expect to see ads for, I don’t know, a mental health product or something like that. And I also see the other extreme. For example, I am a very, very frequent user of Audible, audiobooks, etc. And I keep getting, I think, a weekly or fortnightly emailer. For the past two years, despite the wide range of books I have been downloading and listening to, I see the email with the same four books in the header image: Atomic Habits, The Alchemist, The Psychology of Money, and I think one more.

Kashyap: So, did you buy those books? They’re gonna show you the same ads till you buy those books.

Subha: A couple of them I’ve actually downloaded on Audible and listened to, and a couple I have bought in physical form, which, I mean, if you’re so great, Audible should know from Amazon that I bought them. So some of those experiences make you wonder where the AI is; it promised me better recommendations, but that’s not happening. So yeah, I guess there is still some way to go in terms of the consumer experience.

Kashyap: Yeah, see, the consumer experience definitely is improving in many places, but it’s improving in bits and pieces, and only in the advertisements does it look seamless. What it is enabling is some sort of semi-personalization at some scale. And there is also economics involved. These big companies, at least, have economists running these experiments.

Take the example of telecallers: it’s easier for them to just throw people at the problem, and if somebody converts, it works out for them. Same with the advertisements. But the specific instance you gave could be a bug where something is not getting refreshed, or the Audible catalogue in India could be limited, so they’re not able to come up with a better recommendation; something could simply be broken. But in general, yes. I think what kind of data privacy we need, or what kind we get, depends on expectations.

So Pew Research Center does a lot of surveys on attitudes toward technology, and India, for example, is the most trusting of AI, or of technology in general. In the US, if I remember the numbers, about 45 to 50% of people mistrust AI or don’t trust it highly, while only 12% in India think AI is a bad thing. So countries fall on a spectrum, and we are very trusting of technology in India. That is going to inform people’s attitudes in general, speaking in broad brushes, about what kind of data they are willing to give away in exchange for experience. Anecdotally, I have also seen that attitudes change with age: younger people seem more okay with sharing a lot more information than senior citizens may be comfortable with.

So there are a lot of issues. In general, technology leads while regulation, and our attitudes towards it, lag. We were talking about fairly complicated issues, but for example, when the pandemic broke out in India, people just wanted to work from home, yet regulations had been set up in such a way that the government had to amend some rules to allow work from home for people working in SEZs, software export zones, and other restricted zones.

So we have only just sorted out those kinds of issues, and these are going to take years to work through. But I think there are other harms of AI. One is that it’s not delivering the customer experience we’re hoping for, but there are others we should be concerned about. I am not going to talk about all the good things, the tremendous opportunities and potential when AI is used well, because the rest of the industry is already selling you on that. There are some things that are not getting enough attention.

And I’m thankful to you for giving me this opportunity. So AI is a dual-use technology, which means you can put it to good use and you can put it to bad use. The same autonomous vehicle and autonomous flight capabilities we have can be used in warfare, for example. So that’s AI being put to bad use. Then the same AI can be used by bad actors. You can use AI to improve cybersecurity and defense, but you can also use AI to write better phishing emails, to trick people into giving away their information and credit card details, and to get them to click on compromised links.

So it’s available to both the bad actors and the good actors, and since we can’t do much about that, we need to educate the users: with AI you can auto-generate content, more easily spread misinformation and disinformation, create deepfakes, etc., because it’s a horizontal technology that’s applicable and available to everybody. So AI in the wrong hands is one type of risk. Then, more commonly, more prosaically, another risk, which you alluded to in your question, is hyping up AI before it’s ready for the real world.

So we talked about let us say, the impression the popular perception, somebody who hasn’t looked deeply into these things is because it is using massive amounts of data because it is using pretty complicated mathematical techniques, and huge computing power compared to humans who have their limitations, the amount of data they can process, etc.

One misconception is that AI is much more accurate than humans. Another is that it is more accurate than it really is: we keep seeing claimed accuracy levels of 95% when in reality it is much less, maybe 60% or 70%, depending on the task. And depending on the training data, the accuracy levels can fall dangerously low for certain groups, say for women or for minority groups.
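To make that last point concrete, here is a toy sketch (an editorial illustration with made-up numbers, not from the episode) of how a reassuring overall accuracy can hide much lower accuracy for a smaller subgroup:

```python
# Made-up predictions: each pair is (prediction, ground_truth).
def accuracy(pairs):
    """Fraction of predictions that match the ground truth."""
    return sum(p == t for p, t in pairs) / len(pairs)

# A large majority group the model handles well...
group_a = [(1, 1)] * 90 + [(0, 1)] * 10   # 90% correct
# ...and a small minority group it handles poorly.
group_b = [(1, 1)] * 6 + [(0, 1)] * 4     # 60% correct

print(f"overall: {accuracy(group_a + group_b):.0%}")  # ~87%, looks fine
print(f"group A: {accuracy(group_a):.0%}")
print(f"group B: {accuracy(group_b):.0%}")            # far worse
```

Reporting only the overall number would hide the 60% figure entirely, which is why evaluating accuracy per group is a basic step when scrutinising a high-stakes system.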

Subha: We see these problems in, let's say, hiring today, right? I do a fair bit of work with women coming back to work, or mid-level folks who are trying to change industries and spruce up their resumes. They all know that some kind of tool is going to go through their resume before a human ever sees it, and there are so many pitfalls in that. How is that algorithm really set up to judge things that can't be explained in clear words and timelines?

Kashyap: Absolutely. See, there's nothing wrong in saying a technology is only 50% accurate, because it will get better. The real danger, when you hype it up before it's ready for the real world, is that 50% technology gets implemented left, right, and centre on the assumption that it's 95% technology. A lot of people ask me what AI ethics is. If AI is just a piece of software, how can software have ethics? What is ethical or unethical about it?

Take a regular car: if it starts only 80% of the time, you wouldn't buy it. I'm not saying 80% is the right number; it depends on the product category. A medicine or a vaccine that works 80% of the time, for example, is considered pretty good, but doctors monitor it knowing that it works only 80% of the time. This mismatch between claimed and actual reliability is what's happening right now in different domains.

Say you have exam proctoring systems, where a facial recognition system has to recognise you before you can take the exam. If it falters because you don't have proper lighting, or because your skin colour is darker, you can't take the exam. There are a lot of examples. In India, say, we are using such systems to determine the beneficiaries of welfare: the government gives out a lot of welfare schemes, and algorithms help decide who gets them.

A lot of data is used for that, and this is happening across the world. If we know there are defects or problems in those algorithms, we can put in exception-handling mechanisms, humans in the loop, or other safety mechanisms. But if you go in thinking the algorithm is going to solve everything, then people get caught in an automation hellhole. I'll give you an example from the Netherlands.

The Netherlands had an algorithmic system that determined who should get child welfare benefits. All of this has played out very publicly, and all the data is available, so I'll use that example. This went on for years and created hardship for so many families. Say somebody had twins: the algorithm flagged their claim for child welfare benefits as fraudulent because they appeared to be claiming twice; the system simply did not accommodate that situation. In the end, low-income families, migrants, and other vulnerable groups were disproportionately targeted.

On top of that, the government put in systems that flagged some people as having fraudulently claimed benefits and tried to claw the money back, demanding repayment of five years of benefits without any grace period. Thousands of children were separated from their families and placed in orphanages, a lot of people were imprisoned, and some even took their own lives. It created a huge crisis, and the Dutch government actually resigned over its lack of oversight of this algorithmic system.

We talked about data privacy, and the Netherlands is subject to the GDPR, one of the most stringent data privacy and protection regulations. The Dutch data protection authority even fined the Dutch tax authority over this system. So this has led to a lot of rethinking about how we really use AI systems and what mechanisms we need to put in place so that people are not adversely impacted. We're still learning. This is the field of responsible AI, or ethical AI.

And one of the things emerging is that, just as we have third-party audits of financial statements, we need independent third-party audits of high-stakes AI systems. This is giving rise to the field of AI audits, and I'm happy to share that I'm one of the first certified AI auditors in India.

Subha: Awesome. Yes, I saw that very recently. Congratulations on that.

Kashyap: So this is a field we see becoming more important as we learn more about the impact of using AI. These are some of the risks we need to watch out for. The last thing I'll say about risks is: what do we really use AI for? A Facebook executive famously said that the best minds of his generation were thinking about how to make people click ads.

So we're taking all these highly qualified, trained computer scientists and researchers and putting them to work on getting us to spend more time online and on social media, instead of solving some of the larger problems humanity is facing. Why don't we use this expertise to build, say, an automated wheelchair, which one of my friends is doing? There is potential for all of that, and those are the kinds of things I wish would get their time in the spotlight as well.

Subha: Correct. Interesting; I was just reading, I think in the MIT Technology Review, an article on how many AI tools were built during COVID, but somehow they have not helped in terms of forecasting, projecting, or improving the treatment and triage process. A lot of AI effort has gone into it, but the results have not yet shown up.

So a lot of what you said is that our expectation of these algorithms and tools, that if I've made something it's performing at a 95% level, is itself unrealistic. We have to be mindful that something may be working only at a 20-30% level, and the decisions we make based on that output have to factor that in.

Kashyap: Perfect. I'll give you three quick examples. One: say 10 or 12 years ago, IBM Watson was synonymous with AI. You remember all those advertisements IBM had about using Watson in a variety of ways; most importantly, they were going to use AI to cure cancer. IBM paid a lot of pioneering costs and invested billions of dollars in that. AI needs data, so they spent $5 billion acquiring it, and they had the best partnerships in healthcare.

At their peak, they had 7,000 engineers working in the IBM Watson unit, and they bet the company on Watson AI solving healthcare and other domains. But very recently, in the early part of this year, they sold Watson off for a billion dollars. That's one thing. These are tough problems to solve; I'm not saying this to dunk on IBM, because they helped the industry understand which problems can and cannot be solved using AI, and their experience is going to help others in the field. Thanks to them for that.

So that's one example. The second is the MIT Technology Review piece you mentioned. Over the last two years, during COVID, the British Medical Journal systematically assessed hundreds, if not thousands, of COVID prediction models, and its conclusion was that not a single one of them is fit for clinical use. So I strongly agree with what you say. The third thing: one of the greatest pieces of AI we have today is the language generation model, and OpenAI's GPT-3 is the classic example. You give it a prompt, and it generates content tailored to that prompt.

I have actually written four of my columns using OpenAI's GPT-3. It's a fantastic piece of technology: ask it almost any question and it will answer, because it has been trained on a huge corpus of internet data. But the limitation is that it was trained on data only up to 2019; they released it in 2020, I think. So when COVID hit, everybody who had access to GPT-3 would ask it about COVID: what precautions should be taken, what do you think of this or that. Here was our most impressive piece of AI, and it had no clue about the biggest challenge facing us at the time.

How odd is that, right? It did have some smattering of knowledge, because SARS-CoV-1, the first coronavirus, had appeared some years earlier, so all the responses it gave were based on SARS-CoV-1 and not SARS-CoV-2. That illustrates the nature of the AI systems we're building: they're very good at finding patterns in the data they've been trained on, but when they encounter something new, that's when the accuracy dip you mentioned happens; it falls from 95% to, say, 50%, or the system is totally confused. That's the situation, but in general I'm hopeful. I'm highlighting these aspects because they are not given much attention, which is curious to me as a general observer, because in ordinary human affairs the media highlights when things go wrong.

Subha: Yeah, the media is more cynical than we are.

Kashyap: In human affairs, yes. But we have more optimism about technology, so we focus on its upside and don't pay as much attention to the downside. In fact, if you talk about the downside of AI, you're seen as a party pooper who isn't encouraging the people working hard on it. Kudos to all of them, but as an industry analyst my take covers both the pluses and the minuses, so we can make more informed decisions.

Subha: And with some of the insights you've shared today, I think that is going to be more and more important: having a better picture even as a consumer, where you may think, hey, it's not for me to get into this so deeply. It pays to be more aware of what it means, and what it means for me. Yes, there is good stuff to look forward to; one day I'm going to watch a match from my sofa and really be in the stadium while sitting at home. But there are pitfalls, and it's very important to be aware of them.

So thank you for the insights you've shared today. I'd like to wrap up with careers in AI, for those who want to get into this field, be it youngsters or mid-career folks who have been in technology for some time and know that if they want to keep working, surviving, and thriving for the next 10 to 15 years, this is probably something they should know a lot more about.

So how does someone go about it? What's the best way? I'm asking because I see many folks doing very basic online courses. Today, if you go to Udemy, for probably 500 bucks you can get two or three certifications on AI: how to build with AI, how to do this and that with AI. Clearly that's not going to be enough. So how does one start? What's a good place to start? And does it even make sense for someone mid-career to move into this now?

Kashyap: So there are three questions there: AI careers in general, youngsters, and mid-career folks. Let's take them one at a time. To give some context about careers in AI: there are roughly 25 million computer programmers in the world right now. Because of increased digitization, let's say we double that number in the next 10 to 15 years, to 50 million programmers in total.

That is still a very small percentage of the world's population, so the opportunities are not just in AI; they are going to be broadly based elsewhere as well. Within AI specifically, there are going to be three paths: deciding which AI systems to build, building the AI systems, and managing, maintaining, or using AI systems; we're not even talking about consumers here. Those who decide which AI systems to build will be investors, domain experts, and maybe the government, and that is going to be a very small sliver of the millions of technical jobs.

Building the AI systems requires AI or computer science knowledge: these are the AI researchers, the people doing PhDs and whatnot, plus the software engineers and domain experts who translate that research into products. This, again, is going to be a small sliver of the jobs. The bulk of the jobs will be managing and using AI systems. We talked about a lot of the examples, right? Under what conditions should AI be used? Is it doing the right thing or not? Keeping an eye on AI systems, and using AI in your own job. A lot of data science jobs fall into this category: you have the data, and you generate insights for people to use.

So that's the taxonomy of the three types of jobs. Obviously, when people say they want to get into AI, they're looking to earn a premium for their skills, and which path you choose matters. If you are in a position to decide which AI systems to build, as an investor, or if you specialise in AI, you will command a premium for your skills. If you're an AI researcher with a PhD, you'll do different kinds of work; you don't need a PhD to get into the field, but it opens up different types of roles. It's not a monolith.

For youngsters who are, let's say, still in college or just starting their careers, the building blocks are maths (statistics, probability, calculus, linear algebra, and so on), programming languages, and knowledge of machine learning methods. You're not in a position, while still in college, to have domain expertise, but you will pick that up with experience.
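As a toy illustration of how those building blocks fit together (my sketch, not something from the episode): even the simplest machine learning method, fitting a line y = w·x + b by gradient descent, combines calculus (the gradient), linear algebra (the model), and programming.

```python
def fit_line(xs, ys, lr=0.01, steps=5000):
    """Fit y = w*x + b by minimising mean squared error with gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Partial derivatives of the mean squared error (the calculus part).
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        # Step downhill against the gradient.
        w -= lr * dw
        b -= lr * db
    return w, b

# Data generated from y = 2x + 1; the fit should recover roughly (2, 1).
w, b = fit_line([0, 1, 2, 3, 4], [1, 3, 5, 7, 9])
print(round(w, 2), round(b, 2))
```

The same loop of "compute a gradient, take a step" scales up, with far more parameters and data, to the large models discussed earlier in the episode.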

But in general, even outside AI, this is a very interesting time, because you can come out of nowhere, from a small town or a no-name institution, and have a stellar career; there are so many examples of that in India and outside. So you can really chart your own career path. But it's also true that there are significant returns to being associated with good brands; there is a huge brand premium. If you are from the IITs, BITS, the IIMs, ISB, or XLRI, or from a brand-name employer like Google or McKinsey, people chase after you.

Subha: Correct. The credibility is already partly established.

Kashyap: Correct, some sort of screening has already happened. So my advice is this. There is a lot of talk about whether youngsters need to go through a four-year college or should drop out. If you have to drop out of college, please make sure you drop out of a top school. In general, youngsters should play the long game: if you have the opportunity to get into one of the good name brands early in your career, please do, because that compounds and gives you returns throughout your career. Coming to mid-career folks, you said something very interesting: that some people are stuck or stagnating in their careers. My take is that in mid-career, stagnation is the natural order, because there is a hierarchy of organisational promotions, and as you keep growing there are only so many jobs to be had, right?

Subha: Correct. There is a pyramid.

Kashyap: There are only 5,000 CEO-type jobs. Once you've reached the director level, what comes after depends less on your talents and more on luck and other external factors. So don't think of it as stagnating; that is the natural order, and you would be the exception if you went beyond it. At that point it makes sense to play to your strengths, not to jump into a completely new thing like AI. What will be in high demand is your understanding of the industry, the relationships, the customer pain points, what needs to be solved, and so on.

That is what will really command a premium, I would think. Obviously, some digital fluency, AI fluency, technical fluency is important, because the people you manage and the products you help build will involve these technologies. So aim for that level of fluency rather than retraining from scratch. There is a concept called the T-shaped professional: the horizontal bar of the T represents your generalist skills, while the vertical bar represents the deep expertise you bring. Mid-career professionals can take stock of what their T looks like and decide what they want to add to it.

Subha: Makes sense. Playing to your strengths, at any stage, and especially once you've established yourself in a field for a number of years, is underrated, but it's very useful and important. And you have to be cognizant that when you make these sudden jumps and shifts, you are competing with a much wider and probably much younger talent pool who can take bigger risks. You have to keep that in mind too.

Kashyap: Absolutely. You should play on your own terms, I completely agree, rather than trying to compete with much younger people. You can compete with them, but you need to know what you're getting into.

Subha: Got it. This has been wonderful; I can't believe we've been speaking for almost an hour, it just flew by. Thank you so much for bringing it down to basics, simplifying it, and contextualising it for us listeners. I've been wanting to talk to you for a while, because this is a space so much in the news that you can't help hearing about it on a daily basis, and we've all made our assumptions about what it is and what it can do. A lot of clarity has emerged. Thanks for your insights.

Kashyap: Thank you, Subha. It's my pleasure. I've been listening to your podcast, and you've been putting together a finely handcrafted, polished show. All the best, and I hope to listen to future episodes as well.

Subha: Thank you so much.

Kashyap: Thank you. Bye-bye.

Subha: Bye.

Our Guest: Kashyap Kompella

All this and more, from the basics, made as simple and relatable as you will ever hear, from AI industry analyst Kashyap Kompella! As CEO of RPA2AI Research, and through his books, columns, and teaching assignments, Kashyap is an established global thought leader in AI and digital transformation who generously helped me understand AI and its implications for each of us, today and in the future.
