
AI explained: Open-source AI


Reed Smith partners Howard Womersley Smith and Bryan Tan, with AI Verify community manager Harish Pillay, discuss why transparency and explainability in AI solutions are essential, especially for clients who will not accept a "black box" explanation. Subscribers to AI models claiming to be "open source" may be disappointed to learn the model had proprietary material mixed in, which might cause issues. The session describes a growing effort to learn how to track and understand the inputs used in training AI systems.


Transcript:

Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.

Bryan: Welcome to Tech Law Talks and our new series on artificial intelligence. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. My name is Bryan Tan and I'm a partner at Reed Smith Singapore. Today we will focus on AI and open source software.

Howard: My name is Howard Womersley Smith. I'm a partner in the Emerging Technologies team of Reed Smith in London and New York. And I'm very pleased to be in this podcast today with Bryan and Harish.

Bryan: Great. And so today we have with us Mr. Harish Pillay. And before we start, I'm going to just ask Harish to tell us a little bit, well, not really a little bit, because he's done a lot, about himself and how he got here.

Harish: Well, thanks, Bryan. Thanks, Howard. My name is Harish Pillay. I'm based here in Singapore, and I've been in the tech space for over 30 years. And I did a lot of things primarily in the open source world, both in open source software as well as in hardware design and so on. So I've covered the spectrum. Way back when I was in graduate school, I did things in AI and chip design. That was in the late 1980s. And there was not much from an AI point of view that I could do then; it was the second winter for AI. But in the last few years, there has been a resurgence in AI, and the technologies and the opportunities that can happen with the newer ways of doing things with AI make a lot more sense. So now I'm part of an organization here in Singapore known as the AI Verify Foundation. It is a non-profit open-source software foundation that was set up about a year ago to provide tools, software testing tools, to test AI solutions that people may be creating, to understand whether those tools are fair, are unbiased, are transparent. There are about 11 criteria it tests against. It covers both traditional AI types of solutions as well as generative AI solutions. These are the two open source projects that are globally available for anyone to participate in. So that's currently what I'm doing.

Bryan: Wow, that's really fascinating. Would you say, Harish, that kind of your experience over the, I guess, the three decades with the open source movement, with the whole Linux user groups, has that kind of culminated in this place where now there's an opportunity to kind of shape the development of AI in an open-source context?

Harish: I think we need to put some parameters around it as well. The AI that we talk about today could never have happened if not for open-source tools. That is plain and simple. Things like TensorFlow and all the tooling that goes around it in trying to do the model building and so on and so forth could not have happened without open source tools and libraries, the Python libraries and a whole slew of other tools. If these were all dependent on non-open source solutions, we would still be talking about one fine day something is going to happen. So it's a given that that's the baseline. Now, what we need to do is to get this to the next level of understanding as to what it means when you say it's open source and artificial intelligence, or open source AI, for that matter. Because now we have a different problem that we are trying to grapple with. The problem we're trying to grapple with is the definition of what is open-source AI. We understand open source from a software point of view, from a hardware point of view. We understand that I have access to the code, I have access to the chip designs, and so on and so forth. No questions there. It's very clear to understand. But when you talk about generative AI as a specific instance of open-source AI, I can have access to the models. I can have access to the weights. I can do those kinds of stuff. But what was it that made those models become the models? Where were the data from? What's the data? What's the provenance of the data? Are these data openly available? Or are they hidden away somewhere? Understandably, we have a huge problem, because in order to train the kind of models we're training today, it takes a significant amount of data and computing power. The average software developer does not have the resources to do that, unlike what we could do with a Linux environment or Apache or Firefox or anything like that. So there is this problem. So the question still comes back to: what is open source AI? So the Open Source Initiative, OSI, is now in the process of formulating what it means to have open source AI. The challenge we find today is that because of the success of open source in every sector of the industry, you find a lot of organizations now bending over backwards and throwing around the label, our stuff is open source, our stuff is open source, when it is not. And they are conveniently using it as a means to gain attention and so on. No one is going to come and say, hey, do you have a proprietary tool? That ship has sailed. It's not going to happen anymore. But the moment you say, oh, we have an open source fancy tool, oh, everybody wants to come and talk to you. But the way they craft that open source message is actually, quite sadly, disingenuous, because they are putting restrictions on what you can actually do. It is completely contrary to what open-source licensing means under the Open Source Initiative. I'll pause there for a while because I threw a lot of stuff at you.

Bryan: No, no, no. That's a lot to unpack here, right? And there's a term I learned last week, and it's called AI washing. And that's where people try to bandy the terms about, throw it all together, and it ends up representing something it's not. But that's fascinating. I think you talked a little bit about being able to see what's behind the AI. And I think that's part of those 11 criteria that you talked about; auditability and transparency would be among them. I think we're beginning to go into some of the challenges, the kind of pitfalls that we need to look out for. But I'm going to just put a pause on that, and I'm going to ask Howard to jump in with some questions of his own. I think he's got some interesting questions for you also.

Howard: Yeah, thank you, Bryan. So, Harish, you spoke about the Open Source Initiative, which we're very familiar with, and particularly the kind of guardrails that they're putting around how open source should be applied to AI systems. You've got a separate foundation. What's your view on where open source should feature in AI systems?

Harish: It's exactly the same as what OSI says. We make no distinction, because the moment you make a distinction, then you bifurcate or you completely fragment the entire industry. You need to have a single perspective, and a perspective that everybody buys into. It is a hard sell currently, because not everybody agrees to the various components inside there, but there is good reasoning for some of the challenges. But at the same time, if that conversation doesn't happen, we have a problem. As for the AI Verify Foundation's perspective, it is about the code that we make. Our code, interestingly, is not an AI tool. It is a testing tool. It is written purely to test AI solutions. And it's on an Apache license. This is a no-brainer type of licensing perspective. It's not an AI solution in and of itself. It just takes an input, runs it through the tests, and spits out an output, and Mr. Developer, take that and do what you want with it.

Howard: Yeah, thank you for that. And what about your view on open source training data? I mean, that is really a bone of contention.

Harish: That is really where the problem comes in, because I think we do have some open source training data, like the Common Crawl data and a whole slew of different components there. So as long as you stick to those that have been publicly available and you then train your models based on that, or you take models that were trained based on that, I think we don't have any contention or any issue at the end of the day. You do whatever you want with it. The challenge happens when you mix the training data, whether it was originally Common Crawl or any of the, you know, Creative Commons-licensed content, with unlicensed content, or content licensed under proprietary terms with no permission, and you mix it all up; then we have a problem. And this is actually an issue that we have to collectively come to an agreement on as to how to handle it. Now, should it be done on a two-tier basis? Should it be done with different nuances behind it? This is still a discussion that is ongoing, constantly ongoing. And OSI is carrying the bulk of the weight to make this happen. And it's not an easy conversation to have, because there are many perspectives.
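
The episode doesn't describe an implementation, but the provenance gate Harish is pointing at can be made concrete: keep a training record only when its license is known and on an allow-list. A minimal sketch follows; the record schema and the license identifiers are assumptions for illustration, not a standard format and not anything from AI Verify.

```python
# Hypothetical provenance filter: admit training records only when their
# license is explicitly known and allowed. Schema and license IDs are
# illustrative assumptions, not a standard.
ALLOWED_LICENSES = {"CC0-1.0", "CC-BY-4.0", "Apache-2.0"}

records = [
    {"text": "...", "source": "commoncrawl", "license": "CC-BY-4.0"},
    {"text": "...", "source": "scraped-blog", "license": None},  # unknown provenance
]

# Records with unknown (None) or disallowed licenses are rejected, which is
# exactly the "mixing" problem: one unlicensed record contaminates the pool.
train_set = [r for r in records if r["license"] in ALLOWED_LICENSES]
rejected = [r for r in records if r["license"] not in ALLOWED_LICENSES]

print(f"kept {len(train_set)}, rejected {len(rejected)} (unknown or disallowed license)")
```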

Bryan: Yeah, thank you for that. So, Harish, just coming back to some of the other challenges that we see: what kind of challenges do you foresee for the continued development of open source with AI in the near future? You've already said we've encountered some of them. Some of the problems are really, in a sense, man-made, because a lot of us are rushing into it. What kind of challenges do you see coming up the road soon?

Harish: I think part of the challenge, you know, it's an ongoing thing, part of the challenge is that not enough people understand this black box called the foundational model. They don't know how that thing actually works. Now, there is a lot of effort that is going into that space. Now, this is a man-made artifact, this piece of software where you put in something and you get something out, or you get this model to go and look at a bunch of files and then fine-tune against those files, and then you query the model and you get your answer back; RAG, for that matter. It is a great way of doing it. Now, the challenge, again, goes back to the fact that people are finding it hard to understand: how does this black box do what it does? Now, let's step back and say, okay, have physics and chemistry and anything in science solved some of these problems before? We do have some solutions that we think make sense to look at. One of them is known as, well, it's called Computational Fluid Dynamics, CFD. CFD is used, for example, if you want to do a fluid analysis or flow analysis over the wing of an aircraft to see where the turbulences are. This is all well understood, mathematically sound. You can model it. You can do all kinds of stuff with it. You can do the same thing with cloud formation. You can do the same thing with water flow and laminar flow and so on and so forth. There's a lot of work that's already been done over decades. So the thinking now is: can we take those same ideas that have been around for a long time, and that we have understood, and see if we can apply them to what happens in a foundational model? And one of the ideas that's being worked on is something called PINN, which stands for Physics-Informed Neural Networks. So using physics, standard physics, to figure out how this model actually works. Now, once you have those things working, then it becomes a lot clearer. And I would hazard a guess that within the next 18 to 24 months, we'll have a far clearer understanding of what is inside that black box that we call the foundational model. With all these known ways of solving problems, you know, who knew we could figure out how water flows, or who knew we could figure out how air turbulence happens over the wing of a plane? We figured it out. We have the math behind it. So that's where I feel that we are solving some of these problems step by step.
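
For readers unfamiliar with the technique Harish names, here is a minimal sketch of a physics-informed neural network. It is not from the episode or from AI Verify: the PDE (a 1D heat equation), the coefficient, and the architecture are all illustrative assumptions. The defining move is simply that the training loss contains a physics residual, computed by automatic differentiation, alongside an ordinary data loss.

```python
# Minimal PINN sketch (assumed, illustrative physics: 1D heat equation
# u_t = alpha * u_xx). The network maps (x, t) -> u(x, t), and autograd
# supplies the derivatives that appear in the PDE residual.
import torch
import torch.nn as nn

torch.manual_seed(0)

net = nn.Sequential(
    nn.Linear(2, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)

alpha = 0.1  # assumed diffusion coefficient

def pde_residual(x, t):
    """Residual u_t - alpha * u_xx, computed with automatic differentiation."""
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t - alpha * u_xx

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    # Collocation points where the physics must hold
    x = torch.rand(256, 1)
    t = torch.rand(256, 1)
    # Toy "data": initial condition u(x, 0) = sin(pi * x)
    x0 = torch.rand(64, 1)
    u0 = torch.sin(torch.pi * x0)
    loss_physics = pde_residual(x, t).pow(2).mean()
    loss_data = (net(torch.cat([x0, torch.zeros_like(x0)], dim=1)) - u0).pow(2).mean()
    loss = loss_physics + loss_data  # physics term regularizes the fit
    opt.zero_grad()
    loss.backward()
    opt.step()
```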

Bryan: And look, I take your point that we all need to try to understand this. And I think you're right. That is the biggest challenge that we all face. Again, when it's all coming thick and fast at you, that becomes a bigger challenge. Before I kind of go into my last question, Howard, any further questions for Harish?

Howard: I think what Harish just came up with, in terms of the explanation of how the models actually operate, is really the killer question that everybody is posed with. The type of work that I do is on the procurement of technology for financial sector clients, and when they want to understand, when procuring AI, what the model does, they often receive the answer that it is a black box and not explainable, which kind of defies the logic of their experience with deterministic software, you know, if this, then that. They find it very difficult to get their head around the answer being a black-box methodology, and often ask, you know, why can't you just reverse engineer the logic and plot a point back from the answer, as a breadcrumb trail to the input? Have you got any views on that sort of question from our clients?

Harish: Yeah, there are plenty of opportunities to do that kind of work. Not necessarily going back from a breadcrumb perspective, but using the example of the PINN, Physics-Informed Neural Networks. Not all of them can explain stuff today. No organization, and no CIO who is worth their weight in gold, should ever agree to an AI solution that they cannot explain. If they cannot explain it, you are asking for trouble. So that is a starting point. So don't go down the path just because your neighbor is doing that. That is being very silly, from my perspective. So if we want to solve this problem, we have to collectively figure out what to do. I'll give you another example, of an organization called KWAAI.ai. They are a nonprofit based in California, and they are trying to build a personal AI solution. And it's all open source, 100%. And they are trying really, really hard to explain how it is that these things work. And so this is an open source project that people can participate in if they choose to, and understand more, and at some point some of these things will become available as a model for any other solution to be tested against. So then let me come back to what the AI Verify Foundation does. We have two sets of tools that we have created. One is called the AI Verify Toolkit. What it does is: if you have an application you're developing that you claim is an AI solution, great. Now, what I want you to do is, Mr. Developer, put this as part of your toolchain, your CI/CD cycle. When you do that, what happens? You change some stuff in your code, you run it through this toolkit, and the toolkit will spit out a bunch of reports. Now, in the report, it will tell you whether it is biased or unbiased, is it fair or unfair, is it transparent, a whole bunch of things it spits out. Then you, Mr. Developer, make a call and say, oh, is that right or is that wrong? If it's wrong, you fix it before you actually deploy it. And so this is a cycle that has to go on continuously. That is for traditional AI stuff. Now, you take the same idea from traditional AI and you look at generative AI. So there's another project called Moonshot. It allows you to test large language models of your choosing, with some inputs, and see what outputs come out of the models that you are testing. Again, you do the same process. The important thing for people to understand, and developers to understand, and especially businesses to understand, is, as you rightly pointed out, Howard, the challenge we have: these are not deterministic outputs. These are all probabilistic outputs. So if I were to query a large language model early in the morning in London and then ask the same question at 10 a.m. in Singapore, it may give me a completely different answer. With the same prompt, exactly the same model, a different answer. Now, is the answer acceptable within your band of acceptance? If it is not acceptable, then you have a problem. That is one understanding. The other part of that understanding is that it suggests I have to continuously test my output, every single time, for every single output, throughout the life of the production system, because it is probabilistic. And that's a problem. That's not easy.
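
As a concrete illustration of that "band of acceptance" idea, here is a toy sketch of continuous output testing. It is not the AI Verify Toolkit or Moonshot: query_model is a hypothetical stand-in for whatever LLM client is actually in use, and the acceptance criteria are invented for the example. The point is only that probabilistic outputs get re-checked on every run, in production monitoring as well as in CI.

```python
# Toy continuous-testing harness for a probabilistic model. query_model()
# is a hypothetical stand-in for a real LLM client; the acceptance check
# (required keyword plus a length bound) is purely illustrative.
from typing import Callable

def within_acceptance_band(output: str) -> bool:
    """Assumed acceptance criteria; tune these to your own use case."""
    return "refund" in output.lower() and len(output) < 2000

def monitor(prompt: str, query_model: Callable[[str], str], runs: int = 20) -> float:
    """Send the same prompt repeatedly; return the fraction of acceptable outputs.

    Because outputs are probabilistic, a model that passed yesterday can fail
    today, so this belongs in ongoing production monitoring, not just CI.
    """
    accepted = sum(within_acceptance_band(query_model(prompt)) for _ in range(runs))
    return accepted / runs

if __name__ == "__main__":
    import random
    # Fake model standing in for a real client, to keep the sketch runnable.
    fake_model = lambda p: random.choice(
        ["You can request a refund within 30 days.", "I am not sure."]
    )
    rate = monitor("What is your refund policy?", fake_model)
    print(f"acceptance rate: {rate:.0%}")  # alert if this drops below a threshold
```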

Howard: Great. Thank you, Harish. Very well explained. But it's good to hear that people are trying to address the problem and we're not just living in an inexplicable world.

Harish: There's a lot of effort underway. There's a significant amount. MLCommons is another group of people; it's another open source project, out of Europe, that's doing that. The AI Verify Foundation, that's what we are doing, and we're working with them as well. And there are many other open source projects that are trying to address this real problem. Yeah, so one of the outcomes, hopefully, that makes a lot of sense is that at some point in time, the tools that we have created, maybe multiple tools, can then be used by some entity who is a certification authority, so to speak. It takes the tools and says, hey, Company A, Company B, we can test your AI solutions against these tools, and once it is done and you pass, we give you a rubber stamp saying you have been tested against it. So that raises the confidence level from a consumer's perspective: oh, this organization has tested their tools against this toolkit. And as more people start using it, the awareness of the tools being available becomes greater and greater. Then people can ask the question: don't just provide me a solution to do X. Was this tested against this particular set of tools, a testing framework? If it's not, why not? That kind of stuff.

Howard: And that reminds me of the Black Duck software that tests for the prevalence of open source in traditional software.

Harish: Yeah, yeah. In some sense, that is a corollary to it, but it's slightly different. And the thing is, it is about how one is able to make sure that you... I mean, it's just like ISO 9000 certification. I can set up the standards. If I'm the standards entity, I cannot go and certify somebody else against my own standards. So somebody else must do it, right? Otherwise, it doesn't make sense. So likewise, from AI Verify Foundation perspective, we have created all these tools. Hopefully this becomes accepted as a standard and somebody else takes it and then goes and certifies people or whatever else that needs to be done from that point.

Howard: Yeah, and we do see standards a lot, you know, in the form of ISO standards covering things like software development and cybersecurity. Again, that also makes me think about certification, which we're seeing appear in European regulation. We saw it in the GDPR, but it never came into production as something by which you certify your compliance with the GDPR. We have now seen it appear in the EU AI Act. And because of our experience of not seeing it materialize under the GDPR, we're all questioning, you know, whether it will come to fruition in the AI Act, or whether we have learned about the advantages of certification and it will be a focus when the AI Act comes into force on the 1st of August. I think we have many years to understand the impact of the AI Act before certification will start to even make a small appearance.

Harish: It's one thing to have legislative or regulated aspects of behavior. It's another when you voluntarily do it on the basis that it makes sense, because then there is less hindrance, less resistance, to doing it. It's just like ISO 9000, right? No one legislates it, but people still do it. Organizations still do it because it's their, oh yeah, we are an ISO 9001 organization, and so we have quality processes in place and so on and so forth, which is good for those for whom that is important. That becomes a selling point. So likewise, I would love to see something like that right now for ISO 42001 and the series of AI-related standards. I don't think any one of them has anything that can be certified right now. That doesn't mean it will never happen. So that could be another one, right? So again, the tools that the AI Verify Foundation creates and MLCommons creates, everybody feeds into it. Hopefully that makes sense. I'd rather see a voluntary take-up than a mandated regulatory one, because things change. And it's much harder to change the rules than to do anything else.

Howard: Well, whether market forces will drive standardization is a question in itself, but it would probably take us way over our time. We could have our own session on that, but it's a fascinating subject. Thank you, Harish.

Bryan: Exactly. I think standards and certifications are possibly the next thing to look out for in AI, you know, Harish, you could be correct. But on that note, last question from me, Harish. Interestingly, the term you used, moonshot, right? So personally, for you, what kind of moonshot wish would you have for open source and AI? Leave aside resources. If you could choose, what kind of development would be the one that you would look out for, the one that excites you?

Harish: For me, we need to go all the way back to the start from an AI training perspective, right? So the data. We have to start from the data, the provenance of the data. We need to make sure that that data is actually okay to be used. Now, instead of everybody going and doing their own thing, can we have a pool where, you know, I tap into the resources and then I create my models based on that pool of well-known, well-identified data to train on? Then at least the outcome from that arrangement is that we know the provenance of the data. We know how it was trained. We can see the model, and hopefully in that process, we also begin to understand how the model actually works, with whichever physics-related understanding we can throw at it. And then people can start benefiting from it and using it in a coherent manner. Instead of what we have today; I mean, in a way, what we have today is called a Cambrian explosion, right? There are a billion experiments happening right now. And the majority, 99.9% of them, will fail at some point. And 0.1% needs to succeed. And I think we are getting to the point where there are a lot more failures happening than successes. And so my sense is that we need to have data that we can prove is okay to get and okay to use, and that is being replenished as and when needed. And then you go through the cycle. That's really my, you know, moonshot perspective.

Bryan: I think there's really a lot for us to unpack, to think about, but I think it's really been an interesting discussion from my perspective. I'm sure, Howard, you think the same. And with this, I want to thank you for coming online and joining us this afternoon in Singapore, this morning in Europe, for this discussion. I think it's been really interesting from the perspective of somebody who's been in technology, and interesting for the Reed Smith clients who are looking at this from a legal and technology perspective. And I just wanted to thank you for this. And I also wanted to thank the people who are tuning in. Thank you for joining us on this podcast. Stay tuned for the other podcasts that the firm will be producing, and do have a good day.

Harish: Thank you.

Howard: Thank you very much.

Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's Emerging Technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.

Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.

All rights reserved.

Transcript is auto-generated.
