Revolutionizing Validation and Automation in Biotech [Nagesh Nama]

Nagesh Nama July 10, 2024


Background

Yan Kugel is joined by Nagesh Nama, CEO at xLM, who brings a wealth of experience and expertise in incorporating AI into validated processes within the pharmaceutical industry. Nagesh shares insights on the benefits, challenges, and best practices of using AI and machine learning technologies in pharmaceutical manufacturing quality, and on the potential for AI to improve efficiency, reduce costs, and enhance regulatory compliance.

Nagesh’s Journey in Pharma and AI Implementation

Nagesh’s journey in the pharmaceutical industry started in 1992 as a manufacturing engineer. Over the years, he gained experience in consulting, IT, and eventually started a company focused on automating validation processes. His journey led him to explore the potential of AI in pharmaceutical manufacturing quality, and he has been at the forefront of implementing AI and machine learning technologies in this field.

Challenges and Solutions in Implementing AI in Pharma

Challenges Faced by Pharma Companies

  • Smaller biotechs and medical device companies are eager to adopt AI because they have limited resources, especially in IT.
  • Validation is a common concern; Nagesh describes a human-in-the-loop process in which AI acts as a co-pilot and a human reviewer signs off on the output.
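The human-in-the-loop pattern described above can be sketched as a simple review gate. This is an illustrative sketch only, not xLM's actual system; the names (`ai_draft_risk_assessment`, `qa_review`) are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    title: str
    body: str
    author: str  # "ai-agent" or a person's name

def ai_draft_risk_assessment(topic: str) -> Draft:
    """Stand-in for an AI agent that drafts a document (illustrative only)."""
    return Draft(title=f"Risk assessment: {topic}",
                 body=f"Auto-generated assessment for {topic}.",
                 author="ai-agent")

def qa_review(draft: Draft, approve: bool, reviewer: str) -> dict:
    """The human QA reviewer, not the AI, is always the approver of record."""
    return {"document": draft.title,
            "prepared_by": draft.author,
            "approved": approve,
            "approved_by": reviewer if approve else None}

# AI prepares the draft; a named human signs off.
record = qa_review(ai_draft_risk_assessment("line 3 changeover"),
                   approve=True, reviewer="QA-Jane")
```

The key property is that the approval record always names a human, regardless of who (or what) authored the draft.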

AI’s Impact on Efficiency and Cost Reduction

  • Nagesh highlights the potential for AI to improve efficiency and reduce costs in pharmaceutical manufacturing, particularly in areas such as predictive maintenance, data processing, and document generation.
  • He emphasizes that AI can make processes faster and provide better outputs, ultimately leading to significant time and cost savings.

Regulatory Considerations and Trust in AI

Nagesh addresses concerns about regulatory compliance and trust in AI. He emphasizes that the spirit of regulation should guide the use of AI, and there are methods to ensure human validation in the loop to maintain trust in the AI-generated outputs.

Continuous Validation with AI

Nagesh explains the concept of continuous validation and how AI can enhance this process. He discusses the potential for AI to achieve significant time and cost savings, with estimates of up to 80% savings compared to manual testing.
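Continuous validation, as described in the episode, amounts to periodically re-running the full requirement-coverage suite (no risk-based subsetting) and recording the evidence. A minimal sketch, with hypothetical names; in practice `run_test` would be a browser-driving test robot:

```python
def run_continuous_validation(requirements, run_test):
    """Run the FULL suite against every requirement and return an evidence report.

    `run_test` maps a requirement id to True/False. No subset is skipped:
    the point of continuous validation is 100% regression on every run.
    """
    results = {req: run_test(req) for req in requirements}
    failed = [req for req, ok in results.items() if not ok]
    return {"coverage": len(results),
            "passed": len(results) - len(failed),
            "failed": failed}

# Example with 100 requirements, all passing (illustrative stub for run_test).
reqs = [f"REQ-{i:03d}" for i in range(1, 101)]
report = run_continuous_validation(reqs, run_test=lambda r: True)
```

Scheduling this daily, weekly, or quarterly gives ongoing evidence that all requirements are still met, which matters most for cloud applications that change outside your control.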

The Future of AI in Pharma

Nagesh discusses the current state of AI in pharmaceutical manufacturing quality and its potential for growth in the coming years. He highlights the development of AI agents and reinforcement learning as focus areas for further advances in AI technology.

Shifting Workforce Dynamics

Nagesh acknowledges that the implementation of AI in pharmaceutical manufacturing will lead to a shift in workforce dynamics. He predicts that routine tasks will become more automated, leading to changes in the roles and responsibilities of employees in the industry.

Expanding Beyond Validation: New Initiatives

Nagesh shares that his company has established continuous labs to explore AI and other tools beyond validation. They have launched a series of managed services called CDI (Continuous Data Integrity solutions) to provide comprehensive data center services for biotech and medtech companies. They are also developing action models such as DAM (document action model) and CAM (code action model) to automate document creation and software generation, respectively.

In conclusion, Nagesh’s insights shed light on the challenges and solutions in implementing AI in pharmaceutical manufacturing quality. The potential for AI to improve efficiency, reduce costs, and enhance regulatory compliance is evident, and the future of AI in pharma looks promising.

Episode Chapters:

  • Introduction: 0:00 – 1:40
  • Guest Introduction: 1:41 – 4:18
  • Nagesh’s Journey in Pharma: 4:19 – 14:45
  • Implementing AI in Validated Quality Systems: 14:46 – 26:12
  • Efficiency and Cost Savings with Continuous Validation using AI: 26:13 – 35:57
  • Innovations in the Pharmaceutical World: 35:58 – 44:12
  • Conclusion and Contact Information: 44:13 – 47:05

Podcast transcript:

Please be advised that this is an AI-generated transcript and may contain errors.

00:28 – 01:05
Yan Kugel: Welcome to our podcast episode focusing on the use of artificial intelligence in pharmaceutical manufacturing quality: the challenges and solutions. Today, we are delighted to have Nagesh Nama as our guest, who brings a wealth of experience and expertise in incorporating AI into validated processes within the pharmaceutical industry. Nagesh will be sharing insights on the benefits, challenges, and best practices of using AI and machine learning technologies in pharmaceutical manufacturing quality. So Nagesh, welcome. Great to have you on the show.

01:07 – 01:10
Nagesh Nama: Yeah, very nice. Thanks for inviting me. Nice to be here.

01:11 – 01:33
Yan Kugel: Great. So I’m very excited to talk to you as an expert on AI, such a hot topic. And before we dive into the technicalities, could you tell us a bit about your journey in pharma and how you came to implement AI in validated quality systems?

01:35 – 02:20
Nagesh Nama: Definitely. My journey started way back in 1996. Before that, I had always worked in the pharmaceutical sector: I started as a manufacturing engineer in 1992, providing consulting services to life sciences, and did consulting projects all over the US. Then in 1996, I started my own company called ValiMation, where we focused on building custom control systems that would drive the machines to make the product. Since these were pharmaceutical applications, we had to validate that software. So that’s how my journey started. Over time, I got involved in

02:20 – 03:01
Nagesh Nama: many, many, many projects, in drug manufacturing, clinical research, IT. So we started building our own products based on SharePoint. Then eventually in 2016, I started a company called XLM and focused on automating validation. So the idea was to take the pain out of testing by automating the entire life cycle, you know, from start to finish, the robots would kick in, drive the browser, do all the testing, and give you a report on the fly. It took us a couple of years to make that perfect. You know, now it’s working out very nicely. As the journey, as

03:01 – 03:36
Nagesh Nama: I was part of the journey, it was always challenging to keep these test cases accurate because the software always changes. Since our focus was on the cloud applications used in GXP, the applications do change. So we were committed to 100% regression. So we never said that we will do a risk-based approach and reduce the number of test cases we do when there’s a new release or some changes happening to the software. So It was always challenging for us to make sure that, you know, our technology worked nicely and we were very efficient in keeping up with

03:36 – 04:10
Nagesh Nama: the changes. That led us to looking into computer vision, then eventually AI came into the way. So now we realize that, you know, with the power of AI, we don’t need any human intervention. What I’m talking about is it can actually give a user manual. It can actually go to the user manual, create the user requirements on the fly in the format that you want, and take that user requirements, generate the test cases and the test steps, go through the software on its own, scrape the software with all the links and everything else, then actually perform

04:10 – 04:47
Nagesh Nama: the test and give you a report. I never thought that that will be possible when I started the XLM in 2016, I always believed in automation, but with AI, we are really pushing the boundaries. I would consider XLM as 1 of the premier organizations doing this kind of work, where we are implementing end-to-end automation using AI agents and also in the GXP environment. So whatever we build will be continuously validated. That’s our mantra is to make sure that we continuously validate whatever we build.

04:49 – 05:10
Yan Kugel: Right. So that sounds very interesting, the challenge of bringing AI into pharma. And what are the biggest challenges that you have been facing when introducing your solutions to pharma, and the biggest objections that you hear from such companies?

05:12 – 05:49
Nagesh Nama: Some of the companies, especially the smaller ones, like the smaller biotechs and medical device companies, are very eager to adopt because they have very few resources, especially in the IT space. So we are already working with them on the continuous validation side. Now this is the next phase of automation, where we go from 1x to maybe 10x, push the envelope, and try to make it as automated as possible. So they’re very excited. Obviously, the question of validation comes in. So we have a process for that. If our agents do some work,

05:49 – 06:24
Nagesh Nama: let’s say preparing a risk assessment. It can automate the generation of risk assessment. Any questions, it can interact with you, ask some questions and generate a word document at the end of it. That Word document can go to a QA person and the QA person can do the actual review and sign off on it. That’s human in the loop model where the AI is just a co-pilot, but ultimately the approval is done by a QA or a human in the loop. There is no objection to that because it’s just preparing all the documentation for you, you

06:24 – 06:56
Nagesh Nama: ultimately approve it. The next level is to introduce a second model. If they really want automation is what we call model in the loop where the first model does the work, second model revalidates it, and make sure that it approves it and gets the work done. So whether it’s a document creation and approval, this can follow this. So you have 2 models, 1 creates a document, other 1 approves the document. That is, we have not put that into production, but that’s something that we’re working on. And we’ll work with our clients depending on their appetite, we

06:56 – 07:34
Nagesh Nama: can make that happen also. So our goal is to go from Initially with human in the loop, to ultimately model in the loop that way and continuous evaluation is part of all these models. That’s where we want to get. Coming back to your question, right now everybody’s reading it out. They’re paying attention, but they know it can be done. Some of them are skeptical. Some of them are, you know, and whenever we showcase our technologies, for example, we’re working with Viva right now in the clinical trial space. They came to us for validation, then the whole

07:35 – 08:19
Nagesh Nama: dialogue changed because when the clinical trial is set up, their customers give them the requirements or the validation. They have CRF forms and the forms have a lot of these rules that needs to be incorporated. So that comes in English. And now their engineers would take that, write the script. It’s like JavaScript. They write the script, then test the script, and then let the client know that everything is validated. They can, the study can start. This process typically incorporates on an average thousand rules, and it takes about 2 to 3 weeks. So now we are working

08:19 – 08:52
Nagesh Nama: with them. They wanted this automation. Now we are putting all the pieces together where the agent will generate the actual script in the Viva scripting language, load the script, figure out how to navigate to test the script, test the script and give an output in PDF. So the goal is to go from two-week or three-week window to maybe 2, 3 hours window. And all the thousand plus rules, whatever rules they have will be done. So in this case, they’re very excited. And if we can pull this off, I think this will be a big feather in

08:52 – 09:30
Nagesh Nama: our cap. The conversations are at various levels, on the big corporation side, a little bit slow because some of them don’t have this nice vision as to how AI can help their business. And that has to come from the top leadership. I don’t see that in all the big corporations. Once that happens, I think the funds are set aside, commitment is there, things can go on. For example, we are working on the predictive analytics side where you can look at a lot of these time series data in real time and figure out if there’s gonna be

09:30 – 09:49
Nagesh Nama: a failure or not. And if there is going to be failure, we need to predict it. So those are things we are working on on the big pharma manufacturing side. So a lot of different projects, a lot of different levels. So I cannot give you 1 single answer, but time will tell. This will take about 2 to 3 years to kind of at least have some kind of a rhythm.

09:50 – 10:12
Yan Kugel: And I guess you mentioned that big pharma companies lack the vision of where to implement AI, how to do it, and how it can help them. So from your perspective, where can AI help efficiency and reduce costs in pharma, in manufacturing quality, or in any area that you can think of?

10:14 – 10:54
Nagesh Nama: I think first, manufacturing. So if you’re a high-volume pharmaceutical manufacturer, right, you have multiple lines, and the first thing you need to look at is how can I implement machine learning, how can I implement ML models to make sure that my production line is running at optimum levels, so that my manufacturing output is at a very high percentage, very efficient? And if something goes wrong, can I predict it even before it goes wrong? Switch from preventive maintenance to predictive maintenance. And if there’s any issue, the line operator should be able to chat with data,

10:54 – 11:24
Nagesh Nama: you know, meaning ask a question and they should get an answer. The old way was to have like 50 screens, 100 screens that they have to navigate to figure out how the line is working. From that, we should be able to create a chat interface where they can definitely chat and it can alert and let you know that some things will go wrong. You need to do something about it. So That’s what we’re looking at at the manufacturing level. At the document processing level, you know, in pharma, med tech or biotech, you have a lot of

11:24 – 11:58
Nagesh Nama: processes. Every process has a document to go with it. So that’s another area that AI can help where instead of you typing up in Word or Excel or 1 of these tools, you should be able to create, ask an AI bot, I want to create this. And AI bot should know all the historical data based on that, it should prepare the document for you, or at a minimum, prepare the draft for you. If there are any gaps, you should be able to communicate with you, have a conversation with you, so you can give it the conversation.

11:58 – 12:35
Nagesh Nama: At the end of it, the bot can generate the document in the right format, right template, and you’re almost there to get that document approved. So like that, we can do that for every document. I’m not talking about simple documents. I’m talking about complex risk assessments, documents where information from various sources have to be reviewed. I’m talking about such complex documents. That is another area where AI will really help quality operations and manufacturing operations as well in the GXP world. Obviously you have clinical research and other areas where they’re already using it on the drug discovery

12:35 – 13:14
Nagesh Nama: side. I mean, they’re a little bit ahead than on the manufacturing side. So that’s a big area that will help also. So To be honest with you, the way this will work is, if a human being is doing, then AI can do it faster, give you a better output. If you have the data, then AI can make you understand the data better. A lot of pharma companies have the data, especially on the manufacturing side, but it’s really not used that well at all. So that’s where the commitment should come is AI first, so that they can

13:14 – 13:28
Nagesh Nama: apply AI at every level. Obviously, they have to prioritize, funds are limited, but they can accordingly prioritize. There’s no area that AI cannot touch. In every single area, AI can touch. And the technology is becoming so advanced.

13:32 – 14:03
Yan Kugel: Right. And from the regulatory standpoint, where do you see the regulators getting involved right now? I think a lot of companies refrain from implementing AI because it is not clear at the moment what is allowed and what is not, what information can be shared, and what we can entrust AI to do. So what is your sense in this regard?

14:04 – 14:35
Nagesh Nama: Yeah, that is just an excuse, to be honest with you. People should take the spirit of the regulation and be able to defend what they’re doing. So there are multiple choices they can make. One is, you don’t have to let the AI make all the decisions, meaning you can have a human in the loop. So AI will prepare everything. The human can say, okay, I’m good now, or I’ll kick it back and do something else. So when you have a human in the loop, then it’s a human who’s signing off, not the AI. So that

14:35 – 15:03
Nagesh Nama: 1 way they can get away with it, meaning have that human, it’s still a little bit not super efficient, but at least 90% of the work can be done by the AI. That’s 1 way of looking at it. The other 1 is, you know, you’re doing data analysis, right, but especially with the ML models. So in fact, you’re adding more value, you understand the process better, not, it’s not the other way around. So that means, you know, you can make a very big case that, you know, I understand my process better, I’m predicting if any failures

15:03 – 15:33
Nagesh Nama: are going to happen. All these things can be validated. It’s not that you can just sort of blindly test it. The way it will work is, once you build the model, you can use the historical data to make the model learn and also use some of the historical data to test the model to see because you already know what the output is. So now you push that data and see what the model predicts compared with actual data and make sure that you know that somewhere close by that can be done. You know And this can be

15:33 – 16:09
Nagesh Nama: also done over time, meaning it can predict something, but after a point, the actual situation happens, you can feed the data saying, this is what you predicted, this is what the actual situation was, now let’s fine tune the model, That can be done. And as you’re, let me once we put it to production, you can introduce a certain use cases so that you can predict, you should be able to get the known output that you can compare. Meaning, let’s say I have a database of 50 use cases that I know what the output already is. So

16:09 – 16:34
Nagesh Nama: now randomly I will introduce these use cases to test my model. So I know the output, now the model is giving me this output right now. Can I compare it and say, is it hallucinating or is it consistent with what I expect? So I think you can really operate much better than what we are doing today. And there is no excuse, even in the regulated world, not to use AI. Yeah.

16:34 – 17:18
Yan Kugel: Right. So that’s a good point, to have a human in the loop who will check and review. And we are talking about saving a lot of time on the generation of huge documents, review, data collection, risk analysis. And at the end we come to the human review, right? And how do you think we can make sure that the people don’t simply trust the machine? Because there is always the danger that it can hallucinate: at some point it can give you 50 times the

17:18 – 17:39
Yan Kugel: right answer and then suddenly something goes wrong, right? So how do you create a protocol for the checker to make sure they’re always alert and What do they have to be alert to? How do they differentiate between perfect data and data where something was hallucinated?

17:41 – 18:10
Nagesh Nama: Yeah, even today, right? Let’s say I’m an engineer working on any of the processes, whether it’s doing a risk assessment or writing a test protocol, whatever. Ultimately, it goes to quality. They don’t trust me, right? They have to review everything that I have done and then sign off on it. And they’ll always kick it back saying that I don’t understand this, you know, this is not right or this is not correct. Even in today’s world, forget AI for a second, it is done that way. Meaning if person A does it, person B, the quality person checks

18:10 – 18:41
Nagesh Nama: it, right? So we can use the same model until we get a lot of confidence in AI by saying that AI is an engineer now, doing all this work, giving it to quality. Quality doesn’t have to do with anybody. They have to do what they do today, review it, make sure that everything is good. Whether AI did it or a human did it doesn’t matter to them. They have to give it the same level of scrutiny that they gave before. That’s the interim model, I think. But as we develop these AI technologies, you know, we will

18:41 – 19:12
Nagesh Nama: understand it better, we’ll have better continuous validation technologies, The models will get better. And also, we have guardrails to test the hallucination. Meaning there are tools to actually, even today, I don’t know how much you can trust them, they actually will make sure that it’s not hallucinating. It’s part of the continuous validation rhythm. So that’s how I see it. You know, I think we cannot remove the quality aspect of human in the loop right now till we get a lot of confidence. So they should assume that some human has prepared it. I need to do what

19:12 – 19:31
Nagesh Nama: exactly I did before. And once we get a lot more sophisticated in testing and making sure the models are mature enough and they’re not hallucinating, then maybe we can reduce that percentage. Maybe we don’t have to look at every single 1, maybe a random process. So that’s how I see it right now, Yan.

19:32 – 20:09
Yan Kugel: Right. Yeah. So that makes sense. So until we have enough data, until there is enough trust in the models, the QA’s job doesn’t change, right? And you were talking about continuous validation. When we talk about continuous validation, can you explain a bit the difference between continuous validation and the standard methods that were used before, and how AI comes into play here?

20:10 – 20:45
Nagesh Nama: So when I say continuous validation, even before AI, my philosophy was: when I do my testing, when I started the first baseline validation, I had a set of requirements. I had to make sure that those requirements were met. That’s your baseline package. Let’s assume that we have 100 requirements. My coverage should be good enough to test all the 100 requirements and give you evidence that I tested all the 100 requirements. So that was step 1. Now I want to take it a little bit further and say, why can’t I run the same suite maybe on a daily

20:45 – 21:21
Nagesh Nama: basis, weekly basis, quarterly basis, right? And show that these 100 requirements, provided they did not change, that has been met on an ongoing basis. So that is continuous validation to me. It’s not that you’re running validation all the time, 100% of the time. It’s just that you make sure that whenever you run it, you do 100% testing and constantly prove that the user requirements have been met. And in the cloud scenario, this helps also because a lot of times the cloud is changing so fast and some of the changes we don’t control. So by applying this

21:21 – 21:57
Nagesh Nama: philosophy, you can get absolute confidence that your software, where it’s in the cloud or anywhere else, is working and doing its job and all the requirements have been met. I have evidence to show that. Now with AI also, we should do the same thing. With AI, the problem will be the hallucination piece. We need to come up with better testing mechanisms to make sure that We can predict these hallucinations or if these hallucinations happen, we catch it. So for that, what typically companies do or what we are doing is we have a set of use cases

21:58 – 22:20
Nagesh Nama: that we can randomize and constantly inject those use cases and see what the output. We already know what the output should be. Now we know at that moment that the model generated that output, we compare it and they should match. If they don’t match, then it’s a red flag and we need to open a ticket or freeze the system saying that something is wrong and a root cause analysis has to be performed.

22:23 – 22:38
Yan Kugel: Right. And where do you see the efficiency in this? Do you have data to say that by doing continuous validation using AI, it saves X amount of time and so much cost?

22:39 – 23:20
Nagesh Nama: Absolutely, because we have many customers using our continuous validation platform. Just to give an example, at a very high level, not for a particular customer: the minimum savings over a 3 to 5 year life cycle is 50%. With AI, I think we can push it to probably 80% savings. You cannot look at just the first time; you need to look at the life cycle, right? If I have a software application, I have to maintain it for 3, 5, 10 years. So let’s say we apply the validation cost over 3 or 5 years. Before AI, we were shooting for

23:20 – 23:58
Nagesh Nama: 50 to 60%. Now with AI, I think we will shoot maybe north of 80%. So that’s compared to the manual testing. That’s a rule of thumb. So that you can see, I’ll give you an example. Right now, the way my team works is, this is before AI, is they understand the software and they figure out what use cases has to be done to meet the requirements because customer gives us the requirements. And then they write the feature files that is in quasi-English, what the test cases and the test steps should be. They give it to the

23:58 – 24:36
Nagesh Nama: developer who does the test automation. And the cycle continues. So Now with AI, what we’ll be doing, we are very close to achieving this is just give the input as a user manual and give access to the software. The agent will automatically scan the software, meaning like a human, it can go through all the different menus, figure out on its own, and also take the user manual input, prepare the requirements based on the user manual, and then somebody can approve it. Once the requirements get approved, it can generate the test cases on its own, including the

24:36 – 25:13
Nagesh Nama: steps. Now the web automation can take that input and actually do the testing and give you the output in a PDF. So the human intervention is probably 3 to 5%, not more than that. And this I’m not talking about just testing. It is automating the requirements. It’s automating the test on the test cases. It’s automating the automation piece, meaning writing the scripts and executing those scripts and giving a PDF report. You see that, right? I mean, we are going from 1 to 10 in a very short span. That’s because of AI. And what you always hear

25:13 – 25:51
Nagesh Nama: about is LLMs, large language models. That is just the foundation, right? They are called the frontier models. But what is important to us is LAM, large action models, where AI agents can leverage LLMs and automate tasks and also have the reinforced learning as part of it, meaning they get better over time. So that’s what we are working on for validation. We have a framework that we’re building called VAM, Validation Action Model, which will automate from requirement generation. I’m only talking about the software world, requirement generation all the way through testing.

25:53 – 26:24
Yan Kugel: Right, sounds very fascinating. And at the moment, where would you say the AI is in terms of how much you have already achieved and how much there is still to do? So let’s say you have this continuous AI validation. Where would you say it is strongest at the moment? And in which part would you say there is much growth still to come that you’re going to try to achieve in the next years?

26:27 – 27:03
Nagesh Nama: What you would normally hear about is the agentic frameworks. That is, the action models or the agents that will perform actions like a human being, or better. So that’s where the development is happening right now. A lot of research is being done in the open-source community as well as on the commercial, closed-source side. And everybody wants a piece of that, from OpenAI to other companies like Microsoft, for example. When you hear about Copilot and things like that, it’s nothing but the agents. And Apple just announced at their WWDC that

27:03 – 27:36
Nagesh Nama: they’re going to incorporate that into their iPhones and iPads and things like that where it can see the screen that you’re seeing, it can perform actions just like you can do only better and it can be truly your secretary. That is where a lot of research is being done. And I think it will take maybe a year or 2 for that to really come to a level where, we can leverage the full potential of AI. So that is 1 thing. And the next thing is the reinforced learning, meaning making the agents learn as it, like a

27:36 – 27:54
Nagesh Nama: human being learns in different areas, not just 1 area. In my case, I’m focused on software testing, software validation, process automation. So the learning becomes important. And you’ll be amazed at what even today the software can do. And I can only imagine a year from now.

27:56 – 28:38
Yan Kugel: Right. So it sounds like a lot of the processes that people have been doing will be replaced by AI. So what positions or actions performed by people do you think are still safe, or what shift in the workforce will we see in the next years? So far we know, okay, we still need QA to review everything, but probably not as many QA people. We need people on the manufacturing floors who review things, but probably a lot of the systems will become more and more automatic, right? So how will the manufacturing change in the

28:38 – 28:40
Yan Kugel: next years in terms of workforce?

28:42 – 29:18
Nagesh Nama: There definitely will be a change. In our industry, as you know, it will take a little bit more time because people have to trust this technology. But believe me, the same thing happened when the cloud came into existence. I was very skeptical, I’m talking about 2010, 2011, whether life sciences would adopt cloud technology. And we know most of the biotechs that we work with have everything in the cloud. They don’t have anything on physical servers, every single thing, you know? So with AI, it will only happen faster. It won’t take 5, 10 years. You’re talking about a year

29:18 – 29:55
Nagesh Nama: or 2, it will definitely happen. Once the management sees that they can get the ROI and they can get these routine tasks to make their operations more efficient, I think they will start adopting and asking for it. And like I said, it’s a different mindset and the top management has to understand what is their vision for their company 3, 5 years from now. And not just complain that, oh, it’s a regulatory thing or my workforce is not ready. We don’t have the tools. No. What is your vision? Understand what I can do today and kind of

29:55 – 30:35
Nagesh Nama: extrapolate what it can do tomorrow and now come up with a vision for your company 5 to 10 years and allocate the funds accordingly. So how will the workforce change in our area? Definitely there’s going to be changes, but I don’t think it’s that Every job will be taken. In my opinion, right now, quality spends a lot of time reviewing documents, looking for GDP errors, things of that nature. In my opinion, it’s not that much value added. The same quality engineer should be working with operations, controls people to make the process better, right? So it’s more

30:35 – 31:12
Nagesh Nama: robust and of high quality. I don't see that in many organizations. Quality people are more focused on GDP and regulations; they hardly even go to the manufacturing floor, and in most cases they don't even understand how the highly automated lines work. I'm talking about the underlying technology, how it even works. So that will be the shift: routine tasks, let the AI agents handle them. The human resources are very valuable. Can they focus on high-value work and maybe come up with better products, more products, enhanced products for their company, so that everybody can benefit?

31:13 – 31:20
Nagesh Nama: In my opinion, that's what will happen. But again, time will tell, right? We don't know how all this will pan out.

31:20 – 32:05
Yan Kugel: It's very interesting to see how innovation progresses, and as you mentioned, we'll see what happens. And on that note: validation and AI automation is your main business model at the moment, the thing you believe gives the most benefit to pharma. But let's say you had more capacity and the opportunity to work on additional projects in this world. What do you think is the next step? Or, put another way: if you weren't doing this, what do you think would be the next breakthrough

32:05 – 32:06
Yan Kugel: in the industry.

32:09 – 32:50
Nagesh Nama: Yeah, I think validation gave me personally a very good foundation, because I have worked at all levels of automation, from level 0 at the PLC level all the way through to ERP systems. From level 0 to level 5, my company has done projects and I have worked on them: new facilities, facility expansions, existing facilities, simple ones, highly complicated and highly automated ones. So it gave me very good exposure, and I have gained a lot of experience in various fields. Since last year, we at my company have established continuous labs, where we are also learning about AI

32:50 – 33:32
Nagesh Nama: and the different tools, and seeing how we can expand beyond validation. So what we have done is launch a series of productized managed services called CDI, Continuous Data Integrity solutions. We have already released about 6 different services, like application lifecycle management, service management, risk management, and remote monitoring and management. The idea is that for an SMB biotech or medtech company, we would like to give them the entire data center in the cloud, with all the services they need to start in a compliant way, in a

33:32 – 34:08
Nagesh Nama: qualified way. So that's our goal. By the end of the year we plan to launch another 4 or 5 services, so you'll have close to 12 services. That's enough to cover most of the processes that a typical SMB GxP company will need. That's one area I'm very excited about: I want to make it very painless for these high-tech biotech companies to get up and running in a qualified state. And next, you know, we want to develop these action models. Like I said, VAM is the validation action model. We already

34:08 – 34:40
Nagesh Nama: are developing DAM, the document action model. My goal there is to at least try to replace Word or Excel. For any document you want to create, no matter what the process is, you should be able to chat with a chatbot that has access to the historical information, understands the context, and can prepare these documents on the fly in the right format, using the right template. And you should be able to save that session information, go back to the same session, and regenerate the document without ever touching Word

34:40 – 35:22
Nagesh Nama: or Excel. That will save companies a lot of pain, because once you give Word or Excel to any engineer or QA person, or any person for that matter, it's a disaster. It's just a waste of time. So that's the document side. And lastly, we're also working on what we call CAM, the Code Action Model, where you generate software on the fly. I explained at the beginning of this session about generating scripts for EDC, electronic data capture. Similarly, we want to automatically generate code for various applications. So

35:22 – 35:34
Nagesh Nama: these are some of the areas, quite a few areas that we are taking on. It will probably take us a year or 2 to get all these different products to a mature state. Maybe then we can jump into other areas as well.
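[Editor's note: the document action model described above, a chat session that keeps its history, fills a template, and can regenerate the document later, could be sketched roughly as below. All class and field names are hypothetical illustrations; xLM's actual implementation is not public.]

```python
from dataclasses import dataclass, field

@dataclass
class DocumentSession:
    """Hypothetical sketch of a chat-driven document session: the
    session records every instruction and the field values it sets,
    so the document can be regenerated at any time without Word or
    Excel ever being opened."""
    template: str                               # e.g. "{title}\n\n{body}"
    history: list = field(default_factory=list)
    fields: dict = field(default_factory=dict)

    def chat(self, message: str, **updates: str) -> None:
        # Record the user's instruction and any field values it produced.
        self.history.append(message)
        self.fields.update(updates)

    def render(self) -> str:
        # Regenerate the document from the template and collected fields.
        return self.template.format(**self.fields)

# Usage: fill the document through "chat" turns, then render on demand.
session = DocumentSession(template="{title}\n\n{body}")
session.chat("Start a deviation report", title="Deviation Report DR-001")
session.chat("Summarize the root cause", body="Sensor drift exceeded limits.")
doc = session.render()   # the saved session can re-render this later
```

In a real system the `chat` step would call an LLM that extracts field values from free-form instructions; the point of the sketch is only the session-plus-template structure that makes regeneration possible.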

35:36 – 36:20
Yan Kugel: Right, so it's very admirable how your innovation is set to change the pharmaceutical world. This is very inspiring, and it's great to hear from people such as yourself who do so much to change and innovate the market. I want to thank you for having this chat with me. For everybody who wants to connect with Nagesh, his LinkedIn details will be available in the information about this podcast episode and in the LinkedIn post. Feel free to check his LinkedIn profile; the information about the xLM company will also be there, so you can go

36:20 – 36:38
Yan Kugel: there and explore a bit more about what they are doing and the solutions they have at hand, because my feeling is there is a lot still to come from xLM and Nagesh. Thank you very much for this talk, Nagesh.

36:39 – 36:42
Nagesh Nama: Yeah. Thank you for having me. I really enjoyed our talk.
