Alti Rahman: Hi, my name is Alti Rahman. I’m the chief strategy and innovation officer for American Oncology Network.
OBR: Can you describe the panel on AI that you moderated at the COA 2024 Community Oncology Conference?
Rahman: For all the prep work that we do with the panelists, and for all the research we do and the subject matter expertise the panelists bring, I learned something very important yesterday as a moderator. Artificial intelligence (AI) is here, and it’s being tested in a lot of large systems, academic centers, health systems, and hospitals. You have the likes of Microsoft, Google, and a lot of the big tech companies getting involved in this space. There’s a significant opportunity for community oncology to get involved in this space. But because of our diversity and the different sizes of our practices, I just don’t think we’re attracting the right attention to make sure that we’re getting some use cases into community oncology.
I’m super excited to help support and work with the Community Oncology Alliance (COA) to make sure we grab the attention of these large technology platforms to start testing various applications of AI. I would say that before I started that panel, my thought process was that AI is still in maybe a “think tank” type of phase and deploying a couple of applications here and there. [Now] I think it’s definitely here, and I think community oncology is ready to start testing, whether it’s administratively focused or clinically focused. I think the next few months are going to be really exciting. The next few years [are going to be] even more exciting.
OBR: Do you feel that AI technologies are ready for prime time in community oncology?
Rahman: I do think so. I think that there was a key takeaway from the session, which was the word “trust.” I think trust is an active process. It’s not passive. We have to be engaged in looking at and asking the right questions to guide the process toward trusting the application to do what it needs to do. Is AI ready for prime time? I would say yes. Trust and ethical considerations have to be taken into account. I don’t think we can say that we want the federal government to create something that’s going to build trust. I think healthcare has always been a grassroots effort. It has always been local. We have to take those same elements and also think about how AI technologies are going to start adapting to what we need to care for our patients on the ground.
I think there are significant implications not only for urban communities but, even more importantly, for rural communities. As we look at the lack of physicians, the lack of medical oncologists, and the lack of health care in rural communities, I think AI can fill a significant portion of that gap. Of course, [it will] never replace the physicians or the advanced practice providers, but [it can] help them reach patients who really don’t have access to some of the services that you have in more developed areas.
I think it’s really going to be revolutionary and transform the way we think about healthcare.
OBR: In terms of other technology concerns, what lessons should oncologists take from recent cyberattacks?
Rahman: My background is that I am obviously not a clinician; I look at it from a business perspective. I think an important lesson that we’ve learned from the Change Healthcare attack is that we are part of a supply chain. Just like any other industry, health care has a supply chain, and the care that we deliver on the ground is connected to technology platforms, distributor platforms, and payers. I think there are fundamental questions that need to be asked of all of those connected partners: “What are you thinking about cybersecurity? What policies do you have in place? And what insurance coverage do you have?”
I think asking these questions is certainly the first place to start. And then, because nothing is going to fully prepare you to face that unfortunate experience, being able to run simulations [is important]. Just as we ran fire drills in elementary school and high school, it’s preparation. You have to simulate and do a drill in terms of how you’re going to respond and how you’re going to communicate. What happens if you don’t have internet? What happens if you can’t bill? What happens if your electronic health record (EHR) [system] goes down?
It’s so important that risk mitigation becomes part of the organization’s DNA. And to make it part of the organization’s DNA, you have to invest in either the person who’s going to be a risk mitigation person or a team or look at outside support to help drive some of those discussions. I think that’s a significant area of investment that practices will really benefit from.
OBR: How does AI factor into cybersecurity?
Rahman: That’s a really interesting question. I’ve never thought about how AI [affects that]. I think the purpose of AI is to help offset some burdensome tasks and help free up our time to think about higher-functioning tasks, rather than being professional data entry people. That’s across the spectrum, whether you’re a physician or in a clinical or nonclinical role. I think where AI can be helpful in this area is actually in thinking about and creating incident response plans. Because if we think about the nature of AI, it’s dependent on what’s called a large language model. A large language model, in a simplistic way, is essentially data in an ecosystem. And if we think about data in an ecosystem, it’s going to house different types of information that looks at how you create incident response plans. Rather than thinking, “Okay, where do I even start?” you can [use] ChatGPT or Google’s Gemini and ask questions like, “Hey, how do I think about incident response? What do I do if I get attacked or have a cybersecurity issue?”
I think that goes back to the comment around risk mitigation: [you can use AI to] create plans for you and then exercise a higher level of thinking and functioning to ask, “Does this plan make sense for my organization?” You can save a lot of time, because incident response plans aren’t small half-page or one-page documents. They’re several pages. How do different departments work together? Who do you need to call? That’s where AI could be really helpful: [allowing] everyone to start thinking through and creating plans, rather than taking long hours to write up a plan from scratch.
OBR: How do you see COA working on issues related to technology?
Rahman: What’s wonderful about community oncology is that we look like our communities. If you’re in a large community of millions of people, then practices are typically larger to serve that community. If you’re in a smaller community, you scale to support it by being nimble. Regardless of the size of your practice, COA serves as our voice, the “one body” voice. We are essentially one large community. And when COA goes out and talks to the White House or to Congress, and even gets involved in some of the technology conversations, that amplified voice gets us the attention we need to have resources diverted and priorities given to us.
I think that’s where COA is going to be tremendously helpful. We can get these larger stakeholders to come to the table, rather than doing it on our own. We can get the attention of the big tech companies. We can get the attention of large supply chain–connected vendors that we really need at the table to make sure that we’re creating the right experiences for our patients. I see COA continuing to become an important voice in this area.
This transcript has been lightly edited for clarity.