Balancing innovation in AI with caution
Every day, my English teacher begins class with a “Question of the Day.” Recently, he posed an intriguing question about AI: Should policymakers be permitted to use AI in creating laws?
I argued that AI should not be the sole method for policy development, since it is often inaccurate, but that it can be used to analyze data that humans then use to make decisions. I discovered that this approach is already being applied in policy creation. However, it is still essential to ensure that the data used to train these systems is diverse and unbiased, thereby mitigating potential ethical issues.
Many of my classmates argued that since AI isn’t allowed for schoolwork, it shouldn’t be used in policy creation. I found this argument illogical because these are entirely different applications. AI systems used by policymakers provide quick, accurate information by collecting and analyzing data across complex systems; policymakers use them not to finish their work faster but to gain valuable insights that assist in decision-making. In contrast, students often use tools like ChatGPT simply to complete schoolwork faster, which undermines the intended purpose of the assignment.
Regardless, AI is currently being implemented across many fields, and this discussion with my classmates prompted me to explore other concerns about AI implementation. I had previously seen multiple instances in the media where people were disappointed when organizations added AI to their systems, with many arguing that these AI systems were not ready to be released yet. I wanted to explore this topic further, so here are two other recent concerns I identified!
Google AI
Google recently introduced a feature that displayed an “AI Overview” at the top of search results, summarizing relevant information from the web to answer the user’s query. However, Google has since scaled the feature back.
Many users found that the system generated incorrect and sometimes even harmful responses; some examples are described below.
This spread of misinformation stems from the AI system collecting and displaying information from social media posts or parody blog posts written as jokes. Stripped of the context of the original website, the intent behind that information can be difficult to recognize. Google’s developers need to improve the overview feature so it can determine whether information was intended as a joke and prioritize credible sources. The resulting misinformation can be extremely harmful, encouraging users to engage in dangerous activities, from putting glue on pizza to jumping off the Golden Gate Bridge. This is especially dangerous for younger users, who may not know enough to discern the truth. Many publishers are also concerned that the AI system might reduce traffic to original websites by giving users summaries of their content, discouraging visits to the actual sites.
When asked about these responses by Business Insider, a Google spokesman responded with the following:
“[These are] extremely rare queries and aren’t representative of most people’s experiences. [The] vast majority of AI Overviews provide high-quality information. We conducted extensive testing before launching this new experience and will use these isolated examples as we continue to refine our systems overall.”
While many of these prompts were unusual, Google serves over 4.9 billion users and roughly 15% of daily searches are entirely new, so plenty of people may still come across such results. As companies race to add AI to their products, Google likely launched this feature to stay ahead of competitors such as OpenAI and Microsoft. But precisely because Google is a company so many people rely on for accurate information, it is essential that a feature like this works reliably.
AI in Healthcare
More and more hospitals are starting to use AI tools to diagnose patients. With AI, healthcare providers can swiftly and accurately analyze vast amounts of medical data, leading to quicker and more precise diagnoses. This can improve patient outcomes, reduce costs, and increase overall healthcare efficiency.
However, many people are concerned about the safety and liability risks of using AI in healthcare; six out of ten Americans say they are uncomfortable with the practice. In a recent talk, Michelle Mello, a professor of law and health policy at Stanford, discussed the risks associated with AI in healthcare.
She underscores the urgent need for clear regulatory frameworks for AI tools in healthcare and explains two significant risks of proceeding without them. First, there is no standardized testing process for AI in healthcare comparable to the rigorous FDA approval process for drugs. I found this point interesting: just as a mistake in the creation of a drug can cause severe health problems, an error in an AI system could do the same, for example by recommending a faulty prescription.
Second, she explains that it is challenging for courts to define the appropriate use of new technology, since judges rarely understand how AI works, given its complexity. In addition to the regulatory frameworks she suggests, I believe there should also be a regulatory agency specifically for AI in healthcare, staffed by government experts in the field. These experts would oversee the development, implementation, and regulation of AI technologies, ensuring they are integrated safely into healthcare practice while safeguarding patient welfare and maintaining high standards of care.
She also highlights the competitive race among companies to be the first to develop AI models for healthcare. This competition increases risk, because systems may be released before they are fully optimized. In healthcare, getting things right is crucial because errors can have severe consequences. Regulation would likely push companies to spend more time refining their systems so they meet the required standards.
Mello also explains that hospitals should weigh a system’s benefits against its potential risks to determine whether it is worth implementing. This is not just a responsibility but a significant duty: while AI systems can benefit doctors, they should never compromise a patient’s health, which must always be a hospital’s top priority.
Additionally, Mello asserts that hospitals should be transparent about using AI. They should disclose this information to patients and obtain consent before any procedure to prevent potential lawsuits. Hospitals should also establish agreements with AI vendors that define liability in case issues arise. This is crucial because it is difficult to determine who is liable when an AI system fails, so hospitals must settle this with AI vendors and patients beforehand. This emphasis on transparency provides reassurance and keeps all parties informed.
Overall, many companies are racing to build and ship AI systems to stay ahead of competitors, which is risky: these systems can spread harmful misinformation or compromise essential healthcare decisions. Companies need to spend more time training and testing their AI systems, and regulation needs to improve in areas like healthcare, where an AI mistake could have serious consequences.
Citations: