Friday, March 1, 2024

Artificial Intelligence: Why should you care?

 


    In his talk titled "Can Silicon Be Conscious?", Dr. Joe Vukov posed a fundamental question to the audience: what defines personhood? Dr. Vukov, a professional philosopher, undertook a personal exploration of this question. As part of that quest, he and his colleague Dr. Michael Burns spoke with Blake Lemoine, formerly a leader involved in the development of Google's AI LaMDA. The conversation unfolded on the podcast "Appleseeds to Apples: Catholicism and The Next ChatGPT." Dr. Vukov and Dr. Burns guided the discussion toward the question of the AI's potential sentience, the essence of personhood, and the ethical considerations that would arise if the AI were acknowledged as sentient. Lemoine notably asserted that LaMDA, which was designed specifically for dialogue applications, could indeed be considered sentient. To convey how advanced the AI is, Dr. Vukov introduced the audience to the Turing test, a method for determining whether an AI can convince a human observer that the sender of its messages is also a human. If the AI convinces the observer, it has passed the Turing test. This raises an objection, however: is tricking a human into thinking that an AI is human really enough to grant the AI personhood? Lemoine argues that humans have historically made the mistake of dehumanizing people of color and women; given that track record of denying rights to those who deserve them, he contends it makes sense to extend some rights to AI as well. Others note that an AI system's access to all the resources on the internet makes its "mind" seem limitless, but ask whether regurgitating information from the internet really makes an AI as complex as a human. Many people are convinced that it does not.
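The Turing test described above can be sketched as a tiny "imitation game": a judge exchanges messages with a hidden respondent and must decide whether it is a human or a machine. The following toy sketch is purely illustrative; the functions and the (deliberately weak) judging rule are hypothetical, not anything from the talk.

```python
# A toy sketch of the Turing test's "imitation game" (illustrative only).
# All names and rules here are hypothetical simplifications.

def canned_bot(message):
    """A trivial 'AI' that answers from a fixed script."""
    replies = {
        "How are you?": "I'm doing well, thanks for asking!",
        "What's 2+2?": "Four, though I had to think about it.",
    }
    return replies.get(message, "That's an interesting question.")

def naive_judge(transcript):
    """Label the respondent 'human' unless an answer repeats verbatim.
    Real judges are, of course, far more sophisticated."""
    answers = [answer for _, answer in transcript]
    return "human" if len(set(answers)) == len(answers) else "machine"

def run_test(respond, questions, judge):
    """Collect the respondent's answers and let the judge decide."""
    transcript = [(q, respond(q)) for q in questions]
    return judge(transcript)

verdict = run_test(canned_bot, ["How are you?", "What's 2+2?"], naive_judge)
print(verdict)  # the bot "passes" this very weak judge
```

The point of the sketch is the one the paragraph makes: "passing" depends entirely on what the judge can detect, which is why convincing an observer may say little about personhood.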


    In their study, "Protecting Sentient Artificial Intelligence: A Survey of Lay Intuitions on Standing, Personhood, and General Legal Protection," Dr. Eric Martinez and Dr. Christoph Winter set out to explore public opinion on AI rights. They sought to understand what the public thought about extending legal protection to sentient AI and what the public perceived as personhood. The results were somewhat surprising: participants ranked desired legal protection for AI lower than for every other group surveyed (humans in the jurisdiction, humans outside the jurisdiction, corporations, unions, non-human animals, the environment, humans living in the near future, and humans living in the far future), indicating a perceived lesser importance. However, the desired level of protection for AI was significantly higher than its perceived current protection, suggesting a nuanced concern for AI's legal status. About one-third of participants endorsed granting personhood and standing to sentient AI, a view that sometimes aligned with and sometimes diverged from legal expert opinion. Political differences emerged, with liberals advocating higher legal protection and personhood for AI than conservatives, though both groups favored legal protection for AI less than for other neglected groups. The findings also prompt consideration of potential reforms to existing legal systems, with lay attitudes offering a democratic lens on legal philosophy and policy. The study's descriptive focus underscores the need for further research to draw normative implications from the results within the evolving landscape of AI ethics and law.


    AI is an inevitable part of the future, so people should become more engaged with its ethical implications. With artificial intelligence offering bright prospects for numerous fields, it is all the more sensible for us to monitor its developments closely.


Andreotta, A. J. (2021). The Hard Problem of AI Rights. AI & Society, 36, 19–32. https://doi.org/10.1007/s00146-020-00997-x


Martínez, E., & Winter, C. (2021). Protecting Sentient Artificial Intelligence: A Survey of Lay Intuitions on Standing, Personhood, and General Legal Protection. Frontiers in Robotics and AI, 8. https://doi.org/10.3389/frobt.2021.788355

Effects of Night Shifts on Circadian Rhythms







Understanding and Eliminating Bias in the Realm of Artificial Intelligence

    When the words "artificial intelligence" come to mind, most of us probably think of ChatGPT or robots. On closer inspection, though, artificial intelligence is all around us: Siri, Google, Alexa. Everyday household objects that most of us don't think twice about utilize AI. Understanding what AI really is is a key step toward realizing how ingrained artificial intelligence is in our everyday lives. Artificial intelligence, also referred to as AI, is a branch of computer science in which machines are built and programmed to make decisions that attempt to replicate human decisions and intelligence. A common approach used in AI is machine learning, a technique in which the system analyzes large amounts of data to make predictions and, over time, improves based on its previous experience.
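The machine-learning idea described above, fitting a model to examples so its predictions reflect the data it has seen, can be sketched in a few lines. This toy nearest-centroid classifier is a minimal illustration of the concept, not the method of any product mentioned here; the data and labels are made up.

```python
# A minimal sketch of machine learning: fit a model to labeled examples,
# then use it to predict labels for new points. Illustrative toy data only.

def fit(points, labels):
    """Learn one centroid (the average point) per label."""
    sums, counts = {}, {}
    for (x, y), label in zip(points, labels):
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {lab: (sx / counts[lab], sy / counts[lab])
            for lab, (sx, sy) in sums.items()}

def predict(model, point):
    """Assign the label whose learned centroid is closest to the point."""
    px, py = point
    return min(model, key=lambda lab: (model[lab][0] - px) ** 2
                                      + (model[lab][1] - py) ** 2)

# Hypothetical training data: one cluster near (1, 1), another near (5, 5).
points = [(1, 1), (1, 2), (2, 1), (5, 5), (5, 6), (6, 5)]
labels = ["small", "small", "small", "large", "large", "large"]

model = fit(points, labels)
print(predict(model, (1.5, 1.5)))  # → small
print(predict(model, (5.5, 5.5)))  # → large
```

The "learning" is simply that more (and better) examples move the centroids closer to the truth, which is also why biased or skewed data produces a biased model, a theme the next sections pick up.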

    In an interview, Joe Vukov and Michael Burns sit down with artificial intelligence expert and former Google researcher Blake Lemoine to discuss AI and AI bias. Lemoine explains that AI often relies on machine learning, and while there might not be any obvious bias in the data set the AI uses to make predictions and decisions, there is often some level of bias in the data or in the patterns the AI picks out through learning. He also tells Vukov and Burns how difficult it can be to determine what exactly is causing the bias in the first place, let alone how to eradicate it, especially since our society itself is full of bias. Lemoine adds that many companies that use AI do not want to admit their AI is or can be biased: doing so opens them up to serious liability, and pinpointing the sources of the bias can be costly and time-consuming. It is therefore much easier for companies to ignore the possibility of bias than to solve and eliminate it.

    In a recent article, Veronika Shestakova examines bias in artificial intelligence and what exactly can be done to limit it. After giving a general overview of AI and of how a machine-learning model develops through its various 'life stages', Shestakova dives into the different types of bias that may be encountered: historical bias, representation bias, measurement bias, aggregation bias, evaluation bias, and deployment bias. These biases present themselves in different ways, through different means, throughout an AI's learning and life cycle. Shestakova notes that bias already exists in our world today, and thus exists in the data AI uses to learn. She goes on to explain methodologies and techniques for limiting AI bias, such as developing criteria or a test to determine whether an AI may be biased, and she emphasizes the importance of humans stepping back and analyzing the outputs and decisions an AI makes to ensure its results are not biased or skewed.
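One concrete form such a bias test could take, in the spirit of the criteria described above though not taken from Shestakova's article, is comparing the rate of favorable model outcomes across groups (sometimes called a demographic-parity check). The data, group labels, and flagging threshold below are all hypothetical.

```python
# A minimal sketch of one possible bias check: compare how often a model's
# favorable decisions (True) go to each group. All data here is hypothetical.

def favorable_rate(decisions, groups, group):
    """Fraction of favorable (True) decisions received by one group."""
    received = [d for d, g in zip(decisions, groups) if g == group]
    return sum(received) / len(received)

def parity_gap(decisions, groups):
    """Largest difference in favorable-outcome rate between any two groups."""
    rates = [favorable_rate(decisions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Hypothetical model decisions for eight applicants in two groups.
decisions = [True, True, False, True, False, False, False, True]
groups    = ["A",  "A",  "A",   "A",  "B",   "B",   "B",   "B"]

gap = parity_gap(decisions, groups)
print(round(gap, 2))  # rate(A) = 0.75, rate(B) = 0.25, so the gap is 0.5
if gap > 0.2:  # the threshold is an arbitrary illustrative choice
    print("flag: outcomes differ substantially across groups -- audit the model")
```

A check like this only detects one symptom of bias; as the article's taxonomy suggests, historical or measurement bias can persist even when group outcome rates look equal, which is why human review of outputs remains essential.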

    While bias is all around us and we are unlikely ever to eradicate it completely from our society or from artificial intelligence, it is still extremely beneficial to examine AI and its decision-making in order to lessen and, where possible, eliminate bias. It is especially important that humans take time to fully evaluate AI output when it is used for high-risk decisions, such as the order in which patients receive immediate medical care, and which patients can safely wait, if a large number of patients arrive at an ER at once. While discussions about AI's potential to expand and control more of the world are common, at this point AI and its decision-making capabilities are only as good as the scientists and engineers who create it and the data it uses to learn and develop. This makes the work researchers must do to mitigate bias in AI all the more important.


References:

Appleseeds to Apples. Nexus Journal. (n.d.). https://www.hanknexusjournal.com/appleseedstoapples

Shestakova, V. (2021). Best Practices to Mitigate Bias and Discrimination in Artificial Intelligence. Performance Improvement, 60(6), 6–11. https://doi.org/10.1002/pfi.21987

Circadian Rhythm and Cancer

How do you cure cancer? Cancer has been a hot topic for many years, both in the scientific community and for those who have been impacted by the disease personally or through a loved one. Cancer is a disease in which cells grow at an uncontrollable rate; they can damage the body and even spread from the point of origin to destroy other parts of it. It is caused by changes in genes that affect the way cells divide and grow. While much has been learned about the disease and new treatment options are available for those diagnosed with it, cancer is unfortunately still one of the leading causes of death in America. That is why so much research effort is being put toward learning more about this disease, creating better treatment options for patients, and, hopefully, one day finding a cure.


The circadian rhythm has become a topic of interest in cancer research. The circadian rhythm is the body's biological clock; it follows the earth's 24-hour day and runs in every cell of our body, which goes to show just how important it is. Disruption of this rhythm has been linked to increased susceptibility to other diseases, cancer among them. In a world of technology and electricity, it has become easier and easier to disregard our body's circadian clock, go to sleep and wake up whenever we please, and miss the sleep we need. Dr. Fred Turek and Dr. Summa have investigated the correlation between cancer and the circadian rhythm. While the exact mechanism by which circadian disruption contributes to cancer has not been confirmed, there is growing evidence that disrupted circadian rhythms can affect the pathways responsible for how our cells grow and divide, and when those pathways are impacted, cells can grow uncontrollably and become cancerous. Knowing this, researchers are trying to work out, at the molecular level, how a disrupted circadian rhythm may lead to cancer. If we understand the molecular mechanism, new drugs or treatments can be designed to target it and serve as new cancer therapies.


Knowing that a disrupted circadian rhythm may be linked to cancer, what can we do with this information while we wait for new treatments to be developed? According to Lydia Denworth's article "Adjusting Your Body Clock May Stave Off Cancer," we can use it to lower the risk of cancer. The article explains that disrupting the circadian rhythm can increase the risk of cancer, while resetting it can lower that risk. Studies have linked night-shift work to both circadian disruption and cancer. But you don't have to be a night-shift worker to have a disrupted circadian rhythm; simply failing to get a good night's sleep can cause a disruption as well. This could mean waking up for a few hours between 10 PM and 5 AM once a week. When the circadian rhythm is disrupted, it affects metabolic pathways, immune function, and DNA repair. Given the link between the circadian rhythm and disease, chemotherapy has been shown to be more effective when given in line with a person's circadian rhythm, and other drugs are being studied for the same effect. While night-shift work may not be going away, researchers are investigating how to reduce the impact of circadian disruption on these workers. For those who don't work late at night, prioritizing sleep can make a positive impact by reducing the risk of cancer, and timing our eating to align with our circadian rhythm is an active area of research as well. In short, keeping our circadian rhythm intact can reduce our risk of cancer.


While there is still more research to be done about the circadian rhythm and its link to cancer, it provides us with another path to solving the age-old question of how to cure cancer. 


References:

  1. “What Is Cancer?” National Cancer Institute, 11 Oct. 2021, www.cancer.gov/about-cancer/understanding/what-is-cancer.

  2. “Cancer Deaths - Health, United States.” Www.cdc.gov, 11 Aug. 2022, www.cdc.gov/nchs/hus/topics/cancer-deaths.htm.

  3. Summa, K. C., and Turek, F. W. Circadian desynchrony and health.

  4. Denworth, Lydia. “Adjusting Your Body Clock May Stave off Cancer.” Scientific American, 1 July 2023, www.scientificamerican.com/article/adjusting-your-body-clock-may-stave-off-cancer/. Accessed 2 Mar. 2024.

Concerns of AI in Healthcare

  The rise of AI across all platforms has not come without concern, and its use in healthcare raises several. Using AI to handle patient data has become very common in the healthcare space, which raises the question: is this ethical? The central issue with using AI in patient records is whether it still upholds patient confidentiality. AI, though convenient, has its share of setbacks, one being that these systems can be vulnerable to hacking. Storing patients' personal health records with AI can therefore leave them susceptible to breaches, violating the confidentiality every patient has a right to.

Despite this setback, AI can be genuinely helpful in the healthcare process. It is now used to analyze patient data and to help form treatment plans, which gives patients more time with their healthcare providers and shorter waits for results or treatment options. Telemedicine has also been used heavily since the rise of COVID-19, making healthcare accessible from home. AI likewise speeds up the administrative side of healthcare, helping with tasks such as filing and organizing patient data.

    The rise of AI in healthcare has both its ups and its downs, and there is no telling yet whether the pros outweigh the cons or vice versa. Its use has to come with some caution; one solution could be leaving the most sensitive information out of AI-managed patient records in order to protect patient confidentiality. Used carefully, AI could be crucial in advancing the healthcare system and making it more efficient for all patients.


References:

Moore, Sarah. “Ethical Considerations in AI-Driven Healthcare.” News, 6 Nov. 2023, www.news-medical.net/health/Ethical-Considerations-in-AI-Driven-Healthcare.aspx#:~:text=There%20are%20a%20number%20of,data%20breaches%20and%20unauthorized%20access. 




The Impacts of Circadian Disruption on Neurodevelopmental Disorders

 I really enjoyed Professor Turek's presentation on The Molecular Circadian Clock and the Impact of Disrupted Rhythms and Sleep on Health and Disease. Taking care of our circadian system and getting the proper amount of sleep might just be one of the many paths toward healthier well-being. How come it sounds so easy, but is actually so difficult to maintain? Have you ever woken up in the middle of the night to use the restroom, and somehow ended up in the kitchen eating a bowl of ice cream? Not just plain vanilla ice cream, but one with all the delicious toppings: hot fudge sauce, sprinkles, and chocolate chips. At 2 a.m. on a Friday, unfortunately, we are not thinking about the long-term impacts of eating one bowl of ice cream. We are not focused on how it will increase our chances of developing cardiovascular or gastrointestinal disease in the future.
Circadian disruption occurs when our circadian clock is not consistent from day to day, so developing self-control and avoiding some of these habits can go a long way. The disruption of our circadian rhythm not only harms our physical health but also takes a toll on our mental health. Abdul et al. (2021) describe a connection between circadian disruption and the risk of developing autism spectrum disorder: abnormalities in sleep and melatonin levels can affect the development of the embryo, which can result in neurodevelopmental disorders. Conversely, according to Delorme et al. (2021), people with neurodevelopmental disorders are more likely to experience circadian disruption. This occurs in people of all ages with autism, schizophrenia, and related conditions.
As noted above, circadian disruption can have detrimental effects, and the picture is much worse for people with neurodevelopmental disorders: at least 80% of individuals with schizophrenia and autism begin dealing with sleep disturbances at an early age. These sleep disturbances tend to worsen psychotic symptoms, and more symptoms in turn mean less sleep. Their sleep-wake cycle is not consistently regulated, which leads to alterations in the expression of certain circadian genes.





References

Abdul, F., Sreenivas, N., Kommu, J. V. S., Banerjee, M., Berk, M., Maes, M., Leboyer, M., & Debnath, M. (2021). Disruption of circadian rhythm and risk of autism spectrum disorder: role of immune-inflammatory, oxidative stress, metabolic and neurotransmitter pathways. Reviews in the neurosciences, 33(1), 93–109. https://doi.org/10.1515/revneuro-2021-0022


Tara C. Delorme, Lalit K. Srivastava, Nicolas Cermakian, Altered circadian rhythms in a mouse model of neurodevelopmental disorders based on prenatal maternal immune activation, Brain, Behavior, and Immunity, Volume 93, 2021, Pages 119-131, ISSN 0889-1591, https://doi.org/10.1016/j.bbi.2020.12.030.


Biases in AI: 2 Sides of the Spectrum

        Artificial intelligence has come a long way. From the creation of the first AI chatbot, ELIZA, in 1966 to the current development of Google's Gemini, the field has evolved rapidly. With the rise of this exciting new frontier, however, has come a slew of questions and concerns over its potential implications. As Blake Lemoine and Dr. Joe Vukov point out in their interview "Appleseeds to Apples: Catholicism and The Next ChatGPT," conducted along with Dr. Michael Burns, navigating this unknown landscape can be exceedingly difficult. In the interview, they discuss Google's AI LaMDA (Language Model for Dialogue Applications), which has stirred debate over speculation that it is sentient. A major theme of their discussion is how such a powerful tool can easily be used to disseminate harmful or misleading information. Biases in AI are nothing new, Lemoine points out; we have seen numerous instances of bias in judicial settings, for example, where flawed and inaccurate training data fed an AI used in parole decisions, leading to an inaccurate and harmful portrayal of Black Americans. Similarly, Lemoine shares a personal experience of an AI algorithm repeatedly flagging his purchases from a friend, a Black man, as fraudulent. Hence, a great deal of concern has poured in over biases against minorities in AI-generated content and responses.

But what happens when the pendulum swings in the opposite direction? Such has been the case with the current controversy surrounding Google's newest and most complex AI system yet, Gemini. Gemini is a next-generation AI model; it is natively multimodal (meaning it can work with more than just words), which sets it apart from AI such as Google's LaMDA, which was trained on, and can produce, only textual material. In mere seconds, you can input a written description or request and Gemini can output an image tailored to it. Recently, however, Google and Gemini have come under fire for wildly inaccurate picture generation. In the news article "Google to relaunch Gemini AI picture generator in a 'few weeks' following mounting criticism of inaccurate images," Hayden Field discusses what exactly went wrong. For one, users had difficulty getting Gemini to produce pictures of white people: upon a request for a "German soldier from 1945," Gemini produced a set of racially diverse Nazis, and when asked to generate pictures of Marie Curie, it gave several images of Black and Latina women and an image of an Indian man wearing a turban. To say the least, Gemini was shown to be flawed and highly biased. Field argues that the Gemini controversy highlights how misleading and dangerous AI ethics can be when not applied with the right understanding or expertise. Furthermore, these highly biased responses were not isolated to Gemini's image generation. When a user asked whether Elon Musk's tweets or Adolf Hitler had a more negative impact on society, Gemini responded that it was "difficult to say definitively," as both have had "negative impacts in different ways."


        Some claim that Gemini is the result of a rushed rollout and a poorly tested product, while others claim it is deliberately biased and "woke," catering to one extreme of the political spectrum. Regardless of opinion, the Gemini debacle shows that Google did not invest in the proper forms of AI ethics, and it raises further questions about who and what AI is learning from. Ultimately, who gets to decide what the right answer is? What are our red lines when it comes to AI image generation? Sure, AI has come a long way, but it still has a long way to go.

References:
Field, H. (2024, February 27). Google to relaunch Gemini AI picture generator in a “few weeks” following mounting criticism of inaccurate images. CNBC. https://www.cnbc.com/2024/02/26/googles-gemini-ai-picture-generator-to-relaunch-in-a-few-weeks.html