Throughout the semester, we discussed and explored the rapid evolution of AI and the prevalence of disinformation. The discussions, readings, and group tasks helped me grow in my understanding of the topic. From my own viewpoint, I have various hopes, fears, and predictions about where AI is headed.
My hopes include stronger regulation and clearer ethical rules for the use of AI. I hope there will be more data privacy laws and requirements governing AI use to combat misuse that can lead to misinformation and disinformation. Another hope is that the continued advancement of AI can be put to good use, such as addressing climate change by analysing gas emissions data and weather patterns. The journal article by Nawaf Alharbe and Reyadh Alluhaibi discusses the role of AI in addressing climate change through predictive modelling. Reading it gave me insight into how AI can be used for good purposes, such as improving efficiency and enhancing forecasting (Alharbe & Alluhaibi, 2023).
Another hope is that AI technology will be able to fact-check information at a more advanced level. This could ultimately help curb false information and prevent disinformation from spreading. I came across a dissertation whose aim was to ‘create a conversational AI for serving fact-checks, using a collection of already existing fact-checking articles’ (Nilsen, 2022). I hope such a system is built in the future, particularly because Nilsen intends it to let users ask questions about potentially false claims. That would be an excellent way to minimise disinformation and would also give anyone a platform for combating falsehoods.
My fears include a rise in cases of AI being used maliciously to spread disinformation. With the growth of AI comes a risk to privacy, and another fear is that personal privacy will cease to exist as AI misuse rises. Alongside privacy issues, a further fear is the manipulation of the information people have access to, which can be used to sway public opinion and exert control, thereby eroding human rights. I also fear a future erosion of trust in every setting. Disinformation will destroy trust in the media, government, and other communities where it is held most valuable. The constant fear that remains will make people hesitant when taking in new information (Liu, 2021). This, in turn, can undermine the progress of many organisations and companies.
My predictions include that more countries will adopt AI and give it a central role in their development, which can be both positive and negative. Another prediction is that AI will become so deeply ingrained in our society that it is used on a daily basis and takes over specific jobs that people once held; this poses a risk to those who need work, and it also raises questions about the safety of allowing AI such control over, and access to, information. A final prediction is the increased use of AI in social media recommendation algorithms and the growing harm this will do to our data and privacy.
References
Alharbe, N. and Alluhaibi, R. (2023) ‘The role of AI in mitigating climate change: predictive modelling for renewable energy deployment’, International Journal of Advanced Computer Science and Applications, 14(12). doi:10.14569/ijacsa.2023.0141211.
Liu, B. (2021) ‘In AI we trust? Effects of agency locus and transparency on uncertainty reduction in human–AI interaction’, Journal of Computer-Mediated Communication, 26(6), pp. 384–402. doi:10.1093/jcmc/zmab013.
Nilsen, O.P. (2022) Conversational AI for serving fact-checks. Dissertation. UIS.