Potentials and Pitfalls of ChatGPT for Students
Words by Sunil Ale
Image by Matheus Bertelli
Have you ever wished you could have a conversation with a computer and have it understand you as well as another human would? While it may sound like science fiction, ChatGPT and other chatbots built on the latest Large Language Models are making it a reality, with the potential to revolutionize the world. Released at the end of November 2022, ChatGPT is one of the latest developments in artificial intelligence, allowing users to interact with a computer through chat. Although the technology is still in its infancy, it can be used in many ways: chatbots, virtual assistants, language translation, personalization of user content, customer service, education, research, and content generation, as well as generating code in multiple programming languages and summarizing complex research papers. However, it also comes with challenges such as biases, privacy concerns, reliability, and ethical considerations.
On March 30th, the "ChatGPT, LLMs and AI Opportunities and Ethical Challenges" conference was held at the Grove School of Engineering. It was hosted by Radha Ratnaparkhi of the T.J. Watson Research Center, a GSOE Executive in Residence. The panelists were YingLi Tian, CUNY Distinguished Professor and Professor of Electrical Engineering; Renata Kobetts Miller, Dean of the Division of Humanities and the Arts and Professor of English; and Tilak Agerwala, former IBM executive and GSOE Executive in Residence.
Professor Tian spoke about the potential of GPT and similar technologies to assist visually impaired individuals with text recognition, and about their use in research on generating and processing images, videos, and audio to train other AI models. She predicted that these technologies will have a significant impact on human-computer interfaces.
Professor Agerwala noted that GPT is the fastest-growing software ever and that it can hold conversations and write content much like a human. He explained that it works by using statistical models to predict, word by word, the most likely continuation of a text. However, he also acknowledged that the technology is still imperfect: symbolic reasoning is challenging to program, and implicit knowledge is hard to capture in code. For example, a computer cannot understand that a lemon is sour because it cannot taste one.
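The next-word prediction Professor Agerwala described can be illustrated with a toy sketch. This is not OpenAI's actual model, and the candidate words and scores below are invented for the example; a real model would produce such scores from billions of learned parameters. The idea is simply that the model assigns a probability to every possible next word and samples from that distribution.

```python
import math
import random

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def next_word(candidates, scores):
    """Pick the next word by sampling from the probability distribution."""
    probs = softmax(scores)
    return random.choices(candidates, weights=probs, k=1)[0]

# Given the prompt "The lemon tastes ...", a trained model might score
# candidate continuations like this (numbers invented for illustration):
candidates = ["sour", "sweet", "loud"]
scores = [3.0, 1.0, -2.0]

print(next_word(candidates, scores))  # usually "sour" -- the likeliest word
```

Repeating this step, feeding each chosen word back in as part of the prompt, is how the model builds up whole sentences and essays, which is also why it can confidently produce fluent text that happens to be wrong.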
Professor Miller argued that GPT still lacks skills in the humanities. She acknowledged that GPT and related technologies could be applied to assignments such as essays and poems, but students who rely solely on them to generate their assignments (in her class specifically) are likely to fail because the quality of the writing is lower. The writing may also fabricate evidence to support theses on lesser-known topics. Professor Miller also touched on DALL·E, developed by OpenAI, the same company behind ChatGPT. DALL·E can generate images from textual prompts, but she argued that these images have diminished value compared to the original works of historical and current artists because they lack the artist's history, life story, and background that make the works interesting.
ChatGPT and related technologies still face several challenges, such as the accuracy of the information they generate and the difficulty of determining whether content was produced by AI. There are also issues with privacy, implementation in research, and biases in the data used to train these models. There have been many instances where AI bots had to be shut down because they were producing unsafe, sexist, and racist output. These technologies can also be exploited by technically competent people to produce spam and phishing attacks: even attackers who lack language skills can instantly impersonate, say, bankers in multiple languages. One major concern is the possible creation of polymorphic malware, computer viruses that mutate on the fly to evade detection.
Despite these challenges, the potential benefits of ChatGPT and related technologies cannot be denied. They have the ability to revolutionize how we interact with computers, perform research, and create content. It is crucial, however, to ensure that the technology is used ethically and responsibly to avoid negative consequences.
One way to address some of these challenges is by improving the quality of the data used to train these models. As more diverse and representative data is used, the accuracy and reliability of the generated content will improve. Additionally, it is crucial to have ethical and transparent guidelines and regulations in place to ensure that these technologies are used in a responsible and beneficial manner.
The conference highlighted both the potential and the challenges of ChatGPT and similar technologies. While there are concerns regarding accuracy, biases, privacy, and ethical use, the potential benefits of these technologies cannot be ignored. It is crucial to address these challenges and use these technologies responsibly to ensure they positively impact society. Please note that this article does not cover the subsequent conference sessions, which were scheduled to offer deeper insights into LLMs, AI, and their ethical implications, as we were unable to attend them.