By Elon University News Bureau, staff
February 13, 2023
Assistant Professor of Computer Science Ryan Mattfeld offers insights into how the recently developed technology could become integrated into our lives.
The public has rapidly become fascinated with the power of a new artificial intelligence technology — ChatGPT — a chatbot developed by the research and deployment company OpenAI and launched late last year. Already it’s demonstrated the ability to serve up detailed answers to complex questions while using the information it processes and feedback from users to improve its ability to respond.
ChatGPT has proven to be versatile, with users employing it to compose music, debug computer code, write restaurant reviews, generate advertising copy and answer test questions. It delivers its responses in a conversational way, and it has sparked excitement about its potential along with some concerns about how it might be used.
But what exactly is ChatGPT and what does it say about the state of AI now, and in the future? Today at Elon reached out to Assistant Professor of Computer Science Ryan Mattfeld, who has been using ChatGPT as a discussion point in the classroom and has insight into how it could transform our use of AI.
ChatGPT was launched on Nov. 30, 2022, and I first heard about it in early December. The first sample I saw was of a piece of software written by ChatGPT, including an explanation of how the code worked. I was immediately skeptical about the capabilities, and I initially assumed the example was cherry-picked. I continued to hear more about ChatGPT, so at the end of the semester, I created an account and started playing around with it.
I was immediately blown away. I gave it an assignment used in one of my 2000-level courses from the prior semester, which included a specific programming task embedded in a story/scenario about creating robots. This was multiple paragraphs long, had unnecessary information, and referenced parts of the class that ChatGPT had no knowledge of. I thought it would stump ChatGPT.
I was wrong. It immediately produced an accurate solution, including an explanation for the code written and a description of how it solved the provided problem. I quickly emailed everyone in my department to alert them that ChatGPT was real and demanded attention.
ChatGPT and search engines have two different goals. The primary goal of a search engine is to try to direct you to accurate resources. The primary goal of ChatGPT is to generate reasonable-sounding responses to inputs using natural language. The most critical difference is that ChatGPT’s primary goal does not include accuracy. That is certainly a secondary goal, but it is not a guarantee.
That said, there is certainly overlap. Part of ChatGPT’s primary goal includes condensing a wide range of data sources into a useful response. So, when ChatGPT does provide an accurate response, it simplifies searches, allows for a dialogue, and provides answers in a conversational way. Rather than finding a list of sites, hunting for the ones that relate to your topic, opening the sites, finding the specific information you want, and often backing up and trying again, ChatGPT can provide a response in easy-to-understand language that is clear, direct and helpful.
In addition, ChatGPT allows you to follow up. If you want to dive deeper into one part of its response, you can: just ask for a more detailed answer on the part that interests you most.
Yes! In fact, this was one of the primary mechanisms in its development and continues to be used for fine-tuning.
Now, when using ChatGPT yourself, you will see options to give the responses it generates “thumbs up” or “thumbs down”. If you choose “thumbs down”, you can describe why you did not like the response. This is used to further fine-tune ChatGPT.
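As a rough sketch of the idea, user ratings can be logged as preference records that later feed a fine-tuning step. The data format below is an assumption for illustration, not OpenAI's actual pipeline:

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackRecord:
    prompt: str
    response: str
    rating: str        # "up" or "down"
    comment: str = ""  # optional explanation for a thumbs-down

@dataclass
class FeedbackLog:
    records: list = field(default_factory=list)

    def record(self, prompt, response, rating, comment=""):
        self.records.append(FeedbackRecord(prompt, response, rating, comment))

    def downvotes(self):
        # Thumbs-down records, especially those with comments,
        # are the most useful signal for targeted fine-tuning.
        return [r for r in self.records if r.rating == "down"]
```

In the real system, records like these become preference data for reinforcement learning from human feedback; the sketch only shows the collection step.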
Working in the field of computer science, my first inclination, of course, was to consider how ChatGPT is starting to be used in software development. It is surprisingly effective at writing good code for relatively simple programs.
However, it is not perfect. It still makes mistakes and can produce non-functional but plausible-looking code. So, I expect that the most successful software developers will use it as a tool to help but will need to be capable of reviewing its responses to identify and correct errors. There is still a strong need for knowledgeable humans in the loop to verify that a provided solution is correct rather than convincing but wrong.
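To illustrate what "convincing but wrong" can look like (this is a hypothetical example, not actual ChatGPT output), consider a function meant to return the last n items of a list. The one-liner looks idiomatic, but it silently misbehaves on an edge case a human reviewer should catch:

```python
def last_n_buggy(items, n):
    # Looks plausible, but in Python items[-0:] is items[0:],
    # so asking for zero items returns the WHOLE list.
    return items[-n:]

def last_n_fixed(items, n):
    # A reviewer spots the edge case and guards it explicitly.
    return items[-n:] if n > 0 else []
```

The buggy version passes casual testing with n = 1, 2, 3 and only fails when n is 0, which is exactly the kind of subtle error that makes human review essential.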
Another interesting perspective for computer scientists is that ChatGPT is accessible through a public API, which means that software developers can integrate it into new coding projects we develop. This means that we will soon see ChatGPT integrated into many, many other programs and applications. I expect that ChatGPT will improve over time, and that we will see it in more and more places. For example, ChatGPT is already integrated into Bing and is soon to be integrated with Microsoft Word.
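A minimal sketch of what such an integration involves, using only the standard library to build the JSON body of a chat request. The endpoint URL and model name reflect the API as of early 2023 and should be checked against OpenAI's current reference before use; the request is only constructed here, not sent:

```python
import json

# Assumed endpoint for chat completions (verify against OpenAI's docs).
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(user_message: str) -> str:
    """Return the JSON body for a single-turn chat request."""
    payload = {
        "model": "gpt-3.5-turbo",  # model name is an assumption
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }
    return json.dumps(payload)

body = build_request("Explain recursion in one sentence.")
```

An application would POST this body to the endpoint with an API key in the Authorization header and read the reply from the response's choices, which is all the plumbing a typical integration needs.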
Of course, ChatGPT is starting to affect areas outside of computer science as well. I created an assignment in the Technology and Society core capstone that I taught in January related to ChatGPT. The assignment included using and analyzing ChatGPT in several ways, including a competition where students tried — and largely failed — to identify which papers were written by their peers and which were written by ChatGPT. The final question in the assignment asked them to consider how ChatGPT may affect a specific area of society with supporting evidence. In particular, law, computer science, journalism, and education were a few areas where multiple students predicted ChatGPT will make a significant impact.
There are really several ways to consider this question. It has many upsides. It can make apps easier for anyone to use. Rather than the rigidly formatted, highly specific requests that computers often require, common language will soon be effective much more broadly. This also has positive accessibility implications, assisting people who have language deficiencies. There are, of course, also all of the other cool things it can do, from playing games to helping remove writer’s block, to explaining complex concepts in simpler terms, to simulating a conversation.
There are, of course, also some downsides and risks. Broadly speaking, if someone relies on ChatGPT too much, it could hinder their development. Making mistakes is critical in the learning process. If ChatGPT is used to skip this step, it could lead to a false sense of ability and confidence.
In addition to this more indirect but very significant concern, ChatGPT could also be directly used for harm. Prior iterations of ChatGPT and other generative AI (the broad term for this type of technology) have been in development for years. Those iterations pulled directly from information available on the internet without a filter against hatred, lies, and biases. Part of the reason ChatGPT has taken off is that it is currently the most successful at reducing harmful instructions, biased content, and inaccurate information. However, none of these shortcomings have been completely eliminated. ChatGPT is very effective at generating very convincing narratives, even if they are not true.
I am most immediately concerned with how easy ChatGPT may make the lives of those seeking to widely spread misinformation. The concept that we are or soon will be in a “post-truth world” scares me.
Google, in particular, is extremely concerned by ChatGPT. Its business model is almost entirely based on collecting advertising revenue, which is driven by the number of people using its service. The company has diversified some across YouTube, cloud computing, and other areas. However, if ChatGPT ends up replacing even 50% of Google searches, then I would guess that Google loses about 30% of its total revenue.
The tech industry knew generative AI would be coming soon, but ChatGPT changed the “soon” to “now”, so the other companies that have been working on these technologies are now extremely focused on releasing their versions. It will be interesting to compare the effectiveness of each of the generative AI options. I suspect that many will be released before they are ready.
I was first blown away by its ability to code, but that is a narrow application. The more I have used it, the more I would say it amazes me with its versatility and ability to hold a dialogue. For example, I have used it to generate a text-based adventure very similar to the choose-your-own-adventure books I remember loving as a child. You can pick a theme or topic for your adventure or allow ChatGPT to choose. Then you can make decisions and change the way the story develops. Alternatively, you can ask it to explain quantum computing in a way that a 10-year-old can understand, and it does a pretty good job. You can provide samples of your own writing and ask it to respond to a question in your writing style. You can ask it to write a short story in the style of your favorite author.
I think it shines in its ability to help with writer’s block or in other scenarios where you are just stuck. For example, my three-year-old daughter was invited to her five-year-old friend’s birthday party. I was having trouble coming up with gift ideas for a five-year-old, so I asked ChatGPT to give recommendations. It provided a great list of varied ideas and even added:
“Remember that every child is different and what may be a hit with one child may not be with another, so it’s always a good idea to consider the child’s interests and personality when choosing a gift.”
I had no knowledge of this particular five-year-old’s interests, so I requested more choices in the category of Legos and building blocks, which sounded like the most fun to me. With my follow-up request, ChatGPT provided specific options including classic Lego sets, Duplo blocks, Mega Bloks, magnetic building blocks, and themed building sets. It described each choice in detail and noted when one set might be more appropriate than another.
This two-minute exchange saved me probably 20 minutes of hunting through websites to find options and helped both provide a broad range of starting options and more detail for the specific types of gifts I was interested in.