Recently, Geoffrey Hinton—a key figure in the development of artificial intelligence (AI)—announced his resignation from Google. The news shocked many in the technology industry and prompted reflection on his reasons for stepping back. Hinton cited growing concerns over the future impact of AI technology: advancements in the field have accelerated more quickly than he anticipated, and he worries that we may not have the social, legal, and ethical infrastructure in place to keep up. By resigning, Hinton says, he gains the space to speak candidly about his worries without having to contend with a potential conflict of interest.
The media frenzy caused by Hinton’s resignation is only the latest in an ongoing series of developments in AI technology. The constant barrage of headlines and updates from the industry brings much excitement at the prospect of what capabilities AI will put at our fingertips. At the same time, Hinton’s resignation signals, for many, further anxiety about the rate at which we’re developing AI technology. With calls to temporarily pause AI development and growing concerns about how AI may impact the job market, you may be feeling a growing sense of anxiety about the future.
Here at DelCor, we’re excited about AI technology—it’s hard not to be! But we also want to keep a close eye on any signs of potential concern. For many of us, there’s not much to do at the moment besides staying tuned in and trying to be the most informed consumers we can be.
You may have seen our last article on AI in which we stressed the importance of getting out there and becoming intimately familiar with what AI technology can do and what it looks like. Well, we decided to take our own advice and ask ChatGPT to help us digest the news about Hinton. The results are interesting to say the least.
We provided ChatGPT with two prompts and asked it to write a blog post in response to each. Our first prompt asked ChatGPT how Hinton’s resignation from Google will affect the future of AI. The second asked about the validity of Hinton’s fear that AI poses an existential threat to humanity. We provided each prompt to ChatGPT twice to see how the responses might change. Here’s what we learned.
Prompt 1: How Will Hinton’s Recent Resignation from Google Affect the Future of AI?
In the first response, ChatGPT provides some basic background information about Hinton and his work before gesturing at some possible developments that might arise from Hinton’s resignation. From our perspective, ChatGPT appears to offer conjecture about trends that are “likely” to occur with no evidence or reference to precedent:
Hinton's resignation is likely to spark a competition for talent in the AI industry. With Hinton leaving Google, other companies and research organizations may try to recruit top AI researchers and engineers away from the company. This competition could drive up salaries and create a more challenging recruiting environment for Google and other tech giants.
ChatGPT goes on to say that “Hinton's resignation may also have some positive impacts on the future of AI. For one, it could inspire more diversity and innovation in the field.” Interestingly enough, ChatGPT closes its response by acknowledging that it’s difficult to predict the impact of Hinton’s resignation—but only after offering several confident-sounding predictions.
In the second response, ChatGPT offers a different perspective, stating that while Hinton’s resignation is important, it is unlikely to have a major impact on the industry in the long run. That conclusion is inconsistent with the predictions offered in the first response. While the two responses differ in tone and in what they predict about the impact of Hinton’s resignation, they share some traits. For one, they are almost exactly the same length. There are also notable similarities in style; for example, both clearly communicate a “conclusion.” The responses suggest that while ChatGPT has a clear idea of what a blog post should look like, the content is notably variable even with an identical prompt.
Prompt 2: How Valid is Hinton’s Fear that AI Poses an Existential Threat to Humanity?
In response to the second prompt, the two articles ChatGPT produced show a similar disparity in tone. The first response is fairly dismissive, asserting not only that we have strict ethical oversight in place but also that AI is far from capable of causing such harm.
The second response offers a much more balanced tone, opting to present competing perspectives on the matter by vaguely referring to experts in the field: “The validity of this fear is a matter of debate among experts in the field . . . some experts argue that the risk of an AI-induced existential threat is overstated.” While ChatGPT fails to name any experts or offer any kind of citation in any of the responses we procured, it at least attempts to place the positions it represents within the larger conversation taking place in the industry. We think this indicates a more nuanced and elevated approach than the first response.
So, what can we learn from this? While this wasn’t meant to be a study of ChatGPT’s ability to convey the intricate details of a complex issue, it was meant to be a learning opportunity. As it’s becoming increasingly likely (WAPO $) that we’ll encounter and use AI content, it’s more important than ever to expose yourself to what AI does well and what AI struggles with at the present moment.
We found that ChatGPT was more than capable of producing content that passed the bar in terms of style and grammar. And, depending on the topic and the audience, it seems ChatGPT can paint in broad strokes well enough to pass muster. So, while ChatGPT’s articles were believable, they weren’t all that informative. It failed to cite sources, present evidence in support of its claims, or provide a consistent perspective throughout. As a result, it’s difficult to walk away from those articles feeling like we learned much as readers.
While ChatGPT and similar tools may have these problems currently, they will likely evolve and improve beyond these faults. And there may be other models that are already more than capable of this feat. For that reason, in addition to the fact that ChatGPT’s articles were believably human, it is imperative that we train our eyes to be discerning. With plenty of evidence about AI hallucinations (NYT $), legitimate concerns about whether we have the policies and security measures in place to keep up with the rapid expansion of AI technology, and the tempered anxiety of experts like Geoffrey Hinton, it’s clear we’ll have to get better at understanding AI. As a consumer, it pays to read carefully and sometimes look twice at things to get a sense of what is being said and how it’s being said. Semantics is, we think, the last frontier for AI.
So, what can we do now to keep abreast of the meteoric evolution of generative AI?
- Observe: While AI has been around for years, generative AI is moving quickly and will take many twists and turns. Continue to learn about the multiple facets of AI from ethics to efficiencies.
- Practice and Compare: Identify pilots and opportunities to practice using AI tools. Teach staff and members how to validate information as you learn to work with new technology. For example, learn how to identify AI-generated content in articles and proposals.
- Research and Advocate: During the early phase of this technology, associations can play a role in influencing how generative tools find and use information.
Meanwhile, we’ll leave you with this public service announcement from 1971, more than 50 years ago. In it, the President’s Council on Physical Fitness encourages us to be less sedentary. It’s hard not to see the potential parallels with AI.