Artificial intelligence (AI) has been a topic of discussion since the late 1960s when HAL first told Dave he was “unable to” open the pod bay doors because “the mission is too important for me (HAL) to allow you (Dave) to jeopardize it.” Although the concept of AI had been around well before “2001: A Space Odyssey” hit the big screen, it took a self-aware computer in a science fiction movie to bring the concept to the mainstream—and do so in a highly negative light.
These days, AI is being used as a blanket term to describe many different things, but generative AI seems to be the flavor of the month. Generative AI describes AI systems that can create text, images, or other media based on user prompts. Chatbots and other online tools are increasingly using generative AI to produce human-like responses to customer inquiries, with the highly popular (and controversial) ChatGPT being perhaps the best-known example.
A Generative AI Primer
Generative AI gains its intelligence through deep learning, and the more data it ingests, the “smarter” it becomes. That is not to say that the technology thinks; rather, it creates outputs based on its analysis of the large data sets it is fed, finding patterns and uncovering trends that humans might not have the capacity to detect. Google is using generative AI to address problems in healthcare; organizations are incorporating it to improve the effectiveness of their sales teams; law firms are using it to streamline Lemon Law litigation. The list goes on, but the point is this: There is scarcely an industry that will not benefit from generative AI.
That said, it’s worth noting that generative AI is a textbook example of “garbage in, garbage out.” There have been many reports about the bias inherent in generative AI models, bias rooted in the human-created information they are fed, information that reflects our own biases around race, gender and other human characteristics. The more skewed that information is, the more likely the model is to produce output that is wrong, both in its facts and in the perceptions it reinforces. Articles in Bloomberg and even Rolling Stone highlight the dangers these biases can cause, and already are causing, through the increased use of generative AI.
Finding the Good in Generative AI
Understanding the biases and risks associated with generative AI can go a long way toward building better large language models (LLMs), the algorithms that power generative AI. Prompt engineering, currently a nascent but growing area of data science, is a way for organizations to add context to the inputs they send an LLM and to tailor its outputs for specific industries without retraining the model itself, as the sketch below illustrates.
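To make that concrete, here is a minimal sketch of what prompt engineering can look like in practice. Everything in it is a hypothetical illustration rather than anything described above: the Lemon Law domain context, the worked examples and the stubbed complete() function standing in for whichever LLM API an organization actually uses.

```python
# A minimal prompt-engineering sketch: the same general-purpose LLM is
# steered toward an industry vertical purely by the context packed into
# the prompt. The domain text, examples and complete() stub below are
# hypothetical placeholders.

DOMAIN_CONTEXT = (
    "You are an assistant for a consumer-protection law firm. "
    "Answer only questions about state Lemon Law claims, and say "
    "'I don't know' when you are unsure."
)

# A couple of worked examples (few-shot prompting) to anchor tone and scope.
FEW_SHOT_EXAMPLES = [
    (
        "My new car has been in the shop four times for the same brake defect.",
        "Repeated repair attempts for the same defect may qualify under your "
        "state's Lemon Law; many states presume a 'lemon' after three or four "
        "unsuccessful attempts.",
    ),
]


def build_prompt(user_question: str) -> str:
    """Assemble domain context, worked examples and the live question."""
    parts = [DOMAIN_CONTEXT]
    for question, answer in FEW_SHOT_EXAMPLES:
        parts.append(f"Q: {question}\nA: {answer}")
    parts.append(f"Q: {user_question}\nA:")
    return "\n\n".join(parts)


def complete(prompt: str) -> str:
    """Stand-in for a call to whatever LLM API is in use."""
    return f"[model response to a {len(prompt)}-character prompt]"


if __name__ == "__main__":
    print(complete(build_prompt("Does my leased SUV qualify as a lemon?")))
```

Note that nothing here retrains the model; the industry specialization lives entirely in the prompt, which is why prompt engineering is a comparatively cheap first step before any fine-tuning.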
Over time, LLMs will shrink in size as organizations rely less on the general all-you-can-eat style of training and focus more on feeding the LLM with data that is more aligned with the need or the industry vertical. According to an article in Computerworld, “When LLMs focus their AI and compute power on smaller datasets … they perform as well or better than the enormous LLMs that rely on massive, amorphous data sets. They can also be more accurate in creating the content users seek — and they’re much cheaper to train.”
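The focused approach the Computerworld quote describes can be sketched in code as well. The following is a hedged illustration, assuming the Hugging Face transformers and datasets libraries; the small base model (distilgpt2) and the domain_corpus.txt file are placeholder assumptions, not anything specified above.

```python
# Hypothetical sketch of training a small, domain-focused model instead of
# relying on an enormous general-purpose LLM. The base model and corpus
# file are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL = "distilgpt2"  # a deliberately small base model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 models ship without one
model = AutoModelForCausalLM.from_pretrained(MODEL)

# Placeholder: a narrow corpus aligned with the industry vertical,
# e.g. legal filings, one document per line.
corpus = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-llm", num_train_epochs=1),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # fine-tunes the small model on the focused data set
```

The design point is the same one the quote makes: compute goes into a smaller, better-aligned data set rather than a massive, amorphous one, which tends to be far cheaper to train.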
Future-Ready Generative AI
There’s a lot that AI can do, and much of its potential has yet to be realized. But AI, and what it can do, is on the minds of technologists of all stripes. As generative AI, and other flavors of AI for that matter, continues to evolve and mature, it’s clear it will become less of a tool and more of a necessity. AI’s usefulness, however, will derive entirely from the human intelligence necessary to teach it well.