History and Development of AI Tools
While AI has grown increasingly popular and widely available, AI technologies are by no means new. The earliest roots of AI can be traced back to the development of early computers in the 1950s. Throughout the latter half of the 20th century, other technological advances played critical roles in developing what we now call artificial intelligence, but these early iterations operated on a much smaller scale and were not nearly as accessible to the average person.
AI tools began to enter public and commercial spaces with the release of virtual assistant products like Apple's Siri and Amazon's Alexa in the early 2010s. Both used natural language processing, which gives computers the ability to understand spoken and written language much as humans do. Siri and Alexa were not the first applications of natural language processing, but they brought the technology to average consumers rather than organizations like IBM and NASA. Today, more advanced AI tools like ChatGPT, DALL-E, and Gemini are available to the average consumer in both free and paid versions, greatly increasing their use and potential impacts on society. For a deeper history of AI, see this AI timeline by Coursera and IBM's article "What is artificial intelligence?," written by Cole Stryker and Eda Kavlakoglu.
Machine Learning and Large Language Models
Today's most popular AI tools are largely shaped by the development of machine learning, neural networks, and large language models (LLMs).
Tools like ChatGPT and Microsoft Copilot are built on LLMs. LLMs are advanced forms of artificial intelligence that allow computers not only to understand spoken and written language much as humans do, but also to generate human-like text in response. These models rely on massive amounts of training data, which includes text from a wide variety of sources (both from the Internet and from books and other print material) and contexts. This diverse training data lets the models learn the nuances of human language in all its forms. To learn more about how LLMs are created and used, check out these two Medium articles: "A Very Gentle Introduction to Large Language Models without the Hype" by Mark Riedl and "How Large Language Models work" by Andreas Stöffelbauer.
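To make the idea of "learning patterns from training text" a little more concrete, here is a minimal, purely illustrative Python sketch of a toy next-word model. It is not how any real LLM works (real models use neural networks trained on billions of documents), and the tiny sample corpus and function names below are invented for this example.

```python
import random
from collections import defaultdict

# Toy "training data" -- real LLMs learn from billions of documents instead.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
)

# Record which words tend to follow which: a crude stand-in for the
# statistical patterns an LLM extracts from its training text.
next_words = defaultdict(list)
tokens = corpus.split()
for current, following in zip(tokens, tokens[1:]):
    next_words[current].append(following)

def generate(start_word: str, length: int = 8) -> str:
    """Generate new text one word at a time, always choosing a word
    that followed the current word somewhere in the training data."""
    word = start_word
    output = [word]
    for _ in range(length):
        choices = next_words.get(word)
        if not choices:  # no known continuation, stop early
            break
        word = random.choice(choices)
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug ."
```

Even this toy version shows the core idea: the program never copies a full sentence from its training data, yet everything it produces is shaped by the patterns it observed there.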
The Age of Generative AI
Much of the AI technology we see in use today uses machine learning to identify patterns in more than just text, including images and other digital content. This type of advanced AI is what we currently call generative artificial intelligence (gen AI): AI that is designed to generate new text, images, video, or audio based on the content it has studied.
In order for the computer to generate new material, it needs to:
- be trained on large amounts of existing content
- identify patterns in that content using machine learning
- use those patterns to produce new material
To get a more detailed breakdown of how gen AI processes different kinds of digital content, check out the University of Illinois Urbana-Champaign's AI LibGuide.
Some of the content generative AI tools can create includes:
- text
- images
- video
- audio
To explore the various generative AI products out there today, see the Overview of AI Products page. And to learn even more about what gen AI is and how it works, see IBM's article "What is generative AI?," written by Cole Stryker and Mark Scapicchio.