In late October 2024, OpenAI announced an addition to its popular large language model, ChatGPT. After a selective launch of the aptly named SearchGPT in July, the American AI research organization (which also has plans to expand in Europe) decided to integrate the new search functions directly into the chatbot rather than promote them as a separate product.
Google has long dominated the search engine market, but AI creators and supporters are seeking to set a new precedent. Last year, Google’s parent company, Alphabet, added AI-powered summaries, generated by its Gemini models (formerly known as Bard and rebranded earlier this year), to users’ search experiences to remain competitive.
SearchGPT also rivals big names like Microsoft’s Bing with Copilot (developed via its partnership with OpenAI) and up-and-coming services like Perplexity. The new features are designed to enhance the user experience by providing more accurate and relevant information directly within conversations. This advancement integrates advanced search capabilities, allowing users to pull data from a broader range of sources effortlessly. In other words, this isn’t the typical search experience we’ve become accustomed to with Google.
OpenAI introduced SearchGPT as a ChatGPT search engine that “leverages third-party search providers, as well as content provided directly by [its] partners, to provide the information users are looking for.” The private company, valued at $157 billion, stated a commitment to working with publishers and reporters to provide a “thriving ecosystem of [content] creators” that keeps “journalism at the forefront of its efforts… bringing more choice to search.” In other words, unlike a standard ChatGPT response, SearchGPT will answer questions, cite the publication from which the information was pulled, and link back to it.
SearchGPT differs from Google because you aren’t just typing in a word or phrase that yields results. Instead, you are asking questions that can be followed up with more questions, building on the context as the dialogue unfolds. The move positions ChatGPT as a versatile generative AI (GenAI) tool for generating text and retrieving context-specific information, thereby setting a new benchmark in the competitive landscape of AI-powered search technologies.
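The context-building described above can be sketched in a few lines. The message format below mirrors common chat-completion APIs, but the function and field names are illustrative assumptions, not OpenAI’s actual SearchGPT interface:

```python
# A minimal sketch of why conversational search differs from keyword search:
# the dialogue history travels with every follow-up question, so ambiguous
# references ("he", "it") can be resolved against earlier turns.

def ask(history, question):
    """Record a follow-up question in the running dialogue context."""
    return history + [{"role": "user", "content": question}]

history = [{"role": "system", "content": "You are a search assistant."}]
history = ask(history, "Who directed the film Oppenheimer?")
# The follow-up never names the film or the person; a context-aware engine
# resolves "he" from the previous turn, which a bare keyword query cannot do.
history = ask(history, "What else has he directed?")

for msg in history:
    print(msg["role"], "->", msg["content"])
```

A keyword engine sees each query in isolation; here, every request carries the full transcript, which is what lets follow-up questions build on one another.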
OpenAI is known for its advancements in AI technology and its self-proclaimed commitment to responsible generative AI development. Established to ensure that AI benefits all of humanity, OpenAI has been at the forefront of creating state-of-the-art AI models that purportedly align with ethical standards and societal well-being.
One of its most significant contributions to the field is developing the ChatGPT series, which has revolutionized natural language processing and human-computer interaction. The recent launch of SearchGPT further cements OpenAI’s commitment to innovation, blending information retrieval with intellectual dialogue.
However, experts argue that chatbot-powered search engines are far from fail-proof despite the benefits. At the beginning of 2023, one publication described AI language models as notorious for “presenting falsehoods as facts”—a phenomenon known as “hallucinations.” These hallucinations are often delivered confidently, accompanied by fabricated citations, falsified images or other media referred to as “deepfakes,” or embedded biases that perpetuate harmful stereotypes.
The article went on to say that OpenAI emphasizes that ChatGPT “is still just a research project.” When integrating Copilot into Bing, Microsoft used the disclaimer that “search results might not be reliable.” Finally, Google was slow to roll out its AI chatbot technology because its leadership “worried about the reputational risk.”
The truth is that GenAI, such as SearchGPT, is still quite fresh. Some experts, like University of Washington Professor Chirag Shah, called users “guinea pigs” of tech companies. Shah pointed out that tech companies are taking a less conservative approach to releasing these AI systems, meaning there will be hiccups, and it is up to the users to do “the work of testing [the] technology for free.”
This trend is further evident in the recent public preview of Microsoft Copilot’s AI agents. These agents are more than just chatbots. They are expected to “parrot human digital output well enough to pass inspection” while “[performing] a series of linked tasks based on information.” In other words, this is GenAI that isn’t limited to chat: agents can perform around-the-clock, autonomous, decision-making work within project management, sales and marketing, human resources, and more by planning, learning, and adapting to changing conditions.
The irony is that Microsoft’s Chief Product Officer for Responsible AI, Sarah Bird, noted that autonomous agents call for “extra safety considerations” and assured the public of the company’s commitment to “ensuring they behave.” This additional manual oversight and data-access management by IT departments is intended to mitigate risks, but it also speaks to a lack of confidence in AI as an independent resource.
Given GenAI’s limitations, such as misinformation, data privacy violations, and copyright infringement, organizations considering it as a way to make work more efficient must approach the integration with caution, ensuring robust safeguards and ethical considerations are in place. Even so, enthusiasm for adoption is high: a 2023 global survey by Adobe found that 90% of employees believe GenAI can help them increase productivity and efficiency in their roles. Additionally, 76% of business leaders were ready to adopt GenAI into company workflows, with 39% believing it would be a daily assistive tool and 79% expecting employees to use it often.
With the growing popularity of AI, businesses must strike a balance between innovation and human ingenuity. Humans bring a range of qualities to the workplace that are difficult, if not impossible, for AI to replicate. Likewise, GenAI offers several compelling advantages that make it an asset in the workplace. Given the strengths and limitations of both humans and GenAI, the most effective approach for the modern workplace is collaboration.
GenAI can augment human capabilities by handling routine and data-intensive tasks, allowing humans to focus on more strategic and creative endeavors. This collaboration can lead to greater innovation and problem-solving as humans bring their emotional intelligence and ethical judgment to bear on the insights provided by GenAI. By leveraging the unique capabilities of each, organizations can achieve a synergy that maximizes productivity, innovation, and ethical decision-making.
Additional tips to avoid GenAI pitfalls include:
- Fact-check all information
- Beware of false citations or inaccurate sources
- Evaluate output for biases
- Ensure that information is up to date
- Know the limitations of chatbot-powered search engines
- Use GenAI as an assistive resource rather than a replacement tool
- Avoid inputting sensitive data into public GenAI systems
- Provide employees with continued training on how to ethically and effectively use AI in the workplace
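The tip about sensitive data can be partially enforced in software before a prompt ever leaves the organization. Below is a minimal sketch of a redaction filter; the patterns and labels are illustrative assumptions only, and a production filter would cover far more identifier types (names, addresses, account numbers) and edge cases:

```python
import re

# Illustrative patterns only -- NOT a complete PII filter.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(prompt):
    """Replace obvious sensitive tokens before sending a prompt to a public GenAI system."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

safe = redact("Email jane.doe@example.com about SSN 123-45-6789.")
print(safe)
```

A gateway like this does not replace employee training, but it catches the most mechanical leaks of structured identifiers automatically.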
Oxford can help your business safely and strategically integrate GenAI tools, such as chatbots, into your workflow to enhance rather than hinder employee morale, productivity, and overall operations. When used responsibly and effectively, AI can be a powerful tool to improve business efficiency, innovation, and customer satisfaction. Oxford’s Digital Transformation Practice Director, Alie Doostdar, attests to the firm’s ability to help you “develop [a] Responsible AI Framework to ensure your AI systems are transparent, ethical, and trustworthy.” Embrace the future of technology with confidence and let Oxford be your partner in navigating the world of artificial intelligence.