What’s Next for Generative AI?


Generative AI took the world by storm in recent years after several chatbots, like ChatGPT, became widely available to the public. The chatbots generated human-like text with a speed that seemed almost magical – writing sonnets in the style of Shakespeare, translating texts between numerous languages, churning out computer code and much more.

Businesses and business pundits saw the potential benefits immediately. However, recent months have seen small but growing doubts about generative AI. Detractors say generative AI’s capabilities have been overhyped. Hallucinations – false statements that generative AI models can make – limit the technology’s usefulness, and many businesses have yet to find an ideal strategy for using these tools. And though ChatGPT is one of the fastest-growing applications of all time, the proportion of people who say they use it regularly remains fairly small.

In “The Impact of Technology in 2025 and Beyond: an IEEE Global Study,” a recent survey of global technology leaders, 91 percent of respondents agreed that “in 2025 there will be a generative AI reckoning as public fascination and perception shift into a greater understanding of and expectations for what the technology can and should do — in terms of accuracy of results, transparency around deepfakes and more.”

But the survey doesn’t anticipate a lasting stumbling block for generative AI. A sizeable majority (91 percent) also agreed that “generative AI innovation, exploration and adoption will continue at lightning speed in 2025.”

So, what’s in store for generative AI in 2025? What’s on the product roadmap, and what impact will these advances have on how we work and live?

More Multimodal Capabilities

IEEE Senior Member Daozhuang Lin expects generative AI models to make it easier to generate images and videos from short text prompts in the coming years. Text-to-image, text-to-video and speech synthesis will improve, and models will achieve better contextual understanding across diverse inputs.

“The first step is the deep integration of multi-modal to create more complex, detailed, accurate and self-consistent content for consumers and even professional content creators,” Lin said.

Cleaning Up Accuracy and Bias

Concerns over hallucinations, accuracy and bias have also slowed the adoption of generative AI models. Bias may creep in when the models are trained on biased data. Some image-generating models may show a preference for people of a certain race.

“The developers of the model need to focus on how to remove the bias and ethical issues generated by AI in the process of consumer data training,” Lin said. “It’s important to guide users to more universal and long-lasting values and to guide the model to become more ‘kind’.”

Improved Context Window

One limitation generative AI models face is the amount of information they can process at one time in a prompt. This is referred to as the context window or context size. Imagine, for example, that you need to input a very long prompt – or description – in an attempt to generate an image. At some point, the generative AI model will not be able to process the entire prompt. The output will only reflect a portion of the prompt, omitting potentially important information.

In another scenario, you may need to have a conversation with the model about a long document. As the conversation progresses, the model may forget earlier parts of the conversation.

Improving the context window would allow generative AI models to handle more complex tasks and improve the coherence of their responses.
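The truncation effect described above can be sketched with a toy example. The word-level "tokens" and the window size of 8 here are illustrative assumptions for clarity; real models use subword tokenizers and context windows ranging from thousands to millions of tokens.

```python
# Toy sketch of how a fixed context window limits what a model "sees".
# Assumption: one word = one token, and a window of 8 tokens -- both
# are simplifications, not real model parameters.

def truncate_to_context(tokens, context_size):
    """Keep only the most recent tokens that fit in the window."""
    if len(tokens) <= context_size:
        return tokens
    # Everything before the window is silently dropped. This is why
    # a long conversation can "forget" its earliest turns.
    return tokens[-context_size:]

prompt = ["Summarize", "this", "report", ":", "In", "the",
          "first", "quarter", "revenue", "grew"]
visible = truncate_to_context(prompt, context_size=8)
print(visible)  # the first two tokens never reach the model
```

A larger `context_size` simply moves the cutoff further back, which is why expanding the window lets models handle longer documents and stay coherent over longer conversations.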

“The limit of what we can do with generative AI has yet to be reached; we are not at the plateau of this technology,” said Hector Azpurua, an IEEE Graduate Student Member.
