Microsoft is reportedly considering investing $10 billion in OpenAI, the startup behind the ultra-popular chatbot ChatGPT. Microsoft plans to integrate the technology into its Office product line and into its Bing search engine. The tech giant has already invested more than $1 billion in OpenAI. According to The Information, some of these features could be in place as early as March.
The stakes are high. If the integration succeeds, it will put powerful artificial intelligence (AI) tools in the hands of a large number of users. So what will ChatGPT-powered Microsoft products look like? We put the question to Microsoft and OpenAI; neither company wished to answer. However, we have enough information to make educated guesses. Spoiler: you’ll be excited about these innovations if, like me, you consider creating PowerPoint presentations and answering emails to be boring tasks.
Let’s start with online search. This is the use case that has received the most attention from the media and users. ChatGPT’s popularity has rattled Google, which reportedly considers it a “code red” for its ubiquitous search engine. Microsoft, for its part, hopes to integrate ChatGPT into its (much-maligned) Bing search engine.
According to Melanie Mitchell, a researcher at the Santa Fe Institute, a nonprofit research organization, AI could work as a front end for Bing that responds to user queries in natural language. AI-assisted search could then look something like this: instead of a list of links, you get a full paragraph containing the answer you’re looking for.
However, there’s a good reason why Google hasn’t yet integrated its own powerful language models into search. Models like ChatGPT are notorious for producing biased, harmful, and factually incorrect content. They are great at generating fluent text that reads as if it were written by a human, but they have no understanding of what they are producing. They state facts and falsehoods with the same confidence.
Today, when people search for information online, they are presented with a range of options and can judge for themselves which results are reliable. A chat AI like ChatGPT removes this layer of “human evaluation” and forces people to take results at face value, says Chirag Shah, a professor of computer science at the University of Washington who specializes in search engines. People may not even notice that these AI systems are generating biased content or misinformation, and they may spread it even further, he continues.
Asked about this, OpenAI declined to explain how it trains its models to be more accurate. A spokesperson said ChatGPT is a research demo that is updated based on real-world feedback, but it’s unclear how that will work in practice. Delivering accurate results is essential if Microsoft wants people to stop “googling” things.
In the meantime, apps such as Outlook and Office are more likely to get an AI injection, Chirag Shah believes. ChatGPT’s potential to help people write faster and more fluently could make it Microsoft’s killer application.
According to Shah, language models could be integrated into Word to help people summarize reports, write proposals, or generate ideas. They could also give email programs and Word better autocomplete tools, he adds. And it’s not just about text. Microsoft has already announced that it will use DALL-E, OpenAI’s text-to-image generator, to create images for PowerPoint presentations.
According to Shah, we are not far from the day when large language models will be able to respond to voice commands or read text such as emails aloud. This could be a boon for people with learning difficulties or visual impairments.
Can language models influence our behavior and ideas?
Web search isn’t the only kind of search the technology could improve. Microsoft could also use it to help users find emails and documents.
But here’s the question everyone should be asking: is this the direction we want to go?
By blindly adopting these technologies and automating our communications and creative ideas, we risk losing our autonomy to machines. We could “regress” to a point where our personality is stripped from our messages, warns Melanie Mitchell.
“We will end up in a situation where bots will write emails to bots and where bots will reply to other bots,” she says. “It doesn’t seem like a great world to me.”
Language models are also great imitators. Every prompt entered into ChatGPT helps train and improve it. In the future, as these technologies become more integrated into our daily tools, they could learn our personal writing styles and preferences. They might even manipulate us into buying products or acting in certain ways, warns Mitchell.
It’s also unclear whether these tools will actually improve our productivity, since people will still have to edit and fact-check AI-generated content. Moreover, there is a risk that people will trust AI blindly, a recurring problem with new technologies.
“We will all be beta testers of these technologies,” says Melanie Mitchell.
Article by Melissa Heikkilä, translated from English by Kozi Pastakia.