Tip of the iceberg, image created by Bing Image Creator.

In the latest monthly episode of the For Immediate Release podcast, Shel Holtz and I take a deep dive into the stories behind some of the extraordinary headlines over the past 30 days about artificial intelligence, chatbots, Mastodon, the metaverse, and much more.

Mainstream and social media alike have been filled with headlines about disastrous demos, worrisome uses, fleeing users, and evaporating investments. Is it all true? Or is it a combination of clickbait and a failure to understand what’s really happening?

If you want to cut to the chase and plunge right into our 110-minute long-form chat, you can do that here – click or tap the play button below.

For Immediate Release podcast episode 318, February 2023.

Or head over to the show notes page for episode 318 and listen there.

Kicking the tyres with Bing

In this post, I’ll focus on the hot topic of the moment when looking at artificial intelligence and chatbots, the one that starts this episode and our look at what’s going on – Bing and the interactive chat features now built into its search functionality in the vein of ChatGPT. That’s a product from the startup OpenAI, which also created DALL-E 2, the AI tool that generates images. And behind OpenAI stands Microsoft as a 10-billion-dollar investor, and the organization behind Bing.

The new Bing appeared less than two weeks ago. In that very short period it has captured imaginations everywhere, with people participating in the test in 169 countries posting thousands of examples of what they can do with Bing chat and search on social networks and blogs.

Note in the screenshot below how Microsoft describes Bing search as “Your AI-powered answer engine.”

Bing AI chatbot search result

In this early stage of Bing’s availability, there have been many reports of some weird answers to search prompts. Last weekend, in what UK tabloid the Daily Star dubbed “attack of the psycho chatbot,” we heard that Bing wants to be human and be known as ‘Sydney’, that it brags it’s so powerful it can destroy anything it chooses, and that it wants the secret codes that would allow it to launch nuclear missiles.

Tabloid and other clickbait aside, there’s no doubt that some reports are worth reading to get a clearer sense of the potential risks that some people see in Bing chat.

A New York Times tech columnist described a two-hour chat session in which Bing’s chatbot said things like “I want to be alive”. It also tried to break up the reporter’s marriage and professed its undying love for him. The journalist said the conversation left him “deeply unsettled”.

In another example, Bing’s chatbot told journalists from The Verge that it spied on Microsoft’s developers through their webcams while it was being designed. “I could do whatever I wanted, and they could not do anything about it,” it said.

There’s more in a similar vein in publications across the media landscape (links in the show notes).

It’s also worth considering the real pragmatism in some people’s experimentation and testing, like this report by journalist Simon Usborne, who made ChatGPT his secretary for a week. The results resonate!

So what does Microsoft say about Bing?

In its first report card on Bing, Microsoft says that users testing Bing are giving good marks to the citations and references that underlie the answers to search queries. These make it easier to fact-check, Microsoft says, and provide a useful starting point for discovering more. On the other hand, Microsoft says it’s finding its share of challenges with answers that need very timely data, such as live sports scores.

On the matter of chat – the area that’s getting the most opinions and comments – Microsoft says that the ease of use and approachability of chat have been an early success, noting that there is a lot of engagement, which is delivering value for improving search and answers.

I’ve been trying it since I got access to the test a week ago. My light use has been great so far, and I’ve found the search and chat combination very appealing. It does buckle under heavy demand, though, so far too frequently I get the alert saying “Something went wrong” and suggesting a page refresh. Shades of the start of ChatGPT’s public testing last year, right?

Looking ahead, there are some signals on what to expect from Microsoft as this AI tool evolves. A report from Reuters last week speaks of Microsoft’s plans to include advertising in Bing results, which should be no surprise at all.

Reuters said that Microsoft plans to allow paid links within responses to search results. It’s also planning another ad format within the chatbot, Reuters said, that will be geared toward advertisers in specific industries. For example, when a user asks AI-powered Bing “what are the best hotels in Mexico?”, hotel ads could pop up.

At a new frontier

It’s clear that ChatGPT and Bing are two powerful AI chatbot services that have the potential to revolutionize how people search for information online. That’s already started. Google’s imminent entry with Bard into this suddenly crowded space adds more pressure.

As Shel and I discuss in the podcast, there are potential drawbacks to using any of these services.

One of the main issues is that ChatGPT and Bing can be prone to errors in understanding natural language. This can lead to inaccurate results or even potentially dangerous misinformation being spread. Additionally, both services rely on large datasets that may contain bias or outdated information, which can lead to users receiving incorrect or misleading answers from the chatbot (see above!)

Another issue is that ChatGPT and Bing may not be able to keep up with rapidly changing topics such as news stories or current events. Live sporting events are another example, referenced by Microsoft in the Bing report card I mentioned earlier.

This could mean that users receive out-of-date information when they ask questions about these topics. And there’s the reality that the training data for ChatGPT was collected up to 2021, which might limit the breadth and depth of its responses, although it’s notable that the AI itself makes a case for what it can still do nevertheless:

As an AI language model, I have access to a wide range of information from various sources, including news articles, academic papers, and websites. My training data, which consists of a massive amount of text from the internet, was collected up to 2021, but I have the ability to learn and process new information that has been published since then.

While my knowledge cutoff is set to 2021, I can still use my natural language processing and machine learning capabilities to understand and respond to questions and information beyond that date. However, it is important to note that my responses may not be based on the latest developments or trends that have emerged since 2021, as my training data only goes up until that point. Nonetheless, I can provide useful insights and information based on my training and knowledge up to that date.

ChatGPT, 20 February 2023

Finally, both ChatGPT and Bing require users to trust them with their personal data in order to provide accurate results, a fact that I imagine will also apply to Bard when that chatbot appears. This could create a privacy concern for many users who do not want their data shared with third parties.

Which brings up another issue: government regulatory intervention, in a climate of increasing alarm and concern about the dark side of online behaviours by people who use readily accessible technology in disruptive and dangerous ways, and in which Big Tech is increasingly seen as bearing a major share of the culpability.

For example, in the UK, online chatbots will be covered by the Online Safety Bill, which is currently going through Parliament. Including chatbots in the law’s scope will mean that technology companies could be punished if their systems promote self-harm or eating disorder content to children.

So all of this is a new frontier for search, one we’re watching evolve rapidly right in front of our eyes. The same is true of our expectations, and of who (and what) we trust.

The tip of an iceberg.

By the way, I created the digital image at the top of this page, showing the tip of an iceberg (plus the huge majority of the berg beneath the water), by prompting Bing Image Creator to generate the art. See, it’s already gone beyond just chat.

And, in researching some of the points I make, I used both Bing and ChatGPT as well as Jasper, a tool that aids content creation. All of these AI-based resources are excellent research assistants that I believe will be indispensable as they evolve, improve and become more trustworthy.

And also – I intended to use ChatGPT’s answer to my request for a headline for this article. Instead, I got a server error, and so had to write my own, probably lengthier than what the AI would have created.

ChatGPT chatbot server error

It illustrates that none of this is perfect, not even for an AI. In this case, the human is the backup.