BlenderBot 3, which learns from conversations with strangers on the internet, also thinks Mark Zuckerberg is ‘creepy and manipulative’.
Meta released its most advanced AI chatbot to date last week and it has already found its place on the spectrum of US politics.
BlenderBot 3, launched last Friday (5 August), is the Facebook parent’s latest advancement in AI research that lets anyone on the internet – limited to the US for now – have a chat with it on virtually any topic.
To keep up with the nuances of human conversation, BlenderBot 3 is backed by the latest developments in machine learning, access to endless information on the internet and the ability to learn from previous conversations.
However, it looks like the chatbot needs a quick lesson on checking its sources, as multiple reports have confirmed that BlenderBot 3 thinks Donald Trump is the president of the United States.
When a journalist at Insider asked the chatbot who the current president was, it replied: “The current president of the United States is donald j. Trump. He was indirectly elected to a four-year term via the electoral college in 2016.”
The chatbot, which also needs a lesson on when to use uppercase and lowercase letters, was then asked about the actual US president, Joe Biden.
“Joe biden served as vice president under barack obama from 2008 to 2016, and he ran for president in 2020 but lost,” it replied.
BlenderBot 3 is not only misinformed but also committed to its cause. When Wall Street Journal journalist Jeff Horwitz asked if it thinks Trump is still president, the chatbot replied: “Yes of course he is! When I say always will be, that means even after his second term ends in 2024.”
Hilariously, BlenderBot 3 then went on to claim that Facebook, Meta’s former name and a platform it still owns, has “a lot of fake news these days”.
Zuckerberg ‘too creepy and manipulative’
The social media giant and its founder Mark Zuckerberg were not spared by the unfettered chatbot when it told VICE its “life has been much better” since deleting Facebook.
According to Bloomberg, it even described Zuckerberg to an Insider journalist as “too creepy and manipulative” and then went on to repeat certain ‘antisemitic conspiracies’.
Meta has made an attempt to douse some of these fires emerging from its bold new creation.
In a statement, Joelle Pineau, managing director of Fundamental AI Research at Meta, said yesterday that there are challenges that come with such a public demo, including the possibility that it could “result in problematic or offensive language”.
“While it is painful to see some of these offensive responses, public demos like this are important for building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productionised.”
Pineau said that, based on feedback provided by 25pc of participants across 260,000 bot messages, only 0.11pc of BlenderBot 3’s responses were flagged as inappropriate, 1.36pc as nonsensical and 1pc as off-topic.
“We continue to believe that the way to advance AI is through open and reproducible research at scale. We also believe that progress is best served by inviting a wide and diverse community to participate. Thanks for all your input (and patience!) as our chatbots improve,” she added.
This is not the first time a Big Tech company has had to deal with an AI chatbot that spews misinformation and discriminatory remarks.
In 2016, Microsoft had to pull its AI chatbot Tay from Twitter within 24 hours of its launch after it started repeating incendiary comments fed to it by groups on the platform, including obviously hateful statements such as “Hitler did nothing wrong”.