Bing, Bard, and ChatGPT: AI chatbots are rewriting the internet


Big players, including Microsoft with its new Bing (or is it Sydney?), Google with Bard, and OpenAI with ChatGPT, are making AI chatbot technology, previously restricted to test labs, more accessible to the general public.

How do these Large Language Model (LLM) programs work? OpenAI’s GPT-3 told us that AI uses “a series of autocomplete-like programs to learn language” and that these programs analyze “the statistical properties of the language” to “make educated guesses based on the words you’ve typed previously.” 

Or, in the words of James Vincent, a human person, “These AI tools are vast autocomplete systems, trained to predict which word follows the next in any given sentence. As such, they have no hard-coded database of ‘facts’ to draw on — just the ability to write plausible-sounding statements. This means they have a tendency to present false information as truth since whether a given sentence sounds plausible does not guarantee its factuality.”
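To make the "vast autocomplete" idea concrete, here is a deliberately tiny sketch, not how GPT actually works internally, but a toy bigram model that does exactly what the quotes describe: it counts the statistical properties of a corpus and guesses the next word from the words typed so far. The corpus text is invented for illustration.

```python
from collections import Counter, defaultdict

# A toy corpus (invented for this example). A real LLM trains on billions
# of words and uses a neural network, not raw counts, but the goal is the
# same: predict which word plausibly follows the words seen so far.
corpus = (
    "the ant worked all summer the grasshopper sang all summer "
    "the ant stored food the grasshopper asked the ant for food"
).split()

# Count how often each word follows each other word (bigram statistics).
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, autocomplete-style."""
    counts = followers.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # -> "ant" (it follows "the" most often here)
```

Note what this toy shares with its giant cousins: there is no database of facts anywhere, only follow-frequency statistics, which is exactly why a plausible-sounding continuation carries no guarantee of truth.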

But many more pieces of the AI landscape are coming into play, and there are going to be problems. You can be sure to see it all unfold here on The Verge.

  • Feb 25, 2023, 3:00 PM UTC | Mia Sato

    Prominent science fiction and fantasy magazine Clarkesworld announced it would pause submissions after a flood of AI spam. It’s not the only outlet getting AI-generated stories.

  • Can you think like an AI?

    The game is to guess the secret word. The hitch is that the AI decides which words are alike. Yesterday’s word was “grasshopper,” and the AI thought “ant” was closer than “cricket.” Maybe that’s true if you’re analyzing texts to predict the next word — after all, there’s a fable about an ant and a grasshopper — but a cricket is a far closer relative of a grasshopper than an ant is!
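The grasshopper-vs-ant ranking above can happen when similarity is computed from co-occurrence statistics rather than biology. Here is a toy sketch with invented counts (the context labels and numbers are assumptions, purely for illustration): because "ant" and "grasshopper" share the fable contexts while "cricket" also shows up in sports writing, cosine similarity ranks "ant" closer.

```python
import math

# Hypothetical co-occurrence counts over three contexts:
# [fable texts, insect texts, sports texts]. Numbers are invented.
vectors = {
    "grasshopper": [9, 5, 0],
    "ant":         [8, 6, 0],   # co-stars with "grasshopper" in the fable
    "cricket":     [1, 5, 7],   # also a sport, which skews its profile
}

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction, 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

sim_ant = cosine(vectors["grasshopper"], vectors["ant"])
sim_cricket = cosine(vectors["grasshopper"], vectors["cricket"])
print(sim_ant > sim_cricket)  # True: "ant" ranks closer with these counts
```

The point is that a model reading text sees usage patterns, not taxonomy, so words that appear together constantly can outrank words that merely mean similar things.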

  • Feb 22, 2023, 12:25 PM UTC | James Vincent

    Chinese tech giants have reportedly been told not to offer public access to the US-developed ChatGPT. These companies have already had to censor the output of AI tools like image generators.

  • If you need to use AI to respond to a tragedy, maybe it’s better to say nothing at all.

    The Vanderbilt Hustler reports the school’s Peabody Office of Equity, Diversity and Inclusion is apologizing after sending a message regarding the shooting at Michigan State University that was “paraphrased” from OpenAI’s ChatGPT model (via Gizmodo).

    The generic-sounding email lacks any kind of personal touch, and responses to it reflect that, as this quote from Vanderbilt student Laith Kayat shows:

    Deans, provosts, and the chancellor: Do more. Do anything. And lead us into a better future with genuine, human empathy, not a robot

    In the wake of the Michigan shootings, let us come together as a community to reaffirm our commitment to caring for one another and promoting a culture of inclusivity on our campus. By doing so, we can honor the victims of this tragedy and work towards a safer, more compassionate future for all. (Paraphrase from OpenAI’s ChatGPT AI language model, personal communication, February 15, 2023).

  • AI spam has driven one of the best science fiction and fantasy magazines to close submissions.

    Clarkesworld has always been a great place to submit your short fiction because it responds fast and pays well — there’s not a lot of waiting to learn whether you’ve been accepted. Unfortunately, the Hugo-winning magazine has been forced to temporarily stop accepting submissions because it was getting hit with too many AI-generated stories. Submissions will reopen eventually, but editor Neil Clarke says the current tools for spotting AI-generated text aren’t “reliable enough.”