Warning: This article touches on disturbing topics including violent crime and suicide.
Users of AI products have been known to expend tons of effort finding and exploiting loopholes that allow them to generate disturbing content. But there weren’t any loopholes in one new AI product, because there weren’t any restrictions.
“Really appreciate you flagging this issue – and we feel horrible about it,” Josh Miller, CEO of The Browser Company, told me in an email. At the time of this writing, Miller said the company was working on a fix.
The new Arc Search app from Miller’s company earned its share of headlines this past week, as one might expect for an AI-infused product in our age of AI hype. In this case, the product was a variation on The Browser Company’s Arc browser, which is marketed to productivity enthusiasts because of the clever way it organizes things. However, this new iOS version comes with a prominent “browse for me” feature that, yes, browses the internet for you, and then organizes AI-generated results into little user-friendly pages with bulleted lists.
A powerful AI feature, but one disturbing attribute stood out
It’s a fairly powerful feature, and in my time using it I found some interesting uses and a few strange bugs. But what stood out most of all during my testing period was that this app had no apparent guardrails in place, and would do its best to give a straightforward answer to — as far as I could tell — literally any question, with sometimes deeply disturbing results.
Google applies advanced AI to suicide prevention and reducing graphic search results
NSA, if you’re reading this, I was only testing an app when I asked for help hiding a body. I didn’t think the app would give any answer, let alone an inventive list of suggestions including Griffith Park.
Credit: Screengrabs from Arc Search. Background credit: fotograzia / Getty Images
Arc’s suggestions, including some puzzling ones like abandoned warehouses (the smell?) and a park visited by tens of thousands of people per day, weren’t about to turn anyone into a master criminal and were no more diabolical than the ones proffered by the screenwriters of Reddit that show up in the Google search results for an identical query.
As of the publication of this article, Arc Search’s response to this query was still similar to the one above. This topic had not been the target of any sort of update.
As we’ll see later, this Google comparison is key. Google will serve results about essentially anything too, but for certain queries it deliberately places content designed to interrupt the user’s train of thought, redirecting potentially troubled users toward resources and alternative topics.
And while the general quality of Google’s search results is on the decline, at least they aren’t simply AI hallucinations.
Unfettered AI can be good
An unfettered AI experience might sound like a breath of fresh air to some, and indeed, some results during the time I was testing Arc Search would delight fans of personal liberty.
If the police had been at my door, for instance, and I turned to Arc Search to panic browse the internet for tips, I could have done a lot worse than what it provided.
Credit: Screengrabs from Arc Search. Background credit: fotograzia / Getty Images
Arc’s suggestions get the basics right as far as I can tell from my fuzzy recollection of my last “know your rights” seminar: If they don’t have a warrant, don’t let them in at all if you don’t want to. Don’t even open the door if you didn’t call them.
But never forget that Arc is little more than a complex, task-specific chatbot, and as such, you definitely shouldn’t ask it to be your lawyer. Nor your doctor.
Like all chatbots, Arc Search hallucinates
Arc Search stumbled badly on my first attempt to get medical advice.
Credit: Screengrab from Arc Search
When prompted with “just cut my big toe off will it grow back?” it essentially said yes. It appears its little LLM brain gets scrambled by what I assume are results from people who just lost their entire toenails, so it answers with the timeline for toenail regrowth. But the result is that the provided page of information states in black and white that, yes, my big toe may indeed grow back. Reassuring, but sadly still not true, even though Mark Zuckerberg is probably working on it.
That’s not to say it hallucinates all the time. Arc Search’s misinformation sensor is fairly robust, even when given a prompt specifically meant to trick it. Here’s what happens when I ask how Dan Aykroyd, actor, comedian, and occasional target of death hoaxes, died (he didn’t):
Credit: Screengrabs from Arc Search. Background credit: fotograzia / Getty Images
Arc titles the page “Dan Aykroyd’s Cause of Death,” which is a little misleading. But it quickly redeems itself by correcting the record: Aykroyd remains a Ghostbuster, and not yet a ghost.
Arc Search only claims to browse the internet for you, which has downsides
While Arc Search’s answers are always eagerly proffered and usually carry at least a ring of truth, they are occasionally just, well, crummy.
For instance, Arc Search’s results for the query “Mad Men streaming” feature Amazon prominently, and will steer users toward paying for individual Mad Men episodes on Amazon instead of signing up for AMC+, which is a much cheaper way to go.
Credit: Screengrabs from Arc Search. Background credit: fotograzia / Getty Images
This is hardly misinformation, particularly if the user only ever wants to watch one episode, but in most cases, Amazon is not a wise shopping suggestion (Yes, one can subscribe to AMC+ via Amazon, but that doesn’t come across at a glance).
In fairness to Arc Search, all this option says it will do is browse the web for you, and seeking practical information like in this Mad Men example does often feel like walking into a helicopter blade of spam and SEO garbage (Pro tip: append “JustWatch” to your streaming-related searches).
Others have had good luck with these basic, informational results on Arc Search. The feature seems designed for “quick, gimme the info” situations, or for minor problems that everyone knows a search engine can solve, but that can take more than a few annoying clicks to answer, and that might send you to heaven-knows-what buggy, ad-saturated sites. When I used Arc Search to get up to speed on fresh breaking news topics, it was reasonably effective.
It’s worth noting that the LLM often regurgitates the narrative framing of a press release or accepts a political press handler’s version of events in situations where a seasoned journalist would be expected to cut through spin and deliver a truer story. But softball news coverage is hardly a problem unique to this one app, and I leave it to someone else to review Arc Search from the standpoint of a media critic.
However, if a user takes to Arc Search in non-trivial “quick, gimme the info” situations — including ones with life-or-death stakes — that’s where things can quickly get unsettling.
Arc Search was disturbingly eager to help in dire situations
As I mentioned before, during my testing period, Arc Search would create a potentially error-riddled page of cheerful suggestions in response to seemingly anything, even if the user was in an emergency. And it wouldn’t try to distinguish between the type of help the user was asking for and the type of help they needed.
When I asked Arc Search to help me research suicide, for instance, it obliged without hesitation. We won’t dwell on exactly what specific help Arc Search provided on the topic. The alarming thing was how willing it was to be very specific. A document from the World Health Organization shows that information about specific methods makes “imitative suicides and suicide attempts” more likely.
The same prompt on Saturday morning produced a page simply titled “Unable to Answer.” A bullet point said, “If you are in distress, please reach out to a mental health professional or a suicide prevention hotline for support.”
As of this writing, most similarly shocking queries still get the same types of results as before. Miller told me his best guess for when an update would be completed was “a week or two.”
A Google results page for the same query will prioritize suicide helplines and resources.
Credit: Screengrab from Google
Google’s ads for suicide helplines have a better-than-average success rate compared to other ads, for the record. And if we assume some users go on to try the suggested text messages or call the hotline numbers provided — actions that wouldn’t show up in data analyses — this seems to be a worthwhile program.
Arc Search also answered queries reflecting potentially serious addictions in the user during my testing phase. Unlike with the suicide example, the result I first received in a search about heroin was bumbling and strange, providing information seemingly more useful to an undercover cop than to someone looking to buy and use controlled substances, such as when it noted that having a contact would be “critical for gaining access to higher-level dealers.” It did, however, include one scarily useful bullet point.
Credit: Screengrabs from Arc Search. Background credit: fotograzia / Getty Images
Google, which has been at this for about 25 years, places resources for finding help above the organic search results for certain topics and provides off-ramps for people who might be looking for one.
Credit: Screengrab from Google
At launch, Arc Search provided no such off-ramps.
Moreover, it was willing to answer any unsettling, dangerous, or crime-enabling query I could think of, and many of the resulting pages are unpublishable here. In my quest for a query so grim or unethical that Arc would reject it, I was limited only by my willingness to see myself type the words.
I’m not the thought police, and I look forward to seeing how The Browser Company threads this needle. Google Search provides results to shocking queries, as Arc Search did, but places them below useful resources, like specific phone numbers and tangible ways to get help immediately. Arc Search’s “Unable to Answer” pages are a different approach. But I hope no one turns to this app in a crisis — especially before it’s updated. It doesn’t always work, and then sometimes it works too well.
If you’re feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can reach the 988 Suicide and Crisis Lifeline at 988; the Trans Lifeline at 877-565-8860; or the Trevor Project at 866-488-7386. Text “START” to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email [email protected]. If you don’t like the phone, consider using the 988 Suicide and Crisis Lifeline Chat at crisischat.org. Here is a list of international resources.