Share your feedback on the AI services experiment in Nightly

asafko
Employee

Hi folks, 

In the next few days, we will start the Nightly experiment which provides easy access to AI services from the sidebar. This functionality is entirely optional, and it’s there to see if it’s a helpful addition to Firefox. It is not built into any core functionality and needs to be turned on by you to see it. 

If you want to try the experiment, activate it via Nightly Settings > Firefox Labs (please see full instructions here). 

We’d love to hear your feedback once you try out the feature, and we’re open to all your ideas and thoughts, whether it’s small tweaks to the current experience or big, creative suggestions that could boost your productivity and make accessing your favorite tools and services in Firefox even easier.

Thanks so much for helping us improve Firefox!

3,488 REPLIES

Thanks, it is working.

nclm
Making moves

Please Mozilla, don’t get lost. AI chatbots are a very silly feature, and it makes it hard to take you seriously. I know investors love it these days when tech companies integrate anything “AI” into their product, but you’re Mozilla; we all expect you to behave differently and for the good of the users and the web. There is no demand for AI chatbots, only an unsolicited offer from the tech world, so please don’t be part of this! There is nothing ethical about offering a feature that consistently gives misinformation based on stolen data processed on energy- and water-wasting machines. It would be lovely if you could focus your resources on actually improving the browser and on communication campaigns to get more users, not on jumping on the latest fad. Thank you ❤️

Have you found any uses of AI chatbots in your own situation? It sounds like you correctly don't trust the responses, and others with similar concerns can still find value in these tools, such as for entertainment.

What is entertaining about weakening the privacy standards of one of the only browsers worthy of respect in the whole internet? I love you Firefox. Please, please, please don't ruin what you have. You'll force me off my only & favorite browser. To be frank, if Firefox becomes Chrome with a coat of paint, I'll bail I swear to god - and I won't be the only one. What will investors say then? You're the last bastion of integrity in the whole game. Don't throw that away...

Part of the reason for Firefox providing choices is so people can switch to those with better privacy standards. Do you have suggestions for existing providers that should be in the list or are you suggesting that Mozilla should have a service with better privacy?

As you are aware, providers with ethical privacy standards do not exist. It would be more honest to either not respond to AI critics at all, or directly tell them you (and Mozilla) don't care about their concerns.

If the matter is choice then make this an extension or don't make it at all. I choose to not associate with this kind of AI tech on ethical grounds. If Mozilla will continue with this plan to integrate this into Firefox then I will choose a different browser, and I say that as a Firefox advocate for over a decade.

It's extremely clear that what everyone here is suggesting is that adding "AI" to your product degrades the privacy standards your users expect. Is it worth destroying your reputation?

I would advocate that users avoid Firefox going forward because you can't trust what abusive features they're going to add.

There are zero chatbot providers with ethical privacy standards and no Firefox users should be forced into any interaction with this software. Make it an entirely optional opt-in extension or don't proceed with it at all, that is the only "better" option here.

Respectfully, I expect better of Mozilla than encouraging environmental harm for entertainment.

So if the responses these bots give are "correctly" not to be trusted, do you think that some people finding the prompt results "entertaining" is worth the mass data scraping, the plagiarism and theft, and the considerable energy cost required to run these machines? If we're still talking about whether they have a use in the first place when the negatives are staring us in the face, why is this being added as a core feature to the base version of the browser?

If someone at Mozilla is dead-set on making this, then at minimum make it an extension so that people can opt in to having it on their computer in the first place, rather than directing a mass audience to use an external private tool based on stolen data that *correctly* shouldn't be trusted.

Firefox is a web browser, not a game. People don't want Firefox to entertain them, they want it to do the job they downloaded it for: enable them to browse the internet with relative privacy. Chatbot integration degrades the privacy protection of the browser in, frankly, unacceptable ways.

Those who wish to use a chatbot for entertainment can use the standard web browser functionality to navigate to one on the web.

wutongtaiwan
Familiar face

I want the AI chatbot feature: it's free, unlimited, needs no account login, and has no geographical restrictions, and I don't want to be blocked because I'm in China.

Are you able to download and run a llamafile (https://llamafile.ai)? That would be free, unlimited, and require no account login. It runs on your computer, so you don't need internet access to chat, but it could be slow depending on your hardware.
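For anyone curious, the rough setup is just a couple of commands; this is a sketch, and the exact file names vary by model, so check https://llamafile.ai for current instructions:

```shell
# Rough sketch: first download a .llamafile from https://llamafile.ai
# (file names and sizes vary by model), then:
chmod +x ./your-model.llamafile   # make it executable
./your-model.llamafile            # starts a local chat UI, typically at http://127.0.0.1:8080
# On Windows, rename the file to end in .exe before running it.
```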

wutongtaiwan
Familiar face

It would be even better if AI could be used for search: enable an open-source AI model locally, and when searching, have the AI aggregate the search results and answer in the form of a human conversation, just like a chatbot.

Are you suggesting the chatbot look at the search result page content, or would you expect Firefox/the chatbot to visit some linked pages for you? There are AI services that do this, like https://perplexity.ai, which you could configure as a custom provider and which also supports passing in your custom prompts.

I want the chatbot to look at the content of the search results page, summarize it based on the content, and answer in human language

Something like this?

[screenshot: search result summarize.png]

yes
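The flow described above (collect the search-result snippets, build a summarization prompt, and hand it to the chatbot) could be sketched roughly like this; the local server URL and snippets are hypothetical, and it assumes a provider whose chat UI accepts a `?q=prompt` query string, as mentioned elsewhere in this thread for llamafile:

```python
from urllib.parse import quote

def build_summary_url(provider_base: str, snippets: list[str]) -> str:
    """Build a chatbot URL asking for a summary of search-result snippets.

    provider_base is assumed to be a chat UI that accepts a ?q=prompt
    query string (llamafile's built-in web chat is one such example).
    """
    prompt = "Summarize these search results in plain language:\n" + "\n".join(
        f"- {s}" for s in snippets
    )
    return f"{provider_base}?q={quote(prompt)}"

# Hypothetical local server and example snippets, for illustration only.
url = build_summary_url(
    "http://localhost:8080/",
    ["Firefox tests optional AI sidebar", "Nightly experiment gathers feedback"],
)
print(url)
```

The browser (or an extension) would only need to extract the visible result snippets and open that URL in the sidebar; the summarization itself happens entirely in the chosen provider.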

piradata
Making moves

Loved the feature. I always have an open tab just for ChatGPT, so it's a big win, and it's nice that it can be disabled or opted in/out for people who don't like it.

What I miss is a way to simply open the chat in the sidebar. The only way I found is to right-click and choose one of the options 'Summarize', 'Simplify Language' or 'Quiz Me', but none of these options can be used to just open a plain new chat, and if I close the sidebar, there is no way to reopen it on the previous conversation.

What I would suggest is simply a button on the bookmark toolbar to open/close the chat; that would suffice already 🙂

You can use Customize Toolbar to add a sidebar toggle button, and the chatbot will open a plain new chat. If you are logged in to some chat providers, you should be able to access previous conversations as well. Potentially Firefox could restore the last conversation, but others might prefer a clear chat, so maybe we could restore the conversation when reopening as well as provide a button to start a new one?

The improved sidebar also has a button to open/close the AI chatbot (as well as other sidebar tools).

francois79
Making moves

Hey

Thanks for the work done.

I do not understand why you plug only into hosted LLM services, some of which use proprietary models.

My ideal would be to connect to an LLM model server such as https://github.com/ollama/ollama
or https://github.com/vllm-project/vllm,
ideally through proxies like https://github.com/BerriAI/litellm.

Is it possible?


You can configure a custom provider to any URL, including those running on a local server such as your own computer. At a glance, it looks like there are various web UIs for ollama that can be installed separately, so you could at least have a private chatbot in the sidebar, but I'm not sure if any of them currently accept passing in prompts. https://llamafile.ai is another LLM model server that has a built-in web chat interface that accepts ?q=prompt URLs.
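As a concrete illustration of that custom-provider setup, the sidebar can be pointed at a local server via about:config; the pref names below are a sketch based on current Nightly builds and may change:

```js
// about:config sketch -- pref names from current Nightly, subject to change.
user_pref("browser.ml.chat.enabled", true);                     // turn the feature on (normally via Firefox Labs)
user_pref("browser.ml.chat.provider", "http://localhost:8080"); // e.g. a locally running llamafile server
```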

Potentially Firefox could directly interface with these services to render a custom Firefox chatbot or power non-chatbot experiences, instead of currently requiring a web chat interface.

SpidFightFR-Dev
Making moves

Hey guys, I just tried the Nightly version with the AI stuff.

Honestly it's good, but it would better fit the privacy standards Mozilla displays if it were possible to download and run models (e.g. 7B Llama 3 or Mixtral models) from the web browser (assuming the PC running Firefox is powerful enough), in a similar fashion to GPT4All.

It would allow for less powerful yet completely offline model execution.

Just food for thought. It's similar to the "localhost" approach with LocalLLaMA; it's not very practical on Windows, for example, as it would require a Docker instance running on WSL to make something clean.

Peace - Spid. ✌️

I believe GPT4All exposes an inference http API but not a chatbot page that could be shown in the Firefox sidebar. Currently Firefox is rendering (potentially local) server-provided html in the sidebar, but we could build our own Firefox chat interface to show the results of inference -- potentially handled directly within Firefox similar to translations and alt-text or a server of your choice such as GPT4All running your preferred model.

Are there any particular use cases that you think must be handled with a local model?

Anonymous
Not applicable

This whole thing flies in the face of the Mozilla Manifesto.

Are there privacy-focused models that you would recommend? This chatbot feature is compatible with locally running llamafile with models that can be trained on data you find acceptable. Even if the quality of these models isn't comparable yet, hopefully we can help others try them out and contribute to these efforts so that we can get them to a quality level to include by default in Firefox.

I believe the bigger issue to people isn't a perceived lack of ability to use models that are:

  • Locally running
  • Open source
  • Trained on data you find acceptable

Rather, the core issue is that it seems Mozilla is promoting and incentivizing people to run models that are:

  • Running on computers they don't own and have no control over
  • Closed, proprietary, and in the hands of companies with horrible reputations
  • Trained on data that people often find unacceptable or believe was unethically sourced

Furthermore, these models also:

  • Consistently fail at delivering information in a safe and trustworthy manner
  • Save data you provide them to use in furthering their objectives at the cost of user privacy

Even the Mozilla manifesto—I know it's not a rigid set of internal rules or anything, but just for the sake of argument—boldly states:

  • 4. Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.

    Throwing unaware users at OpenAI doesn't seem like a good way of protecting their privacy.

Then there are additional concerns, such as the tight coupling with the browser (due to reasonable limitations or not is beside the point), or the fact that Mozilla is, in a manner, endorsing and enabling these companies and their practices.

So it's understandable that this feature just isn't right to a lot of people.

I understand Firefox must remain a competitive browser if it wants to regain market share. I am aware that many are implementing similar features, and nobody wants to fall behind. I know that many users are, in fact, quite excited about this feature. Maybe the developers are, as well. Hell, even I admit I'd love to have a somehow technomagically ethically-sourced and ran LLM helping me around the web.

However, I imagine this is a very sensitive topic for a significant amount of people. There is a lot of frustration with current AI trends and big tech, especially among the most tech-aware users. I think it's safe to assume that if this reaches stable as it is, there will be much valid complaining and criticism towards Mozilla. Either you already knew this, or you didn't think far enough ahead.

Sadly, I'm not sure what's the right answer here. I sincerely hope you folks manage to figure it out.

But asking...


@Mardak wrote:

Are there privacy-focused models that you would recommend?


...with all respect, seems very tone-deaf considering all this.

I asked about privacy-focused models because there was a specific comment about training data before it was edited and made anonymous.

I see, I apologize for that, Mardak. Seems I assumed much.

Despite that, I hope the rest of my comment still provided some value. I was going to post something like it regardless, and just happened to read your response on my way there. The main point, of promotion, is my main concern with the feature.

The current interface for listing compatible providers doesn't really promote any in particular, and similarly there's no default choice either as users get to decide to turn it on and which to use. Additionally, the sidebar allows for easy switching to different providers as yet another reminder to help people discover alternatives, and this will likely be helpful when we add more choices that could include local inference of open models trained with more ethically sourced data.

There is work in providers supporting passing in prompts and showing responses in the sidebar, and there's even more on Firefox's end for a local inference chatbot. Do you think it's reasonable to start with what's already working to provide value to those who would already be using an existing chatbot while we work on better alternatives?

Sorry for the delay. I wrote a big reply trying to explain my thoughts, but apparently that wasn't posted. Either I forgot to click Reply, or Connect somehow ate it. A bit annoying, since this isn't the first time something like this happened, and there isn't even a draft saved on my profile... that makes me a little sad. Here's my attempt at recreating that comment:

I don't believe that because something is opt-in, has no default, and is easily switchable, it means you are incapable of promoting that thing. It took effort to put those options there, and Mozilla chooses which providers are easily picked with a click. People will be more likely to use those options. Most people will use those options. That is a form of promotion in itself, even if unintentional, even if smaller. Mozilla cannot claim to not be promoting these in some capacity when you take into account the full context of what including them as options means, long-term.

And I imagine the easy switching sidebar thing goes the other way around, too: given the trends we've seen in the field so far, even users that are painstakingly convinced to give future safe/open/private AI a chance will be able to quickly switch to a commercial alternative and see how much better they are (actual information quality notwithstanding, of course).

>Do you think it's reasonable to start with what's already working to provide value to those who would already be using an existing chatbot while we work on better alternatives?

That's a tough one. You should know that I'm very biased in this topic, so even if I try to answer from an objective point of view, my perspective is... not very positive. I do think it's relevant, though. So, to make an attempt:

It would be if the AI landscape wasn't currently such a nightmare. I believe releasing the feature with "only what's already working" will look quite bad. Mozilla might explain, we're working on offering better options soon, but then folks will ask, why didn't you do so from the start? And if the answer is we didn't want to be late to the market, well, that actually makes it worse. It feels rushed, and uncaring of the users who have a complicated relationship with the related technologies.

I know something in Nightly or Beta behind an option shouldn't be taken as released. Unfortunately, news travels fast and there's always people keeping an eye on and sharing what's happening in Nightly, and a lot of users don't fully grasp that—or might even forget it outright—in the heat of discussion. Even those aware of this might find themselves worried that, should they not make their voices heard loudly enough, it'll be too late to avoid something they consider bad. I don't say this to excuse rude people around the internet spreading misinformation, but to remind everyone that the human element can be difficult to handle even when you do everything right.

I think holding on until you have something better to show... will still make some people mad, of course, that's our lovely community. But I imagine it'd be significantly better, since Mozilla would have something more to show and point to than, respectfully, a chatbot sidebar that by default lets you easily pick between: questionable startup, shady startup, big tech, questionable startup, and finally, another questionable startup. All engaging in a controversial field with legislation years behind.

For comparison, everyone I've told about (and explained the context behind) project Bergamot has found it fascinating. It's bucking the trend, uncompromising, boldly stating: no, we will not surrender privacy for translations. We want both, and we'll have both. Meanwhile, most people I talk to about AI are tired and just about done with the topic (reminder: I'm biased). Releasing a sidebar that, to most users, only connects them with popular AI chatbots gives the impression that Mozilla is merely chasing a trend without deeper consideration.

I don't know how much more is within the scope of what you folks can reasonably be expected to do, though. What if it takes too long? What if it doesn't work well enough, or doesn't work at all? And if developers lose motivation, seeing a nearly complete feature languish instead of reaching users? If the situation changes and work has to be scrapped?

I don't have answers to these questions, which is why I end up considering more extreme options despite not really liking them, such as halting development of the feature altogether, for now.

Hope that helps.


Mozilla works in the open, transparently and collaboratively with the community, so that we can explore, iterate and learn from feedback on the way to releasing a feature for a diverse general audience. Practically, some things won't be as polished early on, especially for this feature, which relies on which chatbots and/or models are compatible.

Since we began this exploration, there's been significant progress in the AI landscape, such as https://llamafile.ai now shipping OLMo-7B (Open Language Model), allowing people to locally run a private, open-source model trained on open data, so this feature supporting user choice could help grow even more truly open-source LLM efforts. However, this also isn't quite practical for average Release users, so again: should we have waited until all of these were ready and polished before we even started landing code in Nightly?

I think it depends on what Mozilla intends for this feature to be as it reaches stable Release.

Does Mozilla plan to, as an example, allow running a local model with the same effort as other options by an average user? As in, with Firefox managing download, installation and running the model for less tech-savvy users.

If deeper integration of local/open AI is...

  • Considered a "nice to have", but nonessential for launch of the feature on stable, I'm against the feature entirely. We'll simply agree to disagree, and I'll refrain from commenting further as to not hinder discussion for the interested.
  • Definitely in the plans, regardless of the implementation, I understand putting the feature in nightly to help development and start the feedback cycle. I've thought some more about what you said, and the opt-in nature definitely helps, here.
  • Undecided, that's understandable, but please say so clearly and explain the current situation if possible. In this case, putting it in nightly seems risky, though I'm sure you have your reasons. I would argue against the feature reaching Beta while this remains undecided.

Maybe I missed a statement, but I've only seen mentions of possibilities, no concrete plans.

>This chatbot feature *is compatible with* locally running llamafile
>when we add more choices that *could include* local inference of open models
>You *can configure* a custom provider to any url including those running on a local server

I'd really like it if Mozilla clearly and explicitly stated its plans regarding integration of local and/or open language models for this feature.

Sorry, but even if OLMo markets itself as an "open" model built from public domain data, it isn't that. A quick look at their dataset shows that the largest source of data is Common Crawl, which absolutely contains copyrighted content.

I understand that the purpose of this discussion is to promote community engagement in the AI so that there might be more adoption from those most involved and thus create a better product, but at the core of it those who care most about the product see this addition as a wrong path. It seems like the discussion at large has shown that the core audience of the browser do not want AI, and that chasing this dragon will only lead to financial peril and a frustrated user base.

Are you suggesting that those who use ChatGPT or any other AI would not be core Firefox users and wouldn't be interested in using Firefox more, even if it improved their experience and privacy?

In what way would Firefox integration improve the privacy of a ChatGPT user?

Either they are a paid-up OpenAI customer with contractual terms that say their data won't be used for training, or they're using a free account and willingly (or unknowingly) throwing their privacy out of the window.

No amount of integration in a browser (or OS, or other app) is going to change that fundamental hyper-funded AI corp vs user balance.

And no, having other models available won't change much either. If they're a ChatGPT user then they'll see ChatGPT and use it. If they become concerned about their privacy then they'll look for other options using a search engine. No amount of "these options are in a list that most people will look at once, set to their preferred option, and then never touch again" is going to materially impact user behaviour.

Sorry, but user privacy is not the only thing here. There are no large language models with any meaningful capabilities that are not trained on stolen copyrighted data. That in itself is a privacy violation of millions of third party people. Doesn't matter if the model's weights are open source. It is still built from stolen data.