When I search “How old is Biden” I get back an AI answer stating that he is 78. This is not correct. The correct answer is 80.
Suggestion: I should be able to suggest or propose a correction to the AI results, and for such suggested corrections the AI should learn the correct answer, or at least ‘think twice’ and verify whether the correction is valid. I love the AI results, but I think it would be even better if user feedback were incorporated to further improve them. The AI should be learning from the community, the searchers, and should give extra ‘weight’ to frequent or common correction suggestions.
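To illustrate the ‘weight’ idea, here is a rough sketch (all names and thresholds below are made up for illustration, not any actual Presearch code) of how correction suggestions could be aggregated per query and only surfaced once enough independent users agree:

```python
from collections import Counter

# Hypothetical sketch: aggregate user-submitted corrections for one query
# and surface the dominant one only when it has broad, independent support.
MIN_SUPPORT = 5      # assumed threshold: distinct users required
MIN_AGREEMENT = 0.8  # assumed threshold: share of votes the top answer needs

def top_correction(suggestions):
    """suggestions: list of (user_id, proposed_answer) pairs for one query.

    Returns the dominant proposed answer if it has enough independent
    support, otherwise None (i.e. don't override the AI answer yet).
    """
    # Count each user at most once so one person can't stuff the ballot.
    latest_by_user = {user: answer for user, answer in suggestions}
    votes = Counter(latest_by_user.values())
    if not votes:
        return None
    answer, count = votes.most_common(1)[0]
    if count >= MIN_SUPPORT and count / sum(votes.values()) >= MIN_AGREEMENT:
        return answer  # a frequent, common suggestion worth verifying
    return None        # too few or too divided: leave the AI answer alone

# Example: five different users all correct "78" to "80" for the same query.
print(top_correction([(u, "80") for u in range(5)]))  # -> "80"
```

A suggestion that clears both thresholds would still go to the ‘think twice’ verification step rather than being accepted blindly.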
Thanks for the suggestion. It is possible that some AI results are not always correct; you have to take into account that ChatGPT is currently being used. Perhaps it would be much easier to apply a corrective action if it were Presearch’s own model.
I think part of the issue is that Presearch AI is leveraging ChatGPT, a version that has two-year-old data. It’s also very biased.
I like the warning info label that was added, but I would also acknowledge that Presearch AI is currently leveraging ChatGPT, so users are not immediately turned off by Presearch AI. In the future it could ingest current data and news and not be reliant on ChatGPT.
We would also need to address the concern of users training the AI, or trying to change it, toward something that may itself be wrong (not accurate). Who determines what counts as substantiated truth? I like the idea of user feedback and curation; it may be something to consider for the Communities implementation, because it seems to fall into that same idea of community curation.
The current LLM is based on the ChatGPT 3.5 version, which was built over datasets running up to 2021. So the AI is actually correctly calculating the age as 78 for that time.
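For anyone who wants to check that math, a couple of lines do it (the cutoff date below is an assumption, roughly where the 2021 training data ends):

```python
from datetime import date

BIRTH = date(1942, 11, 20)   # Joe Biden's date of birth (public record)
CUTOFF = date(2021, 9, 1)    # assumed approximate GPT-3.5 data cutoff

# Standard age calculation: subtract a year if the birthday hasn't passed yet.
age = CUTOFF.year - BIRTH.year - ((CUTOFF.month, CUTOFF.day) < (BIRTH.month, BIRTH.day))
print(age)  # -> 78, matching the AI's answer
```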
Yeah, I understand that this is something to be improved, but there is no immediate quick fix for the two years of missing data. If there were an update to the underlying LLM, this would definitely be incorporated.
This is why I recommended adding to the current warning label that Presearch AI is currently leveraging ChatGPT in a privacy-protected way.
You want to take credit for providing access to this API that often provides great immediate answers to queries. However, you don’t want current and new users to think that Presearch AI is inaccurate or biased and be turned off by the technology. So you should be more transparent about where the results are derived from.
In the future, perhaps Presearch will provide its own organic AI results, at which point you can take ownership of and credit for it all.
Additionally, as I recommended in other areas, I think Presearch should find other AI API models that are either free or pay-as-you-go. These could also be incorporated by Presearch using the same technology that is being leveraged for ChatGPT access. For pay-as-you-go AI models, you could have a floating PRE charge to get access for a day, a week, a month, or for x number of uses. It becomes another AI toggle, so the user can choose to use it when they want and turn it off when they don’t. This is in line with Presearch’s vision of adding many search providers under one roof, and it could be a very unique feature.
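To make the floating PRE charge concrete, here is a rough sketch (the providers, prices, and exchange rate are all invented for illustration, not actual Presearch pricing):

```python
# Hypothetical sketch of per-provider PRE access passes. Every provider,
# price, and rate below is an assumption made up for illustration.
USD_PER_PRE = 0.05  # assumed floating exchange rate, refreshed from a price feed

# Assumed USD price list per provider for each kind of access pass.
PASS_PRICES_USD = {
    "provider_a": {"day": 0.10, "week": 0.50, "month": 1.50, "per_100_uses": 0.25},
    "provider_b": {"day": 0.20, "week": 1.00, "month": 3.00, "per_100_uses": 0.50},
}

def pass_cost_in_pre(provider: str, plan: str) -> float:
    """Convert a pass's USD price into PRE at the current rate,
    so the PRE charge floats with the token price."""
    usd = PASS_PRICES_USD[provider][plan]
    return round(usd / USD_PER_PRE, 2)

# Example: a one-week pass for provider_b at the assumed rate.
print(pass_cost_in_pre("provider_b", "week"))  # -> 20.0 PRE
```

Pricing the pass in USD and converting at purchase time keeps the cost predictable for the buyer while the PRE amount floats with the market.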
I would love to see side-by-side AI results on different search queries. This would give me the ability to determine whether I want to continue using ChatGPT or might consider paying a PRE subscription for another provider that delivers more recent, relevant results.
This also makes everyone’s PRE more valuable, because additional utilities and uses for PRE would start to become available.
Yes, currently being based on ChatGPT, it presents errors on some queries as well as outdated data. That would be different with the decentralized AI executed by the nodes. I don’t know whether the team will make any changes before that arrives or it will stay as it is; either way, we will always keep you informed of any developments.