|
they added "the new bing" to edge and its really bad. it appears to be some heavily braindead version of the full bing chat which is itself a more braindead chat gpt. it cant do anything more than bing search results and if you tell it the results are wrong (because they're bing search results) it breaks and you cant search for anything.
|
# ¿ Mar 15, 2023 23:37 |
|
these media people getting self owned by AI hype suck rear end, but the worst form is the OpenAI ceo going on press tours talking about how his ai is "JUST TOOO GOOD" and "MUST BE REGULATED NOW!!!" "If we dont regulate it now, MY AI that you can buy right now is going to take over the world! Its just that good!!" these clowns just reprint it without any critical thought
|
# ¿ Jun 3, 2023 05:20 |
|
"We just let anybody buy it! its crazy! we're just so nuts to let everyone buy our incredible world changing ai! SoMeBoDy StOp Us!!"
|
# ¿ Jun 13, 2023 22:55 |
|
Beeftweeter posted: lmao
|
# ¿ Jun 30, 2023 23:50 |
|
Powerful Two-Hander posted: this is the loving funniest thing MS has done. They have learned nothing from Tay at all. turns out ai rules, actually
|
# ¿ Oct 6, 2023 14:08 |
|
nah his brain is mush and thats just gibberish.
|
# ¿ Nov 15, 2023 02:39 |
|
i dont think its possible for the large models to produce objectively correct results consistently. And the more you clamp down on it with guardrails for the purposes of accuracy, the closer you get to it resembling a text indexer or a database query. Like you're gonna basically get to the point where all you're using the AI for is to translate "What time is space jam playing?" to `SELECT * FROM MovieTimes WHERE movie = 'space jam' AND datetime >= now AND location = '<user location>'` in order to guarantee safe results. for subjective output like images or speech generation or w/e this is less important.
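the "translate the question into a canned query" endpoint described above could be sketched roughly like this. note the `MovieTimes` table, the intent dict shape, and `intent_to_sql` are all made up for illustration, not any real API:

```python
from datetime import datetime

def intent_to_sql(intent, user_location):
    """Map a recognized intent to a fixed, parameterized SQL query.

    Because only a whitelisted set of intents maps to hand-written
    queries, the output is constrained -- the language model (or any
    classifier) is only trusted to pick the intent and fill the slots,
    never to write SQL itself.
    """
    if intent["type"] == "movie_times":
        query = (
            "SELECT * FROM MovieTimes "
            "WHERE movie = ? AND datetime >= ? AND location = ?"
        )
        params = (intent["movie"], datetime.now(), user_location)
        return query, params
    raise ValueError("unsupported intent: %r" % intent.get("type"))
```

the point being: once you constrain it this hard for safety, the generative model is reduced to a slot-filler in front of an ordinary database lookup.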
|
# ¿ Nov 30, 2023 05:19 |
|
Chalks posted: The thing about making it 100% trustworthy is that you have to massively increase how often it will refuse to answer due to uncertainty, possibly to the point of uselessness.

See thats not even it. the problem is it has no concept of objective certainty, just consensus based on the model.
|
# ¿ Nov 30, 2023 14:07 |
|
yeah, for something like chatgpt the generative aspect is literally the only aspect. there are no non-generative components that you could somehow limit it to using. Like theres no way to ask it to spit out the original data from the training set. The best you can do is get it to generate something approximating the original input.
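a toy version of the point above: even the simplest generative text model stores statistics derived from the training text, not the text itself, so generation is always producing new token sequences rather than retrieving documents. this bigram sketch is a deliberately tiny stand-in for a real language model:

```python
import random
from collections import defaultdict

corpus = "the model stores statistics about the training text"

# "training": count which word follows which. only the counts survive;
# the corpus string itself is never stored in the model.
model = defaultdict(lambda: defaultdict(int))
words = corpus.split()
for a, b in zip(words, words[1:]):
    model[a][b] += 1

def generate(start, n, rng):
    """Sample up to n successor words -- every output is generated
    from the counts, not looked up from the original document."""
    out = [start]
    for _ in range(n):
        successors = model.get(out[-1])
        if not successors:
            break
        choices, weights = zip(*successors.items())
        out.append(rng.choices(choices, weights=weights)[0])
    return " ".join(out)
```

scale the vocabulary and context window up by a few orders of magnitude and you get the same situation as the big models: the original input is only recoverable approximately, by generating something that resembles it.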
|
# ¿ Nov 30, 2023 14:30 |
|
I kind of dislike the term hallucination because it implies that theres something "real" that the thing could be generating. All of it is newly generated content and none of it is the original content. Even if the original training set was 100% verified factual data, generative AI like this still has to generate (aka hallucinate) a new response no matter what. The original, verified data IS NOT in the model in any way we would consider safe for the purposes of objective decision making.
|
# ¿ Nov 30, 2023 16:41 |
|
yeah that too
|
# ¿ Nov 30, 2023 16:47 |
|
Doesnt require much to rebrand your pixel interpolation algorithm as AI and then charge more
|
# ¿ Apr 6, 2024 22:12 |