Quote from: Jacob on November 25, 2025, 03:34:13 PM
Dangerous for the US and dangerous for the world.
Quote from: Tamas on November 25, 2025, 02:34:10 PM
I am getting frequent crashes with the beta.
Quote from: Jacob on November 25, 2025, 01:59:26 PM
The world is catering to Trump's ego because he sits at the top of the world's most powerful military and the world's biggest economy.
If there were any kind of cosmic justice, someone as morally repugnant as Trump would not be in that position, but the US finds itself in a situation where its levers of power have been captured by the corrupt, the self-serving, and the straight-up evil.
Quote from: Valmy on November 25, 2025, 06:57:43 PM
Quote from: HVC on November 25, 2025, 06:43:23 PM
bioengineered 3d printed meat.
So is their meat vegan?
Quote from: The Minsky Moment on November 25, 2025, 05:18:02 PM
I'm entirely unqualified to opine, but I find the "no" voices on the question to be more convincing. It seems like the "bootstrap" case involves some magical thinking: if we just throw enough data at the LLM and tune up its capabilities, then it will somehow spontaneously evolve "general intelligence", despite the lack of any clear mechanism. I also can't help noticing that the strongest "yes" voices seem to have a strong financial incentive in the continued success of LLMs.
Quote
My vote is a definite no in the aggregate, but there may be individual winners.
Quote
Oh I don't have a clue, really. Just two rough datapoints and a virtual napkin.
Quote from: The Minsky Moment on November 25, 2025, 05:07:04 PM
Quote from: DGuller on November 25, 2025, 01:31:05 PM
The title of the article is knocking down a strawman. People who think LLMs have some artificial intelligence don't equate language with intelligence. They see language as one of the "outputs" of intelligence. If you build a model that matches one mode of output of intelligence well enough, then it's possible that under the hood that model had to have evolved something functionally analogous to intelligence during training in order to do that.
I don't see how that is possible. The model is the model. It assigns probability distributions to different potential outputs based on an algorithm. The training consists of feeding in more data and giving feedback to adjust the algorithm. The algorithm is just seeking probabilistic matches based on its parameters.
I don't see how everything you wrote from the second sentence on is connected to your first sentence. Yes, you accurately described how training and inference of any statistical or machine learning model works, but how is that description offering any insight as to why the bolded is impossible?
Quote from: Jacob on November 25, 2025, 05:13:53 PM
Your projected timeline for progress (in the 2050s) seems to imply that the complexity gap between 2) and 3) is roughly similar to the gap between 1) and 2) (relative to available resources).
I'm not sure that's the case.